It’s the return of Amazon Web Services’ big annual gathering, re:Invent. As usual, the cloud giant is piling up announcements, even at the risk of somewhat confusing its users and aficionados. Hotly anticipated, the company unveiled numerous innovations in artificial intelligence, starting with hardware: new data center components expected to deliver 12% more computing power while improving energy efficiency and flexibility for AI-related workloads.
For the record, AWS has been building hyperscale data centers for 18 years and GPU-based servers for AI workloads for 13 years. Now serving thousands of customers, these data centers (distributed across 34 regions and 108 availability zones) find themselves at the heart of a new kind of battle: as the use of generative AI continues to grow and demand for GPU capacity rises, AWS data centers must adapt to support ever-higher power densities.
Liquid cooling adapted to AI servers and revised rack design
In detail, these key improvements include a simplified electrical and mechanical design for high availability. For example, AWS simplified power distribution and, in doing so, reduced the number of potential points of failure by 20%. Other changes concern the liquid cooling implemented for the latest AI servers: “The updated cooling systems will seamlessly integrate air and liquid cooling capabilities for the most powerful AI chipsets, like AWS Trainium2 and rack-scale AI supercomputing solutions such as GB200 NVL72, as well as network switches and storage servers from AWS.”
The giant has also been working on changes to rack design and control systems, enabling AWS to support a 6x increase in rack power density over the next two years, and another 3x increase beyond that. This is partly achieved by a new power shelf, which efficiently distributes data center power throughout the rack, reducing power conversion losses.
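As a rough illustration of how those two multipliers compound (a back-of-the-envelope sketch based solely on the figures AWS quotes, with a normalized baseline as an assumption):

```python
# Hypothetical illustration: compounding the rack power density
# multipliers AWS cites (6x near-term, then a further 3x later).
baseline = 1.0               # normalized current rack power density
near_term = baseline * 6     # 6x increase over the next two years
future = near_term * 3       # another 3x on top of that

print(future)                # 18.0 -> roughly an 18x increase overall
```

In other words, taken together the two stated multipliers would imply racks roughly eighteen times denser than today’s baseline.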
After hardware, Amazon goes all out on software
Beyond hardware announcements, the firm has focused on software innovations, particularly those aimed at a particular area: customer service. Amazon Connect, its omnichannel contact center tool available in the cloud, benefits from a burst of updates, first integrating generative AI.
These range from generative AI-powered customer segmentation for targeted campaigns, to native WhatsApp Business messaging for omnichannel support, to the secure collection of sensitive customer data in chat interactions, along with simplified management of conversational AI bots directly in the Connect interface and new improvements to Q, the firm’s AI assistant, in Connect.
Amazon Q, darling of the firm and rival of Microsoft’s Copilot
The firm seeks to make the creation of conversational AI bots as simple as possible for its users. Now, with Amazon Q in Connect, it is possible to create, edit and manage conversational AI bots powered by Amazon Lex (its chatbot generator) directly in the Connect web interface. This includes supporting customer self-service interactions across interactive voice response (IVR) and digital channels, in addition to assisting contact center agents with responses and recommended actions.
The system is able to intelligently search configured knowledge bases, customer information, web content and third-party application data to answer customer questions when they do not match predefined intents. As proof of its desire to integrate with other systems, Amazon even announced that it was natively integrating the digital channels and unified delivery of its Connect tool into Salesforce CRM.
For enterprise customers, this enables a single routing and workflow system for both companies’ channels, easily directing calls, chats and cases to the appropriate self-service interaction or agent. The integration is currently available in preview only.
Security is not left out: administrators can set custom guardrails for their instance, restricting response generation and monitoring Amazon Q’s performance in Connect. With this update, Amazon is moving into advanced conversational features powered by LLMs. The message is clear: the Seattle firm is ready to take on Microsoft and its famous Copilot. The Redmond giant unveiled an additional suite of AI agents for Copilot in Microsoft 365 at its Ignite conference at the end of November.
Users across industries rely on the AWS portfolio
What better way to prove to your customers the effectiveness of a solution than the testimony of another customer? Applying this principle, AWS features Midea, a Chinese company specializing in the production of household appliances and air conditioners, which uses Amazon Connect and generative AI services to build more efficient contact centers and improve customer experiences.
Midea has deployed the solution in 14 countries and regions where it operates, cutting implementation time from several months to a few weeks, and now reports reductions of around 30% in operational costs. The company also draws on Amazon Connect’s latest integrated generative AI capabilities, including Lex, Q and Contact Lens, within its call centers.
For its part, Wiwynn, a leading manufacturer of data center compute and storage devices, uses AWS Outposts to reduce deployment time by 90% and reduce capital costs. Previously, deploying a new factory’s manufacturing automation and warehouse management system typically took 10 weeks total, including hardware installation, resource provisioning and access point deployment.
With Outposts, Wiwynn says it has cut this to just one week. An efficiency gain which, in the long term, translates into another key figure: Wiwynn was able to deliver production environments 10 months ahead of schedule, a key indicator for the future of Industry 4.0.