Code name: Olympus. Project: a multimodal model capable of processing images, videos, and text at the same time. Objective: gain independence. That, in a few words, is the outline of Amazon's artificial intelligence plan.
The cloud and e-commerce giant wants its own LLM, as reported by The Information. While little is known about the model for now, it is a safe bet that the firm will shed light on the subject next week at its annual AWS re:Invent event in Las Vegas.
A model with 2 trillion parameters
The model would, among other things, understand scenes in images and videos and help customers search for specific scenes, like a game-winning basketball shot, using simple text commands. With 2 trillion parameters, it would exceed GPT-4, the OpenAI model reported to have around 1 trillion parameters, and would thereby become the "largest" existing model. It would also be available on Bedrock, AWS's dedicated generative AI service.
At last year's edition of its annual conference, AWS previewed the Amazon Titan Image Generator model for Bedrock. It "can be used to rapidly generate and iterate images at low cost, can understand complex prompts and generate relevant images with accurate object composition and limited distortions", the cloud provider says. The firm already had several generative AI models in its portfolio, including Titan Text Lite and Titan Text Express.
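For readers curious what "available on Bedrock" means in practice, the sketch below shows the general shape of a request to a Titan text model through the Bedrock runtime, using the boto3 SDK. The model ID and payload fields follow AWS's published Titan format, but treat the details as illustrative assumptions rather than a definitive recipe; an Olympus model on Bedrock would presumably be reached through the same `invoke_model` call with its own ID.

```python
import json

def build_titan_text_request(prompt, max_tokens=256, temperature=0.5):
    """Build the JSON body expected by Amazon Titan text models on Bedrock.

    Field names ("inputText", "textGenerationConfig", ...) follow AWS's
    documented Titan request format.
    """
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan(client, prompt):
    """Send the prompt to Titan Text Express and return the parsed response.

    `client` is a boto3 "bedrock-runtime" client, e.g.:
        client = boto3.client("bedrock-runtime", region_name="us-east-1")
    """
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        contentType="application/json",
        accept="application/json",
        body=build_titan_text_request(prompt),
    )
    # Bedrock returns the model output as a streaming body of JSON bytes.
    return json.loads(response["body"].read())
```

Separating the payload builder from the network call keeps the request format testable without AWS credentials.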
Gradually moving away from third-party LLMs
By offering its own multimodal model, Amazon seeks to reduce its dependence on the Claude family of models developed by the start-up Anthropic. The two companies already know each other well, as evidenced by the second round of funding the start-up has received from Amazon, worth four billion dollars, which should help the young company advance the development of its AI products and models.
In return, OpenAI's rival made a major concession by adopting AWS chips for the training and inference of its models. As a further part of the deal, Anthropic gives AWS customers access to its models, including the Claude 3 family, notably via Amazon Bedrock.
At the same time, Amazon plans to rely on Claude for the next version of its Alexa assistant, to make it capable of handling complex requests. The giant evidently intends to monetize this offering, though it is unclear whether Amazon will have to pay Anthropic extra for the use of Claude in Alexa. It is also possible that everything changes with the arrival of Olympus. To be continued.