Amazon forms new team to train its AI model 'Olympus'
Amazon has formed a new team to train a large language model (LLM) codenamed "Olympus," aiming to compete with leading models from OpenAI and Alphabet, according to a Reuters report. The model is said to have two trillion parameters, which would make it one of the largest models under development. For context, OpenAI's GPT-4, considered among the best models in the field, is reported to have one trillion parameters. The project's specifics remain undisclosed, and Amazon has declined to comment on the matter.
Rohit Prasad leads team training 'Olympus'
The "Olympus" model's development is spearheaded by Rohit Prasad, former Alexa chief, who now reports directly to Amazon CEO Andy Jassy. As Amazon's head scientist of artificial general intelligence (AI), Prasad has brought together researchers from the Alexa AI and Amazon science teams to collaborate on training the models. Amazon has previously trained smaller models such as "Titan" and partnered with AI model start-ups like Anthropic and AI21 Labs, making their models available to Amazon Web Services (AWS) users.
In-house developed models aim to boost AWS offerings
Amazon believes that building in-house models like "Olympus" could make its AWS offerings more attractive to enterprise clients, who seek access to the best-performing models. Training larger AI models is more expensive because of the greater computing power required. In an April earnings call, Amazon executives said the company would increase investment in LLMs and generative AI while cutting back on fulfillment and transportation in its retail business.