OpenAI has agreed to purchase $38 billion of cloud capacity from Amazon Web Services, its first deal with the industry leader in cloud infrastructure and the latest sign that the $500 billion artificial intelligence firm is no longer dependent on Microsoft.
The agreement, announced on Monday, allows OpenAI to begin running workloads on AWS infrastructure immediately, accessing hundreds of thousands of Nvidia graphics processing units (GPUs) in the U.S., with further expansion planned in the coming years.
Amazon shares closed up 4 percent on Monday, a record closing high for the stock. The e-commerce giant has gained 14 percent over the past two trading days, its best two-day stretch since November 2022.
The initial phase of the agreement will run on existing AWS data centers, with Amazon building out additional infrastructure for OpenAI over time.
Dave Brown, AWS vice president of compute and machine learning services, said: “It’s completely separate capacity that we’re putting down. Some of that capacity is already available, and OpenAI is making use of that.”
OpenAI has been on a dealmaking spree of late, announcing roughly $1.4 trillion in build-out deals with companies including Nvidia, Broadcom, Oracle, and Google. The spree has prompted some observers to warn of an AI bubble and others to question whether the nation has the power and resources needed to make such grandiose promises a reality.
OpenAI previously had an exclusive cloud deal with Microsoft, which first backed the firm in 2019 and has committed up to $13 billion to date.
In January, Microsoft said it was no longer OpenAI’s sole cloud provider and had shifted to a model in which it held a right of first refusal on new requests.
That preferential status also lapsed last week under newly agreed commercial terms with OpenAI, freeing the ChatGPT creator to work with other hyperscalers.
OpenAI had already signed cloud agreements with Oracle and Google, though AWS remains by far the market leader.
In Monday’s release, OpenAI CEO Sam Altman said, “Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
OpenAI will continue to spend heavily with Microsoft, underscoring that commitment last week by agreeing to purchase an incremental $250 billion in Azure services.
For Amazon, the agreement is notable not only for its size and scale but also because the cloud giant has close ties to OpenAI rival Anthropic.
Amazon has invested billions of dollars in Anthropic and is now building an $11 billion data center campus in New Carlisle, Indiana, specifically to serve Anthropic’s workloads.
In the release, AWS CEO Matt Garman added: “The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
In its earnings report last week, Amazon announced more than 20 percent year-over-year revenue growth at AWS, surpassing analyst expectations.
Growth was faster at Microsoft and Google, however, which reported cloud growth of 40 percent and 34 percent, respectively.
The current agreement with OpenAI covers Nvidia chips specifically, including the popular Blackwell chips, though the use of other silicon could be considered in the future. Anthropic, meanwhile, is using Amazon’s custom-built Trainium chip at the new facility.
“We like Trainium because we’re able to give customers something that gives them better price performance and honestly gives them choice,” Brown said, though he declined to share details on “anything we’ve done with OpenAI on Trainium at this point.”
The infrastructure will support inference, including powering ChatGPT’s real-time responses, as well as the training of next-generation frontier models.
OpenAI can scale its use of AWS as needed over the next seven years, although no plans beyond 2026 have been set.
OpenAI’s foundation models, including its open-weight options, are already available through Bedrock, an AWS-managed service that provides access to top AI systems.
Companies such as Peloton, Thomson Reuters, Comscore, and Triomics already use OpenAI models on AWS for coding, mathematical problem-solving, scientific analysis, and agentic workflows. Monday’s announcement establishes a more direct relationship.
Brown said, “As part of this deal, OpenAI is a customer of AWS. They’ve committed to buying compute capacity from us, and we’re charging OpenAI for that capacity. It’s very, very straightforward.”
For OpenAI, the most valuable private AI firm, the AWS deal is another step in preparing to eventually go public.
OpenAI is signaling its independence and operational maturity by diversifying its cloud partners and locking in long-term capacity across providers.
Altman stated in a recent livestream that an IPO is “the most likely path” given OpenAI’s capital requirements. CFO Sarah Friar has shared that belief, positioning the recent corporate restructuring as a step toward going public.