Inside Amazon's playbook for handling sensitive questions about its huge OpenAI deal

The cloud giant is preparing employees to answer gnarly questions about its new relationship with OpenAI, according to an internal document.

  • Amazon is prepping for customer questions following its $50 billion investment in OpenAI.
  • Amazon employees are instructed to stress its continued commitment to Anthropic and Nova models.
  • The guidance also warns against saying customers can "access" OpenAI models through AWS.

When Amazon announced a $50 billion investment and sweeping cloud partnership with OpenAI recently, a question surfaced inside the cloud giant: What does this mean for Anthropic?

Amazon is one of Anthropic's biggest investors and has a deep cloud partnership with the startup. OpenAI and Anthropic are archrivals, so Amazon's close work with both AI labs creates potential tensions and conflicts.

The tech giant addressed some of this in internal talking points it recently prepared for its sales and marketing teams. Business Insider reviewed a copy of the guidelines.

The document is the latest example of how intertwined relationships have become at the leading edge of the generative AI field. This technology requires a massively expensive rewiring of the cloud industry and the tech sector in general.

That's led to tech giants investing heavily in many of the top AI startups. For instance, Anthropic has taken billions of dollars from Google and Amazon, which are themselves arch rivals in cloud computing and online product search.

"Within the guardrails"

AWS CEO Matt Garman

The Anthropic issue is just one of several delicate topics covered in the memo. The guidance lays out approved language, prohibited phrasing, and prepared responses to questions ranging from competitive dynamics to accusations that the OpenAI arrangement amounts to circular financing.

Taken together, the materials reveal how deliberately Amazon is shaping the narrative around one of the most consequential AI alliances in the industry.

"It is very important that all our marketing stays within the guardrails," the memo said. An Amazon spokesperson declined to comment.

Reassurance for AWS customers

In its guidance, Amazon instructed employees to reassure AWS customers that it maintains "strong relationships" with Anthropic as well as other AI model providers including Meta, Mistral, and Cohere.

"We will continue to work closely with all model providers and only expect these partnerships to strengthen over time as customer demand for multiple models increases," the document stated.

Not "OpenAI on AWS"

As part of Amazon's new deal with OpenAI, the companies created an AI system architecture called Stateful Runtime Environment, or SRE. The new service is powered by OpenAI models and available on Amazon Bedrock, the cloud giant's platform for customers to access various models.

Amazon is tightly controlling how employees describe the SRE offering.

AWS employees may say the SRE "is powered by OpenAI models," "is enabled by OpenAI models," or "integrates with OpenAI models," according to the talking points in the document reviewed by Business Insider.

However, AWS staff are explicitly told not to say SRE "enables access to OpenAI models" or allows customers to "call OpenAI models." The document warns against describing the SRE as a "passthrough" to GPT models or suggesting that OpenAI's frontier models are generally available on AWS.

AWS employees are also told not to imply that OpenAI is "offering" the SRE. Instead, the companies are "jointly collaborating to offer" it.

The distinction is deliberate. OpenAI models will underpin the SRE, but customers can't directly call them through existing Bedrock APIs. (Application programming interfaces, or APIs, are a common way for applications to access and share data.)

That positioning separates Amazon's new deal from Microsoft's arrangement of hosting OpenAI models on its Azure cloud service. It also reinforces that AWS is not merely reselling OpenAI models, but embedding them inside a specific infrastructure service.

Many operational details remain undisclosed. Pricing, technical limits, and regional availability are all listed internally as "stay tuned."

Is this a circular deal?

The internal memo also addresses the question of whether Amazon's $50 billion investment in OpenAI amounts to a circular deal. Such concerns have become common in the AI industry, where tech giants invest in startups, only to see much of that money come back in the form of cloud spending.

In Amazon's case, OpenAI is expanding its existing AWS cloud agreement by $100 billion over eight years and committing to use 2 gigawatts of AWS Trainium chips as part of last month's deal.

AWS employees are instructed to push back on any circular financing claims. The document argues that companies frequently invest in and do business with one another, particularly in capital-intensive industries, and says Amazon's reasons for investing in OpenAI are distinct from OpenAI's reasons for using AWS infrastructure.

Giving up on Nova?

The talking points further prepare teams to respond to concerns that the arrangement could sideline Amazon's own Nova AI models or its Quick agentic AI application offering. As part of the deal, Amazon will become the exclusive provider of OpenAI's Frontier service, which has enterprise technology features that are similar to Quick.

The guidance emphasized Amazon's continued commitment to Nova and Quick, and the importance of providing choice to clients. Customers use multiple AI models even within the same application, the guidance stated.

The talking points also address chip supply concerns.

Given OpenAI's massive infrastructure needs, AWS anticipates customers asking whether Trainium capacity will be constrained. Amazon has previously said AWS growth would have been stronger if not for capacity constraints.

The guidance, however, reassures teams that many customers will still have access to Trainium for their own AI workloads.

Have a tip? Contact this reporter via email at ekim@businessinsider.com or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address, a nonwork WiFi network, and a nonwork device; here's our guide to sharing information securely.
