
Anthropic, Microsoft, and NVIDIA Announce Partnerships

By Microsoft

Summary

## Key takeaways

- **Mutual Customer Relationships**: We are increasingly going to be customers of each other. We will use Anthropic models, they will use our infrastructure, and we'll go to market together to help our customers realize the value of AI. [00:39], [00:43]
- **Four Key Partnership Pillars**: This announcement encompasses four key things: First, customers of Microsoft Foundry will be able to access Anthropic's Claude models. Second, we are continuing access for Claude across our Copilot family. Third, Anthropic is committing to Azure capacity. And finally, NVIDIA and Anthropic are also establishing a partnership to support Anthropic's future growth. [00:50], [01:08]
- **Claude on All Major Clouds**: Anthropic will be the first model that is available on all three of the biggest clouds. [01:51], [01:59]
- **Gigawatt NVIDIA Capacity Boost**: We're excited to announce up to a gigawatt of capacity. And that's just for now. That's just where we're starting. [03:06], [03:11]
- **Three Simultaneous Scaling Laws**: We're seeing three scaling laws happening at the same time. Pre-training is scaling incredibly well still. Post-training, the more compute you give it, the smarter the AI. And then of course, inference time scaling, test time scaling, the more the AI thinks, the higher the quality of answer. [08:26], [08:46]
- **Beyond Zero-Sum AI Hype**: As an industry, we really need to move beyond any type of zero-sum narrative or winner-take-all hype. What's required now is the hard work of building broad, durable capabilities together so that this technology can deliver real, tangible, local success for every country, every sector, and every customer. [07:37], [08:01]

Topics Covered

  • Why does multi-cloud choice accelerate AI diffusion?
  • How will NVIDIA's hardware supercharge Anthropic's models?
  • Can enterprise GTM unlock AI's economic potential?
  • Are three scaling laws driving AI's future?

Full Transcript

Super excited to be here with Dario and Jensen.

What both of you are doing - NVIDIA at the silicon layer and Anthropic at the cognition layer - is shaping what every developer and every organization will be able to build on going forward.

On our side, we have been scaling NVIDIA across this fungible Azure fleet at the speed of light.

Something that Jensen's been talking to me about for multiple years now, and we're continuing to set pace, including with the AI superfactory we announced just last week.

And we have been deepening our partnership with Anthropic as well, incorporating their models across the entire Copilot family.

And today, I'm excited to share that we are taking another step.

We are increasingly going to be customers of each other.

We will use Anthropic models.

They will use our infrastructure and we'll go to market together to help our customers realize the value of AI.

This announcement encompasses four key things: First, customers of Microsoft Foundry will be able to access Anthropic's Claude models.

Second, we are continuing access for Claude across our Copilot family.

Third, Anthropic is committing to Azure capacity.

And finally, NVIDIA and Anthropic are also establishing a partnership to support Anthropic's future growth.

For us, this is all about deepening our commitment to bringing the best infrastructure, model choice, and applications to our customers.

And of course, this all builds on the partnership we have with OpenAI, which remains a critical partner for Microsoft and our customers, and provides more innovation and choice.

With that, Dario, why don't you tell us a little more about what this partnership means to Anthropic?

Yes, in terms of Microsoft, I think, as you mentioned, Satya, both of us believe in choice.

And we're excited to bring our models as a choice to Microsoft Azure.

And Anthropic will be the first model that is available on all three of the biggest clouds.

Second, Microsoft has a reputation as a strong enterprise company, and Anthropic does as well.

So we have the opportunity to work together, to go to market together and to provide intelligence to the world together.

And we're excited to accelerate the diffusion of this technology as the technology continues to improve.

And finally, we're very excited to get additional capacity, that we can use both to train our models to support Microsoft first-party products and to sell together.

And that brings me to the NVIDIA part of it.

We are very excited to add substantial support and substantial use of NVIDIA's accelerators for use by Anthropic.

NVIDIA has led the way in this field in many ways, has helped make this entire AI boom possible.

We think this is going to be the beginning, just the beginning of a very long partnership.

We are excited to work together to co-optimize models together, starting with Blackwell and then moving on to Vera Rubin.

We're excited to announce up to a gigawatt of capacity.

And that's just for now.

That's just where we're starting.

And so, we're very excited to continue that, to continue the co-optimization to further build out NVIDIA's already incredible ecosystem.

That's fantastic.

Well said, Dario.

And one of our core beliefs is that you can't make progress just in one layer of the stack.

You have to advance every layer - silicon, systems, models, applications - while optimizing effectively for all the things that customers care about: COGS, latency, performance.

And one of the things that we are also establishing today is this new partnership, as you described, between NVIDIA and Anthropic.

So Jensen, maybe you should talk a little bit about what you all are excited about.

Thanks Satya.

You know, NVIDIA's DNA is to build the most advanced computing systems in the world and to accelerate the most challenging workloads in the world on the most important platforms in the world.

And this conference call right here embodies that very thing.

This is a dream come true for us.

We've admired the work of Anthropic and Dario for a long time, and this is the first time we are going to deeply partner with Anthropic to accelerate Claude.

I can't wait to go accelerate Claude.

The work that Anthropic has done, the seminal work in AI safety, the advances of Claude Code - the engineers of NVIDIA love Claude Code.

The fact that it can go and literally refactor your code for you, I mean, it's a pretty amazing thing.

And the work on MCP, the Model Context Protocol, has completely revolutionized the agentic AI landscape.

And so the contributions of Anthropic, the advanced research that is done there, the incredible researchers, the incredible infrastructure team that works at Anthropic, that makes it possible for you to scale up to what you have already done, is really quite phenomenal.

And now your business is on a rocket ship.

It's just scaling so incredibly.

And so, I can't wait to go accelerate Claude on Grace Blackwell with NVLink.

I'm really, really hoping for an order of magnitude speed up, and that's going to help you scale even faster, drive down token economics, and really make it possible for us to spread AI everywhere.

And so I'm really, really super excited about that.

Now, the work that we've done with Microsoft over the years, Satya, it's broad and deep.

I mean, it's incredible all the things that we do, the work that we've done to shift left all of our engineering so that the moment we have new technology, it appears on Azure.

Notice the scale that we've now already achieved with Grace Blackwell 200 and 300, the number of systems that are already out there helping researchers pioneer the next frontier in AI, really fantastic work there.

But we do everything from data processing, to search, to image recognition, to fraud detection, to all kinds of stuff that we're doing, from classical computing, to classical ML, to generative AI, to agentic AI.

The work that we're doing just spans the entire range of technology.

But what's really incredible is, of course, Microsoft has the world's best enterprise go-to-market.

This is the next giant frontier - enterprise and industrial AI.

That's where, as you know, the vast majority of the world's economy is.

And in order for us to get to every single enterprise, the enterprise go-to-market takes decades to build up.

It's not one of those things where, just because you put it on the cloud, you're going to be able to serve the world's enterprises.

The enterprise go-to-market is very complicated.

And this is where the two of us have such great harmony because NVIDIA's computing is in every enterprise.

And we're in every enterprise in every single country.

Now, this partnership of the three of us will be able to bring AI, bring Claude to every enterprise, to every industry around the world.

And so this is a really exciting time.

I'll close with this - as an industry, we really need to move beyond any type of zero-sum narrative or winner-take-all hype.

What's required now is the hard work of building broad, durable capabilities together so that this technology can deliver real, tangible, local success for every country, every sector, and every customer.

The opportunity is simply too big to approach any other way.

Jensen and Dario, any last thoughts on this?

I'm just excited to, you know, take a large number of chips and use it to serve our mutual enterprise customers together to make the smartest models, the models that run the fastest, for the lowest possible cost.

Jensen?

You know, I think the world is just barely realizing where we are in the AI journey.

You know, we're seeing three scaling laws happening at the same time.

Pre-training is scaling incredibly well still.

Post-training, the more compute you give it, the smarter the AI.

And then of course, inference time scaling, test time scaling, the more the AI thinks, the higher the quality of answer.

And so we're now at a point with AI where it is very clear that the more compute we give it, the more cost-effective compute we give it, the smarter the tokens, the smarter the AI is going to be.

And the smarter the AI, the more adoption, both in the new applications that integrate these AI APIs and in how frequently you use it.

And so the quality of these AI models has now really reached an inflection point.

And so we've got these simultaneous, exponentially increasing compute demands, and I guess the thing that's really great is they're going to need a lot more Azure compute resources, and they're going to need a lot more GPUs.

And we're just delighted to partner with you, Dario, to bring AI to the world.

Absolutely.

Thank you so much Dario and Jensen.

I'm really looking forward to everything we're going to build together, and more importantly, how customers can benefit from all this innovation, and really thrive with their business and their outcomes.

So thank you all for joining today.

Thank you.
