
Anthropic C.E.O. Dario Amodei Says Massive A.I. Spending Could Haunt Some Companies

By The New York Times

Summary

Topics Covered

  • Scaling Laws Predict Ubiquitous AI Mastery
  • Compute Uncertainty Breeds Overextension Risk
  • Enterprise Models Beat Consumer Wars
  • Ban AI Chips to China for Security
  • AI Displaces Jobs, Demands Society Restructure

Full Transcript


>> [music] >> Please welcome Andrew Ross Sorkin and his guest, CEO and co-founder of Anthropic, Dario Amodei.

>> Good afternoon everybody. [music]

I hope you guys all had a great lunch.

We have a huge afternoon starting with Dario Amodei here. He is one of the most consequential people in the world of artificial intelligence. He's the co-founder and CEO of Anthropic, of course known for its Claude model. It's one of the fastest growing technology companies in history, and uniquely backed now by all three tech giants, Amazon, Microsoft and Google. That's new, at least one of them. And by the way, he has been at this longer than most. He worked at Baidu, then Google, was an early employee at OpenAI, where he led the development of GPT-2 and GPT-3. And the reason that we wanted to speak with him this year, more than anything else, is because he has singularly been perhaps the most outspoken and candid person about AI when it comes to the way he's been thinking about jobs and job losses, and selling chips to China, and politics in our country, and where all of this goes.

So welcome to you. Thank you for being here.

>> Thank you for having me.

>> We've got a lot to talk about, including, by the way: are we in an AI bubble? I promise you we will get there. I'll start here, though, which is, I mentioned you were a research scientist back at Baidu in 2014. And if I had sat with you then and said we're going to sit together in 2025 talking about AI, what would have been your expectation for what would have happened?

>> So I'll tell you what I am surprised by and what I'm not. I'm not surprised by the economic impacts of the technology, the value that it's creating, the fact that I walk by any billboard in New York and it's kind of everything.

>> That in 2014, this would be real by now.

>> In some form, that this would be real, that it would be central to the economy, that it would be central to national security, that it would be central to scientific research. I don't think I imagined that I would be leading one of the companies in the space, right? I think that would have been very surprising to me. I didn't think of that as kind of my role at the time. And the exact way in which things happened, all the kind of strange lingo we've developed around language models, all the financialization of it... You know, if you think about the implications of the models becoming as smart and as powerful as they are, and scaling in the way that they are, in the way that me and some of my colleagues predicted, it all makes sense. But I don't think I would have derived it from first principles.

>> Okay. Well, then let's go straight to the question I said I'd ask at the beginning, because maybe this is the place, thinking about where this goes, and the fact that I didn't think you would say, by the way, that you thought this is where we were going to be in 2025, because I think even people back then thought this would be a much longer road. But if you're right, do you look at the amount of economic muscle that's being put into this industry right now? I mean, it really does represent potentially almost all of the growth in the United States GDP right now. Literally.

>> Yes.

>> That we are in some form of a bubble. Are we overspending? Does the math of all of this make sense?

>> So, this is really complicated, and I want to separate out the technological side of it from the economic side of it. On the technological side, I feel really solid. I think I'm one of the most bullish people around, and I think it pencils out. On the economic side, I have my concerns, where even if the technology is really powerful and fulfills all its promises, I think there may be players in the ecosystem who, if they just make a timing error, if they just get it off by a little bit, bad things could happen.

So, let me go through both of them. On the technological side, the reason that, honestly, by the pure technology, I'm not that surprised, is that myself and some of the people who eventually became my co-founders were the first to document the scaling laws of AI, which is: you put more compute, you put more data into AI, with small modifications. We've seen these things like reasoning models and test-time compute. They're all tiny little tweaks. And I've been watching that trend for the last 12 years or so, since I joined the field. And the thing that is most striking about all of it is that as you train these models in this very simple way, with a few simple modifications, they get better and better at every task under the sun. They get better at coding. They get better at doing science. They get better at biomedicine. They get better at the law. They get better at finance. They get better at materials and manufacturing. And that's just a listing of all the sources of value in our economy. If I just take Anthropic itself, which, because we work so much on the enterprise side, I think is a good barometer, maybe a purer barometer than the others, which kind of filter through consumers, which have their habits and their use cases: we look at our revenue, it's grown 10x a year every year for the last three years. Zero to 100 million in 2023, 100 million to a billion in 2024, 1 billion to, it's going to land somewhere between eight and ten at the end of this year. Will it continue? I don't know. But the technology is driving there, and the economic value is coming with it. That trend is going to slow down for sure, but it's still going to be really fast. And so I have this confidence that eventually the economic value is going to be there.
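A rough sketch of the power-law shape those scaling laws describe, in Python; the constants here are invented purely for illustration and are not any lab's actual numbers:

    # Scaling-law shape: loss falls as a smooth power law in training compute.
    # The constants a and alpha are invented for illustration only.
    def loss(compute_flops: float, a: float = 400.0, alpha: float = 0.07) -> float:
        """Hypothetical training loss as a power law in compute."""
        return a * compute_flops ** -alpha

    for c in (1e21, 1e22, 1e23, 1e24, 1e25):
        print(f"{c:.0e} FLOPs -> loss {loss(c):.2f}")
    # Every 10x of compute multiplies loss by 10**-0.07 (about 0.85): no
    # special threshold, just a steady curve, which is the trend Amodei says
    # he has been watching for 12 years.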

>> But let's just go to this, because there are companies that are spending a hundred billion dollars a year or more. You're going to be spending 50. You look at what Sam Altman, who was here last year, plans to be spending. These are extraordinary numbers. And this is all a bet, a big bet, that this is going to scale in this way. And my question is, is there a real way to pencil this out, or is this more of a gut feeling at this point?

>> So, let me... yeah, that really gets to the second part of it. And I will describe it as transparently as I can. I think there's a real dilemma deriving from uncertainty in how quickly the economic value is going to grow, and the lag times on building the data centers that drive this. So I think there's genuine uncertainty. There's genuine dilemma, which we as a company try to manage as responsibly as we can. And then I think there are some players who, you know, are yoloing, who pull the risk dial too far, and I'm very concerned.

>> Who is yoloing?

>> So that's a question I'm not going to answer. So on the first one...

>> We'll come back to that. Put yourself in the position of Anthropic.

>> Put yourself in my position. You've seen this revenue curve that goes up 10x a year for three years. You're like, okay, what's going to happen next year? If I'm really dumb and I extrapolate the pattern, 10 to 100 billion. I don't believe that, just to be clear. I don't believe that at all, even though it's happened in the last three years. Just at this scale, I don't believe it. But that's one of the outer bounds, the outer limits of possibility. If I go in and I'm like, this enterprise and that enterprise and this use case and this is our go-to-market motion, if I try and do it bottom-up, then maybe it's 20 or 30 or something like that. So there is what I've been calling internally this cone of uncertainty, where I don't know if a year from now it's going to be 20 billion, or it's going to be 50. It's very uncertain. I try to plan in a conservative way, so I plan for the lower side of it, but that is very disconcerting. And you add to that the idea that building the data centers has a long lag time. It's like a year or two. So I have to decide now, literally now, or in some cases a few months ago, how much compute I need to buy in order to serve the models in early 2027, when I get to that revenue amount. And there are two coupled dangers. One is that if I don't buy enough compute, I won't be able to serve all the customers I want. I'll have to turn them away and send them to my competitors. If I buy too much compute, of course, I might not get enough revenue to pay for that compute. And, you know, in the extreme case, there's kind of the risk of going bankrupt. And how much buffer there is in that cone is basically determined by my margins. If I have 80% margins, I can buy $20 billion of compute and it could serve a hundred billion dollars of revenue. But because the cone is so wide, it's hard to avoid making a mistake on one side or the other.
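A worked version of that buffer arithmetic, using only the round figures from the conversation (80% margins, $20 billion of compute), not actual financials:

    # Buffer arithmetic: with gross margin m, every dollar of revenue leaves
    # (1 - m) to pay for compute, so C dollars of compute can serve at most
    # C / (1 - m) of revenue. Figures are the round numbers quoted above.
    def revenue_supported(compute_spend: float, gross_margin: float) -> float:
        """Maximum revenue a given compute purchase can serve."""
        return compute_spend / (1.0 - gross_margin)

    print(revenue_supported(20e9, 0.80))  # $20B at 80% margin -> $100B revenue
    print(revenue_supported(20e9, 0.50))  # same spend at 50% margin -> $40B
    # If next year's revenue lands anywhere in a $20B-to-$50B cone, the same
    # purchase is either comfortable or a serious overextension, which is the
    # dilemma being described.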

Now, we've been a relatively responsible company. And I think because we focus on enterprise, I think we have a better business model. I think we have better margins. I think we're being responsible about it. But again, let's say you have a different business model. Let's say you have a consumer business model, the source of your revenue isn't as good, your margins aren't as certain, and let's say you're a person who just constitutionally wants to yolo things, or just likes big numbers. Then you may turn that dial pretty far. And so I think there is a real underlying risk. Whenever there's uncertainty, there's a risk of overextension. We all face it. I face it, all the other companies face it. There's an inherent risk when the timing of the economic value is uncertain, an inherent risk of underreacting or overextension. And because the companies are competing with each other, and frankly we genuinely need to compete with our authoritarian adversaries, there's kind of a lot of pressure to push things. So I think there's some amount of irreducible risk here, and I absolutely don't want to deny this. But at the same time, I think there are some players who are not managing that risk well, who are taking unwise risks.

>> Let me ask you about that. Maybe you'll mention who that is. I think we all know who that is.

>> You have said that you're going to break even, this is privately, at least to your investors, by 2028, even with the spending plans, I think. Sam Altman, who you worked with and for, says that he's going to do it by 2030. I'll use his math, not yours. He would have to go from a $74 billion loss over the course of two years to being profitable two years later. Does that make sense to you?

>> So look, I don't know the internal financials of any other company. I can't say anything about what the economics of any other company are. I will just go back to our own calculation and the cone of uncertainty, where we basically say we want to buy enough compute that we're confident, even in the 10th percentile scenario, that we might be in a bad position, but we think we can pay for it. There's some end to the curve where things go so badly that we can't pay; there's always a tail risk. It's not zero, but we're trying to manage that risk well while also buying an amount of compute that allows us to be competitive with the other players. We're very efficient in training. We're very efficient in inference. We have good margins. I think we can manage it. I think the odds are on our side.

>> What are we supposed to think of what people now describe as circular deals? Back in the day we called this vendor financing.

>> Yes.

>> But in this context you have a situation where Nvidia in particular, but others as well, effectively have been taking stakes in companies, and invariably those companies are using some of that money, one way or the other, considering money is fungible, to go buy Nvidia chips.

>> Yes. So, I mean, we've done some of these deals, not at the same scale as some other players. But we've done some of these deals, and I can kind of walk you through, not a specific deal, because I'm not going to go into details, but kind of a stylized version of what these deals often look like, and why they can make sense. So if you want to buy a gigawatt of compute, buying the chips and building everything costs, let's say, roughly $50 billion of capital expense to fund. And you can think of its useful lifetime, people argue about it, but maybe it's five years. So that's $10 billion a year for basically five years. And so if you're a company that's making, you know, 8 to 10 billion of revenue, you think that's growing, you don't know how fast it's growing, you have to make the decision right now, and you don't have $50 billion. You don't have $50 billion on you. So a thing you can do, a deal you can make with a large player who has an incentive to do this, because they're the one selling the chips or providing the cloud, is they'll say, "Okay, I'll give you, I don't know, 20% of it. I'll invest $10 billion. So that lets you pay for the first year, and then for the rest you can kind of pay as you go, because I know you don't have $50 billion now." But looking at how things are growing, that isn't a crazy bet. We're already almost at the $10 billion of revenue. So it takes a year to build the data center. It's financed for a year. So you're basically saying, I need to get $10 billion of revenue per year two years from now. So I don't think there's anything wrong with that. One player has capital and has an interest, because they're selling the chips, and the other player is pretty confident they'll have the revenue at the right time, but they don't have $50 billion at hand. So I don't think there's anything inappropriate about that in principle. Now, if you start stacking these, where they get to huge amounts of money, and you're saying, by 2027 or 2028, I need to make $200 billion a year, then yeah, you can overextend yourself. Of course, it's all a matter of the size.
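The stylized deal, reduced to arithmetic. This is only a sketch of the hypothetical he walks through, using his round numbers, not the terms of any actual contract:

    # Stylized vendor-financing deal, using the round numbers from the
    # conversation only; no actual deal terms are implied.
    capex = 50e9                        # a gigawatt of compute: ~$50B up front
    useful_life_years = 5               # disputed, but "maybe it's five years"
    annual_cost = capex / useful_life_years   # $10B per year of effective cost

    vendor_stake = 0.20 * capex         # chip/cloud vendor invests ~20%: $10B
    assert vendor_stake == annual_cost  # the stake covers roughly year one

    # After the ~1-year build plus ~1 year of financing, the buyer pays as it
    # goes, so it needs revenue at the annual cost about two years out:
    required_run_rate = annual_cost     # ~$10B/year, two years from now
    print(f"annual cost ${annual_cost/1e9:.0f}B, "
          f"vendor stake ${vendor_stake/1e9:.0f}B, "
          f"needed run rate ${required_run_rate/1e9:.0f}B/yr")
    # Stacking several such deals is what pushes the required run rate toward
    # the $200B-a-year figure he warns about.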

>> But I think one of the key questions behind the math of this entire industry is what's called the depreciation schedule, if you will, for these chips. And there seems to be a big debate about it. Meaning, when you buy a new chip, is that chip going to work for you effectively for three or four years, or is it going to work for you for six or seven or eight years, or even ten years? And depending on where you think the math really lands, all of this pencils out or really doesn't. What do you think the schedule is?

>> Look, from our point of view, we make very conservative assumptions here. I don't think there's one particular depreciation schedule, right? When new chips come out, the issue isn't the lifetime of the chips. Chips keep working for a long time. The issue is new chips come out that are faster and cheaper, and...

>> You may need them if your competitors have them.

>> Yeah. And so the value of old chips goes down somewhat. In fact, that can happen a year after you buy the chips, because now there are multiple companies, TPUs, GPUs, coming out with new chips. So the way we think about it is, we take into account that the old chips are going to be less valuable as time goes on, and we assume a very aggressive continuation of the chip efficiency curve. And again, I can only speak for Anthropic: we make conservative assumptions here, and we think we're going to be okay in basically almost all worlds. It can't be literally all worlds, but we think we're going to be okay in almost all worlds. I can't speak for other companies. Again, I could imagine that there may be other players out there who are deluding themselves, making assumptions that are inflected very far toward the optimistic.

>> So, we're clear, there's only two of you who are not...

>> Look, I don't know who you're talking about. I just have no idea.

>> Let me ask you this about the models themselves and how you see the competition. One of the things that's happened, literally in the last week, is there has been a complete sort of meltdown in the Valley over what's happening here. Sundar Pichai was also here last year, and it appears, at least, that his new model has gotten a lot of people excited about what he's doing, and that Google, which I used to think from the beginning, given all of the data they have, should have been sort of the winner by default. And you have a memo that went out from Sam Altman saying code red, everyone's got to get back to their desks to figure out what the next thing is to get to the next place. How do you stack rank right now where these models are, and how important do you think it is in any given moment?

>> Yeah. So this is one of the cases where I'm just very grateful that Anthropic has taken a different path.

>> The path being the enterprise?

>> The path being the enterprise, right. Both of the players that you mentioned are primarily focused on the consumer. They try and do some enterprise work, but they're fighting in consumer. That is the reason for kind of the code red, the intense fighting, right? Google has a search monopoly that they're trying to defend, and the center of what OpenAI is doing is in consumer. So those two are fighting it out. For both of them, serving businesses is secondary. And so what we've found is, over time, we've optimized our models more and more for the needs of businesses. The one that's gone the fastest has been coding. I think that has really moved forward the quickest, but we're starting to go beyond that, to finance, biomed, retail, energy, manufacturing, kind of all of that. And so, these model wars: as much as our models are really good (the one we released last week, Opus 4.5, is hands down, I think almost everyone thinks, the best model for coding, so it's very important that we continue to have this model superiority), there's a way in which we're kind of going in a different direction, or on a different dimension. And so we have to worry less about this back and forth. We have a little bit of a privileged position where we can just keep growing and just keep developing our models, and we don't have to do any code reds.

>> But what is the moat around any of these businesses? And when I say that, I assume, if Google is as successful as it wants to be, or OpenAI, or Meta, or anybody else who's involved in this, they think that one day, if we ever get to AGI, all these models effectively will be able to do what any of them do. And is there a moat? Is it the persistent memory? I use ChatGPT for certain things. It knows me now, because I've been asking it different questions. Or do you think people just switch back and forth to whoever's got the latest thing?

>> Yeah. Look, I can only speak to the enterprise side. What I will say is, it is surprising how different the personality and capabilities of the models are if you're building for businesses versus if you're building for consumers. You just focus on different things. You focus less on engagement. You focus more on coding, high intellectual activities, scientific ability. And I don't think it's true that if we got AGI, they would all converge to the same place. Has everyone in this audience converged to the same place? Is everyone in this audience a copy of everyone else because we're all agents? No, we're all specialized. Specialization exists alongside general intelligence. And then I think there's all the standard enterprise stuff as well, which is that companies build relationships with you. They get used to using certain models. And we're starting to see that even in our API business, which is basically just selling the raw model. You wouldn't think that would be very sticky. But companies have great difficulty switching from one model to another, because they have downstream customers who use the model and like the current model, and you prompt and interact with the models in different ways, and they have different personalities. It's actually quite hard to switch. So I think there really is a durable business here.

>> One quick AGI question. It's a science question, which is: do you think, just the way transformers work today, and just compute power alone, from a scalability sense, that that is what will get to AGI? Or do you think there's some other ingredient, and maybe this is a technical question, but I'm trying to keep it very easy, that has to be included in this, that gets you to some place where this stuff is actually going to really think on its own?

>> No, I think scaling is going to get us there. Again, every once in a while there will be a small modification, so small you may not even read about it. It's just something going on in the lab. I have been watching these scaling laws for 10 years.

>> So what's your timeline now?

>> There's no one particular point, right? This is what I've said all along. I've never liked these terms, AGI, artificial superintelligence. I don't know what they mean. There's just an exponential, just like we had an exponential with Moore's law, chips getting faster and faster until they could do any simple calculation faster than any human. I think the models are just going to get more and more capable at everything. Every time we release a new model, it gets better at coding, it gets better at science. You know, now models are routinely winning high school math olympiads. They're moving on to college math olympiads. They're starting to do new mathematics for the first time. I've had internal people at Anthropic say, "I don't write any code anymore. I don't open up an editor and write code. I just let Claude Code write the first draft, and all I do is edit it." We had never reached that point before. And the drumbeat is just going to continue. And I don't think there's any privileged point; there's no point at which the models start to do something different. What we're going to see in the future is just like what we've seen in the past, except more so: the models are just going to get more and more intellectually capable, and the revenue is going to keep adding zeros.

>> Let me ask a couple of policy questions. We spoke to the president of Taiwan earlier today, and you have been outspoken about the idea that we should not be selling Nvidia chips, for example the most advanced chips, to China. By the way, it's interesting that you now have a partnership with Nvidia. Jensen Huang, who's also been here, was not so happy with you when you made those comments. Do you have a new view on that?

>> My view hasn't changed. So I definitely will say, and this has always been the case, I have an enormous amount of respect for Jensen and for Nvidia. Jensen is an immigrant who came to the US with nothing. He built the most valuable company in the world. This isn't personal. This is a policy question. This is a question of how best to defend our national security. And there my view hasn't changed. It's the following, which is, if we go back to my picture of the models getting smarter and smarter as we continue to improve them: a phrase I used in an essay I wrote a year and a half ago was that eventually the models are going to get to the point where they look like a country of geniuses in a data center. And so once we get to that point, think about what that country of geniuses in the data center can do, and which existing country on Earth it's plopped down in. If it's plopped down in an authoritarian country, I feel like they can outsmart us in every way: intelligence, defense, economic value, R&D. And I worry that they'll be able to oppress their own people, that they'll be able to have a perfect surveillance state. And so I have always felt that we need to have the advantage here, and this is a national security issue. Right? Some people say this is an economic issue, that it's an analogy to the internet or 5G, that we need to diffuse our stack like we needed to beat Huawei in telecommunications. I don't see it that way. I think we're building a growing and singular capability that has singular national security implications, and democracies need to get there first. It is absolutely an imperative. If we sell these chips to China, that just makes it more likely they will get there first. It's common sense.

>> Okay. But do you think that could happen here? We had Alex Karp here earlier, and there have been lots of worries about surveillance here, talk about it in a democracy.

>> Yes. Right.

>> What is your concern there? I should say, by the way, there was a period of time where you called the president, this was before he was the president, a feudal warlord at one point. So how do you think about the president today, America today, and this idea that AI and surveillance could come together?

>> Yeah. So look, I want to say over and over again that I think the tendency to drag this down into being about specific personalities and specific fights is not helpful here. We should really think at a policy level here. And it's not about one administration. It's not about another administration. We should have principles here. And I think the principle I would give, that I think is very important, is that actually it can happen anywhere, right? We should worry about concentration of power in democracies, not as much as we worry about it in authoritarian states, but we need to make sure that the technology is governed in a way that allows people to participate, that gives people basic rights. And so the formulation I've always given, when I think about how to apply these models for national security, is that I think we should aggressively use them in every possible way, except in the ways that would make us more like our authoritarian adversaries. Right? We need to beat them, but we need to not do the things that would cause us to become them. That is the one constraint we should observe.

>> Okay. Let me ask you a separate question, and maybe you'll say it's a fight, but you can take it out of the personal if you want, which is: you have been very vocal about your concerns about the chip issue, but also what could happen to jobs, regulating this technology so it doesn't do bad things, or other things like that. David Sacks, who works as the AI czar at the White House, said this about you. He says that Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering, and that it is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.

>> Again, I don't think this should be about specific individuals, right? This isn't about any particular administration. This is about policy questions. I mean, going all the way back to 2016, I've written papers about AI, right? Before I even had a company, right? Before there even could have been a plan around anything like regulatory capture. And by the way, almost all the AI regulation that we've supported has exemptions for small players, right? The main AI bill we supported, SB 53, doesn't even apply at all to startups with under $500 million in revenue. So we've been very careful about this. And if there's one point, again, I want to say again: I think people should focus on the policy. You can throw out these accusations, and they don't match reality at all. They don't match the reality of the bills we've actually supported.

>> There are these two worlds right now, which is, by the way, Andreessen Horowitz and others have one super PAC. You guys are building another super PAC, to approach regulation of this industry completely differently. And the question is, why? What do you see that they don't?

>> So, again, I want to keep this at the policy level: how I see this technology. I am concerned, and I can understand where folks are coming from, but I am concerned that there are some who see this technology as analogous to previous technological revolutions, as being like the internet, as being like telecommunications, where yes, there are some issues, but the market will figure it out, which I think was maybe a more reasonable view in these previous technological revolutions.

I think those who are closest to AI don't feel this way. If you poll the actual researchers who work on AI, not investors who invest in some AI application companies, not general tech commentators who think they know something about AI, but the actual people who are building the technology, they're excited about the potential, but they're also worried. They're worried about the national security risks. They're worried about alignment of the models. They're worried about the economic impacts of the models. And for example, the idea that we would put a moratorium on all state regulation, without a federal framework, for 10 years, which was attempted in the summer, and I think was just attempted again this last week, failed because it was very unpopular, because even the average person understands that this is a new and powerful technology. And I am maybe the most optimistic about the upsides, right? I wrote this whole essay, Machines of Loving Grace, where I said AI might even extend the human lifespan to 150, ten years after we get the country of geniuses in the data center, because we'll have a virtual biologist that can make discoveries much faster than we can, that it could drive economic growth to 5 or 10 percent. I'm incredibly optimistic about the technology, frankly, much more optimistic than some of the people who describe themselves as boosters of the technology. But nothing that powerful doesn't have a significant number of downsides. And we as a society, as a polity, need to think ahead about those downsides. Saying that for 10 years we won't regulate that technology is like saying, I'm driving a car, and I'm going to rip out the steering wheel because I don't need to steer for 10 years.

>> Okay. So here's the question about the downside, then. One of the downsides, beyond hacking and everything else that I know you worry about, is jobs. You spoke about it on 60 Minutes recently. But I want to know, not just that you think there's a good chance, and I don't want to put words in your mouth, that half of all entry-level jobs could get lost. I want to know what you think should be done about it.

>> Absolutely. So, you know, I think at the end of the day, I warn about these things not to be a prophet of doom, but because warning about them is the first step towards solving them. And if we don't warn about them, then we'll just kind of blindly walk into the landmine, and it'll blow us up. If we warn about them, if we see the landmine, we can walk around it and we can avoid it. So I have been thinking a lot about these ideas. I've been thinking about them inside Anthropic, where Claude is starting to write a lot of our code, and we're thinking about how the jobs change. So I think there are several levels of it, that maybe go from short-term to long-term, or just kind of requiring more and more of the resources of society to happen. I think some of it can happen in the private sector, and even in our relationships with customers. Every customer we work with has the following trade-off, and it's not either-or. They can increase efficiency by basically having AIs do what humans used to do, and there's plenty of that: things like insurance claims processing or know-your-customer, whole workflows that can just be done end-to-end via AI. And I think we'll need a lot fewer humans for them. It will increase efficiency, it'll save cost, it'll do the same thing for lower cost with far fewer people needed. But you can also do things where you create a lot of new value. And even in cases where AI does 90% of the job, not 100% but 90%, the humans can be 10 times more leveraged, and sometimes you need 10 times more of them to do 100 times what you did before, because it's so efficient and valuable. And so, encouraging companies to do as much of the second relative to the first. We know they're going to do the first. We're not trying to stop them from doing the first. But if they can do more of the second than the first, maybe more jobs can be created than lost.

>> Does that mean we need government incentives?

>> So again, that's level one.

Level two is the involvement of the government. I don't see retraining programs as a panacea, but I think we're going to need to do some form of that. Companies are going to do it. Companies are going to have to work with governments to do it. And I do think fiscally, at some point, the government is going to need to step in. I don't know if that's tax policy. But this world of fast growth, right: we did this report where we said even current models, it looks like they will increase productivity by 1.6% a year. That's almost a doubling of productivity growth, and the models are getting better and better. So I think we're going to get up to 5% a year, maybe 10% a year. That's a big pie. That's a big pie that we can give out to the people who are not such fortunate beneficiaries, right? If the wealth concentrates, there really is a big pie here.
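For scale, the growth rates cited here compound very differently over a decade. The rates are the ones from the conversation; the rest is just compounding:

    # Compounding the productivity growth rates cited: 1.6%, 5%, 10% per year.
    for rate in (0.016, 0.05, 0.10):
        ten_year = (1 + rate) ** 10
        print(f"{rate:.1%}/yr -> {ten_year:.2f}x productivity after 10 years")
    # 1.6%/yr compounds to ~1.17x in a decade; 10%/yr to ~2.59x. That gap is
    # the "big pie" he argues could fund support for displaced workers.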

So I think the government is going to need to have some role here. That's level two. And I think level three is, over the long run, the structure of a society that has built powerful AI is just going to have to be different. If we go back to John Maynard Keynes, right, "Economic Possibilities for our Grandchildren", he invented this idea of technological unemployment. He suggested that maybe his grandchildren would only have to work 15 or 20 hours a week, right? That's a different way of structuring the society. Some people will always want to work as hard as possible. There will always be segments of society who want to do that. But can we have a world where, for many people, work doesn't need to have the centrality that it does, where people find their locus of meaning elsewhere, or work is about different things? Where it's more about fulfillment than it is about economic survival? There are so many possibilities here. I think society is flexible. I'm not suggesting anything top-down; I think society needs to restructure itself. We all need to figure out how to operate in the post-AGI age. So I think those three levels go from fast and easy for individual companies to do, to requiring a lot of consensus and being very slow to do. But over the years, we're going to need to do all three of these things.

>> Dario, I hope you come back so we can have a conversation as we do all of those things and figure out what comes next. I want to thank you for a fantastic conversation.

>> Thank you, Andrew. Thank you so much.
