
Sacks, Andreessen & Horowitz: How America Wins the AI Race Against China

By a16z

Summary

Key Takeaways

  • Europe's AI leadership is regulation, not innovation: European "AI leadership" means defining regulations in Brussels, not fostering innovation. The approach strangles new technologies in the crib before potentially offering them support later, a strategy reminiscent of "tax it, regulate it, subsidize it." [00:07], [00:16]
  • US AI strategy: unleash innovation, not fear: The US AI strategy under Trump focuses on unleashing innovation and supporting infrastructure, energy, and exports, in contrast with the Biden administration's heavy-handed, fear-driven regulatory approach. The goal is to win the global AI race by empowering private-sector innovation. [03:45], [04:49]
  • Regulatory capture threatens Silicon Valley's permissionless innovation: Certain AI companies are pursuing regulatory capture by advocating pre-approval systems, which undermines Silicon Valley's core principle of permissionless innovation. This shift from a startup-friendly environment to one of navigating bureaucracy favors large incumbents and hinders American competitiveness. [08:44], [10:58]
  • Open-source AI means freedom and competitiveness: Open-source AI is crucial for freedom, allowing users to run models on their own hardware and control their information. While the best open-source models currently come from China, promoting US open-source initiatives is vital for maintaining competitiveness and offering an alternative to centralized control. [36:42], [37:35]
  • AI doomerism serves the left's control agenda: AI doomerism, fueled by Hollywood narratives and pseudoscientific studies, is being adopted by the left as a replacement for climate doomerism to justify economic control and information dominance. The narrative conveniently supports a centralized, Orwellian AI that can be used for censorship and political agendas. [56:48], [57:21]

Topics Covered

  • Europe's AI Leadership: Regulating Instead of Innovating
  • Permissionless Innovation: The Engine of Silicon Valley
  • Government Licensing of GPUs and AI Models Would Kill Innovation
  • AI Doomerism: A New Catastrophe Narrative for the Left
  • Over-regulating AI Lets China Win the Tech Race

Full Transcript

The Europeans have a really different mindset for all this stuff. When they talk about AI leadership, what they mean is that they're taking the lead in defining the regulations. They get together in Brussels and figure out what all the rules should be, and that's what they call leadership.

>> It's almost like a game show. They do everything they can to strangle these companies in the crib, and then, if they make it through a decade of abuse of small companies, they'll give them the money to grow.

>> Ronald Reagan had a line about this: if it moves, tax it. If it keeps moving, regulate it. If it stops moving, subsidize it.

>> Yeah.

>> The Europeans are definitely at the "subsidize it" stage.

>> David, welcome to the a16z podcast. Thanks for joining.

>> Yeah, good to be here.

>> So, David, you're the AI and crypto czar. Why don't you first talk about why it makes sense to have those as a portfolio? What do they have to do with each other? And then I'll have you lay out the Trump plan on those two categories and how we're doing.

>> Well, these are two technologies that are relatively new, so there's a lot of fear of them, and I think people don't actually know that much about them or what to make of them. From a policy standpoint (and we can talk about the similarities and differences), the approaches are a little different.

With crypto, the main thing that's needed is regulatory certainty. All the entrepreneurs I've talked to over the years say the same thing: just tell us what the rules are; we're happy to comply. But Washington wouldn't tell them. In fact, during the Biden years you had an SEC chairman who took an approach that's been called regulation through enforcement, which basically means you just get prosecuted. They don't tell you what the rules are; you get indicted, and everyone else is supposed to divine what the rules are as you get prosecuted, fined, and imprisoned. That was the approach for several years, and as a result the whole crypto industry was in the process of moving offshore, and America, I think, was being deprived of this industry of the future. So President Trump, during his campaign last year, gave a now-famous speech in Nashville in which he declared that he would make the United States the crypto capital of the planet and that he would fire Gensler. That was the big applause line.

>> I applauded.

>> He's talked about how surprised he was at what a big ovation he got for that, so he said it again, and the crowd erupted again. But in any event, he promised basically to provide this clarity so that the industry would understand what the rules are and be able to comply. In turn, that should provide greater protection for consumers, businesses, and everyone who's part of the ecosystem, and it makes America more competitive. So the mandate on crypto is, in a way, pro-regulation: we want to put regulations in place.

AI is kind of the opposite, where I think the Biden administration was too heavy-handed. They were starting to really regulate this area without even understanding what it was. No one had really taken the time to understand how AI was even being used or what the real dangers were. There was this intense fear-mongering, and as a result the Biden administration was in the process of implementing very heavy-handed regulations on both the software and hardware side, and we can drill into that. With the Trump administration, the approach has been that we want the United States to win the AI race. It's a global competition. We sometimes mention the fact that China is probably our main competitor here; they're the only other country that has the technological capability, the talent, the know-how, and the expertise to beat us. We want to make sure the United States wins. Of course, in the US it's not really the government that's responsible for innovation; it's the private sector. So that means our companies have to win, and if you're imposing all sorts of crazy, burdensome regulation on them, that's going to hurt, not help.

The president gave a very important AI policy speech a couple of months ago, on July 23rd, where he declared in no uncertain terms that we had to win the AI race, and he laid out several pillars for how we do that. It was pro-innovation, pro-infrastructure (which also means pro-energy), and pro-export. We can drill into all those things if you want, but that was the high line. So with AI, the idea is: how do we unleash innovation? With crypto, it's been more about: how do we create regulatory certainty?

In terms of my role, why am I doing both? I think the common denominator is that these are new technologies, and they both come from the tech industry, which has a very different culture than Washington does. I see my role as being a bridge between what's happening in Silicon Valley and what's happening in Washington: helping Washington understand not just the policy that's needed or the innovation that's happening, but also, culturally, what makes the tech industry different and special, and how that needs to be protected from government doing something excessively heavy-handed.

>> So, David, we're going to talk a lot about AI today, but just on crypto: I've had an interesting experience this year. After the election, as people adjusted to the change of government, I've had this discussion with a number of people in politics who were previously anti-crypto and have been trying to figure out how to get to a more sensible position, and also with people in the financial services industry who followed it from a distance and maybe participated in the various debunking exercises without really understanding what was happening. The common denominator has been: "You know, Marc, I didn't really understand how bad it was. I basically thought you guys in tech were just whining a lot, pleading as a special interest, doing the normal thing. I figured the horror stories were made up: people getting prosecuted, entrepreneurs getting their houses raided by the FBI, the whole panoply of things that happened. Now that I go back and look in retrospect, I'm like, oh my god, this was actually much worse than I thought." Do you have that experience? Now that you're in there and have a complete view of everything that happened, do you think people understand how bad it actually was?

>> I think it's a great point. I didn't really know either. We knew that there was debanking going on, and by the way, it wasn't just crypto companies being debanked; their founders were being debanked personally. If you were the founder of a crypto company, you couldn't open a bank account. That's a huge problem. How do you transact? How do you make payments? How do you pay people? It basically deprives you of a livelihood. It's a very extreme form of censorship. So that was definitely happening. And then of course you had all the prosecutions that the SEC was behind. So yeah, it was really bad. I remember back in March, I think, we had a crypto summit at the White House, and one of the attendees said that a year earlier he would have thought it more likely that he'd be in jail than at the White House. It was a really big milestone for the industry. They had never received any kind of recognition like that, even the idea that this was an industry you would hold a White House event for; at a minimum, crypto was seen as very déclassé. But in any event, it's been a huge shift. We've basically stopped all of that. And it was very unfair, because again, these founders wanted to comply with the rules, but they weren't told what they were. That was all part of a deliberate strategy, I think, to drive crypto offshore.

>> Yeah. One of the things we've noticed that's very different between crypto and AI is that on the crypto front, everybody just wanted rules, and the industry was relatively unified. Whereas in AI, we've seen these interesting calls coming from inside the house, with certain companies really going for regulatory capture: people who have early leads saying, let's cut off all new companies from developing AI, and so forth. What do you make of that, and where do you think it's going?

>> I think it's a very big problem. I actually recently criticized one of our AI model companies for engaging in a regulatory capture strategy.

>> Yes. A very fair criticism, by the way.

>> It is very fair, and of course they denied it. Should I tell the story?

>> Yeah, sure.

>> Rarely do you get vindicated on X as thoroughly and completely as I did on this. This company (it was basically Anthropic) denied it. But what basically happened is that Jack Clark, who's a co-founder and head of policy at Anthropic, gave a speech at a conference where he compared fear of AI to a child thinking there are monsters in the dark, but then you turn the lights on and the monsters are there. I thought that was such a ridiculous analogy. It's basically puerile, so childish as to be almost self-indicting, because you're basically admitting the fear is made up, not real.

>> In any event, I said, "Well, this is fear-mongering and part of their regulatory capture strategy." And of course they denied it. But then a lawyer who was in the crowd at the speech said, "Well, yeah, but Jack's not telling you what he said during the Q&A." During the Q&A he basically admitted that everything Anthropic was doing, with things like SB 53, which is supposedly just implementing transparency, was just a stepping stone to their real goal: a system of pre-approvals in Washington before you can release new models. And he admitted as part of the Q&A that making people very afraid was part of their strategy. So, about as much of a smoking gun as you could ever get in a spat on X.

The reason I think that approach is so damaging is that the thing that's really made Silicon Valley special over the past several decades is permissionless innovation. Two guys in a garage can just pursue their idea. Maybe they raise some capital from angels or VC firms, basically people who are willing to lose all of their money. They could be young founders, or the future dropout in a dorm room, and they're able just to pursue their idea. The reason that has happened in Silicon Valley, whereas in industries like pharma, healthcare, defense, or banking you just don't see a lot of startups, is that those industries are all heavily regulated, which means you have to go to Washington to get permission to do things. And the thing I've seen in Washington is that approvals get set up for reasons, but those reasons very quickly stop mattering. What matters is how good your government affairs team is at navigating the bureaucracy and figuring out how to get those approvals. That's not something your typical startup founders are going to be good at. It's something big companies get good at because they've got the resources, and that's exactly what regulatory capture means.

So the whole basis of Silicon Valley's success, the reason it's the crown jewel of the American economy and the envy of the rest of the world, with all these attempts by other countries to create their own Silicon Valley, is permissionless innovation. And what is being contemplated, discussed, and implemented with respect to AI is an approval system for both software and hardware. This is not theoretical; it has already been happening on the hardware side. One of the last things the Biden administration did, in its last week, was impose the so-called Biden diffusion rule, which requires that every sale of a GPU on Earth be licensed by the government, which is to say pre-approved, unless it fits into some category of exception. The overall idea is that compute would become a licensed, pre-approved category. We rescinded that.

Then on the software side, like I said, the goal very clearly is to start with these reporting requirements to the government and the states, and where that ramps up to is having to go to Washington for permission before you release a new model. This would drastically slow down innovation and make America less competitive. These approvals can take months; they can take years. When a new chip is released every year and we have licenses that have been sitting in the hopper for two years, the requests are obsolete by the time they finally get approved. That would be even more true with models, where the cycle time is three or four months for a new model. And what exactly is a bureaucracy in Washington going to know about this technology that puts it in a good position to approve it? In any event, this is what is being contemplated right now, and I think it would be a disaster for Silicon Valley and for innovation, and therefore for American competitiveness. I think we will lose the AI race to countries like China if this is the set of rules we have.

>> Yeah. One of the really diabolical things about their argument is: if they really believed there was a monster, then why are they buying GPUs at a faster rate than anybody? And the other thing we know from being in the industry is that they have a reputation for literally the worst security practices in the entire industry with respect to their own code. If you were building this monster, the last thing you'd want to do is leave a bunch of holes around for people to hack it. So they don't believe anything they're saying. It's completely made up to try to maintain their lead.

>> I think it's a heady drug to basically say that we're creating this new superintelligence that could destroy humanity, but we're the only ones who are virtuous enough to ensure that it's done correctly.

>> It's a good recruiting tool. Join the virtuous team.

>> Yes, I think that's right. Of all the companies, that particular one has been the most aggressive in terms of regulatory capture and pushing for these regulations. But let's bring it up a level; it doesn't have to be about them. There are now something like 1,200 bills going through state legislatures right now to regulate AI. Twenty-five percent of them are in the top four blue states: California, New York, Colorado, and Illinois. Over a hundred measures have already passed, and I think three of them were signed in the last month in California alone.

>> Let me tell you what Colorado, and actually Colorado, Illinois, and California, have all done: some version of a thing called algorithmic discrimination, which I think is really troubling in where it's headed. What this concept means is that if the model produces an output that has a disparate impact on a protected group, then that is algorithmic discrimination. And the list of protected groups is very long, more than just the usual ones. For example, in Colorado they've defined people who may not have English language proficiency as a protected group. So I guess if the model says something bad about illegal aliens, that would basically violate the law. I don't know exactly how model companies are even supposed to comply with this rule. Presumably discrimination is already illegal: if you're a business and you violate the civil rights laws and engage in discrimination, you're already liable for that. If you happen to make that mistake and use some kind of tool in the process, we don't need to go after the tool developer, because we can already go after the business that made the decision. But the whole purpose of these laws is to get at the tool. They're making not just the business using AI liable; they're making the tool developer liable. And I don't even know how the tool developer is supposed to anticipate this, because how do you know all the ways your tool is going to be used? How do you know, especially if the output is 100% true and accurate and the model is doing its job, that an output was used as part of a decision that had a disparate impact?
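[Editor's aside: the state AI laws discussed here don't prescribe a formula, but "disparate impact" in US civil-rights enforcement is commonly operationalized via the EEOC's four-fifths rule, comparing selection rates between groups. A minimal, illustrative sketch of that statistic, with made-up numbers:]

```python
def disparate_impact_ratio(selected_protected, total_protected,
                           selected_other, total_other):
    """Ratio of the protected group's selection rate to the comparison
    group's. Under the EEOC 'four-fifths rule' heuristic, a ratio below
    0.8 is treated as evidence of potential disparate impact."""
    rate_protected = selected_protected / total_protected
    rate_other = selected_other / total_other
    return rate_protected / rate_other

# Hypothetical example: 30 of 100 protected-group applicants selected,
# versus 60 of 100 in the comparison group.
ratio = disparate_impact_ratio(30, 100, 60, 100)
print(ratio)        # 0.5
print(ratio < 0.8)  # True -> flagged under the four-fifths heuristic
```

The sketch illustrates why Sacks says a tool developer can't anticipate this: the ratio depends on who the deployer's applicant pool is, which the model developer never sees.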

Nevertheless, you're liable. The only way I can see for model developers to even attempt to comply with this is to build a DEI layer into their models that tries to anticipate: could this answer have a disparate impact? And if it could, we either can't give you the answer, or we have to sanitize or distort it. Take this to its logical conclusion and we're back to woke AI, which, by the way, was a major objective of the Biden administration. The Biden executive order on AI that we rescinded as part of the Trump administration had something like 20 pages of DEI language in it. They were very much trying to promote what they called DEI values in models, and we saw what the result of that was. We saw the whole black George Washington thing, where history was being rewritten in real time because somebody built a DEI layer into the model. I almost feel like the term "woke AI" is insufficient to explain what's going on, because it somehow trivializes it. What we're really talking about is Orwellian AI: AI that lies to you, that distorts an answer, that rewrites history in real time to serve the current political agenda of the people in power. It's very Orwellian, and we were definitely on that path before President Trump's election. It was part of the Biden EO, and we saw it happen in the release of that first Gemini model.

That was not an accident; those distorted outputs came from somewhere. To me, this is actually the biggest risk of AI. It was not described by James Cameron; it was described by George Orwell. In my view, it's not the Terminator, it's 1984: as AI eats the internet and becomes the main way we interact and get our information online, it'll be used by the people in power to control the information we receive. It'll contain an ideological bias; essentially, it'll censor us. All that trust-and-safety apparatus that was created for social media will be ported over to this new world of AI. Marc, I know you've spoken about this quite a bit, and I think you're absolutely right about that. And then on top of that, you've got the surveillance issues: AI is going to know everything about you. It's going to be your kind of personal assistant, so it's the perfect tool for the government to monitor and control you. To me, that is by far the biggest risk of AI, and it's the thing we should be working to prevent. The problem is that a lot of these regulations being whipped up by these fear-mongering techniques are actually empowering the government to engage in exactly the type of control we should all be very afraid of.

>> Sam Altman said earlier this week that by 2028 he expects to have automated researchers. I'm curious about your view of the state of AI model development, or progress in general, and what you think the implications are. Some people have been saying that AGI is two years away: the AI 2027 paper, Leopold Aschenbrenner's Situational Awareness essays. What's your reading of the state of play in AI development, and what are the implications?

>> My sense is that people in Silicon Valley are pulling back from the, let's call it, imminent-AGI narrative. I saw Andrej Karpathy gave an interview where all of a sudden he's rewritten this and says AGI is at least a decade away. He's basically saying that reinforcement learning has its limits. It's very useful; it's the main paradigm right now, and they're making a lot of progress with it. But he says the way humans actually learn is not really through reinforcement; we do something a little different, which I think is a good thing, because it means humans and AI will be synergistic. The AI's understanding, if it's based on RL, will be a little different from the way we intuit and reason. But in any event, I sense more of a pullback from this imminent-AGI narrative, the idea that AGI is two years away. Of course, it's kind of unclear what people mean by AGI, but it was used in a scary way: the superintelligence that would grow beyond our control. I feel like people are pulling back from that and understanding that yes, we're still making a lot of progress, and the progress is amazing, but at the same time what we mean by intelligence is multifaceted. Progress is being made along some dimensions, but not along every dimension.

So I've described the situation we're in right now as a bit of a Goldilocks scenario. The extremes would be, on one side, the scary Terminator situation, imminent superintelligence that'll grow beyond our control, and on the other, the narrative you hear in the press a lot that we're in a big bubble, in other words that the whole thing is fake.

>> And the media is basically pushing both narratives at the same time.

>> Yeah. But in any event, I think the truth is more in the middle. It's a Goldilocks scenario where we're seeing a lot of innovation, the progress is impressive, and I think we're going to see big productivity gains in the economy from this. I like the observations Balaji made recently; a couple of things really struck me. One was that AI is polytheistic, not monotheistic: instead of one all-knowing, all-powerful god, what we're seeing is a bunch of smaller deities, more specialized models. We're not on that kind of recursive self-improvement track just yet, but we're seeing many different kinds of models make progress in different areas. The other was his observation that AI is middle-to-middle whereas humans are end-to-end, and therefore the relationship is pretty synergistic. I think that's right; those observations resonate with me in terms of where we're at right now.

>> Yeah, and that's very consistent with what we're seeing as well: ideas we thought would for sure get subsumed by the big models are becoming amazingly differentiated businesses, just because the fat tail of the universe is very fat, and you need really specific understanding of certain scenarios to build an effective model. That's just how it's going; no model has figured out how to do everything.

>> Yeah. I mean, and the models work best

when they have context, you know, and um

the more I mean, we've all seen this,

the more general your prompt, the less

likely it is that you're going to be

able to um you know, get get a great

response. And um I don't know, if you

tell the AI uh you know, uh something

very general like um what business can I

create to make a billion dollars? it

it's not going to give you something

actionable. You know, you have to get

very specific about what you're trying

to do and it has to have access to

relevant data and then it can give you

some specific answers to a prompt. And I think this is partly Bali's point: the AI does not come up with its own objective. It needs to be prompted; it needs to be told what to do. We've seen no evidence at this stage that that's changing. We're still at step zero in terms of AI somehow coming up with its own objective. As a result, the model has to be prompted, then it gives you an output, and that output has to be validated. You have to somehow make sure it's correct, because models can still be wrong. And more likely, you have to iterate a few times, because it doesn't give you exactly what you want, so you reprompt. We've all had this experience. This is why the chat interface is so necessary: it takes you a few iterations to get to the output that actually has value for you. Again, humans are end to end and the AI is middle to middle, and we haven't seen any evidence that that fundamental dynamic is changing.

I'd love to hear what you guys think about this. We're obviously at the outset of agents, which you can give an objective to, and they'll take on tasks on your behalf. But I suspect agents will also work better when they have a much narrower context. They're much less likely to go off the rails or start going in weird directions. If you give one a very broad task, it's just not likely to completely figure it out before it needs human intervention, but if you give it something very narrow to do, it's much more likely to be successful. So if you just tell the AI, "sell my product," it's very unlikely to figure out what that means and how to do it. But if you're a sales rep using the AI to help you, there are probably very specific tasks you can tell it to do, and it will be much more successful at those.

This also speaks to the whole job-loss narrative. I just think this is going to be a very synergistic tool for a long time. I don't think it's going to wipe out human jobs, and I don't think the need for human cognition is going away. It's something we'll all use to get a big productivity boost, at least for the foreseeable future. I don't know if any of us can predict what happens beyond five or ten years, but that's what I'm seeing right now. I'm curious what you guys are seeing on this front.
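The loop described above (prompt, validate the output, fold feedback back into the prompt) can be sketched as a simple controller. This is purely an illustrative sketch: `run_with_human_loop` and the stub functions are hypothetical stand-ins for a real model API and a human reviewer, not any actual product interface.

```python
def run_with_human_loop(model, prompt, validate, refine, max_rounds=5):
    """Human-in-the-loop iteration: prompt the model, check the output,
    and reprompt with the critique until the result is acceptable."""
    for _ in range(max_rounds):
        output = model(prompt)
        ok, feedback = validate(output)    # in practice, a human judgment
        if ok:
            return output
        prompt = refine(prompt, feedback)  # fold the feedback into the prompt
    return None  # budget exhausted: still needs human intervention

# Toy demo: stubs standing in for a real model and a human reviewer.
def stub_model(prompt):
    return f"draft based on: {prompt}"

def stub_validate(output):
    return "shorter" in output, "make it shorter"

def stub_refine(prompt, feedback):
    return f"{prompt} ({feedback})"

result = run_with_human_loop(stub_model, "sell my product",
                             stub_validate, stub_refine)
print(result)
```

The speakers' point about narrow versus broad tasks maps onto `max_rounds`: a narrow task converges within the budget, while a broad one exhausts it and falls back to the human.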

>> Yeah, generally consistent with that, and things are improving. On agents, with the early agents, the longer the running task, the more they would go completely bananas and off the rails. People are working on that. I do think everything works better in a narrow context, and at least from what we've seen, that will continue. And to your point on super-smart models: there are about a dozen video models out there, and not one is the best at everything, or even close to the best at everything. There are literally a dozen that are each the best at one thing. Which is a little surprising, at least to me, because you would think the sheer size of the data would be an advantage, but even that hasn't quite proven out. It depends on what you want. Do you want a meme? Do you want a movie? Do you want an ad? It's all very different. And I think this gets to your main point. Mark Zuckerberg said something I really liked: intelligence is not life. These things we associate with life, like having an objective, free will, sentience, just aren't part of a mathematical model that is searching through a distribution and figuring out an answer, or even a model that can improve its logic through a reinforcement learning technique. The comparison to humans just falls short in a lot of ways. We're just different.

>> And the models are very good at things. They're already better than humans at many things.

>> The other thing I'd bring up, which is a little orthogonal but quite related: is the future of the world going to be one where a small number of companies, or for that matter governments or super-AIs, own and control everything, with all the value rolling up into a handful of entities? There you get the hyper-capitalist version, where a few companies make all the money, or the hyper-communist version, where you have total state control.

>> Or is this a technology that's going to diffuse out, be in everybody's hands, and be a tool of empowerment, creativity, individual effort, and expressiveness, a tool for basically everybody to use? I think one of the really striking things about this period, and about you being in this role, is that this is the period in which scenario number two is very clearly playing out. AI is hyperdemocratizing. It has spread to more individuals, both in this country and around the world, in the shortest period of time of any new technology in history. It's something like 600 million users today, rapidly on the way to a billion, and then to 5 billion, across all the consumer products. And the best AIs in the world are in the consumer products. If you use current-day ChatGPT or Grok or any of these, I can't spend more money and get access to better AI; it's in the consumer products. So in practice, what you have playing out in real time is that this technology is going to be in everybody's hands, and everybody is going to be able to use it to optimize the things they do: have it be a thought partner, an assistant for starting companies, creating art, or doing all the things people want to do.

My wife was just using it this morning to design an entrepreneurship curriculum for our 10-year-old. It took her a couple of hours, and she has a full curriculum for him to start his first video game company, with all the skills he needs to learn and all the resources. That's just a level of capability that, without these modern consumer AI tools, would have required hiring an education specialist or something, which is basically impossible for that kind of thing. And everybody has these stories now in their lives, among people they know. So I think we have a lot of proof that the track this is on is that it's going to be in everybody's hands, and that is going to be a really good thing. And David, I think you guys are playing a key role in making it happen.

>> I think it's so important that this technology remain decentralized, because the Orwellian concern is ultimately a concern about centralization.

>> And fortunately, so far what we're seeing in the market is that it's hypercompetitive. There are five major model companies all making huge investments, the evaluations of model performance are relatively clustered, and there's a lot of leapfrogging going on. Grok releases a new model and leapfrogs ChatGPT, then ChatGPT releases something new and leapfrogs back. They're all very competitive and close to each other, and I think that's a good thing. It's the opposite of what was predicted in the imminent-AGI story, where the storytelling was that one model would get a lead, direct its own intelligence to making itself better, and therefore its lead would get bigger and bigger: recursive self-improvement, and pretty soon you're off to the singularity. We haven't really seen that. We haven't seen one model completely pull away in terms of capabilities, and I think that's a good thing.

And Eric, to your point about this narrative of the virtual AI researcher: that was one variant of the imminent-AGI narrative. The steps would be that models get smarter, then models create a virtual AI researcher, then you get a million virtual AI researchers, and then it's the singularity. I think the sleight of hand there is: what is a virtual AI researcher? It's a very easy thing to say, but what does it really mean? To Toby's point, AI is still middle to middle; it's not end to end. If an AI researcher is end to end, there are things the person has to figure out. They have to set their own objective. They have to be able to pivot in ways that AI can't. Is it really feasible to create a virtual AI researcher? There are parts of the job that AI could get really good at, or even better than humans at, but that tool probably has to be used by a human AI researcher. So I suspect the argument could be teleological, in the sense that you might need AGI to create a virtual AI researcher rather than the other way around. And if that's the case, you're not going to get the singularity that way. So I'm a little bit skeptical of that claim. We'll see. Sam says he could do it by 2028. I guess we'll see.

>> I think all those claims tend to be recruiting ideas rather than actual predictions.

>> He's not the first to mention that idea. Other model companies have been promoting it, and Leopold mentioned it too. We'll see. But I suspect that what's wrong with that argument is that a virtual AI researcher requires AGI, so the idea that you're going to get AGI through a virtual AI researcher is backwards. But we'll see.

>> David, you and the administration have also been very supportive of open source AI, which I think dovetails into this in terms of the market being very competitive.

>> Yes.

>> Do you want to spend a moment on what you've been able to do on that and how you think about it?

>> Yeah. Open source is very important because I think it's synonymous with freedom, meaning software freedom. You can run your own models on your own hardware and retain control over your own information. And by the way, this is what enterprises typically do all the time: about half the global data center market is on-prem, meaning enterprises and governments create their own data centers rather than go to the big clouds. I've got nothing against the hyperscalers, but people like to run their own data centers and maintain control over their own data, and I think that will be true for consumers to some degree as well. So I do think it's an important area that we should encourage and promote.

The irony in the market right now is that the best open source models are Chinese. It's a quirk; it's the opposite of what you'd expect. You'd expect the American system to promote open and the Chinese system to promote closed, but it has ended up a little backwards. I think there are plausible reasons for it. It could just be a historical accident: the DeepSeek founder was very committed to open source, and that got things started that way. Or it could be part of a deliberate strategy. If you're China and you're trying to catch up, open source is a really good way to do it, because you get all the non-aligned developers wanting to help your project, which they can't do with a closed project. So it's a great strategy for catching up. And if you think your business model, as a company or as a country, is, say, scale manufacturing of hardware, then you'd want the software part to be free or cheap, because it's your complement; you try to commoditize your complement. I don't know whether it's by accident or by design, but that seems to be the Chinese strategy.

I think the right answer for the US is to encourage our own open source. It would be a great thing if we saw more open source initiatives get going. There's one promising one called Reflection, founded by former engineers from Google DeepMind, and I hope we see more open source innovation in the West. But it's very important, critical even, and like I said, in my view it's synonymous with freedom, and it's definitely not something we want to suppress.

Now, back to the closed ecosystem for a second. It's true we have five major competitors there, and they're all spending a lot of money. I do worry a little that at some point the market consolidates and we end up with a monopoly or duopoly, as we've seen in other technology markets. We saw it with search, and so on down the line. I just think it would be good if this market stayed more competitive than one or two winners. I don't really know what to do about that; I'm just making the observation. But I do think that having open source as an option ensures that even if the market does consolidate, you have an alternative, and an alternative that's more fully within your control, as opposed to a large corporation, or the deep state working with that corporation. As we saw in the Twitter Files, the deep state was working with all these social media companies to implement much more widespread censorship than any of us thought possible. So we've seen evidence in the past, in the social networking space, of how the government can get involved in nefarious ways, and it would be good to have alternatives to prevent that, or at least make that scenario less likely with AI.

>> Yeah. Well, as you know, we and others are very aggressively investing in new model companies of many kinds, including new foundation model companies. And as you're probably aware, there are a whole bunch of new open source efforts that are not yet public that hopefully will bear fruit over the next couple of years. So at least in the medium term, I think we're looking at an explosion of model development rather than consolidation, and then we'll see what happens from there.

>> Yeah, that's really good to hear. I think if we assess the state of the AI race vis-a-vis China, the only area where we appear to be behind is open source models.

>> I think if you don't care whether it's open or closed, we have the lead.

>> I think our top model companies are ahead of the top Chinese companies, although theirs are quite good, but this narrow area of open source seems to be where they have an advantage. So it's great to hear that you're seeing a lot more efforts coming to market.

>> Yeah. Yeah, there's more coming. Yep.

Good.

>> Yeah. Yeah. Definitely more coming.

>> Peter Thiel quipped many years ago that he thought crypto would be libertarian, or decentralizing, and that AI would be communist, or centralizing. And I think one thing we've perhaps learned is that technology isn't deterministic, and that there is a set of choices that determines whether these technologies are decentralizing or centralizing. Maybe we could use that as a segue to go deeper into the state of the race with China. David, maybe you could lay out what's most important to get right. You've already indicated open source is one example, and you alluded earlier to our strategy as it relates to chips. Some people say what we're doing is a good idea because it will limit China's domestic semiconductor production; other people say some of these companies cite chips as their biggest limiting factor, so are we enabling them in some way? Why don't you talk about the state of play and then our strategy?

>> Yeah. So when we talk about winning the AI race, sometimes we say we're in a race against China, and sometimes we leave it a little more vague, because I don't think we should become overly obsessed with our competitors or adversaries. Whether we win or not will mostly have to do with the decisions we make about our own technology ecosystem, not with what we do vis-a-vis them. In his July 23rd speech on AI policy, the president mentioned a few of the key pillars of how we win this AI race. And by the way, I'm not saying it ever ends; it might be an infinite game, but we want to be in the lead at least. There could be a period of time, as with the internet, where the thing is still going on but it's understood that who the winners are is kind of baked. The internet is like that now, and at some point it may be kind of baked who the winners in AI are.

In any event, in terms of how we win this race: pillar number one is innovation. It's very important to support the private sector, because they're the ones who do the innovating. We're not going to regulate our way to beating our adversary; we just have to out-innovate them. Right now, I think the biggest obstacle is the frenzy of overregulation happening at the state level. I think we desperately need a single federal standard. A patchwork of 50 different regulatory regimes is going to be incredibly burdensome to comply with. Even the people who support a lot of this regulation are now acknowledging that we're going to need a federal standard. The problem is that when they talk about it, what they really want is to federalize the most onerous version of all the state laws,

>> and that can't be allowed either. So there's a battle to come. As the states become more and more unwieldy, and as it becomes more of a trap for startups that they have to report to 50 different states at 50 different times to 50 different agencies about 50 different things, people are going to realize this is crazy and try to federalize it. Then the question is whether we get preemption heavy or preemption light. I think everyone is ultimately going to be in favor of a single federal standard, because one of America's greatest advantages is that we have a large national market, not 50 separate state markets. Europe before the EU wasn't competitive at all in the internet, because it was 30 different regulatory regimes. If you were a European startup, even if you won your country, it didn't get you very far, because you still had to figure out how to compete in 30 other countries before you could even win Europe, and meanwhile your American competitor had won the entire American market and was ready to scale up globally. The fact that we have a single national market is fundamental to our competitiveness, and it's why winners in America then go on to win the whole world. We have to preserve that, and I think we will eventually get some federal preemption. The question will just be whether we preempt heavy or preempt light.

The second big area is infrastructure and energy. We want to help this amazing infrastructure boom that's happening, and I think the biggest limiting factor there is going to be energy. President Trump has been incredibly far-sighted on this. He was talking about "drill, baby, drill" many years ago. He understood that energy is the basis for everything, and it's definitely the basis for this AI boom. We want to get all of the unnecessary regulations, the permitting restrictions, and a lot of the NIMBYism out of the way so that AI companies can build data centers and get power for them. We can talk about that more if you want, but that's the second really huge part of what it's going to take to win the AI race.

The third area is exports, and maybe this has been the most controversial one. It really speaks to the cultural divide between Silicon Valley and Washington. All of us in Silicon Valley understand that the way you win a technology race is by building the biggest ecosystem: you get the most developers building on your platform, you get the most apps in your app store, everyone just uses you. The companies that typically win are the ones that get all the users and all the developers. So we in Silicon Valley have a partnership mentality; we want to publish the APIs and get everyone using them. Washington has a different mentality, much more command and control: we want you to get approved, we kind of want to hoard this technology, only America should have it. And this was really fundamental to the Biden diffusion rule, where the point of the rule is to stop diffusion. Diffusion is a bad word. But in Silicon Valley, we understand that diffusion is how you win.

>> I don't think we ever called it diffusion before. That was a new word for me. We just called it usage.

>> Yeah.

>> But we understand that getting the most users is how you win. So there's a fundamental culture clash going on right now. The way I parse it is that what we decide to sell to China is always going to be complicated, because they're our competitor and our adversary, and there's the whole potential for dual use, so the question of what you sell to China is nuanced. But what we sell to the rest of the world should be an easy question: we should want to do business with the rest of the world. We should want to have the largest ecosystem possible. Every country we exclude from our technology alliance, we're basically driving into the arms of China, and it makes their ecosystem bigger. What we saw under the Biden years is that they were constantly pushing other countries into the arms of China, starting with the Gulf states in October of 2023. The Gulf states, and I'm talking about countries like Saudi Arabia and the UAE, long-standing US allies, weren't allowed to buy chips from the US. In other words, they weren't allowed to set up data centers and participate in AI. Here we are telling all these countries that AI is fundamental to the future and is going to be the basis of the economy, and yet we're excluding them from participating in the American tech stack. Well,

>> it's obvious what they're going to do. The only play we're giving them is to go to China. All of these rules basically just create pent-up demand for Chinese chips and models, and it creates a Huawei Belt and Road. We are hearing that Huawei is starting to proliferate, or diffuse, in the Middle East and in Southeast Asia. I just think it's a really counterproductive strategy. We're completely shooting ourselves in the foot. And the greatest irony is that the people who've been pushing this strategy of driving all these countries into China's arms have called themselves China hawks, as if what they're doing is hurting China. No, it's helping China. It's basically just handing them markets. Our products are better, but if you don't give these countries a choice to buy the American tech stack, obviously they're going to go with the Chinese tech stack. And China is out there promoting DeepSeek models and Huawei chips, and they're not wringing their hands about whether exporting chips for a data center in the UAE is going to create the Terminator, or any of these ridiculous narratives we've invented as reasons not to sell American technology to our friends. So that has ended up being, I think, surprisingly, maybe the most controversial part of what we've advocated for. But there you have it. In any event, I'll stop there. Those are some of the major pillars of what we've been advocating.

>> Should we go deeper on the infrastructure and energy point, in terms of what it's really going to take to get enough capacity, or what's most important in that second pillar you were talking about?

>> Yeah. Well, there are definitely people who are much more knowledgeable about energy than I am and are experts in the space, but here's what I've been able to divine. First of all, President Trump has signed multiple executive orders to allow for nuclear and to make permitting easier. We've even freed up federal land for data centers, hopefully to help get around some of these state and local restrictions. And the president has obviously made it a lot easier to stand up new energy projects, power generation, all that kind of stuff. I still think, though, that we have a growing NIMBY problem at the state and local level in the US that is becoming a little bit worrisome, and if we don't figure out a way to address it, it could really slow down the build-out of this infrastructure.

In terms of power, my understanding is that nuclear is going to take 5 or 10 years; it's just not something we're going to be able to do in the next two or three years. So in the short term, gas is really how these data centers are going to get powered. And the issue with gas is not a shortage of gas. America has plenty of natural gas, and it exists in enough red states that you could build data centers close to the source, which would be smart. The issue is a shortage of gas turbines. There are only two or three companies that make them, and there's a backlog of two or three years. So I think that's probably the immediate problem that needs to get solved.

However, I do think that in the next two or three years we could get a lot more out of the grid. Energy executives have told me that if we could just shed 40 hours a year of peak load from the grid, to backup generators, to diesel, things like that, we could free up an additional 80 gigawatts of power, which is a lot. The way it works is that only about 50% of the grid's capacity is used over the course of the year, because utilities have to build enough capacity for the peak days: the hottest day in summer, the coldest day in winter. They don't want to commit a bunch of that capacity and then find out on a really cold winter day that people can't get enough heat for their homes. So they can't overcommit to, say, contracts for data centers and things like that. But if you could shed those roughly 40 hours a year of peak load to backup power, you'd be able to free up 80 gigawatts, which is a lot, and that would definitely get us through the next two or three years until the gas turbine bottleneck has been alleviated. And then eventually you get to nuclear. So that would be very good. The issue there is just that there's a whole bunch of insane regulations preventing load shedding. For example, you can't use diesel. Chris Wright, the secretary of energy, is very good on all this stuff, and I think he's working on unraveling all of it so we can actually do this.
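The load-shedding arithmetic above can be illustrated with a toy simulation. The load profile and its absolute scale here are invented for illustration only; the 40-hour and 80-gigawatt figures in the conversation come from energy executives, not from this model. The idea being shown: because the grid is sized for its single worst hour, a flat new load that agrees to drop to backup power during the few dozen worst hours per year frees up firm capacity equal to the gap between the absolute peak and the worst non-shed hour.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly load for one year (8760 hours), in GW.
# Seasonal swing + daily swing + weather noise; all numbers invented.
h = np.arange(8760)
load = (500
        + 150 * np.cos(2 * np.pi * h / 8760)   # winter/summer seasonal swing
        + 120 * np.sin(2 * np.pi * h / 24)     # day/night swing
        + rng.normal(0, 25, 8760))             # weather noise

capacity = load.max()  # grid is built to cover the single worst hour

def firm_headroom(load, capacity, shed_hours):
    """Flat load you can add if it switches to backup power during the
    top `shed_hours` system hours: the binding constraint becomes the
    worst hour that is NOT shed."""
    worst_remaining = np.sort(load)[-(shed_hours + 1)]
    return capacity - worst_remaining

print(f"average utilization:      {load.mean() / capacity:.0%}")
print(f"headroom, no shedding:    {firm_headroom(load, capacity, 0):.1f} GW")
print(f"headroom, shed 40 h/year: {firm_headroom(load, capacity, 40):.1f} GW")
```

With no shedding, the headroom is zero by construction; allowing even a few dozen curtailable hours opens up real capacity. How much 40 hours buys in reality depends on how sharply peaked the actual load curve is, which is why the figure is attributed to energy executives rather than derived.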

>> It's funny, David, as you talk about this stuff. I can't help but think the principle is just: do the opposite of the EU.

Yeah.

>> Yeah.

>> Basically, everything we've talked about so far is the opposite of the European approach.

>> Yeah. Well, the Europeans have a really different mindset about all this. When they talk about AI leadership, what they mean is that they're taking the lead in defining the regulations. That's what they're proud of. They think their comparative advantage is getting together in Brussels and figuring out what all the rules should be, and that's what they call leadership.

>> I shouldn't beat on them too much, but the EU just announced a big new growth fund, a big new public-private tech growth fund to grow EU companies to scale. And I was just thinking, it's almost like a game show: they do everything they can to strangle companies in the crib, and then, if a company makes it through a decade of abuse of small companies, they give it the money to grow.

>> Well, Ronald Reagan had a line about this: if it moves, tax it. If it keeps moving, regulate it. If it stops moving, subsidize it.

>> Yeah.

>> The Europeans are definitely at the subsidize-it stage.

>> Yeah, and I shouldn't beat on them too much, but I've always been proud to be an American, and particularly now, because it really feels like we're recentering on core American values in a lot of the things we're talking about, which is just really great.

>> Yeah. Again, our view is that, first of all, we have to win the AI race. We want America to lead in this critical area; it's fundamental to our economy and our national security. How do you do that? Well, our companies have to be successful, because they're the ones who do the innovation. Again, you're not going to regulate your way to winning the AI race. I'm not saying we don't need any regulations; the point is just that that's not what's going to determine whether we're the winners or not.

>> David, you recently tweeted that climate doomerism is perhaps giving way to AI doomerism, based on Bill Gates's recent comments. What do you mean by this? Do you mean it's going to be a major flank of the US left, or what do you mean by this

comment?

>> Well, I think the left needs a central organizing catastrophe to justify their takeover of the economy, to regulate everything, and especially to control the information space. And I think the allure of the whole climate-change doomer narrative has faded. Maybe it's the fact that they predicted ten years ago that the whole world would be underwater in ten years, and that hasn't happened. At a certain point you get discredited by your own catastrophic predictions. I suspect that's where we'll be with AI doomerism in a few years. But in the meantime, it's a really good narrative to take the place of climate doomerism.

There are actually a lot of similarities, I would say. There's a lot of pre-existing Hollywood storytelling and pop culture that supports this idea: you've got the Terminator movies and The Matrix and all that kind of stuff, so people have been taught to be afraid of this. And there's enough pseudoscience behind it. You've got all these contrived studies, like the one claiming an AI researcher got blackmailed by his own AI model. Look, it's very easy to steer a model toward the answer you want, and a lot of these studies have been very contrived, but there's a patina of pseudoscience to them. It's certainly technical enough that the average person doesn't feel comfortable saying it doesn't make any sense; it's more like, you're not an expert, what do you know? And even Republican politicians, I think, are falling for this.

So yeah, it's a really desirable narrative. And of course, as AI touches more and more parts of the economy, every business is going to use it to some degree. If you can regulate AI, that gives you a lot of control over lots of other things. And like I mentioned, AI is eating the internet; it's becoming the main way you get information. So if you can get your hooks into what the AI is showing people, you can control what they see and hear and think, which dovetails with the left's censorship agenda, which they've never given up on, and dovetails with their agenda to brainwash kids, which is the whole woke thing. So this is going to be very desirable for the left. And look, they're already doing this; this is not some prediction on my part. Basically, after Scam Bankman-Fraud did what he did with FTX and got sent to jail, they needed a new cause. He was a big effective altruist, and he had made pandemics their big cause. So they got behind this idea of X-risk, existential risk: the idea being that if there's even a 1% chance of AI ending the world, we should drop everything and focus on that, because when you do the expected-value calculation, if it ends humanity, that's the only thing you should focus on, even at a very small percentage chance.
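The expected-value logic being described can be sketched in a few lines; both numbers below are hypothetical stand-ins chosen for illustration, not figures from the conversation:

```python
# A sketch of the X-risk expected-value argument described above.
# Both inputs are hypothetical stand-ins chosen for illustration.
p_doom = 0.01      # the assumed "1% chance of AI ending the world"
stakes = 1e15      # an arbitrarily large value placed on humanity's future
expected_loss = p_doom * stakes

# Because the stakes can be set arbitrarily high, the expected loss
# dominates any finite competing priority no matter how small p_doom is.
# Critics call this a "Pascal's mugging": the conclusion follows from
# the chosen stakes, not from evidence about the probability.
print(expected_loss)
```

The point of the sketch is that the argument is insensitive to the probability: halving p_doom just calls for doubling the stakes, so the "drop everything" conclusion survives any estimate.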

But they really reorganized behind this, and they've got quite a few advocates. It's actually an amazing story how much influence they were able to achieve, largely behind the scenes or in the shadows, during the Biden years. They basically convinced all of the major Biden staffers of this view: imminent superintelligence is coming, we should be really afraid of it, we need to consolidate control over it, ideally only two or three companies should have it, and we don't want anyone in the rest of the world to get it. And then what they said is, once we make sure there are only two or three American companies, we'll solve the coordination problems. That's what they consider to be the free market. We'll solve those coordination problems among those companies, and we'll be able to control this whole thing and prevent the genie from escaping the bottle. It was this totally paranoid version of what would happen, and it's already in the process of being refuted. But this vision is fundamentally what animated the Biden executive order on AI, and it's what animated the Biden diffusion rule. And Marc, you've talked about how you were in a meeting with Biden folks and they were going to basically ban open source and

>> They were basically going to anoint two or three winners, and that was it.

>> Yeah, they told us that explicitly, and they told us exactly what you just said. They told us they were going to ban open source. And when we challenged them on the ability to ban open source, because we're talking about math, mathematical algorithms that are taught in textbooks, in YouTube videos, and at universities, they said: during the Cold War, we banned entire areas of physics and put them off limits, and we'll do the same thing for math if we have to.

>> And you'll be happy to know that the guy who actually said that is now an Anthropic employee.

>> That's exactly right. Literally the minute the Biden administration was over, all the top Biden AI staffers went to work at Anthropic, which tells you who they were working with during the Biden years.

>> Yeah.

>> But this was very much the narrative. You had this imminent superintelligence, and one of the refrains you heard was that AI is like nuclear weapons and GPUs are like uranium or plutonium,

>> and therefore the proper way to regulate this is with something like an international atomic energy commission.

>> So again, everything would be centralized and controlled, and they would anoint two or three winners. Now, I think this narrative really started to fall apart

>> with the launch of DeepSeek,

>> which happened in the first couple of weeks of the Trump administration. If, while they were pushing all these regulations, you had asked any of these people, wait, if we shoot ourselves in the foot by overregulating AI, won't China just win the AI race? What they would have said, and did say, is that China is so far behind us it doesn't matter. And furthermore, and this was said completely without evidence, that if we slow down to impose all these supposedly healthy regulations, China will just copy us and do the same thing. I think it was an absurdly naive view. I think that if we shoot ourselves in the foot, China will just say, "Thank you very much, we'll

>> take leadership in this technology. Why wouldn't we?" But this is what they said. And when the Biden executive order on AI was crafted, there was no discussion whatsoever of the China competition. It was just assumed that we were so far ahead that we could do basically anything to our companies and it wouldn't really affect our competitiveness.

And I think that narrative really started to fall apart with DeepSeek at the model level. Then back in April, Huawei launched a technology called CloudMatrix, in which they compensated for the fact that their chips individually are not as good as Nvidia's by networking more of them together. They took 384 of them and used their prowess in networking to create this rack system, CloudMatrix. It demonstrated that, yes, Nvidia's chips are better and much more power efficient, but at the rack level, at the system level, Huawei could get the job done with these Ascend chips and CloudMatrix. So again, I think that showed we're not the only game in town on chips, which means that if we don't sell our chips to our friends and allies in the Middle East and other places, Huawei certainly will. So it's just been one revelation after another in which we've learned that a lot of their preconceptions and beliefs were wrong.
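The rack-level trade-off described here, weaker chips compensated by networking more of them together, can be sketched with illustrative numbers. Only the 384-chip CloudMatrix count comes from the discussion; the per-chip performance ratio and the 72-chip comparison rack are assumptions for the sketch:

```python
# Illustrative rack-level comparison. All performance figures are
# assumptions; only the 384-chip CloudMatrix count is from the text.
nvidia_chip = 1.0                  # normalize one Nvidia chip's throughput
ascend_chip = 0.33                 # assume each Ascend chip is ~1/3 as capable
nvidia_rack = 72 * nvidia_chip     # a hypothetical 72-chip Nvidia rack
cloudmatrix = 384 * ascend_chip    # CloudMatrix networks 384 Ascend chips

# At the system level the weaker chips can still win on raw throughput,
# at the cost of drawing more power per unit of work, which is the
# efficiency gap conceded in the discussion above.
print(cloudmatrix > nvidia_rack)
```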

And we've talked about the fact that the markets ended up being much more decentralized than they ever could have predicted. I'd also say one other thing: they believed there would be imminent catastrophes, and those haven't happened. This is the equivalent of the global-warming thing where we were all supposed to be underwater by now.

>> They were saying that models trained on, I think, something like 10^25 FLOPs were way too risky. Well, every single model at the frontier is now trained at that level of compute. So they would have banned us from even being where we are today if we had listened to these people back in 2023, just a couple of years ago. That's really important to keep in mind: their predictions of imminent catastrophe have already been refuted. So things are moving in a direction very different from what they thought in, call it, the first year after the launch of ChatGPT.
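For a sense of that scale, training compute is commonly approximated as about 6 x parameters x tokens; the model size and token count below are hypothetical, chosen only to show that a large frontier-style run clears the 10^25 FLOPs mark discussed above:

```python
# Rough training-compute estimate using the common ~6*N*D approximation
# (N = parameter count, D = training tokens). The model below is
# hypothetical, sized only to illustrate the 10^25 FLOPs scale.
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

compute = train_flops(400e9, 15e12)   # a 400B-param model on 15T tokens
print(f"{compute:.1e}")               # on the order of 10^25 FLOPs
```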

>> Right. So David, just to come back real quick while we still have you, on crypto. The administration, and I think the country, had a significant victory earlier this year with the president signing the stablecoin bill, the GENIUS Act, into law. I'll just tell you what we see: the positive consequences of that law have been even bigger than we thought. That's true for the stablecoin industry, where you now see financial institutions of all kinds embracing stablecoins in a way they weren't before, with the phenomenon spreading and America in the lead and doing very well. But more broadly, it's also a signal to the crypto industry that this really is a new day, and that there really are going to be regulatory frameworks that are responsible but also make it possible for this industry to flourish in the US. As you know, there's a second piece of legislation being constructed right now, the market structure bill called the CLARITY Act, which is sort of phase two of the legislative agenda. I wondered if you could tell us a little about your view of the importance of that bill, and how you think that process is going.

>> I think it's extremely important. As you mentioned, we passed the GENIUS Act a few months ago, but that was just for stablecoins, and stablecoins are about 6% of the total market cap in terms of tokens. The other 94% are all the other types of tokens, and the CLARITY Act would apply to all of that and provide the regulatory framework for all those other crypto projects and companies. Currently we have a great SEC chairman, Paul Atkins, and if we could be sure that Paul Atkins, or a person like Paul Atkins, was always at the SEC, then we wouldn't necessarily need legislation, because they're already in the process of implementing much better rules and providing regulatory clarity. But the truth is that we don't know for sure. And if you're a founder trying to decide now where you're going to build your company, you want certainty ten or twenty years out. We want to encourage long-term projects, and so I think it's very important first to provide the clarity and then to make sure there's enough stability around it by canonizing those rules in legislation. That's the only way you provide that long-term stability.

I think we will get the CLARITY Act done. It passed the House with about 300 votes, including about 78 Democrats, so it was substantially bipartisan. It's now going through the Senate, and I think it will ultimately get done. We're negotiating with about a dozen Democrats; we have to get to 60 votes, and that's the hard part under the filibuster. But I do think we will ultimately get to that number. By the way, we ended up having 68 votes in the Senate for GENIUS, including 18 Democrats. So even if we just get two-thirds of the number of Democrats we got for GENIUS, we'll be fine on CLARITY. This will provide the regulatory framework for all the other tokens besides stablecoins, and I think it's just a critical piece of legislation. It would ultimately complete the crypto agenda, moving from Biden's war on crypto to Trump's crypto capital of the planet. Then the industry will have the stability it needs and can just focus on innovating, and

>> there'll be rule updates and things like that, but we'll fundamentally have the foundation for the industry in place.

>> On the GENIUS Act, President Trump really made that bill possible. First of all, it was his election that completely shifted the conversation on crypto. If a different result had been reached, we would still be trying to figure it out at the SEC, the founders would still be getting prosecuted, we wouldn't know what the rules are, and Elizabeth Warren would be calling the shots. So President Trump's election made everything possible, and it's his commitment to the industry and to keeping his promises from the election that's made all of this possible. But he also got directly involved in making sure the GENIUS Act passed. The legislation was declared dead many times. I saw with my own eyes that he was able to persuade recalcitrant votes, twist arms, cajole, and charm, and he ultimately got it done. I think CLARITY will have a similar result. People are always prematurely declaring these things dead, and there are a lot of twists and turns in the legislative process. It's definitely true that you don't want to see the sausage getting made, but anyway, I think we're on a good track right now.

>> Good. Fantastic.

>> Great.

>> Pete Buttigieg went on All-In recently, and you guys talked about the left's identity crisis; he's hoping for more of a moderate, center-left wing to emerge. At the same time, we see Mamdani in New York. I'm curious what you're seeing in terms of the future of the Democratic party: is there a more moderate presence, or is it this Mamdani-style woke populism?

>> It certainly seems to me that Mamdani and woke socialism are the future of the party. That's where all the energy is, and that's where their base is. I don't want that to be the case; I'd rather have a rational Democrat party. But that seems to be where their base is, where the energy is. And you don't really hear Democrats within the party trying to self-police and distance themselves from it; all the major figures in the Democrat party have endorsed Mamdani. So yeah, that's where that party seems to be headed. Partly it's where their base is at. Partly it might be a misread, a kind of partial reaction to Trump, where they feel that establishment politics has failed and so they need a populism of the left to compete with a populism of the right. I think that's maybe part of the calculation for why they're going in this direction.

But fundamentally, I don't think it works. I don't think socialism works. I don't think the defund-the-police, empty-all-the-jails policies work. So I think we're about to get another teaching moment in New York. Unfortunately, it's not going to be good for the city, but we've seen this movie before. That's where the Democrat party is. I don't completely get it. Other people have made this observation, but they do seem to be on the 20% side of every 80/20 issue: opening the border, the soft-on-crime stuff, releasing repeat offenders, and this sort of anti-capitalist approach, which I think will be disastrous for the economy. But this is where the party's at right now. It is a little scary, because it means that in the places where we do lose elections, you could end up with something really horrible. We're not just playing between the 40-yard lines anymore in American politics, and that is a little bit scary.

>> Yeah.

>> And I do think that if it weren't for Donald Trump, in a way we might already be there. We have to make sure this Trump revolution continues.

>> Lastly, we were just talking about New York. Recently, on an All-In episode in San Francisco, you endorsed bringing in the National Guard; Benioff had his comments and sort of went back and forth on them. Speaking of teaching moments, I'm curious whether you see San Francisco as savable in some sense, and what needs to be true to get there if so.

>> Well, Daniel Lurie is the best mayor we've had in decades, so I think he's doing a very good job within the constraints that San Francisco presents. Unfortunately, we have a weak-mayor system in San Francisco. I don't mean him; I mean the way it's all set up. The board of supervisors has a ton of power, and over time they've been able to transfer power from the mayor to themselves. And then of course you've got all these left-wing judges. It's just amazing to me. There's a case right now, the case that galvanized me several years ago, of Troy McAlister, a repeat offender who killed two people on New Year's Eve, I think in 2020.

He had been arrested four times in the year before that. He ended up killing these two people, and he had a very long criminal history; he had committed armed robbery before and stolen many cars. He should have been in jail. He should not have been released. But he was basically released thanks to the zero-bail policies of Chesa Boudin, who was then the district attorney, and who we got recalled. There was a huge outcry. I mean, even in San Francisco, for there to be a recall of a politician, you have to be seriously left-wing to

>> basically alienate San Francisco, and Chesa Boudin managed to be so far out there that he alienated even San Francisco.

>> And yet I don't know why Troy McAlister isn't already sentenced and in jail for 20-plus years. His case is still pending through the courts, never ending, and there's a left-wing judge who's considering just giving him diversion, which basically means you get released, maybe with an ankle bracelet. That's insane. So that's what we're dealing with in San Francisco: crazy left-wing judges who want to release all the criminals. So I just wonder whether Daniel is up against too many constraints. I know he doesn't want the president to send in the National Guard, but maybe ultimately it would be helpful. In any event, I think the president has agreed to hold off on that. Daniel had a good conversation with the president and asked him to hold back, and the president agreed and is giving him time to implement his solutions. And look, if Daniel and his team can keep making progress and fix the problems without the National Guard having to come in, then so much the better. We'll just see. Like I said, he's the best mayor we've had in decades. It's just a question of whether he'll be too constrained by the other powers that be in the city.

>> David, thank you so much for coming on the podcast.

>> Yeah.

>> Fantastic. Thank you, David.

>> And thank you for the work. We, as much as anybody, appreciate the work you've done to fix things in the past and put us on a great road to the future.

>> Well, thanks. I appreciate what you guys have done as well. Thank you for your support and everything you're doing.

>> Yeah.

>> I appreciate it.

>> Definitely.

[Music]
