Marc Andreessen and Ben Horowitz on the State of AI
By a16z
Summary
## Key takeaways
- **AI Creativity: Clearing Humanity's Bar?**: AI's current creative and intelligence capabilities already exceed those of 99.99% of humanity, raising questions about what truly constitutes human genius versus advanced remixing. [01:13], [02:20]
- **Human Leadership: Beyond Raw Intelligence**: Success in leadership and entrepreneurship relies on more than IQ; it requires emotional understanding, theory of mind, and the ability to see decisions through others' eyes, which AI currently struggles to replicate. [12:23], [13:45]
- **AI and Theory of Mind: A Complex Relationship**: While advanced LLMs show proficiency in theory of mind, creating personas and simulating focus groups, the military's experience suggests leaders too far ahead of their followers can lose this crucial ability. [15:33], [16:27]
- **AI Is Not a Bubble: Demand Outstrips Supply**: Despite massive infrastructure investment, AI is unlikely to be a bubble because demand is exceptionally high, unlike past tech bubbles where market adoption lagged behind investment. [23:09], [24:34]
- **Platform Shifts: Incumbents Can't Rest**: Incumbent tech giants often miss new platform shifts due to execution challenges or a focus on existing models; new market leaders typically emerge from new entrants. [26:43], [27:57]
- **The US-China AI Race: A Game of Inches**: The US currently leads in AI conceptual innovation, but China excels at implementation and scaling. The race is close, demanding rapid progress and avoiding self-imposed constraints to prevent falling behind. [35:24], [36:37]
Topics Covered
- Are LLMs Intelligent or Just Remixing?
- The Rarity of True Innovation: Bridging Domains
- Intelligence Isn't The Only Factor for Success
- AI's User Experience is Still Unformed, Unlike Today's Chatbots and Search Engines
- China's Industrial Ecosystem Could Lead in Robotics, Even If US Leads in Software
Full Transcript
I think we don't yet know the shape and form of the ultimate products. One obvious historical analogy: the personal computer, from its invention in 1975 through to basically 1992, was a text prompt system. Seventeen years in, the whole industry took a left turn into GUIs and never looked back. And then, by the way, five years after that, the industry took a left turn into web browsers and never looked back. And look, I'm sure there will be chatbots 20 years from now, but I'm pretty confident that both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different, that we don't even know yet.
Please join me in welcoming Marc Andreessen and Ben Horowitz with general partner Erik Torenberg.
[Music plays]
>> Thank you for the Rakim. Who did that?
>> Ben picked the music.
Marc, there's been a lot of talk lately about the limitations of LLMs: that they can't do true invention of, say, new science, that they can't do true creative genius, that it's just combining or packaging. You have thoughts here. What say you?
>> Yeah. So for me, you get all these questions, and they usually come in as either: are language models intelligent, in the sense that they can actually process information and have conceptual breakthroughs the way that people can? And then: are language models or video models creative, can they create new art, can they have genuine creative breakthroughs? And of course my answer to both of those is, well, can people do those things? I think there are two questions there. One is: even if some people are quote-unquote intelligent, as in having original conceptual breakthroughs and not just, let's say, regurgitating the training set or following scripts, what percentage of people can actually do that? I'd say I've only met a few; some of them are here in the room, but not that many, and most people never do. And then creativity: how many people are actually genuinely creative? You point to a Beethoven or a Van Gogh and say, okay, that's creativity, and yes, that's creativity. And then how many Beethovens and Van Goghs are there? Obviously not very many. So one point is just that if these things clear the bar of 99.99% of humanity, that's pretty interesting in and of itself.

But then you dig into it further and ask: how many real conceptual breakthroughs have there ever been in human history, as compared to remixing ideas? If you look at the history of technology, it's almost always the case that the big breakthroughs are the result of usually at least 40 years of work ahead of time, four decades. In fact, language models themselves are the culmination of eight decades of previous work, so there's remixing there. And in the arts it's the exact same thing, in novels and music and everything: there are clearly creative leaps, but there are tremendous amounts of influence from the people who came before. Even somebody with the creativity of a Beethoven: there's a lot of Beethoven in Mozart and Haydn and the composers who came before. So there are tremendous amounts of remixing and combination. And so it's a little bit of an angels-dancing-on-the-head-of-a-pin question: if you can get within, I don't know, 0.01% of world-beating, generational creativity and intelligence, you're probably all the way there. So emotionally I want to hold out hope that there is still something special about human creativity; I certainly believe that, and I very much want to believe it. But I don't know. When I use these things, I'm like, wow, they seem to be awfully smart and awfully creative. So I'm pretty convinced that they're going to clear the bar.
>> Yeah. I think that seems to be a common theme in your analysis: when people talk about the limitations of LLMs, you know, can they do transfer learning, or learning in general, you seem to ask, can people do this?
>> Yes. Can people do these things? It's like lateral thinking, right? So it's reasoning in or out of distribution. I know a lot of people who are very good at reasoning inside distribution. How many people do I actually know who are good at reasoning outside of distribution and doing transfer learning? The answer is, I know a handful. I know a few people where, whenever you ask them a question, you get an extremely original answer, and usually that answer involves bringing in some idea from an adjacent space and basically being able to bridge domains. You'll ask them a question about, I don't know, finance, and they'll bring you an answer from psychology, or you ask them a question about psychology and they'll bring you an answer from biology, or whatever it is. Sitting here today, I probably know three people who can do that reliably, and I've got 10,000 in my address book. So, three out of 10,000
>> Yeah.
is not that high a percentage. By the way, I find this very encouraging. Yeah, immediately the mood in the room has gone completely to hell. I find this very encouraging because look at what humanity has been able to build, despite all of our limitations. Look at all the creativity we've been able to exhibit, all the amazing art and movies and novels and technical inventions and scientific breakthroughs. We've been able to do everything we've done with the limitations that we have. So do you need to get to the point where you are 100% positive that it's actually doing original thinking? I don't think so. I think it'd be great if you did, and I think ultimately we'll probably conclude that that's what's happening. But it's not necessary for just tremendous amounts of improvement.
>> Ben, we were just celebrating some hip-hop legends at your Paid in Full event last week, so you think a lot about creative genius. How do you think about this question?
>> Yeah, I mean, I agree with Marc that, whatever it is, it's very useful, even if it isn't all the way at that level. I think there's something about the actual real-time human experience that humans are very into, at least in art, where, with the current state of the technology, the pre-training doesn't have quite the right data to get to what you really want. But it's pretty good.
>> It is pretty good.
>> So how many true conceptual innovators are there? One of Ben's nonprofit activities is something called the Paid in Full Foundation, which honors and actually provides essentially a pension for the great innovators in rap and hip-hop. We were just at the event, and he knows many of the leading lights of that field from the last 50 years, who perform there, and it's really fun to meet them and talk to them. But how many people in that entire field, over the course of the last 50 years, would you classify as a true conceptual innovator?
>> Yeah, well, it's interesting. It depends how broadly you define it, but there were several of them there on Saturday. Rakim, you'd certainly put in that category. Dr. Dre, you'd certainly put in that category. George Clinton, you'd certainly put in that category. In a narrower sense, Kool G Rap certainly had a new idea. But if you mean a fundamental kind of musical breakthrough, you'd probably just say Rakim and George Clinton.
>> Are they excited?
>> So, two out of
>> Well, I mean, of the guys who were there.
>> Oh, yeah. Yeah.
>> Yeah. But yeah, it's a tiny percentage. Tiny, tiny, tiny.
>> We had Jared Leto at the fireside last night. He was talking about how many people in Hollywood are really scared of or against what's happening here. What do you see when you talk to the Dr. Dres, the Nases, the Kanyes? Are they excited? Are they using it?
>> Yeah. Of everybody I speak to, there are definitely people who are scared in music, but there are a lot of people who are very, very interested in it, and particularly the hip-hop guys are interested, because it's almost like a replay of what they did: they took other music and built new music out of it. And I think AI is a fantastic creative tool for them; it way opens up the palette. And then a lot of what hip-hop is is telling a very specific story of a specific time and place, where having intimate knowledge and being trained just on that thing is actually an advantage, as opposed to being a generally smart music model.
>> People also use the same logic of, hey, whatever is more intelligent will rule whatever is less intelligent. And Marc, you recently
>> Not said by anybody who owns a cat.
>> Yeah, exactly. Marc, you recently tweeted, "A supreme shape rotator can only rotate shapes, but a supreme wordcel can rotate shape rotators."
>> Someone's clapping here.
And also, high-IQ experts work for mid-IQ generalists. What means?
>> Yeah. What means? So it's: the PhDs all work for the MBAs, right? Well, just take it up a level. When you look at the world today, do you think we're being ruled by the smart ones? Is that your big conclusion from current events, from current affairs? Like, okay, we put the geniuses in charge?
>> You mean Kamala and Trump aren't the best?
>> Well, let's not even be specific to the US; let's just look all over the world. So I think two things are true. One is that we probably all kind of underrate the importance of intelligence. There's a whole backstory here: intelligence actually turns out to be an incredibly inflammatory topic, for lots of reasons, over the last hundred years, which we could talk about in great detail. Even the very idea that some people are smarter than other people really freaks people out; people don't like to talk about it, and we really struggle with it as a society. And it is true that, in humans, intelligence is correlated with almost every kind of positive life outcome. Generally, what the social sciences will tell you is that fluid intelligence, or the g factor, or IQ, is roughly 0.4 correlated with basically everything: a 0.4 correlation with educational outcomes, professional outcomes, income, and, by the way, also life satisfaction and non-violence, being able to solve problems without physical violence, and so forth. So on the one hand, we probably all underrate intelligence. On the other hand, the people who are in the fields that involve intelligence probably overrate it. You might even coin a term like "intelligence supremacist": intelligence is very important, and so therefore maybe it's the most important thing or the only thing. But then you look at reality and you're like, okay, that's clearly not the case.
>> Yeah. It's still only 0.4, right?
>> Well, to start with, it's only 0.4. And in the social sciences, 0.4 is a giant correlation factor; most things you can correlate, whether genes or observed behavior, with anything in the social sciences have much smaller correlations than that. So 0.4 is big, but it's still only 0.4. Even if you're a full-on genetic determinist and you say genetic IQ just drives all these outcomes, it still doesn't explain most of the variation, and that leaves a lot. But that's just at the individual level. Then you look at the collective level. A famous observation is that you take any group of people, you put them in a mob, and the mob is dumber than the average; you put a bunch of smart people in a mob and they definitely turn dumber, and you see that all the time. So you put people in groups and they behave very differently. And then you get these questions around who's in charge, whether at a company or of a country, and whatever the filtration process is, it's certainly not only on IQ, and it may not even be primarily on IQ. So this assumption you hear in some AI circles, that inevitably the smart thing is going to govern the dumb thing, I just think is very easily and obviously falsified. Intelligence isn't sufficient. And you can just observe it: we're all in this room lucky enough to know a lot of smart people, and some smart people really figure out how to get their stuff together and become very successful, and a lot of smart people never do. So there obviously are, and in fact must be, many other factors that have to do with success, and with who's in charge, than just raw intelligence.
>> It begs the follow-up question of what some examples of that might be, skills outside of intelligence, and, more specifically, why couldn't AI systems learn them?
>> Yeah. So, Ben, other than intelligence, what, in your experience, determines success in, for example, leadership or entrepreneurship, or in solving complex problems or organizing people?
>> Yeah. There are many things. A lot of it is being able to have a confrontation in the correct way, and there's some intelligence in that, but a lot of it is really understanding who you're talking to: being able to interpret everything about how they're thinking about it, and generally seeing decisions through the eyes of the people working in the company, not through your own eyes. That's a skill you develop by talking to people all the time, understanding what they're saying, and so forth. And it's certainly not an IQ thing. Now, I could imagine an AI training on any individual and figuring it all out and knowing what to say and so forth. But then you also need that integrated with whatever the business ought to be doing, so you're not trying to do what's popular; you're trying to get people to do what's correct even if they don't like it. And that's a lot of management. It's not a problem anybody's working on currently, but maybe they will.
>> It's some combination of courage, some combination of motivation, some combination of emotional understanding, of theory of mind.
>> Yeah. What do people want, married to what needs to be done? And then, how talented are they? Which ones can you afford, like, if they jump out the window, it's fine, and which ones are not fine? This kind of thing. There are a lot of weird subtleties to it, and it's very situational. I think the hardest thing about it, and why management books are so bad, is that it's situational. Your company, your product, your people, your org chart are very, very different, so "here are the five steps to building a strategy" is the most useless thing I ever read, because it has nothing to do with you.
>> So, one of the interesting things here is that the concept of theory of mind is really important. Theory of mind is: can you, in your head, model what's happening in the other person's head? And you would think that, maybe obviously, people who are smarter should be better at that. It turns out that may not be true, and the reason to believe it's not true is as follows. The US military was the early adopter, and has continued to be the leading adopter in US society, of actual IQ testing. They launder it through something called the ASVAB, which they call a vocational aptitude battery test, but it's essentially an IQ test. So they still use basically explicit IQ tests, and they slot people into different specialties and roles, in part according to IQ, including into leadership roles. So they know what everybody's IQ is, and they organize around that. And one of the things they found over the years is that if the leader is more than one standard deviation of IQ away from the followers, it's a real problem. And that's true in both directions. If the leader is not smart enough, well, for somebody who is less smart to model the mental behavior of somebody who's more smart is of course inherently very challenging and maybe impossible. But it turns out the reverse is also true: if the leader is two standard deviations above the norm of the organization he's running, he also loses theory of mind. It's actually very hard for very smart people to model the internal thought processes of even moderately smart people. So there's actually a real need to have a level of connection there. And therefore, by inference, if you had a person or a machine that had, you know, a thousand IQ or something, its understanding of reality might be so alien to the people or the things it was managing that it wouldn't even be able to connect in any sort of realistic way. So again, this is a very good argument that the world is going to be far from organized by IQ for centuries to come.
>> Yeah, and Zuckerberg had a great line, which is that intelligence is not life; life has a lot of dimensionality to it that is independent of intelligence. And I think if you spend all your time working on intelligence, you lose track of that.
>> We sometimes say about some specific people that they're too smart to model others properly, or that they assume too much rationality in other people, or they just overthink things or over-rationalize them. Just to your point that it's on everything.
>> Yeah. People seldom do what's in their best interest, I should say.
>> You know, I also suspect this gets more into the biology side of things. There's more and more scientific evidence that human cognition, or whatever you want to call it, self-awareness, information processing, decision-making, experience, is not purely a brain thing; the famous mind-body dualism is just not correct. And again, this is an argument against IQ supremacism or intelligence supremacism: human beings don't experience existence just through rational thought, and specifically not just through the rational thought of the brain; rather, it's a whole-body experience. There are aspects of our nervous system, and aspects of everything from our gut biome to smells, to olfactory senses, to hormones, all kinds of biochemical aspects of life. If you track the research, I suspect what we're going to find is that human cognition is a full-body experience, much more than people thought. And this is one of the big fundamental challenges in the AI field right now: the form of AI that we have working is the fully mind-body-dual version of it, which is just a disembodied brain. The robotics revolution for sure is coming, and when that happens, when we put AI in physical objects that move around the world, you're going to get closer to having that kind of integrated intellectual and physical experience; you're going to have sensors in the robots, and there's going to be a lot more data. But to me, at least, reading the research, all those ideas feel very nascent, and we have a lot of work to do to try to figure that out.
>> Do you have a sense for how they are at theory of mind today, or where the limitations are? You like to talk to them a lot. Are there any particular things that are particularly surprising to you as you do?
>> Yeah, I would say generally they're really good. One of the more fascinating ways to work with language models is actually to have them create personas. I like Socratic dialogues; I like when things are argued out, like in a Socratic dialogue. So you tell any advanced LLM today to create a Socratic dialogue, and it'll either make up the personas or you can tell it who they are, and it does a good job. It has this very annoying property, though, which is that it wants everybody to be happy. It wants all of its personas to agree. So by default it will have a briefly interesting discussion, and then, like you're watching a PBS special or something, it will figure out how to bring everybody into agreement, and everybody's happy at the end of the discussion. And of course I hate that; it drives me nuts. I don't want that. So instead I tell it: make the conversation more tense, fraught with anger, with people getting increasingly upset throughout the conversation. And then it starts to get really interesting. And then I tell it: introduce a lot more cursing, really have them go at it, all the gloves come off, they're going for full reputational destruction of each other.
>> You do a lot of these skits.
>> Yeah, these skits. And then I get carried away, and it turns out they're all secret ninjas, and they all start fighting, and you've got Einstein hitting Niels Bohr with nunchucks. And by the way, it's happy to do that too. So you do have to control yourself, but it is very good at theory of mind.

And then I'll give you another example. There's a startup in the UK in the world of politics, and what they found is that language models now are good enough, specifically for politics, which is a subcategory where this idea matters. In politics you do focus groups of voters all the time, and by the way, many businesses also do that: you get a bunch of people together from different backgrounds in a room, you guide them through a discussion, and you try to get their points of view on things. And focus groups are often surprising. If you talk to politicians who do focus groups, they're often surprised that the things they thought voters cared about are actually not the things voters care about. So you can learn a lot by doing this. But focus groups are very expensive to run, and there's a long lag time, because they have to be physically organized and you have to recruit and vet people and so forth. And it turns out the state-of-the-art models now are good enough at this that they can accurately reproduce a focus group of real people inside the model. In other words, you can have a focus group happening in the model, where you create personas in the model and it accurately represents a college student from Kentucky, contrasted with a housewife from Tennessee, contrasted with whatever you specify. So they're good enough to clear that bar, and we'll see how far they get.
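As a rough illustration of the persona-driven focus group simulation described above, here is a minimal sketch assuming the OpenAI Python client; the model name, the personas, and the prompt wording are illustrative assumptions, not details from the conversation or from the UK startup mentioned. As Marc notes about Socratic dialogues, the prompt has to explicitly ask the model to preserve disagreement, or it tends to steer everyone toward consensus.

```python
# Minimal sketch of a persona-based "synthetic focus group" prompt.
# Assumes the OpenAI Python client; model name and personas are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

personas = [
    "a college student from Kentucky studying engineering",
    "a stay-at-home parent from Tennessee",
    "a retired factory worker from Ohio",
]

question = "How do you feel about AI data centers being built near your town?"

system_prompt = (
    "You are moderating a focus group. Role-play each participant below as a "
    "distinct persona with their own vocabulary and concerns. Do NOT steer the "
    "group toward agreement; keep genuine disagreements visible.\n"
    + "\n".join(f"- {p}" for p in personas)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Discussion question: {question}"},
    ],
)

print(response.choices[0].message.content)  # the simulated multi-persona transcript
```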
>> I want to segue to the bubble conversation. Amin and G2, Jensen and Matt spoke about the enormous scale of physical infrastructure being built out; AI capex is 1% of GDP. How should we understand and think about this bubble question?
>> Well, I think the fact that it's a question means we're not in a bubble. That's the first thing to understand. A bubble is a psychological phenomenon as much as anything, and in order to get to a bubble, everybody has to believe it's not a bubble; that's sort of the core mechanic of it. We call that capitulation: everybody just gives up. Okay, I'm not going to short these stocks anymore, I'm tired of losing all my money, I'm going to go long. And we saw that, and there's a little bit of a question of what the tech bubble really was, but in the dot-com era, as the prices went through the roof, Warren Buffett started investing in tech. And he had sworn he would never invest in tech because he didn't understand it. So he capitulated; nobody was saying it was a bubble when it became, quote unquote, a bubble. Now, if you look at that phenomenon, the internet clearly was not a bubble; it was a real thing. In the short term there was a kind of price dislocation, because there were just not enough people on the network to make those products go at the time, and the prices outran the market. In AI, it's much harder to see that, because there's so much demand in the short term. We don't have a demand problem right now, and the idea that we're going to have a demand problem five years from now seems quite absurd to me. Could there be weird bottlenecks that appear, like at some point we just don't have enough cooling or something like that? Yeah, maybe. But right now, if you look at demand and supply and what's going on, and multiples against growth, it doesn't look like a bubble at all to me. But I don't know. Do you think it's a bubble, Marc?
>> Yeah. Look, I would just say this: nobody knows, in the sense that the experts, anybody at a hedge fund or a bank or whatever, definitely don't know. Generally the CEOs don't know either.
>> By the way, a lot of VCs don't know; they just get upset. VCs get emotionally upset when you guys have higher valuations; it makes them angry. I get it all the time, and I'm like, what are you mad about? It's working, man, be happy. Come on. So there's a lot of emotion around people wanting it to be a bubble.
>> Yeah. Nothing's worse than passing on a deal and then having the company become a great success.
>> That valuation is outrageous.
>> You can be furious about that for 30 years in our business. It's amazing. And you can come up with all kinds of reasons to cope and explain why it wasn't your mistake: it's the world that's wrong, not me. So there's a lot of that. I would just say: always bring the conversation back to ground-truth fundamentals. The two big ground-truth fundamentals are, number one, does the technology actually work, can it deliver on its promise, and number two, are customers paying for it? If those two things are true, and as long as they stay grounded, then generally I think things are going to be on track.
>> Yeah.
>> When Gavin was up here with DG, he said ChatGPT was a Pearl Harbor moment for Google, the moment when the giant wakes up. When we look at history and platform shifts, what determines whether the incumbent actually wins the next wave versus new entrants? How should we think about that?
>> Well, reacting to it is important. If it was a Pearl Harbor moment, I think it was the sound of Google getting their head out of their ass. So they're not going to get completely run over, but nonetheless, I don't think OpenAI is going away; they definitely let that happen. Some of it comes down to speed, and then, look, it's execution over a long period of time, and some of these very large companies, to varying degrees, have lost their ability to execute. And if you're talking about a brand-new platform, and about building for a long time: Microsoft got caught with their pants down on Google. Microsoft is still very strong, but they missed that whole opportunity. They also missed the next one: Apple was nothing, and Microsoft fully believed they were going to own mobile computing, and they completely missed that one. But they were still so big from their Windows monopoly that they could build into other things. So I think, generally, the new companies have won the new markets. That doesn't mean the big companies go away; the biggest monopolies from the prior generation just last a long time, is the way I would look at it.
>> Yeah. I also think we don't quite know; it's all happened so fast that I think we don't yet know the shape and form of the ultimate products.
>> Yeah.
>> Right. And so, because it's tempting (and this is kind of what always happens; I'm not saying that's what these guys did on stage), you sometimes hear the reductive version of this, which is basically: there's either going to be a chatbot or a search engine, the competition is between a chatbot and a search engine, and the problem Google has is the classic problem of disruption: are you going to disrupt the ten-blue-links model, swap in AI answers, and potentially disrupt the advertising model? And the problem OpenAI has is that they have the full chat product, but they don't have the advertising yet and they don't have Google-scale distribution. And so you say, okay, that's fairly clear; that'd be straight out of an innovator's-dilemma business textbook, a very clear one-versus-one dynamic. But the mistake you could make in thinking that way is that it assumes the forms of the product that are going to be the main things people use in 5, 10, 15, 20 years are going to be either a search engine or a chatbot. And there are obvious historical analogies. One is the personal computer: from its invention in 1975 through to basically 1992, it was a text prompt system. At the time, by the way, an interactive text prompt was a big advance over the previous generation of punch-card systems and time-sharing systems. And then, in 1992, so what, 17 years in, the whole industry took a left turn into GUIs and never looked back. And then, by the way, five years after that, the industry took a left turn into web browsers and never looked back. And so the very shape and form and nature of the user experience, and how it fits into our lives, is I think still unformed. Look, I'm sure there will be chatbots 20 years from now, but I'm pretty confident that both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different, that we don't even know yet. And by the way, that's one of the things that keeps the tech industry fun, especially on the software side: it's not obvious what the shape and form of the products are, and I think there's just tremendous headroom for invention.
>> As you're coaching entrepreneurs, and the entrepreneurs in this room, what else feels different about this era, whether it's the talent wars that are going on or other aspects that feel unique to it? What other advice do you want to leave our entrepreneurs with that's unique to this era?
>> Well, I actually think you said the right thing, which is that this is a unique era. And so trying to learn the organizational design lessons of the past, or trying to learn too much from the last generation, can be deceptive, because things really are different. The way your companies are getting built is quite different in many aspects. And, you know, our observation on PhD AI researchers is that they're just very different from a traditional full-stack engineer or something like that. So I think you do have to think through a lot of things from first principles, because it is different, and, observing from the outside, it really is different.
>> Yeah.
>> Yeah. And I would just offer that I do think things are going to change. I already talked about how I think the shape and form of products is going to change, so I think there's still a lot of creativity there. I also think that in a world of supply and demand, the thing that creates gluts is shortages: when something becomes too scarce, there's a massive economic incentive to figure out how to unlock new supply. The current generation of AI companies is really struggling with a shortage of really talented AI researchers and engineers, and they're really challenged by a shortage of infrastructure capacity: chips and data centers and power. I don't want to call the timing on this, but there will come a time when both of those things become gluts. I don't know that we can plan for that, although I would just say the following. Number one, on the researcher and engineer side, it is striking the degree to which there are excellent, outstanding models coming out of China now, from multiple companies, specifically DeepSeek and Qwen and Kimi. It is striking how the teams making those are, for the most part, not the name-brand people with their names on all the papers. China is successfully decoding how to take young people and train them up in the field.
>> Well, and xAI to a large extent too.
>> Yeah. And so I think it makes sense that for a while this is going to be a super esoteric skill set and people are going to pay through the nose for it. But there's no question the information is being transferred into the environment. People are learning how to do this; college kids are figuring it out. I don't know that there's ever going to be a talent glut per se, but for sure there are going to be a lot more people in the future who know how to build these things. And then, by the way, there's also AI building AI, so the tools themselves are going to be better at contributing to that. And I think this is good, because the current level of shortage of engineers and researchers is too constraining. And then on the chip side, I'm not a chip guy and I don't want to call it specifically, but in the chip industry, every shortage has always resulted in a glut, because the profit pool of a shortage, the margins, get too big, and the incentive for other people to come in and figure out how to commoditize the function gets too big. Nvidia has probably the best position anybody's ever had in chips. But notwithstanding that, I find it hard to believe that there's going to be this level of pressure on infrastructure in five years.
>> Yeah. And even if the bottleneck within the infrastructure moves, if it becomes power or cooling or anything else, then you'll have a chip glut for sure. Yeah.
>> So I would just say this: it's likely that the challenges we all have five years from now are going to be different challenges.
>> Yeah. And definitely, in this industry of all industries, don't look at us as static; the positions could change very, very fast.
Let's close on more of a macro note. Marc, you mentioned China. Last month we were in DC, and one of the big questions the senators have is: how should we make sense of the state of the AI race vis-à-vis China? Do you want to share just the high-level summary of what you shared with them?
>> Yeah. So, my sense of things, and I think if you just observe what's happening currently, specifically DeepSeek and Qwen and these models coming out of China, is basically this: the conceptual innovations, the big conceptual breakthroughs, have been coming out of the US specifically, and the West generally, but more and more specifically the US. China is extremely good at picking up ideas and implementing them, scaling them, and commoditizing them; they do that obviously throughout the manufacturing world, and they're now doing it very successfully, I think, in AI. So I would say they're running the catch-up game really well. And then there's always this question of how much of that is being done, let's just say, authentically, through hard work and smart people, and how much is being done with maybe a little bit of help, maybe a little USB stick in the middle of the night. Okay.
>> So there's always a little bit of a question, but either way, they're doing a great job. Obviously they aspire to more than that, and there are many very smart and creative people in China. So it will be interesting to see the level to which the conceptual breakthroughs start to come from there, and whether they pull ahead. But what we tell people in Washington is: look, this is now a full-on race. It's a foot race. It's a game of inches. We're not going to have a five-year lead; we're going to have maybe a six-month lead. We have to run fast. We have to win. We have to do this. And we can't put constraints on our companies that the Chinese government isn't putting on their own companies, or we'll just lose. And do you really want to wake up in the morning and live in a world controlled and run by Chinese AI? Most of us would say no, we don't want to live in that world. So there's that, and I would say I feel moderately good about it, just because I think we're really good at software.

The minute this goes into embodied AI in the form of robotics, I think things get a lot scarier, and this is the thing I'm now spending time in DC trying to really educate people on. Because the US and the West have chosen to de-industrialize to the extent that we have over the last 40 years, China specifically now has this giant industrial ecosystem for building mechanical, electrical, semiconductor, and now software devices of all kinds, including phones and drones and cars and robots. There's going to be a phase two to the AI revolution, it's going to be robotics, and it's going to happen pretty quickly here, I think. And when it does, even if the US stays ahead in software, the robots have got to get built, and that's not an easy thing. It's not just one company that does that; it has to be an entire ecosystem. The car industry was not three car companies; it was thousands and thousands of component suppliers building all the parts. It's been the same thing for airplanes, and for computers, and everything else, and it's going to be the same thing for robotics. And, by default, sitting here today, that's all going to happen in China. So even if they never quite catch us in software, they might just lap us in hardware, and that'll be that. The good news is I think there's a growing awareness across the political spectrum in the US that de-industrialization went too far, and a growing desire to figure out how to reverse that. I'd say I'm guardedly optimistic that we'll make progress on that, but I think there's a lot of work to be done.
>> On that call to arms, let's wrap. Thank you, Marc and Ben. To wrap up, I'd like to welcome back.
>> Thank you, everybody.
[Applause]
[Music]