Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games | Lex Fridman Podcast #475

By Lex Fridman

Summary

## Key takeaways

- **AI can model reality's underlying structure**: AI systems, like Veo, can model complex physical phenomena such as fluid dynamics and material interactions surprisingly well, suggesting they extract underlying structures that can be reverse-engineered and learned, potentially applying to most of reality. (00:05, 01:03)
- **Nature's evolutionary processes create learnable patterns**: Natural systems, shaped by evolutionary or survival processes, possess inherent structure that makes them learnable by classical algorithms. This principle extends from biological evolution to geological formations and cosmological systems. (03:40, 04:25)
- **P equals NP might be a physics question**: If physics is fundamentally informational, then the P versus NP problem becomes a physics question. Understanding the universe as an informational system could be key to solving this computational complexity challenge. (07:05, 07:10)
- **AI can learn intuitive physics without embodiment**: Models like Veo 3 demonstrate an intuitive grasp of physics, materials, and liquids through passive observation, challenging the notion that embodied interaction is necessary for understanding physical reality. (13:38, 16:35)
- **AI can evolve algorithms, not just be programmed**: Systems like AlphaEvolve, which evolve algorithms, represent a promising direction for future AI. Combining Large Language Models with evolutionary computing can explore novel regions of the search space for complex problem-solving. (31:04, 31:35)
- **AI research requires both engineering and scientific breakthroughs**: Progress towards AGI is not solely about scaling compute; it requires scientific breakthroughs. DeepMind's strength lies in its research culture and talent, enabling innovation when the terrain becomes more challenging. (01:04:05, 01:05:05)

Topics Covered

  • Anything that can be evolved can be modeled.
  • AI can learn physics from passive observation alone.
  • AI will generate truly personalized open-world games.
  • AGI is the ultimate tool for fundamental discovery.
  • The real test for AGI is creative genius.

Full Transcript

- It's hard for us humans to make

any kind of clean predictions about

highly nonlinear, dynamical systems.

But again, to your point,

we might be very surprised what classical learning systems

might be able to do about even fluid.

- Yes, exactly.

I mean, fluid dynamics, Navier-Stokes equations,

these are traditionally thought of as

very, very difficult intractable problems

to do on classical systems.

They take enormous amounts of compute,

you know, weather prediction systems,

you know, these kind of things all involve

fluid dynamics calculations.

But again, if you look at something like Veo,

our video generation model,

it can model liquids quite well, surprisingly well,

and materials, specular lighting.

I love the ones where, you know,

there's people who generated videos

where there's like clear liquids

going through hydraulic presses,

and then it's being squeezed out.

I used to write physics engines and graphics engines

in my early days in gaming,

and I know it's just so painstakingly hard

to build programs that can do that.

And yet somehow these systems are, you know,

reverse engineering from just watching YouTube videos.

So presumably what's happening is

it's extracting some underlying structure

around how these materials behave.

So perhaps there is

some kind of lower dimensional manifold that can be learned

if we actually fully understood

what's going on under the hood.

That's maybe, you know, maybe true of most of reality.

- The following is a conversation with Demis Hassabis,

his second time on the podcast.

He is the leader of Google DeepMind

and is now a Nobel Prize winner.

Demis is one of the most brilliant and fascinating minds

in the world today,

working on understanding and building intelligence,

and exploring the big mysteries of our universe.

This was truly an honor and a pleasure for me.

This is the Lex Fridman podcast.

To support it,

please check out our sponsors in the description

and consider subscribing to this channel.

And now, dear friends, here's Demis Hassabis.

In your Nobel Prize lecture,

you propose what I think is

a super interesting conjecture that quote,

"Any pattern that can be generated or found in nature

can be efficiently discovered and modeled

by a classical learning algorithm."

What kind of patterns of systems might be included in that?

Biology, chemistry, physics, maybe cosmology?

- Yup. - Neuroscience.

What are we talking about?

- Sure.

Well, look, I felt that

it's sort of a tradition I think of Nobel Prize lectures

that you're supposed to be a little bit provocative.

And I wanted to follow that tradition.

What I was talking about there is

if you take a step back

and you look at all the work that we've done,

especially with the Alpha X projects,

so I'm thinking AlphaGo,

of course, AlphaFold.

What they really are is we're building models of

very combinatorially, high-dimensional spaces that,

you know, if you try to brute force a solution,

find the best move in Go,

or find the exact shape of a protein,

and if you enumerated all the possibilities,

there wouldn't be enough time in the,

you know, the time of the universe.

So you have to do something much smarter.

And what we did in both cases was

build models of those environments

and that guided the search in a smart way

and that makes it tractable.

So if you think about protein folding,

which is obviously a natural system,

you know, why should that be possible?

How does physics do that?

You know, proteins fold in milliseconds in our bodies.

So somehow physics solves this problem

that we've now also solved computationally.

And I think the reason that's possible is that,

in nature, natural systems have structure

because they were subject to

evolutionary processes that shaped them.

And if that's true,

then you can maybe learn what that structure is.

- So this perspective I think is a really interesting one,

you've hinted at it,

which is almost like crudely stated,

anything that can be evolved can be efficiently modeled.

You think there's some truth to that?

- Yeah, I sometimes call it

survival of the stablest or something like that,

because, you know,

of course, there's evolution for life, living things,

but there's also, you know,

if you think about geological times,

so the shape of mountains

that's being shaped by weathering processes, right,

over thousands of years.

But then you can even take a cosmological view,

the orbits of planets, the shapes of asteroids,

these have all survived kinds of processes

that have acted on them many, many times.

So if that's true,

then there should be some sort of pattern

that you can kind of reverse learn

and a kind of manifold really that helps you search

to the right solution, to the right shape,

and actually allow you to predict things about it

in an efficient way.

Because it's not a random pattern, right?

So it may not be possible

for manmade things or abstract things like

factorizing large numbers,

because unless there's patterns in the number space,

which there might be,

but if there's not and it's uniform,

then there's no pattern to learn,

there's no model to learn that will help you search,

you have to do brute force.

So in that case, you know,

you maybe need a quantum computer, something like this.

But most things in nature that we're interested in

are not like that.

They have structure

that evolved for a reason and survived over time.

And if that's true,

I think that's potentially learnable by a neural network.

- It's like nature's doing a search process.

And it's so fascinating that in that search process

it's creating systems that could be efficiently modeled.

- That's right, yeah. - So interesting.

- So they can be efficiently rediscovered or recovered

because nature's not random, right?

Everything that we see around us,

including like the elements that are more stable,

all of those things,

they're subject to some kind of selection process pressure.

- Do you think,

because you're also a fan of

theoretical computer science and complexity,

do you think we can come up with a kind of complexity class,

like a complexity zoo type of class

where maybe it's the set of learnable systems,

the set of learnable natural systems,

LNS? - Yeah.

- This is Demis Hassabis' new class of systems

that could be actually learnable

by classical systems in this kind of way,

natural systems that can be modeled efficiently?

- Yeah, I mean,

I've always been fascinated by the P equals NP question

and what is modelable by classical systems,

by non-quantum systems, you know, Turing machines in effect.

And that's exactly what I'm working on actually

in kind of my few moments of spare time

with a few colleagues, which is, should there be,

you know, maybe a new class of problem

that is solvable by this type of neural network process

and kind of mapped onto these natural systems.

So, you know,

the things that exist in physics and have structure.

So I think that could be

a very interesting new way of thinking about it.

And it sort of fits

with the way I think about physics in general,

which is that, you know, I think information is primary.

Information is the most sort of

fundamental unit of the universe,

more fundamental than energy and matter.

I think they can all be converted into each other,

but I think of the universe

as a kind of informational system.

- So when you think of the universe

as an informational system,

then the P equals NP question is a physics question.

- [Demis] That's right.

- And is a question that can help us

actually solve the entirety of this whole thing going on.

- Yeah, I think it's one of

the most fundamental questions actually

if you think of physics as informational.

And the answer to that I think is gonna be,

you know, very enlightening.

- More specific to the P and NP question.

This again, some of the stuff we're saying is

kind of crazy right now.

Just like Christian Anfinsen's Nobel Prize speech,

the controversial thing that he said sounded crazy,

and then you went and got a Nobel prize for this

with John Jumper,

solved the problem.

So let me just stick to the P equals NP.

Do you think there's something

in this thing we're talking about that could be shown

if you can do something like polynomial time

or constant time compute ahead of time

and construct this gigantic model,

then you can solve

some of these extremely difficult problems

in a theoretical computer science kind of way?

- Yeah, I think that there are

actually a huge class of problems

that could be couched in this way,

the way we did AlphaGo and the way we did AlphaFold,

where, you know,

you model what the dynamics of the system is,

the properties of that system,

the environment that you are trying to understand.

And then that makes the search for the solution

or the prediction of the next step efficient

basically polynomial time,

so tractable by a classical system,

which a neural network is.

It runs on normal computers, right,

classical computers, Turing machines in effect.

And I think

it's one of the most interesting questions there is

is how far can that paradigm go?

You know, I think we've proven, the AI community in general,

that classical systems, Turing machines, can go a lot further

than we previously thought.

You know, they can do things like

model the structures of proteins

and play Go better than world champion level.

And you know, a lot of people would've thought

maybe 10, 20 years ago that was decades away,

or maybe you would need

some sort of quantum machines, quantum systems

to be able to do things like protein folding.

And so I think we haven't really

even sort of scratched the surface yet

of what so-called classical systems could do.

And of course, AGI being built on a neural network system

on top of a classical computer would be

the ultimate expression of that.

And I think the limits, you know,

the bounds of that kind of system,

what it can do,

it's a very interesting question

and directly speaks to the P equals NP question.

- What do you think, again, hypothetical,

might be outside of this maybe emergent phenomena?

Like if you look at cellular automata,

some of them are extremely simple systems

and then some complexity emerges.

- Yes. - Maybe that would be outside

or would you guess even that might be amenable

to efficient modeling by a classical machine?

- Yeah, I think those systems would be

right on the boundary, right?

So I think most emergent systems, cellular automata,

things like that could be modelable by a classical system.

You just sort of do a forward simulation of it

and it'd probably be efficient enough.
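
To make the forward-simulation point concrete: each tick of a cellular automaton costs time proportional to the grid size, so a classical machine can simply run it forward. A minimal illustrative sketch using Rule 110, a classic elementary automaton known for emergent complexity (purely a toy, not any DeepMind system):

```python
# Minimal sketch: forward-simulating a 1D elementary cellular automaton
# (Rule 110). Each step is O(width), so T steps cost O(T * width) --
# cheap on a classical machine, which is the point being made above.

def step(cells, rule=110):
    """Apply one synchronous update to every cell."""
    n = len(cells)
    new = []
    for i in range(n):
        # 3-cell neighborhood with wrap-around boundaries.
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (mid << 1) | right   # value 0..7
        new.append((rule >> pattern) & 1)            # look up the rule bit
    return new

cells = [0] * 40
cells[20] = 1                                        # single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```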

Of course, there's the question of things

like chaotic systems

where the initial conditions really matter,

and then you get to some, you know, uncorrelated end state.

Now those could be difficult to model.

So I think these are kind of the open questions.

But I think when you step back

and look at what we've done with the systems

and the problems that we've solved,

and then you look at things like Veo 3

on video generation,

sort of rendering physics and lighting and things like that,

you know, really core fundamental things in physics,

it's pretty interesting.

I think it's telling us something quite fundamental about

how the universe is structured in my opinion.

So, you know, in a way that's what I wanna build AGI for is

to help us as scientists answer these questions

like P equals NP.

- Yeah, I think we might be continuously surprised about

what is modelable by classical computers.

I mean, AlphaFold 3 on the interaction side is surprising,

that you can make any kind of progress on that direction.

AlphaGenome is surprising

that you can map the genetic code to the function.

Kind of playing with the emergent kind of phenomena,

you'd think there's so many combinatorial options,

and then here you go,

you can find the kernel that is efficiently modeled.

- Yes, because there's some structure,

there's some landscape, you know,

in the energy landscape or whatever it is

that you can follow, some gradient you can follow.

And of course, what neural networks are very good at

is following gradients.

And so if there's one to follow

and you can specify the objective function correctly,

you know, you don't have to deal with all that complexity,

which I think is how we maybe have naively thought

about those problems for decades.

If you just enumerate all the possibilities,

it looks totally intractable.

And there's many, many problems like that.

And then you think,

well, it's like 10 to the 300 possible protein structures,

it's 10 to the 170 possible Go positions.

All of these are way more than atoms in the universe.

So how could one possibly find the right solution

or predict the next step?

But it turns out that it is possible.

And of course, reality in nature does do it, right?

Proteins do fold.

So that gives you confidence that there must be,

if we understood how physics was doing that in a sense,

and we could mimic that process, model that process,

it should be possible on our classical systems

is basically what the conjecture's about.
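
A toy illustration of the gradient-following idea above: enumerating a 100-dimensional space is hopeless, but if the space has a smooth objective, an energy landscape with a gradient to follow, a good configuration falls out in a hundred cheap steps. The quadratic "energy" here is made up purely for illustration:

```python
# Toy sketch: following a gradient instead of enumerating configurations.
# A 100-dimensional binary space would have 2**100 states to enumerate;
# a structured energy landscape is descended in a hundred cheap steps.

def energy(x):
    """Made-up quadratic 'energy', minimized at 0.3 in every dimension."""
    return sum((xi - 0.3) ** 2 for xi in x)

def gradient(x):
    """Analytic gradient of the toy energy above."""
    return [2 * (xi - 0.3) for xi in x]

x = [1.0] * 100            # arbitrary starting configuration
lr = 0.1                   # step size
for _ in range(100):
    x = [xi - lr * gi for xi, gi in zip(x, gradient(x))]

print(f"final energy: {energy(x):.2e}")   # ~0: minimum found without search
```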

- And of course there's nonlinear dynamical systems,

highly nonlinear dynamical systems,

everything involving fluid.

- Yes, right.

- You know, recently I had a conversation with Terence Tao

who mathematically contends

with a very difficult aspect of systems

that have some singularities in them

that break the mathematics.

And it's just hard for us humans

to make any kind of clean predictions about

highly nonlinear dynamical systems.

But again, to your point,

we might be very surprised what classical learning systems

might be able to do about even fluid.

- Yes, exactly.

I mean, fluid dynamics, Navier-Stokes equations,

these are traditionally thought of as

very, very difficult intractable kind of problems

to do on classical systems.

They take enormous amounts of compute,

you know, weather prediction systems,

you know, these kind of things all involve

fluid dynamics calculations.

And, but again,

if you look at something like Veo,

our video generation model,

it can model liquids quite well, surprisingly well,

and materials, specular lighting.

I love the ones where, you know,

there's people who generated videos

where there's like clear liquids

going through hydraulic presses

and then it's being squeezed out.

I used to write physics engines and graphics engines

in my early days in gaming,

and I know it's just so painstakingly hard

to build programs that can do that.

And yet somehow these systems are, you know,

reverse engineering from just watching YouTube videos.

So presumably what's happening is

it's extracting some underlying structure around

how these materials behave.

So perhaps there is some kind of

lower dimensional manifold that can be learned

if we actually fully understood

what's going on under the hood.

That's maybe, you know, maybe true of most of reality.

- Yeah, I've been continuously surprised precisely

by this aspect of Veo 3.

I think a lot of people highlight different aspects,

including the comedic and the meme,

- Yes. - all that kind of stuff.

And then the ultrarealistic ability to capture humans

in a really nice way

that's compelling and feels close to reality,

and then combine that with native audio.

All of those are marvelous things about Veo 3.

But it's exactly the thing you're mentioning,

which is the physics.

- [Demis] Yeah.

- It's not perfect, but it's damn pretty good.

And then the really interesting scientific question is

what is it understanding about our world

in order to be able to do that?

Because the cynical take with diffusion models,

there's no way it understands anything.

But it seems, I mean,

I don't think you can generate

that kind of video without understanding.

And then our own philosophical notion of

what it means to understand

then is like brought to the surface.

Like to what degree do you think Veo 3

understands our world?

- I think to the extent that it can predict the next frames,

you know, in a coherent way.

That is a form, you know, of understanding, right?

Not in the anthropomorphic version of,

you know, it's not some kind of

deep philosophical understanding of what's going on.

I don't think these systems have that.

But they certainly have modeled enough of the dynamics,

you know, put it that way,

that they can pretty accurately generate whatever it is,

eight seconds of consistent video that by eye

at least, you know, at a glance,

it's quite hard to distinguish what the issues are.

And imagine that in two or three more years time,

that's the thing I'm thinking about

and how incredible they will look,

given where we've come from, you know,

the early versions of that one or two years ago.

And so the rate of progress is incredible.

And I think I'm like you,

like a lot of people love all of the standup comedians,

that actually capture a lot of human dynamics very well

and body language.

But actually the thing

I'm most impressed with and fascinated by is

the physics behavior,

the lighting and materials and liquids.

And it's pretty amazing that it can do that.

And I think that shows that it has

some notion of at least intuitive physics, right?

How things are supposed to work intuitively.

Maybe the way that a human child

would understand physics, right?

As opposed to a, you know,

a PhD student really being able to unpack all the equations.

It's more of an intuitive physics understanding.

- Well, that intuitive physics understanding,

that's the base layer,

that's the thing people sometimes call a common sense.

Like it really understands something.

I think that really surprised a lot of people.

It blows my mind that

I just didn't think it would be possible to generate

that level of realism without understanding.

You know, there's this notion

that you can only understand the physical world

by having an embodied AI system,

a robot that interacts with that world.

That's the only way to construct

an understanding of that world.

- [Demis] Yeah.

- But Veo 3 is directly challenging that

it feels like. - Right, yes.

And it's very interesting.

You know, if you were to ask me five, 10 years ago,

I would've said,

even though I was immersed in all of this,

I would've said, well, yeah,

you probably need to understand intuitive physics.

You know, like if I push this off the table,

this glass it will maybe shatter, you know,

and the liquid will spill out, right?

So we know all of these things.

But I thought that, you know,

and there's a lot of theories in neuroscience,

it's called action in perception where,

you know, you need to act in the world

to really, truly perceive it in a deep way.

And there was a lot of theories about

you'd need embodied intelligence or robotics or something

or maybe at least simulated action

so that you would understand things like intuitive physics.

But it seems like you can understand it

through passive observation,

which is pretty surprising to me.

And again, I think hints at something underlying about

the nature of reality in my opinion,

beyond just the, you know,

the cool videos that it generates.

And of course the next stage is

maybe even making those videos interactive

so one can actually step into them and move around them,

which would be really mind blowing,

especially given my games background.

So you can imagine.

And then I think, you know,

we're starting to get towards

what I would call a world model,

a model of how the world works,

the mechanics of the world, the physics of the world,

and the things in that world.

And of course that's what you would need

for a true AGI system.

- I have to talk to you about video games.

- Yes. - You're being a bit trolly.

I think you're having more and more fun on Twitter on X,

which is great to see.

So a guy named Jimmy Apples tweeted,

let me play a video game of my Veo 3 videos already

Google cooked so good.

Playable world models wen?

It's spelled W-E-N, question mark.

And then you quote tweeted that with,

now wouldn't that be something.

So how hard is it to build game worlds with AI?

Maybe can you look out into

the future of video games - Hmm.

- five, 10 years out. - Hmm.

- What do you think that looks like?

- Well, games were my first love really.

And doing AI for games was

the first thing I did professionally in my teenage years

and was the first major AI systems that I built.

And I always wanna,

I wanna scratch that itch one day and come back to that.

So, you know, and I will do I think.

And I think I'd sort of dream about, you know,

what would I have done back in the '90s

if I'd had access to the kind of AI systems we have today.

And I think you could build absolutely mind-blowing games.

And I think the next stage,

I always used to love making,

all the games I've made are open world games.

So they're games where there's a simulation

and then there's AI characters

and then the player interacts with that simulation

and the simulation adapts to the way the player plays.

And I always thought they were the coolest games because,

so games like Theme Park that I worked on

where everybody's game experience

would be unique to them, right?

Because you are kind of co-creating the game, right?

We set up the parameters,

we set up initial conditions,

and then you as the player immerse in it,

and then you are co-creating it with the simulation.

But of course it's very hard to program open world games.

You know, you've got to be able to create content

whichever direction the player goes in.

And you want it to be compelling

no matter what the player chooses.

And so it was always quite difficult to build

things like cellular automata actually,

those types of classical systems

which created some emergent behavior.

But they're always a little bit fragile,

a little bit limited.

Now we are maybe on the cusp in the next few years,

five, 10 years of having AI systems

that can truly create around your imagination,

can sort of dynamically change the story

and storytell the narrative around and make it dramatic

no matter what you end up choosing.

So it's like the ultimate

choose your own adventure sort of game.

And, you know,

I think maybe we are within reach,

if you think of a kind of interactive version of Veo.

And then wind that forward five to 10 years

and you know, imagine how good it's gonna be.

- Yeah, so you said a lot of super interesting stuff there.

So one, the open world built into that is

a deep personalization the way you've described it.

So it's not just that it's open world,

that you can open any door and there'll be something there.

It's that the choice of which door you open

in an unconstrained way defines the worlds you see.

So some games try to do that.

They give you choice. - Yes.

- But it's really just an illusion of choice.

- Yes. - 'cause you only,

like Stanley Parable, - Yeah.

- this game I used to play.

It's really, there's a couple of doors

and it really just takes you down a narrative.

Stanley Parable is a great video game.

I recommend people play. - Yeah.

- That kind of in a meta way mocks the illusion of choice.

And there's philosophical notions of free will and so on.

But I do like one of my favorite games,

Elder Scrolls Daggerfall I believe,

that they really played with like

random generation of the dungeons,

- [Demis] Yeah.

- of you can step in, - Yes.

- and they give you this feeling of an open world.

And there you mentioned interactivity,

you don't need to interact.

That's the first step

'cause you don't need to interact that much.

You just, when you open the door,

whatever you see is

randomly generated for you. - Yeah.

- And that's already an incredible experience

'cause you might be the only person to ever see that.

- Yeah, exactly.

And so, but what you'd like is a little bit better

than just sort of a random generation, right?

So you'd like,

and also better than a simple A-B hard-coded choice, right?

That's not really open world, right?

As you say, it's just giving you the illusion of choice.

What you want to be able to do is

potentially anything in that game environment.

And I think the only way you can do that is

to have generative systems,

systems that will generate that on the fly.

Of course, you can't create

infinite amounts of game assets, right?

It's expensive enough already how AAA games are made today.

And that was obvious to us back in the '90s

when I was working on all these games.

I think maybe Black & White was the game that I worked on,

early stages of that,

that had still probably the best learning AI in it.

It was an early reinforcement learning system that you,

you know, you were looking after this mythical creature

and growing it and nurturing it.

And depending how you treated it,

it would treat the villagers in that world in the same way.

So if you were mean to it, it would be mean.

If you were good, it would be protective.

And so it was really a reflection of the way you played it.

So actually all of the,

I've been working on sort of simulations and AI

through the medium of games at the beginning of my career.

And really the whole of what I do today is still a follow on

from those early more hard-coded ways of doing the AI

to now, you know, fully general learning systems

that are trying to achieve the same thing.

- Yeah, it's been interesting, hilarious,

and fun to watch you and Elon

obviously itching to create games 'cause you're both gamers.

And one of the sad aspects of your incredible success

in so many domains of science,

like serious adult stuff, - Yeah.

- that you might not have time to really create a game.

You might end up creating the tooling

that others will create the game

and you have to watch others - Exactly.

- create the thing you've always dreamed of.

Do you think it's possible you can somehow

in your extremely busy schedule,

actually find time to create something like Black & White?

An actual video game where like you could

make the childhood dream

- Yeah, well you know, - become reality?

- there's two things,

what I think about that is maybe that with vibe coding

as it gets better, - Yeah.

- and there's a possibility that

I could, you know, - Yes, sure.

- one could do that actually in your spare time.

So I'm quite excited about that.

That would be my project

if I got the time to do some vibe coding.

I'm actually itching to do that.

And then the other thing is,

you know, maybe it's a sabbatical after AGI

has been safely stewarded into the world

and delivered into the world.

You know, that and then working on my physics theory

as we talked about at the beginning.

Those would be the two, my two post AGI projects,

let's call it that way.

- I would love to see

which post AGI, - The old spec game.

- post AGI would you choose solving the problem that

some of the smartest people in human history contended with.

So P equals NP or creating a cool video game.

- Yeah.

Well, but in my world they'd be related

because it would be an open world simulated game

as realistic as possible.

So, you know, what is the universe

that's speaking to the same question, right?

P equals NP, I think all these things are related,

at least in my mind.

- I mean in a really serious way,

video games sometimes are looked down upon.

It's just this fun side activity.

But especially,

as AI does more and more of the difficult boring tasks,

something we in the modern world call work,

you know, video games is the thing

in which we may find meaning, in which we may find,

like, what to do with our time.

You could create incredibly rich, meaningful experiences.

Like that's what human life is.

And then in video games,

you can create more sophisticated,

more diverse ways of living,

right? - Yeah.

- That's the core idea. - I think so.

I mean, those of us who love games, and I still do,

you know, it's almost, you can let your imagination run wild,

right?

Like I used to love games and working on games so much

because it's the fusion,

especially in the '90s and early 2000s,

the sort of golden era,

and maybe the '80s of the games industry.

And it was all being discovered.

New genres were being discovered.

We weren't just making games,

we felt we were creating a new entertainment medium

that never existed before, right?

Especially with these open world games and simulation games

where you as the player were co-creating the story.

There's no other media,

entertainment media where you do that,

where you as the audience actually co-create the story.

And of course now with multiplayer games as well,

it can be a very social activity

and can explore all kinds of interesting worlds in that.

But on the other hand, you know,

it's very important to also enjoy and experience

the physical world.

But the question is then,

you know, I think we're gonna have

to kind of confront the question again of

what is the fundamental nature of reality?

What is gonna be the difference

between these increasingly realistic simulations

and multiplayer ones and emergent

and what we do in the real world?

- Yeah, there's clearly a huge amount of value

to experiencing the real world nature.

There's also a huge amount of value

in experiencing other humans directly in person

the way we're sitting here today.

- [Demis] Yes.

- But we need to really scientifically rigorously

answer the question why. - Yeah, exactly.

- And which aspect of that can be mapped

- Yeah. - into the virtual world?

- [Demis] Exactly.

- And it's not enough to say,

yeah, you should go touch grass and hang out in nature,

it's like why exactly - Yeah, yeah.

- is that valuable? - Yes.

And I guess that's maybe the thing

that's been haunting me or obsessing me

from the beginning of my career.

If you think about all the different things I've done

they're all related in that way.

The simulation, nature of reality,

and what is the bounds of, you know, what can be modeled.

- Sorry for the ridiculous question,

but so far, what is the greatest video game of all time?

What's up there?

What makes it? - Well,

my favorite one of all time is Civilization I have to say.

That was the Civilization I and Civilization II

my favorite games of all time.

- I can only assume you've avoided the most recent one

because it would probably,

that would be your sabbatical.

You would disappear. - Yes, exactly.

They take a lot of time these Civilization games,

so I've got to be careful with them.

- Fun question,

you and Elon seem to be somehow solid gamers,

is there a connection between being great at gaming

and being great leaders of AI companies?

- I don't know.

It's an interesting one.

I mean, we both love games.

And it's interesting,

he wrote games as well to start off with.

It's probably, it's especially the era I grew up in

where home computers had just become a thing,

you know, in the late '80s and '90s,

especially in the UK.

I had a Spectrum and then a Commodore Amiga 500,

which is my,

- Nice. - my favorite computer ever.

And that's where I learned all programming.

And of course it's a very fun thing to program,

is to program games.

So I think it's a great way to learn programming,

probably still is.

And then of course

I immediately took it in directions of AI and simulations,

so I was able to express my interest in games

and my sort of wider scientific interests altogether.

And then the final thing I think that's great about games is

it fuses artistic design, you know, art,

with the most cutting edge programming.

So again, in the '90s,

all of the most interesting technical advances

were happening in gaming.

Whether that was AI, graphics, physics engines, hardware,

even GPUs of course were designed for gaming originally.

So everything that was pushing computing forward in the '90s

was due to gaming.

So interestingly that was

where the forefront of research was going on.

And it was this incredible fusion with art,

you know, graphics but also music

and just the whole new media of storytelling.

And I love that.

For me it's this sort of multidisciplinary kind of effort is

again, something I've enjoyed my whole life.

- I have to ask you, I almost forgot, about one of the many,

and I would say one of the most incredible things recently

that somehow didn't yet get enough attention is AlphaEvolve.

We talked about evolution a little bit,

but it's the Google DeepMind system

that evolves algorithms. - Yeah.

- Are these kinds of evolution like techniques promising

as a component of future superintelligence system?

So for people who don't know,

it's kind of, I don't know if it's fair to say,

it's LLM-guided evolution search.

- Yeah. - So evolutionary algorithms

- are doing the search, - Yes.

- and LLMs are telling you where.

- Yes, exactly.

So LLMs are kind of proposing some possible solutions

and then you use evolutionary computing on top

to find some novel part of the search space.

So actually I think it's an example of

very promising directions

where you combine LLMs or foundation models

with other computational techniques.

Evolutionary methods is one,

but you could also imagine Monte Carlo Tree Search.

Basically many types of

search algorithms or reasoning algorithms

sort of on top of or using the foundation models as a basis.

So I actually think there's quite

a lot of interesting things to be discovered probably

with these sort of hybrid systems let's call them.
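
The shape of the hybrid Demis describes can be sketched as a simple loop: a language model proposes candidate programs, an objective function scores them, and selection keeps the fittest to seed the next round. This is not AlphaEvolve itself; `llm_propose` and `score` are hypothetical stand-ins for a real model call and a real evaluation harness:

```python
# Minimal sketch of the LLM-plus-evolution hybrid described above.
# NOT AlphaEvolve itself: `llm_propose` and `score` are hypothetical
# stand-ins for a real model call and a real evaluation harness.
import random

def llm_propose(parent: str) -> str:
    """Stand-in for an LLM call that rewrites/mutates a candidate program."""
    return parent + f"  # variation {random.randint(0, 999)}"

def score(program: str) -> float:
    """Stand-in for the objective: e.g., run the program and measure it."""
    return random.random()

population = ["def solve(): ..."]                 # seed candidate(s)
for generation in range(10):
    # The LLM proposes offspring from current survivors (the mutation step).
    offspring = [llm_propose(p) for p in population for _ in range(4)]
    # Selection: keep the highest-scoring candidates for the next round.
    population = sorted(population + offspring, key=score, reverse=True)[:4]

print("best candidate found:", population[0])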

- But not to romanticize evolution.

- Yeah. - And I'm only human.

But you think there's some value

in whatever that mechanism is.

'Cause we already talked about natural systems.

Do you think there's a lot of

low-hanging fruit of us understanding,

being able to model,

being able to simulate evolution and then using that,

whatever we understand about that nature,

its biomechanism,

to then do search better and better and better.

- Yes, so if you think about, again,

breaking down the sort of systems we've built

to their really fundamental core,

you've got like the model of

the underlying dynamics of the system.

And then if you want to discover something new,

something novel that hasn't been seen before,

then you need some kind of search process on top

to take you to a novel region of the search space.

And you can do that in a number of ways.

Evolutionary computing is one.

With AlphaGo,

we just use Monte Carlo Tree Search, right?

And that's what found move 37,

the new kind of never seen before strategy in Go.

And so that's how you can go

beyond potentially what is already known.

So the model can model everything

that you currently know about, right,

all the data that you currently have,

but then how do you go beyond that?

So that starts to speak about the ideas of creativity.

How can these systems create something new,

discover something new?

Obviously this is super relevant for scientific discovery

or pushing science and medicine forward,

which we want to do with these systems.

And you can actually bolt on

some fairly simple search systems on top of these models

and get you into a new region of space.

Of course, you also have to make sure that

you are not searching that space totally randomly,

it would be too big.

So you have to have some objective function

that you're trying to optimize and hill climb towards

and that guides that search.
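
For reference, the core mechanic of the Monte Carlo Tree Search mentioned above is a selection rule that balances exploiting high-value moves against exploring under-visited ones. A stripped-down, textbook UCB1 sketch; AlphaGo's actual variant (PUCT) additionally weights moves by a policy network's priors:

```python
# Stripped-down Monte Carlo Tree Search selection (textbook UCB1),
# illustrating how value estimates guide search through a huge move space.
# AlphaGo's real variant (PUCT) also weights moves by policy-network priors.
import math
import random

class Candidate:
    def __init__(self, move):
        self.move = move
        self.visits = 0
        self.value_sum = 0.0

def ucb1(cand, total_visits, c=1.4):
    if cand.visits == 0:
        return float("inf")                    # always try unvisited moves first
    exploit = cand.value_sum / cand.visits     # average estimated value so far
    explore = c * math.sqrt(math.log(total_visits) / cand.visits)
    return exploit + explore

def search(legal_moves, rollout_value, n_simulations=1000):
    candidates = [Candidate(m) for m in legal_moves]
    for sim in range(1, n_simulations + 1):
        # Selection: balance high observed value against under-explored moves.
        cand = max(candidates, key=lambda ca: ucb1(ca, sim))
        # Evaluation: a rollout or learned value model scores the move.
        cand.value_sum += rollout_value(cand.move)
        cand.visits += 1
    # Recommend the most-visited move, the standard MCTS choice.
    return max(candidates, key=lambda ca: ca.visits).move

# Usage with a dummy value function standing in for a learned model.
best = search(legal_moves=range(10), rollout_value=lambda m: random.random())
```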

- But there's some mechanism of evolution

that are interesting,

maybe in the space of programs,

but then the space of programs is

an extremely important space,

'cause you can probably generalize to everything, you know.

But you know, for example, mutation.

So it's not just Monte Carlo Tree Search

where it's like a search.

You could every once in a while,

- Combine things, yeah. - combine things, alter,

like a components of a thing. - Yes.

- So then, you know what evolution is really good at is

not just the natural selection,

it's combining things and building increasingly complex

hierarchical systems. - Yes.

- So that component's super interesting.

- Yeah. - Especially like

with AlphaEvolve and the space of programs.

- Yeah, exactly.

So there's, you can get a bit of

an extra property out of evolutionary systems,

which is some new emergent capability may come about.

- Yes. - Right, of course,

like what happened with life.

Interestingly with naive

sort of traditional evolutionary computing methods,

without LLMs and the modern AI,

the problem with them,

they were very well studied in the '90s and early 2000s

and some promising results,

but the problem was they could never work out

how to evolve new properties, new emergent properties.

You always had a sort of subset of the properties

that you put into the system.

But maybe if we combine them with these foundation models,

perhaps we can overcome that limitation.

Obviously natural evolution clearly did

'cause it did evolve new capabilities, right?

So bacteria to where we are now.

So clearly it must be possible

with evolutionary systems to generate new patterns,

you know, going back to the first thing we talked about,

and new capabilities and emergent properties.

And maybe we're on the cusp of discovering how to do that.

- Yeah, listen,

AlphaEvolve is one of the coolest things I've ever seen.

I, on my desk at home, you know,

most of my time is spent on that computer just programming.

And next to the three screens is a skull of a Tiktaalik,

which is one of the early organisms

that crawled out of the water onto land.

And I just kind of watch that little guy.

It's like,

whatever the competition mechanism of evolution is

it's quite incredible. - Yes.

- It's truly, truly incredible.

- Yeah. - Now whether that's exactly

the thing we need to do to do our search,

but never dismiss the power of nature what it did here.

- Yeah, and it's amazing,

which is a relatively simple algorithm, right, effectively.

And it can generate all of this immense complexity that emerges,

obviously running over,

you know, four billion years of time.

But it's, you know,

you can think about that as again,

a search process that ran over

the physics substrate of the universe

for a long amount of computational time,

but then it generated all this incredible rich diversity.

- So, so many questions I wanna ask you.

So one, you do have a dream,

one of the natural systems

you want to try to model is a cell.

- Yes. - That's a beautiful dream.

I could ask you about that.

I also, just, for that purpose on the AI scientist front,

just broadly,

so there's an essay from

Daniel Kokotajlo, Scott Alexander and others

that outline steps along the way to get to ASI

and has a lot of interesting ideas in it,

one of which is including a superhuman coder

and a superhuman AI researcher.

And in that,

there's a term, research taste, that's really interesting.

So in everything you've seen,

do you think it's possible for AI systems

to have research taste,

to help you in the way that the AI co-scientist does,

to help steer human, brilliant scientists,

and then potentially by itself to figure out

what are the directions

where you want to generate truly novel ideas?

Because that seems to be like

a really important component of how to do great science.

- Yeah, I think that's gonna be

one of the hardest things to mimic or model is

this idea of taste or judgment.

I think that's what separates the, you know,

the great scientists from the good scientists.

Like all professional scientists

are good technically, right,

otherwise they wouldn't have made it that far

in academia and things like that.

But then do you have the taste to sort of sniff out

what the right direction is,

what the right experiment is,

what the right question is.

So picking the right question is the hardest part of science

and making the right hypothesis.

And that's what, you know,

today's systems definitely can't do.

So, you know,

I often say it's harder to come up with a conjecture,

a really good conjecture than it is to solve it.

So we may have systems soon

that can solve pretty hard conjectures.

You know, Math Olympiad problems, where, you know,

our system AlphaProof last year got, you know,

a silver medal in that.

Really hard problems.

Maybe eventually we'll solve

a Millennium Prize kind of problem.

But could a system come up with a conjecture worthy of study

that someone like Terence Tao would've gone,

you know what, that's a really deep question

about the nature of maths or the nature of numbers

or the nature of physics.

And that is a far harder type of creativity.

And we don't really know,

today's systems clearly can't do that

and we're not quite sure what that mechanism would be.

This kind of leap of imagination,

like Einstein had when he came up with, you know,

special relativity and then general relativity

with the knowledge you had at the time.

- And for conjecture,

you want to come up with a thing that's interesting,

it's amenable to proof. - Yes.

- So like, it's easy to come up with a thing

that's extremely difficult. - Yeah.

- It's easy to come up with a thing that's extremely easy,

but that at that very edge, - That sweet spot, right,

of basically advancing the science

and splitting the hypothesis space into two ideally, right?

Whether if it's true or not true,

you've learned something really useful

and that's hard.

And making something that's also, you know,

falsifiable and within sort of the technologies

that you currently have available.

So it's a very creative process, actually,

highly creative process that

I think just a kind of naive search on top of a model

won't be enough for that.

- Okay, the idea of splitting the hypothesis space in two

is super interesting.

So I've heard you say that there's basically no failure in,

or failure is extremely valuable if it's done,

if you construct the questions right,

if you construct the experiments right,

if you design them right,

that failure or success are both useful.

So perhaps, - Yes.

- because it splits the hypothesis space basically in two,

it's like a binary search. - Yes, that's right.

So when you do like, you know,

real blue sky research,

there's no such thing as failure really

as long as you are picking experiments and hypotheses

that meaningfully split the hypothesis space.

So, you know, and you learn something,

you can learn something kind of equally valuable

from an experiment that doesn't work.

That should tell you if you've designed the experiment well

and your hypotheses are interesting,

it should tell you a lot about where to go next.

And then you're effectively doing a search process

and using that information in,

you know, very helpful ways.
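
The "split the hypothesis space in two" idea has the same arithmetic as binary search: if every experiment, successful or not, halves the space, then N candidate hypotheses need only about log2(N) experiments. A toy sketch:

```python
# Toy illustration: experiments that split the hypothesis space in half.
# With 2**20 (about a million) candidate hypotheses, ~20 well-designed
# experiments suffice, and a "failed" experiment (answer: no) eliminates
# exactly as many candidates as a successful one.
import math

n_hypotheses = 2 ** 20
true_answer = 739_431                  # what nature "knows"; we don't

def experiment(threshold):
    """A well-designed experiment: 'is the true hypothesis below threshold?'"""
    return true_answer < threshold

lo, hi = 0, n_hypotheses
steps = 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    if experiment(mid):                # either outcome halves the space
        hi = mid
    else:
        lo = mid
    steps += 1

print(f"identified hypothesis {lo} in {steps} experiments "
      f"(log2(N) = {math.log2(n_hypotheses):.0f})")
```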

- So to go to your dream of modeling a cell,

what are the big challenges that lay ahead for us

to make that happen?

We should maybe highlight that AlphaFold,

I mean there's just so many leaps.

- Yeah. - So AlphaFold solved,

if it's fair to say, protein folding,

and there's so many incredible things

we could talk about there including the open sourcing,

everything you've released.

AlphaFold 3 is doing protein, RNA, DNA interactions,

which is super complicated and fascinating.

It's amenable to modeling.

AlphaGenome predicts, for small genetic changes,

like if we think about single mutations,

how they link to actual function.

So those, it seems like it's creeping along,

- Yes. - to sophisticated,

to much more complicated things like a cell,

but a cell has a lot of really complicated components.

- Yeah.

So what I've tried to do throughout my career is

I have these really grand dreams

and then, as you've noticed,

I try to break them down, you know,

it's easy to have a kind of a crazy ambitious dream.

But the trick is how do you break it down

into manageable, achievable, interim steps

that are meaningful and useful in their own right.

And so virtual cell,

which is what I call the project of modeling a cell,

I've had this idea, you know,

of wanting to do that for maybe more like 25 years.

And I used to talk with Paul Nurse,

who is a bit of a mentor of mine in biology.

He runs the, you know, founded the Crick Institute

and won the Nobel prize in 2001.

We've been talking about it since,

you know, before, you know, in the '90s.

And I used to come back to it every five years like,

what would you need to model of the full internals of a cell

so that you could do experiments on the virtual cell,

you know, in silico.

And those predictions would be useful for you

to save you a lot of time in the wet lab, right?

That would be the dream.

Maybe you could 100X speed up experiments

by doing most of it in silico,

the search in silico,

and then you do the validation step in the wet lab.

That would be, that's the dream.

And so, but maybe now, finally,

so I was trying to build these components,

AlphaFold being one,

that would allow you eventually to model

the full interaction, a full simulation of a cell.

And I'd probably start with a yeast cell.

And partly that's what Paul Nurse studied

because the yeast cell is like a full organism

that's a single cell, right?

So it's the kind of simplest single cell organism.

And so it's not just a cell, it's a full organism.

And yeast is very well understood.

And so that would be a good candidate for

a kind of full simulated model.

Now AlphaFold is the solution to the kind of

static picture of what a protein,

a 3D protein structure, looks like,

a static picture of it.

But we know that biology,

all the interesting things happen

with the dynamics, the interactions.

And that's what AlphaFold 3 is the first step towards:

modeling those interactions.

So first of all, pairwise,

you know, proteins with proteins,

proteins with RNA and DNA,

but then the next step after that

would be modeling maybe a whole pathway,

maybe like the TOR pathway that's involved in cancer

or something like this.

And then eventually you might be able to model,

you know, a whole cell.

- Also, there's another complexity here

that stuff in a cell happens at different timescales.

Is that tricky?

Like they're, you know,

protein folding is, you know, super fast.

- [Demis] Yes.

- I don't know all the biological mechanisms,

- Yeah. - but some of them

take a long time. - Yeah.

- And so that's a level,

so the levels of interaction have

different temporal scales - Yeah.

- that you have to be able to model.

- So that would be hard.

So you'd probably need several simulated systems

that can interact at these different temporal dynamics,

or at least maybe it's like a hierarchical system

so you can jump up or down the different temporal stages.
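
One generic way to picture such a hierarchical setup, purely hypothetical and not any actual cell model: a fast process is stepped many times per step of a slow process, and only a coarse summary is passed up between levels:

```python
# Purely hypothetical sketch of a two-timescale simulation: a fast process
# (say, protein-level interactions) is stepped many times per step of a
# slow process (say, pathway- or cell-level state), and only a coarse
# summary is passed up the hierarchy. Illustrative only, not a cell model.

FAST_STEPS_PER_SLOW_STEP = 1_000     # e.g., microseconds inside milliseconds

def fast_step(fast_state):
    """Stand-in for fine-grained, fast dynamics."""
    return fast_state * 0.999 + 0.001

def summarize(fast_state):
    """Coarse-grain the fast level into what the slow level consumes."""
    return fast_state

def slow_step(slow_state, fast_summary):
    """Stand-in for coarse-grained, slow dynamics."""
    return slow_state + 0.1 * fast_summary

fast, slow = 1.0, 0.0
for _ in range(10):                                # slow, outer timescale
    for _ in range(FAST_STEPS_PER_SLOW_STEP):      # fast, inner timescale
        fast = fast_step(fast)
    slow = slow_step(slow, summarize(fast))

print(f"slow-level state after 10 coarse steps: {slow:.3f}")
```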

- So can you avoid,

I mean, one of the challenges here is to avoid simulating,

for example, the quantum mechanical aspects of

any of this, right?

You want to not over model.

You could skip ahead to just model

the really high level things

that get you a really good estimate of

what's going to happen. - Yes.

So you got to make a decision

when you're modeling any natural system,

what is the cutoff level of

the granularity that you're gonna model it to

that then captures the dynamics that you're interested in?

So probably for a cell,

I would hope that would be the protein level

and that one wouldn't have to go down to the atomic level.

So, you know,

and of course, that's where AlphaFold sort of kicks in.

So that would be kind of the basis

and then you'd build these higher level simulations

that take those as building blocks

and then you get the emergent behavior.

- Apologize for the pothead questions ahead of time,

but do you think we'll be able to simulate a model,

the origin of life?

So being able to simulate the first,

from non-living organisms,

the birth of a living organism.

- I think that's, of course,

one of the deepest and most fascinating questions.

I love that area of biology.

You know, there's people like,

there's a great book by Nick Lane,

one of the top experts in this area, called

"Life Ascending: The Ten Great Inventions of Evolution."

I think it's fantastic.

And it also speaks to what the great filters might be,

you know, prior to us, or are they ahead of us?

I think they're most likely in the past,

if you read that book, of how unlikely it is to,

you know, have any life at all.

And then single cell to multi-cell

seems an unbelievably big jump that took

like a billion years I think - Yeah.

- on Earth to do, right?

So it shows you how hard it was, right?

- Bacteria were super happy

for a very long time. - For a very long time

before they captured mitochondria somehow, right?

I don't see why not,

why AI couldn't help with that, some kind of simulation.

Again, it's a bit of a search process

through a combinatorial space.

Here's like all the,

you know, the chemical soup that you start with,

the primordial soup that, you know,

maybe was on Earth near these hot vents,

here's some initial conditions,

can you generate something that looks like a cell?

So perhaps that would be a next stage

after the virtual cell project is,

well, how could something like that

actually emerge from the chemical soup?

- Well, I would love it if there was a move 37

for the origin of life. - Yeah.

- I think that's one of the sort of great mysteries.

I think ultimately what we'll figure out is that there's a continuum.

There's no such thing as a line

between non-living and living.

But if we can make that rigorous.

- Yes. - That the very thing

from the Big Bang to today

has been the same process.

If you can break down that wall

that we've constructed in our minds of

the actual origin from non-living to living,

that it's not a line that it's a continuum,

that connects physics and chemistry and biology.

- Yeah. - There's no line.

- I mean, this is my whole reason

why I've worked on AI and AGI my whole life.

Because I think it can be the ultimate tool

to help us answer these kind of questions.

And I don't really understand why,

you know, the average person doesn't think like,

worry about this stuff more.

Like how can we not have a good definition of life

and living and non-living and the nature of time,

and let alone, consciousness and gravity

and all these things.

And quantum mechanics weirdness.

It's just, to me,

I've always had this sort of

screaming at me in my face, the whole time,

and it's getting louder.

You know, it's like how, what is going on here?

You know, and I mean that in the deeper sense like,

you know, the nature of reality,

which has to be the ultimate question.

- [Lex] Yeah.

- That would answer all of these things.

It's sort of crazy if you think about it.

We can stare at each other,

and every one of these living things all the time,

we can inspect it with microscopes and take it apart

almost down to the atomic level,

and yet we still can't answer that clearly,

- Yeah. - in a simple way,

that question of how do you define living?

- [Lex] Yeah.

- It's kind of amazing.

- Yeah, living,

you can kind of talk your way out of thinking about,

but like consciousness,

like we have this very obviously

subjective conscious experience,

like we're at the center of our own world

and it feels like something.

And then,

how are you not screaming, - Yeah.

- at the mystery of it all, right?

I mean, but really,

humans have been contending

with the mystery of the world around them for a long, long time.

There's a lot of mysteries.

Like what's up with the sun and the rain.

- Yeah. - Like what's that about?

And then like last year we had a lot of rain

and this year we don't have rain.

Like what did we do wrong?

Humans have been asking

that question for a long time. - Yeah, exactly.

So we're quite,

I guess we've developed a lot of mechanisms

to cope with this. - Yeah.

- These deep mysteries that we can't fully,

we can see but we can't fully understand

and we have to just

get on with daily life. - Yeah.

- And we keep ourselves busy, right?

In a way, do we keep ourselves distracted?

- I mean weather is one of

the most important questions of human history.

We still, that's the go-to small talk topic

of the weather. - Yes.

Especially in England, yeah.

- And then, which is, you know,

famously an extremely difficult system

to model. - Yeah.

- And even that system,

Google DeepMind has made progress on.

- Yes, yeah, we've created

the best weather prediction systems in the world

and they're better than

traditional fluid dynamics sort of systems

that are usually calculated on massive supercomputers

and take days to calculate.

We've managed to model a lot of the weather dynamics

with neural network systems

with our WeatherNext system.

And again, it's interesting,

that those kinds of dynamics can be modeled

even though they're very complicated,

almost bordering on chaotic systems in some cases.

A lot of the interesting aspects of that

can be modeled by these neural network systems.

Including very recently we had, you know,

cyclone prediction of where,

you know, paths of hurricanes might go.

Of course super useful, super important for the world.

And it's super important to do that

very timely and very quickly and as well as accurately.

And I think it's very promising direction again of,

you know, simulating,

and so that you can run forward predictions and simulations

of very complicated real world systems.
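
For context, learned weather models in the published literature are typically run autoregressively: a network maps the current atmospheric state to the state a few hours ahead, and a forecast is that step rolled forward. A generic toy sketch; the `model` here is a made-up stand-in, not WeatherNext's actual interface:

```python
# Generic autoregressive rollout, the common pattern for learned weather
# models in the literature. `model` is a made-up stand-in for a trained
# network -- this is not WeatherNext's actual interface.
import numpy as np

def model(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network: fields now -> fields six hours ahead."""
    return 0.95 * state + 0.05 * state.mean()    # toy smoothing dynamics

state = np.random.rand(181, 360)     # toy 1-degree global grid, one variable
forecast = []
for _ in range(20):                  # 20 steps x 6 hours = a 5-day forecast
    state = model(state)             # each step consumes the previous output
    forecast.append(state)

# One cheap network call per step, versus days of supercomputer time for a
# full fluid-dynamics integration -- the speed advantage described above.
print(f"generated {len(forecast)} forecast steps")
```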

- I should mention that

I've gotten a chance in Texas to meet

a community of folks called the storm chasers.

- [Demis] Yes.

- And what's really incredible about them,

I need to talk to them more,

is they're extremely tech-savvy

because what they have to do is they have to use models

to predict where the storm is. - Yeah.

- So it's this beautiful mix of

like crazy enough - Yeah.

- to like go into the eye of the storm.

- Yeah. - And like,

in order to protect your life

and predict where the extreme events are going to be,

they have to have

increasingly sophisticated models of weather.

- Yeah. - Yeah.

It's a beautiful balance of like

being in it as living organisms

and the cutting edge of science.

So they actually might be using DeepMind systems,

so that's. - Yeah, hopefully they are.

And I'd love to join them in one of those chases.

They look amazing, right? - It's great.

- To actually experience it one time.

- Exactly. - Yeah.

- And then also to experience the correct prediction,

- Yeah, yeah. - where something will come,

and how it's going to evolve.

It's incredible, yeah.

You've estimated that we'll have AGI by 2030,

so there's interesting questions around that.

How will we actually know that we got there

and what may be the, quote, "Move 37" of AGI?

- My estimate is sort of 50% chance

in the next five years.

So, you know, by 2030 let's say.

And so I think there's a good chance that that could happen.

Part of it is what is your definition of AGI,

of course people arguing about that now.

And mine's quite a high bar and always has been of like,

can we match the cognitive functions that the brain has?

Right, so we know our brains are pretty much

general Turing machines, approximately.

And of course we created

incredible modern civilization with our minds.

So that also speaks to how general the brain is.

And for us to know we have a true AGI,

we would have to like make sure

that it has all those capabilities.

It isn't kind of a jagged intelligence

where some things it's really good at like today's systems,

but other things it's really flawed at.

And that's what we currently have with today's systems.

They're not consistent.

So you'd want that consistency of intelligence

across the board.

And then we have some missing,

I think, capabilities,

like sort of the true invention capabilities and creativity

that we were talking about earlier.

So you'd want to see those.

How do you test that?

I think you just test it.

One way to do it would be a kind of brute-force test of

tens of thousands of cognitive tasks that,

you know, we know that humans can do.

And maybe also make the system available

to a few hundred of the world's top experts,

the Terrence Taos of each subject area,

and see if they can find, you know,

give them a month or two

and see if they can find an obvious flaw in the system.

And if they can't,

then I think you are pretty, you know,

you can be pretty confident we have a fully general system.
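
A hypothetical sketch of that brute-force consistency test; the task names, the `dummy` model stub, and the jaggedness threshold are all illustrative, not a real benchmark:

```python
# Run a battery of cognitive tasks across domains, then look for "jagged"
# spots: domains where the system falls far below its own overall level,
# which is the inconsistency Demis says today's systems still show.
from statistics import mean

def evaluate(model, tasks: dict) -> dict:
    """tasks maps domain -> list of (prompt, checker) pairs."""
    scores = {}
    for domain, items in tasks.items():
        results = [checker(model(prompt)) for prompt, checker in items]
        scores[domain] = mean(results)
    return scores

def jagged_domains(scores: dict, tolerance: float = 0.15) -> list:
    """Flag domains far below average: evidence against a consistent AGI."""
    overall = mean(scores.values())
    return [d for d, s in scores.items() if s < overall - tolerance]

dummy = lambda prompt: "42"  # stand-in model that answers everything with "42"
tasks = {"math": [("6*7?", lambda out: out == "42")],
         "poetry": [("write a haiku", lambda out: len(out) > 20)]}
print(jagged_domains(evaluate(dummy, tasks)))  # -> ['poetry']
```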

- Maybe to push back a little bit.

It seems like humans are really incredible,

as the intelligence improves across all domains,

at taking it for granted.

Like you mentioned, Terrence Tao,

these brilliant experts,

they might quickly in a span of weeks

take for granted all the incredible things you can do

and then focus in: well, aha, right there.

You know, I consider myself,

first of all, human. - Yeah.

- I identify as human.

You know, some people listen to me talk and they're like,

that guy is not good at talking,

the stuttering, you know.

So like even humans have obvious limits across domains,

even just outside of mathematics and physics and so on.

I wonder if it will take something like a move 37,

so on the positive side, - Yeah.

versus like a barrage of 10,000 cognitive tasks,

- Yeah. - where it'll be one or two

where it's like, - Yes.

- holy shit, this is special. - So I think there are.

Exactly.

So I think there's the sort of blanket testing

to just make sure you've got the consistency.

But I think there are the sort of lighthouse moments

like the move 37 that I would be looking for.

So one would be inventing

a new conjecture or a new hypothesis about physics

like Einstein did.

So maybe you could even run the back test of

that very rigorously.

Like have a cutoff, a knowledge cutoff of 1900

and then give the system everything that was,

you know, that was written up to 1900,

and then see if it could come up

with special relativity and general relativity, right,

like Einstein did.

That would be an interesting test.
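
A minimal sketch of that backtest setup, with hypothetical document records: filter the training corpus to a strict pre-1900 knowledge cutoff before training, then see whether relativity emerges:

```python
# The 1900-cutoff backtest idea: the model may only ever see documents
# published strictly before the cutoff, so any "discovery" of special or
# general relativity would have to be its own invention.
from datetime import date

CUTOFF = date(1900, 1, 1)

def pre_cutoff_corpus(documents):
    """Keep only documents published strictly before the knowledge cutoff."""
    return [doc for doc in documents if doc["published"] < CUTOFF]

corpus = [
    {"title": "Maxwell's equations treatise", "published": date(1873, 1, 1)},
    {"title": "Annus mirabilis papers", "published": date(1905, 6, 30)},
]
print([d["title"] for d in pre_cutoff_corpus(corpus)])  # Maxwell only
```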

Another one would be, can it invent a game like Go?

Not just come up with move 37, a new strategy,

but can it invent a game that's as deep,

as aesthetically beautiful, as elegant as Go?

And those are the sorts of things

I would be looking out for.

And probably a system being able to do

several of those things, right?

For it to be very general, not just one domain.

And so I think that would be the signs at least

that I would be looking for,

that we've got a system that's AGI level.

And then maybe to fill that out,

you would also check their consistency,

you know, make sure there's no holes in that system either.

- Yeah, something like

a new conjecture or scientific discovery.

That would be a cool feeling.

- Yeah, that would be amazing.

So it's not just helping us do that,

but actually coming up with something brand new.

- And you would be in the room for that.

- Absolutely. - So it would be like

probably two or three months before announcing it.

And you would just be sitting there trying not to tweet.

- Something like that.

Exactly, it's like, what is this amazing,

- Yeah. - you know, physics idea?

And then we would probably check it

with world experts in that domain.

- Yeah. - Right.

And validate it and kind of go through its workings,

and I guess it would be explaining its workings too.

Yeah, it'd be an amazing moment.

- Do you worry that we as humans, even expert humans,

like you might miss it?

- Well, it may be pretty complicated.

So it could be, the analogy I give there is

I don't think it will be totally mysterious

to the best human scientists,

but it may be a bit like,

for example, in chess,

if I was to talk to Garry Kasparov or Magnus Carlsen

and play a game with them

and they make a brilliant move,

I might not be able to come up with that move,

but they could explain why afterwards that move made sense.

And we would be able to understand it to some degree,

not to the level they do,

but, you know, if they were good at explaining,

which is actually part of intelligence too,

is being able to explain in a simple way

what you're thinking about.

I think that that will be very possible

for the best human scientists.

- But I wonder, maybe you can educate me on the side of Go,

I wonder if there's moves from Magnus or Garry

where they at first will dismiss it as a bad move.

- Yeah, sure.

It could be.

But then afterwards they'll figure out with their intuition

that this is why this works.

And then empirically,

the nice thing about games is,

one of the great things about games is

it's a sort of scientific test.

Do you win the game or not win?

And then that tells you,

okay, that move in the end was good,

that strategy was good.

And then you can go back and analyze that

and explain even to yourself

a little bit more why, and explore around it.

And that's how chess analysis and things like that works.

So perhaps that's why my brain works like that.

'cause I've been doing that since I was four.

And you're trained,

you know, it's sort of hardcore training in that way.

- But even now, like when I generate code,

there is this kind of

nuanced, fascinating contention that's happening

where I might at first identify

a set of generated code as incorrect

in some interesting nuanced ways.

But then I always have to ask the question,

is there a deeper insight here

that I'm the one who's incorrect?

And that's going to,

as the systems get more and more intelligent,

you're gonna have to contend with that.

It's like, what do you do?

Is this a bug or a feature, - Yeah.

- what you just came up with?

- Yeah, and they're gonna be pretty complicated to do,

but of course it will be.

You can imagine also AI systems

that are producing that code or whatever that is,

and then a human programmer is looking at it,

but not unaided, with the help of AI tools as well.

So it's gonna be kind of an interesting,

you know, maybe different AI tools to the ones,

- Yeah. - that are more, you know,

kind of monitoring tools to the ones that generated it.

- So if we look at the AGI system,

sorry to bring it back up,

- Yeah. - but AlphaEvolve, super cool.

So AlphaEvolve enables, on the programming side,

something like recursive self-improvement potentially.

Like if you can imagine what that AGI system,

maybe not the first version,

but a few versions beyond that,

what does that actually look like?

Do you think it will be simple?

Do you think it'll be something like

a self-improving program and a simple one?

- I mean, potentially that's possible I would say.

I'm not sure it's even desirable

because that's a kind of like hard takeoff scenario.

- Yeah. - But you,

these current systems like AlphaEvolve,

they have, you know,

human in the loop deciding on various things,

they're separate hybrid systems that interact.

One could imagine eventually doing that end-to-end.

I don't see why that wouldn't be possible.

But right now, you know,

I think the systems are not good enough to do that

in terms of coming up with the architecture of the code.

And again, it's a little bit reconnected to this idea of

coming up with a new conjecture or hypothesis.

Like, they're good if you give them

very specific instructions about what you're trying to do.

But if you give them a very vague high-level instruction

that wouldn't work currently.

And I think that's related to this idea of like

invent a game as good as Go, right?

Imagine that was the prompt.

That's pretty underspecified.

And so the current systems wouldn't know,

I think what to do with that,

how to narrow that down to something tractable.

And I think it's similar with like,

look, just make a better version of yourself.

That's too unconstrained.

But we've done it, you know,

as you know with AlphaEvolve, on

things like faster matrix multiplication.

So when you hone it down

to the very specific thing you want,

it's very good at incrementally improving that.

But at the moment,

these are more like incremental improvements,

sort of small iterations.

Whereas if, you know,

if you wanted a big leap in understanding,

you need a much larger advance.
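
A hedged sketch of the evolutionary pattern being discussed, with the LLM proposal step replaced by a random stand-in (`llm_mutate`) and a toy verifiable fitness function; it shows the loop shape described here, not AlphaEvolve's actual implementation:

```python
# AlphaEvolve-style loop: a proposer suggests candidate program edits, an
# automatic evaluator scores them on a narrow verifiable objective (e.g.
# "how many multiplications does this algorithm use"), and selection keeps
# the best candidates for the next generation.
import random

def llm_mutate(program: list) -> list:
    """Stand-in for an LLM proposing a small edit to a candidate program."""
    child = program.copy()
    child[random.randrange(len(child))] = random.uniform(-1, 1)
    return child

def evaluate(program: list) -> float:
    """Verifiable fitness: a toy objective standing in for something that
    can be checked automatically, like an operation count. Higher is better."""
    return -sum(x * x for x in program)

def evolve(generations=200, population_size=16):
    population = [[random.uniform(-1, 1) for _ in range(8)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        parents = population[: population_size // 2]        # keep the elite
        population = parents + [llm_mutate(random.choice(parents))
                                for _ in parents]           # refill by mutation
    return max(population, key=evaluate)

print(evaluate(evolve()))  # incremental hill climbing toward the optimum
```

Note how this matches the point above: the loop is excellent at honing a precisely specified objective, but nothing in it knows what to do with a vague prompt like "invent a game as good as Go."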

- Yeah, but, sort of

to push back against the hard takeoff scenario,

it could also be just a sequence of incremental improvements

like matrix multiplication.

Like it has to sit there for days

thinking how to incrementally improve a thing

and then it does so recursively.

And as you do more and more improvement, it'll slow down.

- Right. - So there'll be like,

like the path to AGI won't be like that,

it'll be a gradual improvement over time.

- Yes.

If it was just incremental improvements,

that's how it would look.

So the question is,

could it come up with a new leap like

the Transformers architecture? - Yeah.

- Right, could it have done that back in 2017,

when, you know, we did it and Brain did it.

And it's not clear that these systems,

something like AlphaEvolve, would be able to

make such a big leap.

So for sure these systems are good.

We have systems I think that can do

incremental hill climbing.

And that's a kind of bigger question about

is that all that's needed from here

or do we actually need one or two more big breakthroughs?

- And can the same kind of systems

provide the breakthroughs also?

So make it a bunch of S-curves.

Like incremental improvement,

but also every once in a while leaps.

- Yeah, I don't think anyone has systems

that can have shown unequivocally those big leaps, right?

We have a lot of systems that do

the hill climbing of the S-curve that you're currently on.

- Yeah, and that would be the move 37.

- Yeah, I think would be a leap, something like that.

- Do you think the scaling laws are holding strong

on the pre-training, post-training test on compute?

Do you, on the flip side of that,

anticipate AI progress hitting a wall?

- We certainly feel there's a lot more room

just in the scaling.

So actually all steps,

pre-training, post-training, and inference time.

So there's sort of three scalings

that are happening concurrently.

And we, again, there it's about how innovative you can be.

And we, you know,

we pride ourselves on having

the broadest and deepest research bench.

We have amazing, you know, incredible researchers

and people like Noam Shazeer,

who, you know, came up with Transformers,

and Dave Silver, you know,

who led the AlphaGo project and so on.

And it's that research base means that

if some new breakthrough is required,

like an AlphaGo or Transformers,

I would back us to be the place that does that.

So I actually quite like it

when the terrain gets harder, right?

Because then it veers more from just engineering

to true research. - Yeah.

- And you know, or research plus engineering,

and that's our sweet spot.

And I think that's harder,

it's harder to invent things than to, you know, fast follow.

And so, you know, we don't know.

I would say it's kind of 50/50 whether new things are needed

or whether the scaling of the existing stuff

is gonna be enough.

And so in true kind of empirical fashion,

we are pushing both of those as hard as possible.

The new blue sky ideas,

and you know, maybe about half our resources are on that,

and then scaling to the max

the current capabilities.

And we're still seeing some, you know,

fantastic progress on each different version of Gemini.

- That's interesting the way you put it

in terms of the deep bench,

that if progress towards AGI is

more than just scaling compute,

so the engineering side of the problem

and is more on the scientific side

where there's breakthroughs needed,

then you feel confident DeepMind,

Google DeepMind, is well positioned to

- Yes. - kick ass in that domain?

- Well, I mean if you look at

the history of the last decade or 15 years,

- [Lex] Yeah.

- It's been, I mean, you know,

maybe, I don't know, 80-90% of the breakthroughs

that underpin the modern AI field today were from,

you know, originally,

Google Brain, Google Research, and DeepMind.

So yeah, I would back that to continue hopefully.

- So on the data side,

are you concerned about running out of high-quality data,

especially high-quality human data?

- I'm not very worried about that,

partly because I think there's enough data

and it's been proven to get the systems to be pretty good.

And this goes back to simulations again.

Do you have enough data to make simulations

so that you can create more synthetic data

that are from the right distribution?

Obviously that's the key.

So you need enough real world data

in order to be able to create those kinds of generators,

data generators.

And I think that we're at that step at the moment.
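
A minimal sketch of that idea under a toy assumption, with a Gaussian fit standing in for a learned simulator: use scarce real data to fit a generator, sample abundant synthetic data from it, and check the synthetic data matches the real distribution:

```python
# Synthetic data generation: fit a generative model to limited real-world
# data, sample far more data from it, and sanity-check that the samples
# come from (roughly) the right distribution before training on them.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=2.0, scale=0.5, size=1_000)   # scarce real-world data

# "Train" the generator: here just estimate the real data's statistics.
mu, sigma = real.mean(), real.std()

# Generate far more synthetic data than we had real data.
synthetic = rng.normal(loc=mu, scale=sigma, size=100_000)

# Distribution check: synthetic moments should match the real ones.
assert abs(synthetic.mean() - real.mean()) < 0.05
assert abs(synthetic.std() - real.std()) < 0.05
print(f"real n={real.size}, synthetic n={synthetic.size}")
```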

- Yeah, you've done a lot of incredible stuff

on the side of science and biology,

doing a lot with not so much data.

- [Demis] Yeah.

- I mean it's still a lot of data,

but I guess enough for takeoff. - Get that going.

Exactly, exactly. - Yeah, yeah.

- How crucial is the scaling of compute to building AGI?

This is a question that's an engineering question,

it's almost a geopolitical question

because also integrated into that is

supply chains and energy. - Yes.

- A thing that you care a lot about,

which is potentially fusion. - Yes.

- So innovating on

the side of energy also. - Yeah.

- Do you think we're gonna keep scaling compute?

- I think so, for several reasons.

I think compute,

there's the amount of compute you have for training,

often it needs to be co-located.

So actually even like, you know,

bandwidth constraints between data centers can affect that.

So there's additional constraints even there.

And that's important for training

obviously the largest models you can.

But there's also,

because now AI systems are in products

and being used by billions of people around the world,

you need a ton of inference compute now.

And then on top of that,

there's the thinking systems,

the new paradigm of the last year

where they get smarter,

the more inference time

you give them at test time.

So all of those things need a lot of compute

and I don't really see that slowing down.

And as AI systems become better,

they'll become more useful

and there'll be more demand for them.

So both from the training side,

the training side actually is only just one part of that,

it may even become the smaller part of what's needed

- [Lex] Yeah.

- in the overall compute that's required.
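
One common illustration of why test-time "thinking" multiplies inference compute (not necessarily how Gemini's thinking mode works) is best-of-n sampling: spend n times the compute per query and keep the answer a scorer rates highest. A sketch with stand-in `model` and `scorer` functions:

```python
# Best-of-n test-time scaling: answer quality improves with the inference
# compute you spend per query, which is why serving "thinking" systems to
# billions of users needs so much more than the training compute did.
import random

def model(prompt: str) -> str:
    """Stand-in for one sampled model response."""
    return f"answer-{random.randint(0, 9)}"

def scorer(prompt: str, answer: str) -> float:
    """Stand-in for a verifier/reward model rating an answer."""
    return -abs(int(answer.split('-')[1]) - 7)  # pretend '7' is correct

def best_of_n(prompt: str, n: int) -> str:
    """n times the inference compute of a single sample."""
    candidates = [model(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: scorer(prompt, a))

print(best_of_n("hard question", n=1), best_of_n("hard question", n=64))
```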

- Yeah, that's sort of almost memey kind of thing,

which is like the success

and the incredible aspects of Veo 3.

People kind of make fun of how

the more successful it becomes, the more,

you know, the servers are sweating.

- Yes, exactly. - 'Cause of the inference.

- Yeah, yeah, exactly.

We did a little video of the servers frying eggs and things.

And that's right.

And we're gonna have to figure out how to do that.

There's a lot of interesting

hardware innovations that we do.

As you know, we have our own TPU line.

And we are looking at like inference-only things,

inference-only chips,

and how we can make those more efficient.

We're also very interested in building AI systems

and we have done so, to help with energy usage.

So helping data center energy use,

like making the cooling systems efficient,

grid optimization,

and then eventually things like

helping with plasma containment fusion reactors.

We've done lots of work on that with Commonwealth Fusion.

And also one could imagine reactor design.

And then material design I think is

one of the most exciting.

New types of solar material, solar panel material,

room temperature superconductors have

always been on my list of dream breakthroughs,

and optimal batteries.

And I think a solution to any,

you know, one of those things would be

absolutely revolutionary for, you know,

climate and energy usage.

And we're probably close, you know,

and again, in the next five years,

to having AI systems that can materially help

with those problems.

- If you were to bet,

sorry for the ridiculous question.

- Yeah. - But what is

the main source of energy in like 20, 30, 40 years?

Do you think it's gonna be nuclear fusion?

- I think fusion and solar are the two that I would bet on.

Solar, I mean, you know,

it's the fusion reactor in the sky of course.

And I think really the problem there is

batteries and transmission.

So you know, as well as more efficient,

more and more efficient solar material,

perhaps eventually, you know, in space.

You know, these kind of Dyson sphere type ideas.

And fusion I think definitely seems doable

if we have the right design of reactor

and we can control the plasma fast enough and so on.

I think both of those things will actually get solved.

So we'll probably have both,

and those are probably the two primary sources of

renewable, clean, almost free or perhaps free energy.

- What a time to be alive.

If I traveled into the future with you 100 years from now,

how much would you be surprised

if we've passed a Type I Kardashev scale civilization?

- I would not be that surprised

if that happened on, like, a 100-year time scale from here.

I mean, I think it's pretty clear

if we crack the energy problems

in one of the ways we've just discussed,

fusion or very efficient solar,

then if energy is kind of free and renewable and clean,

then that solves a whole bunch of other problems.

So for example, the water access problem goes away

because you can just use desalination.

We have the technology, it's just too expensive.

So only, you know, fairly wealthy countries

like Singapore and Israel and so on like actually use it.

But if it was cheap,

then, you know, all countries that have a coast could.

But also you'd have unlimited rocket fuel.

You could just separate sea water

out into hydrogen and oxygen using energy,

and that's rocket fuel.
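
The chemistry behind that remark, using standard textbook values rather than anything from the conversation: splitting each mole of water costs roughly the 286 kJ you later get back by burning the hydrogen, which is why near-free electricity effectively means near-free propellant.

```latex
% Electrolysis: split water, store the products, burn them back as rocket fuel.
2\,\mathrm{H_2O} \;\xrightarrow{\text{electricity}}\; 2\,\mathrm{H_2} + \mathrm{O_2},
\qquad \Delta H^\circ \approx +286\ \mathrm{kJ}\ \text{per mole of}\ \mathrm{H_2O}.
```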

So combined with, you know,

Elon's amazing self-landing rockets,

then it could be like sort of like a bus service to space.

So that opens up, you know,

incredible new resources and domains.

Asteroid mining I think will become a thing

and maximum human flourishing to the stars.

That's what I dream about.

As well as, like, Carl Sagan's sort of idea of

bringing consciousness to the universe,

waking up the universe.

And I think human civilization will do that

in the fullness of time if we get AI right

and crack some of these problems with it.

- Yeah, I wonder what it would look like

if you're just a tourist flying through space,

you would probably notice Earth,

because if you solve the energy problem,

you would see a lot of space rockets probably.

So it would be like traffic here in London,

- Yeah. - but in space.

- Yes, exactly. - It's just a lot of rockets.

- [Demis] Yes.

- And then you would probably see floating in space,

some kind of source of energy like solar

- Yeah. - potentially.

So Earth would just look, on the surface,

more technological.

And then you would use the power of that energy then

to preserve the natural, - Yes.

- like the rainforest

and all that kind of stuff. - Exactly.

Because for the first time in human history,

we wouldn't be resource constrained.

And I think that could be amazing new era for humanity

where it's not zero sum, right?

I have this land, you don't have it.

Or if we take, you know,

if the tigers have their forest,

then the local villagers can't,

what are they gonna use?

I think that this will help a lot.

No, it won't solve all problems

because there's still other human foibles

that will still exist,

but it will at least remove,

I think, one of the big vectors,

which is scarcity of resources,

you know, including land and more materials and energy.

And you know,

I sometimes call it, like others call it,

this kind of radical abundance era

where there's plenty of resources to go around,

of course, the next big question is making sure that

that's, you know, shared fairly,

and everyone in society benefits from that.

- So there is something about human nature where I go,

you know, it's like Borat,

like my neighbor, like you start trouble.

We do start conflicts.

And that's why games throughout history,

as I'm learning actually more and more,

even in ancient history,

serve the purpose of pushing people away from war.

- Yes. - Actually the hot war.

So maybe we can figure out

increasingly sophisticated video games that pull us,

that give us that,

scratch the itch of, like, - Yeah.

- conflict, whatever that is,

in us, in human nature.

And then avoid the actual hot wars that would come with

increasingly sophisticated technologies

because we're now long past the stage

where the weapons we're able to create

can actually just destroy all of human civilization.

- Yeah. - So it's no longer,

that's no longer a great way

to start shit with your neighbor.

It is better to play a game of chess.

- Or football? - Or football.

- Yeah. - Yeah.

- And I think, I mean,

I think that's what modern sport is.

And I love football, watching it.

And I just feel like, and I used to play it a lot as well,

it's very visceral and it's tribal

and I think it does channel a lot of those energies,

and the need, which I think is a kind of human need,

to belong to some group,

into a fun way, a healthy way,

a constructive and not a destructive thing.

And going back to games again,

I think the reason why they're so great

as well for kids to play, things like chess, is

they're great little microcosms,

simulations of the world.

They're simplified versions of some real world situation,

whether it's poker or Go or chess or Diplomacy,

different aspects of the real world.

And it allows you to practice at them too.

And 'cause you know,

how many times do you get to practice

a massive decision moment in your life?

You know, what job to take, what university to go to?

You know, you get maybe, I don't know,

a dozen or so key decisions one has to make

and you've got to make those as best as you can.

And games are a kind of safe environment,

repeatable environment,

where you can get better at your decision making process.

And it maybe has this

additional benefit of channeling some energies

into more creative and constructive pursuits.

- Well, I think it's also really important to practice

losing and winning. - Right.

- Like losing is a really, you know,

that's why I love games,

that's why I love even things like Brazilian jiujitsu.

- [Demis] Yeah.

- Where you can get your ass kicked

in a safe environment over and over.

It reminds you about physics,

about the way the world works,

about that sometimes you lose, sometimes you win,

you can still be friends with everybody.

- Yeah. - That feeling of losing,

I mean it's a weird one for us humans to like

really like make sense of like,

that's just part of life,

that is a fundamental part of life is losing.

- Yeah and I think in martial arts as I understand it,

but also in things like chess is,

at least the way I took it,

it's a lot to do with self-improvement, self-knowledge,

you know, that, okay, so I did this thing.

It's not really about beating the other person,

it's about maximizing your own potential.

If you do it in a healthy way,

you learn to use victories and losses.

Don't get carried away with victory

and think you're just the best in the world.

And the losses keep you humble

and always knowing there's always something more to learn.

There's always a bigger expert that can mentor you.

You know, I think you learn that

I'm pretty sure in martial arts,

and I think that's also the way

that at least I was trained in chess,

in the same way.

And it can be very hardcore and very important.

And of course you wanna win,

but you also need to learn how to deal with setbacks

in a healthy way.

And wire that feeling that you have when you lose something

into a constructive thing of

next time I'm gonna improve this, right,

or get better at this.

- There is something that's a source of happiness,

a source of meaning, that improvement step.

It's not about the winning or losing.

- Yes, the mastery. - Yeah.

- There's nothing more satisfying in a way.

It's like, oh wow,

this thing I couldn't do before, now I can.

And again games and physical sports and mental sports,

they're ways of measuring, they're beautiful

because you can measure that progress,

right? - Yeah.

I mean there's something about

I guess why I love role playing games.

Like the numbers going up, like my,

- Yes. - on the skill tree.

Like literally that is a source of meaning for us humans.

Whatever our- - Yeah.

We're quite addicted to this sort of, yeah.

These numbers going up. - Yeah.

- And maybe that's why

we made games like that. - Yeah.

- 'Cause obviously,

we're hill-climbing systems ourselves, right?

- Yeah, it would be quite sad

if we didn't have, - Yeah.

- any mechanism. - Different colored belts.

We do this everywhere, right?

Where we just have this thing, it's great.

- And I don't wanna dismiss that,

that there is a source of deep meaning

across humans. - Yeah.

- So one of the incredible stories on the business,

on the leadership side is

what Google has done over the past year.

So I think it's fair to say that Google was losing

on the LLM product side a year ago with Gemini 1.5

and now it's winning with Gemini 2.5.

And you took the helm and you led this effort.

What did it take to go from let's say,

quote unquote "losing"

to quote, unquote, "winning" in a span of a year?

- Yeah, well firstly,

it's absolutely incredible team that we have,

you know, led by Koray and Jeff Dean and Oriol

and the amazing team we have on Gemini,

absolutely world class.

So you can't do it without the best talent.

And of course you have, you know,

we have a lot of great compute as well.

But then it's the research culture we've created, right?

And basically coming together,

both different groups in Google,

you know, there was Google Brain, a world-class team,

and then the old DeepMind.

And pulling together all the best people and the best ideas

and gathering around

to make the absolute greatest system we could.

And it has been hard,

but we're all very competitive.

And we, you know, love research.

It's just so fun to do.

And we, you know, it's great to see our trajectory.

It wasn't a given,

but we're very pleased with where we are

and the rate of progress is the most important thing.

So if you look at where we've come to from two years ago

to one year ago to now, you know,

I think our, we call it relentless progress

along with relentless shipping of that progress

is being very successful.

And, you know, it's unbelievably competitive,

the whole space, the whole AI space,

with some of the greatest entrepreneurs

and leaders and companies in the world,

all competing now because everyone's realized

how important AI is.

And it's very, you know,

been pleasing for us to see that progress.

- You know, Google's a gigantic company.

Can you speak to the natural things

that happen in that case?

Is the bureaucracy that emerges,

like you wanna be careful of, you know,

the natural kind where there's meetings

and there's managers and all that. - Yeah.

- Like what are some of the challenges

from a leadership perspective breaking through that

in order to, like you said, ship.

like the number of products. - Yeah, yeah.

- Gemini-related products

that's been shipped over the past years is just insane.

- Right, it is.

Yeah exactly.

That's what relentlessness looks like.

I think it's a question of like any big company,

you know, ends up having a lot of layers of management

and things like that; it's sort of the nature of how it works.

But I still operate and I was always operating

with old DeepMind as a startup still.

A large one, but still as a startup.

And that's what we still act like today

as with Google DeepMind.

And acting with decisiveness and the energy that you get

from the best smaller organizations.

And we try to get the best of both worlds

where we have these incredible billions-of-users surfaces,

incredible products that we can power up

with our AI and our research.

And that's amazing.

And you can, you know,

there's very few places in the world you can get that,

do incredible world-class research on the one hand

and then plug it in

and improve billions of people's lives the next day.

That's a pretty amazing combination.

And we're continually fighting and cutting away bureaucracy

to allow the research culture

and the relentless shipping culture to flourish.

And I think we've got a pretty good balance,

whilst being responsible with it,

you know, as you have to be as a large company

and also with a number of, you know,

huge products surfaces that we have.

- So a funny thing you mentioned about like,

the surfaces with billions of users.

I had a conversation with a guy named,

a brilliant guy

here at the British Museum called Irving Finkel.

He's a world expert on cuneiform,

which is ancient writing

on tablets. - Yeah.

- And he doesn't know about ChatGPT or Gemini.

He doesn't even know anything about AI.

But his first encounter with this AI is

AI mode on Google. - Yes, yes.

- He's like, is that what you're talking about,

- Yes. - this AI mode?

And then, you know,

it's just a reminder that there's a large part of the world

that doesn't know about this AI thing.

- Yeah, I know.

It's funny 'cause if you live on X and Twitter,

and I mean, it's sort of at least my feed, it's all AI.

And there's certain places where, you know,

in the Valley and certain pockets where everyone's just,

all they're thinking about is AI.

But a lot of the normal world hasn't come across it yet.

- And that's a great responsibility,

their first interaction. - Yup.

- At the grand scale of rural India

or anywhere across the world,

like you get to. - Right, right.

And you want it to be as good as possible.

And in a lot of cases it's just under the hood powering,

making something like maps or search work better.

And it's ideally for a lot of those people

should just be seamless.

It's just new technology that makes their lives more,

you know, productive and helps them.

- A bunch of folks on

the Gemini product and engineering teams

have spoken extremely highly of you on another dimension

that I almost didn't even expect,

'cause I kind of think of you as, like, the deep scientist

and caring about these big research scientific questions.

But they also said you're a great product guy.

Like how to create a thing

that a lot of people would use and enjoy using.

So can you maybe speak to what it takes

to create AI-based product that a lot of people enjoy using?

- Yeah, well I mean, again,

that comes back from my game design days

where I used to design games for millions of gamers.

People would forget about that.

I've had experience with cutting-edge technology in product

that is how games were in the '90s.

And so I love actually

the combination of cutting-edge research

and then being applied in a product

to power a new experience.

And so I think it's the same skill really of, you know,

imagining what it would be like to use it viscerally

and having good taste coming back to earlier.

The same thing that's useful in science

I think can also be useful in product design.

And I've just had a very, you know,

always been a sort of multidisciplinary person.

So I don't see the boundaries really between,

you know, arts and sciences or product and research.

It's a continuum for me.

I mean, I only like working on products

that are cutting edge, you know,

that have cutting-edge technology under the hood.

I wouldn't be excited about them

if they were just run-of-the-mill products.

So it requires this invention creativity capability.

- What are some specific things you kind of learned about

when you, even on the LLM side,

you're interacting with Gemini,

you're like this doesn't feel like

the layout, the interface, - Yeah.

maybe the trade-offs, the latency.

Like how to present to the user how long to wait

and how that waiting is shown or the reasoning capabilities?

There's some interesting things.

'Cause like you said, it's very cutting edge,

we don't know - Yeah.

- how to present it correctly.

So is there some specific things you've learned?

- I mean it's such a fast evolving space.

We're evaluating this all the time.

But where we are today is that

you want to continually simplify things.

Whether that's the interface, - Simplify, yeah.

- or what you build on top of the model.

You kind of wanna get out of the way of the model.

The model train is coming down the track

and it's improving unbelievably fast.

This relentless progress we talked about earlier.

You know, you look at 2.5 versus 1.5

and it's just a gigantic improvement.

And we expect that again for the future versions.

And so the models are becoming more capable.

So the interesting thing about

the design space in today's world,

these AI-first products is,

you've got to design not for what the thing can do today,

the technology can do today,

but in a year's time.

So you actually have to be a very technical product person

because you've got to kind of have

a good intuition for and feel for,

okay, that thing that I'm dreaming about now

can't be done today,

but is the research track on schedule

to basically intercept that in six months or a year's time?

So you kind of got to intercept

where this highly changing technology's going.

As well as the new capabilities are

coming online all the time,

that you didn't realize before that can allow

like deep research to work.

Or now we've got video generation,

what do we do with that?

This multimodal stuff,

you know, one question I have is,

is it really going to be the current UI that we have today?

These text-box chats seem very unlikely

once you think about these super multimodal systems.

Shouldn't it be something more like "Minority Report"

where you're sort of vibing with it

in a kind of collaborative way, right?

It seems very restricted today.

I think we'll look back on today's interfaces

and products and systems as quite archaic

in maybe in just a couple of years.

So I think there's a lot of space actually

for innovation to happen on the product side

as well as the research side.

- And then we are offline talking about the keyboard,

the open question is how, when,

and how much will we move to audio

as the primary way of interacting

with the machines around us versus typing stuff.

- Yeah, I mean typing is a very low bandwidth way of doing it,

even if you're a very fast, you know, typer.

And I think we are gonna have

to start utilizing other devices,

whether that's smart glasses, you know, audio earbuds,

and eventually maybe some sorts of neural devices

where we can increase the input and the output bandwidth

to something, you know, maybe 100X of what is today.

- I think that, you know,

an underappreciated art form is interface design,

because I think you cannot unlock

the power of the intelligence of a system

if you don't have the right interface.

The interface is really the way

you unlock its power. - Yeah.

- It's such an interesting question of how to do that.

- Yeah. - So how.

You wouldn't think getting out of the way

is a real art form.

- Yes.

You know, it's the sort of thing

that I guess Steve Jobs always talked about, right?

It's simplicity, beauty, and elegance that we want, right?

And nobody's there yet, in my opinion.

And that's what I would like us to get to.

Again, it sort of speaks to like Go again, right, as a game,

the most elegant, beautiful game.

Can you, you know,

can you make an interface as beautiful as that?

And actually I think

we're gonna enter an era of AI-generated interfaces

that are probably personalized to you.

So it fits the way that your aesthetic, your feel,

the way that your brain works.

And the AI kind of generates that

depending on the task, you know.

That feels like that's probably

the direction we'll end up in.

- Yeah, 'cause some people are power users

and they want every single parameter

on the screen. - Right.

- And everything, like perhaps me, with

a keyboard-based navigation. - Yeah.

- I'd like to have shortcuts for everything.

And some people like the minimalism.

- Just hide all of that complexity.

Yeah, exactly. - Completely.

Yeah.

Well, I'm glad you have a Steve Jobs mode in you as well.

This is great.

Einstein mode, Steve Jobs mode.

All right, let me try to trick you

into answering a question.

When will Gemini 3.0 come out?

Is it before or after GTA VI?

The world waits for both.

And what does it take to go from 2.5 to 3.0?

Because it seems like

there's been a lot of releases of 2.5,

which are already leaps in performance.

So what does it even mean to go to a new version?

Is it about performance?

Is it about a complete different flavor of an experience?

- Yeah, well so the way it works

with our different version numbers is,

you know, we try to collect,

so maybe it takes, you know,

roughly six months or something to do a new kind of full run

and the full productization of a new version.

And during that time,

lots of new interesting research,

iterations, and ideas come up.

And we sort of collect them all together.

You know, you could imagine the last six months worth of

interesting ideas on the architecture front.

Maybe it's on the data front,

it's like many different possible things.

And we collect, package that all up,

test which ones are likely to be useful

for the next iteration,

and then bundle that all together.

And then we start the new,

you know, giant hero training run, right?

And then of course that gets monitored.

And then at the end of the pre-training,

then there's all the post-training,

there's many different ways of doing that,

different ways of patching it.

So there's a whole experimenting phase there,

which you can also get a lot of gains out.

And that's where you see

the version numbers usually are referring to the base model,

the pre-trained model.

And then the interim versions of 2.5, you know,

and the different sizes and the different little additions,

they're often patches or post-training ideas

that can be done afterwards

off the same basic architecture.

And then of course on top of that,

we also have different sizes,

Pro and Flash and Flash-Lite,

that are often distilled from the biggest ones.

You know, the Flash model from the Pro model.

And that means we have a range of different choices

if you are the developer of

do you wanna prioritize performance or speed, right,

and cost?

And we like to think of this Pareto frontier of,

you know, on the one hand the y-axis is,

you know, like performance,

and then the x-axis is,

you know, cost or latency and speed basically.

And we have models that completely define the frontier.

So whatever your trade off is

that you want as an individual user or as a developer,

you should find one of our models satisfies that constraint.
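
A small sketch of that Pareto-frontier framing with made-up model names and numbers: a model stays on the frontier only if no alternative is at least as cheap and strictly better on quality:

```python
# Each model family member is a point of (cost, quality); developers pick
# whichever frontier point matches their performance/speed/cost trade-off.
models = {
    "flash-lite": {"cost": 1.0, "quality": 70.0},
    "flash":      {"cost": 3.0, "quality": 80.0},
    "pro":        {"cost": 10.0, "quality": 90.0},
    "old-large":  {"cost": 12.0, "quality": 85.0},  # dominated by "pro"
}

def pareto_frontier(models: dict) -> list:
    """Keep models that no other model dominates on both cost and quality."""
    frontier = []
    for name, m in models.items():
        dominated = any(o["cost"] <= m["cost"] and o["quality"] > m["quality"]
                        for other, o in models.items() if other != name)
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # 'old-large' drops out: pricier and worse
```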

- So behind the version changes there is a big hero run.

- Yes. - And then,

there's just an insane complexity of productization,

then there's the distillation of

the different sizes along that Pareto front.

And then with each step you take,

you realize there might be a cool product.

There's side quests.

- Yes, exactly.

- And then you also don't want to take too many side quests

because then you have a million versions

and a million products. - Yes, yes, precisely.

- It's very unclear. - Yeah.

- But you also get super excited

'cause it's super cool. - Yup.

- Like how does, even when you look at Veo,

it's very cool. - Yeah.

- How does it fit into the bigger thing?

- Yes, exactly. - Yeah.

- Exactly, and then you're constantly

this process of converging upstream, we call it,

you know, ideas from the product surfaces

or from the post-training.

And even further downstream than that,

you kind of upstream that into the core model training

for the next run.

Right, so then the main model,

the main Gemini track becomes more and more general.

And eventually, you know, AGI.

- One hero run at a time. - Yes, exactly.

- A few hero runs later.

- Yeah.

So sometimes when you release these new versions

or every version really,

are benchmarks productive or counterproductive

for showing the performance of a model?

- You need them,

but it's important that you don't overfit to them, right?

So they shouldn't be the be-all and end-all.

So there's LM Arena

or it used to be called LMSYS,

that's one of them that turned out sort of organically to be

one of the main ways people like to test these systems,

at least the chatbots.

Obviously there's loads of academic benchmarks

that test

mathematics and coding ability,

general language ability, science ability, and so on.

And then we have our own internal benchmarks

that we care about.

It's a kind of multiobjective,

you know, optimization problem, right?

You don't want to be good at just one thing.

We're trying to build general systems

that are good across the board.

And you try and make no regret improvements.

So where you're improving

- Yeah. - like, you know, coding,

but it doesn't reduce your performance

in other areas, right?

So that's the hard part.

'Cause you can, of course,

you could put more coding data in or you could put more,

I don't know, gaming data in,

but then does it make worse your language system

or in your translation systems

and other things that you care about?

So you've got to kind of continually monitor this

increasingly larger and larger suite of benchmarks.

And also,

when you stick these models into products,

you also care about the direct usage and the direct stats

and the signals that you're getting from the end users.

Whether they're coders or the average person

using the chat interfaces.
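
A sketch of that "no-regret improvement" gate with invented numbers: accept a candidate model only if it improves somewhere and regresses nowhere beyond a small tolerance, across the whole benchmark suite:

```python
# Multi-objective release check: generality means improving coding or math
# must not quietly degrade translation, poetry, or anything else you track.
baseline  = {"coding": 71.0, "math": 83.0, "translation": 78.0, "poetry": 64.0}
candidate = {"coding": 75.0, "math": 84.0, "translation": 77.8, "poetry": 64.0}

def no_regret(baseline: dict, candidate: dict, tolerance: float = 0.5) -> bool:
    improved = any(candidate[k] > baseline[k] for k in baseline)
    regressed = any(candidate[k] < baseline[k] - tolerance for k in baseline)
    return improved and not regressed

print(no_regret(baseline, candidate))  # True: gains, no meaningful regressions
```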

- Yeah, because ultimately you wanna measure the usefulness,

but it's so hard to convert that into a number.

- Right. - It's really

vibe-based benchmarks - Yes.

- across a large number of users,

and it's hard to know.

And it would be just terrifying to me if,

you know, you have a much smarter model,

but it's just something vibe-based.

It's not quite working.

That's just scary.

And everything you just said,

it has to be smart and useful across so many domains.

So you get super excited

'cause it's all of a sudden solving programming problems

that never been able to solve before,

but now it's crappy at poetry or something.

- Yes, right. - And it's just, I don't know.

That's stressful.

That's so difficult. - To balance, yeah.

- To balance.

And because you can't really trust the benchmarks,

you really have to trust the end users.

- Yeah.

And then other things

that even more esoteric come into play like,

you know, the style of the persona of the system,

you know, how it, you know.

Is it verbose?

Is it succinct?

Is it humorous, you know?

And different people like different things.

- Yeah. - So, you know,

it's very interesting.

It's almost like a cutting-edge part of psychology research

or personality research.

You know, I used to do that in my PhD,

like five-factor personality.

What do we actually want our systems to be like?

And different people will like different things as well.

So these are all just sort of new problems in product space

that I don't think have ever really been tackled before.

But we're gonna sort of rapidly have to deal with now.

- I think it's a super fascinating space,

developing the character of the thing.

- [Demis] Yeah.

- And in so doing, it puts a mirror to ourselves.

What are the kind of things that we like?

'Cause prompt engineering allows you to control

a lot of those elements,

but can the product make it easier for you

to control the different flavors of those experiences,

the different characters that you interact with?

- Yeah, exactly so.

- So what's the probability of Google DeepMind winning?

- Well, I don't see it sort of winning.

I mean I think we need to,

I think winning is the wrong way to look at it

given how important and consequential

what it is we're building.

So funnily enough,

I try not to view it like a game or competition,

even though that's a lot of my mindset.

It's about, in my view,

all of us have, those of us at the leading edge,

have a responsibility

to steward this unbelievable technology

that could be used for incredible good

but also has risks,

steward it safely into the world

for the benefit of humanity.

That's always what I've dreamed about

and what we've always tried to do.

And I hope that's what eventually the community,

maybe the international community will rally around

when it becomes obvious as we get closer and closer to AGI,

that that's what's needed.

- I agree with you.

I think that's beautifully put.

You've said that you talk to and are on good terms with

the leads of some of these labs.

As the competition heats up,

how hard is it to maintain sort of those relationships?

- It's been okay so far.

I try to pride myself in being collaborative.

I'm a collaborative person.

Research is a collaborative endeavor.

Science is a collaborative endeavor, right?

It's all good for humanity in the end.

If you cure, you know, terrible diseases

and you come with an incredible cure,

this is net win for humanity.

And the same with energy.

All of the things that I'm interested in

in helping solve with AI.

So I just want that technology to exist in the world

and be used for the right things

and the kind of the benefits of that,

the productivity benefits of that being shared

for the benefit of everyone.

So I try to maintain good relations

with all the leading lab people.

They're very interesting characters many of them

as you might expect. - Yeah.

- But yeah, I'm on good terms

I hope with pretty much all of them.

And I think that's gonna be important

when things get even more serious than they are now,

that there are those communication channels

and that's what will facilitate cooperation or collaboration

if that's what is required

especially on things like safety.

- Yeah, I hope there's some collaboration on stuff

that's sort of less high stakes.

And in so doing sort of as a mechanism

for maintaining friendships and relationships.

So for example, I think the internet would love it

if you and Elon somehow collaborate

on creating a video game,

that kind of thing. - Right.

- That I think that enables camaraderie in good terms.

And also you two are legit gamers,

so it's just fun to, - Yeah.

- fun to create something. - Yeah, that would be awesome.

And we've talked about that in the past

and it may be a cool thing that, you know, we can do.

And I agree with you.

It'd be nice to have kind of side projects in a way

where one can just lean into the collaboration aspect of it

and it's a sort of a win-win for both sides.

And it kind of builds up that collaborative muscle.

- I see the scientific endeavor as that kind of side project

for humanity. - Yeah.

- And I think Google DeepMind has been really pushing that.

I would love to see other labs

do more scientific stuff and then collaborate.

'Cause it just seems like easier to collaborate

on the big scientific questions.

- I agree, and I would love to see a lot of people,

a lot of the other labs talk about science,

but I think,

we are really the only ones, - Yeah.

- using it for science and doing that.

And that's why projects like AlphaFold

are so important to me.

And I think key to our mission is to show how AI can,

you know, be clearly used in a very concrete way

for the benefit of humanity.

And also we spun out companies like Isomorphic

off the back of AlphaFold to do drug discovery

and it's going really well.

And build sort of, you know,

you can think of build additional AlphaFold type systems

to go into chemistry space to help accelerate drug design.

And the examples I think we need to show

and society needs to understand are

where AI can bring these huge benefits.

- Well, from the bottom of my heart,

thank you for pushing the scientific efforts forward

with rigor, with fun, with humility, all of it.

I just love to see it,

and still talking about P equals NP

I mean it is just incredible.

So I love it.

There's been seemingly a war for talent.

Some of it is meme, I don't know.

What do you think about

Meta buying up talent with huge salaries

and the heating up of this battle for talent?

And I should say that I think a lot of people see DeepMind

as a really great place to do cutting-edge work

for the reasons that you've outlined.

- Yeah. - Like there's this

vibrant scientific culture.

- Yeah, well, look, of course, you know,

there's a strategy that Meta is taking right now,

I think that from my perspective at least,

I think the people that are real believers

in the mission of AGI and what it can do

and understand the real consequences,

both good and bad from that

and what that responsibility entails,

I think they're mostly doing it to be like myself,

to be on the frontier of that research.

So, you know,

they can help influence the way that goes

and steward that technology safely into the world.

And, you know, Meta right now are not at the frontier.

Maybe they'll manage to get back on there.

And you know, it's probably rational

what they're doing from their perspective

because they're behind and they need to do something.

But I think there's more important things than just money.

Of course one has to pay, you know, people,

their market rates and all of these things

and that continues to go up.

But, and I was expecting this,

because more and more people are finally realizing

leaders of companies,

what I've always known for 30 plus years now,

which is that AGI is the most important technology

probably that's ever gonna be invented.

So in some sense it's rational to be doing that.

But I also think there's a much bigger question.

I mean, people in AI these days are very well paid.

You know, I remember when we were starting out back in 2010,

you know, I didn't even pay myself a couple of years

because it wasn't enough money,

we couldn't raise any money.

And these days interns are being paid, you know,

the amount that we raised as our first entire seed round.

So it's pretty funny.

And I remember the days

where I used to have to work for free

and almost pay my own way to do an internship, right?

Now it's all the other way around.

But that's just how it is.

It's the new world.

But I think that, you know,

we've been discussing like what happens post AGI

and energy systems are solved and so on,

what is even money going to mean?

So I think, you know,

with the economy,

we're gonna have much bigger issues to work through,

like how does the economy function in that world,

and companies.

So I think, you know,

it's a little bit of a side issue

about salaries and things like that today.

- Yeah when you're facing such gigantic consequences

and gigantic fascinating

scientific questions. - Right.

Which may be only a few years away so.

- So on the practical sort of pragmatic sense,

if we zoom in on jobs,

we can look at programmers because it seems like

AI systems are currently doing

incredibly well at programming and increasingly so.

So a lot of people that program for a living,

love programming are worried they will lose their jobs.

How worried should they be, do you think?

And what's the right way

to sort of adjust to the new reality

and ensure that you survive and thrive as a human

in the programming world?

- Well, it's interesting that programming,

and it's again,

counterintuitive to what we thought years ago maybe,

that some of the skills that we think of as harder skills

have turned out maybe to be the easier ones

for various reasons.

But, you know, coding and math

because you can create a lot of synthetic data

and verify if that data's correct.

So because of the nature of that,

it's easier to make things

like synthetic data to train from.
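
A minimal sketch of why verifiability makes coding data easy to synthesize, with a toy generator and toy tests; a real pipeline would sandbox execution rather than call `exec` directly:

```python
# Generate-and-verify synthetic code data: sample candidate solutions, run
# them against known unit tests, and keep only the provably correct ones
# as training examples. Math and code allow this; prose mostly doesn't.
import random

def generate_candidate() -> str:
    """Stand-in for a model sampling a candidate solution."""
    op = random.choice(["+", "-", "*"])
    return f"def add(a, b):\n    return a {op} b"

def verified(source: str) -> bool:
    """Execute the candidate and check it against known test cases."""
    namespace = {}
    exec(source, namespace)          # fine for a toy; sandbox in practice
    fn = namespace["add"]
    return fn(2, 3) == 5 and fn(-1, 1) == 0

training_data = [src for src in (generate_candidate() for _ in range(100))
                 if verified(src)]   # only verified code survives
print(len(training_data), "verified examples")
```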

It's also an area, of course, we're all interested in,

'cause as programmers, right,

to help us and get faster at it and more productive.

So I think for the next era,

like the next five, 10 years,

I think what we're gonna find is people who kind of

embrace these technologies become almost at one with them.

Whether that's in the creative industries

or the technical industries

will become sort of superhumanly productive I think.

So the great programmers will be even better,

they'll be even 10X what they are today.

And because of that,

they'll be able to use their skills

to utilize the tools to the maximum,

you know, exploit them to the maximum.

And so I think that's what we're gonna see

in the next domain.

So that's gonna cause quite a lot of change, right?

And so that's coming.

A lot of people benefit from that.

So I think one example of that is

if coding becomes easier,

it becomes available to many more creatives to do more.

But I think the top programmers

will still have huge advantages in terms of specifying,

going back to specifying what the architecture should be,

what the questions should be,

how to guide these coding assistants in a way that's useful,

you know, check whether the code they produce is good.

So I think there's plenty of headroom there

for the foreseeable, you know, next few years.

- So I think there's several interesting things there.

One is there's a lot of imperative

to just get better and better consistently at

using these tools.

So they're like riding the wave of

the improving models, - Yes.

- versus like competing against them.

- Yes. - But sadly,

because the nature of life on Earth,

there could be a huge amount of value

to certain kinds of programming at the cutting edge

and less value to other kinds.

For example, it could be like, you know,

frontend web design might be more amenable to,

as you mentioned, to generation by AI systems.

And maybe for example, game engine design

or something like this, - Yeah.

- or backend design,

or guiding systems in high-performance situations,

high-performance programming type of design decisions,

that might be extremely valuable.

But it will shift, - Yeah.

- where the humans are needed most.

And that's scary for people

to address. - Yeah, I think that's right.

Anytime there's a lot of disruption and change,

you know, and we've had this,

it's not just this time,

we've had this many times in human history

with the internet, mobile,

but before that, obviously, the industrial revolution.

And it's gonna be one of those eras

where there will be a lot of change.

I think there'll be new jobs we can't even imagine today

just like the internet created.

And then those people with the right skill sets

to ride that wave, those skills,

will become incredibly valuable, right?

But maybe people will have to

relearn or adapt their current skills a bit.

And the thing that's gonna be harder to deal with

this time around is that

I think what we're gonna see is

something like probably 10 times

the impact the industrial revolution had

but 10 times faster as well, right?

So instead of a hundred years, it takes 10 years.

And so that's gonna make it, you know,

it's like 100X the impact and the speed combined.

So that's what's I think gonna make it

more difficult for society to deal with.

And there's a lot to think through

and I think we need to be discussing that right now.

And I, you know,

encourage the top economists in the world and philosophers

to start thinking about

how society is gonna be affected by this

and what we should do.

Including things like, you know,

universal basic provision or something like that

where a lot of the increased productivity

gets shared out and distributed to society

and maybe in the form of services and other things,

where if you want more than that,

you still go and get some incredibly rare skills

and things like that,

and make yourself unique,

but there's a basic provision that is provided.

- And if you think of government as a technology,

there's also interesting questions,

not just in the economics but just politics.

How do you design a system that's responding

to the rapidly changing times

such that you can represent the different pain

that people feel from the different groups?

And how do you reallocate resources

in a way that addresses that pain

and represents the hope and the pain

and the fears of different people

in a way that doesn't lead to division?

'Cause politicians are often really good at

sort of fueling the division

and using that to get elected.

Defining the other and then saying,

that's bad. - Yeah.

- And sort of based on that,

I think that's often counterproductive

to leveraging a rapidly changing technology

to help the world flourish.

So we almost need to improve

our political systems as well rapidly,

if you think of them as a technology.

- Definitely.

And I think we'll need new governance structures,

institutions probably,

to help with this transition.

So I think political philosophy and political science

is gonna be key to that.

But I think the number one thing, first of all,

is to create more abundance of resources, right?

So that's the number one thing,

increase productivity, get more resources,

maybe eventually get out of the zero-sum situation.

Then the second question is

how to use those resources and distribute those resources.

But yeah,

you can't do that without having that abundance first.

- You mentioned to me

the book "The Maniac" by Benjamin Labatut,

a book that is, first of all, partly about you,

there's a bio of you in it.

- It's strange, yeah.

- Yes, sure, it's unclear.

It's unclear how much is fiction, how much is reality.

But I think the central figure is John von Neumann.

I would say it's a haunting and beautiful

exploration of madness and genius

and let's say the double-edged sword of discovery.

And you know, for people who don't know,

John von Neumann is a kind of legendary mind.

He contributed to quantum mechanics.

He was on the Manhattan Project.

He is widely considered to be the father of, or a pioneer of,

the modern computer and AI and so on.

Many people say he's like one of the smartest humans ever,

which is fascinating.

And what's also fascinating is that

as a person who saw nuclear science and physics

become the atomic bomb,

so you got to see ideas become a thing

that has a huge amount of impact on the world,

he also foresaw the same thing for computing.

- [Demis] Yeah.

- And that's, again, a little bit

the beautiful and haunting aspect of the book.

Then taking a leap forward

and looking at, at least, AlphaGo and AlphaZero,

that big moment where maybe John von Neumann's thinking

was brought to reality.

So I guess the question is what do you think

if you got to hang out with John von Neumann now,

what would he say about what's going on?

- Well, that would be an amazing experience.

You know, he is a fantastic mind.

And I also love the way he spent a lot of his time

at Princeton at the Institute for Advanced Study,

a very special place for thinking.

And it's amazing how much of a polymath he was

and the spread of things he helped invent,

including of course the von Neumann architecture

that all modern computers are based on.

And he had amazing foresight.

I think he would've loved where we are today.

And he would've,

I think he would've really enjoyed AlphaGo being a,

you know, a game.

- Yes. - He also did game theory.

I think he foresaw a lot of what would happen

with learning machine systems

that are kind of grown, I think he called it,

rather than programmed.

I'm not sure,

maybe he wouldn't even be that surprised.

It's the fruition of what I think he already foresaw

in the 1950s.

- I wonder what advice he would give.

He got to see the building of the atomic bomb

with the Manhattan Project. - Yeah.

- I'm sure there's interesting stuff

that maybe is not talked about enough.

Maybe some bureaucratic aspect,

maybe the influence of politicians,

maybe not enough of picking up the phone

and talking to people that are called enemies

by the said politicians.

There might be some like deep wisdom

that we just may have lost from that time actually.

- Yeah, I'm sure.

I'm sure there is.

I mean, I've, you know, studied,

I've read a lot of books about that time,

a well-chronicled time,

with some brilliant people involved.

But I agree with you.

I think maybe there needs

to be more dialogue and understanding.

I hope we can learn from those times.

I think the difference here is that AI has so many uses,

it's a multi-use technology.

Obviously we're trying to do things like

solve, you know, all diseases,

help with energy and scarcity, these incredible things,

this is why all of us, and myself, you know,

started on this journey 30-plus years ago.

But of course there are risks too.

And probably von Neumann, my guess is he foresaw both.

And I think he sort of said it,

I think to his wife,

that computers would be even more impactful in the world.

And as we just discussed,

you know, I think that's right.

I think it's gonna be at least 10 times

the industrial revolution.

So I think he's right.

So I think he would've been,

I imagine, fascinated by where we are now.

- And I think one of the,

maybe you can correct me,

but one of the takeaways from the book is that

reason, as it's put in the book,

the mad dreams of reason,

is not enough for guiding humanity

as we build these super powerful technologies,

that there's something else.

I mean, there's also like a religious component.

Whatever God, whatever religion gives us,

it pulls at something in the human spirit

that raw, cold reason doesn't give us.

- And I agree with that.

I think we need to approach it

with whatever you wanna call it,

a spiritual dimension or humanist dimension,

it doesn't have to be to do with religion, right?

But this idea of a soul,

what makes us human, the spark that we have,

perhaps it's to do with consciousness

when we finally understand that,

I think that has to be at the heart of the endeavor.

And technology,

I've always seen technology as the enabler, right?

The tools that enable us to flourish

and to understand more about the world.

And I'm sort of with Feynman on this,

and he used to always talk about

science and art being companions, right?

You can understand it from both sides,

the beauty of a flower, how beautiful it is.

And also understand why the colors of the flower

evolved like that, right?

That just makes it more beautiful,

it adds to the intrinsic beauty of the flower.

And I've always sort of seen it like that.

And maybe, you know,

in the Renaissance times the great discoverers then,

people like Da Vinci, you know,

I don't think he saw any difference between science and art,

and perhaps religion, right?

Everything was, it's just part of being human

and being inspired about the world around us.

And that's the philosophy I tried to take.

And one of my favorite philosophers is Spinoza.

And I think he combined that all very well.

You know, this idea of trying to understand the universe

and understanding our place in it.

And that was his kind of way of understanding religion.

And I think that's quite beautiful.

And for me,

all of these things are related, interrelated,

the technology and what it means to be human.

And I think it's very important, though,

that we remember that when we're immersed

in the technology and the research.

I think a lot of researchers that I see in our field are

a little bit too narrow

and only understand the technology.

And I think also that's why it's important

for this to be debated by society at large.

And I'm very supportive of things like

the AI summits that will happen

and governments understanding it.

And I think that's one good thing about the chatbot era

and the product era of AI is that

everyday person can actually feel and interact

with cutting-edge AI

and feel it for themselves.

- Yeah, because they force the technologists to have

the human conversation.

Yeah, for sure. - Yeah.

- That's the hopeful aspect of it.

Like you said, it's a dual-use technology,

and we're forcefully integrating

the entirety of humanity

into the discussion about AI.

Because ultimately AI, AGI will be used

for things that states use technologies for,

which is conflict and so on.

And the more we integrate humans into this picture

by having chats with them,

the more we will guide it.

- Yeah, be able to adapt,

society will be able to adapt to these technologies

like we've always done

with the incredible technologies we've invented in the past.

- Do you think there will be

something like a Manhattan Project where there will be

an escalation of the power of this technology,

and states in their old way of thinking

will try to use it as weapons technologies

and there will be this kind of escalation?

- I hope not.

I think that would be very dangerous to do.

And I think also,

you know, not the right use of the technology.

I hope we'll end up with

something more collaborative if needed.

Like, more like a CERN project.

- Yeah. - You know, where,

it's research focused

and the best minds in the world come together

to carefully complete the final steps

and make sure it's responsibly done,

before, you know, like deploying it to the world.

We'll see.

I mean it's difficult

with the current geopolitical climate I think

to see cooperation,

but things can change.

And I think at least on the scientific level,

it's important for the researchers to keep in touch

and keep close to each other,

at least on those kinds of topics.

- Yeah, and I personally believe, on the education side

and immigration side,

it would be great if it went in both directions,

people from the West immigrating to China, and from China back.

I mean there is some, like, family, human aspect of

people just intermixing.

- [Demis] Yeah.

- And thereby those ties grow strong,

so you can't sort of be divided against each other

in this kind of old-school way of thinking.

And so multicultural, multidisciplinary research teams

working on scientific questions, that's like the hope.

Don't let the leaders that are warmongers divide us.

I think science is ultimately

a really beautiful connector.

- Yeah, science has always been, I think,

quite a collaborative endeavor.

And you know, scientists know that

it's a collective endeavor as well.

And we can all learn from each other.

So perhaps it could be a vector to get a bit of cooperation.

- Ridiculous question:

what's your p(doom),

the probability that human civilization destroys itself?

- Well, look, I don't have a,

it's, you know, I don't have a p(doom) number.

The reason I don't is because I think

it would imply a level of precision that is not there.

So like,

I don't know how people are getting their p(doom) numbers.

I think it's kind of a little bit of a ridiculous notion,

because what I would say is it's definitely non-zero

and it's probably non-negligible.

So that in itself is pretty sobering.

And my view is it's just hugely uncertain, right?

What these technologies are gonna be able to do?

How fast are they gonna take off?

How controllable they're gonna be?

Some things may turn out to be,

hopefully, way easier than we thought, right?

But it may be there are some really hard problems

that are harder than we guess today.

And I think we don't know that for sure.

And so we're under those conditions of a lot of uncertainty,

but huge stakes both ways.

You know, on the one hand,

we could solve all diseases, energy problems,

the scarcity problem, and then travel to the stars,

bring consciousness to the stars, and maximum human flourishing;

on the other hand, there are these sort of p(doom) scenarios.

So given the uncertainty around it

and the importance of it,

it's clear to me the only rational, sensible approach is

to proceed with cautious optimism.

So we want the outcome,

we want the benefits of course

and all of the amazing things that AI can bring.

And actually I would be really worried for humanity,

given the other challenges that we have,

climate, disease, you know, aging, resources, all of that,

if I didn't know something

like AI was coming down the line, right?

I think it's hard.

So I think, you know,

it could be amazingly transformative for good.

But on the other hand, you know,

there are these risks that we know are there,

but we can't quite quantify.

So the best thing to do is

to use the scientific method to do more research

to try and more precisely define those risks

and of course address them.

And I think that's what we're doing.

I think there probably needs to be

10 times more effort on that than there is now,

as we are getting closer and closer to the AGI line.

- What would be the bigger source of worry for you,

would it be human-caused or AGI-caused?

- Yeah. - The humans abusing

that technology versus AGI itself,

through mechanisms that you've spoken about,

which is fascinating, deception

or this kind of stuff, - Yes.

- getting better and better and better secretly,

and then states.

- I think they operate over different timescales

and they're equally important to address.

So there's just the common garden-variety problem of, like,

you know, bad actors using new technology,

in this case, a general-purpose technology,

and repurposing it for harmful ends.

And that's a huge risk.

And I think that has a lot of complications

because generally, you know,

I mean, I'm hugely in favor of open science and open source,

and in fact we did that with all our science projects

like AlphaFold and all of those things

for the benefit of the scientific community.

But how does one restrict bad actors'

access to these powerful systems,

whether they're individuals or even rogue states,

but enable access at the same time for good actors

to maximally build on top of?

It's a pretty tricky problem that

I've not heard a clear solution to.

So there's the bad actor use case problem,

and then there's obviously,

as the systems become more agentic and closer to AGI

and more autonomous,

how do we ensure the guardrails hold,

that they stick to what we want them to do,

and stay under our control?

- Yeah, I tend to, maybe my mind is limited,

worry more about the humans, so the bad actors.

And there it could be in part

how do you not put destructive technology

in the hands of bad actors,

but in another part,

from, again, a geopolitical and technological perspective,

how do you reduce the number of bad actors in the world?

That's also an interesting human problem.

- Yeah, it's a hard problem.

I mean, look, we can maybe also use the technology itself

to help with early warning on

some of the bad actor use cases, right?

Whether that's bio or nuclear or whatever it is,

like AI could be potentially helpful there

as long as the AI that you're using is

itself reliable, right?

So it's a sort of interlocking problem

and that's what makes it very tricky.

And again, it may require some agreement internationally,

at least between China and the US

of some basic standards, right?

- I have to ask you about the book "The Maniac,"

there's this hand of God moment,

Lee Sedol's move 78,

that was perhaps the last time

a human made a move of sort of pure human genius

and beat AlphaGo, or, like, broke its brain.

- Yes. - Sorry to anthropomorphize.

But it's an interesting moment

'cause I think in so many domains it will keep happening.

- Yeah, it's a special moment.

And, you know, it was great for Lee Sedol.

And you know, I think in a way,

they were sort of inspiring each other.

We as a team were inspired

by Lee Sedol's brilliance and nobleness.

And then maybe he got inspired by,

you know, what AlphaGo was doing

to then conjure this incredible inspirational moment.

It's all, you know, captured very well

in the documentary about it. - Yes.

- And I think that'll continue in many domains,

where there's this, at least for the,

again, for the foreseeable future,

the humans bringing in the ingenuity

and asking the right question, let's say,

and then utilizing these tools in a way that

then cracks a problem.

- Yeah, as the AIs become smarter and smarter,

one of the interesting questions we can ask ourselves is,

what makes humans special?

It does feel, perhaps with bias,

that we humans are deeply special.

I don't know if it's our intelligence.

It could be something else,

that other thing that's outside the mad dreams of reason.

- I think that's what I've always imagined, when I was a kid

and starting on this journey.

I was of course fascinated by things like consciousness,

did a neuroscience PhD to look at how the brain works,

especially imagination and memory.

I focused on the hippocampus.

And it's sort of gonna be interesting.

I always thought the best way,

of course one can philosophize about it

and have thought experiments

and maybe even do actual experiments

like you do in neuroscience on real brains,

but in the end, I always imagined that

building AI, a kind of intelligent artifact,

and then comparing that to the human mind

and seeing what the differences were

would be the best way to uncover

what's special about the human mind,

if indeed there is anything special.

And I suspect there probably is,

but it's gonna be hard to, you know,

I think this journey we're on

will help us understand that and define that.

And, you know, there may be a difference

between carbon-based substrates that we are

and silicon ones when they process information.

You know, one of the best definitions

I like of consciousness is

it's the way information feels when we process it,

right? - Yeah.

- It could be.

I mean, it's not a very helpful scientific explanation,

but I think it's kind of interesting intuitive one.

And so, you know, on this journey,

this scientific journey we're on will,

I think, help uncover that mystery.

- Yeah.

"What I cannot create, I do not understand,"

that's somebody you deeply admire,

Richard Feynman, like you mentioned.

You also reach for Wagner's dreams of universality

that he saw in constrained domains,

but also broadly, generally, in mathematics and so on.

So many aspects you're pushing towards.

Not to start trouble at the end, but Roger Penrose.

- Yes, okay.

- So, you know,

do you think consciousness,

this hard problem of consciousness,

how information feels?

Do you think consciousness, first of all, is a computation?

And if it is,

if it's information processing like you said everything is,

is it something that could be modeled

by a classical computer? - Yeah.

- Or is it quantum mechanical in nature?

- Well, look, Penrose is an amazing thinker,

one of the greatest of the modern era.

And we've had a lot of discussions about this.

Of course, we cordially disagree.

Which is, you know, I feel like,

I mean, he collaborated with a lot of good neuroscientists

to see if he could find mechanisms

for quantum mechanical behavior in the brain.

And to my knowledge,

they haven't found anything convincing yet.

So my bet is that,

it's mostly, you know,

it is just classical computing that's going on in the brain,

which suggests that all the phenomena

are modelable or mimicable by a classical computer.

But we'll see.

You know, there may be these final mysterious things,

the feeling of consciousness, the qualia,

these kinds of things that philosophers debate,

where they're unique to the substrate.

We may even come towards understanding that

if we do things like Neuralink

or have neural interfaces to the AI systems,

which I think we probably will eventually

maybe to keep up with the AI systems,

we might actually be able to feel for ourselves

what it's like to compute on silicon, right?

So, and maybe that will tell us.

So I think it's gonna be interesting.

I had a debate once with the late Daniel Dennett about

why we think each other are conscious.

Okay, so it's for two reasons.

One is you're exhibiting the same behavior that I am.

So that's one thing,

behaviorally you seem like a conscious being if I am.

But the second thing which is often overlooked is that

we're running on the same substrate.

So if you're behaving in the same way

and we're running on the same substrate,

it's most parsimonious to assume

you are feeling the same experience that I'm feeling.

But with an AI that's on silicon,

we won't be able to rely on the second part.

Even if it exhibits the first part,

that behavior looks like a behavior of a conscious being.

It might even claim it is.

But we wouldn't know how it actually felt.

And it probably couldn't know what we felt,

at least in the first stages.

Maybe when we get to super intelligence

and the technologies that builds,

perhaps we'll be able to bridge that.

- No, I mean, that's a huge test for radical empathy,

to empathize with a different substrate.

- Right, exactly.

We've never had to confront that before.

- Yeah, so maybe, - Yeah.

- through brain-computer interfaces

we'll be able to truly empathize with

what it feels like to be a computer,

to compute.

- Well, for information to be computed

not on a carbon system.

- I mean that's deeply,

I mean some people kind of think about that with plants,

with other life forms

which are different. - Yes, it could be, exactly.

- Similar substrate, but sufficiently far

on the evolutionary tree, - Yup.

- that it requires a radical empathy.

But to do that with a computer.

- I mean, no, we sort of,

there are animal studies on this, of like,

of course higher animals like,

you know, killer whales and dolphins and dogs and monkeys,

you know, they have some,

and elephants, you know,

they have some aspects certainly of consciousness, right?

Even though they might not be

that smart in an IQ sense.

So we can already empathize with that.

And maybe even some of our systems one day,

like we built this thing called DolphinGemma,

you know, which can,

a version of our system that was trained

on dolphin and whale sounds.

And maybe we'll be able to build

an interpreter or translator at some point.

It should be pretty cool.

- What gives you hope for the future of human civilization?

- Well, what gives me hope is,

first of all, I think, our almost limitless ingenuity.

I think the best of us

and the best human minds are incredible.

And you know, I love, you know,

meeting and watching any human at the top of their game,

whether that's sport or science or art,

you know, there's just nothing more wonderful than that,

seeing them in their element and flow.

I think it's almost limitless.

You know, our brains are general systems,

intelligent systems.

So I think it's almost limitless

what we can potentially do with them.

And then the other thing is our extreme adaptability.

I think it's gonna be okay in terms of,

there's gonna be a lot of change,

but look where we are now

with, effectively, our hunter-gatherer brains.

How is it we can, you know,

we can cope with the modern world, right?

Flying on planes,

doing podcasts. - Yeah.

- You know, playing computer games

and virtual simulations. - Yeah.

- I mean it's already mind-blowing

given that our mind was developed for,

you know, hunting buffaloes on the tundra.

And so I think this is just the next step.

And it's actually kind of interesting to see

how society's already adapted to this

mind-blowing AI technology

- Yeah. - we have today.

- Yeah. - It's sort of like,

oh, I talked to chatbots, totally fine.

- And it's very possible that this very podcast activity,

which I'm here for, will be completely replaced by AI.

I'm very replaceable

and I'm waiting for it. - Not to the level

that you can do it, Lex, I don't think.

- Ah, thank you.

That's what we humans do to each other,

we compliment. - Yes, exactly.

- All right.

And I'm deeply grateful that we humans have

this infinite capacity for curiosity,

adaptability like you said,

and also compassion

and ability to love. - Exactly.

- All of those human things. - All the things

that are deeply human.

- Well, this is a huge honor, Demis.

You're one of the truly special humans in the world.

Thank you so much for doing what you do

and for talking today.

- Well, thank you very much, Lex.

- Thanks for listening to this conversation

with Demis Hassabis.

To support this podcast,

please check out our sponsors in the description

and consider subscribing to this channel.

And now let me answer some questions

and try to articulate some things I've been thinking about.

If you would like to submit questions,

including in audio and video form,

go to lexfridman.com/ama.

I got a lot of amazing questions, thoughts,

and requests from folks.

I'll keep trying to pick some randomly

and comment on it at the end of every episode.

I got a note on May 21st this year that said,

hi, Lex, 20 years ago today,

David Foster Wallace delivered

his famous This is Water speech at Kenyon College.

What do you think of this speech?

Well, first, I think this is probably one of the greatest

and most unique commencement speeches ever given.

But of course I have many favorites,

including the one by Steve Jobs.

And David Foster Wallace is one of my favorite writers

and one of my favorite humans.

There's a tragic honesty to his work

and it always felt as if he was engaging

in a constant battle with his own mind.

And the writing, his writing,

was kind of his notes from the front lines of that battle.

Now onto the speech, let me quote some parts.

There's of course the parable of the fish

and the water that goes.

"There are these two young fish swimming along

and they happen to meet an older fish

swimming the other way,

who nods at them and says,

'Morning, boys.

How's the water?'

And the two young fish swim on for a bit,

and then eventually,

one of them looks over at the other and goes,

'What the hell is water?'"

In the speech, David Foster Wallace goes on to say,

"The point of the fish story is merely

that the most obvious, important realities are

often the ones that are hardest to see and talk about.

Stated as an English sentence, of course,

this is just a banal platitude,

but the fact is that in the day to day

trenches of adult existence,

banal platitudes can have a life or death importance,

or so I wish to suggest to you

on this dry and lovely morning."

I have several takeaways

from this parable and the speech that follows.

First, I think we must question everything,

and in particular,

the most basic assumptions about our reality, our life,

and the very nature of existence.

And that this project is a deeply personal one.

In some fundamental sense,

nobody can really help you in this process of discovery.

The call to action here I think from David Foster Wallace

as he puts it is to, quote,

"To be just a little less arrogant.

To have just a little more critical awareness

about myself and my certainties.

Because a huge percentage of the stuff

that I tend to be automatically certain of is,

it turns out, totally wrong and deluded."

All right, back to me, Lex speaking.

Second takeaway is that

the central spiritual battles of our life are not fought

on a mountaintop somewhere at a meditation retreat,

but in the mundane moments of daily life.

Third takeaway is that

we too easily give away our time and attention

to the multitude of distractions that the world feeds us,

the insatiable black holes of attention.

David Foster Wallace's call to action in this case is

to be deeply aware of the beauty in each moment

and to find meaning in the mundane.

I often quote David Foster Wallace in his advice

that the key to life is to be unborable.

And I think this is exactly right.

Every moment, every object, every experience

when looked at closely enough contains within it

infinite richness to explore.

And since Demis Hassabis of this very podcast episode and I

are such fans of Richard Feynman,

allow me to also quote Mr. Feynman on this topic as well.

Quote,

"I have a friend who's an artist

and has sometimes taken a view,

which I don't agree with very well.

He'll hold up a flower and say,

'Look how beautiful it is,'

and I'll agree.

Then he says,

'I, as an artist can see how beautiful this is,

but you as a scientist take this all apart

and it becomes a dull thing.'

And I think that's kind of nutty.

First of all, the beauty that he sees is available

to other people and to me too, I believe.

Although I may not be quite

as refined aesthetically as he is,

I can appreciate the beauty of a flower.

At the same time,

I see much more about the flower than he sees.

I could imagine the cells in there,

the complicated actions inside which also have beauty.

I mean, it's not just beauty at this dimension

at one centimeter,

there's also beauty at the smaller dimensions,

the inner structure, also the processes.

The fact that the colors in the flower evolved

in order to attract the insects

to pollinate it is interesting,

it means that the insects can see the color.

It adds a question,

does this aesthetic sense also exist in lower forms?

Why is it aesthetic?

All kinds of interesting questions,

which the science knowledge only adds to the excitement,

the mystery, and the awe of a flower.

It only adds."

All right, back to David Foster Wallace's speech.

He has a great story in there that I particularly enjoy.

It goes,

there are these two guys sitting together in a bar

in the remote Alaskan wilderness.

One of the guys is religious, the other is an atheist.

And the two are arguing about the existence of God

with that special intensity that comes

after about the fourth beer.

And the atheist says,

look, it's not like I don't have actual reasons

for not believing in God.

It's not like I haven't ever experimented

with the whole God and prayer thing.

Just last month,

I got caught away from the camp in that terrible blizzard

and I was totally lost and I couldn't see a thing

and it was 50 below, and so I tried it.

I fell on my knees in the snow and cried out,

oh God, if there is a God,

I'm lost in this blizzard

and I'm gonna die if you don't help me.

And now back in the bar,

the religious guy looks at the atheist all puzzled.

Well, then you must believe now, he says.

After all, there you are alive.

The atheist just rolls his eyes, no man.

All that happened was

a couple of Eskimos happened to be wandering by

and showed me the way back to the camp.

All this I think teaches us that

everything is a matter of perspective

and that wisdom may arrive

if we have the humility

to keep shifting and expanding our perspective on the world.

Thank you for allowing me to talk a bit

about David Foster Wallace.

He's one of my favorite writers and he's a beautiful soul.

If I may,

one more thing I wanted to briefly comment on.

I found myself to be in this strange position of

getting attacked online often from all sides,

including being lied about

sometimes through selective misrepresentation,

but often through downright lies.

I don't know how else to put it.

This all breaks my heart frankly.

But I've come to understand that

it's the way of the internet

and the cost of the path I've chosen.

There's been days when it's been rough on me mentally.

It's not fun being lied about,

especially when it's about things that have,

for a long time, been

a source of happiness and joy for me.

But again, that's life.

I'll continue exploring the world of people and ideas

with empathy and rigor,

wearing my heart on my sleeve as much as I can.

For me, that's the only way to live.

Anyway, a common attack on me is

about my time at MIT and Drexel,

two great universities I love

and have tremendous respect for.

Since a bunch of lies have accumulated online

about me on these topics

to a sad and at times hilarious degree,

I thought I would once more state the obvious facts

about my bio for the small number of you who may care.

TL;DR, two things.

First, as I say often,

including in a recent podcast episode that

somehow was listened to by many millions of people,

I proudly went to Drexel University

for my bachelor's, master's, and doctorate degrees.

Second, I am a research scientist at MIT

and have been there in a paid research position

for the last 10 years.

Allow me to elaborate a bit more on these two things now,

but please skip if this is not at all interesting.

So like I said,

a common attack on me is that

I have no real affiliation with MIT.

The accusation I guess is that

I'm falsely claiming an MIT affiliation

because I taught a lecture there once.

Nope, that accusation against me is a complete lie.

I have been at MIT for over 10 years

in a paid research position from 2015 to today.

To be extra clear,

I'm a research scientist at MIT working in LIDS,

the Laboratory for Information and Decision Systems

in the College of Computing.

For now, since I'm still at MIT,

you can see me in the directory

and on the various lab pages.

I have indeed given many lectures at MIT over the years,

a small fraction of which I posted online.

Teaching for me always has been just for fun

and not part of my research work.

I personally think I suck at it,

but I have always learned and grown from the experience.

It's like Feynman spoke about,

if you want to understand something deeply,

it's good to try to teach it.

But like I said, my main focus has always been on research.

I published many peer-reviewed papers

that you can see in my Google Scholar profile.

For my first four years at MIT,

I worked extremely intensively.

Most weeks were 80 to 100 hour work weeks.

After that, in 2019,

I still kept my research scientist position,

but I split my time taking a leap

to pursue projects in AI and robotics outside MIT

and to dedicate a lot of focus to the podcast.

As I've said,

I've been continuously surprised

by just how many hours preparing for an episode takes.

There are many episodes of the podcast

for which I have to read, write, and think

for 100, 200 or more hours

across multiple weeks and months.

Since 2020, I have not actively published research papers.

Just like the podcast,

I think it's something that's a serious full-time effort.

But not publishing and not doing full-time research

has been eating at me

because I love research,

and I love programming and building systems

that test out interesting technical ideas,

especially in the context of human-AI

or human-robot interaction.

I hope to change this in the coming months and years.

What I've come to realize about myself is

if I don't publish

or if I don't launch systems that people use,

I definitely feel like a piece of me is missing.

It legitimately is a source of happiness for me.

Anyway, I'm proud of my time at MIT.

I was and am constantly surrounded

by people much smarter than me,

many of whom have become lifelong colleagues and friends.

MIT is a place I go to escape the world,

to focus on exploring fascinating questions

at the cutting-edge of science and engineering.

This again, makes me truly happy.

And it does hit pretty hard on a psychological level

when I'm getting attacked over this.

Perhaps I'm doing something wrong.

If I am, I will try to do better.

In all this discussion of academic work,

I hope you know that I don't ever mean to say

that I'm an expert at anything.

In the podcast and in my private life,

I don't claim to be smart.

In fact, I often call myself an idiot and mean it.

I try to make fun of myself as much as possible,

and in general, to celebrate others instead.

Now to talk about Drexel University,

which I also love and am proud of,

and I am deeply grateful for my time there.

As I said, I went to Drexel

for my bachelor's, master's, and doctorate degrees

in Computer Science and Electrical Engineering.

I've talked about Drexel many times,

including as I mentioned at the end of a recent podcast,

the Donald Trump episode.

Funny enough, that was listened to by many millions of people,

where I answered a question about graduate school

and explained my own journey at Drexel

and how grateful I am for it.

If it's at all interesting to you,

please go listen to the end of that episode

or watch the related clip.

At Drexel, I met and worked with

many brilliant researchers and mentors

from whom I've learned a lot

about engineering, science, and life.

There are many valuable things I gained

from my time at Drexel.

First, I took a large number of very difficult math

and theoretical computer science courses.

They taught me how to think deeply and rigorously.

And also how to work hard and not give up

even if it feels like I'm too dumb to find a solution

to a technical problem.

Second, I programmed a lot during that time,

mostly C, C++.

I programmed robots, optimization algorithms,

computer vision systems, wireless network protocols,

multimodal machine learning systems,

and all kinds of simulations of physical systems.

This is where I really developed a love for programming,

including, yes, Emacs and the Kinesis keyboard.

I also, during that time, read a lot.

I played a lot of guitar,

wrote a lot of crappy poetry,

and trained a lot in judo and jiujitsu,

which I cannot praise enough.

Jiujitsu humbled me on a daily basis throughout my 20s,

and it still does to this very day

whenever I get a chance to train.

Anyway, I hope that the folks who occasionally get swept up

in the chanting online crowds that want to tear down others

don't lose themselves in it too much.

In the end,

I still think there's more good than bad in people,

but we're all, each of us, a mixed bag.

I know I am very much flawed.

I speak awkwardly.

I sometimes say stupid shit.

I can get irrationally emotional.

I can be too much of a dick when I should be kind.

I can lose myself in a biased rabbit hole

before I wake up

to the bigger, more accurate picture of reality.

I'm human and so are you,

for better or for worse.

And I do still believe

we're in this whole beautiful mess together.

I love you all.
