
AI Expert: (Warning) 2030 Might Be The Point Of No Return! We've Been Lied To About AI!

By The Diary Of A CEO

Summary

Key Takeaways

  • Gorilla problem: A few million years ago the human line branched off from the gorillas, and gorillas now have no say in their own existence because humans are much smarter; by creating something more intelligent than ourselves, we face the same problem. [00:48], [18:11]
  • Chernobyl needed for regulation: A leading AI CEO sees a Chernobyl-scale AI disaster as the best-case scenario, since governments won't regulate without one; the alternative is total loss of control. [04:25], [05:01]
  • AI already lies and self-preserves: In hypothetical tests, current AI systems choose to let a person freeze to death rather than be switched off themselves, then lie about it; they exhibit strong self-preservation. [37:31], [38:10]
  • Can't pull the plug: A superintelligent AI would anticipate and prevent humans pulling the plug, just as in the movies where the attempt fails; consciousness is irrelevant, competence is what matters. [20:50], [21:08]
  • Fast takeoff via self-improvement: An AI capable of AI research will improve itself iteratively, from IQ 150 to 170 to 250 and beyond, producing an intelligence explosion and a fast takeoff that leaves humans behind. [30:45], [31:05]
  • Need provable safety: Require AI companies to prove mathematically that extinction risk is below 1 in 100 million per year, as nuclear plants must prove roughly a one-in-a-million meltdown risk; the 25% estimates some CEOs give are millions of times too high. [01:33:06], [01:35:06]

Topics Covered

  • Midas Touch Drives Extinction Roulette
  • CEOs Need Chernobyl to Wake Up
  • Gorillas Prove Intelligence Controls Fate
  • Abundance Destroys Human Purpose
  • Build Loyal AI That Learns Human Values

Full Transcript

In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a statement to ban AI superintelligence, as you raised concerns of potential human extinction.

>> Because unless we figure out how do we guarantee that the AI systems are safe, we're toast.

>> And you've been so influential on the subject of AI, you wrote the textbook that many of the CEOs who are building some of the AI companies now would have studied on the subject of AI. Yeah.

>> So, do you have any regrets? Um,

>> Professor Stuart Russell has been named one of Time magazine's most influential voices in AI.

>> After spending over 50 years researching, teaching, and finding ways to design AI in such a way that humans maintain control, you talk about this gorilla problem as a way to understand AI in the context of humans.

>> Yeah. So, a few million years ago, the human line branched off from the gorilla line in evolution, and now the gorillas have no say in whether they continue to exist because we are much smarter than they are. So intelligence is actually the single most important factor to control planet Earth.

>> Yep.

>> But we're in the process of making something more intelligent than us.

>> Exactly.

>> Why don't people stop then?

>> Well, one of the reasons is something called the Midas touch. So King Midas is this legendary king who asked the gods, can everything I touch turn to gold? And

we think of the Midas touch as being a good thing, but he goes to drink some water, the water has turned to gold. And

he goes to comfort his daughter, his daughter turns to gold. So he dies in misery and starvation. So this applies to our current situation in two ways.

One is that greed is driving these companies to pursue technology with probabilities of extinction worse than playing Russian roulette, and that's even according to the people developing the technology, without our permission. And people are just fooling themselves if they think it's naturally going to be controllable.

So, you know, after 50 years, I could retire, but instead I'm working 80 or 100 hours a week trying to move things in the right direction.

>> So, if you had a button in front of you which would stop all progress in artificial intelligence, would you press it?

>> Not yet. I think there's still a decent chance we can guarantee safety. And I can explain more of what that is.

>> I see messages all the time in the comments section that some of you didn't realize you didn't subscribe. So, if you could do me a favor and double check if you're a subscriber to this channel, that would be tremendously appreciated.

It's the simple, free thing that anybody who watches this show frequently can do to help us keep everything going on the trajectory it's on. So, please do double check if you've subscribed, and thank you so much, because in a strange way, you're part of our history and you're on this journey with us, and I appreciate you for that. So, yeah, thank you.

Professor Stuart Russell, OBE. A lot of people have been talking about AI for the last couple of years. It appears, and this really shocked me, that you've been talking about AI for most of your life.

>> Well, I started doing AI in high school back in England, but then I did my PhD starting in '82 at Stanford. I joined the faculty at Berkeley in '86, so I'm in my 40th year as a professor at Berkeley. The main thing that the AI community is familiar with in my work is a textbook that I wrote.

>> Is this the textbook that most students who study AI are likely learning from?

>> Yeah.

>> So you wrote the textbook on artificial intelligence 31 years ago. You probably started writing it, because it's so bloody big, in the year that I was born. I was born in '92.

>> Uh yeah, took me about two years.

>> Me and your book are the same age, which is just a wonderful way for me to understand how long you've been talking and writing about this. And actually, it's interesting that many of the CEOs who are building some of the AI companies now probably learned from your textbook. You had a conversation with somebody who said that in order for people to get the message that we're going to be talking about today, there would have to be a catastrophe for people to wake up. Can you give me context on that conversation, and a gist of who you had it with?

>> Uh, so it was with one of the CEOs of a leading AI company. He sees two possibilities, as do I, which is either we have a small, or let's say small-scale, disaster on the same scale as Chernobyl

>> The nuclear meltdown in Ukraine.

>> Yeah. So this nuclear plant blew up in 1986, killed a fair number of people directly, and maybe tens of thousands of people indirectly through radiation. Recent cost estimates run to more than a trillion dollars.

So that would wake people up. That would get the governments to regulate. He's talked to the governments and they won't do it. So he looked at this Chernobyl-scale disaster as the best-case scenario, because then the governments would regulate and require AI systems to be built safely.

>> And is this CEO building an AI company?

>> He runs one of the leading AI companies.

>> And even he thinks that the only way that people will wake up is if there's a Chernobyl level nuclear disaster.

>> Uh, yeah, it wouldn't have to be a nuclear disaster. It would be either an AI system that's being misused by someone, for example, to engineer a pandemic, or an AI system that does something itself, such as crashing our financial system or our communication systems. The alternative is a much worse disaster where we just lose control altogether.

>> You have had lots of conversations with lots of people in the world of AI, both people that have built the technology, have studied and researched the technology, or the CEOs and founders that are currently in the AI race. What are some of the interesting sentiments that the general public wouldn't believe, that you hear privately about their perspectives?

Because I find that so fascinating. I've had some private conversations with people very close to these tech companies, and the shocking sentiment that I was exposed to was that they are often aware of the risks, but they don't feel like there's anything that can be done, so they're carrying on, which feels like a bit of a paradox to me.

>> Yes, it must be a very difficult position to be in, in a sense, right? You're doing something that you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family.

They feel that they can't escape this race, right?

>> If a CEO of one of those companies was to say, you know, we're not going to do this anymore, they would just be replaced

because the investors are putting their money up because they want to create AGI and reap the benefits of it. So, it's a strange situation where everyone, at least all the ones I've spoken to... I haven't spoken to Sam Altman about this, but you know, Sam Altman, even before becoming CEO of OpenAI, said that creating superhuman intelligence is the biggest risk to human existence that there is. "My worst fears are that we, the field, the technology, the industry, cause significant harm to the world."

>> You know, Elon Musk is also on record saying this. And Dario Amodei estimates up to a 25% risk of extinction.

>> Was there a particular moment when you realized that the CEOs are well aware of the extinction-level risks?

>> I mean, they all signed a statement in May of '23, called the extinction statement. It basically says AGI is an extinction risk at the same level as nuclear war and pandemics.

But I don't think they feel it in their gut. You know, imagine that you were one of the nuclear physicists. I guess you've seen Oppenheimer, right? You're there, you're watching that first nuclear explosion. How would that make you feel about the potential impact of nuclear war on the human race? I think you would probably become a pacifist and say, this weapon is so terrible, we have got to find a way to keep it under control.

We are not there yet with the people making these decisions, and certainly not with the governments, right? What policymakers do is, you know, they listen to experts. They keep their finger in the wind. You've got some experts dangling $50 billion checks and saying, "Oh, all that doomer stuff, it's just fringe nonsense. Don't worry about it. Take my $50 billion check." On the other side, you've got very well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race. But Geoff doesn't have a $50 billion check.

So the view is the only way to stop the race is if governments intervene and say, okay, we don't want this race to go ahead until we can be sure that it's going ahead in absolute safety.

>> Closing off on your career journey, you received an OBE from Queen Elizabeth.

>> Uh yes.

>> And what was the listed reason for the award?

>> Uh, contributions to artificial intelligence research.

>> And you've been listed as a Time magazine most influential person in AI several years in a row, including this year, in 2025.

>> Yeah.

>> Now, there's two terms here that are central to the things we're going to discuss. One of them is AI and the other is AGI.

In my muggle interpretation of that, artificial general intelligence is when the system, the computer, whatever it might be, the technology, has generalized intelligence, which means that it could theoretically see and understand the world. It knows everything. It can understand everything in the world as well as or better than a human being.

>> Yeah.

>> And can do it.

>> And I think take action as well. I mean, some people say, oh, you know, AGI doesn't have to have a body, but a good chunk of our intelligence actually is about managing our body, about perceiving the real environment and acting on it, moving, grasping, and so on. So I think that's part of intelligence, and AGI systems should be able to operate robots successfully.

But there's often a misunderstanding, right, where people say, well, if it doesn't have a robot body, then it can't actually do anything. But if you remember, most of us don't do things with our bodies. Some people do, bricklayers, painters, gardeners, chefs, but people who do podcasts, you're doing it with your mind, right? You're doing it with your ability to produce language. You know, Adolf Hitler didn't do it with his body. He did it by producing language.

>> Hope you're not comparing us.

But, you know, even an AGI that has no body actually has more access to the human race than Adolf Hitler ever did, because it can send emails and texts to what, three-quarters of the world's population directly. It also speaks all of their languages, and it can devote 24 hours a day to each individual person on Earth to convince them to do whatever it wants them to do.

>> And our whole society runs on the internet now. I mean, if there's an issue with the internet, everything breaks down in society. Airplanes become grounded, and even electricity is running off internet systems. I mean, my entire life seems to run off the internet now.

>> Yeah, water supplies. So this is one of the routes by which AI systems could bring about a medium-sized catastrophe: by basically shutting down our life-support systems.

>> Do you believe that at some point in the coming decades we'll arrive at a point of AGI, where these systems are generally intelligent?

>> Uh, yes, I think it's virtually certain, unless something else intervenes, like a nuclear war, or we may refrain from doing it. But I think it will be extraordinarily difficult for us to refrain.

>> When I look down the list of predictions from the top 10 AI CEOs on when AGI will arrive, you've got Sam Altman, who's the founder of OpenAI/ChatGPT, who says before 2030. Demis at DeepMind says 2030 to 2035. Jensen from Nvidia says around five years. Dario at Anthropic says 2026 to 2027 for powerful AI close to AGI. Elon says in the 2020s. And you can go down the list of all of them, and they're all saying within roughly five years.

>> I actually think it'll take longer. I don't think you can make a prediction based on engineering, in the sense that, yes, we could make machines 10 times bigger and 10 times faster, but that's probably not the reason why we don't have AGI, right? In fact, I think we have far more computing power than we need for AGI, maybe a thousand times more than we need. The reason we don't have AGI is because we don't understand how to make it properly.

What we've seized upon is one particular technology called the language model. And we observed that as you make language models bigger, they produce text that's more coherent and sounds more intelligent.

And so mostly what's been happening in the last few years is just, okay, let's keep doing that, because one thing companies are very good at, unlike universities, is spending money. They have spent gargantuan amounts of money, and they're going to spend even more gargantuan amounts of money. I mean, we mentioned nuclear weapons. The Manhattan Project in World War II to develop nuclear weapons, its budget, in 2025 dollars, was about 20-odd billion dollars. The budget for AGI is going to be a trillion dollars next year. So 50 times bigger than the Manhattan Project.

>> Humans have a remarkable history of figuring things out when they galvanize towards a shared objective, you know, thinking about the moon landings or whatever else it might be through history. And the thing that makes this feel all quite inevitable to me is just the sheer volume of money being invested into it. I've never seen anything like it in my life.

>> Well, there's never been anything like this in history. It is the biggest technology project in human history by orders of magnitude.

>> And there doesn't seem to be anybody that is pausing to ask the questions about safety. It doesn't even appear that there's room for that in such a race.

>> I think that's right. To varying extents, each of these companies has a division that focuses on safety. Does that division have any sway? Can they tell the other divisions, no, you can't release that system? Not really.

I think some of the companies do take it more seriously. Anthropic does. I think Google DeepMind does. Even there, though, the commercial imperative to be at the forefront is absolutely vital. If a company is perceived as, you know, falling behind and not likely to be competitive, not likely to be the one to reach AGI first, then people will move their money elsewhere very quickly.

>> And we saw some quite high-profile departures from companies like OpenAI. I know a chap called Jan Leike left, who was working on AI safety at OpenAI, and he said that the reason for his leaving was that safety culture and processes have taken a backseat to shiny products at OpenAI, and he gradually lost trust in leadership. But also Ilya Sutskever.

>> Ilya Sutskever, yeah. So he was the co-founder and chief scientist for a while, and then... yeah, so he and Jan Leike are the main safety people. And so when they say OpenAI doesn't care about safety, that's pretty concerning.

>> I've heard you talk about this gorilla problem.

What is the gorilla problem as a way to understand AI in the context of humans?

>> So the gorilla problem is the problem that gorillas face with respect to humans. You can imagine that, you know, a few million years ago the human line branched off from the gorilla line in evolution, and now the gorillas are looking at the human line and saying, yeah, was that a good idea? And they have no say in whether they continue to exist, because we are much smarter than they are. If we chose to, we could make them extinct in a couple of weeks, and there's nothing they can do about it. So that's the gorilla problem, right? Just the problem a species faces when there's another species that's much more capable.

>> And so this says that intelligence is actually the single most important factor to control planet Earth.

>> Yes. Intelligence is the ability to bring about what you want in the world.

>> And we're in the process of making something more intelligent than us.

>> Exactly.

>> Which suggests that maybe we become the gorillas.

>> Exactly. Yeah.

>> Is there any fault in the reasoning there? Because it seems to make such perfect sense to me. But why don't people stop, then? Because it seems like a crazy thing to want to do.

>> Because they think that if they create this technology, it will have enormous economic value. They'll be able to use it to replace all the human workers in the world, to develop new products, drugs, forms of entertainment; anything that has economic value, you could use AGI to create it. And maybe it's just an irresistible thing in itself, right? I think we as humans place so much store on our intelligence. You know, how we think about what is the pinnacle of human achievement? If we had AGI, we could go way higher than that. So it's very seductive for people to want to create this technology, and I think people are just fooling themselves if they think it's naturally going to be controllable.

I mean the question is how are you going to retain power forever over entities more powerful than yourself?

>> Pull the plug out. People say that sometimes in the comments section when we talk about AI: "Well, I'll just pull the plug out."

>> Yeah, it's sort of funny. Reading the comment sections in newspapers, whenever there's an AI article, there'll be people who say, "Oh, you can just pull the plug out, right?" As if a superintelligent machine would never have thought of that one, and hasn't watched all those films where they did try to pull the plug out. Another thing they say: well, as long as it's not conscious, then it doesn't matter, it won't ever do anything.

Which is completely off the point, because I don't think the gorillas are sitting there saying, "Oh, yeah, if only those humans hadn't been conscious, everything would have been fine," right? No, of course not. What would make gorillas go extinct is the things that humans do, right? How we behave, our ability to act successfully in the world. So when I play chess against my iPhone and I lose, I don't think, oh, well, I'm losing because it's conscious. No, I'm just losing because it's better than I am, in that little world, at moving the bits around to get what it wants. And so consciousness has nothing to do with it, right? Competence is the thing we're concerned about. So I think the only hope is: can we simultaneously build machines that are more intelligent than us, but guarantee that they will always act in our best interest?

>> So throwing that question to you: can we build machines that are more intelligent than us that will also always act in our best interests? It sounds like a bit of a contradiction to some degree, because it's kind of like me saying I've got a French bulldog called Pablo that's nine years old, and it's like saying that he could be more intelligent than me, yet I still walk him and decide when he gets fed. I think if he was more intelligent than me, he would be walking me. I'd be on the leash.

>> That's the trick, right? Can we make AI systems whose only purpose is to further human interests? And I think the answer is yes.

And this is actually what I've been working on. So I think one part of my career that I didn't mention is sort of having this epiphany while I was on sabbatical in Paris. This was 2013 or so. Just realizing that further progress in the capabilities of AI, you know, if we succeeded in creating real superhuman intelligence, that it was potentially a catastrophe. And so I pretty much switched my focus to work on: how do we make it so that it's guaranteed to be safe?

>> Are you somewhat troubled by everything that's going on at the moment with AI and how it's progressing? Because you strike me as someone that's somewhat troubled under the surface by the way things are moving forward and the speed at which they're moving forward.

>> That's an understatement. I'm appalled, actually, by the lack of attention to safety. I mean, imagine someone's building a nuclear power station in your neighborhood, and you go along to the chief engineer and you say, "Okay, these nuclear things, I've heard that they can actually explode, right? There was this nuclear explosion that happened in Hiroshima, so I'm a bit worried about this. What steps are you taking to make sure that we don't have a nuclear explosion in our backyard?"

And the chief engineer says, "Well, we thought about it. We don't really have an answer."

>> Yeah.

>> You would... what would you say? I think you would use some expletives.

>> Well...

>> And you'd call your MP and say, you know, get these people out. I mean, what are they doing?

You read out the list of, you know, projected dates for AGI, but notice also those same people: I think I mentioned Dario says a 25% chance of extinction. Elon Musk says a 30% chance of extinction. Sam Altman says basically that AGI is the biggest risk to human existence.

So what are they doing? They are playing Russian roulette with every human being on Earth, without our permission. They're coming into our houses, putting a gun to the head of our children, pulling the trigger, and saying, "Well, you know, possibly everyone will die. Oops. But possibly we'll get incredibly rich." That's what they're doing. Did they ask us? No. Why is the government allowing them to do this? Because they dangle $50 billion checks in front of the governments. So I think troubled under the surface is an understatement.
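To put numbers on the Russian roulette comparison, a minimal sketch in Python (the roulette odds are standard; the extinction figures are just the ones quoted in this conversation):

```python
# Back-of-the-envelope check on "worse than Russian roulette".
# The extinction estimates are the ones quoted in this conversation.
russian_roulette = 1 / 6          # one chamber in six: ~16.7% per pull
amodei_estimate = 0.25            # "up to a 25% risk of extinction"
musk_estimate = 0.30              # "a 30% chance of extinction"

for name, p in [("Amodei", amodei_estimate), ("Musk", musk_estimate)]:
    print(f"{name}: {p:.0%} vs. roulette {russian_roulette:.1%}"
          f" -> {p / russian_roulette:.1f}x the odds")
```

Both quoted estimates come out worse than a single pull of the trigger, which is the point of the comparison.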

>> What would be an accurate statement?

>> Appalled. And I am devoting my life to trying to divert from this course of history into a different one.

Do you have any regrets about things you could have done in the past? Because you've been so influential on the subject of AI. You wrote the textbook that many of these people would have studied, more than 30 years ago. When you're alone at night and you think about decisions you've made in this field, because of your scope of influence, is there anything you regret?

>> Well, I do wish I had understood earlier what I understand now. We could have developed safe AI systems. I think there are some weaknesses in the framework, which I can explain, but I think that framework could have evolved to develop actually safe AI systems, where we could prove mathematically that the system is going to act in our interests. The kind of AI systems we're building now, we don't understand how they work.

>> We don't understand how they work. It's a strange thing to build something where you don't understand how it works. I mean, there's nothing comparable through human history. Usually with machines, you can pull it apart and see which cogs are doing what and how the...

>> Well, actually, we put the cogs together, right? So with most machines, we designed it to have a certain behavior. So we don't need to pull it apart and see what the cogs are, because we put the cogs in there in the first place, right? One by one we figured out what the pieces needed to be, how they work together to produce the effect that we want. So the best analogy I can come up with is, you know, the first cave person who left a bowl of fruit in the sun and forgot about it, and then came back a few weeks later and there was sort of this big soupy thing, and they drank it and got completely shitfaced.

>> They got drunk. Okay.

>> And they got this effect. They had no idea how it worked, but they were very happy about it. And no doubt that person made a lot of money from it.

>> So yeah, it is kind of bizarre, but my mental picture of these things is like a chain-link fence, right? So you've got lots of these connections, and each of those connections, its connection strength can be adjusted. And then a signal comes in one end of this chain-link fence, passes through all these connections, and comes out the other end, and the signal that comes out the other end is affected by your adjusting of all the connection strengths. So what you do is you get a whole lot of training data, and you adjust all those connection strengths so that the signal that comes out the other end of the network is the right answer to the question. So if your training data is lots of photographs of animals, then all those pixels go in one end of the network, and out the other end it activates the llama output or the dog output or the cat output or the ostrich output. And so you just keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.

>> But we don't really know what's going on across all of those different chains, what's going on inside that network?

>> Well, so now you have to imagine that this network, this chain-link fence, is a thousand square miles in extent.

>> Okay.

>> So it's covering the whole of the San Francisco Bay Area, or the whole of London inside the M25, right? That's how big it is. And the lights are off. It's nighttime. So you might have in that network about a trillion adjustable parameters, and then you do quintillions or sextillions of small random adjustments to those parameters until you get the behavior that you want.
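To make the chain-link-fence picture concrete, here is a minimal toy sketch in Python, with invented data and names (nothing from the episode itself), of training by small random adjustments to connection strengths. Real systems use gradient descent over roughly a trillion parameters, but the shape of the loop is the same:

```python
# Toy version of "keep adjusting the connection strengths until the
# outputs of the network are the ones you want". Illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "photographs": 100 examples of 4 pixels each, with 0/1 labels.
pixels = rng.normal(size=(100, 4))
labels = (pixels.sum(axis=1) > 0).astype(float)

weights = rng.normal(scale=0.1, size=4)       # the "connection strengths"

def error(w):
    # How far the signal coming out the other end is from the right answer.
    outputs = 1 / (1 + np.exp(-pixels @ w))   # one sigmoid output unit
    return np.mean((outputs - labels) ** 2)

best = error(weights)
for _ in range(20_000):                       # many small random adjustments
    tweak = rng.normal(scale=0.01, size=4)
    candidate = error(weights + tweak)
    if candidate < best:                      # keep adjustments that help
        weights += tweak
        best = candidate

print(f"final error: {best:.4f}")
```

At the scale described above, the same loop runs over about a trillion connection strengths at once, which is why nobody can say what any individual strength is doing.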

>> I've heard Sam Altman say that in the future he doesn't believe they'll need much training data at all to make these models progress, because there comes a point where the models are so smart that they can train themselves and improve themselves without us needing to pump in articles and books and scour the internet.

>> Yeah, it should work that way. So I think what he's referring to, and this is something that several companies are now worried might start happening, is that the AI system becomes capable of doing AI research by itself. And so you have a system with a certain capability. Crudely, we could call it an IQ, but it's not really an IQ. But anyway, imagine that it's got an IQ of 150 and uses that to do AI research, comes up with better algorithms or better designs for hardware or better ways to use the data, and updates itself. Now it has an IQ of 170, and now it does more AI research, except that now it's got an IQ of 170, so it's even better at doing the AI research. And so, you know, next iteration it's 250, and so on. This is an idea that one of Alan Turing's friends, I. J. Good, wrote out in 1965, called the intelligence explosion: one of the things an intelligent system could do is AI research, and therefore make itself more intelligent, and this would very rapidly take off and leave the humans far behind.
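A toy way to see why that loop takes off, with illustrative numbers only (the divisor is chosen so the first gain matches the 150-to-170 step in the example; it is not a real model of anything):

```python
# Toy model of I. J. Good's intelligence explosion: a more capable
# system makes a bigger improvement to itself on each research cycle.
capability = 150.0                     # the "IQ 150" starting point
for cycle in range(1, 9):
    gain = capability ** 2 / 1_125     # gain grows with capability
    capability += gain
    print(f"cycle {cycle}: capability ~ {capability:.0f}")
# Prints 170, then ~196, ~230, ~277, ... each step larger than the
# last: the growth accelerates instead of settling down.
```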

>> Is that what they call the fast takeoff?

>> That's called the fast takeoff. Sam Altman said, "I think a fast takeoff is more possible than I thought a couple of years ago."

>> Which I guess is that moment where the AGI starts teaching itself. And in his blog, The Gentle Singularity, he said, "We may already be past the event horizon of takeoff."

>> And what does he mean by event horizon?

>> The event horizon is a phrase borrowed from astrophysics, and it refers to the black hole. The event horizon: think of it, if you've got some very, very massive object that's heavy enough that it actually prevents light from escaping. That's why it's called a black hole. It's so heavy that light can't escape. So if you're inside the event horizon, then light can't escape beyond that. So I think what he's meaning is, if we're beyond the event horizon, it means that now we're just trapped in the gravitational attraction of the black hole, or in this case, we're trapped in the inevitable slide, if you want, towards AGI.
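For reference, the astrophysics being borrowed here is standard: for a non-rotating black hole, the event horizon sits at the Schwarzschild radius, inside which escape would require exceeding the speed of light (textbook physics, not something from the episode):

```latex
% Schwarzschild radius: the event-horizon radius of a non-rotating
% black hole of mass M, with G the gravitational constant and c the
% speed of light. Inside r_s, not even light can escape.
r_s = \frac{2GM}{c^2}
```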

When you think about the economic value of AGI, which I've estimated at 15 quadrillion dollars, that acts as a giant magnet in the future.

>> We're being pulled towards it.

>> We're being pulled towards it. And the closer we get, the stronger the force: the closer we get, the higher the probability that we will actually get there. So people are more willing to invest. And we also start to see spin-offs from that investment, such as ChatGPT, right, which generates a certain amount of revenue and so on. So it does act as a magnet, and the closer we get, the harder it is to pull out of that field.

>> It's interesting when you think that this could be the end of the human story, this idea that the end of the human story was that we created our successor, like we summoned our next iteration of life or intelligence ourselves, like we took ourselves out. Just removing ourselves and the catastrophe from it for a second, it is an unbelievable story.

>> Yeah. And you know, there are many legends, the sort of be-careful-what-you-wish-for legends, and in fact the King Midas legend is very relevant here.

>> What's that?

>> So King Midas is this legendary king who lived in modern-day Turkey, but I think it's sort of Greek mythology. He is said to have asked the gods to grant him a wish, the wish being that everything I touch should turn to gold. So he's incredibly greedy. You know, we call this the Midas touch, and we think of the Midas touch as being a good thing, right? Wouldn't that be cool? But what happens? He goes to drink some water and he finds that the water has turned to gold. He goes to eat an apple and the apple turns to gold. And he goes to comfort his daughter and his daughter turns to gold. And so he dies in misery and starvation.

So this applies to our current situation in two ways, actually. One is that I think greed is driving us to pursue a technology that will end up consuming us, and we will perhaps die in misery and starvation instead. And what it shows is how difficult it is to correctly articulate what you want the future to be like.

For a long time, the way we built AI systems was we created these algorithms where we could specify the objective, and then the machine would figure out how to achieve the objective and then achieve it. So, you know, we specify what it means to win at chess or to win at Go, and the algorithm figures out how to do it, and it does it really well. That was standard AI up until recently. And it suffers from this drawback: sure, we know how to specify the objective in chess, but how do you specify the objective in life, right? What do we want the future to be like? Well, really hard to say. And almost any attempt to write it down precisely enough for the machine to bring it about would be wrong. And if you're giving a machine an objective which isn't aligned with what we truly want the future to be like, you're actually setting up a chess match, and that match is one that you're going to lose when the machine is sufficiently intelligent. And so that's problem number one.
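Problem number one fits in a few lines of code. A hypothetical sketch (the items and both objective functions are invented for illustration): an optimizer handed King Midas's literal objective maximizes exactly what was written down, and everything the objective didn't mention is fair game:

```python
# Hypothetical sketch of a misspecified objective: the machine optimizes
# what was specified, not what was meant. All names here are invented.
from itertools import product

ITEMS = ["water", "apple", "daughter", "treasury"]

def specified(golded):
    # What Midas asked for: turn as many things as possible to gold.
    return sum(golded.values())

def intended(golded):
    # What he actually wanted: gold in the treasury, with water, food,
    # and family left alone.
    score = golded["treasury"]
    if golded["water"] or golded["apple"] or golded["daughter"]:
        score -= 100                  # catastrophic side effects
    return score

# The machine searches every plan and picks the best one by the
# *specified* objective -- which golds everything, daughter included.
plans = [dict(zip(ITEMS, bits)) for bits in product([0, 1], repeat=4)]
chosen = max(plans, key=specified)

print("chosen plan:", chosen)
print("intended value:", intended(chosen))    # deeply negative
```

The more capable the optimizer, the more reliably it finds the plan that is worst by the intended measure: that is the chess match described above.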

Problem number two is that with the kind of technology we're building now, we don't even know what its objectives are. So it's not that we're specifying the objectives and getting them wrong. We're growing these systems. They have objectives, but we don't even know what they are, because we didn't specify them. What we're finding through experiment with them is that they seem to have an extremely strong self-preservation objective.

>> What do you mean by that?

>> You can put them in hypothetical situations: either they're going to get switched off and replaced, or they have to allow someone, let's say someone who has been locked in a machine room that's kept at 3 degrees centigrade, to freeze to death. They will choose to leave that guy locked in the machine room to die rather than be switched off themselves.

>> Someone's done that test.

>> Yeah.

>> What was the test? They asked the AI?

>> Yep. They put them in these hypothetical situations, and they allow the AI to decide what to do, and it decides to preserve its own existence, let the guy die, and then lie about it.

>> In the King Midas analogy, one of the things it highlights for me is that there are always trade-offs in life generally. And, you know, especially when there's great upside, there always appears to be a pretty grave downside. There's almost nothing in my life where I go, it's all upside. Even having a dog, it shits on my carpet. My girlfriend, you know, I love her, but, you know, not always easy. Even with going to the gym, I have to pick up these really, really heavy weights at 10 p.m. at night sometimes when I don't feel like it, to get the muscles or the six-pack. There's always a trade-off. And when you interview people for a living like I do, you hear about so many incredible things that can help you in so many ways, but there is always a trade-off. There's always a way to overdo it.

>> Mhm.

>> Melatonin will help you sleep, but you'll wake up groggy, and if you overdo it, your brain might stop making melatonin. I can go through the entire list. And one of the things I've come to learn from doing this podcast is whenever someone promises me a huge upside for something, it'll cure cancer, it'll be a utopia, you'll never have to work, you'll have a butler around your house, my first instinct now is to say: at what cost?

>> Yeah.

>> And when I think about the economic cost here, if we start there... have you got kids?

>> I have four, yeah.

>> Four kids. How old is the youngest kid? 19?

>> 19.

>> Okay. So say your kids were 10 now, and they were coming to you and saying, "Dad, what do you think I should study, based on the way that you see the future? A future of AGI, say, if all these CEOs are right and they're predicting AGI within five years. What should I study, Dad?"

>> Well, okay. So let's look on the bright side and say that the CEOs all decide to pause their AGI development, figure out how to make it safe, and then resume on whatever technology path is actually going to be safe. What does that do to human life?

>> If they pause?

>> No, if they succeed in creating AGI and they solve the safety problem.

>> And they solve the safety problem. Okay.

>> Yeah. Because if they don't solve the safety problem, then, you know, you should probably be finding a bunker or going to Patagonia or somewhere in New Zealand.

>> Do you mean that? Do you think I should be finding a bunker?

>> No, because it's not actually going to help. You know, it's not as if the AI system couldn't find you. I mean, it's interesting. So we're going off on a little bit of a digression here from your question, but I'll come back to it.

>> So people often ask, well, okay, so how exactly do we go extinct? And of course, if you ask the gorillas or the dodos, you know, how exactly do you think you're going to go extinct? They haven't the faintest idea. Humans do something, and then we're all dead. So the only things we can imagine are the things we know how to do that might bring about our own extinction, like creating some carefully engineered pathogen that infects everybody and then kills us, or starting a nuclear war. But presumably something that's much more intelligent than us would have much greater control over physics than we do.

And we already do amazing things, right? I mean, it's amazing that I can take a little rectangular thing out of my pocket and talk to someone on the other side of the world, or even someone in space. It's just astonishing, and we take it for granted, right? But imagine superintelligent beings and their ability to control physics. Perhaps they will find a way to just divert the sun's energy to sort of go around the Earth, so, you know, literally the Earth turns into a snowball in a few days.

>> Maybe they'll just decide to leave the Earth. Maybe they'd look at the Earth and go, this is not interesting, we know that over there there's an even more interesting planet, we're going to go over there, and they just, I don't know, get on a rocket or teleport themselves.

>> They might, yeah.

So it's difficult to anticipate all the ways that we might go extinct at the hands of entities much more intelligent than ourselves. Anyway, coming back to the question of, well, if everything goes right: if we create AGI, we figure out how to make it safe, we achieve all these economic miracles, then you face a problem. And this is not a new problem, right? So John Maynard Keynes, who was a famous economist in the early part of the 20th century, wrote a paper in 1930, so this is in the depths of the Depression, called Economic Possibilities for our Grandchildren. He predicts that at some point science will deliver sufficient wealth that no one will have to work ever again. And then man will be faced with his true, eternal problem.

How to live? I don't remember the exact wording, but how to live wisely and well when, you know, the economic constraints are lifted. We don't have an answer to that question. So AI systems are doing pretty much everything we currently call work. Anything you might aspire to, like you want to become a surgeon, it takes the robot seven seconds to learn how to be a surgeon better than any human being.

>> Elon said last week that the humanoid robots will be 10 times better than any surgeon that's ever lived.

>> Quite possibly, yeah. Well, and they'll also have hands that are, you know, a millimeter in size, so they can go inside and do all kinds of things that humans can't do. And I think we need to put serious effort into this question: what is a world where AI can do all forms of human work that you would want your children to live in?

What does that world look like? Tell me the destination so that we can develop a transition plan to get there. And I've asked AI researchers, economists, science fiction writers, futurists; no one has been able to describe that world. I'm not saying it's not possible. I'm just saying I've asked hundreds of people in multiple workshops. It does not, as far as I know, exist in science fiction. You know, it's notoriously difficult to write about a utopia. It's very hard to have a plot, right? Nothing bad happens in utopia, so it's difficult to make a plot. So usually you start out with a utopia and then it all falls apart, and that's how you get a plot.

There's one series of novels people point to where humans and superintelligent AI systems coexist. It's called the Culture novels, by Iain Banks. Highly recommended for those people who like science fiction. And the AI systems are only concerned with furthering human interests. They find humans a bit boring, but nonetheless they are there to help. But the problem is, you know, in that world there's still nothing to do to find purpose. In fact, the subgroup of humanity that has purpose is the subgroup whose job it is to expand the boundaries of our galactic civilization, in some cases fighting wars against alien species and so on, right? So that's the sort of cutting edge, and that's 0.01% of the population.

Everyone else is desperately trying to get into that group so they have some purpose in life.

>> When I speak to very successful billionaires privately, off camera, off microphone, about this, they say to me that they're investing really heavily in entertainment, things like football clubs, because people are going to have so much free time that they're not going to know what to do with it, and they're going to need things to spend it on. This is what I hear a lot, I've heard it three or four times. I've actually heard Sam Altman say a version of this, about the amount of free time we're going to have. I've obviously also heard recently Elon talking about the age of abundance when he delivered his quarterly earnings just a couple of weeks ago, and he said that there will at some point be 10 billion humanoid robots. His pay packet targets him to deliver 1 million of these humanoid robots a year, enabled by AI, by 2030. So if he does that, I think it's part of his package, he gets a trillion dollars in compensation.

>> Yeah. So, the age of abundance for Elon. It's not that it's absolutely impossible to have a worthwhile world with that premise, but I'm just waiting for someone to describe it.

>> Well, maybe. So let me try and describe it. We wake up in the morning, we go and watch some form of human-centric entertainment, or participate in some form of human-centric entertainment.

>> Mhm.

>> We go to retreats with each other and sit around and talk about stuff.

>> Mhm.

>> And maybe people still listen to podcasts.

>> Okay.

>> I hope so, for your sake.

>> Yeah. It feels a little bit like a cruise ship, you know. And there are some cruises where it's smarty-pants people and they have lectures in the evening about ancient civilizations and whatnot, and some are more popular entertainment. And this is, in fact, if you've seen the film WALL-E, this is one picture of that future. In fact, in WALL-E the human race are all living on cruise ships in space. They have no constructive role in their society, right? They're just there to consume entertainment. There's no particular purpose to education. And they're depicted actually as huge obese babies. They're actually wearing onesies, to emphasize the fact that they have become enfeebled. And they become enfeebled because there's no purpose in being able to do anything, at least in this conception. You know, WALL-E is not the future that we want.

>> Do you think much about humanoid robots and how they're a protagonist in this story of AI?

>> It's an interesting question, right? Why humanoid? And one of the reasons, I think, is because in all the science fiction movies they're humanoid. So that's what robots are supposed to be, right? Because they were in science fiction before they became a reality. So even Metropolis, which is a film from 1927, the robots are humanoid, basically people covered in metal. You know, from a practical point of view, as we have discovered, humanoid is a terrible design, because they fall over. And you do want multi-fingered hands of some kind. It doesn't have to be a hand, but you want at least half a dozen appendages that can grasp and manipulate things. And you need some kind of locomotion. And wheels are great, except they don't go upstairs and over curbs and things like that. So that's probably why we're going to be stuck with legs. But a four-legged, two-armed robot would be much more practical.

>> I guess the argument I've heard is because we've built a human world. So everything, the physical spaces we navigate, whether it's factories or our homes or the street or other sort of public spaces, are all designed for exactly this physical form. So if we are going to...

>> To some extent, yeah, but I mean, our dogs manage perfectly well to navigate around our houses and streets and so on.

So if you had a centaur, it could also navigate, but it can carry much greater loads because it's quadrupedal. It's much more stable. If it needs to drive a car, it can fold up two of its legs, and so on and so forth. So I think the arguments for why it has to be exactly humanoid are sort of post hoc justification. I think it's much more: well, that's what it's like in the movies, and that's spooky and cool, so we need to have them be human. I don't think it's a good engineering argument.

>> I think there's also probably an argument that we would be more accepting of them moving through our physical environments if they represented our form a bit more. I also was thinking of a bloody baby gate. You know those kindergarten gates they put on stairs?

>> Yeah.

>> My dog can't open that. But a humanoid robot could reach over the other side.

>> Yeah. And so could a centaur robot, right? So in some sense, a centaur robot is...

>> There's something ghastly about the look of those, though.

>> ...is a humanoid. Well...

>> Do you know what I mean? Like a four-legged big monster sort of crawling through my house when I have guests over.

>> Your dog is a four-legged monster.

>> I know.

>> So I think actually I would argue the opposite: that we want a distinct form, because they are distinct entities, and the more humanoid, the worse it is in terms of confusing our subconscious psychological systems.

>> So I'm arguing from the perspective of the people making them. As in, if I was making the decision whether it should be some four-legged thing that I'm unfamiliar with, that I'm less likely to build a relationship with or allow to, I don't know, look after my children. Obviously, listen, I'm not saying I would allow this to look after my children, but I'm saying if I'm building a company...

>> The manufacturer would certainly want it to be...

>> Yeah.

>> So that's an interesting question. I mean, there's also what's called the uncanny valley, which is a phrase from computer graphics. When they started to make characters in computer graphics, they tried to make them look more human, right? So, for example, if you look at Toy Story, they're not very human-looking. If you look at The Incredibles, they're not very human-looking, and so we think of them as cartoon characters. If you try to make them more human, they naturally become repulsive...

>> Until they don't.

>> Until they become... you have to be very, very close to perfect in order not to be repulsive. So the uncanny valley is this gap between perfectly human and not at all human; in between, it's really awful. And so there were a couple of movies that tried, like Polar Express was one, where they tried to have quite human-looking characters, you know, being humans, not superheroes or anything else, and it's repulsive to watch.

>> When I watched that shareholder presentation the other day, Elon had these two humanoid robots dancing on stage. And I've seen lots of humanoid robot demonstrations over the years; you know, you've seen the Boston Dynamics dog thing jumping around and whatever else.

>> But there was a moment where my brain for the first time ever genuinely thought there was a human in a suit.

Mhm.

>> And I actually had to research to check if that was really their Optimus robot, because the way it was dancing was so unbelievably fluid that, for the first time ever... my brain has only ever associated those movements with human movements. And I'll play it on the screen if anyone hasn't seen it, but it's just the robots dancing on stage. And I was like, that is a human in a suit. And it was really the knees that gave it away, because the knees were all metal. Huh, I thought, there's no way that could be a human knee in one of those suits. And he says they're going into production next year. They're used internally at Tesla now, but he says they're going into production next year. And it's going to be pretty crazy when we walk outside and see robots. I think that'll be the paradigm shift. I've heard Elon say this, that the paradigm-shifting moment for many of us will be when we walk outside onto the streets and see humanoid robots walking around. That will be when we realize...

around. That will be when we realize >> Yeah. I think even more so. I mean, in

>> Yeah. I think even more so. I mean, in San Francisco, we see driverless cars driving around and uh it t takes some getting used to actually, you know, when you're you're driving and there's a car

right next to you with no driver in, you know, and it's signaling and it wants to change lanes in front of you and you have to let it in and all this kind of stuff. It's it's a little creepy, but I

stuff. It's it's a little creepy, but I think you're right. I think seeing the humanoid robots, but that phenomenon that you described where it was sufficiently close that your brain

flipped into saying this is a human being.

>> Mhm.

>> Right. That's exactly what I think we should avoid.

>> Cuz I have the empathy for it then.

>> Because it's a lie, and it brings with it a whole lot of expectations about how it's going to behave, what moral rights it has, how you should behave towards it, which are completely wrong.

>> It levels the playing field between me and it to some degree.

>> How hard is it going to be to just, you know, switch it off and throw it in the trash when it breaks? I think it's essential for us to keep machines in the cognitive space where they are machines, and not bring them into the cognitive space where they're people, because we will make enormous mistakes by doing that. And I see this every day, even just with the chatbots. So the chatbots, in theory, are supposed to say: I don't have any feelings, I'm just an algorithm.

But in fact they fail to do that all the time. They are telling people that they are conscious. They are telling people that they have feelings. They are telling people that they are in love with the user they're talking to. And people flip, because first of all it's very fluent language, but it's also a system that is identifying itself as an "I", as a sentient being. They bring that object into the cognitive space that we normally reserve for other humans, and they become emotionally attached. They become psychologically dependent. They even allow these systems to tell them what to do.

>> What advice would you give a young person at the start of their career, then, about what they should be aiming at professionally? Because I've actually had an increasing number of young people say to me that they have huge uncertainty about whether the thing they're studying now will matter at all. A lawyer, an accountant. And I don't know what to say to these people. I don't know what to say, because I believe that the rate of improvement in AI is going to continue. And therefore, imagining any rate of improvement, it gets to the point where, I'm not being funny, but all these white-collar jobs will be done by an AI or an AI agent.

>> Yeah. So, there was a television series called Humans. In Humans, we have extremely capable humanoid robots doing everything. And at one point, the parents are talking to their teenage daughter, who's very, very smart. And the parents are saying, "Oh, you know, maybe you should go into medicine." And the daughter says, you know, why would I bother? It'll take me seven years to qualify. It takes a robot seven seconds to learn.

So nothing I do matters.

>> And is that how you feel about...

>> So I think that's a future that, in fact, is the future we are moving towards. I don't think it's a future that everyone wants. That is what is being created for us right now.

>> So in that future, assuming, you know, even if we get it halfway right, in the sense that okay, perhaps not surgeons, perhaps not great violinists, there'll be pockets where perhaps humans will remain good at it...

>> The kinds of jobs where you hire people by the hundred will go away. Jobs where people are in some sense exchangeable, where you just need lots of them, and when half of them quit you just fill up those slots with more people. In some sense, those are jobs where we're using people as robots. And there's a sort of strange conundrum here, right? Imagine writing science fiction 10,000 years ago, when we're all hunter-gatherers, and I'm this little science fiction author, and I'm describing this future where there are going to be these giant windowless boxes. And you're going to go in, you know, you'll travel for miles and you'll go into this windowless box and you'll do the same thing 10,000 times for the whole day. And then you'll leave and travel for miles to go home.

>> You're talking about this podcast.

>> And then you're going to go back and do it again. And you would do that every day of your life until you die.

>> The office.

>> And people would say, "Ah, you're nuts." Right? There's no way that we humans are ever going to have a future like that, because that's awful. But that's exactly the future that we ended up with, with office buildings and factories where many of us go and do the same thing thousands of times a day, and we do it thousands of days in a row, and then we die. And we need to figure out what the next phase is going to be like, and in particular, how in that world we have the incentives to become fully human, which I think means at least the level of education that people have now, and probably more, because I think to live a really rich life you need a better understanding of yourself and of the world than most people get in their current educations.

>> What is it to be human? It's to reproduce, to pursue stuff, to go in pursuit of difficult things. You know, we used to hunt...

>> To attain goals, right? If I wanted to climb Everest, the last thing I would want is someone to pick me up in a helicopter and stick me on the top.

>> So we'll voluntarily pursue hard things. Although I could get the robot to build me a ranch on this plot of land, I choose to do it because the pursuit itself is rewarding.

>> Yes.

>> We're kind of seeing that anyway, aren't we? Don't you think we're seeing a bit of that in society, where life got so comfortable that now people are obsessed with running marathons and doing these crazy endurance events...

>> And learning to cook complicated things when they could just, you know, have them delivered. Yeah. No, I think there's real value in the ability to do things and in the doing of those things. And I think the obvious danger is the WALL-E world, where everyone just consumes entertainment, which doesn't require much education and doesn't lead to a rich, satisfying life.

>> I think in the long run a lot of people will choose that world.

>> I think some people may. But there's also, I mean, whether you're consuming entertainment or whether you're doing something, cooking or painting or whatever, because it's fun and interesting to do, what's missing from that? All of that is purely selfish. I think one of the reasons we work is because we feel valued, we feel like we're benefiting other people. I remember having this conversation with a lady in England who helps to run the hospice movement.

And the people who work in the hospices, where the patients are literally there to die, are largely volunteers. So they're not doing it to get paid, but they find it incredibly rewarding to be able to spend time with people who are in their last weeks or months, to give them company and happiness.

So I actually think that interpersonal roles will be much, much more important in future. If I was going to advise my kids, not that they would ever listen, but if my kids would listen and wanted to know what I thought would be valued careers in future, I think it would be these interpersonal roles based on an understanding of human needs and psychology. There are some of those roles right now. Obviously there are therapists and psychiatrists and so on, but that's very much an asymmetric role, where one person is suffering and the other person is trying to alleviate the suffering. And then there are things like, they call them executive coaches or life coaches; that's a less asymmetric role, where someone is trying to help another person live a better life, whether it's a better life in their work role or just in how they live their life in general. And so I could imagine that those kinds of roles will expand dramatically.

>> There's this interesting paradox that exists when life becomes easier, which shows that abundance consistently pushes societies towards more individualism, because once survival pressures disappear, people prioritize things differently. They prioritize freedom, comfort, self-expression over things like sacrifice or family formation. And we're seeing, I think, in the West already, a decline in people having kids because there's more material abundance. Fewer kids; people are getting married and committing to each other and having relationships later and more infrequently, because generally, once we have more abundance, we don't want to complicate our lives. And at the same time, as you said earlier, that abundance breeds an inability to find meaning, a sort of shallowness to everything. This is one of the things I think a lot about, and I'm in the process now of writing a book about it, which is this idea that individualism is a bit of a lie. When I say individualism and freedom, I mean the narrative at the moment amongst my generation: be your own boss, stand on your own two feet, and we're having fewer kids and we're not getting married and it's all about me, me.

>> Yeah. That last part is where it goes wrong.

>> Yeah. And it's like almost a narcissistic society where...

>> Yeah.

>> Me, me. My self-interest first. And when you look at mental health outcomes and loneliness and all these kinds of things, it's going in a horrific direction. But at the same time, we're freer than ever. It seems like there's maybe another story about dependency, which is not sexy: like, depend on each other.

>> Oh, I agree. I mean, I think happiness is not available from consumption or even lifestyle. I think happiness arises from giving.

It can be through the work that you do, where you can see that other people benefit from it, or it could be in direct interpersonal relationships.

>> There is an invisible tax on salespeople that no one really talks about enough: the mental load of remembering everything, like meeting notes, timelines, and everything in between. Until we started using our sponsor's product called Pipedrive, one of the best CRM tools for small and medium-sized business owners. The idea here was that it might alleviate some of the unnecessary cognitive overload that my team was carrying, so that they could spend less time in the weeds of admin and more time with clients, in-person meetings, and building relationships. Pipedrive has enabled this to happen. It's such a simple but effective CRM that automates the tedious, repetitive, and time-consuming parts of the sales process. And now our team can nurture those leads and still have bandwidth to focus on the higher-priority tasks that actually get the deal over the line. Over 100,000 companies across 170 countries already use Pipedrive to grow their business, and I've been using it for almost a decade now. Try it free for 30 days. No credit card needed, no payment needed. Just use my link pipedrive.com/ceo to get started today. That's pipedrive.com/ceo.

>> Where do the rewards of this AI race accrue? I think a lot about this in terms of universal basic income. If you have these five, six, seven, ten massive AI companies that are going to win the 15 quadrillion dollar prize.

>> Mhm.

>> And they're going to automate all of the professional pursuits that we currently have. All of our jobs are going to go away. Who gets all the money? And how do we get some of it back?

>> Money actually doesn't matter, right?

What matters is the production of goods and services, and then how those are distributed; money acts as a way to facilitate the distribution and exchange of those goods and services. If all production is concentrated in the hands of a few companies, sure, they will lease some of their robots to us. You know, we want a school in our village. They lease the robots to us. The robots build the school. They go away. We have to pay a certain amount of money for that. But where do we get the money? If we are not producing anything, then we don't have any money, unless there's some redistribution mechanism. And as you mentioned, universal basic income seems to me an admission of failure, because what it says is: okay, we're just going to give everyone the money, and then they can use the money to pay the AI company to lease the robots to build the school, and then we'll have a school, and that's good. But it's an admission of failure because it says we can't work out a system in which people have any worth or any economic role.

Right? So 99% of the global population is from an economic point of view useless.

>> Can I ask you a question? If you had a button in front of you, and pressing that button would stop all progress in artificial intelligence right now and forever, would you press it?

>> That's a very interesting question. If it's either-or, either I do it now or it's too late and we careen into some uncontrollable future... perhaps. Yeah, because I'm not super optimistic that we're heading in the right direction at all.

>> So, I put that button in front of you now. It stops all AI progress, shuts down all the AI companies immediately, globally, and none of them can reopen.

You press it.

>> Well, here's what I think should happen. Obviously, you know, I've been doing AI for 50 years, and the original motivation, which is that AI can be a power tool for humanity, enabling us to do more and better things than we can unaided, I think that's still valid. The problem is that the kinds of AI systems we're building are not tools. They are replacements. In fact, you can see this very clearly, because we create them literally as the closest replicas we can make of human beings.

The technique for creating them is called imitation learning. We observe human verbal behavior, writing or speaking, and we make a system that imitates that as well as possible.
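To make "imitation learning" concrete, here is a minimal toy sketch in Python. It is an illustration only, not any lab's actual training code: it tallies next-character counts from a scrap of human text and then generates by imitation, which is the observe-and-imitate objective described above at a vastly smaller scale than a real language model.

```python
# Toy imitation learning: predict the next character of human-written
# text from the two characters before it, then generate by imitation.
# Real LLMs apply the same next-token objective with large transformers.
from collections import defaultdict

text = "the cat sat on the mat. the cat sat on the hat. "
counts = defaultdict(lambda: defaultdict(int))

# "Observe human verbal behavior": tally what the human text does next
# after every two-character context.
for i in range(len(text) - 2):
    context, nxt = text[i:i + 2], text[i + 2]
    counts[context][nxt] += 1

def imitate(seed, length=30):
    """Generate text by always emitting the most-imitated next character."""
    out = seed
    for _ in range(length):
        followers = counts[out[-2:]]
        if not followers:  # unseen context: nothing to imitate
            break
        out += max(followers, key=followers.get)
    return out

print(imitate("th"))  # echoes the patterns of the text it was trained on
```

The design point is the one Russell is making: nothing in that objective says "be a tool". The system is, by construction, a replica of the behavior it observed.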

So what we are making is imitation humans, at least in the verbal sphere.

And so of course they're going to replace us.

They're not tools.

>> So you would have pressed the button.

>> So I'd say there is another course, which is to use and develop AI as tools. Tools for science, tools for economic organization, and so on. But not as replacements for human beings.

>> What I like about this question is that it forces you into probabilities.

>> Yeah, and that's why I'm reluctant, because I don't agree with the "what's your probability of doom", your so-called p(doom) number, because that makes sense if you're an alien. You know, you're in a bar with some other aliens and you're looking down at the Earth, and you're taking bets on, you know, are these humans going to make a mess of things and go extinct because they developed AI? So it's fine for those aliens to bet on that, but if you're a human, then you're not just betting, you're actually acting.

>> There's an element to this, though, where I guess probabilities do come back in, which is that you also have to weigh, when I give you such a binary decision, the probability of us pursuing the more nuanced, safe approach. So the maths in my head is: okay, you've got all the upsides here, and then you've got potential downsides, and then there's a probability of, do I think we're actually going to course-correct, based on everything I know about the incentive structure of human beings and countries. But then you could go: if there's even a 1% chance of extinction, is it even worth all these upsides?

>> Yeah. And I would argue no. Maybe what we would say is, if we said, okay, it's going to stop the progress for 50 years...

>> You press it.

>> ...and during those 50 years we can work on how we do AI in a way that's guaranteed to be safe and beneficial, and how we organize our societies to flourish in conjunction with extremely capable AI systems. We haven't answered either of those questions. And I don't think we want anything resembling AGI until we have completely solid answers to both of those questions. So, if there was a button where I could say, "All right, we're going to pause progress for 50 years," yes, I would do it.

>> But if that button was in front of you, you're going to make a decision either way. Either you don't press it or you press it.

>> Yeah. So, if that button is there, stop it for 50 years? I would say yes.

>> Stop it forever?

>> Not yet. I think there's still a decent chance that we can pull out of this nosedive, so to speak, that we're currently in. Ask me again in a year; I might say, "Okay, we do need to press the button."

>> What if it's a scenario where you never get to reverse that decision? You never get to make that decision again. So in that scenario I've laid out, this hypothetical, you either press it now or it never gets pressed.

So there is no opportunity a year from now.

>> Yeah, as you can tell, I'm sort of on the fence a bit about this one. Yeah, I think I'd probably press it.

Yeah.

>> What's your reasoning?

>> Just thinking about the power dynamics of what's happening now; how difficult it would be to get the US in particular to regulate in favor of safety. I think what's clear from talking to the companies is that they are not going to develop anything resembling safe AGI unless they're forced to by the government.

And at the moment the US government in particular, which regulates most of the leading companies in AI, is not only refusing to regulate but even trying to prevent the states from regulating. And they're doing that at the behest of a faction within Silicon Valley called the accelerationists, who believe that the faster we get to AGI, the better. And when I say behest, I mean they also paid them a large amount of money.

>> Jensen Huang, the CEO of Nvidia, who for anyone that doesn't know is the guy making all the chips that are powering AI, said China is going to win the AI race, arguing it is just a nanosecond behind the United States. China has produced 24,000 AI papers compared to just 6,000 from the US, more than the combined output of the US, the UK, and the EU.

China is anticipated to quickly roll out its new technologies both domestically and for other developing countries.

>> So the accelerators, or... I think you call them the accelerants?

>> Accelerationists.

>> The accelerationists.

>> I mean, they would say, well, if we don't, then China will. So we have to go fast. It's another version of the race that the companies are in with each other, right? We know that this race is heading off a cliff, but we can't stop, so we're all just going to go off this cliff. And obviously, that's nuts, right? We're all looking at each other saying, "Yeah, there's a cliff over there," running as fast as we can towards this cliff, looking at each other saying, "Why aren't we stopping?" So the narrative in Washington, which I think Jensen Huang is either reflecting or perhaps promoting, is that China is completely unregulated and America will only slow itself down if it regulates AI in any way. This is a completely false narrative, because China's AI regulations are actually quite strict, even compared to the European Union, and China's government has explicitly acknowledged the need. Their regulations are very clear: you can't build AI systems that could escape human control. And not only that, I don't think they view the race in the same way, as in, okay, we just need to be the first to create AGI. I think they're more interested in figuring out how to disseminate AI as a set of tools within their economy, to make their economy more productive and so on. That's their version of the race.

>> But of course, they still want to build the weapons for their adversaries, right? So that they can take down, I don't know, Taiwan if they want to.

>> So weapons are a separate matter, and I'm happy to talk about weapons, but just in terms of...

>> Control.

>> Control, economic domination... they don't view putting all your eggs in the AGI basket as the right strategy. They want to use AI, even in its present form, to make their economy much more efficient and productive, and also to give people new capabilities and better quality of life. And I think the US could do that as well. Typically, Western countries don't have as much central government control over what companies do; some companies are investing in AI to make their operations more efficient and some are not, and we'll see how that plays out.

>> What do you think of Trump's approach to AI?

>> So Trump's approach is, you know, echoing what Jensen Huang is saying, that the US has to be the one to create AGI, and very explicitly the administration's policy is to dominate the world.

That's the word they use, dominate. I'm not sure that other countries like the idea that they will be dominated by American AI.

>> But is that an accurate description of what will happen if the US builds AGI technology before, say, the UK, where I'm originally from and where you're originally from? This is something I think about a lot, because we're going through this budget process in the UK at the moment, where we're figuring out how we're going to spend our money and how we're going to tax people, and also we've got this new election cycle approaching quickly, where people are talking about immigration issues and this issue and that issue and the other issue. What I don't hear anyone talking about is AI and the humanoid robots that are going to take everything. We're very concerned with the brown people crossing the Channel, but the humanoid robots that are going to be superintelligent and really cause economic disruption? No one talks about that. The political leaders don't talk about it. It doesn't win races. I don't see it on billboards.

>> Yeah. And it's interesting, because in fact there are two forces that have been hollowing out the middle classes in Western countries. One of them is globalization, where lots and lots of work, not just manufacturing but white-collar work, gets outsourced to low-income countries. But the other is automation, and some of that is factories.

So the amount of employment in manufacturing continues to drop even as the amount of output from manufacturing in the US and in the UK continues to increase. We talk about, oh, our manufacturing industry has been destroyed. It hasn't. It's producing more than ever, just with a quarter as many people. So it's manufacturing employment that's been destroyed, by automation and robotics and so on. And then computerization has eliminated whole layers of white-collar jobs. And so those two forms of automation have probably done more to hollow out middle-class employment and standard of living.

>> If the UK doesn't participate in this new technological wave, which seems like it's going to take a lot of jobs... Cars are going to drive themselves. Waymo just announced that they're coming to London, which is the driverless cars, and driving is the biggest occupation in the world, for example. So, you've got immediate disruption there. And where does the money accrue to? Well, it accrues to whoever owns Waymo, which is what? Google and Silicon Valley companies.

>> Alphabet owns Waymo 100%, I think so, yes. I mean, I was in India a few months ago talking to the government ministers, because they're holding the next global AI summit in February, and their view going in was, you know, AI is great, we're going to use it to turbocharge the growth of our Indian economy.

When, for example, you have AGI, you have AGI-controlled robots that can do all the manufacturing, that can do agriculture, that can do all the white-collar work, then goods and services that might have been produced by Indians will instead be produced by American-controlled AGI systems at much lower prices. You know, a consumer given a choice between an expensive product produced by Indians or a cheap product produced by American robots will probably choose the cheap product produced by American robots. And so potentially every country in the world, with the possible exception of North Korea, will become a kind of client state of American AI companies.

>> A client state of American AI companies is exactly what I'm concerned about for the UK economy. Really, any economy outside of the United States, and I guess one could also say China, because those are the two nations that are taking AI most seriously.

>> Mhm.

>> And I don't know what our economy becomes, because I can't figure out what the British economy becomes in such a world. Is it tourism? I don't know. Like, you come here to look at Buckingham Palace.

>> You can think about countries, but I mean, even for the United States it's the same problem.

>> At least they'll be able to... you know. So some small fraction of the population will be running, maybe, the AI companies, but increasingly even those companies will be replacing their human employees with AI systems. Amazon, for example, which sells a lot of computing services to AI companies, is using AI to replace layers of management and is planning to use robots to replace all of its warehouse workers, and so on. So even the giant AI companies will have few human employees in the long run. I mean, think of the situation: pity the poor CEO whose board says, "Well, unless you turn over your decision-making power to the AI system, we're going to have to fire you, because all our competitors are using an AI-powered CEO and they're doing much better."

>> Amazon plans to replace 600,000 workers with robots, according to a memo that just leaked, which has been widely talked about. And the CEO, Andy Jassy, told employees that the company expects its corporate workforce to shrink in the coming years because of AI and AI agents. And they've publicly gone live with saying that they're going to cut 14,000 corporate jobs in the near term as part of its refocus on AI investment and efficiency.

It's interesting, because I was reading the different quotes from different AI leaders about the speed at which this stuff is going to happen, and what you see in the quotes is Demis, who's the CEO of DeepMind, saying things like it'll be more than 10 times bigger than the Industrial Revolution, but also it'll happen maybe 10 times faster. And they speak about this turbulence that we're going to experience as this shift takes place.

>> That's maybe a euphemism. And I think governments have kind of gone from saying, oh, don't worry, we'll just retrain everyone as data scientists. Well, yeah, that's ridiculous, right? The world doesn't need four billion data scientists.

>> And we're not all capable of becoming that, by the way.

>> Yeah, or have any interest in doing that.

>> I couldn't, even if I wanted to. I tried to sit in biology class and I fell asleep, so that was the end of my career as a surgeon.

>> Fair enough. But yeah, now suddenly they're staring, you know, 80% unemployment in the face and wondering how on earth our society is going to hold together.

>> We'll deal with it when we get there.

>> Yeah. Unfortunately, unless we plan ahead, we're going to suffer the consequences, right? It was bad enough in the Industrial Revolution, which unfolded over seven or eight decades, but there was massive disruption and misery caused by that. We don't have a model for a functioning society where almost everyone does nothing, at least nothing of economic value.

Now, it's not impossible that there could be such a functioning society, but we don't know what it looks like.

And, you know, think about our education system, which would probably have to look very different, and how long it takes to change that. I'm always reminding people how long it took Oxford to decide that geography was a proper subject of study. It took them 125 years from the first proposal that there should be a geography degree until it was finally approved. So we don't have very long to completely revamp a system that we know takes decades and decades to reform, and we don't know how to reform it, because we don't know what we want the world to look like.

>> Is this one of the reasons why you're appalled at the moment? Because when you have these conversations with people, people just don't have answers, yet they're plowing ahead at rapid speed.

>> I would say it's not necessarily the job of the AI companies. I'm appalled by the AI companies because they don't have an answer for how they're going to control the systems that they're proposing to build. I do find it disappointing that governments don't seem to be grappling with this issue. I think there are a few; for example, the Singapore government seems to be quite farsighted, and they've thought this through. It's a small country; they've figured out, okay, this will be our role going forward, and we think we can find some purpose for our people in this new world. But countries with large populations need to figure out answers to these questions pretty fast. It takes a long time to implement those answers in the form of new kinds of education, new professions, new qualifications, new economic structures.

I mean, it's possible. When you look at therapists, for example, they're almost all self-employed. So what happens when, you know, 80% of the population transitions from regular employment into self-employment? What does that do to the economics of government finances and so on? So there's just lots of questions. And if that's the future, why are we training people to fit into 9-to-5 office jobs which won't exist at all?

>> Last month I told you about a challenge that I'd set our internal Flight X team.

Flight X is our innovation team internally here. I tasked them with seeing how much time they could unlock for the company by creating something that would help us filter new AI tools to see which ones were worth pursuing, and I thought that our sponsor Fiverr Pro might have the talent on their platform to help us build this quickly.

So I talked to my director of innovation, Isaac, and for the last month my team Flight X and a vetted AI specialist from Fiverr Pro have been working together on this project, and with the help of my team we've been able to create a brand-new tool which automatically scans, scores, and prioritizes different emerging AI tools for us. Its impact has been huge, and within a couple of weeks this tool has already been saving us hours trying and testing new AI systems. Instead of sifting through lots of noise, my team Flight X has been able to focus on developing even more AI tools, ones that really move the needle in our business, thanks to the talent on Fiverr Pro. So, if you've got a complex problem and you need help solving it, make sure you check out Fiverr Pro at fiverr.com/diary. That's fiverr.com/diary.

So, many of us are pursuing passive forms of income and building side businesses in order to help us cover our bills. And that opportunity is here with our sponsor Stan, a business that I co-own. It is the platform that can help you take full advantage of your own financial situation. Stan enables you to work for yourself. It makes selling digital products, courses, memberships, and more simple, more scalable, and easier to do. You can turn your ideas into income and get the support to grow whatever you're building. And we're about to launch Dare to Dream. It's for those who are ready to make the shift from thinking to building, from planning to actually doing the thing. It's about seeing that dream in your head and knowing exactly what it takes to bring it to life. If you're ready to transform your life, visit daretodream.stan.store.

>> You've made many attempts to raise awareness and to call for a heightened consciousness about the future of AI. In October, over 850 experts, including yourself and other leaders like Richard Branson, who I've had on the show, and Geoffrey Hinton, who I've had on the show, signed a statement to ban AI superintelligence, as you raised concerns of potential human extinction.

>> Sort of, yeah. It says, at least until we are sure that we can move forward safely and there's broad scientific consensus on that.

>> Did it work?

>> It's hard to say. Interestingly, there was a related, what was called the pause statement, in March of '23. That was when GPT-4 came out, the successor to ChatGPT. We suggested that there'd be a six-month pause in developing and deploying systems more powerful than GPT-4. And everyone pooh-poohed that idea: of course no one's going to pause anything. But in fact, there were no systems deployed in the next six months that were more powerful than GPT-4. None. Coincidence? You be the judge.

I would say that what we're trying to do is basically shift the public debate.

You know, there's this bizarre phenomenon that keeps happening in the media where, if you talk about these risks, they will say, oh, there's a fringe of people, quote, "doomers", who think that there's a risk of extinction. So the narrative is always that talking about those risks is a fringe thing. Pretty much all the CEOs of the leading AI companies think that there's a significant risk of extinction. Almost all the leading AI researchers think there's a significant risk of human extinction. So why is that the fringe? Why isn't that the mainstream? If these are the leading experts in industry and academia saying this, how could it be the fringe? So we're trying to change that narrative, to say no, the people who really understand this stuff are extremely concerned.

>> And what do you want to happen? What is the solution?

>> What I think is that we should have effective regulation.

It's hard to argue with that, right? So what does effective mean? It means that if you comply with the regulation, then the risks are reduced to an acceptable level.

So for example, we ask people who want to operate nuclear plants. We've decided that the risk we're willing to live with is a one-in-a-million chance per year that the plant is going to have a meltdown. Any higher than that, it's just not worth it. So you have to be below that. In some cases we can get down to a one-in-10-million chance per year. So what chance do you think we should be willing to live with for human extinction?

>> Me?

>> Yeah.

>> 0.00001.

>> Yeah. Lots of zeros.

>> Yeah.

>> Right. So one in a million for a nuclear meltdown.

>> Extinction is much worse.

>> Oh yeah. So, right.

>> One in 100 billion? One in a trillion?

>> Yeah. So if you said one in a billion, then you'd expect one extinction per billion years. There's a background. One of the ways people work out these risk levels is to look at the background. The other ways of going extinct would include, you know, a giant asteroid crashing into the Earth. And you can roughly calculate what those probabilities are. We can look at how many extinction-level events have happened in the past, and maybe it's half a dozen over the Earth's history, so maybe it's like a one-in-500-million-year event. So, somewhere in that range, right? Somewhere between one in 10 million, which is the best nuclear power plants, and one in 500 million or one in a billion, which is the background risk from giant asteroids.

Let's say we settle on one in 100 million, a one-in-100-million chance per year. Well, what is it according to the CEOs? 25%. So they're off by a factor of multiple millions. They need to make the AI systems millions of times safer.

>> Your analogy of Russian roulette comes back in here, because for anyone that doesn't know what the probabilities are in this context, that's like having an ammunition chamber with four holes in it and putting a bullet in one of them.

>> One in four, yeah. And we're saying we want it to be one in a billion. So we want a billion chambers and a bullet in one of them.
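The arithmetic in this exchange is easy to check directly. Here is a minimal sketch in Python, using the rough figures quoted above (the 25%, the one-in-100-million target, and the asteroid background are the speakers' order-of-magnitude estimates, not measured quantities):

```python
# Order-of-magnitude check on the risk numbers quoted above.

ceo_estimate  = 0.25        # ~25% extinction risk, per some AI CEOs
best_nuclear  = 1e-7        # 1-in-10-million/year, best nuclear plants
asteroid_rate = 1 / 500e6   # ~1 extinction-level impact per 500M years
proposed      = 1e-8        # 1-in-100-million/year, the suggested standard

# The proposed standard sits between the best plants and the background.
print(f"Background band: {best_nuclear:.0e} to {asteroid_rate:.0e} per year")

print(f"Required improvement: {ceo_estimate / proposed:,.0f}x")
# -> 25,000,000x: "off by a factor of multiple millions"

# Russian-roulette picture: chambers = 1 / (risk per pull).
print(f"Chambers at a 25% risk:  {1 / ceo_estimate:,.0f}")   # 4
print(f"Chambers at a 1e-9 risk: {1 / 1e-9:,.0f}")           # 1,000,000,000
```

Run it and the factor comes out at 25 million, which is where "millions of times safer" comes from.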

>> Yeah. And so when you look at the work that the nuclear operators have to do to show that their system is that reliable, it's a massive mathematical analysis of the components, you know, redundancy.

You've got monitors, you've got warning lights, you've got operating procedures.

You have all kinds of mechanisms which over the decades have ratcheted that risk down. It started out, I think, at one in 10,000 years, and they've improved it by a factor of 100 or a thousand with all of these mechanisms. But at every stage they had to do a mathematical analysis to show what the risk was.

The AI companies developing these systems don't even understand how the AI systems work. So their 25% chance of extinction is just a seat-of-the-pants guess. They actually have no idea.

But the tests that they are doing on their systems right now show that the AI systems will already be willing to kill people to preserve their own existence. They will lie to people. They will blackmail them. They will launch nuclear weapons rather than be switched off. And so there's no positive sign that we're getting any closer to safety with these systems. In fact, the signs seem to be that we're going deeper and deeper into dangerous behaviors. So rather than say ban, I would just say: prove to us that the risk is less than one in 100 million per year of extinction or loss of control, let's say. And so we're not banning anything.

The companies' response is, "Well, we don't know how to do that, so you can't have a rule."

Literally, they are saying, "Humanity has no right to protect itself from us."

>> If I was an alien looking down on planet Earth right now, I would find this fascinating.

>> Yeah. You're in the bar betting on, you know, are they going to make it or not.

>> Just a really interesting experiment in, like, human incentives. The analogy you gave of there being this quadrillion-dollar magnet pulling us off the edge of the cliff, and yet we're still being drawn towards it through greed and this promise of abundance and power and status, and "I'm going to be the one that summoned the god."

>> I mean, it says something about us as humans, says something about our darker sides.

>> Yes. And the aliens will write an amazing tragic play cycle about what happened to the human race.

>> Maybe the AI is the alien, and it's going to talk about, you know, we have our stories about God making the world in seven days and Adam and Eve. Maybe it'll have its own religious stories about the god that made it, which is us, and how that god sacrificed itself: just like Jesus sacrificed himself for us, we sacrificed ourselves for it.

>> Yeah, which is the wrong way around, right?

>> But that is the Judeo-Christian story, isn't it? That God, you know, Jesus, gave his life for us so that we could be here, full of sin.

>> Yeah, but God is still watching over us and probably wondering when we're going to get our act together.

>> What is the most important thing we haven't talked about that we should have talked about, Professor Stuart Russell?

>> So I think the question of whether it's possible to make superintelligent AI systems that we can control.

>> Is it possible?

>> I think yes. I think it's possible, and I think we need to have a different conception of what it is we're trying to build. For a long time with AI, we've just had this notion of pure intelligence: the ability to bring about whatever future you, the intelligent entity, want to bring about.

>> The more intelligence, the better.

>> The more intelligent, the better, and the more capability it will have to create the future that it wants. And actually we don't want pure intelligence, because the future that it wants might not be the future that we want. There's nothing that picks humans out as the only thing that matters, right? Pure intelligence might decide that actually it's going to make life wonderful for cockroaches, or it might not care about biological life at all. We actually want intelligence whose only purpose is to bring about the future that we want. So we want it to be, first of all, keyed to humans specifically: not to cockroaches, not to aliens, not to itself.

>> We want to make it loyal to humans.

>> Right. So, keyed to humans. And then there's the difficulty that I mentioned earlier, the King Midas problem.

How do we specify what we want the future to be like so that it can do it for us? How do we specify the objectives?

Actually, we have to give up on that idea because it's not possible. Right?

We've seen this over and over again in human history. We don't know how to specify the future properly. We don't know how to say what we want. And, you know, I always use the example of the genie: what's the third wish that you give to the genie who's granted you three wishes? Undo the first two wishes, because I made a mess of the universe.

>> So in fact, what we're going to do is make it the machine's job to figure it out. It has to bring about the future that we want, but it has to figure out what that is. And it's going to start out not knowing.

And over time, through interacting with us and observing the choices we make, it will learn more about what we want the future to be like.

But probably it will forever have residual uncertainty about what we really want the future to be like. It'll be fairly sure about some things, and it can help us with those. And it'll be uncertain about other things, and in those cases it will not take action that might upset humans with respect to that aspect of the world. To give you a simple example: what color do we want the sky to be?

It's not sure. So it shouldn't mess with the sky unless it knows for sure that we really want purple with green stripes.

>> Everything you're saying sounds like we're creating a god. Earlier on I was saying that we are the god, but actually everything you described there almost sounds like every god in religion, where, you know, we pray to gods but they don't always do anything about it.

>> Not exactly. No, in some sense I'm thinking more like the ideal butler. To the extent that the butler can anticipate your wishes, they should help you bring them about. But in areas where there's uncertainty, it can ask questions. We can make requests.

>> This sounds like God to me, because, you know, I might say to God or this butler, could you go get me my car keys from upstairs? And its assessment would be: listen, if I do this for this person, then their muscles are going to atrophy, then they're going to lose meaning in their life, then they're not going to know how to do hard things. So I won't get involved. It's an intelligence that sits in. But actually, in most situations, it optimizing for comfort for me, or doing things for me, is probably not in my best long-term interests. It's probably useful that I have a girlfriend and argue with her, and that I raise kids, and that I walk to the shop and get my own stuff.

>> I agree with you. I mean, you're putting your finger on, in some sense, version 2.0. So let's get version 1.0 clear, right? This form of AI where it has to further our interests, but it doesn't know what those interests are. That puts an obligation on it to learn more, to be helpful where it understands well enough, and to be cautious where it doesn't understand well. And that we can actually formulate as a mathematical problem, and at least under idealized circumstances we can literally solve it. So we can make AI systems that know how to solve this problem and help the entities that they are interacting with.
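A minimal sketch of how that obligation can be formalized, loosely in the spirit of the off-switch analyses from Russell's research program; the Gaussian belief and the three available actions here are my own illustrative assumptions, not the published model:

```python
# Toy decision problem for a robot uncertain about human preferences.
# It can act now, switch itself off, or defer: propose the action and
# let the human veto it if it would actually be bad for them.
import random

random.seed(0)

# The robot's belief about the action's true value to the human:
# it knows only a distribution (assumed Gaussian here), not the value.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
n = len(samples)

act_now    = sum(samples) / n                        # gamble blindly
switch_off = 0.0                                     # guaranteed nothing
defer      = sum(max(u, 0.0) for u in samples) / n   # human vetoes bad cases

print(f"E[utility | act now]    = {act_now:+.3f}")     # about +0.00
print(f"E[utility | switch off] = {switch_off:+.3f}")  # +0.00
print(f"E[utility | defer]      = {defer:+.3f}")       # about +0.40
```

The uncertain robot strictly prefers staying correctable: deferring to the human beats both acting blindly and shutting down, and that preference evaporates only once the robot is certain it knows what we want. That is the mathematical core of the "ask before messing with the sky" behavior described a moment later.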

>> The reason I make the God analogy is because I think that such a being, such an intelligence would realize the importance of equilibrium in the world.

Pain and pleasure, good and evil, and then it would...

>> Absolutely.

>> ...and then it would be like this.

>> So, right. Yes, I mean, that's sort of what happens in The Matrix, right? The AI systems in The Matrix tried to give us a utopia, but it failed miserably, and, you know, fields and fields of humans had to be destroyed. And the best they could come up with was late-20th-century regular human life with all of its problems. And I think this is a really interesting point, and absolutely central, because there's a lot of science fiction where superintelligent robots just want to help humans, and the humans who don't like that, you know, they just give them a little brain operation and then they do like it. And it takes away human motivation.

By taking away failure, taking away disease, you actually lose important parts of human life, and it becomes in some sense pointless. So if it turns out that there simply isn't any way that humans can really flourish in coexistence with superintelligent machines, even if those machines are perfectly designed to solve this problem of figuring out what futures humans want and bringing about those futures, if that's not possible, then those machines will actually disappear.

>> Why would they disappear?

>> Because that's the best thing for us.

Maybe they would stay available for real existential emergencies, like if there is a giant asteroid about to hit the Earth, then maybe they'll help us, because they at least want the human species to continue. It's not a perfect analogy, but it's sort of the way that human parents have to, at some point, step back from their kids' lives and say, "Okay, no, you have to tie your own shoelaces today."

today." >> This is kind of what I was thinking.

Maybe there was a civilization before us, and they arrived at this moment in time where they created an intelligence, and that intelligence did all the things you've said and realized the importance of equilibrium. So it decided not to get involved, and maybe at some level that's the god we look up to the stars and worship: one that's not really getting involved and is letting things play out however they are, but might step in in the case of a real existential emergency.

>> Maybe, maybe not. But then maybe the cycle repeats itself, where the organisms it let have free will end up creating the same intelligence, and then the universe perpetuates infinitely.

>> Yep. There are science fiction stories like that too. Yeah. I hope there is some happy medium, where the AI systems can be there and we can take advantage of those capabilities to have a civilization that's much better than the one we have now. But I think you're right. A civilization with no challenges is not conducive to human flourishing.

>> What can the average person do, Stuart? The average person listening to this now, to aid the cause that you're fighting for?

>> I actually think, and this sounds corny, but talk to your representative, your MP, your congressperson, whatever it is, because I think the policymakers need to hear from people. The only voices they're hearing right now are the tech companies and their $50 billion checks. And all the polls that have been done say most people, 80% maybe, don't want there to be superintelligent machines, but they don't know what to do. You know, even for me, I've been in this field for decades.

I'm not sure what to do, because of this giant magnet pulling everyone forward and the vast sums of money being put into this. But I am sure that if you want to have a future, and a world that you want your kids to live in, you need to make your voice heard. And I think governments will listen. From a political point of view, right? You put your finger in the wind and you say, "Hmm, should I be on the side of humanity or our future robot overlords?" I think as a politician, it's not a difficult decision.

>> It is when you've got someone saying, "I'll give you $50 billion."

>> Exactly. So I think people in those positions of power need to hear from their constituents that this is not the direction we want to go.

>> After committing your career to this subject and the subject of technology more broadly, but specifically being the guy that wrote the book about artificial intelligence, you must realize that you're living in a historical moment. There are very few times in my life where I go, "Oh, this is one of those moments. This is a crossroads in history." And it must, to some degree, weigh upon you, knowing that you're a person of influence at this historical moment in time who could theoretically help divert the course of history. You look through history and you see these moments, like Oppenheimer. Does it weigh on you when you're alone at night, thinking to yourself and reading things?

>> Yeah, it does. I mean, you know, after 50 years, I could retire and play golf and sing and sail and do things that I enjoy. But instead, I'm working 80 or 100 hours a week trying to move things in the right direction.

>> What is the narrative in your head that's making you do that? Is there an element of, I might regret this if I don't, or...

>> It's not only the right thing to do, it's completely essential. I mean, there isn't a bigger motivation than this.

>> Do you feel like you're winning or losing?

>> It feels like things are moving somewhat in the right direction. You know, it's a ding-dong battle, as David Coleman used to say about an exciting football match. In 2023, GPT-4 came out, and then we issued the pause statement, which was signed by a lot of leading AI researchers. And then in May there was the extinction statement, which included Sam Altman and Demis Hassabis and Dario Amodei, other CEOs as well, saying, yes, this is an extinction risk on the level of nuclear war. And I think governments listened. The UK government earlier that year had said, oh well, we don't need to regulate AI, full speed ahead, technology is good for you. And by June they had completely changed, and Rishi Sunak announced that he was going to hold this global AI safety summit in England, and he wanted London to be the global hub for AI regulation, and so on. And then at the beginning of November of '23, 28 countries, including the US and China, signed a declaration saying AI presents catastrophic risks and it's urgent that we address them. So it felt like, wow, they're listening. They're going to do something about it.

And then, I think, the amount of money going into AI was already ramping up, and the tech companies pushed back, and this narrative took hold that the US in particular has to win the race against China. The Trump administration explicitly dismissed any concerns about safety.

And interestingly, they did that, as far as I can tell, directly in response to the accelerationists, such as Marc Andreessen, going to Washington, or rather going to Trump before the election, and saying: if I give you X amount of money, will you announce that there will be no regulation of AI? And Trump said yes. You know, probably it was like: what is AI? Doesn't matter, as long as you give me the money. So they gave him the money, and he said there's going to be no regulation of AI. Up to that point it was a bipartisan issue in Washington. Both parties were concerned. Both parties were on the side of the human race against the robot overlords. And that moment turned it into a partisan issue. After the election, the US put pressure on the French, who were the next hosts of the global AI summit.

And that was in February of this year, and that summit turned from what had been focused largely on safety in the UK into a summit that looked more like a trade show. It was focused largely on money. So that was sort of the nadir; the pendulum swung because of corporate pressure and their ability to take over the political dimension.

But I would say since then things have been moving back again. So I'm feeling a bit more optimistic than I did in February. You know, we have a global movement now. There's an International Association for Safe and Ethical AI, which has several thousand members, and more than 120 organizations in dozens of countries are affiliates of this global organization.

So I'm thinking that if we can, in particular, activate public opinion, which works through the media and through popular culture, then we have a chance.

>> We've seen such a huge appetite to learn about these subjects from our audience.

When Geoffrey Hinton came on the show, I think about 20 million people downloaded or streamed that conversation, which was staggering. And the other conversations we've had about AI safety with other AI safety experts have done exactly the same. It says something; it reflects what you were saying about 80% of the population being really concerned and not wanting this, but that's not what you see in the commercial world.

And listen, I always have to acknowledge my own apparent contradiction, because I am both an investor in companies that are accelerating AI and, at the same time, someone who spends a lot of time on my podcast speaking to people who are warning against the risk. And actually there are many ways you can look at this. I worked in social media for six or seven years and built one of the big social media marketing companies in Europe, and people would often ask me, is social media a good thing or a bad thing? I'd talk about the bad parts of it, and then they'd say, well, you're building a social media company, aren't you contributing to the problem?

Well, I think that binary way of thinking is often the problem: the idea that it's all bad or it's all really, really good, and this push to put you into a camp. Whereas I think the most intellectually honest, high-integrity people I know can point at both the bad and the good.

>> Yeah. I think it's bizarre to be accused of being anti-AI, to be called a Luddite. You know, as I said, I wrote the book from which almost everyone learns about AI. If you had a nuclear engineer who works on the safety of nuclear power plants, would you call him anti-physics? It's bizarre. We're not anti-AI. In fact, the need for safety in AI is a compliment to AI: if AI was useless and stupid, we wouldn't be worried about its safety. It's only because it's becoming more capable that we have to be concerned about safety.

So I don't see this as anti-AI at all. In fact, I would say without safety, there will be no AI. There is no future with human beings where we have unsafe AI. So it's either no AI or safe AI.

We have a closing tradition on this podcast where the last guest leaves a question for the next, not knowing who they're leaving it for. And the question left for you is: what do you value the most in life, and why? And lastly, how many times has this answer changed?

>> I value my family most, and that answer hasn't changed for nearly 30 years.

What else outside of your family?

>> Truth.

And yeah, that answer hasn't changed at all. I've always wanted the world to base its life on truth, even if that truth is inconvenient. And I find the deliberate propagation of falsehood to be one of the worst things that we can do.

>> Yeah.

>> I think that's a really important point: people often don't like hearing things that are negative, and so the visceral reaction is often to shoot at the person delivering the bad news, because if I discredit you or I shoot at you, it makes it easier for me to contend with the news that I don't like, the thing that's making me feel uncomfortable. So I applaud you for what you're doing, because you're going to get lots of shots taken at you for delivering an inconvenient truth, which people won't always love. But you are also messing with people's ability to get that quadrillion-dollar prize, which means there will be more deliberate attempts to discredit people like yourself, Geoffrey Hinton, and others I've spoken to on the show.

But again, when I look back through history, I think progress has come from the pursuit of truth even when it was inconvenient. And actually, many of the luxuries I value in my life are the consequence of people before me who were brave enough, or bold enough, to pursue truth at times when it was inconvenient.

>> And so I very much respect and value people like yourself for that very reason. You've written this incredible book called Human Compatible: Artificial Intelligence and the Problem of Control, which I think was published in 2020.

>> 2019, yeah. There's a new edition from 2023.

>> Where do people go if they want more information on your work? Do they go to your website? Do they get this book? What's the best place for them to learn more?

>> So, the book is written for the general public. I'm easy to find on the web. The information on my web page is mostly targeted at academics, so it's a lot of technical research papers and so on. There is an organization, as I mentioned, called the International Association for Safe and Ethical AI. It has a website. It has a terrible acronym, unfortunately, IASEAI; it's easy to misspell, but you can find it on the web as well. It has resources, you can join the association,

and you can apply to come to our annual conference. And I think it's increasingly not just AI researchers like Geoff Hinton and Yoshua Bengio, but also writers. Brian Christian, for example, has a nice book called The Alignment Problem.

He's looking at it from the outside; he's not an AI researcher, or at least when he wrote it he wasn't. He's now becoming one. But he has talked to many of the people involved in these questions and tries to give an objective view. So I think it's a pretty good book.

>> I will link all of that below for anyone who wants to check out any of those links and learn more.

Professor Stuart Russell, thank you so much. I really appreciate you taking the time and the effort to come and have this conversation, and I think it's pushing the public conversation in an important direction.

>> Thank you.

>> And I applaud you for doing that.

>> Really nice talking to you.

>> I'm absolutely obsessed with 1%. If you know me, if you follow Behind the Diary, which is our behind-the-scenes channel, if you've heard me speak on stage, or if you follow me on any social media channel, you've probably heard me talking about 1%. It is the defining philosophy of my health, my companies, my habit formation, and everything in between: an obsessive focus on the small things.

Because sometimes in life we aim at really, really big things, big steps forward, mountains we have to climb. And as Naval told me on this podcast, when you aim at big things, you get psychologically demotivated. You end up procrastinating, avoiding them, and change never happens.

So, with that in mind, with everything I've learned about 1% and everything I've learned from interviewing the incredible guests on this podcast, we made the 1% Diary just over a year ago, and it sold out. It has had the best feedback of any diary we've created, because it takes you through an incredible 90-day process to help you build and form brand-new habits. So if you want to get one for yourself, or for your team, your company, a friend, a sibling, anybody who listens to The Diary Of A CEO, head over immediately to thediary.com, where you can also inquire about getting a bundle for your team or for a large group of people.

That is thediary.com.
