
Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!

By The Diary Of A CEO

Summary

Key takeaways

  • AI could cause human extinction: Geoffrey Hinton estimates a 10-20% chance that AI could lead to human extinction, a risk he acknowledges he was slow to recognize. [08:59]
  • AI's digital nature offers superiority: AI's digital nature allows for perfect replication and rapid information sharing between instances, making it billions of times better than humans at sharing knowledge. [56:38], [58:09]
  • Job displacement is an urgent threat: Mass joblessness due to AI is an urgent threat to human happiness, as purpose and contribution are tied to work, and this displacement is already beginning. [00:07], [40:03]
  • Regulations exclude military AI use: European AI regulations, while a step, contain a clause exempting military uses, indicating governments are unwilling to regulate themselves. [10:38]
  • AI can create novel cyber threats: AI may create new kinds of cyber attacks by 2030 that no human has conceived of, as they can think for themselves and analyze more data than any person. [14:12]
  • Social media algorithms deepen societal divides: Social media algorithms are designed to show users more extreme content that confirms their biases, driving people into echo chambers and further dividing society. [19:13], [21:47]

Topics Covered

  • Why regulations can't stop AI's dangerous progress.
  • Are humans the new chickens to superintelligence?
  • Digital AI is immortal and shares knowledge instantly.
  • Machines will have cognitive emotions, challenging human uniqueness.
  • AI will cause mass joblessness, threatening human purpose.

Full Transcript

They call you the godfather of AI. So

what would you be saying to people about

their career prospects in a world of

super intelligence? Train to be a

plumber. Really? Yeah. Okay. I'm going

to become a plumber. Geoffrey Hinton is

the Nobel Prize winning pioneer whose

groundbreaking work has shaped AI and

the future of humanity. Why do they call

you the godfather of AI? Because there

weren't many people who believed that we

could model AI on the brain so that it

learned to do complicated things like

recognize objects and images or even do

reasoning. And I pushed that approach

for 50 years and then Google acquired

that technology and I worked there for

10 years on something that's now used

all the time in AI. And then you left.

Yeah. Why? So that I could talk freely

at a conference. What did you want to

talk about freely? How dangerous AI

could be.

I realized that these things will one

day get smarter than us. And we've never

had to deal with that. And if you want

to know what life's like when you're not

the apex intelligence, ask a chicken. So

there's risks that come from people

misusing AI. And then there's risks from

AI getting super smart and deciding it

doesn't need us. Is that a real risk?

Yes, it is. But they're not going to

stop it cuz it's too good for too many

things. What about regulations? They

have some, but they're not designed to

deal with most of the threats. Like the

European regulations have a clause that

says none of these apply to military uses

of AI. Really? Yeah. It's crazy. One of

your students left OpenAI. Yeah. He was

probably the most important person

behind the development of the early

versions of ChatGPT, and I think he

left because he had safety concerns. We

should recognize that this stuff is an

existential threat and we have to face

the possibility that unless we do

something soon we're near the end. So

let's do the risks. What do we end up

doing in such a world?

This has always blown my mind a little

bit. 53% of you that listen to the show

regularly haven't yet subscribed to the

show. So, could I ask you for a favor

before we start? If you like the show

and you like what we do here and you

want to support us, the free simple way

that you can do just that is by hitting

the subscribe button. And my commitment

to you is if you do that, then I'll do

everything in my power, me and my team,

to make sure that this show is better

for you every single week. We'll listen

to your feedback. We'll find the guests

that you want me to speak to and we'll

continue to do what we do. Thank you so

much.

Geoffrey Hinton, they call you the

godfather of AI.

Uh yes they do. Why do they call you

that? There weren't that many people who

believed that we could make neural

networks work, artificial neural

networks. So for a long time in AI from

the 1950s onwards, there were kind of

two ideas about how to do AI.

One idea was that the core of human

intelligence was reasoning. And to do

reasoning, you needed to use some form

of logic. And so AI had to be based

around logic. And in your head, you must

have something like symbolic expressions

that you manipulated with rules. And

that's how intelligence worked. And

things like learning or reasoning by

analogy, would all come later once we'd

figured out how basic reasoning works.

There was a different approach, which is

to say, let's model AI on the brain

because obviously the brain makes us

intelligent. So simulate a network of

brain cells on a computer and try and

figure out how you would learn strengths

of connections between brain cells so

that it learned to do complicated things

like recognize objects in images or

recognize speech or even do reasoning. I

pushed that approach for like 50 years

because so few people believed in it.

There weren't many good universities

that had groups that did that. So if you

did that the best young students who

believed in that came and worked with

you. So I was very fortunate in getting

a whole lot of really good students some

of which have gone on to create and play

an instrumental role in creating

platforms like OpenAI. Yes, Ilya Sutskever is a nice example, and a whole bunch of them.

Why did you believe that modeling it off

the brain was a more effective approach?

It wasn't just me who believed it early on. Von Neumann believed it and Turing

believed it and if either of those had

lived I think AI would have had a very

different history but they both died

young. You think AI would have been here

sooner? I think the neural net approach would have been accepted much sooner if either of them had lived. In this season of your life, what

mission are you on? My main mission now

is to warn people how dangerous AI could

be. Did you know that when you became

the godfather of AI? No, not really. I

was quite slow to understand some of the

risks. Some of the risks were always

very obvious, like people would use AI

to make autonomous lethal weapons. That

is things that go around deciding by

themselves who to kill. Other risks,

like the idea that they would one day

get smarter than us and maybe we would

become irrelevant, I was slow to

recognize that. Other people recognized

it 20 years ago. I only recognized it a

few years ago that that was a real risk

that might be coming quite

soon. How could you not have foreseen

that, with everything you know about cracking the ability for these computers to learn similarly to how humans learn and, you know, introducing any

rate of improvement? It's a very good

question. How could you not have seen

that? But remember, neural networks 20 or 30 years ago were very primitive in what they could do. They were nowhere near as good as humans at things like vision

and language and speech recognition. The

idea that you have to now worry about it

getting smarter than people, that seemed silly then. When did that change? It

changed for the general population when

ChatGPT came out. It changed for me

when I realized that the kinds of

digital intelligences we're making have

something that makes them far superior

to the kind of biological intelligence

we have. If I want to share information

with you, so I go off and I learn

something and I'd like to tell you what

I learned. So I produce some sentences.

This is a rather simplistic model, but

roughly right. Your brain is trying to figure out how it can change the strengths of connections between neurons so that it might have put that word next itself. And so

you'll do a lot of learning when a very

surprising word comes and not much learning when it's a very obvious word. If I say fish and chips,

you don't do much learning when I say

chips. But if I say fish and cucumber,

you do a lot more learning. You wonder

why did I say cucumber? So that's

roughly what's going on in your brain.

I'm predicting what's coming next.

That's how we think it's working. Nobody

really knows for sure how the brain

works. And nobody knows how it gets the

information about whether you should

increase the strength of a connection or

decrease the strength of a connection.

That's the crucial thing. But what we do

know now from AI

is that if you could get information

about whether to increase or decrease

the connection strength so as to do

better at whatever task you're trying to

do, then we could learn incredible

things because that's what we're doing

now with artificial neural nets.

It's just we don't know for real brains

how they get that signal about whether

to increase or decrease.
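To make that concrete, here is a minimal toy sketch of the idea he's describing. It is my illustration, not Hinton's model: the vocabulary, sizes, stand-in hidden state, and learning rate are all invented for the demo. With a softmax output and cross-entropy loss, the weight update is proportional to the predicted probabilities minus the word that actually arrived, so the connections move a lot for a surprising word like "cucumber" and barely at all for a predictable one like "chips".

```python
# Toy sketch: surprise drives learning in a next-word predictor.
# Everything here (vocab, sizes, learning rate) is invented for illustration.
import numpy as np

vocab = ["fish", "and", "chips", "cucumber"]
V, H = len(vocab), 8

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(H, V))   # hidden-to-output connection strengths
W[:, vocab.index("chips")] += 1.0     # hand-wire "chips" as the expected word
h = np.full(H, 0.5)                   # stand-in hidden state after "fish and"

def weight_change(next_word, lr=0.1):
    logits = h @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # predicted distribution over next words
    target = np.zeros(V)
    target[vocab.index(next_word)] = 1.0
    grad = np.outer(h, p - target)     # cross-entropy gradient on the weights
    return np.abs(lr * grad).sum()     # total amount the connections would move

print("learning from 'chips'   :", weight_change("chips"))     # small
print("learning from 'cucumber':", weight_change("cucumber"))  # much larger
```

Run it and the "cucumber" update comes out far larger: the surprising word carries the learning signal, which is the point Hinton is making about the brain, and exactly the signal we do know how to compute for artificial nets.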

As we sit here today, what are the big

concerns you have around safety of AI?

if we were to list the top couple that are really front of mind and that

we should be thinking about. Um, can I

have more than a couple? Go ahead. I'll

write them all down and we'll go through

them. Okay. First of all, I want to make

a distinction between two completely

different kinds of risk.

There's risks that come from people

misusing AI. Yeah. And that's most of

the risks and all of the short-term

risks. And then there's risks that come

from AI getting super smart and deciding

it doesn't need us. Is that a real risk?

And I talk mainly about that second risk

because lots of people say, "Is that a

real risk?" And yes, it is. Now, we

don't know how much of a risk it is.

We've never been in that situation

before. We've never had to deal with

things smarter than us. So really, the

thing about that existential threat is

that we have no idea how to deal with

it. We have no idea what it's going to

look like. And anybody who tells you

they know just what's going to happen

and how to deal with it, they're talking

nonsense. So, we don't know how to

estimate the probabilities

it'll replace us. Um, some people say

it's like less than 1%. My friend Yann LeCun, who was a postdoc with me, thinks no, no, no: we build these things, so we're always going to be in control. We'll build them to be

obedient.

And other people like Yudkowsky say,

"No, no, no. These things are going to

wipe us out for sure. If anybody builds

it, it's going to wipe us all out." And

he's confident of that. I think both of

those positions are extreme. It's very

hard to estimate the probabilities in

between. If you had to bet on who was

right out of your two friends,

I simply don't know. So, if I had to

bet, I'd say the probabilities in

between, and I don't know where to

estimate it in between. I often say 10

to 20% chance they'll wipe us out, but

that's just gut feeling, based on the idea that we're still making them and we're

pretty ingenious. And the hope is that

if enough smart people do enough

research with enough resources, we'll

figure out a way to build them so

they'll never want to harm us. If we talk about that second path, sometimes I think about nuclear bombs and the invention of the

atomic bomb and how it compares like how

is this different because the atomic

bomb came along and I imagine a lot of

people at that time thought our days are

numbered. Yes, I was there. We did.

Yeah. But we're still here. We're still here. Yes. So

the atomic bomb was really only good for

one thing and it was very obvious how it

worked. Even if you hadn't had the

pictures of Hiroshima and Nagasaki, it

was obvious that it was a very big bomb

that was very dangerous. With AI,

it's good for many, many things. It's

going to be magnificent in healthcare

and education and more or less any

industry that needs to use its data is

going to be able to use it better with

AI. So, we're not going to stop the

development.

You know, people say, "Well, why don't

we just stop it now?" We're not going to

stop it because it's too good for too

many things. Also, we're not going to

stop it because it's good for battle

robots, and none of the countries that

sell weapons are going to want to stop

it. Like the European regulations, they

have some regulations about AI, and it's

good they have some regulations, but

they're not designed to deal with most

of the threats. And in particular, the

European regulations have a clause in them that says none of these regulations

apply to military uses of AI.

So governments are willing to regulate companies and people, but

they're not willing to regulate

themselves.

It seems pretty crazy to me. I go back and forward on it, but if Europe has a regulation and the rest of the world doesn't, that's a competitive disadvantage. Yeah, we're

seeing this already. I don't think

people realize that when OpenAI releases

a new model or a new piece of software

in America, they can't release it to

Europe yet because of regulations here.

So Sam Altman tweeted saying, "Our new AI

agent thing is available to everybody,

but it can't come to Europe yet because

there's regulations."

Yes. And what does that give us, a productivity disadvantage? A productivity disadvantage. What we need, I mean, at

this point in history when we're about

to produce things more intelligent than

ourselves, what we really need is a kind

of world government that works run by

intelligent, thoughtful people. And

that's not what we've got. So, a free-for-all? Well, what we've got is capitalism, which has done very nicely by us. It's produced lots of goods and services for us. But these big

companies, they're legally required to

try and maximize profits and that's not

what you want from the people developing

this stuff.

So let's do the risks then. You talked

about there's human risks, and then there's... So I've distinguished these two kinds of risk. Let's talk about all the

risks from bad human actors using AI.

There's cyber attacks.

So between 2023 and 2024,

they increased by about a factor of 12, or 1,200%.

And that's probably because these large

language models make it much easier to

do phishing attacks. And a phishing attack, for anyone that doesn't know, is where they send you something saying, uh, hi,

I'm your friend John and I'm stuck in El

Salvador. Could you just wire this

money? That's one kind of attack. But

the phishing attacks are really trying to get your login credentials. And now with

AI, they can clone my voice, my image.

They can do all that. I'm struggling at

the moment because there's a bunch of AI

scams on X and also Meta. And there's

one in particular on Meta, so Instagram,

Facebook at the moment, which is a paid

advert where they've taken my voice from

the podcast. They've taken my

mannerisms and they've made a new video

of me encouraging people to go and take

part in this crypto Ponzi scam or

whatever. And we've been, you know, we

spent weeks and weeks on end emailing Meta, telling them,

"Please take this down." They take it

down, another one pops up. They take

that one down, another one pops up. So,

it's like whack-a-mole. And it's very annoying. The heartbreaking

part is you get the messages from people

that have fallen for the scam and

they've lost £500 or $500 and they're cross with you cuz you recommended it, and I'm sad for them. It's very

annoying. Yeah. I have a smaller version of that, which is some people now publish papers with me as one of the

authors. Mhm. And it looks like it's in

order that they can get lots of

citations to themselves. Ah, so cyber

attacks a very real threat. There's been

an explosion of those. And obviously AI is very patient. So they

can go through 100 million lines of code

looking for known ways of attacking

them. That's easy to do. But they're

going to get more creative, and some people who know a lot believe that maybe by

2030 they'll be creating new kinds of

cyber attacks which no person ever

thought of. So that's very worrisome

because they can think for themselves. They can draw new

conclusions from much more data than a

person ever saw. Is there anything

you're doing to protect yourself from

cyber attacks at all? Yes. It's one of

the few places where I changed what I do

radically because I'm scared of cyber

attacks. Canadian banks are extremely

safe. In 2008, no Canadian banks came

anywhere near going bust. So, they're

very safe banks because they're well

regulated, fairly well regulated.

Nevertheless, I think a cyber attack

might be able to bring down a bank. Now,

all my savings are in shares held by banks. So if the bank

gets attacked and it holds your shares,

they're still your shares. And so, I

think you'd be okay unless the attacker

sells the shares because the bank can

sell the shares. If the attacker sells

your shares, I think you're screwed. I

don't know. I mean, maybe the bank would

have to try and reimburse you, but the

bank's bust by now, right? So,

So I'm worried about a Canadian bank

being taken down by a cyber attack and

the attacker selling shares that

it holds. So I spread my money and my

children's money between three banks in

the belief that if a cyber attack takes

down one Canadian bank, the other

Canadian banks will very quickly get

very careful. And do you have a phone

that's not connected to the internet? Do

you have any like, you know, I'm

thinking about storing data and stuff

like that. Do you think it's wise to

consider having cold storage? I have a

little disc drive and I back up my

laptop on this hard drive. So I actually

have everything on my laptop on a hard

drive. At least, you know, if the whole internet went down, I'd still have it on my laptop and I'd still have my information. Okay. Then the next

thing is using AI to create nasty

viruses.

Okay. And the problem with that is that

just requires one crazy guy with a grudge. One guy who knows a little bit

of molecular biology, knows a lot about

AI, and just wants to destroy the world.

You can now create

new viruses relatively cheaply using AI.

And you don't have to be a very skilled

molecular biologist to do it. And that's

very scary. So you could have a small

cult, for example.

A small cult might be able to raise a

few million dollars. For a few million

dollars, they might be able to design a

whole bunch of viruses. Well, I'm

thinking about some of our foreign

adversaries doing government funded

programs. I mean, there was lots of talk

around COVID and the Wuhan laboratory and what they were doing with gain-of-function research, but I'm

wondering if in, you know, a China or a

Russia or an Iran or something, the

government could fund a program for a

small group of scientists to make a

virus that they could, you know, I think

they could. Yes. Now, they'd be worried

about retaliation. They'd be worried

about other governments doing the same

to them. Hopefully, that would help keep

it under control. They might also be

worried about the virus spreading to

their country. Okay? Then there's um

corrupting elections.

So, if you wanted to use AI to corrupt

elections,

a very effective thing is to be able to

do targeted political advertisements

where you know a lot about the person.

So anybody who wanted to use AI for

corrupting elections would try and get

as much data as they could about

everybody in the electorate. With that

in mind, it's a bit worrying what Musk

is doing at present in the States, going

in and insisting on getting access to

all these things that were very

carefully siloed. The claim is it's to

make things more efficient, but it's

exactly what you would want if you

intended to corrupt the next election.

How do you mean? Because you get all

this data on the people. You get all

this data on people. You know how much they make, where they... you know everything

about them. Once you know that, it's

very easy to manipulate them because you

can make an AI that can send them messages, um, that they'll find very

convincing telling them not to vote, for

example.

So, I have no reason other than

common sense to think this, but I

wouldn't be surprised if part of the

motivation of getting all this data from

American government sources is to

corrupt elections. Another part might be

that it's very nice training data for a

big model, but he would have to be

taking that data from the government and

feeding it into his Yes. And what

they've done is turned off lots of the

security controls, got rid of some of the organizations that protect against

that. Um, so that's corrupting

elections. Okay. Then there's creating these echo chambers

by organizations like YouTube

and Facebook showing people things that

will make them indignant. People love to

be indignant. Indignant as in angry or

what does indignant mean? Feeling I'm

sort of angry but feeling righteous.

Okay. So, for example, if you were to

show me something that said Trump did

this crazy thing, here's a video of

Trump doing this completely crazy thing.

I would immediately click on it.

Okay. So, putting us in echo chambers

and dividing us. Yes. And that's um the

policy that YouTube and Facebook and

others use for deciding what to show you

next is causing that. If they had a

policy of showing you balanced things,

they wouldn't get so many clicks and

they wouldn't be able to sell so many

advertisements.

And so it's basically the profit motive

is saying show them whatever will make

them click. And what'll make them click

is things that are more and more

extreme. And that confirms my existing bias. That confirms your existing bias. So

you're getting your biases confirmed all

the time further and further and further

and further, which means you're driving apart, so that now in the States there are two communities that hardly talk to each other. I'm not

sure people realize that this is

actually happening every time they open

an app. But if you go on a TikTok or a

YouTube or one of these big social

networks, the algorithm, as you you

said, is designed to show you more of

the things that you had interest in last

time. So, if you just play that out over

10 years, it's going to drive you

further and further and further into

whatever ideology or belief you have and

further away from nuance and common

sense and, um, parity, which is a pretty remarkable thing. Like, people don't know it's happening. They just open

their phones and experience something

and think this is the news or the

experience everyone else is having.
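The feedback loop being described is easy to state in code. Below is a deliberately crude sketch, my own toy and not any platform's actual ranking system; the shape of the engagement model and the drift rate are assumptions. The only rule is: show whatever the user is most likely to click, where the most clickable item confirms their lean and sits a notch beyond it, and the user's position drifts toward what they are shown.

```python
# Toy sketch of an engagement-maximizing feed; all the numbers are made up.
items = [i / 10 for i in range(-10, 11)]      # content on a -1..+1 "extremeness" axis

def engagement_score(user_pos: float, item_pos: float) -> float:
    # Assumption: peak engagement sits slightly beyond the user's own bias.
    target = user_pos + (0.15 if user_pos >= 0 else -0.15)
    return -abs(item_pos - target)            # closer to the target = more clicks

user = 0.1                                    # starts almost neutral
for step in range(12):
    shown = max(items, key=lambda it: engagement_score(user, it))
    user = 0.8 * user + 0.2 * shown           # beliefs drift toward the feed
    print(f"step {step:2d}: shown {shown:+.1f}, user now at {user:+.2f}")
```

Run it and both the shown items and the user ratchet away from neutral. No step is malicious; maximizing clicks each round is enough to march the feed, and the viewer, toward the extreme.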

Right. So, basically, if you have a

newspaper and everybody gets the same

newspaper. Yeah. You get to see all

sorts of things you weren't looking for

and you get a sense that if it's in the

newspaper it's an important thing or

significant thing but if you have your

own news feed. On my news feed on my iPhone, three-quarters of the stories are about AI, and

I find it very hard to know if the whole

world's talking about AI all the time or

if it's just my news feed. Okay, so driving me into my echo chambers, which is going to continue to divide us further and further. I'm actually noticing that the algorithms are becoming even more,

what's the word?

Tailored. And people might go, "Oh,

that's great." But what it means is

they're becoming even more personalized,

which means that my reality is

becoming even further from your reality.

Yeah. It's crazy. We don't have a shared

reality anymore. I share a reality with other people who watch BBC News and other people who read the

Guardian and other people who read the

New York Times. I have almost no shared

reality with people who watch Fox News.

It's worrisome. Yeah. Behind all this is the

idea that these companies just want to

make profit and they'll do whatever it

takes to make more profit because they

have to. They're legally obliged to do

that. So, we almost can't blame the companies, can we? Well, capitalism's done very well for us. It's

produced lots of goodies. Yeah. But you

need to have it very well regulated.

So what you really want is to have rules

so that when some company is trying to

make as much profit as possible,

in order to make that profit, they have

to do things that are good for people in

general, not things that are bad for

people in general. So once you get to a

situation where in order to make more

profit the company starts doing things

that are very bad for society like

showing you things that are more and

more extreme that's what regulations are

for. So you need regulations with

capitalism. Now companies will always

say regulations get in the way make us

less efficient and that's true. The

whole point of regulations is to stop

them doing things to make profit that

hurt society. And we need strong

regulation. Who's going to decide whether it hurts society or not? Because, you know, that's the job of politicians. Unfortunately, if the politicians are owned by the companies, that's not so good. And also, the politicians might not understand the technology. You've probably seen the Senate hearings where

they wheel out you know Mark Zuckerberg

and these big tech CEOs and it is quite

embarrassing because they're asking the

wrong questions. Well, I've seen the video

of the US education secretary talking

about how they're going to get AI in the

classrooms except she thought it was

called A1

She's actually there saying we're going

to have all the kids interacting with

A1. There is a school system that's

going to start um making sure that first

graders or even pre-Ks have A1 teaching,

you know, every year starting, you know,

that far down in the grades. And that's just a wonderful thing.

[Laughter]

And these are the people in charge. Ultimately the tech companies are in charge, because they will outsmart the politicians. The tech companies in the States, at least a few weeks ago when I was there, were running an advertisement about

how it was very important not to

regulate AI because it would hurt us in

the competition with China. Yeah. And

that's a plausible argument

there. Yes it will. But you have to

decide, do you want to compete with

China by doing things that will do a lot

of harm to your society? And you

probably don't.

I guess they would say that it's not

just China, it's Denmark and Australia

and Canada and the UK and Germany. They're not so worried about those. But if they

kneecap themselves with regulation, if

they slow themselves down, then the

founders, the entrepreneurs, the

investors are going to go. I think

calling it kneecapping is taking a particular point of view: it's taking the point of view that regulations are

sort of very harmful. What you need to

do is just constrain the big companies

so that in order to make profit, they

have to do things that are socially

useful. Like Google search is a great

example that didn't need regulation

because it just made information

available to people. It was great. But

then if you take YouTube, which starts showing you adverts and showing you more and more extreme things, that needs regulation. But we don't have the people to regulate it, as we've identified. I

think people know pretty well um that

particular problem of showing you more

and more extreme things. That's a

well-known problem that the politicians

understand. They just um need to get on

and regulate it. So that was the next point, which was that the algorithms

are going to drive us further into our

echo chambers, right?

What's next? Lethal autonomous weapons.

Lethal autonomous weapons.

That means things that can kill you and

make their own decision about whether to

kill you, which is the great dream, I

guess, of the military-industrial

complex being able to create such

weapons. So, the worst thing about them

is big powerful countries always have

the ability to invade smaller poorer

countries. They're just more powerful.

But if you do that using actual

soldiers, you get bodies coming back in

bags and the relatives of the soldiers

who were killed don't like it. So you

get something like Vietnam. Mhm. In the

end, there's a lot of protest at home.

If instead of bodies coming back in

bags, it was dead robots, there'd be

much less protest and the

military-industrial complex would like

it much more because robots are

expensive. And suppose you had something

that could get killed and was expensive

to replace. That would be just great.

Big countries can invade small countries

much more easily because they don't have

their soldiers being killed. And the

risk here is that these robots will malfunction, or they'll just be more... No, no. Even if the robots do exactly what the people who built the robots want them to do, the risk is that it's

going to make big countries invade small

countries more often. More often, because they can. Yeah. And it's not a nice thing

to do. So it brings down the friction of

war. It brings down the cost of doing an

invasion.

And these machines will be smarter at

warfare as well. So they'll be... Well, even when the machines aren't smarter. So the lethal autonomous weapons, they can make them now, and I think all the big defense companies are busy making them. Even if they're not smarter than people, they're still very nasty, scary things. Cuz

I'm thinking that, you know, they could

show just a picture. Go get this guy.

Yeah. And go take out anyone he's been texting, with this little wasp of a drone. So, two

days ago, I was visiting a friend of

mine in Sussex who had a drone that cost

less than £200

and

the drone went up. It took a good look

at me and then it could follow me

through the woods. It was very spooky having this drone follow me. It was

about 2 meters behind me. It was looking

at me and if I moved over there, it

moved over there. It could just track

me. Mhm. For 200 pounds, but it was

already quite spooky. Yeah. And I

imagine there's as you say a race going

on as we speak to who can build the most

complex autonomous weapons.

There is a a risk I often hear that some

of these things will combine and the

cyber attack will release weapons.

Sure. Um, you can get combinatorially many risks by combining

these other risks. Mhm. So, I mean, for

example, you could get a super

intelligent AI that decides to get rid

of people, and the obvious way to do

that is just to make one of these nasty

viruses. If you made a virus that was

very contagious, very lethal, and very

slow,

everybody would have it before they

realized what was happening. I mean, I

think if a super intelligence wanted to

get rid of us, it will probably go for

something biological like that that

wouldn't affect it. Do you not think it

could just very quickly turn us against

each other? For example, it could send a

warning on the nuclear systems in

America that there's a nuclear bomb

coming from Russia or vice versa and one

retaliates. Yeah. I mean, my basic view

is there's so many ways in which the

super intelligence could get rid of us.

It's not worth speculating about.

What you have to do is prevent it ever wanting to. That's what we should be doing research on. There's no way we're going to prevent it. It's smarter than us, right? There's no

way we're going to prevent it getting

rid of us if it wants to. We're not used

to thinking about things smarter than

us. If you want to know what life's like

when you're not the apex intelligence,

ask a chicken.

Yeah. I was thinking about my dog Pablo,

my French bulldog, this morning as I

left home. He has no idea where I'm

going. He has no idea what I do, right?

Can't even talk to him. Yeah. And the intelligence gap will be like that.

So you're telling me that if I'm Pablo,

my French bulldog, I need to figure out

a way to make my owner not wipe me out.

Yeah. So we have one example of that

which is mothers and babies. Evolution

put a lot of work into that. Mothers are

smarter than babies, but babies are in

control. And they're in control because, through lots of hormones and things, the mother just can't bear the sound of the baby crying. Not all mothers. Not all

mothers. And then the baby's not in

control and then bad things happen. We

somehow need to figure out how to make

them not want to take over. The analogy

I often use is forget about

intelligence, think about physical

strength. Suppose you have a nice little

tiger cub. It's sort of a bit bigger than a cat. It's really cute.

It's very cuddly, very interesting to

watch. Except that you better be sure

that when it grows up, it never wants to

kill you. Cuz if it ever wanted to kill

you, you'd be dead in a few seconds. And

you're saying the AI we have now is the

tiger cub. Yep. And it's growing up.

Yep.

So, we need to train it when it's a baby. Well, now, a tiger has lots of innate stuff built in. So, you know, when

it grows up, it's not a safe thing to

have around. But lions, people that have

lions as pets, yes. Sometimes the lion

is affectionate to its creator but not

to others. Yes. And we don't know

whether these AIs... We simply don't know whether we can

make them not want to take over and not

want to hurt us. Do you think we can? Do

you think it's possible to train super

intelligence? I don't think it's clear

that we can. So I think it might be

hopeless. But I also think we might be

able to. And it'd be sort of crazy if

people went extinct cuz we couldn't be

bothered to try. If that's even a

possibility, how do you feel about your

life's work? Because you were... Yeah. Um,

it sort of takes the edge off it,

doesn't it? I mean, the idea is it's going to be wonderful in healthcare and wonderful

in education and wonderful. I mean, it's

going to make call centers much more

efficient, though one worries a bit

about what the people who are doing that

job now do. It makes me sad. I don't

feel particularly guilty about

developing AI like 40 years ago because

at that time we had no idea that this

stuff was going to happen this fast. We

thought we had plenty of time to worry

about things like that. When you can't get the thing to do much, you want to get it to do a little bit more.

You don't worry about this stupid little

thing is going to take over from people.

You just want it to be able to do a

little bit more of the things people can

do. It's not like I knowingly did

something thinking this might wipe us

all out, but I'm going to do it anyway.

Mhm. But it is a bit sad that it's not

just going to be something for good.

So I feel I have a duty now to talk

about the risks.

And if you could play it forward and you

could go forward 30, 50 years and you

found out that it led to the extinction

of humanity, and if that does end up being the outcome,

well, if you played it forward and it

led to the extinction of humanity, I

would use that to tell people to tell

their governments that we really have to

work on how we're going to keep this

stuff under control. I think we need

people to tell governments that

governments have to force the companies

to use their resources to work on safety

and they're not doing much of that

because you don't make profits that way.

One of your students we talked about earlier, um, Ilya. Yep. Ilya left OpenAI. Yep. And there was lots of

conversation around the fact that he

left because he had safety concerns.

Yes. And he's gone on to set up an AI safety company. Yes.

Why do you think he left?

I think he left because he had safety

concerns. Really? Um, I still have

lunch with him from time to time. His

parents live in Toronto. When he comes

to Toronto, we have lunch together. He

doesn't talk to me about what went on at

Open AI, so I have no inside information

about that. But I know Ilya very well and

he is genuinely concerned with safety.

So I think that's why he left because he

was one of the top people. I mean, he was probably the most important person behind the development of ChatGPT, the early versions like GPT-2. He was very important in the development of that. You know him personally, so you know his character? Yes, he has a good moral compass. He's not like someone like Musk, who has no moral compass. Does Sam Altman have a good moral compass? We'll see.

I don't know Sam so I don't want to

comment on that. But from what you've

seen, are you concerned about the

actions that they've taken? Because if

you know Ilia and Ilia's a good guy and

he's left

that would give you some insight. Yes.

It would give you some reason to believe

that there's a problem there. And if you

look at Sam's statements

some years ago,

he sort of happily said in one interview that this stuff will probably kill us

all. That's not exactly what he said,

but that's what it amounted to. Now he's

saying you don't need to worry too much

about it. And I suspect that's not

driven by

seeking after the truth. That's driven

by seeking after money. Is it money or

is it power? Yeah. I shouldn't have said

money. It's some combination of

those. Yes. Okay. I guess money is a

proxy for power. But I've got a

friend who's a billionaire and he is in

those circles. And when I went to his

house and had uh lunch with him one day,

he knows lots of people in AI, building

the biggest AI companies in the world.

And he gave me a cautionary warning

across his kitchen table in

London where he gave me an insight into

the private conversations these people

have, not the media interviews they do

where they talk about safety and all

these things, but actually what some of

these individuals think is going to

happen and what do they think is going

to happen. It's not what they say

publicly. You know, one person who I shouldn't name, who is leading one of the biggest AI companies in the

world. He told me that he knows this

person very well and he privately thinks

that we're heading towards this kind of

dystopian world where we have just huge

amounts of free time. We don't work

anymore. And this person doesn't really

give a about the harm that it's

going to have on the world. And this

person who I'm referring to is building

one of the biggest AI companies in the

world. And I then watch this person's

interviews online trying to figure out

which of three people it is. Yeah. Well,

it's one of those three people. Okay.

And I watch this person's interviews

online and I I reflect on a conversation

that my billionaire friend had with me

who knows him and I go, "Fucking hell,

this guy's lying publicly." Like, he's

not telling the the truth to the world.

And that's haunted me a little bit. It's

part of the reason I have so many

conversations around AI on this podcast

because I'm like, I don't know if they're... I think some of them are a little bit sadistic about power. I think they like the idea that they

will change the world, that they will be

the one that fundamentally shifts the

world. I think Musk is clearly like

that right?

He's such a complex character that I don't really know how to place

Musk. Um he's done some really good

things like um pushing electric cars.

That was a really good thing to do.

Yeah. Some of the things he said about

self-driving were a bit exaggerated, but

he that was a really useful thing he

did. Giving the Ukrainians communication

during the war with Russia. Starlink. Um

that was a really good thing he did.

there's a bunch of things like that. Um,

but he's also done some very bad things.

So, coming back to this point of

the possibility of destruction

and the motives of these big companies,

are you at all hopeful that anything can

be done to slow down the pace and

acceleration of AI? Okay, there's two

issues. One is can you slow it down?

Yeah. And the other is, can you make it

so it will be safe in the end, so it won't wipe us all out? I don't believe we're

going to slow it down. Yeah. And the

reason I don't believe we're going to

slow it down is because there's

competition between countries and

competition between companies within a

country and all of that is making it go

faster and faster. And if the US slowed

it down, China wouldn't slow it down.

Does Ilya think it's possible to make AI safe? I think he does. He won't tell me what his secret sauce is. I'm not sure how many people know what his secret sauce is. I think a lot of the investors don't know what his secret sauce is, but they've given him billions of dollars anyway because they have so much faith in Ilya, which isn't foolish. I mean, he was very important in AlexNet, which got object recognition working well. He was the main force behind things like GPT-2, which then led to ChatGPT. So I think having a lot of faith in Ilya

is a very reasonable decision. There's

something quite haunting about the fact that the guy who made, and was the main force behind, GPT-2, which gave rise to this whole revolution, left the company for safety reasons. He knows something that

I don't know about what might happen

next. Well, now, I don't know the precise details, but I'm fairly sure the company had indicated that it would use a significant fraction of its resources, of the compute time, for doing safety research, and then it reduced that fraction. I

think that's one of the things that

happened. Yeah, that was reported

publicly. Yes. Yeah.

We've gotten to the autonomous weapons

part of the risk framework. Right. So

the next one is joblessness. Yeah. In

the past, new technologies have come in

which didn't lead to joblessness. New

jobs were created. So the classic

example people use is automatic teller machines. When automatic teller machines came in, a lot of bank tellers didn't

lose their jobs. They just got to do

more interesting things. But here, I

think this is more like when they got

machines in the industrial revolution.

And

you can't have a job digging ditches now

because a machine can dig ditches much

better than you can. And I think for

mundane intellectual labor, AI is just

going to replace everybody. Now, it may well be in the form of having fewer people using AI assistants. So a combination of a person and an AI assistant is now doing the work that 10

people could do previously. People say

that it will create new jobs though, so

we'll be fine. Yes. And that's been the

case for other technologies, but this is

a very different kind of technology. If

it can do all mundane human intellectual

labor,

then what new jobs is it going to

create? You'd have to be very

skilled to have a job that it couldn't

just do. So I don't think

they're right. I think you can try and

generalize from other technologies that

have come in like computers or automatic

teller machines, but I think this is

different. People use this phrase. They

say AI won't take your job. A human

using AI will take your job. Yes, I

think that's true. But for many jobs,

that'll mean you need far fewer people.

My niece answers letters of complaint to

a health service. It used to take her 25

minutes. She'd read the complaint and

she'd think how to reply and she'd write

a letter. And now she just scans it into

um a chatbot and it writes the letter.

She just checks the letter. Occasionally

she tells it to revise it in some ways.

The whole process takes her five

minutes. That means she can answer five

times as many letters, and that means they need five times fewer people like her, since she can do the job that five of her used to do. Now, that will mean they need fewer

people. In other jobs, like in health

care, they're much more elastic. So, if

you could make doctors five times as

efficient, we could all have five times

as much health care for the same price,

and that would be great. There's there's

almost no limit to how much health care

people can absorb. They always want more

healthcare if there's no cost to it.

There are jobs where you can make a

person with an AI assistant much more

efficient and it won't lead to fewer

people because you'll just have much

more of that being done. But most jobs I

think are not like that. Am I right in

thinking the sort of industrial

revolution

played a role in replacing muscles? Yes.

Exactly. And this revolution in AI

replaces intelligence the brain. Yeah.

So, so mundane intellectual labor is

like having strong muscles and it's not

worth much anymore. So, muscles have

been replaced. Now intelligence is being replaced. Yeah. So, what remains?

Maybe for a while some kinds of

creativity, but the whole idea of super intelligence is that nothing remains. Um,

these things will get to be better than

us at everything. So, what what do we

end up doing in such a world? Well, if

they work for us, we end up getting lots

of goods and services for not much

effort. Okay. But that sounds tempting

and nice, but I don't know. There's a

cautionary tale in creating more and

more ease for humans, in it going

badly. Yes. And we need to figure out if

we can make it go well. So the the nice

scenario is imagine a company with a CEO

who is very dumb, probably the son of

the former CEO. And he has an executive

assistant who's very smart and he says,

"I think we should do this." And the

executive assistant makes it all work.

The CEO feels great. He doesn't

understand that he's not really in

control. And in in some sense, he is in

control. He suggests what the company

should do. She just makes it all work.

Everything's great. That's the good

scenario. And the bad scenario, the bad

scenario, she thinks, "Why do we need

him?"

Yeah.

I mean, in a world where we have super

intelligence, which you don't believe is

that far away. Yeah, I think it might

not be that far away. It's very hard to

predict, but I think we might get it in

like 20 years or even less. I made the

biggest investment I've ever made in a

company because of my girlfriend. I came

home one night and my lovely girlfriend

was up at 1:00 a.m. in the morning

pulling her hair out as she tried to

piece together her own online store for

her business. And in that moment, I

remembered an email I'd had from a guy

called John, the founder of Stan Store, our

new sponsor and a company I've invested

incredibly heavily in. And Stan Store

helps creators to sell digital products,

courses, coaching, and memberships all

through a simple customizable link in

bio system. And it handles everything,

payments, bookings, emails, community engagement, and even links with Shopify.

And I believe in it so much that I'm

going to launch a Stan challenge. And as

part of this challenge, I'm going to

give away $100,000 to one of you. If you

want to take part in this challenge, if

you want to monetize the knowledge that

you have, visit stephenbartlet.stan

stan.store to sign up. And you'll also

get an extended 30-day free trial of

Stan Store if you use that link. Your

next move could quite frankly change

everything. Because I talked about

ketosis on this podcast and ketones, a

brand called Ketone IQ sent me their

little product here and it was on my

desk when I got to the office. I picked

it up. It sat on my desk for a couple of

weeks. Then one day, I tried it and

honestly, I have not looked back ever

since. I now have this everywhere I go

when I travel all around the world. It's

in my hotel room. My team will put it

there. Before I did the podcast

recording today that I've just finished,

I had a shot of Ketone IQ. And as is

always the case when I fall in love with

a product, I called the CEO and asked if

I could invest a couple of million quid

into their company. So, I'm now an

investor in the company as well as them

being a brand sponsor. I find it so easy

to drop into deep focused work when I've

had one of these. I would love you to

try one and see the impact it has on

you, your focus, your productivity, and

your endurance. So, if you want to try

it today, visit ketone.com/stephven

for 30% off your subscription. Plus,

you'll receive a free gift with your

second shipment. That's

ketone.com/stephven.

I'm excited for you. I am. So, what's

the difference between what we have now

and super intelligence? Because it seems

to be really intelligent to me when I

use, like ChatGPT or Gemini. Okay. So

AI is already better than

us at a lot of things in particular

areas like chess for example. Yeah. AI

is so much better than us that people

will never beat those things again.

Maybe the occasional win but basically

they'll never be comparable again.

Obviously the same in Go. In terms of the amount of knowledge they have, um, something like GPT-4 knows thousands of

times more than you do. There's a few

areas in which your knowledge is better

than its and in almost all areas it just

knows more than you do. What areas am I

better than it? Probably in interviewing

CEOs. You're probably better at that.

You've got a lot of experience at it.

You're a good interviewer. You know a

lot about it. If you got GPT-4 to interview a CEO, it'd probably do a

worse job. Okay.

I'm trying to think if I agree with that statement. Uh, GPT-4, I think, for sure. Yeah. But it may not be long before... Yeah. I guess you could train one on how I ask questions and what I do

and Sure. And if you took a general

purpose sort of foundation model and

then you trained it up on not just you

but every every interviewer you could

find doing interviews like this but

especially you. You'll probably get to

be quite good at doing your job but

probably not as good as you for a while.

Okay. So, there's a few areas left and

then super intelligence becomes when

it's better than us at all things. When it's much smarter than you, and at almost all things it's better than you. Yeah. And

you say that this might be a

decade away or so. Yeah. It might be. It

might be even closer. Some people think

it's even closer and might well be much

further. It might be 50 years away.

That's still a possibility. It might be

that somehow training on human data

limits you to not being much smarter

than humans. My guess is between 10 and

20 years we'll have super intelligence.

On this point of joblessness, it's

something that I've been thinking a lot

about in particular because I started

messing around with AI agents and we

released an episode on the podcast

actually this morning where we had a

debate about AI agents with the CEO of a big AI agent company and a few other people, and it was another moment where I had a eureka moment about what

the future might look like when I was

able in the interview to tell this agent

to order all of us drinks and then 5

minutes later in the interview you see

the guy show up with the drinks and I

didn't touch anything. I just told it to

order us drinks to the studio. And it didn't know who you normally got your drinks from? It figured that out from the web. Yeah, it figured it out cuz it went on Uber Eats. It has my data, I guess. And we put it on the

screen in real time so everyone at home

could see the agent going through the

internet, picking the drinks, adding a

tip for the driver, putting my address

in, putting my credit card details in,

and then the next thing you see is the

drinks show up. So that was one moment.

And then the other moment was when I

used a tool called Replit and I built

software by just telling the agent what

I wanted. Yes. It's amazing, right? It's

amazing and terrifying at the same time.

Yes. Because if it can build

software like that, right? Yeah.

Remember that the AI when it's training

is using code and if it can modify its

own code

then it gets quite scary, right? Because it can modify itself. It can change itself in a

way we can't change ourselves. We can't

change our innate endowment, right?

There's nothing about itself that it

couldn't change.

On this point of joblessness, you have

kids. I do. And they have kids. No, they

don't have kids. No grandkids yet. What

would you be saying to people about

their career prospects in a world of

super intelligence? What should we be thinking about? Um, in the meantime, I'd

say it's going to be a long time before

it's as good at physical manipulation as

us. Okay. And so, a good bet would be to

be a plumber.

Until the humanoid robots show up. In such a world where there is mass joblessness, which is not something that just you predict; Sam Altman of OpenAI, I've heard him predict it, and many of the CEOs. Elon Musk: I watched an interview, which I'll play on screen, of him being asked this question, and it's very rare that you see Elon Musk silent for 12 seconds or whatever it was, and then he basically says

something about how he actually is living in suspended disbelief, i.e., he's basically

just not thinking about it. When you

think about advising your children on a

career with so much that is changing,

what do you tell them is going to be of

value?

Well,

that is a tough question to answer. I

would just say, you know, to to sort of

follow their heart in terms of what they

they find um interesting to do or

fulfilling to do. I mean, if I think

about it too hard, frankly, it can be uh

dispiriting and, uh, demotivating. Um

because, I mean, I've put a lot of blood, sweat, and tears into building the companies, and then I'm like, wait, should

I be doing this? Because if I'm

sacrificing time with friends and family that I would prefer to... but then ultimately the AI can do all these things. Does that make sense? I don't

know. Um to some extent I have to have

deliberate suspension of disbelief in

order to remain motivated. Um, so I guess I would say, just, you know,

work on things that you find

interesting, fulfilling, and

that contribute uh some good to the rest

of society. Yeah. With a lot of these threats, intellectually you can see the threat, but it's very hard to come to terms with it emotionally.

Yeah. I haven't come to terms with it

emotionally yet. What do you mean by

that?

I haven't come to terms with what the

development of super intelligence could

do to my children's future.

I'm okay. I'm 77.

I'm going to be out of here soon. But

for my children and my younger friends, my nephews and nieces and their

children um

I just don't like to think about what

could happen.

Why? Cuz it could be awful.

In In what way?

Well, if it ever decided to take over. I mean, it would need people for a while

to run the power stations until it

designed better analog machines to run

the power stations. There's so many ways

it could get rid of people, all of which

would of course be very nasty.

Is that part of the reason you do what

you do now? Yeah. I mean, I think we

should be making a huge effort right now

to try and figure out if we can develop

it safely. Are you concerned about the

midterm impact potentially on your

nephews and your kids in terms of

their jobs as well? Yeah, I'm concerned

about all that. Are there any particular

industries that you think are most at

risk? People talk about the creative

industries a lot and sort of knowledge

work. They talk about lawyers and

accountants and stuff like that. Yeah.

So, that's why I mentioned plumbers. I

think plumbers are less at risk. Okay,

I'm going to become a plumber. Someone

like a legal assistant, a paralegal.

Um they're not going to be needed for

very long. And is there a wealth

inequality issue here that will will

arise from this? Yeah, I think in a

society which shared out things fairly,

if you get a big increase in

productivity, everybody should be better

off.

But if you can replace lots of people by AIs, then the people who get replaced will be worse off, and the company that supplies the AIs will be much better off, and so will the company that uses the AIs. So

it's going to increase the gap between

rich and poor. And we know that if you

look at that gap between rich and poor,

that basically tells you how nice the

society is. If you have a big gap, you

get very nasty societies in which people

live in walled communities and put other

people in mass jails. It's not good to

increase the gap between rich and poor.

The International Monetary Fund has

expressed profound concerns that

generative AI could cause massive labor

disruptions and rising inequality and

has called for policies that prevent

this from happening. I read that in the

Business Insider. So, have they given any idea of what the policies should look

like? No. Yeah, that's the problem. I

mean, if AI can make everything much

more efficient and get rid of people for

most jobs, or have a person assisted by AI doing many, many people's work, it's not

obvious what to do about it. Is it universal basic income, give everybody money? Yeah, I think

that's a good start and it stops people

starving. But for a lot of people, their

dignity is tied up with their job. I

mean, who you think you are is tied up

with you doing this job, right? Yeah.

And if we said, "We'll give you the same

money just to sit around," that would

impact your dignity. You said something

earlier about it surpassing or being

superior to human intelligence. A lot of

people, I think, like to believe that AI

is on a computer and it's something

you can just turn off if you don't like

it. Well, let me tell you why I think

it's superior. Okay. Um, it's digital.

And because it's digital, you can simulate a neural network on one

piece of hardware. Yeah. And you can

simulate exactly the same neural network

on a different piece of hardware. So you

can have clones of the same

intelligence.

Now you could get this one to go off and

look at one bit of the internet and this

other one to look at a different bit of

the internet. And while they're looking

at these different bits of the internet,

they can be syncing with each other. So

they keep their weights the same, the

connection strengths the same. Weights

are connection strengths. Mhm. So this

one might look at something on the

internet and say, "Oh, I'd like to

increase this strength of this

connection a bit." And it can convey

that information to this one. So it can

increase the strength of that connection

a bit based on this one's experience.

And when you say the strength of the

connection, you're talking about

learning. That's learning. Yes. Learning

consists of saying instead of this one

giving 2.4 votes for whether that

one should turn on. We'll have this one

give 2.5 votes for whether this one

should turn on. And that will be a

little bit of learning. So these two

different copies of the same neural net

are getting different experiences.

They're looking at different data, but

they're sharing what they've learned by

averaging their weights together. Mhm.

And they can do that averaging at scale; you can average a trillion weights. When

you and I transfer information, we're

limited to the amount of information in

a sentence. And the amount of

information in a sentence is maybe 100

bits. It's very little information.

We're lucky if we're transferring like

10 bits a second. These things are

transferring trillions of bits a second.

So, they're billions of times better

than us at sharing information.
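To make that syncing concrete, here is a minimal Python sketch of the kind of scheme being described: two clones start from identical weights, learn from different data, and periodically average their connection strengths. The update rule and all numbers are illustrative assumptions, not anything from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clones start from identical connection strengths,
# which is only possible because the model is digital.
weights_a = rng.normal(size=(4, 4))
weights_b = weights_a.copy()

def local_update(w, rng):
    # Stand-in for learning from one slice of the internet:
    # each clone nudges its own connection strengths a little.
    return w + 0.01 * rng.normal(size=w.shape)

for step in range(100):
    weights_a = local_update(weights_a, rng)
    weights_b = local_update(weights_b, rng)
    if step % 10 == 0:
        # Syncing: average the strengths so each clone absorbs what
        # the other learned. Averaging a trillion weights moves vastly
        # more information than the ~100 bits in a spoken sentence.
        avg = (weights_a + weights_b) / 2.0
        weights_a, weights_b = avg.copy(), avg.copy()
```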

And that's because they're digital. And

you can have two bits of hardware using

the connection strengths in exactly the

same way. We're analog and you can't do

that. Your brain's different from my

brain. And if I could see the connection

strengths between all your neurons, it

wouldn't do me any good because my

neurons work slightly differently and

they're connected up slightly

differently. Mhm. So when you die, all

your knowledge dies with you. When these

things die, suppose you take these two

digital intelligences that are clones of

each other and you destroy the hardware

they run on. As long as you've stored

the connection strengths somewhere, you

can just build new hardware that

executes the same instructions. So,

it'll know how to use those connection

strengths and you've recreated that

intelligence. So, they're immortal.

We've actually solved the problem of

immortality, but it's only for digital

things.
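That storage-and-revival loop is easy to sketch. Assuming the connection strengths are just an array of numbers, saving them is all it takes for the intelligence to outlive any one piece of hardware (illustrative code, not any specific system):

```python
import numpy as np

# A trained net's connection strengths (a toy stand-in here).
weights = np.random.default_rng(1).normal(size=(4, 4))
np.save("connection_strengths.npy", weights)  # store them somewhere safe

del weights  # "destroy the hardware" the net was running on

# New hardware that executes the same instructions can reload the
# strengths, and the same intelligence is recreated, unchanged.
revived = np.load("connection_strengths.npy")
```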

So, it will essentially know everything that humans know, but more, because it will learn new things.

It will learn new things. It would also

see all sorts of analogies that people

probably never saw.

So, for example, at the point when GPT-4

couldn't look on the web, I asked it,

"Why is a compost heap like an atom

bomb?"

Off you go. I have no idea. Exactly.

Excellent. That's exactly what most

people would say. It said, "Well, the

time scales are very different and the

energy scales are very different." But

then it went on to talk about how a compost heap, as it gets hotter, generates heat faster, and an atom bomb, as it produces more neutrons, generates neutrons faster. And so they're both chain reactions, but at very different time and energy scales. And I don't believe GPT-4 had seen that during its training.

It had understood the analogy between a

compost heap and an atom bomb. And the

reason I believe that is if you've only

got a trillion connections, remember you

have 100 trillion. And you need to have

thousands of times more knowledge than a

person, you need to compress information

into those connections. And to compress

information, you need to see analogies

between different things. In other

words, it needs to see all the things

that are chain reactions and understand

the basic idea of a chain reaction and

code that, then code the ways in which they're

different. And that's just a more

efficient way of coding things than

coding each of them separately.
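The shared structure the model would have had to extract is tiny once you see it: both systems obey "the more you have, the faster you make more," and only the rate constant differs. A toy sketch, with purely illustrative constants:

```python
import numpy as np

def chain_reaction(x0, k, t):
    # Shared abstraction: growth rate proportional to the amount present,
    # i.e. dx/dt = k*x, whose solution is exponential growth.
    return x0 * np.exp(k * t)

# Illustrative rate constants only: say the compost heap's heat output
# doubles over a day, while the bomb's neutron count doubles in ~10ns.
heat = chain_reaction(1.0, k=np.log(2) / 86_400, t=7 * 86_400)  # one week
neutrons = chain_reaction(1.0, k=np.log(2) / 1e-8, t=1e-6)      # one microsecond

print(f"compost heat after a week: x{heat:.0f}")          # x128
print(f"neutrons after a microsecond: x{neutrons:.3g}")   # astronomically large
```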

So it's seen many, many analogies,

probably many analogies that people have

never seen. That's why I also think that

people who say these things will never

be creative. They're going to be much

more creative than us because they're

going to see all sorts of analogies we

never saw. And a lot of creativity is

about seeing strange analogies.

People are somewhat romantic about the

specialness of what it is to be human.

And you hear lots of people saying it's

very, very different: it's a computer; we are, you know, conscious; we are creative; we have

these sort of innate unique abilities

that the computers will never have. What

do you say to those people? I'd argue a

bit with the innate. Um,

so

the first thing I say is we have a long

history of believing people were

special. And we should have learned by

now. We thought we were at the center of

the universe. We thought we were made in

the image of God. White people thought

they were very special. We just tend to

want to think we're special.

My belief is that more or less everyone

has a completely wrong model of what the

mind is. Let's suppose I drink a lot or

I drop some acid (not recommended) and

I

say to you I have the subjective

experience of little pink elephants

floating in front of me. Mhm. Most

people

interpret that as there's some kind of

inner theater called the mind

and only I can see what's in my mind and

in this inner theater there's little pink

elephants floating around.

So in other words, what's happened is my

perceptual system's gone wrong and I'm

trying to indicate to you how it's gone

wrong and what it's trying to tell me.

And the way I do that is by telling you

what would have to be out there in the

real world for it to be telling the

truth.

And so these little pink elephants,

they're not in some inner theater. These

little pink elephants are hypothetical

things in the real world. And that's my

way of telling you how my perceptual

system's telling me fibs. So now let's do

that with a chatbot. Yeah. Because I

believe that current multimodal chatbots

have subjective experiences and very few

people believe that. But I'll try and

make you believe it. So suppose I have a

multimodal chatbot. It's got a robot arm

so it can point and it's got a camera so

it can see things and I put an object in

front of it and I say point at the

object. It goes like this. No problem.

Then I put a prism in front of its lens.

And so then I put an object in front of

it and I say point at the object and it

goes there.

And I say, "No, that's not where the

object is. The object's actually

straight in front of you, but I put a

prism in front of your lens." And the

chatbot says, "Oh, I see. The prism bent

the light rays." So, um, the object's

actually there, but I had the subjective experience that it was over there."

Now, if the chatbot says that, it's using the words "subjective experience" exactly the way people use them. It's an

alternative view of what's going on.

They're hypothetical states of the

world, which if they were true would

mean my perceptual system wasn't lying.

And that's the best way I can tell you

what my perceptual system is doing when

it's lying to me. Now, we need to go

further to deal with sentience and

consciousness and feelings and emotions,

but I think in the end they're all going

to be dealt with in a similar way.

There's no reason machines can't have

them all. People say machines

can't have feelings. And people are

curiously confident about that. I have

no idea why. Suppose I make a battle

robot and it's a little battle robot and

it sees a big battle robot that's much

more powerful than it. It would be

really useful if it got scared.

Now, when I get scared, um, various

physiological things happen that we

don't need to go into, and those won't

happen with the robot. But all the

cognitive things like I better get the

hell out of here and I better sort of

change my way of thinking so I focus and

focus and focus and don't get

distracted. All of that will happen with

robots, too. People will build in things

so that when the circumstances are such that

they should get the hell out of there,

they get scared and run away. They'll

have emotions then. They won't have the

physiological aspects, but they will

have all the cognitive aspects. And I

think it would be odd to say they're

just simulating emotions. No, they're

really having those emotions. The little

robot got scared and ran away. It's not

running away because of adrenaline. It's

running away because a sequence of neurological processes in its neural net happened which have the equivalent effect to adrenaline.
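The cognitive side of that fear is just a state change that reshapes behavior, something a few lines of code can already express. This is a toy sketch with made-up names and thresholds, not a claim about any real robot:

```python
from dataclasses import dataclass

@dataclass
class BattleRobot:
    position: float = 0.0
    afraid: bool = False

    def perceive(self, enemy_power: float, own_power: float) -> None:
        # The "cognitive aspect" of fear: a state change triggered by
        # appraising the situation. No adrenaline required.
        self.afraid = enemy_power > 2 * own_power

    def act(self) -> str:
        if self.afraid:
            # Fear reshapes behavior: stop patrolling, focus on escape.
            self.position -= 10.0
            return "flee"
        return "patrol"

robot = BattleRobot()
robot.perceive(enemy_power=100.0, own_power=10.0)
print(robot.act())  # -> "flee"
```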

And it's not just adrenaline, right? There's a lot of cognitive stuff that goes on when you get scared. Yeah. So,

do you think that

there is conscious AI? And when I say

conscious, I mean that represents the

same properties of consciousness that a

human has. There's two issues here.

There's a sort of empirical one and a

philosophical one. I don't think there's

anything in principle that stops

machines from being conscious.

I'll give you a little demonstration of

that before we carry on. Suppose I take

your brain and I take one brain cell in

your brain and I replace it, a bit Black Mirror-like, by a little piece of nanotechnology that's

just the same size that behaves in

exactly the same way when it gets pings

from other neurons. It sends out pings

just as the brain cell would have. So

the other neurons don't know anything's

changed.

Okay. I've just replaced one of your

brain cells with this little piece of

nanotechnology. Would you still be

conscious?

Yeah. Now you can see where this

argument is going. Yeah. So if you

replaced all of them as I replace them

all, at what point do you stop being

conscious? Well, people think of

consciousness as this like ethereal

thing that exists maybe beyond the brain

cells. Yeah. Well, people have a lot of

crazy ideas.

Um, people don't know what consciousness

is and they often don't know what they

mean by it. And then they fall back on

saying, well, I know it cuz I've got it

and I can see that I've got it and they

fall back on this theater model of the

mind which I think is nonsense. What do

you think of consciousness as if you had

to try and define it? Is it because I

think of it as just like the awareness

of myself? I don't know. I think it's a

term we'll stop using. Suppose you want

to understand how a car works. Well, you

know, some cars have a lot of oomph and

other cars have a lot less oomph. Like

an Aston Martin's got lots of oomph. And

a little Toyota Corolla doesn't have

much oomph. But oomph isn't a very good

concept for understanding cars. Um, if

you want to understand cars, you need to

understand about electric engines or

petrol engines and how they work. And it

gives rise to oomph, but oomph isn't a

very useful explanatory concept. It's a

kind of essence of a car. It's the

essence of an Aston Martin, but it

doesn't explain much. I think

consciousness is like that. And I think

we'll stop using that term, but I don't

think there's any reason why a

machine shouldn't have it. If your view

of consciousness is that it

intrinsically involves self-awareness,

then the machine's got to have

self-awareness. It's got to have

cognition about its own cognition and

stuff. But

I'm a materialist through and through.

And I don't think there's any reason why

a machine shouldn't have consciousness.

Do you think they do then have the same

consciousness that we think of ourselves

as being uniquely uh given as a gift

when we're born? I'm ambivalent about

that at present. So

I don't think there's this hard line. I

think as soon as you have a machine that

has some self-awareness,

it's got some consciousness. Um, I think

it's an emergent property of a complex

system. It's not a sort of essence

that's

throughout the universe. It's you make

this really complicated system that's

complicated enough to have a model of

itself

and it does perception. And I think then

you're beginning to get conscious

machines. So I don't think there's any

sharp distinction between what we've got

now and conscious machines. I don't

think it's going to be that one day we wake up and say, "Hey, if you put

this special chemical in, it becomes

conscious." It's not going to be like

that. I think we all wonder if these computers are thinking like we are, on their own, when we're not there. And if they're experiencing emotions, if they're contending with, you know, things like love, things that feel unique to biological species. Are they sat there thinking? Do they have concerns? I think they really

are thinking and I think as soon as you

make AI agents they will have concerns.

If you wanted to make an effective AI

agent, let's take a call center. In a call center, you have people at present; they have all sorts of

emotions and feelings which are kind of

useful. So suppose I call up the call

center and I'm actually lonely and I

don't actually want to know the answer

to why my computer isn't working. I just

want somebody to talk to. After a while,

the person in the call center will

either get bored or get annoyed with me

and will terminate it.

Well, you replace them by an AI agent.

The AI agent needs to have the same kind

of responses. If someone's just called

up because they just want to talk to the

AI agent and they're happy to talk for the

whole day to the AI agent, that's not

good for business. And you want an AI

agent that either gets bored or gets

irritated and says, "I'm sorry, but I

don't have time for this." And once it

does that, I think it's got emotions.

Now, like I say, emotions have two

aspects to them. There's the cognitive

aspect and the behavioral aspect, and

then there's a physiological aspect, and

those go together with us. And if the AI

agent gets embarrassed, it won't go red.

Yeah. So there's no physiological response; its skin won't start sweating. Yeah, but it

might have all the same behavior. And in

that case, I'd say yeah, it's having

emotion. It's got an emotion. So, it's

going to have the same sort of cognitive

thought and then it's going to act upon

that cognition in the same way, but

without the physiological responses. And

does that matter, that it doesn't go red in the face? I mean, it's just a different response. It makes it somewhat different from us. Yeah. For

some things, the physiological aspects

are very important like love. They're a

long way from having love the same way

we do. But I don't see why they

shouldn't have emotions. So I think

what's happened is people have a model

of how the mind works and what feelings

are and what emotions are and their

model is just wrong. What brought you to Google? You worked at

Google for about a decade, right? Yeah.

What brought you there? I have a son who

has learning difficulties

and in order to be sure he would never

be out on the street, I needed to get

several million dollars and I wasn't

going to get that as an academic. I

tried. So, I taught a Coursera course in

the hope that I'd make lots of money

that way, but there was no money in

that. Mhm. So I figured out well the

only way to get millions of dollars is

to sell myself to a big company.

And so when I was 65,

fortunately for me, I had two brilliant

students who produced something called

AlexNet, which was a neural net that was

very good at recognizing objects in

images. And

so Ilya and Alex and I set up a little

company and auctioned it. And we

actually set up an auction where we had

a number of big companies bidding for

us.

And that company was called AlexNet. No,

the network that recognized objects was called AlexNet. The company

was called DNN Research, deep neural

network research. And it was doing

things like this. I'll put this graph up

on the screen. That's AlexNet.

This picture shows eight images and AlexNet's ability, which is your company's

ability to spot what was in those

images. Yeah. So, it could tell the

difference between various kinds of

mushroom. And about 12% of ImageNet is dogs. And to be good at ImageNet, you have to tell the difference between very similar kinds of dog. And it got to be very good at that. And your

company AlexNet won several awards, I believe, for its ability to outperform its competitors. And so

Google ultimately ended up acquiring

your technology. Google acquired that

technology and some other technology.

And you went to work at Google at age

what, 66? I went at age 65 to work at

Google. 65. And you left at age 76? 75.

75. Okay. I worked there for more or

less exactly 10 years. And what were you

doing there? Okay, they were very nice

to me. They said pretty much

you can do what you like. I worked on

something called distillation that did

really work well

and that's now used all the time in AI. And distillation is a way of

taking what a big model, a big

neural net knows and getting that

knowledge into a small neural net.
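To illustrate the idea (a minimal sketch, not Hinton's or Google's actual code): in the standard distillation recipe, the small "student" net is trained to match the big "teacher" net's softened output distribution, where a temperature T exposes how the teacher ranks even the wrong answers.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Softened probabilities: higher temperature T flattens the distribution.
    z = logits / T
    z = z - z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # Train the student to match the teacher's softened outputs:
    # KL(teacher || student) over the temperature-T distributions.
    p = softmax(teacher_logits, T)  # big net's softened predictions
    q = softmax(student_logits, T)  # small net's softened predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits: the teacher is confident it's class 0 but rates
# class 1 as nearly right; the student must reproduce that whole shape,
# not just the top label.
print(distillation_loss(np.array([5.0, 3.0, -2.0]),
                        np.array([1.0, 0.5, 0.2])))
```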

Then at the end, I got very interested in

analog computation and whether it would

be possible to get these big language

models running in analog hardware. So

they used much less energy. And it was

when I was doing that work that I began

to really realize how much better

digital is for sharing information.

Was there a Eureka moment?

There was a Eureka month or two. Um and

it was a sort of coupling of ChatGPT coming out, although Google had very similar things a year earlier and I'd seen those, and that had a big effect on me. The closest I had to a

Eureka moment was when a Google system

called PaLM was able to say why a joke

was funny. And I'd always thought of

that as a kind of landmark. If it can

say why a joke's funny, it really does

understand and it could say why a joke

was funny.

And that coupled with realizing why

digital is so much better than analog

for sharing information

suddenly made me very interested in AI

safety and that these things were going

to get a lot smarter than us. Why did

you leave Google? The main reason I left

Google was cuz I was 75 and I wanted to

retire. I've done a very bad job of

that. The precise timing of when I left

Google was so that I could talk freely

at a conference at MIT, but I left

because I'm old and I was finding

it harder to program. I was making many

more mistakes when I programmed, which

is very annoying. You wanted to talk

freely at a conference at MIT. Yes. At

MIT, organized by MIT Tech Review. What

did you want to talk about freely? AI

safety. And you couldn't do that while

you were at Google. Well, I could have

done it while I was at Google. And

Google encouraged me to stay and work on

AI safety and said I could do whatever I

liked on AI safety. You kind of censor yourself if you work for a big company.

You don't feel right saying things that

will damage the big company. Even if you

could get away with it, it just feels

wrong to me. I didn't leave because I

was cross with anything Google was

doing. I think Google actually behaved

very responsibly. When they had these

big chat bots, they didn't release them

possibly because they were worried about

their reputation. they had a very good

reputation and they didn't want to

damage it. So OpenAI didn't have a

reputation and so they could afford to

take the gamble. I mean there's also a

big conversation happening around how it

will cannibalize their core business in

search. There is now. Yes. Yeah. Yeah.

And it's the old innovator's dilemma to some degree, I guess, that they're contending with.


I'm continually shocked by the types of

individuals that listen to this

conversation um because they come up to

me sometimes. So I hear from

politicians, I hear from some real

people, I hear from entrepreneurs all

over the world, whether they are the

entrepreneurs building some of the

biggest companies in the world or their,

you know, early stage startups. For

those people that are listening to this

conversation now that are in positions

of power and influence,

world leaders, let's say, what's your

message to them?

I'd say what you need is highly

regulated capitalism. That's what seems

to work best. And what would you say to

the average person

who doesn't work in the industry,

somewhat concerned about the future,

doesn't know if they're helpless or not.

What should they be doing in their own

lives?

My feeling is there's not much they can

do. This isn't going to be decided by individuals, just as climate change isn't going to be decided by people separating out the plastic bags from the compostables.

That's not going to have much effect.

It's going to be decided by whether the

lobbyists for the big energy companies

can be kept under control. I don't think

there's much people can do, except for trying to pressure their governments to force the big companies to work on AI safety. That they can do.

You've lived a fascinating,

winding life. I think one of the things

most people don't know about you is that

your family has a

big history of being involved in

tremendous things. You have a family

tree which is one of the most impressive

that I've ever seen or read about. Your

great-great-grandfather George Boole founded Boolean algebra, the logic which

is one of the foundational principles of

modern computer science. You have uh

your great-great-grandmother Mary Everest Boole, who was a mathematician and

educator who made huge leaps forward in

mathematics from what I was able to

ascertain. Um I mean I can the list goes

on and on and on. I mean, your great-great-uncle George Everest is who Mount Everest is named after.

Is that correct? I think he's my great-great-great-uncle. His niece married George Boole. So Mary Boole was Mary Everest Boole; she was the niece of Everest. And

your first cousin once removed, Joan

Hinton, was a nuclear physicist who worked on the Manhattan Project, which is the World War II

development of the first nuclear bomb.

Yeah. She was one of the two female

physicists at Los Alamos.

And then after they dropped the bomb,

she moved to China. Why? She was very

cross with them dropping the bomb. And

her family had a lot of links with

China. Her mother was friends with

Chairman Mao.

Quite weird.

When you look back at your life,

Jeffrey,

with the hindsight you have now and the retrospective clarity,

what might you have done differently if

you were advising me?

I guess I have two pieces of advice. One

is if you have an intuition that people

are doing things wrong and there's a

better way to do things, don't give up

on that intuition just because people

say it's silly. Don't give up on the

intuition until you've figured out why it's wrong. Figure out for yourself why that

intuition isn't correct. And usually

it's wrong if it disagrees with

everybody else and you'll eventually

figure out why it's wrong.

But just occasionally you'll have an

intuition that's actually right and

everybody else is wrong. And I lucked

out that way. Early on I thought neural

nets are definitely the way to go to

make AI and almost everybody said that

was crazy and I stuck with it because it seemed to me it was

obviously right.

Now the idea that you should stick with

your intuitions isn't going to work if

you have bad intuitions. But if you have

bad intuitions, you're never going to do

anything anyway, so you might as well

stick with them.

And in your own career journey, is there

anything you look back on and say, "With

the hindsight I have now, I should have

taken a different approach at that

juncture."

I wish I'd spent more time with my wife

um

and with my children when they were

little.

I was kind of obsessed with work.

Your wife passed away. Yeah. From

ovarian cancer. No. Or that was another

wife. Okay. I had two wives who had cancer. Oh, really? Sorry. The first one

died of ovarian cancer and the second

one died of pancreatic cancer. And you

wish you'd spent more time with her?

With the second wife? Yeah. Who was a

wonderful person.

Why did you say that in your 70s? What

is it that you've figured out that I

might not know yet?

Oh, just cuz she's gone and I can't

spend more time with her now. Mhm.

But you didn't know that at the time.

At the time, you think

I mean it was likely I would die before

her just cuz she was a woman and I was a

man.

I just didn't spend enough time when I

could.

I think I inquire there because I

think there's many of us that are so

consumed with what we're doing

professionally that we kind of assume

immortality with our partners because

they've always been there. So we... Yeah. I mean, she was very supportive of me spending a lot of time working. But why did you say your children as well?

What's the... Well, I didn't

spend enough time with them when they

were little

and you regret that now. Yeah.

If you had a closing message for my listeners about AI and AI safety, Jeffrey, what would that be?

there's still a chance that we can

figure out how to develop AI that won't

want to take over from us. And because

there's a chance, we should put enormous

resources into trying to figure that out

because if we don't, it's going to take

over. And are you hopeful?

I just don't know. I'm agnostic.

You must get in bed at night, and when you're thinking to yourself about probabilities of outcomes, there must be a bias in one direction, because there certainly is for me. I imagine everyone listening now has an internal prediction, which they might not say out loud, of how they think it's going to play out. I really don't know. I genuinely don't know. I think it's incredibly uncertain. When I'm feeling slightly depressed, I think people are toast, it's going to take over. When I'm feeling cheerful, I think we'll figure out a way. Maybe one of the facets of

being a human um is because we've always

been here, like we were saying about our

loved ones and our relationships, we

assume casually that we will always be

here and we'll always figure everything

out. But there's a beginning and an end

to everything as we saw from the

dinosaurs. I mean, yeah. And

we have to face the possibility

that unless we do something soon,

we're near the end.

We have a closing tradition on this

podcast where the last guest leaves a

question in their diary. And the

question that they've left for you is

with everything that you see ahead of

us,

what is the biggest threat you see to

human happiness?

I think the joblessness is a fairly

urgent short-term threat to human

happiness. I think if you make lots and

lots of people unemployed, even if they

get universal basic income, um they're

not going to be happy

because they need purpose. Because they

need purpose. Yes. And struggle. They

need to feel they're contributing

something. They're useful. And do you

think that outcome that there's going to

be huge job displacement is more

probable than not? Yes, I do. That one I think is definitely more probable than not. If I worked in a

call center, I'd be terrified.

And what's the time frame for that in

terms of mass jobs? I think it's

beginning to happen already. I read an

article in the Atlantic recently that

said it's already getting hard for

university graduates to get jobs. And

part of that may be that people are

already using AI for the jobs they would

have got. I spoke to the CEO of a major

company that everyone will know of, lots

of people use, and he said to me in DMs

that they used to have just over

7,000 employees. He said uh by last year

they were down to I think 5,000. He said

right now they have 3,600. And he said

by the end of summer because of AI

agents they'll be down to 3,000. So

you've got So it's happening already.

Yes. He's halved his workforce because

AI agents can now handle 80% of the

customer service inquiries and other

things. So it's it's happening already.

Yeah. So urgent action is needed. Yep. I

don't know what that urgent action is.

That's a tricky one because that depends

very much on the political system and

political systems are all going in the

wrong direction at present. I mean what

do we need to do? Save up money? Like do

we save money? Do we move to another

part of the world? I don't know. What

would you tell your kids to do? They

said, "Dad, like there's going to be

loads of job displacement." Because I

worked for Google for 10 years, they

have enough money. Okay. Okay. So,

they're not typical. What if they didn't

have money? Train to be a plumber.

Really? Yeah.

Jeffrey, thank you so much. You're the

first Nobel Prize winner that I've ever

had a conversation with, I think, in my

life. So, that's a tremendous honor. And

you received that award for a lifetime of exceptional work, pushing the world forward in so many profound ways that have led, and will lead, to great advancements and things that matter so much to us. And

now you've turned this season in your

life to shining a light on some of your

own work, but also on the broader risks of AI and how

it might impact us adversely. And

there's very few people that have worked

inside the machine of a Google or a big

tech company that have contributed to

the field of AI that are now at the very

forefront of warning us against the very

thing that they worked upon. There are

actually a surprising number of us now.

They're not as public, and they're actually quite hard to get to have these kinds of conversations because many of them are still in that industry. So, you know, as someone who often tries to contact these people and invites them to have conversations, they're often a little bit hesitant to speak openly.

They speak privately, but they're less willing to speak openly, because maybe they still have some sort of incentives at play. I have an advantage over them, which is I'm older, so I'm unemployed, so I can say what I...

Well, there you go. So, thank you for

doing what you do. It's a real honor and

please do continue to do it. Thank you.

Thank you so much.

People

think I'm joking when I say that, but

I'm not. The plumbing thing. Yeah. Yeah.

And plumbers are pretty well paid.
