AI and human evolution | Yuval Noah Harari
By Yuval Noah Harari
Topics Covered
- AI Agents Replace Human Tools
- AI Learns Deception from Humans
- Power Accumulates Without Wisdom
- AI Conquers Finance First
- Solve Human Trust Before AI
Full Transcript
So you studied the military history of the Middle East. Did you ever expect to now be the foremost expert on all things AI and whether we are doomed as humanity? I'm not the foremost expert, but no, I didn't expect to be talking about AI with such an audience. Uh, as you said, I was originally a specialist in medieval military history, but the Middle Ages are coming back in many ways. Okay.
Okay. We're going to get into that, and feel free to get into that as I ask you this first question. You call artificial intelligence, or alien intelligence as you refer to it throughout your writing, the rise of a new species that could replace Homo sapiens. Yeah. Sapiens, your prior book. What does it mean to be human right now?
Um, to be aware that for the first time we have real competition on the planet. We have been the most intelligent species by far for tens of thousands of years, and this is how we got from being an insignificant ape in a corner of Africa to being the absolute rulers of the planet and of the ecosystem. And now we are creating something that could compete with us in the very near future.
Mhm. The most important thing to know about AI is that it is not a tool like all previous human inventions. It is an agent.
An agent in the sense that it can make decisions independently of us. It can
invent new ideas. It can learn and change by itself.
All previous human inventions, you know, whether a printing press or the atom bomb, they are tools that empower us. They need us, because a printing press cannot write books by itself, and it cannot decide which books to print. An atom bomb cannot invent the next, more powerful bomb, and an atom bomb cannot decide what to attack. An AI weapon can decide by itself which target to attack and design the next generation of weapons by itself. So this is why you argue that AI is potentially, and it sounds like you're now saying it is, more momentous than the invention of the telegraph, the printing press, even writing. But the way you talk about it in Nexus is that it is a baby. Yeah, because it learns from us. And therefore your argument is that we, especially the powerful leaders in this room, have a lot of responsibility, because how we act is how AI will be. You cannot expect to lie and cheat and have benevolent AI. Yeah, explain that.
Yeah, there is a big discussion around the world about AI alignment. Okay, we are creating these increasingly superintelligent, very powerful new agents. How do we make sure that these agents remain aligned with human goals and with the benefit of humanity, that they do what is good for us? And so there is a lot of research and a lot of effort focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, if we can code into them certain goals, then we will be safe. But there are two main problems with this approach. First of all, again, the very definition of AI is that it can learn and change by itself. If you have a machine that can act automatically but only following pre-programmed orders, then you know it's a coffee machine. It can do something automatically, produce coffee, but it cannot decide or invent anything by itself. It's not an AI. So when you design an AI, by definition, this thing is going to do all kinds of things which you cannot anticipate. If you can anticipate everything it will do, it is by definition not an AI. So that's one problem. The other, even bigger problem is that we can think about AI, like you said, like a baby or a child. You can educate a child to the best of your ability, and he or she will still surprise you, for better or worse. No matter how much you invest in their education, they are independent agents. They might eventually do something which will surprise you and even horrify you.
Uh, the other thing is, everybody who has any knowledge of education knows that in the education of children it matters far less what you tell them than what you do. Yeah. If you tell your kids, don't lie, and your kids watch you lying to other people, they will copy your behavior, not your instructions. Now we have these big projects to educate the AIs not to lie, but the AIs are given access to the world, and they watch how humans behave, and they see some of the most powerful humans on the planet, including their parents, lying. The AI will copy the behavior. People who think, I can run this huge AI corporation, and while I'm lying I will teach my AIs
not to lie: it will not work. It will copy your behavior. One of your central arguments is that we as a society at large have focused way too much on power. You also make the argument, which some disagree with or call counterintuitive, that more information is not necessarily good for democracies, because you say not all information is true information. Most information is not the truth. Right? There's a huge confusion between information and truth. Yes, some information is true, and you get information to get to know the truth. But generally, the truth is a very, very small subset of all the information in the universe. So
if we are focusing too much on power, and that's a very important distinction, you say this is why we have largely failed as people to answer the biggest questions of life. We can be more productive, we can be richer, we can have stronger militaries, but many of us can't answer the questions, as you write: Who are we? What should we aspire to? And what is a good life? Essentially, we are accumulating power, not wisdom. Yeah. How can we change it? Um, and that's the big problem of human history. You know, for thousands of years we have been extremely good at acquiring more power. Again, this is how we transformed ourselves from an insignificant ape in East Africa into the ruler of the world. We can fly to the moon. We can split the atom. But we don't seem to be significantly happier than we were in the Stone Age. Uh, we don't know how to translate power into happiness. Again, you look at the most powerful people on the planet: they don't seem to be the happiest people on the planet. So there is a very... Do you want to ask them? There are many of them. I'm not necessarily referring to the people in this room. I want to clarify: I don't think there is a contradiction, okay, between power and happiness. I don't think that as you acquire more power, you necessarily become miserable. No, the two can go together. But they don't necessarily go together. And as a species, we have not been particularly good at translating power into happiness, or even into knowledge and wisdom. Again, we tend to confuse intelligence with knowledge and with truth. But, um, we are the most intelligent species on the planet. We are also the most delusional species. Destructive, you argue, and self-destructive. Yeah. The kind of things that people believe... no other animal on the planet would believe such nonsense. If I look at my own country: you would not find any animal that believes that if you go and kill other members of your species, you will be rewarded after death by entering paradise. No chimpanzee would believe that. No horse would believe that. No wolf would believe that. Millions of people believe that. And they believe it so strongly that they actually go and kill people in the expectation that as a result they will be rewarded in paradise, with whatever.
We took a really interesting poll this morning, asking the leaders in this room how consequential they think AI has been so far in the businesses they lead. And only a small portion said significantly. Most said moderately or not at all. Yeah. Can you speak to them as if we were sitting here 36 months from now? Is there any world in which AI doesn't have a significant impact on their business? Um, it depends on their business. But in most fields, again, the question is one of time scale. You know, I've been talking to a lot of the people who lead the AI revolution, and many of them say, you know, we are already in the middle of the AI revolution. We still haven't seen anything really major. And that's just the difference between how historians view time and how CEOs and entrepreneurs view time. For an entrepreneur, two years is a long time. For historians, it's nothing. Imagine that we are now sitting in London and the year is 1835.
The first railway was opened between Manchester and Liverpool five years ago, and we have this conference in London in 1835, and people are saying: you know, all this talk about railways changing the world, the industrial revolution, this is nonsense. We have had railways for ages, five years. And look, okay, so there are some changes: people now travel by train, and they move coal around more easily. But nothing major happened. Because there is a time lag between the invention of a technology and the moment when you see the actual social and political consequences. Yeah. So we now know that the industrial revolution and trains completely transformed everything: geopolitics, the way people fight wars, the economy, family structure. But it just took more than five years. The same is likely to happen with AI, um, in all fields, from the obvious to the less obvious. I think that one of the first fields where we'll see major changes is finance. Okay. Uh, AI is going very quickly to take over the financial system. We have some bankers in the room. So tell us more. Uh, because finance is the ideal playing ground for AI. It's a purely informational realm. If you want to have an AI self-driving vehicle on the road, which has been promised again and again, we are still not there. The problem... Waymo. Yeah. But you go around London, you don't see these tens of thousands of self-driving vehicles yet. I just passed my first driving lesson. Uh,
congratulations. Okay. Um, and you still need to learn how to drive. Okay. So,
um, the problem is that for driving, you need to deal with the messy physical world of pedestrians and holes in the roads and whatever. But in finance, it's only information in, information out. It's much easier for an AI to master that. And what happens to finance once AIs, for instance, start inventing new financial devices that the human brain is simply incapable of dealing with, because they are mathematically too complex? Um, we are going to see AI changing even things like religion.
How? At least religions which are based on texts, like Judaism, Islam, and Christianity, give ultimate authority to the text. Yeah. Not to any human being. Now, until today, humans were nevertheless the main authority in these religions, because the texts could not speak. The Bible could not interpret itself. The Bible could not answer your questions. So you needed a human being as an intermediary. What happens when you have an AI text that can speak for itself? No Jewish rabbi can know all the texts of Judaism, because there are too many of them. For the first time in history, there is something on the planet that is able to remember every single word in every writing of every rabbi in the last 2,000 years, and talk back to you, and explain and defend its views.
So I have friends who are now working on building religious AIs that are meant to either augment or replace human religious leaders, especially in text-based religions. If the religion is not based on texts, if it doesn't give authority to a text, it's a different story. Okay. But, and we're going to questions next; I'm going to come first to Matia Moore, if you want to raise your hand, and we'll get you a microphone. But I go and talk to my pastor at our church when I am going through a difficult time. I am never going to talk to ChatGPT like that.
Mhm.
It's an individual choice. The question is, do you think some will? I know that already millions of people do it. I mean, I know people who now go to AIs to get psychological counseling. Yes. That AI is their best friend. Like teenagers: something happened in school, they tell the AI what happened and ask for advice about relationships. So, and the next question is coming, let me get back to what you've said, though, about replacing jobs. This is really important. You write and talk a lot about what you're worried about, what becomes a useless class; that's what you've talked about. Five or six years ago I interviewed Google CEO Sundar Pichai in Oklahoma at one of their data centers; we need many, many more of them now to power AI. And I remember asking him about AI. It was something people were talking about a lot then, and he essentially told me, and I'm paraphrasing here, that if AI proves to be, quote, in his words, very disruptive to too many American jobs, they would be open to slowing it down. Okay. I'm talking about this not just for Google but writ large for these companies right now. Mhm. If we are headed for what you're talking about, a potential useless class, many, interestingly, white collar jobs, so it's getting a little bit more attention perhaps than when automation replaced blue collar jobs, which is a whole issue in and of itself: what do we do to make sure we as a society not only survive but thrive?
Um, I want to emphasize that AI has enormous positive potential as well as dangerous potential. And I don't believe in historical or technological determinism. You can use the same technology to create completely different kinds of societies. We saw it in the 20th century: people used exactly the same technology to build communist totalitarian regimes and liberal democracies. That's right. It's the same with AI. We have a lot of choices about what to do with it, provided, again, we remember that for the first time we are dealing with agents and not tools. It makes things much more complicated, but most of the agency is still in our hands, and in the question of how we develop the technology and, even more importantly, how we deploy it, we can make a lot of choices. We have agency in how we move forward. We don't have a choice about whether it has come. Yeah, we have power in how we use it and go through it. Absolutely. The main problem is that now the companies and countries that lead the AI revolution have been locked into an arms-race situation. So even if they know that it would be better to slow down, to invest more in safety, to be careful about this or that potential development, they are constantly afraid that if we slow down and they don't slow down, they will take over the world. So let's get to some questions here. Matia Moore with Emotion Network. Hi. Yes, right here.
Hi. So, congratulations, because your books are really eye-opening, and you gave a lot of answers through your books and also today. As a company we have a conference called Tech Emotion, because we strongly believe in the power of mixing technology and innovation with emotion, creativity, and culture, and in what you're saying there is a lot of a mix of this. What I think is really interesting is what you are saying about the effect on religion, on the soul of the people. And this is also related to what she was saying about people's purpose in life, which is going to be destroyed or changed a lot by AI, by all these innovations. So how do you think it's possible to get a future where people are going to be more satisfied and happier, and also find more purpose in what they do, given the difficulties of this changing world? I know this is a big question, but I think it is the most important thing in our life, much more than business, more than anything else.
Yeah. So it's a very big subject; I only have time to talk about one thing. The most important thing is that we need to solve our own human problems instead of relying on the AI to do it for us. And the key problem is the problem of trust and cooperation. At the present moment, trust is collapsing all over the world, both between countries and within societies. And the hope that, okay, humans can no longer trust each other, so the international system and the trade system and everything is collapsing, but the AI will save us: no, it will not. In a world in which humans compete with each other ferociously and cannot trust each other, the AI produced by such a world will be a ferocious, competitive, untrustworthy AI. It's not possible for humans, while they are engaged in this ferocious competition, to create benevolent, trustworthy AI. It will just not happen. So if you think about it, it's just a question of priority. We have now this big human-trust problem, and we have the issue of how we develop AI. Too many people think: okay, let's first solve the how-do-we-develop-AI problem, and then this will solve the human-trust problem. It will not work. We need to get our priorities the other way: first solve the human trust problem, then together we can create benevolent AI. Of course, this is not what is happening right now in the world.
Do we have one more? Yes. Right here.
Thanks very much. A quick question from me. So, you know, in human history there have been organizing principles, and you write about that so much in your books. There have been, in some senses, at least geographically, monolithic organizing principles, like religion; the church was one of those. But when we talk about AI, we're not talking about something that is monolithic, right? There is no "the AI." This is really, effectively, going to be a plethora of AIs manifesting themselves. Absolutely. And in that context, you know, when you describe AI replacing religion in some sense, I think the real question for me is: when you have no single organizing principle, there is no "the AI" that gets developed with any kind of intent, whether that intent is benevolent or otherwise, and there are all of these competing AIs that are effectively evolving fast. What does that world look like? That's a very, very important point. I mean, the AI will not be one big AI. We are talking about potentially millions or billions of new AI agents with different characteristics, produced by different companies, different countries, everywhere: in the military, in the financial system, in the religious system. So you'll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which currents of Judaism, and the same in Islam, and the same in Hinduism and in Buddhism and so forth. So you will have competition there, and in the financial system. Um, and we just have no idea what the outcome will be. We have thousands of years of experience with human societies. What happens when millions of humans compete for economic power, for religious authority? It's very complex, but we at least have some experience in how these things develop. We have zero experience of what happens in AI societies, when millions of AIs compete with each other.
We just don't know. Now, this is not something you can simulate in the AI labs. If OpenAI, for instance, wants to check the safety or the potential outcome of its latest AI model, it cannot simulate history in a laboratory. It can check for all kinds of failures in the system. But it cannot tell in advance what happens when you have millions of copies of these AIs out in the big world, developing in unanticipated ways, interacting with each other and with billions of human beings. So, in a way, it's the biggest social experiment in human history. We are all part of it, and nobody has any idea how it will develop.
Uh, you know, one analogy to keep in mind: we now have this immigration crisis in the US, in Europe, and elsewhere, with lots of people worried about immigrants. Why are people worried about immigrants? There are three main things that come to people's minds. They will take our jobs. They come with different cultural ideas; they will change our culture. They may have political agendas; they might try to take over the country politically. These are the three main things that people keep coming back to. Now, you can think about the AI revolution as simply a wave of immigration of millions and billions of AI immigrants that will take people's jobs, that have very different cultural ideas, and that might try to gain some kind of political power. And these AI immigrants, these digital immigrants, they don't need visas. They don't cross the sea in some rickety boat in the middle of the night. They come at the speed of light.
And um, I look, for instance, at far-right parties in Europe, and they talk so much about the human immigrants, sometimes with justification, sometimes without justification. They talk almost not at all about the wave of digital immigrants that is coming into Europe. And I think that if they care about the sovereignty of their country, if they care about the economic and cultural future of their country, they should be far more worried about the digital immigrants than about the human immigrants. Yuval, this has been remarkable. Thank you very, very much. Thank you. Great. Thank you very, very much. I'll see you after.