
Ex-Google Insider WARNS: "You Are Not Prepared For 2027"

By The Diary Of A CEO Clips

Summary

Key takeaways

  • AI Immigrants Dwarf Human Immigration: AI is like a flood of millions of new digital immigrants with Nobel Prize-level capability working at superhuman speed for less than minimum wage, taking all cognitive labor. We're more worried about immigration from next door, but AI dwarfs it. [05:15]
  • NAFTA 2.0: AI Hollows Out the Middle Class: AI is like NAFTA 2.0, where a country of geniuses in a data center does cognitive labor for less than minimum wage, creating cheap goods but hollowing out the social fabric and destroying middle-class jobs the way manufacturing outsourcing did. [05:45]
  • Junior Lawyers Unhireable, Debt Trap: Law firms won't hire junior lawyers because AI is better than a law school graduate, leaving students with massive debt they can't pay off. This breaks the training pipeline from junior to senior lawyers, creating an elite managerial class. [01:47]
  • UBI Fails Without Global Redistribution: The math doesn't work for UBI, as a handful of US AI companies won't distribute wealth to everyone, including countries like the Philippines whose customer service economies get automated away. Are we going to have OpenAI pay for the entire Philippines? [01:08]
  • Humans Become a Politically Useless Class: This is the last moment human political power matters; soon states won't need humans as GDP comes from AI companies, making humans the useless class, unable to unionize the way workers could in the industrial revolution when factories needed them. [04:23]
  • Make AI a Tier One Voting Issue: AI reconstitutes every issue from climate change to education and healthcare, yet no one's mentioning it because politicians see no win in highlighting that everybody loses on the default path. Vote only for politicians making AI a tier one issue with guardrails. [07:28]

Topics Covered

  • Abundance Hinges on Redistribution Math
  • AI Ends Intergenerational Knowledge Transfer
  • AI Immigrants Dwarf Human Immigration
  • AI is NAFTA for Cognitive Labor
  • Clarity Fuels Courage for AI Guardrails

Full Transcript

One of the big questions I've had on my mind, I think in part because I saw those humanoid robots and I sent this to my friends and we had a little discussion on WhatsApp, is, in such a world, and I don't know whether you're interested in answering this, but what do we do? I was actually pulled up at the gym the other day with my girlfriend. We sat outside because we were watching the shareholder thing and we didn't want to go in yet.

And then we had the conversation, which is:

>> In a world of sustainable abundance, where the price of food and the price of manufacturing things, the price of my life generally, drops, and instead of having a cleaner or a housekeeper I have this robot that does all these things for me, what do I end up doing? What is worth pursuing at this point? Because you say that, you know, the cat is out of the bag as it relates to job impact. It's already happening.

Well, certain kinds of AI for certain kinds of jobs, and we can still choose from here which way we want to go. But go on. Yeah.

>> And I'm just wondering, in such a future, when you think about even yourself and your family and your friends, what are you going to be spending your time doing in such a world of abundance?

That's the ten-billion-dollar question: are we going to get abundance, or are we just going to get jobs being automated? And then the question is still, who's going to pay for people's livelihoods? So the math, as I understand it, doesn't currently seem to work out where everyone can get a stipend to pay for their whole life and the life quality they currently know. And are a handful of Western or US-based AI companies going to consciously distribute that wealth to literally everyone, meaning including all the countries around the world whose entire economy was based on a job category that got eliminated? So, for example, places like the Philippines, where, you know, a huge percentage of the jobs are customer service jobs. If that got automated away, are we going to have OpenAI pay for all of the Philippines?

Do you think that people in the US are going to prioritize that?

So then you end up with the problem of: you have law firms that are currently not wanting to hire junior lawyers because, well, the AI is way better than a junior lawyer who just graduated from law school. So you have two problems. You have the law student who just put in a ton of money and is in debt because they just got a law degree that now they can't get hired to pay off. And then you have law firms whose longevity depends on senior lawyers being trained up from junior lawyer to senior lawyer. What happens when you don't have junior lawyers who are actually learning on the job to become senior lawyers? You just have this sort of elite managerial class for each of these domains.

>> So you lose intergenerational knowledge transmission.

>> Interesting. And that creates a weakening of the social fabric.

>> I was watching some podcasts over the weekend with some successful billionaires who are working in AI talking about how they now feel that we should forgive student loans. And I

think in part this is because of what's happened in New York with, was it Mamdani?

>> Yeah, Mamdani. Yeah, Mamdani's been elected, and they're concerned that socialism is on the rise because the entry-level junior people in the society are suppressed under student debt, but also now they're going to struggle to get jobs, which means they're going to be more socialist in their voting, which means

>> a lot of people are going to lose power that want to keep power.

>> Yep. Exactly. That's probably going to happen.

>> Uh, okay. So their concern about suddenly alleviating student debt is in part because they're worried that society will get more socialist when the divide increases,

>> which is a version of UBI, or just creating, you know, a safety net that covers everyone's basic needs. So relieving student debt is on the way to creating a kind of universal basic need meeting, right?

>> Do you think UBI would work as a concept? UBI, for anyone that doesn't know, is basically

>> universal basic income, a stipend,

>> giving people money every month.

>> But I mean, we have that with Social Security. We've done this when it came to pensions; that was after the Great Depression, I think in like 1935, 1937, FDR created Social Security. But what happens when you have to pay for everyone's livelihood, everywhere, in every country? Again, how can we afford that?

>> Well, if the costs of making things go down 10x...

>> This is where the math gets very confusing, because I think the optimists say you can't imagine how much abundance and how much wealth it will create, and so we will be able to generate that much. But the question is: what is the incentive, again, for the people who've consolidated all that wealth to redistribute it to everybody else?

>> We just have to tax them. And how will we do that when the corporate lobbying interests of trillion-dollar AI companies can massively influence the government more than, you know, human political power?

>> In a way, this is the last moment that human political power will matter. It's sort of a use-it-or-lose-it moment, because if we wait... In the past, in the industrial revolution, they started automating, you know, a bunch of the work, and people had to do these jobs people don't want to do in the factory, and there were bad working conditions. They could unionize and say, hey, we don't want to work under those conditions, and their voice mattered because the factories needed the workers.

>> Mhm.

>> In this case, does the state need the humans anymore? Their GDP is coming in almost entirely from the AI companies. So suddenly this political class, this political power base, they become the useless class, to borrow a term from Yuval Noah Harari, the author of Sapiens.

In fact, he has a different frame, which is that AI is like a new kind of digital immigration. It's like a flood of millions of new digital immigrants, of alien digital immigrants, that are Nobel Prize-level in capability, work at superhuman speed, and will work for less than minimum wage. We're all worried about, you know, immigration from the other countries next door, uh, taking labor jobs. What happens when AI immigrants come in and take all of the cognitive labor? If you're worried about immigration, you should be way more worried about AI.

>> Like it dwarfs it.

>> You can think of it like this. I mean, if you think about it, um, we were sold a bill of goods in the 1990s with NAFTA. We said, "Hey, with NAFTA, the North American Free Trade Agreement, we're going to outsource all of our manufacturing to these developing countries, China, you know, Southeast Asia, and we're going to get this abundance. We're going to get all these cheap goods, and it'll create this world of abundance. All of us will be better off."

But what did that do? Well, we did get all these cheap goods. You can go to Walmart and go to Amazon, and things are unbelievably cheap. But it hollowed out the social fabric. And the median worker is not seeing upward mobility. In fact, people feel more pessimistic about that than ever. And people can't buy their own homes. And all of this is because we did get the cheap goods, but we lost the well-paying jobs for everybody in the middle class.

And AI is like another version of NAFTA. It's like NAFTA 2.0, except instead of China appearing on the world stage to do the manufacturing labor for cheap, suddenly this country of geniuses in a data center, created by AI, appears on the world stage, and it will do all of the cognitive labor in the economy for less than minimum wage. And we're being sold the same story: this is going to create abundance for all. But it's creating abundance in the same way that the last round created abundance. It did create cheap goods, but it also undermined the way that the social fabric works and created mass populism in democracies all around the world.

>> You disagree?

>> No, I agree. I agree.

>> I'm not, you know...

>> Yeah. No, I'm trying to play devil's advocate as much as I can.

>> Yeah. Yeah, please. Yeah.

>> But, um, no, I agree. And it's absolutely bonkers how much people care about immigration relative to AI. It's driving all the election outcomes at the moment across the world.

>> Whereas AI doesn't seem to be part of the conversation...

>> And AI will reconstitute every other issue that already exists. You care about climate change or energy? Well, AI will reconstitute the climate change conversation. If you care about education, AI will reconstitute that conversation. If you care about, uh, healthcare, AI reconstitutes all of these conversations. And what I think people need to do is make AI a tier one issue that people are voting on. And you should only vote for politicians who will make it a tier one issue, who want guardrails, so there's a conscious selection of the AI future, the narrow path to a better AI future rather than the default reckless path.

>> No one's even mentioning it. And when I hear...

>> Well, it's because there's no political incentive to mention it, because currently there's no good answer for the current outcome.

>> Yeah.

>> If I mention it, if I tell people, if I get people to see it clearly, it looks like everybody loses. So, as a politician, why would I win from that?

Although I do think that as the job loss conversation starts to hit, there's going to be an opportunity for politicians who are trying to mitigate that issue to finally get, you know, some wins. People just need to see clearly that the default path is not in their interest.

The default path is companies racing to release the most powerful, inscrutable, uncontrollable technology we've ever invented, with the maximum incentive to cut corners on safety. Rising energy prices, depleting jobs, you know, creating joblessness, creating security risks. That is the default outcome, because energy prices are going up. They will continue to go up. People's jobs will be disrupted, and we're going to get more, you know, deep fakes flooding democracy, and all these outcomes from the default path. And if we don't want that, we have to choose a different path.

>> What is the different path? And if we were to sit here in 10 years' time and you say... and Tristan, you say, do you know what? We were successful in turning the wheel and going a different direction. What series of events would have had to happen, do you think? Because I think, um, the AI companies very much have support from Trump. I watched the dinners where they sit there with the 20, 30 leaders of these companies, and, you know, Trump is talking about how quickly they're developing, how fast they're developing. He's referencing China. He's saying he wants the US to win. So, I mean, in the next couple of years, I don't think there's going to be much progress in the United States necessarily.

>> Unless there's a massive political backlash because people recognize that this issue will dominate every other issue.

>> How does that happen?

>> Hopefully conversations like this one.

>> Yeah.

Yeah.

>> I mean, what I mean is, you know, Neil Postman, who's a wonderful media thinker in the lineage of Marshall McLuhan, used to say: clarity is courage. If people have clarity and feel confident that the current path is leading to a world that people don't want, that's not in most people's interests, that clarity creates the courage to say, "Yeah, I don't want that." So I'm going to devote my life to changing the path that we're currently on. That's what I'm doing.

And that's what I see in people who take this on. I watch... if you walk people through this and you have them see the outcome, almost everybody right afterwards says, "What can I do to help?" Obviously, this is something that we have to change. And so that's what I want people to do: advocate for this other path. And we haven't talked about AI companions yet, but I think it's important that we should do that. I think it's important to integrate that before we get to the other path.

>> Go ahead. Um,

I'm sorry, by the way. I, uh... no, no apologies, but there's just so much information to cover and I...

>> Do you know what's interesting, as a side point, is how personal this feels to you, how passionate you are about it.

>> A lot of people come here and they tell me the matter-of-fact situation, but there's something that feels more sort of emotionally personal when we speak about these subjects with you. And I'm fascinated by that. Why is it so personal to you? Where is that passion coming from?

Because this isn't just your prefrontal cortex, the logical part of your brain. There's something in your limbic system, your amygdala, that's driving every word you're saying.

>> I care about people. I want things to go well for people. I want people to look at their children in the eyes and be able to say, like... You know, I think I grew up maybe under a false assumption, and something that really influenced my life was, um, I used to have this belief that there were some adults in the room somewhere. You know, like, we're doing our thing here, you know, we're in LA, we're recording this, and there are some adults protecting the country, national security; there are some adults who are making sure that geopolitics is stable; there are some adults who are, like, making sure that, you know, industries don't cause toxicity and carcinogens; there are adults who are caring about stewarding things and making things go well.

And I think that there have been times in history where there were adults, especially born out of massive world catastrophes. Coming out of World War II, there was a lot of conscious care about how do we create the institutions and the structures, uh, Bretton Woods, the United Nations, positive-sum economics, that would steward the world so we don't have war again. And in my first round of the social media work, as I started entering into the rooms where the adults were, I recognized that, because technology and software was eating the world, a lot of the people in power didn't understand the software. They didn't understand technology. You know, you go to the Senate Intelligence Committee and you talk about

>> what social media is doing to democracy, and where Russian psychological influence campaigns were happening, which were real campaigns. Um, and you realize... I realized that I knew more about that than the people who were on the Senate Intelligence Committee

>> making the laws.

>> Yeah. And that was a very humbling experience, because I realized, oh, there are not that many adults out there when it comes to technology's dominating influence on the world. And so there's a responsibility. And I hope people listening to this who are in technology realize that if you understand technology, and technology is eating the structures of our world, children's development, democracy, education, um, you know, journalism, conversation, it is up to the people who understand this to be part of stewarding it in a conscious way. And I do know that there have been many people, um, in part because of things like The Social Dilemma and some of this work, who have basically chosen to devote their lives to moving in this direction as well. But what I feel is a responsibility, because I know that most people don't understand how this stuff works, and they feel insecure: if I don't understand the technology, then who am I to criticize which way this is going to go?

We call this the under-the-hood bias.

You know, if I don't know how a car engine works, and if I don't have a PhD in the engineering that makes an engine, then I have nothing to say about car accidents. Like, no, you don't have to understand the engine in the car to understand the consequences of car accidents, which affect everybody.

>> And you can advocate for things like, you know, speed limits and zoning laws and, um, you know, turn signals and brakes and things like this.

>> And so, yeah, I mean, to me, it's just obvious.

It's like, I see what's at stake if we don't make different choices. And I think in particular the social media experience for me, of seeing it in 2013... it was like seeing into the future, seeing where this was all going to go. Like, imagine you're sitting there in 2013 and the world is working relatively normally. We're starting to see these early effects. But imagine you can kind of feel a little bit of what it's like to be in 2020 or 2024 in terms of culture, and what the dumpster fire of culture has turned into: the problems with children's mental health and psychology and anxiety and depression. But imagine seeing that in 2013.

Um, you know, I had friends back then who, um, have reflected back to me. They said, "Tristan, when I knew you back in those days, it was like you were seeing this kind of slow-motion train wreck. You just looked like you were traumatized." And...

>> You look a little bit like that now.

>> Do I? Oh, I hope not.

>> No, you do look a little bit traumatized. It's hard to explain. It's like someone who can see a train coming.

>> My friends used to call it, um, not PTSD, which is post-traumatic stress disorder, but pre-traumatic stress disorder: seeing things that are going to happen before they happen. And, um, that might make people think that I think I'm, you know, seeing things early or something. That's not what I care about.

I just care about us getting to a world that works for people. I grew up in a world that, you know, mostly worked. I grew up in a magical time, the 1980s and 1990s. And, you know, back then, using a computer was good for you. You know, I used my first Macintosh and played educational games and learned programming, and it didn't cause mass loneliness and mental health problems or, you know, break how democracy works. It was just a tool, a bicycle for the mind.

And I think the spirit of our organization, the Center for Humane Technology, is that that word humane comes from my co-founder's father. Uh, Jef Raskin actually started the Macintosh project at Apple. So before Steve Jobs took it over, um, he started the Macintosh project, and he wrote a book called The Humane Interface, about how technology could be humane, could be sensitive to human needs and human vulnerabilities. That was his key distinction: just like this chair, um, hopefully, is ergonomic... if you make an ergonomic chair, it's aligned with the curvature of your spine. It works with your anatomy.

>> Mhm. And he had the idea of a humane technology, like the Macintosh, that works with the ergonomics of your mind: your mind has certain intuitive ways of working, like I can drag a window, and I can drag an icon and move that icon from this folder to that folder, making computers easy to use by understanding human vulnerabilities. And I think of this new project, the collective humane technology project now, as: we have to make technology at large humane to societal vulnerabilities. Technology has to serve and be aligned with human dignity rather than wipe out dignity with job loss. It has to be humane to a child's socialization process, so that technology is actually designed to strengthen children's development rather than undermine it and cause AI suicides, which we haven't talked about yet.

And so I just deeply believe that we can do this differently. And I feel a responsibility in that.

On that point of human vulnerabilities, one of the things that makes us human is our ability to connect with others and to form relationships. And now with AI speaking language and understanding me... something I don't think people realize is that my experience with AI, or ChatGPT, is much different from yours. Even if we ask the same question,

>> it will say something different. And I

didn't realize this. I thought... you know, the example I gave the other day was, me and my friends were debating who is the best soccer player in the world, and I said Messi. My friend said Ronaldo. So we both went and asked our ChatGPTs the same question, and it said two different things.

>> Really?

>> Mine said Messi; his said Ronaldo.

>> Well, this reminds me of the social media problem, which is that people think when they open up their newsfeed, they're getting mostly the same news as other people, and they don't realize that they've got a supercomputer that's just calculating the news for them.

>> If you remember, in The Social Dilemma, there's the trailer. And if you typed into Google for a while, if you typed in "climate change is", then depending on your location, it would say "not real" versus "real" versus, you know, a made-up thing. And it wasn't trying to optimize for truth. It was just optimizing for what the most popular queries were in those different locations.

>> Mhm.

>> And I think that that's a really important lesson when you look at things like AI companions, where children and regular people are getting different answers based on how they interact with it.

If you love the Diary Of A CEO brand and you watch this channel, please do me a huge favor: become part of the 15% of the viewers on this channel that have hit the subscribe button. It helps us tremendously, and the bigger the channel gets, the bigger the guests.
