
Elon Musk: 3 Years of X, OpenAI Lawsuit, Bill Gates, Grokipedia & The Future of Everything

By All-In Podcast

Summary

## Key takeaways

- **X Algorithm Overcorrects on Engagement**: The X algorithm was overcorrecting by showing users an excessive amount of content similar to whatever they interacted with, essentially turning the gain up too high on any engagement. This was a bug that has since been mostly fixed. [03:39], [04:45]
- **Grokipedia: A Superior Alternative to Wikipedia**: Grokipedia, powered by Grok, is designed to be more neutral and accurate than Wikipedia while offering more comprehensive information. It analyzes publicly available internet data to correct and expand on Wikipedia articles, aiming to be a fundamentally superior product. [13:35], [16:46]
- **Twitter's Wasted Resources and Free Speech Restoration**: On acquiring Twitter, the new team discovered vast amounts of waste, including paid-for software that was never used and supplies stocked for empty buildings. More importantly, the platform has moved toward restoring free speech by ending bans and shadow bans and by exposing government collusion in censorship. [36:05], [37:12]
- **AI's Supersonic Tsunami and the Need for Safety**: Musk describes AI as a "supersonic tsunami" because of its rapid, powerful advance. He says he has actually tried to slow AI down, and that he originally created OpenAI as a counterweight to Google's dominance and to prioritize human safety in AI development. [59:11], [59:33]
- **Solar Power is the Obvious, Scalable Energy Solution**: The sun provides 99.8% of the solar system's energy, making solar the most logical and sustainable energy source. Earth's resources are abundant enough to power the entire planet with solar and battery technology, and China's massive solar panel production capacity highlights the global shift toward this energy source. [01:28:09], [01:29:17]

Topics Covered

  • Pete Buttigieg and Jason Calacanis Virtue Signaling
  • X Algorithm Amplifies Content You Interact With
  • Grok Will Curate Your Following Tab
  • China's Solar Dominance: Enough Panels to Power the US
  • Kardashev Scale: Classifying Civilizations by Energy Use

Full Transcript

let's get started you know we wanted to try something new this week every week

uh you know i get a little upset things perturb me sacks and uh when

it does i just yell and scream disgrazia and so i bought the domain name

disgrazia.com for no reason other than my own amusement but you know what i

i'm not alone in my absolute disgust at what's going on in the world so

this week we're going to bring out a new feature here on the all in

podcast disgrazia corner

he was the best guy around what about the people he murdered what murder

you can act like a man he's just kidding manners

you insulted him a little bit your hair was in the

toilet water disgusting i had to suffocate you little

this is fantastic this is our new feature

chamath you look like you're ready to go why don't you tell you tell everybody

who gets your disgrazia on this one wait we all had to come with a

disgrazia you really missed a memo all right fine enough i got one i

got okay all right just calm down my disgrazia corner goes to jason calacanis oh

here we go come on man you can't and pete buttigieg where they in the

first 30 seconds of the interview compared virtue signaling points about how each

one worked at various moments at amnesty international absolutely literally affecting zero

change making no progress in the world but collecting a badge that they use to

hold over other people a lot of letters we wrote a lot of letters which

is good that means it's like a good one because it's behind the scenes it's

gratia jason calacanis and pete buttigieg disgrazia great i'm glad that i get the first

one and you you can imagine what's coming next week for you i saw the

sydney sweeney dress today trending on social disgrazia on it's too

much what it's it's too much what is it i didn't i don't even know

what this is you didn't see it bring it up picture okay bring it up

it's a little floppy get a vestito et propo how is this too much it's

disgraceful a little bit uh like look at this oh my god too much it's

elegant too much in my day sacks a little uh cleavage maybe perhaps in the

90s or 2000s some side view this is too much hey guys hey hey

great highbrow subject meta we were

discussing the role of politics and the sydney sweeney dress

i don't know it's trending on x hi dad put away the phone

what's going on with the algorithm i'm getting sydney sweeney's dress all day and last

week well maybe you should stop everything in 15

times and then sacks poor sacks got you got invited to slut con for two

weeks straight on the algorithm no i say the algorithm has become if you if

you demonstrate actually right you can't even tell if that's a joke or a real

thing it's a real thing it's all too real oh it's actually really yeah for

real i've noticed yeah if you if you demonstrate interest in

anything on x now if you click on it god forbid you like something man

it will give you more of that it will give you a lot more yes
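
A toy sketch of the over-amplification being described here: if a feed score multiplies in a user's prior engagement with a topic and the gain on that term is set too high, a single like floods the timeline with that topic. This is purely illustrative, not X's actual ranking code; the scoring function, feature names, and the `engagement_gain` parameter are hypothetical.

```python
# Toy illustration of the "gain turned up too high" failure mode.
# Hypothetical scoring function; not X's ranking code.

def score_post(base_relevance: float, topic_affinity: float,
               engagement_gain: float) -> float:
    """Score a candidate post for a user's feed.

    topic_affinity is how strongly the user has interacted with this
    topic before (likes, replies, bookmarks). If engagement_gain is set
    too high, one interaction dominates the score and the feed becomes
    a torrent of the same thing.
    """
    return base_relevance * (1.0 + engagement_gain * topic_affinity)

# One like on a topic (affinity 1.0) versus a neutral post:
for gain in (0.5, 5.0):  # modest gain vs. gain turned up way too high
    liked = score_post(base_relevance=1.0, topic_affinity=1.0, engagement_gain=gain)
    neutral = score_post(base_relevance=1.0, topic_affinity=0.0, engagement_gain=gain)
    print(f"gain={gain}: liked-topic score {liked:.1f} vs neutral {neutral:.1f}")
```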

yes so we we did have an issue um you

still have somewhat of an issue where um there was there was an important bug

that was figured out that was solved over the weekend which caused um in network

posts to be uh not shown uh so you basically if you followed someone

you wouldn't see them wouldn't see their posts

um then um the uh the algorithm was not

probably taking into account um if you just uh dwelt on something um

uh but but if you if you interacted with it it would it would go

hog wild so if you pay as david said if you if you were to

favorite reply or engage with it in some way it is going to give you

a torrent of that same thing oh sax so maybe you did you

bookmark slut con i think you bookmarked it here's what i thought was good about

it though is all of a sudden i would see if you happen to sports

with sydney sweeney's boobs yeah that that

okay but what i thought was good about it was that you would see who

else had a take on the same subject matter and that actually has been a

useful part of it yeah and so you do you do get more of a

you get more of like a 360 view on whatever it is that you've shown

interest in yeah yeah it just it's like it was giving you if you

you take a you'd have like it was just going too far obviously it was

over correcting uh it had too much gain on um it just turned up the

gain way too high on any interaction would would you would then get a torrent

of that it's like it's like oh you had a taste of it we're going

to give you three helpings okay we're going to give you

the food funnel and that's all being done i assume it's all being done with

grok now so it's not like the old

hard -coded algorithm or is it using grok well what what's happening is that

you know we're gradually deleting the uh legacy twitter heuristics now the problem is that

it's like as you delete these heuristics it turns out the one heuristic the one

bug was covering for the other bug and so when you delete one side of

the bug you know it's like that that meme with the internet that way there's

like this very complicated machine and there's like a tiny little wooden stick that's something

that's going which was i guess amazon aws uh east or whatever had something like

that um you know when when when somebody pulled out the little stick there was

what's this oops i think it'd be good if it half of earth you know

it would be great if it showed like one person you follow and then like

it blended the old style which was just reverse chronological of your friends the original

version with this new version so you a little bit of both well you can

still you still have the plot everyone still has the following tab yeah now something

we're going to be adding is the ability to have a curated following tab because

the problem is like if you follow some people and they're maybe a little more

prolific than you're you know you

follow someone and some people are much more you know say a lot more than

others um that that makes the following tab hard to use um so we're going

to add a an option where you can have the following tab be curated so

uh grok will say one of the most interesting things posted by your friends and

and we'll show you that in the following tab it will also be

everything um

but uh but i think having that option uh will make the following tab much

more useful um so it'll be a curated list of people you follow um like

ideally the most interesting stuff that they've said which is kind of what you you

want to look at um and then uh we've mostly fixed the bug which

would um uh give you way too much of something if you interacted with a

particular subject matter um and then the uh the really big change which

is where grok literally reads uh everything that's posted to the platform um

uh which would actually there's there's about 100 million posts per day

so it's 100 million pieces of content per day um i think that's actually just

maybe just in english i think it goes beyond that if it's outside of english

um so uh grok is gonna we're gonna start off reading the uh

really what what grok thinks are the top 10 million of the 100 million and

it will actually read them uh and understand them and uh categorize them and match

them to users it's like this is a this is not a job humans could

ever do um and and then once that is scaling we see well we'll we'll

add the entire 100 million a day um so it's literally going to read through

100 million things and and and and show you the things that it thinks out

of 100 million uh posts per day what are the most interesting posts to you
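
A minimal sketch of the matching idea described here: a model "reads" each post, turns it into some representation of what it's about, and scores it against a user's interests; the same machinery could also drive semantic search or a curated Following tab. This is not xAI's pipeline; `embed()` below is a stand-in for a real language-model embedding, and every name, post, and threshold is hypothetical.

```python
# Toy content-to-user matching, assuming a stand-in "embedding".
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in embedding: bag of lowercase words (a real system would use an LLM)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_for_user(user_interests: str, posts: list[str], top_k: int = 3) -> list[str]:
    """Score every post against a profile of the user's interests and keep the top few.
    The same scoring could back semantic search queries or a curated Following tab."""
    profile = embed(user_interests)
    return sorted(posts, key=lambda p: cosine(profile, embed(p)), reverse=True)[:top_k]

posts = [
    "New battery chemistry cuts cost per kWh",
    "Falcon rocket booster lands after a record flight",
    "Puppy learns to skateboard",
]
print(rank_for_user("rocket booster landings", posts, top_k=1))
```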

how much of colossus will that take like a lot of work yeah that's like

is it tens of thousands of servers like to do that every day yeah my

my guess is it's probably on the order of 50k h100 something like that wow

and that will replace search so you'll be able to actually search on twitter and

find things in like with a with a plain language we'll have

semantic search where you can just ask a question um and it will show you

all content uh whether that is text pictures or video that matches your search query

semantically how um how's it been three years in this is a it was a

three -year anniversary like a couple this is three years yeah yeah remember it was

halloween yeah halloween's back halloween's back but it was

the the weekend you took over was halloween

yeah we had a good time yeah uh yeah three years

well thanks three years from now yeah what's the

takeaway three years later you were you you're obviously don't regret buying it it's save

free speech that was good seem to have turned that whole thing around that was

i think a big part of your mission but then you added it to xai

which makes it incredibly valuable as a data source so when you look back on

it the reason you bought it to stop crazy woke mind virus and make

truth exist in the world again great mission accomplished uh and now it has this

great future yeah we've got community notes you can also ask grok about any any

anything you see on the platform um you know just you just press the grok

icon on any x post and we'll analyze it for you and research it as much as you

want. So you can basically have, just by tapping the Grok icon, you can assess

whether that post is the truth, the whole truth, or nothing but the truth, or

whether there's something supplemental that you need to be explained. So I think it's actually,

we've made a lot of progress towards, yeah, freedom of speech

and people being able to tell whether something is false or not false, you know,

propaganda. The recent update to Grok is actually, I think, very good at piercing through

propaganda. So, and then we used that latest version of Grok to create

Grokopedia, which I think is much more, it's

not just, I think, more neutral, and more accurate than

Wikipedia, but it actually has a lot more information than a Wikipedia page. Did you

seed it with Wikipedia? Actually, take a step back. How did you guys, how did

you do this? Well, we used AI.

But meaning like totally unsupervised, just a complete training run on its own, totally synthetic

data, no seeded set, nothing.

Well, it was only just recently possible for us to do this. So we've

finished training on a maximally truth -seeking,

actually truth -seeking, a version of Grok that is good at cogent analysis. So

breaking down any given argument into its axiomatic elements,

assessing whether those axioms are, you know, the basic tests for cogency, the

axioms are likely to be true. They're not contradictory.

The conclusion most likely follows from those axioms. So

we just trained Grok on a lot of critical thinking. So it just got really

good at critical thinking, which was quite hard. And then we took that version of

Grok and said, okay, cycle through the million most popular articles in Wikipedia

and add, modify, and delete. So that means research

the rest of the internet, whatever's publicly available, and correct

the Wikipedia articles and fix mistakes, but also add a lot more

context. So sometimes really the nature of the propaganda

is that, you know, facts are stated that are technically true, but

do not properly represent a picture of the individual or event. This is

critical. Because when you have a bio, as you do, actually we all do on

Wikipedia over time, it's just the people you fired or you beat in business

or have an ax to grind. So it just slowly becomes like the place where

everybody, you know, kind of who hates you then puts their information. I looked at

mine, it was so much more representative and it was five times longer, six times

longer. And what it gave weight to was much more accurate,
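
A minimal sketch of the "cycle through the million most popular articles and add, modify, and delete" loop Musk describes above, under the assumption that research and revision are handled by model calls. `research_sources()` and `revise_article()` are hypothetical placeholders, not xAI's actual Grokipedia code.

```python
# Hypothetical sketch of an add/modify/delete pass over seed encyclopedia articles.

def research_sources(title: str) -> list[str]:
    # Placeholder: a real pipeline would gather publicly available sources here.
    return [f"https://example.org/search?q={title.replace(' ', '+')}"]

def revise_article(title: str, wikipedia_text: str, sources: list[str]) -> str:
    # Placeholder: a real pipeline would have the model fix errors and add context.
    return wikipedia_text + f"\n\n[Revised with {len(sources)} additional source(s).]"

def build_encyclopedia(articles: dict[str, str]) -> dict[str, str]:
    """Apply the research-and-revise pass to each seed article."""
    revised = {}
    for title, text in articles.items():
        sources = research_sources(title)
        revised[title] = revise_article(title, text, sources)
    return revised

seed = {"Photosynthesis": "Photosynthesis converts light into chemical energy."}
print(build_encyclopedia(seed)["Photosynthesis"])
```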

And this opportunity was sitting here, I think for a long time. It's just great

that you got to it because they don't update my page, but, you know, I

don't know, twice a month with, you know, and then who is the secret cabal?

There's 50 people who are anonymous, who decide what gets put on it. It was a

much better, much more updated page in version one. Yes, this is

version 0.1 as we put it, as we show at the top. So I

do think actually by the time we get to version 1.0, it'll be 10

times better. But even at this early stage, as you just mentioned, it's not just

that it's correcting errors, but it is creating a more accurate, realistic and

fleshed out description of people and events. Elon,

do you think that... And subject matters. Like, you can look at articles on physics

in Grokipedia that are much better than Wikipedia by far. This is what I was

going to ask you, is that do you think that you can take this corpus

of pages now and get Google to de -boost Wikipedia or boost

Grokipedia in traditional search? Because a lot of people still find this and they believe

that it's authoritative because it comes up number one, right? So how do we do

that? How do you flip Google? Yeah, so it really can... If people share a

lot of... If Grokipedia is used elsewhere, like if people cite it on their

websites or post about it on social media or when they do a search,

when Grokipedia shows up, if they click on Grokipedia, it will naturally rise

in Google's rankings. I did

text Sundar because, you know, even sort of a day after launch, if you typed

in Grokipedia, Google would just say, did you mean Wikipedia? yeah. And it wouldn't even

bring Grokipedia up at all. Yeah, that's true. So now... How's the usage then? Have

you seen good growth since it launched? Yeah. Is it very early? It went super

viral. So, yeah, we're seeing it cited all over the place.

But, yeah, it's... And I think we'll see it used more and more as people

refer to it. And people will judge for themselves. When you read a Grokipedia article

about a subject or a person that you know a lot about and you see,

wow, this is way better than Wikipedia, it's more comprehensive, it's way more accurate,

it's neutral instead of biased, then you're going to forward those links around

and say that this is actually the better source. Like, Grokipedia will

succeed, I think, very well because it is fundamentally a superior

product to Wikipedia. It is a better source of information. And we haven't even added

images and video yet. That's going to be awesome. Yeah, we're going to add a

lot of video. So, using GrokImagine to create videos.

And so, if you're trying to explain something,

GrokImagine can take the text from Grokipedia and then generate a video, uh,

an explanatory video so if you're trying to understand anything from how to tie a

bow tie to you know how do certain chemical reactions work or you know um

really anything um dietary things medical things um we could uh

grok well you can just go on and see the video of how it works

that is created when you have this version that's maximally truth -seeking as a model

do you think that there needs to be a better eval or a benchmark that

people can point to that shows how off of the truth things are so that

if you're going to start a training run with common crawl or if you're going

to use reddit or if you're going to use is it important to be able

to like say hey hold on a second this eval just sucked like you guys

suck on this eval like it's just this is crappy data

yeah i guess i think i mean there are a lot of evals out there

um i have complete confidence that grokipedia is going to succeed um because

wikipedia is actually not a very good product yeah it's it's it's the

information is sparse uh wrong and out of date um and if you

can go if you find if and and it doesn't have you know there are

very few images there's basically no video um so if you have something which is

um you know accurate comprehensive

uh has videos uh where moreover you can ask if there's any

part of it that you're curious about you can just highlight it and grok and

and ask grok right there um like if you're trying to learn something it's just

great it's it's it's it's it's not going to be a little bit better than

than wikipedia it's going to be a hundred times better elon do you think you'll

see like good uniform usage like if you look back on the last three

years since you bought twitter there was a lot of people after you bought twitter

that said i'm leaving twitter elon's bought it i'm going to go to this other

wherever the hell they went and there's all these news and there's

all these and there's all these articles yeah you know yeah but blue sky is

falling is my favorite i guess my my question is as you destroy the woke

mind viral kind of um control of the system and as

you bring truth to the system whether the system is through grokipedia or through x

do people like just look for confirmation bias and they actually don't accept the truth

like what do you like or do you think people are actually going to see

the truth and change yeah but i mean is that like

you thought sydney sweeney's boobs were great looking good yeah solid

solid up there a little share yeah yeah i think we just got

flagged on youtube yeah we did we that that was definitely going to give us

a censorship moment um yeah grade a moves no but but like like but do

people change their mind i mean if there's a i could take it there's no

such thing as grade a move it's off the rails already

david you were trying to ask a serious question go ahead well i just want

to know if people change their mind like can you actually change people's minds by

putting the truth in front of them or do people just take you know they

kind of ignore the truth because they're they feel like they're in some sort of

camp and they're like i'm on the side they want the confirmation but they want

the confirmation bias and they want to stay in a camp and they want to

be tribal about everything um it is remarkable how much people believe things simply because

it is their the the belief of the of their in group you know whatever

their sort of political uh or ideological tribe is um

so um i mean there's some some pretty hilarious videos of

you know um you know uh there's like some guy going around um it's like

a racist nazi or whatever and and then and then and he was like trying

to show them the videos where of the thing that they are talking about um

where he is in fact uh condemning the nazis in strongest possible terms and condemning

racism in the strongest possible terms and they literally don't even want to watch the

videos so so yeah the people or at least some people

would they were preferred um they will stick to whatever their

um ideological views are whatever that sort of political tribal views are uh no matter

what um the the evidence could be staring them in the face and they're just

going to be a flat earther you know there's there is no evidence that you

could show to a flat earther to convince them the world's round because everything is

just a lie uh the world is flat type of thing i think the the

ability to hit at grok in a reply and ask it a question in the

thread has really become like a truth -seeking mission missile on the platform so when

i put up metrics or something like that i reply to myself and i say

at grok is the information i just shared correct and can you find any better

information and please tell me if my argument is correct or if i'm wrong and

then it goes through and then it dm sax and then sax gets in my

replies and tries to correct me no but it does actually a really good job

of like and that combined with community notes now you've got like two swings at

bat the community's consensus view and then grok coming in i think it'd be like

really interesting if grok on like really powerful threads kind of did like its own

version of community notes and had it sitting there ahead of time you know like

you could look at a thread and it just had next to it you know

or maybe on like the specific statistic you could click on it and it would

show you like here's where that statistic's from. I mean, you can,

I mean, pretty much every, I mean, essentially every post on X, unless it's like

advertising or something, has the Grok symbol on it. And you just tap that symbol

and you're one tap away from a Grok analysis, literally just one tap. And we

don't want to clutter the interface with where it's providing an explanation. But I'm just

saying, if you go on X right now, it's one tap to get Grok's analysis.

And Grok will research the X post and give you an accurate answer.

And you can even ask it to do further research and further due diligence. And

you can go as far down the rabbit hole as you want to go. But

I do think like this is consistent with, we want X to be the best

source of truth on the planet by far. And one where you hear

any and all points of view, but where those points of view are corrected

by human editors with community notes. And the essence of community notes is that

people who historically disagree, agree that this community note is

correct. And all of the community notes code

is open source and the data is open source. So you can recreate any community

note from scratch independently. By and large, it's worked very well. Yeah. Yeah.
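
A simplified illustration of the Community Notes principle stated here: a note only counts as helpful when raters who historically disagree with each other both rate it helpful. The real, open-sourced algorithm uses matrix factorization over rater-note ratings; the toy version below just checks for agreement across two opposing rater clusters, and all names and thresholds are assumptions.

```python
# Toy "bridging consensus" check; not the actual Community Notes scorer.

def note_is_helpful(ratings: dict[str, bool], cluster_of: dict[str, str],
                    threshold: float = 0.6) -> bool:
    """ratings: rater -> rated helpful?  cluster_of: rater -> viewpoint cluster ("A"/"B")."""
    by_cluster: dict[str, list[bool]] = {}
    for rater, helpful in ratings.items():
        by_cluster.setdefault(cluster_of[rater], []).append(helpful)
    if len(by_cluster) < 2:
        return False  # needs support from more than one viewpoint
    # Every viewpoint cluster must, on average, find the note helpful.
    return all(sum(votes) / len(votes) >= threshold for votes in by_cluster.values())

cluster_of = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
print(note_is_helpful({"u1": True, "u2": True, "u3": True, "u4": False}, cluster_of))  # False: cluster B is split
print(note_is_helpful({"u1": True, "u2": True, "u3": True, "u4": True}, cluster_of))   # True: both sides agree
```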

I think we originally had the idea to have you back on the pod because

it was a three year anniversary of the Twitter acquisition. So I just wanted to

kind of reminisce a little bit. And I remember, yeah, I mean, I remember Where's

that sink? Where's that sink? Well, yeah. So Elon was staying at my house. We

had talked the week before and he told me the deal was going to close.

And so I was like, hey, do you need a place to stay? And he

took me up on it. And the day before he went to the Twitter office,

there was a request made to my staff. Do you happen to have an extra

sink? And they did not, but they were able to. Who has an extra sink,

really? But they were able to locate one at a nearby hardware store. And I

think they paid extra to get it out of the window or something. Well, I

think the store was confused because my security team was asking for any kind of

sink. And like, normally people wouldn't ask for any kind of sink. You need a

sink that goes in your bathroom or connects to a certain kind of plumbing. So

the lecturer asked, he's like, well, what kind of faucets do you want? No, no,

I just want a sink. Yeah. They think it's a mental person. The store was

confused that we just wanted a sink and didn't, and didn't care what, what's the

sink connected to. That was, that was, they were just like, they were like almost

not letting us buy the sink because, they, they thought maybe we'd buy the wrong

sink, you know? It's just rare that somebody wants a sink for specific sake.

For mean purposes. One of my favorite memories was Elon said, hey, you know, swing

by, check it out. And I said, okay, I'll come by. And I drive up

there and I'm looking where to park the car. And I realized there's just parking

spaces around the entire building. And I'm like, okay, this can't be like legal parking,

but I park and it's legal parking. Yeah. I mean, you're in downtown SF, so

you might get your window broken. Yeah. I might not be there when I get

back. But we get in there and the place is empty. And then. Yeah. Yeah.

It, it was seriously empty, except the cafeteria. There was an entire, uh, there were

two, the Twitter headquarters was two buildings. One of the buildings was completely and utterly

empty. Um, and the other building, uh, had like 5 % occupancy. And the

5 % occupancy, we go to the cafeteria, we all go get something to eat.

And we realized there's more people working in the cafeteria than at Twitter. There

were more people making the food than eating the food. And this giant cafeteria, you

know, this giant, really nice, really nice cafeteria. Um, the, you know,

there's, this is where we discovered that the, the actual price of, the lunch was

$400. Um, uh, the, the original price was $20, but it had

five, it went for, it was at 5 % occupancy. So it was 20 times

higher and they still kept making the same amount pretty much. So, and charging the

same amount. So effectively lunch was for $400. Um, and it was a great meeting.
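
The lunch arithmetic described here, written out: a kitchen budget sized for full occupancy spread over only about 5% of the expected diners works out to roughly 20 times the sticker price per meal actually served.

```python
# The $400 lunch, as described: same spend, ~5% of the expected eaters.
sticker_price = 20.0   # nominal cost per lunch at full occupancy
occupancy = 0.05       # share of expected headcount actually eating
effective_cost_per_meal = sticker_price / occupancy
print(effective_cost_per_meal)  # 400.0
```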

Yes. And, and then, and then there was that, that, that, uh, where we had

the initial meetings, sort of the, sort of trying to figure out what the heck's

going on meetings in the, in, in these, in the, because, you know, there's the

two buildings, two Twitter, Twitter buildings, and one, the one with literally no one in

it. Um, that's, that's where we had the initial meetings. Um, and, um,

and then we, and then we tried drawing on the whiteboard and the, and the,

markers had gone dry so that nobody had used the

whiteboard markers in like two years. So none of the

markers worked. So we were like, this is totally bizarre, but it was, totally clean

because the cleaning crew had come in and done their job and cleaned, an already

clean place for, I don't know, two, three years straight. Um, it was spotless.

I mean, honestly, this is, this is, this is more crazy than any sort of

Mike judge movie or, or, you know, Silicon Valley or anything like that. Um, and,

and then we, I remember going into the men's bathroom and, and, and there's, there's,

there's a table, um, with, uh, you know, um, uh,

hygiene to menstrual hygiene products. Yeah. Yeah. Um,

refreshed every week tampons, like a fresh box of tampons. Um, and, and we're like,

but, but there's literally no one in this building. Um, so

uh but nope hadn't turned off the send fresh tampons to the men's bathroom

in the empty building had not been turned off no so so every week they

would put a fresh box of tampons in an empty building um for years this

happened for years and it must be very confusing to the people that were being

asked to do this because they're like okay i'll throw them away

well i remember when you i guess they're paying us so we'll just put tampon

you seriously have to consider the string of possibilities necessary in order for anyone

to possibly use that tampon in the men's bathroom uh at the unoccupied second building

of twitter headquarters um because you'd have to be a burglar um who

is a trans man burglar um

who's unwilling to use the woman's bathroom that also has tampons statistically there's no one

in the building so you've broken into the building and at that moment you

have a period yes i mean you're more likely to be

struck by a meteor um than need that tampon okay well i

remember it was i think it was shortly after that you discovered an entire

room yeah the office that was filled with stay woke t -shirts yeah do you

remember this an entire pile of merch yes hashtag stay woke stay woke and also

a big sort of buttons like those magnetic buttons that you put on your shirt

that said uh uh i i am an engineer um

i'm like look if you're an engineer you don't need a button like a big

who's the button for who are you telling that to you could just ship code

we would know we could check your git i'm like but yeah they're like

scarves um hoodies uh all kinds of merch that said hashtag stay woke yeah

a couple music when you found that i was like my god man the barbarians

are fully within the gates now i mean the barbarians have smashed through the gates

and are looting the merch yes you are rummaging through their holy relics and defiling

them i mean but when you think about it david the amount of waste that

we saw there during those first 30 days just to be serious about it for

a second this was a publicly traded company right so if you think about the

financial duty of those individuals there was a list of sas software we went through

and none of it was being used some of it had never been installed and

they had been paying for it for two years they've been paying for a sas

product for two years and the the one that blew my mind the most that

we canceled was they were paying a certain amount of money per desk to have

desk suiting software in an office where nobody came to work so they were paying

nobody there was there was millions of dollars a year being paid for yes but

for um analysis of pedestrian traffic like software that use cameras to

analyze the pedestrian traffic to figure out where you can alleviate pedestrian traffic jams

uh in an empty building right that's like 11 out of 10 on a

dilbert scale yeah it was pretty shout out scott adams you've gone off scale uh

on your dilbert level at that point let's talk about the free speech aspect for

a second because i i think that is the most important legacy of the twitter

acquisition and i think people have short memories and they forget how bad things were

three years ago first of all you had figures as diverse as

president trump jordan peterson jay bhattacharya and tate they were all banned

from twitter and i remember when you opened up the the twitter jails and reinstated

their accounts kind of you know freed all the bad boys of free speech the

best deal yes so you basically gave all the the bad boys of free speech

their their accounts back but second beyond just the the bannings there was the shadow

bannings and twitter had claimed for years that they were not shadow banning this was

a paranoid conservative conspiracy theorist yeah there was a

very aggressive shadow banning by uh what was called the trust and safety group which

of course naturally would be the one that is doing the nefarious shadow banning

um and i just i think you shouldn't have a group called trust and safety

um i mean this is an orwellian name if there ever was

one um i'm from the trust department

oh really i want to talk to you about your tweets can we see your

dms say that you're from the trust department it's literally that's the ministry of truth

right there yeah and twitter executives have made they had maintained for

years that they were not engaged in this practice including under oath and on the

heels of you opening that up and exposing that because by the way it wasn't

just the fact they were doing it they created an elaborate set of tools to

do this they had checkboxes in the app tools to to uh uh yes to

de -boost uh accounts yes yes and you know subsequently we found out that

other social networking properties have done this as well but you were really first to

expose it this is still being done at the other social media companies uh google

by the way um so um for you know um i don't pick

on google because they're all doing it but uh for search results uh if you

simply push a result uh pretty far down the page or you know the second

page of results like like you know the joke used to be or still is

i think like where do you

hide a dead? What's the best place to hide a dead body? The second page

of Google search results, because nobody ever goes to the second page of Google search

results. So you could hide a dead body there and nobody would find it. And

you still have – then it's not like you haven't made them go away. You've

just put them on page two. So shadow banning, I think, was number

two. So first was banning. Second was shadow banning. I think third to me was

government collusion, government interference. So you released the Twitter files. Nothing like that had ever

been done before where you just – you actually let investigative reporters go through Twitter's

emails, chat groups. I was not looking over their shoulder at all. They just had

direct access to everything. And they found that there was extensive collusion between the FBI

and the Twitter trust and safety group where it turns out the FBI had 80

agents submitting takedown requests. And they were very involved in the banning, the shadow banning,

the censorship, which I don't think we ever had definitive evidence of that before. That

was pretty extraordinary. Yeah, and the U .S. House of Representatives had

hearings on the matter, and a lot of this was unearthed. It's a public record.

So a lot of people – some people on the left still think this is

like made up. I'm like this is just literally – the Twitter files are literally

the files at Twitter. I mean we're literally just talking about – these are the

emails that were sent internally that confirm this. This is what's on the Slack channels. And

this is what is shown on the Twitter database as where people have made either

suspensions or shadow bans. Has the government come and asked you to take stuff down

since, or did they just have to – the policy is, hey, listen, you've got

to file a warrant. You've got to come correct as opposed to just putting pressure

on executives. Yeah, our policy at this point is to follow the law. So

if – now, the laws are obviously different in different countries.

So sometimes, you know, I get criticized for like, why don't I push free speech

in XYZ country that doesn't have free speech laws? I'm like because that's not the

law there. And if we don't obey the law, we'll simply be blocked in that

country. So the policy is really just adhere

to the laws in any given country. It is not up to us to agree

or disagree with those laws. And if the people of that country want laws to

be different, then they should, you know, ask their leaders to change the laws. But

anything that – as soon as you start going beyond the law, now you're putting

your thumb on the scale. So the – yeah, I think

that's the right policy is just adhere to the laws within any given country. Now,

sometimes we get, you know, in a bit of a bind like we had gone

into with Brazil where, you know, this judge in Brazil was asking us to

– or telling us to break the law in Brazil and ban

accounts contrary to the law of Brazil. And now we're somewhat stuck. We're like, wait

a second. We're reading the law and it says this is not allowed to happen

and also that – and giving us a gag order. So like we're not allowed

to say it's happening and we have to break the law and the judge

is telling us to break the law. The judge is breaking the law. That's where things get

very difficult. And we were actually banned in Brazil for a while because of that.

I just want to make one final point on the free speech issue and then

we can move on. It's just I think people forget that the censorship wasn't just

about COVID. There was a growing number of categories of thought and opinion that were

being outlawed. The, quote, content moderation, which is another Orwellian euphemism for censorship

was being applied to categories like gender and even climate change.

The definition of hate speech was constantly growing. And more and more people were being

banned or shadow banned. There was more and more things that you couldn't say. This

trend of censorship was growing. It was galloping and it would have continued if it

wasn't I think for the fact that you decided to buy Twitter and opened it

up and it was only on the heels of that that the other social networks

were willing to I think be a little bit chastened in their policies and start

to push back more. Yeah, that's right. Once Twitter broke

ranks, the others had to it became very obvious what the others were doing. And

so they had to mitigate their censorship substantially as because of what Twitter did. And

I mean, perhaps to give them some credit, they also felt that they had the

air cover to to be more inclined towards free speech.

They still do a lot of sort of, you know, shadow banning and and whatnot

at the other social media companies, but it's it's much less than it used to

be. Yeah. Elon, what do you what have you seen in terms of like governments

creating new laws? So we've seen a lot of this crackdown in the UK on

what's being called hateful speech on social media and folks getting arrested and actually

going to prison over it. And it seems like when there's more freedom, the

side that is threatened by that comes out and creates their own counter. Right. There's

a reaction to that. And there seems to be reaction. Are you seeing more of

these laws around the world in response to your opening up free speech through Twitter

and and those changes and what they're enabling that that the governments and the parties

that control those governments aren't aligned and they're stepping in and saying, let's create new

ways of maintaining our control through law. Yeah, there is. There's been an overall global movement to suppress free speech under the name of, in

the guise of suppressing hate speech. But then, you know,

it's the problem with that is that your freedom

of speech only matters if people are allowed to say things that you that you

don't like or even that things that you hate. Because if you're allowed to suppress

speech that you don't like, then and, you know, you don't have freedom of

speech and it's only a matter of time before things switch around and then the

shoes on the other foot and they will suppress you. So, uh, suppress not lest

you be suppressed. Um, but, but there, there, there is a, uh, a movement

and I, I, there, or there had, there was a very strong movement to codify,

uh, speech suppression into the law throughout, throughout the world and including the Western world,

um, you know, the Europe and Australia. The UK and Germany were very, um, yeah,

aggressive in this regard. Yes. And my understanding is that in the UK, uh, there's

something like two or 3000 people, uh, in prison for social media posts. Um, and

in fact that this, there's, there's so many people in, uh, that were in prison

for social media posts. Um, and, and many of these things are like, you, you

can't believe that, that someone would actually be put in prison for this. They, they,

they have in a lot of cases released people who have committed violent crimes in

order to, to imprison people who have simply made posts on social media, which is

deeply wrong. Um, and, and, and, uh, underscores why the founders of this country

made the first amendment, the first amendment was freedom of speech.

Why did they do that? It's because in the places that they came from, there

wasn't freedom of speech and you could be in prison or killed for, for saying

things. Can I ask you a question just to maybe move to a different topic?

If you came and did this next week, we will be past the Tesla board

vote. We talked about it last week and we talked about how crazy ISS and

Glass -Lewis is. And we use this one insane example where like Ira Ehrenpreis

didn't get the recommendation from ISS and Glass -Lewis because he didn't meet the gender

requirements. But then Kathleen also didn't. It doesn't make any sense. Can you, so

the, the board vote is on the sixth. She's an African-American woman. Yeah.

She, she, she, they recommended against her, but then also recommended against, uh, Ira Ehrenpreis,

um, on, on the grounds, he was insufficiently diverse. So I'm like this, like these

things don't make any sense. Yeah. So I do think we've got a fundamental issue

with corporate governance, um, in publicly traded companies where you've got about half of the

stock market, uh, is controlled by passive index funds. Um, and, uh, most of them

out, most of them outsource their decision, uh, to, uh, advisory firms and particularly, particularly

Glass -Lewis and, uh, ISS. I call them corporate ISIS. Um, you know, so

all they do is basically just, they're just terrorists.

Um, so, um, so, and, and, and they had, they own no stock in any

of these companies. Um, right. So I, I think that this, there's a fundamental breakdown

of fiduciary responsibility here, uh, where really, um, you know, any company that's managing,

um, uh, even though they're passively managing, you know, index funds or whatever, that

they do at the end of the day have a fiduciary duty to, uh, vote,

uh, you know, along the lines of what would be a fiduciary duty. To maximize

the, the shareholder returns because people are counting on them. Like people, uh, you know,

have say, you know, so it has, have all their savings and say a 401k

or something like that. Um, and they're, they're counting on, um, the index funds to

vote, uh, do company votes in the direction that would, uh,

ensure that their retirement savings, uh, do as well as possible. But the problem is

if that is then outsourced to ISS and Glass -Lewis, which have been infiltrated by

far -left activists, um, because, you know, basically political activists

go, they go where the, where the power is. Um, and

so effectively, uh, Glass -Lewis and ISS, uh, uh, controlled the vote

of half the stock market. Now, now, if you're a political

activist, you know what a great place would be to go work? ISS and Glass

-Lewis. And they do. So, um, so my concern for the future,

um, because there's, you know, the Tesla, um, thing is, it's called sort of compensation,

but really it's not about compensation. It's not like I'm going to go out and

buy, you know, a yacht with it or something. It's just that I, I, I

do, I, in order, if I'm going to build up, uh, Optimus and, and, you

know, have all these robots out there, I need to make sure we do not

have a Terminator scenario. And then I, and then I can make, you know, maximize

the safety of the robots. Um, and, and, and, um, but I, but

I, I, I feel like I need to have something like a 25 % vote,

um, which is enough of a vote to have a strong influence, uh, but not

so much of a vote that I can't be fired if I go insane. Um,

so it's, it's kind of, but, but my concern would be, you know, creating this

army of robots and then, and then being fired for political reasons. Um, because of,

because of ISS and Glass -Lewis, uh, uh, you know, declined to,

ISIS and Glass -Lewis fire me effectively, or, or the, the activists at those firms

fire me. Um, even though I've done everything right. Yeah. That's my concern.

Yeah. And then I, and then, then, then you've got, and then I cannot ensure the safety of the

robots. If you don't get that vote, it doesn't go your way. It looks like

it's going to, would you leave? I mean, is that even in the cards I

heard they were, the board was very concerned about that. Uh, let's just say

I'm not going to build a robot army. Um, if I, can be easily kicked

out by activist investors, no way. Yeah. Makes sense. I mean,

and who is capable of running the four or five major product lines

at Tesla. I mean, this is the madness of it. It's a very complex business.

People don't understand what's under the hood there. It's not just a car company. You

got batteries, uh, you got trucks, you got the self -driving group, and this is

a very complex business that you've built over decades now. It's not a very

simple thing to run. I don't think there's a Elon equivalent out there who can

just jump into the cockpit. By the way, if we take a full turn around

corporate governance corner also this week, what was interesting about the

open AI restructuring was I read the letter and your lawsuit was excluded

from the allowances of the California attorney general, basically saying this

thing can go through, which means that your lawsuit is still out there. Right. And

I think it's going to go to a jury trial. So there that corporate governance

thing is still very much in question. Do you have any thoughts on that? Um,

yes, I believe that we'll go to a jury trial in February or March. Um,

and, and then we'll, we'll see what the, what the results are there. But, um,

I, there's, there's, there's like a mountain of evidence, um, that,

that shows that open AI was created as a, uh, an open source nonprofit. It's,

it's literally, that's the exact description in the incorporation documents. Um, and in fact, the

incorporation documents explicitly say that no officer, uh, or founding member will be, will benefit

financially from open AI. And they've completely violated that.

And moreover, you can just use the wayback machine

and look at the website of open AI, again, open source, nonprofit, open source, nonprofit,

the whole way until, you know, it looked like, wow, this is a, there's a

lot of money to be gained here. And then suddenly it starts changing. Um, and

they try to change the definition of open AI to mean open to everyone instead

of open source, even though it always meant open source. I came up with the

name. Yeah. That's how I know. So, uh,

if they open sourced it, uh, or they gave you, I mean, you don't need

the money, but if they gave you the percentage ownership in it, that you would

be rightfully, uh, which 50 million for a startup would be half at least,

but they must've made an overture towards you and said, Hey, can we just give

you 10 % of this thing and give us your blessing? Like you're obviously have

a different goal here. Yeah. Um, I mean, essentially since I came up

with the idea for the company named it, um, provided the A, B and C

rounds of funding, uh, recruited the, uh, critical personnel, uh, and told them everything I

know. Um, you know, if that had been a commercial corporation, I'd probably own half

the company. So, um, but, and, and, I, I, I could have

chosen to do that, that, that I, if I, it was totally at my discretion,

I could have done that. Uh, but I created it as a nonprofit for the

world and open source nonprofit for the world. Do you think the right thing to

do is to take those models, and just open source them today, if you could

affect that change, is that the right thing to do? Uh, yeah, I think, I

think, uh, that, that, that is what, what it was created to do.

So it should, I mean, the, the best open source models right now, actually, ironically,

because fate seems to be an irony maximizer. Um, uh, the best open source models

are generally from China. Yeah. Like that's bizarre. And, and, and, and then

I think the second best, uh, or one is, or maybe it's better than second

best. Uh, but like the, uh, the Grok 2.5, um, open source model is

actually very good. Um, and I think we'd probably be, and, we'll

continue to open source our models, but, you know, but whereas like try using any

of the, the recent, um, so-called OpenAI open source models that are

out, they don't work. They're basically, they open sourced a broken, non-working version

of, of their models as a fig leaf.

I mean, do you know anyone who's running OpenAI's open source models? Exactly.

Yeah. Nobody. We've had a big debate about jobs here. Obviously there's going to be

job displacement. You and I have talked about it for decades. Uh,

what's your take on the pace of it? Because obviously you're building self-driving software, you're building Optimus, and we're

seeing Amazon take some steps here where they're like, yeah, we're probably not going to

hire these positions in the future, and, you know, maybe they're getting rid of people

now because they were bloated, but maybe some of it's AI, you know, it's all

debatable. What do you think the timeline is, and what do you think, as a

society, we're going to need to do to mitigate it if it goes too fast?

Well, you know, I call AI the supersonic tsunami, so

it's not the most comforting description in the world, but if

there was a tsunami, a giant wall of water moving faster than the speed of

sound, that's AI. When does it land? Yeah, exactly.

So, now this is happening whether I want it to or not. I actually try

to slow down AI, by the way, and then the reason,

you know, I, the reason I wanted to create OpenAI was to serve as a

counterweight to Google, because at the time Google sort of essentially had unilateral power

in AI, all the AI, essentially, and, um, and, uh, you know,

Larry Page was not, um, you know, he, he,

he, he was not taking AI's safety seriously. Um, uh,

I don't know, Jason, I'm sure, were you, were you there when he, he called

me a speciesist? Yes, I was there. Yeah. Okay, so. You were more concerned about

the human race than you were about the machines. And, uh, yeah, you had a

clear bias for humanity. Yes, yes, I was, exactly. I was like, Larry, well, what,

like, we need to make sure that the AI doesn't destroy all the humans. And

then he called me a speciesist, um, like a racist or something, for being pro, uh,

human intelligence instead of machine intelligence. I'm like, well, Larry, what side are you on?

Um, I mean, you know, that's kind of a concern. And, and, and then at

the time that Google had, uh, essentially a monopoly on AI. Yeah. They bought

DeepMind, which you were on the board of, had an investment in, Larry and Sergey

had invested in it as well. And it's really interesting. I found out about it

because I told him about it. And I, I, I showed him some stuff from

deep, from DeepMind and, and I think that's how he found out, found out about

it and, and, and acquired them actually. I gotta be careful what I say. Um,

but, but the, the, the, the point is that it's like, look, now he's not

taking AI safety seriously and, and, and, and Google had essentially all the AI and

all the computers and all the money. And I'm like, this is a unipolar world

where the guy in charge is not taking things seriously. So, um, and called me

a speciesist, uh, for being pro-human. Um, what do you do in those

circumstances? You know, build a competitor. Yes. Um, so OpenAI was created

essentially as the opposite, which is an open source nonprofit, the opposite of Google. Um,

now, unfortunately it's, it, it, it needs to change its name to closed for maximum

profit AI. Yeah. For maximum profit, to be clear. The

most amount of profit you could possibly get. I mean,

it is so, it is like, like I said, it's comical. And when you hear,

when you hear Sam. There's an irony maximizer. You have to say, like, what is

the most, the most ironic outcome for a company that was created to do

open source, uh, nonprofit AI? It's that it's super closed source. It's like it's

in Fort Knox. Um, the, the, the AI, OpenAI's source, is locked up

tight in Fort Knox. Um, and, uh, and, and they are going for maximum

profit. Like a maximum, like get the bourbon, the steak knife that, you know,

yeah. I mean, you know, you know, like, like they, they're going for the buffet

and they're just diving headfirst into the profit buffet. I mean, it's just, or at

least aspiration, the revenue buffet, at least profit, we'll see. Um, I mean, it's like,

it's like ravenous wolves for revenue. Ravenous. Revenue buffet.

No, no, no. It's literally like super, it's like Bond villain level flip.

Like it went from being the United Nations to being SPECTRE in, like, James Bond

land. Yeah. When you hear him say, I'm going to, when Sam says it's going

to, like, raise 1.4 trillion to build that data center. Yeah. No, but I

think he, I think he means it. Yeah. I mean, it's, I would say audacious,

but I, I wouldn't want to, yeah, insult the word. It's actually, I have a

question about this. How is that possible? In the earnings call, you said something that

was insane. And then I think the math actually nets up, but you said we

could connect all the Teslas and allow them in downtime to actually offer up

inference and you can string them all together. I think the math is like, it

could actually be like a hundred gigawatts. Is that right? Did you do? If ultimately

there's a Tesla fleet that is a hundred million vehicles, which I think we probably

will get to at some point, a hundred million vehicle fleet. And they have, you

know, mostly state of the art inference computers in them that, that each say are

a kilowatt of inference computer. Um, and they have built in, um, power and

cooling, um, and connect to the wifi. That's the key. Yeah, exactly. Um, yeah,

exactly. And, and, and, uh, you'd have a hundred gigawatts of inference compute.
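A quick back-of-the-envelope sketch of the fleet-compute arithmetic above, assuming only the figures quoted here (a hundred-million-vehicle fleet, roughly one kilowatt of inference compute per car); the rest is just unit conversion:

```python
# Back-of-the-envelope check of the distributed-fleet-inference claim above.
# The fleet size and per-car compute are the figures quoted in the conversation.

fleet_size = 100_000_000          # vehicles (quoted target fleet size)
inference_power_per_car_kw = 1.0  # ~1 kW of inference compute per vehicle (quoted)

total_gw = fleet_size * inference_power_per_car_kw / 1_000_000  # kW -> GW
print(f"Aggregate fleet inference compute: {total_gw:.0f} GW")
# -> Aggregate fleet inference compute: 100 GW
```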

Elon, do you think that the architecture, like there was an attention-free model that came

out the last week, there's been all of these papers, all of these new models

that have been shown to reduce power per token of output by many, many, many

orders of magnitude, like not just an order of magnitude, but like maybe three or

four, like what's your view and all the work you've been doing on where we're

headed in terms of power, um, uh, per unit of compute or per token of output?

Well, we have a clear example of, uh, efficient power, efficient compute,

which is the human brain. Um, so, um, our brains use about 20 Watts, um,

of power, but, and all that only about 10 Watts is higher brain function. Most

of it's, you know, half of it is just housekeeping functions, like, you know, keeping

your heart going and breathing and that kind of thing. Um, so, so you've got

maybe 10 Watts of, uh, higher brain function in a human. Um, and we've managed

to build civilization with 10 Watts of, uh, of a biological computer. Um,

and that biological computer has like a 20-year, you know, boot sequence, uh, so, but, but

but it's very power efficient so uh given that uh humans are capable of inventing

um you know general relativity and quantum mechanics and uh or discovering

general opportunity like like inventing aircraft lasers the internet

and discovering physics with with a 10 watt uh meat computer essentially um

uh then um there's clearly a massive opportunity for

improving the uh efficiency of ai compute um it's because it's

it's it's currently many orders of magnitude away from that um and and it's still

the case that um a 100 megawatt uh or even

you know a gigawatt uh ai supercomputer at this point can't do everything that uh

a human can do uh it it will be able to uh but it can't

yet um so but but like i said we've got this obvious case

of um human brains being very power efficient and achieving and and building

civilization with with it with a you know with 10 watts to compute um and

and and a very slow and our our bandwidth is very low so that the

speed at which we communicate information to each other is extremely low you know we're

not communicating at a terabit we're communicating more like 10 bits per second um

so um that should naturally lead you to the

conclusion that there's massive uh opportunity for being more power efficient with with ai and

and at tesla and at xai we're both we continue to see massive improvements in

inference compute efficiency.
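A rough sense of the "many orders of magnitude" gap described above, taking the quoted ~10 watts of higher brain function against the quoted 100-megawatt-to-1-gigawatt AI supercomputers; this is only an order-of-magnitude comparison, not a claim about capability parity:

```python
import math

# Order-of-magnitude comparison from the passage above: ~10 W of "higher brain
# function" versus today's 100 MW to 1 GW AI supercomputers.
brain_watts = 10.0
cluster_watts = {"100 MW supercomputer": 100e6, "1 GW supercomputer": 1e9}

for name, watts in cluster_watts.items():
    gap = math.log10(watts / brain_watts)
    print(f"{name}: ~{gap:.0f} orders of magnitude more power than a brain")
# -> 100 MW supercomputer: ~7 orders of magnitude more power than a brain
# -> 1 GW supercomputer: ~8 orders of magnitude more power than a brain
```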

um so um yeah you think that there's a moment where you would justify stopping

all the traditional cars and just going completely all in on cyber cab if you

felt like the learning was good enough and that the system was safe enough is

there ever a moment like that or do you think you'll always kind of dual

track and always do both i mean all of the cars we make right now

um are capable of being a robo taxi so there's a little confusion of the

terminology because um the the the our cars look normal you know

like model three or model y looks it's a good looking car but it looks

looks normal um but it has an advanced ai computer and advanced ai software and

cameras and we didn't want the cameras to stick out so we you know so

that we don't want them to be ugly or stick out so so we you

know we put them in a sort of unobtrusive set of locations you know the

forward looking cameras are in front of the rear view mirror um

the side view cameras are in the side repeaters

the the rear camera is you know just in the you know above the license

plate actually typically where the rear view camera is in a car um and um

you know and and the the diagonal forward ones are in the b pillars like

if you look closely you can see all the cameras but but you have to

look closely we just didn't want them to be to stick out like you know

warts or something um but but actually all the cars we make um are

hyper intelligent um and have the cameras in the right places they just look normal

um and um so so all of the cars we make are capable of unsupervised

full autonomy um now we have a dedicated product which is the cyber cab

um which has no steering wheel or pedals um which are obviously vestigial in

an autonomous world uh and we start production uh of the cybercab in q2

next year and we'll scale that up to to quite high volume i think ultimately

we'll make millions of uh cyber cabs per year um but but it is important

to emphasize that all of our cars are capable of being robotic taxis the cyber

cab is gorgeous i told you i'd buy two of those if you put a

steering wheel in them and there is a big movement people are begging for it

why not why not let us buy a couple you know you know just the

first ones off the line and drive them i mean it's they look great it's

like the perfect model you always had a vision for a model two right like

isn't it like the perfect model two in addition to being a cyber cab look

the reality is people may think they want to drive their car but the reality

is that they don't um how many times have you been sitting in an uber or

lyft and and you said you know what i wish i could take over from

the driver and and and i wish i could get off my phone and and

take over from the uber driver and uh and drive to my destination how many

times have you thought to yourself thought that to yourself no it's quite the opposite

okay i have the model y and i just got 14 i have juniper

and i got the 14 one and i put it on mad max mode the

last couple of days and that is a unique experience i

was like wait a second this thing is driving in a very unique fashion

um yeah yeah it assumes you want to get to your destination in a hurry

yeah um i i used to give cab drivers an extra 20 bucks to do

that medical appointment or something i don't know yeah but it's it feels like it's

getting very close but you have to be very careful you know uber had a

horrible accident with the safety driver cruise had a terrible accident wasn't their fault exactly

except you know that somebody got hit and then it they they hit the person

a second time and they got dragged yeah yeah this is pretty high stakes so

you're being extremely cautious because the car is actually extremely capable right now

but we are being extremely

cautious and we're being paranoid about it because to your

point um even one accident would would be headline news well probably worldwide headline news

especially if it's a tesla like waymo i think gets a bit of a pass

i think there's half the country or a number of people probably would you know

go extra hard on you uh yes uh yeah exactly

um yeah not everyone in the press is my friend i hadn't

noticed you know some of them are a little uh antagonistic yeah so you just

but people are pressuring you to go fast and i i think is

everybody's got to just take their time with this thing it's obviously going to happen

but i i just get very nervous that the the pressure to put these things

on the road faster than they're ready is just uh a little crazy so i

applaud you for putting the safety monitor in doing the safety drive and no shame

in the safety driver game it's so much the right decision obviously but people are

criticizing you for it i think it's dumb it's the right thing to do yes

and we do expect to not have any um sort of safety occupant or or

there's not really a driver it's just a safety monitor that just sits they just

sit they just sit in the car and don't do anything um safety dude yeah

um so uh but we do expect

that that the cars will be driving around without any any safety monitor um

before the end of the year so sometime in december in austin yeah i mean

you got a number of reps under your belt in austin and it feels like

pretty well you guys have done a great job figuring out where the trouble spots

are maybe you could talk a little bit about what you learned in the first

i don't know it's been like three or four months of this so far what

did you learn in the first three or four months of the austin experiment actually

it's gone pretty smoothly um a lot of a lot of things that we're learning

um are uh just how to manage a fleet like because you've got to write

all the fleet management software right so yeah um and you've got to write

the ride hailing software you've got to write basically the software that uber has you've

got to write that software it's just summoning a robot car instead of a car

with a driver um so so a lot of things we're doing we're scaling up

the number of cars um to say say like what happens if you have a

thousand cars like so we'll you know we think probably we'll have you know a

thousand cars or more um in the bay area uh by the end of this

year probably i don't know 500 or more in the greater austin area um

and you know if if if um you have to you

have to make sure the cars don't all for example go to the same supercharger

uh that's fine right um so uh or don't

all go to the same intersection um there's there's it's like what do these cars

do and then like sometimes there's a high demand and sometimes there's there's low demand

what do you do during during those times uh do you have the car circled

the block do you have to try to find a parking space um the um

and then you know sometimes the like say it's a it's a you know a

disabled parking space or something but the the writing's faded or the things faded the

car's like oh look a parking space i'll jump right in there it's like yeah

get a ticket you gotta look carefully make sure it's it's like you know it's

not a uh an illegal parking space uh or or or it sees it sees

a space to park and it's like ridiculously tight but it's all i can get

in there um yes but with like you know three inches on either side bad

computer but nobody else will be able to get in the car if you do

that um so um you know there's just like all these oddball corner cases um

and um uh and regulators like regulators are

all very um yeah they're they have different levels of

perspicuousness and regulations depending on the city depending on the airport i mean it's just

you know very different everywhere that's going to just be a lot of blocking and

tackling and it just takes time elon let me cast you another like in order

to take people to san jose airport like san jose you actually have to connect

to san jose airport servers um and because you have to pay a fee every

time you go off so so the car actually has to has to do a

remote call the robot car has to do you know remote procedure call to to

san jose airport servers to to uh say i'm dropping someone off at the airport

and charge me whatever five bucks um which is like there's all these like quirky

things like that the the the like airports are somewhat of a racket um uh

yeah um so so that's like you know we have to solve that thing but it's kind of

funny that the robot car is, like, calling the server, the airport server, to, you

know, charge its credit card or whatever. To send a fax.

Yeah, we're going to be dropping off at this time. But it will soon become

extremely normal to see cars going around with no one in them. Yeah. Extremely normal.

Just before we lose you, I want to, like, ask if you saw the Bill

Gates memo that he put out. A lot of people are talking about this memo.

Like, you know, I guess, like, like, Billy G is not my lover.

Oh, man. Like, did climate change become

woke? Did it become, like, woke? And is it over being woke? Like, you know,

like, what happened? And what happened with Billy G? I mean, you know.

Great question. Yeah. You know,

you think that someone like Bill Gates, who clearly started a technology company

that's one of the biggest companies in the world, Microsoft, you'd think he'd be

really quite, you know, strong in the sciences. But

actually, my, at least, direct conversations with him have, he

is not strong in the sciences. Like, yeah, this is really

surprising. You know, like, he came to visit me at the Tesla Gigafactory in Austin

and was telling me that it's impossible to have a long -range semi -truck.

Um, and I was like, well, but we literally have them, um,

and you can drive them, and Pepsi is literally using them right now, and you

can drive them yourself, or send someone, obviously, Bill Gates is not going to drive

it himself, but you can send a trusted person to drive the truck and verify

that it can do the things that we say it's doing. And he's like, no,

no, it doesn't work, doesn't work. And I'm like, um, okay. I'm, like, kind of

stuck there. Um, uh, then it's like, I was like, well, so it

must be that, um, you disagree with the watt -hours per kilogram of the

battery pack, so that you must think that perhaps we can't achieve the energy density

of the battery pack, or that the watt -hours per mile of the truck is

too high. Why? And that when you combine those two numbers, the range is low.

And so which one of those numbers do you think we have wrong, and what

numbers do you think are correct? And he didn't know any of the numbers. And

I'm like, well, then doesn't it seem that it's perhaps, um, you know, premature to

conclude that a long -range semi cannot work if you do not know the energy

density of the battery pack, or the energy efficiency of the truck chassis? Hmm.

But, yeah, he's now taking a 180 on climate. He's saying maybe

this shouldn't be the top priority. Climate is gay. It's just, yeah, when he's saying

climate is gay, that's wrong. It's totally retarded.

Well, Bill Gates said that climate is gay and retarded. Come on. Maybe he's got

some data centers he's got to put up. Does he have to stand up a

data center for Sam Altman or something? I don't know. What is Azure? I

don't know. He changed his position? I can't figure out why.

I mean, you know, I mean, the reality of the whole climate change thing is,

is that the, um, you know, you've just had sort of people who say it

doesn't exist at all. And then people who say it's, are super alarmist and saying,

you know, we're all going to be underwater in five years. And obviously, neither of

those two positions are true. Um, you know, the, you know, the, the, the reality

is you can measure the, the carbon concentration in the atmosphere. Again, you could just

literally buy a CO2, uh, monitor from Amazon. It's like 50 bucks. And, um,

you can measure it yourself. Um, and, uh, you know, and you, you can say,

okay, well, look, the, the parts per million of CO2 in the atmosphere has

been increasing steadily at two to three per year. Um, at some point, if you,

uh, continue to take, to take, uh, billions, eventually trillions of tons of carbon from

deep underground and transfer it to the atmosphere and oceans. So you transfer it from

deep underground into the surface cycle, you will change the chemical constitution of the atmosphere

and oceans. You just, you just literally will. Um, then, then now

you can say, to what degree and over what time scale? Um, and the reality

is that, in my opinion, we've got at least 50 years, uh, before

it's a serious issue. Um, I don't think we've got 500 years, uh, but, but

we've probably got, you know, 50, um, it's, it's not, it's not five years. Um,

so if you're trying to get to the right order of magnitude of accuracy, um,

I'd say the, the concern level for climate change is on the order of 50

years. It's definitely not five. And I think it probably isn't 500.
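A purely illustrative sketch of why roughly 50 years is the right order of magnitude here, using the quoted 2-3 ppm-per-year growth rate; the current ~420 ppm level and the 550 ppm threshold used below are added assumptions, not figures from the conversation:

```python
# Illustrative only: the 2-3 ppm/year growth rate is quoted above; the current
# level (~420 ppm) and the 550 ppm "concern" threshold are assumptions added
# here to show why a ~50-year horizon (not 5, not 500) is plausible.
current_ppm = 420.0          # assumed current atmospheric CO2 concentration
threshold_ppm = 550.0        # assumed illustrative concern threshold

for rate in (2.0, 3.0):      # ppm per year, as quoted
    years = (threshold_ppm - current_ppm) / rate
    print(f"At {rate:.0f} ppm/year: ~{years:.0f} years to reach {threshold_ppm:.0f} ppm")
# -> roughly 43-65 years, i.e. on the order of 50
```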

so really the, the right course of action is actually just the reasonable course of

action, which is to lean in the direction of sustainable energy. Um, and, uh, and

lean in the direction of, of solar, um, and sort of a solar

battery future and, and, and generally have the rules of the system,

um, uh, lean in that direction. I, I don't think we need

massive subsidies, but then we also shouldn't have massive subsidies for the oil and gas

industry. Okay. So the oil and gas industry has massive tax write-offs that they don't even think of as

subsidies because these things have been in place for, in some cases, 80

years. But they're not there for other industries. So when you've got special tax conditions

that are in one industry and not in another industry, I call that a subsidy.

Obviously, it is. But they've taken it for granted for so long in oil and

gas that they don't think of it as a subsidy. So the right course of

action, of course, is to remove, in my opinion, to remove subsidies from all industries.

But the political reality is that the oil and gas industry is very strongly in

the Republican Party, but not in the Democratic Party. So you will not see, obviously,

even the tiniest subsidy being removed from the oil, gas, and coal industry. In fact,

there were some that were added to the oil, gas, and coal industry in the

sort of big bill. And there were a

massive number of sustainable energy incentives that were removed, some of which I agreed with,

by the way. Some of the incentives have gone too far.

But anyway, the actual, I think, the correct

scientific conclusion, in my opinion, and I think we can back this up with solid

reasoning, ask Grok, for example, is that we should

lean in the direction of moving towards a sustainable energy future.

We will eventually run out of oil, gas, and coal to burn anyway, because

there's a finite amount of that stuff. And we will eventually have to go to

something that lasts a long time that is sustainable. But to your point about the

irony of things, it seems to be the case that making energy with solar is

cheaper than making energy with some of these carbon -based sources today. And so the

irony is it's already working. I mean, the market is moving in that direction. And

this notion that we need to kind of force everyone into a model of behavior,

it's just naturally going to change because we've got better systems. You know, you and

others have engineered better systems that make these alternatives cheaper, and therefore they're winning.

Like, they're actually winning in the market, which is great. But they can't win if

there are subsidies to support the old systems, obviously. Yeah, I mean, by the way,

there are actually massive disincentives for solar, because China

is a massive producer of solar panels. China does an incredible job of solar

panel manufacturing. Really incredible. They have

roughly one and a half terawatts of solar production right now. And they're only

using a terawatt per year. But by the way, that's a gigantic number. The average

U.S. power consumption is only half a terawatt. So just think about that

for a second. China's solar panel

production max capacity is one and a half terawatts per year. U.S. steady

state power usage is half a terawatt. Now, you do have to reduce that. Say, to

produce one and a half terawatts a year of solar, you need to pair that with

batteries, taking into account the differences between night and day, the fact that

the solar panel is not always pointed directly at the sun, that kind of thing.

So you can divide by five-ish. But that still means

that China has the ability to produce solar panels that have a steady state output

that is roughly two-thirds that of the entire U.S. economy from all sources, which

means that just with solar alone, China can, in 18 months, produce

enough solar panels to power the entire – the United States, all the electricity in

the United States.
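A quick check of the China-solar arithmetic above, using only the quoted figures (1.5 TW/year of panel manufacturing capacity, the divide-by-five capacity-factor adjustment, and 0.5 TW of average U.S. power consumption):

```python
# Checking the China-solar arithmetic quoted above with the quoted figures only.
china_panel_capacity_tw_per_year = 1.5   # quoted manufacturing capacity
us_average_power_tw = 0.5                # quoted average US power consumption
capacity_factor_divisor = 5.0            # quoted "divide by five-ish" for night/angle

steady_state_per_year_tw = china_panel_capacity_tw_per_year / capacity_factor_divisor
share_of_us = steady_state_per_year_tw / us_average_power_tw
years_to_cover_us = us_average_power_tw / steady_state_per_year_tw

print(f"One year of panel output ~= {steady_state_per_year_tw:.2f} TW steady-state "
      f"({share_of_us:.0%} of US average demand)")
print(f"Time to build enough for the whole US: ~{years_to_cover_us * 12:.0f} months")
# -> 0.30 TW steady-state (60% of US demand); ~20 months, close to the quoted 18
```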

What do you think about near-field solar, a.k.a. nuclear? I'm in favor of – look, make energy any way you want that isn't,

like, obviously harmful to the environment. Generally, people don't

welcome a nuclear reactor in their backyard. They're not, like, championing – Put it here.

Put it under my bed. Put it on my roof.

If your next -door neighbor said, hey, I'm starting my house, and they're putting a

reactor there, what would you – your typical homeowner

response would be negative. Very few people will embrace a nuclear reactor

adjacent to their house. So – but nonetheless,

I do think nuclear is actually very safe. The – it's – there's a lot

of sort of scaremongering and propaganda around fission, assuming you're talking about fission.

And fission is actually very safe. They obviously have this on – you know, the

Navy, U .S. Navy has this on submarines and aircraft carriers and with people really

walking – I mean, a submarine is a pretty crowded place, and they have a

nuclear -powered submarine. So, I think fission is fine as

an option. The regulatory environment is – makes it very difficult to

actually get that done. And then it is important to appreciate just the sheer magnitude

of the power of the sun. So, this is – here are some just important

basic facts. Even Wikipedia has these facts, right? You know,

so you don't even have to – you can go to Grokipedia – You don't have

to use Grokipedia. But even Wikipedia has – Yeah, even Wikipedia got it right. Yes,

yes. I'm saying – what I'm saying, even Wikipedia has got these facts, right? Yeah.

The sun is about 99.8% of the mass of the solar system.

Then Jupiter is about 0.1%, and then everything else is in the remaining 0.1%,

and we are much less than 0.1%.

So, if you burnt all of the mass of the solar system, okay,

then the total energy produced by the sun would still round up to 100%. Mm

-hmm. But if you just burnt Earth, the whole planet, and burnt

Jupiter, which is very big and quite challenging to burn,

Jupiter into a thermonuclear reactor, it wouldn't matter compared to the sun. The sun

is 99.8% of the mass of the solar system, and everything else is

in the miscellaneous category. So, um, like basically no matter what you do, total

energy produced, um, in our solar system rounds up to 100 % from the sun.

You could even throw another Jupiter in there. Um, so we're going to snag a

Jupiter from somewhere else. Um, and, uh, somehow, you could teleport two more

Jupiters, uh, into our solar system, burn them and the sun would still round up

to a hundred percent. Um, you know, as long as you're at 99.6%, you're

still rounding up to a hundred percent. Um, maybe that gives some perspective

of why solar is really the thing that matters. And, and, and, and as soon

as you start thinking about things in, at sort of a grander scale, like Kardashev

scale for civilizations, it becomes very, very obvious. It's like, I'm not saying anything that's

new, by the way, like, uh, anyone who studies physics has known this for, you

know, very long time. Um, in fact, Kardashev, I think was a Russian physicist who

came up with this idea, I think in the sixties, um, just, just as a

way to classify civilizations, um, where Kardashev scale one would be, uh, you've

used, you're, you're, you've harnessed most of the energy of, of the planet. Kardashev scale

two, you've, you've harnessed most of the energy of your sun. Kardashev three, you've harnessed

most of the energy of the galaxy. Um, now we're only about, I don't know, 1

% or a few, a few percent of Kardashev scale one right now, just optimistically.

Um, so, um, but as soon as you go to Kardashev scale two, where you're

talking about the power of the sun, then you're really just saying, um, everything is

solar power and, and, and, and the rest is in the noise.
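Some rough numbers to put the Kardashev discussion on a scale. The solar luminosity, Earth-intercepted sunlight, current human power use, and Sagan's interpolation formula below are standard approximations added here for illustration, not figures from the conversation, and where humanity sits on "scale one" depends heavily on the accounting used:

```python
import math

# Rough, textbook-level approximations added for illustration (not from the
# conversation): solar luminosity, sunlight intercepted by Earth, and current
# human power use.
solar_luminosity_w = 3.8e26        # total power output of the sun
earth_intercepted_w = 1.7e17       # sunlight actually hitting Earth
human_power_use_w = 2e13           # ~20 TW, all human energy use, rough

print(f"Sun output vs sunlight reaching Earth: ~{solar_luminosity_w / earth_intercepted_w:.1e}x")
print(f"Human use as a fraction of Earth-intercepted sunlight: "
      f"{human_power_use_w / earth_intercepted_w:.2%}")

# Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10, P in watts.
kardashev = (math.log10(human_power_use_w) - 6) / 10
print(f"Humanity on Sagan's Kardashev interpolation: K ~= {kardashev:.2f}")
```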

Um, and, um, yeah, so like the,

you know, like the, the sun produces about a billion times or call

it well over a billion times more energy than everything on earth

combined. Right. It's crazy. It's mind blowing.

Right. Yeah. Solar is the obvious solution to all this. And yeah, I mean, short

term, you have to use some of these other sources, but Hey, there it is.

An hour and a half. Star powered. Like maybe we've got a branding issue here.

Yeah. Star powered. Instead of solar powered, it's, it's starlight. Yeah. Starlight.

Uh, it's the power of a, a blazing sun. Um,

how much energy does an entire star have? Yeah. Well, more than

enough. All right. Uh, and also you really need to

keep the power local. Um, so sometimes people, honestly, I've had these discussions

so many times. Sometimes it's, it's, uh, where they say, well, would you beam the

power back to earth? I'm like, do you want to melt earth? Because you would

melt earth if you did that. Um, we'd be vaporized in an instant. Uh, so

you, you really need to keep the power local, um, you know, basically distributed power.

And, and, and I guess most of it we use for intelligence. Uh, so it's

like, in the end, the future is like a whole, a whole bunch of, um,

solar powered AI satellites. But you know, the only, the only thing that makes the

star work is it just happens to have a lot of mass. So it has

that gravity to ignite the fusion, to ignite the fusion reaction. Right. But like we

could ignite the fusion reaction on earth now. I don't know. Like if your view

has changed, I think we talked about this a couple of years ago where you

were pretty like, we don't know if or when fusion becomes real here, but theoretically

we could take like 10. No, I want to be clear. My opinion on, um,

uh, so, um, yeah, I studied physics in college. Um, at one point in high

school, I was thinking about a career in physics. One of my sons actually is

pursuing a career in physics. Uh, but the problem is I came to the conclusion

that I'd be waiting for a collider or, or a telescope, and I didn't want

that for a career in physics, but I have

a strong interest in the subject. Um, so, um, so,

so, uh, my opinion on say creating a fusion reactor on earth is I think

this is actually not a hard problem. Um, actually, I mean, it's a little hard.

I mean, it's, it's not like totally trivial, but if you just scale up a

tokamak, uh, the, the bigger you make it, the easier the problem gets. So,

uh, you've got a surface-to-volume ratio, uh, thing where, you know, you, you're trying

to maintain a really hot core while having a wall that doesn't melt.
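A purely geometric illustration of the surface-to-volume argument, using a torus as a stand-in for the tokamak "donut"; the radii are arbitrary, and this says nothing about plasma physics, only about how enclosed volume outgrows wall area as the machine is scaled up:

```python
import math

# Geometric illustration only: scale a torus uniformly and the enclosed plasma
# volume grows one power of the scale factor faster than the wall area that
# has to survive the heat load.
def torus_area_volume(major_r, minor_r):
    area = 4 * math.pi**2 * major_r * minor_r          # wall surface area
    volume = 2 * math.pi**2 * major_r * minor_r**2     # enclosed plasma volume
    return area, volume

base_area, base_volume = torus_area_volume(major_r=6.0, minor_r=2.0)  # arbitrary metres
for scale in (1, 2, 4):
    area, volume = torus_area_volume(6.0 * scale, 2.0 * scale)
    print(f"scale x{scale}: volume/area = {volume / area:.1f} m "
          f"(baseline x{(volume / area) / (base_volume / base_area):.0f})")
# volume/area grows linearly with the scale factor, so bigger machines keep more
# fusion volume per unit of wall that has to be protected.
```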

So, uh, uh, that's a similar problem with, with rocket engines. You, you've got a

super hot core in the rocket engine, but you don't want the, the walls, the

chamber walls of the rocket engine to melt. So you have a temperature gradient, uh,

where it's very hot in the middle and, and, and gradually gets cold enough as

you get to the, uh, perimeter, as you get to the, uh, you know, the,

the chamber walls in the rocket engine where the, it doesn't melt, uh, because you've

lowered the temperature, um, and you got a temperature gradient. So just, if you just

scale up, uh, you know, the, the donut reactor, the tokamak, um, and, um,

and, and, and improve your surface-to-volume ratio, that becomes much easier. And you, you

can absolutely, in my opinion, no, I think just anyone who looks at the math,

uh, you can, you can make, uh, a, a reactor

that generates more energy than it consumes. And the bigger you make it, the easier

it is. And in the limit, you just have a giant gravitationally contained thermonuclear reactor

like the sun. Um, so, uh, which requires no maintenance and it's free. Um,

so, this is also why

why would we bother doing that, making a little itty bitty sun that's so

microscopic you barely notice um on earth when we've got the giant free one in

the sky yeah but we but we only get a fraction of one percent of

that energy on the planet earth we have to go much less yeah right so

we've got to figure out how to wrap the sun if we're gonna harness that

energy that's that's our our long term if people want to have fun with reactors

you know um that's that's fine have fun with reactors um but it's not a

serious endeavor compared to the sun um you know it's it's it's sort of a

a fun science it's a fun science project to make a thermonuclear reactor but it's

not um it's not it's it's just peanuts compared to the sun and and even

the the solar energy that does reach earth um is a gigawatt per square kilometer

or roughly, you know, call it two and a half gigawatts per square mile um so

that's a lot you know um and the commercially available panels are around

25, almost 26 percent efficiency and you know i mean if you then say you

pack it densely you get an 80 percent packing density which i think you know

in a lot of places you could get an 80 percent packing density you effectively

have about uh you know uh 200 megawatts per square kilometer uh and and

you need to pair that with batteries so you so you have continuous power um

although our power usage drops considerably at night so you need less batteries than you

think um and uh and uh and doesn't it doesn't the question then

a rough way to like a very maybe an easy number to remember is is

a gigawatt hour per square kilometer per day is a roughly correct number
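Reproducing the areal-density arithmetic above. The ~1 GW/km² insolation, ~25% panel efficiency, and 80% packing density are the quoted figures; the ~5 effective full-sun hours per day is an added assumption used to connect the 200 MW/km² peak number to the "gigawatt-hour per square kilometer per day" rule of thumb:

```python
# Reproducing the areal-density arithmetic quoted above. The insolation,
# efficiency, and packing figures are from the conversation; the ~5 effective
# full-sun hours/day is an added assumption (capacity-factor equivalent).
insolation_gw_per_km2 = 1.0      # quoted peak solar flux
panel_efficiency = 0.25          # quoted ~25-26% commercial panels
packing_density = 0.80           # quoted 80% ground coverage
effective_sun_hours = 5.0        # assumption, not from the transcript

peak_mw_per_km2 = insolation_gw_per_km2 * panel_efficiency * packing_density * 1000
daily_gwh_per_km2 = peak_mw_per_km2 / 1000 * effective_sun_hours

print(f"Peak output: ~{peak_mw_per_km2:.0f} MW per km^2")       # -> ~200 MW/km2
print(f"Daily energy: ~{daily_gwh_per_km2:.0f} GWh per km^2")   # -> ~1 GWh/km2/day
```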

but then doesn't your technical challenge become the scalability of manufacturing of those systems

so you know accessing the raw materials and getting them out of the ground of

planet earth to make them to make enough of them to get to that sort

of scale on that volume that you're talking about and as you kind of think

about what it would take to get to that scale like do we have an

ability to do that with what we have today like can we pull that much

material out of the ground yes solar panels are made of silicon uh which is

sand essentially um and um i guess more on the battery side but oh the

battery side yeah so battery battery side um uh you know the like iron

phosphate lithium ion battery cells this you know earth i'd like to throw out some

like interesting factoids here um if most people don't know uh if you said um

as measured by mass what is the biggest element what what is what is earth

made of as measured by mass uh actually it's it's iron yeah

iron yeah we're i think 32 percent iron 30 percent oxygen and then everything else

is in the remaining remaining percentage so um we're basically a a rusty

ball bearing um is that's earth um and and and with with you know a

lot of silicon at the surface in the form of sand um and the the

iron phosphate so so iron phosphate lithium ion cells iron extremely common most common element

on earth uh even in the crust uh and then phosphorus is also very common

um and um and then the the anode is is carbon but also very common

and then lithium is also very common so the there's actually you can do the

math in fact we did the math and published it published math but nobody looked

at it uh um it's on the tesla website um that that shows that you

can completely power earth with solar panels and batteries um and uh there's no shortage of anything
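An illustrative estimate of how much land the "power the Earth with solar and batteries" claim implies, assuming the quoted ~1 GWh per km² per day yield; the ~18 TW average world power demand used below is an added rough assumption, not a number from the conversation:

```python
# Rough, illustrative land-area check of the "power the Earth with solar and
# batteries" claim. The ~1 GWh/km^2/day yield is the rule of thumb quoted
# earlier; the ~18 TW world average demand is an added assumption.
world_average_power_tw = 18.0        # assumed average global power demand
solar_yield_gwh_per_km2_day = 1.0    # quoted rule of thumb from above

daily_demand_gwh = world_average_power_tw * 1000 * 24   # TW -> GW, times 24 h
area_km2 = daily_demand_gwh / solar_yield_gwh_per_km2_day
side_km = area_km2 ** 0.5

print(f"Daily world demand: ~{daily_demand_gwh:,.0f} GWh")
print(f"Solar area needed: ~{area_km2:,.0f} km^2 (a square ~{side_km:.0f} km on a side)")
# -> on the order of 430,000 km^2, i.e. a square roughly 660 km per side
```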

all right so on that note

yeah go get to work elon and just power the earth while you're getting implants

uh into people's brains and uh satellites and other good fun stuff good to see

you buddy yeah good to see you guys yeah yeah stop by anytime thanks for

doing this you got the zoom link stop by anytime thank you for coming today

and thank you for liberating free speech three years ago so yeah that was that

was a very important milestone and i see all you guys are in just different

different places i guess this is a very virtual situation i'm at the ranch are

you ever in the same room we try not to be only when we do

only when we do that that summit but otherwise we avoid each other yeah yeah

otherwise your summit is is is pretty fun yeah yeah we had a great time

recounting uh snl sketches that didn't didn't make it oh god there's just so many

good ones i mean we didn't even get to the jeopardy ones yeah

yeah no they're so offensive oh wait well i think we skipped a few that

would have um dramatically increased our probability of being killed you can take this one

out boys i love you i love you all right i love you all i'm

going to poker later take care all right bye -bye love you

we'll let your winners ride rain man david sacks

and it said we open source it to the fans and they've just gone crazy

with it love you guys i'm the queen of quinoa i'm going all in what

what your winners ride besties are gone that

is my uh dog thinking i always see your driveway we

should all just get a

room and just have one big huge orgy because they're all just useless it's like

this like sexual tension but they just need to release them out we need to get merch besties are back
