America’s Official AI Plan: Genesis Mission, Claude 4.5, Google vs. NVIDIA, & ChatGPT Shopping
By Peter H. Diamandis
Summary
## Key takeaways
- **Genesis Mission: US AI Manhattan Project**: The Genesis Mission turns America into one big AI factory by uniting supercomputers and federal data sets to accelerate science in biotech, fusion, and quantum, aiming to double scientific productivity in a decade. [06:09], [10:41]
- **Claude 4.5 Outperforms Engineers**: Anthropic's Claude Opus 4.5 uses 76% fewer tokens, outscores entire engineering teams on coding benchmarks, and beats new hires on key tests, signaling recursive self-improvement. [14:43], [15:26]
- **ARC Leaderboard Drives Intelligence Costs to Zero**: ARC-AGI benchmarks show AI achieving breakthrough cost-efficiency in visual reasoning and program synthesis, saturating evals and pushing the cost of superintelligence toward zero. [22:17], [22:45]
- **Single Entrepreneurs Build Billion-Dollar Firms**: AI agents enable solo entrepreneurs to launch billion-dollar businesses within two years using variable-cost models, with full stacks for compliance and forecasting nearly ready. [24:37], [30:00]
- **Personalized AI Agents Revolutionize Shopping**: ChatGPT's shopping agent achieves 64% accuracy in recommendations, while Amazon's Rufus drives $10B in incremental sales; personalized Jarvis-like agents will handle purchases via intent tracking. [32:01], [36:04]
- **Google TPUs Challenge NVIDIA Monopoly**: Google's Ironwood TPU offers 4x performance via a cloud service with massive interconnects for large context windows, commoditizing accelerated compute against NVIDIA GPUs. [37:30], [39:01]
Topics Covered
- US turns into massive AI factory
- Claude 4.5 outcodes engineers sans reasoning
- AI agents enable solo billion-dollar startups
- Launch costs plummet 1000x to space abundance
Full Transcript
I've compared this moment to 1939. This
is the Manhattan project.
>> Similar to the Apollo project that put a man on the moon in 1969. This is an all-in national effort to take the power of AI to use the world's largest
supercomputers to advance innovation and science.
>> This is just extraordinary. I think this could be the greatest accelerator for human knowledge in the US yet, if it's properly funded and executed.
Genesis 1 is, you know, God created the heavens and the earth. Finally, we have the tools to actually be able to understand them properly. Application of large amounts of compute with the reagents allows us to unravel the mysteries of the earth and the universe.
You know, if we went back to the beginning of the year, did we predict that we'd be here or is it moving faster than even the few of us could predict?
>> Now, that's a moonshot, ladies and gentlemen.
Hey guys, welcome to our emergency pod for Thanksgiving week. A lot going on here. Genesis Mission, we're going to be talking about that. We'll talk about Anthropic's Claude 4.5. I'm here with AWG, Mr. ExO. And, uh, thank you, Emad, for joining us. I know this is Thanksgiving for you in England as well, isn't it?
>> It's Thanksgiving for everyone.
>> Aha. Yes, for sure. So I wanted to start with a question which is what does Thanksgiving look like in the year 2035?
I'm curious, is it going to change at all? Salem, you want to kick it off? Is 2035 far enough away to make a difference?
>> It's a hell of a difference. I mean, look, by that point we should have the cost of Thanksgiving dinner dropping by 10x. It should be personalized to you for your nutrition, so that depending on your metabolism, the turkey or ham or whatever the heck it is is totally customized to you. I'll have a little device inside me saying, "Whoa, whoa, whoa. Before you eat that turkey, I'm still metabolizing the cauliflower. Give me three minutes. Please take a sip. Do not drink alcohol quite yet." And I think we should have gotten over this hump of kind of expensive energy, to the point where we have ultra cheap energy, ultra cheap food, and we're crossing right over that Rubicon.
>> All right. All right. My addition is we'll have Tesla bots serving us everything. Uh, how about you, Alex?
>> Yeah, I think if some subset of humanity is not celebrating Thanksgiving on Mars, some subset celebrating Thanksgiving in the cloud in the form of uploaded humans, and maybe some uplifted non-human animals also celebrating with us at the table, then something's gone terribly wrong over the next ten years.
>> So I get this. So we're going to have uplifted turkeys arguing with their lawyers to keep a ceasefire against killing them all.
>> Yeah. If that doesn't happen, then something's gone wrong over these 10 years.
>> Emad, how about you? What are you going to see in 10 years' time?
>> Yeah, I mean 10 years is the pessimistic end of the AGI forecast, right? So assuming that humans don't end up like turkeys, where we get happier and happier and then at AGI it goes straight down. Um, we'll have figured out how to do perfectly moist turkey by then. But then, as Alex has said, mathematics should be solved by then, science, etc. So you're in the post-abundance world, hopefully, with the robots and more, and there should be a lot to be thankful about if we can navigate what's coming.
>> Yeah, if we can navigate what's coming.
Okay. Well, we're going to talk about that. But before we do, I want to jump into our first story, which is a doozy. Let's hear and learn about the Genesis Mission coming out of the White House. Very powerful concept. All right, let's dive into this with a video.
In every age, humanity invents new ways to see further.
The telescope let us glimpse the stars.
The microscope revealed the worlds within us.
For centuries, thinkers like Leibniz, Shannon, and Turing dreamed of making all knowledge computable.
But today, knowledge grows faster than our ability to understand it.
Trillions of data points, a universe of information still unconnected.
Now, a new instrument emerges.
One capable not only of observing the universe, but of understanding it.
Genesis mission will transform how science is done in America.
Uniting our brightest minds, most powerful computers, and vast scientific data into one living system for discovery. Built on artificial intelligence and quantum computing, it will radically redefine the scale, speed, and purpose of scientific progress in America.
This is the work that will define our generation's legacy. A new revolution begins. One guided not by competition alone, but by curiosity, imagination, and the belief that discovery is the truest form of progress.
>> Wow. Just wow. What an incredible story coming out of the White House. Again, the title here: US government launches Genesis Mission, transforming science through AI computing. So this is Trump's executive order to use massive federal scientific data sets to train powerful AI models. The Department of Energy will connect US supercomputers and lab data into one unified platform intended to shrink the research timeline from years to days through AI-driven experimentation, focusing on biotech, fusion, and quantum. It's a big deal. AWG, want to kick us off?
Yeah, I've compared this moment to 1939, and this is the Manhattan Project. In the Manhattan Project, as I've remarked previously, we turned the country into one big factory for nuclear weapons. In this case, the country is being turned into one big AI factory. And this is an incredibly ambitious, we speak of moonshots, this is an incredibly ambitious moonshot, not just to turn the country into an AI compute factory, but also to supply some of the limiting reagents, as it were, like data sets. Federal data sets that are locked up in a variety of different enclaves are now, according to the EO, going to be unlocked and made available for pre-training, and probably software tools that right now are unavailable are being made available. And I think, to the extent that there may be a race dynamic with China, whose government is also collecting large amounts of data, the Manhattan Project positioning is probably pretty intentional. And I think it's just glorious to see that sort of ambitious unlocking of scarce resources. I'll also point out Dario Gil, who's been named as the mission director for the Genesis Mission. I worked with him as an undergrad at MIT, and it's really great to see MIT in general, and that level of scientific influence, positioned in, again, a 1939 moment, such an ambitious initiative.
>> Yeah, I should just mention, by the way, our other mate Dave is on a research mission in Italy this week. Let's leave it at that. We miss you, Dave. Wish you were here. I mean, this is just extraordinary. I think this could be the greatest accelerator for human knowledge in the US yet, if it's properly funded and executed. Emad, is this something that every country is going to have to follow through on and make a similar move?
>> Um, I think you're seeing this in the UK, we had something similar with DSIT on a much smaller scale, and new regulation acceleration for nuclear reactors, etc. And I think fundamentally, like, Genesis 1 is, you know, God created the heavens and the earth, and now finally we have the tools to actually be able to understand them properly. That's what this is really talking about. Application of large amounts of compute, with the reagents AWG said, allows us to unravel the mysteries of the earth and the universe. And so obviously that's a massive advantage.
But I don't think it's any kind of coincidence that it's the Department of Energy that's running this, because we've talked about how energy is so important and the US has been falling behind on energy compared to countries like China and more. And you'll see more and more deregulation, more and more fusion, solar, etc. play into this. And
the impact again can be immense if you can figure out any one of these things.
And I think that we're in a good place to figure out almost all of them again if it's done properly.
>> Salem, your thoughts, buddy?
>> So I think this is where you see the best of government, because they can leverage those global data sets in a powerful way, and when you can do that, I think it brings out the best of what government is able to do, unlike the private sector. So I think that's one really great point about this. The second, I think, is that this is kind of catch-up in a sense, because lots of countries use their federal data sets in different ways. China, France have been doing it for years, etc. So this is catch-up in one sense, but taking the data, which is now a sovereign resource, and then applying it with all the AI capability the US already has, I think, really amplifies a huge outcome. So the potential here is kind of incredible. It reminds me, back in the days when Silicon Valley started, they created secret labs at MIT, Harvard, and Stanford to figure out how radar could be blocked, and came up with aluminum tin foil, chaff, thrown out of the planes, and they had to create this countrywide initiative to protect and solve for World War II. And this is kind of like that initiative. I think this is that big. I love it. I mean, this basically, in my mind, reframes basic science as a compute problem and throws everything we have at it.
>> Yeah, I think that's the elephant in the room. And I'll also point out the Department of Energy has clarified that one of the goals of the Genesis Mission is to double American scientific productivity in the next decade. When we speak of Thanksgiving 2035, I would say if we haven't 10xed or 100xed scientific productivity by Thanksgiving 2035, also something has gone wrong. But I think having a 2xing of productivity is an excellent baseline here.
>> Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report's for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.
>> I'm reminded, we had a very senior guy from the DoD at Singularity one year, Peter.
>> Yeah.
>> And during the Q&A, he goes, "This is all great. You guys all love these exponentials. VCs are all hovering at the knee of the curve trying to catch a technology that goes vertical. You forget who funds the arbitrarily long flat part of the curve, which is government." And this allows government to really accelerate that flat part of the curve. So I think we'll see exponentials moving forward in time in a pretty amazing way.
>> Yeah. I mean, I do... please go ahead.
>> Yeah. But I think it's interesting how it's moved from the NSF, and again the classical grant-making that's been disrupted over the last year, now to much more of a techno-optimistic approach. And I think one of the key things that will determine the success of this is: is this Manhattan Project style, closed and private, because obviously even they've announced it could be, or is it open? If it's open science, I think it'll be truly exponential. But if it's actually building up public-private partnerships with strong IP protections and doing a lot of stuff in private, I think it'll have a much lower impact on the other side.
>> Yeah, I'm excited to just watch this. Right. This for me, and you said it perfectly, Alex, it's a moonshot. It's an extraordinary nationwide moonshot. It's nothing less than
>> if not a shot at the moon, as I sometimes say.
>> And we'll talk about that. I mean, the only difference is it hasn't set an objective mission like, you know, get to the moon and back before the end of the decade. But this is America throwing its might and coordination at a massive opportunity.
>> Well, and look at the areas, right? Biotech, fusion, and quantum. I mean, those are all moonshot domains that totally rewrite the rules of life.
Amazing. I would maybe just add, as I've pointed out in the past, I think the next big thing after solving superintelligence, which arguably has either already been solved or is imminently solvable, is solving math, science, engineering, medicine. This is what that looks like at grand scale.
This is taking federal resources and applying them singularly to solving grand challenges.
>> All right. Spectacular. All right. Let's go on to our second big story of this particular week. It's what's going on with the hyperscalers, but in particular Anthropic. Nice to see Anthropic making some moves. Here is the story: Anthropic releases Claude Opus 4.5, which uses 76% fewer tokens to reach the same results as older models, outscored the entire engineering team, leads in seven of eight programming languages on industry coding benchmarks, and improves multi-agent support by 15%. Alex, you want to kick us off? How significant is Opus 4.5?
>> Yeah, we're nearing, if not already at, the point of recursive self-improvement. The point of recursive self-improvement, many would say, is the point at which more compute, more infrastructure, is being allocated by frontier labs to AI researchers than to human researchers. And I think the most important indicator isn't that the benchmarks, the evals, are going up and to the right, although they are, and it's wonderful, and I love benchmarks. It's that Anthropic has also announced, as you alluded, that incoming employees to Anthropic, in particular on the performance team, are now being outperformed on key tests, key homework assignments, by the AI. I think that's the canary that we're imminently, if not already, given that this model was arguably pre-trained based on a data cutoff date several months ago, entering the moment of recursive self-improvement. But that's the bigger thing. The smaller headline is, I of course have my evals whenever these codegen models come out. One of my other non-cyberpunk FPS evals is asking it to see if I can one-shot a Mario-style side-scroller, and it did a beautiful job.
>> Amazing. And Dario's been talking about, you know, being able to get to 90% or 100% of all the coding being done. So this is a big move in that direction. Emad, love your thoughts here.
>> Yeah, I mean, from our tests, we got top of the SWE-bench Pro benchmark, which is Scale's one, and it was really difficult, with 45%.
>> This is with Intelligent Internet, right?
>> Yeah, that's with the Intelligent Internet framework using a combination of the other models. This model, without reasoning, scored 52%. Without even reasoning tokens, which I think was the most shocking thing for me. Usually the big breakthroughs we've had are that the models can think longer, they can check, etc. We didn't think it would be that way with just the straight output, and the quality of code it outputs is actually just really, really good. Which is going to be very interesting, because the average code base is 100 to 200,000 tokens, and this should be able to one-shot most code bases by next year. And the cost has dropped 67% from the previous version, so it's now $25 per million tokens as well. So coding and tokens will be ubiquitous, and it may not be that reasoning tokens are what's needed for those tasks, which was, again, completely shocking to me, that it would score higher without reasoning than with reasoning.
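To make those numbers concrete, here is a minimal back-of-envelope sketch in Python, assuming a flat $25 per million tokens (the figure quoted above) and a 100,000 to 200,000-token code base; the figures are purely illustrative, not Anthropic's pricing schedule.

```python
# Back-of-envelope cost to push a typical code base through the model at the quoted price.
# Assumes a flat $25 per million tokens, as quoted in the episode -- illustration only.
PRICE_PER_MILLION_TOKENS = 25.00  # USD

def codebase_cost(num_tokens: int) -> float:
    """Return the dollar cost of processing `num_tokens` at the quoted rate."""
    return num_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

for tokens in (100_000, 200_000):
    print(f"{tokens:>7,} tokens -> ${codebase_cost(tokens):.2f}")
# 100,000 tokens -> $2.50
# 200,000 tokens -> $5.00
```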
>> Fascinating. Salem, any thoughts here?
>> It feels to me like we've moved geopolitics from nation states to these hyperscalers. I mean, this is incredible stuff that's happening from each of these big four or five, and it's rewriting all the rules of everything.
Then you have states riding on top of these which is a much better way of doing it than the other way around.
>> Yeah. Alex, I want to take it back to our subscribers here. What does this mean for the average individual who's not using Opus 4.5 to code?
>> I mean, I think there are multiple levels of impact. The highest level impact is, I've spoken in the past about what I call the innermost loop of civilizational progress. This is the innermost insofar as we're starting to see models that are so strong that they can conduct research and generate code for better versions of themselves. That's the innermost recursive self-improvement loop. I've argued in the past that it's going to spin out and touch the rest of the economy. It's already in progress, but you'll see much more of it over the next two to three years as it fully solves robotics and physical-world automation, leading, optimistically, to radical economic growth. So I would say that's the macro story. The micro story is, in the meantime, it's going to be trivial to generate programs, applications, and complex workflows on demand, implicitly or explicitly. As I've mentioned in the past, with the length of a tweet you'll be able to create a AAA-level first-person shooter or video game. People are going to be creating so much software, so much more software, so trivially, that we'll be drowning in AI-generated software of very high quality. That's the narrow micro impact in the short term.
>> Amazing. And please go ahead, Emad.
>> One other thing that's interesting in this is that by itself it scores 75% on multi-agent when it's the same agent, Opus 4.5; when they combined it with Haiku, which is a very low-cost agent, or Sonnet, it got up to 88%. So Opus is a really good orchestrator of agents, and this is the multi-agent support type of thing. And everyone was saying, well, agents can't look after other agents. This is the first agent, or the first AI, that provably can. And that opens up the whole swarm nature of things that we've been discussing.
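For listeners who want a feel for what "a strong model orchestrating cheaper agents" looks like in practice, here is a minimal, hypothetical sketch of the orchestrator-worker pattern in Python. The `call_model` function, model names, and routing rule are placeholders for illustration, not Anthropic's actual API.

```python
# Hypothetical sketch of an orchestrator agent delegating subtasks to cheaper workers.
# `call_model` stands in for whatever LLM client you use; it is not a real API.
from dataclasses import dataclass

@dataclass
class Subtask:
    description: str
    difficulty: str  # "easy" or "hard"

def call_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real client."""
    return f"[{model}] answer to: {prompt}"

def orchestrate(task: str) -> str:
    # 1. The strong model decomposes the task into subtasks (stubbed here).
    subtasks = [Subtask("outline the change", "hard"),
                Subtask("write boilerplate tests", "easy")]
    # 2. Easy subtasks go to a cheap worker model; hard ones stay with the orchestrator.
    results = []
    for st in subtasks:
        model = "cheap-worker" if st.difficulty == "easy" else "strong-orchestrator"
        results.append(call_model(model, st.description))
    # 3. The orchestrator merges the workers' outputs into a final answer.
    return call_model("strong-orchestrator", "merge: " + " | ".join(results))

print(orchestrate("fix the failing build"))
```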
>> It reminds me of, there's a competition that people do called the spaghetti competition, where they take uncooked spaghetti and see who can get the highest vertical height. And they found that the team that had a very efficient executive assistant as part of their team always scored the best.
>> This is the marshmallow challenge.
>> The marshmallow challenge. Yeah.
>> Here's the setup: you get 20 sticks of spaghetti, a meter of tape, a meter of string, and a marshmallow. And you have to structure something so that the marshmallow's on top, and whoever gets it highest wins. Importantly, you're right, Peter: the winners are the folks that have an EA on the team. But second place is kindergartners. And last place, last place, is MBAs.
>> Yes.
>> Consistently. They lie, they cheat, they kind of break things, etc. It's an amazing exercise.
>> You were going to make a second point on this one, I think.
>> Um, you know, if we went back to the beginning of the year, did we predict that we'd be here, or is it moving faster than even the few of us could predict?
>> I feel like it's moving faster than the few of us could predict. Although, Alex...
>> I'll pat myself on the back narrowly for this and say, as Dave, who's not here at the moment, would attest, and I think, Peter, you were in this group chat as well at the beginning of the year: Dave challenged me to formalize a prediction for what end-of-year solving math would look like. I was banging the drum throughout the year: math is going to get solved. Math is going to get solved. I made a very specific prediction about FrontierMath Tier 4 and AI models passing that, and if anything they've slightly overshot my very conservative baseline. So I think we're more or less where I expected we'd be by the end of this year in terms of the strength of AI models solving math, science, engineering.
>> I'll add one
last thing on the Anthropic story here, which is, last pod we talked about their economic success as a business, that they're heading towards significant profitability in the next two years, and this is part of that equation. So congratulations to Dario and his team on Opus 4.5. Let's go to our next story, again on the leaderboards, and I'll turn to Alex for this: the ARC-AGI leaderboard update. And this is not just performance, this is performance per dollar. Alex?
>> That's right. So the big story: we're driving the cost of intelligence to zero. The cost of superintelligence is being driven to zero as well. The ARC-AGI 1 and 2 benchmarks are really lovely benchmarks I've supported in the past. The general theme is: can AIs successfully visually reason and visually synthesize new programs to reason? And what we're seeing for the first time, between the Opus 4.5 results that are demonstrating breakthrough, state-of-the-art cost efficiency of visual program synthesis, and an earlier result I don't think we got a chance to touch on, which is a company named Poetiq has announced superhuman-level performance on the ARC-AGI 2 benchmark, is that visual program synthesis is starting to get solved, and the world needs harder benchmarks. We need harder, better evals. The so-what for everyone right now is: so many problems in the world, especially in the physical world, rely on some sort of visual reasoning, some sort of intuitive ability to manipulate the physical world, to spot patterns, and to synthesize implicit programs, even if they're never written down as source code. And ARC-AGI 1 and 2 are really excellent ways to capture problems that humans generally find easy but AIs have historically found challenging, and that's all getting solved now and saturating.
>> You mean things like proprioception, for example?
>> Yes, and being able to, in general, recognize a pattern and be able to solve it visually.
>> Your takeaway from this one?
>> Yeah, I think even the authors of ARC-AGI
are like what on earth do we do now I think I've seen some tweets from them saying that. Um I think the next
saying that. Um I think the next benchmarks are dollars. So you have vending bench and some of these other benchmarks where it's like how much money can they earn? You start to see
trading benchmarks. You've got the tip
trading benchmarks. You've got the tip off point now where these models go from very smart people you tap on the shoulder that can do individual tasks to being able to do real economic work. And
we'll see many more benchmarks where the axis is literally dollars >> and that's the next year story I think.
>> How far are we, guys, from, you know, the single entrepreneur with a set of agents building a billion-dollar business? What do you think, Emad?
>> I'd be surprised if it wasn't within two years, probably next year. There are
some amazing entrepreneurs out there and their only thing was how do we scale talent that listens to me?
>> And given they'll be good at using these again, it's a year or two away at most.
>> Alex?
>> I think it's now-ish, in the sense that right now, as I've remarked in the past, you see these poor baby AGIs that have some agency sort of peddling altcoins on X. I think the first zero-human or half-human billion-dollar startup is probably, for better or for worse, probably unfortunately, likeliest to be a baby AGI that pumps an altcoin that becomes worth a billion dollars. And I think we could do way better than that as a civilization than pumping altcoins. But I think that's probably, unfortunately, where it's going to happen first.
>> So I would have gone for porn rather than pumping altcoins, because that's such an obvious place where people have, uh, their vicissitudes. But in terms of the broader picture, you know, there's a colleague of ours that we all know who launched 47 AI startups in a month, a couple of months ago. So people are now kind of using this as a platform to really change the game, and whole incubators are just launching AI startups. So I'm going to make the point that that's already in the works, and it just has to hit a market segment.
>> So here's the question for you
then, Salem and Emad and Alex, which is: is this just going to accelerate the rich-poor divide, right, in terms of the ability for now single individuals who, let's face it, are 21, 22, 23 years old, just out of MIT, just out of Stanford, able to launch something and create extraordinary wealth at a pace, that doesn't need other employees as part of their team?
>> Well, what's going to happen is you'll have that happen, but the rich-poor, the ability to go from poor to rich, has never been faster.
>> And you know, this is a really important point that I think you pointed out in Abundance, Peter, that the richest people in the world used to exclusively have inherited their wealth, and today the richest people exclusively have earned their wealth. And that loop is going to just accelerate, and now you're going to get a hundred Vitaliks and Sam Altmans, etc. Thousands of them just spinning off companies. The bigger
question I think is what happens to the broader economy when this happens.
>> Yeah. Economy 3.0. Alex please.
>> I think all of this capital-substituting-for-labor discussion misses an important point, which is these AI agents are arguably neither capital nor labor. They're a new third category. And everyone who's hand-wringing, and I hear this a lot: oh, well, how are we supposed to survive, and not everyone wants to become an entrepreneur. I would argue a near future where everyone survives by quote-unquote becoming an entrepreneur misses the point entirely. It's not, I would expect, going to be the case that everyone becomes an entrepreneur. Everyone's going to become an investor. The entrepreneurs increasingly are going to be these AI agents that are identifying and solving valuable problems. And humans, the average human, the average unaided biological meat-body human, is going to be able to invest in fleets, in entire economies and indices of AI agents, that are acting as the proximal entrepreneurs.
>> This is the Accelerando premise also.
>> Exactly.
>> Okay. Another, another...
>> I think Emad should say something about this, because he's been studying the...
>> I am gonna go to him next, but this is, you know, this is Accelerando as...
>> Don't mean to butt in on the hosting.
>> Oh, you do a beautiful job as well. But this is the Accelerando playbook for people to read. Emad, over to you, pal. You've been thinking about this very deeply.
>> Yeah, Accelerando is a great book, and
obviously my book, The Last Economy, is also great. But the problem is that it's going to be very difficult to out-plan and out-compete something like Claude 5 when it comes to coming up with businesses, unless you have skin in the game and you care. This is the main thing, because they will try things dynamically and they'll just move on efficiently. Whereas you can apply these agents to tasks, and the key thing is, in economic terms, it's all variable cost. Normally, when you had a company, I had to go and hire someone, that's a pain to do. You know, I had to launch my own servers before the cloud. Now everything is variable cost, and it's also cash-flow positive, because you typically pay the AI providers a month or two after, when you have an enterprise contract, and you charge people up front. So you can have brand new economic models where you're taking information, organizing it, and adding value to people, and I think that does close this rich-poor divide, because you won't know where companies are coming from, and the compliance and everything can be done automatically. Now we're actually seeing, around some of these big AI startups, entire things that will do your tax compliance, that will do your financial forecasting, that will automatically balance payments and things like that. And the stack is nearly ready. Again, it's about a year away before you can launch a business, probably in minutes.
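As a purely illustrative sketch of the cash-flow-positive mechanic described here, this is a toy month-by-month model in Python: customers pay up front, the AI provider is paid on roughly net-60 terms, and every figure is invented for illustration only.

```python
# Toy cash-flow model: revenue collected up front, AI/compute bills paid ~2 months later.
# All figures are invented; only the timing pattern matters.
monthly_revenue = [10_000, 12_000, 15_000, 18_000]   # charged to customers up front
ai_cost_ratio = 0.40                                 # variable cost owed to AI providers
payment_lag_months = 2                               # enterprise-style net-60 terms

cash = 0.0
for month, revenue in enumerate(monthly_revenue):
    cash += revenue                                  # cash in immediately
    if month >= payment_lag_months:                  # bill from two months ago comes due
        cash -= monthly_revenue[month - payment_lag_months] * ai_cost_ratio
    print(f"month {month + 1}: cash on hand = ${cash:,.0f}")
```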
>> Amazing everything there. Could I
mention something here?
>> Of course.
>> The plug for ExO here, which we stumbled across accidentally. Peter, we quoted Jeremy Rifkin's book, The Zero Marginal Cost Society.
>> Yeah.
>> About three-quarters of the way through writing the ExO book, we stumbled upon this economic kind of insight. When you're running a business, you worry about demand and supply, the cost of demand and the cost of supply. Hopefully, you're on the right side of that equation. What the internet did is allow us to drop the cost of demand exponentially. Online marketing, referral marketing, every company's trying for a viral loop. If you get there, your cost of acquisition goes to zero, which is an amazing thing. We saw an initial wave of YouTube, Facebook, etc. explode out of the gate with that. What exponential organizations and new models have done is drop the cost of supply exponentially. Right? So think about Airbnb: the cost of adding a room to their inventory is near zero. If you're Hyatt, you have to build a hotel. And with the launch of Amazon Web Services, you could take computing off the balance sheet and make it a truly variable cost. To Emad's point, everything now becomes a variable cost; you have almost no capital expenditure. So now you take out the denominator, the market cap explodes, and for the first time you have a breed of organization with low cost of demand, low cost of supply, and that's like a magical holy grail for business. And how we navigate that is going to be unbelievable over these next few years as this paradigm rolls out.
>> I love it. All right, I'm going to jump into our next story. A lot still to cover. So, ChatGPT introduces shopping research. It compares and searches and provides sort of recommended products that you're interested in. There's no question that this is coming out on Black Friday. They are moving this quickly. This uses a ChatGPT mini model, and their claim is that they're able to get accuracy, sort of best predictions of what you want to buy, up to 64%. So for me, this is about replacing the search engine, the affiliate blogs, YouTube reviewers, or Amazon's own recommendation engine. It's AI replacing the entire product research economy. In this one, at least from my perspective, Alex, I'm curious: the middleman's going to lose and the models are going to win. What are your thoughts?
>> Critically, not just generalist models. This is a specialist vertical agent. To the extent that there was some expectation that we'd end up in a singleton near future where there's one generalist agent that does everything, it appears that's not the case, at least to the extent that OpenAI is a leading indicator. By my count, OpenAI has launched at least two major vertical specialist agents. They launched Deep Research originally, which is general research, and they've launched the coding agent Codex. And this is the third-ish vertical agent, by my count, that they've launched other than the baseline model. And I think it's really interesting. Where are the generalist models? I mean, yes, they and other frontier labs are launching generalist models, but we're starting to see a proliferation of specialist models. I think we're going to see many more. Wouldn't be surprised if we see more specialist post-trained models for finance and for medicine and for management consulting, just picking off broad industry verticals one by one. In this case, this is going after consumer purchases.
>> But of course, I don't want to be calling on a particular model. I just want my AI to do this for me, right?
>> Yes, they're throwing a lot of resources at model routing and routers in general. So what you'll gain with this umbrella router, I think, will be a single pane of glass, a single UX surface that you talk to.
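As a rough illustration of that "umbrella router" idea, here is a hypothetical sketch in Python: a single entry point classifies the request and hands it to a specialist agent. The agent names and the keyword-based routing rule are invented for illustration; real routers typically use a classifier model rather than keywords.

```python
# Hypothetical router dispatching one user-facing surface to specialist agents.
# Specialist names and the keyword rule are placeholders, not any vendor's API.
from typing import Callable

def shopping_agent(q: str) -> str:  return f"[shopping] compared products for: {q}"
def coding_agent(q: str) -> str:    return f"[coding] generated code for: {q}"
def research_agent(q: str) -> str:  return f"[research] wrote a report on: {q}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "buy": shopping_agent, "price": shopping_agent,
    "code": coding_agent,  "bug": coding_agent,
}

def route(query: str) -> str:
    """Single pane of glass: pick a specialist by keyword, else fall back to research."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in query.lower():
            return agent(query)
    return research_agent(query)

print(route("Find the best price on a 65-inch TV"))
print(route("Summarize the Genesis Mission executive order"))
```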
>> Love that.
>> I have a question.
>> Yes, Salem.
>> How far are we from, you know, Peter, you call it Jarvis, right? A personalized layer that watches on your behalf, is totally secure from a privacy perspective and a sovereignty perspective, and navigates the external world for you. So if you've got shopping you want to do, or you need to buy something, or you need something that you may not even know you need, it's figuring out which of the agents to use and sorting it out. How far are we from that point?
>> Now. No, I mean, what you're describing, Salem, is, I would argue, a computer-use agent, a CUA, and Microsoft and other major companies and frontier labs already have CUAs that are either about to be rolled out or have already been rolled out and are in beta stages to do just what you described for a desktop. Now, I think what Iron Man has is a CUA on top of a heads-up display, an HUD, but that can come as well, imminently.
>> You see, what I find interesting here is the notion that we're about to give our AIs access to everything we read, everything we say, and our intent, attention, and intention. So, as soon as we get our heads-up glasses, our augmented reality glasses, that are able to not only have forward-looking cameras but actually cameras that look back at our pupils to determine what we're staring at, right? If I'm staring consistently at that beautiful lamp over Alex's shoulder and my AI says, "Would you like a lamp like that?" Or if I make some side comment to somebody else, it may purchase it for me and ship it to my house. So this ability to understand what we truly want, by listening to our conversations or looking at where we look, empowers this AI to become our magical shopping agent in many ways. Emad, how do you think about this?
>> Yeah, so Andy Jassy, CEO of Amazon, recently said, I believe, that their agent, Rufus, which I doubt any of us have actually used, has 250 million users as a shopping agent. And they're estimating next year $10 billion in incremental sales from it, given conversion statistics up to 60% higher. Who would have thought?
>> I think the key thing is, where is it? You know, like Bing and Teams and other things had access to the users' eyeballs. The challenge for ChatGPT and OpenAI here is, how do you become that first intentionality on the shopping experience?
>> And then, what type of shopping is it? Because if it's toilet paper, who cares? You just want your AI agent to just do it
>> or whatever, automatically.
>> If it's super discretionary, some people apparently enjoy shopping. Maybe not, you know, people like us, but definitely my wife and kind of others. And then you have this middle bit, like, how many TVs do you really buy? You know, you kind of know what TV you're going to get. So I think the key thing is, who is the AI next to you? You go to Amazon for a shopping experience. You use a Rufus. You do a Google search; you now have Google AI mode up there. The key thing, I think, going to Peter's point, is who comes up with the agent that's the most charming and engaging and licenses Paul Bettany's voice, you know, for a Jarvis, because that can then disintermediate everything, and that's what the fight is on for now.
>> Fascinating. All right, let's move on.
Let's get to Google here. Google further encroaches on Nvidia's turf with their new AI chip push. So Google has launched the Ironwood TPU. It's their seventh-generation AI chip, with four times the performance of the previous version. And importantly, instead of selling hardware, Google is now offering their TPUs as a cloud service. For example, Meta is running on them without purchasing the TPUs. In the photo here, we have Thomas Kurian, who's the CEO of Google Cloud, who has been crushing it. Google Cloud's been doing amazing, and this puts them directly in competition with Nvidia. Alex, how do you think about it?
>> There's been so much hand-wringing, Peter, over Nvidia's purported monopoly, or CUDA as a purported architectural monopoly. GPUs are now finally facing healthy competition. We see TPUs that are being both purchased, according to this reporting, as well as licensed and rented. We see obviously AMD with their own stack, we see Trainium and other Amazon chips and ASICs in general, and I think what all of this is turning into is, finally, accelerated compute is turning into a fungible commodity. It's not just a one-supplier commodity. It is a multi-supplier, very healthy, very heterogeneous ecosystem of fungible accelerated compute, which is exactly the sort of competitive ecosystem we want to find ourselves in.
>> Emad, do you have a comment?
>> Yeah. So we used thousands of TPUs a few years ago, and from the v5s, this now has 10 times the compute. And the chip size, the single die, the interconnectedness of the Google chips is beyond anything you've seen. So you've gone from 64 in a unit to now, I believe, 4,000, no, 9,000. What Google's really, really good at is connecting lots of chips in one place, and even multi-data-center. We had runs of up to 50,000 of their low-energy chips. And what that's important for is context. So right now, actually, DRAM prices have gone up by about five times.
>> Yeah.
>> So if you want to get the DRAM for your gaming PC, it's gone up crazy. Google actually has the ability to use cheaper chips at massive scale to do large-context-window things. And that means that Gemini has a million, two million input tokens, from video to audio to others, whereas it's still limited on other GPUs, and that's going to become even more of a difference going forward. Google originally built these chips to power Google Search, and now they've matured to a point where they can offer them to everyone, even hosted. So the cloud service has been available for a few years, but now they're exploring actually saying, Meta, you want it in your own data center, we can look at that. And that's going to be super interesting going forward, particularly as RAM versus FLOPS becomes the key differentiator in terms of performance, because context becomes almost everything, because the models are already really fast, to be honest.
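To see why memory, not just FLOPS, dominates long-context serving, here is a rough back-of-envelope KV-cache estimate in Python. The model dimensions are hypothetical, loosely transformer-like, not Gemini's actual architecture, and the formula ignores many real-world optimizations.

```python
# Rough KV-cache memory estimate for long-context inference.
# Hypothetical model dimensions; real frontier models differ and use many optimizations.
def kv_cache_gb(context_tokens: int, layers: int = 80, kv_heads: int = 8,
                head_dim: int = 128, bytes_per_value: int = 2) -> float:
    """Memory for keys + values across all layers, in gigabytes (fp16/bf16 by default)."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # K and V
    return context_tokens * per_token / 1e9

for ctx in (128_000, 1_000_000, 2_000_000):
    print(f"{ctx:>9,} tokens -> ~{kv_cache_gb(ctx):,.0f} GB of KV cache")
# Memory grows roughly linearly with context length, which is why cheap, well-interconnected
# memory matters as much as raw compute for million-token windows.
```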
>> Well, just kudos to Google, I mean, they continue to crush it week over week. Two plugs for us here. About three months ago, we said two things. One is that Google would inevitably start to lease or sell the TPUs, and here we are. And second, I think, by the way, Emad, it was you a few months ago on the pod who said invest in DRAM companies, because DRAM is going to become the short supply, etc., etc. So it seems that we're typically three months ahead of the game.
>> Amazing. Amazing, Salem. That's great.
>> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit
blitzy.com to schedule a demo and start building with Blitzy today.
>> Right, let's move on to the next story here. Amazon is spending up to $50 billion on AI infrastructure for the US government. It's projecting it'll add 1.3 gigawatts of new data center capacity, beginning construction in 2026. So what's the story here? AWG, do you want to take a shot?
>> Yeah, the government clouds, in many cases including with AWS and otherwise, the government has its own availability zones, and they are notoriously undersupplied when it comes to accelerated compute, with GPUs. And I think it's sort of surprising, given that the public sector is, depending on how you count, either half the economy or about a quarter of the US economy, how compute-starved, or at least how GPU-starved, it's historically been. So I think this is a welcome investment, at least from my perspective. We want a vibrant public sector, vibrantly supplied with accelerated compute, and I view this as a very positive step in that direction.
>> Nice. Some of the stats here in this article: AWS is serving 11,000 government agencies and expecting to spend $125 billion in capital expenses by the end of 2025. Massive support, right? AWS has really just dominated. So...
>> Is this essentially the federal government saying AWS is their cloud provider? That's a big deal if that's the case, because that's what it seems.
>> Well, the US government has multiple cloud providers. This is pretty well publicized and reported on, but Amazon, AWS, is a key supplier of US government cloud resources.
>> All right, I'm going to move us along here. Also another article on Amazon here, that their data center tally tops 900, and we forget the fact that Amazon, you know, because of AWS, has been running a massive number of data centers around the world, in over 50 countries, launching now something in Indiana, a 1,200-acre data center, and they're putting it up and getting it online faster than anybody else. Any particular thoughts on this one?
>> I was struck by the fact that the Indiana one uses 2.2 gigawatts of energy. That's an unbelievable amount of energy for a data center. That's a small country's worth of power.
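As a rough sanity check on that "small country" comparison, here is a quick back-of-envelope in Python; the country comparison is an approximate public figure used only for scale, and actual utilization will be lower than full load.

```python
# Back-of-envelope: annual electricity a 2.2 GW data center campus would draw at full load.
# Country comparison is approximate, for scale only.
power_gw = 2.2
hours_per_year = 24 * 365
annual_twh = power_gw * hours_per_year / 1000   # GW * h -> GWh, /1000 -> TWh

print(f"~{annual_twh:.0f} TWh per year at full load")
# ~19 TWh/year -- on the order of a small European country's annual electricity
# consumption, give or take utilization.
```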
>> Yeah. I'd just maybe note we're tiling the earth with compute. That's what we're really talking about here. And this is just the opening act. And the Indiana data center in particular, we were speaking about Anthropic a few minutes ago, that Indiana data center is the core computing facility for Anthropic, both for training and for inference. It's called Project Rainier, and it was farmland that was converted almost overnight, I mean, it took about a year, but almost overnight, into modern compute. This is 1939, when you see farmland in the Midwest being converted to compute resources.
>> Yes, Alex, you can't imagine the number of comments I've had from people saying, "What does Alex have? Why is Alex against the moon?"
>> From our last podcast, the moon has it coming.
>> Isn't it obvious? The moon's had it coming for years.
>> We have an AMA section that we're going to hit in a few minutes. And
that's one of the questions being asked.
Don't we need to save the moon?
>> It's lunacy, Salem. Lunacy.
>> Touché. Touché.
>> Oh, goodness. All right. The third Amazon story here is Amazon opens an $11 billion AI data center in rural Indiana. We've heard about this already. It's running 500,000 Trainium 2 chips. So how do Trainium chips compare to the TPUs and to Nvidia's GPUs? What do you guys think about this?
>> Um, so we used a bunch of them previously. Trainium 2 chips are equivalent to the Hoppers, and they're good for inference, but they're much more difficult to do the large-scale training runs on. But if you look at the breakdown now, you have a core cluster for training, and Anthropic just announced another big Nvidia deal, $10 billion with Microsoft and Nvidia, but for serving up Claude you always hit those capacity constraints, and Trainium is very solid for inference, similar to how previously Amazon went all-in with Graviton, which was their CPU equivalent, and now that runs massive workloads for Netflix and everyone around the world. So I think it's still one more generation until Amazon starts to catch up. Again, they're about a generation behind, but all those chips are going to be used, probably for inference versus actual training.
>> I have a crazy question here. So if your model is this closely bound to the chip, then if you ran an inference model for any of these big hyperscalers on Trainium versus, um, TPUs, would you get a very different result because the chip is different?
>> No, so you typically use a framework like OpenXLA, which automatically translates it to different things once it's actually doing the inference, because the process of inference is quite straightforward: forward matrix multiplications. The process of training, the training can be really complicated in the way that things move back and forth, etc., and that's where you really need to have high resilience, high interconnect. Whereas with a single chip, or a group of 8 to 16 chips as these are, they're just doing forward passes. It's a lot easier to code and to have speed on. But again, there are certain things like Cerebras, for example, that will give you much faster inference, or a highly optimized Grace Blackwell, etc.
>> Much simpler than training.
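For the curious, here is a minimal sketch of what "the framework translates it for you" can look like in practice, using the JAX library in Python (which compiles through XLA). The toy weights are made up; the point is that the same forward-pass code is compiled for whichever backend (CPU, GPU, or TPU) is available.

```python
# Minimal JAX example: the same forward pass is XLA-compiled for whatever
# accelerator backend is present (CPU, GPU, or TPU); the weights are toy values.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for the available backend
def forward(weights, x):
    hidden = jax.nn.relu(x @ weights["w1"])
    return hidden @ weights["w2"]

key = jax.random.PRNGKey(0)
weights = {"w1": jax.random.normal(key, (16, 32)),
           "w2": jax.random.normal(key, (32, 4))}
x = jnp.ones((1, 16))

print("running on:", jax.devices()[0].platform)  # cpu / gpu / tpu
print(forward(weights, x))
```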
>> Yeah, maybe to expand on that, backprop is the key problem. If we could do away with backpropagation at training time and have some sort of, like, magical, I remember Boltzmann machines were one sort of concept for how we could do away with global backpropagation. If we could do away with backprop entirely, then one could imagine a near future where training looks a lot more like inference, and training would be a lot more portable and a lot more parallelizable. But no one has yet, in production, figured out how to do away with backprop, so backprop...
>> But aren't LLMs, like, fundamentally anchored to backpropagation?
>> At training time.
>> At training time, not at inference time. Inference time is only forward propagation. So if we could figure out how...
>> It is fundamental to the training?
>> Backpropagation is fundamental to training of neural networks.
>> For now. But there are lots of paradigms. There's a whole cottage industry of researchers trying to figure out ways to eliminate backpropagation entirely. If we could eliminate backpropagation, that would certainly eliminate a training-time compute bottleneck.
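To make the forward-pass versus backprop distinction concrete, here is a tiny sketch in Python using JAX on a toy one-layer model (all values invented): inference is a single forward evaluation, while a training step additionally runs backpropagation via `jax.grad` to get gradients.

```python
# Toy illustration of the asymmetry: inference = forward pass only,
# training step = forward pass + backpropagation of gradients. Values are invented.
import jax
import jax.numpy as jnp

def forward(w, x):
    return jnp.tanh(x @ w)                      # inference: just a forward evaluation

def loss(w, x, y):
    return jnp.mean((forward(w, x) - y) ** 2)   # squared error on toy data

w = jnp.ones((3, 2)) * 0.1
x = jnp.array([[1.0, 2.0, 3.0]])
y = jnp.array([[0.5, -0.5]])

prediction = forward(w, x)                      # what inference hardware has to do
grads = jax.grad(loss)(w, x, y)                 # what training additionally requires
w_updated = w - 0.01 * grads                    # one gradient-descent step

print("prediction:", prediction)
print("gradient norm:", float(jnp.linalg.norm(grads)))
```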
>> And by the way, just as a reminder, if you're listening and you've just heard a conversation that you think is being spoken in Greek, then my suggestion is
>> Join the club.
>> take some notes and go to your favorite LLM and have a conversation.
>> Bruce Willis said in Die Hard, "Welcome to the party, pal."
>> You know, I'm going to hit what you said earlier, Salem, and Alex. I think the most significant thing about this is going from farmland to seven buildings in one year, 2.2 gigawatts.
>> I mean, it's just the beginning, and we're knocking down regulations and capital is flowing in. This is continuing. All right, I want to get to our AMA. I want to hit a couple of stories on the science side real quick.
We've been talking about launch costs.
We've been talking about launching data centers, we've been talking about going to the moon. I want to give folks a little bit of an overview for a moment about how quickly the cost of launch has been changing. So, the Space Shuttle, which was originally supposed to cost about $50 million per launch and launch 50 times per year, ended up costing somewhere between a billion and $2 billion per launch and was launching anywhere from one to four times per year. Massively expensive, $50,000 per kilogram, super high cost. Falcon 9 comes in and drops the cost at least 20-fold, to $2,500 per kilogram, by making the first stage fully reusable, right? It's got nine Merlin engines, so you're recovering most of the engines on the Falcon 9. And then here comes Starship, which is reducing it again another 25-fold, to $100 per kilogram. So, you know, how many kilograms do each of us weigh? And what's your cost to get into orbit? It becomes affordable all of a sudden, right? So Starship becomes fully reusable, and then Elon comes and starts speaking about the work of Gerard K. O'Neill. Jerry O'Neill at Princeton University had actually designed and built, at least on the ground here, what are called mass drivers: electromagnetic rings that accelerate a bucket to lunar escape velocity. And just for the cost of electricity, which by the way on the moon is relatively cheap because you've got all the solar flux, you can accelerate something and shoot it toward the Earth into an Earth-acquisition orbit. And we get here the price coming down not 100-fold but 1,000-fold, to 10 cents a kilogram. So all of a sudden we gain access to all of those resources here on Earth. I'd like to remind people that everything we hold of value on Earth, metals, energy, real estate, all these things are in near infinite quantities in space. So the 9-year-old space geek in me is, like, super excited about what's coming. Alex, you want to add anything?
>> Yeah, I'll add that disassembling the solar system is going to require low cost to orbit. So, this is great.
Alex, you're going to start a protest outside our front door.
>> Dyson sphere in the way. I like this. This is generated by Nano Banana as well. You can see the little thing.
>> Yes, of course. All right.
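To make those fold-reductions concrete, here's a quick arithmetic sketch using the round dollars-per-kilogram figures quoted above; the 80 kg passenger mass is our own assumption, purely for illustration.

```python
# Quick arithmetic sketch using the round $/kg figures quoted above;
# the 80 kg passenger mass is an assumption for illustration only.
cost_per_kg = {
    "Space Shuttle": 50_000.00,               # ~$50,000/kg
    "Falcon 9": 2_500.00,                     # ~20x cheaper (reusable first stage)
    "Starship (target)": 100.00,              # ~25x cheaper again
    "Lunar mass driver (speculative)": 0.10,  # ~1,000x cheaper again
}
person_kg = 80
for system, usd_per_kg in cost_per_kg.items():
    print(f"{system:32s} ${usd_per_kg:>9,.2f}/kg -> "
          f"${usd_per_kg * person_kg:>12,.2f} for an {person_kg} kg person")

overall = cost_per_kg["Space Shuttle"] / cost_per_kg["Lunar mass driver (speculative)"]
print(f"Shuttle -> mass driver: ~{overall:,.0f}x reduction")  # ~500,000x
```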
>> I mean, basically this turns lunar launches and rocket launches into software,
>> right? That same loop is now hitting this, and there are two things here. One is, importantly, this is a log scale. So for folks watching, this is like ridiculous orders of magnitude per level, and that's unbelievable in a very physical environment. This is not some social media gaming Silicon Valley play. This is getting out of Earth's gravity well. This is nuts. This is energy, baby. You know, the one complaint I have about my conversations with Elon is he wants to get out of Earth's gravity well and then go directly back into Mars's gravity well. You know, I'm far more interested in staying either in the Earth-moon system or, better yet, building what some have called O'Neill colonies, in which you are basically... I'm not disassembling the moon, Alex. I'm disassembling the asteroid belt. All those pesky asteroids deserve to be disassembled and used, and we'll use
>> I mean, sure. If you want to start with the asteroid belt, we can start there. That's fine. Training wheels for solar system disassembly. That's great.
>> No. All right. Listen, so disassemble asteroids and build large rotating cylinders called O'Neill colonies where you live on the inside, you know, omega squared R.
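The "omega squared R" aside is the spin-gravity formula: centripetal acceleration at the rim of a rotating habitat is a = ω²R. A small sketch, with purely illustrative radii, shows the spin rate needed to simulate 1 g.

```python
# Spin "gravity" on an O'Neill colony: centripetal acceleration a = omega^2 * R.
# Minimal sketch (not from the show); the radii below are illustrative placeholders.
import math

g = 9.81  # m/s^2, target 1 g at the rim

def spin_for_1g(radius_m):
    omega = math.sqrt(g / radius_m)        # rad/s so that omega^2 * R = g
    rpm = omega * 60 / (2 * math.pi)       # revolutions per minute
    return omega, rpm

for radius in (250, 3_200):                # e.g. a small station vs. a km-scale cylinder
    omega, rpm = spin_for_1g(radius)
    print(f"R = {radius:>5} m -> omega = {omega:.3f} rad/s (~{rpm:.2f} rpm)")
```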
All right. One more story before we get to our Q&A, which is a story from a friend, Matt Angle, the CEO of Paradromics. Paradromics has been one of a significant number of BCI companies, brain-computer interface companies. What's interesting about them: they've just completed their testing in sheep. Neuralink did their testing in macaque monkeys; Paradromics has done their testing in sheep. And they've been approved to go into humans, which they will do in about two months' time, early January, February time frame. I think what's most interesting is that they've been able to hit a speed about 10 times, or actually 20 times, faster than Neuralink. Neuralink's been at about 10 bits per second; the Paradromics implant is at 200 bits per second. So, Salem, you and I have always talked about: is Ray's prediction of high-bandwidth BCI by the early 2030s, 2033, going to happen? So we're seeing all these companies moving forward here.
>> A few years ago, I was a hard no on that.
>> Yeah.
>> Um, and now I'm like, oh >> he's right again.
>> Uh, Alex, what are your thoughts here, buddy?
>> Yeah, I think we're seeing the BCI space become competitive, which is great. Yes, we should all get our Ray-was-right hats. Fine. But I think if you extrapolate this, one of my more fun thought experiments is: when do we actually get our nanobots in the brain for high-throughput, orc-process-type BCI? And it's interesting. You can look at the cost of producing a gigaflop of compute versus the typical size of a gigaflop of compute. When Apple introduced the iMac, the original first iMac was about a gigaflop. The first iPhone, the first Apple Watch, there's something magical about rolling out a form factor with about a gigaflop. You extrapolate out that curve naively, assuming exponential progress for a gigaflop, saying a gigaflop is the threshold at which we have useful general-purpose computing, including for the purpose of maybe even substituting for human brain cells in the context of high-throughput BCI and/or a very invasive uploading scenario. On that curve you get 2045, which is, again, #RayIsRight hat: that's when you get about a gigaflop the size of a human brain cell. So I do think we're very much on trajectory for Ray-is-right-style human mind uploading, and invasive BCIs and non-invasive. This is a quasi-invasive BCI. I think we're also going to get lots of wearable non-invasive ones. We have Ray coming on soon.
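A rough sketch of the kind of naive exponential extrapolation described here. Every number below is a placeholder assumption of ours (starting volume, starting year, halving period, neuron volume), not a figure from the show; the point is only the shape of the calculation, which with these placeholders happens to land in the mid-2040s.

```python
# Naive exponential extrapolation of the "gigaflop form factor" shrinking to
# the size of a neuron. Every number here is a placeholder assumption, not a
# figure from the show; the point is just the shape of the calculation.
import math

v_start_m3  = 0.04      # assumed ~gigaflop device volume around 1998 (original iMac class)
year_start  = 1998
v_neuron_m3 = 2e-15     # rough volume of a ~15 micron neuron soma
halving_yrs = 1.05      # assumed volume-halving period for a fixed gigaflop

halvings = math.log2(v_start_m3 / v_neuron_m3)
crossover = year_start + halvings * halving_yrs
print(f"{halvings:.1f} halvings -> crossover around {crossover:.0f}")
# With these placeholders the crossover lands in the mid-2040s, in the same
# ballpark as the 2045 figure discussed above.
```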
>> Yeah, Ray is going to be joining the pod in early January to talk about his predictions for 2026. Also, we'll have Brett Adcock coming on the pod to talk about
>> Wait, wait, wait, wait, wait. We can't ask Ray about 2026. We got to ask him about 2066. I mean, it's too soon. It's a waste of time.
>> Too soon. Too soon.
>> Too soon.
>> We'll ask him about all of it. We'll ask about all of it.
>> We need a bigger podcast.
>> Yeah. Well, we'll get one by then. Hey, by the way, we crossed 400,000 subscribers. So, thank you to all those who subscribed to push us over. Our next hill is 500,000. Then we're going for that million. Why? Cuz Jet and Dax want this to get a million subscribers.
>> This is, you know, whatever chemistry we have here in terms of processing the news and making sense of it for others, it's seeming to really resonate. The number of calls and accolades and kind of feedback I'm getting... I'm sure you're seeing the same thing.
>> Thank you to our listeners for the feedback. We do read all of your comments, and in fact we process the comments and pull out the questions. We're about to jump into that segment with an AMA. But Emad, what are your thoughts on BCI?
>> I think that this year has been a breakthrough year. Next year you'll see even bigger advances. We've both seen what else is going on behind the scenes, and I think it'll probably be one of the biggest investment areas in the next 3 years, actually, because what could be better for solving the issues than augmenting humanity directly. I think, as Elon said, the only way you're going to be able to keep up with the AGIs is to plug in.
>> Yeah.
>> And so it's going to be of geostrategic importance as well as financial importance.
>> There's something fundamentally interesting about the brain, because we still really have little idea how it works. But as long as we can interface with it effectively, that's very, very powerful. Like, our memories are already now outsourced to our smartphones. We don't really use our memory neurons in the same way we used to, and therefore we'll start doing that with more and more brain functioning capacity, releasing that load and using it for other things. So I'm really excited about what comes with this.
>> Maybe just to add quickly to Salem's point, this is admittedly a bit of a hot take, but arguably we solved AGI, we solved superintelligence, without actually having a good mechanistic understanding of natural intelligence. I think it's pretty likely we're going to solve brain-computer interfaces, and maybe even whole-brain emulation, without actually having a detailed mechanistic understanding of the human brain. You can get pretty far with phenomenology.
>> Do do you Alex, do you think we can use AI to solve the hard problem of consciousness, the whole qualia thing?
>> Yes.
>> Okay. We want to have a conversation about that in terms of how that goes about. But let's take that offline.
>> All right. Two quick points on this BCI. Number one, amazing people playing in this space, right? Max Hodak, who was the co-founder of Neuralink, now has a company called Science. Go and check it out. They have a completely different approach to interfacing between the compute world and your neocortex. Brilliant: basically using neural stem cells to grow nerve endings into the brain that wire together and fire together. And then Sam Altman invested in something called Merge Labs. It's still kind of under wraps, but we'll be hearing a lot more about Merge in the next few months. So I have one final question. Ray's prediction on high-bandwidth BCI is really dependent on having nanotechnology.
Uh, and the question is: where are we on that front? I'm still waiting to hear some good updates on the ability to assemble molecules atom by atom, not with wet nanotechnology, which is biology, but assemblers like Eric Drexler spoke about. Alex, any thoughts there?
>> I spent so many years chasing nano assemblers. I do think we're going to get to Drexlerian style, although even Eric Drexler had sort of a personal evolution. I've chatted a number of times with him about this, from sort of pure diamondoid-style, quote unquote, molecular assemblers, to then there was the Nanosystems phase, where it's not about self-replicating nanobots, it's more about desktop factories that produce things. Here's what I think. I think by, at the very latest, and this is in my mind like an ultraconservative outer bound, 2045, we get our Drexlerian nano assemblers. I actually think we're far likelier to get them in some soft form. Maybe they'll look like DNA origami. Maybe they'll look like AI solving the Feynman Grand Challenge, which includes both computational and nanorobotic challenges. I think we're likely to get some AI solution to early-style Drexlerian nanotech in the next 10 years. I don't think it's going to take that long. But at the same time,
>> there's our 2035 date.
>> Yeah. Like everything gets solved in the next 10 years.
>> Yeah, I don't think you need to have that high bandwidth, to be honest. Like, we did work at Stability on MindEye, where we reconstructed images people saw from MRIs, which is incredibly low bandwidth. And if you look at the forward and backward diffusion processes, what you're likely to have is, before you get to the full bandwidth, you'll have partial bandwidth that can effectively reconstruct brain processes with very little information, and then you'll just run diffusion models to do that, in a similar way to how Sunday Robotics and others have done things going forward.
>> All right, I think
>> There was a project here, I've got to throw this out. There was a project out of Japan called Dream Catcher. And what they were doing is having you sleep in an MRI machine, and they're storing the images coming off your optical nerve and then replaying your dreams back to you the next day, which was hugely unnerving.
>> You know, very quickly on this one, Peter: with fMRI, you get approximately a million voxels per second just streaming off. You can do high-bandwidth decoding of thought with a million voxels per second. I just don't have my portable fMRI machine to carry around in my
>> Yeah, but you will. Those are getting smaller. You will have one.
>> Um, by the way, there is a team that I've been talking to that seems to have a credible critical path for molecular manufacturing. So,
>> You're happy to connect them with
>> I can't wait. All right, let's get into some of our questions from our subscribers here. Let's jump in. The first one is from David Bowman 6224. David says, "I'd like to hear AWG tackle Emad's thousand-day prediction." So, what does AWG think of Emad's AGI-in-a-thousand-days prediction? So, Emad, do you want to state your prediction first, and then we'd love to hear Alex's commentary.
>> Yeah, I was just saying that I think that most human economic work is negative value within a thousand days.
Well, 900 days now left at most. And not
that it will replace all the jobs, but definitely it'll be there for just any job that can be done on the other side of a keyboard or mouse. So that's a weaker version of AGI than in some cases.
>> Alex?
>> Yeah, I think the central challenge, as always, is defining what we mean by AGI. I think if AGI means generality, I think we've had AGI since, at the very latest, summer 2020, when GPT-3 and the "Language Models are Few-Shot Learners" paper came out. If AGI means some sort of economic parallel with humanity, yeah, I agree that either it is the case, Schumpeter style, that we already have some sort of economic generality, for example as parameterized by OpenAI's GDPval benchmark. If you believe that benchmark, economically general AI is either already here or imminent, like the next few months. Or, if you have some other preferred benchmark for human economic output, it's probably imminent if not already here.
>> All right, let's go to the next question, from Josh
>> Insert my standard rant about AGI.
>> Okay, so
>> Incorporated by reference.
>> So acknowledged. Thank you. So, @JoshS5937 says, "What is the future of land ownership in a future without scarcity? Land is finite. Will it remain the final scarce resource?" So, Josh, it's a good question. The way I answer it is two different ways. Number one, we're going to be spending a lot of time in the virtual world, and there you'll be able to gain access to unique virtual real estate. The second is, you're thinking with a very Earth-centric point of view. There's the moon, there's Mars, there are massive O'Neill colonies built out of the asteroids, and we're going to start to see humanity migrate outside the Earth. Having said that, yes, Central Park West apartments are still going to be scarce.
>> Deeply disagree. I want to rant on this.
>> Okay, go for it, Salem.
>> So, I did some fact-checking here. It turns out there's about 16 billion acres of habitable land on Earth. That's about 2 acres per person. Okay, that's a pretty decent number. And that's habitable. Let's note that passenger drones are going to make difficult-to-reach areas very habitable. So, that goes up to about 20 to 22 billion acres of habitable land. So, technology will expand the amount of habitable and reachable land. And still, if we get to about 10 billion (we'll peak at about 10 billion population by 2050 before we start dropping off), that's still about two acres per person, which is a pretty decent number. And all of the technology is allowing us to reach that land more easily and make more land usable. And if you fly across India, the most populous country in the world, it's mostly empty. Yeah.
>> Right. You see the populations at the edge, on the coast; in the middle there's kind of nothing there. Same with Africa. Same with the US. You fly across the US, there's nobody there in the middle. And I'm Canadian. There's nobody in Canada. So, there's a lot of land that we can use. And technology makes it much more accessible: temperature, HVAC, heating, cooling. The only constraint is energy and compute, as Alex would say.
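A quick back-of-envelope check of the acres-per-person arithmetic, using the land figures quoted above and assumed population numbers:

```python
# Back-of-envelope check (not from the show) of the acres-per-person claim.
habitable_acres_today = 16e9        # figure quoted above
population_today      = 8.1e9       # assumed current world population
habitable_acres_2050  = 21e9        # midpoint of the 20-22 billion acres quoted above
population_2050       = 10e9        # peak population figure quoted above

print(f"Today: {habitable_acres_today / population_today:.1f} acres/person")
print(f"2050:  {habitable_acres_2050 / population_2050:.1f} acres/person")
```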
>> Amazing. That's a great point. Good.
>> I'd like to just give a practical example to listeners. Waymo is now basically legal across San Francisco, right? And so that could completely change where people live, because you can just get into one and it will just take you and your kids anywhere.
>> And sorry, just to add to that, the prediction for Tesla is it'll be about 30 cents a mile to get somewhere on the robotaxi. That's near zero by comparison. It's a 10x drop from where we are now.
>> Alex, maybe just to add a bit of nuance to this. I think in the short to medium term, land is becoming post-scarce. As you say, we can build up, we can build down, we can build on other planets. An important use case that hasn't been touched on: we're going to have so many humans, I think, uploaded in one form or another into the cloud, and the cloud doesn't have the same concept of land. So I think short to medium term, land is post-scarce. In the long term, I think the scarcity of land depends on whether AI economies have a better use for land than we do. If we do find ourselves taking apart the solar system, land could actually become really scarce in the end.
>> Yeah. By the way, let's just talk one second about uploading. I mean, when do you actually believe we're going to start to see human uploads, to the point where you, Alex, say, "Okay, upload me," and there's this speaker that comes over, you know, this voice comes over and says, "Hey, Alex, I've been uploaded. You can off yourself now. We don't need your biological body anymore. I'm in the cloud."
>> Thanks for the vote of confidence. I think we've already seen non-invasive uploading in the form of large language models. Large language models are arguably sort of an upload of an ensemble of all of humanity.
>> Yeah.
>> In terms of individual uploading that's non-invasive, I think either we're there already in the form, you know, Emad touched on earlier, or alluded to, of constructing foundation models from fMRI scans. There are a number of groups that are training foundation models from fMRI scans. Arguably, those are low-fidelity facsimiles, but non-invasive, of human minds. I think we're
>> Wait, hold on a second. There are people training LLMs on fMRI scans?
>> Correct. A number of groups now, including Meta, by the way. Really well financed, really talented groups.
>> Holy crap. Okay.
>> So, the real idea
>> The implication is: LLMs are trained to reproduce sort of the behavior of humans, like fat biological meat fingers tapping keys on a keyboard, uploading text to the internet. But with foundation models trained off of fMRI data, like a million voxels, order of magnitude, per second, you can imagine pre-training a foundation model that basically encapsulates human thought. Certainly for human thought decoding purposes, you get that.
>> And fMRIs can track a single neuron's firing in real time?
>> No, fMRIs are both spatially low resolution and temporally low resolution. You get like 1 to 2 second temporal resolution and approximately 1 mm cubic spatial resolution at best, but that nonetheless turns out to be enough for thought decoding.
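Those two numbers are roughly consistent with the "million voxels per second" figure mentioned a moment earlier: dividing an assumed ~1.3 liter adult brain volume into ~1 mm³ voxels and sampling every 1 to 2 seconds gives on the order of a million voxels per second.

```python
# Rough consistency check (not from the show) of "a million voxels per second":
# whole-brain volume divided by ~1 mm^3 voxels, sampled every 1-2 seconds.
brain_volume_mm3 = 1.3e6     # ~1.3 liters, an assumed typical adult brain volume
voxel_mm3        = 1.0       # ~1 mm cubic spatial resolution, as quoted above

voxels_per_volume = brain_volume_mm3 / voxel_mm3
for tr_seconds in (1.0, 2.0):          # 1-2 second temporal resolution, as quoted above
    print(f"TR = {tr_seconds:.0f} s -> ~{voxels_per_volume / tr_seconds:,.0f} voxels/second")
```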
>> So, Alex, the concept around a true upload is: can I actually map your connectome? Can I map, for a human, roughly not only the 100 billion neurons but the 100 trillion synaptic connections, typically done by slicing the brain into ever-thinner slices and using AI to map those interconnections? It's a destructive process. Do you think we're going to have
>> Invasive, right now?
>> Yes. Very invasive.
>> Yeah. Here's my brain. Slice it into a thousand pieces.
>> Yeah. I think we're going to have a billion pieces. So, I think we're going to have reference non-human organisms. So, Drosophila: major progress already. Flies,
>> Fruit flies, done.
>> Mice about to be done. There have been a few one-to-three cubic millimeters of mouse brain uploaded, or scanned, in some form or another; to the extent that the connectome is a proxy for uploading, done. I think mice overall are going to be done shortly.
>> Lobsters next, right?
>> Lobsters are easier, interestingly. Mice are harder. I think we're going to see the full human high-res connectome probably in the next 5 years.
>> Alex, aren't you an adviser to Nectto?
>> I'm not a formal adviser to Nectto, but I am an adviser to a company, Eon Systems, that is working on solving human whole-brain emulation and uploading.
>> All right, I'm going to move us forward here onto our next AMA question. And then I want to close out with: what are you most thankful for in 2025? So start thinking about that in the background. So, @SuccessCoach, Cody, writes: how do we prevent a world where millions fall into poverty before AI-driven abundance arrives? What are the real solutions for people who may lose their jobs long before the long-term benefits of AI kick in? @JNIND5 asked a similar question: you often say AI will lift up people at the bottom. How exactly will that happen for those who can't meet basic needs like food and healthcare today? You know, we hit on this about two pods ago, where there are concerns, and we saw this in the data from the FII event. Concerns about poverty, about losing jobs, about being able to support your cost of living. Salem, you want to jump in on this one?
>> Wow, a ton of this... there's like 20 questions buried in each of these. Sure. I think we are in a difficult kind of 10-year period, when we transition all of our world systems from scarcity to abundance, right? Consider the fact that almost every business in the world is focused on scarcity. For the last 10,000 years, if you didn't have scarcity, you kind of didn't have a business. We're moving now to abundance models, and actually exponential organizations find business models around abundance, which is the starting point of that transition. But for society at large, we need to move to some model, whether it's UBI or UBS, universal basic services
>> Right, a similar type of concept, where you just give basic capability and make that available to everybody: solve the bottom two layers of Maslow's hierarchy. And the trick for UBI, by the way, for people that are naysayers, is if you can find the balance where people can survive but not be happy, you still have a very thriving economy; entrepreneurship explodes in that model, et cetera. So we're not far away. The problem is governments, and getting governments to move from a union-labor, job-taxation model to that is such a big leap. We don't have confidence in governments doing that. And the problem with government we have all over the world is they want to be needed. They ran a two-year UBI in Manitoba in the 70s, and it was so successful that at some point the government realized, we're not even needed here, and they canceled the program so that it could be needed. That's the immune-system problem in government that has to be solved. Just a quick thing for all the folks that emailed me saying, "Hey, how do I find out about that?": I'm putting some stuff together; we'll send it out shortly to everybody. So I tend to be on the optimistic side. Technology uplifts people at the bottom. People are leveraging technology to make more and more money in the short term. We've got lots of data around that. And as we get technology democratized and demonetized to a broader population, then everybody lifts up. And you, Peter, you and I talk, all of us talk all the time: forget the richest people. If you can lift the bottom, that's the key. And the bottom is being lifted very, very appreciably. You just don't see it that way. It is being
>> People compare themselves against, you know, the Kardashians or whomever else. Emad, you've been doing incredible work here with Intelligent Internet on this specific problem. Could you sort of lay that out and give us your thoughts here?
>> Yeah, I think, as with many things in human life, this is a coordination problem, right? Again, we have enough resource, 2 acres per person, you know, food, healthcare, et cetera, to coordinate everyone. But we've always lacked the capability to do so, because our systems are dumb. So we have projects like Sage, which we launched at FII to do top-down policy. And really, the way that I've been thinking about it more is like AI social scientists. You know, we talk a lot about AI scientists for biology, for chemistry, for quantum; AI social scientists to figure out economics, politics, and implementation are going to be so huge. That's basically our Sage project. On the other side, I think you need to have universal AI given to everyone: a Jarvis that's looking out for people, to help them navigate on an individual basis, because that's how they get access to food, healthcare, et cetera. The reason they don't know is because people are invisible, particularly the poorest of people. But the pace at which this is going to come over the next few years is going to be so intense that governments need to take a big step forward and say: (a) we need to use AI to coordinate this, (b) we need to get AI to the people, and (c) we need to look at historical counterparts. And I think probably you need to look at the 1933 New Deal that came out of the Great Depression, and others, because you might see entire industries disappear within a matter of days, months. Like, Grok 4.1 Fast just scored like 95% on Tau-bench, the customer service benchmark, and it's 50 cents per million words, better than any human. That would just mean no customer service jobs within 2 years. You know, again, it takes a little while, but it's one-way. So: coordinate with AI, give everyone universal AI, and then layer services and coordination on top of that.
>> You know, you're going to appear as the headline in some news article. Now, Emad, if you remember, you said no more coding in a year, and it was like headlines across India.
>> I think the really killer point that Emad makes right there is: this is one-way. We're not going back.
>> Yes.
>> We have to face the future that's coming, and let's get real about it. Let's get data-driven and evidentiary around it and just freaking make it happen, because left to itself, you know, we've got these two futures: a Mad Max future or a Star Trek future,
>> right, and you can see our politicians pulling us straight to Mad Max. We have the opportunity with technology to pull us in the other direction, and this is what this community is about. This is what we have to do.
>> Alex, closing thought on this one?
>> Yeah, closing thought is: I think the central policy challenge is growing the overall economy much faster than the value of conventional human labor is destroyed or obsoleted by AI. So I'm primarily focused on ensuring that we can achieve radical macroeconomic growth. If we can do that, then making sure that UBI, UBS, or UBE, universal basic equity, or some other variant thereof, some door number four... I think those all become more a matter of policy decisions. But it's relatively easy to distribute abundance if we have abundance.
>> Yeah. All right. We're going to close out on a question aimed at you, Alex. This is from
>> I'm going to coin a phrase here that just occurred to me: UBA, universal basic abundance. There you go, Peter.
>> Okay. I love it.
>> Awesome thinking around that. And then that gives you abundance of very interesting things underneath that, that solve for all the others.
>> Love that. All right. The final AMA. And please, if you're listening today and you say, "I've got a question," put it in the chat on this particular episode of Moonshots, and we'll look for it, and if it's intriguing enough, we'd love to ask it to the Moonshot Mates. So, xfinix96 asks, "Hey, Alex, the moon and Jupiter should be off limits to mining. Don't they stabilize our environment?" What are you trying to do, Alex? Start a revolt?
>> Gosh, we're having an "all these worlds are yours except Europa, attempt no landings there" moment. I think, if you've read 2010 by Arthur C. Clarke... No, I have so many thoughts. First is: no, we don't need to stop mining the moon and Jupiter to stabilize our environment. Jupiter does, at the moment, play an important role in protecting the inner solar system from Oort cloud bodies and other objects from the outer solar system. The moon does play, for the moment, an important role in the tides and other sort of atmospheric
>> and romantic love
>> for the moment
>> "For the moment" is doing the heavy lifting in that sentence. So once we have the ability, which I think seems likely, we will increasingly have to disassemble the moon and disassemble Jupiter, and, assuming the solar system does go in that route, we will also have the ability to protect the inner solar system from a variety of asteroidal bodies and to recreate the tides artificially.
>> My favorite quote from you, Alex, is "Saturn has had it coming for a long time." That's got to be an all-time Alex quote.
>> It's true.
>> Oh goodness. All right. Well, asteroids represent a significant amount of mass, and I think they can handle our needs for at least a decade or two. So, all right.
>> I want to close out with a question here. What are you guys grateful for having happened in 2025? As a closing gratitude, I'll kick it off. I'm super excited that humanoid robots have made so much progress, and the capital is being invested, the manufacturing plants are being invested in, and my own version of Data or C-3PO is on its way. Alex, how about you?
>> So many things, but I'll pick one. I'm grateful that math is credibly and defensibly being solved by AI. That is, in my mind, such a canary that this is going to work. The singularity is in progress. We're going to solve all of the grand challenges of math, science, engineering, and medicine over the next few years. And math is just the tip of the iceberg. It's very exciting.
>> Amazing. Uh Salem,
>> Um, again, a million things. Three things pop to mind. One is, I'm unbelievably grateful for this podcast, Peter. Thank you for pulling it together.
>> I am too. Thank you, Dave, a lot at this moment.
>> And thank you. Let's just say it real quick to Nick Singh, to Dana, to Jen Luca, who helped this really be excellent. So thank you guys for that.
>> I think this radically optimistic, realistic view of the future is the most important kind of tonic for what's happening out in the world today, and there's kind of palpable relief from all the listeners going, "Wow, thank God there's something I look forward to every week or few." That's number one. Number two, I think I'm starting to just wallow in gratitude on a near-permanent basis, just thinking about the incredible future that is appearing in front of us, driven by that inner loop that Alex talks about. I'm still a fan of the moon for the moment. So let's, you know, we don't need
>> Enjoy it while it lasts.
>> I think the third would be: my ExO ecosystem is finally jelling in a really powerful way. It's been like 10 years building this ecosystem. If I ever say in the future I want to build an ecosystem, please, somebody get a baseball bat and take me behind the woodshed. It's unbelievably difficult. But it's actually now coming together in a very, very powerful way. There's a whole bunch of announcements that we have. And finally, I'll do a plug: we're doing this meaning-of-life session where I will claim to answer why we are alive and how we live effectively, and the link will be in the show notes for everybody. The tickets are selling fast for that.
>> Nice. Emad, where do you come out on your gratitudes?
>> Yeah, it's a nice small one, a small question you're answering. Um, I think that there's two things
>> I go for niche projects.
>> Yeah, I think there's two big things. One is, I think we've had the technological breakthroughs and infrastructure breakthroughs to be able to build the AI social scientists, to improve our infrastructure and finally coordinate as a species. And that is a huge thing that we'll start seeing rolling out and announced next year as well. And number two, I think, minus the hard light, we have all of the tools we need now for the holodeck.
>> Ah, awesome.
>> We just got to put that together. I'm going to add one final gratitude to close us out here, which is the incredible progress being made on reaching longevity escape velocity, right? The focus by all the hyperscalers and model builders on how do we understand how to add decades of health into our lives? How do we, as Dario says, double the human lifespan in the next 5 to 10 years? That gets me jazzed. You know why? Because I'm excited to see the Star Trek future coming our way. We're going to close out. If you're listening to this versus watching it, go to YouTube to watch this incredible outro music and video by John Natney.
>> John again,
>> John again. You're going to see all of your favorite Moonshot Mates as Star Trek characters. Here, of course, is the opening scene with AWG as a Vulcan. All right.
>> As a blonde Vulcan with a ponytail.
>> Blonde Vulcan with a ponytail. All right, let's check it out. And Salem, once again, you look hot here, buddy. You look hot. All right, enjoy.
The map is torn to pieces, but we're setting out regardless of it all.
Pack your courage tight. We're giving
everything.
As long as I get a phaser for secrets and mountain sharp and tall.
We leave the hidden chasms and we never fear the fall.
The storm may rise to test us, but we'll meet it all the moon.
>> Ah, that was epic. I just don't like wearing a red shirt on some of those planets.
>> I don't know.
>> If I can be likened to Picard in any way, I'm good.
And Alex, of course, you're the science officer on all the missions here.
>> Obviously.
>> Everybody, I wish an incredible Thanksgiving holiday to all our listeners. To my Moonshot Mates: Dave, we missed you on this episode. Looking forward to seeing you. We're reporting again early next week. A lot going on. And we're going to be spending some time with Mustafa Suleyman as well, the CEO of Microsoft AI, and we'll be doing a podcast with him. A lot of incredible things. Get ready. 2026 is going to rock the planet. Hopefully not physically, but definitely emotionally and intellectually.
>> Let's all wallow in gratitude the next few days.
>> Yeah. Beautiful. And stuffing and turkey.
>> Take care everybody.
>> Take care folks.
>> Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff. Only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report's for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.