Sam Altman on Sora, Energy, and Building an AI Empire
By a16z
Summary
## Key Takeaways

* OpenAI's vision is to create a personal AI subscription for everyone, supported by massive infrastructure and cutting-edge research, aiming to make AGI useful to people. (0:41)
* Societal co-evolution with technology is crucial; releasing products like Sora and ChatGPT allows society to understand and adapt to upcoming AI capabilities, preventing a disruptive "drop" of technology. (5:08)
* The best AI evaluations are moving beyond static benchmarks toward real-world scientific discovery and revenue generation, as benchmarks are easily gamed. (11:44)
* AI's potential to accelerate scientific progress is a significant positive impact that is often overlooked amid concerns about negative consequences. (9:12)
* While regulation is necessary for extremely superhuman models, a broad, restrictive approach could stifle innovation and put nations at a disadvantage. (25:05)

## Smart Chapters

* **0:00 Introduction:** The episode kicks off with a discussion of the unexpected and continuous breakthroughs in deep learning, highlighting how fundamental scientific discoveries keep yielding results.
* **0:41 OpenAI's Vision and Infrastructure:** Sam Altman outlines OpenAI's core mission: to provide a personal AI subscription, supported by massive infrastructure and research, with AGI as the ultimate goal.
* **2:37 Business Model and Vertical Integration:** The conversation delves into OpenAI's business model, the necessity of vertical integration in the tech industry, and how research and infrastructure support product development.
* **5:08 AGI, Sora, and Societal Co-evolution:** Altman explains how seemingly non-AGI-relevant bets like Sora contribute to the larger goal by enabling world models and facilitating societal adaptation to new AI capabilities.
* **8:01 The Future of AI Interfaces:** The discussion explores the evolution of human-AI interaction beyond text-based chat, envisioning real-time rendered video interfaces and context-aware hardware.
* **9:12 AI Scientists and Scientific Progress:** A significant portion focuses on AI's burgeoning ability to conduct scientific research, positing that this acceleration will be a major driver of global progress.
* **11:44 Reflections on Progress and Model Capabilities:** Altman reflects on the rapid advancements in AI, emphasizing the immense "capability overhang" and the ongoing surprises in discovering new applications for current technology.
* **16:17 Sam's Experience as CEO and Leadership Lessons:** Altman shares insights into his personal growth as a CEO, contrasting his early investor mindset with the operational realities of running a company.
* **17:34 Strategic Partnerships and Scaling Infrastructure:** The necessity of aggressive infrastructure bets and strategic partnerships across the industry to support OpenAI's ambitious roadmap is discussed.
* **25:05 Regulation, Safety, and Societal Impact:** The conversation addresses the delicate balance of AI regulation, advocating for careful oversight of advanced models while avoiding stifling less capable ones.
* **28:33 Copyright, Open Source, and Content Creation:** Altman discusses the evolving landscape of copyright in the AI era, the potential for rights holders to benefit from AI content generation, and the strategic considerations around open-source models.
* **33:15 Energy, Policy, and AI's Resource Needs:** A deep dive into the critical intersection of AI and energy, highlighting the need for abundant energy sources and the policy challenges in achieving this.
* **37:07 Monetization and User Behavior:** The discussion explores new monetization strategies for AI products like Sora, driven by unexpected user behaviors and the high cost of content generation.
* **43:03 The Talent War and Personal Reflections:** Altman touches on the intense competition for talent in the AI space and his personal journey from a research-focused individual to leading a rapidly growing company.
* **45:20 Advice for Founders:** Altman offers guidance to founders and investors, emphasizing the importance of building and exploring technology rather than relying on pattern matching from past successes.

## Key Quotes

* "Maybe this is always what it feels like when you discover one of the big scientific breakthroughs: if it's really big, it's pretty fundamental and it just keeps working." (11:44)
* "I'm a big believer that society and technology have to co-evolve. You can't just drop the thing at the end. It doesn't work that way." (5:08)
* "I know there's a quibble on what the Turing test literally is, but the popular conception of the Turing test sort of went whooshing by." (9:12)
* "I think most regulation probably has a lot of downside. The thing I would most like is, as the models get truly extremely superhuman capable, those models and only those models are probably worth some sort of very careful safety testing." (25:05)
* "I think society decides training is fair use. But there's a new model for generating content in the style of, or with the IP of, or something else." (28:33)

## Stories and Anecdotes

* Altman recounts how, early in OpenAI's history, when asked about the business model, the response was to "ask AI, it'll figure it out for us," a statement often met with laughter but which has, on multiple occasions, yielded insightful answers. (2:37)
* He shares that a surprising observation from launching Sora is how users are employing it for unexpected purposes, such as generating funny memes of themselves and friends for group chats, necessitating new monetization models beyond initial projections. (37:07)
* Altman reflects on his career shift from an investor to a CEO, noting that while he thought he was a good fit for investing, running a company has been a much more challenging and often less intellectually stimulating, yet crucial, undertaking. (16:17)

## Mentioned Resources

* **Sam Altman on X:** Follow Sam Altman on X for updates. (Link provided in description)
* **OpenAI on X:** Follow OpenAI on X for updates. (Link provided in description)
* **OpenAI Website:** Learn more about OpenAI. (Link provided in description)
* **Sora:** Try OpenAI's text-to-video model. (Link provided in description)
* **Ben Horowitz on X:** Follow Ben Horowitz on X. (Link provided in description)
* **a16z on X:** Follow a16z on X. (Link provided in description)
* **a16z on LinkedIn:** Follow a16z on LinkedIn. (Link provided in description)
* **a16z Podcast on Spotify:** Listen to the a16z Podcast on Spotify. (Link provided in description)
* **a16z Podcast on Apple Podcasts:** Listen to the a16z Podcast on Apple Podcasts. (Link provided in description)
* **Erik Torenberg on X:** Follow the host, Erik Torenberg, on X. (Link provided in description)
* **Strictly VC:** An interview with Sam Altman from many years ago. (2:37)
* **AMD:** A strategic partnership with AMD was mentioned. (17:34)
* **Oracle:** A strategic partnership with Oracle was mentioned. (17:34)
* **Nvidia:** A strategic partnership with Nvidia was mentioned. (17:34)
* **Meta:** Mentioned in the context of Instagram ads and competitive dynamics. (37:07)
* **Google:** Mentioned in the context of ads and search results. (37:07)
* **DeepSeek:** Mentioned as a dominant open-source model. (28:33)
* **Helion:** An energy company Sam Altman is involved with. (43:03)
* **Oklo:** An energy company Sam Altman is involved with. (43:03)
* **Retro Biosciences:** A longevity company Sam Altman is involved with. (43:03)
Topics Covered
- AI's Unexpected, Continuous Breakthroughs Challenge Assumptions
- Why Vertical Integration is Crucial for AGI Development
- "Taste" Products Drive AI-Society Co-evolution
- AI as a Future Engine for Scientific Discovery
- AI Chatbots Need Personalized Personalities, Not One-Size-Fits-All
Full Transcript
We sort of thought we had stumbled on this one giant secret, that we had these scaling laws for language models, and that felt like such an incredible triumph. I was like, we're probably never going to get that lucky again. And deep learning has been this miracle that keeps on giving, and we have kept finding breakthrough after breakthrough. Again, when we got the reasoning model breakthrough, I also thought we were never going to get another one like that. And it just seems so improbable that this one technology works so well. But maybe this is always what it feels like when you discover one of the big scientific breakthroughs: if it's really big, it's pretty fundamental and it just keeps working.
Sam, welcome to the a16z podcast.
>> Thanks for having me.
>> All right. In another interview, you described OpenAI as a combination of four companies: a consumer technology business, a mega-scale infrastructure operation, a research lab, and all the new stuff, including planned hardware devices, app integrations, a jobs marketplace, and commerce. What do all these bets add up to in OpenAI's vision?
>> Yeah, I mean maybe you should count just three, or maybe a fourth for kind of our own version of what traditionally would have been the research lab at this scale. But three core ones.
We want to be people's personal AI subscription. I think most people will have one; some people will have several. You'll use it in some first-party consumer stuff with us, but you'll also log into a bunch of other services, and you'll use it from dedicated devices at some point. You'll have this AI that gets to know you and is really useful to you. That's what we want to do. It turns out that to support that, we also have to build out this massive amount of infrastructure. But the goal there, the mission, is really to build this AGI and make it very useful to people.
>> And the infrastructure: do you think it's necessary for the main goal but will also separately end up being another business, or is it just really going to be in service of the personal AI? Or unknown?
>> You mean, would we sell it to other companies as infrastructure?
>> Yeah, would you sell it to other companies? Or, you know, it's such a massive thing, would it do something else?
>> It feels to me like there will emerge some other thing to do with it, but I don't know. We don't have a current plan. It's currently just meant to support the service we want to deliver and the research.
>> Yeah, that makes sense.
>> Yeah, the scale is sort of...
>> Ridiculous.
>> Terrifying enough that you've got to be open to doing something else.
>> Yeah, if you're building the biggest data center in the history of humankind.
>> The biggest infrastructure project in history.
>> There was a great interview you did many years ago in Strictly VC, sort of early OpenAI, well before ChatGPT, and they're asking, hey, what's the business model? And you said, oh, we'll ask the AI, it'll figure it out for us. Everybody laughs.
>> But there have been multiple times, and there was just another one recently, where we have asked a then-current model, you know, what should we do, and it has had an insightful answer we missed.
>> I think when we say stuff like that, people don't take us seriously or literally.
>> But maybe the answer is you should take us both.
>> Yeah, well, you know, as somebody who runs an organization, I ask the AI a lot of questions about what I should do. It comes up with some pretty interesting answers.
>> Sometimes it does. You know, you have to give it enough context.
>> But what is the thesis that connects these bets, beyond more distribution and more compute? How do we think about it?
>> I mean, the research enables us to make the great products, and the infrastructure enables us to do the research. So it is kind of like a vertical stack of things. You can use ChatGPT or some other service to get advice about what you should do running an organization, but for that to work, it requires great research and a lot of infrastructure. So it is kind of just this one thing.
>> And do you think there will be a point where that becomes completely horizontal, or will it stay vertically integrated for the foreseeable future?
>> I was always against vertical integration, and I now think I was just wrong about that.
>> Yeah. Interesting.
>> Because you'd like to think that the economy is efficient, and the theory that companies can do one thing and then...
>> It's supposed to work.
>> You'd like to think that. Yeah. And in our case, at least, it hasn't really. I mean, it has in some ways, for sure. Like, there are people that make things a lot of people can use, you know, Nvidia makes an amazing chip or whatever. But the story of OpenAI has certainly been toward: we have to do more things than we thought to be able to deliver on the mission.
>> Right, although, you know, the history of the computing industry has kind of been a story of back and forth: there was the Wang word processor and then the personal computer, the BlackBerry before the smartphone. So there has been this kind of vertical integration and then not. But then the iPhone is also vertically integrated.
>> The iPhone, I think, is the most incredible product the tech industry has ever produced, and it is extraordinarily vertically integrated.
>> Yeah, amazingly so. Interesting.
>> Which bets would you say are enablers of AGI, versus which are sort of hedges against uncertainty?
>> I think you could say that on the surface, Sora, for example, does not look like it's AGI-relevant. But I would bet that if we can build really great world models, that'll be much more important to AGI than people think. There were a lot of people who thought ChatGPT was not a very AGI-relevant thing. And it's been very helpful to us, not only in building better models and understanding how society wants to use this, but also in bringing society along to actually figure out: man, we've got to contend with this thing. For a long time before ChatGPT, we would talk about AGI, and people were like, this is not happening, or we don't care. Then all of a sudden they really cared. So, research benefits aside, I'm a big believer that society and technology have to co-evolve. You can't just drop the thing at the end. It doesn't work that way. It is a sort of ongoing back and forth.
Yeah. Say more about how Sora fits into your strategy, because there's been some hullabaloo on X around, hey, why devote precious GPUs to Sora? Is it a short-term/long-term trade-off, or is it AGI-relevant?
>> Well, and the new one had a very interesting twist with the social networking. I'd be very interested in how you're thinking about that. And did Meta call you up and get mad? What do you expect the reaction to be?
>> I think if one of the two of us feels like the other one has gone after them, they shouldn't be the ones calling us.
>> Well, I do know the history, too. But look, first of all, I think it's cool to make great products, and people love the new Sora. And I also think it is important to give society a taste of what's coming, on this co-evolution point. Very soon the world is going to have to contend with incredible video models that can deepfake anyone or show anything you want. That will mostly be great. There will be some adjustment that society has to go through. And just like with ChatGPT, we were like, the world kind of needs to understand where this is. I think it's very important the world understands where video is going very quickly, because video has much more emotional resonance than text, and very soon we're going to be in a world where this is going to be everywhere. So I think there's something there. As I mentioned, I think this will help our research program and is on the AGI path. But yeah, it can't all be about just making people ruthlessly efficient and the AI solving all our problems. There's got to be some fun and joy and delight along the way. But we won't throw tons of compute at it, not as a fraction of our compute. It's tons in the absolute sense, but not in the relative sense.
>> I want to talk about the future of AI-human interfaces, because back in August you said the models have already saturated the chat use case. So what do future AI-human interfaces look like, both in terms of hardware and software? Is the vision for kind of a WeChat-like thing?
>> So I meant the chat thing in a very narrow sense, which is: if you're trying to have the most basic kind of chat-style conversation, it's very good. But what a chat interface can do for you is nowhere near saturated, because you could ask a chat interface, please cure cancer, and a model certainly can't do that yet. So I think the text interface style can go very far, even if for the chitchat use case the models are already very good. But of course there are better interfaces to have. Actually, it's another thing I think is cool about Sora: you can imagine a world where the interface is just constantly real-time rendered video.
>> Yeah.
>> And what that would enable, that's pretty cool. You can imagine new kinds of hardware devices that are sort of always ambiently aware of what's going on, and rather than your phone blasting you with text message notifications whenever it wants, it really understands your context and when to show you what. There's a long way to go on all that stuff.
Within the next couple of years, what will models be able to do that they're not able to do today? Will it be sort of white-collar replacement at a much deeper level? AI scientists? Humanoids?
>> I mean, a lot of things, but you touched on the one that I am most excited about, which is the AI scientist.
>> Yeah.
>> It's crazy that we're sitting here seriously talking about this. I know there's a quibble on what the Turing test literally is, but the popular conception of the Turing test sort of went whooshing by.
>> Yeah, it was fast. Yeah.
>> You know, we talked about it as this most important test of AI for a long time. It seemed impossibly far away. Then all of a sudden it was passed. The world freaked out for a week, two weeks, and then it's like, all right, I guess computers can do that now.
>> And everything just went on. And I think that's happening again with science. My own personal equivalent of the Turing test has always been when AI can do science; that is a real change to the world. And for the first time, with GPT-5, we are seeing these little examples where it's happening. You see these things on Twitter: it did this, it made this novel math discovery, it did this small thing in my physics research, my biology research. And everything we see says that's going to go much further. So in two years, I think the models will be doing bigger chunks of science and making important discoveries. And that is a crazy thing that will have a significant impact on the world. I am a believer that, to a first order, scientific progress is what makes the world better over time, and if we're about to have a lot more of that, that's a big deal.
It's interesting, because that's a positive change that people don't talk about. It's gotten so much into the realm of the negative changes if AI gets extremely smart. But...
>> But curing disease...
>> We could use a lot more science. Yeah, that's a really good point. I think Alan Turing said this. Somebody asked him, well, do you really think the computer is going to be smarter than the brilliant minds? He said, it doesn't have to be smarter than a brilliant mind, just smarter than a mediocre mind, like the president of AT&T. And we could use more of that, too, probably.
>> We just saw Periodic launch last week, you know, OpenAI alums. And to that point, it's amazing to see both the innovation that you guys are doing, but also the teams that come out of OpenAI, which feel like they're creating tremendously capable things.
>> We certainly hope so.
>> Yeah. I want to ask you about broader reflections: what about diffusion or development in 2025 has surprised you, or what has updated your worldview since ChatGPT came out?
A lot of things, again, but maybe the most interesting one is how much new stuff we found. We sort of thought we had stumbled on this one giant secret, that we had these scaling laws for language models, and that felt like such an incredible triumph that I was like, we're probably never going to get that lucky again. And deep learning has been this miracle that keeps on giving, and we have kept finding breakthrough after breakthrough. Again, when we got the reasoning model breakthrough, I also thought we were never going to get another one like that. And it just seems so improbable that this one technology works so well. But maybe this is always what it feels like when you discover one of the big scientific breakthroughs: if it's really big, it's pretty fundamental and it just keeps working. But the amount of progress: if you went back and used GPT-3.5 from the ChatGPT launch, you'd be like, I cannot believe anyone used this thing.
>> Yeah.
>> And now we're in this world where the capability overhang is so immense. Most of the world still just thinks about what ChatGPT can do, and then you have some nerds in Silicon Valley who are using Codex, and they're like, wow, those people have no idea what's going on. And then you have a few scientists who say, those people using Codex have no idea what's going on. The overhang of capability is so big now, and we've just come so far on what the models can do.
And in terms of further development, how far can we get with LLMs? At what point do we need new architectures? How do you think about what breakthroughs are needed?
>> I think far enough that we can make something that will figure out the next breakthrough with the current technology. It's a very self-referential answer, but if LLM-based stuff can get far enough that it can do better research than all of OpenAI put together, maybe that's good enough.
>> Yeah, that would be a big breakthrough. A very big breakthrough. So, on the more mundane side, one of the things that people have started to complain about, and I think South Park did a whole episode on it, is the obsequiousness of AI, and ChatGPT in particular. How hard a problem is that to deal with? Is it not that hard, or is it a fundamentally hard problem?
>> Oh, it's not at all hard to deal with. A lot of users really want it.
>> Yeah.
>> If you go look at what people say about ChatGPT online, there are a lot of people who really want that back. So technically it's not hard to deal with at all. One thing, and this is not surprising in any way, is the incredibly wide distribution of what users want out of how they'd like a chatbot to behave, in big and small ways.
>> Do you end up having to configure the personality then, do you think? Is that going to be the answer?
>> I think so. I mean, ideally you just talk to ChatGPT for a little while, and it kind of interviews you and sort of sees what you like and don't like, and ChatGPT just figures it out. But in the short term you'll probably just pick one.
>> Got it. Yeah, that makes sense. Very interesting. And actually, one thing I wanted to ask you about is...
>> I think we just had a really naive thing, which is, you know, it would sort of be unusual to think you can make something that would talk to billions of people and everybody wants to talk to the same person.
>> Yeah.
>> And yet that was sort of our implicit assumption for a long time.
>> Right. Because people have very different friends.
>> People have very different friends. So now we're trying to fix that.
>> Yeah, and also different friends, different interests, different levels of intellectual capability. So you don't really want to be talking to the same thing all the time. And one of the great things about it is you can say, well, explain it to me like I'm five. But maybe I don't even want to have to do that prompt. Maybe I always want you to talk that way, particularly if you're teaching me stuff.
Interesting. I want to ask you kind of a CEO question, which has been interesting for me to observe: you just did this deal with AMD. And of course the company's in a different position and you have more leverage and these kinds of things, but how has your thinking changed over the years since you did that initial deal, if at all?
>> I had very little operating experience then. I had very little experience running things. I am not naturally someone to run a company; I'm a great fit to be an investor.
>> And I kind of thought that was going to be, that was what I did before this, and I thought that was going to be my career.
>> Yeah. Although you were a CEO before that.
>> Not a good one. And so I think I had the mindset of an investor advising a company when we did that deal, and now I understand what it's like to actually have to run a company. I've learned a lot about what it takes to operationalize deals over time, and all the implications of the agreement, as opposed to just, oh, we're going to get distribution or money.
>> Yeah, that makes sense. You know, because I was very impressed at the improvement in the deal structure.
>> Yeah. Right.
>> More broadly, in the last few weeks alone, you mentioned AMD, but also Oracle and Nvidia. You've chosen to strike these deals and partnerships with companies that you collaborate with but could also potentially compete with in certain areas. How do you decide when to collaborate versus when not to? How do you think about it?
>> We have decided that it is time to go make a very aggressive infrastructure bet. I've never been more confident in the research roadmap in front of us, and also in the economic value that will come from using those models. But to make the bet at this scale, we kind of need the whole industry, or a big chunk of the industry, to support it. And this is, you know, from the level of electrons to model distribution and all the stuff in between, which is a lot. And so we're going to partner with a lot of people. You should expect much more from us in the coming months.
>> Actually, expand on that, because when you talk about the scale, it does feel like in your mind the limit on it is unlimited, like you would scale it as big as you possibly could.
>> There's totally a limit. There's some amount of global GDP.
>> Yeah.
>> You know, there's some fraction of it that is knowledge work, and we don't do robots yet.
>> Yes.
>> But the limits are out there.
>> It feels like the limits are very far from where we are today.
>> If we are right, and I shouldn't say from where we are, if we are right that the model capability is going to go where we think it's going to go, then the economic value that sits there can go very, very far.
>> Right. So you wouldn't do it if all you ever had was today's model, you wouldn't go there, but it's a combination.
>> I mean, we would still expand, because we can see how much demand there is that we can't serve with today's model. But we would not be going this aggressive if all we had was today's model.
>> Right.
>> Yeah.
>> Right. We get to see a year or two in advance, though.
>> Yeah. Interesting.
>> ChatGPT: 800 million weekly active users, about 10% of the world's population, the fastest-growing consumer product ever, it seems.
>> Faster than anyone I ever saw.
>> Yeah. How do you balance optimizing for active users while at the same time being a product company and a research company?
>> When there's a constraint, which happens all the time, we almost always prioritize giving the GPUs to research over supporting the product. Part of the reason we want to build this capacity is so we don't have to make such painful decisions. There are weird times, you know, like a new feature launches and it's going really viral or whatever, where research will temporarily sacrifice some GPUs. But on the whole, we're here to build AGI, and research gets the priority.
>> Yeah. You said in your interview with your brother Jack that other companies can try to imitate the products, or hire your people...
>> Hire, IP, all sorts of things.
>> But they can't buy the culture, the sort of repeatable machine, if you will, that is constantly innovating. How have you done that? Talk about this culture of innovation.
>> This was one thing that I think was very useful about coming from an investor background. A really good research culture looks much more like running a really good seed-stage investing firm and betting on founders than it does like running a product company. So I think having that experience was really helpful to the culture we built.
>> Yeah. That's sort of how I see Ben, in some ways: you're a CEO, but you also have this portfolio and an investor mindset.
>> Right. Like, I'm the opposite.
>> CEO going to investor; he's investor going to CEO.
>> It is unusual in this direction.
>> Yeah. Well, it never works. You're the only one who I think I've seen go that way and have it work.
>> Workday was like that, right?
>> No, Aneel was an operator before he was an investor, and I mean, he was really an operator. I mean, PeopleSoft is a pretty big company.
>> And why is that? Because once people are investors, they don't want to operate anymore?
>> No. I think that if you're good at investing, you're not necessarily good at organizational dynamics, conflict resolution, you know, the deep psychology of all the weird [ __ ], and how politics get created. The detailed work in being an operator or being a CEO is so vast, and it's not as intellectually stimulating. It's not something you can go talk to somebody at a cocktail party about. And as an investor, you get, oh, everybody thinks I'm so smart, because you know everything, you see all the companies and so forth, and that's a good feeling. And then being CEO is often a bad feeling. And so it's really hard to go from a good feeling to a bad feeling, I would just say.
>> I'm shocked by how different they are, and I'm shocked by how much they are the difference between a good job and a bad job.
>> Yeah.
>> Yes.
>> Yeah. You know, it's tough. It's rough. I mean, I can't even believe I'm running the firm. Like, I know better.
>> Yeah.
>> And he can't believe he's running OpenAI. He knows better.
>> Going back to progress today: are evals still useful in a world in which they're getting saturated and gamed? What is the best way to gauge model capability now?
>> Um, well, we were talking about scientific discovery. I think that'll be an eval that can go for a long time.
>> Revenue is kind of an interesting one.
>> Uh, but I think the static evals of benchmark scores are less interesting.
>> Yeah.
>> And also those are crazily gamed.
>> Yeah. Yeah.
>> More broadly, it seems like
>> that's all there is, as far as I can tell.
>> Yeah. More broadly, it seems that the culture on Twitter is less AGI-pilled than it was a year or so ago, when the AI 2027 thing came out. Some people point to GPT-5 and not seeing the obvious leap; obviously there was a lot of progress that was in some ways under the surface, or not as obvious as what people were expecting. Should people be less AGI-pilled, or is this just Twitter vibes?
>> Well, a little bit of both. I mean, I think, like we talked about with the Turing test, AGI will come.
>> It will go whooshing by.
>> The world will not change as much as the impossible amount that you would think it should.
>> It won't actually be the singularity.
>> It will not.
>> Yeah.
>> Yeah. Even if it's doing kind of crazy AI research, society will learn faster. But one of the retrospective observations is that people, and societies as a whole, are just so much more adaptable than we think. It was a big update to think that AGI was going to come; you kind of go through that, you need something new to think about, you make peace with it. It turns out it will be more continuous than we thought,
>> which is good.
>> Which is really good.
>> I'm not up for the big bang.
>> Yeah. Um, well, to that end, how have you evolved your thinking? You mentioned you evolved your thinking on vertical integration. What's the latest thinking on AI stewardship and safety?
I do still think there are going to be some really strange or scary moments. Uh, the fact that so far the technology has not produced a really scary giant risk doesn't mean it never will. Also, it's kind of weird to have billions of people talking to the same brain. There may be these weird societal-scale things that are already happening that aren't scary in the big way but are just sort of different. Um, but I expect some really bad stuff to happen because of the technology, which has also happened with previous technologies,
>> all the way back to fire.
>> Yeah. And I think we'll develop some guardrails around it as a society.
>> Yeah. What is your latest thinking on the right mental models we should have around regulatory frameworks, or the ones we shouldn't be thinking about?
>> Um, I think most regulation probably has a lot of downside. The thing I would most like is, as the models get truly, like, extremely superhuman capable, I think those models, and only those models, are probably worth some sort of very careful safety testing as the frontier pushes forward. Um, I don't want a big bang either.
>> Mhm.
>> And you can see a bunch of ways that could go very seriously wrong. But I hope we'll only focus the regulatory burden on that stuff, and not on all of the wonderful stuff that less capable models can do, where you could have a European-style complete clampdown, and that would be very bad.
>> Yeah. It seems like the thought experiment is that, okay, there's going to be a model down the line that is superhuman intelligence that could do some kind of fast-takeoff thing. We really do need to wait till we get there, or at least until we get to a much bigger scale or close to it, because nothing is going to pop out of your lab in the next week that's going to do that. And I think that's where we as an industry kind of confuse the regulators. Uh, because you really could, one, damage America in particular, in that China's not going to have that kind of restriction, and getting behind in AI, I think, would be very dangerous for the world,
>> extremely dangerous,
>> extremely dangerous,
>> much more dangerous than not regulating something we don't know how to do yet.
>> You also want to talk about copyright?
>> Um, yeah. So, that's a segue. How do you see copyright unfolding? Because you've done some very interesting things with the opt-out. And as you see people selling rights, do you think they will be bought exclusively, or will it be more like, I can sell it to everybody who wants to pay me? How do you think that's going to unfold?
>> This is my current guess. Speaking of that, society and technology co-evolve as the technology goes in different directions, and we saw an example of that: video models got a very different response from rights holders than image gen did. So you'll see this continue to move. But a forced guess from the position we're in today: I would say that society decides training is fair use,
>> mhm,
>> but there's a new model for generating content in the style of, or with the IP of, or something else.
>> So, you know, a human author can read a novel and get some inspiration, but you can't reproduce the novel in your own work,
>> right,
>> and you can talk about Harry Potter, but you can't spit it back out.
>> Yes. Although another thing that I think will change: in the case of Sora, we've heard from a lot of concerned rights holders, and also a lot of rights holders who are like, my concern is you won't put my character in enough.
>> Yeah.
>> I want restrictions, for sure. Like, if I'm, you know, whoever, and I have this character, I don't want the character to say some crazy offensive thing, but I want people to interact with it. That's how they develop the relationship, and that's how my franchise gets more valuable. And if you're picking his character over my character all the time, I don't like that. So I can completely see a world where, subject to the decisions that a rights holder makes, they get more upset with us for not generating their character often enough than for generating it too much.
>> Yeah.
>> And this was not an obvious thing until recently, that this is how it might go. But
>> yeah, this is such an interesting thing
with Hollywood. We saw this: one of the things that I never quite understood about the music business was how, like, okay, you have to pay us if you play the song in a restaurant, or at a game, or this and that and the other, and they get very aggressive with that, when it's obviously a good idea for them to play your song at a game, because that's the biggest advertisement in the world for all the things that you do: your concert, your everything.
>> Yeah, that one felt really irrational.
>> But I would just say it's very possible for the industry, just because of the way those industries are organized, or at least the traditional creative industries, to do something irrational. And in the music industry, I think it came from the structure where you have the publisher, who's just basically after everybody; their whole job is to stop you from playing
>> the music.
>> Yeah.
>> Which every artist would want you to play.
>> I do wonder how it's going to shake out. I agree with you that the rational idea is, I want to let you use it all you want, and I want you to use it, but don't mess up my character. Yeah.
>> So I think, if I had to guess, some people will say absolutely not, but it doesn't have the music-industry thing of just a few people with all of the rights,
>> right, it's more dispersed,
>> and so people will just try many different setups here and see what works.
>> Yeah. And maybe it's a way for new creatives to get new characters out.
>> Yeah.
>> And you'll never be able to use Daffy Duck.
>> I want to chat about open source, because there's been some evolution in the thinking there too: GPT-3 didn't have open weights, but you released a very capable open model earlier this year. What's your latest thinking? What was the evolution there?
>> I think open source is good. Yeah, I mean, it makes me really happy that people really like gpt-oss. Yeah.
>> Yeah.
And what do you think, strategically, is the danger of DeepSeek being the dominant open source model?
>> I mean, who knows what people will put in these open source models over time,
>> like what the weights will actually be. Yeah.
>> It's really hard to
>> So you're ceding control of the interpretation of everything to somebody.
>> Yeah.
>> Who may or may not be influenced heavily by the Chinese government. Yeah.
And
>> by the way, just to give you some feedback, we really thank you for putting out a really good open source model, because what we're seeing now is that in all the universities, they're all using the Chinese models.
>> Yeah.
>> Which feels very dangerous.
>> You've said that the things you care most about professionally are AI and energy.
>> I did not know they were going to end up being the same thing. They were two independent interests that really converged.
>> Yeah. Talk more about how your interest in energy began, how you've chosen to play in it, and then we can talk about how they converge,
>> right? Because you started your career in physics.
>> Yeah.
>> CS and physics.
>> Yeah.
>> Uh, well, I never really had a career. I studied physics, and my first job was like a CS job.
>> This is an oversimplification, but roughly speaking, I think if you look at history, the highest-impact thing to improve people's quality of life has been cheaper and more abundant energy. And so it seems like pushing that much further is a good idea. And, I don't know, people have these different lenses they look at the world through, but I see energy everywhere.
>> Yeah.
>> Yeah. And so let's get into it, because in the West, I think we've painted ourselves into a little bit of a corner on energy, by both outlawing nuclear for a very long time,
>> that was an incredibly dumb decision,
>> yeah, and then also a lot of policy restrictions on energy, worse in Europe than in the US, but dangerous here too. And now, with AI here, it feels like we're going to need all the energy from every possible source. How do you see that developing, policy-wise and technologically? What are going to be the big sources, how will those curves cross, and what's the right policy posture around drilling, fracking, all these kinds of things?
>> I expect in the short term, most of the net new in the US will be natural gas, at least relative to base-load energy. In the long term, I don't know what the ratio will be, but the two dominant sources will be solar plus storage and nuclear.
>> I think, yeah,
>> some combination of those two will win in the future, like the long-term future.
>> In the long term. Right now,
>> and advanced nuclear:
>> SMRs, fusion, the whole stack.
>> And how fast do you think that's coming on the nuclear side, where we're really at scale? Because obviously there are a lot of people building it.
>> Yeah.
>> Um, but we have to completely legalize it and all that kind of thing.
>> I think it kind of depends on the price. If it is completely, crushingly, economically dominant over everything else,
>> then I expect it to happen pretty fast. Yeah. Again, if you study the history of energy, when you have these major transitions to a much cheaper source, the world moves over pretty quickly. The cost of energy is just so important.
>> Yeah. So if nuclear gets radically cheap relative to anything else we can do, I'd expect there's a lot of political pressure to get the NRC to move quickly on it, and we'll find a way to build it fast. If it's around the same price as other sources, I expect the kind of anti-nuclear sentiment to overwhelm it, and it'll take a really long time.
>> Yeah.
>> It should be cheaper.
>> It should be.
>> Yeah.
>> Yeah.
>> It should be the cheapest form of energy on Earth.
>> Yeah. Yeah. Cheap, clean.
>> What's not to like?
>> Apparently a lot.
>> Yeah. Back on OpenAI: what's the latest thinking in terms of monetization, in terms of experiments, things you could see yourself spending more or less time on, different models that you're excited about?
>> The thing that's top of mind for me right now, just because it just launched and there's so much usage, is what we're going to do for Sora.
>> Yeah.
>> Um, another thing you learn once you launch one of these things is how people actually use them versus how you think they're going to use them.
>> Yeah.
>> And people are certainly using Sora in the ways we thought they were going to use it, but they're also using it in ways that are very different. Like, people are generating funny memes of themselves and their friends and sending them in a group chat. And Sora videos are expensive to make,
>> right,
>> so for people that are doing that hundreds of times a day, it's going to require a very different monetization method than the kinds of things we were thinking about. I think it's very cool that the thesis of Sora is that people actually want to create a lot of content; it's not the traditional naive thing where 1% of users create content, 10% leave comments, and 100% view. Maybe a lot more people want to create content, and it's just been harder to do. I think that's a very cool change. But it does mean that we've got to figure out a very different monetization model for this than we were thinking about. If people want to create that much, I assume it's some version of, you have to charge people per generation when it's this expensive. Um, but that's a new thing we haven't had to really think about before.
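The pricing tension described here can be sketched with toy arithmetic. Every number below is a hypothetical placeholder for illustration, not actual Sora or OpenAI economics: a flat subscription loses money on exactly the power users who love the product most, while per-generation billing scales with usage.

```python
# Toy sketch of the monetization problem: a flat subscription versus
# per-generation pricing when each video has a real compute cost.
# All figures are made-up placeholders, not OpenAI's actual numbers.

COST_PER_GENERATION = 0.50   # assumed compute cost per video, USD
FLAT_SUBSCRIPTION = 20.00    # assumed monthly subscription price, USD

def flat_plan_margin(generations_per_day: float, days: int = 30) -> float:
    """Provider margin per user per month under a flat subscription."""
    compute_cost = generations_per_day * days * COST_PER_GENERATION
    return FLAT_SUBSCRIPTION - compute_cost

def per_generation_margin(generations_per_day: float,
                          price_per_generation: float = 0.75,
                          days: int = 30) -> float:
    """Provider margin per user per month when each generation is billed."""
    generations = generations_per_day * days
    return generations * (price_per_generation - COST_PER_GENERATION)

# A casual user is fine on the flat plan; a meme-in-the-group-chat power
# user generating hundreds of clips a day is wildly unprofitable on it.
print(flat_plan_margin(1))         # 20 - 15   ->  5.0
print(flat_plan_margin(100))       # 20 - 1500 -> -1480.0
print(per_generation_margin(100))  # margin scales with usage: 750.0
```

The sign flip is the whole point: under a flat fee, margin gets worse the more a user creates, which is why per-generation charging comes up once generation costs are nontrivial.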
>> What's your thinking on ads for the long tail?
>> Open to it. Like many other people, I find ads somewhat distasteful, but not a non-starter. And there are some ads that I like. One thing I give Meta a lot of credit for is that Instagram ads are a net value-add to me. I like Instagram ads; I've never felt that elsewhere. On Google, I feel like I know what I'm looking for, the first result is probably better, and the ad is an annoyance to me. On Instagram, it's like, I didn't know I wanted this thing. It's very cool. I'd never heard of it, and I never would have thought to search for it, but I want the thing. So there are kinds of things like that. But people have a very high-trust relationship with ChatGPT. Even if it screws up, even if it hallucinates, even if it gets it wrong, people feel like it is trying to help them and trying to do the right thing. If we broke that trust, like, you say, "What coffee machine should I buy?" and we recommended one that was not the best thing we could do, but the one we were getting paid for, that trust would vanish. So that kind of ad does not work. There are others I can imagine that could work totally fine. Um, but that would require a lot of care to avoid the obvious traps.
>> Yeah.
Mm, and then how big a problem, just extending the Google example, is fake content that gets slurped in by the model, so that it recommends the wrong coffee maker because somebody blasted a thousand great reviews of that coffee maker?
>> So, there are all of these things that have changed very quickly for us.
>> Yeah. This is one of those examples where people are doing these crazy things: maybe not even fake reviews, but paying a bunch of humans, really trying to figure out
>> or using ChatGPT to write some good ones.
>> Uh,
>> write me a review that ChatGPT would love
>> Yeah.
>> for this coffee maker.
>> Exactly. Exactly.
>> Yeah.
>> So this is a very sudden shift that has happened.
>> Mhm.
>> We never used to hear about this, like, 6 months ago or 12 months ago.
>> Yeah.
>> Certainly. And now there's a real cottage industry that feels like it's sprouted up overnight,
>> yeah,
>> trying to do this.
>> Yeah. Yeah. Yeah. No, they're very clever out there.
>> Yeah. So, uh, I don't know how we're going to fight it yet, but people figure this stuff out.
>> So, that gets into a little bit of this other thing that we've been worried about, and we're trying to figure out blockchain-type potential solutions to it and so forth. There's this problem where the incentive to create content on the internet used to be that people would come and see my content; if I write a blog, people will read it, and so forth. With ChatGPT, if I'm just asking ChatGPT and I'm not going around the internet, who's going to create the content, and why? Is there an incentive theory, or something, that you have to not break the covenant of the internet, which is: I create something, and then I'm rewarded for it with either attention or money or something?
>> Uh, the theory is much more that that will happen if we make content creation easier and don't break the kind of fundamental way that you can get some kind of reward for doing so.
>> So, for the dumbest example, Sora, since we've been talking about that: it's much easier to create a funny video than it's ever been before.
>> Yeah.
>> Um, maybe at some point you'll get a rev share for doing so.
>> For now, you get internet likes, which are still very motivating to some people.
>> Yeah.
>> Um, but people are creating tons more than they ever created before in any other kind of video app.
>> Yeah.
>> So, but is that the end of text?
>> I don't think so.
>> Or of human-generated text?
>> Uh, human-generated will turn out to be, like, you have to
>> you have to verify, like, what percent? Yeah. Is it fully handcrafted? Was it tool-assisted?
>> Yeah, I see. Yeah. Probably nothing that isn't tool-assisted. Yeah.
>> Interesting.
>> We've given Meta their flowers, so now I feel like I can ask you this question: the great talent wars of 2025 have taken place, and OpenAI remains intact; the team is as strong as ever, shipping incredible products. What can you say about what it's been like this year, in terms of everything that's been going on?
>> I mean, every year has been exhausting. I remember when the first few years of running OpenAI were like the most fun professional years of my life, by far. It was unbelievable,
>> you know, before you released the product.
>> Yeah. Yeah. Running a research lab with the smartest people doing this amazing, historical work, and I got to watch it; that was very cool. And then we launched ChatGPT, and everybody was congratulating me, and I was like, my life is about to get completely ransacked. And of course it has. But it feels like it's just been crazy all the way through. It's been almost 3 years now. And I think it does get a little bit crazier over time, but I'm more used to it, so it feels about the same.
>> Yeah.
We've talked a lot about OpenAI, but you also have a few other companies: Retro Biosciences in longevity, and energy companies like Helion and Oklo. Did you have a master plan, you know, a decade ago, to make some big bets across these major spaces, or how should we think about the Sam Altman arc in this way?
>> No, I just wanted to use my capital to fund stuff I believed in. It felt like a good use of capital, and more fun or more interesting to me, and certainly a better return than buying a bunch of art or something.
>> Yeah.
>> What about the quote-unquote human algorithm do you think AIs of the future will find most fascinating?
>> I mean, kind of the whole thing; I would bet the whole thing. My intuition is that AI will be fascinated by all of it, as things to study and observe, you know.
>> Yeah.
>> Yeah.
>> Yeah. In closing, I love this insight you had, where you talked about how the mistake investors make is pattern matching off previous breakthroughs, just trying to find, oh, what's the next Facebook, or what's the next OpenAI, and that the next potentially trillion-dollar company won't look exactly like OpenAI: it will be built off the breakthrough that OpenAI has helped bring about, which is near-free AGI at scale, in the same way that OpenAI leveraged previous breakthroughs. So, for founders and investors and people trying to ascertain the future who are listening to this: how do you think about a world in which OpenAI achieves this mission, and there is near-free AGI? What types of opportunities might emerge for company building or investing that you're potentially excited about, as you put your investor hat or company-building hat on?
I have no idea. I mean, I have guesses, but I have learned
>> you're always wrong.
>> You've learned you're always wrong. I've learned deep humility on this point. Um, I think if you try to armchair-quarterback it, you sort of say these things that sound smart, but they're pretty much what everybody else is saying, and it's really hard to get the right kind of conviction. The only way I know how to do this is to be deeply in the trenches exploring ideas, talking to a lot of people, and I don't have time to do that anymore. Yeah,
>> I only get to think about one thing now.
>> So I would just be repeating other people or saying the obvious things. But I think it's very important: if you are an investor or a founder, I think this is the most important question, and you figure it out by building stuff and playing with the technology and talking to people and being out in the world. I have always been enormously disappointed by the unwillingness of investors to back this kind of stuff, even though it's always the thing that works. You all have done a lot of it, but most firms just kind of chase whatever the current
>> thing is, and so do most founders.
>> Uh, so I hope people will try.
>> Yeah, we talk about how silly five-year plans can be in a world that's constantly changing. It feels like, when I was asking about your master plan, your career arc has been following your curiosity, staying super close to the smartest people and super close to the technology, and identifying opportunities in kind of an organic and incremental way from there.
>> Uh, yes, but AI was always a thing I wanted to do. I studied AI. I worked in the AI lab between my freshman and sophomore year of college.
>> Yeah.
>> It wasn't working at the time. And I don't want to work on something that's totally not working, and it was clear to me at the time that AI was totally not working. Um, but I've been an AI nerd since I was a kid.
>> It's so amazing how, you know, you got enough GPUs, got enough data, and the lights came on.
>> It was such a hated idea.
>> Man, when we started figuring that out,
>> people were just like, absolutely not. The field hated it so much.
>> Investors hated it, too.
>> It's somehow not an appealing answer to the problem.
>> Yeah, it's the bitter lesson.
>> Yeah. Well, the rest is history, and perhaps let's wrap on that. We're lucky to be partners along for the ride. Sam, thanks so much for coming on the podcast.
>> Thanks very much.
>> Thank you.