This is the Last Source of Scarcity | USV’s Albert Wenger
By Johnathan Bi
Summary
## Key takeaways

- **Privacy Incompatible with Progress**: Privacy is incompatible with technological progress due to the universe's asymmetry, where destruction is easier than construction, enabling basement viruses or weapons; the paths forward are heavy government control or sacrificing privacy for community vigilance. [01:51], [03:36]
- **World After Capital: Attention Scarce**: We are entering the world after capital, where physical capital is sufficient, as shown by China building cities and railroads quickly, but attention is the new defining scarcity, misallocated away from climate and personal meaning. [24:04], [28:14]
- **Three Freedoms Reclaim Attention**: Economic freedom via universal basic income frees attention from job loops, informational freedom via personal bots counters adversarial platforms like TikTok, and psychological freedom via mindfulness prevents emotional hijacking. [31:30], [32:27]
- **Knowledge Grounds Humanism**: Knowledge, externally recorded art and science, provides power and the basis for humanism, centering human agency and critical inquiry and enabling progress without divine reliance. [53:42], [54:16]
- **Moral Progress Matches Tech**: Technological progress without moral progress leads to self-destruction, as seen in early farming and industry; we need a universal moral core and philosophy to converge values amid AI races. [09:37], [11:23]
Full Transcript
The back of my phone carries a sticker that reads "scrolling kills." X and TikTok and so forth, they're adversarial attention systems, because their financial model is to suck up as much of your attention as possible to then resell it.
>> You are a partner at USV. USV is one of the early investors in Twitter.
>> Yes.
>> Do you kind of regret that?
>> Work is a source of purpose? I'm like, have you worked at a McDonald's?
>> Tell us why capitalism is coming to an end and what's going to replace it? China can stamp entire new cities on the ground in no time. China has built a country-crisscrossing railroad system in basically two decades.
Privacy is incompatible with technological progress.
>> I don't think of privacy as a value in and of itself. It's not a value in and of itself.
Albert Wenger is one of the partners at USV, a legendary venture capital firm that invested early in Twitter. And yet it is precisely companies like Twitter, Albert now argues almost 20 years later, that have wreaked havoc and siphoned our attention away from the seismic shifts that are about to upend your life. As a lifelong student of philosophy, Albert thinks about technology in terms of centuries, if not millennia. In this interview, you're going to learn about the tsunami waves that are about to hit humanity and Albert's radical proposals to steady the ship. My name is Jonathan Bi. I'm a
founding member of Cosmos, where we deliver educational programs, fund research, invest in AI startups, and believe that philosophy is critical to building technology. If you want to join our ecosystem of philosophy builders, you can find roles we're hiring for, events we're hosting, and other ways to get involved at jonathanb.com/cosmos. Without further ado, Albert Wenger.
>> You suggest that privacy is incompatible with technological progress.
>> Yes.
>> Why is that?
>> It's due to a fundamental asymmetry built into the fabric of the universe.
And that asymmetry is that many, many more arrangements of the atoms in my body make something absolutely useless than make my body. Right? So it took 58 years to make this particular body and mind. Because of this asymmetry, a weapon, a gun, can extinguish all of that within a split second,
right? And this asymmetry means that as we make technological progress, our ability to destroy always grows faster than our ability to construct. Construction is fundamentally harder because fewer arrangements of atoms make the thing you want. Most arrangements of atoms don't make the thing you want; they make something useless, something dead, something that's not a machine, etc. And so when you make technological progress, your ability to destroy always grows faster than your ability to construct. And so we're now entering this time
period, for example, where, using equipment that I can buy more or less off the shelf and computers that I can run at home, I will be able to make viruses in my basement. We're really not very far at all from this ability. And at that moment, if somebody can make a highly lethal, highly infective, long-incubation-period virus, we cannot wait for that to be released; we should know about it beforehand. And so there are only two paths forward. We can either exercise huge amounts of government control over technology: you're not allowed to do this, you're not allowed to have this kind of machine, only the government can. And that's very problematic, because it means you need ever more state power to enforce it. Or we can say we are not going to have a lot of privacy. By the way, that doesn't necessarily mean the state knows everything; it means other people know things,
like your neighbors and your friends. But you cannot have both. You cannot have unfettered access to technology and rapid permissionless innovation and also have privacy. I just don't think that's
possible, because, I mean, if we think about this terrible thing of school shootings in the US, right? Imagine going back to a front loader, a muzzle loader, right? Okay, somebody tries to storm a school with a muzzle loader; maybe they'll get one shot off and then they'll get overpowered. They walk in there with multiple ARs, they'll kill dozens or more kids. And so the need for their parents to pay attention and be like, "Holy [expletive], I think my kid's up to no good," goes up dramatically, right? And so you can do one of two things. You can either
have huge amounts of permissionless innovation or you can have a huge amount of government control. But if you go in the direction of permissionless
innovation, then privacy becomes really deeply problematic. In a pre-digital world, it was actually much easier to secure things, right? When your radiology images were literally on film, they were locked in a cabinet; put the cabinet in a secure room, and that was pretty secure, right? Conversely, that made it very difficult to get a second opinion. If you wanted a second opinion, you had to go there, pick them up, and carry them to the place. Now that we're in this digital world, it's a lot harder to secure the thing, but it's super easy to get a second opinion. You just push a button, and these days you can get a second opinion from a machine, or maybe a first opinion from a
machine, right? And so there's a trade-off between how important it is to protect your information versus how important it is to you to actually get multiple people to look at your thing, give you opinions, and give you a better decision. That trade-off has changed pretty dramatically. And so the big mistake that I believe a lot of people make when they think about privacy is that they do a partial equilibrium analysis of privacy. So you
take a society as it is, and then you make one person's information public, like one person's sexual orientation, and all of a sudden people are like, look at this guy. And you're like, yeah, but if this were widely known about everybody, I don't think it'd be a big deal. The example I give in the book is tax records: there are two countries, Sweden and Norway, where tax records are completely public. Everybody's tax filing is public, so everybody's wealth is known. If you live in a society where everybody carefully guards their wealth, then if your wealth is released, you are in some way exposed, right? But if everybody's wealth is known, that's a level playing field.
>> Right.
>> So much of what I believe follows from this. Because I don't think privacy is compatible with technological progress, and because I believe we need technological progress, since technological progress
I believe is fundamentally, over time, good. We can talk about why I believe that; it attaches to my interpretation of the story of the fall from paradise, from Eden. But if that's the road, then we need to create societies where stuff being known about you can't easily be used against you, right? And so that's about shifting power.
If it's like you're going to die because you can't afford health care, because nobody will insure you because they know you have some condition, that's a real problem, right? In that case, you're like, "Yeah, I really want to keep my medical information private." But if we have a society where, of course, we're going to help you try and get the best possible treatment, because we have a level of solidarity about this, then does it really matter if people know that you have a disease or not?
>> Right. So many directions to push.
[laughter] You've opened yourself up to attack and broadside.
>> Well, let's start with technological optimism. You describe yourself as a technological optimist.
>> However, if you buy your own premise that technology always favors the attack over the defense, then one day there's only so much you can do. If we can 3D print nuclear bombs, I guarantee you that the next day one of the big cities in the world is going to blow up. And it's even worse given that the agricultural revolution seems to have been pretty bad for most people. Behind the veil of ignorance, where would you choose to live, a hunter-gatherer or an agricultural society? I would choose the hunter-gatherer one, because odds are I'm going to be some malnourished serf toiling over the soil.
>> I actually think that's right.
I'm not at all disagreeing that if you get randomly substituted into some body, you would probably much prefer to get randomly substituted into a hunter-gatherer, past a certain age, because of course child mortality was awful.
>> So these are the two critiques I want you to respond to. Number one is existential: if what you say is correct about technology, we should be doing everything to stop technological progress.
And number two is on the scale of goodness, since it's not existential: it seemed like humanity needed a long time to recover from the technologies that it created.
>> No, I think that's spot on. This is why we need philosophy. This is why we need moral progress. I think we have made too much technological progress already for the moral progress that we've made. And my big interest in philosophy, my big interest in psychological freedom, in mindfulness, is because you're absolutely right that if we continue to make technological progress but don't make moral progress, we will destroy ourselves. There's no doubt about that. And you're also right that early farming was absolutely horrendous. We sucked at farming.
>> And early industrial was, like, the wars...
>> By the way, this is why I say I'm a long-term optimist, but I'm very
pessimistic about this current transition. Even though we made the transition out of the agrarian age into the industrial age only a couple hundred years ago, it really wasn't complete until the end of World War II. So we're less than 100 years away from that transition. We're making many of the same mistakes, and so I think it's going to get a lot worse before it gets better. But that's also why I'm so invested in philosophy and so invested in this idea that moral progress, I believe, is possible. I think we've made a huge amount of mistakes in philosophy in the last few years as well,
where, I think, post the horrors of World War II, we had this very strong countermovement to humanism and this very strong idea that, oh, it's all relative and everybody's entitled to their own values, etc. I don't think that's at all compatible with technological progress. I think if we want to have real technological progress, we are going to have to converge on some universal moral core. Otherwise, we're dead, right?
>> But that doesn't seem to be a very strong case for the technological optimist. I guess my question is, why isn't your suggestion, let's step on the brakes on technology for now, until we figure out the philosophy thing?
>> Because I believe we are not in a world where there is a brake,
>> where you can do that. Okay. Okay.
>> We're not in a world where that's possible. And if you look at the AI race, you have both market forces and geopolitical forces that are just flooring it, right? And, you know, I wrote this post: the safest number of superintelligences is zero, right? The least safe is one, and then some number n is probably safer than one. The point, though, is I think that's a
hypothetical discussion.
>> Yeah.
>> The people who are like, we should be nuking data centers from space, I'm like, that's a very nice hypothetical, but do you think you can develop a consensus on this? I just think that's not a realistic point of view.
>> I think it's intellectually consistent.
>> Right. But I just don't think it's all that useful.
>> Right. [laughter] But hold on, Albert, because this is basically my position, which is: in the preface of Tocqueville's Democracy in America, one of my favorite books, he gives a metaphor for equality. It's a stream that you're rafting down; you can guide the raft so it doesn't hit the rocks, but there's no way of paddling back up. But because of that, I don't call myself a technological optimist. I just think we're here along for the ride, and that's why we need to build better technologies. So in what way are you an optimist? Is it just that you're excited about it?
>> No, the optimism part is that I believe problems are solvable.
>> I see. Right.
>> So I contrast it with
technological pessimism in the sense that we're just all toast and we can't work our way out of it.
>> There's agency. At least there's agency.
>> There's agency. And if we use the parts of our brain that are capable of rational thought, we can figure out how to solve these problems. When I
mentioned earlier the fall from paradise, to me it's like you take a bite of the apple. If you think of the apple as the fruit of knowledge, now you have knowledge, a little bit of knowledge. You start doing some things, and those things start creating problems, right? So you start agriculture, and that introduces zoonotic viruses. Now you have all these diseases, and we've got to figure out how to deal with those. You start industry, and oh my god, we can pump this stuff out of the ground and burn it and make so much energy. And then we're like, oh, we've totally messed up the atmosphere and it's going to heat up the earth. And when I
say I'm a technological optimist, I'm like, I still believe that humanity can get to a point where, yeah, that actually was a mistake, and we have the technology to go fix it, not to go back. I don't believe you can go back. You cannot get back into Eden, right? So you have to go forward. That's why
we need technological progress. But technological progress without moral progress is hugely dangerous.
>> Cool. Okay. So, to frame for our audience where we are in the argument: you're not getting off that easy. We're just beginning.
>> Please.
>> Fantastic. We examined the first fork, right? Pause technology or go forward. We discussed why we go forward.
>> Then there's another fork, if you believe these two premises. Nick Bostrom, the existential risk theorist, wrote a very famous paper called the vulnerable world hypothesis. And one way you can read it, and I'm not sure this is the intention, is that we need a kind of one-world state that is able to exercise a monopoly on force in order to deal with the risks of technology. You are giving an alternative to Bostrom. Even when you agree with his premises, you're saying, no, we don't necessarily need force. We just need information on what everyone is doing. And is that enough? Let's say we have 3D printing that can make nuclear bombs. Is it enough just to know who's making a bomb, or do we want to actually stop people from doing that?
>> Yeah. I mean, the question is, because that's what privacy is, right? To know.
>> Sure. Sure.
>> So to me the question comes down to, and this is back to the question of optimism: do you believe that over time
enough people can be like, "No, of course we don't want this dude to print a nuclear bomb and set it off. It's going to kill a lot of people. This is bad. This needs to be stopped"? But where that happens not necessarily because there's some world government with stormtroopers that descend on this person, but where communities are wholesome to the point where they're like, "No, of course we don't want violence. Of course this is bad." So I guess I believe that if you have a preponderance of people who want good things to happen, then you have the ability, with relatively little state power, to stop bad actors.
>> So this is the moral progress point.
>> Yes, this is the moral progress point again. Yeah.
>> Right. I see. What do you say to the privacy concern? Now we're moving further down the tree, examining that specific claim. In Europe, regions where religious censuses were taken before World War II saw far more deaths of Jews; the Jews were exterminated much more thoroughly there than in regions that didn't do this. The Netherlands, for example, was one of the worst. And so the argument is: even if collecting the information now is not a problem, it could form the basis of future overreach. How do you think about that?
>> I think, first of all, already today we're leaving a vastly bigger information trail than anybody left during those ages, right? I mean, during those ages the churches had to record it in some book. Now it's everywhere, right? So I think what that does is it raises the premium on the rule of law. It raises the premium on democratic governance.
Right? It's like, in a world where...
>> The stakes are higher now.
>> The stakes are so much higher. And look at what states that want to exercise control can do, because of how they can control the digital realm and how much information they can gather on citizens. It's given vastly new powers to the state, and I think that genie is entirely out of the bottle. So the idea that we can somehow construct a digital world where we don't leave any footprints, I don't even know what that would look like. Are you going to not allow any cameras in public? Again, I don't think people are thinking through what they're really saying if they say they want to have privacy in a fully digital world, right? The amount of government control of all technology that would be required to make that happen is completely off the charts, right?
>> But to say the genie is out of the bottle does not directly lead to, well,
let's shake all of it out of the bottle. So is there any realm of privacy left? Is there anything?
>> No, my view is just that, you know, the road to hell is paved with good intentions, right? And so I believe a lot of the people who've been advocating privacy have inadvertently been advocating for things that are actually long-term harmful. And so I'm not saying we can do away with privacy tomorrow. We can't. I'm saying directionally, though, we should be building towards a world where
>> we can survive without it.
>> Exactly. Because I don't think of privacy as a value in and of itself. It's something that we've done to help guard against certain things.
>> But I'm more interested in how you construct freedom without needing privacy. To me it's about freedom: individual freedom, freedom of communities, freedom of innovation. How can we construct that?
>> Right. And this is kind of a parallel to the technology point: you're saying this is the road we're on now, the genie's out of the bottle. So we're either going to have a society where you can reveal things about your sexual orientation, your ethnicity, and you're fine, or we're going to have a society where you're going to be lynched. That's the moral choice.
>> Or where you're going to be digitally excluded from everything, where suddenly your health insurance is denied, your bank account is frozen,
your assets are tied up, all of those things, right?
>> Well, I'm glad you don't care much about privacy, because I'm going to share with our audience a funny story I heard from a colleague about you: you were born and raised in Germany, you were on an exchange program, and you walked around the house [laughter] naked, because your parents were raised in this kind of nudist colony. So I guess two questions. One is, tell us more about that. Number two is, could it just be you and your bohemian lifestyle?
>> Well, the story is actually very funny, because I spent a year in the States as an exchange student. Before we went, they did this orientation weekend in Germany, and they talked about how you might feel alienation or culture shock, and they
>> ...clothes in the house. [laughter]
>> But no, about 30 times throughout the weekend they said, "Whatever you do, do not walk around the house naked."
>> Oh, wait. [laughter] This is a German thing?
>> Oh, yeah. It's totally a German thing. In Germany, people walk around the house naked all the time. So it's a big cultural difference, and they were very worried that we would do this. And for the first six months, I did great.
>> That's just wonderful.
>> And then, six months in, one morning I'm walking from the shower back to my room, and my host goes [gasps], and I'm like, "Oh, I'm totally naked." [laughter]
And so, you know, it's just a cultural thing. Yeah.
>> Right. [laughter] And it's interesting, because this is kind of like the Sweden idea, except instead of taxes, it's your body, right? In a world where we're all cool with it, that kind of privacy doesn't really matter, is what you're trying to say.
>> That's what I'm saying. In fact, there was an ad agency in New York, I forget the name, but for a while they posted nude pictures of each of their team members online, and they were like, "Anybody wants anything on us? It's already out there." [laughter]
>> Yeah, this might be bringing us a bit far afield, but what do you make of the fact that German culture has both this extremely strict dimension and this very open one? How do you make sense of that?
>> You know, I just think Germany has
had a romantic streak, a deeply romantic streak, for a long time. Think of Schiller and Goethe; it's been in literature, it's been in art. A lot of the German painters made deeply romantic paintings. And it's also had this kind of rationalist tradition and a deep scientific tradition. And it had these things about, you know, punctuality, for example; I showed up here 15 minutes early.
What's really interesting is that one of the strictest cultures in Europe, German culture, has all these open elements, including the Berlin sex culture. And you see the same in East Asia, where one of the strictest cultures, Japan's, also has this very open culture of sexual mores. My question for you is: what do you think is the relationship between the two? Is it hydraulic, meaning because they repress so much here, they need to go to Berghain and [laughter] go crazy for a night? Or is it actually symbiotic, in the sense that it's because we're so strict and show up 15 minutes early that we can afford not to wear clothes in the house, or something like that?
>> I don't know. That's a great question. I've not thought about this before, so I will take this one as a homework assignment. [laughter]
>> There we go. You title your book, with, I think, both fear and excitement, The World After Capital. So, tell us why.
>> It really refers to what I believe was the defining scarcity of the industrial age. When I say capital, I mean physical capital: can you build factories, roads, infrastructure? And I believe we are entering a world after capital because we have sufficient capital and
our scarcity has shifted. Now, the big theory of the book is that humanity has gone through basically two prior big shifts in defining scarcity, and that when you shift the defining scarcity, that's when you get massive transformations of how humanity lives. For 250,000 years, humans were foragers, hunter-gatherers. And then, roughly 10,000 years ago, there's a series of technological inventions. There is the realization that you can plant seeds, that you can irrigate them, that you can domesticate animals, that you can store food, all those things. And they combine to move us from the hunter-gatherer age to the agrarian age. And what is the shift in scarcity? Well,
if you are a foraging tribe, your scarcity is food. You either find enough food or you starve or migrate. In the agrarian age, the scarcity is different: it's arable land. And then, only a few hundred years ago, we have the Enlightenment. We have all sorts of scientific breakthroughs. We can make steam. We can make electricity. We can mine. We can make materials. And the scarcity is no longer land; it becomes capital. So the book basically says, look, when we make these shifts, really massive changes happen. So think about
going from the forager age to the agrarian age: we go from being migratory to being sedentary. We go from living in very flat tribal societies, a couple hundred people at most, to these very large, by comparison, and very hierarchical agrarian societies: 10,000, 40,000 people, 17 layers between the commoner and the emperor. We go from being promiscuous to being monogamous-ish. We go from animistic religions, where every tree, every animal contains a spirit, to theistic religions. So those are just extraordinary changes; we're changing what we consider to be god-given, right, in this transition.
Then fast forward to the change from the agrarian age to the industrial age. We go from living in the countryside to living in the city. We go from living in large extended families to the nuclear family or no family. We go from lots of commons to basically everything being private property, including private intellectual property. And we go from great-chain-of-being theologies, where theologies basically say, look, I'm going to tell you how to be a farmer, you're going to be the best possible farmer, but you will never be a noble person. We go from that to the Protestant work ethic, right? The harder you work, the better off you'll be. And, by the way, being rich is not a sin. We have changed these twice already. And now we're at this extraordinary moment
in time where it's super clear that we have to change them yet again. And that
these are deep foundational shifts, all driven by the extraordinary capabilities of digital technology. >> Right. >> And yet what's happening in reality is that politicians keep pretending that their job is to fix the industrial age
somehow. It's probably worth explaining very briefly what I mean by the word sufficient. In economics there is this idea of economic scarcity: anything that has a positive price is economically scarce. There are a lot of
problems with that definition, and I talk about them in the book. I have a technological definition instead: something is scarce if there's not enough of it to really sustain humanity. So
if you think of an agrarian society: if its agricultural production collapses, people literally start to starve and die, right? They cannot sustain themselves and the society collapses. And so I don't think that our society today is going to collapse because we don't have enough capital. It's going to collapse because we're not paying attention. So now we get to what the defining scarcity is today: it is attention. We're not paying enough attention to the things that are truly important. We're not paying enough attention to the climate crisis, to global warming, relative to its scale of impact on humanity. We're not paying enough attention individually to our questions of meaning and purpose in life, which is why a lot
of people are struggling with these questions. So we live in an age where attention is the defining scarcity. And when I talk about capital, there are good examples of why I believe we have sufficient capital. If
you look at China, for example: China can stamp entire new cities onto the ground in no time. China has built a country-criss-crossing railroad system in basically two decades. Here in the US, X.AI built an entire data center in the space of less than a year. So capital is really not the thing that's holding us back. What's holding us back is that our attention is very, very misallocated. >> Right. >> Profoundly misallocated.
>> Right. Well, you are a partner at USV, and USV is one of the early investors in Twitter. >> Yes. >> And I would say, I don't know if you would agree, that a lot of social media is to blame for this misallocation of attention. >> Right. It's definitely a contributor.
>> How do you guys think about that? Do you kind of regret that in some sense, or... >> Yeah, I mean, the back of my phone carries a sticker that reads "scrolling kills." So look,
first of all, I think with these systems, when they first emerge, it's very hard to tell where things are going to go, right? So when we originally made the investment in what was then called Twitter, people were like, "You guys are such idiots. Who wants to hear what somebody else had for lunch?" That was literally the criticism. And then fast forward a couple decades and people are like, "You guys are idiots. You basically killed American democracy." And I'm like, okay... >> Which one is it? [laughter] >> But in some ways these systems are both beautiful and dangerous. And a big part of what my book argues for is: what are the things we need to get the best out of
these systems and contain the worst? I don't think the answer is to do away with Twitter, or ban Twitter or ban TikTok or ban YouTube. These things are full of fabulous entertainment. They're also full of humor, of human connection, of insights. So it's not that I want somebody to get rid of X or ban X or ban TikTok. I just want
to put people back in charge of their attention. >> Right. >> That's the big thrust of the book: how can we counter this huge misallocation of attention? What are the steps we need to take so we can route attention back to the things that are actually important? >> Right. >> And that's not about doing away with X.
It's just limiting how much time people spend on X or TikTok. >> Right. And what do those kinds of methods entail? Can you give us a general overview?
>> Yeah. So in the book I talk about three freedoms, and they are economic freedom, informational freedom, and psychological freedom. Economic
freedom is basically about universal basic income, and that's about freeing attention up out of jobs. A lot of people have jobs that they need in order to survive, but these are jobs that either should be higher paid or should be done by a robot, or both. So UBI is a way of freeing up a lot of attention that's currently caught in what I call the job loop.
Then informational freedom is about reasserting our control over computers. That's really fundamentally about having a right to be represented by a bot that works for me, that represents me vis-à-vis X and TikTok and YouTube, and that has my best interest at heart. And then psychological
freedom really is about each one of us having some kind of mindfulness practice that allows us to live in this information-supersaturated world. >> Right. >> So those are the three freedoms, and I think if we build out these freedoms,
then we can free up massive amounts of human attention. >> Right. Although, given how you set up the problem, as this cataclysmic third turning in human history, I wonder if the prescriptions fall a bit short of meeting the challenge, in the following sense. Think about
the examples you gave about each of the transitions. Not just the political economy changed, right, from private ownership to communal to private; the theological system fundamentally changed too, from animistic to great chain of being to, right now, somewhat atheistic. And now you're
telling me, you know, go meditate three hours a day and get a thousand dollars from the government, and we'll be hunky-dory. It doesn't seem to match the kind of challenge that you set up.
>> I hear you. And at the same time, I actually think that these things are deeply, profoundly disruptive. All of them. Let's start with UBI. We have constructed a system in which, if you don't work, you're considered a loser. And the idea that
money would be given to people without them working runs completely counter to a lot of the current culture. It runs completely against the Protestant work ethic. It cannot happen without a huge shift in the cultural narrative. And people will say things like, "Well, work is a source of purpose." I'm like, have you worked at a McDonald's? Because I'm pretty sure most people who work
there consider it just a source of income and not a source of meaning in their life. Which, by the way, is very different from working in a small restaurant that makes nice dishes. And one of the things that's funny when you go to Europe, because Europe is a bit of a museum in some regards and has these very extensive social safety nets, is that in a place like Paris you find small, subscale restaurants that could never sustain themselves in the US. But they can live there because everybody's kind of taken care of to some degree, and it's kind of
wonderful actually. So that's very disruptive. This idea of having a right to be represented by an agent is deeply disruptive to the existing structure of information technology. Today you are being programmed by your phone. You are not programming your phone; your phone is programming you. This is a complete inversion, and it would require a lot of changes in the legal system.
Right now in the US, if I hacked something like this together for myself, I would be violating three different laws, two of which carry mandatory prison sentences. So the entire system at the moment is rigged towards central control and away from individual control of computation.
So again, this informational freedom thing is deeply, deeply disruptive. And then meditation, or mindfulness practice. For me, meditation does not work. I've not been able to make it work; I will try again someday. But I do conscious breathing exercises, and I find that works really well for me.
I do think it's really profound, though, because so much of what these moments of change call for is for us to be able to transcend our immediate emotional reactions and not be emotionally hijacked. So much of what we're seeing play out in tribal politics today is this descent into tribalism that is emotionally powered. That's people's limbic system taking them into this tribal mode and really limiting their ability to access the rational part of their brain. >> So let me try to push you in two directions.
One from the Marxist side and one from the capitalist side, and I'll begin with the Marxist side, which is to say: look, this is all, as in the critique I surfaced before, surface-level ointment. You have cancer and now you're just rubbing sunscreen on your skin. Insofar as you keep the competitive market as well as private property in place, those are the deeper issues that bubble everything up. And I'll give you an example. You're
totally right to say, look, there are better ways we can design social media websites. In fact, I interviewed Reid Hoffman and we discussed how LinkedIn is a lot better than X because they were very conscious in
their design who they got money from. Substack or Patreon is another example of this. But my push back is but in a competitive market the platform that is
of this. But my push back is but in a competitive market the platform that is able to tap into the hate right the lies deceit and get people stuck on there is going to perpetuate right again and that's just one example so so I'll begin
with that what do you say to the person to say no no no we need to something much more fundamental because this was by the way the Marxist idea that capitalism will take us to some kind of technological maturity and after that point
this doesn't matter. You seem to agree with the first half of the sentence and then pause at the second half. >> Yeah. Well, the first point I would make is that in the transition from the agrarian age to the industrial age, we didn't do away
with agriculture, right? We still need to feed ourselves, after all. What happened instead is that the amount of human attention being absorbed by agriculture shrank a lot. >> But our system of ownership also changed, from serfdom to private property and the individual, right? >> I don't think that matters nearly as much as how much human attention is trapped inside the system. I think that is the dominant thing, if you believe like I do that attention is the scarce thing. >> Right. >> So in the agrarian age, roughly 80% of the population, give or take, was engaged in agriculture, so that 20%
could engage in things like philosophy [laughter] and art and early science and warfare, etc. Today in modern societies it's 5% of the population, or maybe less, that's engaged in agriculture. So the
way I look at where we are today is that we have all this attention trapped in either explicitly economically incentivized systems like the labor market, or adversarial systems like X and TikTok and so forth. They're adversarial attention systems because their financial model is to suck up as much of your attention as possible and then resell it. And I
believe that the three changes I have outlined are enough to shrink the amount of attention stuck in this system. And so if we make this transition the right way, what I think is the right way, we don't need to do away with private property. We don't need to do away with markets. We can just shrink their importance, just like we've shrunk the importance, in terms of attention, of agriculture. I think markets are amazing for places where they can work.
>> I also believe that today we don't have a market in social media sites, because we cannot aggregate on our own behalf. >> Right. >> I cannot build a system today that says, "Hey Albert, I've got something good from TikTok for you. I've got something good from X. Do you want to reply to this? I'll take care of everything else, and by the way, you don't need to spend any other time or attention in any of these systems." That cannot be built. In the world I envision, it can be built, and we will have it, and it will completely diminish the importance and the number
of ways that these systems can suck attention out of us. >> Right. So another way to frame what you're saying is: Jonathan, you thought I was going in the socialist, Marxist direction. I'm trying to go the opposite way. At least when it comes to social media, I'm trying to engender competition.
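[Editor's note: the "bot that represents me" idea Wenger describes can be sketched as a tiny filter an agent might run on the user's behalf. This is a purely illustrative sketch; the feed names, items, and keyword-matching rule are all hypothetical, not any real platform API.]

```python
# Minimal sketch of a personal attention agent: instead of the user
# scrolling each platform, a bot the user controls pulls items and
# forwards only what matches interests the user declared.
# All data here is hypothetical.

def my_agent(feeds, interests):
    """Return (platform, item) pairs matching the user's declared interests."""
    picked = []
    for platform, items in feeds.items():
        for item in items:
            # Keep an item only if it mentions a declared interest.
            if any(topic in item.lower() for topic in interests):
                picked.append((platform, item))
    return picked

feeds = {
    "x": ["Outrage thread of the day", "New essay on universal basic income"],
    "tiktok": ["Dance challenge #4121", "Clip: explaining carbon pricing"],
}

print(my_agent(feeds, interests=("basic income", "carbon")))
```

The point of the sketch is the inversion of control: the selection rule lives with the user, not with the platform's engagement-maximizing ranking.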
>> Yes, 100%. And the competition gets engendered by moving the locus of control back to the individual and away from the centralized player. I mean, it's truly extraordinary. Think about this: almost all of us, globally, walk around with a supercomputer in our pocket. >> Right. >> And that supercomputer can talk to every other supercomputer. And you pay for it, and you pay for the power to charge it, and you pay for the data plan, and then the second you hit an app icon, that app completely takes over your phone. It's an insane state of the world. It's completely insane, once you think: I have a supercomputer and I cannot use my supercomputer to program this world for me. Instead, the world is programming me through a few centralized systems. You know, people worry about brain-computer
interfaces, right? People are like, oh, The Matrix, you have a plug in there. But you already have a plug: it's your eyes. You're already plugged into the matrix through your eyes, >> right? >> The feed. That's why "scrolling kills." The feed is the matrix; we're already plugged into it. >> Right. Now, let me give you the
capitalist pushback, which is to say: look, you say markets are bad at allocating attention, and I read a lot of your talks, and the three big things are climate change, asteroids and the existential risk in that domain, as well as pandemic and disease prevention. And yet SpaceX, purely market-driven, has opened up a kind of space exploration and built the technologies for that kind of existential risk. With Operation Warp Speed, it was for-profit corporations that created the vaccine in record time. And USV has a climate fund. >> Sure.
>> And so the capitalist critique is to say we don't need to do any of these three things; the market will kind of figure it out. >> Yeah. So again, I love markets, and I think markets are fantastic at accumulating physical capital. They're incredible at funding product experimentation and innovation. But
when it comes to attention, the reason markets don't work is that markets need prices. And in order for prices to form, you need supply and demand.
So let's take global warming. Global warming has been playing itself out for many, many years; people were ringing the alarm bells. But if you think about the people most impacted by it, they either haven't been born, or, if they have been born, they still don't have as big a political voice as the people who are significantly older. So the supply and demand for allocating attention to the problem of global warming is insufficient. And
so what I'm arguing is that capital allocation is often downstream from attention allocation. Because if there were enough attention on it, we could probably get ourselves to a global consensus: greenhouse gases are bad, we need to limit them, we need to draw down the existing greenhouse gases. Once we get to that global consensus, we can use markets to do a lot of the work.
So there is a solution: you set a price for carbon in the atmosphere. If we had enough attention on the problem, if people understood it well enough, if there were enough of a political movement around it, we could set a global price for carbon, and that would unleash a huge amount of entrepreneurial activity. So to me the question is: because we
don't have prices for a lot of important things. And let me give you a deeply personal one: how much attention you as a person are allocating to your purpose or meaning in life. There's no market for that. There's no supply and demand. It's literally just you. And so as a result, most people vastly underallocate attention to this, because their attention instead gets sucked into these explicitly incentivized systems. It gets sucked into the labor market. It gets sucked into consumption through advertising. It gets sucked into social media systems, etc. Historically, we had a solution for that: religion. Much of religion is about saying, here is your purpose. It's a god-given purpose. I'm handing this purpose to you; you don't have to think about it. In the agrarian age, it's: till the land, make the land fertile. God says you've got to till the land, so you'd better till the land. But as we got, as you mentioned earlier, less and less religious, we had to do the work ourselves. But there wasn't this attention allocated to doing the work, so as a result people didn't do the work, and then they have massive midlife crises. So where I'm going with this is: capitalism, markets, cannot solve this problem. They cannot solve it because there are no prices for these things, and there cannot be prices. It's not a question of missing markets.
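[Editor's note: the carbon-pricing mechanism Wenger describes above — set a price, then let markets redirect activity — can be illustrated with a toy cost comparison. All figures below are made up purely for illustration; they are not real energy costs or emission factors.]

```python
# Toy illustration of how a carbon price changes which power source a
# cost-minimizing market picks. Every number here is hypothetical.

def total_cost(cost_per_mwh, tons_co2_per_mwh, carbon_price):
    """Effective cost per MWh once emissions carry a price per ton."""
    return cost_per_mwh + tons_co2_per_mwh * carbon_price

# Hypothetical plants: (base cost in $/MWh, tons of CO2 emitted per MWh)
plants = {"coal": (40, 1.0), "solar": (50, 0.0)}

for carbon_price in (0, 30):
    cheapest = min(
        plants,
        key=lambda p: total_cost(*plants[p], carbon_price),
    )
    print(f"carbon price ${carbon_price}/ton -> market picks {cheapest}")
```

With these made-up numbers, coal is cheapest at a zero carbon price, and a $30/ton price flips the market's choice to solar: the political consensus sets one number, and entrepreneurial activity reorganizes around it.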
>> It's an epistemic problem, yeah. >> Yes. >> On the topic of European culture: let
me ask you a question, since you've lived and worked there very extensively, and it has to do with what you talked about with UBI: lessening that status associated with your job, enabling those cute little artisanal cafés, which I like as much as you do. You, as well as I, chose to come here, into the gladiatorial realm that is America, for its energy. And all of these prescriptions you're giving, the UBI, the meditation, I would say are very compassionate, very action-oriented. But maybe you'll agree the best entrepreneurs that I meet have a darkness to them, a kind of killer instinct, and there's a reason your best investments are not in France or, god forbid, Italy, right? [laughter] What do you make of that? >> I think this is where Aristotle was spot on: there are failure modes in both directions. I think that Europe,
the reason I wound up living in the US, was that it had entered a period of stasis. There was a sense that big companies were dominant, markets were foreclosed, and startups didn't matter. And I think you can have a failure mode in the other direction, where you think that everything is a startup problem, government should get out of the way, there's no role for government. I just think really strong societies are strong because they have both capable government and entrepreneurial activity.
>> Right. Although one of my favorite essays, if we can call it that, is Rousseau's second discourse, and there he makes a very good, very subtle case that the best parts of society are often built by the worst parts of human nature. And right now I'm reading about the psychology of the Renaissance artists, and it was this kind of Machiavellian one-upmanship, you know, the Medici. So even if I grant you that your prescriptions, if implemented, will make the American character less, let's call it, crazy or mad, because they're so worried about not getting health care, or about their status when they go to a dinner party and are asked what they do, couldn't that take off some of the edge
that has made the US so exciting and dynamic for you? >> I don't think so at all. Imagine for a moment, as a thought experiment, a US with a government that is as capable as the US government once was. The US built the interstate highway system; the US had the space program. We've just lost the capability of government to do things. But imagine we had that capability.
Imagine we could build, for instance, a super-high-voltage DC distribution grid in the US the way China has done. That would be phenomenal. We would have the cheapest electricity on Earth. We have all of freaking Arizona and other places where the sun always shines. We could have that energy all the way out on the East Coast if we had a high-voltage DC distribution grid. So my point is, this idea that there's some kind of real trade-off, where you simply cannot have capable government because you also have entrepreneurs, I think that's just false. >> Yeah, sorry, that wasn't the trade-off I was insinuating.
It was much more of a psychological trade-off. One of my favorite quotes, from the movie The Third Man, is: Italy had thirty years of bloodshed and turmoil and violence, talking about the Renaissance, and it produced Donatello and Da Vinci and all the great artists. And Switzerland had love and peace and hundreds of years of democracy, and what did it produce? The cuckoo clock. And let's talk about the entrepreneurs that you've funded. The most successful cases, isn't there some
kind of dark energy about them, that they almost attribute too much weight to their work? Don't you see that, or... >> No. I mean, it takes a certain kind of obsession, but I don't think of obsession as dark, necessarily. One of my favorite fields of obsession is math, and yes, you could say it's dark when a mathematician is totally obsessed. But I think of these types of obsessions as kind of beautiful, like the obsessions of entrepreneurs who want to build something. There is a dangerous element in it, because it can go too far. It can lead to self-destruction. It can lead to fraud and lying; I mean, the Theranos-type story, or the FTXs. But fundamentally, I don't think there's anything wrong with obsession.
>> Right. Okay, this is helpful; maybe I can zero in specifically on what I think it is. To me, at least, a lot of the successful people I meet have a kind of need to prove themselves, a tendency to tie their worth to their work or product. It could be artistic, it could be entrepreneurial, it could be academic. And I grew up partially in Canada and in the US, and Canada is kind of like the prescriptions you prescribed, where people are just given an inherent kind of worth as a human; you get respect just because you're a human, nothing more. But because of that, a lot of that drive, that hunger, that "I need to prove myself, if I don't build this company, if I don't solve this, I am nothing"... Is that... >> I don't really subscribe to that. >> You don't see that? >> No, I don't subscribe to that. I think the reason that, for a long time, a lot of entrepreneurs wound up in the US
was because the US had the most open markets, the most advanced venture capital system, the easiest access to people who were experienced with startups, etc. I don't think it's a unique "only this type of culture supports this thing." And even
over the time period that I've been in venture and in startups, the amount of entrepreneurial activity in, let's say, Germany has gone way, way up. Looking today at Europe, there are many, many interesting startups. In Asia, China, India, there's tons of entrepreneurial activity, and a lot of people are no longer coming to the US, or they go to the US for graduate school and then return to their own country for a startup. So I think there are some people who have more drive to want to do things, and some people who are happier sitting back. >> And we should accommodate both.
>> Yeah, I think so, 100%. >> To summarize for our audience where we are in the conversation: we talked about this kind of macro history, this moment we're at, and we talked a bit about the practical things you think need to happen at the policy level. Now I want to get into the good stuff, which is the philosophy, because you said in our conversation that we need to find some kind of objective basis to ground this new
humanism, and it can't just be, well, your subjectivity versus mine. So let me give you a quote from your book: "For not only is the power of knowledge a source of optimism, its very existence provides the basis for humanism. By humanism, I mean a system of values that centers on human agency and responsibility rather than on the divine or the supernatural, and that embraces the process of critical inquiry as a central enabler of progress."
>> Decent, actually. >> It's not too bad. [laughter] Give us your definition of knowledge and how it's supposed to ground us in this new age. >> Yeah. So I have a very broad definition
of knowledge. Basically, to me, knowledge is the set of things that we make external, meaning we commit them to paper or we record them, and we maintain them over time. We have these beautiful books behind us. Some of
them are more artistic; they move us. Some of them are more scientific; they help us. And we've decided that they're worth maintaining. That, to me, is knowledge. And so people who are born today have access to this huge collective inheritance of art and science that we have created
together as a species. And it is the source both of a lot of our enjoyment in the world, through art, and of our power in the world. I mean, we're not the fastest animal. We're not the strongest animal. We can't fly. We can't swim underwater, etc. And yet we can do all of those things by virtue of knowledge, which then gets translated into technology. >> Right. One concern that I have: I'm going to give you a few
critiques. Again, you're not going to be let off easy on this one either. >> Yeah, this is the best part. I mean, challenging them is the most important thing. This is my point about critical discourse: that's how knowledge improves, by virtue of being challenged. >> Well, just like privacy. I'm glad you're open to it. Okay, so here's the first critique, which is the way you define knowledge, especially what it is not, right? This
rejection of authority from the supernatural or divine removes a lot of the pluralism that I know you really care about, because so much of the world's traditions are grounded in the supernatural and God. And so the worst way to read your sentence is as a kind of totalitarian atheism. >> Sure.
>> Right. It can't be from God. Like, we're all that we have.
>> No, I consider myself an agnostic, in the sense that if you want to believe in God, you're totally welcome to. And I reserve for myself some small probability that there is a god, or gods in some form, out there. In my own thinking, there are probably civilizations that are more advanced than ours and that would appear to us like gods. So I think the only place where it starts to interfere is if you say, well, my religion says that you
can't do X, you can't do this kind of research, this is against my religion. Or: you can't live a certain way, even though it doesn't interfere with me, but my god says so. >> Wait, but that's most religions, because for most religions the
gods are universal, right? >> Yeah, but most religions go through a period of moderation, right? They start out very fervent, and then they go through this period where they kind of draw back and become a little more live-and-let-live. >> Okay, but the world you're painting is one that is not hospitable to the Orthodox Muslims or the Christians. Let's begin there.
>> No, I mean, I don't want to be tolerant of intolerance.
>> I think that's a hard line. >> I see. Here's the second concern I have: you said that knowledge is a form of power, yet it's not clear, and our conversation has surfaced this already, that it's good to get more power without getting more wisdom. And so there are a lot of traditions that have actually warned against knowledge, right? I can think of Job in some sense, Oedipus, but certainly Genesis. And you brought this up already.
>> And yeah, being more powerful without being more wise, is that really a good thing? >> No, it's not. This is why moral progress is such an imperative. I fundamentally agree: we will destroy our species, and maybe many other species along with it, if we push forward on technological progress without making moral progress. Moral progress is of paramount importance. I just happen to believe that most religions are not actually capable of moral progress, because they're kind of frozen. They have some kind of biblical or sacred text, and that's frozen, and they're freezing their interpretation of it. That's also why they tend to be very anti-innovation. I think that's actually a problem, because they are not just inhibiting technological progress; I think they're also inhibiting moral progress. >> Right. And knowledge for you
encompasses both moral wisdom as well as...
>> Right. I see. 100%.
>> Because when I listened to, or rather read, your book, it called to mind how a scientist collects facts about the world. And I think Nietzsche, your fellow countryman, gave one of the most compelling critiques of this in his Untimely Meditations, where he says the Germans at the time knew a lot about cultures, but because of that, they weren't a culture themselves. They were almost burdened by it: oh my god, look at the Jews and the Muslims and the Christians and the Buddhists, how are we ever going to choose? And you can kind of see how an excess of knowledge creates a kind of nihilism for modernity.
>> Yes.
>> How would you respond to that? For example, the Greeks, as I'm sure you and I both agree, were abundant in culture even though they were really circumscribed in their knowledge.
>> Right. Well, but I mean the
Greeks, I think, are the perfect example of a huge amount of progress, I would argue both technological and moral progress. This was a moment when science really moved forward. David Deutsch always likes to joke that if only the Athenian renaissance had continued, we would all be immortal today. And also systems progress in terms of governance, right? They had features of democracy that in some cases we've lost since then and not rebuilt. So I tend to think this type of progress... we have episodes in history where we can see that both technological and moral progress are in fact possible. And then, as I alluded to earlier, coming out of the
enlightenment and coming out of humanism, I think we had a very strong belief in pure technological progress and in human domination over nature. And we thought it was easy, and we thought systems weren't complex. We were like, of course it's easy: we're just going to make nitrogen and put it in fertilizer, problem solved; we're going to burn these fossil fuels and have energy, problem solved. We had this, I think, overly simplistic notion of how powerful knowledge was, and we were not investing in moral progress. And that's why we wound up with the incredible atrocities of the Holocaust and World War II. These are just horrendous; tens of millions of people killed. And so I do think, though, we also have episodes in human history where we see this
distinct moral progress coming along. And that's what we need to revive. And by the way, I think in this moment more than ever we need to revive this, because we are really regressing, right? And we're regressing rapidly, which is one of the reasons why I'm short-term quite pessimistic.
>> Right. Here's my final critique about using knowledge as the fundamental basis for this next stage. So far, humans have been the only creatures able to wield knowledge in the way you describe, knowledge that improves and is stored in an external medium. But now AI is very much starting to do that, even doing it better already at the time we're filming this, in 2025. And because of that, in your writing, you've had to force yourself to call AIs humans, whether neohumans or transhumans. And that just didn't sit very well with me. Almost as a consequence of grounding the essence of humanity in knowledge, you're now forced to call AIs humans. Does that make sense?
>> That totally makes sense. I do believe this is the threshold we're at, right? I think we are at the threshold of creating technological entities that will have access to knowledge, and because we are unwilling to also invest in moral progress, or lack the attention for it, I guess, in my framework, we run a very high risk that these entities will not have any grounding in morality and will in fact contribute significantly to the further destabilization of humans. The reason I'm calling them humans is that I don't in fact think there is anything else we can easily point to that makes humans distinctly human, you know.
>> Right. But why do we need to find
something to point to? And, you know, I think about that funny story where, I think it was Plato or Aristotle, someone says man is a featherless biped, and then Diogenes the Cynic, always the jokester, plucks a chicken clean of its feathers: behold, a man! [laughter] And then, again I think it was Plato, he added: a featherless biped with flat toenails. [laughter] Throughout history there has been this intellectual temptation of trying to define, oh, I found the essence of man. What is behind that impulse? Why can't we just say, look, like a platypus, we're different in these ways?
>> Yeah.
>> Well, no, because we live in the Anthropocene, right? So we are the ones forming the world, and we need to take responsibility for that, and we're not. I think this is, again, this need for moral progress: we are exercising our great technological powers and we are not following the great wisdom of Spider-Man [snorts], that with great power comes great responsibility. I mean, it should be quite obvious that we're responsible for the whales and not the whales for us, and yet we don't act that way. And this is a problem, because let's say we're not going to call artificial intelligences humans. Fine. But what are they going to learn from us and our moral behavior today?
>> Right.
>> They're going to learn that it's totally fine to pilfer, suppress, and not take any responsibility if they need more space for their AI babies. Fine if humans get wiped out in the process. Like, who cares, right? So I think we really only have
two options. There's one option where we accept that knowledge means power and means responsibility, and where we attempt to instill that fundamental principle into these new things we're creating. Or we're likely to find ourselves, over time, in a world where we are increasingly subservient in various forms to these things, which will have more knowledge than we do.
>> I see. So here's my steelman of your position, which is: the desire to call them humans is less to say, well, now they need to be able to vote, we need to take their rights into consideration. It's more a reminder that they are our children in some way, in the sense that they will be learning from us, and so we have a responsibility to pass those values on. And maybe this is a good segue for you to talk about the Values Lab, which you've been a supporter of, with my old professor Kao, actually. So tell us about that: what are you trying to do, and how is it different from other approaches to alignment?
>> So the Values Lab is about growing deeper connections between philosophers and practitioners of AI, as is the Cosmos Institute, and I'm really thrilled that there are multiple such initiatives in the world. Because I do think this moment calls really deeply for, as I said, moral progress, and I think that is basically the domain of philosophy.
>> Thank god I was born into the one century in which philosophy is useful. [laughter]
>> And Katya is an Aristotle scholar; she's a metaethicist. She's also somebody who's not afraid of technology, not afraid of math, and she has found many ways of bringing together what I think is a really wonderful group of people around her as part of the Values Lab. So they're looking at a variety of different questions: what values seem to exist in these systems today? How stable or brittle are those values? And there seems to be a lot of evidence that they're quite brittle, that by framing the context, or making some suggestion, or making some small change to the system, its values fall over very dramatically. I believe this is a crucial challenge of our time, and the window we have to get this right is quite small. So to me, working on values at this moment is a hugely important question in the world.
>> I see. Let me give you another quote from your book; I think it'll also sound quite fine: "Advancing machine intelligence is particularly intriguing because it could help produce more knowledge faster, thus potentially helping to reduce the scarcity of attention." My question for you is: is this the solution? Might this be it? In the same way that capitalism and the markets have made physical capital sufficient, now we can seemingly scale up agents that can focus on things like the climate crisis or the mental health crisis, once they get to a human level. So there might be a "problem solved."
>> Well, because attention is no longer scarce, right? Can you define attention for the audience?
>> Attention is to time what velocity is to speed.
>> Right. And so there's a directionality. It's how much...
>> It's intentional.
>> Yeah. It's time plus direction, the way velocity is speed plus direction. And that's the promise of AI agents, right? So we can scale this infinitely now. No? Problem solved.
>> Well, it will definitely help alleviate the attention problem. But governance today is being done by humans. And so if human attention is focused on the wrong things, then our governance will be focused on the wrong things, and lots of other things are downstream from governance and from culture. And then, bringing it back to the immediately prior conversation: if we give these machines free rein and let their attention roam, and there isn't some set of values baked into that, that also doesn't feel like a particularly promising direction.
>> Right. And that's the philosophy, to your speed-versus-velocity point: you need to point them in the right direction.
>> Exactly.
Exactly. I mean, they could be very potent but just be going off totally in the wrong direction. Just to give an example: I gave the homegrown virus as an example, and of course we can also use these intelligences to build biodefenses and better antibodies and faster antibody production and so forth. So this is a question of directionality, and where does this directionality come from?
>> Right. But I guess what I'm trying to highlight, and I think this speaks to how fast history is accelerating, is that we're already seeing a way in which attention scarcity can be solved,
even outside of the three prescriptions that you made. You see what I'm trying to say?
>> Some easing of the attention problem can definitely happen. I think this is the great promise of this technology. But the easing of your personal attention problem...
>> I still have to figure out my meaning.
>> Right. That's not going to get solved. And society deciding that it needs to allocate a lot of the productive capacity we have to solving a specific problem, unless we give the governance to the machines,
that's still not going to happen.
>> Got it. I see. What is the scarcity after the knowledge age, after attention? Or is that...
>> I don't know. I don't know. I think I'm entirely focused on this present transition. I think this transition is really profound. It's really potentially incredibly exciting.
We're doing it all wrong. We seem to have learned next to nothing from very recent history. And so my personal approach to life is kind of a barbell approach. One side of it is: really have a good time, enjoy, and also try to do things locally to help with community resilience, family resilience, etc., in the face of what I think will be increasing turmoil. And then the other end of the barbell: put ideas out there, ideas that I really believe in, but that I believe are going to take time to be more widely known and more widely accepted. I just don't think the middle exists, the middle of trying to bend politics right now by force somehow. That is your earlier image of paddling: you're paddling right now, and you can really only change the stream quite a bit ahead by throwing ideas out there and watching them grow.
>> Right. It's interesting. While researching and preparing for this interview, I asked one of your colleagues what your greatest weakness was. And he said it's self-proclaimed: you yourself have said your greatest weakness is that you're too early.
>> Yes.
>> So you invested in technologies that would have been great if this...
>> Exactly. Yeah. And I have made complete peace with that.
>> Interesting.
>> Well, tell us why you think you're always too early. Is it because you're such an optimist, or...
>> No, it's just because I have a perception of where things are going to go that's quite...
>> Cassandra. Your Cassandra sense.
>> Well, and you know, Cassandra was right, as we all know.
>> Right. [laughter] But no one listened to her. That's the point.
>> But I do also believe in the power of ideas over time. I think this is
where I take some inspiration from Buddhist thinking: if I am too attached to my ideas having an impact right here and now, I will be insanely frustrated. If I make some peace with the idea that my ideas might take a long time to come to fruition, that doesn't take away from my drive to get my ideas out there; it just makes...
>> Right.
>> ...the challenges that come along the way slightly less painful. It doesn't make them less. I mean, I'll be like, "Oh my god, this seems obvious. Why are we not doing this?" And yes, it does seem obvious. When I'm too early in investing, it's because it does seem obvious that this should be needed. And then sometimes it takes 10 years. Of course, 10 years is an infinity in the life of a startup. So the internal joke at USV is: if Albert did a deal 10 years ago, we should probably be doing that deal right now. [laughter]
>> I see. Well, that hasn't stopped
you from having a somewhat decent investment career. >> It hasn't stopped me, thankfully. Yeah.
>> Yeah. Okay. So you have a self-recognition that this entire book you wrote is, in your own phrase, kind of too early. But beyond that, you felt the need to flag an idea that was too early even for this book, which is the post-nation-state world: that you believe in the distant future we would live without nation states. Why?
>> Well, so I grew up outside of Nuremberg, in an area that's technically part of Bavaria, but if you come from there, you consider yourself Franconian. And if you look at a map of Franconia in the 1400s, it's this patchwork of tiny principalities,
>> right?
>> And they each spoke a dialect the others did not understand. If you traveled 10 kilometers, people did not understand you, because they didn't speak High German; only the courtiers spoke any early version of High German. They
had different currencies, they had different measurement systems. So that was an extraordinarily fragmented world. Today that's all part of Germany, which is part of the EU. You can travel throughout the Schengen area without even showing your passport. These principalities were literally guarded, and if you didn't have a letter from your local lord saying you had passage, they would just turn you away. That's how circumscribed people's lives were. And so I just think, if we look at that, we can say: oh yes, we can in fact change this. And this goes hand in hand with my belief that
moral progress is possible. We can imagine a world where certain issues, like the global atmosphere, are governed at a global level. People get scared at the idea of world government, but we only have one atmosphere. Yes, this does require global coordination, and this does require the nations coming together. And by the way, we've done it in the past: when we discovered the ozone hole and what was causing it, we all got together and banned the gases that were causing it, and that was a good thing.
>> Right.
>> So I just think these things that we feel so wedded to: if you had asked people at the time, they were so wedded to their local principality, and of course they were serving their prince, and that other prince was evil. Today that's an absurd notion, right? And today people are so bought into the US versus China versus Russia versus... I believe, if we don't kill ourselves, if we make enough moral progress, we'd say: no, I'm a human of planet Earth. And we'd have a very effective government that, at the global level, deals with truly global issues, and then a subsidiarity principle where stuff gets pushed down, and a lot of stuff can be decided super hyperlocally.
>> And so one of your answers is to remind me of the trajectory of history that we're already on. Although the pushback there is, there's a Chinese saying: all things under heaven, when they're united for too long, they split; and when they're split for too long, they come together. And so there's also a history... in venture capital, you can invest in bundling and unbundling. [laughter]
>> Yes, the Chinese were too early on that one.
>> Yeah, about 3,000 years too early. So what I also want to remind you of is that there are periods of fragmentation in history. Europe after the Roman period was one, right?
>> I do not believe in a straight arc of history, and I do believe we can regress, and we are in a period of regression.
>> Right. Oh, but what's really interesting is that you define the splitting as regression.
>> No, regression is doubling down on the nation state at a time when our biggest problems are all clearly supranational problems. Right? At a time when our biggest problems are global warming at an unprecedented pace; AI, which is a global thing; infectious disease, also clearly a global thing, as we've just seen. So I think doubling down and going back and saying it's about making America great, or it's about America versus China, as opposed to recognizing that we are facing several existential problems that need to be solved at that level, is clearly a regression.
>> I see.
>> Saying something like, hey, I think every
parent should be able to decide most of their kid's education for themselves, I don't think of that as regression. I mean, I think there are certain things every kid should learn. But how that happens, and whether we need a Department of Education that...
>> dictates, right.
>> ...dictates? No, I think a lot of that can be decentralized much more than it has been.
>> I see. Well, let's go back to the positive case of AI. You said, what other places can we direct AI? And if I understand correctly, what you're looking a lot at is AI with science, AI with education. So tell us about the optimistic case there.
>> Yeah. Gigi, my wife, and I decided that our kids would be homeschooled, at a time when that was not yet very popular. Our kids were homeschooled all the way through the end of high school, and then they went off to college.
>> How did they turn out?
>> You know, they're still pretty young, but...
>> We'll see.
>> It's... we will see. But they
all went to very good schools, had a very good time at those schools, and wound up doing well there. The point, though, is we did it by having individual tutors. And in New York City that's super easy, because New York City is full of people who do after-school tutoring, and they're great tutors; if you homeschool, you can access them during the day, when they don't have their normal students. So now we can give basically every student a personal tutor. And there's evidence that the one intervention that improves learning outcomes by two sigma is individualized tutoring, and we can now deliver that. I think it is extraordinary that AI can do this. Are we there yet today? Would I let ChatGPT loose without any supervision? No.
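(Editor's aside, not part of the conversation: the "two sigma" figure echoes Bloom's well-known tutoring result. Assuming learning outcomes are roughly normally distributed, a two-standard-deviation improvement moves the average student to about the 98th percentile of the original distribution, which a quick standard-library check illustrates.)

```python
from statistics import NormalDist

# Fraction of the original distribution that a student who improves
# by two standard deviations now outperforms, assuming outcomes are
# approximately normal (the framing behind the "2 sigma" claim).
percentile = NormalDist(mu=0.0, sigma=1.0).cdf(2.0)
print(f"{percentile:.4f}")  # 0.9772, i.e. roughly the 98th percentile
```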
But do we have the ingredients to build this? Absolutely.
>> I see. How do you think AI will transform philosophy and content creation? That's something you've spent more and more time on. And the way I like to think about this is:
if oral culture gave rise to Homer, and books gave rise to, say, Plato, and the printing press to Luther, what is this new medium going to give rise to philosophically, and how are you thinking about it in your own content? Because in some sense it's directly competing against what you're doing when you're writing these blog posts and so on, right?
>> The first thing I believe is important to talk about, before we even
get to AI, is that one really important project is to reconnect philosophy and science. Again, in Greece you can see the early green shoots of science, atomic thinking for example, and philosophy and science were deeply connected at that time. And then we've had this bifurcation where philosophy and science seem to have become disconnected. I think that's a real tragedy. And so I believe one of the really important things, if we want to make moral progress, is that the philosophy we use to achieve moral progress has to be grounded in science and has to be compatible with science. So that's important. And
so, coming to your question about AI: I just think AI can help us because it has access to all this knowledge, and so it can help us distill down certain ideas. And hopefully over time, though I don't think we're quite there yet, it will help us make new connections between ideas that we have not seen. I'd say this is one of the areas where I think we're the furthest away in AI. AI is great at condensing; it's great at summarizing. But I think humans are still, at the moment, better at making novel connections between different parts of knowledge. I don't think that will remain a distinctly human purview, though. If I think about where I want the use of AI in philosophy to go, it is to ensure that we get back to a deep grounding in and connection to science, and that we get away from this idea that either philosophy or science are these extraordinarily big things that nobody can comprehend. Because I think at the heart of them are relatively few principles, and I do believe AI will help us crystallize what those principles are.
>> I see. And in your own private
philanthropy, you've funded a lot of science, or philosophy-of-science, work: constructor theory, assembly theory, emergent causality. Tell us about your interest and engagements there.
>> Yeah. So we made two extraordinary scientific breakthroughs about 100 years ago: quantum theory, and special and general relativity. Since then, fundamental physics has really been kind of languishing. And really foundational questions remain, like: what does causality even mean? What does time mean? There are a lot of people who believe in the block universe, and in the block universe some of these concepts don't really mean anything. There are scientific theories like superdeterminism in which there's basically no room for agency; agency as a concept doesn't make any sense. And
then, you know, people have interpreted quantum theory to mean that there's no reality. So we've made certain breakthroughs, and then these breakthroughs have been interpreted in philosophy in ways that I think have been very detrimental to actual philosophical progress, very detrimental to the idea of moral progress and moral universalism. And I believe there are interesting green shoots of new scientific approaches that are reopening the door to understanding that there can be causality, that there can be agency, that those things can be valid concepts.
>> I see. So in other words, because
of how weird quantum mechanics has made the fundamental building blocks of the world seem, there's a lot of relativism in the metaphysical realm that has translated into the moral realm. And in some sense, all of these things you're funding are about establishing a metaphysical realism that you're hoping will lead to a moral realism. Is that...
>> That is a very good summary. Yes. If your interpretation of physics, for example, is the block universe, where the past and the future are all already here, it's very easy to get from there to a kind of nihilism: there is no agency, we can't actually influence history, it's all already there. And similarly, from what I consider to be misinterpretations of quantum mechanics, you can get to this idea that nothing is real without an observer, that different observers see different things, and that can inform a kind of anything-goes philosophy. And I think this is kind of where we've been for a while.
>> I see. So we talked about the kind of
macro scale, right? We talked about your grand theory of history. We talked about the practical prescriptions we need, as well as the philosophy of knowledge that's grounding all of it. And now I want to move to a more personal, biographical part of the interview. So tell us a bit about how you went from being an engineering nerd, from what I understand, to building companies, to being an investor, and eventually finding philosophy quite late in your life. Is that fair? Or...
>> No, I was always fascinated by philosophy. So, as we said earlier, I grew up in Germany. I fell in love with computers early on, when I was a teenager, and I'm super grateful my parents were very supportive of that at a time when that was highly unusual and very expensive, you know.
>> Well, it's all the money saved from not having to wear clothes, right?
>> Yeah, exactly. [laughter] It's amazing what you can do. So there was kind of a thing in my head already, and it was not very well formed. But when I went to college in the US, I went to Harvard as an
undergraduate, and I wound up studying both computer science and economics. I had some inkling already that this was going to be important. This was '87 to '90; in 1990 I wrote my senior thesis about the impact of computerized trading on stock markets, which was really just the beginning of...
>> the quant wave.
>> ...of, you know, high-frequency trading and all that. All of that mostly lay in the future. And so, you know, in school already
I loved philosophy. I had this fascination with thinking about history, economy, society, philosophy, and I had this fascination with computers. I had both of these very early on, and it was in a way a stroke of luck that I got exposed to computers, but I really took a liking to them. And then I worked in management consulting for a few years. I did that because I thought I was going to learn something about business. But it turns out that in management consulting you mostly learn about the business of management consulting.
And then I went back, because at the time I really thought I wanted to become a professor.
>> Of computer science?
>> A professor of... I really wanted to teach people about why computers were important.
>> The social consequences. I see.
>> All right. And so I went into a specialized program at MIT called the information technology program. It's situated at the business school, the Sloan School of Management, but in the program you study one discipline and also computer science. So there were people in the program studying psychology and computer science, sociology and computer science; I studied economics and computer science. You really do need to rigorously understand computers in this program, but you also need to have this one other discipline, and the idea is that you use the other discipline to really think about the impact of computers. One of my thesis advisers was Erik Brynjolfsson, who did a huge amount of work on what was then known as the productivity paradox: people talked about all this productivity gain from computers, but it didn't seem to show up in the macro aggregate statistics. He showed very convincingly that if you look at firm-level data, you can really see it.
Um, another one of my thesis advisers was Bengt Holmström, who later won the economics Nobel. Um, he did a lot of work on kind of incentive structures.
So, a lot of my work was like how do computers change incentive structures by making more information available to more people. Um, and so, um, while I was
getting this PhD, though, all of a sudden the web was happening. And I just remember distinctly being in a lab at MIT, supposedly doing my stats homework, and somebody next to me was clicking and laughing and clicking and laughing. And eventually I'm [snorts] like, "What are you doing?" And they're like, "I'm surfing the web." And I'm like, "What is that?" And they're like, "Well, you just have this thing on your workstation called Mosaic, and it lets you browse the web." And so I typed in mosaic and there I was, browsing the web. Um, and then I had this sort of realization. I was like, "Oh my god, I'm writing about all this, but it's also happening. So why am I not actually
participating in this?" And so in late '96, um, I started a company with two MIT professors. Um, and that company worked on a problem that frankly still isn't solved. [laughter]
Speaking of being too early, um, and that was about taking data from many different data sources and bringing them to the point of care. >> Um,
>> Kind of like Palantir or something like that? [laughter] >> Similar, yeah.
>> Um, so, really here, though, in the spirit of enabling the doctor to make better decisions about your care. Um, and I finished the dissertation too, which was probably not great for the dissertation, probably not great for the startup. What I learned out of all this was that startups are amazing, but I'm not actually a good operator. I really like to think about lots of different things. And so I thought, oh, maybe I can be a good investor. And from that insight to actually winding up at USV, it was a 10-year journey with many false starts. >> I see. Uh, and how have you kept the
philosophy flame kind of alive throughout this? >> Well, by continuing to just read a lot.
Um, I really think books are one of the great inventions of humanity, and I love reading a good book. So, you know, I don't watch television. I
don't follow professional sports. Those two things alone free up a huge amount of attention.
>> Right. It's like the no-clothes savings: now you can buy the Mac instead of the TV.
>> Exactly. There you go. [laughter] >> And um, have you noticed whether philosophy gives you kind of superpowers when you do your job? Or, maybe another interesting question to ask: is it a detriment in any way? >> I think first-principles approaches are very powerful, and the reason why USV has been successful over two decades is because we don't get caught up in waves. Um, we don't say we have to have an investment of this type because everybody else is investing in, let's say, scooters. If we don't understand the sustainable unit economics of scooters, we're not going to invest in scooters. Um, so sometimes that means we miss a wave early on, but over time we usually find what actually works in this particular um world.
And so I believe that philosophy helps with first principles thinking. I also
do happen to believe that a lot of philosophy for quite some time has been ridiculously obtuse and dense and written in ways that >> Your countrymen are most to blame. >> Oh, horribly so. Horrendously so. I mean, I remember in college reading Kant in English because I'm like, this seems to make slightly more sense than [laughter] the German. >> I've heard that Kant scholars actually read the English translation because it's better, it's more >> Yeah, it's more accessible [laughter] than in my native tongue.
So I do believe that um philosophy is really powerful, but it's powerful when it is accessible, when it's a mode of thinking, a mode of asking questions and trying to peel back what's deeper, what's, you know, a driving force, as opposed to what's at the surface. >> I see. Um, and from what I understand about USV, it's also philosophical in the sense that you guys develop theses that you then go out and hunt for, which is a lot more proactive than a lot of VC firms. Is that fair? >> Yeah. Yeah. And, you know, I
think it doesn't always work. Sometimes, you know, we have the right thesis and back the wrong company. Um, but it works often enough to produce good returns.
>> And, uh, you guys are particularly great at understanding network effects, especially consumer dynamics. >> I think we were early to that. Um, it's
become completely common knowledge today. And in investing you can only produce outsized returns if you understand something before everybody else understands it. So the idea that today you can produce outsized returns
um by knowing about network effects is a lot harder, because anything that remotely reeks of having network effects just gets bid up so rapidly. So if you're already in it, that's great, but entering later, which was still possible, you know, in the early days when people hadn't really fully grokked network effects, that's gone. So I think, you know, in the last few years
we've done quite a few things that looked really weird to other people who were continuing to do only software. We started to do some hardware. Um so for instance we were quite early to doing energy. We did um geothermal energy. We
did nuclear energy. Those things are turning out to be monstrously important now. Um, but they were again thesis-based and thesis-driven at a time when it was not yet, you know, sort of common knowledge that energy was going to be so important.
>> Right. Um, and hopefully our audience can see how this kind of grand theorizing about history can be quite practical. That's not the reason you do it, but it can be quite practical in informing you about the macro trends that you need to, you know, steer clear of or double down on.
>> Um, there is a side to USV that I think is quite tragic. Um, and it has to do both with what we already talked about with X, Twitter, but also with crypto. Because if I read you guys correctly, you went into both hoping they would be, not a libertarian paradise, but these kind of decentralized, freedom-enabling forces, and I don't think either of those promises has panned out in that way, right? Like crypto. >> Yeah, so I disagree with that. The
existence of social media has meaningfully contributed um to voices being heard that weren't previously heard. We haven't figured out how, as a society, to not get hijacked by these voices, but I do think it has made it easier for voices to be out there.
And I do think that, you know, there's a really good book, um, that I think is an important read, by Martin Gurri, called The Revolt of the Public, and it sort of traces the rise of the Tea Party alongside social media, and it traces the breakdown of governments having narrative control. Um
and um I do actually think that you know institutions in this time of change have tried to hang on to old things in a way that hasn't always been healthy. Uh so
I think it's easy to look at social media and only see its negatives. >> Right, the negatives. >> Uh, and similarly with crypto, um, you know, I think the promise of network effects without lock-in, and of composable APIs that cannot be revoked, that promise is still very much there, and we're seeing it play out to some degree.
So I think, you know, um, with all the attention now on AI, people have sort of stopped looking at crypto. But, um, you know, stablecoins, for example, which run on crypto rails, are powering a huge amount of, you know, global commerce now. Um, they've made it possible, for instance, for commerce between different nations in Africa to happen, and they're meaningfully contributing to economic growth in Africa, which was simply constrained by people's ability to have access to currency; that has been significantly improved. >> Right. And to tie it to our previous conversation, a way of saying what you were saying is, uh, this might be the early innings of an agricultural revolution, and you see the malnutrition, and that's all there. But number one, it lays the technological framework for something better to be built. And number two, there are already benefits.
In the early phases of agriculture, there were already aristocrats who could do leisure and do philosophy. >> Yes. And you wouldn't have had some of those great artists you talked about earlier if, you know, >> we were still hunting rabbits. >> Yeah. And so, especially because USV also has a climate fund, right, something you're obviously passionate about from a mission-driven and not just a profit-driven perspective, how do you think about balancing those two imperatives of investing for the world and investing for shareholder returns? >> We don't consider ourselves like a
double-bottom-line firm or something like that. It's just that our philosophical interests have been in things that we think, overall, have the potential to make positive contributions. So a good example is if you look at education: you know, we've been pitched many startups that are like, we provide easier access to student debt. We're like, we don't want to get more people into debt. We just want to make learning cheaper. And so most of our portfolio, you know, the biggest breakout there is Duolingo, but other smaller companies like Quizlet or Brilliant, they're all about giving really affordable or sometimes free access to learning.
So I think it's just because we have certain core philosophies about what we think constitutes actual progress. I think a lot of the stuff we invest in um has had that characteristic. I mean I mentioned energy. I think energy is
fundamental to progress. I gave a talk earlier this year at a conference called DLD that's literally titled Energy and Progress. Now, that doesn't mean, as we've said earlier, that society immediately gets the best out of these things. I think often we in fact really struggle through this early on. Like, you know, you mentioned the agricultural struggle, and we are struggling with our technological progress at the moment, and I believe we will continue to struggle for some time, because we haven't yet embraced that we need to make drastic changes, changes on par with going from being hunter-gatherers to being agrarians, on par with going from agrarians to industrialists. I mean, one really needs to come back to just how extraordinarily extreme those changes were, and we are in the midst of another one of these changes. >> Yeah. >> So many politicians still seem to believe that it's a little incremental change here, a little incremental
change. And if there's one positive thing about, you know, Trump blowing stuff up, it's sort of seeing, okay, the era of incremental change is over. >> Right. I see. >> And I hope we don't return to it. >> I see. >> I just wish we had better ideas.
And, uh, throughout this entire journey, what do you think has motivated you? And how have your motivations changed throughout this career?
>> Now that I've had some success, um, my drive hasn't changed, but my um nervous energy or anxiety, or um things that, you know >> The need to prove yourself, or something like that. That part. Yeah. >> Yeah, which, you know, you sort of talked about dark aspects earlier, which had some dark aspects to it, um, where, you know, I think I wasn't able, for example, to be as thoughtful an investor, as thoughtful a partner to entrepreneurs, because I had so much of my own anxiety. Um, and so now I think I could have shortcut all that. Like, I don't think success is the only answer to this. This is why I'm now a believer in mindfulness.
>> Um, you know, I do think there are ways of dealing with anxiety other than just success. >> But success also works. >> It definitely helps, and I'm not going to deny that.
>> Right. I see. Well, that's an optimistic note to end on. Thank you so much for a great conversation. >> I really enjoyed the conversation, Jonathan.
So tell us about the force of digital technologies that's pushing us into this new age. What does this new age look like, and what are the necessary changes we're going to have to make? >> Well, let's first understand the force, because even though we're so far into this, there's still, I think, a lot of lack of understanding of why digital technology is so foundationally different from the technology that came before it. And there are two things that
are fundamentally different about digital technology. And the first is zero marginal cost. So in the physical world, you know, we're sitting on chairs. We want another chair, we have to go find one. We have to go move it.
We maybe have to make it. Um, there's very real cost involved with that, right? Um, take YouTube, where your beautiful videos run. Um, the marginal cost of one more view of one of your YouTube videos is essentially zero.
Their laptop or computer is probably already running, their internet connection is up and running, Google's internet connection is up and running, their server is spinning. Like, you'd have to look with a microscope to try and find the cost of that. It is absolutely microscopically small. It is, for all intents and purposes, near zero. And very strange things happen in markets when your marginal costs go to zero. Um, things that don't happen in the
physical world. So if you go back to the physical world, you know, yes, there were economies of scale. If you made more cars, you could make them more cheaply. Um, but there wasn't the sense that, wow, I could just be the one maker of all the cars in the world. >> Right. >> The monopolistic force is extreme. Like, YouTube's been around for many years now and nobody comes even close, not remotely close. But that's just one characteristic. There's a second characteristic which is just as important, and I call this the universality of computation.
>> And so when I go into the kitchen in the morning I use the coffee maker to make coffee and I use the toaster to toast bread. I wouldn't even think about trying to use the coffee machine to toast or the toaster to make coffee. These are very
purpose-built machines that do a very specific thing, and they do it very, very well. Now contrast that with a computer. A computer can run software to play chess one moment, then a second later route a car, or another second later diagnose disease. They're completely general-purpose machines. They can compute anything that can be computed. This was first shown by Turing quite some time ago: any machine that's Turing complete can compute anything that can be computed at all. And so much of what happens in the
world is computation. When you, you know, decide to go from point A to point B, you perform some computations about, you know, which steps you should be taking to get you there or whether you should be taking the subway. And not
surprisingly, a computer can do that, which is why a computer can route a person or route a car, etc. Uh, when a doctor makes a diagnosis, it's an act of computation. The inputs are the symptoms of the patient [music] and the output is a differential diagnosis, and then maybe the output is, I've got to run this test to narrow it down. But these are all acts of computation.
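Both examples above, routing and diagnosis, can run on the same general-purpose machine, which is the point about universality. A minimal sketch in Python; the tiny subway graph, the symptom rules, and the function names are all made up for illustration, not real routing or medical logic:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search: a path with the fewest hops from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def differential_diagnosis(symptoms, knowledge):
    """Toy rule matcher: rank conditions by how many symptoms they explain."""
    scores = {cond: len(symptoms & signs) for cond, signs in knowledge.items()}
    return sorted((c for c, s in scores.items() if s > 0),
                  key=lambda c: -scores[c])

# Computation 1: routing on a hypothetical map of stations.
subway = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_route(subway, "A", "D"))  # → ['A', 'B', 'D']

# Computation 2: diagnosis from hypothetical symptom rules.
rules = {"flu": {"fever", "cough"}, "allergy": {"sneezing"}}
print(differential_diagnosis({"fever", "cough"}, rules))  # → ['flu']
```

The same interpreter, on the same hardware, performs two unrelated tasks just by running different software, which is exactly what the coffee maker and the toaster cannot do.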
Thanks for watching my interview. If you like these kinds of discussions I think you'd fit in great with the ecosystem [music] we're building at Cosmos. We
deliver educational programs, fund research, invest in AI startups, and believe that philosophy is critical to building technology. If you want to join our ecosystem of philosopher builders, you can find roles we're hiring for,
events we're hosting, and other ways to get involved on jonathanb.com/cosmos. Thank you.