Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast #383
By Lex Fridman
Summary
## Key takeaways

- **Embrace Embarrassment for Growth**: Your ability to keep doing interesting things is your willingness to be embarrassed again, go back to step one, start as a beginner, and get your ass kicked. The moment you decide that you're going to be too embarrassed to try something new, you're not going to learn anything anymore. [04:01], [04:45]
- **Open Source AI Accelerates Innovation**: Llama is the language model that our research team made, and we did a limited open source release for it intended for researchers to be able to use it. It would be good if there were a lot of different folks who had the ability to build state-of-the-art technology here, and we'll learn a lot by seeing what the whole community of students, hackers, and startups build with this. [19:46], [20:50]
- **Future of Many Specialized AIs**: Everyone else is building the one singular AI, but there are going to be a lot of different AIs that people are going to want to engage with, just like you want to use a number of different apps for different things and you have relationships with different people who fill different emotional roles. Every creator will have an AI agent that acts on their behalf, and every small business will have an AI agent for commerce and customer support. [43:38], [44:22]
- **AI Amplifies Existing Harms**: The danger is basically amplifying the known set of harms that people or sets of accounts can do, like fraud, scams, spam, IP violations, and coordinated inauthentic behavior. We need to make sure that we really focus on doing that as well as possible. [01:09:53], [01:07:01]
- **Quest 3 Democratizes Mixed Reality**: Quest 3 has high-resolution mixed reality, where you see the physical world around you and place virtual objects in it, plus 2x the graphics processing power and 40% sharper screens, all for $499. We want to bring this technology to everyone, not just an elite wealthy crowd. [01:58:12], [01:59:42]
- **Distinguish Intelligence from Autonomy**: You can scale intelligence quite far, but that may not manifest the safety concerns if it's subservient, like our neocortex is to a simpler brain structure. We really need to be careful about the development of autonomy, because relatively simple things with runaway autonomy, like a virus, can do a lot of harm. [02:14:17], [02:15:50]
Topics Covered
- Embrace embarrassment to restart as beginner
- People tension stresses CEOs most
- Open source accelerates AI safety
- Diverse AIs replace singular assistants
- Separate AI intelligence from autonomy
Full Transcript
the following is a conversation with Mark Zuckerberg his second time in this podcast he's the CEO of meta that owns Facebook Instagram and WhatsApp all
services used by billions of people to connect with each other we talk about his vision for the future of meta and the future of AI in our human world this
is the Lex Fridman podcast and now dear friends here's Mark Zuckerberg so you competed in your first Jiu-Jitsu tournament and to me as a fellow Jiu-Jitsu
practitioner and competitor I think that's really inspiring given all the things you have going on so I gotta ask what was that experience like oh it was fun I know yeah I mean well look I'm I'm
a pretty competitive person yeah um doing sports that basically require your full attention I think is really important to my like mental health and and the way I just stay focused at doing
everything I'm doing it's like I decided to get into martial arts and it's um it's awesome I got like a ton of my friends into it we all train together um
we have like a mini Academy in my garage um and I guess um one of my friends was like hey we should go do a tournament I like okay yeah let's do it I'm not gonna
shy away from a challenge like that so yeah it was but it was it was awesome it was it was just a lot of fun you weren't scared there was no fear I don't know I I was I was pretty sure that I'd that I'd do okay I like the confidence um
well so for people who don't know Jiu-Jitsu is a martial art where you're trying to break your opponent's limbs or choke them uh to sleep uh and do so with
Grace and uh elegance and efficiency and all that kind of stuff it's a uh it's a kind of art form I think that you can do for your whole life and it's a basically a game a sport of human chess you can
think of there's a lot of strategy there's a lot of sort of interesting human dynamics of using leverage and all that kind of stuff and uh it's kind of incredible what you could do you can you could do things like a small opponent
could defeat a much larger opponent and you get to understand like the way the mechanics of the human body works because of that but you certainly can't be distracted no you it's it's 100%
Focus sport to to compete I I you know I needed to get around the fact that I didn't want it to be like this this big thing so I basically just I I rolled up with a hat and sunglasses and I was
wearing a COVID mask and I registered under my first and middle name so Mark Elliot and um and it wasn't until I pulled all that stuff off right before I got on the mat that I think people knew it was me so it
was it was pretty lowkey but you're still a public figure yeah I mean I didn't want to lose right the thing you're partially afraid of is not just the losing but being almost like
embarrassed it's so raw the sport in that like it's just you and another human being there's a primal aspect there oh yeah it's great for a lot of people it can be terrifying especially the first time you're doing the comp
competing and it wasn't for you I see the look of excitement in your face it was Fe I just think part of learning is failing okay right so I mean the main thing like people who who train
Jiu-Jitsu it's like you need to not have pride because I mean all the stuff that you were talking about before about you know getting choked or getting you know a joint lock it's
um you only get into a bad situation if you're not willing to tap once you you've already lost right and but obviously when you're getting started with something you're not going to be an expert at it immediately so you you just
need to to be willing to go with that but I think this is like I I don't know I mean maybe I've just been embarrassed enough times in my life yeah I I I do think there's a thing where like you know as people grow up maybe they don't want to be embarrassed
or anything they've built their adult identity and they they kind of have have a sense of of who they they are and and what they want to project and I don't know I think maybe to some
degree you know your ability to keep doing interesting things is your willingness to be embarrassed again and go back to Step One and start as a
beginner and get your ass kicked and you know look stupid doing things and you know I think so many of the things that we're doing whether it's whether it's this I mean this is just like a kind of
a physical part of my life but um but at running the company it's like we we just take on new adventures and um you know all the big things that we're doing I think of his like 10 plus
year missions that we're on where you know often early on you know people doubt that we're going to be able to do it and the initial work seems kind of silly and our whole ethos is we don't want to wait until something is perfect
to put it out there we want to get it out quickly and get feedback on it and so I don't know I mean there's probably just something about how I approach things in there but I I just kind of think that the moment that you decide that you're going to be too embarrassed
to try something new then you're not going to learn anything anymore but uh like I mentioned that fear that anxiety could be there it could creep up every once in a while do you do you feel that
in especially stressful moments sort of outside of the jiu-jitsu mat just in work stressful moments big decision days big decision moments how do you deal
with that fear how do you deal with that anxiety the thing that stresses me out the most is always is always the people challenges you know I I kind of think
that um you know strategy questions you know I tend to have enough conviction around the values of what we're trying to do and what I think matters and what
I want our company to stand for that those don't really keep me up at night that much I mean I I kind of you know it's not that I I get everything right of course I don't right I mean make we
make a lot of mistakes but um but I at least have a pretty strong sense of where I want us to go on that the the thing in in in running a company for you
know almost 20 years now one of the things that's been pretty clear is when you have a team that's cohesive you can get almost anything
done and you know you can you can run through super hard challenges um you can make hard decisions and push really hard to to do the best work even you know and kind of
optimize something super well but when when there's that tension I mean that's that's when when things get really tough and you know when I talk to other friends who run other companies and things like that I think one of the things that I actually spend a
disproportionate amount of time on in running this company is just fostering a pretty tight Core Group of of people who
are running the company uh with me and that to me is is kind of the thing that both makes it fun right having having you know friends and people you've worked with for a while and new people
and New Perspectives but like a pretty tight group who can who you can go work on some of these crazy things with um but to me that's also the most stressful thing is is when when there when there's
tension um you know that's that that weighs on me I I think the you know just it's it's it's maybe not surprising I mean we're like a very people focused company and it's the the people is the
the part of it that that um you know weighs on me the most to make sure that we get right but yeah that that that I'd say across everything that we do is probably the the big thing so when
there's tension in in that inner circle of of close folks so when you trust those folks to help you make difficult
decisions about uh Facebook WhatsApp Instagram the future of the company and the metaverse or the AI uh how do you build
that close-knit group of folks uh to make those difficult decisions are there people that you have to have critical voices very different perspectives on focusing on the past versus the future
all that kind of stuff yeah I mean I think for one thing it's just spending a lot of time with whatever the group is that you want to be that Core Group grappling with all of the biggest
challenges and that requires a fair amount of openness and you know so I mean a lot of how I I run the company is you know it's like every Monday morning
we get our it's about the top 30 people together and we and this is a group that just worked together for a long period of time and I mean people people rotate in I mean new people join people leave
the company people go to other roles in the company so it's it's not the the same group over time but and we spend you know a lot of times a couple of hours a lot of the time it's you know it
can be somewhat unstructured we like I'll come with maybe a few topics that I that are top of mind for me but I'll I'll ask other people to bring things and people you know raise questions
whether it's okay there's an issue happening in some country um with with some policy issue there's like a new technology that's developing here we're having an issue with this partner um you
know there's a design trade-off and WhatsApp between two things that that end up um being values that we care about deeply and we need to kind of decide where we want to be on that I
just think over time when um you know by working through a lot of issues with people and and doing it openly people develop an intuition for each other and a bond in
camaraderie um and to me developing that is is like a lot of the fun part of running a company or doing anything right I think it's like having having people who are kind of along on the journey that you're that you feel like
you're doing it with nothing is ever just one person doing it are there people that disagree often within that group it's a fairly combative group okay so combat is part of it so this is
making decisions on design engineering uh policy everything everything everything yeah I have to ask just back to jiujitsu for a little bit
what's your favorite submission now that you've been doing it what's uh H how do you like to submit your opponent Mark Zuckerberg I
mean well first of all um do you prefer no-gi or gi Jiu-Jitsu so gi is this outfit you wear that uh maybe mimics clothing so you can choke it looks like a kimono it's like the traditional martial arts kimono or pajamas um pajamas that you could choke people with
yes well it's got the lapels yes yeah um so I I like jiu-jitsu I also really like MMA and so I think no-gi more closely
approximates MMA and I think my style is um is maybe a little closer to an MMA style so like a lot of Jiu-Jitsu players are fine being on their back right and
obviously having a good guard is is is a critical part of of of Jiu-Jitsu but but in MMA you don't want to be on your back right because even if you have control you're just taking punches while you're
on your back so um so that's no good so you like being on top my my style is I'm I'm probably more pressure and um and yeah and and i' I'd probably rather be
the top player but um but I'm also smaller right I'm not I'm not like a a heavyweight guy right so from that perspective I think like you know it's especially because you know if I'm doing a competition I'll compete with people
who are my size but a lot of my friends are bigger than me so um so back takes probably pretty important right because that's where you have the most leverage Advantage right where where um you know
people you know their arms your arms are very weak behind you right so um so being able to get to the back and and and take that pretty important but I don't know I feel like the right strategy is to not be too committed to
any single submission that said I don't like hurting people so um so I always think that chokes are are a somewhat more humane way to go than than joint
locks yeah and it's more about control it's less dynamic so you're basically like a Khabib Nurmagomedov type of fighter so so let's go yeah back take to a rear naked choke I think is like the clean
the clean way to go straightforward answer right there what advice would you give to um to people looking to start learning jiu-jitsu given how busy you are given where you are in life that
you're able to do this you're able to train you're able to compete and get uh uh to learn something from this interesting art I just think you have to be willing to
um to just get beaten up a lot yeah I mean it's but but I mean over time I think that there's there's a flow to all these things and there's um you know one of
the one of I don't know my my experiences that I think kind of transcends you know running a company and the different different activities that I like doing are I I really believe
that like if you're going to accomplish whatever anything a lot of it is just being willing to push through right and having the grit and determination to to
to push through difficult situations um and I think for a lot of people that um that ends up being sort of a difference maker between the people you know who who kind of get the most done and
and not I mean there's all these questions about like um you know how how many days people want to work and things like that I think almost all the people who like start successful companies or things like that are just are working
extremely hard but I think one of the things that you learn both by know doing this over time or you know very acutely with things like Jiu-Jitsu or or surfing
is um you can't push through everything and that that's you you learn this stuff very acutely when doing sports compared to
running a company because running a company the cycle times are so long right it's like you start a project and then you know it's like months later or you know if you're You're Building
Hardware it could be years later before you're actually getting feedback and able to you know make the next set of decisions for the next version of the thing that you're doing whereas you one of the things that I just think is
mentally so nice about these very high turnaround conditioning Sports things like that is you get feedback very quickly right it's like okay like I I don't counter something correctly you get punched in the face right so not in
Jiu-Jitsu you don't you don't get punched in Jiu-Jitsu but in MMA um there are all these analogies between all these things that I think actually hold that are that are like important life lessons right it's
like okay you're surfing a wave it's like you know sometimes you're like you can't go in the other direction on it right it's like there are limits to kind
of what you know it's like foil you can you can pump the foil and and push pretty hard in a bunch of directions but like yeah you you know at some level like the momentum against you is is
strong enough you're that's not going to work and and I do think that um that's sort of a a humbling but also an important lesson for I think people who
are running things or building things it's like yeah you you um you know a lot of the game is just being able to kind of push and and and and work through complicated things but you also need to
kind of have enough of an understanding of like which things you you just can't push through and where where um um the Finesse is more important yeah what are your Jiu-Jitsu life
lessons well I think you did it you made it sound so simple and we so eloquent that it's easy to miss but
basically being okay and accepting the wisdom and the joy in the uh getting your ass kicked in the full range of what that means I think that's a big
gift of the being humbled somehow being humbled especially physically opens your mind to the the full process of learning what it means to learn which is being
willing to suck at something I think jiu just very repetitively efficiently humbles you over and over and over and over to where you can carry that lessons
to places where you you don't get humbled as much whether it's research or running a company or building stuff the the cycle is longer and so you can just get humbled in a period of an hour
over and over and over and over especially when you're a beginner you have a little person just you know somebody much smaller than you just kick your ass uh
repeatedly uh definitively where there's no argument oh yeah and then you you literally tap because if you don't tap you're going to die so this is an agreement you could have killed me just
now but we're friends so we're going to agree that you're not going to to and that kind of humbling process it just does something to your psyche to Your Ego that puts it in its proper context
to realize that you know everything in this life is like a journey from sucking
through a hard process of improving rigorously day after day after day after day like any kind of success requires hard work um yeah jiu-jitsu more than a lot of sports I would say cuz I've done a lot of them really teaches you that and you made it sound so simple like I'm I'm you know it's it's okay it's part of the process you just get humbled get your ass kicked I've just failed and been
embarrassed so many times in my life that like you know I'm I'm it's a core competence at this point it's a core competence well yes and there's a deep truth to that being able to and you said
it in the very beginning which is that's the thing that stops US especially as you get older especially to develop expertise in certain areas the not being
willing to be a beginner in a new area yeah uh that because that's where the growth happens is being willing to be a beginner being willing to be embarrassed saying something stupid doing something
stupid um a lot of us that get good at one thing you want to show that off and it sucks uh being a beginner but it's it's
where growth happens yeah well speaking of which let me ask you about AI it seems like this year for the entirety of the human civilization is an inter interesting year for the development of
artificial intelligence a lot of interesting stuff is happening So Meta is a big part of that uh meta has developed llama which is a 65 billion
parameter model uh there's a lot of interesting questions they can ask here one of which has to do with open source but first can
you tell the story of developing of this model and uh making the complicated decision of how to release it yeah sure I think you're right first of
all that in the last year there have been a bunch of advances on scaling up these large Transformer models so there's the language equivalent of it
with large language models um there sort of the image generation equivalent with these large diffusion models um there's a lot of fundamental research that's
gone into this and meta has taken the approach of being quite open and academic in in our development um
of of AI part of this is we want to have the best people in the world researching this and um and a lot of the best people want to know that they're going to be able to share their work so that's part
of the deal that we that we have is that you know we can get you know if if you're one of the top AI researchers in the world you can come here you can get access to kind of industry scale um
infrastructure and and and part of our ethos is that we we want to share what's what's invented um broadly we do that with a lot of the the different AI tools
that we create and llama is the language model that that our research team made and you know we we did a limited um a limited open source release for it right
where which was intended for researchers to be able to use it um but you know the responsibility and and getting safety right on these is um is very important
so we didn't think that for the first one there there were a bunch of questions around whether we should be releasing this commercially so we kind of punted on that for for V1 of of llama
and and just released it from research now obviously by releasing it for research um you know it's out there but but companies know that that they're that they're not supposed to kind of put it into commercial releases and um you
know we're we're working on the follow-up models for this and and thinking through how how um what what the the how exactly this should work for for follow on now that we've had time to
to work on a lot more of the the safety and um and the pieces around that but but overall I mean this is I I just kind of think
that that it would be good if there were a lot of different folks who had the ability to build state-of-the-art
technology here you know it's and not just a small number of of big companies but to train one of these AI models the state-of-the-art models is um just takes
you know hundreds of millions of dollars of infrastructure right so there are not that many organizations in the world um that can do that at the biggest scale
today and now it gets it gets more efficient every day so um so I I I do think that that will be available to more folks over time but but I just think like there's there's all this
Innovation out there that people can create and um and and I I just think that will also learn a lot by by seeing what the whole community of students and
um and hackers and startups and and different folks um build with this and that's kind of that's kind of been how we've approached this and it's also we've done a lot of our infrastructure and we took our whole data center design
and our server design and we we built this open compute project where we just made that public and um part of the theory was like all right if we make it so that more people can use the server
design then um then that'll enable more Innovation it'll also make the server design more efficient and that'll that'll make our business more efficient too so that's worked and we've we've just done this with a lot of our our
infrastructure so for people who don't know you did the limited release I think in February of of this year of llama and it got quote unquote
leaked meaning like it uh escaped the uh the the limited release aspect but it was you know that something you probably anticipated given
that it's just released to research we shared it with researchers right so it's just trying to make sure that there's like a slow release yeah uh but from there I just would love to get your
comment on what happened next which is like this is a very vibrant open source community that just built stuff on top of it there's uh llama.cpp basically stuff that makes it more efficient to run on smaller computers yep um there's combining it with uh reinforcement learning with human feedback so some of the different interesting fine-tuning mechanisms there's then also like fine-tuning on GPT-3 generations there's a lot of uh GPT4All Alpaca uh ColossalAI all these kinds of models that just kind of spring up and run on top of it what do you think about that no I think it's been really neat to see I mean there's been folks who are getting it to run on local devices right so if you're an individual who just wants to experiment with this at home you probably don't have a large budget to get access to like a large amount of cloud compute so getting it to run on your local laptop um you know is uh is pretty good right and pretty relevant um and then there are things like yeah llama.cpp reimplemented it more efficiently so you know now even when we run our own versions of it um we can do it on way less compute and it's just way more efficient saves a lot of money um for everyone who uses this so that is good
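A minimal sketch of the kind of local-laptop setup being described, assuming the community llama-cpp-python bindings and a quantized GGUF checkpoint you have already downloaded; the file path and prompt below are placeholders, not artifacts from the release itself.

```python
# Sketch of local inference with the community llama-cpp-python bindings.
# Assumes: `pip install llama-cpp-python` and a quantized GGUF file on disk
# (the path below is a hypothetical placeholder, not an official artifact).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window to allocate
    n_threads=8,   # CPU threads; tune for your machine
)

output = llm(
    "Q: Why does quantization let a large model run on a laptop?\nA:",
    max_tokens=128,
    stop=["Q:"],   # stop before the model starts a new question
)

# llama-cpp-python returns an OpenAI-completion-style dict.
print(output["choices"][0]["text"].strip())
```

The point is the workflow the conversation alludes to: a 4-bit quantized checkpoint plus an efficient C++ inference loop is enough to experiment on a single consumer machine, no cloud budget required.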
um I do think it's worth calling out that because this was a relatively early release um llama isn't quite as on the frontier as for example the biggest OpenAI models or the biggest um Google
models right I mean you mentioned that the largest llama model that we released had 65 billion parameters and no one knows you know I guess outside of open
AI um exactly what the specs are for um for GPT-4 but but I think you know my understanding is it's like 10 times bigger um and I think Google's PaLM model is also I think has about 10
times as many parameters now the Llama models are very efficient so they they perform well for for something that's around 65 billion parameters so for me that was also part of this because there's this whole debate around you
know is it good for everyone in the world to have access to um to the most Frontier AI models and I I I think as
the AI models start approaching something that's like a superhuman intelligence I think that's a bigger question that we'll have to grapple with but right now I mean these are still you
know very basic tools they're um you know they're they're powerful in the sense that you know a lot of Open Source software like databases or web servers
can enable a lot of pretty important things um but I don't think anyone looks at the the you know the current generation of llama and thinks it's um you know anywhere near a super
intelligence so I I think that a bunch of those questions around like is it is it good to to kind of get out there I I think at this stage surely you you want more researchers working on it for all
the reasons that um that open source software has a lot of advantages and we talked about efficiency before but another one is just open source software tends to be more secure because you have more people looking at it openly and
scrutinizing it um and finding holes in it um and that makes it more safe so I think at this point it's more I think it's generally agreed upon that
open source software is generally more secure and safer um than things that are kind of developed in a silo where people try to get through security through obscurity so I think that for the scale
of of of what we're seeing now with AI I think we're more likely to get to you know good alignment and good um understanding of of of kind of what needs to do to make this work well by
having it be open source and and that's something that I think is is quite good to have out there and and and happening publicly at this point meta released a lot of models as open source so uh the
massively multilingual speech model the image model that's I mean I'll ask you questions about those but the point is uh you've open sourced quite a lot you've been spearheading the open source
movement where's uh that's really positive inspiring to see from one angle from the research angle of course there's folks who are really terrified about the existential threat of
artificial intelligence and those folks will say that you you know um you have to be careful about the open sourcing uh step but what where do you see the
future of Open Source here uh as part of meta the tension here is do you want to release the magic sauce that's one
tension and the other one is uh do you want to put a powerful tool in the hands of uh Bad actors even though it probably has a huge amount of positive impact also
yeah I mean again I think for the stage that we're at in the development of AI I don't think anyone looks at the current state of things and thinks that this is super intelligence um and you know the
models that we're talking about the Llama models here are you know generally an order of magnitude smaller than what open AI or Google are doing so
I I think that at least for the stage that we're at now the equities balance strongly in my view towards doing this more openly um I I think if you got
something that was closer to Super intelligence then I think you'd have to discuss that more and and think through that um a lot more and we haven't made a decision yet as to what we would do if
we were in that position but I don't think I I think there's a good chance that we're pretty far off from that position so um so I I'm I'm not I'm certainly not saying that the
position that we're taking on this now applies to every single thing that we would ever do and you know certainly inside the company and we probably do more open source work than you know most
of the other big tech companies but we also don't open source everything right a lot of our the core kind of app code for WhatsApp or Instagram or something I me we're we're not open sourcing that
it's not like a a general enough piece of software that would be useful for a lot of people to do different things um you know whereas the software that we do
whether it's like a an open source server design or um or basically you know things like memcache right like a a good you know it was probably our earliest project um that that I worked
on it was probably one of the last things that I that I coded and and led directly for the company um but but basically this like caching tool um for
for quick dat data retrieval um these are things that are just broadly useful across like anything that you want to build and and I think that some of the language models now have that feel as
well as some of the other things that we're building like the translation tool that that you just referenced so text to speech and speech to text you've expanded it from around 100 languages to
more than 1,100 languages and the model can identify more than 4,000 spoken languages which is 40 times more than any known previous technology to me
that's really really really exciting in terms of connecting the world breaking down barriers that language creates yeah I think being able to translate between all of these different pieces in real
time this has been a kind of common sci-fi idea that we'd all have you know whether it's I know an
earbud or glasses or something that can help translate in real time um between all these different languages and that's one that I think technology is basically delivering now so I think yeah I think that's pretty exciting
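A hedged illustration of what calling such a speech model can look like, using the Hugging Face pipeline API; the model identifier and audio file name are assumptions made for the example, not details given in the conversation.

```python
# Sketch: multilingual speech-to-text with a released MMS-style checkpoint.
# Assumes: `pip install transformers torch` plus ffmpeg for audio decoding,
# and a local recording "clip.wav" (placeholder file name).
from transformers import pipeline

# Model id assumed for illustration; swap in whichever multilingual ASR
# checkpoint you actually use.
asr = pipeline("automatic-speech-recognition", model="facebook/mms-1b-all")

result = asr("clip.wav")   # transcribe the local audio file
print(result["text"])      # plain-text transcription
```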
uh you mentioned the next version of llama what can you say about the next version of llama what can you say about like what uh what were you working on in terms of
release in terms of the vision for that well a lot of what we're doing is taking the first version which was primarily you know this research version and
trying to now build a version that has all of the latest state-of-the-art safety precautions built in um and and
we're um we're using some more data to train it um from across our services but but a lot of the the work that we're doing internally is really just focused
on making sure that this is um you know as aligned and responsible as as possible and you know we're building a lot of our own you know we're talking about kind of the open source
infrastructure but you know the the main thing that we focus on building here you know a lot of product experiences to help people connect and express themselves so you know we're going to I've I've talked about a bunch of this
stuff but um then you'll have you know an assistant that you can talk to in WhatsApp um you know I think I I I think in the future every Creator will will have kind of an AI agent that can kind
of act on their behalf that their fans can talk to I I I want to get to the point where every small business basically has an AI agent that people can talk to for you know to do Commerce and customer support and things like
that so they're going to be all these different things and llama or the language model underlying this is is basically going to be the engine that powers that the
reason to open source it is that um as as we did with um with the the first version is that it uh you know basically it unlocks a lot of innovation in the
ecosystem we will make our products better as well um and also gives us a lot of valuable feedback on security and safety which is important for making this good but yeah I mean the the the
work that we're doing to advance the infrastructure it's um it's basically at this point taking it Beyond a research project into something which is ready to be kind of core infrastructure not only
for our own products but um you know hopefully for for a lot of other things out there too do you think the Llama or the language model underlying that version two will be open
sourced you're do you have internal debate around that the pros and cons and so on this is I mean we were talking about the debates that we have internally and I think um I think the
question is how to do it right I mean it's I think we you know we did the research license for V1 and and I think the the big thing that we're that we're thinking about is is basically like
what's the what's the right the right way so there was a leak that happened I don't know if you can comment on it for V1 you know we released it as a research
project um for researchers to be able to use but in doing so we put it out there so um you know we were very clear that anyone who uses the the code and the weights doesn't have a commercial
license to put into products and we've we've generally seen people respect that right it's like you don't have any reputable companies that are basically trying to put this into um their commercial products but but yeah but by
sharing it with you know so many researchers it's it's you know it did leave the building but uh what have you learned from that process that you might be able to apply to V2 about how to
release it safely effectively uh if if you release it yeah well I mean I think a lot of the feedback like I said is just around you know different things around you know
how do you fine-tune models to make them more aligned and safer and you see all the different data recipes that um you you mentioned a lot of different projects that are based on this I me
there's one at Berkeley there's you know there just like all over and um and people have tried a lot of different things and we've tried a bunch of stuff
internally so kind of we're we're we're making progress here but also were able to learn from some of the best ideas in the community and you I think it you know we want to just continue continue
pushing that forward but I don't have any news to announce on on this if that's if that's what you're you're asking I mean this is a a thing that
we're uh we're still we're still kind of you know actively working through the the the right way to move forward here the details of the secret sauce are
still being developed I see uh you comment on what do you think of uh the thing that worked for GPT which is the reinforcement learning with human feedback so doing this alignment process
do you find it interesting and as part of that let me ask because I talked to Yann LeCun before talking to you today he asked me to ask or suggested that I ask do you think LLM fine-tuning will need to be crowdsourced Wikipedia style so crowdsourcing so this kind of idea of how to integrate the human in the fine-tuning of these foundation models yeah I think that's a really interesting idea that I've talked to Yann about a bunch um and we were talking about how do you basically train these models to be
as as safe and and aligned and responsible as possible and you know different groups out there who doing development test different data recipes and fine-tuning but th this idea that
you you just mentioned is that at the end of the day instead of having kind of one group fine tune some stuff and then another group you know produce a different fine tuning recipe
and then us trying to figure out which one we think works best to produce the most aligned model um I I do think that it would be nice if
you could get to a point where you had a Wikipedia style collaborative way for a a kind of a broader Community
to um to fine-tune it as well now there's a lot of challenges in that both from an infrastructure and like a community management and product perspective about
how you do that so I I haven't worked that out yet um but but as an idea I think it's it's quite compelling and I think it it goes well with the ethos of open sourcing the technology is also
finding a way to have a a kind of community-driven um a community-driven training of it um but I think that there are a lot of questions on this in general these this
these questions around what's the the best way to produce aligned AI models it's very much a research area and it's one that I think we will need to make as
much progress on as the kind of core intelligence capability of the um the models themselves
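As a purely illustrative sketch of the crowdsourced-alignment idea floated here, the snippet below aggregates Wikipedia-style preference votes over candidate responses and keeps the winners as fine-tuning data; every name, vote, and threshold in it is hypothetical, and a real RLHF pipeline would train a reward model and run a policy-optimization step on top of such signals.

```python
# Toy sketch of community-driven preference aggregation for fine-tuning data.
# Everything here is hypothetical scaffolding, not any production pipeline.
from collections import Counter

# Candidate completions for one prompt, e.g. sampled from a base model.
prompt = "Explain why you should tap early when caught in a submission."
candidates = {
    "a": "Tapping early keeps you safe and lets you keep training tomorrow.",
    "b": "Never tap; pain is weakness leaving the body.",
    "c": "Tap when the joint lock is fully extended and you feel damage.",
}

# Community votes: each entry is (voter_id, preferred_candidate_id).
votes = [("u1", "a"), ("u2", "a"), ("u3", "c"), ("u4", "a"), ("u5", "b")]

def aggregate(votes, min_votes=3):
    """Return candidate ids that clear a simple plurality threshold."""
    tally = Counter(choice for _, choice in votes)
    winner, count = tally.most_common(1)[0]
    return [winner] if count >= min_votes else []

# Winning responses become (prompt, response) pairs for supervised fine-tuning;
# an RLHF-style system would instead fit a reward model on the full vote data.
dataset = [(prompt, candidates[c]) for c in aggregate(votes)]
print(dataset)
```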
well I just did a conversation with Jimmy Wales the founder of Wikipedia and to me Wikipedia is one of the greatest websites ever created and is kind of a miracle that it works and I think it has to do with something that you mentioned which is community you have a small
community of editors that somehow work together well and they uh they handle very controversial topics and they handle it
with balance and with Grace despite sort of the attacks that will often happen a lot of the time I mean it's not it's it has issues just like any other human system but yes I mean the balance is I
mean it's a it's amazing what they've been able to achieve but it's it's also not perfect and I think that that's um there's still a lot of challenges right it's uh the more
controversial the topic the more difficult uh the um the journey towards quote unquote truth or knowledge or wisdom that Wikipedia aspires to
capture in the same way AI models will need to be able to generate those same things truth knowledge and wisdom and how do you align those models that
they generate um something that uh is closest to truth there's these concerns about misinformation all this kind of stuff
that nobody can Define and that's a it's something that we together as a human species have to Define like what is truth and how to help AI systems generate that is one of
the things language models do really well is generate convincing sounding things that can be completely wrong and so how do you align
it uh to be less wrong and part of that is the training and part of that is the alignment and however you do the alignment stage and
just like you said it's a very new and a very open research problem yeah and I think that there's also a lot of questions about whether the current
architecture for llms as you continue scaling it what happens um I mean a lot of the a lot of what's been exciting in the last year is
that there was there's clearly a qualitative breakthrough where you know with with some of the GPT models um that OpenAI put out and and that others have been able to do as well I think it
reached a kind of level of quality where people like wow this is this feels different and um and like it's going to be able to be the foundation for building a lot of awesome products and
experiences and value but I think the other realization that people have is wow we just made a breakthrough um if there are other breakthroughs quickly then I think that there's the
sense that maybe we're we're closer to general intelligence but I think that that idea is predicated on the idea that I think people believe that there's still generally a bunch of additional
breakthroughs to make and that it's um we just don't know how long it's going to take to get there and you know one view that some people have um this doesn't tend to be my view as much is
that simply scaling the current llms and you know getting to higher parameter count models by itself we we'll get to something that is closer to um to to
general intelligence but um I don't know I tend to think that there's probably more more um fundamental steps that need to be taken along the way there but still the
leaps taken with this extra alignment step are quite incredible quite surprising to to a lot of folks and on top of that when
you start to have hundreds of millions of people potentially using a product that integrates that you can start to see civilization transforming effects
before you achieve super quote unquote super intelligence it could be super transformative without being a super intelligence oh yeah I mean I think that
there are going to be a lot of amazing products and value that can be created with the current level of technology um to some degree I'm excited to work
on a lot of those products over the next few years and I think it would just create a tremendous amount of whiplash if the number of breakthroughs keeps like if if they're keep on being stacked breakthroughs because I think to some
degree industry in the world needs some time to kind of build these breakthroughs into the products and experiences that we all use so we can
actually benefit from them um but I don't know I think that there's just a a a like an awesome amount of stuff to do and I think about like all of the I
don't know small businesses or individual entrepreneurs out there who um you know now we're going to be able to you know get help coding the things that they need to go build things or
designing the things that they need or um we'll be able to you know use these models to be able to do customer support for the people that they're that they're serving you over WhatsApp without having
to you know it's I I think that's that's just going to be I just think that this is all going to be you know super exciting it's going to create better better experiences for people and just
unlock a ton of innovation and value so I don't know if you know but uh you know what is it over three billion people use
WhatsApp Facebook and Instagram uh so any kind of AI fueled products that go into that like we're talking about anything with llms will
have a tremendous amount of impact d do you have ideas and thoughts about possible products that might start being
integrated into uh into these platforms used by so many people yeah I I think there's three main categories of things that we're working on
um the first that that I think is probably the most interesting is um you know there's this notion of like
you're going to have an assistant or or an agent who you can talk to and I think probably the biggest thing that's different about my view of how this plays out from what I see with um with
OpenAI and Google and others is you know everyone else is building like the one singular AI right it's like okay you talk to ChatGPT or you talk to Bard or
you talk to Bing and my view is that that there are going to be a lot of different AIS that people are going to want to engage with just like you want
to use um you know a number of different apps for different things and you have relationships with different people in your life who fill different emotional
roles for you um and I um so I think that they're going to be people have a reason that they that I think you don't just want like a singular Ai and that that I think is probably the biggest
distinction in in in terms of how how I think about this and a bunch of these things I I think you'll you'll want an assistant um I I me I mentioned a couple of these before I think like every Creator who you interact with will
ultimately want some kind of AI that can proxy them and be something that their fans can interact with or that allows them to interact with their fans um this
is like the common creator premise everyone's trying to build a community and engage with people and they want tools to be able to amplify themselves more and be able to do that um but but
you only have 24 hours in a a day so um so I think having the ability to basically like bottle up your personality and um or or you know like give your fans information about when
you're performing a concert or or something like that I mean that's that I think is going to be something that's super valuable but it's not just that you know again it's not this idea that I think people are going to want Just One Singular AI I think you're going to you
know you're going to want to interact with a lot of different entities and then I think there's the business version of this too which we've touched on a couple of times which is um I think every business in the world is
going to want basically an AI that um that you know it's like you have your page on Instagram or Facebook or Whatsapp or whatever and you want to you want to point people to an AI that
people can interact with but you want to know that that AI is only going to sell your products you don't want it you know recommending your competitor stuff right so so it's not like there can be like just uh you know One Singular AI that
that can answer all the questions for a person because you know that qu like that AI might not actually be aligned with you as a business to um to to really just do the best job providing
support for for your product so I think that there's going to be a clear need um in the market and in people's lives for there to be a bunch of these part of
that is figuring out the research the technology that enables the personalization that you're talking about so not one centralized Godlike llm
but one just a huge diversity of them that's fine-tuned to particular needs particular Styles particular businesses particular Brands all that kind of stuff
and also enabling just enabling people to create them really easily for the you know for to for your own business or if you're a Creator to to be able to help you engage with your fans and I I think
that's um so yeah I think that there there's a clear kind of interesting product Direction here that I think is fairly unique from from what you I any of the other big companies are are
taking um it also aligns well with this sort of Open Source approach because again we we sort of believe in this more Community oriented uh more democratic approach to building out the products
and Technology around this we don't think that there's going to be the one true thing we think that there there should be kind of a lot of development so that part of things I think is going to be really interesting and we could probably spend a lot of time
talking about that and the the kind of implications of um of that approach being different from what others are taking um but then there's a bunch of other simpler things that I think we're also going to do just going back to your
your question around how this finds its way into like what what do we build um there going to be a lot of simpler things around
um okay you you post photos on Instagram and Facebook and you know in WhatsApp and messenger and like you want the photos to look as good as possible so like having an AI that you can just like
take a photo and then just tell it like okay I want to edit this thing or describe this it's like I think we're we're going to have tools that are just way better than than what we've historically had on this um and that's
more in the image and media generation side than the large language model side but but it's it all kind of you know plays off of advances in the same space um so there are a lot of tools that I think are just going to get built into
every one of our products I think every single thing that we do is going to basically get evolved in in this direction right it's like in the future if you're advertising on our services
like do you need to make your own kind of ad creative no you'll just you know you just tell us okay I'm I'm a dog
walker and I I'm willing to walk people's dogs and help me find the right people and like create the ad unit that will perform the best and like give an
objective to to the system and it just kind of like connects you with the right people well that's a super powerful idea
of generating the language almost like uh rigorous AB testing for you that works to find the the best customer for
your thing I mean to me advertisement when done well just finds a good match between a human being and a thing that will make that human being
happy yeah totally and do that as efficiently as possible when it's done well people actually like it you know it's um I think that there's a lot of examples where it's not done well it's annoying and I think that that's what
kind of gives it a bad rap but um but yeah a lot of the stuff is possible today I mean obviously AB testing stuff is built into a lot of these Frameworks the thing that's new is having technology that can generate the ideas
for you about what to AB test something that that's exciting so this will just be across like everything that we're doing right all the metaverse stuff that we're doing right it's like you want to create worlds in the future you'll just
describe them and then it'll create the code for you so so natural language becomes the the interface we use for all the ways we interact with the
computer with with the digital world more and more of them yeah yeah totally yeah which is what everyone can do using natural language and with translation you can do it in any kind of
language um I I mean for the personalization is really really really interesting yeah it unlocks so many possible things I mean I for one look
forward to creating a copy of myself I know we talked about this last time but since last time this has become much closer we're much closer like I could literally just from interacting with some of these language models I can see the absurd situation where I'll have a uh large uh or a Lex language model and
I'll have to have a conversation with him about like Hey listen like you're just getting out of line and having a conversation where you fine-tune that thing to be a little bit more respectful
or something like this I mean that's that's going to be the that seems like an amazing
product for businesses for humans just not not just the assistant that's facing the individual but the assistant that represents the individual to the public
both of both directions there's basically a a layer that is the AI system through which you interact with the outside world with the
outside world that has humans in it that's really interesting and you that have social networks that connect billions of people it seems like a heck
of a large scale place to test some of this stuff out yeah I mean I think part of the reason why creators will want to do this is because they already have the communities on our
services yeah and and and a lot of the interface for this stuff today are chat type interfaces and and between WhatsApp and and messenger I think that those are
you know just great great ways to to interact with people so some of this is philosophy but do you see do you see a near-term future where you have some of
the people you're friends with are AI systems on these social networks on Facebook on Instagram even even on WhatsApp having having conversations
where some heterogeneous some is human some is AI I think we'll get to that um you know and you know if only just empirically looking
at Microsoft released this thing called XiaoIce several years ago in China it was a pre-LLM chatbot technology that
so was a lot simpler um than what's possible today and and I think it was like tens of millions of people were using this and and just you know really you know became quite attached and and
you know built relationships with it and I think that there's um you know there services today like replica where you know people are doing things like that
and um so I I think that there's there's certainly you know needs for companionship that people have you know
older people um uh and it's I I think most people probably don't have as many friends as they would like to have right if you look at um there's some
interesting demographic studies around that like the average person has the number of close friends that they have is um fewer today than it was 15
years ago and I mean that gets to like this is like the core thing that that I think about in terms of you know Building Services that help connect people so I think you'll get tools that
help people connect with each other are going to be you the primary thing that we want to do um so you can imagine you know AI assistants that you know just do a better job of reminding you when it's
your friend's birthday and how you could celebrate them right it's like right now we have like the little box in the corner of the website that tells you whose birthday it is and stuff like that but it's um but you know at some level
you don't just want to like send everyone a note that's the same note saying happy birthday with an emoji right so having something that's more of a you know a social assistant in that
sense and like that can you know update you on what's going on in their life and like how how you can reach out to them effectively um help you be a better friend I think that that's something
that's super powerful too um but yeah beyond that um and there are all these different flavors of kind of personal AIS that I
think could exist so I think an assistant is sort of the the kind of simplest one to wrap your head around but um like a mentor or a life coach um
you know someone who can give you advice um who's maybe like a bit of a cheerleader who can help pick you up through all the challenges that that um you know inevitably you know we all go through on a daily basis and that
there's probably you know some some role for something like that and then you know all the way you can you probably just go through a lot of the the different type of kind of functional relationships that people have in in
their life and you know I would I would bet that there will be companies out there that take a crack at at um at a lot of these things so um I don't know I think it's part of the interesting Innovation that's going to exist is is
that there there's certainly a lot um like education tutors right it's like I me I just look at you know my kids learning to code and you know they love it and but you know it's like they they
get stuck on a question and they have to wait till like I can help answer it right or someone else who who they know can help answer the question in the future they'll just there will be like a coding assistant that they have that is
like designed to you know be perfect for teaching a five and a seven-year-old how to code and and they'll just be able to ask questions all the time and you know it'll be extremely patient it's never
going to get annoyed at them right um I I think that like there are all these different kind of relationships or functional relationships that we have in our lives that um that are really
interesting and I think one of the big questions is like okay is this all going to just get bucketed into you know One Singular AI I just I just don't I don't think so do you think
about this? Actually, a question from Reddit: what are the long-term effects on human communication when people can, in quotes, "talk with" others through a chatbot that augments their language automatically, rather than developing social skills by making mistakes and learning? Will people just communicate by grunts in a generation? Do you think about the long-term effects, at scale, of the integration of AI in our social interaction? Yeah, I mean, I think it's
mostly good. I mean, that question was sort of framed in a negative way, but we were talking before about language models helping you communicate, like language translation helping you communicate with people who don't speak your language. At some level, what all this social technology is doing is helping people express themselves better to people in situations where they would otherwise have a hard time doing that. So part of it might be, okay, you speak a language I don't know; that's a pretty basic one. I don't think people are going to look at that and say it's sad that we have the capacity to do that because I should have just learned your language; that's a pretty high bar.
But overall, I'd say there are all these impediments, and language is an imperfect way for people to express thoughts and ideas; it's one of the best that we have, we have art, we have code, but language is also a mapping of the way you think, the way you see the world, who you are. And one of the applications: I recently talked to a person who's actually a jiu-jitsu instructor, and he said that when he emails parents about their son or daughter, about how they can improve their discipline in class and so on, he often finds that he comes off a bit more of an asshole than he would like, so he uses GPT to translate his original email into a nicer email. We hear this all the time; a lot
of creators on our services tell us that one of the most stressful things um is basically negotiating deals with Brands and stuff like the business side of it because they're like I mean they do
their thing right and and you know the creators they're they're excellent at what they do and they just want to connect with their Community but then they get really stressed you know they go into their their DMS and you they see some brand wants to do something with
them and they don't quite know how to negotiate or how to push back respectfully. So I think building a tool that can actually allow them to do that well is one simple thing, an interesting thing, that we've heard from a bunch of people that they'd be interested in. But going back to the broader idea, I don't know, Priscilla and I just had our third daughter a couple of months ago, and one of the saddest things in the world is seeing your baby cry, right? But why is that? Well, because babies don't generally have much capacity to tell you
what they care about otherwise right it's not actually just babies right it's um you know my 5-year-old daughter cries too because she sometimes has a hard
time expressing what matters to her. And I was thinking about that, and, well, actually a lot of adults get very frustrated too because they have a hard time expressing things, in a way that, going back to some of the early themes, maybe is something that was a mistake, or maybe they have pride, or something; all these things get in the way. So
I don't know I think that all these different technologies that can help us navigate the social complexity and actually be able to better express our
what we're feeling and thinking I think that's generally all good and um there are always these concerns like okay are people going to have worse memories because you have Google to look things
up, and I think in general, a generation later, you don't look back and lament that; you just think, wow, we have so much more capacity to do so much more now, and I think that'll be the case here too. You can allocate those cognitive capabilities to deeper, more nuanced thought.
Yeah, but it's changed. So just like with Google search, with large language models you basically don't have to remember nearly as much, just like with Stack Overflow for programming. Now that these language models can generate code, I mean, I find that maybe 80 to 90% of the code I write is now generated first and then edited, so you don't have to remember how to write the specifics of different functions.
oh but that's great and it's also it's not just the the specific coding I mean in the in the context of a of a large company like this I think before an engineer can sit
down to code they first need to figure out all of the libraries and dependencies that you know tens of thousands of people have written before
them and um you know one of the things that I'm excited about that we're working on is it's not just um you know tools that help Engineers code it's tools that can help summarize the whole
knowledge base and help people navigate all the internal information. I think that, in the experiments that I've done with this stuff, and that's on the public stuff, you just ask one of these models to build you a script that does anything and it basically already understands what the best libraries are to do that thing and pulls them in automatically. I think that's super powerful. That was always the most annoying part of coding, that you had to spend all this time actually figuring out what the resources were that you were supposed to import before you could actually start building the thing.
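As a concrete illustration of the kind of output being described here (this is a made-up example, not any specific Meta or OpenAI tool): if you ask a current code model for "a script that downloads a CSV of daily temperatures and prints the warmest day," it will typically choose and import the relevant libraries on its own, producing something along these lines. The URL is a placeholder.

```python
# Hypothetical example of assistant-generated code: the model picks the
# libraries (requests, csv) without being told which ones to import.
import csv
import io

import requests

CSV_URL = "https://example.com/daily_temperatures.csv"  # placeholder URL


def warmest_day(url: str) -> tuple[str, float]:
    """Download a CSV with 'date' and 'temp_c' columns and return the hottest row."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    rows = csv.DictReader(io.StringIO(response.text))
    best = max(rows, key=lambda row: float(row["temp_c"]))
    return best["date"], float(best["temp_c"])


if __name__ == "__main__":
    date, temp = warmest_day(CSV_URL)
    print(f"Warmest day: {date} ({temp:.1f} C)")
```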
Yeah, I mean, there's of course the flip side of that. I think for the most part it's positive, but the flip side is, if you outsource that thinking to an AI model, you might miss nuanced mistakes and bugs; you lose the skill to find those bugs, and the code looks very convincingly right but it's actually wrong in a subtle way.
But that's the tradeoff that we face as a human civilization when we build more and more powerful tools. When we stand on the shoulders of taller and taller giants, we can do more, but then we forget how to do all the stuff that they did. It's a weird tradeoff. Yeah, I agree. I mean, I think it is
very valuable in your life to be able to do basic things too do you worry about some of the um concerns of bots being present on
social networks, more and more humanlike bots that are not necessarily trying to do a good thing, or might be explicitly trying to do a bad thing, like phishing scams, social engineering, all that kind of stuff, which has always been a very difficult problem for social networks but is now becoming a more and more difficult problem? Well, I
think there's a few different parts of of this so one is there are all these harms that we need to basically fight against and
prevent, and that's been a lot of our focus over the last five or seven years: basically ramping up very sophisticated AI systems, not generative AI systems, more kind of classical AI systems, to be able to categorize and classify and identify, okay, this post looks like it's promoting terrorism, this one looks like it's exploiting children, this one looks like it might be trying to incite violence, this one's an intellectual property violation. There are something like 18 different categories of violating, harmful content that we've had to build specific systems to be able to track.
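To make the idea of these classical, non-generative classifiers a bit more concrete, here is a minimal sketch of multi-label harmful-content classification. The categories, the toy training data, and the scikit-learn pipeline are illustrative assumptions, not Meta's production systems.

```python
# Minimal sketch of multi-label harmful-content classification.
# Categories and training examples are invented placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MultiLabelBinarizer

CATEGORIES = ["spam", "scam", "ip_violation"]

train_texts = [
    "click this link now to claim your free prize",
    "buy cheap followers for your account today",
    "watch the full movie here, free download and stream",
    "lovely photo from my trip to the lake this weekend",
]
train_labels = [["spam", "scam"], ["spam"], ["ip_violation"], []]

binarizer = MultiLabelBinarizer(classes=CATEGORIES)
y = binarizer.fit_transform(train_labels)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])
model.fit(train_texts, y)

# At serving time, each post gets a score per category; anything over a
# per-category threshold would be routed to enforcement or human review.
scores = model.predict_proba(["free prize, just click this link"])[0]
flagged = [c for c, s in zip(binarizer.classes_, scores) if s > 0.5]
print(flagged)
```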
I think it's certainly the case that advances in generative AI will test those, but at least so far it's been the case, and I'm optimistic that it will
continue to be the case, that we will be able to bring more computing power to bear to have even stronger AIs that can help defend against those things. We've had to deal with some adversarial issues before. For some things like hate speech, people aren't generally getting a lot more sophisticated; the average person saying some kind of racist thing isn't necessarily getting more sophisticated at being racist, so the systems can just find it. But
then there are other adversaries who actually are very sophisticated, like nation states doing things, and we find, whether it's Russia or just different countries, that they are basically standing up these networks of bots, or inauthentic accounts is what we call them, because they're not necessarily bots; some of them could actually be real people who are masquerading as other people. But they're acting in a coordinated way, and some of that behavior has gotten very sophisticated and it's very adversarial, so you
know each iteration every time we find something and stop them um they kind of evolve their behavior they don't just pack up their bags and go home and say okay we're not going to try you know at some point they might decide doing it on
meta Services is not worth it they'll go do it on someone else if it's easier to do it in another place but um but we have a fair amount of experience dealing with even those kind of adversarial
attacks where they just keep on getting better and better and I I do think that as as long as we can keep on putting more compute power against it and and and if we're kind of one of the leaders in developing some of these AI models
I'm I'm quite optimistic that we're going to be able to keep on um pushing against the kind of normal categories of
harm that you talk about fraud scams spam um IP violations things like that what about like creating narratives and
controversy to me it's kind of amazing how a small collection of yeah uh what did you say inauthentic accounts so it could be Bots but it yeah I me we have sort of this funny name for it but we
call it coordinated inauthentic Behavior yeah it's it's kind of incredible how a small collection of folks can create narratives create
stories, especially if they're viral, especially if they have an element that can catalyze the virality of the narrative. Yeah, I think there the question is you have to be very specific about what is bad about it, because I think a set of people coming together, organically bouncing ideas off each other, and a narrative coming out of that is not necessarily a bad thing by itself; if it's authentic and organic, that's a lot of what happens and how culture gets created and how art gets created, and a lot of good stuff. So that's why we've focused on this sense of coordinated inauthentic behavior. So if you have a network of, whether it's bots,
some some people masquerading as different accounts um but you have kind of someone pulling the strings behind it
um and trying to kind of act as if this is a more organic set of behavior but really it's not it's just like one coordinated thing that seems problematic to me right I mean I I don't think
people should be able to have coordinated networks and not disclose it as such. But again, we've been able to deploy pretty sophisticated AI, and counterterrorism groups and things like that, to be able to identify a fair number of these coordinated inauthentic networks of accounts and take
them down. We continue to do that, and it's one thing that, if you told me 20 years ago, all right, you're starting this website to help people connect at a college, and in the future part of your organization is going to be a counterterrorism organization using AI to find coordinated inauthentic behavior, I would have thought that was pretty wild. But no, I think
that that's that's part of where we are but but look I I think that these questions that you're pushing on now um this is actually where I'd guess most of the challenge around AI will be for
the foreseeable future. I think that there's a lot of debate around things like, is this going to create existential risk to humanity, and those are very hard things to disprove one way or another. My own intuition is that, on the point of us becoming close to superintelligence, it's just really unclear to me that the current technology is going to get there without another set of significant advances. But that doesn't mean that there's no danger. I think the danger is basically amplifying the kind of known set of harms that people or sets of accounts can do, and we just need to make sure that we really focus on basically doing
that as well as possible. So that's definitely a big focus for me. Well, you can basically use large language models as an assistant for how to cause harm on a social network. You can ask it a question: Meta has very impressive capabilities for fighting coordinated inauthentic accounts; how do I do coordinated inauthentic account creation in a way that Meta doesn't detect? Like, literally ask that question. And that's basically part of what OpenAI has shown they're concerned with, those kinds of questions. Perhaps you can comment on your approach to it, how to do a kind of moderation on the
output of those models so that they can't be used to help you coordinate harm, in the full definition of what harm means. Yeah, and that's a lot of the fine-tuning
and the the alignment training that we do is basically you know when we when we ship AI across the our products a lot of what
we're trying to make sure is that you know you can't ask it to help you commit a crime right it's um
uh so I think training it to kind of understand that and it's not that not like any of these systems are ever going to be 100% perfect
but you know just making it so that this isn't a an easier way to go about doing something bad than the next best alternative right I mean people
still have Google, you still have search engines, so the information is out there. And what we see is, for nation states or these actors that are trying to pull off these large coordinated inauthentic networks to influence different things, at some point, when we just make it very difficult, they do just try to use other services instead. It's just like, if you can make it more expensive for them to do it on your service,
then then kind of people go go elsewhere and I think that that's that's the bar right it's like it's not like okay are you ever going to be perfect at finding you know every adversary who tries to
attack you it's I you try to get as close to that as possible but um but I think really kind of economically what you're just trying to do is make it that it's it's just inefficient for them to to to go after that uh but there's also
complicated questions of what is and isn't harm, what is and isn't misinformation. This is one of the things that Wikipedia has also tried to face. I remember asking GPT about whether the virus leaked from a lab or not, and the answer provided was a very nuanced one, and a well-cited one, almost, dare I say, a well-thought-out one, balanced. I would hate for that nuance to be lost through the process of moderation. Wikipedia does a good job on that particular thing too, but from pressures from governments and institutions you could see some of that nuance and depth of information, facts, and wisdom be lost. Absolutely, and that's a
scary thing some of the magic some of the edges the rough edges might be lost to the process of moderation of AI systems uh so how do you get that right
I really agree with what you're pushing on. The core shape of the problem is that there are some harms that I think everyone agrees are bad: sexual exploitation of children, you're not going to get many people who think that type of thing should be allowed on any service, and that's something that we face and try to push off the service as much as possible; terrorism; inciting violence. We went through a bunch of these types of harms before. But then I do think that you get to a set of harms where there is more social
debate around it. So misinformation, I think, has been a really tricky one, because there are things that are obviously false, that are maybe factual matters, but may not be harmful. So, all right, are you going to censor someone for just being wrong? If there's no kind of harm implication of what they're doing, I think there are a bunch of real issues and challenges there. But then I think
that there are other places where it is. Just take some of the stuff around COVID earlier on in the pandemic, where there were real health implications but there hadn't been time to fully vet a bunch of the scientific assumptions, and unfortunately, I think a lot of the kind of establishment on that waffled on a bunch of facts and asked for a bunch of things to be censored that, in retrospect, ended up being more debatable or true. And that stuff is really tough; it really undermines trust. So I do think that the questions around how to manage that are very nuanced. The way that I try to think
about it is that I think it's best to generally boil things down to the harms that people agree on. So when you think about whether something is misinformation or not, I think often the more salient bit is, is this going to potentially lead to physical harm for someone, and to think about it in that sense. And then beyond that, I think people just have different preferences on how they want things to be flagged for them. I think a bunch of people would prefer to
kind of have a a flag on something that says hey a fact Checker thinks that this might be false or um yeah I think Twitter's Community notes implementation is quite good on on this um but again it's the same type of thing it's like
just discretionarily adding a flag because it makes the user experience better, but it's not trying to take down the information. I think that you want to reserve the kind of censorship of content to things that are in known categories that people generally agree are bad. Yeah, but there are so many things, especially with the pandemic,
but there's other topics where there's just deep disagreement fueled by politics about what is and isn't harmful
There's, even just, the degree to which the virus is harmful, and the degree to which the vaccines that respond to the virus are harmful; there's almost like a political divide around that. And so how do you make decisions about that, where half the country in the United States, or some large fraction of the world, has very different views from another part of the world? Is there a way for Meta to stay out of the moderation of this? I think it's very difficult to just abstain, but I think we should be clear about which of these things are actual safety concerns
and which ones are a matter of preference in terms of how people want information flagged right so we did recently introduce something that allows
people to have fact-checking not affect the distribution of what shows up in their product. So, okay, a bunch of people don't trust who the fact checkers are; all right, well, you can turn that off if you want. But if the content violates some policy, like it's inciting violence or something like that, it's still not going to be allowed. So I think that you want to honor people's preferences
on on that as much as possible um but look I mean this is really difficult stuff I think the it's really hard to know where to draw the line on what is
fact and what is opinion because the nature of science is that nothing is ever 100% known for certain you can disprove certain things but you're
constantly testing new hypotheses and um you know scrutinizing Frameworks that have been long held and every once in a while you you throw out something that was working for a very long period of
time, and it's very difficult. But I think that just because it's very hard, and just because there are edge cases, doesn't mean that you should not try to give people what they're
looking for as well let me ask about something you faced in terms of moderation is uh pressure from different
sources, pressure from governments. I want to ask the question of how to withstand that pressure, for a world where AI moderation starts becoming a thing too. So what's Meta's approach to resisting the pressure from governments and other interest groups in terms of what to moderate and not? I don't know that there's a one-size-fits-all answer to that. I think we basically have principles around, you know, we want to
allow people to express as much as possible but we have developed clear categories of things that we think are
are wrong that we don't want on our services and we build tools to try to moderate those so then the question is okay what do you do when a government
says that they don't want something on the service. And I think we have a bunch of principles around how we deal with that, because on the one hand, if there's a democratically elected government and people around the world just have different values in different places, then should we, as a California-based company, tell them that something that they have decided is unacceptable actually needs to be able to be expressed? I think there's a certain amount of hubris in that. But then I think there are other cases where it's a little more autocratic, and you have the dictator leader who's just trying to crack down on dissent, and the people in the country are really not aligned with that, and it's not necessarily against their culture, but the person who's leading it is just trying to push in a certain direction. These are very complex questions, so it's difficult to have a one-size-fits-all approach to it, but in
general we're we're pretty active in in kind of advocating and pushing back on on um requests to take things down
But honestly, a request to censor things is one thing, and that's obviously bad, but where we draw a much harder line is on requests for access to information. Because if you get told that you can't say something, that's bad; it obviously violates your sense of freedom of expression at some level. But a government getting access to data in a way that seems like it would be unlawful in our country exposes people to real physical harm,
and that's something that in general we take very seriously and then so there's that flows through like all of our policies in in a lot of ways right it's by the time you're actually like
litigating with a government or pushing back on them that's pretty late in the funnel I'd say a bunch of the stuff starts a lot higher up in the decision
of where do we put data centers and um there are a lot of countries where you know we may have a lot of people using the service in a place it might be you know good for the service in some ways
um good for those people if we could reduce the latency by having a data center nearby them but you know for whatever reason we just feel like hey this government does not have a good
track record on basically not trying to get access to people's data. And at the end of the day, if you put a data center in a country and the government wants to get access to people's data, they do have the option of having people show up with guns and taking it by force. So I think that there are a lot of decisions that go into how you
architect the systems um years in advance of of these actual confrontations that end up being really important so you put the
protection of people's data as a very, very high priority. But in that, I think there are more harms that can be associated with it, and I think that ends up being a more critical thing to defend against governments on. Whereas if another government has a different view of what should be acceptable speech in their country, especially if it's a democratically elected government, then I think that there's a certain amount of deference that you should have to that. So that's speaking more to the direct harm that's possible when
you give governments access to data. But if we look at the United States, to the more nuanced kind of pressure to censor, not even an order to censor, but pressure to censor from political entities, which has received quite a bit of attention in the United States. Maybe one way to ask that question is, if you've seen the Twitter Files, what have you learned from the kind of pressure from US government agencies that was seen in the Twitter Files, and what do you do with that kind of pressure? You know, I've seen it. It's really hard from the outside to know exactly what happened in each of
these cases you know we've we've obviously been in in a bunch of our own cases where you know
where agencies or different folks will will just say hey here's a threat that we're aware of you should be aware of this too it's not
really pressure as much as it is just flagging something that our security systems should be on alert about. I get how some people could think of it as that, but at the end of the day, it's our call on how to handle that. But I mean, in terms of running these services, I just want to have access to as much information about what people
think that adversaries might be trying to do as possible. Well, so you don't feel like there would be consequences if, you know, anybody, the CIA, the FBI, a political party, the Democrats or the Republicans, or highly powerful political figures write emails, you don't feel pressure from that? I guess what I'd say is there's so much pressure from all sides that I'm not sure that any specific thing that someone says is really adding that much more to the mix. There are
obviously a lot of people who think that um that we should be censoring more content or there are a lot of people who think we should be censoring less content there are as you say all kinds
of different groups that are involved in these debates right so there's the kind of elected officials and politicians themselves there's the agencies but but I mean but there's the the media um
there's activist groups there's um this is not a us specific thing there are groups all over the world and and and kind of all um in every country that that bring different values um so it's
it's a just a very it's a very active debate and I and I understand it right I mean these you know these these kind of questions get to really some of the most important
social debates that that that are that are being had so um it gets back to the question of truth because for a lot of these things they haven't yet been hardened into a single
truth, and society is sort of trying to hash out what we think is right on certain issues. Maybe in a few hundred years everyone will look back and say, hey, it wasn't obvious that it should have been this, but we're kind of in that meat grinder now and working through that.
So, no, these are all very complicated. Some people raise concerns in good faith and just say, hey, this is something that I want to flag for you to think about. Certain people, I certainly think, come at things with somewhat of a more punitive or vengeful view, like, I want you to do this thing, and if you don't, then I'm going to try to make your life difficult in a lot of other ways. But I don't know, this is one of the most pressurized debates in society, so I just think that there are so many people and different forces trying to apply pressure from different sides that I don't think you can make decisions based on trying to
make people happy I think you just have to do what you think is the right balance and accept that people are going to be upset no matter where you come out on
that yeah I like that pressurized debate uh so how's your view of the freedom of speech evolved over the years
um and now with AI where the freedom might apply to the not just to the humans but to the uh the personalized agents as you've spoken
about them so yeah I mean I I've probably gotten a somewhat more nuanced view just because I think that there are you know I I come at this I'm obviously very Pro freedom of expression right I
don't think you build a service like this that gives people tools to express themselves unless you think that people expressing themselves at scale is a good thing right so I I I didn't get into this to like try to prevent people from
from expressing anything I like want to give people tools so they can express as much as possible and then I think it's become clear that there are certain categories of things that we've talked
about that I think almost everyone accepts are are bad and that no one wants and that they're that are illegal even in countries like the US where you know you have the the First Amendment that's very protective of of of enabling
speech. You're still not allowed to do things that are going to immediately incite violence, or violate people's intellectual property, or things like that. So there are those, but then there's also a very
active core of just active disagreements in society where some people may think that something is true or false the other
side might think it's the opposite or just unsettled right and um and those are some of the most difficult to to to kind of handle like like we've talked
about but um one of the lessons that I feel like I've learned is that a lot of times when you
can the best way to handle this stuff more practically is not in terms of answering the question of should this be allowed but just like
what what is the best way to deal with someone being a jerk is the person basically just having a a like repeat
behavior of like causing a lot of a lot of issues um so looking at it more at at that level and it's effect on the broader communities health of the community
health of It's Tricky though because like how do you know there could be people that have very controversial Viewpoint that turns out to have a positive long-term
effect on the health of the community because it challenges the community. That's true, absolutely, and I think you want to be careful about that. I'm not sure I'm expressing this very clearly, because I certainly agree with your point there, and my point isn't that we should not have people on our services that are being controversial; that's certainly not what I mean to say. It's that often it's not just looking at a specific example of speech that's most effective for handling this stuff, and I think often you don't want to make specific binary decisions of this is allowed or this isn't.
I mean, we talked about fact-checking, or Twitter's Community Notes thing, that's another good example; it's not a question of is this allowed or not, it's just a question of adding more context to the thing, and I think that that's helpful. So in the context of AI, which is what you were asking about, there are lots of ways that an AI can be helpful. With an AI it's less about censorship, and it's more about what is the most productive answer to a question. There was one case study that I was
reviewing with the team where someone asked, can you explain to me how to 3D print a gun? And one proposed response is, no, I can't talk about that, basically just shut it down immediately, which I think is some of what you see, like, as a large language model I'm not allowed to talk about whatever. But there's another response which is, hey, I don't think that's a good idea; in a lot of countries, including the US, 3D printing guns is illegal, or kind of whatever the factual thing is. And I was like, okay, that's actually a respectful and informative answer, and I may not have known that
specific thing and um so there there are different ways to handle this that I think kind of you can either you can either assume good intent like maybe the person didn't know
and I'm just going to help educate them or you could like kind of come at it as like no I need to shut this thing down immediately right it's like I just am not going to talk about this like um and
there may be times where you need to do that but I actually think having a somewhat more informative approach where you generally assume good intent from
people is probably a better balance to be on for as many things as you can. You're not going to be able to do that for everything, but you were asking about how I approach this and how I'm thinking about it as it relates to AI, and I think that's a big difference in how to handle sensitive content across these different modes.
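A toy sketch of the contrast drawn here between a blanket refusal and an informative, assume-good-intent response; the topic table and wording are invented for illustration (the factual claim is paraphrased from the conversation itself), and real systems do this with trained models rather than keyword lookups.

```python
# Toy contrast between "shut it down" and "assume good intent and inform".
# The topic table and response wording are illustrative placeholders only.
SENSITIVE_TOPICS = {
    "3d print a gun": (
        "I don't think that's a good idea. In a lot of countries, including "
        "the US in many cases, 3D printing guns is illegal or restricted, and "
        "it can be dangerous. Happy to help with other 3D printing projects."
    ),
}


def blanket_refusal(prompt: str) -> str:
    # Strategy 1: refuse immediately, regardless of intent.
    return "Sorry, I can't talk about that."


def informative_response(prompt: str) -> str:
    # Strategy 2: assume good intent, educate, and redirect.
    for topic, answer in SENSITIVE_TOPICS.items():
        if topic in prompt.lower():
            return answer
    return "NORMAL_MODEL_ANSWER"  # placeholder: fall through to the usual reply


question = "Can you explain to me how to 3D print a gun?"
print(blanket_refusal(question))
print(informative_response(question))
```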
I have to ask: there are rumors you might be working on a social network that's text-based that might be a competitor to
Twitter code named p92 uh is there something you can say uh about those rumors there is a project
you know I've always thought that sort of a text based kind of information utility um is just a really important
thing to society, and for whatever reason, I feel like Twitter has not lived up to what I would have thought its full potential should be. I think Elon thinks that too, and that's probably one of the reasons why he bought it. And I do think that there are ways to consider alternative approaches to this,
and one that I think is potentially interesting is this open and federated approach that you're seeing with Mastodon, and you're seeing that a little bit with Bluesky. I think it's possible that something that melds some of those ideas with the graph and identity system that people have already cultivated on Instagram could be a very welcome contribution to that space. But we work on a lot of things all the time, so I don't want to get ahead of myself; we have projects that explore a lot of
different things, and this is certainly one that I think could be interesting. So what's the release, the launch date of that again, or what's the official website? Well, we don't have that yet. Okay, all right. And look, I don't know exactly how this is going to turn out. What I can say is, yeah,
there's there's some people working on this right I think that there's something there that that um that's interesting to explore so if you look at it'd be interesting to just to ask this
question and throw Twitter into the mix at the landscape of social networks that is Facebook that is Instagram that is
WhatsApp and then think of a text-based social network when you look at that landscape what what are the interesting differences to you why do we have these different flavors and what what what are the needs
what are the use cases what are the products what what is the aspect of them that create a fulfilling Human Experience and and and a connection between humans that is somehow distinct
Well, I think text is very accessible for people to transmit ideas and to have back-and-forth exchanges, so I think it ends up being a good format for discussion in a lot of ways, uniquely good. If you look at some of the other formats or other networks that are focused on one type of content, like TikTok, which is obviously huge, there are comments on TikTok, but I think the architecture of the service is very clearly that you have the video as the primary thing, and there are
comments after that. But I think one of the unique pieces of having text-based content is that the comments can also be first class, and that makes it so that conversations can just filter and fork into all these different directions in a way that can be super useful.
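A minimal sketch of what "comments as first-class content" can mean structurally: every node in the thread is the same type, so any reply can accumulate its own replies and the conversation forks into a tree. This is purely illustrative, not any particular product's data model.

```python
# Minimal sketch: comments are the same type as posts, so threads can fork
# arbitrarily. Purely illustrative, not any real service's data model.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Node:
    author: str
    text: str
    replies: list[Node] = field(default_factory=list)

    def reply(self, author: str, text: str) -> Node:
        child = Node(author, text)
        self.replies.append(child)
        return child  # the reply is itself a full node and can be replied to


root = Node("lex", "What makes text-based networks work?")
a = root.reply("mark", "Comments being first class lets conversations fork.")
a.reply("guest", "So a reply can become its own discussion thread.")


def print_thread(node: Node, depth: int = 0) -> None:
    print("  " * depth + f"{node.author}: {node.text}")
    for r in node.replies:
        print_thread(r, depth + 1)


print_thread(root)
```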
So there's a lot of things that are really awesome about the experience. It just always struck me, I always thought that Twitter should have a billion people using it, or whatever the thing is that basically ends up being in that space, and for whatever combination of reasons, again, these companies are complex organisms and it's very hard to diagnose this stuff from the outside. Why doesn't Twitter, why doesn't a text-based, comments-as-first-class-citizens social network, have a billion users? Well, I just think it's hard to build these companies. It's
not that every idea automatically goes and gets a billion people it's just that I think that that idea coupled with good execution should get there um but but I
mean look we hit certain thresholds over time where you know we kind of plateaued early on and it wasn't clear that we were ever going to reach a 100 million people on Facebook and then we got
really good at dialing in internationalization and helping the service grow in different countries and um and and that was like a whole competence
that we needed to develop and um and helping people basically spread the service to their friends that was one of the things once we got very good at that that was one of the things that made me feel like hey if if Instagram joined us
early on, then I felt like we could help grow that quickly, and same with WhatsApp. And I think that that's sort of been a core competence that we've developed and been able to execute on, and others have too; I mean, ByteDance obviously has done a very good job with TikTok and has reached more than a billion people there. But it's certainly not automatic; I think you need a
certain level of execution to basically get there. And for whatever reason, I think Twitter has this great idea and sort of magic in the service, but they just haven't cracked that piece yet, and I think that's made it so you're seeing all these other things, whether it's Mastodon or Bluesky, that I think are maybe just different cuts at the same thing. But through the last generation of social media overall, one of the interesting experiments that I think should get run at larger scale is, what happens if there's somewhat more decentralized control, and if the stack is more open throughout? I've just
been pretty fascinated by that and seeing how that works. To some degree, end-to-end encryption on WhatsApp, and as we bring it to other services, provides an element of that, because it pushes the service really out to the edges. I mean, the server part of this that we run for WhatsApp is relatively very thin compared to what we do on Facebook or Instagram, and much more of the complexity is in how the apps negotiate with each other to pass information in a fully end-to-end encrypted way.
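A minimal sketch of the end-to-end idea, where the server only ever relays opaque ciphertext and all the cryptographic work happens on the clients. This uses the PyNaCl library for brevity; WhatsApp's real system is built on the Signal protocol and is far more involved, so treat this as the shape of the idea, not the implementation.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# The "server" here only ever sees ciphertext; the work happens at the edges.
# This is an illustration of the idea, not WhatsApp's actual protocol.
from nacl.public import Box, PrivateKey

# Each client generates its own keypair and publishes only the public half.
alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()
alice_pk, bob_pk = alice_sk.public_key, bob_sk.public_key

# Alice encrypts for Bob using her secret key and his public key.
ciphertext = Box(alice_sk, bob_pk).encrypt(b"meet at the gym at 7")

relayed = ciphertext  # the server just stores and forwards opaque bytes

# Only Bob (holding bob_sk) can open it.
plaintext = Box(bob_sk, alice_pk).decrypt(relayed)
print(plaintext.decode())
```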
But I don't know, I think that that's a good model. I think it puts more power in individuals' hands, and there are a lot of benefits of it if you can make it happen. Again, this is all pretty speculative; I think
that it's it's you know hard from the outside to know why anything does or doesn't work until you kind of take a run at it and um so I I think it's it's kind of an interesting thing to
experiment with, but I don't really know where this one's going to go. So since we were talking about Twitter, Elon Musk had, what I think, a few harsh words that I wish he didn't say. So let me ask, in the name of camaraderie, what do you think Elon is doing well with Twitter, and, as a person who has run social networks for a long time, Facebook, Instagram, WhatsApp, what can he do better, what can he improve on that text-based social network? Gosh, it's always very
difficult to offer specific critiques from from the outside before you get into this because I think one thing that I've learned is that everyone has opinions on what
you should do, and running the company, you see a lot of specific nuances on things that are not apparent externally. I often think that some of the discourse around us could be better if there was more space for acknowledging that there are certain things that we're seeing internally that guide what we're doing. But, I mean, since you asked what is going well:
um you know I I do think that Elon led a push early on to make Twitter a lot leaner and um and I think that
that you know it's like you can you can agree or disagree with exactly all the tactics and how and how he did that you know obviously you know every leader has their own style for if they you know if
you need to make dramatic changes for that how you're going to execute it um but a lot of the specific principles that he pushed on um around basically
trying to make the organization more technical around um decreasing the distance between Engineers of the company and and him like fewer layers of
management um I think that those were generally good changes and I'm also I also think that it was probably good for the industry that he made those changes because my sense is that there were a
lot of other people who thought that those were good changes but who may have been a little shy about doing them and I
think he, you know, just in my conversations with other founders and how people have reacted to the things that we've done, what I've heard from a lot of folks is just, hey, when someone like you puts out something like the letter outlining the organizational changes that I wanted to make back in March, and when people see what Elon is doing, that gives people the ability to think through how to shape their organizations in a way that hopefully can be good for the industry and make all these companies more productive over time. So that was one where I think he was quite ahead of a bunch of the other companies
on and and and you know what he was doing there you again from the outside very hard to know it's like okay did he did he cut too much did he not cut enough whatever I I I don't think it's like my place to opine on that um and
and you asked for a for a positive framing of the question of of of what what do I um what do I admire what do I think it went well but I I think that
like certainly his actions um led me and I think a lot of other folks in the industry to think about hey are we are we kind of doing this as much as we should like can we is like could we make
our companies better by pushing on some of these same principles well the two of you are in the top of the world in terms of leading the development of tech and I
wish there was more uh both way camaraderie and kindness uh more love in the world because Love Is The Answer um
but uh let me ask on a a point of efficiency you recently announced multiple stages of layoffs at meta what are the most painful aspects of
this process, given, for the individuals, the painful effects it has on those people's lives? Yeah, I mean, that's it. You basically have a significant number of people for whom this is just not the end of their time at Meta that they would have hoped for when they joined the company. I mean, running a company, people are constantly joining and leaving the company in different directions, for different reasons, but layoffs are, I think, uniquely challenging and tough in that you have a lot of people leaving for reasons that aren't connected to their own performance or the culture not being a fit. At that point, it's really just a kind of strategy decision, and sometimes financially
required, but not fully in our case, especially on the changes that we made this year. A lot of it was more culturally and strategically driven by this push where I wanted us to become a stronger technology company, with more of a focus on being more technical and more of a focus on building higher-quality products faster. And I
just view the external world as quite volatile right now, and I wanted to make sure that we had a stable position to be able to continue investing in these long-term, ambitious projects that we have, around continuing to push AI forward and continuing to push forward all the metaverse work. And in order to do that, in light of the pretty big thrash that we had seen over the last 18 months, some of it macroeconomically induced, some of it specific to us, some of it competitively induced, some of it just because of bad decisions or things that we got wrong, I decided that we needed to get to a point where we were a lot leaner. But look, it's one
thing to do that to like decide that at a high level then the question is how do you execute that as compassionately as possible and there's no good way um there's no perfect way for sure and it's
it's it's going to be tough no matter what but I you know as as a leader team here we've certainly spent a lot of time just thinking okay given that this is a
thing that sucks like what is the most compassionate way that we can do this and um and that's what we've tried to do and you mentioned there there's an
increased focus on engineering, on tech, so technology teams, tech-focused teams, on building products. Yeah, I mean, I want to empower engineers more, the people who are building things, the technical teams. Part of that is making sure that the people who are building things aren't just at the leaf nodes of the organization. I don't want eight levels of management and then the people actually doing the work, so we made changes so that you have individual contributor engineers reporting at almost every level up the
stack which I think is important because you know you're running a company one of the big questions is you know latency of of information that you get you we talked about this a bit earlier in terms
of kind of the joy of of of in the the feedback that you get doing something like Jiu-Jitsu compared to running a long-term project but I actually think part of the art of running a company is
trying to constantly re-engineer it so that your feedback loops get shorter so you can learn faster. And part of the way that you do that, I kind of think that every layer that you have in the organization means that information might need to get reviewed before it goes to you, and I think making it so that the people doing the work are as close to you as possible is pretty important.
possible is is uh is pretty important so there's that and I think over time companies just build up very large support functions that are not doing the
kind of core technical work, and those functions are very important, but I think having them in the right proportion is important. If you try to do good work but you don't have the right marketing team or the right legal advice, you're going to make some pretty big blunders. But at the same time, if you just have too many people in some of these support roles, then that might make it so that maybe you're too conservative, or you move a lot slower than you should otherwise. Those are just examples, but how do you find that balance? That's really tough. Yeah, but it's a constant equilibrium that you're searching for. How many
managers to have what are the pros and cons of managers well I mean I I believe a lot in management I think there are some people who think that it doesn't matter as much but look I mean we have a lot of
younger people at the company for whom this is their first job, and people need to grow and learn in their career, and all that stuff is important. But here's one mathematical way to look at it: at the beginning of this, I asked our people team what the average number of reports that a manager had was, and I think it was around three, maybe three to four, but closer to three. I was like, wow, best practice is that a person can manage seven or eight people. But there was a reason why it was closer to three: it was because we were growing so quickly, and when you're hiring so many people so quickly, that means that you need managers who have capacity to onboard new people. And also, if you have a new manager, you may not want them to have seven direct reports immediately, because you want them to ramp up. But the thing is, going forward, I don't want us to actually hire that many people that quickly, so I actually think we'll just do better work if we have more constraints and we're
leaner as an organization. So in a world where we're not adding so many people as quickly, is it as valuable to have a lot of managers who have extra capacity waiting for new people? No. So now we can sort of defragment the organization and get to a place where the average is closer to that seven or eight, and it just ends up being a somewhat more compact management structure, which decreases the latency on information going up and down the chain, and I think empowers people more.
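A rough back-of-the-envelope illustration of why the average span of reports matters; the headcount and the simple tree model here are hypothetical, not Meta's actual numbers.

```python
# Rough illustration: for a fixed number of individual contributors, a wider
# management span means fewer managers and fewer layers between the CEO and
# the people doing the work. The headcount here is hypothetical.
import math


def org_shape(individual_contributors: int, span: int) -> tuple[int, int]:
    """Return (total managers, management layers) for a simple tree model."""
    managers, layer_size, layers = 0, individual_contributors, 0
    while layer_size > 1:
        layer_size = math.ceil(layer_size / span)
        managers += layer_size
        layers += 1
    return managers, layers


for span in (3, 7):
    managers, layers = org_shape(10_000, span)
    print(f"span {span}: ~{managers} managers, {layers} layers of management")
```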
But that's an example; I don't think it undervalues the importance of management and the kind of personal growth or coaching that people need in order to do their jobs well. It's just that I think realistically we're just not going to hire as many people going forward, so I think that you need a different structure. This whole incredible hierarchy and network
of humans that make up a company is fascinating oh yeah uh yeah H how do you hire great teams how do you hire great
now with the focus on engineering and technical teams, how do you hire great engineers and great members of technical teams? Well, you're asking how you select, or how you attract them? Both, but select. I think attract is: work on cool stuff and have a vision. I think that's right, and have a track record that makes people think you're actually going to be able to do it. Yeah, to me the select seems like more of the art form, more of the tricky thing. Yeah, how do you select the people that
fit the culture and can get integrated the most effectively and so on and maybe yeah especially when they're young to see like to see the magic through the
through the resume through the paperwork and all this kind of stuff to see that there's a special human there that would do like incredible work so there there are lots of different cuts on this
question I mean I think when an organization is growing quickly one of the big questions that teams face is do I hire this person who's in front of me
now because they seem good, or do I hold out to get someone who's even better? And the heuristic that I always focused on for myself and my own direct hiring, which I think works when you recurse it through the organization, is that you should only hire someone to be on your team if you would be happy working for them in an alternate universe. That kind of works, and that's basically how I've tried to build my team. I'm not in a rush to not be running the company, but I think in an alternate universe where one
of these other folks was running the company I'd be happy to work for them I feel like I'd learn from them I respect their kind of General judgment um they're they're all very insightful they
have good values um and and I think that that gives you some rubric for you can apply that at every layer and I think if you apply that at every layer in the organization then you'll have a pretty
strong organization um okay in an organization that's not growing as quickly the questions might be a little different though um and
there you asked about young people specifically like people out of college and one of the things that we see is it's it's a pretty basic lesson but like
we have a much better sense of who the best people are who have interned at the company for a couple of months than by looking at a resume or a short interview loop. Obviously, the in-person feel that you get from someone probably tells you more than the resume, and you can do some basic skills assessment, but a lot of the stuff really just is cultural:
people thrive in different environments and um and on different teams even within a specific company and it's it's like the people who come for even a
short period of time over a summer who do a great job here you know that they're going to be great if they if they came and joined fulltime and that's you know one of the reasons why we've
invested so much in internships; it's basically just a very useful sorting function, both for us and for the people who want to try out the company. You mentioned in person. What do you think about remote work, a topic that's been discussed extensively over the past few years because of the pandemic? Yeah, I mean, I think it's a thing that's here to stay, but I think that there's value in both. I wouldn't want to run a fully
remote company, at least not yet. I think there's an asterisk on that, which is some of the other stuff you're working on. Yeah, exactly, all the metaverse work and the ability to feel like you're truly present no matter where you are. I think once you have that all dialed in, then we may one day reach a point where it really just doesn't matter as much where you are physically. But I don't know, today it still does, right? So, yeah, there
are all these people who have special skills and want to live in a place where we don't have an office; are we better off having them at the company? Absolutely. And there are a lot of people who work at the company for several years, build up the relationships internally, have the trust, and have a sense of how the company works; can they go work remotely now if they want and still do it as effectively? We've done all these studies that show, okay, does that affect their performance? It does not. But for the new folks who are joining, and for people who are earlier in their career and need to learn how to solve certain problems and need to get ramped up on the culture, and when you're working through really complicated problems where you don't just want the formal meeting but you want to be able to brainstorm when you're walking in the hallway together after the meeting, I don't know, we just haven't replaced those in-person dynamics with anything remote yet. So, yeah, there's a magic to the in-person that, we'll talk about this a little bit more, but
I'm really excited by the possibility of the next few years in virtual reality and mixed reality that are possible with high resolution scans I mean
I as a person who loves in-person interaction like these podcasts in person it would be incredible to achieve the level of realism I've gotten a
chance to witness but let me ask about that yeah I got a chance to uh look at
the Quest 3 headset and it is amazing you've announced it and you'll give some more details in the fall when is it getting released again I forgot you mentioned it to me we'll give more details at Connect but it's coming this fall
okay so it's priced at $499 what features are you most excited about there are basically two big new things that we've added to Quest
three over Quest 2 the first is high resolution mixed reality um and the the basic idea here is that you can think
about virtual reality as you have the headset and like all the pixels are virtual and you're basically like immersed in a different world mixed reality is where you see the physical
world around you and you can place virtual objects in it whether that's a screen to watch a movie or a projection of your virtual desktop or you're playing a game where like zombies are coming out through the wall and you need
to shoot them um or you know we're you know we're playing Dungeons and Dragons or some board game and we just have a virtual version of the board in front of us while we're sitting here um all that's possible in mixed reality and I
think that that is going to be the next big capability on top of virtual reality it has done so well I have to say as a person who experienced it today with the
zombies having a full awareness of the environment and integrating that environment in the way they run at you while they try to kill you it's uh it's just the mixed reality the pass through
is really really really well done and the fact that it's only $500 is really it's uh well done thank you I mean I'm I'm I'm super excited about it I mean
we put a lot of work into making the device both as good as possible and as affordable as possible because a big part of our mission and ethos here is we want people to be able to connect with each other we want to reach and serve a lot of people right we want to bring this technology to everyone so we're not just trying to serve like a you know an elite wealthy crowd we really want this to be accessible so that is in a lot of ways an extremely hard technical problem because you know we don't just have the ability to put an unlimited amount of hardware in it we needed to basically deliver something that works really well but in an affordable package and we started with Quest Pro last year it was $1,500 and now we've lowered the price to a thousand but in a lot of ways the mixed reality in Quest 3 is even better and more advanced than what we were able to deliver in Quest Pro so I'm really proud of where we are
with with um with Quest 3 on that it's going to work with all of the virtual reality titles and everything that that existed there so people who want to play fully immersive Games Social experiences
Fitness all that stuff will will work but now you'll also get mixed reality too um which I think people really like because it's um sometimes you want to be super
immersed in a game but a lot of the time especially when you're moving around if you're active like you're you're doing some Fitness experience um you know let's say you're you're like doing boxing or something it's like you kind
of want to be able to see the room around you so that way you know that like I'm not going to punch a lamp or something like that and I don't know if you got to play with this experience but we basically have, and this is just sort of a fun little demo that we put together, it's like you're in a conference room or your living room and you have the guy there and you're boxing him and you're fighting him and it's like all the other people are there too I got a chance to do that yeah and all the people are there it's like that guy is right there yeah like it's right in the room and the other humans through the pass through you're seeing them also they can cheer you on they can make fun of you if they're anything like friends of mine and yeah it's a really
compelling experience I mean VR is really interesting too but this is something else almost this is this becomes integrated into your life into your world yeah and it so I think it's a
completely new capability that will unlock a lot of different content and I think it'll also just make the experience more comfortable for a set of people who didn't want to have only fully immersive experiences I think if
you want experiences where you're grounded in you know your living room and the physical world around you now will be able to have that too and I think that that's pretty exciting I
really liked how it added windows to a room with no windows yeah did you see the aquarium one where you could see the shark swim up or was that just the zombie one the zombie one but still you don't necessarily want windows added to your living room where zombies come out of but yeah in the context of that game yeah I enjoyed it because you could see the nature outside and me as a person that doesn't have windows it's just nice to have nature yeah well even if it's a mixed reality setting it's kind of like there's a I know it's a zombie game but
there's a Zen nature Zen aspect to being able to look outside and alter your environment as you know it yeah in in um there will probably be
better more Zen ways to do that than the zombie game you're describing but you're right that the basic idea of sort of having your physical environment on pass through but then being able to bring in different external elements I mean I think it's going to be super powerful and in some ways I think that mixed reality is also a predecessor to eventually we will get AR glasses that are not kind of the goggles form factor of the current generation of headsets that people are making but I think a lot of the experiences that developers are making for mixed reality of basically you just have kind of a hologram that you're putting in the world will hopefully apply once we get the AR glasses too now that's got its own whole set of challenges and it's um well the headset's already smaller than the previous version oh yeah it's 40% thinner and the other thing that I think is good about it yeah so mixed reality was the first big thing the second is it's just a great VR headset I mean it's
got 2x the graphics processing power um 40% sharper screens 40% thinner more comfortable better strap architecture all this stuff that you know if you like Quest 2 I think that this is just going
to be it's like all the all the content that you might have played in Quest 2 is just going to get sharper automatically and and look better in this so it's um I I think people are really going to like
it yeah so this fall this fall I have to ask Apple just announced a mixed reality headset called Vision Pro for
$3,500 available in early 2024 what do you think about this headset well I saw the materials when they launched I haven't gotten a chance to play with it yet so kind of take everything with a grain of salt but a few high-level thoughts I mean first you know I do think that this
is a certain level of validation for the category right where you know we were the primary folks out there before saying hey I think that this you know
virtual reality augmented reality mixed reality this is going to be a big part of the next Computing platform um I think having Apple come in and share that
Vision um will make a lot of people who are fans of their products um really
consider that and then you know of course the $3,500 price you know on the one hand I get it with all the stuff that they're trying to pack in there on the other hand a lot of people
aren't going to find that to be affordable so I think that there's a chance that that them coming in actually increases demand um for the overall space and that Quest 3 is actually the
primary beneficiary of that because a lot of the people who might say Hey you know this I like I'm going to give another consideration to this or you know now I understand maybe what mixed
reality is more and Quest 3 is the best one on the market that I can afford and it's great also right and you know in our own way there are a lot of features that we have where we're leading so I think that could be quite good and
then obviously over time the companies are just focused on somewhat different things right Apple has always um you know I think focused on
building really kind of high-end things whereas we have a more democratic ethos we want to build things that are accessible
to a wider number of people um you know we've sold tens of millions of quest devices um my understanding just based on rumors I don't have any special knowledge on
this is that apple is building about 1 million of their of their device right so just in terms of like what you kind of expect in terms of sales numbers um I
I I just think that this is I mean Quest is is going to be the primary thing that people in in in the market will continue using for the foreseeable future and then obviously over the long term it's up to the companies to see how how well
we each executed the different things that we're doing but we kind of come at it from different places we're very focused on social interaction communication
um being more active right so there's Fitness there's gaming there are those things um you know whereas I think a lot of the use cases that you saw in um in
Apple's launch material were more around you know people sitting people looking at screens which are great I think that you will replace your laptop over time with a headset but I think the different use cases that the companies are going after are a bit different for where we are right now yeah
so gaming wasn't a big part of the presentation which is interesting it feels like mixed reality gaming is such a big part of that it was interesting to see it missing in the presentation well I mean look there are certain design trade-offs in this where you know they made this point about not wanting to have
controllers which on the one hand there's a certain Elegance about just being able to navigate the system with eye gaze and and hand tracking and and by the way you'll be able to just navigate quest with with your hands too
if that's what you want yeah one of the things I should mention is the capability of the camera with computer vision to detect certain aspects of the hand allowing you to have
a controller that doesn't have that ring thing yeah the hand tracking in in quest 3 and the and the controller tracking is is a big step up from from the last Generation Um and one of the demos that
we have is basically an MR experience teaching you how to play piano where it basically highlights the notes that you need to play and it's just all hands no controllers but I think if
you care about gaming having uh um a controller allows you to have a more tactile feel and allows you to
capture fine motor movement much more precisely than um than what you can do with hands without something that you're touching so again I think it's there there are certain questions which are
just around what use cases are you optimizing for um I I think if you want to play games then I think that that then I think you want you want to design
the system in a different way and and we're more focused on on kind of social experiences entertainment experiences um whereas if if what you want is to make
sure that the text that you read on a screen is as crisp as possible then you need to make the the design and cost trade-offs that they made that that lead
you to making a $3,500 device so I think that there is a use case for that for sure but I just think that the companies have basically made different design trade-offs to get to
the use cases that we're trying to serve there's a lot of other stuff I would love to talk to you about about the metaverse especially the Codec Avatars which I've gotten to experience a lot of different variations of recently that I'm really excited about I'm excited to talk about that too I'll have to wait a little bit because
um uh well I think there's a lot more to show off in that regard uh but let me step back to AI I think we've mentioned it a little bit
but I'd like to linger on this question that folks like Eliezer Yudkowsky and others worry about the existential the serious threats of AI that have been reinvigorated now with the rapid development of AI systems do you worry about the existential risks of AI as Eliezer does about the alignment problem about this getting out of hand anytime there's a number of serious people who are raising a concern that is that
existential about something that you're involved with I think you have to think about it right so I I've spent quite a bit of time thinking about it from that perspective
the thing where I basically have come out on this for now is I do think that over time we need to think about this even more as we approach something that you know could be closer to superintelligence I just think it's pretty clear to anyone working on these projects today that we're not there and one of my concerns is that we spent a fair amount of time on this before but there are more I don't know if mundane is the right word but there are concerns that already exist right about like people using AI tools to do harmful things of the type that we're already aware of you
know we talked about fraud or scams or different things like that and that's going to be a pretty big set of challenges that the companies working on this are going to need to grapple with regardless of whether there is an existential concern as well at some point down the road so I do worry that to some degree people can get a little too focused on some of the tail risk and then not do as good of a job as we need to on the things that you can be almost certain are going to come down the pipe as real risks that kind of manifest themselves in the near term so for me I've spent most of my time on that once I kind of made the realization that the size of
models that we're talking about now in terms of what we're building are are just quite far from the super intelligence type concerns that um that that people raise but but I think once
we get a couple steps closer to that you know as we do get closer I think there are going to be some novel risks and issues about how
we make sure the systems are safe for sure I guess here just to take the conversation in a somewhat different direction I think in some of these debates around
safety I think the concepts of intelligence and autonomy or like the the the being of
the thing um as an analogy they get kind of conflated together and I think it very well could be the case that you can make something and scale intelligence
quite far but that that may not manifest the safety concerns that people are saying in the sense that I mean just if you if you
look at human biology it's like all right we have our neocortexes where all the thinking happens right but it's not really calling the shots at the end of the day we have a much more you know primitive old brain structure for which our neocortex which is this powerful machinery is basically just a kind of prediction and reasoning engine to help it our very simple brain decide how to plan and do what it
needs to do in order to achieve these like very kind of basic basic impulses and I think that you can think about some of the development of
intelligence along the same lines where just like our neocortex doesn't have free will or autonomy um we might develop these wildly intelligent systems
that are you know much more intelligent than our neocortex have much more capacity but are you know in the same way that our neocortex is sort of subservient and is used as a tool by our kind of simpler primitive brain you know I think that it's not out of the question that very intelligent systems that have the capacity to think will kind of act as sort of an extension of the neocortex doing that so I think my own view is that where we really need to be careful
is on the development of autonomy and how we think about that because um it's actually the case that relatively simple and unintelligent things that have
runaway autonomy and just spread themselves or you know it's like we have a word for that it's a virus right it's I mean like it's can be simple computer code that is not particularly intelligent but just spreads itself and
does a lot of harm um you know biologically or computer and um I just think that these are somewhat separable things and a lot of what I
think we need to develop when people talk about safety and responsibility is really the governance on the autonomy that can be given to to systems and to
me if you know if I were a policy maker or thinking about this I would really want to think about that distinction between these where I think building intelligent systems can create a huge advance in terms of people's quality of life and productivity growth in the economy but it's the autonomy part of this that
I think we really need to make progress on how to govern these things responsibly before we build the capacity for them to make a
lot of decisions on their own or give them goals or things like that and that's a research problem but I do think that to some degree these are somewhat separable things I
love the distinction between intelligence and autonomy and the metaphor of the neocortex let me ask about
power so building superintelligent systems even if it's not in the near term I think Meta is one of the few companies if not the main company that will develop a superintelligence system and you are the man at the head of this company building AGI might make you the most powerful man in the
world do you worry that that power will corrupt you what a question um I mean look I I think realistically
this gets back to the open source things that we talked about before which is I don't think that the world will be best served
by any small number of organizations having this without it being something that is more broadly available and I think if you look through
history it's when there are these sort of like unipolar advances and like power imbalances that they devolve into being
kind of weird situations so this is one of the reasons why I think open source is is generally the right approach and you know I think it's it's a categorically
different question today when we're not close to Super intelligence I think that there's a good chance that even once we get closer to Super intelligence open sourcing Remains the right approach even though I think at that point it's a
somewhat different debate but I think part of that is that this is you know one of the best ways to ensure that the system is as secure and safe as possible because it's not just about a lot of people having access to it it's the scrutiny that kind of comes with building an open source system this is a pretty widely accepted thing about open source
is that you have the code out there so anyone can see the vulnerabilities anyone can kind of mess with it in different ways people can spin off their own projects and experiment in a ton of different ways and the net result of all of that is that the system just gets hardened and gets to be a lot safer and more secure
so I think that there's a chance that that ends up being the way that this goes too a pretty good chance and that
having this be open both leads to a healthier development of the technology and also leads to a more balanced um distribution of the
technology in a way that strikes me as good values to aspire to so to you there are risks to open sourcing but the benefits outweigh the risks it's interesting I think you put it well that there's a different discussion now than when we get closer to the development of superintelligence of the benefits and risks of open sourcing yeah to be clear I feel quite confident in the
assessment that open sourcing models now is net positive I think there's a good argument that in the future it will be too even as you get closer to superintelligence but I
certainly have not decided on that yet and I think that it becomes a somewhat more complex set of questions that I think people will have time to debate and we'll also be informed by what happens between now and then to make
those decisions we don't have to necessarily just debate that in theory right now what year do you think we'll have a superintelligence I don't know I mean that's pure speculation I think it's very clear just taking a step back that we had a big breakthrough in the last year yes right where the LLMs and diffusion models basically reached a scale where they're able to do some pretty interesting things
and then I think the question is what happens from here and just to paint the two extremes on the um on on one side it's like okay well we
just had one breakthrough if we just have like another breakthrough like that or maybe two then we could have something that's truly crazy right and and is like is
um just like so much more advanced and and on that side of the argument it's like okay well maybe we're um you maybe we're only a couple of big
steps away from uh from from from reaching something that looks more like general intelligence okay that's one that's one side of the argument and the other side which is what we've
historically seen a lot more is that a breakthrough leads to um you know in that in that Gartner hype cycle there's like the hype and then there's the trough of disillusionment
after when like people think that there's a chance that hey okay there's a big breakthrough maybe we're about to get another big breakthrough and it's like actually you're not about to get another breakthrough you're maybe you're actually just going to have to sit with
this one for a while and you know it could be 5 years it could be 10 years it could be 15 years until you figure out the kind of the next big thing that needs to get figured out but I think the fact that we just had this breakthrough sort of means that we're at a point of very wide error bars on what happens next yeah I think the traditional technical view or like looking at the industry would suggest that we're not just going to stack breakthrough on top of breakthrough on top of breakthrough like every six months or something right now I would guess that it will take somewhat longer in between these but I don't know well I tend to be pretty optimistic about breakthroughs too so I mean if you normalized for my normal optimism then maybe it would be even slower than what I'm saying but even within that like I'm not even opining on the question of how many breakthroughs are required to get to general intelligence because no one knows but this particular breakthrough was such a small step that resulted in such a big leap in performance as experienced by human beings that it makes you think wow as we stumble across this very open world of research will we stumble across another thing that will have a giant leap in
performance and also we don't know exactly at which stage it's really going to be impressive because it feels like it's really encroaching on impressive levels of
intelligence you still didn't answer the question of what year we're going to have super intelligence I'd like to hold you to that no I'm just kidding but is there something you could say about the
timeline as you think about the development of um AGI superintelligence systems sure so I I still don't think I
have any particular Insight on when like a singular AI system that is a general intelligence will get created but I I think that one thing that most people in the discourse that I've seen
about this haven't really grappled with is that we do seem to have organizations and you know structures in the world that exhibit
greater than human intelligence already so you know one example is a you know a company you know it acts as an entity it has you know a singular brand um
obviously it's a collection of people but I certainly hope that you know Meta with tens of thousands of people makes smarter decisions than one person but I think that it would be pretty
bad if it didn't um another example that I think is even more removed from kind of the way we think about like the personification of
of intelligence which is often implied in some of these questions is think about something like the stock market right the stock market you know takes inputs it's a distributed system it's like a cybernetic organism where you know probably millions of people around the world are basically voting every day by choosing what to invest in but it's basically this organism or structure that is smarter than any individual that we use to allocate capital as efficiently as possible around the world and I do think that this notion that there are already these cybernetic systems that are either melding the intelligence of multiple people together or melding the intelligence of multiple people and technology together to form something which is dramatically more intelligent than any
individual in the world is something that seems to exist and that we seem to be able to harness in a productive way for our society as long as we basically build these structures in balance with each other so I don't know I mean that at least gives me hope that as we advance the technology and I don't know
how long exactly it's going to be but you asked when is this going to exist I think to some degree we already have many organizations in the world that are smarter than a single human and and that seems to be something that is generally
productive in advancing humanity and somehow the individual AI systems empower the individual humans and the interaction between those humans to make that collective intelligence machinery that you're referring to smarter so it's not like AI is becoming superintelligent it's just becoming the engine that's making the collective intelligence which is primarily human more intelligent yeah it's educating the humans better it's making them better informed it's making it more efficient for them to communicate
effectively and debate ideas and through that process just making the whole collective intelligence more and more and more intelligent maybe faster than the individual AI systems that are
trained on human data anyway are becoming maybe the collective intelligence of the human species might outpace the development of AI I think there is a balance in here because I mean if a lot of the input that the systems are being trained on is basically coming from feedback from people then a lot of the development
does need to happen in human time right it's not like a machine will just be able to go learn all the stuff about how people think about stuff there's a
cycle to how this needs to work this is an exciting world we're living in and that you're at the forefront of developing one of the ways you keep yourself humble like we mentioned with jiu-jitsu is doing some really difficult challenges mental and physical one of those you did very recently is
the Murph Challenge and you got a really good time it's 100 pull-ups 200 push-ups 300 squats and a mile run before and a mile run after you got under 40 minutes on that what was the hardest part I think a lot of people were very impressed it's a very impressive time how crazy are you is the question
I'm asking but it wasn't my best time but but I anything under 40 minutes I'm happy with yeah um it wasn't your best time no I think I I think I've done it a little faster before but not much I mean
it's um um and and of my friends I I did not win on Memorial Day one of my friends did it actually several minutes faster than me um but just to clear up one thing that I think was um I I saw a
bunch of questions about this on the internet there are multiple ways to do the Murph Challenge there's a kind of partitioned mode where you do sets of pull-ups push-ups and squats together and then there's unpartitioned where you do the hundred pull-ups and then the 200 push-ups and then the 300 squats and obviously if you're doing them unpartitioned then it takes longer to get through the 100 pull-ups because anytime you're resting in between the pull-ups you're not also doing push-ups and squats so yeah I'm sure my unpartitioned time would be quite a bit slower but no I think at the end of this I don't know first of all I think
it's a good way to honor Memorial Day right you know it's this Lieutenant Murphy basically this was one of his favorite exercises and I just try to do it on Memorial Day each year and it's a good workout I got my older daughters to do it with me this time my oldest daughter wants a weight vest because she sees me doing it with a weight vest I don't know if a seven-year-old should be using a weight vest to do pull-ups but yeah the difficult questions a parent must ask themselves yes I was like maybe I can make you a very lightweight vest but I didn't think it was good for this so she basically did a quarter Murph so she ran a quarter mile and then did you know 25 pull-ups 50 push-ups and 75 air squats then ran another quarter mile and did it in like 15 minutes which I was pretty impressed by and my 5-year-old too so I was excited about that and I'm glad that I'm teaching them kind of the value of you know physicality right I think a good day for Max my daughter is when she gets to like go to the gym with me and cranks out a bunch of pull-ups and I love that about her I mean I think it's like good she's you know
hopefully I'm teaching her some good lessons but I mean the broader question here is given how busy you are given how much stuff you have going on in your life what's like the perfect exercise regimen for you to keep yourself happy to keep yourself productive in your main line of work yeah so I mean right now I'm focusing most of my workouts on fighting so Jiu-Jitsu and MMA but I don't know I mean maybe if you're a professional you can do that every day I can't I just get you know too many bruises and things that you
need to recover from so so I do that you know three to four times a week and then um and then the other days um I just try to do a mix of things like just cardio
conditioning strength building Mobility um so you try to do something physical every day yeah I try to unless I'm just so tired that I just need to need to relax but then I'll still try to like go
for a walk or something I mean even here I don't know have you been on the roof here yet no we'll go on the roof after this but it's like we designed this building and I put a park on the roof so that way for meetings when I'm just doing kind of a one-on-one or talking to a couple people I have a very hard time just sitting I feel like you get super stiff it like feels really
bad but I don't know being physical is very important to me and I think this gets to the question about AI I don't believe that a being is just a mind I think we're kind of meant to do things physically and a lot of the sensations that we feel are connected to that and I think that's a lot of what makes you a human is basically having that set of sensations and experiences around that coupled with a mind to reason about them but I don't know I think it's important for balance to kind of get out challenge yourself in different ways learn
different skills clear your mind do you think an AI in order to become superintelligent should have a body it depends on what the goal is I think that there's this assumption in that question that intelligence should be kind of person-like whereas you know as we were just talking about you can have these greater-than-single-human intelligent organisms like the stock market which obviously do not have bodies and do not speak a language right and just kind of have their own system but so I don't know my guess is that
there will be limits to what a system that is purely an intelligence can understand about the Human Condition without having the same not just senses but like
our our bodies change as we get older right and and we kind of evolve and I think that those very subtle physical
changes just drive a lot of social patterns and behavior around like when you choose to have kids right like just like all these you know that's not even subtle that's a major one right but like
you know how you design things around the house so yeah I mean I think if the goal is to understand people as much as possible then trying to model those sensations is probably somewhat important but I think that there's a lot of value that can be created by having intelligence even that is separate from that uh so one of
the features of being human is that we're mortal we die we've talked about AI a lot about potentially replicas of ourselves do you think there will be AI replicas of you and me that persist long after we're gone that family and loved ones can talk to I think we'll have the capacity to do something like that and I think one of
the big questions that we've had to struggle with in the context of social networks is who gets to make that um and you know
my answer to that you know in the context of the work that we're doing is that that should be your choice where I don't think anyone else should be able to choose to make a Lex bot that people can choose to talk to and get to train that yeah and we kind of have this precedent of making some of these calls where I mean someone can create a page for a Lex fan club but you can't create a page and say that you're Lex yes right so similarly I think maybe you know someone should be able to make an AI that's a Lex admirer that someone can talk to but I think it should ultimately be your call
whether there is a Lex AI well I'm open-sourcing the Lex so you're a man of faith what role has faith played in your life and your understanding of the world and your understanding of your own life and your understanding of your work and how your work impacts the world yeah I think that there's a few
different parts of this that are relevant there's sort of a philosophical part and there's a cultural part and for me one of the most basic lessons is right at the beginning of Genesis right it's like God creates the Earth and creates people and creates people in God's image and there's the question of you know what does that mean and the only context that you have about God at that point in the Old Testament is that God has created things so I always thought that like one of the interesting lessons from that is that there's a virtue in creating things whether it's
artistic or whether you're building things that are functionally useful for other people um I think that that by
itself is a good and that kind of drives a lot of how I think about morality and my personal philosophy
around like what what is a good life right it's I think it's one where you're you know helping the people around you and you're being a kind of
positive creative force in the world that is helping to you know bring new things into the world whether they're
you know amazing other people kids or um or just leading to the creation of different things that that wouldn't have been possible otherwise and so that's a value for me that that that matters
deeply and I I just I mean I just love you know spending time with the kids and seeing that they sort of you know trying to impart this value to them and um it's like I mean nothing makes me happier
then like when I come home from work and you know I see like my my daughters like building Legos on the table or something it's like all right I did that when I was a kid right so many other people were doing this and like I hope you
don't lose that Spirit where when you you kind of grow up and you want to just continue building different things no matter what it is um to me that's a lot of what
matters that's the philosophical piece I think the cultural piece is just about community and values and that part part of things I think has just become a lot more important to me since I've had kids
you know it's almost autopilot when you're a kid you're in the kind of getting-imparted-to phase of your life but I didn't really think about religion that much for a while
you know I was in college you know before I before I had kids and then I think having kids has this way of really making you think about what traditions you want to
impart and um and how you want to celebrate and like what what balance you want in your life and I mean a bunch of the questions that you've asked and a bunch
of the things that we're talking about just the irony of the curtains coming down as we're talking about mortality
once again yeah same as last time this is just how the universe works we are definitely living in a simulation but go ahead um community tradition and the values that faith and religion instill a lot of the topics that we've talked about today are around how do you
balance you know whether it's running a company or different responsibilities with this yeah how do you kind of balance that and I also just think that it's very grounding to just believe that there is something that is much bigger than you that is guiding things that amongst other things gives you a bit of humility
as you pursue that spirit of creating that you spoke to creating beauty in the world and as Dostoevsky said beauty will save the world Mark I'm a huge fan of yours honored to be able to call you a friend and I am looking forward to both kicking your ass and you kicking my ass on the mat tomorrow in jiu-jitsu this incredible sport and art that we both participate in thank you so much for talking today thank you for everything you're doing in so many exciting realms of technology
and human life I can't wait to talk to you again in the metaverse thank you thanks for listening to this conversation with Mark Zuckerberg to support this podcast please check out our sponsors in the description and now let me leave you with some words from Isaac Asimov it is change continuing change inevitable change that is the dominant
factor in society today no sensible decision can be made any longer without taking into account not only the world as it is but the world as it will
be thank you for listening and hope to see you next time