Aravind Srinivas: 'Success is about the relentless pursuit of knowledge'
By OxfordUnion
Summary
## Key takeaways
- **Education Never Ends**: Education is not an event. You don't finish an education; you can choose to stop it or keep learning forever through simple questions and answers. [00:00], [06:03]
- **No Network? Build an AI Mentor**: Lacking a network in Chennai, Aravind faced basic founder questions like employee health insurance with no one to ask, so he and his co-founders invented Perplexity AI to provide those answers. [02:25], [03:14]
- **AI Ignores Tradition**: AI will make decisions from booking flights to career paths without concern for tradition, putting those who celebrate it at a disadvantage. [03:41], [04:13]
- **Ilya Called PhD Research Wrong**: Ilya Sutskever bluntly told Aravind his conventional Berkeley PhD research was wrong; instead, execute two simple ideas with massive compute and internet-scale training, which became ChatGPT. [04:38], [05:18]
- **Prioritize Accuracy Over Virality**: A former Googler advised against launching Perplexity for being 'boring' and truthful, suggesting viral chatbots with hallucinations instead, but Aravind insisted on accurate answers to make the world smarter. [15:25], [16:12]
- **Curiosity Fuels AI Success**: Curious employees who use AI to ask good questions become multi-dimensional, efficient multitaskers, even outside their comfort zones, giving them a glorious future. [20:02], [20:47]
Topics Covered
- Elite networks lose value with AI
- Tradition disadvantages AI decision-makers
- Execute known ideas with massive compute
- Accuracy chains knowledge generation
- Prioritize truth over virality
Full Transcript
Education is not an event. You don't finish an education. You can choose to stop it or you can choose to keep learning forever.
How? Simple questions and answers.
Tonight we have the honor of inviting and speaking to Aravind Srinivas, the CEO and co-founder of Perplexity AI, a major AI disruptor in the world of search, challenging the likes of Google and OpenAI. Born in Chennai, India, Aravind holds dual degrees in electrical engineering from IIT Madras and a PhD in computer science from UC Berkeley, with a research background at leading institutions including OpenAI, Google Brain, and DeepMind. Perplexity is now processing nearly 780 million user queries per month and is currently valued at $9 billion, with support from figures such as Jeff Bezos and companies such as Nvidia. Aravind is recognized not only for his technical expertise but for his vision to transform how the world accesses information. Please join me in warmly welcoming the one and only Aravind Srinivas.
This will be okay. Um, first of all, hello everybody.
Thank you for having me here. I don't
want to speak for too long because I actually want to get to the Q&A. But
first, uh, let me tell you why. I mean,
it's kind of obvious, right? Perplexity
is all about questions and answers.
But Q&A is actually more important for you specifically as students from a university like Oxford.
You are students at the beginning of a life of questioning.
And you may be the leaders of the world in the coming age of AI.
When you started at Oxford, it was reasonable to expect certain advantages.
The network, the prestige, the opportunities.
AI is about to change that calculation.
Let me tell you why I know this.
I grew up in Chennai, India. When I
wanted to start a company in the US, I didn't have a lot of people that I could call on for help or advice. I didn't
have a network, a family or fellow alumni.
In other words, I didn't have the advantages that you'll all have coming out of Oxford.
So, when I became a founder and CEO, I was perplexed by the simplest of questions, like health insurance for employees. I didn't have anyone to call and tell me the answers to these questions. And there are two reasons why I could not get these answers. First
is they don't teach you all this in uh electrical engineering or computer science.
The second is I had no mentors or network to tell me.
So my co-founders and I decided to invent an AI to do that and that became Perplexity's core product.
With answers widely available, the network that your college gives you may not be as useful anymore.
And there's something else. As we stand here in these beautiful buildings, there is an element of tradition that guides how we think
from what we wear to how we treat each other and to our understanding of how things are generally done.
That will also change. We are entering a future where many of our decisions will be made by AI. And these could range from small decisions like booking flights or restaurant reservations to really large ones like choosing a career path or crafting a business strategy.
And AI will not be concerned with tradition.
So those of us who celebrate tradition and expect it to influence decision making will actually be at a disadvantage.
So I would urge all of you to be very careful about the people who tell you to resist the future.
History has many examples where conventional wisdom has turned out to be wrong. Let me give you an example. My
PhD research at Berkeley was very conventional for a PhD but it was wrong.
I met Ilya Sutskever, who was working on what would become ChatGPT, and I shared my ideas with him for my research, and he said, this is bad. It's all wrong.
He wasn't disrespectful about it but he was quite blunt.
I could have chosen to ignore him but instead I chose to be curious.
Contrary to what we believed on the academic side, Ilya said nothing new was needed for AI. Just execute two simple, well-known ideas sequentially, give it a ton of compute, and train on the entire internet. That became ChatGPT.
At that time, nobody understood it, but he foresaw the future, and I could tell he was right.
Now, up until this point, I've told you ways that academic inquiry may not set you up for success in the AI future. The
point in this story with Ilya is also where I could have chosen to drop out and start a company. I did not, and I don't think you should either. So, now
let me tell you ways in which your study makes you uniquely suited for success in the AI future.
The first is that education is not an event. You don't finish an education.
You can choose to stop it or you can choose to keep learning forever.
How? Simple questions and answers. At
Perplexity, we say that we serve curious people. What does that mean? To be curious is to always have more questions. So like me, you can finish an important education and still have questions about what it is like to run a company, or have questions about another
industry like health insurance.
So what your education has given you is the right frame of mind to keep asking more questions.
Here is the second superpower you will have. In academia, we all have this thing called peer review, because we all know that knowledge is built on the knowledge before it. Every answer is just the foundation for the next question. That is why you have professors, educators. They're not just here to teach you everything that we already know in the world. A book can do that already.
They are here to teach you everything we know and help you ask more questions, so that you can know more and help the next generation to do the same. At the heart of this chain is an important consideration: accuracy.
To build knowledge on the knowledge before it, you have to know that it is right or else everything after it could end up being wrong. A lot of people ask
me about trusting AI, but what they mean is in the privacy sense, the way you talk about whether you can trust social media companies with your data.
But now we have a technology that can actually give you answers to all your questions and make decisions on your behalf. So how can you trust that any of its outputs are accurate? This is exactly what we want to work on. We are working on it at Perplexity.
But we can't do this alone. We can't be the only people in the world working on accuracy in the outputs. But only
accurate outputs will help you get the next set of new questions. You all need to be a part of this. And you're all very uniquely suited for this. The
reason is pretty simple. It is easy to exit an incredible institution like Oxford and think you have all the answers to all of the world's questions.
But the way you change the world is the only way anyone has ever changed the world and that is to have all of the questions.
Let's start now. Michael is here to ask me questions and I'm ready. Thank you.
Thank you very much for that very inspiring speech. You've set me up here with quite a task to ask questions and hopefully get um impressive answers.
Thank you. Um just following up from your talk, you started off talking about your upbringing and how you had no advantages um no one to call on and a lot of that came from your own
perseverance and resilience that has got you here. What I want to ask is what do you think was unique about you growing up in the environment you grew up in, and how do you think it contributed to the kind of character that it takes to
become the successful man that you are today?
I would say fundamentally uh it keeps boiling down to curiosity, of course, perseverance, hard work, like trying to go against the odds, not taking no for an answer. I would, you know, attribute all these qualities, but uh one thing that has stayed with me is this relentless pursuit of knowledge.
Um my parents always bought me whatever books I wanted. Uh whatever savings they had, even if it was pretty minimal, they spent it all on my education, and uh they valued education more than, like, wealth or status in society. Uh to them it means even more that I actually got a PhD and did really original research and got papers accepted at conferences. My mom and dad are way more proud of those things than how much I earn or something like that. So, uh I would say that's stuck with me, and that's kind of core to who I am, and that's also core to what Perplexity is as a product. That's amazing.
You touched on education, an important point here. Um a lot of critics of um AI believe that the use of these um tools is actually making people much more lazy, much more dumb. You talked about that in your opening speech. Um could you expand your pushback on that narrative and that philosophy? Do you think that's true? And why do you think it's false? So it
all depends on how you view it. Uh in a world where AI can answer all your questions, like that's what our product is trying to do, uh should we just be lazy and not do anything about it anymore? Uh or should we actually feel like, oh wow, this is insane. Um it feels like I got a calculator or computer moment again. And so, you know, with the childlike curiosity we all have, where we ask our parents all sorts of random questions, where the simplest questions are stuff that puzzle our parents, like why is the sky blue, or why does this work this way, why does this not fit here? Uh now you can go back to that sort of childlike curiosity mode, and you can ask all these questions to a tool that will give you back answers instantly. You can run your own experiments on the world. You can have your own hypotheses about the world. You can uncover truth without having to pay for access to the world's top experts on a topic. The tool works the same way whether you're a professor from Harvard or Stanford or whether you're in a nation that's still struggling. So that's phenomenal, and I hope that has a very positive impact. Amazing. Um you cited in the past sort of um Elon Musk's grit and a five-word rejection letter from Harvard as formative moments. I want to
talk about your journey, you know, from DeepMind and OpenAI to the launch of Perplexity. What was the most pivotal moment for you in your journey?
I would say that uh formative experiences are when you kind of have to hear some uncomfortable truths. So the one incident I pointed out with OpenAI's co-founder Ilya Sutskever, where I actually thought I was pretty good and I had very good ideas, and he uh came and told me, like, this is not the right thing to work on. Uh you should be very comfortable hearing those kinds of things and introspecting for yourself, and that instilled a uh sense of truth-seeking in me, which I try to encourage in everybody who works at Perplexity: yes, maybe sometimes the truth is uncomfortable, but try to embrace it, and the product should also reflect those values of helping people seek truth. So uh I would say that was a pretty important moment. Amazing. You mentioned the truth being uncomfortable. Of course, the journey of being a founder in such a vicious space is quite a task.
Many people go in and fail. I'm sure you probably had moments where you thought that you couldn't carry on. Could you
tell us whether you ever seriously considered giving up before you hit success and what kept you going? The
thoughts have definitely come, you know, flashed in my mind for sure. Uh several times. Like, raising a round might feel difficult. Our metrics would remain flat for like four or five months, like, you know, more people are not coming onto the product. Uh retention on the product is pretty bad, like the new users are dropping off.
Uh it feels very hard to move things around. People I'm trying to recruit are not joining the company. I've definitely faced all these moments. Uh but even if I've entertained the thought of giving up, I've never wanted to, uh because, like I said in the Harvard interview, it's only over when you think it's over. So some amount of delusion is needed to succeed in the startup world, and I try to always think, uh even if it takes time, I'm going to figure it out. Thank
you. A very simple question about the product you chose. Now, you chose to uh build a search engine instead of perhaps a chatbot. What does that say about your views on the future of information and technology at the intersection with AI? Could you repeat that again? So I wanted to talk about how your decision to choose to build uh a search engine instead of, like, a chatbot, right? Why that specific path, and what does that say about your views on information?
Yeah. So uh there was a moment when we just had a version of Perplexity and were sharing it with friends and colleagues for feedback. Uh a former Googler, who's actually an investor in the company, I'm not going to name who it is, uh said, "Hey, this is actually good. I like using it, but I don't think you should launch this. Um you should actually do something like Character.AI, like where people are talking to chatbots, and where hallucinations, that is, the AI model making up things, become a feature, become entertaining. People want to laugh at mistakes AI makes or stuff AI makes up which is not even true. That's how you make a viral product and that's how you grow. Your product is so boring. It just tells what people are already saying on the internet. It just tells what is already true. AI is not making up stuff, and if AI makes up stuff, your product actually ends up becoming worse, because people want correct answers. So you shouldn't launch this." I'm like, hey, wait, this is too important for the world. Trust me, I'm not trying to build for virality. I'm trying to build for, like, making the world really smarter, and a tool like this makes me smarter. I'm pretty sure there'll be a lot more people like that. So that conviction uh definitely helped us stay true to what we wanted to do, and that's kind of what guided us to, you know, not go too far along the directions of making it very chatty or conversational, or putting in a lot of emojis, or ignoring web sources and saying stuff that the model thinks. We've avoided these things because we have a mission to execute on, uh which is to answer people's questions in the most accurate way possible. That's how we think we can build trust with the users, and that's, you know, served us well. Yeah,
it's really noble to be guided by such personal conviction and idealism, and I think in response to that, some of your critics have sort of, you know, picked on that. They comment on how you emphasize, you know, transparency, citations, no hallucination with Perplexity. So my question here is more about the long term. Do you think this is a strategically sustainable long-term solution, especially as, you know, state actors and other competitors are going to catch up and possibly not have the same moral or ethical standards that you have? Um do you think that this strategy, and the way you idealistically run Perplexity, would be able to sustain itself in the long run? Definitely. Uh
there is a framework that uh Jeff Bezos has which I find pretty valuable. Often people ask, what are the things that'll change, how's the world going to change in the next 5 to 10 years? Uh but nobody usually asks how the world is not going to change, what are the things that remain constant, uh because those are the things you can actually build around. And one thing that um I believe is, like, nobody's going to come and say, hey, I want slower answers, I want answers that are inaccurate, I want poor sources, I want more lies from an AI. Like, nobody wants that, especially in a world where it's going to be pretty easy to create fake synthetic content that appears human-level in terms of language skills. Uh it's going to be even more important to have at least one AI-native product that you can rely on for accurate answers. Just like I said in my talk, uh usually people associate trust with privacy, because that's what social media companies did to us, but in an AI-native world, uh where AIs are really going to plug into your personal context and help you do stuff, accuracy becomes paramount to trust. Well, you
touched on how AI could change things or potentially not change things. What are
your thoughts on AI 2027?
AI 2027? Yeah. So it's a forecast on how AI will develop over the coming years. It's a paper that sort of predicts what they believe the world would look like by 2027. So, I may not have read it. So if you can share some um specific predictions, I can make some comments on that. Okay. Uh we can come back to that more in a second. I think one of the things I want to talk about in relation to that, um more particularly, uh is the impact of um generative AI. So as generative AI sort of turns factual knowledge, you know, the traditional basis of university degrees, into a cheap commodity, what should become the new currency of human employability? And how do you think universities and employers should redesign their credentials and hiring metrics so that people can be matched to roles that feel genuinely earned and socially valuable, rather than, you know, mere make-work? Yeah. So I
think the employees who are curious, who use these tools, and who ask good questions are going to have a glorious future, and I'm already seeing that at our own company. Even though we are an AI-native company, the habit of using AIs every day is not uh universal. Some people are still getting used to it. But those who are using it, those who are asking the right questions, those who are pretty curious learners, are having a fantastic time, because they might have signed up for a job that's very specific, but they start being more multi-dimensional and start doing a lot more things, and things in a very new way. They're a lot more efficient at how they get things done. And so, uh, efficiency is a great way to measure, uh, you know, how people can be more productive, like how people are, you know, doing well in a company. Um and one other thing I would say is multitasking. Usually people are, like, you know, averse to multitasking, but in a world with AI, you can actually get so many different things done, things that are even outside of your comfort zone, because you were just curious and you learned about the topic. Brilliant. Thank you. Um I want to move on to something a bit more um controversial, um if you don't mind, if we talk about the elephant in the room.
Um many publishers are saying that you are profiting from their work without their permission. Do you think they have a point?
So we always attributed the answers to sources, right? In fact, I'm very proud to say we were the first AI product that did that, uh when every other AI product wanted to just say what the model thinks and train the models on all the data of publishers, and, kind of, to the extent that models can memorize and reproduce the text, uh we never did that. We only use models as summarization engines or reasoning engines, and we only use sources as, like, uh citations, the way, you know, you would write an article yourself for an assignment. Uh that's essentially the principle we use for building our product. So we definitely did not steal anyone's data. We always attributed credit. As for the profiting-from-their-work part, we came up with a program where we would share revenues we eventually make on queries where they get cited uh with the publishers, and we very openly acknowledged several different times that we cannot be a successful product if new content doesn't keep getting created, and that requires the journalism ecosystem to keep thriving.
Thank you. Um, with the example about how I as a university student might use um citations of professors and writers in my work, um there's an ecosystem that allows for them to get paid in that. Whether it be my library paying for the books, whether it be my um university system, called SOLO, paying for me to have access to the articles that I use, there are many different paths in which these publishers do get paid. Do you think there could be a future where Perplexity as a platform would perhaps pay publishers and not just cite them? So we
are already paying them through revenue sharing, and we give them access to all our tools for free, and we subsidize our APIs for them to build AI-native uh features in their uh products. So there is a lot of, like, uh payment going on already in many different forms, and we made a $250,000 grant to the school of journalism at Northwestern University. So we definitely are going to keep investing in the journalism ecosystem to keep it thriving. Okay. On
journalism, that's clearly a very important sector right now, especially with current affairs, what's going on in the world. Um what do you believe the future of journalism can be in an AI age, and what role do you believe Perplexity has in shaping it, given you guys are working very closely with journalists? So definitely, like, the world is chaotic, right? There's always going to be crazy things going on, and um someone needs to actually take the responsibility to try to report things in the most unbiased and truthful way possible, and tools like ours look at all sources and synthesize and summarize things in the context of a specific question. But people are still going to come and try to identify and discover new things and, like, uh read it from the reporter's perspective. So having your own perspective about things at the end of an article, but reporting the article in the most unbiased way based on whatever facts are true, is how the world can continue to be more truth-seeking, and so journalism has a very important role to play there.
Thank you. Um, correct me if I'm wrong, but my understanding is that you're currently expanding into, you know, obviously Europe, the Middle East. How do you localize AI across, you know, different cultures, languages, norms, and ways of interaction?
It's a pretty important question. So, one thing we've done is try to use several different models, uh run separate evals for different languages, uh making sure that the voice assistant tries to work well across different languages, uh both in terms of speech recognition and speech synthesis. Um so these are some simple ways in which you can make sure accessibility across languages exists for the product. Uh over time we plan to work with local partners who are building custom models for each of their countries. Maybe there are some aspects of the culture that are better captured by some of these models, and trying to build separate eval sets for these things would be a great way to do
it. Thank you. Um in this section I want
to ask one final um question regarding sort of um uh local norms and
cultures. At the moment, AI search seems to serve power users and professionals who uh exist, you know, in developed countries and so on, um but is not necessarily viewed as something that perhaps right now is helping um everyday users, especially those who are considered to be in the global south.
Um, do you think this is true, or more importantly, how do you think that Perplexity or the AI industry can serve those everyday users, or those who do not use AI for white-collar-related um searches or uses?
So uh that's not what we are seeing in usage, actually. Like, you know, half of our usage comes from high-GDP countries, but the remaining half comes from uh the developing countries too, and uh their usage really surprises us. They use it in so many different ways, for running their businesses, or uploading a lot of files and doing research on that, uh and they do it in their own local language. So I'm actually very encouraged by how they use it, and I would even say I wouldn't be surprised if in the coming years their adoption of AI surpasses the developed countries', and uh they try to rethink their businesses entirely because they're still developing. So there are opportunities to rethink the entire thing rather than trying to pivot uh from something that's already working. That's the advantage the developing countries have. Thank you. Um
pushing further on that. So you think perhaps the future of the AI revolution will be in places in the global south, in Africa, in parts of... Absolutely. Brilliant. Absolutely. Sweet. Uh one final section about just uh leadership, and these are just quickfire questions. What's
the hardest thing that you feel about being a founder right now, especially in the context of how politically charged things are, but also how difficult it is to raise funds when uh the economy seems to be quite unstable with what's going
on in America and so on so forth. I
mean, I guess the hardest thing is honestly the pressure, like dealing with the relentless pressure, because it's a very competitive space in AI. We're the only, I would say, startup still. Everybody else is like a multi-hundred-billion-dollar or trillion-dollar company in the space. So, I think that pressure, uh you've got to learn how to deal with it and still stay calm and focused, because uh if you lose sight of
the long-term road map and start executing things on whims and fancies and like whatever others are doing uh you're going to be in a pretty poor spot
like just 6 months from now. Uh but it takes a lot of like effort and like real conviction to like stick to your road map even when others are doing uh things that may seem like that's what you
should be doing. Brilliant. Um if you could go back and do it all again, what would you do differently? I would do the same things. Brilliant. Well, with that, I would like to open up to the audience for their questions. Uh if you'd like to ask a question, please raise your hand
up very high, and uh the loveliest um member of committee will hand you the microphone. Uh I recognize the gentleman in the blue jacket. Testing, for the recording. Okay, we're good. Aravind, thank you so much for speaking with us. I guess my question is, where can this all go wrong? What's the
biggest risk on your mind?
Well, like I said, the responsibility is there to make sure we earn your trust, right? And we can easily lose that if we lose sight of the big picture, which is if we start focusing on everything other than giving you accurate answers. At least for us, like, I'm talking about things that we can control. Uh there are a lot of other AI companies, and they're all after so many different goals. But at least in the case of Perplexity, the way things can go wrong for us is we lose sight of our core objective, our mission, which is to provide truthful, accurate answers.
Maybe a quick followup. What about for the broader AI space?
I think like the fundamental thing to keep in mind is like try to build things that add value, right? Um, it's
questionable if like all the social apps are like truly valuable to our lives. In
some ways they are, but in a lot of ways they are not. And I hope at least with AI uh it's not something that's so addictive that you have to be on it 24/7.
But uh whenever you are there, you feel smarter, or like you got something done, and you feel like your quality of life improved. If all AI companies build with that sort of a mindset, I think uh we'll be fine. And the other thing to keep in mind is some people are after
like trying to gain power because you know AI is truly valuable. It's going to augment a lot of our knowledge work maybe even automate some aspects of it.
So whoever has uh the most powerful models definitely gathers a lot of power. So I hope they act with
power. So I hope they act with sufficient responsibility so that like like they're not after power but like actually like contributing usefully to the world. Thank you.
the world. Thank you.
Thank you. I recognize uh the lady on the second roll here.
Thanks. Hello, my name is Nett. Very nice to meet you, Aravind. I work with the Gulf States, and you may be aware that they're pouring billions of dollars into the AI race. Recently the crown prince of Saudi Arabia hosted Amazon, Meta, Nvidia, OpenAI, and so on. If you had access to those hundreds of billions of dollars, where would you put those funds?

Wow. Huge question. Well, the mission is the same: make it even faster to achieve what we want, which is to give accurate answers to everybody. As I said, access to all these tools needs to be universalized and democratized, and that requires a lot of spend on inference infrastructure, so we'll take the money and spend it on that, and accelerate the advent of answers. That's basically it. Today we serve around 30 million daily queries. I hope we can do billions of them a day, and that's going to cost us a lot of money. I don't know if it'll cost 100 billion, by the way, but it'll definitely cost a few billion.

They're investing billions and billions, and they've launched Humain, the new Arabic large language model initiative, to compete with Silicon Valley and put the Arab world on the map in the AI race. So it's quite an exciting time for AI in the Middle East at the moment. But thank you very much. Thank you.

Thank you very much. I recognize the member at the far bench there.
Hi, thank you for your talk. You talk about adding value, and I wonder if you think about Perplexity being at maximum value. My question is about the role that human memory plays in the future. If you have a system that's ultimately capable and you can ask it anything, is there any point in knowing anything yourself? How do you see the role of human memory and capability going forward?

So your question is whether our power of memory is still important. Is that the question?

Yeah. I mean, as these systems become more capable, we become less needed in a way. In some sense we can just ask; we don't have to remember, and therefore we don't really have to process anymore, and maybe that reduces our creativity.

I think you don't want to be in a state where you always have to ask for even basic things. So some basic amount of memory, to process what you've already learned and be able to ask the next set of questions, would still be essential. Of course, the memory required for basic cognitive tasks is still needed, and the stuff required for social connections is still needed. None of that is going to go away. Neither is the memory to relive some of your experiences in your head and feel good about them. These are not going to go away.
Thank you very much. I recognize the member with the purple top.

Thank you for your talk. I'm curious: why should I use Perplexity today over ChatGPT with search, Gemini, or Claude with internet access? Thank you.

So I think every AI product building in internet access is a good thing. I feel it's a positive impact we've had on the world to push others to do that. I still think we have the ultimate obsession and focus on accuracy, and we think that will help us be the most trustworthy AI out there as far as fact-checking and research are concerned. The next thing we are pushing as a company is the browser, which goes one abstraction further away from these particular questions: why do you need to worry about which AI to use if the AI is always there with you in the browser, on any web page you're on? And our vision for that is moving AIs from just answering questions to doing things for you. Thank you.
Thank you. The member with the green top.

Thank you so much for the talk. If you hinted at this already, then I apologize, because I might have missed it. But I think one way you separate yourselves is how much you emphasize sources and citations in your answers. My question is whether this is motivated purely by accuracy, or whether there is a second underlying principle behind it. For the sake of argument, let's suppose there's a question, and there's a really good answer to it based on sources, and there's another answer that the AI comes up with that has no source behind it. Would your approach still be to choose the answer that has sources behind it, or just to choose the most accurate one? In other words, is there some value in authority-based answers that you respect and that motivates your philosophy, or is there something else?

So, we definitely do prioritize answering from authoritative sources when we have them available. But when there are no authoritative sources to help with the user's particular question, we communicate to the user that we don't really have any sources from the web to answer it, or that nothing indicates what they're thinking, but that here's what the AI model might think, and it's up to them to actually go and do the fact checks and find out more, because our tools were unable to source anything useful for that query. I think that aspect is essential to gaining users' trust too. When there is no information with which to answer a question, there's no point in throwing a bunch of sources at users.
Yes, the member on the access bench just right behind you.

Thank you for your talk. My question was: in the future, does Perplexity aim to remain a generic AI platform, or does it plan to penetrate a particular industry, and if yes, which industry would it be?

So the answer is yes and no. We plan to be as generic as possible, because we think there's so much value to be created by doing so, and by being more generic you actually try to be a better product on any specific thing too, because the magic is the reasoning and the conversations; that's where the whole magic happens. Specifically, though, there are a few verticals that are resonating a lot with our users. As I mentioned, health is one very important vertical, and finance is a very important vertical. All of these are areas where accuracy is very important, so it's natural that our product is excelling in those categories. And then there are a lot of verticals where research is being done and it's not easy to do with the current state of tools, which is something we're focusing on, like shopping and travel: answering even simple questions like what you should buy, which hotel you should stay at, what you should do in a place, or planning a trip, and giving you answers where it's not just a wall of text but where you can actually execute actions after the answer. These are ways in which we plan to go deeper on certain verticals. Thank you.

Thank you very much. I recognize the member there.
Hi Aravind, thank you for speaking with us today. We talked a lot about accuracy in the training data, but my question is: how do you mitigate bias that is currently present in the data while training your models?

So, our training largely focuses on skills rather than knowledge. We try to train models on summarization. We try to train models on citations, that is, sourcing relevant content, and on reasoning, breaking down math or coding problems into separate steps and then executing them, where we can verify whether the result is correct or not without any bias, because there's one single truthful answer to any puzzle or coding problem. And we try to train models on how to format an answer better. These are the things we tend to do all our post-training on. And the way our product is constructed, knowledge and skills are decoupled, so the model's internal knowledge of the world is not really used to give you answers, because we try to source that from human content on the web. Of course, you still live with human bias, because nothing is bias-free; you can only try to push it to be as unbiased as possible. That's where using authoritative sources that are peer-reviewed helps, because peer review lowers the probability of bias, and then using the models just for the summarization and reasoning layers cuts down as much bias as possible.
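The decoupling described here, where answers are composed only from retrieved sources rather than from the model's internal knowledge, can be sketched in miniature. This is a toy illustration under stated assumptions, not Perplexity's actual pipeline: `retrieve` and `answer` are hypothetical stand-ins, and the ranking is naive term overlap.

```python
# Toy sketch: "knowledge" lives in a document index, while the "skill"
# layer only composes an answer from whatever was retrieved.

def retrieve(query, index):
    """Knowledge layer: rank documents by naive term overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in index]
    scored.sort(reverse=True)
    return [doc for score, doc in scored if score > 0]

def answer(query, index):
    """Skill layer: cite retrieved sources, and admit when nothing was
    found instead of improvising from 'internal knowledge'."""
    sources = retrieve(query, index)
    if not sources:
        return "No sources found; please verify independently."
    cited = "; ".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return f"Based on sources: {cited}"

index = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]
print(answer("capital of France", index))
print(answer("quantum gravity", index))
```

The important branch is the empty-retrieval case: the sketch reports that nothing was found rather than letting the summarization layer invent an answer, which mirrors the trust argument made about citations earlier in the talk.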
Thank you. I recognize the member with the dark top there.

So, thanks for being here today. Seeing as most of the industry looks at AI as using, let's say, a large language model with supervised fine-tuning and reinforcement learning, does Perplexity see a way to disrupt the industry with a different paradigm of model, or do you care more about distribution of the product and the user experience?

I think we care about both. Without distribution you don't actually have enough data to train anything, so distribution is important for training models. In terms of the training paradigm, RL is actually going to be the most important thing, because it lets the models think for themselves and learn skills on their own rather than doing things the way humans hardcode them, and that often ends up being a better solution long term. Thank you.
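The RL point, letting a model learn a skill from a checkable outcome rather than from rules humans hardcode, can be illustrated with a toy bandit-style loop. Nothing here is Perplexity's training code: the verifier, the candidate strategies, and the multiplicative update are all invented for illustration, assuming the "one verifiable answer" setting described earlier for math and coding problems.

```python
import random

# Toy sketch of learning from a verifiable reward: the "policy" is just a
# weight per candidate strategy, reinforced when the verifier approves.
random.seed(0)

def verifier(task, answer):
    """Reward 1 if the answer is exactly correct, else 0. Deterministic,
    so no human labels (or their biases) are involved."""
    return 1.0 if answer == task["target"] else 0.0

strategies = {
    "add": lambda a, b: a + b,            # the correct skill
    "concat": lambda a, b: int(f"{a}{b}"),  # a plausible-looking wrong one
}
weights = {name: 1.0 for name in strategies}

tasks = [{"a": a, "b": b, "target": a + b} for a in range(5) for b in range(5)]

for _ in range(200):
    task = random.choice(tasks)
    # Sample a strategy in proportion to its current weight.
    name = random.choices(list(weights), weights=list(weights.values()))[0]
    result = strategies[name](task["a"], task["b"])
    reward = verifier(task, result)
    # Reinforce strategies the verifier scores well, decay the rest.
    weights[name] *= 1.0 + 0.5 * (reward - 0.5)

print(max(weights, key=weights.get))  # the "add" strategy should dominate
```

Because the reward comes from a deterministic check rather than a human annotation, the loop discovers the correct strategy on its own, which is the sense in which verifiable-reward RL sidesteps labeler bias.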
Thank you. I recognize the member in between the two members there.

Hi, thank you for your talk. Forgive me for being a bit blunt, but as I understand it, when Perplexity first came out, it was a completely revolutionary new kind of search. In the years since, though, a lot of the leading AI models have integrated these kinds of search abilities into their products, and in my personal experience, a lot of the Google and OpenAI tools seem to work at least as well as Perplexity does, and often even better. How do you see Perplexity competing against these more powerful, more established players in the AI landscape, especially ones that are able to pivot easily to respond to new challengers? And do you have any thoughts about what this means for the broader AI ecosystem?
Yeah. So I think the way we plan to address this is to operate at an abstraction even above chatbots, which is the browser. Every AI product is currently stuck at a point where all it does is answer your questions. Maybe things like deep research push the boundary in terms of answering really hard questions that take several minutes' worth of work, but it's still not actually doing anything for you. It's still not completing your work; you're still the one doing everything. The fundamental way to change that is to build agents, make agents part of your browser, and unify browsing, question answering, and agentic workflows in one single interface, and first of all completely remove the need to ask which AI you should use for one particular prompt, because you're just always going to be asking things on your new tab page or in the omnibox, or you have the assistant on the side on whatever task you're doing on any web page. That's our bet: taking all the distribution we've built so far and going further with the browser. That'll be a completely different front end than just another chat interface, and we think that's the right way to actually build agents.
So, if I may ask a quick follow-up question: what you're saying is that the way to compete against the biggest players is just to move faster than them and establish a foothold before they can even get into that field? Obviously you have to believe that it's possible to compete, but from a broader philosophical perspective, do you see this working in the longer term, or do you think it's always just going to be an uphill battle?

I think it'll work. But by the way, all these questions existed even for OpenAI when they were beginning to train the GPT-2 or GPT-3 models, or even GPT-3.5 and GPT-4. Everybody said Google was eventually going to train these models, so why should OpenAI even exist? And I think they've proven people wrong by continually shipping new things and keeping their models and their products updated, right? So every time there's a new company building new things, until it gets that critical velocity, the adoption, the scale of hundreds of millions of users, there's always going to be some uncertainty, and the only way to overcome that is to keep innovating and shipping new things. Thank you.
Thank you. No question for you? All right, cool. The member right here.
Thank you very much. I have an education background: I've worked in Ghana in education policy and regulation, and also as a former teacher, so I know just how important AI is for filling learning gaps in the curriculum for students. But I've also been around a number of older folks who believe in conventional wisdom and are very skeptical of AI. So specifically, I would want to find out what Perplexity is doing about people's skepticism around data mining, digital colonialism, and AI's tendency to extract rather than bridge knowledge gaps. How are you specifically handling that, or making sure there's less of that and more embracing of what AI can potentially do, at least in education? Thank you.

So, the question is: how are we mining data? Is that it?

No, no. How are you making sure you're not mining data and replicating digital colonialism from the global south, specifically? Because indigenous knowledge is so important, but AI tends to extract, and I'm thinking about a system that allows the global south to be part of the producers of the knowledge, not just a question of who in the global north is generating, consuming, accessing, or benefiting from it. If that makes sense.

I'm not fully getting the question. Sorry.

Okay. What I'm trying to find out is how you are making sure Perplexity doesn't fall into the trap of digital colonialism or data mining, so that people from the global south can contribute to generating the data and using it, instead of being another mine for extracting data.

I see. So the question is how we can ensure individual contributors can still contribute data to a product like ours. Got it. I think it comes through building a good reputation and credibility for yourself, which will build a trust score for whatever server you host your data on, and that'll be used to rank it higher in our search indexing, and that'll help us use it as part of the answers.
So, we have time for just one more question. I'm going to recognize the member with the blue denim jacket there.
Yeah, thank you for taking my question. My question was related to advertising. Do you think the model of companies providing services through advertising, and using that as a means of revenue, is now changing, where we're going back to charging for a premium service with minimal free services?

Yeah, I think I've been pleasantly surprised by the willingness of people to pay for these tools. I think all of us running AI companies severely underestimated people's willingness to pay a subscription fee for all these tools, and I think it's only going to increase when agents actually begin to work, because it's much easier for people to internalize the feeling of paying for another person they've hired when that person is actually doing work and tasks for them on a daily basis. So we plan to keep our product as advertisement-free as possible and just rely on subscriptions to build a business.

Thank you.
Thank you. So, we've had a lot of questions specifically about AI and Perplexity, very technical and knowledgeable questions. I just thought we should take a breath for a second and get to know a bit more about you. You're clearly quite engaged and do a lot. What do you do to relax? What unwinds you?

Actually, sleep. No, I'm not just saying that; I think it really helps. A lot of people try to sleep less because they think they have a lot of work to do, and that only makes you less efficient and more stressed, and it becomes a pretty bad spiral. So I sleep well, and I work out, and, to be very honest, there's not much time to do other stuff. But I do try to read books, physical books more than Kindle; I think it's good for the eyes. And sometimes I play a sport. Those are the main things.

That's really good.
A lot of people look at online influencers; there's a stereotype of tech bros and of what you need to do to be successful: wake up at 4:00 a.m., stick your head into an ice bath, or...

No, I don't do cold showers.

So, my question is this: a lot of us here are university members, and a lot of us have aspirations to hopefully be as successful as you have been. Maybe a quarter of you, or maybe one-tenth, will be. What would your advice be to our members? What do they need to do in their personal lives to achieve the success that you achieved?

So, I'll give the boring answer, but I think it's very important, because I used to have these sorts of questions, and I used to want to imitate others I thought were successful, because I thought that was the best way for me to be successful too. The real answer is just to be yourself, because authenticity is very, very important, and people value that, and I think that's what makes you special.
Thank you. My final roundup question is back to AI again. Sorry, guys. But it's more about you and what you want to be viewed as when everything's all said and done. So, what do you want your legacy to be, not just for Perplexity, but for this very interesting moment in AI history?

Definitely. I would love for our product to be remembered in a very good way, as having increased the amount of truth and understanding in the world globally, and as having democratized knowledge, because access to knowledge and expertise is still hard, and a tool like ours can give someone the power of deep research without having to do much work. I hope it makes everyone curious and truth-seeking. That's the impact I would like to see, and I'm happy if other products do the same. It doesn't have to be just us. I don't want to be the only one operating in this space, but I would love for us to be the best product.

Thank you very much, Aravind. Ladies and gentlemen, please join me in welcoming and thanking Aravind for tonight.

Thank you.