The Godmother of AI on jobs, robots & why world models are next | Dr. Fei-Fei Li
By Lenny's Podcast
Summary
Key takeaways
- **ImageNet's Data Spark**: The ImageNet project curated 15 million images across 22,000 object concepts to provide the big data needed for training machines on visual recognition, leading to the 2012 AlexNet breakthrough that combined neural networks and GPUs to ignite modern deep learning. [17:24], [18:33]
- **AI Winter to Boom**: Just nine years ago, calling a company an AI company was a 'death sentence' because no one believed it would work, but by 2017, every tech company embraced the AI label as the field exploded from obscurity to ubiquity. [02:29], [22:46]
- **World Models Frontier**: World models enable AI to create, interact with, and reason in 3D spatial environments beyond language, such as generating navigable worlds from prompts for robotics path planning or human design augmentation. [30:28], [35:47]
- **Bitter Lesson Fails Robots**: Unlike language models with aligned text data, robotics lacks action data in 3D worlds, making simple scaling with big data insufficient; robots require physical bodies, scenarios, and supplemented data like teleoperation to succeed. [41:06], [44:42]
- **Marble's Diverse Applications**: Marble generates prompt-to-3D worlds that cut movie virtual production time by 40x, enable robotic simulations with diverse environments, and support psychological research by creating immersive scenes for patient studies. [53:07], [56:25]
- **Everyone's AI Role**: From musicians using AI for unique storytelling to nurses augmented by robotic assistance, every profession has a role in AI; it should enhance human dignity and agency rather than replace it, with citizens shaping its societal impact. [01:15:20], [01:18:03]
Topics Covered
- AI's impact hinges on human responsibility.
- ImageNet's big data sparked modern AI revolution.
- Current AI falls short of true intelligence.
- World models enable spatial intelligence.
- Everyone has a vital role in AI.
Full Transcript
A lot of people call you the godmother of AI.
The work you did actually was the spark that brought us out of AI winter.
>> In the middle of 2015, middle of 2016, some tech companies avoided using the word AI because they were not sure if AI was a dirty word.
2017ish was the beginning of companies calling themselves AI companies.
>> There's this line, I think this was when you were presenting to Congress, there's nothing artificial about AI.
It's inspired by people. It's created by people.
And most importantly, it impacts people.
>> It's not like I think AI will have no impact on jobs or people. In fact, I believe that whatever AI does currently or in the future is up to us. It's up to the people.
I do believe technology is a net positive for humanity. But I think every technology is a double-edged sword.
If we're not doing the right thing as a society, as individuals, we can screw this up as well.
>> You had this breakthrough insight of, okay, we can train machines to think like humans, but what's missing is the data that humans have to learn as a child.
>> I chose to look at artificial intelligence through the lens of visual intelligence because humans are deeply visual animals. We need to train machines with as much information as possible on images of objects. But objects are very, very difficult to learn; a single object has infinite possibilities in how it can be shown in an image. In order to train computers on tens of thousands of object concepts, you really need to show them millions of examples.
Today, my guest is Dr. Fei-Fei Li, who's known as the godmother of AI.
Fei-Fei has been responsible for, and at the center of, many of the biggest breakthroughs that sparked the AI revolution that we are currently living through.
She spearheaded the creation of ImageNet, which was basically her realizing that AI needed a ton of clean labelled data to get smarter. And that data set became the breakthrough that led to the current approach to building and scaling AI models.
She was chief AI scientist at Google Cloud, which is where some of the biggest early technology breakthroughs emerged from.
She was director of SAIL, Stanford's Artificial Intelligence Laboratory, which many of the biggest AI minds came out of.
She's also co-creator of Stanford's Human-Centered AI Institute, which is playing a vital role in the direction that AI is taking.
She's also been on the board of Twitter.
She was named one of Time's 100 most influential people in AI. She's also on the United Nations Advisory Board. I could go on.
In our conversation, Fei-Fei shares a brief history of how we got to today in the world of AI, including this mind-blowing reminder that 9 to 10 years ago, calling yourself an AI company was basically a death knell for your brand,
because no one believed that AI was actually going to work.
Today, it's completely different.
Every company is an AI company.
We also chat about her take on how she sees AI impacting humanity in the future, how far current technologies will take us, why she's so passionate about building a world model, and what exactly world models are.
And most exciting of all, the launch of the world's first large world model, Marble, which just came out as this podcast comes out.
Anyone can go play with this at marble.worldlabs.ai.
It's insane. Definitely check it out.
Fei is incredible and way too under the radar for the impact that she's had on the world.
So, I am really excited to have her on and to spread her wisdom with more people. A huge thank you to Ben Horowitz and Condoleezza Rice for suggesting topics for this conversation.
If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube.
With that, I bring you Dr. Fei-Fei Li after a short word from our sponsors.
This episode is brought to you by Figma, makers of Figma Make. When I was a PM at Airbnb, I still remember when Figma came out and how much it improved how we operated as a team. Suddenly, I could involve my whole team in the design process, give feedback on design concepts really quickly, and it just made the whole product development process so much more fun.
But Figma never felt like it was for me.
It was great for giving feedback on designs, but as a builder, I wanted to make stuff.
That's why Figma built Figma Make.
With just a few prompts, you can make any idea or design into a fully functional prototype or app that anyone can iterate on and validate with customers.
Figma Make is a different kind of vibe coding tool.
Because it's all in Figma, you can use your team's existing design building blocks, making it easy to create outputs that look good and feel real and are connected to how your team builds. Stop spending so much time telling people about your product vision and instead show it to them.
Make code-backed prototypes and apps fast with Figma Make.
Check it out at figma.com/lenny.
Did you know that I have a whole team that helps me with my podcast and with my newsletter?
I want everyone on that team to be super happy and thrive in their roles.
Justworks knows that your employees are more than just your employees.
They're your people.
My team is spread out across Colorado, Australia, Nepal, West Africa, and San Francisco.
My life would be so incredibly complicated if I had to hire people internationally, pay people on time and in their local currencies, and answer their HR questions 24/7 on my own.
But with Justworks, it's super easy.
Whether you're setting up your own automated payroll, offering premium benefits, or hiring internationally, Justworks offers simple software and 24/7 human support from small business experts for you and your people.
They do your human resources right so that you can do right by your people. Justworks: for your people.
[Music] Fei-Fei, thank you so much for being here and welcome to the podcast.
>> I'm excited to be here, Lenny.
>> I'm even more excited to have you here.
It is such a treat to get to chat with you.
There's so much that I want to talk about.
You've been at the center of this AI explosion that we're seeing right now for so long. We're going to talk about a bunch of the history that I think a lot of people don't even know about how this whole thing started. But let me first read a quote from Wired about you, just so people get a sense; in the intro I'll share all of the other epic things you've done, but I think this is a good way to set context. "Fei-Fei is one of a tiny group of scientists, a group perhaps small enough to fit around a kitchen table, who are responsible for AI's recent remarkable advances." A lot of people call you the godmother of AI.
And unlike a lot of AI leaders, you're an AI optimist.
You don't think AI is going to replace us.
You don't think it's going to take all our jobs. You don't think it's going to kill us. So, I thought it'd be fun to start there.
Just what's your perspective on how AI is going to impact humanity over time?
>> Yeah. Okay. So, Lenny, let me be very clear.
I'm not a utopian. So, it's not like I think AI will have no impact on jobs or people. In fact, I'm a humanist.
I believe that whatever AI does currently or in the future is up to us.
It's up to the people. So I do believe technology is a net positive for humanity.
If you look at the long course of civilization, I think we are fundamentally an innovative species. If you look at written records from thousands of years ago to now, humans just kept innovating ourselves and innovating our tools, and with that we make lives better.
We make work better, we build civilization, and I do believe AI is part of that. So, that's where the optimism comes from. But I think every technology is a double-edged sword.
And if we're not doing the right thing as a species, as a society, as communities, as individuals, we can screw this up as well.
>> There's this line, I think this was when you were presenting to Congress.
There's nothing artificial about AI. It's inspired by people.
It's created by people and most importantly it impacts people.
I don't have a question there, but what a great line.
>> Yeah, I feel this pretty deeply.
You know, I started working in AI two and a half decades ago, and I've been having students for the past two decades. Almost every student who graduates from my lab, I remind them: your field is called artificial intelligence, but there's nothing artificial about it.
>> Coming back to the point you just made about how it's kind of up to us where this all goes: what is it you think we need to get right? How do we set things on the right path? I know this is a very difficult question to answer, but what's your advice? What do you think we should do?
>> Yeah.
>> How many hours do we have?
>> How do we align AI? There we go.
Let's solve it.
>> Also, I think people should be responsible individuals no matter what we do.
This is what we teach our children, and this is what we need to do as grown-ups as well. No matter which part of AI development, AI deployment, or AI application you are participating in, and most likely many of us, especially as technologists, are at multiple points, we should act like responsible individuals and care about this. Actually, care a lot about this.
I think everybody today should care about AI, because it is going to impact your individual life. It is going to impact your community. It's going to impact the society and future generations.
And caring about it as a responsible person is the first but also the most important step.
>> Okay. So, let me actually take a step back and go to the beginning of AI. Most people started hearing about and caring about AI, as it's called today,
just, I don't know, a few years ago, when ChatGPT came out.
Maybe it was like three years ago.
>> Three years ago. Almost one more month.
Three years ago.
>> Wow. Okay. That was ChatGPT coming out.
Is that the milestone that you have in mind?
Okay. Cool. That's exactly how I saw it.
>> But very few people know there was a long, long history of people working on what was called machine learning back then, and there were other terms, and now everything is just AI. There was a long period of a lot of people working on it, and then there's what people refer to as the AI winter, where people almost gave up: okay, this idea isn't going anywhere. And then the work you did was essentially the spark that brought us out of AI winter and is directly responsible for the world we're in now, where AI is all we talk about; as you just said, it's going to impact everything we do. So, I thought it'd be really interesting to hear from you a brief history of what the world was like before ImageNet, then the work you did to create ImageNet, why that was so important, and then what happened after.
>> It is hard for me to keep in mind that AI is so new for everybody, when I've lived my entire professional life in AI. There's a part of me that finds it so satisfying to see a personal curiosity that I started barely out of teenagehood now become a transformative force of our civilization.
It genuinely is a civilization-level technology.
So that journey is about 20-plus, almost 30, years, and it's just very satisfying.
So, where did it all start? Well, I'm not even a first-generation AI researcher.
The first generation really dates back to the '50s and '60s. And Alan Turing was ahead of his time in the '40s, daring humanity with the question: can there be thinking machines? Of course, he had a specific way of testing this concept of a thinking machine, which is a conversational chatbot; by his standard, we now have a thinking machine. But that was more of an anecdotal inspiration.
The field really began in the '50s, when computer scientists came together and looked at how we can use computer programs and algorithms to build programs that can do things that had only been possible through human cognition. That was the beginning, with the founding fathers at the Dartmouth workshop in 1956, including Professor John McCarthy, who later came to Stanford and who coined the term artificial intelligence. Between the '50s, '60s, '70s, and '80s, it was the early days of AI exploration: we had logic systems, we had expert systems, and we also had early exploration of neural networks. And then it came to around the late '80s, the '90s, and the very beginning of the 21st century.
That stretch of about 20 years is actually the beginning of machine learning.
It's the marriage between computer programming and statistical learning.
And that marriage brought a very critical concept into AI: a purely rule-based program is not going to account for the vast range of cognitive capabilities that we imagine computers can have.
So we have to use machines to learn the patterns. Once a machine can learn the patterns, it has a hope of doing more things. For example, if you give it three cats, the hope is not just for the machine to recognize those three cats.
The hope is the machines can recognize the fourth cat, the fifth cat, the sixth cat, and all the other cats.
And that's a learning ability that is fundamental to humans and many animals.
We as a field realized we need machine learning.
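The generalization idea described above (learn from three cats, recognize the fourth) can be illustrated with a toy sketch. This is purely an invented example, not anything ImageNet-era researchers actually used: a nearest-centroid classifier over made-up two-dimensional feature vectors.

```python
# Toy illustration (an invented sketch, not a real research method):
# "learn a pattern" from three cat and three dog examples, then
# generalize to a fourth, unseen cat.
import numpy as np

# Hypothetical 2-D feature vectors (say, ear pointiness and snout length).
cats = np.array([[0.90, 0.20], [0.80, 0.30], [0.95, 0.25]])
dogs = np.array([[0.20, 0.90], [0.30, 0.80], [0.25, 0.85]])

def train(examples_by_class):
    # "Learning" here is just averaging each class's examples into a centroid.
    return {label: ex.mean(axis=0) for label, ex in examples_by_class.items()}

def predict(centroids, x):
    # Classify a new point by its nearest class centroid.
    return min(centroids, key=lambda label: np.linalg.norm(centroids[label] - x))

model = train({"cat": cats, "dog": dogs})
fourth_cat = np.array([0.85, 0.28])   # never seen during training
print(predict(model, fourth_cat))     # prints: cat
```

The point is only that a model trained on examples can label inputs it has never seen; modern systems replace the centroids with deep neural networks and the six examples with millions of images.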
So that was up till the beginning of the 21st century.
I entered the field of AI literally in the year 2000. That's when my PhD began at Caltech. So I was one of the first-generation machine learning researchers, and we were already studying this concept of machine learning, especially neural networks.
I remember one of my first courses at Caltech was called Neural Networks, but it was very painful. It was still smack in the middle of the so-called AI winter, meaning the public didn't pay much attention to this.
There wasn't that much funding, but there were also a lot of ideas flowing around. I think two things happened that brought my own career so close to the birth of modern AI. One is that I chose to look at artificial intelligence through the lens of visual intelligence, because humans are deeply visual animals. We can talk a little more about it later, but so much of our intelligence is built upon visual, perceptual, spatial understanding, not just language per se. I think they're complementary.
So I chose to look at visual intelligence, and in my PhD and my early professor years, my students and I were very committed to a north-star problem: solving the problem of object recognition, because it's a building block for the perceptual world.
Right? We go around the world interpreting, reasoning and interacting with it more or less at the object level.
We don't interact with the world at the molecular level.
We don't interact with the world at the level of parts; we sometimes do, but rarely. For example, if you want to lift a teapot, you don't say, okay, the teapot is made of a hundred pieces of porcelain, let me work on these hundred pieces. You look at it as one object and interact with it.
So objects are really important. I was among the first researchers to identify this as a north-star problem.
But I think what happened is that, as a student of AI and then a researcher of AI, I was working on all kinds of mathematical models, including neural networks, including Bayesian networks, many, many models. And there was one singular pain point: these models didn't have data to be trained on. As a field, we were so focused on these models, but it dawned on me that human learning, as well as evolution, is actually a big-data learning process. Humans learn with so much experience, constantly; and in evolution, if you look over time, animals evolve by just experiencing the world.
So my students and I conjectured that a critically overlooked ingredient of bringing AI to life is big data, and we began this ImageNet project in 2006, 2007. We were very ambitious: we wanted to get the entire internet's image data on objects. Now, granted, the internet was a lot smaller than today, so I felt that ambition was at least not too crazy.
Now it would be totally delusional to think a couple of graduate students and a professor could do this. But that's what we did. We curated, very carefully, 15 million images from the internet.
We created a taxonomy of 22,000 concepts, borrowing from other researchers' work, like linguists' work on WordNet, a particular way of organizing words into a dictionary. We combined that into ImageNet, and we open-sourced it to the research community.
We held an annual ImageNet Challenge to encourage everybody to participate.
We continue to do our own research.
But 2012 was the moment that many people think was the beginning of deep learning, or the birth of modern AI, because a group of Toronto researchers led by Professor Geoff Hinton participated in the ImageNet Challenge, used the ImageNet big data and two GPUs from Nvidia, and successfully created the first neural network algorithm that, while it didn't totally solve the problem of object recognition, made huge progress toward solving it. That combination, the trio of big data, neural networks, and GPUs, was kind of the golden recipe for modern AI. And then fast-forward to the public moment of AI, the ChatGPT moment: if you look at the ingredients of what brought ChatGPT to the world, technically it still uses these three ingredients.
Now it's internet-scale data, mostly text; it's a much more complex neural network architecture than in 2012, but it's still a neural network; and it's a lot more GPUs, but they're still GPUs. So these three ingredients are still at the core of modern AI.
>> Incredible. I have never heard that full story before.
I love that it was two GPUs, and now it's, I don't know, hundreds of thousands, right, that are orders of magnitude more powerful. And those two GPUs, were they just bought, like gaming GPUs? They just went to the game store, right, for the GPUs people use for playing games.
>> As you said, this continues to be, in a large way, the way models get smarter. Some of the fastest-growing companies in the world right now, I've had them all mostly on the podcast: Mercor and Surge and Scale. They continue to do this for labs: just give them more and more labeled data of the things they're most excited about.
>> Yeah, I remember Alexandr Wang from Scale in the very early days. I probably still have his emails from when he was starting Scale.
He was very kind. He kept sending me emails about how ImageNet inspired Scale.
I was very pleased to see that.
>> One of my other favorite takeaways from what you just shared is just such an example of high agency and just doing things.
That's kind of a meme on Twitter.
Just you can just do things.
You're just like, okay, this is probably necessary to move AI forward. And it was called machine learning back then, right? Was that the term most people used?
>> I think it was used interchangeably. It's true, I do remember the tech companies; I'm not going to name names, but I was in a conversation in the early days, I think in the middle of 2015, middle of 2016, where some tech companies avoided using the word AI because they were not sure if AI was a dirty word. And I remember I was actually encouraging everybody to use the word AI, because to me it is one of the most audacious questions humanity has ever asked in our quest for science and technology, and I feel very proud of this term.
But yes, at the beginning some people were not sure.
>> What year was that, roughly, when AI was a dirty word?
>> 2016, I think.
>> Less than 10 years ago.
>> That was when it was changing; some people started calling it AI. But I think if you look at the Silicon Valley tech companies and trace their marketing terms, 2017-ish was the beginning of companies calling themselves AI companies.
>> That's incredible, just how the world has changed. Now you can't not call yourself an AI company.
>> I know.
>> Just nineish years later.
>> Yeah.
>> Oh man. Okay. Is there anything else around that early history that you think people don't know, that you think is important, before we chat about where things are going and the work that you're doing?
>> I think, as with all histories, I'm keenly aware that I am recognized for being part of the history, but there are so many heroes and so many researchers.
We're talking about generations of researchers there.
You know, in my own world, there are so many people who have inspired me, which I talked about in my book. But I do feel our culture, especially Silicon Valley, tends to assign achievements to a single person.
While I think that has value, it's just to be remembered:
AI is a field that at this point is 70 years old, and we have gone through many generations. No one could have gotten here by themselves.
>> Okay. So let me ask you this question.
It feels like we're always on this precipice of AGI, this vague term people throw around: AGI is coming.
Is it going to take over everything?
What's your take on how far we might be from AGI? Do you think we're going to get there on the current trajectory we're on? Do you think we need more breakthroughs? Do you think the current approach will get us there?
>> Yeah, this is a very interesting term, Lenny.
Um, I don't know if anyone has ever defined AGI.
You know, there are many different definitions, from some kind of superpower for machines all the way to: a machine can become an economically viable agent in society.
In other words, making salaries to live.
Is that the definition of AGI?
As a scientist, I take science very seriously, and I entered the field because I was inspired by this audacious question of whether machines can think and do things the way humans can.
For me, that's always the northstar of AI.
And from that point of view, I don't know what the difference is between AI and AGI. I think we've done very well in achieving parts of the goal, including conversational AI, but I don't think we have completely conquered all the goals of AI. And as for our founding father Alan Turing: I wonder, if Alan Turing were around today and you asked him to contrast AI versus AGI, he might just shrug and say, well, I asked the same question back in the 1940s.
So I don't want to go down a rabbit hole of defining AI versus AGI.
I feel AGI is more a marketing term than a scientific term.
As a scientist and technologist, AI is my north star; it's my field's north star, and I'm happy for people to call it whatever name they want.
>> So let me ask you maybe this way: you described these components that, from ImageNet and AlexNet, took us to where we are today.
GPUs, data, labeled data, and the algorithm of the model.
There's also the transformer, which feels like an important step in that trajectory.
Do you feel like those are the same components that'll get us to, I don't know, a 10-times-smarter model, something that's life-changing for the entire world, or do you think we need more breakthroughs?
I know we're going to talk about world models, which I think is a component of this, but is there anything else where you think, oh, this will plateau, or, okay, this will take us there, we just need more data, more compute, more GPUs?
>> Oh no, I definitely think we need more innovation.
With scaling laws, more data, more GPUs, and bigger versions of current model architectures, there's still a lot to be done. But I absolutely think we need to innovate more.
There is not a single deep scientific discipline in human history that has arrived at a place that says: we're done, we're done innovating.
And AI is one of the youngest disciplines, if not the youngest, in human civilization in terms of science and technology.
We're still scratching the surface.
For example, like I said, we're going to segue into world models today.
You take a model and run it through a video of a couple of office rooms, and ask the model to count the number of chairs. This is something a toddler could do, or maybe an elementary school kid, and AI cannot do that, right?
So there's just so much AI today cannot do, let alone thinking about how someone like Isaac Newton looked at the movements of the celestial bodies and derived an equation, or a set of equations, that governs the movement of all bodies. That level of creativity, extrapolation, abstraction: we have no way of enabling AI to do that today. And then let's look at emotional intelligence. Look at a student coming into a teacher's office to have a conversation about motivation, passion, what to learn, what's the problem that's really bothering you.
That conversation: as powerful as today's conversational bots are, you don't get that level of emotional, cognitive intelligence from today's AI.
So there's a lot we can do better, and I do not believe we're done innovating.
>> Demis Hassabis had this really interesting interview recently, from Google DeepMind, where someone asked him: how far are we from AGI?
What does it look like when we're there?
He had a really interesting way of approaching it: give the most cutting-edge model all the information up to the end of the 19th century and see if it could come up with all the breakthroughs Einstein had. And so far, we're nowhere near that.
>> No, we're not. In fact, it's even worse. Let's give AI all the data, including modern instruments' data on celestial bodies, which Newton did not have, and just ask AI to create the 17th-century set of equations on the laws of the movement of bodies.
Today's AI cannot do that.
>> All right, we're a ways away is what I'm hearing.
>> Yeah.
>> Okay, so let's talk about world models.
To me, this is just another really amazing example of you being ahead of where people end up.
So you were way ahead on: okay, we just need a lot of clean data for AI and neural networks to learn. And you've been talking about this idea of world models for a long time; you started a company to build one. Essentially, there are language models;
this is a different thing.
This is a world model. We'll talk about what that is.
And now, as I was preparing for this, Elon is talking about world models.
Jensen's talking about world models.
I know Google's working on this stuff.
You've been at this for a long time.
And you actually just launched something that we're going to talk about, right before this podcast airs.
Um talk about what is a world model?
Why is it so important?
>> I'm very excited to see that more and more people are talking about world models, like Elon, like Jensen.
I have been thinking about how to push AI forward all my life, right?
And the large language models that came out of the research world, and then OpenAI and all this, over the past few years, were extremely inspiring, even for a researcher like me. I remember when GPT-3 came out; that was, I think, 2020.
I was co-director, I still am, but at that time full-time co-director, of Stanford's Human-Centered AI Institute. I remember the public was not aware of the power of large language models yet, but as researchers we were seeing it; we were seeing the future. I had pretty long conversations with my natural language processing colleagues, like Percy Liang and Chris Manning, and we were talking about how critical this technology was going to be. And Stanford's Human-Centered AI Institute, HAI, was the first to establish a full research center on foundation models.
Percy Liang and many researchers led the first academic paper on foundation models.
So so it was just very inspiring for me.
So, of course, I come from the world of visual intelligence, and I was thinking there's so much we can push forward beyond language, because humans have used our sense of spatial intelligence and world understanding to do so many things, and they are beyond language. Think about a very chaotic first-responder scene, whether it's a fire, a traffic accident, or a natural disaster.
If you immerse yourself in those scenes and think about how people organize themselves to rescue people, to stop further disaster, to put out fires: a lot of that is movement, is spontaneous understanding of objects and worlds, is situational awareness.
Language is part of that.
But in a lot of those situations, language cannot get you to put out the fire.
So what is that? I was thinking about it a lot, and in the meantime I was doing a lot of robotics research, and it dawned on me that the linchpin connecting the additional intelligence beyond language, connecting embodied AI, which is robotics, and connecting visual intelligence, is this sense of spatial intelligence: understanding the world. I think it was 2024 when I gave a TED talk about spatial intelligence and world models. I had started formulating this idea back in 2022, based on my robotics and computer vision research, and one thing that became really clear to me is that I really wanted to work with the brightest technologists and move as fast as possible to bring this technology to life.
And that's when we founded this company called World Labs. You can see the word "world" is in the title of our company, because we believe so much in world modeling and spatial intelligence.
>> People are so used to chatbots, and that's a large language model. So a simple way to understand a world model is you basically describe a scene and it generates an infinitely explorable world. We'll link to the thing you launched, which we'll talk about, but is that a simple way to understand it?
>> That's part of it, Lenny. I think a simple way to understand a world model is that this model can allow anyone to create any world in their mind's eye by prompting, whether it's with an image or a sentence, and also be able to interact in this world, whether you're browsing and walking, or picking objects up, or changing things, as well as to reason within this world.
For example, if the agent consuming the output of the world model is a robot, it should be able to plan its path and help to, you know, tidy the kitchen.
So a world model is a foundation that you can use to reason, to interact, and to create worlds.
>> Great. So robots, it feels like that's potentially the next big focus for AI researchers in terms of impact on the world. And what you're saying here is that this is a key missing piece of making robots actually work in the real world: understanding how the world works.
>> Yeah. Well, first of all, I do think there's more than robots that's exciting.
But I agree with everything you just said. I think world modeling and spatial intelligence is a key missing piece of embodied AI.
I also think, let's not underestimate that humans are embodied agents, and humans can be augmented by AI's intelligence, just like today humans are language animals but we're very much augmented by AI helping us do language tasks, including software engineering.
I think we tend not to talk about how humans, as embodied agents, can actually benefit so much from world models and spatially intelligent models, just as robots can.
>> So the big unlocks here: robots, which is a huge deal.
If this works out, imagine each of us has robots doing a bunch of stuff for us.
They go into, you know, helping us with disasters, things like that.
Uh games obviously is a really cool example.
Just like infinitely playable games that you just invent out of your head.
And then creativity, it feels like just having fun, being creative, thinking of wild new worlds and environments.
>> And also design. Humans design everything from machines to buildings to homes. And also scientific discovery. There is so much there. I like to use the example of the discovery of the structure of DNA. One of the most important pieces in the history of DNA's discovery is the X-ray diffraction photo captured by Rosalind Franklin. It was a flat 2D photo of a structure that looks like a cross with diffractions.
You can you can uh Google those photos.
But with that 2D flat photo, humans, especially two important humans, James Watson and Francis Crick, in addition to their other information, were able to reason in 3D space and deduce the highly three-dimensional double-helix structure of DNA.
And that structure cannot possibly be 2D.
You cannot think in 2D and deduce that structure.
You have to think in 3D, to use human spatial intelligence.
So I think even in scientific discovery, spatial intelligence, or AI-assisted spatial intelligence, is critical.
>> This is such an example of, I think it was Chris Dixon who had this line that the next big thing is going to start off feeling like a toy. When ChatGPT first came out, I remember Sam Altman just tweeted, like, here's a cool thing we're playing with, check it out. Now it's the fastest-growing product in all of history and it changed the world.
>> Yeah.
>> And it's oftentimes the things that just look like, okay, this is cool, this is fun to play with, that end up changing the world most.
>> Yeah.
>> This episode is brought to you by Sinch, the customer communications cloud.
Here's the thing about digital customer communications.
Whether you're sending marketing campaigns, verification codes, or account alerts, you need them to reach users reliably. That's where Sinch comes in.
Over 150,000 businesses, including eight of the top 10 largest tech companies globally, use Sinch's API to build messaging, email, and calling into their products.
And there's something big happening in messaging that product teams need to know about.
Rich Communication Services, or RCS.
Think of RCS as SMS 2.0.
Instead of getting text from a random number, your users will see your verified company name and logo without needing to download anything new. It's a more secure and branded experience.
Plus, you get features like interactive carousels and suggested replies. And here's why this matters.
US carriers are starting to adopt RCS. Sinch is already helping major brands send RCS messages around the world, and they're helping Lenny's Podcast listeners get registered before the rush hits the US market.
Get started at sinch.com/lenny. That's s-i-n-c-h.com/lenny.
>> I reached out to Ben Horowitz who loves what you're doing. A big fan of yours.
Uh they're investors I believe.
>> Yeah, we've known each other for many years, but yes, right now they are investors in World Labs.
>> Amazing. Okay. So I asked him what I should ask you about, and he suggested asking you why the bitter lesson alone is not likely to work for robots.
So first of all just explain what the bitter lesson was in the history of AI and then just why that won't get us to where we want to be with robots.
>> Well, first of all, there are many bitter lessons, but the bitter lesson everybody refers to is a piece written by Richard Sutton, who won the Turing Award recently and does a lot of reinforcement learning. Richard said, if you look at the history, especially the algorithmic development, of AI, it turns out simpler models with a ton of data always win at the end of the day, instead of more complex models with less data.
Actually, this piece came years after ImageNet. To me it was not a bitter lesson, it was a sweet lesson; that's why I built ImageNet, because I believed that big data plays that role. So why can't the bitter lesson alone work in robotics? Well, first of all, I think we need to give credit to where we are today: robotics is very much in the early days of experimentation.
The research is not nearly as mature as, say, language models.
Many people are still experimenting with different algorithms, and some of those algorithms are driven by big data. So I do think big data will continue to play a role in robotics. But here is what is hard for robotics. There are a couple of things. One is that it's a lot harder to get data. You can say, well, there is web data, and this is where the latest robotics research is using web videos, and I think web videos do play a role. But think about what made language models work. As someone who does computer vision and spatial intelligence and robotics, I'm very jealous of my colleagues in language, because they had this perfect setup where their training data are words, eventually tokens, and they produce a model that outputs words.
So you have this perfect alignment between what you hope to get which we call objective function and what your training data looks like.
But robotics is different. Even spatial intelligence is different.
You hope to get actions out of robots.
But your training data lacks actions in 3D worlds. And that's what robots have to do, right? Actions in 3D worlds.
So you have to find different ways to fit, what do they call it, a square peg in a round hole, because what we have is tons of web videos.
So then we have to start talking about supplementing data, such as teleoperation data or synthetic data, so that the robots are trained with this hypothesis of the bitter lesson, which is large amounts of data.
I think there's still hope, because even what we are doing in world modeling will really unlock a lot of this information for robots. But I think we have to be careful, because we're at the early days of this, and the bitter lesson is still to be tested, because we haven't fully figured out the data. Another part of the bitter lesson of robotics, which I think we should be realistic about, is that compared to language models, or even spatial models, robots are physical systems. Robots are closer to self-driving cars than to a large language model. And that's very important to recognize. It means that in order for robots to work, we not only need brains, we also need the physical body, and we also need application scenarios.
And if you look at the history of the self-driving car: my colleague Sebastian Thrun took Stanford's car to win the first DARPA challenge in 2005 or 2006. It's been 20 years from that prototype of a self-driving car being able to drive 130 miles in the Nevada desert to today's Waymo on the streets of San Francisco, and we're not even done yet. There's still a lot.
So that's a 20 year journey.
And self-driving cars are much simpler robots.
They're just metal boxes running on 2D surfaces, and the goal is not to touch anything.
Robots are 3D things running in a 3D world, and the goal is to touch things.
So the journey is going to have, you know, many aspects and elements. And of course, one could say, well, the early self-driving car algorithms were from the pre-deep-learning era.
So deep learning is accelerating the brains, and I think that's true.
That's why I'm in robotics. That's why I'm in spatial intelligence and I'm excited by it.
But in the meantime, the car industry is very mature, and productizing also involves mature use cases, supply chains, the hardware.
So I think it's a very interesting time to work on these problems. But it's true, Ben is right: we might still be subject to a number of bitter lessons.
>> Doing this work, do you ever just feel awe at the way the brain works and is able to do all of this for us?
Just the complexity just to get a a machine to just walk around and not hit things and fall.
Does it just give you more respect for what we've already got?
>> Totally. We operate on about 20 watts. That's dimmer than any light bulb in the room I'm in right now. And yet we can do so much. So I think, actually, the more I work in AI, the more I respect humans.
>> Let's talk about this uh product you just launched.
It's called Marble.
A very cute name. Talk about what this is and why it's important. I've been playing with it.
It's incredible. We'll link to it and for folks to check it out.
What is Marble?
>> Yeah, I'm very excited. So first of all, Marble is one of the first products that World Labs has rolled out.
World Labs is a frontier foundation model company.
We are founded by four co-founders who have deep technical history.
My co-founders are Justin Johnson, Christoph Lassner, and Ben Mildenhall.
We all come from the research field of AI, computer graphics, computer vision.
And we believe that spatial intelligence and world modeling is as important as, if not more important than, language models, and complementary to language models.
So we wanted to seize this opportunity to create a deep tech research lab that can connect the dots between frontier models and products.
So, Marble is an app that's built upon our frontier models.
We've spent more than a year building the world's first generative model that can output genuinely 3D worlds.
That's a very, very hard problem, and it was a very hard process. We have a founding team of incredible technologists from incredible teams. And then, just a month or two ago, we saw for the first time that we could prompt with a sentence, an image, or multiple images and create worlds that we can just navigate in. If you put on goggles, which we have an option to let you do, you can even walk around, right?
So even though we've been building this for quite a while, it was still just awe-inspiring, and we wanted to get it into the hands of people who need it.
And we know that so many creators, designers, people thinking about robotic simulation, people thinking about different use cases of navigable, interactable, immersive worlds, and game developers will find this useful. So we developed Marble as a first step.
It's still very early, but it's the world's first model doing this, and it's the world's first product that allows people to just prompt; we call it prompt-to-worlds.
>> Well, I've been playing around with it.
It is insane. Like, you could just have a little Shire world where you just infinitely walk around Middle-earth, basically, and there's no one there yet, but it's insane.
You just go anywhere. There's like dystopian world.
I'm just looking at all these examples.
>> And my favorite part, actually, I don't know if this is a feature or a bug: you can see the dots of the world before it actually renders with all the textures. And I just love that you get a glimpse into what this model is basically creating.
>> That's so cool to hear, because this is where, as a researcher, I'm learning. The dots that lead you into the world are an intentional visualization feature. It is not part of the model; the model just generates the world. We were trying to find a way to guide people into the world, and a number of engineers worked on different versions, but we converged on the dots. And so many people, you're not the only one, told us how delightful that experience is. It was really satisfying for us to hear that this intentional visualization feature, not just the big hardcore model, has actually delighted our users.
>> Wow. So you added that to help humans understand what's going on, to make it more delightful. Wow, that is hilarious. It makes me think about LLMs and the way they, it's not the same thing, but the way they talk about what they're thinking and what they're doing.
>> Yes, it is. It is.
>> It also makes me think about just the Matrix.
Like, it's exactly the Matrix experience.
I don't know if that was your inspiration.
>> Well, like I said, a number of engineers worked on that. It could be their inspiration.
It's in their subconscious.
>> Yeah.
>> Okay. So, for folks who may want to play around with this, maybe use it: what are some applications that folks can start using today? What's your goal with this launch?
>> Yeah. So we do believe that world modeling is very horizontal, but we're already seeing some really exciting use cases. One is virtual production for movies, because what they need are 3D worlds that they can align with the camera, so that when the actors are acting, they can position the camera and shoot the segments really well. In fact, I don't know if you have seen our launch video showing Marble; it was produced by a virtual production company.
We collaborated with Sony, and they used Marble to shoot those videos. We were collaborating with those technical artists and directors, and they were saying this has cut their production time by 40x.
In fact, it had to.
>> Yes, it had to, because we only had one month to work on this project, and there were so many things they were trying to shoot. So using Marble really significantly accelerated virtual production for VFX and movies.
That's one use case. We are already seeing users taking a Marble scene, taking the mesh export, and putting it into games, whether it's games in VR or just fun games they have developed. We were also showing an example of robotic simulation, because, I mean, I still am a researcher doing robotic training.
One of the biggest pain points is creating synthetic data for training robots. And this synthetic data needs to be very diverse. It needs to come from different environments, with different objects to manipulate.
And one path to that is to ask computers to simulate.
Otherwise, humans have to, you know, build every single asset for robots.
That's just going to take a lot longer.
So we already have researchers reaching out and wanting to use Marble to create those synthetic environments.
We also have unexpected user outreach in terms of how people want to use Marble.
For example, a team of psychologists called us wanting to use Marble for psychology research.
It turns out that for some of the psychiatric patients they study, they need to understand how their brains respond to immersive scenes with different features.
For example, messy scenes or clean scenes, or whatever you name it. And it's very hard for researchers to get their hands on these kinds of immersive scenes.
It would take them too long and too much budget to create them. And Marble is an almost instantaneous way of getting so many of these experimental environments into their hands.
So we're seeing multiple use cases at this point, but the VFX people, the game developers, the simulation developers, as well as designers, are very excited.
>> This is very much the way things work in AI.
I've had other AI leaders on the podcast, and it's always: put things out there early, as soon as you can, to discover where the big use cases are. The head of ChatGPT told me how, when they first put out ChatGPT, he was just scanning TikTok to see how people were using it and all the things they were talking about, and that's what convinced them where to lean in and helped them see how people actually want to use it.
I love this last use case of like for therapy.
I'm just imagining, like, heights, people dealing with heights or snakes or spiders, which...
>> It's amazing. A friend of mine last night literally called me and talked about his fear of heights and asked me if Marble could be used for that. That's amazing.
You went straight there.
>> That's, you know, because I'm imagining all the exposure therapy stuff; this could be so good for that.
That is so cool. Okay, I should have asked you this before, but I think there's going to be a question of how this differs from things like Veo 3 and other video generation models. It's pretty clear to me, but I think it might be helpful to explain how this is different from all the video AI tools people have seen.
>> World Labs' thesis is that spatial intelligence is fundamentally very important, and spatial intelligence is not just about videos.
In fact, experiencing the world is not passively watching videos pass by, right?
I love Plato's allegory of the cave as an analogy for vision. He said, imagine a prisoner tied to his chair, not very humane, in a cave, watching what looks like a live theater in front of him. But the actual live theater, where the actors are acting, is behind his back. It is lit so that the projection of the action falls on a wall of the cave, and the task of this prisoner is to figure out what's going on.
It's a pretty extreme example, but it really describes what vision is about: making sense of the 3D or 4D world out of 2D.
So spatial intelligence, to me, is deeper than creating that flat 2D world.
Spatial intelligence, to me, is the ability to create, reason about, interact with, and make sense of a deeply spatial world, whether it's 2D or 3D or 4D, including dynamics and all that.
So, World Labs is focusing on that.
And of course, um the ability to create videos per se, could be part of this.
In fact, just a couple of weeks ago we rolled out the world's first demoable real-time video generation on a single H100 GPU. So part of our technology includes that.
But I think Marble is very different, because we really want creators, designers, and developers to have in their hands a model that can give them worlds with 3D structure, so they can use it for their work.
And that's where that's why Marble is so different.
>> The way I see it, it's a platform for a ton of opportunity to do stuff. Whereas, as you described, videos are just, here's a one-off video that's very fun and cool, and that's it, and you move on.
>> By the way, in Marble we allow people to export in video form.
So you could actually, like you said, go into a world. Let's say it's a hobbit cave. Especially as a creator, you have such a specific way of moving the camera along a trajectory in the director's mind, right?
And then you can export that from Marble into a video.
>> What does it take to create something like this?
Just like how big is the team?
How many GPUs are you working with?
Like anything you can share there?
I don't know how much of this is private information, but just what does it take to create something like this that you've launched here?
>> It takes a lot of brain power.
We just talked about 20 watts per brain. So from that point of view, it's a small number, but it's actually incredible; it took half a billion years of evolution to give us that power. We have a team of 30-ish people now, and we are predominantly researchers and research engineers, but we also have designers and product people.
We really believe in creating a company that's anchored in the deep tech of spatial intelligence, but we are also building serious products.
So we have this integration of R&D and productization, and of course we use a ton of GPUs. That's the technical side.
>> I'm so happy to hear it.
>> Well, congrats on the launch.
I know this is a huge milestone. I know this took a ton of work. So, I just want to say congrats to you and your team.
>> Let me talk about your founder journey for a moment. So, you're a founder of this company.
You started how many years ago?
Couple years ago, two, three years ago.
>> Oh, a year ago. A year ago.
>> A year. Okay.
>> 18 months. Yeah.
>> Okay. What's something you wish you knew before you started this, something you could whisper into the ear of the Fei-Fei of 18 months ago?
>> Well, I continue to wish I knew the future of technology.
I think that's actually one of our founding advantages: we see the future earlier, in general, than most people.
But still, man, this is so exciting and so uh amazing that that what's unknown and what's coming.
But I know the reason you're asking me this question is not about the future of technology.
You're probably asking more about, you know... Look, I did not start a company of this scale at 20 years old. I started a dry cleaner when I was 19, but that's a little smaller scale.
>> We've got to talk about that.
>> And then, you know, I founded Google Cloud AI, and then I founded an institute at Stanford, but those are different beasts.
I did feel I was a little more prepared as a founder for the grinding journey, compared to maybe the 20-year-old founders. But I'm still surprised, and it sometimes puts me into paranoia, by how intensely competitive the AI landscape is, from the models and the technology itself to talent. When I founded the company, we did not have these incredible stories of how much certain talent would cost. So these are things that continue to surprise me, and that I have to be very alert about.
>> So the competition you're talking about is, yeah, the competition for talent, the speed at which things are moving.
>> Yeah.
>> Yeah. You mentioned this point that I want to come back to: if you just look over the course of your career, you were at all of the major collections of humans that led to so many of the breakthroughs happening today.
Obviously we talked about ImageNet, and also SAIL at Stanford, where a lot of the work happened, and Google Cloud, where a lot of the breakthroughs happened.
What brought you to those places? For people looking for how to advance in their career and be at the center of the future, is there a through-line of what pulled you from place to place and into those groups that might be helpful for people to hear?
>> Yeah, this is actually a great question, Lenny, because I do think about it and uh obviously we talked about it curiosity and passion that brought me to AI.
That is more a scientific northstar, right?
I did not care if AI was a thing or not.
So that was one part. But how did I end up choosing the particular places I worked, including starting World Labs? I think I'm very grateful to myself, or maybe to my parents' genes.
I'm an intellectually very fearless person, and I have to say, when I hire young people I look for that, because I think that's a very important quality if one wants to make a difference. When you want to make a difference, you have to accept that you're creating something new or diving into something new that people haven't done.
And if you have that self-awareness, you almost have to allow yourself to be fearless and courageous. For example, when I came to Stanford: in the world of academia, I was very close to this thing called tenure, which is, you know, having the job forever, at Princeton.
But I chose to come to Stanford. I love Princeton; it's my alma mater. It's just that at that moment there were people who were so amazing at Stanford, and the Silicon Valley ecosystem was so amazing, that I was okay taking the risk of restarting my tenure clock.
The same with becoming the first female director of SAIL. I was actually, relatively speaking, a very young faculty member at that time, and I wanted to do it because I cared about that community.
I didn't spend too much time thinking about all the failure cases.
Obviously, I was very lucky that the more senior faculty supported me, but I just wanted to make a difference. And then going to Google was similar. I wanted to work with people like Jeff Dean, Jeff Hinton, and um all these incredible Dennis, the the incredible people.
It's the same with World Labs. I have this passion, and I also believe that people with the same mission can do incredible things. So that's how it has guided me through life.
I don't overthink all the possible things that can go wrong, because there are too many.
>> I feel like that's an important element of this is not focusing on the downside, focusing more on the people, the mission.
What gets you excited?
What do you think?
>> I do want to say one thing to all the young talents in AI, the engineers and researchers out there, because some of you apply to World Labs.
I feel very privileged that you considered World Labs.
I do find that many young people today think about every single aspect of the equation when they decide on jobs. Maybe that's the way they want to do it.
But sometimes I do want to encourage young people to focus on what's important, because I find myself constantly in mentoring mode when I talk to job candidates.
Not necessarily recruiting or not recruiting, but just in mentoring mode.
When I see an incredible young talent over-focusing on every minute dimension and aspect of considering a job, maybe the most important things are: where's your passion? Do you align with the mission?
Do you believe and have faith in this team?
Just focus on the impact you can make, and the kind of work and team you can work with.
>> Yeah, it's tough for people in the AI space now. There's so much coming at them: so much news, so much happening, so much FOMO.
>> That's true.
>> I could see the stress. And so, I think that advice is really important.
Just, what will actually make you feel fulfilled in what you're doing, not just where's the fastest-growing company, or who's going to win?
I don't know. I want to make sure I ask you about the work you're doing today at Stanford at HAI, the Human-Centered AI Institute.
>> What are you what are you doing there?
I know this is a thing you still do on the side.
>> So yes. HAI, the Human-Centered AI Institute, was co-founded by me and a group of faculty, like Professor John Etchemendy, Professor James Landay, and Professor Chris Manning, back in 2018.
I was actually finishing my last sabbatical at Google. It was a very important decision for me, because I could have stayed in industry, but my time at Google taught me one thing: AI is going to be a civilizational technology. It dawned on me how important this is to humanity, to the point that I actually wrote a piece in the New York Times that year, 2018, about the need for a guiding framework to develop and apply AI, and that framework has to be anchored in human benevolence, in human-centeredness. And I felt that Stanford, one of the world's top universities, in the heart of the Silicon Valley that gave birth to important companies from Nvidia to Google, should be a thought leader in creating this human-centered AI framework and embodying it in our research, education, policy, and ecosystem work.
So I co-founded HAI, and fast-forward six or seven years, it has become the world's largest AI institute doing human-centered research, education, ecosystem outreach, and policy work.
It involves hundreds of faculty across all eight schools at Stanford, from medicine to education to sustainability to business to engineering to the humanities to law. And we support researchers, especially in interdisciplinary areas, from the digital economy to legal studies to political science to the discovery of new drugs to new algorithms beyond transformers.
We also put a very strong focus on policy, because when we started HAI, I realized that Silicon Valley did not talk to Washington, DC, or Brussels, or other parts of the world, and given how important this technology is, we need to bring everybody on board.
So we created multiple programs, from a congressional boot camp to the AI Index report to policy briefings, and we especially participated in policymaking, including advocating for a national AI research cloud bill that was passed in the first Trump administration, and participating in state-level AI regulatory discussions.
So there's a lot we did, and I continue to be one of the leaders, even though I'm much less involved operationally, because I care not only that we create this technology but that we use it in the right way.
>> Wow. I was not aware of all that other work you were doing. As you were talking, I was reminded that Charlie Munger had this quote: take a simple idea and take it very seriously. I feel like you've done that in so many different ways and stayed with it, and it's unbelievable the impact you've had in so many ways over the years. I'm going to skip the lightning round and just ask you one last question.
Is there anything else that you wanted to share?
Anything else you want to leave listeners with?
>> I'm very excited by AI, Lenny.
I want to answer one question that everybody asks me when I travel around the world: if I'm a musician, if I'm a middle school teacher, if I'm a nurse, an accountant, a farmer, do I have a role in AI, or is AI just going to take over my life or my work?
And I think this is the most important question of AI. I find that in Silicon Valley, we tend not to speak heart-to-heart with people, both people like us and people not like us; we tend to just toss around words like infinite productivity, infinite leisure time, or infinite power. But at the end of the day, AI is about people. And when people ask me that question, the answer is a resounding yes.
Everybody has a role in AI.
It depends on what you do and what you want.
But no technology should take away human dignity, and human dignity and agency should be at the heart of the development, the deployment, and the governance of every technology.
So if you are a young artist and your passion is storytelling, embrace AI as a tool.
In fact, embrace Marble. I hope it becomes a tool for you.
Because the way you tell your story is unique, and the world still needs it. How you use the most incredible tools to tell your story in the most unique way is important, and that voice needs to be heard.
If you're a farmer near retirement, AI still matters because you're a citizen.
You can participate in your community.
You should have a voice in how AI is used and how AI is applied, and I encourage all of you to use AI to make life easier for you. If you're a nurse, I hope you know that in my career, I have worked so much in health care research because I feel our health care workers should be greatly augmented and helped by AI technology, whether it's smart cameras to feed in more information or robotic assistance. Our nurses are overworked and over-fatigued, and as our society ages, we need more help for people to be taken care of. So AI can play that role.
So I just want to say that it's so important that even technologists like me are sincere about this: everybody has a role in AI.
>> What a beautiful way to end it.
Such a tie back to where we started, about how it's up to us to take individual responsibility for what AI will do in our lives.
Final question, where can folks find Marble? Where can they go?
Maybe try to join World Labs if they want to. What's the website?
Where do people go?
>> Well, the World Labs website is www.worldlabs.ai, and you can find our research progress there. We have technical blogs.
You can find Marble the product there.
You can sign in there.
You can find our job post links there.
And, you know, we're in San Francisco.
We love to work with the world's best talent.
>> Amazing. Fei-Fei, thank you so much for being here.
>> Thank you, Lenny.
>> Bye, everyone.
Thank you so much for listening.
If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app.
Also, please consider giving us a rating or leaving a review as that really helps other listeners find the podcast.
You can find all past episodes or learn more about the show at lennyspodcast.com.
See you in the next episode.