
Logan Kilpatrick on Gemini, AGI & The Future of Google Gemini | The Roo Cast - Oct 22, 2025, Ep. 28

By Roo Code

Summary

Key Takeaways

  • Vibe coding via "I'm Feeling Lucky": AI Studio's "I'm feeling lucky" button builds AI-powered apps from a single prompt, like a meeting agenda generator with document uploads and Google Search grounding, in one click. [02:22], [03:43]
  • Gemini unifies Google dev tools: Gemini acts as the "through line" connecting Google's disconnected developer ecosystems, like Android Studio, TensorFlow, and web tools, into a unified experience via the CLI and subscriptions. [14:12], [14:51]
  • Models eat developer scaffolding: As models advance they are "eating the scaffolding," making tools like context condensing obsolete with 1M-2M token windows and pushing innovation to new frontiers like 100M tokens. [25:32], [28:10]
  • No to code-specific models: Code-specific models lose the capability transfer and world understanding needed for general software building; general models like Gemini reach global maxima without splitting compute focus. [41:38], [43:02]
  • AGI via superhuman products: Users will experience AGI through products delivering superhuman coding capabilities before true general intelligence arrives, despite LLM quirks like miscounting the Rs in "strawberry," thanks to parallelization. [43:24], [44:37]

Topics Covered

  • AI Studio Enables One-Shot Prototyping
  • Prototypes Trump Product Specs
  • Gemini Unifies Fragmented Tools
  • General Models Beat Code-Specific
  • Superhuman Coding Precedes AGI

Full Transcript

>> Welcome to the Roo Code podcast. I'm your host, Hannes Rudolph, the community manager of Roo Code. Joining me today are my co-hosts. Oh, look at that, I'm streaming myself echoing. Okay. Joining me today are my co-hosts, Roo Code founders Matt and Danny, and our special guest, Logan from Google DeepMind. Good morning, everybody, and thank you for being here.

Hey folks, happy Wednesday.

>> So, Logan, for listeners who may not know who you are, can you tell us a little bit about yourself? What's your role at DeepMind?

>> Yeah. Well, first of all, I'm excited for this conversation. Big fans of what you all are building, so excited to chat. I lead our developer product team at DeepMind, responsible for AI Studio, which is our developer platform and now supports vibe coding, which is exciting, and the underlying APIs for all of our latest models and services. So things like Gemini, Veo, Imagen, etc.

>> Yeah, you all have been launching a lot of stuff this week. I'd love to hear a little more about the most recent one, AI Studio, if you're willing to talk about it. And I don't know if you're set up to demo, but I'd love to see it a little bit as well.

>> Yeah, let's do it. Let me share my screen live and we can see if this guy comes prepared.

>> Well, it's easy to demo. Yeah. So we just landed this brand-new vibe coding experience, which I'm really excited about. I'll say this with all the normal caveats: vibe coding is overloaded, there are lots of different products, and it means different things to different people. But the thesis for us is that we think building software is extremely powerful, and particularly, we see a ton of interest and traction in building software that's actually AI-powered. Because our team is responsible for both this UI product and all the underlying APIs that developers use to build AI-powered experiences, there's this interesting fusion of those two things coming together, and I think that's the experience we're building here. We're going to help people build AI-powered apps and make it super easy to do. Hopefully it should literally be as easy as coming here and clicking this I'm Feeling Lucky button. Then we have a bunch of suggested prompts and different AI capabilities as well. This one that I clicked is: create an app that allows me to upload any document and build out a meeting agenda from it, then create a meeting in the timeline view. We're going to use shadcn for styling, a clean panel on the left for uploads. This is really cool. The fun thing is you can also really quickly add additional AI capabilities. We have this grounding with Google Search functionality, so I'm going to add that in as well, and then we can actually kick off building this app. Then you get even more things, like AI-suggested next steps: I can potentially add the ability to easily manage and assign roles to different meeting stakeholders, which can be linked to meeting agenda items. You can go and deploy, you can share, you can move it to GitHub. The experience is free, but if you want to use certain premium models and things like that, you can switch to one of your API keys if you want. So yeah, removing the friction for people to build is so exciting, and I just saw on the timeline yesterday all these great examples of people building cool new stuff. So, happy to keep jamming, but I'm excited to see what this builds for us.

>> I love the renewed relevance of the I'm Feeling Lucky button.
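
For readers following along at the API level: the "grounding with Google Search" toggle in the demo corresponds to declaring a search tool on a Gemini generateContent request. Below is a minimal sketch of that request body in Python; the endpoint and model name are assumptions based on Google's public API docs, so check the current reference before relying on them.

```python
import json

# Assumed public endpoint; the model name is illustrative only.
GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-2.5-flash:generateContent"
)

def build_grounded_request(prompt: str) -> dict:
    """Build a generateContent body asking the model to ground its
    answer with Google Search results."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Declaring the google_search tool lets the model issue search
        # queries and return grounded, cited answers.
        "tools": [{"google_search": {}}],
    }

body = build_grounded_request("Draft a meeting agenda from the uploaded document.")
print(json.dumps(body, indent=2))
```

POSTing this body (with an API key) to the endpoint above is all it takes to add grounding; no separate search API call is needed, which is the "one click" Logan is describing.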

>> We talk a lot here about wanting to one-shot, and man, is that a brand for one-shotting right there.

>> Yeah. You know what's funny, and I don't mean to be overly critical of Search, but it is interesting that I don't personally find the I'm Feeling Lucky button in Search useful at all, because it still requires that you put in the search term. And I'm actually curious, I don't know what the data is. Maybe people get a ton of value out of it, but I've never used it.

>> I always saw I'm Feeling Lucky as bragging rights for Google. Like, our results are so good, you probably think number one is really great. And in a vibe coding world, we talk a lot about how we can get a prompt to correctly one-shot as gigantic a task as possible, and I'm Feeling Lucky is just the manifestation of that to me. It's really cool.

>> I love that. That's a great reminder. So now it's our bragging rights, hopefully, that we'll have models good enough to just build you a really cool AI product off the bat. But this is very new, so it'll be interesting to see what the data looks like: how much do people actually want to let the model come up with some cool idea for a product to build, versus how much handcrafting do they want? But we got our generation. I'll just note a couple of things. I spend a lot of time with folks who are new to this vibe coding experience. I think your audience is probably much more deeply technical, but I always like to make my disclaimer that whatever the model puts together is the starting place. Even if it's slightly broken or doesn't have the features you want, you can continue to iterate, and that's the magic of this. Also, the prompt we were using is not sophisticated; I didn't really specify a whole lot of detail about what I wanted. I could have even put in an image with a UI and things like that, but I'm just going to make up an agenda. So, the agenda for this podcast is: intros, show AI Studio, the app, a little cool Roo Code stuff, we love AI. And we'll see what this actually puts together. I have no idea.

But the fun thing is, if it doesn't work, we get to continue to iterate live. I'll make one other quick example. One of the things I love most about this UI experience, again from a developer point of view: I'm not a TypeScript developer, and it is helpful for me to start building my mental model of what's happening in TypeScript. When I hover over any of these files, it tells me that the file upload panel.tsx creates a left panel component from the user input, includes a text area, etc. So I can roughly understand how what I'm seeing on my screen maps to the underlying code files, without actually having to go and look at the code. You can look at the code if you want. You can open up the editor. You can change whatever you want. But by default, you don't have to. And actually, I just lost my beautiful agenda.

>> Because it reloads when you look at the code, in case you made any changes.

So yeah, I think the experience is super interesting. Happy to riff in any direction. We also have, I won't show my apps, but this great gallery showcasing all these different capabilities. So if you're like, I want to build a video-gen app, or I want to build something with the grounding with Google Maps feature, or Nano Banana, all those different things are possible. And again, the fun part is you can actually directly change and manipulate them. This one's really cool: you can take a picture of a product and then drag to insert it into a specific place in a scene. So let me actually see if I can put in a goofy scene.

This one I will have to note: Nano Banana made this for me earlier today for a YouTube thumbnail, which is a little bit goofy, so I acknowledge that. But then I'll take the Gemini logo and put this in. It should be able to just drag in, and I'll put the Gemini logo right there, and then it'll use Nano Banana to fuse these two things together. And this example was completely vibe-coded by somebody on my team, which is awesome. I think it just showcases... yeah, it almost got it perfectly right. Very cool to see this come together. And again, if you want to see under the hood what's happening, how did it know where I was dragging? I think it's coordinate-based. You can actually read through the code and figure it out, or ask Gemini to explain how it's making that piece possible.
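
Logan's guess that the drag-to-insert feature is coordinate-based can be illustrated with a small, hypothetical helper: map the mouse-drop position (in page pixels) to normalized scene coordinates that a prompt to an image model could then reference. Everything here, the Viewport shape and the function name, is invented for illustration, not taken from the actual app's code.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """The on-screen rectangle where the scene image is displayed."""
    left: float
    top: float
    width: float
    height: float

def drop_to_scene_coords(view: Viewport, drop_x: float, drop_y: float) -> tuple:
    """Map a mouse-drop position (page pixels) to normalized [0, 1]
    scene coordinates, clamped to the image bounds. A prompt to an
    image model could then say 'insert the product at (x, y)'."""
    nx = (drop_x - view.left) / view.width
    ny = (drop_y - view.top) / view.height
    clamp = lambda v: min(1.0, max(0.0, v))
    return clamp(nx), clamp(ny)

# Dropping the logo at page pixel (450, 300) inside a 400x400 view at (250, 100):
print(drop_to_scene_coords(Viewport(250, 100, 400, 400), 450, 300))  # (0.5, 0.5)
```

The clamping matters because a drag can end slightly outside the displayed image; normalizing to [0, 1] also keeps the coordinates independent of the rendered image size.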

>> Can we connect this to our GitHub repos?

>> Yes. Right now you can save to GitHub, so you can go and actually migrate the code there. What doesn't work right now, at least, is taking an existing GitHub repo, bringing it in, and then doing the iteration. We think about this experience as a starting place for people. If you're a sophisticated developer who's already using GitHub, saving your code and building there, the hypothesis right now is that there are lots of great developer tools out there that go a lot deeper than what we'll be able to offer. So really, this is the top-of-funnel experience.

>> So we can think of this as a prototyping environment first. And for those of you who are going to have to implement something that a designer or product manager had in their mind: a working prototype is better than a spec.

>> 100%.

>> Here it is.

>> Yeah. And there are very thin use cases, and this is also tied to the model story: hopefully, eventually, the models will be good enough that they'll really be building full products for people. I don't think that's the case right now. This home canvas one is a good example of something that just works out of the box, and you could have a very micro-SaaS product around something like this. But there are definitely limitations everywhere. And I think that's the power of being a developer with really powerful bespoke tools: you can go five levels deeper and make the nuance of the experience incredible. That's not the sort of experience we're going after right now. We really just want to get you to that aha moment as quickly as possible, that you can build this thing with AI, and then potentially get you into another product that's a little bit deeper and allows you to continue to build if you want to.

>> Are you actually using things like this day-to-day as a product leader to prototype?

>> Yeah, this is the fun thing. I can't show these demos because they're all internal ones, but that feature we were looking at when we were building, the one that allows you to click through different options and suggestions, was initially prototyped in AI Studio. The entire revamped overall design experience was prototyped in AI Studio; we prototyped AI Studio in AI Studio. We have full-stack runtime support coming, which is really exciting, so we have an even deeper version of this that allows us to iterate quickly. The cost of, hey, I have some random feature idea that I'd love to see what it looks like, the cost of bringing that into reality and putting my fingers on it, is really nice now. And I'm sure others have had this experience with their engineering counterparts: it's way easier to show up with a prototype like this and say, "Hey, here's what I'm thinking." There's much less guesswork in that than with a PRD or a product spec's representation of what I think actually ends up being built. So it's super exciting.

>> And it forces design and product to think through the experience edge cases that aren't evident on a mock, which is always kind of my pet peeve.

>> 100%, 100%. You just end up doing a bunch of guesswork, and then people are surprised when the guesswork other people are doing doesn't actually connect with the guesswork they were doing in their head. So I think you save a lot of that back and forth by being able to actually bring the thing to life. It's been immensely helpful for us. And broadly, this is something that gets me really excited for Google in general: we have a huge number of internal teams using this to prototype next-generation experiences across all of Google. I think we're just scratching the surface of accelerating product development for all of Google, which is going to be really magical.

>> As you think about the roadmap for Google assisting software development overall, both internally and externally: I think I understand where this fits in the broader life cycle of software development, but what else is on the roadmap, to the extent that you can talk about it? We'd love to know.

>> Yeah. One of the interesting things for Google is that there are so many places in which we show up, in so many different ecosystems. This work is obviously just one of many places; we're showing up in a completely different way for Android developers, and for web developers with Chrome and everything happening there. And maybe this is an overly generic answer, but I think it has a really interesting thread to pull on: historically, as you looked at that portfolio of developer ecosystem offerings Google had, they were all actually extremely disconnected. The Android developer didn't have a lot of crossover with the web developer, who didn't have a lot of crossover with the person using TensorFlow or some other product in Google's ecosystem of offerings. The exciting piece now is that you actually have Gemini bringing these things together. Gemini is this through line in which, as a user, as a developer, you're connected to all of these tools. You start to see things like the Gemini CLI showing up on more of Google's developer services. I don't know off the top of my head whether it's integrated as part of Android Studio, but it's becoming more and more integrated: if there's a terminal available, there's an option to use the Gemini CLI as part of that to help with code development. And it's also available as part of the Google subscription offering, so that you can ideally have a single subscription and use that Gemini capability across the breadth of Google's developer products, instead of an isolated, fragmented experience. And obviously the models: coding from a model perspective is extremely important, very top of mind, something we want to hill-climb on. It's very clear that if you have a great coding model, there's a massive amount of unlock for the world in that. So lots of effort on that, lots of progress already, and hopefully more progress soon.

>> So when we think about Gemini and the road ahead, and particularly why Gemini wins and why the Google developer ecosystem wins overall: is it that through line that's the secret sauce? Are there other ways we should think about how the ecosystem is stronger and ultimately wins?

>> Yeah, that's a great question. This is one of the interesting parts about what it means for Google to win with developers. So much of the story, and I think about this all the time, is: how do we meet developers where they actually are? The reality is you're probably using 50 tools that have nothing to do with Google and will never have anything to do with Google. And it's a great opportunity for us to still have a touch point, to still be able to create value for developers in those ecosystems. I think that's why having this really strong Gemini coding model show up in third-party developer tools, where code and software creation is actually happening, is really important for us strategically, versus making our own product and solely reserving a new Gemini model for that type of product. So being in the ecosystem matters a lot; that's a core part of how we think about the positive outcome of this technology. But separately, there's also something interesting: coding is obviously really important to developers, but there are these ten other things, all these really interesting emerging use cases around multimodal and real-time interactive experiences. Just having a great coding model isn't enough for us to do the things we want to do; we need to be state-of-the-art in the frontier experiences developers are trying to build. The two great examples of this are computer-use models and the whole computer-use agent space, and then voice, real-time interactive stuff, and voice agents. Those are two use cases that are top of mind, where we're trying to push the rock up the hill from both a model perspective and a product perspective.

>> And how can we at Roo Code, and I mean the royal we of both the company and the community, many of whom are tuned in at the moment, how can we help you? How can we be part of why Gemini wins?

>> Yeah. Well, first of all, we have been partnering with you all, which has been awesome. So I would love feedback on the new models that are coming out. There's also something interesting around this flywheel of what doesn't work well. I love reading about what's broken, not because I like seeing things that don't work, but because I like knowing what developers are having problems with, so that we can actually go and hill-climb on those things. So that's very top of mind: as a developer, what are the things you're trying to do today that just aren't possible, or don't work well, or aren't reliable enough for you to really lean on the models or on these different ecosystems? I would love that feedback and signal so we can make sure it's what we're prioritizing as we think about what the next generation of these models looks like, what API features we should be building, etc. Because the promise is: there are so many things that are hard about building software, and ideally we make some of those things easier so you don't have to smash your head against the wall more than you already do. In some cases AI is great and super powerful, but there are edge cases where you definitely have to smash your head against the wall more than you did before, and that's the wrong outcome in a lot of ways.

>> How much do you pay attention to the evals? You know, when you were talking about evaluating how things are getting better.

>> Yeah, evals are obviously super important. The challenge is that an eval is a proxy of a proxy. Even some of the best evals out there... if you look specifically at coding evals as an example, and you go in and spend the time, this is why I'm very hopeful we'll get a really great consolidated eval platform, because it's really hard to keep up. There are like 50 different evals, and based on the way the wind is blowing, one day people care about one versus the other, and they say, oh, we discount that one because of these three things, and I completely lose track of it.

>> Oh, yeah.

>> I'd love to have a platform that makes that easy and makes my life simpler. But the fundamental question is always: does the eval actually represent the work developers are doing, or what they're trying to do? What people are building with software is so varied that the answer is yes in certain cases, but no in many cases as well. And it's hard, I think even as a developer, to know. I was talking to someone today about open-source game engines and whether the models were good at these different specific game engines, and there's no way to know other than somebody running the evals and spending the painstaking amount of time to make that happen. If you're the developer in that situation, trying to understand whether you should be using these tools to accelerate the work you're doing, the bar of having to go and build evals to answer that question is actually really, really high. So I think there's something interesting there to figure out, to make it easier for developers to answer that question for themselves.

>> We asked you... oh, sorry.

>> No, go ahead.

>> We asked you about winning here, and that sort of moves on to: from an outsider's perspective, how do you think a tool like Roo Code wins? How do we take off? Because right now, I hate to say it, but we're an underdog in this race.

>> That's an understatement.

>> There are interesting dimensions to this. I don't have... and we should have an offline conversation; I'd love to learn more about what's top of mind strategically for you all. The way I see it is: obviously there's a massive platform shift happening in how software is being created. There's no better time for disruption, and for net-new players in the market to really grab a piece of that experience, than when a technology evolution is happening, when developers are more willing than ever to go and try these different experiences and try new tools. If you look back three years ago, my sentiment was that developers kind of didn't want that. They were like, "Hey, I've got great tools right now. Just help me do what I'm doing right now, better and faster." And there needed to be a really high bar for me to switch off one of the existing platforms I was using. Right now, people are like, "Oh, a new tool, let me give it a shot," and see whether or not it matters for them. I think there's some balance for you all, or for anyone building a product, to think about: how do you go after some of these frontier use cases that don't really work that well today, versus, hey, we have this developer journey that we can really refine and polish and make two to five times better, but which is also perhaps a journey that lots of people are going after? There's this interesting tension between those two things. Developers want to bet on products that are going to be on the frontier of what's possible, because that way they ideally won't have to keep switching tools every month. But they also have a lot of product experiences they use day in and day out, and they want those to get better over time. And there's a thoughtfulness about where AI fits in that story. People are often at one end of the spectrum, like, hey, we'll just throw AI at everything, and I don't actually think that solves people's problems in a lot of ways. So if you can nail the nuance of when to use AI and when not to, and what the right experiences are that people care about, and that's a little hand-wavy, but I think there's no better time than right now to be competing against everyone in the developer space, because of the shift that's happening and how much new product exploration can actually be done.

>> Yeah. I remember the first time I think I hit 70 million tokens of context, you know, with our sliding window. It was with the Gemini 2.5 preview. It was just unbelievable. So we love what you guys have been doing and are excited about what you're going to be doing. People are pestering me endlessly in the chat: ask him about Gemini 3, expecting that you're going to give us the answer.

>> It's coming. It's coming.

>> Yeah, because he's going to come on the Roo Code podcast to announce it, right? This is obviously the place where a big announcement happens, right?

>> Exactly. That's what you have to do, though. Keep people on their toes.

>> Soon, soon. I'm excited. I think the positive vibes for Gemini 3 are really exciting. It's also a great reflection of, you know, where were we a year and a half ago? Nobody really cared that much about the Gemini models, and painstakingly earning the trust and the excitement from the ecosystem about what we're doing is a great reminder of just how quickly the tide can turn in people's perception and usage of this stuff, which is cool.

>> We're stoked for it. I think Matt has another question, and you made a bunch of friends in the chat for that.

>> The chat slowed down. I've been distracted.

>> Yeah. Yeah.

>> Yeah. I mean, as someone who's been very deep in building Roo Code: one story that I've experienced, and a lot of people have over and over, is that you solve some problem with your harness, and then the next week a model comes out that makes it all irrelevant. Like, a lot of people were working on context condensing, and then there's a 1 million token context window, and then 2 million. Do you have any thoughts or guidance on which direction the models are going to be going, and which directions are going to be more open for harnesses and things like Roo Code to provide value?
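
For readers unfamiliar with the harness work Matt is describing: "context condensing" typically means collapsing older conversation turns into a summary once the history outgrows the model's context window. A hypothetical sketch, with the tokenizer and summarizer stubbed out (a real harness would use the model's own token counts and a summarization call):

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def summarize(messages: list) -> str:
    # Stub: a real implementation would ask the model to summarize.
    return f"[summary of {len(messages)} earlier messages]"

def condense(messages: list, budget: int, keep_recent: int = 4) -> list:
    """Keep the system prompt and the most recent turns verbatim;
    collapse everything in between into a single summary message."""
    total = sum(count_tokens(m["content"]) for m in messages)
    if total <= budget:
        return messages  # fits in the window: no condensing needed
    system, rest = messages[0], messages[1:]
    old, recent = rest[:-keep_recent], rest[-keep_recent:]
    if not old:
        return messages  # nothing left to condense
    summary = {"role": "user", "content": summarize(old)}
    return [system, summary, *recent]

history = [{"role": "system", "content": "You are a coding agent."}] + [
    {"role": "user", "content": f"step {i} " * 50} for i in range(10)
]
condensed = condense(history, budget=200)
print(len(history), "->", len(condensed))  # 11 -> 6
```

This is exactly the kind of scaffolding a 1M-2M token window can make unnecessary for many sessions: when `total <= budget`, the function is a no-op, which is the "models eating the scaffolding" dynamic discussed below.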

>> Yeah, this is a really tough question, because I think you're spot on. There's this notion of the models eating the scaffolding as they get smarter, which is kind of what you want. Building scaffolding is in some cases really helpful and important; in other cases there's a cost to actually building it, and it's a trade-off against doing other stuff. And then there's this idea of the scaffolding frontier: what are the net new use cases you're trying to enable that aren't possible without all this scaffolding work?

It's tough to answer precisely, Matt; I think that's my challenge. But broadly, the model is becoming an agentic system out of the box. At least for Gemini models, it's worth thinking about, from our development perspective, the breadth of where Google's models actually show up. We aren't making Gemini models specifically for the developer ecosystem; they show up across Search and YouTube and Google Workspace and Gmail and ten other places. So there's a lot of thought that goes into the trade-offs we're making on these different models, and specifically where, versus our 1P customers and our 3P customers and so on.

And maybe this is too easy an example, but if you're building a bunch of scaffolding around Google Maps, that feels like an obvious case. We have this really interesting position: we control the Google Maps APIs, and we can ship a feature like Grounding with Google Search that makes it really easy, at the model level and the API level, to ground with both Search and Maps, without you having to go use some net new API. It's just built into the experience.

I think another one, and you were both alluding to this, is the long context story. Long context is obviously something super exciting to us, and we've been at the forefront with 1 million and 2 million token context windows. And it's now moved beyond just working for single-needle retrieval examples; it's getting much more robust, and the model is actually able to reason over everything that's in the context window.

But I still think it would be great to have a 100 million token context window model, even if it would be expensive and perhaps slow. Those are the types of things that actually require net new innovation, and in the meantime there's a huge amount of alpha if you can build great scaffolding that helps people do that. We haven't really done that, partially because the ecosystem has done a reasonable job of building some of that scaffolding; we haven't explored whether you could augment Gemini in some way to get to a 100 million token context even if the model itself can't do that natively. A lot of other folks are doing this, and it's not something we've been pushing on, but if the model could natively do it, that's super exciting. And then you all can build billion-token scaffolding and solve those infrastructure challenges, which I'm sure would be extremely unique.

>> I used Flash-Lite preview the other day to get up to half a billion in a chat, just repeatedly stuffing the context to try to break it.

>> Half a billion?

>> Well, it kept condensing, so I kept condensing and refeeding it back to it, and Flash-Lite just stayed on track. It did not miss a beat. It didn't start screwing up tool calls. It didn't start repeating itself like we've seen.
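That condense-and-refeed loop can be sketched roughly like this. This is a hypothetical illustration, not Roo Code's actual implementation: the four-characters-per-token estimate, the budget, and the truncation stand-in for a real model-written summary are all assumptions.

```python
# Hypothetical sketch of a context-condensing loop: when the
# conversation nears a token budget, older messages are collapsed
# into a single summary and the recent tail is kept verbatim.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def condense(messages: list[str], budget: int, keep_tail: int = 4) -> list[str]:
    """Collapse older messages into one summary entry if over budget."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_tail:
        return messages
    head, tail = messages[:-keep_tail], messages[-keep_tail:]
    # A real harness would ask the model to summarize `head`;
    # here we just truncate the joined text (in characters) as a stand-in.
    summary = "[summary of %d earlier messages] " % len(head) + " ".join(head)[:budget]
    return [summary] + tail

history: list[str] = []
for turn in range(1000):
    history.append(f"user/assistant turn {turn}: " + "x" * 200)
    history = condense(history, budget=2000)

# However many turns get stuffed in, the history stays bounded.
print(len(history), sum(estimate_tokens(m) for m in history))
```

The point of the sketch is the invariant: no matter how long the chat runs, the retained context never exceeds the budget, which is exactly what lets a harness keep refeeding a long session into a fixed-size window.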

>> That new Flash-Lite model is actually really good. I don't know how many folks caught it, because the timing was a little weird. Historically we'd release a bunch of models, then do a standard stable release, and then move on to the next thing. But this model actually came out for Flash-Lite and Flash after we had released the stable model, and most customers tend to end up sticking on the stable models. There were substantive improvements over the previous generation, though. So if you haven't tried them, give the new Gemini Flash latest and Gemini Flash-Lite latest models a shot. They're pretty good.

>> Yeah, absolutely. Matt, you were going to say something.

>> Well, I was going to say, I feel like we've gotten context condensing working pretty well, so I'm going to enjoy the time that that's still valuable before the models take over. But yeah, Hannes spends all day on this, because it's the dumbest things that stop you in AI, right? How many Rs are there in strawberry? When does your editor gray-screen? Those are the kinds of things we've been running up against, and it's crazy to push the limits there. We made progress, though. Hannes has been single-minded about trying to get rid of these gray screens, and half a billion, you really made it there.

>> Yeah, I did. One chat, and it didn't crash once. It kept going.

>> But yeah, look, I was going to respond a little about the frontier, because it's been interesting; we had Paige Bailey on last week and talked about it a little, too. Everyone's so excited for the next Gemini, but also Veo and Nano Banana; a lot of these different modalities are interesting. The one thing that seemed hard, and I don't know if you see this as well, is getting teams actually using all this stuff. It kind of feels like the advancement stops when you have busy engineering teams who are trying to ship stuff and hit deadlines. My sense is that the models are moving way faster than the teams are. Do you experience that at all, and is that part of what's behind the evangelism through AI Studio and things like that? I'm curious about the more human side of things, because my instinct is that tools like Roo and harnesses actually focusing on that side of the problem is probably better than trying to out-compete the models on tokens and things like that.

>> Yeah, that's a super interesting push. This actually ties to the scaffolding frontier narrative perfectly, because what ends up happening, especially for somewhat larger teams, is you say, hey, I want to build a product around the models right now. I have this problem my users have, we want to solve it, so we'll take the most advanced thing we have and do the normal thing, which is spend a bunch of engineering cycles to make the experience work reliably enough that customers are happy. And all the while you're doing that, the frontier is moving on to the next thing. I actually spent a bunch of time this year talking with customers, business owners, and developers who had built the initial version of

their product around the circa 2023 to early 2024 LLM era. And, I'm sure you all experienced this as well, the way of building around LLMs at that time was completely different from how you would build right now, because the models are all multimodal by default. They have reasoning capability by default. All the APIs are different; they're stateful, there's a bunch of stuff happening server side, and agents are working in some cases. So the conversation was oftentimes, hey, we have this product, we know it's not great, and we want to keep evolving it, but we made all of this investment into trying to make it work reliably, so what do we do at this point? The challenge for a lot of these folks is that basically you have to strip everything out and start over again; you're completely rewriting your product. That obviously has a huge cost. And I'm wary of that: on one hand it's exciting how much innovation is happening and how much the frontier is moving, which is hopefully a net positive, but on the other hand there's a bunch of work being done by humans who are putting in a lot of effort and time to make things work around the previous iterations. And

that work is definitely transitory; it's not going to be the thing that sticks around. So yeah, it's tough. I don't know if that's a great answer to your question or not, Matt, but I have a lot of empathy for teams that are going through this right now, because there's just so much happening, and hopefully the positive outcome is that you end up able to build a better product. And

there's this interesting disconnect, which is my final comment on this thread. The innovation is happening, and for engineering teams and people building products it's the gold rush of opportunity, because there's so much cool stuff to be built. Then on the consumer side, for the long tail of people who are actually customers, and maybe this is somewhat inclusive of developers, though developers are a little more in tune with this because of the success of a bunch of these AI developer products, there's this huge disconnect between consumer awareness and what's actually possible with the models. So if you can build a product that brings the long tail of users along and also creates a bunch of value around that, there's a ton of value-arbitrage creation you can do, which is exciting if you're building a business.

>> Yeah, one thing that's been so interesting, just talking to companies: this might be oversimplifying, but I feel like if AI isn't really working for your company, it's worth $20 a month, and if it is working, it's worth thousands. Trying to bridge that gap, I think we're seeing right now that some companies are feeling the thousands a month, a lot of them are feeling the $20 a month, and they're just trying to figure out even how to structure the way teams work. I was actually going to ask you about this, and I think it gets at it a little. You all kind of straddled this with Gemini when it first came out, right? There was the Pro version that was paid and the experimental Gemini 2.5 that was free. I've been thinking a lot about that, especially with the recent discourse around agentic coding being ad-supported and free. I'm curious if you have thoughts on those two forks: the one where you're trying to get the most out of the models, paying top dollar, even close to human-engineer prices, versus the versions that are free, cheap, and accessible. It doesn't have to be from the Google or Gemini perspective, but I'm curious about your take on those two forks in the road, and whether there's a middle ground worth considering.

>> Yeah, I think it depends on whose perspective. As a user, what I want, and this has been a principle for AI Studio, is that we actually give away a ton of compute. The reason we give away a ton of compute is that we want to get you to that aha moment where you're like, "Okay, I understand this is actually possible." The challenge is we don't have a good way of quantifying what that moment is. For other products and platforms, you know, Facebook had this thing initially where if you had 20 friends on Facebook, you were guaranteed to be a Facebook user for life, or something like that.

>> Magic moments. Yeah. Remember those?

>> The magic moment. I don't think we actually know what that metric is, though; we have a bunch of signals that perhaps influence us. Then at that moment, assuming the user has the ability to actually be a paid customer, we want to get them into the paid funnel. Maybe I won't go as far as to say morally, but it is imperative that we get compute into the hands of people who haven't yet had that aha moment with AI, and then, if they have the means to buy and be a paid customer, get them there. The interesting situation Google is in with the pricing stuff is that we're making this decision inside a large ecosystem with lots of other models, so it's a very different pricing situation than individual products have to go through. It's a very interesting, complex situation.

But yeah, I do believe in that get-people-to-the-aha-moment principle, and then at that point being able to monetize is something we've thought about from our team's perspective. There are also some interesting threads about outcome-based pricing as well.

Matt, this gets to your point about pricing the models and the systems close to what a human engineer would cost. The interesting thing is whether you can actually scale up and parallelize, because at that point the question is what's more efficient. If you can't parallelize, because the models and the systems require a lot of handholding and human intervention to get value out of them, then it's not clear to me how valuable it is. I think for outcome-based pricing to work, you actually have to be able to parallelize to get scale. Otherwise, it doesn't seem super compelling to me.
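As a rough illustration of the parallelization point, here's a minimal sketch of fanning independent agent tasks out across workers. The `run_agent_task` function is a hypothetical stand-in, not a real API; a real system would dispatch a model or agent job there.

```python
# Minimal sketch: outcome-style pricing only pencils out if tasks can
# run in parallel with little human handholding per task.
from concurrent.futures import ThreadPoolExecutor

def run_agent_task(task_id: int) -> dict:
    # Hypothetical stand-in for one agent job; we fake an outcome here.
    return {"task": task_id, "outcome": "done"}

tasks = range(8)
# Fan out: all eight tasks run concurrently instead of one at a time,
# so the wall-clock cost per outcome drops as you add workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_agent_task, tasks))

print(len(results))  # → 8
```

If every task instead needed a human in the loop, the loop above would serialize on the human, which is exactly the economics problem being described.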

>> Yeah, that makes total sense. I think that's what we're thinking about for some of our upcoming offerings, to be honest.

>> It's also hard for people to make this transition mentally. Everyone's so used to paying per token, or paying $20 a month. This is the great quarrel, not even a mistake, that I have with ChatGPT: they somewhat arbitrarily set $20 as the price point for all AI products, and that dramatically informed the rest of the ecosystem. Everyone has been trying to make a monetized product experience that works at $20 a month, and the ecosystem has dramatically changed over the last two and a half years. The cost of compute is different. The cost of energy is different. The capability of AI products is different. All these things are completely different, and yet the ecosystem is stuck with $20 being the entry price point, which is just a very interesting situation for everyone to be in.

>> Yeah. Paige Bailey, by the way, said to say hi.

>> Hi, Paige.

>> Paige is the number one Roo Code supporter. I feel like every conversation I ever have with Paige, she's hyping Roo Code. So I love it.

>> Thank you, Paige.

>> Well, here, I put a question on screen; this is from AI Code King, by the way. What does Google think about coding-specific models? And can we ever see an open Gemma code model?

>> Yeah, that's a great question on the code Gemma model. Something to think about. I'm not sure what the progress looks like there, but I can poke around and see if there's anything happening. A code-specific model is definitely interesting.

My point of view is that there's so much positive capability transfer between these models that you actually end up not reaching the global maximum by doing code specifically. If you think about what's actually happening with code as a proxy, there's so much world understanding required to build software. In those one-shot examples I was doing in AI Studio before, you actually have to have a mental model of how people use software, how they build software, and what the normal user journeys people go through are. There's all this extra stuff baked in, and if you strip all of that out and really focus on code, which I do think works, code-specific models for autocomplete and other very domain-specific tasks make perfect sense. But if you want a general LLM that's going to be your AI senior engineer, your pair programmer, I feel like the path is not building a bespoke coding model. You lose too much by going down that route.

>> No, that makes complete sense, and thank you.

That's a very clear answer on that because I've heard that question quite a number of times. So, I appreciate that.

>> Yeah, there's also a limited amount of compute, so you can only make a certain number of bets, and Google's bet has always been to build a really great general model that works for the massive number of customers we have across the ecosystem. A code-specific model splits focus, it splits attention, and I think it just doesn't end up being as good in the end.

>> Yeah.

>> Huh. Well, this is a general firing-line question, I guess you could say. What are your thoughts on AGI? People always talk about this, and I know it's a canned question, but is this happening? Are people getting too excited for nothing?

>> Yeah, this is a tough one, because it's such an interesting and now overloaded conversation. My maybe-not-even-nuanced take on this is that people are going to end up getting the AGI experience from a product: they'll use some new product, or an iteration of an existing product, and say, for me, this thing is artificial general intelligence. By the standard definition, or whoever's definition, it has to be able to do everything economically productive that humans do, and not make all the simple mistakes that LLMs make. But maybe there's a difference between AGI and what I think we might actually get, which is a superhuman-level product experience. I can imagine that happening for coding, given the progress that's happening. Maybe the model isn't actually generally intelligent enough to work in all these domains, but for code it's 10x more capable than I'll ever be, and parallelizable, and all this other stuff, so you get an immense amount of real-world value out of it. And yet it still has some of the faulty things LLMs can't do, like counting the Rs in strawberry, maybe as a goofy example, just because of the fundamental limits of tokenization and the whole model process.
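The strawberry example is a tokenization quirk: the model sees subword tokens rather than individual characters, so a character-level question that's trivial in code can trip it up. A one-liner makes the contrast obvious:

```python
# Counting letters is trivial at the character level; LLMs struggle
# because they operate on subword tokens, not individual characters.
print("strawberry".count("r"))  # → 3
```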

>> So it will be interesting to see which of those happens first. Do you get these superhuman capabilities, which is roughly where I think we're trending, versus artificial general intelligence, which I think needs fundamental research breakthroughs that have yet to happen? It almost feels like there's more clarity that we'll be able to get to the superhuman-capability point.

>> Yeah.

Matt, I think you're up next on the question list.

>> Are we doing the lightning round? I'm trying to look through the chat to see if there are any other questions that aren't the same question. But all right, lightning round, let's do that. What do you think we're going to be talking about in 2026 when it comes to AI?

>> Science.

>> Yeah. Not coding.

>> Interesting, scientific advancements. Are you seeing some highlights that are really underpinning that intuition, or is it more broad intuition?

>> Yeah. I mean, we're already seeing some of this traction. I believe we announced Co-Scientist at I/O last year, and we're seeing a bunch of traction from that; it allows the model to agentically go and confirm or do exploration on different scientific hypotheses. And then we just had the new Gemma open-weight model, in collaboration with Yale, coming up with some potentially novel cancer treatments and bringing a bunch of different literature together, which is really exciting. I think that's going to continue to accelerate, and as the product experience continues to be built around this and the models keep getting smarter, AI for science is the promise of the technology. So I'd expect lots of excitement in 2026, and I'm sure there'll be other stuff, hopefully coding as well. As a developer, I hope there's lots of coding stuff, but I'm also excited for science.

>> That makes sense.

>> Do you think people are still going to be talking about vibe coding next year, or is that going to be a dirty word at that point?

>> Yeah, my sense is there's maybe a disconnect, and you actually see this across a lot of different examples, between what the AI zeitgeist bubble is talking about and the value being created for the long tail of people. I'm sure there are still a bunch of people who show up and use an AI chatbot for the first time and go, "Holy crap, this is incredible. I never thought this experience would be possible, and it's so helpful and useful." They're blown away by that, while everyone else has moved on to ten other things. I think the same will be true for vibe coding. The ability for anyone to create software is such an economically powerful force in the world. You can actually see this from a user perspective; people post the charts of the number of users that Lovable and Bolt and everything else have. In absolute terms it's not that many people; in relative terms it's impressive how quickly these products have grown. But there's a huge market of folks who have yet to have that aha moment with creating software, so I'm guessing 2026 will continue to bring lots of growth in that world, and with the coding models getting better, it makes the aha moment even more exhilarating.

>> Yeah, I find that when people test the waters, they open up a piece of software like Roo Code and go, "Huh, can it do this?" They type something, and it doesn't do it, because they asked for something beyond its capability, and not just with Roo Code, with any piece of software. A lot of people test the waters like that from time to time and then say, I'm going to get back to work, before they really dig into how far it can go. So it's exciting that with new models like Gemini 3 hitting the market soon enough, as the hype would have us believe, Roo is going to jump in its abilities. I think we're doing that stepped growth, so we're very excited.

>> Yeah. And it's particularly exciting, because a lot of people just don't spend the time to figure out the nuance of what these things can do. If you have that bad first impression, you go, ah, this thing can't do anything. I'm somewhat skeptical of a model's ability to one-shot something as a great north star, but I do think for the initial step of a product experience it matters a ton, because if you don't get the first iteration right, no one's going to stick around to try ten more times unless they have super high activation energy. So yeah, I'm excited. That's how I ended up with this job: I tried it and kept trying it.

>> Activation energy is an edge.

>> Well, for new developers, what's the single most common mistake you think they make when they first try to adopt tools, or start coding with Gemini or other large models?

>> Yeah, my sense right now, and I need to do some diligence on this perspective because it's grounded in my intuition and maybe not data, is that there's so much valuable stuff in learning the process of building software, in how you think, from a resilience perspective, and a bunch of other things. And the models are so eager to just give you the answer. There's a cost to that. I don't know if it was Simon Willison or someone who had a good blog post recently; I know Jeremy Howard had a really interesting post. I'm on the fast.ai email list, and I never read emails that aren't personal emails, but he sent out this one about being frustrated by the AI coding process, because he doesn't have to do the problem solving that used to give him that aha dopamine hit when he would fix the problem he was having. As a developer, that's the thing that got me excited: you'd painfully get through the process, you'd eventually figure the thing out, and you'd go, "Okay, that's so cool that I was able to make this thing work after the code not running 25 times." So I think it'll be interesting to see how

a new generation of developers grows up not going through that process, perhaps because AI tools are so good at getting them started. The flip side, and what's exciting, is that as a new developer you oftentimes have to suspend disbelief that you can actually build something useful or meaningful or interesting. I remember learning C++, and it was extremely boring and not cool; we were just coding for the sake of coding, and you couldn't actually build the thing you wanted, which for a lot of people is games or websites or whatever it is. Now, with AI, you can see the aha moment really quickly: okay, I have the code on my computer, it's running, I have this game working, I have this product working, this is great. Now I can believe it's possible that I could learn how to do this and know how to wield these tools to do cool stuff. I don't think that was possible before; you just had to blindly trust that it would be. So that gets me excited on the other side of that equation.

>> We've got one more. Oh, go ahead. No, no.

>> I was going to say, how much do you think this enablement of people to build stuff is going to kick the issue down the road a little? How much do you think tools are going to be focused on review and feedback, and trying to separate the slop from the not-slop going forward?

>> I think this is one of the edges for tools to figure out. So I think you can actually really build that, and this maybe comes back to where the outcome piece is difficult, because I think you sort of have to self-select a cohort of people who actually want it. A lot of people just want the answer and don't want to go down this journey, but ultimately that's perhaps to your detriment, depending on what worldview you take. So I do think there is a world where you build a product experience, and our team has thought about this as well, that doesn't just build software for you but helps you through the process of learning what is required to build software and how to think about building software and all of that. I think there's something really interesting there, and no one is doing it right now. So I'm excited about that, although again, I don't know how you

>> onboard someone into that.

>> Yeah, and also how you capture value from doing that, which is the other thing. I don't know, maybe the value capture is that you basically replace a CS education, because your product that helps people write code is also educating them about CS. But I haven't thought deeply about it.

I have one last question here. Somebody asked, please ask Logan this question: why did you decide to work with Google DeepMind and not other labs? What makes DeepMind special?

>> Yeah, it's a great question. When I joined Google, I had less context, and I've learned more about DeepMind and Google as time has gone on. Today, my perspective on what makes DeepMind special is this legacy, both of who created DeepMind and of who leads DeepMind day to day, with Demis and Koray and others. They truly are scientists, which is really interesting. Demis is a Nobel laureate scientist, and I think the way in which he shows up in the world, his worldview, and what we prioritize are grounded in that scientific perspective. And I think that's a very interesting and different way of looking at solving these problems than how lots of other folks are approaching them. I don't know if folks have watched The Thinking Game, which is sort of the story of the origins of DeepMind and of Demis' life, but it's an incredibly interesting story, and every time I see a clip from it, it gives me even more conviction that DeepMind is going to be the place where we create AGI and do it in a way that actually benefits all of humanity. And practically, for me, when I joined DeepMind and Google, the exciting thing was this opportunity to go from zero to one, to help build a developer platform. We didn't have a robust API or momentum or developer stuff when I joined Google a little over a year and a half ago, and I got to go through that experience. I love the early stages of building something, so yeah, it's been a ton of fun to do that.

>> Well, we are at time. Thank you very much for joining us, Logan. And thank you to the community for joining us once again. We'll see you next week.

>> Have a great rest of the week. Thanks, Logan.
