
I got a private Masterclass in AI PM from Google AI PM Director

By Aakash Gupta

Summary

Key Takeaways

  • **Imagen Understands World Models**: Imagen's ability to understand world models allows it to generate images that reflect seasonal context, like adding snow to a Toronto scene but not to San Francisco's Painted Ladies. [07:23]
  • **Refine Image Prompts with Gemini**: Use Gemini Pro to refine prompts for image generation, focusing on details like vibrant colors, lighting, and camera optics, while also incorporating negative prompts for better results. [09:11]
  • **Chain Tools for Advanced Workflows**: Combine tools like Imagen and Veo to create complex workflows, such as transforming a pet photo into an animated drone show video where the pet's tail wags. [17:37]
  • **Build AI Apps with Natural Language Using Opal**: Opal allows users to build AI applications by describing desired functionality in natural language, which then generates prompt chains and customizable models. [20:25]
  • **Think Big, Ship Fast: The Inverted Triangle**: To innovate rapidly, think expansively about the vision, then use three levers (scope reduction for MVP focus, strategic positioning via beta labels, and audience segmentation with trusted testers) to ship quickly. [36:36]
  • **Build a Car, Not a Faster Horse**: Focus on creating entirely new workflows and solutions with AI, rather than just incrementally improving existing processes, to unlock true value and innovation. [39:43]

Topics Covered

  • Discomfort is a Signal for 10x AI Innovation, Not Wrongness.
  • Mastering Prompts: The Key to Unlocking Generative AI Magic.
  • Future-Proofing Your AI Strategy: Ride Model Advancements, Don't Be Crushed.
  • Build Platforms, Not Just Apps: The Power of Second-Order Thinking.
  • Deep AI Intuition and Creativity Define the Modern AI PM.

Full Transcript

We're going to cover everything you need to become an AI PM, as well as demo how to use all of Google's AI products like a pro.

>> Google went from way behind in the AI race to a leader. Polymarket puts their odds of having the best AI model by the end of the year at 72%. That's because Nano Banana is already the best image model, Veo 3 is one of the best video models, and with tools like Opal and Magic Book, they're allowing you to chain together workflows into really powerful use cases. That's why today I brought in the Director of AI Product at Google, Jacqueline Kunzelman.

>> It does really feel like there's never been a more exciting time to build than right now.

>> Can you show us the insider view? The best ways to use Nano Banana?

>> What you'll end up seeing is the dog literally coming to life as a drone show.

>> Oh man, this is amazing. So, people keep saying the AI PM role is hype. Is it real?

>> I think it's absolutely real.

>> What are you looking for when you're hiring an AI product manager?

>> This is my, like, fun hack for you all.

>> This is so cool. How has AI changed product building?

>> I'm a huge advocate of building in public. I think there are several questions that you can ask yourself as a check-in.

>> What about somebody with PM experience but not AI experience who really wanted to break into one of these top AI companies? What would be your roadmap for that person?

Really quickly: I think a crazy stat is that more than 50% of you listening are not subscribed. If you can subscribe on YouTube, or follow on Apple or Spotify podcasts, my commitment to you is that we'll continue to make this content better and better. And now, on to today's episode.

>> Jacqueline, welcome to the podcast.

>> Thank you. Excited to be here.

>> So, people keep saying the AI PM role is hype. Is it real?

>> I think it's absolutely real. I am a product manager who works exclusively with AI-based and AI-native products. So no, it's definitely real.

>> So how much are AI PMs paid? Levels.fyi is showing these pretty high numbers for Google product managers. Are these accurate for Google AI PMs?

>> I mean, I think you can look at all the job postings that we have online right now and easily see that for AI product managers, and product managers in general, who carry a lot of experience and are good at what they do, it is a well-paid industry.

>> So for these different levels: what is an L6 senior PM at Google? Because in my experience, what might be a senior PM at a Series B startup versus a senior PM at Google can be dramatically different.

>> I think that's true. I joined Google, what, eight and a half years ago as an L5 PM back then, and I think I was a senior or group PM at the more mature startup I was at previous to that. So I think you need to look at years of experience and realize that calibrating different levels changes based on whether you're at a startup versus a more mature company. I will say that straight out of school you tend to start as either an L3 or an L4 PM, and then you can continue to rise with more years of experience, more ships, more product experience under your belt from there.

>> So this role is very real. It's all about building AI products. Can you give us a master class in building AI products? How has AI changed product building?

>> So I think there are a couple of ways that AI has changed product building. One is in how you actually build products: how you can use new AI-native tools to get things done. The other is in the types of products you actually want to build, and how AI functionality inherently changes the types of capabilities and features you want to be thinking about. And in saying all of that, I think it's important to really call out that it does feel like there's never been a more exciting time to build than right now. But with that, I've also noticed that it can feel like there's never been a more overwhelming time to build than right now. That's because the pace of AI is accelerating. More powerful models are coming out almost every day, it feels. And that's leading to better tools: better tools to help you build, better tools to help you understand what's possible to build as a product. As a result, more and more products are coming out into the world, and that leads a lot of people to sometimes feel like there's never been a more overwhelming time to build than right now. I think it's helpful to just acknowledge that. But more than anything, it's just such an exciting time to be building, because the possibilities of how to build, and how fast you can ship, have never been more realized than they are right now.

>> 100%. So, how do you build zero-to-one AI products?

>> You know, it's funny. I have this series of diagrams that I always like to talk about. I'm sure you've all seen this one before, which is the blueprint. It's what everybody says it feels like to build zero to one: it's really messy at the beginning and confusing, but don't worry, it'll all level out and you'll find your path through at the end of the day. And that is absolutely true. But I think in the era of AI, everybody's so excited that they also tend to glamorize what this feels like. So it's not just this black line that's messy and evens out; it's actually rainbows and sparkles and colorful. And although it's really messy, it's really fun, too. That said, what I've realized is that when you're in that messy part, it can sometimes feel like there's a bit of a cloud over you, because it gets confusing. It gets overwhelming, as I mentioned previously. And I think it's important to call that out as a way to give it a name, and then you can move past it. You can understand that being uncomfortable is natural. It does not mean it's wrong. And you can really start to just move forward. Bring clarity to chaos: it's one of the things that I really prize in the folks I work with, those who can bring that focus to a group and just get them to move forward and really focus on the bigger picture. And this really rings true to me. I had this moment one evening after somebody had questioned a decision I'd made, because I'd been thinking much further along, and they gave me a moment to pause. I reflected on it for a while, after looking into Google lore on thinking big and 10x thinking, and it's important to mention that true 10x thinking is actually supposed to feel uncomfortable. The part that was upsetting to me at that moment was realizing that I kept confusing uncomfortable with wrong, and that's not the right way to think about it. When you're trying to innovate, discomfort can also be a signal that you're on to something really interesting. So being able to separate out discomfort from feeling wrong was the moment that really helped me move past that gray cloud area in the confusing beginning.

>> So one of the craziest products you guys recently released was Nano Banana. Can you show us the insider view, the best ways to use Nano Banana?

>> Absolutely. It's funny, I actually have a side project going on at the moment, which is "99 things to Nano Banana about," because I found that the more time I spend playing around with this model, the more I discover what it's capable of doing, and it makes you think in different ways. So I'm going to jump over to a few examples, and I'm happy to share these with folks as well, because there's a lot to go into here. This is my work-in-progress deck, but let me just breeze through a few of them, and then we'll jump into some actual examples. So you'll notice, right at the beginning, you can just rotate objects that are already in an image.

>> Yeah.

>> You can add info pieces or info boxes to things. I'm just going to scroll through a few. This one I love: you can take any sketch and actually transform it into an art piece now. I think this is really interesting, because you're going from something that you have a say in how it should look, but I don't have the skills to make a beautiful, blotchy watercolor art piece in like 10 seconds. Turns out Nano Banana does. I think what's also really cool about this is its ability to understand the world, like the world model that's underneath it. So, this one's really cool. I simply asked it to show me what each of these images would look like in winter. That was kind of the open-ended prompt. And you'll notice I'm from Toronto, so that's that first image there. In winter, in Toronto, there's a lot of snow. The Painted Ladies in San Francisco, however, do not get snow in winter. So the model is not only able to edit the image, it's able to actually infer what it should look like in that season. That was one of those moments where you just start to realize the possibilities as all of these multimodal capabilities come together. This one continues to blow people away: the digital transformation of old photographs, and I'll show you a demo of that in a second. This one's really cool. It actually took the three models on the left-hand side here. I simply gave it an image, showed it where to place them by picture, or circles that mapped the colors of their outputs, told it the position each should be in, and it was able to understand all of that and generate the image that you have on the right.

>> Wow.

>> In fact, that diagram from earlier: I actually used Nano Banana to help me edit that first one and transform it into the fun visual metaphor that you saw.

>> Yeah. I want to see how to prompt it correctly.

>> Okay, this is a picture of my grandparents, actually, on their wedding day. So I'm going to take this image right here, and I am going to put it into a chat with Nano Banana. This is a prompt that I actually spent a little bit of time figuring out. And that's one of the things I would say: when you're playing around with these models, if it doesn't work out the first time, keep playing around with things and adjusting until you get it just right. I've included all the prompts in the examples that I've sent out so far. So you can see that it's a pretty lengthy prompt here, but it is going to turn this black-and-white image into a color image. So, we'll just get...

>> Can you break down the prompt for us?

>> Yeah. So for this one, you'll notice I actually used Gemini Pro to help me figure out what the prompts are. That's a good trick: if the image doesn't turn out exactly the way you want, copy that image, copy your original prompt, put it into Gemini, and just ask it, "How would you adjust this prompt, knowing that the output didn't quite turn out the way I wanted in these specific ways?" So in this case, I talked about how I wanted vibrant, saturated colors, and it goes into a little bit of detail there. Then it focuses on the lighting transformation of the photo. Then it makes sure to continue to lean into that hyperrealistic detail and texture. And lastly, this one plays around with using modern camera and lens optics. When going from old photos to new photos, you want it to feel like it was taken as a new photo; you're really restoring it from that perspective. And then in this case, I actually did end up having some negative prompts that Gemini helped me come up with, based off a lot of the things that weren't working out the first several times I was iterating on this. So, as you can see here, this is the fully colorized version of my grandparents' wedding photo.

>> Wow. That's wild. Oh man, this is amazing. And one thing I forgot to ask: some people actually have trouble accessing Nano Banana. How did you access Nano Banana?

>> So, I use Nano Banana in two different places. The first is in AI Studio directly. I have a lot of fun iterating on prompts this way, but then I'll also use it directly from the Gemini app as well. It really is just a matter of which one I happen to be in for my workflow, but both of them are easily available. I will also say that we launched Mixboard last week, and that's an open-ended canvas which also allows you to play around with image editing. So that's a third way you can start to play around with it if you're interested. And actually, let me just quickly show you one of the experiments that I did on that one. So this is Mixboard, which we can talk about a little later, but in this case (these are the getting-started prompts) I took kind of a base image, or a grounding image, here. I really like the style of this painting by Eric Bowman, and I wanted to be able to transfer it to different themes. So I have a hiker, a surfer, a climber. The way I used Nano Banana here is I took this image and said, you know, generate an image in the style of this horse painting, but make it of a person winning a race. It's going to take the style and aesthetic from this, but generate an entirely new image of the subject that I just mentioned. So you can see, right here, similar kinds of paint and brush strokes, but now we have somebody winning a race.

>> And what does Mixboard enable us to do that we couldn't do in AI Studio or Gemini?

>> So this is more of an open-ended canvas user experience that we wanted to start playing around with. I think that the chat paradigm is incredibly powerful. It's very familiar, and there's a lot you can do with it. But as we start to get more of these multimodal models, we think about: what does it mean to visually storytell? What does it mean to brainstorm? What does it mean to ideate?

>> Very cool.

>> Yes, you can also do group uploads. So, you could take this, this, and this (I've got three here) and you could say: "Generate versions of these as black and white sketches." Whoops. And so it kind of just lets you reimagine what it means to create, brainstorm, and ideate with these AI tools at your fingertips. There we go. We've got our...

>> That's so cool.

>> Yep. Black and white sketches, down below here.

>> And these are powered by Nano Banana, but in Mixboard?

>> Correct. Yes.

>> Cool. I guess, if it's possible, let's show people the Gemini way to access it too.

>> Yeah, definitely. Okay, and I'll pull up a different example for that. So, if we go into Gemini here, you can simply ask it to generate an image, or you can explicitly say that you want a Nano Banana-based image here. Let me pick a different prompt for you. Okay, this is going to be a really long prompt that I spent a while working on. I was trying out how to take pictures of my pets and my kids and turn them into images reimagined as drone shows. So, let's see how this one turns out.

>> Okay.

>> Okay. So, with this one, you can see there's still a little bit of the image behind it. This is what I mean about playing around with it a little bit, but that's a fun one. Let's actually try and see how it turns out in AI Studio quickly. This is also what I mean by "play around in both of them"; that's what I found helpful. Certain experiences just end up turning out a little bit better depending on what you're going for. Okay, so in this case, this is how it turned out in...

>> Way better.

>> ...AI Studio, for this one. Yes, there are other ones where the Gemini app actually blows it out of the water, and I'm super impressed. But the reason I also wanted to bring you back here is because the next really fun thing I found is this: I downloaded the image. You can now go to Generate Media and go to our Veo model. And this is my fun hack for you all. I'm going to paste this image directly and upload it here. And now what you can also do is take it a step further and say: take this drone show image and turn it into a video where the drones fly away to the next formation. So, it's going to take a few minutes; we can come back to it in a bit. But what you'll end up seeing is the dog literally coming to life as a drone show. And in one version where I did it, my dog's tail started wagging before all the drones flew away. So, it's really fun to play around with these things.
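The hack described here is a two-step chain: generate a still with the image model, then seed the video model with that still plus a motion prompt. A minimal sketch of that control flow, with the two `generate_*` functions as stand-ins for real API calls (stubbed here so the flow is runnable; the function names and return shapes are assumptions):

```python
# Sketch of the image-to-video chaining workflow: the still from the
# image model becomes the seed frame for the video model.
def generate_image(prompt):
    # Stand-in for an image-model call; returns a fake image handle.
    return {"kind": "image", "prompt": prompt}

def generate_video(seed_image, motion_prompt):
    # Stand-in for a video-model call seeded with a starting frame.
    return {"kind": "video", "seed_image": seed_image, "motion": motion_prompt}

still = generate_image("a dog reimagined as a drone show over a night sky")
clip = generate_video(
    still,
    "the drones hold the dog formation, then fly away to the next formation",
)
```

In the real tools this is the download-then-upload step shown in the demo; the motion prompt on the second call decides what happens after the seed frame.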

>> Wow. And before we move on to the next one, can we look at the prompt for this as well, just to get some learnings on how it was structured?

>> Absolutely. Once again, this is one of those examples where I went back and forth in the Gemini app to help me iterate and refine. And so the final prompt ended up coming out with, you know, your core philosophy on inspiring atmospheric photography. This one ended up really wanting to lean into non-negotiables, and then also the visual language of drone formation. We have the universal workflow. This was the part I spent a lot of time iterating on, which didn't quite land in the Gemini app, you'll notice, but it's "interpret, don't copy." It was really difficult to say: don't literally just take this image and put it in a sky with some drones on top of it; reimagine what it is. And I think learning how to prompt these models is a skill that takes time, but it's incredibly worth it. That's something I would encourage people to do: spend time just trying to figure out how to coax the magic out of these models, because it's there. But giving a one-sentence prompt is not always going to lead to massive success.

>> Yeah. Garbage in, garbage out, pretty much, for these models. They can develop really specific styles. You can do crazy stuff like we just saw, where they take a picture and reimagine the shape of a figure in the formation of a drone show, but you really have to iterate on it in this way. So that's where it's really interesting how big the gulf is between people who use dumb prompts versus smart prompting.

>> Yes, very much.

>> Today's episode is brought to you by Vanta. As a founder, you're moving fast toward product-market fit, your next round, or your first big enterprise deal. But with AI accelerating how quickly startups build and ship, security expectations are higher earlier than ever. Getting security and compliance right can unlock growth, or stall it if you wait too long. With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit-ready fast and keeps you secure with continuous monitoring as your models, infra, and customers evolve. Fast-scaling startups like LangChain, Writer, and Cursor trusted Vanta to build a scalable foundation from the start. So go to vanta.com/aakash. That's vanta.com/aakash to save $1,000 and join over 10,000 ambitious companies already scaling with Vanta.

Today's podcast is brought to you by Pendo. Welcome to the SaaS-plus-AI-agent era, where every product team, no matter the size, can build world-class experiences. Meet Pendo AI agents: virtual teammates that read your product data, chat with users, and take smart in-app action so you don't have to. With agent analytics, Pendo shows exactly how those agents drive adoption, complete tasks, and even cut churn. No extra engineering lift, fully SOC 2 and HIPAA ready. And because Pendo's behavioral insights sync straight back into your BI stack, you get AI-ready data to fine-tune models and prove ROI in one place. Smaller team, bigger ambitions: you no longer need an army to deliver software your customers love. Grab early access at pendo.com/aakash. That's pendo.com/aakash.

>> So, on that note, let's jump back into the drone show, which looks like it has finished. I played it once; let's play it again. Whoa.

>> It even comes with sound. And then you can see it was starting to move to the next formation, because that's what my prompt ended in. And this is an example, actually, of how my prompt was pretty simple here for the video model. There's so much further you could take it, also in being able to lean into exactly what you want the drones to do. Sometimes you get something delightful, like the tail wagging, without indicating it; other times it will interpret in another direction if you give it more instruction. You can just get so much magic out of these things.

>> And one difficulty I sometimes had was having consistent scenes. I think there's a chain feature, right? You can add the next scene. Do you have any tips and tricks on maintaining consistency if we want to add more time to this video?

>> Yeah. So we have another tool, Flow, that you can check out as well, which will actually try to help you build a video that way. But I do want to also show you another area that has been really helpful, and was unlocked by Nano Banana, actually. This one kind of blew my mind. One of my other fun little side projects that I want to do for my kids is create a mockumentary of the first sloth in space. That was heavily inspired by, I think, the first monkey they sent up to space, but it's an entirely fictional movie plotline. But now what you can do, and what I was starting to play around with here, is you can have your first image generated, and then you can actually use Nano Banana to iterate on those images in conversation. So I started by generating this, and then I said: okay, now show me that sloth getting a medical checkup. Now show me that sloth undergoing space training in zero gravity. Now show me that sloth walking across the tarmac. It's able to keep that sloth's look and feel consistent throughout those images. And now I can start each of the videos with one of these images, which also helps with your scene consistency if you're doing these 8-second vignettes as part of the mockumentary film series that I'm working on.

>> That makes a lot of sense. Get the consistent image out of Nano Banana, use that as your seed for Veo 3.

>> Exactly.
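The consistency trick above can be sketched as a loop: conversationally edit one generated character image to produce each scene's starting frame, then seed each short clip from those frames. `edit_image` here is a stub standing in for a conversational image-edit call; its name and return shape are assumptions for illustration.

```python
# Sketch of character-consistent scene generation: each edit starts
# from the previous image, so the subject stays consistent, and each
# resulting frame seeds one video vignette.
def edit_image(base, instruction):
    # Stand-in: a real call would return a new image that keeps the
    # subject from `base` consistent while applying `instruction`.
    return {"base": base["id"], "id": base["id"] + 1, "scene": instruction}

character = {"id": 0, "scene": "a sloth in an astronaut suit, portrait"}
scenes = [
    "the sloth getting a medical checkup",
    "the sloth in zero-gravity training",
    "the sloth walking across the tarmac",
]

frames, current = [], character
for instruction in scenes:
    current = edit_image(current, instruction)
    frames.append(current)
# Each 8-second vignette would then start from frames[i].
```

The key design point is that every edit chains off the previous result rather than regenerating from scratch, which is what keeps the look and feel stable across scenes.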

>> Love it. There are so many advanced workflows here. If you want to create a true ad or something like that, what tools do you guys have to build workflows?

>> Great question. So, one of the other projects that we recently launched that's really exciting to play around with is Opal. Opal is, how do we describe it: build, edit, and share mini AI apps using natural language. Let me actually just jump into an example right now. So, this is one I've already made that called itself Wildform. It's actually pretty fun, and it's a great example of a Nano Banana workflow: it takes a photo, and then it's going to generate a nature collage based off that photo and output it. And you can see, if I click in here, this is the advanced prompt that I had worked on for this particular image. But what's really cool about this tool is that although this one just asks for a photo, uses Nano Banana, and outputs the result, you can actually chain together much more complicated prompt chains, and within here you can also change the model that you want to call. That enables all new types of workflows, of mini or micro apps, of different sorts of creative flows you can put together. Let me actually take you out and see if we can find a slightly more advanced one. Okay, so this is my custom storybook maker app that I made. In this one, I designed it so that it would ask for a picture of the main character you wanted. Then it would ask for their name, and then it would simply ask: where does this story take place? From there, I actually loved this illustration style, so I put it in as an asset, and that's what's referenced in the image of the character. So: generate a kids' cartoon image of the person you uploaded, in the style of this particular image. Once again, this is using Nano Banana for this particular piece of it. And then from there, it'll also generate a story. So you can start seeing here: I've decided each story should just be three pages, but for the first page, it comes up with what the plotline is, and then it also comes up with the image for that first page. And then this node here assembles the entire thing. So you end up with three different pages that each have a storyline, as well as some contents within it. This is an example of an Opal that is a much more advanced workflow, but it's also incredibly easy to use, because if you're starting from scratch, you can simply hit "create new," describe what you want to make in natural language, and it will figure out the entire Opal flow for you. So let me give you a quick example of one we were playing around with the other day. I've talked a lot about résumés, so for this particular Opal, I'm going to say: "an app that asks a user for a resume, then critiques it against..." and I actually wrote a post the other week on what I look for in an AI PM resume. So if we open up this post (I'll just show you this here), I'm literally just going to copy the URL of the post, go back to Opal, paste that in there, "...and offers suggestions."

So, I'll hit go. It'll take a few minutes, but it's actually going to construct that entire prompt chain that you saw, write all the prompts underneath the hood, and you should be able to use it right away. So, as soon as that's done, we can give it a go on a fake temp resume that I have waiting.

>> This is so cool. So, we're basically chaining together prompts that react to the outputs of other prompts to create a workflow, and we can leverage different models along the way.

>> Exactly that. And along the way, you can ask users for input at various points in the system, and then you can change how the output is displayed. You'll see, in this one, I'm just displaying a basic web-page type of output, but we actually allow you to write to Docs, you can write to Sheets, and we're adding more and more features, functionality, and integrations to really help with an end-to-end workflow, but also just with the types of little micro apps or mini apps that you might create.

>> Mm-hmm.
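The idea of chaining prompts that react to earlier outputs, with a different model selectable per step, can be sketched in a few lines. Everything here is illustrative, not Opal's actual internals: `run_model` is a stub, and the model names are placeholders.

```python
# Minimal sketch of an Opal-style prompt chain: each step is a prompt
# template filled from earlier outputs, and each step can call a
# different model.
def run_model(model, prompt):
    # Stand-in for a real model call; echoes the request for the demo.
    return f"[{model}] {prompt}"

def run_chain(steps, user_input):
    """Run (name, model, template) steps; later templates can
    reference earlier outputs by name."""
    outputs = {"input": user_input}
    for name, model, template in steps:
        prompt = template.format(**outputs)
        outputs[name] = run_model(model, prompt)
    return outputs

steps = [
    ("critique", "flash-model", "Critique this resume: {input}"),
    ("suggestions", "pro-model", "Offer improvements based on: {critique}"),
]
result = run_chain(steps, "Homer Simpson, Safety Inspector")
```

Swapping "flash-model" for "pro-model" on a step is the sketch's analogue of changing the model behind one node in the Opal editor; the rest of the chain is untouched.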

>> Okay, so in this case, let's see if this works. You can click into any of these and see the prompts that were written. You can also go to advanced settings if you want to change any of these system instructions, and then, as mentioned, you can always change the models that you want. I will say, when I first started, natural language to full Opal was a wonderful workflow. The more comfortable you get with it, though... I actually just start building from scratch at this point, because I kind of know what prompts I want. I start thinking in these prompt chains and understanding what's possible, and this became a really easy way to onboard to a new way of thinking about how to build AI-native features. Okay, so let's start here. It is going to ask us to paste our resume, but let's actually just upload the one I have from the device. So, this is a fake resume.

>> I love how this has the UI and everything already.

>> Oh, yeah. And it's easily shareable. So, I can hit this share link, and then it will allow me to send you my little Opal. You can play with it; you can also remix it yourself. So you can take this, it'll fork a copy, and it'll allow you to customize it in any way you might want as well.

>> Nice. So, as you build out the integrations, this is going to really be like a full agent workflow competitor for the Lindys and Relays of the world.

>> Yeah, it's funny. We've gotten compared

to a few different things that exist out

there. Lindy's come up, NAND's come up,

Zapier's come up. I just I truly think

there's so much like blue sky out there

to still be building in. Um, and I've

noticed that a lot of people still want

to like snap to other products that have

been somewhat built in a similar space

as we're all navigating the sort of

future of what's next. Uh but I think

that uh it's what's makes it it so

exciting. And so yeah, some of the the

workflow stuff that you're mentioning

like that's where Lindy seems to fit in

the like process optimizations and the

the automations. The other things that

we're realizing users are leaning into

are just some of the pure like content

creation flows. So a lot of the you know

more intricate or prompts I showed you

for Nano Banana, I can actually throw

those into an opal. That's what I did

for that nature collage one. and it's a

lot easier for me to just share that

with you and then you can kind of create

your own nature collages and I'll show

you some of those later rather than me

having to like copy and paste and share

a prompt necessarily. And then the the

last one is like these more intricate

sort of mini apps. That was like the

story book one that I I showed you kind

of the workflow behind it for. Um okay,

so resume critique and improvements

overall critique. The resume is

presented does not align with the

expectations of an AI product management

role and then it kind of goes through

and actually like gives you why it's not

relevant and suggestions. Um, and I will

say that the spoiler alert is the fake

resume I uploaded is from a PM on my

team who has a Homer Simpson resume as

her example one. So, it's not surprising

that it didn't resonate very well for

the AI PM tips and tricks that I gave.

But, you saw how I went from a direct

natural language input to a fully

working Opal. And then, as I showed you,

I can hit share app, make it public, and

then I can just send it to you and you

can start using it or remixing it to

change it. Maybe you don't want an AIPM

resume critique Opal. Maybe you want

something that's more around a software

developer Opal or a product marketer

resume critiquer. And this might just

take a minute. I'm going to close it out

just for the sake of time. But if you go

in here, this is also where you can

start changing what the uh the prompts

are behind the scenes. And it calls

Flash right now. If I wanted to, I could

decide to call Pro if I wanted a

different type of insight. Um so it's a

very flexible system, but meant to be

super easy to use and approachable to

get started as well. Very cool. Can we

share this Opal with the audience

so that they have your resume

advice?

>> Yes, absolutely. I will send you the

link after this.
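The Opal behavior described here, a prompt chain whose steps can call a faster or a deeper model, can be sketched roughly as below. Everything in this sketch is illustrative: `call_model` is a stub standing in for a real Gemini API call, and the model names and chain steps are assumptions, not Opal's actual internals.

```python
def call_model(model: str, prompt: str) -> str:
    """Stub for a model call; a real app would hit the Gemini API here."""
    return f"[{model}] response to: {prompt[:40]}"

def critique_resume(resume_text: str, model: str = "flash") -> dict:
    """Two-step prompt chain: overall critique, then concrete suggestions.

    The `model` parameter mirrors swapping Flash for Pro behind the scenes.
    """
    critique = call_model(model, f"Critique this resume for an AI PM role:\n{resume_text}")
    suggestions = call_model(model, f"Given this critique, suggest improvements:\n{critique}")
    return {"critique": critique, "suggestions": suggestions}

# Swap the model without touching the chain itself:
result = critique_resume("Homer Simpson, Safety Inspector, Sector 7G", model="pro")
```

The point of the sketch is the separation: the chain structure stays fixed while the model choice is a single parameter, which is what makes "call Pro instead of Flash for a different type of insight" a one-line change.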

>> Awesome. So, check the description you

guys. She has put together some of the

best resources and we're going to go

into a little more detail on that in a

little bit. But before we get there,

we've talked workflows. If we had to

summarize for people like when should

they be using Opal and writing a

workflow versus just building in a

chatbot and when should they be going

ahead and taking the next step to build

a full AI agent?

>> I think first and foremost solidify your

idea. Make sure that what you're

thinking of is substantial enough that

you're thinking big enough. I look at

these tools as easy ways to prototype

and test out what it is you want to

build. There's so many vibe coding tools

out there. AI Studio is also a wonderful

place to go in and start trying to

vibe code and make your app, and you can

actually deploy it straight from

there. I think not enough people are

talking about how important it is to

just think properly about what it is you

want to build. So I would say like have

fun with this stuff too. Um, I'm going

to jump over to one other

slide I often talk to people about.

And these are the, what am I at,

10 side projects that I have going at

the moment. And you'll notice I've shown

you some of them already. There's my

like nano banana idea set as well as my

you know writing, which is where the

blog or the resume tips went. The reason

I do this though is because it helps me

think differently. It helps me think

bigger and so have fun with it also. Uh

I have three kids, four and under,

right now. So I do a lot in the like

having fun with my kids side of things

as well. Um, and that really makes

me connect dots in a different way. And

once you feel convicted about your idea,

then you can go and actually like make a

production app and, you know, deploy

it and I'm a huge advocate of building

in public. At this point I think it's

incredibly important to get that signal

and that feedback from users as soon as

you can. Uh, but even earlier on than

that, tools like AI Studio, like Opal,

like the Gemini app as well, they really

help you just uncover what's possible

and kind of stress test different things

before you take it to the next step of

building something real and getting it

out there in front of the world.

>> So, how do you actually build AI agents?

Instead of telling you the tactical

parts of it, I do want to spend a bit

more time thinking about the frameworks

that I found helpful because as a

product person, that's usually where I

try to spend a lot of my time just to

orient myself and understand what makes

sense to build. Here are a few

frameworks. I've written about a bunch

more. Uh, but these are the ones that

seem to to really hold true more and

more these days. The first is just

having an understanding high level of

what is the anatomy of an agent. Agents

have many different components, but at

its core, there's a few pieces that tend

to stand out to me. The first is what

are the AI models that you want to use.

Do you need to have support for audio,

for text, for image? And that's both

image out, but also image understanding.

Does it need to be able to write code or

produce code? Does it need to be able to

understand video or produce video? Just

start to understand what are the

capabilities you want in your agent or

your product? and that'll help give you

a sense for which models start to make

sense to play around with.

>> Hey, let me take a quick break to talk

about Linear. It's software that's truly

built by crafts people. If I were

leading a product or engineering team

today, Linear is the tool I would bring

on. Here's why. When I was a product

manager, I was drowning in tools. Notion

Docs for vision, Google Sheets for road

maps, Jira for engineering, not to

mention Slack, Intercom, and App Reviews

to piece together customer feedback. I

was spending more time keeping systems

in sync than actually building product.

Then once development finally kicked

off, my plans would immediately need

updating. So I was the human API

constantly chasing updates. That's why I

love Linear. It cuts through that maze

of disconnected systems. And it's why

product teams at OpenAI, Vercel, and

Cash App all use Linear. Check out

Linear at linear.app/partners/aakash.

That's linear.app/partners/aakash.

Today's episode is brought to you by

Jira Product Discovery. If you're like

most product managers, you're probably

in Jira tracking tickets and managing

the backlog. But what about everything

that happens before delivery? Jira

Product Discovery helps you move your

discovery, prioritization, and even road

mapping work out of spreadsheets and

into a purpose-built tool designed for

product teams. Capture insights,

prioritize what matters, and create road

maps you can easily tailor for any

audience. And because it's built to work

with Jira, everything stays connected

from idea to delivery. Used by product

teams at Canva, Deliveroo, and even The

Economist. Check out why and try it for

free today at

atlassian.com/product-discovery.

That's atlassian.com/product-discovery.

Jira Product Discovery. Build the right

thing. Today's episode is also brought

to you by my cohort-based coaching

program to help you land your dream PM

job. I am taking 30 elite PMs on a

journey from November through January to

land their jobs at Google, OpenAI, and

other $700,000 plus roles. If you want

in, check out landpob.com. Once all 30

seats are sold out, that's it. And

already seats are going almost every

day, so grab yours at landpob.com.

The next is the tools. Models are super

capable, but they're even more powerful

when you combine them with tool use. So,

that's where you get into things like,

hey, should you be calling a search API,

or should you be using UI actions? One

of the projects I've worked on is

Project Mariner, which is an agent that

can browse the internet. So, obviously

that leaned heavily into UI actions and

was really trying to push the frontier

of what was possible there. Um, and this

is also where MCP and APIs come in. So,

you want to know what are the

capabilities and the features that you

want your agent to be able to have and

that will help you understand what tools

you need to make available to it. And

then another big chunk of it is just how

do you think about memory? And this is,

you know, both memory and

personalization. What do you want your

agent to be able to remember? How do you

think about if it should actually be

able to personalize an experience or

recall things that a user may have

previously done? And I think there's a

lot of different ways to build memory,

but I usually first start to think about

what does memory mean to you? What are

the goals that the agent is trying to

achieve? What does success for your user

look like? And then that

can help you kind of map towards what

are these different facets of

building an agent at a higher level.
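The anatomy just described, models, tools, and memory, can be sketched as a toy structure. This is a hedged illustration of the framework, not any real agent library; every name below is made up for the example.

```python
class Agent:
    """Toy agent with the three pieces from the framework above:
    a model choice, a set of callable tools, and a memory store."""

    def __init__(self, model, tools, memory=None):
        self.model = model                                   # which AI model powers it
        self.tools = tools                                   # capabilities made available
        self.memory = memory if memory is not None else []   # what it remembers

    def run(self, task):
        self.memory.append(task)              # personalization starts with recall
        search = self.tools.get("search")     # use a tool if one was provided
        context = search(task) if search else ""
        return f"{self.model} answers '{task}' using: {context}"

agent = Agent("gemini", {"search": lambda q: f"web results for '{q}'"})
print(agent.run("what's new in AI today"))
```

Deciding which fields this structure needs, which model capabilities, which tools, what memory should mean, is exactly the high-level mapping exercise described above.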

>> Love that explanation. So clear.

>> Awesome. All right. The next one uh I

like to call this the user interaction

spectrum. And I map things out on a

scale of you know do it for me versus do

it with me. So what do I mean by that?

Well, on the do it for me side of the

spectrum, you have agents where

the user expectation is simply to

give them a task and the agent will run

off and go do it and then return once

the task is complete. Some good examples

of this are deep research. Arguably,

maybe the agent asks you one or two

clarifying questions up front, but then

after that it just goes and it searches

the, you know, dozens of different

websites and it pulls together a fully

fleshed out report for you. And it

literally like in the UX usually says

this will take a few minutes, you know,

check back here when it's done and we'll

alert you when it's done. Um, and that's

an example. If I gave this agent a task,

it's going to run off and just do it for

me and do not bother the user. Do not

check in with me. I think audio

overviews is another good example of

this where you upload a bunch of sources

into Notebook LM. Another great tool to

try out and you can actually turn that

into an audio overview, which is

basically two people talking in podcast

style about all of the source material

that you've uploaded. But that also

takes several minutes for it to go and

and come up with that audio overview and

it's going ahead and doing that task for

you of creating that. On the other end

of the spectrum, you have do it with me.

And these are much more collaborative

experiences where a user and an agent

are basically working hand in hand. I think

you know vibe coding is a good example

of this where uh it's a seamless handoff

and transition of a user also expecting

AI to help them um you know throughout

the entire process. And I think audio

overviews when you launch into

interactive mode that's another great

example of this. So after the audio

overview is first created the feature

that they've added um since then is this

ability for a user to interrupt the

podcast hosts on NotebookLM and actually

engage with them in real time. And now

all of a sudden the user is there

interacting directly with the agent um

in a more like do it with me sort of

format. So as you're thinking about what

the goals are of your product,

understand how much involvement do you

want the user to have in it and that can

help you understand where along the

spectrum things should lie and that will

change how you design the experience.
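One way to picture this spectrum in code: the same task runner, parameterized by how often it checks in with the user. This is a purely illustrative sketch of the design axis, not how any of the products mentioned are implemented.

```python
def run_task(steps, checkin=None):
    """Run a multi-step task.

    checkin=None      -> fully autonomous ("do it for me"), like deep research.
    checkin=callback  -> collaborative ("do it with me"), the user can steer
                         each step, like interrupting the NotebookLM hosts.
    """
    results = []
    for step in steps:
        out = f"done: {step}"
        if checkin is not None:
            out = checkin(out)   # hand control back to the user mid-task
        results.append(out)
    return results

# Autonomous, deep-research style run: user waits for the finished report.
auto = run_task(["search sources", "draft report"])

# Collaborative, vibe-coding style run: user approves each step as it lands.
collab = run_task(["scaffold app"], checkin=lambda r: r + " (user approved)")
```

Where your product sits on this axis is a single design decision, but it changes everything downstream: progress UI for the autonomous end, turn-taking UI for the collaborative end.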

>> So framework one taught us the anatomy

of an agent. Framework two teaches us

how to design the user interaction.

What's the third framework? All right,

the third framework goes back to

thinking and thinking big. I think it's

so important these days with the pace of

how fast AI is advancing to make sure

that the ideas you have are actually big

enough. Otherwise, you're going to spend

weeks trying to build something only to

realize it might be commoditized. Um, so

think really big. Now, the the hard part

of that is sometimes when you think

really big, it could take forever to

ship. And that's why I think this, you

know, inverted triangle framework can

come in handy. We think really big, but

then we say, all right, tactically, how

do we make sure that we're getting

something in front of users as soon as

possible? And there's sort of three

different levers that I found myself

coming back to time and time again. The

first is start by thinking big, but then

just reduce the scope and cut features

and get realistic about what is really

needed in an MVP. You saw Opal

earlier that I demoed. You saw Mixboard

earlier that I demoed. Um, for Opal, I

mentioned we're adding more

integrations. We're like really going to

lean into how can we do more than just,

you know, docs and sheets. What would it

mean to have calendar, to have email, to

have all these other tools available to

it? We did not set that as the bar

before we were able to launch as an

experiment. So, get strict around what

do you truly need for an MVP to get

signal and cut those features, but know

that it could still ladder up and it

should still ladder up to that bigger

picture vision. The next is the

positioning of what you're launching.

Beta, experiment: lean into these labels

so that you're setting user expectation

accordingly. If you're saying you're

launching a polished product, that

quality bar is much higher than saying,

"Hey, I want to build in public. This is

an early version. This is a concept

edition." And so I think there's a lot

of ways that you can get things out

sooner and also let users know that, you

know, the expectation is not that

this is perfect, but that we're showing

you something we think has potential and

come like learn and build and use with

us, and we want to take that feedback

back into the product to make it better.

And then the last is the people or the

audience that you're exposing it to. If

something's super early on, things that

we've done are just open it up to a

small group of trusted testers. It gets

people outside of just your

team using it. Um, although we do rely

heavily also on teamfood and dogfood,

which is internal users testing out the

product. Then when you go public you can

also um do things like trusted testers

or early access partners. We've had to

rely on things like waitlists

previously before if we want to get

users externally using it but we're not

ready to go mainstream quite yet. Um so

that audience is the other dimension

that you can play around with as

well.

>> Love it. Where do we go from here?

>> Play around with things and have fun. I

mean, I think there's so much left to

build right now. And so, I would just

heavily encourage folks to start trying

things out, start pushing the limits,

um, and start connecting the dots to

understand what can be built and and

just stress test that the ideas that

you're thinking of are big enough, are

good enough, get it out in front of

users, build in public, uh, get that

feedback and and go back to the team and

continue to iterate and improve. What

questions should PMs be asking

themselves to make sure they're working

on the right thing?

>> All right. I think there's several

questions that you can ask yourself as a

check-in to make sure that you're

thinking big enough and building

something worth building. Two of the

ones that I I keep coming back to are

what I've labeled the paradigm shift

question. And that's really this idea

of, you know, are you just building a

faster horse or are you building

something new, like a car? Um, what's

that fundamental problem your user has?

And how could new technology create a 10

times better solution? Uh, another way

of thinking about this or framing this

is are you just process improving a

current workflow or do you think an

entirely new workflow should exist for

this thing? And I think a lot of times

people just focus on the first because

it's comfortable. They know what the

workflow is. You can start to say, "Hey,

AI can plug into this one feature

here and save you 2 minutes or 10

minutes." And there's some value that

can be had for things like that. But I

think the real value is going to be the

unlock on like what's the new way things

will get done. How do you build a car

and not just a faster horse? And then

the second one is what I call the future

proofing question which is always

wanting to check in with yourself and

make sure that you're thinking about

what happens when the models get better.

So how will the next AI model update

affect your strategy? Will it

commoditize the core feature you're

building or will it unlock a new

capability that enables you to do more?

Another way to think about this is, you

know, you want to ride the tailwinds of

model advancements. You don't want to be

crushed over top like a wave where

the model just commoditizes everything

you've done. Um, in fact, I actually

asked this as one of my PM screening

questions at this point, which is

how would you react if this happened?

Um, and, you know,

this is bound to happen,

and there are ways to think

through it. Um, I'll give you an exact

example. So with Mixboard before we

launched, we've been working on it for a

while and we actually spent a bunch of

time trying to build image editing

capabilities into the product and uh

what we realized was months and months

ago, this is pre-Nano Banana, um it was

going to be a never-ending hill climb

for image editing capabilities. You end

up just going up against all the things

that exist today. And that was not like

a net new way of doing something. That

was just trying to have us build the

same thing that existed um in other

products but using an AI image that was

generated at first. Um and that didn't

make sense. So we kind of went back to

the drawing board and cut a bunch of

features from the early prototypes that

we were working on. And then Nano Banana

came out and you realized that hey image

editing itself is fundamentally changing

now. Now's the time to rethink it. I

don't need the 10,000 sliders that, you

know, another image editing platform

might have had to have if you're doing

it the traditional way. All of a sudden,

I have natural language to edit my

image. I have image markup that I can

point to. And that really gets you to

that like there's a new paradigm shift

happening here. Let's build for that new

workflow, but also all that work that we

had done trying to build some of the

core image editing functionality, we

were okay with just letting that go. And

I think that's the other important

thing, like models will get better and

sometimes you just need to throw out

stuff you've done previously because

it's no longer relevant. Don't hold on

to it. It got you to where you are now,

which I'm sure there's a ton that you've

learned, but be willing to let that go

and then build for what's next and where

things are going as you inevitably will

learn those lessons firsthand.

>> Love it. You've talked about how

important it is when building AI products

to think from first principles. And I

feel like that really relates to those

two questions you just walked through.

But how do you really think from first

principles?

>> There's a couple things that I found

helpful as I've been trying to practice

this more and more. And these are kind

of the the three points that have been

distilled uh as I was reflecting on

this. So the first one is just going

back to that core user need. Let's take

the image editing for example or photo

restoration example that I showed you

earlier with Nano Banana. The core need

there was that you want to be able to

restore old photos. And if somebody were

to try and just build a service to

restore old photos, you might first

start by looking at how other photo

editing tools did it. Um, but going back

to first principles says, "What I really

just want is a way to bring my memories

to life." And so if you think about it

that way, you realize that these

generative models, if you can figure out

the right prompts, which is part of why

I spent so long working on those prompts

previously, it can unlock a new way of

doing or achieving that

same thing without having to build the

same type of tool that existed before.

Um, so really instead of thinking about

the practical how do I just build

something, I think it's important to

just go back to the core question of why

am I trying to build this? What is it

that's actually needed? And then

brainstorm ways to just rethink what the

solution space should look like, not

trying to just copy what somebody else

did previously with like slight tweaks

or improvements. The next one, sort of

this like future-proofing question. Uh

we kind of touched on this previously,

but really this also taps into the

making sure you're thinking big enough

piece. Assume that models will get

better. If you're thinking big enough, a

model improvement should actually just

like leapfrog you into the next five

things that you want to be building. And

so as you're thinking about that sort of

MVP, that's fine to scope it and make

sure that you're getting something out

early, but make sure that the the bigger

picture vision is actually bigger

picture. I think an example here also

has been how I try to push second order

thinking instead of first order

thinking. So I was at the zoo the other

day with a friend and I was telling him

about how I wanted to take stories that

my daughter was telling and turn them

into actual like story books that she

could read. And I was like, it's really

easy with models these days. You can

simply voice record your kid telling a

story and then you can feed it into a

model and you can say, extract the key

points of the story and then, you know,

maybe make it sound a little bit more

like one of your kids' favorite authors.

I like Shel Silverstein or Dr. Seuss.

And then you can actually use the models

to generate images um with pretty good

consistency with Nano Banana now. And he

was like, "That's such a great idea,

Jacine. Why don't you just build an app

that can do that?" And I was like, "I

could, but here's the thing. Anybody's

going to want to do something similar to

this. What's more interesting to me is

building something like Opal, which

allows me to build any sort of workflow.

And you saw earlier one of my like kids

story book opals, for example, did

exactly what I just described to you.
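The storybook workflow just described, record a story, extract the key points, restyle the text, then generate illustrations, could be sketched as a simple pipeline. The functions below are toy stand-ins: a real version would use speech-to-text plus a text model and an image model, none of which are called here.

```python
def extract_key_points(transcript):
    """Stand-in for a model pass that pulls out the story beats."""
    return [s.strip() for s in transcript.split(".") if s.strip()]

def restyle(points, author="Shel Silverstein"):
    """Stand-in for rewriting each beat in a favorite author's voice."""
    return [f"(in the style of {author}) {p}" for p in points]

def image_prompts(styled):
    """Stand-in for building the per-page image-generation prompts."""
    return [f"children's book illustration: {p}" for p in styled]

story = "A dragon found a lost sock. The sock became his favorite hat"
pages = image_prompts(restyle(extract_key_points(story)))
```

The second-order point from the conversation applies here too: once the steps are explicit like this, the pipeline itself is the reusable thing, and "storybook" is just one configuration of it.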

But to me, the bigger opportunity wasn't

building a, you know, a custom story

book app. It was building a platform

that allowed you to build anything,

including a custom story book.

And that to me is the difference sort of

between first order thinking and second

order thinking. It's this like how can I

build tools? How can I build platforms?

How can I really think bigger picture um

around what is possible and not just go

after the obvious kind of first step?

And then I think with this magic wand

question, you know, what

human-in-the-loop step in my current

idea exists only because of a technical

limitation and what would I build if

that limitation disappeared tomorrow? Um

this is pretty crazy. uh because a lot

of the things that we might need a human

or a person to verify as you're building

out a product are usually because of

model limitations. And if you assume

that models are getting better, um

there's a way to continue to plan for

how to include that step in the process.

Um, that said, this once again gets to

the MVP piece, which is like build

something that's tactical that you can

launch today, but always be ready to

continue to sort of peel away those

layers or simplify what you've built as

the models get better. And so, I think

there's this interesting tension between

starting with a product, being able to

simplify it as models get better and

they can do more with less, and then

that also gives you the space and

the runway to build even more of what's

next. And that's why it's important to

know like what are you building today,

but where is that laddering up to? Because

you're going to be building that future

version a lot sooner than you might

expect. You're so right that it's about

platform level thinking and not just

like small product level thinking.

What's interesting though is those

initial examples that can seed into a

platform level solution, those kind of

become your base core user prompt set,

and that kind of becomes your golden

eval set at first which is hey if I can

build a product that solves this smaller

order opportunity or problem that's

incredibly helpful validation that

you've actually built the right tool and

so it's important to know those first

order ideas as well and like start

collecting them because that kind of

feeds in as your like validation eval

set or your golden eval set or your core

user prompt set as I've called it before

for knowing that what you've built is

actually useful.
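The "core user prompt set" idea can be sketched as a tiny eval harness: the first-order use cases you collected become the test cases that validate the platform. `toy_platform`, the prompts, and the keyword routing below are purely illustrative assumptions, not a real eval framework.

```python
# First-order ideas collected along the way become the golden eval set.
GOLDEN_PROMPTS = [
    ("turn my kid's story into a picture book", "storybook"),
    ("critique my resume for an AI PM role", "resume"),
]

def evaluate(build, prompts=GOLDEN_PROMPTS):
    """build(prompt) -> category; returns the fraction of golden
    prompts the platform handles correctly."""
    passed = sum(1 for prompt, expected in prompts if build(prompt) == expected)
    return passed / len(prompts)

# A toy "platform" that routes requests by keyword:
def toy_platform(prompt):
    return "storybook" if "story" in prompt else "resume"

score = evaluate(toy_platform)  # 1.0 on this tiny set
```

In practice the golden set grows as new first-order ideas appear, so the score tracks whether the platform-level bet actually covers the concrete use cases that motivated it.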

>> And that's where I want to transition us

next. I think you have one of the

world's best views into what it takes to

become an AI product manager. What are

you looking for when you're hiring an AI

product manager?

>> Well, I've been thinking about this a

lot because I am actively trying to hire

an AI product manager. Um, and I think

it comes down to these six core

characteristics that seem to really

matter. The first one here is

exceptional product taste and user

centric craft. And this, as I

mentioned in here, is this innate ability

to just understand what is a good idea.

I think that product taste is so

important these days. It is one of the

hardest things to find in good PMs. And

so some of the ways I try to practice

cultivating my product taste is just

looking around the world asking

yourself, do you like this thing? Why

not? What would you do better? What

would you do differently? Why do you

think this person made this decision on

this product? And the more you kind of

rehearse those questions and develop a

second intuition on what's good and

what's not and why, um, I think that's

sort of a a good exercise to have in

being able to understand product taste.

The next is visionary leadership and

systems thinking. Being able to connect

dots, to project out where you think

things are going, to be able to paint a

picture of the future in a compelling

way. so incredibly important when we

think about building AI native products.

A lot of what I'm looking for these days

is people who have a good hypothesis on

what's going to come next and it's not

usually rooted in what people are doing

today. It's about being able to predict

the future in a way. And so that ability

to see into where things are going,

which is often times rooted in what's

possible today, but then being able to,

you know, think five steps ahead, super

important in the age of AI, especially

with the pace at which things are

advancing. The third is this clarity in

chaos and empathetic resolve. You know,

I talked a lot earlier around how

difficult the zero to one can feel these

days and being able to lead a team

through that is incredibly important.

Being able to make them feel heard and

comfortable and excited is super important

in keeping people motivated to move

forward, especially in those more

difficult early messy days. And one of

the best ways to do that I found is to

just be able to bring that clarity into

the chaos. To be able to hold competing

tensions in your mind without having to

solve them all at once and know that you

are just trying to drive people forward

and being open to, you know, knowing

when to pivot without making it feel

like thrash. Um, it's just a skill

that's come up more and more and

something that I'm realizing is just

more and more important um the more I'm

building in these zero to one ambiguous

times. The fourth is compelling product

storytelling. A lot of times especially

in large companies people try to rely on

data as a way to predict what to do or

as a way to uh decide what to do and

what to build next. There isn't a lot of

data when it comes to building the next

generation of AI native products at a

massive scale. There's some, don't get

me wrong, but I think the

traditional product rules might

have been: analyze where the leaky funnel

is, think about how to

understand what people are saying isn't

working today, and then build the

small features that address that. In the

very early days, you don't necessarily

have all of that hard data and knowing

how to tell a compelling

story that gets people excited and

believing in you is super important.

There are certainly ways to leverage

data in this compelling product

narrative and storytelling, but I think

it's a different way being able to craft

that narrative than has been in the

past. Full spectrum execution and

ownership. It's interesting. One of the

things that has been talked about more

and more these days and I also believe

is this idea that you know sort of at

Google we call them role profiles. So

I'm a product manager, there's software

engineers, there's user researchers.

We all have different role profiles

and more and more going forward I think

those role profiles are blending and you

need to be able to just kind of work

really collaboratively with a group. I

look for PMs that can both give their

team a sense of agency and help get

everybody on board to move products

forward, but also take ownership on that

execution and are able to jump in

anywhere they're needed, more and more

these days and in the past, so this isn't

necessarily a net new thing, but I think

it's just interesting to see how

those role profiles are blending these

days and how important it is for a PM

to be able to just be comfortable with

that and and keep moving things forward.

And then this this last one, deep AI

intuition and applied creativity. You

know, I wrote about this uh even earlier

today around just the ability to have a

lot of really good ideas consistently

because more and more an idea could be

commoditized in the coming weeks or

months. And that is fine. Things are

moving really fast. I need people that

don't just have one idea and latch on to

it and and treat it preciously. It's

it's no, it's the skill of being able to

have good ideas that I look for. And

creativity, creativity is so key to

that. Being able to pattern recognize,

but also like think differently as a

result of that. It just keeps coming

back to this muscle of being creative. I

think that um it's something I've always

valued, but even more so these days with

uh with the age of AI. AI tools are

capable and they help you get things

done faster, but you need to make sure

what you're building is the right thing.

And I found creativity is a really good

lens for focusing that.

>> Brilliant. What a framework. So that's

what you look for in a product manager.

They need to translate that into a

resume. How do you create a great AI PM

resume?

>> Good question. Um I will share that opal

at the end of this which should

hopefully help add some critiques to

folks. Uh but I tried to summarize it in

this table. Um the first is just keeping

it short. I think some people feel like

they need to put their entire life

history on their resume and it can get

overwhelmingly long. So, keep it

succinct. You don't need to tell me what

you did in high school at this point,

unless for some reason it's incredibly

notable and worthwhile. Um, in which

case I'll I'll defer to you. But really

think critically about making every word

count. And the best résumés I've seen

are usually only a page. Um, the next is

show, don't just tell, with specific

linked examples. more and more résumés

are, you know, not just a physical piece

of paper that you're handing me, but

even if they are, give me websites to

link out to or like show me what it is

that you've done in a way that can jump

off the page. And oftentimes that

could mean linking out from

your resume. Using vague buzzword-filled

statements, uh, not helpful at all. I

realize it might sound like you're

meeting all of the job requirements, but

I have to put vague buzzwordy type

things out on the job

description because I'm trying to

understand the people that can meet

those. What I need you to do is show me

that you're doing those things, not just

repeat back to me what it is that I'm

looking for. Um, designing it with care

and personality. There's so many great

design resources. I mentioned how

creativity is one of the skills I'm

looking for. So if I get a super boring

resume that is just, you know, a plain

text wall, that to me doesn't scream I'm

a creative PM. And so for me personally,

I am looking for somebody that knows how

to thread the needle of giving me a

creative resume that's also incredibly

informative. And that comes out in the

design itself. Help me connect the dots

of your unique journey. This kind of

goes into make every word count. But

what you want to do in making every word

count is think of it as

telling a narrative or a story. This is

the one-page story of you. What is that

story that you want to tell? And make

sure as somebody who's never met you

before, it's clear what that story is.

Thinking in terms of not just

achievements, but like what is that

connected narrative throughout is really

helpful. And I can tell the the résumés

tend to feel cohesive as a result of

that. Proofread meticulously and check

all your links. There's no reason for

spelling mistakes. I still see résumés with

spelling mistakes. So, this is just like

a pretty basic one, but please take the

time. Please take the time. Make sure,

for every word that's on there, that I

feel like you read it. Because if I see a

spelling mistake, it tells me you didn't

read it, which gives me a signal to say,

why am I spending my time continuing to

read it? Frame your impact with context.

This one's incredibly important. And the

best way to actually test this is to

show people that don't know your work

directly your resume and see if they

feel like it stands out or feels

important. Um, when you tell me metrics

like 50,000 monthly active users or I

made the company x amount of dollars, I

don't know if that's good or bad because

I don't know what the baseline was

before that. I also might not know what

company you worked at and if it was a

smaller startup, don't just tell me the

name. Give me maybe a quick description

or what the company was about.

Don't make me do the heavy lifting of

having to go now search for this

company, especially if it doesn't exist

anymore. Just assume that the

person looking at your resume just knows

nothing about you, nothing about your

experience. So, how can you orient them

that way and make them understand

why the things you're putting on there

are important? And then the last one,

highlight your above and beyond

projects. Because I'm looking for an AI

PM, I've heard feedback that not a lot of

companies are building AI native

products. People don't have experience

doing that right now. How can they ever

get into it? That's why I have side

projects on the go. It's a way for me to

do things outside of my day job and stay

on top of things. Um, it's also a really

great way for me to see what your

interests are. So, link to them. Show me

what they are. Did you go to hackathons?

Did you win a hackathon? Have you spoken

at public events before? Like, what

makes you you outside of just your

normal day-to-day job and

credentials in a way that's related to

what I'm looking for?

>> Amazing. So, that's the resume. Let's

say you make it past, which is hard to

do at Google, but you do. What does the

interview process look like? There was

this viral post on Reddit about a vibe

coding interview. Is that true?

>> So, some quick comments on this.

Um, what I should have done was

approach it like a product design

interview. I would say that anytime

you're trying to be asked to build

something, I would be approaching it

with a product hat on if it's a PM

interview. I think just jumping straight

into vibe coding something is not what

I'm looking for in a PM right now. Like

I think it's great that you can do that

for what it's worth. But I've talked so

much already about this. I'm going to

say it again. Ideas are so important.

Knowing what to build is so important.

So if the first thing you do is jump

straight into execution mode, that would

worry me. Um, I love seeing

the ambition towards wanting to do that

and the excitement of building. But

first and foremost, I think that

building something good is is incredibly

important. So, um I agree that starting

with more of a design or a

product design mentality would have

made sense. I'm trying to write openly

about what I'm looking for in a PM and

the characteristics, the questions I'm

asking them to work on um before making

it into the in-person interview because

the goal of an interview isn't to try

and trip you up. That's never my

intention. Like I truly just want to

know how potential candidates think and

if they're a good fit. I can't speak to

this person's particular interview

experience. I think different teams will

have different ways of going about it

these days. But my general advice would

be to ask upfront what the interviewer

is looking for, or, even better, suggest

things. I think if all you do is go

to an interview and ask what the

expectations are over and over again

that can also lead to some

awkward conversation. So maybe

propose what it is you're about to do

and then you can check and say like

I'm going to approach it this way. Does

that make sense? And they can say yes or

no at that point or steer you in another

direction. But certainly, vibe

coding is important. More important, I

think, is having that product

mindset. And on my side, my goal is never to trip up

the candidates. Um but I do want to know

how they think and I do want to like

have exciting conversations with them.

>> So the typical Google PM interview loop,

as far as I've understood it from people

I've mentored: there's a recruiter call.

It's usually 30 minutes. There's a PM

phone screen. It's usually 45 minutes.

There's a full loop which is usually

four to five rounds. It'll usually have

like product design, analytics and

metrics, strategy and execution, maybe a

technical discussion, usually some sort

of leadership or behavioral round, and

then there's team matching. Is that the

right process? Is that the up-to-date

process?

>> The roles I have posted, I screen the

candidates myself. Um, so I think there

might be some that follow that where the

team matching comes at the end. For me

um it's coming at the beginning where

it's a specific role that I posted out

there. So you are correct though that

there is an initial screen with the

recruiter and then um the way I've been

doing it is I actually have candidates

answer, I think, five different

questions, and I've shared the ones

that I ask them online. Then I

read through all of them, and the ones

that resonate well I will flag to the

recruiter and she'll get them scheduled

for that 45minut uh call with myself and

then if that goes well there is the full

round of, I believe, four

interviews with different

characteristics that we are looking for.

But because I'm starting by looking for

a specific candidate, there's no team

matching at the end of this one. That

isn't to say that there might still be

other teams within Google um or other

job applications within Google that are

more broad to begin with and then team

matching comes at the end.

>> Got it. So, we've covered so much in

this episode. We covered a bunch of

knowledge. We covered how to use

Google's AI tools the best, how to break

into AI PM. If you had to put it together

into an 18-month road map for somebody

with PM experience but not AI feature

experience and they really wanted to

break into a Google or a FAANG or an

OpenAI or Anthropic, one of these top AI

companies, what would be your road map

for that person?

>> I would say focus on building and that

includes both building and creating. So

it might not be a full like production

deployed app although if you want to do

that great. It could just be a series of

opals or maybe it's more on the creator

side that you've decided to lean into

like some cool videos that you've made

with AI and like talk through the

workflow. Um networking would be another

big one. Go to different events, meet

different people. Um really try to uh

learn from others, but also create a

name for yourself as well. Get on

different social platforms, share what

it is that you're building and

learning. Have

conversations, uh position yourself as

somebody who has interesting ideas and

share those out with the world for

feedback. It's also a great way to

stress test whether you're thinking big

enough or thinking interestingly enough.

There are courses out there

that can also be helpful. Read up.

There's so many great substacks. There's

so many great podcasts that you can be

listening to. Um so I would say just

immerse yourself more than anything.

Continue to practice good product

management first principles. I think

that that doesn't necessarily go away,

but learn which ones need to adapt a

little bit more. And then I think kind

of the proof is in the pudding, which is

why I say create, build, like show what

it is that you've learned, rather than

just talk about it or

go and do things

behind closed doors. I think that

getting things out into the open and

having people be able to see what it is

you've done and how you've learned

um is going to be the best way to kind

of showcase your skills in this

area going forward.

>> Wow. Thousands of

dollars dropped in value for free just

like you do all the time with your

LinkedIn posts and your Substack, which

people should check out if they enjoyed

this episode. Jacqueline, thank you so

much for being on the podcast.

>> Thank you so much for having me. This

was fun.

>> Bye, everyone. So, if you want to learn

more about how to shift to this way of

working, check out our full conversation

on Apple or Spotify podcasts. And if you

want the actual documents that we

showed, the tools and frameworks and

public links, be sure to check out my

newsletter post with all of the details.

Finally, thank you so much for watching.

It would really mean a lot if you could

make sure you are subscribed on YouTube,

following on Apple or Spotify podcasts,

and leave us a review on those

platforms. That really helps grow the

podcast and support our work so that we

can do bigger and better productions.

I'll see you in the next one.
