How this Yelp AI PM works backward from “golden conversations” to create high-quality prototypes
By How I AI
Summary
## Key takeaways

- **Start with "Golden Conversations", not Wireframes**: Instead of traditional wireframes or PRDs, begin AI product design by crafting example conversations that represent ideal user interactions. This "working backward" approach ensures the user experience is central to development. [00:31], [04:49]
- **Use AI to Generate and Refine Prototypes**: Leverage tools like Claude to generate sample conversations, then use those to create interactive prototypes. This workflow allows for rapid iteration and testing of AI features with realistic LLM responses. [05:53], [15:59]
- **Explore UI Variations with Magic Patterns**: Utilize AI prototyping tools like Magic Patterns' Inspiration mode to rapidly explore multiple UI variations for AI features. This allows for quick visual ideation and comparison of different design directions. [21:22], [25:30]
- **AI Prototyping Offers Non-Linear Problem Solving**: AI prototyping tools enable a less linear approach to product development, allowing teams to explore solutions from various angles (front, back, or even starting from the end), making the process faster and more iterative. [31:53], [32:35]
- **Build Personal AI Prototypes to Skill Up**: For those without direct AI product opportunities, use AI prototyping tools to build personal projects. This is a fun way to learn and develop AI product management skills by creating solutions for your own use cases. [33:35], [37:50]
Topics Covered
- AI Product Management Starts with Example Conversations
- Use AI to Prototype Product Experiences
- AI Prototyping Accelerates Design and Ideation
- Embrace Non-Linear Approaches in AI Product Development
- AI Limitations: Context Windows and Human Differences
Full Transcript
Where do you start when you're thinking about designing and framing out an AI product for what you're working on at work?
>> What's different about managing products
that are powered by AI is there's the
interface of how a user interacts with
any product or product feature and that
still really matters. And there's also a
lot going on behind the scenes. There's
a lot also about how do you drive good
quality products because these
technologies produce different results
each time you use them. So we start with
golden conversations. What's the
experience that you're trying to drive?
And so this is just a way for me to think about how to write that, roleplaying a little bit with AI. What you're saying is actually write an example conversation that can represent what a real user might do, and you're working backwards from that example conversation, which I have actually not seen anybody do before.
[Music]
Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today we have an AI PM showing us how to AI PM. Priya Matthew Badger is a PM at Yelp, and she's showing us a completely new way to think about product requirements, prototyping, and how to build effective conversational agents using conversational agents.
Let's get to it. This episode is brought to you by GoFundMe Giving Funds, the zero-fee DAF. I want to tell you about a new product GoFundMe has launched called Giving Funds, a smarter, easier way to give, especially during tax season, which is basically here. GoFundMe Giving Funds is the DAF, or donor-advised fund, from the world's number one giving platform, trusted by 200 million people. It's basically your own mini foundation without the lawyers or admin costs. You contribute money or appreciated assets, get the tax deduction right away, potentially reduce capital gains, and then decide later where to donate from 1.4 million nonprofits. There are zero admin or asset fees, and while the money sits there, you can invest and grow it tax-free, so you have more to give later. All from one simple hub with one clean tax receipt. Lock in your deduction now and decide where to give later. Perfect for tax season. Join the GoFundMe community of 200 million and start saving money on your tax bill, all while helping the causes you care about the most. Start your giving fund today in just minutes at gofundme.com/howiai. We'll even cover the DAF transfer fees if you transfer your existing DAF over. That's gofundme.com/howiai to start your giving fund.
Priya, welcome to How I AI. I am so excited to
have you here because whenever anybody
asks me and they ask me a lot, how do I
do AI product management? I have to say,
wait, are you talking about product
managing with AI? Because I have some
ideas about that. Or are you talking
about product managing AI products? And
what's really great about the
conversation we're about to have is you
actually do both. So what in your mind
is really different about product
managing products using AI?
>> Yeah, I'm really excited to be here. Big
fan of the show and have learned a lot
about um AI, both managing AI products
and how to use it in my day-to-day from
the podcast. So it's exciting to be
here. For me, I think you know what's
different about managing products that
are powered by AI is there's the interface of how a user interacts with any product or product feature. And that still really
matters with AI products. Um and I'll
show some of the tools that we use um to
explore that. Then there's also a lot
going on behind the scenes that
determines the product experience for
the consumer. So um the system prompts
and how that guides the conversation
flow is really interesting and I think
kind of a new challenge when you're
working on AI powered products and
there's a lot also about how do you
drive good quality products because
these um technologies produce different
results each time you use them. So
there's a lot of um interesting
challenges there too. Yeah. So, I'm
really excited to learn from your flow myself, because I'm building an AI-powered product as well. And so, let's dive into it. Where do you start when you're thinking about designing and framing out an AI product for what you're working on at work?
>> Yeah, absolutely. So, I thought a good
example would be to talk about building
a new feature capability into our Yelp
Assistant. So that's the product I work
on. And the way it works is a consumer
can come in for a service need. So let's
say you want to hire a handyman, a
plumber, an electrician, somebody to fix
your car, and you can describe the
problem in your own words, and then the
AI will understand what you're saying,
collect some project details, and um
help you get matched to pros and get
quotes. And so that's how the product
works. And we recently launched a
feature that allowed consumers to upload
a photo to help describe their need. And
that just makes sense, right? It
helps for pros sometimes to be able to
see a photo along with the description.
But one of the things we wanted to do
was because we're doing this in our AI
assistant, think about, you know, how
can we leverage those AI capabilities?
Can the AI understand what's in the
photo and customize the conversation
from there? Um providing, you know, some
recommendations around what the consumer
should do next.
>> As a Yelp user, I can imagine the variety of services that your pros are providing, and, you know, I don't run consumer businesses, but I can imagine the variety of things a user puts into these conversational or image upload interfaces could be very diverse. So I'm
curious how you approach that from a
product development perspective.
>> Yeah, absolutely. Yeah, we certainly
cover a lot of different categories of
service needs at Yelp and one of the
challenges is yeah, making sure that the
experiences work across all those
different use cases that a consumer
might have. Do you want to jump in and
I'll show you my workflow.
>> Yeah, let's do that.
>> Okay. So, I'm going to just open up
Claude. And here we're starting in a
totally new window. And you know, as we
talked about, like I think there's, you
know, two pieces to these AI products.
There's the behind the scenes part and
then there's the user interface that consumers see. Um and I like to start with thinking about what is that conversation flow going to look like when we add this new functionality. And so I'm going to show you here how you can do that with Claude. Um and you can also use ChatGPT or any other of these foundational models. So here I'll
say write a complete um sample
conversation between the consumer and
the AI assistant um where we want
consumers to be able to upload their
photo and then just add some scenario
requirements like we want the assistant
to analyze the photo maybe provide some
suggested replies and uh continue that
back and forth until they have enough
info to submit quotes. One thing I'll call out on the prompting is I do like to give a little direction on what the output looks like. So you can see here I'm saying use "Assistant:" and "User:" for labels, and write it as one continuous conversation. I think that really helps make sure that you get the output that you're looking for, and there's a little less back and forth with the AI.
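For listeners who want the shape of that prompt, it runs roughly like this (a paraphrase of what's typed on screen, not the verbatim wording):

```
Write a complete sample conversation between the consumer and the AI
assistant, where the consumer uploads a photo of their home service need.
Requirements:
- The assistant analyzes the photo and responds to what it sees.
- The assistant provides suggested replies.
- Continue the back-and-forth until there is enough info to submit quotes.
Output format: use "Assistant:" and "User:" labels, and write it as one
continuous conversation.
```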
>> So for the folks listening, one of the things I want to call out that I think is really interesting about this approach is you're sort of using an example conversation as your first-pass wireframe for building a conversational AI. So instead of saying, like, show me a
chat window and show me messages that
show up in these buttons, what you're
saying is actually write an example
conversation that can represent what a real user might do, and you kind of give some constraints about what that conversation could look like, and you give it some of the capabilities that might be available during that conversation, and you're working backwards from that example conversation, which I have actually not seen anybody do before. So I think it's a really
unique approach that product managers
out there working on conversational um
AI products including myself can really
take a lot of inspiration from. How did
you come to this idea? I mean was this
your like are you just a genius and
you're like this is the first thing that
we need to do or how did you come to
this idea?
>> No, I mean I think this is part of um
our standard LLM-powered playbook at Yelp
where we start with golden
conversations. What's the experience
that you're trying to drive? Um, and so,
you know, I think, uh, this is just a
way for me to like think about how to
write that, um, roleplaying a little bit
with AI.
>> Yeah. And I just want to call this out.
We're going to take a little side uh,
detour to just some product management
ideas, which is I often tell product
managers to prototype their product as
close to the end product that a consumer
is going to consume, including the
content. So when I worked in dev tools,
um I would tell a lot of our PMs, don't
write a PRD, write a quick start and
documentation guide to the product.
Write the code snippets. Um and then
work backwards into what the product
should look like. And so I love this
idea of just from a general product
perspective, work with the artifact
that's closest to what the consumer is
actually going to experience and then
you can back into all the requirements
once you're kind of inspired by what
that end state is. So, what does
something like this get you?
>> Yeah, absolutely. So, let's go through
it. So, I'm actually going to upload a
real photo of a home service need. So,
here's like a picture with a cracked
porch. Um,
>> not your cracked porch.
>> It's not. No. Um,
yeah. And then we'll look at what um
what Claude comes back with. Um, I will
say one of the pictures I'm going to
test is from my bathroom renovation. So,
you will see my bathroom. And one thing
I'll call out is Claude now shows you
its thought process. And you'll see
this in a lot of AI tools. I really like
to read the thought process and it's
also something to do while you're
waiting. Um, but I think it really helps
because you can see how it's
understanding you. If it doesn't come
back with what you want, it also is
really good for troubleshooting. So,
definitely something I recommend doing.
>> Yeah. One thing that I'll do while this
is loading is call out, I too think that
reading the reasoning or the thought
process of the AI is interesting for two
reasons. One, it can often help you
improve your prompts because you
understand what the AI is understanding
or not understanding about your prompts.
As somebody who likes misspelled, no
sentence, low syntax prompts myself,
it's good to see where I'm
misleading the AI. The other thing is
the thought process is often where the
AI reveals its personality. I think it
is so funny
to read, like, Gemini 2.5's thought process versus o3 versus Claude's. Claude is very nice. Claude practices self-love. Um Gemini 2.5
does not. And so I just think it's uh
it's also interesting from just like a a
model understanding perspective. Okay.
So we got a we got a chat here.
>> Yeah. So then we can read through the
chat and it's, you know, it's saying
like, I can see you've uploaded this
photo of a front front porch stabs with
a significant crack running through the
concrete. So pretty good recognition of
the photo. And then it says, let's ask,
let me ask a few questions, you know,
how urgent is this? You know, are you
looking to repair this? Would you prefer
to replace the entire steps? And so I
could look through this, you know, and
maybe workshop it a little bit, giving
it some feedback. I also find it's
helpful to just create some more
examples. Um sometimes like when you see
a lot of examples, that's when the
trends come out and that's when you see,
you know, what you might want to improve
or change. And so I have a bunch of
images now. So now that I've tested it
with one and I've seen that, you know,
it works pretty well with that one. I'm
now going to test it with a lot more
images. And this is the prompt I'm going
to use. So I'm going to say now create
more examples based on these images. And
to your point earlier, you know, Yelp
covers lots of different um types of
service needs. So, this is where you can
kind of test and see how's it going to
do across a lot of different problems.
And so, here I have, you know, an appliance repair issue with an error code. I have a hornet swarm, a wasp nest.
Um, so you can see, you know, a larger
variety of things. And just because I
know you really wanted to see my
bathroom, I will also upload and add a
picture of my bathroom renovation in
progress. Um, and then I'm going to say,
um, you know, label each conversation
with a title and a number at the top.
So, just another example of how just
that like little nudge on the output can
really help you get something usable.
Great. And so we're going to see here
how this AI thinks about potentially
framing responses to consumers on a variety of, as a homeowner, total
nightmare scenarios. Everything from a
wasp to a bathroom renovation, which I
am also about to start um is just a
nightmare to me whether or not I want to
do it. Um and so you're getting these
example conversations and what are you
looking for? Are you looking for
patterns? Are you looking for product
inspiration?
what's kind of the thing that you're
seeking in these examples?
>> Yeah, that's a great question, and I think this goes in with, you know, a lot of people talk about how evals are the new PRD, and this is the very early step of getting to the eval process. Um, you know, I think you get a sense of what the criteria are that are important for this capability. So, you know, the first thing is, did it actually recognize the image well, right? So I can compare and see, like in this first one, the oven door lock malfunction, where I've uploaded this picture, and it is actually looking and seeing that it has the door locked and it's trying to understand that issue. You know, maybe we would give it feedback to go one step further, like pull that E3 error code, look in your LLM understanding to see if you can guess what the issue is, and diagnose it better. Um, but I think that's the first step: is it doing that recognition right? And then after that, we're looking through the conversation. First I just look at it qualitatively to see, does this flow well? Is it concise? Is it easy to understand? Um, and then we'd probably develop more of a rubric around what the criteria are that we're looking for.
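Yelp's actual eval harness isn't shown in the episode, but as a sketch of where that rubric idea heads, grading golden conversations with an LLM judge might look like this in Python (all names, criteria, and the model string are illustrative assumptions, using the Anthropic SDK):

```python
# A rough sketch (not Yelp's actual harness) of turning that rubric into a
# repeatable check: grade each golden conversation against explicit
# criteria with an LLM judge. All names and criteria are illustrative.
import anthropic

CRITERIA = [
    "Did the assistant correctly recognize what is in the photo?",
    "Does the conversation flow well, and is it concise and easy to understand?",
    "Does the assistant avoid asking the user about budget?",
    "Does the assistant offer an opinionated recommendation or next step?",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def score_conversation(conversation: str) -> str:
    """Ask an LLM judge to grade one golden conversation against the rubric."""
    rubric = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(CRITERIA))
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": (
                "Grade this assistant conversation pass/fail on each criterion, "
                "with one sentence of reasoning per criterion.\n\n"
                f"Criteria:\n{rubric}\n\nConversation:\n{conversation}"
            ),
        }],
    )
    return response.content[0].text
```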
>> Okay, so you have these different conversations. What do you do with them next?
>> Yeah, and I'll just show one
example of refining these conversations
and why AI is really great for this. So,
you know, let's say I think it's
good, but I don't think it's being as
opinionated as it could be about like
offering the user a recommendation and
maybe sometimes it's talking about
budget, which we think the consumer may
not know. So, I can ask it to rewrite
these conversations based on this
feedback and it will go through and
update all those conversations for me,
which I think is really nice. And um you
know then you can go through and see you
know do you feel like it's taking that
feedback well? Is it actually rewriting
it um based on that guidance? But
definitely you know you can see here
it's saying like this definitely
requires professional pest control.
Don't attempt a DIY removal of this
nest. Um which I think is probably good
advice. Um,
and then to your other point about like
how do we get an artifact that is closest to what the consumer will experience, that is the next step that
I'm going to show you and something I
think that is pretty unique to Claude.
Um, so Claude has a special
functionality built in where it actually
can create an artifact that uses the LLM that powers Claude to produce those responses. And that's very unique to Claude. If you did this in another prototyping tool, you would typically have to set up an API key and an integration, which just takes a little bit more work, and with Claude you can do it out of the box. So here you can see I'm
asking it to create an assistant app as an artifact, have a chat interface where the AI responds using the LLM that powers Claude, and then also create a system prompt that is based on these example conversations, and then analyze these uploaded photos and include a camera icon in the input.
And then I'm actually going to upload
some um screen grabs of our current Yelp
Assistant and indicate that it should
use these attached screenshots as an
example for what the front end should
look like just so that it feels a little
bit more real.
>> Got it. So you really are using example conversations and just reference designs as your PRD here. And then what you called out that's unique about Claude artifacts is it has fully integrated Claude AI, so you can actually generate artifacts that do make native LLM calls to the Anthropic API. So if you are prototyping a little AI product out there, check out Claude, because it just makes it a little simpler and you don't have to pass it a bunch of API keys.
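For anyone who does want to wire this up outside Claude artifacts, a minimal sketch of that integration with the Anthropic Python SDK looks something like the following (the model name, system prompt, and message are placeholders, not what's shown in the episode):

```python
# A minimal sketch of the LLM call that Claude artifacts give you out of
# the box. In another prototyping tool you would typically set up an API
# key and make a request like this yourself. The model name, system
# prompt, and message are placeholders, not the episode's configuration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    system=(
        "You are a home services assistant. Analyze any uploaded photo, "
        "offer suggested replies, and do not ask the user about budget."
    ),
    messages=[
        {
            "role": "user",
            "content": "My oven is showing an error code and the door is locked.",
        }
    ],
)
print(response.content[0].text)
```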
>> Yeah, absolutely. And you can see that it's writing the code here, and at the top it actually wrote the system instructions. And I think this is also a really good way to learn, because you can see, based on these example conversations, how Claude is translating that into system instructions. Um so it's, you know, mirroring some of my initial prompting and redirection around providing suggested replies, um not asking the user about budget. And so I think that's really helpful. And then you can see it gives some examples from my examples as part of how to guide the assistant around photo analysis as well.
All right. And so I'm going to test it out, and we'll see if it works out of the box. Um, it does sometimes require a little back and forth.
Um so you can see here I have uploaded
the photo of my issue and Claude is
thinking.
Okay, great. Um so here you can see it
worked pretty well. So it said, you
know, I can see it's showing F2 in red
and the door locked and this is a common
error code relating to the oven lock.
You know, typically you want a repair
technician. It's asking about the
urgency. So it is, you know, simulating
pretty well this conversation. And one
of the reasons why I think it's helpful
to simulate it in this kind of artifact
is you can also get a real feel of how
this would be for the user. Like you can
see like sometimes a response that looks
fine when you have it in a doc feels
really long when you see it in like the
little chat bubble and the mobile
interface
>> and you know that waiting period of like
the three dots and then the response
comes back when you play out the full
conversation
>> can feel very different. So I think this
is also a really good step to do
>> and then you can of course share this
with your team or your designers or your
engineers and they can also start to get
a sense of how does this feel? Can we
actually do this? How can we refine it
or make it even operate better?
>> I just have never thought of this flow. I have to repeat it again for folks: you know, kind of starting inside out with a conversational agent, prototyping example conversations first, getting them refined, getting a good set of example conversations that you can then put into a prototype-generating tool, in this instance Claude, to then back into the chat experience, including the system prompt that would best serve those conversations. Such a great flow. I'm
so impressed. This episode is brought to
you by Persona, the B2B identity
platform helping product fraud and trust
and safety teams protect what they're
building in an AI first world. In 2024,
bot traffic officially surpassed human
activity online. And with AI agents
projected to drive nearly 90% of all
traffic by the end of the decade, it's
clear that most of the internet won't be
human for much longer. That's why trust
and safety matters more than ever.
Whether you're building a next-gen AI
product or launching a new digital
platform, Persona helps ensure it's real
humans, not bots or bad actors accessing
your tools. With Persona's building
blocks, you can verify users, fight
fraud, and meet compliance requirements.
All through identity flows tailored to
your product and risk needs. You may
have already seen Persona in action if
you verified your LinkedIn profile or
signed up for an Etsy account. It powers
identity for the internet's most trusted
platforms and now it can power yours
too. Visit withpersona.com/howiai to learn more.
You know, now
what I have to call out is this looks
pretty good, but it doesn't look quite
like Yelp. So, how do you take this to that next step of,
you know, really um designing out what
the real product might look like?
>> Yeah, for sure. And I will say like I
think this is all just a starting point
and it's a part of a conversation with
your larger team, right? With the
engineers and with designers. Like, I think this is really something that helps me clarify my own thinking and ideas, and refine what does that ideal conversation look like, and also, you know, be a better
collaborator because I understand system
instructions better um as uh as we're
going through features. Um but yeah, so
I think, um, you know, it still goes through our usual design and engineering processes once we
have a good idea of you know where we're
headed and it really has been a
collaborative process for us between
design, product and engineering where
we're all writing these conversations
together. We're giving each other
feedback on them. Um so now I'm going to talk about how do we think about exploring ideas on the other side. So we went pretty deep on what does that conversation flow look like and how we can use Claude to explore ideas there, and the other piece is: what does the interface look like?
What are the user flows? How does a user
get into these assistant experiences?
And I have seen that a lot of those
little details matter as well. You know
what are the prompts? How does a
user understand the capabilities of the
assistant? And so here with uh I'm going
to show another tool which is magic
patterns. And I think magic patterns is
really great for when you want to
explore something visually and like kind
of consider what that flow would look
like. I know Colin Matthews was on this
show earlier and he showed how you can
recreate a you know an existing product
using a component library or screenshots.
So I'm not going to cover that in
detail. So here I've recreated our Yelp
Assistant um with that kind of approach.
But I'm going to show you how you can
then move on to actually explore features within Magic Patterns, which I
think is a lot of fun. So here I'm going
to actually ask it to add a prompt
suggestion at the top for start with a
photo which allows the user to upload a
photo. And you can see here it's thinking, and it's saying: I will add this prompt suggestion for start with a photo; this will likely require these things; for styling, I'm going to consider this. So
again, like reading those thinking
instructions, I think is super helpful.
So what it's doing now, now that it has
those instructions, it looks like it's
sort of doing this thing that you see in
a lot of these prototyping tools, which
is it's creating or updating components. It's
going to kind of insert those design
elements
into this design for you to give
feedback and test with. And I just have
to say, you've been a PM for a little
bit. I've been a PM for a little bit.
Have you ever had access to this kind of
like on-demand design and code? Like, has this totally changed the way you think
about working through designs,
wireframes, stuff like that?
>> Yeah, it absolutely has. Yeah, I think
my mind was kind of blown, to be honest, the first time I used these natural language prompting prototyping tools, just because it's just so magical
for you as a PM to be like hey I can
just describe what's in my head and
actually have it you know come to life
um in a prototype. So it really has. Uh, you know, I think the core of the PM job and the earliest part of the workflow hasn't really changed, in that you're still trying to understand deeply
the user problem figure out what to
prioritize. Um but I think it really
helps in the phase after that where as a
team you're exploring the solution
space. What can really solve that
problem for a user? How do we make them
aware of it? How do we make sure it's
easy to use? And I feel like it's just
really fun to be able to like play
around in these tools and explore ideas
um myself visually and and find better
ways where I can communicate something
that's in my head.
>> Amazing. Okay. So now we have a start
with a photo.
>> Okay. So yeah, we have a start with a
photo. As you can see here, it's got
this UI where I can start with a photo.
Um so you know that's you know one
option. And then of course, you know, we did something simple when we launched this feature, where there's just a camera icon, but I'm showing this example as a way that you can explore what other ways there could be to make this experience as you're thinking about iterating. And so here
I'm going to show you this really cool
feature within Magic Patterns which is
called inspiration mode. Um and
definitely recommend digging into this
menu in general. Um, they have like a
lot of nice little shortcuts, but this
inspiration mode is my favorite because
you can quickly explore lots of
different options. So, here I can say,
"Give me some options on how the start
with the photo flow could work to make
it feel more guided for the user." And
this part of the prompt I workshopped a
little bit, but I think works to help
have the inspiration mode come up with
different ideas. I say, think expansively and make each option differentiated, and then explain in your response what each option is. Um and so I'm going to go
ahead and submit that and it will
generate for me four different options.
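Pieced together from what's typed on screen, the workshopped Inspiration mode prompt reads roughly:

```
Give me some options on how the "start with a photo" flow could work to
make it feel more guided for the user. Think expansively and make each
option differentiated, and explain in your response what each option is.
```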
And you'll see that um once it goes
through this process, it will actually
have four different boxes on the screen.
And as you want to explore those
options, you can click through those
boxes and it'll update what's on the
left side. So you can really quickly
explore and see the different ideas and
you know decide what you like. Um and I
like doing this because I think
sometimes we come in and we feel like we
need to have a whole PRD before we can
start prototyping. And that's definitely
one approach and use case for AI
prototyping tools. But I've also found
that they're helpful even earlier, when you do understand your user problem, what you're trying to solve for, but you may not know really what the solution looks like, and you want to explore and maybe get some ideas from AI as well.
>> Yeah, this just makes
me think I don't know if designers are
going to love this or hate this. I
remember this experience when I was a
designer where somebody would give me a
PRD or a feature like this and I
would give them back a design like what
we see on the left and they'd be like
great but can we like try it over here
and try it over there and move it up
there and make it this button and like
make it a link and that like manual
iteration where it wasn't really um
moving the product forward. It was kind
of getting our own minds around what the
problem space and the solution space
could be so that we could move the
product forward just took a lot of time
and so I think it's really interesting
to compress the time for ideation so
that you can get to the ultimate product
a little bit faster.
>> Yeah, absolutely. And like some of our
designers are also using Magic Patterns or even other AI prototyping tools, like Figma has Figma Make, and so I think it's really just part of
the conversation. You know, I'll ping a
designer, hey, I was thinking about this
and, you know, was thinking maybe we
could go in this direction and send them
a link and they'll be like, oh, I was,
you know, exploring something similar
and we'll just trade notes. So, to me,
it's a replacement for what I was doing
before, which was really hacky Figma
mockups and like not so great
wireframes. Um, and so I think it's an
extension of that like wireframing hacky
Figma prototype process where it just is
easier for someone to understand because
they can actually click through and see
the flow.
>> Yeah, it's just more interactive, I think, is really the thing. It might not be higher
fidelity, but it's a richer kind of
prototype experience than you would get
from sort of a flat design.
>> Okay, we at least have three successful
generations. We can click through
>> With all AI, you know,
sometimes you get errors, but you know,
here it says it's like a guided category
selection flow. So, we'll click through
and see what they did. So, you can see
here it's like kind of customizing it a
little bit for the category of um of the
service. So, I'm going to go back and
maybe select another category and see
how it's different. So, it's like, you
know, kind of customizing some of the
tips um in this one. Let's see. I might
need to actually select a photo to see
what it does. Um, so you can see it's
like going through an analysis.
You know, this is not using the LLM
behind the scenes. So, you can see it's
not making sense, but I think the
idea here makes sense where it's like,
okay, it's going to do this like kind of
real time detection. Um, and then in
this one, it looks like it's like
multiple photos. So you can see here
it's you know showing like you know you
could um prompt the user to maybe take
multiple pictures. I will just click on
this to show that you know this is how
AI works, or sometimes you get
errors and you need to fix them. Um you
know usually there's that like shortcut
to like try to fix it. Um, if it doesn't
work, um, there is also like a debug
command within magic patterns, which I
found pretty useful, which just tells it
to like look through your code, try to
come up with what's wrong to fix it.
>> Um, let's see if it did fix it.
>> For our listeners that are not watching, I will spare you reading the uncaught React errors about incompatible React versions. But that is what we are looking at right now: a compatibility issue between React 18 and 19.
>> Yeah.
>> All right. So like all good AI demos,
this one did not work. But I do want to
say just stepping back what I wanted to
just call out is you have demoed for us
a completely new way of thinking about
product management prototyping and
product requirements
in a way that is very different than I
think what classic product management
has looked at. And so you're starting
from a kind of example consumer
experience first. You're backing into
kind of a rough prototype of what could
support that experience. You're using an AI prototyping tool, in this instance Magic Patterns, to then put that
experience in your brand and design
guidelines. And then you're using that
as a jumping off point to fork and
inspire a couple different versions of
what that ultimate user experience could
look like. And then I'm presuming you're
going to take one of these and you're
gonna say I think we want to start here
for our MVP or our V1, and then, you know, you get the team together, and that's where you start. And so
I think for the product people listening
what I like about AI is it's not just
multimodal, in that you can put any sort
of um file type or data type in. It also
allows you to approach problems from the
front door, the back door, the side
door, the window. Like, you know, you
can come at your product problems in a
much less linear way. And in fact, you
can start at the end, go back to the
beginning, come to the middle, fork off,
go back to the beginning, and
reprototype. And it's not expensive,
it's fast, and it's interesting. And so
I think what you've inspired me to do is
actually think a little bit differently
about what the starting point of product
management could be not just for AI
products but for product in general. And
then of course you showed some great
ways that AI can help with that.
>> Yeah, absolutely. Um and I will say yeah
to your point you know you can pick
which one you like the best um which you
think fits your you know where you are
um in your in your product journey and
your user needs. Um, you can also like
if there's one that feels like, hey,
this like AI assisted one seems really
interesting, or this multi-photo one seems
really interesting, but maybe not like
where we're going to go right away, you
can fork this design and it will create
um a totally separate window and chat
for you um of just that variant and then
you can just run off with that, you
know, maybe on the side um while you're
continuing down the original path you
were in.
>> I love that. So we have seen your AI-powered AI PM process, and usually I would bump us to the lightning round, but part of our lightning round is going to have a couple demos in it. So as my first lightning round question, can you do a quick world tour of a couple non-work-related AI use cases that you
think our listeners would really get a
lot of value from?
>> Yeah absolutely I can share a few
personal examples also. Um so one is, you know, I started this talk-AI channel at Yelp, which was actually inspired by a talk-AI channel in Lenny's community, and um I wanted to create a monthly newsletter
that gets sent out that just summarizes
all the great discussion and content
that was being created there. And so um
I'm just going to show an example of how
to do that using Lenny's community. Um,
and so here I have this um, set of
project instructions that say, you know,
I'm a community manager writing a weekly
newsletter. Um, use these Slack
conversations and format them just like
the community wisdom newsletter. And
then I think what's really cool is I can
just come in here and I can say, you
know, I want to just make a version of this community wisdom using this Slack chat, and I can upload the file of all those Slack chats. And I did randomize the names, or replace the names, for privacy, also using GPT. Um,
and then you can see here it's going to
make a version of that community wisdom
newsletter just using those Slack chats
and um, reuse that same format. And by
using a project, I can, you know, save
myself some time on the prompting.
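The project instructions she shows boil down to something like this (paraphrased, not the verbatim instructions):

```
You are a community manager writing a weekly newsletter.
Use the attached Slack conversations and format them just like the
"community wisdom" newsletter: pull out the top threads and summarize the
best discussion and content shared that week.
```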
>> Great. So, you're copying and pasting, like, a week's worth of Slack conversations.
>> Mhm.
>> You're putting it into this Claude project, which you've given a template, and then you're having it generate, on a weekly basis or whatever, kind of a summary of what's going on in that community and other kind of content that's being shared.
>> Yeah, absolutely. And then you can see,
you know, it kind of follows that community wisdom format and pulls out what
the top threads are. And so you might
want to make some edits to this
afterwards, but it really, you know,
gets a really good first draft that you
can then edit.
>> Amazing. And you're probably everybody's
favorite community member.
>> Yeah, it's definitely a lot of fun um to
yeah, see what people share. And then
I'll show a couple other examples. So,
you know, I showed the example of
creating the Yelp Assistant and I
actually used the same workflow to
create this parent pal to explain how
artifacts work to my husband and he was
really excited about it. He was like,
"Hey, like let's try it out with, you
know, Tommy where Tommy throws toys down
the stairs." So, you know, I did like,
you know, my two-year-old um throws toys
down the stairs and uh it's some the
same kind of artifact where it's powered
by Claude's LLM and it's going to ask me
some clarifying questions like what's
the trigger and it's like always at
dinnertime when we are cleaning up. Um
and then you can, you know, see how the
AI will provide some parenting guidance.
And I think the really fun thing for
this is that, you know, you can build
something that's just really for your
own personal use case. Um, and it's a a
really fun process to do that. I'll show
one other one, which is um my siblings
and I like to play this board game,
Settlers of Catan. But the bad thing is
it kind of takes a long time, especially
if people don't go fast. So, I'm working
on this Settlers of Catan timer where um
I actually have a timer for me and my
siblings and both for the setup and the
main game play. But this one I actually
built in Lovable because my siblings had
a lot of feature requests about tracking, you know, who's won
over time and having a leaderboard and
handicaps and all sorts of other ideas.
So, I definitely think it's a lot of fun
to prototype with AI for your personal
use cases. And I know some PMs are like,
"Hey, I really want to work on AI
products, but I don't have that
opportunity right now." I think the fun
thing about these prototyping tools is
you can build a use case that's just for
you or just for you and a family member.
Um, and learn a lot as you're doing it.
You just gave me such a good idea
because I don't play a lot of board
games, but my kids get like 10 to 15
minutes of Minecraft every day, but we
only have one, like, timer. Um, and so I need an iPad where they can both click their button and have it count down. And then they're also really
worried about fairness. So I will also
use a relational database to store all their time and say, I promise, every week you are getting an equal amount of Minecraft. There is no lack of fairness. And then when they fight about it, I'll use your parent pal GPT.
>> I love it. Yeah, you can just direct
them to check the dashboard.
>> Amazing. Okay, last question and then I
will get you back to all your
prototyping and all your AI building.
When AI is not listening, other than clicking that debug button in Magic Patterns, what is your tactic? What do
you do?
>> I think that when AI is not working
and you've already tried some of the
debug um methods, I think it's helpful
to actually think about the ways that AI
is different than a human. Like often we
just get in this chat and we're like,
this is just like talking to someone.
Um, but when you're hitting the wall, it helps to take a step back and be
like, "This thing is actually not a
human. Like, what could be going wrong?"
And think about AI's limitations. And,
you know, the ones that I try to keep in
mind are it tends to lose context as you
go through many different turns. And it
has a limited context window. And so,
when you start having a really long
conversation with AI, sometimes it just
goes haywire. And so the methods I recommend are, if you're doing AI
prototyping, you can use that fork or
you know a remix to start a new chat
with the context of that code and that
actually resets the context window. Um
so that's a good idea if you're going
really far and deep with a prototype. Um
and the same thing applies to a chat.
Like if it's going haywire and you've
had like a hundred back and forths, you
can ask the AI to summarize the chat and
the context and start a new chat.
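As a minimal sketch of that summarize-and-restart tactic in code (illustrative only; assumes the Anthropic Python SDK, a placeholder model name, and a history that ends on an assistant turn):

```python
# A sketch of the "summarize and restart" tactic for resetting a crowded
# context window. Illustrative, not any tool's built-in feature; the
# model name is a placeholder, and `history` is assumed to end on an
# assistant turn so that roles keep alternating.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder

def reset_context(history: list[dict]) -> list[dict]:
    """Collapse a long message history into a fresh one-message context."""
    summary = client.messages.create(
        model=MODEL,
        max_tokens=800,
        messages=history + [{
            "role": "user",
            "content": "Summarize everything important in this conversation "
                       "so we can continue it in a brand-new chat.",
        }],
    ).content[0].text
    # Start over, seeded only with the summary.
    return [{
        "role": "user",
        "content": f"Context carried over from an earlier chat:\n{summary}",
    }]
```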
>> You gave me such a good idea with your last two answers, because I am going to prototype a parenting pal for the relationship between me and my AI.
>> Be like, AI parenting pal,
>> My 4-second-old AI is no longer listening to me. What do I do? Um, that's really great feedback. And yes, reminder: AI is not human, until the AI overlords take over, and then you can be whatever you want.
>> All right, Priya, this was such a
practical, super useful, inspirational
conversation. Where can we find you and
how can we be helpful?
>> Yeah, you can find me on LinkedIn and
then I also have a Substack at almostmagic.substack.com, where I share some prototyping tips and
other tips about building AI products.
>> Amazing. Well, thank you for sharing and
joining How I AI.
>> Awesome. Thanks so much for having me.
Thanks so much for watching. If you
enjoyed this show, please like and
subscribe here on YouTube, or even
better, leave us a comment with your
thoughts. You can also find this podcast
on Apple Podcasts, Spotify, or your
favorite podcast app. Please consider
leaving us a rating and review, which
will help others find the show. You can
see all our episodes and learn more
about the show at howiaipod.com.
See you next time.