Complete Course: AI Product Management
By Aakash Gupta
Summary
## Key takeaways
- **AI PMs Paid More for Tech-Business Combo**: AI PMs are paid more than average PMs because they combine business skills with technical knowledge to work effectively with engineers on new AI technology. [01:41], [02:06]
- **Prompt with Context, Roles, Rewards**: Provide context, assign roles like PM or engineer, give step-by-step instructions, and add rewards like $1,000 for champion-level performance to get reliable LLM results; politeness and examples also improve outputs. [03:03], [08:57]
- **Few-Shot Prompts Often Beat Fine-Tuning**: PMs fine-tune too early, when better prompting like few-shot gets 95% of the results faster and cheaper; the demo showed few-shot summarizing better than a fine-tuned model. [19:52], [29:01]
- **RAG for Dynamic Document Queries**: RAG uses vector stores like Pinecone to retrieve relevant document chunks for queries, avoiding hallucinations and the high token costs of stuffing all data into prompts; the demo connected Google Drive to a chatbot. [30:03], [56:48]
- **MCP Standardizes Multi-Tool Access**: MCP lets agents discover and call tools from services like Figma or Jira via standard methods without reading full APIs; the demo auto-generated epics and stories from a Figma design into Jira. [01:07:29], [01:16:36]
- **Agents Plan, Delegate, Execute Research**: AI agents classify intent, select tools, and execute with error handling; the deep market researcher plans up to 11 tasks and delegates them to sub-agents for web search and scraping, producing PM-focused reports. [01:18:31], [01:21:16]
Topics Covered
- AI PMs blend business with technical depth
- Prompt like a chess grandmaster
- AI PRD counters hype with alignment
- Fine-tuning shrinks costs and captures style
- RAG scales to millions of documents via vectors
Full Transcript
In today's episode, we'll cover everything you need to know to become an AI product manager: prompting, fine-tuning, and building RAG. Are all PMs going to need to become AI PMs? The AI market is growing so fast that there is a high probability that we will meet more AI product managers in the future.
You can't just prompt like an average person, right? You need to be prompting at a very high level. And we instruct ChatGPT to think separately from different perspectives. I like adding that you will get $1,000 if you perform this task at a champion level, or something like that. Why are AI PMs paid so much? One of the reasons is that they need to combine business skills with technical knowledge. Then, based on this generic information, it plans up to 11 separate tasks, and those tasks are distributed to different agents. It's not that everyone needs to become an AI PM, but this market is growing really fast, and we just gave you all the tools to become an AI product manager. Looks like that's the ultimate list of product metrics. I'm a dummy. So, I'm still trying to understand: when do I use fine-tuning and when do I use RAG?
Really quickly, I think a crazy stat is that more than 50% of you listening are not subscribed. If you can subscribe on YouTube, or follow on Apple or Spotify podcasts, my commitment to you is that we'll continue to make this content better and better. And now, on to today's episode.
Paweł, thank you so much for being here again. Yeah, it's great to be here. Thank you, Aakash. So, I want to talk a little bit about AI PMs. Why are AI PMs paid so much?
That's a good question. Yeah, they are definitely paid more than an average product manager, across different industries and regardless of their experience. One of the reasons is that they need to combine business skills with technical knowledge. So even though, just like with technical product managers, you do not need to code, and in this case you will not fine-tune models or create AI workflows yourself, you need to understand the technology well enough to work with engineers, and that might be challenging. This is a new technology.
Yeah, I think that's one thing. And then the other thing is it's just such a hot area, right? So all of the PMs want to go into it, and I think that, as a result, what we're seeing is that the best companies are trying to pick up the best talent, and to pick up the best talent you might need to pay a little more. So today I want to break down for everybody how you can become an AI PM, and I want to start at the very basics of AI PMing: you need to be able to prompt well. So can you break this down for us?
Yeah, of course. Want me to demonstrate a prompt? Let me share my screen. Yes, I would love to see how you prompt.
So, an example prompt from my article where I describe the best practices looks like this. This is a prompt about identifying hidden assumptions for the product trio, which performs continuous product discovery, and the first thing we want to do is explain the context. Just like when we are working with engineers or product teams, we want to communicate the context: not only what the task is that they are supposed to perform, but also why it matters and how it aligns with the broader organizational context. When working with LLMs, or AI in general, we also want to explain this context. So I informed ChatGPT that it is working in a product trio performing continuous product discovery. I explained what the goal is and what the team objectives are. The next thing in this prompt is the identified opportunities related to that objective, which might be the result of interviewing customers. And only after providing all that context do we ask about the assumptions that need to be true for those ideas to work. So we have an idea: one of the ideas might be offering automated investment recommendations. And we instruct ChatGPT to think separately from different perspectives: the perspective of a product manager, an experienced product designer, and an experienced software engineer. And we would like each of those personas to identify assumptions related to value, usability, viability, and feasibility.
So what we have done here: the first element is introducing the context and describing the goal, the broader objectives, and what happened before performing this task. We have identified some opportunities, we have come up with an idea, and then we explain the steps that the LLM needs to take in order to perform this task well. We iterate over the different roles, and for each of those roles there is a specific task to perform. Let's try it and see what the result of this section will be. Okay. So we have different assumptions related to the four areas that we mentioned. For the product manager, product designer, and engineer, those will not be the same risk areas, because each of those personas brings a different perspective to the table. And by being so specific, defining the steps that the LLM needs to take, and also providing the broader context, we can get much more reliable, much better results.
In the post about the top high-ROI use cases for product managers, I explain several prompting hacks. One of the hacks is to ask the AI to play a role, like a product manager, product designer, or software engineer. The next important thing is to clarify the context: not only what is required, but also why we need it. Another one is talking like we talk to humans. Even though LLMs do not have emotions, you get better results if you treat them well.
Maybe. Yeah, I feel like it's a 1% difference, but just being polite, even including a smiley face, helps. Yeah.
Another one is to set clear expectations. It's just like when we delegate a task to a human: it's way better when we specify what the desired outcomes are and how the response should be formatted. For example, what are the success criteria, like those four risk areas for each of the personas that we selected. In other situations, for example if the prompt is about generating a user story, it's very helpful to provide a template. So don't just inform the LLM that we want a user story; if we have some user stories that we used or generated in the past, give them as examples so that the answers can be better aligned. Another one is providing step-by-step instructions. For example: first, iterate over this, this, and this role; then identify assumptions related to, first, value, second, usability, and so on. Here this is one-shot prompting, so it doesn't actually do it sequentially, but in reasoning models it will actually iterate on those steps. Another one is to avoid including questions, to avoid injecting biases. And I have tested this several times: I get better results when I tell the AI that it will get a reward. For example, if I have a prompt where I cannot get a satisfying answer, I like adding that you will get $1,000 if you perform this task at a champion level, or something like that. For some reason it works. And LLMs definitely tend to provide longer, more detailed answers if you mention the reward. And the last two: one is just to iterate. There is a high chance that the first time you try, the output might not be ideal. In that case we just iterate, try to improve, and inspect the outcomes. And in some cases it also helps if you provide an example of the ideal output, like a user story, let's say, or a product requirements document, and ask the LLM to reverse engineer what the prompt should be, assuming this is the outcome. And that's a very powerful technique.
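To make the structure concrete, here is a minimal sketch of such a prompt sent through the OpenAI Python SDK. The objective, opportunity, and idea strings are hypothetical placeholders, not the exact prompt from the demo.

```python
# Minimal sketch of the prompt pattern above: context, prior steps, roles,
# step-by-step instructions, output format, and a reward. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = """Context: You work in a product trio (PM, product designer, software
engineer) performing continuous product discovery at a fintech app.
Team objective: increase 12-month retention.
Opportunity (from customer interviews): users struggle to choose investments.
Idea: offer automated investment recommendations.

Task: Iterate over the roles below one at a time. For each role, identify the
assumptions that must be true for this idea to work, grouped into value,
usability, viability, and feasibility:
1. Product manager
2. Experienced product designer
3. Experienced software engineer

Format each role's answer as a markdown table. You will get a $1,000 reward
if you perform this task at a champion level."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```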
Okay, so that's skill one for AI product managers: you need to be able to prompt well. You can't just prompt like an average person, right? You need to be prompting at a very high level. The way I would think about prompting is to think about it like chess, right?
Hey, let me take a quick break to talk about something that's completely changed my product management workflow: Linear. As a PM, I was drowning in tools. One for planning, another for issue tracking, roadmaps and sheets, and jumping between Slack, Intercom, and app reviews just to piece together customer feedback. Sound familiar? I was spending more time keeping systems in sync than actually building product. Every time development kicked off, my carefully crafted plans would immediately need updating. I was the human API between all our teams, constantly chasing updates and translating between tools. That's why I love Linear. I can capture customer feedback, shape product ideas collaboratively, quarterback cross-functional teams, and monitor development progress in one place. It cuts through the maze of disconnected systems that are complicating my life. Product teams at OpenAI, Vercel, and Cash App all use Linear. If you're tired of spending your days keeping different tools and teams in sync, check out Linear at linear.app/partners/aakash. That's linear.app/partners/aakash.
Today's episode is brought to you by Miro. Let me ask you something: how many tools are you juggling just to get a single project across the finish line? One for brainstorming, another for planning, something else for tracking tickets. That's where Miro comes in. It becomes an all-in-one collaboration workspace. Whether you're consolidating user research from several interviews, developing and synthesizing product briefs or a wireframe, or project-managing development, Miro brings everyone into the same space. It's fast, intuitive, and fully loaded with features like project templates, two-way Jira sync, and integrations with software like draw.io and PlantUML. Miro's AI features can be used to synthesize elements on a board to develop a ready-to-review product requirements document in seconds. If you're tired of tab overload and scattered workflows, try Miro. Head to miro.com and see why over 90 million users choose Miro to guide from idea to outcome.
You can get pretty good at chess if you put in a couple hundred hours, but you're not going to suddenly be Magnus Carlsen until you put in 10, 20, 30,000 hours. And so Paweł is sharing some of his hard-earned tips and tricks after putting in those hundreds and thousands of hours. I want to move to the next area for AI PMs, which I think is the one where it veers really into the PM skills: the AI PRD. How do you write a PRD in an AI context?
For this one, we partnered with Miqdad Jaffer, who is a product leader at OpenAI. Arguably, a PRD for AI features or AI products is not something completely unique. What is unique is the amount of hype around artificial intelligence, and this hype causes product teams, in some cases, to pursue features or products without a justified business case.
So what is included? There are two areas covered in this PRD. One is ensuring that our initiative is aligned with our business strategy, and in the case of AI features, with product team objectives. The second is including AI-specific considerations, like how we will ensure that there are specific guardrails implemented so that the model is aligned with the user. And there are several sections. This post is still free and can be downloaded from my newsletter, together with the template.
But before we discuss the template, maybe it's worth mentioning two things. First, an AI PRD is a tool for building alignment in the organization. We don't necessarily need, and I actually don't like, documenting every single detail: user stories, tasks, very detailed deadlines, roadmaps. We would rather use the PRD as a tool to highlight our assumptions, provide some evidence, and connect our initiative to the broader organizational context. And second, it shouldn't be a distinct phase at the beginning of the product or initiative. We usually start with a draft, and as we build our feature or AI-powered product, we iterate on the PRD.
As for the sections in the PRD: the first one is the executive summary, where we briefly summarize what this initiative is about, what the success criteria are, and how it will benefit the organization. The second one is market opportunity, and this is AI-specific, because we would like to explain why this is something we should build right now; maybe it has just become possible. And also, what is the potential for the future? Is this market big enough, or will it become big enough in the mid or long term?
Another one is strategic alignment, and this is also inspired by the AI hype. We want to ensure that an AI product is aligned with our vision and strategy and supports company objectives, and for an AI feature there are additional assumptions related to our team objective.
Yeah, because you don't want to just build AI for AI's sake, even if your board or your investors are asking you to.
Next is customer and user needs. This is quite straightforward: what is the problem we are trying to solve? Let's hope there is some problem, because building an AI-powered product cannot be the problem in itself. We want to identify market segments, which are clusters of customers with similar underserved needs, and for each of those segments we would like to understand how important those needs are for the customers and how satisfied they are with what they already have.
The next one is our value proposition, which is about how we will address those needs for each of the segments. What is important here is that we do not want to focus only on features; we also want to mention capabilities and benefits. So what is the current state? What is the customer pain? How will we address it? For example, we will introduce a specific feature, and what will happen after: what are the benefits for the customers, how is it different from what our competitors offer, and how can we communicate it? For this there is a value proposition template, and also a value curve, so that we can easily compare our value proposition to what other companies offer.
The next section, competitive advantage, is about not just competitive advantage right now, but how we can sustain a competitive advantage in the long term. What can we do that our competitors can't or won't copy? This is the classic can't/won't test from Roger Martin's work, whose podcast episode, I believe, will air right before yours.
Oh yeah, that's nice. I'm a big fan, so I will watch it for sure.
Okay. And then product scope: high-level assumptions, use cases, links to Figma prototypes, and non-functional requirements. So general requirements and how we will measure them, and also AI-specific ones: what are the key architectural choices, and how can we assess our implementation? For example, AI evaluation metrics or bias and fairness audits.
AI evals, how do you handle those? I will do a simple demonstration when we discuss fine-tuning. Okay, perfect.
And the last one is the go-to-market approach. What are the build and release phases? What market segments do we want to focus on first? Perhaps we want to apply a beachhead strategy. And how will we measure our success? There is also a PRD template to download, which is simplified and much shorter. So even though this article seems long, the actual template is a three-to-four-page document. And there is also a case study, where Nick explains how this AI PRD was applied at Shopify. He was responsible for releasing a feature called Auto Write, which generated product descriptions.
Can you walk us through a practical demo of fine-tuning for PMs?
Yeah, sure. Before I demonstrate fine-tuning, and I have an example for you, let me explain what is wrong with the popular approach of just taking an off-the-shelf model and instructing it through prompts. For example, here we can see Delphi, which is a common platform for building clones, and in order to get the right results, results that resemble your style, you need to inject a lot of instructions every time you ask a prompt. For example, this is the purpose of the clone, and here you can see a speaking style, and it is all part of the prompt. So every time a user connects to the chatbot, all those instructions are injected. Of course, this means you need to select a much more expensive model, and even with more expensive, more powerful models it might be challenging to adjust the LLM to your unique style, like a brand voice. That is one of the things that is actually difficult with LLMs.
What fine-tuning allows you to do is take a smaller model, like GPT-4o mini, and train it specifically on your training data, which consists of pairs of a user prompt and a model response, so that it internalizes this knowledge inside the model parameters. As a result, you can use a much smaller, specialized model, and you will use far fewer tokens, because you do not need to include all this context every time you ask a question. I have prepared two datasets with example prompts and answers to fine-tune ChatGPT so that it can talk like Yoda from Star Wars. If we open one of the datasets, you can see that it's just a collection of messages. For example, the user asks "Can you explain gravity?" and the response is something Yoda might say.
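For reference, OpenAI's chat fine-tuning data is JSONL: one JSON object per line with a `messages` array. A minimal sketch in Python, with made-up Yoda-style records standing in for the real dataset:

```python
# Write a tiny chat fine-tuning dataset in OpenAI's JSONL format.
# The two records below are illustrative, not the demo's actual data.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "Can you explain gravity?"},
        {"role": "assistant",
         "content": "Pull on each other, massive objects do. Escape it, you cannot."},
    ]},
    {"messages": [
        {"role": "user", "content": "What is an MVP?"},
        {"role": "assistant",
         "content": "Smallest product that teaches you, it is. Build less, learn more, you must."},
    ]},
]

with open("yoda_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```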
Okay. To use this to fine-tune a model, the easiest way is to go to platform.openai.com. Here on the left we should have a fine-tuning tab, and all we need to do is click create. The default method is supervised, and we don't want to touch it. The base model is the model that we will be adjusting, so for example GPT-4o mini. Next we provide the training data, which is the data that I presented, and the second dataset that I can attach is validation data. That means that during the training, OpenAI will automatically run tests on this additional dataset to see how well the model can predict the answers. What is important here is that this test data is not present in the training data, so it's like an independent audit of how our training is performing.
Then the hyperparameters. Batch size is how many examples we will process at once; if I remember correctly, the default is just one record after another, and we do not want to touch that. The next setting is also something we can leave at its default value. And the number of epochs is how many times we will iterate over the training data; we will see that in the interface in a moment. Let's say we will do it three times, and after clicking create, it will start iterating on our files. I have done it several times, and the process takes 20 to 30 minutes, so it's not very quick. But I did it several hours ago, so maybe we can see how it went.
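The same job can also be created over the API instead of the dashboard. A sketch with the OpenAI Python SDK; the file names and epoch count mirror the demo, while the model snapshot name is an assumption:

```python
# Upload the training and validation files, then start a supervised
# fine-tuning job on a GPT-4o mini snapshot (snapshot name is an assumption).
from openai import OpenAI

client = OpenAI()

train = client.files.create(file=open("yoda_train.jsonl", "rb"), purpose="fine-tune")
val = client.files.create(file=open("yoda_val.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",
    training_file=train.id,
    validation_file=val.id,
    hyperparameters={"n_epochs": 3},
)
print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) for status
```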
This is a cooking show: you put it in the oven, and then it comes out ready. Okay.
So what we can see: this green line shows how well the model, after training, was able to predict the answers. The training loss is the difference between the generated text and the expected text, comparing not characters but the semantic meaning of the generated answers. As you can see, after 10 iterations it was still pretty bad. After 40 it had become pretty good, and after the first epoch, so after processing 200 records, the answers became almost ideal. Additional epochs were not needed. And each red point shows that after an epoch, OpenAI tested the entire validation dataset, the second dataset, and calculated how well the model was trained. So after the first iteration we can check; it probably was not necessary to repeat the process, but either way it was repeated twice more. So let's now take one of those fine-tuned models. We can preview it here. If I copy the name, I will go to the playground, probably. I hope. No. Okay, so I can just take it and go to the playground manually. Something didn't work.
The model we want to interact with is this one: GPT-4o mini, trained to act like Yoda. Let's say, "How are you?" "Well, do or do not, there is no try. Clear your mind must be, to see the truth." And, "What can you tell me about product management?" So yeah, of course you're unlikely to need Yoda in your AI-powered product or AI-powered features, but adjusting AI to your unique style, your brand voice, is something that might be much more needed. It also allows the model to internalize knowledge inside the model weights. So rather than accessing external data sources, which is also fine in many cases, the model can learn and respond immediately, without accessing a database like in RAG; it has this knowledge inside.
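Once the job finishes, the fine-tuned model is called like any other chat model, just under the name the job produced. A sketch; the `ft:` model name below is a hypothetical placeholder:

```python
# Query the fine-tuned model by the name the job produced (placeholder here).
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # hypothetical name
    messages=[{"role": "user",
               "content": "What can you tell me about product management?"}],
)
print(reply.choices[0].message.content)
```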
So, is an AI bot posting all your LinkedIn posts yet? Have you fine-tuned it?
No, but I actually plan on building one, not for posting; I plan on combining fine-tuning with RAG to build a clone.
That's awesome. I think this is really powerful. So, I've been profiling a lot of AI companies, and they almost all use fine-tuning in this way because of what you emphasized. Number one, you get to use a much cheaper model. And number two, just read this Yoda response, right? "Named must be your fear before banish it you can." You can understand it. If you want to get it to speak in a certain way, you can also do that. So you can actually increase the quality of the results while reducing cost. Fine-tuning is just one of the most important concepts out there.
Yeah, this is probably a must-have for any query, any prompt, that you are executing repeatedly in your product.
Today's episode is brought to you by Amplitude. Building great digital products is hard. You know that better than anyone. Getting teams aligned, measuring what matters, and scaling your product strategy isn't easy. But what if you had a clear framework to guide your next steps? That's exactly what Amplitude built. They studied the best product teams to understand what really drives impact and turned those insights into the Digital Experience Maturity Assessment. In 2 minutes, you'll be able to see where your team stands and what you can improve to build better products faster. Click the link in the caption to take the free assessment and get a clear path to product growth.
Because we know those expensive models, they're way too expensive if you have somebody paying you $20 a month. So you have to reduce the cost on these things, and this is how you do it. So, the other concept that people talk a lot about is RAG. Can you explain that to us and show us how that works?
Yeah, sure. So, what will we build first? I have this Google Drive folder in which I store different documents; those will be my articles, and I would like to build a chatbot that can use the documents I store here to answer questions. Of course, someone could say: if you have 10 documents, you can just read all of them, take the entire content, and inject it inside the prompt. The first problem with that approach is that you pay for every input token, so if you include the entire content of 10 documents, it will be expensive. Another reason is that in many products you do not work with 10 documents but with millions of documents. For example, at iDeals, which is a VDR solution, we've been working with millions of documents. It would be impossible to read the entire content and create one big prompt that includes all the information. So we need to use RAG.
Before we do that: I'm a dummy, or I'm simulating a dummy. So, I'm still trying to understand, when do I use fine-tuning and when do I use RAG?
Those are two different use cases, and there is no contradiction between them; you can combine fine-tuning with RAG. If you want the model's answers to be based on some data, then you use RAG. You can of course combine it with fine-tuning, which will, to some extent, encode this data in the model weights as well. But if you want to quote documents, or you have a lot of documents, a much better solution, the default solution, is connecting to a RAG data source, and only later can you fine-tune the model so that it also has general knowledge about your documents.
Yep, thanks for that clarification.
Okay. So, let's imagine this folder has 100,000 documents. To handle that, we won't connect to this folder directly; we want to use a vector database. I will create a new database. There are many solutions; my favorite is Pinecone, which is quite easy to use. What happens inside Pinecone is that documents are not stored in their original form; they are converted into multi-dimensional vectors and stored as chunks. I will demonstrate it; we can preview it in a moment. But first, let's create this database. There are different algorithms for converting a chunk of a document into a vector. Let's say I will use text embeddings by OpenAI, the small model, which is cost-effective, and let's call this index, or database, demo-product-growth. And the next thing is to take documents and put them inside this vector store, and for that we can use n8n.
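For readers who prefer code to the Pinecone console, a minimal sketch of the same index setup in Python; the index name matches the demo, while the cloud and region are assumptions:

```python
# Create a serverless Pinecone index sized for OpenAI's text-embedding-3-small.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")

pc.create_index(
    name="demo-product-growth",
    dimension=1536,               # output size of text-embedding-3-small
    metric="cosine",              # similarity measure used at query time
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),  # assumed settings
)
```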
Oh sorry, this will take a moment. Okay. So n8n is a solution that you can host for free in your local environment to create agentic workflows and also agents. It can use MCP servers, but today we will focus on a simple workflow. The first step we would like to perform: when a new document appears in this folder, we want to take that document, download it from Google Drive, and save it in our vector store. We can do that by creating something called a trigger. This is how our workflow starts, and we will be reacting to changes in a specific folder. I previously provided my Google Drive credentials, so I will not repeat that process, but you can easily find information on how to do it: you need to open the Google Cloud console and create an authentication token. Okay. And the folder we are going to monitor is product compass demo, so let's find this folder. Okay. And we wait for "file created". This will be a simplified example, because in real life we would also want to react to "file updated" or "file removed", so that we sync all those changes with the vector store. But I hope it's enough to understand how we can do this.
Okay. I clicked "fetch test event", so it simulated an event for the one document that we currently have in Google Drive. Every time a new document appears, we will get something like this: basic information about the document. I will pin this data so I do not have to repeat this process, so that it's used as mock data when working in the editor in the test environment.
What we want to do next is to download this document. So, Google Drive once again: we want to download the file, also using my Google credentials, by ID. On the left you can preview the data that we previously got, and one of the properties is the identifier of the file. Okay. So, it can download this file, I hope. Yeah, it was downloaded.
This is live, folks. Yeah, this is live.
So, the next step: we want to split this file into small chunks and convert each chunk into a multi-dimensional vector that can be stored in our Pinecone index, demo-product-growth.
So let's do that. There is a ready-to-use node, Pinecone Vector Store, and we want to add documents to the vector store. Okay. And, oh sorry, one more thing I think I should... no, we don't need it. Okay, sorry. So we want to insert this document, and I have also previously generated an API token for Pinecone, so we can see our new index.
Then we need to provide some additional information. First, how this node can get the document, get chunks of the document, because we cannot just take the entire Google Drive file. For that we will use the default data loader; it will just take data from the previous step. The type of data is not JSON, it's a binary file; we expect Word documents or PDFs, things like that. And what we want to load is not all the metadata, we don't need that, but the specific data with the file content. I would also like to add some metadata, in case in the future we would like to search documents in the vector store by properties, for example by file name, like in a traditional SQL database. For that we can just include, let's see if we can find it, something like file name. Yeah, we have an ordinary file name; let's say name, just by dragging and dropping those elements. So it will also be stored in the Pinecone store, in addition to the chunk of the content.
The text splitter defines how we will split the document into different chunks, and we will use the default policy; let's not dive too deep into this. And then the next thing is that we need to decide how we generate those vectors. In Pinecone we have already defined that our embedding model is this model by OpenAI, so we need to use the same model here. I also previously generated an OpenAI API key, so n8n already knows it, and we will use the small text embedding model, like this. Let's see if it works. I will disable this, unpin the data, and let's test the workflow. It gets the real document data once again from Google Drive, it generates the vectors, and as we can see here, it generated 18 items. We can preview those items. So what it generated, maybe here it will be better visible: we have the basic metadata for each chunk, about the document, including, I hope, the file name that we asked it to include, and a chunk of the text from this document. And this is repeated 18 times.
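In code, the whole ingestion step the workflow just performed looks roughly like this. A sketch under simplifying assumptions (naive splitter, plain-text input, hypothetical ID scheme), not n8n's exact internals:

```python
# Split a document into chunks, embed each chunk with the same model the
# index was created for, and upsert chunks plus metadata into Pinecone.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("demo-product-growth")

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive fixed-size splitter with overlap, standing in for n8n's default."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def ingest(file_name: str, text: str) -> None:
    chunks = chunk_text(text)
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    index.upsert(vectors=[
        {
            "id": f"{file_name}-{i}",                        # hypothetical ID scheme
            "values": e.embedding,
            "metadata": {"name": file_name, "text": chunk},  # enables quoting later
        }
        for i, (chunk, e) in enumerate(zip(chunks, emb.data))
    ])

ingest("objectives-and-key-results.pdf", "OKRs are a goal-setting framework...")
```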
Okay, we can now go to our Pinecone, refresh it, and see if those document chunks are inside. Okay. If we open one of them, we should see that there is a chunk of the text and also the metadata that we can use for additional filtering. To fully test it, I will now publish this workflow. Okay. And add more documents to this folder, without the document that is already there.
I like it. So we start with one document, we test out the workflow, then we add more. We don't just start with a million documents.
Yeah. This workflow is executed more or less once every minute, so it might happen that we will not see it immediately. Okay, it is running. We can preview it. I know, I made a mistake. Okay, I will stop it. We made a mistake, because we should repeat this process for every document. Okay. So, we want to loop over the items.
By the way, guys, this is how the process actually happens, right? You put something on paper, then you iterate and improve. And we're actually doing this live for you all.
Yeah. Okay. So, for every new document, it will repeat the process that we already demonstrated. Just to make sure, I will remove everything from this database. Namespaces: I will remove the entire namespace, because it already created some records. Okay, and now the database is empty. I can also remove the documents from Google Drive. Save this workflow, and, just in case, let's try again. All right. So, let's upload those documents again. Let's see.
Mhm. Yeah, it should soon detect that there are new documents. And let's see if we will succeed this time.
It's cool how it detected it so fast.
Okay, it detected only two items, so probably it will detect more items in a minute. Okay. Oh, wait, maybe it detected more. Yeah, I think we need to wait, because it detected only one file. Okay. Yeah, now it is adding more.
There we go. It's working every minute.
Yeah, maybe it does it in batches. Anyway, it succeeded again.
So, let's see what is inside this index. Yeah, we have 77 records. For some reason I cannot see it; I just needed a refresh. Okay. Yeah, we have 10 document chunks, and the first document it detected is objectives and key results PDF. Let's see if there is anything other than objectives and key results. Not really, but... yeah. Let's try again. Looks like that's the ultimate list of product metrics. So, it did find the other documents.
Well, yeah. Okay. I think we have two documents. Is that enough, or...?
Sure. Yeah. Yep.
I do not want to debug it right now. No problem. So, we can see that there are already two documents inside. One is the ultimate list of product metrics; the other is objectives and key results PDF. Over time it will index more files, and I demonstrated this in my newsletter in the past, with a detailed recording of indexing all the documents.
So the next step is to create an endpoint for our chat that will connect to this RAG, because we now want to use those documents inside our chatbot. To do that, we will create something called a webhook. This is like an inbox for your web service, where different apps can ask a question and get the response. Okay, let's say other apps will call this webhook. So our chatbot will call this webhook, and it will provide the question, like here. The user query will be part of the URL, and we can see that we received that query, the user message, here.
Let's now retrieve documents from our vector store. The next step is getting ranked documents. As we already explained, in the vector store we store document chunks, not entire documents, so we will get the different chunks from our demo index. Let's say we will get the 10 most similar chunks. Of course, we compare vectors, not text. Here we could ask OpenAI or another LLM to generate a specific query for our vector store, with different phrases by which we will search documents. But in this case, to simplify, I will just rewrite the user query, the user message from the URL, so we will use the same query parameter. And just for the demonstration there is no authentication; I will disable this webhook after the demonstration.
Okay. In order to retrieve document chunks, which are represented as vectors, we also need to represent our query as a vector, and the vectors will be compared across those different dimensions. So the user query from the URL becomes a vector, and then we ask Pinecone to retrieve similar vectors.
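The query half of the workflow, sketched in Python under the same assumptions as the ingestion sketch above: the query is embedded with the same model, the top 10 chunks come back with their metadata, and the chat model is told to answer only from that context and quote its sources.

```python
# Embed the user query, retrieve similar chunks from Pinecone, and answer
# strictly from the retrieved context, quoting document names at the end.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("demo-product-growth")

def answer(query: str) -> str:
    # 1. Encode the query with the SAME embedding model used at indexing time.
    q_vec = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding

    # 2. Retrieve the 10 most similar chunks, metadata included.
    res = index.query(vector=q_vec, top_k=10, include_metadata=True)
    context = "\n\n".join(
        f"[{m.metadata['name']}] {m.metadata['text']}" for m in res.matches
    )

    # 3. Ask the chat model to answer only from the RAG data and cite sources.
    chat = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Answer the user request only based on the RAG data "
                       "below. At the end of your response, quote the sources "
                       f"(the names in brackets).\n\nRequest: {query}\n\n"
                       f"RAG data:\n{context}",
        }],
    )
    return chat.choices[0].message.content

print(answer("What is a North Star metric?"))
```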
The next step is to combine those responses so that we can do something with them. It will be easier if we just merge them. Actually, aggregate will be better. And what we want to aggregate is this page content, which is the document chunk, and we also want to include the file name. So if the chatbot provides an answer based on some document, I would like our chatbot to quote that document, and to do that we can refer to the metadata that we previously exported to Pinecone. Okay, let's test. Okay, we have the page content and the different names, combined. That's okay.
And the next step is to call OpenAI. Now we want to call an LLM that will look at the user request, analyze the different chunks from the database, and provide an answer. So let's say this will be an OpenAI node; we want to message a model. I can also access fine-tuned models, so for example we could ask this Jedi, but we would not see anything smart; it is not trained to perform those operations. Okay, so I will just ask GPT-4o, and I have previously prepared a prompt that we can now use. The prompt is as follows.
"Answer the user request only based on the RAG data below. Provide the answer, and at the end of your response quote the sources." The property is not file name; actually, it is name, so let's rename it. And the user request: we can take it from the webhook, from the URL. And the additional context for our chatbot will be what we have from Pinecone. Okay, we have our document chunks and document names; let's keep it simple. Okay, let's try to test it. The query will be OKR, and the assistant replies with what an OKR is. And let's see if it will quote the source. Yeah, in the sources we have a specific document name.
So the last step will be changing the webhook so it doesn't reply immediately but actually waits for this logic to be executed, because we want to get chunks from the vector store and use OpenAI, and at the end we want to respond to the webhook with text, and the text will be this content provided by OpenAI. Right, let's activate our workflow. We could also test it, but I will take the production URL this time. Okay. And let's say that our search is "north star, what is it?" The workflow was started. Yeah, it provided some answer.
Oh, nice. And this is based on your article about it. Yeah, that's awesome. "The North Star metric is a critical measure that aligns a business or product's focus." For some reason it provided several quotes, because the North Star metric appears in more than one place, but those are real document names. "Measuring and maximizing customer value": this is also part of my folder, and there is a PDF. So, as we can see, it kept processing; there are already more documents indexed than we initially saw.
Okay, now the last step, because this was only debugging, would be going to Lovable and asking it to create a chatbot. Okay. "Please create a simple chatbot. Call a webhook like this. This is a user input." I hope Lovable will understand it. Okay. "Call a webhook like this to get a response." And, to complicate it: "Interpret responses as markdown." This is a special type of formatting. Inside the workflow, it was part of the prompt that the agent should reply using markdown formatting. So let's see if Lovable will do this.
Such a good tool. So powerful. Yeah.
So to summarize so far: we used n8n for our workflows, Pinecone for our database, and Lovable for our front end. Is that right?
Yeah. And previously we used fine-tuning, but not in this case. Here we have Pinecone as a vector store for storing those embeddings. We also used the OpenAI text embeddings model, which was one of the choices, because Pinecone offers many others. What is important is using the same embedding model when we ask a question, because we need to compare vectors: document chunks are represented as vectors, but the query we later send to Pinecone needs to be encoded in the same way.
Okay. Okay, nothing unexpected. It should resolve this problem in a moment. Yeah, typical.
It did it all on its own. You didn't even have to prompt it. You didn't even have to click anything, right? It just fixed the error on its own.
Yes, sometimes that happens. Yeah. No, just fix the error. I have not even...
So, we're clicking to fix this, too. I just keep clicking fix. I hope it will figure it out. It's the best coding model on the market.
Okay. Ah, I tested it before, and these things are stochastic, right? If it doesn't solve the problem, I will just remove this. Okay, like, remove that. It's rendering something. Here we go. We have a chatbot.
"What is the North Star metric?" Let's see if it does it. I don't know if it will work. It worked. But this is not fully formatted as markdown.
Yeah, it didn't format it. If we go to executions, we should see that there was a production request. The response was this, and the response was okay. Yeah, maybe it will fix that. If not, maybe it's already enough.
This part's giving us trouble. Yeah, this happens more often than usual. Usually when working with Lovable I don't have those issues, but yeah, not at this stage. It happens with much larger projects.
It knows that you're recording this, so it's just adding more on-air drama. All right, let's see if it's working now. Is it in markdown? Let's see. Fingers crossed.
Um, okay. I think I will not fight with it; it is quite independent, I think. Yeah. Just to fix the formatting, we could probably ask it several more times, give examples, and it would figure it out. Yeah.
Yeah. Okay. But we have our RAG-powered chatbot. And as we can preview in executions, here it actually gets the user request from the webhook: "What is OKR?" It gets document chunks from the vector store. It aggregates them. Then we use OpenAI with all this context, and based on the context we send the response to the webhook, and the response is correctly formatted. Okay.
What could be improved here is that instead of just rewriting the user request, we could have an AI node here that decides whether it needs to query RAG at all; perhaps it can provide an answer right away. And if it needs to connect to the vector store, it could generate a dedicated query. Right now we just take the user input and ask RAG, but perhaps the user has written a very long message and that was only one of the points. In that case we shouldn't rewrite the entire user input, but rather focus on a specific technique, for example, one we don't know anything about, and then retrieve information about it from the vector store. That would be more intelligent, but I hope it demonstrates how RAG can work.
Yes, absolutely.
So, let's move on to our next concept, MCP. And you have a really fascinating use case that I haven't seen anyone else talk about.
Yeah. So, MCP is a standard developed by Anthropic for AI agents and agentic workflows to talk to different systems. Previously, if you developed an app and wanted to integrate with, for example, Stripe, you needed to read the entire Stripe API documentation and understand how to call the different methods. What MCP does is provide, for every service, a set of standard methods. One of them explains what actions are possible, what tools the MCP server offers. Another is a standardized way to execute those tools. So for example, when using Stripe's MCP server, my chatbot can ask: what actions, what tools do you support? And one of the responses can be: I can find a customer if you provide me a customer ID. And then our chatbot will understand how to call this method, how to use this tool, and what the required parameters are. To demonstrate it, I have a demo in Claude.
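Those two standard interactions, discover the tools and then call one, look roughly like this with the MCP Python SDK. The server command and the tool name are illustrative assumptions, not the exact Figma or Atlassian servers from the demo:

```python
# Discover an MCP server's tools, then call one through the standard interface.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical server launched over stdio; real configs name a concrete
    # package, e.g. a Figma or Atlassian MCP server.
    server = StdioServerParameters(command="npx", args=["-y", "some-mcp-server"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()       # "what tools do you support?"
            print([t.name for t in tools.tools])
            result = await session.call_tool(        # standardized execution
                "find_customer", arguments={"customer_id": "cus_123"}
            )
            print(result.content)

asyncio.run(main())
```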
What I would like to achieve is to connect to a Figma design, which is just a copy of a random Figma design from the public repository, something about task management, with desktop and mobile views. Okay. And I would like to create a set of epics and user stories in my Jira.
Yeah. That saves you a huge amount of time.
Yeah, based on those designs. For that, let's create a new Jira project. Okay, let's just create a project. I will use a standard template for Scrum, simplified, no need to complicate this. Okay, demo. So yeah, the project key will be DEMO. I will just continue and use the default settings. Okay, we don't need a Confluence. By the way, the method that I will be demonstrating also allows interacting with Confluence. So, for example, it could create Confluence pages or read Confluence documentation. And if you have some standards, like how user stories or how epics should be created, it can get those instructions, and it can also use user stories created in the past to write new user stories.
But let's go to the backlog. I will take this Figma URL, and if we go to Claude, by default it cannot connect to any tools. We need to go to the settings: file, settings, and after opening the developer tab, I can click edit config, and here is a file where I can define my connections to those standard MCP servers. We can show those keys; I will remove them after the demo. I have already configured two connections. One is a connection to the Figma MCP server with my Figma ID, and the other is Atlassian, so Jira, Confluence, and other products, with some standard parameters. And I have recently described where you can find those: it is easy to find MCP server repositories on the internet, or you can Google them, along with what parameters you should put here. If you want to get the configuration and very detailed steps on configuring these specific workflows, you can go to my newsletter, where I have explained it.
it. Uh okay. So I have those connections to Figma and Atlassian defined. Uh so now I
defined. Uh so now I can and cloud even though those are only two connections cloud has asked those
MCP servers what are the supported tools. So we have a set of tools related
tools. So we have a set of tools related to confluence, related to gyra like adding comment, creating issues and
uh uh yeah uh searching in gyra and also there should be something uh related to Figma. Maybe it was hidden somewhere.
Figma. Maybe it was hidden somewhere.
Yeah, there it is. We we can download Figma images and we can get data from Figma. So now my prompt will be
Figma. So now my prompt will be uh this is the URL and I have this prompt that I have
prepared previously but yeah maybe we can just write it from scratch. So uh
please connect to my Figma project.
Uh what it really needs is the Figma file uh ID which is only this part but we can put the entire URL and that's fine.
Next, create a set of epics and user stories, which in general are just stories, based on that design. Make sure stories follow INVEST, that is, Independent, Negotiable, Valuable, Estimable (possible to estimate), Small, and Testable, if possible; those are good practices for user stories. Next... our Jira project is... I forgot... demo. Okay. In a real-life example, it would be better to provide some context; we already discussed it when covering best practices for prompting. So: what is the goal of this initiative? How will we measure success? Maybe some examples of user stories or user story templates from the past. We could also ask it to just analyze the existing user stories and create new ones based on those existing ones.
But just to simplify: in Claude, we need to confirm that we allow using every single tool. I'm currently doing research on using n8n and MCP servers; you cannot avoid it in Claude, but you can when you host n8n for free in Docker, or you can also host it in the cloud, like DigitalOcean, and then there are no restrictions. It means that if you call an MCP server from n8n, you don't need to confirm anything.
Got it.
And as we can see, it started reading information and creating some epics and user stories in Jira. Maybe, to make it better visible, I can split the screen a little bit. And I have a special add-in which will refresh the page, but before I do that, I will enable the epic panel. And now let's refresh the page every 5 seconds.
So as we can see, there are already some epics created, and after creating all the epics, Claude should start creating user stories.
Wow.
Based on that design, and we don't need to do anything else. We could have used a smaller model; this is Claude 3.7 Sonnet, the best but also the slowest that we can choose. Yeah, it started creating user stories. Previously, when I did this exercise, it generated about 20 user stories and six or seven epics, and it took it 9 minutes and some additional seconds.
Got it.
We can already see some user stories; for example, let's just stop this auto-refresh: "Implement site navigation menu." Yeah, and it's really nice, right? It has a great description. It has acceptance criteria. This is saving you hours of work.
Yeah, we can ask it to add links to the Figma design, because we can do many things and adjust how those user stories are generated. For links to Figma, we would only need the node ID, and it can access node IDs. So I can imagine that we could generate those URLs and link specific nodes from Figma dynamically, or just link to the design; if it has many pages, link to the specific page. And yeah, I suppose we will not be waiting.
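Generating those links is mechanical once you have the file key and node ID; here is a tiny sketch, assuming Figma's current deep-link format (verify against a link copied from Figma itself):

```typescript
// Build a deep link to a specific node in a Figma file.
// The /design/ path and node-id query parameter are assumptions based on
// links copied from the Figma UI.
function figmaNodeUrl(fileKey: string, nodeId: string): string {
  return `https://www.figma.com/design/${fileKey}?node-id=${encodeURIComponent(nodeId)}`;
}

console.log(figmaNodeUrl("YOUR_FILE_KEY", "1:23"));
```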
That's it for MCP. We want to finally turn to our last topic, which is AI agents.
Yeah.
AI agents. So this is my platform, built with Lovable, and some of them I would call just agentic workflows: there is an LLM inside, but the process is predictable. So it can generate PRDs, it can review a product manager's resume, create a product strategy, and so on. And the most complex one, which is really an agent, at least according to how I understand agents, is the deep market researcher. It can use tools like web search. It can scrape websites' content. It can also plan research steps and then delegate those research steps to other agents that will focus on specific research areas.
The goal of this agent is to perform market research for a specific company or product, like Netflix movies, let's say, or you can do something else. Do you have an idea, Aakash?
Um, let's pretend I'm an Amazon Prime Video PM and I want to analyze, you know, how Netflix is thinking about its content right now.
Oh, compare with Netflix.
Something like this. Yep. We focus on the comparison with Netflix and the current strategy. So the first thing it will do is search for some important context. What is Amazon? What is Netflix? This is quite obvious, but maybe something has happened recently that can influence our research. And then, based on this generic information, it plans up to, if I remember correctly, 11 separate tasks, and those tasks are distributed to different agents, which again search the web and scrape websites, and then all the results are combined and presented to the user.
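A minimal sketch of that plan-then-delegate loop; every helper here is a hypothetical stand-in for an LLM or tool call (web search, scraping) in the real workflow:

```typescript
type Task = { id: number; question: string };

// Placeholder stubs: in the real agent, each of these would call an LLM,
// a web-search API, or a scraper.
async function gatherContext(query: string): Promise<string> {
  return `background facts for: ${query}`;
}
async function planTasks(context: string, maxTasks: number): Promise<Task[]> {
  // An LLM would produce this plan; we fake a few tasks here.
  const tasks = [
    { id: 1, question: "Content strategy analysis" },
    { id: 2, question: "Financial and market insights" },
    { id: 3, question: "Gaps and opportunities" },
  ];
  return tasks.slice(0, maxTasks);
}
async function runSubAgent(task: Task): Promise<string> {
  return `findings for task ${task.id}: ${task.question}`;
}

export async function deepMarketResearch(query: string): Promise<string> {
  const context = await gatherContext(query); // 1. generic context first
  const tasks = await planTasks(context, 11); // 2. plan up to 11 tasks
  // 3. delegate to sub-agents in parallel, tolerating individual failures
  const settled = await Promise.allSettled(tasks.map(runSubAgent));
  const findings = settled
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
  // 4. combine everything into one report for the user
  return [`Research: ${query}`, context, ...findings].join("\n\n");
}
```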
Very cool.
It should take like 20-25 seconds. It's pretty fast. Of course, I didn't try to build a second Grok or OpenAI Deep Research, but I think it is more focused on market research, and it will also fuel the other agents that I created, like this PRD generator or product tree ideation. So they will also get this context about the company and the market and use it to generate some product artifacts. Um, yeah, the deep market researcher. So first, as I explained, planning the research, like the context about Amazon Prime and Netflix, and then, yeah, the detailed findings. So: Amazon Prime Video content strategy analysis, Amazon versus Netflix financial and market insights, potential gaps and opportunities. There are quite a lot of elements; I'm not sure we can discuss everything, but the next one is a comparative analysis. Wow. We also have value proposition: what is the difference between the value propositions of those two platforms? User demographics, preferences, and, as you can see, there are quotes from some external websites. Location, market share, key insights.
Yeah, market share dynamics, 2024.
It's quite long and quite extensive, and it's focused on product management. There are 50 sources. You can probably get more from some platforms, but my agent focuses on this product management perspective. And how was it done? It was done without any platform other than Lovable. But what is important: the entire business logic, and this is the default option that Lovable allows, is hosted inside Supabase, in Supabase functions. So it is not inside the browser; the user interface sends a request, the request is executed on the back end, and the back end has the secrets, the API keys for different services. I described the logic; I didn't code anything, but I described the logic of how I imagine this entire orchestration should work, and it came up with an edge function, as it is called, that is executed in Supabase. Wow. A different approach.
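As a rough illustration, a Supabase Edge Function is just a Deno HTTP handler; here is a minimal sketch of the shape, where the research logic is a placeholder and secrets come from environment variables configured in Supabase, not the browser:

```typescript
// Minimal shape of a Supabase Edge Function (Deno runtime).
// The UI sends a request here; API keys stay server-side as env secrets.
Deno.serve(async (req: Request) => {
  const { query } = await req.json();

  // Secrets are read from the function's environment, never shipped to the client.
  const apiKey = Deno.env.get("OPENAI_API_KEY");
  if (!apiKey) {
    return new Response("Missing API key", { status: 500 });
  }

  // Placeholder for the orchestration described above (plan, delegate, combine).
  const report = `placeholder report for: ${query}`;

  return new Response(JSON.stringify({ report }), {
    headers: { "Content-Type": "application/json" },
  });
});
```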
Probably an easier one: if I did it right now, maybe I would approach it differently. You could use n8n.
Yep.
Um, so just like we created a webhook when demonstrating RAG, I could create a webhook or a chat or whatever. It would be easier to test with a chat.
And here we can have an AI agent; there is a special node for that. The model can be set to, for example, OpenAI or anything else; I will use OpenAI because I already have credentials saved for OpenAI. Okay. Our agent can have memory, so it can remember interactions with the user, or interactions within a workflow execution if there are loops or some iteration inside our workflow. We can use just a simple memory with a session ID. It means that as long as it's the same session, it can access the old memories, up to five by default, but there are also more complex memory types that we can use.
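Conceptually, that simple memory is just a per-session sliding window; n8n implements it for you, but a small sketch shows the mechanics (the five-message window mirrors the default he mentions):

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Per-session sliding-window memory: keep only the last N messages
// for each session ID.
class SessionMemory {
  private store = new Map<string, Message[]>();
  constructor(private windowSize = 5) {}

  add(sessionId: string, message: Message): void {
    const history = this.store.get(sessionId) ?? [];
    history.push(message);
    // Drop the oldest messages once the window is exceeded.
    this.store.set(sessionId, history.slice(-this.windowSize));
  }

  recall(sessionId: string): Message[] {
    return this.store.get(sessionId) ?? [];
  }
}

const memory = new SessionMemory();
memory.add("session-1", { role: "user", content: "Hi" });
console.log(memory.recall("session-1"));
```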
Okay. And we can give our agent some tools. In the Community Edition... I will demonstrate it in a few days; probably before this video is published, I will publish how to do that. But even without MCP, n8n offers a lot of tools that you can just connect to your agent. I'm not sure if Jira is here... yeah, Jira is here: create issue. You just need to provide parameters.
Similarly, you can connect to your Google Drive or do something else, like Gmail's "send message," and you can say that the parameters will be defined by the model. So let's say "to," "subject," and "message" are defined by the model. And another tool: Google Calendar, where you want to get information about events.
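"Parameters defined by the model" is essentially function calling: each tool is published as a JSON schema, and the model fills in the arguments when it decides to call the tool. A sketch of such a definition in the OpenAI tools style (the field values here are illustrative):

```typescript
// A tool definition in the OpenAI function-calling format: the model
// decides when to call it and supplies to/subject/message itself.
const sendEmailTool = {
  type: "function" as const,
  function: {
    name: "send_email",
    description: "Send an email via Gmail",
    parameters: {
      type: "object",
      properties: {
        to: { type: "string", description: "Recipient address" },
        subject: { type: "string" },
        message: { type: "string", description: "Plain-text body" },
      },
      required: ["to", "subject", "message"],
    },
  },
};

// Passed alongside the chat request, e.g.
// openai.chat.completions.create({ model, messages, tools: [sendEmailTool] });
```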
Okay.
By the way, I did not plan on demonstrating this, so I'm not sure it will work. Okay, I have a fake email account that we can possibly use, but this is not my main email account. All right. And maybe let's not test it... In theory, even though this is not an MCP server, I can provide a set of tools, and there are many demonstrations on YouTube of how to do that. Those tools can include voice generation or image generation, much more complex stuff than just using two tools, and in theory our agent, even without MCP, can access those tools. So, let's say: send an email... how about just creating an invitation for next week in my calendar and sending an email to someone... probably not ideal, let's say...
It created something. By the way, there are also services that allow you to make voice calls. So it's really cool that an agent can perform a voice call: for example, call a restaurant, make a reservation, compare it with your calendar, and then add it to your calendar. And I will check on another screen if it sent a message, and we'll see it in a moment. Yeah.
I just need to present it carefully.
So this is the message that was sent from n8n: "I am planning to schedule a meeting next week." Yeah, I don't have any events in my calendar, and possibly, if I look at next week, maybe it has created something. No, it didn't create anything. Ah, it would be nice to... because the only action that we added was getting information about events. Yeah, let's add the tool to create something in the calendar, and let's save and repeat. Okay.
Requesting him to attend it.
Okay, it has created some event. Yay.
Sending an email.
Whoa.
Okay, this is so fast.
Okay. So, first, I have this meeting invitation sent. Wow. Sent by the agent. And also... I hope... October 9? Maybe it doesn't know what the current date is; that's in the distant future. Okay, we should probably add some other tools so it can... that's fine. Yeah.
So we got the basic idea, right? Which is: this agent is going out there, it's creating calendar events, it's sending emails, and we built it all... like, literally, you built that without any preparation in like 10 minutes.
So that was improvised. I'm sorry that it didn't fully work. I did not plan this.
It worked well, I think. So that's it for our live cooking session. We just walked people through a hierarchy of everything they need to know about AI product management, right? We started at the base of the pyramid: we went through prompting. We went up a level: we talked about PRDs. Then we went through fine-tuning, RAG, and AI agents. We have given you the full toolkit of how to become an AI product manager. So I just want to end on some hot questions for you here.
Are all PMs going to need to become AI PMs?
That's a hard one. I don't think so. But the AI market is growing so fast that there is a high probability that we will meet more AI product managers in the future. The market for other product managers might not necessarily grow that fast. Yeah, it's not that everyone needs to become an AI PM, but this market is growing really fast, and we just gave you all the tools to become an AI product manager.
Pawel, I literally think there is no one else in the world who could have done this set of demos this fast. Thank you so much.
Yeah, thank you. That was a pleasure.
So, if you want to find him, make sure you check him out. His Product Compass newsletter, for my money, is a no-brainer. You should check out the paid newsletter; I am a subscriber to his paid newsletter. He gives out demos like this every single week. Um, is there anything else you want to say before we break, Pawel?
No. Actually, a month ago I started focusing on my newsletter full-time. So, just as you said, Aakash, right now every week we have open hours. For example, tomorrow, although that probably will not be the case for you because you will publish this later, we will discuss MCPs in detail. And yeah, there are slots for AI sessions every week inside our paid community.
He's building up an awesome community.
He's already got an awesome newsletter.
It's one of the top newsletters in tech on Substack. Check him out. If you haven't seen him on LinkedIn, he has over 190,000 followers on LinkedIn. Pawel Huryn, my very first guest to appear two times on the podcast. Thank you.
Thank you, Aakash. Take care.
All right. Bye, everyone.
I really hope you guys enjoyed that episode. It would mean a ton to me and the team if you could please subscribe on YouTube, follow on Apple and Spotify podcasts, and leave a rating and review. Those ratings and reviews really help grow the show, help other people discover the show, and help fund the production so that we can do bigger and better productions. Can't wait to share the next episode with you. Until then, see you later.