
Master Prompt Engineering – Build Smarter AI Apps with Lovable!

By Lovable

Summary

## Key takeaways

- **Prompting is key for AI app success**: Effective prompt engineering is crucial for building AI apps, not only for initial development but also for troubleshooting errors and expediting the process. It's about providing enough hints for the model to predict your desired output, as models don't truly 'understand' in a human sense. [03:52], [04:50]
- **AI models have prompting biases**: AI models exhibit recency bias, paying more attention to the beginning and end of prompts, and struggle with 'lost in the middle' logic. This means prompts should be concise yet high-value per token to maximize effectiveness. [07:56], [08:18]
- **Use AI to engineer better prompts**: The most effective prompt engineers are often the language models themselves. You can leverage this by assigning the AI the role of a prompt engineer, asking it to create detailed, concise prompts for specific tasks, which can significantly improve your own prompting efficiency. [10:20], [11:26]
- **Reverse meta-prompting for error resolution**: When encountering persistent errors, use 'reverse meta-prompting' by asking the AI to summarize the issues and errors you've faced. This creates a more detailed and instructive prompt for future use, preventing you from starting from scratch and saving significant time. [14:30], [15:50]
- **Chat mode for iterative development**: For complex or uncharted development territories, utilize chat mode to iterate with the AI. This allows for a back-and-forth conversation to clarify requirements and ensure the AI understands your vision before implementing code, reducing errors and improving clarity. [19:25], [32:04]
- **Leverage webhooks for external integrations**: Webhooks act as 'ears' listening for data, enabling seamless integration between platforms like Lovable and automation tools like Make.com. This allows for automated workflows triggered by user input, facilitating complex data processing and response loops. [27:40], [32:34]

Topics Covered

  • Ask AI to write your prompts for optimal results.
  • Learn from AI errors to create perfect future prompts.
  • Foreshadow functionality and leverage chat mode for clarity.
  • Use browser developer tools to diagnose vague errors.
  • Make, N8N, or Edge Function: Pick the right integration tool.

Full Transcript

are we live can people hear you if you

hear us if you can do a type one or

something in the

chat give a couple more seconds for the

people

to log

in nice and also yeah feel free to drop

also where you're you're connecting

from cool all right uh looks like we are

live I'm seeing the first ones coming in

the Stream um so with the team at

lovable um you probably know what we're

doing but we basically want to enable

anyone to transform their ideas into

working software that that's sort of our

mission so instead instead of hiring or

needing needing to hire a software

engineer that is often costly and slow

you simply ask an AI in plain English

what you want and you get a working app

or website out of it um I'm joined here

by a number of people we'll do a quick

round of intros I'll start before I hand

it over my name is Christian I'm part of

the product team here at lovable um and

I have Nicholas with me Nicholas you

want to go

next yeah I'm a software engineer here

at lovable um doing all kinds of stuff

mostly focusing on front end right now

bringing out some nice better ux yeah

cool and we also have a very special

guest uh Mark Mark over to you yeah for

sure yeah my name is Mark um I run a

company called prompt advisors where we

help businesses use and generate um use

AI in their workflows and yeah I have a

YouTube channel where I go pretty mad

scientist on all things prompt

engineering and gen AI in general cool

and then we also have Stefan, manning and

producing the event, he's leading our

community today you probably know him

from from Discord um but yeah maybe

before we go into like the focus of

today's session which I think a lot of

you are very excited about we can do

some basic housekeeping uh we skipped

the office hours last week there been a

bit of a gap on comms so do we have

anything interesting we should be uh

yeah shouting out this time Nicholas yeah

there's a lot of things to look forward

to but I'm just going to recap what we

have done in the past weeks first so as

many of you might know we migrated

our back end from python to go overall

it's been very successful I see a lot of

improvements especially for us

developers but there are still some

issues and we are aware of them and

we're trying to do our best to like

hash this out and fix it uh but if you

still have issues after this migration

from python to go please reach out to us

uh but to more exciting news as a part

of the go migration we're much

quicker at iterating on things so last

week you might have noticed that we

released our visual edit

function uh and yeah I hope you're

enjoying it if there's any feedback on

it we're happy to take it so we can keep

on going H and lastly we also have

office hours on Thursday I think Chris

can update you on what the topic is and

I'm also going to plug just quick quick

one here that we are going to do one

with Supabase in the coming weeks so

stay tuned for

that yeah don't you Chris yes so the the

one on Thursday is going to be focusing

on using visual edits uh tips and tricks

we're gonna have NAD who's our head of

design and Harry join us for that one so

I think it's going to be a good session

and then as Nicholas said we'll have one

with the Supabase team uh covering um

whatever you guys think is most

important uh so we'll be maybe asking

for you guys input on what to focus on

um but yeah back to today's session so

typically these sessions are very

focused on building we we build live uh

you know taking into account the risk

that you know doing live demos implies

but today is going to be a bit of a mix

uh it's going to be some expertise uh

and good prompting can in my experience

mark the difference between you having a

super great ride with tools like lovable

and you know digging yourself into a

deep deep hole and there's a lot of

tricks that normally come with practice

and iterating so we want to make like

sort of speedrun you into into getting

all that knowledge without having to go

through the through the grueling pains

um so having said that I think I'll hand

it over to you mark you're the expert uh

and to sort of do a deep dive into the

art of H prompting I guess perfect thank

you so much Christian so uh my goal is

to go through some slides just to show

you just general concepts that not

everyone might be aware of and I'm just

going to share them right now just going

to go

here just let me know in the chat if you

can see my screen you should see a

heart yes we see it perfect all right so

um big picture why prompting is so

important is it not only sets the tone

for how your lovable apps going to be

built but when you run into errors there

are some cheat codes that myself and my

team have figured out that help you get

out of jail in Lovable or any other

similar tool and it really helps

you expedite things and understand where

things went wrong because sometimes you

get stuck on an error and you just spin

forever once you understand what's

happening and what the error is that

unlocks learning and for you to start

the next lovable app with the best foot

forward so we're not going to spend too

much in lecture land here but just some

reminders that are super helpful because

none of this stuff is Magic It's a

combination of a lot of engineering from

the lovable side and predictive prompts

meaning there's a prediction based on

your prompt your goal with a prompt is

to give enough information or enough

hints that the model can predict what

you want as an output there's no quote

unquote understanding yet in these

models you have an input and you get an

output and when you keep things

non-emotional like that knowing that

it's not magic when things go wrong it's

usually a combination of a prompt where

you weren't able to articulate what you

wanted to articulate in the way it was

expecting or there's a mismatch in terms

of what it's expecting as an input so

real quick because they're all

probability based the more you can load

a prompt with either examples or the

words it understands or knows so for

example let's say you're implementing an

edge function using Supabase and you

know that you not only want to send a

request but you want to receive a

request from another service or

microservice even just going the next

step and telling it you know what we

want you to handle the response in XYZ

way is helpful so it knows back and forth

what the input is and what the expected

output is.
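As a rough illustration of that input/output framing, a Supabase Edge Function of the kind being described might look like the sketch below; the handler body and field names are placeholders rather than code from the stream.

```typescript
// Hedged sketch only: an edge function (Deno) that both receives a JSON request
// and returns a JSON response, so a prompt can spell out the exact input shape
// and the expected output shape. Field names are illustrative.
Deno.serve(async (req: Request) => {
  const { text } = await req.json();                            // expected input: { "text": "..." }
  const result = { summary: `received ${text.length} characters` }; // placeholder processing
  return new Response(JSON.stringify(result), {                 // expected output: { "summary": "..." }
    headers: { "Content-Type": "application/json" },
  });
});
```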

in general I go through four main

tiers of prompting, one of them or maybe

two of them the majority of you will be

aware of the last two the majority of

people that we work with and I meet are

less aware of so the training wheels

method and I'm not just going to be

yapping here I'll pull up um an example

little screen here on ChatGPT the

standard method you probably learned is

you use a combination of what's called

markdown where one hashtag is supposed

to denote a header which is like the

most important part of the prompt a

double hashtag is something like a

subtitle triple hashtag is even a

smaller subtitle where asterisks become

a bold so now typically you would have

seen something like you have a context

and then you have a task then you have

some form of of

guidelines and then you have

constraints and then you use these

hashtags as a way to tell the model hey

this is a proxy for a new section or

importance so this is typically what you

would start with and this is the

training wheels method where you're

still understanding that for most models

let's say Claude 3.5 Sonnet GPT-4o and

all kinds of models they care about

what's at the very beginning and the

very end of a prompt there's recency

bias they're not great right now at

something called Lost in the middle

logic where some portions of the body of

your prompt are not forgotten about but

they're paid attention to

less we have more and more models right

now like Gemini flash that are coming

out that are better at this but knowing

that there are these biases should push

you to make prompts that are as concise but as high-value

per token as possible so this is

the typical training-wheels method where you

have these different headers.

let me interject and ask you here, I saw in the

chat that there's one person asking for

a cheat sheet to download, will you post

one of those, and also someone is asking

what's the optimal number of words in a

prompt if there is one yeah so first one

uh I can put together something for you

to distribute to the community here so

I'll take care of that in terms of

number of words there is no magic number

of words the Nuance with prompting is

not just at the conceptual level every

model should be prompted slightly

differently um so Claude 3.5 Sonnet has a

different personality with prompting

than GPT-4o and reasoning models

obviously they're not in Lovable yet but

in the future like o1 or uh DeepSeek

the way you prompt those is wildly

different those want way less input and

very targeted sentences whereas with

4o or Sonnet sometimes more is better

and uh if it's not as detailed less is

more makes sense

cool so that's the training wheels the

no training wheels is when you start

using this framework of the context and

the guidelines Etc without actually

calling it out it's a part of your

actual workflow the part here that

should help many of you and if anyone's

here watch my channel on lovable I use

this as my get out of jail card to be

not only lazy but also be much better at

explaining things so myself I am a data

scientist a full-stack data scientist

by trade not a software engineer so

when it comes to implementing things I

don't necessarily have the words to

describe things that a seasoned software

engineer would be able to do so one of

my get out of jail free cards is

creating a prompt that's similar to this

where I assign whatever language model

the role of a prompt engineer so I say

something like and I'll be extra lazy

here by using my voice you are a world

class prompt engineer you put together

very detailed prompts that are concise

but detailed enough that I can get my

desired output when I ask you for a

prompt I want you to put together the

prompt in markdown in a code block so I

can easily copy it and then I'll say

what I actually let me just stop that

there and then I'll say what I actually

want so write me a prompt that will

generate a full stack app that will take

an input of let's

say

someone's name number and Company and

generate a

report about their company and let's

just do that as a as a draft here what

the actual output is is not much of a

concern but typically I'll ask a

reasoning model to make these prompts

now because they're really good at

prompt engineering and you might not

think about it because it's not intuitive but

the best prompt engineers on

Earth are the language models themselves

who better to ask how to talk to

something than a member of that

Community itself yeah for someone for

someone new to to the like types of

models like can you give a quick like

one line like one sentence explanation

on what a reasoning model is versus a

non-reasoning model that we might be

more used to uh yeah great question so

the I like to call it Old World versus

New World old world is GPT-4o 3.5 Sonnet

and everything before that those models

take in input and directly predict an

output and spit that back out a

reasoning model will take either one to

n number of steps to Output its result

then it checks that result against your

prompt again and double checks did I

fulfill Christian's request or not and

then it goes back again and depending on

how much thinking or reasoning it needs

to do it keeps going that Loop to check

its own work so whereas with 4o or

anything else you only worry about

oneway traffic with reasoning models

there's two-way traffic because there's

actual back and forth makes sense makes

sense and then more importantly maybe

what what did you use to record your

voice there on ChatGPT yeah so it's

it's a free extension it's called Voice

Control for ChatGPT it's a Chrome

extension and it'll work automatically

as soon as you upload it nice ni cool so

you can see here um it's smart enough to

even understand the Dynamics of what I

showed you manually so it says you are

an expert full stack engineer create a

full complete production ready web app

and then it goes through all the

different things here and notice how it

produces markdown itself it's very

concise so even if you have a prompt you

want to start with lovable I would ask

AI just to double check it lean it down

compress it so that it's as valuable as

possible without being confusing so

that's one tier of meta prompting the

cheat code is when you're getting

outputs that are not expected instead of

going back into a fresh new chat you can

say I tested this and let me just zoom

in

here and it's using

um I don't know

typescript but it should be in node.js

right so in in this case we don't care

about the actual languages it's more so

the fact that you can now have your own

mini prompt engineer chat that you go

back and forth with until you get the

prompt that gets you to the outcome

you're looking for so now even when it

comes to like what do I do with my

prompt just ask the AI and give it that

criticism of what went wrong okay um

this this for me like for me to

understand this is basically if you

notice the prompt they created is like

not really using the Technologies you

know lovable uses in our case it's like

React and Supabase you can kind of

like correct it and say like no actually

use this instead exactly exactly so you

can course correct the

prompt makes sense now this is the part

where I want as many people to focus on

as possible because this is the most

tangible part to um building things on

lovable so when it's not just about meta

prompting you can do something called

reverse meta prompting now what that

means is you had a back and forth with

lovable let's say you had tons of Errors

which will happen to the best of us no

matter how well you prompt because some

things are unpredictable so I'm going

to show you an example of an app here

that I put together so this app

transparently even with good prompt

engineering took me around an hour and a

bit to get through the books and its

function is pretty straightforward I

upload a PDF that PDF should get

stored in Supabase storage and then I

should be able to parse that PDF by

clicking on extract text and then it

should show me that text now the entire

chat we got hung up on things like

authentication the security of the

actual PDF and you'll see here we got

tons of Errors uh post requests were not

going properly there's a function issue

with the storage uh security again so at

the end of this I'm like I don't want to

go through this ever again so reverse

meta prompting is when I I end the chat

with something very important so let me

just go back here let me go

into uh this here I'll just zoom

in all right so you can see here at the

very end of the chat once I figured it

out I say now put together a detailed

instructive prompt end to end I can

provide from the beginning next time

that captures all of this and one thing

before I said that was I said said this

is a good start can you give detailed

why information from some of our

requirements was not fully understood

and can you summarize all the errors I

went through so that the next time I do

this I don't have to start from scratch

and the output is something like this

which is let me go down to the bottom

here there we go cool let's zoom in so

now it created its own prompt that I can

use in lovable next time it says create

a full stack PDF processing and

summarization system with the following

specifications uh Supabase project

with these requirements it even

summarized what SQL request I should

have given to increase the likelihood

that it will understand the next thing

is um Edge function like basically what

kind of edge functions should we use and

how should they be used it gives it a

cheat sheet for the security stuff that

I ran into this one component here is

what I spent four chats going back and

forth with now again it's aware of it it

doesn't mean that it won't make that

same mistake again but the fact that

it's in the context window will give it

an easy hint or a cheat sheet that you

know what this is probably the error and

then as you go down here it says

different things to look out for so

empty PDFs I uploaded a PDF that was

empty and didn't know how to handle it

so now it's trying to look out for it

and then you can go down here it'll give

more and more requirements um looking

out for certain errors and this is a

better prompt than I could ever put

together because it's using the software

engineer sorry software engineer words

that I might not have yeah so this is

basically you saying I've gone through

all this pain I finally got it working

next time I do this I just want to save

a bunch of time so I'm just gonna tell

it to tell me exactly what it did and

then I can just sort of like store this

somewhere and reuse it in a in a in a

yeah in the next project I do basically

okay makes a lot of sense and and can I ask

you mark did you use out of curiosity

are you using chat mode for this or for

any like are you a big user of chat mode

and do you use chat mode particularly

for this uh reverse engineering

technique or

yes yes so for this final technique I go

full chat mode and I go back and forth

until I'm happy with it and yes I'm gonna

actually go through chat mode very

shortly right now okay so I I won't

Spill the Beans then you're good you're

good cool um I'll just go back here so

with lovable prompting so now hopefully

we should understand you know we have

four tiers of prompting telling the AI

to write prompts for you is a cheat code

now when it comes to lovable itself it's

also about foreshadowing and if that

word doesn't make much sense: in your

first prompt you shouldn't just tell it

the functions you want it to do, those should

typically be sent in the second or third

prompt once you have Supabase hooked up

because without Supabase you're not

going to be able to really integrate and

build those Edge functions so your first

prompt should give a preview

of what the goal is, what are we

trying to build what are the

functionalities you want the user to

have at a very high level meaning I want

the user to be able to upload X

and get y as a result that's the dream

vision and the UI should look like I

don't know you can come up with an idea

or you can upload a screenshot of some

inspiration UI you want that should be

your first prompt once all this is taken

care of and now you've hooked up to

Supabase my next cheat code for you is

I spend now 70% of my lovable sessions

in chat mode and the reason why is

sometimes again you're unclear on what

your requirements are so how do you

expect an llm to understand what you

want if we go back to the premise that

this is not magic an input and output

are the same meaning garbage in garbage

out when you go on chat mode I

like to ask it hey here's my vision

here's the functionality I want to

integrate um does this make sense to you

and if it makes sense to you play it

back to me what you think I want to

implement not only

that if I go to here I'm just going to

jump ahead for two seconds so um I have

one little hack here

LLMs like 3.5 Sonnet, 4o always remember

the old world of models so if you ask it

hey can you integrate OpenAI it will

default to saying okay oh yeah I know what

GPT-4 is I know what 3.5 Turbo is and

then you run into errors from the get-go

that you could avoid by me telling it in

chat only mode hey I want to implement

GPT-4o, do you know how to implement GPT-4o,

yes or no, like if so show me the code

block of how you're going to implement

it and if it shows me GPT-4 then I'm like

nope let me go to the documentation copy

paste the code block this is how we're

going to implement it forget all your

training on how you implemented GPT-4

before this is how we do it and now I'm

dealing with this in chat mode I'm not

going through pain and suffering and

wondering what of my five

functionalities is not working I know

from the get-go you're not using the

right model so we're Bound for issues

later

on I'm just going to pause there in case

there are any questions from the chat

let's see here there's a couple

questions here maybe we can find

um Nicholas you have anything on your

side no but let's give people the time

to ask some questions before we continue

yeah that's a good point if you have a

question now's the Now's the Time I have

maybe one here which is um have you

found reasoning models to be good at

debugging as well or most or you use

them mostly to craft prompts no that's a

fantastic question so I'll jump ahead

here for the sake of that question what

I like to do with reasoning models is

sometimes you'll get an error in lovable

that just says error right and it's not

very explanatory even when you go to the

logs it'll look like some Supabase

POST error but it doesn't actually

tell you what's really happening I like

to open this in a new tab so that I can

then get access to uh in Google Chrome

there's every single browser has one of

these developer tools and when something

goes wrong this is something that just

went wrong like an hour ago um I didn't

get this error in lovable but I

screenshot this error and say hey like

whenever I deal with this issue with

let's say Supabase what's the best way

to go about asking to to fix it so this

helps a lot and this tells me that

there's a row-level security issue with

Supabase so then I start to learn

myself as well so now the next time I

write I know that row-level security,

CORS, security in general is a problem

so I like to use reasoning models a lot

because they're smarter at debugging

what might go wrong just off of a very

very basic error yeah makes sense

there's another one here I think sorry

go ahead Nicholas also saying I'm seeing

some more questions in the chat

regarding RLS policies, row-level

security in Supabase, and getting that

right you could just give a quick

overview on that sure yeah so um on row-

level security typically again as you

fail and learn from implementing certain

things you'll know when security is

going to be a problem so now like I'm

pretty familiar with when security is

going to be a problem I'll give you one

example right away that I know for a

fact in my prompt somewhere I have to

disable security so let's say you wanted

to build a matchmaking app and it was

matchmaking for AI tools so you enter

your AI tools you get matched with

someone else who has AI tools that are

same as yours the fact that that

information from every user has to be

shared immediately tells me you're gonna

have issues with row-level security

because by default every user's input

and profile and everything they do

activity-wise is hidden to be able to

enable that functionality you have to

disable that or disable portions of that

so that they can talk and see other

people's activity and

responses so that's one example there um

other times with security is when you

upload a document let's say that PDF

example I just showed you my main error

there was I was able to upload a

document but on the UI it didn't let me

see it let alone view it because it was

protecting it in the storage so with

Supabase always assume that it's hiding

things for your own good and if you need

to unhide them you have to be explicit

about telling it that.
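To make the matchmaking example concrete, here is a hedged sketch using the supabase-js client; the table and column names are invented, and the env-variable names assume a Vite-style setup rather than anything specific to Lovable.

```typescript
// Illustrative only: with row-level security enabled and only a "users can read their
// own rows" policy, this query quietly returns just the current user's profile, so the
// matchmaking feature has nothing to match against. The fix is an explicit policy that
// lets authenticated users read each other's tool lists (deliberately un-hiding that data),
// not disabling RLS wholesale. Table and column names here are made up.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,      // assumed Vite-style env vars
  import.meta.env.VITE_SUPABASE_ANON_KEY
);

const { data: profiles, error } = await supabase
  .from("profiles")
  .select("id, ai_tools"); // only rows the policies allow come back; RLS filters the rest silently
```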

makes sense. I want to highlight

another question slash comment here, so

there's somebody here from

Twitter saying: for vibe coding and

prompting I still get confused which to use

for what, switching from ChatGPT to

Figma, lovable to n8n, which is like an

integration platform, and that causes a

lot of friction. so I think currently

today you might uh benefit from like

bringing in other tools outside lovable

obviously our vision with the product is

to make it so that you can stay in

lovable without having to go to these

external tools as much as possible so we

we we will likely integrate reasoning

models in the future uh in in the sort

of very in the midterm short term we

will H you know improve um how we uh

like create the prompts internally maybe

so that all these things that you

currently have to go through external

tools are just not required and and

something critical about um about

prompting is that context is key and if

you if you do most of your work H and

and let lovable help you create your

prompts then lovable is the best is the

best person to do this because they

lovable has entire context it knows the

codebase it knows the history it has

some some degree of memory so you're

never going to get that level of H

detailed context anywhere else any other

external platform right unless you copy

everything which is uh super super super

hard so so the goal is that lovable just

becomes the best unbeatable expert at

understanding your requests because

we're just the ones that are equipped

with all the necessary ingredients for

that um but but I still think that it's

interesting to see what people can do

now in in in the shortterm midterm by

using these third party third party

tools

um cool um keep going yeah let's go and

I just uh we're at almost half an hour

so let's make sure we have some time for

building as well

sounds good let's actually just jump

into building then um cool so we'll try

something that is error prone um I tried

to dabble this morning didn't have too

much time but let's uh you can use

either n8n or make.com I'm going to use

make because for most beginners it's

more approachable less of a steep

learning curve but let's build two

applications one is super simple and

there's a reason why I want to do it

simple and one is more complex but both

use the same automation so let's uh, for anyone

that's not familiar with Make, Make is a

workflow automation tool you can build

all kinds of what are called scenarios

where you receive a request or some

input of data and then you go and take

care of a bunch of modules or tasks that

are automated as a part of that flow and

then you return some form of response so

it it'll make more sense if you haven't

seen it before while I build things but

if I have never built something before

and I'm looking for a key functionality

here here what I like to do is I build a

small proof of concept for myself that's

super simple where I just double check

that the function is working properly so

in this case I'm just going to say um

can uh let's start off and say build

a user interface and I'll zoom in here

for everybody on the stream build a user

interface that has a shiny red button

and a text box right above it

the goal is that when I hit the

button it'll send the

request um web hook to an Automation and

come back with a

response for now just

worry about creating a simple

minimalistic UI

with a clickable red button now one

thing here you'll notice is I'm I'm

giving it that goal this is not a very

sophisticated prompt but I'm just at

least telling it where I want to go So

eventually we want to build in a way

that we want to embed what's called a

web hook so while this loads just going

to zoom out we're going to set up this

webhook in Make.com and again if you

don't know what a web Hook is imagine

you have an ear that's listening for

some data that ear we have to activate

that ear so that they can listen in to

The Lovable UI and execute that workflow

we build based on that input so let's

Okay I Goa let me see

here Christian I think are things uh

operational see I think you might have

hit the

curse there was something that did not

work properly there

so maybe hit a refresh Mark done

yeah perfect the curse of doing live

streaming hey it wouldn't be it wouldn't

be a good live stream if there's no bugs

so cool so we have our box and we have

our button and again if you're saying oh

this is such a simple example yes that's

the whole point I want to make sure I

can do this before I actually do it in

an app that I want to ship so in make

what I'm going to do is I'm going to

click here I'm going to create our ear

to listen in for the data by clicking

web hook and then I'm going to go into

custom web hook we'll set up our ear I'm

literally going to call it lovable

ear and we'll click save and when it

saves it's going to generate this little

key here this URL that I can use to talk

to the app so I'm going to copy the

address and that is going to be what we

want this button to process now let's

just uh ask it

something um I'm going to go back to my

reverse verse prompt so this prompt I

ended a chat not having this make

automation work and I tried to ask it

what could I have done to increase the

likelihood this will this will work next

time so what I'm going to do is I'm

going to just go back here and say um

here's the web hook let me zoom in

here's the web

hook okay and then I'm just going

to try to be extra lazy and then go into

chat you be real quick and I was going

to say

uh what's in this image just so I don't

have to take out of presenter mode it'll

just copy paste it in case I want to

change it too it'll be easy to change

okay there we go so this is what lovable

told me was what I needed to explain to

it to understand how to put this web

hook and get a response back it was easy

to send request but not get one back so

I'm just going to go here and then go

back to lovable and then zoom out just a

tad say uh we want to send

whatever is in the text box to the let

me just correct that spelling for my

OCD to the uh web hook

below and expect a response later not

now for now just send whatever I submit

to the web hook um here's some pointers

to help and this is where I'll paste

that little tidbit of that Json

basically telling it post Json text

input value which is this to insert web

hook so in this case I'm going to just

say to web hook below okay can I ask you

mark what point would you use chat mode

in the process right now you sent a

default mode

prompt yeah so in this case it's because

I went through the chat mode in another

chat um to understand what I could have

done better to explain to it what the

goal is so if I want to go into

uncharted waters that's where I

immediately I go to chat chat mode so

I'll go here go to chat mode and then if

I want to change something functionally

that I've never done before I'll go back

and forth and make sure it understands

my

request maybe a question on like just

clarifying again what what a web Hook is

and what make is just to make sure so a

web Hook is basically can can you is

this a correct explanation mark make

sure I'm not saying something that is

not true but you pick an endpoint you're

sending a request to and then that

triggers a workflow which you've defined

a make and then it sends the the the

results of that workflow back to lovable

right that's basically that that's

basically what you're doing yeah yeah so

Step One is can we leave lovable with

some data and the step two is can we

come back to lovable with the response

and make just allows you to create all

these workflows and I guess make

integrates with a bunch of different

platforms to create these sort of

automations right uh exactly so you can

think of it as let's say um Lovable's

back end is not familiar with your CRM

that only you and a handful of companies

use but Make.com has that native

integration, you can use a Make webhook

as a cheat code to not have to go back

and forth with Supabase feeding it docs,

you can just use the modules, set up the

workflow, and just worry about the back-and-forth response.
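As a rough sketch of what the front-end side of that looks like, the generated code probably boils down to a fetch POST similar to this; the webhook URL is a placeholder for the one Make.com generates, and the payload shape matches the single text field used in this demo.

```typescript
// Hedged sketch, not the actual generated code: send the text box contents to a
// Make.com custom webhook and hand back whatever the scenario responds with.
const MAKE_WEBHOOK_URL = "https://hook.eu1.make.com/your-webhook-id"; // placeholder

export async function sendToMake(text: string): Promise<string> {
  const res = await fetch(MAKE_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }), // the "ear" in Make receives { "text": "..." }
  });
  if (!res.ok) {
    throw new Error(`Webhook request failed with status ${res.status}`);
  }
  return res.text(); // the "webhook response" module decides what comes back
}
```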

so in this case, now that it

says it's set up the code,

what I'll do is I'm going to go back to

make and then right now if we receive a

response to this endpoint this will stop

running because it'll have some data so

in this case let's go back to lovable

I'll put it side by side I'll say I love

lovable okay let's just click process

okay so it says accepted um what I need

to see is on make that that spinny thing

stop spinning so it says successfully

determined that means we've sent data

from lovable to here here now to see

what data we got I'm going to run this

module on its own again and I'm going to

write this again okay so it says

accepted now you can see we got that

text from the UI that says I love

lovable right so now with this input I

can now start a whole workflow

automation so if I could wanted to add a

node here okay um and I say let's do

something very basic just so that you

can understand uh let's do GPT-4o

or let's do 4o mini just so it's faster

let's do

mini and what you would have had to do

if you've never used make.com is just

authenticate yourself and you do that by

clicking add and then it'll pop up with

um the name of the connection you want

to establish and then you'll put the API

key for your openai in here and once

that's figured out you can click on ADD

message and I'll click on user here I'm

sending a user prompt and I'm just going

to say

um

create a poem or let's say a

haiku for whatever you see in and then

I'll insert the variable from our first

node text okay so this should just

create a poem and if I test it out here

we'll send this again what I'll do is

I'll click on set so that it runs

automatically and then I'll click save

so now if we send this web hook um I

don't know let's talk about web apps as

our core thing

here it now heard that question it now

created that poem okay so now we know

that it got that data we were able to

process it now the second part here is

how do we get data back to lovable this

is the next part so we need some form of

feedback loop so we'll add another web

hook that says web hook

response and um yeah and for the comment

that I just saw now you can totally do

this in lovable the idea is this is just

to show you what you can do from the

response standpoint and then we can do

something more advanced we can click on

uh the body here we're just going to

respond back with whatever ChatGPT

responded with and we'll click save so

now we'll go back to lovable I'll go

back to uh let's go back sorry here I'll

go back to chat mode and I'll say we

have a few questions in the chat by the

way Mark how do I find chat mode like

it's not there for me so sound good so

let me go into different thing so I

don't uh lose my track here so if you go

to uh settings and you go to account

settings I believe you'll see Labs here

you'll see chat mode it'll be default to

off I believe and then you have to

toggle it back on yeah yeah and I can

add to that it's still an experimental

feature some of it we are rapidly

changing it so it might have a different

Behavior tomorrow should still be great

and I can also add that this toggle is

stored in your browser's local

storage so if you switch devices or

browsers it will not be there

anymore you might change that later but

that's how it is for now yeah and the

last thing is if you come up with a plan

with chat mode and you implement the

plan it goes back to default so if you

want to keep going in chat you want to

make sure you go back to chat mode um so

now let's go back to chat mode here I'm

going to ask it I now built a

portion let's zoom in it a portion of

the

automation that sends back a response

with the poem that's

generated um do you know how to handle

this

response and I'll sometimes I'll just

ask it Point Blank like are you are you

cool with this are you ready to to

handle this and then it'll go through

let's say we have a response State

variable to store the response we

capture the response text correctly then

we display the response nicely formatted

a container below that all sounds good

to me and if it says uh would you like

to test it out I'll

say yes let's

implement. so instead of having a head-

to-head with lovable I am giving love

and trying to get back love by just

asking for some preparation.
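The plan the chat played back (a response state variable, capturing the response text, and a nicely formatted container below the button) might boil down to something like this sketch; the component and handler names are invented, and sendToMake refers to the earlier fetch sketch.

```tsx
// Hedged sketch of the plan described above, not the actual generated component.
import { useState, type ChangeEvent } from "react";
import { sendToMake } from "./sendToMake"; // the fetch sketch above (assumed file name)

export function PoemBox() {
  const [text, setText] = useState("");
  const [response, setResponse] = useState<string | null>(null); // response state variable

  async function handleProcess() {
    const poem = await sendToMake(text); // call the Make webhook
    setResponse(poem);                   // capture the response text
  }

  return (
    <div>
      <input value={text} onChange={(e: ChangeEvent<HTMLInputElement>) => setText(e.target.value)} />
      <button onClick={handleProcess}>Process</button>
      {response && <pre>{response}</pre>} {/* formatted container below the button */}
    </div>
  );
}
```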

yeah I I think you might have to change

to default yeah so this is probably a

little bug it should for you when you

said yes but

yeah

cool and while we're doing that now like

just to show the new feature visual

edits while it's actually loading if I

want to just quickly change this without

going into a battle saying no the button

the button or change it this color I can

click on edit and then I can select the

element I want to change so I could make

this theoretically any color I want um

without having to prompt it so while

that's loading up that's a beautiful

little nugget I think you guys are going

to show later on right

yeah all right so uh let me see here

refresh let's try this again let's talk

about uh web apps and click

process all right so you can see here I

can look at the automation if I go back

out here I can look at the execution we

were able to go from an

inquiry from lovable to a response

back from this workflow automation

and then we now have the poem displayed

here so to me the most important thing

is now we go back to chat mode is if you

had to

implement this again in one shot

prompt how would you

structure The Prompt so that all the

details are captured

um I'll say act as a prompt engineer and

output a detailed yet concise prompt I

can use for next time and I'll make sure

chat only is on so I don't cause any

chaos and then it should come up pretty

quickly with a response to

that there we go so you can see here

create a modern web interface with the

following components UI elements a

sender text box a shiny red button uh

with core functionality so it says here

send post requests to this web hook now

obviously this will change over time but

now if I want to do this again which we

will right now I now have a head start

as to like how I should actually

communicate with this yeah maybe ju just

a question that is not entirely related

to what you were just talking about Mark

but I think is a good question so how

can I what are Best Practices to prevent

unexpected regressions when validating

code or features and how to basically

avoid these happening as your project

grows larger and larger so this is this

is a known a known issue uh and we we we

we think about this problem a lot and

we're constantly think about ways to

make it um less likely for these things

to happen as the project grows it

becomes more complex since our agent

needs to sort of think about uh more

files and more and a larger context but

we're doing things and uh we shipped

something actually earlier this week

which um basically will uh be like the

agent will be better at ignoring um

duplicate files uh often when you add

features or when you move a feature from

uh like when you change the like

where features are located in your app uh

the AI like creates the new like

creates a new feature but like forgets

to clean up after itself so to speak so

you end up having two files maybe that

do the same thing and the problem is

that as as as you when you then try to

kind of add something to that same

feature it will edit the wrong file

because it's confused and doesn't know

exactly which of those two files has

been used H so you often will see an

error called could not be applied or um

I think it's uh changes already like

changes already in place so the edit

will fail so we've just actually shipped

something that will make that much less

likely to happen because the AI Now sort

of takes into account what is being used

and what's not been used um I can also

add one more pointer here and when

migrating from the python API

to the go API we talked about this

earlier this stream, we just made a big

change and switched programming

languages for our own back end we also

revealed a bug that we had since before

where the AI, or the agent,

sometimes just went crazy and edited

files that it did not have context on

yep H and that's of course going to make

the rewrite terrible because it didn't

know what was there before and so

hopefully we're going to solve this

better and better now because we know

this is an issue yeah so just to address

the question myself um what I like to do

is whenever there's a suggestion to

refactor my code which is one of the

Holy actions of a developer um I asked

it in chat mode what were you thinking

of cleaning up or refactoring and how

are you planning on doing that and

usually immediately even without being a

developer or technical it'll tell you

I'm going to edit this part and this

part or this function and I can smell

that there's something off here so if

anything chat mode can help you decide

you know what it sounds nice to refactor

but based on what I think your plan is

either here's the rule moving forward

never refactor anything that's not X so

now you've made a temporary rule in the

context of the chat or you just ignore

it entirely and you keep going unless

you are genuinely getting to thousands

and thousands of lines that are just

things are not working right at that

point um which is what I would do there

so Mark we have a lot of non-technical

users what is a refactor let's just

address that really quickly as well cool

so let's let's say I wrote a 100 page

essay uh the word refactor would be the

equivalent of me trying to compress that

to 10 pages

but unlike an essay well actually

similar to an essay if you compress from

100 to 10 you might lose a lot of

meaning and context and information in

code that's more drastic when you

refactor code that's hundreds or

thousands of lines and you try to make

it more lean in the process you might

shortcut functions that depend on other

functions that lead to clashes uh

conflicts which is literally one

function trying to accomplish something

that the other function is trying to do

or they're mistimed so it leads to a lot

of issues that are hard to resolve

because if you're non-technical you'll

never understand what has changed that's

caused all these

issues so yeah like um refactoring is

like not the last thing you want to do

but you want to make sure you're fully

informed on what the plan is before you

even think about doing it y cool it's a

beautiful

metaphor good um any other questions you

want me to go through before I keep

ripping I think you can continue you

cool yeah all right so now we know how

to deal with this function here so let

us open a new

chat in here okay so in this case let's

do something that is not a basic red

button okay so let's do

um you know what let's let's have ai

help me so one thing I'll give to you

guys to provide the community is my

little prompt helper for lovable I like

to use this to help me get things off

the ground so I'll say there we go it'll

just give you some guidelines as to what

you can

use okay so I just want to build a web

page that's kind of a landing page for a

dentist company where you have a button

that opens up a form and you can collect

some basic information about what I'm

trying to do and then we're going to

send that information to um a web hook

in make.com but we just want to worry

about building the main landing page in

Shell ideally I want the design to be

minimalistic but look like something

that a dentist company would own and

make the prompt as concise as possible

don't write me essays don't write me a

bunch of bullets uh be very succinct so

in this case I just oh there we go

instructed that let

just I think I got to re refresh that

give me a second

here let me just regenerate that real

quick

Murphy's Law all right

boom OpenAI's down yeah if OpenAI's

down there's bigger

problems cool so this is going to

generate hopefully it's not gonna be too

much yeah I like this this is a nice

clean start so we'll go here we'll send

that and then this should start us on

our way the most important thing I want

is I want a button that opens up some

form it could be even in the browser

itself and then we want to send those

components to make and then ideally do

some research on uh the condition and

see if our dentist company can actually

handle

it yeah there's a question here maybe we

can just uh answer real quick but I

think Sam is asking can we make an

example something more complex like user

logins and wres and I think my

suggestion we're doing these quite often

uh this this like current office hours

was mostly focusing on prompting so we

we're not going too deep and building

something super complex but we do these

every week at least once and in the

other sessions we're really building for

the entire session or actually building

an app throughout more than one session

uh so we have the zero to launch series

that we're building as something that

we're actually going to launch so I I

suggest you you subscribe and I'm sure

you get notified next time we have them

and I think those there you'll get a bit

more of this um sort of advanced app use

case

yeah we could definitely go there but I

think yeah you'll see many opportunities

for those authentication

things all right and while this is

loading I think just to finish off I

think I had a couple more notes here

um I mentioned the opening your tab to

get more meaningful errors and the last

thing is just very important when you

resolve things in lovable Beyond just

reverse prompting at the very end try

and see what it actually said was the

resolution to the problem because then

you'll start to understand patterns of

what does it usually fall on what issues

pop up so that's one last thing here

that you'll see typically when you

resolve it it'll give you a summary of

what it resolved or what it fixed and

then it's easier to go back so uh go

book consultation here all right so we

get these three

components I want to say um can we add

an open text field where the user can

explain what they're suffering from

specifically in their own words so we're

going to use this so that we can do some

research on it and see if it's something

that we can handle or if we have to pass

it on to a specialist for

example and while this is setting up we

can create a fresh new scenario in make

so we're going to need that ear or web

hook or that endpoint once again so let

me just refresh here

so let's set this up we're going do

custom web hook we're going to add

another one we're going to call it

lovable

dentist

endpoint and then we'll click save so

this is ready to go I'll copy this to

the clipboard and let's make sure that

we actually got what we wanted so let's

click on book of consultation perfect so

we have email name we have this so now I

can

say in chat mode all right

so I want to send a post Json request of

all the user inputs to a web hook in

make you'll see

below okay um I want to return a

response as to whether or not we can

help the

patient or if they need to see another

Specialist or doctor all right and then

here's the web

hook can you confirm that you'll be able

to send all this information

cleanly to this web

hook and just

know that uh we'll need you to handle

the responses as well, need you to handle

the response as well. okay so that's an

example of what I like to call a

very clear prompt where I'm

foreshadowing the next step, so I'm not

saying worry about the response yet, I'm

worried about just integrating this and

sending this cleanly as strings and yeah

as text information.
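Sketched as a type contract, the request and response being foreshadowed here might look like the following; the field names are assumptions based on the form shown (name, email, a free-text condition description), and the webhook URL is again a placeholder.

```typescript
// Hedged sketch of the payload/response contract, not actual generated code.
const MAKE_DENTIST_WEBHOOK_URL = "https://hook.eu1.make.com/your-dentist-webhook-id"; // placeholder

interface ConsultationRequest {
  name: string;
  email: string;
  description: string; // the open text field where the patient describes the condition
}

interface ConsultationResponse {
  canHelp: boolean;       // can this dentist handle it?
  recommendation: string; // e.g. "book a consultation" or "see a specialist"
}

async function submitConsultation(payload: ConsultationRequest): Promise<ConsultationResponse> {
  const res = await fetch(MAKE_DENTIST_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Webhook request failed: ${res.status}`);
  return res.json();
}
```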

so now that it's saying okay

it's clear on that, it's clear on how

it's going to handle the

response, I'll click on Implement plan

and then we'll be able to test whether

or not it works pretty quickly and then

in the actual make automation We'll add

a couple extra steps than before and

there we

go well this loads Mark can I ask you a

question do you have a heuristic for when

you use Make to build automations versus

using something like an edge function in

Supabase where lovable is writing the

code for the actual logic for sure okay

let's let's delineate between three of

them um Edge function make or n8n which

is super trendy right now

so make I'll use when there are

Integrations that are hard to accomplish

with a standard Edge function where

there's typically tons of documentation

that the model, let's say 3.5 Sonnet, might

not be aware of but the integration

exists off the shelf in Make so to me

that's a cheat code to replicate that

functionality without going through the

pain of educating in chat mode hey

here's the API here's the docs so that's

the difference between let's say make

and an edge function, n8n compared to

make is much more flexible meaning you

can use custom code steps in n8n um you

can have multi-agents they have

something called an agent module where

you can make basically do tool calls to

different agents in one workflow and

come back with a response and one thing

with n8n as well is um at scale I find

that n8n is better it's less expensive

with make.com you pay per operation and

one operation is literally one module

running at a time whereas with n8n these

scale a lot easier and the last thing is

obviously with n8n you can also um self-

host and you can do all kinds of custom

stuff or flexible things you can't do

with

make.com makes sense

cool all right so now that we have this

all set up I'm just going to send a test

here I'm going to send a test here

and then we'll click on something I have

a serious infection all my teeth fell

out all right let's just copy that to

clipboard so we have an error here and

look this is good um I don't know what

the error is so I'm going to go into a

new tab and do that

again okay and this time when we get an

error I want to open developer tab

because I want to know what went wrong

so let's go to more tools developer

tools and I can see all kinds of Errors

so dialogue content requires this this

this okay so there's something on

probably yeah the UI side that's not

working so we have another thing here

listener indicated an asynchronous

response so I don't think make got some

form of response interestingly but um

according to our feedback loop it didn't

go through so I like to just screenshot

this puppy copy this

go into here um please see and resolve

the following errors that are

happening and just that this is super

good timing we will we are working on

something very very soon that will

basically automate this debugging step so

lovable will automatically know uh what

network errors all these logs that Mark

just showed and it will basically pull

them so that they are in the context uh

when debugging something in lovable so

you won't have to do this step very very

soon um but I think it's good that you

you show the the current alternative um

in the mean time yeah and someone did

mention right now that edge functions

are better to use because there's logs

which is true so if you were to use an

edge function you could trace exactly

what the error is if you go to

Supabase um but obviously it's better to

have it here for anyone that's not

technical because going that extra step

is typically tricky for most people yeah

I can add as well since our go

migration or since we started since we

rewrote our API like a few weeks back we

are actually ingesting Edge function

logs

automatically uh so whenever an endpoint

or a webhook fails the agent will have

context on it which is

awesome yeah gotcha that's perfect so

you can see here we have some okay so

this might be on the make side so let me

make sure that make is listening for the

request to begin with before I fix it

let me just make sure that we have a

problem okay so it looks like yeah we

still have the problem here ideally want

to know what's remaining as the problem

so let's refresh this

here

and book now my biggest thing is if I'm

getting the same exact errors in the

same quantity over and over again that's

when I know there's something that is

not great so in this case seems like we

have less errors that's better and on

the make side let's see here we still we

are still receiving the response so I'm

going to tell it something that's going

to be more meaningful I'm going to write

this in chat mode actually and say so

make is actually receiving the

information but the UI seems to think

it's not sending it

properly and I'll also send a screenshot

of it working in make.com because I'm

getting the functionality I need I don't

think it knows that it's doing it

properly so that's the feedback loop

there and I think this is a great

example where our current uh Labs

feature chat mode can be a good use case

right you you want you want the model to

not jump into coding immediately you

wanted to think you wanted to look at uh

the the code it's written before

actually suggesting something so by by

choosing chat mode you're basically

telling it hey don't worry about

implementing focus on like debugging and

then maybe I'm happy with the plan and

I'll tell you to to implement it exactly

exactly especially when you're going

down uncharted waters again uh when we

mentioned the refactoring thing you

won't run into that refactoring problem

if you're not spinning on changing the

code constantly so that'll help you

there as well yeah I've seen a few

questions in the chat about uh lovable

prompt helper you had one GPT or

something in chat GPT right yeah yeah

yeah for sure I have two actually so

there's one for the new uh visual edit

mode and there's one for General yeah

are these public in the GPT store uh

they're not in the store but they're I'm

happy to share the links with anyone

here to help out so uh where's the best

place I can throw them in if you pop

them in the private chat here we can put

them in there

this is Numero Uno and then let me

see the second one if I have it for

you we'll make sure to post this on our

Discord okay sounds good sounds good let

me post the second one too so this would

help here while I'm doing this I'll

just show you how to use the second one

so that you leave this call with that in

mind so in visual edit mode you can

click on things to edit right that's the

manual way I like to be lazy so I just

put up a GPT that when you click on

Advanced here this code basically

denotes what changes you want to make

here so for me I just like to copy this

into this other lovable so I'm going to

write my single string

here's my current and I'll screenshot my

UI and I'll say

um whatever I want to uh paste this

here I want the button to be a

super light purple okay so in

this case I'll just let it come up with

the uh actual string I'll copy that

string so I don't have to go back and

forth and edit and look for uh the color

hex myself I'll just go back here and

then paste it and in this case I was

clicking on these bad boys not the

button but you'll see here it made the

change

without me actually writing it

so I like to be lazy that's my

go-to so now it saves and you're good

there so I'll paste that in the chat as

well let me paste that for you and for

reference to you guys in the chat these

are the CSS class names and we are using

something called Tailwind so

these are Tailwind classes yeah exactly

so when you denote any one of these you

can see here it labels it as div section

button these are all in CSS world how

you refer to these components on the

page so this is just like your cheat

code for saying for this thing I want to

make this

change
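To make that concrete, here is a tiny sketch of what those Tailwind class strings look like in the React code Lovable writes; the component name and the exact purple shades are invented for the example, not taken from Mark's project:

```tsx
// Sketch of a React + Tailwind button. The visual edit (or the string the
// GPT hands back) ultimately just changes this className string.
// bg-purple-200 etc. are standard Tailwind utility classes.
export function BookingButton() {
  return (
    <button className="bg-purple-200 hover:bg-purple-300 text-purple-900 px-4 py-2 rounded-lg">
      Book an appointment
    </button>
  );
}
```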

cool how are we doing for time I'm cognizant that we're

hitting the end now okay we'll give this

a shot if it works then it works let's see

here

uh

oh let's see let me just run that again

let me just do immediately as data arrives

delete all data let's do

this um and let's go back out here okay

so this is let's just make sure that

this worked

Mark okay

so that's recent Okay cool so now the UI

knows that the back end has received

this request and now with the little

time I have what you could do

next is literally go on and add a

Perplexity step which is the ability to

go search the web live through an API

and I could say you know what can um

let's go to this model let's add a uh

user prompt can you search the um

patient condition

description and see if it's

serious or can be

handled by a dentist right then you'd put

the variable description here and then

where you could go next is you know what

maybe for our dentistry we only do

certain types of surgeries and

procedures so you now add a GPT step

you say completion here and then maybe

you want to use a reasoning model a

reasoning model is a bit smarter it's

more thoughtful so let's use let's say

an o3-mini to be like you know what

based on this condition and based on the

research that we got from the last step

let's determine whether or not this is

something eligible and then just like we

did for the first MVP we'd add one more

little webhook response here that would

respond back with hey yeah we're

eligible to see you here's our link to

go book or you know what sorry we

can't do this but here's someone else

that can help you so that's an example of how

you could make this a bit more

sophisticated and troubleshoot what we

just went through at the same time so I

think we can call it there if that

makes sense super all right this was

very very cool I think that last example

you showed really

showcases maybe the pros of using

something like Make or n8n versus edge

functions like it seems like the logic

is very

much you can see it it's very visual you

can build up on it um which is very

helpful especially if you're

non-technical and you can't

necessarily read the code that

lovable is writing for the edge function

so I think we'll do for

sure some sessions in the future

focusing on integrating lovable with

these sorts of automation platforms I

think that will benefit many of

you yeah looking at

the session and one last nugget is

you can go from one to

the other so let's say you built an

automation in Make and that one's

working you could theoretically

screenshot the automation flow feed that

as an input to a prompt to say hey this

is what I'm trying to replicate as an

edge function so then you're giving it a

cheat sheet of how the flow should go

but anyway I want to just throw that

in
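To make that last nugget concrete, here is a rough sketch of what replicating that Make flow as a single edge function could look like: receive the description, research it with Perplexity, let a reasoning model decide eligibility, then respond. The API shapes, model names, and environment variable names are assumptions for illustration, not code Lovable generated on the call:

```typescript
// Rough sketch: the Make scenario as one edge function (Deno runtime).
// Flow: patient description -> Perplexity web research -> reasoning model
// decides eligibility -> JSON response back to the app.
// Endpoints, model names, and env var names are assumptions for illustration.
Deno.serve(async (req) => {
  const { description } = await req.json();

  // Step 1: live web research (Perplexity exposes an OpenAI-style chat API).
  const researchRes = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("PERPLEXITY_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [{
        role: "user",
        content:
          `Search whether this dental condition is serious or can be handled by a general dentist: ${description}`,
      }],
    }),
  });
  const research = (await researchRes.json()).choices[0].message.content;

  // Step 2: a reasoning model decides eligibility based on the research.
  const decisionRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "o3-mini",
      messages: [{
        role: "user",
        content:
          `Condition: "${description}". Research: "${research}". ` +
          `Our clinic only does routine procedures. Answer exactly "eligible" or "not eligible".`,
      }],
    }),
  });
  const decision: string = (await decisionRes.json()).choices[0].message.content;

  // Step 3: the equivalent of the webhook response module in Make.
  const eligible = decision.trim().toLowerCase().startsWith("eligible");
  return new Response(
    JSON.stringify({
      eligible,
      message: eligible
        ? "We can see you - here is our booking link."
        : "Sorry, we can't handle this, but here is someone who can.",
    }),
    { headers: { "Content-Type": "application/json" } },
  );
});
```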

cool so I think we can wrap it up there are a couple of shoutouts

before calling it so you can go to our

YouTube and you'll find a bunch of

previously recorded office hour

sessions as Nicholas said at the beginning

we're going to be doing another session

on Thursday that is going to be mostly

focusing on design using visual edits

our new feature that we just shipped and

natad is going to be joining us our sort

of design lead here also Harry who is

a very experienced user that some of you

might already know so I really recommend

signing up for that one what else

we have Discord um uh so I would really

really recommend you guys to go to the

community Talisha is in the comments

basically inviting you to go

and ask questions there's a lot of help

a lot of knowledgeable people that are

extremely like experienced with lovable

and I'm sure they'll um yeah basically

give you a helping hand um and what else

also as usual you can always go to L.D

support to see all the sort of different

support channels we have and I

think I would at least try to spend the

next half an hour I'll be available in

Discord as well if there are any questions

about anything that we've been uh

discussing

uh in the session anything else I'm

forgetting Nicholas or Stan yeah this

session will be uploaded immediately to

our YouTube channel so if you miss the

beginning or something just go back uh I

think the chat will be replayed there as

well so if you missed some of the links

to the GPT for example it will be there

but I hope Christian you'll make

sure to post the GPTs on Discord

and thank you for joining us Mark

do you want like where do you share tips

and tricks for these things where can we

direct our viewers yeah yeah for sure no

first of all thanks to everyone

watching um love doing this I love The

Lovable team um if you want to follow

for more mad scientist stuff more prompt

engineering tips for every modality you

can think of Mark Kashef is my YouTube

handle so if this was interesting to you

I'd love to see you there as well cool

awesome all right see you guys next time

have a good one bye bye bye-bye
