How I Build Apps With Google AI Studio 💡 Full AI Coding Tutorial
By Wanderloots
Summary
## Key takeaways
- **Plan the MVP first, no code**: The best way to get started is to have a conversation with Gemini to plan the app ahead of time, outlining the goal and core elements and limiting it to an MVP, without creating any code yet. [04:02], [04:25]
- **Ground the AI with Google Search**: Ground your answer in a Google search so Gemini understands how to build the app practically in 2025; initially it misunderstood the Obsidian integration and proposed the wrong send mechanism. [07:28], [08:24]
- **Docs folder for context continuity**: Create a docs folder with roadmap, architecture, and development log files to maintain permanent context across chats, so the AI can catch up even if the conversation resets. [10:56], [11:48]
- **Version with GitHub after every feature**: Save to GitHub after every feature to version control everything; the connection can push to a remote repository, though pulling back isn't supported. [13:24], [18:54]
- **Debug by giving specific feedback**: When a bug occurs, like the template not inserting the note contents, give feedback to edit the template with a 'note' placeholder, fixing it instantly. [19:29], [19:50]
- **Add generative AI easily**: Introduce a generative AI element, like dynamic tag generation with a sparkle button that analyzes note content and suggests tags using the Gemini API. [23:23], [24:51]
Topics Covered
- Plan Apps Before Coding
- Ground Prompts in Google Searches
- Context Engineering Preserves AI Knowledge
- AI Generates Tags Instantly
Full Transcript
So, I've already built basically a text editor, and that took like two seconds.
The age of waiting for someone else to build the perfect app to solve your problems is at an end. Artificial intelligence is finally at the point where anyone can build their own personal apps. You don't need to know how to code. You don't need to hire someone to bring your idea to life. All you need is a good idea, and AI will do the rest. Hi, my name is Callum, also known as Wanderloots, and welcome to today's video on building personal apps in Google AI Studio. Google AI Studio lets you experiment with AI tools, including Gemini 2.5 Pro, one of the most powerful AI tools available for coding. The best part: it's all free.
You can experiment in Google AI Studio and test out what you're building before you ever make it ready for production. You can use the build section of Google AI Studio to chat with Gemini, roadmap your idea, and then bring it to life. In today's video, I walk through how to get started with building an app in Google AI Studio, including my own strategies for prompt engineering and context engineering, essential skills for building quality apps with AI. By the end of the video, you'll understand how you can get started building your own personal apps to solve real-world problems. It honestly feels like magic watching my ideas come to life. The goal here is not to build the perfect app from the beginning, but to show you the core skills you need to understand how to use AI to help you build. That way you can go from building the basic system, iterate and recalibrate, and get better and better until you're an expert app builder. If you find this video helpful, please consider hyping and subscribing, as I'm working on making YouTube my full-time career. So any support you can give me is very much appreciated. Now let's start building.
Okay, so when you first get into Google AI Studio, this is what it looks like. We have four major sections on the side here: chat, stream, generate media, and build, which is today's focus, and then we also have the history of the chats that you've had. There are a bunch of different settings here, and honestly, it can be a little overwhelming. There's a lot of information. So, today I'm going to try and focus specifically on the build component. If you're interested in learning more about how the chat, stream, and generate media parts work, I recommend watching my video on what Google AI Studio is, as I walk through all of these features in much more depth, and people have been finding it pretty helpful. So, I recommend checking that out. But today, let's just jump right into the build section.
So if we click on build, you can see here that there's a showcase with a whole bunch of different applications. These are all kind of micro apps that people have built inside Google AI Studio with Gemini. You can see here we also have my apps that I've built in the past, recent apps I've worked on, and an FAQ here with more context and information. But rather than just explaining how all this works, I thought I would just show you. I was having a conversation with Perplexity on what would be a good example to show you, and I figured: I use Obsidian for all of my note-taking. So, you can see this is my Obsidian app. And if you're interested in learning more about Obsidian, I have a whole series of videos on how to use Obsidian and specifically how to use the new Obsidian Bases core plugin. For example, you can see here I have turned all of my notes into a database. So, all of my book notes, for example, are shown here in this single place with all this information, which makes it very easy for me to keep track of all my notes.

I thought that an interesting example, something I could use to show you how to build a useful app in Google AI Studio, would be an idea inbox, something that can send notes to my idea inbox to be tracked in Obsidian. Rather than having to open up Obsidian, click create new note, select apply template for the idea template, write in the note, and then leave it, which can all take a couple of minutes, what if I just had a micro app whose sole purpose was to operate as an idea intake? I could just quickly open this app from my phone, drop my thoughts down, and it would automatically get sent to Obsidian. So, let's give it a go. Let's see if I can make that with you here.

Just a quick note that if we click advanced settings on the side here, we can see that we're operating with Gemini 2.5 Pro and with React TypeScript. This is my favorite to work with to begin. And rather than just jumping right into building something, though you can do that, in my experience the best way to get started is to have a conversation, which you can also do in the chat section with Gemini, to plan the app out ahead of time. So, let's try that.
This is the initial prompt I'm going to start with, and I'll show you how it works in a moment. A few things I want to point out: I'm outlining the goal of what I'm trying to do as the first part of the prompt. Here's the core element of what I want the app to be. And then I specifically try and limit it: I say, how can we make an MVP, and also, do not create any code; we are working on the plan first. The idea with working on the plan first is that I can go back and forth and iterate with Gemini 2.5 Pro in Google AI Studio, and make sure that the plan for the app we're actually going to build is the app that I want to build.

Here we go. On the side here you can see that Gemini thought for 20 seconds and outlined this entire project. So here's the project plan: create the Obsidian idea inbox progressive web app. For context, a progressive web app basically means that if you're on your phone, you can click the share button in something like Safari or Chrome and turn that website into an application that sits on your phone's home screen. You can then open it directly from your home screen and not need to go search for it on the internet. That's what installing it on the phone's home screen means here. The sole purpose is to capture fleeting ideas and then send them to the Obsidian vault with minimal friction.

So you can see here phase one, the capture engine. We need to have this feature of the note input, which would have the user interface of a single full-screen view dominated by a clean, autofocusing text area. Great. Of course, if we're taking notes, we need to have a writing section. And here, right off the bat, you can see it's thinking of excellent features like the persistent state. It's going to use the browser's local storage, which is kind of like fleeting storage that just sits inside your browser. I believe if you restart your phone you would lose it, but otherwise it would stay there. So it can automatically save the content of the text area as you type, so that if you exit the browser and go back in, you don't lose the note you were working on. Then we get to the send-to-Obsidian mechanism, which I'll explain more in a moment because that's going to be a little more complex. And actually, I note that it already doesn't understand exactly how Obsidian works, so I'm going to get it to do some research in a moment. And you can see here it's proposing a fallback of copy to clipboard. We don't want that. That's no good.

So far, features one and two were great. Feature three was incorrect. And then feature four, make this a progressive web app, which is exactly what I was talking about. That also works. Then we can get into some AI enhancements that it's suggesting. I mean, this is powered by Google, so they're going to suggest including something like Gemini. So, you can include Gemini 2.5 Flash. I'll explain this more in a little bit, but I've made a few apps now, and it is incredibly easy to add generative AI, to add this enhance-with-AI button. For example, I could imagine it automatically analyzes the note that I typed, suggests tags that would fit that note, and then automatically adds them with the AI enhancement. A few other thoughts on UI/UX philosophy: we need speed. The whole point of this is that everything is instant, because sometimes the Obsidian app on mobile can be a little laggy to load, so I'm trying to speed up my writing flow. And then here you can see it's gone through and explained how we're going to be building all this, what the stacks are, all the different code. I hope this shows you a bit how going through and creating a roadmap is actually a great place to start, because while most of this was good, I've already done some research on how to do feature three, and I know that Google got this wrong. This is where it's really important: you can use Google as the starting point, and that's great, but if something feels a little off, or you're not quite sure if there's a better way, it's always better to push back. So I'm going to do that now for a second.
So, for example, I know that the send-to-Obsidian function is not going to work, because the browser itself, as an application like what I'm working in right here, operates in a sandbox and can't write to local files directly. But if you have an iPhone or an Android, there's some form of save-to-files, like a share button, that you can use. That might be a way for me, once I finish writing my note, to just click save to files and bypass the sandbox, because I'm using the built-in phone system. So here's another tip for how we can fix this issue: give Gemini a suggestion to please go do a Google search to understand how we should be building this app in 2025. This is called grounding your answer in a Google search, and it makes a huge difference in the quality of the AI's output, because it's able to go search the internet and get an answer to your direct question. Before you begin, I always recommend asking for some form of grounding Google search, so that Gemini can have a better idea of what practically makes the most sense for 2025, or whatever year it is that you're watching this. So, I'm going to send Gemini off to go do that. And while it does, I'm going to explain a couple more buttons here.
We can see at the top here, I have the option to edit the app name. So, I'm going to call this idea inbox and then click save. And we can see when I click save, it's saving it up here. This has now been saved to my app library in Google AI Studio. So you can always click save and then leave and come back if you want to. But if you quit the app without clicking save, you will lose your progress.

Then this button here gives you the option to copy the app. Let's say you've got something working pretty well and you want to make a big change to it, and you don't really want to break everything you've already done, or you want to add an entirely new feature. You can just duplicate the app directly, so that you can go work on the new version without worrying about breaking the old one.

Clicking this button here gives you a downloaded zip file. This one enables you to save your app to GitHub. And note that you can only push to GitHub. You can save it to GitHub as a remote repository, but you cannot pull from GitHub. What that means is you can only use this as a versioning system; you can't use it to go make changes in GitHub with a different application and then import them back into Google AI Studio.

Then we have this deploy app button, which goes into Cloud Run, which is this feature here. It basically operates like Netlify or Vercel or some other hosting platform, where you're able to host your application on Cloud Run for free, and you get up to 2 million requests per month for free, which is pretty wild. So what that means is, when the app is ready to be sent out into the public so that I can actually start using it from my phone, I'll deploy it on Cloud Run here, and then I'll be able to go to the website URL on my phone and start using it.

You can share your app, and we can switch to an API key rather than using the built-in API from Google AI Studio. Then we have the option to select the device preview, so you can choose based on your current screen size, or mobile, or tablet, and see how your app looks in different views. And you can refresh the app right here. Let's go back now and take a look at what Gemini said.
And you'll note it will start building code unless you tell it not to. It already built the first version of it. So that's pretty wild. And you can see here, I click that, click save, and it instantly just saved the note. So, it's already saving the note as markdown for me. The app is already working at some level. Now, obviously, we want to add more features to it, but it's already built a system for me to be able to just jot down a note and then save it directly to my device. So, I've already built basically a text editor, and that took like two seconds.
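For the curious: in a sandboxed browser app like this, "save the note" typically means generating a Blob and triggering a download. Here's a minimal sketch of that mechanism in TypeScript; the helper names and the filename convention are my assumptions for illustration, not the actual code Gemini generated:

```typescript
// Sketch: save a captured note as a markdown file from the browser.
// buildFilename and downloadNote are illustrative names, not the app's code.

/** Slugify the first line of the note into a safe markdown filename. */
export function buildFilename(note: string): string {
  const firstLine = note.split("\n")[0].trim() || "untitled";
  const slug = firstLine
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to dashes
    .replace(/^-+|-+$/g, "")     // trim leading/trailing dashes
    .slice(0, 60);
  return `${slug || "untitled"}.md`;
}

/** Trigger a browser download of the note as a .md file (browser-only). */
export function downloadNote(note: string): void {
  const blob = new Blob([note], { type: "text/markdown" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = buildFilename(note);
  a.click();
  URL.revokeObjectURL(url); // free the object URL once the click is dispatched
}
```

The download approach is what lets a sandboxed web app hand a file to the phone's save-to-files sheet without direct filesystem access.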
So, this works not bad, but I want to add a few more features.
I think this is really important. You can see here I just said, "Can you please create a docs folder with an idea inbox MVP roadmap that explains what we're doing, what works, and what architectural decisions we've made." If we go take a look at the code for a second, you can see it's now building a docs folder for me, where it's writing everything we've talked about so far into a roadmap file, an architecture file, and a readme file. But it didn't actually do the roadmap file. The reason I'm doing this is because, for example, right now I can click save, I can click download, and I'll get a zip file of it. I could push it to GitHub if I want to, which I'll show you in a moment. And what's nice is, let's say this system crashes, or this chat freezes, or the conversation gets reset. Even if I still had the app build here, I'd have lost all the value of the context of this conversation. This is where we get into what's called context engineering: by creating these files inside the docs folder, I'm creating a permanent set of context that every single AI chat I work with in the future will be able to access to understand exactly where we are. So, for example, I can also say: please create a development log to match the roadmap.
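To make this concrete, a docs folder like the one described might look something like this; the file names and annotations are hypothetical, not the exact files Gemini generated:

```text
docs/
  ROADMAP.md        # phases and features, each marked done / in progress / next
  ARCHITECTURE.md   # stack decisions and why: React + TypeScript, localStorage, PWA
  DEVLOG.md         # dated entries: what changed, what broke, what was decided
```

The point is that any future chat can read these three files and rebuild the conversation's context from a single prompt.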
So you can see here, I told it to do that, and it didn't work. It thought it worked, but it didn't. This is where this chat is already starting to fill up with context; there's been a lot of stuff inside of it. So I might need to reset the conversation shortly to keep the context of the chat clean, focused on whatever feature I'm working on. And that's where I can tell it to go check the roadmap to catch itself up on where we actually are. When this happens, I tell it: you did not actually change any code, please change the code. So you can see there, it didn't create the development log I asked for, because I wasn't explicit enough in my prompt. I just said, you didn't change the code, please update it. So it took that to mean: oh, there's something missing from the code, what should that be? And it added a new feature, a toggle for choosing whether or not the note should be cleared after saving. So you can see how this is a bit of a work in progress. This is using a form of agentic AI, where it's able to go in and do the code for you. It's actually writing all of this code directly for me. But it's not perfect. And that's why it's important to keep track of the docs on what you're working on as you go, so that you know where you've left off. It's important to save, and it's important to version control. For example, if you go over to the readme, you can see that we're currently on MVP v1.0. I'll add a few more features in a moment to show you how quickly this can become a very powerful tool. But before we do that, I just want to show you how we can version this by saving it to GitHub, because this is an extremely important step. I can't overstate this.
If you'd like to see me build any other apps, please let me know in the comments below, as I'm happy to show you how I would approach solving that particular problem or building that particular tool. Also, a reminder to please hype and subscribe if you're finding this video helpful. Let's keep building.

Let's click save to GitHub. So, I need to sign in to GitHub to continue. Click save. Now what we're going to do is create the repository and make the first commit. So, I'm going to call this Obsidian idea inbox, click create Git repo, and I'm getting an authentication error. This is an example of where something has clearly gone wrong. So I need to sign out and refresh, but then I'm going to lose everything we've done in this conversation so far. It's not so bad, though, because I've already saved the app here. So let's do that for a moment, and let's see if this works. Sign in to GitHub to continue. We have to give access to the repositories, so you have to install this Google AI Studio extension to your GitHub account. And at the moment, because the repository has not been created yet, I have to install this into all of my existing repositories. If you're not wanting to do this, or you're a bit uncomfortable, you can always create a new GitHub account, like I just did, specifically for Google AI Studio projects, so that you can keep track of it without giving Google access to the rest of your repositories. So, let's click install and authorize, and see if this works.

We have the option here to make this private or public. Private means that only you can see and access the repository, whereas public means that anyone can see it. So it's up to you what you want this to be. I'm going to keep it private for now and click create Git repo. Okay, so there's still an authentication error. I feel like the connection between GitHub and Google AI Studio is currently broken, so I'm just going to move past this for now.
And now you can see I've lost this whole conversation. But thankfully, I have the architecture file, the readme, and the roadmap already in here. So rather than starting from scratch, I can say, "Please review the docs and audit the code to catch up with what we were working on, and give me a concise report." I like starting a new conversation with this prompt because it starts adding valuable context into the chat, so the AI doesn't need to go back through and read all of the files all over again. Instead, it can just say, "Oh, the next step is v1.1."

But there are a few more things I want to add here. So before we jump into adding AI, let's think about what else could make this a valuable tool. I had a conversation with Perplexity about what I want to build in here. Basically, what I just did was copy in my conversation with Perplexity and say, "Here's some research I did on the app I want to build. How does this fit into what we're already building?" So now Gemini is going through and analyzing the conversation I had with Perplexity, all the different features I wanted to add, and the organization of this tutorial: as you can see, we just did the project setup in GitHub, we set up the progressive web app, and we're about to get into some other features I was interested in showing you. Great. So now I'm going to say: let's jump ahead and add the template feature.
What I mean by that is, in Obsidian I'm able to take something like this idea template. If I'm writing a note, every time I write that note, it's already prepopulated with the idea tag, which means it will automatically appear in this inbox. So, if I'm able to upload this idea template to the application here, that means I'll be able to apply the template to my ideas right away, so that every note that's saved automatically gets this template applied to it and automatically appears inside this inbox. Let's see what Gemini has to say.

So, you can see it's now adding a settings page. It's implementing persistent storage through IndexedDB, so that I'm able to save notes even if I don't have access to the internet. It's going to build in a template selector, so I can have a dropdown to choose a template for the current note, and the last selected template will be remembered for next time. And then it's going to include this standard dynamic input section with these curly brackets, so that I'll automatically be able to apply these templates with the note contents every time I want. Cool.
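Those curly-bracket sections are a simple string-interpolation pattern. Here's a minimal sketch of how such template substitution might work; the {{note}} and {{date}} placeholder names, the sample template, and the applyTemplate helper are my assumptions, not necessarily what Gemini generated:

```typescript
// Sketch: substitute {{placeholder}} tokens in a markdown template.
// Placeholder names ({{note}}, {{date}}) are assumed for illustration.

const IDEA_TEMPLATE = `---
tags: [idea]
processed: false
created: {{date}}
---
{{note}}`;

/** Replace every {{key}} in the template with the matching value. */
export function applyTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in values ? values[key] : match // leave unknown placeholders intact
  );
}

// Example: render a captured note into the idea template.
const rendered = applyTemplate(IDEA_TEMPLATE, {
  note: "App idea: quick-capture inbox for Obsidian",
  date: "2025-01-01",
});
```

Leaving unknown placeholders intact (rather than deleting them) makes missing data easy to spot in the saved note.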
So there you go. That took maybe 45 seconds or so. And we can see right now, here's the idea inbox. I think this is supposed to be the settings cog. And I can add a new template here. If I click add template, you can see that I'm able to write in particular information, and that will be applied whenever I create a new note. But as you can see, this isn't exactly the same as operating in Obsidian, where we've got this front matter here. So what I'm going to do is reveal this in Finder and drop the markdown file right in here. Rather than having to write the template in manually, ideally I would like to use a file picker to go find this idea template and select it, because that means it becomes very easy for me to refresh whatever the template is and update it, rather than having to manually type it all in here. So let's see what it does. You can see here how it's going through and just writing all the code, updating what we need for this application. And this is where Google AI Studio does a pretty good job, but you can see that it is rewriting the file every time. If you're using a different AI editor like Cursor or Claude Code, it might not rewrite the file every time; it would go and select a specific component and then fix only that. Cool. Let's see how it works.
Let's go to settings. Now we can see we have the option to add a new template, which is the button we had before, or an option to import from file. Personally, I don't like having to go search through my file browser; I like to just drag and drop. So let's add that feature in. I just said here: let's see if we can change this button from import from file, which opens up my file browser so I have to go navigate and try and find it, to instead just a drag-and-drop interface. Let's see what it does. And again, just a note: I wasn't able to get GitHub working right now for some reason. But what I would be doing is, every time I implement a new feature, I would save to GitHub so that I was version controlling absolutely everything. And if you hover over the file, it does explain exactly what's being done here. You can see that it's now introducing the drop zone file picker. Cool. Let's see if it works. So, it didn't actually display it, but I was able to just drag and drop, which honestly is pretty good for now.
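For reference, a drop zone for markdown templates usually boils down to a couple of handlers. A rough sketch of the idea, with types and helper names that are my own assumptions rather than the generated app's code:

```typescript
// Sketch: accept a dropped .md file and hand its contents to the app.
// DroppedFile mirrors the subset of the DOM File API we need.

/** Minimal shape of a dropped file (subset of the DOM File API). */
interface DroppedFile {
  name: string;
  text(): Promise<string>;
}

/** Only accept markdown files as templates. */
export function isMarkdownFile(name: string): boolean {
  return /\.(md|markdown)$/i.test(name.trim());
}

/** Drop handler: read the first dropped markdown file, ignore the rest. */
export async function handleDrop(
  e: { preventDefault: () => void; dataTransfer: { files: DroppedFile[] } },
  onTemplate: (name: string, contents: string) => void
): Promise<void> {
  e.preventDefault(); // stop the browser from navigating to the file
  const file = e.dataTransfer.files.find((f) => isMarkdownFile(f.name));
  if (!file) return;
  onTemplate(file.name, await file.text());
}
```

In React this would be wired to the drop zone's onDrop prop, with a matching onDragOver that also calls preventDefault so the drop is allowed.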
Then click done. Now, I have the option here where there's a bubble where I can click idea template. Let's see what happens. Click save note. It's going to download the file. And we can see here, it overwrote the note contents that I had and didn't make space for the actual note itself. So, this is a clear bug, because it just applied the template; it didn't apply the actual note contents.

This is where I would go give the AI feedback. I can see that what I probably need to do here is go into the template and put a note placeholder, like this. Now click save, and let's see if that works. Click save note, grab the file. Cool, that worked. So what you can see is the way it built it: if I go to settings and drag in the template, it automatically applied all of this content. The YAML here is called the front matter, and it creates these properties. You can see, for example, if I were to drag this note directly into Obsidian, it automatically applied this template, and now the note appears in my Obsidian folder. So that's great. That means the template picker and everything is working, to the point that it's very easy for me to drag in a template, modify it to just have the note contents, and everything is good to go. I can do this for any type of note I want.

But why don't we take this to the next step, just so I don't have to do that again? I'm now iterating, where I'm going to ask it: hey, instead of me having to go in, click edit, and add this note property, can we just update it automatically, so that whenever we import a file, it automatically puts that note placeholder at the end? So you can see here, it just went through and updated the app, which means I lost my local storage there. Now let's see what happens if I drag the new file in. Go grab my idea template, drop that in, click edit, and you can see it's now automatically applied that note placeholder to it.
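That auto-fix is easy to picture as a small helper that runs on import: if the template has no note placeholder, append one. A sketch, under the same {{note}} token assumption as before:

```typescript
// Sketch: guarantee an imported template contains a {{note}} placeholder.
// The {{note}} token and the helper name are assumptions for illustration.

const NOTE_TOKEN = "{{note}}";

/** Append the note placeholder if the imported template lacks one. */
export function ensureNotePlaceholder(template: string): string {
  if (template.includes(NOTE_TOKEN)) {
    return template; // already has a slot for the note contents
  }
  // Keep the front matter and any fixed text, then add the note at the end.
  return `${template.trimEnd()}\n\n${NOTE_TOKEN}\n`;
}
```

Running this on every import is what stops the "template overwrote my note" bug from ever recurring.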
Cool. So it looks like everything is working now, and we have the settings icon looking a little better. I'll click save again. This is when I would push to GitHub by clicking save to GitHub. I'm going to tell Gemini now to please go in and update the roadmap file, so that we know where we're at in case I need to refresh or the system crashes. Cool. There you go. It just updated it, and you can see that it's tracking where we're at, what's been done, and what we should be adding next. It keeps suggesting enhancements for the app as we go. So, I'm going to click save again.
Now, before we get into more features, why don't we check how this actually works on my phone? For example, this is what it would look like on mobile. Not bad. We can of course make this better. Now, to deploy the app so we can actually get this working on my phone, I would just need to click the deploy app button, which does require me to set up billing, so I'm not going to do that right this second; I currently don't have my billing connected. Basically, the idea here is that when you deploy the app on Google Cloud, it uses the Cloud Run service, which makes the app accessible via a public URL. That's great, because then I can share it and open it on my phone and everything, which I will set up later. Most importantly, the API key is not exposed in the app but is still usable by the application, so this protects your interests by not exposing your API key to the public. If you're interested, I can make another video that goes more in depth on how to actually deploy this. But for now, I think you have a good idea of how we can use this system effectively.
Let's give it a bit of an overhaul. I'm going to ask Gemini to analyze the code and the docs we have so far and see if we can make this look a little better; it's just okay right now, I would say. So, here we go. It's now going through, and it's going to give the UI a bit of an overhaul. Okay, cool. The settings wheel looks a little better. Nice. We have the import-from-file option, or the option to drag a file directly in there. Update it. Cool. This is looking pretty good. Again, the design itself, the colors, everything, you can very easily change if you need to.
But you can see how this is just a pretty simple app. Now, all I would need to do is keep saving this to GitHub for version control, deploy it, and then create a new icon on my phone's home screen, and I'd be able to start writing notes here. Then, when I click Save Note, it would pop up and ask where I want to save it to, and I would just save it into my Obsidian vault. That would automatically cause it to appear here, especially because it's not processed. For example, I have a filter here where processed is not true, so if I process a note, it gets removed from the table. The idea is that this can very easily be an inbox: I can just drop notes in as I go and they will automatically get added to my Obsidian vault. Next, as the final feature, why don't I quickly show you how we can introduce a generative element?
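The "processed is not true" inbox filter described above boils down to a few lines of logic. This is a minimal sketch, assuming notes carry a `processed` flag in their frontmatter; the names (`Note`, `unprocessed`) are illustrative, not taken from the generated app:

```typescript
// A note as the inbox table sees it; `processed` is an optional
// frontmatter property, so notes without it still count as unprocessed.
interface Note {
  title: string;
  processed?: boolean;
}

// Keep only notes where processed is not true (the inbox view).
function unprocessed(notes: Note[]): Note[] {
  return notes.filter((n) => n.processed !== true);
}

const inbox = unprocessed([
  { title: "Quantum idea", processed: false },
  { title: "Old note", processed: true },
  { title: "Quick capture" }, // no flag yet, so it stays in the inbox
]);
console.log(inbox.map((n) => n.title)); // → ["Quantum idea", "Quick capture"]
```

Marking a note as processed simply flips the flag, which drops it out of the filtered table on the next render.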
So, I just said: can you please introduce a generative AI element to the app? For example, dynamic tag generation based on the contents of the note. Basically, my thought is that if I go through and have a bunch of ideas and a bunch of notes, I can have the tags automatically generated for me by the AI, based on the content of the note I'm putting in. This is not something you necessarily want to do if you're trying to keep your ideas private, because it does involve sending content through the Gemini API, but I want to quickly show you how you can add a generative AI feature so that if you have ideas for one in your app, you can build it. You can see it's suggesting we could have AI-generated titles, automatic content summarization, and note structuring. So you could just brain dump, drop in a bunch of information, and it would automatically format it with proper headings, lists, and structure to make it more organized for you. I could imagine adding this in the future: a voice input where you just speak, give it a wall of text, and when you click save it gets fed through the generative AI, which formats it into nice paragraphs for you. These are all features that would be very easy to implement using Google AI Studio, because it's just automatically integrating Gemini for you. Cool. So you can see it just added this little sparkle button here. I wrote something quick about quantum computing; let's click generate, and it pops up with a list of AI-suggested tags: quantum computing, computer science, technology, and physics. I can just go like this and add them in, click Save Note, and I get a copy with those tags in there. As the next feature, I could say: can we automatically display the properties here and have the AI fill them in, so that, for example, when we're looking at the idea template, it'll prefill the tags for you.
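Under the hood, the sparkle button is a prompt-and-parse round trip. Here is a hedged sketch of that flow; the function names and prompt wording are my assumptions, not the code AI Studio generated, and the actual model call is stubbed out:

```typescript
// Build a tag-suggestion prompt from the note's content.
function buildTagPrompt(noteContent: string): string {
  return (
    "Suggest 3-5 short topic tags for the following note. " +
    "Reply with a comma-separated list only.\n\n" + noteContent
  );
}

// Parse a reply like "Quantum Computing, Physics" into clean tag strings.
function parseTags(reply: string): string[] {
  return reply
    .split(",")
    .map((t) => t.trim().toLowerCase())
    .filter((t) => t.length > 0);
}

// In the real app, the reply would come from the Gemini API via the
// @google/genai SDK (something like
// ai.models.generateContent({ model: "...", contents: prompt })).
// Here we fake the reply to show the flow end to end.
const prompt = buildTagPrompt("Notes on qubits and superposition...");
const fakeReply = "Quantum Computing, Computer Science, Physics";
console.log(parseTags(fakeReply)); // → ["quantum computing", "computer science", "physics"]
```

Asking the model for a constrained format (a comma-separated list) is what keeps the parsing side this simple.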
As part of operating in Google AI Studio, you can get an API key very easily by just clicking API Keys and grabbing a copy. What that means is you get a certain number of free Gemini prompts per day with your API key. So if you download this app and run it on your phone, you can just input your API key and it'll do all of this for free until you hit the limit, which I believe is maybe a thousand per day. It's quite large. I could just keep going and going. Honestly, as I go through, I get more and more ideas. And
it's cool that I can just iterate and build on them as I go. But I hope this gives you a sense of how you're able to build a fairly simple application for yourself, a personal app that really could make your life a lot easier. I know there are apps out there that do this type of thing, where you're able to send notes directly to Obsidian. But the cool part is that I can customize this to operate exactly how I want, with the ability to introduce generative AI as I need to, multiple templates, the coloring, the style, the UI. It's all
possible and very easy to do. If I
wasn't explaining this as I went, I probably would have had this done in about 15 minutes, maybe less. And I
could just keep adding more features to make it more personal, more custom for what I'm looking for. Now I have a system where I can quickly save notes to my Obsidian vault, use this processed property, and know exactly whether or not I've processed the idea into the proper note it's supposed to become. I can even add haptics in there. There's just so much you can do.
The key is that you just keep going back and forth with the AI, keep asking it to help you, and keep asking it to update the docs so that you can track where you're at. Ensure you're versioning in GitHub, and then deploy to Cloud Run if you want to in the future, or connect to Vercel or Netlify so that every change you save to GitHub redeploys and rebuilds the app for you, and you can then access it from any of your devices or share it with other people. It's honestly pretty incredible. I've been spending a ton of time in here, and I hope you find it helpful.
I hope this overview helps you understand how you can actually get started with building your own personal apps. The key is to experiment, to iterate, to recalibrate, so that you can go from a basic concept and layer on more features as you go and as you understand how to work with the AI to build the tools for you. It's kind of shocking how quickly you'll get to a point where you can actually use the tools, the apps that you're building, to solve your own real-world problems. I use it all the time. A reminder: if there are any other micro apps or personal apps you'd like to see me try and build, please let me know in the comments, as I'm happy to make a dedicated video showing how I would approach building that particular tool. If you
found this video helpful, please remember to like, hype, and subscribe.
Your support goes a lot further than you realize, so I appreciate it very much.
If you want to learn more about building with AI and using specific tools like Google AI Studio, NotebookLM, Perplexity, or ChatGPT, I've got a full AI learning playlist that shows you how I
connect my knowledge management system using something like Obsidian into the AI tools so that I can continuously make a better and better system for myself.
Thanks again for watching and I will see you in the next video.