
What Is Google Antigravity? 🚀 AI Coding Tutorial & Gemini 3 App Build

By Wanderloots

Summary

Key Takeaways

  • **Agent-First IDE Revolution**: Anti-gravity feels like the first time I'm managing different AI agents instead of just prompting another AI tool, with a smooth flow from editor to agent manager to autonomous browser that tests, debugs, and fixes apps. [00:04], [00:18]
  • **Browser Agent Automates Testing**: The browser agent moves the cursor, clicks buttons, takes screenshots, and verifies UI functionality autonomously, saving massive debugging time by testing, logging errors, and iterating fixes. [19:30], [20:52]
  • **Agent Inbox Tracks Parallels**: The agent inbox centralizes notifications from multiple parallel agents needing human approval, letting you monitor 10-30 conversations in one place without losing track. [15:09], [15:37]
  • **Artifacts Enable Precise Direction**: Artifacts like implementation plans allow commenting on specific text or images like Google Docs, directing the AI to refine plans iteratively for better, non-generic apps. [11:35], [12:00]
  • **Model Switching Beats Limits**: Switch seamlessly from Gemini 3 Pro (1.5 hours free quota) to Claude Sonnet 4.5 for debugging when limits hit, continuing workflows without interruption. [09:23], [29:17]
  • **Git Commits Auto-Generated**: AI analyzes code changes and generates precise Git commit messages automatically, eliminating the need to manually describe updates. [17:47], [33:03]

Topics Covered

  • Artifacts Revolutionize Context Engineering
  • Agent Manager Enables Parallel Orchestration
  • Shift from Coder to Agent Orchestrator
  • Browser Agent Autonomously Tests UIs
  • Anti-Gravity Beats Sandbox Limitations

Full Transcript

Google just launched a new agent-first coding tool called anti-gravity, and it honestly feels like the first time I'm managing different AI agents instead of just prompting another AI tool. As someone who's been building apps and vibe coding over the last 8 months with a whole suite of AI tools, anti-gravity genuinely feels like a game changer. The flow from editor to agent manager to an autonomous browser that can test, debug, and fix your app is surprisingly smooth. Plus, you get an agent inbox to run multiple agents in parallel while staying completely in the loop. Hi, my name is Callum, also known as Wanderloots, and welcome to today's video on Google's anti-gravity, Google's new agentic desktop coding environment.

Anti-gravity is designed around agentic workflows. You get an agent manager, parallel tasks, implementation plans, artifacts that you can comment on like Google Docs, and a browser agent that can test, debug, and even fix your app for you. If you've been playing in AI coding sandboxes like Google AI Studio and you're ready to level up to the next tier of vibe coding, or if you're a developer looking for an agent-first IDE, I think you're really going to like what anti-gravity can do.

In today's video, I'll walk through anti-gravity's key features and what the agentic workflow feels like in practice, show you how to install it from scratch and keep your data as private as possible, build a local app with a custom RSS reader pulling in articles, and then extend the app with more features to connect to a local LLM for auto-tagging and daily debrief generation. I'll also share honest takeaways, current bugs, and practical workarounds. I've been waiting for an agent-first vibe coding tool for a while, and I'm genuinely excited to show you how this all works.

If you find this video helpful, please like, hype, and subscribe, as your support enables me to continue making these videos and I appreciate it very much. Also, I know this is a new tool, so if you have any questions or there are specific features you want me to explore in future videos, please let me know in the comments, and I'm happy to make more videos to help you out. Now, let's dive into anti-gravity.

So, welcome to Google's anti-gravity. This is their new IDE, which stands for integrated development environment, and basically it's a way to bring in AI coding in a bit of a different form, because it's very focused on agentic workflows. They have a few main features, and I'm just going to briefly explain what they are for a moment before we dive right into the software itself. It's become pretty standard now, I would say, to have an AI-powered IDE with automatic tab completions and the ability to use agents for coding.

But what's cool about anti-gravity is that it introduces asynchronous agents in a bit of a different form than the other tools, and I'm excited to show you how that looks, because it's all organized in the agent manager. I truly think the way they have designed anti-gravity actually feels as though it's structured in a way that is agent-first. I've used Google AI Studio a lot. I've used Claude Code. I've used Cline. I've used Roo Code. I've tested Cursor. I've tried a lot of different software using AI for coding, and I do feel like the UI in general, with this multi-window workflow across the editor, manager, and browser, really does feel like a different experience. And as someone who's been learning to code a lot over the last eight months or so, it feels better. Personally, it makes sense for how I would like to operate.

A big part of that, too, is that there is a browser agent. This can actually go through and click around the browser for you, which lets you do a lot of things like pulling information and UI testing, which is a big one if you're building any kind of app. So, the fact that all of this fits together into one single ecosystem really does feel very seamless. I'm not going to go too much into that here, because I'd rather just show you how it looks. But you can also do a few other things. You can bring in MCP (Model Context Protocol) connections. They've introduced what are called artifacts to keep track of the context that the agent is generating and how you're planning it, which brings in what's called context engineering. And specifically, there's the concept of an implementation plan as well, which I'm going to show you in a moment and which I've been finding very helpful. In part, that's because you can actually add comments to specific elements, both images and text, so that you're better able to direct the AI as it's going through, testing, and building for you, and can target specific changes that you want to implement. All of this can be controlled from the agent manager, which I'll show you more of in a moment, and which creates a series of agents operating in parallel that send messages to your inbox so you can keep track of where each agent is at as you build multiple projects in parallel. This is a great way to keep your system organized. But with that, now let's download it and get started.

Just going to install it. Cool. Here we go. It is installed. So, I honestly love the logo. I think it's great. It reminds me a little of MATLAB, which is kind of fun. I've been testing this out on my other computer, and I thought I would actually just show you what it looks like from scratch, so that you have a first glimpse of what this looks like when you get going. So, you have the option to choose a setup flow, which means you can import your settings from another IDE. Since this is a VS Code fork, you're able to bring in a lot of the extensions, but I'm going to start this from scratch. I've actually really been liking the Solarized Light theme, but to help protect your eyes, I'm going to go with the Tokyo Night mode.

Okay. And right away, we get into some of the settings on how you want to actually use the anti-gravity agent. Now, this is pretty big, because depending on your comfort with AI coding, you may want more or fewer permissions given to the agent to drive the development. You can always change this in settings later, and you can see here that by clicking on the different options, it automatically changes the settings on the side. With all the different AI tools and IDEs I've tried for coding, they have different levels of requesting review, and it seems like at the moment anti-gravity is a little more free-flowing. So I like to add another layer of request review, just to make sure that it's actually building what I want it to build. You can see here we have the option to bring in different extensions for different coding languages and to install the command line tool to open anti-gravity. So I'm going to do all of that, and I have to sign in with Google. Here we go. Okay.

And then we get to the big setting that's going to control, I think, your comfort level with the privacy of the information that you're sending to Google. I'm glad that this happens right at the beginning of the terms of use, because of course with AI you have to be careful, especially if you're dealing with sensitive code information. What do you actually want to send to Google versus not send to Google? This is bringing in what's called telemetry, which is the process of automatically collecting, measuring, and transmitting data from remote sources for monitoring and analysis. As this is experimental at the moment, I would recommend being careful with what you're sending. I'm personally going to uncheck this, and you can change it later if you want to, but I highly recommend reading the anti-gravity terms of service to make sure you're comfortable with them, and checking out Google's privacy policy. That's just standard best practice for using new tools. And I will mention in a moment that we're going to be able to set up almost a sandbox mode for anti-gravity, to make sure that it only touches the files that we want it to touch. So, let's click next. All right, the extensions are installed and welcome to anti-gravity.

So, if you're familiar with VS Code at all, you'll notice that this looks quite similar. We've got our files on the side here, the ability to search, a Git connection, run and debug, the remote explorer, and the different extensions that we can install because this is a VS Code fork. We also have this agent manager and this agent panel on the side here. So, this looks similar to Claude Code through VS Code, or through Cline, or some other agent. And we have the option, of course, to open different panels, but you'll notice at the top here we have the option to open the browser. I'm just going to click on that to show you for a second so you can see the browser launch.

So right away, this is a pretty key difference, in my opinion, for how anti-gravity operates. Basically, what this does is bring the agent into your browser to see and interact with websites, so you can send the agent to go interact for you. It will literally move the cursor around, and I'll show you this in a moment, to click, copy, research, and, perhaps most importantly, test out your UI for you to make sure it works. I'm going to install the extension. And one thing to note here as well is that this operates within its own sandboxed Chrome. It sets up a brand-new profile specifically for anti-gravity to operate in. I do think this is best practice, because it means that if I sign in to an account here, anti-gravity will get access to that website through the authentication by remembering my sign-in information. So what you can do is create an entirely separate Google profile if you want. That's actually what I've done, so that I'm sandboxing specifically what is available for the agent to operate within via the browser control. And here are a few use cases. Let's go back to anti-gravity itself.

So we have two different view options here. We have this, which is the code view, or we can click Open Agent Manager, and by clicking Open Agent Manager it brings up a completely new view that I haven't seen in any other system. The idea here is that you can have multiple agents operating at the same time, in parallel, and then you can watch the editor to see how the changes are happening, and you can even follow along with a particular agent to see what it's doing. I'll show you that in a moment, and you can always toggle back and forth to the editor here. You can see that this just opened a new blank workspace for me. Again, this looks very much like VS Code. And if we go down to the agent manager, we start to get into a bit more of a playground. You can see that this looks a little different, because we don't actually have the code view available to us. We can open the editor up at the top here, but this looks more similar to something you might be familiar with in Google AI Studio or ChatGPT, where we're just working from a standard chat interface.

So, we have the option here to toggle between planning mode and fast mode. Planning mode, I have found personally, is what you want to use in most situations when you're first getting started. And then once you have your plan orchestrated, you can switch to fast mode if you want to. We also have the option to change our model here. You get a pretty decent limit with Gemini 3 Pro, which is Google's latest state-of-the-art model. Using it constantly, I've been able to get about an hour and a half of straight usage out of it. And once that's done, I can switch to Claude Sonnet 4.5 Thinking and get a pretty decent allowance. So this has actually been the most economical method of vibe coding for me in recent times. And worst case, you can always go down to GPT-OSS 120B, which is a pretty decent model to be running.

The cool part here is that each time you start a new conversation, you can modify which model is being used and what its state is. And the idea is that once you've had a conversation in the playground, if it's something you feel is fruitful and you're happy with, you can move it to a workspace, which is kind of like a folder or a Git workspace. It's going to have its own version control, and it's going to create an entirely separate code editor. Each workspace will be a separate coding environment. So the playground is good for prototyping or just exploring, following your curiosity to see what happens. And then if you like where it's going, you can always move it to a workspace and make it more permanent, with Git version control set up. So rather than me just explaining more about how this is going to work, why don't we actually try something?

Okay, so basically I just told it: why don't we build an RSS reader, which is basically a way to pull information from the internet automatically, and I want to see if it's able to build a little web app that can run locally, and then test it out. I've asked it specifically not to code anything yet, which is, I think, helpful, especially for Gemini 3, which is a little excited to get started sometimes. And here we go. That took about 10 seconds or so, and you can see that it's created a plan to build my RSS reader. It's suggesting using Next.js, and it created two artifacts here, which, if you remember, I mentioned before. We have this task artifact here, and if we click on the task up in the top right, we're able to get a list of all of the files that have been created as part of this project. And you can see, too, that it already sent something over to our inbox, because we have this option to proceed. This is called human in the loop, which is basically my ability to approve or deny the AI's actions. So you can see here we're in the plan and design phase, and step one was to create the implementation plan.

It has the project scaffolding and the different things that we want to do here, including premium styling, so it's interesting that it's saying that. And you'll notice, too, that as I'm highlighting things I have the option to leave a comment. So I could, for example, go here and click comment, and say: ensure we have state-of-the-art design with minimalism in mind. I can just leave that comment there, and now the review has a comment on it when I click proceed, because it will take into account this comment specifically applied to the UI here.

So, let's take a look at the implementation plan. I honestly really love the way they've designed this UI for the artifact system. The fact that it has this implementation plan, the localhost RSS reader, the goal, the user review required, the layout of this artifact for this implementation plan, is really clean and nice, and it really does show you, I think, a lot of the most important elements of what goes into building this app and explains why it wants to use these particular elements. So whether you're learning or a pro, I think this is a great way to get better at coding with agentic AI. And you'll notice all of this is happening inside of the agent manager, not inside of the editor yet. So, I'm just going to quickly read this for a second to make sure the implementation plan is good.

The goal of an RSS feed is effectively to bypass needing social media: you're able to subscribe directly to the sources that you care about and have them delivered to you. So, rather than having, for example, a single article, I can say: let's give the user the option to decide how many articles are displayed. You can see that as I'm starting to do this more, I'm thinking of myself more as an orchestrator, a manager of the agents that are going to go implement this plan, rather than the coder directly, at least at the beginning. It's nice. It's already bringing in local storage. That's great. By default, it wants to bring in light and dark mode. I also agree with that. And it's going to write an automatic test to get things going. So, that all sounds good to me, but I'm going to add one more element here.

I'll show you this a little later in the tutorial once we get to it, but I want to be able to run a local model that can parse and read the RSS feeds, the information sources, and then aggregate the information into a daily report for me. That's going to show you how we're able to use the agent to potentially spawn a new local model and use generative AI to give me a report on the type of information coming in through the RSS feed. So this is under the submit comment section. This is where, when the AI has given me an implementation plan, I can either review it or proceed. Proceed moves it forward with the comment that I have. Review tells the AI: hey, I have some comments, I want you to take a look at this, let's reevaluate, give me the plan again. I find that it's much better to iterate, to go through the plan a couple of times until you really know what it is you're looking for, before you click proceed.

This is prompt engineering that helps you, I think, get a better app by thinking through and planning, because if you just go with what the AI gave you right off the bat, it's going to be fairly generic. So, we want to push it back a little bit. You can see here it's saying, "Oh, based on the fact that I want to add this local model in the future, we're probably thinking a little too simplistically here." Let's go take a look at the task now. You can see that it added a future roadmap component called Ollama local LLM integration. And if you're not familiar with it, Ollama is a way to chat and build with open models. You can actually run it from the command line, which means it's really easy for something like anti-gravity to use an agent to go operate it. Let's see how the implementation plan has changed.

Okay, that sounds not bad. So I'm just pushing back one more time to say: is there anything that we may have missed in this plan? Let me know and I'll approve or reject it. And also, I like to start off by involving a Git repository, which I'll show you in a moment, so that we are always versioning what we're doing, and if the AI hallucinates or messes up, we can always roll back to a previous version of the app. So you can see here it's now added this as: initialize Git repository, configure .gitignore. So that plan looks pretty good.

Maybe just before we click go, I'll show you over on the side here that we have this inbox option. Basically what this is doing is saying: hey, in one of your conversations, in the RSS reader app plan, the agent is blocked from being able to go. It gives me the full chat message, the one we could see on the side in our RSS reader plan, right here, but displayed in the inbox. So now you can imagine that we might have 10 or 20 or 30 of these conversations running at the same time, and every time the agent gets blocked by something and needs human approval, it'll send it to the inbox and you can see it all in one place very easily.

Okay. So it's suggesting that we use TypeScript to begin with, which I think is best practice. We can set it up as a progressive web app so that I can use this on my phone if I want to, which would be cool. And it's suggesting using Zustand for state management, which matters if we want to add things like read later, favorites, or read history. So let's say I'm telling the LLM it can proceed. Again, we can see here that it's currently waiting for me, because it sent me this git init. This is where I can take control, because I have my settings on request review, which I just clicked in the settings button in the top right here. You can give the agent the ability to decide, or to always proceed, but I've found personally that requesting review, especially if you're learning, is very helpful. So, let's click run.

It's now proceeding. And if we switch over to follow along with the agent, we can turn that on, and then we'll be able to follow exactly what the agent is doing as it's doing it. You can see right now it's running in the terminal. If I click open terminal, it brings me over here and I can see that it's operating within the terminal directly. So I think that follow feature is pretty cool. We can also turn on review changes if we want to, and then you can see the exact code changes that are happening. So really, I think the key difference I've noticed with this compared to other tools is that you can control the level of visibility you want into how the agent is operating, or at least it's very intuitive to do that.

Okay, so while this goes, we can just have it running in the background, and my agent inbox will tell me if I actually need to do something. And what's cool, too, is I just clicked over to the editor mode and you can follow the exact same conversation on the side here. So you can toggle back and forth, see exactly what you're looking for, and say, "Oh, yep. Cool. I have a task waiting for me. What does it want?" Well, I can click on here. It's telling me that I need to install the following packages for the Next.js app. That sounds good to me. I will approve. So it is pretty solid that the AI is able to recognize what should require human input versus what shouldn't. Anything that involves the terminal tends to need that, which is pretty great.

If we look on the side here, we can see all the files that will be created as it goes through, and we can see the terminal that's running here. What I also think is cool, just while this is running in the background, is that we have the option to control the Git commits, which is always true, and you can connect this to GitHub if you want. But what I really like is that they have this generate commit message feature. If I click generate, it creates that message for me, which makes it really easy for me to not have to think about what it is I've done. It goes through, finds only the files that have changed, and then creates that Git commit message. Now you can see it's going through and building all the code, all the different components here. If we go back to the editor, you can see that we now have this public folder and the src folder with all the different components. It's built the grid already, and the feed manager. There's a lot it's doing here. And the whole time, it's just tracking its tasks inside of this artifact that it created. It's been getting errors and fixing them, running through and doing the full testing of the API to make sure that the RSS connects and everything.
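
Just to make that concrete, here's roughly what the server-side feed fetching in an app like this boils down to. To be clear, this is not the code anti-gravity generated for me; it's a sketch assuming a Next.js App Router project with the rss-parser package installed, and the /api/feed route name and url parameter are just placeholders.

```typescript
// app/api/feed/route.ts -- illustrative sketch, not the generated code.
// Assumes `npm install rss-parser` in a Next.js (App Router) project.
import { NextResponse } from "next/server";
import Parser from "rss-parser";

const parser = new Parser();

// GET /api/feed?url=<encoded RSS feed URL>
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const feedUrl = searchParams.get("url");
  if (!feedUrl) {
    return NextResponse.json({ error: "Missing ?url parameter" }, { status: 400 });
  }

  try {
    // Fetch and parse the feed server-side to avoid browser CORS restrictions.
    const feed = await parser.parseURL(feedUrl);
    const articles = (feed.items ?? []).map((item) => ({
      title: item.title ?? "Untitled",
      link: item.link ?? "",
      published: item.isoDate ?? item.pubDate ?? null,
      summary: item.contentSnippet ?? "",
    }));
    return NextResponse.json({ source: feed.title, articles });
  } catch (err) {
    return NextResponse.json({ error: `Failed to load feed: ${err}` }, { status: 502 });
  }
}
```

The front end can then call a route like this for each feed the user subscribes to, and the "articles per feed" setting from the plan is just a slice over the returned array.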

And here we go. It's now created a new type of artifact that I'll show you in a moment, called a walkthrough.

Okay, here we go. That took about 7 minutes or so for it to build this. If we click open, we have a new artifact now, this walkthrough, which you can see appears inside the list here. The idea is that it goes and explains exactly what was built. And if we go back to the tasks, we can see: okay, we've initialized now, we've done all these different features, but what I want to know is, does this actually work? And this brings in the next major component of anti-gravity, which is the browser. Let's take a look at that.

Just a quick reminder to please like and subscribe if you're finding this video helpful. I am working on making YouTube my full-time career, so any support you can give me is very much appreciated. If you want to support me further, please also consider joining my membership. Also, have you tried anti-gravity? If you have, I'd love to hear about your experiences in the comments, as I'm still learning how to use this tool and I think the more we can share with each other, the better we'll be able to use it. If you have any specific questions, or if there's anything you would like to see me make in future videos, please let me know in the comments, and I'm happy to target specific features to help you out. Now, let's keep building.

So, I'm actually going to go over to the task section here and click comment, and I'm going to say: can you please run the browser agent to check the localhost to make sure everything is working as it should? Then I'm going to send this off to review. Let's see if it's able to spawn the browser agent on its own. All right, so again, we can see that we've got this request sent to our inbox, which is basically: can anti-gravity use the browser? So, let's click set up. I've already done this. All right, here we go. It's got this blue overlay around it, which basically means, as you can see over here, that the agent is actually in control of this website and is going around and clicking on things. What it's doing right now is clicking all of the buttons completely on its own to make sure that everything works. It's getting the DOM, it's clicking on things, and, wow, it's wild that it has a fully functioning RSS feed already. And it's taking screenshots as it goes.

Okay, here we go. It just went through, and it says it added a browser verification section with an embedded recording. So, I've run the browser agent to verify the app, and it works perfectly. The agent was able to load the app, open the feed manager, add the New York Times feed, verify the articles loaded, and then click through an article. And honestly, all of that happened almost too fast for me to tell. So what's really cool is that if we go back to the walkthrough now, we can actually review the screen recording that the agent took while it added that new feed to test that functionality. I think it's really awesome that it can go in and not only test things out, but show you the proof, the evidence of it actually working. That's super powerful.

A lot of the time I spend once I build front-end apps, including in Google AI Studio, is going through and testing, making sure all the buttons work and everything works as it should. But now I can just autonomously send the browser agent to go run the app itself and make sure everything works properly. Now let's go back to the editor, and I'm going to generate a new commit just for best practices, so that we have a snapshot of this version of the app before we move on to any more features. Also, I think this is probably a good moment, since this is becoming a solid, permanent app, to move it to an actual folder. And now all the files are moving over. This is just best practice: if you've built something and you want to keep it, don't just keep it in the playground; move it over to the actual workspace itself, because it's able to bring in all of the conversations and everything. Now this has been moved into a workspace, where I can have multiple conversations per workspace.

Let's say I want to add the Ollama integration, and let's say I want to add the Zustand integration. I could do both of those at the same time by starting new conversations within the workspace. Let's just take a quick look at the app again before we start adding more features. We now have this in our RSS reader app, which just keeps things a little cleaner. Let's go back to the browser. You weren't able to see it because the browser was hidden at the bottom, but you can see that I have two versions of Chrome right now. One is the Chrome I was showing you before, but the other is a completely separate profile; it operates like a separate instance of Chrome. So, basically, it looks like we have the option here where it's pulling in Hacker News, The Verge, and New York Times technology. It has all of these cards appearing for all of the different RSS feeds that I've subscribed to. This is running on my localhost, and I have the option to click up in the top right here to change how many articles per feed I have. I can add more RSS feeds. This is honestly a pretty solid starting app. There are obviously a few more things I would like to add here. For example, this is not very easy for me to organize; I would like to have some way to filter by topics, and perhaps generate that report I was talking about. So why don't we add the feature?
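
And just to sketch what that topic filter amounts to once articles carry tags, here's a tiny example. The Article shape and the tags field are hypothetical placeholders for whatever the agent ends up generating, not the actual app code.

```typescript
// Illustrative only: filtering displayed articles by selected topic tags.
interface Article {
  title: string;
  link: string;
  tags: string[]; // e.g. ["AI", "Hardware"], added later by the local tagging step
}

export function filterByTags(articles: Article[], selected: string[]): Article[] {
  if (selected.length === 0) return articles; // no filter selected: show everything
  return articles.filter((article) => article.tags.some((tag) => selected.includes(tag)));
}
```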

Okay, so I'm giving the AI the next task to update the app a little bit more. I want to add some new features here that introduce the Ollama local LLM spawning, but also, hopefully, the ability to introduce some tags, so that I can take a look and say, "Oh, I want to filter only by AI," or "I only want to filter by Xbox Crocs," whatever that is. It just gives some more control over the information that's coming in.

Also, I want to mention that I go a lot more in depth on how I build apps with Google AI Studio, on prompt engineering, context engineering, how I like to build apps, and I also show you how to deploy apps from Google AI Studio along with using Firebase to introduce a full backend. So, if you're interested in learning more about how I build apps, a lot of the principles I talk about in my Google AI Studio videos apply to anti-gravity as well, and I recommend checking them out if you're looking for more, including the difference between Gemini 2.5 Pro and Gemini 3 Pro and how they compared when I built the same app twice. But the key difference between Google AI Studio and anti-gravity is that Google AI Studio operates in a sandbox. It does allow you to easily bring in features like generative AI, and it's really great for prototyping, but if you want persistent data storage and you want to start adding more complexity, like connecting to a local LLM, that's impossible to do in Google AI Studio. So, just as a quick workflow example, you could build the app there, like the one I built in the Gemini 3 video, download it or push it to GitHub, and then pull it from GitHub into anti-gravity to add the back end and more complexity once you've built the initial application in Google AI Studio. That's just something to keep in mind. You also get a lot of Gemini 3 quota on Google AI Studio for free, so you could perhaps combine the two: build the front end in Google AI Studio as a prototype, then send it over to anti-gravity where you can build the back end. But let's take a look at the second implementation plan.

Okay, so I'm going to send this off. I'm going to just quickly say: please add the tagging elements. And I'm actually going to go over to the side while this is operating and updating the plan, and create a new instance of the agent to start building the Zustand state, which is going to improve read later and, generally, the state management: organizing which articles have been read, which ones have been favorited, that kind of thing.
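
For context, a read/favorite store in Zustand is pretty small. This is only a sketch of what such a store might look like, with made-up names; it's not the code the agent produced, and it assumes the zustand package with its persist middleware so the state survives reloads via localStorage.

```typescript
// store/useReaderStore.ts -- illustrative sketch of a Zustand store for
// read/favorite tracking; names and shape are hypothetical. Assumes
// `npm install zustand`.
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface ReaderState {
  readIds: string[];       // article links/IDs already read
  favoriteIds: string[];   // articles saved for later
  markRead: (id: string) => void;
  toggleFavorite: (id: string) => void;
}

export const useReaderStore = create<ReaderState>()(
  persist(
    (set) => ({
      readIds: [],
      favoriteIds: [],
      markRead: (id) =>
        set((s) => (s.readIds.includes(id) ? s : { readIds: [...s.readIds, id] })),
      toggleFavorite: (id) =>
        set((s) => ({
          favoriteIds: s.favoriteIds.includes(id)
            ? s.favoriteIds.filter((f) => f !== id)
            : [...s.favoriteIds, id],
        })),
    }),
    { name: "rss-reader-state" } // persisted to localStorage under this key
  )
);
```

A component would then read from it with something like `const toggleFavorite = useReaderStore((s) => s.toggleFavorite)`.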

And while this is building out the implementation plan, I'm going to go back and take a look at my blocked RSS reader app, which is the main one I was building with the Ollama integration. So, I hope this kind of shows you that with the agent manager, you can start having multiple agents operating in parallel. While I've got this one starting on its next task, I can go over to the first one and check to make sure it updated to include the features I was looking for. Cool. Here we go.

It does look like it's added all of the tagging features. But I notice here it says requirements: this plan assumes a local instance of Ollama is running, but that is not true. So again, this is just powerful context engineering and prompt engineering, where I'm going through and ensuring that the implementation plan has the correct context for what I actually want to implement, and then using the comments to provide prompts to the AI to tell it how to update. The rest of this looks pretty good to me, so I'm going to tell it to proceed, but with that one comment. And you can see the inbox just dropped down to one. Now my Zustand state management is blocked, so let's go take a look at that plan. So you can kind of see how you bounce back and forth between the different agents. This one's already finished its next plan.

But I'm going to open the implementation plan here. And there definitely is a best practice here: I told the AI to try and build in parallel for this other feature, but you have to be careful with what you're building so you don't get into conflicts with both agents trying to edit the same file at the same time. So this is building a separate element, almost like a parallel data tracking system, that's not going to interfere with our existing one. So I'm going to tell this one to proceed, and I'm going to go over to the RSS plan. Cool. It now has the rest of it, showing that it will detect whether Ollama is running and, if not, it will spawn it.

So, let's click proceed. Now I have two agents operating at the same time. Oh, and it looks like by trying to do two things at once, I did hit the model limit here. You do get a 5-hour limit, which means that I'm going to be able to work on this again at 11:30. We do have the option here to switch to another model. I could, for example, switch over to Claude Sonnet 4.5 Thinking, but I want to show you how Gemini 3 runs at the moment. So, I'm going to pause right now, and I will continue once I get more of my Gemini 3 quota.

Okay, so it's now the next day. I can click dismiss on the model rejection, and you can see here that when I click back over to the other agent chat, I'm good to go. Rather than trying to run these both at the same time, just for the purposes of this tutorial, I'm going to focus on getting the Ollama local model running rather than getting into Zustand. Normally I would click go on both at the same time, but I just want to make sure I can finish this tutorial for you. So let's take a quick look at the updated plan.

We can spawn Ollama, which means that it'll start the local model if it's not already running, and then it's going to generate tags and then generate a daily report. So, it's honestly pretty easy to do once you've installed it.
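
To give you an idea of what "spawn Ollama if it isn't running" means in practice, here's a minimal sketch. It assumes Ollama is installed locally and serving its default HTTP API on port 11434; the helper names are made up, and this isn't the code the agent wrote.

```typescript
// lib/ollama.ts -- illustrative sketch of detecting/starting a local Ollama
// instance; not the agent-generated code. Assumes Ollama is installed and
// exposes its default API at http://localhost:11434.
import { spawn } from "node:child_process";

const OLLAMA_URL = "http://localhost:11434";

async function isOllamaUp(): Promise<boolean> {
  try {
    const res = await fetch(OLLAMA_URL); // root endpoint responds when the server is running
    return res.ok;
  } catch {
    return false;
  }
}

// Hypothetical helper: start `ollama serve` in the background if needed.
export async function ensureOllamaRunning(): Promise<void> {
  if (await isOllamaUp()) return;
  const child = spawn("ollama", ["serve"], { detached: true, stdio: "ignore" });
  child.unref(); // let the server outlive this process
  // Poll briefly until the API responds (or give up after ~10 seconds).
  for (let i = 0; i < 20; i++) {
    if (await isOllamaUp()) return;
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  throw new Error("Ollama did not start in time");
}
```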

Cool. So, the rest of this plan looks pretty good to me. Since it got cut off by the quota, I'm going to tell it to please continue. And this is where it's nice that it's keeping track of itself in its own tasks here, because even though the model ran out, it can now just continue where it left off. And if I wanted to, I could have switched to Sonnet 4.5 or 4.5 Thinking and it would have continued building the same roadmap that Gemini had set up. So this is honestly a big improvement overall, I would say, in the context engineering that anti-gravity enables, which I've struggled with a little more in other tools like Claude Code. Let's see how it goes. I actually had an issue with anti-gravity here where I needed to restart my computer to get it going, but now it seems to be working fine. So again, I love that you can see exactly where the agent is at any given moment. The follow mode is really cool: you can see if it's in the terminal, if it's in the browser, or if it's editing the code directly, like it is now. It just adds a layer of transparency that I've generally been missing from AI tools.

Cool. So, that took about a minute or so, and it's already built the connection and AI tag extraction for Ollama. Now, it's just updating the front end to display it. I can also instruct Gemini to commit changes to the Git repository for version control at each subtask if I want to, and then that would pop up in the inbox over here, waiting for me to confirm the git commit before it proceeds with the next step. So if you're interested in learning more about how Git works, how it integrates with GitHub, and how version control best practices operate in general, let me know in the comments, and I am happy to make a dedicated video on GitHub and Git.

Okay, so I was just running into a bit of an error there using Gemini, so I switched down to Claude Sonnet 4.5 Thinking. This is one of the benefits of using a tool like this: you're able to just quickly switch the model. It seemed like Gemini was running into some issues with debugging, so now I have Claude Sonnet working on the debugging. You can see there's an error on the red line here and a few errors at the bottom here that it looks like Claude has already fixed. Cool. Okay. So I was able to switch down to Claude Sonnet 4.5 Thinking to finish up building the Ollama integration. Let's see how Claude operates with the browser sub-agent. All right, let's click go.

It's still a bit surreal to watch it just pop up like this. Okay, it looks like we now have this daily briefing button. That's new. You can see it's already pulled the most recent days. We have the option here to switch between the different Ollama models, which is pretty cool. So, I'm going to click it and test out the daily briefing myself. Okay, it seems like I'm running into a few issues here, so I'm just going to go through and keep debugging. And I guess, just as a general tip, if you want to conserve your Gemini 3 usage or if you're running into issues, you can always switch between Gemini 3 and Claude Sonnet 4.5 Thinking, perhaps having Sonnet debug and then using the browser testing with Gemini, so you can conserve your credits in both. So I just added a new analyze with AI button. Let's see if it's able to click it.

Cool. So there you go. You can see that we now have this analyze with AI button, where it takes in all of the articles that are displayed here. It will scan the title and description of each of them, and then it will pull key tags that align with the article itself. And then, if I want to, I'd be able to generate a daily report that finds all the articles on today's RSS feeds, which can be added to the side. Here you can see I have the option to toggle between different models depending on the complexity of what I'm doing. And again, this is something that could not be done with Google AI Studio, because you can't spawn a local model within the sandbox of Google AI Studio. So I still have some tweaking to do. I'm going to improve the daily briefing report to make sure it works correctly, and I'm going to improve the ability to save and bookmark articles, so that I can start to create a store of local data that I can feed into the Ollama model to let it know what articles I'm most interested in, and then my RSS feed could get smarter over time. All of this would be happening 100% privately, locally, on my computer.
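
For a rough idea of what the tag extraction and daily briefing come down to, here's a sketch of a call to Ollama's local /api/generate endpoint. The endpoint and request shape come from Ollama's API; the model name, prompt, and function name here are placeholders rather than what was actually generated in the app.

```typescript
// lib/analyze.ts -- illustrative sketch of asking a local Ollama model for
// article tags; the prompt, model name, and function name are placeholders.
interface OllamaGenerateResponse {
  response: string; // the model's full text output when stream is false
}

export async function suggestTags(title: string, description: string): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2", // any model you've already pulled locally
      stream: false,     // return one JSON object instead of a stream
      prompt: `Suggest 3 short topic tags, comma-separated, for this article.\nTitle: ${title}\nDescription: ${description}\nTags:`,
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as OllamaGenerateResponse;
  return data.response.split(",").map((tag) => tag.trim()).filter(Boolean);
}
```

The daily briefing would essentially be the same call with a different prompt: pass in the day's titles and summaries and ask for a short digest.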

And just overall, I want to say that there is a lot we can do with anti-gravity. For example, if we go into the editor mode, you can see there are customizations. We can start adding rules, which basically give more personalized experiences for the agents; I didn't touch on that. We can set up custom MCP servers. We can export the app and download diagnostics. There's a lot we can do here. So, if there's anything you have questions about, please let me know in the comments, and I'm happy to make more dedicated videos. I hope this shows you overall how this tool can work. Yes, it did seem to be a little buggy this morning when I started using it, but that is to be expected, I would say, for a new tool. Just keep that in mind. And I was able to solve all of my problems with Claude Sonnet 4.5, even when Gemini 3 wasn't working properly. So, it is nice that you can bounce back and forth between these different models.

So, overall, after using this for about a week, I'd say my key takeaways are that I really love the flow between the editor, the manager, the inbox, and then the browser: the editor for getting a deeper glimpse into the code itself, the manager for tracking multiple agents working on something, and the playground mode where you're able to test stuff out. But for me, probably my favorite part is the browser. The fact that we're able to have the agent come in and just click on the buttons and test stuff out for you saves me so much time on debugging. It's pretty wild that it's able to go and just find everything, click it, test it out, bring the error logs into anti-gravity, debug, update, iterate, push it again, and test it. It's this autonomous system where you can just click go and have it make a bunch of changes, and if you're happy with it, you can click commit and save a snapshot of that update.

Another element is that I really like the generation of the Git commit messages. I think it's really nice that we don't have to think about what to write in there; the AI will automatically analyze the changes and then create the commit message for us. And I also really like the artifacts and the annotation mode, where I can highlight something and then add a comment, kind of like I'm working in Google Docs. It just makes for a really easy way to direct the AI and manage it a little better. And something I didn't even get into that I'm excited about is that we can generate front-end UI mockups with Nano Banana Pro, which is now in anti-gravity, and then give that image to anti-gravity and have it convert the Nano Banana mockup into a full website or a full application. So, if that's something you're interested in, let me know and I will make a video on how to use Nano Banana with anti-gravity.

I'm honestly extremely impressed with the UI and the agentic workflow in anti-gravity. It definitely has some bugs and some issues that need to be worked out, but I would say this is a pretty solid start. If you found this video helpful, please like and subscribe, as I really appreciate your support. If you're interested in seeing how anti-gravity fits into my broader app building and vibe coding ecosystems, I recommend checking out my AI learning playlist or my Google AI Studio playlist, as I go into a lot more depth on how to strategize your app building and use AI to help you out. Thanks again for watching, and I will see you in the next video.
