Google Breaks AI with EVERYTHING FREE! 25 Things It Now Does for You
By Alejavi Rivera
Summary
## Key takeaways
- **Gemini Flights Finds 67% Discounts**: In Google Flights, Gemini AI explores deals from flexible prompts, like a 4-day break in February with direct flights to eat well and relax, showing options to Lisbon or Vienna with up to 67% discounts in 3 seconds. [01:11], [02:03]
- **NotebookLM Creates Infographics**: NotebookLM generates infographics and presentations from 18 information sources on memory improvement, using the Nano Banana model for images and perfectly structured slides. [02:24], [02:58]
- **Gemini Images Combine Uploaded Photos**: Upload photos of yourself, Mount Everest, a Mexican hat, and a neoprene suit; Gemini creates a selfie on the snowy peak in a rain suit and red hat, then iterates to add a flag with an academy logo. [05:37], [06:51]
- **Flow Turns Images into Videos**: In Flow, generate unicorn-on-the-beach images at zero credits, then convert them to a video transitioning from bad, rainy weather to good weather. [13:51], [15:05]
- **Opal Automates LinkedIn Posts**: Opal builds a mini-app for automotive-niche LinkedIn posts: the user specifies a topic like fuels, the AI researches trends and generates 5 ready posts with hashtags. [23:13], [24:32]
- **Stitch Designs Functional Apps**: Stitch creates professional app designs for productivity with calendars and graphs, then exports to Google AI Studio to build a fully interactive, functional app. [27:49], [29:48]
Topics Covered
- Google integrates AI seamlessly into daily life
- NotebookLM auto-generates infographics and presentations
- Gemini Canvas crafts editable visual presentations
- Opal automates custom workflows instantly
- Run Google AI models locally without limits
Full Transcript
True value can't be bought, but it can be revealed through what you know how to use. Google has
stepped on the gas, creating a clear gap with its competitors. Its ecosystem is enormous, especially considering it has over 50 published artificial intelligence tools. Since the beginning of 2025, they've released something new almost every week, and some months even saw more than 30 new features. But which ones are truly useful, and when should we use each one?
In this video, we'll be looking at the 25 most useful Google tools with practical use cases you can apply today. I've categorized them into seven sections, covering everything from creative to professional applications, to help you discover their potential. And, as is typical for Google, everything is completely free. Sounds good? Let's start with the first new feature.
Google currently has the best AI model ever created. They know this, and that's why they're integrating it into each of their products. One of the latest platforms to incorporate it is Google Flights, which you might already know as a flight search engine. But now, if you look at this section here, they're asking if we have any flexibility and whether we should try using their artificial intelligence, powered by Gemini 3 models, to find our ideal flight. If I click on "explore deals" and tell it where I want to depart from, for example, Malaga, it will immediately
show me the 30 best deals available. But this can be improved even further, since, as we know, we're using artificial intelligence. To do this, if I tell it in the prompt section that I'd like a 4-day break in February and that I'm looking for a deal with direct flights where I can go to eat well and relax, when I click "search," Gemini will start to understand the trip
and in about 3 seconds it will be showing me a bunch of flight deals, like going to Lisbon, Las Palmas de Gran Canaria, Vienna, and other destinations. But
look at the discounts it found for me, even starting with offers that have a 67% discount. With these examples, we can see how Google doesn't just want to have a very powerful model, but also seeks to integrate it usefully into our daily lives.
Let's jump now to the second new feature, seeing how NotebookLM is now able to create infographics and presentations. For this, I just came here to this notebook I have prepared about key factors for improving memory. But if we go to the right-hand side, we'll find two more new features: infographics and presentations. If I
click on each of these, a few seconds later I'll see an infographic created by Notebook LM, starting here with the key points of the most relevant aspects from the 18 information sources I've included. There, as you can see, we have an image, and that's because it's also using the Nano Banana Pro model. Furthermore,
if we go to the presentation section, we'll also see a bunch of slides where all the images were also created by Nano Banana, and we'll have perfectly structured information that we could even present to someone. Now let's look at the third new feature: Gemini is gradually incorporating the ability to make these images interactive. For example, if we ask it how the neurons in the brain work, Gemini processes it and creates an image for us. But the most interesting thing is that when we click on each part, we can see both the definition and additional information.
Looking at the fourth new feature, we can also see that it includes artificial intelligence that could call us on the phone. For example, if we're looking for electric guitars near us, by clicking "Get Started" we could set our preferences for what we're looking for, how we want to contact them, our location, and Google will then
contact those stores to greatly expedite our search. This last advancement is already becoming available, although for now only in the United States. Of course,
so many advancements are released that it might not be available or might not be entirely useful. Therefore,
we'll continue with the different categories to see which tool we should use depending on the intended use case. Keep in mind that we'll be looking at many tools, and that's why I've prepared this document here, where I've compiled everything you'll see in this video, plus any updates that may occur
after its release. So now let's move on to the first, more creative section about images.
Gemini, with its recent release of the Nano Banana Pro model, allows us to do practically everything. To see this, if we go to Gemini, by clicking on the tools section, we can find the option to create images. If we want to start from scratch with something we're designing, like creating minimalist logos for a project we want to begin, when we click "send," it could soon
generate different variations, such as ones based on speed, echo, light, rhythm, strength, more fluid logos, growth, and silence. And look,
even though I haven't given it any information about the logo's purpose, Gemini has already analyzed it and created several very different alternatives so I can see which one fits best. Let's go back to a new chat. I'm going to click again on the "create images" section, because now I want to show you a use case for its image combination capabilities. If I click on the "plus" icon and then on the "upload files" section, I can upload different images: one of me pointing, Mount Everest, and, why not, a Mexican hat and a neoprene suit. Once all these images are uploaded, I can tell it to add the person, in this case me, taking a happy selfie on top of Mount Everest in a rain suit and red hat. I click send, and a little while later, there's the first attempt. Look at the result we have here, perfectly integrated,
where I'd be on this snowy mountain in my rain suit with all those snowy details, the red hat, and my phone as if I were taking a selfie. With this,
whether we want to edit existing images or even combine various materials in our creative workflow, with Google we can now do it in a matter of seconds. What's more, if we go back to the chat, I can keep iterating. Now
imagine that in this photo I wanted to appear holding a flag with my academy's logo.
Well, I can simply write here to add that I'm holding a flag with this logo. And if I add, for example, this image here and upload it again to the chat section, when I click send, a little later we can find this other perfectly integrated photograph. And look at all the details with the snow, the backlighting, and in the end, it even looks like I'm actually holding that flag. As you can see,
we can also create more professional use cases using our project logos. But
if we want to do something more professional, we have to switch to Pomelli, another platform you'll find in the guide, and this one is the only one we need to use from the United States. However, I've also included a free tool so you can simulate being there. So, with that active, if we click on "Let's get started," it will basically tell us that it's going to analyze the entire brand image of the project we share.
To do this, we click "Let's Go" and then enter the URL of the project for which we want to create content. Here, for example, I'll be using my academy. By pasting it here and clicking "continue," it will start analyzing the entire brand image. A little later, it will have extracted everything, such as the brand name, logo, font, colors, focus, brand values, and many more details. And not only that: it has also pulled in all the images we use in our academy, so we can finally start creating professional content based on the actual brand image. Now,
all we have to do is click this button here, and it will automatically generate a bunch of designs that we can use on social media or even in advertising campaigns. We can use this for ourselves or to offer a service to other companies. For example, if I enter any campaign in the campaign section,
like this one (which is real, by the way), where we're extending a 24-hour Cyber Monday offer, an €80 Black Friday coupon with the code "Black," for our exclusive 8-week course for 25 people, and I click on "generate ideas," Pomelli will give us different approaches. If I click on one of them, it will start generating four different designs, and after
about 40 seconds, it will be created here. For example, here's this first design, which fits perfectly with my academy's brand image, as you can see here. And ultimately, alongside this one, we have other designs that truly fit together perfectly. Now, if we go into these, we can edit anything here. For example, if we wanted to remove the word "intensive," we would start applying it here, and about five seconds later, we could see the change.
We could professionalize what we've just seen and share it with the world through real campaigns. That's why
I want to introduce you to Chatfuel, an all-in-one platform that will respond for you. They've just launched a new feature where every click on your ad opens a chat and responds instantly. It doesn't matter if you receive 10, 50, or 200 messages; none of them get lost. To do this, we can go to their platform and use their AI, called Fuely. By selecting the option to respond to ad messages, the AI will take care of everything. With this, when someone contacts us via WhatsApp through an ad, the AI will handle the response. But if you want to further personalize the conversation depending on the type of ad it comes from, you can do that too. To do this, going back to the platform,
we can create a new flow where the artificial intelligence responds. And
now, going to the keywords section and selecting the flow we just created, we can also filter by the keywords that our ad messages contain by default. And thanks to Chatfuel, the sponsors of this video, they've given me a special coupon called Alejavi, which you can use to get premium features for a month completely free.
I'll leave the special link below in the description. With that said, let's jump to another Google tool that lets us create many images at once with the Nano Banana model, and without limits. To do this, we have to jump to Mixboard, a tool that, if we click "get started" from here, will let us design in a way very similar to NotebookLM, only with images. Look, if I write something like, "
I want to create a company Christmas dinner, and we're a car repair shop," and click send, it'll start analyzing our prompt and begin creating a bunch of images. See, different elements are already appearing here, like cocktails, gifts, what the table setting could look like, even including details like an engine in the middle, and basically a ton of ideas for us to start working on. But the best part is that I'll be able to combine everything. Look, if I, for example, take this table with the engine, this plate of food,
and this appetizer, when I select these three images, I can tell it, "Add this food to the table." I send it, and in about 10 seconds, we'll see how, for example, this plate here would be perfectly included in this section. But what's more...
We can continue iterating, because if we click on the image, we'll see different options like getting more images like this, regenerating it completely, or even if we click on this little pencil here, we can draw anything. Here, I'm actually going to draw this circle with this design, like a car tire. And when I click on save and
then on the image, I tell it to add a giant car wheel. When I click on send, this wheel will start being added here. And with this, we really won't have any limits to our creativity. Whether we want to combine images, create many at once for inspiration, or even make small modifications, you can do it all for free from here. Another
very interesting tool that also has additional features is Google Whisk. This
platform, if we click on "add tool," will show us how, from the left-hand section, we can add people, scenes, and styles. Here, in fact, we could either write something to be generated by artificial intelligence or upload our own photos. In this case, I'm going to upload this photo of mine. In the scene, I'm going to use the same image of Mount Everest. And now, from the style section, I could also upload any other image, or by
clicking on this side here, the plush style has already appeared. Now, with this style, if I click on "generate," a little later I'll find a photograph like this one, which perfectly follows the green dinosaur plush style we had before. But the most interesting thing is that from this tool, we can continue iterating simply by going here to make the changes we want, or even by clicking on "animate." We'll enter this other visual from
which, in a limited way, we can create some videos. Here, I'm actually going to tell it to get up happily and point at the mountain, and if I click on "send," we could find a result like this. Look at that incredible view! We've reached the summit. Now,
for the entire video and audio section, it's best if we skip to the next audiovisual block.
We're going to start this block with what, for me, is one of the best tools if we want to create both images and videos from a single place. To do this, we need to switch to Flow, a tool from which we can create not only videos but also images, and for now, it's free and costs zero credits. In fact, if we click on "create image,"
notice that here in the settings section, we can use the Nano Banana model at zero credits. And now, if I write something like "unicorn on the beach" and click "send," a little while later it will have generated this image here, which, to be honest, perfectly followed my instructions. But the most interesting thing is that we can iterate on all of this, whether we want to add specific annotations with the changes we want
to make, or put it directly into the window. And here, in fact, I'm going to tell it that the weather is turning bad and it's raining, and if I send it, a little while later I'll find this other result here. But the best thing of all is that this platform doesn't just create or edit images; it also brings them to life. To do this, if I click on exit, and once I have these two images created, I can click to convert from image to video and select how I want it to start and end. Imagine I want it to start with the unicorn in bad weather, and then transition to the same scene, but with good weather. So, by describing it here as a unicorn on the beach checking the weather, and clicking send, it will start processing to produce a result like this.
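The image-to-video step described above can also be scripted against the Gemini API's Veo models. This is only a sketch under assumptions: it uses the `google-genai` Python SDK with a `GEMINI_API_KEY` environment variable, the Veo model id may differ by release, and the `transition_prompt` helper is a made-up convenience for illustration, not part of Flow.

```python
# Sketch: Flow-style image-to-video via the Gemini API (Veo). Assumptions:
# google-genai SDK installed, GEMINI_API_KEY set, and a current Veo model id.
import os
import time

def transition_prompt(subject: str, start: str, end: str) -> str:
    """Describe a clip that starts in one state and ends in another."""
    return f"{subject}, starting with {start} and transitioning to {end}."

def animate(start_image_path: str, prompt: str, out_path: str = "clip.mp4") -> None:
    """Kick off a Veo generation from a start frame and poll until it finishes."""
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    with open(start_image_path, "rb") as f:
        start = types.Image(image_bytes=f.read(), mime_type="image/png")
    op = client.models.generate_videos(
        model="veo-2.0-generate-001",  # model id may differ by release
        prompt=prompt,
        image=start,
    )
    while not op.done:  # video generation is a long-running operation
        time.sleep(10)
        op = client.operations.get(op)
    op.response.generated_videos[0].video.save(out_path)

# Prompt matching the demo above (animate() is not called here; it needs an
# API key and a real start frame):
demo = transition_prompt("a unicorn on the beach checking the weather",
                         "bad, rainy weather", "good weather")
```

Calling `animate("unicorn_rain.png", demo)` would then produce the rainy-to-sunny clip described in the video, assuming the model id and key are valid.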
Another thing we can do with Google tools is generate music using artificial intelligence. To do this, going to Music FX, I can add any type of song I want to create. By
default, it has given us an example of a winter afternoon, afternoon tea, soft vibes, and a spacey tempo. So, I'm going to press the tab key to keep this description.
We could actually add descriptions of the instruments we want to appear or the environment we're in, so it can generate the music to accompany us. But in this case, I'm going to leave it like this, click on generate, and in just about 20 seconds, it has generated three songs. Let's listen to a few seconds of the first one.
Look, whether you want to create music for yourself or you have a business and want to add some more original elements, you can create your own music using artificial intelligence. If we go back to the platform, notice that we have a maximum duration of 30 seconds, but there's actually a way to make the duration unlimited. If we click in the upper left, we'll find a platform called MusicFX DJ. Clicking on it will take us to a mixing console where we can select instruments, and it will automatically generate the
melody without ever stopping. Here I can actually click play to start generating music with these instruments, but now I can add other things to it, like a piano, which I'm going to increase.
And along with the instruments, I can also add music genres, for example, electronic. I increase that too.
And we could have already heard it somewhere. Keep in mind that as long as I don't stop this section, it will keep generating the song indefinitely. You can also change different aspects like density, brightness, chaos, as well as other adjustments like BPM or even pitch.
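Conceptually, the mixing console works like a set of weighted prompts (instruments, genres) plus a few global knobs (BPM, density, brightness, chaos) steering a continuous stream. The sketch below only illustrates that idea; `DJMix` and all of its fields are invented names, not Google's actual implementation or API.

```python
# Illustrative model of a MusicFX DJ-style mix: each instrument or genre is a
# weighted prompt, and global knobs shape the never-ending stream. Purely a
# conceptual sketch; none of these names come from Google.
from dataclasses import dataclass, field

@dataclass
class DJMix:
    prompts: dict[str, float] = field(default_factory=dict)  # name -> slider in [0, 1]
    bpm: int = 120
    density: float = 0.5     # how busy the arrangement is
    brightness: float = 0.5  # timbral brightness
    chaos: float = 0.0       # how often the groove mutates

    def set_level(self, name: str, level: float) -> None:
        self.prompts[name] = max(0.0, min(1.0, level))  # clamp slider to [0, 1]

    def normalized(self) -> dict[str, float]:
        """Relative influence of each active prompt on the next audio chunk."""
        total = sum(self.prompts.values())
        if total == 0:
            return {}
        return {k: v / total for k, v in self.prompts.items() if v > 0}

# Mirror the demo above: raise a piano, then layer in electronic.
mix = DJMix(bpm=124)
mix.set_level("piano", 0.8)
mix.set_level("electronic", 0.4)
weights = mix.normalized()  # piano dominates the blend
```

Raising or lowering a slider just re-normalizes the blend, which matches how the on-screen console lets you push one instrument above the others without stopping playback.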
In short, a very complete tool that's also free. Let's
gradually move away from the more creative aspects to something we can actually use in projects or our daily lives. For that, we're going to switch to Google AI Studio. And once we land on the platform, continuing with the audiovisual section, if I click on "Chat with models," you'll see that above the model section we have a bunch of options: Gemini, video, images, or even an audio option. If we click on it and select the most powerful model, Pro, we'll have a free, unlimited AI voice generator with good results. In fact, if I, for example, change the settings here to listen to audio from a single person and enter this phrase for it to read,
I could now choose from several voices. In this case, I'm going to stick with the first one that appeared. I could even select the temperature, that is, the level of creativity I want in the AI voice. And with that, if I click on Run, about 10 seconds later we'd hear this audio: "But anyway, who is Alejavi? It seems like he's talking about all the free things Google offers." Not a bad result at all, especially considering we'll have complete control over the output.
If you look at this section above, you'll see I can modify it with any other instruction. For example, if I tell it to read as if it were whispering, and I click on Run, we could get a result like this: "But anyway, who is Alejavi? It seems like it's talking about all the free Google stuff."
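The same style-controlled voice generation is exposed through the Gemini API. A minimal sketch under assumptions: it uses the `google-genai` SDK, a `GEMINI_API_KEY` environment variable, and the preview TTS model id (which may change); the `styled` helper is a hypothetical convenience, and the API returns raw 24 kHz, 16-bit mono PCM that needs a WAV header before playback.

```python
# Sketch: AI Studio-style speech generation via the Gemini API. Assumptions:
# google-genai SDK, GEMINI_API_KEY set, preview TTS model id still valid.
import os
import wave

def styled(instruction: str, text: str) -> str:
    """Prefix a delivery instruction, e.g. 'Read as if you were whispering'."""
    return f"{instruction.rstrip(':')}: {text}"

def synthesize(out_path: str = "whisper.wav") -> None:
    """Generate whispered audio and wrap the raw PCM bytes in a WAV file."""
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    resp = client.models.generate_content(
        model="gemini-2.5-flash-preview-tts",  # preview id, may change
        contents=styled("Read as if you were whispering",
                        "But anyway, who is Alejavi?"),
        config=types.GenerateContentConfig(
            response_modalities=["AUDIO"],
            speech_config=types.SpeechConfig(
                voice_config=types.VoiceConfig(
                    prebuilt_voice_config=types.PrebuiltVoiceConfig(
                        voice_name="Kore")))),
    )
    pcm = resp.candidates[0].content.parts[0].inline_data.data
    with wave.open(out_path, "wb") as w:  # raw 24 kHz 16-bit mono PCM
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(24000)
        w.writeframes(pcm)
```

Swapping the instruction string ("read cheerfully", "read slowly") reproduces the style control shown in the demo without touching the rest of the call.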
Let's move on to the next module: learning. Gemini has launched a ton of features so we can learn practically anything we want. Let's
start with the first one, and for that, we need to jump to Learn About, a platform where we can upload anything from a PDF that's difficult to understand to simply inputting the idea of what we want to learn so it can break it down for us. To see what it's capable of, I'm going to tell it I want to learn everything about volcanoes. Click submit and a little
later Google will show us this result, which includes dynamic keywords, a table with relevant information, and even interactive games to help us see if we're really understanding each of these concepts. Keep in mind that the information on this page will be in English, but you can always right-click, select "translate to Spanish," and then view it in your language.
Also, remember that once you reach the end of the results, you can continue exploring further, either with the recommendations provided here or even the index that appears in this section for learning. Another essential tool in this section is Notebook LM, which allows you to upload any type of information
you need to learn. You can also conduct research from here, both quick and in-depth. Once you've entered all the information into the notebook, you can chat with the AI, similar to a GPT chat, but it will respond only with the information you've provided. Beyond
that, you can also create various formats, including audio, video, and mind maps, like this one here, where you can expand on each concept.
We can also create summary reports, flashcards, and basically a whole lot of other things, all from one place. Regarding NotebookLM, keep in mind that I already have a detailed tutorial here explaining it from start to finish. So now let's move on to another Gemini feature for creating presentations. If we go back to Gemini, click on the tools section, select canvas, and tell it to create a very visual presentation full of details and images with this information (which, by the way, I'll be using from a summary report I created right here), I'll click copy, paste it here, and send it. Having done this, a little while later I'll have a presentation that really does have a lot of infographics, images, and details, just as we
requested. Here, in fact, we can see on each slide how much information we actually have. You might be wondering where we should actually create presentations, since NotebookLM can also do that. But I would say NotebookLM is only for our personal things, because we won't really be able to edit anything. In Gemini, however, we will have total control, since beyond being able to view the presentation from here, I could click on "export to presentations" and a little later go to "open presentation." By doing so, I would enter Google Slides, from where I can modify each of the things that the artificial intelligence has added, having total control over the final presentation. Beyond presentations,
we can even create books. And yes, if we go back to Gemini by clicking in the upper left, we will have a section called "Gems". Notice that from here we can create our own custom assistants or even reuse some of those that Gemini already provides.
For example, here we have one called Storybook, which is specifically for creating books. And
if I were to copy the entire summary report on improving memory, learning, and cognitive stimulation from here, paste it, and send it, well, a little while later, on the first try, Gemini would have created a story titled "Sofia and the Memory Palace," where this girl appears alongside this seahorse. As we progress through each
page of this book, we can see how it maintains consistency among all the characters in the story and even begins to explain complex aspects of the report in a very simple way. With this, whether we want to transform the information into books for young children or even easily understand difficult concepts, this function allows us to do it quickly. Let's move on to the automation section. Google also
offers a complete suite of tools to automate our work. To
begin, let's look at the first one, called Opal, a Google platform from which we can submit a request for what we want to automate, and it will create a mini-tool that works according to that requirement. For example, if I write here that I want to create an application that generates LinkedIn post ideas for my automotive niche, specifying that the information must be relevant and up-to-date, and then click submit, Gemini will start processing it. A little while later, it will have created
this automated workflow with four parts. First, the user would specify what they want to talk about. Then, the in-depth research AI would investigate the trends we could publish, then it would generate the content itself, and finally, it would create that mini-application with all the information. To see how this works, I'm going to click preview. And look how
it has already added all of this. So now I'm going to click start. And now I simply have to specify which topic I want to focus on within this automotive niche. Here, I'm going to tell it, for example, to talk about fuels, and if I send it, Gemini will start thinking, and look how it would go through each of these blocks, starting first with the
research. And just a couple of minutes later, we already have the result here, so I'm going to zoom in a bit more. And look how we have information here about fuel in the automotive niche. Here, in fact, we see different content ideas that we can explore.
And specifically, if I scroll down, I could find up to five posts, all with their hashtags and information, and now I could simply copy and paste them directly from here to my social media. Because with this, I know that if I want to automate my social media, create images, videos, or anything else that Google can do, with Opal you'll have everything unified so you can interconnect all its technologies, creating mini-applications. Another thing to keep in mind is that Google has historically been the number one search engine, so it has the best functionality for in-depth research.
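The four-block workflow Opal assembled above (topic in, research, content generation, mini-app out) can be pictured as a simple pipeline. The toy sketch below stubs the research step with canned angles; in the real tool each block is backed by a Gemini call, and every function name here is invented purely for illustration.

```python
# Toy sketch of the Opal workflow built above: topic -> research -> content
# generation -> ready-to-paste posts. The research step is stubbed; nothing
# here is Opal's actual API.

def research(topic: str) -> list[str]:
    """Stand-in for the deep-research block: trend angles for the topic."""
    return [f"{topic}: efficiency myths", f"{topic}: 2025 regulations",
            f"{topic}: cost comparison", f"{topic}: future outlook",
            f"{topic}: maintenance tips"]

def write_post(angle: str) -> str:
    """Stand-in for the generation block: one LinkedIn post with hashtags."""
    tag = angle.split(":")[0]
    return f"{angle.capitalize()} and what it means for drivers.\n#automotive #{tag}"

def linkedin_mini_app(topic: str, n_posts: int = 5) -> list[str]:
    """The assembled mini-app: user supplies a topic, gets finished posts."""
    return [write_post(angle) for angle in research(topic)[:n_posts]]

posts = linkedin_mini_app("fuels")  # mirrors the "fuels" demo above
```

The point of the sketch is the shape, not the content: Opal's value is wiring these blocks together behind a form so a non-programmer can run the pipeline with one input.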
In fact, if we go back to Gemini and click on Tools, then select the Deep Research section, when we type in anything we want to research, such as the best recipes in the world by country, we can now research not only the internet but also information from the entire Google ecosystem, whether it's information from our Gmail, our Drive, or even what we discuss
through Google Chat. However, I'm going to stick with Google. I'm not going to upload any additional documents either, although we could. And now, if I click Submit, it will present me with a research plan. And if I click Start Research, I'll have a very detailed report, not only in terms of information
but also, if we scroll to the bottom, we can see all the relevant sources of information it used. Furthermore, it doesn't stop there. Just like
we did with the presentations, we'll be able to click on "share and export" and export it to a Google Doc so we can continue working from there. Another thing that will allow us to automate a lot of the process of using a tool we don't even know how to use is the ability to share our screen in real time with Gemini so it can assist us with whatever
we need. To do this, we could go back to Google AI Studio, but now instead of selecting the "models" section, the one we saw for audio, we could select "live" for real-time interaction. I select it, and from here I'll not only be able to have a conversation, but I'll even be able to have video calls or share my screen.
If I click on "confirm" and share my screen, I'll be able to ask it anything I need help with. Hello, good day. Look, I'm here in Google AI Studio, and I actually had some questions about these two options I'm looking at now.
Could you tell me what they are? Hi. These two options, Affective Dialogue and Proactive Audio, allow you to add a more expressive and interactive touch to audio conversations and responses.
Affective Dialogue aims to model and respond to the user's emotions, while Proactive Audio generates more proactive responses, such as asking questions or suggesting actions. Would you like me to help you configure them? No, thank you very much. Whether you urgently need to use a tool or you have a question about a specific function,
you can now share your screen with Gemini, and it will give you the answer in real time. Let's move on to the penultimate section, where we'll be looking at different tools for creating applications using artificial intelligence.
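As a brief aside before the app-building tools: the two Live options Gemini explained above (Affective Dialogue and Proactive Audio) correspond to flags in the Live API connection config. This sketch targets the preview (`v1alpha`) surface; the field names, model id, and session methods are taken from preview documentation and may change, so treat all of them as assumptions.

```python
# Sketch: the two Live API options from the demo, as connection-config flags.
# Preview (v1alpha) surface; names may change between releases.
import asyncio
import os

LIVE_CONFIG = {
    "response_modalities": ["AUDIO"],
    "enable_affective_dialog": True,           # model reacts to the user's tone
    "proactivity": {"proactive_audio": True},  # model may speak up unprompted
}

async def main() -> None:
    """Open a live session with both options enabled (requires an API key)."""
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"],
                          http_options={"api_version": "v1alpha"})
    # Model id from preview docs; may differ by release.
    async with client.aio.live.connect(
            model="gemini-2.5-flash-preview-native-audio-dialog",
            config=LIVE_CONFIG) as session:
        async for msg in session.receive():  # audio chunks stream back
            if msg.data:
                pass  # feed the bytes to an audio sink here

# Run with: asyncio.run(main())
```

Screen or camera sharing rides on the same session; the flags above only control how expressively and how proactively the model speaks.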
One of the most interesting things when starting a project like this is the new Stitch update. This platform
allows us to input an idea, and the artificial intelligence will provide us with a well-designed application. In fact, it doesn't just do that; if we scroll down, we can see how we could pass it the design of our own website or application, and Nano Banana will automatically improve it. This is a function we could use simply by selecting the redesign option here, but we could also
start from scratch. Here, for example, in the last video, I showed a design, asking it to create an application that helps people be more productive. The result was these four screens here. And as you can see, each one has a very interesting and professional layout, with calendars, a progress section with graphs and animations; it would be difficult for an AI to create an application with these visuals on the first try. But now that we've created the visual aspect here (because this application really focuses primarily on design), we can select everything, click on the three dots, and not only download it to continue iterating on any application we want, but also click on export, select Google AI Studio, and by clicking on compile with Google AI
Studio, all the information Stitch created will start being imported here so that we'll have everything ready. If I click on the build section, then a little later, after correcting some errors that appeared simply by clicking on fix, I can see that I'm viewing the application as if on a mobile device, with the same design we had in Stitch, with all the tools completed and all the tasks done. If I click on the plus sign, I can see that it really does have the exact same design I'm interacting with. Furthermore, from here I see that I have the calendar option, which is exactly the same as the one we had in Stitch. And lastly, I also have the progress section, which was the most surprising and where it has truly excelled, providing these graphs, goal achievement, time by category, and ultimately a very well-developed application: designed as if with an expert in app and web design thanks to Stitch, and then brought to life in Google AI Studio with its Build feature as a fully functional application that we can not only use from here, but also publish in a professional environment.
Publishing would be a paid option, or we could also download the application for free from here to have the entire project locally and continue iterating. When it comes to creating applications with Google, the truth is that we have a complete suite, and although many of these tools may seem to do the same thing, we should really use each one depending on our needs. For
example, we have a platform called Jules, which is an autonomous agent focused on programming code. This would be for slightly more technical people, as it's designed to assist us in developing code collaboratively or even testing for potential bugs. Then we also have the Google Colab tool, which allows us to write and run Python code without needing to install anything locally. But
even though it's a bit technical, we'll also have a layer of artificial intelligence to do this for us. If we switch to Google Colab, from there, beyond simply entering the code ourselves, we'll have a Gemini option at the bottom to generate it for us. For example, imagine I want to create a universal converter. So, I can write to it from here that I want it to create a simple, functional universal unit converter in Spanish, and if I click send, Gemini will start working so that seconds later it will have all this code here; now I just have to click play to run it. When I do, it says "Welcome to the
universal unit converter" and asks me what I'd like to convert, from length to temperature. Here, I'm going to select six, since I want to convert temperature. And now it asks me to enter the value. For example, I want to convert 27 degrees, so I write 27. Then,
from here, I'm going to tell it that it's in Celsius and I want to convert it to Fahrenheit. I send it, and it tells me that the result would be 80.6. And look, with this, we'll even be able to run our own Python directly from the cloud, and on top of that, have all the code done by artificial intelligence. We would simply input the idea, and Gemini would take care of the rest. Then
we also have Gemini Code Assist, an AI coding assistant that integrates seamlessly with any IDE. In other words, if we work with tools like Visual Studio, we could add that layer of AI agents on top of that tool. Another tool that Google also offers is Gemini CLI, very similar to the one I just mentioned, except that this one can be integrated
directly into our terminal. This way, we don't have to install any additional programs and we can work with AI agents directly on the terminal, which will help us both to create applications and even to control our computer, something we discussed in detail in this video here. In addition to this, we have two other very interesting tools. On the one hand, we have Firebase Studio, another platform we've been discussing on the channel that allows us to create professional applications directly in its cloud. It's very similar to Visual Studio, except that it doesn't require any installation. But the truth is, they recently released Google Antigravity, something very similar to Firebase Studio, except this one can be installed locally and is completely free. Regarding Google Antigravity, keep in mind that one of the main differences compared to any other app creation tool that has existed until now is its AI agent functionality. By opening it, we can describe any application we want to create, such as a transcription app that adds the user's voice-dictated tasks to a calendar. If I click
"send," it will start creating all the files here, and a little later I'll find this calendar where I can say anything using the microphone, like, "Remind me that I have to record a video here today." In fact, it would have already transcribed it at the speed you just saw, and if I click "send," the task created in Google Antigravity will appear here. Keep in
mind that I already have a detailed video here so I can get the most out of it.
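As an aside, before moving on: the unit converter from the Colab demo earlier is easy to reproduce by hand. Here is a minimal Python sketch of its temperature branch; the function names and prompts are illustrative, not the code Gemini actually generated.

```python
# Minimal sketch of the temperature branch of a "universal unit converter",
# like the one Gemini generated in the Colab demo. Names and prompts are
# illustrative, not the generated code.

def celsius_to_fahrenheit(c: float) -> float:
    # Standard formula: F = C * 9/5 + 32
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9

def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between 'C' and 'F', rounded to two decimals."""
    conversions = {
        ("C", "F"): celsius_to_fahrenheit,
        ("F", "C"): fahrenheit_to_celsius,
    }
    if from_unit == to_unit:
        return round(value, 2)
    return round(conversions[(from_unit, to_unit)](value), 2)

if __name__ == "__main__":
    print("Welcome to the universal unit converter")
    # The same conversion as in the demo: 27 degrees Celsius
    print(convert_temperature(27, "C", "F"))  # 80.6
```

Run it and, as in the video, 27 degrees Celsius comes out as 80.6 Fahrenheit.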
We're going to continue with the last section, which I've called "all-terrain vehicles," since here we really have different models and tools offering a complete suite of functionalities for any type of person. The first would be Gemini. From here, beyond creating in-depth research, as we've seen before, presentations, books, images, and so on, we could even convert the information into any other format. For example,
an infographic by clicking here, so that a little later we can find an infographic like this one, detailing the most important aspects of all that research. Regarding Gemini, keep in mind that I have a complete course, which I'll leave linked in the description below.
Finally, and to conclude, keep in mind that you can also use Google's artificial intelligence locally in a completely private and unlimited way. To do this, we have to install one of its Gemma models, which are free, open artificial intelligence models from Google. And we can easily do this with a free platform like LM Studio, available on any device, whether Windows, Mac, or Linux. From here, if we click on the magnifying glass and search for "Gemma," I don't actually have this model here, so the download button would appear, and with just one click, it would be saved to my device. Then, I would simply click on the downloaded model, in this case, Gemma 3n, and by clicking "load model," my computer would have it loaded. Now, if I type something like "
hello, how are you?", it would start responding without needing an internet connection.
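Beyond the chat window, LM Studio can also expose the loaded model through a local, OpenAI-compatible server (from its developer/server tab), so you can script against Gemma privately. This is only an illustrative sketch: the port below is LM Studio's default, and the exact model identifier depends on the Gemma build you downloaded, so copy the one LM Studio shows you.

```shell
# Ask the locally loaded Gemma model a question via LM Studio's
# OpenAI-compatible endpoint (default port 1234). No internet needed:
# the request never leaves your machine. The model name is an assumption;
# use the identifier LM Studio displays for your download.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "google/gemma-3n-e4b",
        "messages": [{"role": "user", "content": "hello, how are you?"}]
      }'
```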
It's truly amazing how many tools we have available. With this knowledge, we now have everything at our fingertips, and all that remains is to get down to work and begin the journey.