🔥 Google Opal + Veo 3 for Free! AI-Generated Video, Music, and Images in One Go | Hands-On with a No-Code Power Tool | An AI Workflow Simpler Than N8N or Dify
By 凱文大叔AI程式設計教室
Summary
Topics Covered
- Opal redefines workflows as pure data flows
- Every function derives from AI models
- Internet tools boost model prompts automatically
- Remix official templates for custom apps
Full Transcript
Hello everyone, I am Uncle Kevin.
This channel focuses on AI applications and intelligent workflows.
Let's learn and explore the unlimited possibilities of AI together.
Remember to like, subscribe, and turn on notifications.
Make AI your superpower.
Hello everyone, I am Uncle Kevin.
Today, we'll quickly talk about Google's new Opal.
Opal is a no-code development environment.
It builds apps from workflows and also incorporates AI models.
Although it looks similar to n8n or Dify, the only usable components in its toolbox are models and prompts.
Its workflow actually feels more like a data flow.
After one action is completed, it passes the result to the next.
It doesn't have the process control or looping capabilities like N8n or Dify.
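To picture what that "data flow" means in practice, here is a toy sketch in Python. This is not Opal code and every function name is hypothetical; it only illustrates the idea that each node takes the previous node's output and hands its result to the next, with no branching or looping.

```python
# Toy model of an Opal-style data flow: a straight chain of nodes,
# each consuming the previous node's output. No branches, no loops.

def get_user_input() -> str:
    # Stands in for Opal's Input node.
    return input("Describe the video you want: ")

def improve_prompt(description: str) -> str:
    # Stands in for a generation node that asks a text model
    # to expand a rough idea into a detailed video prompt.
    return f"Detailed video prompt based on: {description}"

def generate_video(prompt: str) -> str:
    # Stands in for the Veo node; here it just returns a placeholder.
    return f"<video rendered from: {prompt}>"

def show_output(result: str) -> None:
    # Stands in for the Output node, which displays whatever it receives.
    print(result)

# The whole app is one straight line of data passing.
show_output(generate_video(improve_prompt(get_user_input())))
```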
Just search for Google Opal and you'll find it. After entering, we can design our own Opal app.
Let's click in and take a look.
This looks somewhat like Dify's design workflow.
On the left, it's a visual workflow design screen.
On the right, it shows the settings for each tool.
Much like recent vibe-coding tools, you can also simply describe what you want to do.
After you type it out, it can generate a simple workflow for you.
Let's start with some basics.
First, when creating an app, we usually need input.
If there's no input, it may generate data randomly.
Or it might generate fixed data.
So, after inputting data, it proceeds to the next step.
There are many input types; the input can be anything you like,
or you can upload a file containing audio, images, or text, or even upload a video.
For the output, we don't need to write anything special; we just present the result and look at it.
Generation basically means producing data through a model, and this generation part is the focus.
Unlike Dify or n8n, there aren't many kinds of nodes to choose from.
There are some additional features here, but they are added within the generation step.
Let's take a look at this generation node; there are actually quite a few functions available.
Basically, all of these generate data through a model.
Currently, the available options include 2.5 Flash and 2.5 Pro.
There is also a function that uses 2.5 Flash to plan the content and the execution steps.
Then there is Deep Research, which uses a model to search for information online and produce in-depth investigation reports.
Essentially, all of these functions are derived from the models.
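Since every generation node is essentially just a model call, here is a minimal sketch of what such a node does conceptually, written against Google's public `google-genai` Python SDK. This is my own illustration, not Opal's internals; the model names are simply the ones visible in Opal's UI.

```python
# Sketch only: what a text-generation node boils down to conceptually.
# Assumes the google-genai SDK (pip install google-genai) and a valid API key.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",  # Opal also lists 2.5 Pro for heavier tasks
    contents="Plan the scenes for a short apple ASMR video.",
)
print(response.text)
```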
Next, image generation. Image generation includes Imagen 4, and you can also directly use 2.0 Flash with its image-generation capability.
Besides these, it offers voice output, which is text-to-speech.
There is also Veo; it actually provides both Veo 3 and Veo 2, although it states that generation is limited.
I'm personally not sure exactly what the restrictions are.
If you want to try Veo 3, you can use it directly here, free of charge.
Lyria 2 is mainly used for generating music; if you need background music, you can use Lyria 2.
The simplest thing to look at is input and output.
You can connect an arrow here, and after the node receives it,
your input becomes its prompt, and finally we send the result into the Output node.
You can see that although the Output is connected to the previous data, once it receives the data it can perform some transformations, such as simple layouts.
So how do we make this layout?
You can write the layout prompt directly here, or use its existing templates, which are currently fairly basic.
If we don't know what to do, we just write "Webpage" and let it create a layout for us automatically.
Then you have an input, an output, and a result.
If you need to do other things in the middle, such as generating a video,
I might first write a simple description and use a model to generate the prompt for the video.
Finally, give that prompt to Veo 3: we write a generation instruction here, convert it into a Veo 3-style prompt, feed the improved prompt to Veo 3, and connect the generated video to the Output.
You can also give it an image directly to turn it into a video, and so on.
This is very easy to produce.
If you often need to make videos, you can try this method first, since it can save you a lot of time: follow a simple process, upload an image, and it can generate a video directly.
That's roughly the overview of the functions.
For more complex applications, I'll show you how to use them when I have time.
I've adjusted this process slightly; let's take a quick look.
The input data stays the same as before, but we've added a prompt in the middle.
So basically, I am using the 2.5 Pro model for this.
And you see, I am using a tool here.
Where is this tool? Right here.
We can call upon some functions like searching the internet, finding maps, or getting webpage content, and there's also a weather retrieval feature.
There should be more tools added in the future.
Currently, there are only these four.
Because this is still an experimental version of the tool.
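Conceptually, these tools just augment a model call with external data before it answers. As a rough illustration (my own sketch, not Opal's code), the `google-genai` SDK exposes a similar idea through Google Search grounding:

```python
# Sketch only: a model call augmented with a web-search tool, similar in
# spirit to Opal's "search the internet" tool. Assumes the google-genai SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How are apple ASMR videos usually shot and edited?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())]
    ),
)
print(response.text)  # answer grounded in live search results
```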
I'm using the internet search tool here to find out how to make this video.
And the input here is the description of the video I want to produce.
After writing it above, there will actually be data after the search.
So, I generate an 8-second video prompt based on this keyword and the internet search results.
Why 8 seconds?
Because Veo 3 only supports 8 seconds.
So, we first generate an 8-second video prompt like this.
After generating the prompt, we can then send it to Veo.
Veo lets you choose between Veo 3 and Veo 2.
Okay, once that’s done, Veo will generate the video based on the previous results.
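To make this two-step chain concrete, here is a rough sketch of the same idea outside Opal: a text model turns the search-informed description into an 8-second video prompt, and a Veo model renders it. The calls follow Google's published `google-genai` examples; the exact Veo model ID and polling details are assumptions and may differ from what Opal does internally.

```python
# Sketch only: improve the prompt with a text model, then render it with Veo.
# Assumes the google-genai SDK; the Veo model ID and polling pattern follow
# Google's public examples and may not match Opal's internals.
import time
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Step 1: expand a rough idea into a detailed 8-second video prompt.
idea = "Apple ASMR video"
improved = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=f"Write a detailed prompt for an 8-second video about: {idea}",
).text

# Step 2: hand the improved prompt to Veo (a long-running job, so we poll).
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # assumed ID; Veo 3 uses its own model ID
    prompt=improved,
)
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("apple_asmr.mp4")
```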
One important thing to note here is that in this process, it can only receive data from the previous node.
So, the data coming from the earlier node can only be written here.
I cannot send data here.
So if I want to send input data over, I need to draw another line directly over.
This way, the previous prompt will also be sent here.
Actually, there’s no need for that right now.
Because I can directly use the modified prompt to produce the video.
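In data-flow terms, drawing that extra line just means a node receives two upstream values instead of one. A toy sketch with hypothetical function names:

```python
# Toy sketch: with one incoming line the node only sees the improved prompt;
# drawing a second line is like also passing along the original user input.
def veo_node(improved_prompt: str, original_input: str | None = None) -> str:
    extra = f" (original request: {original_input})" if original_input else ""
    return f"<video rendered from: {improved_prompt}{extra}>"

# One incoming connection.
print(veo_node("detailed 8-second apple ASMR prompt"))
# Two incoming connections.
print(veo_node("detailed 8-second apple ASMR prompt", "Apple ASMR video"))
```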
Finally, I export the video.
There’s no need to write anything special for output; just present the result directly.
We don't need to specifically write anything here for the output either.
Let's just present the results directly.
Basically, if there’s nothing special with the layout, we choose manual, that’s enough.
No need to do layout, just output the results directly.
OK, let’s take a look at the results.
I press execute first, then it will ask me to input the content.
For this content, I can write: I want to make an Apple ASMR video.
Next, let’s take a look.
On the right side, besides Review (which just means waiting for the generation),
once the text and video are generated, we can check the Console.
The Console shows the results generated in the background.
So, you can see which stage it’s in right now.
It's currently at the middle stage.
It needs to generate the prompt.
So, it first searches the internet.
After searching, it will have some results.
It will pass these results further down.
It hasn't finished searching yet; it's searching through a lot of content.
After that, it will send these results further down.
You see, it has searched many web pages and videos online.
Finally, it passes these results to the model below.
This model will then generate the relevant prompt.
Finally, this prompt is sent to the video node.
The previous step is already complete.
So, once it’s finished, let’s look at the final result.
The final prompt script for the video looks like this: an Apple ASMR scene, with "from second X to second Y, do this action" for each segment.
It would be nearly impossible to write something this detailed manually.
It specifies which action happens for how many seconds, and it's all written for you.
Once this is done, it will send it to the tool that generates the video.
Currently, it’s just waiting.
OK, this is already finished.
Let’s check out the result.
Let's take a look at its results.
It really is just slicing an apple.
Because I didn't specifically tell it to cut a crystal apple, it simply cuts a real apple.
So that's the kind of thing you can do: use this method to quickly generate a video.
Here's a little tip for everyone: although it says at the top that video generation is limited, we've tested it and still don't know what the quota actually is.
And I'm currently using a free account, so if you want to generate videos, you can do it directly through Opal, which is currently free.
Alright, finally, let's look at the outermost page here. It provides some examples, and almost all of them are official ones released by the Opal team.
For example, this one turns a photo into a clay-like effect. When you click in, we can see how the process is designed, and it's very simple.
You can see the input, how each prompt in the middle is written, and finally how the video is generated and what layout it needs. That's how it's made.
The provided templates cannot be edited, but you can click Remix. Once you click Remix, it becomes a personal project, so you can make modifications and adjustments within your own project.
So you can change the content inside.
If you want to combine it with business reports or something similar, this one appears to be related to company data.
Roughly speaking, it searches the internet, looks for maps, finds information on web pages, and then fetches the webpage data.
Using these materials to look up basic information about a company, you just need to enter the company you want to look up here, and it will do the investigation for you.
After the investigation is complete, it will summarize the results for you.
If you find any feature here useful, you can copy it to create your own project for modifications.
These are some basic features of Opal.
If you want to use it, the first step is to access via VPN, because currently it is not open to users outside of the US.
So, use a VPN to connect to the US, then you can visit this Opal website.
Just search for the Opal website directly, Google "Opal", and you will find it.
Hope this helps everyone.