
Building AI Agents in Kotlin with Koog | Vadim Briliantov

By Kotlin by JetBrains

Summary

## Key takeaways

- **This Year Is the Year of Agentic AI**: McKinsey, Forbes, and Morgan Stanley all talk about AI agents and how important they are. Y Combinator has invested in many AI startups, which means strong AI-powered competition in every domain next year. [00:25], [01:02]
- **Future Helper Robots Combine AI and Your Software**: Users will ask a helper robot to do the whole job; the robot's responsibility is to use AI and your software to solve the problem and return a ready solution. All of us will be building these helper robots. [02:11], [02:44]
- **An AI Tool Is an LLM-Callable Function**: An AI tool is any function the model can decide to call to solve the user's problem. The LLM becomes the brain that decides which tools to call and when. [05:47], [06:20]
- **Koog Agents in Five Lines of Code**: Write one to five lines of code in your application, connect the agent to OpenAI, register tools from your app such as "send money", provide descriptions, and run it with the user's problem for a fully autonomous agent. [14:27], [15:44]
- **Custom Strategies Guide an Unpredictable LLM**: Build thin corridors for the LLM, for example classify the request and then transfer money or analyze transactions; for a quarterly report, sequence data collection, report writing, and compliance checking with specific tools per phase. [17:35], [26:08]
- **Koog Solves Real Production Issues**: Use history compression to TL;DR long contexts, saving money and time; streaming for multiple tool calls at once; tracing for debugging; and memory to recycle past executions. [27:32], [33:47]

Topics Covered

  • Agents Shift UX to Problem-Solving Robots
  • AI Tool Enables LLM Brain Autonomy
  • Kotlin Agents Must Be Predictable and Scalable
  • Custom Strategies Build LLM Corridors
  • Kotlin First for Production AI

Full Transcript

Hello everyone.

Happy to see you here. My name is Vadim, and today I'm going to tell you how to build AI agents in Kotlin.

But why? Why AI agents? The answer might look very simple: because this year is the year of agentic AI. If you haven't yet, please check the news. McKinsey, Forbes, Morgan Stanley, all of them talk about AI agents and how important they are in the modern reality.

Or if you open the Y Combinator YouTube channel, most of the videos you will find are about AI, and they openly speak about how many AI startups they invested in over the last year.

What does this mean for all of us? It means that tomorrow, or next year, we're going to see a lot of strong competition powered by all these AI startups in almost every single domain, and that's great. So, is it just another hype, or is there something that I as a developer can benefit from?

So, let's check. I would assume that most of you here are engineers, and your main job has always been building your software, which is cool. At the same time, other companies such as Google, Anthropic, OpenAI, and many others have been focusing on building AI.

But now let's put the user into the picture and look at this from the user's perspective. Users would use AI to ask how to solve a problem, how to do something, and then they would use your software to actually solve it. But that picture is a little bit outdated.

So let's imagine the picture of the future. There is some helper robot in the middle, and the user just asks the robot to do the whole job. Now it's the robot's responsibility to use AI and to use your software to solve the problem, and once it's ready, it comes back to the user with a ready solution.

That looks very cool and promising, but then there is a real question: who will be building these helper robots? What do you think?

Actually, all of us will be building these helper robots.

So again, what will change? Let's take a look at this classical banking application example. You see checking, savings, credit cards, transactions, many other things. But what the user actually wants is to solve a problem. For example, they can tell the helper robot, "Show me my credit card," and it will do it. That's the idea.

AI has already changed the user experience of search, of text editing, of studying, and even of coding. That's JetBrains Junie, by the way. If you haven't tried it yet, please do, I strongly recommend it. It can automate coding tasks right inside your IDE. That's the future.

So the question is: will your application remain the same, or will it become agentic? Of course, the choice is yours.

But first, let's understand what an AI agent is and how to build one. Let me give you a very simple example. There is actually no magic here; you can do it yourself right now. Just take your phone, open ChatGPT or any other AI chatbot you use, and type the following. It's a real experiment.

Type: you are a banking assistant with access to banking operations, and you can send money by writing only the following JSON: name "send_money", parameters: to whom and which amount. It will understand. Now let's enhance this even more and add another operation: you can also get the list of my contacts using another JSON, name "list_contacts". You can do it yourself. It's just a real chat, no magic, right?

And I, as the user, will provide the contacts in the format of another JSON. I'm simply agreeing on the language we will use: I will reply in JSON with the name of the contact, the ID of the contact, and so on. And it says: got it, what's your problem, I'm going to solve it.

Then I continue the chat. I say: send 100 bucks to my wife. And it replies in the agreed format, in this JSON: name "list_contacts". I also follow the agreement and reply in the JSON format: name "wife", ID of my wife; name "Daniel", ID of Daniel; and many others. And it says: thanks, now sending $100 to your wife. And it again uses the agreed JSON format: name "send_money", parameters: exactly the ID of my wife and exactly $100.

So what did we just do? We actually kind of solved the user's problem with ChatGPT, but without any integration; it's just a message history. But of course, as engineers, you can easily imagine how to integrate this into your app, into any app. You just need the right API for that.

So let me formalize this a little. We just learned the concept of an AI tool. What is an AI tool? It's any function that the model can decide to call to solve the user's problem.
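To make that chat "agreement" concrete, here is a minimal, framework-free Kotlin sketch of the same protocol: the model answers with a JSON tool call, and your application dispatches it to a real function and pastes the result back into the chat. `ToolCall`, `listContacts`, and `sendMoney` are hypothetical illustrations, not part of any library.

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// The JSON "agreement" from the chat: the model replies with a tool call in this shape.
@Serializable
data class ToolCall(val name: String, val parameters: Map<String, String> = emptyMap())

// Your application's real functions, exposed as tools the model can ask for.
fun listContacts(): String =
    """[{"name": "wife", "id": "42"}, {"name": "Daniel", "id": "7"}]"""

fun sendMoney(to: String, amount: String): String =
    """{"status": "sent", "to": "$to", "amount": "$amount"}"""

// Take one model reply, run the matching function, and return the JSON that goes
// back into the chat as the next message.
fun dispatch(modelReply: String): String {
    val call = Json.decodeFromString<ToolCall>(modelReply)
    return when (call.name) {
        "list_contacts" -> listContacts()
        "send_money" -> sendMoney(call.parameters.getValue("to"), call.parameters.getValue("amount"))
        else -> """{"error": "unknown tool ${call.name}"}"""
    }
}
```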

And now the LLM in this system becomes the brain. It decides what to do, what tools to call, and when to call them, for one simple reason: to solve the user's problem. And this is actually an AI agent: a model using tools to solve the user's problem.

That's a very simple definition. I'm going to refine it two more times later, but for now let's stay with this one; it's enough for our understanding.

And now I want to switch my whole presentation to history, the history of JetBrains.

Well, you all know that JetBrains is responsible for creating multiple different IDEs, such as IntelliJ IDEA, PyCharm, and many others. But why do we make IDEs? Not because we like IDEs, although of course we do, but to make developers more productive. And we made many other tools, like TeamCity and other back-end tools, also to make developers more productive.

But do you know what is missing from this picture? Does anyone have any ideas? We made something else very important, very cool: we made a programming language. We made Kotlin. Why? Because Kotlin makes all of you more productive. We made Kotlin in 2016; that was quite a while ago, right? So you may ask: what is JetBrains today, what is JetBrains doing today?

To answer this question, I would like to bring all of your attention to the center of the screen, and I mean the very actual center of the screen, and you will see the answer. During the last two years, JetBrains has been heavily focusing on AI. Some of you might have tried our various AI features in our products. Some of them were really successful; some of them initially were not. And importantly, we as a company gained a lot of experience. You may have seen our AI Assistant in IntelliJ IDEA.

And some of you may even have had a chance to try Junie, our full-fledged coding agent that works right inside your IDE and helps you automate your coding tasks.

Moreover, we even train large language models from scratch. We made Mellum. Currently, Mellum is the best LLM for code completion in its size category in the world. And we even trained Mellum for Kotlin; it's a separate model for Kotlin, and it's the best LLM for Kotlin code completion in the world. Just think about that.

So what else, you might ask? What else did we do with AI? Well, unfortunately I'm not allowed to say more, but much more is coming.

And of course, with all this experience, with all these things we did around AI, we gained an understanding of what an AI agent is and how to build one. And of course, we are a very big company, and we have some requirements. We cannot afford toy experiments, small demos, and these kinds of playful things. We need to make real products and deliver them to real customers. We need our agents, first of all, to be predictable, and also scalable and fast. Why? Because if something is slow, users are not happy. We read all of your reviews, and we look at many different charts and graphs, and we know that if something is slow, your users are unhappy. And what do we want? We want our users to be happy.

You may wonder why I am telling you all of this JetBrains story. Because I strongly feel that you can easily project this onto your own companies. Most likely you will see the same requirements if you want to deliver something real, something with AI, at production scale. So let me continue.

We also want our agents to be reusable. Why? Because we have different features, different product lines, different projects at JetBrains. And of course, if something is cool and it's working in one product, we don't want to replicate the work; we don't want to do the same job again. We want to easily reuse the same idea, the same agent, in another product. And moreover, we want them to be composable. If some small part is very cool and solves a problem efficiently, again we don't want to replicate the work; we want to be able to include it in many other products and compose one inside another.

Also, of course, evaluation is very important. We need to be able to show our AI agents and other AI things to our machine learning colleagues so that they can analyze what's going wrong and how to fix it, to make the best user experience and to make them predictable. So of course we need our agents to be traceable for that. And also multiplatform: because of the multiple-project thing, we are delivering to multiple environments, and we need to be able to deliver agents to all of them. And last but not least, of course, we need our agents to be in Kotlin. Why? Because we are the company that made Kotlin, and most of our engineers write in Kotlin.

Of course, we are a very big company. We could easily hire another machine learning department that would just solve all our AI problems, and they would write in a different language. We could do that, but in my experience that's not going to work. Why? You know this very famous law, Conway's law: the way you structure the teams, the people in your organization, will inevitably be reflected in the final product, in the shape of the product, in the level of integration. If you have a backend team and a frontend team separately, or if you have them together, that will actually impact the final product. And we want our AI agents, our AI functionality in general, to be tightly integrated inside our products.

So that's the thing: we need to make the best user experience. And of course, with all these requirements in mind, we built an AI agent framework that satisfies all of them. So meet Koog, the Kotlin AI agent framework by JetBrains, based on our real AI experience, and today it's getting open-sourced.

You may wonder why. Well, because we actually gained a lot of experience, we made a lot of mistakes and solved them, and we want to share this experience with everyone in the Kotlin community, so that everyone can benefit from it and you can build your AI agents without having to learn everything from scratch. You can just reuse this, and then, eventually, I feel you can also contribute back with your own AI experience, and the whole Kotlin community can build an ecosystem around building AI in Kotlin. I really dream that at some point Kotlin might become the AI-first language.

So that's the dream and the goal.

Now let me give you a brief list of Koog's features, not all of them. Of course, we integrate with most of the large language models. We have MCP integration, embeddings, a lot of pre-built components that solve existing problems, and many other things.

But so far I've been just talking, right? It's been like 14 minutes, and you might want to see some code. So let me show it. This is the very simple thing: with Koog you can easily connect to any LLM, for example OpenAI. Then, with a Kotlin DSL, you can construct a prompt: system messages, user messages, some other messages. And then you can ask the model, ChatGPT in this example, to give you the next reply.
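On the slide this looks roughly like the sketch below. It follows the prompt-and-executor style from Koog's public examples; treat the exact names (`simpleOpenAIExecutor`, `prompt { }`, `OpenAIModels.Chat.GPT4o`) and signatures as assumptions and check the current Koog docs.

```kotlin
// Sketch only; imports from the ai.koog.* packages are elided, and the executor
// and model names are assumptions based on Koog's published examples.
suspend fun main() {
    // Connect Koog to any LLM provider, here OpenAI.
    val executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY"))

    // Construct a prompt with the Kotlin DSL: system, user, and other messages.
    val bankingPrompt = prompt("banking-demo") {
        system("You are a banking assistant.")
        user("What can you do for me?")
    }

    // Ask the model for the next reply.
    val reply = executor.execute(bankingPrompt, OpenAIModels.Chat.GPT4o)
    println(reply)
}
```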

That's cool, but that's not the real power, because the real power lies in fully autonomous AI agents. And here it is. That's it. You just write one, two, three, five lines of code in your application, and it immediately becomes agentic: just create a simple agent, connect it to OpenAI, call run, and it will solve any task for you. You don't need to write anything else; it just works. Do you believe me?

Actually, you shouldn't, of course, because how would the model understand what functionality from your application is available? You need to be a little bit of an AI engineer. You need to point that out. For example, here we are building the banking application, right? So you have this "send money" function somewhere in your app, and you just say: this is a tool. And it works. Do you believe me now?

Well, actually it's almost true already. But how would the LLM understand the meaning of each function? There is no magic. You have to be an AI engineer. You have to write the descriptions: what is the meaning of each function, what is the meaning of each field. And now this is the fully working example. That's it: we just register the tools from your application, provide them to the agent, and run it with some user problem.

That's it. A very simple yet very powerful, fully autonomous AI agent that will instantly make your application agentic. Nothing else is needed.
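Put together, the fully working example from the slides looks roughly like this. The `@Tool`/`@LLMDescription` annotations, `ToolRegistry`, and `AIAgent` follow Koog's published examples, but treat the exact parameters as assumptions; imports from `ai.koog.*` are elided.

```kotlin
// Sketch of a fully autonomous banking agent in Koog; verify names against the current API.
@LLMDescription("Tools for the banking application")
class BankingTools : ToolSet {

    @Tool
    @LLMDescription("Send money to a contact")
    fun sendMoney(
        @LLMDescription("ID of the contact to send money to") contactId: Int,
        @LLMDescription("Amount to send, in USD") amount: Double,
    ): String = "Sent $$amount to contact $contactId"

    @Tool
    @LLMDescription("Get the list of the user's saved contacts")
    fun listContacts(): String =
        """[{"id": 1, "name": "Daniel"}, {"id": 2, "name": "Alice (wife)"}]"""
}

suspend fun main() {
    val agent = AIAgent(
        executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
        systemPrompt = "You are a banking assistant.",
        llmModel = OpenAIModels.Chat.GPT4o,
        toolRegistry = ToolRegistry { tools(BankingTools().asTools()) },
    )
    println(agent.run("Send 100 bucks to my wife"))
}
```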

So Koog, if you want, can be very simple. But Koog can also be smart. Of course, you can define very custom pipelines in the form of graphs. Just look at the picture: you have a start, you ask the LLM, then you call a tool, then you do something else, and then you finish. You can represent it in the form of a graph, which will guide the LLM through the process. That's the picture on the left; actually for you it's on the right, but whatever.

And on the other side you will see the code. Please raise your hand if you've ever heard the term "vibe coding". Okay, quite a lot of people. And who has heard the term "vibe understanding"? Actually, I just made it up. But for this presentation and for these code examples, I suggest not diving too deeply into each line of code. Let's just look at how beautiful it is.

It's just some Kotlin DSL. It's simple, and it allows you to draw the picture with code. That's the idea. With Koog you can define the nodes, the steps of your pipeline, in your agent: ask the LLM, call a tool, and so on, and then connect them with edges. Another beauty of this is that, first of all, it's very composable; and second, if you show this code to a machine learning colleague who doesn't know Kotlin, they will easily be able to understand it and suggest improvements.

And you can also design something very custom for your specific banking application. For example: you start, then you classify the request with AI, then you either solve a transfer-money problem or you solve a transaction-analysis problem, and then you finish. That's the pipeline, and you can describe it with code using Koog.
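A sketch of that banking pipeline in the graph DSL. The node and edge builders (`nodeLLMRequest`, `nodeExecuteTool`, `onToolCall`, `onAssistantMessage`) follow Koog's published examples; treat the exact names as assumptions.

```kotlin
// Sketch only; imports from ai.koog.* elided.
val bankingStrategy = strategy("banking-assistant") {
    val classifyRequest by nodeLLMRequest()        // ask the LLM what the user wants
    val executeTool by nodeExecuteTool()           // call a tool from your application
    val sendToolResult by nodeLLMSendToolResult()  // feed the tool result back to the LLM

    edge(nodeStart forwardTo classifyRequest)
    // Transfer-money or transaction-analysis requests end up as tool calls.
    edge(classifyRequest forwardTo executeTool onToolCall { true })
    edge(executeTool forwardTo sendToolResult)
    edge(sendToolResult forwardTo executeTool onToolCall { true })
    // A plain-text answer means the problem is solved: finish.
    edge(classifyRequest forwardTo nodeFinish onAssistantMessage { true })
    edge(sendToolResult forwardTo nodeFinish onAssistantMessage { true })
}
```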

So Koog can be very simple, and Koog can be smart. But most importantly, Koog solves real problems. Let me show you examples. The first one is obvious: the LLM is unpredictable and unreliable. Even if you open ChatGPT, at the bottom you will see this: you shouldn't rely on the LLM's responses. And we know how to solve this problem; I will show you later.

Another problem: if you deliver something at production scale, not just your small toy application but something you deliver to customers, you will see that the message history inevitably grows. Whether it's a chatbot, like a support assistant, or something analyzing banking transactions, if you analyze a lot of data, the message history, all these tool-call iterations, grows. Remember the ChatGPT example at the beginning? That's it: you write the next message, you get the next response, and all of that is the message history. It will grow, and the LLM will inevitably get lost in so much information. That's a problem.

Another thing: calling tools in a loop is very inefficient in real life. Remember again the same ChatGPT example: you give the LLM a message and you wait until it gives you the next prediction, the next action, the next tool call, for example "send money", and then you wait again. You give a response and you wait again. That's very inefficient. And it also costs you money: you have to pay for the tokens again and again for each single tool call. That's a problem.

And actually we faced a lot more real-life problems when developing AI applications.

And with Koog you can actually cook the solutions, and I will show you how. But first we need a little bit of theory, so let me refine the definition. What is an AI agent? We said it's a model using tools to solve the user's problem. Then this is an agent, right? An LLM call is an agent: your application calls the large language model, gets a response, and then takes some action.

But it's not autonomous. The next level of autonomy is what Anthropic calls AI workflows. It's a human-designed logic, a human-designed pipeline, an algorithm if you want, that uses your application and the LLM one after the other to solve the problem. You can have branches, different pipelines, and many different things. That's a workflow, but it's quite predictable: you know approximately what will follow after what. But that's not autonomous enough.

So the real autonomous thing is the agentic kind of thing: calling tools in a loop, where the LLM becomes the brain. And what's the beauty of it? You don't know what will happen next. You don't know when it stops. You don't know what tool it will call. The combination of all of these things is what we understand as an AI agent.

And now let me give you another definition: the agent strategy. Let me go back and forward, back and forward. You see the difference? The strategy of an agent is the form, the shape of your algorithm, the steps in the pipeline plus how you work with the LLM, but without your specific application logic. Why is it important? Remember I told you that at JetBrains we want agents to be reusable, composable, and many other things. This allows you to abstract the specific use case of your application away from the general strategy of the agent, and it allows you to use the same agent in different features and different products. So that's the idea of a strategy.

But why would I care, you may ask? Let me give you an example. This is a strategy; you have already seen this picture, it's the one from "Koog can be smart", and you can express this strategy with Koog. But let's see how it works in real life. Imagine the user asks this helper robot: send 20 bucks to Daniel. It will start working, it will start the strategy, and then it will ask the LLM what to do, because it doesn't know, but the LLM does. It will get the first response: list contacts. Then it will use the API of your application to get the contacts, getting the response as some data in your application, and then it will send the JSON to the LLM and wait for the next action. The LLM will say: okay, send money. And we will use your application again to show a message to the user; this is the UI kind of thing you can build on top of it, how the user experiences working with this agent. You can show the message: do you confirm, are you okay with this transaction? The user says yes, and we say yes to the LLM. We don't know what will be next, but this time the LLM decides to stop. Why? Because the problem is solved. We say: done.

So if you look at the strategy alone, without the example, you may notice that it's kind of generic, right? There is nothing about the banking application, nothing about anything else, just LLM calls, responses, tools. So does that mean that it alone can solve any possible problem in the world?

Well, let's think of this example. You give all of the banking application tools to the LLM, and even all the trading tools and cryptocurrency tools, and the user comes and says, "Make me rich." Do you think that's going to happen? Well, at least in the current state of technology, no. Why? Because the LLM is unpredictable.

Imagine you are playing a hide-and-seek game with a blind LLM in this large audience room. It's quite a large room, right? Will the LLM be able to catch you if it's blind? I don't think so. It's going to bounce from one wall to another, back and forth, trying to find you, and it will fail, because you gave it too much variety.

Now let's think of another example: you build a thin corridor and you just tell the LLM to run forward and it will find the target. In this case it doesn't have too much variety; it will just run forward and solve the problem. And that's actually where the development effort is needed. That brings us to the idea of making custom strategies, building thin corridors for the LLM to be predictable.

Let me give you an example. You're preparing a quarterly financial report, and this could be your strategy. First, you are collecting the data. In this part of the strategy you tell the LLM: you are only collecting the data, you are not writing the whole report, you have a very dedicated, specific task. And you connect the LLM with only the data collection tools, not all the tools. Then, with all the history and achievements of this phase, you go to the next phase, writing the report, and you connect the LLM only with report-writing tools; the task is also very specific here. And then, of course, it's a very responsible task, you're preparing a financial report, so you need to make sure it's correct. For this you need to make sure it passes compliance, for example, and you connect the LLM with compliance tools. Somewhere inside this strategy you need to force the LLM to follow specific steps and make sure it actually passes all the compliance checks; and if not, if there are some problems, you go back to writing the report and start it from scratch or add something to it. That's the strategy, and with Koog you can implement it. This is the whole code.

First, this is how you define the data collection: you just use a pre-built component called subgraph with task. You give it a task, you give it a model, you give it tools. That's it; it's already implemented, you don't have to design it yourself. Same thing with writing the report: you give a different task, different tools, a different model, and it works.

The next step is the compliance check. I use a different component here, called subgraph with verification. Why verification? Because I want to force the LLM to do the final check; I don't want it to finish beforehand. And then I also give it the tools and the task.

What's left? You only need to connect the three, just glue them all together. For this I just connect the edges using code to match the picture at the bottom. Very simple. And that's it.
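The whole strategy from the slide, sketched with the two pre-built components named above. `subgraphWithTask` and `subgraphWithVerification` are the components the talk refers to; the tool lists, the parameter names, and the result shapes used here are assumptions.

```kotlin
// Sketch only; imports from ai.koog.* elided, parameter and result shapes are assumptions.
val quarterlyReport = strategy("quarterly-financial-report") {
    val collectData by subgraphWithTask(
        tools = dataCollectionTools,   // this phase sees only data-collection tools
    ) { request: String -> "Collect the financial data needed for the quarterly report: $request" }

    val writeReport by subgraphWithTask(
        tools = reportWritingTools,    // only report-writing tools here
    ) { collected: String -> "Write the quarterly financial report from this data: $collected" }

    val complianceCheck by subgraphWithVerification(
        tools = complianceTools,       // force a final compliance verification
    ) { report: String -> "Verify that this report passes compliance: $report" }

    // Glue the three phases together; if compliance fails, go back and rewrite the report.
    edge(nodeStart forwardTo collectData)
    edge(collectData forwardTo writeReport)
    edge(writeReport forwardTo complianceCheck)
    edge(complianceCheck forwardTo nodeFinish onCondition { it.correct })
    // In real code the failed verification result would be mapped back into a rewrite request.
    edge(complianceCheck forwardTo writeReport onCondition { !it.correct })
}
```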

So that's cool. But the same question again: what is an AI agent? This is the final definition of what we understand an AI agent to be: an agent is a combination of a strategy, tools, a connection to your application, and models. That is an AI agent.

But we need to go deeper. Remember, I promised to solve a few problems, and one of them was that the LLM struggles with long context. Remember this picture. Let me repeat: the LLM will read the whole history to make the next decision. That leads to the problem that, if the history is too long, it will get lost in the context and you will lose accuracy in your solution. You will pay a lot of money for the tokens, and you will also spend a lot of time waiting for the next response.

And we made the solution very simple. You just use this component, you put it anywhere inside your strategy, and it works. What it does is take the whole history and change it into one simple message, a TL;DR of what has been done. It will save you money, it will increase the accuracy in many cases, and it will also make things much faster.
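A sketch of where that component sits in a strategy graph. `nodeLLMCompressHistory` is the node the talk refers to; `historyIsTooLong()` is a hypothetical predicate standing in for whatever threshold you choose, and the rest of the names follow Koog's published examples.

```kotlin
// Sketch only; imports from ai.koog.* elided.
val supportStrategy = strategy("support-assistant") {
    val callLLM by nodeLLMRequest()
    val executeTool by nodeExecuteTool()
    val sendToolResult by nodeLLMSendToolResult()
    // Replaces the accumulated history with a short TL;DR of what has been done so far.
    val compressHistory by nodeLLMCompressHistory<ReceivedToolResult>()

    edge(nodeStart forwardTo callLLM)
    edge(callLLM forwardTo executeTool onToolCall { true })
    edge(callLLM forwardTo nodeFinish onAssistantMessage { true })
    // Only pay for compression once the history has actually grown too long.
    edge(executeTool forwardTo compressHistory onCondition { historyIsTooLong() })
    edge(executeTool forwardTo sendToolResult onCondition { !historyIsTooLong() })
    edge(compressHistory forwardTo sendToolResult)
    edge(sendToolResult forwardTo executeTool onToolCall { true })
    edge(sendToolResult forwardTo nodeFinish onAssistantMessage { true })
}
```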

But do you see any problems now? Well, actually, this is the problem now: instead of reading the whole book, the LLM is left with just one sentence on the table. It doesn't know what to do; it doesn't have enough context. Of course, we also faced this problem while developing agents, and we also know how to solve it. We built different strategies for history compression that you can use out of the box, or you can implement your own. For example, you can chunk the whole history into smaller pieces and compress each of them independently.

Moreover, if you know specifically what task you are solving, if you know the specific domain you're working with, you can state the specific facts that you need in the history: you can use the fact retrieval strategy. It's also there out of the box; you don't have to implement it, you just declare it. For example, if I'm working with a banking application and at some point, for the next step of the strategy, I only need the risk factors and valuable purchases, the LLM will search for them and leave only them in the history. That's the idea.
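Sketches of the compression options just described. The strategy names (`WholeHistory`, `Chunked`, `RetrieveFactsFromHistory`, `Concept`) are assumptions based on the talk and on Koog's docs as I recall them; verify against the current API.

```kotlin
// Sketch only, inside a strategy { ... } block; imports from ai.koog.* elided.

// Default: squash the whole history into one TL;DR message.
val tldr by nodeLLMCompressHistory<ReceivedToolResult>(
    strategy = HistoryCompressionStrategy.WholeHistory,
)

// Chunk the history into smaller pieces and compress each chunk independently.
val chunked by nodeLLMCompressHistory<ReceivedToolResult>(
    strategy = HistoryCompressionStrategy.Chunked(chunkSize = 10),
)

// Domain-specific: keep only the facts the next phase of the strategy actually needs.
val onlyRiskFacts by nodeLLMCompressHistory<ReceivedToolResult>(
    strategy = RetrieveFactsFromHistory(
        Concept("risk_factors", "Risk factors found in the analyzed transactions", FactType.MULTIPLE),
        Concept("valuable_purchases", "Large or unusual purchases", FactType.MULTIPLE),
    ),
)
```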

So this is basically the illustration of how it works: instead of the whole book, the LLM is left with only a few important papers on the table.

Next problem. I remember I promised to solve the problem of calling tools one by one being slow. Let me illustrate this. Imagine you're building the Android application of your bank with Kotlin, and you gave the LLM all the buttons as tools. Now the LLM will click buttons one by one to solve the user's problem. That's going to be very slow, just trust me. You'll wait for every next button click, you will wait for the LLM response, and again you will pay for the tokens multiple times.

And we have a solution: just use streaming. So what is streaming? It streams one response with multiple messages, multiple tool calls, at once. And you don't have to read the whole thing; you can start working as soon as you receive something.

How does it work? First you define the structure in Markdown. For example, here I'm defining the transactions, with recipient and amount, using the DSL from Koog and Markdown. You may ask: why Markdown? Because we tested a lot of formats, and Markdown turns out to be the most efficient for LLMs. Why? Because LLMs were trained to predict the next line, the next token in the text, and this is exactly how the Markdown format is structured: to understand the next thing, you only need a few lines above, you don't need the whole structure. That's why Markdown works best, and we provide the Markdown API out of the box, so you can use it. And then that's it: you can call tools on the fly. You request the stream from the LLM, and when some part is parsed, you call the tool, or you can even do it in parallel.
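The idea, shown without any particular Koog API: consume the model's reply as a stream and fire each tool call as soon as its Markdown block is complete, instead of waiting for the whole response. `llmResponseLines` and `sendMoney` are hypothetical stand-ins for the stream and the tool.

```kotlin
import kotlinx.coroutines.flow.*

data class Transaction(val recipient: String, val amount: Double)

// Parse the agreed Markdown structure ("# recipient" then "amount: ...") on the fly and
// call the tool for each transaction as soon as its block is complete.
suspend fun streamAndSend(
    llmResponseLines: Flow<String>,            // the LLM reply, streamed line by line
    sendMoney: suspend (Transaction) -> Unit,  // your application's tool
) {
    var recipient: String? = null
    llmResponseLines.collect { line ->
        when {
            line.startsWith("# ") -> { recipient = line.removePrefix("# ").trim() }
            line.startsWith("amount:") && recipient != null -> {
                sendMoney(Transaction(recipient!!, line.removePrefix("amount:").trim().toDouble()))
                recipient = null
            }
        }
    }
}
```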

Next thing: remember, evaluation is very important, to debug your agent and to understand how well it's working or why it's failing. You need to be able to trace, to understand what happens after what: what LLM call is there, what the response from the LLM is, what tool it uses, and why the tool response is misleading the LLM so that it follows the wrong path in the strategy.

You need to debug this thing as an engineer, right? And for that you only do this: you install tracing and you provide the collector of the traces, you say where to store them. You can implement your own or use one of the available ones. And of course, remember, we are the tooling company, so soon we'll provide IDE support for this: you'll be able to visualize your strategy, see what's happening right now in the agent, and also see the trace and debug the agent using your IDE.
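A sketch of installing tracing on an agent. `Tracing`, `addMessageProcessor`, and `TraceFeatureMessageLogWriter` follow Koog's feature API as I recall it; treat the exact names as assumptions, and plug in your own trace collector if you prefer.

```kotlin
// Sketch only; imports from ai.koog.* elided.
val logger = KotlinLogging.logger("agent-traces")   // or any logging/file sink you use

val agent = AIAgent(
    executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
    llmModel = OpenAIModels.Chat.GPT4o,
) {
    install(Tracing) {
        // Decide where the traces go: a logger here, but it could be a file or your own collector.
        addMessageProcessor(TraceFeatureMessageLogWriter(logger))
    }
}
```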

Next thing, last but not least: memory. Most of you probably live in Europe, and you know that recycling is important; it's much better for the environment. The same thing applies to agent executions. If you have an agent that has successfully run and solved the user's problem before, why would you run it again without any context? You can just save and recycle the same information in the next run. That's agent memory. To use memory, just install it into any agent and say where to store the facts. Then you can design it yourself as an engineer: you can say, here I want to save these facts to memory, and later I can load them and use them. That's it.
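A sketch of wiring memory into an agent. The `AgentMemory` feature name follows the talk; `memoryProvider` and `myMemoryProvider` are assumptions standing in for whatever storage you configure (local files, a database, or your own implementation).

```kotlin
// Sketch only; imports from ai.koog.* elided, and the provider wiring is an assumption.
val agent = AIAgent(
    executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
    llmModel = OpenAIModels.Chat.GPT4o,
) {
    install(AgentMemory) {
        // Decide where facts are stored between runs: local files, a database, or your own provider.
        memoryProvider = myMemoryProvider
    }
}
// Inside a strategy you then declare which facts to save after a run and which to load
// at the start of the next one, so previous executions get recycled instead of repeated.
```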

We actually have many more features. If you want to learn more, visit our Koog hands-on session tomorrow.

So to summarize: if you're writing in Kotlin, you are privileged to start building the AI future and to deliver it at production scale. And you don't need to have the expertise; you don't need to know how to cook AI, because with Koog you just take it and bake it.

I've been talking for quite a while already, and sometimes people don't like me talking, so let's maybe just go to GitHub and start building the AI future together.

Thank you so much, and don't forget to vote.

[Applause] So, we actually have several more minutes. If you want to have a Q&A session and have any questions, feel free to come to the microphone here and I'll be happy to answer them.

Oh, thank you for the presentation. I have one question: what is under the hood of the history compression strategy? Is it AI as well? Do we need to send the history again so the AI compresses it and returns it back, and we have to send it every time, right?

Mhm. Yeah. Thank you so much for your question.

So yes, talking about history compression: there are different strategies, and there is also a strategy interface that you can implement yourself, but the ones that we used actually use the LLM again to analyze the history. You can actually use another LLM, one that is not biased, to compress it and give you one fact, or several facts, or something like that. That's the idea. There is already a working and tested prompt inside, and specific logic to preserve things like memory facts, store the specific data, and format it nicely, so that the next time, further along in the history, the LLM will understand this context and what to do with it. Does that answer your question?

I see two issues here: you have to pay again for the tokens for the compression, and also it could be like a game of broken telephone, where we send and get back and lose something important from the context.

Yeah, thanks for the comment. Let me just show this slide again. First of all, regarding paying for the tokens: yes, sure, to make this one single LLM request to compress the history, you will of course be paying for these tokens, but just once. Compare that with the fact that if you weren't compressing the history, you'd be paying for it over and over, one call after another, for each tool. So it's like paying one time instead of paying n times. And regarding losing specific information: exactly, that's the problem we also faced, and that's why we made this strategy and this one. You can point the LLM at them, and inside, it's open source, you can check the implementation, there is specific logic that makes sure the LLM retrieves the facts that are relevant and puts them specifically into the history, so that this important information is not lost.

Okay. I would like to ask a short question. It seems that you have a lot of experience with AI agents, and I'd like to ask: did you have any issues with security vulnerabilities, and if so, how did you solve them?

Well, regarding security, it actually depends on how you apply the agent. You can have security risks doing basically anything, including on servers or even mobile. But in general, this is something that has to be worked on more on the side of your legal team rather than on the development side. If, for example, you have the right agreements, and users understand and expect, they know what data goes where, then of course there should be no questions. Talking about JetBrains products, I cannot answer for all the products, of course, but generally we don't store any data. We are also working on the AI Enterprise project for enterprise customers: if you don't want to send your data to a large language model somewhere, you can connect to any other model, you can deploy one on premises, or you can use local models, for example through Ollama, and in our framework, if you ask me, of course we support that as well.

So let me give you this slide one more time. We support, basically, right now: OpenAI, Anthropic, Google, Ollama, OpenRouter, and there is a simple interface that you can implement to connect to any LLM. Also, with the OpenAI connector you can use any deployment of a large language model of your own that supports the OpenAI-compatible API. So you can deploy something on premises, and then nothing will leave the perimeter of your company. But in general, JetBrains is a company that doesn't store anything, right?

Any other questions?

Yeah.

Hey, thanks for the presentation. Over here. You mentioned optimizing calls to tools, and that you used flows, for example, to deal with that. So in an application that uses voice for interacting with the user, where it's a bit unpredictable and it will result in calling external tools, what are your key takeaways on how we could optimize so that we don't make that many repeated calls to tools?

Mhm. So if I got your question right, you're asking how to make sure that, if you're using voice recognition, you don't have to pay a lot for the tokens for the voice... or can you repeat it again, please?

For the external tools: you use your voice to ask something of the AI, and that will result in calling tools to get other things or to do tasks. What are your key takeaways on how we could optimize so that we reduce the number of calls to those tools? Because using voice is a bit unpredictable.

So, actually, in my understanding, I don't see a really big difference between the user instructing the agent with voice and the user instructing the agent with text. The only thing you add to the system by adding voice is this extra error probability that the voice may not be understood correctly. But that's something you can delegate to the voice recognition tools and libraries.

So actually, I was talking a bit about... you mentioned your example, and it appeared that the example was fixed. That code was fixed and not, let's say, dynamic as a result of the voice command. That's more where I wanted to get to.

Yeah, it's actually a great question. To be really honest with you, we didn't work that much with voice recognition, but we can actually discuss it at the booth. Yeah, sure. Thanks. Thanks very much. Thanks.

Hello, over here. Thank you, Vadim, great presentation. I had a question about the context that you're supplying to the LLM for each of those commands, each of those tools. It seems like a very similar problem to documentation, right? It's context that could already be in Swagger documentation or Javadocs. Is there any good synergy we could find to avoid this duplicate effort of communicating to my human clients and also to my LLM agents?

Mhm. If I got your question correctly, you mean that to explain the meaning of each tool, you need to explain it with an annotation, and your question is whether we can use documentation for that, right? Correct. Yeah, that's actually an awesome question. We are experimenting with that right now. We are experimenting with supporting KDocs in Kotlin, so that you can just use the KDoc and make a tool explanation for the LLM from it. Right now there is no such thing, but we are actually considering it. Okay, thank you very much. Thanks.

Hi, thank you for the presentation. I have a question: it's nice when you have a working agentic application, but, building upon this, how do you test that there is no regression when you add features to your existing agent?

Mhm. Yeah, that's actually quite a complex question, because for this you need many different components. One is obviously tracing; I've already shown that you can just install tracing and save the traces somewhere, and then you have to make sure they don't regress. For that you have to build a lot of things, so we actually built an evaluation pipeline which runs on CI and evaluates the agent. And also, to remove this unpredictability, we can cache the responses. I think even in the framework that we open-sourced you can see that there is a caching LLM provider. You can wrap any LLM provider, for example OpenAI or any other, inside this one, and it will cache the responses, so that whenever your tool replies with the same result, it doesn't go back to the LLM again if it has seen it before; if the response is already saved somewhere on disk, the evaluation pipeline just takes the same response. And again, you can take any trace, any path of the LLM in the agent, save it, and reuse it for testing, so that you don't add more unpredictability to this evaluation pipeline. That allows us to catch the real regressions rather than some unpredictable LLM behavior today compared to yesterday. Does that answer your question?

But that's basically mocking the responses of the tools, right?

Um, it's not mocking the responses of the tools. I mean, of course you can also do that, but then it becomes more of a test rather than an evaluation. And of course we actually have a testing API as well: you can mock LLM responses and tools, save them on disk, and then reuse them. But for evaluation it's a more complex thing: you have to make sure that whenever you change the agent, of course some tool responses will be different the next time, but for the ones that are not different we just take the existing response. We only test the changing part and make sure that, for example, on some benchmark we are not losing, we are only gaining or staying on the same level. That's the main idea. Thank you.
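A sketch of the caching idea described in this answer: wrap the real executor so that, on CI, unchanged prompts replay a saved response instead of calling the LLM again. `CachedPromptExecutor` and `FilePromptCache` are names I recall from Koog; treat them, and the constructor shapes, as assumptions.

```kotlin
// Sketch only; imports from ai.koog.* and kotlin.io.path elided.
val cachedExecutor = CachedPromptExecutor(
    cache = FilePromptCache(Path("build/llm-cache")),       // where cached responses live on disk
    nested = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
)

// Use it exactly like the real executor; prompts that were seen before are served from
// the cache, so the evaluation pipeline only exercises the parts that actually changed.
val agentUnderEvaluation = AIAgent(
    executor = cachedExecutor,
    llmModel = OpenAIModels.Chat.GPT4o,
)
```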

Yeah, any other questions? No worries. Okay, then thank you so much again. [Music]

[Applause]
