Why IDEs Won't Die in the Age of AI Coding: Zed Founder Nathan Sobo
By Sequoia Capital
Summary
## Key takeaways

- **IDEs Won't Die**: Despite hype around terminal-based AI coding tools, human beings will continue interacting with source code, which is a language designed for humans to read, not just for machines to execute. Visual interfaces are necessary to understand AI agent edits in context. [02:53], [03:15]
- **Zed's Rust Rewrite**: After hitting performance limits with the web-based Atom on Electron, Nathan restarted in Rust with GPU-accelerated rendering, aiming for JetBrains power with Vim responsiveness. Zed now has 170,000 active developers. [10:19], [22:32]
- **Agent Client Protocol**: ACP externalizes AI agents the way LSP did language servers, making Zed neutral toward any agent, including Claude Code and Gemini CLI, with JetBrains also integrating. It delivers a great UI for interacting with agents and software. [18:10], [19:12]
- **Synchronous Code Collaboration**: Git's asynchronous snapshots limit real-time collaboration; Zed lets humans and AI work together in the editor, like Figma does for designers, with conversations anchored to code. Screen sharing forces one person into the passenger seat because of keystroke round-trip lag. [11:52], [12:29]
- **Fine-Grained Edit Tracking**: Zed is building Delta DB for keystroke-level commits, anchoring permanent feedback and conversations to code as it evolves, unlike Git snapshots. Code becomes a metadata backbone for contextual AI-human interactions. [15:47], [34:31]
- **LLMs Excel In-Distribution**: LLMs shine at generating standard code, like Cloudflare Rust bindings or GPU pipelines from API docs, when the vision is clear, acting as a knowledge extruder. They struggle when the hard part is novel architectural thinking rather than the code itself. [20:25], [29:04]
Topics Covered
- Source code remains human-readable language
- Performance can't be retrofitted
- ACP enables agent-agnostic IDE
- LLMs excel at in-distribution code
- Codebase becomes metadata backbone
Full Transcript
It just doesn't make sense to me that human beings would stop interacting at all with source code until we get to, like, AGI, I guess, where human beings aren't going to be doing a lot of different things. Um, but until then, I think we need to look at code. And so then the question is, what's the best user interface for doing that?
Today we're talking with Nathan Sobo, founder of Zed, who spent nearly two decades building IDEs: first building Atom at GitHub and now Zed. Zed is a modern IDE written in Rust, used by more than 150,000 active developers; the team also creates and maintains ACP, the Agent Client Protocol, which connects different coding agents to different coding surfaces, including Zed. Nathan shares a contrarian take: despite all the hype around chat and terminal-based AI coding tools, he argues that source code itself is a language meant for humans to read, and that we'll always need visual interfaces to understand what AI agents are doing. We dig into whether LLMs can actually code, what the richer collaboration layer between humans and AI might look like for coding, and Nathan's vision for turning code into a metadata backbone where conversations, edits, and context all hang together.
Enjoy the show.
Nathan, thank you for joining us here today.
>> Yeah, thanks for having me on.
>> I want to start with a hard-hitting question. There's a lot of internet talk, chatter about: is this the death of the IDE? If you roll back two years, everyone was coding primarily in the IDE, right?
>> And now it seems like, as people move towards the terminal and more of these conversational experiences, there's a question in the air: is this the death of the IDE?
>> Yeah. And I've actually asked myself that question at different times, and, you know, in different states of anxiety, of, like, is it the death of the IDE? I've spent
my entire life grinding toward building the ultimate tool for this and is it not going to matter? Are these people right?
But after mulling it over seriously, because I definitely don't want to be gold-plating a buggy whip, I think those takes are not realistic. It is mind-blowing that you can sit down at a terminal, uh, and speak English with a script talking to an LLM and make real progress on a codebase. And there are millions of people doing that, apparently, including me on occasion. Um, but the problem I ran into, whenever... you know, Claude Code was the thing I think I spent the most time with, uh, is when it wants to show you what it just did and you're reviewing it, you sort of view it through this ten-line, you know, tiny little excerpt in the terminal. Um, and as soon as you want to see more, what do you do? Like, and so I think if you believe that the IDE is going to die, then I think that requires you to believe that human beings are not going to need to interact with source code anymore. Like, I don't need to take a look and see the context of this edit that the agent just made and all the different things that it's connected to, um, and understand that and load that
into my brain? And I just fundamentally think that source code is a language, just like natural language is a language. So we have this revolutionary new tool for processing natural language that we've never had. But it's not like source code is binary that we feed to a processor, right? Like, it is intended for human consumption. Like, uh, one of my heroes that I learned a lot from, Hal Abelson, he's a computer science professor, I think he was at MIT. He has this great quote that I've always loved, which is that programs should be written for people to read and only incidentally for machines to execute.
>> which is an extreme stance because it's like why are you writing the program if you don't want a machine to execute it?
There are, like, a lot of people that program in Haskell and stuff that seem to not actually care about what gets done with all of this programmatic machination. I tend to... but I also see a lot of wisdom in that, in that, like, fundamentally, programs are about us expressing some abstract process in a very precise way, and there is no better language for talking about a lot of different sort of Turing-complete programmatic concepts than source code itself. So, it just doesn't make sense to me that human beings would stop interacting at all with source code until we get to, like, AGI, I guess, where human beings aren't going to be doing a lot of different things. Um, but until then, I think we need to look at code. And so then the question is,
>> what's the best user interface for doing that?
>> And is the best user interface a GUI, then?
>> I think so. Or, you know, there are a lot of different ways of representing an interface to code. Does it need to be graphical? I mean, there are a lot of people that are using Vim, for example.
Vim's not a graphical user interface, but it is an interface, right, that's optimized around presenting source code, navigating source code, and yes, sometimes editing that source code manually. Because I think in the same way that the best way to understand software is often looking at the software, looking at the best humane synthetic language we can derive for expressing this abstract process, sometimes the best way of expressing it is just to write it. Yeah.
>> Uh, directly. And I'm not here to say that I'm particularly a big fan of grinding through repetitive work, um, or writing something that could be written by an LLM. Like, I have no desire to write that necessarily, but I do think that there are oftentimes still places in software where the clearest way to articulate something is just to write the code, define a data structure. You know, I could write a sentence to an LLM describing that I want a struct with the following four fields, um, or even zoom out and describe that on a more abstract level. But if I know what I want to express, sometimes source code really is the most efficient way to do it. And in that world, a tool designed for navigating and editing source code still seems like a really relevant tool. And I have a feeling that even people that are heavily vibe coding with a tool in the terminal are probably running an editor alongside that tool to inspect what's going on.
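A hypothetical example of the point: the struct below pins down in a few lines what a prose description to an LLM ("a record with an id, a display name, an optional email, and a timestamp") would state less precisely. The names are made up for illustration.

```rust
// Hypothetical example: the code is often the clearest spec. Each field's
// name, type, and optionality are stated exactly, with no ambiguity for
// a reader (or an LLM) to resolve.
#[derive(Debug, Clone, PartialEq)]
struct UserRecord {
    id: u64,
    display_name: String,
    email: Option<String>, // optionality is explicit, unlike in prose
    created_at_unix_secs: i64,
}

fn main() {
    let u = UserRecord {
        id: 1,
        display_name: "nathan".to_string(),
        email: None,
        created_at_unix_secs: 1_700_000_000,
    };
    println!("{:?}", u);
}
```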
>> You mentioned at the start that you've been working on IDEs for your whole career. You are a legend in the, uh, IDE space. Just maybe, for our listeners, say a word on your background.
>> When I graduated from college, I decided I wanted to build my own editor. That was 2006, the year after I graduated college, and I've been working my whole career to build the editor I envisioned. At the time, TextMate was a really popular tool. I learned about TextMate from DHH demoing Rails or whatever; it was just lightweight, simple, approachable, fast.
I'd used Emacs, I'd used Eclipse, I'd used, uh, the JetBrains products, which are still really powerful, and, um, all of them brought something to the table in terms of either extensibility or responsiveness or feature richness, but none of them synthesized all those things into one package. And so it was, like, yeah, 2006 when I decided I want to build an editor that has the same power or more power as the most capable IDEs that take ten years to start up, you know, and feel kind of sluggish under my fingertips, but then have the same kind of responsiveness as a TextMate or a Vim, but also be really extensible, but not have to be extensible in this arcane Vimscript language where I'm having to have this, like, pet that I'm feeding every weekend or whatever in my spare time to make sure that, you know, my Vim configuration doesn't break. I wrote Atom in Vim, so...
Anyway...
>> Talk about Atom, actually, and then lessons from that, and then why you started Zed.
>> Yeah, the first attempt at delivering this IDE of my dreams was Atom, and, uh, I joined GitHub as, you know, one of the first two engineers to work on that project. And, uh, we wanted it to be extremely extensible, and so we decided, why don't we build it on web technology? So in the process of creating Atom, we built the shell around Atom, which we ended up naming Electron, and Electron, uh, went on to be the foundation...
>> I just caught that reference.
>> Yeah. Uh, that was Chris Wanstrath's idea, actually, not mine. Yeah. And so what we did is we sort of married Node.js, which was, uh, getting really popular at the time, with Chrome, and then delivered this framework that kind of let you build a web page that looked like a desktop app.
>> Um, and it went on to be really successful. Atom had its day in the sun.
And then Microsoft, uh, kind of copied our idea, took Electron, uh, took code they already had that was running on the web, and moved it over. And the rest was history. I mean, VS Code went on to take over the industry. But at some point, I'd kind of gotten to the point where I felt like Atom had run its course. I'd learned some hard lessons there. Some of them were just about, like, how do you design a data structure to efficiently represent large quantities of text and edit it, which is a, you know, language-neutral lesson, honestly, to some extent. And some of the lessons were about not inviting people to bring code onto the main thread, um, and destroy the performance of the application by just running random extension code. Um,
we made it very extensible. We were very open, which sort of made it very popular quickly, and then we drowned under the promises we had made that were kind of premature. But one of the things was, I was just sick of web technology. Like, I remember opening up the performance profiler that was built into Electron, Chrome's dev tools basically, and just looking at something that I was trying to optimize, and I'm just like, I need to get inside of whatever these little lines are in this flame graph and figure out what's going on inside there. And I just hit the ceiling, I guess, on how fast I can make this thing. And so it was, yeah, I think 2017 that I decided we need to start over. Um, that a web browser is actually not a suitable foundation for the tool that I really wanted to build,
>> which had a lot to do with performance, which sounds like no big deal. I mean,
but performance is not a feature that you can really go back and add later. If you've chosen an architecture, you're going to accept the performance capabilities of that architecture, and the web wasn't it for me.
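One of those hard lessons, efficiently representing and editing large quantities of text, is why editors use rope-like structures instead of one flat string. A minimal sketch of the idea (not Zed's actual rope, which is a balanced tree; ASCII-only for simplicity):

```rust
// Minimal sketch of why editors avoid one big contiguous String: with
// small fixed-size chunks, an insert only rewrites one chunk instead of
// shifting the whole buffer. ASCII-only for simplicity; a real rope must
// respect UTF-8 char boundaries when splitting.
const MAX_CHUNK: usize = 8; // tiny for demonstration; real ropes use larger chunks

struct Rope {
    chunks: Vec<String>,
}

impl Rope {
    fn from_str(s: &str) -> Self {
        let chunks = s
            .as_bytes()
            .chunks(MAX_CHUNK)
            .map(|c| String::from_utf8(c.to_vec()).unwrap())
            .collect();
        Rope { chunks }
    }

    // Insert `text` at byte `offset`; only the chunk containing the
    // offset is touched, plus an occasional split to bound chunk size.
    fn insert(&mut self, mut offset: usize, text: &str) {
        for i in 0..self.chunks.len() {
            let len = self.chunks[i].len();
            if offset <= len {
                self.chunks[i].insert_str(offset, text);
                if self.chunks[i].len() > MAX_CHUNK {
                    let tail = self.chunks[i].split_off(MAX_CHUNK);
                    self.chunks.insert(i + 1, tail);
                }
                return;
            }
            offset -= len;
        }
        self.chunks.push(text.to_string());
    }

    fn text(&self) -> String {
        self.chunks.concat()
    }
}

fn main() {
    let mut r = Rope::from_str("hello world, this is a buffer");
    r.insert(5, ", there");
    assert_eq!(r.text(), "hello, there world, this is a buffer");
    println!("{}", r.text());
}
```

Because an insert touches only one small chunk, edit cost stays roughly constant as the buffer grows; a real rope additionally arranges the chunks in a balanced tree so that finding the offset is O(log n) rather than the linear scan used here.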
>> You built Zed originally to make it easier for humans to pair program with other humans, >> right?
>> That ended up being very convenient as AI agents came about and humans started to need to collaborate with AI agents.
Talk about that dynamic a bit. I think
the whole industry has this idea of how we should all be collaborating together.
And I was actually at GitHub for the current way that we collaborate becoming popular. Um, and it's all about being asynchronous: you kind of go off in your corner and do a bunch of work, take a snapshot of that, upload it, and then in a web browser someone, you know, writes comments on your snapshot, and then maybe an hour, or maybe a day, later, you reply. And it's this very, like, email-oriented experience, asynchronous experience. Which, when you're all on the same page, um, or maybe when you're working on Linux, which is what Git was designed for originally, and there's people all over the world working on these very disparate things, maybe that's an appropriate modality. But I
always believed that the best way to collaborate on software was including a lot more times where we're in the code together, writing code together or talking through code together and
getting on the same page in a format where we're actually talking to each other and can interrupt each other and also relate to each other as human beings in a way that I just don't see
happening on top of the Git-based flow.
>> Uh, we use Git, uh, all the time, and we don't do as much code review as a lot of teams, because what we prefer to do is just talk to each other in real time in the code. But there just wasn't a good tool that enabled that. You could use screen sharing, but the problem with screen sharing is one person is very much in the passenger seat, because you've got to round-trip keystrokes. And so,
yeah, the two big problems, when I... not knowing that AI was coming, right? The two big problems I wanted to solve at the outset were, fundamentally, better performance. You know, when you type a key, I want pixels responding to you on the next sync of the monitor, so there's zero perceptible lag. We're pretty close. I can't say we're 100% perfect, but we're a hell of a lot closer than you could ever get in a web browser.
>> And can you say a word about how you've achieved that?
>> Yes, I'm on a digression here, but, uh...
>> Go ahead and say your second thing, and then say a word about how you achieve performance.
>> Um, but then the other big pillar, other than performance, at the outset was changing the way that developers collaborate on software. And to do that, I really feel like we need to bring the presence of your teammates into the authoring environment itself, in much the way that Figma did for designers. Now, designers didn't have a lot of good options.
They didn't have anything as good as Git, for example, as a compelling alternative. But I still think that vision of
>> you're in the tool looking at the actual thing you're creating and there are other people there with you
is what I wanted to bring to the experience of creating software, and so that's why it felt appropriate to own the UI on this deep level. Now, onto the rest of your question about what are the
implications of that for AI. The vision
with Zed was always: I want to link conversations to the code, in the authoring environment where the code's being written. And so I actually think that conversations in the code, that used to be kind of a weird idea, right? Because, oh, why would you need to have a conversation in the code? You write it by yourself and you push a snapshot, then we'll have a conversation on a website about the code you wrote, right? Um, but it's starting to feel a lot more relevant in a world where you're having this conversation all the time with this, like, spirit being or whatnot, right?
>> Ghost.
>> Yeah. All of us, even me included, as a big fan of this more synchronous mode of collaboration, um, are having a lot more conversations about code in the code.
And that's where I see sort of this snapshot-oriented paradigm, um, really breaking down. Like, when I'm interacting with an agent and it goes off and makes some changes, and I want to give it feedback on those changes, ideally I want to kind of permanently store the feedback that I gave on those changes, uh, and have a record of that. Um,
there's no sort of Git for that, if that makes sense, right? But I'm not going to commit on every token the thing emits and then, like, do a pull request review on that, right? And so, to be real, like, Zed is very much a work in progress. And
I think to earn the right to deliver this experience, we first just had to build a capable editor that someone would just want to use to create software on their own. Um, I think we
made a ton of progress there, and are now starting more earnestly on phase 2, this fine-grained tracking mechanism that's sort of the equivalent... it's not exactly how it works, but it's kind of the equivalent of having a commit on every keystroke, or a commit on every edit that the agent streams in, and then being able to anchor interaction or
conversation directly to that. So the tech we're building is, I think, something maybe we could have built in isolation.
Um, but then the problem is, well, what experience do you deliver on top of that? And I always thought the best possible experience would be this vertically integrated one. We designed the UI and all the infrastructure, top to bottom, um, to, yeah, deliver this immersive ability to interact directly in the code.
>> Yeah. With another being.
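The idea of a commit per fine-grained edit, with conversation permanently anchored to it, can be sketched as a small data model. This is an illustration of the concept only, with made-up names, not Zed's actual Delta DB design:

```rust
// Hypothetical sketch of "a commit per edit": every fine-grained edit
// gets an id, and feedback attaches to an edit id rather than to a
// coarse git snapshot, so the conversation survives as the code evolves.
#[derive(Debug)]
struct Edit {
    id: u64,
    offset: usize,    // where in the buffer the edit landed
    inserted: String, // what was inserted (deletions omitted for brevity)
}

#[derive(Debug)]
struct Comment {
    anchor_edit: u64, // feedback is tied to the specific edit it discusses
    text: String,
}

#[derive(Default)]
struct DeltaLog {
    edits: Vec<Edit>,
    comments: Vec<Comment>,
}

impl DeltaLog {
    fn record_edit(&mut self, offset: usize, inserted: &str) -> u64 {
        let id = self.edits.len() as u64;
        self.edits.push(Edit { id, offset, inserted: inserted.to_string() });
        id
    }

    fn comment_on(&mut self, anchor_edit: u64, text: &str) {
        self.comments.push(Comment { anchor_edit, text: text.to_string() });
    }

    fn comments_for(&self, edit_id: u64) -> Vec<&str> {
        self.comments
            .iter()
            .filter(|c| c.anchor_edit == edit_id)
            .map(|c| c.text.as_str())
            .collect()
    }
}

fn main() {
    let mut log = DeltaLog::default();
    // An agent streams in an edit; the human anchors feedback to it.
    let e = log.record_edit(120, "fn retry() {}");
    log.comment_on(e, "prefer exponential backoff here");
    assert_eq!(log.comments_for(e), vec!["prefer exponential backoff here"]);
}
```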
>> And so you've made the choice to make the IDE almost this Switzerland for humans to collaborate with different AI agents, >> right?
>> Um, talk about the role that the agent client protocol, is that what it is, ACP, plays in that vision.
>> I really view our job as to provide the ultimate interface between the human being, the source code, and other human beings, or other artificial human beings, basically. And we built our first-party agent, um, like, earlier this year, and it's quite challenging, uh, you know, tuning the prompts. I mean, none of it feels challenging in the same ways that building an IDE is, in terms of algorithmic complexity and, you know, uh, getting the data structures just right and making sure that things are performant. The actual, like, Turing-complete software parts of that are fairly easy. The hard parts are, like, the AI parts. And that's still something that I think we're learning as a team. Like, we come from a very different perspective.
Meanwhile, I see all these teams, um, that all seem to be quite well funded, from some of the big AI labs like Anthropic and Google, Google with Gemini CLI, they were the first people that we integrated with, um, Claude Code. Everyone's building an agent, it seems, and all these agents are rendering what I consider to be a fairly impoverished kind of terminal-based experience that would need to be supplemented with an editor.
>> So the thought is, okay, we've got a great editor, and all these people are trying to solve this problem. What needs to happen here is the same thing that the Language Server Protocol did.
So one great thing that Microsoft did with VS Code is they took all the intelligence of the IDE that was typically bundled in, like, JetBrains-style, right, where the IDE comes preconfigured knowing everything, and they moved it out to the community. So PHP has a language server now, and, you know, there's, uh, the TypeScript language server, etc. We wanted to do the same thing with agents. The thought being, there's probably going to be different kinds of agents experimenting in different domains. Maybe there are certain agents that are optimized for particular problems. There are agents competing with each other, so sometimes one will be the best, only to be leapfrogged by another.
Externalizing all that and trying to democratize that and say, "Hey, whatever agent you want to use, we want to deliver a great UI for you to interact with that agent and your software." Um, that was the thinking behind it. And so
far, it's working out better than I might have expected, actually. Like, um, I didn't really know how many people were going to resonate with this idea, but JetBrains got on board. Uh, and so that I think is really exciting. They're theoretically a competitor, but it's nice to have someone on the other side.
Um, and now there are a bunch of different agent developers that are getting on board on the other side. Um, we're going to continue working on our own agent, but it's nice to be aligned with all that effort instead of competing with it.
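The LSP analogy is concrete: like LSP, ACP runs the agent as a separate process and talks to it in a JSON-RPC-style protocol. A rough sketch of what framing a request might look like; the method and field names here are illustrative, not the real ACP schema:

```rust
// Illustrative sketch of the LSP/ACP idea: the editor speaks a neutral
// JSON-RPC-style protocol to an external agent process, so any agent
// implementing the protocol plugs into any editor. Names below are
// illustrative, not the actual ACP wire format.
use std::fmt;

struct JsonRpcRequest {
    id: u64,
    method: &'static str,
    params: String, // raw JSON payload, kept as a string to stay dependency-free
}

impl fmt::Display for JsonRpcRequest {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "{{\"jsonrpc\":\"2.0\",\"id\":{},\"method\":\"{}\",\"params\":{}}}",
            self.id, self.method, self.params
        )
    }
}

fn main() {
    // The editor asks the agent to start a turn on the user's prompt.
    let req = JsonRpcRequest {
        id: 1,
        method: "session/prompt",
        params: r#"{"text":"add a retry helper to the HTTP client"}"#.to_string(),
    };
    println!("{}", req);
}
```

The design payoff is the same as LSP's: any editor that can frame these messages can drive any agent that understands them, which is what makes Zed (or JetBrains) neutral toward Claude Code, Gemini CLI, or anything else.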
>> Yeah. Do you vibe code sometimes?
>> But yeah, I mean, I have. So one successful case of vibe coding is, um, we had some very old, like, server-side infrastructure that needed to be replaced. And so I decided to move all of our server-side infrastructure to Cloudflare. And I vibe-coded a simulator for the Cloudflare API. Um, so basically we have a trait in Rust that abstracts away everything Cloudflare can do. And I had an afternoon, basically, and an idea, and that was an amazing use case of
agentic coding.
>> Just, like, what did you...
>> Uh, I just described the idea to the agent. I think I fed it some API docs from Cloudflare's JavaScript APIs, and I said, I want to build Rust bindings to these APIs, but then I want to, you know, build an abstraction that sort of lets me then plug in a simulator for these APIs as well. And I knew exactly what I wanted. I had a vision, uh, and I didn't have a ton of time to express that vision. So in the past, maybe I would have either done it myself, which I definitely didn't have time for, there were other things going on at the time, or written some amorphous document, uh, trying to, you know, explain to engineers on my team what I had in mind. But what this vibe coding session enabled me to do was get somewhere in between, if that makes sense. Like, I sort of apologetically handed this pile of generated code to the guys working on cloud and was like, I generated this.
This is directionally the way that I want to go. Don't judge me too hard if you find some weird stuff in here that, like, doesn't quite make sense. Like, it's generated, just so you know. That was a huge success, though. I mean, they were able to run forward with it.
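The trait-plus-simulator pattern he describes might look roughly like this; the trait and method names are hypothetical stand-ins for "everything Cloudflare can do":

```rust
// Hypothetical sketch of the pattern described: a trait abstracts a
// Cloudflare-style KV API, with one implementation that would hit the
// real service and another that simulates it in memory for tests.
use std::collections::HashMap;

trait KvStore {
    fn put(&mut self, key: &str, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// In-memory simulator: deterministic, no network, fast tests. A real
// Cloudflare-backed implementation would satisfy the same trait.
#[derive(Default)]
struct SimulatedKv {
    data: HashMap<String, Vec<u8>>,
}

impl KvStore for SimulatedKv {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.data.insert(key.to_string(), value);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
}

// Application code is written against the trait, so swapping the real
// implementation for the simulator requires no changes here.
fn record_session(store: &mut dyn KvStore, id: &str) {
    store.put(id, b"active".to_vec());
}

fn main() {
    let mut kv = SimulatedKv::default();
    record_session(&mut kv, "session-1");
    assert_eq!(kv.get("session-1"), Some(b"active".to_vec()));
    println!("simulator ok");
}
```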
>> Yeah. I think I maybe avoided it because, like, I never want to be, you know, the boss, the vibe coding boss or whatever, you know, like...
>> Why not?
>> What I mean is, I want to be the vibe coding boss that's doing it well, but I don't want to be the vibe coding boss who's sort of clueless and thinks that they've solved 95% of the problem when really they've solved 5% of the problem, and they're just deluding themselves. I want to do it in an aware way and make sure that I'm actually moving the ball forward and not being annoying,
>> Yeah.
uh, handing off a big mess of slop to somebody and being like, here you go, clean it up, my great idea.
>> Actually, speaking of, so your user base is, you know, on the order of 100,000 active developers in size...
>> It's 170, I think. Well, anyway, depends how you measure.
>> 170,000 active developers. And they tend to be, like, pretty hardcore engineers, you know, elite, have been coding for a long time. What is your user base's overall perspective on AI, and are they embracing it? Are they using... you know, I just listened to the Karpathy interview where, you know, he uses autocomplete but doesn't really use the agentic loop as much. What are your users doing in terms of adopting AI?
>> Based on the metrics we have, which are not perfect because it's an open-source IDE, it makes it very easy for people to opt out: about half of the people using Zed are using our edit prediction capability, which is, you know, I'm coding along and it suggests the next thing. So very much programmer in the driver's seat. And about a quarter of our active users are using agentic editing in some shape or form.
>> Mm.
>> You know, we had some haters in the crowd, and I think as we began to embrace AI, there were definitely people who let us know what they thought about that, you know, that we weren't whatever. But I don't care about that. Like, hey, this is happening, something's happening here. We're not going to just not go toward that.
>> I'm not like that. And so if they signed up for Zed for being, like, Luddite or head-in-the-sand, or, like,
>> or, I don't know, just clinging to tradition with all our might... like, we're not on board with that. I want to move toward the future.
>> We attract a more professional, yeah, like you're saying, hardcore audience, because I think, at least at the moment, um, and again, the full vision isn't built yet, one of the things we have to offer is this extreme performance, while, you know, with every passing day, the same features as the other things, but just much better performance. And so I think as a developer becomes more seasoned, they start to care about the tactile experience of the tool they're using under their hands. You use something 40 hours a day,
>> it starts to bother you when it's dropping frames or just not being able to keep up with your hands, basically.
>> Yeah.
>> So just the kind of people that tend to gravitate towards Zed now are the kind of people that just care about a really well-crafted, fast tool. My daughter goes to school with a girl whose mom is a dentist, and she's vibe coding some software for her dental practice right now, right?
>> Does she know that she needs a fast editor that feels good under her fingers? I don't know. I'd like her to.
I mean, it's kind of my job to. I think we have bigger plans, and there are going to be things that speak to that wider audience over time, but for now, the people that really care tend to be people that are quite experienced.
>> Mhm.
>> I remember one of your engineers, Conrad, wrote this article, and you texted it to me almost, like, sheepishly or apologetically. It was number one on Hacker News: why LLMs can't build software. What is your kind of mental model for what LLMs are good at in software versus where they're lacking, and how quickly do you think that's changing?
>> I'm less convinced than Conrad that LLMs can't build software. I think his mental model of the things they're incapable of is maybe better than mine, or he's just more confident than I am in his take. Uh, I'm less convinced of what they're not ever going to be capable of doing, but I can tell you what they've worked well at in my own experience, and where things get frustrating or go wrong for me. I mentioned earlier generating the Cloudflare stuff. Um,
another earlier experience that I had, pre-agentic, pre-agent anything, it was just with GPT-4,
and I generated a new backend for our graphics framework that we had to write. So, you were asking earlier about how we achieve our performance. One of the things we do is the entire Zed application is organized around delivering data to shaders that run on the GPU, that render the entire UI, in much the same way that a video game would render frames of its experience. I guess it's rendering a 2D UI with some of the same techniques that video games use to render their 3D worlds.
>> Interesting. Um, so anyway, I was rewriting the graphics back end, and though I did, you know, write the original graphics back end, uh, I was rewriting it. The old one was working well enough, but it wasn't in the shape I quite wanted. Um, and so I was able to just generate, like, a rendering pipeline that configured the GPU and all the different stages, and all these things that, like, I would have in the past been searching around on Stack Overflow or digging around in obscure documentation to do, I was able to just go. Because all that knowledge, I didn't have it, but it's definitely out there, like, in the distribution of these models.
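The game-style rendering approach he describes can be sketched at the data level: each frame, the UI is flattened into an array of primitive instances that a renderer would upload to the GPU in one go. The field layout below is illustrative, not Zed's actual shader interface:

```rust
// Rough sketch of GPU-oriented UI rendering: instead of a DOM, every
// frame produces an array of primitive instances (quads here) that an
// instanced shader would draw, the way a game submits geometry.
#[repr(C)] // predictable layout, so the Vec maps onto a GPU instance buffer
#[derive(Clone, Copy, Debug, PartialEq)]
struct Quad {
    origin: [f32; 2], // top-left, in logical pixels
    size: [f32; 2],
    color: [f32; 4], // RGBA
}

// The CPU-side "scene builder" run every frame; a real renderer would
// then copy this Vec into a GPU vertex/instance buffer and issue one
// instanced draw call for all quads.
fn build_scene() -> Vec<Quad> {
    vec![
        // tab bar
        Quad { origin: [0.0, 0.0], size: [800.0, 24.0], color: [0.1, 0.1, 0.1, 1.0] },
        // editor pane
        Quad { origin: [0.0, 24.0], size: [800.0, 576.0], color: [0.05, 0.05, 0.05, 1.0] },
    ]
}

fn main() {
    let scene = build_scene();
    println!(
        "{} quads, {} bytes per frame",
        scene.len(),
        scene.len() * std::mem::size_of::<Quad>()
    );
}
```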
Another thing I did during that same rewrite, or, like, fundamental change of our UI framework, was I wrote a bunch of procedural macros. And macros in Rust are really powerful. They're ways to do things where you can just put a little annotation at the top of something, and it'll generate all this code behind the scenes before the compiler runs. But I never really learned to write Rust macros, definitely not these procedural macros that I needed to write in order to basically pull all the ideas from Tailwind CSS into our Rust graphics framework, which really delighted me. This idea that, like, oh, we're pulling these pop-culture, kind of, like, Tailwind web ideas into the systems programming language and combining these two things together.
But Tailwind's definitely in the distribution of the LLM. It knows Rust well enough, and it knew how to generate these procedural macros that I didn't know how to generate, and, like, faster than I... I never would have even attempted to do what I did, of, like, okay, I'm going to write some macros that generate a method for every single Tailwind class, by kind of feeding some docs into it. I view it as, like, a knowledge extruder, of, like, there's all this sort of generalized knowledge out there, and sure, I could go read about it and learn about it, but no, I want it to be, like, squeezed out in exactly the shape that I want it. And so that was, like, a perfect use case for it, I think. It was all pretty well-known, standard stuff, but I just didn't have it in the shape that I needed.
>> Mm.
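The Tailwind-macro idea, one annotation expanding into a method per utility class, can be sketched with a declarative macro. Nathan's were procedural macros (which must live in a separate crate); the `macro_rules!` version shown here, with made-up class names and values, conveys the same shape:

```rust
// Simplified sketch: generate one chainable builder method per
// Tailwind-style utility class. Class names and pixel values below are
// invented for illustration.
#[derive(Default, Debug, PartialEq)]
struct Style {
    padding: f32,
    margin: f32,
}

macro_rules! tailwind_methods {
    ($($name:ident => $field:ident = $value:expr),* $(,)?) => {
        impl Style {
            $(
                // Each entry expands into a chainable setter method.
                fn $name(mut self) -> Self {
                    self.$field = $value;
                    self
                }
            )*
        }
    };
}

// One invocation expands into a method per class, mirroring Tailwind's
// p-4 / p-2 / m-2 utilities.
tailwind_methods! {
    p_4 => padding = 16.0,
    p_2 => padding = 8.0,
    m_2 => margin = 8.0,
}

fn main() {
    let style = Style::default().p_4().m_2();
    assert_eq!(style.padding, 16.0);
    assert_eq!(style.margin, 8.0);
    println!("{:?}", style);
}
```

A procedural macro does the same expansion, but because it runs arbitrary Rust at compile time, it can also read the class list from data (say, fed-in docs) rather than hand-listing each entry.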
>> Um, so I think what they're really good at is like copy and paste, but way, way beyond that; you're still sort of borrowing from the global knowledge set.
>> Yeah.
>> Where I've gotten more frustrated is, like right now we're working on this Delta DB system, which is trying to do fine-grained tracking and real-time syncing of individual edits as they occur, layered on top of git. It's fun, because it's been a while since I've had these moments of just sitting in front of a problem and struggling to load all the different constraints that need to be solved simultaneously into my head, and hold them all there long enough. LLMs have not been that helpful, at least not in writing the code, because the code is not the constraint we're solving.
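As a rough illustration of the fine-grained tracking idea, here is a minimal sketch of an edit log layered on top of a git snapshot. All type and field names are invented; this is not Delta DB's actual design.

```rust
// Hypothetical sketch: an append-only log of individual edits, each
// anchored to a git base commit, so the evolution between snapshots is
// recorded edit by edit rather than commit by commit.

#[derive(Debug, Clone)]
struct Edit {
    base_commit: String, // git commit this edit applies on top of
    offset: usize,       // byte offset in the buffer
    deleted: String,     // text removed at that offset
    inserted: String,    // text inserted in its place
}

#[derive(Default)]
struct DeltaLog {
    edits: Vec<Edit>,
}

impl DeltaLog {
    fn record(&mut self, edit: Edit) {
        self.edits.push(edit);
    }

    /// Replay every recorded edit on top of the base text to
    /// reconstruct the current buffer contents.
    fn replay(&self, base: &str) -> String {
        let mut text = base.to_string();
        for e in &self.edits {
            let end = e.offset + e.deleted.len();
            text.replace_range(e.offset..end, &e.inserted);
        }
        text
    }
}

fn main() {
    let mut log = DeltaLog::default();
    log.record(Edit {
        base_commit: "abc123".into(),
        offset: 7,
        deleted: "world".into(),
        inserted: "Zed".into(),
    });
    let result = log.replay("hello, world");
    assert_eq!(result, "hello, Zed");
    println!("{result}");
}
```

The interesting constraints Nathan alludes to (concurrent edits, syncing, anchoring metadata) live well beyond a sketch like this; the point is only that the unit of history is the individual edit, not the git snapshot.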
Like, yes, we're writing code, but much more important is the thinking going on behind what is actually not that many total lines of code. They're just the right lines, and I'm proud of how few lines there are. People get excited about how many lines of code they're generating, and for different kinds of software maybe that makes sense. But I'm still using LLMs. I'm just not using them to write the code. I'm using them to explore an idea, or even to generate some code that I never intend to run, just to quickly see how it feels. So it's not that I don't use them as an inherent part of my process. It's just that, depending on what I'm doing, I'm not always so sure they're going to make me faster at writing code.
>> So just to read that back
to you: when the LLMs are doing something that's kind of in-distribution of their training data, and where you have a good abstract model, almost the pseudocode, of what you want them to do, the models are actually very good at implementing the code.
>> Yeah.
>> Once you go out of distribution, or when writing the code is not actually the task at hand, when it's actually the thinking about what you want the code to accomplish, then LLMs are just not there yet.
>> I think that's right. And another thing I'll say is, I'm excited for LLMs to get faster. I don't know that Haiku, for example, is intended to be used just directly; we have some work to do as tool builders to figure out when to switch to the faster model or the smarter model. Um, but in general, the faster they can go while not being totally unintelligent, that's the trick. But I'm
excited about being able to conjure a diff on demand, because some of it is just: if I'm asking the agent to do something, I have a couple of different choices. I can sit there and watch it, which can sometimes be helpful and sometimes is important, because I'm like, stop, whatever you're doing right now, like writing tests I didn't ask for that you make pass while the test I want to make pass is still failing. I can go make a coffee, or take care of some other task, or context switch to some company-level concern. Or I could go try to compete with it and let it do its thing. But all of them are a little annoying.
Again, I'm someone who's obsessed with fast feedback. I literally engineered the tool to give me keystroke feedback on the next frame. And so it's that waiting that has been frustrating for me. But if I could get something almost correct in a tenth of the time, okay, maybe we're talking about a shift again. That's why the jury is very much still out for me on where it's all going.
>> What do you think is the vision for how the IDE will ultimately evolve? We're going to have lots of these agents, and they're going to become increasingly capable, right? Um, how does the GUI evolve?
>> So there's a couple of different pieces to it. I mean, for me the deeper piece is this notion of treating the IDE as a fundamentally collaborative environment. And honestly, what's deployed in Zed today is still pretty alpha quality on that front, but we're taking all those lessons and have made very good progress on a new way of representing collaboration, and the IDE is going to be the place where that's all surfaced. An inherently collaborative experience, to me, means multiple humans and multiple agents: the idea that when you're having a conversation with an agent, that's potentially something you could pull other humans into, using the conversation with the agent as background context, or as a fast track to getting all the relevant code or the problem you want to discuss loaded up in a very easy-to-digest way, to then pull in a teammate and have that conversation. And
then the idea of permanence: being able to reference locations in the code in a stable way, and having a continuous representation of the evolution of the code rather than this punctuated, snapshot-based representation. To me that's going to be a fundamental abstraction we'll need to build any kind of interaction with the LLMs around, so that we can remember everything, and so that an LLM can ask, about a section of the code, what are all the conversations that happened behind this code, and go plumb that context. And I guess the idea is having the codebase be this backbone on which all the data related to the code can hang, in a way that it just isn't today. You can have comments in your code, and you can have stuff tied to snapshots of your code in GitHub, but for the most part the code itself is devoid of metadata.
>> Yeah.
>> And so really unlocking that, turning the code into this metadata backbone in the UI, is a big piece of it.
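One way to picture the "stable reference" primitive mentioned above: an anchor into a buffer that shifts as edits land elsewhere, so attached conversations stay pinned to the right code. This is a simplified sketch under invented names, not Zed's actual anchor implementation.

```rust
// Hypothetical sketch: a byte-offset anchor that survives later edits
// by shifting with insertions and deletions that happen before it.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Anchor {
    offset: usize,
}

impl Anchor {
    /// Adjust this anchor for an edit that replaced `deleted_len` bytes
    /// at `edit_offset` with `inserted_len` bytes.
    fn adjust(&mut self, edit_offset: usize, deleted_len: usize, inserted_len: usize) {
        if edit_offset + deleted_len <= self.offset {
            // Edit entirely before the anchor: shift by the size change.
            self.offset = self.offset - deleted_len + inserted_len;
        } else if edit_offset < self.offset {
            // Edit overlaps the anchor: clamp to the end of the insertion.
            self.offset = edit_offset + inserted_len;
        }
        // Edit entirely after the anchor: nothing to do.
    }
}

fn main() {
    // Imagine a comment thread anchored at byte 20.
    let mut anchor = Anchor { offset: 20 };

    // Someone inserts 5 bytes at offset 3, earlier in the file...
    anchor.adjust(3, 0, 5);
    assert_eq!(anchor.offset, 25); // ...and the anchor follows its code.

    // Someone deletes 4 bytes at offset 100, after the anchor.
    anchor.adjust(100, 4, 0);
    assert_eq!(anchor.offset, 25); // Unaffected.
}
```

A real editor tracks anchors against an edit history (so they stay valid across concurrent edits and sessions), but the offset-shifting logic above is the core of how a reference can stay stable while the code around it evolves.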
Some of the stuff I was showing you comes from asking ourselves a question I think a lot of people are asking: IDEs have looked the same for who knows how long, right? It's the typical thing. You've got the tree on the left, the tabs in the middle, maybe some git stuff on the right, or maybe the agent panel; that's where we have our agent panel today. And it's evolved over time to serve one human, in one working copy, solving one problem manually, maybe with some edit prediction in the middle. But if someone really is working agentically as their primary means of working, what does it mean to put the conversation front and center? And we're not the only ones thinking about this, obviously; there are other tools exploring it too. The cool thing about us, though, is that we are a full-blown IDE. And so I really view it as: there's a place for all these different ways of working. And I
definitely think there's a very long-lived place for the traditional way: I'm in one working copy, I see what's going on in it, and I need to evolve its state forward until it solves my problem. There's a place for that. But then there's a place, I think, for: all right, you've got several potential conversations, maybe on different projects, going on at the same time. Maybe that becomes less of an issue when the LLMs get faster. But even so, I think you're going to want to do even bigger things, and once you have a process that takes time, there's this natural desire to multitask.
And so there's that: how do we manage multiple conversations with multiple agents? And then, how do we make each conversation more valuable? That's really what we're pushing on in the mockups we've been doing. Right now, we model these conversations very much as a chat. There's more we can do, let's put it that way.
>> Like, in the sense that as the conversation's evolving, you could view it as a chat, but it's also sort of a document that's evolving over time. You model it as a conversation, but you could also view it as a document.
>> And inside that document, there are all these references being injected from different spots in your codebase, edits are occurring, and that's all getting unrolled over time as this log.
What I really want to do is make that document surface less of a read-only artifact, if that makes sense. Make it more useful as a primary editing surface, where you could move your cursor up, out of the box where you type what you want to say to the agent next, and into the previous conversation to do useful things. So, one of the useful things I want: right now, when we render code there, it's read-only. What we're working on now is, when we render a window into the code and that code's fresh, you should be able to edit right there and have it synchronized with the actual location, and expand the context like you can on a GitHub pull request, for example. So in
Zed, we have this concept of a multibuffer, which takes little pieces of code from all over your codebase and combines them into one user interface that you can edit as if it were a single buffer, basically. And so I'm really intrigued by the idea of: to what extent is this conversation I'm having pulling code toward me, potentially making some edits, and why can't I just reach out and interact with the code directly inside the conversation? And then also, when I select between two points, can I review the changes that occurred in that span of time? So it's really about making that conversation more than just a chat, and more keyboard-navigable, in a way that someone with their Vim bindings could quickly move up through the conversation and make it feel like an editor, a new kind of editor.
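A minimal sketch of the multibuffer concept as described: excerpts from several files presented as one editable surface, with each excerpt remembering its origin so edits map back. All names are invented for illustration; Zed's real implementation is far more involved.

```rust
// Hypothetical sketch of a multibuffer: stitched-together excerpts
// that render as one surface and route edits back to their files.

#[derive(Debug)]
struct Excerpt {
    path: String,       // source file the excerpt came from
    start_line: usize,  // first line of the excerpt in that file
    lines: Vec<String>, // the excerpt's current contents
}

#[derive(Default)]
struct MultiBuffer {
    excerpts: Vec<Excerpt>,
}

impl MultiBuffer {
    fn push(&mut self, excerpt: Excerpt) {
        self.excerpts.push(excerpt);
    }

    /// Render the combined surface the user sees: a header line per
    /// excerpt, followed by its contents.
    fn render(&self) -> String {
        let mut out = String::new();
        for e in &self.excerpts {
            out.push_str(&format!("-- {}:{} --\n", e.path, e.start_line));
            for line in &e.lines {
                out.push_str(line);
                out.push('\n');
            }
        }
        out
    }

    /// Apply an edit at a line of the combined surface, returning the
    /// (file, file line) it maps back to, if any.
    fn edit(&mut self, combined_line: usize, new_text: &str) -> Option<(String, usize)> {
        let mut cursor = 0;
        for e in &mut self.excerpts {
            cursor += 1; // skip the header line
            if combined_line >= cursor && combined_line < cursor + e.lines.len() {
                let local = combined_line - cursor;
                e.lines[local] = new_text.to_string();
                return Some((e.path.clone(), e.start_line + local));
            }
            cursor += e.lines.len();
        }
        None
    }
}

fn main() {
    let mut mb = MultiBuffer::default();
    mb.push(Excerpt { path: "src/main.rs".into(), start_line: 10, lines: vec!["fn main() {}".into()] });
    mb.push(Excerpt { path: "src/lib.rs".into(), start_line: 3, lines: vec!["pub fn add() {}".into()] });

    // Editing combined line 3 (the lib.rs excerpt's first line)
    // maps back to src/lib.rs, line 3.
    let target = mb.edit(3, "pub fn add(a: i32, b: i32) -> i32 { a + b }");
    assert_eq!(target, Some(("src/lib.rs".to_string(), 3)));
    println!("{}", mb.render());
}
```

The conversation-as-editor idea Nathan sketches is essentially this mapping generalized: code excerpts embedded in a chat log remain live views into their files, so editing inside the conversation edits the codebase.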
Having built this thing from scratch, and having deep control of all the primitives, an opportunity I'm excited to grab is: how can we have this new kind of editor? It's not just showing you a file in your codebase; it's showing you a conversation and pieces of all these different files, and you can just reach out and interact with that as you could in an editor.
>> Super cool.
>> Thanks, Nathan. You've been on a quest to build the perfect tool for your craft for a long time now, and it's exciting to see what you've done with Zed. I can't wait to see what you do with agents in the interface and with Delta DB. Thanks for joining us today.
>> Yeah, you're most welcome. Yeah, really
had fun.