A Cheeky Pint with Cognition CEO Scott Wu
By Stripe
Summary
Key takeaways
- **Moneyballification of everything**: In maturing fields like poker, Smash Bros., and startups, intuitive sharp thinkers give way to math nerds optimizing every edge, as spaces resolve to their underlying math like chess engines. [08:45], [11:42]
- **Devin: async junior engineer**: Devin operates asynchronously via Slack or tickets like a coworker, handling bugs, simple features, migrations, and upgrades as a junior engineer, merging 30-40% of PRs in successful orgs. [12:52], [14:52]
- **Essential vs. accidental complexity**: Essential complexity is core logic decisions; accidental complexity is routine implementation that consumes 80-90% of an engineer's time. AI handles the latter asynchronously while humans make high-level choices synchronously. [15:46], [17:08]
- **Coding tools beat the nihilist view**: Despite improving general models, specialized agents like Devin thrive on messy real-world context, like Angular migrations and Datadog debugging, that labs can't benchmark perfectly. [21:56], [23:44]
- **We already have AGI**: Current RL-driven AI crushes benchmarks like IMO gold, shifting the challenge to defining real-world benchmarks amid endless software demand via the Jevons paradox. [45:52], [25:11]
- **Weekend Windsurf acquisition**: After Friday rumors of a Google deal, Cognition cold-contacted Windsurf, shook hands Saturday, pulled an all-nighter Sunday, and signed Monday at 9 AM, gaining an intact GTM team to complement core research. [47:03], [49:20]
Topics Covered
- Math Competitions Breed AI Founders
- Founding Hardened by Maturity Playbooks
- Moneyballification Rewards Math Over Intuition
- Agents Offload Accidental Complexity
- Code Vanishes as Engineer Interface
Full Transcript
Have you had Guinness before?
I have actually never had a beer in my entire life.
All right, well, you're starting with the best beer— So that's good.
You order your Amazon packages with Devin?
Yeah.
So you're just in Slack and you ask it to buy something for you?
Yeah just @Devin, can you go buy some more whiteboards for us?
Or something like that.
I really enjoyed math competitions and going and competing and doing these things.
And this is stuff like, if I ask you, "What's 694 squared."
It is 481636.
I have shuffled the cards. I am not collaborating.
We give them to Scott.
So now you have six cards and you're trying to make 163, right?
And one way that you could do that here is two times 8 is 16.
9 divided by 3 is 3.
3 plus 16 is 19.
12 times 12 is 144.
144 plus 19 is 163.
And so almost all combinations can be— But you're probably thinking like, "I could have done that.
That's too easy."
Let me just tip it upside down like that.
Very good.
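The card game Scott plays here is a Countdown-style numbers puzzle: combine the drawn values with basic arithmetic to hit a target. Whether any combination works can be settled by brute force. A minimal sketch (the `solve` helper is illustrative, not anything Cognition ships; division is kept exact to mirror the on-air mental math):

```python
from itertools import combinations

def solve(nums, target):
    """Brute-force search: repeatedly combine two numbers with +, -, *, /
    until one equals the target. Returns a list of steps, or None."""
    if target in nums:
        return []
    for i, j in combinations(range(len(nums)), 2):
        a, b = nums[i], nums[j]
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        # Candidate combinations of a and b; division only when it is exact.
        candidates = {f"{a} + {b}": a + b, f"{a} * {b}": a * b,
                      f"{a} - {b}": a - b, f"{b} - {a}": b - a}
        if b != 0 and a % b == 0:
            candidates[f"{a} / {b}"] = a // b
        if a != 0 and b % a == 0:
            candidates[f"{b} / {a}"] = b // a
        for expr, value in candidates.items():
            steps = solve(rest + [value], target)
            if steps is not None:
                return [f"{expr} = {value}"] + steps
    return None

# The hand from the episode: six cards, target 163.
print(solve([2, 8, 9, 3, 12, 12], 163))
```

Running it on the hand above prints one valid chain of steps ending in 163, confirming Scott's claim that almost all combinations of cards admit a solution.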
Scott Wu is the co-founder and CEO of Cognition, which makes Devin, the AI coding agent.
Scott is a triple IOI gold medal winner and kind of famous for being a math whiz.
And now he's at the cutting edge of agentic software development.
Cheers.
All right. Cheers.
Tell me about your upbringing and all the math stuff.
I feel like you're known for the math stuff these days.
Yeah, so I grew up, I'm from Baton Rouge.
My parents were both chemical engineers and so they immigrated from China for grad school.
And then naturally when they were looking for jobs, they were doing air emissions permitting and things like that.
Louisiana has a lot of oil and gas and so that's kind of how we ended up— I love the air emissions too, actually.
Yeah.
And so that's how we ended up there.
I always loved math as a kid.
I had an older brother named Neal.
Super, super close, the whole way through.
And Neal was about five years older than me.
Neal started doing math competitions when he was in middle school.
And so he would've been in sixth grade and I was in first grade at the time.
And naturally I, as a little brother, would go and just watch what he was doing and try to learn some of the same math too.
And that's kind of how I first got into math.
And then, I found that I really enjoyed math competitions and going and competing and doing these things.
And this is stuff like, if I ask you, "What's 694 squared?"
I think it's probably not quite things of that nature.
It is 481636.
But it's more things like math puzzles, like the frog that goes up and then every night falls down the well.
And how many nights, these kinds of things where you get to— On the log.
Yeah. Where you kind of get to do the critical thinking and come up with interesting ideas and stuff like that.
So I started doing math competitions in second grade.
I remember there was a contest at the local college that I went to, which was for middle schoolers and high schoolers.
I competed in the seventh grade math division as a second grader.
I did the competition.
It was my first time doing any of these.
I just really liked math and stuff.
And then they were calling out third place, second place, first place.
And none of them were me.
I still just remember, I was so upset.
That's your super villain origin story.
Yeah exactly.
That's how it all began, basically.
So then I trained a bunch, the next year I was in third grade and I competed in algebra one or something and I won that year.
Then I basically kept doing math competitions from there.
My last year of high school, which would've been my junior year.
I left a year early.
But I did IOI the programming.
Yeah.
I did IOI three times, and I got gold, yeah.
So, where'd you go to school?
I took a year off actually.
I left high school a year early.
I wasn't that good at school, I guess.
I left high school a year earlier— Obviously, that's surprising, you weren't that good at school.
Well, I wasn't that good at finishing school.
I have a middle school degree, but, I didn't really make it through high school or college.
So I left high school a year early.
I spent a year actually in the Bay working at a company called Addepar.
And I did that as a software engineer.
That was back in 2014.
Yeah, wow, that was a blast from the past.
Yeah, it was a while ago.
And then after that I decided, OK, I will go try out college after all and see what that's like.
I went to Harvard for two years and then I dropped out.
How did you end up at Addepar?
And that's very forward thinking of them, obviously, that they took on a high school aged, high school dropout.
Yeah. It was a fun group.
Funnily enough there were four of us who started at the same time as high schoolers.
It was myself, Alexandr Wang is another one.
We started on the same day.
Eugene Chen, who's now running Phoenix DEX, and then Sreenath Are, who's most recently at Sandbar as the CEO.
Wait, sorry, this is a real small group theory moment.
So you and Alex were in the same— That's right.
So we knew each other, we met in middle school.
Alex now of Meta.
Now of Meta. That's right.
MSL, I guess. Yeah.
We met in sixth grade.
He was from New Mexico, I was from Louisiana, but we met in this math competition called MATHCOUNTS.
We were both at the national competition, and then we started talking.
Google Hangouts was the thing at the time.
It turns out there's some math in AI.
Yeah, this may be an— Yeah, it's a fun thing.
Well, a lot of the folks, as it turns out, from our vintage ended up being— I think there's like a real infectiousness of being entrepreneurial too.
I think Alex deserves a lot of credit for, I'd say being the first of our group.
Alex Wang got you into the idea of starting a company.
Yeah, somehow I think there's definitely a bunch of that involved, for sure.
But also, a lot of folks, Johnny Ho, who is one of the cofounders of Perplexity, for example, Demi Guo, who started Pika, a lot of these...
Jesse Zhang who started Decagon.
A lot of us were actually competing in these math and programming competitions in the same year.
And we all knew each other.
OK, so this gets something I was wondering.
There's this topic that people talked about a while back of, where are the young founders?
There always used to be people in their early twenties working on breakout companies.
Michael Dell was 19 when he started Dell.
23 when he took it public.
Obviously, Mark Zuckerberg was very young when he started working on Facebook.
And when it was a real breakout, he was still very young.
And there was a period where there was no young founders.
And now there's many, many more, like a whole bunch of the people that you mentioned.
You're 28, running Cognition.
Is the presence of young people as founders of leading companies a biomarker for industry vibrancy? Where,
Michael Dell was young during the takeoff of the PC era and, Mark Zuckerberg was young during the takeoff of social networking, and now we're in the takeoff of AI coding tools.
I should just say, I appreciate you calling me young.
I think relative to being 18 or 19, is still a long way.
The test is in your twenties.
So I have a take on this, and I'm curious to hear yours on this.
I've been thinking about this question as well, and my take is actually just that overall being a founder has just gotten harder.
That's probably the biggest, the highest order of it.
I think the reason that young founders who are just really sharp and really determined, did very well is because at the end of the day, being a good first principles thinker does beat experience, and just a lot of being a founder is doing something that has never existed before and coming to your own conclusions.
The thing is now there's a lot of people who have both, the first principles thinking and the experience.
And I think things have gotten a lot more...
call it mature as a space.
And so basically it's gotten harder.
So there are fewer that are literally coming out of college.
It feels hard to make the claim that, it was easy to start a leading business in prior eras.
Facebook faced lots of competition.
It's not like Dell was the only PC maker.
I don't think they had it easy by any stretch of the imagination.
However, I think you are getting at something where clearly all the large companies these days, they're very aware, they're very connected with the ecosystem.
If you look at Satya or Mark Zuckerberg, they are very aware of everything that's going on in AI, and they're paying a lot of attention to it.
And so yeah, maybe there aren't giant opportunities that are just being left on the ground by the big established companies.
Yeah, and maybe harder is not the right word.
It's more just that the space is a bit more mature and there's more of a playbook and more existing knowledge.
There's obviously something unique with every business, but a lot of the details of, "Here's how you should structure equity, here's how you should figure out fundraising, here's how you should hire your initial team."
Many of these things I think do carry over a lot with experience. Where, I think in previous eras the book wasn't written at all almost, so it just came down to how sharp you were and how good you were at making your own decisions.
I think now there's a lot more experience to draw from.
Maybe that's part of it.
I also do just have a theory of like, I guess I would call it, the "Moneyballification" of everything, to give a few examples, one of the things that I do casually for fun is playing poker.
Poker's a very fun game.
It's actually much more mathematical than a lot of people realize.
It's very, of course people kind of— I think people know that, like the poker solvers and the odds tables and everything. Solvers and such.
Or is it more mathematical than that?
No, I think that's right.
Well, I think there's a first sort of impression of, it's all about— I'm all in. Yeah, exactly.
Pay the person on the other end.
It obviously is much more mathematical than that.
But the one thing that's kinda interesting is you see it in the evolution of the top players in the space as well.
Back in the day, in the 80s or 90s, with the top pros, I don't think the idea is that it was less competitive, but the skills that made someone a really great poker player were just really great intuition.
They understood a lot of the mathematical concepts, but just at a very system one level of just being able to kind of think about them and obviously they had a good feel for the game and a good sense of how they should be able to kind of improve their own play.
Now it's just all math nerds.
At some point when the space gets mature enough that, you know what I mean?
Where for a less mature space, when people don't know what the right questions to ask are or how to even think about it, like what is the right frame of reference.
Then I think there's something about just, having a really sharp intuition and coming to your own conclusions.
Then at some point, as these things get more mature, the conclusion of it is math.
That's been the case in a lot of different fields.
And I feel like it's happening a little bit for startups as well.
I see.
More spaces have resolved to their underlying math, like a chess engine just deciding that the position is mate in 41 or something.
Yeah, and chess was totally the same way, by the way, back in the 1800s people— The romantic style of play is gone.
Yeah, exactly, the romantic style of play.
And now it's like, there is a right sequence of moves and, just seeing how close you are to that optimum.
What are other domains where the "Moneyballification" of everything is shown?
One of my other hobbies, which I played at least before the advent of Cognition, was a game called Super Smash Brothers.
I used to play tournaments for Smash.
You saw very much the same pattern where the, it's a game called Melee in particular.
I don't know if you played Smash.
OK.
It's for the GameCube, which came out in 2001.
So it's a very old game, but, people keep playing the same game.
For the first six to eight years of the game, the personality was really wily, sharp thinkers, people who are just quick on their feet and coming up with these ideas.
Now it's just all math.
And that the people who play and do really well are— I think some of the RTSs are a little bit that way as well.
Play has gotten less creative as people have gotten better at them, so.
Yeah, and it's a funny thing where it's like, there's a lot of beauty in the nerd side of it too.
It's just a difference in what skills get most selected for, is maybe the way I describe it.
OK, I'm getting distracted from asking you about Cognition.
Yeah.
What is Cognition? What does it do?
Yeah, so we're building the AI software engineer.
We've been building Devin for, the last year and a half, and most recently just acquired Windsurf.
Devin, the agent, and Windsurf, the IDE, but at a high level, we really want to build the future of software engineering.
Is it confusing for people that you have two brands, you have Cognition, the company, and then Devin, the slightly anthropomorphized instantiation of it.
We've been talking about that.
Now there's Windsurf as well, and so now there's a third thing.
But I think some consolidation is probably good.
Got it.
OK, and so people are maybe familiar with the GitHub Copilots or the IDE-style paradigm where you're there writing code in your IDE and it helps you autocomplete it, or you can give some instructions in the IDE.
That is not the Cognition-Devin paradigm.
Instead with Devin, you're in a Slack channel with Devin, and you're prompting it to go off and "Build me an X or a Y," but you're talking to it as you would a coworker in Slack.
That's right.
Yeah, and so you can call it from Slack or Linear or Jira.
Or you can call it from your IDE as well, but you don't have to, right?
I think that's exactly right.
There's been this paradigm, in the past, I would say GitHub Copilot was really the biggest and most well-known originator of it, of IDEs.
I would describe it as basically when you are typing at the keyboard as an engineer, making you a little bit faster at it and giving you the tools and the shortcuts and everything to do that faster.
Devin is a very different paradigm of what I would call like an async experience.
Where you have an agent and you delegate a task.
And so Devin naturally operates a little bit more like at a ticket level or project level or something like that.
You have some issue in GitHub or something and you tag Devin and then Devin gets to work on it.
Yep. Yep.
What level of task is Devin doing a good job of today?
Yeah, we like to call Devin a junior engineer today.
There are some things that an AI, of course, is way, way better than all of us at, especially encyclopedic knowledge and just pulling facts and things like that.
There's some things that, that it still makes terrible decisions on.
But I think that's the right average overall.
What we see folks typically using it for are things like bugs, for example, or like simple feature requests and fixes and so on where you're talking about an issue and you and your team are figuring out what you should do and you're just like, "hey, @Devin go do this."
Or on the other hand, a lot of the more, I'll call it the repetitive tedious tasks that come up often in engineering work.
That's often, migrations or modernizations or refactors or version upgrades.
It's crazy how much testing and documentation, it's crazy how much of the world's software engineers' time is more things like going and fixing your Kubernetes deploy than it is building and coming up with really— Dependency management, all that kind of stuff.
Yeah.
What metrics can you share on where the business is at?
Devin is deployed in thousands of companies all over the world.
We work with some of the biggest banks in the world, like Goldman and Citibank, all the way down to, startups with two or three people.
In general, a lot of how we look at things is in terms of merged pull requests, and getting Devin to the point where it is a significant percentage of the merged pull requests in your org.
Typically in a successful org, Devin is merging something in the range of 30% to 40% of all the pull requests that come through.
You talked about this async model, but isn't it the case that as I look at others, the GitHub Copilots and the Cursors and everything like that, or Claude Code, they're not fully synchronous because you prompt them and they go off and do something.
Are these distinctions a moment in time thing?
Do they go away where everyone is synchronous in the cases when they can do it instantly and asynchronous in the cases where they don't?
Is this a durable distinction?
It's a good question.
I think the two experiences continue to exist for the next while.
Then I actually think figuring out, the shared experience between them actually is the really interesting thing, right?
And that's a lot of recently with Windsurf and things like that.
It's something that we've already been thinking about and now are pretty excited to ship some things in the near future on.
Do you know the concepts of essential complexity and accidental complexity?
Have you heard about this before?
Yeah. OK. Yeah.
I think there's a real thing where, maybe one way to describe it is the ethos of a software engineer.
What it means to be a software engineer in my mind is basically just somebody who solves problems in the context of code, right?
It is somebody who tells the computer what to do and makes all these decisions of, it can be big decisions like "What is the right architecture that we want to use for all of this?"
Or it can be like a lot of these micro decisions like, "Oh by the way, there's a case where this balance is less than zero, and what do we want to do here?
Should we show an error or should we request this?" or whatever.
All of these decisions are what people typically call the essential complexity: what is all of the actual underlying logic of the decisions of what the software is doing, right?
The accidental complexity is basically everything else.
All the things that you have to do to support things as they scale.
Or all of your standard, for example, anytime you have a class, you probably have all the standard CRUD features along with that as well, where everyone knows that you need to have that in your class, but there's no real decision that needs to be made in terms of going and doing that.
It's an interesting thing, which is, up until AI coding came along, the meat of software engineering has been in making the decisions, and yet you spend 80% or 90% of your time doing more of the latter, of just going and doing the routine implementation and so on.
I think this merged experience that comes up is basically something where for anything that actually needs you in loop where you can go and make the decision, and you are looking at the high level strategy or deciding what you want to build, you are involved and you're doing that synchronously.
Then for all the parts that are pure execution, you are able to hand that off asynchronously.
So, the interesting thing is that obviously for an individual project, there are typically long stretches that actually are one or the other, and it alternates between both of them, right?
And I think what that will effectively look like is, the synchronous experience is the IDE where you are looking at the code directly and you see each of these things, the asynchronous experience is the agent that will go off and do each of these things, but to be able to go back and forth between your IDE and agent.
So you want the engineer to be interactive with the agent as it's going and working, but on the high-impact moments of important choices as opposed to all the grunt work?
How do you get large enterprises comfortable with giving Devin's sufficient permissions to be effective?
But like you talk about the migration use case.
Super boring.
And so you change the table and get it talking to the new table, and then eventually you delete the old table.
And that last step is kind of scary.
I think people still have a...
models hallucinate way less than they did, but people still have fear of the model just making something up— For sure. ...doing it.
And so yeah, how do you get people comfortable with giving it enough power to be effective?
So we pretty strongly recommend that people using Devin don't give it, broad database access, for example.
That's one way.
I don't know of any instances where it has been an issue or things like that, but obviously, you'd just rather not take that chance.
The framing that I would give honestly is, we have processes for these things because humans make mistakes too.
And that's why we have pull requests and review.
That's why we have CI and that's why we have all these things already, right?
So Devin naturally slots neatly into all of these things.
Typically the way that folks will work with Devin is, they're doing some big code migration and they'll break up the task, or maybe they have 50,000 files that all need to go upgrade from this version of Angular to that version or something like that.
And Devin will go and do each one and it'll make pull requests, right?
So you will go and review the code and make sure things look correct, but there's still this human— It's back to your point of accidental complexity, where the reason a migration is time consuming is not the actual single deletion step, all the time cost comes in other places.
Yeah. Exactly.
I think in practice what we see with folks, especially in these enterprise migrations, is when folks measure internally, they see something like an 8 to 15x gain for a lot of these use cases with Devin.
Because, as you're saying, you're just reviewing the code, you're not going and writing every single line or going through every single reference or things like that.
So let's talk about that, because I think all organizations around the world are trying to figure out the productivity impact of AI coding.
I think what everyone sees is engineers for sure want to have access to AI tools for coding.
It's not totally obvious on the, PRs per dev type metrics, what's happening.
Generally, you see some increase there, but of course it's not clear how good even a, pull request per dev metric is.
Then maybe you can say that there's some, ongoing maintenance cost if you're shipping low quality code or something like that.
I feel like everyone right now is looking for some slam dunk productivity data on what is the impact of...
There's probably some CTOs looking at the slam dunk data to justify the spend to their CFO.
What's your view on how big is the productivity impact?
Is it actually measurable?
Yeah, for sure.
I think this is something where this gradual shift towards agents actually will help a lot, as it turns out.
If anything, to be honest, I think IDE productivity is often underrated because, how do you state it, to your point, right?
Like, you look at the numbers and it's, of our engineering org on average people took the tab completion 238 times this week.
It seems quite clear that that should be worth something and it should make you faster.
But how much faster does it make you, it's a bit harder to say.
On the other hand, with agents a lot of the workflow obviously is going and doing the task for you, right?
So if it's a Jira ticket or something, or a migration or things like that where you typically do have a good sense of how many engineering hours are going to be needed for this and what's going on.
And because it's doing the whole thing end to end, it's a lot more clear of like, yeah, you didn't have to do this migration anymore, you reviewed the PR in five minutes and that's all done.
Yes. Yes.
As time goes on, I think these things will become more and more and more clear.
There is a view that some people have out there that coding tools are a moment in time thing that get run over by increasing model performance, GPT-6 or GPT-7.
Yeah.
Presumably you do not hold this view.
Yeah.
How do you avoid getting run over by the labs?
Yeah, for sure.
So look, I think the labs are obviously, I think they're incredible businesses.
As best as I understand it, I would describe this view as a, call it the nihilist computer use take.
Which is, of course all of these different things that we do in the world, in knowledge work involve using a computer.
And the AI is going to get better and better and better at using the computer until someday there is nothing left except just the AI going and using your computer and doing your work for you. That, to the best of my understanding, is kind of the argument there.
I see the wisdom of it.
This is the kind of thing that's very hard to disprove.
But I think that, in practice what we've seen in the space is naturally there is a lot of contextual knowledge.
There's a lot of industry details, there's a lot of...
And so, as we were saying, going and doing some Angular migration or doing some...
It's not to say that these things can't get better.
In fact, I think they will continue to get much better.
But I think that the way that we make models better and better at them is by giving it the right data of...
How good can you be at Angular migrations if you've never seen Angular, and you've never done an Angular migration yourself?
And there's kind of a cap on that.
And obviously there are all sorts of these things of, using your Datadog to go and debug errors.
The biggest thing I would say here is software engineering in the real world is so messy, and there's all sorts of these things that come up.
I think in practice, most disciplines look like this.
I would say the same thing about law or medicine or, and so on.
So, while the general intelligence will continue to get smarter and smarter, I think there is still a lot of work to do in making something both, on the capabilities side, really good for your particular use cases, but also in actually going and delivering a product experience and bringing that to customers of how that actually happens in the real world.
So it's not a general intelligence task, it's a specific intelligence of, working in the Stripe code base requires some general intelligence, but requires a bunch of context, requires working within the workflows we have and everything like that.
And you think that persists as an area where you need to specialize?
Yeah exactly.
Maybe one way to put it is, I think the argument is something like a super intelligence.
I think in some sense, yes, I think we are, you could consider us short of superintelligence.
I think what we're getting to with RL as this thing is improving and improving, and we see more and more of the gains and people are developing the techniques.
I think of RL and this paradigm of AI as basically, the platonic ideal of it is the ability to solve any benchmark, right?
You have exactly a dataset of here are the things that you want and here's how we measure success, and here's how we do that.
And whatever that benchmark is, it can be the hardest thing ever.
It can be unsolved math problems or whatever.
Someday we want to get to the point where we can just take that and train a model that will just get a hundred percent on it.
I think, frankly, we're moving towards that ideal a lot faster than most folks would've expected.
I mean, there's been some pretty crazy developments like the IMO Gold Medal or the scores on SWE-bench Verified or things like that.
The thing is, when that happens, I don't think what we end up with is just pure ASI, end of humanity, human knowledge work or whatever.
I think the thing that we end up in is basically a point where the hard question is, "All right, now, what is the benchmark?"
Right?
I think defining the benchmark in all of these spaces is a lot of the practical, real messiness of the world, right?
And so for a software engineer, obviously, what are all the tools that you interact with on a day-to-day basis?
How do you use those tools?
What does it mean to build a representation of the code base over time?
How do you decide whether you shipping the feature was successful or not successful?
All of these various things and creating the right environments around them.
And so can there be a good benchmark for a model's performance on the kinds of things that Devin wants to do?
Or is Devin's business model and...
Devin's revenue is the benchmark essentially?
Yeah, it's a good question.
From our perspective, we have a lot of benchmarks internally.
The biggest is one that we call junior dev, which we might need to upgrade to senior dev pretty soon.
But it is basically the ability to do a variety of just random real world junior dev tasks.
We've shared some of the examples.
Obviously we don't publish the whole benchmark because then it would, get obviated.
But a lot of the tasks are things like, "Hey, you need to go and fix this Grafana dashboard and get this going and then pull up the results."
And this is a very common thing that a software engineer does, right?
And the thing that's hard about it is perhaps not some algorithmic coding thing itself, but it turns out on the setup actually the server that's hosting this is running the wrong version of some package.
So you have to go through the errors and figure out what happened and then say, "OK, I need to downgrade the package to this.
Other one, which is actually the right dependency for this thing, and then I need to run it and pull this up and make sure the numbers look correct."
Things like that, which are basically as close as we can make them to what real software engineers spend their time on.
And so how have the newly released Claude 4.1 and GPT-5 done on this benchmark?
Both of them are better at this benchmark than any of the models that we've seen before this week.
As you think about the AI business and industry over the next five to 10 years, like you think about all the different layers of the stack.
You have the data centers, and then you have labs, and then you have the application layers such as yourself.
Who benefits? What gets more competitive, what gets less competitive?
Are all these just classic competitive oligopolies?
Yeah, what's the market structure?
So everyone always makes fun of me whenever I say this, but I think all the layers are going to do very well.
I think all the— There's just going to be a lot of AI.
I think the prices are cheap everywhere.
I've been saying this for at least the last six to 12 months, and I think we've seen prices go up a decent bit across all of these.
But no, at a high level, first of all, there's going to be a lot of AI.
It can't be overstated, in the sense that I think we're kind of coming off of a decade of a lot of various B2B SaaS and so on.
I think there was the internet obviously in the '90s and early 2000s, and then there was the mobile phone and cloud, which were kind of like late 2000s, early 2010s, right?
And those were some of the biggest things in the last 30 years.
Over the last 10 years or so, there was a real time where most of the stuff that was being built was a lot more incremental.
Each next thing was building for a particular niche or for a small part of the workflow and making that more efficient.
And AI now I think is the total opposite of that in the sense that, now we're talking about the entirety of knowledge work and perhaps the entirety of physical work as well, depending on what happens with the robotics, right?
First thing is there's just going to be a lot of AI.
The second thing about where does the value accrue?
And my honest answer on that is, value accrues wherever there's meaningful differentiation in the layer, right?
Simple, like if there's NVIDIA and there's TSMC, for as long as NVIDIA needs to work with TSMC and for as long as TSMC needs to work with NVIDIA, of course there'll be some rubbing up on each other's shoulders, but like they will continue to do great, right?
And you kind of see this down the stack as well.
I would argue that the problems that are being solved in all of these different layers are very, very different problems that have pretty meaningful differentiation.
You're saying this prevents too much vertical integration, basically, where the layers keep each doing their own thing?
Exactly.
And I think there's a real difference where, as soon as you go from hardware to... obviously foundation model training is its own can of worms, and very much the DNA of the company is finding exceptionally strong researchers, giving them as many GPUs as you can afford to give them, and setting up a culture that orients around that.
Then the application layer, I would say, is really focused...I'd say obviously it has a lot of the elements of research as well.
But I think in particular is really, really focused on just figuring out how to make one use case work for us.
For example, the only thing that we care about is building the future of software engineering.
Maybe one thing I would call out is, people often talk about AI code abstractly in a vacuum.
I think there are a lot of companies that think about code, in the foundation model layer or things like that.
We uniquely really think about software engineering.
And all of the messiness that that comes with and all the product interface and all the delivery and the usage model.
And of course like a lot of these particular capabilities that come with that.
So I think there's a real...
Everyone has their own DNA and everyone has their own things that they do best.
That makes sense.
We at Stripe have been thinking a lot about building the economic infrastructure for AI and what is required.
You can have an agent acting on behalf of a person, and you want to be able to just be prompting or doing stuff in your app.
And part of the tool use that your AI can engage in is going off and conducting commerce in the real world.
So we're building infrastructure for that.
And then we noticed that because of the economics of AI, everyone has usage based models, right?
Per token, per what have you.
And so we're building out usage-based billing infrastructure, and again, we find the billing systems people are building on Stripe are very different from the...
Classic SaaS is per-seat pricing, whereas, again, everything in AI is per-unit consumed.
And you can get into how the agents engage in commerce with each other, where there's, no human in the loop.
So there are all these ways in which our product roadmap is being informed.
But I'm curious what you think the economic infrastructure for AI needs to look like.
Are there things that we should be keeping in mind?
Yeah, for sure.
Yeah, seat-based to usage-based, big one for sure.
I think on both sides, right?
From the perspective of one, seats don't really make sense when the AI themselves are arguably seats as well, they're doing a lot of the labor too.
And then on the other side, usage obviously just goes so naturally with the COGS themselves, because a lot of it is effectively GPU spend based on how much you're running the models, basically.
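The seat-versus-usage distinction can be sketched in a few lines. The plan shape and prices below are illustrative assumptions, not any vendor's real rates: the bill scales with tokens consumed, mirroring the underlying GPU cost.

```python
from dataclasses import dataclass

@dataclass
class MeteredPlan:
    """Usage-based pricing: charge per token rather than per seat."""
    price_per_1k_input_tokens: float   # USD, illustrative
    price_per_1k_output_tokens: float  # USD, illustrative

    def charge(self, input_tokens: int, output_tokens: int) -> float:
        # Revenue is proportional to usage, so it tracks COGS (GPU spend)
        return (input_tokens / 1000 * self.price_per_1k_input_tokens
                + output_tokens / 1000 * self.price_per_1k_output_tokens)

plan = MeteredPlan(price_per_1k_input_tokens=0.003,
                   price_per_1k_output_tokens=0.015)
print(round(plan.charge(input_tokens=200_000, output_tokens=50_000), 2))
# → 1.35
```

A per-seat plan, by contrast, bills a flat amount regardless of how much the model is run, which is why it fits poorly when the "seats" doing the work are agents.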
I think that makes a ton of sense.
The other big one which comes to mind obviously is just for there to be an entire agent economy as well.
Today, I would say, it's still probably more of a talking point than a reality.
But things are pretty rapidly changing and getting to the point where your agents are, funnily enough we use Devin.
Devin is obviously entirely focused towards software engineering, but we order our DoorDash on Devin, we order our Amazon packages with Devin.
There are pieces of that that turn out to work nicely anyway— So you order your Amazon packages with Devin?
Yeah.
So you're just in Slack and you ask it to buy something for you?
Yeah, just "@Devin, can you go buy some more whiteboards for us?"
Or something like that.
At a certain point, do the real world things you ask Devin to do run into just blockers with sites trying to block bot activity?
A lot of Devin working really well, obviously, it relies on Devin being able to do these things and get through it.
But some of these things, I think are quite natural with the model, which is, you often have API keys or secrets or things like that that you want Devin to be able to hold onto.
So that works for credit card numbers as well.
Obviously there's a lot of work there. Real-world software engineering does involve a lot of just going and browsing the web, finding different sites, and clicking around on them, even if you're just testing your own front end or pulling up documentation or something.
Good browser use is an important piece of that as well.
And I think it's just kind of something that's— So shouldn't you build a consumer app?
Doesn't everyone want this magic wand app?
Where you can just have your virtual assistant, there's a million virtual assistant startups.
It seems like none of them have really gotten to any scale.
Yeah, it's a fun question.
I think from our perspective, on the one hand, it's fun seeing Devin go and do these DoorDash things.
At the same time, we also just know that our team is so small, we just don't have the focus to be able to do that in addition to doing software engineering.
You're pulling up Devin and you're seeing this, and then on the other side there's the IDE there, but Devin's just going on DoorDash or something.
It's a very fish-out-of-water experience, and it's fine for us to keep— But a lot of product development follows from people noticing how a product is being used— Like emerging patterns.
Exactly. And these emergent patterns.
Like Twitter especially.
People started linking to photos off-site, so they built in native image support.
Or the hashtag was invented by the community.
So similarly, you're checking the Devin logs and you notice people are buying a lot of DoorDash.
Maybe that's a suggestion on the product side of things.
Yeah. It's funny.
Well, to be fair, it's mostly just ourselves.
I know. Still, it's still emerging product usage.
I agree. It's a fun one.
That's funny. I love that.
We had a fun one where Walden had a flight that got canceled and was trying to use Devin to go and negotiate with the airline to get the refund for it.
Devin went to the site.
And naturally the site forwards you to their agent to have the conversation.
Then Devin was explaining these things and wasn't making progress.
Then at some point Devin said, "This is not working, I need to speak to a human right now."
And did it?
It did. It did, yeah.
So it got to the human, and then the human got on the line, and then it sent the link to the airline contract, like, "Oh, Section 22 says this, this, and that," and then Walden actually did get— But sorry, Devin was speaking?
Devin was chatting with the human.
I see.
Made it past the robot agent— Oh, that's funny.
...equivalent, and then got to a human.
And did it successfully get the flight refund?
It got the refund, yeah. OK.
Again, the people want this.
Going back to the economic infrastructure for AI, the other thing that we think about is that it feels like trust is going to become a bigger deal online.
I don't quite know what form that takes, because obviously it's been a big bad internet for a long time.
There's a lot of scams out there, there's a lot of hacking, but the hacking attempts become more sophisticated, the deep fakes and everything.
And so having a good sense of who is a trusted individual, who is a trusted business just seems to become much more important in this world.
Related to that too, I think one of these things... I feel like the Cloudflare situation with agents and everything is a hot topic and— Explain the Cloudflare issue for— Oh yeah, of course.
So, there's a lot more agents browsing the web these days and there's been certain things, protections set up to not give agents access to websites.
I think the paradigm...
Up until now, the paradigm for a lot of this stuff, I mean, there's robots.txt and all these things, has often basically been: there are tons of things which you are not allowed to do as a non-human.
And I think what we will probably need to see a lot more of over time is delegating access, if that makes sense.
Making it more clear that an agent can do something on your behalf.
In some sense you are attaching some of your reputation to it too.
There's a monetary question of how this works out, but there's also just actions that the agent takes are attributable to you.
Yeah. Yeah. On your behalf.
There's a great point.
Right now we have bots versus no bots, clankers versus no clankers.
Whereas instead it needs to be bots allowed if you sign for them.
As I was going to say, the simple version is just: if you're signed into your Google account in Chrome and you have a verified address, then you can have an agent run in that browser window and do things.
But all of the...
you are responsible for the work that it does.
Yes, it's sort of like API key permissions, but at mass consumer scale, across whole websites and everything.
Hmm. I like that.
How does the existence of Devin affect your own hiring of engineers?
Yeah, from our perspective we've always, loved keeping the core engineering team very tight and very elite.
What size? Like 30 people?
Yeah, so up until a few weeks ago, our whole team was about 35 people, of whom— Across all roles?
Across all roles, yeah.
Of whom, I mean, almost everyone actually is an engineer by background, funnily enough.
But what we call core engineering was about 19.
Yep. With Windsurf, obviously, the team count has grown a lot, but core engineering itself hasn't actually gotten all that much bigger.
It's gone from 19 to something in the range of like 30 to 35.
OK, so you keep the engineering team smaller.
Are the engineers...
how are the engineers themselves different versus a company being built 20 years ago?
Yeah, so it is a pretty different profile of the work that we have to do in the sense that like there is a lot of execution and implementation that has to be done, but Devin does that so that humans don't need to.
And so what we typically look for... our whole interview process for a lot of these roles, for example, is having people build their own Devin in eight hours and seeing how far they get with it.
And I think— Sorry, build their own version of Devin or build stuff with Devin?
Build their own version, their own agent, their own full end-to-end agent, in eight hours or six hours or whatever.
What we find is, and we see this trend generally in software engineering, that memorizing all the facts, knowing all the little details, or being really good at the syntax of some language is going to be less important.
What's going to be more important is a lot of the high-level decision making and understanding the technical concepts really well.
Having a good sense of product, having a good intuitive sense of what to build and what to do, and having real ownership that way too.
A lot of our team actually are specifically former founders, which is kind of the fun one.
Of our initial kind of 35, 21 of us have founded a company before.
So it's been a very high density of that.
Wow.
OK. OK. good, good.
Very good.
When will you hire your last engineer?
It's a good question.
I'll make a distinction here, which is that there will come a point, and my guess is probably in the neighborhood of, let's say, two, three, four years from now, where we stop using code as the main interface.
And basically being a software engineer really is just instructing your computer and telling your computer what to do: you're looking at your own product and you're saying, hey— You think two to four years from now, software engineers are not really looking at code in their day-to-day, just like they don't look at assembly today? Exactly. Yeah.
That's going and looking at your own product and deciding, "Oh yeah, like we need to make a new page here.
And by the way, all this data, let's save it this way, and let's index it according to X, Y, and Z, because here are the lookups that we need to do," or whatever.
Making a lot of these architectural decisions, but not looking at the code themselves.
At least in the majority of circumstances.
At that point, obviously the jobs change a lot.
Funnily enough, if anything, we will have way more software engineers, not fewer.
Just because the interface is not code anymore doesn't mean that the core skills of software...
People often ask us like, "My son or daughter is in high school or is just starting college, like, should they even be studying computer science?"
My answer is always absolutely yes.
If anything, funnily enough, I feel like university computer science always had the opposite sin of doing too much— It's theoretical— Of teaching you the concepts, what programming was about and what computer science was about, and not enough of, all right, here's the syntax that you need to use, here's what it means to get a React app set up, and whatever.
I think we'll get to a point where those theoretical concepts and that high level understanding of, maybe in one line, the model of a computer and how to make decisions and problem solve with the computer as a tool, that is what programming will be.
If anything, there's going to be a lot more software engineers.
One of the nice things is everyone talks about Jevons Paradox and how it relates to AI.
I think there's nowhere that it's more true than software because, we really never seem to run out of demand for more code and more software— You can just write a lot of software, yeah.
Yeah, the half joking way to say this is despite how many software engineers in the world, we all know this, there are so many products out there that are still so bad.
Yeah.
You know, you're logging into your bank or you're dealing with your checkout in retail or whatever it is.
And then there's all these things that are still super outdated, super buggy.
You're logging into your healthcare platform or whatever and you're trying to click around and find your thing.
And it's like— We haven't finished writing all this software yet, yeah.
Isn't it shocking that the UIs haven't changed at all?
So we still, we talk to Siri, which is the same, I mean, button placement and the same brand on the iPhone as pre-transformer models.
You prompt Devin via Slack.
We use our AI tools in a web browser and we enter them into a text box like we're playing Zork in the 1980s or whenever that came out.
And so, '70s maybe.
I don't know how old Zork is.
Do you know what Zork is?
I don't. Oh, you're too young.
It was like the original text-based adventure game.
Oh, I see. Yeah.
But when are we going to see AI UIs, because it's very retro right now.
Yeah, my high level thought on this is, you always see this with new waves of technology.
I think mobile phone is a great example where, the initial apps just look like basically websites, but in a smaller box.
And over time, you could still get a lot of value out of those.
So the core value prop of the phone was already there.
But of course over time we built a lot of cool touch interfaces or we, developed a lot of the science of what makes a good app UX.
Yeah, but we have no multitouch, no rubber banding.
Yeah.
I think we are entering that phase now where for a few years it was just replacing existing flows and just using AI to do that better.
Now we're starting to think a bit more about these various generative flows.
Maybe the simplest example that comes to mind is that a lot more products now have the little chat box at the bottom where, rather than having to click through all the menus yourself, you can just ask the chat box and find that.
Which is one very, very simple version of that.
But I think there's way more innovation to do.
One framing I was thinking about with this is, it became clear shortly after the invention of the transistor and the microchip, that everything would have a microchip in it.
Everything could benefit from having a small computer in it and, your car would have a small computer in it and your dishwasher would have a small computer in it, and everything.
There's some equivalent where everything will pass through a transformer model before it's consumed.
One of my thoughts on this too is I think AI is, I'd say uniquely different from some of these previous ways in an important way, which is, personal computer or internet or mobile phone.
All of these had, one of two things or often both.
One was a big hardware component, just go ship modems to everybody and you had to get people on the internet and you had to give everyone a phone first.
Then two was like a very core critical mass effect or empty room effect or whatever, network effect or whatever you want to call it.
Where, the internet was great and all obviously, but it doesn't really get that useful until all your friends are on the internet too, and like the restaurant that you're looking up is on the internet too, and various other things as well.
AI actually has neither of those problems. And as a result, what you see is like as soon as the tech works for somebody...
It's pure software, it can work single player and give you a ton of value directly.
It kind of works for everyone.
There's been a few things that we've seen as a result of that.
One is, there's a new person posting that they're the fastest company from $1 million to $100 million every couple of weeks, because AI is just so much faster at... As soon as it works, it works for everyone.
The other part of that is, to your point, there's actually a bit of a lag with product, I would say, where
I think you could freeze all the capabilities today, and have no new models and no new research come out, and there would still be a whole decade of product progress to make.
Whereas before, the product progress kind of tracked alongside the distribution itself.
Now it's been much more sudden: it's been two years total that everyone's been thinking about it.
And honestly, if we factor in a lot of the more recent capabilities, agentic capabilities, things like that, it's arguably less than one year for a lot of these.
We are all kind of grappling with that all of a sudden and trying to figure out what the right new product experiences are, right?
So it's just taking a bit more time.
What are your AGI timelines?
I think we have AGI.
OK, now. I was going to say, there's this joke that people talk about, which is, back in 2017, if you asked, "Do we have AGI?" the answer was no.
Today, obviously if you ask if we have AGI, the first thing everyone always says, well, you have to go define AGI— Yeah. It's hemming and hawing.
It's kind of true in some sense of— Devin will order your DoorDash for you.
Sounds like AGI to me.
Yeah.
And so obviously a bit of a facetious answer, but my honest opinion is, there is some rapid singularity into superintelligence thing that people talk about.
It's very hard to say, nothing's impossible, but I would guess that that's not something that happens in the immediate future.
Especially because, as we said, a lot of the work to do is going and collecting all the real world...
What are the problems that you want to solve?
How do you define success for all these things?
With that said, we're going to just keep...
I think it's not so binary basically.
We're just going to keep rolling out more and more improvements and these things are going to be more and more capable, but I don't know that we have some sudden shift, at least for the next few years.
That makes a lot of sense.
We've got to talk about Windsurf.
Oh, yeah. It played out so quickly, so give us the play by play.
So we heard the news that it was going to be Google buying Windsurf, or I guess not technically buying...
this whole deal that was happening, that Friday, at the same time everyone else did.
OK, so this was not something that played out in advance, the Friday where the news came out.
It was basically just as sudden for us, we heard some rumors maybe the night before— Devin was scrolling Twitter for you and— Yeah exactly.
Yeah, Devin came back and said, "Hey, you guys should check this out.
We probably should look at this."
So we heard the news then, and naturally that afternoon we were talking about it, thinking, "Is there something that we should do off of this?"
It's not uncommon that there's some crazy news that happens in AI.
But this is, especially, in our space, and we talked about this idea.
We reached out to them cold that evening and got to meet the new Windsurf leadership, Jeff and Graham and Kevin, that evening.
As we were both talking about it, I think we came to this conclusion together, which is, if there is something to do here at all, then it has to be ready to go by Monday morning.
Because everyone, all the customers were reeling.
The whole team was like, "Do I have a job?
Do I not have a job?"
It was a melting ice cube.
Exactly.
So, if it had waited even until Thursday instead of Monday, people were going to cancel their contracts, people were going to be interviewing at other places.
So we said, OK, what this means is if we want to explore this, we have to, just spend the entire weekend on this nonstop.
A lot of fun moments there.
We got to kind of the handshake agreement that Saturday, and then obviously there's all the legal and everything to figure out.
We all pulled an all nighter that Sunday night.
We had a very optimistic plan that we were going to get signed.
You also pulled an all nighter the Saturday night, or did you get some sleep?
We got a couple hours of sleep on Saturday.
It was especially...
A huge shout out to Jeff and Graham, Kevin, because they had had a pretty rough few days before— Totally.
It was very existential. As well actually.
And so they were already pretty sleep deprived coming into it.
We were going through...
We had this optimistic view that we were going to get it signed on Sunday night.
So then we could go and focus on filming and figuring out how we address the team and everything.
Obviously that did not happen.
We got it signed on Monday at 9:00 AM because us and the lawyers were up all night basically sorting out all these things.
We luckily filmed the Windsurf video in the Windsurf studio.
We said, OK, we should just film it anyway— You realize you can announce acquisitions without a video?
Yeah.
Well, it's always nice to have one.
Then as soon as we got things signed, we were up in front of the whole team and giving them the update and then sharing that publicly pretty soon after.
It was a lot of...It was fun.
I live for these moments, honestly.
So you read the news on Friday, and you signed and announced the deal on Monday.
But that means that you decided more or less instantaneously that you wanted to buy the remaining part of Windsurf.
Yeah, so we talked it through on Friday evening, and from our perspective, there were a few things that were nice about this.
First of all, obviously, we know the space very well.
So in that sense, we didn't really have to diligence the product or the customers because we knew that.
But as we were kind of understanding the pieces of what happened exactly with the team, how many of the folks are still there and who left.
We found that there was a very nice synergy in the sense that, there was a core research and product engineering team that went to Google and all the other functions were entirely intact, which includes enterprise engineering, infra, deployed engineering, go-to market, marketing, finance, operations, all these various things.
And funnily enough, with Cognition, for better or for worse, I think we had done a good job of building out this core research and product engineering team.
But we were a little bit behind on growing all the other functions.
So we found a very natural fit there as well.
As we were just talking, they had JP Morgan and we had Goldman Sachs...
There were all of these very natural ways to fit in.
I think from our perspective, we knew there was something really interesting there and we wanted to do it and a lot of the rest was just figuring out the details.
So you got to acquire a bunch of people who have lots of familiarity with the space.
They have a product offering that is in an adjacent but not identical place to Devin.
And so you get to accelerate, it sounds like the go-to-market efforts and broaden out the product portfolio.
That's how you think about it?
Yeah. Absolutely.
And then of course the products themselves I think are...
Funnily enough, we were thinking about what does the interaction of an async product like Devin look like with a more sync product?
And we had some ideas for certain synchronous things that we wanted to build.
We weren't going to build an IDE entirely because it felt like there were a couple players in town already.
But as it turns out, having the ideas, there actually were a lot of natural synergies with a lot of the synchronous stuff that we thought about and, very simple thing...
We shipped Wave 11 a few days after we closed that deal.
And there are a lot of these basic things, like being able to access your DeepWiki in your IDE, or being able to use all of Devin's codebase representation in search, or spinning up the agent there.
And all of these things...
We felt there were a lot of natural complements.
And so from there felt like, if there was a right person to work with and do this with, it would be— So in six months, do I buy Devin's and I get Windsurf bundle?
Do I separately buy Windsurf and I can buy Devin?
Yeah.
How will it work?
Yeah, lots to figure out still.
We certainly want to keep each of the product philosophies the same, like I mentioned, I think there will still continue to be both sync and async products.
But making the integrations between them much stronger and much easier I think is going to be really nice.
And so certainly a lot that'll be much easier from the customer perspective, but if for some reason they really wanted to use one of the two, I'd imagine that they would still be able to do that.
It's obviously been an interesting aspect of the AI space that there have been a number of these 49% licensing-type deals to avoid the risk of an acquisition being blocked...
Companies buy the license to the IP and then the talent that they want to be able to be sure comes with the company.
Do you think that stays as a thing in AI?
It's a funny moment in time thing, right?
Yeah, I certainly don't feel like I'm the expert on this one.
The thing that I find funny, there's one new bell or whistle each time.
I feel like there's a— On all the legal— The debt— And contractual stuff. Complexion, character.
You see, like, there's one, and now we do this licensing deal, and now we...
And so I think the meta game around that is certainly developing.
There is some amount of polarity at the top level of AI as a space in the sense that, there is a point at which you want to just have...
these things do scale with resources and they scale...
I think basically the games get bigger, I guess is one way to put it.
I think for most companies, the question is basically whether they think they will get there themselves or whether they want to work with another company and go into— You're saying you would expect more M&A, whether it be classical M&A or this new model of M&A, because there are scale benefits in this game.
Maybe one of my hot takes is... of course there will be many medium-sized outcomes in AI.
But I think in this space, a little bit more so than previous ones, it's a little bit more polarized towards: you become a hyperscaler or bust.
And so for some companies, that is the trajectory and then the moonshot that they want to go for.
That's one thing, for others, working with someone is something that people do.
And so now as you're bringing the Windsurf team on board, Cognition has this very intense culture, you know?
You guys work on the weekends, you all work out of this house and as you're doing this buyout offer.
Yeah.
I think for us it's...
And most folks have been really excited to come in and do it, and only a small fraction have taken the buyout.
But I think from our perspective, we just want to make sure it's an opt-in situation for everyone, because let's be honest, it isn't for everyone, and I think that is a very intentional thing there.
Was it the intensity? What did you want people to opt into?
Opt into the intensity and the new culture.
And we're going to be going after some very ambitious goals.
By revenue standards, or by whatever you want to call it, folks might call us a mid- or later-stage company, but from our perspective, we are still very much early stage in terms of the profile of what happens next and how much more there is to build and how much more there is to do.
Obviously at an early stage, we do all have to be, signing up for the uncertainty and the willingness to just go and take on a different challenge every week.
To put in a lot of hours and to have that culture.
That was a big piece of it.
Obviously, regardless of what happens, we wanted to make sure people were well taken care of.
Every day, Cognition is the largest company you've ever run.
You're speed-running, coming up to... which is true of me with Stripe as well, to be clear, but you're speed-running learning how to run a company.
I'm curious, how do you learn this stuff?
How do you use AI, but how do you learn more broadly?
Yeah.
No, I mean it's, I've got a lot to learn still for sure.
I think many of these functions are... if anything, like I mentioned, we have underinvested in a lot of functions, maybe because they're not as top of mind for us as they should be.
And now that's something that we're pretty actively working to do more of.
I don't believe in a professional coach or career coach in the literal sense, but obviously you learn a lot from your peers and your friends who are doing similar things.
So having a lot of close friends who are wanting— People you went to math camp with, apparently.
Yeah, and then learning from all these different folks.
I do think as an entrepreneur, it helps a lot to have a close group of friends that you can just be very honest and say, "This thing is totally messed up and I have no idea what we're going to do, and please tell me if you have done anything like this before."
Or things like that, which has been really helpful.
I think Eric and Karim from Ramp, for example, or all these various folks from math competitions.
My previous co-founder Vlad from Lunchclub, a lot of different folks that I talk to for advice.
It really does help a lot.
Last question, I'm curious, what is your information diet in terms of how you learn about the world?
I feel like Twitter for tech news is really the place to be.
We share a lot of things— Do you find there's too much video in the algorithm these days?
I think there— It's kind of become TikTok.
There are a lot of videos, but then I just don't watch the videos, for the most part, or you see the first few seconds...
Which is an interesting thing to think about as people who are making videos too: make sure you can convey your point with no sound and within the first three seconds.
As much as you can do that, I think there's still another 5x of users you reach who are in that camp.
The Twitter algorithm is the extent of how AI affects my information— But that's you on the receiving end of AI as opposed to you using AI as a tool.
It's a good point. It's a good point.
I mean, I should have Devin, just a GitHub action— The morning report, like Zazu— Have a cron job basically where Devin just goes and does the morning report and gets out.
There is a lot of optimization to do still.
The president's daily briefing.
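As a rough sketch of the cron-job idea Scott floats here: a GitHub Actions workflow on a schedule could kick off a daily briefing task. The `on.schedule` syntax is real GitHub Actions; the `devin-report.sh` step is purely a hypothetical placeholder for posting a task to an agent, not an actual Devin integration.

```yaml
# Hypothetical scheduled workflow for a "morning report" task.
name: morning-report
on:
  schedule:
    - cron: "0 13 * * 1-5"   # 13:00 UTC on weekdays
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Ask the agent for a morning briefing
        # Placeholder script: post a briefing task to your agent of
        # choice and collect the summary (e.g. into Slack).
        run: ./scripts/devin-report.sh
```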
Well, Scott, thank you. This is awesome.
Thank you so much for having me.
The national thing, obviously— Yeah, yeah, yeah.
You're like, whatever. Once that got too easy, then you take eight cards and you make the current year.
And this year is now 2025, and so we get to try that one.
And in this case it's actually quite easy because 13 minus 12 is 1, 11 minus 10 is 1, 1 plus 1 plus 7 is 9.
And it turns out that 9 times 9 times 5 times 5 is exactly 2025.
But OK, so at some point that gets too easy.
And now you guys can give me your favorite four-digit number, let's say, and then we'll try and make that.
Let's go with 6,843.
6843, OK, so it's a little bit bigger, so we'll use nine cards for that.
OK, 6843, right?
So 6 plus 4 is 10, 12 minus 10 is 2, 2 times 10 times 10 is 200 plus 8 is 208, times 11 is 2288, minus 7 is 2281, and times three is 6843.
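For readers checking along at home, both chains of mental arithmetic above work out; here is a quick verification, with the card values inferred from the spoken steps:

```python
# 2025 from eight cards: 13, 12, 11, 10, 7, 9, 5, 5
nine = (13 - 12) + (11 - 10) + 7      # 1 + 1 + 7 = 9
assert nine == 9
assert nine * 9 * 5 * 5 == 2025       # 81 * 25 = 2025

# 6843 from nine cards: 6, 4, 12, 10, 10, 8, 11, 7, 3
two = 12 - (6 + 4)                    # 12 - 10 = 2
step = two * 10 * 10 + 8              # 200 + 8 = 208
step = step * 11 - 7                  # 2288 - 7 = 2281
assert step * 3 == 6843
print("both chains check out")
```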
Cheers. Well, you get another sip of Guinness for that.
Yeah, I haven't had enough Guinness yet to be able to knock.
Exactly, and now we test, as we drink more Guinness, how the performance goes.