TLDW logo

Why Opus 4.5 Just Became the Most Influential AI Model

By Every

Summary

## Key takeaways

- **Opus 4.5 vibe codes endlessly**: Opus 4.5 is the first time I've been able to vibe code and it just keeps going without tripping over itself. It just keeps building stuff, and if there are errors, it fixes them. [03:28], [04:02]
- **Claude Code's low-level tools**: The design principle that makes Claude Code so powerful is that anything you can do on your computer, Claude Code can do, using low-level tools like files, bash, and grep that create a composable, flexible system. Programs and features are just prompts, slash commands, and sub-agents written in English. [08:14], [09:17]
- **Real-world Claude Code builds**: Paul built a searchable index of a politician's newsletter using Claude Code on his phone between turkey and dessert at Thanksgiving, with SQLite on the back end. He also cloned a TR-808 drum machine in 20 minutes and transformed a grisly Microsoft Access database into a modern web visualization. [11:32], [12:00]
- **Job roles blur overwhelmingly**: Seeing categories like front-end engineer, product manager, and designer blur, along with all the things that allow people to say where their value is, is frankly really overwhelming. Product managers are now building their own apps without engineers, and vice versa. [19:56], [20:21]
- **GLP-1s mirror the AI shock**: Paul lost 70 pounds on Mounjaro after a lifetime of not being able to lose weight, shattering rules like "willpower or surgery only" and confusing him the way AI does now. Humans struggle to metabolize such rapid change, and it upends social systems. [25:39], [26:47]
- **LLMs reflect user assumptions**: Claude's mildly bearish consulting forecast mirrored Paul's anxiety, generating Sankey charts and stories like McKinsey revenue dropping from $16B to $4B by 2035 as AI does 95%-as-good analysis at 1% of the cost. Change the prompt and it mirrors something different. [52:15], [01:02:24]

Topics Covered

  • Claude Code Enables Persistent Vibe Coding
  • Shift to High-Level Abstraction Programming
  • Human Skills No Longer Guaranteed Relevant
  • AI Accelerates Software, Humans Struggle
  • Consulting Firms Face AI Disruption

Full Transcript

The world changed last week. Opus 4.5 is the first time where I've been able to vibe code and it just keeps going without tripping over itself. It just keeps building stuff and it doesn't have errors. If there are errors, it fixes it. I kind of knew we were headed in this direction. Somebody on Bluesky was like, I don't think anyone should have any opinions on AI until they spend 2 hours in the Opus room. And I think that's right. I no longer feel I can in good faith say human skills are going to

be relevant. One way to look at this and be like, "Hey, you can take your craft, you can evaluate the output of this, and you can make sure that the people in your world are getting good stuff faster, but also make sure it's safe and on rails." That just isn't how humans work, man. Humans want to type in the box and get a thing, and if it kind of works, they'll be like, "I did it." I don't know if society will completely reorder itself. Although, in a way, it

seems to be trying to. So, that part's tricky. I think what's wild to me is learning how hard it is for humans to metabolize change. Everybody thinks lots of thoughts about me and themselves and their disciplines. Like I'm a front-end engineer. I'm a product manager. To see all of those categories blur and all of the things that allow people to say where their value is is frankly really overwhelming.

This podcast is sponsored by Google. Hey folks, I'm Omar, product and design lead at Google DeepMind. We just launched a revamped vibe coding experience in AI Studio that lets you mix and match AI capabilities to turn your ideas into reality faster than ever. Just describe your app and Gemini will automatically wire up the right models and APIs for you. And if you need a spark, hit I'm feeling lucky and we'll help you get started. Head to ai.studio/build to create your first app.

>> Paul, welcome to the show. >> It's great to be here. Thank you. >> I am so excited to get to interview you. For people who don't know you, you're the co-founder of Aboard, which is an AI-powered software delivery platform for businesses. Um but uh closer to my heart, you are a fantastic writer. >> Thank you. >> You wrote a piece um like when I was in college that like just it was like it's like the piece I think of when I think of that era uh called "What Is Code?"

for Bloomberg. Um I would love to revisit that piece in a second. But >> when you were in college a mere 10 years ago. >> Oh Dan, that's fine. You drink some milk and talk to me here then. That's great. That's >> I I have to do stretches now. Like I didn't have to do stretches before. Yeah. >> At least you can do the stretches, dude. Enjoy it. >> Um so super excited to talk to you, but I think the thing that we were both super excited about is Claude Code. Um and in particular, uh

>> Well, wait. I'm I'm super excited about my own product. Well, but yeah, Claude Code. Let's talk about it, dude. What the hell just happened? Um, yeah, the world changed last week and um I I think people >> people don't know yet. People It's like they just don't know. It changed. Can Can you articulate it? I have my own thesis, but can you can What do you think it is? It's Opus 4.5 and Sonnet 4.5 inside of Claude Code was a step change. How would you describe it? I

will say um the the most immediate thing that I noticed is for a long time we've had the ability to vibe code something in one shot that like looks like a passable app. Um but Opus 4.5 is the first time where I've been able to vibe code and it just keeps going without tripping over itself. Like it just keeps building stuff and it doesn't have errors. If there are errors it fixes it. And so like this week I built a like fully featured iPhone reading app that it's the coolest thing.

I can like take little pictures of books I'm reading and it will do a do an analysis, but then I can kick off like a research agent that will go and download the source text and like do like a close reading study of it and then it'll generate a custom introduction for me and a and a custom reading profile based on all the stuff in my photos app. Like it's it's crazy. It's a fully featured app that would have taken months to build that I have no idea how it works

and that's just a new world. I'm curious what you're seeing. >> Very similar. So we have a tool we built, if you go to aboard.com you can use it on the web, like you build software for businesses at the prompt, and we've been you know trying to wrap guard rails around the chaos of vibe coding because it doesn't finish things. The last mile's really long. It tends to leave a lot of loose ends. And so we've been very very

involved in the space and stayed really connected to it. And then about two weeks ago, right, like something changed and they sort of released their models and I think what I would say is that Claude Code is, I would go so far as to say, the first true product built on top of an LLM. There are a lot of products, and you know I want to believe that we're in there too and so on, but what we're all trying to do is build constraints and systems and kind of recursive methods of

understanding what the output is and making it better and making the LLM actually work the way people expect it to without all the sort of strange endings, and Claude Code feels like they took that seriously. And in a funny way I think it doesn't represent some giant step change in the capability of an LLM. Like it feels like, yeah, Sonnet and Opus are better but they're not like 9,000 times better. But they added in a layer of kind of agent style

um thoughtfulness to the product. So it's constantly evaluating its own outputs and then improving them which leads to these really really complex outcomes when it comes to writing code. And so I'm in the same boat. Um I have a set of benchmark projects. There's one called, well, it's a database with a terrible name, it's called IPEDS, and a friend of mine asked if I could work with it like a year ago using AI, and it's a government-produced database

of every um college, they have to fill it out, and it's like what are their majors and what's the gender and race breakdown at the school and what is tuition and so on and so forth, and um, it's grisly. It's Microsoft Access databases and huge data dictionaries. And it's the sort of thing that literally I wouldn't have touched at an agency without hundreds of thousands of dollars to staff a team of engineers and really like it was a horrible horrible

programming problem to sort of take this transform it and put it on the web in a in a sort of modern way and and I man I just knocked it out. It wasn't easy. like I still had to kind of know a lot of stuff, but it did a really great job and it built me a nice visualization with smart search and I had to create an AI enhanced search tool. I've been using it to set up a pipeline to build little musical synthesizers just to see how that could work. And today I was like,

"Hey, clone a TR 808 drum machine." And it did it in 20 minutes, right? And it's just sort of like now I I spent whole days creating that pipeline, right? But that used to be like the work of a company. And and so I think what's tricky, I don't know if you have this experience. What's really tricky is you go, "Wow, I'm powerful." And then you realize like, "No, this is everybody now." Like you you feel like you've captured something. Like you got the ultimate Pokemon, but everyone's getting

the same Pokemon like shoved into the mailbox. This is this is me trying to come up with analogy that connects with you as someone who's a lot younger. Thank you for being so relatable. >> Yeah, this is part of my job. >> I think I I totally agree. My experience some of I've been I actually did a whole like presentation for our team this morning on like what I think has changed about programming and I would be curious I think you're the perfect person actually to talk to this about. Um

the thing that is really interesting, the design principle that I think makes Claude Code so powerful, is that anything that you can do on your computer Claude Code can do. And it has a set of tools that are um below the level of features. They're low-level tools. They're like files, they're command-line tools. It's bash. It's grep. It's like all this stuff. Um, and what that allows you to do is it creates

this system that is very composable and very flexible that you can build on top of and use in ways that they couldn't necessarily predict. And what's also really important is um that means that the programs or the features of Claude Code are actually just prompts. They're slash commands and sub-agents. So you can write features in English which lets you iterate faster as a company and also lets your users make their own features. And I think that

I think that that is a general principle that you can start to apply to any AI-based application as a product principle, which is: anything in our application that a user can do, AI can do. And generally, we're trying to move what used to be product functionality that is written in code into prompts, where the agent uses low-level tools to accomplish the feature outcome. And that opens up all of these like interesting cool new doors for software development.
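
The episode never shows code, but a minimal sketch of the idea above might look like the following. Everything here is hypothetical for illustration: the tool functions, the TOOLS registry, the example prompt, and run_agent are made up, and this is not Claude Code's actual implementation. The point is just that the only primitives are a few low-level, composable tools, and the "feature" is an English prompt.

```python
# Illustrative sketch only: a tiny set of low-level tools plus a prompt-as-feature.
import subprocess
from pathlib import Path

def read_file(path: str) -> str:
    """Low-level tool: return the contents of a file."""
    return Path(path).read_text()

def bash(command: str) -> str:
    """Low-level tool: run a shell command and capture its output."""
    done = subprocess.run(command, shell=True, capture_output=True, text=True)
    return done.stdout + done.stderr

def grep(pattern: str, path: str = ".") -> str:
    """Low-level tool: recursive text search, delegating to the system grep."""
    return bash(f"grep -rn {pattern!r} {path}")

# The registry the model sees; nothing in it is feature-specific.
TOOLS = {"read_file": read_file, "bash": bash, "grep": grep}

# A "feature" is just an English prompt (a slash command or sub-agent, in the
# episode's terms). There is no dedicated code path for it anywhere.
SUMMARIZE_TESTS_PROMPT = """\
Find the test files in this repository, run them, and summarize any failures
with the file and line number where each one occurs.
"""

def run_agent(prompt: str, tools: dict) -> None:
    """Hypothetical driver. A real agent loop would let the model pick a tool,
    feed the tool's output back into the conversation, and repeat until done."""
    print("Would hand the model this prompt and these tools:")
    print(prompt)
    print(list(tools))

if __name__ == "__main__":
    run_agent(SUMMARIZE_TESTS_PROMPT, TOOLS)
```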

>> I agree. Look, I think the patterns in this kind of programming and this kind of thinking are really really different. So, I'll give you some Claude examples, but frankly, as someone, as a company, that's building a tool along these lines, I think the patterns are emerging kind of for everybody in sort of all the LLMs. It's just that Claude Code really bundled them up very very efficiently and it kind of hit its core audience of engineers, like, just right

across the face because it's like they literally were like, "Here it is. Here's the future. It's going to look like this." And we all went, "Yeah, all right, man. Okay. Okay. You got it. Yes. Yes. Yes. Yes. Mr. Claude. Um, there's a few patterns, right? So, a Yeah. Everything you're saying like you're bundling stuff up as sentences, there's an other aspect of and it and it integrates with the existing system. So, it's not like it's not this world apart. It's an and and actually what I found

over the last week is where I normally would go to a command line and start typing, I start typing in English and forgetting I haven't gone into Claude, right? Like I'm just, it's so immediate because it's so much better at building and orchestrating, and you know it's funny, I'll give you an example. Um I wanted to deploy something I built, that weird database I was talking about earlier, and so I went to fly.io, which is a very fast deployment environment. And I

was like, you know, because I bet it'll be able to coordinate well here. And then I just was like, wait a minute. I have this random-ass server just like sitting somewhere that I use for scratch projects. Um, can you just SSH into that and just deploy this thing for me? And it was like, yeah, no problem. And it just like jumped onto the box and like looked around like, oh, it's an Ubuntu server. Yeah, let me update your nginx. Oh, you need to get the certificate installed here. Let's go

ahead and do that. And um 10 minutes later and then the killer was I was at Thanksgiving and my friend's dad was like, "Boy, I really need to make a searchable index of this one politician's newsletter for oppo research." And I was like, "Man, that's something." He's like, "Yeah, I've been cutting and pasting into Google Sheets." And I'm like, "Is it all available on the web?" And he's like, "It sure is." I open up Claude Code on my phone and literally between turkey and dessert, I

built and shipped it. It was SQLite on the back end. It works just fine. He's going to do his oppo research. Don't worry, he's on the right side. And so I shipped a pretty complicated full-text search; I know that whole architecture really well, so it was really easy to instruct it, but off we go. And it's also good at dealing with that kind of thing: I didn't have to use all the new custom fancy stuff. I could just use an old server that was sitting around, because it knows.
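
The episode doesn't include the actual code, but a minimal sketch of that architecture, scraped newsletter issues going into a SQLite full-text index with search as a single query, might look like this. The table name, columns, and example URLs are made up, and it assumes SQLite was built with the FTS5 extension (as the standard Python build is).

```python
# Rough sketch of a SQLite-backed full-text index for scraped newsletter issues.
import sqlite3

def build_index(db_path: str, issues: list[tuple[str, str, str]]) -> None:
    """issues is a list of (url, title, body) tuples, e.g. scraped from the archive."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS newsletter USING fts5(url, title, body)"
    )
    con.executemany("INSERT INTO newsletter VALUES (?, ?, ?)", issues)
    con.commit()
    con.close()

def search(db_path: str, query: str, limit: int = 10) -> list[tuple[str, str]]:
    """Full-text search, best matches first, returning (url, title) pairs."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT url, title FROM newsletter WHERE newsletter MATCH ? "
        "ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    build_index("newsletter.db", [
        ("https://example.com/issue-1", "Issue 1", "Budget vote and a zoning update..."),
        ("https://example.com/issue-2", "Issue 2", "Town hall schedule, school board notes..."),
    ])
    print(search("newsletter.db", "zoning"))
```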

And so there's all of that going on, and I think that as I've been working with it, what I'm finding is you've got to think not just in terms of solving the problem, but in terms of like one level of abstraction up. Like I had it build a little musical synthesizer for me that emulated like a Moog synth, something I know a reasonable amount about, and it did like an okay job and had a lot of caveats and the remaining work on it

would be hard and I didn't do it. But then I was like, okay, one level up. You need some more information about digital signal processing. So I'm going to go spider some books that are available free online and I'm going to put them into a database. Whenever you have a question, search this little tiny SQLite database and refer to it. So then I give it a reference source and then I was like, wait a minute, you keep writing code and Claude, you have to calm down because your code's okay, but

it's not that great. I want you to go find all the open source libraries that are really good about digital signal processing which is really edge casey and I want you to make a list of them and I want you to only build based on those things. You should adapt and create a library and then you should implement based on that library. And as as like five or six things five or six things at that level of abstraction unfolded I'm now able to say hey make me a synth that's like this and come back

20 minutes later. And that is a lot, actually. It's a little emotional and confusing to process after 200 years as a software person. Um but if you work at that level and I think that's the skill that's going to be emerging. >> Yeah, I agree. I want to stop you there at the like the sort of emotional level of 200 years as a software engineer. Um, and I think that there's probably just a lot of people who are, um, professional software

engineers who love the craft of code and who maybe are pretty skeptical of AI because they're like, well, it can't write the well-crafted code that I can write. You know, it does all these things that are, you know, the code is uh not efficient and it's maybe not as DRY as it needs to be. There's all this like stuff, right? Um and also uh if someone like you uses it, you can like move to this level of abstraction

where to some degree that code doesn't matter or it doesn't matter as much as it used to. Like how do you how do you square that sort of like craftsman mindset about code with what is now possible? >> Damn, man. I don't know. I don't know this week. I mean, I think two weeks ago I would have been able to be like, but like I got to tell you, I mean, I've been watching all this stuff real closely and I've been I know how LLMs work. I did the homework and so on and so forth and I

kind of knew we were headed in this direction. But again, it's like a it's a step change in product. It's not a step change in technology. Like the technology is still roughly the same. It just feels like there but there but there's also this element of like one of one of the things we haven't talked about yet is you can instruct it to get better. You can be like, "Hey, if you were Claude, if you were I I was like, if you're a really good uh engineer at Anthropic, take a look at

this codebase and tell me how to make it more efficient." And it's like, well, I would do these things and get this stuff out of this file and put it over here and make this more searchable and let's make a command over here and let me write you some code. And so it's self-referential, which means it can accelerate. And so what I'm getting at is I no longer feel I can in good faith say, "Hey, calm down and take it as it comes. Human skills are going

to be relevant." I I don't know if this is going to be a really good time for everybody because you've got 600,000 jobs in, you know, like Accenture alone. There's like 50 million devs in the world. There's a glut case to be made which is hey everybody can clean up their roadmap and it's a real great time for engineering to capture the value here and bring that acceleration to the organizations that they service and everybody can have their thing and that is really exciting and motivating and I

I think that would be one way to look at this and be like hey you can take your craft you can evaluate the output of this and you can make sure that the people in your world are getting good stuff faster but also make sure it's safe and on rails. But that just isn't how humans work, man. Humans are just like humans want to type in the box and get a thing and if it kind of works, they'll be like, "I did it." Just like you with your app or me with my apps. Like they might be

crap. You might be looking at this and you might have like app glaze all over it. Just like we see with images and text, but you can't see it yet because it's so shocking. Except that it's software and it's like it's not like there's no API glaze. Like it pulls from the database or it doesn't. So, it's just this very confusing moment where it's doing really practical, really difficult things that used to be really expensive. All I can tell people to do is like somebody on Blue Sky, I don't

know who it was, I just was like, which you know, Blue Sky doesn't love this stuff. Um, was like, I don't think anyone should have any opinions on AI until they spend two hours in the Opus room. And I I think that's right. Like, you got to just give it two hours and see where you get. And then you can be as grumpy as you want, but like you got to give it a go. >> I agree. I think um and I I would love to get to some of the like social implications, but I'm mostly interested

at first because I think the the only way to or or I think the best way to understand the the larger implications is to understand like the implications on yourself. Like how is it changing how you process the world and how you think about yourself? And so I'm curious about that for you. >> You know, it's funny. I'm building a AI company with a wonderful business partner who I've worked with forever. I'm looking out. We have a nice office and we have a great team and we have

clients and we work with them and we're doing what I just described. We are moving their roadmap along and we're bringing them tools much more cheaply and much more quickly than we used to be able to. And I think it'll get faster, right? like we want to we want to drive that value out and so in some way things are pretty normal in that I come to work on the train every day and in some ways they're not in that there was so much friction built in for good reasons into

the software development process and the software development process is social you know like engineers say no a lot and they say no for good reason and I used to train them to say no because clients would ask for things and it would blow up the scope and then the whole project wouldn't ship and then they'd call me on a Saturday and yell at me and I didn't want that to happen and so I'd be like we got to say no up front and my co-founder has a wonderful

maxim which is there's there's no bad news 90 days out if you see something failing and you tell somebody hey like I think we're going to have a problem I'm not going to be able to build your thing but it's 3 months ahead and you say let's work together to find a solution people tend to be very accommodating and understanding it's only like 3 days before when you're like we're going to miss the deadline that they freak out. And so my whole life has been architected around the fact that

everything I do is exhausting, takes time, and involves some of the most difficult people who have ever existed on the face of the earth who usually hate me and each other. Okay? And like that is my day-to-day and I'm pretty good at it and everybody thinks lots of thoughts about me and themselves and their disciplines and people are very very anchored to their disciplines, right? I'm a front-end engineer. I'm a full stack engineer. I'm a designer. I'm a product manager.

And to see all of those categories blur and all of those rules change and all of the things that allow people to say where their value is is frankly really overwhelming. And I don't I don't want to devalue that emotional response because I've been kind of coming in and being like, "Hey, let's all do this together and let's move forward." But boy, I don't know about you, but there are elements of this that are just a freaking smack across the face. >> You probably lose so much time in the

gaps between tools. You design in one place, you write and manage content in another, and then you publish somewhere else. Every jump is a chance for work or context to get lost or for something to go wrong. Framer is different. Framer already built the fastest way to publish beautiful production-ready websites and it's now redefining how to design for the web. With the recent launch of Design Pages, a free canvas-based design tool, Framer is more than a site

builder. It's a true all-in-one design platform. From social assets to campaign visuals to vectors and icons, all the way to a live site, Framer is where ideas go live from start to finish. Design, CMS, and publishing all on one canvas. No handoff, no hoping the final version looks like your mockup. What you make is what goes live. Framer isn't a stripped-down demo. It's a free full-featured design tool. You get vectors, 3D transforms, P3 colors, SVG animation, unlimited projects, and

collaborators. Are you ready to design, iterate, and publish all in one tool? Start creating for free at framer.com/design and use the code dan for a free month of Framer Pro. That's framer.com/design and use the promo code dan. Rules and restrictions may apply. And now back to the episode. It's interesting. I've definitely I've had moments of that both on the writing side and on the coding side, but I think that we're so in the center of just figuring out okay, what

do we do now that it has quickly shifted to like there's so much to do. So, so it's I think I'm familiar with the emotional experience. >> Well, you chose to jump in, right? You're like, I'm going to build infrastructure and community in order to address this change. We built a lovely office. you should come visit. Uh literally because we know that New York City is not ready for AI and we're like okay let's at least have a place where people can like and we've been having

not-for-profits in and lots of folks who like are going to get ignored so that we can talk about this. So I think that part feels really good. I think it's just like it's a lot of change like we're coming on we got GLP-1s, the pandemic, and now writing is funny for me too because I'm like I actually see the writing is because like it doesn't write for me. I kind of don't get it to write for me. It just it can't be me. Like I'm I just am what I am as a writer. But I see a lot

of people who aren't writers and my god it's good for them. Like and I I'm like it gives them access to a world and and to kind of entree into a more formal style of communication that they didn't have before. And so like to me writing is supposed to empower and like if the robot helps you that's good. If the robot thinks for you that's bad. So >> yeah, I think um I I've been trying to sort of process like okay, what are the what are what are those moments where I

have that existential freak out? What is that like? because I had that a few times sort of during this process and each time I've once I got over it felt like okay there there was something there that I missed and I'm I'm trying to like update my intuition or my analogies for like so I can understand those experiences better and there's that moment where um the present sort of like collapses into the past and everything that you used to know looks really old and you're like what's next

And the the intuitive experience that I think matches to this most closely is um before we had really good uh sea travel, we used to think that um if you if you went into the ocean, there would be like an edge that everything would fall off. Um there there's there's an edge of the world. And that's our intuitive notion in a lot of ways of what happens when you get to the horizon. And what we found when we got to the horizon is that there's more horizons. Um, and my

experience, I think that that maps pretty well onto my experience with AI is like each time I encounter this new thing, I'm like, "Oh my god, I'm at the edge of the world and there's like it's a cliff and it's just going to like drop off." And then each time I sort of step over the horizon, I'm like, whoa, there's this whole new territory. Which is not to say that um there are no bad effects and and and there's there's not like complicated social issues to to work out, but it is to say that

I've learned to catch that edge of the world intuition and try and try to update it with there's probably not an edge. There's just a new horizon. >> That's a good way to look at it. I agree with that. I think for me it doesn't I don't think human beings are going to change. I don't know if society will completely reorder itself. Although in a way it seems to be trying to. So that part's tricky. But I think what's wild to me is learning how hard it is for humans to

metabolize change. Um, for me the moment that blew my mind, the last time I felt this way just like exactly like this, was um my doctor put me on Mounjaro very early. I needed it and >> what's Mounjaro? >> Uh it's like Ozempic. It's a GLP-1. Okay. So suddenly after a lifetime of not being able to lose weight, I lost like 70 pounds in a hurry and I was very dangerously big. I'm still pretty big, but like my health changed and it was really after a lifetime of being told

like this is how this works. This is the only way it works. You can only do surgery. There is willpower and so on. So so all these rules in this whole social system and things that I heard from doctors and it was one day they went and it was really confusing. It was re I'm an adult man and it was really confusing to go from this is the system of the world. This is what weight is. This is what obesity is and these are the only ways that things can change to and then to hear the next day that

actually it kind of was a medical condition. Whoops. And then knowing that this would push through the world and this would change the way that we talk about our bodies completely. And it did. Like I just like I knew in that moment like oh we're not going to put this back in the box. This is going to be very different. People are going to have very strong opinions about it. Oprah's going to do a special and here we go. And I feel that way about this. Not that we

can't process the change, but just that a year or two, which is how long it's going to take for like the idea that you can just have code by typing in a box and it's pretty advanced and it does things like ship apps is nowhere near enough time to process that. like it's just nowhere near and it's it's actually going to look like that horizon. It might take a couple years for people to figure out that they can have any software they want any time. I use the

concept a lot. I call it latent software, like PDFs that describe procurement forms or Google spreadsheets that are floating around. Like my company Aboard is all about taking latent software and making it real and getting into people's hands. And so we've been trying to coach people along and they're very confused. And now you're about to see like you know that OpenAI is going to build their own and you know that Copilot's going to get smarter and you

know that there's going to be Super Bowl ads if not this year then next year about how you can have anything you ever wanted and we just rebuilt the whole society over the last 30 years around software right like software is eating the world was this whole idea and now it's eating itself and and so like I do look you're right like are we going to are Are we going to be okay as a species? About as okay as we ever are. Will there still be jobs? Yes. Right. Like I I don't I'm not actually a

pessimist, but I am after the pandemic and GLP-1s and Trump and everything. I'm just like very nervous about the human ability to tolerate change. And we've created the ultimate change engine that sits in the middle of our global economy and spews out change like at an unbelievable rate. And we've we just created the number one change accelerator possible, which is move software much much faster. And so I don't think we're going to see it's not going to be familiar. Parts of it will

be very familiar, but I think parts will be very very weird and it's going to be really really strange to watch. >> I love the GLP-1 example. Um, and it's interesting that you listed GLP-1s with Trump and the pandemic, which, you know, in my world, those are two pretty negative things, but GLP-1s, I assume you're >> um >> I'm big positive, you have a positive experience with them. So, it's sort of interesting, >> which is hard, man. I I was in client

services for 20 years. It is hard. I still am. >> It It's I have a really good product that can really help people. I have an organization that can really help people. I see Claude Code showing up and I'm showing it to people in my world because similar to you I'm like whoa and they're like well hold on a minute and I'm like no I and it's not me saying I want you to use this I literally just want to say it was like this when I was writing I just want to show you so that

you can figure out what to do next and what I have found over and over in the course of my life is that merely by showing people they tend to panic they don't want this change and they say they do. They want the output. They want the value. Everybody wants to be an app developer. But what they want is it to run the way it used to. I don't know if you've noticed this, but every product manager, you know, is now building their own app. And every engineer is building

their own app without product managers. And the product managers are building without engineers. And the designers are trying to figure out how to ship. And they're all really happy to get out get everybody out of their world, right? And they're pretty sure they're going to be able to capture the value of the revolution. and they want it to follow the rules that used to be there, but it won't like it won't. And so you can be I I like I don't know what we are. Are we

all pipeline builders? Are we all coders now? Are we all app builders? And like everybody's having the experience you and I are having who is deep in on this? But we're about to find that everything we created is probably more disposable and less exciting than we thought it was like two weeks from now. And so I am puzzling that. I think it's I think this is going to be a rough one deep down, an exciting one with an enormous amount of good things. And I I can't I I'm so

excited for everybody to have all the software they ever wanted cuz that's always been my dream, but now that it's here, I'm a little scared. Isn't that interesting? Like I've been thinking about that too a little bit is if I took a step back and like rewound like seven years or 10 years and I said there's just going to be a thing where you type into it and it just makes whatever you want. >> Yeah. >> I would have been like that's great. That's definitely not scary.

>> Finally happened. Yeah. They've been promising this. They have been promising this for 70 years. and then it just happened and then you're like like it makes me question if anything could happen that would be an unalloyed good. >> No, that's been the lesson of the last like 15 years. No is the answer. And that's I don't know like that's also the lesson of adulthood, right? Like and and it's also the lesson of working with people. When you work with people, their

best qualities are always their worst qualities. You know, I'm good at thinking big thoughts but often terrible at delivery. So, you have to pair me with somebody who's good at delivery. >> Yeah. >> Um, you know, because I get distracted. Um, you know what's funny though, and tangential to that, the promise of software, if you go back to like the Xerox PARC days, even before, the Lisp programming language and so on, is that we would have sets of composable objects that could interact and that an

average human being would be able to learn the system and build whatever they wanted. Okay, that was the whole point of like Alan Kay and the Dynabook in the 70s. If you don't know what this is like it's very legible. It's essentially like a laptop that kids can use to build any software they want. Proposed in the 70s at Xerox PARC. Go look at the Wikipedia page. Um it's it's kind of what we thought and we thought that was going to be the iPhone, right? We thought that

was in particular the iPad, to the point that like Steve Jobs and Alan Kay were kind of talking about that as the iPhone was being rolled out, like hey, I think we're getting closer, you know. And the idea was you'd manipulate code in ever more abstract ways, and what happened is computers continued to suck and suck and suck and be horrible and never work, and our solution was actually to simulate humans so that they could do it for you rather than make the

computer really really usable or figure out how to make really really robust code and there's good reasons for that but I don't want to go into them right now but like that people have been trying for decades and so suddenly we have it we have the fantasy of the 70s I could sit I can train anybody I think at this point to think algorithmically and structurally enough about applications you know and there's going to be a lot of retooling around how we educate people about what software does but I

think in about two weeks you could start to build really really meaningful stuff and I think in about two years you can probably build just about anything, and that used to be the work of 20 years. That is great, and it is great. I don't want to like freak out too much. Thanksgiving weekend just ended and I just spent too much time on the computer, I think. But I I want to I want to stick there because I love this that's the story of adulthood because

you're absolutely right, and that is my problem with a section of the AI discourse, I would say more the mainstream section, which has this hidden underlying assumption that anything that could have negative effects is bad, and so is looking only for those, more or less, as opposed to, like in adulthood, you're like there's some really good stuff here and there's some problems here and it's sort of this like you know wonderful and terrifying mix of

things and our job is to acknowledge the good stuff and deal with the bad stuff as best as best we can and I think um that's what's that's what's difficult to access when you're at the edge of the world, you know, is like, >> "Oh, okay. I know exactly what you're talking about here. I I see it differently." So, you've got a variety of discourses, right? So, let let's take one, which is the And the one you're talking about is like very left adjacent, very much shows up

on Blue Sky, right? In some ways, that's kind of my home base. Like, that's my my family, the way I was raised. You've got one group that is like, "AGI is coming. Get ready. The computer is God." Okay. And so like we've all kind of learned to make our peace with them. They don't live here in New York City. We're just going to like they seem good. It's a lot of guys, a lot of polyamory and good for them. I wish they would >> yoga, you know. >> Yeah. And they also really like they've

also kind of all shut up about AGI because there there's so much money to be made. Like you know, Sam Altman cracks me up, right? Because he wants to be Steve Jobs, but he's Steve Ballmer. He just kind of got the wrong Steve. And and it's just like here we go. Okay. Commerce, capitalism. >> That is a hot take. >> I mean, am I wrong, though? Tell me if I'm wrong. >> I I would love for you to unpack that. I think that's a I think it's a great it's a great line. >> Oh, do I even need to? He's a really

really good salesman. He's a really good deal guy. He told us we were headed towards AI Jesus and now we're getting um shopping, >> right? Like he's he's a commerce guy. I don't actually I think he's good at that, you know? I think Anthropic, it's funny if you compare the two companies like OpenAI is very much Microsoft. Like whatever you want, whatever you want. We're going to sell this to you and you're going to have it. God, let me give you more. And and anthropic is

Google. Like, and it's actually funny because look where they're buying their chips. Like like Anthropic is literally buying Google TPUs. Like they're >> I thought you were going to say Anthropic is Apple. No, nobody's Apple because nobody's really um Claude Code is great, but it has nothing to do with human beings. It has to do with it's still for engineers. You can't put anyone You can't put a civilian in front of that interface. It makes no sense. >> That's true.

>> You just can't. Now, could they get there? Maybe. I just don't think they even want to. I think they want to just accelerate accelerate accelerate engineering and let everybody go run off and then they'll figure out how to productize along the way. Whereas like I think OpenAI wants to make a play for the whole shebang. They want to be the operating system. Um and the Apple in the middle the PE like what's it going to look the thing about Apple is it made the computer disappear.

So who's going to make the LLM disappear, right? And just sort of align it with what people want to do today. And I I don't know if we're even there yet with this technology. >> I don't think so either. >> Oh, so wait. So that's that's group one. Okay, we got group one and then here is my I'll actually give some advice which is Silicon Valley in particular dropped this absolutely bizarre thing, told everybody it would solve every possible social ill and didn't really come with a

plan. And there were real harms that emerged and people panicked and the harm frameworks weren't clear. And I think what we got to do, cuz I'm in there too, man. I love this stuff. I use it every day. And then I go on Blue Sky where like 80% of my feed is people saying how much they hate everything that I'm touching all day long. And I get it. I get it because they also hated the tech industry. I I think you got to just like let them burn it out. There will be people who

just hate this [ __ ] for the rest of their life. Um, and what you'll find, because I'll tell you, here's what's wild. And this is actually as someone who's very much kind of on their I feel I'm on their side. Um, I got my kind of progressive type literary types from my, you know, I used to be an editor at Harper's Magazine, right? And so like there's a whole world there for me where those people want nothing to do with this. They want their pros untouched by a robot. And they want

a certain world and a certain vision of the world to persevere. And this is all noise and distraction from that just like everything is, just like the tech industry is, just like the web is, like blogging was. And they're just like, "Please let me get back to my purity and please get out of my hair." And okay, like that's what they want. Then I think there's but then there's this very tricky thing going on. There's a lot of people are like, "This is just an

unalloyed evil and we have to reject it." And at the same time, I'm sitting here in my nice office in New York City, but I'm hearing from and working with children's health charities and scientists and real do-gooders and climate types who are like, "This can accelerate our roadmap and we want to do it. We want to use these tools to achieve our mission." And their mission is what I believe to be unalloyed positive in the world. They see the value. They're often coming to it like

as scientists. They see the risks and they're like let's please use it in order to get that done. And for them, software is not the star of the show. Their work is their community, their donors. Uh and they're like what can we do to aggregate the data or deploy the platform or manage the content or do this stuff in such a way that we can do more of the other thing we want to do, which is we believe in unalloyed good for the world. And they're super excited and motivated. And so what

I see when you're talking about that stuff, there's actually a strange fork. There's a group of people who are like, I believe that I have a really good ethical model for what humans need and I believe we have to reject this outright. And then there's another group that is like, I believe that and it's my day-to-day job. And group A is like, keep this out of everything. And group B is like, I can't wait to use more of this. And it's very, very confusing. And I think that tension is going to just

keep rising. And at the same time, there are people who are like, I'm a professor. I teach research methods. I don't want this near my students. I need their brains to work. And I get that. I actually think that's right. Like, good. Okay, draw that line. Make them figure it out. They're going to go use it anyway. They know that. But like, if you want to put them in a box for a minute so that they actually learn the history of how to think and and what to do, and

you feel that that's important as an educator, I'm not going to second guess you. I respect that. So, I think it's trying to find a balance in all this, but ultimately the balance is like you're you're there with that prompt and it does something for you that's really useful and kind of knowing what's good and what's bad about it and then going on with your life. Cuz if you even try to engage with any of the discourse around this technology, you're just in hell, which I mean, I'm glad I didn't

start a business totally focused on that problem. >> That's This is why I stay off of Blue Sky. I can't imagine I can't imagine being you on Blue Sky. But it sounds like it sucks. I get a I get a funny hall pass with this stuff cuz I'm an old and and you know I just like I just I I still get yelled at on a regular basis. But like yes, yesterday, Simon Willison, um, who I'm guessing many of your listeners should know, is just like wow. He sort of stirred the hornet's nest by

talking about how AI was changing coding and I just did a "he's right, you should listen" post and you know like half the people what what happens is everybody comes out and they're like yep yep and then the other half are like no there's this one time and it's this and it's that and let them fight man, let them fight in your mentions. I think this is actually a very typical um basically reaction to a paradigm shift. >> And to some degree, people who have who are like

know how they do things and want to keep doing it that way are just going to keep doing it. And it's the same. >> It's such a sad day. You also got people coming in from the West Coast telling you how it must be done forever more. >> Yeah. And that's it feels real bad like and and they just dismiss your concerns, right? We're used to it. We're tech nerds and we're used to we're used to nerds just kind of like stumbling in. Nerds never actually fully acknowledge

how much power they have in a room. And so they're like, "What? Why is everybody so upset? It's just really cool technology." And then and then, you know, it's like, "Well, because I was going to make my living as an illustrator and I was going to send my children to a like, you know, we were going to go on vacation once." and they're like, "Well, whatever, UBI." And and like that that whole thing, that's how that comes across. It's just this tin ear on the West Coast. And it is

pretty hard for people, I think, to be told over and over how it's okay that they're being devalued without being celebrated in any way. And and so you end up with stuff like Anthropic having to pay $1.5 billion to publishers, right? Because like of all that stuff, you know, it's just like these they feel vulnerable and then they feel attacked and then they're going to use what power they have and one of the powers they have is to just complain. Um, and I don't know. I think you got to

we have to own that because we got to keep all the money. >> Well, let's let's let's unpack that a little bit. >> Okay. First of all, let's let's bring in some employees to unpack it with us as the as the leaders of our companies, right? Hey guys, come on in. Let's talk about how >> what I want to understand. What I want like I I love the they feel vulnerable and uh if you feel vulnerable and something new comes along, it's like it's an it's an obvious immediate

reaction to be like this is bad. I don't I don't like this. I don't want this. Right. >> But doesn't help that they all went to the White House and kumbayad with Donald Trump including Jensen Huang. I mean it doesn't like help the vulnerable people feel less vulnerable. Let's just just putting that out there. Anyway, go on. >> That lost my mom. Um, which is sad. >> Yeah. >> Yeah. >> Um, but >> what are you doing in League with Satan? >> You're you're you're replaying my

Thanksgiving conversations. Um, no. My mom is much uh much she's very proud. Um, >> she should be she should be very proud. >> She wants me to be careful. >> You better be careful >> associating with the league. Um >> yeah, >> I think um but let's let's anoint ourselves, you know, in between worlds type people where we like the tech stuff and then we also care a lot about writing in the humanities. And so >> um ideally uh because we're amazing New York tech people, we can kind we can

kind of be the bridge that's missing between these two camps. And what I want to understand, let's say we're trying to explore. >> Lit literally, we you and I can do an event, have a nice space. We can we can bring them all here. It'll be great. >> I would love that. That would be amazing. >> We're we're going to do that. We're going to do um we're going to do an event where humanities people can come yell at us. >> I I didn't sign up for that. >> You're boo. You get the

>> No, no, no. We're doing it. We're going to bring in the angriest, overpaid professors from the most expensive schools in America to tell us how bad we are. >> Here's what I want to understand. Let like let's just let's take the the balanced perspective for a second and say we want to examine we want to examine the the arguments of the people of the people on the left who are loudest about this um being bad. Mhm. >> And um like what what are the what are what

do you think are the actual real bad things that have happened or are happening or will happen that um a reasonable person who loves this technology should care about? >> That's a very good question. Let me think for a second before running my mouth because I think look there are a lot of stories and narratives about specific harms. You see them in the paper. Um, and you know it'll be uh ChatGPT encouraging suicide in teens. And I think there's an element I have I have

a tricky reaction to that because as a technologist I've watched and I'm I'm I'm 51, right? I've watched like two or three generations of internet technology and these harms just spill out at scale and it it's really not stopping the harms is not always possible. You have a new technology, you see ways that and and I think what happens is you see these orgs, they get they get a narrative of their own importance in the world because they're getting constant positive feedback. The money's pouring

in. People are saying, "My god, you know, my this really helped my daughter. this really helped my son. This is we're using this in all sorts of exciting scientific ways. And then they're shocked when something bad happens, right? Because there's so much good pouring in and it's coming with so much money and they're shocked and then they do like a fullcourt press and then you end up in this like bizarre cycle where you know it always ends up with like somebody getting really into MMA as

like a CEO, right? I just sort of like that's No, but I really think that that's like them asserting they they they're so they feel so attacked and they feel so vulnerable cuz people keep telling them that they're kind of evil that they're like I'm going to become a freaking cage fighter and that's going to show them and and you know it's like it's kite surfing is like the gateway drug to that and like it's just like a whole thing that happens. So you've got

this whole like cultural dynamic playing out inside of giant tech orgs as the money pours in and it's like a whole thing and then you've got the the press desperately seeking for very specific harms to get a story that can turn into a narrative that can be a little bit broader. And you smash those two things together and it's pretty hideous. And the only way that you resolve that is through regulation and oversight. But our society is at least a little bit collapsing and it just doesn't seem

interested in that. And so now, what would be a thing to do here? First of all, I don't want to put LLMs back in the box. I would say that when we're talking about harms, not specific harms, the lack of provenance is bad. I would like to know what goes into my meat. Okay? Like, I want some nutritional guidelines as to what's in my Anthropic LLM and what it's

using and where that data came from. I don't want to be surprised by huge copyright cases. I I shouldn't be. I should know what I'm I'm using. I know that Google is the web roughly and Google doesn't go into secret parts of the web and it honors robots.txt. That is a contract that Google made with the web and when it doesn't honor it, it's really bad. And in fact, there have been technologies where Google kind of like tried to sidestep the open web and people got really upset. Um, like AMP

pages and things like that. >> Oh, you and I... you're drinking a Spindrift Tropical Lemonade. >> Love it. >> Looks like I am too. >> Great minds. >> Spindrift. >> The brand of New York liberal tech nerds. >> God, it's so bad. So bad. It's a terrible place to be in technology, New York City. Um, so anyway, coming back to it, right: what is the harm that's been done? We won't know the real harm, not the specific harm but the broad one. I don't see it as harm, I just see it as change. What

kind of society do we want to have to deal with the kind of change that is coming? A 50-million-person underpinning of the entire global economy, the tech industry: you've got giant consulting firms, you've got tech integration firms and software companies, and their core product has been radically devalued. What do we think about that? Who gets to talk about that? Like, who is going to... the AI folks are going to be like, it's great, it's the best thing that ever

happened. Everybody gets their software. I'm going to say that because I I'm building a product along those lines. But like if we're going to have this level of change, it almost feels like you're not even What I think is going to shock people is how how people see it coming but then don't really plan for it. Like everybody and and that's what actually panics me a little bit, Dan. I like because people are like well you're still going to need engineers for this


and you're still everybody is like well but when they see this new technology and I think we have to start internalizing actually horizons aside this will change a lot of the ways that people do things and it might change the way they make money and it may change what their lives are like so what's that going to look like and ironically I had Claude make me a prediction model for the future of the consulting industry and write me little stories. >> They were all >> What's that?

>> What'd you get? What did it say? >> Oh, dude. They were really sad. No, cuz I literally was like, "Okay, you know what, Paul, you get a little cynical. Just say mild bearish." Mild bearish. Okay. And it was like, Rahul thought that he had made a good choice by going to computer science. Like, one after the other. And it was like, here's how to draw a Sankey chart. I can share it with you. You can share it; I published it as an artifact here. Let me

just give it to you. Let me show you this thing. >> Please. >> Hold on, because I want you to see it. One sec. There we go. You see that? >> Yep, I do. >> Okay. So, this is... I didn't give it this title. And in fact, I tried to really hedge. I was like, "Hey, looks like AI might really change the consulting industry, and I want you to make a Sankey chart and tell me..." >> What is a Sankey chart? >> It's one of these guys. Okay. So, it's like a chart.

>> Yeah. Yeah. Stuff comes in on the left and it gets turned into work on the right. So, like >> financial services clients, you know, feed in, and then they make this much money off of consulting. So, right now we're looking at Deloitte, a giant consulting firm, and it does audit and assurance and consulting and tax and legal. Let me see if I zoom in a little bit. Oh, wait, I just zoomed in on you. Here we go. Okay. So, mostly, like I said, I said the mild

bearish case. It does seem like this could really affect these industries; just show me kind of what might happen if AI was going. And you know, it's kind of ironic to ask Claude. And so, I was like, let's look at McKinsey. Everybody loves McKinsey, everybody's favorite company. So, $16 billion in revenue, 45K employees, headquarters New York City, and in 2024 their revenue is about $16 billion. Now, I didn't have it do deep research. It was just very hand-wavy.
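For anyone who has never drawn one, a Sankey chart really is just that: flows enter on the left and get split into outputs on the right. Here is a minimal sketch using Plotly in Python; the client segments, practice areas, and dollar values are made-up placeholders for illustration, not the numbers from Paul's artifact.

```python
# Minimal Sankey chart: client segments on the left flow into practice areas
# on the right. Requires `pip install plotly`. All labels and values below
# are illustrative placeholders.
import plotly.graph_objects as go

labels = [
    "Financial services clients",  # 0
    "Tech clients",                # 1
    "Consulting",                  # 2
    "Audit & assurance",           # 3
    "Tax & legal",                 # 4
]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=16),
    link=dict(
        source=[0, 0, 1, 1],  # left-hand node index for each flow
        target=[2, 3, 2, 4],  # right-hand node index for each flow
        value=[6, 3, 5, 2],   # width of each flow, e.g. billions of revenue
    ),
))
fig.update_layout(title_text="Illustrative consulting revenue flows ($B)")
fig.show()
```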


So I'm guessing all this is kind of wrong. Let's be clear, like, it's not. But it says that by 2035, McKinsey's revenues, if it loses digital services, are going to get down to $4 billion. And so you can see that here: if we switch to four billion, the whole chart shrinks, and we go, you know, let's go back to... right now we're making our money through corporate strategy, operations, and so on. So I had it write employee stories for each company, and so Alexandra >> done everything right.

>> Oh yeah, everything. Stanford undergrad, Harvard MBA, McKinsey associate at 27. She was on the partner track, billing $800 an hour to tell Fortune 500 CEOs what they already suspected but needed external validation to act on. I got to say, Claude just decided to burn the [ __ ] out of McKinsey. Like, again, I'm not grinding an axe here. I was just like, you know, just write little stories about what's up. Um, the dirty secret of strategy consulting

was that the frameworks weren't magic. They were structured thinking applied to ambiguous problems, and structured thinking turned out to be exactly what AI was good at. By 2027, a CEO could upload their company's data, describe their strategic question, and get a McKinsey-quality analysis in an hour, complete with market sizing, competitive dynamics, and three options with trade-offs. It wasn't as polished, didn't come with the McKinsey name, but it was 95% as good at 1% of the price.

McKenzie tried to go up market. We don't sell analysis, we sell judgment, the partner said. We sell access, relationships, implementation support, but implementation was getting automated, too. And relationships only mattered if you had something valuable to offer. Alex made partner in 2029 just as the firm started its long contraction. She was one of the last. By 2032, McKenzie was a quarter of its former size, serving only the largest clients who needed the brand for board

cover. Again, damn, Claude. She left for a client: chief strategy officer at a midcap industrial company, less prestigious, more stable. She actually got to see her decisions play out, which was novel. >> Oh my god. >> Sometimes she missed the intellectual intensity, the feeling of being the smartest people in the room. Then she remembered that the smartest thing in every room now was the computer. >> Incredible. Incredibly scary. I'll share this with you so you can share it with


your listeners, but it is... it's a Claude Code artifact that I built yesterday for fun, and I shared it with somebody who works at one of the firms, and they're like, they got the numbers a little bit wrong, and then they were just really quiet for a minute, and they went, interesting. So, but yeah: Accenture. Rahul had spent 15 years there; armies got smaller, they disappeared, and then at the end he took a buyout at 45, started a small consultancy helping mid-market companies with the human side of AI adoption,

change management, the squishy stuff the AI couldn't do well. It's a living. Some weeks he almost believes he's adding value. Mild bearish. Um, anyway. You know, is it a little ridiculous that I'm using AI to explore this particular part of the world? Sure. Do I buy this? No. Because, actually, I do... your horizon thing is real. Nobody knows what's on the other side. Right. The mild bearish case is that an economic contraction won't have a sudden

flowering of new opportunity and that people won't figure out what to do next and they'll just be captured in in this kind of like shrinking world while robots do more for the rest of their lives. And that's not actually how humans and societies work like but but I do think it is a change at that level of magnitude that we're going to have to react to. >> I agree. Um I love I love that. I think that's so interesting and I think it's actually a good um example of why language models are so

powerful and what makes them sort of special, and in that is an interesting example of why I think consulting firms, oddly, are going to still be valuable and important. >> Oh, let's bring that in. Let's hear what you got. >> Great. So the thing that it seems to have picked up on in its mild bear case is that you can get the analysis and the judgment for, you know, 1% of the cost, and obviously the thing is like, oh, it's not buying analysis and

judgment or whatever but I there's something I I want to just stick with the like analysis and judgment for 1% of the cost because like I have done this too. I have put all of our company financials into Claude and had it write our investor update and it did a [ __ ] phenomenal job. >> Yeah. I mean, it's so good. Anything kind of bureaucratic, it's just magical. >> Yeah. And I've also done a lot of strategy stuff with it. And I think that I I I think you one way to

you can break up human thought, or just ways of solving problems, into two broad categories. In one category, there's a right answer and it's extremely rare, but there's only one. It's a needle in a haystack, which is what traditional programming is actually quite good at. It's math, it's logic, all that kind of stuff. >> Give me an example, like, just kind of like an Excel spreadsheet. >> Exactly. >> Okay. >> You know, how profitable were we this quarter? There's,

like, you know, a set of rules that you can apply and there's one right answer, because you have very precise definitions of what right is. >> Make me a pie chart of what we're selling. >> Exactly. And on the other end is... the literary analogy is the Borges story, the Library of Babel, where it's like, every book is there, but there's infinitely many books between where you are and where you want to get

to and they're all nonsense. So you you you're just always in nonsense basically unless you've artificially constrained the search space. The other the other, you know, I don't know, branch of human thought or way to think about the way the world is is um instead of this li this infinite library where um um you're you're you're sitting in this sea of nonsense, but you know that if you go get through enough nonsense, you'll get to the right answer. and there is a right answer because there's

only so many, you know, pieces of hay between you and the answer. You're going to get to the needle in the haystack. On the other end is a library where every single book is meaningful and has a story, but there are infinitely many books between where you are and where you want to be. It's not countably infinite. It's just, you're just sort of in this enchanted forest of stories that you can go read, and each of

them has has a plausible sounding answer and you have to use your own human intuition or judgment um and and feedback from the world to like move your way to like generally the right area, but there's no right answer. And um when we're thinking about a question like when we're thinking about a question like um what's going to happen to consulting businesses or what strategy could consulting businesses take or you know what's the mild bear case I think we're much more likely to

be, especially if we're looking at a Claude answer, in the regime of there's-infinitely-many-meaningful-stories, and we're looking at one of them but sort of treating it like it's the other one, where there's a right answer and Claude just found the right answer. Because if you change your prompt slightly, Claude could write you a great story about why consulting businesses are going to do really, really well.
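Dan's mirror point is easy to check for yourself. Here is a hedged sketch using the Anthropic Python SDK, assuming the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set; the model ID below is a placeholder. The same question with only the framing changed will typically come back as two equally fluent, opposite stories.

```python
# Ask the same question twice, changing only the framing of the prompt.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment;
# the model name is a placeholder, not a recommendation.
import anthropic

client = anthropic.Anthropic()
question = "What happens to the strategy-consulting industry over the next ten years?"

for framing in ("mildly bearish", "mildly bullish"):
    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Take a {framing} view. {question} Write a short, concrete story.",
        }],
    )
    print(f"--- {framing} ---")
    print(response.content[0].text)
```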

>> That's right. It was a mirror of my anxiety at the moment, but you're absolutely right. Like, I was literally five minutes away, and if it hadn't told me that I was running out of Opus credits, I probably would have done it. Which I wasn't, by the way; a little product problem there, in case the people from Anthropic are... I had 20% left and it's like, hey, we're almost done here, and I'm like, really? Because I have a problem, but it's not that profound. So

anyway, but yeah, you're right. Right. like the mirror the mirror of it and that's the tool of it and that's a really hard thing to convey because what people are used to is putting words in a box like with Google and getting a response and being able to like trust and evaluate that response and instead you're putting work words in a box and it's translating your idea into another form and that is simply going to mirror what was inherent in the idea as according to the rules of the LLM as

opposed to actually being an answer to your question, but it's suspiciously like an answer. And so this is such a subtle thing. And again, this is where I get... if you ask me, going back to harms, the greatest harm that the LLM companies do, and I actually think that Anthropic does a better job here, is to anthropomorphize the bots. That has caused... like, the fact that it looks like it's answering, rather than statistically translating a question into an answer, and then that answer into

code, and then that code into other code. If they had emphasized translation as opposed to chat, I think we'd be in a much better place with this technology, and I think we'd have a better understanding of it. >> What would that look like, from a UI, in a way that would make sense? >> You know, I think what would be useful is, instead of a... it's a good question. I don't have an immediate answer, but my instinct is you would keep,

you know, I mean, this will be really nerdy, but more like a GitHub commit log. Like, you put this in, and then... and actually this is what Claude Code and other things end up looking like, which is: here was our state, and then I evaluated it, and I did a bunch of queries in my internal database, and I transformed it into this new state. I've saved the old state in case we want to go back to it. But here we are now today. So we have a whole new kind of context and we've actually changed

the way that we're working. Where do you want to go from here? Well, I want to do this and I want to do that. Great. I'm going to update the state again and I'm going to keep a really clear log and I'm going to keep the relationships between where I was when we started doing this and where I am now. I'll keep that explicit so that you can learn how this works and how to do this and how to do it repeatably and how to do it uh on guard rails and how how to do it in such

a way that you have confidence that it will be the same today as it was yesterday. And if you gave me that... does an average human being really want that? I don't know, but I do, right? And is that going to work better than chat? No, probably not; it probably won't get you 700 million users. But I think that, like, LLMs are complicated. It's really hard to learn how they work. I actually had ChatGPT write me a medieval quest in which a magic spell was said, tokenized, and

sent through the different layers of the LLM. I highly recommend it. Like find an analogy that works for you, and then make it explain LLM in the context of like a quest through a journey. Yeah. because otherwise you don't there's a lot of things that just get go missing like the fact that there's zillions of layers happening and each layer is kind of like talking back and forth to the other layers and and sort of your it's not like your your question is being

answered. Your your question is being broken up and and spread across sort of like a zillion meta databases that are then coming back and forming something that looks like an answer but without consciousness. said like that, you know, I I don't know how to explain that to people just yet. >> I got to I got to stop you there. I Well, there's I have so we could do a whole other podcast on this, but I I just want to let me let me respond and then I'm curious. I'm curious what you


think. And then I think we should we should definitely do a part two of this conversation. >> But what I hear what I'm hearing is >> we could do a live event, too. That'd be fun. We could record. >> Let's do it. Yeah. And we can we can invite all the liberals and they can yell at us as as you said. Yeah. Um um so um what I hear you saying or or almost yearning for sounds like traditional code. >> You know you know what you're going to get. You know if you do it today it's


going to be the same as it was yesterday, or tomorrow it's going to be the same as it was today. It's very traceable. And I also hear a little bit of, like, it's not actually giving you an answer; it's more of a stochastic parrot type thing. >> A little bit, but keep going. I'll respond. Keep going. >> Yeah. >> And my feeling about this is actually we are extremely well equipped to work with the way language models work, and we're much better equipped than we are

to work with code for people who are non-experts. And that's because the and and and I think it's actually a good thing that they're anthropomorphized because um we have models, very advanced models for how to deal with human beings. >> And human beings are like this. They are squishy. They do not necessarily give you the same answer today as they did yesterday. Um and there are specific kinds of people that are particularly like language models. Um so people

pleasers. Um, as a as a people pleaser, I'm very much like a language model. >> Um, >> I I have a lot of empathy for my language models. That's true. >> Yeah. And um and and you you you get that sense from a people pleaser where like other people pleasers in my life. Like I can just see when they're kind of like doing that thing where they're just telling me what I want to hear and I'm like stop. Like I just want to know what you think. you know, and so I think we have a lot of basically innate


biological machinery for dealing with this kind of interaction. And yes, there's an adjustment period, and yes, for example, we should be detecting if you're in a delusional state, and ChatGPT should not talk to you, or it should at least not, you know, go along with your delusions, right? But I think people will very naturally learn, because there's a really close analog; they'll very naturally learn to use it and then very naturally learn to

separate it from other types of things. Um, and put it in its own sort of category and and and I think uh that's why I think it is actually kind of genius that it is a chat and it is a little bit anthropomorphized and it is interacting in that kind of way.

Hm.

I don't know. I see it. I get it. I just don't know if we can handle this, man. I don't know. I think the humans are pretty... When I'm talking about making it reproducible, that's me as a kind of programmer, outliner type. I get that. But I think what's tricky and what's thorny is when you talk to businesses and orgs, ones that really want to use it, not ones that are just trying to figure out what generative AI means. That lack of reproducibility is really scary, because

they need to know that something. You know what I think? Here's what it sounds like you're saying to me and push back on this, Paul. It sounds like you want it to work like computer, but it doesn't work like computer. It works like new thing and you should get used to new thing instead of expecting it to work like computer. It's not quite right. It's close. Um the the the slight um um the slight change I would make is it works like new thing that is very close

to thing that is older and more innate for you to interact with than computer. And that gives you a lot of innate biological, cultural machinery for how to deal with new thing productively, in a way that you actually did not have with computer. And it comes with costs; it's not cost-free. You may confuse new thing with person. But it is also part of its power and beauty, and part of the reason why it has been adopted so heavily, and makes me optimistic that we

will also start to naturally separate out new thing into a into a clearly new category that we know how to deal with because we know how to do that with people. we know how to do that with the people in our lives who like act a certain way we know we have to like deal with and that's why actually I think some of the outrage or some of the news articles or whatever is productive because it's the only way to or it's not maybe the only way but it is one good way to

get people to just, like, pay attention and just be like, okay, I've got to be a little bit suspicious of ChatGPT, but I'm still going to use it. And so I think that's... you know, I would write the articles differently, I would write the headlines differently, but I think what we're trying for is some way to differentiate between person and new thing. But I think that's a productive process that's going to happen. >> I mean, interesting. Okay, I'm puzzling it out,

because what is my actual criticism here? Here's what I want. What I want, if I am a business or a not-for-profit or I manage a lot of electronic health records: if you want me to use new thing, I need to know that it works like computer, because I trust computer. Computer is encrypted and saves the data and it's good. And you're telling me that new thing will let me have more of this. But I need to know that it's going to be the same today as it was yesterday as an

interface, as a way to get to that stuff. And maybe the way to get to that stuff is you get the new robot to write the code and as a result you have this very reproducible environment. Maybe it can stand up things that repeat. But that ambiguity, it's not really just my ambiguity. Like I think that's the ambiguity that a lot of organizational thinkers are are dealing with, right? Like how do I trust this? I know I can do stuff with it, but how do I trust it?
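One concrete reading of what Paul is asking for is the commit-log-style interface he described a few minutes earlier: every step records what went in, what came out, and where it started from, so the org can audit and replay it. A minimal, hypothetical sketch in plain Python; the names and structure here are assumptions for illustration, not any vendor's product.

```python
# Append-only log of state transitions, in the spirit of a git commit log:
# each entry records the prompt, the prior state's hash, and the new state,
# so yesterday's run can be diffed against today's. Hypothetical sketch.
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class StateLog:
    entries: list = field(default_factory=list)

    def _hash(self, obj) -> str:
        # Stable fingerprint of a JSON-serializable state.
        return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

    def record(self, prompt: str, prior_state: dict, new_state: dict) -> dict:
        entry = {
            "parent": self._hash(prior_state),  # where we were
            "prompt": prompt,                   # what we asked for
            "state": new_state,                 # where we are now
            "state_hash": self._hash(new_state),
        }
        self.entries.append(entry)
        return entry


# Usage: each model-assisted step appends an entry instead of silently
# mutating the working state.
log = StateLog()
v0 = {"report": None}
v1 = {"report": "Q3 draft"}
log.record("Draft the Q3 investor update from the financials", v0, v1)
print(json.dumps(log.entries, indent=2))
```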

Um, and what you're giving back to me is like at some level it feels like you're saying you can't because it's like people >> and companies run with people. Um, >> yeah, but they'd love not to. See, this is the this is I know. So, this is the fantasy, right? So, take a second if we have a second and tease this out because I think this is really important. The fantasy of this technology, which I think I agree with you is not actually what it's for, is that it will give me

the interface to human beings, but the discipline and predictability of the computer. And that isn't working yet. >> Absolutely not. >> And what's happening I I do think that like OpenAI is saying just give us a minute. Just give we're going to get you that. We're going to get you the people that you don't have to pay that do exactly what you tell. We just need a little more time. and and at some level I feel like that's where AGI has end has landed as a as a concept like a a cohort

of disciplined bots. Um do you where do you think we're going? What like I'm saying this I'm sort of watching your face do funny things like what are you thinking? >> I think well I have a whole AGI take. Um but um the really important thing is do exactly what you tell them. And exactly what you tell them is that's the whole that's the whole ball game. Like what are you going to what are you going to tell them? Um and I think the the way that our intuition fails us is well if it does

exactly what I say it's going to be the perfect thing. And that's actually just not true because you often don't know what to say. Like it's a process. It's a creative process of figuring out what to say with experience, with other people, with the machine. Um and I think also

there are organiz... right, that is the actual value of this thing: it generates constructive confusion, and you have to address it with it, but then you can kind of iterate through confusion and get to goals. >> Yeah. And that is very, very real, and it is not saleable. That's not what anybody wants to buy. >> I think that... so, we do a lot of consulting too, with big companies, and I think there is room for AI inside of big companies. However, I

think it may actually be that and this is this this should actually be a um a positive thing if you're afraid of AI adoption being too quick is I don't think that you can it's very hard to be totally AI native retrofitting into a big company. I just don't think that that happens really. Um, and so even though >> explain that because it's I mean literally that's kind of we're trying to build that bridge and it's hard and I think I know what you're talking about.

>> The exact thing that I'm talking about is exactly what you're saying. Exactly what you're saying which is like well they want it to be predictable and do the same thing today as it did yesterday. And that's just not how this technology is. Um, and so >> at its best, like I think there is a way to make things very predictable, but you're saying like at its best, >> yeah, at its best, that's not what it is. Um, >> okay. >> And so, uh, so big companies can use this and

can start to adopt it, but because they have all these forces and constraints that make it difficult to use things that can't totally be trusted and are totally new, it's difficult to use it to its maximal extent. But I think that will lead to less change than might be intuitive to those of us who are sitting around at Thanksgiving being like, holy [ __ ], Opus 4.5 just changed the entire world. I have this debate constantly with my

business partner, because I'm like, man, that's it, death is coming, and he's like, just shut up, right? Like, have you seen bureaucracy? And he's right. Like, I've worked with some of the largest bureaucracies in the world. >> Takes a long time. >> Once, we were up for a project with... with America, years and years ago, Obama era, and they're like, God, if you guys could do it, we could give you 20 grand, if you could just, like, take an AMX. And we're like, we'll do it, we'll help America.

And then, like, a week later they called back. They're like, "Nah, we're just going to give the Navy $2 million." And it was for, essentially, a glorified RSS feed reader. Like, it wasn't... no, it would have been like a $50,000 project. We were going to take a hit, but it's madness, right? And so the largest bureaucracies have never had a sense of value, and money, and the actual delivery being all that connected, as much as an

individual developer might feel. So I think excuse me I think you're not going to change that. I think that is right. The only thing I think though, Dan, is like >> I want to I want to finish though because I think there there's there's this there's this other there's this other component of that which is pace of change is slower but companies like ours that are right now we're about 20 people like sub 20 people that are growing up in this world where every single person

is using Claude Code across the organization for every single thing. You're creating all these new primitives for how to work with this squishy technology, and it's not about how do we make it so predictable that it doesn't take risks; it's, how do we do the most we possibly can with it, because we're small enough and young enough that we can take those kinds of risks. And those kinds of companies, I think there's only a small number right now, but

they're going to be a lot of companies like that over the next 5 or 10 years and they are going to become big companies um and be acquired by big companies and so that's the um that's the other side of it is instead of trying to make the technology uh legible to someone who's like running a multi-billion dollar company, you can you're actually going to get the best out of it by making it like the most useful thing for these this like small group of early adopters that are

figuring out how to use the squishiness to our advantage. >> Yeah. I mean, I think there's going to be... it's like anything, it's so big, and this space is so big. It was already so big, and we're dropping such a big change into it, that it's going to express in multiple different ways. Like, I completely buy that there will be lots of AI-native orgs, especially now that, like, I'm seeing Claude Code and the actual promised future of accelerated delivery

is here like you can I mean our thing too like you can build a business app in like five minutes and it used to be five months and so like and that's true of 3D rendering and that's true of like all these categories that were really really complicated before um and so I think there'll be this huge a layer of acceleration from relatively small organizations that can deal with that, take it in, learn it, and apply it and and have a desire to like share the value. They want to like do more, get

paid less, but move faster. I think like there's huge opportunities there. I think where people are screwed is if they're like, "Cool, now I can engineer 10 times faster. I'm going to go on vacation and I'll just get all my work done in like five minutes and nobody will know." That is going to come bite you. But I also do I think though it's too big of a change and people are going to want some of that for themselves. Like I'm just sort of thinking about really big orgs I've

worked with where the engineers just say no all the time and the CEO is really frustrated but that's just life. That's just how it goes. That's what it's always been like. And then somebody shows up and they're just like it doesn't have to be that way. you know, you can have everything. That's going to feel so good. It's going to feel so good. And they're going to throw it by the wayside. It's going to be like a live, laugh, love kind of like trip to Italy for them. They're going to just be

I, you know, they're going to abandon their family, because they can suddenly... like, the supply-chain SAP integration that was scheduled for 36 months now takes three. Oh my god. I just... the other thing too, and I'm sorry to get corporate with it, but, like, SMBs can't afford big enterprise software, but they also, like, don't have CTOs; like, they still know what to do in the middle, and they can have really good tools now, which means for them that

instead of implementing Salesforce they can buy a summer home like it's like that that's sort of where that equation plays out. I don't think because what you're saying here is is all true up until the point that you realize that a vast amount of spend on technology goes to like five companies and everybody kind of hates those five companies. Like unless they make money from them, they hate them. Like they come to us and they say, "I hate this company and I will do

anything to never work with their software again." And so, given that being out there, I think there's a lot of drama ahead as people decide if they want to spend millions of dollars on SaaS or not, and sort of heavy enterprise builds. So I think it's kind of yes to everything, as well as status quo, because it's such a big space; it's not going to change. But I think we've got to watch the margins. I think stuff is going to shift really weirdly in ways that we

weren't expecting. >> I agree. And I think that's a great place to leave it, Paul. Fantastic conversation. It was really great to get to chat with you. >> Yeah, let's hang out, Dan. >> I would love to do that. If people are looking for you, where can they find you on the internet? >> They should check out our website, aboard.com. We have a really, really nice... think of it as a super-pro vibe coding platform that lets you build stuff, but we build it with you. We


don't just give you a tool. We we make sure that like we have good product managers. We call them solution engineers who listen and they will help you out. So that's enough shilling. Um you can send me an email paul.for.com. You can find me on LinkedIn. You can find me on Blue Sky. I'm off of Twitter. All the regular places. I'm pretty easy to find. >> Awesome. Thanks Paul. >> Yep. Anything you need, let me know.

Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI & I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show. It's a journey into the future with Dan

Shipper as the captain of the spaceship. So, do yourself a favor, hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say, Dan, I'm absolutely hopelessly in love with you.

