
Can Today's AI Replace 12% of Work?

By The AI Daily Brief: Artificial Intelligence News

Summary

Topics Covered

  • AI Agents Reshape Coordination
  • Headlines Misread Skill Overlap as Job Loss
  • Jobs Shift as Skills Automate
  • Models Accelerate Exponentially
  • Anthropic Engineers Delegate 60%

Full Transcript

Welcome back to The AI Daily Brief. Ah, my friends, we are once again talking about an MIT study which the headlines seem determined to get wrong. But in this case at least, the study itself is actually much more interesting. And the reason that there is a lot of noise around it is that it is hitting on one of the central questions of the moment, which is trying to understand just how much work AI can actually replace right now and, perhaps more importantly, what sort of trajectory it is on.

One of the things that we've been tracking here on the show is the increasing political acrimony around AI. This is coming from both the right and the left, and is, I think, a prelude to jockeying for position ahead of next year's midterm elections in the United States. So we're going to look at two different pieces of evidence around this question today of how much work AI can actually replace.

One is this project out of MIT called Project Iceberg, which is generating some number of scary headlines like this one from CNBC: "MIT study finds AI can already replace 11.7% of the US workforce." But then we're also going to look at the direct testimony from Anthropic in their recently released blog post, "How AI is transforming work at Anthropic." So let's talk about this study first.

In explaining itself, Project Iceberg writes, "Current AI research has focused on individual agent capabilities, building models that can read, write, reason, and create. But what happens when they interact? When millions of AI agents interact with each other and with humans in the same environment, collective behavior is shaped less by individual capabilities and more by the coordination protocols between them. Project Iceberg explores this algorithmic frontier, designing and testing coordination mechanisms for human-AI populations at scale." They basically want to understand how the hybrid workforce is going to evolve and impact the way that we do work.

Now, the specific context for all of this reporting is their recently released Iceberg Index, which is a measure of skill-centered exposure in the AI economy. And that word skill-centered is going to become important, as we'll see. The goal of the Iceberg Index is to provide a better picture of automation capability that is forward-looking rather than backwards-looking. In other words, as they point out, traditional workforce metrics only measure employment outcomes after a particular disruption has occurred. They do not, as Iceberg puts it, show where AI capabilities overlap with human skills before adoption crystallizes. So what Project Iceberg did is use what they call a large population model to, quote, simulate the human-AI labor market, representing 151 million workers as autonomous agents, executing over 32,000 skills across 3,000 counties, and interacting with thousands of AI tools.

The Iceberg Index is a skill-centered metric that measures the wage value of skills AI systems can perform within each occupation. What they found is that right now, visible exposure is concentrated around software-related work such as software development and data science. This represents around 2.2% of wage-earning skills and is basically the part of the iceberg that they say is above the surface. However, beneath the surface, they find that current AI can automate about 11.7% of current wage-earning skills, and that this "hidden cognitive automation," their phrase, expands the visible tech adoption around software work to cognitive work in areas such as finance, HR, and customer support. So, that 11.7% is where the number from these headlines comes from. Going back to CNBC again, the headline reads, "MIT study finds AI can already replace 11.7% of the US workforce," representing as much as $1.2 trillion in wages across areas including finance, healthcare, and professional services.

Now, Project Iceberg itself goes out of its way to make clear that this is not a measure of potential job loss or employment displacement. The very first question in their frequently asked questions says the index measures where AI systems overlap with the skills used in each occupation. A score reflects the share of wage value linked to skills where current AI systems show technical capability. For example, a score of 12% means AI overlaps with skills representing 12% of that occupation's wage value, not 12% of jobs. This reflects skills overlap, not job displacement.
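To make that arithmetic concrete, here is a minimal sketch of how a skill-centered score of this kind could be computed for a single occupation. The occupation, skills, wage values, and capability flags are all invented for illustration; this is not the Iceberg Index's actual data or methodology.

```python
# Hypothetical illustration of a skill-centered exposure score.
# Every number and skill name below is made up for the example;
# none of it comes from the Iceberg Index itself.

def exposure_score(skills):
    """Share of an occupation's wage value tied to skills AI can perform."""
    total_value = sum(value for _, value, _ in skills)
    ai_value = sum(value for _, value, ai_capable in skills if ai_capable)
    return ai_value / total_value

# (skill, wage value attributed to that skill, can current AI perform it?)
hypothetical_analyst = [
    ("draft routine reports",      18_000, True),
    ("reconcile transaction data", 12_000, True),
    ("negotiate with vendors",     45_000, False),
    ("advise clients in person",   95_000, False),
    ("build forecasting models",   80_000, False),
]

score = exposure_score(hypothetical_analyst)
print(f"Exposure: {score:.0%} of wage value")  # 12% of wage value, not 12% of jobs
```

In this toy example, a 12% score says only that AI overlaps with skills worth about 12% of the role's wages; it says nothing about whether the role itself goes away.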

The second entry in their FAQ asks, does the index predict job loss or displacement? No. The index reports technical skill overlap with AI. It does not estimate job loss, workforce reductions, adoption timelines, or net employment effects. They reiterate this in the abstract of the paper as well: the index captures technical exposure, not displacement outcomes or adoption timelines. And despite CNBC writing in their article that the index is not a prediction engine about exactly when or where jobs will be lost, they still use this headline, which they know is incorrect.

So there are two things going on here.

One is the important observation that just because a thing can be automated doesn't mean that it will be automated. There's an entire set of social structures and human and organizational inertia which can significantly slow down the adoption of any automation technology. But two, there is not a one-to-one correlation between a wage-earning skill and a job. In other words, jobs are collections of skills, not the instantiation of a single skill. I used Gemini to create a graphic to try to visualize this. What the Iceberg Index is saying is not that 12% of jobs are going to be eliminated; it's that 12% of the tasks within all jobs could be automated right now by current AI. The critical difference here is that part of the market adaptation that's going to happen is that which skills constitute any given role or job is inevitably going to change. If you view your job as a bucket of skills, some of which can be automated only with difficulty or with more advanced AI and some of which can't be automated at all, there is likely to be a reallocation of time and effort towards the skills that can't be automated and away from the skills that can.
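To put a toy number on that bucket-of-skills framing, here is a small sketch (jobs, skills, and weights invented for illustration, not drawn from the MIT data) in which skill-level exposure averages out to roughly the headline figure even though not a single job in the set could be fully automated:

```python
# Toy illustration: skill-level exposure does not translate into whole jobs.
# Each job is a bucket of (wage share, ai_capable) skills; all numbers invented.

jobs = {
    "recruiter":        [(0.10, True), (0.10, True), (0.80, False)],
    "paralegal":        [(0.15, True), (0.25, False), (0.60, False)],
    "support engineer": [(0.10, True), (0.50, False), (0.40, False)],
    "account manager":  [(0.03, True), (0.47, False), (0.50, False)],
}

# Per-job exposure: share of wages tied to skills current AI can perform.
exposure = {
    job: sum(share for share, ai_capable in skills if ai_capable)
    for job, skills in jobs.items()
}

average_exposure = sum(exposure.values()) / len(exposure)
fully_automatable = [job for job, skills in jobs.items()
                     if all(ai_capable for _, ai_capable in skills)]

print(f"Average wage-value exposure: {average_exposure:.0%}")               # ~12%
print(f"Jobs where every skill is automatable: {len(fully_automatable)}")   # 0
```

The average exposure lands around 12%, but the number of jobs in the set that could simply be handed to AI wholesale is zero; that gap is exactly the difference the headlines are collapsing.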

Now, that does not mean, of course, that there will be no job displacement from task-level and skill-level automation. For example, there are some jobs that are highly concentrated around a single, highly automatable skill. There are also jobs that, although they involve a bunch of different skills, are collections of skills that are all highly automatable. Those jobs are obviously highly exposed, even if we appropriately recognize that this study is talking about skills and not jobs. Also, it should be noted that if some meaningful portion of a job's skills can be automated, even if those roles don't go away automatically, it is possible that with the time won back from people handing the automatable parts over to automation, there are fewer of those roles in aggregate, because the people who have been freed up for higher-value tasks can do more of them themselves and don't need as much redundancy in the workforce. In other words, there could still be significant and meaningful employment displacement even in the context of appropriately understanding what studies like this are saying. It's just not the hysterical headline of 12% of jobs eliminated right away. And of course, none of this takes into account the fact that new skills are being enabled and that new roles will come online as well. One of the challenges with any new technology is that we see the destruction in creative destruction before the creation.

But what about some practical evidence from reality? I want to turn to this post from Anthropic about how AI is transforming work inside that company. And I want to kick it off with comments from CEO Dario Amodei at the DealBook Summit on Wednesday, December 3rd.

There's just an exponential, just like we had an exponential with Moore's Law, chips getting faster and faster until they could, you know, do any simple calculation faster than any human. I think the models are just going to get more and more capable at everything. Every few months we release a new model. It gets better at coding. It gets better at science. You know, now models are routinely winning, you know, high school math olympiads and are moving on to college math olympiads. They're starting to do new mathematics. For the first time, I've had internal people at Anthropic say, "I don't write any code anymore. I don't open up an editor and write code. I just let Claude Code write the first draft and all I do is edit it." We had never reached that point before. And the drumbeat is just going to continue, and I don't think there's any privileged point; there's no point at which the models start to do something different. What we're going to see in the future is just like what we've seen in the past, except more so. The models are just going to get more and more intellectually capable, and, you know, the revenue is going to keep adding zeros.

So let's talk a little bit more about what they're finding around AI's impact on work inside their company. This is, of course, part of Anthropic's broader attempt to understand AI's impact on the economy, which they call their Economic Index. The Economic Index looks both inside and outside and publishes regular research on markets, jobs, and the economy. This particular study comes from a survey of 132 Anthropic engineers and researchers that was conducted in August of this year. It also involved 53 in-depth qualitative interviews, as well as looking at Claude Code usage data.

The TL;DR, they say, is that AI is radically changing the nature of work for software developers, generating both hope and concern. Engineers, they say, are getting a lot more done, becoming more full-stack, accelerating their learning and iteration speed, and tackling previously neglected tasks. So much so that it's actually bringing up questions of whether they will lose deeper technical competence or become less able to supervise the outputs.

So, some of their key findings. Their engineers and researchers use Claude Code most for fixing code errors and learning about the codebase. In other words, despite Dario talking about how some folks are completely turning it over to let Claude Code write the code for them, that doesn't seem to be the norm just quite yet. Anthropic team members are definitely using Claude more and seeing more benefits. Employees, they say, self-report using Claude in 60% of their work and achieving a 50% productivity boost, which is a 2 to 3x increase from a year ago. The productivity increase is a little bit about spending less time on things and even more about an increase in output volume. 27% of the work done with Claude consists of tasks that wouldn't be done otherwise, and most employees say that they can fully delegate between 0 and 20% of their work to Claude at this stage.

Now, on the qualitative side, part of why the delegation is increasing is that they find that employees are, in their words, developing intuitions for AI delegation.

They write that engineers tend to delegate tasks that are easily verifiable, where they can relatively easily spot-check correctness, and many describe a trust progression, starting with simple tasks and gradually delegating more complex work. They find that Claude is handling increasingly complex tasks more autonomously. By their measure, compared to six months ago, the complexity of the tasks tackled with Claude Code has increased, the number of consecutive tool calls Claude Code can make has more than doubled, and the amount of human input needed to accomplish a given task has decreased significantly.

The impacts are profound enough that it's causing a lot of questions internally around how it all shakes out. For example, they find that skill sets are broadening into more areas, but people are also worried about the atrophy of deeper skill sets. There is career evolution and uncertainty, changing perceptions of how people relate to their work, and maybe even workplace social dynamics changing as people turn to Claude first rather than going to colleagues.

To me, one of the things that I think is going to happen over the course of the next 12 months, and is going to be a hallmark of 2026, is that on the one hand, we're going to see a lot more academic studies like this one from MIT, but we're also hopefully going to get a lot more of this sort of internally focused study that shows the reality on the ground. The magnitude of the potential disruption here is such that it's extraordinarily hard to predict exactly how it's going to play out in practice. There are so many more factors than just what AI is technically capable of that will determine how it diffuses throughout workplaces and the broader economy. For now, it is really interesting to see these testimonials from the front lines of the companies that are building the technology.

But that is going to do it for today's AI Daily Brief. I appreciate you listening or watching as always, and until next time, peace.
