
A Practical Guide to Scaling AI

By The AI Daily Brief: Artificial Intelligence News

Summary

Key Takeaways

  • Shift from Tools to Systems: AI moves differently. Its capabilities evolve in weeks, not quarters, and its impact reaches every part of the organization. Success depends less on a single tool's performance and more on how quickly teams can learn, adapt, and apply AI to solve problems. [03:36], [04:19]
  • New Features Every 3 Days: Across ChatGPT and the API, there had been a new feature released approximately every 3 days this year. This creates an incredible organizational burden for companies trying to adopt all of that new capacity. [05:51], [06:08]
  • Innovation from Any Role: Innovation can come from any team. A marketing analyst who automates reporting can find use cases that scale across the whole company, with no seniority-level prerequisite for figuring out how to use AI better. [06:46], [07:17]
  • 62% Still in Early Stages: McKinsey found 32% still experimenting and another 30% in the piloting stages, meaning just 38% total were scaling or fully scaled. If you head into 2026 in those stages, you have to treat yourself as officially behind. [01:12], [02:08]
  • Four-Part Scaling Framework: Setting foundations with executive alignment, governance, and data access; creating AI fluency with champion networks; scoping ideas with a repeatable intake and prioritization process; building products iteratively for safe deployment. [07:57], [08:30]
  • Design for Reuse Early: As you prioritize, look for recurring patterns, code, orchestration flows, or data assets that can support multiple use cases. Designing with reuse in mind compounds speed, lowers costs, and creates a technical memory. [18:49], [19:19]

Topics Covered

  • Shift from AI Tools to Systems
  • AI Features Released Every 3 Days
  • Innovation from Any Employee
  • Foundations Persist Throughout Scaling
  • Prioritize High-Value High-Effort Ideas

Full Transcript

Welcome back to The AI Daily Brief. Today we are talking about some practical, useful frameworks for scaling AI. In other words, moving beyond the pilot stage. And the specific frame of reference and framework that we're going to be using comes from a new guide from OpenAI called From Experiments to Deployments: A Practical Path to Scaling AI. But even outside of this particular document, it is very clear to me that a huge theme heading into next year is going to be this idea of whole-org transformation and post-pilot, post-experimentation-phase artificial intelligence inside the enterprise.

If you look at literally any study of AI adoption and impact, the story is pretty clear: massive and increasing adoption and usage, initial extremely promising ROI and impact, but some real barriers to converting individual value to whole-org value. Indeed, the very first set of charts in McKinsey's State of AI report shows this really crisply. On the one hand, the percentage of organizations that are using AI in at least one function continues to reach new highs, and increasingly it is spread across multiple functions. But a huge number of organizations remain in really early stages. McKinsey found 32% still experimenting and another 30% in the piloting stages, meaning just 38% total were scaling or fully scaled, and only 7% of those were in that fully scaled phase.

Now, to be honest, the 7% in fully scaled I don't think is all that bad. When you have a technology that is going to impact every single part of the organization, I would actually be surprised if those 7% really are fully scaled. There's just so much to do to get there. The more concerning piece, especially if you are in that cohort, is the 62% that are still in those really early stages. I genuinely think that if you head into 2026 in those stages, you have to treat yourself as officially behind.

One thing we've recently done at Superintelligent: as part of our agent readiness audits, we were very frequently recommending some version of quick-win pilots as part of the initial things to do, especially for organizations that found themselves in the explorer stage, which is very early in their AI journey. I've now basically hard blocked and demanded the removal of any mention of quick-win pilots. I just think that if we're talking in that sort of language and acting like a couple of quick wins is an okay place to be, it's doing our customers a disservice. This does not mean that I think organizations have to have everything wired right now. I think it's fine to build momentum with pilots that show value quickly, but the frame of reference and the overall vision for this has to be systemic. I think organizations need to be thinking comprehensively and systematically, or else they risk falling farther and farther behind.

Which brings us to this new guide from OpenAI: From Experiments to Deployments, A Practical Path to Scaling AI. Now, the folks over at OpenAI have been producing this sort of resource more regularly. And I think what's valuable about this is not just that it's a sort of from-the-horse's-mouth kind of document, although that is useful. It's also valuable because it reflects not just their best insights, but the aggregated wisdom that comes from their boatload of enterprise relationships.

Kicking us off, they reiterate this problem that we've been talking about for the last couple of minutes: that there is increasingly a divide between laggards and leaders. And while some get stuck in pilots, others are weaving AI into daily operations and customer products. Now, they don't put it quite this crisply, but to me, at core, there are four big mental shifts that OpenAI is suggesting as a basis for all of this work.

The first is a shift from thinking about tools to thinking about systems. In their introduction section, they write: for years, companies have focused on validating whether software was fit for purpose. The approach was simple: start small, test a specific use case, and scale once results are proven. This worked when technology evolved slowly and served a single department at a time. AI moves differently. Its capabilities evolve in weeks, not quarters, and its impact reaches every part of the organization. Success depends less on a single tool's performance and more on how quickly teams can learn, adapt, and apply AI to solve the problems in front of them. These shifts demand a new operating rhythm that balances speed with structure and evolves as fast as the technology itself.

I actually think that breaking out of the tools-based view of software is way more challenging than many organizations are realizing. When you think about the entire structure of information around technology and innovation, so much of it is anchored to this old tool-based world. The biggest enterprise research and innovation company in the world is Gartner: $6 billion a year in revenue, $20 billion in market cap. And their most popular tool is their Magic Quadrant. The Magic Quadrant is, of course, all about helping enterprises pick which tools. They divide things into a quadrant of categories (challengers, leaders, niche players, and visionaries) and plot companies on those axes. The problem with this, of course, is that when it comes to AI, the difference in your organizational success will have almost nothing to do with whether you choose OpenAI or Microsoft Copilot or Google Gemini. Sorry to all my friends who work at those companies. That's not to say that different models won't be better or worse at different purposes, but when it comes to whether an enterprise gets the most out of AI, it will absolutely come down to how good the systems they put around AI are. And so this entire tool-based frame of reference kind of needs to get booted out the window. So that's the first big mental shift.

The second big mental shift is just thinking at a new velocity. Part of the reason that we can't get stuck in the tool-based way of thinking is that the tools themselves don't stay in one place for very long. In one of the more remarkable charts in this entire document, they went back and looked across ChatGPT and the API and found that there had been a new feature released approximately every three days this year. That is absolutely insane. And it also creates an incredible organizational burden for companies that are trying to adopt all of that new capacity. It has been clear for some time that the capability set of AI tools vastly outstrips the ability of business users to put them into practice. And I don't see that gap doing anything but expanding.

The third big mental shift has to do with leadership and innovation. And there are really two parts to this. The first is that because AI is cross-cutting, innovation that happens in one team can actually be relevant for another team in a way that was not the case before. When your sales team was using specialized data enrichment software to help with its sales prospecting, that wasn't necessarily going to help marketing. Now, however, there are certain types of prompts and use cases that sales could discover that would be useful for marketing. As OpenAI puts it, innovation can come from any team. A marketing analyst, they write, who automates reporting can find use cases that scale across the whole company. And that gets at the second part of the shift. Solutions from anywhere doesn't just mean from any team; it also means from any type of employee. There is no seniority level that is a prerequisite for figuring out how to use AI better. In fact, one of the things that we talk about very frequently on this show is that it's still early enough that there are basically no experts, just people who have more time on task and more reps with these tools.

All that comes together for OpenAI to present a vision of compounding ROI. And I think this is really valuable. It's very easy to get stuck in thinking about different types of impact or different types of ROI as disconnected from one another. In other words, this AI use case is a time-saver, this use case is a cost-saver, and that really exciting one, that's a new revenue generator. OpenAI is suggesting that instead we think about these things as cumulative and linked and ultimately compounding. Okay, so four big mental shifts: from tools to systems, speed of change, solutions from anywhere, and compounding ROI.

So what, overall, is OpenAI's framework for creating a repeatable system for scaling AI? Four parts. The first is setting the foundations: establishing executive alignment, governance, and data access. The second is creating AI fluency: building literacy, champion networks, and sharing learnings across teams. The third is scoping and prioritization: capturing and prioritizing ideas through a repeatable intake process focused on business impact. Fourth and finally, building and scaling products: combining orchestration, measurement, and feedback loops to deliver safely and efficiently. You can see they put it in this cumulative and repeating cycle where, from those foundations, you layer AI fluency, scoping and prioritization, and then building and scaling products, and have that iteration cycle throughout. So let's talk about foundations first.

Within each of these categories, OpenAI gives a set of steps, almost like a recipe for what an organization could do to start to think in these more systematic terms. So, for example, in the context of the foundations step, step one they have is assessing your maturity. Step two is bringing executives into AI early. Step three is strengthening access to data. Step four is designing governance for motion. Step five is setting clear goals and incentives.

Now, a lot of our work at Superintelligent is, of course, in and around these zero-to-one moments, and so a lot of it is resonant. That maturity assessment is an incredibly important step, because usually what you're going to find is that an organization's readiness for AI and agents is very jagged. There are certain pockets of the organization that are ahead and optimized for exactly the sort of iterative adoption that is required, whereas other parts of the organization, not necessarily the ones that you would think, might lag for reasons that aren't just technical. But the point, of course, is that you have to know where you stand before you can build a program to move the whole organization together.

On the idea of bringing executives into AI early: it is absolutely true. But one important note that builds on what we've seen is that this needs to be a two-way buy-in recruitment. Yes, executives need to be brought into AI early and be seen to be using these tools and changing how they work because of them, but they also need to have a ground-level view and a pulse on what employees are thinking. I think that when ChatGPT first came out and enterprise adoption first started, people might have assumed that it would be executive buy-in that was the blocker. But in fact, it has often been the opposite, where executives get super excited and actually kind of exhaust their employees by pouring too many new things on them at once. Both are important. You just have to have a bidirectional conversation about buy-in from all different parts of the organization.

On the idea of governance: you might remember that when I did some research and analysis across the thousands and thousands of interviews we've conducted, one really interesting stat that stood out was that organizations that had robust, articulated governance programs around AI scored on average 6.6 points higher on our 100-point agent readiness scale. It was the single biggest differentiating factor in terms of how much impact it had if that governance program was there versus if it wasn't. They suggest creating a cross-functional center of excellence, and that's a pattern we see a lot.

A last note from foundations is around the data. They write: reliable data and tools underpin every AI initiative. Start with low-sensitivity data sets to move quickly while improving quality and governance in parallel. And again, this is just a framework, but this, I think, actually reveals how much easier it is to say this stuff versus actually do it.

In fact, when I was thinking about the foundations piece, I think it might be valuable to think about foundations not as a thing that you do on day zero before the other phases. They basically have foundations as day zero, AI fluency as day 30, scope and prioritize as day 60, and build and scale as day 90. And I know that's just a demonstration example to get people thinking in relative terms, but I might actually put foundations as an ongoing process that happens throughout and around all the other parts of this iterative framework. Think about just three categories of these foundations: the leadership team alignment that I was talking about; governance that can evolve as the technology evolves, which is extremely important and very different from some other types of governance structures that we've dealt with in the past; and what I'm calling, loosely, data improvement, which means continual improvement of the quality of the data and the readiness of the data, as well as the access and provisioning of the data, which is no mean feat in and of itself. These are things that are not ultimately going to get done once and just be done. They are instead ongoing processes that will continue to shape the relative success or failure of AI initiatives throughout the life cycle of those initiatives. So of course we had AI's help to modify the visual slightly to show foundations as a process that happens throughout and around the rest of the work.

Let's move on to the next phase, though: creating AI fluency. Once again with this unnamed dialectic between leaders and laggards, they write that many organizations roll out AI tools before building skills and adoption, and experimentation stalls. The companies progressing fastest treat AI as a discipline that must be learned, reinforced, and rewarded.

So what are their suggestions? The first step they suggest is scaling learning foundations and then tailoring them by role. So, for example, make sure that there's some common basis of understanding around prompting and perhaps other everyday use cases before starting to hone in function by function. Next step: create rituals that sustain learning, basically making this an organizational habit, not a short-term initiative. Step three: building champion networks and subject matter experts. This is basically the human side of that center of excellence, where you create a mechanism for people who are getting out ahead of their peers to share what they are learning back into the system. Relatedly, step four: recognizing and rewarding experimentation. Basically, OpenAI is suggesting to make success highly visible, highlighting the teams, the individuals, the groups that can connect AI usage to results.

A couple things I want to double-click on here. This idea of champion networks is one of the more ubiquitous ideas that we see, but also one of the most powerful. Again, going back to this idea of no experts yet, just people with more practice: one of the things that a champions network does is it rightly recognizes that to get really good at AI, you have to get good at AI in context. And it doesn't so much matter what any external experts say about how to use AI if it doesn't translate to your specific organizational environment. For that reason, it is incredibly valuable when you start to have people in different roles within your organization who have done the work of translating the general lessons to the specific organizational context and who can then share that with the rest of the organization. Every single enterprise, no matter how far behind you are, has these champions internally who are just waiting to be recognized and organized, and they are an incredibly powerful resource.

On step four, this idea of rewarding experimentation: I very much agree. But I think that if we are thinking systematically, it needs to go a step further. It shouldn't just be recognizing and rewarding experimentation. There needs to be a mechanism for distributing new best practices, use cases, great prompts, etc. across the organization. If you look at most new tools, most new technologies, it's a small handful of people that figure out all the use cases and best practices, and then the rest of us just copy them. Organizations should be set up to have a similar sort of distribution mechanism. That can just be your Teams or your Slack account, but it still needs to be intentionally designed. In other words, you don't just want to recognize and reward experimentation. You want that experimentation to filter into the rest of the system as well.
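
As one illustration of what an intentionally designed distribution mechanism could look like, beyond just posting things in Teams or Slack, here is a minimal sketch of a shared prompt-and-use-case registry that champions could add to and anyone could search. To be clear, this is an assumption for illustration, not something from the OpenAI guide; the file name, fields, and functions are all hypothetical.

```python
import json
from pathlib import Path

# Hypothetical shared registry: a single JSON file of prompts and use cases
# that champions add to and anyone in the organization can search.
REGISTRY = Path("prompt_registry.json")

def load_entries() -> list[dict]:
    return json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []

def add_entry(title: str, team: str, prompt: str, tags: list[str]) -> None:
    """A champion records a working prompt or use case so other teams can reuse it."""
    entries = load_entries()
    entries.append({"title": title, "team": team, "prompt": prompt, "tags": tags})
    REGISTRY.write_text(json.dumps(entries, indent=2))

def search(keyword: str) -> list[dict]:
    """Anyone can look up what other teams have already figured out."""
    kw = keyword.lower()
    return [e for e in load_entries()
            if kw in e["title"].lower() or any(kw in t.lower() for t in e["tags"])]

# Example: a sales analyst shares a reporting prompt; marketing finds it later.
add_entry("Weekly pipeline report draft", team="sales",
          prompt="Summarize this CRM export into a one-page pipeline update.",
          tags=["reporting", "summarization"])
print(search("reporting"))
```

The specific tooling matters far less than the fact that there is one agreed place where working prompts and use cases accumulate and can be found.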

One last note, which is sort of captured in "create rituals that sustain learning" but I think maybe deserves its own highlight, is that in addition to, quote, creating consistent spaces for teams to test ideas, share outcomes, and learn from peers, all of which is great, the even simpler part of this is that you have to create official, formal time allocated away from normal work and toward learning these tools. The key paradox across the more than a thousand interviews that we did in the middle of this year was that people were too busy to learn the thing that saves them time. And especially as we move to a world where there are more and more AI usage mandates, those mandates have to come with formal space carved out of other types of work to do the AI learning.

Okay. Now with phase three and phase four, we move away from just general AI usage to some of the more advanced opportunities in this framework of compounding ROI: not just employee productivity or even organizational efficiency, but actually translating into the ROI that ultimately matters most for the long-term success of the organization, which is revenue generation and new revenue. In many ways, this is the part where OpenAI is trying to provide a framework for organizations that maybe haven't made the leap from those foundational and AI fluency stages to really developing new products and opportunities with AI deeply integrated and embedded.

So, phase three they call scope and prioritize. And the objective, they write, is to create a clear, repeatable system for capturing, evaluating, and prioritizing opportunities across the organization. This one is simple to write but does require work to do well. Step one, they suggest, is to create open channels for idea intake. And this very much hearkens back to the idea that innovation can come from anywhere in the era of AI. Basically, rather than people having to work through the traditional channels, anyone should be encouraged to submit ideas, be it for use cases or new products or whatever, in a formal way. From there, they suggest hosting discovery sessions that can turn some of those ideas into prototypes. They write that these sessions act as both filters and accelerators: the strongest ideas advance to proof of concept, while others feed insights back into the backlog to guide future work.

And as the scoping happens, they actually share their own little magic quadrant for how to prioritize different ideas. On the x-axis, they have low effort to high effort, in terms of how much lift it takes to actually get a thing done. And on the y-axis, low value to high value. So, for example, a low-value but high-effort idea is one that you're likely to want to deprioritize. Basically, the juice isn't worth the squeeze in that case. A low-value but low-effort idea might be more in the realm of self-service. I think actually a lot of our day-to-day time-saving use cases fit in that bucket: meeting note summarization, email assistance, things like that. Remember, low value doesn't mean no value or that you should ignore it. It just means that, when it comes to the organization as a whole, it's comparatively lower than something that, for example, is going to generate new revenue. That's why they designated it self-service. Now, high value, low effort, that's just kind of a no-brainer, right? If you can do something with little effort that creates high value for the organization, you should just do it. And to the extent that you can find those ideas, they're a really good thing to run with. Maybe the most interesting category from an organizational planning perspective is the high-value, high-effort quadrant, where something really good could come out of it, but it's going to take a lot of work to get there. And unfortunately, I think for enterprises, a lot of the best use cases, in fact the vast majority of the best use cases, are going to be in that bucket. And that's why organizations need to scope and prioritize them, because you can only do so many of those high-value, high-effort initiatives.
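
To make that two-by-two a bit more concrete, here is a minimal sketch, in Python, of how an intake process might score submitted ideas against the value/effort quadrant. The scales, the threshold, and the `Idea` structure are illustrative assumptions rather than anything prescribed in OpenAI's guide.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    value: int   # estimated business value, 1-10 (illustrative scale)
    effort: int  # estimated lift to deliver, 1-10 (illustrative scale)

def quadrant(idea: Idea, threshold: int = 5) -> str:
    """Map an idea onto the value/effort quadrant described above."""
    high_value = idea.value >= threshold
    high_effort = idea.effort >= threshold
    if high_value and not high_effort:
        return "no-brainer: just do it"
    if high_value and high_effort:
        return "prioritize: scope carefully, you only have so many slots"
    if not high_value and not high_effort:
        return "self-service: let teams pick it up themselves"
    return "deprioritize: the juice isn't worth the squeeze"

backlog = [
    Idea("Meeting note summarization", value=3, effort=2),
    Idea("AI-assisted customer onboarding product", value=9, effort=8),
    Idea("Automated contract clause review", value=8, effort=4),
]

for idea in sorted(backlog, key=lambda i: i.value - i.effort, reverse=True):
    print(f"{idea.name}: {quadrant(idea)}")
```

The scoring math here is deliberately naive; the point is simply that a shared, explicit rubric is what makes intake and prioritization repeatable rather than ad hoc.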

Now, one last note that I think is a really valuable callout: they suggest, as you are doing this as an organization, designing for reuse from the very beginning. As you prioritize, look for recurring patterns, code, orchestration flows, or data assets that can support multiple use cases. Designing with reuse in mind compounds speed, lowers costs, and creates a technical memory that turns each project into a launchpad for the next. Basically, once again, don't view these efforts in isolation. View them as part of a larger system and see what can be reused from each process to make the next thing move a little bit faster or work a little bit better.
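
As a small, hypothetical illustration of what designing for reuse can look like in practice, here is one shared summarization helper that two different use cases call, instead of each team wiring up its own. The function names and the `call_model` stub are assumptions for the example, not anything specified in the guide.

```python
# A reusable building block shared across use cases, rather than each team
# re-implementing its own version.

def call_model(prompt: str) -> str:
    # Placeholder: wire this to whatever model provider your organization uses.
    raise NotImplementedError

def summarize(text: str, audience: str) -> str:
    """Shared helper: one prompt pattern, reused by multiple workflows."""
    prompt = f"Summarize the following for a {audience} audience:\n{text}"
    return call_model(prompt)

# Use case 1: support ticket digests for the support organization.
def ticket_digest(tickets: list[str]) -> str:
    return summarize("\n".join(tickets), audience="support engineering")

# Use case 2: weekly customer-feedback rollup for marketing.
def feedback_rollup(comments: list[str]) -> str:
    return summarize("\n".join(comments), audience="marketing")
```

Every use case that calls the shared helper also inherits any improvement later made to it, which is the "technical memory" idea in miniature.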

Which brings us to phase four: building and scaling products. The objective, they suggest, is to develop a consistent, reliable method for turning new ideas and use cases into internal and external products. And the watchword of this whole section is iteration. Building with AI, they write, is uniquely powerful because AI systems can learn and adapt rather than relying on fixed logic. AI products improve through repeated iterations of the project itself. Each new version is assessed on how it responds to real data and context and whether it is reliable and cost-effective. As teams run evaluations, integrate new information, and adjust system prompts or workflows, these refinements strengthen the final product.
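
To make "run evaluations and adjust system prompts" a bit more concrete, here is a minimal, hypothetical sketch of that loop: a small fixed eval set, a scoring function, and a rule for only adopting a new prompt when it measures at least as well. The `call_model` function is a stand-in for whatever model API you actually use; none of this code comes from OpenAI's guide itself.

```python
# Minimal evaluation-loop sketch: score each system-prompt candidate against a
# small fixed eval set, and only keep a change if it holds or improves the score.

def call_model(system_prompt: str, user_input: str) -> str:
    # Placeholder: wire this to your model provider of choice.
    raise NotImplementedError

EVAL_SET = [
    # (input, substring the answer should contain) -- a deliberately toy check;
    # real evaluations would use richer grading criteria.
    ("Summarize: Q3 revenue grew 12% on strong enterprise demand.", "12%"),
    ("Summarize: Support tickets fell after the onboarding revamp.", "onboarding"),
]

def score(system_prompt: str) -> float:
    """Fraction of eval cases whose output passes the toy check."""
    passed = sum(
        must_contain.lower() in call_model(system_prompt, user_input).lower()
        for user_input, must_contain in EVAL_SET
    )
    return passed / len(EVAL_SET)

def adopt_if_better(current_prompt: str, candidate_prompt: str) -> str:
    """One iteration step: only ship the new prompt if it scores at least as well."""
    return candidate_prompt if score(candidate_prompt) >= score(current_prompt) else current_prompt
```

The eval set and the grading here are toys; the design point is simply that each prompt or workflow change is measured against the same fixed cases before it ships.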

So their set of tips for this includes, one, building the right teams. And really, here they're talking about combining technical and other types of talent. Pair engineers, they suggest, with subject matter experts who define success, data leads who ensure access to the right information, and an executive sponsor to remove blockers. Basically, if these things are systemic and cross-organizational, get all the types of people that you need rather than developing them in isolation. Step two: unblock the path. They argue that most slowdowns stem from access and approvals. And certainly it is the case that the biggest constraint on AI's impact in the organization is organizational inertia. This, by the way, is also why I think governance should be viewed as something that is constantly iterated as well. In fact, it will very often be when you are getting close to this actual building and scaling products phase that you figure out where your governance is insufficient and have to update it. Step three is basically taking a build path that is iterative by design: build incrementally and measure as you go. This is a little bit more native for small companies and startups, but can be really hard for big, older organizations that have very ingrained processes that don't move at the pace of AI. I think there is a lot of work in unlearning some of those systems, but that's ultimately what they're suggesting.

So that is OpenAI's practical path to scaling AI. I think the way to think about this is not as some gospel framework that you have to follow exactly, but much more like a cookbook that has a bunch of recipes that, if you prepared all of them in some combination, in some sequence, would probably add up to a pretty kick-butt feast. The metaphors are getting a little tortured here, but you get what I'm saying. As we head into 2026, the key thing, I think more than anything else, is to think systematically and systemically. Getting the most out of AI is going to be a whole-org effort. These things can't be done in isolation. And so whether it's this framework or another one that you've developed yourself or have from someone else, if you are thinking systemically, I think you're going to be ahead.

Anyways friends, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.
