
99% of Investors Miss The AI Backbone (My Full Map)

By BWB - Business With Brian

Summary

## Key takeaways

- **$7T AI split: 60% compute, 40% facilities**: McKinsey's report breaks that $7 trillion into two main buckets. 60% goes to compute: the servers, GPUs, chips, memory, and storage. 40% goes to the facilities: the buildings, the power systems, the cooling, the real estate. [03:32], [04:11]
- **Facilities layer: steady 10-15% growth**: The facilities layer is steadier. Power systems, cooling equipment, data center real estate. Think more like 10 to 15% annual growth. It's not flashy, but it is essential. And this side wins no matter which chip company comes out on top. [04:31], [04:53]
- **Hyperscalers build their own chips and cooling**: Hyperscalers (think Amazon, Microsoft, and Google) are building data centers entire regions at a time. They're engineering silicon specifically for their own AI workloads: Amazon's Trainium and Graviton, Google's TPUs, Microsoft's Maia. Amazon also rolled out its own in-house liquid cooling system. [06:45], [07:39]
- **Data center power demand jumps 165%**: Goldman Sachs expects data center power demand to jump 165% by 2030, and says US data center construction has tripled in the last 3 years and is still accelerating. [01:35], [03:21]
- **$100 allocation: $45 compute, $30 hyperscalers, $25 facilities**: $45 goes straight into compute, the fastest-growing part of data centers. $30 goes into hyperscalers: Amazon, Microsoft, Google. The last $25 goes into facilities: power systems, cooling, electrical gear, real estate. [13:15], [13:58]
- **$100 grows to $200-$235 in 5 years**: Using realistic growth rates for each group (compute in the low 20% range, hyperscalers around 12 to 13%, facilities around 8 to 10%), that simple $100 would grow into roughly $200 to $235. [13:55], [14:19]

Topics Covered

  • Ignore Nvidia—Bet on Data Center Buildout
  • 60/40 Split: Compute Grows Faster Than Facilities
  • Hyperscalers Control Entire AI Stack
  • Allocate 45% Compute for Maximum Upside

Full Transcript

$7 trillion is about to get poured into one specific area, and most investors aren't even looking at the whole picture. McKinsey states that data center infrastructure will grow between 14 and 23% a year over the next 5 years.

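As a quick sanity check on what that range actually implies, here's a back-of-the-envelope compounding calculation using the video's own numbers (the code itself is just an illustration, not from the video):

```python
# What "14 to 23% a year over the next 5 years" compounds to:
# the market roughly doubles at the low end and nearly triples at the high end.
for rate in (0.14, 0.23):
    multiple = (1 + rate) ** 5
    print(f"{rate:.0%}/yr -> {multiple:.2f}x in 5 years")
```

At 14% a year the buildout is about a 1.93x in five years; at 23% it's about 2.82x.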
This is the biggest physical buildout since probably the railroad boom, and it's happening behind all of the headlines. Of course, everyone is watching Nvidia's stock price, but almost nobody pays attention to the buildings that are going up behind it.

And that's where the real story begins.

People are asking where all of these Nvidia GPUs are actually going. And the answer isn't a mystery. They're all being packed into these massive data centers that are built to run AI models around the clock. But most people never see that part of the industry, so they have no idea why the buildout is exploding.

Even Bill Gates said that AI is the biggest technological shift of his lifetime. And I've seen this firsthand, because I used to work on machine learning models for Amazon's pricing, long before any of this was mainstream. My point is that AI is not speculative, but I do believe that it is completely misunderstood. It's already replacing entire chunks of work, and it's coordinating fleets of robots in logistics and manufacturing.

Now, data centers aren't these little server rooms. These buildings stretch the size of football fields, and some use more electricity in a year than the entire state of Alaska. And inside you've got hundreds of millions, possibly even billions, of dollars' worth of hardware that has to stay powered, cooled, and connected every second of the day, or the whole system just falls apart. Goldman Sachs expects data center power demand to jump 165% by 2030. And if power demand grows that fast, something has to build and fuel the grid behind it.

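That 165% demand figure lines up with the capacity numbers quoted at the end of the video (81 GW growing to 222 GW by 2030). A quick back-of-the-envelope check, using only the figures cited here:

```python
# Cross-checking the video's two power figures for 2030:
# Goldman's +165% demand growth, and the analyst forecast of data center
# capacity jumping from 81 GW to 222 GW over the same period.
demand_multiple = 1 + 1.65      # +165% growth -> 2.65x today's demand
capacity_multiple = 222 / 81    # ~2.74x today's capacity, i.e. about +174%
print(f"demand: {demand_multiple:.2f}x, capacity: {capacity_multiple:.2f}x")
```

The two forecasts imply nearly the same multiple, so supply is being built out roughly in step with projected demand.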
That's why I covered nuclear stocks a few weeks ago, and of course several times over the past 2 years, because AI doesn't run without massive, stable electricity. Now here's where that $7 trillion actually goes, and more importantly, who gets paid from it.

And I'm going to state right now that I created a massive spreadsheet of every public company that I could possibly find that fits within this space, and of course, I will have a link to all of those down in the description. But for this video, I'm going to be breaking out where the money is flowing, several companies that are tied to each of those areas, and how I'm going to be investing broadly into each tier. But first,

here's why the buildout is hitting overdrive right now. People think that AI lives in the cloud, but every single prompt runs on physical hardware within these facilities. Elon's Colossus cluster in Memphis uses around 100,000 Nvidia H100 GPUs for training. Meta is building the same kind of scale for Llama. At this point, these facilities need their own power substations just to stay online. Nvidia's newest AI chips pull up to three times more power than the last generation. And all that energy turns into heat real fast. That's why these facilities need industrial-grade liquid cooling systems, dedicated power infrastructure, backup generators, battery banks, and of course, networking. Goldman Sachs says that US data center construction has tripled in the last 3 years, and it's still accelerating. Once again, AI isn't just about cloud-based software. It includes steel, concrete, electricity, and cooling. And that's where the real money is flowing right now.

Now, before I jump into the breakout, if you're getting any value from my content and my spreadsheets and my free newsletter with my portfolio, then please consider pressing the like button so my content can continue to grow.

McKinsey's report breaks that $7 trillion into two main buckets. 60% goes to compute: the servers, GPUs, chips, memory, and of course, storage. Basically, the machines that are doing all the work. 40% goes to the facilities: the buildings, the power systems, the cooling, the real estate. It's the shell that keeps the whole thing alive. That's roughly $4.2 trillion flowing into hardware and cloud platforms, and another $2.8 trillion into power, cooling, and the physical

footprint that's behind it. Here's why this split matters for us investors. The two layers behave nothing alike. They have different growth, they have different risk, and they have different winners.

The compute layer is the high-growth side: AI chips, cloud platforms, server manufacturers. These are companies that can grow 20, 30, and even 50% a year, because AI demand isn't slowing down. But it is a faster game. Technology moves quickly, and one new chip design can reshuffle market share practically overnight.

The facilities layer, then, is steadier: power systems, cooling equipment, data center real estate. Think more like 10 to 15% annual growth. It's not flashy, but it is essential. And this side wins no matter which chip company comes out on top. But think of it this way: the compute layer is betting on who's going to win the AI race, but the facilities layer is betting that the race happens at all. And someone, of course, has to build the track. That now

leads us to our sponsor, Fundstrat, which was founded by Tom Lee, a trusted Wall Street voice and former chief equity strategist at JP Morgan whom I often refer to in my videos. Tom Lee's FS Insight by Fundstrat is dedicated to democratizing Wall Street research, and their goal is simple: empower self-directed investors like us with the same evidence-based research that Wall Street uses to navigate the market. I know that I personally look forward to their emails every day. We know Tom for his signature evidence-based research and famous calls like the V-shaped recovery after COVID. When you join, the multidisciplinary team sends you real-time market alerts called Flash Insights, daily video updates, and actionable research across equities and crypto. This gives you the clarity to make better, more timely decisions and confidently control your portfolio. If you want access to the same research that banks and hedge funds use, now more than ever is the time to invest in knowledge. And right now is FS Insight's biggest sale of the year. You can choose from macro, crypto, or even their pro package. If you'd like to learn more, please feel free to check it out down in the description below.

McKinsey says that 60% of the money is going into compute and 40% into the facilities. But that doesn't necessarily mean that we have to invest 60/40. If you're like me and your goal is maximum upside, then I would lean heavier into compute, because that's where the fastest growth lives today. That, and it has long-term demand, meaning once a data center is built, the construction companies don't really make any more money from it. When I look at the data, McKinsey's 60/40 split makes a lot of sense from a high level. Compute on one side, facilities on the other.

But of course, there's a third group that's sitting above both layers. And they break every rule within this model.

And of course, these are the hyperscalers. Think Amazon, Microsoft, and Google. These companies don't just buy data center capacity. They're actually building it entire regions at a time. They're negotiating multi-gigawatt power deals before the rest of the market even knows that the demand is coming. In fact, they're even designing their own chips: Amazon's Trainium and Graviton, Google's TPUs, Microsoft's Maia. They're engineering silicon specifically for their own AI workloads, so they're not solely dependent on anyone else. And even now, they're inventing their own cooling systems. Just last week, Amazon rolled out its own in-house liquid cooling system, because the traditional suppliers are backlogged for years and they weren't willing to wait. They built their own system so they could deploy their high-density GPU racks right now, not in 2027. They're also running the clouds where every enterprise AI workload lands today: AWS, Azure, Google Cloud.

And of course, they're not alone. Oracle is accelerating with their AI HPC leasing. Alibaba and Tencent run massive AI regions across Asia. And IBM is carving out their own niche within regulated industries. But once again, those big three still sit in a category all their own. They're the only players that are touching every layer of the stack. And

that's why hyperscalers, in my mind, get their own bucket.

Let's go ahead and jump into the compute layer, because this is where most of the growth is happening. And everything begins with the chips, where Nvidia is still way out in front of everybody else. But AMD has real momentum with their MI300, Broadcom and Marvell are showing up inside almost every major AI system, and Intel is still there, pushing hard to get back into the conversation. And the thing that most people overlook is memory. GPU demand is huge, but memory demand is exploding right alongside it. This translates into Micron, Samsung, and SK Hynix, which are all sold out of high-bandwidth memory for years out. Once again, these systems can't run without massive amounts of memory.

Then we've got the companies that are turning all that silicon into actual racks. Super Micro has been scaling almost faster than anyone else in this area. Then you have Dell and HPE, which anchor the enterprise market. Then there's Lenovo, which is huge across Asia and has a big share of global server shipments. And it's not just the big companies anymore. There's a growing group of GPU cloud providers trying to keep up with all that demand. Think Applied Digital, Okami, DigitalOcean, Iris Energy. They're all building out dedicated AI compute as fast as they can get hardware delivered, and then they're leasing it out to those big players as quickly as they can.

And of course, behind all of this is the semiconductor supply chain. Taiwan Semiconductor manufactures almost every advanced AI chip that's out there. ASML is the choke point for the tools that everyone needs to create these chips. Lam Research, KLA, and Applied Materials handle the rest of the equipment that makes these high-end chips even possible. Then once that hardware hits the racks, the networking becomes critical. Arista leads cloud-scale switching. Cisco drives a lot of that enterprise traffic, and companies like Ciena, Lumentum, and Coherent move data across long distances between buildings, regions, and entire countries. This whole layer is moving extremely fast. It's where most of the revenue growth is happening today, and it's the part of the stack that investors look at when they're aiming for a lot of that upside.

Now, let's go ahead and move into the facilities

layer. This is the part of the system that you never really see, but nothing works without it. We'll start with power and cooling, because that's where most of the physical buildout is happening. Vertiv is tied directly to the rise in AI data centers. Eaton and Schneider Electric handle the electrical distribution and the switchgear. Then Johnson Controls, Trane, and Daikin manage the thermal side. Then Modine and nVent are growing really fast too, as more racks shift to high-density cooling.

Then of course you have the grid itself. AI is pushing power demand higher than the grid was ever designed for. So companies like Siemens, ABB, and Quanta Services are all seeing real tailwinds, where you have Bloom Energy and Cummins helping with on-site generation and backup power when facilities need more stability than the grid could ever provide.

From there, it's the companies that actually own the buildings. Equinix, Digital Realty, and Iron Mountain build and lease these types of shells. They provide the space, the interconnects, and the reliability that lets everyone else plug in and scale. And then you've got the fiber and optical side, the long-haul links between all these data centers. Infinera is a company that handles long-distance optical systems. Then you have Fujitsu and ZTE, which are major suppliers in Asia. And these companies move the data between campuses, regions, and entire countries.

Finally, you have the software layer that keeps the whole environment stable. Think VMware, IBM, Nutanix, and ServiceNow. They handle the orchestration, virtualization, and automation, the stuff that keeps workloads balanced and the hardware running efficiently. Unfortunately, this layer doesn't move quite as fast as compute. But it does scale with every new facility, every new rack, and every new watt that gets pulled onto the grid. And the best part is it's steady, and it benefits no matter which chipset or cloud platform is winning.

So, now that we've laid out all the layers, the hyperscalers, the compute names, and the

facilities, let's talk about how I actually invest in this. Because knowing the players is only step one. It isn't the same as knowing where to put the money to work the hardest. But before I break anything out, here's the simple truth: not every part of this ecosystem grows at the same speed. Like I keep saying, compute keeps moving the fastest. Facilities move a little bit slower, but they are very consistent. And hyperscalers sit right in the middle. They're big, they're steady, and they're essential.

For me, the goal is never just to own everything equally. My goal is to match the growth, the risk, and the timing with what the data is telling us today. And I think that most of us can agree that the data is fairly clear. The money that's flowing into AI and data centers is not being split out evenly. Most of the upside is landing in compute, and most of the stability is coming from facilities. And the hyperscalers, well, they capture pieces of both sides without a lot of volatility. And once again, I want to share that I have a massive spreadsheet down in the description with over a hundred companies that I happen to be tracking in this particular space, where I promise that I'm going to continue to drill down and find the undervalued, highest-return opportunities over time. Today, I've probably mentioned a lot of stocks that you're unfamiliar with, and I'll dive into those in future videos.

But for now, here's how I'd put $100 to work across the data center stack today. $45 goes straight into compute. This is the fastest-growing part of data centers. Think chips, memory, servers, networking, and the demand is still running really hot. $30 then would go into hyperscalers: Amazon, Microsoft, Google. They're building the data centers, they're filling them, and they're running the cloud platforms that sit on top, and they'll generate revenue for the long term. And my last $25 would go into facilities: power systems, cooling, electrical gear, real estate. These companies get paid every time a new data center flips the lights on, regardless of what chips are inside.

Now, what does that $100 look like in five years? Using what I think are realistic growth rates for each of these groups, compute would be growing in the low 20% range, hyperscalers around 12 to 13%, and facilities around 8 to 10%. That simple $100 would then grow into roughly $200 to $235.

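The projection above is just compounding each bucket separately. Here's a minimal sketch of the math, using the low and high ends of the rates quoted here (the code and variable names are illustrative, not from the video):

```python
# Five-year projection for the $45/$30/$25 split, compounding each bucket
# at the growth rates quoted in the video (low and high ends of each range).

def grow(amount, rate, years=5):
    """Compound `amount` at a fixed annual `rate` for `years` years."""
    return amount * (1 + rate) ** years

def total(compute_rate, hyper_rate, fac_rate):
    return (grow(45, compute_rate)    # compute: chips, memory, servers
            + grow(30, hyper_rate)    # hyperscalers: Amazon, Microsoft, Google
            + grow(25, fac_rate))     # facilities: power, cooling, real estate

low = total(0.20, 0.12, 0.08)    # roughly $202
high = total(0.23, 0.13, 0.10)   # roughly $222
print(f"${low:.0f} to ${high:.0f}")
```

Note that hitting the $235 top end would take compute growth closer to 25% a year; the quoted low-20s rates land the range nearer $200 to $222.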
There's really no guessing. There are no moonshots. It's just clean exposure across the parts of the data centers that are doing the real work. Now, I

always try to give some added information to those of you who prefer ETFs. And the two that align to data centers the most are the Global X Data Center and Digital Infrastructure ETF, with the symbol DTCR, and the iShares US Digital Infrastructure and Real Estate ETF, IDGT. Now, I have to admit that I have not dug into these much at all, but they do cover a fair amount of these basics. Honestly, I wish that I had the ability to make my own ETFs, as I'd make them so much more efficient than what I see in the market. Some of the companies in these ETFs really make no sense to me. But hey, I digress.

In summary, by 2030, analysts expect global data center power capacity to jump from 81 GW to 222 GW, because AI needs far more horsepower than the grid was ever built to deliver. And that's the real investment story: the companies building the compute engines, and the companies building the backbone right behind them. They're all stepping into a $7 trillion wave that's already in motion. In my opinion, the money isn't in the hype. It's in the hardware, the power, and the concrete that makes AI even possible. And that's where the next 5 years of returns get decided. As always, thank you so much for watching.
