Nvidia Engages In Twitter War With Michael Burry

By Wall Street Millennial

Summary

## Key takeaways

- **AI Boom Mirrors Dot-Com Overbuild**: Michael Burry compares current AI data center spending to the 1990s broadband bubble, where companies like Global Crossing overbuilt capacity; by 2002 only 5% was utilized, leading to bankruptcy with assets sold for 2% of book value despite rising subscribers. [02:26], [03:38]
- **Nvidia Accelerates GPU Obsolescence**: Nvidia shortened its data center GPU release cycle from 2 years (A100 2020, H100 2022) to 1 year (B200 2024, B300 2025, Rubin 2026); Jensen Huang joked Blackwell makes H100s ungivable due to 40x token output in half the racks. [05:06], [07:26]
- **Hyperscalers Extend GPU Lives**: Meta, Google, and Microsoft doubled GPU useful-life estimates from 3 years in 2020 to 6 years by 2025; Amazon cut from 6 to 5 years in Q1 2025, increasing depreciation by $217M and cutting net income by $162M due to the pace of AI technology. [08:13], [09:03]
- **Baidu Impairs $2.3B in Obsolete GPUs**: China's Baidu took a $2.3B impairment on fixed assets (over 50% of its $4.5B total) as existing infrastructure no longer meets computing efficiency needs; likely aging A100s or export-limited H800/H20s, foreshadowing US hyperscaler issues. [09:36], [10:37]
- **CoreWeave Can't Recoup GPU Costs**: CoreWeave's $15.7B in operational AI data centers won't break even in 6 years at a 15% annual H100 rental decline (from $3/hr in 2023 to $2.10/hr in Nov 2025), yielding only ~$14B gross profit before interest at 7-15%. [12:25], [14:35]
- **Nvidia's $119B Circular Investments**: Nvidia committed $119B to AI firms like OpenAI ($100B) and xAI ($2B via an SPV for its GPUs), representing over half of trailing revenue; the memo refutes Enron parallels but ignores off-balance-sheet guarantees like $6.3B of CoreWeave capacity. [15:42], [22:01]

Topics Covered

  • AI Data Centers Echo 1990s Broadband Overbuild
  • OpenAI Burns VC Cash on Every User
  • Nvidia's Rapid Cycles Obsolete Hopper GPUs
  • Baidu's $2.3B GPU Impairment Signals Future Pain
  • Nvidia's $119B Investments Fuel Circular Revenue

Full Transcript

With a market capitalization in excess of $4 trillion, Nvidia is the most valuable company in the world. When you're already at the top, there's not much room to climb higher. So, you are instead gripped by the fear of falling down. The fear across the AI industry is palpable, with even big tech CEOs talking about a potential bubble. One of the biggest voices warning about an AI bubble is the former hedge fund manager Michael Burry. Burry is best known for shorting subprime mortgages in the run-up to the 2008 financial crisis. Burry inspired the Hollywood film The Big Short, where he was played by Christian Bale. Recently, Burry has taken to Twitter to warn about a bubble in the AI sector. At the center of this alleged bubble sits Nvidia. Nvidia was so disturbed by Burry's criticisms that they sent a memo to Wall Street analysts to try to refute some of his allegations. In the memo, Nvidia tries to explain that they are not similar to Enron. To be clear, Nvidia is nothing like Enron, but the fact that they feel they need to say this is not a good look. In this video, we'll look at the escalating war of words between Nvidia and Michael Burry and what this can tell us about the state of the AI bubble.

We've been running Wall Street Millennial for more than 5 years now. The vast majority of the videos we've ever published contain stock footage. We get all of our stock footage from Storyblocks, which is also the sponsor of today's video. Storyblocks offers unlimited downloads of diverse and high-quality media for one predictable subscription cost. Everything you need in one place: 4K and HD video, templates, music, sound effects, images, and more. You pay a monthly or annual subscription, and there are no hidden or extra fees. A lot of times when I'm editing a video, I download many dozens of clips of stock footage from Storyblocks. As I edit the video, I only end up using a fraction of what I downloaded. But it doesn't matter, because a subscription gives you unlimited downloads. Everything is royalty-free and ready to use. This unlimited source of content gives you the freedom to test, experiment, and create more effective video. I use Premiere Pro to edit my videos. I know nothing about animation, but I can still use the Premiere Pro templates from Storyblocks. They also have templates for After Effects, DaVinci Resolve, and Apple Motion. If you're an aspiring YouTuber or if you want to make videos of any kind, I cannot recommend Storyblocks enough. To get started with unlimited stock media downloads at one set price, head to storyblocks.com/wallstreetmillennial or click the link in the description.

And now back to the video. In a series of Twitter posts and Substack blogs, Burry lays out his bear case for the AI industry. Burry compares the current AI spending boom to the dot-com bubble of the 1990s. In the 1990s, broadband companies including AT&T, WorldCom, and Global Crossing spent tens of billions of dollars to deploy broadband internet cables. The thinking was that demand for internet traffic would grow exponentially, so this broadband infrastructure would generate massive revenue. Internet adoption was much slower than these companies had projected, resulting in the NASDAQ bubble bursting in 2000. To understand the scale of overbuilding: by 2002, only 5% of broadband internet capacity in the US was being utilized. The broadband companies generated far less revenue than expected. Many of the broadband companies had borrowed money to build out the internet infrastructure in the first place. In 2002, one of the biggest broadband companies, Global Crossing, went bankrupt. Their broadband assets were marked as being worth $22 billion on their balance sheet. They were sold by the bankruptcy court for just $500 million, about 2% of the book value. During this entire period, the number of US broadband subscribers increased, and continued to increase over the next 20 years. The internet eventually did become as ubiquitous as the bulls thought. It just took a lot longer than expected, so the infrastructure investments were not justified at the time they were made.

The current AI data center boom has some similarities to the broadband bubble of the 1990s. Cloud service providers are spending hundreds of billions of dollars to build new AI data centers. The AI data centers are indeed being utilized. OpenAI, Anthropic, and other AI startups are currently consuming all the compute capacity the data center operators can provide. But OpenAI is generating minuscule revenue compared to its expenditures. OpenAI loses money on every ChatGPT user. They even lose money on ChatGPT Pro, which costs $200 per month. The AI infrastructure is indeed being utilized, but end consumers aren't paying for it. It's being subsidized by OpenAI's venture capital investors. The situation is not sustainable. Eventually, the venture capital will run out. OpenAI is expected to generate $13 billion of revenue in 2025. They need to increase their annual revenue to hundreds of billions within the next few years to pay for all the data center capacity which is currently under construction.

But Michael Burry has another criticism of the AI bubble. Even if OpenAI can somehow afford to pay for all of its spending commitments, he contends that the data center companies will still lose money. The cloud service providers have spent hundreds of billions of dollars to buy Nvidia GPUs. Even if demand for AI compute remains strong, Burry argues that these GPUs will become obsolete before the cloud service providers can make back the cost of their investments.

Since the release of ChatGPT in 2022, Nvidia has accelerated its product launch cycle. In 2020, they released their A100 Ampere GPU. In 2022, they released their H100 Hopper GPU. That's a 2-year gap. In 2024, they released the H200 Hopper with extended memory, which is an improved version of the H100. That same year, they also released the B200 Blackwell GPU. Again, there was a 2-year gap. In 2025, they released the B300 Blackwell Ultra. That's only one year after they released the B200. In 2026, they plan to release a new data center GPU called the Vera Rubin. In 2027, they plan to release the Vera Rubin Ultra. Prior to 2024, they released a new data center GPU every two years. Now, it's every one year. This makes sense given the surge in demand for GPUs. The amount of profit Nvidia can make has exploded. It's now worth their while to spend far more on research and development to develop better GPUs faster. As Nvidia's product cycle compresses, old GPUs will become obsolete more quickly.

In March of 2025, Nvidia CEO Jensen Huang publicly joked that once Blackwell is released, you won't even be able to give away the older Hopper GPUs.

"I said before that when Blackwell starts shipping in volume, you couldn't give Hoppers away. And this is what I mean, and this makes sense. If you're still looking to buy a Hopper, don't be afraid. It's okay. But I'm the chief revenue destroyer. My sales guys are going, 'Oh no, don't say that.' There are circumstances where Hopper is fine. That's the best thing I could say about Hopper. There are circumstances where you're fine. Not many, if I have to take a swing. And so that's kind of my point. When the technology is moving this fast, and because the workload is so intense and you're building these things (these are factories), we really want you to invest in the right versions."

According to Nvidia, a 100-megawatt data center full of Hopper H100 GPUs will require 1,400 racks and produce 300 million AI tokens in a given period of time. If you instead switch to Blackwell B200 GPUs, they'll only take up 600 racks, less than half of the space, but they'll produce 12 billion AI tokens, 40 times more than the H100s. As cloud service providers deploy B200s, the supply of computing capacity will surge. This will decrease the price of compute. Eventually, the price will become so low that it is no longer profitable to operate H100s; they are too energy inefficient. That's why Jensen Huang was joking that you won't be able to give H100s away. Even if an H100 is in perfect condition and can work as intended, it will become economically obsolete.
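Nvidia's own comparison can be checked with quick arithmetic. The per-rack gap is what makes the economics so brutal for Hopper once compute prices fall:

```python
# Nvidia's 100 MW data center comparison, using the figures quoted in the video.
h100_racks, h100_tokens = 1400, 300e6   # 300 million tokens
b200_racks, b200_tokens = 600, 12e9     # 12 billion tokens

total_gain = b200_tokens / h100_tokens                               # 40x total output
per_rack_gain = (b200_tokens / b200_racks) / (h100_tokens / h100_racks)

print(f"total token output: {total_gain:.0f}x")     # 40x
print(f"per-rack output:    {per_rack_gain:.0f}x")  # ~93x
```

So a B200 rack produces roughly 93 times the tokens of an H100 rack, which is why the video argues the market price of compute will fall below the H100's operating cost.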

Over the past few years, the so-called hyperscalers have increased the estimated useful lives of their GPUs. In 2020, Meta, Google, and Microsoft depreciated them over an estimated useful life of three years, Amazon four years, and Oracle five years. By 2025, Meta had increased their estimate to 5.5 years, while Google, Oracle, Microsoft, and Amazon had increased theirs to 6 years. Amazon subsequently decreased it to 5 years. The biggest increases came from Google and Microsoft, who both doubled their estimated useful life. These two companies are also believed to have spent the most money on Nvidia GPUs during this period. So, how do the hyperscalers justify this? They claim that advances in technology and software make it such that the GPUs physically last longer. Even if this is true, an old GPU can become economically obsolete even if it's in perfect physical condition.

We are already starting to see some signs of this. In the first quarter of 2025, Amazon reduced the estimated useful life of its GPUs from 6 years down to 5 years. The shorter useful lives are due to the increased pace of technology development, particularly in the area of artificial intelligence and machine learning. This resulted in an increase in depreciation and amortization expense of $217 million and a reduction in net income of $162 million during the quarter. If the hyperscalers are forced to decrease the useful lives of their GPUs, this will decrease their reported net income, which investors will not be happy about.
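Mechanically, shortening a useful life spreads the remaining book value over fewer periods, so the expense per quarter jumps. A minimal straight-line sketch; the dollar figures here are hypothetical for illustration, since Amazon doesn't disclose its server book value at this granularity:

```python
# Straight-line depreciation: shortening the remaining useful life of an
# asset pool raises the periodic expense on the remaining book value.
# Figures are hypothetical, for illustration only.
def quarterly_depreciation(net_book_value, remaining_life_years):
    return net_book_value / (remaining_life_years * 4)

nbv = 26.0  # $B of remaining net server book value (hypothetical)
old = quarterly_depreciation(nbv, remaining_life_years=5)  # 6-yr life, 1 yr elapsed
new = quarterly_depreciation(nbv, remaining_life_years=4)  # 5-yr life, 1 yr elapsed

print(f"extra quarterly expense: ${(new - old) * 1000:.0f}M")  # $325M
```

The same book value, one year shaved off the schedule, and quarterly depreciation rises by hundreds of millions; the direction matches Amazon's reported $217 million increase.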

Michael Burry points to the example of Baidu as the first hyperscaler to recognize a large impairment of GPUs.

Baidu operates China's largest search engine, and they're also a cloud service provider. In 2023, Baidu launched its own AI chatbot called Ernie. In addition to its own Ernie chatbot, Baidu also sells computing capacity to third-party AI startups in China. You can think of Baidu as a Chinese version of Google. In the third quarter of 2025, Baidu recorded an impairment charge of $2.3 billion. This caused them to post a loss for the quarter. In the previous quarter, the entirety of their fixed assets was valued at $4.5 billion, the vast majority of which were data centers. So, this impairment represented more than 50% of their fixed assets. Baidu's CFO explained, quote: "We've conducted a comprehensive review of our infrastructure portfolio. Some of the existing assets no longer meet today's computing efficiency requirements." Unquote. In other words, they have a bunch of GPUs that are now obsolete and no longer useful.

Up until 2022, Baidu purchased Nvidia's A100 GPUs. In late 2022, the US government implemented export controls which banned Nvidia from selling its highest-end chips to Chinese customers. So, they stopped selling A100s to Baidu. In response, Nvidia made the H800 and later the H20 GPUs. These have lower performance specifications to comply with the export controls. These chips are made exclusively for the Chinese market. Recently, Baidu has ordered AI chips from Huawei and is making its own AI chips. These homegrown Chinese chips are inferior to Nvidia's top-of-the-line GPUs, but they are probably better than the watered-down export-compliant versions. We don't know which GPUs specifically were responsible for Baidu's recent impairment, but it was probably either the aging Nvidia A100s or possibly the watered-down H800 and H20 export variants. Baidu's situation is not directly comparable to the US hyperscalers like Amazon, Google, and Microsoft. The US hyperscalers are not subject to export controls, so they've been buying Nvidia's latest and greatest GPUs. But this just means that the US hyperscalers are a few years ahead. Eventually, they will face the same problems as Baidu. Let's look back at Nvidia's product cycle timeline. The A100s are probably now obsolete. The AI data center investment boom started in earnest in 2023. The US hyperscalers bought tens of billions of dollars worth of H100s. The H100s are a generation ahead of the A100s, so they're still usable. But what happens once Vera Rubin comes out in 2026? The H100s will probably become obsolete, and the US hyperscalers will be forced to take massive impairments just like Baidu.

We've talked a lot about abstract accounting terms like depreciation and estimated useful lives. But what does all this mean in practice? There's a company called Silicon Data which tracks the rental prices for Nvidia's H100 GPUs. The full data set is behind a paywall, but we were able to find a few data points in publicly available news articles. In September of 2023, the average rental price was about $3 per hour. By May of 2025, it had decreased to $2.50. By November of 2025, it had decreased to $2.10. That's a decline of 30% in a little over 2 years, or about 15% per year. Let's just assume that the rental price continues to decline at a rate of 15% per year.

We'll analyze this using CoreWeave's financials. They're a pure-play AI data center company, which makes the analysis easier. As of the end of the third quarter, CoreWeave had $14.6 billion of technology equipment. This is mostly Nvidia GPUs and other supporting chips and server components. Plus, they have $1.1 billion of data center equipment and leasehold improvements. CoreWeave depreciates its GPUs over an estimated useful life of 6 years. When you replace the GPUs, you probably have to replace most of the data center equipment as well, because the newer models of GPUs have different infrastructure requirements. So, CoreWeave spent almost $16 billion on AI data centers. They need to make back this investment within 6 years. They also have $7 billion of construction in progress. These are new data centers that are not yet operational. For now, we're just looking at the $16 billion of data centers that are already operational. In the third quarter of 2025, they generated $1.4 billion of revenue. Their cost of revenue was $370 million, giving them gross profit of a little over $1 billion. CoreWeave's cost of revenue consists of the direct costs to operate their data centers: rent, utilities, staff, etc. It does not include GPU depreciation. In the first year, they'll make $5.4 billion of revenue. That's just their Q3 revenue annualized. They incur a cost of revenue of $1.5 billion, so they generate a little less than $4 billion of gross profit. Each year, their revenue declines by 15% while their cost of revenue remains constant, so their gross profit gradually shrinks. By the end of the sixth year, they've cumulatively generated about $14 billion of gross profit. Their investment was $15.7 billion. They will not even make back their upfront investment. To fund the construction of these data centers in the first place, CoreWeave had to borrow billions of dollars at interest rates ranging from 7 to 15%. They're probably not even going to generate enough gross profit to cover the initial investment, let alone the billions of dollars of interest expense that will accrue over the next 6 years.
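The payback math above can be sketched directly. This follows the video's stated assumptions (Q3 2025 figures annualized, cost of revenue held constant, 15% annual revenue decline over a 6-year useful life); the rounding here lands a bit below the video's ~$14 billion cumulative figure, but the conclusion is the same:

```python
# Back-of-envelope payback model for CoreWeave's operational AI data centers,
# using the video's assumptions. All figures in $B.
investment = 15.7          # $14.6B tech equipment + $1.1B data center equipment
revenue = 5.4              # Q3 2025 revenue of ~$1.4B, annualized
cost_of_revenue = 1.5      # Q3 cost of revenue of $370M, annualized
decline = 0.15             # assumed annual fall in rental prices / revenue

cumulative_gross_profit = 0.0
for year in range(6):      # 6-year estimated useful life
    cumulative_gross_profit += revenue - cost_of_revenue
    revenue *= 1 - decline

print(f"6-year gross profit: ${cumulative_gross_profit:.1f}B")           # ~$13.4B
print(f"shortfall vs investment: ${investment - cumulative_gross_profit:.1f}B")
```

Under these assumptions the fleet generates roughly $13-14 billion of gross profit against a $15.7 billion investment, before a single dollar of interest expense.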

Keep in mind that all of this data center buildout is predicated on orders from AI companies, mostly OpenAI and Anthropic. To pay for all of this, OpenAI, Anthropic, and other AI startups will need to increase their annual revenues by hundreds of billions of dollars within the next few years. Otherwise, they'll default on their purchase orders. But even if OpenAI can make good on its spending commitments, the AI data center companies like CoreWeave and the hyperscalers will probably still lose money, because the GPUs depreciate so quickly. In the short term, the fast product launch cycle is good for Nvidia. But in the long run, if the data center companies have to recognize massive impairments, they will eventually realize that this is an unprofitable business, at which point they'll scale back their expansion plans and Nvidia's revenue will decline.

Nvidia was so spooked by Burry's criticisms that in late November, they sent a memo to all the Wall Street analysts who cover their stock. The full memo was leaked to the media. In the memo, Nvidia responded to two critics. Firstly, they responded to Michael Burry's statements on Twitter. Secondly, they responded to a Substack blog titled "The Algorithm That Detected a $610 Billion Fraud: How Machine Intelligence Exposed the AI Industry's Circular Financing Scheme." It was written by a guy named Shanaka Pereira. Shanaka's post is viewable for free on his Substack page. I've linked it in the description below if you want to read the whole thing. But these are his key points. Nvidia's accounts receivables are increasing; this indicates that its customers are having trouble paying for the GPUs. Nvidia's inventory is increasing; this may indicate that they're having trouble selling all the GPUs they're producing. Shanaka also points out Nvidia's circular financing deals.

He specifically flags Nvidia's $2 billion investment into Elon Musk's xAI in October of 2025. Nvidia's xAI investment is a bit complicated. Nvidia isn't investing into xAI directly. xAI created a special purpose vehicle, or SPV. The SPV will buy Nvidia GPUs and rent them to xAI. The total value of the SPV is $20 billion, of which $7.5 billion is equity and $12.5 billion is debt. Nvidia contributed $2 billion to the equity portion of the SPV. The terms of the SPV require it to spend the money on Nvidia GPUs and keep their utilization above 70%. The SPV will sell the computing capacity to xAI. In the most inflammatory part of Shanaka's blog, he compares Nvidia to Enron. Enron created SPVs which it owned and controlled. Enron sold its telecom and energy infrastructure assets to its own SPVs. This allowed them to book fake revenue from their SPVs even though they were not receiving any cash from external customers. Shanaka compares the xAI SPV to Enron's SPVs. Quote: "Nvidia provides equity capital to an entity that exists primarily to purchase Nvidia's products. The transaction appears as an arm's-length sale in Nvidia's accounting, but economically Nvidia is funding its own revenue."

In Nvidia's memo, they try to refute Michael Burry's and Shanaka Pereira's criticisms one by one. Nvidia maintains that demand for its data center GPUs remains strong and that they are, in fact, supply constrained. Their inventory balance is growing, but this is not because they're having trouble selling their products. Inventory includes significant raw materials and work in progress. As they grow their revenue, you should expect them to have more work in progress at any given time, as they're producing more GPUs. And while their accounts receivable has increased, they have not had any significant customer defaults. In my opinion, Nvidia's responses to these allegations are pretty solid. There hasn't really been any sign to date that Nvidia is having trouble selling its GPUs. The more concerning allegation is about the circular financing. In the third quarter of 2025, Nvidia made $3.7 billion of strategic investments into AI startups. This represented 7% of their revenue in the quarter. In the first three quarters of 2025, they made $4.7 billion of strategic investments, representing just 3% of their revenue. The memo argues that the companies in Nvidia's strategic investment portfolio mainly raise capital from third-party financing providers, not from Nvidia. These companies are growing their revenues rapidly, indicating a path to profitability and strong underlying customer demand for AI applications. Furthermore, they predominantly generate revenue from third-party customers, not from Nvidia.

Nvidia has a venture capital arm called NVentures, through which they invest in dozens of small AI startups. For the most part, these investments are relatively small. Additionally, Nvidia sometimes makes larger investments into various AI and tech companies. Here I've compiled a list of the strategic investments Nvidia has made in the first 9 months of 2025. Collectively, Nvidia invested $4.7 billion into these companies. The companies largely fit into two categories. Nvidia invested into so-called neoclouds, including Lambda, Nscale, and Firmus. Neoclouds are cloud service providers that exclusively provide computing services to AI startups. These companies are direct customers of Nvidia. They buy Nvidia GPUs to fill up their data centers. The second category is AI startups, including Cohere, Together.ai, and Thinking Machines. These companies are developing AI models. They typically do not purchase Nvidia GPUs directly. They instead rent computing power from neoclouds such as Lambda, Nscale, and Firmus. Thus, they are indirect customers of Nvidia. Nvidia also made an investment into a company called Commonwealth Fusion Systems, which is trying to develop nuclear fusion power.

Nvidia claims that the companies in its strategic investment portfolio predominantly generate revenue from third-party customers, not from Nvidia. While this is technically true, Nvidia's web of investments across the AI landscape is so broad that its money is flowing around almost everywhere. Nvidia owns equity stakes in various neoclouds. The neoclouds generate revenue from AI startups. Many of these AI startups have received investment from Nvidia. While the neoclouds' revenue does not directly come from Nvidia, a lot of this money comes from Nvidia indirectly.

Even if there is some circularity in Nvidia's investments, they argue that it's too small to matter. Their strategic investments only represented 3% of their revenue in the first 9 months of the year. Even if all of that is circular, that means at least 97% of their revenue is real revenue from unrelated external customers. This is technically true, but we also have to look at the trend. In the first half of 2025, Nvidia only made $1 billion of strategic investments. In the third quarter alone, they made $3.7 billion of investments. Over the past few months, Nvidia has committed to investing $100 billion into OpenAI, $10 billion into Anthropic, $5 billion into Intel, $2 billion into Synopsys, and $2 billion into xAI. These commitments add up to $119 billion. All of these investments have round-trip characteristics. In the trailing 12-month period, Nvidia's revenue was $187 billion. The investment commitments Nvidia has made now represent more than half of their annual revenue. This doesn't even include their off-balance sheet arrangements, such as their $6.3 billion commitment to purchase CoreWeave's unused capacity in the future.
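The "more than half" claim can be tallied from the commitments listed above:

```python
# Nvidia's announced investment commitments (as listed in the video), in $B,
# compared against trailing-12-month revenue.
commitments = {
    "OpenAI": 100,
    "Anthropic": 10,
    "Intel": 5,
    "Synopsys": 2,
    "xAI": 2,
}
ttm_revenue = 187  # $B, trailing 12 months

total = sum(commitments.values())
print(f"total commitments: ${total}B")                     # $119B
print(f"share of TTM revenue: {total / ttm_revenue:.0%}")  # ~64%
```

$119 billion against $187 billion of trailing revenue is roughly 64%, and that excludes the off-balance-sheet CoreWeave capacity guarantee.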

In Shanaka's Substack blog, he specifically calls out Nvidia's $2 billion investment into a special purpose vehicle associated with Elon Musk's xAI. The sole purpose of this SPV is to buy Nvidia's products. Nvidia categorically denies any comparison between themselves and Enron. Nvidia claims it has only one guarantee, for which the maximum exposure is $860 million. They reference Note 9. Unfortunately, the version of the memo I was able to find does not include the notes, so I'm not sure what they're referring to. However, this is already a bit fishy. Nvidia provided a $6.3 billion guarantee to purchase CoreWeave's unused capacity. As far as I know, this guarantee is still standing. So, how can they say that their only guarantee is $860 million when they have a $6.3 billion guarantee to CoreWeave? To be fair, the CoreWeave guarantee does not involve an SPV. Nvidia made the guarantee directly to CoreWeave. This is an off-balance sheet liability. We only know about it because it was reported in the media. What other guarantees does Nvidia have that we don't know about?

To be clear, Nvidia is not Enron. Enron created a convoluted web with hundreds of SPVs. Enron controlled all these SPVs and effectively had 100% economic ownership of them, but they were deconsolidated from Enron's balance sheet. Enron did two things with these SPVs. Firstly, they sold various of their assets to their own SPVs to create fake revenue. Secondly, they transferred liabilities to their SPVs to make their own balance sheet look better. In reality, they had 100% economic exposure to the SPVs, so they were still on the hook for the liabilities. The reason why Enron was a fraud is because they owned 100% of their SPVs. For the most part, these SPVs had minimal or no outside investors. So, it was fraudulent for Enron to deconsolidate them. The xAI SPV is different. It has $20 billion of capital, of which $7.5 billion is equity and $12.5 billion is debt. Nvidia reportedly contributed $2 billion to the equity portion. That's only about a quarter of the equity. The majority of the money came from external venture capital firms. Nvidia has no obligations to the SPV. It's actually the other way around: the SPV has obligations to purchase Nvidia GPUs.
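Nvidia's actual slice of the vehicle is worth spelling out, since the video rounds it to "about a quarter":

```python
# Nvidia's stake in the xAI SPV, per the figures quoted in the video ($B).
spv_equity = 7.5
spv_debt = 12.5
spv_total = spv_equity + spv_debt   # $20B total vehicle
nvidia_equity = 2.0                 # Nvidia's reported contribution

print(f"share of equity: {nvidia_equity / spv_equity:.0%}")  # ~27%
print(f"share of total:  {nvidia_equity / spv_total:.0%}")   # 10%
```

Nvidia holds roughly 27% of the equity but only 10% of the total capitalization, versus Enron's effective 100% ownership of its SPVs, which is the core of the distinction the video draws.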

Shanaka is right to draw attention to the xAI investment. It does appear to be circular, but comparing it to Enron is a bit over the top.

The final issue we'll look at is depreciation. Nvidia's memo states the following, quote: "Claim: Nvidia is depreciating PP&E more slowly than its peers, indicating that depreciation expense is understated. If properly reported, depreciation would be higher and net income would be lower." This is what you call a straw man argument. In the source document section of the memo, Nvidia claims that it is responding to Michael Burry's tweets as well as Shanaka Pereira's Substack article. Neither Burry nor Shanaka has alleged that Nvidia understates its own depreciation. It was mainly Michael Burry who talked about depreciation, and Burry said the data center operators, the likes of Google, Amazon, Microsoft, and Oracle, are understating depreciation. While Nvidia makes GPUs, it does not operate its own GPUs in any significant quantity. So, it doesn't even make sense to talk about Nvidia's depreciation.

Separately, Nvidia claims that A100 GPUs, which were released in 2020, continue to run at high utilization and generate strong contribution margins, retaining meaningful economic value. They tell you to reference Appendix A. But if you look at Appendix A, all it has is a list of the hyperscalers' depreciation schedules. Nvidia provides zero evidence that any of them are operating A100s at high utilization. In my opinion, Nvidia's assertion about the economic longevity of A100s is almost certainly false. Remember, earlier in the video we looked at Baidu's massive impairment. We believe that this impairment was likely linked to their aging A100 GPUs. If A100s are already obsolete in China, they're even more obsolete in the US. The US is a couple of generations ahead of China due to the chip export controls.

So, what's the takeaway from all of this? Nvidia is not Enron. It's not a fraud, but the massive revenue growth they've experienced over the past couple of years is probably not sustainable. Nvidia's stock is very expensive. They have a $4.4 trillion market cap. Their price-to-earnings ratio of 45 is much higher than that of the S&P 500. To sustain this premium valuation, they need to continue growing their revenue and profits. The underlying demand for AI services is not enough to support this. That's why they feel obliged to engage in round-trip investments of ever-increasing size. Nvidia is one of the most innovative and successful companies of our time. But their valuation is being propped up by a house of cards. Nvidia understands this. They're nervous. They're so nervous that they're now getting into Twitter debates with the likes of Michael Burry and Shanaka Pereira.

All right, guys. That wraps it up for this video. What do you think about Nvidia? Let us know in the comments section below. As always, thank you so much for watching, and we'll see you in the next one.
