The Data Center Nightmare
Huang’s Law vs. the Giant Mainframe
A lot of investors, worried about Nvidia $NVDA and other big AI plays, are being pushed to invest in data centers.
Equinix $EQIX and Digital Realty Trust $DLR have been hot stocks for years. They’re organized as Real Estate Investment Trusts (REITs). You value them for their dividend income, which has been rising continuously, thereby increasing the yield you get from holding the stocks over time.
Big investors are especially enamored of that income, which REITs are obligated to send out to shareholders. That’s why a lot of data center companies are now owned by private equity.
These are the companies that are buying huge chunks of land, filling them with racks of Nvidia chips, and jacking up your energy bills. They come to town claiming they’ll create jobs, but those jobs are only on the front end. Once the data center is running, most of the jobs go away.
A lot of folks are now focused on this scam. But I’m more interested in the financial side, the inherent lack of stability.
The data center boom is like the railroad boom of the late 19th century. Early money came out of that boom great. But once the roads were running, profits dried up. New York went from building Grand Central Station to tearing down Penn Station in less than a lifetime.
Data Centers are Railroads
Data centers are the same way, as even a cursory look at their business models will tell you. Only this time it’s happening in Internet Time.
Unlike the Cloud Czars, who build with cash flow, data centers build with debt. Profits are optional; interest payments are not. If you don’t pay the interest on the notes, they call the repo man. The data centers also have an uncertain revenue stream. You’ll always Google, but these people are landlords. If new service providers aren’t signing rental contracts, they’re in deep doo-doo.
Then there’s the structure of the data center itself. Like I said, they’re filled with racks upon racks of Nvidia chips, depreciated over 5-6 years, sometimes more. But Huang’s Law says Nvidia chips improve even faster than Gordon Moore first suggested, by working around the problems of heat and size. The performance of an “early” Hopper chip, launched barely three years ago, is minuscule next to the new Rubin chip. The intervening Blackwell chips were 3 times faster than the Hoppers, and the Rubins are 4 times faster than that.
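Run the arithmetic on those multiples. This is a back-of-envelope sketch; the 3x and 4x figures are the ones quoted above, and the pricing conclusion is the obvious corollary, not a published rate card:

```python
# Back-of-envelope: compound the generation-over-generation multiples
# quoted in the text (Blackwell = 3x Hopper, Rubin = 4x Blackwell).
hopper = 1.0
blackwell = 3 * hopper   # 3x a Hopper
rubin = 4 * blackwell    # 4x a Blackwell

print(f"A Rubin delivers {rubin / hopper:.0f}x the performance of a Hopper.")
# To stay competitive on price-per-performance, a landlord renting
# Hoppers would have to charge roughly 1/12 the Rubin rate.
```

Twelve-fold in three years is the gap a Hopper landlord has to price against.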
Think about what that means. Consumers like me tossed about 10 different PCs into the trash while going through such an evolution. They became obsolete. It’s true that Rubin runs Hopper apps, but Windows ran DOS programs, too. If you’re renting Hoppers, you’re selling DOS.
The only comment from Wall Street is that the Rubins came “sooner than expected.” The idiots can’t even read a roadmap when it’s published on the Web.
Moore Always Wins
Data center balance sheets aren’t built to handle this. Their assets are losing value faster than they can be written off. Much faster. Soon those write-offs will be growing faster than the money coming in.
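Here’s a minimal sketch of that squeeze. The numbers are mine and purely illustrative: a hypothetical $100M rack written off straight-line over the 6-year schedule mentioned above, against an assumed economic value that halves every 18 months as new chip generations land:

```python
# Illustrative only: book value on a 6-year straight-line schedule vs.
# an assumed economic value that halves every 1.5 years (my assumption,
# meant to mimic chip-generation obsolescence).
COST = 100.0          # hypothetical rack cost, $ millions
BOOK_LIFE = 6         # years on the depreciation schedule
ECON_HALF_LIFE = 1.5  # assumed years for resale/rental value to halve

for year in range(BOOK_LIFE + 1):
    book_value = COST * max(0.0, 1 - year / BOOK_LIFE)
    econ_value = COST * 0.5 ** (year / ECON_HALF_LIFE)
    print(f"year {year}: book ${book_value:5.1f}M, economic ${econ_value:5.1f}M")
```

Under those assumptions, by year 3 the books carry the rack at $50M while its economic value is down around $25M. The balance sheet is overstating the asset the whole way down.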
Then there’s the real estate. Remember what happened to your PC back in the day? I do. We went from inefficient desktops with multiple slots to laptops to phones. And you’ve probably had 10 different phones since 2009. New devices are also more energy efficient. My iPod became obsolete not because of streaming, but because its power drain couldn’t compete with newer, more efficient devices.
At some point, the improvements in Nvidia chips will cease to require more room. Data centers are enormous mainframes, which during my lifetime became minicomputers, PCs, and eventually phones. The current process is happening even faster. The next Nvidia generation, called Feynman, is due out in just 2 years and will make Rubin look like a Kaypro next to an iPhone 17.
Improvements in hardware are matched by those in software. DeepSeek, which wowed people with its price-performance a year ago, is wowing them again. The reason Huang is talking up autonomous robots and cars is that he wants the PC-size markets for his chips that client-based AI will provide. Remember, a Waymo and a robot shouldn’t need a data center; there’s too much latency in making the call. Apple $AAPL, number two behind Nvidia by market cap, is avoiding the LLM craze for just that reason. They know all this development, in time, leads to AI on the client, not the network.
The Bottom Line
There’s a reason why tech booster Dan Ives is now touting Microsoft $MSFT instead of Nvidia. Microsoft software generates cash, and each new generation of software generates more cash. Microsoft is still building with cash flow. Cash is king. Microsoft doesn’t have the debt levels of the data centers. It can control its depreciation schedule, and still make money, by raising prices and introducing new software.
Ives’ call is a warning. It won’t be heeded. People will continue to invest in data centers, for income, until their need for land rolls over.
By that time, it will be too late to get out.

Well said.
I would say, however, that Microsoft is also tying its future to this (adding Copilot to Windows, MS Office, etc.), although to a lesser degree than others.
Fantastic piece. The railroad parallel is spot-on, especially the timing asymmetry between capital returns and operational reality. I worked adjacent to a REIT last year and the internal conversations about refresh cycles vs lease commitments were... tense. The Hopper-to-Rubin jump is genuinely wild when mapped against typical 6-7 year depreciation schedules. Curious if the eventual move is liquid cooling retrofits to extend asset life, or if we're just gonna see a wave of distressed sales once the debt-to-obsolescence gap closes.