5 Comments
Marginal Gains:

As we all know, the current tech company bubble appears to be fueled by the near-unanimous optimism of Silicon Valley, venture capitalists, and Wall Street, all of whom seem to believe that artificial intelligence (AI)—and more specifically, artificial general intelligence (AGI)—is poised to solve humanity's greatest challenges. Depending on who you ask, AGI is predicted to arrive within the next 1 to 5 years. This belief justifies massive capital spending on infrastructure, as companies rush to prepare for a future where AI dominates. However, if this narrative turns out to be overly optimistic or outright wrong, we could see a rapid unwinding of this bubble, much like the dot-com crash of the early 2000s.

Let’s assume, for argument's sake, that the optimistic scenario holds true and AGI does emerge within the next five years. Even then, the market seems to be overlooking a host of critical challenges. While companies may rush to cut costs by replacing humans with bots, the implementation of such revolutionary technology is far from straightforward. Real-world deployment of AGI will face significant hurdles, including:

1. Implementation Challenges: Transitioning from proof-of-concept to large-scale deployment will require overcoming countless "last-mile" edge cases, where AGI might fail to perform reliably in complex or unpredictable scenarios.

2. Risks of AGI: The market appears to discount the ethical, regulatory, and existential risks associated with AGI. The debate over how to manage these risks is barely beginning, and unresolved issues could delay or derail widespread adoption.

3. Societal and Economic Impact: Replacing human labor with bots en masse could lead to widespread societal disruption, including unemployment, inequality, and resistance from policymakers and the public. These consequences are not being priced into the current AI frenzy.

4. Overestimating Technological Readiness: Building a groundbreaking technology is often easier than implementing it effectively. Having worked in the tech industry for over two decades, I’ve seen firsthand how ambitious projects can falter due to the complexities of real-world integration.

Ultimately, the market may be operating under a speculative "voting machine" dynamic, as famously described by Ben Graham:

“In the short-run, the stock market is a voting machine. Yet, in the long run, it is a weighing machine.”

In this case, the "voting" may be driven by hype and unrealistic expectations surrounding AGI, rather than its actual near-term potential. Still, the short term may stretch longer than usual, given the wide range of possibilities for AGI's arrival. While the current narrative drives valuations higher, the long-term reality will inevitably be determined by the practical challenges and risks that lie ahead. If these challenges are underestimated, the bubble will eventually burst, much like past speculative manias.

Marginal Gains:

I also think the tech giants and the market underestimate the skilled workforce needed to implement and productively use AI. That workforce will also take years to develop, since just having the technology does not mean it will automatically become part of the workflow.

I will end with one of my favorite quotes. I do not know who said it:

"Reality always wins; your job is to get in touch with it."

And with Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

In this case, reality will also win, regardless of the current market situation.

Joachim Klement:

So true. I constantly remind people of Cisco in the late 1990s. Back then, people thought we would need massive investments in internet infrastructure to reap the benefits of the internet. Then they found out that there were better ways to compress data, and the capex on infrastructure became worthless or unnecessary. Cisco and all the old-fashioned tech companies that built their business model on capex spend and telecom hardware collapsed. The survivors were essentially software companies (even Microsoft almost disappeared because they missed the boat on mobile OS).

If any of this sounds familiar in a world where everyone spends billions on AI infrastructure and on the energy supply needed to train AI, then that is no coincidence.

Oh, and now we are witnessing the possible emergence of a rival AI that requires much less AI infrastructure and much less energy...

I think the folks at Nvidia, OpenAI, Microsoft, Amazon, Broadcom, etc. are praying that DeepSeek is just a hack and hasn't really figured out how to train an AI with a fraction of the energy and chips needed.

Marginal Gains:

Agreed.

Even at the current level of AI technology, I believe the first significant shift will be humans using AI, replacing humans who don’t use AI across various domains. This dynamic will likely lead to substantial changes in how work is approached, valued, and rewarded.

Less experienced workers will likely feel the initial impact, while skilled professionals who embrace AI will see their productivity increase dramatically. Tasks traditionally assigned to junior employees—such as repetitive or routine work—are precisely the tasks at which AI excels.

Consequently, the demand for entry-level roles may shrink, forcing less experienced workers to upskill and quickly learn how to collaborate with AI tools to remain competitive. On the other hand, skilled workers who integrate AI into their workflows will become far more efficient and valuable, creating a widening gap between those who adapt and those who do not.

Having used AI tools for nearly two years, I’ve experienced firsthand the significant productivity boost they provide in certain areas.

That said, I don’t think society is fully prepared for even this initial shift. Many people in the U.S. spend $100,000–$400,000 on a bachelor’s degree, only to find limited job opportunities. For decades, people have been told that obtaining a college degree is the best way to climb the economic ladder. If this promise no longer holds true, frustration and resistance are inevitable. While this resistance may not last long in the face of inevitable technological progress, it could trigger significant societal and economic disruptions in the short term.

Gunnar Miller:

The US wants to win the AI race so badly that it put export restrictions on Nvidia H100s. Because Chinese labs can’t use the best AI chips, they had to figure out how to work with the trailing-edge Nvidia H800 and think outside the box to create more efficient ways to train and run their large language models, which have reportedly exceeded expectations. So, seemingly overnight, they can allegedly do everything OpenAI and all the other megacap techs can at 5% of the price, and we don't need any leading-edge chips anymore ... nor ASML, nor big data centers.

This is exactly why people said Russian programmers in the 1980s/90s were the best in the world: they had to be extra clever to get stuff to run on slow, obsolete junk. Take that, fat, lazy American capitalist pig programmers! https://youtu.be/oHjaAu1GTZU?si=XjTxOKAxSLFQuwKa&t=76 . I also remember coming to work in the early '90s and finding all the PC stocks being crushed because someone had reportedly come up with a PC that could be user-upgraded to the latest Intel chip instead of having to buy a whole new PC.

People just love "David vs. Goliath" and "Man Bites Dog" stories. A quote I read this morning said "Silicon Valley is great at taking things from 0 to 1, but the Chinese are great at taking things from 1 to 10." One guy raises the possibility they're lying https://open.substack.com/pub/benvanroo/p/deepseek-the-biggest-splash-in-ai . Another raises the possibility that they actually piggy-backed on another LLM API and/or used smuggled H100 GPUs https://open.substack.com/pub/thatstocksguy/p/a-few-thoughts-on-deepseek .

Look, the AI hype has clearly inflated some valuations. The question is whether this really is a Cisco inventory write-down moment like the one that popped the internet-related silicon bubble back in early 2001. That would make the $500 billion Stargate announcement the equivalent of the 2000 Super Bowl, which roughly marked the absolute peak of the Nasdaq 100. The 30 Jan 2000 "Dot Com Superbowl" came just weeks before the peak https://en.wikipedia.org/wiki/List_of_Super_Bowl_commercials#2000s . This was around the time when, having been bearish on tech all the way up, Merrill Lynch began to run its "be bullish" ad campaign https://www.youtube.com/watch?v=9I5JEokM6gI . In Apr 2001, Cisco warned it would write off >$2 billion in semiconductor inventory https://www.cnet.com/tech/mobile/ciscos-2-25-billion-mea-culpa/ . It took 15 years for the Nasdaq to reach 5,000 again.

That said, one key difference between the dot-com bubble and today's megacap tech-dominated markets is that the latter have massive profitability and cash flow ... so actual price/earnings ratios can be calculated, versus price/yearnings ;-)
