7 Reasons Why the AI Investment Boom Could Bust

Goldman Sachs has released a startling projection: roughly $1,000,000,000,000 will be spent on AI infrastructure in data centers over the next few years. Yes, that is one trillion dollars, a figure usually reserved for discussing the GDP of entire nations rather than investment in a growing tech sector. 

If Goldman Sachs had more confidence that this massive investment would pay off, the news wouldn’t be so alarming. Instead, the investment banking giant released a report warning that an enormous amount of money is being funneled into a technology that may never provide a return on investment. 

Yes, AI seems promising and will surely continue to grow at a staggering rate. But will it grow to be worth a full trillion dollars? That remains to be seen. If the hype falls flat and the bubble bursts, AI investors could face tremendous losses. 

Here are some of the reasons Goldman Sachs says AI investors should rein in the spending a little bit—at least for now. 

1. We don’t know yet whether generative AI will provide returns.

There is a misconception that almost anyone who develops or uses AI can strike it big with relative ease. That is, after all, one of the hallmarks of a technology “boom.” 

Goldman Sachs researchers are not so optimistic. They believe that even if one application, or a handful of them, comes out on top, the industry as a whole may struggle to earn a sufficient return on so much spending. 

2. There are still hardware shortages and energy limitations to consider.

The AI boom largely began in the United States, but global competition is fierce. In addition, many countries—the U.S. included—are grappling with microchip shortages, limited available power, and green energy initiatives that are seemingly becoming stricter with every passing year. 

Only time will tell which major players—corporations and entire countries included—will be able to meet demand. 

3. AI isn’t filling a necessary market gap. In fact, it’s almost doing the opposite. 

Typically, when a new technology emerges on the scene, it does so because it fills some previously unfulfilled gap in the market. The internet, for example, provided companies with a low-cost method of conducting business online. In many cases, businesses were able to shrink their overhead dramatically by switching from the brick-and-mortar model to online. Increasingly, real-world locations disappeared and digital storefronts flourished. 

AI, however, appears to be doing almost the opposite. Instead of solving high-cost problems with low-cost solutions, it offers a high-cost solution for eliminating low-cost jobs and replacing “obsolete” (that is, inexpensive) older tech. 

Along these lines, MIT economist Daron Acemoglu estimates that “only a quarter of AI-exposed tasks will be cost-effective to automate within the next ten years, implying that AI will impact less than five percent of all tasks.” 

4. There’s a dearth of high-quality training data.

AI companies have managed to scrape nearly the entire publicly accessible internet, along with several portions that are not public at all, as we’ll cover in a moment. 

That means every “usable” tweet, forum post, and web page has already fueled the AI boom, and AI is running out of new content to digest. Researchers predicted in 2022 that AI developers will likely run out of usable data by 2026. 

The key word here, of course, is usable. Developers try to avoid feeding algorithms inaccurate, incomplete, low-quality, or private data so they don’t corrupt the output. However, the process of deciding what is and isn’t usable leaves something to be desired. 

Despite the best of intentions, the content that AI algorithms have scraped up hasn’t always been high-quality. User posts often contain a mishmash of opinions, unverified claims, and outright falsehoods. And in the world of large language models (LLMs), low-quality data is a big problem. AI learns from data, so when we feed it unreliable material, it learns to be unreliable. 

This leads us to our next problem. 

5. Large language models make mistakes. A lot of mistakes.

Despite the moniker “artificial intelligence,” AI isn’t actually intelligent. It doesn’t understand language or context; it generates text with varying degrees of statistical confidence based on patterns in the data it has gathered. At its best, AI emulates human knowledge. But as many users have noticed since the beginning of the generative AI boom, AI is often very inaccurate. 

This issue is so common that the tendency for AI to make up information and present it as fact has a name: hallucination. The implication is that AI hallucinates these false “facts” into existence, but they’re actually just statistical errors. Unfortunately, some degree of statistical error exists within any data set; developers can’t simply train the mistakes out of the algorithm.
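To make the “statistical error” point concrete, here is a minimal, purely illustrative Python sketch. The prompt, vocabulary, and probabilities below are invented for this example and are not drawn from any real model; a real LLM samples over tens of thousands of tokens, but the principle is the same: it produces whatever its training data made statistically likely, true or not.

```python
import random

# Toy next-word distribution a model might learn for the prompt
# "The first airplane flight took place in ..." from noisy training data.
# These words and probabilities are invented purely for illustration.
next_word_probs = {
    "1903": 0.90,  # correct, because most training text says so
    "1913": 0.07,  # a falsehood absorbed from low-quality sources
    "1893": 0.03,
}

def sample_next_word(probs):
    """Pick one word, weighted by its learned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Roughly 1 time in 10, this toy "model" confidently states a falsehood.
# Nothing has malfunctioned; the error is baked into the distribution.
for _ in range(5):
    print("The first flight took place in", sample_next_word(next_word_probs))
```

In other words, a hallucination isn’t a bug that can be patched out; it’s the expected behavior of sampling from an imperfect distribution, which is part of why the training data problem in reason 4 matters so much.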

This is a problem, because some of these hallucinations are fairly severe. Doctors use AI models to help diagnose diseases, writers use AI to brainstorm wellness articles that give people health advice, lawyers use AI to research laws and precedents, and programmers use AI to build application frameworks and write code. When something goes wrong in one of those areas, it can be life-altering for users who follow the bad advice, enter private data into insecure software, or receive a medical misdiagnosis. 

6. Unethical data scraping could turn into a huge legal battle.

Whether accidentally or deliberately (the lawsuits haven’t been settled yet), developers have been using private data to feed AI algorithms. OpenAI, Microsoft, and Meta are all facing numerous lawsuits contending that they scraped and used non-public data to train their LLMs. 

The New York Times was one of the first entities to challenge OpenAI and Microsoft in court over this issue, and it has since been joined by eight other large newspapers alleging that AI models were trained on their copyrighted material. In entirely separate cases, a wave of actors, musicians, authors, voice actors, and programmers have come forward with claims of their own. 

It would seem that at least some of this misuse happened on purpose. The New York Times provided recordings of Meta executives saying they deliberately used copyrighted material without obtaining the required permission. In their view, asking for permission would take too long and slow down the AI development process. 

Such blatant disregard for protected material is probably not an isolated incident. This spells trouble for AI companies, as it opens the door to massive lawsuits that drain funds and consume valuable time. 

Additionally, tech bubbles tend to expand based on public perception. If the general public is excited about the AI developments on the horizon, investors will leap into the frenzy for a chance at the action. If, however, public confidence in AI collapses, that bubble could burst quickly and spectacularly, before consumer-grade AI even gets off the ground. 

If AI developers lose their court battles and end up having to pay for licensing rights to the data they use to train LLMs, the costs will add up quickly. Given the enormous amount of data needed to train an algorithm, developers would have to license a vast number of copyrighted works, and the already substantial cost of AI would skyrocket even further. 

7. Humans aren’t so easily replaced. 

Have you heard that Walmart, Target, and other major retailers are scaling back on self-checkouts? When they first appeared, self-checkouts faced a situation similar to the one generative AI faces today, though on a much smaller scale. They were supposed to save retailers countless millions of dollars by replacing the salaries of human checkout workers. 

That grand plan hasn’t worked out like the retail giants thought. Customers miscount items, merchandise goes missing, and some folks walk away from the transaction entirely when they get frustrated by the experience. In many cases, self-checkouts cost more than employing human workers. 

If that one example isn’t compelling enough, consider that Toyota spent the past decade hiring human workers to replace its factory robots. The company learned the hard way that humans adapt to changing trends and conditions far more quickly than machines, which must be reprogrammed for every shift in function. 

It’s not possible to predict at this early stage how the AI boom will actually turn out. But it might be prudent to watch how the above developments play out before investing too much more in data center infrastructure. 
