Brain Labs

Brain Labs is a place for people to write about ideas. Original, thought-provoking ideas. We challenge writers to find patterns and make connections in fresh, logical, vigorous, engaging, and often counter-intuitive ways.


It Doesn’t Matter Who Wins the AI Wars



Photo by Google DeepMind

They may not say so publicly, but businesses treasure barriers to entry — anything that makes the sledding tougher for the competition, or for would-be competitors even to enter the race, means more profit and less pressure. Economists, looking through the other end of the telescope, despise barriers to entry because they thwart the “creative destruction” that makes everything we use better and less expensive.

Generative artificial intelligence (AI), which has dazzled us since its introduction just over two years ago, seemed comfortably ensconced behind some pretty tall barriers. First was the challenge of scale. To be effective, large language models (LLMs) — which underlie the chatbots that write novels, compose music, and generate or caption images — must be trained on a significant slice of accumulated human knowledge and creativity. Companies like OpenAI convinced investors that only massive scale could produce human-like behavior, and for a while it seemed they were right. The market coalesced around a few tech titans and their new vassals, the vanguard upstarts like Anthropic and OpenAI that took the titans’ billions at high valuations and threw them into more scaling.

Legal and regulatory barriers followed. Content creators sued the AI developers and the cases continue to grind slowly through the U.S. legal system, making the massive use of publicly available but copyrighted content seem risky. Further spooking potential new market entrants was an executive order issued by the Biden administration in October 2023, which proposed costly reporting, safety, and transparency requirements for generative AI that inadvertently favored the incumbents.

And finally, there was the cost and availability of the advanced processors used to train LLMs. Not only are they expensive “at scale,” and not only could the primary supplier, Nvidia, not keep up with demand, disadvantaging new entrants lacking existing capacity, but the U.S. government banned their export to China, severely hobbling foreign competition.

We all know how this story ends: doughty DeepSeek, a Chinese startup, introduced a state-of-the-art LLM trained using slower Chinese-made processors and with a development budget of about $6 million — a small fraction of the cost of OpenAI’s GPT-4 chatbot. Even the energy costs of training and use were dramatically lower. DeepSeek’s début wiped out nearly $1 trillion from the U.S. stock market in a single day; Nvidia alone lost some $600 billion in market value, the largest single-day loss for a company in stock market history. Suddenly, those high barriers seemed as illusory as the high company valuations they pumped.

Actually, that isn’t where the story ends — more like where it seriously begins. When barriers to entry fall, increased competition follows. But when barriers disappear, the market becomes “commoditized” and profit margins plummet. The story of AI is one of soaring breakthroughs and collapsing business prospects.

The origins of AI — if by that term we mean computer systems that can improve by learning and eventually perform traditionally human tasks with human-level proficiency — reach back to the 1950s. But it’s only been about 20 years since computers powerful enough to handle the large datasets and complex algorithms that underlie modern AI became widely available. Since then, throughout its surge and spread into every corner of science, technology, and business, AI has maintained a culture of openness about algorithms, architectures, data, and even source code.

Python, the most popular high-level coding language for AI, is — like most programming languages — open-source, along with its many libraries. When new computational architectures for AI are developed, they’re typically described in open-access repositories and quickly absorbed into Python libraries for convenient use. There are few patents or other restrictions. It is now possible to perform cutting-edge research on cheap computers with modest coding skills, since the hard work of designing AI frameworks has been done by others. Generative AI simplifies the task still further: today AI applications can be prompted into existence rather than programmed.

The story of AI, then, is one of perpetually self-destroying barriers to entry. Complexity is constantly being drawn under the hood so it becomes progressively easier to drive like a pro. Unlike academic research in biology and medicine, there are no lucrative licensing opportunities for AI innovations. The transformer architecture, which has powered the revolution in generative AI, was given away by Google in 2017.
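That openness is easy to see in practice: the transformer’s core operation, scaled dot-product attention, fits in a few lines of ordinary Python. The sketch below is schematic — the shapes, names, and random inputs are illustrative, not drawn from any particular published implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (seq, seq) pairwise similarity
    weights = softmax(scores, axis=-1)   # each row is a probability mix
    return weights @ V                   # weighted combination of values

# Toy example: a sequence of 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
Q = rng.standard_normal((seq_len, d_model))
K = rng.standard_normal((seq_len, d_model))
V = rng.standard_normal((seq_len, d_model))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed output vector per token
```

Real transformers stack many such layers with learned projections, but the freely published recipe is what every AI lab, large or small, builds on.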

The barrier-resistant character of AI development has not discouraged investment in AI-driven business. By 2024, AI startups had become the leading sector for venture investment. Any business emitting even a slight odor of tech faces withering investor scrutiny of its AI plans. A venture-backed company must have an AI strategy, to paraphrase H.L. Mencken, as a dog must have fleas.

While generative AI garners the most attention — in 2024, investors poured approximately $45 billion into this sector, nearly doubling the previous year’s tally — it is far from the only area of focus for entrepreneurs and established businesses. Prominent among the other areas is healthcare, which happens to be the focus of my own company, MedAEye Technologies. Using AI to analyze medical images, as we and others do, exemplifies both the promise of the technology and its challenge as a business.

Radiologists and pathologists must scrutinize dozens, sometimes hundreds, of images every working day to spot disease. It’s a tough job because the images can be extremely large (pathology slides), allowing small abnormalities to hide in plain sight, or have limited tonal variation (X-rays, mammograms, ultrasounds), which lets abnormalities blend in. AI can be quite effective at finding elusive disease: it excels at spotting patterns and never gets bored. And the need is urgent — even as the number of medical images produced every year rises, the number of clinicians available to interpret them has not kept pace.

A key barrier to entry in this field has been the need for training images, annotated by qualified experts, to teach AI to distinguish diseased from normal tissue. Physician time is scarce and expensive — hence the barrier. Over time, the culture of openness in AI has led to the availability of some public datasets for use by anyone.

But now, the need for professional expertise is disappearing altogether with the emergence of “foundation models” that can teach themselves. Instead of a few thousand annotated images, foundation models are trained on tens of thousands of images that are labeled merely as normal or diseased. The model figures out on its own where the disease is and how to recognize it in an image it hasn’t seen before. Foundation models can often be adapted to new tasks with some extra training. Many are open-source projects developed at academic institutions and made freely available.
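The adaptation step can be sketched schematically: a frozen foundation model supplies feature embeddings, and only a small classification head is trained on the new task. Everything below — the synthetic “embeddings,” the normal/diseased labels, and the logistic-regression head — is illustrative, not any particular model’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are embeddings from a frozen foundation model:
# 200 images, each reduced to a 16-dimensional feature vector.
X = rng.standard_normal((200, 16))
true_w = rng.standard_normal(16)
y = (X @ true_w > 0).astype(float)  # synthetic normal (0) / diseased (1) labels

# Train only a logistic-regression "head"; the base model never changes.
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of disease
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log loss
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
acc = (preds == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the expensive representation learning has already been done upstream, the task-specific training is cheap — which is precisely why it no longer functions as a barrier to entry.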

There is still (we believe) plenty of opportunity for economic success using AI to analyze medical images. But success will arise outside the AI itself. Features that simplify clinical workflows, expand access, and improve the user experience will add value to the rapidly commoditizing AI core.

This pattern of value deflation following breakthroughs is ubiquitous across the AI landscape. Ideas are shared and researchers find ways to circumvent obstacles. It should come as no surprise, therefore, that a small Chinese upstart could shatter the seemingly impregnable barriers of massive data needs and immense computational demands even as restrictions on both data and hardware have increased. It turns out that, even for generative AI, size isn’t everything.

Some, it should be noted, have questioned whether DeepSeek took some shortcuts and piggybacked on the efforts of OpenAI. Maybe, but DeepSeek has been relatively transparent about its models and techniques; the company appears to have demonstrated that innovative software optimizations and efficient training techniques can compensate for lack of scale to achieve competitive performance.

Investors still see great potential in generative AI. OpenAI is finalizing a $40 billion funding round, the largest ever for a private company, that values the firm at $300 billion.

But OpenAI lost $5 billion last year. And while its LLMs continue to amaze us, the vast majority of users pay little or nothing for them. The lesson of DeepSeek is that LLMs will follow the same trajectory as other AI — toward proliferation among many players with implementation costs and differentiation steadily diminishing. Maybe one LLM will garner headlines by solving some tough math problems; tomorrow everyone else will catch up and share how they did it.

Will this pattern hold even for enterprises adopting generative AI? There’s plenty of business demand for new AI capabilities. In May 2024, a global McKinsey survey found that a majority of organizations were already using generative AI regularly, and by some estimates, LLMs can already perform a meaningful share of work tasks in the U.S. Still, the U.S. Bureau of Labor Statistics projects that employment in professional, scientific, and technical services will rise 10% over the next eight years. Generative AI, in other words, is unlikely to replace many jobs. Outside of coding, where the impact on productivity and hiring has been substantial, the focus is on enhancement and augmentation rather than automation. Indeed, businesses may prove a weaker market for the broad but diffuse capabilities offered by LLMs than for task-focused “workaday” AI that, for example, evaluates credit risks and insurance claims.

For now, though, the investment boom continues and the Trump administration has, well, trumpeted its determination to ensure U.S. dominance in AI. A few days before the DeepSeek shock in January, President Trump announced a $500 billion investment in Stargate, which he called the “largest AI infrastructure project in history,” and revoked his predecessor’s AI guidelines and restrictions.

Government doesn’t have a great track record picking winners or funding breakthrough technologies — remember Solyndra? With AI, even picking the winners is unlikely to realize a sustainable competitive advantage because of AI’s inbred resistance to exclusivity. Solyndra at least had patents and proprietary technology.

Of course, there are good reasons to foster national expertise in AI, since it will power a growing share of what we use in our daily lives as well as the next generation of military hardware. A smarter strategy than industrial policy would be to focus more, not less, attention on education and attract more, not fewer, foreign students and H-1B visa holders. But pursuing national or corporate dominance over AI to amass wealth or stomp competitors will end in disappointment. In the race to develop better AI, the real winners are the users.



Written by Steven Frank

Steven Frank is the founder of MedAEye Technologies, which develops AI systems that help physicians spot disease in medical images.
