AI AND TECHNOLOGY
Decentralized AI: Empowerment or Chaos?
How DeepSeek made a large step towards the decentralization of AI and why this matters more than you may think
On Monday, the 27th of January 2025, the tech world woke up to DeepSeek — a Chinese startup virtually emerging out of nowhere — and its new AI model, DeepSeek V3.
Unlike other AI models, DeepSeek V3 is open-sourced, significantly cheaper, and runs on far-from-state-of-the-art Nvidia GPUs, yet its performance is roughly on par with other state-of-the-art AI models.
Naturally, everyone lost their minds: Nvidia's stock plunged roughly 17% in a single day, erasing close to $600 billion in market value, and US tech stocks tumbled with it.
On the other side of the globe, DeepSeek's success was hailed as a national triumph, and the Hang Seng Tech Index — the index tracking Chinese tech stocks — rallied in the weeks that followed.
So, what is DeepSeek?
DeepSeek is a Chinese AI company founded in 2023 by Liang Wenfeng, which has rapidly evolved into a major player in the AI sector. DeepSeek's AI models have quickly caught the industry's and the public's attention, due to a bunch of key differences from the industry's status quo. To name a few:
- Significantly lower computational resources — far fewer, and far less fancy, GPUs.
- Open-source — they have released several models to the open-source community, including DeepSeek-R1 and DeepSeek-V3. DeepSeek-V3 is a Mixture-of-Experts (MoE) model, meaning only a fraction of its parameters activate for any given token. In particular, it is not only open source but released under a permissive license, allowing commercial use (!).
- Crazy growth — in just two years they went from zero to a valuation in the billions.
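To give a feel for why the Mixture-of-Experts design mentioned above keeps inference cheap, here is a toy sketch of top-k expert routing in plain NumPy. Every size and weight in it is a made-up illustrative value; it sketches the general MoE idea, not DeepSeek's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8  # total expert feed-forward networks in the layer
TOP_K = 2      # experts actually activated per token
D = 16         # hidden dimension (toy value)

# Each "expert" is a tiny feed-forward layer (one weight matrix here).
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)  # routing weights

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]  # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts only
    # Only k of the n experts run, so compute scales with k, not n.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
```

The point of the design: the layer holds the capacity of all eight experts, but each token pays the compute cost of only two of them.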
In short, DeepSeek V3 isn't just another AI model. It's proof of concept that state-of-the-art AI can be open-source, cheap, and able to run on older hardware. That is a rather big disruption to the current pricey AI landscape, full of closed doors.
But why does this matter?
But why does it matter? There are dozens of AI models out there; what difference does one more make?
There is only a handful of AI companies with their respective AI models, each and every one of them costing a whole lot of money to keep up and running. For instance, the training cost alone for ChatGPT was reportedly a cheeky $100m along with some 100,000 GPUs, let alone the hundreds of thousands of dollars it takes to run it on a daily basis.
Undeniably, the performance improvements of state-of-the-art AI models like ChatGPT, Llama, or Claude in recent years are impressive, to say the least, and one can only expect such models to become even more complex and 'intelligent' in the future. However, the cost and resources required to run them constrain such models to remain centralized.
Money problems aside, having a few tech giants like Meta, OpenAI, or Google centrally control AI models comes with other significant issues, as you may imagine: data privacy, censorship, and monopolization, to name a few. It is no secret that personal data and attention are the main commodities of the 21st century, and nowadays we are essentially surrendering them willingly to a few big players and letting them decide our fates.
The reality of such corporations amassing gigantic amounts of personal data inevitably raises ethical and privacy questions, and renders users vulnerable to privacy violations, hacking, or surveillance, irrespective of how law-abiding and good-hearted those corporations may be. On top of this, corporations in control of AI models can largely influence which content gets promoted or silenced.
An AI model biased towards specific opinions, or shaped by corporate interests, can greatly affect our perception of reality. Thus, when a couple of companies control, to a great extent, not only AI infrastructure but also AI applications — what one sees, hears, believes, or buys — it is safe to assume that such a concentration of power will inevitably create imbalances and inequalities in society.
On a short political note, the DeepSeek V3 release is also extremely important in the context of the US–China tech rivalry and the US export controls on advanced chips, with China aiming to reduce reliance on Western chipmakers selling it last-generation GPUs (like Nvidia) and to accelerate homegrown innovation.
Anyways!
Once upon a time, computers were so massive that they occupied entire rooms. No one could imagine having a personal computer back then. Ken Olsen — a leading figure in the computing industry of the 1970s — even went as far as to say that "There is no reason anyone would want a computer in their home." Mr. Olsen's statement aged like milk: thanks to transistors and integrated circuits, computers gradually became smaller, faster, and more efficient. Once personal computers landed on the desktops of every household in the 1990s and 2000s, the impact on society was immense, for instance with blogging, e-commerce, social media, and even the dark web.
So, one may assume that the real breakthrough lies not in making 'smarter' AI models — current AI models are already smart enough to help the average person with their daily tasks — but rather in creating AI models that are simpler, cheaper, and open the door to decentralization. And this is exactly what DeepSeek did.
DeepSeek V3 allegedly performs on par with ChatGPT, Llama, Claude, or any other state-of-the-art AI model, but at a fraction of the cost — DeepSeek reports a training cost of just $5.6m, or in other words 94.4% less than ChatGPT!
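The 94.4% figure is easy to sanity-check from the two headline numbers quoted in this article (both reported figures, not official disclosures):

```python
# Sanity-check the cost comparison quoted above.
chatgpt_training_cost = 100e6   # ~$100M, the figure cited earlier for ChatGPT
deepseek_training_cost = 5.6e6  # $5.6M, as reported for DeepSeek V3
savings = 1 - deepseek_training_cost / chatgpt_training_cost
print(f"DeepSeek V3 was {savings:.1%} cheaper to train")  # 94.4% cheaper
```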
And here’s the trick!
A simpler and cheaper AI model allows for massive, decentralized distribution: the potential to run it locally rather than relying on a single model run centrally. That is a big deal — the real endgame of AI, if you will! Running AI models in a decentralized way would open endless potential while allowing for privacy, security, and autonomy.
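To make "running it locally" concrete, here is some back-of-the-envelope memory math. The parameter count and quantization levels below are generic illustrative assumptions (a 7B-parameter open model, the size hobbyists commonly run locally), not DeepSeek's actual specs:

```python
# Rough memory needed just to hold a model's weights in RAM/VRAM.
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Bytes for the weights alone (ignores activations and KV cache)."""
    return n_params * bits_per_param / 8 / 1e9

PARAMS_7B = 7e9  # hypothetical small open model, not a DeepSeek figure
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_memory_gb(PARAMS_7B, bits):.1f} GB")
# 4-bit quantization shrinks a 7B model from 14 GB to 3.5 GB of weights,
# i.e. within reach of an ordinary laptop.
```

This is why cheaper, smaller, permissively licensed models matter for decentralization: the hardware barrier drops from data-center clusters to consumer machines.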
DIY truth
The real question is, if you had a personal ChatGPT at home, what would you use it for? If mommy OpenAI wasn’t paying attention anymore, would we still be good girls and boys? Hmm…
- Maybe you could use it to convince everyone on Reddit that we are living in a simulation, by creating endless discussions with people who suspect the exact same thing.
- Create an army of AI-generated Karens flooding customer service lines, demanding to speak to the manager.
- Write a bunch of fake papers, citing fake sources, proving that world leaders are actually shapeshifting reptilian aliens. 🦎
- Start a cult and convince everyone you're the chosen one.
Persuade an entire subreddit that birds aren’t real; flood academia with AI-generated paradoxes; overflow the stock market with fake insider leaks — the possibilities are endless. Oh, I can’t wait! What I am trying to say is that, given a personal, local AI model, it would be really easy for anyone to cause misinformation chaos.
And it seems that we have kind of accepted that this misinformation chaos is eventually going to occur in one way or another — Meta's January 2025 decision to replace third-party fact-checking with community notes is a huge piece of this. Essentially, the largest social media platform told us "eh, good luck" 🤷♀️ — reality is now a puzzle you have to assemble yourself! The change is intended to reduce censorship and promote free expression, but will it though? If the most popular, or loudest, version of reality wins, social media platforms will most likely turn into mere AI playgrounds.
Ultimately, what will it be? A utopia of AI democratization, or a chaotic world of misinformation? I believe that the decentralization of AI will eventually become a reality — much like the scale-down from massive mainframe machines to personal computers. The big question that remains is whether a decentralized AI structure can also be a functional one, or whether, after all, we need big tech mommy and daddy at the wheel.
In my mind
The DeepSeek V3 model was a gigantic leap in the evolution of AI — cheap, local, and uncensored. It's probably much more important than we are able to realize right now. Essentially, it opens up the path to the decentralization of AI, with whatever consequences — good or bad — that may bring.
Ultimately, the decentralization of AI is inevitable, but will it result in empowerment or chaos? For sure, such a shift could provide freedom from corporate control, privacy, and autonomy, as well as an explosion of innovation.
But what if it’s not that simple? What happens in a world where anyone gets to deploy an AI model running unchecked, overflowing the internet with personalized misinformation, bias, and deepfakes indistinguishable from the truth? What happens when everyone has their own version of reality, but no one has the tools to verify any of it?