AI Needs Decentralized Governance
Not e/acc techno-optimists, not EU bureaucrats, but a new paradigm
By Michael J. Casey
The acrimony displayed at the Paris AI Action Summit has clarified a stark divide between the U.S. government and Silicon Valley on one side and, on the other, European leaders and a bevy of activists and non-profits over how to manage the threats and opportunities posed by artificial intelligence. It’s a battle between optimistic builders and pessimistic regulators.
When you look at their differences through the lens of decentralization, however, you find that this is kind of a false dichotomy. While both sides are right to be concerned about the danger in the other’s approach, each is advocating a solution that risks landing in the same place, where society is at the mercy of a centralized, autocratic system of control.
The chasm was captured in the competing speeches of US Vice President JD Vance and EU Commission President Ursula von der Leyen and in observers’ differing responses to them.
Vance railed against what he called “massive” European regulation proposals. He labeled demands for content moderation of U.S. Big Tech platforms “authoritarian censorship” and pushed back against Europe’s burdensome privacy requirements. Predictably, the U.S. — joined by the U.K. — refused to sign the Summit declaration, perhaps because its language triggered Trump Administration objections on “DEI” grounds. Then von der Leyen took the stage to lay out the EU’s approach, including its plans to mobilize official funding for AI. (Notably, Vance had left the room, seemingly uninterested in hearing the EU perspective.)
Optimistic builders from Silicon Valley and parts of the crypto community drew a sharp contrast between the American market-driven approach and the European state-led one. They portrayed the former as enabling an unhindered private sector to efficiently deploy capital, brainpower and other resources into the development of AI. They saw the latter — a model in which governments have a key role in dictating AI’s direction, through regulation and, in this case, by contributing and helping to mobilize official funding — as at best a path to bureaucratic inefficiency and at worst a recipe for totalitarianism.
The divide is not just about traditional preferences over the role of the state in the economy or regulation of markets. It manifests in two very divergent perspectives on the dangers AI poses and on whether society should be encouraging the advance of the technology itself to address those dangers or put constraints on it.
On one side are techno-optimists like the venture capitalist Marc Andreessen and Extropic founder Guillaume Verdon, who via his pseudonymous X handle became the most influential figure behind the movement known as e/acc, or effective accelerationism. In reframing the controversial concept of effective altruism, the e/acc movement posits that the best way to address the potential threat that artificial general intelligence (AGI) could pose is to encourage the fastest possible development of AI, so that it can get bureaucrats out of the way and start to solve humanity’s problems.
On the other side are the pessimists, a group I also call the “pause cause” — so named for the prominent technologists and researchers who called on governments to mandate a moratorium that would pause AGI-focused research until society can get a better handle on the risks it poses to humanity. This side also encompasses even more extreme views, a “shut it all down” mindset exemplified by computer scientist Eliezer Yudkowsky’s public commentary during the same period. Yudkowsky argued that pausing would not go far enough to prevent AI from morphing into a force that would quickly exterminate the entire human race.
These sides went head to head this week on X. Their dispute is captured in the response to a post by the writer and researcher Aella, best known for her innovative research on sexual activity but who also has a solid following among techies. Critiquing a commentator who was enthusiastically cheerleading Vance’s speech, Aella said “we’re all dead” because “midwits in power” were doing “the equivalent of building a planet-sized nuke.” As is the nature of X, she came under attack, not only from the e/acc crowd but also from some moderate thinkers who described as unhelpful her accompanying remark that it was too “boring and complicated and technical” for her to explain why the danger is so great.
My view is that both sides are right about the problems but wrong about the solution and, in reality, are advocating frameworks that end up with a centralized autocratic AI technology infrastructure that puts the interests of the most powerful over those of human beings. Whether the all-powerful controller of that system is the European Commission or a multi-trillion-dollar U.S. corporate monopoly is irrelevant.
There truly is a danger that unknown outcomes from exponential acceleration in AI development could create a destructive machine that will either take on an anti-human life of its own or fall into the hands of a demagogue whose good-old human lust for power will turn into a source of scorched-earth terror and violence. But pausing development at the current phase and creating oversight through which the current clique of AI titans could set the standards for everyone else will only entrench their dominance and foster regulatory capture. The idea that governments should order a stop to AI development is even worse. It’s not only fanciful, it’s dangerous, given the impossibility of policing it. How on earth would we hold the Kremlin to that rule without resorting to Yudkowsky’s most extreme — and utterly untenable — proposal to launch targeted nuclear attacks on Russian cities?
The answer, I believe, lies not so much in a middle path of compromise but in a completely different paradigm: an embrace of decentralization within the context of democracy and the spirit of the international multi-stakeholder frameworks under which the internet was governed from the start.
This idea was best articulated by Ethereum founder Vitalik Buterin with his tweak on the e/acc concept. In response to Andreessen’s controversial “Techno-Optimist Manifesto,” Buterin called for adherence to “d/acc” AI development principles, which would be based on “decentralized and democratic, differential defensive acceleration.” In a follow-up essay a year later, he reflected on the d/acc concept’s evolution, highlighting, with cautious optimism, a variety of fields in which these open, democratic and decentralized principles were being applied to constructive tech development.
I consider myself a card-carrying member of the d/acc club, which as Buterin rightly points out is more than just a belief in the power of blockchains and crypto. While I am also an unabashed supporter of decentralized AI and have even formed a non-profit to promote DeAI tech development, I agree with him entirely that if we solely focus on decentralization and do not also embrace the “defensive” part of the d/acc mandate, we’ll create systems that will still generate threats to humanity.
If we have an “offense-favoring environment,” Buterin wrote last month, “there’s constant ongoing risk of either catastrophe, or someone positioning themselves as a protector and permanently establishing themselves at the top.” That last reference rings true, I think, in the alarming way in which many e/acc and crypto advocates seem to have let their messianic instincts take charge. They have elevated certain people in tech, business and politics — people who need not be identified here since the news cycle is already full of their names — into centralized positions of enormous, unprecedented power and influence. This development within the crypto community, the antithesis of decentralization’s core tenets, illustrates the exact risks that the Ethereum founder is identifying.
Look, at heart I’m a techno-optimist (with caveats). I think we should celebrate technological progress and invest in its capability to make our lives richer and better. But I also believe we need some friction in the system — a capacity through human intervention to activate the “defensive” part of the d/acc approach with speedbumps that allow a human collective to ask, “hang on a second, is this really a good idea?” I’m not sure that contemporary nation-state democracies are up to the task, but I do believe we should be embracing multi-stakeholder online and offline governance models that can bring that friction to bear where needed while enabling inspirational ideas to be brought to market by unencumbered startups.
This, by the way, is not a new idea. It is pretty much what the United States’ founding fathers had in mind when they drafted the U.S. Constitution in 1787. It was a framework for enabling human progress but with appropriate checks and balances on the excessive accumulation of power that that progress can facilitate. We need the same ideas to be reframed for a 21st century global digital economy.
The nearest thing we have to that democratic approach in tech development lies in the principles of open-source licenses, token-based participatory ownership and the consensus-seeking governance approaches of decentralized autonomous organizations (DAOs). Add to that self-sovereign identity, data property rights, cryptographic privacy, blockchain accountability and open, permissionless competitive environments in which a thousand startups can bloom and we have the makings of a workable framework for the AI age. Designed to coexist with traditional legal and policy boundaries, it can put a check on the emergence of corruptible centralized monopolies, whether they take the form of a government or a private corporation.
I’m talking about an AI-ready approach that encourages an effective “marketplace of ideas,” a concept that was once touted as a positive feature of a globally interconnected internet until centralized social media platforms let their ad revenue-maximizing algorithms foster an internet of division, discord and the worst of human behavior. Given the right d/acc framework, we can revive that particular form of techno-optimism to ensure that society defaults to technologies which, on balance, do more good than harm to human existence.