A Processor from 1997 Ran an AI Program
And this is what happened
The tech community gasped in unison a few weeks ago, not at the latest Apple keynote or NVIDIA’s trillion-dollar market cap, but at something that initially sounded laughably absurd: a successful test demonstrating that a contemporary AI model could run on a 1997 processor with a mere 128 MB (not GB) of RAM. Let that sink in. Not gigabytes. Megabytes. And not even a dual-core Pentium. This was an Intel Pentium MMX running at 233 MHz, a relic from the days when everyone was still on dial-up and Windows 95 was the hip new thing.
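For a sense of scale, here is a back-of-envelope memory budget. This is a minimal sketch: the article doesn't say which model was used, so the parameter count and quantization level below are purely illustrative assumptions, not the testers' actual figures.

```python
# Back-of-envelope check: could a small language model fit in 128 MB of RAM?
# Assumptions (illustrative, not from the article): a 15M-parameter model
# quantized to 8 bits (1 byte) per weight, plus a modest activation buffer.

PARAMS = 15_000_000          # hypothetical small-model parameter count
BYTES_PER_WEIGHT = 1         # 8-bit quantization
ACTIVATION_BUDGET_MB = 8     # rough working memory for inference

weights_mb = PARAMS * BYTES_PER_WEIGHT / (1024 ** 2)
total_mb = weights_mb + ACTIVATION_BUDGET_MB

print(f"Weights: ~{weights_mb:.1f} MB, total: ~{total_mb:.1f} MB of 128 MB")
# -> Weights: ~14.3 MB, total: ~22.3 MB of 128 MB, comfortably within budget
```

Even if you triple those assumptions, there is headroom left, which is why aggressive quantization and very small models are the usual ingredients in feats like this. The real bottleneck on a 233 MHz chip is speed, not capacity.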
But the implications of this test are no joke. They challenge long-held assumptions about artificial intelligence's resource-hungry appetite, and they raise sobering questions about the future trajectory of modern computing, the accessibility of AI, and even the long-term viability of our digital infrastructure.
Let's break down how this happened, what it actually means, and why everyone in tech, from programmers to environmental activists, should care.
The test was conducted by researcher Gregor Lushinsky and a group of open-source hardware and AI enthusiasts. They were not attempting to…