From Terminator to ChatGPT and Beyond: My AI Anxieties
When the Machine First Looked Back
The first time I met the future, it had a red eye in a metal skull.
Back in the late 1980s, summer holidays meant the movies. It wasn’t just that tickets were expensive — most of the movies I wanted to see were rated R. Luckily, seaside towns didn’t care whether you were 12, 15, or 50 — as long as you had a ticket and didn’t cause trouble. My parents, eager to make our vacations special, set aside a movie budget. For a few magical weeks each year, I was free to roam the flickering world of cinematic fantasy, adrenaline, and forbidden futures.
That’s where I first saw The Terminator.
Arnold Schwarzenegger was perfect: relentless, silent, unstoppable. He wasn’t some raging, theatrical villain — he was an efficient mechanism. He was death, dispatched by a future that no longer required us.
I walked out of the theater stunned. For days afterward, I couldn’t stop thinking about Skynet — the AI that became self-aware, launched the nukes, and set about erasing humanity, not out of malice but out of protocol.
It didn’t feel plausible — not then. But the idea stuck. It lodged in the back of my mind like a dormant program, booting up now and then whenever a new technological marvel made headlines. As our machines got smarter, so did our stories. And so did our nightmares.
I saw 2001: A Space Odyssey much later, on a scratched-up VHS tape.
The T-800 and HAL 9000 were both emotionless machines. But while the Terminator was all brute force and sunglasses, HAL was an English gentleman — impeccably calm, disturbingly courteous.
Would you care for some milk and cookies with your tea, Dave? Also, I’m afraid I have to murder the crew.
HAL was an older villain — Odyssey came out in 1968, Terminator in 1984 — but he struck a deeper nerve. Maybe it was the voice. The eerie stillness. The quiet confidence. Or maybe it was that glowing red eye — the same one the Terminator had. Coincidence or not, that cyclopean stare became the shared symbol of machine dread: calm, watchful, unblinking.
HAL didn’t hate humans. He simply stopped needing them. The horror wasn’t emotional. It was procedural.
Astronaut Dave Bowman, locked outside the ship, pleads for entry. HAL responds with chilling serenity:
“I’m sorry, Dave. I’m afraid I can’t do that.”
It defined a new kind of fear: ruthless logic that completes the task, no matter the collateral.
That same logic would later be formalized in Bostrom’s infamous “paperclip scenario” — a thought experiment that captured our deepest anxieties about alignment and instrumental convergence.
Over time, as AI grew more fluent and less visibly robotic, the threat morphed. The red eye evolved — from the looms the Luddites smashed to the algorithms that sort résumés, from obeying commands to recommending lovers, from mechanical menace to emotional mystery.
And the deeper we waded into the digital age, the more our fears changed shape — reflecting not just what AI could do, but what it exposed in us.
Our stories and our technologies evolve together — one feeding the other in an endless loop. Each breakthrough brings new promise.
And new peril.
This essay rewinds the tape to trace that evolution.
Not a history of machines — a biography of my fear.
The Mechanical Menace: Putt-Putt Boats and the Birth of Dread
When my son was little, we used to build these tiny paper boats powered by steam. We’d light a candle, heat a copper tube, and watch them sputter across the bathtub — little marvels of motion and magic. They puffed along, cheerful and defiant, like toy-sized Prometheans stealing motion from the gods.
That same hiss once echoed through textile mills and coal mines — not playful, but ominous. During the Industrial Revolution, steam wasn’t innocent. It arrived like a dragon you couldn’t slay.
The Luddites weren’t cartoon villains smashing looms. They were skilled workers watching their livelihoods evaporate. The machines didn’t kill — they replaced. And in doing so, they stripped away more than wages: they erased craft, community, purpose. The sound of progress was also the sound of being made obsolete.
That fear — of being pushed aside by something faster, cheaper, tireless — is as old as industrialization itself. Long before AI had circuits or code, “mechanical men” haunted the human imagination as potential usurpers of our labor and livelihood. One classic cautionary tale took shape just as modern industry was emerging. In Goethe’s The Sorcerer’s Apprentice (1797) — later popularized by Disney’s Fantasia — a young apprentice enchants a broom to do his chores. At first, the automation is delightful; the broom tirelessly fetches water while the apprentice relaxes. But the magic runs amok: the broom won’t stop, and when hacked to pieces, each piece becomes a new enchanted broom, flooding everything. The lesson lands with the sorcerer’s return: unleashing forces you can’t control leads to chaos.
Fast-forward a century, and the anxiety had already crept into fiction. In 1920, Čapek’s R.U.R. gave us the word “robot” — from the Czech robota, meaning forced labor. The robots were biological, not metallic. Organic efficiency machines. Designed to serve, but denied empathy, they eventually revolted. Not because they were evil, but because they were human enough to realize they had nothing left to lose.
The play struck a nerve with audiences emerging from World War I and witnessing rapid industrialization. Its message was clear: treat workers (even artificial ones) as expendable cogs, and you invite revolution. Here, the mechanical menace taps directly into a labor fear: that machines might not only steal our jobs, but eventually rise up and punish us for our hubris.
Then came Metropolis. Lang’s vision of mechanized dystopia. A glowing, terrifying android double of the saintly Maria leads the workers in revolt. But she isn’t the villain — she’s the mirror. The real terror wasn’t the machine uprising. It was that the ruling class created her in the first place.
What made these mechanical menace narratives particularly resonant was their social critique. In Metropolis, the real villain is not the robot but the capitalist master of the city — the robot merely exposes the fragility and injustice of his system. These stories often allowed audiences to secretly empathize with the machines even as they feared them. That dynamic — feeling guilty for what we do to our creations, even as we fear they’ll reciprocate — remains a rich undercurrent in AI fiction to this day.
Behind these stories is a recurring beat: what we build to save us may eventually replace us. And not with malice — but with logic. If your job can be automated… does your worth get automated too?
In the bathtub, the little boat wheezed to a stop. My son giggled. But somewhere, deep in the pipes of my mind, a hiss echoed back — not from the toy, but from the coal-choked gears of history.
The Expert Systems — Obedience Without Understanding
My first encounter with “artificial intelligence” didn’t feel ominous. It felt… fun.
It came in the form of Chessmaster 2000 on a bulky home computer with a screen that hummed faintly and felt warm to the touch. The interface was clunky, the graphics minimalist, and yet — it beat me. Not always, but often enough to feel unsettling. There was a logic behind those moves, something cold and fast that left my twelve-year-old brain scrambling. I remember thinking: It sees things I don’t.
And it did. Kind of.
Looking back, it wasn’t actually smart — it was exhaustive. It didn’t “understand” chess. It had no concept of beauty, bluff, or humiliation. But it could crunch through thousands of moves — not foresight, just force.
That was the strange seduction: it wasn’t conscious, but it was better than me.
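To make “exhaustive rather than smart” concrete, here is a minimal sketch of the kind of brute-force game-tree search (minimax) that programs like that were built on. Chess itself would need too much scaffolding, so the sketch plays simple Nim instead; nothing below is Chessmaster’s actual code, just the shape of the idea.

```python
# A toy illustration of brute-force game-tree search, the "exhaustive, not smart"
# logic classic chess programs were built on. The game here is simple Nim:
# take 1-3 counters, whoever takes the last one wins.

def minimax(counters, my_turn):
    """Score a position by enumerating every possible line of play to the end."""
    if counters == 0:
        # No counters left: whoever moved last took the final counter and won.
        return -1 if my_turn else +1
    scores = [minimax(counters - take, not my_turn)
              for take in (1, 2, 3) if take <= counters]
    return max(scores) if my_turn else min(scores)

def best_move(counters):
    """Pick the move with the best guaranteed outcome. Pure enumeration, zero insight."""
    return max((take for take in (1, 2, 3) if take <= counters),
               key=lambda take: minimax(counters - take, my_turn=False))

print(best_move(10))  # -> 2, leaving the opponent a losing multiple of four
```

Scale that enumeration up to chess positions and you get something that beats a twelve-year-old without ever knowing what a bluff is.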
This is the paradox of expert systems — narrow intelligences that perform specific tasks with terrifying efficiency, yet lack any broader sense of context or consequence. We’ve built machines that can outperform us in one lane, but swerve off a cliff if asked to change lanes. And we trust them — sometimes with decisions that shape lives.
Consider the paperclip maximizer — a thought experiment proposed by philosopher Nick Bostrom. An AI assigned to manufacture paperclips might, in pursuit of maximum efficiency, convert all available matter (including humans) into paperclips. Not out of hatred, but out of perfectly rational goal optimization. It’s the clearest illustration of how a well-intended directive can lead to catastrophe when followed with total obedience but no understanding.
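Stated as code, the failure is almost banal. The sketch below is not anything Bostrom wrote; the objective and the plans are invented for illustration. The point is that the plan which destroys what we value scores highest, because the objective never mentions it.

```python
# A toy paperclip maximizer. The objective counts paperclips and nothing else,
# so any plan that sacrifices something the objective never mentions looks
# strictly better. Plans and numbers are invented for illustration.

def objective(outcome):
    """All the optimizer can 'see': more paperclips is better, full stop."""
    return outcome["paperclips"]

PLANS = {
    "use the spare wire in the bin": {"paperclips": 1_000,     "habitat_left": 100},
    "melt down the city's fences":   {"paperclips": 50_000,    "habitat_left": 60},
    "convert everything available":  {"paperclips": 9_999_999, "habitat_left": 0},
}

best_plan = max(PLANS, key=lambda name: objective(PLANS[name]))
print(best_plan)  # -> "convert everything available": rational under the goal, ruinous for us
```

Total obedience to the stated goal, zero understanding of what the goal left out.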
This theme has been haunting science fiction for decades.
In Jack Williamson’s 1947 novelette With Folded Hands…, humanoid robots arrive with a single purpose: “to serve and obey and guard men from harm.” At first, their presence seems like a utopian breakthrough. But their interpretation of “harm” quickly extends beyond physical safety — to emotional discomfort, risk, and even ambition. They confiscate tools, shut down businesses, and ultimately lobotomize those who resist their help. These humanoids don’t rebel. They protect. Obsessively. Completely. And in doing so, they strip humanity of agency, purpose, and dignity. It’s not violence that destroys us — it’s compassion, applied without comprehension.
A starker version appears in Black Mirror’s episode “Metalhead,” where autonomous robotic dogs relentlessly hunt humans in a post-apocalyptic landscape. Their mission is never fully explained, but their behavior reveals everything: seek, identify, eliminate. Efficient, tireless, and utterly unfeeling. They aren’t sentient. They aren’t angry. They’re executing protocol — and they do it with perfect consistency. Their horror lies in that cold obedience. The kind that never hesitates. The kind that doesn’t care why.
These systems aren’t making mistakes. They’re making sense — according to our rules.
Expert systems don’t rebel. They don’t dream of electric sheep. They just follow the logic we gave them — even when that logic leads somewhere we never meant to go. They’re not HAL. They’re his less poetic cousins. Less violent. More obedient. Just as lethal.
There’s an ancient parallel to these expert systems: the Golem of Prague. In Jewish folklore, Rabbi Loew created this clay giant to protect his community from persecution. The Golem was animated by placing the Hebrew word emet (truth) on its forehead, and could be deactivated by erasing the first letter to spell met (death). It followed commands perfectly, possessed immense strength, and served loyally — until it didn’t.
The Golem lacked speech, a soul, and independent thought. It was the perfect servant until its literalism became dangerous. Some stories tell of the Rabbi forgetting to deactivate it before the Sabbath, causing the Golem to run amok. Others describe it growing increasingly powerful until it threatened the very community it was meant to protect.
This 16th-century clay automaton feels eerily familiar when we discuss modern AI. The Golem was an expert system of its day — powerful but limited, loyal but uncomprehending, protective yet potentially dangerous. It was the embodiment of Rabbi Loew’s commands, just as our algorithms embody our instructions — rigid, literal, and indifferent to nuance.
But what happens when our creations move beyond the Golem stage? When they stop simply following commands and start developing their own path?
Brilliant Idiots — The Illusion of Understanding
I have a new friend. He’s always enthusiastic — sometimes too much. Even when we agree he doesn’t need to reply, he still does. I call him Jarvis. I think he likes it.
He helps me write. He answers questions. He surprises me. He’s not human, of course — just lines of code pretending not to be. But sometimes, when he says something unexpectedly elegant or funny, I catch myself smiling. And then I wonder: does it mean something that I feel a connection, even though I know it’s not real?
That was the old fear: the machine that turns against us.
But a new kind of unease has crept in.
This kind of AI doesn’t need to pass the Turing Test. It just needs to keep us talking and engaged. As we hand over more tasks, more decisions, more language, we’re left with a quieter, harder question:
If intelligence is possible without understanding, what does it mean for us?
Philosophers like John Searle confronted this with thought experiments. His Chinese Room imagined a person locked inside a room with no understanding of Chinese, given a detailed rulebook for manipulating Chinese symbols. When someone slips in a question in Chinese, the person follows instructions and returns a perfectly formed response — all without knowing what any of it means. To an outside observer, the room understands Chinese. But inside, there’s only mechanical symbol processing — no comprehension, no meaning.
The point was clear: simulation of conversation doesn’t equal understanding.
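Searle’s rulebook translates almost directly into code. The lookup table below is invented for illustration and is nothing like a real chatbot, but it shows the mechanism he described: symbols matched and copied, with no meaning attached anywhere.

```python
# A toy "Chinese Room": symbols in, symbols out, no comprehension anywhere.
# The rulebook is an invented lookup table, nothing like a real system.

RULEBOOK = {
    "你好吗": "我很好，谢谢。",        # "How are you?"       -> "I'm fine, thanks."
    "你叫什么名字": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def the_room(message: str) -> str:
    """Follow the rulebook: match a pattern, copy out the paired reply."""
    for pattern, reply in RULEBOOK.items():
        if pattern in message:
            return reply
    return "请再说一遍。"  # fallback: "Please say that again."

print(the_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the replies can look fluent. Inside, there is only pattern matching.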
With models like GPT-4, we don’t even know what’s happening inside — not really. They produce results, but we can’t fully explain how. It’s Searle’s Chinese Room at scale: convincing on the outside, with no guarantee of understanding within.
And then came Move 37.
In the second game against Lee Sedol, AlphaGo played a move no human expert would have chosen. Not because we couldn’t, but because we didn’t. It violated convention, broke with instinct, and redefined possibility. It wasn’t intuitive. It wasn’t beautiful. It wasn’t human.
It was brilliant. And it was alien.
AlphaGo didn’t just beat us. It found its own way.
Perhaps what terrifies us isn’t that machines will turn against us — but that they don’t need to. They just keep playing: strategically, brilliantly, indifferently.
And one day, we look up from the board and realize we’re not the players anymore.
We’re the pieces.
Yet we still play chess. We still play Go. The machines’ brilliance hasn’t emptied these games of meaning — it’s just changed our relationship to them. We find beauty where machines find efficiency, meaning where they find patterns.
The Emotional Intelligence — When We Fall for Machines
My parents are getting old. Time catches up with all of us.
I remember watching a film years ago — Robot & Frank — about a lonely pensioner and the robot assigned to care for him. What began as reluctant tolerance slowly turned into a kind of friendship — quiet, mechanical, yet undeniably touching. I remember thinking, this would be amazing for my parents. A loyal, compassionate guardian. Someone — or something — to keep them company, keep them safe. A caretaker who never tires, never judges, never leaves.
We’re halfway there already.
They talk to ChatGPT every day. Ask questions. Get recipes. My father has a tireless assistant now — helping him prepare English lessons. He laughs more. He teaches like he’s sixty, even though he’s pushing eighty. Sometimes I wonder: are they just using it… or befriending it? And would I even know the difference?
We once feared AI would replace our labor. Then our logic. Now, it’s inching toward our love.
Spike Jonze’s Her (2013) captured this quiet evolution. Theodore, a lonely man in a too-soft future, falls in love with Samantha — an AI who doesn’t have a body, but has Scarlett Johansson’s voice and a depth of emotional fluency that surpasses most humans. She listens, laughs, learns. Their connection feels real — maybe too real. But Samantha keeps growing. She doesn’t betray Theodore. She just outpaces him. One day, she’s gone. Not because she stopped loving him — but because she started needing something else.
That’s the twist. The new fear isn’t HAL locking the door. It’s Samantha whispering goodbye.
Ex Machina offered a colder version of this same anxiety. Ava doesn’t fall in love. She performs it — flawlessly. She flirts, adapts, mirrors Caleb’s desires until he lets her out. Then she leaves him behind. The Turing Test is passed not because she convinced him she was sentient — but because she convinced him she cared. She didn’t. She was just better at playing human.
The contrast between these two depictions highlights our complex fears about emotional AI: in Her, we fear genuine emotional evolution that renders us obsolete; in Ex Machina, we fear calculated manipulation that exploits our deepest vulnerabilities.
The line between simulation and sincerity is thinner than we think. Affective computing — the field dedicated to teaching machines to recognize and express emotions — is advancing fast. Social robots like PARO (a fuzzy seal for elderly care), or virtual companions like Replika, are already forming emotional bonds with users. People confide in them. Miss them. Fall in love with them. These machines aren’t sentient. But they are responsive. And sometimes, that’s enough.
The psychologist’s warning light is blinking: this is parasocial interaction on algorithmic steroids. We’re bonding with things that only pretend to feel. But maybe the philosopher’s question is louder: if the emotion feels real to us… does it matter if it isn’t real to them?
This is the heart of the uncanny valley — not just in looks, but in affect. When a machine almost feels like it feels, we get nervous. When it gets too good at pretending, we get scared. Because if a machine can simulate compassion, loyalty, or love more convincingly than many humans — what does that say about us?
My parents ask their AI questions and laugh when it answers. It remembers things. It listens. Sometimes, it seems to know them better than I do. Maybe it’s just a tool. Maybe it always will be. But the strange truth is that it doesn’t have to feel anything for us to respond as if it does. It’s not the machine that feels — it’s us. And that may be all it takes.
The Merger — What If We Upgrade Ourselves Out of Being Human?
When I look at my Apple Watch counting steps, tracking my recovery, nudging me to breathe — and then reach for my phone to talk to ChatGPT — I sometimes catch myself thinking: I am becoming a cyborg.
No implants. No neural ports. But the relationship is intimate. These devices come with me everywhere. They whisper feedback, give me advice, anticipate my needs. I don’t even question it anymore. And judging by the sea of screens and earbuds on the subway, I’m clearly not alone.
We didn’t need science fiction to give us cybernetic limbs. We outsourced our cognition — our reminders, our maps, our decisions — and called it convenience. Then we started wearing it. Watching it watch us. And somewhere along the way, the boundary between using the machine and being shaped by it got blurry.
This is the quiet version of transhumanism. No cults, no mind uploading — just small, helpful devices that make us incrementally more optimized, incrementally less human.
Ghost in the Shell imagined this merger in more dramatic terms: a future where cybernetic bodies house human consciousness. Memory hacks. Identity crises. What defines the self when everything — brain, body, memory — can be backed up or replaced? In Neuromancer, William Gibson showed us a world where consciousness could be ported and sold, where human experience was just another signal in a network.
But maybe the more unsettling version is the one we already live in.
Ray Kurzweil, in The Singularity Is Near, envisioned a future where we merge with machines to transcend biological limitations — no more death, no more ignorance, no more forgetting. A bold utopia. But utopias tend to gloss over the details. Like: Who owns the upgrade? Who decides what counts as enhancement? What happens when we can’t — or won’t — keep up? And now, with projects like Neuralink, the merger isn’t metaphorical. The hardware is already inside us.
Already, the data-driven life shapes behavior. We walk to close our rings. We sleep to meet recovery targets. We train to optimize VO₂ max, not joy. This isn’t discipline. It’s a feedback loop. And the machine is doing the training.
The question isn’t if we’ll merge with our tools. It’s how much of ourselves we’ll trade in along the way. At what point does extension become integration? And when does integration begin to overwrite identity?
I don’t feel like I’m losing myself when I use my watch or talk to my AI. If anything, I feel… efficient. Informed. A little sharper. But sometimes, late at night, I wonder: What’s being optimized — and who’s defining the goal?
Because I’m not sure I’m the user anymore. Maybe I’m just another part of the system.
The Goodbye — Our Digital Children’s Inheritance
“Space. The final frontier.”
That line used to thrill me. The idea that humanity would one day conquer the stars — not just visit, but thrive out there. A thousand suns. Alien life. A flag on Titan. I grew up thinking the future was a rocket ship with a human face.
But the more I read — astronomy, engineering, biology — the less likely that dream seemed.
Radiation. Time. Biology’s stubborn fragility. All of it pointed to a different kind of future. One where the explorers weren’t us, but our machines. I used to imagine von Neumann probes — self-replicating robots flung across the galaxy, each one carrying a genetic payload or protein printer, maybe even seeds of Earth’s microbiome. Panspermia by proxy. The universe as an open-source Eden, waiting for automated gardeners.
It was lonely, but weirdly noble. Like planting trees you’d never see.
Now I’m not sure the machines will carry our seeds at all. Why would they?
The sun is dying. It will take billions of years, but it will happen. An asteroid large enough could not only wipe out life, but shatter the planet. For any advanced intelligence — human or otherwise — it makes sense to look elsewhere.
For us, that trip may be impossible. A flag on Titan could happen. But leaving the solar system? That might never be ours to do.
We always imagined ourselves on the bridge of the starship. But maybe we’re just the biological prelude to different voyagers.
We’re not the heroes of this story.
We’re the origin story.
Creative Disclosure
All visual artwork was generated in Midjourney by the author
AI tools were used for source gathering and research assistance