Why You Trust AI Even When You Know You Shouldn’t
The uncomfortable truth about why we keep consulting systems we know are wrong
There’s something darkly funny about aviation safety regulations: when an aircraft instrument fails or gives wrong readings, it isn’t enough to slap a warning sticker labeled “INOP” (short for “inoperative”) next to it. The regulations demand that the instrument be physically deactivated or removed entirely.
Why? Because pilots, highly trained professionals who spend countless hours learning about instrument reliability and cross-verification, keep looking at them anyway.
I’ve experienced this myself flying a small aircraft recreationally. Despite the warning sticker next to the broken instrument, despite knowing better, my eyes kept drifting back to the dead display, as if staring at the false readings long enough might somehow make them true.
Locking Up People in a Dark Room
This isn’t surprising given what we know about human decision-making under uncertainty. Suppose for a second that we were to conduct a psychological experiment in which we lock up 10 test subjects in a dark room without external stimuli for a few weeks. We would probably have trouble obtaining ethics approval for this or finding voluntary…