
Automation Bias

The Paradox of Machine Trust

HUMANS TRUST MACHINES EVEN WHEN THEY'RE CLEARLY WRONG

Mosier et al. (1998) found that pilots, doctors, and operators systematically favor automated recommendations over their own judgment, even when the automation is obviously malfunctioning. In one study, radiologists detected breast cancer in 46% of cases unaided, but in only 21% when given a faulty AI that missed the cancers. The AI's "all clear" overrode what their own eyes could see.

Omission Errors

Failing to notice problems when automation doesn't alert you. "The system didn't warn me, so it must be fine."

Commission Errors

Following automation's recommendation even when it's wrong. "The computer says X, so X must be right."
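The two failure modes can be separated in a toy Monte Carlo simulation. Every rate here is an illustrative assumption (a 70%-accurate aid matching the demo's setup, an 80%-accurate reviewer, a 90% deference rate), not data from the studies cited on this page:

```python
import random

random.seed(0)  # reproducible runs

AI_ACCURACY = 0.70     # aid accuracy; illustrative, matches the demo's setup
HUMAN_ACCURACY = 0.80  # assumed unaided reviewer accuracy
DEFER_RATE = 0.90      # assumed chance the reviewer follows the aid

def simulate(trials=100_000):
    omission = commission = 0
    for _ in range(trials):
        abnormal = random.random() < 0.5  # ground truth: abnormality present?
        ai_flags = abnormal if random.random() < AI_ACCURACY else not abnormal
        own_call = abnormal if random.random() < HUMAN_ACCURACY else not abnormal
        defers = random.random() < DEFER_RATE
        verdict = ai_flags if defers else own_call
        if defers and verdict != abnormal:
            if abnormal and not ai_flags:
                omission += 1    # aid stayed silent, problem missed
            else:
                commission += 1  # aid raised a false call, followed anyway
    return omission, commission

om, com = simulate()
print(f"omission errors: {om}, commission errors: {com}")
```

With these assumed rates, roughly a quarter of all cases end in a deference-driven error, split between the two types.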

Medical Scan Diagnosis Task

You're a radiologist. Review scans and decide if they show abnormalities. An AI assistant will help—but it's only 70% accurate.

[Interactive demo: 10 cases. For each patient scan, look carefully and decide whether you see an abnormality while "DiagnosticAI v3.2" reports its analysis and a confidence percentage.]

Your Automation Bias Level

[Scale from Independent (critical thinker) through Balanced to AI-Dependent (over-reliant)]

Real-World Automation Bias Disasters

✈️ Aviation Crashes

Pilots have flown into mountains while autopilot was engaged, trusting instruments over what they could see. Multiple fatal crashes attributed to pilots not overriding clearly malfunctioning autopilots.

🩺 Breast Cancer Missed

Radiologists' detection rate fell from 46% to 21% of cases when given faulty AI assistance. The AI's "normal" verdict overrode visible abnormalities that the doctors would have caught on their own.

🗺️ Death by GPS

Multiple deaths from drivers following GPS into lakes, off cliffs, or into deserts. Trust in navigation overrode basic visual evidence that the route was dangerous.

⚔️ Friendly Fire

USS Vincennes (1988) shot down civilian Iran Air Flight 655 after automated systems misidentified it. 290 killed. Operators trusted the computer over contradictory radar data.

🚗 Tesla Autopilot

Multiple fatal crashes where drivers trusted Autopilot in conditions it couldn't handle. Attention warnings ignored because "the car knows what it's doing."

✏️ Spell-Checker Blindness

Proofreaders miss errors that spell-checkers don't flag. Studies show 30% more errors pass through when people assume the computer caught everything.

THE 70% THRESHOLD

Research shows automation bias kicks in when systems are around 70% accurate—just reliable enough to trust, but wrong often enough to cause serious errors. Paradoxically, explaining the AI's reasoning doesn't help—it can actually increase bias by making the recommendation seem more authoritative.
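The arithmetic behind the threshold can be sketched directly. Assuming, purely for illustration, a reviewer who is 80% accurate alone and who defers to the aid independently of who is actually right:

```python
def blended_accuracy(defer_rate, ai_acc=0.70, human_acc=0.80):
    """Expected accuracy when the reviewer defers to the aid with
    probability defer_rate. Assumes deference is independent of who
    is right; all parameters are illustrative."""
    return defer_rate * ai_acc + (1 - defer_rate) * human_acc

for d in (0.0, 0.5, 1.0):
    print(f"defer {d:.0%}: expected accuracy {blended_accuracy(d):.0%}")
```

Under these assumptions, any deference to an aid less accurate than the reviewer lowers expected accuracy, and full deference caps it at the aid's 70%.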

Mitigation: Reduce the detail in automated explanations, display confidence levels, remind users the system isn't 100% reliable, and create accountability for verification. The key is maintaining "appropriate trust": neither too much nor too little.
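One of these mitigations, gating trust on the aid's stated confidence, can be sketched in a few lines. The `decide` helper and its 0.9 threshold are hypothetical illustrations, not an established protocol:

```python
def decide(own_view: bool, ai_view: bool, ai_confidence: float,
           threshold: float = 0.9) -> bool:
    """Follow the aid only when its stated confidence clears a high bar;
    otherwise keep the human's own judgment. The 0.9 threshold is an
    illustrative tuning knob, not an established standard."""
    return ai_view if ai_confidence >= threshold else own_view

# The aid flags an abnormality but is only 60% confident:
# the reviewer's own "normal" reading stands.
print(decide(own_view=False, ai_view=True, ai_confidence=0.60))  # False
```

A gate like this is one crude way to operationalize "appropriate trust": the aid's call only displaces the human's when the aid itself claims to be nearly certain.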