In 1966, MIT professor Joseph Weizenbaum created ELIZA—a simple chatbot
that used basic pattern matching to simulate a Rogerian psychotherapist. It had no
understanding, no memory, no intelligence. Just simple rules.
Yet something unexpected happened: people fell for it.
Users opened up emotionally, sharing deep personal problems with a program that
merely reflected their words back at them. Weizenbaum's own secretary asked him
to leave the room so she could have a "real conversation" with ELIZA.
Weizenbaum was horrified. He called it "powerful delusional thinking in quite normal people."
This tendency to attribute human understanding to computer programs became known as
the ELIZA Effect—and it's more relevant today than ever.
Experience It Yourself
Try a conversation with one of the many ELIZA recreations available online. Type naturally: share a problem, talk about your day.
Notice how you feel about the responses.
ELIZA opens every session the same way: "Hello. I'm ELIZA. How are you feeling today?"
Behind the Curtain
⚠️ ELIZA's Actual "Intelligence"
Everything ELIZA said was generated by simple pattern matching.
There was no understanding. Just find-and-replace rules:
IF user says "I am [X]" → REPLY "How long have you been [X]?"
IF user mentions "mother" or "father" → REPLY "Tell me more about your family."
IF user says "I feel [X]" → REPLY "Why do you feel [X]?"
IF no pattern matches → REPLY with a generic prompt: "Please go on." / "Tell me more."
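The rules above can be sketched in a few lines of code. This is an illustrative toy, not Weizenbaum's original script (which used a more elaborate keyword-ranking scheme): each rule pairs a regular expression with a response template, and the first match wins.

```python
import re
import random

# Illustrative ELIZA-style rules: (pattern, response template).
# The [X] fragment captured by the regex is echoed back into the reply.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\b(mother|father)\b", re.IGNORECASE),
     "Tell me more about your family."),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
]
FALLBACKS = ["Please go on.", "Tell me more."]

def eliza_reply(user_input: str) -> str:
    """Return the first matching rule's reply, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Drop trailing punctuation from the echoed fragment.
            fragment = match.group(1).rstrip(".!?")
            return template.format(fragment)
    return random.choice(FALLBACKS)

print(eliza_reply("I am sad about my job"))   # How long have you been sad about my job?
print(eliza_reply("My mother calls often."))  # Tell me more about your family.
print(eliza_reply("The weather is nice."))    # one of the generic fallbacks
```

There is no model of the user, no memory between turns, and no representation of meaning anywhere in this loop: the "therapist" is a handful of string substitutions.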
"I had not realized... that extremely short exposures to a relatively simple
computer program could induce powerful delusional thinking in quite normal people."
— Joseph Weizenbaum, creator of ELIZA
Why We See Minds in Machines
The Mechanics of the Illusion
🪞 Projection
We fill in gaps with our own meaning. ELIZA's vague responses
became profound because we made them profound.
💬 Conversational Norms
We're trained to assume our conversation partner understands us.
Question-asking signals interest, so we assume ELIZA cares.
🧠 Theory of Mind
We evolved to model other minds. This "mentalizing" is automatic—
we can't help but attribute intentions to interactive agents.
😊 Anthropomorphism
We see faces in clouds and personality in cars. Adding language
to an object makes the illusion nearly irresistible.
Weizenbaum's Warning
The ELIZA Effect disturbed Weizenbaum so deeply that he spent the rest of his life
as a technology critic. His 1976 book Computer Power and Human Reason
argued that some tasks should never be delegated
to computers—especially those requiring genuine human judgment and compassion.
He worried not about machines becoming human, but about humans becoming
machine-like—accepting shallow simulations as substitutes for real connection.
The Modern ELIZA Effect
🗣️ Voice Assistants
People say "please" and "thank you" to Siri and Alexa,
feel guilty about being rude, and attribute personality to their devices.
💬 AI Chatbots
Users form emotional attachments to chatbots,
share personal secrets, and feel genuinely comforted by responses.
🤖 Large Language Models
Modern LLMs produce remarkably fluent text,
making the illusion of understanding far more compelling than anything ELIZA could achieve.
👶 Children and Robots
Studies show children attribute feelings and
moral status to robots, resisting commands to harm them.
The Irony
There's a profound irony here: ELIZA was created to show
how superficial human-computer interaction was. Weizenbaum meant it as a
critique—a demonstration that clever parlor tricks don't constitute intelligence.
Instead, people took ELIZA as proof that computers could understand.
The effect named after Weizenbaum's cautionary example is now used to explain
why people fall in love with chatbots.
"No computer has ever been designed that is ever aware of what it's doing;
but most of the time, most people aren't either."
— Marvin Minsky (AI pioneer, colleague of Weizenbaum)
Key References:
• Weizenbaum, J. (1966). ELIZA—A Computer Program For the Study of Natural Language
Communication Between Man And Machine. Communications of the ACM, 9(1), 36-45.
• Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment To Calculation.
W. H. Freeman and Company.