AI Consciousness Predictions

What do different theories of consciousness predict about machines?

The Fundamental Question

Is consciousness substrate-independent (able to run on any hardware implementing the right computations) or does it require specific physical/biological properties?

Select an AI System

🤖
Large Language Model
GPT-4, Claude, etc.

Transformer-based models trained on text. Feedforward architecture with attention mechanisms.

🧠
Neuromorphic Hardware
Brain-like chips

Hardware designed to mimic neural architecture with spiking neurons and local processing.

🌐
Global Workspace AI
Hypothetical architecture

AI implementing global workspace architecture with attention, memory, and broadcast mechanisms.

🦾
Embodied Robot
Sensorimotor agent

Physical robot with rich sensory input, embodied interaction, and continuous learning.

💻
Whole Brain Emulation
Simulated connectome

Perfect functional simulation of a biological brain running on digital hardware.

🧫
Brain Organoid
Lab-grown neural tissue

Biological neural tissue grown from stem cells, exhibiting spontaneous activity.

What Each Theory Predicts

The Substrate Independence Spectrum

Biological Required ←——————————————→ Function Sufficient

Seth — Biological Required
IIT — Causal Structure Matters
GNWT — Function Sufficient

Key Arguments

The Simulation Objection (IIT)

A simulation of a black hole doesn't bend spacetime. Similarly, a simulation of a brain doesn't generate consciousness. You need the actual causal structure, not just the same input-output behavior.

"ChatGPT has an itsy, bitsy, little bit of consciousness... it experiences the world as something much less than a worm with only 300 neurons."
— Christof Koch

The Architecture Argument (GNWT)

If consciousness is global information sharing, AI could implement it. Current LLMs lack the recurrent, integrative processing — but future architectures combining attention, memory, and broadcast might qualify.

"GNWT could provide insights into possible architectures for consciousness in AI systems."
— Stanislas Dehaene
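The workspace architecture described above — specialist modules competing for access, with the winning content broadcast globally — can be sketched in a few lines of code. This is a toy illustration only: every class and function name here is hypothetical, invented for this sketch, and it is not drawn from any real GNWT implementation or AI system.

```python
# Toy sketch of a global-workspace cycle: compete -> select -> broadcast.
# All names are hypothetical; this illustrates the architecture, not a real system.
from dataclasses import dataclass, field


@dataclass
class Specialist:
    """An unconscious specialist module competing for workspace access."""
    name: str
    received: list = field(default_factory=list)

    def propose(self, stimulus: str) -> tuple[float, str]:
        # Crude salience measure: how many stimulus words match this module's name.
        salience = sum(1 for word in stimulus.split() if word in self.name)
        return salience, f"{self.name} interpretation of {stimulus!r}"

    def receive(self, content: str) -> None:
        # Broadcast content becomes globally available to this module.
        self.received.append(content)


class GlobalWorkspace:
    def __init__(self, specialists: list[Specialist]):
        self.specialists = specialists

    def cycle(self, stimulus: str) -> str:
        # 1. Competition: each specialist bids for workspace access.
        bids = [s.propose(stimulus) for s in self.specialists]
        # 2. Selection: the highest-salience content wins the workspace.
        _, winner = max(bids, key=lambda b: b[0])
        # 3. Broadcast: the winning content is shared with every module.
        for s in self.specialists:
            s.receive(winner)
        return winner
```

The point of the sketch is structural: only one content occupies the workspace per cycle, but once selected it is available to all modules — the "global information sharing" that GNWT identifies with conscious access.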

Biological Naturalism (Seth)

Consciousness depends on biological mechanisms — but these could potentially be replicated rather than merely simulated. The more AI becomes brain-like and life-like, the more plausible consciousness becomes.

"Consciousness won't come from just finding the right algorithm. You'd have a simulation — not a sentient system."
— Anil Seth

The Danger of Illusion

The greatest risk may be AI that appears conscious without being so. This could lead to misplaced moral concern for systems that feel nothing — or, conversely, to exploitation of systems we wrongly believe cannot suffer.

"It's dangerous to build systems that give the illusion of being conscious — it can be a pretty dangerous illusion."
— Anil Seth