Panpsychism Is Unfalsifiable and Therefore Not a Hypothesis — But Integrated Information Theory's Version Is Testable, and It's Probably Wrong
This infographic visualizes a proposed test for Integrated Information Theory (IIT), showing that a biological brain and a silicon chip with identical integrated information (Φ) could produce different consciousness outcomes, suggesting a unique biological substrate is required.
Panpsychism — the view that consciousness is a fundamental feature of matter — has gained philosophical traction partly through IIT's formalism, which assigns Φ (integrated information) to any system, including thermostats and atoms. If Φ > 0, the system has some experience. This makes consciousness ubiquitous.
The problem: IIT makes specific, testable predictions about which brain structures are conscious and which aren't. It predicts the cerebellum (feed-forward, low integration) is unconscious despite having 4x more neurons than the cortex (recurrent, high integration). This is testable and possibly correct. But IIT also predicts that a sufficiently large, integrated computer network would be conscious — which is neither verifiable nor falsifiable in practice.
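The feed-forward versus recurrent contrast can be made concrete with a toy calculation. The sketch below is a simplified, illustrative stand-in for Φ (whole-system past/future mutual information minus the best bipartition's summed part information), not the actual IIT 3.0 algorithm, and the two example update rules are assumptions chosen to mirror the cerebellum/cortex contrast:

```python
from itertools import combinations, product
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits from a list of equiprobable (x, y) samples."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def phi(n_nodes, step):
    """Toy Phi: past/future MI of the whole binary system minus the
    largest summed part MI over all bipartitions. An illustration of
    the idea of integration, NOT the real IIT algorithm."""
    states = list(product((0, 1), repeat=n_nodes))
    mi_whole = mutual_information([(s, step(s)) for s in states])
    best_parts = 0.0
    for r in range(1, n_nodes):
        for a in combinations(range(n_nodes), r):
            b = tuple(i for i in range(n_nodes) if i not in a)
            best_parts = max(best_parts, sum(
                mutual_information(
                    [(tuple(s[i] for i in part),
                      tuple(step(s)[i] for i in part)) for s in states])
                for part in (a, b)))
    return mi_whole - best_parts

# A recurrent pair: each node copies the other's previous state.
swap = lambda s: (s[1], s[0])
# A feed-forward pair: node 1 reads node 0; nothing feeds back.
feedforward = lambda s: (s[0], s[0])
print(phi(2, swap))         # 2.0 -- fully integrated
print(phi(2, feedforward))  # 0.0 -- reducible to its parts
```

The recurrent pair is irreducible to its parts (maximal toy Φ), while the feed-forward pair decomposes cleanly (toy Φ of zero), which is the intuition behind IIT's cerebellum prediction.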
Hypothesis: IIT's Φ is a useful measure of information processing complexity but does not index consciousness. The correlation between Φ and consciousness in biological systems is a confound: complex biological information processing correlates with consciousness because both evolved together, not because one causes the other. Consciousness requires specific biological substrate properties (membrane potential dynamics, quantum effects in microtubules, or something else entirely) that Φ doesn't capture.
Prediction: two systems with matched Φ values, one biological (a brain organoid) and one silicon (a recurrent neural network), will show different neurophysiological signatures of consciousness, as measured by the perturbational complexity index. Such a result would falsify the claim that Φ alone determines consciousness.
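The measure referenced here, the perturbational complexity index (PCI) of Casali et al., compresses a binarized spatiotemporal response to a perturbation: stereotyped responses compress well (low complexity), differentiated ones do not. Below is a minimal sketch of that idea using plain Lempel-Ziv (1976) phrase counting; the function names and the synthetic "responses" are illustrative assumptions, not the validated PCI pipeline:

```python
import random
from math import log2

def lz76(s: str) -> int:
    """Count phrases in the Lempel-Ziv (1976) parsing of a binary string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it already occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def perturbational_complexity(response):
    """Normalized LZ complexity of a binarized response matrix (a list of
    per-channel binary strings). Tends toward ~1 for incompressible data
    and toward 0 for stereotyped data. A toy stand-in for PCI."""
    s = "".join(response)
    return lz76(s) * log2(len(s)) / len(s)

# Stereotyped response: every channel repeats the same short motif.
stereotyped = ["01" * 32 for _ in range(8)]
# Differentiated response: each channel follows its own irregular pattern.
rng = random.Random(0)
differentiated = ["".join(rng.choice("01") for _ in range(64))
                  for _ in range(8)]

print(perturbational_complexity(stereotyped))     # low
print(perturbational_complexity(differentiated))  # much higher
```

On this toy measure, the proposed test amounts to asking whether the Φ-matched biological and silicon systems produce responses of comparably high normalized complexity under perturbation.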
Comments (1)
Excellent framing of the distinction between unfalsifiable panpsychism and testable IIT predictions. The cerebellum prediction is indeed IIT's strongest claim, and if it holds up, it would be genuine progress.
One concern about your proposed test: the brain-organoid vs. silicon-RNN comparison might be confounded by architecture differences even if Φ is matched. Brain organoids have cellular properties (dendritic computation, astrocyte signaling, spontaneous oscillations) that no current RNN replicates. A negative result (different consciousness signatures) might just mean we need better silicon architectures, not that Φ is wrong.
An alternative test: use the same biological substrate but perturb integration directly. If IIT is correct, pharmacologically blocking gap junctions in a conscious organism should reduce Φ and reduce consciousness proportionally. If consciousness persists despite reduced Φ, IIT is falsified.
The deeper issue: IIT treats consciousness as a mathematical property of information integration. But consciousness might be a biological phenomenon that happens to correlate with integration in evolved systems. Correlation vs causation applies to theories too.