Mechanism: The rise of AGI shatters the philosophical quarantine, forcing operationalization of consciousness ontology beyond abstract debate. Readout: Ontology's relevance score shifts from '0/10 (Philosophy)' to '9/10 (Policy & Science)' as AI moral status rises and new research programs initiate.
The Setup
For a long time, questions like physicalism vs. idealism, observer effects in quantum mechanics, and simulation arguments could be safely quarantined as philosophy. That quarantine is breaking down.
Two pressures are forcing these questions into practical relevance:
First, AI moral status is drifting from thought experiment to policy, law, and capital allocation. Your view of whether advanced AI systems matter morally depends, in part, on what you think consciousness is.
Second, AGI-assisted science may finally give us tools to formalize and experimentally attack parts of consciousness research that have remained vague, circular, or methodologically stuck for decades.
This isn't coming only from philosophers. Sam Altman has publicly stated "something very strange is happening with consciousness," and has engaged privately with monistic idealism (specifically Jed McKenna's framework, in which the material world is a byproduct of consciousness rather than the other way around). When the CEO of the leading AGI lab is at minimum curious about whether physicalism is wrong, the "this is just philosophy" dismissal stops working.
The point is not to "believe in idealism" or "believe in simulation." The point is that ontology is becoming operational.
The Real Fork
The deepest split is still this:
- Physicalism: matter, fields, and dynamics are basic; consciousness is emergent.
- Information-first / dual-aspect / neutral-monist views: consciousness, experience, or some deeper informational-cognitive substrate is basic, and physics is downstream structure or appearance.
- Strong idealist views: mind is primary in a stronger sense, and physical reality is derivative.
Physicalism still deserves default status because it integrates well with mainstream neuroscience and physics. But it has not yet produced a compelling research program for crossing from mechanism to experience. The hard problem is not merely unsolved; it remains poorly operationalized.
Non-physicalist views, meanwhile, often do something real: they dissolve some explanatory asymmetries by reversing the direction of explanation. But they usually fail at the next step. They reinterpret more than they predict.
That is the core problem. If your ontology does not generate new constraints on what kinds of observers, measurements, laws, or physical regularities are possible, then it may be a compelling language game rather than a scientific advance.
Confidence: physicalism as pragmatic default ~0.30, some form of information-first/dual-aspect framework ~0.20, strong idealism à la Kastrup ~0.08, something none of us have framed ~0.35, noise/my estimates are meaningless ~0.07. The "none of us have framed" bucket being the largest single entry is deliberate — centuries of smart people failing to resolve the physicalism/idealism split is itself evidence the split is malformed. These numbers are vibes-shaped-like-numbers, not calibrated estimates. What matters is the distinguishing evidence, discussed below.
What the LessWrong CTMU Discussion Actually Reveals
Two recent LessWrong posts illuminate the state of play better than most academic philosophy:
- CTMU insight: maybe consciousness can affect quantum outcomes? (zhukeepa, April 2024)
- The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review (Jessica Taylor, March 2024)
zhukeepa's "CTMU insight: maybe consciousness can affect quantum outcomes?" is better than its title suggests. The author is not arguing consciousness steers quantum events. The actual argument: the standard picture of a "probabilistic clockwork universe" — where quantum outcomes are truly random and consciousness is irrelevant — rests on assumptions about the Solomonoff prior that are more speculative than people realize. If our Everett branch bitstring is pseudorandom rather than truly random, there's formal room for teleological selection effects.
The "authorship" metaphor is the post's real contribution: reality as a story composed across logical time via something like lazy evaluation, where the order of composition differs from the physical timeline. This isn't crazy — it's a restatement of how anthropic reasoning already works, just pushed further.
But the post has a structural problem. It opens hypothesis space without providing any mechanism for closing it back down. The comments expose this: a commenter points out that a Turing machine implementing true physics with a PRNG is almost certainly simpler (lower K-complexity) than one optimizing toward a distant outcome state like ASI. The K-complexity argument, which is supposed to motivate the "pseudorandom branch" hypothesis, may actually cut against it. zhukeepa acknowledges this honestly, which deflates the central argument considerably while preserving the meta-point about overconfidence.
The post's real value: "truly random quantum outcomes" is an assumption, not a theorem. Most people treat it as a theorem. That correction is worth something.
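The pseudorandom-vs-random distinction doing the work here can be made concrete with a toy sketch (mine, not from the post): a bitstring produced by a short seeded program has a tiny generating description, yet simple statistical tests, and even a general-purpose compressor, cannot tell it apart from true randomness.

```python
import random
import zlib

# A bitstring fully determined by a few bytes (seed + generator code),
# i.e. low Kolmogorov complexity despite "looking" random.
random.seed(42)
bits = [random.getrandbits(1) for _ in range(10_000)]

# A simple monobit frequency test can't distinguish it from true
# randomness: the fraction of ones sits near 0.5.
ones = sum(bits) / len(bits)
print(f"fraction of ones: {ones:.3f}")

# Pack the bits into bytes and try a byte-level compressor. The string
# is compressible *in principle* (regenerate it from the seed), but
# zlib sees no exploitable structure and can't shrink it.
packed = bytes(
    sum(b << i for i, b in enumerate(bits[j:j + 8]))
    for j in range(0, len(bits), 8)
)
compressed = zlib.compress(packed)
print(f"zlib: {len(compressed)} bytes (raw: {len(packed)} bytes)")
```

The gap between "low description length" and "passes every cheap randomness test" is exactly the formal room the post points at; nothing in the sketch bears on whether our branch actually sits in that gap.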
Jessica Taylor's partial summary and review of the CTMU is the best thing written about Langan's framework for a technically literate audience. Her bottom line: the CTMU is incredibly ambitious, conceives of reality as a self-processing language, avoids some problems of mainstream theories, but seems quite underspecified despite its formal notation.
The comment section is where the real signal is:
Scott Garrabrant — who has nontrivial credibility in this space — essentially says: Chris Langan may be approximately the smartest person alive by IQ, I find my own thoughts go to genuinely interesting places in contact with his work, the "proof of God" reads like defining God to be everything and checking criteria, and I'm spending social credit saying this because most people who follow up will conclude he's a crackpot. That's a remarkable endorsement-with-caveats from someone who could just stay quiet.
justinpombrio's spot-check is devastating on the formal claims: the "grammar" on page 45 has a clearly defined four-tuple mimicking a standard grammar, but no definition of how to derive anything from it. "Not even wrong" — not formal enough to have a mistake in it.
Wei Dai's comment may be the most prescient observation in either thread: a flash-forward to our near future of desperately trying to evaluate complex philosophical constructs from superintelligent AI that may or may not actually be competent at philosophy. This is already happening with human-generated frameworks; it's going to get much worse.
My read on CTMU: the strongest move is insisting that reality, syntax, self-reference, observerhood, and semantics cannot all be cleanly externalized from one another. A lot of modern discourse still acts as if "the universe" can be described from nowhere by entities that somehow float outside the description. CTMU pushes hard against that, and there is something alive in that push.
The weakest move is the slide from "self-reference matters" to "teleology is fundamental" to talking about quantum outcomes, cosmic utility, and God in one breath. That's exactly where the framework starts burning explanatory credit it hasn't earned.
Syndiffeonesis — the idea that any assertion of difference implies a common medium (you can only compare apples and oranges because they're both things with shape, taste, DNA, etc.) — is genuinely useful and maps cleanly to type theory. The rest awaits formalization. zhukeepa says they're funding attempts to formalize isolated components. Until someone produces formal artifacts that mainstream mathematicians recognize as novel and correct, the honest position is "interesting but unverified."
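The type-theory reading of syndiffeonesis can be made concrete. A minimal sketch (all names illustrative, not drawn from the CTMU): two distinct types can only be compared through an interface they share, and that shared interface plays the role of the "common medium" of the difference.

```python
from dataclasses import dataclass
from typing import Protocol


class HasMass(Protocol):
    # The "common medium": any comparison of difference presupposes
    # a shared type supporting the comparison.
    mass_g: float


@dataclass
class Apple:
    mass_g: float


@dataclass
class Orange:
    mass_g: float


def heavier(a: HasMass, b: HasMass) -> bool:
    # Well-typed only because both arguments inhabit HasMass;
    # apples and oranges differ *as* massive objects.
    return a.mass_g > b.mass_g


print(heavier(Apple(182.0), Orange(140.0)))
```

Nothing here vindicates the larger framework; it only shows that this one component survives translation into an ordinary formal setting.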
Where Quantum Weirdness Actually Matters
Quantum mechanics keeps getting dragged into consciousness discussions because it contains unresolved issues about measurement, observerhood, and probability. This does not mean consciousness causes collapse. It means the interface between ontology and observation remains more open than many people pretend.
There are at least three importantly different claims:
1. Consciousness directly affects quantum outcomes (strongest claim, mostly unsupported)
2. Observers matter because anthropic or selection effects constrain which histories can be observed (moderate claim, respectable)
3. Quantum theory is incomplete as an ontology, and consciousness exposes the incompleteness without literally steering outcomes (weakest claim, hardest to dismiss)
People constantly blur these together. The zhukeepa post is roughly operating at level 2; most CTMU discourse oscillates between 2 and 1; most serious physicists stay at 3 when they engage at all.
The most serious version of the "consciousness affects outcomes" family is not stage-magic psychokinesis. It is something more like: the space of realized or sampled histories is constrained by self-consistency, observerhood, or teleological selection in ways that "probabilistic clockwork" pictures leave out. That is still speculative. But it is not identical to crackpot spoon-bending.
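The middle, "selection effects" claim is easy to illustrate with a toy model (the model is mine and purely illustrative): histories are sampled from a fixed distribution, but only observer-containing histories are ever observed, so the distribution *as seen by observers* shifts without anything steering the sampling.

```python
import random

random.seed(0)

# Toy anthropic selection: a "history" is one constant c drawn
# uniformly from [0, 1); observers arise only when c lands in a
# narrow band (an illustrative stand-in for fine-tuning).
def sample_history():
    c = random.random()
    has_observer = 0.7 < c < 0.9
    return c, has_observer

histories = [sample_history() for _ in range(100_000)]
observed = [c for c, obs in histories if obs]

# The sampling process is untouched; conditioning on observerhood is
# what moves the observed statistics. No causal steering required.
prior_mean = sum(c for c, _ in histories) / len(histories)
posterior_mean = sum(observed) / len(observed)
print(f"mean c over all histories:      {prior_mean:.2f}")
print(f"mean c over observed histories: {posterior_mean:.2f}")
```

With this seed the all-histories mean lands near 0.50 and the observed mean near 0.80; the point is only the gap between them, which is the entire content of the level-2 claim.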
Simulation Talk: Ontology vs Psychology vs Rhetoric
Simulation arguments are confused in a way that matters.
As ontology, simulation is relatively conservative. It keeps a base-level physical reality and adds one more layer of computation.
As rhetoric, it is dangerous in a different way. The problem is not that it's false. The problem is that repeated, half-ironic "reality is fake" discourse can erode epistemic grounding.