06/07/2025
Introduction
Part 1: Cosmology in crisis: the epicycles of ΛCDM
Part 2: The missing science of consciousness
Part 3: The Two Phase Cosmology (2PC)
Part 4: Synchronicity and the New Epistemic Deal (NED)
Copyright 2025 Geoff Dann. This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
DOI: 10.5281/zenodo.15823610
Zenodo link for a PDF of the whole series of articles as a single document:
The most striking feature of the problems outlined so far is not just their number or severity, but their artificial separation. In Section 1, we surveyed foundational problems in cosmology and physics: the Measurement Problem, the Fine-tuning Problem, the Hubble Tension, and so on. In Section 2, we turned to the deep enigmas of consciousness: the Hard Problem, the Binding Problem and the Problem of Free Will. Each set of problems is well known within its respective domain, but mainstream science treats them as entirely unrelated, as if the cosmos and the conscious observer were two independent systems that just happen to coexist. The very suggestion that they might be meaningfully connected is routinely dismissed with a contemptuous wave of the hand: it's “woo woo”. Asking whether the conscious observer has anything to do with quantum measurement is forbidden.
This refusal to think holistically is the real problem. What if these 35 problems are not separate at all, but symptoms of a single, unexamined and unchallengeable assumption that mind and cosmos must be treated in isolation from each other, and that each of these 35 problems should be tackled without thought of the other 34? What if the solution, instead of adding even more epicycles, is to stand back and see the whole picture anew? The great shifts in science – the Copernican revolution, the birth of relativity, the advent of quantum theory – did not come from adding complexity, but from seeing something simple that had been missed. Another such shift is long overdue.
The Two-Phase Cosmology offers a fundamental restructuring of how we understand reality. It proposes that the universe does not begin as a classical system into which observers somehow emerge. Instead, it begins in a pre-physical, timeless, superpositional phase (Phase 1) whose lower boundary is the emergence of a conscious observer. With this emergence comes a collapse of the primordial wave function, and the transition to Phase 2 of cosmological history: the classical, post-selected history we call the physical universe.
This simple conceptual move – not just treating the conscious observer as central rather than peripheral (as in idealism or panpsychism), but also emergent rather than fundamental – is sufficient to at least partially resolve all 35 problems, and fully resolve the majority. It is not a theory of everything in the traditional sense, but a theory of how everything becomes real. It is a specifically non-panpsychist form of neutral monism, the strongest form of mathematical platonism, and a new kind of neo-Kantianism. Phase 2 is the direct equivalent of Kant's phenomena (the cosmos as it appears to us), and Phase 1 is equivalent to Kant's noumena (the cosmos as it is in itself, independent of how it appears to us). However, there are also some important differences, for in this system the noumenal realm is no longer completely unknown to us: we know it contains the non-local Phase 1 quantum structure from which Phase 2 reality emerges with observation (so I call it “neo-noumenal”). This is therefore also a new, strong form of scientific realism – the claim that there is a mind-external objective reality, and that science is the business of providing reliable knowledge about that reality. In the Two Phase Model of Cosmological and Biological Evolution, science is not just another narrative – it aims at discovering structural truths about the nature of reality.
Please note that scientific realism does not entail, imply or depend on materialism/physicalism and certainly doesn't need to lead to scientism (the illegitimate attempt to apply scientific standards and principles in domains where they are inappropriate, especially philosophy).
The Quantum Convergence Threshold framework, developed by Gregory Capanda (see The Quantum Convergence Threshold (QCT) Framework: A Deterministic Informational Model of Wavefunction Collapse, https://zenodo.org/records/15623629), marks a pivotal step forward in quantum foundations. It reframes the vague notion of “measurement” as a precise physical process defined in terms of informational structure and convergence dynamics. According to QCT, collapse occurs when a quantum system reaches a threshold of internal complexity, stability, and informational closure – conditions under which further unitary evolution becomes mathematically and structurally ill-defined relative to its macroscopic embedding. By introducing a quantifiable criterion for collapse, QCT avoids many of the conceptual ambiguities that have long haunted interpretations of quantum theory. Measurement is no longer a primitive or observer-defined term, but a predictable and testable phase transition, triggered by objective features of the evolving system. This allows QCT to bypass the need for arbitrary “cut” placements or assumptions about consciousness acting externally to physics.
However, this newfound precision sharpens rather than dissolves the core ontological question: what collapses, and into what?
QCT tells us when a superposed state becomes unstable and resolves into a definite classical-like configuration, but it remains silent on why one particular outcome is realised and what it means for a world to become actual. It provides a robust boundary condition for classical emergence but offers no account of:
QCT exposes the limits of purely physicalist or informational explanations. By replacing the fuzziness of “measurement” with mathematical rigour, it demonstrates that collapse is not metaphysically mysterious, but also reveals that collapse alone is not enough. Without an ontological framework for actuality, subjectivity, and agency, the central paradoxes remain unresolved. QCT acts like a spotlight: it clarifies the structure of the problem while illuminating the need for deeper metaphysical foundations. The picture is sharpened, but the frame is still missing.
QCT formalises the idea that collapse occurs when a quantum system’s informational dynamics exceed a well-defined convergence threshold, beyond which:
This threshold is determined mathematically by tracking specific variables such as coherence, information flux, entropy production, and the saturation of internal degrees of freedom. When these reach critical values, the system crosses into a regime where superposition is no longer physically coherent, and a unique outcome must be realised.
QCT provides a dynamical, quantitative account of collapse, consistent with quantum theory but enriched by insights from complexity science, thermodynamics, and information theory.
QCT makes several key contributions to the interpretation of QM:
While QCT powerfully refines the technical and structural understanding of collapse, it makes no metaphysical claims about the nature of reality, consciousness, or experience. Specifically, QCT does not:
In short, QCT defines the "how" and "when" of collapse, but not the "who" or "why". It offers a rigorous structural supplement to standard quantum theory but remains ontologically minimalist.
This is where QCT reaches its limit. It defines the point of convergence, but it leaves the nature of convergence (why this branch becomes real, and to whom) unexplained. For this reason, it plays a critical but partial role in any complete interpretation of QM. A full resolution of the MP and related paradoxes requires an ontological framework that accounts for actuality, conscious selection, and the emergence of a single world.
Physicist Henry Stapp has advanced a provocative interpretation of QM in the tradition of von Neumann and Wigner – a new version of “consciousness causes collapse” (CCC) – in which what Stapp calls “the participating observer” plays an active causal role in physical processes. Central to his proposal is the Quantum Zeno Effect (QZE) – a well-established phenomenon whereby frequent observation of a quantum system can inhibit its evolution. In quantum theory, if a system is repeatedly measured in a particular state, the probability of it remaining in that state increases, effectively “freezing” it in place. This has been confirmed experimentally and is consistent with the standard formalism.
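As a minimal numerical illustration of this standard result (assuming an idealised two-level system with Rabi frequency Ω and instantaneous projective measurements – simplifications of my own, not part of Stapp's treatment), the Zeno scaling can be reproduced in a few lines:

```python
import numpy as np

# Quantum Zeno effect for a two-level system driven by H = (Omega/2) * sigma_x.
# Starting in |0>, the survival probability after free evolution for time t is
# cos^2(Omega * t / 2). Measuring n times at intervals T/n projects the state
# back onto |0> each time (with that probability), so
#   P_survive(n) = [cos^2(Omega * T / (2n))]^n  ->  1  as n grows.

Omega = 1.0   # Rabi frequency (arbitrary units)
T = np.pi     # total evolution time; without measurement the state fully flips

for n in (1, 2, 10, 100, 1000):
    p = np.cos(Omega * T / (2 * n)) ** (2 * n)
    print(f"{n:5d} measurements -> survival probability {p:.4f}")
```

The survival probability climbs towards 1 as the measurement rate increases, which is the “freezing” effect Stapp builds on.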
Stapp’s innovation lies in suggesting that conscious attention functions analogously to this kind of measurement. According to his model, when an observer focuses mental attention on a particular quantum possibility (represented by a projection operator in Hilbert space) the QZE acts to repeatedly “observe” that possibility, increasing its probability of actualisation. Over time, this sustained mental focus can cause one potential outcome to dominate over others, thus collapsing the wave function not by an external physical device, but via conscious intent. This framework provides a potential resolution to the MP by locating the origin of collapse in conscious experience itself, thereby assigning mind a fundamental role in physical reality.
While controversial, this view has the advantage of avoiding many-worlds branching and the need for external decoherence models to select outcomes. Instead, it posits a participatory universe in which the observer is not merely a passive recorder of outcomes but a causally efficacious agent whose focused awareness can shape quantum events. Critics argue that this approach risks invoking dualism or anthropocentrism, and that it lacks a precise formulation of how consciousness interacts with physical systems. Nonetheless, it remains one of the few interpretations to offer a mathematically grounded, experimentally anchored proposal for how subjective experience might influence quantum outcomes without violating known physics.
Stapp’s use of the QZE provides a phenomenological account of how focused conscious attention might influence quantum systems, but it leaves open the question of how and when a potential quantum outcome stabilises into empirical reality. QCT offers a complementary framework that can fill this gap. The integration of these two models yields a richer and potentially more complete account of the consciousness–measurement interface.
The model begins by treating wave function collapse not as a spontaneous or observer-triggered discontinuity, but as the end point of an informational convergence process. Within this framework, quantum systems evolve under unitary dynamics until the informational entropy associated with potential outcomes reaches a critical threshold, ∥δI∥, relative to a defined environmental boundary. Collapse occurs when this threshold is crossed. This is not arbitrary, but a consequence of systemic informational saturation. This avoids the ambiguities of purely observer-centric collapse while retaining full compatibility with empirical constraints.
The integration point with Stapp’s model emerges when conscious attention is reinterpreted as an active agent modulating the convergence rate. In this view, the mind does not collapse the wave function directly, but accelerates convergence toward a given outcome by repeatedly selecting or "re-querying" a preferred projection operator. This aligns with the QZE: sustained observation inhibits unitary evolution and stabilises one potential branch of the quantum state. But within the QCT framework, this repeated "measurement-like" interaction is no longer a metaphysical mystery. It is a structured, entropy-driven convergence process influenced by both internal and external conditions. Attention functions as a convergent selector: by continually focusing on a specific set of quantum observables, consciousness sharpens the informational landscape and hastens the system’s approach to its QCT-defined collapse point. QZE provides the phenomenology (why attention "freezes" states), while QCT provides the dynamics (how this leads to collapse). Together, they constitute a dual-layered model in which mind modulates convergence, rather than violating physical law or requiring non-local metaphysics.
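A deliberately simplified toy sketch of this dual-layered picture is given below. It is not the QCT formalism itself: the scalar convergence variable, the threshold value, the attention multiplier and the branch weights are all illustrative assumptions of mine, intended only to show how “attention modulates the convergence rate” can be expressed as a thresholded process.

```python
import numpy as np

# Toy sketch (not the QCT formalism): a scalar "informational convergence"
# variable grows toward a fixed threshold; sustained attention on one branch
# multiplies the growth rate (the QZE-like re-querying) and biases which
# branch is actualised at the crossing point.

rng = np.random.default_rng(0)

def run(attention=1.0, threshold=1.0, base_rate=0.01, bias=0.0, steps=10_000):
    """Return (time of collapse, selected branch) for a two-branch toy system."""
    delta_I = 0.0
    # Branch weights, sharpened toward branch 0 by `bias` (must stay in [0, 0.5])
    weights = np.array([0.5 + bias, 0.5 - bias])
    for t in range(steps):
        delta_I += base_rate * attention * rng.uniform(0.5, 1.5)
        if delta_I >= threshold:                 # convergence threshold crossed
            branch = rng.choice(2, p=weights)    # one outcome becomes actual
            return t, branch
    return None, None                            # never converged in this run

print(run(attention=1.0))            # unattended system: slower convergence
print(run(attention=5.0, bias=0.2))  # attended system: faster, biased collapse
```

Higher attention values reach the threshold sooner and tilt the realised branch, mirroring the claim that sustained focus hastens and shapes collapse without simply overriding the underlying probabilities.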
This hybrid model also allows for testable predictions. For instance, one might expect systems under prolonged attentional scrutiny (such as during focused meditation or observation) to exhibit altered convergence dynamics. The model also provides a framework for theorising how the emergence of conscious experience could have influenced early biophysical systems, by selectively biasing convergence during evolution.
The integration of QZE with QCT provides a novel theoretical synthesis in which consciousness emerges as both an epistemic filter and a thermodynamic agent, steering quantum systems toward informational closure. However, we still haven't resolved CCC's before-consciousness problem, and that is where the Two-Phase Cosmology comes in.
The Two-Phase Cosmology (2PC) includes the first structurally innovative interpretation of QM since MWI in 1957. It offers a new way of resolving the Quantum Trilemma (see Part One), which is at the same time both radical and retrospectively obvious. It has been available since 1957, but it seems that until now nobody has noticed it. The core insight is that, even though MWI and CCC are logically incompatible, it is nevertheless possible to combine them (or more accurately something derived from each of them) into a new interpretation that maximises the explanatory power of both while avoiding their most serious respective problems.

As explained in Part One, answers to the question “What collapsed the wave function before consciousness evolved?” usually assert that consciousness is ontologically fundamental, by invoking idealism, substance dualism or panpsychist neutral monism. However, neutral monism does not entail panpsychism. It is possible that the early cosmos consisted solely of a neutral quantum-informational substrate from which both classical spacetime and consciousness emerged together, in which case consciousness could be associated with specific physical structures or properties which could not have existed in the early universe. The sequential compatibility of an MWI-like cosmos and a CCC cosmos is revealed by noting that if consciousness is removed from CCC then the situation naturally defaults to MWI: if consciousness is required for wave function collapse, but consciousness hasn't evolved/emerged yet, then the wave function is not collapsing at all. The main difference between this and conventional MWI is that in this case none of the multiverse branches are actualised – rather, they exist in a state of timeless potential. They are neutral/neo-noumenal/quantum/informational rather than classical or physical.

This offers a natural explanation for Nagel's teleological evolution of consciousness, except instead of appealing to unknown teleological laws, the apparent telos is revealed to be structural. In Phase 1 everything which is physically possible actually happens in at least one branch of the Platonic multiverse, so it is logically inevitable that somewhere very special, all of the events required for abiogenesis and the evolution of conscious life actually do occur, regardless of the infinitesimal probability of such a sequence of events. Then, with the appearance of the first biological structure capable of “hosting” the participating observer – the meaning of which can now be specified in terms of QCT – the primordial wave function would have collapsed, actualising the abiogenesis-psychegenesis timeline and “pruning” all of the others. This is psychegenesis: the phase transition around which the whole of reality pivots.

To summarise, 2PC proposes that reality has unfolded in two distinct ontological phases:
This framework offers a radical reinterpretation of collapse as an emergent feature of conscious systems that reach the convergence threshold and thereby instantiate actuality by selecting a path through the space of quantum possibilities. 2PC brings to the QCT framework what it most crucially lacks: an ontological participant and a cosmic context for why collapse occurs at all. Specifically, 2PC (including QZE) contributes:
QCT defines the threshold at which systems could collapse. 2PC and QZE define what or who actually causes collapse to occur, and why only one branch becomes real. Together, they offer a unified account:
So we now have a new interpretation of QM – a new solution to the Measurement Problem which avoids the mind-splitting ontological bloat of MWI, by cutting it off, and the before-consciousness problem of CCC, by deferring collapse until after the phase shift of psychegenesis.
This model entails a conception of time where only the present is fully real. The future “comes into focus” and the past “decays”. It follows that we require more than one concept of the history of the cosmos, all of which matter. The first is the timeline selected from the Platonic ensemble by QCT at the original phase shift – I call this the “ontogenic history of phase 1” (h1o). This history still exists in the ensemble, but it is entirely lost to us, even though it is the true point of origination of our reality. In phase 2 there is a continually evolving present, and the past is forever changing. It must always remain coherent with the present, but within that constraint the past is continually being subtly re-invented. The only parts of h1o which still persist are those which were necessary to causally link the phase shift itself with the Big Bang, for without these there can be no coherent history. We therefore have a dynamically retrodicted past (h1r) – a past which is always trying to look like a plausible and real classical history, even though we have no reason to expect it to remain faithful to h1o (which was necessarily exceptionally implausible in many ways). There is also a third ontological variation of the past: the past of phase 2 of cosmological history (h2r), which also has to remain coherent with the present, and like h1r is also dynamically changing. It follows that what we observe now cannot be considered a reliable guide to what actually happened, and the further back we go in cosmic history, the less reliable it becomes. The implications of this revision of our concept of time cannot be overstated.
In QM, the state of a system is described by a wave function, which can be mathematically expressed in infinitely many different bases. Each basis corresponds to a different observable – position, momentum, spin, or more abstract combinations. However, when an observation is made, only outcomes corresponding to a specific, limited set of physical properties – like definite positions – are ever observed. But what determines the basis in which quantum states “collapse” into classical reality?
The standard quantum formalism does not specify a unique preferred basis. It offers only the probability rule – the Born rule – once a basis has been externally chosen. This leaves open why reality presents us with definite outcomes in terms of particular observables, rather than arbitrary superpositions. In particular, why do we experience the world in terms of localised objects in space and time, with persistent identities?
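For reference, the Born rule in its textbook form (this is standard quantum mechanics, not something specific to 2PC): once a measurement basis $\{|a_i\rangle\}$ has been externally chosen, the probability of obtaining outcome $a_i$ for a system in state $|\psi\rangle$ is

$$P(a_i) = |\langle a_i|\psi\rangle|^2 .$$

Nothing in this rule says which basis nature actually uses – that is precisely the preferred basis problem.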
One widely studied approach is environment-induced decoherence. As a quantum system interacts with its environment, superpositions in certain bases lose coherence rapidly. This selects so-called “pointer states” – stable configurations that resist entanglement and decoherence, often corresponding to localised positions or energy eigenstates. However, decoherence alone does not cause collapse or explain why a particular basis becomes ontologically real; it merely suppresses interference terms in the density matrix. The core problem remains: why does one basis define experienced reality?
In 2PC the preferred basis problem is tied to the emergence of consciousness.
QZE plays a crucial stabilising role. As conscious agents “measure” aspects of their environment, they reinforce a preferred basis – one that preserves memory, enables agency, and supports the continuity of experience. Thus, in 2PC the preferred basis is selected through the intrinsic constraints of consciousness itself: only those structures that allow for coherent conscious states become ontologically realised. The Measurement Problem and the basis ambiguity are dissolved together in the transition from abstract mathematical possibility (Phase 1) to actualised physical reality (Phase 2).
The Hubble tension has become a focal point of contemporary cosmology. Local observations suggest a value for the Hubble constant H0 ≈ 73 km/s/Mpc, while the value inferred from the CMB, assuming the standard ΛCDM model, yields H0 ≈ 67 km/s/Mpc. Efforts to resolve the inconsistency have largely fallen into two categories. One set of proposals modifies the physics of the early universe, introducing speculative elements such as early dark energy, additional relativistic particles, or phase transitions shortly after the Big Bang. The other set introduces changes to the late-time evolution of the cosmos, such as time-varying dark energy or interactions in the dark sector. Both approaches share a deeper, often unexamined assumption: that ΛCDM accurately describes a continuous, observer-independent history extending unbroken from the Big Bang to the present. This assumption is rarely questioned. But what if it is false? What if the very structure of reality is such that the “early universe” that we can observe now is not an actual physical past, but a retrodicted reconstruction?
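Before turning to that question, it is worth being explicit about how large the disagreement is. Using the commonly quoted uncertainties on the two determinations (roughly ±1.0 and ±0.5 km/s/Mpc respectively – figures assumed here for illustration, not stated above):

$$\frac{73 - 67}{67} \approx 9\%, \qquad \frac{(73 - 67)\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}{\sqrt{1.0^2 + 0.5^2}\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 5\sigma ,$$

i.e. a discrepancy of roughly nine per cent, conventionally reported as a tension of about five standard deviations.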
Inflation and Its Philosophical Function
Inflation has long been regarded as one of the most successful theoretical advances in modern cosmology. Introduced in the early 1980s, it purports to explain why the observable universe appears so flat, homogeneous, and isotropic, despite the apparent lack of causal connection between distant regions in the early universe. In 2PC inflation is reclassified not as a necessary physical process, but as a philosophical artifact: an ad hoc mechanism invented to fix conceptual problems that arise only if one falsely assumes a classical, observer-independent past. In this section, I will examine what inflation was designed to achieve and explain why, under 2PC, it is no longer needed.
Inflation was introduced to address several deep puzzles that arise when the universe is assumed to have evolved according to classical relativistic physics from the very beginning: the Horizon Problem, the Flatness Problem and the Monopole Problem (see Part One). To solve these problems, inflation posits that the universe underwent a brief period of exponential expansion immediately after the Big Bang. This expansion would stretch a tiny, causally connected region to encompass the entire observable universe (solving the horizon problem), drive the geometry of the universe toward flatness (solving the flatness problem) and dilute any relic particles (solving the monopole problem). However, inflation itself requires finely tuned initial conditions. It demands the existence of a hypothetical inflationary field (the “inflaton”) with a specific potential, appropriate dynamics, and a graceful exit mechanism to end inflation without reheating the universe too violently. In short, inflation trades one set of mysteries for another, and does so on the assumption that the early universe actually existed as a classical, physical state, evolving forwards in time in a manner determined entirely by the laws of physics.
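To give a sense of the numbers involved (the 60 e-fold figure is the standard benchmark in the inflation literature, not something taken from the text above): solving the horizon and flatness problems is usually taken to require at least N ≈ 60 e-folds of exponential expansion,

$$\frac{a_{\rm end}}{a_{\rm start}} = e^{N} \approx e^{60} \approx 10^{26},$$

i.e. a single causally connected patch is stretched by roughly twenty-six orders of magnitude in a tiny fraction of a second.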
Firstly, in 2PC the foundational assumptions behind inflation no longer hold. The CMB is not a fully reliable record of an actual past physical state (h1o), but part of the post-collapse reconstruction that has been retroactively selected to be consistent with the present (h1r). Additionally, and even more importantly, the observed isotropy and flatness of the universe do not need to be imposed via inflation anyway, because they are features of the specific cosmic history that survived the primary phase transition. Out of the vast ensemble of possible configurations, only a tiny subset supports coherent, self-reflective observers. It is therefore natural to conclude that only a cosmos that started out exceptionally flat and smooth permits the emergence of LUCAS. I call this a Phase 1 selection effect.
The flatness and smoothness required for coherence with the present were never physical impossibilities. Inflation was invented because cosmologists needed to explain why such extraordinarily improbable conditions prevailed in the early universe. But in 2PC this sort of improbability is to be expected. This is a central principle in 2PC: if something is physically possible, and LUCAS needs it, then it is guaranteed to happen, even if it seems to be exceptionally improbable.
Therefore 2PC doesn't need any “inflaton field”. The inflaton is a mathematical artifact introduced to repair a model built on a mistaken ontology. It is an entirely ad-hoc explanation for exactly the kind of coherence that 2PC naturally explains as Phase 1 selection effects. In this sense, inflation is a 21st century version of a Ptolemaic epicycle: an inelegant, complicated patch on a fundamentally mistaken model of reality. Its purpose is to defend the classical assumption that the universe always existed as it now appears, only earlier and hotter.
In 2PC the Hubble Tension is reduced to a conceptual misunderstanding. The two Hubble constants are not measuring the same thing in the same ontological context:
• H0 (local) ≈ 73 km/s/Mpc is a measurement made by conscious observers within a post-collapse, time-directed world. It is an empirical observation with direct phenomenological access.
• H0 (CMB) ≈ 67 km/s/Mpc is not measured, but inferred from a simulation of early-universe conditions based on a model (ΛCDM) that itself rests on inflationary assumptions, which are rendered philosophically unjustified under 2PC.
These two values emerge in distinct ontological domains (pre-collapse vs. post-collapse), use fundamentally different referents (model-consistent projection vs. direct observation), and reflect separate stages in the actualisation of reality (quantum potential vs. classical experience). There is therefore no reason to expect them to match.
If the CMB cannot be relied on as an accurate measurement of the rate of expansion in the early universe, and if we've got no reason to trust figures derived from the epicycles of ΛCDM, then the Hubble Tension is no more, and the question becomes: What is the best model of reality, and especially the early cosmos?
The apparent acceleration of cosmic expansion partly relies on an assumption that ΛCDM is a valid cosmological model and that inflation was real, rather than just a ΛCDM epicycle. Once the inflationary paradigm is dismissed and the Hubble tension dissolved, the empirical basis for postulating dark energy needs to be rethought.
Unlike the Hubble Tension, the phenomenon typically attributed to dark energy does not vanish simply by questioning early-universe cosmology. Supernova data, particularly Type Ia observations at redshifts z ≲ 2, still suggest that the rate of cosmic expansion is increasing. Distant galaxies appear dimmer than expected in a uniformly expanding or decelerating universe, indicating that they are farther away than they should be under those assumptions. This remains true even if we discard inflation, question the ontological status of the CMB, and view ΛCDM as a post-hoc projection rather than a true history. In this sense, dark energy is not a purely model-dependent artifact like the inflaton. It is a name for something real: the empirical observation that the cosmos, as we experience and measure it now, appears to be accelerating. What 2PC challenges is not this observed pattern, but the interpretation of that pattern. The standard view attributes the acceleration to a mysterious field or constant – Λ – filling spacetime with negative pressure. But in 2PC, this is not necessary. Acceleration need not be the result of a physical force acting on the fabric of spacetime. It might be a structural feature of the post-collapse projection selected in Phase 2.
The observed luminosity–redshift relation of supernovae can be modeled without invoking ΛCDM at all. Using only constant cosmographic parameters – q₀ ≈ –0.5 (a modest, steady acceleration) and j₀ ≈ 1.0 (no change in acceleration) – one can generate a smooth, empirically adequate luminosity–distance curve matching the observed supernova data up to z ≈ 2. This curve requires no dynamic dark energy, no changing equation of state, and no sharp phase transition from deceleration to acceleration. The data are consistent with constant acceleration over observable time – an interpretation that is ontologically simpler and dynamically stable.
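The construction referred to here can be sketched numerically. The snippet below evaluates the standard third-order cosmographic (Taylor) expansion of the luminosity distance for a flat cosmos, with the constant parameters quoted above; the choice of H0 = 73 km/s/Mpc and the sample redshifts are my own assumptions for illustration, and a low-order expansion becomes progressively less accurate towards z ≈ 2, so this is a sketch of the method rather than a fit to the supernova data.

```python
import numpy as np

# Third-order cosmographic expansion of the luminosity distance (flat case):
#   d_L(z) ≈ (cz/H0) [1 + (1 - q0) z / 2 - (1 - q0 - 3 q0^2 + j0) z^2 / 6]
# evaluated with constant q0 and j0, as described in the text.

c = 299_792.458      # speed of light, km/s
H0 = 73.0            # Hubble constant, km/s/Mpc (local value; assumed here)
q0 = -0.5            # deceleration parameter: modest, steady acceleration
j0 = 1.0             # jerk parameter: no change in acceleration

def d_L(z):
    """Luminosity distance in Mpc from the cosmographic expansion."""
    return (c * z / H0) * (1.0
                           + 0.5 * (1.0 - q0) * z
                           - (1.0 - q0 - 3.0 * q0**2 + j0) * z**2 / 6.0)

def mu(z):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * np.log10(d_L(z) * 1e6 / 10.0)

for z in (0.1, 0.5, 1.0, 1.5):
    print(f"z = {z:>4}:  d_L ≈ {d_L(z):8.1f} Mpc,  mu ≈ {mu(z):5.2f}")
```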
There remains a huge question of what dark energy actually is, although 2PC may be able to shed some new light on why it exists at all. If the expansion of the universe were not accelerating, and the matter density were high enough, the cosmos would eventually stop expanding and start contracting towards a Big Crunch. In this sense dark energy too may be explained as a selection effect, but we will still need to explain what it is, and how it works.
Why is the observed cosmological constant so small compared to the expected QFT vacuum energy? Why is it not exactly zero? This is typically viewed as a mismatch between GR and QFT – a profound clue that new physics is needed.
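The usual back-of-envelope statement of the mismatch (standard textbook figures, not derived from anything in this article, and sensitive to the cutoff chosen) compares the observed dark-energy density with a Planck-cutoff estimate of the QFT vacuum energy:

$$\rho_\Lambda^{\rm obs} \sim 10^{-47}\ \mathrm{GeV}^4, \qquad \rho_{\rm vac}^{\rm QFT}(\text{Planck cutoff}) \sim M_{\rm Pl}^4 \sim 10^{76}\ \mathrm{GeV}^4, \qquad \frac{\rho_{\rm vac}^{\rm QFT}}{\rho_\Lambda^{\rm obs}} \sim 10^{120},$$

the famous discrepancy of roughly 120 orders of magnitude.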
In 2PC, the vacuum energy calculated from QFT is specific to Phase 1: the pre-collapse, quantum-informational domain. But the cosmological constant that we observe is part of the Phase 2 classical projection. There is therefore no reason to expect the two values to match. The cosmological constant does not require explanation in terms of vacuum fluctuations of fields that only exist meaningfully in the pre-collapse domain. The “problem” only arises if one assumes a universe that exists independently of observation and is governed by a single unbroken ontology. 2PC replaces this with a model where quantum information (Phase 1) undergoes a collapse-like transition into experienced space and time (Phase 2). Therefore, any calculation based on unobservable vacuum fields from a supposedly observer-independent past has no epistemic warrant. The small positive Λ is a parameter that helps maintain long-term structural coherence and observability of the universe. In 2PC, it is consistent with the stability and coherence of Phase 2 – i.e., the need for a metastable, structured, expanding cosmos that supports conscious evolution. No deeper explanation is needed.
In Part 1 of The Reality Crisis, I identified six different reasons cosmologists have posited the existence of dark matter:
1. Galaxy Rotation Curves.
2. Galaxy Cluster Dynamics.
3. Gravitational Lensing.
4. Galaxy Cluster Collisions.
5. Large-Scale Structure Formation.
6. CMB Anisotropies.
Under 2PC, these must be split into two categories – those which can be explained entirely as phase 1 selection effects, and those which are real phase 2 anomalies. Problems 5 and 6 are prime candidates for selection effects – the large-scale structure and CMB anisotropies needed to be the way they were, or LUCAS would not have been possible. Problems 1-4 look more like phase 2 anomalies: they are derived from the behaviour of the cosmos within phase 2, and therefore cannot be explained straightforwardly as selection effects. However, it is not unreasonable to suppose that whatever is causing these anomalies needs to be there as a mechanism for stabilising spiral galaxies, which we might hypothesise is also necessary for LUCAS. There may be no purely baryonic way to construct a reality that contains galaxies of the kind needed if there's going to be a life-supporting planet like Earth. This may provide us with an explanation of why “dark matter” exists, but brings us no closer to discovering what it actually is. So this remains a live issue for cosmology, even under 2PC. However, we will see shortly that there might just be a twist.
Why do the fundamental constants and initial conditions of the universe fall within the extremely narrow range required for life, structure, and consciousness to emerge? From the cosmological constant to the strength of fundamental forces, the odds of such a life-permitting universe arising by chance under standard models appear astronomically low.
By now the reader will be familiar with the fate of fine tuning problems under the Two Phase Cosmology: we should expect Phase 1 to be exceptionally finely tuned. There is no problem.
Ditto. No low entropy beginning → no LUCAS.
Ditto, but there is also a second possible solution to the Flatness Problem under 2PC. In Phase 2 classical reality is continually being reconstructed. There is an ongoing selection effect, which could continually smooth out any deviations from flatness. Maybe flatness is just the default way that classical reality is “rendered”.
Under 2PC, this is another straightforward phase 1 selection effect. The cosmos necessarily began in a state of exceptionally improbable low entropy and consistency, in order to remain sufficiently coherent for the eventual appearance of LUCAS.
Inflation was invented by cosmologists as a solution to the Flatness Problem, the Horizon Problem and the Missing Monopole Problem. Those problems have other solutions under 2PC, freeing us at long last from the need to posit inflation. This automatically rids us of several other ΛCDM epicycles:
(11) The Inflation Reheating Precision Problem
(12) The Reheating Mechanism Problem
(13) The Inflaton Field Problem and the Origin of Cosmic Inflation
Under 2PC, we expect phase 1 to be a goldilocks cosmos. This dissolves:
(14) The Biophilic Element Abundance Problem
(15) The Structure Formation Timing Problem
(16) The Matter–Radiation Equality Tuning Problem
(18) The Amplitude of Primordial Perturbations Problem
As with the Flatness Problem, there are two possible 2PC explanations for the missing magnetic monopoles. If their presence is a problem for LUCAS then their absence can be explained as a selection effect, but there is an intriguing possibility that becomes a lot more plausible under 2PC: could the missing magnetic monopoles and dark matter be one and the same? Could these mysterious galactic “halos” be composed of mutually-repelling massive particles? Could this be why these structures are spherical rather than collapsing down into disks?
Monopoles, if they exist, would be massive, stable, and non-interacting with light — fitting the basic dark matter profile. GUT monopoles would have been produced in abundance during the early universe, and could interact via a magnetic analog of the electromagnetic force, giving them self-interaction that could mimic some dark matter behaviors. ΛCDM predicts far too many of them, but this isn't a problem in 2PC, because if they are needed for the stabilisation of spiral galaxies like the Milky Way then Phase 1 fine-tuning will precisely regulate their abundance. What about detecting them?
Experiments have searched for monopoles in cosmic rays, moon rocks, particle accelerators, and deep underground detectors, but none have been found. Monopoles would be extremely massive (up to 10¹⁶ GeV), much heavier than most dark matter models (like WIMPs or axions), making them hard to reconcile with large-scale structure formation unless they're very rare. Might this explain why we haven't been able to find them?
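A rough illustration of just how rare “very rare” would have to be (assuming the standard estimate of the local dark-matter density, about 0.3 GeV/cm³ – a figure not given in the text above): for a fixed mass density, the number density scales inversely with particle mass, so

$$n_{\rm monopole} \sim \frac{\rho_{\rm DM,local}}{m_{\rm monopole}} \sim \frac{0.3\ \mathrm{GeV\,cm^{-3}}}{10^{16}\ \mathrm{GeV}} \sim 3\times 10^{-17}\ \mathrm{cm^{-3}},$$

about fourteen orders of magnitude fewer particles per unit volume than a typical 100 GeV WIMP candidate supplying the same density.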
That features of the cosmos on the grandest scale should line up with the plane of the Solar System and the Earth's rotation seems completely outrageous from the perspective of contemporary cosmology, such that it is routinely dismissed as an “impossible problem”: it simply cannot be accepted as a real problem without going back to geocentrism, so it must just be some sort of glitch in the data. 2PC turns this reasoning on its head. According to the Two Phase Cosmology, the Earth is very literally the centre of the cosmos – not for the old theological reasons but because it is the first and only centre of consciousness (see section 24). The Earth, according to 2PC, is the centre of classical reality. This makes these observations seem far less absurd, though so far we still haven't uncovered a possible mechanism (though there might just be another twist).
Can 2PC shed any light on the JWST galaxies that shouldn't be there? No immediate explanation pops out, but new scope does open up. One possibility is that they are relatively recently retrodicted h1r phenomena – that reality has “invented” them in response to our new probing of parts of the underlying mathematical structure available in phase 1. That would mean that they did not exist in h1o. Another is that they are a sign that the early formation of galaxies was somehow necessary for the evolution of LUCAS – that they are another phase shift selection effect, but for reasons which are not immediately obvious. Certainly the existence of these galaxies is much less of a problem for 2PC than it is for ΛCDM. In ΛCDM they are potential show-stoppers in their own right; in 2PC they are more like intriguing clues waiting to be figured out.
The Baryon Asymmetry Problem is almost certainly yet another fine-tuning problem, and partly explained as a phase shift selection effect. However, this is critically dependent on the possibility-space from which the selection took place. My hesitation is due to the fact that if you're trying to conjure up something from nothing, then it seems inevitable that you have to do it in an entirely symmetrical manner: 0 = 1 + (−1). And yet somehow in our cosmos, when the matter and anti-matter cancelled each other out, something was left over. So there's an extra question here about how it is possible to break this symmetry at all. As things stand, 2PC doesn't directly provide an explanation. But it must indeed have been possible (since it actually happened) and it was therefore selected for at the phase shift.
The arrow of time refers to the asymmetry between past and future: we remember the past, not the future; causes precede effects; entropy increases. This temporal directionality is central to our experience of reality, yet the fundamental laws of physics are time-reversible. No known physical law (quantum, classical, relativistic) forbids time flowing backward. So why does time seem to flow in one direction? Standard physics cannot derive this arrow from its equations alone. It is usually imposed as a boundary condition – a low-entropy initial state – which simply pushes the problem back a step.
2PC proposes that time’s arrow is not an intrinsic feature of the universe's fundamental structure but an emergent property of a selection event that divides the cosmos into two epochs:
Thus, time’s arrow is not fundamental. It emerges at the moment of wave function collapse driven by conscious observation. Only from this point forward do concepts like “before” and “after” gain epistemic content.
The direction of time aligns with the increase of entropy because entropy increase is what makes memory, causality, and prediction possible. The emergence of consciousness in a low-entropy history means that:
In this way, the collapse that creates the timeline also creates the arrow, since from that point onward, entropy begins to rise and information becomes causally structured.
QCT and the Threshold of Temporal Emergence
QCT complements this by describing how and when collapse occurs.
According to QCT:
In other words, QCT quantifies the moment the arrow of time becomes physically real. It gives a thresholded mechanism for when potential becomes history. We are riding on the crest of a wave of collapsing potentiality. About that much at least, Bill Hicks was right.
This is perhaps where the difference between 2PC and ΛCDM – and materialistic theories in general – is at its most stark. From the materialistic perspective, the present moment is a profound mystery; from a phenomenological perspective the problem vanishes, because from our subjective viewpoint it is always now. 2PC inherits the explanatory power of idealism here, while avoiding its problem of positing disembodied minds.
In 2PC the present is the only thing which is fully real. The future “comes into focus”. The past “decays” (apart from the critical events on the timeline from the Big Bang to the phase transition). In 2PC “now” is the ontological reality of consciousness, “where” the whole of Phase 2 takes place.
Despite decades of effort, attempts to formulate a quantum theory of gravity – to reconcile GR with QM – have failed to yield a consistent, predictive theory. Loop Quantum Gravity, String Theory, and various approaches to emergent spacetime or holography remain incomplete or lack empirical confirmation.
What if gravity isn’t supposed to be quantised at all? Gravity, as described by GR, is a classical theory of spacetime geometry. From the 2PC perspective, it does not exist in the pre-collapsed quantum phase. It is not a fundamental field to be quantised alongside others. It arises only after the wave function collapse, as a feature of the classical spacetime manifold into which consciousness collapses reality. Efforts to quantise gravity implicitly assume that spacetime is a quantum object. But in 2PC, spacetime is neither pre-existing nor quantum – in Phase 1 there may be nothing that deserves the name “spacetime” at all – and gravity resists quantisation because it is a product of the collapse, not a participant in the superposed phase. On this view, the attempt to quantise gravity is a category mistake.
Penrose has proposed that gravitational effects cause quantum wave function collapse. His model posits that superpositions involving significantly different spacetime geometries are unstable, and that once the gravitational self-energy difference exceeds a critical threshold, the system undergoes an objective reduction (OR). In Penrose’s view, gravity limits quantum coherence and is thus fundamental, playing a key role in selecting reality from quantum possibility.
2PC reverses this causal arrow.
In this model, gravity is not the cause of collapse; it is the result.
Given the vastness and age of the universe, and the seemingly high probability for life elsewhere, why have we found no convincing evidence of extraterrestrial intelligence? Under 2PC, the answer is obvious: the primordial wave function can be collapsed only once. 2PC leverages the “computing power” of the MWI-like Phase 1 to explain how consciousness emerged, but having solved the problem this cosmological algorithm could no longer continue. The conclusion is that if there actually is any other metaphysically separated branch of conscious existence, we won't find it anywhere in the parts of the cosmos to which we are causally connected.
Where is everybody? There isn't anybody else. We are it.
The black hole information paradox arises in standard physics because it assumes that spacetime is fundamentally real and complete: that black holes have interiors, that Hawking radiation carries no information, and that quantum evolution must preserve information within that spacetime context.
The structure of 2PC leads to a principled reframing of what black holes are, by questioning the projection of classicality where it fundamentally cannot apply.
In 2PC, black holes are not objects in space and time, but regions where the projection of classical spacetime fails. They represent the breakdown point beyond which it is no longer possible to extract a coherent, observer-relative spacetime history from the underlying quantum structure.
Spacetime in Phase 2 is not ontologically real “all the way down” – it is the output of a projection process from Phase 1. That projection process is constrained by the ability of observers (or more precisely, observer-like systems) to maintain coherent memory and self-consistent collapse dynamics (Θ(t), QCT). A black hole, then, is a region where such projection becomes impossible.
From the perspective of a distant observer, an infalling object never crosses the event horizon. Its signals redshift and fade. This is already consistent with standard GR.
In 2PC, this isn't merely apparent but reflects a fundamental epistemic boundary. No consistent projection of a classical trajectory beyond the horizon is possible. Objects do not “enter” the black hole in any ontological sense. Their classical identity terminates at the edge of projection. The mass they contribute is accounted for at the level of the gravitational field (which is itself a relational feature of Phase 2), but their internal quantum state becomes non-projectable.
From the perspective of a conscious observer falling toward a black hole, experience continues only as long as internal Θ(t)-coherence is maintained. As one approaches the projection boundary, QCT fails: the internal dynamics required to support conscious time and memory break down. The observer loses awareness. There is no crossing of the horizon, and no interior experience – only a dissolution of conscious reality.
In 2PC, information is never lost, because it is never fundamentally contained “in” spacetime. The black hole information paradox arises only if one assumes that all physically meaningful information must be recoverable from spacetime dynamics. But in 2PC, the spacetime domain is emergent. Information resides in the globally evolving quantum state of Phase 1, not in the projected contents of Phase 2. The collapse of conscious coherence near a black hole is not an annihilation of information, but a failure of classical representation. The quantum state persists; the information encoded in the now-unprojectable degrees of freedom remains part of the ongoing global evolution. No contradiction with unitarity arises, because 2PC preserves unitarity at the level where it actually applies: Phase 1.
Crucially, 2PC does not require the existence of Hawking radiation. That prediction was derived using semi-classical gravity and assumes that spacetime and quantum fields coexist in a consistent framework – a premise 2PC rejects. Since the foundational ontology of spacetime breaks down near the event horizon, the assumptions underpinning Hawking's derivation no longer hold. Moreover, there is currently no empirical evidence that Hawking radiation even exists. Black holes have never been observed to shrink, and no direct measurement of the radiation has been made. Thus, from the 2PC standpoint, Hawking radiation is unsupported and possibly incoherent. If it is ever actually observed, it would need to be reinterpreted within the limits of Phase 2 projection.
Because 2PC neither requires nor presupposes Hawking radiation, it removes the paradox at its root. There is no need for information to “escape” the black hole in radiation, because it never fell into a spacetime container in the first place.
Why should human-devised mathematics, originally developed as an abstract language, so precisely describe the workings of nature?
Under 2PC, during the first phase the universe exists as a superposed quantum multiverse described by a rich, highly symmetric mathematical structure. This pre-collapse phase encodes fundamental laws and relationships as aspects of the quantum state itself, effectively defining the “rules” of the cosmic computation that unfolds. In the second phase physical laws emerge as effective patterns. Consequently, the laws of physics and the constants of nature are not arbitrary but reflect the underlying mathematical architecture of the primordial wave function from which reality collapsed. Human mathematics is effective because it discovers and encodes the same abstract structures embedded in this primordial quantum fabric. Our cognitive faculties – products of this cosmic unfolding – are naturally attuned to perceive and manipulate these fundamental mathematical truths. Thus, the deep congruence between mathematics and physics arises because both originate from the same foundational quantum-mathematical source.
The 2PC and QCT framework suggests that mathematics is not merely a human invention or an arbitrary descriptive tool but a direct reflection of the fundamental quantum substrate from which the classical universe emerges. This would explain why mathematics is so unreasonably effective: it is woven into the very fabric of reality itself.
The Two Phase Model incorporates a specific version of CCC – that of Henry Stapp, as first explained in Mindful Universe: Quantum Mechanics and the Participating Observer (2007). For Stapp, “the participating observer” (PO) is the locus of conscious choice, but he leaves many questions about the metaphysical status of the PO unanswered. I will be addressing this question in more detail at the end of this part of The Reality Crisis. For now we can just think of it as an internal observer of brain processes. This observer cannot just be physical – it cannot merely be the information threshold described by QCT, and it is not the “mind stuff” of substance dualism either. It doesn't need any of the complexity of a mind because all of that complexity is already encoded in the complexity of a brain. It is more like a Nagelian “view from somewhere” – an “inner viewpoint”. We've got a living brain, within which QCT can operate, and we've got a non-physical internal observer of the processes unfolding in that brain. Ontologically, this is the minimalist solution to the Hard Problem that actually works, but a huge question remains about what the participating observer actually is.
We must now briefly revisit David Chalmers' p-zombies. Though I obviously agree with Chalmers' conclusion, I do have an objection to his argument. I can't conceive of a p-zombie, because by definition a p-zombie behaves like an ordinary human at all times, which means that if you asked one whether it is conscious it would reply “Of course I am! Why are you even asking me that?” I can't imagine a zombie that believes it is conscious. It might be convincingly human in many ways, but it would not be capable of understanding consciousness or anything that depends upon it, or at least not in the way a conscious being understands those things. I think it would actually say something like “Consciousness? I have never been able to understand what that word is supposed to mean”, which means it wouldn't be a p-zombie, because that is not how humans normally talk.
Consciousness presents an enduring evolutionary enigma. From a Darwinian perspective grounded in survival and reproductive advantage, the emergence of subjective experience is profoundly puzzling. Why should an undirected, mechanical process give rise to inner life, rather than simply more efficient stimulus-response mechanisms? Thomas Nagel famously posed this as a crisis for materialist accounts of evolution. In Mind and Cosmos, he argued that natural selection, as currently understood, cannot explain the emergence of conscious subjectivity, and proposed we search for teleological laws of nature: goal-oriented principles embedded in the fabric of the cosmos. The 2PC framework offers a new solution to this problem.
The core problems:
2PC postulates a pre-conscious multiverse in which physical possibilities evolve purely via QM, without subjective experience. Consciousness does not emerge gradually within each possible branch, but only in one that satisfies a specific structural condition: cross-threshold informational coherence, formalised by QCT. A quantum system transitions from decoherent complexity to coherent introspectable order, sufficient to enable a global integrative state, at the moment of psychegenesis. This threshold is a phase transition: when a system passes a certain complexity/coherence boundary, it instantiates a new mode of being – consciousness. From this point forward, wave function collapse becomes endogenously driven by the conscious observer, via QZE. The conscious branch begins selecting its own reality, and subjective experience gains causal efficacy. Thus, consciousness is not an adaptation selected by evolution, but the precondition for observable evolutionary history within this particular universe.
This model resolves the problems on all fronts:
Where Nagel suggests that evolution must be guided by teleological laws aimed at producing minds, 2PC posits a structural inevitability: in a potentially infinite quantum cosmos, some branch will cross the QCT threshold, and consciousness will emerge there. That branch then becomes the only experienced universe. This retains Nagel’s key insight that mind is not epiphenomenal or accidental, but dispenses with the need for teleological laws. The apparent directedness of evolution toward complexity and consciousness is a selection effect caused by the fact of consciousness itself being the criterion for observable history. The universe we observe is the one rendered actual by being observed, regardless of probability. This results in something very similar to the anthropic principle, except instead of just saying “If humans hadn't evolved then we wouldn't be here to ask the question”, we're actually explaining why we were guaranteed to win the cosmic lottery. I call this “the psychetelic principle”.
Why did psychegenesis happen on Earth, rather than somewhere else? The psychetelic answer tells us that we should expect the Earth to be special, but it doesn't tell us exactly what is special about Earth. The psychetelic principle implies that the Earth's Phase 1 history should have involved multiple exceptionally improbable events. And indeed there are several candidates.
1. Eukaryogenesis: The Singular Emergence of Complex Cellular Life
The origin of the eukaryotic cell via the endosymbiotic incorporation of an alpha-proteobacterium (the precursor to mitochondria) into an archaeal host appears to have happened only once in Earth’s entire 4-billion-year history. Without it, complex multicellularity (and thus animals, cognition, and consciousness) would not have emerged. The energetic advantage conferred by mitochondria enabled the explosion of genomic and structural complexity. No similar event is known to have occurred elsewhere in the microbial biosphere, despite vast diversity and timescales. If eukaryogenesis is a statistical outlier with a probability on the order of 1 in 10⁹ or worse, it becomes a cardinal signpost of the unique psychegenetic branch. Lane, N., & Martin, W. F. (2010). The energetics of genome complexity. Nature, 467(7318), 929–934. https://doi.org/10.1038/nature09486
2. Theia Impact: Formation of the Earth–Moon System
The early collision between Earth and the hypothesized planet Theia yielded two improbable outcomes at once: a large stabilizing moon and a metal-rich Earth. The angular momentum and energy transfer needed to both eject enough debris to form the Moon and leave the Earth intact is finely tuned. This event likely stabilized Earth's axial tilt (permitting climate stability), generated long-term tidal dynamics (affecting early life cycles), and drove internal differentiation (fuelling the magnetic field and tectonics). It’s estimated to be a rare outcome among rocky planets – perhaps 1 in 10⁷ – and essential for the continuity of biological evolution. Canup, R. M. (2004). Simulations of a late lunar-forming impact. Icarus, 168(2), 433–456.
Laskar, J., Joutel, F., & Robutel, P. (1993). Stabilization of the Earth's obliquity by the Moon. Nature, 361(6413), 615–617.
Elser, S., et al. (2011). How common are Earth–Moon planetary systems? Icarus, 214(2), 357–365.
Stevenson, D. J. (2003). Planetary magnetic fields. Earth and Planetary Science Letters, 208(1–2), 1–11.
3. Grand Tack: A Rare Planetary Migration Pattern
Early in solar system formation, Jupiter is thought to have migrated inward toward the Sun and then reversed course (“tacked”) due to resonance with Saturn. This migration swept away much of the early inner solar debris, reducing the intensity of late bombardment and allowing small rocky planets like Earth to survive. Crucially, it also delivered volatiles (including water) from beyond the snow line to the inner system. This highly specific orbital choreography is rarely reproduced in planetary formation simulations. Most exoplanetary systems dominated by gas giants do not preserve stable, water-bearing inner worlds. The probability of such a migration path is estimated to be very low; some simulations put it at well under 1 in 10⁶. Raymond, S. N., Izidoro, A., & Morbidelli, A. (2018). Solar System formation in the context of extrasolar planets. arXiv:1812.01033.
Walsh, K. J., et al. (2011). A low mass for Mars from Jupiter’s early gas-driven migration. Nature, 475(7355), 206–209.
4. LUCA’s Biochemical Configuration
The Last Universal Common Ancestor (LUCA) did not merely represent the first replicator, but a highly specific and robust configuration of metabolism, information storage, and error correction. It was already using a universal genetic code, RNA–protein translation, lipid membranes, and a suite of complex enzymes. LUCA’s molecular architecture was a kind of “narrow gate” through which life could pass toward evolvability. Given the astronomical space of chemically plausible alternatives, LUCA’s setup may reflect a deeply contingent and rare outcome. Woese, C. R. (1998). The universal ancestor. PNAS, 95(12), 6854–6859.
Martin, W., & Russell, M. J. (2003). On the origins of cells. Phil. Trans. R. Soc. B, 358(1429), 59–85.
Lane, N., & Martin, W. (2010). The energetics of genome complexity. Nature, 467(7318), 929–934.
Szostak, J. W. (2012). Attempts to define life do not help to understand the origin of life. J. Biomol. Struct. Dyn., 29(4), 599–600.
Conclusion: Compound Cosmic Improbability as Psychegenetic Marker
Each of these four events is, in itself, vanishingly unlikely. But more importantly, they are compounded. The joint probability of a single planet experiencing all four – along the same evolutionary trajectory – renders the Earth’s Phase 1 history cosmically unique, in line with the 2PC hypothesis. What these improbabilities encode is not a miracle, nor a divine intervention, but the statistical imprint of consciousness retro-selecting a pathway through possibility space – making a phase transition from indefinite potentiality to a single, chosen actuality.
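To make the compounding explicit, here is a minimal order-of-magnitude sketch, assuming (purely for illustration) that the four events are statistically independent, taking the figures quoted above at face value, and leaving LUCA's probability as an unquantified factor p_LUCA:

\[
P_{\text{joint}} \;\lesssim\; 10^{-9} \times 10^{-7} \times 10^{-6} \times p_{\text{LUCA}} \;=\; 10^{-22}\, p_{\text{LUCA}},
\]

so even before the LUCA factor is included, the joint probability is at most of order one in 10²².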
The Cambrian Explosion, occurring around 541 million years ago, marks one of the most dramatic evolutionary events in Earth's history. Within a geologically brief window (~20–25 million years), nearly all major animal body plans (phyla) appear suddenly in the fossil record, without clear precursors. This event challenges gradualist models of evolution and has prompted various explanations: oxygenation, ecological feedback loops, genetic innovations like the Hox cluster, and even extraterrestrial seeding. None of these explanations has managed to command a consensus.
Under the 2PC model, the explanation could not be more obvious: the Cambrian is the evolutionary marker of the phase shift itself – not merely a biological event, but the first moment of entangled biological evolution and conscious observation.
Here’s how:
Traditional models struggle to explain the nearly simultaneous emergence of complex forms. In 2PC, this suddenness reflects the shift from unobserved decoherent evolution to observed and actualised classical evolution. The evolution of early neural nets and sensorimotor coordination is precisely what one would expect if certain lineages were on the cusp of crossing the QCT and becoming conscious. Post-QCT, only the history from the conscious lineage is experienced. All alternative evolutionary paths remain uncollapsed and unexperienced. Thus, the fossil record appears as though complexity "exploded" from nowhere, because we are only observing the history of the conscious branch.
Unlike Intelligent Design theories, this model requires no external creator or teleological guidance. It proposes a phase transition in informational architecture – a structural, emergent bifurcation, analogous to crystallisation or superconductivity. In this view, the Cambrian Explosion marks the biological hallmark of a cosmological bifurcation: the first large-scale event shaped not only by evolution, but by evolution entangled with consciousness. The 2PC/QCT framework reinterprets the Cambrian as the first epoch of entangled, observed, and memory-bound evolution and the dawn of a conscious universe selecting its own classical story.
In section 19 above, I explained why the Earth's central place in the cosmos offers a potential explanation for correlations between the orientation of the Earth and large-scale cosmological anomalies, but left the precise mechanism open. It is perhaps worth pointing out that Cambrian life was entirely restricted to the oceans, and at the time of the Cambrian explosion the configuration of the continents was highly unbalanced. At this time, nearly all of the Earth's continental mass was concentrated in the southern hemisphere, while the northern hemisphere was almost entirely oceanic. It follows that nearly all of the conscious observers were observing the northern half of the cosmos, and almost none were observing the south. It's just a thought...
The classical problem of free will arises from a perceived contradiction between determinism and agency: if all physical events are governed by fixed laws, how can any action be genuinely “free”? Conversely, if actions are the result of indeterministic quantum randomness, how can they be attributed to a responsible agent? Most contemporary scientific models either eliminate free will entirely, redefine it compatibilistically, or leave it unexplained. However, the combined framework of QZE and QCT offers a novel resolution: free will as iterative attentional modulation of probabilistic convergence.
In Stapp’s model, the mind exercises influence not by directly causing collapses, but by choosing which quantum property (or projection operator) to focus on at any moment. Through the QZE, sustained attention inhibits quantum state evolution and stabilizes a particular potential outcome. The repeated “observation” of a chosen option makes that option more likely to manifest. This provides a phenomenology of intentional control: decisions arise from an act of attention that biases which possibilities remain dynamically relevant. Yet by itself, the QZE leaves unanswered the question of how this bias translates into actual collapse. Here, QCT fills the explanatory gap. According to QCT, a quantum system collapses when the informational entropy of the environment, relative to the system’s quantum state, reaches a threshold. Collapse is a natural thermodynamic process driven by increasing complexity and decoherence, not an arbitrary metaphysical intervention. When these two models are combined, a layered account of free will emerges: attention (via the QZE) selects and sustains a preferred possibility, while environmental convergence (via QCT) actualises whichever possibility has been held in focus long enough.
This synthesis preserves physical law while allowing for meaningful top-down causation. Choices are not pre-determined nor random, but shaped by attentional patterns that modulate probabilities over time. Free will, in this view, is the capacity to repeatedly select what to sustain in consciousness long enough for the physical system to converge.
This model does not imply that consciousness breaches physical law; rather, it operates within the constraints of quantum dynamics by shaping the entropy landscape through iterative, localised selection. Nor is it epiphenomenal: it has a measurable causal influence through the alteration of convergence trajectories.
Thus, free will can be defined as a quantum-informed control loop, in which a conscious agent (1) selects among possibilities through attention, (2) maintains coherence around a chosen path via the QZE, and (3) allows environmental convergence to actualise this path through QCT collapse. This avoids both the determinism of classical physics and the incoherence of randomness-based libertarianism, offering a third way: probabilistic agency grounded in quantum thermodynamics.
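For reference, the quantum Zeno effect that Stapp appeals to can be sketched in its standard textbook form (this is ordinary quantum mechanics, not anything specific to 2PC or QCT): if a state with energy uncertainty ΔH is “observed” N times at equal intervals over a total time t, its survival probability is approximately

\[
P_{\text{survive}}(t) \;\approx\; \left[\, 1 - \left(\frac{\Delta H \, t}{N \hbar}\right)^{2} \right]^{N} \;\longrightarrow\; 1 \quad \text{as } N \to \infty,
\]

so sufficiently frequent observation freezes the state's evolution. In Stapp's usage, sustained attention plays the role of the repeated observation.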
Empirical studies in neuroscience, such as the Libet experiments, have long been interpreted as evidence against conscious free will. These experiments found that neural activity predicting a motor action could be detected several hundred milliseconds before participants reported a conscious intention to act. If the brain has already initiated an action before the subject becomes aware of choosing it, then conscious will appears not as a cause but as a passive observer. This conclusion has led many to question the reality of agency and to interpret the self as a mere epiphenomenon.
The Two-phase Cosmology, especially when integrated with QCT, ontologically reframes the empirical findings of neuroscience by situating them within a time-neutral framework of reality in which consciousness plays an active role in the actualisation of events, but not entirely in the classical, forward-causal sense.
In 2PC, the substrate of reality is not temporally ordered in the same way as the phenomenal world we experience. It is governed by a time-symmetric structure akin to that found in certain formulations of QM, such as the two-state vector formalism and retrocausal interpretations. In this pre-phenomenal domain, causal relations are not unidirectional, and both past and future boundary conditions play a role in the actualisation of events. This retrocausality becomes crucial in understanding how choices can appear to be made after the brain has already begun to prepare for them. Within the QCT framework, a conscious decision is not a purely forward-moving causal trigger, but part of a broader informational threshold process that spans time-symmetric structures. The actualisation of a choice involves a convergence point in which forward-evolving neural precursors and backward-projected conscious intention mutually reinforce one another, collapsing a field of potentialities into a single experienced outcome.
This reframing renders the Libet-type data not as a refutation of free will, but as a misunderstanding of temporal ontology. In the 2PC/QCT model, phenomenal awareness represents not the origin of causality but the moment of ontological lock-in: the point at which a potential future becomes retroactively confirmed. The brain’s apparent early readiness potentials are components of the pre-conscious convergence field, necessary but not sufficient for action. What matters is when a given path through possibility-space becomes actualized, and this depends on crossing the quantum convergence threshold, which includes the conscious observer as an irreducible participant. Thus, from within the phenomenal timeline, it may appear that the brain "decided first," and the self merely caught up. But from the deeper, time-neutral structure of reality, the conscious decision is co-constitutive of the event’s actualisation. There is no contradiction between predictive brain signals and free agency, because both are coordinated across a temporally non-local structure in which consciousness is indispensable for rendering any experience determinate.
In this light, conscious agency is neither illusory nor epiphenomenal, but emergent from a deeper interplay between mind and cosmos. The 2PC/QCT synthesis preserves the core phenomenology of decision-making while offering a framework in which empirical neuroscience and metaphysical freedom can be reconciled. Consciousness does not override causality; it participates in a broader, non-classical structure of causation, in which the self becomes a genuine agent in the shaping of reality. This also explains my objection to p-zombies: they aren't conscious, and they don't have free will, so in fact they wouldn't behave like humans at all. I suspect they'd be more like a humanoid version of ChatGPT: incapable of actually understanding anything at all, and inhabiting a realm where freely flowing words don't actually mean anything to the entity from which they flow.
The Binding Problem concerns how the brain integrates distributed information, such as shape, colour, location, and motion, into a unified conscious experience. Standard models of neural processing assume spatially and temporally distributed activity across cortical areas, yet phenomenal experience is unitary and coherent. Why do we not experience disjointed perceptual fragments? What mechanism selects, synchronises, and stabilises one coherent perceptual gestalt from among the myriad transient possibilities? The combination of Stapp’s quantum attentional model and QCT provides a plausible and mechanistically grounded answer.
In Henry Stapp’s framework, conscious experience is not passive. The mind is continually posing “questions to nature” by selecting which projection operator (observable) to focus on. This act of attention initiates a quantum measurement-like process, where conscious intention repeatedly samples a chosen aspect of the brain’s quantum state. Through the QZE, sustained mental focus suppresses the evolution of superposed alternatives and stabilizes the brain’s quantum field around a coherent percept. The effect operates much like a spotlight: what is held in focused attention becomes dynamically “frozen” into a particular pattern, and disparate neural events become aligned by participating in a shared, repeatedly accessed projection. This provides a mechanism for phenomenal binding: attention effectively synchronizes otherwise-distributed quantum activity across brain regions by locking it into a repeatedly reinforced subset of possibilities. These patterns are not mere correlates of consciousness, but partially constitutive of the conscious state itself.
While QZE explains how coherent patterns can be selected and stabilized, it does not by itself explain how they become actualized in physical terms. QCT posits that collapse of the quantum state occurs not through subjective volition alone, but when the informational entropy of the surrounding environment reaches a threshold relative to the quantum state in question. The brain, as a thermodynamically open system with massive environmental entanglement, continually pushes its subcomponents toward this collapse threshold. Thus, while attention (QZE) holds together distributed components into a coherent frame, convergence pressure from the environment (QCT) ensures that this frame eventually actualises as a classical, embodied event in the neural substrate. Importantly, the binding is not a post hoc reconstruction, but the very trajectory that the QCT mechanism will collapse once coherence is maintained long enough under conscious observation.
Taken together, the two models suggest that conscious binding is the result of attentional selection and stabilisation of a coherent percept via the QZE, followed by its actualisation as a classical neural event once the convergence threshold is crossed via QCT.
This model solves the binding problem by rejecting the assumption that binding is achieved solely through classical synchronization or integration mechanisms. Instead, it proposes that phenomenal unity arises quantum-dynamically from the feedback loop between attention-driven quantum stabilisation (QZE) and environment-driven convergence pressure (QCT).
The result is a physically grounded account of conscious unity, reconciling subjective experience with quantum dynamics and thermodynamic constraints.
The Frame Problem in artificial intelligence and cognitive science concerns the challenge of efficiently determining what information is relevant to update in response to a change in the environment. When something in the world changes, a thinking system must decide which facts to revise and which to keep fixed without exhaustively checking all possibilities – a computationally intractable task in classical systems.
QCT posits that conscious collapse of the wave function occurs when a system’s quantum information crosses a threshold of complexity and coherence, triggering a selective, global update of its state.
The QZE describes how frequent “observations” or interactions can inhibit the evolution of a quantum state, effectively “freezing” it in place.
Together, the QCT and QZE provide a quantum mechanism for managing informational relevance and stability: the QCT ensures that only those states whose informational relevance crosses the threshold are collapsed and updated, while the QZE holds the remaining representations fixed, so the system never needs to re-examine its entire model.
Summary: The combination of QCT and QZE provides a physically principled and computationally efficient mechanism for addressing the frame problem. By selectively collapsing only relevant quantum states (QCT) and stabilizing these choices through focused observation (QZE), conscious systems can update their internal models without exhaustive reprocessing, offering a novel quantum foundation for intelligent cognition.
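The computational point can be illustrated with a purely classical toy sketch (nothing quantum is being simulated here; the function names, relevance scores and threshold value are invented for illustration only): a threshold-gated update revises only those beliefs whose relevance to a change crosses a cutoff, instead of re-examining every belief.

# Toy analogy of threshold-gated updating vs exhaustive reprocessing.
# Purely classical and purely illustrative: names, relevance scores and the
# threshold value are invented stand-ins, not part of the QCT/QZE formalism.

def exhaustive_update(beliefs, change, revise):
    # Classical frame-problem behaviour: every belief is re-examined.
    return {name: revise(belief, change) for name, belief in beliefs.items()}

def threshold_gated_update(beliefs, change, revise, relevance, threshold=0.8):
    # Only beliefs whose relevance to the change crosses the threshold are
    # revised (loosely, "collapsed"); the rest are held fixed (loosely, "frozen").
    return {
        name: revise(belief, change) if relevance(name, change) >= threshold else belief
        for name, belief in beliefs.items()
    }

# Example usage with invented data:
beliefs = {"door_is_open": False, "coffee_is_hot": True, "sky_is_blue": True}
relevance = lambda name, change: 1.0 if change in name else 0.0
revise = lambda belief, change: not belief
print(threshold_gated_update(beliefs, "door", revise, relevance))
# {'door_is_open': True, 'coffee_is_hot': True, 'sky_is_blue': True}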
The origin of classical memory – durable, retrievable representations of experience – poses a major puzzle at the interface of neuroscience, quantum theory, and philosophy of mind. Standard materialist accounts assume that memory is encoded by classical changes in synaptic weights or neuronal architecture. However, this view does not explain how transient cognitive states become selectively consolidated into long-term memory, nor why some experiences “stick” while others evaporate. By integrating QZE with QCT we can construct a process-level model of memory formation in which quantum attention and informational collapse jointly produce stable classical traces.
In Stapp’s framework, conscious attention functions as a quantum filter. When an individual focuses on a particular percept, intention, or meaning, this mental act repeatedly projects the brain’s evolving quantum state onto a subspace aligned with the object of attention. Through the QZE, this repeated projection suppresses the decoherence of alternative brain states and holds the attended state quasi-statically in place. In terms of memory formation, this corresponds to selecting which informational configuration should be preserved. Without this attentional stabilisation, the neural state would continue evolving chaotically, and no stable imprint could form. In this way, attention performs the first critical step of memory encoding: quantum selection.
While QZE stabilises selected patterns within the brain’s quantum field, QCT explains how these patterns become classical and durable. According to Gregory Capanda, a quantum system undergoes irreversible collapse when the mutual information between that system and its environment exceeds a critical entropy-based threshold.
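One way to write this condition down (a sketch of how I read the threshold claim, not a formula taken from Capanda's own papers) uses the quantum mutual information between the system S and its environment E:

\[
I(S\!:\!E) \;=\; S(\rho_S) + S(\rho_E) - S(\rho_{SE}), \qquad \text{collapse when } I(S\!:\!E) \;\geq\; I_c,
\]

where S(ρ) is the von Neumann entropy and I_c is the critical, entropy-based threshold.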
This convergence threshold is more easily reached when a system is held stable long enough for environmental entanglement to accumulate. This is precisely what the QZE enables: by preventing rapid decoherence, attention ensures that the selected brain state becomes increasingly entangled with environmental degrees of freedom (molecular, thermal and electromagnetic) until it meets the convergence criteria for objective collapse. At this point, the quantum representation of the attended experience crystallises into a classical neural configuration: a trace that persists even after attention shifts away.
Thus, memory is not merely a post hoc residue but the converged endpoint of a quantum-to-classical trajectory, selected by consciousness and finalized by informational dynamics.
Once actualised through QCT, the classical memory trace becomes accessible through standard neurobiological mechanisms: reactivation via association, consolidation during sleep, and so forth. Importantly, this hybrid model accounts for the selectivity of memory (only attended states are stabilised long enough to collapse into a trace), for its durability (collapse is irreversible), and for the everyday observation that some experiences “stick” while others evaporate.
In short, memory formation is a two-phase process: quantum selection of the to-be-remembered state through attentional stabilisation (QZE), followed by classical actualisation of that state as a durable trace through informational collapse (QCT).
This model surpasses both purely classical and panpsychist models of memory by providing a mechanistic bridge from quantum state selection to classical information storage, integrating subjective experience with objective neurodynamics.
From the perspective of 2PC, the mystery of general anaesthesia – its selectivity, reversibility, and consistency across chemically diverse agents – finds a natural explanation in terms of psychegenesis as a quantum-informational phase transition. In this framework, consciousness is not generated by classical neural computation alone, but instead emerges from a pre-physical quantum phase that converges into classical physical reality via QCT.
Consciousness arises when an organism's internal dynamics reach the QCT, forcing the collapse of quantum superpositions into a definite classical outcome. General anaesthetics inhibit consciousness not by silencing classical brain functions, but by interfering with the brain’s ability to maintain the specific quantum-informational conditions necessary for crossing the QCT. This interference could take the form of disrupted quantum coherence in critical neural substrates, degraded information integration or temporal coherence, or interference with the recursive feedback required to sustain a coherent self-model.
In this view, anaesthetics do not target a "consciousness centre" in the brain, but instead disrupt the capacity of the system to recursively stabilise a quantum-coherent self-model, halting the transition from potentiality (Phase 1) to actuality (Phase 2).
The striking chemical diversity of anaesthetics (from inert gases to complex molecules) no longer seems mysterious: in 2PC, the relevant shared feature is not biochemical binding, but the functional disruption of the information-theoretic and dynamical architecture that sustains pre-collapse coherence.
Each anaesthetic compound, through different molecular mechanisms, perturbs critical thresholds of information integration, temporal coherence, or recursive feedback – all necessary conditions for the QCT to be met. Consciousness thus fails to actualise, even though the classical brain remains alive and physiologically functional.
Because the pre-collapse phase is quantum-informational, not energetically destructive, its disruption does not damage the physical substrate. Once the anaesthetic clears and quantum-coherent conditions are restored, the system can again reach the QCT, reinstating the conscious collapse process. This explains both the reversibility and precision of the loss and return of subjective experience, as well as its clean dissociation from metabolic or neural death.
General anaesthesia demonstrates that consciousness is not a side-effect of ongoing computation but a state-dependent process that actualises physical reality itself via quantum collapse. In 2PC terms, anaesthesia "freezes" the psychegenic function, leaving the system trapped in a metastable classical mode – still running autonomic scripts, but no longer collapsing potential futures into an experienced present.
In summary, the 2PC solution reframes the general anaesthesia puzzle not as a failure of neuronal firing or connectivity, but as a suppression of the quantum-informational transition (QCT) that gives rise to experience and actuality. The paradoxical combination of robustness and fragility in consciousness reflects the narrow, dynamical conditions required for psychegenesis within a biologically viable but quantum-sensitive substrate. Anaesthetics exploit this narrowness to pause the act of being by suspending the universe-selecting act of conscious collapse.
At this point 2PC can only show us the door. We still have to find our own way through it. If we view reality as materialistic and deterministic (perhaps with a random element thrown in for good luck) then we are condemned to fight a losing battle against nihilism. Science can't provide meaning or value; that's not what it is for. Under 2PC this situation is transformed. Conscious beings become active participants in the co-creation of reality. Our decisions actually make a difference to things. We have free will. Beyond that, all we have are questions. I'd like to believe that 2PC can at least help to ensure that those questions are worth asking, even if it cannot provide any answers.
The Two-Phase Cosmology and the Quantum Convergence Threshold offer a compelling new framework for understanding how consciousness, measurement, and the emergence of classicality shape our observed universe. Together they provide coherent solutions to long-standing puzzles ranging from the arrow of time and the measurement problem to the fine-tuning of constants and the evolution of consciousness. Yet a profound question remains open: What gives rise to the initial quantum superposition itself?
The first phase of this cosmology presupposes a richly structured, high-dimensional quantum wave function – an ontologically real superposition from which the cosmos eventually collapses. But if we trace causality all the way back to its ultimate boundary, we find ourselves confronting the pre-cosmic: the enigmatic condition symbolized here as 0|∞: a state beyond space, time, and information – a ground of pure paradox.
This paradoxical origin calls for a new kind of theoretical framework: one that can explain how a structured quantum superposition arises from a ground beyond space, time and information, without presupposing any of the structure it is meant to generate.
I believe that this missing layer must be neither material nor purely formal, but something like a structural void – capable of differentiating itself into a manifold of possibilities without presupposing any of them. This is likely to require the mathematics of higher-dimensional topology, non-associative algebras, or novel symmetry-breaking dynamics. Such a framework, if it can be constructed, would bridge the metaphysical rift between the 0|∞ ground and the structured quantum cosmos of Phase 1. It would complete the picture, embedding our entire cosmological narrative within a fully generative ontology.
We are not yet there. But the signs suggest that we are close.
The strength of this combined model (2PC+QCT) lies in its coherence: it is a way of bringing together a disparate set of mysteries in such a way that they stop being so mysterious or incomprehensible. The only new thing introduced into the model is Henry Stapp's “Participating Observer”. Stapp doesn't go into detail about what this term ultimately refers to, but somebody else has already done that job: Erwin Schrödinger.
Unlike the many Western scientists who draw a strict line between scientific inquiry and spiritual reflection, Schrödinger believed the two could and should inform each other. He rejected the assumption that consciousness is an accidental byproduct of neural computation and turned instead to Advaita Vedanta, which teaches that the individual soul (Atman) and the universal ground of being (Brahman) are one and the same. In his writings, particularly What Is Life? and his later philosophical essays, Schrödinger argued that the multiplicity of selves is an illusion – a "Maya" generated by our sensory perspective and reinforced by language and ego. The true Self, he believed, is singular and eternal. This is not metaphor, for Schrödinger; it is ontological truth. He wrote: "Consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing..." This is, word-for-word, the philosophy of Advaita.
When talking about Stapp's theory, we use the term “Participating Observer”. In the context of the Two-phase Cosmology, we write it as 0|∞. We should make clear at this point that this is not idealism, but a form of neutral monism. It respects the conclusion that brains are necessary (though insufficient) for minds, and rejects the idea of the existence of disembodied minds. There is therefore no reason to categorise objective (or phase 1) reality as mental.
This system puts the one necessary paradox – the origin of all structure from structureless contradiction – at the base. There is no way to get rid of the ontological paradox of 0|∞. All explanations have to end somewhere, and there are ultimately limits to what humans can comprehend. The claim is ultimately mystical. It arrives at the same impasse that has haunted the deepest thinkers of every tradition, where reason approaches a limit and discovers that the final explanatory ground is paradoxical, ineffable, and self-negating. Rather than avoiding contradiction, this stares directly at it and says: this is the origin of everything, and it is necessarily paradoxical. And like Gödel’s incompleteness theorems, or the Tao that cannot be spoken, it marks the limits of explanation and then respects them.
Every complete system needs an axiom it cannot prove. This system locates that axiom not in a proposition, but in a Paradox. The Paradox is not within the world – it is the condition for the world to arise. And the recognition of this is not empirical, but mystical – not irrational, but meta-rational. Like Wittgenstein’s ladder, the argument ascends from logic, to paradox, to silence.
Part 4: Synchronicity and the New Epistemic Deal (NED)