NOTE: most of this material is taken from my forthcoming book The Sacred Structure of Reality: Two-Phase Cosmology and a Metaphysics for the Quantum Age. The whole book is available as a free PDF from Zenodo.

Cosmology as we know it is already in deep trouble, the crisis is intensifying, and there is no hint of any light at the end of the tunnel. However, if you ask cosmologists whether they believe the crisis in cosmology might have something to do with metaphysics, the answer will be a near-unanimous “no”. Regardless of how bad the problems are, they are not so bad that we should entertain the idea that the root cause is philosophical rather than scientific. It is hard to imagine how deep the crisis would have to be for cosmologists to start thinking like philosophers. 

In this essay I am going to argue that the standard model of cosmology – Lambda Cold Dark Matter (ΛCDM) – is the best model that physicalism will ever be able to offer. All attempts to tweak it, or to fix it by adding even more complexity (to a system which already resembles a Rube Goldberg machine), will cause at least as many problems as they solve. There is no way to fix it, because the real problem is physicalism itself.

Physicalism is a metaphysical monism – it claims that reality is made of one kind of stuff: physical stuff. However, for this to be of any use in a technical, philosophical sense, it is essential that scientists can agree on what “physical stuff” actually refers to, and what it doesn't – and unfortunately they cannot. It certainly can't mean “everything which exists”, because that simply defines physicalism to be true; if it were to turn out that God or ghosts exist, it would logically follow that these too are physical (at which point “physical” would be meaningless).

Another argument often deployed in defence of physicalism is that the alternatives are even worse, and this reasoning is not entirely unjustified. The critics of physicalism often propose alternatives which clash with neuroscience, because they depend on something like this:

  1. Physicalism is false, because it cannot account for consciousness.      
  2. Therefore consciousness must be fundamental to existence.

This line of thinking is seductive (and particularly attractive to anybody seeking rational justification for belief in life after death), but the empirical evidence suggesting that brains are necessary for consciousness is very convincing, and the conclusion doesn't actually follow from the premise. That physicalism cannot account for consciousness supports the conclusion that brains are insufficient for consciousness – that something else must also be involved – but it does not follow that brains aren't necessary. More seriously in the present context, the most popular alternatives (dualism, idealism and panpsychism) do not offer much in the way of a resolution to the crisis in cosmology. Do any of these mind-is-fundamental alternatives resolve the Hubble Tension, or tell us why we can't quantise gravity? It doesn't look that way at the moment. So my first conclusion is this: we need to be aware of the inadequacies not just of physicalism, but of the whole problematic, including the ineffective and fragmented opposition to the physicalist status quo. What we need is a game-changing new option: something which doesn't just critique, but offers enough new explanatory power to fuel a Kuhnian revolution not just in cosmology, but also in quantum foundations and cognitive science. We need a new metaphysical system which takes seriously all of the available empirical evidence, is internally coherent and comprehensive, is simple and elegant rather than complicated and clunky, and conclusively resolves the entire crisis in cosmology. It must either solve, dissolve or fundamentally reframe every major cosmological anomaly, conclude the unfinished quantum revolution, and provide coherent answers to the big mysteries about consciousness: what it is, what it does, how it evolved, and how it is related to physics.

I agree that physicalism cannot account for consciousness, but I think it suffers from an even more direct problem which scientists have spent too long failing to acknowledge. The consciousness problem can be circumvented by declaring that there's no such thing. This is deeply unsatisfying in other ways, but it eliminates it as a potential problem for cosmology. The more direct problem is that scientists don't just have one concept of physical. They have two, and physicists have got no idea how to fit them together. They and their cosmological colleagues have now spent exactly a century trying to figure out how to reconcile General Relativity (GR) and Quantum Mechanics (QM), without success. While each of these frameworks is highly successful in its own right, all attempts to bring them together to construct a continuous model of the whole of the cosmos, on all scales, and across the whole of cosmic history, have failed. You might think that an obvious conclusion would be that they aren't supposed to be joined together in this way, but such a conclusion is scientific heresy, because it directly contradicts physicalism.

Why, you might ask, can't you have a physicalism which includes two kinds of physical? If they are both physical, what's the problem? The answer is that if there are two kinds, then you need to be able to explain how they are related to each other. If this relationship were itself unproblematically physical then we wouldn't need two kinds of physical at all – it would be the continuous model that cosmologists have spent a century fruitlessly searching for. What I'm suggesting is that there is no way to do this – it is mathematically impossible, as demonstrated by a century of failed attempts to find the mathematics needed to make it work. In other words I am saying that the holy grail – a quantum theory of gravity – is fundamentally impossible, because the quantum world and the classical-relativistic world are two fundamentally different levels of reality, and the bridge between them is itself non-physical.

At which point you have every right to ask me how I am defining physical (otherwise “non-physical” is meaningless). For me this is much less of a problem than it is for the physicalists, because I'm a neutral monist. The metaphysical system I am going to describe has space for both kinds of physical. I call the quantum realm “Phase 1” and the classical-relativistic realm “Phase 2”. This immediately raises the question of how I define the phase transition – how Phase 1 is related to Phase 2 – but before I answer that we need to return to the problem space itself. This argument is driven by cosmology, so most of the problems on this list are cosmological, but I will also include the broader problematic, including quantum foundations and consciousness.

Key definitions

What is the difference between "material" and "physical"? Many people use these words interchangeably, and don't acknowledge any important difference between "materialism" (the belief that reality is made of material objects, or that the material universe is all that exists) and "physicalism" (ditto). Unfortunately (for physicalism) we have two radically different concepts of "physical" – those which came before and after the discovery of quantum mechanics. "Physicalism" is typically defined as "the belief that reality is made of whatever our current best physical theories indicate that it is made of", which would be great if scientists (or philosophers) could agree about what quantum mechanics is telling us about the nature of reality. Unfortunately, they can't. “Material” is less problematic as a term only because it more obviously refers to the pre-quantum concepts of physical, but if that is how the term “materialism” is to be defined, then as an ontological theory it can be certified dead. 

The word "consciousness" suffers from similar problems. As things stand there is no agreement on how to define this word, and consequently no agreement about how it relates to the rest of reality, or even whether it exists at all. This problem is directly related to the confusion surrounding "material" and "physical", and it could all be cleared up as follows.

"Consciousness" is the only reason we know reality exists at all. It is the frame for our own subjective experience of reality. As such, the only way we can define it is in terms of subjectivity itself – we must, in effect, mentally point to our own experiences and associate the word with those experiences. The technical name for this is a "private ostensive definition". This is not an orthodox definition, but it establishes what the word is supposed to mean. I must stress that Wittgenstein's private language argument does not apply in this case, because I am not trying to define a private language; I am merely establishing the meaning of the word “consciousness” as used.

We can take this a bit further, because it is necessary to ensure that we avoid solipsism (the belief that nothing exists outside our own mind). Within our own consciousness, we are aware of a large number of other beings which behave as if they are conscious – not just other humans but also most animals, right down to the level of something like an insect or a worm (although where exactly we draw the line is very much an open question at this point). If we assume these other beings are actually conscious too, then solipsism can be dismissed.

We can now give clear definitions of material and physical. The "material world" is a three-dimensional realm, populated with objects both living and non-living, which continually changes as time passes. In this world it is always the present moment, time always "flows" in the same direction, and objects are always in just one place at any one time and have a single set of properties. We are intimately familiar with this material world, because it is presented to us within consciousness whenever we're awake. When we're asleep we experience a "ghostly" version of the material world – one which seems real enough to the dreamer, but events which occur within it are not constrained by the laws of physics. 

The "physical world" is something that forever lies beyond the veil of perception – we can never escape our own consciousness and experience that world directly. Not everybody agrees that this mind-external world should be called "physical" – physicalists and dualists do, but objective idealists claim it is another sort of consciousness, and subjective idealists deny that it exists at all. I think we must assume that it does exist, for there must be a reason why things remain consistent in each of our individual versions (or “projections”) of material reality. I shall use "physical" to refer to this objective reality (and I will explain why), but I must make clear that I'm not stipulating that the parts of it that correspond to the material world are all that there is – I will leave open the possibility that other things might also exist in the objective realm beyond our minds, because I don't think we are in a position to safely rule them out (though skepticism is reasonable). In this way I can use "physical" to refer to the parts that do correspond to our projected material realities. This mind-external realm can also be called "quantum reality", and QM implies that objects can be in multiple places and have multiple sets of properties until such time as they are measured or observed (whatever that means). It is also far from clear whether there is any such thing as "now" in this realm, or whether time flows forwards or backwards (or perhaps a bit of both, or neither). In other words, the concept of the physical world I am describing now is a very different concept from that of the material world with which we're so familiar. This justifies the use of two different terms.

The reason we can't agree on whether this objective part of reality should be labelled mental, physical or something else is because all we can actually know is its structure – or part of its structure. We can therefore clearly say this is a form of structural realism. But as all we can know is structure, I describe it as neutral. The model of reality that I'm going to describe is indeed a new form of neutral monism – the belief that objective reality is made of neither consciousness nor matter, but something else. The only sensible candidate for the neutral stuff is information, though this leaves unanswered the question of what, if anything, that information is grounded in. Where did it come from? How is it stored and updated? 

Schrödinger's second equation

To understand why this mysterious “collapse” matters so much, we need to look at the central concept of quantum mechanics: the wave function. The wave function is not a physical wave moving through space, but more like a giant map of everything that could possibly happen. If you take something small enough (say, an electron) the wave function doesn’t tell you where it is. Instead, it tells you all the places it could be. Before we look, the electron isn’t in one place or another. In a very real sense, it is in all of them at once.
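This “map of possibilities” picture can be made concrete with a toy sketch. The five positions and complex amplitudes below are invented for illustration (they are not drawn from any real system); the point is how the Born rule turns a discrete wave function into a list of probabilities – the squared magnitude of each amplitude gives the chance of finding the electron at that location.

```python
# Toy discrete "wave function": complex amplitudes over five possible
# positions for an electron. Values are purely illustrative.
amps = [0.1 + 0.2j, 0.5 + 0.0j, 0.3 - 0.4j, 0.0 + 0.5j, 0.2 + 0.1j]

# Normalise so the squared magnitudes sum to 1.
norm = sum(abs(a) ** 2 for a in amps) ** 0.5
amps = [a / norm for a in amps]

# Born rule: probability of each measurement outcome is |amplitude|^2.
probs = [abs(a) ** 2 for a in amps]

print([round(p, 3) for p in probs])  # probability of each position
print(round(sum(probs), 3))          # → 1.0 (the electron is found somewhere)
```

Before measurement, all five locations are live possibilities; nothing in the amplitudes themselves singles one out. That selection step is exactly the gap discussed below.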

Erwin Schrödinger was the first person to write down the equation that tells this strange “possibility wave” how to move and change. This was in 1926, and the result, known as the Schrödinger equation, is a cornerstone of modern physics. But from the beginning there was a catch: the equation describes how possibilities spread and evolve, but not how they resolve. It can tell us all the futures (or pasts) that might happen (or have happened), but it cannot explain why only one of them does (or did).
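For reference, the modern textbook form of the equation (written here for a single particle in one dimension) is a statement of smooth, deterministic, linear evolution:

```latex
i\hbar \frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t),
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}} + V(x)
```

Here Ψ is the wave function, ħ is the reduced Planck constant, and Ĥ (the Hamiltonian) encodes the system's energy. Notice that nothing in the equation selects a single outcome – there is no collapse term anywhere in it.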

Schrödinger himself struggled with this gap. Later in his life, he turned toward philosophy in search of an answer. What he found was not in Europe's scientific tradition at all, but in India's ancient Upanishads. These texts describe a profound identity between the root of personal consciousness (Atman) and the ground of all being (Brahman). Schrödinger believed that this insight solved the riddle of collapse: the transition from possibility to actuality is tied to consciousness itself, and consciousness is not just a small private bubble inside our heads but the very ground of reality. He famously, if somewhat mischievously, referred to this as his “second equation”. Though it never appeared in a peer-reviewed journal, it was the necessary philosophical bookend to his work in physics: an identity statement – Atman equals Brahman – claiming to name the only thing that can bring those possibilities into a single, actual world: the unity of consciousness with the universe itself.

However, Schrödinger never explicitly integrated his two equations – for him they remained parallel ways of understanding reality, rather than components of an integrated metaphysical and physical system. The primary purpose of the present essay is to show how this integration might actually work. Schrödinger was talking about the same pivot I have been talking about. His first equation describes what “happens” in what I call Phase 1. His second is an essential part of the explanation for what happens in Phase 2; collapse is where these two phases meet. Schrödinger's proposal was that this pivot cannot be found in physics alone. It can only be found by recognising that the root of the consciousness which experiences reality from within is the same thing that brings reality forth at all.

This was too radical for physics to absorb. The first equation could be tested, measured, and turned into technology. The second was a metaphysical claim about the ground of being. Yet Schrödinger saw them as two halves of the same deeper truth.      

Although the origin of this hypothesis is mystical (to the extent that it could serve as the broadest possible definition of what the word “mystical” actually means), what follows is not a religious narrative. Things I am not implying include: God, idealism, disembodied minds of any sort, individuated souls and the afterlife, revealed moral rules, or anything to do with spiritual development. I am attempting to do philosophy. I will never be any sort of guru, and the purpose of this essay is not “self-help”. The hypothesis is that the root of personal consciousness – the observer rather than what is observed – is ontologically identical to what I formally write as 0|∞ and call “the Void”. In this essay I shall be treating Schrödinger's second equation not as a mystical declaration, but as a structural hypothesis.

The problem space

1) How can something come from nothing? 

There are countless ways of restating this question. Why does anything exist? Why isn't there just nothing? What caused the Big Bang? etc...

2) The Constants Fine-Tuning Problem

The fundamental constants of nature appear to be exquisitely calibrated to allow for the existence of life. Why does the universe appear to be precisely set up to make life possible?

3) The Low-Entropy Initial Condition

The universe began in an extraordinarily smooth, low-entropy (highly ordered) state, as shown by the near-uniform cosmic microwave background (CMB). Physics does not demand or explain such fine-tuning. 

4) Inflation-related fine-tuning problems

To address problem (3) above and also problem (6) below, cosmologists proposed “inflation” – a fleeting period of superluminal expansion that smoothed the early cosmos. Inflation ends when its driving potential energy decays into matter and radiation, a process called reheating. For today's universe to emerge, this reheating must occur with extreme precision in both timing and efficiency, yet no known mechanism explains this. Inflation therefore fails to avoid fine-tuning, because it actually requires more fine-tuning than it gets rid of.

5) Other fine-tuning problems.

Countless additional fine-tuning issues exist. The universe shows an unusually favourable balance of elemental abundances for stable stars and biochemistry. Galaxies and stars also formed at just the right time – early enough for life to evolve, but not so early as to disrupt cosmic smoothness. Further tunings include the matter–radiation equality and primordial perturbation amplitude problems. 

6) The Missing Monopoles

Grand Unified Theories (GUTs) of particle physics predict the production of magnetic monopoles – massive, stable particles carrying a net magnetic charge – during symmetry-breaking transitions in the early universe. The problem is that no magnetic monopoles have ever been observed. Inflation solves it by “diluting” them with empty space.

7) The Baryon Asymmetry Problem

A foundational assumption of particle physics and cosmology is that the laws of nature are nearly symmetric between matter and antimatter. In the earliest moments after the Big Bang, the universe should have produced equal quantities of baryons (matter) and antibaryons (antimatter) through high-energy particle interactions. What we actually observe is a universe composed almost entirely of matter. 

8) The Hubble Tension

This is a large and persistent discrepancy between two different measurements of the rate of cosmic expansion: early-universe inferences (from the CMB) yield roughly 67 km/s/Mpc, while late-universe, distance-ladder measurements yield roughly 73 km/s/Mpc.
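To give a sense of how serious the discrepancy is, the sketch below expresses it in standard deviations. The numbers are assumptions for illustration – roughly the Planck 2018 CMB value and the SH0ES distance-ladder value, with their quoted 1-sigma errors; the exact figures shift from analysis to analysis.

```python
import math

# Representative values (assumed here for illustration), in km/s/Mpc:
# early-universe (CMB) vs late-universe (distance ladder) Hubble constant.
h0_early, err_early = 67.4, 0.5
h0_late, err_late = 73.0, 1.0

# Naive significance of the gap, treating the errors as independent Gaussians.
tension_sigma = abs(h0_late - h0_early) / math.hypot(err_early, err_late)
print(f"{tension_sigma:.1f} sigma")  # → about a 5-sigma tension
```

A 5-sigma discrepancy is the conventional threshold for a “discovery” in particle physics, which is why this tension is treated as a genuine crisis rather than a statistical fluke.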

9) The S8 Tension 

This is a persistent mismatch between the level of matter clumpiness predicted by ΛCDM for the early universe and what we actually observe in the late universe. CMB measurements fix a precise value for how strongly structures should have grown, but weak lensing, galaxy clustering, and cluster counts all find a smoother cosmos with a significantly lower S8. The gap has widened as data improved, creating a second major early-versus-late tension that the standard model cannot resolve.  

10) "Dark Energy" 

Dark energy was invented to account for a surprising set of astronomical observations that contradicted long-standing expectations. A repulsive force appears to be pushing the universe apart at an accelerating rate (almost like anti-gravity). Today, Dark Energy accounts for roughly 70% of the total energy density in standard ΛCDM, but its origin, nature, and ontological status are unknown.

11) The Cosmological Constant Problem

Dubbed the "worst theoretical prediction in the history of physics", the cosmological constant problem is a staggering mismatch between the theoretical prediction of the repulsive force described above and the observational measurement of that force. The mismatch is between 60 and 120 orders of magnitude.
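The commonly quoted version of the mismatch compares the observed dark-energy density with a naive quantum-field-theory estimate of the vacuum energy using a Planck-scale cutoff. The figures below are order-of-magnitude placeholders, not precise values:

```latex
\rho_{\Lambda}^{\,\mathrm{obs}} \sim 10^{-9}\ \mathrm{J/m^{3}},
\qquad
\rho_{\mathrm{vac}}^{\,\mathrm{QFT}} \sim 10^{113}\ \mathrm{J/m^{3}},
\qquad
\frac{\rho_{\mathrm{vac}}^{\,\mathrm{QFT}}}{\rho_{\Lambda}^{\,\mathrm{obs}}} \sim 10^{122}
```

Lower-energy cutoffs give smaller but still catastrophic discrepancies, which is why the mismatch is quoted as anywhere from 60 to 120 orders of magnitude.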

12) "Dark Matter"

Dark Matter has never been directly detected, but regardless of that it is now thought to comprise approximately 85% of the matter content of the universe. The hypothesis of Dark Matter emerged as a unifying explanation for multiple independent observational anomalies across different astrophysical and cosmological scales. In each case, visible (baryonic) matter alone proved insufficient to account for the observed gravitational effects. After decades of experiments, we still have no clear idea what it is or where it came from.

13) The Quantum Gravity problem 

A central goal of theoretical physics for nearly a century has been the unification of quantum mechanics and General Relativity, but the two most successful theoretical frameworks remain conceptually incompatible.

14) The Early Galaxy Formation Problem

The James Webb Space Telescope has detected massive, metal-rich, well-formed galaxies at redshifts greater than 13 – meaning they already existed 325 million years after the Big Bang. The abundance, size, composition and apparent maturity of these early galaxies outpace the predictions of hierarchical structure formation, challenging both the timeline and mechanisms of ΛCDM. 
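The “325 million years” figure can be reproduced from the standard ΛCDM expansion history. The sketch below numerically integrates the Friedmann equation to get the age of the universe at a given redshift; the parameter values are assumptions (roughly Planck-like), and radiation is neglected, which slightly overstates ages at very high redshift.

```python
import math

def age_at_redshift(z, h0=67.4, om=0.315, ol=0.685, zmax=5000.0, n=100000):
    """Cosmic age in Gyr at redshift z for a flat LCDM universe.

    h0 is in km/s/Mpc. Radiation is neglected, so ages at very high
    redshift come out slightly too large.
    """
    # Hubble time 1/H0 in Gyr (1 Mpc = 3.0857e19 km, 1 Gyr = 3.1557e16 s).
    hubble_time = (3.0857e19 / h0) / 3.1557e16
    # t(z) = (1/H0) * integral over u = ln(1+z') of du / E(z').
    u1, u2 = math.log(1.0 + z), math.log(1.0 + zmax)
    du = (u2 - u1) / n
    total = 0.0
    for i in range(n + 1):  # trapezoid rule
        zp = math.exp(u1 + i * du) - 1.0
        e = math.sqrt(om * (1.0 + zp) ** 3 + ol)  # E(z) = H(z)/H0
        total += (0.5 if i in (0, n) else 1.0) / e
    return hubble_time * total * du

print(round(age_at_redshift(13) * 1000))  # age in Myr at z = 13 (≈ 330)
```

A galaxy observed at z > 13 must therefore have assembled its stars and metals within roughly the first 330 million years of cosmic history, which is the tension described above.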

15) The Fermi Paradox

Our theories suggest life should be abundant in the cosmos, but after over a century of intense searching, we have found no sign of it. Where is everybody?

16) The Black Hole Information Paradox.

The black hole information problem asks whether information that falls into a black hole is lost when the black hole evaporates via Hawking radiation. Modern approaches suggest that unitarity is preserved, but only by abandoning naïve locality, independent interior–exterior descriptions, or observer-independent global states. This raises a deeper conceptual question: what counts as information, where does it reside, and when does it become physically real? 

17) The Arrow of Time and the Problem of Now

Human experience and natural processes clearly distinguish past from future, yet the fundamental laws of physics are time-symmetric, treating both directions equally. Why, then, do we perceive a one-way arrow of time? A related puzzle concerns the present moment: in relativity, time is just another dimension, and all events coexist in a four-dimensional block universe with no privileged “now.” Yet the present is all we ever experience.

18) The Quantum Measurement Problem

How does the range of possible outcomes predicted by the laws of QM become a single observed outcome? Why can't we agree on an answer?

19) The Hard Problem of Consciousness

The "Hard Problem of Consciousness," a term introduced by philosopher David Chalmers, refers to the extreme difficulty of explaining how and why physical processes in the brain could possibly give rise to something as utterly different to brain activity as subjective experience.

20) The Evolution of Consciousness (Nagel's challenge)

If we can't even agree that consciousness exists, and have no agreed scientific theory of what it does, what hope do we have of explaining how, why or when it evolved? This problem isn't just empirical – something is conceptually amiss.

How can something come from nothing? (#1)

How can something come from nothing? The answer is simple, and has been known since antiquity: Ex nihilo nihil fit – from nothing, nothing comes. If absolute nothingness had ever been real, there would still be nothing now. The existence of anything at all means that, barring a logic-defying miracle, some kind of eternal ground must underlie reality. That leaves us with two options. One is an eternally complex source such as an Abrahamic God – a pre-existent being who chooses a possible cosmos and wills it into being. The other is an eternally simple source – a condition with no prior structure, no determinate content, but infinite potential. The simplest possible paradox: an Infinite Void.

I have never believed in the intelligent designer God which so many Western theists are convinced is real. By the time I was old enough to have formed a view on such things, I had decided it was about as believable as Father Christmas, and I chose Christmas Day to refuse to attend church ever again. Much has changed about my understanding since then, but the idea of God as a kind of CEO and project engineer of reality has never made sense to me. If such a being actually does exist – a God who thinks, designed the cosmos, and makes strategic decisions about the course of human history – then I have a lot of questions to ask about the details of Its decision-making. So for me this is not a tough decision – I start my system with an Infinite Nothingness. I write this as 0|∞. This represents the unity of absolute absence and limitless possibility – the paradoxical ground from which all structure arises.

The Pythagorean Ensemble

The Void is not an empty space or a physical vacuum. It is a pre-physical background from which all consistent mathematical structures may emerge. Because there are no spatio-temporal constraints yet, 0|∞ “contains” all coherent and unchanging mathematical forms – all sets of internally consistent mathematical relationships, which includes the totality of all physically possible universes, histories, and processes except those that encode conscious organisms (a point I will explain shortly). This is a strong form of what is usually referred to as mathematical Platonism: any logically coherent structure exists, in a timeless and spaceless way, within the realm of formal possibility. I prefer to call it Pythagoreanism. This is because Plato didn't stop at mathematical structure, but also included a whole range of “forms”, such as the “perfect table” from which all other tables are ultimately derived. There are no tables in the ensemble I'm describing – there are structures that resemble tables, but nothing can actually be a table without a conscious being such as a human assigning that specific meaning to it. There's no perfection in the ensemble either, for the same reason.

Within 0|∞, every mathematically valid but non-conscious cosmos exists in superposition. Not “in parallel universes” in the physical sense, but as informational structures with complete internal logic. Some correspond to universes with no stars, some to universes with strange physics and some to our own universe, including the entire history of our cosmos from Big Bang to Earth’s early biosphere. These are not “happening”. There is no time or change yet, only possibility. They simply exist as coherent totalities in the Pythagorean sense.

This is a structural and logical starting point, not a temporal one. In some ways it resembles the Many-Worlds Interpretation (MWI) and the multiverse of all cosmoses. It is structurally identical to Max Tegmark's mathematical multiverse, apart from one thing: MWI and Tegmark's multiverse are both essentially materialistic notions, which have nothing to say about consciousness. Therefore they contain all possible timelines, and all possible cosmoses, including (since no exceptions are made) all possible timelines involving conscious beings. Conscious beings in these models are just “along for the ride”. Our own lives, as we experience them, are nothing more than random paths through a meaningless multiverse. There is no difference between possibility and actuality – all worlds are actual.

So there is a crucial difference in the model I am describing: the Pythagorean ensemble is only a collection of possibilities, not actualities. If all of these possibilities were as real as the reality we actually find ourselves in, then the standard anthropic solution to the fine-tuning problems in cosmology would apply: all possible realities exist, and only in those which contain intelligent beings such as humans does anybody ever ask questions about how they came to exist. This would leave us with no new scope for progress on the other problems on our list, including the mind-splitting problem of MWI.

We are now talking about a very different situation to the one where we are wondering how our particular reality can come from nothing. The entire problem space has been turned on its head, and we are left with a very different question: if only a small proportion of these possible worlds become actual worlds, what determines which of the possibilities are actualised? Instead of needing to explain how reality can be constructed from nothing, according to a set of laws we can't explain the origin of, we need some sort of “mechanism” (in quotes because it might not be quite the right word for something which is essentially metaphysical) which selects specific realities from the vast range of all possibilities. Just such a mechanism has been staring us in the face since 1957, but until now nobody has noticed it.

A new solution to the Measurement Problem (#18)

My conviction that zero=infinity goes back to my initial defection from materialism to neutral monism in 2002, but this did not lead me straight to the new cosmology. My journey down that path began during a discussion on social media about the idea that consciousness collapses the wave function. It had followed the usual kind of trajectory: plenty of people wrongly insisting that science has conclusively ruled out the involvement of consciousness in wave function collapse (“you need to understand that no true Scotsman physicist takes it seriously any more...”), and a question which regularly comes up in such discussions: if consciousness is needed for wave function collapse, then what collapsed the wave function before consciousness evolved? To those who ask it, this question strikes at the heart of consciousness-causes-collapse (CCC). We have clear empirical justification for believing brains are necessary for consciousness, so if consciousness is necessary for wave function collapse, then what are the implications for the state of the universe before the first conscious organisms appeared? Are we proposing that quantum processes weren't happening? Because that is just absurd. Are we suggesting that the universe wasn't in any specific state? Was everything just a kind of “quantum soup”? How could anything have evolved if that was the case?

This question had drawn the customary response: consciousness was always there. It's fundamental to reality. This answer leads us towards panpsychism, idealism and dualism, and while it does indeed answer the question, it also denies that brains are necessary for consciousness. This lines up with some very old and much revered philosophical systems, which makes it popular with many anti-materialists, but these systems clash with neuroscience. Obviously this is denied by people who believe consciousness is the sole substrate of all reality, but it is because of this clash that none of these ancient metaphysical ideas are capable of sustaining a major paradigm shift in the 21st century. They are steadily attracting more interest as the scientific crisis deepens, but even their most ardent defenders aren't expecting a revolution any time soon.

Then it occurred to me that a very different kind of answer is possible – one which is so elegant and retrospectively obvious that it is astonishing that nobody has proposed it before now. It is the first solution to the Measurement Problem to actually escape from the Quantum Trilemma I described in Chapter 5, rather than merely dodging the questions. If wave function collapse requires consciousness, but there isn't yet any consciousness because evolution has not yet produced the right kind of organisms, then it logically follows that the wave function isn't collapsing at all. This resembles MWI, except that there is no material reality, because the Everettian branches exist only as possibilities. They are like Heisenberg's “potentia”, but explicitly ontic rather than epistemic. They have not been “realised”. In my terminology, the quantum physical world exists, but there is no classical material reality yet. When I pursued this line of thought, a new kind of cosmological and metaphysical model began to take shape.

Introduction to the Two-Phase solution (2PC) 


The first thing to note is that this system is very specifically a non-panpsychist neutral monism: a metaphysical system where mind and matter emerge together from a more fundamental neutral/informational substrate. It involves two different kinds of physical reality, which I call “phases”, which exist hierarchically within 0|∞.

Phase 0: 0|∞.

Phase 1: a non-local, non-spatiotemporal multiverse of uncollapsed possibility. Pure mathematical structure. Quantum physical rather than material.

Phase 2: a local reality where, from our perspective, the material universe exists within consciousness and the wavefunction is continually collapsing (I have not yet specified the “mechanism”). Classical material rather than physical.

We could say Phase 2 is reducible to Phase 1, which in turn is reducible to Phase 0. We might also say that Phase 1 emerges from Phase 0, and Phase 2 emerges from Phase 1. This scheme is reductive because there is a nested hierarchy where each level can be analysed in terms of its predecessor. It is also emergent, because each phase introduces qualitatively new properties (e.g. locality, consciousness, collapse) that are not present in the prior phase. This duality reflects a dialectical structure: each phase is both a consequence of and a transformation of the one before it. The system is reductive in structure, but emergent in function and experience.

This system emphasises, rather than attempting to gloss over, the two distinct notions of “physical”. There is no arbitrary boundary like the one between “micro world” and “macro world”. The “physical” part of Phase 2 is conceptually the same as classical materialism: a world of three spatial dimensions which changes as time “flows” (whatever that means). It is a Newtonian-Einsteinian concept, where quantum effects are either completely hidden or only hinted at in very specific and unusual situations. Phase 2 is also directly related to idealism, since the material world in question appears within consciousness. Phase 1 is neutral/informational, but it is also “physical” in the quantum sense (it is explicitly non-local and superposed). This bifurcates the concept of “physical” into two different things, similar to the Kantian distinction between noumena and phenomena, except in this system noumenal reality is only partially unknowable (because we can never observe what is happening inside Schrödinger's box) rather than completely unknowable and not even cognisable. It therefore also qualifies as a version of scientific realism, because it explicitly states that there is such a thing as a mind-external objective reality, and grants that science does tell us something about it. We can't tell whether Schrödinger's hat is ruined, unruined, or both at the same time, but we can be absolutely certain that it is indeed a hat, and not an umbrella. We can theoretically calculate the probabilities of all possible contents of the box. What we can't do is have knowledge of which of these states we will observe when we open it.

What we have here is a new interpretation of quantum mechanics. It is effectively a synthesis of Many Worlds and Consciousness Causes the Collapse – something which at first glance seems unthinkable, because MWI and CCC are thoroughly incompatible. The breakthrough idea is the realisation that while such a combination cannot work simultaneously, a sequential combination opens the door to a theoretical revolution. This retains the best ideas from both, playing to the strengths of each, while simultaneously eliminating their biggest weaknesses. MWI provides the immense “computing power” required to summon the first conscious animal from the Pythagorean ensemble, but mind-splitting is cut off at exactly the moment it would start happening. Meanwhile, CCC's before-consciousness problem is solved without implying disembodied minds or conscious rocks, because there's no collapse until something that qualifies as a brain has evolved. It's not physicalism, so there is no Hard Problem, and a new way of thinking about the Problem of Free Will becomes possible. However, what sealed the deal, for me, was the link between this new solution to the Measurement Problem and Thomas Nagel's radical proposal in Mind and Cosmos (2012).

Nagel says almost nothing about quantum mechanics in that book – he goes no further than claiming that quantum indeterminacy provides sufficient scope to allow teleology in the evolutionary process. His main conclusion is that if physicalism is false then the only naturalistic way that consciousness could have evolved is if the universe somehow conspired to make it happen. The lightbulb moment was when I realised that this Two-Phase model provides a structural explanation for Nagel's proposed teleology, without requiring any teleological laws. Because Phase 1 is like MWI, with all possible outcomes existing in parallel branches of a multiverse, it is logically inevitable that a conscious organism will evolve in at least one of them (if it is physically possible, then it happens). At this point MWI will cease to be true, the timeline leading to the evolution of conscious organisms will become real and all the others will remain as eternally unrealised structures within primordial Phase 1. This would produce a realised timeline where absolutely everything required for the evolution of consciousness has actually happened (or so it appears), even if it was exceptionally improbable.

Phase 2 is the reality we are participating in: a classical, material reality that is instantiated through consciousness, though not as consciousness. This must be stated carefully. I am not proposing a unified “consciousness field” within which a single material world is globally realised. Rather, Phase 1 functions as a non-spatiotemporal informational structure, while Phase 2 consists of locally rendered classical realities, each anchored to a conscious perspective.

A useful analogy is a virtual game world. Phase 1 corresponds to the total informational structure that defines all possible consistent game states, while Phase 2 corresponds to the rendered view available to each player. There is no need for a single, globally rendered game world; coherence is maintained through the shared underlying structure, not through a unified display. Consciousness and will play the role of screen and input device, while the material world is the rendered image itself. This perspectival rendering is the deep reason why Phase 2 physics is relativistic in structure while remaining effectively classical in appropriate regimes. Relativity reflects the absence of any privileged global rendering, not the existence of a literal spacetime manifold. Spacetime is not fundamental. It is a mathematical abstraction that would correspond to a globally unified Phase 2 rendering—a structure that does not, in fact, exist. The objectively real domain is Phase 1: non-local, non-spatiotemporal, and quantum. Spacetime survives as an extraordinarily powerful emergent framework, but not as the ontological ground of reality.
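The game-world analogy can be made concrete with a toy sketch (purely illustrative; every name and value below is invented for this example, not part of 2PC itself). One shared informational structure plays the role of Phase 1, each player's rendered view plays the role of Phase 2, and coherence between views comes from the shared structure rather than from any global display:

```python
# Toy illustration of the game-world analogy (all names invented).
# One shared informational structure ("Phase 1") defines the world;
# each player renders only a local perspective ("Phase 2").

SHARED_STATE = {            # the single underlying structure
    "tree": (0, 0),
    "rock": (5, 2),
    "mountain": (5, 5),
    "river": (9, 9),
}

def render_view(player_pos, radius=8):
    """Render only the objects within this player's local horizon."""
    px, py = player_pos
    return {name: (x, y) for name, (x, y) in SHARED_STATE.items()
            if abs(x - px) + abs(y - py) <= radius}

# Two players, two different renderings -- no globally rendered world
# exists, yet the views never contradict each other, because both are
# projections of the same shared structure.
alice_view = render_view((1, 1))
bob_view = render_view((8, 8))

# Any object visible to both players appears at the same coordinates.
for name in alice_view.keys() & bob_view.keys():
    assert alice_view[name] == bob_view[name]
```

The design point is that nothing ever reconciles the two views after the fact: consistency is inherited from the shared structure, which is why no privileged global rendering is needed.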

I must emphasise that Phase 1 never completely disappears. “Phase 1” and “Phase 2” have two different (but closely related) meanings in this system: two stages in cosmological history and two ontological realms. In historical Phase 1 there is no ontological Phase 2. All that exists is the virtual world (because there aren't any players yet) and its history, from its beginning to the exact moment that something capable of serving as a player's avatar appears in that world. Historical Phase 2 starts when the first player enters the game, which means that an ontological Phase 1 begins to exist in a new, participatory form. It is no longer a static, unchanging structure, because there's now a player embodied in the game, and that player's decisions, including their choice of what to observe, are dynamically altering the Phase 1 structure. I call both phases “historical” if I am talking about cosmological history, and if I am talking about metaphysics then I call the first kind of Phase 1 “primordial” and the second kind “participatory”.

The need for an Embodiment Threshold


If you have a two-phase model of cosmology, then a new critical question arises. What defines the transition from Phase 1 to Phase 2? What qualifies as a brain in this model? What could be so special about a brain that it triggers or enables a fundamental change in the structure of reality?

If this model is correct, then just as life must have emerged from nonlife, conscious life must have first emerged from unconscious life. These two transitions differ in kind. Life is a process, not a thing, and its beginning was a continuum from chemistry to biology, not a single event (although there must have been multiple critical events along the way). The boundary of consciousness, as it has been defined here, cannot be this blurred. You cannot have half a consciousness or be half conscious; if you are “half conscious,” then you are, by definition, conscious. Somewhere in evolutionary history, the first organism crossed that metaphysical line, and for the first time, the universe experienced itself from within. 

 Before diving into the evolutionary and metaphysical implications of the Embodiment Threshold, it’s worth clarifying why the very idea of such a threshold is unavoidable. The existence of a threshold of this kind does not depend on any specific theory of mind or interpretation of quantum mechanics. It follows directly from a simple logical necessity: either consciousness has always existed, or it appeared at some point in the history of the cosmos. There is no coherent middle ground.

If we reject the notion that consciousness is a fundamental property of matter (as in panpsychism) and the reverse (as in idealism), and also reject dualism, we are left with the recognition that at some stage, physical complexity and informational integration must have reached a critical organisation where subjective experience became possible. Crucially, this threshold is not just an arbitrary milestone in evolutionary history; it represents a categorical shift in the nature of reality itself. Below the threshold, physical systems evolve without any intrinsic “view from within.” Above it, the same physical substrate now "carries" (somehow) a dimension of being that cannot be reduced to its prior components. Something new has entered the cosmos: subjectivity.

All the previous candidates for the threshold failed because they tried to define consciousness from the outside. I am going to make a case that the real threshold has to be defined from the inside, and I begin with the observation I made in the introduction to this book.

What consciousness does

There may be no non-controversial materialistic answer as to what consciousness does, but with a combination of neuroscience and information directly available to us as individual conscious beings, the function of consciousness could scarcely be more obvious: consciousness is a process whereby we select our preference among the range of futures we believe to be possible. It begins with the construction of a model of an objective world, within which our bodies exist. This already implies a “self” – an “I”. We see ourselves as a coherent unit – an identity which persists over time. It feels like I'm the same me as I was ten years ago, and in ten years' time (assuming I am still alive) I'll still be that same me. Many people have argued that this self is some sort of illusion, but if so it is a very convincing one. It is only because we think of ourselves as persisting entities that we have any motive at all to make mental models of future possible worlds and assign value to the various options. If there is no me then there is nothing to care about the future.

This is another reason why MWI seems crazy: if we actually made every physically possible decision then every time we walked near the top of a cliff, wouldn't there be certain timelines where we spontaneously decide to jump off? Nothing in the laws of physics prevents us from doing so – it is just that we usually choose not to because we place a very low value on the future which would result from that choice.

Perhaps this example is too extreme, but something like this is true of almost everything we do – when children play they are practising how to do this. When we play games we are trying to figure out how to win the game. When we're at work we are figuring out how to continue being paid, and we're being paid to select a particular future, decided by the nature of the work. It is as true for the fine movements required to catch a ball as it is for the decisions I am making when writing the words of a book like this – the future I prefer is one where the ideas I'm trying to communicate are understood by its readers. Even when we are relaxing we are continually making decisions about what is happening around us, and what we want to be doing in ten seconds, ten minutes, or ten hours.

So from our perspective as conscious beings ourselves, the purpose of consciousness appears to be something like this:

(1) To create and maintain a model of the world we find ourselves embedded in, with ourselves in it as an entity which persists over time.

(2) To predict the future course of events, which is necessarily a range of possible futures, of differing probability, rather than just one specific future.

(3) To assign value and meaning to these possibilities, in an attempt to actualise the best possible future, or to avoid the actualisation of the worst ones.

A good demonstration of why this sort of definition is justified is provided by the sea squirt, which consumes its own brain once it has permanently attached itself to a rock. While it is still a tadpole-like free-swimming juvenile, the animal requires a simple brain to navigate and decide among different possible futures: where to swim, what to avoid, where to anchor, and so on. Once it has fixed itself in place and its future behaviour can be governed purely by reflexes, that decision-making capacity is no longer needed, so it digests its own brain, which is now more useful as food than it is for making decisions.

MWI, minds and the threshold

We do not live in a world in which people randomly jump off cliffs or perform every physically allowed action, and there is no reason to believe our own history is an exception. Yet this is precisely what MWI–style reasoning appears to imply. On that view, whenever a decision is made, reality branches into multiple futures and different versions of the decision-maker proceed in each branch. If taken seriously, this entails that every possible choice is realised somewhere and that the unity of the self is an illusion sustained only by branch-relative ignorance.

Two-Phase Cosmology turns this picture inside out. Rather than being the point at which minds split across diverging timelines, the Embodiment Threshold is the point at which a unified representational subject makes further unitary evolution impossible. Rather than consciousness fragmenting to accommodate a branching reality, reality is forced to stop branching because a singular subject cannot remain coherent across incompatible futures. Wavefunction collapse, consciousness and free will are three different names we have given to this process, in each case without understanding how that process fits into a coherent model of reality. This is not a physical trigger in the usual sense. After more than a century of effort, no experiment has demonstrated a physical variable that initiates collapse. The mechanism described here is openly logical and metaphysical rather than empirical. Collapse occurs because a contradiction arises within the internal organisation of an embodied system, not because a physical threshold is crossed in the wavefunction itself.

The Embodiment Threshold is reached when a biological system inside Embodied Reality develops predictive structures that reference its own possible futures and assign value to them in incompatible ways. Below this point, all representational content can coexist passively within unitary evolution. The system can model multiple possibilities without committing to any of them as its future. Above this point, the system’s representational organisation becomes unified enough that it must treat those possibilities as alternatives for a single self. When incompatible valuations are assigned across locally entangled alternatives, the system’s own self-model can no longer be extended coherently across them. Continued unitary evolution would require the subject to become internally contradictory. At that moment, unitary evolution cannot continue at that location. This is not because physics breaks down, but because the representational subject cannot. The biological system is forced into actualisation because the self cannot be split. The threshold is therefore a constraint imposed by logic rather than by physical law.

I must emphasise that the Embodiment Threshold applies to embodied systems in Phase 2, not to abstract histories in timeless possibility. Phase 1 contains no evolving brains, no information processing, and no proto-subjects. The alternatives relevant to the threshold are Phase-1-type possibilities instantiated locally within an embodied system’s entanglement structure, but the contradiction itself exists only within the system’s Phase 2 representational organisation. Collapse does not occur in Phase 1, nor is it prepared or biased there in any way. What crosses the threshold is not a material brain as such, but a particular form of internal organisation within a living brain. Earlier organisms could react reflexively to stimuli, but they lacked a unified self-model capable of representing its own future states as open alternatives. What distinguishes a threshold-crossing system is the emergence of a coherent, indivisible structure of perspective and valuation: a self that is aware that different futures are physically possible for its own body and for the surrounding world, and that can care which of those futures occurs.

This self-structure is informational rather than material, but it is instantiated in biological hardware. It is not a physical object, nor a simple data structure, but an integrated predictive organisation that can span multiple possible futures. In this respect, brains function less like classical machines that follow a single trajectory and more like systems capable of maintaining superposed possibilities until commitment becomes unavoidable. Once such a system becomes aware of mutually exclusive futures and assigns value to them, it cannot continue to exist in superposition.

This is reflected directly in subjective experience. We are constantly aware of multiple possible actions and outcomes. We can deliberate, hope, or wish. What we cannot do is choose contradictory futures at once. One cannot choose to marry Alice and also choose to marry Bob. This is not merely a limitation of introspection; it reflects a structural fact about the self. The unity of the “I” is not something that can branch. The moment a choice becomes real, it excludes its alternatives. We are instinctively and acutely aware of all of this.

This explains why the mind-splitting picture of Many-Worlds feels so deeply wrong. It conflicts with the lived unity of consciousness because it violates the conditions under which a representational self can exist at all. A self that fragmented across incompatible futures would cease to be a self. The Embodiment Threshold explains why we experience one coherent stream of awareness rather than many diverging ones, even as the physical world presents a vast space of possibilities. The unity of experience is preserved not by denying multiplicity, but by recognising that a singular subject cannot inhabit it.

Formal statement of the Minimum Conditions for Conscious Perspective


Let an agent be any physically instantiated system within Embodied Reality (ℛ). An agent possesses a conscious perspective (that is, there is something it is like to be that agent) if and only if the following conditions are jointly satisfied.     

  1. Unified Perspective.
    The agent maintains a single, indivisible internal model of the world that includes itself as a coherent point of view persisting through time. This representational unity is not merely functional integration but logical indivisibility: it cannot be partitioned into incompatible submodels without the loss of subjecthood. A system whose self-model could bifurcate into independent continuations would not constitute a conscious perspective.      
  2. World Coherence.
    The agent’s internal model is in functional coherence with at least one real physical state of the external world. This coherence may be local, as in the state of the agent’s own body and immediate environment, or extend to broader regions of reality. Valuations directed toward physically unrealisable states do not qualify (or are less effective), since agency requires that at least some evaluated futures be genuinely possible.      
  3. Evaluation.
    The agent can assign value to possible future states of itself or the world, enabling comparison between alternatives. These valuations are not mere reward signals or reactive preferences but reflect the agent’s own perspective on what ought to occur. Without valuation, there is no meaningful distinction between futures and therefore no basis for choice or commitment.      
  4. Non-Computability.
    At least some valuations are non-computable in the Turing sense. Following Penrose’s argument, conscious judgment cannot be exhaustively captured by algorithmic computation. These non-computable evaluations introduce genuine openness into the agent’s decision process and prevent the reduction of conscious choice to deterministic or stochastic rule execution. Without non-computability, the agent’s behavior would be fully fixed by prior physical states, and the appearance of choice would be illusory.

These four conditions specify the minimal structural and functional requirements for a conscious perspective within Phase 2. When they are jointly satisfied in an embodied system that is locally entangled with multiple physically possible futures, a new constraint arises. The agent’s unified self-model must treat those futures as alternatives for a single subject, while its non-computable valuations may rank them incompatibly. Continued unitary evolution would therefore require the agent to sustain mutually contradictory commitments across those alternatives. At this point, unitary evolution cannot be maintained at the level of the agent’s representational organisation. Collapse occurs as the logical resolution of this incompatibility, selecting a single embodied continuation that preserves the agent’s unified perspective. This process is not triggered in Phase 1, nor is it prepared or biased there in any way. Phase 1 contains only timeless possibilities; the contradiction and its resolution arise entirely within Embodied Reality.

In summary, consciousness, free will, and collapse are not separate phenomena. They are three complementary descriptions of a single Phase 2 process: the resolution of representational contradiction in an embodied system whose unified, non-computable valuations cannot be coherently extended across multiple incompatible futures. A system that satisfies conditions (1)–(4) is collapse-competent; a system that fails any one of them is not.
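The summary above can be compressed into a simple predicate: collapse-competence holds exactly when conditions (1)–(4) hold jointly. The sketch below is purely illustrative (the class and field names are my own shorthand, and condition 4 is, by its nature, not something code can verify, so it appears only as a declared property of the toy agent):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Toy stand-in for an embodied system (field names illustrative only)."""
    unified_perspective: bool   # (1) single, indivisible self-model
    world_coherence: bool       # (2) model tracks at least one real state
    evaluation: bool            # (3) assigns value to possible futures
    non_computable: bool        # (4) declared, not verifiable by any program

def collapse_competent(agent: Agent) -> bool:
    """Collapse-competent iff all four conditions hold jointly;
    failing any one of them disqualifies the system."""
    return all([agent.unified_perspective,
                agent.world_coherence,
                agent.evaluation,
                agent.non_computable])

# A sessile sea squirt, having digested its brain, no longer
# evaluates alternative futures, so it fails condition (3).
sessile = Agent(unified_perspective=False, world_coherence=True,
                evaluation=False, non_computable=False)
assert not collapse_competent(sessile)
```

The conjunction is the whole content of the predicate: there is no partial credit, mirroring the claim that a system failing any single condition is not collapse-competent.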

The storm metaphor

Ordinary consciousness feels continuous: a steady flow in which perceptions, thoughts, and intentions seem to unfold seamlessly through time. Yet, upon closer examination, this continuity is not a property of physical time itself, but of the way consciousness integrates a sequence of discrete moments into a coherent whole. Philosophers and psychologists have long called this interval the specious present: the brief window of lived time in which successive events are experienced as part of a single, unified now. The specious present is a field of simultaneity stretched across a short duration – no more than a few seconds, probably much less. Within this window, multiple events coexist in immediate awareness: the note that is fading, the note that is beginning, and the expectation of the next note in a melody. Conscious experience is a dynamic synthesis. The living present is a wavefront of integration.

In 2PC, this living present corresponds to what may be called a storm of micro-collapses: a cascade of local actualisations through which portions of the background superposition are momentarily stabilised. Each micro-collapse is a tiny act of realisation, in which one physically possible configuration becomes actual where representational coherence requires it. These events form a coherent pattern, overlapping and interacting across the specious present, much as eddies and vortices compose the moving texture of a storm.

The storm metaphor captures the essential features of lived consciousness. It is dynamic rather than static; it is a process rather than a thing; it exhibits local coherence amid global indeterminacy, with stability arising only through continual renewal; and it integrates activity across multiple scales. Countless micro-events combine to form the fluid unity of subjective experience. In this view, the specious present is not merely a psychological artifact but an ontological structure: the timescale over which local aspects of reality are continually stabilised through ongoing collapse activity. The “flow” of consciousness is the temporal cross-section of this activity – the living wave of embodiment where potential becomes actual. Each moment of awareness is a pulse of value-realisation sustained by the Void’s ontological grounding. Continuity of consciousness, then, is not the persistence of a substance but the persistence of a pattern.

Attention and will are the shaping winds of this storm. They do not conjure collapses out of nothing, nor do they directly select outcomes. Instead, over the span of the specious present, they modulate the pattern, timing, and weighting of micro-collapses, biasing the trajectory of experience toward some continuations rather than others.

The storm metaphor also clarifies the limits of consciousness. A storm can stretch across vast regions if conditions align, but it cannot cover the entire globe. Likewise, collapse processes can extend across entangled systems, but they cannot span the whole cosmos; that happened only once, at the original transition when historical Phase 1 gave way to historical Phase 2. Coherence gives the storm its reach; decoherence disperses it into background noise. And like every storm, an individual consciousness eventually dissipates. When the biological conditions that sustain the pattern break down, the storm ceases to be. The self (the weather system) and the soul (the Void’s grounding of that system) disappear together. Nothing persists but the wider field of possibility, out of which new storms may someday form.

Self-Models and the Unity of Subjectivity

A growing body of work in philosophy of mind and cognitive science converges on the idea that subjectivity does not arise from a metaphysical essence but from a particular kind of representational organisation. On this view, the self is not a substance but a process: a structured, embodied model through which a system represents the world and itself within it.

Thomas Metzinger’s Self-Model Theory of Subjectivity provides one of the most influential formulations of this idea. Metzinger argues that there is, strictly speaking, “no self,” only a transparent phenomenal self-model (PSM) generated by the brain. This model integrates bodily state, perspective, intention, and agency into a unified point of view. Because the model is transparent (its representational nature is not itself represented) it is not experienced as a model at all. It is simply lived as “me, here, now.” This transparency explains the phenomenal unity of subjectivity: the system cannot access or manipulate the self-model as an object, and therefore cannot experience itself as divided into multiple simultaneous selves.

A complementary account emerges from Antonio Damasio’s work in neuroscience. Damasio distinguishes between the proto-self of basic bodily regulation, the core self of moment-to-moment conscious presence, and the autobiographical self constructed through memory and narrative. The core self, in particular, is a transient, embodied process that arises from the integrated regulation of the living organism. Although dynamically constructed and fragile, it functions as a single centre of experience. Its unity is not imposed from outside but arises from the organism’s integrated biological organisation.

Phenomenological approaches reach a closely related conclusion. Shaun Gallagher distinguishes the minimal self (the immediate, embodied “I” that anchors experience) from the narrative self extended across memory and social context. Dan Zahavi likewise emphasises the intrinsic “mineness” (ipseity) of experience: every conscious episode is given as belonging to a single subject. This first-personal character is not an added feature but a structural property of experience itself. For these thinkers, the minimal self is not a theoretical posit but a phenomenological invariant: experience is always presented from one perspective and cannot be shared or jointly occupied.

An even broader framework is provided by the enactive approach developed by Francisco Varela, Evan Thompson, and Eleanor Rosch. On this view, cognition is not the passive construction of internal representations but the active enactment of a meaningful world through embodied interaction. Subjectivity arises from the dynamic coupling of organism and environment. Thompson has further argued that this participatory structure grounds the continuity of the self: conscious perspective is inherently situated, embodied, and world-involving, not detachable or freely duplicable.

Taken together, these lines of research converge on a single structural insight. Once a system achieves a transparent, embodied first-person perspective (a phenomenal self-model, a core self, or a minimal “I”) its subjectivity is unified in a way that cannot be partitioned without ceasing to exist as a subject. The unity described by Metzinger as transparency, by Damasio as the immediacy of the core self, by Gallagher and Zahavi as the indivisibility of first-person presence, and by Varela and Thompson as participatory coupling, all point to the same constraint: first-person experience is singular.

2PC extends this convergence beyond phenomenology and cognitive architecture. The Embodiment Inconsistency Theorem states that the indivisibility of subjectivity is not merely a feature of experience but a metaphysical constraint on how reality can evolve once such a subject exists. In an embodied system within Phase 2, a unified self-model cannot be coherently extended across multiple incompatible futures. When representational unity and valuation meet unresolved physical alternatives, unitary evolution becomes untenable. Collapse is therefore required, not as an added mechanism, but as the necessary resolution of a contradiction introduced by the existence of a singular, first-person perspective.

In this way, contemporary theories of self-models do not merely describe how the self appears. They implicitly delineate the conditions under which a self can exist at all. 2PC makes this implication explicit: the unity of subjectivity is not only lived, but enforced by the structure of reality itself.

The Embodiment Threshold (ET)

The Embodiment Threshold is the point at which a system gives rise to an internally unified representational structure whose continued coherence requires the resolution of incompatible valuations across locally entangled, decoherent alternatives. At ET, a single prospective subject is forced to assign mutually incompatible values to alternative future continuations that cannot be jointly realised by that same subject. As a result, the system’s self-model can no longer be coherently extended across all of the branching possibilities it simultaneously references, making ontological collapse necessary.

Formally, let a system S be characterised at a pre-symbolic level by an internal informational organisation, written as IS(t). This organisation is not a physical state description and does not yet involve belief or semantic content. Instead, it consists of mesoscopic predictive structures capable of supporting valuation under the assumption of a single continuing subject.

Let VS be a valuation map that assigns to each such predictive structure a value in an abstract valuation space V. In simple terms, VS maps elements of IS(t) to values in V.

The Embodiment Threshold is reached when these valuations can no longer be jointly satisfied under the constraint that they all belong to one unified subject. To make this precise, consider a decoherent set of locally entangled alternative continuations, written as {αᵢ}, to which the same representational subject assigns valuations. ET occurs when there is no coherent extension of the self-model that allows all of these assigned valuations to be realised by a single subject. In this case, the joint satisfiability condition fails.

In shorthand, ET occurs when: Joint_Satisfiability({αᵢ}, VS) = FALSE

This failure is not due to physical incompatibility between the alternatives themselves. Instead, it arises from a referential contradiction: the representational "I" cannot be coherently duplicated across incompatible valuation contexts. Ontological collapse is therefore forced at or before ET as a condition of representational coherence, thereby instantiating Phase 2 embodied reality.

The Embodiment Inconsistency Theorem (EIT)

The Embodiment Inconsistency Theorem explains why embodiment must occur once a system reaches the Embodiment Threshold. It is a meta-theoretic consistency result rather than a dynamical law. It does not rely on measurement, physical collapse mechanisms, environmental decoherence, or any modification of quantum dynamics. Instead, it follows from a logical constraint: a single referential “I” cannot coherently sustain stable valuations across mutually incompatible futures. 

Theorem: Let a system S satisfy the following axioms:

  • Valuation: The system assigns intrinsic valuations VS(hi) to physically possible future continuations. This is defined by a valuation map VS : {hi} -> V, where each hi corresponds to a decoherent alternative locally entangled with S and describable as a Phase 1 admissible continuation.
  • Entanglement: The system’s informational degrees of freedom are distributed across multiple decoherent alternatives. Its future self-model spans a family {hi} with stable decoherence relations.
  • No-Overdetermination: The universal wavefunction provides no rule by which a single subject can reconcile incompatible intrinsic valuations across all hi. No physical law selects a unified continuation for a valuative subject in advance.
  • Ontological Coherence: A single subject cannot inhabit futures that assign incompatible intrinsic valuations to its own continuation. This is governed by a coherence predicate C({VS(hi)}). If the valuations across the set of histories are mutually inconsistent for a single referent, the predicate returns FALSE.

Conclusion

The Embodiment Threshold is reached at the exact moment the coherence predicate fails: C({VS(hi)}) = FALSE. At this point, no globally consistent assignment of a single subject across all hi is possible. The contradiction lies in the system’s own representational dynamics, not in the universal wavefunction. Because reality cannot host a single referent with mutually inconsistent successor states, embodiment becomes necessary. One history is instantiated for S, and the system becomes embodied within that history. EIT shows that the existence of a unified subject is incompatible with the continued unitary evolution of its entangled alternatives once valuations diverge beyond the threshold of coherence.

Formal Characterisation of the Embodiment Threshold

Let a Phase 2 system S be locally entangled with a family of decoherent alternatives, written as {hi}. Each alternative carries a decoherence weight pi derived from the diagonal of the decoherence functional. These weights describe the relative presence of alternatives within the superposition but do not cause collapse; they define the domain over which the system’s self-model must maintain representational coherence.

The internal organisation IS(t) provides valuations VS(hi) in an abstract valuation space. Representational coherence requires that there exist a single valuation vector v* that could serve as a unified perspective across all relevant alternatives hi.

We define the inconsistency index Lambda_S over the set {hi} as follows:

Lambda_S({hi}) = min_v [ sum_i ( pi * distance(VS(hi), v)^2 ) ]

This quantity is a mathematical summary of the degree to which the system’s valuations fail to admit a single coherent perspective. The minimising vector v* represents the best conceptual compromise the system could maintain across the alternatives. The Embodiment Threshold, denoted t*, is reached when Lambda_S exceeds the system’s tolerance for representational divergence, Theta_S. This tolerance is intrinsic to the architecture of the system’s self-model and defines the maximum divergence compatible with a unified subject. When Lambda_S > Theta_S, representational coherence fails. A single subject cannot be coherently extended across the family {hi}. At this point, a single embodied continuation becomes metaphysically necessary, and one Phase 2 history is realised for the system. The contradiction arises entirely within the Phase 2 system’s own predictive organisation.
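
The inconsistency index has a simple closed form if the abstract distance on V is taken to be squared Euclidean distance (an assumption the text does not commit to): the minimiser v* is then the decoherence-weighted mean of the valuations, and Lambda_S is a weighted variance. A minimal sketch:

```python
# Sketch of Lambda_S({hi}) = min_v sum_i p_i * d(VS(h_i), v)^2,
# assuming squared Euclidean distance on the valuation space V.
# With that choice, the minimiser v* is the weighted mean.

def inconsistency_index(weights, valuations):
    """weights: decoherence weights p_i (assumed normalised to sum to 1).
    valuations: VS(h_i) as real vectors, one per alternative h_i."""
    dim = len(valuations[0])
    # v*: the "best conceptual compromise" across the alternatives.
    v_star = [sum(p * v[k] for p, v in zip(weights, valuations))
              for k in range(dim)]
    lam = sum(p * sum((v[k] - v_star[k]) ** 2 for k in range(dim))
              for p, v in zip(weights, valuations))
    return lam, v_star

def embodiment_threshold_crossed(weights, valuations, theta_S):
    lam, _ = inconsistency_index(weights, valuations)
    return lam > theta_S  # Lambda_S > Theta_S: coherence fails

# Identical valuations across two alternatives -> Lambda_S = 0.
print(inconsistency_index([0.5, 0.5], [[1.0, 0.0], [1.0, 0.0]])[0])   # 0.0
# Sharply opposed valuations -> Lambda_S = 1.0 in this toy.
print(inconsistency_index([0.5, 0.5], [[1.0, 0.0], [-1.0, 0.0]])[0])  # 1.0
```
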

Competition Resolved Collapse (CRC)

Technical Definition of Competition Resolved Collapse

After the Embodiment Threshold has been passed, collapse unfolds as a storm of micro-collapses inside the subject’s specious present. This storm is the ongoing process that sustains embodiment. Each micro-collapse (ci) is a local stabilisation of entangled alternatives. The hazard rate for a micro-collapse – the quantity that determines when collapse occurs, driven by attention and value – is:

lambda_i(t) = lambda_0 * [1 + alpha_V * V_i(t) + alpha_P * P_i(t) + alpha_A * A_i(t) + alpha_C * C_i(t)]
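
As a sketch only: the hazard rate is a baseline rate multiplied by a valuation/prediction/attention/coherence modulation. The alpha_* gains and signal values below are arbitrary illustrations (the model does not specify them), and the floor at zero is my own assumption to keep the rate well-defined:

```python
# Sketch of lambda_i(t) = lambda_0 * [1 + alpha_V*V_i + alpha_P*P_i
#                                       + alpha_A*A_i + alpha_C*C_i].
# All numerical values are illustrative, not part of the model.

def hazard_rate(lambda_0, alphas, signals):
    """alphas: gains (alpha_V, alpha_P, alpha_A, alpha_C).
    signals: current (V_i, P_i, A_i, C_i) for this micro-collapse.
    Floored at zero so the result remains a valid rate."""
    modulation = 1.0 + sum(a * s for a, s in zip(alphas, signals))
    return max(0.0, lambda_0 * modulation)

# A strongly valued, attended candidate collapses faster than a neutral one.
alphas = (0.8, 0.3, 0.5, 0.2)
fast = hazard_rate(1.0, alphas, (1.0, 0.5, 1.0, 0.5))  # high V and A
slow = hazard_rate(1.0, alphas, (0.0, 0.0, 0.0, 0.0))  # neutral baseline
print(fast > slow)  # True
```
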

Competing micro-collapses share overlapping support in Hilbert space, and the realised one minimises the inconsistency functional, which determines which path the world takes – the one that causes the least “logical pain”:

F[ci] = | <Psi| O_ci |Psi> - V_ci |^2 + beta * D(rho_SE || rho_S (x) rho_E)

The dynamics across the specious present follow a rate-modulated stochastic field, defined by the master equation:

rho_dot_S = L_U[rho_S] + sum_i( lambda_i(t) * (M^i * rho_S * M^i_dagger - 0.5 * {M^i_dagger * M^i, rho_S}) )

This equation represents the structural completion of the model: it is the rule that shows how the standard laws of physics (L_U) and the subject’s internal valuations (lambda_i) combine into a single, unfolding picture of reality.
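
To make the structure of the master equation concrete, here is a minimal Euler-integration sketch for a single two-level system in plain Python. The Hamiltonian, the collapse operator M, the constant hazard rate, and the step sizes are all illustrative assumptions; the dissipator is written in standard Lindblad form, so the trace is preserved while off-diagonal coherence decays:

```python
# Euler-step sketch of rho_dot = L_U[rho] + lambda * (M rho M+ - 0.5{M+M, rho})
# for a 2x2 density matrix, using only built-in complex arithmetic.

def mat(*rows): return [list(r) for r in rows]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, s=1):  # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def dag(A):  # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def scale(s, A):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

H = mat((1, 0), (0, -1))           # toy Hamiltonian for L_U (assumed)
M = mat((1, 0), (0, 0))            # toy micro-collapse operator: a projector
lam, dt, steps = 5.0, 0.001, 1000  # constant hazard rate for the sketch

rho = mat((0.5, 0.5), (0.5, 0.5))  # equal superposition, maximal coherence
for _ in range(steps):
    unitary = scale(-1j, add(mul(H, rho), mul(rho, H), s=-1))  # -i[H, rho]
    MrhoMd = mul(mul(M, rho), dag(M))
    anti = add(mul(mul(dag(M), M), rho), mul(rho, mul(dag(M), M)))
    dissip = add(MrhoMd, scale(0.5, anti), s=-1)               # Lindblad term
    rho = add(rho, scale(dt, add(unitary, scale(lam, dissip))))

print(abs(rho[0][0] + rho[1][1]))  # trace stays ~1
print(abs(rho[0][1]))              # off-diagonal coherence decays toward 0
```

With this projector-type M the off-diagonal term decays at rate lam/2 while the populations are untouched: the superposition is stabilised into alternatives without violating probability conservation.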

How Competition-Resolved Collapse closes the multi-agent hole

Once the Embodiment Threshold has been crossed, collapse is no longer a single global event. It becomes an ongoing process: a dense, overlapping storm of local stabilisations occurring within the specious present of embodied agents. Each micro-collapse is a resolution of nearby physical alternatives under valuation, prediction, attention, and coherence constraints. This process sustains the continuity of experience and action for a subject.

The problem that CRC is designed to address arises only when more than one such subject exists. If conscious agents are genuinely real, and if their valuations matter, then situations inevitably arise in which two or more agents assign incompatible values to the same unfolding physical situation. One agent intends one outcome; another intends a mutually exclusive outcome. If collapse were driven independently by each agent’s valuations alone, the theory would permit incompatible realised worlds – private realities with no principled mechanism forcing agreement. That is the multi-agent hole. CRC as defined here is deliberately minimal. It exists to close this structural inconsistency, not to explain meaning.

CRC closes this hole without introducing a new force, a coordinating intelligence, or a hidden global selector. The key move is simple but non-negotiable: when agents interact, their micro-collapse processes become entangled. Their valuation-weighted collapse hazards no longer operate on disjoint physical supports. Instead, overlapping regions of Hilbert space are jointly constrained by all participating agents’ valuations and predictive structures. At that point, incompatible continuations are not merely in conflict at the level of desire or belief. They are in conflict at the level of representability. A continuation in which both incompatible valuations are simultaneously realised cannot remain dynamically coherent, because it would require the same shared physical degrees of freedom to stabilise in mutually exclusive ways.

CRC resolves this by treating collapse as a competition among overlapping micro-collapses. Each candidate stabilisation carries a cost, measured by an inconsistency functional. This functional penalises two things: mismatch between the physical outcome and the valuations applied to it, and breakdown of coherence between the agents and their shared environment. The realised micro-collapse is the one that minimises this combined inconsistency. Importantly, this is not a vote, a negotiation, or a moral arbitration. No agent “wins” because it is stronger, more numerous, or more important. The outcome is whichever continuation can be jointly stabilised by the entangled system of agents and environment with the least representational contradiction. The others simply fail to stabilise and are not realised. This is how a single shared world is maintained. Agents remain free to value, intend, and predict independently. Conflicts are not prevented. But when those conflicts concern the same physical degrees of freedom, only outcomes that can be jointly embodied survive the collapse process. The shared world is not imposed from outside; it is the residue of what multiple embodied perspectives can coherently sustain together.

CRC therefore does exactly one thing: it ensures that multi-agent reality does not fragment. It does not explain why particular symbolic meanings arise, why certain patterns feel significant, or why some coincidences strike us as meaningful rather than accidental. Those phenomena occur after the closure CRC provides, within the space of stabilised shared reality. In other words, CRC closes the door on solipsistic branching, but it does not seal the building. What follows, once a shared world is secured, is a further question: how symbolic systems, shared myths, and authorised interpretive frameworks modulate which micro-collapses are preferentially stabilised within that world.
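
A deliberately crude toy of the competition step, under assumptions the text does not make (a one-dimensional outcome variable and a quadratic mismatch cost standing in for the full functional F[ci]):

```python
# Toy of CRC competition over SHARED degrees of freedom: candidate
# continuations are scored by how badly they violate each agent's
# valuation, and the realised one is the argmin of the combined cost.

def inconsistency_cost(outcome, agent_valuations):
    """Sum over agents of (outcome - preferred value)^2: a crude proxy
    for the valuation-mismatch term of the inconsistency functional."""
    return sum((outcome - v) ** 2 for v in agent_valuations)

def resolve(candidates, agent_valuations):
    return min(candidates,
               key=lambda c: inconsistency_cost(c, agent_valuations))

# Two agents assign incompatible values (+1 vs -1) to the same variable.
# Neither "wins": the jointly least-contradictory continuation is realised.
realised = resolve(candidates=[-1.0, 0.0, 1.0], agent_valuations=[1.0, -1.0])
print(realised)  # 0.0
```

Note the design point this illustrates: the outcome is not a vote but the continuation with the least joint representational contradiction, matching the prose above.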

The Hard Problem of Consciousness (#19)

The Hard Problem of Consciousness is a problem only under materialist or physicalist assumptions. From the perspective of 2PC it is readily resolved: consciousness is not produced by matter per se but arises from the instantiation of a unified self-model that grounds Phase 2 embodiment. The Even Harder Problem, by contrast, concerns the lack of a framework capable of uniting competing metaphysical perspectives. This problem remains open: I cannot claim to have solved it until such time as 2PC has convinced a sufficiently broad community that it provides both a coherent foundation for scientific explanation and a complete, internally consistent model of reality.

Psychegenesis and the Psychetelic Principle

People have long wondered about abiogenesis – the emergence of life from non-life. In the Two-Phase cosmology, what matters is the first appearance of conscious organisms, and abiogenesis gets thrown in for free, because it is necessarily on the path from non-life to consciousness. I use the term “psychegenesis” to refer not only to the emergence of conscious life from unconscious life, but to the entire process that made this possible. The resulting explanations are related to the Anthropic Principle – the idea that it is only in cosmoses or timelines where intelligent beings like humans exist that anybody asks questions about how their world came into being. The Psychetelic Principle (“psyche” + “telos”) says that a cosmos is only realised if it can host a subject, and that its history was always destined toward the conditions that let consciousness arise. Psychegenesis is the long, directional drift of matter toward structures that can carry a point of view, and once that point of view appears, the cosmos locks into the form that consciousness can inhabit. In other words, the Psychetelic Principle provides a mechanism, not just an unsatisfying explanation.

The Evolution of Consciousness (#20)

In Mind and Cosmos, Thomas Nagel argued that there can be no materialistic neo-Darwinian explanation for how consciousness evolved, and that the only credible naturalistic alternative involves teleology: consciousness was destined to evolve. The Two-Phase Cosmology proposes that the explanation for this apparent teleology is not the undiscovered teleological laws that Nagel suggests we should be searching for, but the same thing that resolves the Hard Problem and the Measurement Problem. The Cambrian Explosion is now permitted to have the explanation that should always have been obvious. The unifying characteristic of the Cambrian fauna was that they actually look like animals as we intuitively understand the word. This is in contrast to very simple animals like sponges, which many people assume to belong to some other branch of life entirely (as did early naturalists). The same applies to the Ediacaran fauna which dominated the world in which the Last Universal Common Ancestor of Subjectivity (LUCAS) lived – they were either sessile (they didn't move at all), or their mobility was akin to that of jellyfish.

Are jellyfish conscious? They lack brains, and it is hard to see how they can be intuitively aware of different possible futures about which they can make decisions. Their movements are purely reflexive. They are likely to be examples of the most advanced level of behavioural complexity that unconscious animals are capable of. I obviously cannot prove this, but my tentative conclusion is that their nervous system does not cross the Embodiment Threshold. A “central processing unit” is required: a brain.

The immediate ancestors of LUCAS must have been stymied by the Frame Problem. Their increasing cognitive complexity would have amplified the combinatorial explosion of potential futures they could model, but at that point they had no mechanism to assign meaning or value. Without the capacity for valuation that only comes with consciousness, effective decision-making would have been difficult. However, so long as reality remained superposed, meaningful choice was metaphysically impossible anyway – all physically possible outcomes existed in parallel, just as believers in MWI think they still are. 

The only way for the informational structure to continue coherently in this situation is through a selective mechanism – a mechanism experienced as free choice. This moment is psychegenesis and the beginning of phase 2. Cosmic history before this point was actualised retrocausally: the entire evolutionary trajectory leading to LUCAS was selected as a unified classical timeline. After this point, evolution could proceed with genuine decision-making capacity. A new sort of phase 1 information structure began to exist. The original phase 1 history, selected for realisation at the moment LUCAS becomes conscious, was a timeless and unchanging block. Now there was a dynamic structure in which the Void was embedded and animals were conscious.

My suggestion for the best candidate we're aware of for LUCAS is Ikaria wariootia, a worm-like organism smaller than a grain of rice. It lived at the end of the late Ediacaran Period, around 560 to 555 million years ago, not long before the Cambrian Explosion. It is the earliest confirmed organism exhibiting bilateral symmetry, a through-gut, and a clear front-back axis. It is the simplest known organism that might have been capable of making choices in a metaphysically meaningful way. Ikaria was small, blind, and simple, but for a brief period of time the Earth was its oyster; at the end of the Ediacaran nothing could challenge it. It appeared when the only other complex “animals” were so unlike anything living today that scientists are uneasy using the word “fauna” to refer to them. Maybe they weren't animals at all, but some other branch of life which died out because it was unable to compete when the Cambrian got going. It took about 40 million years to get from the simple burrower Ikaria to the formidable apex predator Anomalocaris. The story of conscious animal life was, and remains, all about meaningful choice. No amount of speed or strength can substitute for the ability to make the right decisions at the right time.

The uniqueness of the original psychegenesis event (the collapse of the primordial wavefunction).  

The first phase shift event – the one involving LUCAS – is unique in two ways. Firstly, it selects an entire cosmic history stretching back to the big bang, not just a small part of it. Secondly, it is the original selection, and so it cannot be a matter of “invoking” or “compelling” the Void to involve itself in the situation. The problem here is that the Pythagorean Ensemble of Phase 1 must contain infinite examples of possible structures which arrive at the ET. If each of them had the power to compel the Void then it would need to respond to all of them simultaneously (and possibly even repeatedly). This situation would be even more ontologically bloated than MWI, but it is easily avoided by assuming that only one cosmos is realised at any one time. If this model is correct then there are an infinite number of realities-in-waiting, and an eternity in which each can await its turn. I call them “cosmic eggs”. Somehow a particular reality must be chosen for realisation when the previous cosmos has run its course. The determination of which of these cosmic eggs is selected into reality, and in which order, is an interesting question. I am skeptical that it can be random, because I can't imagine where the randomness could come from. It is possible that some informational rule determines this original selection (I will have more to say about this later in this article). Perhaps there is some other explanation. All we can say is that it does indeed happen, and that our own cosmos was the most recent cosmic egg to hatch.

However, the fact that there is only one cosmos/reality does not imply that there can only be one conscious organism. Once a cosmos and timeline have been realised at the collapse of the primordial wave function, any future instances of ET-crossing organisms can indeed “invoke” the Void. If they satisfy the physical conditions, then they will become conscious. This must be true under this model, for it is the reason why any physical organism capable of becoming conscious actually does become conscious. Once Phase 2 has begun, Brahman is always ready to become Atman whenever a view from somewhere is available.

The Psychetelic Principle


From a conventional scientific viewpoint, consciousness is an inexplicable enigma. From a Darwinian perspective grounded in survival and reproductive advantage, the emergence of subjective experience is profoundly puzzling. Why should an undirected, mechanical process give rise to inner life, rather than simply more efficient stimulus-response mechanisms? In Mind and Cosmos, Thomas Nagel argued that natural selection, as currently understood, cannot explain the emergence of conscious subjectivity, and proposed we search for teleological laws of nature: goal-oriented principles embedded in the fabric of the cosmos. 2PC resolves these problems on all fronts. Consciousness is not selected, but ontologically prior to evolutionary competition within our observed universe. Where Nagel suggests that evolution must be guided by teleological laws aimed at producing minds, 2PC posits a structural inevitability: in a potentially infinite quantum cosmos, some branch will arrive at the ET, and there it will remain in timeless incompletion until such time as the Void resolves the situation by realising that structure, allowing consciousness to emerge and start collapsing possibility into actuality. If and when that happens then that branch becomes a realised cosmos inhabited by conscious observers. This retains Nagel’s key insight that mind is not epiphenomenal or accidental, but dispenses with his hypothetical teleological laws. The apparent directedness of evolution toward complexity and consciousness is a selection effect caused by the fact of consciousness itself being the criterion for observable history. The universe we observe is the one rendered actual by being observed, regardless of probability.
This results in something very similar to the anthropic principle, except instead of just saying “If humans hadn't evolved then we wouldn't be here to ask the question”, we're actually explaining why conscious organisms were guaranteed to win the cosmic lottery in this way. I call this “the Psychetelic Principle”.

Why did psychegenesis happen on Earth, rather than somewhere else? The Psychetelic Principle tells us that we should expect the Earth to be special, but it doesn't tell us exactly what is special about it. It does, however, make an empirical prediction: if the model is correct, then there should have been multiple exceptionally improbable events in Earth's phase 1 history. These, if they exist, would be signatures of psychegenesis. There are many examples, the most extreme of which are these four:

1. Eukaryogenesis: The Singular Emergence of Complex Cellular Life

Eukaryogenesis is the origin of the eukaryotic cell via the endosymbiotic incorporation of an alpha-proteobacterium (the precursor to mitochondria) into an archaeal host, and it appears to have happened only once in Earth’s entire 4-billion-year history. Without it, complex multicellularity (and thus animals, cognition, and consciousness) would not have emerged. The energetic advantage conferred by mitochondria enabled the explosion of genomic and structural complexity. No similar event is known to have occurred elsewhere in the microbial biosphere, despite vast diversity and timescales. If eukaryogenesis is a statistical outlier with a probability on the order of 1 in 10⁹ or worse, it becomes a cardinal signpost of the unique psychegenetic branch.

2. Theia Impact: Formation of the Earth–Moon System

The early collision between Earth and the hypothesised planet Theia yielded two improbable outcomes at once: a large stabilising moon and a metal-rich Earth. The angular momentum and energy transfer needed to both eject enough debris to form the Moon and leave the Earth intact was extremely finely tuned. This event likely stabilised Earth's axial tilt (permitting climate stability), generated long-term tidal dynamics (affecting early life cycles), and drove the internal differentiation which fuels the magnetic field and active tectonics. It’s estimated to be a rare outcome among rocky planets – perhaps 1 in 10⁷ – and essential for the continuity of biological evolution.

3. Grand Tack: A Rare Planetary Migration Pattern

Early in solar system formation, Jupiter is thought to have migrated inward toward the Sun and then reversed course (“tacked”) due to resonance with Saturn. This migration swept away much of the early inner solar debris, reducing the intensity of late bombardment and allowing small rocky planets like Earth to survive. Crucially, it also delivered volatiles (including water) to the inner system. This highly specific orbital choreography is rarely reproduced in planetary formation simulations. Most exoplanetary systems dominated by gas giants do not preserve stable, water-bearing inner worlds. The odds against such a migration path are estimated to be very high. Some simulations suggest well under 1 in 10⁶.

4. LUCA’s Biochemical Configuration

The Last Universal Common Ancestor (LUCA) did not merely represent the first replicator, but a highly specific and robust configuration of metabolism, information storage, and error correction. It was already using a universal genetic code, RNA–protein translation, lipid membranes, and a suite of complex enzymes. LUCA’s molecular architecture was a kind of “narrow gate” through which life could pass toward evolvability. Given the astronomical space of chemically plausible alternatives, LUCA’s setup may reflect a deeply contingent and rare outcome.

Conclusion: Compound Cosmic Improbability as Psychegenetic Marker

Each of these four events is, in itself, vanishingly unlikely. But more importantly, they are compounded. The joint probability of a single planet experiencing all four – along the same evolutionary trajectory – does indeed render the Earth’s phase 1 history cosmically unique. Under 2PC these improbabilities indicate the statistical imprint of consciousness retro-selecting a pathway through possibility space – making a phase transition from indefinite potentiality to a single, chosen actuality. 
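
Taking the three quantified estimates above at face value, and assuming (as the compounding argument implicitly does) that the events are statistically independent, the joint probability is straightforward to compute; LUCA's configuration is omitted because no number is given for it:

```python
# Back-of-envelope compounding of the quantified improbabilities cited
# above, under an independence assumption. All inputs are the essay's
# order-of-magnitude estimates, not measured values.
import math

events = {
    "eukaryogenesis": 1e-9,   # "1 in 10^9 or worse"
    "theia_impact":   1e-7,   # "perhaps 1 in 10^7"
    "grand_tack":     1e-6,   # "well under 1 in 10^6"
}
joint = math.prod(events.values())
print(f"joint probability ~ {joint:.0e}")  # on the order of 1e-22
```
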

Free Will

Given that they are individually two of the deepest mysteries of existence, the relationship between consciousness and free will might seem like a mystery squared. You could call them the passive and active modes of consciousness, but then you would have to explain how the two modes relate. It is not clear that consciousness is ever completely passive. Even if we are just sitting and admiring a view, we still choose where our eyes rest and what we give our attention to. The closest case of purely passive consciousness is the rare situation in which a person is supposed to be under a general anaesthetic but is in fact paralysed and aware of what is happening around them (this is known as anaesthesia awareness). Even then, the person can still hold a preference for physically possible futures, since they would surely want either the anaesthetic to take full effect or for someone to notice what is going on. The motor nervous system may have been disconnected from will, but that does not mean will has been disconnected from consciousness.

It is also true that at any time we can will things that have nothing to do with our body, although whether this could have any causal effect beyond the body is a separate and highly controversial matter. Even people who deny any nonphysical causation still express preferences about outcomes they have no physical means of influencing; the behaviour of people at a roulette table makes that obvious enough.

The classical problem of free will arises from a supposed tension between determinism and agency. If all physical events follow immutable laws, how can any action be free? But if actions emerge from quantum randomness, how can they belong to a responsible agent? Most scientific or philosophical accounts either eliminate free will, redefine it in a compatibilist sense, or leave it unresolved.

From the perspective of 2PC, this framing misses the crucial point. Both sides assume reality is already fully instantiated and that human choice must be explained entirely within a fixed causal order. In 2PC, this assumption is false. The world we inhabit is not a static block but a dynamically embodied reality, selected from a vast field of unrealised possibilities. Consciousness is not a passive observer of the world; it is the mechanism through which potential becomes actual.

A physical system in Phase 1 evolves according to natural law, exploring all consistent pathways in quantum superposition. Nothing in this regime collapses or becomes determinate. Once a system crosses the Embodiment Threshold, however, the background superposition can no longer sustain branching without contradiction. Collapse into Phase 2 reality is then metaphysically necessary, enforced by the internal structure of valuations within a single subject. Free will, in this framework, is neither an illusion nor an inexplicable exception to causality. It is a structural feature of reality itself: conscious agents locally instantiate the cosmos by resolving indeterminate possibilities into determinate experience. Each act of choice is a metaphysical event in which the Void binds one branch of potentiality into the lived unfolding of the world.

This perspective reframes the classical dilemma in three ways:     

  1. Determinism governs Phase 1 possibilities, but it never reaches embodiment; it provides the scaffolding of potential rather than the manifestation of reality.
  2. Free will emerges at the Embodiment Threshold. When agents cross this threshold, they collapse the possibility space into actual outcomes. Freedom is thus participation in embodiment, not escape from law.      
  3. Moral responsibility follows naturally. Each conscious act is a genuine selection that shapes embodied reality, making agents co-authors of the world’s unfolding.

Under this model, earlier impasses are resolved. Compatibilism is correct that freedom operates within natural law, but law governs only possibilities, not their resolution into actuality. Hard determinism and MWI are incomplete because they treat Phase 1 as the entirety of reality, erasing the creative role of consciousness. Libertarianism is correct to emphasise origination, but misidentifies it as a mysterious nonphysical substance. Origination is real: collapse becomes necessary when valuation and subjectivity arise.

Long-standing puzzles now fall into place. Quantum indeterminacy is not a blind lottery but the openness required for value-laden resolution. Nagel’s problem of autonomy fades: the subjective standpoint is the arena in which the objective order of Phase 1 becomes the embodied actuality of Phase 2. Free will is the hinge of reality: the point at which possibility meets value, and value becomes part of the lived unfolding of the world.

Reconciling Free Will with Neuroscience

Neuroscience is often taken to challenge conscious agency. Libet-style experiments reveal neural precursors that appear before subjects report forming an intention, which has been interpreted as evidence that the brain “decides” first and consciousness merely observes. This interpretation assumes a sharp, punctate moment of decision and a clear boundary between unconscious cause and conscious effect.

In the storm model, there is no single collapse point. Instead, consciousness unfolds as a continuous field of local micro-collapses distributed across the specious present. Neural precursors and the felt moment of intention are two complementary views of the same dynamic pattern. Readiness potentials reflect the gradual accumulation of correlated neural activity, which raises the likelihood of particular micro-collapses. This activity belongs to the scaffolding of possibility rather than the determination of outcomes.

What becomes lived action is shaped by the agent’s valuations, attention, and predictive signals, which modulate the storm over a short temporal window. Influence manifests as subtle shifts in micro-collapse rates and stabilities, not as a last-second command. In this framework, the apparent paradox of Libet-type data dissolves: early neural activity corresponds to the forward-evolving preparation of possibilities, while conscious influence is distributed and time-integrated. The felt moment of intention emerges from the accumulation of these modulations, producing a stable selection that appears instantaneous from the inside. Agency is neither mysterious nor mechanical. It does not violate physical law, nor is it random noise. It is a structural property of systems capable of maintaining a unified self and generating incompatible valuations across live possibilities. A local embodiment event occurs when the storm, shaped by these valuations and the system’s coherence, tips one branch into actuality. Responsibility follows naturally: the pattern of modulation executed by the agent actively contributes to the selection of the real trajectory.

This perspective also clarifies the distinction between a human and a p-zombie. A genuine subject participates in the storm that resolves possibilities into lived outcomes. A hypothetical zombie may display behaviour resembling choice, but it lacks the internal patterning that produces realisation. Its actions are mere correlations in physical processes, not the work of a system stabilising a branch of reality.

In sum, Libet-type findings do not undermine free will. They reveal the preparatory dynamics through which possibilities are structured prior to embodiment. Consciousness is not a passive observer arriving too late; it is the selective process by which events take form, the mechanism that transforms potential into lived experience.

The Arrow of Time and the Problem of Now (#17)

Time is the thread through which reality unfolds, yet it remains one of the most elusive and paradoxical aspects of existence. Physics measures it, mathematics describes it, and consciousness experiences it, but there is no clear explanation of why it feels real, why the present is privileged, or how the past and future acquire coherence. Physics treats time as a coordinate, while consciousness experiences it as an animated present: an unfolding now that cannot be captured by static mathematical structures. This deep mystery is the “Problem of Now,” which remains unsolved in physics, cosmology, and consciousness studies.

In Phase 1 of 2PC there is no time as we know it. No unfolding, no sequence, no becoming, only the coexistence of every physically consistent history. Each possible universe is present there in its entirety, not as an event that will ever happen but as a finished mathematical form: an informational structure that contains all the relations that would constitute a world if it were to become real. There is no arrow of time because nothing changes, and there is no “now” because there is no differentiation between past and future. What we later interpret as temporal order exists here only as an internal coordinate within those mathematical structures, not as an experienced flow. Only when consciousness arises and collapse begins does one of those mathematical timelines ignite into lived duration.

The transition from Phase 1 to Phase 2 marks the birth of time. A system arises that can form a representation of itself, issue valuations, and thereby generate incompatibilities across branches of the superposition. Further extension of timeless mathematical structure becomes untenable, because a logical contradiction has appeared inside the ensemble. When LUCAS crossed the Embodiment Threshold, the timeless order of primordial Phase 1 could not remain in superposition and collapse followed. From that point onward, reality was a dynamically updating structure, continually regenerated through local acts of collapse involving conscious participation. Each such act introduces irreversibility: a fundamental asymmetry between what is still potential and what has become actual. The first collapse is therefore not merely an event in time but the beginning of temporality: the point at which the universe ceased to be a static informational structure and began to happen. Once the first participant exists, the universe begins to “update.” What we experience as the passage of time is that ongoing process of update: the continual reconstitution of the present as new portions of Phase 1 are drawn into actuality. Time, then, begins with meaning: the universe starts to "run" only when something within it cares which version of itself is real.

With the advent of Phase 2, the cosmos acquires something utterly new: a present. In ordinary physics, “now” is treated as an arbitrary slice through spacetime: a matter of convention, not ontology. In 2PC, the present is fundamental, because it is the locus of collapse, the site of participation, and the place where the Void engages the world. The present is where the universe commits to a single version of itself. Phase 2 is thus defined by its ongoing presentness. Each collapse event constitutes a renewal of the present.

This situation requires a new vocabulary of temporality. The present is not a knife-edge dividing past and future but a dynamic zone of coherence: a self-sustaining storm of micro-collapses through which continuity is woven. The “flow” of time is the felt texture of this storm. The present is not infinitesimal; it has thickness, extension, and internal rhythm. It was William James who first popularised the term "specious present" to describe this. Within that window, the self experiences its own persistence, integrating discrete collapses into the seamless continuity of being. The past lingers as residue, the future waits as unresolved potential, but only the present is.

The Arrow of Time: From Potential to Commitment

The existence of a present immediately entails an asymmetry between what is already real and what remains unresolved. This asymmetry is the arrow of time, and it is a direct consequence of collapse itself: once a possibility is resolved into actuality, it cannot be un-resolved. This irreversibility is a feature of ontology, not thermodynamics. From the standpoint of 2PC, time’s direction is simply the direction of commitment from indeterminacy toward realised structure. The universe advances because the process of consciousness and collapse cannot go backwards. The act of knowing is intrinsically irreversible: to experience and to choose is to foreclose alternatives.

Entropy, in this framework, is a statistical shadow cast by collapse. As potentialities become actual, the range of unchosen alternatives grows in relative measure, giving rise to the appearance of disorder. But entropy does not drive time; rather, time drives entropy. The world becomes increasingly structured not because energy dissipates, but because each act of embodiment locks in a new layer of irreversible coherence.

From within consciousness we feel this asymmetry as the passage of time: the continual advance of the present into the unformed future, leaving behind the stabilised residue we call the past. We do not remember the future for the same reason we cannot unmake a decision we have already lived through: the future has not yet been resolved, while the past has been fixed through commitment.

The Problem of Now and the Limits of Physics

Modern physics, for all its precision, has almost nothing to say about the present. It cannot tell us why there is a “now,” or what it means for something to happen. In the equations of relativity, time is merely a coordinate; in quantum mechanics, it is an external parameter that measures change but never participates in it. The entire edifice of physical theory rests on the assumption that time exists independently of the act of being – that it flows on its own, indifferent to observation.

This omission is not trivial. Einstein himself admitted that “the experience of now” stands outside physical theory. The block universe of relativity contains no privileged moment, no unfolding, no movement from past to future. All events coexist eternally, like frames in a film reel, and nothing in the mathematics distinguishes one frame as “current.” The flow of time, in this view, is an illusion of consciousness.

2PC turns this on its head. The “illusion” of temporal flow is in fact a limitation in physics. The reason physics finds no present is that physics describes only Phase 1. It cannot describe Phase 2. The “Now” is not a variable that can be inserted into an equation, because it is the condition of existence that allows equations to apply at all. In 2PC, the present is where physics meets metaphysics: the missing bridge between the formal structure of possibility and the lived reality of participation.

Philosophers have long sensed this divide. John McTaggart called it the difference between the A-series (past–present–future) and the B-series (earlier–later). Henri Bergson spoke of duration (lived time, qualitative and flowing) as something that spatialised physics could never grasp. Whitehead described each “actual occasion” as a pulse of becoming, a tiny act of self-creation. 2PC unites these intuitions under a single ontological framework: the A-series is the lived form of collapse, duration is the felt rhythm of becoming, and each actual occasion is a micro-collapse within the storm of participation. The Problem of Now dissolves when we see that physics has never had the tools to describe it. The present is not something to be measured, but something to be lived.

Time and the Storm of Micro-Collapses

If the present is where possibility becomes real, then continuity (the sense that the world endures from moment to moment) must itself be an emergent construct. In 2PC, the flow of time does not arise from any external clock, but from the pattern of local collapses through which consciousness sustains its own coherence. Each micro-collapse is an act of realisation, yet no single collapse is sufficient to produce an ongoing world. Continuity emerges only when countless such collapses interlock into a stable rhythm: a storm of micro-collapses.

This storm is self-organising. Within its flux, each collapse depends on the predictive structure left behind by its predecessors. The mind, in this sense, is a stabilising feedback loop within the storm. It is like a field of coherence that maintains the thread of identity through continuous re-instantiation. To be conscious is to inhabit this field; to remain conscious is to keep the storm in motion.

From within experience, this manifests as the specious present: the felt thickness of time in which sensations, intentions, and memories overlap. What appears to us as the seamless continuity of perception is in fact the rapid re-creation of the world, just as a wave continually re-creates itself as it moves across the surface of the ocean. The self is the dynamic process through which time holds itself together.

Temporal Scales and the Density of Collapse

Time does not flow uniformly across all scales of existence. In 2PC, the apparent pace of reality is determined by the density of micro-collapses: the rate at which possibilities are resolved into actuality. Each conscious agent generates a local stream of collapses, a rhythm that structures its own experience of the present. The faster the density of collapse, the more events are resolved per unit of lived experience; the slower the density, the more time seems to stretch. This provides a natural account of subjective time dilation. In moments of intense focus, meditation, or trauma, the local density of collapse may increase or decrease, causing the felt flow of time to expand or contract. Seconds can feel like minutes, minutes like seconds, yet no external clock has changed. The tempo of experience is an internal, participatory phenomenon: time is experienced in proportion to the rhythm of the storm of micro-collapses sustaining consciousness.

At larger scales, cosmic time emerges from the integration of countless local collapse streams. Each agent, each system capable of stabilising part of the universe, contributes to the global tempo. What we call universal time – the seconds ticked on a clock, the progression of planetary orbits, the expansion of galaxies – is the macroscopic signature of innumerable local acts of collapse interwoven into coherent patterns. The universe’s arrow of time is not imposed externally; it is the emergent aggregate of the rhythms of countless conscious participants, each entangled with overlapping regions of the superposition.

Thus, time is fundamentally multiscalar. The present is locally experienced, temporally thickened by micro-collapses; it is globally coordinated through entanglement, memory, and intersubjective coherence; and it is phenomenologically flexible, sensitive to the agent’s mode of attention and engagement. From the smallest act of perception to the unfolding of cosmic history, reality progresses through a storm of collapses, producing the seamless flow of experience that we call time.

Death, Dissolution, and the Return to Stillness

The arrow of time is not merely cosmic but personal. It advances wherever consciousness sustains its storm. When the storm falters, in deep sleep, anaesthesia, or death, the continuity of the self dissolves, and with it, the subjective flow of time. When the storm of micro-collapses ceases, time as lived continuity comes to an end. The self, which exists only as the dynamic coherence of those collapses, cannot persist once the process stops. In 2PC there is no enduring entity that departs from the body or continues to experience elsewhere; both self and soul are co-extensive with the storm itself. The soul is the Void’s participation in that field of collapse, and when the field dissipates, the grounding is withdrawn, and the self falls back into uninstantiated possibility.

To the world that continues, the traces of that life remain as structural residues: memories in other minds, genetic sequences, cultural imprints, and physical changes in the environment. These are the echoes of participation: the way the local storm has altered the wider field of the present. But ontologically, the personal “I” no longer exists. There is no persisting observer to inhabit another time, or to re-enter Phase 1 as a disembodied witness. The self has dissolved back into stillness. Each conscious life is a temporally bounded act of world-making: a finite region of reality stabilised through the recursive coherence of lived moments. The end of that coherence marks the return of its contents to indeterminacy. The Void does not reclaim an object; it simply ceases to instantiate that specific configuration of coherence.

This view offers an alternative to both materialist finality and spiritual continuation. The self is neither an illusion nor an immortal essence; it is an episode of ontological participation. When that episode ends, all that remains is the stillness of Phase 1. Yet something of that participation endures: every act of valuation, every commitment, every realised structure becomes part of the shared fabric of the now that others inherit. Death, in this light, is the rejoining of stillness by a pattern that has completed its time as a storm.

Two-Phase Cosmology

The boundary between physics and philosophy has been blurry for a long time, at least where consciousness and quantum mechanics are concerned. You still find physicalists who insist that both belong entirely to science and always will, but most people who work across disciplines understand that the hardest questions about mind and the quantum world are philosophical. Cosmology is different. Almost nobody thinks the crisis in cosmology could be the result of a serious philosophical mistake, including most critics of the standard model. It is taken for granted that the only people who should touch the problems inside ΛCDM are cosmologists and physicists, leading to a ubiquitous assumption that these are straightforward empirical puzzles that will eventually yield to empirical fixes. Yet those fixes never arrive. The field has been impotently watching the problems multiply since the 1970s.

It is no longer possible to pretend that this situation is healthy. Sabine Hossenfelder has been calling out the culture that rewards papers few people can understand, filled with predictions most people suspect will be ruled out when tested. The work keeps coming because the incentives demand it, not because the theory is converging on truth. The system keeps running, but the crisis is not being resolved.

We can anticipate huge resistance to the coming paradigm shift from the established powers within professional cosmology. It is not that scientific cosmology is going to come to an end. The problem is that a lot of people actually working in cosmology have spent their whole careers busily barking up trees which aren't just the wrong trees, but aren't even in the right forest. The real problem isn't the empirical work but the rotten foundation: physicalism.

Why inflation is like the luminiferous aether (#2,#3,#4,#5)

Inflation has long been regarded as one of the most successful theoretical advances in modern cosmology. Introduced in the early 1980s, it purports to explain why the observable universe appears so flat, homogeneous, and isotropic, despite the apparent lack of causal connection between distant regions in the early universe. In 2PC inflation is reclassified as an ad hoc mechanism invented to fix problems that arise only if one assumes a classical, observer-independent past. 

Inflation was introduced to address several deep puzzles that arise when the universe is assumed to have evolved according to classical relativistic physics from the very beginning: the Horizon Problem, the Flatness Problem and the Monopole Problem. To solve these problems, inflation posits that the universe underwent a brief period of exponential expansion immediately after the Big Bang. This expansion would stretch a tiny, causally connected region to encompass the entire observable universe (solving the horizon problem), drive the geometry of the universe toward flatness (solving the flatness problem) and dilute any relic particles with empty space (avoiding the monopole catastrophe). However, inflation itself requires finely tuned initial conditions. It demands the existence of a hypothetical inflationary field (the “inflaton”) with a specific potential, appropriate dynamics, and a graceful exit mechanism to end inflation without reheating the universe too violently. Inflation trades one set of mysteries for another, and does so on the assumption that the early universe actually existed as a classical, physical state, evolving forwards in time in a manner determined entirely by the laws of physics.

The observed isotropy and flatness of the universe do not need to be imposed via inflation because they are features of the specific cosmic history that survived the primary phase transition. They can be explained psychetelically: only a cosmos that started out exceptionally flat and smooth permits the emergence of LUCAS. They are selection effects. The flatness and smoothness were never physical impossibilities. Rather, inflation was invented because cosmologists needed a reason to explain why such extraordinarily improbable conditions prevailed in the early universe. In 2PC this sort of improbability is to be expected. Indeed, it is a central principle of 2PC: if something was physically possible, and LUCAS needed it, then it was guaranteed to happen, however improbable it might seem.

We don't need an “inflaton field” in 2PC. The inflaton is a mathematical artifact introduced to repair a model built on a mistaken ontology – an entirely ad-hoc explanation for exactly the kind of coherence that 2PC naturally explains as selection effects. In this sense, inflation really is the 21st century equivalent of the aether: an inelegant patch on a fundamentally mistaken model of the cosmos. Its purpose is to defend the classical assumption that the universe always existed as it now appears, only earlier and hotter. 

Just as the luminiferous aether was once posited to explain the propagation of light by imagining a substantive, all-pervading medium, inflation introduces an unobserved and unnecessary field to explain early cosmic conditions that only appear puzzling under a mistaken framework. Aether theory collapsed not because the wave nature of light disappeared, but because special relativity provided a better, simpler account rooted in a deeper understanding of space and time. Two-Phase Cosmology renders inflation obsolete. It is a clever but ultimately misguided attempt to preserve the idea of a continuous, classical cosmic history – a backstory that the new framework reveals never existed. It's like Hamlet's childhood. Once the deeper structure is understood, the explanatory crutch can be discarded.

It follows that the low-entropy starting condition is now an empirical prediction/retrodiction instead of a massive headache. There is no Flatness Problem, no Horizon Problem, no Inflation Reheating Precision Problem, no Reheating Mechanism Problem and no Inflaton Field Problem. There is also no constants fine-tuning problem, and there will never be any other fine-tuning problems. The 13-billion-year cosmic history selected as a whole block from Phase 1 was fully Goldilocks. The real problem is the failure to understand how and why the cosmos was and remains fine-tuned for conscious life.

The Hubble Tension (#8)

Local measurements of the expansion rate of the universe give a value near 73 km/s/Mpc. The figure extracted from the CMB is about 67 km/s/Mpc, but that lower number is not something we ever measured in the world. It only appears when the CMB data are interpreted through ΛCDM, which builds in a seamless physical history stretching from the Big Bang to the present. Most people take this continuity for granted, so the two numbers look like they should match. The tension rests on that expectation. Once you slow down and look at what each number is actually telling you, the mismatch ceases to be a mystery. The local value comes straight from observations, and it reflects the behaviour of the cosmos to which we physically belong. The CMB value is produced by running a simulation that includes inflation, a fixed early energy budget and a fully continuous past. Under 2PC that past is not a classical history at all. It is only the structure visible when our present observational surface is pushed backward through the old model’s rules. If the early universe is not an actual past, then the CMB value is not describing the same thing that the local value is describing. Both procedures assume an accelerating expansion (positive cosmological constant Λ), but they diverge in how they interpret this parameter.

The Hubble tension arises because two different procedures extract a parameter called “H₀” under different ontological assumptions. Local distance-ladder measurements yield H₀ ≈ 73 km/s/Mpc directly from observations within the present cosmic state. The lower value, H₀ ≈ 67 km/s/Mpc, is not measured in the same sense. It is inferred by embedding CMB observables into a ΛCDM model that presupposes a single continuous FLRW spacetime extending from recombination to the present.

Under 2PC, that continuity is not physically real. The early universe is not an actual past evolving forward into the present, but a projection of present Phase 2 structure onto the timeless Phase 1 ensemble, interpreted as if it were a historical past. Consequently, the CMB-inferred H₀ does not describe the same geometric quantity as the locally measured value, even though both are labeled “the Hubble constant.”

Type Ia supernova data independently establish that the present cosmic geometry exhibits accelerating expansion, corresponding to a positive Λ. This result does not depend on inflation or CMB reconstruction. What is disputed is not the existence of acceleration, but the ontological status of Λ. In standard cosmology, Λ is treated as vacuum energy permeating spacetime. In 2PC, Λ is the intrinsic curvature of the Phase 2 manifold: an emergent structural feature of instantiated reality, not a physical substance or force (on which I will elaborate in the coming pages).

The locally measured H₀ therefore directly characterises the actual curvature of the Phase 2 geometry we inhabit. The CMB-derived value characterises the Hubble parameter that would be required for consistency if that same curvature were assumed to belong to a single, continuous classical history extending into a physically real early universe. The numerical mismatch reflects this category difference. It is not a discrepancy between two measurements of the same quantity, but a mismatch between a direct geometric measurement and a model-dependent reconstruction tied to a denied ontology. The tension dissolves once we recognise that one number measures the geometry of our actualised world, while the other measures consistency with a fictional history.
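Whatever one makes of the two numbers ontologically, the size of the gap between them is easy to make concrete. The following sketch uses only standard unit conversions (not anything specific to 2PC) to express the fractional mismatch and the naive Hubble times implied by each value:

```python
# Quantifying the gap between the two H0 determinations discussed above.
# Values and conversions are standard; nothing here depends on 2PC.

MPC_IN_M = 3.0857e22          # metres per megaparsec
GYR_IN_S = 3.156e16           # seconds per gigayear

def h0_si(h0_km_s_mpc):
    """Convert H0 from km/s/Mpc to SI units (1/s)."""
    return h0_km_s_mpc * 1e3 / MPC_IN_M

h0_local = 73.0   # distance-ladder value, km/s/Mpc
h0_cmb   = 67.0   # LCDM fit to the CMB, km/s/Mpc

gap = (h0_local - h0_cmb) / h0_cmb            # ~9% fractional mismatch
t_local = 1.0 / h0_si(h0_local) / GYR_IN_S    # naive Hubble time, Gyr
t_cmb   = 1.0 / h0_si(h0_cmb) / GYR_IN_S

print(f"fractional tension:  {gap:.1%}")
print(f"Hubble time (local): {t_local:.1f} Gyr")
print(f"Hubble time (CMB):   {t_cmb:.1f} Gyr")
```

The point of the exercise is that a roughly nine percent disagreement in a basic cosmological parameter is far too large to wave away as measurement noise, which is why it demands either new physics or, as argued here, a re-examination of what each number refers to.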

The S8 Tension (#9)

The S8 tension is usually framed as a mismatch between the amplitude of matter clustering inferred from early-universe observations, primarily the CMB, and the lower values measured directly at late times through weak lensing and galaxy surveys. In standard cosmology, this difference suggests either new physics, measurement bias, or statistical fluctuation, treating both numbers as measurements of the same underlying quantity across time.

Under 2PC, the situation parallels the Hubble tension. The "early-universe" S8 is not a direct measurement. It is extracted by interpreting the CMB through ΛCDM, which projects present observables backward onto a hypothetical continuous timeline, assuming inflation and a physically real hot Big Bang. That value exists only as a reconstruction: what the clustering amplitude would have been if the early universe were a physical state evolving forward into our present. The low-redshift S8, in contrast, comes from direct observation of matter clustering within the present Phase 2 state. It reflects the actual structure of the branch we inhabit.

Consequently, these two figures are not discrepant measurements of the same quantity. The locally measured S8 characterises the actual growth of structure in our instantiated cosmos. The CMB-derived S8 characterises the clustering amplitude that would be required for consistency if structure were assumed to have grown continuously from a physically real early universe under ΛCDM dynamics. The numerical mismatch reflects this ontological difference: one measures the geometry of Phase 2 directly; the other measures consistency with a projected phantom history.

Reconciling S8 across epochs thus becomes a matter of mapping the Phase 2 realised branch to what ΛCDM would reconstruct from the CMB, not a problem of missing physics. The apparent tension dissolves once we recognise that early-universe reconstructions are model-dependent projections onto Phase 1, not literal histories of Phase 2 evolution.

Dark Energy (#10)

The observed late-time acceleration of cosmic expansion is often presented as evidence for a new physical component (Dark Energy) permeating space and driving the universe apart. In standard ΛCDM cosmology, this interpretation is embedded within a continuous physical history in which an early inflationary acceleration is followed by matter-dominated deceleration and then by a second acceleration attributed to vacuum energy. The apparent necessity of Dark Energy arises from treating this entire timeline as physically real.

Under Two-Phase Cosmology, that assumption is rejected. Inflation and the detailed early expansion history inferred from the CMB are not taken to describe an actual past evolving forward in time, but a model-dependent reconstruction constrained by present observations. However, this does not eliminate the empirical reality of late-time acceleration. Independent of inflation or CMB data, Type Ia supernovae and related distance indicators robustly establish that the present cosmic geometry exhibits accelerating expansion.

What 2PC denies is not the acceleration, but its standard interpretation. In ΛCDM, acceleration is attributed to a physical substance (vacuum energy or a dark field) acting as a repulsive force. In 2PC, no such component exists. The cosmological constant Λ is instead understood as an intrinsic curvature parameter of the Phase 2 manifold: a structural feature of instantiated reality required for the global coherence of the experienced cosmic geometry. The acceleration is therefore real, but kinematic rather than dynamical: an expression of how geodesic separations behave in the realised metric, not the effect of a force acting on space.

On this view, Λ is neither a hidden energy reservoir nor a relic of early-universe physics. It is an emergent, constant property of the Phase 2 geometry itself, fixed at the moment of instantiation rather than evolving through a historical process. The need for “Dark Energy” as a physical entity thus disappears, not because the acceleration is illusory, but because its explanation no longer invokes an additional ontological ingredient. As with gravity in general relativity, what appears as a force in one framework is revealed, in a deeper account, as geometry.

The Cosmological Constant Problem (#11)

The cosmological constant problem is the problem of explaining why the observed vacuum energy density (~10⁻⁹ J/m³) is 120 orders of magnitude smaller than the value calculated from quantum field theory, and why it is not exactly zero. In standard cosmology, this problem arises because we treat the quantum vacuum as a physical state whose energy should gravitate, requiring either a miraculous cancellation or an unexplained tuning to near-zero.

Under Two-Phase Cosmology, the problem dissolves entirely. The enormous vacuum energy density calculated by QFT belongs to Phase 1: the timeless ensemble of possibilities. It exists in the mathematical structure of unactualised fields, not in the physically instantiated Phase 2 cosmos. The calculation assumes a continuous physical vacuum pervading spacetime; in 2PC, no such vacuum exists prior to instantiation. Phase 1 is not a physical state with an energy density; it is the space of coherent mathematical forms.

The observed cosmological constant, by contrast, is not a vacuum energy at all. It is the intrinsic curvature of the Phase 2 manifold: the geometric tension required to maintain a single-sheeted, coherent spacetime capable of hosting conscious observers (see Chapter 10). The small positive value (~10⁻⁵² m⁻²) is not a fine-tuned residue of cancelled vacuum fluctuations, but the minimal curvature necessary for the global consistency of an instantiated branch.

Once we recognise that Phase 2 is the only physically realised geometry, and that its curvature emerges from the Embodiment Threshold rather than from quantum fields in a pre-existing vacuum, the QFT prediction becomes irrelevant to the observed value. The question “Why is Λ not 10¹²⁰ times larger?” rests on the false premise that Phase 2 inherits energy density from Phase 1. It does not. The only “cosmological constant” in 2PC is the geometric parameter Λ that characterises the curvature of the actualised manifold: a necessary structural feature of any branch capable of supporting coherent experience, neither zero nor arbitrary, but precisely the value required for the stability of the storm of micro-collapses that constitutes Phase 2 reality.
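The two figures quoted above (Λ of order 10⁻⁵² m⁻² and a vacuum energy density of order 10⁻⁹ J/m³) are related by the standard general-relativistic conversion ρ = Λc⁴/(8πG). The sketch below checks that consistency and reproduces the famous discrepancy against the naive quantum field theory estimate, here proxied by the Planck energy density; the exact exponent of the gap (conventionally quoted as "120 orders of magnitude") depends on the constants and cutoff one chooses, so the output should be read as an order-of-magnitude check, not a precise figure:

```python
# Sanity check on the magnitudes quoted in the text, using standard
# physical constants. Nothing here is specific to 2PC.
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J s

LAMBDA = 1.1e-52    # observed cosmological constant, 1/m^2

# Energy density equivalent of the observed Lambda: rho = Lambda c^4 / (8 pi G)
rho_obs = LAMBDA * c**4 / (8 * math.pi * G)   # ~5e-10 J/m^3

# Planck-scale density, standing in for the naive QFT vacuum estimate
rho_planck = c**7 / (hbar * G**2)

print(f"observed vacuum energy density: {rho_obs:.1e} J/m^3")
print(f"Planck-scale estimate:          {rho_planck:.1e} J/m^3")
print(f"discrepancy: roughly 10^{math.log10(rho_planck / rho_obs):.0f}")
```

With a Planck-scale cutoff the ratio comes out nearer 10¹²³ than 10¹²⁰, which is why the literature hedges on the precise exponent; either way, the mismatch is what makes this, on the standard reading, the worst prediction in physics.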

The Monopole Problem and the Identity of Dark Matter (#6, #12) 

When people talk about Dark Matter, they usually bundle together several different observations and pretend they point to one thing. In earlier chapters I separated them out. Some of these signals look like features that simply had to be present if a branch was ever going to contain a being like LUCAS. Large scale structure and the CMB fall into that category. Their specific patterns feel like Phase 1 filters rather than Phase 2 surprises. Others, like the rotation curves of spiral galaxies, the dynamics of clusters, lensing maps, and the behaviour of colliding clusters, feel more like Phase 2 anomalies. Even so, it is not hard to imagine that any embodied universe which supports creatures on a planet inside a long lived spiral galaxy might need some extra mass component to keep that galaxy stable, and similar conditions will therefore be commonplace throughout the cosmos. There may be no purely baryonic way to build the kind of stellar environments that eventually support biology. If that is true, then whatever we call Dark Matter might not be optional at all, but part of the minimal scaffolding a consciousness bearing cosmos requires. That gives us a way to understand why something like Dark Matter shows up, though it does not yet tell us what it is.

The monopole story looks different on the surface, yet it fits into the same frame once the 2PC lens is in place. Standard cosmology treats the predicted overproduction of magnetic monopoles as one of the reasons inflation had to happen. High energy theories tell us that monopoles should be created during symmetry breaking, and in such large numbers that they would dominate the early energy budget and collapse the universe. Inflation was brought in partly to dilute them away. This is often told as if it were a logical necessity. In reality it is another probabilistic worry. It says that a typical symmetry breaking history would flood the universe with monopoles. It does not say that other outcomes are impossible.    

In 2PC, these sorts of worries get translated into questions about the structure of Phase 1. Phase 1 contains every coherent pattern of field values, including every possible outcome of symmetry breaking. Monopole abundance is just another variable that ranges over this space of possibilities. Somewhere in that landscape are branches where monopoles never proliferate, or bind into stable pairs, or otherwise settle into harmless configurations. These branches may be rare in a statistical sense, but rarity is totally irrelevant here. Selectability is the only thing that matters. A cosmos that contains free monopoles in catastrophic numbers never makes it to chemistry. A cosmos with no Dark Matter analogue never builds galaxies. Consciousness cannot arise in either, so the only branches that can be embodied are ones where monopoles behave in exactly the narrow way that keeps the early universe stable and the later universe structured. No inflationary dilution is required because the branches that would have been needed are not selected. What looks like miraculous fine-tuning from the standpoint of physical theory becomes an ordinary consequence of metaphysical filtering. Embodiment demands balance, and the balance becomes tighter the closer we move toward the origin of the embodied timeline.

Once this is clear, the monopole problem stops being a problem. It becomes evidence that the early universe sits inside a corridor shaped by the conditions needed for consciousness to form. If monopoles, or more plausibly their bound state (monopolium), do end up being the unseen matter that stabilises galaxies, then the absence of free monopoles and the presence of Dark Matter become two sides of the same selective pressure. The monopoles that would have destroyed the universe never show up in an embodied branch, and the ones that help to hold galaxies together do. This provides a non-baryonic Dark Matter candidate that actually comes from the Standard Model (or simple GUT extensions), rather than inventing a "Dark Sector" out of thin air. Physics still carries the empirical task of working out the details. People can compute relic densities and annihilation rates and look for signals in detectors. 2PC does not replace that work, but it removes the philosophical anxiety that something impossible is being asked of nature. Unless somebody demonstrates that a universe with the right monopole behaviour is physically impossible, and therefore unavailable within Phase 1 and unselectable at the point where embodiment becomes possible, there is no problem. The rest is an exercise in getting the physics right.

Note on empirical consequences

2PC constrains which monopole behaviours are compatible with an embodied universe, but it does not by itself fix their detailed physical parameters. Working out whether monopoles, their bound states, or some related sector could play the role attributed to Dark Matter remains an empirical task for particle physics and cosmology. The aim here is not to pre-empt that work, but to show that no contradiction is required in principle and why even extreme fine-tuning is not a problem. This book is concerned with the conceptual architecture within which such empirical questions can be pursued coherently. If the question is whether 2PC makes any new empirical predictions then this is a strong candidate: if Dark Matter turns out to be monopolium, that is corroboration from a new empirical observation. However, it is also the case that if Dark Matter turns out to be something else, it would not falsify the core of 2PC.

The Baryon Asymmetry Problem (#7)

The imbalance between matter and antimatter looks like the sort of feature that would be handled by selection, but there is a deeper issue beneath it. If one starts from genuine nothing, or even from a perfectly symmetric possibility space, then any creation event feels as if it should preserve that symmetry. Zero splits into plus and minus. That is the intuitive picture, and it leads to the familiar puzzle: if matter and antimatter were produced in equal amounts, and almost all of them annihilated, why was anything left over? Something about the early process broke a symmetry that seems, at first glance, unbreakable.

2PC does not pretend to solve that. It does not give a mechanism for baryogenesis and it does not claim that the symmetry could not have been exact. What it does is shift where the real question sits. The problem is usually framed as if there were one unique physical history and that history somehow had to produce a small excess of baryons through a specific set of dynamical steps. In 2PC, there is no single dynamical history inside Phase 1. There is a space of all coherent histories, including every way the early symmetry could have unfolded. Some branches preserve it perfectly and end in sterile radiation. Others break it by tiny amounts, or by large amounts, or in ways we may never classify. Somewhere in that range sit the branches where the imbalance is exactly in the narrow window that allows chemistry, planets, and eventually consciousness.

The fact that we see a nonzero asymmetry tells us only that the symmetry was breakable within the space of coherent possibilities. The fact that the amount is small tells us we live in a branch where it broke by just enough to produce matter without immediately collapsing the young universe. And the fact that this branch became embodied means the Void selected it at the phase shift. This does not explain how baryogenesis worked. It explains why the only branches we ever encounter are the ones where it worked in the right way.

In this sense, the asymmetry is a selection effect layered on top of an open physical question. The physics of baryogenesis remains a job for cosmologists. The metaphysics simply frames the puzzle: whatever the true mechanism is, it must have been one of the possibilities available in Phase 1, and it must have produced a branch capable of embodiment. Nothing more is claimed. Nothing less is needed. It can only be yet another selection effect.

The Quantum Gravity Problem (#13)

The long search for quantum gravity usually starts with the assumption that spacetime itself must be quantum. People then try to quantise curvature, treat geometry as a field, or imagine space and time emerging from deeper quantum structures. This effort has continued for decades without consensus on what would even count as the correct explanatory target. In 2PC, the reason is simple: the project begins in the wrong category.

Gravity is not a quantum system waiting for its operator form. It belongs to the classical side of Phase 2, which only comes into being once collapse has already occurred. Before collapse there is no space or time, only a superposed set of possibilities. Geometry does not evolve within Phase 1; it appears when Phase 1 is resolved into Phase 2. Trying to quantise gravity is therefore like trying to quantise the page beneath the equations. It mistakes the background created by collapse for a field inside the superposition.

Penrose treated the problem differently and argued that gravity itself drives collapse. In his picture, superpositions of different curvatures strain against each other, and when the tension grows large enough the state reduces. This makes gravity a pre-existing feature of the world that cannot tolerate incompatible geometries. In 2PC, the causal order is reversed. The world begins without geometry. Collapse occurs when a self-referential observer arises whose valuations cannot remain coherent across branches. That act of resolution instantiates a single classical reality, and gravity follows as its structural consequence, not as a force reaching back into Phase 1 to resolve superpositions. Collapse generates geometry; gravity is the name we give to the structure of that instantiated geometry.

How countless local collapse events coordinate to produce a single, coherent global geometry with uniform curvature is explained above in the section called "Competition Resolved Collapse". For our purposes here it suffices to recognise that gravity is not a quantum field awaiting discovery, but the structural description of the geometry that emerges whenever Phase 1 is resolved into Phase 2.

The Early Galaxy Formation Problem (#14)

The unusually early, massive galaxies revealed by JWST present a serious problem for ΛCDM because, within that framework, there simply isn’t enough time for such systems to form. The problem only arises if the ΛCDM timeline is treated as a literal record of events, stretching cleanly from a hot dense beginning to the present day. If that whole history is taken for granted, the new data look impossible.

2PC removes that expectation. If the early universe is not an earlier physical era but an observational surface reconstructed from the present cosmic state, then this question, like so many others, changes shape. Instead of asking how real galaxies managed to assemble so quickly in a young universe, we must ask why the structures that appear on the reconstructed early surface have the mass and maturity that they do.      

One potential explanation is that the pattern we see is a by-product of the same selection constraints that shape the rest of the embodied branch. If complex life requires a particular set of large-scale conditions, and those conditions depend on the way structure develops over cosmic time, then only those retrodicted early surfaces that support a viable long-term path to embodiment will appear within Phase 2. This does not mean that LUCAS needed distant galaxies to form early in a literal past. It means that in the version of the cosmos where everything was just right for LUCAS to evolve in the Milky Way, conditions across the entire cosmos favoured the early development of this kind of galaxy.

Indeed, here might be the answer to the question we were left with in Chapter 11 about which "cosmic egg" is selected for realisation. There are infinitely many possible cosmoses in the Pythagorean ensemble, but the existence of these galaxies suggests that the selected cosmos (the next in the queue to hatch) is the one where conscious life evolves in the shortest period of time. The selected cosmos is not the one that “wants” consciousness fastest, but the one whose global structure reaches representational stability earliest under the same physical laws. This would mean that not only do we live in a cosmos where everything is just right for the evolution of conscious organisms, but in the one where everything is so exceptionally just right that the evolution of LUCAS also happens as quickly as is physically possible. If future observations revealed that galaxy formation was delayed until significantly later cosmic times (say, z < 5 for massive galaxies), this would present a challenge to the 'fast track' interpretation, suggesting that selection favours other constraints over temporal efficiency. This is a second novel empirical prediction.

The Black Hole Information Paradox (#16) 

In standard physics, the Black Hole Information Paradox arises because Hawking radiation appears thermal and featureless, suggesting that information falling into a black hole is permanently lost, in violation of quantum mechanical unitarity. In 2PC, this paradox dissolves once we recognise that black holes mark not a breakdown of physics, but the boundary beyond which Phase-2 realisation cannot be sustained.

Phase 1 and Phase 2 Perspectives

From the perspective of Phase 1, no information is ever lost. The global superposition preserves complete unitarity; all degrees of freedom, including those that appear to fall into black holes, remain encoded in the mathematical structure of possibility space. The paradox arises only if we assume that Phase 2 constitutes the entirety of reality.

Phase 2, by contrast, is local and selective. It contains only those structures that have become entangled with collapse loci: regions where conscious systems have instantiated determinate states. The interior of a black hole, beyond the event horizon, is therefore a Phase-2 exclusion region: not because physical laws fail there, but because the conditions required for sustained representational coherence cannot ultimately be maintained. Information entering the horizon does not vanish; it simply remains in Phase 1 as uninstantiated formal structure.

Hawking Radiation and the Page Curve

Hawking radiation emerges in standard theory as a quantum effect associated with the horizon. In 2PC, this radiation exists formally within Phase 1, while only the portion that becomes entangled with existing collapse loci is realised in Phase 2. Crucially, the radiation realised in Phase 2 carries only information about exterior degrees of freedom that were already part of the realised manifold. It does not encode interior microstates, because those states were never instantiated in Phase 2 to begin with. This resolves the tension with the Page-curve requirement of unitarity. From the Phase-1 perspective, unitarity is perfectly preserved: all information remains encoded in the timeless superposition. From the Phase-2 perspective, radiation appears thermal and information appears lost, reflecting the locality limits of realisation rather than any breakdown of quantum mechanics. Hawking radiation is thus the observable signature of the boundary between realised and unrealised domains.

The Infalling Observer

The status of an infalling observer requires careful clarification. In 2PC, the event horizon is not a physical barrier and does not violate the equivalence principle. A freely falling observer experiences no local discontinuity at the horizon, exactly as general relativity predicts. The horizon is instead an ontological boundary defined by global causal structure, not by local physics. Phase 2 participation requires sustained representational coherence within an open network of bidirectional causal exchange. Crossing the horizon does not abruptly terminate this participation; rather, it initiates an irreversible process. Beyond the horizon, the observer’s causal embedding progressively shrinks. Increasing portions of the realised universe become permanently inaccessible, and the predictive self-model can no longer maintain global coherence with the wider collapse network. As this causal isolation deepens, the storm of micro-collapses sustaining Phase 2 existence becomes progressively destabilised. From the internal perspective, therefore, there is no sharp event or instantaneous “firewall.” Instead, Phase 2 participation fades asymptotically as the system loses the ability to sustain coherent self-modelling within the realised manifold. Consciousness terminates as the conditions required for stable collapse can no longer be met. The observer’s informational structure is not destroyed; it persists within Phase 1 as unrealised formal possibility.

Conceptual Significance

Black holes in 2PC are neither destroyers of information nor sites requiring exotic quantum-gravity mechanisms to preserve unitarity. They are regions where the Phase-2 manifold reaches a causal limit beyond which sustained representational coherence becomes impossible. The information paradox disappears once we distinguish two domains: Phase 1, where all information is eternally preserved within the mathematical ensemble, and Phase 2, where only that which can be coherently instantiated becomes real. The event horizon marks the global boundary of Phase-2 participation, while the dissolution of realised structures occurs gradually as causal isolation erodes the conditions required for ongoing collapse. Hawking radiation is the thermal trace of this boundary, carrying away the limited information that was ever part of the realised manifold, while the full informational content remains intact within the timeless vault of possibility.

The Fermi Paradox (#15)

The Fermi paradox has always carried a strange mix of curiosity and dread. The universe is old and wide and full of places where life should have taken hold, and yet every radio dish comes up empty. Under 2PC the answer becomes painfully clear. The primordial wave function could collapse only once, because collapse is not a repeating physical glitch but a metaphysical resolution of contradiction. When that resolution happened, the universe stopped behaving like a Phase 1 generator of branching possibilities and settled into the single embodied history that could host a coherent subject. Phase 1 had something like the power of an unbounded search, but that power only existed before consciousness appeared. Once a subject emerged and valuations had to be unified, that search space was no longer available. The cosmos we inhabit is the one that made it through the threshold. It is not physically impossible that it could happen again, but it is so extraordinarily improbable that we can rule it out (this is the very same "almost certainly false" that appears in the subtitle of Mind and Cosmos). So the question of where everyone is becomes almost rhetorical. If any separate locus of conscious existence ever did arise, it would almost certainly belong to a metaphysically disconnected branch that we can never encounter from inside this collapse. In the world we can interact with, there is no one else. We are it.

This is the third novel empirical prediction. If we were to find strong evidence for a second example of life having evolved, anywhere in the cosmos, we would have legitimate reasons for doubting the two-phase model, and if we find conscious life then 2PC is almost certainly false.

Neogeocentrism

It turns out the old geocentrists weren’t as far off as we like to think. Ever since the shift from Ptolemy to Copernicus, people have assumed that once Earth was removed from the centre of the solar system it must also have lost any special place in the larger cosmos. Later we decided the universe didn’t have a centre at all, and most people accepted that even if they never quite understood what it meant. It felt harmless to believe the universe was everywhere and nowhere at once, so no one worried about it. 2PC changes this in a very simple way. If consciousness emerged only once, and if that emergence is what brought Phase 2 into being, then the point of origin for the embodied cosmos is not a spatial coordinate at all but a locus of subjectivity. Our planet does not sit at some geometric centre, but it is the only location that carries the structures capable of embodiment. Earth is the metaphysical centre of the cosmos.

The desperate need for a new paradigm 

As should now be crystal clear, the currently dominant paradigm of scientific materialism cannot account for the empirical data in a coherent, holistic manner. This is systematically overlooked whenever I start talking about paradigm shifts (and this applies to both humans and AI). A typical initial response to such suggestions is to ask what novel empirical predictions the new paradigm makes – people immediately demand empirical proof before any new paradigm will even be considered. Which means if you are talking about 20 different problems then that's 20 lots of empirical proof. This demand, therefore, intrinsically assumes that the new paradigm is both reductionist and primarily scientific. It assumes that what needs to change is the scientific part of scientific materialism, not the materialistic part, and that methodological reductionism is the only legitimate way of doing science. It assumes the continuing validity of what Thomas Kuhn called “normal science”, and systematically rejects anything that sounds too revolutionary. Physicalism and reductionism are sacrosanct – you're not allowed to challenge those, because doing so is believed to threaten science itself. But what if the broken part of scientific materialism is physicalism and reductive thinking? What if scientists aren't doing anything empirically wrong? What if the real problem is that they're trying to fit valid empirical data into a model of reality which is metaphysically broken? What if the real problem is the philosophy, and the only possible solution is a major Kuhnian revolution?

If the currently dominant paradigm could coherently and elegantly account for the existing empirical data (technically, if it was “empirically adequate”) then the demand for empirical proof would be perfectly reasonable. The same would apply if there were multiple competing options for a new paradigm to take its place – if there were two paradigms-in-waiting, both of which managed to rid us of this enormous list of problems in similarly parsimonious ways, then we would require an empirical means of distinguishing between them. In fact, neither of these things is true. As long as there is no clear candidate for a new paradigm, a status quo which is empirically inadequate will continue to stagger around in no particular direction, steadily losing hope of ever arriving at a consensus resolution of its increasingly miserable existence. Scientific Materialism and its entrenched opposition together comprise a zombie non-paradigm; materialism (or more accurately, physicalism) is kept half-alive only by the absence of a viable replacement. Some of the problems on this list currently have no proposed physicalist solution at all, others have an already-large and still-growing list of proposed solutions, none of which stands out as the conclusive answer. It is also the case that most of these proposed solutions are themselves either philosophical rather than scientific, or trying to be scientific but failing. The demand for empirical proof before there is meaningful intellectual engagement is therefore not reasonable. At this point, if somebody can propose a new model of reality which provides a coherent, integrated resolution to most of these problems – one which is empirically adequate – then it should not only be taken seriously, but should be accepted as a legitimate new paradigm by anybody who believes empirical adequacy should be the first priority in science. 

So there we have it. Until such time as the physicalists can offer the world a cosmology which can actually account for the empirical evidence we already have, they can take their demands for “new empirical predictions” and shove them somewhere that the sun doesn't shine, because such demands amount to the most outrageous double-standard in the history of modern science. That said, it is worth repeating that in addition to cleanly resolving a long list of problems that ΛCDM cannot, 2PC does make a few new empirical predictions, the clearest of which are these:

  1. Dark matter is probably magnetic monopoles in the bound state of monopolium.
  2. We will never find alien life.
  3. The cosmos is not just fine-tuned for conscious life, but on the fastest possible physical track for its evolution.

The fate of ΛCDM

From the perspective of the old paradigm, which is also the perspective of academic cosmology, it is the claims I have made in this article which are outrageous. I am expecting them to be ignored, dismissed and ridiculed, in that order. But the reality is that if we change the metaphysical foundation – if we reject physicalism and replace it with the hierarchical neutral monism I have described here – then the problems that ΛCDM and physicalism cannot solve either disappear or are fundamentally reframed. The truth is that ΛCDM is like a ship holed beneath the waterline, and as fast as cosmologists try to seal the holes, more holes appear. This ship is heading for the bottom. The only thing that has been keeping it afloat is the absence of an empirically adequate paradigm to replace it. Or rather I should say was, because the Two-Phase Cosmology is that paradigm. This will be no ordinary paradigm shift though, because physicalism is going down with ΛCDM: ΛCDM is the best model that physicalism can support.
