06/07/2025
Introduction
Part 1: Cosmology in crisis: the epicycles of ΛCDM
Part 2: The missing science of consciousness
Part 3: The Two Phase Cosmology (2PC)
Part 4: Synchronicity and the New Epistemic Deal (NED)
Copyright 2025 Geoff Dann. This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
DOI: 10.5281/zenodo.15823610
Zenodo link for a PDF of the whole series of articles as a single document
Four hundred years after the scientific revolution, science has mapped the stars, split the atom, and traced the birth of the universe to a fraction of a second after the Big Bang. Yet it remains unable to answer what is arguably the most immediate and intimate of all questions: What is consciousness? Is it a physical process, a computational illusion, or something more fundamental? There isn't even any consensus that it exists at all. This is not merely an unsolved problem; it is an unframed one, a profound gap in the architecture of science itself. The roots of this omission lie in the origins of modern science. The methodological revolution of the 17th century succeeded by excluding consciousness. Galileo and Descartes both argued that objective reality must be defined in terms of mathematical quantities – length, mass, velocity, and force – leaving subjective experience to philosophy or theology. This strategic division allowed physics to flourish, but it came at a cost: the subjective domain of experience was bracketed off as scientifically intractable. Cartesian dualism created a conceptual firewall between mind and matter, and for centuries the physical sciences developed exclusively on one side of it.
The 20th century introduced new complexities. Advances in neuroscience revealed a growing correlation between brain activity and states of awareness, but failed to explain why such activity should be accompanied by any experience at all. Attempts to naturalise consciousness by reducing it to neural computation, higher-order representation, or information integration have produced plenty of complicated models, but no explanations. The so-called “hard problem” of consciousness, as formulated by David Chalmers, remains untouched: Why should physical processes in the brain give rise to subjective experience? Why should there be something it is like to be a brain?
Compounding this mystery is the role of observation in quantum physics. As early as the 1920s, the founders of quantum mechanics (QM) realised that the observer could not be cleanly separated from the system observed. This was not merely a technical limitation, but a structural feature of the theory: measurement collapses superpositions into definite outcomes, yet the exact mechanism – and whether consciousness plays a role – remains a wide open question.
Consciousness is the greatest blind spot of the scientific worldview. It is the one phenomenon that cannot be objectified or measured, but cannot be dismissed without undermining the very framework of science itself. Unlike dark matter or dark energy, consciousness is not simply unobserved; it is what makes observation possible in the first place.
In Part Two of The Reality Crisis, we turn to the missing science of consciousness.
The "hard problem of consciousness," a term introduced by philosopher David Chalmers, refers to the extreme difficulty of explaining how and why physical processes in the brain give rise to subjective experience. While the so-called “easy problems” of consciousness (attention, perception, memory, decision-making...) can in principle be explained by identifying underlying neural mechanisms, the hard problem resists such treatment. It asks: Why is there something it is like to be a conscious subject? How can objective physical processes produce qualia – the raw, first-person feel of experience, such as the redness of red, the sting of pain, or the taste of chocolate? This challenge is not merely one of empirical detail but of explanatory structure. Even a complete and exhaustive mapping of the brain’s physical and functional organisation would leave unanswered why those processes are accompanied by consciousness at all. The existence of an explanatory gap between third-person physical descriptions and first-person phenomenal experience suggests that something fundamental is missing from the materialist account of mind. Moreover, consciousness appears to be non-essential to many cognitive functions. Reflexes, behavioural outputs, and even complex decision-making can often be explained mechanistically without any apparent reference to phenomenal awareness. This raises the question: if consciousness is not required for function, why does it exist?
Chalmers’ well-known philosophical zombie thought experiment dramatises this point. A philosophical zombie is a being physically and behaviourally identical to a human, yet entirely devoid of subjective experience. The conceivability of such an entity (regardless of whether it is metaphysically possible) suggests that consciousness is not logically entailed by the physical facts alone. Thus, the hard problem is not just a problem of neuroscience but a problem for the metaphysical foundations of physicalism.
I must emphasise that the Hard Problem arises specifically within the context of a materialist ontology – one in which reality is exhausted by the physical. For dualists, idealists, and other non-materialist metaphysical positions, consciousness is either fundamental to reality or straightforwardly accounted for. In contrast, materialism and physicalism must attempt to explain how consciousness arises from an ontology that explicitly excludes it at the outset. The hard problem is therefore best understood as the inevitable product of a metaphysical framework that begins from the presupposition that consciousness is not part of the basic furniture of reality. In this light, “hard” becomes a euphemism for “impossible.”
However, rejecting materialism/physicalism does not automatically offer a viable way forward. The absence of consensus on a non-materialist alternative underscores the broader theoretical crisis in the philosophy of mind. The existing responses to the intractability of the Hard Problem can be categorised as follows:
Some philosophers and cognitive scientists have responded by denying the existence of consciousness altogether. Eliminative materialists contend that our intuitions about subjective experience are systematically mistaken and that a mature neuroscience will dispense with consciousness as a theoretical entity. While logically consistent, this position is deeply counter-intuitive and self-defeating: if consciousness does not exist, then neither does the subjective standpoint from which such claims are asserted. This view sacrifices the undeniable reality of experience to preserve the coherence of materialist ontology.
In contrast, idealist theories assert that consciousness is the fundamental substrate of reality, with the physical world emerging from or within it. This position has a long philosophical lineage and has gained renewed attention in recent years (e.g., Kastrup, 2021). However, idealism continues to face significant resistance, largely because it appears to minimise the ontological status of the physical world and invites the problematic implication of disembodied minds: that consciousness can exist independently of any physical substrate, brains included.
Panpsychism proposes that consciousness is a ubiquitous feature of the natural world, present to some degree in all matter. Rather than emerging from complex arrangements of non-conscious parts, consciousness is posited as a fundamental property akin to mass or charge. This view attempts to bridge the explanatory gap by denying that consciousness must "emerge" at all. While panpsychism has gained increasing traction as dissatisfaction with materialism grows, it suffers from counter-intuitive implications (are rocks conscious?) and difficulty in explaining how micro-experiences combine to form unified macro-experiences (a version of the Binding Problem).
A final position holds that consciousness “emerges” from sufficiently complex arrangements of matter. This form of emergentism attempts to preserve materialist commitments while acknowledging the novelty of consciousness, but doesn't get us any closer to a coherent explanation. To assert that a radically different ontological category (subjectivity) emerges from material complexity is to posit a form of naturalistic magic unless the nature of this emergence, and the causal interaction between consciousness and the brain, can be clearly explained. Why does consciousness emerge? Under what conditions? Does it exert causal influence, and if so, how? If it does not, how can the brain know anything about it?
Modern biology explains the development of life in terms of variation, selection, and adaptation, but it has yet to offer a coherent account of how or why conscious organisms evolved. It cannot say what consciousness is for, or what it does, in a way that fits cleanly within evolutionary logic. Natural selection acts on function. It explains the emergence of complex traits and structures by showing how they enhanced an organism’s chances of survival and reproduction. But consciousness, defined here not as behaviour or information processing but as subjective experience, has no clearly defined function. One can describe the adaptive advantages of perception or decision-making without invoking the felt experience of seeing red or making a choice.
This leads to a troubling possibility: that consciousness is an accidental, epiphenomenal by-product of brain activity, lacking causal power. But if that were the case, then it becomes unclear how evolution could have “selected” for it at all, because evolution does not select for non-functional by-products. Either consciousness has a function and influences behaviour, in which case its causal role must be identified, or it does not, in which case its evolution remains unexplained. The standard paradigm offers no satisfying resolution to this dilemma.
Further complicating the picture is the fact that we cannot observe consciousness in others directly. We infer its existence from behaviour, but behaviour could, at least in principle, be produced by unconscious systems. If an organism behaves intelligently and adaptively, there is no clear empirical test to determine whether it has conscious experience or is merely simulating it. This leads to a second major problem: we cannot define consciousness operationally, in terms that science can measure or model. Even if consciousness is somehow tied to brain complexity, there is no agreed-upon threshold where it is supposed to “turn on”, and no clear evolutionary lineage. The traditional tools of evolutionary biology do not reveal when consciousness first appeared, or in what form.
These problems, regarding both the function of consciousness and its origin, have led some philosophers to question whether the prevailing physicalist account of mind can ever be sufficient. Among the most high-profile of these critics is philosopher Thomas Nagel, whose 2012 book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False ignited significant controversy within academic and scientific communities. Nagel’s central argument is that consciousness, reason, and value cannot be adequately explained by materialist or mechanistic frameworks alone. He painstakingly argues that subjective consciousness cannot be merely an emergent feature of complex brains, and must somehow be a fundamental aspect of reality that demands its own form of explanation. In his view, the standard evolutionary model treats consciousness as an afterthought – something to be accommodated only once all physicalist assumptions are in place – which fails to do justice to what consciousness is and how it appears in the world. He concludes that the only credible way to explain the evolution of consciousness naturalistically is in terms of a teleological naturalism in which mind is a basic and irreducible part of nature, not reducible to physical processes and not derivable from them through current scientific methods. He suggests this teleology must have been governed by currently unknown teleological laws, and that we should embark on a search for more examples of teleological processes in nature.

Nagel’s book was widely criticised for (allegedly) undermining the authority of science or giving tacit aid to intelligent design, despite Nagel’s emphatic and conclusive rejection of any idea of divine intelligence as a believable explanation for the evolution of consciousness (or for anything else in the natural world). Some also accused him of insufficient expertise in the relevant scientific disciplines. Yet others, including some sympathetic critics, defended Nagel for highlighting foundational blind spots in current theories. Whether or not one agrees with Nagel’s conclusions, Mind and Cosmos gave voice to a growing sense that something essential is missing from the current picture.
What, you might ask, is the Cambrian Explosion doing on this list? What has it got to do with consciousness? The official scientific answer is: “Nothing, as far as we can tell.” And yet the answer should be blindingly obvious.
Just short of 540 million years ago, the fossil record reveals a remarkable and rapid diversification of multicellular animal life. Within a relatively short time, virtually all major animal phyla appeared, alongside numerous other lineages that ultimately represented evolutionary dead ends. This event marks one of the most significant and perplexing episodes in the history of life on Earth, yet its underlying causes remain a subject of intense debate and unresolved mystery.
The suddenness and breadth of morphological innovation observed during the Cambrian Explosion defy straightforward explanation within the standard framework of gradual evolutionary processes. Multiple hypotheses have been proposed to account for this unprecedented diversification – rising oxygen levels, the evolution of developmental gene networks, ecological arms races triggered by the advent of predation – though consensus remains elusive.
The Cambrian Explosion remains an unresolved enigma. No single factor, or combination thereof, has achieved widespread acceptance as the definitive cause.
The problem of free will is a longstanding dilemma concerning the apparent conflict between human agency and the causal structure of the universe. At its core lies the question: How can we be genuinely free agents if our actions are the outcome of deterministic or random processes? This issue sits at the intersection of metaphysics, ethics, philosophy of mind, and empirical science, raising doubts about moral responsibility, meaning, dignity, and perhaps even personal identity.
The classic formulation of the problem arises from the apparent incompatibility of three claims, each of which seems independently plausible: that every event, including every human action, is the outcome of prior causes operating deterministically or randomly; that genuine freedom requires our actions to be neither determined nor random; and that we are, at least sometimes, morally responsible for what we do.
Together, these claims appear logically inconsistent. If determinism is true, it is unclear how agents can be said to "choose", and indeterministic randomness is equally incapable of grounding responsibility. The intuition that we are morally responsible is difficult to reconcile with either possibility. There are three principal philosophical responses:
1. Compatibilism
According to compatibilists, free will and determinism are not in conflict. Freedom consists in acting according to one's desires and intentions without external coercion, even if those desires are themselves causally determined. This view preserves moral responsibility and practical autonomy within a deterministic framework. However, critics (notably Kant, who called it a “wretched subterfuge”) charge that it redefines "freedom" in a way that evades the real question: whether human beings originate their actions in any ultimate sense. To many incompatibilists, compatibilism appears as a semantic manoeuvre that bypasses the metaphysical problem rather than resolving it.
2. Hard Determinism
Hard determinists accept the reality of determinism and reject the existence of free will. On this view, human beings are biological machines and free will is an illusion – perhaps a psychologically useful one, but not ontologically real. The implications are profound: if individuals cannot help but act as they do, then traditional notions of praise, blame, and justice must be rethought. Some hard determinists advocate replacing retributive systems with therapeutic or corrective approaches. Many find such a view deeply unsettling.
3. Libertarianism
Libertarian theories assert that free will is real and therefore determinism must be false. Some actions, on this view, are uncaused or are caused by non-physical agents operating outside the chain of physical causation. Libertarianism preserves moral responsibility by insisting that agents are the ultimate originators of their choices. Yet this position faces serious difficulties: it posits metaphysical entities that cannot be empirically verified, and often struggles to provide a coherent account of how non-physical causation operates.
Empirical science complicates matters further. Classical physics operates on a fully deterministic model, while QM introduces indeterminacy, yet randomness alone does not confer agency. Neuroscientific studies, such as the well-known Libet experiments, suggest that decisions may be initiated in the brain before subjects become consciously aware of them, raising questions about the causal role of conscious will. If conscious deliberation is post hoc, then the traditional conception of the self as a locus of free agency is called into question.
Thomas Nagel offers a nuanced articulation of the problem. In The View from Nowhere (1986), he identifies a deep conflict between the subjective standpoint – the lived experience of choosing freely – and the objective standpoint – which views human action as part of a causally determined physical system. For Nagel, this conflict generates what he calls a problem of autonomy: how can we be morally responsible if our actions are ultimately the result of forces beyond our control? Nagel rejects compatibilism as insufficient, not because it is logically flawed, but because it fails to address the deeper existential intuition that true agency must involve some form of origination. He is equally unsatisfied with libertarianism, which seems to offer no intelligible account of how indeterminacy could ground meaningful control. In the end, Nagel acknowledges the impasse:
“I believe that the compatibility question has not been properly formulated, and that nothing currently on the table resolves it. I cannot even tell whether the truth lies in one of the existing views or in some alternative not yet imagined. My impulse is to say something that I know is not really coherent: that we are somehow responsible for what we do, even though it is ultimately a matter of luck.”
How does the brain integrate information from separate neural processes into a unified, coherent experience? In cognitive neuroscience, it is well-established that different features of a perceptual scene (such as colour, shape, motion, depth, and spatial location) are processed by specialised and anatomically distinct areas of the brain. The visual cortex alone is divided into multiple subregions, each tuned to specific aspects of the visual field. Despite this functional fragmentation, our conscious experience presents itself as a seamless whole: we see a red apple as red, round, and located there, not as a disjointed collection of independently processed features. What binds disparate neural signals into a single phenomenological object? The problem becomes especially acute when we consider that no central neural hub has been identified that performs this integrative function. Even the so-called “global workspace” theories, while offering a framework for large-scale integration, do not yet explain how binding occurs at the level of individual perceptual objects or moments.
Various theories have attempted to resolve the binding problem, from temporal synchrony of neural firing to attention-based feature integration, but none has achieved consensus.
The Binding Problem lies at the nexus of subjective unity and objective multiplicity – a theme that echoes in both the problem of consciousness and the free will debate. The brain appears to be a distributed, parallel-processing system with no single control centre. And yet subjective awareness seems to operate with remarkable cohesion. As with the problem of agency, the binding problem may point to an explanatory gap between third-person functional accounts and first-person phenomenology, making it more than just a neurocomputational puzzle. And as Chalmers has pointed out, such questions quickly bleed into the Hard Problem.
The Frame Problem is a fundamental challenge in artificial intelligence (AI), cognitive science, and the philosophy of mind. It concerns how a cognitive system – whether artificial or biological – determines what matters when something in the world changes. More formally, it asks: How can an intelligent agent efficiently update its knowledge or make decisions without needing to consider every possible consequence of an action or event?
At its core, the frame problem is about relevance determination: when a change occurs in the environment, what needs to be re-evaluated, and what can safely be ignored?
Imagine a robot trying to leave a room. As it approaches the door, the door unexpectedly swings open. Now the robot must decide which of its beliefs about the world are still valid, which need to be re-examined, and which can safely be left alone.
Even this seemingly simple scenario reveals the difficulty: if the robot tries to compute every possible side-effect of every event, it becomes paralysed by combinatorial explosion. But if it ignores too much, it risks missing critical changes that affect its goals. The robot must strike a delicate balance between comprehensiveness and efficiency, and doing so requires a kind of contextual discernment that machines generally lack.
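To make the combinatorial point concrete, here is a minimal sketch in Python. Everything in it is a hypothetical toy – the belief set, the “door_opens” event, and the hand-written relevance map are invented for illustration, not drawn from any real robotics framework. It simply contrasts an agent that re-checks every combination of its beliefs after any event with one that consults a pre-supplied map of what the event can affect.

from itertools import combinations

# The robot's current world model: a handful of toy beliefs (hypothetical).
beliefs = {
    "door_closed": True,
    "path_to_door_clear": True,
    "battery_level_ok": True,
    "room_temperature_normal": True,
    "map_of_room_accurate": True,
}

def naive_update(beliefs, event):
    # Ignore what the event actually was and re-check every belief and every
    # combination of beliefs. The number of checks grows as 2^n - 1, which is
    # already 31 for five beliefs and hopeless for a realistic world model.
    checks = 0
    for r in range(1, len(beliefs) + 1):
        for _combo in combinations(beliefs, r):
            checks += 1  # stand-in for an expensive consistency check
    return checks

def filtered_update(beliefs, event, relevance):
    # Re-check only the beliefs that a pre-supplied relevance map says this
    # event can affect; everything else is assumed to persist unchanged.
    return [b for b in beliefs if b in relevance.get(event, set())]

# Hypothetical relevance knowledge: the door opening bears on the door state
# and the robot's path, but not on the temperature or the battery.
relevance = {"door_opens": {"door_closed", "path_to_door_clear"}}

print(naive_update(beliefs, "door_opens"))                # 31 checks
print(filtered_update(beliefs, "door_opens", relevance))  # ['door_closed', 'path_to_door_clear']

The sleight of hand, of course, is that the relevance map is hand-written: specifying in advance which events bear on which beliefs is exactly what the frame problem demands, and hard-coding it merely pushes the question back a level.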
Humans, by contrast, handle such situations effortlessly. We intuitively know which details are relevant and which can be safely ignored. When the door opens, we do not pause to reconsider the molecular state of the air. Our brains apply massive, unconscious filtering and prioritization, informed by embodied experience, semantic memory, and shared world models. This seemingly trivial ability is actually a hallmark of human intelligence.
The frame problem reveals deep cracks in the foundations of symbolic AI, where knowledge and rules are encoded explicitly. It also persists, in subtler form, in machine learning and large language models, which often struggle with context shifts, long-range dependencies, and implicit relevance. Even with vast data and computation, these systems frequently fail to distinguish the essential from the incidental, and can be easily misled by small perturbations or ambiguous instructions. Beyond engineering, the frame problem points to something philosophically profound: it suggests that intelligence is not merely rule-following or pattern recognition, but fundamentally involves the selective framing of the world – a way of constructing and constraining relevance based on goals, attention, embodiment, and meaning. This touches on unresolved questions in epistemology, perception, and the philosophy of mind. In this way, the frame problem serves as a microcosm of the greater challenge of artificial general intelligence (AGI). Until machines can frame situations appropriately, they will remain brittle, narrow, and alien in comparison to human minds. In the present context the real mystery here is not why machines suffer from the frame problem, but why humans do not.
Memory, as we subjectively experience it, is the foundation of continuity, identity, and meaning. It seems to refer to fixed, determinate events in the past: “what happened” in our personal history or in the shared history of the world. We regard memories as records: enduring physical traces in the brain encoding the results of prior events, which can be reliably accessed and brought into present awareness. But how does this apparent solidity of the past emerge from a quantum universe in which events are, in some sense, only determined upon observation?
The universe is governed by QM, a theory in which the past is not always a fixed sequence of events, but rather a set of potentialities governed by a wave function. In many interpretations of quantum theory, especially the Copenhagen Interpretation (CI) and the Many Worlds Interpretation (MWI), the notion of a definite outcome only arises upon measurement or decoherence. Until then, systems exist in superpositions of multiple states. If this indeterminacy is fundamental, then the classical assumption that the past is an immutable sequence of determinate events becomes problematic. This raises the question: How do classical memories, apparently fixed and unambiguous, emerge from a quantum substrate that is inherently probabilistic and observer-dependent?
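The point can be stated in one line of standard textbook notation (nothing here is specific to this essay). A system – in principle, a recorded past – can exist in a superposition of two histories A and B:

\[ |\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

Only upon measurement does a definite outcome arise, A with probability |α|² or B with probability |β|²; before that, the formalism by itself does not single out either history as “what happened”.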
Several scientific and philosophical tensions come into play here.
This problem intersects with the Measurement Problem, and with broader metaphysical questions about the nature of time. In some interpretations of QM (e.g., transactional or retrocausal models), the past is not wholly fixed until "closed" by future interactions. This opens the door to the radical possibility that the past itself may be observer-dependent or co-constructed with the present. From this angle, memory is not merely a passive recording device but an active process of world-building – a way in which consciousness participates in the creation of a determinate classical reality.
The Memory Stabilisation Problem is not just a problem of neuroscience or physics. It is a philosophical challenge to our basic assumptions about time, observation, and objectivity. Memory, in this view, may be not only a record of the past, but also a mechanism by which a conscious universe writes its own story.
General anaesthesia presents a profound mystery at the intersection of neuroscience, pharmacology, and the philosophy of mind. Despite over a century of clinical use, the mechanism by which anaesthetics induce loss of consciousness remains poorly understood. This is especially surprising given the chemical diversity of substances that act as general anaesthetics: they include everything from simple inert gases like xenon to larger, structurally complex molecules such as propofol or ketamine.
All these compounds, despite their wide variation in structure and interaction profiles, have the capacity to completely suppress consciousness in a controlled and reversible way. Crucially, this suppression is dose-dependent, fully reversible, and leaves the brain physically unharmed.
This raises an important and still-unanswered question: What common feature of brain function is being targeted by such chemically disparate substances to produce such a specific and dramatic phenomenological effect? The fact that consciousness can be “turned off” so cleanly and reliably, without widespread damage, implies that consciousness is functionally delicate (sensitive to precise patterns or states of activity) but also biochemically robust, capable of returning unharmed once the anaesthetic wears off. This duality challenges assumptions in both neuroscience and cognitive science. Standard accounts typically invoke modulation of GABAergic inhibition, disruption of thalamocortical connectivity, or altered synaptic transmission, but none of these mechanisms fully explains the selectivity, reversibility, and universality of anaesthetic effects across such chemically varied agents. Moreover, the anaesthetic mechanism problem hints at a deeper issue: if consciousness can be suppressed by such minimal interventions, perhaps its emergence depends on extremely fine-tuned dynamical or informational conditions in the brain – conditions not yet captured by mainstream theories of cognition.
The final and perhaps the most intractable problem on this list is not physical or even biological. It is existential. It concerns the status of meaning and value in a scientific universe. In Mind and Cosmos, Nagel argued that the prevailing materialist paradigm of reductive physicalism fails to account for the very capacities that make scientific understanding possible in the first place: consciousness, cognition, and the perception of value.
According to standard science, the universe consists of law-governed particles and fields, impersonal and indifferent. But we, who are made of those particles, find ourselves asking: Does anything truly matter? Are truth, goodness, and beauty real? What, if anything, ought we to do?
For Nagel, these are not “soft” philosophical distractions from the real business of physics – they are central features of reality. He argues that the emergence of consciousness, intentionality, and normativity cannot be explained by a worldview that treats only physical facts as ontologically fundamental. If value and meaning are real – if they exist – then they must be part of the natural order, not afterthoughts or illusions.
Yet the current scientific picture offers no place for such things. Evolutionary biology explains moral judgment and aesthetic feeling as adaptive functions; neuroscience reduces rationality to neural computation; cosmology regards the entire history of life and mind as a byproduct of blind initial conditions. In this view, truth, goodness, and beauty are accidental epiphenomena, lacking objective status. As Nagel puts it:
“It is difficult to make sense of the idea that life is something that could be fully understood by chemistry and physics alone.”
This leads to a deeper dilemma: if meaning and value are illusions, then the scientific pursuit of truth – itself a value – is undermined; if they are real, then the materialist ontology that excludes them is incomplete.
The problem of meaning and value thus undermines the coherence of the materialist framework from within. If we are the kinds of beings who can perceive meaning, recognise value, and pursue truth, then any complete account of the universe must include these capacities not as anomalies, but as central explanatory data. As with consciousness and the mind-body problem, this challenge points to a crisis in the current metaphysical foundations of science. Nagel calls for a radical rethinking: a new conception of nature that integrates mind, value, and reason into its core ontology, rather than attempting to explain them away. Until we can explain how meaning and value are possible, and how they fit into the architecture of reality, our picture of the universe remains incomplete.
Part 3: The Two Phase Cosmology (2PC)