06/07/2025
Introduction
Part 1: Cosmology in crisis: the epicycles of ΛCDM
Part 2: The missing science of consciousness
Part 3: The Two Phase Cosmology (2PC)
Part 4: Synchronicity and the New Epistemic Deal (NED)
Copyright 2025 Geoff Dann. This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
DOI: 10.5281/zenodo.15823610
Zenodo link for a PDF of the whole series of articles as a single document
The Lambda Cold Dark Matter model (ΛCDM) stands today as the foundational framework of modern cosmology. It is the product of a century-long journey through some of the most transformative discoveries in physics, from general relativity (GR) and cosmic expansion to the faint afterglow of the Big Bang. Despite its current prominence and dominance, ΛCDM was not a theory born fully formed. Rather, it is the culmination of successive refinements and incorporations, each driven by new observations that reshaped our understanding of the cosmos.
The story begins in the early 20th century, when Albert Einstein formulated his general theory of relativity (1915), providing a new geometric understanding of gravity and spacetime. Initially resistant to the idea of a dynamic universe, Einstein introduced the cosmological constant (Λ) to maintain a static cosmos. But within a few years, that view would be upended. In the 1920s, Edwin Hubble's observations of distant galaxies revealed an unmistakable pattern: galaxies appeared to be receding from us, and the farther away they were, the faster they moved. This discovery of cosmic expansion gave empirical support to dynamic models of the universe, particularly those derived from Alexander Friedmann’s solutions to Einstein’s equations. Einstein abandoned his cosmological constant, famously calling it his “greatest blunder,” though the idea would return decades later in a new guise.
The next milestone came in 1965 with the accidental discovery of the cosmic microwave background (CMB) radiation by Arno Penzias and Robert Wilson. This relic glow from the early universe provided striking evidence for the Big Bang model, and launched precision cosmology as an observational science. Over the following decades, increasingly detailed measurements of the CMB, particularly from satellite missions like COBE (1989), WMAP (2001), and Planck (2009), offered a wealth of information about the universe’s age, composition, and geometry. Meanwhile, other observations introduced new puzzles. In the 1970s, astronomers found that the rotational speeds of galaxies did not match expectations based on visible matter alone. This led to the postulation of dark matter: an unseen form of mass that exerts gravitational influence without emitting or absorbing light. Early candidates included massive astrophysical objects or exotic particles, but whatever its nature, cold dark matter (non-relativistic and slow-moving) became a necessary ingredient to explain large-scale structure formation. The final key component entered the picture in the late 1990s, when two independent teams observing distant Type Ia supernovae discovered that the universe’s expansion was not slowing down, but accelerating. This stunning result revived interest in Einstein’s cosmological constant, now interpreted as “dark energy”: a mysterious, repulsive force driving the acceleration. Incorporating dark energy (Λ) alongside cold dark matter (CDM), cosmologists arrived at the model that now defines the standard cosmological paradigm: ΛCDM.
In its current form, ΛCDM describes a universe composed of approximately 5% ordinary matter, 25% cold dark matter, and 70% dark energy. It requires that there was a hot Big Bang, an early period of near-instantaneous inflation, the formation of light elements through primordial nucleosynthesis, the release of the CMB, and the gradual formation of galaxies and clusters via gravitational collapse. The model is relatively simple, and has proved effective at fitting a broad array of cosmological data.
However, even the most successful models can eventually fall apart. ΛCDM has become the benchmark for cosmological theory, but it has also become the starting point for deeper questions, several of which are now seriously compromising the model's explanatory power.
The previous section traced the rise of ΛCDM through the intertwined histories of GR, observational astronomy, and the discovery of cosmic expansion, dark matter, and dark energy. However, it made no mention of quantum mechanics (QM), even though this is the other great pillar of 20th-century physics. This omission reflects not just a historical divergence in scientific development, but a deep conceptual rift that haunts contemporary cosmology.
Like GR, QM emerged in the early 20th century, but it spoke a radically different language. Where relativity describes smooth, continuous spacetime shaped by mass and energy, QM deals in discontinuities, probabilities, and the strange behaviour of particles at microscopic scales. The wave-particle duality, the uncertainty principle, superposition, and entanglement painted a picture of nature that was probabilistic, non-local and utterly foreign to the classical intuition behind gravity and cosmic geometry. Despite their shared birth, the two theories developed along parallel but disconnected tracks. QM revolutionised atomic, nuclear, and particle physics, while GR governed astrophysics, black holes, and cosmology. Each theory was extraordinarily successful in its own domain, yet stubbornly incompatible with the other.
This divide is perhaps most evident in ΛCDM. While the model depends critically on QM in its earliest moments, it quickly reverts to a classical framework. Once the inflationary field decays, the evolution of the universe is described almost entirely using GR and classical fluid dynamics, supplemented by particle physics inputs (cross-sections, decay rates) that are themselves treated semi-classically. This methodological division allowed cosmologists to make extraordinary progress without resolving the foundational conflict between quantum theory and gravity. In a sense, the success of ΛCDM has been both a triumph and a trap: it permitted the construction of a comprehensive cosmological model without properly integrating the underlying theories.
Nonetheless, QM is present (quietly) in several key areas of standard cosmology:
Despite these roles, QM remains something of a hidden scaffolding in ΛCDM: necessary for certain inputs, but excluded from the overall architecture. The model does not incorporate quantum uncertainty in the evolution of spacetime itself, nor does it offer an account of quantum measurement, decoherence, or the transition from potential to actual cosmic histories. These omissions lie at the heart of the model’s growing empirical inadequacies.
The absence of QM in the core of cosmological modelling reflects a deeper problem: we do not have a theory of quantum gravity. No one knows how to consistently describe spacetime itself as a quantum entity. Efforts to do so, such as string theory, loop quantum gravity and causal set theory, remain incomplete and speculative. As a result, cosmology has proceeded with a pragmatic truce: use QM for matter and fields, and GR for spacetime. This truce has worked...until it hasn’t.
As anomalies mount – the Hubble tension, the nature of dark matter and dark energy, the unresolved cosmological constant problem and many more – it becomes increasingly clear that something revolutionary is needed. We need a model that unites QM with spacetime evolution, not just at the Planck scale, but in a way that informs how we understand observation, measurement, and cosmic history at every level.
If ΛCDM is a classical skeleton with a few quantum joints, the next paradigm may require a fully quantum nervous system – one in which observation, information, and probability are not just calculational tools, but ontological ingredients. The missing thread of QM may in fact be the central thread, and its proper integration with cosmology may reveal a universe far stranger, but also far more unified, than ΛCDM allows. The rest of Part One of The Reality Crisis is an audit of the anomalies plaguing ΛCDM.
The metaphysical interpretations of QM represent competing philosophical responses to the Measurement Problem (MP), which arises from a fundamental discrepancy between the formalism of quantum theory and our empirical experience of the world. Specifically, the problem emerges from the mismatch between:
(a) the mathematical structure of QM, which governs the evolution of the wave function via the Schrödinger equation: a linear, deterministic process that yields an ever-expanding superposition of possible outcomes; and
(b) the apparent collapse of these possibilities into a single, definite outcome upon measurement, as consistently observed in empirical reality.
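To make this mismatch concrete, here is a minimal von Neumann-style measurement sketch in Python/NumPy (the amplitudes and the CNOT-type coupling are illustrative choices, not anything prescribed by the argument above). The unitary evolution of (a) only ever entangles the apparatus with the system; it never produces the single definite outcome described in (b).

```python
import numpy as np

# A minimal, illustrative von Neumann measurement model.
# System qubit starts in a superposition; apparatus starts in a "ready" state.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # arbitrary illustrative amplitudes
system = np.array([alpha, beta])           # a|0> + b|1>
ready = np.array([1.0, 0.0])               # apparatus pointer "ready"

# Joint initial state in the 4-dimensional product space
joint = np.kron(system, ready)

# Unitary measurement interaction: the apparatus pointer copies the system
# state (a CNOT-type coupling). This is the Schrödinger-type evolution of (a).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

final = U @ joint
print(final)   # [alpha, 0, 0, beta]: a|0>|pointer-0> + b|1>|pointer-1>

# The unitary output is still a superposition of *both* pointer readings,
# yet an actual observation, as in (b), yields exactly one of them,
# with Born-rule probabilities |alpha|^2 and |beta|^2.
print(abs(final[0])**2, abs(final[3])**2)  # 0.3, 0.7
```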
Any interpretation of QM must therefore explain how – or whether – this transition from superposition to actuality occurs. The possible responses can be classified into three broad categories, which together constitute what I term the quantum trilemma. Each of these approaches offers a distinct resolution to the Measurement Problem, yet each also encounters deep conceptual or empirical difficulties.
Physical or objective collapse theories (PC) propose that the wave function's evolution is not purely unitary, but punctuated by real, physical collapse events. The most familiar version is the Copenhagen Interpretation (CI), which holds that measurement induces a transition from superposition to a single outcome. However, it infamously refuses to specify the mechanism by which this occurs. More recent theories, such as GRW and other physical collapse models, attempt to formalise the process by introducing stochastic or spontaneous collapse dynamics.
The promise of these models is empirical testability. The problem, however, is that no such collapse mechanism has been observed. To date, all experimental evidence remains consistent with unitary (collapse-free) evolution of the wave function. As a result, these theories rely on hypothetical processes that remain both empirically inaccessible and theoretically underdetermined, rendering them speculative add-ons rather than explanatory breakthroughs.
The second class of theories, derived from the mathematical work of John von Neumann in 1932, proposes that it is the conscious observer who causes the wave function to collapse – consciousness causes collapse (CCC). Von Neumann’s formalism allowed the "cut" between observer and observed to be placed arbitrarily, but his conclusion that only a conscious mind can complete the measurement process means that consciousness is not just involved in observation, but is a necessary condition for reality to become definite. This explicitly contradicts metaphysical materialism, since the observer is only being proposed in the first place because it is presumed to be external to the entire physical (quantum) system being measured. It is worth noting that von Neumann was no mystic – he removed the “collapse event” from the physical system because he had no means of modelling the mathematics if it remained within it, not because he had a weakness for woo-woo.
While CCC theories bypass the need for an undefined physical mechanism, they open a different Pandora’s box: How did measurement occur in a pre-conscious universe? In answer to this question, proponents typically invoke panpsychism or idealism, but these come with their own unresolved theoretical burdens. Most seriously, they entail that minds can exist in the absence of brains – something which is very hard to square with our incontestable knowledge that damage to brains directly causes corresponding damage to the contents of consciousness. It doesn't switch it off (as general anaesthetics do), but it degrades it, and it does so in highly predictable ways. How can this be accounted for if brains aren't necessary for consciousness? This problem is serious enough to have ensured that CCC theories have never made any notable inroads from the fringes.
The third major approach, introduced by Hugh Everett III in 1957, rejects the notion of collapse altogether. Instead, the Many Worlds Interpretation (MWI) asserts that the unitary evolution described by the Schrödinger equation is universally valid. All possible outcomes of a quantum event are realised, each in a distinct and non-interacting branch of an ever-expanding multiverse. In this interpretation, apparent wave function collapse is an illusion.
MWI cleanly avoids the central problems of both PC and CCC theories by eliminating the need for a collapse mechanism, but it replaces collapse with an ontologically extravagant view in which every quantum interaction spawns a multiplicity of parallel worlds. This implies that our minds are continually splitting, and that there are an infinite number of timelines where strange versions of ourselves act in random and inexplicable ways. The role of probability, which is central to quantum predictions, is also thrown into doubt: without collapse it becomes difficult to explain why observers should expect Born-rule statistics (objective randomness) at all.
These three interpretive strategies appear to exhaust the logical space of viable responses to the Measurement Problem. Either the wave function collapses or it does not. If it collapses, the cause must be either internal to the physical system (PC) or external to it (CCC). If it does not, then all outcomes must be realised (MWI). Interpretations that attempt to evade this trilemma leave key explanatory questions unanswered. For example, Bohmian mechanics tries to have its cake and eat it, or rather it tries to retain the wave function and collapse it too (the unrealised branches in Bohm's model are both real and unreal). Insofar as any such interpretation seeks to achieve completeness, it must eventually confront this trilemma.
This is where the foundations of QM have been stuck since 1957.
In QM the state of a system can be mathematically expressed in many different bases, each providing a valid description of the system’s properties. However, in actual observations, we only ever perceive outcomes corresponding to certain specific bases. This raises a fundamental question: what determines the “preferred basis” in which quantum states appear to collapse or decohere into classical reality?
The standard quantum formalism does not specify a unique criterion for selecting this basis. While the mathematics allows infinite equivalent representations, our measurements consistently align with particular bases (such as position or momentum) depending on the experimental context. The mechanism or principle that picks out this “preferred basis” remains ambiguous and is central to the measurement problem and the transition from quantum possibilities to classical actualities.
Resolving this ambiguity is crucial for understanding how the classical world emerges from quantum foundations and why our experience of reality is stable and determinate despite the underlying quantum indeterminacy.
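A toy example (Python/NumPy, with arbitrary illustrative amplitudes) makes the point explicit: suppressing the off-diagonal "interference" terms of a density matrix – which is what decoherence or collapse is supposed to accomplish – is only meaningful relative to a chosen basis, and the formalism by itself does not make that choice.

```python
import numpy as np

# One pure state, two equally valid descriptions.
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])            # illustrative amplitudes
rho = np.outer(psi, psi)                                # density matrix in the {|0>,|1>} basis
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # change to the {|+>,|->} basis

print(np.round(rho, 3))          # coherences (off-diagonals) in {|0>,|1>}
print(np.round(H @ rho @ H, 3))  # coherences in {|+>,|->} too: neither basis is special

# "Collapse"/decoherence amounts to deleting off-diagonal terms -- but that
# operation only makes sense relative to a particular basis:
decohered = np.diag(np.diag(rho))            # kill coherences in {|0>,|1>}
print(np.round(decohered, 3))
print(np.round(H @ decohered @ H, 3))        # coherences reappear in {|+>,|->}
```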
In recent years, a large and persistent discrepancy has emerged between independent measurements of the Hubble constant (H0) – the parameter that describes the rate of cosmic expansion. Resolving this conflict, known as the Hubble tension, is one of the most pressing challenges in contemporary cosmology. It has prompted serious reflection on the assumptions underpinning ΛCDM.
There are two primary, independent methods for determining the value of H0, and they yield results that differ by far more than their combined error bars:
The discrepancy between these two values now exceeds 5 standard deviations, which makes it unlikely to be attributable to statistical error. While some have suggested that unrecognised systematic errors may be responsible, extensive reanalyses and cross-checks using different methods and observatories have failed to eliminate the discrepancy.
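As a rough illustration of how that significance is computed (the central values and error bars below are representative assumptions for the two methods, not figures quoted in this article):

```python
# Illustrative check of the size of the Hubble tension, using representative
# central values and 1-sigma errors (km/s/Mpc); these particular numbers are
# assumptions for illustration only.
h0_cmb, err_cmb = 67.4, 0.5      # early-universe inference (CMB + LCDM)
h0_local, err_local = 73.0, 1.0  # late-universe distance-ladder measurement

tension_sigma = abs(h0_local - h0_cmb) / (err_cmb**2 + err_local**2) ** 0.5
print(f"{tension_sigma:.1f} sigma")  # ~5 sigma for these assumed inputs
```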
The Hubble tension is not merely a technical detail: it may point to a deep flaw in our understanding of the universe’s early conditions, the nature of dark energy, or the validity of the ΛCDM model itself. Possibilities under investigation include modifications to the physics of the early universe (such as early dark energy or extra relativistic species), revised models of dark matter, and even exotic proposals involving varying fundamental constants or departures from GR.
In sum, the Hubble tension represents a critical juncture in cosmology, where empirical observations strain the theoretical framework that has guided the field for decades.
Dark energy was invented to account for a surprising set of astronomical observations that contradicted long-standing expectations. A repulsive force appears to be pushing the universe apart at an accelerating rate. Today, dark energy accounts for roughly 70% of the total energy density in the standard ΛCDM model. Yet its origin, nature, and ontological status remain unknown.
For most of the 20th century, it was assumed that the expansion of the universe must be slowing down due to the mutual gravitational attraction of all matter. Cosmologists expected that measurements of distant galaxies would reveal a deceleration, indicating that the rate of expansion had been higher in the past and was gradually tapering off. However, in 1998, two independent teams published results based on observations of Type Ia supernovae at high redshift that defied this expectation. These “standard candles” were fainter than expected, suggesting that the expansion of the universe was not decelerating, but accelerating. This was an astonishing result that required a major revision of the cosmological model.
To explain the observed acceleration, cosmologists revived Einstein’s cosmological constant Λ, a term he had introduced in 1917 to achieve a static universe and later discarded after the discovery of expansion. In modern terms, the cosmological constant corresponds to a constant energy density filling space homogeneously, whose negative pressure drives accelerated expansion according to the Friedmann equations. Reintroducing this term allowed cosmologists to fit the supernova data within the framework of GR, without altering its fundamental structure.
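For reference, the mechanism is explicit in the standard acceleration (second Friedmann) equation: ä/a = −(4πG/3)(ρ + 3p/c²) + Λc²/3. Ordinary matter and radiation (p ≥ 0) make the right-hand side negative and decelerate the expansion, while a positive Λ term – equivalent to a fluid with p = −ρc² – makes it positive and drives acceleration.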
Following the supernova results, further confirmation of accelerated expansion came from independent sources. In particular:
This implied that some form of energy – smoothly distributed and gravitationally repulsive – must provide the remaining 70%. Including a cosmological constant or similar “dark energy” component made the model consistent with:
Thus, dark energy became a necessary ingredient not only to explain supernova dimming, but to maintain internal consistency across the entire cosmological dataset.
While dark energy plugs the particular hole it was invented to plug, it immediately raises profound conceptual problems. The observed value of the cosmological constant is extremely small, yet nonzero: ρ_Λ ∼ 10^−47 GeV^4.
Why should this energy density be so small compared to the Planck scale, and yet dominate the universe precisely at the present epoch? This is the so-called coincidence problem: why are the energy densities of matter and dark energy of the same order now, even though they evolve differently over time?
These problems suggest that dark energy, if real, is not merely a straightforward parameter in Einstein's equations, but a sign of deeply unexplained physics, possibly involving new fields, modified gravity, or unknown quantum effects. Various dynamic alternatives to the cosmological constant have been proposed, such as quintessence, phantom energy, or k-essence, but none have received empirical confirmation.
Dark energy has no confirmed microphysical interpretation. It does not clump, does not interact with light or matter in any measurable way, and seems to have constant or nearly constant energy density throughout spacetime. Its effects are purely gravitational, inferred from the large-scale dynamics of the universe, and its essential characteristic is the generation of negative pressure. But what substance or mechanism produces this effect remains unknown. Indeed, it is not clear whether dark energy represents:
Dark energy was invented to explain why the universe is not slowing down but speeding up in its expansion, contrary to long-held expectations. Its necessity is based on:
Although a cosmological constant provides a good empirical fit, its extreme smallness and fine-tuning raise major theoretical questions. The dark energy problem is not only one of unknown identity, but one of missing explanation for the largest-scale dynamics of the cosmos.
Among the deepest unresolved puzzles in theoretical physics and cosmology is a staggering mismatch between theoretical prediction and observational measurement that calls into question our understanding of both quantum field theory and gravitation. It is not merely a technical inconsistency, but a foundational conflict that has resisted solution for decades.
In Einstein's field equations of GR, the cosmological constant Λ appears as a term that counteracts the attractive force of gravity and drives the accelerated expansion of spacetime. Observations over the past few decades (including high-redshift supernovae, baryon acoustic oscillations, and the CMB) have converged on the conclusion that the universe's expansion is indeed accelerating, and that this acceleration can be accurately modelled by including a small, positive value of Λ.
In natural units, the observed value of the cosmological constant is approximately Λ_obs ∼ (10^−3 eV)^4. This value corresponds to an energy density of about ρ_Λ ∼ 10^−47 GeV^4, which is small (but nonzero) and contributes about 70% of the total energy budget of the present-day universe.
From the perspective of quantum field theory (QFT), the vacuum is not empty. Instead, it teems with virtual particles and fluctuating fields, whose zero-point energies should contribute to the vacuum energy density. If one naively sums the zero-point energies of all known quantum fields up to the Planck scale – where quantum gravity effects are expected to become significant – one obtains a predicted vacuum energy of the order ρ_vac^QFT ∼ M_Pl^4 ∼ (10^19 GeV)^4 = 10^76 GeV^4.
Even with more conservative cutoffs at the electroweak or QCD scale, the predicted value remains many orders of magnitude too large. The mismatch between the theoretical prediction and the observed value is often quoted as being between 60 and 120 orders of magnitude, depending on the energy scale at which the calculation is cut off. This is by far the largest known discrepancy between theory and observation in the history of physics.
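A quick back-of-envelope sketch reproduces the scale of the mismatch (all inputs are rough, standard order-of-magnitude figures in natural units):

```python
import math

# Back-of-envelope version of the mismatch described above, in GeV.
rho_obs = 1e-47  # observed vacuum energy density, GeV^4

cutoffs = {
    "Planck cutoff (~1e19 GeV)": 1e19,
    "electroweak cutoff (~1e2 GeV)": 1e2,
}
for name, cutoff in cutoffs.items():
    rho_pred = cutoff**4  # naive zero-point estimate scales as cutoff^4
    print(f"{name}: ~{math.log10(rho_pred / rho_obs):.0f} orders of magnitude")
# Planck cutoff: ~123 orders of magnitude; electroweak cutoff: ~55.
```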
The cosmological constant problem is not simply that the predicted and observed values are different. It is that there is no known symmetry or mechanism in QFT or GR that would cancel or suppress the vacuum energy to the observed tiny value, without requiring extreme fine-tuning. In order for the total cosmological constant to match observations, one must postulate a bare cosmological constant that nearly cancels the enormous vacuum energy: Λ_eff = Λ_bare + 8πG ρ_vac ≈ (10^−3 eV)^4. This cancellation must occur to at least 120 decimal places, with no physical explanation for such a precise tuning.
Efforts to solve the cosmological constant problem have ranged from supersymmetry (which cancels bosonic and fermionic contributions), to dynamical dark energy models (e.g., quintessence), to modifications of gravity at large scales, to anthropic reasoning within the string theory landscape and multiverse frameworks. Despite decades of work, no consensus has emerged.
The cosmological constant problem lies at the intersection of quantum theory, gravitation, and cosmology. Its persistence signals a fundamental incompleteness in our current understanding of the vacuum, the nature of spacetime, and the interface between microphysics and cosmological structure. The problem has even been called the “worst theoretical prediction in the history of physics,” and its resolution is widely regarded as a key to progress in unifying QM and GR.
Dark matter has never been directly detected, yet it is now thought to comprise approximately 85% of the matter content of the universe and about 27% of its total energy density. The hypothesis of dark matter was not introduced for a single reason, but rather emerged as a unifying explanation for multiple independent observational anomalies across different astrophysical and cosmological scales. In each case, visible (baryonic) matter alone proved insufficient to account for the observed gravitational effects.
The original and most famous evidence for dark matter came from the study of spiral galaxy rotation curves. According to Newtonian dynamics, the rotational velocity v(r) of stars orbiting at a distance r from the galactic centre should decrease with distance once outside the bulk of the visible mass, roughly following v(r) ∝ 1/√r.
However, beginning with the work of Vera Rubin and others in the 1970s, it was found that rotation curves tend to flatten at large radii: stars and gas far from the galactic centre orbit at roughly constant velocities, rather than slowing down. This observation suggests the presence of an extended, invisible halo of mass surrounding each galaxy, whose gravitational influence maintains the high orbital speeds. The discrepancy between the mass inferred from starlight and the mass required to explain the rotation curves is substantial – typically an order of magnitude or more.
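A schematic numerical comparison makes the discrepancy vivid; every number below is an illustrative placeholder rather than data for any particular galaxy:

```python
import numpy as np

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1e41                 # kg, illustrative visible mass (~5e10 solar masses)
kpc = 3.086e19                   # metres per kiloparsec
r = np.array([2, 5, 10, 20, 30]) * kpc   # galactocentric radii

# Newtonian expectation outside the visible mass: v(r) falls off as 1/sqrt(r)
v_keplerian = np.sqrt(G * M_visible / r)

# What is actually observed (schematically): a roughly flat curve
v_flat = 2.0e5                   # ~200 km/s, a typical flat-curve speed

# The enclosed mass implied by a flat curve grows linearly with radius
M_implied = v_flat**2 * r / G

print(np.round(v_keplerian / 1e3))         # predicted speeds in km/s, declining with r
print(np.round(M_implied / M_visible, 1))  # implied-to-visible mass ratio, growing with r
```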
Earlier still, in the 1930s, Fritz Zwicky observed that galaxies in the Coma Cluster were moving too rapidly to be gravitationally bound if the cluster contained only the mass visible in stars. Applying the virial theorem to estimate the total mass required to keep the cluster from dispersing, he found that the luminous matter fell short by a factor of up to 100. This mass discrepancy in galaxy clusters was later confirmed through:
GR predicts that massive objects curve spacetime and thus bend the paths of light – a phenomenon known as gravitational lensing. When distant galaxies or quasars are viewed through massive intervening structures like galaxy clusters, the degree of lensing observed allows cosmologists to infer the total mass along the line of sight.
In many such cases, especially with strong and weak lensing maps, the lensing mass significantly exceeds the luminous mass, reinforcing the existence of large quantities of invisible mass. Importantly, gravitational lensing provides a direct measure of total mass, independent of dynamical assumptions.
One of the most striking pieces of evidence comes from observations of colliding galaxy clusters, such as the Bullet Cluster (1E 0657-56). In these systems, the visible baryonic matter slows and interacts during the collision, while the gravitational mass, inferred from lensing, appears to pass through relatively undisturbed. The spatial offset between the baryonic mass and the total gravitational mass strongly suggests the presence of non-collisional mass, consistent with dark matter that interacts gravitationally but not electromagnetically. Similar signatures have been found in other merging clusters.
Another key motivation for dark matter arises from the need to explain the formation of cosmic structure: the growth of density fluctuations into galaxies, clusters, and filaments in the early universe. The standard model of cosmology assumes that the tiny fluctuations observed in the CMB grew over billions of years into the structures we observe today. However, calculations show that baryonic matter alone, coupled to radiation before recombination, cannot grow fast enough to account for the observed structure, especially on small scales. Dark matter (being non-baryonic and non-interacting with radiation) can begin clumping earlier, seeding gravitational wells into which baryons later fall. Simulations of structure formation match observations only when dark matter is included.
Precision measurements of the CMB have revealed tiny fluctuations in temperature across the sky, corresponding to density variations in the early universe. The detailed angular power spectrum of these anisotropies depends sensitively on the composition of the universe. The best-fit models to CMB data require a significant component of cold, non-baryonic dark matter to reproduce the relative heights and positions of the acoustic peaks. This result is independent of galaxy dynamics and provides a cosmological-scale confirmation of dark matter.
Dark matter was not proposed to solve a single problem, but to provide a coherent explanation for:
Despite its success in explaining these phenomena within the ΛCDM framework, the true nature of dark matter remains unknown. Candidates range from weakly interacting massive particles (WIMPs) to axions, sterile neutrinos, and more exotic possibilities. Decades of direct detection experiments, collider searches, and astrophysical probes have yet to yield definitive evidence for its particle identity.
The fundamental constants of nature appear to be exquisitely calibrated to allow for the existence of life. Even minuscule deviations in these constants would leave the universe without stars, stable matter or chemistry. This observation raises a pressing metaphysical and scientific question: Why is the universe so precisely set up to allow life? This issue was famously articulated by cosmologist Martin Rees, who identified six dimensionless or normalised constants that collectively determine the structure, evolution, and large-scale features of the universe. The life-permitting range for each is remarkably narrow:
The apparent fine-tuning of these constants invites two broad categories of explanation:
The fine-tuning problem remains a live philosophical and scientific challenge, inviting us to reconsider not only the structure of physical laws but the epistemic framework within which we interpret them. If the universe is fundamentally contingent at its roots – if its life-permitting structure is not derivable from first principles – then the explanatory burden may ultimately fall outside of physics itself.
These fine-tuned constants are only the start. We should really be talking about the fine-tuning problems, for there are a very large number of them, as we shall see in the following sections.
Observational data from the CMB radiation, combined with the ΛCDM model, indicates that the observable universe began in a condition of extreme thermodynamic order: despite being hot and dense, it was gravitationally smooth and remarkably homogeneous. This initial low entropy is crucial: it underpins the second law of thermodynamics as applied to cosmology, which is in turn connected to the arrow of time and the emergence of complexity. Without such a low-entropy start, the universe would have been dominated by gravitational collapse, or would have lacked the thermodynamic gradients necessary for the evolution of stars, planets, and life. However, from the standpoint of statistical mechanics, such a configuration is overwhelmingly improbable. Given the phase space of all possible microstates compatible with the macroscopic constraints of the early universe, high-entropy (disordered) states vastly outnumber low-entropy ones. Yet our universe appears to have emerged from the tiniest corner of that phase space.
Why did the universe begin in such an improbable state? The laws of physics do not require a low-entropy boundary condition. Time-symmetric dynamical laws, like those governing GR or the Schrödinger equation, are compatible with universes beginning in high-entropy configurations. Therefore, the low-entropy past must be regarded as a contingent feature of our universe, not an inevitable consequence of known laws.
Several responses to this problem have been proposed:
Penrose has emphasised the scale of the problem by estimating the phase-space volume of the observable universe's initial state: the probability of such a state arising by chance is roughly 1 in 10^(10^123), a number so minuscule that it effectively defies explanation by appeal to brute statistical happenstance. Thus, the low-entropy initial condition represents a fundamental conceptual challenge: any cosmological framework that aspires to explain the emergence of complexity, life, and time itself must grapple with the question of why the universe began with such extraordinary thermodynamic asymmetry.
The universe’s spatial geometry is observed to be extraordinarily close to flat: neither positively curved (closed) nor negatively curved (open), but spatially Euclidean. According to the Friedmann equations derived from GR, the degree of spatial curvature evolves dynamically over time: any deviation from exact flatness in the early universe would have rapidly amplified, leading to a universe that is either extremely curved or has already recollapsed. Yet present observations from the CMB reveal a universe that is flat to within a fraction of a percent. This implies that the total energy density of the universe at early times had to be equal to the critical density to within one part in 10^60 or better – an astonishing degree of precision. This uncanny balance is known as the Flatness Problem. It raises the question: why did the early universe begin so precariously close to a state of critical density, with no obvious mechanism to enforce such a condition? Inflation provides a solution by driving the universe toward flatness regardless of initial curvature, but it does so at the cost of introducing its own unexplained dynamics and parameters.
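The source of the tuning can be seen directly from the Friedmann equation, which gives (in units with c = 1) Ω(t) − 1 = k/(a²H²). Because a²H² decreases throughout decelerated, radiation- and matter-dominated expansion, any early deviation from Ω = 1 grows with time – roughly in proportion to t in the radiation era and to t^(2/3) in the matter era – so extrapolating the observed near-flatness of today back to the earliest times is what produces the one-part-in-10^60 requirement quoted above.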
Observations show that widely separated regions of the sky – now tens of billions of light-years apart – exhibit nearly identical temperatures and physical properties in the CMB. This is puzzling because, under standard Big Bang expansion, these regions should have been causally disconnected: no light or energy could have passed between them before recombination, and certainly not before inflation. This uniformity implies that some kind of equilibration must have occurred between these regions very early in cosmic history. But according to the standard model without inflation, there was simply not enough time for such causal contact. This contradiction is known as the horizon problem. Inflation solves it by expanding a small, initially causally connected region to encompass the entire observable universe. However, this explanatory move again depends on the correct initiation and cessation of inflation at just the right time, introducing new fine-tuning concerns of its own.
Inflation ends when the potential energy driving exponential expansion decays into ordinary matter and radiation – a process known as reheating. For the universe to resemble what we observe today, this reheating must occur with extraordinary precision in both timing and efficiency. If reheating happens too early, the universe may not inflate long enough to solve the horizon and flatness problems. If it happens too late or too inefficiently, the universe could be left too cold, too empty, or dominated by relics incompatible with structure formation. The temperature of the universe after reheating must fall within a narrow window to allow nucleosynthesis, matter-radiation equality, and galaxy formation to proceed correctly. This is the Reheating Precision Problem, and it reveals that solving fine-tuning problems via inflation just introduces another one.
In addition to the need for precision, there is also a fundamental lack of clarity about the microphysical mechanism of reheating. In most inflationary models, the process by which the inflaton field decays into the standard model particles is only sketched in, relying on speculative couplings, parametric resonance, or perturbative decay schemes. No experimentally verified mechanism or standard field-theoretic interaction has been confirmed to realise this transition. The detailed dynamics of how the vacuum-like energy of inflation converts into a hot, thermalized plasma (the birth of the observable universe as we know it) remain deeply uncertain. This is the Reheating Mechanism Problem: the mechanism must not only exist but execute precisely under extreme conditions without observational guidance, further compounding the implausibility of accidental success.
Cosmic inflation is the dominant theory used to explain several early-universe puzzles, including the Flatness Problem and the Horizon Problem. It posits that the universe underwent a brief period of exponential expansion shortly after the Big Bang, driven by a hypothetical quantum field known as the inflaton. But despite its popularity, the inflationary paradigm raises major foundational problems of its own.
Inflation requires the existence of a scalar field with a very specific potential energy landscape – flat enough to drive rapid expansion, then steep enough to decay into standard particles. Yet no known field in the Standard Model of particle physics behaves this way. The inflaton has never been observed, and its origin, nature, and physical justification remain completely unknown. It is a hypothetical entity postulated purely to make the inflationary model work. Moreover (surprise, surprise!) the inflaton field must possess extremely finely tuned properties:
These requirements amount to an elaborate layer of theoretical scaffolding with no direct empirical foundation. In most models, the inflaton is simply inserted by hand, without derivation from deeper theory.
Even if we accept inflation as a real event, further questions follow. Why did inflation start at all? What triggered the inflaton field’s initial conditions? What determines when and how it ends? Why does the universe begin in a state conducive to inflation in the first place? This is the mother of all ΛCDM epicycles.
The universe exhibits a strikingly favourable balance of elemental abundances for both stellar structure and biochemistry. Hydrogen and helium dominate, as expected from Big Bang nucleosynthesis, but heavier elements like carbon, oxygen, nitrogen, phosphorus, and iron, which are essential for life and planetary systems, are also present in just the right trace quantities. These heavier elements are forged inside stars through nuclear fusion pathways, particularly the triple-alpha process, which is exquisitely sensitive to nuclear resonance levels and fundamental constants. Even small deviations in the strengths of nuclear forces, the masses of quarks, or the coupling constants would shut down or drastically alter these processes, preventing the formation of carbon and oxygen altogether. The problem is not just the existence of these elements, but the fact that their relative cosmic abundances fall within narrow windows that support both stable, long-lived stars and complex organic chemistry. This has been called a biophilic tuning of elemental synthesis: conditions that are neither generic nor expected in a random universe.
The timeline of cosmic structure formation is finely poised. Observations show that galaxies, stars, and large-scale filaments began forming just early enough in the history of the universe to allow biological evolution to proceed, but not so early as to disrupt the smoothness and expansion of space. If structure formation had begun significantly earlier, gravity could have overpowered the expansion rate, leading to premature collapse or black hole dominance. If it had occurred significantly later, matter would have dispersed too much for galaxies to condense. This requires a delicate balancing act between expansion dynamics, initial density perturbations, and dark matter behaviour, none of which are naturally fixed by first principles. The apparent "just right" onset of structure formation is therefore considered a further fine-tuning problem.
In the early universe, radiation dominated the energy density, preventing the growth of structure due to its pressure and relativistic behaviour. As the universe expanded and cooled, there came a moment – matter-radiation equality – when matter began to dominate, allowing density fluctuations to grow into galaxies and clusters. This transition had to occur at just the right time in cosmic history. If matter had come to dominate too early, gravitational clumping would have become too strong, leading to an inhomogeneous, turbulent universe. If it had occurred too late, structures would not have had time to form before dark energy accelerated the expansion. The precise timing of this phase transition is not predicted by fundamental physics, but must be tuned by adjusting initial densities of matter and radiation. This introduces yet another layer of unexplained calibration into ΛCDM.
Many Grand Unified Theories (GUTs) of particle physics predict the production of magnetic monopoles – massive, stable particles carrying a net magnetic charge – during symmetry-breaking transitions in the early universe. According to standard thermodynamic calculations, such monopoles should have been copiously produced in the first fractions of a second. Yet no magnetic monopoles have ever been observed. Their complete absence from the observable universe presents a paradox: if they exist and were produced as expected, they should now dominate the mass density or at least be detectable in cosmic ray experiments. Inflation offers a solution by diluting the monopole density through exponential expansion. But this move relies on the prior existence of inflation and on assumptions about when it occurred. Without such a mechanism, the lack of monopoles is another glaring anomaly, casting doubt on the compatibility of high-energy particle physics with early-universe cosmology.
The CMB reveals that the early universe contained tiny but nonzero fluctuations in density: about one part in 100,000. These primordial perturbations are critical, for they serve as the seeds from which all later structure (galaxies, stars, clusters) formed via gravitational amplification.
However, there is a fine-tuning problem in their amplitude. The perturbations had to be:
Inflationary models can generate such perturbations via quantum fluctuations stretched to macroscopic scales. But the actual amplitude observed (~10⁻⁵) is not a robust prediction of most inflationary potentials. It must be precisely calibrated by adjusting the energy scale and shape of the inflaton potential – yet another arbitrary tuning.
One of the most striking surprises in precision cosmology is the emergence of large-scale directional anomalies in the CMB, especially the so-called “Axis of Evil.” This term refers to an unexpected alignment of the lowest-order multipoles – the quadrupole (ℓ = 2) and octopole (ℓ = 3) – which appear to point along a common axis in the sky. This is in direct conflict with the cosmological principle, which asserts that the universe should be statistically isotropic and homogeneous on large scales. The CMB should exhibit random orientations of its large-scale modes, but instead we observe:
These features persist across datasets, though some debate remains about whether they arise from foreground contamination, data processing artifacts, or are signs of new physics (such as topology, anisotropic expansion, or violations of inflationary assumptions). Either way, the Axis of Evil represents another unresolved and foundational puzzle.
Within its first year of operation, the James Webb Space Telescope detected massive, well-formed galaxies at redshifts greater than 10 – meaning they already existed less than 500 million years after the Big Bang.
According to ΛCDM, such galaxies:
The abundance, size, and apparent maturity of these early galaxies outpace the predictions of hierarchical structure formation, challenging both the timeline and mechanisms assumed in ΛCDM. While some argue that adjustments to star formation efficiency or feedback processes might resolve this, others view it as a more serious anomaly, possibly requiring rethinking cosmic expansion history, matter content, or the nature of early gravitational collapse.
A foundational assumption of particle physics and cosmology is that the laws of nature are nearly symmetric between matter and antimatter. In the earliest moments after the Big Bang, the universe should have produced equal quantities of baryons (matter) and antibaryons (antimatter) through high-energy particle interactions. But this is not what we observe. Instead, the universe today is composed almost entirely of matter. Antimatter is exceedingly rare, and no large-scale regions of the cosmos exhibit the annihilation signatures that would result from collisions between matter and antimatter domains. This implies a tiny but decisive imbalance in the early universe – roughly one extra baryon for every billion matter-antimatter pairs. This small excess survived the mutual annihilation that occurred as the universe cooled, and it ultimately seeded all the galaxies, stars, planets, and observers that exist today. The observed baryon-to-photon ratio is approximately η ∼ 6×10^−10.
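A quick arithmetic check (order of magnitude only) connects this quoted ratio to the "one extra baryon per billion pairs" figure:

```python
# Rough consistency check linking the quoted baryon-to-photon ratio to the
# "one extra baryon per ~billion pairs" statement (order of magnitude only;
# the detailed thermal history is ignored).
eta = 6e-10                    # baryon-to-photon ratio quoted above
photons_per_baryon = 1 / eta   # ~1.7 billion photons per surviving baryon
# Each annihilating pair yields roughly two photons, so the number of original
# pairs per surviving baryon is about half the photon count:
pairs_per_baryon = photons_per_baryon / 2
print(f"{photons_per_baryon:.1e} photons, ~{pairs_per_baryon:.1e} annihilated pairs per baryon")
```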
This ratio is not predicted by the Standard Model of particle physics. While theoretical mechanisms such as baryogenesis or leptogenesis have been proposed, these require additional conditions beyond known physics, including:
Moreover, the existence of observers or life does not straightforwardly explain the asymmetry via anthropic reasoning. A universe with no net baryon asymmetry would likely contain no complex structures at all, not even radiation-dominated observers, since complete annihilation would prevent any baryonic residues from persisting.
Nothing is more mysterious than time – the apparent asymmetry between past and future that pervades human experience, biological processes, and thermodynamic systems. This directional flow of time is so deeply woven into our perception of reality that we take it for granted. Yet at the level of fundamental physics, the situation is paradoxical: the laws governing the microphysical world, such as Newtonian mechanics, Maxwell’s equations, GR, and the Schrödinger equation, are time-symmetric. They make no intrinsic distinction between forward and backward temporal evolution. Nonetheless, macroscopic phenomena display a striking temporal asymmetry. This is most evident in the second law of thermodynamics, which asserts that entropy tends to increase over time in closed systems. This thermodynamic arrow aligns with our psychological sense of the passage of time, with the causal structure of events, with the directionality of memory, and with the irreversibility of biological and evolutionary processes. Why does a symmetric microphysical substrate give rise to a manifestly asymmetric macroscopic reality?
The standard scientific response invokes initial conditions. As discussed in section 8 above, the observable universe appears to have begun in a state of extraordinarily low entropy. This special boundary condition does seem to be related to the arrow of time, so perhaps the progression of entropy, causality, and memory follows naturally. But this just forces us back into asking why the universe’s initial entropy was so low in the first place. The explanatory burden thus shifts from dynamics to cosmology, and from physical law to metaphysical contingency. Moreover, even if the thermodynamic arrow is accounted for by low entropy at the Big Bang, it remains unclear why the subjective experience of time should correlate so precisely with the thermodynamic gradient. This invites questions that straddle physics, neuroscience, and philosophy of mind: Is the psychological arrow of time reducible to entropy increase, or does it point to something deeper?
Several speculative approaches have been proposed:
As with the low-entropy problem, the arrow of time exposes a deep incompleteness in the current paradigm. Our best physical theories describe time as a dimension, but experience treats it as process. The mystery of time’s arrow continues to haunt both physics and philosophy.
One of the most under-addressed problems in modern physics is the absence of any formal representation of the present moment – what we intuitively call “now.” In both special and general relativity time is treated as a dimension akin to space. The equations governing the universe’s evolution are time-symmetric and do not single out any preferred moment as "the present." Instead, the universe is modelled as a four-dimensional block – a spacetime continuum in which all events, past, present, and future, coexist equally. In this "block universe" picture:
This sharply contradicts direct human experience, in which the present moment appears privileged: we exist now, not in the past or future. Our conscious experience is temporally localised. We make decisions, experience change, and observe the unfolding of events, all in a way that presupposes a dynamically moving present.
Efforts to reconcile this include:
None of these are incorporated into ΛCDM or fundamental physics. As a result, there is no place for now in the official language of the universe, even though conscious observers – the very ones constructing physical theories – experience it in every moment. This disconnect between temporal ontology in physics and phenomenological reality gives rise to what is sometimes called the “Problem of Now,” or the hard problem of temporal existence.
A central goal of theoretical physics for nearly a century has been the unification of QM and GR, but the two most successful theoretical frameworks remain conceptually incompatible. QFT has successfully described the electromagnetic, weak, and strong nuclear forces within the Standard Model, but attempts to quantise gravity using the same techniques have consistently run into intractable mathematical and conceptual problems, suggesting a deep structural mismatch between the quantum and gravitational domains. The core difficulty stems from the non-renormalisability of gravity when treated as a quantum field. Unlike other forces, the graviton (the hypothetical quantum of the gravitational field) gives rise to infinite quantities in loop calculations that cannot be systematically tamed using standard renormalisation procedures. This failure implies that GR, when naively quantised, loses predictive power at high energies or small distances – precisely where a quantum theory of gravity is most needed, such as near black hole singularities or in the early universe.
Several alternative approaches have been developed in response:
This persistent failure to quantise gravity raises the possibility that the gravitational field may not be fundamentally quantum in nature. Some have suggested that gravity may be emergent from deeper, possibly informational or thermodynamic principles, rather than a force to be quantised in the conventional sense. Others propose that QM itself may need revision in order to accommodate a fully relational or background-independent theory of spacetime. At stake is the coherence of our metaphysical picture of reality. If gravity fundamentally resists quantisation, this may indicate that the unification of physics cannot be achieved solely through the tools of 20th-century quantum theory. It may instead require a paradigm shift that reconceptualises either the quantum, the gravitational, or both, as emergent from a deeper substratum.
The black hole information paradox arises from an apparent contradiction between quantum theory and GR in the context of black hole evaporation. In standard physics, black holes are regions of spacetime from which nothing – not even light – can escape. They are formed from the collapse of massive objects and are classically described by the solutions to Einstein’s field equations in GR.
However, in 1974, Stephen Hawking (allegedly) showed that black holes are not entirely black. Due to quantum effects near the event horizon, black holes emit what is now known as Hawking radiation – a form of thermal radiation resulting from quantum particle-antiparticle pair production near the horizon. Over immense timescales, this leads to the slow evaporation of the black hole. The paradox emerges because Hawking radiation is thermal, and thus appears to contain no information about the matter that fell into the black hole. If the black hole completely evaporates, and the radiation carries no information about its initial contents, this implies a loss of information – violating a central principle of QM: unitarity, the idea that quantum evolution preserves information.
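To give a sense of those immense timescales, the standard semiclassical formulas for the Hawking temperature and evaporation time, evaluated here as a rough numerical sketch for a one-solar-mass black hole, give:

```python
import math

# Order-of-magnitude illustration of Hawking evaporation for a one-solar-mass
# black hole, using the standard semiclassical formulas (SI units).
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
M = 1.989e30       # kg, one solar mass

T_hawking = hbar * c**3 / (8 * math.pi * G * M * k_B)   # Hawking temperature
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation time, seconds

print(f"T ~ {T_hawking:.1e} K")             # ~6e-8 K, far colder than the CMB
print(f"t ~ {t_evap / 3.156e7:.1e} years")  # ~2e67 years
```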
This leads to a core contradiction:
Several responses have been proposed within orthodox physics, including:
No consensus solution has been reached.
The Fermi Paradox arises from a striking contradiction between two widely held premises:
The paradox takes its name from a casual 1950 remark by physicist Enrico Fermi: “Where is everybody?” Given even conservative assumptions about the likelihood and longevity of advanced civilisations, many should have emerged and become detectable by now. Yet, the observable universe remains silent. The Drake Equation – an attempt to estimate the number of communicative civilisations in the galaxy – reinforces the puzzle. Even with pessimistic parameters, it suggests we should not be alone. Proposed solutions to the Fermi Paradox include:
As with many other foundational problems explored in this work, the Fermi Paradox is not a problem lacking in proposed solutions; the real problem is that none of the existing solutions command consensus, leaving the paradox unresolved.
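For concreteness, here is the Drake Equation mentioned above, evaluated with one deliberately cautious and purely illustrative set of parameter guesses (none of these factors is actually known, and the values below are placeholders rather than estimates drawn from this text):

```python
# The Drake Equation with illustrative placeholder parameters.
R_star = 1.0   # star-formation rate in the galaxy (stars per year)
f_p = 0.5      # fraction of stars with planetary systems
n_e = 0.4      # potentially habitable planets per such system
f_l = 0.1      # fraction of those on which life arises
f_i = 0.1      # fraction of those developing intelligence
f_c = 0.2      # fraction of those that become detectable
L = 1.0e5      # years a civilisation remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N ~ {N:.0f} communicative civilisations")   # ~40 for these guesses
```

Even with cautious guesses the product tends to come out well above one, which is what gives Fermi's question its force; the deeper difficulty, of course, is that every factor beyond the first few is itself a guess.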
Mathematics, originally developed as an abstract and purely formal discipline by humans, frequently proves to be astonishingly effective in describing and predicting the behaviour of the physical universe. From the laws of motion to QM and GR, mathematical structures developed without empirical motivation often find profound application in physical theories. This remarkable alignment between abstract mathematical concepts and empirical reality remains deeply puzzling from a philosophical standpoint. Why should mathematics, a product of human cognition, so precisely capture the fundamental workings of nature?