Disclaimer: I am not a scholar of the history of quantum metaphysics, and some parts of this appendix (particularly towards the end) were written with assistance from AI. I felt the book would be incomplete without this information, and AI is now very much part of our world, there to be used.
Physics at the end of the 19th century
At the end of the 19th century there was a major disconnect between science and philosophy. Since the publication of Kant’s Critique of Pure Reason in 1781 they had been heading in different directions. In philosophy the golden age of German Idealism had culminated in Hegel’s grand metaphysical system, while science had been through its own golden age of discovery of the mechanistic workings of the material world. These two ways of understanding reality are at odds with each other. The scientific view was that reality was made of material stuff, the behaviour of which could be completely reduced to mathematical laws. From the point of view of the idealistic philosophers, reality was made of mental stuff, matter had a subordinate sort of existence within the ultimate realm of mind, and the behaviour of reality was, at least possibly or partially, irreducible to mathematical laws. It looked like both of these ways of understanding reality were almost finished. Hegel claimed his system was the end of philosophy, and physicists were hopeful that physics was nearing completion – some loose ends needed tidying up here and there, but there was no reason to suspect a dramatic paradigm shift was in the offing. Physics and philosophy were not just separated, but had lost contact with each other. Scientists had no reason to think about reality in Kantian terms of phenomena and noumena. This caused them no problems at all, because when you’re doing science rather than metaphysics, the distinction between phenomena and noumena is usually irrelevant.
An exception to this general rule occurred in the formative days of atomic theory earlier in the 19th century. On this occasion the boundary between physics and metaphysics was tested by a disagreement between scientists over the material existence of atoms. Way back in 1808 British chemist John Dalton had discovered that if each chemical element is given a standard weight, the elements always combine in fixed ratios: water is one part oxygen to two parts hydrogen. Dalton took these constant ratios to be evidence of the actual combination of real physical atoms, and proposed that at the smallest level, the material world is made of atoms. Most scientists agreed with him, but a small minority rejected this claim on the grounds that it went beyond the facts – after all, nobody had ever seen an atom, or any direct evidence of the existence of real atoms. When in 1826 Dalton received the Royal Society’s Royal Medal from chemist Humphry Davy, Davy warned that the word “atom” should be taken to mean no more than “chemical equivalent”. He recognised Dalton’s achievement in purely practical terms: a discovery about how to do science, rather than what reality is made of. French chemist Jean-Baptiste Dumas agreed that the word “atom” had no legitimate place in chemistry, on the grounds that it goes beyond experience.
German chemist Friedrich August Kekulé claimed that the entire debate belonged to metaphysics, and that the question of whether or not atoms actually exist has nothing to do with chemistry. In fact, the first direct human experience of atoms occurred in 1828 when Scottish botanist Robert Brown observed pollen grains in water lurching about as they were randomly bombarded by water molecules, but at the time it was assumed that “Brownian motion” had some sort of biological explanation. German chemist Wilhelm Ostwald proposed an alternative to the atomic hypothesis, based on the laws of thermodynamics. He claimed that atoms were mathematical fictions, and that the base level of reality was pure energy. Bitter disputes followed between the “atomists” and “energeticists”. Ostwald gave a speech in 1895 with the title On Overcoming Scientific Materialism. “We must renounce the hope of representing the physical world by referring natural phenomena to a mechanics of atoms... Our task is not to see the world through a dark and distorted mirror, but directly, so far as the nature of our minds permits. The task of science is to discern relations among realities, i.e. demonstrable and measurable quantities... It is not a search for forces we cannot measure, acting between atoms we cannot observe.”
In late 19th-century science, the viewpoint of the majority of scientists was that everything that existed in the world could be reduced to two sorts of entities: matter (or energy) and fields. Both were assumed by scientists to be real. It made no apparent difference whether they were considered to be part of phenomenal reality or noumenal reality. Kant’s transcendental idealism was philosophy, not science, and physicists were not trying to provide foundations for a science of mind. But note the words used by Ostwald: “measure”, “observe”, “the nature of our minds”. Since then, the precise meaning of these words and their relevance to the foundational assumptions of physics have become central concepts in a battle over the nature of reality that is far from over.
There were two physical fields: gravitational and electromagnetic. [Footnote: Modern physics was soon to add two more – the strong nuclear field, which binds atomic nuclei, and the weak nuclear field, which breaks them apart during radioactive decay.] These “fields” are the distribution of forces in space, and they have very different ranges. The three classical forces – electricity, magnetism and gravity – weaken with the square of the distance (the inverse-square law), but never reach zero: everything in the universe is attracted to everything else by gravity, even if the effect is infinitesimally small at great distances. [Footnote: The two modern forces only act over the extremely short distances that apply to atomic nuclei, which is why they were unknown until the 20th century.]
From the viewpoint of classical physics, only two sorts of laws are needed to explain everything – those that govern the motion of matter, and those that explain the behaviour of fields in terms of matter. These laws are all completely deterministic – if you know everything about the current situation, then, in theory, you can know everything about the outcome. At the time, deism was a popular belief – the idea that God created the universe like a piece of cosmic clockwork, set it in motion and then left it to look after itself in a completely deterministic manner. This determinism was defined by Newton’s laws of motion and his field of gravity, but the field laws of electricity and magnetism were not discovered until the 1860s, by Scottish physicist James Clerk Maxwell. This discovery would lead to the unravelling of classical physics.
Maxwell’s laws combined electricity and magnetism – they were two forces, but reducible to a single field. Quite unexpectedly, these laws cleared up what had until then been a mystery about the nature of light. Fields are associated with matter – if you shake the matter, then you shake the associated field, and this sends waves radiating away from the location of the shaking, just as waves radiate from a pebble thrown into a pond. Maxwell’s laws enabled physicists to calculate the speed at which electromagnetic waves travel, and this perfectly matched the speed of light, which had already been measured. This led directly to the conclusion that light must be high-frequency electromagnetic waves, and Maxwell also correctly concluded that there must be electromagnetic waves of other frequencies. Sure enough, Heinrich Hertz discovered radio waves in the late 1880s and in 1895 Wilhelm Conrad Röntgen discovered X-rays. Classical physics appeared to be complete.
Or at least, almost complete, for there was a fly in the ointment, known as “the black body radiation problem” or “the ultraviolet catastrophe”. Black objects have no intrinsic colour, but they take on a colour when they are heated up. The colour of iron changes as the temperature rises – red hot, white hot, and so on. Physicists wanted to know how to calculate the colour, now known to be an electromagnetic wavelength, from the temperature. This, they presumed, must have something to do with the matter in the black body shaking more violently as it heats up, which everybody assumed must follow Newton’s laws. [Footnote: We now know light is emitted by moving electrons, which don’t follow Newton’s laws.] The problem was that if you do the mathematics, the prediction is that black bodies should glow bright blue regardless of the temperature.
Then in 1900 German physicist Max Planck made a breakthrough. Instead of theoretically allowing matter to vibrate with any energy, he restricted the energies to those that follow the rule E=nhf, where E is the particle’s energy, n is any integer, f is the frequency and h is a constant that is now named after Planck. This rule restricts the particles to a discrete set of energies separated by steps of hf. This innovation was not intended by Planck to be representative of reality – it was a trick to make the maths simpler, and Planck planned to eventually get rid of it by letting the constant shrink to zero, so that the steps became so small that the matter could once again vibrate at any energy it wanted to, or as near as makes no difference. But he ran into another problem: if he set the constant to zero, the bright blue glow returned. This problem had a solution he was not expecting – it turned out that if he set h to one specific value (6.62607015×10⁻³⁴ joule-seconds), then the predicted colour perfectly matched experimental values. This constant later became known as the “quantum of action”, since it has the dimension of energy multiplied by time, which is known as “action” in classical physics.
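For readers who want to see the arithmetic, here is a short sketch in Python (my own illustration, with a visible-light frequency chosen as a rough example, not a value from Planck’s papers) of how the rule E=nhf spaces out the allowed energies:

```python
# Planck's quantisation rule E = n*h*f: an oscillator of frequency f
# may only hold energies that are whole-number multiples of h*f.
h = 6.62607015e-34  # Planck's constant, in joule-seconds

def allowed_energies(f, n_max):
    """First n_max+1 allowed energies (in joules) for frequency f (in hertz)."""
    return [n * h * f for n in range(n_max + 1)]

# For visible light (roughly 5e14 Hz) the steps between allowed
# energies are about 3.3e-19 joules - real, but so tiny that energy
# looks continuous ("analogue") at everyday scales.
steps = allowed_energies(5e14, 3)
spacing = steps[1] - steps[0]
print(spacing)
```

The point of the sketch is only that the spacing hf is minute but not zero; set h to zero and the steps vanish, which is exactly the limit Planck hoped, and failed, to take.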
Nobody realised that this was the thin end of a wedge that would soon break classical physics apart. The theory produced exactly the right answer, but nobody could make any sense of it, because it directly contradicted the Newtonian view of reality. Classical physics and Planck’s theory were both mathematical, but everything in classical physics is analogue, like a vinyl record. It was as if classical physicists had been searching for the last record to complete their collection of long players, and now Planck had found the missing recording, but instead of being a vinyl record, he had found a compact disc. This was not yet understood to be heralding a revolution in physics, because nobody understood the implications. Planck’s new theory had solved the black body radiation problem, but could only describe reality digitally.
Einstein’s four papers in 1905
Albert Einstein (1879-1955) was a German Jew who had taught himself higher mathematics as a teenager. At the age of 13 he was introduced to the work of Kant, who soon became his favourite philosopher. In 1905, when working as a patent clerk in Switzerland, Einstein published four physics papers that, combined with his 1915 paper on general relativity, provided the foundation for 20th century physics. The first paper explained the ability of light to knock electrons out of metal (the photoelectric effect). Classical physicists had thought of light entirely as waves. Einstein’s paper was the first hint that it also had particle-like characteristics. Photoelectric experiments had shown that increasing the intensity of a light beam does not increase the amount of energy of each electron knocked out – instead it increases the number of electrons knocked out, while the energy of each remains the same. If, however, you increase the light frequency (which changes its colour along the spectrum from red to blue) then you increase the amount of energy per ejected electron. Einstein’s explanation was that light behaves like a shower of particles, each with the energy given by Planck’s expression E=hf (an equation containing Planck’s constant, the quantum of action). Another piece of the puzzle had been found to be digital rather than analogue. It was for this discovery that Einstein was awarded the Nobel Prize.
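As a rough sketch (mine, not Einstein’s, and with illustrative frequency values for red and blue light), the energy carried by a single photon follows directly from E=hf:

```python
# Einstein's E = h*f: the energy of one "particle of light" depends
# only on the light's frequency, not on the beam's intensity.
h = 6.62607015e-34  # Planck's constant, in joule-seconds

def photon_energy(f):
    """Energy in joules of a single photon of frequency f (in hertz)."""
    return h * f

red = photon_energy(4.3e14)   # red light, roughly 430 THz
blue = photon_energy(7.5e14)  # blue light, roughly 750 THz

# Each blue photon carries more energy than each red one, which is why
# raising the frequency - not the intensity - increases the energy of
# every ejected electron. A brighter beam just means more photons.
print(red, blue)
```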
The second paper explained Brownian motion, which was revealed to be not biological at all. Atoms, first hypothesised by Leucippus and Democritus in the 5th century BC, were now accepted by everybody to be real. The energeticists admitted defeat on this question, but there was a twist coming later in the year. The third paper eclipsed the first two by introducing a completely new concept of space and time. According to Special Relativity, space and time are not absolute but depend on the velocity of the observer, and the speed of light was a new absolute – the same for all observers. This absolute, unchanging speed was also declared to be a limit that no signal can exceed. This theory did not look much like it belonged to classical physics, but there was no quantum element to it either. In spirit, it owed something to Kant – four-dimensional spacetime seemed so different from the material world we experience that it should surely count as noumenal rather than phenomenal, although unlike Kant, Einstein had found a way to place space and time in a realm beyond experience. However, Einstein’s third paper did not cause any immediate metaphysical crisis in science. His fourth paper discussed a consequence of special relativity: the equivalence of energy and matter according to the equation E=mc². The energeticists, it turned out, weren’t completely wrong after all.
Waves and particles
Waves are unlike particles in three important ways. Firstly, a wave spreads out while a particle is confined to a small area. Secondly, waves can be split indefinitely into parts that go off in different directions while particles can only go in a single direction. Thirdly, two waves can pass through each other, while two particles coming together suffer a collision. In 1923 American physicist Arthur Holly Compton discovered another experimental occurrence of Einstein’s “particles of light”. Compton shone a beam of X-rays into a gas that contained atoms with loosely bound electrons, and was able to detect both the ejected electron and the particle of light that recoiled in the manner of the cue ball in snooker (the “recoil photon”). This is called “the Compton Effect”, summarised by the “Compton relation” p=ħk, which ties a photon’s particle-like momentum p to its wave-like wavenumber k (ħ being Planck’s constant divided by 2π).
Then in 1924 French aristocrat Louis de Broglie submitted a PhD thesis to the Sorbonne in Paris that proposed a theory of electron waves and predicted a wave-particle dual nature of matter. His thesis professor was not convinced, but sent a copy to Einstein, who backed the idea, and de Broglie passed his PhD. De Broglie was pointing towards a fundamentally new theory of physics. Classical physics had reduced the world to matter and fields, or particles and waves. Planck, Einstein and Compton had all provided reasons why waves must sometimes be thought of as particles, and now de Broglie was saying particles can sometimes be thought of as waves. At this point nobody had filled in all of the mathematical details, and nobody had a clue how reality could be made of stuff that was simultaneously a wave and a particle. Then in 1925 not one but three quantum theories arrived.
Werner Heisenberg represented a quantum system with a set of matrices, so his theory is called matrix mechanics. Each matrix represents a different attribute, such as energy or momentum, and each entry in the matrix is for a different value of that attribute. The matrices have diagonal entries that represent the probability that the system has that specific value, and off-diagonal entries that represent the strength of the quantum connections between possible values for that attribute. So in this system the momentum, position and other attributes of a quantum entity such as an electron are represented by one of these matrices, rather than by a single number. Erwin Schrödinger represented a quantum system as a waveform and wrote a quantum law of motion of the sort that waves must obey, now known as the Schrödinger equation. His theory is called wave mechanics.
Paul Dirac’s theory is harder to explain. For Dirac, the motion of quantum entities corresponds to the rotation of an arrow in a multi-dimensional abstract space. To describe this rotation you need to impose a co-ordinate system over the space, and your choice of co-ordinate system results in what appear to be significantly different quantum descriptions. Dirac’s theory concerns how you can transform between these different descriptions, and so is called “transformation theory.” Dirac swiftly proved that the theories of both Heisenberg and Schrödinger were special instances of his own theory; the three theories were mathematically equivalent. Two things are important to note from this. The first is that quantum theory is purely mathematical, and the mathematics can be represented in several different ways. The second is that probability is inherent in this mathematics.
With the mathematics not only complete but available in three versions, each of which can be used in different situations, it was now possible for other scientists and engineers to start using quantum theory for all sorts of scientific and technological work. This would eventually lead to the development of nuclear weapons and nuclear power. But there was still a huge unanswered question about the implications of the theory for our understanding of the nature of reality. How can something be a wave and a particle at the same time? What are we to make of the probabilistic nature of the mathematics? It is often claimed that unless you have a deep understanding of the mathematics, it is impossible to understand the philosophical relevance of quantum theory. I do not have this level of understanding. However, if the claim were actually true then we should expect our greatest mathematicians to arrive at converging conclusions that would eliminate the mystery. Given that this is not happening, the differences in metaphysical interpretation are presumably the result of the understanding of something else (or the lack of it). The truth is that quantum theory does not, on its own, supply any conclusive answers about the nature of reality. But it does raise some very intriguing questions. The scientific and technological record of quantum theory has been faultless. Some people have serious objections to the ethics and practical outcomes of nuclear technology, but quantum theory has never failed to match our observations of reality. We have every reason to believe it is mathematically the correct description of reality. It is not a complete description, but in the century since its discovery, nobody has discovered a flaw.
The debate about the meaning – the metaphysical interpretation – of quantum mechanics has been much less conclusive. Between 1925 and 1935 Bohr and Einstein debated the meaning, and a patchy, incomplete consensus emerged for an interpretation favoured by Bohr, Heisenberg and Max Born. This became known as the Copenhagen Interpretation (Bohr’s institute was in Copenhagen) and it remains the most popular interpretation today, despite its many faults. It is anti-realist, in the sense that it claims there is no “deep reality” at all. [Footnote: “Bohr does not deny the evidence of his senses. The world we see around us is real enough...but it floats on a world that is not real. Everyday phenomena are themselves built not out of phenomena but out of an utterly different kind of being.” Nick Herbert, Quantum Reality: Beyond the New Physics, 1985, Anchor Books/Doubleday, p. 16.] You can be forgiven for wondering what the hell that is supposed to mean, and many other physicists objected, Einstein among them. Surely, they said, this goes too far: it is premature to conclude that no future technology could reveal a deeper truth, and all we should say is that, for now, we’ll be cautious and sceptical. But Bohr was having none of it. “There is no quantum world,” he said, “there is only an abstract quantum description.” Heisenberg took a similar view: “The hope that new experiments will lead us back to objective events in time and space is about as well founded as the hope of discovering the end of the world in the unexplored regions of the Arctic.” In other words, the Copenhagen Interpretation (CI) is a forthright attempt to shut down questions about the ultimate nature of reality. “No use looking for reality here. Quantum theory is unbelievably weird, but we’ve nailed the mathematics and you’re never going to get any deeper answers so you might as well stop asking awkward questions. Move along please; there is nothing to see here.”
The questions did not go away. For all the Copenhagenists’ vehement denials that quantum theory can tell us anything about the nature of reality, their own interpretation raises very specific questions. There are two parts to the CI. The first is that there is no reality in the absence of observations, and the second is that observation somehow creates reality. All very good, but if that’s your theory then it is rather important to explain exactly what “observation” means, and the CI doesn’t even try. In the words of physicist Murray Gell-Mann in 1976: “Niels Bohr brainwashed a whole generation of physicists into thinking the job was done fifty years ago.”
The Uncertainty Principle
The Uncertainty Principle was formulated by Heisenberg in 1927. It asserts that there is a fundamental limit to the precision with which certain pairs of physical properties, called conjugate variables, can be simultaneously known. The most famous of these pairs is position and momentum (mass times velocity). The more precisely we know the position of a particle, the less precisely we can know its momentum, and vice versa. This is not due to technological limitations. It is a fundamental feature of the quantum world.
The Uncertainty Principle can be thought of as a result of the wave-particle duality described above. A particle’s position corresponds to a localised state, while its momentum corresponds to a wavelength (from its wave-like behaviour). Localising the particle more precisely in space makes its momentum (or wavelength) less defined. The relation is usually written Δx·Δp ≥ ħ/2, where ħ is Planck’s constant divided by 2π. This principle upends classical mechanics, where, at least in theory, you can measure a particle’s position and momentum as precisely as you want. It introduces the idea that at a quantum level, reality isn’t fully deterministic. The future states of a system can’t be predicted with absolute certainty but only in terms of probabilities. There is also an uncertainty relation between energy and time that implies that during very short time intervals, large fluctuations in energy can occur, which is often invoked in phenomena like quantum tunnelling or the existence of virtual particles in quantum field theory.
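To put rough numbers on this (a sketch of my own, using standard values for the constants and illustrative masses), the smallest permitted momentum spread is ħ/(2Δx), and the consequences differ wildly between an electron and an everyday object:

```python
# Position-momentum uncertainty: delta_x * delta_p >= hbar / 2.
hbar = 1.054571817e-34  # reduced Planck constant (h / 2*pi), joule-seconds

def min_momentum_spread(dx):
    """Smallest momentum uncertainty (kg*m/s) allowed for position uncertainty dx (metres)."""
    return hbar / (2 * dx)

m_electron = 9.109e-31   # electron mass, kg
m_dust_grain = 1e-9      # a microgram speck of dust, kg

# An electron confined to an atom-sized region (~1e-10 m) must have a
# velocity uncertainty of hundreds of kilometres per second...
dv_electron = min_momentum_spread(1e-10) / m_electron
# ...while a dust grain located to within a micrometre has a velocity
# uncertainty far too small to ever notice.
dv_grain = min_momentum_spread(1e-6) / m_dust_grain

print(dv_electron, dv_grain)
```

This is why the principle upends mechanics at the atomic scale while leaving everyday Newtonian physics effectively untouched.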
The measurement problem
From the perspective of this book, the measurement problem is the most important metaphysical issue raised by quantum theory. The difference between “observation” and “measurement” is subtle. “Observation” implies a human or other conscious observer, whereas “measurement” implies a measuring device, but from the point of view of physics they play the same role: they are the explanation for when, why and how a set of quantum probabilities becomes a single manifested outcome. Schrödinger’s wave function evolves deterministically according to his wave equation, predicting the system’s future states. However, since it’s a wave, it spreads out in multiple directions simultaneously. Despite this, actual measurements/observations always find the system in a definite state. This means that the act of measurement/observation alters the system in a way not explained by the wave function’s evolution. To rephrase Steven Weinberg: If the Schrödinger equation can predict the wave function at any time, and if observers themselves are described by this wave function, why can’t we predict exact measurement outcomes, only probabilities? How do we bridge the gap between quantum reality and our conscious experience of a material world in a definite state? This is the measurement problem.
Schrödinger came up with a now famous thought experiment to illustrate the implications for our understanding of reality. A cat’s fate is linked to a quantum event – the decay of a radioactive atom. Before observation, the atom – and by extension, the cat – is in a superposition of decayed/undecayed and alive/dead states. Yet, when we open the box and observe its contents, we find the cat either alive or dead, and never in a superposition. When, how and why does it stop being in a superposition? Schrödinger did not believe in dead-and-alive cats. He was highlighting a defect in the CI, which does not provide any answer to this question, because it does not define what an observation is.
It is worth noting that Schrödinger was an unapologetic mystical idealist. He never directly connected this metaphysical belief with quantum mechanics, but it isn’t hard to join the dots. He had first been exposed to mystical philosophy through the works of Arthur Schopenhauer, and became a student of the Upanishads. He referred to the claim that Atman (the root of personal consciousness) is identical to Brahman (the ground of all Being) as “the second Schrödinger equation”. He had no obvious reason to specify that the box in his thought experiment contained a conscious animal – it would have worked just as well if instead of a dead-and-alive cat, the box contained a spilled-and-unspilled pot of paint, which would have removed consciousness from the situation entirely. Then perhaps we could introduce the conscious cat as a variation on the thought experiment. Did Schrödinger believe consciousness has anything to do with the collapse of the wave function? He did not explicitly say so. But if the whole of reality is consciousness and quantum theory is our best description of reality, then how can they not be connected in some way? He made his views clearer in his 1944 essay What is Life?, in which he also anticipated the discovery of DNA. [Footnote: Saying we should be looking for an “aperiodic solid” that contained genetic information in covalent chemical bonds.] The essay ends with a discussion about determinism, free will and consciousness.
“Let us see whether we cannot draw the correct non-contradictory conclusion from the following two premises: (1) My body functions as a pure mechanism according to Laws of Nature; and (2) Yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them. The only possible inference from these two facts is, I think, that I – I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' – am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature.”
John von Neumann
John von Neumann was the greatest mathematician, arguably the greatest scientist, of the 20th century. He warrants extended discussion, because I have found that when I mention his contribution to quantum mechanics, certain people immediately dismiss him with a comment along the lines of “Sounds like Deepak Chopra. I feel pretty safe ignoring that.” [Footnote: For anybody interested in von Neumann and how he fits into the narrative of this book, I can thoroughly recommend two resources: the final episode of Jacob Bronowski’s 1973 BBC documentary series The Ascent of Man; and The Man From The Future: The Visionary Life of John von Neumann, Ananyo Bhattacharya, 2022, Penguin Books.] He wasn’t infallible, but he came pretty close. As we will see below, a similarly dismissive reaction, though minus the disrespect, is often displayed by people in positions of much greater authority, such as the author of a textbook on the metaphysics of quantum mechanics published by Oxford University Press – Quantum Ontology: A Guide to the Metaphysics of Quantum Mechanics by Peter J. Lewis (2016).
Von Neumann was about as far from a scientific heretic as could be imagined. He came from a wealthy Hungarian Jewish family, and was exceptionally gifted. At six he could divide two eight-digit numbers in his head, and converse in ancient Greek. At eight he had mastered calculus. An interest in history prompted him to read through the whole of a 46-volume world history series. He attended the best school in Budapest, and his father hired private tutors to provide extra tuition in areas he displayed an aptitude for. These included the famous mathematician Gábor Szegő, who upon their first meeting was brought to tears by von Neumann’s talent for producing instant solutions to complex mathematical problems. At 19 he was having mathematical papers published.
In his 34-year career, von Neumann authored over 150 papers, making numerous major contributions to mathematics, physics, computing and economics, and playing a key role in the development of computers, AI, game theory and the hydrogen bomb. At the time of his death, aged just 53, he was America’s greatest expert and authority on nuclear weapons, and was the originator of the MAD (mutually assured destruction) strategy to control the arms race. In 1999 the Financial Times named him Person of the Century, as representative of the 20th century’s ideal that the power of the mind could shape the physical world (the double meaning here is unlikely to have been accidental).
In 1932, von Neumann’s definitive mathematical analysis of quantum mechanics was published. The Mathematical Foundations of Quantum Mechanics is quantum theory’s Bible. In it von Neumann mathematically proved that an “ordinary reality” cannot underlie the mathematical quantum facts. He showed that if electrons are ordinary objects, or constructed of ordinary objects, then the behaviour of these objects contradicts what quantum mechanics is telling us about reality. Here is Lewis on von Neumann: “...the worry is that at the end of the day there may be no good way to decide between [the competing solutions to the measurement problem], so the project of basing our ontology on our best physical theory will prove to be impossible, at least at this level of description.
However, there is another line of thought running through the history of quantum mechanics according to which there is no genuine underdetermination here. The measurement postulate has always sat uncomfortably with the rest of quantum mechanics. Von Neumann first explicitly formulated the measurement postulate as part of quantum mechanics, but he also claims that when the measurement postulate applies is arbitrary to a large extent, suggesting that it is not an objective physical process but something more closely connected to our situation in the world as observers. In that case, maybe the measurement postulate should never have been regarded as part of the physical theory itself; perhaps we should read the textbooks as saying that a system undergoing a measurement looks as if its state collapses to an eigenstate.” [Footnote: Quantum Ontology: A Guide to the Metaphysics of Quantum Mechanics, Peter J. Lewis, 2016, Oxford University Press, p. 64]
So far, so good. But having pointed out von Neumann’s claim that measurement is not an objective physical process but something more closely related to our status as observers, he does not go on to say anything about consciousness collapsing the wave function. Instead, he implies that this might mean that the collapse only “looks like” it happened, which leads him in the next sentence to the Many Worlds Interpretation. Von Neumann gets no other mention in Lewis’ book and Stapp isn’t mentioned at all. It appears Lewis is leaving out von Neumann’s interpretation completely. The only mention of the theory that consciousness causes collapse anywhere in this book on quantum ontology is a claim on page 179 about Eugene Wigner, who defended von Neumann’s position in the 1950s. Wigner’s view that consciousness causes the collapse, says Lewis, “falls short of being possible” because it “requires a deeply problematic interactionist dualism.” Lewis does not explain in the main text why he thinks this is impossible. He relegates it to a footnote:
“One problem is that it seems utterly mysterious on this view how the ability to collapse wave functions could have evolved with the evolution of conscious beings, since evolution is a purely physical process. For further discussion see Chalmers’ The Conscious Mind (pp. 156 and 356).”
As covered elsewhere in this book in some detail, evolution cannot have been a purely physical process, and an entirely unmysterious answer to his question is available. This should be taken as a measure of how deep the materialistic bias goes in some quarters. The von Neumann/Wigner/Stapp interpretation has been dismissed, without serious discussion, as impossible because it is “utterly mysterious” how to square this idea with the evolution of conscious beings.
Feynman, Bohm and Everett
In the late 1940s Richard Feynman came up with a fourth version of quantum theory, called “sum over histories”. To calculate an electron’s fate, Feynman adds up the contributions of all its possible histories; most of these contributions cancel each other out, and whatever is left represents what will actually happen, expressed as a pattern of probabilities.
In 1951 David Bohm’s book Quantum Theory argued “electrons are not things” and the following year Bohm appeared to do what von Neumann had claimed was impossible: he created a model that allowed electrons to be ordinary objects without contradicting quantum theory. However, Bohm’s theory is very strange. It involves something called “pilot waves” – a new sort of physical entity, with its own fundamental field and a new law of motion. Quantum entities “ride” on the pilot wave, which is aware of everything else going on in the universe, including “measurements”, and communicates this to the electron. Now the electron can be a normal electron, and the strangeness has been relocated to the wave. Had the world’s greatest mathematician made a mistake?
In 1957 Hugh Everett invented a radically new interpretation – the Many Worlds Interpretation. Since measurement devices are no different to anything else in the world, measurement interactions cannot be special. Bohr had to assign a special status to measuring devices, conferring on them a classical-style existence that is not possessed by the entities they are measuring. Von Neumann didn’t consider measuring devices special, describing them in terms of possibility waves just like any other sort of matter. But in order to justify this, von Neumann had to make the act of measurement metaphysically special – he had to remove it from the rest of reality. The MWI gets rid of the act of measurement altogether, by positing that where von Neumann thinks there is an act of measurement or observation, reality splits into multiple diverging timelines. All outcomes happen in different parallel versions of reality.
Competing quantum realities before Bell
These were the options available in 1964, before the publication of Bell’s theorem:
Copenhagen Interpretation (original form)
This was Bohr’s view, and maybe still is the default position for physicists if you ask them what quantum mechanics tells us about reality. Electrons exist, but have no dynamic attributes of their own. When not being measured, quantum entities don’t have definite attributes. This position flatly contradicts the Newtonian conception of a material reality made of microscopic entities that are themselves ordinary objects with ordinary properties, but fails to replace it with anything but a mystery. This might work if all you’re interested in is the practical application of quantum theory, but it raises major philosophical questions that it makes no attempt to answer. Reality is “out there” – it is local – but at the smallest level it is fuzzily undefinable. But what does “smallest level” mean? This interpretation requires a completely arbitrary “Heisenberg Cut” – a fundamental division between two radically different sorts of reality, without any explanation as to why two different sorts of reality exist, or why the line between them should be in any particular place. The big problem with Bohr’s view is that he treats measuring devices differently from everything else in reality: everything is treated as a probability wave except for the measuring device. But why should measuring devices be granted immunity to the quantum laws that apply to everything else? If your answer is that the measuring device should be treated as a quantum system that is measured by another measuring device then you have an infinite regress – what is the real measuring device? This is called “von Neumann’s paradox of infinite regress”, because von Neumann was obliged to break the infinite regress by postulating the collapse of the wave function.
Copenhagen Interpretation (extended)
This version emphasises, rather than ignoring or attempting to hide, the special status of the observer. The observer intervenes in reality by freely choosing which attribute he wants to look for. If he looks for position, then that becomes fixed and velocity is vaguer, and vice versa. The observer’s choices therefore have a decisive influence on the state of reality. Physicist John Wheeler put this as “No elementary phenomenon is a real phenomenon until it is an observed phenomenon.” Wheeler took observer-created reality beyond mere rainbows with what he called a “delayed-choice experiment.” Here, the observer creates not only the present attributes of quantum entities, but also attributes that they possessed long ago. The entity could be a distant galaxy, which could mean that the observation is creating the attributes of entities that existed billions of years ago. This seems to imply retrocausality – that the past can change. For Wheeler, this only applies to elementary phenomena such as electrons, but other people say it could also apply to the whole of reality. Where do you draw the line? Physicist David Mermin believes that the attributes of all entities – including cats, rainbows and distant galaxies – are not real until somebody looks at them with a measuring device. There is no agreement among proponents of this view as to what counts as an observation. Some of them believe it requires the making of a record – an irreversible event. You have to write it down, rather than just remember it. But what’s the difference? A few physicists believe that record-making machines are not enough: only a conscious observation counts as a measurement. American physicist Nick Herbert puts it like this: “Until conscious observers came upon the scene, the universe existed in an indefinite state, unable to decide even what kinds of attributes it possessed let alone their particular values. 
Large portions of the universe (everything that’s not being looked at right now by a conscious observer) are still in this indecisive situation, waiting for a conscious observer to grant them a more definite style of existence.” [Footnote: Quantum Reality, Herbert, p167]
Heisenberg’s view
Quantum theory, according to the CI, represents the world in two different ways: the observer’s experience is expressed in the classical language of actualities, while the unmeasured quantum realm is represented as a superposition of possibilities. Heisenberg suggested we take these representations literally as a model of the way things really are. Thus, according to Heisenberg’s duplex vision, the unmeasured world actually is what quantum theory represents it to be: a superposition of mere possibilities. Heisenberg called them potentia: unrealised tendencies for action, awaiting the magic moment of measurement that will grant one of these tendencies a more concrete style of being that we humans experience as actuality. It is worth noting that while this view is close to that of von Neumann and Stapp in important respects, Heisenberg thought of “measurement” as something more epistemological than ontological – measurement as a change in knowledge rather than in reality.
Bohm’s view
Bohm’s view was that everything is in immediate contact with everything else, courtesy of the pilot waves. Quantum attributes reside in “the entire experimental arrangement”... which has to end up meaning the whole of reality, since the whole universe could be implicated in a single measurement. Everything is entangled with everything else.
The Many Worlds Interpretation
There is no collapse of the wave function. Reality is continually splitting.
Consciousness creates reality
This is the interpretation derived from von Neumann. The observer is removed from the quantum system and collapses the wave function from outside. Von Neumann simply assumed that quantum theory is correct and applied it uniformly to everything in the world, including measuring devices. This means everything is represented by a proxy wave, with no exemption for measuring devices or anything else. In order to square this with the brute fact that we experience a reality where only one outcome actually manifests, von Neumann needed a wave function collapse (the only other alternative is the MWI, which was yet to be dreamed up). Somewhere between the system being observed and the mind of the observer, the many possibilities of the proxy wave must contract into the one actual observed outcome. This is von Neumann’s “measurement act”, and he showed that it could be located anywhere without changing the results, but that it could not be eliminated altogether. Which leaves the question: where does the collapse actually occur? For von Neumann, solving the measurement problem means finding the place where nature breaks the chain of wave function evolution with a “quantum jump”.
Bell’s Theorem and its consequences
In 1964 John Bell set out to answer the question raised by Bohm’s theory: had von Neumann made a mistake? He had not. Von Neumann’s proof involved the caveat that no theory involving ordinary objects combining in reasonable ways is consistent with quantum theory. There is nothing ordinary or reasonable about Bohm’s pilot waves, not least because they are instantly aware of everything going on in the universe. This involves faster-than-light (superluminal) transmission of information, which is prohibited by Einstein’s special relativity.
In his attempt to understand what had gone wrong, Bell made the most important advance since 1925. He came up with a mathematical theorem (not a mere theory, but a mathematical proof that will never be overturned) proving that any model of reality that reproduces the predictions of quantum mechanics (not just quantum mechanical models, but any model at all) must be non-local. Bell proved that Bohm’s superluminal connections are unavoidable. If you believe objective reality is non-local (as Kant did) then this is no problem at all, because the superluminal connections can exist at the noumenal level – they can be part of non-spatio-temporal “reality as it is in itself” rather than part of reality as it appears to us. However, anybody who believes in a local objective reality and accepts Einstein’s theory of special relativity has got some major rethinking to do. [Footnote: The 2022 Nobel Prize in Physics was awarded to Alain Aspect, John Clauser, and Anton Zeilinger for their work on “quantum nonlocality” in quantum mechanics.] Quantum nonlocality is a phenomenon whereby connected particles can affect each other instantly, regardless of the distance separating them. After Bell, any theory of reality has to either be explicitly non-local, or somehow make the local/non-local distinction irrelevant. This has implications for the options listed above.
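The numerical content of Bell’s theorem can be illustrated with the standard CHSH version of the inequality. What follows is a generic textbook calculation, not something from this book: it assumes the singlet-state correlation E(a, b) = −cos(a − b) between two detectors set at angles a and b, and shows that quantum mechanics predicts a combined correlation of 2√2, exceeding the bound of 2 that any local model must respect.

```python
import math

# Quantum correlation between two spin measurements on a singlet pair,
# with detector angles a and b (in radians): E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Detector settings that maximise the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# The CHSH combination: any local hidden-variable model obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # 2*sqrt(2), roughly 2.828 - beyond the local bound of 2
```

Experiments of the kind performed by Aspect, Clauser and Zeilinger measure this same quantity and find the quantum value, which is why locality cannot be saved.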
Bohr’s view involves a local reality. “The entire measurement situation” now depends not only on local observers but on the contents of the entire universe, and this implies superluminal connections. This cannot work without breaking Einstein’s speed limit, so (assuming Einstein was right) this interpretation is dead.
The extended version of the CI fares better, because Bell’s theorem only requires noumenal reality to be superluminally linked, not phenomenal reality. So this version is still alive, but there’s a problem. If the “observer” is a noumenal brain (not a non-physical observer) then what is the difference between a brain and any other quantum system? Why should a human brain collapse a wave function?
Bohm’s view is still alive, and for the same reason – the entanglement exists in the noumenal world, not the phenomenal world of experience. Heisenberg’s view also survives, provided the noumenal “potentia” are superluminally connected. The MWI is immune to Bell’s theorem, because Bell’s theorem assumes that measurements produce a definite result (because it requires something called “counterfactual definiteness”). However, the MWI seems to imply non-locality in its own bizarre way, since what appears to be local reality is continually splitting into other realities, most of which clearly aren’t local because we’re completely separated from them, forever.
Henry Stapp’s extension to von Neumann’s interpretation
Von Neumann proposed that the measurement problem involves a chain of interactions, starting from a physical system and moving through the measuring device to the observer’s brain, and finally culminating in the conscious observer’s mind. He introduced a dualistic framework, distinguishing between two processes: Process 1, the discontinuous “collapse” associated with an act of measurement, and Process 2, the continuous, deterministic evolution of the wave function according to the Schrödinger equation.
Von Neumann argued that the collapse can’t be located at any specific point in the physical world (e.g., within the measuring device) but rather only in the mind of the conscious observer. On this view, consciousness plays an essential role in resolving the quantum uncertainty, but von Neumann didn’t suggest a mechanism for how this might work.
In 2007, Stapp took von Neumann’s framework and extended it, giving consciousness a more active and fundamental role in the quantum process. He introduced the idea that conscious intention can influence physical outcomes through repeated acts of observation, employing the Quantum Zeno Effect. This is the idea that frequent measurement can prevent a quantum system from evolving, suggesting that conscious attention or focus can “freeze” certain quantum states and influence their evolution. In Stapp’s interpretation, conscious choices, made by the mind, select among possible quantum states and repeated acts of conscious attention can stabilise certain outcomes over others, actively shaping the physical world. This differs from von Neumann, who saw consciousness more passively – simply collapsing the wave function without necessarily influencing the physical outcome with repeated decisions. Stapp argued that mental intention plays an active and causal role in determining which possibilities become reality.
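The Quantum Zeno Effect itself can be illustrated with a simple calculation. This is a generic textbook sketch, not Stapp’s own model: a two-level system that would, left alone, oscillate completely out of its initial state is held near that state by sufficiently frequent measurement.

```python
import math

# A two-level system undergoing Rabi oscillation at angular frequency
# omega. Undisturbed, its probability of still being found in the
# initial state after time t is cos^2(omega * t / 2).
def survival(n_measurements, omega=math.pi, t=1.0):
    """Probability of remaining in the initial state when the system is
    measured n_measurements times at equal intervals during time t."""
    p_interval = math.cos(omega * t / (2 * n_measurements)) ** 2
    return p_interval ** n_measurements

# With omega = pi and t = 1, the unmeasured system has evolved entirely
# out of its initial state, but frequent measurement "freezes" it there.
for n in (1, 10, 100, 1000):
    print(n, survival(n))
```

As the number of measurements grows, the survival probability climbs towards 1 – the sense in which, on Stapp’s reading, sustained attention could stabilise one outcome over others.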
Stapp sees the brain as a quantum system in which Process 1 corresponds to mental effort or the focus of attention, which leads to the selection of possible outcomes, and Process 2 is the deterministic evolution of brain states according to quantum laws. These can be influenced by the choices made during Process 1. Conscious decisions can therefore have a direct influence on the physical state of the brain, extending von Neumann’s abstract idea of consciousness collapsing the wave function into a concrete model of mind-brain interaction. This provides a model for free will in a quantum context, which Stapp further explored in Quantum Theory and Free Will: How Mental Intentions Translate into Bodily Actions (2017).
What we need to know about quantum metaphysics
Neither physics nor pure reason can tell us which of the (still standing) metaphysical interpretations of quantum mechanics is true. However, there are a number of things we can safely conclude.
Firstly, the metaphysics associated with classical physics should have been comprehensively consigned to the history books. No person who has a basic grasp of modern physics should be a naïve materialist any more. It should already have been obvious by the 1950s that the new physics had shown that reality is not what it appears to be, but for anybody still not sure then Bell’s theorem ought to have sealed the deal. Outside of mainstream science something like this did happen – or at least it tried to. But instead of accepting the need for a major philosophical paradigm shift, the bulk of the scientific community has opposed it. Both within physics (as demonstrated by Lewis) and beyond physics in the wider scientific community, the implications of QM are admitted to be very strange, and it is accepted that many questions remain unanswered, but there is still serious resistance to the idea that there could be a connection between consciousness and quantum mechanics in general and the collapse of the wave function in particular. The mainstream scientific position is that regardless of the profound strangeness of quantum theory and the complete absence of any consensus about what it tells us about the nature of reality... Thou shalt not refer to consciousness and quantum mechanics in the same sentence.
Our true epistemic situation, regarding quantum theory, boils down to a choice between a small number of possible solutions to a clearly definable problem. The mathematics of quantum theory provides a probabilistic prediction about future observations, but in reality we only ever observe one outcome. The task is to provide a metaphysical explanation of how a set of probabilities regarding future observations becomes a single manifested outcome. The first major attempt at an explanation was the CI, but this introduced the notion of an “observer” or “measurement” without being clear what that meant. But it does help to explain the problem: this unspecified observer was introduced to bridge the “quantum leap” between the set of probabilities and the single outcome, by a process that has become known as “collapsing the wave function”. Four types of solution are available. The correct answer to the measurement problem must fall into one of these categories:
Category 1 is the Many Worlds Interpretation. In the MWI there is no collapse and no observer, because all possible outcomes occur in a vast array of ever-diverging realities. Once diverged, these realities lose all contact with each other. They are not interdependent.
Category 2 is an objectively random single world. In these cases only one outcome occurs, and the apparent randomness is always objectively random. “Objective” here means it really is random, and doesn’t just appear that way to us because we lack the information that would allow us to see why it isn’t really random.
Category 3 is a deterministic single world. Again only one outcome occurs, but in these cases the apparent randomness is really the result of deterministic laws or naturalistic principles that we are currently unaware of (it is only subjectively random). It might be the case that we will never discover these laws or principles, but regardless of this they are governing what happens.
Category 4 is a praeternaturalistic single world. Again only one outcome occurs, and the collapse is caused by interaction with a non-physical Participating Observer (i.e. something outside of the physical system). The external observer can also potentially load the quantum dice. This includes the von Neumann/Stapp interpretation, but it could also involve anything outside the physical system that can load the quantum dice, including all of the things I categorised in Chapter Three as praeternatural. To be clear, these things (free will, synchronicity, and so on) aren’t what we normally consider to be observers, but they are taking the place of an observer in quantum theory. They are causing the collapse, and potentially influencing the probability of which of the possible outcomes occurs.
Various combinations of single-world interpretations may be possible. It is possible we live in a reality where the apparent quantum randomness is sometimes objectively random, sometimes the result of hidden determinism, and sometimes the result of praeternatural phenomena.
Relational quantum mechanics
The single world interpretations can also all be modified by another theory known as relational quantum mechanics (RQM). In RQM there is no single, absolute, objective state of the world – all observations are relative to other observers (or other quantum systems). RQM is incompatible with the MWI, because in the MWI there aren’t any observers. In the case of RQM the apparent randomness is at least partly the result of a requirement to keep the various systems consistent with each other.
The essential difference between the standard versions of the single world interpretations and the RQM versions is that in the standard version there is a single objective reality and in RQM there is a web of interdependent realities. This interdependence ensures that the whole system has characteristics which are true in all of the interdependent realities. It might look like we have lost objective reality, but it has actually just shifted up to a higher level. On this view “objective reality” is the deep structure that must be true for all observers in order to keep the whole system coherent. From our perspective, the most interesting combination is of RQM and category four.
Combining Stapp’s view with relational quantum mechanics
If Stapp’s interpretation is true, conscious observers would not only participate in quantum reality but would do so in a relational way. Each observer’s conscious mind would shape reality through its own quantum interactions, but these interactions would be contextual and relational rather than objective or absolute. Since RQM denies objective states, the conscious experience of reality itself would be relational. Each conscious mind would experience a different version of reality based on its specific quantum interactions with other systems and minds. This would seem to imply that multiple, possibly divergent realities could exist simultaneously, depending on the relational perspective of the observer.
Both interpretations support the idea that reality is fundamentally interconnected. In Stapp’s view, mind influences matter through quantum processes, and in RQM, reality is built on interactions between systems. Together, this implies a deeply participatory and relational cosmos, where conscious agents continuously shape their reality through interactions. Quantum events would not just “collapse” into a single, observer-independent reality but would manifest different outcomes relative to each observer’s conscious interaction. Reality would be more like a network of interdependent observations than a static, objective world. There would be no single, objective reality. Instead, the reality each observer experiences is shaped by their conscious choices and their relational position within the network of quantum systems. What we know about the world would depend on our relations with it.
Stapp’s model grants free will to conscious observers, but with RQM, that free will might only manifest relative to a given set of interactions. Each observer’s choices affect their local quantum reality but may not be universal; other observers might experience a different set of outcomes based on their relational stance to the same events. Different people might live in parallel yet relationally connected realities, where conscious choices affect the world in locally meaningful ways but without a universal frame of reference. In this sense, our interpersonal realities would be constantly shifting, with the boundaries of reality partly defined by who is interacting with whom. The combination of Stapp and RQM would create a universe where reality emerges from ongoing, interactive co-creation between conscious observers and the quantum systems they interact with, leading to a constantly evolving, multi-perspectival reality. However, the whole system would still have to remain a whole system, and there would still be many facts that are true for all observers. There still wouldn’t be any realities where hypernatural phenomena are possible, or economic growth is sustainable in a finite physical system.
Structural relational realism
Structural relational realism (SRR) combines structural realism and relationalism. It aims to explain the nature of the world by emphasising the relations and structures between entities, rather than focusing on individual objects or intrinsic properties. Relationalism argues that the properties and existence of objects are dependent on their relations to other objects or systems. This view challenges substantivalism, which holds that objects have intrinsic properties that exist independently of other things. In quantum mechanics, relational interpretations suggest that properties like position, momentum, or even identity are not intrinsic to particles but arise from their interactions with other particles or systems. This is particularly evident in RQM where quantum states only exist in relation to specific observers or measuring systems. SRR synthesises these two approaches by asserting that both the structures of the world and the relations between entities are fundamental. It makes two key claims.
(1) Reality consists of structures undergoing processes: the world is composed of patterns, regularities, and relationships, rather than individual objects with intrinsic, independent existence.
(2) Relations are fundamental: the entities that populate these structures exist only in relation to other entities. Their properties, and even their identities, are determined by their place within a broader network of relations.
The focus is on how things relate to one another and how these relations form stable, knowable structures. Reality is understood as a web of relationships that form structures. These structures define what exists and how things behave, with entities like particles or objects being nothing more than points or nodes within these networks of relations. In QM this means the behaviour of particles (such as entanglement or superposition) is understood in terms of their relations to other particles or systems. The particles have no independent existence apart from these relationships.
Traditional metaphysics posits that there are fundamental substances or objects that exist independently of their relationships. SRR challenges this by suggesting that there are no underlying substances. What we call “things” are just patterns of relationships. This view aligns with the idea that the mathematical structures we use to describe the world (such as in quantum mechanics or relativity) are not merely convenient models but actually reflect the deep structure of reality itself. This is particularly relevant in quantum field theory, where fields and interactions (rather than individual particles) are primary. In quantum entanglement, particles are connected in such a way that their states are dependent on each other, even when separated by vast distances. SRR sees entanglement as a clear example of how relations (rather than objects) are fundamental. The structure of entanglement represents a pattern of relations that doesn’t depend on the individual identity of the particles. What is real is the entangled relationship, not the particles themselves as independent entities.
SRR has implications for how we think about everything from elementary particles to ecosystems. It has the potential to bridge the gap between different fields of inquiry. For example, in biology, ecosystems are understood as networks of relations between organisms. In sociology, human beings are analysed in terms of their relationships within social structures. This framework could unify different areas of science and philosophy by focusing on relational structures as the common foundation of reality. There are some criticisms and challenges, of course. One of the major challenges to SRR is the question of whether relations can exist without some underlying objects to relate. Critics argue that relations presuppose entities or objects to stand in relation to one another, and thus, relations alone cannot constitute reality. Another problem is that while SRR fits well with physics, applying it to higher-level sciences such as biology or psychology may be more challenging. In these fields, entities like organisms, minds, and ecosystems appear to have more stable, identifiable features that resist reduction to mere relations.
Further reading:
Quantum Questions: Mystical Writings of the World’s Great Physicists, Ken Wilber, Shambhala Publications, 1984.