For Whom the Bell (Theorem) Tolls
Bell’s Theorem excludes a certain class of theories from being viable, and experiments have confirmed that nature really does violate the inequalities it sets out, so we’d better pay attention to what the Universe says about its own behaviour. No matter how beautiful you think your theory is, nor how much you love it, it’s always the Universe that gets the final say.
One of the most renowned quantum physicists of the latter part of the 20th Century was John Stewart Bell. The theorem named after him was first put forward in a paper in 1964, the second in a long series of papers on the foundations of quantum theory written before his untimely death in 1990, which have been collected together into a single indispensable volume (Bell 1964, 2004). That paper set out to consider perhaps the most famous counter-argument against the “standard” or “Copenhagen” interpretation (discussed in Part III): the paper by Einstein, Podolsky and Rosen, now ubiquitously known simply as “EPR” (Einstein et al. 1935). The first Bell paper, only published later, in 1966, due to an editorial processing error (Bell 1966), had exposed the fatal flaw in a famous 1932 “proof” due to John von Neumann which claimed that hidden variables in quantum mechanics are impossible; similar claims made by others after von Neumann were likewise shown to be incorrect. For decades the von Neumann “proof” was used to silence any and all discussion about whether it might be possible to “complete” quantum mechanics by way of so-called “hidden variables” (the major criticism in EPR was that quantum mechanics was incomplete and therefore needed additional variables to complete it; see below). Interestingly, as early as 1935 Grete Hermann had found the flaw in von Neumann’s argument and published this observation in a fairly obscure philosophy journal (Hermann 1935), but despite its importance her finding was completely ignored.
Bell later reflected (1982) on how he came to derive his theorem. The Abstract of that paper says:
The strange story of the von Neumann impossibility proof is recalled, and the even stranger story of later impossibility proofs, and how the impossible was done by de Broglie and Bohm. Morals are drawn.
Note that the above quote is the entire Abstract of that article, and it captures well his legendary impatience with grandiose claims about what is and is not possible in quantum theory.
I won’t go into too much detail about EPR and the Bell Theorem because volumes have been written about both. The point that is of direct interest to us here is that Bell’s Theorem implies a very important result: that local hidden-variables theories cannot reproduce the statistical predictions of quantum mechanics. Let’s dig into this.
Conventional quantum theory claims that quantum particles do not possess the values of their measured attributes until the moment of measurement, when those values are brought into being by the measurement. (This is the origin of David Mermin’s remark that the moon is not there unless we are looking at it.) In this situation, the act of measurement bringing forth the value of the measured attribute “collapses” the wavefunction, which is a superposition of all possible values that could be measured. Certain situations can be set up so that the measurement of an attribute on one particle of a correlated pair (say, a spin singlet state) means that the same attribute of the other particle must have the opposite value. Schrödinger (1935b, 1935a) famously called this “entanglement” (Verschränkung). It was correlations of this kind that suggested the EPR thought experiment.
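For concreteness, the spin singlet state of two spin-½ particles mentioned above can be written in the standard notation as

$$|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Big( |{\uparrow}\rangle_1 |{\downarrow}\rangle_2 \;-\; |{\downarrow}\rangle_1 |{\uparrow}\rangle_2 \Big),$$

a superposition in which neither particle has a definite spin of its own, yet a measurement of spin along any chosen axis that finds particle 1 “up” guarantees that the same measurement on particle 2 will find it “down”, and vice versa. (EPR’s original argument was actually framed in terms of position and momentum; the spin version sketched here is the later reformulation due to Bohm, and it is the one on which most Bell-test experiments are modelled.)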
EPR argued that the probabilistic predictions of quantum theory are due to it being an incomplete description of reality. Their thought experiment aimed to show that the supposed “collapse” of the wave function, which in such an entangled situation instantly fixes the value of the distant particle’s attribute, implied that either nature was non-local (i.e., influences travelled faster than the speed of light in order to “tell” the other particle what value it now needed to have), or that quantum mechanics was incomplete. The idea of nature violating the principle of relativity (i.e., being non-local) was considered so patently absurd that it served only as a reductio ad absurdum; it was so far out of bounds as a legitimate physical possibility that only the other option was seriously entertained, namely the incompleteness of the quantum-mechanical formalism.1 This meant that quantum mechanics would require “hidden variables”, not part of the then-current formalism, in order to be a complete description of reality. That is, the required variables would carry the information about what was “really” happening physically, and since they were not part of the formalism, they were “hidden”. The idea that particles have attributes with real values independent of measurement is known as “realism”.
Bell showed that if one assumed both locality (i.e., that influences travel at or below the speed of light, as per the principle of relativity) and the existence of these hidden variables carrying definite values (a combination known as “local realism”), then a contradiction arises: the statistical predictions of local realism differ from those of conventional quantum mechanics. He expressed this difference in terms of a numerical inequality (Bell himself always called this result the locality inequality theorem). Local realism sets an upper bound on a particular combination of measured correlations, a bound which conventional quantum mechanics predicts will be violated, meaning that it is possible, in principle, to conduct an experiment to see which of the two nature actually obeys. Bell’s Theorem therefore definitively moved the debate beyond mere philosophical discussion and argumentation into the realm of empirical investigation and experimentation. One might imagine that the global physics community as a whole would simply jump at the chance to do so. They did not. At least not at first.
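To give a sense of the numbers involved (this is a standard illustration rather than Bell’s original 1964 form), the version of the inequality most often used in experiments is the CHSH inequality of Clauser, Horne, Shimony and Holt. For two measurement settings on each side, $a, a'$ and $b, b'$, and correlation functions $E$, any local hidden-variables theory must satisfy

$$S \;=\; \big| E(a,b) + E(a,b') + E(a',b) - E(a',b') \big| \;\le\; 2,$$

whereas quantum mechanics predicts, for suitably chosen settings on a spin singlet pair, values up to $S = 2\sqrt{2} \approx 2.83$ (the Tsirelson bound). It is this gap between 2 and $2\sqrt{2}$ that the experiments described below probe.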
This was not a road easily travelled by those who wished to take it. As discussed in the references of the previous post, the book by Freire (2015) in particular, those who sought to test this idea were not treated at all well by the orthodoxy. And yet, some did manage to find a way to do so. Beginning with Clauser in the early 1970s, then Aspect in the 1980s, and Zeilinger in the 1990s (along with numerous collaborators, of course), it was shown experimentally that Bell’s inequality is indeed violated, and hence that nature does behave non-locally at the quantum level. Eventually, the 2022 Nobel Prize in Physics was awarded to Clauser, Aspect and Zeilinger for this work.
Some physicists thought these results implied that hidden variable (or “realist”) interpretations of quantum mechanics were therefore impossible (probably recalling the flawed “proof” by von Neumann of this claim). This was a common misconception. There is a detailed examination of this in the more technical of the two books by Bricmont (2016, Sect. 7.5). It suffices here to quote from that section (2016, p.258):
Viewing Bell’s argument as a refutation of hidden variables theories (an almost universal reaction, as we will see) is doubly mistaken: first because, combined with EPR, Bell proves nonlocality; and second because the de Broglie–Bohm theory, which Bell explained and defended all his life, proves that a hidden variables theory is actually possible. Bell was perfectly clear about this … He was also conscious of the misunderstandings of his results.
What Bell had actually shown was that a local hidden variables theory cannot reproduce the predictions of quantum mechanics. The de Broglie-Bohm theory, however, is explicitly non-local, by way of the quantum potential, $V_Q$, discussed in the previous post. It is not ruled out by Bell’s Theorem, nor by the experimental results, which is why Bell found it not only to be a viable theory but also a coherent one (as opposed to the “unprofessionally vague and ambiguous” Copenhagen interpretation), and why he championed it for so much of his career.
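For reference, and writing things in the standard single-particle form rather than the notation of the previous post, the quantum potential arises from expressing the wavefunction in polar form:

$$\psi = R\,e^{iS/\hbar}, \qquad V_Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}.$$

In a many-particle system $R$ is a function of the positions of all the particles at once, so the force that $V_Q$ exerts on any one particle can depend instantaneously on where all the others are. That is exactly the explicit non-locality which places the theory outside the class excluded by Bell’s Theorem.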
So, in sum, the results of the Bell Theorem tests imply that local realist quantum theories are in conflict with experiment and thus not viable. It is this class of theories for whom the Bell Theorem tolls, and it means we must give up either locality, realism, or both. Not surprisingly, there are interpretations that seek to do just that.
For example, conventional Copenhagen quantum mechanics gives up realism: it assumes that the attributes which are observed (the “observables”) do not possess definite values until they are actually observed, so that reality is not independent of the observer (this is the “is the moon there when no-one is watching?” issue). Bell always pushed back against this claim, and coined the term “beables” (i.e., be-ables: things which actually exist independently of measurement) in contrast to the “observables”. Conventional quantum mechanics keeps a form of non-locality in the (problematic) “collapse” of the wave function. Quantum Bayesianism (or QBism) similarly abandons realism by interpreting the wavefunction as describing our subjective beliefs about the quantum system, thus also abandoning the idea of an observer-independent reality, as does Relational Quantum Mechanics, although in a different way.
Another (largely taken-for-granted) assumption that underpins the experimental tests of Bell inequalities is that the choice of which measurements to make is independent of the hidden variables. This is generally known as “free choice” or “no conspiracy”, and has been the subject of recent investigation (e.g., Wiseman 2014). Under this view, then, the experimental results imply that one must give up one or more of locality, realism, or measurement independence. This third option underpins the so-called “retrocausal” interpretations of the quantum formalism, such as Cramer’s Transactional Interpretation. In this view, the emitter sends an “offer wave” forward in time, the absorber sends a “confirmation wave” backward in time to the emitter, and together these complete a “handshake” that is outside of time (“acausal”) but which we do not see as such because of our limited perspective within the flow of time (Cramer 2016).2 (Something like this might be the actual mechanism of the non-locality of the quantum potential $V_Q$, but that is something for another day.)
Another related approach is called Superdeterminism, wherein the hidden variables are correlated with the measurement settings through some common influence in the past. As Bell (2004, p.244) put it, “in such ‘superdeterministic’ theories the apparent free will of experimenters, and any other apparent randomness, would be illusory.” While not ruled out logically, the full implications of this view suggest that experimental science itself is called into doubt, which many researchers consider too high a price to pay. Bell himself noted (ibid.): “However I do not expect to see a serious theory of this kind.”
Another interpretation that asks us to pay a heavy price, Everett’s (1957) relative state interpretation (more popularly known as “many worlds”), survives the filter by denying that a single definite outcome occurs at all during measurement: instead, the Universe splits into different branches in which each possible outcome occurs, one branch per outcome. The wavefunction therefore does not “collapse”, as in conventional quantum mechanics. Rather, it branches; and it branches endlessly. The result is that there are uncounted multitudes of universes coming into being at every moment, all of which, apart from our own branch, are entirely unavailable and inaccessible to us.3 I have heard this referred to as “ontological extravagance”. I can remember joking with some of my fellow grad students that perhaps the reason our Universe was expanding was because of all these other universes constantly coming into existence and branching off all the time.
There is an immense literature on Bell’s Theorem, and its implications have reverberated down the timeline (of this Universe, at least) since its publication over sixty years ago (e.g., Brunner et al. 2014). But perhaps the most important implication of Bell’s theoretical work was that it gave the incentive to move beyond philosophical discussion and to conduct actual experimental tests—and those have shown empirically that nature does indeed behave non-locally, at least at the quantum level. The empirical fact of non-locality in physics is the really astonishing thing. It is not at all surprising then that particle physicist Henry Stapp claimed (1975, p.271): “Bell’s theorem is the most profound discovery of science.”
But the main implication for our exploration is that there are realist interpretations of quantum mechanics that are not excluded by the Theorem (and its subsequent versions and refinements), because they abandon locality. Some of these do not have particles as their fundamental object (“primitive ontology”). Objective collapse theories, such as GRW, add non-linear and/or stochastic terms to the Schrödinger equation which cause the wavefunction to collapse, while the Diósi-Penrose model uses gravity to collapse the wavefunction. These theories actually make empirical predictions that differ from those of standard quantum mechanics, which could, in principle, be tested, although no deviations have been detected so far.
The last remaining subclass consists of non-local realist theories with particles as part of the primitive ontology. We are particularly interested in these for the simple reason that the lowest level of the ladder of abstraction on the classical side is particles, and the initial question was whether there was such an analogue on the quantum side. We have already met the de Broglie-Bohm theory in the previous post, but there is one other theory that also falls into this category: Stochastic Mechanics.
The key idea of Stochastic Mechanics was hinted at as early as Einstein’s work on Brownian motion in 1905. It is interesting to note that Erwin Madelung (1927) sought to re-express the Schrödinger equation as a hydrodynamic equation, in which he found a “quantum pressure” term in a form of the Hamilton-Jacobi equation; it is the same term that Bohm later identified as the “quantum potential”. In 1952, at the same time Bohm was developing his theory, Imre Fényes (1952) developed a version of quantum mechanics based on stochastic processes, from which emerged the Heisenberg relations as well as the main structural features of quantum mechanics. Another stochastic approach was published by David Kershaw (1964) at the same time Bell was thinking about the hidden variables issue, but it was the one by Edward Nelson (1966) that has garnered the most attention and for which there is the most literature. In this view, particles undergo Brownian motion with a diffusion coefficient of $\frac{\hbar}{2m}$, and the resulting stochastic dynamics reproduce the Schrödinger equation. A complete book-length presentation of the theory appeared in 1985 (Nelson 1985), and Goldstein (1987) explicitly noted the non-local character of the theory. Soon after, however, Wallstrom (1989, 1994) showed that in order for the dynamics described by Stochastic Mechanics to conform to quantum mechanics, it was necessary to introduce an essentially ad hoc quantisation condition analogous to Bohr’s original 1913 quantisation postulate for the hydrogen atom from the old quantum theory (Pinke et al. 2025). Wallstrom noted (1994, p.49) that the issues raised by his analysis “do not seem to apply” to the de Broglie or Bohm theories. From then on, interest in Stochastic Mechanics seems to have faded, even on Nelson’s part.
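In rough outline, and following standard presentations of Nelson’s theory rather than any particular notation used in the works cited above, the particle position $X_t$ is taken to obey a forward stochastic differential equation

$$dX_t = b(X_t, t)\,dt + dW_t, \qquad \mathbb{E}\big[dW_i\,dW_j\big] = 2\nu\,\delta_{ij}\,dt, \qquad \nu = \frac{\hbar}{2m},$$

where $dW_t$ is Wiener (Brownian) noise and $b$ is a drift velocity. Imposing Nelson’s condition that the mean stochastic acceleration equal the classical force constrains $b$ in such a way that the probability density and drift together satisfy the Schrödinger equation, which is the sense in which the stochastic dynamics “reproduce” quantum mechanics. Wallstrom’s point, roughly, was that this reconstruction also requires an extra quantisation condition that does not follow from the stochastic dynamics alone.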
The end result of this examination is that there exists one realist hidden variables formulation of quantum mechanics that has a primitive ontology including particles, and which is not only not precluded by Bell’s Theorem but is explicitly non-local, as required by the major discovery of the Bell-testing experiments: none other than our old friend, the de Broglie-Bohm theory.
Recall the von Neumann and other “proofs” of the impossibility of such a theory that were repeatedly (and mistakenly!) cited for decades, and which discouraged the search for alternative theories or interpretations beyond Copenhagen. Recall also Bell’s wry commentary in the short quote nearer the top of this post. Having heard of the von Neumann proof, but not read it (at that time it was available only in German, which he could not read), he had “relegated the question to the back of my mind. … But in 1952 I saw the impossible done. It was in papers by David Bohm” (Bell 2004, p.160). In the preface (p.xi) to the collection of his papers, he also comments:
Of course, despite the unspeakable ‘impossibility proofs’, the pilot-wave picture of de Broglie and Bohm exists.
Given the fact that it exists, and reproduces all of the experimental results of conventional quantum mechanics, yet suffers from none of the flaws which he so railed against, he also asks the very good (and quite justifiably pointed) question (Bell 2004, p.160):
Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?
I like to think he would be gladdened to know that it is now beginning to find its way into quantum physics textbooks (e.g., Bricmont 2016; Norsen 2017), as well as more broadly into the genre of “popular physics” books (e.g., Bricmont 2017), so that knowledge of this intuitive and coherent version of quantum mechanics is now less likely to be completely overshadowed by the counter-intuitive and incoherent conventional interpretation. As a futurist, I am always interested in the dynamics that give rise to historical change, given that the future grows out of the present, which is itself just the endpoint of the past so far. The historian W. Warren Wagar even considered futures inquiry to be a form of applied history (Wagar 1993). So, as a physicist-futurist, I now sometimes like to muse on the “counterfactual” of what the history of 20th Century physics might have been had this theory not been ignored for nearly 100 years. It certainly would have made my life as a physics student much easier, and potentially avoided a hell of a lot of quantum nonsense along the way (Bricmont 2017).
So, it seems that my attempt to follow the Feynman-Seneca advice to go right back to basics and find a way forward from that starting point has led me to the same place that a true giant of the field arrived at long ago, and to which he spent many years exhorting others to come and visit. I feel very much more secure now in the idea that I have most assuredly not been led astray in this exploration, even though I have wandered quite a way from the conventional path of orthodox quantum theory. No, far from it, in fact. For, in the land of quantum foundations, the de Broglie-Bohm pilot wave theory has always been a road much less travelled, even though it may well be one of the few roads, if not the only one, out of the incoherent conventional Copenhagen wilderness, of moons that are only there when you happen to be looking at them.
Next: Part V – ‘The Duckmole Problem’
Notes
1. Schrödinger (1935b, p.812; 1980) also tried to demonstrate a reductio ad absurdum with his famous cat-in-a-box thought experiment: Man kann auch ganz burleske Fälle konstruieren. Eine Katze wird in eine Stahlkammer gesperrt …; “One can also construct quite burlesque [i.e., ridiculous] scenarios. A cat is locked in a steel chamber…”
2. I admit to a certain fondness for this interpretation, which stems from a very early theory due to Wheeler and Feynman (1945) that I had explored briefly during my PhD candidature. In this view, as long as every signal propagating into the far future is eventually absorbed and re-emitted back into the past, the retrocausal aspect of the dynamics completely disappears, leaving only the usual radiation reaction term in the electrodynamics. But if the absorption and re-emission is imperfect, there could arise a volume of spacetime where “premonitory signals” from the future might be possible. I used to muse that this might have been why walking onto the Swinburne campus the first time felt so familiar: I wasn’t remembering it from the past, but rather remembering it from the future.
3. Interestingly, the British science fiction author John Wyndham wrote a short story (“Opposite Number”, 1954) predating (and eerily anticipating) the Everett interpretation (1957), in which a man is visited by a version of himself from one of a multitude of infinitely-branching parallel universes, where a single decision was the branch point that led to their respective lives diverging. I recall being quite captivated by it as a teenager, when I read as much Wyndham as I could possibly get hold of. He was greatly underappreciated as a writer.