”… a number of lines of argument (string theory’s 10^{500} solutions is just one) suggest the unfortunate truth may very well be that we live in a multiverse. If so, any Theory of Everything will have to have deeply irritating arbitrary elements, determinable only by experiment. ‘The only game in town’ would then be ‘crooked.’ ” – comment on Dr Woit’s *Not Even Wrong* blog post ‘The Only Game in Town’.

*All* claims of a multiverse are ‘not even wrong’ pseudoscientific junk. The stringy mainstream still ignores Feynman’s path integrals as a reformulation of quantum mechanics in their own right (a third formulation), treating them instead as merely a tool of QFT: Feynman’s paper ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, *Reviews of Modern Physics,* volume 20, page 367 (1948), makes it clear that his path integrals reformulate quantum mechanics in a way that gets rid of the uncertainty principle and all the pseudoscience it brings with it.

Richard P. Feynman, *QED,* Penguin, 1990, pp. 55-6 and 84:

‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of *all* the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no *need* for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

Hence, classical and quantum field theories differ because of the physical exchange of field quanta between charges. The exchange of discrete virtual quanta causes chaotic interference for individual fundamental charges in strong force fields: field quanta induce Brownian-type motion in individual electrons inside atoms. This does not arise for very large charges (the many electrons in a big, macroscopic object), because statistically the randomness averages out. If the average rate of exchange of field quanta is *N* quanta per second, then the random standard deviation is 100/*N*^{1/2} percent, so the higher the rate of field quanta exchange, the smaller the chaotic variation. For large charges like charged metal spheres in a laboratory, exerting forces over long distances, the rate at which the charges exchange field quanta is so high that the Brownian motion which chaotic exchange produces in *individual* electrons is statistically cancelled out: we see a smooth net force, and classical physics is accurate to an extremely good approximation.
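The 100/*N*^{1/2} figure is just Poisson counting statistics. A quick numerical sketch (illustrative only: it simulates the exchange events as a Poisson process, which is an assumption about the statistics, not a derivation from field theory):

```python
import math
import random

def poisson_count(rate):
    # Count events of a unit-time Poisson process with the given mean
    # rate, using exponential inter-arrival times.
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return n
        n += 1

def relative_sd_percent(rate, trials=2000):
    # Standard deviation of the simulated counts, as a percentage of
    # the mean -- the 'random standard deviation' quoted above.
    counts = [poisson_count(rate) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return 100.0 * math.sqrt(var) / mean

for n in (25, 400, 2500):
    print(n, round(relative_sd_percent(n), 1),
          "percent; 100/sqrt(N) =", round(100.0 / math.sqrt(n), 1))
```

The simulated scatter tracks 100/*N*^{1/2}: a hundredfold increase in the exchange rate cuts the relative randomness tenfold, which is why macroscopic charges feel a smooth force.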

Thus, chaos on small scales has a provably beautiful, simple physical mechanism and mathematical model behind it: path integrals with phase amplitudes for every path. This is analogous to the Brownian motion of dust particles struck by individual ~500 m/sec air molecules, which is chaotic because air pressure is random on small scales, whereas a ship with a large sail is blown steadily because the immense number of molecular impacts per second averages out. So nature is extremely simple: there is no evidence for the mainstream ‘uncertainty principle’-based metaphysical selection of parallel universes upon wavefunction collapse. (Stringers love metaphysics.) Dr Thomas Love, who sometimes writes comments at Dr Woit’s Not Even Wrong blog, kindly emailed me a preprint explaining:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

Sometimes people try to defend metaphysical ‘interpretations’ of ‘unexplainable quantum mechanics’ by vaguely bleating about ‘quantum entanglement’, Alain Aspect’s 1982 experiments to test Bell’s inequality, and that kind of irrelevant test of obsolete hidden variables theories which have nothing to do with Feynman’s path integrals for quantum field theory and interferences due to field quanta.

I don’t really want to discuss Aspect’s experiment because it just indicates an incompatibility between the uncertainty principle and the speed of light limit in relativity, without proving which is wrong. As Feynman said, the uncertainty principle is not a fundamental principle but is the consequence of field operations, such as chaotic interferences on small scales.

Briefly, though, here is the story:

1. Bohr’s idea that electrons orbit nuclei prompted Rutherford to send him a letter dismissing it on the grounds that orbiting electrons would radiate away all their energy within a fraction of a second and spiral into the nucleus.

2. On receiving Rutherford’s letter, Bohr went almost totally insane and became certifiably paranoid about criticisms he couldn’t answer, and in response he invented pseudoscientific laws (the complementarity and correspondence principles) to ban all further research and even questions on the subject! (The answer is actually simple: the electron is always radiating. All electrons are always radiating, so an equilibrium of emission and reception is established throughout the universe, called exchange radiation (vector bosons/gauge bosons), which can only be ‘seen’ via the force fields it produces. ‘Real’ radiation simply occurs when the normally invisible exchange equilibrium is temporarily upset by the acceleration of a charge.)

3. Heisenberg came along with his uncertainty principle, and in 1927 at the Solvay Congress sided with Bohr, against Einstein.

4. After many fruitless arguments with Bohr (and errors!), Einstein finally in 1935 came up with a paper co-authored with Podolsky and Rosen:

‘In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality given by the wave function in quantum mechanics is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete.’

They suggested generating two moving particles with similar initial states and then measuring their states (e.g., the direction of polarization) after they have moved far apart, to see what differences resulted from the act of measurement, i.e., from the ‘wavefunction collapse’ of Bohr’s and Heisenberg’s dogma. David Bohm developed ‘hidden variables’ theories which are obviously wrong (containing infinite point potentials) to try to explain nature in an Einsteinian way, instead of sticking to the facts of field quanta (as shown in the previous post, field quanta are not hidden variables but facts which cause the Casimir effect, tested to within 15% of the prediction). In 1964, John Bell showed that quantum mechanics and the Einstein-Bohm hidden variables assumptions lead to results differing by a factor of 3/2, and in 1982 Alain Aspect and others tested Bell’s inequality and experimentally falsified the Einstein-Bohm class of hidden variables. This has absolutely nothing to do with field quanta and path integrals.
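For concreteness, the size of the discrepancy can be shown with the standard CHSH form of Bell's inequality (a sketch of the textbook result, not of the 3/2 factor in Bell's original inequality): local hidden-variables models bound |S| by 2, while the quantum correlation E(a, b) = -cos(a - b) gives |S| = 2*sqrt(2) at the optimal detector angles.

```python
import math

def E(a, b):
    # Quantum correlation for a spin-singlet pair measured along
    # directions a and b (angles in radians).
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination: any local hidden-variables model obeys |S| <= 2.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Detector angles that maximise the quantum value.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S), 2 * math.sqrt(2))  # quantum prediction equals 2*sqrt(2) ~ 2.83
```

The experiments then ask which side of the bound the measured correlations fall on; the loophole arguments discussed further below concern whether the measured samples are fair.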

I realise that Dr Woit has no obligation to increase the number of controversies he engages in just because multiverse pseudoscience is rife in QM, but it would be kind if he could make some comment on Feynman’s argument that nature has a beautiful simplicity, not ugly multiverse pseudoscience:

‘… nature has a simplicity and therefore a great beauty.’

– Richard P. Feynman (*The Character of Physical Law*, p. 173)

The double slit experiment, Feynman explains, proves that light uses a small core of space where the phase amplitudes for paths add together instead of cancelling out, so if that core overlaps two nearby slits the photon diffracts through both slits:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– R. P. Feynman, *QED,* Penguin, 1990, page 54.
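Feynman’s ‘small core of nearby space’ can be checked numerically by adding the arrows directly. In the sketch below the phase is taken to grow quadratically with a path’s deviation from the direct route (a standard near-stationary-path assumption; the constant k and the cutoffs are arbitrary illustrative choices): the narrow core of near-direct paths supplies essentially the whole sum, while an equally wide band of far paths cancels itself almost completely.

```python
import cmath

def arrow_sum(phases):
    # Add Feynman's 'arrows': one unit phasor per path.
    return sum(cmath.exp(1j * p) for p in phases)

k = 40.0  # arbitrary constant: how fast phase grows with path deviation
xs = [i / 1000.0 for i in range(-1000, 1001)]  # path deviations in [-1, 1]

total = arrow_sum(k * x * x for x in xs)                           # every path
core = arrow_sum(k * x * x for x in xs if abs(x) <= 0.28)          # near-direct core
band = arrow_sum(k * x * x for x in xs if 0.70 <= abs(x) <= 0.98)  # equally wide far band

print(abs(core) / abs(total))  # order unity: the core alone accounts for the sum
print(abs(band) / abs(total))  # small: far paths cancel in pairs
```

This is exactly why a mirror smaller than the core scatters light in all directions: removing the far paths changes almost nothing, but truncating the core destroys the coherent sum.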

Hence nature is simple, with no need for the wavefunction collapse:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, *The Character of Physical Law*, November 1964 Cornell Lectures, broadcast and published in 1965 by the BBC, pp. 57-8.

I should add here that the researcher Caroline H. Thompson of the University of Wales, Aberystwyth, who kindly and helpfully corresponded with me by email before her death from cancer in 2006, wrote some useful arXiv papers on problems in Professor Alain Aspect’s entanglement experiments, e.g.

http://arxiv.org/PS_cache/quant-ph/pdf/9806/9806043v1.pdf:

‘All experiments up to now rely on some unproven assumptions, leaving loopholes for local hidden variables theories. P. Pearle, Phys. Rev. D 2, 1418 (1970); E. Santos, Phys. Rev. A 46, 3646 (1992); L. De Caro, and A. Garuccio, Phys. Rev. A 50, R2803 (1994); E. Santos, Phys. Lett. A 212, 10 (1996); C. H. Thompson, http://arxiv.org/abs/quant-ph/9711044’.

Caroline H. Thompson’s paper cited there, http://arxiv.org/PS_cache/quant-ph/pdf/9711/9711044v2.pdf, is *Timing, “Accidentals” and Other Artifacts in EPR Experiments* (Department of Computer Science, University of Wales Aberystwyth):

(Submitted on 20 Nov 1997 (v1), last revised 25 Nov 1997 (this version, v2)) Abstract: ‘Subtraction of “accidentals” in Einstein-Podolsky-Rosen experiments frequently changes results compatible with local realism into ones that appear to demonstrate non-locality. The validity of the procedure depends on the unproven assumption of the independence of emission events. Other possible sources of bias include enhancement, imperfect synchronisation, over-reliance on rotational invariance, and the well-known detection loophole. Investigation of existing results may be more fruitful than attempts at loophole-free Bell tests, improving our understanding of light.’

http://freespace.virgin.net/ch.thompson1/intro.htm:

‘The best proof I know for the test actually used in two out of Alain Aspect’s three experiments is included as an appendix to my paper on the subtraction of accidentals, quant-ph/9903066. Aspect, by the way, is the person who did those experiments in 1981-2 that led to the current belief in the impossible.

‘The most important thing to know about Bell tests is that the majority of them are invalidated by the “detection loophole”, also known as the “fair sampling assumption”. In real experiments, it is necessary to allow for the fact that the detectors do not register every “particle”, and to make any test possible auxiliary assumptions are needed (for a fairly comprehensive list, see … quant-ph/9903066). The most popular tests depend on the assumption that the ensemble of pairs detected is a fair sample of those emitted. I should be surprised if any realist who has examined the facts thinks this is reasonable. In realist models, you only get a fair sample in very special cases, and these cases are most extremely unlikely to occur in the actual experiments. In the optical ones, an important factor is how the detectors respond as you change the input intensity. This is something that the people concerned carefully avoid investigating! I have not had reports back from the experimenters who did at one point rashly offer to check …

‘I repeat: the Bell tests used are not the perfect ones that Bell himself considered! These perfect ones are discussed in popular books and articles, but they have never been tested. The tests had to be modified because in all real experiments it is known that detectors have low “efficiencies”. …

‘Next time you read that realist models are as bizarre as quantum theory ones, I hope you will know better. A realist model that agreed with the quantum theory prediction and worked with perfect detectors would indeed be strange, but there is no need for this: detectors are not perfect. I confidently predict that if ever a perfect Bell test were performed it would not be violated, as the real world is local.’

http://freespace.virgin.net/ch.thompson1/Letters/newsci_Geneva.htm:

‘July 28, 2000

The Editor

New Scientist

‘Dear Sir

‘Justin Mullin reported (29 July, p12) on the latest quantum magic from Geneva: Wolfgang Tittel and his team’s estimate of the speed of travel of quantum information. What he did not report, though – and I can see why, as it is not mentioned in the e-print he quotes – is that the team have yet to establish that any quantum information is involved at all!

‘None of the recent Geneva experiments has been accompanied by satisfactory checks, so that none has ruled out the possibility that the observed correlations are more than just an interesting consequence of perfectly ordinary shared values carried from the source.

‘What we are talking about here is “quantum entanglement”, and this means that we are concerned with our old friends, the EPR (Einstein-Podolsky-Rosen) or Bell test experiments that have been with us now since about 1970. They have all had loopholes, and I have now corresponded with many of the experimenters concerned, including several members of the Geneva team. I pointed out that their 1997 experiment (http://arxiv.org/abs/quant-ph/9707042) used an invalid test (they subtracted “accidentals”, which is perfectly OK in some contexts but ruins your Bell test). They agreed, and in their next experiment were careful to make sure that they did not depend on the subtraction. However, there were various other possible pitfalls, and they have not been able to convince me that they have not fallen foul of at least one of them.

‘The experiment that Justin’s report relies on is quant-ph/007009 (referenced from quant-ph/007008). In this it seems clear that they made no attempt to block one potentially fatal loophole! They kept one detector fixed and altered the setting of the other, which is only allowable if you have first checked that your source is “rotationally invariant”. They do not mention this.

‘For more on the Bell test loopholes … look at my own contributions to the quant-ph archive, notably 9611037, 9903066 and 9912082.

‘Sorry, folks, but quantum entanglement is a house of cards that would collapse the instant the loopholes were properly investigated. If there is no entanglement, then of course an experiment that measures its speed is pure fantasy!

‘Yours faithfully

Caroline H Thompson’

http://freespace.virgin.net/ch.thompson1/bibliography.htm:

Editorial policy of the American Physical Society journals (including PRL and PRA):

‘In 1964, John Bell proved that local realistic theories led to an upper bound on correlations between distant events (Bell’s inequality) and that quantum mechanics had predictions that violated that inequality. Ten years later, experimenters started to test in the laboratory the violation of Bell’s inequality (or similar predictions of local realism). No experiment is perfect, and various authors invented “loopholes” such that the experiments were still compatible with local realism. Of course nobody proposed a local realistic theory that would reproduce quantitative predictions of quantum theory (energy levels, transition rates, etc.).

‘This loophole hunting has no interest whatsoever in physics. It tells us nothing on the properties of nature. It makes no prediction that can be tested in new experiments. Therefore I recommend not to publish such papers in Physical Review A. Perhaps they could be suitable for a journal on the philosophy of science.’

http://arxiv.org/abs/quant-ph/9903066:

Subtraction of ‘accidentals’ and the validity of Bell tests

http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

Authors: Caroline H. Thompson (Department of Computer Science, University of Wales Aberystwyth)

(Submitted on 18 Mar 1999 (v1), last revised 21 Apr 1999 (this version, v2))

Abstract: ‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.’
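Thompson’s point about subtracting ‘accidentals’ is easy to illustrate with arithmetic. A flat accidental background adds equally to all four coincidence channels, so it cancels in the numerator of the correlation E but inflates the denominator; subtracting it therefore boosts |E|, and with it the Bell statistic. The counts below are purely hypothetical, chosen only to show the mechanism (they are not Aspect’s data):

```python
def correlation(npp, nmm, npm, nmp):
    # Normalised coincidence correlation used in Bell tests.
    return (npp + nmm - npm - nmp) / (npp + nmm + npm + nmp)

# Hypothetical coincidence counts per channel (illustrative, NOT real data):
signal = (900, 900, 100, 100)   # genuinely correlated pairs
accidental = 300                # flat background, equal in every channel

raw = tuple(n + accidental for n in signal)

E_raw = correlation(*raw)                                   # what is measured
E_subtracted = correlation(*(n - accidental for n in raw))  # after 'subtraction'
print(E_raw, E_subtracted)  # 0.5 versus 0.8: a 60% inflation of |E|
```

With these made-up numbers the subtraction raises E from 0.5 to 0.8, the same order of inflation Thompson reports; whether the subtraction is legitimate depends on the unproven independence assumption she identifies.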

http://arxiv.org/abs/quant-ph/0210150

http://arxiv.org/PS_cache/quant-ph/pdf/0210/0210150v3.pdf

The “Chaotic Ball” model, local realism and the Bell test loopholes

Authors: Caroline H. Thompson, Horst Holstein

(Submitted on 22 Oct 2002 (v1), last revised 30 Nov 2005 (this version, v3))

Abstract: ‘It has long been known that the “detection loophole”, present when detector efficiencies are below a critical figure, could open the way for alternative “local realist” explanations for the violation of Bell tests. It has in recent years become common to assume the loophole can be ignored, regardless of which version of the Bell test is employed. A simple model is presented that illustrates that this may not be justified. Two of the versions – the standard test of form -2 <= S <= 2 and the currently-popular “visibility” test – are at grave risk of bias. Statements implying that experimental evidence “refutes local realism” or shows that the quantum world really is “weird” should be reviewed. The detection loophole is on its own unlikely to account for more than one or two test violations, but when taken in conjunction with other loopholes (briefly discussed) it is seen that the experiments refute only a narrow class of “local hidden variable” models, applicable to idealised situations, not to the real world. The full class of local realist models provides straightforward explanations not only for the publicised Bell-test violations but also for some lesser-known “anomalies”.’
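The detection loophole Thompson describes can be quantified. In the standard Garg-Mermin style analysis (sketched here under the simplifying assumptions of no background and equal detector efficiencies), a local model restricted only by detection efficiency eta can reach a CHSH value of 4/eta - 2 on the detected subsample, so the quantum value 2*sqrt(2) is only decisive when eta exceeds 2/(1 + sqrt(2)), about 83%, well above the low optical detector efficiencies she mentions.

```python
import math

def lhv_chsh_ceiling(eta):
    # Largest CHSH value a local hidden-variables model can mimic on the
    # detected subsample when each detector fires with efficiency eta and
    # no 'fair sampling' is assumed (Garg-Mermin style bound).
    return 4.0 / eta - 2.0

S_quantum = 2.0 * math.sqrt(2.0)

# Critical efficiency: below it, the local ceiling exceeds the quantum value.
eta_critical = 2.0 / (1.0 + math.sqrt(2.0))
print(round(eta_critical, 4))  # 0.8284, i.e. about 83% efficiency needed

for eta in (0.70, 0.82, 0.95):
    print(eta, lhv_chsh_ceiling(eta) >= S_quantum)
```

At 70% or 82% efficiency the local ceiling is above the quantum prediction, so a violation on the detected subsample proves nothing without the fair-sampling assumption; only above roughly 83% does the test bite on its own.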

http://freespace.virgin.net/ch.thompson1/bibliography.htm:

‘Homodyne detection and optical parametric amplification: a classical approach applied to proposed “loophole-free” Bell tests’, C H Thompson, January 2005, revised July 2005.

Submitted to Physical Review A, 02:08:05; rejected. Submitted after significant improvements and slight change of title to J. Opt. B, November 2005, with copy at quant-ph/0512141 [For the earlier, PRA, edition see quant-ph/0508024.]

‘The “loophole-free” tests proposed by Garcia-Patron Sanchez et al may well be truly loophole-free, but will anyone be surprised if they do not violate any Bell inequality? Local realists will in any case expect this, but I think that quantum theorists are likely to do so too, since they will not be able to prove that they are dealing with “non-classical” light. The whole theory behind their method of producing the light and their test for “non-classicality” (involving interpretation of distributions of homodyne detection voltage differences and negative Wigner densities) is highly suspect.’

http://freespace.virgin.net/ch.thompson1/EPR_Progress.htm:

‘The story, as you may have realised, is that there is no evidence for any quantum weirdness: quantum entanglement of separated particles just does not happen. This means that the theoretical basis for quantum computing and encryption is null and void. It does not necessarily follow that the research being done under this heading is entirely worthless, but it does mean that the funding for it is being received under false pretences. It is not surprising that the recipients of that funding are on the defensive. I’m afraid they need to find another way to justify their work, and they have not yet picked up the various hints I have tried to give them. There are interesting correlations that they can use. It just happens that they are ordinary ones, not quantum ones, better described using variations of classical theory than quantum optics.

‘Why do I seem to be almost alone telling this tale? There are in fact many others who know the same basic facts about those Bell test loopholes, though perhaps very few who have even tried to understand the real correlations that are at work in the PDC experiments. I am almost alone because, I strongly suspect, nobody employed in the establishment dares openly to challenge entanglement, for fear of damaging not only his own career but those of his friends.’

Thank you so much for this article! Throughout high school and my first year of college, I was swept away by the metaphysical, modern ideas revolving around quantum mechanics. Now that I have a bit more maturity as well as some much needed mathematics and science classes under my belt, I can wrap my mind more fully around the issue as a whole, and this article gave me so much insight into it (and plenty of additional resources for further study).

While I support wholeheartedly doing away with pseudoscience, I can’t help but feel something of a loss. There was something that seemed intangibly valuable about knowing the universe had unexplainable, unresolvable, and untestable qualities. It was this element of mysticism that allowed for the possibility that, if there is a God, or an intrinsic spiritual meaning in the universe, maybe it exists in the infinitesimally small.

In reality, I think trying to approach anything in the universe based on what amounts to mystical assumptions is in every possible way counterproductive to the scientific method. Plato formulated loads of brilliant ideas about the universe and methods of logic, but he failed to produce the scientific method because he based all observations on the assumption that this world is an imperfect reflection of an untestable, unresolvable, unexplainable perfect world. How is the mainstream, metaphysical approach to quantum mechanics any different?

If God exists, and/or if meaning exists, it exists regardless and independent of universal laws and scientific discoveries. It exists wholly in the individual as a matter of faith. I think if this were more universally understood, reformulating ideas and approaches toward quantum mechanics wouldn’t feel like such a loss.

Just my thoughts.