Quantum mechanics revisited by Lee Smolin in 2006 and Lubos Motl’s arXiv trackback to it

In an earlier post, it was explained that mainstream first quantization (e.g., the Schroedinger wave equation or the Heisenberg matrices) has been known since the 1920s to be non-relativistic and to quantize the wrong variables: first quantization falsely keeps the Coulomb field potential classical and makes position/momentum intrinsically uncertain, instead of allowing the random, chaotic exchange of field quanta between charges to produce the indeterminism and uncertainty of atomic electron orbits.

“Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

- 1965 physics Nobel Laureate Richard P. Feynman, quoted in Jagdish Mehra, The Beat of a Different Drum (Oxford, 1994, pp. 245-248).

“Niels Bohr brainwashed a whole generation of theorists into thinking that the job of interpreting quantum theory was done 50 years ago.” – 1969 physics Nobel Laureate Murray Gell-Mann.

For more evidence about Bohr’s deep ignorance of physics and his crank propaganda, see the 325-page 1999 book Quantum Dialogue by the expert on the history of quantum mechanics, Professor Mara Beller (1945-2004), or read her article The Sokal Hoax: At Whom Are We Laughing?

The widely-ignored (due to Bohr’s brainwashing first quantization/uncertainty principle lie) Fig. 65 from Richard P. Feynman’s 1985 book QED illustrates how individual electromagnetic field quanta exchanges cause indeterministic electron orbits:

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

- Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘… with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. … we have to sum the arrows to predict where an electron is likely to be.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

Hence, the indeterminate electron motion in the atom is simply caused by second quantization: the field quanta randomly interacting with and deflecting the electron.
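Feynman’s “adding arrows” rule can be sketched in a few lines of code. This is a toy illustration only: the path actions below are made-up numbers, chosen purely to show that near-equal phases reinforce one another while scattered phases cancel.

```python
import cmath

# Toy version of Feynman's "adding arrows": each path contributes a unit
# arrow (a complex amplitude) whose angle is the action of that path in
# units of hbar; the probability of the event is the squared length of
# the vector sum of all the arrows. The actions are made-up numbers.
def event_probability(path_actions_over_hbar):
    total = sum(cmath.exp(1j * s) for s in path_actions_over_hbar)
    return abs(total) ** 2

# Paths with similar phases reinforce (a definite "main path" exists)...
aligned = event_probability([0.00, 0.05, 0.10])
# ...while wildly different phases largely cancel (no single orbit).
scattered = event_probability([0.0, 2.1, 4.4])
assert aligned > scattered
```

Inside an atom, where no main path dominates, the arrows point every which way and only the statistical sum is predictable.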

So the mainstream argument for the uncertainty principle rests on the assumption that first quantization (the false, non-relativistic Heisenberg matrix mechanics and Schroedinger wave equation) is true, when in fact first quantization has been known to be a convenient lie since 1927, when it was found to be incompatible with relativity. It is convenient for teaching quantum mechanics because the people who teach it are concerned only with the mathematical machinery, not the underlying physical processes, and the atomic calculations are simplest in the false physical model of first quantization.

Because atomic electrons orbit at speeds of only about 1% of the velocity of light, the fact that first quantization (such as Heisenberg’s matrix mechanics and Schroedinger’s wave equation) is non-relativistic and false in the relativistic limit does not introduce a significant error, and such non-physical quantum mechanical descriptions of atoms do produce useful predictions, just as Ptolemy’s physically inaccurate, highly convoluted medieval geocentric epicycles were approximately correct and in some ways (circular orbits) easy to calculate with, prior to the Copernican solar system and Kepler’s elliptical orbits.

What the mainstream has done is to take a physically false theory, the primitive 1927 quantum mechanics of first quantization (unlike the later, physically correct second quantization of QFT by Dirac, Feynman et al., which is non-classical, with a field quanta-mediated Coulomb potential), and use the inaccurate physical reasoning of that false model as if it were true in order to try to discredit the concept of quantum gravity, by showing that Schroedinger’s equation violates the equivalence principle between inertial and gravitational mass.

The mainstream hyping of the uncertainty principle fails to note that Schroedinger’s equation also violates relativity, which is precisely why Dirac came up with his relativistic quantum field equation. Yes, Schroedinger’s equation disagrees with gravitational observations. No, that doesn’t mean quantum gravity is impossible: it just means that Schroedinger’s equation is wrong, as Dirac and others knew back in the 1920s.

There is an Orwellian “doublethink” (“the act of simultaneously accepting as correct two mutually contradictory beliefs”) manifesting itself in physics over first and second quantization: instead of proclaiming first quantization to be wrong, everyone in the mainstream, and even outside it, seems endlessly to refuse to see that first and second quantization are physically incompatible (although their predictions for bound states are similar in the non-relativistic limit, for approximate calculations). As we shall see below in this post, this problem is particularly severe for string theorist Lubos Motl, and it is also behind the failure of less mainstream-dominated researchers like Professor Lee Smolin to come to understand quantum mechanics. When I write up the quantum gravity paper, I will have to go into the details of this contradiction to show how first quantization has led to quantum mechanics confusion, blocking progress in quantum gravity.

In the previous post, we finally formulated and presented the draft diagram that has been desperately lacking since the mechanism idea originated in 1996. There was a limit to how much progress could be made without getting the geometry crystal clear. I don’t like the presentation, but it (1) is fact based and (2) makes checkable predictions which have been confirmed.

The post before that was called Second quantization (Quantum Field Theory of Dirac, Feynman et al.) is physically correct and debunks the non-relativistic, physically wrong first quantization approximation to Quantum Mechanics (Schroedinger and Heisenberg).

To summarize, the Heisenberg and Schroedinger approaches to quantum mechanics are non-relativistic; they are useful approximations for bound states (electrons bound to atoms), but they are fundamentally wrong in principle because they use the wrong Hamiltonian energy formula. They fail to put space and time on an equal footing, i.e. they don’t incorporate relativity, and they wrongly model the Coulomb electric field potential energy classically, as a continuous, non-fluctuating parameter. This ignores the fact that in QED – as experimentally proved by the Casimir effect, and by such facts as the quantum tunnelling of 8.78 MeV alpha particles out of polonium-212 nuclei through a classically impenetrable 26 MeV “Coulomb barrier” – the electromagnetic field is known to be mediated by the stochastic exchange of virtual (off-shell) photons between electromagnetic charges. It is precisely the quantum field nature of the real Coulomb potential (as opposed to its classical formulation) that makes the orbital electron’s motion indeterministic and non-classical, and that causes the “Coulomb barrier” faced by an alpha particle in the nucleus to vary chaotically with time about its average value, occasionally weakening enough for an alpha particle with classically insufficient energy to escape, or “tunnel out”.
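Whatever the mechanism behind it, the scale of the suppression can be sketched with the textbook WKB transmission factor for a square barrier. The alpha energy and barrier height are the polonium-212 figures quoted above; the 10 fm barrier width is an assumed round figure for illustration, not a fitted nuclear parameter.

```python
import math

# Textbook WKB estimate for tunnelling through a square barrier, used
# here only to illustrate that a classically forbidden escape (E < V)
# still has a small but non-zero probability.
HBAR_C = 197.327   # MeV * fm
M_ALPHA = 3727.4   # alpha particle rest energy, MeV

def square_barrier_transmission(E_mev, V_mev, width_fm):
    """exp(-2*k*a) with k = sqrt(2m(V-E))/hbar, in nuclear units."""
    k = math.sqrt(2.0 * M_ALPHA * (V_mev - E_mev)) / HBAR_C  # 1/fm
    return math.exp(-2.0 * k * width_fm)

T = square_barrier_transmission(8.78, 26.0, 10.0)  # Po-212 figures
assert 0.0 < T < 1e-10  # finite, but astronomically suppressed
```

Because the suppression is exponential, modest changes in alpha energy between isotopes change escape rates by many orders of magnitude, which is why alpha half-lives span nanoseconds to billions of years.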

Similarly, in nuclear fusion of protons, the stochastic nature of the real quantized Coulomb field allows “tunnelling in” and significant fusion cross-sections to exist at energies which are totally forbidden by the classical Coulomb barrier.

Hence, the whole reason for the indeterminacy in quantum mechanics is falsely assumed to be an intrinsic uncertainty in the position-momentum product of a real (on-shell) particle. Wrong. That assumption is so-called “first quantization”, where the uncertainty principle is imposed in the form of operators for uncertainties in momentum or position, and the classical Coulomb field potential energy is assumed true.

In fact, the chaotic motion of the electron is not due to intrinsic uncertainty by itself, but to the uncertainty in the exchange of virtual (off-shell) photons between the nucleus, the orbital electron, and other charges around them. The electric field is chaotic on small spacetime scales because the number of field quanta being exchanged to produce the Coulomb force is smaller than it is on large scales. The effect is like the transition from the classical, steady air pressure on large areas to the Brownian motion of micron-sized dust particles, where individual air molecule impacts don’t average out perfectly and random, chaotic motion results!
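The Brownian-motion analogy can be sketched numerically: a “pressure” built from N random unit impacts fluctuates relatively less as N grows, roughly as 1/√N, so a system receiving few impacts looks chaotic while one receiving many looks smooth. This is a generic statistical sketch, not a model of photon exchange itself.

```python
import random
import statistics

# The relative scatter of a total built from N random 0/1 "impacts"
# shrinks roughly as 1/sqrt(N): few impacts -> visibly chaotic totals,
# many impacts -> smooth, effectively classical behaviour.
def relative_fluctuation(n_impacts, trials=2000, seed=1):
    rng = random.Random(seed)
    totals = [sum(rng.choice((0, 1)) for _ in range(n_impacts))
              for _ in range(trials)]
    return statistics.pstdev(totals) / statistics.mean(totals)

small = relative_fluctuation(10)     # few impacts: large relative scatter
large = relative_fluctuation(1000)   # many impacts: scatter averages out
assert small > 5 * large             # roughly the 1/sqrt(N) trend
```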

If you try to measure the position of an electron by firing a photon at it then, sure, the uncertainty principle correctly describes the minimum uncertainty in position and momentum. But generally, as Feynman stated in his book QED, you don’t need an uncertainty principle.

Instead, you just need to sum over the histories of many chaotic virtual photon exchanges; the randomness in that quantum field replaces the classical Coulomb field, and explains why a wave equation is statistically a good model for the probability that the electron will be found in a given location.

This is called “second quantization”, and Dirac’s relativistic quantum field equation of 1928 is an example, although it is falsely presented in many treatments as an addition to the basic ideas of quantum theory, when in fact it is totally incompatible with them: being relativistic rather than non-relativistic, it has a totally different Hamiltonian describing the energy of the system. Feynman’s sum-over-histories or “path integrals” approach to quantum mechanics is vital for understanding physically the difference between second quantization (QED) and non-relativistic first quantization (Heisenberg/Schroedinger quantum mechanics).

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

We should add a bit more about the history of attacks against (and defence of) the causal basis of quantum fields in producing indeterminacy and debunking non-relativistic Heisenberg/Schroedinger first quantization (so-called “QM”). Dirac in 1928 came up with the Dirac equation, which replaces first quantization and is relativistic, unlike QM. Bohr apparently never understood the difference between first and second quantization, as shown by his attack at the 1948 Pocono conference on Feynman’s second-quantization path integrals, quoted above (path integrals were initially presented in non-relativistic form as a conceptually simple alternative to first quantization, but are easily made relativistic and are now used in relativistic high-energy particle physics, e.g. the Standard Model).

Bohr never retracted his irrational beliefs in the first-quantization uncertainty principle religion. But Dirac, having come up with the relativistic quantum field theory equation, knew that Bohr’s religion was wrong, although he was unable to counter its propaganda. Dirac loved relativity, but could see that atomic “indeterminacy” arises not from Bohr’s command, but instead from particle interactions with the vacuum (relevant quotations from Dirac are in comments here, here and here):

‘Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation has again changed. If one examines the question in the light of present-day knowledge, one finds that the aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an æther. . . .

‘We must make some profound alterations to the theoretical idea of the vacuum. . . . Thus, with the new theory of electrodynamics we are rather forced to have an æther.’ – P. A. M. Dirac, ‘Is There an æther?’, Nature, v168 (1951), pp. 906-7.

‘Infeld has shown how the field equations of my new electrodynamics can be written so as not to require an æther. This is not sufficient to make a complete dynamical theory. It is necessary to set up an action principle and to get a Hamiltonian formulation of the equations suitable for quantization purposes, and for this the æther velocity is required.’ – P. A. M. Dirac, ‘Is there an æther?’, Nature, v169 (1952), p. 702.

This causal explanation of quantum indeterminacy didn’t go down very well against the anti-aether propaganda (and some of Dirac’s arguments were simplistic and wrong anyway, including his “Dirac sea” aether and some details of his “large numbers coincidence” theory). Dirac’s defence of the aether in the 1950s coincided with a dramatic reversal of his early pragmatic view of physics. On page 7 of his 1930 book The Principles of Quantum Mechanics, Dirac stated:

‘The only object of theoretical physics is to calculate results that can be compared with experiment.’

But on 7 May 1963 Dirac told Thomas Kuhn during an interview:

‘It is more important to have beauty in one’s equations, than to have them fit experiment.’

- Dirac, ‘The Evolution of the Physicist’s Picture of Nature’, Scientific American, May 1963, 208, 47.

What Dirac clearly had in mind in 1963 was the excellent prediction by the Feynman-Schwinger-Tomonaga QED virtual-particle mechanism of the Lamb shift and of the magnetic moments of the electron and muon. Dirac strongly objected to Feynman’s extension of his quantum field theory, and he rejected the renormalization of charge and mass, with its arbitrary cutoffs on running couplings at high energy to prevent infinities in the equations, as an ugly ad hoc fix. This is despite the fact that it was a paper by Dirac about the role of “action” in quantum field theory which prompted Feynman’s path integrals formulation.

Schroedinger’s time-dependent wave equation has an exponential solution whereby the wavefunction as a function of time is proportional to e^(-iHT/ħ), where H is the energy operator (Hamiltonian), T is time, and ħ is Planck’s constant divided by 2π. The squared modulus of this wavefunction gives the probability of finding the particle, i.e. the exponential represents a kind of “amplitude”. Dirac took e^(-iHT/ħ) and derived the more fundamental lagrangian amplitude for action S, i.e. e^(iS/ħ). Feynman showed that summing this amplitude factor e^(iS/ħ) over all possible paths or interaction histories gives a result proportional to the total probability for a given interaction. This is the path integral.

Notice that the amplitude depends on the size of the action relative to Planck’s constant: where S/ħ is a big number you get classical physics, and where S/ħ is small you get quantum mechanics. But although it is derived from the time-dependent Schroedinger equation, the path integral is no longer theoretically equivalent to that equation, because it is summed or integrated over endless different interactions from virtual particles which contribute to the outcome.
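The classical limit of e^(iS/ħ) can be sketched numerically: sum arrows from a family of paths whose action varies quadratically about a stationary path. The quadratic profile and the two scale values are illustrative assumptions, not a real system.

```python
import cmath

# When the action scale is of order hbar, most paths contribute; when it
# is huge in units of hbar, arrows far from the stationary-action path
# spin rapidly and cancel, leaving only the "classical" path's vicinity.
def path_sum(scale, n_paths=2001):
    total = 0j
    half = n_paths // 2
    for i in range(n_paths):
        x = (i - half) / half              # path label, -1 .. 1
        action_over_hbar = scale * x * x   # stationary at x = 0
        total += cmath.exp(1j * action_over_hbar)
    return abs(total) / n_paths  # fraction of arrows that survive

quantum = path_sum(2.0)      # S ~ hbar: nearly all paths contribute
classical = path_sum(500.0)  # S >> hbar: only near-stationary paths
assert quantum > 3 * classical
```

The same cancellation-away-from-stationary-action mechanism is why large objects appear to follow single definite trajectories.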

In other words, the path integral, by summing over all possible interactions, in effect includes the quantum field particle creation and annihilation operators, allowing random field fluctuations to introduce statistical variations on small scales, where the action S for a path is small. This is totally ignored in first quantization procedures and omitted from the time-dependent Schroedinger formulation, which doesn’t include the many virtual-particle interaction contributions to the end result, and therefore lacks the proper mechanical basis for the indeterminacy and statistical nature of quantum mechanical predictions. Dirac, like Bohr and others, objected to Feynman’s path integrals at Pocono in 1948: “Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron.”

Dr Chris Oakley also quotes Dirac stating: “[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.”

Dr Oakley quotes Feynman (in his 1985 book QED) stating: “The shell game that we play … is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”

The fact is that calculus itself suffers from the reductionist fallacy: discontinuities are ignored, so in the real world you can’t treat real lengths the same way as you do in mathematics (e.g., a hundred feet of rope in the form of 100 separate 1-foot lengths is less use to a sailor than a single 100-foot length; the law of addition may tell you that the two are mathematically similar, but in the real world there is obviously an important difference). The mathematical model of any physical process is never completely accurate: it is just a convenient calculating procedure. Renormalization cuts off a running coupling that would otherwise make a charge tend towards infinity at arbitrarily high energy. Mathematically, this introduces a discontinuity. But physically, the real world imposes such cutoffs on natural laws we use every day. E.g., the inverse-square law of solar radiation would predict infinite energy density at zero distance from the sun’s centre. But we have to cut off the application of the inverse-square law at the sun’s radius: inside the sun, the inverse-square law breaks down because the sun is a plasma of ions and not a vacuum.
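The solar cutoff example amounts to a two-line function. Clamping the flux at its surface value inside the sun is a placeholder for “the law no longer applies here”, not a model of the solar interior.

```python
import math

# Inverse-square law for solar radiant flux, with the "natural cutoff"
# described in the text: the law only applies outside the source.
SUN_RADIUS_M = 6.96e8   # metres
L_SUN_W = 3.828e26      # solar luminosity, watts

def radiant_flux(r_metres):
    """W/m^2 at distance r from the sun's centre, cut off at the surface."""
    r = max(r_metres, SUN_RADIUS_M)  # inside the plasma the law breaks down
    return L_SUN_W / (4.0 * math.pi * r * r)

surface = radiant_flux(SUN_RADIUS_M)
assert radiant_flux(0.0) == surface       # no divergence at r = 0
assert radiant_flux(1.496e11) < surface   # ~1.36 kW/m^2 at Earth's orbit
```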

This is a good example of how a mathematical law holds up to a point and then breaks down when pushed further, owing to a simple change in the physical mechanism of what it is modelling! Such breakdowns introduce a mathematical discontinuity naturally, due to physical effects you can understand. Renormalization cutoffs in QFT are similar: as renormalization investigators like Wilson argued, it is physically logical to take a high-energy cutoff, because as you go to higher energy the particles approach more closely, and eventually they are so close that there is not enough spacetime between them for virtual particles to pop into existence, become polarized by the field, and thus shield it. Hence, at extremely high energy, the distance scales become so small that the physical basis for the running coupling (the shielding of charge by the polarization of virtual particles) cannot fit into the space. The “grain size” of the vacuum is the smallest space that the virtual-particle creation and polarization processes can fit into. At higher energy still, the coupling (relative charge) no longer runs (varies) as a function of energy, because there is no more shielding: it remains constant simply because there is no physical mechanism for vacuum shielding of charge at those smaller distances.

Dr Oakley’s work focuses on the Haag theorem of 1955, a mathematical attack on renormalization which shows that a free-field vacuum of virtual particles doesn’t exist. However, in a gauge theory the virtual particles are not a free-field vacuum as such: they are always being exchanged between real charged particles, e.g. between real electrons. If you took away the charges in the universe, the vacuum field of exchanged quanta would no longer exist, because the virtual quanta which mediate force fields exist only while being exchanged between particles. So the field quanta don’t exist independently of the real particles: they are not a free-field vacuum. E.g., you can, as we have seen in the previous post and others, accurately model the gauge interaction process physically by treating real charges as radiating black holes: the radiation behaves as field quanta (gauge bosons). Every electron in the universe is radiating field quanta, “gauge bosons”. Thus, every electron in the universe is also receiving field quanta: in steady-state situations (with no net forces acting to produce accelerations) there is an equilibrium of exchange of field quanta between charges. This interaction picture simply does not imply the existence of the “free field particles” in the vacuum to which Haag objects. The field quanta aren’t free: they are generated by the real particles in the universe, which act as both sources and sinks for the virtual particles. In addition, as we have pointed out before, there are no annihilation-creation operators for a free vacuum: Schwinger showed that there is a cutoff on pair production in the vacuum, which simply can’t occur where the steady electric field strength is below 1.3 × 10^18 volts per metre – a strength which exists only out to about 33 fm from the centre of a unit charge like an electron!
In the vacuum beyond that small 33 fm distance, there are no creation-annihilation spacetime Feynman diagram loops, because the field strength is simply too weak to make virtual fermions pop into existence from the ground state of the vacuum. (For proof, just see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume, and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040.)
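The 33 fm figure can be checked in a few lines: find the distance at which the classical Coulomb field of a unit charge falls to Schwinger’s threshold. The constants are standard CODATA values; the threshold is the rounded figure quoted above.

```python
import math

# Distance from an electron at which its Coulomb field e/(4*pi*eps0*r^2)
# falls to Schwinger's pair-production threshold of ~1.3e18 V/m.
E_SCHWINGER = 1.3e18            # V/m (rounded figure from the text)
ELEM_CHARGE = 1.602176634e-19   # C
K_COULOMB = 8.9875517923e9      # 1/(4*pi*eps0), N*m^2/C^2

# Solve k*e/r^2 = E_schwinger for r:
r = math.sqrt(K_COULOMB * ELEM_CHARGE / E_SCHWINGER)
r_fm = r * 1e15
assert 30.0 < r_fm < 36.0  # ~33 fm, as stated in the text
```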

This is the low-energy or IR cutoff to the logarithmic running coupling formula in QFT, corresponding to electron collision energies on the order of 1 MeV: at lower energies, and thus at distances beyond 33 fm, the charge of the electron no longer falls due to the running coupling, but is constant, because there is no further pair production and polarization of the vacuum. Forces then vary merely through the geometric (inverse-square law) effect of distance on the spreading out of the force-mediating gauge boson exchange radiation. However, popular accounts of quantum field theory all ignore equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040, and instead claim that the entire vacuum is full of virtual particles spontaneously popping into existence and being annihilated. Nope. It isn’t. This is why long-range forces vary according to the geometric inverse-square law over large distances, without the additional logarithmic or exponential attenuation factor which you do need when modelling shorter-range forces where pair production does exist, e.g. the weak and strong nuclear forces. At distances greater than 33 fm from a fundamental particle, due to the IR cutoff, the vacuum doesn’t contain any Feynman diagram loops: it merely contains bosonic exchange radiation. There can be no spontaneous pair production of virtual fermions where the electric field is below Schwinger’s 1.3 × 10^18 volts/metre threshold, because the field is then too weak to create them. (Think of the photoelectric effect as an analogy to this threshold: photon impacts can only release electrons from a metal if the photon energy exceeds a threshold energy called the work function. In other words, charges have a binding energy, and you must deliver more than that binding energy before you can free them.)
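The running-coupling behaviour with an IR freeze can be sketched with the textbook one-loop leading-log formula. The ~1 MeV cutoff is the figure used above; including only the electron loop, this sketch understates the full experimental rise (heavier virtual species also contribute), so treat the numbers as illustrative.

```python
import math

# One-loop leading-log sketch of the QED running coupling: above the IR
# cutoff the effective charge rises logarithmically with collision
# energy; below it, vacuum polarization switches off and the coupling
# stays frozen at its familiar long-range value.
ALPHA_0 = 1.0 / 137.036  # long-range fine-structure constant
IR_CUTOFF_MEV = 1.0      # assumed IR cutoff, ~ electron pair threshold

def alpha_eff(Q_mev):
    if Q_mev <= IR_CUTOFF_MEV:
        return ALPHA_0  # no pair production: coupling frozen
    log = math.log(Q_mev**2 / IR_CUTOFF_MEV**2)
    return ALPHA_0 / (1.0 - (ALPHA_0 / (3.0 * math.pi)) * log)

assert alpha_eff(0.001) == ALPHA_0   # long-range: constant charge
assert alpha_eff(91000.0) > ALPHA_0  # near the Z mass: charge is larger
```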

Like Dirac, Einstein also objected to Bohr, but he did not side with Dirac: he objected to first quantization on different grounds (Einstein wanted no particles at all, just classical “continuum” fields as extensions of general relativity), and Einstein too was widely ignored in favour of Bohr’s philosophy (Smolin’s 2006 book, for instance, quotes Dyson as skipping a meeting with Einstein after reading Einstein’s latest papers and deciding they were poor).

Bohr simply wasn’t aware that Poincare chaos arises even in classical systems with two or more bodies, so he foolishly sought to invent metaphysical thought structures (the complementarity and correspondence principles) to isolate classical from quantum physics. Poincare chaos means that chaotic motion on atomic scales can result from two or more electrons influencing one another, e.g. from the randomly produced pairs of charges (creation-annihilation “loops” on spacetime Feynman diagrams) which exist randomly within 10^-15 metre of an electron (where the electric field exceeds Schwinger’s threshold for spontaneous pair production in the vacuum, about 1.3 × 10^18 v/m), causing deflections in motion. These effects might average out over long times and large distances, but would cause more chaotic motion on smaller time and distance scales.

The failure of determinism (predictable closed orbits, etc.) is therefore already present in classical, Newtonian physics, which can’t even deal with a collision of three billiard balls. Newtonian physics works in the solar system only because the planets all have masses – i.e. gravitational charges – far smaller than the mass of the sun, reducing it effectively to a two-body problem in which only the masses of the sun and the planet under consideration matter for calculating the gravitational force. By contrast, in the atom, each electron carries a charge which is a much larger fraction of the (opposite) nuclear charge (for a hydrogen atom the electron’s charge is identical in magnitude to that of the nucleus), so mutual interference between electrons in nearby atoms (or between electron shells in the same atom) causes a massive amount of chaos in the subatomic world which isn’t seen in the solar system, where the sun’s mass dominates (over 99.8% of the mass of the solar system is in the sun) and planet-planet interactions are therefore relatively trivial:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

- Dr Tim Poston and Dr Ian Stewart, Analog, November 1981.
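The sensitive dependence on initial conditions that Poincare identified can be illustrated with the simplest chaotic system, the logistic map. This is a stand-in toy, not a model of electron orbits or billiard balls: two histories starting a billionth apart become completely uncorrelated within a few dozen steps.

```python
# Poincare-style sensitive dependence, illustrated with the logistic map
# x -> 4x(1-x). Two trajectories starting 1e-9 apart track each other at
# first, then diverge to completely uncorrelated values.
def logistic_orbit(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 80)
b = logistic_orbit(0.2 + 1e-9, 80)
early = max(abs(p - q) for p, q in zip(a[:5], b[:5]))
late = max(abs(p - q) for p, q in zip(a[40:], b[40:]))
assert early < 1e-6 and late > 0.1  # deterministic rule, lost predictability
```

Determinism in the rule does not give determinism in practice: any finite measurement error grows exponentially, exactly the situation Newton faced with three bodies.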

So why is this fact – that the chaos of quantum mechanics is simply due to the random exchange of virtual particles in the quantum electromagnetic field between charges in atoms – being covered-up?

Professor Lee Smolin at the Perimeter Institute has enlightened me as to why the mainstream ignores this: I’ve just read his 2006 arXiv paper http://arxiv.org/abs/quant-ph/0609109, ‘Could quantum mechanics be an approximation to another theory?’ Smolin simply fails to distinguish first from second quantization, then goes into the details of deriving the Schroedinger equation (first quantization, i.e. wrong quantum mechanics) from stochastic processes. In other words, he completely ignores Feynman’s explanation that the field quanta cause the indeterminism in the atom and in other small-scale phenomena (such as light passing through nearby double slits and being influenced by the electromagnetic fields of the electrons in the slits), and talks instead about modifying Bohm’s discredited 1952 “hidden variables” ideas:

“We consider the hypothesis that quantum mechanics is an approximation to another, cosmological theory, accurate only for the description of subsystems of the universe. Quantum theory is then to be derived from the cosmological theory by averaging over variables which are not internal to the subsystem, which may be considered non-local hidden variables. We find conditions for arriving at quantum mechanics through such a procedure. The key lesson is that the effect of the coupling to the external degrees of freedom introduces noise into the evolution of the system degrees of freedom, while preserving a notion of averaged conserved energy and time reversal invariance.

“These conditions imply that the effective description of the subsystem is Nelson’s stochastic formulation of quantum theory. We show that Nelson’s formulation is not, by itself, a classical stochastic theory as the conserved averaged energy is not a linear function of the probability density. We also investigate an argument of Wallstrom posed against the equivalence of Nelson’s stochastic mechanics and quantum mechanics and show that, at least for a simple case, it is in error.”

ArXiv, which is controlled by string theory partisans who don’t like Smolin’s loop quantum gravity, has permitted a trackback to the paper from Dr Lubos Motl’s blog post called “Wavefunctions and hydrodynamics: crackpots vs. rational thinking”, where Lubos writes:

“It is no secret that I consider all people whose main scientific focus is a revision of the basic postulates of quantum mechanics – and a return to the classical reasoning – to be crackpots. They just seem too stubborn and dogmatic or too intellectually limited to understand one of the most important results of the 20th century science.

“Every new prediction based on the assumption that there is a classical theory that underlies the laws of quantum mechanics has been proven wrong. The local hidden variables have first predicted wrong outcomes in the EPR experiments and later they predicted the validity of Bell’s inequalities and we know for sure that these inequalities are violated in Nature, just like quantum mechanics implies and quantifies. The non-local hidden variables predict a genuine violation of the Lorentz symmetry. I think that all these theories predict such a brutal violation of the Lorentz symmetry that they are safely ruled out, too. But even if someone managed to reduce the violation of the laws of special relativity in that strange framework, these theories will be ruled out in the future. Their whole philosophy and basic motivation is wrong.

“The whole political movement to return physics to the pre-quantum era is a manifestation of a highly regressive attitude to science – an even more obvious crackpotism than the attempts to return physics to the era prior to string theory. But among the proposals to undo the 20th century in physics, some of the papers are even more stupid than the average.”

As with his string theory propaganda, Lubos is wrong about the Bell inequality tests, as shown by the following evidence:

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

Lubos then goes on and on about Smolin’s paper because Smolin claimed to debunk Wallstrom’s objection to Nelson’s hidden variables (Smolin was not defending Nelson outright: he argues that Wallstrom gives a false reason for rejecting Nelson, and that the real problem with Nelson’s scheme lies in Hilbert space states with discontinuous wavefunctions). I do have to agree with Lubos on one point: the few people with influence who are probing the foundations of quantum mechanics, like Professor Smolin, are not approaching the subject correctly. The correct approach is to describe quantum fields throughout the universe by summing over histories, i.e., the physical use of path integrals (by analogy to their use in Brownian motion) to model the exchange of field quanta as force fields. Instead, Smolin and others avoid simple modelling at all costs, and choose to work on variations of old approaches which are failures.
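Feynman’s “adding arrows” picture of the path integral, quoted above, can be illustrated with a toy numerical sketch (purely illustrative, not a model of any particular system): paths whose phases vary wildly largely cancel, while paths with nearly stationary phase add coherently, which is why a dominant contribution emerges from the sum over histories.

```python
import cmath
import random

def sum_arrows(phases):
    """Sum unit phasors exp(i*phase) -- Feynman's 'adding arrows'."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total)

random.seed(1)
n = 1000

# Random phases (rapidly varying action): the arrows mostly cancel.
random_resultant = sum_arrows(random.uniform(0, 2 * cmath.pi) for _ in range(n))

# Nearly equal phases (stationary action): the arrows add coherently.
aligned_resultant = sum_arrows(0.01 * random.random() for _ in range(n))

print(f"random phases:  {random_resultant:.1f}")   # typically of order sqrt(n), ~32
print(f"aligned phases: {aligned_resultant:.1f}")  # close to n = 1000
```

The point of the sketch is only the contrast between the two resultants; the dominance of near-stationary-phase paths is what recovers classical-looking trajectories on large scales.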

David Bohm’s hidden variables theory was first published in his 1952 paper “A suggested interpretation of quantum theory in terms of hidden variables, I and II”, Physical Review v. 85, pp. 166-93. Bohm ignored the evidence for quantum fields causing indeterminacy, as accepted in second quantization (quantum field theory), and instead abstrusely and controversially introduced “hidden variables” to explain the chaotic, Brownian motion-type indeterminacy of electron orbits, deriving the first-quantization Schroedinger equation! Bohm’s error is plain to see: he should have been rebuilding quantum mechanics using quantum field theory, with Feynman’s path integrals to sum the exchange of field quanta between charges in the universe, instead of trying to derive the epicycle-like, non-relativistic first quantization model of quantum mechanics.

Later, in 1966, Edward Nelson published his “Derivation of the Schroedinger equation from Newtonian mechanics” in Physical Review, v. 150, p. 1079. The title alone tells you why Lubos and other string theorists get so angry with this stuff: they want their complex, stringy mathematical ugliness to replace the deep simplicity in the world, just as Ptolemy’s geocentric epicycles of 150 A.D. won out over Aristarchus’ more physically correct solar system of 250 B.C. Their stringy mathematics is analogous to Ptolemy’s geocentric, non-physical epicycles: the correct theory is not a mathematical landscape of endless ad hoc epicycles with many fine-tuned, anthropically-derived parameters, but simply a solar system with elliptical rather than circular orbits.

Nelson also wrote a book, “Quantum Fluctuations” (Princeton University Press, 1985), available as a PDF download. Although its aim is very interesting, it focusses on the wrong theory (first quantization quantum mechanics, not QFT/second quantization) and so doesn’t address anything of deeper interest. Another book with similar errors is Peter R. Holland’s “The Quantum Theory of Motion: An Account of the De Broglie-Bohm Causal Interpretation of Quantum Mechanics”, Cambridge University Press, 1995. Although the aim is good, the method and results are wrong because it again focusses on deriving the wrong theory!

To understand why they are all wrong, imagine this is the year 1500 A.D. and some errors in Ptolemy’s geocentric epicycle theory have been found using more accurate measurements of planetary positions. Instead of everyone trying different things to solve the problem, the mainstream simply adds more epicycles to cover up the discrepancies (exactly like today’s string theorists addressing the problems of the Standard Model), while heretics such as Copernicus try to resurrect Aristarchus’s solar system, complete with its own system of circular orbits and epicycles to explain retrograde motion. Nobody works on elliptical orbits. Both the mainstream and the heretics work on false models. Everyone wrongly agrees that circles are the most beautiful mathematical tool and that nature must have planets moving in circles: they only disagree on whether the earth or the sun is the centre.

Arthur Koestler’s 1959 “The Sleepwalkers: A History of Man’s Changing Vision of the Universe” actually counted the epicycles and found 40 in Ptolemy’s Earth-centred system of the “Almagest” (150 A.D.), versus 80 in Copernicus’s solar system (which used circular orbits with epicycles, instead of ellipses as Kepler later did). This was contrary to the prevailing history of science, which insisted that Copernicus was accepted on the basis of Occam’s Razor, due to having fewer epicycles than Ptolemy. Actually, sometimes more complex theories are closer to nature, and there were different reasons why Copernicus was preferred. (Viz: Mercury and Venus are always observed from Earth on a bearing within 90 degrees of the sun’s position, a fact which is explained very simply in the solar system model by Mercury and Venus having orbits closer in to the sun than the Earth’s orbit. Additionally, the apparent size of the Moon seen from Earth in Ptolemy’s model should vary by a factor of two monthly due to its epicycles, when in fact it shows no such variation.)

The road ahead, by analogy to the road to the atom: How our knowledge of matter was developed through guesswork, experiment-forced correction of simplistic theory, and a reluctant acceptance of unpredicted complexity

‘In considering the history of thought, it is necessary to distinguish the real stream, determining a period, from the ineffectual thoughts casually entertained. In the eighteenth century every well-educated man read Lucretius and entertained ideas about atoms. But John Dalton made them efficient in the stream of science …’ – Alfred North Whitehead, Science in the Modern World, Harvard, 1925.

‘… John Dalton made the theory quantitative. By showing how the weights of different atoms relative to one another could be determined, he introduced a feeling of reality into a purely abstract idea. … Most of his weights were subsequently proved to be erroneous, but Dalton sowed the seed which grew, where others had previously merely turned over the soil. … It provided an explanation or, at least, an interpretation of many chemical facts and, of greater consequence, it acted as a guide to further experimentation and investigation. … A fact may be defined as something for the actual existence of which there is definite evidence. A theory or hypothesis, on the other hand, is a purely conceptual attempt to explain or interpret known facts. While facts are presumably established and unalterable, a theory may be altered or discarded if it proves to be inadequate.’ – Samuel Glasstone, Sourcebook on Atomic Energy, D. van Nostrand, 2nd ed., New York, 1958, pp. 2-3. (Copyright of the U.S. Government.)

There were two rival theories of matter in Ancient Greece, circa 500 B.C. Leucippus and his student Democritus thought matter was ultimately composed of void and tiny fundamental ‘atoms’ (the name meaning ‘indivisible’, from the Greek a-temnein, ‘not to cut’). Empedocles and Aristotle rejected the atomic hypothesis, preferring a theory in which all matter is formed from a combination of one or more of four fundamental ‘elements’: air, earth, fire and water.

In 1774, Antoine Laurent Lavoisier proved that air is not an element but a mixture of (mainly) nitrogen and oxygen, only the latter of which supports combustion. In 1781, Joseph Priestley and Henry Cavendish similarly debunked the theory that water is a fundamental element, by proving it to be a compound of hydrogen and oxygen. In 1808, John Dalton’s New System of Chemical Philosophy was published, containing relative masses measured for various types of atom, i.e., various elements.

Dalton made serious errors in assuming that water was a simple compound of equal numbers of hydrogen and oxygen atoms (HO), and that ammonia was similarly simple (NH). These and other errors led Dalton to deduce atomic masses for oxygen and nitrogen of about 7 and 5 relative to hydrogen; applying the true formulae H2O and NH3 to his combining ratios would instead have given about 14 and 15, while the accurate values are 16 and 14. Despite the errors, Dalton’s idea caused progress.
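The arithmetic behind Dalton’s error is easy to reproduce: given the measured mass of an element combining with a unit mass of hydrogen, the inferred atomic mass depends entirely on the assumed formula. A minimal sketch (combining ratios rounded for illustration):

```python
def atomic_mass_from_ratio(mass_ratio_to_H, h_atoms, x_atoms):
    """Infer the atomic mass of element X (relative to H = 1) from the
    measured mass of X combining with unit mass of hydrogen in H_n X_m."""
    return mass_ratio_to_H * h_atoms / x_atoms

# Water: roughly 8 g of oxygen per 1 g of hydrogen (Dalton's own figure was nearer 7).
print(atomic_mass_from_ratio(8, h_atoms=1, x_atoms=1))  # 8.0  (Dalton's HO assumption)
print(atomic_mass_from_ratio(8, h_atoms=2, x_atoms=1))  # 16.0 (correct formula H2O)

# Ammonia: roughly 14 g of nitrogen per 3 g of hydrogen, i.e. ~4.67 per 1 g.
print(atomic_mass_from_ratio(14 / 3, h_atoms=1, x_atoms=1))  # ~4.67 (Dalton's NH assumption)
print(atomic_mass_from_ratio(14 / 3, h_atoms=3, x_atoms=1))  # ~14   (correct formula NH3)
```

The same combining-ratio data thus yields very different atomic masses depending on the assumed formula, which is exactly how Dalton went wrong.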

In 1811, Amedeo Avogadro argued that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, so that the density of any gas is directly proportional to the relative mass of its constituent molecules:

‘Setting out from this hypothesis, it is apparent that we have the means of determining very easily the relative masses of the molecules of substances obtainable in the gaseous state.’

For practical purposes, Avogadro’s law led to the concept of the gram-molecule or ‘mole’: the relative molecular mass – measured roughly relative to hydrogen or, as defined precisely, relative to one-twelfth of the mass of carbon-12 – expressed in grams is the mass of one mole. E.g., for water (H2O), one mole is roughly 2 + 16 = 18 grams. According to Avogadro’s law, one mole of any gas occupies a volume of 22.4 litres at 1 atmosphere pressure and 0 °C. In 1905, Albert Einstein worked out a diffusion equation for the Brownian motion of small grains buffeted by molecules, which the experimentalist Jean Perrin used from 1908 onwards to establish that there are about 6.022 x 10^23 molecules in one mole of any substance. This permitted the masses of different atoms to be estimated.
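The mole bookkeeping above can be put into a few lines (the standard values for Avogadro’s number and the molar volume of an ideal gas at STP are assumed):

```python
AVOGADRO = 6.022e23        # molecules per mole (Perrin's constant)
MOLAR_VOLUME_STP = 22.4    # litres per mole of ideal gas at 0 °C, 1 atm

def moles(mass_g, molar_mass_g):
    """Number of moles in a sample of given mass."""
    return mass_g / molar_mass_g

# 18 g of water (H2O, molar mass roughly 2 + 16 = 18 g/mol) is one mole:
n = moles(18.0, 18.0)
print(n)                       # 1.0 mole
print(n * AVOGADRO)            # 6.022e+23 molecules

# One mole of any ideal gas at STP occupies ~22.4 litres:
print(2.0 * MOLAR_VOLUME_STP)  # 44.8 litres for 2 moles
```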

In 1816, William Prout stated his hypothesis that all atomic masses were integral multiples of the mass of hydrogen. This was statistically defended by Lord Rayleigh in 1901:

‘The atomic weights tend to approximate to whole numbers far more closely than can reasonably be accounted for by any accidental coincidence … the chance of any such coincidence being the explanation is not more than 1 in 1000.’
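Rayleigh’s style of argument can be imitated with a toy Monte Carlo test: if atomic weights had random fractional parts, the chance of so many early-known weights falling close to whole numbers would be tiny. (The weight list, tolerance and simulation are illustrative assumptions, not Rayleigh’s actual data or method.)

```python
import random

# Modern weights of some light elements (illustrative sample, not Rayleigh's data).
weights = [1.008, 4.003, 12.011, 14.007, 15.999, 22.990, 26.982, 30.974]

def near_integer_count(values, tolerance=0.05):
    """How many values fall within `tolerance` of a whole number."""
    return sum(1 for v in values if abs(v - round(v)) <= tolerance)

observed = near_integer_count(weights)
print(observed)  # all 8 are within 0.05 of an integer

# Null hypothesis: fractional parts are uniform, so each value has
# probability 2 * tolerance = 0.1 of landing that close to an integer.
random.seed(0)
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if near_integer_count([random.random() * 100 for _ in weights]) >= observed
)
print(hits / trials)  # essentially zero: expected rate is ~0.1**8
```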

Detractors of Prout’s hypothesis pointed out that the accurately measured masses of chlorine and copper (about 35.5 and 63.5) were definitely not integers. Instead of the hypothesis being abandoned as empirically false, the non-integer masses were later explained by the discovery of isotopes – variable numbers of neutrons in the nuclei of atoms of a given element – together with the mass equivalent of the nuclear binding energy. The presence of neutrons was not merely a problem in producing non-integer average masses for some elements: neutrons also introduced complexity into the relationship between the chemical properties of elements and their relative masses.
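The isotope explanation is simple weighted-average arithmetic. Using the standard abundances of chlorine’s two stable isotopes (about 75.8% Cl-35 and 24.2% Cl-37, with integer mass numbers for illustration) reproduces the anomalous 35.5:

```python
def weighted_atomic_mass(isotopes):
    """Abundance-weighted average over (mass, fractional abundance) pairs."""
    return sum(mass * abundance for mass, abundance in isotopes)

# Chlorine: ~75.8% Cl-35 and ~24.2% Cl-37.
chlorine = [(35, 0.758), (37, 0.242)]
print(weighted_atomic_mass(chlorine))  # ~35.48, close to the measured 35.45
```

Each isotope’s mass is close to an integer, as Prout required; only the abundance-weighted average is not.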

John Newlands in 1865 tried to arrange the known elements into a table on the basis of their weights and chemical properties. He discovered a ‘law of octaves’ in which every eighth element, ordered by atomic weight, resembles the first. However, because Newlands left no gaps for undiscovered elements, his law was simplistic, and wrongly related iron with sulphur and gold with iodine.

In 1869, Mendelyeev published his periodic table that correctly associated the properties of elemental atoms of different masses by allowing some empty spaces in the table to ensure that the properties in each column correlated correctly. Three of the gaps were soon filled by the discovery of gallium in 1875, scandium in 1879 and germanium in 1886, which had the properties predicted by Mendelyeev. As a result of these correct predictions, Mendelyeev’s periodic table was taken seriously and was first reported in English in the London Chemical News journal of December 1875.

Mendelyeev’s periodic table contains vertical columns correlating the chemical properties of elements and horizontal ‘periods’ containing an increasing number of elements: 2 in the first period (hydrogen and helium), 8 in each of the next two periods (lithium to neon, and sodium to argon), followed by 18 in the following period (potassium to krypton).

The Pauli exclusion principle of quantum theory, by limiting the number of electrons which can occupy each shell and subshell of an atom, explains this periodic table of the elements.
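The counting can be sketched: the Pauli principle caps shell n at 2n² electrons (2 spin states times n² orbital states), while the observed period lengths 2, 8, 8, 18, … follow from the order in which subshells actually fill (the Madelung rule), hardcoded here for illustration:

```python
def shell_capacity(n):
    """Pauli exclusion: 2 spin states x n**2 orbital states per shell."""
    return 2 * n ** 2

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]

# Period lengths follow the Madelung filling order of subshells, not the
# raw shell capacities. Each period opens with an s subshell (l = 0), and
# subshell (n, l) holds 2 * (2l + 1) electrons.
FILLING_ORDER = [  # 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
    (1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (4, 0), (3, 2), (4, 1),
    (5, 0), (4, 2), (5, 1), (6, 0), (4, 3), (5, 2), (6, 1), (7, 0),
    (5, 3), (6, 2), (7, 1),
]

periods, current = [], 0
for n, l in FILLING_ORDER:
    if l == 0 and current:   # a new s subshell opens the next period
        periods.append(current)
        current = 0
    current += 2 * (2 * l + 1)
periods.append(current)

print(periods)  # [2, 8, 8, 18, 18, 32, 32]
```

The first four values match the 2, 8, 8, 18 pattern of Mendelyeev’s periods noted above.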

One deep lesson from this history of matter is that we have to follow and model experimental data when theoretical guesswork fails or is uncheckable: science isn’t concerned with uncheckable speculations. Another deep lesson is that the resulting theory of the atom wasn’t anything like the “beautiful” regular geometric solids idea of the ancient Greeks, and the atoms were not even unsplittable. If you ignore and censor out advances without checking them, on the basis of a “gut reaction” that they don’t look pretty, you are in effect just a gutless supporter of mainstream groupthink. If the scientific evidence supports a model (be it intuitive, counterintuitive, simple, complex, “elegant” or “ugly”, a model with many fans and promoters, or none at all), you need to take it seriously. Otherwise, you’re just acting emotionally, not scientifically.

Update (20 November 2009):

Dr Woit has a new blog post about the increasing difficulties in trying to find a Higgs boson, “Higgs Escapes Part of Exclusion Region”:

“A thoroughly irresponsible person might see some significance in the fact that, unlike the analysis from earlier this year, the new improved analysis with more data does a worse job of exclusion than expected over much of the low mass range…”

The accurately predictive model for quantum gravity which we’re working on puts gravity into the Standard Model and shows that the speculative electroweak symmetry-breaking Higgs field is incorrect (not merely vague about the alleged Higgs boson masses).

Instead of all three SU(2) gauge bosons being massless at high energy (electroweak unification) and massive at low energy (broken electroweak symmetry, giving a short ranged left-handed weak force and a stronger long-range electromagnetic force), due to a Higgs field boson miring low-energy SU(2) gauge bosons but not miring high-energy SU(2) bosons, the actual mechanism for the difference between the weak and electromagnetic gauge interactions of nature is different, although it gives mathematically identical predictions for low energy (i.e. for “broken symmetry” conditions in the Standard Model).

SU(2) even in the Standard Model is known to contribute to the U(1) electromagnetic interaction, because to make SU(2) work, Glashow had to “mix up” the massless neutral SU(2) gauge boson with the neutral U(1) gauge boson: the amount of mixing is an ad hoc adjustment to make the gauge theory match experiment, and it is denoted by the Weinberg mixing angle. U(1), popularly described as the electromagnetic interaction, is therefore not pure electromagnetism even in the Standard Model. It is actually a hypothetical “hypercharge” which is not observed in nature as a fundamental interaction, and electromagnetism is not the same thing as U(1). The gauge boson of electromagnetism, the photon, emerges in the Standard Model only from the mixing of the neutral SU(2) boson with the U(1) boson.

Hence, the Standard Model mixes the U(1) and SU(2) neutral gauge bosons according to the Weinberg mixing angle fudge-factor and produces two mixed bosons, one of which represents electromagnetism while the other is the neutral Z0 weak boson (which, at low energy, then acquires mass from the vague Higgs fairy field epicycle). The discovery of “neutral currents” (i.e. simply neutral Z0 exchanges) in the early 1970s and of the massive weak gauge bosons at CERN in 1983 confirmed the model’s low-energy predictions, but did not confirm the Higgs mechanism for the alleged electroweak unification at high energy. The empirically confirmed parts of the Standard Model prove only that U(1) and SU(2) are mixed according to Glashow’s “Weinberg mixing angle”, even at low energy (broken symmetry).

To put quantum gravity into a gauge theory (gravity, like electromagnetism, being a long-range interaction), we need to take account of the fact that electromagnetism even in the Standard Model is not purely U(1) but a mixture of the neutral gauge bosons of both U(1) and SU(2). Clearly U(1) and SU(2) are not simply electromagnetism and weak interactions at low energy: they have to be mixed up. The existing mixing technique doesn’t include quantum gravity (with mass-energy as the “charge” of the quantum gravity interaction), so when quantum gravity is added to the Standard Model, the existing epicycle-type Weinberg mixing scheme and the Higgs symmetry breaking scheme will have to be replaced. Feynman makes this point in the final chapter of his 1985 book QED, where he points out that the Standard Model as a unification scheme is inelegantly glued together by ad hoc epicycles and needs improvement: sure, says Feynman, empirical data confirms that there must be a connection between the electromagnetic photon, the neutral W0 of SU(2) and the observed weak Z0; but the fact that there must be a connection doesn’t prove that the particular ad hoc duct tape-fix employed in the Standard Model is the right way to connect them!

The fact that the weak interaction works only on left-handed spinors is really a clue that SU(2) is not purely a weak force model: it’s as if only some left-handed SU(2) bosons acquire mass. By giving mass to only half the SU(2) gauge bosons for weak interactions, we get the other half as massless SU(2) gauge bosons which include two charged, massless bosons that better represent the mechanism for electromagnetic gauge interactions because the “virtual photon” needed for gauge interactions has 4 polarizations: 2 more than the real photon are required in order to explain attraction of unlike charges, although the nature of the additional polarizations isn’t defined in the Standard Model. These additional 2 polarizations are positive and negative electric charge for the electromagnetic gauge boson.

The force field around an electron core is mediated by negatively charged gauge bosons giving the negative electric charge we observe there. Notice that nobody has ever seen the charged “core” of an electron as no particle accelerator can reach the Planck scale, so what is reported as electron charge is actually just what is observed in the way of the field we can observe around the core of the charged fundamental particle.

Normally, a charged gauge boson will imply a Yang-Mills (non-Abelian) gauge interaction in which the interaction can result in the charge of the fundamental particles changing: this doesn’t occur with electromagnetism (so the extra term in the Yang-Mills equations is effectively zero, reducing them to simply the Maxwell equations), because the equilibrium of exchange of electromagnetic gauge bosons is controlled by the magnetic self-inductance effect.

The propagation of real charged massless photons is impossible in one direction in a vacuum because of magnetic self-inductance, but the magnetic field vector is simply cancelled out for virtual photons which are being exchanged (travelling in both directions between two particles), since the directions of the magnetic vectors exactly oppose each other. This is what makes the Yang-Mills SU(2) massless charged gauge boson electromagnetic interaction reduce to simply the Maxwell equations, instead of the Yang-Mills equations which would otherwise permit the charge of an electron to change in an electromagnetic interaction (due to the net transfer of charge by a charged gauge boson).

Because of the self-inductance effect on the propagation of massless charged gauge bosons in the vacuum, the amount of electromagnetic charge (though not necessarily energy) exchanged by charged virtual (off-shell) gauge bosons between two real (on-shell) charged particles must be in equilibrium, with equal rates of charge emitted and received by each real charge: the physical mechanism requiring this equilibrium therefore prohibits the special (charge-varying) term in the Yang-Mills equations from operating for massless charged gauge bosons. For massless charged gauge bosons, the special term is always zero, so the non-Abelian Yang-Mills equations of SU(2) in that case automatically reduce to the Abelian Maxwellian equations.

This is not so for massive charged gauge bosons, e.g. the SU(2) gauge bosons which acquire mass: because they are massive, the magnetic self-inductance problem does not apply, and there is therefore no physical mechanism demanding a perfect equilibrium in the exchange of charge by charged gauge bosons moving between real charges. For massive charged gauge bosons, the Yang-Mills equations therefore do not automatically simplify to the Abelian Maxwellian equations.

This model changes the nature of the electroweak U(1) x SU(2) gauge theory, allowing the long range spin-1 neutral gauge boson to be the graviton, instead of the electromagnetic virtual photon.