Building upon solid factual foundations, not hot air

“When mankind is faced with an opportunity to embark on any great undertaking, there are always three human weaknesses that devilishly hamper our efforts. The first is an inability to define or agree upon our objectives. The second is an inability to raise sufficient funds. The third is the fear of a disastrous failure.” – Freeman Dyson, The Scientist as Rebel, 2008, p. 301

Above: Professor Freeman Dyson had to battle with Oppenheimer to get a fair hearing for Feynman’s path integrals quantum field theory. Dyson writes in his New York Review of Books (13 May 2004) review of string theorist Brian Greene’s book Fabric of the Cosmos (this review is included in Dyson’s 2008 book, The Scientist as Rebel):

“… there is always a tension between revolutionaries and conservatives, between those who build grand castles in the air and those who prefer to lay one brick at a time on solid ground. … in the late 1940s and early 1950s, the revolutionaries were old and the conservatives [e.g. Feynman and Dyson] were young. The old revolutionaries were Albert Einstein, Dirac, Heisenberg, Max Born, and Erwin Schroedinger. Every one of them had a crazy theory that he thought would be the key to understanding everything. Einstein had his unified field theory, Heisenberg had his fundamental length theory, Born had a new version of quantum theory that he called reciprocity, Schroedinger had a new version of Einstein’s unified field theory that he called the Final Affine Laws, and Dirac had a weird version of quantum theory in which every state had probability either plus two or minus two. … In Dirac’s Alice-in-Wonderland world, every state happens either more often than always or less often than never. Each of the five old men believed that physics needed another revolution as profound as the quantum revolution that they had led twenty-five years earlier. Each of them believed that his pet idea was the crucial first step …

“Young people like me saw all these famous old men making fools of themselves, and so we became conservatives. The chief young players then were Julian Schwinger and Richard Feynman in America and Sin-Itiro Tomonaga in Japan. Anyone who knew Feynman might be surprised to hear him labeled a conservative, but the label is accurate. Feynman’s style was ebullient and wonderfully original, but the substance of his science was conservative. … the old revolutionaries were still not convinced. … I brashly accosted Dirac … Dirac, as usual, stayed silent for a while before replying. ‘I might have thought that the new ideas were correct,’ he said, ‘if they had not been so ugly.’ That was the end of our conversation.

“Einstein too was unimpressed by our success. During the time that the young physicists at the Institute for Advanced Study in Princeton were deeply engaged in developing the new electrodynamics, Einstein was working in the same building and walking every day past our windows on his way … He never came to our seminars and never asked us about our work. To the end of his life, he remained faithful to his unified field theory.”

The physics Nobel Laureate Frank Wilczek states that Feynman became a closet ether believer when he couldn’t get rid of vacuum interactions:

“As for Feynman … He told me he lost confidence in … emptying space when he found that both his mathematics and experimental facts required the kind of vacuum polarization modification of electromagnetic processes depicted – as he found it, using Feynman graphs … the influence of one particle on another is conveyed by the photon … the electromagnetic field gets modified by its interaction with a spontaneous fluctuation in the electron field – or, in other words, by its interaction with a virtual electron-positron pair. In describing this process, it becomes very difficult to avoid reference to space-filling fields. The virtual pair is a consequence of spontaneous activity in the electron field. It can occur anywhere.”

Frank Wilczek, The Lightness of Being: Mass, Ether, and the Unification of Forces, Basic Books, N.Y., 2008, p. 89. (See also press release linked here.)

Wilczek, like many others who claim that the vacuum is full of spontaneous virtual pair production, is incorrect in claiming that the virtual pair can occur anywhere: physics Nobel Laureate Julian Schwinger proved in 1951 that pair production can’t occur just anywhere, because there is a threshold field strength for spontaneous pair production, reached at a distance of 33 femtometres from a real (i.e., a particle “on-shell”, or on the relativistic mass shell) unit charge like a real electron: polarizable virtual pair production requires an electromagnetic field strength exceeding 1.3 × 10^18 V/m, which is the field strength at a distance of 33 fm from an electron. Schwinger’s equation for this threshold on the pair production polarizability of the vacuum can be found as equation 359 in Freeman Dyson’s quantum field theory introduction, http://arxiv.org/abs/quant-ph/0608140, or as equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040.
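As a rough numerical check on those two figures (a minimal sketch in SI units; the constants are standard values and the script is purely illustrative), Schwinger’s threshold E = m^2 c^3/(e h-bar) can be evaluated and compared with the distance at which an electron’s Coulomb field falls to that strength:

# Rough check of the Schwinger pair-production threshold quoted above:
# E_c = m^2 c^3 / (e * hbar), and the distance at which an electron's
# Coulomb field drops to E_c.  Approximate SI constants.
from math import pi, sqrt

m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J s
eps0 = 8.854e-12    # vacuum permittivity, F/m

E_c = m_e**2 * c**3 / (e * hbar)          # Schwinger threshold field, V/m
r   = sqrt(e / (4 * pi * eps0 * E_c))     # radius where the Coulomb field equals E_c

print(f"Schwinger threshold field: {E_c:.2e} V/m")                            # ~1.3e18 V/m
print(f"Distance at which an electron's field reaches it: {r*1e15:.0f} fm")   # ~33 fm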

So why the hell doesn’t Nobel Laureate Wilczek know this basic fact? Why all the obfuscating BS on QFT from these people? (Possibly part of the reason is a misunderstanding of the Casimir effect. Instead of the virtual radiation that pushes the metal plates together being generated locally by spontaneous pair production and annihilation, the vacuum contains long-range virtual radiation. A positive charge and a negative charge don’t “know” that they should stop radiating field quanta just because they are nearby; the apparent “absence” of a net electric field at a distance from a “neutral” atom is due to the positive and negative fields superimposing to give a neutral field, not to the fields being absent altogether! This is justified by field energy conservation! Every fundamental particle behaves like a black hole, radiating off-shell bosons, i.e. virtual Hawking radiation, which I think is the basis of quantum field theory. Because particles are all radiating, they have long since attained an equilibrium, and thus normally receive the same flux that they radiate. Moving a particle causes a force, such as inertia, to be experienced, because it disturbs the equilibrium. Accelerating a charge causes an imbalance in the equilibrium which we see as “real” or “on-shell” radiation, like the radio waves emitted by accelerating electrons in an antenna. Rueda and Haisch have published papers on this “Stochastic Electrodynamics”, but I think they have confused the gravitational and electromagnetic field quanta, so I’m now writing up a paper which explains everything properly.)

This threshold distance and the corresponding collision energy of around 1 MeV are known as the infrared cutoff, the low-energy limit of the running coupling: a logarithmic relationship between the effective or “screened” charge of a particle and the collision energy, which determines how close together charges come in interactions. Without this ~1 MeV infrared cutoff at 33 fm, the vacuum could continue to polarize and thus continue to reduce the observable electric charge of an electron at ever greater distances, so the electronic charge would keep falling off with distance, and the Coulomb force law at long distances would then fall faster than the simple geometric inverse-square of distance, because of the fall in charge with distance! That’s proof that the infrared cutoff or pair production threshold is real and needed in the theory, to make it work in the real world.
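To illustrate the logarithmic running above that infrared cutoff, here is a minimal sketch using the standard one-loop QED formula with only the electron loop included (the collision energies below are merely illustrative; the value actually measured at the Z mass is nearer 1/128, because heavier charged particles also contribute loops):

# One-loop QED running coupling, electron loop only, valid well above the
# ~1 MeV infrared cutoff discussed in the text.  Illustrative sketch.
from math import log, pi

alpha_0 = 1 / 137.036   # low-energy (fully screened) fine structure constant
m_e     = 0.000511      # electron mass-energy in GeV, i.e. the infrared cutoff scale

def alpha_eff(Q_GeV):
    """Effective coupling seen in a collision at energy Q (GeV), for Q >> m_e."""
    return alpha_0 / (1 - (alpha_0 / (3 * pi)) * log((Q_GeV / m_e) ** 2))

for Q in (0.001, 1.0, 91.0):   # ~1 MeV, 1 GeV, ~Z mass
    print(f"Q = {Q:7.3f} GeV   1/alpha ~ {1 / alpha_eff(Q):.1f}")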

The name “infrared cutoff” originates with the analogy that the spectrum of light is cut off by the atmosphere at low energies (long wavelengths, infrared), since water vapor has a wideband infrared absorption. Similarly, wavelengths of light shorter than violet (i.e. ultraviolet) disappear due to a high-energy cutoff, since oxygen and ozone in the atmosphere absorb short wavelengths, producing a cutoff at that frequency in the spectrum. (This is why we have evolved to see using the abundant visible light spectrum available in the daytime, and can’t see the infrared light given off by warm objects at night without special imaging equipment.) The virtual pair production doesn’t fill the entire vacuum; instead, it is limited to the strong electric field radius out to just 33 fm from an electron. Beyond that distance, the Coulomb field is simply too weak to boil off virtual fermion pairs from the unobservable ground state of the vacuum to a free state that has observable consequences.

The next error Wilczek makes is referring to the vacuum as an “ether” (in the book’s title and numerous places in the text), a “Higgs condensate” (page 96), and a “multi-layered, multi-colored superconductor” (page 97). Because of the existence of the infrared cutoff on the electromagnetic running coupling, the vacuum over most of its volume is far less exciting than Wilczek claims! There are similar cutoffs for the other fundamental interactions, and the range of the cutoffs is roughly the limiting range of the short-range interactions like the strong nuclear force.

This makes you realise that the potential energy absorbed from the electromagnetic field by the polarization of the virtual fermions (which themselves have virtual bosonic fields, so that the absorption of the real fermion’s field energy by the polarized vacuum ends up creating virtual bosonic fields that can mediate the short-range weak and strong nuclear interactions!) is what provides the energy of the short-range weak and strong virtual boson fields. This is a real bootstrap theory of short-range interactions, based entirely on experimentally defensible facts. What happens physically in detail is this. The Coulomb field quanta are intense enough above Schwinger’s threshold to knock pairs of virtual fermions free, on borrowed energy. As such they have a short lifetime before they annihilate back into field quanta. However, the strong Coulomb field does something very important in the brief period that the virtual fermion pair exists: it polarizes the pair, bringing the virtual fermion of unlike sign closer to the real electron (attraction) and repelling the virtual fermion of like sign. This polarization process sucks in energy from the Coulomb field, which is used to drive the virtual fermions further apart, lengthening their duration of existence and therefore lengthening the duration of existence of the virtual bosonic fields which accompany them, and which can also be used to mediate the short-ranged weak and strong interactions. Therefore, we have a mechanism for exactly how the energy attenuated from the electromagnetic field is converted into the energy used to mediate weak and strong interactions.

In other words, as illustrated before on this blog, the mainstream unification approach is entirely wrong: instead of all fundamental forces being “unified” by fiddling the theory to make the couplings for all the different interactions equal at very high energy (the ultraviolet cutoff, assumed without proof to be the Planck scale by mainstream speculators), by using the ad hoc “epicycles” of SUSY, “unification” has an entirely different meaning and outcome, depending on energy conservation rather than numerical similarity of couplings: it is actually to be found in the way that much of the electromagnetic energy from the real (on-shell) charge is absorbed by the “screening” polarized vacuum within 33 fm of it, and is then converted into short-ranged nuclear interaction fields by way of the energy gained by the virtual bosons produced by the virtual fermion pairs that surround the real (on-shell) charge!

I’ve never met Frank Wilczek, but he and his colleagues are making my job very difficult indeed. Not only can I not explain the new facts straight off, having first to explain quantum field theory, since such people have tried to submerge it with drivel, but I’ve also got to explain that the parts of quantum field theory they have claimed to popularize are totally false, since they apparently don’t understand what they are doing in physical terms. Sure they got grade A+ in mathematics exams, sure they can use the existing theory to make calculations that win prizes, but they go about it in a wooden way, and can’t think clearly outside the box even about the fundamentals. Here’s another example to enjoy:

FIRST QUANTIZATION LIE DEBUNKED BY FEYNMAN; WHY THE UNCERTAINTY PRINCIPLE ISN’T THE EXPLANATION FOR ATOMIC CHAOS

Heisenberg and Schroedinger’s first-quantization quantum mechanics uses the lie of applying the uncertainty principle to the position and momentum of real fundamental particles, falsely representing real particles by a wave function, instead of correctly quantizing by introducing uncertainty with field operators (quantum field theory). Nobel laureate Richard P. Feynman tried to debunk the mainstream first-quantization uncertainty principle of quantum mechanics. Instead of the uncertainty principle directly “explaining” the chaotic motion of real (long-lived) particles like orbital electrons, the indeterminate electron motion in the atom is caused by second-quantization, i.e. by the Coulomb field quanta randomly deflecting the electron in the atom:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits]”, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [Young double slit experiment] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on.

“The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [for individual possible field quanta interactions, instead of using the average, the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

Thus Feynman’s path integral quantum field theory formulation is not (as popularly claimed) just an addition to the 1920s quantum mechanics foundation laid by Heisenberg; it rebuilds and reformulates quantum mechanics upon new foundations (second rather than first quantization), getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific first-quantization lying hype like ‘entanglement’, which has never been scientifically shown to exist, but which is just an artifact (like “epicycles”) produced by trying to fit a false theory to the real world:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time. … there are problems for which the new point of view offers a distinct advantage.’

– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

– Richard MacKenzie, Path Integral Methods and Applications, http://arxiv.org/abs/quant-ph/0004090, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

There are serious failures of first-quantization aside from its nonrelativistic Hamiltonian time dependence, leading to Bohring wavefunction collapse hype: the wavefunction of an electron doesn’t exist in the real, second-quantization world. Hence there is no wavefunction of a real particle that can collapse! Instead of uncertainty being caused by real particles having wavefunctions, uncertainty is caused by the quantum Coulomb field randomness on atomic distance scales. First-quantization uses the metaphysical wavefunction of a real particle, but keeps the Coulomb field classical and deterministic. Second-quantization rejects the wavefunction model of the real particle, and introduces chaos realistically by quantizing the Coulomb field by using quantum field operators. Hence, second-quantization debunks all wavefunction “collapse” results of first-quantization upon real particles, which move chaotically simply due to the Brownian-motion like effect of field quanta exchange, not due to the metaphysics of the “uncertainty principle” acting directly on the position and momentum:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the [first-quantization wavefunction] equations used to model the physics, it is not inherent in the physics.”

– Thomas Love, California State University, Towards an Einsteinian Quantum Theory (unpublished paper available by email).

Freeman Dyson writes on pages 221-2 of his 2008 book The Scientist as Rebel that the Heisenberg-Bohr based “entanglement” of the wavefunctions stems from the dualistic interpretation which

“says that the classical world is a world of facts while the quantum world is a world of probabilities. Quantum mechanics predicts what is likely to happen while classical mechanics records what did happen. This division of the world was invented by Niels Bohr, the great contemporary of Einstein who presided over the birth of quantum mechanics. Lawrence Bragg, another great contemporary, expressed Bohr’s idea more simply: ‘Everything in the future is a wave, everything in the past is a particle’.”

Or, to be still more precise, all “experiments” in the future on entanglement were to be fiddled to appear to justify the false epicycle-like theory of entanglement, or they would be censored out like Caroline H. Thompson’s papers (the abstract of one follows):

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”

http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

One of the biggest lies about quantum field theory stems from the popularization of its use in perturbative corrections, such as calculating to many decimal places the trivial ~0.116% increase in the magnetic moment of leptons due to complex (e.g., spacetime loop) Feynman diagram interactions that produce small effects. This totally misses the point that Feynman makes in his 1985 book QED, which is precisely that you don’t need to consider the infinite series of perturbative corrections for 99.9% of the physics of daily life: the ~0.1% correction is trivial and relatively unimportant (it is hyped up to prove that the perturbative correction from the path integral formulation predicts the experimental data very accurately, not to prove that the trivial correction is of overwhelming importance in day-to-day physics!). What matters more is the 99.9% of the physics in the path integral which does not rely on complex looped Feynman diagrams, but is based on the simpler tree-like Feynman diagrams, describing how electrons mostly interact by a very simple (not complex!) exchange of field quanta:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, Penguin Edition, London, pp. 57-8.
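Incidentally, the ~0.116% magnetic moment increase mentioned above is just the first (one-loop) term of that perturbative series, Schwinger’s a = alpha/(2*pi); a trivial numerical check (purely illustrative):

# First-order (one-loop) correction to the lepton magnetic moment,
# Schwinger's a = alpha/(2*pi): the ~0.116% correction referred to above.
from math import pi

alpha = 1 / 137.036
a = alpha / (2 * pi)
print(f"a = alpha/(2*pi) = {a:.6f}  (about {100 * a:.3f} % above the Dirac moment)")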

Dirac’s matrix mechanics is equivalent to Schroedinger’s wave mechanics: both – as originally (and as popularly) presented – are “first quantization” quantum mechanics, i.e. they quantize (or introduce discontinuities into) physics wrongly, by falsely applying the uncertainty principle to the position and momentum of “real” (long lived, i.e. on the relativistic mass-shell) electrons, instead of applying the uncertainty principle to the “virtual” (short-lived, i.e. off the relativistic mass-shell) quanta of the electric field and having these Coulomb field quanta make the electron orbits chaotic in the atom.

In first quantization, the electric field between charges is treated classically (a falsehood), while the position and momentum product of the charges is quantized by simply setting that product equal to integer multiples of h-bar (Planck’s constant divided by twice Pi). This lie is very useful mathematically. E.g., suppose you want to calculate the orbit of the electron’s ground state in a hydrogen atom. The product of ground state radius R and electron momentum p = mv is then h-bar (or Planck’s constant divided by twice Pi):

(R)(mv) = h/(2*Pi)

or

R = h/(2*Pi*mv)

The electron velocity in the ground state, v ~ c/137 = alpha * c, so:

R = (h-bar)/(m * alpha * c)

which is the correct ground state (Bohr) radius for hydrogen.
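Putting numbers in (a minimal check of the formula above, using standard SI constants):

# Numerical check of R = hbar / (m * alpha * c), the Bohr radius.
hbar  = 1.055e-34    # J s
m_e   = 9.109e-31    # kg
alpha = 1 / 137.036  # fine structure constant
c     = 2.998e8      # m/s

R = hbar / (m_e * alpha * c)
print(f"R = {R:.3e} m")   # ~5.3e-11 m, the textbook Bohr radius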

Bohr came up with first-quantization empirically, by trial and error: he called the empirical method of establishing facts in quantum mechanics the “correspondence principle” because he couldn’t get a physical explanation or derivation; he could justify his model solely on the basis that it corresponded to observation. Bohr discovered that if the orbital angular momentum of an electron, Rmv, is an integer multiple of h-bar, i.e. Rmv = n * h-bar, where n is an integer (n = 1, 2, 3, …), then the resulting energy levels of the electron are those observed for line spectra and modelled by the Balmer formula. de Broglie afterwards came up with a physical explanation for Bohr’s Rmv = n * h-bar, called particle-wave duality.

de Broglie interpreted this first-quantization of Bohr, i.e. the quantization of orbital angular momentum, as implying that the electron is like a standing wave; so to avoid destructive interference, a whole (integer) number, n, of wavelengths must fit along the circumference of the orbit: n * lambda (wavelength) = 2*Pi*R. This “interpretation” inspired Schroedinger to come up with his wave equation for the electron.
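Spelling the equivalence out (a one-step rearrangement using de Broglie’s relation lambda = h/(mv)):

\[ n\lambda = 2\pi R, \qquad \lambda = \frac{h}{mv} \quad\Longrightarrow\quad n\,\frac{h}{mv} = 2\pi R \quad\Longrightarrow\quad Rmv = n\,\frac{h}{2\pi} = n\hbar . \]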

Applying Schroedinger’s wave equation, you can do lots with the same simple principle, which makes first-quantization very attractive to teachers.

This first quantization is useful for calculating things mathematically (a bit like the success of Ptolemy’s epicycles for calculating the apparent motion of planets in the earth-centred universe before Copernicus and Kepler came along), but it is a lie if you want to physically understand what you are modelling with your mathematics, i.e. it is wrong regarding teaching the elementary basis of quantum mechanics and the nature of reality, because it falsely keeps the electric field classical while quantizing instead the momentum and position of real, relativistically on-mass-shell particles like orbital electrons. It was discovered to be wrong by Dirac in 1928 because it is non-relativistic, and Dirac’s second-quantization equation was confirmed by the discovery of the antimatter it predicted.

Despite this, the obfuscating falsehood of first quantization quantum mechanics is still taught in preference to the facts; it is defended by propaganda from people like Bohr, who claimed that Feynman didn’t know the uncertainty principle, while refusing to listen to Feynman’s path integral explanation for electron chaos. These people never bothered to learn or understand second-quantization, or, like Bohr, they rejected Feynman’s path integrals on religious grounds as a heresy. Another reason for teaching the lie is mathematical: the second-quantization mathematics uses quantum field operators, which can be harder to teach and to use for solving simple problems. (In other words, the situation is pathetic; a false theory is being taught because it is easier than teaching facts. Duh! Why not just teach flat earth theory?)

The uncertainty principle is not directly the basis for the atomic chaos and the discrete jumps of the electron between energy levels. Instead, the basis for these phenomena is the quantum field, which is second-quantization: the uncertainty principle statistically describes the quantum field causing the Coulomb force on the electron (not the position and momentum of the electron itself), and it is the randomness of the behaviour of the quanta of this electric field which makes the electron behave chaotically.

The second quantization of quantum mechanics by Dirac, Feynman, et al., i.e., quantum field theory, tells us that the electron doesn’t move chaotically due to a principle: it moves chaotically because the electromagnetic field is not classical but is composed of particles, quanta, being exchanged at random, causing a force which averages out on large scales (like the random air molecule bombardments appearing as steady air pressure over large areas or long periods of time, in the kinetic theory of gases), but is chaotic on small scales (like the random air molecule bombardments of small dust particles, giving rise to the random motion of the dust which is called Brownian motion). Feynman emphasizes this off-shell quantum field mechanism for the chaos of real on-shell particles in his 1985 book, QED.

The experimental confirmation of the Casimir force is a justification of second-quantization; the two conductive metal plates exclude from the space between them all virtual particles that don’t fit in (i.e. which have wavelengths longer than the distance between the plates), so only a partial spectrum exists between the two metal plates, whereas the full spectrum exists to press against the external sides of the plates, so they are pushed together with a predictable force.
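As a rough quantitative illustration, the standard idealized result for perfectly conducting plates is an attractive pressure P = pi^2 h-bar c/(240 d^4); the 1 micron separation below is just an assumed example:

# Idealised Casimir pressure between two perfectly conducting plates,
# P = pi^2 * hbar * c / (240 * d^4), evaluated at an assumed 1 micron gap.
from math import pi

hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
d    = 1e-6        # plate separation in metres (assumed example)

P = pi**2 * hbar * c / (240 * d**4)
print(f"Casimir pressure at d = 1 micron: {P:.2e} Pa")   # ~1.3e-3 Pa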

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

de Broglie’s first quantization waves versus second quantization

In second quantization, the chaos of the orbital electron is produced by the randomness of the electric field quanta interactions; field quanta are exchanged by charges. The ground state of hydrogen corresponds to the equilibrium where the rate of emission of field quanta by the electron is equal to the rate of reception from the surroundings. The resulting ground state oscillations of the electron in the atom constitute a particular resonant frequency of the system, just like the oscillation of electrons in the tuned antenna circuit of a radio receiver, making the atom susceptible to the receipt of energy like a tuning fork: the electron will resonate and jump from the ground state to an excited energy state when it receives a quantum of sufficient energy for that transition.

If a photon hits the electron with a frequency that mismatches the resonant frequency of the ground state oscillations of the electron-atom system, the result will be destructive interference and the electron won’t gain the energy to be promoted to a higher energy state. If the interference is constructive, the electron can be accelerated and will jump to a higher energy state. Almost exactly the same kind of shell-model resonance applies to the nucleus of heavy atoms, explaining why you get resonances in the cross-sections for interactions like fission as a function of neutron energy: some frequencies of neutron oscillations will constructively combine with the natural frequencies of nucleon shell oscillation in the nucleus, exciting the nucleus and causing fission, while others will cancel out.

de Broglie’s theory was that the ground state of a hydrogen atom is an oscillating electrical system with a wavelength of lambda = 2*Pi*h-bar/(mv), i.e. a frequency of f = v/lambda = mv^2/h. However, de Broglie couldn’t offer any consistent physical explanation for this wave oscillation, because in first-quantization the electric field is classical and perfectly constant (non-fluctuating) on all distance scales: Bohr’s theory predicted a planar circular orbit of the electron in the ground state, and couldn’t explain why the electron did not lose energy and spiral into the nucleus by the emission of radiation which classically accompanies the acceleration of charge, such as the centripetal acceleration due to the orbital motion.
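Putting numbers into de Broglie’s expression, with v = alpha*c for the ground state as used earlier (standard constants; this is only an order-of-magnitude illustration):

# Ground-state de Broglie frequency f = m*v^2/h, taking v = alpha*c.
m_e   = 9.109e-31    # kg
h     = 6.626e-34    # J s
c     = 2.998e8      # m/s
alpha = 1 / 137.036

v = alpha * c
f = m_e * v**2 / h
print(f"v = {v:.2e} m/s,  f = {f:.2e} Hz")   # ~2.2e6 m/s and ~6.6e15 Hz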

Quantum field theory explains all this by second-quantization: the electron is continuously radiating force-causing off-shell virtual radiation and it doesn’t spiral into the nucleus when it is in the ground state because there is then an equilibrium in the exchange of quanta, with the rate of reception equalling the rate of emission. The electron on average loses the same amount of energy each second as it gains, so it doesn’t fall into the nucleus (except for the well-known case of K-shell capture); it merely gets knocked around a lot chaotically, by the randomness of the quantum exchange interactions.

The oscillations in the ground state give the orbital electron a “natural frequency”, explaining Bohr’s rule Rmv = n*h-bar: if a photon arrives at the natural frequency, its energy can be captured by the electron and used to make it jump to a higher energy level. The atom is an oscillating electrical system like a tuned antenna circuit, a tuning fork, or a glass resonating strongly when radio waves or sound vibrations are received at its natural frequency, while not being so affected by other frequencies, which pass by unabsorbed.

The advantage of looking at this mechanistically is that it opens the door to progress, not just in understanding, but in asking and answering new questions which make checkable predictions. Accepting the electric and gravitational fields to be composed of exchanged off-shell quanta leads you to new checkable predictions, as discussed in earlier posts. The disadvantage is that mainstream physics treats this with “double-think”: quantum field theory is the basis of the Standard Model, yet there is a reluctance (to put it mildly) to face the physical consequences. There is a preference (to put it mildly) to treat quantum field theory as nothing more than an abstract mathematical model.

Feynman’s path integrals: the second-quantization mathematical model of QED, etc.

Feynman’s 1985 book QED explains in pictures the relativistically correct second-quantization formulation of quantum mechanics using path integrals:

Every quantum has a wave amplitude. In the real world, lots of quanta are sent out, on all possible paths. Most of the paths interfere with one another, so their amplitudes cancel out; consequently they appear to be invisible or “don’t occur”. To get the resultant, you sum the amplitudes, and many cancel out. The advantage of this is that it correctly models anomalies, e.g. the double slit experiment when photons are sent “one at a time”. Young’s wave interference applies because an apparently “single” photon that strikes the screen is actually the uncancelled superposition of photons which have travelled along all possible paths to get to the screen, many cancelling out on the way. Hence the product striking the screen is the superposition of photons which have travelled through both slits, provided that the slits are close enough that the path amplitudes can contribute significantly!
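A toy numerical version of this “adding arrows” picture for the double slit follows; every number in it (wavelength, slit spacing, distances) is an arbitrary assumption for illustration, and each of the two slit paths contributes a unit arrow exp(i*2*pi*path/lambda):

# Toy "sum over paths" for the double slit, in the spirit of Feynman's arrows:
# each path source -> slit -> screen point contributes exp(i*2*pi*path/lambda).
import numpy as np

wavelength      = 500e-9   # 500 nm light (assumed)
slit_separation = 50e-6    # 50 micron slit spacing (assumed)
source_to_slits = 1.0      # metres (assumed)
slits_to_screen = 1.0      # metres (assumed)
slits = np.array([-slit_separation / 2, +slit_separation / 2])

def intensity(x_screen):
    """Relative intensity at screen position x: |sum of the two path arrows|^2."""
    total = 0j
    for y_slit in slits:
        path = (np.hypot(source_to_slits, y_slit) +
                np.hypot(slits_to_screen, x_screen - y_slit))
        total += np.exp(2j * np.pi * path / wavelength)
    return abs(total) ** 2

for x in np.linspace(-0.02, 0.02, 9):   # screen positions in metres
    print(f"x = {x:+.3f} m   relative intensity = {intensity(x):.2f}")

The printout alternates between bright fringes (both arrows aligned, intensity 4) and dark fringes (arrows opposed, intensity near 0): Young’s interference recovered purely by summing path amplitudes.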

Feynman got this path integral formulated mathematically by considering the “action”, S, which is the time integral of the lagrangian, L: S = {Integral} L dt, where L = E – V, in which E is the kinetic energy and V is the potential energy.

The amplitude for any path is simply exp(iS/h-bar) where S is the action and h-bar (Planck’s constant h divided by twice Pi) is the quantum unit of action from E = hf.

The amplitude comes from Schroedinger’s time-dependent equation, H * Psi = i * h-bar * d Psi /dt, where H is the hamiltonian (energy operator) and Psi is the wavefunction; for constant H this has the solution (Psi_t) / (Psi_0) = exp[-i * H * t / h-bar], which has the same exponential-phase form as the path amplitude, exp(iS/h-bar). Integrating this over all the “configuration space” consisting of different paths (which have differing values of action S) allows the cancellation of interfering waves and the constructive interference of in-phase waves to be taken into account.
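A crude numerical illustration of why paths close to the classical one dominate: for a free particle taken from x = 0 to x = X in time T, a sinusoidal detour of amplitude a added to the straight-line path increases the action by dS = m*pi^2*a^2/(4T) (the cross term between the straight-line velocity and the detour integrates to zero), so its arrow lags the classical arrow by dS/h-bar radians. The masses, distances and times below are arbitrary choices for illustration:

# Extra phase dS/hbar of a detour x(t) = (X/T)*t + a*sin(pi*t/T) relative to the
# straight (classical) free-particle path; dS = m * pi^2 * a^2 / (4*T).
from math import pi

hbar = 1.055e-34   # J s

def extra_phase(m, T, a):
    """Phase lag (radians) of a sinusoidal detour of amplitude a behind the classical path."""
    return m * pi**2 * a**2 / (4 * T * hbar)

# An electron crossing 1 nm in 1 fs: detours up to about an angstrom stay in phase.
for a in (1e-11, 1e-10, 1e-9):
    print(f"electron, detour {a:.0e} m: phase = {extra_phase(9.1e-31, 1e-15, a):.2e} rad")

# A 1 gram mass moving for 1 second: only detours far below nuclear size stay in phase.
for a in (1e-16, 1e-12):
    print(f"1 g mass,  detour {a:.0e} m: phase = {extra_phase(1e-3, 1.0, a):.2e} rad")

For the electron on atomic scales whole swathes of paths contribute in phase (hence no definite orbit), while for the macroscopic mass everything but a vanishingly thin bundle around the classical path cancels out, which is the classical limit MacKenzie describes above.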

Feynman explains all this with diagrams without equations in “QED”. The problem with his treatment is that he didn’t know anything about the contrapuntal model for the charged capacitor or generally the cancellation of electric and magnetic fields in certain situations, so he presents the “wave amplitude” as being totally abstract (see post linked here for a full discussion of this). The usual formulation of the lagrangian in quantum electrodynamics is based on Maxwell’s classical field equations, which have a false time-dependence: for example Maxwell’s time-dependent correction to Ampere’s law (Maxwell’s displacement current term) assumes continuous rather than discrete time-dependence. Catt, Davidson and Walton published an attack on Maxwell’s equations in 1978 (the PDF is linked here) which claims that instead of a capacitor charging up classically (as Maxwell’s “displacement current” term predicts), it charges up in a series of small steps which only approximate Maxwell’s classical law when you have a large number of steps (a large amount of charge).

There is an error in the paper by Catt and others: they introduce discontinuity by the use of Heaviside’s discontinuous (vertical fronted) logical signal, which increases in potential due to superposition upon reflections, rather than due to discrete charges. Heaviside’s vertical-fronted “energy current” is a falsehood, so using that is a false assumption and produces a contrived disproof of Maxwell’s formula, but it is clear that the capacitor charges up in a discontinuous fashion due to the quantization of charge: each electron entering a plate causes a step-wise increase in the charge by one unit, so the continuous differential approximation in Maxwell’s formula is just an approximation for large numbers and is not physically legitimate.
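A minimal way to see the size of this step-wise effect (a sketch, not Catt’s actual analysis: it simply compares the continuous exponential charging curve with the same curve rounded down to whole electron charges, for an assumed very small capacitor where the steps are visible):

# Continuous (classical) RC charging versus charge quantised in units of e.
# Assumed illustrative values: 1 V supply, 1 kilohm, 1 attofarad capacitor,
# so only about six electrons at full charge and the steps are obvious.
from math import exp, floor

e, V, R, C = 1.602e-19, 1.0, 1e3, 1e-18
tau = R * C

for k in range(6):
    t = k * tau
    q_cont = C * V * (1 - exp(-t / tau))    # continuous charge at time t
    q_disc = floor(q_cont / e) * e          # same, rounded down to whole electrons
    print(f"t = {k} RC:  continuous {q_cont:.2e} C,  quantised {q_disc:.2e} C")

For a macroscopic capacitor the two columns are indistinguishable, which is the sense in which the classical formula is only a large-number approximation.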

This may not matter for large numbers of electrons in big capacitors, but this error of Maxwell is vitally important when dealing with quantum electrodynamics, where the classical equation loses all validity, since each atom is a tiny capacitor consisting of positive and negative charges separated by a vacuum dielectric. Catt’s criticism of Maxwell’s classical equations is ignored. The correction I did to iron out the errors from Catt’s theory (see post linked here) leads to checkable predictions from quantum electrodynamics of things like the strength of the electromagnetic interaction. A couple of years ago, I emailed Nobel Laureate Gerard ‘t Hooft and received an aggressive reply from him, ignoring the evidence and claiming that Maxwell’s classical equations must be right, end of discussion! Dr Peter Woit did not even reply to a letter about this. There is a complete lack of interest in rebuilding quantum electrodynamics on the basis of corrected Maxwellian equations and physical understanding; even the critics of string theory aren’t interested in going back over the foundations of electromagnetic theory.

Catt was the first to predict the cross-talk in computer chip interconnections, in his 1967 article, “Crosstalk (Noise) in Digital Systems”, IEEE Transactions on Electronic Computers, vol. EC-16, No. 6, December 1967, pp. 743-63 (online PDF file link here). You may know that back in the 1990s all computers still had parallel printer and parallel interconnection ports, with many pins and thick, multi-conductor cables, in the false belief that parallel interconnection was faster than serial (using just two conductors). This was due to Catt’s paper being ignored: parallel suffers from cross-talk (noise) or “mutual inductance” between the many nearby conductors. This slows down computer communications, because the noise causes errors in the data sent, and every time a packet is received with a defective check digit, it is rejected and a replacement packet has to be requested and sent. This is why USB (serial) has replaced parallel conductors. Nobody listened to Catt back in 1967 regarding cross-talk in electronic communications, and the value of serial cables in fast data transfer for eliminating cross-talk has only recently been realized, after an immense waste of effort on the intuitively faster but practically slower parallel cables (Catt in 1967 was applying cross-talk to the design of chips).

He has just emailed me about some new progress he has just put on the internet at the pages linked here and here. His latest criticism of Maxwell’s equations focusses on the equation representing Faraday’s law of electromagnetic induction: this Maxwell equation states that a curling electric field is produced by a time-varying magnetic field and vice-versa, but it does not include the spacetime dependence, i.e. the delay between varying the magnetic field and obtaining a curling electric field someplace else. Faraday could not investigate the time delay between varying a magnetic field and observing the resulting electric field, and Maxwell’s equations don’t include this: Maxwell’s equations require retarded time to be added to make them compatible with post-Maxwellian observations. Feynman explains in volume 1 of his Feynman Lectures on Physics (pages 28-2 to 28-4) that analogous deficiencies in Coulomb’s law have been corrected simply with terms for retarded time, which would resolve Catt’s problem with the mathematical form of Faraday’s law.

Maxwell’s equations and the Yang-Mills equations

In 1954 C. N. Yang and R. L. Mills of Brookhaven National Laboratory wrote the paper “Conservation of Isotopic Spin and Isotopic Gauge Invariance”, published in the Physical Review, vol. 96, pp. 191–195 (PDF linked here). The Yang-Mills equations are a generalized quantum field theory form of Maxwell’s equations which, unlike Maxwell’s equations, permit field quanta to carry not just energy between charged particles, but charge as well. The whole point of my correction to Catt’s theory is that electromagnetism is a Yang-Mills theory with charged field quanta, not neutral field quanta (as claimed by Maxwell’s model): the charged field quanta are unable to show up in electromagnetic interactions because of the infinite magnetic self-inductance of a charged massless gauge boson. This is a barrier against any imbalance ever occurring in the rate at which charge is carried by the field quanta exchanged between charges; it forces a perfect equilibrium, so that any given electron emitting X Coulombs/second (amps) of charge must receive back X amps. However, charge is not the same thing as energy; we know from the Casimir effect that the force-mediating field quanta of electromagnetism have frequencies (the Casimir force measured is due to the cutoff of the low vacuum radiation frequencies – or long wavelengths – between two metal plates, as already explained). Therefore, the motion of one on-shell charge relative to another on-shell charge produces a directionally dependent redshift and blueshift of the frequencies of the emitted off-shell charged field quanta, varying the amount of energy being exchanged without varying the amount of charge being exchanged; the result is that although the equilibrium ensures that the charge of an on-shell particle is constant, the energy equilibrium is disturbed by Doppler frequency changes in the charged off-shell radiation exchange, delivering energy and causing accelerative forces as observed.

This suggests electromagnetism is really a Yang-Mills theory rather than a U(1) Abelian theory, and that the Yang-Mills equations reduce to Maxwell’s equations for electromagnetism because of the physical constraint of infinite magnetic self-inductance preventing the one-way motion of charged massless radiation. All charged massless radiations can propagate only in a perfect equilibrium of two-way exchanges, so that the magnetic field curls are exactly cancelled out, eliminating the barrier of self-inductance. The resulting Yang-Mills theory of electromagnetism is very simple. A positive field around a proton is simply due to the positive charge of the field quanta from protons, while a negative field around an electron is similarly due to the negative charge of the field quanta it exchanges. The presence of two charges of field quanta in electromagnetism neatly explains the long-known fact that electromagnetic field quanta require 4 polarizations rather than just the 2 polarizations of on-shell light photons, as well as explaining very simply how the attraction of unlike charges and the repulsion of like charges result. It also has advantages for electroweak unification, allowing the SU(2) symmetry to include electromagnetism and getting rid of the need for the “not even wrong” Higgs field, by replacing it with a mass mechanism that makes precise, falsifiable predictions, giving mass to all SU(2) gauge bosons which participate in left-handed interactions.

The Spin-2 error of Markus Fierz and Wolfgang Pauli (this is taken from the earlier post, linked here, to make this new introductory post more complete without wasting time)

The spin-2 theory of gravity by Markus Fierz and Wolfgang Pauli was rejected by Phoebe in an episode of Friends (series 2, episode 3, at 5 mins):

Phoebe: Oh, okay, don’t get me started on gravity.
Ross: You uh, you don’t believe in gravity?
Phoebe: Well, it’s not so much that, you know, like I don’t believe in it, you know, it’s just … I don’t know, lately I get the feeling that I’m not so much being pulled down as I am being pushed.


Above: the mainstream groupthink on the spin of the graviton goes back to Pauli and Fierz’s paper of 1939, which insists that gravity is attractive (that we’re not being pushed down), which leads to a requirement for the spin to be an even number, not an odd number:

‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’

– Conclusion of the paper by Markus Fierz and Wolfgang Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, Proc. Roy. Soc. London., v. A173, pp. 211-232 (1939).

Pauli and Fierz obtained spin-2 by merely assuming, without any evidence, that gravity is attractive, not repulsive, i.e. they merely assume that we’re not being pushed down by the convergence of the inward component of graviton exchange with the immense isotropically distributed masses of the universe around us, which will obviously greatly exceed the repulsion of two nearby masses with relatively small gravitational charges. Pauli and Fierz simply did not know the facts about cosmological repulsion (there was simply no evidence for this until 1998). The advocacy of spin-2 today is similar to the advocacy of Ptolemy’s mainstream earth-centred universe from 150 to 1500 A.D., which merely assumed – but then arrogantly claimed this mere assumption to be observational fact – that the Earth was not rotating and that the sun’s apparent daily motion around the Earth is proof that the sun was really orbiting the Earth daily. There is no evidence for a spin-2 graviton!

There is evidence for a spin-1 graviton. For example, the following is from a New Scientist page:

‘Some physicists speculate that dark energy could be a repulsive gravitational force that only acts over large scales. “There is precedent for such behaviour in a fundamental force,” Wesson says. “The strong nuclear force is attractive at some distances and repulsive at others.”’

This possibility was ignored by Pauli and Fierz when first proposing that the quantum of gravitation has spin-2.

But the evidence proves that they’re wrong, and you’re being pushed, not pulled: gravitation is purely repulsive, mediated by spin-1 quanta which:

(1) gives cosmological repulsion of large masses, and

(2) gives a push that appears as LeSage “attraction” for small nearby masses, which only have weak mutual graviton exchange due to their small gravitational charges, and therefore on balance get pushed together by the much larger graviton pressure due to implosive focussing of gravitons converging inwards from the exchange with immense, distant masses (the galaxy clusters isotropically distributed across the sky).


Above: Perlmutter’s discovery of the acceleration of the universe, based on the redshifts of fixed energy supernovae, which are triggered as a critical mass effect when sufficient matter falls into a white dwarf. A type Ia supernova explosion, always yielding 4 × 10^28 megatons of TNT equivalent, results from the critical mass effect of the collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, thereby releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavy elements, creating massive amounts of radioactive nuclides, particularly intensely radioactive nickel-56, but half of all other nuclides (including uranium and heavier) are also produced by the ‘R’ (rapid) process of successive neutron captures by fusion products in supernovae explosions. Because we can model how much energy is released using modified computer models of nuclear fusion explosions developed originally by weaponeer Sterling Colgate at Lawrence Livermore National Laboratory to design the early H-bombs, the brightness of the supernova flash tells us how far away the Type Ia supernova is, while the redshift of the flash tells us how fast it is receding from us. That’s how the acceleration of the universe was discovered. Note that “tired light” fantasies about redshift are disproved by Professor Edward Wright on the page linked here.

You can go to an internet page and see the correct predictions on the linked page here or the about page. This isn’t based on speculations: cosmological acceleration has been observed since 1998, when CCD telescopes plugged live into computers with supernova signature recognition software detected extremely distant supernovae and recorded their redshifts (see the article by the discoverer of cosmological acceleration, Dr Saul Perlmutter, on pages 53-60 of the April 2003 issue of Physics Today, linked here). The outward cosmological acceleration of the 3 × 10^52 kg mass of the 9 × 10^21 observable stars in galaxies observable by the Hubble Space Telescope (page 5 of a NASA report linked here), is approximately a = Hc = 6.9 × 10^-10 ms^-2 (L. Smolin, The Trouble With Physics, Houghton Mifflin, N.Y., 2006, p. 209), giving an immense outward force under Newton’s 2nd law of F = ma = 1.8 × 10^43 Newtons. Newton’s 3rd law gives an equal inward (implosive type) reaction force, which predicts gravitation quantitatively. What part of this is speculative? Maybe you have some vague notion that scientific laws should not for some reason be applied to new situations, or should not be trusted if they make useful predictions which are confirmed experimentally, so maybe you vaguely don’t believe in applying Newton’s second and third laws to masses accelerating at 6.9 × 10^-10 ms^-2! But why not? What part of “fact-based theory” do you have difficulty understanding?
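The arithmetic itself is easy to reproduce (taking the quoted mass as given, and a = Hc with an assumed Hubble parameter of roughly 70 km/s/Mpc; the output is of the same order as the figures quoted above):

# Reproducing the outward-force arithmetic: a = H*c, F = m*a.
c = 2.998e8             # m/s
H = 70e3 / 3.086e22     # Hubble parameter, ~70 km/s/Mpc converted to 1/s (assumed value)
m = 3e52                # quoted mass of the observable stars, kg

a = H * c
F = m * a
print(f"a = Hc = {a:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"F = ma = {F:.1e} N")       # ~2e43 N, the order of magnitude quoted above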

It is usually by applying facts and laws to new situations that progress is made in science. If you stick to applying known laws to situations they have already been applied to, you’ll be less likely to observe something new than if you try applying them to a situation which nobody has ever applied them to before. We should apply Newton’s laws to the accelerating cosmos and then focus on the immense forces and what they tell us about graviton exchange.

The theory makes accurate predictions, well within experimental error, and is also fact-based unlike all other theories of quantum gravity, especially the 10^500 universes of string theory’s landscape.


Above: The mainstream 2-dimensional ‘rubber sheet’ interpretation of general relativity says that mass-energy ‘indents’ spacetime, which responds like placing two heavy large balls on a mattress, which distorts more between the balls (where the distortions add up) than on the opposite sides. Hence the balls are pushed together: ‘Matter tells space how to curve, and space tells matter how to move’ (Professor John A. Wheeler). This illustrates how the mainstream (albeit arm-waving) explanation of general relativity is actually a theory that gravity is produced by space-time distorting to physically push objects together, not to pull them! (When this is pointed out to mainstream crackpot physicists, they naturally freak out and become angry, saying it is just a pointless analogy. But when the checkable predictions of the mechanism are explained, they may perform their always-entertaining “hear no evil, see no evil, speak no evil” act.)


Above: LeSage’s own illustration of quantum gravity in 1758. Like Lamarck’s evolution theory of 1809 (the one in which characteristics acquired during life are somehow supposed to be passed on genetically, rather than Darwin’s evolution in which genetic change occurs due to the inability of inferior individuals to pass on genes), LeSage’s theory was full of errors and is still derided today. The basic concept that mass is composed of fundamental particles, with gravity due to a quantum field of gravitons exchanged between these fundamental particles of mass, is now a frontier of quantum field theory research. What is interesting is that quantum gravity theorists today don’t use the arguments used to “debunk” LeSage: they don’t argue that quantum gravity is impossible because gravitons in the vacuum would “slow down the planets by causing drag”. They recognise that gravitons are not real particles: they don’t obey the energy-momentum relationship or mass shell that applies to particles of, say, a gas or other fluid. Gravitons are thus off-shell or “virtual” radiations, which cause accelerative forces but don’t cause continuous gas-type drag or the heating that occurs when objects move rapidly in a real fluid. While quantum gravity theorists realize that particle (graviton) mediated gravity is possible, LeSage’s mechanism of quantum gravity is still as derided today as Lamarck’s theory of evolution. Another analogy is the succession from Aristarchus of Samos, who first proposed the solar system in 250 B.C. against the mainstream earth-centred universe, to Copernicus’ inaccurate solar system (circular orbits and epicycles) of 1500 A.D. and to Kepler’s elliptical orbit solar system of 1609 A.D. Is there any point in insisting that Aristarchus was the original discoverer of the theory, when he failed to come up with a detailed, convincing and accurate theory? Similarly, Darwin rather than Lamarck is credited with the theory of evolution, because he made the theory useful and thus scientific.

If someone fails to come up with a detailed, accurate and convincing theory, and merely gets the basic idea right without being able to prove it against the mainstream fashions and groupthink, then the history of science shows that the person is not credited with a big discovery: science is not merely guesswork. Maxwell based his completion of the theory of classical electrodynamics upon an ethereal displacement current of virtual charges in the vacuum, in order to correct Ampere's law for the case of open circuits such as capacitors using the permittivity of free space (a vacuum) for the dielectric. Maxwell believed, by analogy to the situation of moving ions in a fluid during electrolysis, that current appears to flow through the vacuum between capacitor plates while the capacitor charges and discharges; in fact the real current just spreads along the plates, and electromagnetic induction (rather than ethereal vacuum currents) produces the current on the opposite plate.

Maxwell nevertheless suggested (in an Encyclopedia Britannica article) an experiment to test whether light is carried at an absolute velocity by a mechanical spacetime fabric. After the Michelson-Morley experiment was done in 1887 to test Maxwell's conjecture, it was clear that no absolute motion was detectable, suggesting (1) that motion appears relative, not absolute, and (2) that light always appears to go at the same velocity in the vacuum. In 1889, FitzGerald published an explanation of these "relativity" results in Science: he argued that the physical vacuum contracts moving masses such as the Michelson-Morley instrument, by analogy to the contraction of anything moving in a fluid due to the force from the head-on fluid pressure (wind drag, or hydrodynamic resistance). This fluid-space explanation quantitatively predicted the relativistic contraction law, and Lorentz showed that since mass depends inversely on the classical radius of the electron, it also predicted a mass increase with velocity. Given the equivalence of space and time via the velocity of light, Lorentz showed that the contraction predicted time dilation due to motion.
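To see FitzGerald's argument numerically, here is a minimal sketch (the arm lengths and velocity are hypothetical, and the standard ether-frame round-trip travel-time formulae are assumed): once the arm lying along the motion is contracted by the factor (1 – v²/c²)^(1/2), rotating the interferometer by 90 degrees produces no change in the relative timing of the two beams, even when the arms have unequal lengths.

```python
from math import sqrt

c = 2.998e8          # speed of light, m/s
v = 3.0e4            # assumed velocity through the fluid/ether, m/s (~ Earth's orbital speed)
L1, L2 = 11.0, 11.3  # hypothetical, deliberately unequal arm lengths, metres
beta2 = (v / c) ** 2

def arm_times(L_along, L_across, contracted):
    """Round-trip light times for the arm lying along the motion and the arm across it."""
    L_eff = L_along * sqrt(1 - beta2) if contracted else L_along   # FitzGerald contraction
    t_along = (2 * L_eff / c) / (1 - beta2)
    t_across = (2 * L_across / c) / sqrt(1 - beta2)
    return t_along, t_across

for contracted in (False, True):
    t1_a, t2_a = arm_times(L1, L2, contracted)   # orientation A: arm 1 lies along the motion
    t2_b, t1_b = arm_times(L2, L1, contracted)   # orientation B: instrument rotated 90 degrees
    shift = (t1_a - t2_a) - (t1_b - t2_b)        # change in beam 1 vs beam 2 timing on rotation
    print(f"contraction={contracted}: rotation-induced timing change = {shift:.3e} s")
```

Without the contraction the rotation changes the timing by roughly (L1 + L2)v²/c³, about 7 × 10⁻¹⁶ s here, which would show up as a fringe shift; with the contraction it is zero to rounding error, which is the null result actually observed.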

Above: In Science in 1889, FitzGerald used the Michelson-Morley result to argue that moving objects at velocity v must contract in length in the direction of their motion by the factor (1 – v²/c²)^(1/2), so that there is no difference in the travel times of light moving along two perpendicular paths. Groupthink crackpots claim that if the lengths of the arms of the instrument are different, FitzGerald's argument for absolute motion is destroyed since the travel times are still cancelled out. Actually, the arms of the Michelson-Morley instrument can never be the same length to within the accuracy of the relative times implied by interference fringes! The instrument does not measure the absolute times taken in two different directions: it merely determines whether there is a difference in the relative times (which are always slightly different, since the arms can't be machined to perfectly identical length) when the instrument is rotated by 90 degrees. Another groupthink crackpot argument is that although the FitzGerald theory predicts relativity from length contraction in an absolute-motion universe, other special relativity results like time dilation, mass increase, and E = mc² can only be obtained from Einstein. Actually, all were obtained by Lorentz and Poincare: Lorentz showed that the evidence for space-time from electromagnetism implies that apparent time dilates like distance when a clock moves, and he argued that since the classical electromagnetic electron radius is inversely proportional to its mass, its mass should increase with velocity by a factor equal to the reciprocal of the FitzGerald contraction factor. Likewise, a force F = d(mv)/dt acting on a body moving distance dx imparts kinetic energy dE = F.dx = d(mv).dx/dt = v.d(mv) = v²dm + mv dv. Comparison of this purely Newtonian result with the derivative of Lorentz's relativistic mass increase formula m_v = m₀(1 – v²/c²)^(-1/2) gives dm = dE/c², i.e. E = mc². (See for example Dr Glasstone's Sourcebook on Atomic Energy, 3rd ed., 1967.)
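That last step can be checked symbolically. A minimal sketch (assuming SymPy): take Lorentz's mass increase m = m₀(1 – v²/c²)^(-1/2) and the Newtonian kinetic energy increment dE = v.d(mv); the ratio dE/dm then comes out as exactly c².

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)
m = m0 / sp.sqrt(1 - v**2 / c**2)      # Lorentz's relativistic mass increase
dE_dv = v * sp.diff(m * v, v)          # from dE = v.d(mv): dE/dv = v * d(mv)/dv
dm_dv = sp.diff(m, v)                  # dm/dv
print(sp.simplify(dE_dv / dm_dv))      # prints c**2, i.e. dE = c**2 dm, hence E = mc^2
```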

Carlos Barceló and Gil Jannes, ‘A Real Lorentz-FitzGerald Contraction’, Foundations of Physics, Volume 38, Number 2 / February, 2008, pp. 191-199 (PDF file: http://digital.csic.es/bitstream/10261/3425/3/0705.4652v2.pdf):

“Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity. …

“… Remarkably, all of relativity (at least, all of special relativity) could be taught as an effective theory by using only Newtonian language. …In a way, the model we are discussing here could be seen as a variant of the old ether model. At the end of the 19th century, the ether assumption was so entrenched in the physical community that, even in the light of the null result of the Michelson-Morley experiment, nobody thought immediately about discarding it. Until the acceptance of special relativity, the best candidate to explain this null result was the Lorentz-FitzGerald contraction hypothesis. … we consider our model of a relativistic world in a fishbowl, itself immersed in a Newtonian external world, as a source of reflection, as a Gedankenmodel. By no means are we suggesting that there is a world beyond our relativistic world describable in all its facets in Newtonian terms. Coming back to the contraction hypothesis of Lorentz and FitzGerald, it is generally considered to be ad hoc. However, this might have more to do with the caution of the authors, who themselves presented it as a hypothesis, than with the naturalness or not of the assumption. … The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”

In 1905, Einstein took the two implications of the Michelson-Morley research (that motion appears relative not absolute, and that the observed velocity of light in the vacuum is always constant) and used them as postulates to derive the FitzGerald-Lorentz transformation and the Poincare mass-energy equivalence. Einstein's analysis was preferred by Machian philosophers because it was purely mathematical and did not seek to explain the principle of relativity and the constancy of the velocity of light in the vacuum by invoking a physical contraction of instruments. Einstein postulated relativity; FitzGerald explained it. Both predicted a similar contraction quantitatively. Similarly, Newton's theory of gravitation is the combination of Galileo's principle that dropped masses all accelerate at the same rate (due to the constancy of the Earth's mass) with Kepler's laws of planetary motion. Newton postulated his universal gravitational law based on this evidence plus the guess that the gravitational force is directly proportional to the mass producing it, and he checked it by the Moon's centripetal acceleration; LeSage tried to explain what Newton had postulated and checked.

The previous post links to Peter Woit's earlier article about string theorist Erik Verlinde's arXiv preprint On the Origin of Gravity and the Laws of Newton, which claims: "Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies." String theorist Verlinde derives Newton's laws and other results using only "high-school mathematics" (which brings contempt from mathematical physicist Woit, probably one of the areas of agreement he has with string theorist Jacques Distler), i.e. no tensors, and he derives the Newtonian weak-field approximation for gravity, not the relativistic Einsteinian gravity law which also includes contraction. This contraction is physically real but small for weak gravitational fields and non-relativistic velocities: Feynman famously calculated in his published Lectures on Physics that the contraction term in Einstein's field equation contracts the Earth's radius by MG/(3c²) = 1.5 mm. Consider two ways to predict this contraction using Einstein's equivalence principle.

First, Einstein's way. Einstein began by expressing Newton's law of gravity in tensor field calculus which allows gravity to be represented by non-Euclidean geometry, incorporating the equivalence of inertial and gravitational mass: Einstein started with a false hypothesis that the curvature of spacetime (represented with the Ricci tensor) which causes acceleration ("curvature" is literally the curve of a line on a graph of distance versus time, i.e. it implies acceleration) simply equals the source of gravity (the stress-energy tensor, since in Einstein's earlier special relativity, mass and energy are equivalent, albeit via the well-known very large conversion factor, c²). (Non-Euclidean geometry wasn't Einstein's innovation; it was studied by Riemann and Minkowski, while Ricci and Levi-Civita pioneered tensors to generalize vector calculus to any number of dimensions.)

Einstein in 1915 found that this simple equivalence was wrong: the Ricci curvature tensor could not be equivalent to the stress-energy tensor, because the divergence (the sum of gradients in all spatial dimensions) of the stress-energy tensor is not zero, and unless this divergence is zero, mass-energy will not be conserved. So Einstein used Bianchi's identity to alter the source of gravity, subtracting from the stress-energy tensor T_ab half the product of the metric tensor g_ab and the trace of the stress-energy tensor, T (the trace of a tensor is simply the sum of its top-left to bottom-right diagonal elements, i.e. energy density plus pressure terms, trace T = T_00 + T_11 + T_22 + T_33), because this combination: (1) does have zero divergence and thereby satisfies the conservation of mass-energy, and (2) reduces to the correct Newtonian gravity in the weak-field limit. This is how Einstein found that the Ricci tensor R_ab = T_ab – (1/2)g_ab T, which is exactly equivalent to the oft-quoted Einstein equation R_ab – (1/2)g_ab R = T_ab, where R is the trace of the Ricci tensor (R = R_00 + R_11 + R_22 + R_33).
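The algebraic equivalence of those two forms can be verified directly. The following is a minimal sketch (assuming SymPy, with the metric taken as flat Minkowski purely as a test case and the usual 8πG/c⁴ factor suppressed, as in the text): starting from R_ab = T_ab – (1/2)g_ab T for an arbitrary symmetric T_ab, it recovers R_ab – (1/2)g_ab R = T_ab identically.

```python
import sympy as sp

g = sp.diag(1, -1, -1, -1)                       # test metric (flat spacetime)
g_inv = g.inv()
# An arbitrary symmetric stress-energy tensor T_ab built from symbols:
T = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'T{min(i, j)}{max(i, j)}'))

trace = lambda X: sum(g_inv[i, j] * X[i, j] for i in range(4) for j in range(4))
R = T - sp.Rational(1, 2) * g * trace(T)         # R_ab = T_ab - (1/2) g_ab T
residual = sp.simplify(R - sp.Rational(1, 2) * g * trace(R) - T)
print(residual)                                  # zero matrix: R_ab - (1/2) g_ab R = T_ab
```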

Secondly, Feynman's way. A more physically intuitive explanation of the modification of Newton's gravitational law implied by Einstein's field equation of general relativity is to examine Feynman's curvature result: space-time is non-Euclidean in the sense that the gravitational field contracts the Earth's radius by (1/3)MG/c², or about 1.5 mm. This is unaccompanied by a transverse contraction, i.e. the Earth's circumference is unaffected. To mathematically keep "Pi" a constant, therefore, you need to invoke an extra dimension, so that the n – 1 = 3 spatial dimensions we experience are, in string theory terminology, a (mem)brane on an n = 4 dimensional bulk of spacetime. Similarly, if you draw a 2-dimensional circle upon the interior surface of a sphere, you will obtain Pi from the circle only by drawing a straight line through the 3-dimensional bulk of the volume (i.e. a line that does not follow the 2-dimensional curved surface or "brane" of the sphere upon which the circle is supposed to exist). If you measure the diameter upon the curved surface, it will be different, so Pi will appear to vary.
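As a quick numerical sanity check on the figure quoted above (standard values of G, the Earth's mass and c assumed):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of the Earth, kg

excess_radius = G * M_earth / (3 * c**2)   # Feynman's (1/3)MG/c^2 radial contraction
print(f"Earth's radial contraction ~ {excess_radius * 1e3:.2f} mm")   # ~1.48 mm
```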

A simple physical mechanism of Feynman's (1/3)MG/c² excess radius for symmetric, spherical mass M is that the gravitational field quanta compress a mass radially when being exchanged with distant masses in the universe: the exchange of gravitons pushes against masses. By Einstein's principle of the equivalence of inertial and gravitational mass, the cause of this excess radius is exactly the same as the cause of the FitzGerald-Lorentz contraction of moving bodies in the direction of their motion, first suggested in Science in 1889 by FitzGerald. FitzGerald explained the apparent constancy of the velocity of light regardless of the relative motion of the observer (indicated by the null-result of the Michelson-Morley experiment of 1887) as the physical effect of the gravitational field. In the fluid analogy to the gravitational field, if you accelerate an underwater submarine, there is a head-on pressure from the inertial resistance of the water which it is colliding with, which causes it to contract slightly in the direction it is going in. This head-on or "dynamic" pressure is equal to half the product of the density of the water and the square of the velocity of the submarine. In addition to this "dynamic" pressure, there is a "static" water pressure acting in all directions, which compresses the submarine slightly in all directions, even if the submarine is not moving. In this analogy, the FitzGerald-Lorentz contraction is the "dynamic" pressure effect of the graviton field, while the Feynman excess radius or radial contraction of masses is the "static" pressure effect of the graviton field. Einstein's special relativity postulates (1) relativity of motion and (2) constancy of c, and derives the FitzGerald-Lorentz transformation and mass-energy equivalence from these postulates; by contrast, FitzGerald and Lorentz sought to physically explain the mechanism of relativity by postulating contraction. To contrast this difference:

(1) Einstein: postulated relativity and produced contraction.
(2) Lorentz and FitzGerald: postulated contraction to produce “apparent” observed Michelson-Morley relativity as just an instrument contraction effect within an absolute motion universe.

These two relativistic contractions, the contraction of relativistically moving inertial masses and the contraction of radial space around a gravitating mass, are simply related under Einstein's principle of the equivalence of inertial and gravitational masses, since Einstein's other equivalence (that between mass and energy) then applies to both inertial and gravitational masses. In other words, the equivalence of inertial and gravitational mass implies an effective energy equivalence for each of these masses. The FitzGerald-Lorentz contraction factor [1 – (v/c)²]^(1/2) contains velocity v, which comes from the kinetic energy of the moving object. By analogy, when we consider a mass m at rest in a gravitational field from another much larger mass M (like a person standing on the Earth), it has acquired gravitational potential energy E = mMG/R, equivalent to a kinetic energy of E = (1/2)mv², so by Einstein's equivalence principle of inertial and gravitational field energy it can be considered to have an "effective" velocity of v = (2GM/R)^(1/2). Inserting this velocity into the FitzGerald-Lorentz contraction factor [1 – (v/c)²]^(1/2) gives [1 – 2GM/(Rc²)]^(1/2) which, when expanded by the binomial expansion to the first couple of terms as a good approximation, yields 1 – GM/(Rc²). This result assumes that all of the contraction occurs in one spatial dimension only, which is true for the FitzGerald-Lorentz contraction (where a moving mass is only contracted in the direction of motion, not in the two other spatial dimensions it has), but is not true for radial gravitational contraction around a static spherical, uniform mass, which operates equally in all 3 spatial dimensions. Therefore, the contraction in any one of the three dimensions is by the factor 1 – (1/3)GM/(Rc²). Hence, when gravitational contraction is included, radius R becomes R[1 – (1/3)GM/(Rc²)] = R – GM/(3c²), which is the result Feynman produced in his Lectures on Physics from Einstein's field equation.
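A minimal symbolic sketch of that argument (assuming SymPy; the "effective velocity" v = (2GM/R)^(1/2) and the equal split of the contraction over the three spatial dimensions are taken from the text as given, not derived here):

```python
import sympy as sp

G, M, R, c = sp.symbols('G M R c', positive=True)
x = G * M / (R * c**2)                 # small dimensionless parameter GM/(R c^2)

factor = sp.sqrt(1 - 2 * x)            # FitzGerald factor with v^2 = 2GM/R inserted
print(sp.series(factor, x, 0, 2))      # 1 - x + O(x**2), i.e. 1 - GM/(R c^2)

radial_shrinkage = R * (x / 3)         # one third of the contraction per dimension
print(sp.simplify(radial_shrinkage))   # G*M/(3*c**2), Feynman's excess radius
```

Feeding in the Earth's mass reproduces the roughly 1.5 mm figure quoted earlier.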

The point we're making here is that general relativity isn't mysterious unless you want to ignore the physical effects due to energy conservation and the associated contraction, which produce its departures from Newtonian physics. Physically understanding the mechanism for how general relativity differs from Newtonian physics therefore immediately takes you to the facts of how the quantum gravitational field physically distorts static and moving masses, leading to checkable predictions which you cannot make with general relativity alone. It is therefore helpful if you want to understand physically how quantum gravity must operate in order to be consistent with general relativity within its domain of validity. Obviously general relativity breaks down outside that domain, which is why we need quantum gravity, but within the limits of validity of that classical domain, both theories are consistent. The reason why quantum gravity of the LeSage sort needs to be fully reconciled with general relativity in this way is that one objection to LeSage came from Laplace, who, writing long before FitzGerald and Einstein, ignored the gravitational and motional contraction mechanisms of quantum gravity and tried to use this ignorance to debunk LeSage by arguing that orbital aberration would occur in LeSage's model due to the finite speed of the gravitons. This objection does not apply to general relativity, due to the contractions incorporated into the theory by Einstein; similarly, it does not apply to a quantum gravity which inherently includes the contractions as physical results of quantum gravity acting upon moving masses.

In the past, however, FitzGerald's physical contraction of moving masses by fluid pressure has been controversial in physics, and Einstein tried to dispose of the fluid. The problem with the fluid was investigated by critics of Fatio and LeSage (who had promoted a shadowing theory of gravity, whereby masses get pushed together by mutually shielding one another from the pressure of the fluid in space). These critics included some of the greatest classical physicists the world has ever known: Newton (Fatio's friend), Maxwell and Kelvin. Feynman also reviewed the major objection to the fluid, drag, in his broadcast lectures on the Character of Physical Law. The criticism of the fluid is that the force it needs to exert to produce gravity would classically be expected to cause fast-moving objects in the vacuum:

(1) to heat up until they glow red hot or ablate at immense temperature,

(2) to slow down and (in the case of planets) thus spiral into the sun,

(3) while the fluid would diffuse in all directions and on large distance scales fill in the "shadows" like a gas, preventing the shadowing mechanism from working (this doesn't apply to gravitons exchanged between masses, for although they will take all possible paths in a path integral, the resultant, effective graviton motion for force delivery will be along the path of least action, due to the cancellation of the amplitudes of paths which interfere off the path of least action: see Feynman's 1985 book QED),

(4) the mechanism would not give a force proportional to mass if the fundamental particles have a large gravitational interaction cross-sectional area, which would mean that in a big mass some of the shadows would "overlap" one another, so the net force of gravity from a big mass would be less than directly proportional to the mass, i.e. it would increase not in simple proportion to M but instead statistically in proportion to 1 – e^(–bM), where b is a coefficient dependent on the gravitational cross-section and geometry, which allows for the probability of overlap (see the numerical sketch after this list). This 1 – e^(–bM) formula has two asymptotic limits:

(a) for small masses and small cross-sections, bM is much smaller than 1, so e^(–bM) ~ 1 – bM, and hence 1 – e^(–bM) ~ bM. I.e., for small masses and small cross-sections, the theory agrees with observations (there is no significant overlap).

(b) for larger masses and large cross-sections, bM might be much larger than 1, so e^(–bM) ~ 0, giving 1 – e^(–bM) ~ 1. I.e., for large masses and large cross-sections, the overlap of shadows would prevent any increase in the mass of a body from increasing the resultant gravitational force: once gravitons are stopped, they can't be stopped again by another mass.

This overlap problem does not arise for the solar system or many other situations, because b is insignificant owing to the small graviton scattering cross-section of a fundamental particle of mass: the total inward force is trillions upon trillions of times higher than the objectors believed possible, being simply determined by Newton's 2nd and 3rd laws as the product of the cosmological acceleration and the mass of the accelerating universe, about 1.8 × 10⁴³ newtons, while the cross-section for shielding is the black hole event horizon area, which is so small that overlap is insignificant in the solar system and other tests of Newton's weak-field limit.

(5) the LeSage mechanism suggested that the gravitons which cause gravity would be slowed down by the energy loss when imparting a push to a mass, so that they would not be travelling at the velocity of light, contrary to what is known about the velocity of gravitational fields. However, this objection only arises from the real (rather than virtual, "off-shell") radiation that LeSage assumed. The radiation goes at light velocity and merely shifts in frequency due to energy loss. For static situations, where no acceleration is produced (e.g. an apple hanging stationary on a tree), the graviton exchange results in no energy change; it is a perfectly elastic scattering interaction. No energy is lost from the gravitons, and no kinetic energy is gained by the apple. Where the apple is accelerated, the kinetic energy it gains is that lost due to a shift to lower energy (longer wavelength) of the "reflected" or scattered gravitons. Notice that Dr Lubos Motl has objected to me by falsely claiming that virtual particles don't appear to have wavelengths; on the contrary, the empirically confirmed Casimir effect is due to the inability of virtual photons of wavelength longer than the distance between two metal plates to exist between them and exert pressure, so the plates are pushed together by the complete spectrum of virtual photon wavelengths in the vacuum surrounding the plates, which is stronger than the cut-off spectrum between the plates. Like the reflection of light by a mirror, the process consists of the absorption of a particle followed by the emission of a new particle.
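Here is the numerical sketch promised under objection (4): the 1 – e^(–bM) factor is evaluated over a range of bM values (the values themselves are purely illustrative, chosen only to display the two asymptotic regimes) to show the transition from force-proportional-to-mass at small bM to shadow saturation at large bM.

```python
import math

def net_shadow(bM):
    """Net shadowing factor 1 - exp(-bM); expm1 keeps the small-bM values accurate."""
    return -math.expm1(-bM)

for bM in (1e-12, 1e-6, 1e-1, 1.0, 10.0, 100.0):
    print(f"bM = {bM:>8.0e}   1 - exp(-bM) = {net_shadow(bM):.6g}")
```

For the first few rows the factor equals bM itself, the linear regime of limit (a) in which the net force is simply proportional to mass; for the last rows it saturates near 1, the overlap regime of limit (b).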

However, quantum field theory, which has been very precisely tested for electrodynamics, resurrects a quantum fluid or field in space which consists of gauge boson radiation, i.e. virtual (off-shell) radiation which carries “borrowed” or off-mass shell energy, not real energy. It doesn’t obey the relationship between energy and momentum that applies to real radiation. This is why the radiation can exert pressure without causing objects to heat up or to slow down: they merely accelerate or distort instead.
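The "mass shell" condition referred to here is just the standard relativistic energy-momentum relation E² = (pc)² + (mc²)². A minimal illustrative sketch (the numbers are arbitrary): real quanta satisfy it, while virtual quanta in intermediate states need not.

```python
def mass_shell_residual(E, p, m, c=2.998e8):
    """Return E^2 - (pc)^2 - (mc^2)^2; zero means the quantum is on-shell (real)."""
    return E**2 - (p * c)**2 - (m * c**2)**2

c = 2.998e8
p = 1.0e-27                                      # arbitrary momentum, kg m/s
print(mass_shell_residual(p * c, p, 0.0))        # a real photon (E = pc): residual is zero
print(mass_shell_residual(1.5 * p * c, p, 0.0))  # a virtual photon with "borrowed" energy: non-zero
```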

The virtual radiation is not like a regular fluid. It carries potential energy that can be used to accelerate and contract objects, but it cannot directly heat them or cause continuous drag to non-accelerating objects by carrying away their momentum in a series of impacts in the way that gas or water molecules cause continuous drag on non-accelerating objects. There is only resistance to accelerations (i.e., inertia and momentum) because of these limitations on the energy exchanges possible with the virtual (off-shell) radiations in the vacuum.

In a new blog post, Dr Woit quotes a New Scientist article about Erik Verlinde’s “entropic gravity”:

“Now we could be closing in on an explanation of where gravity comes from: it might be an emergent property of the way objects are organised, much as fluidity arises as a property of water…. This idea might seem exotic now, but to kids of the future it might be as familiar as apples.”

Like Woit, I don't see much hope in Verlinde's entropic gravity since it doesn't make falsifiable predictions, just ad hoc ones, but the idea that gravity is an "emergent property of the way objects are organised, much as fluidity arises as a property of water" is correct: gravity is predicted accurately from the shadowing of the implosive pressure of gravitons exchanged with the other masses around us. At best, mainstream quantum gravity theories such as string theory and loop quantum gravity are merely compatible with a massless spin-2 excitation; they are thus wrong, ad hoc theories of quantum gravity, founded on error, which fail to make any quantitative, falsifiable predictions.

Woit writes:

“Gerard ‘t Hooft expresses pleasure at seeing a string theorist talking about “real physical concepts like mass and force, not just fancy abstract mathematics”. According to the article, the problem with Einstein’s General Relativity is that its “laws are only mathematical descriptions.” I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.”

So Dr Woit has finally flipped, giving up on precise mathematical expressions and coming round to the “much better” vague and mysterious ideas of the mainstream string theorists. Well, I think that’s sad, but I suppose it can’t be helped. Newton in 1692 scribbled in his own printed copy of his Principia that Fatio’s 1690 gravity mechanism was “the unique hypothesis by which gravity can be explained”, although Newton did not publish any statement of his interest in the gravitational mechanism (just as he kept his alchemical and religious studies secret).

Update:

John Rennie has commented on Woit’s blog:

“I think you’re being a bit harsh when you say:

I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.

“No-one is suggesting the existing mathematical models should be abandoned. The point being made is that the entropic approach may give us some physical insight into those mathematical models.”

This is a valid point: finding a way to make predictions with quantum gravity doesn't mean "abandoning" general relativity, but supplementing it by giving additional physical insight and making quantitative, falsifiable predictions. Although Professor Halton Arp (of the Max-Planck-Institut für Astrophysik) promotes heretical quasar-redshift objections to the big bang which are false, he does make one important theoretical point in his paper The observational impetus for Le Sage Gravity:

‘The first insight came when I realized that the Friedmann solution of 1922 was based on the assumption that the masses of elementary particles were always and forever constant, m = const. He had made an approximation in a differential equation and then solved it. This is an error in mathematical procedure. What Narlikar had done was solve the equations for m= f(x,t). This a more general solution [to general relativity], what Tom Phipps calls a covering theory. Then if it is decided from observations that m can be set constant (e.g. locally) the solution can be used for this special case. What the Friedmann, and following Big Bang evangelists did, was succumb to the typical conceit of humans that the whole of the universe was just like themselves.’

The remainder of his paper is speculative, non-falsifiable or simply wrong, and Arp is totally wrong in dismissing the big bang since his quasar “evidence” has empirically been shown to be completely bogus, while it has also been shown that the redshift evidence definitely does require expansion, since other “explanations” fail. But Arp is right in arguing that the Friedmann et al. solutions to general relativity for cosmological models are all based on the implicit assumption that the source of gravity is not an “emergent” effect of the motion of masses in the surrounding universe. The Lambda-CDM model based on general relativity is typical of the problem, since it can be fitted in ad hoc fashion to virtually any kind of universe by adjusting the values of the dark energy and dark matter parameters to force the theory to fit the observations from cosmology (the opposite of science, which is to make falsifiable predictions and then to check those predictions). That’s a religion based on groupthink politics, not facts.

Update

Copy of comment to:

http://scienceblogs.com/builtonfacts/2010/02/failing_at_gravity.php

“But there’s problems, too. There ought to be “air resistance” from the particles as the planets move through space. Then there’s the fact that the force is proportional to surface area hit by the particles, not to the mass. This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light. And in that case the heating due to the “air resistance” of the particles would be impossibly high. Furthermore, if the particle shadows of two planets overlapped, the sun’s gravity on the farther planet should be shielded. No such effect has been observed.

“For these and other reasons Fatio’s theory had to be rejected as unworkable.”

Wikipedia is a bit unreliable on this subject: Fatio assumed on-shell ("real") particles, not a quantum field of off-shell virtual gauge bosons. If the radiation were real, the exchange of gravitons between masses in the universe would indeed cause the heating, drag, etc., regardless of spin. So the objection would equally dismiss spin-2 gravitons of "attraction", since they'd have to be everywhere in the universe between masses, just like Fatio's particles. But in fact the objections don't apply to gauge boson radiations, since they're off-shell. Fatio didn't know about relativity or quantum field theory.

Thanks anyway, your post is pretty funny and could be spoofed by writing a fictitious attack on "evolution" that ignores Darwin's work and just points out errors in Lamarck's theory of evolution (which was wrong)…

“This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light.”

Or just increasing the flux of spin-1 gravitons when you decrease the cross-section …

Pauli's role in predicting the neutrino by applying energy conservation to beta decay (against Bohr, who falsely claimed that the energy conservation anomaly in beta decay was proof that indeterminacy applies to energy conservation, which could then be violated to explain the anomaly without having to predict a neutrino to take away the energy), and in declaring Heisenberg's vacuous (unpredictive) unified field theory "not even wrong", is well known, thanks to Peter Woit. There is a nice anecdote about Markus Fierz, Pauli's collaborator in the spin-2 theory of gravitons, given by Freeman Dyson on p. 15 of his 2008 book The Scientist as Rebel:

“Many years ago, when I was in Zürich, I went to see the play The Physicists by the Swiss playwright Friedrich Dürrenmatt. The characters in the play are grotesque caricatures … The action takes place in a lunatic asylum where the physicists are patients. In the first act they entertain themselves by murdering their nurses, and in the second act they are revealed to be secret agents in the pay of rival intelligence services. … I complained about the unreality of the characters to my friend Markus Fierz, a well-known Swiss physicist, who came with me to the play. ‘But don’t you see?’ said Fierz. ‘The whole point of the play is to show us how we look to the rest of the human race’.”