‘Dick [Richard Feynman] distrusted my mathematics and I distrusted his intuition. … You could not imagine the sum-over-histories picture being true for a part of nature and untrue for another part. You could not imagine it being true for electrons and untrue for gravity. It was a unifying principle that would either explain everything or explain nothing. …

‘Dick fought back against my skepticism, arguing that Einstein had failed because he stopped thinking in concrete physical images and became a manipulator of equations. I had to admit that was true. … Einstein’s later unified theories failed because they were only sets of equations without physical meaning. Dick’s sum-over-histories theory was in the spirit of the young Einstein, not of the old Einstein. It was solidly rooted in physical reality.’

– Freeman Dyson,

Disturbing the Universe, 1981 Pan edition, London, p. 62.

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’ ‘… with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

– Richard P. Feynman,

QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

In diffusion such as Brownian motion, there is a continuous distribution (and thus an infinite number) of different angles and speeds with which water molecules can hit a dust particle, producing the chaotic motion of dust grains that Brown observed. The experimentally validated Casimir effect is proof of the off-shell quantum electromagnetic field in the vacuum. Feynman treats the motion of the atomic electron the same way, with the field quanta causing a random diffusion around the classical orbit. This differs from first quantization, the normal textbook quantum mechanics, which uses a classical Coulomb potential rather than a quantum field: that approach is simple for bound states, but it ignores the effects of field quanta and is non-relativistic. Feynman’s second quantization changes the Hamiltonian to a relativistic one, and then first quantization is no longer needed. It gets rid of the notion of intrinsic indeterminacy in particles by making the randomness of the field-quanta interactions the cause of all the indeterminacy; Feynman ascribes indeterminacy to multi-path interference of (off-shell) photons, i.e. field quanta. Thus the electron moves on a real path which is made chaotic by the fact that the Coulomb field, binding it to the positively charged nucleus, does not act smoothly and continuously but instead imparts force in discrete impulses, delivered chaotically by randomly timed exchanges of off-shell field quanta!
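This diffusion picture can be sketched numerically. Here is a toy random walk of my own (an illustrative sketch, not a simulation of real atomic dynamics): a particle kicked by discrete, randomly directed unit impulses wanders chaotically, and its root-mean-square displacement grows as the square root of the number of impacts, which is exactly the kind of statistical regularity a wave equation can encode without any intrinsic indeterminacy.

```python
import math
import random

def random_impulse_walk(n_impulses, seed=0):
    """Toy model: a particle kicked by n discrete, randomly directed
    unit impulses (like field quanta or air molecules hitting it).
    Returns the final displacement from the origin."""
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(n_impulses):
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random impact direction
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

def rms_displacement(n_impulses, trials=2000):
    """Root-mean-square displacement over many independent walks;
    statistically this grows like sqrt(n_impulses)."""
    total = sum(random_impulse_walk(n_impulses, seed=t) ** 2
                for t in range(trials))
    return math.sqrt(total / trials)
```

Quadrupling the number of impulses roughly doubles the r.m.s. displacement: chaos at the level of individual impacts, a smooth statistical law in aggregate.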

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman,

The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

Einstein and Infeld in *The Evolution of Physics* quote Brown’s claim that pollen grains in water have an *intrinsic* chaotic motion. Brown could not see the invisible water molecules and did not want to believe in such “hidden variables”, and this is the same error made in textbook (first-quantization) quantum mechanics, which ignores the role of field quanta in producing indeterminacy. Feynman’s point is that Bohr, Heisenberg and Schroedinger (the first-quantization pioneers) did with the electron what Brown did with the pollen grain: they assumed intrinsic indeterminacy, when in fact the wave equation works simply because it models the chaotic interactions with field quanta.

The Heisenberg matrix mechanics / Schroedinger wave equation approach to quantum mechanics is physically wrong, since it is 1925 “first quantization”:

(1) it treats the Coulomb potential classically, instead of using a quantum electromagnetic field with random interactions (which would produce random motion of the electron being bound to the nucleus by the Coulomb field),

(2) it thus has to introduce chaos by applying the uncertainty principle directly to real particles (intrinsic indeterminacy), without any random field-bombardment mechanism,

(3) it therefore ends up with a Hamiltonian which is non-relativistic, because it treats time and space differently (a *first-order* variation of the wavefunction with respect to *time* is combined with a *second-order* variation of the wavefunction with respect to *distance;* see the equation at the top of the page!), so it is physically and mathematically wrong,

(4) it produces “first quantization” errors, such as the need for “wavefunction collapse” upon measurement. This is not a metaphysical phenomenon, as hyped by pseudoscience believers, but a collapse of the statistical model’s applicability to the real world when a measurement is taken. Take the fact that uranium emits, say, 2.5 neutrons per fission in a particular reaction: we don’t need metaphysics to explain this; it is simply an average (if 50% of fissions give 2 neutrons and 50% give 3, none gives 2.5 neutrons, which is just a mean!). Another example is quantum tunnelling “through classically forbidden barriers”. There is no mystery here: the classical model is wrong simply because it is based on the false assumption of a constant, steadily resisting force field, rather than one mediated by statistically fluctuating, random exchanges of field quanta between charges. The real model of the Coulomb field is not a steady arrangement of classical “field lines” but a force arising from the random exchange of particles (field quanta) between charges. This is like air pressure due to molecular bombardment: on small scales of space and time it consists of randomly timed impacts of particles (air molecules in the case of Brownian motion, field quanta in the case of the Coulomb field). *On small spacetime scales, therefore, air pressure (or the quantum Coulomb field) is naturally a chaotic force, consisting of impulses delivered at randomly timed intervals.* Only on large spacetime scales can the path integral average out these impacts into a relatively steady “air pressure” or “Coulomb field strength”. On small scales it is not even a smoothly varying analogue variable representable by a differential equation; instead it is a randomly timed series of discrete impulses. An alpha particle occasionally enters or leaves a nucleus by “tunnelling through the classical Coulomb barrier” simply by happening not to interact with any opposing field quanta.
It’s not mysterious at all. The random timing of the field-quanta exchanges which constitute the mechanism of the Coulomb force allows some probability for a fast (high-energy) charged particle to get through without chancing to hit a field quantum. This is all you need to explain quantum tunnelling, although it is hyped in society by pseudoscientists as a physically deep mystery or a religious dogma at the heart of Heisenberg’s first-quantization physics. When you explain how Feynman debunked Bohr and Heisenberg’s dogma, the dogmatic bigots falsely claim that QFT is “hidden variables”, which they in turn falsely claim was disproved by John von Neumann in 1932. Not so. The Casimir force experiment demonstrates the reality of QFT field quanta. This is not an “infinite potential” hidden-variables speculation of the David Bohm sort. Feynman’s QFT is based on and verified by experimentally established facts, and is utilized and tested in the basic structure of the most successful scientific theory ever, the Standard Model of particle physics.
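The “2.5 neutrons per fission” point in (4) above is pure arithmetic, and a few lines of toy simulation (illustrative only, with a made-up 50/50 branching, not reactor data) make it obvious that a non-integer mean requires no metaphysics:

```python
import random

def mean_neutrons_per_fission(n_fissions, seed=0):
    """Toy model of the example in the text: each fission releases
    2 or 3 neutrons with equal probability.  No single event ever
    releases 2.5 neutrons, yet the ensemble average converges to 2.5."""
    rng = random.Random(seed)
    counts = [rng.choice((2, 3)) for _ in range(n_fissions)]
    assert all(c in (2, 3) for c in counts)  # every event is discrete
    return sum(counts) / n_fissions
```

The mean is a property of the statistical model, not of any individual fission; taking a measurement of one fission simply “collapses” the applicability of the average, nothing more.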

The anti-spacetime-fabric claims of restricted (“special”) relativity dogmatists ignore the fact that a consistent UV cutoff energy in QFT (say the black-hole size of an electron, or alternatively the Planck length) is the same in every rest frame, so it doesn’t obey the Lorentz transformation. This alone debunks the role of restricted relativity as the ultimate theory of spacetime. Instead of the principle of relativity being fundamental in QFT, it is merely emergent: moving particles get contracted in length simply because they snowplough into the field quanta of the vacuum whenever they accelerate, and the head-on field pressure during acceleration contracts their length.

(5) even as an epicycle-type “handy model” for calculations, it isn’t as mathematically useful as is widely hyped, since it gives analytical solutions only for hydrogen: all Schroedinger-equation solutions for multi-electron atoms involve massive simplifying approximations,

(6) it is still falsely hyped as being right today by big-shots in physics who don’t understand the physical basis of the mathematics of quantum field theory.

Dirac’s relativistic second quantization (the wave equation with an altered Hamiltonian, which quantizes the Coulomb field and makes quantum mechanics relativistic) is almost universally dismissed as just an advanced form of quantum mechanics or an addition to first quantization, when in fact it is not an addition or a supplement but a replacement. By quantizing the field around a real particle, the randomness of the quantum exchanges between an electron and the field binding it to a proton nucleus produces all of the chaos. Similarly, quantum interactions between the electromagnetic field of a photon and those in a block of glass affect its speed, and the interactions between a photon and two slits (Young’s double-slit experiment) prove that virtual particles travel along all possible paths by affecting the motion of the photon: the “real photon” is composed of a summation of virtual photons travelling along multiple paths, most of which usually cancel out.

I’m finishing a paper resolving all of the problems in the Standard Model, which has errors in the electroweak sector. Electromagnetism is best modelled as a massless version of SU(2) isospin, whereas U(1) is actually spin-1 quantum gravity. I can predict couplings and masses, getting rid of the Higgs symmetry-breaking model. The symmetry of U(1) x SU(2) is not broken for gravity and electromagnetism, where the difference in couplings arises from their physical mechanisms being bootstrap summations over all the particles in the surrounding universe. It turns out that the random (diffusion) path integral of quantum gravity, for field quanta exchanged between all particles in the surrounding universe, is statistically proportional to the square root of the number of charges, whereas for electromagnetism the coupling is directly proportional to the number of charges, so the ratio of gravitational to electromagnetic couplings at low energy is 10^{40}/10^{80} = 10^{-40}, as observed. My model for how mass is given to left-handed isospin fields to produce weak interactions replaces electroweak symmetry breaking and explains the structure of matter, so I’m fairly excited about my paper, because I think that experiments now being done at the LHC will discredit the symmetry-breaking Higgs model and lead to progress.
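The 10^{-40} coupling ratio quoted above is straightforward arithmetic once the input assumption is granted (roughly 10^{80} fundamental charges in the surrounding universe, the figure used in the text): a random-walk (square-root) summation divided by a coherent (linear) summation over the same N charges gives N^{-1/2}.

```python
import math

# Input assumption taken from the text: roughly 10^80 fundamental
# charges in the surrounding (observable) universe.
N_CHARGES = 1e80

def coupling_ratio(n):
    """Ratio of a random-walk (square-root) summation to a coherent
    (linear) summation over n charges: sqrt(n) / n = n**-0.5."""
    return math.sqrt(n) / n
```

With N = 10^{80} this gives 10^{40}/10^{80} = 10^{-40}, the gravity-to-electromagnetism coupling ratio the paragraph refers to.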

**Update:** I’ve written my draft paper on quantum field theory, going through the Standard Model and justifying the need to make changes and corrections that incorporate quantum gravity and resolve problems. The first five pages from the draft have been proof-read and are linked here (2 October 2010 draft). It has been an extremely helpful process, to me at least, for clarifying the theory. The paper is a compressed, edited, corrected and updated summary of all the developments in the subject. Once I’ve completely finished polishing the text and diagrams, I’ll submit it to an archive and a suitable journal. 😉

**Update (15 November 2010):**

Professor N. Nakanishi has just made some comments on Dr Woit’s Not Even Wrong blog which clarify and correct a misconception in textbook accounts of electroweak theory, which is of paramount importance to my paper (here and here):

“I emphasize that the Nambu-Goldstone boson does exist in the electroweak theory. It is merely unobservable by the subsidiary condition (Gupta condition). Indeed, without NG boson, the charged pion could not decay into muon and antineutrino (or antimuon and neutrino) because the decay through W-boson violates angular-momentum conservation. … Pion’s spin is zero, while W-boson’s spin is one. People usually understand that the pion decays into a muon and a neutrino through an intermediate state consisting of one W-boson. But this is forbidden by the angular-momentum conservation law in the rest frame of the pion.”

The conventional view in QFT textbooks is that this spin “anomaly”, when a pion of zero spin decays by emitting a spin-1 W boson, is simply to be viewed as a perfect example of the evidence for the V-A structure of weak interactions, in the same uncritical spirit in which epicycles, caloric and phlogiston were once accepted.

As Dr Woit comments in the blog post: “What Philip Anderson realized and worked out in the summer of 1962 was that, when you have both gauge symmetry and spontaneous symmetry breaking, the Nambu-Goldstone massless mode can combine with the massless gauge field modes to produce a physical massive vector field.”

Massless and spinless Nambu-Goldstone bosons are produced by spontaneously breaking a gauge symmetry, i.e. electroweak symmetry. But Professor Nakanishi’s point is that W gauge bosons are composite particles: for conservation of angular momentum in pion decays, weak bosons must be emitted as spin-0 massless bosons, which only acquire their chiral (left-handed?) spin-1 state, together with their mass, from a symmetry-breaking Nambu-Goldstone vacuum field:

Thus, electroweak “symmetry” is not (only) *spontaneously* broken (as in the Higgs mechanism for spontaneous symmetry breaking), but is *explicitly broken*, in order for the field quanta to acquire their masses. If it were just spontaneously broken, the weak bosons would be massless Goldstone bosons!

**Above:** Professor Nakanishi’s point about beta decay of pions is that pions are *spin-0*, and yet the dominant leptonic decay of charged pions is somehow via their conversion into a *spin-1* weak boson (which subsequently decays into a muon and a neutrino). It therefore looks as if the spin of weak bosons can be zero, and they pick up spin as well as mass while travelling through the vacuum, thus becoming spin-1 massive W bosons despite having been emitted with zero spin. An analogy to this change in the properties of particles as they travel through the vacuum is the flavour changing of neutrinos: 100% of the neutrinos emitted by the sun are electron neutrinos, but during their journey they interact with the vacuum and change flavour, so that only a third of the neutrinos received on the Earth are electron neutrinos, the rest being muon and tauon neutrinos. This is often misleadingly referred to as “neutrino oscillations”, which implies the assumption of a classical oscillating wave that gradually transforms neutrino flavours. Contrary to this smooth oscillation idea, any *individual* neutrino is not at any time a mixture of flavours, but only one flavour, so it must change its flavour in a discrete manner as a result of a discrete interaction. Thus, random quantum interactions in the vacuum convert the solar electron neutrinos into an equally partitioned mixture of the three flavours by the time they arrive at the Earth and are detected. The well-known extremely low interaction rate of neutrinos with *matter* is therefore not applicable to the interaction rate of neutrinos with the vacuum.
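The discrete flavour-change picture suggested above can be caricatured as a toy Markov process (my own illustrative sketch; the uniform re-selection rule is an assumption for illustration, not measured physics): start every neutrino as electron-type, let each discrete vacuum interaction randomly re-assign its flavour, and the population settles to roughly a third of each flavour, with no smooth “oscillation” anywhere in the model.

```python
import random

FLAVOURS = ("electron", "muon", "tau")

def propagate(n_neutrinos, n_interactions, seed=0):
    """Toy model of discrete flavour changes: every neutrino starts
    as electron-type (as emitted by the sun); each discrete vacuum
    interaction re-selects its flavour at random.  Returns the final
    fraction of each flavour in the population."""
    rng = random.Random(seed)
    tally = {f: 0 for f in FLAVOURS}
    for _ in range(n_neutrinos):
        flavour = "electron"                # all start as electron-type
        for _ in range(n_interactions):
            flavour = rng.choice(FLAVOURS)  # one discrete, random change
        tally[flavour] += 1
    return {f: c / n_neutrinos for f, c in tally.items()}
```

Each neutrino is always in exactly one flavour state; equipartition emerges purely from the statistics of discrete interactions.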

Since key properties (mass, spin) of the spin-1 massive weak bosons are acquired from vacuum interactions, rather than being intrinsic, it is possible that the left-handedness of the weak interaction also lies in the spin acquisition, rather than in the non-falsifiable assumption (in the Standard Model) that right-handed neutrinos don’t exist. The Higgs symmetry-breaking concept is that over very short distances (corresponding to high energies) the weak field quanta are massless, and that mass is acquired as they plough some distance through the vacuum, interacting with massive particles. If they pick up mass from interacting with the vacuum, this will occur as they move along, so mass will only be acquired after a transit through space. Over very small distances (or at very high energies) they won’t move far enough through the vacuum to interact and acquire mass, so they will remain massless over small distances, which is the basis for assuming an electroweak unification at high energies. But this Higgs mechanism is grossly simplistic, unchecked speculation, and doesn’t lead to falsifiable predictions of all the discrete particle masses. The real theory of particle mass will in some sense be a theory of quantum gravity charge, and it should predict masses precisely enough to be falsifiable. If quantum gravity charges (fundamental mass particles) have only one sign (as observed so far, since every mass observed falls in the same way in a gravitational field), then there is no way for fundamental charges of mass-energy to disappear. So when matter and antimatter are annihilated, gravitational charge is conserved and doesn’t disappear (the photon interacts with gravitational fields). Thus, the fundamental vacuum field (the raw quantum gravity charge) should be a fundamentally stable ground-state particle that cannot decay.
The Higgs boson, by contrast, is taken to be yet another unstable particle capable of decay, so the search for the Higgs boson is based on looking for decay products corresponding to those which would be emitted by a Higgs-like unstable particle.

**The path integral, Haag’s theorem and relativity: A reformulation of the phase amplitude factor from complex configuration space to Euclidean space, overcoming the problem that Haag’s theorem forbids an interaction picture in quantum field theory within Hilbert space**

Quantum field theory is presently formulated in a complex configuration space, such as a Hilbert or Fock space. Haag’s theorem denies the existence of the interaction picture of quantum field theory in a Fock or Hilbert space. Therefore, Haag’s theorem disproves the present formulation of quantum field theory in terms of renormalizable (polarizable) fields in such complex configuration spaces. Haag’s theorem states that there is a lack of consistency between the free-field and the renormalized (i.e. vacuum-polarization-compensated) vacuum states in Hilbert spaces, because the isomorphism that maps the free-field Hilbert space on to the renormalized-field Hilbert space is ambiguous. Haag’s theorem is an outstanding foundational problem for quantum field theory (the basis of the Standard Model, the most precisely verified theory in existence) and has led Dr Oakley to suggest the drastic step of removing the interaction picture, which leads to difficulties. We choose the alternative option of overcoming Haag’s theorem by retaining the interaction picture and reformulating the phase factor of the path integral using Euler’s formula, *e*^{iS} = cos *S* + *i* sin *S*, where *S* is the action (the integral of the Lagrangian over time, expressed in dimensionless units by dividing by the quantum unit of action, h-bar). As this formula shows, the phase factor *e*^{iS} is a simple periodic function when plotted on an Argand diagram. Paths with small actions (S ≈ 0) have phase factors *e*^{iS} ~ *e*^{0} = 1, while the cyclical values of *e*^{iS} for paths with greater actions cancel one another, so that the sum of phase factors over all paths is maximised where the action is least.
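The least-action dominance just described is easy to verify numerically. The sketch below uses a one-dimensional toy family of paths with an assumed action S(x) = x² (an illustrative choice, not a real field-theory path integral): summing e^{iS} over the whole family, the resultant is dominated by the near-minimal-action paths, while the large-action contributions largely cancel among themselves.

```python
import cmath

def path_sum(actions):
    """Sum of phase factors exp(iS) over a list of path actions S
    (in units of h-bar)."""
    return sum(cmath.exp(1j * s) for s in actions)

# Toy family of paths labelled by x, with assumed action S(x) = x**2
# (minimum S = 0 at x = 0), sampled densely over a symmetric range.
xs = [x / 100.0 for x in range(-1000, 1001)]
actions = [x * x for x in xs]

full = path_sum(actions)                        # all 2001 paths
near = path_sum([s for s in actions if s < 1])  # near-minimal-action paths only
```

The magnitude of `full` is a small fraction of the number of paths (massive cancellation), and it is close to the magnitude of `near`: the resultant arrow is effectively built from the paths near least action.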

It is shown in Figs. 1 and 2 that a precise duality exists, for path-integral purposes, between the complex phase factor *e*^{iS} and the factor cos *S*, the only difference being the loss of vector information: the *direction* of the resultant amplitude in a path integral is lost by the substitution *e*^{iS} → cos *S*, since this substitution changes the path integral from a sum of vectors (of equal length but varying direction on the Argand diagram) into a sum of scalars (amplitudes without direction, i.e. the output of cos *S*), which vary in value between plus and minus one to allow cancellation of the cyclical contributions from large actions. However, this vector information is unnecessary for existing uses of the path integral, so the substitution *e*^{iS} → cos *S* is an effective and necessary solution to the problem which Haag’s theorem poses for Hilbert space, allowing the path integral to be formulated in Euclidean space instead of a complex space. The implication for quantum field theory is the need to reformulate the existing mathematical structure using simpler tools which allow greater self-consistency, simpler evaluation, and more emphasis on understanding the physical mechanisms behind the equations.

**Fig. 1:** a single photon of light reflects off a mirror, appearing to reflect at an angle of incidence equal to the angle of reflection. Feynman explains in his 1985 book *QED* (diagrams on the left-hand side of the figure) how light “knows” what angle to reflect at. We know from other experiments (e.g. Young’s double-slit experiment, with photons fired one at a time and detected by photomultiplier) that the path a “real” photon of light takes is influenced by all possible paths, although most of the energy is transferred along paths at and near those of minimum action (where the phase factor is *e*^{iS} ~ *e*^{0} ~ 1), so that “virtual photons” or off-shell radiation go along the other paths and are normally cancelled out through arriving out of phase. This diagram compares the evaluation of the path integral by summation of phase factors for all possible paths in a complex configuration space with the replacement of the complex phase factor *e*^{iS} by the simple real-space cyclic factor cos *S*. Notice that the path integral using *e*^{iS} tells us two different things: the magnitude (length) of the nose-to-tail resultant arrow (sum over histories), and also its direction. The use of cos *S* for the phase factor in place of *e*^{iS} gives us the *same resultant magnitude for the path integral,* but it fails to tell us the *complex or imaginary direction* of that resultant arrow. So we lose one piece of information, and this is a prime example of the sort of mathematical substitution (analogous to the false statement [*x*^{2}]^{1/2} = *x*, which is clearly wrong for the case *x* = -1, where the equation absurdly tells you that 1 = -1) which makes those mathematicians suffering from excessive demands for rigor (a disease called rigor mortis) turn in their graves.
The likes of stringy arXiv advisers/bigots may well be unable to see the advantage of this simplification from complex configuration space to Euclidean space, but who cares about the rigor mortis of string theorists?

The whole point we are making is that, in the way the path integral is currently used and checked in quantum field theory (e.g. finding magnitudes of cross-sections, or correction factors for magnetic moments and the Lamb shift, by evaluating the perturbative expansion of the path integral), it is *not* being used to find the imaginary, complex direction of the resultant vector; it is simply being used to determine resultant scalar amplitudes (resultant arrow *lengths*, not directions), *so the full vector properties, which include the Argand-diagram direction of the resultant arrow, are just not relevant for the empirically confirmed predictions from the path integral!* Thus, we completely reverse the obvious argument against the cos *S* substitution! Please don’t fall into the error of assuming that the textbook phase factor *e*^{iS} is perfectly correct and must not be replaced (or, saying the same thing in different words, “it must be replaced only by something which is absolutely identical”). That’s the error of the proponents of epicycles, caloric and phlogiston, who insisted they were interested in science provided that any new theory was completely compatible with the existing (wrong) theory. The complex exponential phase factor comes from the solution to Schroedinger’s time-dependent equation, and that equation is a guess by Schroedinger which is not rigorously derivable theoretically (as Feynman points out in his *Lectures on Physics*). We know that Schroedinger’s time-dependent equation is only an approximation anyway, since it is non-relativistic. So we reverse the old route and find the most useful phase factor heuristically; then we can reformulate quantum mechanics, and the mathematical obfuscation and confusion due to Haag’s theorem disappears along with the complex configuration space.
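The proposed substitution can be sanity-checked numerically. By Euler’s formula the real part of Σ *e*^{iS} is always exactly Σ cos *S*; for an ensemble of paths whose net sin *S* contributions cancel (enforced below by choosing an action antisymmetric in the path parameter, an illustrative assumption, not a claim about all ensembles), the resultant magnitudes agree too:

```python
import cmath
import math

def complex_path_sum(actions):
    """Conventional path-integral phase sum: sum of exp(iS)."""
    return sum(cmath.exp(1j * s) for s in actions)

def real_path_sum(actions):
    """Proposed real-space substitute: sum of cos(S)."""
    return sum(math.cos(s) for s in actions)

# Path ensemble with actions antisymmetric about zero, so the net
# sin(S) contributions cancel pairwise (sin is an odd function).
xs = [x / 100.0 for x in range(-500, 501)]
actions = [x ** 3 for x in xs]

z = complex_path_sum(actions)  # complex resultant
r = real_path_sum(actions)     # scalar resultant
```

Under this cancellation assumption the imaginary part of `z` vanishes, its real part equals `r`, and the two resultant magnitudes coincide; only the Argand-diagram direction is discarded.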

We’re left with a Euclidean space in which relativistic curvature is produced by virtual-particle interactions. Now what about the Lorentz transformation in special relativity, and the corresponding contraction of spacetime around mass and energy (spacetime curvature) in general relativity, and their consequences for Euclidean space? Dr Woit argues that a Euclidean signature is required even for the conventional formulation (phase amplitudes in a complex configuration space) of quantum field theory in the Standard Model, in *Quantum Field Theory and Representation Theory: A Sketch,* 2002, p. 51: “the standard model should be defined over a Euclidean signature four dimensional space time since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature.” Special relativity is in any case based on Machian observables, implicitly excluding the particulate field or ether of virtual particles in the vacuum required by quantum field theory (these virtual particles cause real, measurable effects such as the experimentally demonstrated Casimir force, the quantum field perturbations to the magnetic moments of leptons, and the Lamb shift), and it states (correctly) that we can’t observe absolute motion by measuring the time taken for light to bounce back and forth across a “fixed” distance in different directions, because the Lorentz contraction varies the length of the “fixed” distances. Thus a Michelson-Morley device simply can’t *detect* absolute motion:

“Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics …”

“The reason that special relativity was considered a better explanation than the Lorentz-FitzGerald hypothesis can best be illustrated by Einstein’s own words: “The introduction of a ‘luminiferous ether’ will prove to be superfluous inasmuch as the view here to be developed will not require an ‘absolutely stationary space’ provided with special properties.” The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”

G. Builder, “Ether and Relativity”, *Australian Journal of Physics,* v11, 1958 (linked here), states that if you move atomic clocks, absolute motion is detectable from time dilation (special relativity does not apply here, and general relativity is based on general covariance, which is not the principle of relativity): “we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.” (In 1971, J. C. Hafele flew atomic clocks around the world, citing Builder’s paper and confirming Builder’s conclusion that the relative aging of clocks in different states of motion implies the existence of a means to detect absolute motion; *Science,* vol. 177, pp. 166-8. Notice, however, that the winner of the Templeton Prize for Religion, Paul Davies, obfuscated Hafele’s analysis at length in his book *About Time*, claiming misleadingly that since Hafele’s analysis of time dilation was based on relativity, his results must somehow defend the religion whereby special relativity disproves the existence of absolute facts. Actually, special relativity isn’t applicable to accelerating motions, and some accelerations are needed to separate atomic clocks and bring them back together, so you are dealing with general relativity, which is based on general covariance of the equations of motion, rather than with special relativity. Regardless of which of Einstein’s theories you use, you find that, in similar gravitational fields, the clocks which slow down the most are the ones which are moving the most. Thus, time dilation is evidence that you can tell which of two atomic clocks has been in motion. If all motion were relative, you wouldn’t be able to tell. Thus, outside the narrow confines of special relativity, such as uniform motion, *motion isn’t relative*.)

**Fig. 2:** (click on image for a larger view) the equivalence of the resultant amplitudes when using *e*^{iS} and cos *S* for the phase factor in the path integral (sum over path histories). Basically, what we are suggesting is that we take Euler’s *e*^{iS} = cos *S* + *i* sin *S* and then drop the imaginary term *i* sin *S*.

**Michelson-Morley experiment on YouTube**

Thanks are due to Matti Pitkanen for news of these great experiment videos:

It seems that anyone can now repeat the Michelson-Morley experiment to verify the result that, as FitzGerald concluded, no absolute motion is detectable when rotating an interferometer. The instrument works by splitting a light beam with a piece of glass at an angle to the beam, sending the two beams (at right angles to each other) on two-way journeys to mirrors, and then recombining them to create interference fringes on a screen. The interference fringes are extremely sensitive to small changes in the distance travelled or the speed of each beam of light, so by rotating the instrument it would be possible to see the effects of any absolute motion of the Earth, *provided that the effect of absolute motion doesn’t change two things at once.* When the instrument is rotated in a plane horizontal to the Earth’s surface, no change in the interference fringes is observed, which George FitzGerald historically interpreted in the journal *Science* in 1889 (two years after the Michelson-Morley experiment) as *evidence that absolute motion has two effects* which cancel out any change in the interference pattern: (1) absolute motion affects the speed of light, and (2) absolute motion contracts the instrument in the direction of its motion (an effect of the gravitational field in space, analogous in some respects to the pressure of water on the front of a moving ship or aircraft, which causes a compressive force and thus a small contraction in the direction of motion). FitzGerald explained that these two effects together prevent the Michelson-Morley instrument from registering any change in the interference fringes, because the contraction of the instrument shortens the travel distance for light in the direction of absolute motion.
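FitzGerald’s cancellation argument can be put into numbers with the standard two-way light-travel-time formulas (the arm length and the “absolute” speed used below are illustrative values, not measurements): without contraction, the arm aligned with the motion takes longer, and shrinking that arm by the factor (1 – *v*^{2}/*c*^{2})^{1/2} equalizes the two travel times, so no fringe shift appears.

```python
import math

C = 299792458.0  # speed of light, m/s

def t_parallel(L, v):
    """Two-way light travel time along the direction of motion for
    arm length L at 'absolute' speed v: L/(c-v) + L/(c+v)."""
    return L / (C - v) + L / (C + v)

def t_perpendicular(L, v):
    """Two-way light travel time across the motion: 2L/sqrt(c^2 - v^2)."""
    return 2.0 * L / math.sqrt(C * C - v * v)

def contraction(v):
    """FitzGerald-Lorentz contraction factor sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - (v / C) ** 2)
```

For example, with an 11 m arm and v = 30 km/s (the Earth’s orbital speed), `t_parallel` exceeds `t_perpendicular` by a tiny amount, but `t_parallel(11 * contraction(v), v)` equals `t_perpendicular(11, v)`: the contraction exactly cancels the travel-time difference.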

Einstein didn’t get away from this physical mechanism of relativity, the FitzGerald-Lorentz explanation, because special relativity includes the FitzGerald-Lorentz contraction factor, which shortens the instrument in the direction of motion. Therefore, because special relativity includes that equation for the contraction of the instrument, it implies that light velocity must vary as the instrument is moved (say due to the Earth’s 30 km/s motion in orbiting the sun, or the 600 km/s motion of the Milky Way towards Andromeda): according to Einstein’s special relativity, motion *does* contract the instrument by the FitzGerald-Lorentz factor (1 – *v*^{2}/*c*^{2})^{1/2}, so the observed lack of interference fringe changes in the horizontal rotation of the instrument rules out a constant velocity of light. The relativistic contraction of the instrument as the relative direction of motion changed during rotation would be predicted to make the interference fringes change, because a contraction of length in the direction of motion will shorten one light path relative to the other. The combination of “contraction formula plus no observed interference fringe shift upon rotation” therefore implies that the velocity of light is absolute, in order to compensate for the contraction and produce the observed lack of interference fringe change!
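FitzGerald’s compensation can be checked arithmetically. In the ether-frame picture, the two-way time along the motion is (*L*/γ)[1/(*c* – *v*) + 1/(*c* + *v*)] with the arm contracted to *L*/γ, while the two-way time across the motion is 2*L*/(*c*^{2} – *v*^{2})^{1/2}; both reduce to 2*L*γ/*c*, so no fringe shift survives. A short Python sketch (the 11 m arm length is an illustrative figure):

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 30_000.0        # Earth's orbital speed, m/s (as in the text)
L = 11.0            # interferometer arm length, m (illustrative value)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Two-way time along the direction of motion, arm contracted to L/gamma:
t_parallel = (L / gamma) * (1.0 / (c - v) + 1.0 / (c + v))

# Two-way time across the direction of motion (uncontracted arm):
t_perpendicular = 2.0 * L / math.sqrt(c ** 2 - v ** 2)

# Both equal 2*L*gamma/c, so the contraction exactly cancels the fringe shift.
assert math.isclose(t_parallel, t_perpendicular, rel_tol=1e-12)
```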

In the videos, the Michelson-Morley instruments are also rotated in the vertical plane, where the cyclically changing gravitational loading naturally causes the base board of the oblong rectangular instrument to contract slightly as it rotates, producing changes to the interference fringes. This is simply due to the masses in the instrument, such as the camera arm, inducing additional contractions and expansions under the force of gravity as the instrument is whirled around (gravity obviously has no such effect when the instrument is rotated *horizontally,* which induces no change in gravitational loading). Unlike the FitzGerald-Lorentz contraction, these additional small contractions and expansions of the instrument base board are of course not compensated for by the absolute velocity of light, so they show up on the screen as cyclical variations in the interference fringe pattern. The second experiment minimized gravitational effects on the instrument by using a stronger base board and rotating the instrument more slowly in the vertical plane by hand rather than with a motor; this reduced the shifting of the interference fringes compared to the first experiment, confirming that the shifting pattern was simply an instrument effect due to gravitational forces during vertical rotation.

**20 Nov 2010 update: Garrett Lisi and Peter Woit versus Jacques Distler and the superstring theory community (Aaron Bergmann and others)**

The comments section of Dr Woit’s blog post A Geometric Theory of Everything is currently the battleground between Dr Lisi’s half-baked E_{8} particle physics theory and his critics such as Professor Distler and other superstring believers. Dr Woit writes in the latest comment:

This is just politics. Science isn’t about social niceties or diplomacy, but about facts. The fact is, Dr Lisi’s E_{8} theory is less developed than string theory since (for one thing) less time and money have been invested in it, and Jacques Distler has invested research time in string theory, which gives him a vested interest in defending it as the only game in town; to do that, he dismissively points out problems in other people’s half-baked theories. There’s a paradox here: nobody can expect to produce a theory as well developed as string theory without the decades of time and research which have been invested in string theory, but unless they do, they will be dismissed. Distler’s key argument is that Lisi’s symmetry pattern needs mirror fermions which destroy the theory; Lisi responds with the convenient arm-waving excuse: “I think what’s really going on is that these mirror fermions may be related to usual fermions using an E8 gauge transformation, so that they never appear as mirrors but rather as another generation.” So he doesn’t really have a cut-and-dried answer, just a vague guess. But superstring theory, too, in common with all supersymmetric theories, has unobserved “sparticles”, with masses conveniently too high to observe and conveniently not predictable precisely (so the theory is really non-falsifiable, or “not even wrong” in Pauli’s language).

**Special relativity and the Michelson-Morley result**

Special relativity demands a “hidden” absolute velocity of light, to compensate for the contraction of the Michelson-Morley instrument in the direction of its motion:

Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, *Space Time and Gravitation: An Outline of the General Relativity Theory,* Cambridge University Press, Cambridge, 1921, pp. 20, 152:

“The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus … The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.”

FitzGerald’s original explanation (published in *Science* in 1889, about three years before Lorentz) of the failure of the experiment to detect the absolute velocity of light that James Clerk Maxwell postulated when proposing the experiment was simply that the instrument contracted in the direction of motion. The shortened instrument reduced the two-way path of light in the direction of motion, compensating for the effect of absolute motion and preventing the shift in interference fringes from being observable.

Thus, FitzGerald preserved the ether and the absolute velocity of light, with the contraction formula. Einstein’s special relativity doesn’t debunk FitzGerald’s explanation, because Einstein still has the same contraction formula, the FitzGerald-Lorentz contraction.

The Earth is going around the sun at 30 km/s and the Milky Way is going towards Andromeda at 600 km/s. If the speed of light were always constant, then the contraction of the instrument would produce interference fringe shifts as it was rotated, by shortening one two-way light beam path. Thus, if the instrument contracts in the direction of the Earth’s motion, then this would produce interference fringe shifts unless the velocity of light changes with direction, preventing such shifts:

(1) Contraction in the direction of motion by the FitzGerald-Lorentz factor will cause interference fringe movement as the Michelson-Morley instrument is rotated around, due to the contraction of the instrument in the direction of motion (shortening one light path relative to the other) caused by the motion of the Earth.

(2) To get rid of the interference fringe shifts due to the Lorentz-FitzGerald contraction of the instrument in the direction of motion (shortening the path taken by one of the split light beams), you need to have a variable velocity of light!
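For scale, the FitzGerald-Lorentz factor (1 – *v*^{2}/*c*^{2})^{1/2} quoted above gives only a tiny fractional shortening at the speeds mentioned, roughly *v*^{2}/(2*c*^{2}): about 5 parts in 10^{9} at 30 km/s and 2 parts in 10^{6} at 600 km/s. A quick Python computation:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def contraction_factor(v):
    """FitzGerald-Lorentz factor (1 - v^2/c^2)^(1/2) for speed v in m/s."""
    return math.sqrt(1.0 - (v / c) ** 2)

for label, v in [("Earth's orbital motion", 30e3),
                 ("Milky Way towards Andromeda", 600e3)]:
    shortening = 1.0 - contraction_factor(v)
    print(f"{label}: v = {v / 1e3:.0f} km/s, "
          f"fractional shortening = {shortening:.2e}")
```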

If Einstein wanted to get rid of an absolute velocity of light, he would need to get rid of the FitzGerald-Lorentz contraction. He doesn’t. He has the contraction! He doesn’t realize that it is incompatible with his postulate of the constancy of the velocity of light, because he doesn’t understand the Michelson-Morley experiment. He just focusses on Maxwell’s equations. The FitzGerald-Lorentz contraction is a compensating factor for the absolute speed of light: if you want a non-absolute speed of light, you then have to get rid of that compensating factor, or else *it (the contraction of the instrument)* will shorten one light path and thus cause interference fringe shifts!

Thus, special relativity was *actually misunderstood* by its alleged author, Einstein. Sir Edmund Whittaker’s *History of the Theories of Aether and Electricity*, v2, 1951, attributed special relativity to Lorentz and Poincare, outraging Einstein’s friend Abraham Pais, who then lent Einstein his original copy of Poincare’s 1904 relativity paper, which Einstein had apparently never seen before (this is in Pais’ 1982 biography of Einstein, *Subtle is the Lord*). Einstein then instructed Born to give a public acknowledgement of Poincare’s role, and Born announced that Poincare had discovered relativity before Einstein. This made Pais furious, because Poincare’s original relativity had three postulates, unlike the two in Einstein’s. Pais ignores the points of Builder and Eddington. The subject has become completely pseudoscientific, putting religious style hero worship ahead of critical analysis. Bearing in mind that Einstein didn’t bother to read up the Michelson-Morley experiment or the papers of FitzGerald, Lorentz and Poincare, before rushing out his own paper, it is no wonder that he was confused.

G. Builder showed in a peer-reviewed journal in 1958 that absolute motion is implicit also in the “relativistic” time-dilation:

– G. Builder, “Ether and Relativity”, *Australian Journal of Physics,* v11, 1958. In 1971, J. C. Hafele flew atomic clocks around the world, citing Builder’s paper and thus confirming Builder’s conclusion that the relative aging of clocks in different states of motion implies the existence of a means to detect the presence of absolute motion: *Science,* vol. 177, pp. 166-8.
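The order of magnitude of the clock effect Hafele measured can be estimated from the same time-dilation factor. The sketch below is a velocity-only estimate with illustrative airliner figures (250 m/s for roughly 40 hours), and deliberately ignores the gravitational (altitude) term that the full Hafele analysis also includes:

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 250.0           # illustrative airliner ground speed, m/s
t = 40 * 3600.0     # illustrative ~40 h circumnavigation, s

# Velocity-only clock offset accumulated over the flight; the exact
# Hafele-Keating prediction also includes a gravitational-potential term.
offset = t * (1.0 - math.sqrt(1.0 - (v / c) ** 2))
print(f"velocity-only clock offset: {offset * 1e9:.0f} ns")  # tens of ns
```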

“… we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. … The reason that special relativity was considered a better explanation than the Lorentz-FitzGerald hypothesis can best be illustrated by Einstein’s own words … his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”

**Failure of the peer-review system in theoretical particle physics**

Just a few additional bits taken from the comments section of the Not Even Wrong blog post A Geometric Theory of Everything. Note that Dr Woit, the blog administrator, wrote a chapter about the failure of the peer-review system in the case of the Bogdanovs in his 2006 book, *Not Even Wrong*, and that Professor Jacques Distler of Steven Weinberg’s stringy research department at Texas University was then an arXiv.org Cornell University preprint server adviser, who allegedly censored trackbacks to string theory papers on the arXiv that came from Woit’s blog postings, while allowing more enthusiastic trackbacks from his own Musings blog and from other string theorists’ blogs. (The fact that *non*-peer-reviewed “preprint” online servers such as arXiv.org are routinely used to “publish” papers, bypassing the supposedly peer-reviewed journal system, is another problem in itself. The role of print journals is now being reversed: no longer are they the means for rapid widespread dissemination of new ideas; they’re now just a slow vehicle for endorsement by “black-balling” elitist groupthink, which, due to copyright and charges to download papers, ends up making new research unavailable to many people around the world who do not have online access to the journals. It has been made clear that Einstein’s papers were not peer-reviewed; his first encounter with peer review was at the *Physical Review* in 1936, when he wrote to the editor that he refused to have his papers circulated to anonymous reviewers prior to publication. The peer-review system thus marks the line between science and the consensus of groupthink, the religious doctrine of believing in whatever is fashionable or has “endorsement” from “authority” in science.) Woit was accused of not taking the “professional” standpoint of supporting peer-reviewed journal papers over preprints, and responded:

“The Bogdanovs (not “Bogdanavs”) are the poster-boys for the problems with peer-review. They published five peer-reviewed articles, two of them in very well-known and highly-respected journals. One of them was given his Ph.D. on the basis of having published such peer-reviewed articles. The articles are complete gibberish, and very different than Garrett’s. Garrett is making clear statements and proposals, some of which may be wrong and/or incomplete. What he’s saying is so clear that Jacques can write a mathematical physics paper purporting to give a rigorous refutation of some of it.”

He later adds:

“Obviously I’m not arguing for the suppression of anyone’s articles, peer-reviewed or not. Given the broken nature of the peer-reviewed system, I don’t think you can conduct an argument about the value of an idea by saying that X is peer-reviewed and Y isn’t. Or by seeing that Z has made it into Wikipedia, or is the subject of an article in a popular magazine.”

Professor Hawking’s co-author, astrophysicist George F. R. Ellis makes the important point:

“People need to be aware that there is a range of models that could explain the observations… For instance, I can construct you a spherically symmetrical universe with Earth at its center, and you cannot disprove it based on observations… You can only exclude it on philosophical grounds. In my view there is absolutely nothing wrong in that. What I want to bring into the open is the fact that we are using philosophical criteria in choosing our models. **A lot of cosmology tries to hide that**.” (Quoted in: W. Wayt Gibbs, “Profile: George F. R. Ellis,” *Scientific American*, October 1995, Vol. 273, No.4, p. 55. Emphasis added.)

Of course, this is the opposite of what happens in science. Ptolemy fits circular epicycles to an earth-centred universe model, then claims that the predictions prove the model right, and everybody believes him, making the transition to a better model (the solar system, requiring Kepler’s elliptical orbits to avoid epicycles) very hard. Early scientific models acquire an authority status just because they were there first, and people religiously believe them to be true just because they contain enough epicycles to make them fit the evidence. Then you get the problem that when the correct theory comes along, it is initially in a form which makes it give worse predictions than the wrong mainstream theory. E.g., Aristarchus first came up with the solar system in 250 B.C., but because he didn’t have good data, he assumed circular orbits (not elliptical orbits), and his model was inaccurate as well as being ridiculed because it contravened the then-assumed laws of motion (Aristotle’s laws of motion, which were very different to Newton’s, and were wrong). Instead of everybody getting together to work on Aristarchus’s solar system in 250 B.C., they simply used groupthink prejudice to ignore it and to assume that the sun, planets and stars all orbit the earth daily, with epicycles accounting for the apparent retrograde motions. Ptolemy’s Almagest of 150 A.D. makes it clear that Aristarchus’s solar system was actually ridiculed for allegedly (in combination with Aristotle’s flawed laws of motion) predicting that clouds and birds should orbit the equator at 1,000 mph, a 1,000 mph wind at the equator. Copernicus, when trying to establish the solar system in 1500 A.D., in his first printing proofs credited Aristarchus with first suggesting the solar system, then deleted that acknowledgement before publication. It looks extremely unfashionable to try to repopularize an old supposedly “discredited” idea in a new format.
Similarly, Darwin did not set out his account of evolution by praising Lamarck’s earlier (incorrect) theory of evolution. If you correct an incorrect theory so that it is no longer incorrect, a frank account of the process will make the result look “contrived” or fiddled. However, a certain amount of fiddling around with theories or puzzles is always needed to get anywhere, whether the result is right or wrong.

Changing the path integral phase factor from e^{iS} to cos S as explained above gets rid of the problem that Haag’s theorem disproves renormalization in the complex Hilbert space of quantum field theory, because it replaces Hilbert space with Euclidean space, where renormalization occurs very simply through polarization. Quantum gravity as the exchange of gravitons then becomes the summing of vacuum off-shell radiation exchange processes in real space, an off-shell version of LeSage’s theory. The traditional objections to LeSage all suppose that the graviton is an on-shell particle that can cause drag and heating in moving objects. By contrast, in quantum gravity the graviton has always been an off-shell (virtual) particle that causes no heating or drag to moving particles, like the Casimir zero-point radiation in the vacuum. However, it is clear that some vacuum interactions with moving particles do occur: gravitons cause contraction of masses moving in the vacuum (the Lorentz contraction), just as they do to static masses (as given by the field equation in general relativity).
