New presentation of quantum gravity

Update (12 January 2010): around us, the accelerating mass of the universe causes an outward force that can be calculated by Newton’s 2nd law, which in turn gives an equal inward reaction force by Newton’s 3rd law. The fraction of that inward force which causes gravity is simply equal to the fraction of the effective surface area of the particle which is shadowed by relatively nearby, non-accelerating masses. If the distance R between the two particles is much larger than their effective radii r for graviton scatter (exchange), then by geometry the area of the shadow cast on surface area 4*Pi*r^2 by the other fundamental particle is Pi*r^4/R^2, so the fraction of the total surface area of the particle which is shadowed is (Pi*r^4/R^2)/(4*Pi*r^2) = (1/4)(r/R)^2. This fraction merely has to be multiplied by the inward force generated by distant mass m undergoing radial outward observed cosmological acceleration a, i.e. force F = ma, in order to predict the gravitational force. This is not the same thing as LeSage’s non-factual, non-predictive gas shadowing (which is to quantum gravity what Lamarck’s theory was to Darwin’s evolution, or what Aristotle’s laws of motion were to Newton’s, i.e. mainly wrong). In other words, the source of gravity and dark energy is the same thing: spin-1 vacuum radiation. Spin-2 gravitons are a red herring, originating from a calculation which falsely assumed that gravitons either would not be exchanged with distant masses, or that any effect would somehow cancel out or be negligible. Woit states:

“Many of the most well-known theorists are pursuing research programs with the remarkable features that:

“• You don’t need to have any idea what the fundamental degrees of freedom are.
“• You don’t need any fundamental dynamical laws either.
“• You can do everything with high school mathematics.”

Although making the most basic quantum gravity predictions can be done with “high school mathematics”, the deeper gauge symmetry connection of quantum gravity to the Standard Model of particle physics does require more advanced mathematics, as does the job of deriving a classical approximation (i.e. a corrected general relativity for cosmology) to this quantum gravity theory, for more detailed checks and predictions. When Herman Kahn was asked, at the 1959 congressional hearings on nuclear war, whether he agreed with the Atomic Energy Commission name of “Sunshine Unit” for strontium-90 levels in human bones, he replied that although he engaged in a number of controversies, he tried to keep the number down. He wouldn’t get involved. Doubtless, Woit has the same policy towards graviton spin. What I’m basically saying is that the fundamental exchanged particle is the one causing cosmological repulsion, which has spin-1. This causes gravity as a “spin-off” (no pun intended). So if spin-1 gravitons are hard to swallow, simply rename them spin-1 dark energy particles! Whatever makes the facts easier to digest…
[Figure: area-shielding illustration of quantum gravity]
Fig. 1 – new presentation of quantum gravity, based on both the recent discussion with Doug Sweetser in the About page comments, and an attempt to explain the mechanism to a science teacher, during an hour long run in the park this evening. Note that this predicts the actual strength of gravity, e.g. it predicts the value of the gravitational parameter G. This is not a non-predictive theory like string theory, based on 6 or 7 extra dimensions that nobody can observe and adding 100 or more extra unknown parameters plus a multiverse of 10^500 extra universes to the Standard Model of particle physics. It’s a predictive theory based on factual inputs, not a non-predictive theory which is based on speculations about yet other speculations (Planck scale unification, wrong spin-2 gravitons, etc.).

The mainstream spin-2 graviton theory can’t calculate anything checkable, since it has to falsely ignore graviton contributions from the surrounding universe, which are immense due to the fact that (1) the masses of galaxies in the surrounding universe are immense, and (2) the gravitons from such distant masses are converging from a great distance as they are exchanged with masses here, which is the opposite of divergence. This spin-1 graviton theory is the only possible falsifiable theory of quantum gravity for this reason: it is based on observable facts seen in nature. By Newton’s 2nd law, the cosmological acceleration (a = Hc ~ 7 x 10^-10 ms^-2) of the mass of the universe (~3 x 10^52 kg of observable luminous matter, according to NASA’s Hubble Space Telescope) implies a force outward from any observer of F = Ma = 2 x 10^43 Newtons. Newton’s 3rd law implies an equal and opposite force (i.e. inward directed, towards the observer). From the possibilities of known particle physics (gravity and the Standard Model), this force must be carried by gravitons, implying the mechanism in Fig. 1, which gives gravity as the asymmetry when this force is shadowed by masses with a cross-section for graviton interactions equal to the black hole event horizon area for the mass of that fundamental particle, a fact that is empirically justified in the earlier post linked here. The black hole event horizon radius for an electron is 1.35 x 10^-57 metre, so it has a cross-section of just 5.73 x 10^-114 m^2. This small cross-section is why gravity is so weak compared to other forces (e.g., the gravitational attraction between two apples is negligible, and you need immense masses like the mass of the earth to make the gravitational interaction significant, whereas other fundamental forces show up when dealing with just a few particles).

There is a radial inward force of 2 x 10^43 Newtons, which is the 3rd law reaction to the observed cosmological acceleration of the universe around the observer. This is an immense force, but because the cross-section for quantum gravity is so small, gravity is cut down to the observed strength by the shadowing effect in Fig. 1.
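As a quick numerical cross-check of the figures quoted above, here is a short Python sketch. The Hubble parameter value H ≈ 2.3 x 10^-18 s^-1 (about 70 km/s/Mpc) is my assumed input; the other inputs are the NASA mass estimate quoted above and the standard Schwarzschild radius formula r = 2Gm/c^2 used in the earlier post:

```python
import math

G  = 6.674e-11   # gravitational parameter, m^3 kg^-1 s^-2
c  = 3.0e8       # speed of light, m/s
H  = 2.3e-18     # assumed Hubble parameter, s^-1 (~70 km/s/Mpc)
M  = 3.0e52      # observable mass of the universe, kg (NASA estimate quoted above)
me = 9.109e-31   # electron mass, kg

a = H * c                # cosmological acceleration, ~7 x 10^-10 m/s^2
F = M * a                # outward force by Newton's 2nd law, ~2 x 10^43 N
r = 2 * G * me / c**2    # electron's black hole event horizon radius, ~1.35 x 10^-57 m
sigma = math.pi * r**2   # graviton interaction cross-section, ~5.7 x 10^-114 m^2

print(f"a = {a:.1e} m/s^2, F = {F:.1e} N")
print(f"r = {r:.2e} m, sigma = {sigma:.2e} m^2")
```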

The gravitational attraction force given by Newton’s law with parameter G obtained in the usual way empirically (from the twisting of a fibre by the attraction of large lead spheres in the laboratory) can now be calculated theoretically as proved in Fig. 1 above. It is accurate, with errors well within the error in the estimate of the mass of the observable universe (3 x 10^52 kg, which is taken from page 5 of the NASA report linked here). Fig. 1 also summarizes the flaws in trying to identify this theory with LeSage’s inaccurate and useless theory of gravity in order to ignore it (which is like falsely claiming Darwin’s evolution is wrong because Lamarck came up with an inaccurate and misleading theory of evolution before Darwin sorted out the facts of evolution; it superficially impresses the gullible, but it is a false argument itself): LeSage’s theory is based on real radiation, not virtual (off-shell) radiation like gauge bosons (which don’t heat up objects or slow them down by causing drag). In any case, quantum gravity will imply that there are gravitons throughout the vacuum, so if this naive objection were true, it would be a problem for spin-2 gravitons just as much as for spin-1 gravitons. Actually, there are interactions between gravitons and moving masses: these cause the FitzGerald contraction in length in the direction of motion (head-on pressure effect), the increase in mass (snowplow effect), and for static masses the radial contraction (compression) which leads to various curvature effects in the approximation to quantum gravity which is known as general relativity.

[Figure: LeSage’s shadow theory]
Above: LeSage’s shadow theory was developed originally by Newton’s friend Fatio, but was a failure because it couldn’t predict anything:

(1) Fatio and LeSage didn’t know Weyl’s gauge theory whereby two gravitational charges will exchange off-shell virtual particles, gravitons, to cause gravity (which is the case in the other fundamental particle interactions in the Standard Model). So they falsely speculated that gravity was caused by dust-like particles which would cause drag, slowing down the planets and heating them up by impacts. Maxwell and Kelvin later pointed out these flaws, debunking the Fatio-LeSage theory.

(2) They didn’t know that we’re surrounded by immense masses in all directions and that according to any Weyl gauged quantum gravity theory, we will be exchanging gravitons with those surrounding masses. There is no way to prevent or justifiably ignore the consequences of this graviton exchange with immense distant masses.

(3) They didn’t know about the recession of matter, so they couldn’t predict the cosmological acceleration of the universe correctly ahead of measurement (which we did publish in 1996, two years before discovery), and then use that value to calculate the outward force of receding mass M by Newton’s 2nd law, F = Ma. They couldn’t in consequence apply Newton’s 3rd law to get the equal and opposite inward-directed, graviton mediated force. They also didn’t have any evidence about the graviton interaction cross-sectional area for matter; they didn’t know the evidence that it is black hole sized.

There are other ignorant claims to be found on the internet. For example, http://www.mathpages.com/HOME/kmath209/kmath209.htm states falsely that the isotropy of the universe is 1 part in 100,000 without specifying the area of sky to which this amount of cosmic background radiation temperature fluctuation applies: the page claims that this amount of anisotropy would cause “fluctuations in the “weight” of a 1 pound object (in the shape of a slender rod, to make it sensitive to the directional flux) on the order of 100 pounds”. It gives no time-frame for the period of oscillation of this density, or the ratio of length to diameter of the rod, just the pseudoscientific word “slender” (which is non-quantitative). Actually, this is totally false, because if a long slender rod is made, it will not fluctuate in mass due to the anisotropy, because the anisotropy is not fluctuating! The same pattern of anisotropy in the cosmic background radiation exists across the sky. Rotating the rod makes no difference whatsoever, because the rod is composed of individual fundamental particles! The sum of forces acting on those particles is no different regardless of the orientation of the rod. With a cross-section for graviton interactions of 5.73 x 10^-114 m^2 for an electron, there is no significant chance (even with the mass of the earth) that two fundamental particles will lie on a single given line of sight. Hence, the shape of a given mass is irrelevant for the mass sizes we are concerned with in the case of rods in a laboratory. There are also false “arguments” that gravitons have to travel faster than light, cause heating to melt objects, cause drag forces, and so on, which are based upon studiously ignoring the off-shell nature of force-mediating virtual particles, Weyl gauge bosons. Does your fridge magnet glow red-hot from exchanging gauge bosons with the fridge door? No? Electromagnetism between fundamental charged particles is 10^40 times stronger than gravity, so if gravitons are supposed to cause heating then electromagnetism would cause a heating effect 10^40 times worse than gravitons! That debunks the idea that gauge bosons cause any type of heating, including drag effects which cause objects moving in a real (on-shell, not off-shell) fluid to heat up.

All of the objections to this mechanism of gravity are similar in their off-the-top-of-my-head stupidity and ignorance to the objections Feynman’s path integrals received from Oppenheimer, Bohr, Teller and others at Pocono in 1948; they are based on ignoring the facts and simplistically dismissing progress.

Consider Oppenheimer’s attempt to censor Feynman’s path integrals without listening at all, as described by Freeman Dyson (Stuckelberg was working on the same idea independently, but was ignored and – as with Zweig’s quarks – he received no Nobel Prize). It’s remarkable that genius in the past has consisted to such a large degree in overcoming apathy (Oppenheimer was not just a stubborn exception who objected to path integrals. E.g., Feynman is quoted by Jagdish Mehra in The Beat of a Different Drum, pp. 245-248, saying that Teller, Dirac and Bohr all also claimed to have “disproved” path integrals: Teller’s disproof consisted of saying that Feynman didn’t have to take account of the exclusion principle, Dirac disproved it for not having a unitary operator, and Bohr disproved it because he believed that Feynman didn’t know the uncertainty principle: “it was hopeless to try to explain it further.” So without Dyson’s brilliance at explaining ideas, Feynman’s path integrals would probably have been ignored.)

“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, in Jagdish Mehra, The Beat of a Different Drum (Oxford, 1994, pp. 245-248).

http://www.mathpages.com/HOME/kmath209/kmath209.htm compiles equally false dismissals of physical mechanisms from “geniuses” of physics, with added nonsense thrown in (like the mass variation claim we have just debunked):

Historical Assessments of the Fatio-Lesage Theory

It’s an interesting historical fact that the attitudes of scientists toward the Fatio-Lesage “explanation” of gravity have varied widely, not just from one scientist to another, but for individual scientists at different moments. This is exemplified by Newton’s ambivalence. On one hand, he told Fatio that if gravity had a mechanical cause, then the mechanism must be the one Fatio had described. … he explicitly denied (in a famous letter to Bentley) the intelligibility of bare action at a distance, but he just as explicitly rejected (in a letter to Leibniz) the notion that space is filled with some material substance (a la Descartes) that communicates the force of gravity. His alternative was to say that gravity is caused by the will and spirit of God, not by any material cause. Of course, he gave consideration to various possible material mechanisms, and even included some Queries in the latter editions of Opticks, speculating on the possibility of an ether …

Even setting aside the outlandishness of the explanation, Newton was never able to extract from Fatio’s idea any testable consequence that could support it, so the idea remained an occult mechanism which, according to Newton, is not the proper purview of science.

Subsequent scientists have had similarly ambivalent reactions to the theory of Fatio and Lesage. For example, Euler originally expressed interest in Le Sage’s theory, stating (in the same conditional manner employed by Newton) that if gravity is due solely to impulse forces, then something like Lesage’s theory must be true. However, Euler ultimately rejected Lesage’s theory …

This striking ambivalence regarding the Fatio-Lesage theory has many other examples. Herschel spoke for many scientists when he said it was “too grotesque to need serious consideration”, whereas Thomson and Tait gave it serious consideration, the latter even asserting that it was “the only plausible answer which has yet been propounded”. Darwin too gave the idea “serious consideration”, but he also said “no man of science is disposed to accept it as affording the true road”.

Several of the founders of modern kinetic theory, including both John Herapath in 1820 and John James Waterston in 1845, began their investigations by trying to devise mechanical explanations of gravity. Herapath seems to have been influenced explicitly by Lesage’s writings, whereas Waterston was apparently one of the many independent discoverers of the concept. …

I remember a discussion on Physics Forums in which all the errors in LeSage’s theory and the dismissals of it by famous physicists were straightened out over many hundreds of comments. Finally the discussion thread was closed by an administrator who falsely stated that at some point in the discussion a decisive dismissal of physical mechanisms had been given; he couldn’t remember what it was, but it supposedly proved that it was pointless to go on discussing the topic. This is of course wrong, but it is what happens in such pointless discussions. Feynman had tried to defend himself against Bohr, who closed the discussion in the same way by falsely claiming that Feynman didn’t know the uncertainty principle. If he had shouted back, Bohr would doubtless have just become either angry or smug, and would have still ignored the physics Feynman was putting forward.

It is important to “stand upon the shoulders of giants” in physics in order for them to pay attention to your idea. (The Feynman suppression episode in 1948 reminds you of a famous joke by the late Sidney Coleman: “If I have seen further than others, it is by standing between the shoulders of midgets”.) By building on new foundations which Bohr was ignorant of (and biased against), Feynman guaranteed that he would be ignored and falsely dismissed by an arrogant and ignorant Bohr. Feynman continues to be censored today because second quantization favours a mechanism (virtual field quanta multipath interference) causing the indeterminacy of fundamental particles on small scales:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘… when the space through which a photon moves becomes too small … we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

The indeterminate electron motion in the atom is simply caused by second-quantization: the field quanta randomly interacting and deflecting the electron.

However, the physically false, non-relativistic Heisenberg/Schroedinger approach is easier to apply to bound states like atoms, so it is falsely taught as QM, just as the Bohr atom is falsely taught in high schools.

Here is a solid example of the failure of first quantization mathematics:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

– Dr Thomas S. Love, Departments of Mathematics and Physics, California State University.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”

http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Obviously, one diagram cannot summarize all of the justifications and implications. However, there is a need in physics to make clear how simple nature really is, as proved by the failure of non-relativistic first quantization and the success of simple path integrals in second quantization (representing fields as exchanged off-shell quanta).

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers …’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

What is the way forward? Well, this spin-1 graviton exchange mechanism deals neatly with gravitation and dark energy as both being quantum gravity effects, and this modifies the Standard Model. So my preferred option is to write a paper, titled maybe ‘A Simple Change to the Standard Model for Inclusion of Quantum Gravity, with Predictions and Validation’, and/or a full textbook explaining first the maths of the Standard Model, and then explaining the evidence for making the slight corrections needed to incorporate quantum gravity.

A second option (maybe when the first gets ignored) is to follow in the footsteps of a great physicist and write a satirical ‘Dialogue Concerning Two New Sciences’, comparing the failures of over-hyped, mainstream, speculative, non-falsifiable spin-2 string theory to the successful predictions of this censored, entirely fact-based theory (note that the black hole cross-section has empirical evidence discussed in the post linked here).

[Figures: fundamental interactions; gravity illustration; unification; Feynman diagrams for gravity; electromagnetic force mechanism]

The relationship between the black hole cross-section for gravity and the mechanism for electromagnetism is discussed in earlier posts here and here. The gauge boson of electromagnetism is a virtual photon with 4 polarizations, not the 2 polarizations that normal photons have. The two extra polarizations are required to make attraction work in the framework of Weyl’s gauge theory. The repulsion law works fine even using 2-polarization photon exchange: you get hit by a photon from a similar charge, and it knocks you away from the similar charge. If you fire a photon to that similar charge, you recoil away from that other charge. So similar charges repel. Fine. But attraction requires adding two extra polarizations to the field quantum of electromagnetism. The field around an electron is negative: we know the electron has negative charge because of the field, which is mediated by virtual photons. We don’t know anything about the electron’s core, only about its field. We’ve only probed matter to energies on the order of 100 GeV or so, and we’ll never collide charges hard enough to see beyond the field effects to the core. So the whole notion of “charge” really needs to be applied to what we see with charge, which is the field, not the unobservable inner core of an electron. Hence, in electromagnetism the virtual photons can be treated as charged. The normal objection to this turns out to be false. It used to be objected that massless charges can’t move or they would generate infinite magnetic self-inductance. But actually, in Weyl’s theory virtual particles are exchanged in two directions at once, e.g. from charge A to charge B and back the other way. This two-way exchange is possible – even though one-way motion is impossible – because the superimposed magnetic curls of the field vectors will cancel out if two-way exchange is occurring. Many photons are exchanged in each direction simultaneously, so this works.

With two oppositely charged spin-1 field quanta mediating electromagnetism and one uncharged spin-1 field quantum mediating gravity, we have 3 massless gauge bosons which seem to be described by an SU(2) symmetry without mass. This suggests a modification to the Standard Model, where at present SU(2) gauge bosons are given mass by a speculative, untested, non-falsifiable “Higgs mechanism”. Modifying it so that left-handed SU(2) gauge bosons acquire mass still gives the weak force but allows gravity to be included in a reformed Standard Model.

The coupling strengths of gravity and electromagnetism are different at observed low energy by a factor of about 10^40, gravity being the weaker. This is explained in a simple path-integral random walk between charges: the existence of two different electric charges but only one gravitational charge means that you can get a random walk of gauge boson exchange between electric charges which adds up differently to that between gravitational charges. The random walk result is numerically equal to the size of one step multiplied by the square root of the number of steps. It turns out that on average the outward divergence of receding field quanta is compensated for by the inward convergence of approaching field quanta, so all we need to do is to multiply the gravitational charge strength by the square root of the number of particles in the universe (about 10^80) to get the electromagnetic charge strength in QFT: this turns out to be accurate within experimental error (10^40).
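A minimal Monte Carlo sketch of the random-walk rule invoked here: the net sum of n random-sign steps grows in proportion to sqrt(n), so applying the rule to N ~ 10^80 charges gives the claimed 10^40 ratio. The simulation parameters below are illustrative choices, not anything from the text:

```python
import math, random

# Monte Carlo check of the random-walk rule: the magnitude of the net sum of
# n random-sign unit steps grows in proportion to sqrt(n), not n.
random.seed(0)

def mean_net_sum(n_steps, trials=2000):
    return sum(abs(sum(random.choice((-1, 1)) for _ in range(n_steps)))
               for _ in range(trials)) / trials

for n in (100, 400, 1600):
    print(f"n = {n:5d}: mean |net sum| = {mean_net_sum(n):7.1f}, sqrt(n) = {math.sqrt(n):.0f}")

# Applied to N ~ 1e80 fundamental charges, the two-charge (random-walk) sum
# exceeds the one-charge (straight-line) sum by a factor of sqrt(N):
print(f"sqrt(1e80) = {math.sqrt(1e80):.0e}")
```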

What is physically happening is that fundamental particles are black holes, radiating high energy particles which behave as field quanta (virtual particles) since they are of extremely small wavelength. The black hole radiating power for electrons calculated from Hawking’s formula predicts a fundamental force 10^40 times stronger than gravity; hence this is electromagnetism. Gravity is about 10^40 times weaker due to the random-walk mechanism illustrated in previous posts.
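For reference, the standard Hawking radiating-power formula is P = ħc^6/(15360 π G^2 M^2); the sketch below simply evaluates it for an electron-mass black hole. How that power translates into a force 10^40 times gravity is argued in the earlier posts, not in this code:

```python
import math

hbar = 1.055e-34   # J s
c    = 3.0e8       # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2
me   = 9.109e-31   # electron mass, kg

# Standard Hawking radiating power for a black hole of mass M:
#   P = hbar c^6 / (15360 pi G^2 M^2)
P = hbar * c**6 / (15360 * math.pi * G**2 * me**2)
print(f"Hawking power for an electron-mass black hole: {P:.1e} W")  # ~4 x 10^92 W
```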

Second quantization (Quantum Field Theory of Dirac, Feynman et al.) is physically correct and debunks the non-relativistic, physically wrong first quantization approximation to Quantum Mechanics (Schroedinger and Heisenberg)

Just as Bohr’s atom is taught in school physics, most mainstream general physicists with training in quantum mechanics are still trapped in the use of the “anything goes” false (non-relativistic) 1927-originating “first quantization” for quantum mechanics (where anything is possible because motion is described by an uncertainty principle instead of a quantized field mechanism for chaos on small scales). The physically correct replacement is called “second quantization” or “quantum field theory”, which was developed from 1929-48 by Dirac, Feynman and others.

The discoverer of the path integrals approach to quantum field theory, Nobel laureate Richard P. Feynman, has debunked the mainstream first-quantization uncertainty principle of quantum mechanics. Instead of anything being possible, the indeterminate electron motion in the atom is caused by second-quantization: the field quanta randomly interacting and deflecting the electron.

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger’s wave equation and Heisenberg’s matrix mechanics being the first two attempts, which both generate nonsense ‘interpretations’]. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

– Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

There are other serious and well-known failures of first quantization aside from the nonrelativistic Hamiltonian time dependence:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

First quantization for QM (e.g. Schroedinger) quantizes the product of position and momentum of an electron, rather than the Coulomb field, which is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy’s epicycles, where all planets were assumed to orbit Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals), because indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman’s path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization by Heisenberg’s and Schroedinger’s approaches is just a semi-classical, non-relativistic mathematical approximation useful for obtaining simple mathematical solutions for bound states like atoms:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.

This is fact, it isn’t my opinion or speculation: Professor David Bohm in 1952 proved that the “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his messy treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that the quantum field interactions produce the “Brownian motion” of the electron as described by Feynman’s path integrals for simple random field quanta interactions with the electron.
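A toy illustration of that averaging argument (not Bohm’s actual 1952 derivation): a bound particle given individually random impulses moves chaotically, yet its long-run position histogram is smooth. All the parameters below (force constant, damping, kick size, bin widths) are arbitrary illustrative choices:

```python
import random

# Toy model: a bound particle (restoring force -k*x, light damping) buffeted
# by randomly timed impulses. Each step is chaotic, but the long-run position
# histogram is smooth -- the averaging that lets a simple wave equation
# describe the statistics of many random field-quanta interactions.
random.seed(1)
x, v, dt, k, damping = 0.0, 0.0, 0.01, 4.0, 0.5
counts = [0] * 11
for _ in range(200_000):
    kick = random.gauss(0.0, 10.0)            # random impulse (arbitrary size)
    v += (-k * x - damping * v + kick) * dt
    x += v * dt
    counts[min(10, max(0, int((x + 1.1) / 0.22)))] += 1

peak = max(counts)
for i, n in enumerate(counts):
    print(f"x ~ {-1.1 + 0.22 * (i + 0.5):+.2f}  {'#' * (50 * n // peak)}")
```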

Quantum tunnelling is possible because electromagnetic fields are not classical, but are mediated by field quanta randomly exchanged between charges. For large charges and/or long times, the number of field quanta exchanged is so large that the result is similar to a steady classical field. But for small charges and small times, such as the scattering of charges in high energy physics, there is some small probability that no or few field quanta will happen to be exchanged in the time available, so the charge will be able to penetrate through the classical “Coulomb barrier”. If you quantize the Coulomb field, the electron’s motion is indeterministic in the atom because it’s randomly exchanging Coulomb field quanta which cause chaotic motion. This is second quantization as explained by Feynman in QED. This is not what is done in quantum mechanics, which is based on first quantization, i.e. treating the Coulomb field V classically, and falsely representing the chaotic motion of the electron by a wave-type equation. This is a physically false mathematical model, since it omits the physical cause of the indeterminacy (although it gives convenient predictions, somewhat like Ptolemy’s accurate epicycle-based predictions of planetary positions):
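A hedged sketch of the tunnelling argument, assuming for illustration that the number of field quanta exchanged in a given time window is Poisson distributed (my modelling assumption, not something stated above): the chance that no quanta happen to be exchanged is exp(-λ), negligible for large λ (the classical limit) but significant for small charges and times:

```python
import math

# Illustrative assumption: the number of field quanta exchanged in a short
# time window is Poisson distributed with mean lam. The probability that NO
# quanta happen to be exchanged -- so the classical "Coulomb barrier" simply
# isn't there during that window -- is exp(-lam).
for lam in (0.1, 1.0, 10.0, 100.0):
    print(f"mean exchanges = {lam:>6}: P(no exchange) = {math.exp(-lam):.3g}")
```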

[Figure: the Schroedinger equation, with the first-quantization error highlighted]
Fig. 2 – The Schroedinger equation, based on quantizing the momentum p in the classical Hamiltonian (the sum of kinetic and potential energy for the particle), H. This is an example of ‘first quantization’, which is inaccurate and is also used in Heisenberg’s matrix mechanics. Correct quantization will instead quantize the Coulomb field potential energy, V, because the whole indeterminacy of the electron in the atom is physically caused by the chaos of the randomly timed individual interactions of the electron with the discrete Coulomb field quanta which bind the electron to orbit the nucleus, as Feynman proved (see quotations below). The triangular symbol is the del (gradient) operator (simply the vector of gradients in all applicable spatial dimensions, for whatever it operates on), which when squared becomes the laplacian operator (simply the sum of second-order derivatives in all applicable spatial dimensions, for whatever it operates on). We illustrate the Schroedinger equation in just one spatial dimension, x, above, since the terms for other spatial dimensions are identical.
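To make the caption concrete, here is a small sympy sketch constructing the one-dimensional Schroedinger equation by exactly the first-quantization substitution described: p → -iħ d/dx in H = p^2/(2m) + V, with the potential V left classical:

```python
import sympy as sp

x, t = sp.symbols('x t')
hbar, m = sp.symbols('hbar m', positive=True)
V = sp.Function('V')(x)          # classical (unquantized) Coulomb potential
psi = sp.Function('psi')(x, t)

# First quantization as described in the caption: substitute p -> -i*hbar*d/dx
# into H = p^2/(2m) + V, so that H psi = i*hbar d(psi)/dt becomes:
H_psi = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2) + V * psi
schroedinger = sp.Eq(H_psi, sp.I * hbar * sp.diff(psi, t))
sp.pprint(schroedinger)
```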

Dirac’s quantum field theory is needed because textbook quantum mechanics is simply wrong: the Schroedinger equation has a second-order dependence on spatial distance but only a first-order dependence on time. In the real world, time and space are found to be on an equal footing, hence spacetime. There are deeper errors in textbook quantum mechanics: it ignores the quantization of the electromagnetic field and instead treats it classically, when the field quanta are the whole distinction between classical and quantum mechanics (the random motion of the electron orbiting the nucleus in the atom is caused by discrete field quanta interactions, as proved by Feynman).

Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman’s path integrals in his 1933 paper “The Lagrangian in Quantum Mechanics” published in Physikalische Zeitschrift der Sowjetunion where he states:

“Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue …

“Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. … The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. … the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form …”

Schroedinger’s time-dependent equation is: Hψ = iħ.dψ/dt, which has the exponential solution:

ψ_t = ψ_0 exp[-iH(t – t_0)/ħ].

This equation is accurate, because the error in Schroedinger’s equation comes only from the expression used for the Hamiltonian, H. This exponential law represents the time-dependent value of the wavefunction for any Hamiltonian and time. Taking the modulus squared of this wavefunction gives the relative probability for a given Hamiltonian and time. Dirac took this amplitude e^(-iHT/ħ) and derived the more fundamental lagrangian amplitude for action S, i.e. e^(iS/ħ). Feynman showed that summing this amplitude factor over all possible paths or interaction histories gave a result proportional to the total probability for a given interaction. This is the path integral.
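A quick symbolic check of the exponential solution, treating H as a constant (i.e. acting on an energy eigenstate, which is the case where this closed form holds):

```python
import sympy as sp

t, t0 = sp.symbols('t t0', real=True)
hbar = sp.symbols('hbar', positive=True)
H, psi0 = sp.symbols('H psi0')    # H treated as a constant (energy eigenvalue)

# Claimed solution of H psi = i*hbar d(psi)/dt:
psi = psi0 * sp.exp(-sp.I * H * (t - t0) / hbar)

# Substitute back in; the difference of the two sides simplifies to zero.
print(sp.simplify(H * psi - sp.I * hbar * sp.diff(psi, t)))   # -> 0
```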

Schroedinger’s incorrect, non-relativistic hamiltonian before quantization (ignoring the inclusion of the Coulomb field potential energy, V, which is an added term) is: H = ½p^2/m. Quantization is done using the substitution for momentum, p → -iħ∇, as in Fig. 2 above. The Coulomb field potential energy, V, remains classical in Schroedinger’s equation, instead of being quantized as it should be.

The bogus ‘special relativity’ prediction to correct the expectation H = ½p^2/m is simply: H = [(mc^2)^2 + p^2c^2]^(1/2), but that was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

-ħ^2.d^2ψ/dt^2 = [(mc^2)^2 + p^2c^2]ψ.

While this is physically correct, it deals only with second-order variations of the wavefunction, not first-order ones. Dirac’s equation simply makes the time-dependent Schroedinger equation (Hψ = iħ.dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = αpc + βmc^2,

where p is the momentum operator. The constants α and β are represented by 4 x 4 matrices (the Dirac matrices), which act on a wavefunction with four components, called the Dirac ‘spinor’. This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates). The justification for Dirac’s equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc^2)^2 + p^2c^2]^(1/2).

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions! The electron is spin-1/2, so it has a spin of only half the amount of a spin-1 particle, which means that the electron must rotate 720 degrees (not 360 degrees!) to undergo one revolution, like a Möbius strip (a strip of paper with a twist before the ends are glued together, so that there is only one surface and you can draw a continuous line around that surface which is twice the length of the strip, i.e. you need 720 degrees of turning to return to the beginning!). Since the spin rate of the electron generates its intrinsic magnetic moment, it affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that ‘a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum’, a fact discovered by Dirac the day after he found his equation (see: A. Zee, Quantum Field Theory in a Nutshell, Princeton University Press, 2003, pp. 177-8). The other two solutions are obvious when considering the case of p = 0, for then E = ±mc^2. This equation proves the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc^2. The fact that E = ±mc^2 proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity. E = ±mc^2 allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle such as an electron and a positron).
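A short sketch verifying the four-solution structure: solving E^2 = (mc^2)^2 + p^2c^2 symbolically gives the ± pair, and at p = 0 the electron values are ±0.511 MeV (the numeric constants are standard values, rounded):

```python
import sympy as sp

E = sp.Symbol('E')
m, c, p = sp.symbols('m c p', positive=True)

# Solving E^2 = (mc^2)^2 + (pc)^2 gives the +/- pair of energy solutions:
print(sp.solve(sp.Eq(E**2, (m * c**2)**2 + (p * c)**2), E))

# At p = 0 this reduces to E = +/- mc^2; numerically for the electron:
me, c_val = 9.109e-31, 2.998e8              # kg, m/s
E0_MeV = me * c_val**2 / 1.602e-13          # joules -> MeV
print(f"E = +/-{E0_MeV:.3f} MeV")           # +/-0.511 MeV rest energy
```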

Much of the material above is from the previous post (I’m putting it here on a separate post because that previous post began with sorting out errors in mainstream cosmology, which may have put off some bigoted and dogmatic people who are only interested in non-cosmology aspects of quantum field theory; it also helps me towards assembling background/draft material for a forthcoming book/paper).

To understand how the path integrals approach explains the double slit experiment, see this post. To see how scientific criticisms of mainstream first quantization lies have been censored out of mainstream journals by dogmatic mathematical simpletons who lack a grasp of the nature of science itself (‘Science is the organized skepticism in the reliability of expert opinion.’ – Richard Feynman in Lee Smolin, The Trouble with Physics, Houghton-Mifflin, 2006, p. 307), see this post. There’s a completely causal explanation: the photon is not a point but has transverse spatial extent; when it encounters two nearby slits (closer than a wavelength) part diffracts through each slit and the recombination on the other side gives rise to the photon whose probability of landing at any point depends on both slits, not just one of them.

String theorists who dogmatically believe that mathematical elegance, mystery and beauty, rather than hard evidence of agreement with experiment, are the central requirements of physics should listen to Einstein and Boltzmann:

“I adhered scrupulously to the precept of that brilliant theoretical physicist L. Boltzmann, according to whom matters of elegance ought to be left to the tailor and to the cobbler.”

– A. Einstein, December 1916 Preface to his book Relativity: The Special and General Theory, Methuen & Co., London, 1920.

Mathematical relationship between the Hamiltonian formalism of first quantization quantum mechanics (bound states of particles) and the Lagrangian path integral formalism necessary to adequately describe quantum fields

Heisenberg’s uncertainty principle (for minimum uncertainty, i.e. intrinsic uncertainty):

px = ħ

is quantized in first quantization (Heisenberg and Schroedinger methods) by turning the uncertainties in momentum p and position x into non-commuting operators (which I’ll signify by simply placing square brackets around them), and replacing ħ with -iħ. This gives [p,x] = -iħ. The two solutions to that are firstly

[x] = iħd/dp with [p]=p,

and secondly

[p] = -iħd/dx with [x] = x.
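A quick sympy check that the second solution really does reproduce the commutator quoted above, [p,x] = -iħ, when acting on a test wavefunction:

```python
import sympy as sp

x = sp.Symbol('x')
hbar = sp.Symbol('hbar')
f = sp.Function('psi')(x)

# Second solution above: [p] = -i*hbar*d/dx acting on functions of x, [x] = x.
p = lambda g: -sp.I * hbar * sp.diff(g, x)

commutator = p(x * f) - x * p(f)        # ([p][x] - [x][p]) psi
print(sp.simplify(commutator / f))      # -> -I*hbar, i.e. [p, x] = -i*hbar
```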

Either of these solutions is a first quantization of classical physics. Then you do the same thing replacing momentum p = E/c and x = ct for light, giving px = (E/c)(ct) = Et, which allows you to replace the product of uncertainties px in Heisenberg’s uncertainty principle with the product of uncertainties in energy and time, Et. Repeating the previous recipe for quantization on this energy-time Heisenberg uncertainty principle then gives us [E,t] = -iħ. This has the two solutions:

[E] = -iħd/dt with [t] = t,

and

[t] = iħd/dE with [E] = E.

Taking [E] = -iħd/dt, this gives Schroedinger’s time-dependent equation when it acts on wavefunction ψ, with energy operator [E] = H, the Hamiltonian:

Hψ = -iħ.dψ/dt

Rearranging

(1/ψ)dψ = -H.dt/(iħ)

integrating this gives:

ln ψ = -Ht/(iħ) + constant, so that:

(ln ψ_t) – (ln ψ_0) = -Ht/(iħ)

Taking both sides to natural exponents to get rid of the natural logarithms on the left hand side:

(ψ_t)/(ψ_0) = exp(-Ht/(iħ))

hence

ψ_t = ψ_0 exp(-Ht/(iħ))

Thus the time-dependent wavefunction equals simply the time-independent wavefunction multiplied by the exponential amplitude factor, exp(-Ht/(iħ)), in which the fraction can be rewritten by multiplying both its numerator and denominator by i, giving:

exp(-Ht/(iħ)) = exp(-iHt/(i^2.ħ)) = exp(iHt/ħ).

The product of the Hamiltonian operator for energy with time is analogous to the integral of the Lagrangian for energy over time, so let Ht → ∫L dt = S, the action. Thus the relative amplitude of a wavefunction (representing the contribution from one Feynman diagram or one “path” in the path integral) is given by:

exp(iHt/ħ) = exp(iS/ħ).

So the path integral amplitude factor is mathematically equivalent to both the Heisenberg matrix mechanics and the Schroedinger wave equation. However, there are physical differences. First quantization is physically wrong. Second quantization is physically correct in the way Feynman presents it.
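To illustrate why summing exp(iS/ħ) over paths reproduces classical behaviour (the stationary-phase point made in the MacKenzie quotation above), here is a toy numerical path sum for a free particle; all parameters are illustrative, with units chosen so that ħ = 1:

```python
import cmath, random

# Toy path sum: a free particle of mass m goes from x = 0 at t = 0 to x = 1
# at t = 1, in n time slices. Each path contributes exp(i*S/hbar) with
# S = sum over slices of (m/2)(dx/dt)^2 dt. Units with hbar = 1.
m, n, hbar = 1.0, 20, 1.0
dt = 1.0 / n

def action(path):
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(n))

random.seed(0)
trials = 4000
for wiggle in (0.0, 0.05, 0.1, 0.5):
    total = 0j
    for _ in range(trials):
        # classical straight-line path, plus random wiggles at interior points
        path = [i * dt + (random.gauss(0, wiggle) if 0 < i < n else 0.0)
                for i in range(n + 1)]
        total += cmath.exp(1j * action(path) / hbar)
    print(f"wiggle {wiggle:4.2f}: |mean amplitude| = {abs(total) / trials:.3f}")
```

Paths close to the classical straight line add nearly in phase (|mean amplitude| near 1), while strongly wiggling paths contribute essentially random phases that cancel; that is the sense in which the classical path dominates the path integral when the action is large compared with ħ.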

For a detailed derivation of the time-dependent Schroedinger equation using the path integrals formulation, see David Derbes, “Feynman’s derivation of the Schroedinger equation”, Am. J. Phys. v64, issue 7, July 1996, pp. 881-4. For discussions of random or stochastic quantization, see Poul Henrik Damgaard and Helmuth Hüffel, Stochastic quantization (1988) and Mikio Namiki, Stochastic quantization (1992).

The mathematician in modern physics resolutely refuses to see the physical difference between the Hamiltonian and the Lagrangian approaches, 1st and 2nd quantization, instead seeing them as mathematically equivalent descriptions of the same thing. This is totally bogus, because in 1st quantization you keep the field classical (falsely) and then make particle motions intrinsically indeterminate (falsely) with no mechanism for this (hence leading to wave function collapses upon measurement and multiple universe entanglement speculations, which are provably false in consequence of the falsehood of the 1st quantization model), and in doing this your model is non-relativistic, i.e. contravenes physically confirmed equations of relativity. But 2nd quantization correctly attributes the indeterminacy of real, relativistically on-shell particles to simple random interactions with the Coulomb field quanta, instead of having the Coulomb field classical. This is just like air pressure being approximately classical and continuous on large scales (where individual random air molecule bombardments are large enough in number to average out statistically), but producing chaotic, random motion on small scales, called Brownian motion, due to the fact that on small scales there is not enough space for good averaging and cancellation of randomness by large numbers of interactions, so that individual impacts become relatively more important and randomness predominates in the Coulomb field quanta exchanged by atomic electrons and nuclei. There is no magic of the sort that the string theorists and science fiction buffs would like to believe in, such as wave functions collapsing and being entangled, leading to quantum information theory, etc. That is a myth. Caroline H. Thompson has shown how Alain Aspect’s entanglement experiments are not good experimental data, but are adjusted to make them agree with prejudiced beliefs, like a religion: http://arxiv.org/abs/quant-ph/9903066, Subtraction of “accidentals” and the validity of Bell tests:

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.”

Update:

Relevant copy of a comment to Professor Johnson’s Asymptotia:

“Gell-Mann is best known as the person who came up with the idea of quarks, the particles that make up (for example) protons and neutrons, the building blocks of atomic nuclei.”

It took genius to publish such a speculative idea. According to William H. Cropper’s book Great physicists (Oxford U.P., p. 418), George Zweig’s paper on that theory was “emphatically rejected” by Physical Review but Murray Gell-Mann was “older and wiser” so he “anticipated a negative reception at the Physical Review to such bizarre entities as unobservable, fractionally charged elementary particles, and he published his first quark paper in Physics Letters. Zweig’s theory went unpublished except in a CERN report, but it and its author acquired a certain reputation. When Zweig sought an appointment at a major university, the head of the department pronounced him a ‘charlatan’.”

It’s good that Gell-Mann managed to anticipate and avoid that censorship so cleverly, or we wouldn’t have quark theory, with the SU(3) strong interaction part of the Standard Model. Another example is Pauli’s attempt to censor Yang-Mills theory in February 1954 on the grounds that its particles are massless (Pauli had already discarded the idea himself over this “failure”); Yang simply sat down when Pauli persisted in objecting.

Consider also Oppenheimer’s attempt to censor Feynman’s path integrals without listening at all, as described by Freeman Dyson (Stuckelberg was working on the same idea independently, but was ignored and – as with Zweig’s quarks – received no Nobel Prize). It’s remarkable that genius in the past has consisted to such a large degree in overcoming apathy. Oppenheimer was not just a stubborn exception who objected to path integrals: Feynman is quoted by Jagdish Mehra in The Beat of a Different Drum (pp. 245-248) saying that Teller, Dirac and Bohr all also claimed to have “disproved” path integrals. Teller’s disproof consisted of objecting that Feynman’s method did not take account of the exclusion principle; Dirac disproved it for not having a unitary operator; and Bohr disproved it because he believed Feynman didn’t know the uncertainty principle: “it was hopeless to try to explain it further.” So without Dyson’s brilliance at explaining ideas, Feynman’s path integrals would probably have been ignored.

“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, in Jagdish Mehra, The Beat of a Different Drum (Oxford, 1994, pp. 245-248).

Why the rank-2 stress-energy tensor of general relativity does not imply a spin-2 graviton

“If it exists, the graviton must be massless (because the gravitational force has unlimited range) and must have a spin of 2 (because the source of gravity is the stress-energy tensor, which is a second-rank tensor, compared to electromagnetism, the source of which is the four-current, which is a first-rank tensor). To prove the existence of the graviton, physicists must be able to link the particle to the curvature of the space-time continuum and calculate the gravitational force exerted.” – False claim, Wikipedia.

Previous posts explaining why general relativity requires spin-1 gravitons, and rejects spin-2 gravitons, are linked here, here, here, here, and here. But let’s take the false claim that gravitons must be spin-2 because the stress-energy tensor is rank-2. A rank-1 tensor is a first-order differential summation (once differentiated, e.g. da/db), such as the divergence operator (the sum of the field gradients in each direction of space) or the curl operator (the sum of the differences in field gradients for each pair of mutually orthogonal directions in space). A rank-2 tensor is some defined summation over second-order differentials (twice differentiated, e.g. d^2a/db^2). The field equation of general relativity has a different structure from Maxwell’s field equations for electromagnetism: as the Wikipedia quotation above states, Maxwell’s equations of classical electromagnetism are vector calculus (rank-1 tensors, first-order differential equations), while the tensors of general relativity are second-order differential equations, rank-2 tensors.
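In standard notation, the contrast being described is between the following textbook forms (stated here for concreteness, not as an endorsement of the spin argument):

```latex
% Electromagnetism: the source is the four-current J^nu, a rank-1
% tensor, in the covariant form of Maxwell's equations:
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu
% General relativity: the source is the stress-energy tensor, rank 2,
% and the field equation is second order in the metric:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```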

The lie, however, is that this distinction is physically deep. It’s not. It’s purely a choice of how to express the fields conveniently. For simple electromagnetic fields, where there is no contraction of mass-energy by the field itself, you can do it easily with first-order equations, i.e. gradients. These equations express fields as first-order (rank-1) gradients, e.g. electric field strength, which is the gradient of the potential with distance, measured in volts/metre. Maxwell’s equations don’t directly represent accelerations (second-order, rank-2 equations would be needed for that). For gravitational fields you have to work with accelerations, because the gravitational field contracts the source of the gravitational field itself, so gravitation is more complicated than electromagnetism.
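As a worked illustration of the first-order versus second-order distinction (standard classical electrodynamics, nothing more):

```latex
% The electric field is the first-order gradient of the potential,
% measured in volts/metre:
\mathbf{E} = -\nabla V
% The observable, by contrast, is the acceleration of a test charge,
% which obeys a second-order equation of motion:
m \frac{d^2 \mathbf{x}}{dt^2} = q\,\mathbf{E} = -q \nabla V
```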

The people who promote the lie that, because rank-1 tensors apply to spin-1 field quanta in electromagnetism, rank-2 tensors must imply spin-2 gravitons, offer no evidence for this assertion. It’s arm-waving lying. It’s true that you need rank-2 tensors in general relativity, but it is not necessary in principle to use rank-1 tensors in electromagnetism: it’s merely easiest to use the simplest mathematical method available. You could in principle use rank-2 tensors to rebuild electromagnetism, by fitting the equations to observable accelerations instead of to unobservable rank-1 electric and magnetic fields. Nobody has ever seen an electric field: only accelerations and forces caused by charges. (Likewise for magnetic fields.)

There is no physical correlation between the rank of the tensor and the spin of the gauge boson. It’s a purely historical accident that rank-1 tensors (vector calculus, first-order differential equations) are used to model fictitious electric and magnetic “fields”. We don’t directly observe electric field lines or electric charges (nobody has seen the charged core of an electron; what we see are the effects of forces and accelerations, which can merely be described in terms of field lines and charges). We observe accelerations and forces. The field lines and charges are not directly observed. The mathematical framework for describing the relationship between the source of a field and the end result depends on how the end result is defined. In Maxwell’s equations, the end result of an electric charge which is not moving relative to the observer is a first-order field, defined in volts/metre. If you convert this first-order field into an observable effect, like force or acceleration, you get a second-order differential equation: acceleration a = d^2x/dt^2. General relativity doesn’t describe gravity in terms of a first-order field as Maxwell’s equations do, but instead describes gravitation in terms of a second-order observable, i.e. the acceleration produced by spacetime curvature, a = d^2x/dt^2.
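The weak-field limit of general relativity makes this point explicitly (a standard textbook reduction, sketched under the usual slow-motion assumption):

```latex
% Weak-field metric: g_{00} is approximately -(1 + 2*Phi/c^2), where
% Phi is the Newtonian potential. The geodesic equation,
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^\mu_{\ \nu\lambda}\,\frac{dx^\nu}{d\tau}\frac{dx^\lambda}{d\tau} = 0,
% then reduces for slow motion to the directly observable,
% second-order acceleration:
\frac{d^2 \mathbf{x}}{dt^2} = -\nabla \Phi
```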

So the distinction between rank-1 and rank-2 tensors in electromagnetism and general relativity is not physically deep: it’s a matter of human decisions on how to represent electromagnetism and gravitation.

In Maxwell’s equations we choose to represent not second-order accelerations, but Michael Faraday’s imaginary concept of a pictorial field: radiating, curving “field lines” represented by first-order field gradients and curls. In Einstein’s general relativity, by contrast, we don’t represent gravity by such a half-baked, unobservable field concept, but in terms of directly observable accelerations.

Like the first-quantization (undergraduate quantum mechanics) lies, the “spin-2” graviton deception is a brilliant example of historical, physically-ignorant mathematical obfuscation in action, leading to groupthink delusions in theoretical physics. (Anyone who criticises the lie is treated with a degree of delusional, paranoid hostility similar to that directed at dissenters in evil dictatorships. Instead of examining the evidence and seeking to correct the problem – which in the case of an evil dictatorship is obviously a big challenge – the messenger is inevitably shot, or the “message” is “peacefully” deleted from the arXiv. It is reminiscent of the scene from Planet of the Apes in which Dr Zaius – serving a dual role as Minister of Science and Chief Defender of the Faith – has to erase the words written in the sand which would undermine his religion and social tea-party of lying beliefs. In this analogy, the censors of the arXiv or of journals like Classical and Quantum Gravity are not defending objective science, but are instead defending subjective pseudo-science – the groupthink orthodoxy which masquerades as science – from being exposed as a fraud.)

Dissimilarities in the tensor ranks used to describe two different fields originate from dissimilarities in the field definitions for those two fields, not from the spin of the field quanta. Any gauge field whose field is written as a second-order differential equation, e.g. an acceleration, can be classically approximated by a rank-2 tensor equation. Comparing Maxwell’s equations, in which fields are expressed in terms of first-order gradients like electric fields (volts/metre), with general relativity, in which fields are accelerations or curvatures, is comparing chalk and cheese. They are not just in different units; they serve different purposes. For a summary of textbook treatments of curvature tensors, see Dr Kevin Aylward’s General Relativity for Teletubbys: “the fundamental point of the Riemann tensor [the Ricci curvature tensor in the field equation of general relativity is simply a cut-down, rank-2 version of the Riemann tensor: the Ricci curvature tensor is R_ab = R^x_axb, summed over x, where R^x_axb is the Riemann tensor], as far as general relativity is concerned, is that it describes the acceleration of geodesics with respect to one another. … I am led to believe that many people don’t have a … clue what’s going on, although they can apply the formulas in a sleepwalking sense. … The Riemann curvature tensor is what tells one what that acceleration between the [particles] will be. This is expressed by

[Beware of some errors in the physical understanding on some of these general relativity internet sites, however. E.g., some suggest – following a popular 1950s book on relativity – that the inverse-square law is discredited by general relativity, because the relativistic motion of Mercury around the sun can be approximated within Newton’s framework by slightly increasing the power in the inverse-square law, from 1/R^2 to 1/R^(2 + X) where X is a small fraction, so that the force appears to get stronger nearer the sun. This is fictitious: it is just an approximation to roughly accommodate relativistic effects that Newton ignored, e.g. the small increase in planetary mass due to its higher velocity when the planet is nearer the sun on part of its elliptical orbit than when it is moving more slowly, far from the sun. It isn’t a physically correct model; it’s just a back-of-the-envelope fudge. A physically correct version of planetary motion in the Newtonian framework would keep the geometric inverse-square law and would then correctly modify the force by making the right changes for the relativistic mass variation with velocity. Ptolemy’s epicycles demonstrated the danger of constructing approximate mathematical models which have no physical validity, and which then become fashionable.]”
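The equation which the quoted passage introduces with “This is expressed by” appears to have been an image on the original page and is missing here. Presumably it is the standard geodesic deviation equation (sign and index conventions vary between textbooks):

```latex
% Geodesic deviation: the relative acceleration of two nearby
% geodesics with separation vector xi^alpha and four-velocity u^beta
% is controlled by the Riemann tensor:
\frac{D^2 \xi^\alpha}{d\tau^2}
  = -R^\alpha_{\ \beta\gamma\delta}\, u^\beta\, \xi^\gamma\, u^\delta
```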

Maxwell’s theory, based on Faraday’s field-lines concept, employs only rank-1 equations; for example, the divergence of the electric field strength, E, is directly proportional to the charge density, q (charge density here being the charge per unit volume): div.E ~ q. The reason this is a rank-1 equation is simply that the divergence operator is the sum of the gradients of the operand in the three perpendicular directions of space. All it says is that a unit charge contributes a fixed number of diverging radial lines of electric field, so the total field is directly proportional to the total charge.
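That last statement can be checked numerically (a sketch; the function name and step count are arbitrary): integrating the Coulomb field over concentric spheres gives the same total flux at every radius, directly proportional to the enclosed charge.

```python
import math

def coulomb_flux(q, radius, n_theta=200):
    """Numerically integrate the flux of a point charge's radial
    Coulomb field over a sphere of the given radius."""
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    e_r = q / (4 * math.pi * eps0 * radius**2)  # field magnitude
    flux = 0.0
    d_theta = math.pi / n_theta
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # Band of the sphere between theta and theta + d_theta:
        band_area = 2 * math.pi * radius**2 * math.sin(theta) * d_theta
        flux += e_r * band_area  # field is radial, so E.dA = E dA
    return flux

q = 1e-9  # 1 nC point charge
for r in (0.5, 1.0, 2.0):
    print(r, round(coulomb_flux(q, r), 2))  # same flux at every radius
print(round(q / 8.854e-12, 2))              # = q/eps0 (Gauss's law)
```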

But this is just Faraday’s way of visualizing the way the electric force operates! Remember that nobody has yet seen or reported detecting an “electric field line” of force! With our electric meters, iron filings, and compasses we only see the results of forces and accelerations, so the number and locations of electric or magnetic field lines depicted in textbook diagrams is due to purely arbitrary conventions. It’s merely an abstract aetherial legacy from the Faraday-Maxwell era, not a physical reality that has any experimental evidence behind it. If you are going to confuse Faraday’s and Maxwell’s imaginary concept of field “lines” with experimentally defensible reality, you might as well write down an equation in which the invisible intermediary between charge and force is an angel, a UFO, a fairy or an elephant in an imaginary extra dimension. Quantum field theory tells us that there are no physical lines. Instead of Maxwell’s “physical lines of force”, we have known since QED was verified that there are field quanta being exchanged between charges.

So if we get rid of our ad hoc prejudices, dropping “electric field strength, E” in volts/metre and expressing the result of the electric force in terms of what we can actually measure, namely accelerations and forces, we’d have a rank-2 tensor: basically the same field equation as is used in general relativity for gravity. The only differences would be the factor of ~10^40 between the field strengths of electromagnetism and gravity, the differences in the signs of the curvatures (like charges repel in electromagnetism, but attract in gravity), and the absence of the contraction term which makes the gravitational field contract the source of the field, but which supposedly does not exist in electromagnetism. The tensor rank would be 2 in both cases, thus disproving the arm-waving yet popular idea that the rank number is correlated with the spin of the field quanta. In other words, the electric field could be modelled by a rank-2 equation if we simply made the electric field consistent with the gravitational field, by expressing both fields in terms of accelerations, instead of using the gradient of the Faraday-legacy volts/metre “field strength” for the electric field.

This is, however, beyond the understanding of the mainstream, who are deluded by fashion and historical ad hoc conventions. Most of the problems in understanding quantum field theory and in unifying the Standard Model fields with gravitational fields result from the legacy field definitions used in Maxwellian and Yang-Mills fields, which for purely historical reasons differ from the field definition in general relativity. If all fields are expressed in the same way, as accelerative curvatures, all field equations become rank-2 and all rank-1 divergences automatically disappear, since they are merely an historical legacy of the Faraday-Maxwell volts/metre field “line” concept, which isn’t consistent with the concept of acceleration due to curvature in general relativity!
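The “~10^40” figure can be checked directly with a back-of-envelope sketch (CODATA constants; the electron-proton pair is the customary comparison, an assumption here since the text does not specify which pair of charges it means):

```python
import math

# Ratio of the Coulomb force to the Newtonian gravitational force
# between an electron and a proton; the separation cancels out.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_e  = 9.1093837015e-31   # electron mass, kg
m_p  = 1.67262192369e-27  # proton mass, kg

ratio = e**2 / (4 * math.pi * eps0 * G * m_e * m_p)
print(f"{ratio:.2e}")  # ~2.27e+39, i.e. of order 10^40
```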

However, we’re not advocating the use of any particular differential equations for quantum fields, because discontinuous, quantized fields can’t in principle be correctly modelled by differential equations. This is why you can’t properly represent the source of gravity in general relativity as a set of discontinuities (particles) in space in order to predict curvature, but must instead use a physically false averaged distribution, such as a “perfect fluid”, to represent the source of the field. The rank-2 framework of general relativity has relatively few easily obtainable solutions, compared to the simpler rank-1 (vector calculus) framework of electrodynamics. But both classical fields are false in ignoring the random field quanta responsible for quantum chaos (see, for instance, the discussion of first-quantization versus second-quantization in the previous posts here, here and here).
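For reference, the smoothed source referred to here is the perfect-fluid stress-energy tensor (standard form, metric signature conventions vary; the point is that rho is an averaged density, not a sum of point particles):

```latex
% Perfect-fluid stress-energy tensor used as the smoothed source in
% the Einstein field equation: rho is the averaged mass-energy
% density, p the pressure, u^mu the fluid four-velocity.
T^{\mu\nu} = \left(\rho + \frac{p}{c^2}\right) u^\mu u^\nu + p\, g^{\mu\nu}
```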

Summary:

1. The electric field is defined, following Michael Faraday, simply as a field gradient measured in volts/metre, which Maxwell correctly models with a first-order differential equation, leading to a rank-1 tensor equation (vector calculus). Hence electromagnetism, with spin-1 field quanta, has a rank-1 tensor purely because of the way it is formulated. Nobody has ever seen Faraday’s electric field, only accelerations/forces. There is no physical basis for electromagnetism being intrinsically rank-1; it’s just one way to mathematically model it, by describing it in terms of Faraday’s rank-1 fields rather than the directly observable rank-2 accelerations and forces which we see/feel.

2. The gravitational field has historically never been expressed in terms of a Faraday-type rank-1 field gradient. Due to Newton, who was less pictorial than Faraday, gravity has always been described and modelled directly in terms of the end result, i.e. accelerations/forces we see/feel.

This difference between the human formulations of the electromagnetic and gravitational “fields” is the sole reason why the former is currently expressed with a rank-1 tensor and the latter with a rank-2 tensor. If Newton, rather than aether crackpots like Maxwell, had worked on electromagnetism, we would undoubtedly have a rank-2 mathematical model of electromagnetism, in which electric fields are expressed not in volts/metre, but directly in terms of rank-2 accelerations (curvatures), just like general relativity.

Both electromagnetism and gravitation should define fields the same way, with rank-2 curvatures. The discrepancy, that electromagnetism instead uses rank-1 tensors, is due to the inconsistency that in electromagnetism fields are defined not in terms of curvatures (accelerations) but in terms of Faraday’s imaginary abstraction of field lines. This has nothing whatsoever to do with particle spin. Rank-1 tensors are used in Maxwell’s equations because the electromagnetic fields are defined (inconsistently with gravity) in terms of rank-1 unobservable field gradients, whereas rank-2 tensors are used in general relativity purely because the field in general relativity is defined as an acceleration, which requires a rank-2 tensor to describe it. The difference is purely down to the way the field is described, not the spin of the field.
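One standard way to exhibit the parallel drawn in this summary (a sketch of the textbook equations of motion, not a claim about any published unification) is to write both theories as second-order equations for the observable trajectory:

```latex
% Electromagnetism, recast as an equation of motion (Lorentz force):
\frac{d^2 x^\mu}{d\tau^2} = \frac{q}{m}\, F^\mu_{\ \nu}\, \frac{dx^\nu}{d\tau}
% Gravitation (geodesic equation):
\frac{d^2 x^\mu}{d\tau^2}
  = -\Gamma^\mu_{\ \nu\lambda}\, \frac{dx^\nu}{d\tau}\frac{dx^\lambda}{d\tau}
% Both right-hand sides play the role of a "field"; only the
% conventional definitions (field strength vs. connection) differ.
```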

The physical basis for rank-2 tensors in general relativity

I’m going to rewrite the paper linked here when time permits.

Groupthink delusions

The real reason why gravitons supposedly “must” be spin-2 is the mainstream investment of energy and time in worthless string theory, which is designed to permit the existence of spin-2 gravitons. We know this because whenever the errors in spin-2 gravitons are pointed out, they are ignored. These stringy people aren’t interested in physics, just grandiose, fashionable speculation – the story of Ptolemy’s epicycles, Maxwell’s aether, Kelvin’s vortex atom, Piltdown Man, S-matrices, UFOs, Marxism, fascism, etc. All were very fashionable with bigots in their day, but:

“… reality must take precedence over public relations, for nature cannot be fooled.” – Feynman’s Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.