Feynman’s issues with the Standard Model (updated 1 May 2013)

“The best way to progress I would think is maybe to try to be as conservative as we can. Try to be as conservative about the physical laws as possible in explaining the phenomenon. If you continuously fail, then you gradually realize that you have got to change something. But if we start out saying we’ve got to change something, there are so many ways of changing. Most often you don’t have to change anything. Most of the time we succeed in ultimately explaining these damn things in terms of the known laws – but the cases that fail are the interesting ones.”

– R. P. Feynman, “Take the World from Another Point of View,” Yorkshire TV, 1973.

beta decay anomalies


Above: Feynman argued that the Standard Model is inconsistent and incomplete, and we highlight a typical inconsistency in the beta decay scheme interpretation, which arose after the introduction of weak vector bosons (Fermi’s earlier theory of beta decay lacked this inconsistency because it contained no weak boson). Note that the existence of weak bosons is an experimental fact. The problem is therefore in the dogmatic interpretation of beta decay, which distinguishes leptons from quarks by the fact that leptons don’t have strong color charge but quarks do. We predict that color charge emerges at extremely high energy at the expense of electric charge: the fractional electric charges of quarks are due to vacuum pair production (including production of color charged particles with color charged gluons) and the associated vacuum polarization screening, which permits a mechanism for strong color charge effects to emerge spontaneously from the fields around leptons at extremely high energy, beyond existing experiments.

The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. …

But if you just look at the [Standard Model] you can see the glue, so to speak. It’s very clear that the photon and the three W’s [weak gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.]

– R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Feynman also argued that the uncertainty principle of textbook quantum mechanics (the “first quantization” lie, in which the uncertainty principle acts directly on the real on-shell particles while the Coulomb field is kept classical, leading to the wavefunction collapse issues popularized with hype about Schrodinger’s cat and entanglement) is unnecessary because of second quantization (shown to be necessary by Dirac in 1927; Schroedinger’s and Heisenberg’s first-quantization approaches are non-relativistic, and they falsely make the real on-shell particle’s position-momentum product intrinsically uncertain by keeping the Coulomb field classical, instead of correctly applying uncertainty to the off-shell field quanta, so that the chaotic field quanta interactions become the physical mechanism for the apparent indeterminacy of the electron in an atomic orbit):

“I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, ‘Your old-fashioned ideas are no damn good when …’. If you get rid of ALL the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no NEED for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [of on-shell particles by off-shell field quanta] becomes very important …”

– Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, & 84.

First-quantization bigot Niels Bohr never understood how Dirac’s work in quantum field theory (2nd quantization) overturned Heisenberg’s mythology, and he simply refused to listen to Feynman’s 2nd quantization proof, claiming it violated his dogmatic religion of uncertainty principle worship:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

Feynman argued against the path integral being fundamental to particle physics:

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space [because there is an infinite series of terms in the perturbative expansion to Feynman’s path integral] … Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

– Richard P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Feynman also blew the smoke screen out of string theory back in 1988, after the first superstring revolution hype:

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’

– Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp. 194-195.

Like modern string theorists, Boscovich believed that forces are purely mathematical in nature, and he argued that the stable sizes of various objects such as atoms correspond to the ranges of different parts of a unified force theory. The concept that the range of the strong nuclear force roughly determines the size of a nucleus would be a modern example of this idea, although he offered no real explanation for the different forces such as gravity and electromagnetism, e.g. their very different strengths between two protons. Against Boscovich were Newton’s friend Fatio and his French student LeSage, who did not believe in a mathematical universe, but in mechanisms due to particles flying around in the vacuum. Various famous physicists like Maxwell and Kelvin in the Victorian era argued that particles flying around to cause forces by impacts and pressure would heat the planets by drag and slow them down, so that they would spiral quickly into the sun. Feynman recounts LeSage’s mechanism for gravity and the arguments against it in both his Lectures on Physics and his 1964 lectures The Character of Physical Law (audio linked here). However, the problem is that quantum field theory today accurately predicts the experimentally verified Casimir force, which is indeed caused by off-shell (off mass shell) field quanta pushing the plates together, somewhat akin to LeSage’s mechanism. The radiation in the vacuum which causes the Casimir force doesn’t slow down or heat up moving metal or other objects, so the Maxwell-Kelvin objections don’t apply to field quanta (off-shell radiations).

The Casimir force is produced because the metal plates exclude longer wavelengths from the space in between them, but the full spectrum of virtual radiation pushes against the plates from the opposing sides, so the net force pushes them together: “attraction”. Maxwell’s equations are formulated in terms of rank-1 (first order) gradients and curls of “field lines”, whereas general relativity is formulated in terms of rank-2 (second order) spacetime curvatures or accelerations, so there is an artificial distinction between the two types of equations. Pauli and Fierz in 1939 argued that if gravitons are only exchanged between two masses which attract, they have to be spin-2; electromagnetism can be mediated by spin-1 bosons with 4 polarizations to account for attraction and repulsion. Thus the myth of linking the rank of the tensor equation to the spin began: rank-1 Maxwell equations implied spin-1 field quanta (virtual photons), and rank-2 general relativity implied spin-2 field quanta (gravitons). However, the rank of the equation is purely a synthetic issue of whether you choose to express the field in terms of Faraday-style imaginary “field lines” (which Maxwell chose) or measurable spacetime curvature-induced accelerations (which Einstein used). It’s not a fundamental distinction, since you could rewrite Maxwell’s equations in terms of accelerations, making them rank-2.
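As a numerical aside (my illustration, not part of the original argument), the standard QED result for the attractive pressure between ideal parallel plates a distance a apart is P = pi^2 hc-bar/(240 a^4); a minimal Python sketch evaluating it:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(a):
    """Attractive Casimir pressure (Pa) between ideal parallel plates
    separated by a metres: P = pi^2 * hbar * c / (240 * a^4)."""
    return math.pi**2 * hbar * c / (240 * a**4)

for a in (1e-7, 1e-6):  # 100 nm and 1 micron plate separations
    print(f"a = {a:.0e} m : P = {casimir_pressure(a):.3g} Pa")
# ~13 Pa at 100 nm, ~1.3e-3 Pa at 1 micron: measurable, and falling as 1/a^4.
```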

Furthermore, Pauli and Fierz wrongly assumed that you can treat two masses as exchanging gravitons while ignoring the exchange of gravitons with all the other masses in the universe. This is easily shown wrong, because the mass of the rest of the universe is immensely larger than an apple and the Earth (say), and furthermore the exchanged gravitons being received are converging inward from that distant mass (galaxy clusters, etc.) which is isotropically distributed in all directions. When you include those contributions, the Pauli-Fierz argument for spin-2 gravitons is disproved, because the repulsion due to the exchange of gravitons between the particles in the apple and those in the Earth is trivial compared to the inward forces from graviton exchange on the other sides. Hence spin-1 gravitons do the job of pushing the apple down to the Earth. The bigger the mass of the Earth, the greater the shadowing and asymmetry of graviton forces on the top and bottom of the particles in the apple, so the apple is pushed down with greater force; thus it’s analogous to the mechanism of the Casimir force or LeSage.

The distinction between Newtonian and Einsteinian gravitation is twofold. First, there is the change from forces to spacetime curvature (acceleration), using the Ricci tensor and a very fiddled stress-energy tensor for the source of the field (which can’t represent real matter correctly as particles, using instead artificially averaged smooth distributions of mass-energy throughout a volume of space). Secondly, these two tensors could not simply be equated by Einstein without violating the conservation of mass-energy (the divergence of the stress-energy tensor does not vanish), so Einstein had to complicate the field equation with a contraction term which compensates for the inability of the divergence of the stress-energy tensor to disappear. It is precisely this correction term for the conservation of mass-energy which makes the deflection of light double that of a non-relativistic object like a bullet passing the sun. The reason is that all objects approaching the sun gain gravitational potential energy. In the case of a non-relativistic or slow-moving bullet, this gained gravitational potential energy is used to do two things: (1) speed up the bullet, and (2) deflect the direction of the bullet more towards the sun. A relativistic particle like a photon cannot speed up, so all of the gravitational potential energy it gains is instead used to deflect it; hence it deflects by twice as much as Newton’s law predicts.
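To make the “twice Newton” claim concrete, here is a quick numerical check (my sketch; the formulas are the standard weak-field results) of the Newtonian deflection 2GM/(c^2 b) against Einstein’s 4GM/(c^2 b) for a light ray grazing the sun:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
b = 6.96e8         # impact parameter = solar radius, m (grazing ray)

theta_newton = 2 * G * M_sun / (c**2 * b)  # slow "bullet" limit
theta_gr = 4 * G * M_sun / (c**2 * b)      # general relativity: twice as much

to_arcsec = (180 / math.pi) * 3600
print(f"Newtonian deflection: {theta_newton * to_arcsec:.2f} arcsec")  # ~0.87
print(f"GR deflection:        {theta_gr * to_arcsec:.2f} arcsec")      # ~1.75, as observed
```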

It’s pretty clear that special relativity is an effective theory. Carlos Barceló and Gil Jannes state in their paper, ‘A Real Lorentz-FitzGerald Contraction’, Foundations of Physics, Volume 38, Number 2 / February, 2008, pp. 191-199:

“Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity….

“… Remarkably, all of relativity (at least, all of special relativity) could be taught as an effective theory by using only Newtonian language … In a way, the model we are discussing here could be seen as a variant of the old ether model. At the end of the 19th century, the ether assumption was so entrenched in the physical community that, even in the light of the null result of the Michelson-Morley experiment, nobody thought immediately about discarding it. Until the acceptance of special relativity, the best candidate to explain this null result was the Lorentz-FitzGerald contraction hypothesis… we consider our model of a relativistic world in a fishbowl, itself immersed in a Newtonian external world, as a source of reflection, as a Gedankenmodel. By no means are we suggesting that there is a world beyond our relativistic world describable in all its facets in Newtonian terms. Coming back to the contraction hypothesis of Lorentz and FitzGerald, it is generally considered to be ad hoc. However, this might have more to do with the caution of the authors, who themselves presented it as a hypothesis, than with the naturalness or not of the assumption… The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”

For more on this subject, see the earlier post linked here.
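The quoted thought experiment reduces to one formula: an interferometer built from quasiparticles contracts by the Lorentz-FitzGerald factor, with the medium’s signal speed playing the role of c. A minimal sketch (my illustration; the 340 m/s “sound speed” is just an assumed stand-in):

```python
import math

def contraction_factor(v, c_eff):
    """Lorentz-FitzGerald factor sqrt(1 - v^2/c_eff^2), with c_eff the
    medium's signal speed (e.g. the sound speed) playing the role of c."""
    return math.sqrt(1 - (v / c_eff)**2)

c_s = 340.0  # m/s, an illustrative "sound speed" for the condensed matter medium
for v in (34.0, 170.0, 306.0):  # 10%, 50%, 90% of c_s
    print(f"v = {v:5.1f} m/s : L/L0 = {contraction_factor(v, c_s):.4f}")
# Internal observers built from quasiparticles measure no effect of their own
# motion, because their rulers contract by exactly this factor.
```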

Path integral simplicity for low energy quantum gravity applications

Feynman’s book QED explains how to do the path integral approximately without using formal calculus! He gives the rules simply, so you can draw arrows of similar length but varying directions on paper to represent the complex amplitudes for the different paths light can take through a glass lens; the result is that paths well off the path of least time cancel out efficiently, but those near it reinforce each other. Thus you recover the classical laws of reflection and refraction. He can’t and doesn’t apply such simple graphical calculations to HEP situations above the field’s IR cutoff, where pair production occurs, leading to a perturbative expansion with an infinite series of different possible Feynman diagrams, but the graphical application of path integrals to simple low energy physics phenomena gives the reader a neat grasp of the principles. This applies to low energy quantum gravitational phenomena just as it does to electromagnetism.
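Feynman’s arrow-summing recipe is easy to reproduce numerically. Here is a minimal sketch (my toy example, with assumed geometry and wavelength): sum a unit phasor exp(2 pi i L/lambda) for each path from source to detector via a point on an intermediate plane, and compare contributions near the straight (least-time) path with contributions far from it:

```python
import cmath, math

# Toy version of Feynman's QED "arrows": light goes from source S to detector D
# via a point (0, x) on an intermediate plane; each path contributes a unit
# arrow whose angle is 2*pi*(path length)/wavelength.
lam = 500e-9                       # wavelength: 500 nm (green light)
S, D = (-1.0, 0.0), (1.0, 0.0)     # source and detector, 2 m apart

def arrow(x):
    L = math.dist(S, (0.0, x)) + math.dist((0.0, x), D)
    return cmath.exp(2j * math.pi * L / lam)

dx, n = 1e-6, 2000                 # sample paths every micron
near = sum(arrow(i * dx) for i in range(-n, n + 1))        # strip around the axis
far = sum(arrow(0.01 + i * dx) for i in range(-n, n + 1))  # same-width strip 1 cm off-axis
print(f"|sum of arrows| near least-time path: {abs(near):8.1f}")
print(f"|sum of arrows| far from it:          {abs(far):8.1f}")
# The near-axis arrows line up and reinforce; the off-axis arrows spin around
# and cancel, which is why the classical ray survives.
```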

As implied by Hubble’s discovery of the receding universe in 1929, recession velocity is v = HR, where H is Hubble’s number and R is distance. This implies an acceleration a = dv/dt. If there is “spacetime” in which light from stars obeys the equation R/t = c, then it follows that a = d(HR)/d(R/c) = Hc = 6 × 10^-10 ms^-2. (Another way to derive this, in which time runs forward rather than backwards with increasing distance from us, is often more acceptable conceptually, and is linked here: https://nige.files.wordpress.com/2009/08/figure-14.jpg.) This is the cosmological acceleration of the receding matter in the universe, implying an outward force F = ma and an inward reaction force which is identical according to Newton’s 3rd law, and can only be mediated by gravitons. This makes quantitative predictions, which will be shown below.
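Putting numbers in (my check; H ~ 70 km/s/Mpc is an assumed round value):

```python
Mpc = 3.0857e22        # metres per megaparsec
H = 70e3 / Mpc         # Hubble parameter in SI units: ~2.27e-18 per second
c = 2.998e8            # m/s
print(f"a = Hc = {H * c:.2g} m/s^2")  # ~6.8e-10 m/s^2, the quoted ~6-7 x 10^-10
```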

This is evidence for a spin-1 LeSage graviton (ignored by Pauli and Fierz when first proposing that the quanta of gravitation have spin-2) in the Hubble recession of galaxies, which implies the cosmological acceleration a = dv/dt = d(HR)/d(R/c) = Hc, obtained in May 1996 from the Hubble relationship v = HR simply by arguing that spacetime implies the recession velocity is varying not just with apparent distance but with time, and is thus an effective acceleration (we published this via p. 893 of the October 1996 issue of Electronics World and also the February 1997 issue of Science World, ISSN 1367-6172, after string theory reviewers rejected it from more specialized and appropriate journals without giving any scientific reasons whatsoever). The following statement is from a New Scientist page:

“We don’t know that gravity is strictly an attractive force,” cautions Paul Wesson of the University of Waterloo in Ontario, Canada. He points to the “dark energy” that seems to be accelerating the expansion of the universe, and suggests it may indicate that gravity can work both ways. Some physicists speculate that dark energy could be a repulsive gravitational force that only acts over large scales. “There is precedent for such behaviour in a fundamental force,” Wesson says. “The strong nuclear force is attractive at some distances and repulsive at others.”

“… Freedom is the right to question, and change the established way of doing things. It is the continuing revolution … It is the understanding that allows us to recognize shortcomings and seek solutions. It is the right to put forth an idea … It is the right to … stick to your conscience, even if you’re the only one in a sea of doubters. Freedom is the recognition that no single person, no single authority of government has a monopoly on the truth ….” – President Reagan, Moscow State University on May 31, 1988.


Above: Perlmutter’s discovery of the acceleration of the universe, based on the redshifts of fixed energy supernovae, which are triggered as a critical mass effect when sufficient matter falls into a white dwarf. A type Ia supernova explosion, always yielding 4 × 10^28 megatons of TNT equivalent, results from the critical mass effect of the collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, thereby releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavy elements, creating massive amounts of radioactive nuclides, particularly intensely radioactive nickel-56, but half of all other nuclides (including uranium and heavier) are also produced by the ‘R’ (rapid) process of successive neutron captures by fusion products in supernovae explosions. The brightness of the supernova flash tells us how far away the Type Ia supernova is, while the redshift of the flash tells us how fast it is receding from us. That’s how the cosmological acceleration of the universe was measured. Note that “tired light” fantasies about redshift are disproved by Professor Edward Wright on the page linked here.
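As a unit check (mine, not Perlmutter’s), 4 × 10^28 megatons of TNT equivalent is the canonical ~10^44 joules of a Type Ia event:

```python
MT_TNT = 4.184e15         # joules per megaton of TNT
E = 4e28 * MT_TNT         # the fixed Type Ia yield quoted above
print(f"E = {E:.2g} J")   # ~1.7e44 J, the standard order of magnitude for a Type Ia
```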

This isn’t based on speculations: cosmological acceleration has been observed since 1998, when CCD telescopes plugged live into computers with supernova signature recognition software detected extremely distant supernovae and recorded their redshifts (see the article by the discoverer of cosmological acceleration, Dr Saul Perlmutter, on pages 53-60 of the April 2003 issue of Physics Today, linked here). The outward cosmological acceleration of the 3 × 10^52 kg mass of the 9 × 10^21 observable stars in galaxies observable by the Hubble Space Telescope (page 5 of a NASA report linked here) is approximately a = Hc = 6.9 × 10^-10 ms^-2 (L. Smolin, The Trouble With Physics, Houghton Mifflin, N.Y., 2006, p. 209), giving an immense outward force under Newton’s 2nd law of F = ma = 1.8 × 10^43 Newtons. Newton’s 3rd law gives an equal inward (implosive type) reaction force, which predicts gravitation quantitatively. What part of this is speculative? Maybe you have some vague notion that scientific laws should not for some reason be applied to new situations, or should not be trusted if they make useful predictions which are confirmed experimentally, so maybe you vaguely don’t believe in applying Newton’s second and third laws to masses accelerating at 6.9 × 10^-10 ms^-2! But why not? What part of “fact-based theory” do you have difficulty understanding?
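The arithmetic of the paragraph above, spelled out (with its own round figures, so the result is the quoted order of magnitude):

```python
m = 3e52       # kg: quoted Hubble Space Telescope estimate of receding mass
a = 6.9e-10    # m/s^2: quoted cosmological acceleration, ~Hc
F = m * a      # Newton's 2nd law applied to the receding matter
print(f"F = ma = {F:.2g} N")  # ~2e43 N; the quoted 1.8e43 N corresponds to a ~ 6e-10 m/s^2
```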

It is usually by applying facts and laws to new situations that progress is made in science. If you stick to applying known laws to situations they have already been applied to, you’ll be less likely to observe something new than if you try applying them to a situation which nobody has ever applied them to before. We should apply Newton’s laws to the accelerating cosmos and then focus on the immense forces and what they tell us about graviton exchange.

The theory makes accurate predictions, well within experimental error, and is also fact-based, unlike all other theories of quantum gravity, especially the 10^500 universes of string theory’s landscape.


Above: The mainstream 2-dimensional ‘rubber sheet’ interpretation of general relativity says that mass-energy ‘indents’ spacetime, which responds like placing two heavy large balls on a mattress, which distorts more between the balls (where the distortions add up) than on the opposite sides. Hence the balls are pushed together: ‘Matter tells space how to curve, and space tells matter how to move’ (Professor John A. Wheeler). This illustrates how the mainstream (albeit arm-waving) explanation of general relativity is actually a theory that gravity is produced by space-time distorting to physically push objects together, not to pull them! (When this is pointed out to mainstream crackpot physicists, they naturally freak out and become angry, saying it is just a pointless analogy. But when the checkable predictions of the mechanism are explained, they may perform their always-entertaining “hear no evil, see no evil, speak no evil” act.)


Above: LeSage’s own illustration of quantum gravity in 1758. Like Lamarck’s evolution theory of 1809 (the one in which characteristics acquired during life are somehow supposed to be passed on genetically, rather than Darwin’s evolution in which genetic change occurs due to the inability of inferior individuals to pass on genes), LeSage’s theory was full of errors and is still derided today. The basic concept, that mass is composed of fundamental particles with gravity due to a quantum field of gravitons exchanged between these fundamental particles of mass, is now a frontier of quantum field theory research. What is interesting is that quantum gravity theorists today don’t use the arguments used to “debunk” LeSage: they don’t argue that quantum gravity is impossible because gravitons in the vacuum would “slow down the planets by causing drag”. They recognise that gravitons are not real particles: they don’t obey the energy-momentum relationship or mass shell that applies to particles of, say, a gas or other fluid. Gravitons are thus off-shell or “virtual” radiations, which cause accelerative forces but don’t cause continuous gas-type drag or the heating that occurs when objects move rapidly in a real fluid. While quantum gravity theorists realize that particle (graviton) mediated gravity is possible, LeSage’s mechanism of quantum gravity is still as derided today as Lamarck’s theory of evolution. Another analogy is the succession from Aristarchus of Samos, who first proposed the solar system in 250 B.C. against the mainstream earth-centred universe, to Copernicus’ inaccurate solar system (circular orbits and epicycles) of 1500 A.D., and to Kepler’s elliptical orbit solar system of 1609 A.D. Is there any point in insisting that Aristarchus was the original discoverer of the theory, when he failed to come up with a detailed, convincing and accurate theory? Similarly, Darwin rather than Lamarck is credited with the theory of evolution, because he made the theory useful and thus scientific.

Since 1998, more and more data has been collected and the presence of a repulsive long-range force between masses has been vindicated observationally. The two consequences of spin-1 gravitons are the same thing: distant masses are pushed apart, nearby small masses exchange gravitons less forcefully with one another than with masses around them, so they get pushed together like the Casimir force effect.

Using an extension to the standard “expanding raisin cake” explanation of cosmological expansion, in this spin-1 quantum gravity theory, the gravitons behave like the pressure of the expanding dough. Nearby raisins have less dough pressure between them to push them apart than they have pushing in on them from expanding dough on other sides, so they get pushed closer together, while distant raisins get pushed further apart. There is no separate “dark energy” or cosmological constant; both gravitation and cosmological acceleration are effects from spin-1 quantum gravity (see also the information in an earlier post, The spin-2 graviton mistake of Wolfgang Pauli and Markus Fierz for the mainstream spin-2 errors and the posts here and here for the corrections and links to other information).

As explained on the About page (which contains errors and needs updating), NASA has published Hubble Space Telescope estimates of the immense amount of receding matter in the universe, and since 1998 Perlmutter’s data on supernova luminosity versus redshift have shown the size of the tiny cosmological acceleration, so the relationship in the diagram above predicts gravity quantitatively; alternatively, you can normalize it to Newton’s empirical gravity law, so that it instead predicts the cosmological acceleration of the universe, which it has done since publication in October 1996, long before Perlmutter confirmed the predicted value (both are due to spin-1 gravitons).



Elitist hero worship of string theory hero Edward Witten by string theory critic Peter Woit

“Witten’s work is not just mathematical, but covers a lot of ground. The more mathematical end of it has been the most successful, but that’s partly because, in the thirty-some years of his career, no particle theorist at all has had the kind of success that leads to a Nobel Prize. If Witten had been born ten-twenty years earlier, I’d bet that he would have played some sort of important role in the development of the Standard Model, of a sort that would have involved a Nobel prize.” – Peter Woit, Not Even Wrong blog

With enemies like Peter Woit, Witten must be asking himself the question, who needs friends? More seriously, this is a useful statement of Dr Woit’s elitism problem. He thinks that Professor Witten tragically missed out on a Nobel Prize, despite his mathematical physics brilliance, by being born some decades too late. Duh. Doesn’t that prove him unnecessary? After all, he wasn’t needed. Physics did not go on hold for decades awaiting him. Others got the prizes for doing the physics. Maybe I’m just too stupid to understand true genius…

On the topic of Bohr’s quoted attack on Feynman’s 2nd quantization path integrals, we found that this kind of “you’re wrong because our lying, false, but widely hyped dogma is popular fashion, therefore we don’t have to listen to you!” claim is still rife in mainstream physics, particularly in groupthink science fantasy like string theory. In May 1996 we accurately predicted the a = dv/dt = d(HR)/d(R/c) = Hc ~ 6 × 10^-10 ms^-2 cosmological acceleration of the universe from the Hubble relationship v = HR, simply by arguing that spacetime implies the recession velocity is varying not just with apparent distance but with time, and is thus an effective acceleration; we published this via p. 893 of the October 1996 issue of Electronics World and also the February 1997 issue of Science World (ISSN 1367-6172), after string theory reviewers rejected it from more specialized and appropriate journals without giving any scientific reasons whatsoever. This acceleration was thus predicted two years before large groups of astronomers detected it. They failed to acknowledge the prediction and frequently lied in publications that it was unpredicted. Edward Witten claimed – falsely as far as physical facts are concerned – that it was a great surprise, when in fact it had been predicted by quantum gravity in my paper years earlier.

None of these people seem to understand general relativity, which lacks the dynamics for quantum gravity. All of the relativistic corrections to Newtonian gravity which general relativity contains come from the contraction term needed for the conservation of mass-energy, which is introduced because a direct equivalence of spacetime curvature to the stress-energy (gravitational field source) tensor would be false, since the divergence of the stress-energy tensor does not vanish as it should in order to satisfy local conservation of mass-energy. This contraction term is what makes general relativity shrink Earth’s radius by GM/(3c^2) = 1.5 mm, as Feynman explains in his Lectures on Physics, and together with the Lorentz transformation, this contraction is predicted by spin-1 quantum gravity.

The outward force of the accelerating universe is given by Newton’s 2nd law; the inward reaction force mediated by gravitons is then predicted by Newton’s third law to be equal and opposite. This inward force of gravitons turns LeSage’s idea which Feynman discusses, e.g. on the audio file which plays at this linked site, into a quantitative prediction. Because the gravitons are off-shell, they cause forces and thus contractions, instead of the drag or heating caused by on-shell radiations. Similarly, the electromagnetic spin-1 gauge bosons keeping your fridge magnet attached to the fridge door don’t cause it to heat up, and the Casimir force spin-1 gauge bosons in the vacuum which push metal plates together don’t cause drag on the motion of metal plates in a vacuum. Thus the normal drag and heating objections to LeSage apply to on-shell vacuum radiation but are null and void against off-shell virtual radiation. In fact, the compression effects cause the radial contraction normally attributed to the fourth dimension in general relativity. Relativity is thus explained in terms of a physical mechanism: force effects from off-shell gauge bosons in quantum field theory.
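Feynman’s contraction figure is easy to verify (my check of GM/(3c^2) for Earth):

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg
c = 2.998e8         # m/s
dr = G * M_earth / (3 * c**2)
print(f"Earth radial contraction GM/(3c^2) = {dr * 1000:.2f} mm")  # ~1.48 mm
```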

If you really want to understand masses, you need to understand the field structure in detail. This means facing the facts of vacuum polarization: the gain of energy by virtual fermions as they are physically pulled apart (polarized) by the electric field of a particle, which attracts the unlike-charged virtual fermion and repels the like-charged virtual fermion. This delivers energy from the electric field to the virtual fermions, which thus are not quite so virtual anymore! The pulling apart (polarization) extends their survival time and thus physically undermines the uncertainty principle, shifting the virtual fermions from being totally off-shell to closer to the mass shell. Thus the “virtual” fermions in a strong electromagnetic field (high electric polarization) may become more real and are affected by the exclusion principle, which quantizes their positions. This imposes a geometric shell structure configuration on the field, by analogy to atomic or nuclear shell structure, determining the hadron masses.

The pair production field isn’t infinite: there are UV and IR cutoffs on the energies and thus distances around a charge where polarizable pair production of virtual fermions occurs. No infinities are present. There can’t be an infinite number of polarized virtual fermions because the mechanism which produces them depends on energy taken from the electromagnetic field, which isn’t infinite.
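The screening between those cutoffs is what standard QED describes as the running of the coupling. A minimal sketch (my illustration, the one-loop formula with only the electron loop included; adding all charged fermions brings the Z-mass value to the measured ~1/128.9):

```python
import math

alpha0 = 1 / 137.036   # low-energy (IR cutoff) fine structure constant
m_e = 0.000511         # electron mass-energy in GeV

def alpha_eff(Q):
    """One-loop QED running coupling, electron loop only (valid for Q >> m_e):
    the polarized vacuum screens less of the charge at higher energy Q."""
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

for Q in (0.01, 1.0, 91.19):  # GeV; 91.19 GeV is the Z boson mass
    print(f"Q = {Q:6.2f} GeV : 1/alpha = {1 / alpha_eff(Q):.2f}")
# 1/alpha falls from ~137 toward ~134.5 at the Z mass with this single loop:
# the observable charge grows as you probe inside the polarized vacuum.
```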

Remember the earlier post with the diagram above (linked here)?  The beta decay of leptons (muon and tauon) is analyzed in a way that’s inconsistent with quark decay. E.g., muons are supposed to beta decay into electrons via the intermediary of a W weak boson.

For consistency with this picture, downquarks and strange quarks would need to also beta decay into electrons via the intermediary of a W weak boson.

But the mainstream interpretation lacks this consistency of interpretation, and instead insists on seeing quark decay as the decay of quarks directly into other quarks, not requiring the intermediary of the W weak boson (indicating decay of quarks into leptons, complementing Cabibbo’s 1964 discovery of “universality”, e.g. the similarity of CKM weak interaction strengths for quarks and leptons within the same generation). For an illustration that makes the problem clear, see the first diagram in this blog post.

The solution to this discrepancy also gets rid of the alleged and unexplained excess of matter over antimatter. Quarks and leptons differ in terms of having colour charge and fractional electric charge, but these differences are superficial masking effects of vacuum polarization, which changes the observable electric charge of a particle. They’re not fundamental, deep properties of nature, as is assumed today in the construction of the SM.

Suppose that a downquark is really a disguised electron: it has lost 2/3rds of its electric charge due to vacuum polarization screening, and that electromagnetic energy has been transformed into strong colour charge. This is because the polarization (pulling apart) of virtual quarks by strong electric fields gives them potential energy and thus increases their survival time over that predicted by the uncertainty principle. So some of the electromagnetic field energy gets converted by this virtual fermion polarization mechanism into the energy of gluon fields (which automatically accompany the pair-production created virtual quark-antiquark pairs). Virtual fermions which have been pulled apart by strong electric fields, using energy from the electric field in the process, both screen the electric charge and contribute to the colour charge.

Thus the difference between the total electromagnetic field energy of the integer charge electron and that of the fractionally charged downquark is converted into the colour charge of the downquark. This idea actually predicts that the total energy of the short-range gluon field of a downquark is precisely (1 – 1/3)/(1/3) = 2 times the total energy of its electromagnetic field.

Hence in this model downquarks and electrons are the same thing, merely disguised or cloaked by the vacuum polarization phenomena accompanying confinement. This solves the beta decay interpretation anomaly above, and also explains the alleged problem of the excess of matter over antimatter in the universe. Observe that the universe is 90% hydrogen, with one electron, one downquark, and two upquarks per atom. If upquarks are disguised positrons and downquarks are disguised electrons, there is a perfect balance of matter and antimatter in the universe; it’s just hidden by vacuum polarization phenomena.
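For what it’s worth, the bookkeeping of this conjecture (this blog’s model, not textbook physics) can be tallied in a few lines:

```python
from fractions import Fraction as F

# This blog's conjecture, not textbook physics: a downquark is an electron
# with 2/3 of its unit charge screened into colour charge by the polarized vacuum.
core, observed = F(-1), F(-1, 3)
screened = abs(core - observed)                # 2/3 of a unit charge screened away
print("gluon/EM field energy ratio:", screened / abs(observed))  # (1 - 1/3)/(1/3) = 2

# Hydrogen tally: one electron, one downquark (disguised e-), two upquarks
# (disguised e+); the "core" charges balance exactly.
print("hydrogen core charge:", F(-1) + F(-1) + 2 * F(1))         # 0
```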

Responding to the comment and diagram above, Professor Jacques Distler (the pro-stringy bigoted arXiv adviser who leaves snide attacks by trackbacks while not permitting genuine trackbacks, as discussed in detail in the earlier blog post linked here) states: “As to your physics comments, they are, alas, completely wrong,” before recommending a dogmatic book on the Standard Model. That book is the whole problem, because my comments are entirely based on Feynman’s criticisms of the Standard Model in his book QED, so I responded:

Thanks for your technical analysis, Professor. I suggest you read Feynman’s book QED for the faults in the Standard Model I discussed (it’s at your level). Cheers.

Doubtless, he will come back with apologies, terrific enthusiasm for solid physics, and a confession that trying to make an ad hoc theory of the Standard Model using string theory or E8 is missing the point that the Standard Model may not merely be “incomplete” but in need of a complete rebuilding! It’s a bit like Ptolemy using epicycles to model the Earth-centred universe of Aristotle; it doesn’t matter how accurate the epicycle model is, if it is a mathematical model for a false interpretation of the universe, then it is a pipe-dream as far as real-world physics is concerned. In QED, Feynman writes:

The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. …

But if you just look at the [Standard Model] you can see the glue, so to speak. It’s very clear that the photon and the three W’s [weak gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.]

– R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Professor Distler, by claiming that Feynman’s arguments which I quoted were wrong (although that could of course just be his standard response to any non-string idea, and the reason for his arXiv advice), suggests that he is completely unaware of these issues with the Standard Model. However, I may be wrong here, since he is married to a psychologist. So maybe the explanation is different and he is a more complex character, of interest to students of the science of psychology? At a firmer level, you can understand his hostility to the real universe (as distinct from the imaginary landscape in the minds of string theorists) by noting that he works in the same Texas University department as the proponent of bombing Palestinian civilians (who claims that British regard for the human rights of civilians is disrespectful to American Jews), string theorist and Standard Model contributor Professor Steven Weinberg. Maybe Jacques would be fired if he admitted the truth about Feynman’s criticisms of the Standard Model that Weinberg helped build?

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space [because there is an infinite series of terms in the perturbative expansion to Feynman’s path integral] … Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

– Richard P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

It’s true that ignorant morons may object to advanced QFT mathematics simply because it’s so abstract. However, as Feynman’s argument shows, there are intelligent reasons for questioning the fundamental validity of even well established mathematical techniques used ubiquitously in particle physics…

Update: Dr Tommaso Dorigo has a new blog post up, apparently claiming that anyone who works on the original goal of string theory (but doesn’t get the landscape failure problem) is deranged:

What is the future of peer review? What does it do for science and what does the scientific community want it to do? Should it detect fraud and misconduct? Does it illuminate good ideas or shut them down? Does it help journalists report the status and quality of research? Why do some researchers do their bit and others make excuses? And why are all these questions important not just to journal editors, but to policy makers and the public? … [includes slides with vague, arm-waving complaints about the problem of “theories of everything” submitted for peer review, which indicate “deranged minds” on principle, and thus need not be taken seriously]

Response:

But who are “peers”? If you discover something nobody else has discovered which is way out, you don’t truly have any peers who are capable of checking your paper. E.g., if you send a paper to Classical and Quantum Gravity on an alternative to string theory, they send it to dogmatic string theorists to act as “peer” reviewers, who send back a rejection report saying the idea is useless for the progress of string theory! The problem is finding a genuine, unprejudiced, “peer” reviewer if you are working on a new idea which hasn’t as yet attracted any other interest.

Your position of publishing ideas written up by large numbers of collaborators misses this point. If you have a large group of authors, even if they get rejected, they have enough weight in numbers to say, start up their own specialist journal with its own peer review advisory panel, and get on the citation index. In reality, science requires groupthink support for new ideas. It’s unlikely that a radical idea can be completely implemented, explored, and popularized in the face of ignorant hostility and sneering rejections from mainstream “peer” reviewers who are basically the doormen to the trade union “closed shop” or maybe “old boys club” (depending on the analogy you prefer) of the Elitist Scientific Research Corp.

Peer review is great for the incremental progress of everyday science, but it is not so good where the new idea contradicts the entire ongoing mainstream research program, especially if that program is being defended as being “the only game in town”. ;-)

Just a word about the “deranged minds” with the “theory of everything”.

Here’s how the “theory of everything” gets developed:

1. You try to publish a single idea with a prediction for, say, quantum gravity.
2. You get rejected because your paper does not say how your idea fits into the Standard Model or general relativity.
3. You work on this problem and find a solution. Now you have a “theory of everything”.
4. Tommaso then says you must be deranged.

;-)

Thanks for the helpful idea to submit to arXiv.

I did that in December 2002, when only my email at the University of Gloucestershire could be used to set up an account. I uploaded a paper. It appeared online with an arXiv number. Thirty seconds later, when I refreshed my browser, it had been replaced with somebody else’s paper. The editor of Classical and Quantum Gravity then sent my paper to string theorists for “peer” review, and sent me the report without the names of the “peer” reviewers. The one good thing about refusing to give the names of the string theorists who wrote that report is that you are forced to suspect the whole lot of them as being your secret enemies ;-)

Diatribe against pure mathematical Platonic ideas infecting physics departments

The following is a piece I wrote and then deleted from a paper I’m trying to get finished, called “Understanding quantum gravity”. It’s a bitter diatribe against the pure mathematical Platonic ideals that have infected physics via string theory and even loop quantum gravity. It’s not what I want in my paper, and is not specifically targeted at Distler or anyone else, although it might or might not be applicable in any particular case.

I don’t know whether Distler is a complete string quack, or whether he has a genuine interest in physics which just became perverted as the landscape and AdS/CFT failures of string theory were defended by lying that it is “the only game in town”. Jacques has in the past made false speculations about my knowledge of physics, which may suggest to some that he is prepared to make false claims about what people are doing without first finding out the facts or asking. This leads me to think he is presently behaving as a quack in defending string theory when it has repeatedly failed to live up to its hype, but that’s just my personal view based on observing his weird behaviour, which may change and improve later on (particularly if Jacques should ever tragically die from old age and become a little quieter, or should we say politely, more “contemplative”).

What I want in the paper is just applied maths predictions, so I deleted the following portions which show the kind of bitterness that comes out when you try to write a paper after a lot of bigoted, quack “peer”-review for 14 years:

INTRODUCTION

Contrary to lying hype about making “predictions”, string theory gains its strength from its failure to predict results in a falsifiable way and so be found right or wrong. Many media science journalists have confused the impossibility of making checkable predictions with rigor, because they think a failure to ever be debunked by experiments sounds like rigor.

On the contrary, real world isolation is not rigor but just evidence of complete physical failure. By definition, in pure (not applied) mathematics you prove theorems without requiring any experiments or observational confirmations from the real world. The problem with string theory is that it doesn’t prove its non-real-world fantasies with pure mathematical rigor. E.g., the AdS/CFT correspondence is an unproved conjecture, and while there may or may not be an exact correspondence between anti de Sitter space and conformal field theory, such an exact mathematical correspondence would only approximately model the strong nuclear force. Anti de Sitter space, with its negative cosmological constant and attraction increasing with distance, is a rough approximation to the gluon-mediated QCD force over hadron-sized distances, but it is clearly not analogous to the strong force at larger, nuclear-sized distances, where mesons mediate it and it falls off with increasing distance instead of increasing! Nobody has succeeded in proving that AdS/CFT actually makes useful calculations that have not already been done by other approximate methods. All they have done is to hype fantasy. Even if the approximation works usefully, that won’t prove that string theory is right, any more than the ability of Ptolemy’s epicycles to predict the position of the sun proved that the sun orbits the Earth: it doesn’t!

This kind of ‘intellectual’ pure mathematical parlour game may seem harmless or clever to you, but it is attractive not only to harmless pure mathematicians but also to physically ignorant and thus dangerous second-rate (or failed) pure mathematicians, who are not innovative and productive enough at pure mathematics to earn a place in such a department, but are good enough at textbook calculations to get into physics departments.

These people in some cases have an agenda of Platonic fantasy, turning the physics department into a trojan horse by which the idealist goals of ‘beautiful’ pure mathematics are sneakily foisted upon physics. Any opposition to this destruction of science brings angry hostility rather than reform from these bitter pseudo-physicists, who often work as charlatan ‘peer’ reviewers, censoring science, believing in a mathematical universe.

There is also a story of cynical vested interests in non-falsifiable ideas by the educational/research theoretical physics community, which goes as follows. Before the 1984 superstring revolution, there were a lot of theories which were falsifiable. People would work on those theories for their PhD thesis, then some experiment would disprove the theory. Then they had to go through life saying they got their PhD in something that was disproved! Not very impressive to potential employers! So the wise guys jumped on the string theory research waggon for stability: if string theory was not falsifiable, it was a safer bet, because in 20 years’ time your PhD on “stringy D-branes” will still look respectable, etc. This problem has been called the Catt Concept, as explained in the following Editorial from Popular Mechanics, May 1970:

‘Perhaps NASA was too successful with Apollo. … According to Catt, the most secure project is the unsuccessful one, because it lasts the longest.’

– Robert P. Crossley, Editorial, Popular Mechanics, Vol. 133, No. 5, May 1970, p. 14.

Thus, the problem with string theory may be due to this need for PhD recipients in theoretical physics to have research projects in a general subject area which cannot be experimentally falsified, so that their work retains some measure of “non-failure” in the eyes of potential employers for the remainder of their careers, i.e. for decades:

‘The President put his name on the plaque Armstrong and Aldrin left on the moon and he telephoned them while they were there, but he cut America’s space budget to the smallest total since John Glenn orbited the Earth. The Vice-President says on to Mars by 1985, but we won’t make it by “stretching out” our effort. Perhaps NASA was too successful with Apollo. It violated the “Catt Concept”, enunciated by Britisher Ivor Catt. According to Catt, the most secure project is the unsuccessful one, because it lasts the longest.’ (Emphasis added.)

– Robert P. Crossley, Editorial, Popular Mechanics, Vol. 133, No. 5, May 1970, p. 14.

Thanks to censorship of criticisms, string theory has been securely funded for decades without success. E.g., compare the Apollo project with the Vietnam war for price, length and success. Both were initially backed by Kennedy and Johnson as challenges to Communist space technology and subversion, respectively. The Vietnam war – the unsuccessful project – sucked in the cash for longer, which closed down the successful space exploration project! Thus, the Catt Concept explains why the ongoing failure of string theory to be physics makes it a success in terms of killing off more successful alternative projects, by getting ongoing media attention, publicity, and funding which just keeps on coming at the cost of alternative projects which correctly predicted the cosmological acceleration of the universe to within observational error two years before it was detected!

CONTENTS

1. Mathematical lessons from classical gravitation (general relativity) and electromagnetism (Maxwell’s equations), with comments on Lunsford’s unification.

2. The spin of the graviton: evidence that all masses are exchanging gravitons, and that the spin of the graviton is 1 not 2 as claimed by Pauli and Fierz (tensor rank indicates whether field lines or accelerations are being modelled, and is not tied to field quanta spin, contrary to groupthink lying hype)

3. Implications for the Standard Model; changes to electroweak theory which allow gravitation and mass predictions; Feynman’s criticisms of the Standard Model and how these are overcome by the solution to a discrepancy in particle classification in existing beta decay analysis via weak bosons; replacing the Higgs field and current unification ideas

4. Quantitative predictions from the corrected Standard Model, which includes a complete theory of particle masses (replacing the ad hoc Higgs field), removes dogmatic, physically false symmetry breaking mechanisms and includes gravity

5. How arXiv and ‘peer’ review have used uncritical dogmatic censorship of alternative ideas in order to hype misinformed and ignorant non-falsifiable unpredictive pseudo-scientific groupthink fantasy; the analogy to the ‘peer’ review refusals in Nazi publications for the facts on eugenics to be presented, and the ‘100 authors against Einstein’ crusade, needed to suppress all scientific dissent against bigoted charlatans with a political agenda dressed up as ‘mainstream majority-backed science’

6. False modesty versus quantum gravity; obvious facts which you know, I know, you know I know, but which you maybe prefer to pretend that you believe I don’t know; downplaying the facts and being polite and modest in a paper (a) is absolutely no threat whatsoever to a system of censorship in which pseudo ‘peer’ reviewers are biased in favour of mainstream dogmas which have failed physically because ‘they are the only game in town’ (when in fact they are falsely producing this illusion by censoring alternatives), and (b) actually allows ignorant censors to convincingly (as far as the ignorant media is concerned) dismiss the facts falsely as ‘speculation’, by quoting the polite presentation of the facts in the paper as alleged evidence for weakness in those statements!

ABSTRACT. In May 1996, the quantum gravity mechanism of this paper predicted to within experimental error the small positive cosmological constant observationally confirmed two years later. Dark energy was accurately predicted. The greatest benefit of being unread is that your unread writings cannot possibly offend anyone. So we’re free to avoid diplomatic drivel and egotistically motivated false ‘modesty’, and explain why these facts are ignored.

Update:

Perelman has rejected the $1,000,000 Clay Mathematics Institute Millennium prize for proving Poincare’s conjecture, which is relevant to the discussion of pure mathematical trash above: the most competent mathematicians aren’t those who go around sneering at other people and trying to censor out discoveries, or hyping stringy lies. They’re relatively quiet, decent, moral people who put ideas forward, then don’t clamour to win immense materialistic prizes or to give endless interviews.


‘The Poincaré conjecture, proposed by French mathematician Henri Poincaré in 1904, was the most famous open problem in topology. Any loop on a sphere in three dimensions can be contracted to a point; the Poincaré conjecture surmises that any closed three-dimensional manifold where any loop can be contracted to a point, is really just a three-dimensional sphere. The analogous result has been known to be true in higher dimensions for some time, but the case of three-manifolds had turned out to be the hardest of them all. Roughly speaking, this is because in topologically manipulating a three-manifold, there are too few dimensions to move “problematic regions” out of the way without interfering with something else. … Perelman modified Richard Hamilton’s program for a proof of the conjecture, in which the central idea is the notion of the Ricci flow. Hamilton’s basic idea is to formulate a “dynamical process” in which a given three-manifold is geometrically distorted, such that this distortion process is governed by a differential equation analogous to the heat equation. The heat equation describes the behavior of scalar quantities such as temperature; it ensures that concentrations of elevated temperature will spread out until a uniform temperature is achieved throughout an object. Similarly, the Ricci flow describes the behavior of a tensorial quantity, the Ricci curvature tensor. Hamilton’s hope was that under the Ricci flow, concentrations of large curvature will spread out until a uniform curvature is achieved over the entire three-manifold. If so, if one starts with any three-manifold and lets the Ricci flow occur, eventually one should in principle obtain a kind of “normal form”. According to William Thurston, this normal form must take one of a small number of possibilities, each having a different kind of geometry, called Thurston model geometries.

‘This is similar to formulating a dynamical process which gradually “perturbs” a given square matrix, and which is guaranteed to result after a finite time in its rational canonical form.

‘Hamilton’s idea had attracted a great deal of attention, but no one could prove that the process would not be impeded by developing “singularities”, until Perelman’s eprints sketched a program for overcoming these obstacles. According to Perelman, a modification of the standard Ricci flow, called Ricci flow with surgery, can systematically excise singular regions as they develop, in a controlled way.

'It is known that singularities (including those which occur, roughly speaking, after the flow has continued for an infinite amount of time) must occur in many cases. However, any singularity which develops in a finite time is essentially a "pinching" along certain spheres corresponding to the prime decomposition of the 3-manifold. Furthermore, any "infinite time" singularities result from certain collapsing pieces of the JSJ decomposition. Perelman's work proves this claim and thus proves the geometrization conjecture.'

Updates (22 July 2010):

From the “comments” section to this post:

Hi Dirk,

Thanks! The emperor (Distler and other arXiv.org “advisers”) resorts to banning alternatives from string theory such as Lunsford’s peer-reviewed paper* from being hosted, and then falsely claims “string theory is the only game in town”.

Duh. Yeah, that's because all other serious games are banned by arXiv.org fiat, unless they're wrong/not-even-wrong junk like Smolin's loop quantum gravity.

The problem with "The Emperor's New Clothes" is that, if you read the original fairytale, when the Emperor realises that the invisible clothes he has been sold don't actually exist, he decides to continue pretending that they do exist, so the farce continues. People always misinterpret the ending of that fairytale as if the Emperor was debunked. Nope. He was still in command of the situation, and didn't even blush. If he wasn't such an arrogant son of a bitch, he wouldn't have become the emperor, or the Distler figurehead, in the first place!

Cheers,

Nige

—-
* Lunsford's paper, http://cdsweb.cern.ch/record/688763 was published in the peer-reviewed journal Int. J. Theor. Phys. 43, 1 (2004), 161-177, but was then deleted and banned without explanation from arXiv.org; see Lunsford's comment: http://www.math.columbia.edu/~woit/wordpress/?p=128&cpage=1#comment-1920:

“… the proper way to do it is to put it on arxiv. I don’t know why they blacklisted it – all I know is, it got sponsored, got put up, and then vanished – never got any explanation. I would like for the thing to be on arxiv just on general principles, but now that it’s actually been peer-reviewed and published, it’s not a big issue to me any more.”

————

NEW SCIENTIST, 22 May 2010, page 40, “Muon whose army?” by Kate McAlpine:

“For years the E821 collaboration, based at Brookhaven National Laboratory in Upton, New York, studied particles called muons. These are unstable subatomic particles similar to electrons but about 200 times as heavy. The research focussed on a quantum property of the muon known as its magnetic moment, and it found the Standard Model wanting. According to the measurements, there is a mere 0.27 per cent probability that the Standard Model is correct. …

“In fact, the magnetic moment is so sensitive it is affected by the presence of particles unknown in Dirac’s day, including quarks, W and Z bosons, and the [imaginary] Higgs boson. Indeed, quantum mechanics tells us that virtual versions of any kind of particle – including ones we haven’t discovered yet – can pop into existence by borrowing energy for a passing instant. …

"The muon is affected much more than the electron because of its greater mass. That's because they have more energy available for virtual particles to borrow. The magnetic moment of the electron is one of the most closely verified predictions of the standard model …. Not so for the muon. The first signs that all was not well came shortly after the E821 experiment got under way in the mid-1990s."

See www.arxiv.org/abs/1001.4528 (“After a brief review of the muon g-2 status, we discuss hypothetical errors in the Standard Model prediction that might explain the present discrepancy with the experimental value. None of them seems likely. In particular, a hypothetical increase of the hadron production cross section in low-energy e+ e- collisions could bridge the muon g-2 discrepancy, but it is shown to be unlikely in view of current experimental error estimates. If, nonetheless, this turns out to be the explanation of the discrepancy, then the 95% CL upper bound on the Higgs boson mass is reduced to about 135GeV which, in conjunction with the experimental 114.4GeV 95% CL lower bound, leaves a narrow window for the mass of this fundamental particle.”) for an analysis of how much the contribution of hadrons to the muon’s magnetic moment would need to be increased to bring the prediction into agreement with measurements. The problem with this is that it causes the theoretical mass of the W boson to fall below the measured value. So the authors of that arxiv paper, Massimo Passera et al., “wondered what effect the various possible Higgs masses would have on the muon’s magnetic moment. To match the E821 result, their calculations suggest that the Higgs mass is far lower than 114 GeV, which has already been ruled out [NC: but there is no Higgs, mass is given by Z bosons of 91 GeV each!]. Taken at face value, this raises the uncomfortable possibility that the standard model is wrong, as long as E821’s results are bona fide.”

Heavy SUSY partners are postulated as an ad hoc explanation for the discrepancy by string theorists: ‘“There are supersymmetric theories that would explain this discrepancy very well,” says Passera.’

This is because virtual particles in the vacuum BOOST the magnetic moment of a lepton.
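The size of the leading boost is standard textbook QED: Schwinger's one-loop correction from a single virtual photon raises a lepton's magnetic moment by a factor 1 + α/(2π). A quick check of the magnitude (standard physics, added here purely for illustration):

```python
import math

alpha = 1 / 137.036                   # fine structure constant
a_schwinger = alpha / (2 * math.pi)   # leading virtual-particle boost to (g-2)/2

# The anomaly a = (g - 2)/2 is ~0.00116 at one loop for electron or muon alike;
# heavier virtual particles then shift the muon far more than the electron.
print(f"a = alpha/(2*pi) = {a_schwinger:.6f}")
```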

Karl Popper demarcated science from pseudo-science by arguing for falsifiable predictions.

A problem related to the muon's magnetic moment miscalculation by the Standard Model is the fact that the proton's charge radius, when determined using muon interactions, differs by 5 standard deviations from that determined in an experiment using electron interactions, as Wikipedia states:

The internationally-accepted value of the proton’s charge radius is 0.8768 femtometers. This value is based on measurements involving a proton and an electron.

However since July 5, 2009 an international research team has been able to make measurements involving a proton and a negatively-charged muon. After a long and careful analysis of those measurements the team concluded that the root-mean-square charge radius of a proton is “0.84184(67) fm, which differs by 5.0 standard deviations from the CODATA value of 0.8768(69) fm.”[11]

The international research team that obtained this result at the Paul-Scherrer-Institut (PSI) in Villigen (Switzerland) includes scientists from the Max Planck Institute of Quantum Optics (MPQ) in Garching, the Ludwig-Maximilians-Universität (LMU) Munich and the Institut für Strahlwerkzeuge (IFWS) of the Universität Stuttgart (both from Germany), and the University of Coimbra, Portugal.[12][13] They are now attempting to explain the discrepancy, and re-examining the results of both previous high-precision measurements and complicated calculations. If no errors are found in the measurements or calculations, it could be necessary to re-examine the world’s most precise and best-tested fundamental theory: quantum electrodynamics.[14]

This is done by replacing the electron in an atom with a muon:

In order to determine the proton radius, the researchers replaced the single electron in hydrogen atoms with a negatively-charged muon. Muons are very much like electrons, but they are 200 times heavier. According to the laws of quantum physics, the muon must therefore travel 200 times closer to the proton than the electron does in an ordinary hydrogen atom. In turn, this means that the characteristics of the muon orbit are much more sensitive to the dimensions of the proton. The muon ‘feels’ the size of the proton and adapts its orbit accordingly. “In fact, the extension of the proton causes a change in the so-called Lamb-shift of the energy levels in muonic hydrogen”, Dr. Randolf Pohl from the Laser Spectroscopy Division of Prof. Theodor W. Hänsch (Chair of Experimental Physics at LMU and Director at MPQ) explains. “Hence the proton radius can be deduced from a spectroscopic measurement of the Lamb shift.”
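As a quick arithmetic check on the 5.0 standard deviations quoted above, here is a minimal sketch using only the two quoted values, combining their uncertainties in quadrature:

```python
import math

# Values quoted above (femtometres): muonic hydrogen result vs CODATA.
r_mu, err_mu = 0.84184, 0.00067
r_codata, err_codata = 0.8768, 0.0069

diff = r_codata - r_mu
sigma = math.sqrt(err_mu**2 + err_codata**2)   # combined standard error
print(f"{diff:.5f} fm discrepancy = {diff / sigma:.1f} standard deviations")
```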

MAXWELL’S RANK-1 EQUATIONS AND GENERAL RELATIVITY RANK-2 EQUATIONS

Newton’s law of gravitation was proposed in 1665 and Coulomb’s electrostatics force law was proposed in 1785.

In Newton’s theory, force F is related to gravitational charge (mass), m, by the relationship

F = ma,

where acceleration, a, leads to "rank-2", i.e. second-order, spacetime equations because a = d²x/dt².

In Maxwell’s equations, the corresponding force laws are

F = qE and F = qvB sin θ

where q is electric charge, qv sin θ is effectively the magnetic charge, E is electric field, and B is magnetic field.

This definition leads to "rank-1", i.e. first-order, differential equations, because the fields E and B are not represented in electromagnetism by rank-2 or second-order spacetime equations like acceleration, a = d²x/dt². The fields E and B, by contrast, are defined as first-order or rank-1 gradients; e.g. E is the gradient of the potential in volts/metre, E = -dV/dx.

This is the fundamental reason why Maxwell’s equations are rank-1, whereas the accelerations and spacetime curvatures in general relativity utilize rank-2 tensors.
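To make the rank-1 versus rank-2 distinction concrete, here is a minimal numerical sketch (my illustration; the toy potential and trajectory are arbitrary choices, not from the original argument): the field E needs one derivative of a potential, while an acceleration needs two derivatives of a position.

```python
import numpy as np

# Rank-1 (first-order) definition: E = -dV/dx, one derivative of a potential.
# Toy Coulomb-like potential V(x) = 1/x in arbitrary units.
x = np.linspace(1.0, 10.0, 1001)
V = 1.0 / x
E = -np.gradient(V, x)                   # first-order gradient: a rank-1 field

# Rank-2 (second-order) definition: a = d^2x/dt^2, two derivatives of position.
# Toy trajectory x(t) = t^2, i.e. constant acceleration a = 2.
t = np.linspace(0.0, 10.0, 1001)
xt = t**2
a = np.gradient(np.gradient(xt, t), t)   # second-order: the rank-2 analogue

print(f"E at x=1: {E[0]:.3f} (expect ~1/x^2 = 1)")
print(f"a at mid-trajectory: {a[500]:.3f} (expect 2)")
```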

Update (26 July 2010): Dr Woit writes on his blog that the overhyped, imaginary Standard Model Higgs boson, despite 45 years of searching, still lacks any statistically significant evidence:

New Higgs Results From the Tevatron

Just got back from vacation this morning. Luckily I managed to be away for the blogosphere-fueled Higgs rumors, returned just in time to catch the released results which appeared in a Fermilab press release minutes ago. The ICHEP talk in Paris announcing these results will start in about half an hour, slides should appear here.

The bottom line is that CDF and D0 can now exclude (at 95% confidence level) the existence of a Standard Model Higgs particle over a fairly wide mass range in the higher mass part of the expected region: from 158 to 175 GeV. If the SM Higgs exists, it appears highly likely that it is in the region between 114 GeV (the LEP limit) and 158 GeV. The most relevant graph is here. It shows an excess of about 1 sigma over the entire region 125 GeV to 150 GeV, which unfortunately is nothing more than the barest possible hint of something actually being there.

Above: Dr Woit explains: “The experiments are not seeing a signal, so the plot is of the 95% confidence level limit they can put on a signal/divided by the signal expected for a SM Higgs. The “expected” value for this is based on the expected performance of the detectors. Around 165 GeV they expect to be able to put a limit on the size of the signal below the SM value, and they do, ruling out the existence of a Higgs at that mass. Around 125 GeV they expect to only be able to get a limit about 1.8 times the SM value, so don’t expect to be able to rule out a Higgs at that mass.

“If there really were a Higgs at a certain mass, one expects that the experiments would start to see an excess above expected background, and this would make their 95% confidence level worse than expected. However, the excess they are seeing is still so small as to be quite consistent with no real signal.”

Myths of quantum field theory

Rank-1 tensors (vectors) are used in electromagnetism because the field is defined in terms of the simple rank-1 gradients and curls of Faraday's imaginary "field lines"; in general relativity, however, field lines are not used, and curvatures describing accelerations (second-order differential equations) are used to describe the field. Thus the use of rank-1 tensors in electromagnetism and rank-2 tensors in gravitation stems from the differing physical definitions of the "field" in each case (diverging or curling lines in electromagnetism, but accelerations in gravitation), not from the difference in the spin of the field quanta (spin-1 for electromagnetism, assumed spin-2 for gravitation).

The most curious thing is the false correlation by physically ignorant string theorists and others of spin-1 fields to rank-1 tensors in electromagnetism and of spin-2 fields to rank-2 tensors in general relativity. The correlation is fictitious, because the choice of rank-1 tensors in electromagnetism is purely due to a difference between the way the field is defined in electromagnetism and the way it is defined in general relativity. In electromagnetism, the field is defined by means of Faraday’s diverging or curling field lines, which are modelled in Maxwell’s equations by summing over simple first-order gradients (rank-1 tensors). If Faraday had not gone in for the field line concept, then you can bet we would today have a model of electromagnetism in terms of accelerations, i.e. rank-2 tensors.

There is no physical basis for popular claims that electromagnetism is intrinsically a rank-1 calculus system and that gravitation is intrinsically a rank-2 calculus system. It's down to historical chance that Maxwell followed Faraday and used gradients of field lines, i.e. first-order or rank-1 tensors, to represent electromagnetic fields, instead of directly representing electromagnetic forces in terms of accelerations (second-order equations, rank-2 tensors). If Maxwell had chosen to write his equations in terms of accelerations, rather than via the curls and divergences of Faraday's imaginary (fictitious) "field lines" (rank-1 tensors), then we would have spin-1 electromagnetic fields represented by rank-2 tensor equations instead of rank-1. It's purely down to historical fluke. Once Maxwell had formulated his equations using Faraday's unobservable field lines as rank-1 tensor equations, they became the usual groupthink physics dogma, and nobody was willing to try to rebuild the theory in terms of rank-2 tensors.

Similarly, if Einstein and Hilbert in 1915 had formulated the field equation of general relativity using rank-1 tensors by analogy to the field lines of electromagnetism (instead of in terms of spacetime curvature which describes acceleration more directly), gravitation would be described by rank-1 field equations. In summary, the distinction between rank-1 and rank-2 tensor field equations in electromagnetism and gravitation is solely down to the choice of using the divergences and curls of field lines in 3 dimensional space to model electromagnetic fields and the choice of using spacetime curvatures (accelerations) to model gravitational fields. It is quite possible to model fields described in different ways by the use of different ranks of tensors. It’s got nothing to do with the spin of the graviton, because you could model electric forces with a rank-2 spacetime curvature equation and you could reformulate general relativity in terms of a rank-1 Faraday-type field line model where imaginary gravitational field lines diverge outward from mass/energy particles just as imaginary electric field lines diverge outward from electric charges in Faraday’s picture. Do you grasp this point? If you do grasp it and have some time to waste, maybe you will try arguing with the ignorant, lying bigots who are behind Wikipedia’s spin-2 graviton propaganda lies:

“If it exists, the graviton must be massless (because the gravitational force has unlimited range) and must have a spin of 2. This is because the source of gravity is the stress-energy tensor, which is a second-rank tensor, compared to electromagnetism, the source of which is the four-current, which is a first-rank tensor. Additionally, it can be shown that any massless spin-2 field would be indistinguishable from gravity, because a massless spin-2 field must couple to the stress-energy tensor in the same way that the gravitational field does.”

A massless spin-2 field would (if it existed) be indistinguishable from gravity because it couples to the stress-energy tensor like gravity. So what? That doesn't prove that gravity is due to a massless spin-2 field. It certainly doesn't disprove spin-1 gravitons, which correctly predicted in 1996 the acceleration of the universe as measured two years later by Perlmutter's group, something that the non-falsifiable spin-2 gravity "predictions" have never done. The sole success of the spin-2 graviton hype efforts has been to stop the publication of the facts, the falsifiable predictions which were later confirmed by the discovery of the acceleration of the universe as predicted. These people are so ignorant and plain stupid that you are wasting your time if you even say hi to them. Like the Nazis, they are big shots and they know it all too well. Like the Nazis, they have their fellow travellers: the people who have the brains to see, like Prime Minister Chamberlain, that appeasement and shaking hands with these scum brings the applause of the crowd. It's extremely hard to know how to proceed against a widely lauded groupthink consensus of ignorant liars who censor arXiv.org, the "peer" (bigoted mainstream critic, more like)-reviewed journals, and the sci-fi obsessed, mainstream, Hollywood-led media.

(The text above is extracted from the final section of the earlier post linked here, for the reason that the earlier post was primarily concerned with arXiv trackbacks, and it is always a good idea to separate out topics in different posts, or else some people who are not interested in one topic will stop reading a post before reaching material embedded in it which is of more interest to them.)

First versus second quantization quantum mechanics

On Facebook, Dr Jack Sarfatti is reviewing Professor Yakir Aharonov's ideas. Aharonov is the physicist famous for the Aharonov-Bohm effect, which disproves the idea that the electric and magnetic field strengths fully describe the electromagnetic field. This fact becomes intuitively obvious when you notice that you can "cancel out" magnetic or electric fields with nearby opposite poles or opposite charges, without destroying the energy density of the field. Similarly, you can pass two waves with opposite amplitudes through one another: despite the wave feature being temporarily "cancelled" during the period they are passing through one another and overlapping, when they emerge after passing through one another they are fully restored, with no energy loss! (The contrapuntal model of the charged capacitor is another example, suggesting that charged massless SU(2) gauge bosons deliver electromagnetic forces, leaving U(1) hypercharge to generate spin-1 quantum gravity.)
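Here is a minimal numerical sketch of that two-wave point (purely illustrative, my own construction): two opposite-amplitude Gaussian pulses on an ideal string momentarily cancel in displacement as they overlap, yet the total field energy (kinetic plus gradient) is unchanged throughout, and the pulses re-emerge intact.

```python
import numpy as np

# Exact wave-equation solution (wave speed 1): u(x,t) = f(x - t) - f(x + t),
# i.e. two counter-propagating pulses of opposite amplitude.
f = lambda s: np.exp(-s**2)
x = np.linspace(-20.0, 20.0, 4001)

def total_energy(t, dt=1e-4):
    # Energy ~ integral of (u_t^2 + u_x^2)/2, from numerical derivatives.
    u = lambda tt: f(x - tt) - f(x + tt)
    u_t = (u(t + dt) - u(t - dt)) / (2 * dt)
    u_x = np.gradient(u(t), x)
    return np.trapz(0.5 * (u_t**2 + u_x**2), x)

for t in (-5.0, 0.0, 5.0):
    u = f(x - t) - f(x + t)
    print(f"t = {t:+.0f}: max|u| = {np.abs(u).max():.4f}, energy = {total_energy(t):.4f}")
# At t = 0 the displacement cancels everywhere (max|u| ~ 0), but the energy
# survives in the velocity field u_t, and the pulses reappear for t > 0.
```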

The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle shows a measurable interaction with an electromagnetic field despite being confined to a region in which both the magnetic field B and electric field E are zero.

The Aharonov–Bohm effect shows that the local E and B fields do not contain full information about the electromagnetic field, and the electromagnetic four-potential, A, must be used instead. – Wikipedia.

Jack wrote:

Measurements do not necessarily disturb the quantum system, e.g. eigenoperator measurements do not. Aharonov introduced new kinds of “weak” and also “protective” measurements. … Remember Newton’s passion for Alchemy did not detract from his mechanical equations for gravity.

My response:

Measurements often disturb systems, by probing them with particles. Maybe the question is whether first-quantization indeterminate wavefunctions are physically “collapsed” (rather than just mathematically in a model of the process) by the act of taking a measurement of a system. Dr Thomas Love of California State University argues in a preprint he sent me that it’s just a mathematical error of first-quantization, with the time-dependent Schroedinger equation describing the particle prior to interaction, then a switch to the time-independent equation to model the particle’s eigenstate at interaction time:

“The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

Feynman in his 1985 book QED says the same about the first-quantization hype of the uncertainty principle. It's a problem with Schroedinger/Heisenberg first-quantization, and Feynman offers second-quantization QFT as a replacement. In first-quantization, which is normal textbook "quantum mechanics", the particle is treated as intrinsically indeterminate while the Coulomb field around it is treated classically. QFT (second-quantization), introduced by Dirac and Feynman, is the exact opposite: the chaotic entity is the field the particle is immersed in, with its creation and annihilation operators. The motion of an atomic electron, according to Feynman's 1985 book, is not inherently chaotic; the chaos is produced by the random interferences it experiences with the quantum Coulomb field (which Schroedinger and Heisenberg ignore by treating the field classically in their first-quantization quantum mechanics):

“I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, ‘Your old-fashioned ideas are no damn good when …’. If you get rid of ALL the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no NEED for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [of on-shell particles by off-shell field quanta] becomes very important …”

– Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, & 84.
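A toy numerical illustration of "adding arrows" (my sketch, not Feynman's own example; units chosen so that hbar = 1): summing one phase arrow per path shows that paths far from the classical one give rapidly rotating arrows which cancel, so the sum saturates without any uncertainty principle being imposed by hand.

```python
import numpy as np

# Toy path sum: free particle of mass m from x=0 to x=L in time T, with each
# path detouring through x = L/2 + d at the midpoint time (d labels the path).
m, L, T = 1.0, 1.0, 1.0
d = np.linspace(-3.0, 3.0, 2001)

# Kinetic action of the two straight segments: S(d) = (m/T)*(L^2/2 + 2*d^2).
S = (m / T) * (L**2 / 2 + 2 * d**2)
arrows = np.exp(1j * S)          # one unit-length phase "arrow" per path

# Widening the family of paths soon stops changing the total: the arrows from
# far-from-classical paths spin rapidly with d and cancel among themselves.
for cut in (0.25, 0.5, 1.0, 3.0):
    sel = np.abs(d) <= cut
    print(f"paths with |d| <= {cut}: |sum of arrows| = {abs(arrows[sel].sum()):7.1f}")
```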

For a debunking of the fudged experiments of Prof. Alain Aspect on “quantum entanglement”: http://arxiv.org/abs/quant-ph/9903066

After making this comment, Dr John Gribbin, the famous proponent of 1st quantization quantum mechanics and author of "In Search of Schroedinger's Cat", responded:

Measurements ALWAYS disturb systems!

I replied:

John, if I today measure a supernova explosion flash that was set off a billion years ago, do I 'disturb' that system? I think you are excluding a category of measurements of this kind (where the system is self-luminous) when you say 'Measurements ALWAYS disturb systems'. I can measure distant systems without really disturbing them, just by measuring light they emit in my direction. This doesn't affect the supernova that occurred a billion years ago!

Similarly, Einstein asked if the observer influences the wavefunction of the Moon: ‘Is the Moon there when you aren’t looking at it?’ Clearly, the action of the moon is way bigger than h-bar, so the Moon is to be treated as a classical system, but even if it were just a particle, an observer of a photon emitted by it a quarter of a million miles away will not have any influence on it. Hope this is a friendly sounding response!

He then replied:

John Wheeler for one would disagree. Check out “delayed choice” experiments. John

My response:

Hi John, thanks. ‘The fundamental lesson of Wheeler’s delayed choice experiment is that the result depends on whether the experiment is set up to detect waves or particles.’ – http://en.wikipedia.org/wiki/Wheeler’s_delayed_choice_experiment
I agree that the observer’s equipment determines whether he sees the wave or particle properties of the photon, but I don’t see how this affects the system which emits the photon in the first place!

In his Nobel prize speech, Feynman mentions the influence of Wheeler's ideas of electrons travelling backwards in time on his early (failed) attempts to formulate QED. Feynman didn't get anywhere with that. I haven't even seen any Nobel Prizes awarded for Aspect's quantum entanglement experiments (which depend on an ad hoc elimination of 60% of inconvenient results, dismissed as "accidentals"), or for Cramer's transactional "handshake" interpretation of QM (with its backward time travel). But 2nd quantization QFT has won many prizes for accurate prediction of experimental data, and yet it is still ignored in popular books pushing the 1st quantization quantum mechanics that Feynman debunked 25 years ago in his book QED.
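Incidentally, Einstein's Moon example above ("the action of the moon is way bigger than h-bar") is easy to check numerically; the rounded textbook orbital values below are my own assumptions, for illustration:

```python
# Orbital action of the Moon, roughly L ~ m*v*r, compared with hbar.
m = 7.35e22        # Moon mass, kg (rounded)
v = 1.02e3         # mean orbital speed, m/s (rounded)
r = 3.84e8         # mean orbital radius, m (rounded)
hbar = 1.055e-34   # reduced Planck constant, J s

action = m * v * r
print(f"L / hbar ~ {action / hbar:.1e}")   # ~ 3e68: overwhelmingly classical
```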

String theory investigator Distler versus E8 theorist Lisi

Professor Jacques Distler has revived his old attack on Garrett Lisi’s idea for unification using the exceptional Lie structure E8. It’s becoming funny to watch the arguments. Jacques begins with the quotation:

“It is difficult to get a man to understand something when his livelihood depends on him not understanding it.” — Upton Sinclair

Jacques goes on (prior to the mathematical discussion):

Skip and I wrote a paper, last year, which proved that Garrett Lisi’s “Theory of Everything” (or any E8-based variant thereof) could not yield chiral fermions (much less 3 Standard Model generations worth of fermions). Anyone with training in high energy theory instantly apprehends the consequence that this “theory” cannot, therefore, have anything remotely to do with the real world. Unfortunately, if your PhD is in pure mathematics (or, apparently, in hydrodynamics), this may not be immediately obvious to you.

Skip has the unenviable task of lecturing on our paper at a workshop, next week, with Garrett in attendance. (Well, OK, the workshop is in lovely Banff Alberta, so perhaps some envy is warranted.) This post is designed to help him fill in the dots. It contains only material which — to someone schooled in high energy theory — is of an embarrassingly elementary nature.

Jacques, you see, is responding to Garrett’s new arXiv paper, An Explicit Embedding of Gravity and the Standard Model in E8, which claims:

The algebraic elements of gravitational and Standard Model gauge fields acting on a generation of fermions may be represented using real matrices. These elements match a subalgebra of spin(11,3) acting on a Majorana-Weyl spinor, consistent with GraviGUT unification. This entire structure embeds in the quaternionic real form of the largest exceptional Lie algebra, E8. These embeddings are presented explicitly and their implications discussed.

So Garrett is claiming to be able to embed fermions within an E8 symmetry, while Jacques and Skip claim to disprove the idea that any chiral (spin-handed) fermions can be produced by an E8 algebra. Being a string theory fanatic against alternative ideas, as we suggested in the previous post, Jacques, who is an arXiv adviser, finds no problem (unlike Dr Woit, in the recent case of trackbacks to string theory papers) in leaving a critical trackback to Garrett's arXiv paper.

Garrett, like me, surfs and windsurfs, but I don't have any enthusiasm for his paper, which predicts nothing checkable and explains no particle masses, no coupling parameters, no mixing angles; in short, nothing. He claims to show that E8 can be used as an ad hoc model to represent the patterns of fundamental particles, although Jacques and Skip dispute this by showing that E8 will inherently yield non-chiral, "mirror" fermions. In any case, it's an ugly, ad hoc piece of speculative framework modelling which runs completely counter to Occam's Razor and to the primitive notion of building theories on facts, rather than on the time-wasting hype from fiddling around with Platonic mathematical patterns to "describe" directly unobservable features of the world in an uncheckable, non-falsifiable way.

Dr Woit exposes arXiv.org pro-string theory prejudice

Dr Peter Woit of the maths department, Columbia University, New York, has exposed the prejudice in favor of string theory in his brilliant new blog post, String Theory Fan, on his Not Even Wrong blog (the blog's title refers to the non-falsifiable nature of the string theory landscape, which "predicts" everything in different parallel universes, so you can't test it).

Professor Jacques Distler is a string theory specialist who advises arXiv.org, the online physics free paper preprint server, on what is sensible physics. He had numerous arguments with Dr Woit. Subsequently (maybe the word should be consequently), arXiv.org has banned trackbacks from Dr Woit’s blog to arXiv.org papers he discusses.

To investigate this, Dr Woit secretly set up a spoof new blog called String Theory Fan, full of hype for speculative non-falsifiable pseudo-science.  Unsurprisingly, arXiv.org approved trackbacks to that blog, while still banning them to Dr Woit's objective Not Even Wrong blog.

In the comments on Dr Woit’s blog, Kea (Dr Sheppeard) writes:

“The shady activities of the arxiv are now well established. Fortunately, alternative sites such as vixra make the arxiv essentially redundant, even if professional ass kissing keeps it alive.”

However, others defend arXiv.org’s censorship of new ideas:

"… there is a danger to feeding the caricatural anti-establishmentarianism manifested by some of those on the fringe of legitimate science – there is a difference between wild, credible ideas and junk …" – Lenny

Stringy theory consisting of wild, credible ideas; not junk?

Recap of the history of quantum gravity

1. Newton’s Philosophiæ Naturalis Principia Mathematica has a diagram (Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1) showing gravity being delivered by discrete (i.e. quantized) impulses.  I.e., Newton uses quantum gravity rather than the continuous variables of his newly created calculus, to prove that the inverse square law applies to Kepler’s elliptical orbits (not just to circular orbits, which Hooke had allegedly already proved ahead of Newton):

Fig. 1 - Newton's Principia, revised 2nd edition, 1713: Book 1, The Motion of Bodies, Section II: The Determination of Centripetal Forces, Proposition 1, Theorem 1.

2. Newton's friend Fatio suggests that gravity is quantized and is carried by real particles of a gas. Newton is unimpressed, because the idea is speculative, because a gas tends to cause drag and to heat up moving objects (which would soon slow down and heat up the planets if they were moving through such a gas), and because Fatio is unable to make any quantitative predictions or progress using the idea.  It's just too vague, too far ahead of its time, and too problematic in its initial form.  LeSage in 1784 publishes Fatio's idea in France, pointing out that the geometry of isotropic gas particle pressure being "shadowed" by the sun and the planets will create inverse-square law forces towards those masses, and arguing that two key problems can be overcome if

(a) the mean-free-path of the particles in between collisions with masses is large, i.e. the gravitons have a great penetrating power (so that the particles don’t diffuse into “shadow” zones and thus cut off gravity faster than the inverse square law), and

(b) the nature of matter is mostly empty space, rather than "solid" as commonly thought in 1784 (i.e. it is similar to the nuclear atom, in which the atom consists of a tiny nucleus surrounded by tiny electrons at distances around 10,000 nuclear radii).  LeSage found this necessary to allow gravitons to act on essentially all of the mass inside a planet, rather than just on the surface area of the macroscopic planet: for constant density, surface area is proportional to the square of the radius, while volume and thus mass are proportional to the cube of the radius. LeSage was therefore constrained to make the fundamental particles of mass a tiny size to prevent significant geometric overlap, allowing all of the mass within a planet to contribute to the "shadowing" effect, which delivers an inverse-square force, as the sketch below illustrates.
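A small numerical check of the shadow geometry (my illustration, not LeSage's own calculation): the solid angle of the shadow cast by a sphere of radius R, seen from distance r, tends to pi*R^2/r^2, so a shadowing force falls off as the inverse square of distance.

```python
import math

# Solid angle subtended by a sphere of radius R seen from distance r:
#   Omega = 2*pi*(1 - sqrt(1 - (R/r)^2)), which -> pi*R^2/r^2 for r >> R.
# A LeSage "shadow" force proportional to Omega is therefore inverse-square.
R = 1.0
for r in (5.0, 10.0, 20.0, 40.0):
    omega = 2 * math.pi * (1 - math.sqrt(1 - (R / r) ** 2))
    print(f"r = {r:5.1f}: Omega * r^2 = {omega * r**2:.4f} (pi*R^2 = {math.pi:.4f})")
```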

3. A century after LeSage, James Clerk Maxwell devised his mechanical aether of the vacuum of space, in which gear cogs and idler wheels conveyed "aethereal displacement currents" between the plates of a charging capacitor (and later, between radio transmitter and receiver antenna, once radio waves had been discovered/invented).  Consequently, Maxwell had a mechanical interest in LeSage's gravity, and so did Maxwell's aethereal followers, famous physicists like Lord Kelvin who resisted Einstein's discarding of the "superfluous" spacetime fabric in 1905.  However, Maxwell, Kelvin and others found that the Fatio/LeSage quantum gravity idea was still incapable of predictions and loaded with apparently insuperable problems.  In order to deliver enough force to cause gravity, the gravitons would apparently deliver enough drag force to heat up the Earth, as it moved in its orbit of the sun, until it glowed red hot (like a meteorite burning up in the Earth's atmosphere). In addition, the drag force would slow the Earth down, allowing it to spiral into the sun.  Maxwell, Kelvin and others for these reasons found the Fatio/LeSage theory to be wrong, and they also deplored the fact that it did not make really impressive numerical predictions, e.g. a prediction of the field strength constant for gravity, G.

4. In 1915, Einstein and Hilbert came up with the field equation of general relativity. This is a precise mathematical statement relating gravity to its source term (mass and energy produce gravitational fields, with 1 kg of mass having the same gravitational effects as 9 x 10^16 joules of energy, according to the famous mass-energy equivalence), allowing for the conservation of mass-energy by making space contract in gravitational fields, producing acceleration and curvature in 3 spatial dimensions. Spacetime curvature is indicated by a curving line on a plot of distance versus time; this spacetime curvature represents acceleration. Curved spacetime in 3 dimensions therefore represents a force field. The Riemann and Ricci tensors are simply related to acceleration.

5. The success of general relativity in accurately predicting in 1919 that starlight falls towards the sun at twice the rate predicted by Newton's acceleration law for particles of non-relativistic matter led to a great deal of hype for general relativity. Actually, it is only a classical theory, not a quantum theory, and general relativity has an enormous number of problems (far more than LeSage's gravity):

(a) The stress-energy tensor used to represent the source of gravitational fields in general relativity can't (and doesn't) model particles of matter realistically. Like all differential equations, it can only accurately represent smoothly varying quantities, not discontinuities such as an array of particles in space.  So general relativity has never been used to represent the discontinuous nature of atomic and particulate matter and energy! Instead, such matter has to be replaced by a "perfect fluid" field with the same averaged mass or energy density.

(b) The general relativity theory implicitly assumes that G is a constant, i.e. it simply excludes any possibility that the source of gravitation is to be found in the motions of the masses surrounding the observer.  Teller in 1948 and others have tried to discredit a time variation of G by falsely observing that any such variation would alter fusion rates in the big bang or in the sun. Of course, electromagnetism is closely linked to gravitation (both are long-range, inverse-square law forces), and thus a variation in both electromagnetic and gravitational couplings will prevent any variation in fusion rates: increasing gravity increases compression and thus fusion rates, but increasing the electromagnetic coupling does the exact opposite, increasing the repulsion between protons and thus reducing fusion rates by raising the Coulomb barrier; these two effects offset and effectively cancel out.

We predicted in May 1996 that LeSage's gravity, applied to spin-1 off-shell gravitons (virtual radiation off the relativistic mass-shell, which is not a real gas of particles causing heating and drag!), requires a radial inward force of ~10^43 newtons for black-hole sized fundamental particles which radiate spin-1 bosons by an off-shell version of Hawking's radiation theory. By Newton's 3rd law, this predicts an equal and opposite force, i.e. a radial outward force of the gravitational charges (mass and energy) in the universe which are exchanging spin-1 gravitons with us. From the amount of mass in the universe and Newton's 2nd law, F = dp/dt ~ ma, the theory in May 1996 predicted a cosmological acceleration of 7 x 10^-10 ms^-2, a prediction published via page 896 of the October 1996 issue of Electronics World and confirmed two years later by Dr Saul Perlmutter's now famous discovery that the acceleration of the universe is 7 x 10^-10 ms^-2, using computer-automated real-time processing of supernova flash data streaming from CCD telescopes. Hence, we have a theory (albeit censored out from Classical and Quantum Gravity due to "spin-2 string theory is the only quantum gravity approach worth reading" lies, which are believed by ignorant, dogmatic peer-reviewers) that has not just survived observational tests but has been proved accurate by predicting the dark energy of the universe. This evidence proves that gravity is caused by the reaction force to the acceleration of the universe. Both cosmological acceleration and gravitation are caused by the exchange of spin-1 gravitons between particles of matter/energy. Over great distances and between great masses the effects are repulsive, while between relatively small masses (an apple and the Earth, both small compared to the mass of the surrounding universe) the mutual repulsion is much smaller than the exchange forces on the other sides, so the net result is that the apple gets pushed down to the Earth. The more matter in the Earth, the more shielding of gravitons, and the faster the apple gets accelerated, because the asymmetry is bigger.
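For readers who want to check the arithmetic, here is a minimal sketch of the a = Hc figure (the round Hubble constant value below is my assumption for illustration; it is not part of the 1996 derivation):

```python
# Cosmological acceleration a ~ Hc, the figure quoted above.
Mpc = 3.0857e22            # metres per megaparsec
H = 70e3 / Mpc             # assumed Hubble constant, 70 km/s/Mpc, in s^-1
c = 2.998e8                # speed of light, m/s

a = H * c
print(f"a = Hc ~ {a:.1e} m/s^2")   # ~ 7e-10 m/s^2, as quoted

# The corresponding flat-spacetime age/time scale is t = 1/H:
yr = 3.156e7               # seconds per year
print(f"t = 1/H ~ {1 / H / yr / 1e9:.1f} billion years")
```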

General relativity has other problems. It has few simple solutions due to its complicated calculus, but the few solutions for cosmology are all wrong because they ignore the quantum gravity mechanism implicit in the motion of surrounding matter just explained. Not only that, but the few solutions to general relativity are “landscapes” like epicycles or string theory, which can represent almost any kind of expanding or contracting universes just by varying the amounts of directly unobservable dark energy and dark matter to make the model approximate the observations. In other words, it is easy to fiddle the theory to fit any universe. In this respect, it is not a useful predictive theory which is constrained to one falsifiable prediction. Find an error in the theory? Simply “fix” the theory to do away with the error, by varying some of the adjustable parameters! Similarly, the Standard Model has 19 adjustable parameters and the minimally supersymmetric Standard Model has 125. Plenty of room for “fine tuning” to force the adjustable theory to agree with your data!

6. General relativity led to other problems too. First, the Kaluza-Klein 5-dimensional spacetime theory: put 5 dimensions into general relativity, and you get the equation for the photon of electromagnetism as a bonus. If the extra dimension is rolled up at an unobservably small Planck scale, or whatever is convenient to the theorist, you have put gravitation and electromagnetism together. However, as Lunsford points out, it's not a falsifiable prediction of anything. Einstein and others tried different approaches to unify electromagnetism and gravitation classically, and all failed. I have some sympathy with Lunsford's 6-d approach, because he finds that SO(3,3), which has 3 time dimensions and 3 spatial dimensions instead of extra spatial dimensions, works (predicting zero cosmological constant, i.e. that cosmological acceleration is due to the spin-1 gravitational field, which produces the repulsive effect of "dark energy" between larger masses as well as the gravitational attraction between relatively smaller masses, which are pushed together harder than they repel one another). There is a symmetry here: you measure the age of the universe (time) by the Hubble expansion rate, v = HR, with time t = 1/H for the observed flat spacetime of the universe. If you were to measure the expansion rates in 3 perpendicular directions and found them slightly different (indicating a predictable, slight anisotropy in the expansion rate), you would have 3 different ages of the universe and thus 3 effective time dimensions. In correcting general relativity to fit the real world, you may well need 6 dimensions: 3 uniformly expanding dimensions proportional to space and time, and 3 contractable spatial dimensions representing the sizes of matter and energy fields, which are locally contracted by gravitational fields rather than expanded by them.

The biggest problem is that string theorists have gone further down the Kaluza-Klein road in order to incorporate the nuclear forces, adding more extra spatial dimensions instead of a properly understood approach to time dimensions. So they have 1 time dimension and 9 spatial dimensions in 10-dimensional superstring theory, where 6 spatial dimensions have to be compactified by a Calabi-Yau manifold which can be stabilized in a "landscape" of 10^500 different ways, each corresponding to a different metastable vacuum state, or parallel universe; and 1 time dimension plus 10 spatial dimensions in 11-dimensional supergravity theory, which is the holographic "bulk" upon which 10-dimensional superstring theory floats like the surface membrane or "brane" of a bubble (the surface of a bubble has one dimension less than its volume or "bulk").

Instead of seeing the problems of the 10^500 different metastable vacuum states of the Calabi-Yau manifold required to incorporate particle physics (conformal symmetry), the hardened string theorist insists dogmatically that "we must believe what nature tells us". The whole thing is just ridiculous, because the framework of string theory is motivated by the idea of Fierz and Pauli in 1939 that gravity waves are composed of spin-2 quanta.

Mathematically, spin-2 quanta are, in the reductionist framework (i.e., reductionist in the sense of ignoring the rest of the mass in the universe, instead of being holistic by including effects from the rest of the mass in the universe), the simplest way to model the universal attraction between two particles of matter; but this is physically ignorant of the fact that while electromagnetism can get away with a reductionist approach, quantum gravity can't. In electromagnetism, the electric charges in the surrounding universe are balanced, with as much positive as negative charge (any asymmetry in this would cause attractions of the opposite charges with forces 10^40 times stronger than gravity, and would thus very quickly cancel itself out by matter-antimatter annihilation, with gamma ray emission). Therefore, for some purposes we can ignore the surrounding electromagnetic charges in the universe and do path integrals for electromagnetism by just considering two charges.

We can't extend this to gravity, because mass and energy come with only one sign: all masses fall the same way in a gravitational field. There is no such thing as antimass known. Therefore, gravitational charges (masses, energy particles) don't cancel out in the surrounding universe in the way that electric fields do; their gravity fields instead just add together. So since the mass of the surrounding universe is so large, and since the gravitons we exchange with this mass are converging inward towards us from all radial directions around us (not diverging and losing strength with distance, as occurs from a single point source of radiation), we must include it in the path integral. As Feynman shows by pictures in his 1985 book QED, the path integrals for long-range, weak electromagnetic and gravitational fields are very simple. Looped Feynman diagrams with complicated integrals are only really important at high energy. For low-energy gravitational physics, the small size of the coupling for gravity prevents loops in the vacuum, and we can use simple geometrical methods to do the path integral for many situations.

“Not in Our Lifetimes” – Brian Greene and Shamit Kachru admit string theory is unlikely to make contact with any experimental evidence while we’re alive, but this uncheckable theory continues to take up PhD research funding, which should go to rebuilding fact-based quantum field theory


Above: this diagram is in part copied from an old 2007 post about the Standard Model (which needs updating elsewhere).

The Standard Model is experimentally-based quantum field theory (unlike completely useless string field theory), yet it is slightly confused due to the purely speculative electroweak symmetry-breaking Higgs field, and string theory is based upon the ignorant lie that spin-2 bosons are required for quantum gravity (see the post on Fierz and Pauli's error of the spin-2 graviton, linked here). U(1) hypercharge gives leptons and quarks their measured masses, as explained physically in the post linked here. SU(2) is not just responsible for the 3 massive weak Z and W gauge bosons; it also causes massless versions, the charged massless bosons being responsible for positively and negatively charged electric fields, respectively. They propagate in the vacuum because the magnetic self-inductance (normally infinite for massless charged particles) is completely cancelled out for exchange radiation travelling in two directions, to and from each real charge: the magnetic field vectors of the two components are simply equal and opposite in direction! This mechanism also guarantees an exact equilibrium in exchange, so that the SU(2) Yang-Mills field equation term for charge transfer becomes inoperative: you can't transfer any net charge because of the equilibrium, and this constraint on the SU(2) Yang-Mills equation then automatically cancels the equation down to the form of the regular "Maxwellian" Abelian U(1) field equation, which doesn't include charge transfer by gauge bosons.
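The geometric point about self-inductance cancellation can be illustrated in two lines of vector algebra (my sketch of the stated mechanism, using the standard plane-wave relation that B is proportional to the propagation direction crossed with E):

```python
import numpy as np

# For a plane wave, B is proportional to (propagation direction) x E.
# Exchange radiation flows both ways along z between two charges, with the
# same transverse E polarization for each component:
E = np.array([1.0, 0.0, 0.0])          # E field along x
k_toward = np.array([0.0, 0.0, 1.0])   # component travelling one way
k_return = np.array([0.0, 0.0, -1.0])  # component travelling the other way

B1 = np.cross(k_toward, E)
B2 = np.cross(k_return, E)
print(B1, B2, B1 + B2)   # equal and opposite; net magnetic field is zero
```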

Copy of a post by Dr Peter Woit on the Not Even Wrong blog:

Not In Our Lifetimes

A report from one of last Saturday’s events at the World Science Festival has string theorists Brian Greene and Shamit Kachru admitting that they’d be surprised to see experimental evidence for string theory in their lifetimes:

John Hockenberry, the panel’s moderator, asked Greene if he thought experimental evidence would come during his lifetime.

“I’d be surprised,” said Greene.

“And in your lifetime?” Hockenberry asked Kachru.

“…I’d be surprised,” conceded the young physicist reluctantly.

For more reports about the same panel discussion, see here and here.

In the comments of that post by Dr Woit, commentators mention spin-2 gravitons:

  • Anon2 says:

    Even if you can't get falsifiable predictions from string theory, you could still falsify some of the assumptions it is built upon. E.g. spin-2 gravitons seem to everyone to be logical and necessary, and to require some kind of stringy framework, unlike the spin-1 vector bosons in the Standard Model; but suppose gravity doesn't conform to the expectations of Pauli and Fierz, and isn't spin-2. It looked logical to Ptolemy to model the sun and stars daily orbiting the Earth… things did not turn out to be that simple.

    Like Ptolemy’s epicycles, string theory is an ad hoc mathematical explanation for a widely held prejudice which still hasn’t a shred of experimental evidence behind it. Maybe we need a new Kepler.

  • AdSCFTfan says:

    OK, maybe string theory has not predicted anything definite about the real world yet. But via the AdS/CFT correspondence it has made many precise predictions about some gauge theories at strong coupling, like the N=4 supersymmetric Yang-Mills.

    These predictions are testable. In fact, some of them have been tested by solving for some gauge theory quantities as functions of the coupling and comparing with string theory at strong coupling. It works!

    Maybe this is not enough to satisfy some of you critics, but I find this amazing.

  • Fluffy Eschaton says:

    AdSCFTfan: Woit has been fairly consistent in saying that it’s string unification he thinks hasn’t panned out, not all the mathematical developments associated with string theory.

The interesting thing about the AdS/CFT correspondence conjecture is that it has nothing really going for it, yet the blog comment reproduced above falsely claims that AdS/CFT is a correspondence (wrong, it's a conjectured correspondence) that has been applied to "strong coupling, like the N=4 supersymmetric Yang-Mills" [there is no evidence for strong couplings obeying supersymmetry unification predictions; on the contrary, as Woit points out in his book Not Even Wrong, the existing evidence for the trend of strong couplings observed in experiments rules out supersymmetry predictions, if the error bars are correctly estimated for the data!]. "These predictions are testable. In fact, some of them have been tested by solving for some gauge theory quantities as functions of the coupling and comparing with string theory at strong coupling. It works!" [As mentioned before, Woit shows that the supersymmetric predictions of string theory so far are wrong, according to the estimated error bars on data for strong couplings at the highest observed energies to date.] It is true that AdS/CFT is an approximate way to model strong interactions over a limited range of energies, because AdS (anti-de Sitter space) has a negative cosmological constant. This is a good way to mathematically approximate the strong nuclear force at energies where it causes universal attraction with the force increasing with increasing distance (rather than with the force falling with distance, as occurs for gravity).

  • The strong force is attractive, as is gravity, but it differs from gravity not only in being vastly more powerful, but also in the fact that its coupling, or effective charge strength, gets bigger with increasing distance (like the force increasing as you stretch an elastic band, the model used in the old “hadronic string theory” of the 1960s, which predated the 10/11 dimensional superstring/supergravity theory that culminated in Witten’s work in 1995). The reason for the difference is physically due to the fact that there are 8 strong force gauge bosons (gluons) which are themselves charged and cause an “antiscreening” effect, the opposite of the screening of electromagnetic charge between the IR and UV cutoff energies by vacuum virtual fermion polarization caused by electric fields in QED.

  • The massive negative-signed AdS cosmological constant in AdS/CFT has nothing to do with real spacetime, because it is very unlike the small positive-signed number currently accepted by the mainstream in order to model the observed radial cosmological accelerations of supernovae away from us.

  • As we stated in earlier posts, even the currently accepted small cosmological constant (Lambda) in the Lambda-CDM model of cosmology was not predicted by any relativistic or string theory. The resulting Lambda-CDM model is just a convoluted, epicycle-type model, invented in 1998 by ad hoc downward revision of the value used in Einstein’s steady-state universe error of 1917, which (unlike AdS) had a massive positive cosmological constant, intended to cancel out gravitational attraction between galaxies and thus keep the observed universe stable, neither collapsing nor expanding, indefinitely. (This stability was later disproved: it is an inherently unstable mathematical solution, and such a universe would have no more enduring stability than a pen placed upright on its nib.) Contrary to ignorant, ad hoc, adjustable-parameter general relativity cosmological metrics and string theory ideas about a vacuum energy, spin-1 quantum gravity in 1996 predicted the acceleration, and thus the “effective” cosmological constant, precisely, by showing that the universe must have an acceleration of ~Hc in order for gravity to operate by spin-1 gravitons. In other words, there is no mysterious “dark energy” distinct from the gravitational field: the gravitational field is the “dark energy”, and therefore there is no separate cosmological constant, although we can predict its “effective value” from the spin-1 gravity mechanism.

  • Update: as an antidote to depression about string theory lies being used by ignorant media morons to censor the facts of physics until after you are long dead and buried, I strongly recommend Saturday night foam parties at Ibiza’s Eden nightclub

    Above: Sat 12 June 2010, Ibiza Eden nightclub (two minutes’ walk from my hotel) foam party, photographed at 5:56am local time on Sunday 13th (San Antonio, Ibiza). They gave out free wristband passes for entry at 2am in the town centre, so I didn’t pay a penny. Eden nightclub has only just had its seasonal opening party. Keep your phone and money in a plastic bag in your pocket, and wear swimming-type earplugs to reduce eardrum stress when the volume goes up too high for comfort (you still hear the music fine). Don’t drink any alcohol or water if you just want to practice dancing all night long to keep fit and sensibly chat to girls in the chilling-out room (being drunk doesn’t make you look attractive, while drinking water or soft drinks makes you look soft). Keep fully hydrated during the day before, then drink a full litre of mineral water immediately before you go to the club at 3am (for just over 3 hours of dancing, you will remain fine without another drink). The foam party began in a special, slightly rough-finished circular area (located down from the main dance stage, which is smooth and would be too slippery for foam) around 5:50am and lasted about 20 minutes. It was a water party first (sprays from ceiling sprinklers), and then lots of foam pipes from the ceiling sprayed snow-like, waist-deep soap foam.

    Update (15 June 2010):

    Dr Woit of Columbia University has a new Not Even Wrong blog post up:

    Predictions From David Gross

    Video of David Gross’s talk at the Physics at the LHC 2010 conference is now available. He devotes much of the talk to reviewing predictions he made back in 1993 of what would happen by 2008, and making new predictions for what will happen by 2020.

    … His experimental predictions include a repeat of the 1993 ones (superpartners, new Z-mesons, and the Higgs, although now he only mentions one Higgs), except that he has now given up on even “cloudy” evidence of superstrings showing up at the TeV scale. …

    Relationship between relativity, classical fields and quantum gravity

    Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e. smooth acceleration of an electron by gravitation) with a Feynman diagram for a graviton causing acceleration by hitting an electron. The whole idea of quantum field theory is to remove the calculus of curvature from classical gravitation and replace it with quantized jumps caused by discrete field quanta: graviton interactions. If you believe in the Pauli-Fierz string theory lie, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path-integral formulation of quantum gravity. (Basically, spin-1 gravitons push, while spin-2 gravitons suck.)

    Tom Bethell, senior editor of the American Spectator, has just written an article called “Relativity and relativism” in the Washington Times newspaper criticizing Einstein’s special relativity theory, which we will quote and discuss at the end of this post. First, let’s examine the relationship between relativity, classical fields and quantum gravity.

    Below we give an improved presentation of the simple basic calculation in the earlier blog post linked here. In October 1996, we showed via page 893 of Electronics World that spin-1 quantum gravitons do the job now attributed to “dark energy” in accelerating the universe (the “cosmological constant”) as well as quantum gravity.

    The cosmological repulsion, and consequently the correct cosmological constant, was predicted in 1996, years ahead of first being measured. (Few people had any interest, and despite concern from the editor, Classical and Quantum Gravity’s peer-reviewers would not support publication of any non-string theory predictions on quantum gravity. A fellow Electronics World author, Mike Renardson, kindly wrote to suggest that the 7 × 10^-10 ms^-2 acceleration was too small to detect, yet over large, cosmological-sized distances it proved measurable by Perlmutter and other astronomers two years later, using automated detection of standard-brightness supernovae by new software working off telescope feeds in real time. The measured luminosity indicated distance, while the measured redshift allowed the acceleration of the universe to be determined.)

    Since 1998, more and more data have been collected, and the presence of a repulsive long-range force between masses has been vindicated observationally. The two consequences of spin-1 gravitons are two aspects of the same thing: distant masses are pushed apart, while nearby small masses exchange gravitons less forcefully with one another than with the masses around them, so they get pushed together, like the Casimir force effect.
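    For reference, the standard quantum field theory result for the attractive Casimir pressure between ideal parallel conducting plates separated by a distance a (the measured effect invoked in the analogy above) is:

```latex
\frac{F}{A} = -\,\frac{\pi^{2}\hbar c}{240\,a^{4}}
```

    The negative sign denotes attraction: fewer field modes fit between the plates than exist outside, so the plates are pushed together; the spin-1 gravity argument above applies the same style of bookkeeping to graviton exchange with nearby versus distant masses.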

    Using an extension to the standard “expanding raisin cake” explanation of cosmological expansion, in this spin-1 quantum gravity theory, the gravitons behave like the pressure of the expanding dough. Nearby raisins have less dough pressure between them to push them apart than they have pushing in on them from expanding dough on other sides, so they get pushed closer together, while distant raisins get pushed further apart. There is no separate “dark energy” or cosmological constant; both gravitation and cosmological acceleration are effects from spin-1 quantum gravity (see also the information in an earlier post, The spin-2 graviton mistake of Wolfgang Pauli and Markus Fierz for the mainstream spin-2 errors and the posts here and here for the corrections and links to other information).

    As explained on the About page (which contains errors and needs updating), NASA has published Hubble Space Telescope estimates of the immense amount of receding matter in the universe, and since 1998 Perlmutter’s data on supernova luminosity versus redshift have shown the size of the tiny cosmological acceleration. The relationship in the diagram above therefore predicts gravity quantitatively; alternatively, you can normalize it to Newton’s empirical gravity law, so that it instead predicts the cosmological acceleration of the universe, which it has done since publication in October 1996, long before Perlmutter confirmed the predicted value (both effects are due to spin-1 gravitons).



    At some stage, improvements in the presentation of these diagrams may reach the point where people generally grasp them within the time they spend focussing on them, i.e. where the diagram looks self-evident, obvious and incontrovertible rather than off-putting. One thing that needs to be included is some of the gauge theory mathematics with simple explanations, e.g. some of the material from Feynman’s 1985 book QED which I’ve discussed in earlier posts. I’ve not included in the diagram the cross-section for quantum gravity interactions, the gauge theory of U(1), how it predicts lepton and quark mass patterns, how it replaces the Higgs mechanism for mass and modifies the electroweak theory, etc. Maybe I will have to condense all that down to a single diagram before this is really taken seriously?

    Nobody uses the argument that “off-shell gauge bosons which cause fundamental forces should cause drag, like a gas of on-shell particles, and slow down (as well as heat up) the planets” to deny mainstream quantum field theory accounts of the Casimir effect or the concept of a theory of quantum gravity; but this kind of vacuous “argument”, together with historical attacks on LeSage’s gravity idea, is still levelled against spin-1 gravitons whenever I explain them.

    It’s a bit like false experts (charlatans and crackpots) trying to debunk Darwin’s evolution by saying:

    “Lamarck had the idea of evolution and got the details wrong; now you are coming up with a new version of an old, debunked idea which evades the problems of the old idea. How stupid, pointless, and pathetic!”

    There is no science in such an “objection”. It’s just a statement of pure political-style prejudice, not science. There is no way to respond scientifically to purely political objections which ignore the scientific facts. (If reality turns out to be an old idea with some modifications, then tough cheese to those who believe in string theory. Atoms were an old idea by the ancient Greeks when Dalton revived them two thousand years later. What matters is the new evidence on offer, not how famous the critics of the old evidence for the same idea were. I don’t care how famous critics of LeSage were years ago. Science isn’t about the political standing of a “critic” of an old version of an idea. Science is just about the facts.)

    Another “objection” of the same sort is the aether, discussed in the previous post and in the one linked here. This “objection” says falsely that Heisenberg’s and Schroedinger’s first-quantization quantum mechanics disproved causality and mechanism in the universe, because the uncertainty principle makes the world crazy.

    The answer to that is simply that first-quantization quantum mechanics went out of the window in 1927 when Dirac’s relativistic quantum field theory replaced it. First-quantization is a lie: it treats the Coulomb field binding the electron to the nucleus classically, so the chaos of the motion of the electron has to be falsely introduced by making the electron’s motion intrinsically indeterminate. Second-quantization, as Feynman explains, gets rid of this application of the uncertainty principle because it simply treats the Coulomb field properly as a quantum field, in which field quanta (random discrete interactions) replace the false classical smooth Coulomb field. The electron has a chaotic motion because the quantum electromagnetic field binding it in its orbit of the nucleus is chaotic, as Feynman explains:

    “I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle! … When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [amplitudes for different paths] to predict where an electron is likely to be.”

    – Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-85.

    “… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [they] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

    – Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

    “The quantum collapse [in the multiple universes, non-relativistic, pseudoscientific first-quantization model of quantum mechanics] occurs when we model the wave moving according to Schroedinger time-dependent and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger time-independent. The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

    – Dr Thomas Love, California State University.

    “In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”

    http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

    So much for first-quantization. Like Ptolemy’s epicycles or the Bohr atom, it can be used to make approximate calculations. What is stupid is the way it is taught and popularized as being physically deep, when it is just a non-relativistic, classical-Coulomb-field model. What will it take, really, to get people to give up such pseudoscience and embrace physical reality? Einstein, like Mach (his early but not his later hero), dismissed physical mechanisms for phenomena in the vacuum, and this is still done in quantum field theory even by critics of non-falsifiable string speculation. Why? What physical justification do they have? Fashion. Yes, fashion. Here is the problem of fashion (first-quantization) being mistaken for fact, from the 2010 book by Erik von Markovik and Chris Odom, The Pickup Artist, published by Villard, New York, page 225:

    “From quantum mechanics, I learned that a particle isn’t really in a specific location until it is observed. Until then, it exists as a fuzzy probability cloud. It’s only when a sentient being observes it that it actually collapses into a specific particle at a specific location. Experiments show that it is the act of observation itself that makes the probability collapse and become ‘real’.”

    This is first-quantization, the 1925 non-relativistic quantum mechanics of Heisenberg’s matrices and Schroedinger’s wave equation, promoted with inaccurate “experiments” hype: no experiment free of fiddles like Alain Aspect’s “accidentals subtracted” statistical adjustment shows any such thing, which may explain why there have been few Nobel Prizes awarded for entanglement, quantum computing, string theory, parallel universes, and so on, although of course Heisenberg and Schroedinger got theirs for first-quantization mythology. As Dr Thomas Love of California State University has explained, wavefunction collapse is generated by unnatural mathematical models which don’t consistently represent reality: it is due to a discontinuity between the time-dependent and time-independent wavefunction modelling equations.

    Einstein was thus right to the extent that he dismissed the subjectivist first-quantization approach to quantum mechanics. The shame of Einstein in this context is that Einstein also dismissed the facts: he listened to Feynman’s presentation of path integrals in an informal seminar organized by Wigner at Princeton. Einstein ignored Feynman’s presentation of second quantization, consistent quantum field theory. What you always find is that the few critics of the mainstream heard properly to date, like Bell and Bohm, and maybe Lee Smolin too, have all ended up being obfuscators seeking to introduce ad hoc infinite potentials and other hidden variable or otherwise “crying wolf” ideas which just discredit non-mainstream ideas generally. They don’t seem to grasp the point that Feynman had already sorted out the whole problem. The path integral concept applies to field quanta travelling along all possible paths during their exchange between charges. The Casimir force measurement confirmation demonstrates the reality of this. Sure, if you fire a photon of light at an electron to try to find out its exact position and momentum, you are faced with uncertainty (because the electron is moved by the impact, ending up with a product of uncertainty in position and momentum which is half of the unit of quantum action, h-bar, assuming that you can measure the photon’s properties precisely, which of course you can’t, so the total uncertainty is still greater than half h-bar). But that doesn’t mean that the electron’s position was indeterminate before it was hit by your photon! In other words, just because observing something interferes with it by the impact of the photon of light, that doesn’t prove that the particle was really in an indeterminate state between parallel universes!

    Similarly, a blind man swinging a golf club around in order to detect the position of a golf ball will have uncertainty even when he hears the club hit the ball, because the ball will then have moved and won’t be where it was at the instant of being hit. But the golf ball doesn’t need to be split between two parallel universes before he hits it! Similarly, measuring the potential of a battery will drain it slightly, measuring tyre pressure involves the escape of some air and a fall in pressure, shining a light on a painting fades it slightly. You can’t observe something without interacting in some way, but this doesn’t imply intrinsic chaos. The uncertainty principle has its uses, but as Feynman said, you don’t need it to give rise to wavefunction collapse in quantum mechanics, unless you’re using obsolete, first-quantization, pre-Dirac, flawed mathematical models.

    Returning to the subject of the fashion-conscious pick up book describing first quantization by Erik, he is the star of a VH1 TV programme called (like his latest book) The Pickup Artist. Surprisingly, there are some really deep connections here between making yourself attractive socially and making your quantum gravity theory attractive to people generally. You need to gain social acceptance in both situations. At some stage you have to stop focussing on individuality and start to show some of the not-individual general characteristics required for social acceptance, such as getting papers published in peer-reviewed journals. The trick is to be individual and yet still fit into normal social circles: in other words, there is a high degree of constraint on the sort of personality you are allowed by evolution. The author Leil Lowndes puts it lucidly when she exposes the whole mythology of love with the words: “Evolutionary theorists tell us that, even when considering one-night nookie with a nerd she never wants to see again, a woman subconsciously listens to her genes.” (L. Lowndes, Undercover … signals, Citadel Press, page 126.) Of course, the problem here is that peer-reviewers have prejudice in favour of status quo.

    Although science is supposedly progressive, there is resistance to new ways of thinking, so you need to have a lot of patience, time and energy to deal with the process of trying to respond to pseudoscientific objections presented in a way that tells you that the person making the objections doesn’t even know what science is all about, and just thinks science is playing with existing string theory or some other not-even-wrong speculative framework established sixty years ago which has never led to a single falsifiable prediction. Darwin never engaged in arguments with bigoted “critics”; he just wrote down the evidence and left the mudslinging to others while continuing his investigations. So advice to try to overcome bigotry is wrong: you end up either ignoring the non-scientific “criticisms” or arguing about philosophy; in no circumstances does this go in a fruitful direction. It just sucks in your time and energy, which the peer-reviewers waste on non-scientific matters. So although at some stage journal peer-reviewed publication is inevitable, it isn’t necessarily something suited to this kind of problem. In the context of dating, it reminds me of the silly advice I used to get to waste time in nightclubs, where the music was too loud to enable speech to be heard. It is a waste of time since it prevents any communication at all, just like the peer-reviewers with their lying spin-2 obsessions.

    Physical space, the final frontier of quantum field theory

    Maxwell’s gear-cog and idler-wheel-filled aether was wrong, and so was Kelvin’s vortex-atom aether, so physicists moved away altogether from “mechanisms” in fundamental physics and sought out purely mathematical models. The S-matrix, as described in a previous post, was the supreme expression of the rejection of the search for physical understanding in terms of mechanism. Mainstream efforts on the S-matrix were initially used in the 1950s and 1960s to fight off Feynman’s quantum field theory. But it failed in the long run, like epicycles, and was overtaken by quantum field theory, which gave rise to the Standard Model of particle interactions. However, the S-matrix legacy of abstraction lives on in the fact that, still today, quantum field theory is submerged in the wrong type of mathematics, since the gauge theory is done using differential geometry, which is the application of continuously variable fields to approximate discontinuous (quantized) fields! The result is that off-shell radiations in quantum fields are not taken seriously as physical entities of fundamental importance. Hence the popular misconceptions about the empirically defensible Casimir force which were discussed in the previous post on this blog.

    Really, people should be using Monte Carlo models on computers, with field quanta randomly simulated flying between charges, to model gauge theory properly in space, causing forces by interactions, instead of using the physically false mathematical “approximation” to such quantum fields by the continuously varying curvatures of differential geometry. Calculus is a good approximation for describing the effects of large numbers of quanta, but it leads to problems in understanding individual interactions with physical clarity. The path integral really should be replaced by a path summation. There is no curvature of spacetime in a real quantum field (although there is curvature in the currently used mathematical model of gauge theory); electrons are accelerated not in a continuous, classical manner by an electric or gravitational field, but in a series of discrete steps due to discrete quantum field interactions!
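    As a toy illustration of this “path summation” idea, here is a minimal Monte Carlo sketch, with all numbers being illustrative assumptions rather than real couplings, in which a smooth classical force is replaced by Poisson-distributed discrete momentum kicks; on large scales the two descriptions agree, while for small numbers of quanta the discrete randomness dominates:

```python
import numpy as np

# Toy "path summation": replace a smooth classical force by Poisson-
# distributed discrete momentum kicks from field quanta. All numbers
# are illustrative assumptions, not derived from any real coupling.
rng = np.random.default_rng(1)

F = 1.0e-3              # smooth classical force to reproduce (N)
dp = 1.0e-6             # assumed momentum delivered per field quantum (kg m/s)
rate = F / dp           # implied mean impact rate (quanta per second)
dt, T = 1.0e-3, 10.0    # time step and total duration (s)
steps = int(T / dt)

kicks = rng.poisson(rate * dt, size=steps)  # random number of hits per step
p_quanta = dp * kicks.sum()                 # momentum from discrete kicks
p_smooth = F * T                            # classical momentum, p = F*t

print(f"classical:   {p_smooth:.4e} kg m/s")
print(f"Monte Carlo: {p_quanta:.4e} kg m/s")
# Agreement is within Poisson noise ~1/sqrt(rate*T); with few quanta
# (small spaces, short times) the discrete chaos dominates, which is the
# regime the paragraph above argues differential equations misrepresent.
```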

    One of the most shockingly groupthink-ignored questions in quantum field theory is the influence of motion on the interactions between real particles and the supposedly off-shell fields around them. As explained in the previous post, vacuum polarization shields electromagnetic field energy, which is given to off-shell particles, making some of them approach an on-shell condition when those positive and negative virtual fermions are dragged far apart by electric fields. This gives them energy and affects the survival time of such particles, making them less virtual and more susceptible to the influences on real (on-shell) fermions, such as the Pauli exclusion principle (which allots a particular amount of geometric space to each fermion, pairs up adjacent fermions with opposite spins, determines shell structures, etc.).

    Although bosons don’t obey the exclusion principle due to their integer rather than half-integer spin, neutral Z bosons created by the annihilation of virtual fermions are affected by the locations of those virtual fermions when they annihilated, and thus in polarized virtual fermion fields, the creation of neutral Z bosons can be indirectly affected by the Pauli exclusion principle acting upon the fermions which annihilated to give rise to those bosons. Such Z bosons, having an intrinsic gravitational charge (mass) from an electroweak U(1) quantum graviton gauge theory of spin-1 gravitons, can mire down the motion of particles, thus giving them mass. So we have a model in which different fundamental particles have different possible discrete shell structures of weak field bosons around them, “miring” their motion to different extents as a pseudo-Higgs field, and thereby giving rise to all of the masses we observe for fundamental particles by analogy to quantized atomic electron structures.

    One other idea from the picture of a vacuum field affecting particle motion is the analogy to a gas. A helicopter moves up because its rotor blades blow air down, so Newton’s 3rd law (the equal and opposite “reaction force”) acts upward, offsetting the gravitational force (weight). If you look at the quantum gravity mechanism this blog is about, you see that distant stars are accelerating away from us, and their reaction force is simply graviton radiation emitted in our direction, satisfying Newton’s 3rd law. The whole point of field quanta in quantum field theory is to TRANSMIT fundamental forces through the vacuum, but this is being obfuscated by the approximations used in the gauge theory framework. The Yang-Mills model has been discussed before and is the subject of a paper I’m preparing. It is simply the Maxwell equations with an added term for charge transfer via massive charged field quanta in weak interactions. This whole approach of using classical differential field equations needs to be replaced with a working model based on summing discrete quantum interactions of off-shell field particles. The Yang-Mills model can be retained for many purposes, but it is inherently obfuscating in certain situations (such as small numbers of field-quanta interactions in small spaces and over brief periods of time), a fact which needs to be physically understood as giving rise to cut-offs on running couplings and thus to the need to renormalize gauge theories. This mathematical model problem should not be covered up by handwaving and technical efforts towards symbolic obfuscation. A distant galaxy accelerating away from us can therefore be modelled in just the same way as a rocket accelerating away from us. Instead of exhaust gas, you simply have a net flow of graviton field quanta. Just the same amount of energy is used to accelerate a given mass by the same amount in each situation.

    Lies that pay in social interactions and spin-2 graviton “theory”: how lying groupthink delusion wins out over the facts

    Hitler’s lies in Mein Kampf were an aid to his social success in gaining power in Germany in 1933. No, I’m not saying here that the 1940s Holocaust is analogous to spin-2 Witten deception: I’m talking about the propaganda trick used to secure power for National Socialism in Germany in 1933. Pointing out the facts does not win over the mob. Telling the mob lies which conform to their long-held prejudices does win over the mob. Hitler didn’t succeed by being generally unpopular in 1933. He became unpopular after causing World War II. Thuggery won for a long while because of this kind of groupthink-delusion-encouraged sh*t (see the post linked here for more details):

    It is well established that lying weapons-effects-exaggerating groupthink “pacifism” in Britain was really war-mongering because it directly helped Hitler get into the position to start the Holocaust and World War II without opposition until the very last moment, yet despite helping Hitler these pseudo-peace supporters have won numerous prizes and held the world’s media in awe and rapture, while Churchill was being dismissed as a danger to peace because he didn’t believe in effectively collaborating with evil in order to secure a worthless promise of “peace”.

    Despite this, we live in an age of groupthink where the media and the “moral majority” support lying for nefarious reasons. When you look at the points made in this and previous posts about first and second quantization lies, who do you blame? The top professors? The media? The public generally for believing the lies? There is a lot more to be learned about how the Communists and the Nazis used groupthink delusions to oppose the facts, and often received support abroad by other groups hell-bent on lying to the public. I’ve included a more contemporary example of groupthink delusion in the blog post linked here. The vitally important, suppressed fact about the inhumane monsters is that they have loyal and devoted followers who shield them from the facts, and in acquiring power at least, the general public want to believe in their lies because they see them as fashionable prejudices. Spin-2 gravitons are just the same.

    Why the rank-2 stress-energy tensor of general relativity does not imply a spin-2 graviton

    “If it exists, the graviton must be massless (because the gravitational force has unlimited range) and must have a spin of 2 (because the source of gravity is the stress-energy tensor, which is a second-rank tensor, compared to electromagnetism, the source of which is the four-current, which is a first-rank tensor). To prove the existence of the graviton, physicists must be able to link the particle to the curvature of the space-time continuum and calculate the gravitational force exerted.” – False claim, Wikipedia.

    Previous posts explaining why general relativity requires spin-1 gravitons, and rejects spin-2 gravitons, are linked here, here, here, here, and here. But let’s take the false claim that gravitons must be spin-2 because the stress-energy tensor is rank-2. A rank-1 tensor is a first-order (once differentiated, e.g. da/db) differential summation, such as the divergence operator (the sum of field gradients) or the curl operator (the sum of all of the differences in gradients between field gradients for each pair of mutually orthogonal directions in space). A rank-2 tensor is some defined summation over second-order (twice differentiated, e.g. d^2a/db^2) differential equations. The field equation of general relativity has a different structure from Maxwell’s field equations for electromagnetism: as the Wikipedia quotation above states, Maxwell’s equations of classical electromagnetism are vector calculus (rank-1 tensors, or first-order differential equations), while the tensors of general relativity are second-order differential equations, rank-2 tensors.
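    In standard notation, the two first-order (rank-1) operators just described verbally are:

```latex
\nabla\cdot\mathbf{F}
  = \frac{\partial F_x}{\partial x}
  + \frac{\partial F_y}{\partial y}
  + \frac{\partial F_z}{\partial z},
\qquad
(\nabla\times\mathbf{F})_z
  = \frac{\partial F_y}{\partial x}
  - \frac{\partial F_x}{\partial y}
\quad\text{(and cyclically for the other components)}.
```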

    The lie, however, is that this is physically deep. It’s not. It’s purely a choice of how to express the fields conveniently. For simple electromagnetic fields, where there is no contraction of mass-energy by the field itself, you can do it easily with first-order equations, i.e. gradients. These equations calculate fields as first-order (rank-1) gradients, e.g. electric field strength, which is the gradient of potential with distance, measured in volts/metre. Maxwell’s equations don’t directly represent accelerations (second-order, rank-2 equations would be needed for that). For gravitational fields, you have to work with accelerations, because the gravitational field contracts the source of the gravitational field itself, so gravitation is more complicated than electromagnetism.

    The people who promote the lie that, because rank-1 tensors apply to spin-1 field quanta in electromagnetism, rank-2 tensors must imply spin-2 gravitons, offer no evidence for this assertion. It’s arm-waving lying. It’s true that you need rank-2 tensors in general relativity, but it is not necessary in principle to use rank-1 tensors in electromagnetism: it is merely easiest to use the simplest mathematical method available. You could in principle use rank-2 tensors to rebuild electromagnetism, by modelling the equations on observable accelerations instead of unobservable rank-1 electric and magnetic fields. Nobody has ever seen an electric field: only accelerations and forces caused by charges. (Likewise for magnetic fields.)

    There is no physical correlation between the rank of the tensor and the spin of the gauge boson. It is a purely historical accident that rank-1 tensors (vector calculus, first-order differential equations) are used to model fictitious electric and magnetic “fields”. We don’t directly observe electric field lines or electric charges (nobody has seen the charged core of an electron; what we see are the effects of forces and accelerations, which can merely be described in terms of field lines and charges). We observe accelerations and forces. The field lines and charges are not directly observed. The mathematical framework for describing the relationship between the source of a field and the end result depends on the definition of the end result. In Maxwell’s equations, the end result of an electric charge which is not moving relative to the observer is a first-order field, defined in volts/metre. If you convert this first-order differential field into an observable effect, like force or acceleration, you get a second-order differential equation: acceleration a = d^2x/dt^2. General relativity doesn’t describe gravity in terms of a first-order field like Maxwell’s equations do, but instead describes gravitation in terms of a second-order observable, i.e. space-curvature-produced acceleration, a = d^2x/dt^2.

    So the distinction between rank-1 and rank-2 tensors in electromagnetism and general relativity is not physically deep: it’s a matter of human decisions on how to represent electromagnetism and gravitation.

    In Maxwell’s equations we choose to represent fields not as second-order accelerations but via Michael Faraday’s imaginary concept of a pictorial field: radiating and curving “field lines” represented by first-order field gradients and curls. In Einstein’s general relativity, by contrast, we don’t represent gravity by such a half-baked, unobservable field concept, but in terms of directly observable accelerations.

    Like first-quantization (undergraduate quantum mechanics) lies, the “spin-2” graviton deception is a brilliant example of historical, physically-ignorant mathematical obfuscation in action, leading to groupthink delusions in theoretical physics. (Anyone who criticises the lie is treated with a degree of delusional, paranoid hostility similar to that directed at dissenters under evil dictatorships. Instead of examining the evidence and seeking to correct the problem, which in the case of an evil dictatorship is obviously a big challenge, the messenger is inevitably shot, or the “message” is “peacefully” deleted from the arXiv, reminiscent of the scene from Planet of the Apes where Dr Zaius, serving a dual role as Minister of Science and Chief Defender of the Faith, has to erase the words written in the sand which would undermine his religion and social tea-party of lying beliefs. In this analogy, the censors of the arXiv or of journals like Classical and Quantum Gravity are not defending objective science; they are defending subjective pseudo-science, the groupthink orthodoxy which masquerades as science, from being exposed as a fraud.)

    Dissimilarities in the tensor ranks used to describe two different fields originate from dissimilarities in the field definitions for those two fields, not from the spin of the field quanta. Any gauge field whose field is written as a second-order differential equation, e.g. an acceleration, can be classically approximated by a rank-2 tensor equation. Comparing Maxwell’s equations, in which fields are expressed in terms of first-order gradients like electric fields (volts/metre), with general relativity, in which fields are accelerations or curvatures, is comparing chalk and cheese. They are not just different units; they have different purposes. For a summary of textbook treatments of curvature tensors, see Dr Kevin Aylward’s General Relativity for Teletubbys: “the fundamental point of the Riemann tensor [the Ricci curvature tensor in the field equation of general relativity is simply a cut-down, rank-2 version of the Riemann tensor: R_ab = R^x_axb, where the right-hand side is the contracted Riemann tensor], as far as general relativity is concerned, is that it describes the acceleration of geodesics with respect to one another. … I am led to believe that many people don’t have a … clue what’s going on, although they can apply the formulas in a sleepwalking sense. … The Riemann curvature tensor is what tells one what that acceleration between the [particles] will be. This is expressed by [the geodesic deviation equation, written out in standard notation below].”

    [Beware of some errors in the physical understanding on some of these general relativity internet sites, however. E.g., some suggest, following a popular 1950s book on relativity, that the inverse-square law is discredited by general relativity, because the relativistic motion of Mercury around the sun can be approximated within Newton’s framework by increasing the power in the inverse-square law slightly, from 1/R^2 to 1/R^(2 + x), where x is a small fraction, so that the force appears to get stronger nearer the sun. This is fictitious: it is just an approximation to roughly accommodate relativistic effects that Newton ignored, e.g. the small increase in planetary mass due to its higher velocity when the planet is nearer the sun on part of its elliptical orbit than when it is moving more slowly, far from the sun. It isn’t a physically correct model; it’s just a back-of-the-envelope fudge. A physically correct version of planetary motion in the Newtonian framework would keep the geometric inverse-square law and would then correctly modify the force by making the right changes for the relativistic mass variation with velocity. Ptolemy’s epicycles demonstrated the danger of constructing approximate mathematical models which have no physical validity, which then become fashion.]
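    For readers who want the indices explicit, the contraction mentioned in the quotation, and the geodesic-deviation equation it feeds into, read as follows in standard notation (the overall sign depends on the curvature convention; ξ is the separation vector between neighbouring geodesics and u the four-velocity):

```latex
R_{ab} = R^{c}{}_{acb},
\qquad
\frac{D^{2}\xi^{a}}{D\tau^{2}} = -\,R^{a}{}_{bcd}\,u^{b}\,\xi^{c}\,u^{d}.
```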

    Maxwell’s theory, based on Faraday’s field-lines concept, employs only rank-1 equations; for example, the divergence of the electric field strength, E, is directly proportional to the charge density, ρ (the charge per unit volume): div.E ~ ρ, or in SI units div.E = ρ/ε0. The reason this is a rank-1 equation is simply that the divergence operator is the sum of the gradients in all three perpendicular directions of space for the operand. All it says is that a unit charge contributes a fixed number of diverging radial lines of electric field, so the total field is directly proportional to the total charge.

    But this is just Faraday’s way of visualizing the way the electric force operates! Remember that nobody has yet seen or reported detecting an “electric field line” of force! With our electric meters, iron filings, and compasses we only see the results of forces and accelerations, so the number and locations of electric or magnetic field lines depicted in textbook diagrams is due to purely arbitrary conventions. It’s merely an abstract aetherial legacy from the Faraday-Maxwell era, not a physical reality that has any experimental evidence behind it. If you are going to confuse Faraday’s and Maxwell’s imaginary concept of field “lines” with experimentally defensible reality, you might as well write down an equation in which the invisible intermediary between charge and force is an angel, a UFO, a fairy or an elephant in an imaginary extra dimension. Quantum field theory tells us that there are no physical lines. Instead of Maxwell’s “physical lines of force”, we have known since QED was verified that there are field quanta being exchanged between charges.

    So if we get rid of our ad hoc prejudices, getting rid of “electric field strength, E” in volts/metre, and just express the result of the electric force in terms of what we can actually measure, namely accelerations and forces, we’d have a rank-2 tensor: basically the same field equation as is used in general relativity for gravity. The only differences would be the factor of ~10^40 between the field strengths of electromagnetism and gravity, the differences in the signs of the curvatures (like charges repel in electromagnetism, but attract in gravity), and the absence of the contraction term which makes the gravitational field contract the source of the field, but which supposedly does not exist in electromagnetism. The tensor rank will be 2 in both cases, thus disproving the arm-waving yet popular idea that the rank number is correlated with the field-quanta spin. In other words, the electric field could be modelled by a rank-2 equation if we simply make the electric field consistent with the gravitational field by expressing both fields in terms of accelerations, instead of using the gradient of the Faraday-legacy volts/metre “field strength” for the electric field. This is, however, beyond the understanding of the mainstream, who are deluded by fashion and historical ad hoc conventions. Most of the problems in understanding quantum field theory and unifying Standard Model fields with gravitational fields result from the legacy of the field definitions used in Maxwellian and Yang-Mills fields, which for purely ad hoc historical reasons are different from the field definition in general relativity. If all fields are expressed in the same way, as accelerative curvatures, all field equations become rank-2 and all rank-1 divergences automatically disappear, since they are merely a historical legacy of the Faraday-Maxwell volts/metre field “line” concept, which isn’t consistent with the concept of acceleration due to curvature in general relativity!
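    To make the “express the field as the acceleration it produces” point concrete, here is a two-line sketch (CODATA constants; the 1 volt/metre field strength is an arbitrary illustration):

```python
# The observable produced by an electric "field" is an acceleration, a = qE/m.
e = 1.602176634e-19     # elementary charge (C)
m_e = 9.1093837015e-31  # electron mass (kg)

E = 1.0                 # Faraday-style rank-1 "field strength" (V/m)
a = e * E / m_e         # what is actually measured: acceleration (m/s^2)
print(f"a = {a:.3e} m/s^2 per V/m")  # ~1.759e11 m/s^2
```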

    However, we’re not advocating the use of any particular differential equations for any quantum fields, because discontinuous quantized fields can’t in principle be correctly modelled by differential equations, which is why you can’t properly represent the source of gravity in general relativity as being a set of discontinuities (particles) in space to predict curvature, but must instead use a physically false averaged distribution such as a “perfect fluid” to represent the source of the field. The rank-2 framework of general relativity has relatively few easily obtainable solutions compared to the simpler rank-1 (vector calculus) framework of electrodynamics. But both classical fields are false in ignoring the random field quanta responsible for quantum chaos (see, for instance, the discussion of first-quantization versus second-quantization in the previous post here, here and here).

    Summary:

    1. The electric field is defined, following Michael Faraday, simply as a potential gradient measured in volts/metre, which Maxwell correctly models with first-order differential equations, leading to rank-1 tensor equations (vector calculus). Hence, electromagnetism with spin-1 field quanta has a rank-1 tensor purely because of the way it is formulated. Nobody has ever seen Faraday’s electric field, only accelerations/forces. There is no physical basis for electromagnetism being intrinsically rank-1; it’s just one way to mathematically model it, by describing it in terms of Faraday’s rank-1 fields rather than the directly observable rank-2 accelerations and forces which we see and feel.

    2. The gravitational field has historically never been expressed in terms of a Faraday-type rank-1 field gradient. Due to Newton, who was less pictorial than Faraday, gravity has always been described and modelled directly in terms of the end result, i.e. accelerations/forces we see/feel.

    This difference between the human formulations of the electromagnetic and gravitational “fields” is the sole reason for the fact that the former is currently expressed with a rank-1 tensor and the latter is expressed with a rank-2 tensor. If Newton had worked on electromagnetism instead of aether crackpots like Maxwell, we would undoubtedly have a rank-2 mathematical model of electromagnetism, in which electric fields are expressed not in volts/metre, but directly in terms of rank-2 acceleration (curvatures), just like general relativity.

    Both electromagnetism and gravitation should define fields the same way, with rank-2 curvatures. The discrepancy, that electromagnetism instead uses rank-1 tensors, is due to the inconsistency that in electromagnetism fields are defined not in terms of curvatures (accelerations) but in terms of Faraday’s imaginary abstraction of field lines. This has nothing whatsoever to do with particle spin. Rank-1 tensors are used in Maxwell’s equations because the electromagnetic fields are defined (inconsistently with gravity) in terms of rank-1 unobservable field gradients, whereas rank-2 tensors are used in general relativity purely because the definition of a field in general relativity is an acceleration, which requires a rank-2 tensor to describe it. The difference is purely down to the way the field is defined, not to the spin of the field quanta.

    The physical basis for rank-2 tensors in general relativity

    I’m going to rewrite the paper linked here when time permits.

    Groupthink delusions

    The real reason why gravitons supposedly “must” be spin-2 is due to the mainstream investment of energy and time in worthless string theory, which is designed to permit the existence of spin-2 gravitons. We know this because whenever the errors in spin-2 gravitons are pointed out, they are ignored. These stringy people aren’t interested in physics, just grandiose fashionable speculations, which is the story of Ptolemy’s epicycles, Maxwell’s aether, Kelvin’s vortex atom, Piltdown Man, S-matrices, UFOs, Marxism, fascism, etc. All were very fashionable with bigots in their day, but:

    “… reality must take precedence over public relations, for nature cannot be fooled.” – Richard P. Feynman, Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.


    Above: the mainstream groupthink on the spin of the graviton goes back to Pauli and Fierz’s paper of 1939, which insists that gravity is attractive (that we’re not being pushed down), leading to the requirement that the spin be an even number, not an odd number:

    ‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’

    – Conclusion of the paper by M. Fierz and W. Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, Proc. Roy. Soc. London., v. A173, pp. 211-232 (1939).

    Pauli and Fierz obtained spin-2 by merely assuming, without any evidence, that gravity is attractive rather than repulsive, i.e. they merely assumed that we’re not being pushed down by the convergence of the inward component of graviton exchange with the immense, isotropically distributed masses of the universe around us, which will obviously greatly exceed the repulsion of two nearby masses with relatively small gravitational charges. Pauli and Fierz simply did not know the facts about cosmological repulsion (there was no evidence for it until 1998). The advocacy of spin-2 today is similar to the advocacy of Ptolemy’s mainstream Earth-centred universe from 150 to 1500 A.D., which merely assumed, but then arrogantly claimed the mere assumption to be observational fact, that the Earth was not rotating and that the sun’s apparent daily motion around the Earth was proof that the sun really orbited the Earth daily. There is no evidence for a spin-2 graviton!

    There is evidence for a spin-1 graviton. For example, consider the following report:

    ‘Some physicists speculate that dark energy could be a repulsive gravitational force that only acts over large scales. “There is precedent for such behaviour in a fundamental force,” Wesson says. “The strong nuclear force is attractive at some distances and repulsive at others.”’


    This possibility was ignored by Pauli and Fierz when they first proposed that the quantum of gravitation has spin-2. Spin-1 graviton exchange, by contrast:

    (1) gives cosmological repulsion of large masses, and

    (2) gives a push that appears as LeSage “attraction” for small nearby masses, which only have weak mutual graviton exchange due to their small gravitational charges, and therefore on balance get pushed together by the much larger graviton pressure due to implosive focussing of gravitons converging inwards from the exchange with immense, distant masses (the galaxy clusters isotropically distributed across the sky).


    Above: Perlmutter’s discovery of the acceleration of the universe, based on the redshifts of fixed-energy supernovae, which are triggered as a critical-mass effect when sufficient matter falls into a white dwarf. A Type Ia supernova explosion, always yielding about 4 × 10^28 megatons of TNT equivalent, results from the critical-mass collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, thereby releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavier elements, creating massive amounts of radioactive nuclides, particularly the intensely radioactive nickel-56; about half of all heavier nuclides (including uranium and beyond) are also produced by the ‘R’ (rapid) process of successive neutron captures by fusion products in supernova explosions. Because we can model how much energy is released, using modified computer models of nuclear fusion explosions originally developed by weaponeer Sterling Colgate at Lawrence Livermore National Laboratory to design the early H-bombs, the brightness of the supernova flash tells us how far away the Type Ia supernova is, while the redshift of the flash tells us how fast it is receding from us. That is how the acceleration of the universe was discovered. Note that “tired light” fantasies about redshift are disproved by Professor Edward Wright on the page linked here.
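    A minimal sketch of the standard-candle arithmetic just described, assuming the commonly used peak absolute magnitude of about -19.3 for a Type Ia supernova and a hypothetical measured apparent magnitude (redshift corrections to the distance are ignored here):

```python
# Standard-candle distance from a Type Ia supernova flash (illustrative).
M_abs = -19.3   # assumed peak absolute magnitude of a Type Ia supernova
m_app = 23.0    # hypothetical measured apparent peak magnitude

# Distance modulus relation: m - M = 5*log10(d / parsec) - 5
d_pc = 10 ** ((m_app - M_abs + 5) / 5)
print(f"distance ~ {d_pc:.2e} parsecs ~ {d_pc * 3.26:.2e} light-years")
```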

    You can go to an internet page and see the correct predictions on the linked page here or on the About page. This isn’t based on speculation: cosmological acceleration has been observed since 1998, when CCD telescopes plugged live into computers with supernova-signature-recognition software detected extremely distant supernovae and recorded their redshifts (see the article by the discoverer of cosmological acceleration, Dr Saul Perlmutter, on pages 53-60 of the April 2003 issue of Physics Today, linked here). The outward cosmological acceleration of the 3 × 10^52 kg mass of the 9 × 10^21 observable stars in galaxies observable by the Hubble Space Telescope (page 5 of a NASA report linked here) is approximately a = Hc = 6.9 × 10^-10 ms^-2 (L. Smolin, The Trouble With Physics, Houghton Mifflin, N.Y., 2006, p. 209), giving an immense outward force under Newton’s 2nd law of F = ma = 1.8 × 10^43 Newtons. Newton’s 3rd law gives an equal inward (implosive-type) reaction force, which predicts gravitation quantitatively. What part of this is speculative? Maybe you have some vague notion that scientific laws should not, for some reason, be applied to new situations, or should not be trusted if they make useful predictions which are confirmed experimentally; so maybe you vaguely don’t believe in applying Newton’s second and third laws to masses accelerating at 6.9 × 10^-10 ms^-2! But why not? What part of “fact-based theory” do you have difficulty understanding?
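    The arithmetic in the paragraph above is easy to check directly; in this sketch H ≈ 70 km/s/Mpc is an assumed round value for the Hubble parameter, and the mass is the NASA/Hubble figure quoted in the text:

```python
# Check of a = Hc and F = ma as quoted above (inputs as stated in the text).
c = 2.998e8            # speed of light (m/s)
H = 70e3 / 3.086e22    # assumed Hubble parameter, 70 km/s/Mpc, in 1/s
m = 3.0e52             # NASA/HST estimate of receding observable mass (kg)

a = H * c              # cosmological acceleration
F = m * a              # Newton's 2nd law: outward force
print(f"a = {a:.1e} m/s^2")  # ~6.8e-10, matching the quoted ~6.9e-10
print(f"F = {F:.1e} N")      # ~2.0e43, agreeing with the quoted ~1.8e43 to within rounding
```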

    It is usually by applying facts and laws to new situations that progress is made in science. If you stick to applying known laws to situations they have already been applied to, you’ll be less likely to observe something new than if you try applying them to a situation which nobody has ever applied them to before. We should apply Newton’s laws to the accelerating cosmos and then focus on the immense forces and what they tell us about graviton exchange.

    The theory makes accurate predictions, well within experimental error, and is also fact-based, unlike all other theories of quantum gravity, especially the 10^500 universes of string theory’s landscape.


    Above: The mainstream 2-dimensional ‘rubber sheet’ interpretation of general relativity says that mass-energy ‘indents’ spacetime, which responds like placing two heavy large balls on a mattress, which distorts more between the balls (where the distortions add up) than on the opposite sides. Hence the balls are pushed together: ‘Matter tells space how to curve, and space tells matter how to move’ (Professor John A. Wheeler). This illustrates how the mainstream (albeit arm-waving) explanation of general relativity is actually a theory that gravity is produced by space-time distorting to physically push objects together, not to pull them! (When this is pointed out to mainstream crackpot physicists, they naturally freak out and become angry, saying it is just a pointless analogy. But when the checkable predictions of the mechanism are explained, they may perform their always-entertaining “hear no evil, see no evil, speak no evil” act.)


    Above: LeSage’s own illustration of quantum gravity, from 1758. Like Lamarck’s evolution theory of 1809 (the one in which characteristics acquired during life are somehow supposed to be passed on genetically, rather than Darwin’s evolution in which genetic change occurs due to the inability of inferior individuals to pass on genes), LeSage’s theory was full of errors and is still derided today. The basic concept, that mass is composed of fundamental particles with gravity due to a quantum field of gravitons exchanged between these fundamental particles of mass, is now a frontier of quantum field theory research. What is interesting is that quantum gravity theorists today don’t use the arguments used to “debunk” LeSage: they don’t argue that quantum gravity is impossible because gravitons in the vacuum would “slow down the planets by causing drag”. They recognise that gravitons are not real particles: they don’t obey the energy-momentum relationship or mass shell that applies to particles of, say, a gas or other fluid. Gravitons are thus off-shell or “virtual” radiations, which cause accelerative forces but don’t cause continuous gas-type drag or the heating that occurs when objects move rapidly in a real fluid. While quantum gravity theorists realize that particle (graviton) mediated gravity is possible, LeSage’s mechanism of quantum gravity is still as derided today as Lamarck’s theory of evolution. Another analogy is the succession from Aristarchus of Samos, who first proposed the solar system in 250 B.C. against the mainstream Earth-centred universe, to Copernicus’ inaccurate solar system (circular orbits and epicycles) of 1500 A.D., and then to Kepler’s elliptical-orbit solar system of 1609 A.D. Is there any point in insisting that Aristarchus was the original discoverer of the theory, when he failed to come up with a detailed, convincing and accurate version? Similarly, Darwin rather than Lamarck is credited with the theory of evolution, because he made the theory useful and thus scientific.

    If someone fails to come up with a detailed, accurate and successfully convincing theory, and merely gets the basic idea right without being able to prove it against the mainstream fashions and groupthink, then the history of science shows that the person is not credited with a big discovery: science is not merely guesswork. Maxwell based his completion of the theory of classical electrodynamics upon an ethereal displacement current of virtual charges in the vacuum, in order to correct Ampere’s law for the case of open circuits such as capacitors using the permittivity of free space (a vacuum) for the dielectric. Maxwell believed, by analogy to the situation of moving ions in a fluid during electrolysis, that current appears to flow through the vacuum between capacitor plates while the capacitor charges and discharges; although in fact the real current just spreads along the plates, and electromagnetic induction (rather than ethereal vacuum currents) produces the current on the opposite plate.

    Maxwell nevertheless suggested (in an Encyclopedia Britannica article) an experiment to test whether light is carried at an absolute velocity by a mechanical spacetime fabric. After the Michelson-Morley experiment was done in 1887 to test Maxwell’s conjecture, it was clear that no absolute motion was detectable: suggesting (1) that motion appears relative, not absolute, and (2) that light always appears to go at the same velocity in the vacuum. In 1889, FitzGerald published an explanation of these “relativity” results in Science: he argued that the physical vacuum contracted moving masses like the Michelson-Morley experiment, by analogy to the contraction of anything moving in a fluid due to the force from the head-on fluid pressure (wind drag, or hydrodynamic resistance). This fluid-space based explanation predicted quantitatively the relativistic contraction law, and Lorentz showed that since mass depends inversely on the classical radius of the electron, it predicted a mass increase with velocity. Given the equivalence of space and time via the velocity of light, Lorentz showed that the contraction predicted time-dilation due to motion.

    Above: In Science in 1889, FitzGerald used the Michelson-Morley result to argue that moving objects at velocity v must contract in length in the direction of their motion by the factor (1 − v^2/c^2)^{1/2}, so that there is no difference in the travel times of light moving along two perpendicular paths. Groupthink crackpots claim that if the lengths of the arms of the instrument are different, FitzGerald’s argument for absolute motion is destroyed, since the travel times no longer cancel out. Actually, the arms of the Michelson-Morley instrument can never be the same length to within the accuracy of the relative times implied by interference fringes! The instrument does not measure the absolute times taken in two different directions: it merely determines whether there is a difference in the relative times (which are always slightly different, since the arms can’t be machined to perfectly identical length) when the instrument is rotated by 90 degrees. Another groupthink crackpot argument is that although the FitzGerald theory predicts relativity from length contraction in an absolute motion universe, other special relativity results like time dilation, mass increase, and E = mc^2 can only be obtained from Einstein. Actually, all were obtained by Lorentz and Poincare: Lorentz showed that evidence for spacetime from electromagnetism implies that apparent time dilates like distance when a clock moves, while he argued that since the classical electromagnetic electron radius is inversely proportional to its mass, its mass should thus increase with velocity by a factor equal to the reciprocal of the FitzGerald contraction factor. Likewise, a force F = d(mv)/dt acting on a body moving distance dx imparts kinetic energy dE = F.dx = d(mv).dx/dt = v.d(mv) = v^2 dm + mv dv. Comparison of this purely Newtonian result with the derivative of Lorentz’s relativistic mass increase formula m_v = m_0(1 − v^2/c^2)^{−1/2} gives us dm = dE/c^2, or E = mc^2. (See for example Dr Glasstone’s Sourcebook on Atomic Energy, 3rd ed., 1967.)
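    As a quick cross-check of that last step (an illustrative sketch of my own, not from the Sourcebook), sympy confirms that dE = v.d(mv), combined with Lorentz’s mass-velocity formula, gives dm = dE/c^2 identically:

```python
# Symbolic check that v*d(mv)/dv = c^2 * dm/dv for m = m0*(1 - v^2/c^2)^(-1/2),
# which is the dm = dE/c^2 (i.e. E = mc^2) comparison made in the text above.
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)
m = m0 / sp.sqrt(1 - v**2 / c**2)   # Lorentz's relativistic mass increase
p = m * v                           # momentum mv

dE_dv = v * sp.diff(p, v)           # from dE = v.d(mv)
print(sp.simplify(dE_dv - c**2 * sp.diff(m, v)))   # prints 0, i.e. dE = c^2 dm
```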

    Carlos Barceló and Gil Jannes, ‘A Real Lorentz-FitzGerald Contraction’, Foundations of Physics, Volume 38, Number 2 / February, 2008, pp. 191-199 (PDF file: http://digital.csic.es/bitstream/10261/3425/3/0705.4652v2.pdf):

    “Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity. …

    “… Remarkably, all of relativity (at least, all of special relativity) could be taught as an effective theory by using only Newtonian language. …In a way, the model we are discussing here could be seen as a variant of the old ether model. At the end of the 19th century, the ether assumption was so entrenched in the physical community that, even in the light of the null result of the Michelson-Morley experiment, nobody thought immediately about discarding it. Until the acceptance of special relativity, the best candidate to explain this null result was the Lorentz-FitzGerald contraction hypothesis. … we consider our model of a relativistic world in a fishbowl, itself immersed in a Newtonian external world, as a source of reflection, as a Gedankenmodel. By no means are we suggesting that there is a world beyond our relativistic world describable in all its facets in Newtonian terms. Coming back to the contraction hypothesis of Lorentz and FitzGerald, it is generally considered to be ad hoc. However, this might have more to do with the caution of the authors, who themselves presented it as a hypothesis, than with the naturalness or not of the assumption. … The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”

    In 1905, Einstein took the two implications of the Michelson-Morley research (that motion appears relative not absolute, and that the observed velocity of light in the vacuum is always constant) and used them as postulates to derive the FitzGerald-Lorentz transformation and Poincare mass-energy equivalence. Einstein’s analysis was preferred by Machian philosophers because it was purely mathematical and did not seek to explain the principle of relativity and constancy of the velocity of light in the vacuum by invoking a physical contraction of instruments. Einstein postulated relativity; FitzGerald explained it. Both predicted a similar contraction quantitatively. Similarly, Newton’s theory of gravitation is the combination of Galileo’s principle that dropped masses all accelerate at the same rate (due to the constancy of the Earth’s mass) with Kepler’s laws of planetary motion. Newton postulated his universal gravitational law based on this evidence plus the guess that the gravitational force is directly proportional to the mass producing it, and he checked it by the Moon’s centripetal acceleration; LeSage tried to explain what Newton had postulated and checked.

    The previous post links to Peter Woit’s earlier article about string theorist Erik Verlinde’s arXiv preprint On the Origin of Gravity and the Laws of Newton, which claims: “Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies.” String theorist Verlinde derives Newton’s laws and other results using only “high-school mathematics” (which brings contempt from mathematical physicist Woit, probably one of the areas of agreement he has with string theorist Jacques Distler), i.e. no tensors, and he derives the Newtonian weak field approximation for gravity, not the relativistic Einsteinian gravity law which also includes contraction. This contraction is physically real but small for weak gravitational fields and non-relativistic velocities: Feynman famously calculated in his published Lectures on Physics that the contraction term in Einstein’s field equation contracts the Earth’s radius by MG/(3c^2) ≈ 1.5 mm. Consider two ways to predict contraction using Einstein’s equivalence principle.
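    Feynman’s 1.5 mm figure is easy to reproduce numerically (a trivial check I am adding here, using standard values for the constants):

```python
# Earth's radial contraction term MG/(3c^2) from Feynman's Lectures on Physics.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s

print(M * G / (3 * c**2))   # ~1.48e-3 m, i.e. about 1.5 mm
```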

    First, Einstein’s way. Einstein began by expressing Newton’s law of gravity in tensor field calculus which allows gravity to be represented by non-Euclidean geometry, incorporating the equivalence of inertial and gravitational mass: Einstein started with a false hypothesis that the curvature of spacetime (represented with the Ricci tensor) which causes acceleration (“curvature” is literally the curve of a line on a graph of distance versus time, i.e. it implies acceleration) simply equals the source of gravity (the stress-energy tensor, since in Einstein’s earlier special relativity, mass and energy are equivalent, albeit via the well-known very large conversion factor, c2). (Non-Euclidean geometry wasn’t Einstein’s innovation; it was studied by Riemann and Minkowski, while Ricci and Levi-Civita pioneered tensors to generalize vector calculus to any number of dimensions.)

    Einstein in 1915 found that this simple equivalence was wrong: the Ricci curvature tensor could not be equivalent to the stress-energy tensor because the divergence (the sum of gradients in all spatial dimensions) of the stress-energy tensor is not zero. Unless this divergence is zero, mass-energy will not be conserved. So Einstein used Bianchi’s identity to alter the source of gravity, subtracting from the stress-energy tensor, T_ab, half the product of the metric tensor, g_ab, and the trace of the stress-energy tensor, T (the trace of a tensor is simply the sum of the top-left to bottom-right diagonal elements of that tensor, i.e. energy density plus pressure terms, or trace T = T_00 + T_11 + T_22 + T_33), because this combination: (1) does have zero divergence and thereby satisfies the conservation of mass-energy, and (2) reduces to the stress-energy tensor for weak fields, thereby correctly corresponding to Newtonian gravity in the weak field limit. This is how Einstein found that the Ricci tensor R_ab = T_ab − (1/2)g_ab T, which is exactly equivalent to the oft-quoted Einstein equation R_ab − (1/2)g_ab R = T_ab, where R is the trace of the Ricci tensor (R = R_00 + R_11 + R_22 + R_33).
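    For readers who want to see that the two forms are the same statement, here is a minimal numeric sanity check (my addition; it uses the Minkowski metric as a simple example, and takes traces with the inverse metric, which in an orthonormal frame reduces to the diagonal sum quoted above):

```python
# Check numerically that R_ab = T_ab - (1/2) g_ab T  implies  R_ab - (1/2) g_ab R = T_ab.
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])        # example metric (Minkowski)
g_inv = np.linalg.inv(g)
T = rng.normal(size=(4, 4)); T = T + T.T  # arbitrary symmetric stress-energy tensor

T_trace = np.trace(g_inv @ T)             # T = g^{ab} T_ab
R = T - 0.5 * g * T_trace                 # Einstein's corrected ansatz
R_trace = np.trace(g_inv @ R)             # equals -T_trace in 4 dimensions

print(np.allclose(R - 0.5 * g * R_trace, T))   # True
```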

    Secondly, Feynman’s way. A more physically intuitive explanation of the modification of Newton’s gravitational law implied by Einstein’s field equation of general relativity is to examine Feynman’s curvature result: space-time is non-Euclidean in the sense that the gravitational field contracts the Earth’s radius by (1/3)MG/c^2, or about 1.5 mm. This is unaccompanied by a transverse contraction, i.e. the Earth’s circumference is unaffected. To mathematically keep “Pi” a constant, therefore, you need to invoke an extra dimension, so that the n − 1 = 3 spatial dimensions we experience are, in string theory terminology, a (mem)brane on an n = 4 dimensional bulk of spacetime. Similarly, if you draw a 2-dimensional circle upon the surface of the interior of a sphere, you will obtain Pi from the circle only by drawing a straight line through the 3-d bulk of the volume (i.e. a line that does not follow the 2-dimensional curved surface or “brane” of the sphere upon which the circle is supposed to exist). If you measure the diameter upon the curved surface, it will be different, so Pi will appear to vary.

    A simple physical mechanism of Feynman’s (1/3)MG/c^2 excess radius for a symmetric, spherical mass M is that the gravitational field quanta compress a mass radially when being exchanged with distant masses in the universe: the exchange of gravitons pushes against masses. By Einstein’s principle of the equivalence of inertial and gravitational mass, the cause of this excess radius is exactly the same as the cause of the FitzGerald-Lorentz contraction of moving bodies in the direction of their motion, first suggested in Science in 1889 by FitzGerald. FitzGerald explained the apparent constancy of the velocity of light regardless of the relative motion of the observer (indicated by the null-result of the Michelson-Morley experiment of 1887) as the physical effect of the gravitational field. In the fluid analogy to the gravitational field, if you accelerate an underwater submarine, there is a head-on pressure from the inertial resistance of the water which it is colliding with, which causes it to contract slightly in the direction it is going in. This head-on or “dynamic” pressure is equal to half the product of the density of the water and the square of the velocity of the submarine. In addition to this “dynamic” pressure, there is a “static” water pressure acting in all directions, which compresses the submarine slightly in all directions, even if the submarine is not moving. In this analogy, the FitzGerald-Lorentz contraction is the “dynamic” pressure effect of the graviton field, while the Feynman excess radius or radial contraction of masses is the “static” pressure effect of the graviton field. Einstein’s special relativity postulates (1) relativity of motion and (2) constancy of c, and derives the FitzGerald-Lorentz transformation and mass-energy equivalence from these postulates; by contrast, FitzGerald and Lorentz sought to physically explain the mechanism of relativity by postulating contraction. To contrast this difference:

    (1) Einstein: postulated relativity and produced contraction.
    (2) Lorentz and FitzGerald: postulated contraction to produce “apparent” observed Michelson-Morley relativity as just an instrument contraction effect within an absolute motion universe.

    These two relativistic contractions, the contraction of relativistically moving inertial masses and the contraction of radial space around a gravitating mass, are simply related under Einstein’s principle of the equivalence of inertial and gravitational masses, since Einstein’s other equivalence (that between mass and energy) then applies to both inertial and gravitational masses. In other words, the equivalence of inertial and gravitational mass implies an effective energy equivalence for each of these masses. The FitzGerald-Lorentz contraction factor [1 − (v/c)^2]^{1/2} contains velocity v, which comes from the kinetic energy of the moving object. By analogy, when we consider a mass m at rest in a gravitational field from another much larger mass M (like a person standing on the Earth), it has acquired gravitational potential energy E = mMG/R, equivalent to a kinetic energy of E = (1/2)mv^2, so by Einstein’s equivalence principle of inertial and gravitational field energy it can be considered to have an “effective” velocity of v = (2GM/R)^{1/2}. Inserting this velocity into the FitzGerald-Lorentz contraction factor [1 − (v/c)^2]^{1/2} gives [1 − 2GM/(Rc^2)]^{1/2} which, when expanded by the binomial expansion to the first couple of terms as a good approximation, yields 1 − GM/(Rc^2). This result assumes that all of the contraction occurs in one spatial dimension only, which is true for the FitzGerald-Lorentz contraction (where a moving mass is only contracted in the direction of motion, not in the two other spatial dimensions it has), but is not true for radial gravitational contraction around a static spherical, uniform mass, which operates equally in all 3 spatial dimensions. Therefore, the contraction in any one of the three dimensions is by the factor 1 − (1/3)GM/(Rc^2). Hence, when gravitational contraction is included, radius R becomes R[1 − (1/3)GM/(Rc^2)] = R − GM/(3c^2), a radial shrinkage of GM/(3c^2), which is the result Feynman produced in his Lectures on Physics from Einstein’s field equation.
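    The binomial step above can be verified symbolically (an illustrative sketch, my addition):

```python
# Expanding the gravitational contraction factor [1 - 2GM/(Rc^2)]^(1/2) to first
# order in the small parameter x = GM/(R c^2), as done in the text above.
import sympy as sp

x = sp.symbols('x', positive=True)              # x stands for GM/(R c^2)
print(sp.series(sp.sqrt(1 - 2 * x), x, 0, 2))   # 1 - x + O(x**2), i.e. 1 - GM/(R c^2)
```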

    The point we’re making here is that general relativity isn’t mysterious unless you want to ignore the physical effects of energy conservation and the associated contraction, which produce its departures from Newtonian physics. Physically understanding the mechanism for how general relativity differs from Newtonian physics therefore immediately takes you to the facts of how the quantum gravitational field physically distorts static and moving masses, leading to checkable predictions which you cannot make with general relativity alone. It is therefore helpful if you want to understand physically how quantum gravity must operate in order to be consistent with general relativity within its domain of validity. Obviously general relativity breaks down outside that domain, which is why we need quantum gravity; but within the limits of validity of that classical domain, the two theories are consistent. The reason why quantum gravity of the LeSage sort needs to be fully reconciled with general relativity in this way is that one objection to LeSage came from Laplace, who, writing long before FitzGerald and Einstein, ignored the gravitational and motional contraction mechanisms of quantum gravity and tried to use this ignorance to debunk LeSage by arguing that orbital aberration would occur in LeSage’s model due to the finite speed of the gravitons. This objection does not apply to general relativity, due to the contractions incorporated into that theory by Einstein; similarly, Laplace’s objection does not apply to a quantum gravity which inherently includes the contractions as physical results of quantum gravity acting upon moving masses.

    In the past, however, FitzGerald’s physical contraction of moving masses, as miring by fluid pressure, has been controversial in physics, and Einstein tried to dispose of the fluid. The problem with the fluid was investigated by critics of Fatio and LeSage, who promoted a shadowing theory of gravity, whereby masses get pushed together by mutually shielding one another from the pressure of the fluid in space. These critics included some of the greatest classical physicists the world has ever known: Newton (Fatio’s friend), Maxwell and Kelvin. Feynman also reviewed the major objection, drag, to the fluid in his broadcast lectures on the Character of Physical Law. The criticism of the fluid is that the force it needs to exert to produce gravity would classically be expected to cause fast moving objects in the vacuum

    (1) to heat up until they glow red hot or ablate at immense temperature,

    (2) to slow down and (in the case of planets) thus spiral into the sun,

    (3) while the fluid would diffuse in all directions and on large distance scales fill in the “shadows” like a gas, preventing the shadowing mechanism from working (this doesn’t apply to gravitons exchanged between masses, for although they will take all possible paths in a path integral, the resultant, effective graviton motion for force delivery will along the path of least action, due to the cancellation of the amplitudes of paths which interfere off the path of least action: see Feynman’s 1985 book QED),

    (4) the mechanism would not give a force proportional to mass if the fundamental particles have a large gravitational interaction cross-sectional area, which would mean that in a big mass some of the shadows would “overlap” one another, so the net force of gravity from a big mass would be less than directly proportional to the mass, i.e. it would increase not in simple proportion to M but instead statistically in proportion to 1 − e^{−bM}, where b is a gravity cross-section and geometry-dependent coefficient, which allows for the probability of overlap (see the numeric sketch after this list). This 1 − e^{−bM} formula has two asymptotic limits:

    (a) for small masses and small cross-sections, bM is much smaller than 1, so e^{−bM} ~ 1 − bM, giving 1 − e^{−bM} ~ bM. I.e., for small masses and small cross-sections, the theory agrees with observations (there is no significant overlap).

    (b) for larger masses and large cross-sections, bM might be much larger than 1, so e^{−bM} ~ 0, giving 1 − e^{−bM} ~ 1. I.e., for large masses and large cross-sections, the overlap of shadows would prevent any increase in the mass of a body from increasing the resultant gravitational force: once gravitons are stopped, they can’t be stopped again by another mass.

    This overlap problem is not applicable to the solar system or many other situations because b is insignificant, owing to the small graviton scattering cross-section of a fundamental particle of mass: the total inward force is trillions upon trillions of times higher than the objectors believed possible, since the force is simply determined by Newton’s 2nd and 3rd laws to be the product of the cosmological acceleration and the mass of the accelerating universe, about 1.8 × 10^43 newtons, and the cross-section for shielding is the black hole event horizon area, which is so small that overlap is insignificant in the solar system and other tests of Newton’s weak field limit.

    (5) the LeSage mechanism suggested that the gravitons which cause gravity would be slowed down by the energy loss when imparting a push to a mass, so that they would not be travelling at the velocity of light, contrary to what is known about the velocity of gravitational fields. However this objection is false, and arises from the real (rather than virtual, “off-shell”) radiation that LeSage assumed. The radiation goes at light velocity and merely shifts in frequency due to energy loss. For static situations, where no acceleration is produced (e.g. an apple hanging stationary on a tree), the graviton exchange results in no energy change; it is a perfectly elastic scattering interaction. No energy is lost from the gravitons, and no kinetic energy is gained by the apple. Where the apple is accelerated, the kinetic energy it gains is that lost due to a shift to lower energy (longer wavelength) of the “reflected” or scattered gravitons. Notice that Dr Lubos Motl has objected to me by falsely claiming that virtual particles don’t appear to have wavelengths; on the contrary, the empirically confirmed Casimir effect is due to the inability of virtual photons of wavelength longer than the distance between two metal plates to exist between the plates and produce pressure there (so the plates are pushed together by the complete spectrum of virtual photon wavelengths in the vacuum surrounding the plates, which is stronger than the cut-off spectrum between the plates). Like the reflection of light by a mirror, the process consists of the absorption of a particle followed by the emission of a new particle.
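    Here is the numeric sketch promised under point (4) above; the value of b is arbitrary and purely illustrative:

```python
# The overlap formula 1 - exp(-bM): linear in M for bM << 1 (force proportional
# to mass), saturating at 1 for bM >> 1 (extra mass adds no extra gravity).
import math

b = 1e-3   # illustrative cross-section/geometry coefficient only

for M in [1.0, 10.0, 100.0, 1e4, 1e6]:
    print(f"M = {M:>9.0f}   1 - exp(-bM) = {1 - math.exp(-b*M):.6f}   bM = {b*M:.6f}")
# For small bM the two right-hand columns agree; for large bM the shadow saturates at 1.
```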

    However, quantum field theory, which has been very precisely tested for electrodynamics, resurrects a quantum fluid or field in space which consists of gauge boson radiation, i.e. virtual (off-shell) radiation which carries “borrowed” or off-mass shell energy, not real energy. It doesn’t obey the relationship between energy and momentum that applies to real radiation. This is why the radiation can exert pressure without causing objects to heat up or to slow down: they merely accelerate or distort instead.

    The virtual radiation is not like a regular fluid. It carries potential energy that can be used to accelerate and contract objects, but it cannot directly heat them or cause continuous drag to non-accelerating objects by carrying away their momentum in a series of impacts in the way that gas or water molecules cause continuous drag on non-accelerating objects. There is only resistance to accelerations (i.e., inertia and momentum) because of these limitations on the energy exchanges possible with the virtual (off-shell) radiations in the vacuum.
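    To put a number on the Casimir effect cited under point (5), here is a minimal numeric sketch (my addition; this is simply the textbook parallel-plate formula, P = pi^2 h-bar c/(240 a^4), not a result of the gravity mechanism):

```python
# Casimir pressure between parallel plates separated by a = 1 micron.
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
a = 1e-6            # plate separation, m

print(math.pi**2 * hbar * c / (240 * a**4))   # ~1.3e-3 Pa; scales as 1/a^4
```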

    In a new blog post, Dr Woit quotes a New Scientist article about Erik Verlinde’s “entropic gravity”:

    “Now we could be closing in on an explanation of where gravity comes from: it might be an emergent property of the way objects are organised, much as fluidity arises as a property of water…. This idea might seem exotic now, but to kids of the future it might be as familiar as apples.”

    Like Woit, I don’t see much hope in Verlinde’s entropic gravity, since it doesn’t make falsifiable predictions, just ad hoc ones; but the idea that gravity is an “emergent property of the way objects are organised, much as fluidity arises as a property of water” is correct: gravity is predicted accurately from the shadowing of the implosive pressure from gravitons exchanged with the other masses around us. At best, mainstream quantum gravity theories such as string theory and loop quantum gravity are merely compatible with a massless spin-2 excitation, and thus are wrong, ad hoc theories of quantum gravity, founded on error and failing to make any quantitative, falsifiable predictions.

    “Gerard ‘t Hooft expresses pleasure at seeing a string theorist talking about “real physical concepts like mass and force, not just fancy abstract mathematics”. According to the article, the problem with Einstein’s General Relativity is that its “laws are only mathematical descriptions.” I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.”

    So Dr Woit has finally flipped, giving up on precise mathematical expressions and coming round to the “much better” vague and mysterious ideas of the mainstream string theorists. Well, I think that’s sad, but I suppose it can’t be helped. Newton in 1692 scribbled in his own printed copy of his Principia that Fatio’s 1690 gravity mechanism was “the unique hypothesis by which gravity can be explained”, although Newton did not publish any statement of his interest in the gravitational mechanism (just as he kept his alchemical and religious studies secret).

    Update:

    “I think you’re being a bit harsh when you say:

    I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.

    “No-one is suggesting the existing mathematical models should be abandoned. The point being made is that the entropic approach may give us some physical insight into those mathematical models.”

    This is a valid point: finding a way to make predictions with quantum gravity doesn’t mean “abandoning” general relativity, but supplementing it by giving additional physical insight and making quantitative, falsifiable predictions. Although Professor Halton Arp (of the Max-Planck Institut fuer Astrophysik) promotes heretical quasar redshift objections to the big bang which are false, he does make one important theoretical point in his paper The observational impetus for Le Sage Gravity:

    ‘The first insight came when I realized that the Friedmann solution of 1922 was based on the assumption that the masses of elementary particles were always and forever constant, m = const. He had made an approximation in a differential equation and then solved it. This is an error in mathematical procedure. What Narlikar had done was solve the equations for m= f(x,t). This a more general solution [to general relativity], what Tom Phipps calls a covering theory. Then if it is decided from observations that m can be set constant (e.g. locally) the solution can be used for this special case. What the Friedmann, and following Big Bang evangelists did, was succumb to the typical conceit of humans that the whole of the universe was just like themselves.’

    The remainder of his paper is speculative, non-falsifiable or simply wrong, and Arp is totally wrong in dismissing the big bang since his quasar “evidence” has empirically been shown to be completely bogus, while it has also been shown that the redshift evidence definitely does require expansion, since other “explanations” fail. But Arp is right in arguing that the Friedmann et al. solutions to general relativity for cosmological models are all based on the implicit assumption that the source of gravity is not an “emergent” effect of the motion of masses in the surrounding universe. The Lambda-CDM model based on general relativity is typical of the problem, since it can be fitted in ad hoc fashion to virtually any kind of universe by adjusting the values of the dark energy and dark matter parameters to force the theory to fit the observations from cosmology (the opposite of science, which is to make falsifiable predictions and then to check those predictions). That’s a religion based on groupthink politics, not facts.

    Update

    Copy of comment to:

    http://scienceblogs.com/builtonfacts/2010/02/failing_at_gravity.php

    “But there’s problems, too. There ought to be “air resistance” from the particles as the planets move through space. Then there’s the fact that the force is proportional to surface area hit by the particles, not to the mass. This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light. And in that case the heating due to the “air resistance” of the particles would be impossibly high. Furthermore, if the particle shadows of two planets overlapped, the sun’s gravity on the farther planet should be shielded. No such effect has been observed.

    “For these and other reasons Fatio’s theory had to be rejected as unworkable.”

    Wikipedia is a bit unreliable on this subject: Fatio assumed on-shell (“real”) particles, not a quantum field of off-shell virtual gauge bosons. The exchange of gravitons between masses in the universe would cause the heating, drag, etc., regardless of spin if the radiation were real. So it would dismiss spin-2 gravitons of attraction, since they’d have to be everywhere in the universe between masses, just like Fatio’s particles. But in fact the objections don’t apply to gauge boson radiations since they’re off-shell. Fatio didn’t know about relativity or quantum field theory.

    Thanks anyway, your post is pretty funny and could be spoofed by writing a fictitious attack on “evolution” that ignores Darwin’s work and just points out errors in Lamarck’s theory of evolution (which was wrong)…

    “This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light.”

    Or just increasing the flux of spin-1 gravitons when you decrease the cross-section …

    Pauli’s role in predicting the neutrino by applying energy conservation to beta decay (against Bohr, who falsely claimed that the energy conservation anomaly in beta decay was proof that indeterminacy applies to energy conservation, which could be violated to explain the anomaly without having to predict a neutrino to take away the energy), and in declaring Heisenberg’s vacuous (unpredictive) unified field theory “not even wrong”, is well known, thanks to Peter Woit. There is a nice anecdote about Markus Fierz, Pauli’s collaborator in the spin-2 theory of gravitons, given by Freeman Dyson on p. 15 of his 2008 book The Scientist as Rebel:

    “Many years ago, when I was in Zürich, I went to see the play The Physicists by the Swiss playwright Friedrich Dürrenmatt. The characters in the play are grotesque caricatures … The action takes place in a lunatic asylum where the physicists are patients. In the first act they entertain themselves by murdering their nurses, and in the second act they are revealed to be secret agents in the pay of rival intelligence services. … I complained about the unreality of the characters to my friend Markus Fierz, a well-known Swiss physicist, who came with me to the play. ‘But don’t you see?’ said Fierz. ‘The whole point of the play is to show us how we look to the rest of the human race’.”

    “… reality must take precedence over public relations, for nature cannot be fooled.” – Feynman’s Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.

    Fig. 1 - Newton's Principia, revised 2nd edition, 1713: Book 1, The Motion of Bodies, Section II: The Determination of Centripetal Forces, Proposition 1, Theorem 1.

    Fig. 1 – Newton’s geometric proof that an impulsive pushing graviton mechanism is consistent with Kepler’s 3rd law of planetary motion, because equal areas will be swept out in equal times (the three triangles of equal area, SAB, SBC and SBD, all have an equal base of length SB, and they all have altitudes of equal length), together with a diagram we will use for a more modern analysis. Newton’s geometric proof of centripetal acceleration, from his book Principia, applies to any elliptical orbit, not just the circular orbits covered by Hooke’s easier inverse-square law derivation. (Newton didn’t include the graviton arrow, of course.) By Pythagoras’ theorem, x^2 = r^2 + v^2t^2, hence x = (r^2 + v^2t^2)^{1/2}. Inward motion, y = x − r = (r^2 + v^2t^2)^{1/2} − r = r[(1 + v^2t^2/r^2)^{1/2} − 1], which upon expanding with the binomial theorem to the first two terms yields: y ~ r[(1 + (1/2)v^2t^2/r^2) − 1] = (1/2)v^2t^2/r. Since this result is accurate for infinitesimally small steps (the first two terms of the binomial become increasingly accurate as the steps get smaller, as does the approximation of treating the triangles as right-angled triangles so that Pythagoras’ theorem can be used), we can accurately differentiate this result for y with respect to t to give the inward velocity, u = v^2t/r. Inward acceleration is the derivative of u with respect to t, giving a = v^2/r. This is the centripetal force formula which is required to obtain the inverse square law of gravity from Kepler’s third law: Hooke could only derive it for circular orbits, but Newton’s geometric derivation (above, using modern notation and algebra) applies to elliptical orbits as well. This was the major selling point for the inverse square law of gravity in Newton’s Principia over Hooke’s argument.
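    The algebra in this caption can be reproduced mechanically (an illustrative sympy sketch, my addition):

```python
# Newton's centripetal result: expand y = (r^2 + v^2 t^2)^(1/2) - r for small t,
# then differentiate twice with respect to t to get the acceleration a = v^2/r.
import sympy as sp

r, v, t = sp.symbols('r v t', positive=True)
y = sp.sqrt(r**2 + v**2 * t**2) - r

y_small = sp.series(y, t, 0, 3).removeO()   # (1/2) v^2 t^2 / r
u = sp.diff(y_small, t)                     # inward velocity, v^2 t / r
a = sp.diff(u, t)                           # inward acceleration
print(y_small, u, a)                        # t**2*v**2/(2*r)  t*v**2/r  v**2/r
```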

    See Newton’s Principia, Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1:

    ‘The areas which revolving bodies describe by radii drawn to an immovable centre of force … are proportional to the times on which they are described. For suppose the time to be divided into equal parts … suppose that a centripetal [inward directed] force acts at once with a great impulse [like a graviton], and, turning aside the body from the right line … in equal times, equal areas are described … Now let the number of those triangles be augmented, and their breadth diminished in infinitum … QED.’

    This result, in combination with Kepler’s third law, gives the inverse-square law of gravity, although Newton’s argument uses geometry plus hand-waving, so it is actually far less rigorous than the algebraic version above. Newton failed to employ calculus and the binomial theorem to make his proof more rigorous because, although he was their inventor, most readers wouldn’t have been familiar with those methods. (It doesn’t do to be so inventive as to both invent a new proof and also invent a new mathematics to use in making that proof, because readers will be completely unable to understand it without a large investment of time and effort; so Newton found that it paid to keep things simple and to use old-fashioned mathematical tools which were widely understood.)

    Newton in addition worked out an ingeniously simple proof, again geometric, to demonstrate that a solid sphere of uniform density (or radially symmetric density) has the same net gravity on the surface and at any distance, for all of its atoms in their three-dimensional distribution, as would be the case if all the mass were concentrated at a point in the middle of the Earth. The proof is very simple: consider the sphere to be made up of a lot of concentric shells, each of small thickness. For any given shell, the geometry is such that a person on the surface experiences small gravity effects from small quantities of mass nearby on the shell, while most of the mass of the shell is located at large distances. The inverse square effect, which means that for equal quantities of mass the most nearby mass creates the strongest gravitational field, is thereby offset by the actual locations of the masses: only small amounts are nearby, and most of the mass of the shell is at a great distance. The overall effect is that the effective location for the entire mass of the shell is in the middle of the shell, which implies that the effective location of the mass of a solid sphere, seen from a distance, is in the middle of the sphere (if the density of each of the little shells, considered as parts of the sphere, is uniform).
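    Newton’s shell result can also be checked by brute force (a Monte Carlo sketch of my own, in units where G and the shell mass are 1):

```python
# Net pull of a thin uniform shell (radius 1) on an external point at distance 3
# from its centre, compared with a point mass at the centre: both give 1/d^2.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
shell_r, d = 1.0, 3.0

p = rng.normal(size=(N, 3))                      # uniform random points on the shell
p *= shell_r / np.linalg.norm(p, axis=1, keepdims=True)

obs = np.array([0.0, 0.0, d])
sep = p - obs                                    # from the test point to each element
force = (sep / np.linalg.norm(sep, axis=1, keepdims=True)**3).mean(axis=0)

print(force[2], -1.0 / d**2)                     # both ~ -0.1111 (toward the centre)
```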

    Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on google video here (55 minutes). Feynman in his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on google video (55 minutes), stated:

    ‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun’s direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced[, because] the fundamental operation is very much simpler; this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. …. If the earth is moving it is running into the particles …. so there is a sideways force on the [earth which] would slow the earth up in the orbit and it would not have lasted for the four billions of years it has been going around the sun. So that’s the end of that theory. …

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    The error Feynman makes here is that quantum field theory tells us that there are particles of exchange radiation mediating forces normally, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and the inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a once-off resistance due to the compressive length change, not continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved: uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges), this effect is experienced by us all the time as inertia and momentum: the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation; contraction of the matter in the length parallel to the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum and thus inertial mass of the matter in motion. This effect is due to the contraction of the earth in the direction of its motion. Feynman misses this entirely. The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post. So it is two for the price of one: the mechanism predicts gravity, but it also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity as well. Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed as more experiments and observations are done.

    As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time, there are 6 effective dimensions: three expanding time-like dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally, for example the Earth’s radius is contracted by the amount 1.5 mm as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect due to gravity (gravitational mass), but it is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the Lorentz-FitzGerald contraction).

    This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction of subtracting half the product of the metric and the trace of the Ricci tensor from the Ricci tensor of curvature. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors and the full Einstein-Hilbert field equation; as explained in an earlier post, Newton’s law of gravitation when merely expressed in terms of 4-dimensional spacetime curvature gives the wrong deflection of starlight and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, as compared to the amount of curvature that a low-velocity particle experiences. The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996. The first major confirmation came in 1998, and this was the lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model. But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe was not isotropic) then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those time-like dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6. The three extra time-like dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.

    Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered: the Pioneer Anomaly, an otherwise unaccounted-for tiny acceleration of (8.74 ± 1.33) × 10^−10 m/s^2, in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data), acting on the long-range space probes Pioneer-10 and Pioneer-11. However these accelerations are very small, and to a very good approximation, the three time-like dimensions – corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions – are very similar.

    Therefore, the full 6-dimensional theory (3 spatial and 3 time dimensions) gives the unification of fundamental forces; Riemann’s suggestion of summing dimensions using the Pythagorean sum ds^2 = Σ(dx^2) could obviously include time (if we live in a single velocity universe) because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)^2 can be included with the other dimensions dx^2, dy^2, and dz^2. There is then the question as to whether the term d(ct)^2 will be added or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx^2 + dy^2 + dz^2 = d(ct)^2, which implies that d(ct)^2 changes sign when passed across the equality sign to the other dimensions: ds^2 = Σ(dx^2) = dx^2 + dy^2 + dz^2 − d(ct)^2 = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds^2 = Σ(dx^2) = dx^2 + dy^2 + dz^2 − d(ct)^2, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854.
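    As an aside (my addition), the defining property of this Minkowski line element, its invariance under the FitzGerald-Lorentz transformation discussed earlier, can be checked symbolically:

```python
# The interval x^2 + y^2 + z^2 - (ct)^2 is unchanged by a Lorentz boost along x.
import sympy as sp

x, y, z, t, c, u = sp.symbols('x y z t c u', real=True, positive=True)
gamma = 1 / sp.sqrt(1 - u**2 / c**2)

x_b = gamma * (x - u * t)           # boosted coordinates
t_b = gamma * (t - u * x / c**2)

s2 = x**2 + y**2 + z**2 - (c * t)**2
s2_b = x_b**2 + y**2 + z**2 - (c * t_b)**2
print(sp.simplify(s2_b - s2))       # 0: the interval is invariant
```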

    Professor Georg Riemann (1826-66) stated in his 10 June 1854 lecture at Gottingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x1, x2, x3, and so on to xn, then … ds = √[Σ(dx)^2] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’

    [The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric: ds^2 = (1 − 2GM/(rc^2))^{−1}(dx^2 + dy^2 + dz^2) − (1 − 2GM/(rc^2)) d(ct)^2. This only reduces to the special relativity metric for the impossible, unphysical, imaginary, and therefore totally bogus case of M = 0, i.e., the absence of gravitation. However this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]

    WARNING: I’ve made a change to the usual tensor notation below and, apart from the conventional notation in the Christoffel symbol and Riemann tensor, I am indicating covariant tensors by positive subscript and contravariant by negative subscript instead of using indices (superscript) notation for contravariant tensors. The reasons for doing this will be explained and are to make this post easier to read for those unfamiliar with tensors but familiar with ordinary indices (it doesn’t matter to those who are familiar with tensors, since they will know about covariant and contravariant tensors already).

    Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of co-ordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute co-ordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):

    g = ds^2 = g_μν dx-μ dx-ν. The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.

    ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). … We call four quantities A_ν the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B_ν, the sum over ν, Σ A_ν B_ν = invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

    The rank is denoted simply by the number of letters of subscript notation, so that X_a is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over applicable dimensions), and X_ab is a ‘rank 2’ tensor (for second-order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar), a rank 1 tensor is a vector which is described by four numbers representing components in three orthogonal directions and time, and a rank 2 tensor is described by 4 × 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, X_a) and a contravariant tensor of the same variable (say, X_-a) are distinguished by the way they transform when converting from one system of co-ordinates to another; a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contravariant tensor by superscript (for example x^n). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the tensor x^n denotes the sum of components x^1 + x^2 + x^3 + … + x^n, whereas the power x^n means x multiplied by itself n times). [Another step towards ‘beautiful’ gibberish then occurs whenever a contravariant tensor is raised to a power, resulting in, say, (x^2)^2, which a logical mortal (whose eyes do not catch the bold superscript index) immediately ‘sees’ as x^4, causing confusion.] We avoid the ‘beautiful’ notation by using negative subscript to represent contravariant notation; thus x_-n is here the contravariant version of the covariant tensor x_n. Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’

    This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but it does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for μ and ν able to take values representing the four components of space-time (1, 2, 3 and 4 representing the ct, x, y, and z dimensions) we get the awfully long summation of 16 terms, added up like a 4-by-4 matrix (notice that according to Einstein’s summation convention, indices which appear twice are to be summed over):

    g = ds^2 = g_μν dx-μ dx-ν = Σ(g_μν dx-μ dx-ν)
    = −(g_11 dx-1 dx-1 + g_21 dx-2 dx-1 + g_31 dx-3 dx-1 + g_41 dx-4 dx-1)
    + (−g_12 dx-1 dx-2 + g_22 dx-2 dx-2 + g_32 dx-3 dx-2 + g_42 dx-4 dx-2)
    + (−g_13 dx-1 dx-3 + g_23 dx-2 dx-3 + g_33 dx-3 dx-3 + g_43 dx-4 dx-3)
    + (−g_14 dx-1 dx-4 + g_24 dx-2 dx-4 + g_34 dx-3 dx-4 + g_44 dx-4 dx-4)

    The first dimension has to be defined as negative since it represents the time component, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of number notation, since the term dx-1 dx-1 = d(ct)^2, while dx-2 dx-2 = dx^2, dx-3 dx-3 = dy^2, and so on. Therefore:

    g = ds^2 = g_ct d(ct)^2 + g_x dx^2 + g_y dy^2 + g_z dz^2 + (a dozen trivial first-order differential cross-terms).
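    The collection of terms above can be reproduced mechanically (an illustrative sympy sketch, my addition; the sign of the time term is carried inside g11 here):

```python
# Expand g_mn dx^m dx^n over m, n = 1..4 (16 terms), then zero the off-diagonal
# components to recover the four squared terms of the simplified line element.
import sympy as sp

dx = sp.symbols('dx1:5')   # dx1 = d(ct), dx2 = dx, dx3 = dy, dx4 = dz
g = sp.Matrix(4, 4, lambda m, n: sp.Symbol(f'g{m+1}{n+1}'))

ds2 = sum(g[m, n] * dx[m] * dx[n] for m in range(4) for n in range(4))

diag_only = {g[m, n]: 0 for m in range(4) for n in range(4) if m != n}
print(sp.expand(ds2.subs(diag_only)))   # g11*dx1**2 + g22*dx2**2 + g33*dx3**2 + g44*dx4**2
```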

    It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the 10 years long delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute co-ordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without this being done in a radical manner. So it is rare for a single group of people to have the stamina to both invent a new method, and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully. Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into space-time geometry:

    ‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for “higher mathematics”, which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)

General relativity equates the mass-energy in space to the curvature of the path of motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez, or a good book such as Sean Carroll’s Spacetime and Geometry: An Introduction to General Relativity.

Curvature is best illustrated by plotting a graph of distance versus time: when the line curves (as for an accelerating car), that curve is ‘curvature’. It is the curved line on a space-time graph that marks acceleration, whether that acceleration is due to a force acting upon gravitational mass or upon inertial mass (the equivalence principle of general relativity means that gravitational mass = inertial mass).

This point is made very clearly by Professor Lee Smolin on page 42 of the USA edition of his 2006 book, The Trouble with Physics. See Figure 1 in the post here. Next, in order to understand the Riemann curvature tensor mathematically, you need to understand the Christoffel symbol (an operator, not a tensor; the superscript index here indicates contravariance):

Γ^c_{ab} = (1/2) g^{cd} [(∂g_{da}/∂x^b) + (∂g_{db}/∂x^a) − (∂g_{ab}/∂x^d)]

    The Riemann curvature tensor is then represented by:

R^a_{bec} = (∂Γ^a_{bc}/∂x^e) − (∂Γ^a_{be}/∂x^c) + (Γ^a_{te} Γ^t_{bc}) − (Γ^a_{tc} Γ^t_{be}), where the repeated index t is summed over.
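A hedged check of the two formulae above: the following sympy sketch (my own illustration; it uses the 2-sphere instead of 4-d spacetime simply to keep the output small) implements the Christoffel and Riemann expressions exactly as written, and recovers the standard curvature component R^θ_{φθφ} = sin²θ for a sphere:

import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)
X = [theta, phi]                                   # coordinates on a sphere of radius a
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)         # metric g_ab
ginv = g.inv()                                     # inverse metric g^cd
n = len(X)

# Christoffel symbols: Gamma[c][a][b] = (1/2) g^cd (dg_da/dx^b + dg_db/dx^a - dg_ab/dx^d).
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[c, d]
             * (sp.diff(g[d, A], X[b]) + sp.diff(g[d, b], X[A]) - sp.diff(g[A, b], X[d]))
             for d in range(n)))
           for b in range(n)] for A in range(n)] for c in range(n)]

# Riemann tensor: R^a_bec = dGamma^a_bc/dx^e - dGamma^a_be/dx^c
#                           + Gamma^a_te Gamma^t_bc - Gamma^a_tc Gamma^t_be.
def riemann(A, b, e, c):
    expr = sp.diff(Gamma[A][b][c], X[e]) - sp.diff(Gamma[A][b][e], X[c])
    expr += sum(Gamma[A][t][e] * Gamma[t][b][c] - Gamma[A][t][c] * Gamma[t][b][e]
                for t in range(n))
    return sp.simplify(expr)

print(riemann(0, 1, 0, 1))                         # -> sin(theta)**2

For a 4-d metric the same loops simply run with n = 4; nothing else changes.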

If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is exactly balanced: within that distance there is net curvature due to gravitation, and at greater distances there is net curvature due to the dark energy that is responsible for the cosmological constant. The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.

To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c ∫ (−g_{mn} dx^m dx^n)^{1/2}, while the proper time is ∫ (g_{mn} dx^m dx^n)^{1/2}.

Notice that the ratio of proper length to proper time is always c. The Ricci tensor is a Riemann tensor contracted in form by summing over a pair of indices, so it is simpler than the Riemann tensor and is composed of 10 independent second-order differentials. General relativity deals with a change of co-ordinates by using the FitzGerald-Lorentz contraction factor, γ = (1 − v²/c²)^{1/2}. Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, and which reduces to the line element of special relativity for the impossible, not-in-our-universe case of zero mass. Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r² (inward acceleration being defined as positive) in the form R_{mn} = 4πG T_{mn}/c², where T_{mn} is the mass-energy tensor, T_{mn} = ρ u_m u_n. (This was incorrect, since it did not include conservation of energy.) But if we consider just a single dimension for low velocities (γ = 1), and remember E = mc², then T_{mn} = T_{00} = ρu² = ρ(γc)² = ρc² = E/(volume). Thus, T_{mn}/c² is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:


    Above: the components of the stress-energy tensor (image credit: Wikipedia).

The scalar sum or ‘trace’ of the stress-energy tensor is of course the sum of the diagonal terms running from the top left to the bottom right of the matrix, i.e. the sum of the terms with subscripts 00, 11, 22 and 33 (the energy-density and pressure terms).
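For illustration only (the numbers here are made up, not physical values), the trace is just numpy’s matrix trace applied to the 4 × 4 array of components:

import numpy as np

# Made-up illustrative numbers: energy density T00 and isotropic pressure p
# on the diagonal of an otherwise empty stress-energy tensor.
T00, p = 2.0, 0.5
T = np.diag([T00, p, p, p])
print(np.trace(T))                     # sum of the 00, 11, 22, 33 terms = T00 + 3p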

The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, is by Newton’s law v = (2GM/x)^{1/2}, so v² = 2GM/x. The situation is symmetrical: ignoring atmospheric drag, the speed at which a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of a mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling to that radius from an infinite distance, which by symmetry is equal to the energy of a mass travelling at escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an effect identical to ordinary motion. Therefore, we can place the square of the escape velocity (v² = 2GM/x) into the FitzGerald-Lorentz contraction factor, giving γ = (1 − v²/c²)^{1/2} = [1 − 2GM/(xc²)]^{1/2}.
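A quick numerical illustration of these formulae (my own check, using standard rounded constants for the Earth):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
x = 6.371e6        # radius of the Earth, m
c = 2.998e8        # speed of light, m/s

v = math.sqrt(2 * G * M / x)
print(v)                                        # ~1.12e4 m/s, the familiar 11.2 km/s
print(math.sqrt(1 - 2 * G * M / (x * c**2)))    # ~1 - 7e-10: a tiny contraction factor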

However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: with velocity, length is contracted in one dimension only, whereas with spherically symmetric gravity, length is contracted equally in all 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each: FitzGerald-Lorentz contraction effect: γ = x/x₀ = t/t₀ = m₀/m = (1 − v²/c²)^{1/2} = 1 − ½v²/c² + … . Gravitational contraction effect: γ = x/x₀ = t/t₀ = m₀/m = [1 − 2GM/(xc²)]^{1/2} = 1 − GM/(xc²) + … , where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just the one of the FitzGerald-Lorentz contraction: x/x₀ + y/y₀ + z/z₀ = 3r/r₀. Hence the radial contraction of space around a mass is r/r₀ = 1 − GM/(3rc²). Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of radial distance is by (1/3)GM/c². This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.

This is the 1.5-mm contraction of earth’s radius Feynman obtains (the arithmetic is checked below), as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without molecular viscosity (this is due to the Schwinger threshold for pair-production by an electric field: the vacuum only contains fermion-antifermion pairs out to a small distance from charges, and beyond that distance the weaker fields can’t cause pair-production – i.e., the energy is below the IR cutoff – so the vacuum contains just bosonic radiation, without the pair-production loops that could cause viscosity; for this reason the vacuum compresses macroscopic matter without slowing it down by drag). Feynman was unable to proceed with LeSage gravity and gave up on it in 1965.
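The 1.5 mm figure is easy to reproduce (again rounded constants; this is just arithmetic, not new physics):

G = 6.674e-11      # m^3 kg^-1 s^-2
M = 5.972e24       # kg
c = 2.998e8        # m/s

print(G * M / (3 * c**2))   # ~1.5e-3 m: the ~1.5 mm contraction of the Earth's radius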

    More information can be found in the earlier posts here, here, here, here, here and here.

    Now back to the Washington Post

    “The day before he was elected as Pope Benedict XVI, Cardinal Joseph Ratzinger addressed the cardinals assembled in St Peter’s and warned that society was “building a dictatorship of relativism that does not recognize anything as definitive.” … When I wrote Questioning Einstein: Is Relativity Necessary? (Vales Lake, 2009) I realized that the central claims of relativity and relativism are very similar.

    “My book was based on work by Petr Beckmann, a Czech immigrant who defected to the U.S. and taught electrical engineering at the University of Colorado. … His main point was that the physical anomalies that led to relativity can be explained without it. For example, the famous equation “E = mc2” was derived using relativity theory. But later Einstein re-derived it, this time without relativity.

    “A frequently heard statement of cultural relativism goes like this: “If it feels right for you, it’s OK. Who is to say you’re wrong?” One individual’s experience is as “valid” as another’s. There is no “preferred” or higher vantage point from which to judge these things. Not just beauty, but right and wrong are in the eye of the beholder. …

    “The special theory of relativity imposes on the physical world a claim that is very similar to the one made by relativism. In the 1880s a scientist named Albert Michelson searched for the “ether” – the medium in which light waves were thought to travel. But his equipment could not detect it … Einstein resolved the problem by claiming that a light ray keeps moving toward you at the same speed no matter how fast you move toward it. Light’s speed is unaffected by the observer’s velocity, Einstein said. That was strange because other waves don’t behave that way. Move toward a sound wave, and you must add your speed to that of the oncoming wave to know its approach velocity. That didn’t apply to light, apparently.

    “So how come the speed of light always stays the same? Einstein argued that when the observer moves relative to an object, distance and time always adjust themselves just enough to preserve light speed as a constant. Speed is distance divided by time. So, Einstein argued, length contracts and time dilates to just the extent needed to keep the speed of light ever the same.

    “Relativity … elevated science into a priesthood of obscurity. Common sense could no longer be trusted.

    “The contraction of space and the dilation of time are deductions from relativity. But they have not been observed. In easy Einstein books, drawings of spaceships that are shortened because they are moving at high speed are imagined by artists in accordance with theory. No physical experiment has ever detected length contraction.

    “Atomic clocks do slow down when they move … But the slowing of clocks and the slowing of time are very different things. GPS has “relativistic” corrections to keep its clocks synchronized. But those corrections depart significantly from Einstein’s theory. They refer clock motion not to the observer but to an absolute reference frame, centered on the Earth.

    “So there are reasons to think that experiments with atomic clocks have falsified special relativity. (The general theory is another matter. Beckmann said it gave the right results by a roundabout method.)”

    Bethell’s article is full of misunderstandings:

1. Einstein simplified physics by opposing aether theories. Einstein showed how to take just two experimentally defensible principles, namely relative motion and the constancy of the velocity of light, and use them to predict time-dilation and the increase of inertial mass with velocity (both confirmed by particle physics: radioactive particles like muons decay more slowly when accelerated to relativistic – i.e. near c – velocities, and particles gain extra momentum from the velocity-dependent mass increase, which has measurable effects when they collide; the sketch below quantifies the muon case).
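The muon example is easy to quantify (an illustrative calculation with textbook values; the 0.995c speed is an arbitrary choice of mine):

import math

tau0 = 2.2e-6            # muon rest-frame mean lifetime, s (textbook value)
beta = 0.995             # assumed speed as a fraction of c
c = 2.998e8              # m/s

gamma = 1 / math.sqrt(1 - beta**2)
print(gamma)                         # ~10: lifetime stretched tenfold in the lab frame
print(beta * c * tau0)               # ~660 m travelled without time dilation
print(beta * c * gamma * tau0)       # ~6.6 km travelled with time dilation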

Aether theories were ugly, ad hoc, and occurred in many varieties, sharing only the common problem that they all failed to explain, let alone predict, these results. Einstein’s genius lay in dumping the pictorial model and sticking to equations based on empirically defensible principles. The best of the aether theories were those of Lorentz and FitzGerald, in which the contraction of the Michelson-Morley apparatus in the direction of its motion (which is the same in Einstein’s, Lorentz’s and FitzGerald’s theories) is due to pressure on the front of the moving particles in the instrument (electrons and quarks), pushing against the aether as the Earth moves through an absolute space. The pressure was supposed to contract the instrument in direct proportion to the factor (1 − v²/c²)^{1/2}, where v is the velocity of the instrument and c is the velocity of light.

This is of course an analogy to the Prandtl-Glauert transformation, whereby the drag coefficient of an object is directly proportional to (1 − M²)^{-1/2}, where the Mach number M = v/c, so that the drag coefficient rises in proportion to (1 − v²/c²)^{-1/2} for an object moving in air as it approaches the velocity of sound, the so-called ‘sound barrier’ of that theory. The increase in drag coefficient contracts an aircraft in the direction of its motion, because the head-on pressure on the nose of the aircraft rises. Of course, this is not a perfect analogy. For one thing, the total drag force in air is not just proportional to the drag coefficient, but also to the square of the velocity: the total drag force is equal to the dynamic pressure (the product of half the air density and the square of the velocity), multiplied by the cross-sectional area and the drag coefficient. As explained in previous posts, the off-shell gauge boson radiations of the vacuum force fields are unable to slow down moving objects by carrying away kinetic energy, as the air does to an aircraft.

So in the case of the vacuum, the mechanism alters the form of the mathematical model used to describe the system. Only accelerations are resisted, and this gives rise to inertial mass, which rises with velocity (an analogy to the snowplow effect, where the height of snow on a snowplow rises with the velocity of the plow, because the snow has its own inertia and isn’t shunted sideways out of the way in direct proportion to the forward speed; thus the effective mass of the snow being pushed by the plow increases with the forward velocity) by the same Lorentz transformation factor which describes the contraction in length and the time-dilation. Furthermore, the Prandtl-Glauert transformation is false for air, because you can in fact exceed the velocity of sound, albeit at the price of creating a supersonic shock wave; air is not a perfect fluid. Vitally important for the analogy is the historical accident that Ludwig Prandtl and Hermann Glauert discovered their approximate theory of air drag in the 1920s, after special relativity had replaced the FitzGerald-Lorentz aether theory as the means of deriving the same basic equations. Prandtl had been lecturing on the subject before Glauert, although the latter published first: ‘The Effect of Compressibility on the Lift of Airfoils’, Proceedings of the Royal Society, vol. A118 (1927), pp. 113-9. No reference was made there to the contraction in special relativity. The analogy is pertinent because in quantum field theory space is filled with field quanta moving at the velocity of light, c – the analogue (isothermal rather than adiabatic) of the mean air particle velocity which fixes the sound speed in air. However, sound is a longitudinal wave, whereas light oscillates perpendicularly to its direction of propagation.
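To display the formal identity of the two factors referred to above (purely an illustration of the analogy, not a claim that air drag and the vacuum are the same physics), one can tabulate (1 − ratio²)^{-1/2}, which is the Prandtl-Glauert factor when the ratio is v/c_sound and the Lorentz factor when the ratio is v/c_light:

import math

def amplification(ratio):
    # (1 - ratio**2)**(-1/2): the Prandtl-Glauert factor for ratio = v/c_sound,
    # and the Lorentz factor for ratio = v/c_light -- the same functional form.
    return 1 / math.sqrt(1 - ratio**2)

for ratio in (0.1, 0.5, 0.9, 0.99):
    print(ratio, amplification(ratio))   # rises slowly, then blows up near ratio = 1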

Nevertheless, Feynman’s path integral explains the double-slit diffraction of light accurately by showing that each photon effectively follows all possible paths, most of which are not normally observable because their amplitudes cancel one another out. Therefore, long-range (electromagnetic, gravitational) forces have quantum field particles moving at the velocity of light between all charges in the universe, and ‘real’ (on-shell) particles of radiation appear as asymmetries in the normal equilibrium of exchange of (off-shell) particles between long-lived (real) particles. We know this is real from the experimentally confirmed Casimir effect, in which there is a LeSage-type pushing together of two parallel metal plates: on the outside they are hit by the full spectrum of vacuum radiation, but on the inside only by the reduced (cut-off) spectrum of wavelengths smaller than the size of the gap between them.

Notice in the video below that the lecturer is totally incompetent in claiming that all types of virtual particles contribute to the Casimir effect; this is incorrect because, although the Casimir force operates over short distances between parallel conducting metal plates, the distance is large compared to the size of an atom, i.e. it is larger than the range of virtual hadrons in the vacuum (which are limited to nuclear-sized distances and thus can’t even reach the radius of orbital electrons). So the guy’s arm-waving claims about all kinds of particles contributing to the Casimir effect are totally false. Likewise, his claims about subtracting infinities are unphysical and unnecessary; there is no evidence that the zero-point vacuum energy density is infinite. Some people like to add junk speculations to physics to alloy it to science fiction. The Casimir force is actually evidence that the vacuum contains electromagnetic field quanta coming from all directions, not merely being exchanged between the two charges we are looking at, but being exchanged with all the other charges in the universe (charges in distant stars all around). There is no mechanism for field quanta to resist being exchanged between charges A and B just because those charges each have opposite charges near them (e.g., nuclei if they are electrons, or vice versa). Fundamental charges have no way of being influenced by the overall neutrality of a distant system: if there are a positive and a negative charge at a distance, an electron will still exchange field quanta with them, although the overall force resulting from the gauge interaction (quanta exchange) may be balanced out if the distant negative and positive charges are close together. This leads to physical predictions and a deeper understanding of physical phenomena, unlike false claims about having to subtract infinities:

2. The claim that ‘Relativity … elevated science into a priesthood of obscurity. Common sense could no longer be trusted’ is better levelled at the lying, physically false and indefensible, non-relativistic first-quantization quantum mechanics, which uses classical Coulomb fields and therefore has to falsely introduce chaos by applying intrinsic Planck-scale uncertainty to real particles:

    “… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

    – Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

    “I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!” [Emphasis by Feynman.]

    – Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

    “When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [amplitudes for different paths] to predict where an electron is likely to be.”

    – Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

    “The quantum collapse [in the multiple universes, non-relativistic, pseudoscientific first-quantization model of quantum mechanics] occurs when we model the wave moving according to Schroedinger time-dependent and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger time-independent. The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

    – Dr Thomas Love, California State University.

    “In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”

    http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

3. Bethell doesn’t seem to be aware of the biggest anisotropy in the cosmic background radiation, the massive ±3 milliKelvin anisotropy discovered right back in the early days of investigating the microwave background (radiation from 300,000 years after the big bang) by Richard A. Muller, using U-2 aircraft, in 1978. See his Scientific American article, ‘The cosmic background radiation and the new aether drift’, vol. 238, May 1978, pp. 64-74 (PDF linked here):


This anisotropy is literally orders of magnitude bigger than the far smaller ripples detected more recently by satellites such as COBE and WMAP. Our Milky Way is moving towards Andromeda at a speed much greater than is suggested by the gravitational attraction of the masses of the stars in the two galaxies, which has led to suggestions of an invisible ‘great attractor’ near Andromeda, such as a large black hole. However, for scientists who let science fiction take a back seat, there is no evidence for this, and it isn’t even wrong – it is not a falsifiable hypothesis, just ad hoc fiddling of uncheckable theory to fit facts.

Instead of inventing an unobserved black hole or an unobserved UFO parked near Andromeda to ‘explain’ the blueshift towards it, we can adopt Occam’s Razor and take the simple explanation of Muller’s 1978 Scientific American article: the motion is the aether drift which Michelson and Morley failed to measure, because their instrument contracted in the direction of its absolute motion. The Milky Way is going at 600 km/s and, allowing for occasional deflections and variations in its motion as it approaches and passes other galaxies, it has been travelling like this for a long time. Travelling at roughly this speed for the 13.7 thousand million years since the big bang carries it through only about 0.2% of the horizon radius of the universe today, since 600 km/s is 0.2% of c (the arithmetic is checked in the sketch below). Hence, the reason why we see a highly isotropic universe around us (stars every way we look) is that we are effectively in the middle, just 0.2% of the radius off centre! Special relativity then becomes a child’s simplification like the Bohr atom: absolutely false in its denial of the possibility of measuring absolute motion, but relatively correct in its results and handy for making quick, rough calculations. The Copernican ‘principle’ that ‘we are at no special place’ becomes ignorant, ill-informed pseudo-science, because it simply has no experimental foundation.
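The arithmetic behind that percentage (my own back-of-envelope check, using the naive horizon radius c × age):

year = 3.156e7       # seconds per year
age = 13.7e9 * year  # assumed age of the universe, s
v = 6.0e5            # Milky Way speed from the CMB dipole, m/s
c = 2.998e8          # m/s

d = v * age          # distance covered at 600 km/s since the big bang
horizon = c * age    # naive horizon radius (c times age)
print(d / horizon)   # = v/c ~ 0.002, i.e. about 0.2% of the radius off centre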

    Nobody has ever proved that the observed isotropy of the universe around us is a hoax due to curved spacetime, because – as observationally discovered in 1998 by Perlmutter from supernova redshifts – our universe is not curved, it is simply flat in geometry and the general relativistic gravitational curvature is limited to the spacetime near masses. At great distances from masses, the cosmological acceleration due to dark energy opposes curvature, flattening spacetime out!

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

One funny or stupid denial of this was in a book called Einstein’s Mirror by a couple of physics lecturers, Tony Hey and Patrick Walters. They seemed to claim, in effect, that in the Michelson-Morley experiment the arms of the instrument are of precisely the same length and measure light speed absolutely, and that if anyone built a Michelson-Morley instrument with arms of unequal length, the contraction explanation wouldn’t work. In fact, the arms were never equal in length to within a wavelength of light in the first place: the experiment only detected the relative difference in apparent light speed between two perpendicular directions, by utilising interference fringes, which is a way to compare the speed in one direction with another, not to measure absolute speed in any direction. You can’t measure the speed of light with the Michelson-Morley instrument; it only shows whether there is a difference between two perpendicular directions, and only if you implicitly assume there is no length contraction!

    It’s really funny that Eddington made Einstein’s special relativity (anti-aether) famous in 1919 by confirming aetherial general relativity. The media couldn’t be bothered to explain aetherial general relativity, so they explained Einstein’s earlier false special relativity instead!

    ‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, New Pathways in Science, v2, p39, 1935.

    “The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.” – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.

    “Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.” – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5.

    “Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.” – Einstein’s Legacy – Where are the “Einsteinians?”, by Professor Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm

‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result that, according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p. 111.

    ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).

    ‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90. (However, this is a massive source of controversy in GR because it’s a continuous approximation to discrete lumps of matter as a source of gravity which gives rise to a falsely smooth Riemann curvature metric; really continuous differential equations in GR must be replaced by a summing over discrete – quantized – gravitational interaction Feynman graphs.)

    ‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.

    ‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.

    ‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

    ‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.

    ‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    ‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

    ‘… the Heisenberg formulae [virtual particle interactions cause random pair-production in the vacuum, introducing indeterminancy, like the Brownian motion of pollen fragments due to random air molecular bombardment] can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

    ‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’ – G. Builder, ‘Ether and Relativity’, Australian Journal of Physics, v11 (1958), p279.

This paper of Builder’s on absolute velocity in ‘relativity’ is the analysis used and cited by the famous paper on atomic clocks being flown around the world to validate ‘relativity’, namely J. C. Hafele in Science, vol. 177 (1972), pp. 166-8. So the experiment was proving absolute motion, not the ‘relativity’ that was widely hyped. Absolute velocities are required in general relativity because, when you take synchronised atomic clocks on journeys within the same gravitational isofield contour and then return them to the same place, they read different times, due to having had different absolute motions. This experimentally debunks special relativity for this situation, where you need the absolute motions from accelerations, modelled by curvature in general relativity. Hence, Einstein was wrong when he wrote in Ann. d. Phys., vol. 17 (1905), p. 891: ‘we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ See, for example, the debunking of Einstein’s claim there on page 12 of the September 2005 issue of Physics Today, available at: http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf (PDF linked here).

So we see from this solid, experimentally confirmed evidence that the usual statement that there is no ‘preferred’ frame of reference, i.e., no single absolute reference frame, is false. Experimentally, a swinging pendulum or spinning gyroscope is observed to stay true to the stars (which, from our observation point, are not moving at angular velocities large enough to cause any significant problem with using them as an absolute reference frame for most purposes).

    If you need a more accurate standard, then use the cosmic background radiation, which is the truest blackbody radiation spectrum ever measured in history.

These different methods of obtaining measurements of absolute motion are not really examining ‘different’ or ‘preferred’ frames, or pet frames. They are all approximations to the same thing, the absolute reference frame. All the Copernican propaganda since the time of Einstein – that ‘Copernicus didn’t discover that the earth orbits the sun, but instead denied that anything really orbits anything, because he thought there is no absolute motion, only relativism’ – is a gross lie. That claim is just the sort of brainwashing, double-think propaganda which Orwell accused the dictatorships of using in his book ‘1984’. Copernicus didn’t travel throughout the entire universe to confirm that the earth is ‘in no special place’. Even if he had made that claim, it would not have been founded upon any evidence. Science is (or rather, should be) concerned with being unprejudiced in areas where there is a lack of evidence.

    If the cosmic background radiation, the Casimir experiment proof, and so on, had been around in Einstein’s time, I don’t think he would have been a laggard in trying to move physics forward. At some time, old ideas like aether, relativity, etc., which were all “mainstream religions” in their day, need to be given up and people need to move on, driven by empirical facts which can no longer be denied or covered-up. What does not advance peacefully, sooner or later succumbs to corruption, decay, and even radical revolution.

The famous 1930-64 New York Times science correspondent (science editor from 1956-64), William L. Laurence, who published the headline news of early American research on nuclear fission chain reactions in February 1939 and was the only journalist to witness the first nuclear test and the Nagasaki raid from a B-29 aircraft, published numerous stories hyping Einstein’s many failed efforts to find a classical unified field theory of electromagnetism and gravitation (ignoring nuclear forces). Laurence and his wife are photographed beside Einstein on the back cover of Laurence’s 1951 book about the development of the H-bomb, The Hell Bomb, which praises Einstein’s classical theory hype. However, Laurence changed his tune after Einstein died, becoming cynical to the point of ridicule in his 1959 book Men and Atoms, chapter 29 (pp. 230-235 of the 1961 Hodder and Stoughton, London, edition):

    “He [Einstein] first believed he had achieved his goal in 1929, after thirteen years of concentrated effort, only to find it illusory on closer examination. In 1950 he thought he almost had it within his grasp, having overcome ‘all the obstacles but one.’ In March 1953 he felt convinced that he had at last overcome that lone obstacle and thus had attained the crowning achievement of his life’s work. Yet even then he ruefully admitted that he had ‘not yet found a practical way to confront the theory with experimental evidence,’ the crucial test for any theory.

    “Even more serious, his field theory failed to find room in the universe for the atom and its component particles … which appeared to be ‘singularities in the field’, like flies in the cosmic ointment. Despite these drawbacks, he never wavered in his confidence that the concept of the pure field, free from ‘singularities’ (i.e., the particle concept …) was the only true approach to a well-ordered universe, and that eventually ‘the field’ would find room in it for the enfant terrible of the cosmos – the atom and the vast forces within it. …

    “Einstein believed that the physical universe was one continuous field, governed by one logical set of laws, in which every individual event is inexorably determined by immutable laws of causality. On the other hand, the vast majority of modern-day physicists champion the quantum theory, which leads to a discontinuous universe, made up of discrete particles and quanta of energy, in which probability takes the place of causality and determinism is supplanted by chance. …

    “Einstein alone stood in majestic solitude against all these concepts of the quantum theory. Granting that it had had brilliant successes in explaining many of the mysteries of the atom and the phenomena of radiation, which no other theory had succeeded in explaining, he nevertheless insisted that a theory of discontinuity and uncertainty, of duality of particle and wave, and of a universe not governed by cause and effect was an ‘incomplete theory’; that eventually laws would be found showing a continuous, unitary universe, governed by immutable laws in which every individual event was predictable.

    “‘I cannot believe’, he said, ‘that God plays dice with the cosmos!’ Rather, as he said on another occasion, ‘God is subtle but He is not malicious.’

    “Paradoxically, as the years passed, the figure of Einstein the man became more and more remote, while that of Einstein the legend came ever nearer to the masses of mankind. They grew to know him not as a universe maker whose theories they could not hope to understand, but as a world citizen, one of the outstanding spiritual leaders of his generation, a symbol of the human spirit and its highest aspirations. … He radiated humor, warmth and kindliness. He loved jokes and laughed easily. Princeton residents would see him walk in their midst, a familiar figure yet a stranger; a close neighbor yet at the same time a visitor from another world. … Princetonians, old and young, soon got used to the long-haired figure in pullover sweater and unpressed slacks wandering in their midst, a knitted stocking cap covering his head in winter. …

    “He was a severe critic of modern methods of education. ‘It is nothing short of a miracle,’ he said, ‘that modern methods of instruction have not yet entirely strangled the holy curiosity of inquiry. For this delicate little plant, aside from stimulation, stands mainly in need of freedom.’ …

    “‘In my life,’ he said once, explaining his great love for music, ‘the artistic visionary plays no mean role. After all, the work of a research scientist germinates upon the seed of imagination, of vision. Just as the artist arrives at his conceptions partly by intuition, so a scientist must also have a certain amount of intuition.’

    “While he did not believe in a formal, dogmatic religion, Einstein, like all true mystics, was of a deeply religious nature. …

“‘I assert [he wrote for The New York Times on November 9, 1930] that the cosmic religious experience is the strongest and the noblest driving force behind scientific research. No one who does not appreciate the terrific exertions and, above all, the devotion without which pioneer creation in scientific thought cannot come into being, can judge the strength of the feeling out of which alone such work, turned away as it is from immediate, practical life, can grow.’”

    Of course, since 1984 the hype for “string theory”, in which quanta are supposedly manufactured from classical field theory using compactifications of otherwise invisible extra-dimensions by means of Rube-Goldberg stabilized Calabi-Yau manifolds, has returned Einstein’s spin machine to newspaper physics, complete with its lack of falsifiable predictions.

Update (4 June 2010): Carl Brannen has had his latest paper, ‘Spin Path Integrals and Generations’, accepted for publication in Foundations of Physics. He has a version of it in PDF format on his site (linked here) and I’ve read it. My first impression, for about the first 10 (out of 22) pages, was extremely good. Basically, the first half of the paper is a very competent and excellent, in my view, discussion of the physics of the path integral, which helps basic understanding of what the mathematics is physically doing. The remainder of the paper is, to my mind, quite different from the first half, and is not so rigorous physically, although Carl does a good job of using some mathematics impressively in the second half (in my opinion covering up the lack of physical understanding which accompanies the mathematical correlations and guesswork about particle masses). However, it is today generally accepted (in my view wrongly) that people should approach physics mathematically and not worry about mechanisms. This view goes back to Mach and Einstein, when they were getting rid of the mechanistic view of an aether communicating forces through the vacuum of space. Einstein went against Mach in the 1920s, of course, when German mainstream physics under Heisenberg used the uncertainty principle to get rid of any causality in principle for quantum mechanics. As explained above, Feynman’s second-quantization brought some causality back, because Feynman explains the chaos of the electron’s motion in the atom by means of a quantum Coulomb field acting between electron and nucleus, whereas Heisenberg’s first-quantization quantum mechanics wrongly uses a classical Coulomb field and therefore requires direct obedience to the uncertainty principle to ‘explain’ electron chaos. Feynman doesn’t need the electron intrinsically to have an indeterminate position-momentum product according to the uncertainty principle (which he dumps in this context), because Feynman’s second-quantization path integral introduces a mechanism for indeterminacy: the electron jumps around because the Coulomb field quanta are exchanged at random and cause interferences. You can use an uncertainty principle to model the quantum field, which in turn makes the electron’s position indeterminate, because the electron is being moved by the fluctuating quantum Coulomb field, unlike the steady classical Coulomb field of Heisenberg’s first-quantization mythology. When will the hoax of first-quantization, and the full physical mechanism for indeterminacy in second-quantization (which, unlike first-quantization, is the relativistic form of quantum mechanics – by which I mean empirically relativistic, i.e. in agreement with the Lorentz equations etc., rather than with Einstein’s philosophy that no absolute motion is possible in the universe), be widely promoted in the media and in undergraduate physics courses? When will reality be explained clearly by physicists, instead of being obfuscated in order to preserve the philosophy of Mach and Bohr and Heisenberg, which Feynman denounced as ignorant?

From this mainstream point of view, Carl is doing the right job, and I hope that his and Marni’s investigations will be helpful, at least regarding the CKM matrix and neutrino masses. However, I think this mainstream ‘ignore mechanisms and just model with abstract mathematics’ approach is wrong for several reasons. Firstly, mathematics which is unconstrained by physical mechanisms can go anywhere, and there is no guarantee that you won’t end up like Ptolemy, with a theory of accurate but physically hopeless epicycles instead of a mathematical model which is tied to physical facts. Secondly, it gets defended by mathematical obfuscation: you can hide behind mathematics and build a fortress out of it which is hard for others to understand well enough to criticise objectively. Thirdly, it’s too easy to add further mathematical epicycles to explain away any anomaly or disagreement with the data that comes along. I’m convinced, for experimentally defensible reasons spelled out in detail two posts ago, that particle masses have a simple basis in quantum field theory.

Vacuum polarization (shielding the electric field of the real electron core in the process) gives energy to off-shell virtual fermions in the vacuum, which makes some of them approach an on-shell energy state, where they exist long enough to be affected by the exclusion principle, which thus begins to structure those semi-virtual fermions in the vacuum. When they do annihilate, some of the neutral weak gauge bosons produced (from that structured semi-virtual fermion vacuum) act as neutral currents which ‘mire’ down the real core charge, like a Higgs field. There is no Higgs boson to give mass: instead, weak bosons have intrinsic mass as the charge of a U(1) gauge theory of quantum gravity, and the neutral weak bosons behave like a Higgs field – a theory which predicts the general distribution of lepton, quark and hadron masses, as shown in previous posts (e.g., two posts back for lepton and quark masses, and the about page of this blog for hadron masses).

However, I still have to look further into the details and try to follow through some of the ideas in the previous post, such as looking at beta decay afresh in a more consistent physical way than is presently used. Lunsford’s 6-dimensional spacetime (3 spatial dimensions and 3 timelike dimensions) is another example of something I have to get to grips with urgently. Mathematically it is as abstract and abstruse as you can get, but it seems physically comprehensible to me. We live in an expanding geometric space; the universe expands in 3 spatial dimensions. If the expansion rates in the 3 different spatial dimensions – i.e. the Hubble parameters (v = HR, where H is the Hubble parameter) – were different in each spatial dimension, then we would have 3 different ages of the universe, each being t = 1/H (a relation checked numerically at the end of this post). The expansion rate, however, is isotropic (the same in all directions we look), so effectively the age of the universe is one value, not three, and the three effective time dimensions are thus identical and can be represented mathematically by one time dimension. Just because geometric space is expanding does not imply that everything is expanding. The expansion of geometric space is not accompanied by the expansion of masses, which are contracted in general relativity by gravitational fields. This fact – not grasped by those popularizers of the big bang who believe falsely that masses expand like the spaces between them – is well explained by Feynman in his famous Lectures on Physics; as an example, general relativity predicts that the Earth’s radius is contracted by the gravitational field by a distance MG/(3c²), which is about 1.5 mm; we have shown in the earlier post linked here how this general relativistic contraction of gravitational charges is physically related to the Lorentz contraction and is due to spin-1 quantum gravity fields. So the expansion of spatial geometric distances with time after the big bang is not the same thing as the expansion of the distances that describe masses such as particles. We need 3 dimensions to describe the size of contractable masses and 3 expanding dimensions to describe the expansion in time of the spatial geometric volume of the universe.

Thus, we have 3 contractable dimensions describing matter, and 3 expanding dimensions describing time: we cannot unambiguously use distance scales for measuring expanding spaces in our universe, because by the time light (or any force field) from a star eventually reaches us, the star has had time to move still further away! Instead, we must really measure the location of a star in terms of time: the time in our past when the light was emitted. This is what we do when we measure cosmological distances in units like ‘light-years’, which are time-based units. In total we may therefore need 6 dimensions to describe everything consistently, although for practical purposes the isotropy of the universe’s expansion around us means that the 3 time dimensions can be lumped together as indistinguishable and treated as a single effective dimension for many purposes. Lunsford’s 6-d spacetime has a neat symmetry (3 time dimensions, 3 spatial dimensions) and predicts that there is no cosmological constant. In 1996, I showed that spin-1 quantum gravitons do the job now attributed to ‘dark energy’ in accelerating the universe (the cosmological constant), as well as producing a LeSage quantum gravitational effect. The two consequences of spin-1 gravitons are the same thing: distant masses are pushed apart, while nearby small masses exchange gravitons less forcefully with one another than with the masses around them, so they get pushed together, like the Casimir force effect. There is no separate ‘dark energy’ or CC; it’s all due to spin-1 gravitons (see two posts back for quantitative proof that you get this to work).
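Finally, the t = 1/H relation used above is easily checked numerically (H = 70 km/s/Mpc is an assumed round value, not a measurement quoted in this post):

Mpc = 3.086e22       # metres per megaparsec
H = 70e3 / Mpc       # Hubble parameter in s^-1, for an assumed 70 km/s/Mpc
year = 3.156e7       # seconds per year

print(1 / H / year / 1e9)   # ~14, i.e. t = 1/H is roughly 14 thousand million years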