– R. P. Feynman, “Take the World from Another Point of View,” Yorkshire TV, 1973.

**Above:** Feynman argued that the Standard Model is inconsistent and incomplete, and we highlight a typical inconsistency in the interpretation of the beta decay scheme, which arose after the introduction of weak vector bosons (Fermi’s earlier beta decay theory lacked this inconsistency because it contained no weak boson). Note that the existence of weak bosons is an experimental fact. The problem therefore lies in the dogmatic interpretation of beta decay, which distinguishes leptons from quarks by the fact that leptons lack strong colour charge while quarks carry it. We predict that colour charge emerges at extremely high energy at the expense of electric charge: the fractional electric charges of quarks are due to vacuum pair production (including the production of colour-charged particles with colour-charged gluons) and the associated vacuum polarization screening, which permits a mechanism for strong colour charge effects to emerge spontaneously from the fields around leptons at extremely high energy, beyond existing experiments.

The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of

j [Feynman’s symbol j is related to alpha = 1/137.036… by alpha = j²]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. …
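As a quick numeric sanity check on the quoted relation (a sketch using the standard value alpha ≈ 1/137.036; the j here is just the square root of alpha, ignoring Feynman’s sign convention):

```python
import math

# Standard fine-structure constant; Feynman's "j" satisfies alpha = j^2,
# so the magnitude of j is simply sqrt(alpha).
alpha = 1 / 137.036
j = math.sqrt(alpha)

print(j)       # magnitude of Feynman's coupling arrow, about 0.085
print(j ** 2)  # recovers alpha = 1/137.036...
```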

But if you just look at the [Standard Model] you can see the glue, so to speak. It’s very clear that the photon and the three W’s [weak gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.]

– R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Feynman also argued that the uncertainty principle in textbook quantum mechanics (the “first quantization” lie, in which the uncertainty principle acts on the real on-shell particles while keeping the Coulomb field classical, leading to wavefunction collapse issues popularized with lying hype about Schrödinger’s cat and entanglement) is unnecessary because of second quantization (discovered to be necessary by Dirac in 1927). Schrödinger’s and Heisenberg’s first-quantization approaches to quantum mechanics are lies because they are non-relativistic, and because they falsely make the real on-shell particle’s position-momentum product intrinsically uncertain while keeping the Coulomb field classical, instead of correctly applying uncertainty to the off-shell field quanta, so that chaotic field-quanta interactions become the physical mechanism for the apparent indeterminacy of the electron in an atomic orbit:

“I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, ‘Your old-fashioned ideas are no damn good when …’. If you get rid of ALL the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no NEED for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [of on-shell particles by off-shell field quanta] becomes very important …”

– Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, & 84.

First-quantization bigot Niels Bohr never understood how Dirac’s work in quantum field theory (second quantization) overturned Heisenberg’s mythology, and he simply refused to listen to Feynman’s second-quantization proof, claiming that it violated his dogmatic religion of uncertainty-principle worship:

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

Feynman argued against the path integral being fundamental to particle physics:

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space [because there is an infinite series of terms in the perturbative expansion to Feynman’s path integral] … Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

– Richard P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Feynman also blew away the string theory smokescreen back in 1988, after the first superstring revolution hype:

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’

– Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp. 194-195.

Like modern string theorists, Boscovich believed that forces are purely mathematical in nature, and he argued that the stable sizes of various objects such as atoms correspond to the ranges of different parts of a unified force theory. The concept that the range of the strong nuclear force roughly determines the size of a nucleus would be a modern example, although he offered no real explanation for different forces such as gravity and electromagnetism, e.g. their very different strengths between two protons. Against Boscovich were Newton’s friend Fatio and his French student LeSage, who did not believe in a mathematical universe, but in mechanisms due to particles flying around in the vacuum. Famous Victorian-era physicists like Maxwell and Kelvin argued that particles flying around to cause forces by impacts and pressure would heat the planets up by drag, and slow them down so that they quickly spiralled into the sun. Feynman recounts LeSage’s mechanism for gravity and the arguments against it in both his Lectures on Physics and his 1964 lectures The Character of Physical Law (audio linked here). However, quantum field theory today does accurately predict the experimentally verified Casimir force, which is indeed caused by off-shell (off mass shell) field quanta pushing the plates together, somewhat akin to LeSage’s mechanism. The radiation in the vacuum which causes the Casimir force doesn’t slow down or heat up moving metal or other objects, so the Maxwell-Kelvin objections don’t apply to field quanta (off-shell radiations).

The Casimir force is produced because the metal plates exclude longer wavelengths from the space between them, while the full spectrum of virtual radiation pushes against the plates from the opposing sides, so the net force pushes them together: “attraction”. Maxwell’s equations are formulated in terms of rank-1 (first order) gradients and curls of “field lines”, whereas general relativity is formulated in terms of rank-2 (second order) space curvatures or accelerations, so there is an artificial distinction between the two types of equations. Pauli and Fierz in 1939 argued that if gravitons are only exchanged between two masses which attract, they have to be spin-2. Electromagnetism can be mediated by spin-1 bosons with 4 polarizations to account for attraction and repulsion. Thus the myth of linking the rank of the tensor equation to the spin began: rank-1 Maxwell equations implied spin-1 field quanta (virtual photons), and rank-2 general relativity implied spin-2 field quanta (gravitons). However, the rank of the equation is purely a synthetic issue of whether you choose to express the field in terms of Faraday-style imaginary “field lines” (which Maxwell chose), or measurable spacetime-curvature-induced accelerations (which Einstein used). It’s not a fundamental distinction, since you could rewrite Maxwell’s equations in terms of accelerations, making them rank-2.
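The net-push picture can be made quantitative with the standard ideal-conductor Casimir formula, P = π²ħc/(240d⁴); this snippet is just a numeric illustration of how strongly the unbalanced virtual-radiation spectrum presses the plates together at sub-micron separations:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates a distance
    d metres apart: P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

# The d^-4 dependence means the push-together force grows steeply
# as the excluded-wavelength gap narrows:
for d in (1e-6, 1e-7):
    print(d, casimir_pressure(d))  # ~1.3e-3 Pa at 1 micron, ~13 Pa at 100 nm
```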

Furthermore, Pauli and Fierz wrongly assumed that you can treat two masses as exchanging gravitons while ignoring the exchange of gravitons with all the other masses in the universe. This is easily shown wrong, because the mass of the rest of the universe is immensely larger than an apple and the Earth (say), and furthermore the exchanged gravitons being received are converging inward from that distant mass (galaxy clusters etc.), which is isotropically distributed in all directions. When you include those contributions, the Pauli-Fierz argument for spin-2 gravitons is disproved, because the repulsion due to the exchange of gravitons between the particles in the apple and those in the Earth is trivial compared to the inward forces from graviton exchange on the other sides. Hence spin-1 gravitons do the job of pushing the apple down to the Earth. The bigger the mass of the Earth, the more shadowing and asymmetry of graviton forces on the top and bottom of the particles in the apple, so the apple is pushed down with greater force; thus it is analogous to the mechanism of the Casimir force or LeSage.

The distinction between Newtonian and Einsteinian gravitation is twofold. First, there is the change from forces to spacetime curvature (acceleration) using the Ricci tensor and a heavily fiddled stress-energy tensor for the source of the field (which can’t represent real matter correctly as particles, using instead artificially averaged smooth distributions of mass-energy throughout a volume of space). Secondly, these two tensors could not simply be equated by Einstein without violating the conservation of mass-energy (the divergence of the stress-energy tensor does not vanish), so Einstein had to complicate the field equation with a contraction term which compensates for the inability of the divergence of the stress-energy tensor to disappear. It is precisely this correction term for the conservation of mass-energy which makes the deflection of light equal to double that of a non-relativistic object like a bullet passing the sun. The reason is that all objects approaching the sun gain gravitational potential energy. In the case of a non-relativistic or slow-moving bullet, this gained gravitational potential energy does two things: (1) it speeds up the bullet, and (2) it deflects the direction of the bullet more towards the sun. A relativistic particle like a photon cannot speed up, so all of the gravitational potential energy it gains is instead used to deflect it; hence it deflects by twice as much as Newton’s law predicts.
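The factor of two can be checked numerically for light grazing the sun (a sketch using standard constants; here “newtonian” is the deflection Newton’s law predicts for a particle moving at speed c, and general relativity doubles it):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.963e8    # solar radius, m (grazing-incidence impact parameter)

newtonian = 2 * G * M_sun / (c ** 2 * R_sun)  # Newton's-law deflection at speed c
einstein = 4 * G * M_sun / (c ** 2 * R_sun)   # general relativity: exactly double

to_arcsec = lambda radians: math.degrees(radians) * 3600
print(to_arcsec(newtonian))  # ~0.87 arcsec
print(to_arcsec(einstein))   # ~1.75 arcsec, the Eddington 1919 test value
```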

“Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity….

“… Remarkably, all of relativity (at least, all of special relativity) could be taught as an effective theory by using only Newtonian language … In a way, the model we are discussing here could be seen as a variant of the old ether model. At the end of the 19th century, the ether assumption was so entrenched in the physical community that, even in the light of the null result of the Michelson-Morley experiment, nobody thought immediately about discarding it. Until the acceptance of special relativity, the best candidate to explain this null result was the Lorentz-FitzGerald contraction hypothesis… we consider our model of a relativistic world in a fishbowl, itself immersed in a Newtonian external world, as a source of reflection, as a Gedankenmodel. By no means are we suggesting that there is a world beyond our relativistic world describable in all its facets in Newtonian terms. Coming back to the contraction hypothesis of Lorentz and FitzGerald, it is generally considered to be ad hoc. However, this might have more to do with the caution of the authors, who themselves presented it as a hypothesis, than with the naturalness or not of the assumption… The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.” For more on this subject, see the earlier post linked here.

**Path integral simplicity for low energy quantum gravity applications**

Feynman’s book QED explains how to do the path integral approximately without formal calculus! He gives rules so simple that you can draw arrows of similar length but varying directions on paper, representing the complex amplitudes for different paths light can take through a glass lens; the result is that paths well off the path of least time cancel out efficiently, while those near it reinforce each other. Thus you recover the classical laws of reflection and refraction. He can’t and doesn’t apply such simple graphical calculations to HEP situations above the field’s IR cutoff, where pair production occurs and leads to a perturbative expansion with an infinite series of different possible Feynman diagrams, but the graphical application of path integrals to simple low-energy phenomena gives the reader a neat grasp of the principles. This applies to low-energy quantum gravitational phenomena just as it does to electromagnetism.
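The arrow-adding recipe is easy to mimic numerically. Below is a hedged sketch (not Feynman’s own numbers; the geometry, wavelength and sampling are arbitrary choices) of light reflecting from a mirror: each candidate reflection point contributes a unit “arrow” whose angle is set by the path length, and almost the whole final amplitude comes from arrows near the classical least-time point:

```python
import cmath
import math

# Source A and detector B sit one unit above a mirror along the x-axis.
A, B = (-1.0, 1.0), (1.0, 1.0)
wavelength = 0.05  # arbitrary short wavelength (same length units)

def arrow(x):
    """Unit-length arrow for the path A -> (x, 0) -> B; its angle turns
    in proportion to the path length measured in wavelengths."""
    d = math.hypot(x - A[0], A[1]) + math.hypot(B[0] - x, B[1])
    return cmath.exp(2j * math.pi * d / wavelength)

xs = [i / 500 for i in range(-1000, 1001)]  # mirror points from -2 to +2
total = sum(arrow(x) for x in xs)
centre = sum(arrow(x) for x in xs if abs(x) < 0.3)  # near least-time point x = 0
far = sum(arrow(x) for x in xs if abs(x) > 1.0)     # arrows far off that path

# Far-off arrows spin rapidly and cancel; the small central region
# reproduces nearly the whole amplitude, recovering the classical ray.
print(abs(total), abs(centre), abs(far))
```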

As implied by Hubble’s discovery of the receding universe in 1929, recession velocity is *v = HR* where *H* is Hubble’s number and *R* is distance. This implies acceleration *a = dv/dt.* If there is “spacetime” in which light from stars obeys the equation *R/t = c,* then it follows that *a = d(HR)/d(R/c) = Hc* = 6 x 10^{-10} ms^{-2}. (Another way to derive this, in which time runs forward rather than backwards with increasing distance from us, is often more acceptable conceptually, and is linked here: https://nige.files.wordpress.com/2009/08/figure-14.jpg.) This is the cosmological acceleration of the receding matter in the universe, implying force F = ma outward and an inward reaction force which is identical according to Newton’s 3rd law, and can only be mediated by gravitons. This makes quantitative predictions which will be shown below.
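A hedged numeric check of a = Hc (the Hubble parameter value below is my own assumption; the text’s figure of 6 x 10^{-10} ms^{-2} corresponds to a slightly smaller H):

```python
# Take H ~ 70 km/s/Mpc, a conventional modern value (an assumption here).
Mpc = 3.0857e22        # metres per megaparsec
H = 70e3 / Mpc         # Hubble parameter in SI units, ~2.27e-18 s^-1
c = 2.998e8            # speed of light, m/s

a = H * c              # the document's cosmological acceleration a = Hc
print(a)               # ~6.8e-10 m/s^2, close to the quoted 6e-10
```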

This is evidence for a spin-1 LeSage graviton (ignored by Pauli and Fierz when first proposing that the quantum of gravitation has spin 2). The Hubble recession of galaxies implies a cosmological acceleration *a = dv/dt = d(HR)/d(R/c) = Hc*, obtained in May 1996 from the Hubble relationship *v = HR* simply by arguing that spacetime implies the recession velocity varies not just with apparent distance, but with time, so it is an effective acceleration. (We published this via p893 of the October 1996 issue of *Electronics World* and also the February 1997 issue of *Science World*, ISSN 1367-6172, after string theory reviewers rejected it from more specialized and appropriate journals without giving any scientific reasons whatsoever.) The following statement is from a *New Scientist* page:

“We don’t know that gravity is strictly an attractive force,” cautions Paul Wesson of the University of Waterloo in Ontario, Canada. He points to the “dark energy” that seems to be accelerating the expansion of the universe, and suggests it may indicate that gravity can work both ways. Some physicists speculate that dark energy could be a repulsive gravitational force that only acts over large scales. “There is precedent for such behaviour in a fundamental force,” Wesson says. “The strong nuclear force is attractive at some distances and repulsive at others.”

“… Freedom is the right to question, and change the established way of doing things. It is the continuing revolution … It is the understanding that allows us to recognize shortcomings and seek solutions. It is the right to put forth an idea … It is the right to … stick to your conscience, even if you’re the only one in a sea of doubters. Freedom is the recognition that no single person, no single authority of government has a monopoly on the truth ….” – President Reagan, Moscow State University on May 31, 1988.

Above: Perlmutter’s discovery of the acceleration of the universe, based on the redshifts of fixed-energy supernovae, which are triggered as a critical-mass effect when sufficient matter falls into a white dwarf. A type Ia supernova explosion, always yielding 4 x 10^{28} megatons of TNT equivalent, results from the collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavier elements, creating massive amounts of radioactive nuclides, particularly the intensely radioactive nickel-56; about half of all heavier nuclides (up to and including uranium and beyond) are also produced by the ‘R’ (rapid) process of successive neutron captures by fusion products in supernova explosions. The brightness of the supernova flash tells us how far away the Type Ia supernova is, while the redshift of the flash tells us how fast it is receding from us. That’s how the cosmological acceleration of the universe was measured. Note that “tired light” fantasies about redshift are disproved by Professor Edward Wright on the page linked here.
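To put the quoted yield in SI units (a simple conversion; 1 megaton of TNT is defined as 4.184 x 10^{15} J, and 1.4 solar masses is the Chandrasekhar critical mass from the text):

```python
MEGATON_J = 4.184e15          # joules per megaton of TNT (standard definition)
M_SUN = 1.989e30              # solar mass, kg

yield_joules = 4e28 * MEGATON_J     # the fixed Type Ia energy release
critical_mass = 1.4 * M_SUN         # Chandrasekhar limit for the white dwarf

print(yield_joules)    # ~1.7e44 J
print(critical_mass)   # ~2.8e30 kg
```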

This isn’t based on speculation: cosmological acceleration has been observed since 1998, when CCD telescopes plugged live into computers with supernova signature recognition software detected extremely distant supernovae and recorded their redshifts (see the article by the discoverer of cosmological acceleration, Dr Saul Perlmutter, on pages 53-60 of the April 2003 issue of *Physics Today*, linked here). The outward cosmological acceleration of the 3 × 10^{52} kg mass of the 9 × 10^{21} stars in galaxies observable by the Hubble Space Telescope (page 5 of a NASA report linked here) is approximately *a = Hc* = 6.9 x 10^{-10} ms^{-2} (L. Smolin, *The Trouble With Physics,* Houghton Mifflin, N.Y., 2006, p. 209), giving an immense outward force under Newton’s 2nd law of *F = ma* = 1.8 × 10^{43} Newtons. Newton’s 3rd law gives an equal inward (implosive type) reaction force, which predicts gravitation quantitatively. What part of this is speculative? Maybe you have some vague notion that scientific laws should not for some reason be applied to new situations, or should not be trusted if they make useful predictions which are confirmed experimentally; so maybe you vaguely don’t believe in applying Newton’s second and third laws to masses accelerating at 6.9 x 10^{-10} ms^{-2}! But why not? What part of “fact-based theory” do you have difficulty understanding?
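The arithmetic here is just Newton’s 2nd law applied to the quoted figures; note that the quoted force of 1.8 x 10^{43} N actually corresponds to a = 6 x 10^{-10} ms^{-2} rather than 6.9 x 10^{-10} (with the latter it would be nearer 2.1 x 10^{43} N):

```python
m = 3e52        # kg, quoted mass of the receding observable matter
a = 6e-10       # m/s^2, cosmological acceleration a = Hc (rounded value)

F_outward = m * a          # Newton's 2nd law
F_inward = -F_outward      # Newton's 3rd law: equal and opposite reaction

print(F_outward)  # 1.8e43 newtons, the figure quoted in the text
```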

It is usually by applying facts and laws to new situations that progress is made in science. If you stick to applying known laws to situations they have already been applied to, you’ll be less likely to observe something new than if you try applying them to a situation which nobody has ever applied them to before. We should apply Newton’s laws to the accelerating cosmos and then focus on the immense forces and what they tell us about graviton exchange.

*Above:* The mainstream 2-dimensional ‘rubber sheet’ interpretation of general relativity says that mass-energy ‘indents’ spacetime, which responds like placing two heavy large balls on a mattress, which distorts more between the balls (where the distortions add up) than on the opposite sides. Hence the balls are pushed together: ‘Matter tells space how to curve, and space tells matter how to move’ (Professor John A. Wheeler). This illustrates how the mainstream (albeit arm-waving) explanation of general relativity is actually a theory that gravity is produced by space-time distorting to *physically push* objects together, not to pull them! (When this is pointed out to mainstream crackpot physicists, they naturally freak out and become angry, saying it is just a pointless analogy. But when the checkable predictions of the mechanism are explained, they may perform their always-entertaining “hear no evil, see no evil, speak no evil” act.)

Above: LeSage’s own illustration of quantum gravity in 1758. Like Lamarck’s evolution theory of 1809 (the one in which characteristics acquired during life are somehow supposed to be passed on genetically, rather than Darwin’s evolution in which genetic change occurs due to the inability of inferior individuals to pass on genes), LeSage’s theory was full of errors and is still derided today. The basic concept, that mass is composed of fundamental particles with gravity due to a quantum field of gravitons exchanged between these fundamental particles of mass, is now a frontier of quantum field theory research. What is interesting is that quantum gravity theorists today don’t use the arguments used to “debunk” LeSage: they don’t argue that quantum gravity is impossible because gravitons in the vacuum would “slow down the planets by causing drag”. They recognise that gravitons are not real particles: they don’t obey the energy-momentum relationship or mass shell that applies to particles of, say, a gas or other fluid. Gravitons are thus off-shell or “virtual” radiations, which cause accelerative forces but don’t cause continuous gas-type drag or the heating that occurs when objects move rapidly in a real fluid. While quantum gravity theorists realize that particle (graviton) mediated gravity is possible, LeSage’s mechanism of quantum gravity is still as derided today as Lamarck’s theory of evolution. Another analogy is the succession from Aristarchus of Samos, who first proposed the solar system in 250 B.C. against the mainstream earth-centred universe, to Copernicus’ inaccurate solar system (circular orbits and epicycles) of 1500 A.D., and to Kepler’s elliptical-orbit solar system of 1609 A.D. Is there any point in insisting that Aristarchus was the original discoverer of the theory, when he failed to come up with a detailed, convincing and accurate theory? Similarly, Darwin rather than Lamarck is credited with the theory of evolution, because he made the theory useful and thus scientific.

Since 1998, more and more data have been collected, and the presence of a repulsive long-range force between masses has been vindicated observationally. The two consequences of spin-1 gravitons are one and the same effect: distant masses are pushed apart, while nearby small masses exchange gravitons less forcefully with one another than with the masses around them, so they get pushed together, like the Casimir force effect.

Using an extension to the standard “expanding raisin cake” explanation of cosmological expansion, in this spin-1 quantum gravity theory, the gravitons behave like the pressure of the expanding dough. Nearby raisins have less dough pressure between them to push them apart than they have pushing in on them from expanding dough on other sides, so they get pushed closer together, while distant raisins get pushed further apart. There is no separate “dark energy” or cosmological constant; both gravitation and cosmological acceleration are effects from spin-1 quantum gravity (see also the information in an earlier post, *The spin-2 graviton mistake of Wolfgang Pauli and Markus Fierz* for the mainstream spin-2 errors and the posts here and here for the corrections and links to other information).

As explained on the About page (which contains errors and needs updating), NASA has published Hubble Space Telescope estimates of the immense amount of receding matter in the universe, and since 1998 Perlmutter’s data on supernova luminosity versus redshift have shown the size of the tiny cosmological acceleration. The relationship in the diagram above therefore predicts gravity quantitatively; alternatively, you can normalize it to Newton’s empirical gravity law so that it predicts the cosmological acceleration of the universe, which it has done since publication in October 1996, long before Perlmutter confirmed the predicted value (both are due to spin-1 gravitons).

**Elitist hero worship of string theory hero Edward Witten by string theory critic Peter Woit**

“Witten’s work is not just mathematical, but covers a lot of ground. The more mathematical end of it has been the most successful, but that’s partly because, in the thirty-some years of his career, no particle theorist at all has had the kind of success that leads to a Nobel Prize. If Witten had been born ten-twenty years earlier, I’d bet that he would have played some sort of important role in the development of the Standard Model, of a sort that would have involved a Nobel prize.” – Peter Woit, Not Even Wrong blog

With enemies like Peter Woit, Witten must be asking himself the question, who needs friends? More seriously, this is a useful statement of Dr Woit’s elitism problem. He thinks that Professor Witten tragically missed out on a Nobel Prize, despite his mathematical physics brilliance, by being born some decades too late. Duh. Doesn’t that prove him unnecessary? After all, he wasn’t needed. Physics did not go on hold for decades awaiting him. Others got the prizes for doing the physics. Maybe I’m just too stupid to understand true genius…

On the topic of Bohr’s quoted attack on Feynman’s second-quantization path integrals, we found that this kind of “you’re wrong because our lying, false, but widely hyped dogma is popular fashion, therefore we don’t have to listen to you!” claim is still rife in mainstream physics, particularly in groupthink science fantasy like string theory. In May 1996 we accurately predicted the *a = dv/dt = d(HR)/d(R/c) = Hc* ~ 6 x 10^{-10} ms^{-2} cosmological acceleration of the universe from the Hubble relationship *v = HR*, simply by arguing that spacetime implies that the recession velocity varies not just with apparent distance, but with time, so it is an effective acceleration. We published this via p893 of the October 1996 issue of *Electronics World* and also the February 1997 issue of *Science World* (ISSN 1367-6172), after string theory reviewers rejected it from more specialized and appropriate journals without giving any scientific reasons whatsoever. This acceleration was thus predicted two years before large groups of astronomers detected it. They failed to acknowledge the prediction and frequently lied in publications that it was unpredicted. Edward Witten claimed – falsely as far as physical facts are concerned – that it was a great surprise, when in fact it had been predicted by quantum gravity in my paper years earlier. None of these people seem to understand general relativity, which lacks the dynamics for quantum gravity. All of the relativistic corrections to Newtonian gravity which general relativity contains come from the contraction term needed for the conservation of mass-energy, which is introduced because a direct equivalence of spacetime curvature to the stress-energy (gravitational field source) tensor would be false, since the divergence of the stress-energy tensor does not vanish as it should in order to satisfy local conservation of mass-energy.
This contraction term is what makes general relativity shrink Earth’s radius by *GM*/(3*c*^{2}) = 1.5 mm, as Feynman explains in his *Lectures on Physics*; together with the Lorentz transformation, this contraction is predicted by spin-1 quantum gravity. The outward force of the accelerating universe is given by Newton’s 2nd law; the inward reaction force mediated by gravitons is then predicted by Newton’s 3rd law to be equal and opposite. This inward force of gravitons turns LeSage’s idea, which Feynman discusses (e.g. on the audio file which plays at this linked site), into a quantitative prediction. Because the gravitons are off-shell, they cause forces and thus contractions, instead of drag or heating like on-shell radiations. Similarly, the electromagnetic spin-1 gauge bosons causing your fridge magnet to stay attached to the fridge door don’t cause it to heat up, and the Casimir force spin-1 gauge bosons in the vacuum which push metal plates together don’t cause drag on the motion of metal plates in a vacuum. Thus the normal drag and heating objections to LeSage apply to on-shell vacuum radiation but are null and void against off-shell virtual radiation. In fact, the compression effects cause the radial contraction normally attributed to the fourth dimension in general relativity. Relativity is thus explained in terms of a physical mechanism: force effects from off-shell gauge bosons in quantum field theory.
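Feynman’s 1.5 mm figure for the Earth is easy to reproduce from GM/(3c²), using standard values for G and the Earth’s mass (a numeric check, not a derivation of the contraction term):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth's mass, kg
c = 2.998e8          # speed of light, m/s

contraction = G * M_earth / (3 * c ** 2)   # radial contraction in metres
print(contraction * 1e3)  # ~1.5 mm, Feynman's Lectures on Physics figure
```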

The pair production field isn’t infinite: there are UV and IR cutoffs on the energies and thus distances around a charge where polarizable pair production of virtual fermions occurs. No infinities are present. There can’t be an infinite number of polarized virtual fermions because the mechanism which produces them depends on energy taken from the electromagnetic field, which isn’t infinite.

Remember the earlier post with the diagram above (linked here)? The beta decay of leptons (muon and tauon) is analyzed in a way that’s inconsistent with quark decay. E.g., muons are supposed to beta decay into electrons via the intermediary of a W^{-} weak boson.

For consistency with this picture, downquarks and strange quarks would also need to beta decay into electrons via the intermediary of a W^{-} weak boson.

But the mainstream interpretation lacks this consistency in interpretation, and instead insists on seeing quark decay as the decay of quarks directly into other quarks, not requiring the intermediary of the W_{-} weak boson (indicating decay of quarks into leptons, complimenting Cabbibo’s 1964 discovery “universality” e.g. the similarity of CKM weak interaction strengths for quarks and leptons within the same generation). For an illustration that makes the problem clear, see the first diagram in this blog post.

The solution to this discrepancy also gets rid of the alleged and unexplained excess of matter over antimatter. Quarks and leptons differ in colour charge and fractional electric charge, but these differences are superficial masking effects of vacuum polarization, which changes the observable electric charge of a particle. They’re not fundamental, deep properties of nature, as is assumed today in the construction of the SM.

Suppose that a downquark is really a disguised electron: it has lost 2/3rds of its electric charge to vacuum polarization screening, and that electromagnetic energy has been transformed into strong colour charge. The polarization (pulling apart) of virtual quark pairs by strong electric fields gives them potential energy and thus increases their survival time over that predicted by the uncertainty principle. So some of the electromagnetic field energy gets converted, by this virtual-fermion polarization mechanism, into the energy of gluon fields (which automatically accompany the pair-production-created virtual quark-antiquark pairs). Virtual fermions which have been pulled apart by strong electric fields, using energy from the electric field in the process, both screen the electric charge and contribute to the colour charge.

Thus the difference between the total electromagnetic field energy of the integer-charge electron and that of the fractionally charged downquark is converted into the colour charge of the downquark. This idea actually predicts that the total energy of the short-range gluon field of a downquark is precisely (1 – 1/3)/(1/3) = 2 times, i.e. twice, the total energy of its electromagnetic field.
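The arithmetic of this claimed energy conversion can be made explicit (a sketch of the bookkeeping only, assuming, as the ratio in the text does, that the relevant field energy scales linearly with the visible charge):

```python
# Sketch of the energy bookkeeping claimed in the text: a unit electron
# charge is screened down to the downquark's observed 1/3 charge, and the
# "missing" fraction of electromagnetic field energy is assumed converted
# into the gluon (colour charge) field.
from fractions import Fraction

electron_charge = Fraction(1)      # integer charge (units of e)
downquark_charge = Fraction(1, 3)  # observed fractional charge

# energy converted to colour charge, relative to the remaining EM field energy
ratio = (electron_charge - downquark_charge) / downquark_charge
print(ratio)  # 2: gluon field energy = twice the electromagnetic field energy
```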

Hence in this model downquarks and electrons are the same thing, merely disguised or cloaked by the vacuum polarization phenomena accompanying confinement. This solves the beta decay interpretation anomaly above, and also explains the alleged problem of the excess of matter over antimatter in the universe. Observe that the universe is 90% hydrogen, each atom containing one electron, one downquark, and two upquarks. If upquarks are disguised positrons and downquarks are disguised electrons, there is a perfect balance of matter and antimatter in the universe; it’s just hidden by vacuum polarization phenomena.

Responding to the comment and diagram above, Professor Jacques Distler (the pro-stringy, bigoted arXiv adviser who leaves snide attacks by trackbacks while not permitting genuine trackbacks, as discussed in detail in the earlier blog post linked here) states: “As to your physics comments, they are, alas, completely wrong,” before recommending a dogmatic book on the Standard Model. That book is the whole problem, because my comments are entirely based on Feynman’s criticisms of the Standard Model in his book QED, so I responded:

Thanks for your technical analysis, Professor. I suggest you read Feynman’s book QED for the faults in the Standard Model I discussed (it’s at your level). Cheers.

Doubtless, he will come back with apologies, terrific enthusiasm for solid physics, and a confession that trying to make an ad hoc theory of the Standard Model using string theory or E8 is missing the point that the Standard Model may not merely be “incomplete” but in need of a complete rebuilding! It’s a bit like Ptolemy using epicycles to model the Earth-centred universe of Aristotle; it doesn’t matter how accurate the epicycle model is, if it is a mathematical model for a false interpretation of the universe, then it is a pipe-dream as far as real-world physics is concerned. In QED, Feynman writes:

The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of

*j* [Feynman’s symbol *j* is related to alpha, 1/137.036…, by: alpha = *j*^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. …

But if you just look at the [Standard Model] you can see the glue, so to speak. It’s very clear that the photon and the three W’s [weak gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.] – R. P. Feynman,

*QED*, Penguin, 1990, pp. 141-142.
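As a quick numerical gloss on Feynman’s *j* (magnitude only; Feynman’s amplitude is actually negative, and the sign convention is ignored in this sketch):

```python
# Feynman's coupling amplitude j satisfies alpha = j^2, so |j| = sqrt(alpha).
import math

alpha = 1 / 137.036    # fine-structure constant
j = math.sqrt(alpha)   # magnitude of Feynman's j
print(f"j ~ {j:.4f}")  # about 0.0854
```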

Professor Distler, by claiming that the Feynman arguments I quoted were wrong (although that could of course just be his standard response to any non-string idea, and the reason for his arXiv advice), suggests that he is completely unaware of these issues with the Standard Model. However, I may be wrong here, since he is married to a psychologist. So maybe the explanation is different and he is a more complex character, of interest to students of the science of psychology? At a firmer level, you can understand his hostility to the real universe (as distinct from the imaginary landscape in the minds of string theorists) by noting that he works in the same Texas University department as the string theorist and Standard Model contributor Professor Steven Weinberg, a proponent of the bombing of Palestinian civilians (who claims that British regard for the human rights of civilians is disrespectful to American Jews). Maybe Jacques would be fired if he admitted the truth about Feynman’s criticisms of the Standard Model that Weinberg helped build?

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space [because there is an infinite series of terms in the perturbative expansion to Feynman’s path integral] … Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

– Richard P. Feynman,

*The Character of Physical Law*, November 1964 Cornell Lectures, broadcast and published in 1965 by the BBC, pp. 57-8.

It’s true that ignorant morons may object to advanced QFT mathematics simply because it’s so abstract. However, as Feynman’s argument shows, there are intelligent reasons for questioning the fundamental validity of even well established mathematical techniques used ubiquitously in particle physics…

What is the future of peer review? What does it do for science and what does the scientific community want it to do? Should it detect fraud and misconduct? Does it illuminate good ideas or shut them down? Does it help journalists report the status and quality of research? Why do some researchers do their bit while others make excuses? And why are all these questions important not just to journal editors, but to policy makers and the public? … [includes slides with vague, arm-waving complaints about the problem of “theories of everything” submitted for peer review, which indicate “deranged minds” on principle and thus need not be taken seriously]

Response:

But who are “peers”? If you discover something nobody else has discovered which is way out, you don’t truly have any peers who are capable of checking your paper. E.g., if you send a paper to Classical and Quantum Gravity on an alternative to string theory, they send it to dogmatic string theorists to act as “peer” reviewers, who send back a rejection report saying the idea is useless for the progress of string theory! The problem is finding a genuine, unprejudiced, “peer” reviewer if you are working on a new idea which hasn’t as yet attracted any other interest.

Your position of publishing ideas written up by large numbers of collaborators misses this point. If you have a large group of authors, even if they get rejected, they have enough weight in numbers to say, start up their own specialist journal with its own peer review advisory panel, and get on the citation index. In reality, science requires groupthink support for new ideas. It’s unlikely that a radical idea can be completely implemented, explored, and popularized in the face of ignorant hostility and sneering rejections from mainstream “peer” reviewers who are basically the doormen to the trade union “closed shop” or maybe “old boys club” (depending on the analogy you prefer) of the Elitist Scientific Research Corp.

Peer review is great for the incremental progress of everyday science, but it is not so good where the new idea contradicts the entire ongoing mainstream research program, especially if that program is being defended as being “the only game in town”. ;-)

Just a word about the “deranged minds” with the “theory of everything”.

Here’s how the “theory of everything” gets developed:

1. You try to publish a single idea with a prediction for, say, quantum gravity.

2. You get rejected because your paper does not say how your idea fits into the Standard Model or general relativity.

3. You work on this problem and find a solution. Now you have a “theory of everything”.

4. Tommaso then says you must be deranged. ;-)

Thanks for the helpful idea to submit to arXiv.

I did that in December 2002, when only my email at the University of Gloucestershire could be used to set up an account. I uploaded a paper. It appeared online with an arXiv number. Thirty seconds later, when I refreshed my browser, it had been replaced with somebody else’s paper. The editor of Classical and Quantum Gravity then sent it to string theorists for “peer” review, and sent me the report without the names of the “peer” reviewers. The one good thing about refusing to give the names of the string theorists who wrote that report is that you are forced to suspect the whole lot of them of being your secret enemies ;-)

**Diatribe against pure mathematical Platonic ideas infecting physics departments**

The following is a piece I wrote and then deleted from a paper I’m trying to finish, called “Understanding quantum gravity”. It’s a bitter diatribe against the pure-mathematical Platonic ideals that have infected physics via string theory and even loop quantum gravity. It’s not what I want in my paper, and it is not specifically targeted at Distler or anyone else, although it might or might not be applicable in any particular case.

I don’t know whether Distler is a complete string quack or whether he has a genuine interest in physics which just became perverted as the landscape and AdS/CFT failures of string theory have been defended by the lie that it is “the only game in town”. Jacques has in the past made false speculations about my knowledge of physics, which may suggest to some that he is prepared to make false claims about what people are doing *without first finding out the facts or asking;* this leads me to think he is presently behaving as a quack in defending string theory when it has repeatedly failed to live up to its hype. But that’s just my personal view based on observing his weird behaviour, which may change and improve later on (particularly if Jacques should ever tragically die of old age and become a little quieter, or should we say politely, more “contemplative”).

What I want in the paper is just applied maths predictions, so I deleted the following portions which show the kind of bitterness that comes out when you try to write a paper after a lot of bigoted, quack “peer”-review for 14 years:

INTRODUCTION

Contrary to lying hype about making “predictions”, string theory gains its strength from its failure to predict results in a falsifiable way and so be found right or wrong. Many media science journalists have confused the impossibility of making checkable predictions with rigor, because a failure ever to be debunked by experiments sounds to them like rigor.

On the contrary, real-world isolation is not rigor but just evidence of complete physical failure. By definition, in pure (not applied) mathematics you prove theorems without requiring any experiments or observational confirmations from the real world. The problem with string theory is that it doesn’t even prove its non-real-world fantasies with pure mathematical rigor. E.g., the AdS/CFT correspondence is an unproved conjecture. And whether or not there is an exact correspondence between anti-de Sitter space and conformal field theory, such an exact mathematical correspondence would only approximately model the strong nuclear force: anti-de Sitter space has a *negative* cosmological constant, with attractive force increasing as distance increases, which is a rough approximation to the gluon-mediated QCD force over hadron-sized distances, but not to the larger, nuclear-sized distances where mesons mediate the strong force (and where it falls off with increasing distance, instead of increasing with distance!). Nobody has succeeded in proving that AdS/CFT actually makes useful calculations that have not already been done by other approximate methods. All they have done is to hype fantasy. Even if the approximation works usefully, that won’t prove that string theory is right, any more than the ability of Ptolemy’s epicycles to predict the position of the sun proved that the sun orbits the Earth: it doesn’t!

This kind of ‘intellectual’ pure-mathematical parlour game may seem harmless or clever to you, but it is attractive not only to harmless pure mathematicians but also to physically ignorant and thus dangerous second-rate (or failed) pure mathematicians, who are not innovative and productive enough at pure mathematics to earn a place in such a department, but are good enough at textbook calculations to get into physics departments.

These people in some cases have an agenda of Platonic fantasy, turning the physics department into a trojan horse by which the idealist goals of ‘beautiful’ pure mathematics are sneakily foisted upon physics. Any opposition to this destruction of science brings angry hostility rather than reform from these bitter pseudo-physicists, who often work as charlatan ‘peer’ reviewers, censoring science, believing in a mathematical universe.

There is also a story of cynical vested interests in non-falsifiable ideas by the educational/research theoretical physics community, which goes as follows. Before the 1984 superstring revolution, there were a lot of theories which were falsifiable. People would work on those theories for their PhD thesis, then some experiment would disprove the theory. Then they had to go through life saying they *got their PhD in something that was disproved!* Not very impressive to potential employers! So the wise guys jumped on the string theory research wagon for stability; if string theory was not falsifiable, it was a safer bet, because in 20 years’ time your PhD on “stringy D-branes” will still look respectable, etc. This problem has been called the Catt Concept, as explained in the following Editorial from Popular Mechanics, May 1970:


‘The President put his name on the plaque Armstrong and Aldrin left on the moon and he telephoned them while they were there, but he cut America’s space budget to the smallest total since John Glenn orbited the Earth. The Vice-President says on to Mars by 1985, but we won’t make it by “stretching out” our effort. Perhaps NASA was too successful with Apollo. It violated the “Catt Concept”, enunciated by Britisher Ivor Catt. According to Catt, the *most secure project is the unsuccessful one, because it lasts the longest*.’ (Emphasis added.)

– Robert P. Crossley, Editorial, *Popular Mechanics,* Vol. 133, No. 5, May 1970, p. 14.

Thanks to censorship of criticisms, string theory has been securely funded for decades without success. E.g., compare the Apollo project with the Vietnam war for price, length and success. Both were initially backed by Kennedy and Johnson as challenges to Communist space technology and subversion, respectively. The Vietnam war – the unsuccessful project – sucked in the cash for longer, which closed down the successful space exploration project! Thus, the Catt Concept explains why the ongoing failure of string theory to be physics makes it a success in terms of killing off more successful alternative projects: it gets ongoing media attention, publicity, and funding which just keeps on coming, at the cost of alternative projects, including one which correctly predicted the cosmological acceleration of the universe to within observational error two years before it was detected!

CONTENTS

1. Mathematical lessons from classical gravitation (general relativity) and electromagnetism (Maxwell’s equations), with comments on Lunsford’s unification.

2. The spin of the graviton: evidence that all masses are exchanging gravitons, and that the spin of the graviton is 1 not 2 as claimed by Pauli and Fierz (tensor rank indicates whether field lines or accelerations are being modelled, and is not tied to field quanta spin, contrary to groupthink lying hype)

3. Implications for the Standard Model; changes to electroweak theory which allow gravitation and mass predictions; Feynman’s criticisms of the Standard Model and how these are overcome by the solution to a discrepancy in particle classification in existing beta decay analysis via weak bosons; replacing the Higgs field and current unification ideas

4. Quantitative predictions from the corrected Standard Model, which includes a complete theory of particle masses (replacing the ad hoc Higgs field), removes dogmatic, physically false symmetry breaking mechanisms and includes gravity

5. How arXiv and ‘peer’ review have used uncritical dogmatic censorship of alternative ideas in order to hype misinformed and ignorant non-falsifiable, unpredictive, pseudo-scientific groupthink fantasy; the analogy to the ‘peer’-review refusals in Nazi publications to allow the facts on eugenics to be presented, and the ‘100 authors against Einstein’ crusade, needed to suppress all scientific dissent against bigoted charlatans with a political agenda dressed up as ‘mainstream majority-backed science’

6. False modesty versus quantum gravity; obvious facts which you know, I know, you know I know, but which you maybe prefer to pretend that you believe I don’t know; downplaying the facts and being polite and modest in a paper (a) is absolutely no threat whatsoever to a system of censorship in which pseudo ‘peer’ reviewers are biased in favour of mainstream dogmas which have failed physically but claim to be ‘the only game in town’ (when in fact they falsely produce this illusion by censoring alternatives), and (b) actually allows ignorant censors to convincingly (as far as the ignorant media is concerned) dismiss the facts as ‘speculation’, by quoting the polite presentation of the facts in the paper as alleged evidence of weakness in those statements!

ABSTRACT. In May 1996, the quantum gravity mechanism of this paper predicted to within experimental error the small positive cosmological constant observationally confirmed two years later. Dark energy was accurately predicted. The greatest benefit of being unread is that your unread writings cannot possibly offend anyone. So we’re free to avoid diplomatic drivel and egotistically motivated false ‘modesty’, and explain why these facts are ignored.

**Update:**

Perelman has rejected the $1,000,000 Clay Mathematics Institute Millennium Prize for proving the Poincaré conjecture, which is relevant to the discussion of pure mathematical trash above: the most competent mathematicians aren’t those who go around sneering at other people, trying to censor out discoveries, or hyping stringy lies. They’re relatively quiet, decent, moral people who put ideas forward, and then don’t clamour to win immense materialistic prizes or give endless interviews.

‘This is similar to formulating a dynamical process which gradually “perturbs” a given square matrix, and which is guaranteed to result after a finite time in its rational canonical form.

‘Hamilton’s idea had attracted a great deal of attention, but no one could prove that the process would not be impeded by developing “singularities”, until Perelman’s eprints sketched a program for overcoming these obstacles. According to Perelman, a modification of the standard Ricci flow, called Ricci flow with surgery, can systematically excise singular regions as they develop, in a controlled way.

‘It is known that singularities (including those which occur, roughly speaking, after the flow has continued for an infinite amount of time) must occur in many cases. However, any singularity which develops in a finite time is essentially a “pinching” along certain spheres corresponding to the prime decomposition of the 3-manifold. Furthermore, any “infinite time” singularities result from certain collapsing pieces of the JSJ decomposition. Perelman’s work proves this claim and thus proves the geometrization conjecture.’

**Updates (22 July 2010):**

From the “comments” section to this post:

Hi Dirk,

Thanks! The emperor (Distler and other arXiv.org “advisers”) resorts to banning alternatives to string theory, such as Lunsford’s peer-reviewed paper*, from being hosted, and then falsely claims “string theory is the only game in town”.

Duh. Yeah, that’s because all other serious games are banned by law of arXiv.org, unless they’re wrong/not-even-wrong junk like Smolin’s loop quantum gravity.

The problem with “The Emperor’s New Clothes” is that if you read the original fairytale, when the Emperor realises that the invisible clothes he has been sold don’t actually exist, he decides to continue pretending that they do exist, so the farce continues. People always misinterpret the ending of that fairytale as if the Emperor was debunked. Nope. He was still in command of the situation, and didn’t even blush. If he wasn’t such an arrogant son of a bitch, he wouldn’t have become the emperor, or the Distler figurehead, in the first place!

Cheers,

Nige

—-

* Lunsford’s paper, http://cdsweb.cern.ch/record/688763 was published in the peer-reviewed journal Int. J. Theor. Phys. 43 , 1 (2004) 161-177 but then was deleted and banned without explanation from arXiv.org, see Lunsford’s comment: http://www.math.columbia.edu/~woit/wordpress/?p=128&cpage=1#comment-1920:

“… the proper way to do it is to put it on arxiv. I don’t know why they blacklisted it – all I know is, it got sponsored, got put up, and then vanished – never got any explanation. I would like for the thing to be on arxiv just on general principles, but now that it’s actually been peer-reviewed and published, it’s not a big issue to me any more.”

————

NEW SCIENTIST, 22 May 2010, page 40, “Muon whose army?” by Kate McAlpine:

“For years the E821 collaboration, based at Brookhaven National Laboratory in Upton, New York, studied particles called muons. These are unstable subatomic particles similar to electrons but about 200 times as heavy. The research focussed on a quantum property of the muon known as its magnetic moment, and it found the Standard Model wanting. **According to the measurements, there is a mere 0.27 per cent probability that the Standard Model is correct. …**

“In fact, the magnetic moment is so sensitive it is affected by the presence of particles unknown in Dirac’s day, including quarks, W and Z bosons, and the [imaginary] Higgs boson. Indeed, quantum mechanics tells us that virtual versions of any kind of particle – including ones we haven’t discovered yet – can pop into existence by borrowing energy for a passing instant. …

“The muon is affected much more than the electron because of its greater mass. That’s because they have more energy available for virtual particles to borrow. The magnetic moment of the electron is one of the most closely verified predictions of the standard model …. Not so for the muon. The first signs that all was not well came shortly after the E821 experiment got under way in the mid-1990s.”

See www.arxiv.org/abs/1001.4528 (“After a brief review of the muon g-2 status, we discuss hypothetical errors in the Standard Model prediction that might explain the present discrepancy with the experimental value. None of them seems likely. In particular, a hypothetical increase of the hadron production cross section in low-energy e+ e- collisions could bridge the muon g-2 discrepancy, but it is shown to be unlikely in view of current experimental error estimates. If, nonetheless, this turns out to be the explanation of the discrepancy, then the 95% CL upper bound on the Higgs boson mass is reduced to about 135GeV which, in conjunction with the experimental 114.4GeV 95% CL lower bound, leaves a narrow window for the mass of this fundamental particle.”) for an analysis of how much the contribution of hadrons to the muon’s magnetic moment would need to be increased to bring the prediction into agreement with measurements. The problem with this is that it causes the theoretical mass of the W boson to fall below the measured value. So the authors of that arxiv paper, Massimo Passera et al., “wondered what effect the various possible Higgs masses would have on the muon’s magnetic moment. To match the E821 result, their calculations suggest that the Higgs mass is far lower than 114 GeV, which has already been ruled out [NC: but there is no Higgs, mass is given by Z bosons of 91 GeV each!]. Taken at face value, this raises the uncomfortable possibility that the standard model is wrong, as long as E821’s results are bona fide.”
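As a gloss on the “0.27 per cent probability” figure quoted above from New Scientist: translated into the “sigma” language physicists usually use (two-sided Gaussian convention; a sketch, not part of the E821 analysis itself):

```python
# Convert the quoted p-value for the Standard Model g-2 prediction into an
# equivalent number of standard deviations (two-sided Gaussian convention).
from statistics import NormalDist

p_value = 0.0027  # "0.27 per cent probability that the Standard Model is correct"
sigma = NormalDist().inv_cdf(1 - p_value / 2)
print(f"{sigma:.1f} sigma")  # about 3.0 sigma, the usual statement of the discrepancy
```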

Heavy SUSY partners are postulated as an ad hoc explanation for the discrepancy by string theorists: ‘“There are supersymmetric theories that would explain this discrepancy very well,” says Passera.’

This is because virtual particles in the vacuum BOOST the magnetic moment of a lepton.

Karl Popper demarcated science from pseudo-science by arguing for falsifiable predictions.

The internationally-accepted value of the proton’s charge radius is 0.8768 femtometers. This value is based on measurements involving a proton and an electron.

However since July 5, 2009 an international research team has been able to make measurements involving a proton and a negatively-charged muon. After a long and careful analysis of those measurements the team concluded that the root-mean-square charge radius of a proton is “0.84184(67) fm, which differs by 5.0 standard deviations from the CODATA value of 0.8768(69) fm.”[11]
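The quoted “5.0 standard deviations” is easy to reproduce from the two radii and their errors (combining the errors in quadrature, the usual convention):

```python
# Difference between the muonic-hydrogen proton radius and the CODATA value,
# expressed in units of the combined (quadrature) uncertainty.
import math

r_muonic, err_muonic = 0.84184, 0.00067  # fm, from the PSI measurement
r_codata, err_codata = 0.8768, 0.0069    # fm, CODATA value

combined_err = math.hypot(err_muonic, err_codata)
n_sigma = (r_codata - r_muonic) / combined_err
print(f"{n_sigma:.1f} sigma")  # about 5.0
```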

The international research team that obtained this result at the Paul-Scherrer-Institut (PSI) in Villigen (Switzerland) includes scientists from the Max Planck Institute of Quantum Optics (MPQ) in Garching, the Ludwig-Maximilians-Universität (LMU) Munich and the Institut für Strahlwerkzeuge (IFWS) of the Universität Stuttgart (both from Germany), and the University of Coimbra, Portugal.[12][13] They are now attempting to explain the discrepancy, and re-examining the results of both previous high-precision measurements and complicated calculations. If no errors are found in the measurements or calculations, it could be necessary to re-examine the world’s most precise and best-tested fundamental theory: quantum electrodynamics.[14]

This is done by replacing the electron in an atom with a muon:

In order to determine the proton radius, the researchers replaced the single electron in hydrogen atoms with a negatively-charged muon. Muons are very much like electrons, but they are 200 times heavier. According to the laws of quantum physics, the muon must therefore travel 200 times closer to the proton than the electron does in an ordinary hydrogen atom. In turn, this means that the characteristics of the muon orbit are much more sensitive to the dimensions of the proton. The muon ‘feels’ the size of the proton and adapts its orbit accordingly. “In fact, the extension of the proton causes a change in the so-called Lamb-shift of the energy levels in muonic hydrogen”, Dr. Randolf Pohl from the Laser Spectroscopy Division of Prof. Theodor W. Hänsch (Chair of Experimental Physics at LMU and Director at MPQ) explains. “Hence the proton radius can be deduced from a spectroscopic measurement of the Lamb shift.”
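The “200 times closer” claim can be checked roughly: the Bohr radius scales inversely with the reduced mass of the orbiting particle, so the proton’s finite mass pulls the factor down a little from the naive mass ratio (a sketch with masses in electron-mass units):

```python
# Rough check of the muonic-hydrogen orbit shrinkage: the Bohr radius
# scales as 1/(reduced mass) of the orbiting particle.
m_e, m_mu, m_p = 1.0, 206.77, 1836.15  # electron, muon, proton masses (units of m_e)

def reduced_mass(m, M):
    """Reduced mass of a two-body system."""
    return m * M / (m + M)

shrink_factor = reduced_mass(m_mu, m_p) / reduced_mass(m_e, m_p)
print(f"muon orbit is ~{shrink_factor:.0f}x smaller")  # ~186x, i.e. roughly 200x
```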

MAXWELL’S RANK-1 EQUATIONS AND GENERAL RELATIVITY RANK-2 EQUATIONS

Newton’s law of gravitation dates from 1665 (it was published in the *Principia* in 1687), and Coulomb’s electrostatic force law was proposed in 1785.

In Newton’s theory, force F is related to gravitational charge (mass), m, by the relationship

*F = ma,*

where acceleration, *a*, leads to “rank-2”, i.e. second-order, spacetime equations because *a = d ^{2}x/dt^{2}.*

In Maxwell’s equations, the corresponding force laws are

*F = qE* and *F = qvB* sin *θ*,

where *q* is electric charge, *qv* sin *θ* is effectively the magnetic charge, *E* is electric field, and *B* is magnetic field.

This definition leads to “rank-1”, i.e. first-order, differential equations, because the fields *E* and *B* are not represented in electromagnetism by rank-2 or second-order spacetime equations like acceleration, *a = d ^{2}x/dt^{2}.* The fields *E* and *B* are by contrast defined as first-order or rank-1 gradients: e.g. *E* is the gradient of the potential in volts/metre, *E = −dV/dx* (the minus sign indicating that the field points from high to low potential).

This is the fundamental reason why Maxwell’s equations are rank-1, whereas the accelerations and spacetime curvatures in general relativity utilize rank-2 tensors.
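The first-order versus second-order distinction drawn here can be illustrated numerically (a toy sketch: finite differences on an assumed potential *V(x) = x²* and trajectory *x(t) = t²*; “rank” is used in the text’s sense of derivative order):

```python
# Toy numerical illustration: E is a FIRST derivative of the potential
# (E = -dV/dx), while acceleration is a SECOND derivative of position
# (a = d2x/dt2).  V(x) = x**2 and x(t) = t**2 are assumed examples.
h = 1e-4  # step size for the finite differences

def first_derivative(f, x):
    # central difference: first order ("rank-1" in the text's usage)
    return (f(x + h) - f(x - h)) / (2 * h)

def second_derivative(f, x):
    # central difference: second order ("rank-2" in the text's usage)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

V = lambda x: x**2  # assumed potential (volts as a function of metres)
X = lambda t: t**2  # assumed trajectory (metres as a function of seconds)

E = -first_derivative(V, 1.0)   # electric field at x = 1
a = second_derivative(X, 1.0)   # acceleration at t = 1
print(E, a)                     # -2.0 and 2.0, to rounding error
```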

New Higgs Results From the Tevatron

Just got back from vacation this morning. Luckily I managed to be away for the blogosphere-fueled Higgs rumors, returned just in time to catch the released results which appeared in a Fermilab press release minutes ago. The ICHEP talk in Paris announcing these results will start in about half an hour, slides should appear here.

The bottom line is that CDF and D0 can now exclude (at 95% confidence level) the existence of a Standard Model Higgs particle over a fairly wide mass range in the higher mass part of the expected region: from 158 to 175 GeV. If the SM Higgs exists, it appears highly likely that it is in the region between 114 GeV (the LEP limit) and 158 GeV. The most relevant graph is here. It shows an excess of about 1 sigma over the entire region 125 GeV to 150 GeV, which unfortunately is nothing more than the barest possible hint of something actually being there.

“If there really were a Higgs at a certain mass, one expects that the experiments would start to see an excess above expected background, and this would make their 95% confidence level worse than expected. However, the excess they are seeing is still so small as to be quite consistent with no real signal.”