Rabinowitz and quantum gravity

Dr Mario Rabinowitz, the author of the arXiv paper “Deterrents to a Theory of Quantum Gravity,” has kindly pointed out his approach to the central problem I’m dealing with.  (Incidentally, the problem he has with quantum gravity does not apply to the quantum gravity mechanism I’m working on, where gravity is a residue of the electromagnetic field caused by the exchange of electromagnetic gauge bosons, which allows two kinds of addition: a weak, always attractive force, and a force about 10^40 times stronger with both attractive and repulsive mechanisms.)  His paper, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No. 11, 8-13 (1990), is the earliest I’m aware of which comes up with a general result equal to Louise Riofrio’s equation MG = tc^3.

He sets the gravitational force equal to the inertial force, F = mMG/R^2 = [mM/(M + m)]v^2/R ≈ (mc^2)/R.  This gives MG = Rc^2 = (ct)c^2 = tc^3, which is identical to Riofrio’s equation.

Here is my detailed treatment of Mario’s analysis.  The cosmological recession of Hubble’s law v = HR, where H is the Hubble parameter and R is radial distance, implies an acceleration in spacetime (since R = ct) of a = dv/dt = d(HR)/dt = Hv = (v/R)v = v^2/R.  (This is not controversial or speculative; it is just employing calculus on Hubble’s v = HR, in the Minkowski spacetime we can observe, where: ‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.) Hence the outward force on mass m due to recession is F = ma = mv^2/R = mc^2/R for extreme distances, where most of the mass is and where redshifts are great, so that v ~ c.
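As a rough numeric aside, the Hubble acceleration a = Hv is easy to evaluate for the most distant matter, where v ~ c.  This sketch assumes the H ≈ 70 km/s/Mpc ≈ 2.27*10^-18 s^-1 value quoted later in this post:

```python
# Sketch: the Hubble acceleration a = dv/dt = Hv, evaluated at v ~ c.
# The H value is an assumption, taken from the observational figure used in this post.
H = 2.27e-18  # Hubble parameter, s^-1 (~70 km/s/Mpc)
c = 2.998e8   # speed of light, m/s

a = H * c     # outward acceleration of the most distant receding matter, m/s^2
print(a)      # ~6.8e-10 m/s^2
```

Tiny as it is, this acceleration applied to the immense receding mass of the universe is what gives the large outward force discussed below.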

Hence the inward (attractive) gravity force is balanced by this outward force:

 F = mMG/R^2 = mc^2/R


 MG = Rc^2 = (ct)c^2 = tc^3.

(This result is physically and dimensionally correct but quantitatively is off by a dimensionless correction factor of e^3 ≈ 20, because it ignores the dynamics of quantum gravity at long distances: the density rises as time approaches zero, which increases the effective gravity effect due to the expansion of the universe toward infinity, while the strength of the gravity-causing exchange radiation falls as time goes towards zero, due to the extreme redshift of that radiation, which weakens gravity.  However, the physical arguments above are very important and can be compared to those in the mechanism at http://feynman137.tripod.com/.  The correct formula is e^3 MG = tc^3, where, because of the lack of gravitational retardation in quantum gravity, t = 1/H where H is the Hubble parameter, instead of t = (2/3)/H which is the case for the classic Friedmann scenario with gravitational deceleration.)
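For illustration, the corrected formula e^3 MG = tc^3 with t = 1/H fixes the effective mass M of the universe.  This minimal sketch just evaluates it numerically; the H and G inputs are assumptions taken from the observational figures used elsewhere in this post:

```python
import math

# Solve e^3 * M * G = t * c^3 for M, with t = 1/H (no gravitational retardation).
H = 2.27e-18    # Hubble parameter, s^-1 (assumed, ~70 km/s/Mpc)
G = 6.674e-11   # measured gravitational constant, N m^2 kg^-2
c = 2.998e8     # speed of light, m/s

t = 1.0 / H
M = t * c**3 / (math.exp(3) * G)
print(M)        # ~9e51 kg, an illustrative effective mass for the universe
```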

Historically, since Mario’s 1990 paper this result has been rediscovered three times, each time under different circumstances:

(1) M. Rabinowitz, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No.11, 8-13 (1990).

(2) My own treatment, Electronics World, various issues (October 1996-April 2003), based on a physical mechanism of gravity (the outward force of matter in the receding universe is balanced, by Newton’s 3rd law, by an inward force of gauge boson pressure, which causes gravity by asymmetries: each fundamental particle acts as a reflecting shield, so masses shield one another and get pushed together by gauge boson radiation, predicting the value of G quite accurately).

[Initially I had a crude physical model of the Dirac sea, in which the motion of matter outward resulted in an inward motion of the Dirac sea to fill in the volume being vacated.  This was objected to strongly for being a material-pressure LeSage gravity mechanism, although it makes the right prediction for gravity strength (unlike other LeSage models) and utilises the right form of the Hubble acceleration outward, a = dv/dt = d(HR)/dt = Hv.  This was published in Electronics World from October 1996 (letters page item) to April 2003 (a major six-page paper).  A gauge boson exchange radiation based calculation for gravity was then developed in 2005 which does the same thing, without the Dirac sea material objections to LeSage gravity which the previous version had.  I’ve little free time, but am rewriting my site into an organised book which will be available free online.  The correct formula from http://feynman137.tripod.com/ for the gravity constant is G = (3/4)H^2/(Pi*Rho*e^3), where Rho is the observed (not Friedmann critical) density of visible matter and dust, etc.  This equation is equivalent to e^3 MG = tc^3, and differs from the Friedmann critical density result by a factor of approximately 10, predicting that the amount of dark matter is less than predicted by the critical density law.  In fact, you get a very good prediction of the gravity constant from the detailed Yang-Mills exchange radiation mechanism by ignoring dark matter, as a first approximation.  Since dark matter has never been observed in a laboratory, but is claimed to be abundant in the universe, you have to ask why it is avoiding laboratories.  In fact the most direct evidence claimed for it doesn’t reveal any details about it.
It is required in the conventional (inadequate) approximations to gravity, but the correct quantum gravity, which predicted the non-retarded expansion of the universe in 1996, two years before Perlmutter’s observational data confirmed it, reduces the amount of dark matter dramatically and makes various other validated predictions.]

(3) John Hunter published a conjecture on page 17 of the 12 July 2003 issue of New Scientist, suggesting that the rest mass energy of a particle, E = mc^2, is equal to its gravitational potential energy with respect to the rest of the matter in the surrounding universe, E = mMG/R.  This leads to E = mc^2 = mMG/R, hence MG = Rc^2 = (ct)c^2 = tc^3.  He has the conjecture on a website here, which contains an interesting and important approach to solving the galactic rotation curve problem without inventing any unobserved dark matter, although his cosmological speculations on linked pages are unproductive and I wouldn’t want to be associated with those non-predictive guesses.  Theories should be built on facts.

(4) Louise Riofrio came up with the basic equation MG = tc^3 by dimensional analysis and has applied it to various problems.  She correctly concludes that there is no dark energy, but one issue is what is varying in the equation MG = tc^3 to compensate for time increasing on the right hand side.  G is increasing with time, while M and c remain constant.  This conclusion comes from the detailed gravity mechanism.  Contrary to claims by Professor Sean Carroll and the late Dr Edward Teller, an increasing G does not vary the sun’s brightness or the fusion rate in the first minutes of the big bang (the electromagnetic force varies in the same way, so Coulomb’s law of repulsion between protons was different, offsetting the variation in compression on the fusion rate due to varying gravity), but it does correctly predict that gravity was weaker in the past when the cosmic background radiation was emitted, thus explaining quantitatively why the ripples in that radiation due to mass were so small when it was emitted 300,000 years after the big bang.  This, together with the lack of gravitational retardation on the rapid expansion of the universe (gravity can’t retard expansion between relativistically receding masses, because the gravity-causing exchange radiation is redshifted, losing its force-causing energy, just as ordinary light is redshifted in cases of rapid recession; this redshift effect is precisely why we don’t see a blinding light and lethal radiation from extreme distances corresponding to early times after the big bang), gets rid of the ad hoc inflationary universe speculations.

I’m disappointed by Dr Peter Woit’s new post on astronomy where he claims astronomy is somehow not physics: ‘When I was young, my main scientific interest was in astronomy, and to prove it there’s a very geeky picture of me with my telescope on display in my apartment, causing much amusement to my guests (no way will I ever allow it to be digitized, I must ensure that it never appears on the web). By the time I got to college, my interests had shifted to physics…’

I’d like to imagine that Dr Woit just means that current claims of observing ‘evolving dark energy’ and ‘dark matter’ (with lots of alleged evidence which turns out to be gravity-caused distortions that could be caused by massive neutrinos or anything, and without a fig leaf of direct laboratory confirmation for the massive quantity postulated to fix epicycles in the current general relativity paradigm, which ignores quantum gravity) are not physics.  However, he is unlikely to start claiming that the mainstream ‘lambda-CDM’ model of cosmology, with its time-varying dark energy ‘cosmological constant’ and cold dark matter, is nonsense, because in his otherwise excellent book Not Even Wrong he uses the false, ad hoc, small positive fixed value of the cosmological constant to ridicule the massive value predicted by force-unification considerations in string theory.  Besides, if he knows little of modern astronomy and cosmology, he will not be in an expert position to evaluate and criticise it competently.  I hope Dr Woit will immerse himself in the lack of evidence for modern cosmology and perhaps come up with a second volume of Not Even Wrong addressed at the lambda-CDM model and its predictive, checkable solution using a proper system of quantum gravity.

For my earlier post on this topic, see https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl/ 

Other news: my domain http://quantumfieldtheory.org/ is up and running with some draft material – now I just have to write the free quantum field theory textbook to put on there!


The current observational value of H is about 70 +/- 2.4 km/s/Mparsec ~ 2.27*10^-18 s^-1, and Rho causes the difficulty today.  The observed visible matter (stars, hot gas clouds) has long been estimated to have a mean density around us of ~4*10^-28 kg/m^3, although studies show that this should be increased for dust by about 15% and for various other factors.   The prediction G = (3/4)H^2/(Pi*Rho*e^3) is a factor of e^3/2 ~ 10 times smaller than that in the Friedmann critical density formula.  Its accuracy depends on what evidence you take for the density.  It happens to agree exactly with the statement by Hawking in 2005:
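The conversion of H from the astronomers’ units into SI can be checked directly (the metres-per-megaparsec figure is the standard conversion):

```python
# Convert H = 70 km/s/Mpc into s^-1.
KM_PER_MPC = 3.0857e19    # kilometres in one megaparsec (standard value)
H_si = 70.0 / KM_PER_MPC  # s^-1
print(H_si)               # ~2.27e-18 s^-1, the figure used in the text
```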

‘When we add up all this dark matter [which accounts for the high speed of the outermost stars orbiting spiral galaxies like the Milky Way, and the high speed of galaxies orbiting in clusters of galaxies], we still get only about one-tenth of the amount of matter required to halt the expansion [the critical density in Friedmann’s solution]’.

– S. Hawking and L. Mlodinow, A Briefer History of Time, Bantam, London, 2005, p65.

Rearranging, the formula predicts the density is 9.2*10^-28 kg/m^3, about twice the observed density if that is taken as the traditional figure of 4*10^-28 kg/m^3.  However, the latest estimates of the density are higher and similar to the predicted value of 9.2*10^-28 kg/m^3, for example the following:
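That rearrangement is quick to verify numerically, using the H value above and the measured G (both inputs are the figures quoted in this post):

```python
import math

# Rearranging G = (3/4) H^2 / (Pi * Rho * e^3) for the density Rho.
H = 2.27e-18    # Hubble parameter, s^-1
G = 6.674e-11   # measured gravitational constant, N m^2 kg^-2
rho = 0.75 * H**2 / (math.pi * G * math.exp(3))
print(rho)      # ~9.2e-28 kg/m^3, the predicted density quoted in the text
```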

‘Astronomers can estimate the mass of galaxies by totalling up the number of stars in the galaxy (about 10^9) and multiplying by the mass of one star, or by observing the dynamics of orbiting parts of a galaxy. Next they add up all the galactic mass they can see in this region and divide by the volume of space they are looking at. If this is done for bigger and bigger regions of space the mean density approaches a figure of about 10^-30 grams per cubic centimetre or 10^-27 kg m^-3. You will realise that there is some doubt in this value because it is the result of a long chain of estimations.’

Putting this approximate value of Rho = 10^-27 kg m^-3 into G = (3/4)H^2/(Pi*Rho*e^3) with H as before gives G = 6.1*10^-11 N m^2 kg^-2, which is only 9% low, and although the experimental error in density observations is relatively high, it will improve with further astronomical studies, just as the Hubble parameter error has improved with time.  This provides a further check.  (Other relevant checks on quantum gravity are discussed here, top post.)
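The arithmetic behind that figure is a one-liner; this minimal sketch uses the density estimate just quoted:

```python
import math

# Predicted G = (3/4) H^2 / (Pi * Rho * e^3), with the ~1e-27 kg/m^3 density estimate.
H = 2.27e-18    # Hubble parameter, s^-1
rho = 1.0e-27   # approximate observed density, kg/m^3
G_pred = 0.75 * H**2 / (math.pi * rho * math.exp(3))
print(G_pred)   # ~6.1e-11 N m^2 kg^-2, somewhat below the measured 6.674e-11
```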

Here’s an extract from a response I sent to Dr Rabinowitz on 8 January, regarding the issue of gauge bosons and the accuracy of the calculation of G in comparison to observed data:

“Are your gauge bosons real or virtual?” What’s the difference? It’s the key question in many ways.  Obviously they are real in the sense they really produce electric forces. But you can’t detect them with a radio receiver or other instrument designed to detect either oscillatory waves or discrete particles.
“I am troubled by your force calculation (~10^43 N) which is an input to your derivation of G.  I’m inclined to think that the force calculation could be off by a large factor, so that one may question that ‘the result predicts gravity constant G to within 2%’.”

First, the “outward force” is ambiguous.  If you ignore the fact that the more distant observable universe has higher density, then you get one figure. If you assume that density increases to infinity with distance, you get another result for outward force (infinity).  Finally, if you are interested in the inward reaction force carried by radiation (gauge bosons) then you need to allow for the redshift of those due to the recession of the matter emitting them, which cancels out the infinity due to density increasing, and gives a result of about 7 x 10^43 N or whatever.  In giving outward force as ~10^43 N, I’m giving a rough figure which anyone will be able to validate approximately without having to do the more complicated calculations.
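As a rough consistency check on these orders of magnitude, note that if the universe mass is taken as M = tc^3/G = c^3/(GH) (the internally consistent choice from the equation earlier in this post, not an independent observation), the simple outward force F = Ma with a = cH collapses to c^4/G:

```python
# Rough outward-force estimate: F = M * a with M = c^3/(G*H) and a = c*H,
# which cancels to F = c^4/G, independent of H. An illustrative sketch only.
G = 6.674e-11   # N m^2 kg^-2
c = 2.998e8     # m/s
F = c**4 / G
print(F)        # ~1.2e44 N, the same order as the ~10^43-10^44 N figures in the text
```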

I used two published best estimates for the Hubble parameter and the density of the visible matter plus dust in the universe.  These allowed G to be predicted.  The result was within 2% of the empirically known value of G.  I used 70 km/s/Mparsec for H a decade ago and that is still the correct figure, although the uncertainty is falling.  A decade ago, there was no estimate to the uncertainty because the data clustered between two values, 50 and 100.  Now there is agreement that the correct value of H is very close to 70.  …  I don’t think there is any massive error involved in observational astronomy.  There used to be a confusion because of two types of variable star, with Hubble using the wrong type to estimate H.  Hubble had a value of 550 for H, many times too high.  That sort of error is long gone.

Recent response to Professor Landis about general relativity:

“Ultimately, it’s all in the experimental demonstration.  If Einstein’s theory hadn’t been confirmed by tests, it would have been abandoned regardless of how pretty or ugly it may be.” – Geoffrey Landis

What about string theory, which has been around since 1969, can’t be tested, and doesn’t hold out any hope?  I disagree: the tests of general relativity would first have been repeated, and if they still didn’t agree, then an additional factor would have been invented/discovered to make the theory correct.

Newton’s gravity law in tensors would be R_uv = 4*Pi*G*T_uv (in units where c = 1),

which is false because the divergence of T_uv doesn’t vanish.  Hence it violates conservation of energy.  Einstein replaces T_uv with T_uv – (1/2)(g_uv)T, which does have a vanishing divergence and so doesn’t contradict the conservation of energy.  If the solutions of general relativity are wrong, then you would need to find out physically what is causing the discrepancy.

The Friedmann solution of general relativity predicted that gravity slows down expansion.  Observations by Perlmutter on distant supernova showed that there was something wrong.  Instead of abandoning general relativity, a suitable small positive “cosmological constant” was adopted to keep everything fine.  Recently, however, more detailed observations show that there is evidence that such a “cosmological constant” lambda would be varying with time.

Discussion by email with Dr Rabinowitz:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 5:27 AM

Subject: Paul Gerber is an unsung hero

Dear Nigel,

… Einstein’s General Relativity (EGR)  makes the problem much more difficult than your simple approach.  

  Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.  One should be able to calculate the time constant for slowing down a body.  …

     Best regards,

From: Nigel Cook

To: Mario Rabinowitz

Sent: Wednesday, January 17, 2007 7:07 PM

Subject: Re: Paul Gerber is an unsung hero

Dear Mario,

“Since the leading edge of the Universe is moving at nearly c, one needs to bring relativity into the equations.  Special relativity (without boosts) can’t do it.  Einstein’s General Relativity (EGR) makes the problem much more difficult than your simple approach.”

The mechanism of relativity comes from this simple approach: the radiation pressure on a moving object causes the contraction effect.  Any inconsistency is a failure of general or special relativity, which are mathematical structures based on principles.  An example of a failure is the lack of deceleration of the universe…

“Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.”

This is the objection of Feynman to LeSage in his November 1964 Cornell lectures on the Character of Physical Law.  The failure of LeSage has been discussed in detail by people from Maxwell to Feynman.  I have some discussion of LeSage at http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html

See http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html where the Dirac sea (or the equivalent Yang-Mills radiation exchange pressure on moving objects) is the mechanism for relativity:

The Dirac sea was shown to mimic the SR contraction and mass-energy variation; see C. F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62 (1949), pp. 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E0/(1 – v^2/c^2)^(1/2), where E0 is the potential energy of the dislocation at rest.’

The force inward on every point is enormous, 10^43 Newtons.  General relativity gives the result that the Earth’s radius is contracted by (1/3)MG/c^2 = 1.5 millimetres.  The physical mechanism of this process (gravity dynamics by radiation pressure of exchange radiation) is the basis for gravitational ‘curvature’ of spacetime in general relativity, because this shrinking of radius is radial only: transverse directions (e.g. the circumference) are not affected.  Hence, the ratio circumference/radius will vary depending on the mass of the object, unless you invent a fourth dimension and maintain Pi by stating that spacetime is curved by the extra dimension.
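The quoted 1.5 mm contraction of the Earth’s radius is easy to reproduce, taking the Earth’s mass from standard tables:

```python
# Radial contraction (1/3) * G * M / c^2 for the Earth.
G = 6.674e-11    # gravitational constant, N m^2 kg^-2
M = 5.972e24     # Earth mass, kg (standard value)
c = 2.998e8      # speed of light, m/s
dr = G * M / (3 * c**2)
print(dr)        # ~1.5e-3 m, i.e. about 1.5 millimetres
```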

LeSage (who apparently plagiarised Fatio, a friend of Newton) was also dismissed for various other equally false reasons:

1. Maxwell claimed that the force-causing radiation would have to be so great it would heat up objects until they were red hot.  This is vacuous for various reasons: the strong nuclear force (unknown in Maxwell’s time) is widely accepted to be mediated by pions and other particles, and is immensely stronger than gravity, but doesn’t cause things to melt.  Heat transfer depends on how energy is coupled.  It is known that gravity and other forces are indirectly coupled to particles via a vacuum field that has mass and other properties.

2. Several physicists in the 1890s wrote papers which dismissed LeSage by claiming that any useful employment of the mechanism makes gravity depend on the mass of atoms rather than on the surface area of a planet, and so requires the gravity causing field to be able to penetrate through solid matter, and that therefore matter must be mainly void, with atoms mainly empty.  This appeared absurd.  But when X-rays, radioactivity and the nuclear atom confirmed LeSage, he was not hailed as having made a successful prediction, confirmed experimentally.  The later mainstream view of LeSage was summed up by Eddington: ‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, ‘Space Time and Gravitation’, Cambridge University Press, 1921, p64.  This is partly correct in the sense that there was no numerical prediction from LeSage that could be tested.

3. Feynman’s objection assumes that the force-carrying radiation interacts chaotically with itself, like gas molecules, and would fill in ‘shadows’ and cause drag on moving objects by striking them and carrying away momentum randomly in any direction.  This is a straw man argument: Feynman should have considered the Yang-Mills exchange radiation as the only basis for forces below the infrared cutoff, i.e., beyond 1 fm from a particle core.

The gas of creation-annihilation loops only occurs above the IR cutoff.  It is ironic that Feynman missed this, given his own major role in discovering renormalization, which is evidence for the IR cutoff.

Best wishes,


From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 7:51 PM

Subject: Contradictory prediction of the LeSage model to that of Newton

Dear Nigel,

  Thanks for addressing the issues I raised.  

  I know very little about the LeSage model, its critics, and its proponents.  Nevertheless, let me venture forth.  Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks:  One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.

        Best regards,

From: Nigel Cook

To: Mario Rabinowitz

Sent: Thursday, January 18, 2007 10:57 AM

Subject: Re: Contradictory prediction of the LeSage model to that of Newton

Dear Mario,

It is just a very simple form of radiation shielding.  Each fundamental particle is found to have a gravity shielding cross-section of Pi*R^2 where R = 2GM/c^2, M being the mass of the particle.  This precise result, that the black hole horizon area is the area of gravitational interactions, is not a fiddle to make the theory work, but instead comes from comparing the results of two different derivations of G, each derivation being based on a different set of empirically-founded assumptions or axioms.

It is also consistent with the idea of Poynting electromagnetic energy current being trapped gravitationally to form fermions from bosonic energy (the E-field lines are spherically symmetric in this case, while the B-field lines form a torus shape which becomes a magnetic dipole at long distances because the polarized vacuum around the electron core shields transverse B-field lines as it does radial E-field lines, but doesn’t of course shield radial – ie polar – B-field lines).

Notice that the black hole radius of an electron is many orders of magnitude smaller than the Planck length.  The idea that gravity will be reduced by particles being directly behind one another is absurd, because the gravitational interaction cross-section is so small.  You can understand the small size of the gravitational cross-section when you consider that the inward force of gauge boson radiation is something of the order of 10^43 N, directed towards every particle.  This force only requires a tiny shielding to produce a large gravitational force.
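The scale claim is simple to check numerically, taking the electron mass and Planck length from standard tables:

```python
# Black hole horizon radius R = 2GM/c^2 for an electron, compared to the Planck length.
G = 6.674e-11            # gravitational constant, N m^2 kg^-2
c = 2.998e8              # speed of light, m/s
m_e = 9.109e-31          # electron mass, kg (standard value)
l_planck = 1.616e-35     # Planck length, m (standard value)

R = 2 * G * m_e / c**2
print(R)                 # ~1.35e-57 m, roughly 22 orders of magnitude below l_planck
```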

There are obviously departures produced by this model from standard general relativity under extreme circumstances.  One is that you can never have a gravitational force, regardless of how big the mass is, that exceeds 10^43 N.  I don’t list this as a prediction in the list of predictions on my home page, because it is clearly not a falsifiable or checkable prediction, except near a large black hole which can’t very well be examined.  The effect of one mass being behind the other, and so not adding any additional geometrical shielding to a situation, is dealt with in regular radiation shielding calculations.  If an amount of shielding material H is enough to cut the gravity-causing radiation pressure by half, the statistical effect of an amount M is that the shielded pressure fraction will not be f = 1 – (0.5M/H), but will instead be f = exp{–M(ln 2)/H}.

However, we know mathematically that f = 1 – (0.5M/H) becomes a brilliant approximation to f = exp{-M(ln 2)/H} when M << H.  Calculations show that you will generally have to have a mass approaching the mass of the universe in order to get any significant effect whereby “overlap” issues become effective.
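That approximation claim can be illustrated directly.  The two forms differ slightly in slope (ln 2 versus 0.5), so the agreement is approximate rather than exact, but both pass through 1 at M = 0 and through 0.5 at M = H, and the gap between them shrinks as M/H falls:

```python
import math

def f_linear(M, H):
    # Simple linear shielding estimate: transmitted pressure fraction.
    return 1.0 - 0.5 * M / H

def f_exponential(M, H):
    # Standard shielding attenuation, with H the half-value thickness.
    return math.exp(-M * math.log(2) / H)

# The difference between the two estimates shrinks as M/H -> 0.
for x in (0.5, 0.05, 0.005):        # trial M/H ratios
    print(x, f_linear(x, 1.0) - f_exponential(x, 1.0))
```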

“Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks: one rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.”

You or I need to make some calculations to check this.  The problem here is that I don’t immediately see the mechanism by which you think that there would be a reduction in gravity, or how much of a reduction there would be; do you allow for mass increase due to speed of rotation, or is that ignored?  Many of the “criticisms” that could be laid against a LeSage gravity could also be laid against the Standard Model SU(3)xSU(2)xU(1) forces, which again use exchange radiation.  You could suggest that Yang-Mills quantum field theory would predict a departure from Coulomb’s law for a large charged rotating disc, along the plane of the disc.

To put this another way, how far should someone go into trying to disprove the model, or resolve all questions, before trying to publish?  This comes down to the question of time. 

Can I also say that the calculations for http://quantumfieldtheory.org/Proof.htm were extremely difficult to do for the first time.  The diagram http://quantumfieldtheory.org/Proof_files/Image31.gif is the result of a great deal of effort in trying to make calculations, not the other way around.  The clear picture emerged slowly:

“The universe empirically looks similar in all directions around us: hence the net unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (see diagram).  The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R^2.  The ‘clever’ mathematical bit is that the shielding area of a local mass is projected on to this area by very simple geometry: the local mass of say the planet Earth, the centre of which is distance r from you, casts a ‘shadow’ (on the distant surface 4*Pi*R^2) equal to its shielding area multiplied by the simple ratio (R/r)^2.  This ratio is very big.  Because R is a fixed distance, as far as we are concerned for calculating the fall of an apple or the ‘attraction’ of a man to the Earth, the most significant variable is the 1/r^2 factor, which we all know is the Newtonian inverse square law of gravity.

“Illustration above: exchange force (gauge boson) radiation force cancels out (although there is compression equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area since the net force sideways is the same in each direction unless there is a shielding mass intervening. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well established facts of quantum field theory, recession, etc.”
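The geometric step in that argument, that the shadow projection makes the net force go as 1/r^2 with the fixed distance R cancelling out, can be sketched in a few lines (the function name is mine, for illustration only):

```python
import math

def shielded_fraction(A, r, R):
    # A shield of cross-section A at distance r casts a shadow of area A*(R/r)^2
    # on the distant sphere of radius R; divide by the sphere's area 4*pi*R^2.
    # Algebraically R cancels, leaving A/(4*pi*r^2): the inverse square law.
    return (A * (R / r)**2) / (4.0 * math.pi * R**2)

A = 1.0  # arbitrary shield cross-section, m^2
# Doubling r quarters the shielded fraction (~0.25 ratio): inverse square behaviour.
print(shielded_fraction(A, 2.0, 1e10) / shielded_fraction(A, 1.0, 1e10))
# And the result is independent of the choice of R.
print(shielded_fraction(A, 3.0, 1e5), shielded_fraction(A, 3.0, 1e20))
```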

Where disagreements exist, it may be the case that the existing theory is wrong, rather than the new theory.  There were plenty of objections to Aristarchus’ solar system because it predicted that the earth spins around daily, which was held to be absurd.  Ptolemy casually wrote that the earth can’t be rotating or clouds and air would travel around the equator at 1,000 miles per hour, but he didn’t prove that this would be the case, or state his assumption that the air doesn’t get dragged.

“Refutations” should really be written up in detail so they can be analysed and checked properly.  Problems arise in science where ideas are ridiculed instead of being checked with scientific rigor: clearly journal editors and busy peer reviewers are prone to ridicule ideas with strawman arguments without spending much time checking them.  It is a problem with elitism, as Witten’s letter shows, http://schwinger.harvard.edu/%7Emotl/witten-nature-letter.pdf .  Witten’s approach to criticism of M-theory is not to reply, thus remaining respectful.  Yet if I don’t reply to criticism, it is implied that I’m just a fool.

An excellent example is how your papers on the problems in quantum gravity are ignored by string theorists.  That proves string theorists are respectable, you see.  If they engaged in discussions with their critics, they would look foolish.  It is curious that if Witten refuses to discuss problems, he escapes being deemed foolish, but if outsiders do the same then they are deemed foolish.  There is such a rigid view taken of the role of authority in science today that hypocrisy is taken for granted by all.

Best wishes,


Loop Quantum Gravity, Representation Theory and Particle Physics

‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ –Wiki.  (Hence there is a simple relationship between leptons and quarks; more on this later.)

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  ____________________________________

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  ____________________________________

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  ____________________________________

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  ____________________________________

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  ____________________________________


Fig. 1 – (Display page full width to see illustration properly; it is not an image file.)  The incompatibility between the quantum fields of quantum field theory (which are discontinuous, particulate) and the continuous fields of classical theories like Einstein’s general relativity and Maxwell’s electromagnetism.  The incompatibility between quantum field theory and general relativity is due to the Dirac sea, which imposes discrete upper and lower limits (called the UV/ultraviolet and IR/infrared cutoffs, respectively) on the strengths of fields originating from particles.

Simplified vacuum polarization picture: zone A is the UV cutoff, while zone B is the IR cutoff around the particle core:

Highly simplified model of polarization of vacuum charge as forming simple shells of opposite charges from pair production, which cause a radial electric field that opposes the core charge of a particle, cancelling part of it and reducing the observed charge at large distances (hence the renormalization of electric charge in quantum field theory).

Mass mechanism based on this simplified model:

Mechanism for masses of fundamental particles based on the polarized vacuum charge renormalization, predicting all lepton and hadron quantized masses approximately

See also http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html and https://nige.wordpress.com/2006/10/09/16/ for more information.   To find out how to calculate the 137 polarization shielding factor (1/alpha), scroll down and see the section below in this post headed ‘Mechanism for the strong nuclear force.’


Dirac’s sea correctly predicted antimatter and allows the polarization of the vacuum required in the Standard Model of particle physics, which has made thousands of accurate predictions.  Einstein’s spacetime continuum of general relativity allows only a very few correct predictions and has a large ‘landscape’ of ad hoc cosmological models (ie, a large number of unphysical, or at least uncheckable, solutions, making it an ugly physics model).  In addition it is false insofar as it fails to naturally explain or incorporate the renormalization of force field charges due to polarization of the particulate vacuum, and it also fails even to model the long range gauge boson exchange radiation (which may be non-oscillatory radiation for the long-range force fields) of the Yang-Mills quantum field theories which successfully comprise the Standard Model of electroweak and strong interactions.  For example, Einstein’s general relativity is disproved by the fact that it contains no natural mechanism to allow for the redshift, or related depletion of energy, in the gauge boson exchange radiation causing forces across the expanding universe!

For these reasons, it is necessary to re-build general relativity on the basis of quantum field theory.  Smolin et al. show using Loop Quantum Gravity (LQG) that a Feynman path integral is a summing over the full set of interaction graphs in a Penrose spin network.  The result gives general relativity without a metric (ie, background independent).

Regarding the physics of the metric: in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:

 ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_0/(1 – v^2/c^2)^{1/2}, where E_0 is the potential energy of the dislocation at rest.’
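As a numerical illustration of Frank’s formulae (my own sketch, taking v = 0.6c, where c is here the transverse sound velocity):

```python
import math

def contraction_factor(v, c):
    # Longitudinal contraction factor for a moving screw dislocation,
    # per Frank (1949): (1 - v^2/c^2)^{1/2}
    return math.sqrt(1 - v**2 / c**2)

c = 1.0               # work in units of the transverse sound speed
v = 0.6 * c
factor = contraction_factor(v, c)   # contraction factor = 0.8
E_ratio = 1 / factor                # E/E_0 = 1.25
print(factor, E_ratio)
```

So a dislocation at 60 % of the transverse sound speed is contracted to 0.8 of its rest length and carries 1.25 times its rest potential energy, exactly mirroring special relativity with c reinterpreted as the sound speed.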

Specifying that the distance/time ratio = c (constant velocity of light) then tells you that the time dilation factor is identical to the distance contraction factor.

Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with.  It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’

– P. Woit, Not Even Wrong, Cape, London, 2006, p189.  [Emphasis added.]

Surely this is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force causing gauge bosons from one mass to another and back again.

Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations. As Nobel Laureate Professor Phil Anderson wrote: “the flat universe is just not decelerating, it isn’t really accelerating …” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson


.                       .

.                       .

Fig. 2 – The large void represents simply an enlargement of part of the left hand side of Fig. 1.  The particulate nature of the Dirac sea explains the physical basis for the UV (ultraviolet) cutoff in quantum field theories such as the successful Standard Model.  As you reduce the volume of space to such small volumes (i.e., as you collide particles to such high energies that they approach so closely that there is very little distance between them) that the small space is unlikely to contain any background Dirac sea field particles at all, it is obvious that no charge polarization of the Dirac sea is possible.  This is due to the Dirac sea becoming increasingly coarse-grained when magnified excessively.  To make this argument quantitative and predictive is easy (see below).  The error in existing quantum field theories which require manual renormalization (upper and lower cutoffs) is the statistical treatment in the equations, which are continuous: continuous equations are only valid where large numbers of particles are involved statistically, and they break down when pushed too far, thus requiring cutoffs.

The UV cutoff is explained in Fig. 2: Dirac sea polarization (leading to charge renormalization) is only possible in volumes large enough to be likely to contain some discrete charges!  The IR cutoff has a different explanation.  It is required physically in quantum field theory to limit the range over which the vacuum charges of the Dirac sea are polarized, because if there were no limit, then the Dirac sea would be able to polarize sufficiently to completely eradicate the entire electric field of all electric charges.  That this does not happen in nature shows that there is a physical mechanism in place which prevents polarization below the range of the IR cutoff, which is about 10^{-15} m from an electron, corresponding to something like 10^20 volts/metre electric field strength.  Clearly, the Dirac sea is physically:

(a) disrupted from bound into freed charges above the IR cutoff (the threshold for pair production);

(b) given energy in proportion to the field strength, by analogy to Einstein’s photoelectric equation, where there is a certain minimum amount of energy required to free electrons from their bound state, and further energy above that minimum then goes into increasing the kinetic energy of those particles.  In this case, however, the indeterminacy due to scattering introduces statistics and makes it more like a quantum tunnelling effect, and the extra field energy above the threshold can also energise ground state Dirac sea charges into more massive loops in progressive states: 1.022 MeV delivered to two particles colliding with 0.511 MeV each – the IR cutoff – can create an e- and e+ pair, while a higher loop threshold of 211.2 MeV, delivered as two particles colliding with 105.6 MeV or more, can create a muon+ and muon- pair, and so on.  (See the previous post for explanation of a diagram explaining mass by ‘doubly special supersymmetry’, where charges have a discrete number of massive partners located either within the close-in UV cutoff range or beyond the perimeter IR cutoff range, accounting for masses in a predictive, checkable manner.)

(c) The quantum field is then polarized (shielding electric field strength).

These three processes should not be confused, but they are generally confused by the use of the vague term ‘energy’ to represent 1/distance in most discussions of quantum field theory.  For two of the best introductions to quantum field theory as it is traditionally presented, see http://arxiv.org/abs/hep-th/0510040 and http://arxiv.org/abs/quant-ph/0608140

We only see ‘pair-production’ of Dirac sea charges becoming observable in creation-annihilation ‘loops’ (Feynman diagrams) when the electric field is in excess of about 10^20 volts/metre.  This very intense electric field, which occurs out to about 10^{-15} metres from a real (long-observable) electron charge core, is strong enough to overcome the binding energy of the Dirac sea: particle pairs then pop into visibility (rather like water boiling off at 100 C).

The spacing of the Dirac sea particles in the bound state below the IR cutoff is easily obtained.  Take the energy-time form of Heisenberg’s uncertainty principle and put in the energy of an electron-positron pair, and you find it can exist for ~10^{-21} second; the maximum possible range is therefore this time multiplied by c, ie ~10^{-12} metre.  The key thing to do would be to calculate the transmission of gamma rays in the vacuum.  Since the maximum separation of charges is 10^{-12} m, the vacuum contains at least 10^{36} charges per cubic metre.  If I can calculate that the range of gamma radiation in such a dense medium is 10^{-12} metre, I’ll have substantiated the mainstream picture.
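The lifetime and range figures just quoted can be cross-checked numerically (a sketch using Δt ≈ ħ/ΔE with the electron-positron pair energy of 2 × 0.511 MeV):

```python
h_bar = 1.0546e-34    # J s
c = 2.9979e8          # m/s
MeV = 1.602e-13       # joules per MeV

E_pair = 2 * 0.511 * MeV   # rest energy of an e-/e+ pair
t = h_bar / E_pair         # allowed lifetime ~ 6.4e-22 s (order 10^-21 s)
d = c * t                  # maximum range ~ 1.9e-13 m (order 10^-12 m)
print(t, d)
```

Both results land on the orders of magnitude used in the argument above.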
Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy of each gamma ray is 0.511 MeV, and it is well known that the Compton effect (a scattering of gamma rays by electrons as if both are particles, not waves) predominates at this energy.  The mean free path for scatter of gamma ray energy by electrons and positrons depends essentially on the density of electrons (the number of electrons and positrons per cubic metre of space).  However, the data come either from the Klein-Nishina theory (an application of quantum mechanics to the Compton effect) or from experiment, for situations where the binding energy of electrons to atoms or whatever is insignificant compared to the energy of the gamma ray.  It is perfectly possible that the binding energy of the Dirac sea would mean that the usual radiation attenuation data are inapplicable!

Ignoring this possibility for a moment, we find that for 0.5 MeV gamma rays, Glasstone and Dolan (page 356) state that the linear absorption coefficient of water is u = 0.097 cm^{-1}, where the attenuation is exponential as e^{-ux}, x being distance.  Each water molecule has 8 electrons, and we know from Avogadro’s number that 18 grams of water contains 6.0225*10^{23} water molecules, or about 4.818*10^{24} electrons.  Hence, 1 cubic metre of water (1 metric ton, or 1 million grams) contains 2.6767*10^{29} electrons.  The reciprocal of the linear absorption coefficient u, ie 1/u, tells us the ‘mean free path’ (the best estimate of effective ‘range’ for our purposes here), which for water exposed to 0.5 MeV gamma rays is 1/0.097 = 10.3 cm = 0.103 m.  Hence, the number of electrons and positrons in the Dirac sea must be vastly larger than in water, in order to keep the range down (we don’t observe any vacuum gamma radioactivity, which only affects subatomic particles).

Normalising the mean free path to 10^{-12} m to agree with the Heisenberg uncertainty principle, we find that the density of electrons and positrons in the vacuum would be: {the electron density in 1 cubic metre of water, 2.6767*10^{29}} * 0.103/[10^{-12}] = 2.76*10^{40} electrons and positrons per cubic metre of Dirac sea.  This agrees with the estimate previously given from the Heisenberg uncertainty principle that the vacuum contains at least 10^{36} charges per cubic metre.  However, the binding energy of the Dirac sea is being ignored in this Compton effect shielding estimate.  The true separation distance is smaller still, and the true density of electrons and positrons in the Dirac sea is still higher.
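As a numerical cross-check of the scaling above (a sketch using only the Glasstone and Dolan figures already quoted):

```python
N_A = 6.0225e23            # Avogadro's number, molecules per mole
electrons_per_molecule = 8 # each H2O molecule has 8 electrons
grams_per_m3 = 1.0e6       # 1 cubic metre of water = 1 metric ton
molar_mass = 18.0          # g/mol for water

# Electron density in water: ~2.68e29 per cubic metre
n_water = (grams_per_m3 / molar_mass) * N_A * electrons_per_molecule

mfp_water = (1 / 0.097) * 1e-2  # mean free path in m, from u = 0.097 per cm
mfp_vacuum = 1e-12              # normalised to the uncertainty-principle spacing

# Required Dirac sea density: ~2.76e40 charges per cubic metre
n_vacuum = n_water * mfp_water / mfp_vacuum
print(n_water, n_vacuum)
```

The ratio scaling works because, for Compton scattering, the mean free path is inversely proportional to the electron number density.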

Obviously the graining of the Dirac sea must be much smaller than 10^{-12} m, because we have already said that it exists down to the UV cutoff (very high energy, ie very small distances of closest approach).  The amount of ‘energy’ in the Dirac sea is astronomical if you calculate the rest mass equivalent, but you can similarly produce stupid numbers for the energy of the earth’s atmosphere: the mean speed of an air molecule is around 500 m/s, and since the atmosphere is composed mainly of air molecules (with a relatively small amount of water and dust), we can get a ridiculous energy density for the air by multiplying the mass of air by 0.5*(500^2) to obtain its kinetic energy.  Thus, 1 kg of air (with all the molecules going at a mean speed of 500 m/s) has an energy of 125,000 Joules.  But this is not useful energy, because it can’t be extracted: it is totally disorganised.  The Dirac sea ‘energy’ is similarly massive but useless.
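A one-line check of the atmospheric analogy above:

```python
mean_speed = 500.0   # m/s, typical mean speed of an air molecule
mass = 1.0           # kg of air

# Kinetic energy of 1 kg of air with every molecule at the mean speed:
kinetic_energy = 0.5 * mass * mean_speed**2   # = 125,000 J
print(kinetic_energy)
```

A large number, but (as the post argues) not an extractable one, since the motion is totally disorganised.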


Woit gives an example of how representation theory can be used in low dimensions to reduce the entire Standard Model of particle physics into a simple expression of Lie spinors and Clifford algebra on page 51 of his paper http://arxiv.org/abs/hep-th/0206135.  This is a success in terms of what Wigner wants (see the top of this post for the vital quote from Wiki), and there is then the issue of the mechanism for electroweak symmetry breaking, for mass/gravity fields, and for the 18 parameters of the Standard Model.  These parameters are not extravagant, seeing that the Standard Model has made thousands of accurate predictions with them, and all of them are either already mechanistically predictable, or else predictable in principle, by the causal Yang-Mills exchange radiation model and a causal model of renormalization and gauge boson energy-sharing based unification (see previous posts on this blog, and the links in the ‘about’ section on the right hand side of this blog for further information).

Additionally, Woit states other clues about chiral symmetry: ‘The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.’

For the background to Lie spinors and Clifford algebras, Baez has an interesting discussion of some very simple Lie algebra physics here and here, and representation theory here, Woit has extensive lecture notes here, and Tony Smith has a lot of material about Clifford algebras here and spinors here.  The objective to have is a simple unified model to represent the particle, which can explain the detailed relationship between quarks and leptons and predict things about unification which are checkable.  The short range forces for quarks are easily explained by a causal model of polarization shielding by lepton-type particles in proximity (pairs or triads of ‘quarks’ form hadrons, and the pairs or triads are close enough to all share the same polarized vacuum veil to a large extent, which makes the polarized vacuum generally stronger, so that the effective long-range electromagnetic charge per ‘quark’ is reduced to a fraction of that for a lepton, which consists of only one core charge): see this comment on Cosmic Variance blog.

I’ve given some discussion of the Standard Model at my main page (which is now partly obsolete and in need of a major overhaul to include many developments).  Woit gives a summary of the Standard Model in a completely different way, which makes chiral symmetries clear, in Fig. 7.1 on page 93 of Not Even Wrong (my failure to understand this before made me very confused about chiral symmetry, so I didn’t mention or consider its role):

‘The picture [it is copyright, so get the book: see Fig. 7.1 on p.93 of Not Even Wrong] shows the SU(3) x SU(2) x U(1) transformation properties of the first three generations of fermions in the standard model (the other two generations behave the same way).

‘Under SU(3), the quarks are triplets and the leptons are invariant.

‘Under SU(2), the particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).

‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’

This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (‘quarks’), whereas SU(2) controls doublets of particles.

But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!
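These hypercharge values follow from the standard textbook convention Y = 2(Q − T3), where Q is electric charge and T3 is weak isospin (this relation is standard convention, not stated in the post above); a quick check:

```python
from fractions import Fraction as F

def hypercharge(Q, T3):
    # Standard-convention weak hypercharge: Y = 2(Q - T3)
    return 2 * (Q - T3)

# Right-handed down quark: SU(2) singlet, so T3 = 0
Y_dR = hypercharge(F(-1, 3), F(0))      # -2/3
# Left-handed down quark: lower member of an SU(2) doublet, T3 = -1/2
Y_dL = hypercharge(F(-1, 3), F(-1, 2))  # 1/3
# Right-handed electron: SU(2) singlet, Q = -1
Y_eR = hypercharge(F(-1), F(0))         # -2
print(Y_dR, Y_dL, Y_eR)
```

This reproduces the -2/3 and 1/3 values for the two handednesses of the down quark quoted above, and the Y = -2 for the right-handed electron used later in this post.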

Clearly this weak hypercharge effect is what has been missing from my naive causal model (where observed long range quark electric charge is determined merely by the strength of vacuum polarization shielding of the electric charges closely confined).  Energy is not merely being shared between the QCD SU(3) colour forces and the U(1) electromagnetic forces, but there is the energy present in the form of weak hypercharge forces which are determined by the SU(2) weak nuclear force group.

Let’s get the facts straight: from Woit’s discussion (unless I’m misunderstanding), the strong QCD force SU(3) only applies to triads of quarks, not to pairs of quarks (mesons).

The binding of pairs of quarks is by the weak force only (which would explain why they are so unstable, they’re only weakly bound and so more easily decay than triads which are strongly bound).  The weak force also has effects on triads of quarks.

The weak hypercharge of a downquark in a meson containing 2 quarks is Y=1/3 compared to Y=-2/3 for a downquark in a baryon containing 3 quarks.

Hence the causal relationship holds true for mesons.  Hypothetically, 3 right-handed electrons (each with weak hypercharge Y = -2) will become right-handed downquarks (each with hypercharge Y = -2/3) when brought close together, because they then share the same vacuum polarization shield, which is 3 times stronger than that around a single electron, and so attenuates more of the electric field, reducing it from -1 per electron when widely separated to -1/3 when brought close together (forget the Pauli exclusion principle, for a moment!).

Now, in a meson, you only have 2 quarks, so you might think that from this model the downquark would have electric charge -1/2 and not -1/3, but that anomaly only exists when ignoring the weak hypercharge!  For a downquark in a meson, the weak hypercharge is Y = 1/3 instead of the Y = -2/3 which the downquark has in a baryon (triad).  The increased hypercharge (which is physically responsible for the weak force field that binds up a meson) offsets the electric charge anomaly.  The handedness switch-over, in going from considering quarks in baryons to those in mesons, automatically compensates the electric charge, keeping it the same!

The details of how handedness is linked to weak hypercharge are found in the dynamics of Pauli’s exclusion principle: adjacent particles can’t have a full set of the same quantum numbers, like the same spin and charge.  Instead, each particle has a unique set of quantum numbers.  Bringing particles together and having them ‘live together’ in close proximity forces them to arrange themselves with suitable quantum numbers.  The Pauli exclusion principle is simple in the case of atomic electrons: each electron has four quantum numbers, describing orbit configuration and intrinsic spin, and each adjacent electron has opposite spin to its neighbours.  The spin alignment here can be understood very simply in terms of magnetism: it takes the least energy to have an anti-parallel alignment (having similar spins would be an addition of magnetic moments, so that north poles would all be adjacent and south poles would all be adjacent, which requires more energy input than having adjacent magnets parallel with opposite poles nearest).  In quarks, the situation regarding the Pauli exclusion principle mechanism is slightly more complex, because quarks can have similar spins if their colour charges are different (electrons don’t have colour charges, which are an emergent property of the strong fields which arise when two or three real fundamental particles are confined at close quarters).

Obviously there is a lot more detail to be filled in, but the main guiding principles are clear now: every fermion is indeed the same basic entity (whether quark or lepton), and the differences in observed properties stem from the vacuum properties, such as the strength of vacuum polarization, etc.  The fractional charges of quarks always arise due to the use of some electromagnetic energy to create other types of short range forces (the testable prediction of this model is the forecast that detailed calculations will show that perfect unification arises from such energy conservation principles, without requiring the 1:1 boson to fermion ‘supersymmetry’ hitherto postulated by string theorists).  Hence, in this simple mechanism, the +2/3 charge of the upquark is due to a combination of strong vacuum polarization attenuation and hypercharge (the downquark we have been discussing is just the clearest case).

So regarding unification, we can get hard numbers out of this simple mechanism.  We can see that the total gauge boson energy for all fields is conserved, so when one type of charge (electric charge, colour charge, or weak hypercharge) varies with collision energy or distance from nucleus, we can predict that the others will vary in such a way that the total charge gauge boson energy (which mediates the charge) remains constant.  For example, we see reduced electric charge from a long range because some of that energy is attenuated by the vacuum and is being used for weak and (in the case of triads of quarks) colour charge fields.  So as you get to ever higher energies (smaller distances from particle core) you will see all the forces equalizing naturally because there is less and less polarized vacuum between you and the real particle core which can attenuate the electromagnetic field.  Hence, the observable strong charge couplings have less supply of energy (which comes from attenuation of the electromagnetic field), and start to decline.  This causes asymptotic freedom of quarks because the decline in the strong nuclear coupling at very small distances is offset by the geometric inverse-square law over a limited range (the range of asymptotic freedom).  This is what allows hadrons to have a much bigger size than the size of the tiny quarks they contain.


We’re in a Dirac sea, which undergoes various phase transitions, breaking symmetries as the strength of the field is increased.  Near a real charge, the electromagnetic field within 10^{-15} metre exceeds 10^20 volts/metre, which causes the first phase transition, like ice melting or water boiling.  The freed Dirac sea particles can therefore exert a short range attractive force by the LeSage mechanism (which of course does not apply directly to long range interactions, because the ‘gas’ effect fills in LeSage shadows over long distances, so the attractive force is short-ranged: it is limited to a range of about one mean-free-path for the interacting particles in the Dirac sea).  The LeSage gas mechanism represents the strong nuclear attractive force mechanism.  Gravity and electromagnetism, as explained in the previous posts on this blog, are both due to the Yang-Mills ‘photon’ exchange mechanism (because Yang-Mills exchange ‘photon’ radiation – or any other radiation – doesn’t diffract into shadows, it doesn’t suffer the short range issue of the strong nuclear force; the short range of the weak nuclear force, due to shielding by the Dirac sea, may arise from quite a different mechanism).

You can think of the strong force like the short-range forces due to normal sea-level air pressure: the air pressure of 14.7 psi or 101 kPa is big, so you can prove the short range attractive force of air pressure by using a set of rubber ‘suction cups’ strapped to your hands and knees to climb a smooth surface like a glass-fronted building (assuming the glass is strong enough!).  This force has a range on the order of the mean free path of air molecules.  At bigger distances, air pressure fills the gap, and the force disappears.  The actual fall-off is of course statistical; instead of the short range attraction becoming suddenly zero at exactly one mean free path, it drops (in addition to geometric factors) exponentially by the factor exp{-ux}, where u is the reciprocal of the mean free path and x is distance (in air, of course, there are also weak attractive forces between molecules, Van der Waals forces).  Hence it is short ranged, due to the scatter of charged particles dispersing forces in all directions (unlike radiation):

 ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

(Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, which above the IR cutoff start to exert vast pair-production loop pressure; this gives the foam vacuum.)

Now for the formulae!  The reason for radioactivity of heavy elements is linked to the increasing difficulty the strong force has in offsetting electromagnetism as you get towards 137 protons, accounting for the shorter half-lives. So here is a derivation of the strong nuclear force (mediated by pions) law including the natural explanation of why it is 137 times stronger than electromagnetism at short distances:

Heisenberg’s uncertainty says p*d = h/(2*Pi), if p is uncertainty in momentum, d is uncertainty in distance.

This comes from the resolving power of Heisenberg’s imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process. The factor of 2 would be a factor of 4 if we consider the uncertainty in one direction about the expected position (because the uncertainty applies to both directions, it becomes a factor of 2 here).

For light wave momentum p = mc, pd = (mc)(ct) = Et where E is uncertainty in energy (E=mc^2), and t is uncertainty in time. OK, we are dealing with massive pions, not light, but this is close enough since they are expected to be relativistic, ie, they have a velocity approaching c:

Et = h/(2*Pi)

t = d/c = h/(2*Pi*E)

E = hc/(2*Pi*d).

Hence we have related distance to energy: this result is the formula used even in popular texts to show that an 80 GeV W+/- gauge boson will have a range of ~10^{-17} m. So it’s OK to do this (ie, it is OK to take uncertainties of distance and energy to be the real energy and range of the gauge bosons which cause fundamental forces).
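As a quick check of the range formula d = hc/(2*Pi*E) for an 80 GeV boson (a sketch with SI constants; the answer comes out at about 2.5*10^{-18} m, consistent with the order of magnitude quoted in popular texts):

```python
import math

h = 6.626e-34          # Planck's constant, J s
c = 2.9979e8           # speed of light, m/s
E = 80e9 * 1.602e-19   # 80 GeV converted to joules

d = h * c / (2 * math.pi * E)   # range ~ 2.5e-18 m
print(d)
```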

Now, the work equation E = F*d (a vector equation: “work is product of force and the distance acted against the force in the direction of the force”), where again E is uncertainty in energy and d is uncertainty in distance, implies:

E = hc/(2*Pi*d) = Fd

F = hc/(2*Pi*d^2)

Notice the inverse square law resulting here!  There is a maximum range of this force, equal to the distance pions can travel in the time given by the uncertainty principle, d = hc/(2*Pi*E).

The strong force as a whole gives a Van der Waals type force curve; attractive at the greatest distances due to the pion mediated strong force (which is always attractive since pions have spin 0) and repulsive at short ranges due to exchange of rho particles (which have a spin of 1).   We’re just considering the attractive pion exchange force here.  (The repulsive rho exchange force would need to be added to the result to get the net strong force versus distance curve.)

The exponential quenching factor for the attractive (pion mediated) part of the strong force is exp(-x/a) where x is distance and a is the range of the pions. Using the uncertainty principle, assuming the pions are relativistic (velocity ~ c) and ignoring pion decay, a = {h bar}c/E where E is pion energy (~140 MeV ~=2.2*10^{-11} J). Hence a = 1.4*10^{-15} m = 1.4 fm.

So over a distance of 1 fm, this would reduce the pion force to exp(-1/1.4) ~ 0.5. But if the pion speed is much smaller than c, the reduction will be greater.  There would be other factors involved as well, due to things like the polarization of the charged pion cloud.
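A quick numerical check of the pion range a = {h bar}c/E and the exp(-x/a) factor over 1 fm (a sketch, taking the pion energy as ~140 MeV and velocity ~ c, and ignoring pion decay as above):

```python
import math

h_bar = 1.0546e-34           # J s
c = 2.9979e8                 # m/s
E_pion = 140e6 * 1.602e-19   # ~140 MeV in joules, ~2.2e-11 J

a = h_bar * c / E_pion       # pion range ~ 1.4e-15 m = 1.4 fm
factor = math.exp(-1e-15 / a)  # attenuation over 1 fm ~ 0.5
print(a, factor)
```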

Ignoring the exponential attenuation, the strong force obtained above is 137.036 times higher than Coulomb’s law for unit fundamental charges!  This is the usual value often given for the ratio between the strong nuclear force and the electromagnetic force (I’m aware the QCD inter quark gluon-mediated force takes different and often smaller values than 137 times the electromagnetism force; that is due to vacuum polarization effects including the effect of charges in the vacuum loops coupling to and interfering with the gauge bosons for the QCD force).
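The 137.036 ratio can be verified directly: dividing F = hc/(2*Pi*d^2) by Coulomb’s law F = e^2/(4*Pi*epsilon_0*d^2), the d^2 factors cancel, leaving {h bar}c*4*Pi*epsilon_0/e^2 = 1/alpha (a sketch with SI values):

```python
import math

h_bar = 1.0546e-34   # J s
c = 2.9979e8         # m/s
e = 1.602e-19        # C, fundamental charge
eps0 = 8.854e-12     # F/m, vacuum permittivity

# F_strong / F_Coulomb: the d^2 factors cancel, leaving 1/alpha
ratio = h_bar * c * 4 * math.pi * eps0 / e**2   # ~137.0
shielded = 100 * (1 - 1 / ratio)                # ~99.27 % of the field attenuated
print(ratio, shielded)
```

The second line reproduces the 99.27 % vacuum shielding figure used in the next paragraph.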

This is the bare core charge of any particle, ignoring vacuum polarization which extends out to 10-15 metres and shields the electric field by a factor of 137 (which is the number 1/alpha), ie, the vacuum is attenuating 100(1-alpha) % = 99.27 % of the electric field of the electron.  This energy is going into nuclear forces in the short-range vacuum polarization region (ie, massive loops, virtual particles, W+/-, Z_o and ‘gluon’ effects, which don’t have much range because they are barred by the high density of the vacuum, which is the obvious mechanism of electroweak symmetry breaking – regardless whether there is a Higgs boson or no Higgs boson).

The electron has the characteristics of a gravity-field-trapped energy current: a Heaviside energy current loop of black hole size (radius 2GM/c^2) for its mass, as shown by gravity mechanism considerations (see the ‘about’ information on the right hand side of this blog for links).  The looping of energy current, basically a Poynting-Heaviside energy current trapped in a small loop, causes a spherically symmetric E-field and a toroidally shaped B-field.  At great distances the B-field reduces to a simple magnetic dipole field, because the close-in radial electric fields attenuate transverse B-fields in the vacuum polarization zone within 10^{-15} metre of the electron black hole core (those B-field lines which are parallel to E-field lines, ie the polar B-field lines of the toroid, obviously can’t ever be attenuated by the radial E-field).  This means that since the E- and B-fields in a photon are related simply by E = cB, the vacuum polarization reduces only E by a factor of 137, and not B!  This has long been evidenced in practice, as Dirac showed in 1931:

‘When one considers Maxwell’s equations for just the electromagnetic field, ignoring electrically charged particles, one finds that the equations have some peculiar extra symmetries besides the well-known gauge symmetry and space-time symmetries.  The extra symmetry comes about because one can interchange the roles of the electric and magnetic fields in the equations without changing their form.  The electric and magnetic fields in the equations are said to be dual to each other, and this symmetry is called a duality symmetry.  Once electric charges are put back in to get the full theory of electrodynamics, the duality symmetry is ruined.  In 1931 Dirac realised that to recover the duality in the full theory, one needs to introduce magnetically charged particles with peculiar properties.  These are called magnetic monopoles and can be thought of as topologically non-trivial configurations of the electromagnetic field, in which the electromagnetic field becomes infinitely large at a point.  Whereas electric charges are weakly coupled to the electromagnetic field with a coupling strength given by the fine structure constant alpha = 1/137, the duality symmetry inverts this number, demanding that the coupling of the magnetic charge to the electromagnetic field be strong with strength 1/alpha = 137. [This applies to the magnetic dipole Dirac calculated for the electron, assuming it to be a Poynting wave where E = c*B and E is shielded by vacuum polarization by a factor of 1/alpha = 137.]

‘If magnetic monopoles exist, this strong [magnetic] coupling to the electromagnetic field would make them easy to detect.  All experiments that have looked for them have turned up nothing…’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, pp. 138-9. [Emphasis added.]

The Pauli exclusion principle normally makes the magnetic moments of all electrons undetectable on a macroscopic scale (apart from magnets made from iron, etc.): the magnetic moments usually cancel out because adjacent electrons always pair with opposite spins!  If there are magnetic monopoles in the Dirac sea, there will be as many ‘north polar’ monopoles as ‘south polar’ monopoles around, so we can expect not to see them because they are so strongly bound!


Professor Jacques Distler has an interesting, thoughtful, and well written post called ‘The Role of Rigour’ on his Musings blog where he brilliantly argues:

‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ 

Jacques also summarises the issues for theoretical physics clearly in a comment there:

  1. ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
  2. ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
  3. ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’

I’ve explained there to Dr ‘string-hype Haelfix’ that people should be working on non-rigorous areas like the derivation of the Hamiltonian in quantum mechanics, which would increase the rigour of theoretical physics, unlike string.  I made this kind of point (the need for checkable research rather than speculation about unobservables) earlier, on the opinion page of the October 2003 issue of Electronics World, but was ignored; so clearly I need to move on to stronger language, because stringers don’t listen to the polite arguments I prefer using!  Feynman writes in QED, Penguin, London, 1985:

‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’

There is already a frequency of oscillation in the photon before it hits the glass, and in the glass due to the sea of electrons interacting via Yang-Mills force-causing radiation.  If the frequencies clash, the photon can be reflected or absorbed.  If they don’t interfere, the photon goes through the glass.  Some of the resonant frequencies of the electrons in the glass are determined by the exact thickness of the glass, just as the resonant frequencies of a guitar string are determined by the exact length of the string.  Hence the precise thickness of the glass controls some of the vibrations of all the electrons in it, including the surface electrons at the edges of the glass.  Hence, the precise thickness of the glass determines the amplitude for a photon of a given frequency to be absorbed or reflected by the front surface of the glass.  It is indirect in so much as the resonance is set up by the thickness of the glass long before the photon even arrives (other possible oscillations, corresponding to a non-integer number of wavelengths fitting into the glass thickness, are killed off by interference, just as a guitar string doesn’t resonate well at non-natural frequencies).

What has happened is obvious: the electrons have set up an equilibrium oscillatory state dependent on the total thickness before the photon arrives.  There is nothing mysterious about this: consider how a musical instrument works, or even just a simple tuning fork or a solitary guitar string.  The only resonant vibrations are those which contain an integer number of wavelengths.  This is why metal bars of different lengths resonate at different frequencies when struck.  Changing the length of the bar slightly completely alters its resonance at a given wavelength!  Similarly, the photon hitting the glass has a frequency itself.  The electrons in the glass as a whole are all interacting (they’re spinning and orbiting with centripetal accelerations which cause radiation emission, so all are exchanging energy all the time, which is the force mechanism in Yang-Mills theory for U(1) electromagnetism), so they have a range of resonances controlled by the number of integer wavelengths which fit into the thickness of the glass, just as the range of resonances of a guitar string is determined by the wavelengths which fit resonantly into the string length (ie, without suffering destructive interference).
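The resonance argument can be sketched numerically. This is only an illustrative standing-wave analogy using the standard condition that an integer number of half-wavelengths fits between the boundaries (lambda_n = 2L/n); the 5 mm thickness and refractive index 1.5 are assumed example values, not figures from the text:

```python
# Standing-wave analogy (illustrative sketch, not the author's calculation):
# the resonant modes of a length d satisfy lambda_n = 2*d/n, so the set of
# resonant frequencies is fixed by the exact length; change d slightly and
# every resonance shifts, as for a guitar string or a glass slab.
d = 5e-3            # assumed example thickness, 5 mm
v = 3e8 / 1.5       # wave speed in glass, assuming refractive index ~1.5

for n in range(1, 4):
    wavelength = 2 * d / n    # wavelength of the n-th mode
    freq = v / wavelength     # corresponding resonant frequency
    print(f"n={n}: wavelength={wavelength:.2e} m, frequency={freq:.2e} Hz")
```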

Hence, the thickness of the glass pre-determines the amplitude for a photon of a given frequency to be either absorbed or reflected.  The electrons at the glass surface are already oscillating with a range of resonant frequencies depending on the glass thickness before the photon even arrives.  Thus, the photon is reflected (if not absorbed) only from the front face, but its probability of being reflected depends on the total thickness of the glass.  Feynman also explains:

‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’

More about this here (in the comments; but notice that Jacques’ final comment on the thread of discussion about rigour in quantum mechanics is discussed by me here), here, and here.  In particular, Maxwell’s equations assume that real electric current is dQ/dt, a continuous equation being used to represent a discontinuous situation (particulate electrons passing by; Q is charge): it works approximately for large numbers of electrons, but breaks down when only a few electrons pass any point in a circuit per second!  It is a simple mathematical error, which needs correcting to bring Maxwell’s equations into line with modern quantum field theory.  A more subtle error in Maxwell’s equations is his ‘displacement current’, which is really just Yang-Mills force-causing exchange radiation, as explained in the previous post and on my other blog here.  This is what people should be working on to derive the Hamiltonian: the Hamiltonian in both Schroedinger’s and Dirac’s equations describes energy transfer as wavefunctions vary in time, which is exactly what the corrected Maxwell ‘displacement current’ effect is all about (take the electric field here to be a relative of the wavefunction).  I’m not claiming that classical physics is right!  It is wrong!  It needs to be rebuilt, and its limits of applicability need to be properly accepted:
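The breakdown of the continuous dQ/dt approximation can be illustrated with a quick count of electrons per second at a few currents (a sketch; the current values are arbitrary examples):

```python
e = 1.602176634e-19   # elementary charge, C

# Maxwell's I = dQ/dt treats charge flow as continuous; physically a
# current is N = I/e discrete electrons passing a point per second.
for I in (1.0, 1e-9, 1e-18):
    N = I / e
    print(f"I = {I:.0e} A  ->  N = {N:.2e} electrons per second")
# At 1 A the continuum approximation is excellent (~6.2e18 electrons/s);
# at 1e-18 A only about 6 electrons pass per second, and the continuous
# dQ/dt description visibly breaks down.
```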

Bohr simply wasn’t aware that Poincare chaos arises even in classical systems with more than two interacting bodies, so he foolishly sought to invent metaphysical thought structures (the complementarity and correspondence principles) to isolate classical from quantum physics. Chaotic motions on atomic scales can result from electrons influencing one another, and from deflections by the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field exceeds about 10^{20} V/m). The failure of determinism (ie, closed orbits, etc.) is present in classical, Newtonian physics, which can’t even deal with a collision of three billiard balls:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

The Hamiltonian time evolution should be derived rigorously from the empirical facts of electromagnetism: Maxwell’s ‘displacement current’ describes energy flow (not real charge flow) due to a time-varying electric field. Clearly it is wrong as it stands, because the vacuum doesn’t polarize below the IR cutoff, which corresponds to about 10^{20} volts/metre, and you don’t need that electric field strength to make capacitors, radios, etc., work.
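As a rough check on the ~10^{20} V/m figure, the classical Coulomb field of a single electron at the ~1 fm range derived earlier in the post can be computed directly (a sketch using the standard point-charge field formula):

```python
from math import pi

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Classical Coulomb field of a point charge e at distance r:
# E = e / (4*pi*eps0*r^2), evaluated at the ~1 fm scale of the IR cutoff.
r = 1.4e-15               # metres
E_field = e / (4 * pi * eps0 * r**2)
print(f"E at {r:.1e} m: {E_field:.1e} V/m")  # ~7e20 V/m, of order 10^20-10^21
```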

So you could derive the Schroedinger equation from a corrected Maxwell ‘displacement current’ equation. This is just an example of what I mean by deriving the Schroedinger equation. Alternatively, a computer Monte Carlo simulation of electrons in orbit around a nucleus, being deflected by pair production in the Dirac sea, would provide a check on the mechanism behind the Schroedinger equation, so there is a second way to make progress.


‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ – Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html

‘There will certainly be no lack of human pioneers when we have mastered the art of flight.  Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic , or the Baltic, or the British straits?  Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes.  In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies – I shall do it for the moon, you, Galileo, for Jupiter.’ – Letter from Johannes Kepler to Galileo Galilei, April 1610, http://www.physics.emich.edu/aoakes/letter.html

Kepler was a crackpot/noise maker: despite his laws and discovery of elliptical orbits, he got the biggest problem wrong, believing that the earth – which William Gilbert had discovered to be a giant magnet – was kept in orbit around the sun by magnetic force. So he was a noise generator, a crackpot.  If you drop a bag of nails, they don’t all align with the earth’s magnetism, because it is so weak; but they do all fall, because gravity is relatively strong due to the immense amounts of mass involved. (For unit charges, electromagnetism is stronger than gravity by a factor like 10^{40}, but that is not the right comparison here, since the majority of the magnetism due to fundamental charges in the earth is cancelled out by the fact that charges pair with opposite spins, cancelling their magnetism. The tiny magnetic field of the planet earth is caused by some kind of weak dynamo mechanism involving the earth’s rotation and its liquid nickel-iron core, and the earth’s magnetism periodically flips and reverses naturally – it is weak!)  So just because a person gets one thing right, or one thing wrong, or even not even wrong, that doesn’t mean that all their ideas are good/rubbish.
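The ‘factor like 10^{40}’ can be checked for a specific pair of particles. For two electrons, the ratio of the Coulomb to the gravitational force is distance-independent (both are inverse-square) and comes out near 10^{42}; it is smaller for heavier particles, hence the generic 10^{40} figure. A sketch:

```python
from math import pi

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837e-31       # electron mass, kg

# Coulomb/gravity force ratio for two electrons; the separation cancels
# because both forces fall off as 1/r^2.
ratio = (e**2 / (4 * pi * eps0)) / (G * m_e**2)
print(f"F_coulomb / F_gravity = {ratio:.1e}")  # ~4.2e42 for electrons
```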

As Arthur Koestler pointed out in The Sleepwalkers, it is entirely possible for there to be revolutions without any really fanatic or even objective/rational proponents (Newton was a totally crackpot alchemist who also faked the first ‘theory’ of sound waves).  My own view of the horrible Dirac sea (Oliver Lodge said: ‘A fish cannot comprehend the existence of water.  He is too deeply immersed in it,’ but what about flying fish?) is that it is an awfully ugly empirical fact that is

(1) required by the Dirac equation’s negative energy solution, and which is

(2) experimentally demonstrated by antimatter.

My personal interest in the subject is more to do with a personal, bitter vendetta against string theorists, who are turning physics into a religion and a laughing stock in Britain, than with the slightest interest in how the big bang came about or what will happen in the distant future.  I don’t care about that, just about understanding what is already known, and promoting the hard, experimental facts.  Maybe when time permits, some analysis of what these facts say about the early time of the big bang and its future will be possible (see my controversial comment here).  I did touch on these problems in an eight-page initial paper which I wrote in May 1996 and which was sold via the October 1996 issue of Electronics World (see the letters pages for the Editor’s note).  However, that paper is long obsolete, and the whole subject needs to be carefully analysed before coming to important conclusions.  But the main problem is the one Woit summarises on p. 259 of the UK edition of his brilliant book Not Even Wrong:

‘As long as the leadership of the particle theory community refuses to face up to what has happened and continues to train young theorists to work on a failed project, there is little likelihood of new ideas finding fertile ground in which to grow.  Without a dramatic change in the way theorists choose what topics to address, they will continue to be as unproductive as they have been for two decades, waiting for some new experimental result finally to arrive.’

John Horgan’s excellent 1996 book The End of Science – which Woit argues is the future of physics if people don’t stick to explaining what is known (rather than speculating about unification at energies higher than can ever be seen, speculating about parallel universes, extra dimensions, and other non-empirical drivel) – states:

‘A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics.  The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.’

This post is updated as of 26 October 2006, and will be further expanded to include material such as the results here, here, here, here and here.

I’ve not included gravity, electromagnetism or mass mechanism dynamics in this post; for these, see the links in the ‘about’ section on the right hand side of this blog, and the previous posts here.  The major quantitative predictions and successful experimental tests are summarized in the old webpage at http://feynman137.tripod.com/#d, apart from the particle masses, which are dealt with in the previous post on this blog.  It is not particularly clear whether I should spend spare time revising outdated material or studying unification and Standard Model details further.  Obviously, I’ll try to do both as far as time permits.

L. Green, “Engineering versus pseudo-science”, Electronics World, vol. 110, no. 1820, August 2004, pp. 52-3:

‘… controversy is easily defused by a good experiment. When such unpleasantness is encountered, both warring factions should seek a resolution in terms of definitive experiments, rather than continued personal mudslinging. This is the difference between scientific subjects, such as engineering, and non-scientific subjects such as art. Nobody will ever be able to devise an uglyometer to quantify the artistic merits of a painting, for example.’  (If string theorists did this, string theory would be dead, because my mechanism, published in the Oct. 1996 Electronics World and Feb. 1997 Science World, predicts the current cosmological results, which were discovered about two years later by Perlmutter.)

‘The ability to change one’s mind when confronted with new evidence is called the scientific mindset. People who will not change their minds when confronted with new evidence are called fundamentalists.’ – Dr Thomas S. Love, California State University.

This comment from Dr Love is extremely depressing; we all know today’s physics is a religion.  I found this out after email exchanges with, I believe, Dr John Gribbin, the author of numerous crackpot books like The Jupiter Effect (claiming Los Angeles would be destroyed by an earthquake in 1982) and quantum books trying to prove Lennon’s claim that ‘nothing is real’.  After I explained the facts to Gribbin, he emailed me a question something like (I have archives of emails, by the way, so could check the exact wording if required): ‘you don’t seriously expect me to believe that or write about it?’

‘… a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ – Max Planck.

But, being anti-belief and against religious intrusion into science, I’m not interested in getting people to believe truths but, on the contrary, in getting them to question them.  Science is about confronting facts.  Dr Love suggests a U(3,2)/U(3,1)xU(1) alternative to the Standard Model, which provides a test of my objectivity.  I can’t understand his model properly, because it reproduces particle properties in a way I don’t understand, and it doesn’t appear to yield any of the numbers I want, like force strengths, particle masses, or causal explanations.  Although he has a great many causal explanations in his paper, which are highly valuable, I don’t see how they connect to his alternative to the Standard Model.  He has an online paper on the subject as a PDF file, ‘Elementary Particles as Oscillations in Anti-de-Sitter Space-Time’, with which I have two issues: (1) anti-de-Sitter spacetime is a stringy assumption to begin with, and (2) I don’t see checkable predictions.  However, maybe further work on such ideas will produce more justification for them; they haven’t had the concentration of effort which string theory has had.

There are no facts in string ‘theory’ (there isn’t even a theory – see the previous post), which is merely speculation.  The gravity strength prediction I give is accurate and compatible with the Yang-Mills exchange radiation Standard Model and the validated (not the cosmic-landscape epicycles rubbish) aspects of general relativity.  Likewise, I correctly predict the ratio of electromagnetic strength to gravity strength (previous post), and the ratio of strong to electromagnetic strength, which means that I predict three forces for the price of one.  In addition (see the previous post), I predict the masses of all directly observable particles (the masses of isolated quarks are not real as such, because quarks can’t be isolated: the energy required to separate them exceeds the energy required to create new quark pairs).

Don’t believe this; it is not a faith-based religion.  It is just plain fact.  The key questions are the accuracy of the predictions and the clear presentation of the mechanisms.  Unlike string theory, this is falsifiable science which makes many connections to reality.  However, as Ian Montgomery, an Australian, aptly expressed the political state of physics in an email: ‘… we up Sh*t Creek in a barbed wire canoe without a paddle …’  I think that is a succinct summary of the state of high energy physics at present and of the hope of making progress.  There is obviously a limit to what a handful of ‘crackpots’ outside the mainstream can do, with no significant resources compared to the stringers.

[Regarding the ‘spin-2 graviton’, see an interesting comment on Not Even Wrong: ‘LDM Says:
October 26th, 2006 at 12:03 pm

Referring to footnote 12 of the physics/0610168 about string theory and GR…

If you actually check what Feynman said in the “Feynman Lectures on Gravitation”, page 30…you will see that the (so far undetected) graviton, does not, a priori, have to be spin 2, and in fact, spin 2 may not work, as Feynman points out.

This elevation of a mere possibility to a truth, and then the use of this truth to convince oneself one has the correct theory, is a rather large extrapolation.’

Note that I also read those Feynman lectures on gravitation when Penguin brought them out in paperback a few years ago and saw the same thing, although I hated the abject speculation in them where Feynman suggests that the strength ratio of gravity to electromagnetism is like the ratio of the radius of the universe to the radius of a proton, without any mechanism or dynamics.  Tony Smith quotes a bit of them on his site, which I re-quote on my home page.  The spin depends on the nature of the radiation: if it is non-oscillating, then for the same reason that infinite self-inductance prevents propagation in a single-way mode, it can only propagate in a two-way mode, like electric/Heaviside-Poynting energy (two non-oscillating energy currents going in opposite directions), and this will affect what you mean by spin.

On my home page there are three main sections dealing with the gravity mechanism dynamics: near the top of http://feynman137.tripod.com (scroll down to the first illustration), at http://feynman137.tripod.com/#a, and, for technical calculations predicting the strength of gravity accurately, at http://feynman137.tripod.com/#h.  The first discussion, near the top of the page, explains how shielding occurs: ‘… If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is red-shifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall. …’

This brings up the issue of how electromagnetism works.  Obviously, the charges of gravity and electromagnetism are different: masses don’t have the symmetry properties of the electric charge.  For example, mass increases with velocity, while electric charge doesn’t.  I’ve dealt with this in the last couple of posts on this blog, but unification physics is a big field and I’m still making progress.

One comment about spin.  Fermions have half-integer spin, which means they are like a Mobius strip, requiring 720 degrees of rotation for a complete exposure of their surface; Fermi-Dirac statistics describe such particles.  Bosons have integer spin, and spin-1 bosons are relatively normal in that they only require 360 degrees of rotation for a complete revolution.  Spin-2 gravitons presumably require only 180 degrees of rotation per revolution, so they appear stringy to me.

I think the exchange radiation of gravity and electromagnetism is the same thing – based on the arguments in previous posts – and is spin-1 radiation, albeit continuous radiation.  It is quite possible to have continuous radiation in a Dirac sea, just as you can have continuous waves composed of molecules in a water-based sea.]

‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Francis Bacon, Novum Organum.

This would allow LQG to be built as a bridge between path integrals and general relativity. I wish Smolin or Woit would pursue this.

Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)

– Feynman, QED, Penguin, 1990, page 54.

That’s wave-particle duality explained. The path integrals don’t mean that the photon goes on all possible paths; as Feynman says, it uses only a “small core of nearby space”.

The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, then the photon is diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path and paths near it.

Similarly, you find people saying that QFT says the vacuum is full of loops of annihilation and creation. When you check what QFT actually says, it is that those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, ie below the IR cutoff or beyond 1 fm from a particle, then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, ie at zero distance from a particle, then the loops would have infinite energy and momenta, and their effects on the field would be infinite, again causing problems.

So the vacuum simply isn’t full of loops (they only extend out to 1 fm around particles). Hence no dark energy mechanism. 

For more recent information on gravity, see http://electrogravity.blogspot.com/

See the discussion of this at https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/


There are two cutoffs, named for historical reasons after the two extreme ends of the visible spectrum of light, which has a lower (infrared) ‘IR’ cutoff and an upper (ultraviolet) ‘UV’ cutoff.  Obviously, visible-light IR and UV have nothing to do with the quantum field theory IR and UV cutoffs, which are in the gamma ray energy region (0.511 MeV and 10^16 GeV or thereabouts).

To calculate the exact distance corresponding to the IR cutoff, simply calculate the distance of closest approach of two electrons colliding at 1.022 MeV (total collision energy), or 0.511 MeV per particle.  This is easy, as it is Coulomb scattering.  See http://cosmicvariance.com/2006/10/04/is-that-a-particle-accelerator-in-your-pocket-or-are-you-just-happy-to-see-me/#comment-123143 :

Unification is often made to sound like something that only occurs at a fraction of a second after the BB: http://hyperphysics.phy-astr.gsu.edu/hbase/astro/unify.html#c1

The problem is, unification also has another meaning: that of closest approach when two electrons (or whatever) are collided. Unification of force strengths occurs not merely at high energies, but close to the core of a fundamental particle.

The kinetic energy is converted into electrostatic potential energy as the particles are slowed by the electric field. Eventually, the particles stop approaching (just before they rebound), and at that instant the entire kinetic energy has been converted into electrostatic potential energy, E = (charge^2)/(4*Pi*Permittivity*R), where R is the distance of closest approach.

This concept enables you to relate the energy of the particle collisions to the distance of approach. For E = 1 MeV, R = 1.44 x 10^-15 m (this assumes one moving electron of 1 MeV hitting a stationary electron, or two 0.5 MeV electrons colliding head-on). OK, I do know that there are other types of scattering than simple Coulomb scattering, so it gets far more complex, particularly at higher energies.

But just thinking in terms of distance from a particle, you see unification very differently from the usual picture. For example, experiments in 1997 (published by Levine et al. in PRL v. 78, 1997, no. 3, p. 424) showed that the observable electric charge is 7% higher at 92 GeV than at low energies like 0.5 MeV. Allowing for the increased charge due to reduced polarization shielding, the 92 GeV electrons approach within 1.8 x 10^-20 m (assuming purely Coulomb scattering).

Extending this to the assumed unification energy of 10^16 GeV, the distance of approach comes down to 1.6 x 10^-34 m, and the Planck scale is ten times smaller still.
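The closest-approach figures quoted above can be reproduced from E = (charge^2)/(4*Pi*Permittivity*R). This sketch assumes purely Coulomb scattering, with the measured 7% charge increase applied at the two higher energies:

```python
from math import pi

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
MeV = 1.602176634e-13     # 1 MeV in joules

def closest_approach(E, charge_factor=1.0):
    # Distance R at which the whole collision energy E has been converted
    # into Coulomb potential energy between charges q = charge_factor * e.
    return (charge_factor * e)**2 / (4 * pi * eps0 * E)

print(closest_approach(1 * MeV))            # ~1.44e-15 m at 1 MeV
print(closest_approach(92e3 * MeV, 1.07))   # ~1.8e-20 m at 92 GeV
print(closest_approach(1e19 * MeV, 1.07))   # ~1.6e-34 m at 10^16 GeV
```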

If you replot graphs like http://www.aip.org/png/html/keith.htm (or Fig. 66 of Lisa Randall’s Warped Passages) as force strength versus distance from the particle core, you have to treat leptons and quarks differently.

You know that vacuum polarization is shielding the core particle’s electric charge, so that electromagnetic interaction strength rises as you approach unification energy, while strong nuclear forces fall.

Electric field lines diverge, which causes the inverse square law (the number of lines crossing unit area falls as the inverse square of distance, because the number of radial field lines is constant while the surface area of a sphere at distance R from the electron core is 4*Pi*R^2).  The polarization of the vacuum within 1 fm of an electron core means virtual positrons are drawn closer to the electron core than virtual electrons, creating an electric field which opposes and cancels out some of the electron’s field lines entirely.

If you look at my home page you will find that the electron’s charge is 7% stronger at a scattering energy of 90 GeV than at 1 fm distance and beyond (0.511 MeV per particle, or 1 MeV scattering energy per collision).  For purely Coulomb, perfectly elastic scattering at normal incidence, the distance of closest approach goes inversely as the energy of the collision, so on this basis the charge of the electron is the normal charge “e” at 1 fm (10^{-15} m) and beyond, but is 1.07e at something like 10^{-20} m.  Actually, the collision is not elastic but results in other particles being created and in reactions, so the true distance of 1.07e charge is less than 10^{-20} m.

I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424

Nature reviews Dr Woit’s book Not Even Wrong and Smolin’s book Trouble; Lubos Motl’s string snaps; Professor Bert Schroer puts string theory out of its misery

This WordPress post is a revised and updated version of the post here.

‘The problem is not that there are no other games in town, but rather that there are no bright young players who take the risk of jeopardizing their career by learning and expanding the sophisticated rules for playing other games.’

– Prof. Bert Schroer, http://arxiv.org/abs/physics/0603112, p46

‘My final conclusion is that the young and intelligent Harvard professor Lubos Motl has decided to build his career on offering a cartering service for the string community. He obviously is a quick scanner of the daily hep-th server output, and by torching papers which are outside the credo of string theorists (i.e. LQG, AQFT) he saves them time. The downgrading of adversaries is something which has at least the tacit consent of the community. It is evident that he is following a different road from that of using one’s intellectual potential for the enrichment of knowledge about particle physics. If one can build a tenure track career at a renown university by occasionally publishing a paper but mainly keeping a globalized community informed by giving short extracts of string-compatible papers and playing the role of a Lord of misuse to outsiders who have not yet gotten the message, the transgression of the traditional scientific ethics [24] for reasons of career-building may become quite acceptable. It would be interesting to see into what part of this essay the string theorists pitbull will dig his teeth. [He’ll just quietly run away, Professor Schroer! All these stringers don’t have any answer to the facts so they run away when under pressure, following Kaku’s fine example.]’ – Prof. Bert Schroer, http://arxiv.org/abs/physics/0603112, p22

First, Kaku ‘accidentally’ published on his website a typically inaccurate New Scientist magazine article draft which will appear in print in mid-November 2006. He falsely claimed:

‘The Standard Model of particles simply emerges as the lowest vibration of the superstring. And as the string moves, it forces space-time to curl up, precisely as Einstein predicted. Hence, both theories are neatly included in string theory. And unlike all other attempts at a unified field theory, it can remove all the infinities which plague other theories. But curiously, it does much more. Much, much more.’

Actually, it doesn’t, as Peter Woit patiently explains. String theory starts with a 1-dimensional line, when it oscillates time enters so it becomes a 2-dimensional worldsheet, which then needs at least 8 more dimensions added to give the resonances of particle physics satisfying conformal symmetry. So you end up with at least 10 dimensions, and because general relativity has 4 spacetime dimensions (3 spacelike, 1 timelike), you obviously somehow need to compactify or roll up 6 dimensions, which is done using a 6-d Calabi-Yau manifold, that has many size and shape parameters, giving the string something like 10^500 vibrational metastable resonance states and that many different solutions. The Standard Model might or might not be somewhere in there. Even if it is, you then have the problem of explaining all the other (unphysical) solutions.

10^500 is actually too much to ever work out rigorously in the age of the universe: it is 1 followed by 500 zeroes. For comparison, the total number of fermions in the universe is only about 10^80. The age of the universe measured in seconds is merely 4.7 x 10^17.

So, if stringers could evaluate one solution per second, it would take them ~(10^500)/(10^17) = 10^483 times the age of the universe. Now let’s assume they could somehow evaluate one solution every millionth of a second. Then they would get through the problem in (10^483)/(10^6) = 10^477 times the age of the universe.
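The arithmetic above is easy to check with logarithms. A sketch: taking the measured age of 4.7 x 10^17 s rather than the round 10^17 s gives exponents a shade smaller than those quoted, but the conclusion is unchanged:

```python
import math

log10_solutions = 500          # 10^500 metastable string vacua
age_seconds = 4.7e17           # age of the universe, seconds

# Number of universe-ages needed at one solution evaluated per second:
log10_ages = log10_solutions - math.log10(age_seconds)
print(log10_ages)      # ~482.3, i.e. roughly 10^482-10^483 ages of the universe
print(log10_ages - 6)  # ~476.3, at a million solutions per second
```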

Now suppose I came up with a theory which predicted even just 2 different solutions for the same thing. If one of them turned out to be consistent with the real world, and one didn’t, I could not claim to predict reality. Dirac’s quantum field theory equation of 1928 gives an example of how to treat physical solutions. His spinor Hamiltonian predicts E = +/-mc^2, which differs from Einstein’s E = mc^2.

Dirac realised that ALL SOLUTIONS MUST BE PHYSICAL, so he interpreted the E = -mc^2 solution as the prediction of antimatter, which Anderson discovered as the “positron’’ (anti-electron) in 1932. This is the way physics is done.
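The two signs come straight out of the relativistic energy-momentum relation which the Dirac equation is built to satisfy:

```latex
E^2 = p^2 c^2 + m^2 c^4
\quad\Longrightarrow\quad
E = \pm\sqrt{p^2 c^2 + m^2 c^4}
\quad\Longrightarrow\quad
E\big|_{p=0} = \pm m c^2
```

The negative root is the one Dirac interpreted physically as antimatter.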

So the trouble is due to the fact that a large number of extra dimensions are needed to get string theory to ‘work’ as an ad hoc model, and to make those extra dimensions appear invisible they are curled up into a Calabi-Yau manifold. Because there are loads of parameters to describe the exact sizes of the many dimensions of the manifold, it is capable of 10^500 states of resonance, and there is no proof that any of those gives the standard model of particle physics.

Even if it does, it is hardly a prediction because the theory is so vague it has loads of unphysical solutions. Susskind’s stringy claim (see here for latest Susskind propaganda) that all the solutions are real and occur in other parallel universes is just a religious belief, since it can’t very well be checked. The anthropic principle can make predictions but it is very subjective and is not falsifiable, so doesn’t fit in with Popper’s criterion of science.

As for its claim to predict gravity, it again only predicts the possibility of unobservable spin-2 gravitons, and says nothing checkable about gravity. See the comment by Eddington made back in 1920, quoted here:

‘It is said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’

– A. S. Eddington, Space Time and Gravitation, Cambridge University Press, 1920, p64. Contrast that caution to Witten’s stringy hype:

‘String theory has the remarkable property of predicting gravity.’

– Edward Witten, superstring 10/11 dimensional M-theory originator, Physics Today, April 1996.

Nature’s review is available here and it reads in part:

Nature 443, 491 (5 October 2006). Published online 4 October 2006:

Theorists snap over string pieces

Geoff Brumfiel


‘Books spark war of words in physics. Two recently published books are riling the small but influential community of string theorists, by arguing that the field is wandering dangerously far from the mainstream.

‘The books’ titles say it all: Not Even Wrong, a phrase that physicist Wolfgang Pauli used to describe incomplete ideas, and The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Both articulate a fear that the field is becoming too abstract and is focusing on aesthetics rather than reality. Some physicists even warn that the theory’s dominance could pose a threat to the scientific method itself.

‘Those accusations are vehemently denied by string theorists, and the books – written by outsiders – have stirred deep resentment in the tight-knit community. Not Even Wrong was published in June and The Trouble with Physics came out in September; shortly after they appeared on the Amazon books website, string theorist Lubos Motl of Harvard University posted reviews furiously entitled “Bitter emotions and obsolete understanding of high-energy physics’’ and “Another postmodern diatribe against modern physics and scientific method’’. As Nature went to press, the reviews had been removed.

‘Few in the community are, at least publicly, as vitriolic as Motl. But many are angry and struggling to deal with the criticism. “Most of my friends are quietly upset,’’ says Leonard Susskind, a string theorist at Stanford University in California. …

‘The books leave string theorists such as Susskind wondering how to approach such strong public criticism. “I don’t know if the right thing is to worry about the public image or keep quiet,’’ he says. He fears the argument may “fuel the discrediting of scientific expertise’’.

‘That’s something that Smolin and Woit insist they don’t want. Woit says his problem isn’t with the theory itself, just some of its more grandiose claims. ‘‘There are some real things you can do with string theory,’’ he says. [Presumably Woit means sifting through 10^500 metastable solutions trying to find one which looks like the Standard Model, or using string theory to make up real propaganda. ]’

– Geoff Brumfiel, Nature.

Lubos Motl responds on Peter Woit’s blog with disgusting language, as befitting the pseudo-scientific extra dimensional string theorist who can’t predict anything checkable:

Lubos Motl Says: October 3rd, 2006 at 8:14 pm

Dear crackpot Peter, you are a damn assh***. I will sue you for the lies those crackpot commenters telling on me on your crackpot blog. I hope you will die soon. The sooner the better.

So: be prepared to hear from my lawyer.

Best Lubos

Note: string theorist Aaron Bergman reviewed Not Even Wrong at the String Coffee Table, and now he writes in a comment on Not Even Wrong that if he reviewed Smolin’s Trouble he would ‘probably end up being a bit more snide’ in the review than Sean Carroll was on Cosmic Variance. That really does sum up the arrogant attitude problem with stringers…

Update 6 October 2006

The distinguished algebraic quantum field theorist, Professor Bert Schroer, has written a response to Lubos Motl in the form of an updated and greatly revised paper, the draft version of which was previously discussed on Dr Peter Woit’s weblog Not Even Wrong: http://arxiv.org/abs/physics/0603112. (Schroer’s publication list is here.) He analyses the paranoia of string theorists on pages 21 et seq.

He starts by quoting Motl’s claim ‘Superstring/M-theory is the language in which God wrote the world’, and remarks:

‘Each time I looked at his signing off, an old limerick which I read a long time ago came to my mind. It originates from pre-war multi-cultural Prague where, after a performance of Wagner’s Tristan and Isolde by a maestro named Motl, an art critic (who obviously did not like the performance) wrote instead of a scorcher for the next day’s Vienna newspaper the following spooner (unfortunately untranslatable without a complete loss of its lovely polemic charm):

‘Gehn’s net zu Motl’s Tristan
schaun’s net des Trottels Mist an,
schaffn’s lieber ’nen drittel Most an
und trinkn’s mit dem Mittel Trost an’

(A very poor translation is:

Do not go to Motl’s Tristan.
Don’t appear at this nincompoop muck,
Get yourself a drink instead
And remain in comfort.)

‘After having participated in Peter Woit’s weblog and also occasionally followed links to other weblogs during March-June 2006 I have to admit that my above conclusions about Lubos Motl were wrong. He definitely represents something much more worrisome than an uninhibited name-calling (crackpot, rat, wiesel…..) character who operates on the fringes of ST and denigrates adversaries of string theory23 in such a way that this becomes an embarrassing liability to the string community. If that would be true, then at least the more prominent string theorists, who still try to uphold standards of scientific ethic in their community, would keep a certain distance and the whole affair would not even be worth mentioning in an essay like this. But as supporting contributions of Polchinski and others to Motl’s weblog show, this is definitely not the case. My final conclusion is that the young and intelligent Harvard professor Lubos Motl has decided to build his career on offering a cartering service for the string community. He obviously is a quick scanner of the daily hep-th server output, and by torching papers which are outside the credo of string theorists (i.e. LQG, AQFT) he saves them time. The downgrading of adversaries is something which has at least the tacit consent of the community. It is evident that he is following a different road from that of using one’s intellectual potential for the enrichment of knowledge about particle physics. If one can build a tenure track career at a renown university by occasionally publishing a paper but mainly keeping a globalized community informed by giving short extracts of string-compatible papers and playing the role of a Lord of misuse to outsiders who have not yet gotten the message, the transgression of the traditional scientific ethics24 for reasons of career-building may become quite acceptable. It would be interesting to see into what part of this essay the string theorists pitbull will dig his teeth.’

Peter Woit links to Risto Raitio’s weblog discussion of Schroer’s paper which points out aspects which are even more interesting:

‘For the present particle theorist to be successful it is not sufficient to propose an interesting idea via written publication and oral presentation, but he also should try to build or find a community around this idea. The best protection of a theoretical proposal against profound criticism and thus securing its longtime survival is to be able to create a community around it. If such a situation can be maintained over a sufficiently long time it develops a life of its own because no member of the community wants to find himself in a situation where he has spend the most productive years on a failed project. In such a situation intellectual honesty gives way to an ever increasing unwillingness and finally a loss of critical abilities as a result of self-delusion.

‘I would like to argue that these developments have been looming in string theory for a long time and the recent anthropic manifesto [1] (L. Susskind, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design) (which apparently led to a schism within the string community) is only the extreme tip of an iceberg. Since there has been ample criticism of this anthropic viewpoint (even within the string theory community), my critical essay will be directed to the metaphoric aspect by which string theory has deepened the post standard model crisis of particle physics. Since in my view the continuation of the present path could jeopardize the future research of fundamental physics for many generations, the style of presentation will occasionally be somewhat polemic.

‘An age old problem of QFT which resisted all attempts to solve it is the problem of existence of models i.e. whether there really exist a QFT behind the Lagrangian name and perturbative expressions. Since there are convincing arguments that perturbative series do not converge (they are at best asymptotic expressions) this is a very serious and (for realistic models) unsolved problems. The problem that particle physics most successful theory of QED is also its mathematically most fragile has not gone away. In this sense QFT has a very precarious status very different from any other area of physics in particular from QM. This is very annoying and in order to not to undermine the confidence of newcomers in QFT the prescribed terminology is to simply use the word ‘‘defined” or ‘‘exists” in case some consistency arguments (usually related in some way to perturbation theory) have been checked.

‘These problems become even worse in theories as string theory (which in the eyes of string protagonists are supposed to supersede QFT). In this case one faces in addition to the existence problem the conceptual difficulty of not having been able to extract characterizing principles from ad hoc recipes

‘… Particle physics these days is generally not done by individuals but by members of big groups, and when these big caravans have passed by a problem, it will remain in the desert. A reinvestigation (naturally with improved mathematical tool and grater conceptual insight) could be detrimental to the career of somebody who does not enjoy the security of a community.

‘In its new string theoretical setting its old phenomenological flaw of containing a spin=2 particle was converted into the ‘‘virtue” of the presence of a graviton. The new message was the suggestion that string theory (as a result of the presence of spin two and the apparent absence of perturbative ultraviolet divergencies) should be given the status of a fundamental theory at an energy scale of the gravitational Planck mass, 10^19 GeV, i.e. as a true theory of everything (TOE), including gravity. Keeping in mind that the frontiers of fundamental theoretical physics (and in particular of particle physics) are by their very nature a quite speculative subject, one should not be surprised about the highly speculative radical aspects of this proposals; we know from history that some of our most successful theories originated as speculative conjectures. What is however worrisome about this episode is rather its uncritical reception. After all there is no precedent in the history of physics of a phenomenologically conceived idea for laboratory energies to became miraculously transmuted into a theory of everything by just sliding the energy scale upward through 15 orders of magnitudes and changing the terminology without a change in its mathematical-conceptual setting.

‘In this essay I emphasized that, as recent progress already forshadows, the issue of QG will not be decided in an Armageddon between ST and LQG, but QFT will enter as a forceful player once it has conceptually solidified the ground from where exploratory jumps into the blue yonder including a return ticket can be undertaken.

‘The problem is not that there are no other games in town, but rather that there are no bright young players who take the risk of jeopardizing their career by learning and expanding the sophisticated rules for playing other games.’

I’ve enjoyed Schroer’s excellent paper, and the first part has quite a bit of discussion about the ultraviolet (UV) divergence problem in quantum field theory, where you have to take an upper-limit cutoff in the charge renormalization to prevent a divergence from loops of massive particles occurring at extremely high energy. The solution to this problem is straightforward (it is not a physically real problem): there physically just isn’t room for massive loops to be polarized above the UV cutoff, because at higher energy you get closer to the particle core, so the space is simply too small in size to have massive loops with charges being polarized along the electric field vector.

To explain further, if the massive particle loops are simply energized Dirac sea particles, i.e., if the underlying mechanism is that there is a Dirac sea in the vacuum which gains energy close to charges so that pairs of free electrons + positrons (and heavier loops where the field strength permits) are able to pop into observable existence close to electrons where the electric field strength is above 10^18 volts/metre, then the UV cutoff is explained: for extremely high energy, the corresponding distance is so small there is not likely to be any Dirac sea particles available in that small space. So the intense electric field strength is unable to produce any massive loops. We rely on Popper’s explanation of the uncertainty principle in this case: the massive virtual particles are low energy Dirac field particles which have simply gained vast energy from the intense field:
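The ~10^18 volts/metre threshold quoted above is the standard Schwinger critical field for pair production, E_c = m^2 c^3/(e*hbar); a quick check:

```python
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Schwinger critical field: the electric field strength at which the vacuum
# can yield observable electron-positron pairs.
E_c = m_e**2 * c**3 / (e * hbar)
print(E_c)   # ~1.3e18 V/m
```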

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Note that string theory claims to solve the ultraviolet divergence problem at high energy by postulating 1:1 boson to fermion supersymmetry (one massive bosonic superpartner for every fermion in the universe) which is extravagant and predicts nothing except unification of forces near the Planck scale. It is artificial and even if you want string theory to be real, there are ways of getting around that by modifying 26 dimensional bosonic string theory as Tony Smith shows (he is suppressed from arXiv now, for not following the mainstream herd into M-theory). Previous posts are here (illustrated with force unification graphs showing effect of supersymmetry) and here (background information). So everything string says is wrong/not even wrong. The greatest claims of string theory to be successful are unphysical, uncheckable.

Updated diagram of mass model: http://thumbsnap.com/vf/FBeqR0gc.gif. Now I’ll explain in detail the vacuum polarization dynamics of that model. 

In Road to Reality, Penrose neatly illustrates with a diagram how the polarization of pair-production charges, in the intense electric field surrounding a particle core, shields the core charge. He speculates that the observed long-range electric charge is smaller than the core electron charge by a factor of the square root of 137, i.e. 11.7. His book was published in 2004, I believe. But in the August 2002 and April 2003 issues of Electronics World magazine, I gave some mathematical evidence that the ratio is 137, and not the square root of 137. However, I didn’t have a clear physical picture of vacuum polarization when I wrote the articles and did not understand the difference at the time, and Penrose’s book encouraged me enormously to investigate it!

The significance is the mechanistic explanation of quantum field theory and the prediction of the masses of all observable (lepton and hadron) particles in the universe: see my illustration here. (This is a fermion:boson correspondence, as I’ll explain later in this comment, but it is not an exact 1:1 supersymmetry, so force unification occurs differently to string theory, as I’ll also explain later.)

In that diagram the Pi shielding factor is due to the charge rotation effect while exchange gauge bosons are emitted and received by the rotating charge. Think about Star Wars: shooting down an ICBM with a laser. In the 1980s it was proved that by rapidly spinning the ICBM along its long axis, you reduce the exposure of the skin to laser energy by a factor of Pi, as compared to a non-spinning missile, or as compared to the particle seen end-on. What is happening is that the effective “cross-section”, as we call the interaction area in particle and nuclear physics, is increased by a factor of Pi if you see the particle spinning edge-on. So if the spinning particle first receives and then (after the slightest delay) re-emits an exchange radiation particle, the re-emitted particle can be fired off in any direction at all (if the spin is fast), whereas if it is not spinning the particle goes back the way it came (in a head-on or normal-incidence collision).
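The factor of Pi in the spinning-missile analogy is just geometry. A sketch, assuming uniform fast spin: a static cylinder of radius r and length L takes the beam on its projected area 2rL, while spinning spreads the same deposited energy over the full circumference:

```latex
\frac{\text{area heated, spinning}}{\text{area heated, static}}
= \frac{2\pi r L}{2 r L} = \pi
```

So the energy per unit area of skin is diluted by exactly Pi.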

The multiplying factors in front of Pi depend on the spin dynamics. For a spin-½ particle like an electron, there are two spin revolutions per rotation, which means the electron is like a Mobius strip (a loop of paper with a half twist, so that top and bottom surfaces are joined – if you try to draw a single line right the way around a Mobius strip of paper, you find it covers both sides of the paper and has a length of exactly twice that of the paper, so a Mobius strip needs to be rotated twice in order to expose the full surface – like the spin-½ electron). This gives the factor of 2. The higher factors come from the fact that the distance of the electric charge from the mass-giving boson varies.

The best sources for explaining what is physically going on in quantum field theory polarization are a 1961 book by Rose (chief physicist at Oak Ridge National Lab., USA) called Relativistic Electron Theory (I quote the vital bits on my home page), the 1997 PRL article by Levine et al., which experimentally confirms it by smashing electrons together and measuring the change in Coulomb coupling (again quoted on my page), and the lectures here. Those lectures originally contained an error: the electron-positron annihilation and creation process forms one “vacuum loop” correction which occurs at the energy required for pair-production of those particles, i.e., an energy of 0.511 MeV per particle, and the authors had ignored higher loops between 0.5 and 92,000 MeV. For example, when the energy exceeds 105 MeV, you get loops of muon-antimuon pairs being endlessly created and annihilated in the vacuum, which means you have to add a higher-order loop correction to the polarization calculation. The authors had stated the equation for electron-positron loops as being valid all the way from 0.5 MeV to 92,000 MeV, and had forgotten to include loads of other loops, although they have now corrected and improved the paper. The vital results in the paper about polarization are around page 70 for the effect on measurable electron charge, and on page 85, where the electric field strength threshold is calculated.

It is obvious that quantum field theory is very poor mathematically (see quotes at top of the page http://www.cgoakley.demon.co.uk/qft).

Most professors of quantum field theory shy away from talking about realities like polarization because there are gross problems. The biggest problem is that although virtual charges are created in pairs of monopoles with opposite charges that can be polarized, quantum field theory also requires the mass of the electron to be renormalized.

Since mass is the charge of gravitational force, it doesn’t occur in negative types (antimatter falls the same way as normal matter), so it is hard to see how to polarize mass. Hence the heuristic explanation of how electric fields are renormalized by polarization of pair production electric charges, fails to explain mass renormalization.

The answer seems to be that mass is coupled to the electric polarization mechanism. The massive Z_o boson is probably an electric dipole like the photon (with a partly negative and partly positive electric field), but because it is massive it goes slowly and can be polarized by aligning with an electric field. If the vacuum contains Z_o bosons in its ground state, this would explain how masses arise. See comments on recent posts on this blog, and see the predictions of the masses of all particles as illustrated here, which shows the polarized zones around particles. Each polarized zone has inner and outer circles corresponding to the upper (UV) and lower (IR) cutoffs for particle scatter energy in QFT. The total shielding of each polarization zone is the well known alpha factor of 1/137. If the mass-producing boson is outside this polarization zone, the charge shielding reduces the mass by the alpha factor. With very little numerology, this model works extremely well. You would expect that semi-empirical relationships of the numerological sort would precede a rigorous mass-predicting mechanism, just as Balmer’s formula preceded Bohr’s theory for it. Alejandro Rivero and another guy published the vital first numerical link between the Z_o boson mass and the muon/electron masses, which made me pay attention and check further.

Obviously any as yet unorthodox idea may be attacked by the ‘crackpotism’ charge, but I think this one is particularly annoying to orthodoxy as it is hard to dismiss objectively.

More on Cosmic Variance here, here, here, on Not Even Wrong here, here, here, here, and on Christine Dantas’ Background Independence here.

POLARIZATION MECHANISM BY ELECTRIC DIPOLE (PAIR PRODUCTION):

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001…’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
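The Schwinger correction quoted above is a one-line calculation; a quick numerical check (using the standard low-energy value of alpha):

```python
# Schwinger's first-order QED correction to the electron magnetic moment,
# as quoted above: mu/mu0 = 1 + alpha/(2*pi), in Bohr magnetons.
import math

alpha = 1 / 137.035999  # fine-structure constant at low energy
moment_ratio = 1 + alpha / (2 * math.pi)
print(f"mu/mu0 = {moment_ratio:.5f}")  # ~1.00116, matching the quoted value
```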

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick and Levine found a 7% increase in the strength of the Coulomb/Gauss force law when colliding electrons at an energy of about 92 GeV. The electromagnetic coupling constant is 1/137 at low energies, but was measured to be about 1/128.5 at 92 GeV. This rise occurs because the collisions penetrate part of the polarised vacuum shield. We have to understand Maxwell’s equations in terms of the gauge boson exchange process that causes forces, and the polarised-vacuum shielding process that unifies the forces into a single force at very high energy. The minimal SUSY Standard Model shows the electromagnetic coupling increasing from 1/137 to 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism but only 137/25, about 5.5 times, is heuristically explicable in terms of the potential energy carried by the various force gauge bosons: if one force (electromagnetism) increases, more energy is carried by virtual photons at the expense of something else, say gluons, so the strong nuclear force loses strength as the electromagnetic force gains it. Simple conservation of energy thus explains, and allows predictions of, the variation of the force strengths mediated by the different gauge bosons. Hence, no need for M-theory.
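The rise from 1/137 towards 1/128.5 can be roughly reproduced with a standard one-loop vacuum-polarisation estimate. This is a sketch, not the full Standard Model calculation: the light quark masses below are effective values I have assumed for illustration (they are conventionally fitted, not measured pole masses):

```python
# Rough one-loop running of the electromagnetic coupling alpha(Q) from
# vacuum polarisation by charged fermion loops.  Light-quark "masses" are
# effective fitted values (my assumption); the full calculation gives
# 1/alpha ~ 128.9 at the Z mass, close to the ~1/128.5 cited above.
import math

alpha0 = 1 / 137.036   # low-energy coupling
Q = 91.2               # probe energy in GeV (~Z boson mass)

# (name, electric charge, colour factor, effective mass in GeV)
fermions = [
    ("e", -1, 1, 0.000511), ("mu", -1, 1, 0.1057), ("tau", -1, 1, 1.777),
    ("u", 2/3, 3, 0.062), ("d", -1/3, 3, 0.083), ("s", -1/3, 3, 0.403),
    ("c", 2/3, 3, 1.5), ("b", -1/3, 3, 4.5),
]

# Delta-alpha from each fermion loop with Q >> m:  Nc*q^2*(2 ln(Q/m) - 5/3)
delta = sum(nc * q * q * (2 * math.log(Q / m) - 5 / 3)
            for _, q, nc, m in fermions) * alpha0 / (3 * math.pi)
alpha_Q = alpha0 / (1 - delta)
print(f"1/alpha at {Q} GeV ~ {1 / alpha_Q:.1f}")   # ~129, versus 137.0 at low energy
```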

As for the mechanism of gravity, the dynamics here, which predict gravitational strength and various other observable and further checkable aspects, are apparently consistent with a gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law).  Lunsford has investigated this over SO(3,3):


‘…I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’

Lunsford’s paper is http://cdsweb.cern.ch/search.py?recid=688763&ln=en

Lunsford’s prediction is correct: he proves that the cosmological constant must vanish in order that gravitation be unified with electromagnetism.  As Nobel Laureate Phil Anderson says, the observed fact regarding the imaginary cosmological constant and dark energy is merely that:

“… the flat universe is just not decelerating, it isn’t really accelerating …”


Since it isn’t accelerating, there is no dark energy and no cosmological constant: Lunsford’s unification prediction is correct, and is explicable in terms of Yang-Mills QFT.

See for example the discussion in a comment on Christine Dantas’ blog:

‘From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c. Hence, we predict that the Hubble law will be the correct formula.

‘Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

‘Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

‘I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

‘People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.’
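The size of the dropped term [0.5(Hr)^2]/c relative to the Hubble term Hr is easy to see numerically. Treating both as velocity terms, per the quoted comment’s own notation, and using a rough Hubble parameter of my choosing:

```python
# Relative size of the gravitational-retardation term [0.5(Hr)^2]/c that
# the quoted comment says must be dropped, compared with the Hubble term Hr.
H = 2.27e-18   # Hubble parameter in 1/s (~70 km/s/Mpc, an assumed round value)
c = 3.0e8      # speed of light in m/s

for r_fraction in (0.1, 0.5, 1.0):   # distance r as a fraction of c/H
    r = r_fraction * c / H
    ratio = (0.5 * (H * r) ** 2 / c) / (H * r)   # simplifies to 0.5*H*r/c
    print(f"r = {r_fraction:.1f} c/H : correction/Hubble term = {ratio:.2f}")
```

So the term is negligible nearby but becomes half the Hubble term at the greatest distances, which is why dropping or keeping it changes the predicted cosmology.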

As for LQG:

‘In loop quantum gravity, the basic idea is … to … think about the holonomy [whole rule] around loops in space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Cape, London, 2006, p189.

Surely this is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force causing gauge bosons from one mass to another and back again.

Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations.

For the empirically-verifiable prediction of the strength of gravity, see the mathematical proofs at http://feynman137.tripod.com/#h which have been developed and checked for ten years.  The result is consistent with current estimates of the Hubble parameter and the corresponding density.  Putting in the Hubble parameter and density yields the universal gravitational constant within the error of the parameters.  As cosmology refines these estimates, the predicted strength of gravity can be checked more sensitively.  Another relationship the model implies is the dynamics of the strength of electromagnetism relative to that of gravity.
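As an order-of-magnitude sketch of the kind of check described (not the linked proofs themselves, which include further dimensionless factors), the relation MG = tc^3 discussed at the start of this post can be rearranged with M = (4/3)πR³ρ, R = c/H and t = 1/H to give G = 3H²/(4πρ). With rough observational inputs of my own choosing:

```python
# Sketch: G from MG = tc^3 with M = (4/3)*pi*R^3*rho, R = c/H, t = 1/H,
# which rearranges to G = 3*H^2/(4*pi*rho).  Input values are rough
# observational estimates (my assumptions, not the author's exact inputs).
import math

H = 2.27e-18    # Hubble parameter in 1/s (~70 km/s/Mpc)
rho = 9.2e-27   # mean density in kg/m^3 (roughly the critical density)

G_pred = 3 * H ** 2 / (4 * math.pi * rho)
print(f"G predicted ~ {G_pred:.2e} m^3/(kg s^2)")  # measured: 6.67e-11
```

For these inputs the result lands within a factor of about two of the measured G, i.e. the right order of magnitude; the dimensionless corrections belong to the detailed treatment at the link.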

This utilises the lepton-quark capacitor model, with the gauge boson exchange radiation representing the electromagnetic field.  For underlying electromagnetic theory problems see this page: ‘Kirchhoff’s circuital current law dQ/dt + dD/dt = 0 is correct so far as it is a mathematical model dealing with large numbers of electrons. The problem with it is that it assumes, by virtue of the differential dQ/dt, that charge is a continuous variable and is not composed of discontinuities (electrons). So it is false on that score, and is only a mathematical approximation which is useful when the number dQ/dt represents a large change in the number of electrons passing a given point in the circuit in a second. A second flaw with the equation is the second term dD/dt (displacement current), which is a mathematical artifact and doesn’t describe a real vacuum displacement current. Instead, the reality is a radiative field effect, not a displacement or vacuum current. There is no way the vacuum can be polarized to give an electric displacement current where the field strength is below 10^18 volts/metre. Hence, displacement current doesn’t exist. The term dD/dt represents a simple but involved mechanism whereby accelerating charges at the wavefront in each conductor exchange radio frequency energy, but none of the energy escapes to the surroundings, because each conductor’s emission is naturally an inversion of the signal from the other, so the superimposed signals cancel out as seen from a distance large in comparison to the separation of the two conductors. (As I’ve explained and illustrated previously: [14]).’

The capacitor QFT model in detail:


… At every instant, assuming the electrons have real positions and the indeterminacy principle is explained by ignorance of their positions (which are always real but often unknown), instead of by metaphysics of the type Bohr and Heisenberg worshipped, you have a vector sum of electric fields possible across the universe.

The fields are physically propagated by gauge boson exchange. The gauge bosons must travel between all charges, they can’t tell that an atom is “neutral” as a whole, they just travel between the charges.

Therefore even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.

The random-walk vector sum for the charges of all the hydrogen atoms is the voltage of a single hydrogen atom (something like 90% of the real charged mass in the universe is hydrogen), multiplied by the square root of the number of atoms in the universe.

This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.

So there is a lot of inefficiency, but in a two or three dimensional set up, a drunk taking an equal number of steps in each direction does make progress. Taking 1 step per second, he goes an average net distance from the starting point of t^0.5 steps after t seconds.

For air molecules, the same occurs so instead of staying in the same average position after a lot of impacts, they do diffuse gradually away from their starting points.

Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time.

This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.
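The drunkard’s-walk scaling used above is easy to check numerically. A minimal 2-D Monte Carlo sketch: the magnitude of the vector sum of N randomly aligned unit “dipoles” grows in proportion to sqrt(N) (with a constant factor near 0.9 for a 2-D walk), not in proportion to N:

```python
# Monte Carlo check of the drunkard's-walk claim: the vector sum of N
# randomly oriented unit dipole contributions grows as sqrt(N), not N.
import math
import random

random.seed(1)

def net_field(n_atoms: int, trials: int = 1000) -> float:
    """Average magnitude of the 2-D vector sum of n randomly aligned unit dipoles."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_atoms):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / trials

for n in (100, 400, 1600):
    # quadrupling N should roughly double the net sum
    print(f"N = {n:5d}: |sum| ~ {net_field(n):6.1f}, sqrt(N) = {math.sqrt(n):.1f}")
```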

Now, if gravity is to be unified with electromagnetism (also basically a long range, inverse square law force, unlike the short ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills LeSage quantum gravity mechanism with predictions), it will depend on only a straight line charge summation.

In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out the gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other.

But not quite the same numbers!

You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it.

Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an ODD number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity.

The exact weakness of gravity as compared to electromagnetism is now explained.

Gravity is due to 0.5 x the voltage of 1 hydrogen atom (a “charged capacitor”).

Electromagnetism is due to the random walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a “charged capacitor”), multiplied by the square root of the number of atoms in the universe.

Thus, ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to: 0.5V/(V.N^0.5) = 0.5/N^0.5.

V is the voltage of a hydrogen atom (charged capacitor in effect) and N is the number of atoms in the universe.

This ratio is equal to 10^-40 or so, which is the correct figure.
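The ratio derived above is simple arithmetic. Taking the usual rough estimate of ~10^80 hydrogen atoms in the universe (an assumed round value, not derived here):

```python
# The post's gravity/electromagnetism coupling ratio: 0.5/sqrt(N), where
# N ~ 1e80 is a rough conventional estimate of the number of hydrogen
# atoms in the universe (my assumed input).
import math

N = 1e80
ratio = 0.5 / math.sqrt(N)
print(f"gravity/electromagnetism ratio ~ {ratio:.0e}")  # 5e-41, of order 1e-40
```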

The theory predicts various things that are correct, and others that haven’t been checked yet, so it is experimentally falsifiable.  Since the theory predicts that the black hole radius 2GM/c^2, and not the much bigger Planck scale, is the correct size for lepton and quark gauge boson interaction cross-sections, it implies that gravity traps Poynting TEM wave energy currents (which are light-speed Heaviside energy fields, composed not of slowly drifting charge but of gauge boson type radiation) to create the particles.  This permits a rigorous equivalence between rest mass energy and gravitational potential energy with respect to the rest of the universe.  Such an energy equivalence solves the galactic rotation curve anomaly and is consistent with ‘widely observed dark matter’, as John Hunter shows.  Hunter’s equivalence, like Louise Riofrio’s equation, needs a dimensionless correction factor of e^3, where e is the base of natural logarithms.  Dr Thomas Love of the Departments of Mathematics and Physics, California State University, shows that you can derive Kepler’s mathematical law from an energy equivalence (see previous post).
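To illustrate the size claim above, here is the black-hole radius 2GM/c^2 for an electron compared with the Planck length, using standard constants:

```python
# Black-hole radius 2GM/c^2 for an electron versus the Planck length,
# illustrating how much smaller the former scale is.
import math

G = 6.674e-11       # gravitational constant, m^3/(kg s^2)
c = 2.998e8         # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J s
m_e = 9.109e-31     # electron mass, kg

r_bh = 2 * G * m_e / c ** 2               # ~1.35e-57 m
l_planck = math.sqrt(hbar * G / c ** 3)   # ~1.62e-35 m
print(f"2GM/c^2 for the electron = {r_bh:.2e} m")
print(f"Planck length            = {l_planck:.2e} m")
```

So the proposed interaction cross-section scale is smaller than the Planck length by over twenty orders of magnitude.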

Dr Love also deals with the ‘nothing is real’ claims of pseudo-scientific quantum popularisers who don’t understand mathematical physics.  One claim against causality and mechanism in quantum field theory is entanglement.  Quantum entanglement as an interpretation of the Bell inequality, as tested by Aspect et al., relies upon a belief in the “wavefunction collapse”.  The exact state of any particle is supposed to be indeterminate before being measured. When measured, the wave function “collapses” into a definite value.  Einstein objected to this, and often joked to believers of wave function collapse:

Do you believe that the moon exists when you aren’t looking?

EPR (Einstein, Podolsky and Rosen) wrote a paper in Physical Review on the wavefunction collapse problem in 1935. (This led eventually to Aspect’s entanglement experiments.)  Schroedinger was inspired by it to write the “cat paradox” paper a few months later.


Dr Thomas Love of the Departments of Physics and Mathematics, California State University, points out that the “wavefunction collapse” interpretation (and all entanglement interpretations) is speculative: the wavefunction doesn’t physically collapse. There are two mathematical models, the time-dependent Schroedinger equation and the time-independent Schroedinger equation.

Taking a measurement means that, in effect, you switch between which equations you are using to model the electron. It is the switch over in mathematical models which creates the discontinuity in your knowledge, not any real metaphysical effect.  When you take a measurement on the electron’s spin state, for example, the electron is not in a superimposition of two spin states before the measurement. (You merely have to assume that each possibility is a valid probabilistic interpretation, before you take a measurement to check.)

Suppose someone flips a coin and sees which side is up when it lands, but doesn’t tell you. You have to assume that the coin is 50% likely heads up, and 50% likely to be tails up. So, to you, it is like the electron’s spin before you measure it. When the person shows you the coin, you see what state the coin is really in. This changes your knowledge from a superposition of two equally likely possibilities, to reality.

Dr Love states on page 9 of his paper Towards an Einsteinian Quantum Theory: “The problem is that quantum mechanics is mathematically inconsistent…”, and compares the two versions of the Schroedinger equation on page 10. The time independent and time-dependent versions disagree and this disagreement nullifies the principle of superposition and consequently the concept of wavefunction collapse being precipitated by the act of making a measurement. The failure of superposition discredits the usual interpretation of the EPR experiment as proving quantum entanglement. To be sure, making a measurement always interferes with the system being measured (by recoil from firing light photons or other probes at the object), but that is not justification for the metaphysical belief in wavefunction collapse.

P. 51: Love quotes a letter from Einstein to Schrodinger written in May 1928; ‘The Heisenberg-Bohr tranquilizing philosophy – or religion? – is so delicately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot easily be aroused. So let him lie there.’

P. 52: ‘Bohr and his followers tried to cut off free enquiry and say that they had discovered ultimate truth – at that point their efforts stopped being science and became a revealed religion with Bohr as its prophet.’

P. 98: Quotation of Einstein’s summary of the problems with standard quantum theory: ‘I am, in fact, firmly convinced that the essential statistical character of contemporary quantum theory is solely to be ascribed to the fact that this theory operates with an incomplete description of physical systems.’ (Albert Einstein, ‘Reply to Criticisms’, in Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp, Tudor Publishing, 1951.)

‘Einstein … rejected the theory not because he … was too conservative to adapt himself to new and unconventional modes of thought, but on the contrary, because the new theory was in his view too conservative to cope with the newly discovered empirical data.’ – Max Jammer, ‘Einstein and Quantum Physics’ in Albert Einstein: Historical and Cultural Perspectives, edited by Gerald Holton and Yehuda Elkana, 1979.

P. 99: “It is interesting to note that when a philosopher of science attacked quantum field theory, the response was immediate and vicious. But when major figures from within physics, like Dirac and Schwinger, spoke, the critics were silent.” Yes, and they were also polite to Einstein when he spoke, but called him an old fool behind his back. (The main problem is that even authority in science is a pretty impotent thing unless it offers usefully constructive criticism.)

P. 100: ‘The minority who reject the theory, although led by the great names of Albert Einstein and Paul Dirac, do not yet have any workable alternative to put in its place.’ – Freeman Dyson, ‘Field Theory’, Scientific American, 199 (3), September 1958, pp78-82.

P. 106: ‘Once an empirical law is well established the tendency is to ignore or try to accommodate recalcitrant experiences, rather than give up the law. The history of science is replete with examples where apparently falsifying evidence was ignored, swept under the rug, or led to something other than the law being changed.’ – Nancy J. Nersessian, Faraday to Einstein: Constructing Meaning in Scientific Theories, Martinus Nijhoff Pub., 1984.

O’Hara quotation “Bandwagons have bad steering, poor brakes, and often no certificate of roadworthiness.” (M. J. O’Hara, Eos, Jan 22, 1985, p34.)

Schwartz quotation: ‘The result is a contrived intellectual structure, more an assembly of successful explanatory tricks and gadgets that its most ardent supporters call miraculous than a coherently expressed understanding of experience. … Achievement at the highest levels of science is not possible without a deep relationship to nature that can permit human unconscious processes – the intuition of the artist – to begin to operate … The lack of originality in particle physics … is a reflection of the structural organization of the discipline where an exceptionally sharp division of labor has produced a self-involved elite too isolated from experience and criticism to succeed in producing anything new.’ [J. Schwartz, The Creative Moment, HarperCollins, 1992.]

P. 107: ‘The primary difference between scientific thinking and religious thinking is immediacy. The religious mind wants an answer now. The scientific mind has the ability to wait. To the scientific mind the answer “We don’t know yet” is perfectly acceptable. The physicists of the 1920s and later accepted many ideas without sufficient data or thought but with all the faith and fervor characteristic of a religion.’

Love is author of papers like ‘The Geometry of Grand Unification’, Int. J. Th. Phys., 1984, p801, ‘Complex Geometry, Gravity, and Unification, I., The Geometry of Elementary Particles’, Int. J. Th. Phys., 32, 1993, pp.63-88 and ‘II., The Generations Problem’, Int. J. Th. Phys., 32, 1993, pp. 89-107. He presented his first paper before an audience which included Dirac (although unfortunately Dirac was then old and slept right through).  He has a vast literature survey and collection of vitally informative quotations from authorities, as well as new insights from his own work in quantum mechanics and field theory.

It is a pity that string theorists block him and others like Tony Smith (also here), Danny Ross Lunsford (see here for his brilliant but censored paper which was deleted from arXiv and is now only on the widely ignored CERN Document Server, and see here for his suppression by stringers), and others who have more serious ideas than string theory, like many of the other commenters on Not Even Wrong.

More on the technical details of waves in space: 

Gauge bosons for electromagnetism are supposed to have 4 polarizations, not the 2 of real photons.  However you can get 4 polarizations by an exchange system where two opposite-flowing energy currents are continuously being exchanged between each pair of charges: the Poynting-Heaviside electromagnetic energy current is illustrated at the top of: http://www.ivorcatt.com/1_3.htm

Unfortunately the orthogonal vectors the author of that page uses don’t clearly show the magnetic field looping around each conductor in opposite directions.  However, his point that electricity only goes at light speed seems to imply that static charge goes at light speed, presumably with this speed being the spin of fermions.

This ties in with the radiation from a rotating (spinning) electron.  You don’t get oscillating Maxwellian radiation thrown off from the circular acceleration of the spin of the charge, because there is no real oscillation to begin with, just rotation.  So you should get continuous, non-oscillating radiation.  The difference between this and oscillating photons is in a way the same as the difference between D.C. and A.C. electricity transmission mechanisms.

For D.C. electricity transmission, you always need two conductors, even if you are just sending a logic signal into a long unterminated transmission line, where you know the logic signal will bounce off the far end and return to you at light speed.  But for alternating signals, you only need a single wire because the time-varying signal helps it propagate.

The key physics is self inductance.  A single wire has infinite self inductance, i.e., the magnetic field generated by energy flowing in a single wire opposes that flow of energy.  With two wires, the magnetic field each wire produces partly cancels that of the other, making the total inductance finite.

See http://www.ivorcatt.com/6_2.htm for the calculations proving that the inductance per unit length is infinite for a one-way energy current, but not so if there is also an energy current going in the opposite direction:

“The self inductance of a long straight conductor is infinite.  This is a recurrence of Kirchhoff’s First Law, that electric current cannot be sent from A to B. It can only be sent from A to B and back to A”
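The divergence is logarithmic, which a standard transmission-line formula makes concrete. The external inductance per unit length of a parallel-wire pair (for separation d much larger than wire radius r) is L' = (mu0/pi)·ln(d/r); pushing the return conductor off to infinity makes L' grow without bound, which is the single-wire case referred to above:

```python
# External inductance per unit length of a parallel go-and-return wire
# pair: L' = (mu0/pi) * ln(d/r), valid for d >> r.  As the return wire is
# moved away (d -> infinity), L' diverges logarithmically: the "infinite
# self inductance of a single wire".
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
r = 1e-3                   # wire radius, m

for d in (0.01, 1.0, 100.0, 1e6):   # separation of go and return wires, m
    L = (mu0 / math.pi) * math.log(d / r)
    print(f"d = {d:.0e} m : L' = {L * 1e6:.2f} uH/m")
```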

Similarly, if you stop thinking about the transverse light wave, and think instead about a longitudinal sound wave, you see that the oscillation in the sound wave means that you have two opposing forces in the sound wave.  An outward force, and an inward force.  The inward force is the underpressure phase, while the outward force is the overpressure phase.  I started thinking about the balance of forces due to explosion physics: http://glasstone.blogspot.com/2006/03/outward-pressure-times-area-is-outward.html

Whenever you have a sound, the outward overpressure times the spherical area gives the total outward force.  This force must by Newton’s 3rd law have an inward reaction.  The inward reaction is the underpressure phase, which has equal duration but reversed direction due to being below ambient pressure.

You can’t get a sound wave to propagate just by releasing pressure, or the air will disperse locally without setting up a 1100 feet/second propagating longitudinal wave.

To get a sound wave, you need first to create overpressure, and then you need to create underpressure so that there is a reaction to the overpressure, which allows it to propagate in the longitudinal wave mode.  Transverse waves are similar, except that the field variation is perpendicular to the direction of propagation.  The Transverse Electromagnetic (TEM) wave is illustrated with nice simulations here: http://www.ee.surrey.ac.uk/Teaching/Courses/EFT/transmission/html/TEMWave.html 
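The overpressure/underpressure balance can be illustrated numerically. As a toy model (my choice, not from the text), take a far-field pulse shaped like the derivative of a Gaussian, a standard radiated pulse shape: it necessarily has both a positive and a negative phase, and its net impulse integrates to zero, consistent with the Newton’s 3rd law argument above:

```python
# Toy check: a derivative-of-Gaussian pressure pulse (a common model for a
# far-field radiated pulse) has equal-and-opposite overpressure and
# underpressure phases, so its net time-integrated impulse is zero.
import math

dt = 1e-4
ts = [i * dt for i in range(-50000, 50001)]      # t from -5 to +5
p = [-t * math.exp(-t * t) for t in ts]          # p(t) ~ d/dt of exp(-t^2)

peak_over = max(p)                               # positive (overpressure) phase
peak_under = min(p)                              # negative (underpressure) phase
impulse = sum(pi * dt for pi in p)               # net impulse ~ 0
print(f"peak overpressure  = {peak_over:.3f}")
print(f"peak underpressure = {peak_under:.3f}")
print(f"net impulse        = {impulse:.2e}")
```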

There is a serious conflict between Maxwell’s conception of the electromagnetic wave and quantum field theory, and Maxwell is the loser.  Maxwell’s radio wave requires that typical 1-10 volts/metre electromagnetic waves in space involve a displacement current due to free charges moving, but the infra-red cutoff in quantum field theory implies that electric field strengths of at least 10^18 volts/metre are required for pair production to create polarizable charges in the vacuum, and thus displacement current.  Hence although Maxwell’s mathematical model of electromagnetism has a real-world correspondence, it does not mean exactly what he thought it meant.

DISCLAIMER: just because string theory is not even wrong, you should not automatically believe alternatives.  Any new ideas in this post must not be instantly accepted as objective truth by everyone!  Please don’t assume them to be correct just because they look so inviting and beautiful…