Correcting the U(1) error in the Standard Model of particle physics


Fundamental particles in the SU(2)xU(1) part of the Standard Model

Above: the Standard Model particles in the existing SU(2)xU(1) electroweak symmetry group (a high-quality PDF version of this table can be found here).  The complexity of chiral symmetry – the fact that only particles with left-handed spins (Weyl spinors) experience the weak force – is shown by the different effective weak charges for left- and right-handed particles of the same type.  My argument, with evidence to back it up in this post and previous posts, is that there are no real ‘singlets’: all the particles are doublets apart from the gauge bosons (W/Z particles), which are triplets.  This causes a major change to the SU(2)xU(1) electroweak symmetry.  Essentially, the U(1) group, which is the source of singlets (i.e., particles shown in blue type in this table which may have weak hypercharge but have no weak isotopic charge), is removed!  An SU(2) symmetry group then becomes the source of electric charge and weak hypercharge, as well as keeping its existing role in the Standard Model as a descriptor of isotopic spin.  This modifies the role of the ‘Higgs bosons’: some such particles are still required to give mass, but the mainstream electroweak symmetry breaking mechanism is incorrect.

There are 6 rather than 4 electroweak gauge bosons: the same 3 massive weak bosons as before, but 2 new charged massless gauge bosons in addition to the uncharged massless ‘photon’, B.  The 3 massless gauge bosons are massless counterparts to the 3 massive weak gauge bosons.  The ‘photon’ is not the gauge boson of electromagnetism because, being neutral, it can’t represent a charged field.  Instead, the ‘photon’ gauge boson is the graviton, while the two charged massless gauge bosons are the exchange radiation (gauge bosons) of electromagnetism.  This allows quantitative predictions and the resolution of existing electromagnetic anomalies (which are usually just censored out of discussions).

It is the U(1) group which falsely introduces singlets.  All Standard Model fermions are really doublets: if they are bound by the weak force (i.e., left-handed Weyl spinors) then they are doublets in close proximity.  If they are right-handed Weyl spinors, they are doublets mediated only by the strong, electromagnetic and gravitational forces, so for leptons (which don’t feel the strong force), the individual particles in a doublet can be located relatively far from one another (the electromagnetic and gravitational interactions are both long-range forces).  The beauty of this change to the understanding of the Standard Model is that gravitation automatically pops out in the form of massless neutral gauge bosons, while electromagnetism is mediated by two massless charged gauge bosons, which gives a causal mechanism that predicts the quantitative coupling constants for gravity and electromagnetism correctly.  Various other vital predictions are also made by this correction to the Standard Model.

Fundamental vector boson charges of SU(2) 

Above: the fundamental vector boson charges of SU(2).  For any particle which has effective mass, there is a black hole event horizon radius of 2GM/c².  If there is a strong enough electric field at this radius for pair production to occur (in excess of Schwinger’s threshold of 1.3 × 10^18 V/m), then pairs of virtual charges are produced near the event horizon.  If the particle is positively charged, the negatively charged particles produced at the event horizon will fall into the black hole core, while the positive ones will escape as charged radiation (see Figures 2, 3 and particularly 4 below for the mechanism for propagation of massless charged vector boson exchange radiation between charges scattered around the universe).  If the particle is negatively charged, it will similarly be a source of negatively charged exchange radiation (see Figure 2 for an explanation of why the charge is never depleted by absorbing radiation from nearby pair production of opposite sign to itself; there is simply an equilibrium of exchange of radiation between similar charges which cancels out that effect).  In the case of a normal (large) black hole or a neutral dipole charge (one with equal and opposite charges, and therefore neutral as a whole), as many positive as negative pair-production charges can escape from the event horizon, and these will annihilate one another to produce neutral radiation, which produces the right force of gravity.  Figure 4 proves that this gravity force is about 10^40 times weaker than electromagnetism.  Another earlier post calculates the Hawking black hole radiation rate and proves it creates the force strength involved in electromagnetism.
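As a numeric sanity check on the two standard formulas invoked above (the black hole event horizon radius and Schwinger’s pair-production threshold), here is a short Python sketch; the formulas themselves are textbook, while the interpretation placed on the numbers is of course the argument of this post.

```python
# Standard formulas quoted above, evaluated for an electron-mass particle:
#   event horizon radius        r   = 2GM/c^2
#   Schwinger threshold field   E_c = m^2 c^3 / (e * hbar)

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.0546e-34    # reduced Planck constant, J s
e = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # electron mass, kg

r_s = 2 * G * m_e / c**2            # event horizon radius for electron mass
E_c = m_e**2 * c**3 / (e * hbar)    # Schwinger pair-production threshold, V/m

print(f"r_s = {r_s:.2e} m")     # ~1.35e-57 m
print(f"E_c = {E_c:.2e} V/m")   # ~1.32e18 V/m, i.e. Schwinger's 1.3e18 V/m
```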

(For a background to the elementary basics of quantum field theory and quantum mechanics, like the Schroedinger and Dirac equations and their consequences, see the earlier post on The Physics of Quantum Field Theory.  For an introduction to symmetry principles, see the previous post.)

The SU(2) symmetry can model electromagnetism (in addition to isospin) because it models two types of charge, hence giving negative and positive charges without the flawed method U(1) uses (where there are only negative charges, so positive ones have to be represented by negative charges going backwards in time).  In addition, SU(2) gives 3 massless gauge bosons: two charged ones (which mediate the charge in electric fields) and one neutral one (the spin-1 graviton, which causes gravity by pushing masses together).  In addition, SU(2) describes doublets, matter-antimatter pairs.  We know that electrons are not produced individually, only in lepton-antilepton pairs.  The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long-range electromagnetism instead of a short-range force.

Quantum field theory, i.e., the Standard Model of particle physics, is based mainly on experimental facts, not speculation.  The symmetries of baryons give SU(3) symmetry; those of mesons give SU(2) symmetry.  That’s experimental particle physics.  The problem in the Standard Model SU(3)xSU(2)xU(1) is the last component, the U(1) electromagnetic symmetry.  In SU(3) you have three charges (coded red, blue and green) and form triplets of quarks (baryons) bound by 3² − 1 = 8 charged gauge bosons mediating the strong force.  For SU(2) you have two charges (two isospin states) and form doublets, i.e., quark-antiquark pairs (mesons), bound by 2² − 1 = 3 gauge bosons (one positively charged, one negatively charged and one neutral).
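The gauge boson counts used above come straight from the dimension of the group: SU(N) has N² − 1 generators.  A trivial check:

```python
def su_generators(n: int) -> int:
    """Dimension of SU(n), i.e. the number of gauge bosons it yields."""
    return n * n - 1

print(su_generators(3))  # 8: the gluons of SU(3)
print(su_generators(2))  # 3: the weak bosons of SU(2)
print(su_generators(1))  # 0: SU(1) is the trivial group
```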

One problem comes when electromagnetism is represented by U(1) and added to SU(2) to form the electroweak unification, SU(2)xU(1).  This means that you have to add a Higgs field which breaks the SU(2)xU(1) symmetry at low energy, by giving masses (at low energy only) to the 3 gauge bosons of SU(2).  At high energy, the masses of those 3 gauge bosons must disappear, so that they are massless, like the photon assumed to mediate the electromagnetic force represented by U(1).  The required Higgs field adds mass in the right way for electroweak symmetry breaking to work in the Standard Model, but it adds complexity and isn’t very predictive.

The other, related, problem is that SU(2) only acts on left-handed particles, i.e., particles whose spin is described by a left-handed Weyl spinor.  U(1) only has one type of electric charge: negative charge.  Feynman represents positrons in the scheme as electrons going backwards in time, and this makes U(1) work, but it has many problems, and a massless version of SU(2) is the correct electromagnetic-gravitational model.

So the correct model for electromagnetism is really SU(2) which has two types of electric charge (positive and negative) and acts on all particles regardless of spin, and is mediated by three types of massless gauge bosons: negative ones for the fields around negative charges, positive ones for positive fields, and neutral ones for gravity.

The question then is, what is the corrected Standard Model?  If we delete U(1) do we have to replace it with another SU(2) to get SU(3)xSU(2)xSU(2), or do we just get SU(3)xSU(2) in which SU(2) takes on new meaning, i.e., there is no symmetry breaking?

Assume the symmetry group of the universe is SU(3)xSU(2).  That would mean that the new SU(2) interpretation has to do all the work of SU(2)xU(1) in the existing Standard Model, and more.  The U(1) part of SU(2)xU(1) represented both electromagnetism and weak hypercharge, while SU(2) represented weak isospin.

We need to dump the Higgs field as a source of symmetry breaking, and replace it with a simpler mass-giving mechanism that only gives mass to left-handed Weyl spinors; with this change, the electroweak symmetry breaking problem disappears.  We have to use SU(2) to represent isospin, weak hypercharge, electromagnetism and gravity.  Can it do all that?  Can the Standard Model be corrected by simply removing U(1) to leave SU(3)xSU(2), and having the SU(2) produce 3 massless gauge bosons (for electromagnetism and gravity) and 3 massive gauge bosons (for weak interactions)?  Can we, in other words, remove the Higgs mechanism for electroweak symmetry breaking and replace it by a simpler mechanism, in which the short range of the three massive weak gauge bosons distinguishes electromagnetism (and gravity) from the weak force?  The mass-giving field only gives mass to gauge bosons that interact with left-handed particles.  What is unnerving is that this compression means that one SU(2) symmetry is generating a lot more physics than in the Standard Model; but in the Standard Model U(1) represented both electric charge and weak hypercharge, so I don’t see any reason why SU(2) shouldn’t represent weak isospin, electromagnetism/gravity and weak hypercharge.  The main thing is that because it generates 6 gauge bosons, only half of which need to have mass added to them to act as weak gauge bosons, it has exactly the right field mediators for the forces we require.  If it doesn’t work, the alternative replacement to the Standard Model is SU(3)xSU(2)xSU(2), where the first SU(2) is isospin symmetry acting on left-handed particles and the second SU(2) is electrogravity.

Mathematical review

Following from the discussion in previous posts, it is time to correct the errors of the Standard Model, starting with the U(1) phase or gauge invariance.  The use of unitary group U(1) for electromagnetism and weak hypercharge is in error as shown in various ways in the previous posts here, here, and here.

The maths is based on a type of continuous group defined by Sophus Lie in 1873.  Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together.  It was the representation theory of these groups that Weyl was studying.

‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane.  Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point.  This is a symmetry of the plane.  The thing that is invariant is the distance between a point on the plane and the central point.  This is the same before and after the rotation.  One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point.  There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.


Argand diagram showing rotation by an angle on the complex plane.   Illustration credit: based on Fig. 3.1 in Not Even Wrong.

‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one.  If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers).  As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).

‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions].  Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave.  This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees.  Because of this analogy, U(1) symmetry transformations are often called phase transformations. …

‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N).  It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest.  The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denotes SU(N).  Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.

‘In the case N = 1, SU(1) is just the trivial group with one element.  The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthogonal transformations of three (real) variables … group SO(3).  The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
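The U(1) picture in the quoted passage – rotation of the complex plane as multiplication by a unit-length complex number – is easy to verify numerically:

```python
import cmath

theta = 0.7                    # an arbitrary rotation angle, in radians
u = cmath.exp(1j * theta)      # a unit-length complex number, |u| = 1
z = 3 - 4j                     # an arbitrary point of the plane, |z| = 5

w = u * z                      # multiplying by u rotates z through theta

print(abs(u))                            # ~1.0: u lies on the unit circle
print(abs(w))                            # ~5.0: distance to the centre is invariant
print(cmath.phase(w) - cmath.phase(z))   # ~0.7: the phase shifts by theta
```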

Hermann Weyl and Eugene Wigner discovered that Lie groups of complex symmetries represent the symmetries of quantum field theory.  In 1954, Chen Ning Yang and Robert Mills developed a theory of spin-1 boson mediator interactions in which the spin of the boson changes the quantum state of the matter emitting or receiving it, via inducing a rotation in a Lie group symmetry.  The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction.  Gerard ‘t Hooft and Martinus Veltman in 1971 showed that Yang-Mills theory is renormalizable, so the problem of running couplings having no limits can be cut off at effective limits to make the theory work (Yang-Mills theories use non-commutative algebra, usually called non-commutative geometry).  The photon Yang-Mills theory is U(1).  Equivalent Yang-Mills interaction theories of the strong force SU(3) and the weak force isospin group SU(2), in conjunction with the U(1) force, result in the symmetry group SU(3) x SU(2) x U(1), which is the Standard Model.  Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.

Dr Woit’s Not Even Wrong at pages 98-100 summarises the problems in the Standard Model.  While SU(3) ‘has the beautiful property of having no free parameters’, the SU(2)xU(1) electroweak symmetry does introduce two free parameters: alpha and the mass of the speculative ‘Higgs boson’.  However, from solid facts, alpha is not a free parameter but the shielding ratio of the bare core charge of an electron by virtual fermion pairs being polarized in the vacuum and absorbing energy from the field to create short range forces:

“This shielding factor of alpha can actually be obtained by working out the bare core charge (within the polarized vacuum) as follows.  Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is of the order of h-bar.  The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct.  Hence the product of momentum and distance, px = (mc).(ct) = Et, where E is energy (Einstein’s mass-energy equivalence).  Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post).  In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime.  The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology.  Now for the slightly clever bit:

px = h-bar implies (when remembering p = mc, and E = mc²):

x = h-bar /p = h-bar /(mc) = h-bar*c/E

so E = h-bar*c/x

when using the classical definition of energy as force times distance (E = Fx):

F = E/x = (h-bar*c/x)/x

= h-bar*c/x².

“So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse-square distance law!  This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs.  So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of alpha.  The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge.  All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more.

“One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance.  However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx.  For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved.  This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.

“It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)

“Experimental evidence:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

“In particular:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.”
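The shielding-factor comparison quoted above can be checked directly: the heuristic bare-core force F = h-bar*c/x² divided by Coulomb’s law for two observed unit charges gives a distance-independent ratio equal to 1/alpha ≈ 137.036.  A minimal sketch using CODATA constants:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary (screened) charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

x = 1.0e-15              # any distance will do; the ratio is independent of x
F_bare = hbar * c / x**2                        # the heuristic bare-core force
F_coulomb = e**2 / (4 * math.pi * eps0 * x**2)  # Coulomb force, observed charges

ratio = F_bare / F_coulomb
print(ratio)             # ~137.036, i.e. 1/alpha, the claimed shielding factor
```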

As for the mass of the ‘Higgs boson’ that gives mass to particles, there is evidence of its value.  On page 98 of Not Even Wrong, Dr Woit points out:

‘Another related concern is that the U(1) part of the gauge theory is not asymptotically free, and as a result it may not be completely mathematically consistent.’

He adds that it is a mystery why only left-handed particles experience the SU(2) force, and on page 99 points out that: ‘the standard quantum field theory description for a Higgs field is not asymptotically free and, again, one worries about its mathematical consistency.’

Another problem is that the 9 masses of quarks and leptons have to be put into the Standard Model by hand, together with 4 mixing angles to describe the interaction strength of the Higgs field with different particles, adding 13 numbers to the Standard Model which you would want to be explained and predicted.

Important symmetries:

  1. ‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family: this is described by unitary group U(1).  U(1) deals with just 1 type of charge (negative charge); i.e., it ignores positive charge, which is treated as a negative charge travelling backwards in time (Feynman’s fatally flawed model of a positron, or anti-electron), and it deals with solitary particles (which don’t actually exist, since particles are always produced and annihilated as pairs).  U(1) is therefore false when used as a model for electromagnetism, as we will explain in detail in this post.  U(1) also represents weak hypercharge, which is similar to electric charge.
  2. ‘isospin rotation’ would switch the two quarks of a given family, or would switch the lepton and neutrino of a given family: this is described by special unitary group SU(2).  Isospin rotation leads directly to the special unitary group SU(2), i.e., rotations in imaginary space with 2 complex co-ordinates generated by 3 operations: the W+, W-, and Z0 gauge bosons of the weak force.  These massive weak bosons only interact with left-handed particles (left-handed Weyl spinors).  SU(2) describes doublets, matter-antimatter pairs such as mesons and (as this blog post is arguing) lepton-antilepton charge pairs in general (electric charge mechanism as well as weak isospin).
  3. ‘colour rotation’ would change quarks between colour charges (red, blue, green): this is described by special unitary group SU(3).  Colour rotation leads directly to the Standard Model special unitary group SU(3), i.e., rotations in imaginary space with 3 complex co-ordinates generated by 8 operations, the strong force gluons.  There is also the concept of ‘flavour’, referring to the different types of quarks (up and down, strange and charm, top and bottom).  SU(3) describes triplets of charges, i.e. baryons.

U(1) is a relatively simple phase-transformation symmetry which has a single group generator, leading to a single electric charge.  (Hence, you have to treat positive charges as electrons moving backwards in time to make it incorporate antimatter!  This is false because things don’t travel backwards in time; it violates causality, because we can use pair production – e.g. electron and positron pairs created by the shielding of gamma rays from cobalt-60 using lead – to create positrons and electrons at the same time, when we choose.)  Moreover, it also only gives rise to one type of massless gauge boson, so it fails to predict the strength of electromagnetism or its causal mechanism (attractions between dissimilar charges, repulsions between similar charges, etc.).  SU(2) must be used to model the causal mechanism of electromagnetism and gravity; two charged massless gauge bosons mediate electromagnetic forces, while the neutral massless gauge boson mediates gravitation.  Both the detailed mechanism for the forces and the strengths of the interactions (as well as various other predictions) arise automatically from SU(2) with massless gauge bosons replacing U(1).

Fig. 1 - The imaginary U(1) interaction of a photon with an electron, which is fine for photons interacting with electrons, but doesn't adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!

Fig. 1: The imaginary U(1) gauge invariance of quantum electrodynamics (QED) simply consists of a description of the interaction of a photon with an electron (e is the coupling constant, the effective electric charge after allowing for shielding by the polarized vacuum if the interaction is at high energy, i.e., above the IR cutoff).  When the electron’s field undergoes a local phase change, a gauge field quantum called a ‘virtual photon’ is produced, which keeps the Lagrangian invariant; this is how gauge symmetry is supposed to work for U(1).

This doesn’t adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!  It’s just too simplistic: the moving electron is viewed as a current, and the photon (field phase) affects that current by interacting with the electron.  There is nothing wrong with this simple scheme, but it has nothing to do with the detailed causal, predictive mechanism for electromagnetic attraction and repulsion, and to make this virtual-photon-as-gauge-boson idea work for electromagnetism, you have to add two extra polarizations to the normal two polarizations (electric and magnetic field vectors) of ordinary photons.  You might as well replace the photon by two charged massless gauge bosons, instead of adding two extra polarizations!  You have so much more to gain from using the correct physics, than from adding extra epicycles to a false model to ‘make it work’.

This is Feynman’s explanation in his book QED, Penguin, 1990, p120:

‘Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)’

The gauge bosons of the mainstream electromagnetic model U(1) are supposed to consist of photons with 4 polarizations, not 2.  However, U(1) has only one type of electric charge: negative charge.  Positive charge is antimatter and is not included.  But in the real universe there is as much positive as negative charge around!

We can see this error of U(1) more clearly when considering the SU(3) strong force: the 3 in SU(3) tells us there are three types of colour charge: red, blue and green.  The anti-charges are anti-red, anti-blue and anti-green, but these anti-charges are not included.  Similarly, U(1) only contains one electric charge, negative charge.  To make it a reliable, complete and predictive theory, it should contain 2 electric charges (positive and negative) and 3 gauge bosons: positively charged massless photons for mediating positive electric fields, negatively charged massless photons for mediating negative electric fields, and neutral massless photons for mediating gravitation.  The way this correct SU(2) electrogravity unification works was clearly explained in Figures 4 and 5 of the earlier post:

Basically, photons are neutral because if they were charged as well as being massless, the magnetic field generated by their motion would produce infinite self-inductance.  The photon contains two charged fields (positive electric field and negative electric field) which each produce magnetic fields with opposite curls, cancelling one another and allowing the photon to propagate:

Fig. 2 - Mechanism of gauge bosons for electromagnetism

Fig. 2: charged gauge boson mechanism for electromagnetism, as illustrated by the Catt-Davidson-Walton work in charging up transmission lines like capacitors and checking what happens when you discharge the energy through a sampling oscilloscope.  They found evidence, discussed in detail in previous posts on this blog, that the existence of an electric field is represented by two opposite-travelling (gauge boson radiation) light velocity field quanta: while overlapping, the electric fields of each add up (reinforce) but the magnetic fields disappear because the curls of the magnetic field components cancel once there is equilibrium of the exchange radiation going along the same path in opposite directions.  Hence, electric fields are due to charged, massless gauge bosons with Poynting vectors, being exchanged between fermions.  Magnetic fields are cancelled out in certain configurations (such as that illustrated) but in other situations where you send two gauge bosons of opposite charge through one another (in the figure the gauge bosons modelled by electricity have the same charge), you find that the electric field vectors cancel out to give an electrically neutral field, but the magnetic field curls can then add up, explaining magnetism.

The evidence for Fig. 2 is presented near the end of Catt’s March 1983 Wireless World article called ‘Waves in Space’ (typically unavailable on the internet, because Catt won’t make available the most useful of his papers for free): when you charge up x metres of cable to v volts, you do so at light speed, and there is no mechanism for the electromagnetic energy to slow down when the energy enters the cable.  The nearest page Catt has online about this is here: the battery terminals of a v volt battery are indeed at v volts before you connect a transmission line to them, but that’s just because those terminals have been charged up by field energy which is flowing in all directions at light velocity, so only half of the total energy (corresponding to v/2 volts) is going one way and half is going the other way.  Connect anything to that battery and the initial (transient) output at light speed is only half the battery potential; the full battery potential only appears in a cable connected to the battery when the energy has gone to the far end of the cable at light speed and reflected back, adding to further in-flowing energy from the battery on the return trip, and charging the cable to v/2 + v/2 = v volts.

Because electricity is so fast (light speed for the insulator), early investigators like Ampere and Maxwell (who candidly wrote in his Treatise on Electricity and Magnetism, 3rd ed., Article 574: ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second. …’) had no idea whatsoever of this crucial evidence which shows what electricity is all about.  So when you discharge the cable, instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get just what is predicted by Fig. 2: a pulse of v/2 volts taking 2x/c seconds to exit.  In other words, the half of the energy already moving towards the exit end exits first.  That gives a pulse of v/2 volts lasting x/c seconds.  Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy.  This gives the second half of the output, another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse.  Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds.  This is experimental fact.  It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals).  Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a flaw inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work.  The Catt, Davidson and Walton history is summarised here.
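The timing argument above can be sketched as a toy bounce-diagram simulation (my own illustrative model, not Catt’s code): a lossless line charged to v volts is represented as two counter-propagating waves of v/2 each; discharging into a matched load, with the far end open, should yield an output of v/2 volts lasting 2x/c.

```python
# Toy bounce-diagram model of discharging a pre-charged transmission line.
# The line is split into n_cells; each step advances the waves by one cell
# (one step = x/(n_cells * c) of real time).

def discharge(v: float, n_cells: int):
    right = [v / 2] * n_cells   # half the energy, already heading for the load
    left = [v / 2] * n_cells    # half the energy, heading the 'wrong' way
    output = []
    for _ in range(3 * n_cells):            # run well past the expected pulse
        output.append(right[-1])            # wave at the load end exits
        right = [left[0]] + right[:-1]      # open far end reflects left -> right
        left = left[1:] + [0.0]             # matched load: nothing reflects back
    return output

out = discharge(v=10.0, n_cells=5)
print(out)   # ten samples of 5.0 volts (= v/2, lasting 2x/c), then zeros
```

The first x/c of output is the half of the energy already travelling toward the load; the second x/c is the other half after reflection from the open end, exactly as described in the text.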

[The original Catt-Davidson-Walton paper can be found here (first page) and here (second page) although it contains various errors.  My discussion of it is here.  For a discussion of the two major awards Catt received for his invention of the first ever practical wafer-scale memory to come to market despite censorship such as the New Scientist of 12 June 1986, p35, quoting anonymous sources who called Catt ‘either a crank or visionary’ – a £16 million British government and foreign sponsored 160 MB ‘chip’ wafer back in 1988 – see this earlier post and the links it contains.  Note that the editors of New Scientist are still vandals today.  Jeremy Webb, current editor of New Scientist, graduated in physics and solid state electronics, so he has no good excuse for finding this stuff – physics and electronics – over his head.  The previous editor to Jeremy was Dr Alun M. Anderson, who on 2 June 1997 wrote to me the following insult to my intelligence: ‘I’ve looked through the files and can assure you that we have no wish to suppress the discoveries of Ivor Catt nor do we publish only articles from famous people.  You should understand that New Scientist is not a primary journal and does not publish the first accounts of new experiments and original theories. These are better submitted to an academic journal where they can be subject to the usual scientific review.  New Scientist does not maintain the large panel of scientific referees necessary for this review process. I’m sure you understand that science is now a gigantic enterprise and a small number of scientifically-trained journalists are not the right people to decide which experiments and theories are correct. My advice would be to select an appropriate journal with a good reputation and send Mr Catt’s work there. 
Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.’  Both Catt and I had already sent Dr Anderson abstracts from Catt’s peer-reviewed papers such as IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67. Also Proc. IEE, June 83 and June 87. Also a summary of the book “Digital Hardware Design” by Catt et al., pub. Macmillan 1979.  I wrote again to Dr Anderson with this information, but he never published it; Catt on 9 June 1997 published his response on the internet, which he carbon copied to the editor of New Scientist.  Years later, when Jeremy Webb had taken over, I corresponded with him by email.  The first time Jeremy responded was on an evening in Dec 2002, and all he wrote was a tirade about his email box being full when writing a last-minute editorial.  I politely replied that time, and then sent him by recorded delivery a copy of the Electronics World January 2003 issue with my cover story about Catt’s latest invention for saving lives.  He never acknowledged it or responded.  When I called the office politely, his assistant was rude and said she had thrown it away unread without him seeing it!  I sent another but yet again, Jeremy wasted time and didn’t publish a thing.  According to the Daily Telegraph, 24 Aug. 2005: ‘Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’  But even when Catt’s stuff was applied to cosmology in Electronics World Aug. 02 and Apr. 03, it was still ignored by New Scientist.  Helene Guldberg has written a ‘Spiked Science’ article called Eco-evangelism about Jeremy Webb’s bigoted policies and sheer rudeness, while Professor John Baez has publicised the decline of New Scientist due to the junk they publish in place of solid physics.
To be fair, Jeremy was polite to Prime Minister Tony Blair, however.  I should also add that Catt is extremely rude in refusing to discuss facts.  Just because he has a few new solid facts which have been censored out of mainstream discussion even after peer-reviewed publication, he incorrectly thinks that his vast assortment of half-baked speculations is equally justified.  For example, he refuses to discuss or co-author a paper on the model here.  Catt does not understand Maxwell’s equations (he thinks that if you simply ignore 18 of the 20 long-hand Maxwell differential equations, reduce the number of spatial dimensions from 3 to 1, and note that the remaining 2 equations in one spatial dimension contain two vital constants, then Maxwell’s equations are ‘shocking … nonsense’ – and he refuses to accept that this empty argument is complete rubbish), and since he won’t discuss physics he is not a general physics authority, although he is an expert in experimental research on logic signals, e.g., his paper in IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67.]

Fig. 3 - Coulomb force mechanism for electric charged massless gauge bosons

Fig. 3: Coulomb force mechanism for electric charged massless gauge bosons.  The SU(2) electrogravity mechanism.  Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them.  They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets.  The bullets hitting their backs have relatively smaller impulses since they are coming from large distances, so due to drag effects their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe).  That explains the electromagnetic repulsion physically.  Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides.  The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd.  In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them.  When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges.  This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.

Fig. 4 - Charged gauge bosons mechanism and how the potential adds up

Fig. 4: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation.  For gravity, the gravitons (like photons) are uncharged, so no adding up is possible.  But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons.  Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves).  Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance issue.  Therefore, although you can never radiate a charged massless radiation beam in one direction only, such beams do radiate in two directions while overlapping.  This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down.  When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so the magnetic fields cancel and can’t be observed.  This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10^80 times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zig-zag inefficiency of the sum, i.e., about 10^40 times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^40/10^80 = 10^-40 as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^80 randomly distributed charges, electromagnetism – as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) – is 10^40 times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are scattered all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t, and is like the diffusive drunkard’s walk where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. 
But for individual drunkard’s walks, there is the factual solution that a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
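The square-root scaling claimed here is just standard drunkard’s-walk statistics, and a quick Monte Carlo confirms it (this is an illustrative sketch of the statistics only, not of the gauge boson exchange itself):

```python
import random

# Monte Carlo check of the drunkard's-walk claim: the root-mean-square sum of
# N random +/-1 'charges' scales as sqrt(N), even though the mean sum is ~0.
def rms_sum(n_charges, trials=5000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return (total / trials) ** 0.5

r100 = rms_sum(100)
# for N = 100 the r.m.s. resultant comes out close to sqrt(100) = 10
```

The same statistics, applied to 10^80 charges, gives the 10^40 multiplication factor used in the argument above.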

For some of the many quantitative predictions and tests of this model, see previous posts such as this one.

SU(2), as used in the SU(2)xU(1) electroweak symmetry group, applies only to left-handed particles.  So it’s pretty obvious that half the potential application of SU(2) is being missed out somehow in SU(2)xU(1).

SU(2) is fairly similar to U(1) in Fig. 1 above, except that SU(2) involves 2^2 – 1 = 3 types of charges (positive, negative and neutral), which (by moving) generate 2 types of charged currents (positive and negative currents) and 1 neutral current (i.e., the motion of an uncharged particle produces a neutral current by analogy to the process whereby the motion of a charged particle produces a charged current), requiring 3 types of gauge boson (W^+, W^−, and Z^0).
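The counting 2^2 – 1 = 3 is just the number of generators of SU(2).  This is standard textbook material, but it is quickly verified numerically using the Pauli matrices (shown in Python for convenience):

```python
import numpy as np

# The 3 = 2^2 - 1 generators of SU(2) can be taken as half the Pauli matrices.
# Check that they are traceless, Hermitian, and close under commutation:
# [T_i, T_j] = i * eps_ijk * T_k.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (s1, s2, s3)]

assert all(abs(np.trace(t)) < 1e-12 for t in T)      # traceless
assert all(np.allclose(t, t.conj().T) for t in T)    # Hermitian
comm = T[0] @ T[1] - T[1] @ T[0]
assert np.allclose(comm, 1j * T[2])                  # [T1, T2] = i*T3
assert len(T) == 2**2 - 1                            # SU(N) has N^2 - 1 generators
```

The same N^2 – 1 counting gives the 8 gluons of SU(3) discussed later.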

For weak interactions we need the whole of SU(2)xU(1) because SU(2) models weak isospin by using electric charges as generators, while U(1) is used to represent weak hypercharge, which looks almost identical to Fig. 1 (which illustrates the use of U(1) for quantum electrodynamics).  The SU(2) isospin part of the weak interaction SU(2)xU(1) applies to only left-handed fermions, while the U(1) weak hypercharge part applies to both types of handedness, although the weak hypercharges of left and right handed fermions are not the same (see earlier post for the weak hypercharges of fermions with different spin handedness).

It is interesting that the correct SU(2) symmetry predicts massless versions of the weak gauge bosons (W^+, W^−, and Z^0).  Then the mainstream go to a lot of trouble to make them massive by adding some kind of speculative Higgs field, without considering whether the massless versions really exist as the proper gauge bosons of electromagnetism and gravity.  A lot of the problem is that the self-interaction of charged massless gauge bosons is a benefit in explaining the mechanism of electromagnetism (since two similar charged electromagnetic energy currents flowing through one another cancel out each other’s magnetic fields, preventing infinite self-inductance, and allowing charged massless radiation to propagate freely so long as it is exchange radiation in equilibrium, with equal amounts flowing from charge A to charge B as flow from charge B to charge A; see Fig. 5 of the earlier post here).  Instead of seeing how the mutual interactions of charged gauge bosons allow exchange radiation to propagate freely without complexity, the mainstream opinion is that this might (it can’t) cause infinities because of the interactions.  Therefore, the mainstream (false) consensus is that weak gauge bosons have to have a great mass, simply in order to remove an enormous number of unwanted complex interactions!  They simply are not looking at the physics correctly.

U(2) and unification

Dr Woit has some ideas on how to proceed with the Standard Model: ‘Supersymmetric quantum mechanics, spinors and the standard model’, Nuclear Physics, v. B303 (1988), pp. 329-42; and ‘Topological quantum theories and representation theory’, Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45. He summarises the approach in

‘… [the theory] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c (4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as Λ*(C^n) applied to a ‘vacuum’ vector.

‘Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons… A generation of quarks has the same transformation properties except that one has to take the ‘vacuum’ vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero. The above comments are … just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles…’

The SU(3) strong force (colour charge) gauge symmetry

The SU(3) strong interaction – which has 3 color charges (red, blue, green) and 3^2 – 1 = 8 gauge bosons – is again virtually identical to the U(1) scheme in Fig. 1 above (except that there are 3 charges and 8 spin-1 gauge bosons called gluons, instead of the alleged 1 charge and 1 gauge boson in the flawed U(1) model of QED, and the 8 gluons carry color charge, whereas the photons of U(1) are uncharged).  The SU(3) symmetry is actually correct because it is an empirical model based on observed particle physics, and the fact that the gauge bosons of SU(3) do carry colour charge makes it a proper causal model of short range strong interactions, unlike U(1).  For an example of the evidence for SU(3), see the illustration and history discussion in this earlier post.  SU(3) is based on an observed (empirical, experimentally determined) particle physics symmetry scheme called the eightfold way.  This is pretty solid experimentally, and summarised all the high energy particle physics experiments from about the end of WWII to the late 1960s.  SU(2) describes the mesons which were originally studied in natural cosmic radiation (pions were the first mesons discovered; they were found in cosmic radiation from outer space in 1947, at Bristol University).  A type of meson, the pion, is the long-range mediator of the strong nuclear force between nucleons (neutrons and protons), which normally prevents the nuclei of atoms from exploding under the immense Coulomb repulsion of having many protons confined in the small space of the nucleus.  The pion was accepted as the gauge boson of the strong force predicted by the Japanese physicist Yukawa, who in 1949 was awarded the Nobel Prize for predicting that meson right back in 1935.  So there is plenty of evidence for both SU(3) color forces and SU(2) isospin.  The problems all arise from U(1). 
To give an example of how SU(3) works well with charged gauge bosons (gluons), remember that this property of gluons is responsible for the major discovery of asymptotic freedom of confined quarks.  What happens is that the mutual interference of the 8 different types of charged gluons with pairs of virtual quarks and virtual antiquarks at very small distances between particles (high energy) weakens the color force.  The gluon-gluon interactions screen the color charge at short distances because each gluon contains two color charges.  If each gluon contained just one color charge, like the virtual fermions in pair production in QED, then the screening effect would be most significant at large, rather than short, distances.  Because the effective colour charge diminishes at very short distances, for a particular range of distances this fall in color charge as you get closer offsets the inverse-square force law effect (the divergence of effective field lines), so the quarks are completely free – within given limits of distance – to move around within a neutron or a proton.  This is asymptotic freedom, an idea from SU(3) that was published in 1973 and resulted in Nobel prizes in 2004.  Although colour charges are confined in this way, some strong force ‘leaks out’ as virtual hadrons like neutral pions and rho particles, which account for the strong force on the scale of nuclear physics (a much larger scale than is the case in fundamental particle physics): the mechanism here is similar to the way that atoms which are electrically neutral as a whole can still attract one another to form molecules, because there is a residual of the electromagnetic force left over.  The strong interaction weakens exponentially in addition to the usual fall in potential (1/distance) or force (inverse square law), so at large distances compared to the size of the nucleus it is effectively zero.  Only electromagnetic and gravitational forces are significant at greater distances.  
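The fall of the colour coupling with energy is conventionally summarised by the one-loop running formula.  The sketch below uses the standard textbook expression with assumed illustrative inputs (Λ ≈ 0.25 GeV, n_f = 4 active quark flavours – these values are my assumptions, not fits), and reproduces the rough magnitudes quoted later in this post (0.35 at 2 GeV, 0.2 at 7 GeV, ~0.1 at 200 GeV):

```python
import math

# Standard one-loop running of the QCD coupling, shown only to illustrate the
# fall of the effective colour charge with collision energy.  Lambda (the QCD
# scale in GeV) and n_f (active quark flavours) are assumed illustrative values.
def alpha_s(Q_GeV, n_f=4, Lambda=0.25):
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q_GeV**2 / Lambda**2))

# alpha_s(2) ~ 0.36, alpha_s(7) ~ 0.23, alpha_s(200) ~ 0.11:
# the coupling falls as the collision energy rises (asymptotic freedom)
```

Note this is the conventional description of the running; the energy-conservation mechanism proposed in this post is a different physical interpretation of the same data.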
The weak force is very similar to the electromagnetic force but is short ranged because the gauge bosons of the weak force are massive.  The massiveness of the weak force gauge bosons also reduces the strength of the weak interaction compared to electromagnetism.

The mechanism for the fall in color charge coupling strength due to interference of charged gauge bosons is not the whole story.  Where is the energy of the field going when the effective charge falls as you get closer to the middle?  Obvious answer: the energy lost from the strong color charges goes into the electromagnetic charge.  Remember, short-range field charges fall as you get closer to the particle core, while electromagnetic charges increase; these are empirical facts.  The strong charge decreases sharply from about 137e at the greatest distances it extends to (via pions) to around 0.15e at 91 GeV, while over the same range of scattering energies (which are approximately inversely proportional to the distance from the particle core), the electromagnetic charge has been observed to increase by 7%.  We need to apply a new type of continuity equation to the conservation of gauge boson exchange radiation energy of all types, in order to deduce vital new physical insights from the comparison of these figures for charge variation as a function of distance.  The suggested mechanism in a previous post is:

‘We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy.  If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong.  At low energies, the experimentally determined strong nuclear force coupling constant which is a measure of effective charge is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.  So the strong force falls off in strength as you get closer by higher energy collisions, while the electromagnetic force increases!  Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.’ 

Force strengths as a function of distance from a particle core

I’ve written previously that the existing graphs showing U(1), SU(2) and SU(3) force strengths as a function of energy are pretty meaningless; they do not specify which particles are under consideration.  If you scatter leptons at energies up to those which have so far been available for experiments, they don’t exhibit any strong force SU(3) interactions.  What should be plotted is effective strong, weak and electromagnetic charge as a function of distance from particles.  This is easily deduced because the distance of closest approach of two charged particles in a head-on scatter reaction is easily calculated: as they approach with a given initial kinetic energy, the repulsive force between them increases, which slows them down until they stop at a particular distance, and they are then repelled away.  So you simply equate the initial kinetic energy of the particles with the potential energy of the repulsive force as a function of distance, and solve for distance.  The initial kinetic energy is radiated away as radiation as they decelerate.  There is some evidence from particle collision experiments that the SU(3) effective charge really does decrease as you get closer to quarks, while the electromagnetic charge increases.  Levine and Koltick reported in PRL (v. 78, 1997, no. 3, p. 424) that the electron’s charge increases from e to 1.07e as you go from low energy physics to collisions of electrons at an energy of 91 GeV, i.e., a 7% increase in charge.  At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
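The closest-approach calculation described above is straightforward to sketch.  This is the purely classical Coulomb estimate (equating kinetic to potential energy), so at 91 GeV it should be read only as an order-of-magnitude conversion from collision energy to distance:

```python
# Classical distance of closest approach in a head-on collision of two unit
# charges: equate the kinetic energy to the Coulomb potential energy
# k*e^2/r and solve for r.  Order-of-magnitude sketch only; a proper
# treatment of 91 GeV electron collisions needs QED.
E_CHARGE = 1.602176634e-19      # elementary charge, C
COULOMB_K = 8.9875517923e9      # Coulomb constant, N m^2 / C^2

def closest_approach(kinetic_energy_eV):
    # k*e^2 / r = E  =>  r = k*e^2 / E
    E_joules = kinetic_energy_eV * E_CHARGE
    return COULOMB_K * E_CHARGE**2 / E_joules

r = closest_approach(91e9)
# at 91 GeV the classical closest approach is of order 1.6e-20 m
```

This kind of conversion is what would let the measured running charges be re-plotted as effective charge versus distance from the particle core, as advocated above.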

The full investigation of running-couplings and the proper unification of the corrected Standard Model is the next priority for detailed investigation.  (Some details of the mechanism can be found in several other recent posts on this blog, e.g., here.)

‘The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.] Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak. It’s very clear that the photon and the three W’s [W^+, W^−, and W^0/Z^0 gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ’seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.] – R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Mechanism for loop quantum gravity with spin-1 (not spin-2) gravitons

Peter Woit gives a discussion of the basic principle of LQG in his book:

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189.

I watched Lee Smolin’s Perimeter Institute lectures, “Introduction to Quantum Gravity”, and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity.

It’s pretty evident that the quantum gravity loops are best thought of as being the closed exchange cycles of gravitons going between masses (or other gravity field generators like energy fields), to and fro, in an endless cycle of exchange.  That’s the loop mechanism: the closed cycle of Yang-Mills exchange radiation being exchanged from one mass to another, and back again, continually.

According to this idea, the graviton interaction nodes are associated with the ‘Higgs field quanta’ which generates mass.  Hence, in a Penrose spin network, the vertices represent the points where quantized masses exist. Some predictions from this are here.

Professor Penrose’s interesting original article on spin networks, Angular Momentum: An Approach to Combinatorial Space-Time, published in ‘Quantum Theory and Beyond’ (Ted Bastin, editor), Cambridge University Press, 1971, pp. 151-80, is available online, courtesy of Georg Beyerle and John Baez.

Update (25 June 2007):

Lubos Motl versus Mark McCutcheon’s book The Final Theory

Seeing that there is some alleged evidence that mainstream string theorists are bigoted charlatans, string theorist Dr Lubos Motl, who is soon leaving his Assistant Professorship at Harvard, made me uneasy when he attacked Mark McCutcheon’s book The Final Theory. Motl wrote a blog post attacking McCutcheon’s book by saying that: ‘Mark McCutcheon is a generic arrogant crackpot whose IQ is comparable to chimps.’ Seeing that Motl is a stringer, this kind of abuse coming from him sounds like praise to my ears. Maybe McCutcheon is not so wrong? Anyway, at lunch time today, I was in Colchester town centre and needed to look up a quotation in one of Feynman’s books. Directly beside Feynman’s QED book, on the shelf of Colchester Public Library, was McCutcheon’s chunky book The Final Theory. I found the time to look up what I wanted and to read all the equations in McCutcheon’s book.

Motl ignores McCutcheon’s theory entirely, and Motl is being dishonest when claiming: ‘his [McCutcheon’s] unification is based on the assertion that both relativity as well as quantum mechanics is wrong and should be abandoned.’

This sort of deception is easily seen, because it has nothing to do with McCutcheon’s theory! McCutcheon’s The Final Theory is full of boring controversy or error, such as the sort of things Motl quotes, but the core of the theory is completely different and takes up just two pages: 76 and 194. McCutcheon claims there’s no gravity because the Earth’s radius is expanding at an accelerating rate equal to the acceleration of gravity at Earth’s surface, g = 9.8 m/s^2. Thus, in one second, Earth’s radius (in McCutcheon’s theory) expands by (1/2)gt^2 = 4.9 m.

I showed in an earlier post that there is a simple relationship between Hubble’s empirical redshift law for the expansion of the universe (which can’t be explained by tired light ideas and so is a genuine observation) and acceleration:

Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^2
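This chain can be checked numerically by integrating dR/dt = HR one small step and comparing the measured acceleration against RH^2 (the values of H and R below are just illustrative inputs, H being roughly the observed Hubble parameter in SI units):

```python
# Numerical check that v = H*R implies a = dv/dt = H*(dR/dt) = H*v = R*H^2.
# Integrate dR/dt = H*R over one small step and compare accelerations.
H = 2.3e-18          # ~Hubble parameter in 1/s (~70 km/s/Mpc), illustrative
R = 1.0e26           # metres, illustrative distance
dt = 1.0e12          # seconds, small compared with 1/H

v1 = H * R           # recession velocity now
R2 = R + v1 * dt     # distance one step later
v2 = H * R2          # recession velocity one step later

a_numeric = (v2 - v1) / dt
a_formula = R * H**2
# the two agree (exactly, to first order in H*dt)
```

The agreement just confirms the algebra above; the physical interpretation of this outward acceleration is discussed in the earlier posts linked from this one.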

McCutcheon instead defines a ‘universal atomic expansion rate’ on page 76 of The Final Theory, which divides the increase in radius of the Earth over a one second interval (4.9 m) into the Earth’s radius (6,378,000 m, or 6.378×10^6 m). I don’t like the fact that he doesn’t properly specify a formula to define his ‘universal atomic expansion rate’.

McCutcheon should be clear: he is dividing (1/2)gt^2 into the radius of Earth, R_E, to get his ‘universal atomic expansion rate’, X_A:

X_A = (1/2)gt^2/R_E,

which is a dimensionless ratio. On page 77, McCutcheon honestly states: ‘In expansion theory, the gravity of an object or planet is dependent on its size. This is a significant departure from Newton’s theory, in which gravity is dependent on mass.’  At first glance, this is a crazy theory, requiring Earth (and all the atoms in it, for he makes the case that all masses expand) to expand much faster than the rate of expansion of the universe.

However, on page 194, he argues that the outward acceleration of an atom of radius R is:

a = X_A R,

Now, the first thing to notice is that acceleration has units of m/s^2 and R has units of m. So this equation is dimensionally false if X_A = (1/2)gt^2/R_E. The only way to make a = X_A R dimensionally consistent is to change the definition of X_A by dropping t^2 from the dimensionless ratio (1/2)gt^2/R_E, giving the ratio:

X_A = (1/2)g/R_E,

which has the correct units of s^-2. So we end up with this accurate version of McCutcheon’s formula for the outward acceleration of an atom of radius R (we will use the average radius of orbit of the chaotic electron path in the ground state of a hydrogen atom for R, which is 5.29*10^-11 m):

a = X_A R = [(1/2)g/R_E]R, which can be equated to Newton’s formula for the acceleration due to mass m, which is 1.67*10^-27 kg:

a = [(1/2)g/R_E]R

= mG/R^2.

Hence, McCutcheon on page 194 calculates a value for G by rearranging these equations:

G = (1/2)gR^3/(R_E m)

= (1/2)*(9.81)*(5.29*10^-11)^3 / [(6.378*10^6)*(1.67*10^-27)]

= 6.82*10^-11 m^3/(kg*s^2),

which is only about 2% higher than the measured value of

G = 6.673*10^-11 m^3/(kg*s^2).
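As a quick check, McCutcheon’s page-194 numbers can be run through directly. This is a sketch using the dimensionally corrected X_A = (1/2)g/R_E from above, with the same input values quoted in the text:

```python
# Reproducing the G calculation above from the corrected scaling law:
# G = (1/2) g R^3 / (R_E m)
g   = 9.81       # acceleration of gravity at Earth's surface, m/s^2
R_E = 6.378e6    # Earth's radius, m
R   = 5.29e-11   # hydrogen ground-state orbit radius, m
m   = 1.67e-27   # mass of the hydrogen atom (~proton mass), kg

G = 0.5 * g * R ** 3 / (R_E * m)
print(G)  # ~6.82e-11 m^3/(kg*s^2), about 2% above the measured 6.673e-11
```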

After getting this result on page 194, McCutcheon remarks on page 195: ‘Recall … that the value for X_A was arrived at by measuring a dropped object in relation to a hypothesized expansion of our overall planet, yet here this same value was borrowed and successfully applied to the proposed expansion of the tinest [sic] atom.’

We can compress McCutcheon’s theory: what he is basically saying is the scaling law:

a = (1/2)g(R/R_E), which when set equal to Newton’s a = mG/R^2 rearranges to give: G = (1/2)gR^3/(R_E m).

However, McCutcheon’s own formula is just his guessed scaling law: a = (1/2)g(R/R_E).

Although this quite accurately scales the acceleration of gravity at Earth’s surface (g at R_E) to the acceleration of gravity at the ground-state orbit radius of a hydrogen atom (a at R), it is not clear whether this is just a coincidence, or whether it really has anything to do with McCutcheon’s expanding-matter idea. He did not derive the relationship; he simply defined it by dividing the increase in radius into the Earth’s radius, and then used this ratio in another expression which is again defined without a rigorous theory underpinning it. In its present form, it is numerology. Furthermore, the theory is not universal: the basic scaling law that McCutcheon obtains does not predict the gravitational attraction between the two balls Cavendish measured; it only relates the gravity at Earth’s surface to that at the surface of an atom, and beyond that seems to be guesswork or numerology (although it is an impressively accurate ‘coincidence’). It lacks the universal application of Newton’s law. There may be another reason why a = (1/2)g(R/R_E) is a fairly accurate and impressive relationship.

Since I regularly oppose censorship based on fact-ignoring consensus and other types of elitist fascism in general (fascism being best defined as the primitive doctrine that ‘might is right’: whoever speaks loudest or has the biggest gun is deemed scientifically correct), it is only right that I write this blog post to clarify the details that really are interesting.

Maybe McCutcheon could make his case better to scientists by putting the derivation and calculation of G on the front cover of his book, instead of a sunset. Possibly he could justify his guesswork idea to crackpot string theorists by some relativistic obfuscation invoking Einstein, such as:

‘According to relativity, it’s just as reasonable to think as the Earth zooming upwards up to hit you when you jump off a cliff, as to think that you are falling downward.’

If he really wants to go down the road of mainstream hype and obfuscation, he could maybe do even better by invoking the popular misrepresentation of Copernicus:

‘According to Copernicus, the observer is at ‘no special place in the universe’, so it is as justifiable to consider the Earth’s surface accelerating upwards to meet you as vice-versa. Copernicus travelled throughout the entire universe on a spaceship or a flying carpet to confirm the crackpot modern claim that we are not at a special place in the universe, you know.’

The string theorists would love that kind of thing (i.e., assertions that there is no preferred reference frame, based on lies) seeing that they think spacetime is 10 or 11 dimensional, based on lies.

My calculation of G is entirely different, being due to a causal mechanism of graviton radiation, and it has detailed empirical (non-speculative) foundations to it, and a derivation which predicts G in terms of the Hubble parameter and the local density:

G = (3/4)H^2/(πρe^3),

plus a lot of other things about cosmology, including the expansion rate of the universe at long distances in 1996 (two years before it was confirmed by Saul Perlmutter’s observations in 1998). However, this is not necessarily incompatible with McCutcheon’s theory. There are such things as mathematical dualities: where completely different calculations are really just different ways of modelling the same thing.
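Turned around, that formula implies a local density once G and H are fixed. The following sketch assumes ρ in the formula denotes the local density (as the text indicates) and uses an illustrative Hubble parameter; both numerical inputs are assumptions for this check, not values from the text:

```python
import math

# Rearranging G = (3/4) H^2 / (pi * rho * e^3) for the density rho,
# using the measured G and an assumed Hubble parameter H.
H = 2.3e-18    # assumed Hubble parameter, s^-1 (~71 km/s/Mpc)
G = 6.673e-11  # measured gravitational constant, m^3/(kg*s^2)

rho = 0.75 * H ** 2 / (math.pi * G * math.e ** 3)
print(rho)  # ~9e-28 kg/m^3
```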

McCutcheon’s book is not just the interesting sort of calculation above, sadly. It also contains a large amount of drivel (particularly in the first chapter) about an alleged flaw in the equation W = Fd, i.e. work energy = force applied * distance moved in the direction that the force operates. McCutcheon claims that there is a problem with this formula, and that work energy is being used continuously by gravity, violating conservation of energy. On page 14 (2004 edition) he claims falsely: ‘Despite the ongoing energy expended by Earth’s gravity to hold objects down and the moon in orbit, this energy never diminishes in strength…’

The error McCutcheon is making here is that no energy is used up unless gravity is making an object move. So the gravity field is not depleted of a single Joule of energy when an object is simply held in one place by gravity. For orbits, the gravity force acts at right angles to the distance the moon travels in its orbit, so gravity is not using up energy in doing work on the moon. If the moon were falling straight down to Earth, then yes, the gravitational field would be losing energy to the kinetic energy the moon would gain as it accelerated. But it isn’t falling: the moon is not moving towards us along the lines of gravitational force; it is moving at right angles to those lines of force. McCutcheon does eventually get to this explanation on page 21 of his book (2004 edition). But this just leads him to write several more pages of drivel about the subject: by drivel, I mean philosophy.

On a positive note, McCutcheon near the end of the book (pages 297-300 of the 2004 edition) correctly points out that where two waves of equal amplitude and frequency are superimposed (i.e., travel through one another) exactly out of phase, their waveforms cancel out completely due to ‘destructive interference’. He makes the point that there is an issue for conservation of energy where such destructive interference occurs. For example, Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment. Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out? Clearly, this would violate conservation of energy!

Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject. Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation were correct, then half of the total energy would be unaccounted for in the dark fringes. The photons are not arriving at the dark fringes; instead, they arrive in the bright fringes.
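The energy bookkeeping can be illustrated numerically. This is a minimal sketch, not Chalmers’ own path-integral calculation: averaging the standard two-slit intensity 4*I0*cos^2(delta/2) over all phase differences delta returns exactly 2*I0, the combined energy of the two contributions, so the energy missing from dark fringes reappears in the bright ones.

```python
import math

# Average the standard two-slit intensity I(delta) = 4*I0*cos(delta/2)**2
# over a full cycle of phase differences delta in [0, 2*pi): the mean must
# come back to 2*I0, i.e. no energy is destroyed, only redistributed.
I0 = 1.0
N = 100_000
# delta = 2*pi*k/N, so cos(delta/2) = cos(pi*k/N)
total = sum(4 * I0 * math.cos(math.pi * k / N) ** 2 for k in range(N))
mean_I = total / N
print(mean_I)  # ~2.0, as energy conservation requires
```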

The interference of radio waves and other phased waves is also known as the Hanbury Brown-Twiss effect: if you have two radio transmitter antennae, the signal that can be received depends on the distance between them. Moving them slightly apart or together changes the relative phase of one transmitted signal with respect to the other, cancelling the signal out or reinforcing it. (It depends on the frequencies and amplitudes as well: if both transmitters are on the same frequency and have the same output amplitude and radiated power, then perfectly destructive interference occurs if they are exactly out of phase, and perfect reinforcement – constructive interference – if they are exactly in phase.) This effect also actually occurs in electricity, replacing Maxwell’s mechanical ‘displacement current’ of vacuum dielectric charges.
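A minimal model of the two-transmitter case (an idealized sketch: two unit-amplitude, same-frequency sources observed along the line joining them, so the path difference equals the separation d):

```python
import cmath
import math

lam = 1.0  # wavelength, arbitrary units

def received_amplitude(d):
    """Magnitude of two superposed unit-amplitude fields whose relative
    phase is set by the separation d along the line of sight."""
    phase = 2 * math.pi * d / lam
    return abs(1 + cmath.exp(1j * phase))

print(received_amplitude(0.5 * lam))  # ~0: exactly out of phase, cancellation
print(received_amplitude(1.0 * lam))  # ~2: back in phase, full reinforcement
```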

Feynman quotation

The Feynman quotation I located is this:

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’

– R. P. Feynman, QED, Penguin, London, 1990, pp. 84-5. (Emphasis added in bold.)

Compare that to:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset the orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal, there is no metaphysics involved. The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare.

Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions such as superimposed spin states) is a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between the time-dependent and time-independent versions of the Schroedinger equation at the time of measurement.

Just as the Copenhagen Interpretation was supported by lies (such as von Neumann’s false ‘disproof’ of hidden variables in 1932) and fascism (such as the way Bohm was treated by the mainstream when he disproved von Neumann’s ‘proof’ in the 1950s), string ‘theory’ (it isn’t a theory) is supported by similar tactics which are political in nature and have nothing to do with science:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.


‘Superstring/M-theory is the language in which God wrote the world.’ – Assistant Professor Lubos Motl, Harvard University, string theorist and friend of Edward Witten, quoted by Professor Bert Schroer (p. 21).

‘The mathematician Leonhard Euler … gravely declared: “Monsieur, (a + bn)/n = x, therefore God exists!” … peals of laughter erupted around the room …’ –

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp 194-195. [Quoted by Tony Smith.]

Feynman predicted today’s crackpot run world in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3):

‘The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculation] you can immediately see that they are wrong, so that does not count. … There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.’

In the same book Feynman states:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook {gravity unification proof}
‘Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters’

‘If you are not criticized, you may not be doing much.’ – Donald Rumsfeld.

The Standard Model, which Edward Witten did a lot of useful work on (before he went into string speculation), is the best-tested physical theory. Forces result from radiation exchange in spacetime. The big bang matter’s speed ranges from 0 to c across a spacetime of 0 to 15 billion years, so the outward force is F = ma ≈ 10^43 N. Newton’s 3rd law implies an equal inward force, which from the Standard Model possibilities will be carried by gauge bosons (exchange radiation), predicting current cosmology, gravity and the contraction of general relativity, other forces, and particle masses.
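The ~10^43 N figure can be sanity-checked to order of magnitude. The density and age below are illustrative assumptions for this sketch, not values taken from the text:

```python
import math

# Order-of-magnitude check of the outward-force figure quoted above:
# F = M * a, with M the mass inside radius R = c*t and a = c/t.
c   = 3.0e8    # speed of light, m/s
t   = 4.7e17   # ~15 billion years, s
rho = 9.5e-27  # assumed mean density of the universe, kg/m^3

R = c * t                                # radius of the region, m
M = rho * (4.0 / 3.0) * math.pi * R ** 3 # mass within that radius, kg
a = c / t                                # speed 0 to c over time 0 to t
F = M * a                                # outward force, N
print(F)  # ~7e43 N, consistent with the ~10^43 N order of magnitude
```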

‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Francis Bacon, Novum Organum.

This predicts gravity in a quantitative, checkable way, from other constants which are being measured ever more accurately and will therefore result in more delicate tests. As for the mechanism of gravity: the dynamics here, which predict gravitational strength and various other observable and further checkable aspects, are consistent with LQG and with Lunsford’s gravitational-electromagnetic unification, in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion) and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law).

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.

That’s wave-particle duality explained. The path integral doesn’t mean that the photon goes along all possible paths; as Feynman says, it uses only a ‘small core of nearby space’.

The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, the photon is diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path, and near it.

Similarly, you find people saying that QFT says the vacuum is full of loops of annihilation-creation. When you check what QFT actually says, those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, i.e. below the IR cutoff (beyond about 1 fm from a charge), then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, i.e. at zero distance from a particle, then the loops would have infinite energy and momenta, and their effects on the field would be infinite, again causing problems.

So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles). The LQG loops are entirely different (exchange radiation) and cause gravity, not cosmological constant effects. Hence no dark energy mechanism can be attributed to the charge creation effects in the Dirac sea, which exists only close to real particles.
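For scale, the ~1 fm cutoff distance can be converted to an energy with the standard estimate E ~ ħc/r. This is a sketch using the usual quantum field theory correspondence between distance and energy scales, not a calculation from the text:

```python
# Convert a cutoff distance in femtometres to an energy scale in MeV,
# using the standard value hbar*c = 197.327 MeV*fm.
HBAR_C = 197.327  # MeV*fm

def energy_scale_MeV(r_fm):
    """Energy scale E ~ hbar*c / r for a distance cutoff r (in fm)."""
    return HBAR_C / r_fm

print(energy_scale_MeV(1.0))  # ~197 MeV at the ~1 fm IR cutoff distance
```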

‘By struggling to find a mathematically precise formulation, one often discovers facets of the subject at hand that were not apparent in a more casual treatment. And, when you succeed, rigorous results (‘Theorems’) may flow from that effort.

‘But, particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigorous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, blog entry on The Role of Rigour.

‘[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.’ – Editorial, p5 of the 9 Dec 06 issue of New Scientist.

Far easier to say anything else is crackpot. String isn’t, because it’s mainstream, has more people working on it, and has a large number of ideas connecting one another. No ‘lone genius’ can ever come up with anything more mathematically complex and amazingly technical than string theory’s ideas, which are the result of decades of research by hundreds of people. Ironically, the core of a particle is probably something like a string, albeit not the M-theory 10/11-dimensional string: just a small loop of energy which acquires mass by coupling to an external mass-giving bosonic field. It isn’t the basic idea of a string which is necessarily wrong, but the way the research is done, and the idea that by building a very large number of interconnected buildings on quicksand, the result will somehow escape disaster despite having no solid foundations. In spacetime, you can equally well interpret the recession of stars as a variation of velocity with time past as seen from our frame of reference, or as a variation of velocity with distance (the traditional ‘tunnel-vision’ due to Hubble).

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.

Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of ‘predicting gravity’, he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above, leading to a prediction of gravity, are an ‘alternative to currently accepted theories’. Where is the theory in string? Where is the theory in M-‘theory’ which predicts G? It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn’t an alternative to a currently accepted theory. It’s tested and validated science, contrasted with a currently accepted religious non-theory explaining an unobserved particle by using unobserved extra-dimensional guesswork. I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!

There is some dark matter in the form of the mass of neutrinos and other radiations, which will be attracted around galaxies and affect their rotation, but it is bizarre to try to use discrepancies in false theories as ‘evidence’ for unobserved ‘dark energy’ and ‘dark matter’, neither of which has been found in any particle physics experiment or detector in history. The ‘direct evidence of dark matter’ seen in photographs of gravitationally distorted images doesn’t say what the ‘dark matter’ is, and we should remember that Ptolemy’s followers were rewarded for claiming that direct evidence of the Earth-centred universe was apparent to everyone who looked at the sky. Science requires evidence and facts, not faith-based religion which ignores or censors out the evidence and the facts.

The reason for current popularity of M-theory is precisely that it claims to not be falsifiable, so it acquires a religious or mysterious allure to quacks, just as Ptolemy’s epicycles, phlogiston, caloric, Kelvin’s vortex atom and Maxwell’s mechanical gear box aether did in the past. Dr Peter Woit explains the errors and failures of mainstream string theory in his book Not Even Wrong (Jonathan Cape, London, 2006, especially pp 176-228): using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.

By claiming to ‘predict’ everything conceivable, it predicts nothing falsifiable at all and is identical to quackery, although string theory might contain some potentially useful spin-offs such as science fiction and some mathematics (similarly, Ptolemy’s epicycles theory helped to advance maths a little, and certainly Maxwell’s mechanical theory of aether led ultimately to a useful mathematical model for electromagnetism; Kelvin’s false vortex atom also led to some ideas about perfect fluids which have been useful in some aspects of the study of turbulence and even general relativity). Even if you somehow discovered gravitons, superpartners, or branes, these would not confirm the particular string theory model anymore than a theory of leprechauns would be confirmed by discovering small people. Science needs quantitative predictions.

Dr Imre Lakatos explains the way forward in his article ‘Science and Pseudo-Science’:

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

Really, there is nothing more anyone can do after making a long list of predictions which have been confirmed by new measurements, but are censored out of mainstream publications by the mainstream quacks of stringy elitism. Prof Penrose wrote this depressing conclusion well in 2004 in The Road to Reality so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition:

On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’

Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

On page 1026, Penrose gets down to the business of how science is really done: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’

‘Cargo cult science is defined by Feynman as a situation where a group of people try to be scientists but miss the point. Like writing equations that make no checkable predictions… Of course if the equations are impossible to solve (like due to having a landscape of 10^500 solutions that nobody can handle), it’s impressive, and some believe it. A winning theory is one that sells the most books.’ –

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, 1984, Chancellor Press, London, 1984, p225

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl’s gauge symmetry principle

The previous post, plus a re-reading of Professor Zee’s Quantum Field Theory in a Nutshell (Princeton, 2003), suggests a new formulation for quantum gravity, the mechanism and mathematical predictions of which were given two posts ago. The sum over histories for real particles is used to work out the path of least action, such as the path of a photon of light which takes the least time to bounce off a mirror. You can do the same thing for the path of a real electron, or the path of a drunkard’s walk. The integral tells you the effective path taken by the particle, or the probability of any given path being taken, from many possible paths.

For gauge bosons or vector bosons, i.e., force-mediating radiation, the role of the path integral is no longer to find the probability of a path being taken or the effective path.  Instead, gauge bosons are exchanged over many paths simultaneously.  Hence there are two totally different applications of path integrals we are concerned with:

  • Applying the path integral for real particles involves evaluating a lot of paths, most of which are not actually taken (the real particle takes only one of those paths, although as Feynman said, it uses a ‘small core of nearby space’, so it can be affected by both of two slits in a screen, provided those slits are close together, within a transverse wavelength or so, so that the small core of paths taken overlaps both slits).
  • Applying the path integral for gauge bosons involves evaluating a lot of paths which are all actually being taken, because the extensive force field is composed of lots of gauge bosons being exchanged between charges, really going all over the place (for long-range gravity and electromagnetism).

In both cases the path taken by a given real particle or a single gauge boson must be composed of straight lines in between interactions (see Fig. 1 of previous post) because the curvature of general relativity appears to be a classical approximation to a lot of small discrete deflections due to discrete interactions with field quanta (sometimes curves are used in Feynman diagrams for convenience, but according to quantum field theory all mechanisms for curvature actually involve lots of little deflections by the quanta of fields).

The calculations of quantum gravity, two posts ago, use geometry to evaluate these straight-line gauge boson paths for gravity and electromagnetism.  Presumably, translating the simplicity of the geometric calculations in that post into a path integral will appeal more to the stringy mainstream.  Loop quantum gravity methods of summing up a lot of interaction graphs will be used to do this.  What is vital are directional asymmetries, which transform a perfect symmetry of gauge boson exchanges in all directions into a force, represented by the geometry of Fig. 1 (below).  One way to convert that geometry into a formula is to consider the inward and outward travelling isotropic graviton exchange radiation by using the divergence operator.  I think this can be done easily because there are two useful physical facts which make the geometry even simpler than it appears from Fig. 1: first, the shield area x in Fig. 1 is extremely small, so the asymmetry cone can never have a large base in any practical situation; second, by Newton’s proof, the inverse-square-law gravity force from a lot of little particles spread out in the Earth is the same as you get by mathematically assuming that all the little masses (fundamental particles) are not spread throughout a large planet but are all at the centre.  So a path integral formulation for the geometry of Fig. 1 is simple.

Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – spin 1 gravitons which cause attraction by simply pushing things, as this allows predictions, as proved in the earlier post – from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram: (force of gravity) = (total inward force) × (cross-sectional area of the shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R²/r²) / (total spherical area with radius R).  (Full proof here.)
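The caption’s formula is simple enough to evaluate directly.  A minimal numerical sketch (the function name and input values are mine, purely for illustration) also shows the projection radius R cancelling out:

```python
import math

def gravity_force(inward_force, shield_area, r, R):
    """Evaluate the formula stated in the Fig. 1 caption:
    F = (total inward force) * (shield area projected out to radius R)
        / (total spherical area at radius R),
    where the projected cone-base area is shield_area * (R/r)**2."""
    projected_area = shield_area * (R / r) ** 2
    sphere_area = 4.0 * math.pi * R ** 2
    return inward_force * projected_area / sphere_area

# R cancels, leaving an inverse-square law in the distance r:
f_near = gravity_force(1.0, 1e-6, 2.0, 100.0)
f_far = gravity_force(1.0, 1e-6, 4.0, 100.0)
print(f_far / f_near)  # close to 0.25: doubling r quarters the force
```

Because the R² in the projected area cancels the R² in the spherical area, the choice of R makes no difference, consistent with the caption.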

Weyl’s gauge symmetry principle

A symmetry is anything that doesn’t change as the result of a transformation.  For example, the colour of a plastic pen doesn’t change when you rotate it, so the colour is a symmetry of the pen when the transformation type is a rotation.  If you transform the plastic pen by burning it, colour is not a symmetry of the pen (unless the pen was the colour of carbon in the first place).

A gauge symmetry is one where scalable quantities (gauges) are involved.  For example, there is a symmetry in the fact that the same amount of energy is required to lift a 1 kg mass up by a height of 1 metre, regardless of the original height of the mass above sea level.  (This example is not exactly true, but it is nearly so, because the fall in gravitational acceleration with height is small: gravity is only about 0.3% weaker at the top of the tallest mountain than at sea level.)

In 1915 the mathematician Emmy Noether proved a great theorem which states that any continuous symmetry leads to a conservation law; e.g., the symmetry of physical laws under the passage of time (the laws remaining the same as time passes) leads to the principle of conservation of energy!  This particularly impressive example of Noether’s theorem does not strictly apply to forces over very long time scales because, as proved previously, fundamental force coupling constants (relative charges) increase in direct proportion to the age of the universe.  However, the theorem is increasingly accurate as the time scale involved is reduced, and the inaccuracy becomes trivial when the time considered is small compared to the age of the universe.

At the end of Quantum Field Theory in a Nutshell (at page 457), Zee points out that Maxwell’s equations unexpectedly contained two hidden symmetries, Lorentz invariance and gauge invariance: ‘two symmetries that, as we now know, literally hold the key to the secrets of the universe.’

He then argues that Maxwell’s long-hand differential equations masked these symmetries and it took Einstein’s genius to uncover them (special relativity for Lorentz invariance, general relativity for the tensor calculus with the repeated-indices summation convention, e.g., compressed notation of the form F_ab = 2∂_[a A_b] = ∂_a A_b – ∂_b A_a).  This is actually a surprisingly good point to make.

Zee, judging from what his Quantum Field Theory in a Nutshell book contains, does not seem to be aware of how useful Heaviside’s vector calculus is (Heaviside compressed Maxwell’s 20 equations into 4 field equations plus a continuity equation for conservation of charge, while Einstein merely compressed the 4 field equations into 2, a less impressive feat and one leading to less intuitive equations; the divergence and curl operators in vector calculus describe the simple divergence of radial electric field lines, which you can picture, and the simple curling of electric or magnetic field lines, which again is easy to picture).  In addition, the way relativity comes from Maxwell’s equations is best expressed non-mathematically, just because it is so simple: if you move relative to an electric charge you get a magnetic field; if you don’t move relative to an electric charge, you don’t see the magnetic field.

Zee adds: ‘it is entirely possible that an insightful reader could find a hitherto unknown symmetry hidden in our well-studied field theories.’

Well, he could start with the insight that U(1) doesn’t exist, as explained in the previous post.  There are no single charged leptons about, only pairs of them.  They are created in pairs, and are annihilated in pairs.  So really you need some form of SU(2) symmetry to replace U(1).  As a bonus, such a replacement predicts gravity and electromagnetism quantitatively, giving the coupling constants for each and the complete mechanism for each force.

Just to be absolutely lucid on this, so that there can be no possible confusion:

  • SU(2) correctly asserts that quarks form quark-antiquark doublets due to the short-range weak force mediated by massive weak gauge bosons.
  • U(1) falsely asserts that leptons do not form doublets due to the long-range electromagnetic force mediated by massless electromagnetic gauge bosons.

The correct picture to replace SU(2)xU(1) is based on the same principle for SU(2) but a replacement of U(1) by another effect of SU(2):

  • SU(2) correctly asserts that quarks form quark-antiquark doublets due to the short-range weak force mediated by massive weak gauge bosons.
  • SU(2) also correctly asserts that leptons form lepton-antilepton doublets (although since the binding force is long-range electromagnetism rather than the short-range weak force of massive gauge bosons, the lepton-antilepton doublets are not confined to a small region: the range over which the electromagnetic force operates is simply far greater than that of the weak force).

Solid experimentally validated evidence for this (including mechanisms and predictions of gravity and electromagnetism strengths, etc., from massless SU(2) gauge boson interactions which automatically explain gravity and electromagnetism): here.  Sheldon Glashow’s early expansion of the original Yang-Mills SU(2) gauge interaction symmetry to unify electromagnetism and weak interactions is quoted here.  More technical discussion on the relationship of leptons to quarks implied by the model: here.

However, innovation of a checkable sort is now unwelcome in mainstream stringy physics, so maybe Zee was joking, and maybe he secretly doesn’t want any progress (unless of course it comes from mainstream string theory).  This suggestion is made because Zee on the same page (p457) adds that the experimentally-based theory of electromagnetic unification (unification of electricity and magnetism) was a failure to achieve its full potential because those physicists: ‘did not possess the mind-set for symmetry.  The old paradigm “experiments -> action -> symmetry” had to be replaced in fundamental physics by the new paradigm “symmetry -> action -> experiments,” the new paradigm being typified by grand unified theory and later by string theory.’  (Emphasis added.)

Problem is, string theory has proved an inedible, stinking turkey (Lunsford both more politely and more memorably calls string theory ‘a vile and idiotic lie’ which ‘has managed to slough itself along for 20 years, leaving a shiny trail behind it’).  I’ve explained politely why string theory is offensive, insulting, abusive, dictatorial ego-massaging, money-laundering pseudoscience at my domain.

Zee needs to try reading Paul Feyerabend’s book, Against Method.  Science actually works by taking the route that most agrees with nature, regardless of how unorthodox an idea is, or how crazy it superficially looks to the prejudiced who don’t bother to check it objectively before arriving at a conclusion on its merits; ‘science,’ when it does occasionally take a popular route that is a total failure, e.g., mainstream string theory, temporarily becomes a religion.  String theorists are like fanatical preachers, trying to dictate to the gullible what nature is like ahead of any evidence, the very error Bohr alleged Einstein was making in 1927.  Actually there is a strong connection between the speculative Copenhagen Interpretation propaganda of Bohr in 1927 (Bohr in fact had no solid evidence for his pet theory of metaphysics, while Einstein had every causal law and mechanism of physics on his side; today we all know from high-energy physics that virtual particles are an experimental fact and that they cause indeterminacy in a simple mechanical way on small distance scales) and string theory.  Both rely on exactly the same mixture of lies, hype, coercion, and ridicule of factual evidence.  Both are religions.  Neither is a science, and no matter how much physically vacuous mathematical obfuscation they use, it fails to cover up the gross incompetence in basic physics, which remains as transparent as the Emperor’s new clothes.  Unfortunately, most people see what they are told to see, so this farce of string theory continues.

Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks

Fig. 1 - Quantum gravity versus smooth spacetime curvature of general relativity

Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see previous post for the mechanism and quantitative checked prediction of the strength of gravity).  If you believe string theory, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integral formulation of quantum gravity.  (Basically, spin-1 gravitons push, while spin-2 gravitons suck.  So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons.  But if you merely want any old theory of quantum gravity that well and truly sucks, you can take your pick from the ‘landscape’ of 10^500 stringy theories of mainstream sucking spin-2 gravitons.)  In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime ‘continuum’ (spacetime fabric).

In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core).  As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized rather than being a continuous curved spacetime, there is the problem that the sources of fields, such as discrete units of matter, come in quantized units at discrete locations in spacetime.  General relativity only produces smooth curvature (the acceleration curve in the left hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter through the use of an averaged density to represent the ‘source’ of the gravitational field.

The curvature of the line in the Feynman diagram for general relativity is therefore due to the smoothing of the source of gravity in spacetime, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space.  Quantum field theory suggests that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass).  However, ‘gravitons’ have not been observed, so some uncertainties remain about their nature.  Fig. 1 (which was inspired – in part – by Fig. 3 in Lee Smolin’s Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity:

‘Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory…’ –

The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there’s no gravitational retardation.  More important, it predicts gravity quantitatively, and doesn’t use any ad hoc hypotheses, just experimentally validated facts as input.  I’ve used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity.

I can’t update the more permanent paper on the CERN document server here because, as Tony Smith has pointed out, “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …”  The only way you can update a paper on the CERN document server is if it is a mirror copy of one on arXiv; update the arXiv paper and CERN’s mirror copy will be updated.  This is contrary to scientific ethics, whereby the whole point of electronic archives is that corrections and updates should be permissible.  Professor Jacques Distler, who works on string theory and is a member of arXiv’s advisory board, despite being warmly praised by me, still hasn’t even put Lunsford’s published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.

Path integrals of quantum field theory

The path integral for the incorrect spin-2 idea was discussed in the earlier post here, while the correct mechanism, with accurate predictions confirming it, is in the post here.  Let’s now examine the path integral formulation of quantum field theory in more depth.  Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:

‘The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.’

As Fig. 1 shows, according to Feynman, ‘curvature’ is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps.  If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions.  The summation process is what we are about to describe mathematically.  By way of introduction, we can recall the random walk statistics mentioned in the previous post.  If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average distance of x·n^(1/2) from the starting point, in a random direction!  The square-root scaling with the number of steps is a standard result of diffusion theory.  (If this were not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.)  This result is a statistical average over a great many drunkard’s walks.  You can derive it statistically, or you can simulate it on a computer: add up the distance gone after n steps for lots of random walks, and take the average.  In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur.
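The x·n^(1/2) rule is easy to verify by simulation; strictly, the n^(1/2) scaling is exact for the root-mean-square distance (the plain average is slightly smaller).  A quick Monte Carlo sketch:

```python
import math
import random

def rms_walk_distance(n_steps, step=1.0, trials=5000):
    """Root-mean-square distance from the start after n_steps
    equal-length steps in random directions (a 2D drunkard's walk)."""
    total_sq = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / trials)

random.seed(1)
for n in (25, 100):
    # The simulated RMS distance tracks step * sqrt(n) closely.
    print(n, rms_walk_distance(n), math.sqrt(n))
```

Each trial is one ‘history’; averaging over many trials is the same summing-over-possibilities idea that the path integral formalises.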

Feynman applied this procedure to the principle of least action.  One simple way to illustrate this is the discussion of how light reflects off a mirror.  Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting.  If the angle of incidence were not equal to the angle of reflection, then light would take longer to arrive than it actually does (i.e., for a given source and detector, the symmetric path whose two congruent sides form an isosceles triangle is shorter than any asymmetric path via a different point on the reflecting surface).
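That isosceles-triangle argument can be checked numerically.  A small sketch (the geometry values are mine, chosen for illustration) scans candidate reflection points along a mirror on the line y = 0:

```python
import math

def path_length(x, source=(0.0, 1.0), detector=(4.0, 1.0)):
    """Length of a light path from the source down to the mirror
    point (x, 0) and back up to the detector."""
    sx, sy = source
    dx, dy = detector
    return math.hypot(x - sx, sy) + math.hypot(dx - x, dy)

# Scan reflection points from x = 0 to x = 4 in steps of 0.01; the
# shortest path hits the mirror at the midpoint, where the angle of
# incidence equals the angle of reflection for this symmetric setup.
lengths = [(path_length(i * 0.01), i * 0.01) for i in range(401)]
shortest, best_x = min(lengths)
print(best_x)  # 2.0: the equal-angles reflection point
```

The minimum of the scanned lengths sits exactly where classical optics says it should, which is the least-time route the path integral’s stationary phase picks out.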

The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action.  Feynman explains this with path integrals in his book QED (Penguin, 1990).  Physically, path integrals are the mathematical summation of all possibilities.  Feynman crucially discovered that all contributions have the same magnitude but that the phase or effective direction (argument of the complex number) varies for different paths.  Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.

To get the probability of event y occurring, you first calculate the amplitude for that event.  Then you calculate the path integral over all possible events, including event y.  Then you divide the result for event y by the path integral over all possibilities.  This division gives the absolute probability of event y occurring in the probability space of all possible events!  Easy.

Feynman found that the amplitude for any given history is proportional to e^(iS/h-bar), and that the probability is proportional to the square of the modulus (positive value) of e^(iS/h-bar).  Here, S is the action for the history under consideration.
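The phase cancellation this produces can be seen in a toy sum over e^(iS/h-bar) amplitudes.  This is a hedged sketch with a made-up quadratic action (not a real physical system): near any stationary point a smooth action looks approximately quadratic, so the illustration is generic.

```python
import cmath

def phase_sum(action, paths, hbar=1.0):
    """Sum the amplitude exp(iS/hbar) over a one-parameter family
    of paths, each labelled by a number a with action S = action(a)."""
    return sum(cmath.exp(1j * action(a) / hbar) for a in paths)

# Toy action with a stationary point (least action) at a = 0.
action = lambda a: 50.0 * a * a

near = [i * 0.01 for i in range(-10, 11)]        # paths near least action
far = [2.0 + i * 0.01 for i in range(-10, 11)]   # paths far from it

print(abs(phase_sum(action, near)))  # large: phases nearly aligned
print(abs(phase_sum(action, far)))   # small: phases rotate and cancel
```

Paths near the stationary action add up almost in phase; paths far from it spin rapidly around the complex plane and largely cancel, which is why the classical least-action path dominates.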

What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won’t correct errors in his books when I email the errors), the particle doesn’t actually travel on all of the paths integrated over in a specific interaction!  What happens is just one interaction, and one path.  The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities.  (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories.  This is just a matter of applying simple calculus!)

However, the nature of Feynman’s path integral does allow a little interaction between nearby paths!  This doesn’t happen with Brownian diffusion!  It is caused by the phase interference of nearby paths, as Feynman explains very carefully:

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The Wiki article explains:

‘In the limit of action that is large compared to Planck’s constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.

‘Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it’s going to go. The path integral is one way of understanding why this works. The system doesn’t have to know in advance where it’s going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.’

I think this last bit is badly written: interference is only possible within the ‘small core’ of nearby paths that the photon or other particle actually samples.  The paths which are not taken are not eliminated by interference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.

Similarly, to calculate the probability of a die landing with a particular face up, you need to know how many faces the die has.  So on one throw the probability of one particular face landing upwards is 1/6 for a six-sided die.  But the fact that the number 6 goes into the calculation doesn’t mean that the die actually lands with every face up.  Similarly, a photon doesn’t arrive along routes where there is perfect cancellation!  No energy goes along such routes, so nothing at all physical travels along any of them.  Those routes are only included in the calculation because they were possibilities, not because they were paths taken.

In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved.  For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected.  The mechanism here is very simple.  Consider the glass before any photon even approaches it.  A normal block of glass is full of electrons in motion and vibrating atoms.  The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration.  Some of the vibration frequencies will be cancelled out by interference.  So the vibration frequencies of the electrons at the surface of the glass are modified in accordance with the thickness of the glass, even before the photon approaches the glass.  This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected.  It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.

The natural frequencies of vibration in a block of glass depend on the size of the block of glass!  These natural frequencies then determine the probability that a photon is reflected.  So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness.  It’s extremely simple.  Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency.  Higher or lower engine frequencies produce less window rattle.  The frequency where the windows shake the most is the natural frequency.  (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)

The exact way that the precise thickness of a sheet of glass affects the ability of surface electrons to reflect light is easily understood by reference to Schrödinger’s original idea of how stationary orbits arise in a wave picture of the electron.  Schrödinger found that where an integer number of electron wavelengths fits into the orbit circumference, there is no interference.  But when only a fractional number of wavelengths would fit into that distance, interference would result.  As a result, only quantized orbits were possible in that model, corresponding to Bohr’s quantum mechanics.  In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fits into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency.  However, when only a fractional number of wavelengths fits into the glass thickness, there is destructive interference in the oscillations.  This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency.  (There is also a random element involved, when considering the probability of individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)
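The whole-number-of-wavelengths condition described above reduces to a simple counting test.  This is a sketch of that counting rule only (function name and example values are mine), not the full thin-film optics calculation:

```python
def whole_wavelengths_fit(thickness, wavelength, tol=1e-6):
    """True when an integer number of wavelengths fits into the
    thickness: the 'no destructive interference' condition in the
    text's picture of natural frequencies in a sheet of glass."""
    cycles = thickness / wavelength
    return abs(cycles - round(cycles)) < tol

# 500 nm light in a 1000 nm layer: exactly 2 wavelengths fit.
print(whole_wavelengths_fit(1000e-9, 500e-9))  # True
# In a 1250 nm layer, 2.5 wavelengths fit: destructive interference.
print(whole_wavelengths_fit(1250e-9, 500e-9))  # False
```

The tolerance parameter simply absorbs floating-point rounding; physically, the closer the fit is to a whole number of wavelengths, the closer that frequency is to a natural frequency of the sheet.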

Virtual pair-production can be included in path integrals by treating antimatter (such as positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it’s just a mathematical convenience for making calculations).  For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and Applications.  For other standard references, scroll down this page.  For Feynman’s problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.

Feynman was extremely pragmatic.  To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions.  For example, Feynman said:

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.

If you can get the right equations even from a false model, you have done something useful, as Maxwell did.  However, you might still want to search for the correct model, as Feynman explained:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.

This perturbative expansion is a simple example of the application of path integrals.  There are several ways that the electron can move, each corresponding to a unique Feynman diagram.  The electron can go along a direct path from spacetime location A to spacetime location B.  Alternatively, it can be deflected by a virtual particle en route, and travel by a slightly longer path.  Another alternative is that it could be deflected by two virtual particles.  There are, of course, an infinite number of other possibilities.  Each has a unique Feynman diagram, and to calculate the most probable outcome you need to average them all in accordance with Feynman’s rules.

For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) ‘photon’ from a magnet in the simplest way possible.  This contributes about 99.88% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments.  The next Feynman diagram is the second highest contributor and accounts for about 0.116% of the total.  This correction is the situation evaluated by Schwinger in 1947 and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet.  After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier.  This is odd because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy.  How, physically, can this Feynman diagram explain how the magnetic moment of the electron can be increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet?  If this mechanism were the correct story, you might expect a reduced magnetic moment, not an increase.  Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electron by being lost during an interaction.  Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all, it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).
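The relative sizes of the Dirac term and the first (Schwinger) correction can be sanity-checked directly from the fine-structure constant; this is a minimal sketch, assuming only the standard approximate value of alpha:

```python
import math

# Fine-structure constant (CODATA approximate value)
alpha = 1 / 137.035999

dirac = 1.0                        # Dirac's result: g/2 = 1 exactly
schwinger = alpha / (2 * math.pi)  # first radiative correction, ~0.00116

total = dirac + schwinger
print(f"Schwinger term:       {schwinger:.6f}")           # ~0.001161
print(f"g/2 to first order:   {total:.6f}")               # ~1.001161
print(f"Dirac share of total: {100 * dirac / total:.2f}%")  # ~99.88%
```

So the Schwinger diagram adds about 0.116% to the Dirac value, and the Dirac diagram supplies roughly 99.88% of the first-order total.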

There are two possible explanations for this:

1) the Feynman diagram for Schwinger’s correction is physically correct.  The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet.  The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual ‘photon’ that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction.  After the interaction, the electron re-captures the virtual ‘photon’ and is no longer moving towards the magnet.  The blue-shift is the opposite of red-shift.  Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength, because the photons have an energy that is directly proportional to their frequency (E = hf).  This mechanism may be correct, and needs further investigation.

2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core.  This mechanism was described in the previous post.  It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.

The relationship of leptons to quarks and the perturbative expansion

As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship.  The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality.  This was first recognised when the lepton beta decay event

muon -> electron + electron antineutrino + muon neutrino

was found to have similar detailed properties to the quark beta decay event

neutron -> proton + electron + electron antineutrino

Nicola Cabibbo used such evidence that quarks are closely related to leptons (I’ve only given one of many examples above) to develop the concept of ‘weak universality’, which involves a similarity in the weak interaction coupling strength between different generations of particles.

As stated in comment 13 of the previous post, I’m interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y:

Q = T + Y/2.

where Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks).
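The relation Q = T + Y/2 can be checked against the charge assignments quoted in this post for one generation; a minimal sketch using exact fractions (the particle list simply restates the T, Y, Q values given in the text):

```python
from fractions import Fraction as F

# (name, weak isospin T, weak hypercharge Y, electric charge Q)
# for one generation, as quoted in the text
particles = [
    ("e_L", F(-1, 2), F(-1),     F(-1)),
    ("v_L", F(1, 2),  F(-1),     F(0)),
    ("e_R", F(0),     F(-2),     F(-1)),
    ("v_R", F(0),     F(0),      F(0)),
    ("u_L", F(1, 2),  F(1, 3),   F(2, 3)),
    ("d_L", F(-1, 2), F(1, 3),   F(-1, 3)),
    ("u_R", F(0),     F(4, 3),   F(2, 3)),
    ("d_R", F(0),     F(-2, 3),  F(-1, 3)),
]

for name, T, Y, Q in particles:
    assert Q == T + Y / 2, name  # the relation holds exactly for every entry
    print(f"{name}: Q = {T} + ({Y})/2 = {Q}")
```

Every left- and right-handed lepton and quark in the generation satisfies the relation exactly.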

The minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” with strong (colour) charge and fractional apparent electric charge are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated.  The emergence of these new short-ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated.  (The Pauli exclusion principle simply says that in a confined system, each particle has a unique set of quantum numbers.)  Peter Woit’s Not Even Wrong summarises what is known in Figure 7.1 on page 93:

‘The picture shows the SU(3) x SU(2) x U(1) transformation properties of the first three generations of fermions in the standard model (the other two generations behave the same way).

‘Under SU(3), the quarks are triplets and the leptons are invariant.

‘Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).

‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’

This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (‘quarks’ in baryons), whereas SU(2) controls doublets of particles.

But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!

The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority.

For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks.  First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):

eL  -> dL

The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3.  Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units (from -1 to +1/3).

Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and accompanying increase in weak hypercharge by +4/3 units:

eR -> dR

The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case).

The transformation of a right-handed electron to right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3.  This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark.  So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.

If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks and just consider matter, then the symmetry transformations required to change right-handed neutrinos into right-handed upquarks, and left-handed neutrinos into left-handed upquarks, are:

vL -> uL

vR -> uR

The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2.  We notice that Y gains 4/3 in the transformation, while Q gains 2/3.

The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = 0 becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0.  We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3.  Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid.  So we have accounted for the conversion of the four leptons in one generation of particle physics (two types of handed electrons and two types of handed neutrinos) into the four quarks in the same generation of particle physics (left and right handed versions of two quark flavours).
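The bookkeeping for all four suggested transformations can be verified with exact arithmetic; a minimal check (restating the Y and Q values quoted above) that each transformation gains the same hypercharge and electric charge increments:

```python
from fractions import Fraction as F

# (Y, Q) before and after each suggested lepton -> quark transformation,
# using the values quoted in the text
transforms = {
    "eL -> dL": ((F(-1), F(-1)), (F(1, 3),  F(-1, 3))),
    "eR -> dR": ((F(-2), F(-1)), (F(-2, 3), F(-1, 3))),
    "vL -> uL": ((F(-1), F(0)),  (F(1, 3),  F(2, 3))),
    "vR -> uR": ((F(0),  F(0)),  (F(4, 3),  F(2, 3))),
}

for name, ((Y0, Q0), (Y1, Q1)) in transforms.items():
    dY, dQ = Y1 - Y0, Q1 - Q0
    # every transformation gains Y by +4/3 and Q by +2/3
    assert dY == F(4, 3) and dQ == F(2, 3), name
    print(f"{name}: dY = {dY}, dQ = {dQ}")
```

All four transformations gain exactly ΔY = +4/3 and ΔQ = +2/3, which is the consistency claimed in the text.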

These transformations are obviously not normal reactions at low energy.  The first two make checkable, falsifiable predictions about unification, replacing supersymmetric speculation about the unification of running couplings: the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls).

If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks,

eL  -> dL

eR -> dR

vL -> uL

vR -> uR

it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts.  The physical mechanism for the first two of the four transformations is relatively straightforward to picture: collide leptons at enormous energy, and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low energy) electric charge, with this shielded energy used instead for short-range weak hypercharge mediated by weak gauge bosons, and colour charges for the strong force.

Because we know exactly how much energy is ‘lost’ from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained in the weak force and seeing if the predictions are accurate.  This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge.  It is also necessary to investigate the possibilities for the transformation of positrons into upquarks.  This issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang by the mechanism suggested for the first two transformations.

However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry.  Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charge.  (Similarly, atoms often appear to be neutral, but if we smash them to pieces, observable electric charges arise.  The apparent electrical neutrality of atoms is a masking effect of the fact that atoms usually carry equal positive and negative charge, which cancel as seen from a distance.  A photon of light similarly carries positive electric field and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can interact with charges.)

Charge is only manifested by way of the field created by a charge: nobody has ever seen the core of a charged particle, only the field.  A confined field of a given charge is therefore indistinguishable from a charge.  The only reason why an electron appears to be a negative charge is because it has a negative electric field around it.  As shown in Fig. 5 of the previous post, there is a modification necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.

So a ‘neutral’ particle which is neutral because it contains equal amounts of positive and negative electric field may be able to induce electric polarization of the vacuum for the short-ranged (uncancelled) electric field.  The range of this effect is obviously limited to the distance between the centre of the positive part of the particle and the centre of the negative part.  (In the case of a photon, for example, this distance is the wavelength.)

If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity mediating gauge boson, and a massive weak neutral-current producing gauge boson.

Let’s try the transformation of a positron into an upquark.  This has two major advantages over the idea that neutrinos are transformed into upquarks.  First, it explains why we don’t observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but it quickly annihilates with matter into gamma rays).  In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter.  The transformation of free positrons into confined upquarks would sort out this problem.  Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron.  If the upquarks come from a transformation of positrons while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.

Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge of the positron to drop from +1 unit for a positron to +2/3 units for upquarks.  This occurs because you get two positive upquarks and one downquark in a proton.  The transformation is

e+L  -> uL

The positron on the left hand side has Y = +1, Q = +1 and T = +1/2.  The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2.  Hence, there is a decrease of Y by 2/3, while Q decreases by 1/3.  Hence the amount of change of Y is twice that of Q.  This is impressively consistent with the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation eL -> dL.
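The claimed pattern ΔY = 2ΔQ can also be checked exactly for the positron case alongside the electron case, again just restating the quoted (Y, Q) values:

```python
from fractions import Fraction as F

# (Y, Q) before and after, for the antimatter and matter transformations
# discussed in the text
cases = {
    "e+L -> uL": ((F(1),  F(1)),  (F(1, 3), F(2, 3))),
    "eL  -> dL": ((F(-1), F(-1)), (F(1, 3), F(-1, 3))),
}

for name, ((Y0, Q0), (Y1, Q1)) in cases.items():
    dY, dQ = Y1 - Y0, Q1 - Q0
    # in both cases the hypercharge change is twice the electric charge change
    assert dY == 2 * dQ, name
    print(f"{name}: dY = {dY} = 2 x ({dQ})")
```

The positron transformation gives ΔY = −2/3 = 2 × (−1/3), matching the electron transformation’s ΔY = 4/3 = 2 × (2/3).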

There are only two ways that quarks can group: in pairs (mesons) and in triplets or triads (baryons).  Pairs of quarks sharing the same polarized vacuum are known as mesons: SU(2) symmetry pairs of a left-handed quark and a left-handed antiquark, which both experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right-handed neutrino has zero weak hypercharge).  The SU(3) symmetry triplets of quarks are called baryons.

Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so.  This arises from the way the vector bosons gain mass.  In the basic standard model, everything is massless.  Mass is added to the standard model by a separate scalar field (such as that speculatively proposed by Philip Anderson and Peter Higgs, called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass.  The quanta for the scalar mass field are named ‘Higgs bosons’, but these have never been officially observed, and mainstream speculations do not predict the Higgs boson mass unambiguously.

The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass, which couple weakly because of the shielding factor of the polarized vacuum around a fermion.  The 91 GeV energy is equivalent to the rest mass of the uncharged neutral weak gauge boson, the Z.

The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively.  The problem is located in the electroweak SU(2)xU(1) symmetry.  Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity).  So you end up learning say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey.  This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations: Feynman’s analogy of the chess game is fairly good.  (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen; without having the complete understanding necessary for confident explanation of what the game is about.  Then make do by teaching the better known predictive rules, which are technical and accurate, but don’t always convey a complete understanding of the big picture.)

A serious problem with the U(1) symmetry is that you can’t really ever get single leptons in nature.  They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1); examples: in beta decay, you get a beta particle and an antineutrino, while in pair production you may get a positron and an electron.

This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post.  Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces).  Because a version of SU(2) with massless gauge bosons has infinite range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive gauge bosons is ideal for describing the short range weak force in left-handed quark-antiquark pairs (mesons).

The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism).  The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field.  Presumably the combined Higgs boson when coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that in the absence of Higgs bosons they can interact equally with left and right handed particles.


To summarise, quarks are probably electron and antielectrons (positrons) with the symmetry transformation modifications you get from close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short range interactions).

Downquarks are electrons trapped in mesons (pairs of quarks containing quark-antiquark, bound together by the SU(2) weak nuclear force, so they have short lifetimes and undergo beta radioactive decay) or baryons, which are triplets of quarks bound by the SU(3) strong nuclear force.  The confinement of electrons in a small space reduces their electric charge because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum, which shields part of the electric field.  Because this shielding effect is boosted, the electron charge per electron observed at long range is reduced to a fraction.  The idealistic model is 3 electrons confined in close proximity, giving a polarized vacuum 3 times stronger, which reduces the observable charge per electron by a factor of 3, giving the e/3 downquark charge.  This is a bit too simplistic of course, because in reality you get mainly stable combinations like protons (2 upquarks and 1 downquark).  The energy lost from the electric charge, due to the absorption in the polarized vacuum, powers short-ranged nuclear forces which bind the quarks in mesons and hadrons together.

Upquarks would seem to be trapped positrons.  This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus.  So one complete hydrogen atom is formed by 2 electrons and 2 positrons.  This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks.  Only particles with left-handed Weyl spin undergo weak force interactions.

Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R is a right handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right handed version presumably does not couple to massive bosons. The result here is that the right handed version, SU(2)R, produces only mass-less particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces. If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier.

The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don’t acquire mass at low energy.

This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked (I don’t think it has been checked at all yet)? Presumably this is linked to ongoing efforts to see evidence for a Higgs boson. The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell’s equations at low energy and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.

Spherical symmetry of Hubble recession

I’d like to thank Bee and others at the Backreaction blog for patiently explaining to me that a statement that radial distance elements are equal for the Hubble recession in all directions around us,

H = dv/dr = dv/dx = dv/dy = dv/dz

so that, for the age of the universe, t = 1/H:

1/H = dr/dv = dx/dv = dy/dv = dz/dv

and hence, multiplying through by dv:

dv/H = dr = dx = dy = dz

for spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally ‘wrong’ to people who are unaccustomed to cosmology, where the elementary equations for spherical geometry and for metrics over non-symmetric spatial dimensions don’t apply.  Hopefully, ‘critics’ will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.

For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dz is total rubbish, and is ‘disproved’ by the existence of metrics and non-symmetrical spherical geometrical equations.  They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observe around us.

Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe.  If the universe were at critical density and being gravitationally slowed down, with no cosmological constant to offset this gravity by providing a repulsive long-range force and an outward acceleration to cancel out the gravitational inward deceleration (i.e., the mainstream belief until 1998), then the age of the universe would be (2/3)/H, where 2/3 is the compensation factor for gravitational retardation.
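To put rough numbers on these two age estimates, here is a minimal sketch assuming a representative H of 70 km/s/Mpc (that numerical value is my illustrative assumption, not taken from the text):

```python
# Age estimates from the Hubble parameter: 1/H for no deceleration,
# (2/3)/H for critical-density matter-dominated expansion.
MPC_M = 3.0857e22   # metres per megaparsec
YEAR_S = 3.156e7    # seconds per year (approximate)

H0 = 70e3 / MPC_M   # assumed 70 km/s/Mpc, converted to s^-1

age_no_decel = 1 / H0 / YEAR_S / 1e9         # in Gyr
age_critical = (2 / 3) / H0 / YEAR_S / 1e9   # in Gyr

print(f"1/H     = {age_no_decel:.1f} Gyr")   # ~14.0 Gyr
print(f"(2/3)/H = {age_critical:.1f} Gyr")   # ~9.3 Gyr
```

So the gravitational-retardation factor of 2/3 is the difference between an age of roughly 14 billion years and roughly 9.3 billion years, which is why the 1998 evidence matters.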

However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called ‘dark energy’ giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not), which is substantially more predictive and less ad hoc, since it was predicted via Electronics World Oct. 1996, years before being confirmed by observation (see comment 11 on the previous post).

Therefore, let’s use 1/H as the age of the universe, time!  Then we find:

1/H = dr/dv = dx/dv = dy/dv = dz/dv.

This proves that dr/dv = dx/dv = dy/dv = dz/dv.

Now multiply this out by dv, and what do you get?  You get:

dr = dx = dy = dz.

As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of numerators means that the denominators are similarly equal: dr = dx = dy = dz.  This is fact, not an opinion or guess.

Fig 2 - why dr = dx = dy = dz in the Hubble law v/r = H or dv/dr = H

Fig. 2: Illustration of the reason why the Hubble law H = dv/dr = dv/dx = dv/dy = dv/dz, where because of the isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv, so the denominators are all equal to each other too: dr = dx = dy = dz.  Beware everyone, this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)!

So if your textbook has a formula which ‘contradicts’ dr = dx = dy = dz, or if you think that dr = dx = dy = dz should in your opinion be replaced by a metric with the squares of line elements all added up, or with a general formula for spherical geometry which applies to situations where the recession would vary with direction, then you are wrong.  As one commentator on this blog has said (I don’t agree with most of it), it is true that new ideas which have not been investigated before often look ‘silly’.  People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them, are not “critics”.  They are not criticising the work; they are criticising their own misunderstandings.  So any ridicule and character assassinations resulting should be taken with a large pinch of salt.  It’s best to try to see the funny side when this occurs!

One of the very interesting things about dr = dx = dy = dz is what you get for time dimensions.  Because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there are equivalent time-like dimensions for each of the spatial dimensions.  This makes spacetime easier to understand and allows a new unification scheme!  The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical dimensions in time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter.  Surely this contradicts general relativity?  No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square.  To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:

(dr)/c = (dx)/c = (dy)/c = (dz)/c

which can be written as:

dt_r = dt_x = dt_y = dt_z

So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal!  This is why we only need one time to describe the expansion of the universe.  If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions.  Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic!  This is quite a surprising result, as some hostility to this new idea from traditionalists shows.

But the three time dimensions which are usually hidden by this isotropy are vitally important!  Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need.  It was censored off arXiv after being published in a peer-reviewed physics journal: “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, which can be downloaded here.  The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions of matter are bound together but are contractable in general relativity.  For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.

In addition, as was shown in detail in the previous post, this sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars.  When we rewrite the Hubble recession in terms of time rather than distance, we get an acceleration which, by Newton’s 2nd empirical law of motion (F = ma), implies an outward force of receding matter; this in turn implies, by Newton’s 3rd empirical law of motion, an inward reaction force which – it turns out – is the mechanism behind gravity:

‘To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v·dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR·d(HR)/dR = H²R.’

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength.  They are radical.  Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’

– Hermann Minkowski, 1908.

Deriving the relationship between the FitzGerald contraction and the gravitational contraction

Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 – v²/c²)^(1/2), gravity contracts lengths by the amount (1/3)MG/c² = 1.5 mm for the contraction of Earth’s radius by gravity.

It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in ‘special relativity’ (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity.

To start with, recall Dr Love’s derivation of Kepler’s law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post.

This is very simple.  If a body’s average kinetic energy in space (outside the atmosphere) is such that it has just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly.  If it has just under that velocity, it will eventually fall back to Earth and so it will not orbit endlessly, just as is the case if the average velocity is too high.  Like Goldilocks and the porridge, it is very fussy.

The average orbital velocity must exactly match the escape velocity – and be neither more nor less than the escape velocity – in order to achieve a stable orbit.

Dr Love points out the consequences: a body in orbit must have an average velocity equal to escape velocity v = (2GM/r)^(1/2), which implies that its kinetic energy must be equal to its gravitational potential energy:

kinetic energy, E = (1/2)mv² = (1/2)m((2GM/r)^(1/2))² = mMG/r.
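The equality is easy to verify numerically for the Earth: a test mass moving at escape velocity has kinetic energy exactly equal to mMG/r.  The constants below are standard values; the test mass m is arbitrary since it cancels from the comparison.

```python
# Check that kinetic energy at escape velocity equals the gravitational
# potential energy mMG/r, using Earth values and an arbitrary test mass.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
r = 6.371e6        # radius of the Earth, m
m = 1.0            # test mass, kg (cancels from the comparison)

v_escape = (2 * G * M / r) ** 0.5   # v = (2GM/r)^(1/2)
kinetic = 0.5 * m * v_escape ** 2   # E = (1/2) m v^2
potential = m * M * G / r           # E = mMG/r

assert abs(kinetic - potential) / potential < 1e-12
```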

This permits him to derive Kepler’s law.  It is also very important because it explains the relationship for stability of orbits:

average kinetic energy = gravitational potential energy

Einstein’s equivalence of inertial and gravitational mass in E = mc² then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy to derive the equivalence principle of general relativity, which states that the inertial mass is equal to the gravitational mass, at least for orbiting bodies.  Another physically justified argument is that gravitational potential energy is the gravity energy that would be released in the case of collapse.  If you allowed the object to fall and thereby give up that gravitational potential energy, the latter energy would be converted into kinetic energy of the object.  This is why the two energies are equivalent.  It’s a rigorous argument!

Now test it further.  Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity v, where objects are compressed by the ratio (1 – v²/c²)^(1/2).  Using the equivalence of average kinetic energy to gravitational potential energy, you can place the escape velocity v = (2GM/r)^(1/2) into the contraction formula, and expand the result to two terms using the binomial expansion.  You find that the radius of a gravitational mass would be reduced by the amount GM/c² = 4.5 mm for Earth’s radius, which is three times as big as Feynman’s formula for the gravitational compression of Earth’s radius.  The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (the direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius of (1/3)GM/c² = 1.5 mm!  (There is also a rigorous mathematical discussion of this on the page here, if you have the time to scroll down and find it.)
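The arithmetic of this argument can be sketched numerically: insert v² = 2GM/r into the first-order binomial expansion (1 – v²/c²)^(1/2) ≈ 1 – v²/(2c²), so the fractional shortening is GM/(rc²) and the radial reduction is GM/c², roughly 4.4 mm with the standard constants used here (the text rounds this to 4.5 mm); a third of it gives Feynman’s (1/3)GM/c² ≈ 1.5 mm.

```python
# Sketch of the contraction estimate: escape velocity squared inserted
# into the binomial expansion of the FitzGerald-Lorentz factor gives a
# radial shortening of GM/c^2; a third of that is Feynman's figure.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
r = 6.371e6     # Earth radius, m
c = 2.998e8     # speed of light, m/s

v2 = 2 * G * M / r                         # escape velocity squared
# binomial expansion: (1 - v^2/c^2)^(1/2) ~= 1 - v^2/(2 c^2)
fractional_shortening = v2 / (2 * c ** 2)  # = GM/(r c^2)
delta_r = r * fractional_shortening        # = GM/c^2, in metres

print(delta_r * 1000)       # ~4.4 mm
print(delta_r * 1000 / 3)   # ~1.5 mm, the (1/3)GM/c^2 result
```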

Unusually, Feynman makes a confused mess of this effect in the relevant volume of his Lectures on Physics (ch. 42, p. 6).  He correctly gives his equation 42.3 for the excess radius as being equal to the predicted radius minus the measured radius (i.e., the equation makes the predicted radius the bigger one), but then on the same page he falsely and confusingly writes in the text: ‘… actual radius exceeded the predicted radius …’ (i.e., the text claims that the predicted radius is the smaller).

Professor Jacques Distler’s philosophical and mathematical genius

‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’

– Professor Jacques Distler, Musings blog post on the Role of Rigour.

Jacques also summarises the issues for theoretical physics clearly in a comment there:

  1. ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
  2. ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
  3. ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’