Correcting the U(1) error in the Standard Model of particle physics

Introduction 

Fundamental particles in the SU(2)xU(1) part of the Standard Model

Above: the Standard Model particles in the existing SU(2)xU(1) electroweak symmetry group (a high-quality PDF version of this table can be found here).  The complexity of chiral symmetry – the fact that only particles with left-handed spins (Weyl spinors) experience the weak force – is shown by the different effective weak charges for left- and right-handed particles of the same type.  My argument, with evidence to back it up in this post and previous posts, is that there are no real ‘singlets’: all the particles are doublets apart from the gauge bosons (W/Z particles), which are triplets.  This causes a major change to the SU(2)xU(1) electroweak symmetry.  Essentially, the U(1) group, which is the source of singlets (i.e., particles shown in blue type in this table which may have weak hypercharge but have no weak isotopic charge), is removed!  An SU(2) symmetry group then becomes the source of electric charge and weak hypercharge, as well as keeping its existing role in the Standard Model as a descriptor of isotopic spin.  It modifies the role of the ‘Higgs bosons’: some such particles are still required to give mass, but the mainstream electroweak symmetry breaking mechanism is incorrect.

There are 6 rather than 4 electroweak gauge bosons: the same 3 massive weak bosons as before, plus 2 new charged massless gauge bosons in addition to the uncharged massless ‘photon’, B.  The 3 massless gauge bosons are all massless counterparts to the 3 massive weak gauge bosons.  The ‘photon’ is not the gauge boson of electromagnetism because, being neutral, it can’t represent a charged field.  Instead, the ‘photon’ gauge boson is the graviton, while the two charged massless gauge bosons are the exchange radiation (gauge bosons) of electromagnetism.  This allows quantitative predictions and the resolution of existing electromagnetic anomalies (which are usually just censored out of discussions).

It is the U(1) group which falsely introduces singlets.  All Standard Model fermions are really doublets: if they are bound by the weak force (i.e., left-handed Weyl spinors) then they are doublets in close proximity.  If they are right-handed Weyl spinors, they are doublets mediated by only the strong, electromagnetic and gravitational forces, so for leptons (which don’t feel the strong force) the individual particles in a doublet can be located relatively far from one another (the electromagnetic and gravitational interactions are both long-range forces).  The beauty of this change to the understanding of the Standard Model is that gravitation automatically pops out in the form of massless neutral gauge bosons, while electromagnetism is mediated by two massless charged gauge bosons, which gives a causal mechanism that predicts the quantitative coupling constants for gravity and electromagnetism correctly.  Various other vital predictions are also made by this correction to the Standard Model.

Fundamental vector boson charges of SU(2) 

Above: the fundamental vector boson charges of SU(2).  For any particle which has effective mass, there is a black hole event horizon radius of 2GM/c².  If there is a strong enough electric field at this radius for pair production to occur (in excess of Schwinger’s threshold of 1.3 × 10^18 V/m), then pairs of virtual charges are produced near the event horizon.  If the particle is positively charged, the negatively charged particles produced at the event horizon will fall into the black hole core, while the positive ones will escape as charged radiation (see Figures 2, 3 and particularly 4 below for the mechanism for propagation of massless charged vector boson exchange radiation between charges scattered around the universe).  If the particle is negatively charged, it will similarly be a source of negatively charged exchange radiation (see Figure 2 for an explanation of why the charge is never depleted by absorbing radiation from nearby pair production of opposite sign to itself; there is simply an equilibrium of exchange of radiation between similar charges which cancels out that effect).  In the case of a normal (large) black hole or a neutral dipole charge (one with equal and opposite charges, and therefore neutral as a whole), as many positive as negative pair-production charges can escape from the event horizon, and these will annihilate one another to produce neutral radiation, which produces the right force of gravity.  Figure 4 proves that this gravity force is about 10^40 times weaker than electromagnetism.  Another earlier post calculates the Hawking black hole radiation rate and proves it creates the force strength involved in electromagnetism.
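As a quick numerical check of the scales invoked above, here is a minimal Python sketch (my own illustration, using standard physical constants; none of these numbers appear in the original figure) which computes the event horizon radius 2GM/c² for an electron-mass core, the Coulomb field at that radius, and the radius at which the field falls to Schwinger’s pair-production threshold:

    # Event horizon radius 2GM/c^2 for an electron-mass core, and the
    # Coulomb field strength compared with Schwinger's threshold.
    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8           # speed of light, m/s
    m_e = 9.109e-31       # electron mass, kg
    e = 1.602e-19         # electron charge, C
    k = 8.988e9           # Coulomb constant, N m^2 C^-2
    E_schwinger = 1.3e18  # pair-production threshold, V/m

    r_h = 2 * G * m_e / c**2
    print(f"Event horizon radius 2GM/c^2 = {r_h:.2e} m")

    E_horizon = k * e / r_h**2
    print(f"Coulomb field at that radius = {E_horizon:.2e} V/m")
    print(f"Exceeds Schwinger threshold? {E_horizon > E_schwinger}")

    # Radius at which the classical Coulomb field falls to the threshold:
    r_max = (k * e / E_schwinger) ** 0.5
    print(f"Pair production region extends out to about {r_max:.2e} m")

The event horizon radius comes out vastly smaller than the pair-production region, so on this model there is a large zone of polarizable vacuum around the black-hole-scale core.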

(For a background to the elementary basics of quantum field theory and quantum mechanics, like the Schroedinger and Dirac equations and their consequences, see the earlier post on The Physics of Quantum Field Theory.  For an introduction to symmetry principles, see the previous post.)

The SU(2) symmetry can model electromagnetism (in addition to isospin) because it models two types of charges, hence giving negative and positive charges without the wrong method U(1) uses (where it specifies that there are only negative charges, so positive ones have to be represented by negative charges going backwards in time).  In addition, SU(2) gives 3 massless gauge bosons: two charged ones (which mediate the charge in electric fields) and one neutral one (which is the spin-1 graviton, that causes gravity by pushing masses together).  In addition, SU(2) describes doublets, matter-antimatter pairs.  We know that electrons are not produced individually, only in lepton-antilepton pairs.  The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long-range electromagnetism instead of a short-range force.

Quantum field theory, i.e., the standard model of particle physics, is based mainly on experimental facts, not speculation.  The symmetries of baryons give SU(3) symmetry, those of mesons give SU(2) symmetry.  That’s experimental particle physics.  The problem in the standard model SU(3)xSU(2)xU(1) is the last component, the U(1) electromagnetic symmetry.  In SU(3) you have three charges (coded red, blue and green) and form triplets of quarks (baryons) bound by 3² − 1 = 8 charged gauge bosons mediating the strong force.  For SU(2) you have two charges (two isospin states) and form doublets, i.e., quark-antiquark pairs (mesons) bound by 2² − 1 = 3 gauge bosons (one positively charged, one negatively charged and one neutral).
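The gauge boson counting used here is just the dimension of the group: SU(N) has N² − 1 generators, while U(1) has a single generator.  A trivial sketch:

    # SU(N) has N^2 - 1 generators, i.e. N^2 - 1 gauge bosons:
    for N, name in [(2, "SU(2), weak isospin"), (3, "SU(3), colour")]:
        print(f"{name}: {N**2 - 1} gauge bosons")
    # U(1) has a single generator, hence just one (uncharged) gauge boson.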

One problem comes when electromagnetism is represented by U(1) and added to SU(2) to form the electroweak unification, SU(2)xU(1).  This means that you have to add a Higgs field which breaks the SU(2)xU(1) symmetry at low energy, by giving masses (at low energy only) to the 3 gauge bosons of SU(2).  At high energy, the masses of those 3 gauge bosons must disappear, so that they are massless, like the photon assumed to mediate the electromagnetic force represented by U(1).  The required Higgs field adds mass in the right way for electroweak symmetry breaking to work in the Standard Model, but it adds complexity and isn’t very predictive.

The other, related, problem is that SU(2) only acts on left-handed particles, i.e., particles whose spin is described by a left-handed Weyl spinor.  U(1) has only one type of electric charge: negative charge, carried by the electron.  Feynman represents positrons in the scheme as electrons going backwards in time, and this makes U(1) work, but it has many problems, and a massless version of SU(2) is the correct electromagnetic-gravitational model.

So the correct model for electromagnetism is really SU(2) which has two types of electric charge (positive and negative) and acts on all particles regardless of spin, and is mediated by three types of massless gauge bosons: negative ones for the fields around negative charges, positive ones for positive fields, and neutral ones for gravity.

The question then is, what is the corrected Standard Model?  If we delete U(1) do we have to replace it with another SU(2) to get SU(3)xSU(2)xSU(2), or do we just get SU(3)xSU(2) in which SU(2) takes on new meaning, i.e., there is no symmetry breaking?

Assume the symmetry group of the universe is SU(3)xSU(2).  That would mean that the new SU(2) interpretation has to do all the work of SU(2)xU(1) in the existing Standard Model, and more.  The U(1) part of SU(2)xU(1) represented both electromagnetism and weak hypercharge, while SU(2) represented weak isospin.

We need to dump the Higgs field as a source of symmetry breaking, and replace it with a simpler mass-giving mechanism that only gives mass to left-handed Weyl spinors; the electroweak symmetry breaking problem then disappears.  We have to use SU(2) to represent isospin, weak hypercharge, electromagnetism and gravity.  Can it do all that?  Can the Standard Model be corrected by simply removing U(1) to leave SU(3)xSU(2), with the SU(2) producing 3 massless gauge bosons (for electromagnetism and gravity) as well as 3 massive gauge bosons (for weak interactions)?  Can we, in other words, remove the Higgs mechanism for electroweak symmetry breaking and replace it with a simpler mechanism in which the short range of the three massive weak gauge bosons distinguishes electromagnetism (and gravity) from the weak force?  The mass-giving field only gives mass to gauge bosons that interact with left-handed particles.

What is unnerving is that this compression means that one SU(2) symmetry is generating a lot more physics than in the Standard Model.  But in the Standard Model, U(1) represented both electric charge and weak hypercharge, so I don’t see any reason why SU(2) shouldn’t represent weak isospin, electromagnetism/gravity and weak hypercharge.  The main thing is that because SU(2) generates the 3 massless gauge bosons, only half of which need to have mass added to them to act as weak gauge bosons, it has exactly the right field mediators for the forces we require.  If it doesn’t work, the alternative replacement for the Standard Model is SU(3)xSU(2)xSU(2), where the first SU(2) is the isospin symmetry acting on left-handed particles and the second SU(2) is electrogravity.

Mathematical review

Following from the discussion in previous posts, it is time to correct the errors of the Standard Model, starting with the U(1) phase or gauge invariance.  The use of unitary group U(1) for electromagnetism and weak hypercharge is in error as shown in various ways in the previous posts here, here, and here.

The maths is based on a type of continuous group defined by Sophus Lie in 1873.  Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together.  It was the representation theory of these groups that Weyl was studying.

‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane.  Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point.  This is a symmetry of the plane.  The thing that is invariant is the distance between a point on the plane and the central point.  This is the same before and after the rotation.  One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point.  There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.


Argand diagram showing rotation by an angle on the complex plane.   Illustration credit: based on Fig. 3.1 in Not Even Wrong.

‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one.  If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers).  As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).

‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions].  Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave.  This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees.  Because of this analogy, U(1) symmetry transformations are often called phase transformations. …

‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N).  It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest.  The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denoted SU(N).  Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.

‘In the case N = 1, SU(1) is just the trivial group with one element.  The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthogonal transformations of three (real) variables … group SO(3).  The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
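Woit’s last point is easy to check numerically.  The sketch below (my illustration, not from the book) uses the standard SU(2) element for a rotation by angle theta about the z-axis, exp(-i*theta*sigma_z/2), which is diagonal in the usual basis; a 360-degree rotation gives -I rather than +I, which is the sense in which SU(2) ‘doubles’ SO(3):

    import numpy as np

    def su2_z_rotation(theta):
        # SU(2) element exp(-i*theta*sigma_z/2), diagonal for the z-axis.
        return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

    print(np.round(su2_z_rotation(2 * np.pi), 6))  # -> -identity after 360 degrees
    print(np.round(su2_z_rotation(4 * np.pi), 6))  # -> +identity only after 720 degrees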

Hermann Weyl and Eugene Wigner showed how the representation theory of Lie groups describes the symmetries of quantum mechanics and quantum field theory.  In 1954, Chen Ning Yang and Robert Mills developed a theory of spin-1 boson mediated interactions, in which the emission or absorption of the boson changes the quantum state of the matter by inducing a rotation in a Lie group symmetry.  The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction.  Gerard ’t Hooft and Martinus Veltman showed in 1971-2 that Yang-Mills theory is renormalizable, so the problem of running couplings having no limits can be cut off at effective limits to make the theory work (Yang-Mills theories use non-commutative algebra, i.e., non-Abelian gauge groups).  The photon version of this gauge theory is U(1).  Equivalent Yang-Mills interaction theories of the strong force, SU(3), and the weak force isospin group, SU(2), in conjunction with the U(1) force result in the symmetry group SU(3) x SU(2) x U(1), which is the Standard Model.  Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.

Dr Woit’s Not Even Wrong at pages 98-100 summarises the problems in the Standard Model.  While SU(3) ‘has the beautiful property of having no free parameters’, the SU(2)xU(1) electroweak symmetry does introduce two free parameters: alpha and the mass of the speculative ‘Higgs boson’.  However, from solid facts, alpha is not a free parameter but is the shielding ratio of the bare core charge of an electron, caused by virtual fermion pairs being polarized in the vacuum and absorbing energy from the field to create short range forces:

“This shielding factor of alpha can actually be obtained by working out the bare core charge (within the polarized vacuum) as follows.  Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is of the order of h-bar.  The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct.  Hence the product of momentum and distance, px = (mc)(ct) = Et, where E is energy (Einstein’s mass-energy equivalence).  Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post).  In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime.  The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology.  Now for the slightly clever bit:

px = h-bar implies (remembering p = mc and E = mc²):

x = h-bar/p = h-bar/(mc) = h-bar*c/E,

so E = h-bar*c/x.

Using the classical definition of energy as force times distance (E = Fx):

F = E/x = (h-bar*c/x)/x = h-bar*c/x².

“So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law!  This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs.  So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of alpha.  The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge.  All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and shield the electron’s charge any further.

“One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance.  However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx.  For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved.  This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.

“It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)

“Experimental evidence:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

“In particular:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.”
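The comparison made earlier in the quoted derivation is easy to reproduce.  A minimal Python sketch (standard constants; the point is that the x² cancels, so the ratio of the ‘bare core’ force h-bar*c/x² to the observed Coulomb force e²/(4*pi*epsilon_0*x²) is distance-independent):

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J s
    c = 2.998e8         # speed of light, m/s
    e = 1.602e-19       # observed electron charge, C
    eps0 = 8.854e-12    # vacuum permittivity, F/m

    # F_bare = hbar*c/x^2 and F_Coulomb = e^2/(4*pi*eps0*x^2); x^2 cancels:
    ratio = (hbar * c) / (e**2 / (4 * math.pi * eps0))
    print(f"F_bare / F_Coulomb = {ratio:.3f}")   # ~137.04, i.e. 1/alpha

This is the shielding factor of 137.036… referred to above.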

As for the mass of the ‘Higgs boson’ that gives mass to particles, there is evidence concerning its value.  On page 98 of Not Even Wrong, Dr Woit points out:

‘Another related concern is that the U(1) part of the gauge theory is not asymptotically free, and as a result it may not be completely mathematically consistent.’

He adds that it is a mystery why only left-handed particles experience the SU(2) force, and on page 99 points out that: ‘the standard quantum field theory description for a Higgs field is not asymptotically free and, again, one worries about its mathematical consistency.’

Another thing is that the 9 masses of quarks and leptons have to be put into the Standard Model by hand, together with 4 mixing angles to describe the interaction strength of the Higgs field with different particles, adding 13 numbers to the Standard Model which you want to see explained and predicted.

Important symmetries:

  1. ‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family: this is described by the unitary group U(1).  U(1) deals with just 1 type of charge (negative charge, i.e., it ignores positive charge, which is treated as a negative charge travelling backwards in time – Feynman’s fatally flawed model of a positron or anti-electron) and with solitary particles (which don’t actually exist, since particles are always produced and annihilated as pairs).  U(1) is therefore false when used as a model for electromagnetism, as we will explain in detail in this post.  U(1) also represents weak hypercharge, which is similar to electric charge.
  2. ‘isospin rotation’ would switch the two quarks of a given family, or would switch the lepton and neutrino of a given family: this is described by the special unitary group SU(2), i.e., rotations in imaginary space with 2 complex co-ordinates generated by 3 operations: the W^+, W^- and Z^0 gauge bosons of the weak force.  These massive weak bosons only interact with left-handed particles (left-handed Weyl spinors).  SU(2) describes doublets, matter-antimatter pairs such as mesons and (as this blog post is arguing) lepton-antilepton charge pairs in general (the electric charge mechanism as well as weak isospin).
  3. ‘colour rotation’ would change quarks between colour charges (red, blue, green): this is described by the special unitary group SU(3), i.e., rotations in imaginary space with 3 complex co-ordinates generated by 8 operations, the strong force gluons.  There is also the concept of ‘flavor’, referring to the different types of quarks (up and down, strange and charm, top and bottom).  SU(3) describes triplets of charges, i.e., baryons.

U(1) is a relatively simple phase-transformation symmetry which has a single group generator, leading to a single electric charge.  (Hence, you have to treat positive charges as electrons moving backwards in time to make it incorporate antimatter!  This is false because things don’t travel backwards in time; it violates causality, because we can use pair-production – e.g. electron and positron pairs created by the shielding of gamma rays from cobalt-60 using lead – to create positrons and electrons at the same time, when we choose.)  Moreover, it also only gives rise to one type of massless gauge boson, which means it fails to predict the strength of electromagnetism and its causal mechanism (attractions between dissimilar charges, repulsions between similar charges, etc.).  SU(2) must be used to model the causal mechanism of electromagnetism and gravity: two charged massless gauge bosons mediate electromagnetic forces, while the neutral massless gauge boson mediates gravitation.  Both the detailed mechanism for the forces and the strengths of the interactions (as well as various other predictions) arise automatically from SU(2) with massless gauge bosons replacing U(1).

Fig. 1 - The imaginary U(1) interaction of a photon with an electron, which is fine for photons interacting with electrons, but doesn't adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!

Fig. 1: The imaginary U(1) gauge invariance of quantum electrodynamics (QED) simply consists of a description of the interaction of a photon with an electron (e is the coupling constant, the effective electric charge after allowing for shielding by the polarized vacuum if the interaction is at high energy, i.e., above the IR cutoff).  When the electron’s field undergoes a local phase change, a gauge field quanta called a ‘virtual photon’ is produced, which keeps the Lagrangian invariant; this is how gauge symmetry is supposed to work for U(1).

This doesn’t adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!  It’s just too simplistic: the moving electron is viewed as a current, and the photon (field phase) affects that current by interacting with the electron.  There is nothing wrong with this simple scheme as far as it goes, but it has nothing to do with the detailed causal, predictive mechanism for electromagnetic attraction and repulsion, and to make this virtual-photon-as-gauge-boson idea work for electromagnetism, you have to add two extra polarizations to the normal two polarizations (electric and magnetic field vectors) of ordinary photons.  You might as well replace the photon by two charged massless gauge bosons, instead of adding two extra polarizations!  You have so much more to gain from using the correct physics than from adding extra epicycles to a false model to ‘make it work’.

This is Feynman’s explanation in his book QED, Penguin, 1990, p120:

‘Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)’

The gauge bosons of the mainstream electromagnetic model U(1) are supposed to consist of photons with 4 polarizations, not 2.  However, U(1) has only one type of electric charge: negative charge.  Positive charge is antimatter and is not included.  But in the real universe there is as much positive as negative charge around!

We can see this error of U(1) more clearly when considering the SU(3) strong force: the 3 in SU(3) tells us there are three types of color charges, red, blue and green.  The anti-charges are anti-red, anti-blue and anti-green, but these anti-charges are not counted in the 3.  Similarly, U(1) only contains one electric charge, negative charge.  To make it a reliable, complete and predictive theory, it should contain 2 electric charges (positive and negative) and 3 gauge bosons: positively charged massless photons for mediating positive electric fields, negatively charged massless photons for mediating negative electric fields, and neutral massless photons for mediating gravitation.  The way this correct SU(2) electrogravity unification works was clearly explained in Figures 4 and 5 of the earlier post: https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

Basically, photons are neutral because if they were charged as well as being massless, the magnetic field generated by their motion would produce infinite self-inductance.  The photon contains two fields (a positive electric field and a negative electric field) which each produce magnetic fields with opposite curls, cancelling one another and allowing the photon to propagate:

Fig. 2 - Mechanism of gauge bosons for electromagnetism

Fig. 2: charged gauge boson mechanism for electromagnetism, as illustrated by the Catt-Davidson-Walton work in charging up transmission lines like capacitors and checking what happens when you discharge the energy through a sampling oscilloscope.  They found evidence, discussed in detail in previous posts on this blog, that the existence of an electric field is represented by two opposite-travelling (gauge boson radiation) light velocity field quanta: while overlapping, the electric fields of each add up (reinforce) but the magnetic fields disappear because the curls of the magnetic field components cancel once there is equilibrium of the exchange radiation going along the same path in opposite directions.  Hence, electric fields are due to charged, massless gauge bosons with Poynting vectors, being exchanged between fermions.  Magnetic fields are cancelled out in certain configurations (such as that illustrated) but in other situations where you send two gauge bosons of opposite charge through one another (in the figure the gauge bosons modelled by electricity have the same charge), you find that the electric field vectors cancel out to give an electrically neutral field, but the magnetic field curls can then add up, explaining magnetism.
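The bookkeeping in Fig. 2 can be shown in a few lines.  This is only a sketch of the model’s accounting (the assumption, stated in the caption, is that each light-velocity field step carries a magnetic field B = E/c whose curl handedness is set by its direction of travel):

    c = 1.0                       # normalised propagation speed
    E_step = 1.0                  # normalised electric field of one TEM step

    B_forward = +E_step / c       # step travelling one way along the line
    B_backward = -E_step / c      # equal step travelling the opposite way

    print("E_total =", E_step + E_step)         # electric fields reinforce: 2.0
    print("B_total =", B_forward + B_backward)  # magnetic field curls cancel: 0.0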

The evidence for Fig. 2 is presented near the end of Catt’s March 1983 Wireless World article called ‘Waves in Space’ (typically unavailable on the internet, because Catt won’t make available the most useful of his papers for free): when you charge up x metres of cable to v volts, you do so at light speed, and there is no mechanism for the electromagnetic energy to slow down when the energy enters the cable.  The nearest page Catt has online about this is here: the battery terminals of a v volt battery are indeed at v volts before you connect a transmission line to them, but that’s just because those terminals have been charged up by field energy which is flowing in all directions at light velocity, so only half of the total energy, v/2 volts, is going one way and half is going the other way.  Connect anything to that battery and the initial (transient) output at light speed is only half the battery potential; the full battery potential only appears in a cable connected to the battery when the energy has gone to the far end of the cable at light speed and reflected back, adding to further in-flowing energy from the battery on the return trip, and charging the cable to v/2 + v/2 = v volts.

Because electricity is so fast (light speed for the insulator), early investigators like Ampere and Maxwell (who candidly wrote in the 1873 edition of his Treatise on Electricity and Magnetism, 3rd ed., Article 574: ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second. …’) had no idea whatsoever of this crucial evidence, which shows what electricity is all about.

So when you discharge the cable, instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get just what is predicted by Fig. 2: a pulse of v/2 volts taking 2x/c seconds to exit.  In other words, the half of the energy already moving towards the exit end exits first.  That gives a pulse of v/2 volts lasting x/c seconds.  Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy.  This gives the second half of the output: another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse.  Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds.  This is experimental fact.  It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals).  Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a flaw inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work.  The Catt, Davidson and Walton history is summarised here.
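The discharge experiment described above is simple enough to express as a function.  A minimal sketch (idealised lossless line with instant rise times, i.e., the Heaviside simplification criticised above; the propagation speed is taken as the speed of light for the insulator):

    def discharge_pulse(v, x, c=3.0e8):
        """Idealised discharge of a line of length x metres charged to v
        volts: the static charge is modelled as two v/2 steps travelling
        in opposite directions, so the output is v/2 volts for 2x/c
        seconds (the backward half reflects off the far end and follows
        the forward half out)."""
        amplitude = v / 2
        duration = 2 * x / c
        return amplitude, duration

    amplitude, duration = discharge_pulse(v=10.0, x=30.0)  # 30 m cable at 10 V
    print(f"Output pulse: {amplitude} V for {duration * 1e9:.0f} ns")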

[The original Catt-Davidson-Walton paper can be found here (first page) and here (second page), although it contains various errors.  My discussion of it is here.  For a discussion of the two major awards Catt received for his invention of the first ever practical wafer-scale memory to come to market despite censorship such as the New Scientist of 12 June 1986, p. 35, quoting anonymous sources who called Catt ‘either a crank or visionary’ – a £16 million British government and foreign sponsored 160 MB ‘chip’ wafer back in 1988 – see this earlier post and the links it contains.  Note that the editors of New Scientist are still vandals today.  Jeremy Webb, current editor of New Scientist, graduated in physics and solid state electronics, so he has no good excuse for finding this stuff – physics and electronics – over his head.  The previous editor, Dr Alun M. Anderson, on 2 June 1997 wrote to me the following insult to my intelligence: ‘I’ve looked through the files and can assure you that we have no wish to suppress the discoveries of Ivor Catt nor do we publish only articles from famous people.  You should understand that New Scientist is not a primary journal and does not publish the first accounts of new experiments and original theories. These are better submitted to an academic journal where they can be subject to the usual scientific review.  New Scientist does not maintain the large panel of scientific referees necessary for this review process. I’m sure you understand that science is now a gigantic enterprise and a small number of scientifically-trained journalists are not the right people to decide which experiments and theories are correct. My advice would be to select an appropriate journal with a good reputation and send Mr Catt’s work there. Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.’  Both Catt and I had already sent Dr Anderson abstracts from Catt’s peer-reviewed papers, such as IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 1967, and Proc. IEE, June 1983 and June 1987, as well as a summary of the book Digital Hardware Design by Catt et al., published by Macmillan in 1979.  I wrote again to Dr Anderson with this information, but he never published it; Catt on 9 June 1997 published his response on the internet, which he carbon copied to the editor of New Scientist.  Years later, when Jeremy Webb had taken over, I corresponded with him by email.  The first time Jeremy responded was on an evening in Dec. 2002, and all he wrote was a tirade about his email box being full when writing a last-minute editorial.  I politely replied that time, and then sent him by recorded delivery a copy of the Electronics World January 2003 issue with my cover story about Catt’s latest invention for saving lives.  He never acknowledged it or responded.  When I called the office politely, his assistant was rude and said she had thrown it away unread without him seeing it!  I sent another but yet again, Jeremy wasted time and didn’t publish a thing.  According to the Daily Telegraph, 24 Aug. 2005: ‘Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’  But even when Catt’s stuff was applied to cosmology in Electronics World Aug. 2002 and Apr. 2003, it was still ignored by New Scientist.  Helene Guldberg has written a ‘Spiked Science’ article called Eco-evangelism about Jeremy Webb’s bigoted policies and sheer rudeness, while Professor John Baez has publicised the decline of New Scientist due to the junk they publish in place of solid physics.  To be fair, Jeremy was polite to Prime Minister Tony Blair, however.  I should also add that Catt is extremely rude in refusing to discuss facts.  Just because he has a few new solid facts which have been censored out of mainstream discussion even after peer-reviewed publication, he incorrectly thinks that his vast assortment of more half-baked speculations are equally justified.  For example, he refuses to discuss or co-author a paper on the model here.  Catt does not understand Maxwell’s equations (he thinks that if you simply ignore 18 out of 20 long-hand Maxwell differential equations and show that, when you reduce the number of spatial dimensions from 3 to 1, the remaining 2 equations in one spatial dimension contain two vital constants, that means Maxwell’s equations are ‘shocking … nonsense’, and he refuses to accept that he is talking complete rubbish in this empty argument), and since he won’t discuss physics he is not a general physics authority, although he is an expert in experimental research on logic signals, e.g., his paper in IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 1967.]

Fig. 3 - Coulomb force mechanism for electric charged massless gauge bosons

Fig. 3: Coulomb force mechanism for electric charged massless gauge bosons.  The SU(2) electrogravity mechanism.  Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them.  They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets.  The bullets hitting their backs have relatively smaller impulses since they are coming from large distances, so due to drag effects their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe).  That explains the electromagnetic repulsion physically.  Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides.  The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd.  In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them.  When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges.  This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.

Fig. 4 - Charged gauge bosons mechanism and how the potential adds up

Fig. 4: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) for electromagnetism relative to gravity according to the path-integral Yang-Mills formulation.  For gravity, the gravitons (like photons) are uncharged, so there is no adding up possible.  But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons.  Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves).  Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that it cancels the magnetic fields completely, preventing the self-inductance issue.  Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping.  This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down.  When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can’t be observed.  This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10^80 times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 10^40 times gravity.  Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^40/10^80 = 10^-40 as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in a series circuit.  Since there are around 10^80 randomly distributed charges, electromagnetism as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) is 10^40 times gravity.

You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates.  If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates.  This is because of the geometry of the addition.  Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out.  However, it isn’t, and it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps.  If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero.  But for an individual drunkard’s walk, there is the factual solution that a net displacement does occur.  This is the basis for diffusion.

On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation.  This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
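The square-root rule invoked here is the standard random-walk result, and is easy to verify by Monte Carlo; the sketch below is pure statistics, independent of the physical interpretation being argued:

    import math
    import random

    def rms_displacement(steps, trials=2000):
        # Root-mean-square net displacement of a 1-D random walk of unit
        # steps; theory predicts sqrt(steps).
        total = 0.0
        for _ in range(trials):
            position = sum(random.choice((-1, 1)) for _ in range(steps))
            total += position ** 2
        return math.sqrt(total / trials)

    for n in (100, 400, 1600):
        print(f"N = {n:5d}: rms displacement = {rms_displacement(n):7.1f}"
              f"  (sqrt(N) = {math.sqrt(n):.1f})")

On this statistical picture, 10^80 randomly-signed charges multiply the potential between one pair of charges by sqrt(10^80) = 10^40, which is the electromagnetism/gravity ratio claimed above.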

For some of the many quantitative predictions and tests of this model, see previous posts such as this one.

SU(2), as used in the SU(2)xU(1) electroweak symmetry group, applies only to left-handed particles.  So it’s pretty obvious that half the potential application of SU(2) is being missed out somehow in SU(2)xU(1).

SU(2) is fairly similar to U(1) in Fig. 1 above, except that SU(2) involves 2² − 1 = 3 types of charges (positive, negative and neutral), which (by moving) generate 2 types of charged currents (positive and negative currents) and 1 neutral current (i.e., the motion of an uncharged particle produces a neutral current by analogy to the process whereby the motion of a charged particle produces a charged current), requiring 3 types of gauge boson (W^+, W^- and Z^0).

For weak interactions we need the whole of SU(2)xU(1), because SU(2) models weak isospin by using electric charges as generators, while U(1) is used to represent weak hypercharge, which looks almost identical to Fig. 1 (which illustrates the use of U(1) for quantum electrodynamics).  The SU(2) isospin part of the weak interaction SU(2)xU(1) applies only to left-handed fermions, while the U(1) weak hypercharge part applies to both types of handedness, although the weak hypercharges of left- and right-handed fermions are not the same (see the earlier post for the weak hypercharges of fermions with different spin handedness).

It is interesting that the correct SU(2) symmetry predicts massless versions of the weak gauge bosons (W^+, W^- and Z^0).  Then the mainstream goes to a lot of trouble to make them massive by adding some kind of speculative Higgs field, without considering whether the massless versions really exist as the proper gauge bosons of electromagnetism and gravity.  A lot of the problem is that the self-interaction of charged massless gauge bosons is a benefit in explaining the mechanism of electromagnetism (since two similarly charged electromagnetic energy currents flowing through one another cancel out each other’s magnetic fields, preventing infinite self-inductance, and allowing charged massless radiation to propagate freely so long as it is exchange radiation in equilibrium, with equal amounts flowing from charge A to charge B as from charge B to charge A; see Fig. 5 of the earlier post here).  Instead of seeing how the mutual interactions of charged gauge bosons allow exchange radiation to propagate freely without complexity, the mainstream opinion is that this might (it can’t) cause infinities because of the interactions.  Therefore, the mainstream (false) consensus is that weak gauge bosons have to have a great mass, simply in order to remove an enormous number of unwanted complex interactions!  They simply are not looking at the physics correctly.

U(2) and unification

Dr Woit has some ideas on how to proceed with the Standard Model: ‘Supersymmetric quantum mechanics, spinors and the standard model’, Nuclear Physics, v. B303 (1988), pp. 329-42; and ‘Topological quantum theories and representation theory’, Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45. He summarises the approach in http://www.arxiv.org/abs/hep-th/0206135:

‘… [the theory] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c(4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as Λ*(C^n) applied to a ‘vacuum’ vector.

‘Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons… A generation of quarks has the same transformation properties except that one has to take the ‘vacuum’ vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero. The above comments are … just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles…’

The SU(3) strong force (colour charge) gauge symmetry

The SU(3) strong interaction – which has 3 color charges (red, blue, green) and 3² − 1 = 8 gauge bosons – is again virtually identical to the U(1) scheme in Fig. 1 above (except that there are 3 charges and 8 spin-1 gauge bosons called gluons, instead of the alleged 1 charge and 1 gauge boson in the flawed U(1) model of QED, and the 8 gluons carry color charge, whereas the photons of U(1) are uncharged).  The SU(3) symmetry is actually correct because it is an empirical model based on observed particle physics, and the fact that the gauge bosons of SU(3) do carry colour makes it a proper causal model of short range strong interactions, unlike U(1).  For an example of the evidence for SU(3), see the illustration and history discussion in this earlier post.

SU(3) is based on an observed (empirical, experimentally determined) particle physics symmetry scheme called the eightfold way.  This is pretty solid experimentally, and summarised all the high energy particle physics experiments from about the end of WWII to the late 1960s.  SU(2) describes the mesons, which were originally studied in natural cosmic radiation (pions were the first mesons discovered, found in cosmic radiation from outer space in 1947, at Bristol University).  A type of meson, the pion, is the long-range mediator of the strong nuclear force between nucleons (neutrons and protons), which normally prevents the nuclei of atoms from exploding under the immense Coulomb repulsion of having many protons confined in the small space of the nucleus.  The pion was accepted as the gauge boson of the strong force predicted by the Japanese physicist Yukawa, who in 1949 was awarded the Nobel Prize for predicting that meson right back in 1935.  So there is plenty of evidence for both SU(3) color forces and SU(2) isospin.  The problems all arise from U(1).

To give an example of how SU(3) works well with charged gauge bosons (gluons), remember that this property of gluons is responsible for the major discovery of asymptotic freedom of confined quarks.  What happens is that the mutual interference of the 8 different types of charged gluons with pairs of virtual quarks and virtual antiquarks at very small distances between particles (high energy) weakens the color force.  The gluon-gluon interactions screen the color charge at short distances because each gluon carries two color charges (a color and an anticolor).  If each gluon contained just one color charge, like the virtual fermions in pair production in QED, then the screening effect would be most significant at large, rather than short, distances.  Because the effective colour charge diminishes at very short distances, over a particular range of distances this fall in color charge as you get closer offsets the inverse-square force law effect (the divergence of effective field lines), so the quarks are completely free – within given limits of distance – to move around within a neutron or a proton.  This is asymptotic freedom, an idea from SU(3) that was published in 1973 and resulted in Nobel prizes in 2004.

Although colour charges are confined in this way, some strong force ‘leaks out’ as virtual hadrons like neutral pions and rho particles, which account for the strong force on the scale of nuclear physics (a much larger scale than is the case in fundamental particle physics): the mechanism here is similar to the way that atoms which are electrically neutral as a whole can still attract one another to form molecules, because there is a residual of the electromagnetic force left over.
The strong interaction weakens exponentially in addition to the usual fall in potential (1/distance) or force (inverse square law), so at large distances compared to the size of the nucleus it is effectively zero.  Only electromagnetic and gravitational forces are significant at greater distances.  The weak force is very similar to the electromagnetic force but is short ranged because the gauge bosons of the weak force are massive.  The massiveness of the weak force gauge bosons also reduces the strength of the weak interaction compared to electromagnetism.
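The range of a force mediated by massive gauge bosons follows from the same uncertainty-principle argument used earlier in this post: r ≈ h-bar/(mc).  A minimal sketch using the measured pion and W boson masses (standard values, inserted here only for illustration):

    hbar = 1.0546e-34   # reduced Planck constant, J s
    c = 2.998e8         # speed of light, m/s
    eV = 1.602e-19      # joules per electron-volt

    def force_range(mass_GeV):
        # Approximate range r ~ hbar/(m*c) for a mediator of the given
        # rest mass in GeV/c^2.
        m = mass_GeV * 1e9 * eV / c**2   # convert GeV/c^2 to kg
        return hbar / (m * c)

    print(f"Pion-mediated strong force range: {force_range(0.140):.2e} m")  # ~1.4e-15 m
    print(f"W-mediated weak force range:      {force_range(80.4):.2e} m")   # ~2.5e-18 m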

The mechanism for the fall in color charge coupling strength due to interference of charged gauge bosons is not the whole story.  Where is the energy of the field going when the effective charge falls as you get closer to the middle?  Obvious answer: the energy lost from the strong color charges goes into the electromagnetic charge.  Remember, short-range field charges fall as you get closer to the particle core, while electromagnetic charges increase; these are empirical facts.  The strong charge decreases sharply from about 137e at the greatest distances it extends to (via pions) to around 0.15e at 91 GeV, while over the same range of scattering energies (which are approximately inversely proportional to the distance from the particle core), the electromagnetic charge has been observed to increase by 7%.  We need to apply a new type of continuity equation to the conservation of gauge boson exchange radiation energy of all types, in order to deduce vital new physical insights from the comparison of these figures for charge variation as a function of distance.  The suggested mechanism in a previous post is:

‘We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy.  If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong.  At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.  So the strong force falls off in strength as you get closer by higher energy collisions, while the electromagnetic force increases!  Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.’ – https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

Force strengths as a function of distance from a particle core

I’ve written previously that the existing graphs showing U(1), SU(2) and SU(3) force strengths as a function of energy are pretty meaningless; they do not specify which particles are under consideration.  If you scatter leptons at energies up to those which have so far been available for experiments, they don’t exhibit any strong force SU(3) interactions.

What should be plotted is effective strong, weak and electromagnetic charge as a function of distance from particles.  This is easily deduced because the distance of closest approach of two charged particles in a head-on scatter reaction is easily calculated: as they approach with a given initial kinetic energy, the repulsive force between them increases, which slows them down until they stop at a particular distance, and they are then repelled away.  So you simply equate the initial kinetic energy of the particles with the potential energy of the repulsive force as a function of distance, and solve for distance.  The initial kinetic energy is radiated away as radiation as they decelerate.  There is some evidence from particle collision experiments that the SU(3) effective charge really does decrease as you get closer to quarks, while the electromagnetic charge increases.  Levine and Koltick published in PRL (v. 78, 1997, no. 3, p. 424) the finding that the electron’s charge increases from e to 1.07e as you go from low energy physics to collisions of electrons at an energy of 91 GeV, i.e., a 7% increase in charge.  At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
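As a minimal sketch of the closest-approach calculation just described (my own illustration, equating the initial kinetic energy to the Coulomb potential energy for a head-on collision, treating one charge as fixed and ignoring relativistic corrections):

```python
import math

e    = 1.602e-19   # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

def closest_approach_m(E_GeV):
    """Distance at which the Coulomb potential energy equals the initial kinetic energy."""
    E_joules = E_GeV * 1e9 * e
    return e**2 / (4 * math.pi * eps0 * E_joules)

for E in (0.001, 2.0, 91.0):         # collision energies in GeV
    print(E, closest_approach_m(E))  # higher energy probes smaller distances
```

This is the sense in which a collision energy maps onto a distance from the particle core: the distance is roughly inversely proportional to the energy.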

The full investigation of running-couplings and the proper unification of the corrected Standard Model is the next priority for detailed investigation.  (Some details of the mechanism can be found in several other recent posts on this blog, e.g., here.)

‘The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…].  Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing.  [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.]  Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it.  But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak.  It’s very clear that the photon and the three W’s [W^+, W^−, and W^0/Z^0 gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ’seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.] – R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Mechanism for loop quantum gravity with spin-1 (not spin-2) gravitons

Peter Woit gives a discussion of the basic principle of LQG in his book:

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189.
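As a concrete illustration of holonomy (my own sketch, not from Woit’s book): parallel-transport a tangent vector around a circle of latitude on a sphere, and it comes back rotated by 2π(1 – cos θ), the solid angle the loop encloses.

```python
import numpy as np

# Holonomy on the unit sphere: carry a tangent vector around a circle of
# latitude (polar angle theta), always keeping it in the tangent plane,
# and measure the net rotation it acquires on returning to the start.
theta = np.pi / 3
phis = np.linspace(0.0, 2 * np.pi, 100000)

def normal(phi):  # outward normal (= position vector) on the unit sphere
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

v = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # tangent vector at phi = 0
for phi in phis[1:]:
    n = normal(phi)
    v = v - np.dot(v, n) * n       # project onto the new tangent plane
    v = v / np.linalg.norm(v)      # discrete approximation to parallel transport

# Express the transported vector in the tangent basis at the start point.
e_theta = np.array([np.cos(theta), 0.0, -np.sin(theta)])
e_phi   = np.array([0.0, 1.0, 0.0])
angle = np.arctan2(np.dot(v, e_phi), np.dot(v, e_theta)) % (2 * np.pi)
print(angle, 2 * np.pi * (1 - np.cos(theta)))  # both ~ pi for theta = 60 degrees
```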

I watched Lee Smolin’s Perimeter Institute lectures, “Introduction to Quantum Gravity”, and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a Standard Model-type Yang-Mills theory of gravitation.  This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity.

It’s pretty evident that the quantum gravity loops are best thought of as the closed exchange cycles of gravitons going between masses (or other gravity field generators, like energy fields), to and fro, in an endless cycle of exchange.  That’s the loop mechanism: the closed cycle of Yang-Mills exchange radiation passing from one mass to another and back again, continually.

According to this idea, the graviton interaction nodes are associated with the ‘Higgs field quanta’ which generate mass.  Hence, in a Penrose spin network, the vertices represent the points where quantized masses exist.  Some predictions from this are here.

Professor Penrose’s interesting original article on spin networks, Angular Momentum: An Approach to Combinatorial Space-Time, published in ‘Quantum Theory and Beyond’ (Ted Bastin, editor), Cambridge University Press, 1971, pp. 151-80, is available online, courtesy of Georg Beyerle and John Baez.

Update (25 June 2007):

Lubos Motl versus Mark McCutcheon’s book The Final Theory

Given that there is some alleged evidence that mainstream string theorists are bigoted charlatans, I was made uneasy when string theorist Dr Lubos Motl, who is soon leaving his Assistant Professorship at Harvard, attacked Mark McCutcheon’s book The Final Theory.  Motl wrote a blog post attacking McCutcheon’s book by saying that: ‘Mark McCutcheon is a generic arrogant crackpot whose IQ is comparable to chimps.’  Seeing that Motl is a stringer, this kind of abuse coming from him sounds like praise to my ears.  Maybe McCutcheon is not so wrong?  Anyway, at lunch time today, I was in Colchester town centre and needed to look up a quotation in one of Feynman’s books.  Directly beside Feynman’s QED book, on the shelf of Colchester Public Library, was McCutcheon’s chunky book The Final Theory.  I found the time to look up what I wanted and to read all the equations in McCutcheon’s book.

Motl ignores McCutcheon’s theory entirely, and Motl is being dishonest when claiming: ‘his [McCutcheon’s] unification is based on the assertion that both relativity as well as quantum mechanics is wrong and should be abandoned.’

This sort of deception is easily seen, because it has nothing to do with McCutcheon’s theory!  McCutcheon’s The Final Theory is full of boring controversy or error, such as the sort of things Motl quotes, but the core of the theory is completely different and takes up just two pages: 76 and 194.  McCutcheon claims there’s no gravity because the Earth’s radius is expanding at an accelerating rate equal to the acceleration of gravity at Earth’s surface, g = 9.8 ms^-2.  Thus, in one second (t = 1 s), Earth’s radius (in McCutcheon’s theory) expands by (1/2)gt^2 = 4.9 m.

I showed in an earlier post that there is a simple relationship between Hubble’s empirical redshift law for the expansion of the universe (which can’t be explained by tired light ideas and so is a genuine observation) and acceleration:

Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^2
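For clarity, here is the same chain-rule derivation written out in conventional notation (no new physics, just the steps made explicit):

\[
v = HR, \qquad a = \frac{dv}{dt} = \frac{dv}{dR}\,\frac{dR}{dt} = Hv = H(HR) = RH^2 .
\]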

McCutcheon instead defines a ‘universal atomic expansion rate’ on page 76 of The Final Theory which divides the increase in radius of the Earth over a one second interval (4.9 m) into the Earth’s radius (6,378,000 m, or 6.378*10^6 m).  I don’t like the fact he doesn’t specify a formula properly to define his ‘universal atomic expansion rate’.

McCutcheon should be clear: he is dividing (1/2)gt^2 into the radius of Earth, R_E, to get his ‘universal atomic expansion rate’, X_A:

X_A = (1/2)gt^2/R_E,

which is a dimensionless ratio.  On page 77, McCutcheon honestly states: ‘In expansion theory, the gravity of an object or planet is dependent on its size.  This is a significant departure from Newton’s theory, in which gravity is dependent on mass.’  At first glance, this is a crazy theory, requiring Earth (and all the atoms in it, for he makes the case that all masses expand) to expand much faster than the rate of expansion of the universe.

However, on page 194, he argues that the outward acceleration of an atom of radius R is:

a = X_A R,

The first thing to notice is that acceleration has units of ms^-2 and R has units of m, so this equation is dimensionally false if X_A = (1/2)gt^2/R_E.  The only way to make a = X_A R dimensionally consistent is to change the definition of X_A by dropping t^2, turning the dimensionless ratio (1/2)gt^2/R_E into:

X_A = (1/2)g/R_E,

which has the correct units of s^-2.  So we end up with this accurate version of McCutcheon’s formula for the outward acceleration of an atom of radius R (we will use the average radius of orbit of the chaotic electron path in the ground state of a hydrogen atom for R, which is 5.29*10^-11 m):

a = X_A R = [(1/2)g/R_E]R, which can be equated to Newton’s formula for the acceleration due to a mass m, here the mass of the hydrogen atom, 1.67*10^-27 kg:

a = [(1/2)g/R_E]R

= mG/R^2.

Hence, McCutcheon on page 194 calculates a value for G by rearranging these equations:

G = (1/2)gR^3/(R_E m)

= (1/2)*(9.81)*(5.29*10^-11)^3 / [(6.378*10^6)*(1.67*10^-27)]

= 6.82*10^-11 m^3/(kg*s^2).

This is only about 2% higher than the measured value of

G = 6.673*10^-11 m^3/(kg*s^2).
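As a quick numerical check of McCutcheon’s arithmetic (a minimal sketch; the constants are exactly the ones quoted above, nothing new is assumed):

```python
# Check McCutcheon's G = (1/2)*g*R^3/(R_E*m) with the values quoted above.
g   = 9.81        # surface gravity of Earth, m/s^2
R   = 5.29e-11    # hydrogen ground-state orbit radius, m
R_E = 6.378e6     # Earth's radius, m
m   = 1.67e-27    # hydrogen atom mass, kg

G = 0.5 * g * R**3 / (R_E * m)
print(G)                      # ~6.82e-11 m^3/(kg*s^2)
print(G / 6.673e-11 - 1.0)    # ~0.02, i.e. about 2% above the measured G
```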

After getting this result on page 194, McCutcheon remarks on page 195: ‘Recall … that the value for X_A was arrived at by measuring a dropped object in relation to a hypothesized expansion of our overall planet, yet here this same value was borrowed and successfully applied to the proposed expansion of the tiniest atom.’

We can compress McCutcheon’s theory: what he is basically saying is the scaling relation:

a = (1/2)g(R/R_E), which, when set equal to Newton’s law mG/R^2, rearranges to give: G = (1/2)gR^3/(R_E m).

However, McCutcheon’s own formula is just his guessed scaling law: a = (1/2)g(R/R_E).

Although this quite accurately scales the acceleration of gravity at Earth’s surface (g at R_E) to the acceleration of gravity at the ground state orbit radius of a hydrogen atom (a at R), it is not clear if this is just a coincidence, or if it really has anything to do with McCutcheon’s expanding matter idea.  He did not derive the relationship; he just defined it by dividing the increased radius into the Earth’s radius and then using this ratio in another expression which is again defined without a rigorous theory underpinning it.  In its present form, it is numerology.  Furthermore, the theory is not universal: the basic scaling law that McCutcheon obtains does not predict the gravitational attraction of the two balls Cavendish measured; instead it only relates the gravity at Earth’s surface to that at the surface of an atom, and then seems to be guesswork or numerology (although it is an impressively accurate ‘coincidence’).  It doesn’t have the universal application of Newton’s law.  There may be another reason why a = (1/2)g(R/R_E) is a fairly accurate and impressive relationship.

Since I regularly oppose censorship based on fact-ignoring consensus and other types of elitist fascism in general (fascism being best defined as the primitive doctrine that ‘might is right’, i.e. that whoever speaks loudest or has the biggest gun is scientifically correct), it is only right that I write this blog post to clarify the details that really are interesting.

Maybe McCutcheon could make his case better to scientists by putting the derivation and calculation of G on the front cover of his book, instead of a sunset. Possibly he could justify his guesswork idea to crackpot string theorists by some relativistic obfuscation invoking Einstein, such as:

‘According to relativity, it’s just as reasonable to think as the Earth zooming upwards up to hit you when you jump off a cliff, as to think that you are falling downward.’

If he really wants to go down the road of mainstream hype and obfuscation, he could maybe do even better by invoking the popular misrepresentation of Copernicus:

‘According to Copernicus, the observer is at ‘no special place in the universe’, so it is as justifiable to consider the Earth’s surface accelerating upwards to meet you, as vice-versa.  Copernicus travelled throughout the entire universe on a spaceship or a flying carpet to confirm the crackpot modern claim that we are not at a special place in the universe, you know.’

The string theorists would love that kind of thing (i.e., assertions that there is no preferred reference frame, based on lies) seeing that they think spacetime is 10 or 11 dimensional, based on lies.

My calculation of G is entirely different, being due to a causal mechanism of graviton radiation; it has detailed empirical (non-speculative) foundations and a derivation which predicts G in terms of the Hubble parameter and the local density:

G = (3/4)H^2/(ρπe^3),

plus a lot of other things about cosmology, including a 1996 prediction of the expansion rate of the universe at long distances (confirmed two years later by Saul Perlmutter’s observations in 1998).  However, this is not necessarily incompatible with McCutcheon’s theory.  There are such things as mathematical dualities, where completely different calculations are really just different ways of modelling the same thing.

McCutcheon’s book is not just the interesting sort of calculation above, sadly.  It also contains a large amount of drivel (particularly in the first chapter) about an alleged flaw in the equation W = Fd, i.e. work energy = force applied * distance moved in the direction that the force operates.  McCutcheon claims that there is a problem with this formula, and that work energy is being used continuously by gravity, violating conservation of energy.  On page 14 (2004 edition) he claims falsely: ‘Despite the ongoing energy expended by Earth’s gravity to hold objects down and the moon in orbit, this energy never diminishes in strength…’

The error McCutcheon is making here is that no energy is used up unless gravity is making an object move.  So the gravity field is not depleted of a single Joule of energy when an object is simply held in one place by gravity.  For orbits, the gravity force acts at right angles to the distance the moon travels in its orbit, so gravity is not using up energy in doing work on the moon.  If the moon were falling straight down to earth, then yes, the gravitational field would be losing energy to the kinetic energy that the moon would gain as it accelerated.  But it isn’t falling: the moon is not moving towards us along the lines of gravitational force; instead it is moving at right angles to those lines of force.  McCutcheon does eventually get to this explanation on page 21 of his book (2004 edition).  But this just leads him to write several more pages of drivel about the subject: by drivel, I mean philosophy.

On a positive note, McCutcheon near the end of the book (pages 297-300 of the 2004 edition) correctly points out that where two waves of equal amplitude and frequency are superimposed (i.e., travel through one another) exactly out of phase, their waveforms cancel out completely due to ‘destructive interference’.  He makes the point that there is an issue for conservation of energy where such destructive interference occurs.  For example, Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment.  Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out?  Clearly, this would violate conservation of energy!  Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject.  Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation were correct, then half of the total energy would be unaccounted for in the dark fringes.  The photons are not arriving at the dark fringes; instead, they arrive in the bright fringes.
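A minimal numerical sketch of this energy bookkeeping (my own illustration, not Chalmers’ actual calculation; the wavelength, slit separation and screen geometry below are arbitrary choices): sum the two path amplitudes at each screen position, then check where the energy goes.

```python
import numpy as np

# Two-slit interference: add the two path amplitudes (Feynman 'arrows')
# at each screen position, then check the energy bookkeeping.
wavelength = 500e-9                    # m (arbitrary choice)
d, L = 50e-6, 1.0                      # slit separation and screen distance, m (arbitrary)
x = np.linspace(-0.05, 0.05, 200001)   # positions on the screen, m

k = 2 * np.pi / wavelength
r1 = np.sqrt(L**2 + (x - d/2)**2)      # path length via slit 1
r2 = np.sqrt(L**2 + (x + d/2)**2)      # path length via slit 2
I = np.abs(np.exp(1j*k*r1) + np.exp(1j*k*r2))**2  # intensity ~ |summed amplitude|^2

# Dark fringes get ~0 and bright fringes ~4 (in units of one slit's intensity),
# but the average is ~2: the energy 'missing' from the dark fringes turns up
# in the bright fringes, so nothing is annihilated overall.
print(I.min(), I.max(), I.mean())
```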

The interference of radio waves and other phased waves is also known as the Hanbury Brown-Twiss effect, whereby if you have two radio transmitter antennae, the signal that can be received depends on the distance between them: moving them slightly apart or together changes the relative phase of the transmitted signal from one with respect to the other, cancelling the signal out or reinforcing it.  (It also depends on the frequencies and amplitudes: if both transmitters are on the same frequency and have the same output amplitude and radiation power, then perfectly destructive interference occurs if they are exactly out of phase, and perfect reinforcement – constructive interference – if they are exactly in phase.)  This effect also actually occurs in electricity, replacing Maxwell’s mechanical ‘displacement current’ of vacuum dielectric charges.

Feynman quotation

The Feynman quotation I located is this:

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’

– R. P. Feynman, QED, Penguin, London, 1990, pp. 84-5. (Emphasis added in bold.)

Compare that to:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset the orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal, there is no metaphysics involved. The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare.
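As a minimal illustration of that last point (using the logistic map for brevity, rather than Poincare’s gravitational three-body problem – the lesson about deterministic unpredictability is the same):

```python
# Deterministic chaos: two logistic-map trajectories x -> 4x(1-x), started a
# tiny distance apart, diverge until they are completely uncorrelated.
x, y = 0.4, 0.4 + 1e-12        # nearly identical initial conditions
for n in range(1, 61):
    x, y = 4*x*(1-x), 4*y*(1-y)
    if n % 10 == 0:
        print(n, abs(x - y))   # the separation grows roughly as 2^n until it saturates
```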

Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions such as superimposed spin states) is a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between the time-dependent and time-independent versions of the Schroedinger equation at the time of measurement.

Just as the Copenhagen Interpretation was supported by lies (such as von Neumann’s false ‘disproof’ of hidden variables in 1932) and fascism (such as the way Bohm was treated by the mainstream when he disproved von Neumann’s ‘proof’ in the 1950s), string ‘theory’ (it isn’t a theory) is supported by similar tactics which are political in nature and have nothing to do with science:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

*****************************

‘Superstring/M-theory is the language in which God wrote the world.’ – Assistant Professor Lubos Motl, Harvard University, string theorist and friend of Edward Witten, quoted by Professor Bert Schroer, http://arxiv.org/abs/physics/0603112 (p. 21).

‘The mathematician Leonhard Euler … gravely declared: “Monsieur, (a + bn)/n = x, therefore God exists!” … peals of laughter erupted around the room …’ – http://anecdotage.com/index.php?aid=14079

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp 194-195. [Quoted by Tony Smith.]

Feynman predicted today’s crackpot-run world in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3):

‘The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculation] you can immediately see that they are wrong, so that does not count. … There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.’

In the same book Feynman states:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sent: 02/01/03 17:47 Subject: Your_manuscript LZ8276 Cook {gravity unification proof} Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

‘If you are not criticized, you may not be doing much.’ – Donald Rumsfeld.

The Standard Model, which Edward Witten has done a lot of useful work on (before he went into string speculation), is the best tested physical theory.  Forces result from radiation exchange in spacetime.  The big bang matter’s speed ranges from 0 to c over times past of 0 to 15 billion years in spacetime, so there is an effective outward acceleration and hence an outward force F = ma = 10^43 N.  Newton’s 3rd law implies an equal inward force, which from the Standard Model possibilities will be carried by gauge bosons (exchange radiation), predicting current cosmology, gravity and the contraction of general relativity, other forces and particle masses.
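A rough order-of-magnitude sketch of that outward-force figure (the mass of the receding matter is my assumed round number, not a value given in this post):

```python
# Rough check of F = ma for receding big-bang matter.
c = 3.0e8                # m/s
t = 15e9 * 3.156e7       # ~15 billion years, in seconds
a = c / t                # effective outward acceleration, ~6e-10 m/s^2
m = 2e52                 # ASSUMED mass of receding matter, kg (round figure)
print(a, m * a)          # force comes out of order 10^43 N
```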

‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Novum Organum.

This predicts gravity in a quantitative, checkable way, from other constants which are being measured ever more accurately and will therefore result in more delicate tests. As for mechanism of gravity, the dynamics here which predict gravitational strength and various other observable and further checkable aspects, are consistent with LQG and Lunsford’s gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law).

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.

That’s wave-particle duality explained.  The path integrals don’t mean that the photon goes on all possible paths; as Feynman says, it uses only a “small core of nearby space”.

The double-slit interference experiment is very simple: the photon has a transverse spatial extent.  If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference.  This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says.  It doesn’t take every path: most of the energy is transferred along the classical path, and near it.  Similarly, you find people saying that QFT says that the vacuum is full of loops of annihilation-creation.  When you check what QFT says, it actually says that those loops are limited to the region between the IR and UV cutoffs.  If loops existed everywhere in spacetime, i.e. below the IR cutoff energy, further than about 1 fm from a particle, then the whole vacuum would be polarized enough to cancel out all real charges.  If loops existed beyond the UV cutoff, i.e. right down to zero distance from a particle, then the loops would have infinite energy and momenta and the effects of those loops on the field would be infinite, again causing problems.

So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles). The LQG loops are entirely different (exchange radiation) and cause gravity, not cosmological constant effects. Hence no dark energy mechanism can be attributed to the charge creation effects in the Dirac sea, which exists only close to real particles.

‘By struggling to find a mathematically precise formulation, one often discovers facets of the subject at hand that were not apparent in a more casual treatment.  And, when you succeed, rigorous results (“Theorems”) may flow from that effort.

‘But, particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigorous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, blog entry on The Role of Rigour.

‘[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.’ – Editorial, p5 of the 9 Dec 06 issue of New Scientist.

Far easier to say anything else is crackpot.  String isn’t, because it’s mainstream, has more people working on it, and has a large number of ideas connecting one another.  No ‘lone genius’ can ever come up with anything more mathematically complex and amazingly technical than string theory ideas, which are the result of decades of research by hundreds of people.  Ironically, the core of a particle is probably something like a string, albeit not the M-theory 10/11 dimensional string: just a small loop of energy which acquires mass by coupling to an external mass-giving bosonic field.  It isn’t the basic idea of string which is necessarily wrong, but the way the research is done, and the idea that by building a very large number of interconnected buildings on quicksand, the result somehow becomes too big for disaster to overcome, despite having no solid foundations.  In spacetime, you can equally well interpret the recession of stars as a variation of velocity with time past as seen from our frame of reference, or a variation of velocity with distance (the traditional ‘tunnel-vision’ due to Hubble).

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1907.

Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of “predicting gravity”, he can do it.  The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above, leading to a prediction of gravity etc., are an “alternative to currently accepted theories”.  Where is the theory in string?  Where is the theory in M-“theory” which predicts G?  It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed.  So I disagree with Dr Brown.  This isn’t an alternative to a currently accepted theory.  It’s tested and validated science, contrasted with a currently accepted religious non-theory explaining an unobserved particle by using unobserved extra-dimensional guesswork.  I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!

There is some dark matter in the form of the mass of neutrinos and other radiations which will be attracted around galaxies and affect their rotation, but it is bizarre to try to use discrepancies in false theories as “evidence” for unobserved “dark energy” and “dark matter”, neither of which has been found in any particle physics experiment or detector in history.  The “direct evidence of dark matter” seen in photos of distorted images doesn’t say what the “dark matter” is, and we should remember that Ptolemy’s followers were rewarded for claiming that direct evidence of the earth-centred universe was apparent to everyone who looked at the sky.  Science requires evidence and facts, not faith-based religion which ignores or censors out the evidence and the facts.

The reason for current popularity of M-theory is precisely that it claims to not be falsifiable, so it acquires a religious or mysterious allure to quacks, just as Ptolemy’s epicycles, phlogiston, caloric, Kelvin’s vortex atom and Maxwell’s mechanical gear box aether did in the past. Dr Peter Woit explains the errors and failures of mainstream string theory in his book Not Even Wrong (Jonathan Cape, London, 2006, especially pp 176-228): using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.

By claiming to ‘predict’ everything conceivable, it predicts nothing falsifiable at all and is identical to quackery, although string theory might contain some potentially useful spin-offs such as science fiction and some mathematics (similarly, Ptolemy’s epicycles theory helped to advance maths a little, and certainly Maxwell’s mechanical theory of aether led ultimately to a useful mathematical model for electromagnetism; Kelvin’s false vortex atom also led to some ideas about perfect fluids which have been useful in some aspects of the study of turbulence and even general relativity).  Even if you somehow discovered gravitons, superpartners, or branes, these would not confirm the particular string theory model any more than a theory of leprechauns would be confirmed by discovering small people.  Science needs quantitative predictions.

Dr Imre Lakatos explains the way forward in his article ‘Science and Pseudo-Science’:

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

Really, there is nothing more anyone can do after making a long list of predictions which have been confirmed by new measurements, but are censored out of mainstream publications by the mainstream quacks of stringy elitism. Prof Penrose wrote this depressing conclusion well in 2004 in The Road to Reality so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition:

On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’

Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

On page 1026, Penrose gets down to the business of how science is really done: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’

‘Cargo cult science is defined by Feynman as a situation where a group of people try to be scientists but miss the point. Like writing equations that make no checkable predictions… Of course if the equations are impossible to solve (like due to having a landscape of 10^500 solutions that nobody can handle), it’s impressive, and some believe it. A winning theory is one that sells the most books.’ – http://cosmicvariance.com/2007/02/01/sponsored-links/#comment-188974

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, 1984, Chancellor Press, London, 1984, p225

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

59 thoughts on “Correcting the U(1) error in the Standard Model of particle physics”

  1. copy of a comment:

    http://www.math.columbia.edu/~woit/wordpress/?p=569#comment-26252

    Your comment is awaiting moderation.
    Nigel Says:

    June 21st, 2007 at 5:33 am

    Baryon and meson masses are related closely by the empirical formula mass = 35n(N+1) MeV, where n is the number of confined quarks and N is an integer.  This fits particle masses very accurately, far better than random numbers.  (The multiplier of 35 MeV is the electron mass, 0.511 MeV, divided by twice alpha.)

    The existing (still slightly sketchy) theoretical derivation is here and a table comparing it to meson and baryon masses is here.

    For a baryon (n=3) of mass 5.774 GeV, we get: 5774 = [105(N+1)], so N = 53.99. Close to an integer! So I’d like to thank the experimentalists for adding this new ‘crackpot coincidence’…

    Does anyone know if lattice QCD can predict this new mass that accurately? Are those calculations for masses dependent on the way mass is supplied by the ‘Higgs field’ quanta?

  2. copy of a comment:

    http://dorigo.wordpress.com/2007/06/15/and-so-about-cascade-baryons/#comment-50519

    9. nc – June 21, 2007

    Here’s a copy of a relevant comment submitted to Not Even Wrong (in moderation), predicting this new hadron mass from a theory of integer numbers of massive Higgs field type mass-giving quanta published some time ago:

    Baryon and meson masses are related closely by the empirical formula mass = 35n(N+1) MeV, where n is the number of confined quarks and N is an integer.  This fits particle masses very accurately, far better than random numbers.  (The multiplier of 35 MeV is the electron mass, 0.511 MeV, divided by twice alpha.)

    The existing (still slightly sketchy) theoretical derivation is here and a table comparing it to meson and baryon masses is here.

    For a baryon (n=3) of mass 5.774 GeV, we get: 5774 = [105(N+1)], so N = 53.99. Close to an integer! So I’d like to thank the experimentalists for adding this new ‘crackpot coincidence’…

    This prediction doesn’t strictly demand perfect integers to be observable, because it’s possible for effects like isotopes to exist, where different individuals of the same type of meson or baryon can be surrounded by different integer numbers of Higgs field quanta, giving non-integer average masses.  (The number would be likely to actually change during a high-energy interaction, where particles are broken up.)

    The early attempts of Dalton and others to work out an atomic theory were regularly criticised and even ridiculed on the grounds that the measured mass of chlorine is 35.5 times the mass of hydrogen, i.e., nowhere near an integer!

  3. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290730

    94. nigel on Jun 21st, 2007 at 8:27 am

    Professor Carroll,

    Did you read Paul Feyerabend’s Against Method?  Your first two rules for new geniuses are fine, particularly in the cases where they have the money or nearby suitable universities to get that fine degree of education right up to the elitist PhD level.  The third rule is kind of demeaning to the likes of Professor Witten.  Don’t get me wrong, I don’t think he is helping science that much by claiming string theory predicts gravity and such like, but it is demeaning to advise him:

    3. Present your discovery in a way that is complete, transparent and unambiguous.

    Don’t you think that’s a bit too insulting to the intelligence of the crackpot stringer?  There she is, with her PhD, working on string theory, failing to make any predictions, then having to read this nonsense that what she needs to do is to prevent it being an incomplete mess, and turn it into a proper theory.  You can imagine her suffering on reading this post.  They can’t help having such an incomplete, ambiguous (10^500 solution landscape) mess of a half-baked theory.

    In future when attacking string theory, maybe you should use Professor Baez’ index, awarding points to theories based on speculations which make no falsifiable predictions? BTW, I think the other kind of “crackpot” (leaving Witten aside for a moment) has an idea but lacks the skills to develop it. He or she decides to write it up and publish it, in the hope that someone with the skills will be able to develop it, and take a major share of the credit. Ultimately, if these people are on to something and do live long enough, they may be able to do the work needed themselves. Darwin and Newton were examples who spent decades taking your advice, developing a lot of arguments to support their theories, before publishing all the results in lucid books. Aristarchus o[f] Samos and Boltzmann are examples where this didn’t occur.

    Tony Smith (a string theorist censored off arXiv possibly because he has embarrassingly stuck to 26 dimensional bosonic string theory, instead of changing to 10 dimensional superstrings with 1:1 boson:fermion supersymmetry), has quoted Feynman describing his problems with getting people to listen to him in 1948 at the Pocono conference:

    “Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac could not think of going forwards and backwards … in time … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    “… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    “I gave up, I simply gave up …”. – The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994) (pp. 245-248).

    Dyson has a google video (search for Freeman Dyson Feynman, on google video) describing how hard it was to get Feynman’s idea taken seriously:

    “… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. He always knew everything better, and was a terribly bad organiser of seminars.

    “I mean he would – he had to have the centre stage for himself and couldn’t shut up [like string theorists today!], and we couldn’t tell him to shut up. So in fact, there was very little communication at all. …

    “I always felt Oppenheimer was a bigoted old fool. …”

    Eventually, Dyson got Bethe to explain it to Oppenheimer, who listened to Bethe.  Tony Smith quotes Dyson’s conclusion:

    “… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …”

    – Freeman Dyson, 1981 essay Unfashionable Pursuits (reprinted in From Eros to Gaia, Penguin 1992, at page 171).

    Tony Smith, in a comment on the Not Even Wrong weblog, points out that Oppenheimer continued to be bigoted by nature:

    “Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …”.

    – Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133.

    Even Carl Sagan falsely argued: “extraordinary claims require extraordinary evidence”.

    Problem is, what is extraordinary evidence to one person looks like a mere coincidence to a critic:

    “The first and simplest stage in the discipline, which can be taught even to young children, is called, in Newspeak, Crimestop. Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction.”

    – George Orwell, 1984.

    You can see why Feynman gave up explaining path integrals in 1948. If he had presented it differently, would that have helped?

  4. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290735

    96. nigel on Jun 21st, 2007 at 8:37 am

    “So no, genuine string theorists, LQC researchers, Twistor theorists, etc. are not ‘alternative-scientists’ in my opinion.” – Joe Fitzsimons

    What about Witten’s claim:

    ‘String theory has the remarkable property of predicting gravity.’

    – E. Witten (M-theory originator), Physics Today, April 1996.

    ‘50 points for claiming you have a revolutionary theory but giving no concrete testable predictions.’

    – J. Baez (crackpot Index originator).

  5. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290761

    100. nigel on Jun 21st, 2007 at 10:57 am

    ‘What makes a crackpot is the willingness to completely ignore contradictory evidence, either by not examining the current state of the field, or by willfully ignoring criticisms or by trying to obfuscate problems with their personal theory.’ – Joe Fitzsimons

    Try this for size:

    ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

    Witten is “willfully ignoring criticisms” because it seems he advises string theorists to try not to reply directly to criticisms for fear of causing negative controversy.

  6. http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290762

    101. nigel on Jun 21st, 2007 at 11:04 am

    “String theory predicts a spin-2 massless particle, which is exactly what we expect from a theory of quantum gravity.

    “So, no, Ed Witten is not a crackpot.” – Joe Fitzsimons

    Give the guy credit: string theory predicts a landscape of 10^500 different theories, all including spin-2 gravitons.  Worth 10^500 Nobel Prizes? 😉

  7. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290776

    103. nigel on Jun 21st, 2007 at 11:38 am

    Van,
    Spot on: you need to be a string theory expert of Witten’s stature before your criticism, that it doesn’t produce falsifiable predictions, becomes a ‘valid and relevant criticism.’  Apologies for missing that point.  The flat earthers also did that: only flat earthers were sufficiently qualified experts in flat earth theory to be able to make ‘valid and relevant criticisms’.  Other criticisms simply weren’t valid or relevant to them. 😉

  8. http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-290805

    111. nigel on Jun 21st, 2007 at 12:34 pm

    “I would say that it isn’t valid criticism to just parrot things you’ve read in recent books or on another blog.  When a string theorist does try to respond to criticisms from such people, there is virtually no way to explain things because 1) they don’t have the relevant background and 2) they are really not interested in a true discussion to begin with as they only want to repeat what they’ve heard elsewhere.” – Van

    1) The parroting comes from people repeating false string theory claims to do physics; 2) the insults come from those claiming that any critics are repeating rubbish from recent books, i.e. ad hominem attacks ignoring the point. The ‘relevant background’ seems maybe to be a euphemism for prejudice?

  9. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist#comment-291201

    119. nigel on Jun 22nd, 2007 at 4:04 am

    Has anybody come across the book called ‘The Final Theory’? I read the intro some time back and the author starts off by pointing out many contradictions in physics, but ends up destroying his own credibility. My favourite was how Newtonian gravity and general relativity conflict, and therefore both must be wrong. – GP

    That book by Mark McCutcheon is unhelpful because it doesn’t predict anything checkable.  The idea is simple: that all planets and masses are expanding at an accelerating rate, with the surface of the Earth accelerating upwards at 9.8 ms^{-2}, so that gravity is superfluous.

    In a fuzzy way, this idea can be superficially related to the accelerating universe, but it fails as a universal theory when you look at the numbers. When you do a mathematical check on it, all masses would have to be expanding at different rates that aren’t directly proportional to mass, which means you lose the mathematical simplicity of universal gravitation without gaining any benefits in return. It makes the universe more mysterious, not less so.

    Your argument that ‘My favourite was how Newtonian gravity and general relativity conflict, and therefore both must be wrong’ reminds me of an essay by Professor Hawking, where he writes something to the effect that he gets lots of letters from people with personal pet theories, but they’re all wrong because they’re all incompatible.  It’s very tempting for busy people to use this bogus argument.  I think Dr Motl took offense at McCutcheon because of a similar statement, that string theory is an even more fruitless approach to modelling gravity, with its 10^500 string theory landscape models.  Dr Motl then published a negative review of some parts of the book (ignoring McCutcheon’s basic idea) at Amazon, but Amazon deleted it.

  10. copy of a comment:

    http://cosmicvariance.com/2007/06/22/designs-intelligent-and-stupid/#comment-291235

    8. nigel on Jun 22nd, 2007 at 5:01 am

    The standard answer (I think it is mentioned already in Darwin’s works, or at least implicit) is that the right question is not “What good is half a wheel” but “What bad is half a wheel”.  If the mutation is not troublesome, it will survive even if it is not useful, and eventually it could evolve into a useful one. – Alejandro Rivero

    Without going off topic, this is a crucial point: how much junk and clutter can you hoard just in case you might one day find a use for it? Get too much clutter, and you can’t see the wood for the trees, but if you hoard abstract odds and ends long enough, it can eventually pay off in a big way. E.g. ellipses were known (as conic sections) in ancient Greece, as well as Aristarchus of Samos’ solar system, but Kepler in c. 1610 was the first to fit both […] together to accurately represent Brahe’s observations of Mars’ orbit. There’s also an allegation that Archimedes’ work The Method used the basic principles of the calculus (Archimedes called it the ‘mechanical method’) to work out the volume of geometric shapes by summing over a lot of thin slices, but it was lost and unavailable when calculus was developed by Newton and Leibniz. Maybe this was for the best because Archimedes […] regarded calculus as just a non-rigorous trick, not a really convincing proof: ‘… certain things first became clear to me by a mechanical method, although they had to be proved by geometry afterwards …’ (That kind of prejudice was probably best lost because there are now many things you can prove with calculus that can’t be proved afterward using simple geometry.)

  11. copy of comments:

    http://dorigo.wordpress.com/2007/06/15/and-so-about-cascade-baryons/#comment-50998

    13. nc – June 22, 2007

    Hi Tommaso,

    If the Omega_b is a baryon, its mass should be close to an integer when expressed in units of 105 MeV (3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV).

    If it is a meson, its mass should be close to an integer when expressed in units of 70 MeV (2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV).

    If it is a lepton apart from the electron (the electron is the most complex particle), its mass should be close to an integer when expressed in units of 35 MeV (1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV).

    This scheme has a simple causal mechanism in the quantization of the ‘Higgs field’ which supplies mass to fermions. By itself the mechanism just predicts that mass comes in discrete units, depending on how strong the polarized vacuum is in shielding the fermion core from the Higgs field quanta.

    To predict specific masses (apart from the fact they are likely to be near integers if isotopes don’t occur), regular QCD ideas can be used. This prediction doesn’t replace lattice QCD predictions, it just suggests how masses are quantized by the ‘Higgs field’ rather than being a continuous variable.

    Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ quanta giving mass to that fermion core.

    From analogy to the shell structure of nuclear physics where there are highly stable or ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2, and 3, and N = 1, 2, 8 and 50 to predict the most stable masses of fermions besides the electron.

    For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.

    For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.

    For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.

    For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
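
    (Added note, not part of the original comment: a quick numerical check of this 35n(N+1) MeV scheme in Python; the measured masses are standard particle data values, included only for comparison. A minimal sketch:)

    # Check of the proposed mass formula m = 35*n*(N+1) MeV,
    # where 35 MeV is 0.5 * (electron mass) / alpha = 0.5*0.511*137.
    def predicted_mass(n, N):
        # n = real particles in the core (1 lepton, 2 meson, 3 baryon);
        # N = integer number of 'Higgs field' quanta giving mass to the core.
        return 35 * n * (N + 1)  # MeV

    examples = [
        ("muon",    1, 2,  105.7),   # measured masses in MeV
        ("pion",    2, 1,  139.6),
        ("nucleon", 3, 8,  938.9),   # mean of proton and neutron
        ("tauon",   1, 50, 1776.9),
    ]
    for name, n, N, measured in examples:
        m = predicted_mass(n, N)
        print(f"{name}: 35*{n}*({N}+1) = {m} MeV (measured ~{measured} MeV)")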

    Best,
    nigel

    http://dorigo.wordpress.com/2007/06/15/and-so-about-cascade-baryons/#comment-50999

    14. nc – June 22, 2007

    Whoops, I wrote fermion to cover all the particles, but I include mesons which are of course bosons. Sorry.

  12. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-291414

    nigel on Jun 22nd, 2007 at 1:21 pm

    Van, Professor Siegel has already written a self-diagnosis checklist of that type, and then there is Professor Baez’s crackpot index. And please don’t forget Professor ‘t Hooft’s page on ‘How to become a bad theoretical physicist’ which admits:

    ‘It is much easier to become a bad theoretical physicist than a good one. I know of many individual success stories.’
    🙂

  13. copy of a comment:

    http://matpitka.blogspot.com/2007/06/peter-woit-and-kea-commented-wittens.html

    “The little I know about LQG is that it is 3+1-D theory with 3-geometries as basic objects expressed in terms of loop variables. Witten considers 3-D theory with Chern Simons action: in this case 2-geometries would be the basic dynamical objects. Witten himself made clear that he has no idea about how to generalize the theory to 4-D context.”

    Peter Woit does give a discussion of the basic principle of LQG in his book:

    ‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189.

    I watched Lee Smolin’s Perimeter Institute lectures, “Introduction to Quantum Gravity”, and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a Standard Model-type (Yang-Mills) theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity.

    It’s pretty evident that the “loops” are the closed exchange cycles of gravitons going between masses (or other gravity field generators, like energy fields), back and forth, in an endless cycle of exchange. That’s the loop mechanism: the closed cycle of Yang-Mills exchange radiation being exchanged from one mass to another, and back again, continually.

    According to this view, the graviton interaction nodes are associated with the ‘Higgs field quanta’ which generates mass.

    Hence, in a Penrose spin network, the nodes represent the points where quantized masses exist.

    I think the mainstream is being misled by spin-2 graviton ideas: the U(1) component of the Standard Model is wrong, and SU(2) describes electromagnetism (as well as isospin). The SU(2) symmetry models two types of charge, hence negative and positive charges, without U(1)’s device of specifying only negative charges and treating positive ones as negative ones going backwards in time. In addition, SU(2) gives 3 massless gauge bosons: two charged ones (which mediate the charge in electric fields) and one neutral one (the spin-1 graviton, which causes gravity by pushing masses together).

    In addition, SU(2) describes doublets, charge-anticharge pairs. We know that electrons are not produced individually, only in lepton-antilepton pairs. The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long-range electromagnetism instead of a short-range force.

    The problem is that string people reflexively stamp the label “crackpot” immediately on to any alternative ideas for unifying the Standard Model and gravity.

  14. funny comment spotted on the Not Even Wrong blog, and copied here for future reference in case Dr Woit accidentally deletes it:

    http://www.math.columbia.edu/~woit/wordpress/?p=570#comment-26428

    anon. Says:

    June 27th, 2007 at 6:38 am

    Kea, I note that you quote Lubos’ remark on your blog:

    “Well, I happen to think that if Edward Witten started to work on loop quantum gravity, as defined by the existing contemporary methods and standards of the loop quantum gravity community, it wouldn’t mean that physics is undergoing a phase transition. Instead, it would simply mean that Edward Witten would be getting senile. We all admire him and love him, if you want me to say strong words, but he is still a scientist, not God.” – Lubos Motl

    This kind of gives the impression that Ed Witten risks being deemed as “senile” and “not God” but merely “a scientist” if he did take more interest in Loop Quantum Gravity. Maybe that’s why he doesn’t?

  15. funny comment spotted on the Reference Frame blog’s fast comments, and copied here for future reference in case Dr Motl accidentally deletes it:

    http://motls.blogspot.com/2007/06/is-witten-working-on-loop-quantum.html

    Thanks for your support of alternatives to string, Lubos. Anything you write negatively about them backfires.

    The girl in your photo (should be labelled Fig. 1) is actually tied up in a knot, making fun of the Calabi-Yau manifold. If she had an extra 6 dimensions, there’d be 10^500 ways she could compactify herself. If she was Planck scale, you’d be unable to probe her to find out how she was tied up with all those extra degrees of freedom in those extra dimensions. So you’d be totally frustrated unable to get any pleasure from her, just as you’re frustrated by string theory which is in just the same state and can never, ever ever ever predict anything real. Admit it, string is just another (bad) loser.

    aktivní blb | Homepage | 06.27.07 – 3:35 pm | #

  16. copy of a comment

    http://riofriospacetime.blogspot.com/2007/06/alpha-magnetic-spectrometer-in-limbo.html

    “Nearly everyone has experienced the power of a thunderstorm. We are taught in school that lightning originates from static discharges within storm clouds. What triggers those discharges is unknown. The tracks of cosmic rays, striking and scattering particles in the atmosphere, are very similar to lightning. Some researchers have suggested that cosmic rays are the cause of lightning! Since cosmic rays fall nearly steadily across Earth’s surface, that is a hypothesis that needs to be tested. If cosmic rays cause lightning, that is one more example of how our lives are intimately entwined with Space.” – Louise

    Yes! It’s simply the Geiger counter effect: there is a massive (400 kV) electric potential between the ionosphere at high altitude and the earth’s surface, and the ionospheric gas is at low pressure, just like the inside of a Geiger counter tube with high voltage and low pressure gas!

    A sufficiently ionizing, high-energy cosmic ray, can set off a lightning bolt, just as a “count” (electron avalanche) is set off by a beta particle entering a Geiger counter tube.

    If you actually have a Geiger counter with a glass window, turn out the lights and bring some Sr-90 near it. You can see miniature lightning flashes! The gas (argon) sparks when particles of radiation set off electron avalanches, even at the normal operating voltage. (You get trouble if you turn the potential up too high, because the gas no longer quenches, so the first particle that ionises it causes it to light up like a neon tube and remain glowing until the counter is switched off; that’s damaging to the lifespan of the Geiger tube, and of course the scaler or ratemeter attached to it, which is providing the HV, can’t detect any pulses if that happens.)

    So the atmosphere is in a sense a giant Geiger counter, and lightning bolts are likely just electron avalanches triggered at altitude in low pressure air by cosmic rays. The direction of the lightning bolt marks the direction of the electric field lines in the atmosphere. Usually they’re vertical, corresponding to the natural electric field gradient of the atmosphere, 120 v/m vertically near sea level.

    Feynman has a great lecture about this in his Lectures on Physics. The base of the electrically conductive ionosphere is 50 km above sea level, and it forms one electrode with the sea (conductive salt water) or wet ground as the other one. The Earth has a net charge of 1,000,000 Coulombs. The vertical electric potential between them is V = 400,000 volts, and since the capacitance of the Earth (treating the oceans and the ionosphere as two concentric capacitor plates with the air between them as the dielectric) is C = 0.091 F, the atmosphere normally stores (1/2)CV^2 = 7.3 GJ of energy!

    Because the air is normally fairly non-conductive, the vertical current flowing as a result of that natural vertical electric field is normally small, just 3.5 pA/m^2, but this means that 1,800 Amps is flowing vertically at any one time over the entire Earth.
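
    (Added note: the capacitance, stored energy and total current figures above can be checked with a few lines of Python, treating sea level and the 50 km ionosphere base as concentric spherical capacitor plates. A minimal sketch:)

    import math

    eps0 = 8.854e-12        # permittivity of free space, F/m
    a = 6.371e6             # m, Earth's mean radius (inner plate)
    b = a + 5e4             # m, base of the ionosphere, 50 km up (outer plate)
    V = 4e5                 # volts, vertical potential

    # Capacitance of two concentric spheres: C = 4*pi*eps0*a*b/(b - a)
    C = 4 * math.pi * eps0 * a * b / (b - a)
    print(f"C = {C:.3f} F")                        # ~0.091 F
    print(f"energy = {0.5 * C * V**2 / 1e9:.1f} GJ")  # ~7.3 GJ

    # Fair-weather current density of 3.5 pA/m^2 over the whole surface:
    I = 3.5e-12 * 4 * math.pi * a**2
    print(f"total current = {I:.0f} A")            # ~1,800 A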

    This vertical current is compensated for by an average of 40 lightning discharges per second (in 2,000 thunderstorms concentrated mainly in warm oceanic tropical locations which suffer the most frequent electrical storms: South America, central Africa, Southeast Asia, and Northern Australia), which maintains the balance of charge between Earth’s surface and the ionosphere. (This figure of 40 lightning strikes per second is based on 1995 data from the Optical Transient Detector satellite, and is only about half of the old obsolete estimate used in Feynman’s lectures.)

  17. copy of a comment:

    http://cosmicvariance.com/2007/06/26/constraints-and-signatures-in-particle-cosmology/#comment-293468

    25. nigel on Jun 27th, 2007 at 5:15 pm

    My research on a simple causal mechanism of gauge boson exchange addresses all these problems successfully: http://quantumfieldtheory.org/1.pdf

    The problem I’ve found is that once you hit on something that does agree with nature, how far should you then go in applying it yourself? Particularly if you have family and work commitments and limited time? Newton allegedly spent 22 years (1665-87) working out the consequences of his initial idea. He published finally after a priority dispute with Hooke over the inverse-square law.

    If you get bigoted responses from egotists and “crackpot” abuse from the mainstream, probably the best thing to do is to go ahead and apply the basic concepts as far as you can. Boltzmann’s problem was that he got depressed by other people’s ignorant reactions. If certain people aren’t interested in new ideas or are prejudiced against you, it’s only your problem if you are dependent on them. If those people are just a nuisance to everyone, then it’s better not to waste too much time in arguments (just enough to prove you made some effort, and to document the ignorant hostility or abuse you receive in return). Not making any effort to communicate information is just as dangerous as making too much effort (hype) to do so, because potential opportunities for fruitful discussions will be lost. The first priority is applying the science.

  18. copy of a comment:

    http://cosmicvariance.com/2007/06/26/constraints-and-signatures-in-particle-cosmology/#comment-293567

    nigel on Jun 28th, 2007 at 3:08 pm

    The chirality issue is addressed in the Standard Model just by the SU(2) charge, isospin, being zero for all right-handed Weyl spinors (see the table here). The right-handed particles simply can’t interact with the massive weak gauge bosons, although they still see the same electromagnetic force. Below the electroweak unification energy, the weak gauge bosons gain mass, and these massive weak gauge bosons can’t interact with right-handed particles. (The loss of the weak isospin charge for right-handed particles is compensated for by their increased weak hypercharges.)

    What’s interesting is that there are severe issues with the U(1) electromagnetic and weak hypercharge gauge field. Sheldon Glashow and Julian Schwinger in 1956 tried to use SU(2) to unify electromagnetism and the weak interaction, by having the two charged vector bosons as the mediators of weak interactions and the neutral vector boson as the mediator of electromagnetism. I.e., they tried to use SU(2) for electroweak unification! Glashow comments in his 1979 Nobel lecture:

    “Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).”

    SU(2)xU(1) gives four vector bosons, two charged and two neutral. However, it implies that all leptons are singlets (in fact they are only formed in lepton-antilepton pairs), and it doesn’t include the gravity vector boson which you’d expect to find. An alternative would be a second SU(2) group, i.e., SU(2)xSU(2), which gives 6 vector bosons: the usual 3 weak gauge bosons plus another 3 which remain massless and thus mediate long-range forces. Another option would be that the Higgs mechanism is wrong, and the correct electroweak group is just SU(2), in which case some of the 3 massless gauge bosons (2 charged, 1 neutral) acquire mass and interact with left-handed particles.
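
    (Added note: the vector boson counting in this paragraph is just the counting of group generators – SU(N) has N^2 - 1, U(1) has 1 – which a few lines of Python can confirm:)

    def gauge_bosons(group):
        # Count gauge bosons for a product of SU(N) and U(1) factors,
        # e.g. "SU(2)xU(1)". SU(N) contributes N^2 - 1, U(1) contributes 1.
        total = 0
        for factor in group.split("x"):
            if factor == "U(1)":
                total += 1
            else:                      # a factor of the form "SU(N)"
                N = int(factor[3:-1])
                total += N * N - 1
        return total

    for g in ("SU(2)xU(1)", "SU(2)xSU(2)", "SU(2)"):
        print(g, "->", gauge_bosons(g), "vector bosons")  # 4, 6, 3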

    When you examine well-known anomalies in electromagnetism carefully, it is easy to model electric fields as products of positive and negative charged exchange radiation (the usual objections to propagating massless charged bosons are erased in the case of exchange radiation, due to cancellation of the curls of the magnetic fields of electrically charged gauge bosons passing through one another in equilibrium), while gravity is mediated by electrically uncharged massless gauge bosons. This tells us in a simple way that electromagnetism is ~10^40 times stronger than gravity, by taking a Brownian-motion-like path integral (electric fields add up like a diffusion or random walk when equal numbers of positive and negative charges are scattered around) over the ~10^80 charges in the observable universe.
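
    (Added note: my reading of the ‘Brownian motion’ summation here is a random-walk sum: the net field from N randomly-signed charges grows as the square root of N, so ~10^80 charges give ~10^40 times one charge’s contribution. A minimal Monte Carlo sketch of just that square-root scaling, with the physics stripped out:)

    import random

    def rms_signed_sum(N, trials=2000):
        # RMS of the sum of N random +1/-1 charges; should be ~sqrt(N).
        total = 0.0
        for _ in range(trials):
            s = sum(random.choice((-1, 1)) for _ in range(N))
            total += s * s
        return (total / trials) ** 0.5

    for N in (100, 400, 1600):
        print(N, round(rms_signed_sum(N), 1))  # ~10, ~20, ~40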

  19. copy of a comment:

    39. nigel on Jun 29th, 2007 at 4:04 am

    Hi Van, glad you got my point. Thank you very much for referring to the Pati-Salam model, SU(4) x SU(2)_L x SU(2)_R. Yes, I am interested in something that looks nearly identical, like SU(3) x SU(2)_L x SU(2)_R. However, SU(4) x SU(2)_L x SU(2)_R is different in many ways. They chose it not due to experimental evidence or unique quantitative predictions it can make, but because it can undergo spontaneous symmetry breaking to produce exactly the existing Standard Model, so that the Higgs field at low energy causes SU(4) x SU(2)_L x SU(2)_R to produce SU(3)xSU(2)xU(1). At high energy, where the symmetry is unbroken, it is a grand unification theory.

    This approach has many problems both in methodology and in checking it:

    1) It assumes the Standard Model is totally correct at low energies, and it assumes that forces do unify at very high energy.

    2) It doesn’t make immediate predictions or post-dictions of the strength of gravity, cosmological effects, etc., that could validate the approach.

    3) It doesn’t actually seem to make any long-term checkable predictions that are useful.

    4) It doesn’t seem to make things simpler with regard to the Higgs field or the masses of the different fundamental particles, which are the cause of most of the adjustable parameters in the existing Standard Model.

    5) It doesn’t seem to help resolve existing problems in physics, or to point towards simple mechanisms that improve understanding.

    6) It doesn’t get rid of U(1) at low energy, since U(1) emerges at low energy as a result of the symmetry breaking they are assuming.

    7) It’s a theory built on speculation instead of on empirical observations.

    SU(2) does have several advantages in describing leptons as doublets: pair production produces lepton-antilepton pairs. A conversion of 100% of positrons into upquarks and 50% of electrons into downquarks in the big bang would explain the alleged lack of anti-matter in the universe: it’s locked up by quark confinement in nucleons (the universe is mainly hydrogen: an electron, a downquark, and two upquarks).

    One simple mechanism, based entirely on mainstream QFT, is that the electric field of the core of a lepton is shielded by the polarization of pairs of virtual fermions around it. The virtual fermion pair production is, Schwinger showed, a result of the electric field of the electron core, which extends out to about 1 fm radius, where the electric field is above the threshold of 1.3*10^18 v/m required for pair-production. If at high energy in the big bang (very early times) N electrons were crowded together in a small space (against the Pauli exclusion principle), the polarization of the vacuum would be stronger, so the shielding factor due to the vacuum would be N times bigger. Thus, 3 electrons crowded together in a tiny space would still only give an overall electric charge of e; the contribution from each electron would be e/3, due to the extra shielding by the stronger polarized vacuum. This is just a simple heuristic mechanism for fractional charges.

    The energy conservation issue then comes to the fore: what happens to the 2/3rds of the electric charge energy that is now being shielded by the stronger, shared vacuum polarization around the triplet? Clearly, that energy is stopped at very short distances by the vacuum, and used to produce loops of virtual particles which mediate short-range interactions. To avoid violating the Pauli exclusion principle (which would prevent a triplet of three identical quarks, since there are only two spin states available), colour charge must appear. This suggests that ‘unification’ of all forces doesn’t occur at very high energy: the colour charge is powered by short-range vacuum loop effects, and decreases towards zero when you are close enough to the particle core that there is no room for the vacuum to polarize (i.e. no space for virtual fermion pairs to move apart along the lines of the radial electric field).

  20. … McCutcheon on page 194 calculates a value for G by rearranging these equations:

    G = (1/2)g(R^3)/[(R_E)m]

    =(1/2)*(9.81)*(5.29*10^-11)^3 /[(6.378*10^6)*(1.67*10^-27)]

    = 6.82*10^-11 m^3/(kg*s^2).

    Which is only 2% higher than the measured value of

    G = 6.673 *10^-11 m^3/(kg*s^2). …

    This doesn’t seem to have any physical validity. The mean density of the hydrogen atom, treating it as a sphere of radius equal to the ground state mean radius, is (1.67*10^-27 kg)/(6.20*10^-31 m^3) = 2,700 kg/m^3.

    The mean density of the planet Earth is about 5,500 kg/m^3. Hence the hydrogen atom has about half the density of the Earth, in the way McCutcheon’s calculation goes. His calculation is based on the assumption that:

    a = (1/2)gR/R_E

    where R_E is Earth’s radius, R is hydrogen atom ground state radius, a is acceleration of gravity on hydrogen atom’s surface and g is acceleration of gravity on Earth’s surface.

    If this were to be true, the factor of 1/2 would need to be explained by the relative densities of the hydrogen atom and the Earth. For a uniform sphere, g = (4/3)*Pi*G*(density)*(radius), so a/g = R/(2R_E) requires every atom to have exactly half the density of every planet. The Earth (about 5,500 kg/m^3) does happen to be roughly twice as dense as the hydrogen atom (about 2,700 kg/m^3), but that is a coincidence of this particular pair of objects: atomic and planetary densities vary by large factors from case to case (Saturn’s mean density, for example, is only about 700 kg/m^3), so the relation cannot hold universally. So that road is a failure.
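
    (Added note: a sketch reproducing McCutcheon’s G calculation and the density comparison above; Earth’s mass 5.97*10^24 kg and mean radius 6.371*10^6 m are standard values, not figures from his book:)

    import math

    g, R, R_E, m_H = 9.81, 5.29e-11, 6.378e6, 1.67e-27

    G = 0.5 * g * R**3 / (R_E * m_H)
    print(f"G = {G:.3e} m^3/(kg*s^2)")  # ~6.82e-11, ~2% above 6.673e-11

    rho_H = m_H / ((4/3) * math.pi * R**3)
    rho_E = 5.97e24 / ((4/3) * math.pi * 6.371e6**3)
    print(f"hydrogen atom: {rho_H:.0f} kg/m^3")  # ~2,700
    print(f"Earth:         {rho_E:.0f} kg/m^3")  # ~5,500
    print(f"Earth/hydrogen density ratio: {rho_E/rho_H:.2f}")  # ~2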

    If we try to find a physical understanding for a = (1/2)gR/R_E, the factor of 1/2 is a difficulty: as shown in the post, McCutcheon doesn’t have any reason to include it, because invoking Galileo’s (1/2)gt^2 gives the wrong dimensions. To correct the dimensions, you must drop t^2, and in so doing you have no physical reason for retaining the (1/2). However, a factor of 1/2 is not a big problem in physics generally. (Einstein’s first estimate of the deflection of light by gravity in 1911 was half the correct value, as were early estimates of the spin and magnetic moment of the electron. Generally, where things occur in pairs in nature – like spins and field polarizations – early estimates risk confusing whether 50% or 100% are involved in the interactions, so errors of a factor of 2 or 1/2 are common.)

    a = (1/2)gR/R_E

    is equivalent to:

    2a/g = R/R_E

    Why should gravitational accelerations scale in proportion to radius, when densities vary widely from body to body? The Hubble expansion v = HR implies an outward acceleration in spacetime of a = dv/dt = d[HR]/[dR/v] = vH = RH^2, so acceleration is directly proportional to radial distance. However, the Hubble acceleration is much smaller. If we relate McCutcheon’s formula to the Hubble acceleration a = RH^2, then:

    H^2 = a/R = g/(2R_E) = 7.7*10^-7 s^-2

    Hence:

    H = 8.8*10^-4 s^-1.

    This is bigger than the cosmological Hubble parameter (H = 2.3*10^-18 s^-1) by a factor of about 4*10^14. So that is another dead end.
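
    (Added note: a quick check of the numbers just given:)

    g, R_E = 9.81, 6.378e6
    H = (g / (2 * R_E)) ** 0.5           # s^-1, from H^2 = a/R = g/(2*R_E)
    print(f"H = {H:.1e} s^-1")           # ~8.8e-4
    print(f"ratio = {H / 2.3e-18:.1e}")  # ~4e14 times the cosmological H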

    Another way of looking at McCutcheon’s formula 2a/g = R/R_E is to try to see it as an energy balance. Energy E = FR, where F is force and R is distance moved in the direction of the force as a result of the force. Since F = ma, we get:

    E = maR

    McCutcheon’s relationship stated as an energy balance would then need to be something like:

    E = (m_H)aR = (m_E)gR_E

    where the left hand side applies to a hydrogen atom with gravity acceleration a at ground state radius R, while the right hand side applies to the Earth with surface acceleration g at radius R_E.

    However, this doesn’t work because when rearranged it can’t yield anything like McCutcheon’s 2a/g = R/R_E. So that is a dead end also.

  21. copy of an interesting “fast comment” I saw in case it gets accidentally deleted by global warming string theorist Dr Lubos Motl:

    http://motls.blogspot.com/2007/06/realclimate-saturated-confusion.html

    “the greenhouse effect gets weaker as the absorption of the appropriate spectral lines gets saturated”
    “the overall greenhouse effect from several gases is smaller than a simple sum if their spectra overlap”

    Lubos, I know a lot about infrared absorption by H2O and CO2, and in these cases you have molecular band absorption spectra, not line absorption spectra. The difference is caused by the freedom of the atoms in a molecule to vibrate in many different modes, so sharp lines do not occur and instead you have fuzzy molecular band spectra. These will not saturate in the way you claim.

    Your argument only applies to line spectra, which really only occur for free atoms and ions, not for CO2 and H2O. In the case of molecular band spectra, the absorption doesn’t get saturated: the bands cover a wide range of wavelengths, and the more molecules you have, the more absorption there is for radiant transfer over that entire band.

    aktivní blb | Homepage | 06.30.07 – 7:08 am | #

  22. copy of a comment responding to:

    http://cosmicvariance.com/2007/06/26/constraints-and-signatures-in-particle-cosmology/#comment-293764

    Van: the standard model already has that chiral symmetry built into it by simply setting the weak isospin charge of right-handed Weyl spinors equal to zero. Hence only left-handed particles have a weak isospin charge and can engage in weak interactions. All I’m pointing out is that SU(2) can be extended to include gravity in the standard model, to predict the strength of electromagnetism, to predict other cosmological effects and the apparent lack of antimatter. There is more explanation with diagrams on my blog if anyone is interested. Thanks!

  23. copy of a comment:

    http://carlbrannen.wordpress.com/2007/06/29/to-help-miss-cite-reb-eretics-simple-hot/

    “My dream is to get at least single citation from an academic physicist before I die;-).” – Matti Pitkanen

    Matti,

    But Roger Penrose cited your work in the revised edition of the book “The Road to Reality”. Doesn’t that count as a “citation from an academic physicist”?

    “[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.” – Editorial, p5 of the 9 Dec 06 issue of New Scientist.

    “(1). The idea is nonsense.
    (2). Somebody thought of it before you did.
    (3). We believed it all the time.”
    – Professor R.A. Lyttleton (quoted by Fred Hoyle in the book “Home is Where the Wind Blows”, Oxford University Press, 1997, p154).

    It is interesting that even people like Feynman and Bohm were censored, mainly by groupthink led by some elite priest figure. (Oppenheimer was behind the initial censorship of both Feynman and Bohm, although he changed his mind about Feynman after Dyson got Bethe to argue with Oppenheimer at length.)

    Tony Smith quotes Dyson’s conclusion:

    “… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …”

    – Freeman Dyson, 1981 essay “Unfashionable Pursuits” (reprinted in “From Eros to Gaia”, Penguin 1992, at page 171).

    That sort of time delay is totally unacceptable. You can see why the mainstream has so much support: new ideas are liable to “go down the tubes” for half a century, not because they’re wrong, but just because they’re unfashionable. The problem is, there is a widespread “common sense” idea that anything unorthodox is crackpot and wrong, while orthodoxy is deemed sensible and correct even when it is, in the case of string theory, just groupthink and fantasy.

    I don’t believe in hunting “crackpot hunters”, because when you catch them they’re miserable little losers. There’s Erik Max Francis, who owns and runs www.Crank.net and allegedly believes himself superior to all others because he discovered how to derive Kepler’s laws from Newton’s laws. (Actually, Newton used Kepler’s laws to derive his laws, so this is a circular argument.)

    He was described in the New York Times as follows:

    “Mr. Francis, 29, is not a scientist, and has taken only a handful of classes at a community college.” (Bonnie Rothman Morris in The New York Times of Dec. 21, 2000)

    But this quotation is a bit misleading, because when you look up the article, you see that Bonnie Rothman Morris is actually impressed by Francis precisely because he has only taken a handful of classes. She thinks that means he is really, really clever and qualified to call other people cranks.

    This is the world we live in:

    ‘Fascism is not a doctrinal creed; it is a way of behaving … What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

    Carl – sorry for the length and please delete this comment if it is unhelpful; I’ll copy it to my blog so it won’t be lost anyway. Cheers, Nigel.

  24. copy of a comment:

    http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=48

    “Last week the New York Times reported that firstborn children have IQs three points higher on average than their younger siblings, according to a big new study. Interesting. Even more interesting was the association of the psychologist Frank Sulloway with the story. Sulloway wrote an editorial that Science published in tandem with the IQ study. The Times quoted Sulloway at length in its coverage and then made him available for an online Q&A. None of the coverage I’ve seen has mentioned the controversy that has shadowed Sulloway recently.

    “Now at the University of California at Berkeley, Sulloway was at MIT in 1996 when he published Born to Rebel: Birth Order, Family Dynamics and Creative Lives. The book portrayed history as a struggle between stodgy, conservative firstborns and open-minded, creative later-borns. … According to Sulloway, firstborn children are much more likely than their younger siblings to be conservative, support the status quo and reject new scientific or political ideas. Later-born children, in contrast, tend to be more adventurous, radical, open-minded, willing to take risks. (Sulloway, naturally, is the youngest of two brothers.) … Sulloway’s conclusions contradicted those contained in the 1983 book Birth Order: Its Influence on Personality. The authors, the Swiss psychiatrists Cecile Ernst and Jules Angst, sifted through hundreds of previous studies attempting to link birth order to personality traits, and then conducted their own survey of 7,582 college-age residents of Zurich. They concluded not only that birth-order effects do not exist but also that continued efforts to find such effects represent “a sheer waste of time and money.” [Italics in the original.]” – John Horgan.

    Sorry, John, but I don’t like the direction you appear to be coming from. A big new study confirms that firstborns are, on average, slightly more intelligent. That would be a perfectly rational mechanism for some causal effect, albeit of weak statistical significance, on which leaders in history did what. So maybe the recent media coverage isn’t mentioning the controversy surrounding Sulloway because it is irrelevant to the news story: the news story is confirming Sulloway to some extent. That’s the end of the connection to Sulloway. Why rake up old controversy???

    If I have a theory and it’s controversial, then some new evidence appears to confirm it (which has nothing to do with the past controversy), why on earth should the media write about the old controversy? Maybe that’s good political investigative journalism, but it might be scientifically irrelevant and harmful.

    Put it another way. Galileo’s claim that he had discovered moons orbiting Jupiter using a home-made telescope was “controversial” because Cremonini (the Professor of philosophy at Padua) refused to look through the telescope, because he feared the week would have to be extended from 7 days if it turned out there were more than 7 orbital bodies in the heavens, or something like that. For how long should the media have appended a mention of this scientifically-irrelevant “controversy” to the news that Galileo had discovered Jupiter’s moons?

    By the way, I’m an only child, so I’m both a first born and a last born! So maybe you can use me as a false reductio ad absurdum attack on Sulloway’s thesis: the example of an only child (both first and last born) would mean that that child is somehow more intelligent than itself, which is a logical impossibility, so Sulloway is falsified. But that’s too obviously a nonsense counter-argument. The trick is to throw mud at the teflon until it is simply buried under a gigantic pile of mud; then you can claim the non-stick isn’t working. That’s a great way to get rid of politicians you don’t like, but it’s a bit tough on scientists who aren’t used to wallowing in mud!

  25. more “fast comment” discussion:

    http://motls.blogspot.com/2007/06/realclimate-saturated-confusion.html

    Dear aktivní blb, it doesn’t look like you know something about absorption by CO2, H2O, or by anything else, for that matter.

    The absorption rate can’t grow linearly and indefinitely because it is impossible to absorb more than 100 percent – at a given frequency or in a given band. This fact is independent of discrete vs band spectrum.
    Luboš Motl | Homepage | 06.30.07 – 7:22 am | #

    I didn’t say the absorption rate grows linearly and indefinitely: just that it doesn’t saturate by concentrating the absorption on a small set of narrow lines.

    I don’t like your objection above to the approximation e^{-x} ~ 1 – x. That’s a perfectly good approximation when x is much smaller than 1. For example, when x = 0.1, e^{-x} = 0.905 while 1-x = 0.9. It’s a good approximation and the smaller x is, the more accurate this approximation becomes.

    For global warming, the percentage of radiant energy being absorbed by CO2 in the air is always small, so this kind of approximation is good.

    Your argument reminds me of the problem with calculating radiation casualties with a linear dose-effects “law”. It’s absurd, because according to such a law the risk of a person being killed by radiation exceeds 100% beyond some dose; it doesn’t naturally saturate. Obviously a more realistic model would be f = 1 - e^{-x}, which gives mortality risk f directly proportional to dosage x where x is small compared to 1, but f then saturates at 1 (i.e. 100% probability) when x is very large.

    However, for small doses, it is more reasonable to assume f ~ x, a linear dependence.
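
    (Added note: a minimal sketch of the comparison: for small x the linear and saturating models agree, but only f = 1 - e^{-x} stays below 100%:)

    import math
    for x in (0.01, 0.1, 0.5, 1.0, 5.0):
        # linear model f ~ x versus the saturating model f = 1 - e^{-x}
        print(f"x = {x}: linear = {x}, saturating = {1 - math.exp(-x):.4f}")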

    You haven’t grasped my point about the line spectra. The sun emits almost all of its energy as a continuous blackbody spectrum. There are spectral lines in it (bright emission lines and dark absorption lines), but most of the energy is in a wide spectrum. The width of individual lines is small, so when you integrate over the spectrum, absorption lines in the atmosphere can’t endlessly reduce the total thermal radiation transmission.

    If you subtract a narrow line from a broadband spectrum, either completely removing the line or only partly removing it, it makes little difference. However, for band spectra there is a substantial effect.

    aktivní blb | Homepage | 06.30.07 – 7:15 pm | #

    (Note by NC: poor old Aktivní Blb! What a problem it is to argue with string theorists about anything, whether related to string or not. He has to explain everything so that a little kid could understand it, or it is misunderstood.)

  26. copy of a comment:

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/#comment-294420

    158. nigel on Jul 1st, 2007 at 4:25 am

    Marty Tysanner on Jun 20th, 2007 at 12:39 am

    “Sometimes it seemed their interest in physics is deep enough to persist and study it at the university level. … If pressed hard enough, they can become verbally abusive, or simply disappear from sight for awhile. … Check out crank.net for a good summary of many of these guys.”

    I object strongly to this claim. If you look at the page http://www.crank.net/bigbang.html which is part of http://www.Crank.net run by Erik Max Francis, you will see a sneering attack on me at the top of the page which ignores the science totally. (Francis allegedly believes himself superior to all others because he discovered how to derive Kepler’s laws from Newton’s laws; actually, Newton used Kepler’s laws to derive his laws, so Francis’ argument is circular: http://www.alcyone.com/max/ )

    Francis was described in the New York Times as follows:

    “Mr. Francis, 29, is not a scientist, and has taken only a handful of classes at a community college.” (Bonnie Rothman Morris in The New York Times of Dec. 21, 2000)

    But this quotation is a bit misleading, because when you look up the article, you see that Bonnie Rothman Morris is actually impressed by Francis precisely because he has only taken a handful of classes. She thinks that means he is really, really clever and qualified to call other people cranks: http://www.nytimes.com/2000/12/21/technology/21CRAN.html?ex=1183435200&en=04375f6836cbc7b0&ei=5070

    This is the world we live in:

    ‘Fascism is not a doctrinal creed; it is a way of behaving … What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

    Francis calls my page “illucid” which he defines as: “Something so beyond understanding that it defies classification.” – http://www.crank.net/about.html

    He gives no indication of even having read the page. If he is unable to understand it, that might indicate that he is less than clever, which would also be consistent with his educational background. However, the media and most people think it’s clever to sneer at new ideas. Fortunately, I had a hearing/speech defect as a kid and was sneered at, so I’m used to it. The fact that an idea is attacked doesn’t prove or disprove the idea; the fact that the idea is supported by natural facts is more useful. The more angry these bigoted charlatans become, the more the media thinks they must be right. Not so!

  27. copy of a comment (which didn’t immediately appear on that blog, possibly because they are changing servers; see http://cosmicvariance.com/2007/06/29/downtime-2/ ):

    http://cosmicvariance.com/2007/06/19/the-alternative-science-respectability-checklist/

    Very relevant article:

    “Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?

    by Frank J. Tipler

    Abstract- The notion that a scientific idea cannot be considered intellectually respectable until it has first appeared in a “peer” reviewed journal did not become widespread until after World War II. Copernicus’s heliocentric system, Galileo’s mechanics, Newton’s grand synthesis — these ideas never appeared first in journal articles. They appeared first in books, reviewed prior to publication only by their authors, or by their authors’ friends. Even Darwin never submitted his idea of evolution driven by natural selection to a journal to be judged by “impartial” referees. Darwinism indeed first appeared in a journal, but one under the control of Darwin’s friends. And Darwin’s article was completely ignored. Instead, Darwin made his ideas known to his peers and to the world at large through a popular book: On the Origin of Species. I shall argue that prior to the Second World War the refereeing process, even where it existed, had very little effect on the publication of novel ideas, at least in the field of physics. But in the last several decades, many outstanding physicists have complained that their best ideas — the very ideas that brought them fame — were rejected by the refereed journals. Thus, prior to the Second World War, the refereeing process worked primarily to eliminate crackpot papers. Today, the refereeing process works primarily to enforce orthodoxy. I shall offer evidence that “peer” review is NOT peer review: the referee is quite often not as intellectually able as the author whose work he judges. We have pygmies standing in judgment on giants. I shall offer suggestions on ways to correct this problem, which, if continued, may seriously impede, if not stop, the advance of science.” – http://www.iscid.org/boards/ubb-get_topic-f-10-t-000059.html

    “Frank J. Tipler is Professor of Mathematical Physics at Tulane University and a fellow with the International Society for Complexity Information and Design.”

    Notice that the very first time Einstein ever underwent peer review was in 1936, and he blew his top at the concept of peer review (how can peer review even occur when an idea is so radical that you have no really specialist “peers” to begin with?), refusing ever again to submit to that journal (the Physical Review):

    “… the final [gravitational wave denying] manuscript was prepared and sent to the Physical Review. It was returned to him accompanied by a lengthy referee report in which clarifications were requested. Einstein was enraged and wrote to the editor that he objected to his paper being shown to colleagues prior to publication. The editor courteously replied that refereeing was a procedure generally applied to all papers submitted to his journal, adding that he regretted that Einstein may not have been aware of this custom. Einstein … never published in the Physical Review again.”

    – Abraham Pais, “Subtle is the Lord: the Science and the Life of Albert Einstein”, Oxford University Press, 1982, quoted at http://www.physicstoday.org/vol-59/iss-6/p9.html

    (In this case the peer-reviewers were actually correct and Einstein was wrong. The point is, Einstein felt that his paper should have been published, and that critics should have been able to criticise it later; he did not feel it was right to censor publication because of alleged errors.) Since I’ll be falsely accused of “comparing myself to Einstein” because of this quotation, I might as well bring up Galileo as well, just to keep the bigots happy:

    1. Galileo claimed that he had discovered moons orbiting Jupiter using a home-made telescope.

    2. Cremonini (the Professor of philosophy at Padua) refused to look through the telescope because he feared the week would have to be extended from 7 days if it turned out there were more than 7 orbital bodies in the heavens, or something like that.

    Why should anybody have lifted a finger to support Galileo? What pay-off would they immediately get in return? Sacked from their jobs? Ridiculed? Ignored? Would Galileo have become a “star” laughing stock on crank.net if it had existed then? Or, with the benefit of hindsight, would Mr Francis have been able to judge Galileo to not be a crank?

  28. a second attempt to post a comment replying to Van appears to be a success:

    http://cosmicvariance.com/2007/06/26/constraints-and-signatures-in-particle-cosmology/#comment-294424

    46. nigel on Jul 1st, 2007 at 4:48 am

    Van (comment#40): the standard model already has that chiral symmetry built into it by simply setting the weak isospin charge of right handed Weyl spinors equal to zero.

    Hence only left-handed particles have a weak isospin charge and can engage in weak interactions. All I’m pointing out is that SU(2) can be extended to include gravity in the standard model, to predict the strength of electromagnetism, to predict other things about cosmology etc. There is more explanation on my blog than I can put in a comment here. Thanks.

  29. (E.g. of sneering: some time ago on Lubos’ blog, a friend of Motl, Michael Varney, referred to the crank.net page to sneer at me as crackpot. Notice that Varney is co-author of a paper in Nature 421, 922–925 (2003) on “Upper limits to submillimetre-range forces from extra space-time dimensions” that neither confirms nor denies string theory.)

  30. copy of a comment:

    http://cosmicvariance.com/2007/06/29/downtime-2/#comment-294447

    4. nigel on Jul 1st, 2007 at 6:42 am

    Niel B: I agree that running couplings (relative charge) should be plotted as a function of distance not just collision energy. It is more lucid to plot the shielded charge strength as a function of distance from the particle core, than as a function of collision energy (which is the orthodoxy). You can do this easily: the distance of closest approach of two electrons in a head-on collision occurs when their initial kinetic energies equal the electrostatic potential energy (which is proportional to charge and inversely proportional to distance). The charge is not the low energy charge, but the higher charge given by the running coupling.

    Obviously when you do this you get discontinuities in the graph corresponding to “cutoffs”: the IR cutoff occurs at the greatest distance and marks the limit where the electric charge starts to rise from its normal (Maxwellian) constant value, and the UV cutoff occurs at the shortest distance and is determined by renormalization constraints (i.e. you can’t go down to zero distance because the running coupling is proportional to the logarithm of distance or 1/energy at the highest energies, so it would become infinite as you go to zero distance, creating pairs of charges with infinite momenta, which is unphysical because it stops QED working; you have to take cutoffs or limits on the running coupling range to correctly predict the magnetic moments of leptons, the Lamb shift, etc.).
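
    (Added note: a minimal sketch of this energy-to-distance conversion, using the standard one-loop QED running coupling and the rough assumption that the momentum scale Q equals the kinetic energy E; both of those modelling choices are mine, not part of the comment above:)

    import math

    alpha0 = 1 / 137.036
    m_e = 0.511        # electron mass, MeV
    hbar_c = 197.327   # MeV*fm

    def alpha_run(Q):
        # One-loop QED running coupling (electron loop only), valid for Q >> m_e.
        return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

    # Head-on collision of two electrons, each with kinetic energy E (MeV):
    # total KE = 2E equals the Coulomb PE = alpha(E)*hbar_c/d at closest approach.
    for E in (1.0, 100.0, 10000.0):
        d = alpha_run(E) * hbar_c / (2 * E)
        print(f"E = {E} MeV: alpha = {alpha_run(E):.5f}, d = {d:.3e} fm")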

    Below the Schwinger threshold electric field strength for fermion-antifermion pair-production, (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 v/m, there are no annihilation-creation loops in the vacuum (just gauge boson exchange radiation). The distance for this is r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2} = 3.2953 * 10^{-14} metre = 32.953 fm.
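
    (Added note: a short check of the Schwinger threshold field and the 32.953 fm radius quoted above, obtained by solving e/(4*Pi*permittivity*r^2) = threshold for r:)

    import math

    m_e, c, e = 9.109e-31, 2.998e8, 1.602e-19
    hbar, eps0 = 1.055e-34, 8.854e-12

    E_c = m_e**2 * c**3 / (e * hbar)   # Schwinger threshold field, V/m
    print(f"E_c = {E_c:.2e} V/m")      # ~1.3e18

    # Radius where an electron's Coulomb field falls to the threshold:
    r = math.sqrt(e / (4 * math.pi * eps0 * E_c))
    print(f"r = {r:.3e} m = {r * 1e15:.2f} fm")      # ~3.30e-14 m, ~33 fm
    print(f"r / (classical electron radius) = {r / 2.818e-15:.2f}")  # ~11.7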

    It’s curious that this is 11.69 times the classical electron radius, 2.81794 * 10^{-15} m = 2.81794 fm. (For those who like numerology, the square root of the dimensionless factor 1/alpha = 137 is about 11.7.) The classical electron radius comes from integrating the energy of the electromagnetic field from this radius out to infinity, with the radius chosen so that the integral equals the electron’s rest-mass energy mc^2. Clearly the electron core is very much smaller than the classical electron radius, so mc^2 cannot be the total energy of the electron; it is merely the energy releasable in pair-production or annihilation phenomena, or when mass becomes binding energy. The physical explanation is probably the chaotic nature of the vacuum where the electric field strength is well above the Schwinger threshold: the pair-production energy is almost randomly directed and has near-maximum entropy, so most of it cannot be used. It’s the same as trying to extract useful energy from the kinetic energy of air molecules (air pressure): it can’t be done, because the energy has maximum entropy, so you need to supply more energy than you can possibly extract.

  31. copy of a comment:

    http://tyrannogenius.blogspot.com/2006/06/not-even-wrong.html

    Drop the question mark, please! String is not even wrong, full stop. It has to compactify 6 dimensions as a Calabi-Yau manifold with 100 or so parameters, all unknown because the manifold is too small to ever see. So there are 10^500 or more combinations of parameters you need.

    The Standard Model has 19 parameters, mainly Higgs field couplings for masses.

    String theory has at least 125 parameters, giving a landscape of 10^500 possibilities.

    The number of atoms in the observable universe is about 10^80. So string theory has 10^420 times more versions than there are atoms in the universe.

    The age of the universe is 13.7 thousand million years or 4.32*10^17 seconds old.

    So if a super-computer had been evaluating string theory since the instant of the big bang, it would need to have been working through about 10^482 models per second in order to check the consequences of the whole string theory landscape.
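
    (Added note: the landscape arithmetic above can be checked in base-10 logarithms, to avoid handling the astronomical numbers directly:)

    import math

    age = 13.7e9 * 3.156e7                    # age of the universe in seconds
    print(f"age = {age:.2e} s")               # ~4.3e17
    print(f"landscape/atoms = 10^{500 - 80}")           # 10^420
    print(f"models per second = 10^{500 - math.log10(age):.0f}")  # ~10^482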

    Furthermore, string theory is dependent on the unobserved speculations it is built on.

    For supersymmetry, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.

    How can anyone take string seriously?

  32. copy of a comment:

    http://kea-monad.blogspot.com/2007/07/m-theory-lesson-72.html

    Interesting post. I’m interested in the ordering of braids to properly explain particle spins. String theory failed to account for the different Standard Model particles as different vibrations of the same extra-dimensional string.

    So, how do the different particles come about? Something like Sundance Bilson-Thompson’s braid model seems likely. Lubos once tried to ridicule the idea by calling it “octopi swimming in the spin network”, so you know it’s worth investigating.

    If you look at the results, differences in braiding account for differences in particle spin, while differences in electric charge account for the difference between a downquark and an electron.

    This makes sense to me: compress 3 electrons in a small space (against the exclusion principle) so that the polarized vacuum of each (out to the Schwinger pair-production cutoff of (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 v/m, which occurs out to a radius of r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2} = 3.2953 * 10^{-14} metre = 32.953 fm from the middle of an electron) overlaps substantially. The shielding effect of the shared polarized vacuum would then be 3 times stronger, so the observable charge at long distances would be e/3 per electron, i.e., downquarks. The extra energy shielded by the vacuum when 3 leptons are compressed has to go somewhere: it goes into a new short-range force powered by the vacuum, mediated by colour charge. (There are also other complexities, such as isospin charge for mesons, but this basic principle still holds good.)

    The basic structure of a preon, or the unification particle behind leptons and quarks, has real spin if particles can be described as black holes consisting of trapped light-velocity radiation. The principal magnetic moment of an electron, 1 Bohr magneton, is easily explained this way. I’ve an article in Electronics World, Apr. 2003, which shows that you get the spherically symmetric E-field, the dipole B-field, and time dilation from the model of a fermion as radiation trapped into a loop by the black hole effect of gravity.

    Theorem: for any effective gravitational mass M (which includes energy E/c^2 according to general relativity) there is a black hole event horizon radius of R = 2GM/c^2. If charged electromagnetic radiation has a wavelength of that scale, it’s trapped into a tiny loop by the curvature of spacetime and becomes a fermion. It doesn’t slow down; the motion is just circular as a small loop (i.e. electron “spin”).
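
    (Added note: for a sense of scale, a one-line check of this horizon radius for an electron-mass particle, using standard G and c; the number itself is not in the comment above:)

    G, c, m_e = 6.674e-11, 2.998e8, 9.109e-31
    print(f"R = 2Gm/c^2 = {2 * G * m_e / c**2:.2e} m")  # ~1.35e-57 m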

    The connection to the Bilson-Thompson diagrams is that the Poynting vector of such trapped charged radiation can in some cases rotate while it goes round in the loop, producing an effect like a Mobius strip. This is one way that different types of particles can occur.

    There are other mechanisms as well. Obviously, the main difference between the particles in the three different “generations” of the standard model is mass. Thus, muons are effectively heavy electrons. So to understand the different generations, you need to look closely at the model which describes the masses of particles (Higgs field bosons). It’s possible to do that.

    By the way, a brilliant Nov 64 Feynman lecture, “The relation of mathematics to physics”, is now on google video!

  33. copy of a comment:

    http://kea-monad.blogspot.com/2007/07/grg18-day-3.html

    “As expected, the poster session involved a notable lack of interest in Category Theory, but the sandwiches were yummy and the company pleasant.” – Kea

    The second paragraph of the Wiki entry on “Category theory” begins:

    Category theory has several faces known, not just to specialists, but to other mathematicians. “Generalized abstract nonsense” refers, not entirely affectionately, to its high level of abstraction, compared to more classical branches of mathematics.

    (Emphasis added.)

    I don’t think that this explanation is that helpful.

    The most abstract mathematical tools are generally the most valuable once they are widely understood and applied to appropriate problems. It’s still early days for category theory.

    I also keep getting confused between functors and functions when I read about it (because the symbols used are similar and I’m more used to thinking about functions), not to mention the different morphisms.

    One thing to do is to keep working on it, and to try to find successful ways of applying Category Theory to explaining preon theory or some related theory of how the different sets of quantum numbers (including weak and strong charges, masses, etc.) of the fundamental particles of physics are really related to one another by morphisms.

    You don’t necessarily have to have a physical mechanism explaining how the morphism physically occurs. It can just be a mathematical representation of what happens, and what the relationships between different fundamental particles really are.

    I think the key thing here is the relationship of leptons to quarks. Quark properties are only known through composites of 2 or 3 quarks, because quarks can’t be isolated. The fact of universality, e.g., similarities between lepton and quark decay processes

    muon -> electron + electron antineutrino + muon neutrino

    for leptons and

    neutron -> proton + electron + electron antineutrino

    for quarks, hints that quarks and leptons are surprisingly similar, when ignoring the strong force. (My preliminary investigations on the relationship are here, here and here.)

    The problem is how much time it takes to apply new maths to solving these physical problems.

    I like the fact that although you are a mathematician, you are free to go to physics conferences and study that stuff, at least as far as your time allows. The standard mathematical tools of particle physics, like Lie and Clifford algebras, aren’t focussed on modelling the morphisms between different fundamental particles (transformations between leptons and quarks obviously haven’t been observed yet, but they probably are possible at very high energy in certain situations). That would appear to be an ideal area to try to apply Category Theory to, because you have a table of particle properties and just have to find the correct morphisms between them. That’s very important for trying to understand how unification can occur at high energy, and could lead to quantitative, falsifiable predictions (the old unification ideas like supersymmetry are not even wrong). I hope to learn a lot more about Category Theory.

  34. copy of a relevant email:

    From: “Nigel Cook”
    To: “Guy Grantham” ; “David Tombe” ; ; ; ; ;
    Cc: ; ; ; ; ; ; ; ; ; ; ; ; ;
    Sent: Friday, July 27, 2007 10:10 AM
    Subject: Re: The Effect of Gravity on Light

    Dear Guy,

    Light is an example of a massless boson. There is an error in Maxwell’s model of the photon: he draws it with the variation of electric field (and magnetic field) occurring as a function of distance along the longitudinal axis, say the x axis.

Maxwell uses the z and y axes to represent not distances but magnetic and electric field STRENGTHS.

    These field strengths are drawn to vary as a function of one spatial dimension only, the propagation direction.

    Hence, he has drawn a pencil of light, with zero thickness and with no indication of any transverse waving.

What happens is that people look at it and think the waving E-field line is a physical outline of the photon, and that the y axis is not electric field strength but distance in the y-direction.

    In other words, they think it is a three dimensional diagram, when in fact it is one dimensional (x-axis is the only dimension; the other two axes are field strengths varying solely as a function of distance along the x-axis).

    I explained this to Catt, but he wasn’t listening, and I don’t think others listen either.

    The excellent thing is that you can correct the error in Maxwell’s model to get a real transverse wave, and then you find that it doesn’t need to oscillate at all in the longitudinal direction in order to propagate! This is because the variation in E-field strength and B-field strength actually occurs at right angles to the propagation direction (which is the opposite of what Maxwell’s picture shows when plotting these field strengths as a variation along the longitudinal axis or propagation direction of light, not the transverse direction!).

Maxwell’s drawing of a light photon in the third and final edition of his 1873 A Treatise on Electricity and Magnetism is actually a longitudinal wave, because the two variables (E and B) vary solely as a function of the propagation direction x, not as functions of the transverse directions y and z, which aren’t represented in the diagram (it uses y and z to represent field strengths along x, instead of directions y and z in real space).
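To make the one-dimensionality concrete, here is a minimal plotting sketch (my own illustration, assuming a simple sine waveform): both “transverse” curves are field strengths, and both are functions of x alone:

```python
# Re-plotting Maxwell's diagram honestly: a one-dimensional plot.
# The vertical axes are field STRENGTHS, not transverse distances.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 500)    # distance along the propagation (x) axis, arbitrary units
E = np.sin(2 * np.pi * x)     # electric field strength E(x)
B = np.sin(2 * np.pi * x)     # magnetic field strength B(x), in phase with E in vacuum

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(x, E)
ax1.set_ylabel("E field strength")
ax2.plot(x, B)
ax2.set_ylabel("B field strength")
ax2.set_xlabel("x, the ONLY spatial dimension in the diagram")
plt.show()
```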

    The full description of the gauge boson can be found in figures 2, 3 and 4 of:

    https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    Best wishes,
    Nigel

    —– Original Message —–
    From: “Guy Grantham”
    To: “Nigel Cook” ; “David Tombe” ; ; ; ; ;
    Cc: ; ; ; ; ; ; ; ; ; ; ; ; ;
    Sent: Thursday, July 26, 2007 6:55 PM
    Subject: Re: The Effect of Gravity on Light

    >
    > Dear Nigel
    > My cynicism has been tweaked again … just what *is* a “massless boson”.
    > ie what *is* a photon in your QFT? (What *is* a hole in a
    > semiconductor)?
    > Is it real and how would you know it from a figment of the imagination
    > having the convenient function it is said to accomplish?
    > I can accept it as pseudo particle representing the action of a wave
    > transferring energy but that requires a medium in which to propagate.
    > I do not understand how energy can travel as a slab through totally empty
    > vacuum space, as previously described.
> I can accept that mass is not apparent when fully bound, but not that a
> particle has *no* mass.
    >
    > Would you please explain it to me.
    >
    > Best regards, Guy

  35. copy of another relevant email:

    From: “Nigel Cook”
    To: “David Tombe” ; ; ; ; ; ;
    Cc: ; ; ; ; ; ; ; ; ; ; ; ; ;
    Sent: Friday, July 27, 2007 9:54 AM
    Subject: Re: The Effect of Gravity on Light

    Dear David,

    The electrons and positrons we see are not the same thing as an “aether”, or
    the vacuum would be full of matter (electrons and positrons), which could be
    polarized.

    The complete absence of vacuum polarization at electric field strengths
    below 1.3*10^18 volts/metre, Schwinger’s threshold for pair production in
    QED, dispenses with your (and the Simhony/Grantham) electron-positron ether
    as a medium which allows radiation propagation at field strengths below this
    threshold.

    As stated about a year ago to you, if the vacuum could polarize at low field
    strengths, the effective electron charge seen from a large distance would be
    exactly zero. It isn’t, because of the IR cutoff on the running coupling
    for the screening of the electron’s charge by the polarized vacuum.

    More information:

    http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

    This dispenses with the idea that an electron-positron “aether” has any role
    in allowing the propagation of radiation. Ignoring these facts doesn’t make
    your model more rigorous.

    Best wishes,
    Nigel

    —– Original Message —–
    From: “David Tombe”
    To: ; ; ;
    ; ;
    ;
    Cc: ; ; ;
    ; ;
    ; ; ;
    ; ; ;
    ; ;

    Sent: Thursday, July 26, 2007 4:02 PM
    Subject: Re: The Effect of Gravity on Light

    > Dear Nigel,
    > You are confusing two separate issues here. We need to
    > distinguish between what goes on between the electrons and positrons and
    > what is the wider effect of the electron positron sea as a whole.
    >
    > I don’t encounter any of the frictional problems that you
    > are talking about for EM theory because in my view, EM radiation is about
    > the manner in which angular acceleration is propagated from one electron
    > positron dipole to the next.
    >
    > In the wider electric sea we may have friction for fast
    > moving bodies. On the other hand, centrifugal pressure and solenoidal
    > alignement should have a significant effect on reducing friction or maybe
    > even eliminating it.
    >
    > Yours sincerely
    > David Tombe
    >
    > —-Original Message Follows—-
    > From: “Nigel Cook”
    > Reply-To: “Nigel Cook”
    > To: “David Tombe”
    > ,,,,,,
    > CC:
    > ,,,,,,,,,,,,,
    > Subject: Re: The Effect of Gravity on Light
    > Date: Thu, 26 Jul 2007 14:34:13
    >
    > “Gravity and EM radiation both involve aether flow.” – David.
    >
    > Dear David,
    >
    > You might as well say
    >
    > “Gravity and EM radiation both involve Wakalixes flow.”
    >
    > (For information on what Wakalixes are, please see
    > http://www.textbookleague.org/103feyn.htm .)
    >
    > Don’t you think that aether of the kind you’re suggesting would behave
> like a gas, and slow things down, contrary to Newton’s 1st law of motion?
    >
    > That kind of drag occurs because the electron-positron aether is comprised
    > of fermions, which obey the exclusion principle and interfere with one
    > another, so energy gets spread out in that kind of aether (if it existed
    > as in your picture), making moving bodies lose energy and slow down.
    >
    > Massless (no rest mass) bosonic radiation has the advantage that the
    > bosons which interact with a body don’t dissipate energy by interacting
    > with one another, so they behave as a perfect fluid and don’t slow things
    > down. The bosonic field doesn’t heat up in causing forces, unlike a
    > fermion composed sea such as your electron and positron aether.
    >
    > Best wishes,
    > Nigel
    >
    >
    > —– Original Message —– From: “David Tombe”
    > To: ; ;
    > ; ;
    > ; ;
    >
    > Cc: ; ; ;
    > ; ;
    > ; ; ;
    > ; ; ;
    > ; ;
    >
    > Sent: Thursday, July 26, 2007 12:46 PM
    > Subject: The Effect of Gravity on Light
    >
    >
    >>Dear Nigel,
    >> It’s true that I can’t predict the electric permittivity of
    >> the pure aether. But that doesn’t detract from the fact that the only
    >> function that satisfies E in EM radiation is -(partial)dA/dt where curl A
    >> = B.
    >>
    >> -(partial)dA/dt must therefore represent tangential
    >> acceleration.
    >>
    >> Gravity and EM radiation both involve aether flow. But EM
    >> radiation is a vortex flow of rms velocity c. Gravitation is a radial
    >> flow that imparts its acceleration to particles.
    >>
    >> It would of course probably follow that if the gravity
    >> inflow velocity were greater than the speed of light, that light would be
    >> unable to escape as it wouldn’t be able to overcome the flow.
    >>
    >> Yours sincerely
    >> David Tombe
    >>
    >>
    >>
    >>—-Original Message Follows—-
    >>From: “Nigel Cook”
    >>Reply-To: “Nigel Cook”
    >>To: “David Tombe”
    >>,,,,,,
    >>CC:
    >>,,,,,,,,,,,,,
    >>Subject: Re: Irrotational Flow in Little Switzerland
    >>Date: Thu, 26 Jul 2007 12:34:31
    >>
    >>Dear David,
    >>
    >>I seem to think otherwise because the mechanism predicts gravitation
    >>accurately as well as predicting electromagnetism accurately, and many
    >>other things, see https://nige.wordpress.com/about/ . I don’t see the
    >>evidence for your claims. Yes you can cook up vortex formulae that you
    >>seem to think look like Gauss’s law and Newton’s law, but you can’t also
    >>predict the values of the fundamental constants which determine the
    >>strengths of gravity and electromagnetism, etc.
    >>
    >>This is why your claims that I’ve got it wrong are a bit of political
    >>spin. If I’ve got it wrong, then it’s a coincidence that I’m predicting
    >>all the constants accurately!
    >>
    >>If I were wrong, it would be better than being “not even wrong”, not
    >>making any checkable calculations… It’s pretty easy to say everything
    >>is due to aether swirls.
    >>
    >>Best wishes,
    >>Nigel
    >>
    >>
    >>
    >>—– Original Message —– From: “David Tombe”
    >>To: ; ;
    >>; ;
    >>; ;
    >>
    >>Cc: ; ; ;
    >>; ;
    >>; ; ;
    >>; ; ;
    >>; ;
    >>
    >>Sent: Thursday, July 26, 2007 11:26 AM
    >>Subject: Re: Irrotational Flow in Little Switzerland
    >>
    >>
    >>>Dear Nigel,
    >>> You’ve got it all wrong. Centripetal acceleration has got
    >>> absolutely nothing to do with the EM radiation mechanism. In fact
    >>> neither does centrifugal acceleration.
    >>>
    >>> The component involved in EM radiation is the ‘angular
    >>> acceleration’ which doesn’t even exist in Keplerian orbits. EM radiation
    >>> is linked to vorticity. Radiation exchange may well occur but it doesn’t
    >>> actually cause gravity as you seem to think. It is part of the same
    >>> overall mechanism as gravity.
    >>>
    >>> Yours sincerely
    >>> David Tombe
    >>>
    >>>
    >>>
    >>>—-Original Message Follows—-
    >>>From: “Nigel Cook”
    >>>Reply-To: “Nigel Cook”
    >>>To: “David Tombe”
    >>>,,,,,,
    >>>CC:
    >>>,,,,,,,,,,,,,
    >>>Subject: Re: Irrotational Flow in Little Switzerland
    >>>Date: Thu, 26 Jul 2007 01:31:52
    >>>
    >>>Irrotational flow is fine, just admit the possibility that there is
    >>>exchange! Traditionally, the fact that electrons in orbit should be
    >>>radiating due to centripetal acceleration has been ignored, because Bohr
    >>>thought electrons would lose energy by radiating. Clearly, he was
    >>>assuming that only one electron in the universe was radiating while
    >>>orbiting an atom! When you take account of the fact that all electrons do
>>>the same thing, the radiation emitted is soon in equilibrium with that
>>>received: it’s the exchange radiation.
    >>>Similarly, Hawking’s idea that black holes must evaporate if they are
    >>>real simply because they are radiating, is flawed: air molecules in my
    >>>room are all radiating energy, but they aren’t getting cooler: they are
    >>>merely exchanging energy. There’s an equilibrium.
    >>>
    >>>Moving to Hawking’s heuristic mechanism of radiation emission, he writes
    >>>that pair production near the event horizon sometimes leads to one
    >>>particle of the pair falling into the black hole, while the other one
    >>>escapes and becomes a real particle. If on average as many fermions as
    >>>antifermions escape in this manner, they annihilate into gamma rays
    >>>outside the black hole.
    >>>
    >>>Schwinger’s threshold electric field for pair production is 1.3*10^18
    >>>volts/metre. So at least that electric field strength must exist at the
    >>>event horizon, before black holes emit any Hawking radiation! (This is
    >>>the electric field strength at 33 fm from an electron.) Hence, in order
    >>>to radiate by Hawking’s suggested mechanism, black holes must carry
>>>enough electric charge to make the electric field at the event horizon
    >>>radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.
    >>>
    >>>Schwinger’s critical threshold for pair production is E_c =
    >>>(m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in
    >>>http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in
    >>>http://arxiv.org/abs/hep-th/0510040
    >>>
    >>>Now the electric field strength from an electron is given by Coulomb’s
    >>>law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so
    >>>
    >>>E = Q/(4*Pi*Permittivity*R^2) v/m.
    >>>
    >>>Setting this equal to Schwinger’s threshold for pair-production,
    >>>(m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum
    >>>radius out to which fermion-antifermion pair production and annihilation
    >>>can occur is
    >>>
    >>>R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.
    >>>
>>>where Q is the black hole’s electric charge, e is the electronic charge, and
>>>m is the electron’s mass. Set this R equal to the event horizon radius
    >>>2GM/c^2, and you find the condition that must be satisfied for Hawking
    >>>radiation to be emitted from any black hole:
    >>>
    >>>Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)
    >>>
    >>>where M is black hole mass. So the amount of electric charge a black hole
    >>>must possess before it can radiate (according to Hawking’s mechanism) is
    >>>proportional to the square of the mass of the black hole. This is quite a
    >>>serious problem for big black holes and frankly I don’t see how they can
    >>>ever radiate anything at all.
    >>>
    >>>On the other hand, it’s interesting to look at fundamental particles in
    >>>terms of black holes (Yang-Mills force-mediating exchange radiation may
    >>>be Hawking radiation in an equilibrium).
    >>>
    >>>When you calculate the force of gauge bosons emerging from an electron as
    >>>a black hole (the radiating power is given by the Stefan-Boltzmann
    >>>radiation law, dependent on the black hole radiating temperature which is
    >>>given by Hawking’s formula), you find it correlates to the
    >>>electromagnetic force, allowing quantitative predictions to be made. See
    >>>https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997
    >>>for example.
    >>>
    >>>You also find that because the electron is charged negative, it doesn’t
    >>>quite follow Hawking’s heuristic mechanism. Hawking, considering
    >>>uncharged black holes, says that either of the fermion-antifermion pair
>>>is equally likely to fall into the black hole. However, if the black hole
    >>>is charged (as it must be in the case of an electron), the black hole
    >>>charge influences which particular charge in the pair of virtual
    >>>particles is likely to fall into the black hole, and which is likely to
    >>>escape. Consequently, you find that virtual positrons fall into the
    >>>electron black hole, so an electron (as a black hole) behaves as a source
    >>>of negatively charged exchange radiation. Any positive charged black hole
    >>>similarly behaves as a source of positive charged exchange radiation.
    >>>
    >>>These charged gauge boson radiations of electromagnetism are predicted by
    >>>an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of
    >>>https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/
    >>>
    >>>For quantum gravity mechanism and the force strengths, particle masses,
    >>>and other predictions resulting, please see
    >>>https://nige.wordpress.com/about/
    >>>
    >>>
    >>>—– Original Message —– From: “David Tombe”
    >>>To: ; ;
    >>>; ;
    >>>; ;
    >>>
    >>>Cc: ; ;
    >>>; ;
    >>>; ; ;
    >>>; ; ;
    >>>; ; ;
    >>>
    >>>Sent: Thursday, July 26, 2007 1:14 AM
    >>>Subject: Irrotational Flow in Little Switzerland
    >>>
    >>>
>>>>Dear Forrest,
>>>> If you liked the picture and want to go there, I’ll tell you how I first discovered it.
>>>>
>>>> That picture was one of many pictures high up above Snake Alley in Taipei. I first saw it in 1998 but it took more than two years for me to actually find out where the place itself is.
>>>>
>>>> It is in the hills above Tainan in southern Taiwan, in a region called ‘Little Switzerland’ in Chinese. (Taipei means Taiwan-North and Tainan means Taiwan-South.)
>>>>
>>>> It is a piece of Japanese engineering from the early 1960s and its purpose is to divert water down a tunnel to a nearby reservoir.
>>>>
>>>> A fence keeps you from getting too close to it. There is no safety grid across the sink. You climb over the fence and go near it at your own risk.
>>>>
>>>> If you want to go there, you had better take this picture with you and get somebody to write down ‘Little Switzerland’ in Chinese characters.
>>>>
>>>> Yours sincerely
>>>> David Tombe

  36. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    “Theoretically if an accelerator fired enough mass into a tiny space a singularity would be created. The Black Hole would almost instantly evaporate, but could be detected via Hawking radiation. Unfortunately quantum mechanics says that a particle’s location can not be precisely measured. This quantum uncertainty would prevent us from putting enough mass into a singularity.”

    I disagree with Lisa Randall here. It depends on whether the black hole is charged or not, which changes the mechanism for the emission of Hawking radiation.

    The basic idea is that in a strong electric field, pairs of virtual positive fermions and virtual negative fermions appear spontaneously. If this occurs at the event horizon of a black hole, one of the pair can at random fall into the black hole, while the other one escapes.

However, there is a factor Hawking and Lisa Randall ignore: the requirement that the black hole have electric charge in the first place. Pair production has only been demonstrated to occur in strong fields, namely the Standard Model’s strong and electromagnetic force fields (nobody has ever seen pair production occur in an extremely weak gravitational field).

Hawking ignores the fact that pair production in quantum field theory (according to Schwinger’s calculations, which very accurately predict other things like the magnetic moments of leptons and the Lamb shift in the hydrogen spectrum) requires a net electric field to exist at the event horizon of the black hole.

    This in turn means that the black hole must carry a net electric charge and cannot be neutral if there is to be any Hawking radiation.

    In turn, this implies that Hawking radiation in general is not gamma rays as Hawking claims it is.

    Gamma rays in Hawking’s theory are produced just beyond the event horizon of the black hole by as many virtual positive fermions as virtual negative fermions escaping and then annihilating into gamma rays.

    This mechanism can’t occur if the black hole is charged, because the net electric charge [which is required to give the electric field which is required for pair-production in the vacuum in the first place] of the black hole interferes with the selection of which virtual fermions escape from the event horizon!

    If the black hole has a net positive charge, it will skew the distribution of escaping radiation so that more virtual positive charges escape than virtual negative charges.

    This, in turn, means that the escaped charges beyond the event horizon won’t be equally positive and negative; so they won’t be able to annihilate into gamma rays.

    It’s strange that Hawking has never investigated this.

    You only get Hawking radiation if the black hole has an electric charge of Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar).

    (This condition is derived below.)

    The type of Hawking radiation you get emitted is generally going to be charged, not neutral.

My understanding is that fermions and bosons are both built from fundamental preons. As Carl Brannen and Tony Smith have suggested, fermions may be a triplet of preons, to explain the three generations of the standard model and the colour charge in SU(3) QCD.

Bosons of the classical photon variety would generally have two preons, because their electric field oscillates from positive to negative (the positive electric field half cycle constitutes an effective source of positive electric charge and can be considered to be one preon, while the negative electric field half cycle in a photon can be considered another preon).

    Hence, there are definite reasons to suspect that all fermions are composed of three preons, while bosons consist of pairs of preons.

    Considering this, Hawking radiation is more likely to be charged gauge boson radiation. This does explain electromagnetism if you replace the U(1)xSU(2) electroweak unification with an SU(2) electroweak unification, where you have 3 gauge bosons which exist in both massive forms (at high energy, mediating weak interactions) and also massless forms (at all energies), due to the handedness of the way these three gauge bosons acquire mass from a mass-providing field. Since the standard model’s electroweak symmetry breaking (Higgs) field fails to make really convincing falsifiable predictions (there are lots of versions of Higgs field ideas making different “predictions”, so you can’t falsify the idea easily), it is very poor physics.

    Sheldon Glashow and Julian Schwinger investigated the use of SU(2) to unify electromagnetism and weak interactions in 1956, as Glashow explains in his Nobel lecture of 1979:

    ‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).’

    This is plain wrong: Glashow and Schwinger believed that electromagnetism would have to be explained by a massless uncharged photon acting as the vector boson which communicates the force field.

    If they had considered the mechanism for how electromagnetic interactions can occur, they would have seen that it’s entirely possible to have massless charged vector bosons as well as massive ones for short range weak force interactions. Then SU(2) gives you six vector bosons:

    Massless W_+ = +ve electric fields
    Massless W_- = -ve electric fields
    Massless Z_o = graviton (neutral)

    Massive W_+ = mediates weak force
    Massive W_- = mediates weak force
    Massive Z_o = neutral currents

    Going back to the charged radiation from black holes, massless charged radiation mediates electromagnetic interactions.

The idea that black holes must evaporate if they are real, simply because they are radiating, is flawed: air molecules in my room are all radiating energy, but they aren’t getting cooler; they are merely exchanging energy. There’s an equilibrium.

    Equations

To derive the condition, start from Hawking’s heuristic mechanism of radiation emission: pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.

    Schwinger’s threshold electric field for pair production is: E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040

So at least that electric field strength must exist at the event horizon, before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking’s suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.

    Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so

    E = Q/(4*Pi*Permittivity*R^2) v/m.

    Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

where Q is the black hole’s electric charge, e is the electronic charge, and m is the electron’s mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

    Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

    where M is black hole mass.

    So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole.
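These steps are easy to check numerically. Here is a minimal sketch (standard constants; the solar mass is just an arbitrary illustrative choice of M):

```python
# Numeric check of the Schwinger threshold, the 33 fm electron radius,
# and the critical black hole charge derived above.
from math import pi, sqrt

hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
e    = 1.602e-19    # C  (electronic charge)
m    = 9.109e-31    # kg (electron mass)
eps0 = 8.854e-12    # F/m (vacuum permittivity)
G    = 6.674e-11    # m^3 kg^-1 s^-2

# Schwinger threshold E_c = m^2 c^3 / (e hbar):
E_c = m**2 * c**3 / (e * hbar)
print(f"E_c = {E_c:.2e} V/m")            # ~1.3e18 V/m, as quoted

# Radius at which an electron's Coulomb field equals E_c:
r = sqrt(e / (4 * pi * eps0 * E_c))
print(f"r = {r:.1e} m")                  # ~3.3e-14 m, i.e. 33 fm

# Critical charge Q > 16*Pi*eps0*(mMG)^2/(c*e*hbar), for an
# illustrative (arbitrarily chosen) solar-mass black hole:
M = 1.989e30                             # kg
Q = 16 * pi * eps0 * (m * M * G)**2 / (c * e * hbar)
print(f"Q > {Q:.1e} C")                  # grows as M^2: hopeless for big holes
```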

    On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

    When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.

To summarize: Hawking, considering uncharged black holes, says that either of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positively charged black hole similarly behaves as a source of positively charged exchange radiation.

    These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

It’s amazing how ignorant mainstream people are about this. They don’t understand that charged massless radiation can only propagate if there is an exchange (vector boson radiation going in both directions between charges) so that the magnetic field vectors cancel, preventing infinite self-inductance.

    Hence the whole reason why we can only send out uncharged photons from a light source is that we are only sending them one way. Feynman points out clearly that there are additional polarizations but observable long-range photons only have two polarizations.

    It’s fairly obvious that between two positive charges you have a positive electric field because the exchanged vector bosons which create that field are positive in nature. They can propagate despite being massless because there is a high flux of charged radiation being exchanged in both directions (from charge 1 to charge 2, and from charge 2 to charge 1) simultaneously, which cancels out the magnetic fields due to moving charged radiation and prevents infinite self-inductance from stopping the radiation. The magnetic field created by any moving charge has a directional curl, so radiation of similar charge going in opposite directions will cancel out the magnetic fields (since they oppose) for the duration of the overlap.

All this is well known experimentally from sending logic signals along transmission lines, which behave as photons. E.g., you need two parallel conductors at different potentials to cause a logic signal to propagate, each conductor containing a field waveform which is an exact inverted image of that in the other (the magnetic fields around each of the conductors cancel the magnetic field of the other conductor, preventing infinite self-inductance).

Moreover, the full mechanism for this version of SU(2) makes lots of predictions. So fermions are black holes, and the charged Hawking radiation they emit is the gauge bosons of electromagnetism and weak interactions.

Presumably the neutral radiation is emitted by electrically neutral field quanta which give rise to the mass (gravitational charge). Gravity is so weak because it is mediated by electrically neutral vector bosons.

  37. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    Tony,

    You wrote here (that is a U.S. Amazon book discussion comment, where I can’t contribute as participants need to have bought books from the U.S. Amazon site, and being in England I’ve only bought books from Amazon.co.uk):

    … shortly after Baez described his Six Mysteries in Ontario, I sent an e-mail message to Smolin saying:

‘… I would like to present, at Perimeter, answers to those questions, as follows: Mysteries 2 and 3: The Higgs probably does exist, and is related to a Tquark-Tantiquark condensate, and mass comes from the Standard Model Higgs mechanism, producing force strengths and particle masses consistent with experiment, as described in http://www.valdostamuseum.org/hamsmith/YamawakiNJL.pdf and http://www.valdostamuseum.org/hamsmith/TQ3mHFII1vNFadd97.pdf

‘Mystery 4: Neutrino masses and mixing angles consistent with experiment are described in the first part of this pdf file http://www.valdostamuseum.org/hamsmith/NeutrinosEVOs.pdf Mystery 5: A partial answer: If quarks are regarded as Kerr-Newman black holes, merger of a quark-antiquark pair to form a charged pion produces a toroidal event horizon carrying sine-Gordon structure, so that, given up and down quark constituent masses of about 312 MeV, the charged pion gets a mass of about 139 MeV, as described in http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf Mysteries 6 and 1: The Dark Energy : Dark Matter : Ordinary Matter ratio of about 73 : 23 : 4 is described in http://www.valdostamuseum.org/hamsmith/WMAPpaper.pdf

    I’m extremely interested in this, particularly the idea that the mass-providing boson is a condensate particle formed of a Top quark and an anti-Top quark, like a meson. I’m also extremely interested in quarks modelled as Kerr-Newman black holes in the pion, to predict the mass. Your mathematical technical approach is not easy going for me, however.

    Maybe I can outline some independent information I’ve acquired regarding three basic scientific confirmations that fermions are indeed black holes, emitting gauge bosons at a tremendous rate as a form of Hawking radiation:

    (1) The “contrapuntal model for the charged capacitor”, which I’ll explain in detailed numbered steps below:

(1.a) All electric energy carried by conductors travels at the velocity of light in the insulator around the conductors.

(1.b) A small section of a (two-conductor) transmission line can be charged up like a capacitor, and behaves like a simple capacitor, storing electric energy.

(1.c) Charge up that piece of transmission line, using sampling oscilloscopes to record what happens, and you learn that energy flows into it at the velocity of light in the insulator.

    (1.d) There is no mechanism for that electricity to suddenly slow down when it enters a capacitor. It can’t physically slow down. It reflects off the open circuit at the far end and is trapped in a loop, going up and down the transmission line endlessly. This produces the apparently “static” electric field in all charges. The magnetic fields from each component of the trapped energy (going in opposite directions) curl in different directions around the propagation direction, so the magnetic field cancels out.

    (1.e) The “field” (electromagnetic vector boson exchange radiation) that causes electromagnetic forces controls the speed of the logic signal, and the electron drift speed (1 millimetre/second for 1 Amp in typical domestic electric cables) has nothing to do with it.

    (1.f) Electricity in paired conductors is primarily driven by vector boson radiation (comprising the electromagnetic “field”). The electron drift current, although vital for supplying electrons to chemical reactions and to cathode emitters in vacuum tubes, is pretty much irrelevant as far as the delivery of electric energy is concerned. (It’s easy to calculate what the kinetic energy of all the electron drift in a cable amounts to, and it is insignificant compared to the amount of energy being delivered by electricity. This is because of the low speed of the electron drift in typical situations, combined with the fact that the conduction electrons have little mass so their total mass is typically just ~0.1% of the mass of the conductors. Kinetic energy E = (1/2)mv^2 tells you that for small m and tiny drift velocity v, electron drift is not the main source of energy delivery in ordinary electricity. Instead, gauge/vector bosons in the EM field are responsible for delivering the energy. Hence, by a close study of the details of how logic pulses interact and charge up capacitors – which is not modelled accurately by Maxwell’s classical model – something new about the EM vector bosons of QFT may be deduced from solid, repeatable experimental data!)
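As a rough numerical illustration of that parenthetical claim, here is a sketch with assumed typical values (a 1 mm^2 copper conductor carrying 1 A; the conduction-electron density is the standard figure for copper, and the exact drift speed depends on the cross-section assumed):

```python
# Rough check: the kinetic energy of electron drift in a cable is
# negligible compared with the electric power the cable delivers.
m_e = 9.109e-31   # kg, electron mass
e   = 1.602e-19   # C
n   = 8.5e28      # /m^3, conduction electron density of copper (assumed)
A   = 1e-6        # m^2, assumed 1 mm^2 cross-section
L   = 1.0         # m of cable considered
I   = 1.0         # A

v_drift = I / (n * A * e)             # drift velocity for 1 A
N = n * A * L                         # conduction electrons in the cable length
KE = 0.5 * N * m_e * v_drift**2       # total drift kinetic energy

print(f"drift velocity = {v_drift:.1e} m/s")   # ~7e-5 m/s for these values
print(f"drift KE in 1 m = {KE:.1e} J")         # ~2e-16 J: utterly negligible
```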

(1.g) The trapped light-velocity energy in a capacitor is unable to slow down, and the effect of it being trapped leads to the apparently “static” electric field and nil magnetic field (as explained in 1.d above). Another effect of the trapping of energy is that there is no net electric field along the charged-up capacitor plate: the potential is the same number of volts everywhere, so there is no gradient (i.e., there are no volts/metre) and thus no electron drift current. Without electron drift current, we have no resistance, because resistance is due to moving conduction-band electrons colliding with the conductor’s metal lattice and releasing heat as a result of the deceleration. There is merely energy bouncing at light speed in all directions in any charged object.

There is also the effect of electric charge in the form of electrons that drift into one capacitor plate (the negative one) and out of the other plate (the positive one), while the capacitor is charging up.

(1.h) Now for electrons. The capacitor model (1.g above) explains how gauge boson radiation (the field) gets trapped in a capacitor. Experiments by I.C., who pioneered research on logic signal crosstalk in the 60s, confirmed this: a capacitor receives energy at the velocity of light in the insulator of the feed transmission line, the energy that gets trapped in a transmission line can’t slow down, and it exits at light speed when discharged. He, together with two other engineers, also showed how to get Maxwell’s exponential charging law, 1 – e^(-t/RC), out of this model, although the model contains various errors and omissions in the physics. However, the main results are correct.

When you discharge a capacitor charged to v volts (such as a charged length of cable), instead of getting a pulse of v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get a pulse of v/2 volts taking 2x/c seconds to exit. In other words, the half of the energy already moving towards the exit end exits first; that gives a pulse of v/2 volts lasting x/c seconds. Then the half of the energy initially going the wrong way has had time to reach the far end, reflect back, and follow the first half of the energy. This gives the second half of the output: another pulse of v/2 volts lasting for another x/c seconds, following straight on from the first pulse. Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds. This is experimental fact.

It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals). (Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a “step”: an unphysical discontinuity which would imply an infinite rate of change of the field at the instant of the step, causing infinite “displacement current”. This error is inherited by Catt, Davidson, and Walton, and blocks a complete understanding of the mechanisms at work.)
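A minimal discrete-step sketch of that discharge behaviour (my own toy model, not from the published work: the static charge is treated as two counter-propagating v/2 components, the far end is open and reflective, and the output end is matched):

```python
# Toy model of discharging a charged transmission line of length x:
# the static v volts is treated as two superposed v/2 components
# travelling in opposite directions at speed c. On discharge, the
# right-going half exits immediately; the left-going half reflects
# off the open far end and follows it.
v, cells = 10.0, 100                 # charged to 10 V; 100 spatial cells
right = [v / 2] * cells             # right-going component (towards the load)
left  = [v / 2] * cells             # left-going component (towards the open end)

output = []
for step in range(2 * cells):       # 2x/c worth of cell-transit times
    output.append(right[-1])        # voltage appearing at the load end
    right = [left[0]] + right[:-1]  # open far end reflects left-going energy
    left  = left[1:] + [0.0]        # matched load absorbs; nothing re-enters

print(output[:5], "...", output[-5:])  # 5.0 V (= v/2) for the whole 2x/c
```

The printed output is v/2 for the full 2x/c transit, matching the measured behaviour described above.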

    Using the model of trapped gauge boson radiation to represent static charge, the electron is understood to be a trapped charged gauge boson. The only way to trap a light velocity gauge boson like this is for spacetime curvature (gravitation) to trap it in a loop, hence it’s a black hole.

In the August 2002 issue of the British journal Electronics World there is an illustration demonstrating that for such a looped gauge boson, the electric field lines – at long distances compared to the black hole radius – diverge as given by Gauss’s/Coulomb’s law, while the magnetic field lines circling around the looped propagation direction form a toroidal shape near the electron’s black hole radius; but at large distances the result of the cancellations is that you just see a magnetic dipole, which is a feature of leptons.

(2) The second piece of empirical evidence that fermions can be modelled by black holes that I’ve come across is in connection with gravity calculations. If the outward acceleration of the mass of the universe creates a force like F = ma (a force on the order of 7*10^43 Newtons, although there are various obvious corrections you can think of, such as the effect of the higher density of the universe at earlier times and greater distances – I’ve undertaken some such calculations on my newer blog – and questions over how much “dark matter” there is behaving like mass and accelerating away from us), where m is the mass of the universe and a is its acceleration, then Newton’s 3rd law suggests an equal inward force, which according to the possibilities available would seem to be carried by the vector bosons that cause forces.

    To test this, we work out what cross-sectional shielding area an electron would need to have in order that the shielding of the inward-directed force would give rise to gravity as an asymmetry effect (this asymmetry idea as the cause of gravity is an idea sneered at and ignorantly dismissed for false reasons, and variously credited to Newton’s friend Fatio or to Fatio’s Swiss plagiarist, Georges LeSage).

    It turns out that the cross-sectional area of the electron would be Pi*(2GM/c^2)^2 square metres where M is the electron’s rest mass, which implies an effective electron radius of 2GM/c^2, which is the event horizon radius for a black hole.
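For scale, a one-line numeric check of that cross-section (standard constants):

```python
# Scale check: black hole event horizon radius and cross-section
# for the electron's rest mass.
from math import pi
G, c, m = 6.674e-11, 2.998e8, 9.109e-31
r = 2 * G * m / c**2            # ~1.35e-57 m
sigma = pi * r**2               # ~5.7e-114 m^2
print(f"r = {r:.2e} m, cross-section = {sigma:.1e} m^2")
```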

This is the second piece of evidence that an electron is related to a black hole, although it is not a strong piece of evidence in my view, because the result could be just a coincidence.

(3) The third piece of evidence is a different calculation for the gravity mechanism discussed in (2) above. A simple physical argument allows the derivation of the actual cross-sectional shielding area for gravitation, and this calculation can be found as “Approach 2” on my blog page here.

    When combined with the now-verified earlier calculation, this new approach allows gravity strength to be predicted accurately as well as giving evidence that fermions have a cross-sectional area for gravitational interactions equal to the cross-sectional area of the black hole event horizon for the particle mass.

  38. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    One more piece of quantitative evidence that fermions are black holes:

Using Hawking’s formula to calculate the effective black body radiating temperature of a black hole with the electron’s mass yields the figure of 1.35*10^53 Kelvin.

    Any black-body at that temperature radiates 1.3*10^205 watts/m^2 (via the Stefan-Boltzmann radiation law). We calculate the spherical radiating surface area 4*Pi*r^2 for the black hole event horizon radius r = 2Gm/c^2 where m is electron mass, hence an electron has a total Hawking radiation power of

    3*10^92 watts

But that’s Yang-Mills electromagnetic force exchange (vector boson) radiation. Electrons don’t evaporate: they are in equilibrium, receiving as much radiation from other radiating charges as they emit.

    So the electron core both receives and emits 3*10^92 watts of electromagnetic gauge bosons, simultaneously.

    The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

    The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

    Using P = 3*10^92 watts as just calculated,

    F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N.

For gravity, the model in this blog post gives an inward and outward gauge boson force of F = 7*10^43 N.

    So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of [2*10^84] / [7*10^43] = 3*10^40.

    This figure of approximately 10^40 is indeed the ratio between the force coupling constant for electromagnetism and the force coupling constant for gravity.

    So the Hawking radiation force seems to indeed be the electromagnetic force!

    Electromagnetism between fundamental particles is about 10^40 times stronger than gravity.
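The whole chain in this comment can be reproduced in a few lines (a sketch using standard constants; the 7*10^43 N gravity figure is the mechanism’s estimate from above, and the outputs agree with the quoted values to within rounding):

```python
# Reproduce the chain: Hawking temperature -> Stefan-Boltzmann power
# -> exchange-radiation force -> ratio to the gravity estimate.
from math import pi

hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
k_B   = 1.381e-23    # J/K
sigma = 5.670e-8     # W m^-2 K^-4, Stefan-Boltzmann constant
m     = 9.109e-31    # kg, electron mass

T = hbar * c**3 / (8 * pi * G * m * k_B)   # Hawking black hole temperature
r = 2 * G * m / c**2                        # event horizon radius
P = sigma * T**4 * (4 * pi * r**2)          # radiated power over the horizon area
F = 2 * P / c                               # force, treating exchange as reflection
F_gravity = 7e43                            # N, the gravity mechanism's estimate (assumed)

print(f"T = {T:.2e} K")                     # ~1.35e53 K, as quoted
print(f"P = {P:.0e} W")                     # ~10^92 W
print(f"F = {F:.0e} N")                     # ~10^84 N
print(f"F/F_gravity = {F / F_gravity:.0e}") # ~10^40, the EM/gravity coupling ratio
```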

  39. The policy of not publishing any antimainstream material at PR was discussed in an editorial several years ago. That’s when I quit reading it.

  40. copy of a comment:

    http://kea-monad.blogspot.com/2008/01/riemann-rekindled-iii.html

I’ll have to concentrate on this a lot more, I guess. At present category theory is still way over my head. I think in school we did a bit of very basic set/group maths, like Venn diagrams and just the abstract symbols for union (U) and intersection (upside-down U), but then the whole area was dropped. From there on it was algebra, trig and calculus (particularly the nightmare of integrating complex trig functions like cot or cosec theta, without having a good memory for trivia like definitions of abstract jargon). There was no set or group theory in the pure maths A-level, and at university the quantum mechanics and cosmology (aka elementary general relativity) courses didn’t use anything more advanced than calculus with a bit of symbolic compression (operators).

The kind of maths where you get logical arguments with lots of abstract symbolism from set theory and group theory is therefore completely alien. I can see the point in categorizing large numbers of simple items, if that is actually a major objective of category theory. It would be nice if it were possible to build up solutions to complex problems like quantum gravitation by categorizing large numbers of very simple operations, i.e. if individual graviton exchanges between masses could be treated as simple vectors and categorized according to direction or resultant, to simplify the effect. Smolin gave a Perimeter lecture on quantum gravity where he showed how he was getting the Einstein field equation of general relativity by summing all of the interaction graphs in an assumed spin foam vacuum. I’m not sure that a spin foam vacuum is physically correct, but the general idea of building up from a summing of lots of resultants for individual graviton interaction graphs is certainly appealing from my point of view.

    “with $F(2) = \pi$, and this looks something like a count of binary trees, with an increasing number of branches at each step. What are the higher dimensional analogues of $i$? What if we took the $s$-th root, so that $F(2n)$ was some multiple of $\pi$ for all $n \in \mathbb{N}$, just like the volumes of spheres?”

    I may be way off topic in my physical interpretation here, but if you are considering how graviton exchanges occur between individual masses (particles, including particles of energy since these interact with gravity and thus have associated with them a gravitational charge field), then you could well have a tree structure to help work out the overall flow of energy in a gravitational field from a theory of quantum gravity.

I.e., each mass (or particle with energy) radiates gravitons to several other masses, which radiate to still more, in a geometric progression. This loss of energy is balanced by the reception of gravitons. Presumably this kind of idea just sounds too naive and simplistic to people in the mainstream, who assume (without it ever having been correctly proved) that such simplistic ideas must be wrong because nobody respectable is working on them.

I’m studying the maths of the SU(2) lagrangian as time allows. It’s nice that the lagrangian is simplest for the case of massless spinor fields (massless gauge bosons). The clearest matrix representations of U(1) and SU(2) in particle physics I’ve come across are equations 8.59 and 8.65 (which are surprisingly similar) in Ryder’s “Quantum Field Theory”. The Dirac lagrangian for a massless field is just summed over the particles: e.g., the right-handed electron, the left-handed electron, and the neutrino, which only occurs in the left-handed form. Given some time, it should be possible to understand the massless SU(2) lagrangian, since it is relatively simple maths (pages 298-301 of Ryder’s 2nd edition; also, the first 3 chapters of Ryder were excellent, lucid introductions to gauge fields in general and the Yang-Mills field in particular).

But one problem I do have with the whole gauge theory approach is that it is built on calculus to represent fields; ideal for a vacuum that is a continuum, but inappropriate for quantized fields. There’s an absurdity in treating the acceleration of an electron by quantized, individual discrete virtual photons or by gravitons as a smooth curvature of spacetime! It’s obviously going to be a bumpy (stepwise) acceleration, with a large number of individual impulses causing an overall (statistical) effect that is just approximated by differential geometry. I think it’s manifestly absurd for anyone to be seeking a unification of general relativity and quantum field theory that builds on differential geometry. Air pressure, like gravity, appears to be a continuous variable on large scales, where the number of air molecule impacts per unit area per second is a very large number. But it breaks down for small numbers of impacts, for example in Brownian motion, where small particles receive chaotic impulses, not a smooth averaged-out pressure. Differential equations are usually good approximations for classical physics (large scales), but they are not going to properly model the fundamental physical processes going on in quantum gravity. You can do quite a lot with the calculus of air pressure (such as finding that it falls off nearly exponentially with increasing altitude, and finding the relationship between wind speed and pressure gradients in hurricanes), but you can’t deduce anything about air molecules from this non-discrete (continuum) differential model. It breaks down on small scales. So does differential geometry when applied to small numbers of quantum interactions in a force field. This is why classical physics breaks down on small scales, and chaos appears.
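The Brownian-motion analogy can be made concrete with a toy simulation (my own sketch, in arbitrary units): a particle receiving many small random impulses per unit time approaches the smooth constant-acceleration trajectory of the continuum model, while at low impulse rates the motion is visibly chaotic:

```python
# Toy model: discrete random impulses vs the smooth continuum limit.
# Each impulse adds a noisy momentum kick; as the impulse rate grows,
# velocity(t) approaches the smooth line v = a*t of the differential model.
import random

def final_velocity(rate, total_time=10.0, mean_kick=1.0):
    random.seed(0)                                # repeatable toy run
    n = int(rate * total_time)                    # number of discrete impulses
    v = 0.0
    for _ in range(n):
        v += random.gauss(mean_kick, mean_kick)   # one chaotic impulse
    return v / rate                               # scale so the smooth limit is v = t

for rate in (3, 30, 3000):                        # impulses per unit time
    print(f"rate {rate:5d}: v(10) = {final_velocity(rate):6.2f}  (smooth limit 10.00)")
```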

    It would be nice if it were possible to replace differential geometry in QFT and GR with some kind of quantized geometry and show how the approximations of QFT and GR are valid, emerging for the limiting case whereby very large numbers of field quanta interact with the particle of interest, so that the averaging of many chaotic impulses produces a deterministic average effect every time on large scales.

  41. Copy of a comment to:

    https://nige.wordpress.com/2007/03/16/why-old-discarded-theories-wont-be-taken-seriously/

    Teresa,

    Electromagnetism and gravity do have a certain amount in common; the inverse square law. What’s also interesting is that the electromagnetic force between a proton and electron is 10^40 times stronger than gravitation. Also, magnetism is dipolar; nobody has discovered even a single magnetic monopole in nature. You get attraction of unlike poles and repulsion of like poles. Gravitation is a monopole force field; yet it is always attractive no matter what the electric charge of the mass/energy.

    I wrote an article in Electronics World April 2003 which leads to the conclusion that the distinction between gravity and electromagnetism is a result of a simple physical difference: the charge of the gauge bosons being exchanged. This predicts the 10^40 coupling constant difference between electromagnetism and gravity, and it explains why gravitation is always attractive (over non-cosmological distances; get too far and the net effect is repulsion because the theory predicts the small positive cosmological constant which is accelerating the universe), and why unlike electromagnetic charges attract while like electromagnetic charges repel.

Gravity is due to electrically uncharged gauge bosons being exchanged between all mass/energy in the universe. Net gravitational forces arise from an asymmetry – the Lesage shadowing effect – in the way the exchange process works.

    In order for two masses to exchange gravitons, they must be receding from one another at a relativistic velocity in accordance with the Hubble law, v = HR. This gives them an outward acceleration from one another of a = dv/dt = d(HR)/dt = Hv = RH^2. As a result of this acceleration, they have a force outward from one another of F = ma = mRH^2. Simple!
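Putting rough numbers in (a sketch; the Hubble parameter and the mass of the universe are order-of-magnitude assumptions only):

```python
# Order-of-magnitude check of F = m*a, with a = R*H^2 -> c*H at R = c/H.
c = 2.998e8    # m/s
H = 2.3e-18    # /s, Hubble parameter (~70 km/s/Mpc, assumed)
m = 1e53       # kg, mass of the universe (assumed order of magnitude)

a = c * H      # outward acceleration at the limit R = c/H
F = m * a      # outward force; Newton's 3rd law implies an equal inward force

print(f"a = {a:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"F = {F:.0e} N")       # ~7e43 N, the figure used in these comments
```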

    Newton’s 3rd law (action and reaction are equal and opposite) then tells us that the outward forces of each of the receding masses must result in an equal inward reaction force. This force – by elimination of all other possibilities – is carried by gravitons.

    Hence, gravitation causes distant receding masses to forcefully fire off gravitons at each other, so the relativistically receding masses end up exchanging gravitons and being repelled apart. Impulses and recoil forces when gravitons are exchanged between relativistically receding masses causes those masses to go on accelerating as they recede from one another. This gives the cosmological acceleration normally attributed to dark energy (the Lambda term).

    Now examine what happens when two masses (say me and the planet Earth) are not relativistically receding relative to one another! There is no forceful exchange of gravitons between me and the Earth! This is because the acceleration of me away from the Earth is zero, so the force of me away from the Earth is zero, and the reaction force of gravitons from me towards the Earth is zero.

    In other words, I’m not exchanging gravitons with the Earth in a forceful way, simply because I’m not receding from the Earth. So the Earth and I are unable to exchange gravitons efficiently! This is a shielding effect, because the Earth and myself are both exchanging gravitons with the distant, receding galaxies in the universe.

    The only direction in which I’m not able to efficiently exchange gravitons is downward, because some of the tiny fundamental particles in the Earth are exchanging gravitons with distant receding masses in that direction from me, but are unable to then exchange those gravitons with me because there is no graviton exchange between myself and the Earth. Hence, the fundamental particles in the Earth are shielding or shadowing a small portion of the graviton force from distant receding galaxies in the downward direction from me!

    So the net graviton force on me is the excess of gravitons pushing downwards over that coming upward through the planet below me.

    Now this is a very simple geometric effect: gravitons are electrically uncharged exchange radiation with spin-1, like photons. For electromagnetism, the only way to get a physical understanding is to change Feynman’s QED U(1) Abelian theory. There are lots of problems with U(1): it only has one type of charge (hence the 1 in U(1) symmetry), so negative and positive charges have to be treated as the same thing moving in different directions through time. But there is no evidence that anything goes backward in time. Also, there are other problems with the mainstream U(1) electromagnetism. It doesn’t predict or explain physically what the mechanism for electromagnetic forces is; it has to use a photon with 4-polarizations instead of the normal 2, so that it can include attraction and not just repulsion. It’s a very unsatisfactory physical description.

My argument here is that electromagnetism and gravity are actually an SU(2) Yang-Mills theory, with charged massless gauge bosons. SU(2) gives rise to two types of charge and three types of gauge boson: neutral, positive and negative. I’ve worked out that charged massless gauge bosons can propagate in the vacuum despite the usual objection to charged massless radiation (infinite magnetic self-inductance): what happens is that in exchange radiation there is an equilibrium of radiation travelling in two directions at once, so the clockwise magnetic curl of, say, leftward-travelling charged radiation will exactly cancel out the anticlockwise curl of rightward-travelling charged radiation. The cancellation of the magnetic curls in this way means that the magnetic self-inductance is no longer infinite but zero!

    Next, the exchange of charged massless gauge bosons between electromagnetic charges has more possibilities than gravitation. The random arrangement of fundamental charges (positive and negative) relative to one another throughout the universe means that all of the positive and negative electric charges in the universe will be linked up by their exchange of charged gauge bosons, like a lot of positive and negative charged capacitor plates separated by vacuum dielectric. Because the arrangement is random, they won’t add up linearly. If the addition was linear with positive and negative charges arranged in a long line with alternating sign at each charge, then the result would be like a series of batteries or capacitors in circuit, and electromagnetism would be stronger than gravitation by about a factor of 10^80 (the number of hydrogen atoms in the universe).

    Because the arrangement is random, and charged gauge bosons of one sign are stopped by half the charges in the universe, the actual addition is non-linear. It’s a drunkard’s statistical walk, like the zig-zag path of a particle undergoing Brownian motion. The vector sum can be worked out by doing a path integral calculation. It’s approximately the square root of the number of hydrogen atoms in the universe, times stronger than gravity. I.e. 10^40.
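The drunkard’s-walk scaling itself is easy to demonstrate numerically (a sketch at small N; the 10^80 figure is obviously far too large to simulate directly):

```python
# Drunkard's walk: the RMS sum of N random +/-1 charges grows as sqrt(N),
# not as N -- hence sqrt(10^80) = 10^40, rather than 10^80.
import random
random.seed(1)

def rms_sum(N, trials=2000):
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(N))
        total += s * s
    return (total / trials) ** 0.5

for N in (100, 400, 1600):
    print(f"N = {N:4d}: RMS sum = {rms_sum(N):6.1f}   sqrt(N) = {N**0.5:.1f}")
```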

    This model also explains repulsive forces and attractive forces in electromagnetism, as a correspondent (Guy Grantham) has pointed out to me. Because you have two types of charged gauge boson, two protons have overlapping force fields composed of positively charged massless gauge bosons.

    As a result, the protons exchange positively charged gauge bosons and are repelled away from one another, rather like two people firing machine guns at one another, who are forced apart both by the recoil impulse of firing each round and by the strike of receiving each round! (The incoming positively charged exchange radiation reaching the far side of each proton, from distant masses in the receding universe, is severely redshifted and thus carries little energy and hence little momentum.)

    In the case of dissimilar charges, the positive charge and negative charge (or the north pole and south pole of two magnets) have opposing fields which cancel each other out instead of adding up. So there is no forceful exchange of radiation between them: they shield one another, just as in the Lesage shadowing mechanism for gravity, and so opposite charges get pushed together by the exchange radiation coming from the distant receding galaxies.

    The fact that the electromagnetic attractive force between a proton and an electron is identical in strength but opposite in sign (i.e. direction) to the repulsive force between either two protons or two electrons, is explained by the energy balance of exchange radiation with the surrounding universe during the period that the force is acting, as proved graphically in my April 2003 Electronics World article.

    When two particles repel or attract due to electromagnetism, they are converting the potential energy of the redshifted incoming exchange radiation (from distant charges in the receding universe) into kinetic energy. The amount of energy available per second in this way (i.e., the power used to accelerate the charges) is the same for any two charges – proton and electron, proton and proton, or electron and electron – because each charge has a similar cross-section for interacting with the exchange radiation!

    Hence, when two protons or two electrons repel, they are being repelled by a similar power of exchange radiation, supplied externally by the surrounding universe, as acts in the attraction of one proton and one electron.

    The diagram in the April 2003 Electronics World article makes this energy summation clearer: the resultant of all the exchanges is that unit similar charges repel with the same force with which dissimilar charges attract.

    I agree with you that light is a particle and has mass. Saying light has “no rest mass”, as the literature is fond of announcing, is pathetic because light is never at rest anyway:

    “The fact that photons have no rest mass isn’t a problem because … they can never be at rest anyway …”

    – page 21 of P.C.W. Davies, The Forces of Nature, Cambridge University Press, London, 2nd ed., 1986.

    Nige

  42. Here is a comment I will copy here, if you don’t mind, in case it gets lost:

    http://kea-monad.blogspot.com/2008/06/m-theory-lesson-197.html

    Thank you for the link to the article about Galois’s last letter before his fatal duel. He must have led a very exciting life, making breakthroughs in mathematics and fighting duels. Dueling was a very permanent way to settle a dispute, unlike the uncivilized, interminable, tiresome squabbles which now take the place of duels.

    The discussion of groups is interesting. I didn’t know that geometric solids correspond to Lie algebras. Does category theory have any bearing on group theory in physics, e.g. symmetry groups representing basic aspects of fundamental interactions and particles?

    E.g., the Standard Model group structure of particle physics, U(1)*SU(2)*SU(3), is equivalent to the S(U(3)*U(2)) subgroup of SU(5), and E(8) contains many subgroups, including S(U(3)*U(2)), so SU(5) and E(8) have been considered candidate theories of everything on mathematical grounds.
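
    As a trivial numerical aside, the dimension counting behind these embeddings is easy to check (dim SU(n) = n^2 - 1 is standard; attributing SU(5)’s extra generators to the X and Y leptoquark bosons is the usual GUT interpretation, not something derived here):

    ```python
    def dim_su(n: int) -> int:
        # Number of generators of the Lie group SU(n).
        return n * n - 1

    # Standard Model gauge group SU(3) x SU(2) x U(1): 8 + 3 + 1 generators.
    sm_generators = dim_su(3) + dim_su(2) + 1
    print(sm_generators)              # 12
    print(dim_su(5))                  # 24
    print(dim_su(5) - sm_generators)  # 12 extra generators: the X and Y bosons
    ```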

    Do you think that these platonic symmetry-searching methods are the wrong way to proceed in physics? Woit argues in http://arxiv.org/PS_cache/hep-th/pdf/0206/0206135v1.pdf that the Standard Model’s problems are not about making the symmetry groups appear from some grand theory like a rabbit from a hat. They are concerned with explaining why the weak isospin SU(2) force acts on just left-handed particles; why the masses of the Standard Model particles – including neutrinos – have the values they do; whether some kind of Higgs theory for mass and electroweak symmetry breaking is really solid science or whether it is like epicycles (there is quite a landscape of different versions of the Higgs theory with different numbers of Higgs bosons, so by ad hoc selection of the best fit and the most convenient mass it’s a very adjustable theory and not readily falsifiable); and how quantum gravity can be represented within the symmetry group structure of the Standard Model at low energies (where any presumed grand symmetry like SU(5) or E(8) will be broken down into subgroups by various symmetry-breaking mechanisms).

    What worries me is that, because gravity isn’t included within the Standard Model, there is definitely at least one vital omission from the Standard Model. Because gravity is a long-range, inverse-square force at low energy (like electromagnetism), gravity will presumably involve a term similar to part of the electroweak SU(2)*U(1) symmetry group structure, not to the more complex SU(3) group. So maybe the SU(2)*U(1) group structure isn’t complete because it is missing gravity, which would change this structure, possibly simplifying things like the Higgs mechanism and electroweak symmetry breaking. If that’s the case, then it’s premature to search for a grand symmetry group which contains SU(3)*SU(2)*U(1) (or some isomorphism). You need to empirically put quantum gravity into the Standard Model, by altering the Standard Model, before you can tell what you are really looking for.

    Otherwise, what you are doing is what Einstein spent the 1940s doing, i.e., searching for a unification based on input that fails to include the full story. Einstein tried to unify all forces twenty years before the weak and strong interactions were properly understood from experimental data, so he was too far ahead of his time to have sufficient experimental understanding of the universe to model it correctly theoretically. E.g., parity violation was only discovered after Einstein died. Einstein’s complete dismissal of quantum fields was extremely prejudiced and mistaken, but he was way off beam not just because of his theoretical prejudices, but because he tried to build a theory without sufficient experimental input about the universe. In Einstein’s time there was no evidence of quarks, no colour force and no electroweak unification, and instead of working on trying to understand the large number of particles being discovered, he preferred to stick to classical field theory unification attempts.

    To the (large) extent that mainstream ideas like string theory tend to bypass experimental data from particle physics entirely, such theories seem to suffer the same fate as Einstein’s efforts at unification. To start with, they ignore most of the real problems in fundamental physics (particle masses, symmetry-breaking mechanisms, etc.), they assume that existing speculations about unification and quantum gravity are headed in the correct direction, and then they speculatively unify those guesses without making any falsifiable predictions. That’s what Einstein was doing. To those people this approach seemed like a very good idea at the time, or at least the best choice available. However, a theory that isn’t falsifiable experimentally may still be discarded for theoretical reasons when a better theory comes along.

  43. Copy of a comment:

    http://dorigo.wordpress.com/2008/07/01/fine-tuning-and-numerical-coincidences/#comment-98513

    Lubos a few years ago condemned the book ‘The Final Theory’, which was being sold on Amazon with lots of positive reviews, after reading the first chapter which was available free ( http://motls.blogspot.com/2005/08/amazoncom-controlled-by-crackpots.html ). I later found that book quite by accident beside Feynman’s lectures on physics in a library! It’s based on the discovery of the numerical ‘coincidence’:

    G = (1/2)gr^3/(Rm) = 6.82*10^-11 m^3/(kg*s^2),

    (2% higher than the observed constant), where g is the acceleration due to gravity at the surface of the Earth, R is the Earth’s radius, r is the mean orbital radius of the ground state of hydrogen, and m is the mass of a hydrogen atom. The book tries to justify this by arguing that atoms, planets, etc., are expanding at an accelerative rate proportional to their radii: the ground accelerates upwards as the Earth expands, creating the illusion of gravity since everything expands.
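
    The arithmetic of the claimed ‘coincidence’ is at least easy to verify. A quick sketch using textbook values (Bohr radius for r; the exact figures don’t matter at the 2% level):

    ```python
    # Check of the book's claimed 'coincidence' G ~ (1/2) g r^3 / (R m).
    g = 9.81        # acceleration due to gravity at Earth's surface, m/s^2
    R = 6.371e6     # mean radius of the Earth, m
    r = 5.29e-11    # Bohr radius (hydrogen ground-state orbit), m
    m = 1.674e-27   # mass of a hydrogen atom, kg

    G_claim = 0.5 * g * r**3 / (R * m)
    print(f"{G_claim:.2e}")  # ~6.8e-11, vs. measured G = 6.674e-11 m^3/(kg s^2)
    ```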

    This idea fails to hold water ( https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ ) for the simple reason that it would make gravity depend solely on the radius of a planet, instead of on its mass. Also, the ‘theory’ doesn’t explain the convenient arbitrary factor of 1/2 in the ‘coincidence’, and the implied expansion rate exceeds the observed acceleration of the universe by an astronomical factor. So it is numerology, and can’t explain the facts of gravitation. It doesn’t tie in to the observed expansion of the universe, or to the universal law of gravity, where the force depends on the product of the masses involved, not on the radii of the planets.

    Feynman sarcastically explained why coincidence can be a misunderstood concept: ‘On my way to campus today, I saw a car with the licence plate XRT-375 in the parking lot – isn’t that amazing? What are the odds of seeing that exact licence?’ The point is that probability just indicates your uncertainty, given your ignorance of the situation. Once you have observed something, you are no longer uncertain: if something has occurred, it’s no longer unlikely (to those who know about it); it’s just a certain fact. So ‘coincidences’ alone are not really too impressive. You expect some numerical coincidences when juggling with different values from a physics data book, so a ‘coincidence’ is only impressive if you have a theory that predicts it in advance of the empirical evidence. If you predict that you will see a car with licence plate XRT-375 in the parking lot, and then confirm the prediction, that would be a useful coincidence.

    By comparison, Koide’s formula is relatively impressive because it has survived improvements to the data and has various extensions. It is great that it is leading towards a theory which may be able to predict other particle masses.
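
    For anyone who wants to see how tight that formula is, here is a one-line check of Koide’s relation Q = (m_e + m_mu + m_tau)/(sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3, using approximate PDG charged-lepton masses:

    ```python
    from math import sqrt

    # Approximate charged lepton masses in MeV (PDG values).
    m_e, m_mu, m_tau = 0.511, 105.658, 1776.86

    Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
    print(Q, abs(Q - 2/3))  # Q ~ 0.66666: agrees with 2/3 to ~1 part in 10^5
    ```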

  44. Copy of a comment:

    http://backreaction.blogspot.com/2008/07/recreating-big-bang.html

    “I suspect that there is just little motivation to actually get things right, but instead there is an emphasize on advertising and entertaining, and the more bang the better.”

    It’s about making money. The exact reason why so-called cosmological “implications” of high energy physics research are eagerly hyped by the media was explained by Jeremy Webb, former BBC sound engineer and now Editor of New Scientist, in Roger Highfield’s article “So good she could have won twice” in the Daily Telegraph, 24 Aug. 2005:

    http://www.telegraph.co.uk/connected/main.jhtml?xml=/connected/2005/08/24/ecfysw24.xml

    “Prof Heinz Wolff complained that cosmology is ‘religion, not science.’ Jeremy Webb [Editor] of New Scientist responded that it is not religion but magic. … ‘If I want to sell more copies of New Scientist, I put cosmology on the cover,’ said Jeremy.”

    Since Jeremy’s job includes the requirement to make sure that the magazine sells well, it’s obvious why any alleged development in particle physics which has a cosmological connection is likely to end up as the story spin on the front cover.

    The easiest way for a science journalist to begin an article about the LHC in a way that will grab an editor’s attention and the typical reader’s attention, is to write something like:

    “The purpose of the LHC is to recreate exactly the conditions which occurred in the big bang, allowing us to see precisely what occurred during the creation of the universe.”

    I’m glad that Martinus Veltman and bloggers such as Peter Woit and yourselves decided to be honest about what the LHC really is up to. It’s interesting enough to investigate the electroweak symmetry breaking mechanism, without the really interesting physics of the LHC being misrepresented by the BBC News and other purveyors of nonsense.

