Lee Smolin and Peter Woit: Comparison of String Critical Books

I read Peter Woit’s book Not Even Wrong last summer, and it is certainly the most important book I’ve read, for it gives a clear explanation of chiral symmetry in the electroweak sector of the Standard Model, as well as a brilliant outline of the mathematical thinking that led to the gauge groups of the Standard Model.

Previously, I had learned how particle physics provided the input used to build the Standard Model.  Here’s a sketch of the typical sort of empirical foundation of physics that I mean:

The SU(3) symmetry group produces the eightfold way of particle physics, correctly building up octets of baryons from quarks

The same symmetry principles also describe the mesons in a similar way (mesons are quark–antiquark pairs, not triplets of quarks as in the case of the baryons illustrated in my sketch above).  Baryons and mesons together form the hadrons, the strongly interacting particles.  These are all composed of quarks, and the symmetry responsible for the strong force is the SU(3) group.  Although the idea of colour charge, whereby each quark has a strong charge in addition to its electric and weak charges, seems speculative, there is evidence for it from the fact that the omega minus particle is composed of three strange quarks.  By the Pauli exclusion principle, you simply can’t confine three identical fermions like strange quarks together, because at least two would have to share the same spin state.  (You could confine two strange quarks, because one could take the opposite spin state to the other, which is fine according to the Pauli exclusion principle, but this doesn’t work for three similar quarks.)  In fact, from the measured spin of 3/2 for the omega minus, all three of its spin-1/2 strange quarks must have the same spin state.  The easiest way to account for this seems to be the new ‘quantum number’ (or, rather, property) of ‘colour charge’.
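To sketch the standard bookkeeping behind this (my summary of textbook reasoning, not part of the sketch above): the omega minus ground state is symmetric in space, spin and flavour, so the overall antisymmetry that the Pauli principle demands of fermions has to come from a new label, and a totally antisymmetric combination of three colour states supplies it:

$$|\Omega^-\rangle \;\propto\; \epsilon_{abc}\,|s^a\!\uparrow\; s^b\!\uparrow\; s^c\!\uparrow\rangle$$

where a, b, c run over the three colour states and the antisymmetric symbol epsilon_abc makes the colour part of the wavefunction change sign when any two quarks are swapped.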

This story, whereby the composition and spin of the omega minus mean that Pauli’s exclusion principle forces a new quantum number, colour charge, on quarks, is actually back-to-front.  What happened was that Murray Gell-Mann and Yuval Ne’eman in 1961 independently arranged the particles into families of 8 particles each by the SU(3) symmetry scheme above, and found in one of these families that there was no known particle to fill the spin 3/2, charge -1 gap: this was the prediction of the omega minus!  The omega minus was predicted in 1961, and after two years of experiments it was found in a bubble chamber photograph taken in 1964.  This verified the eightfold way SU(3) symmetry.  The story of the quark, which is the underlying explanation for the SU(3) symmetry, came afterwards.  Both Gell-Mann and George Zweig put forward the quark concept in 1964, although Zweig called them ‘aces’, on the basis of the incorrect assumption that there were four flavours of such particles altogether (it is now known that there are six quark flavours, in three generations of two quarks each: up and down, charm and strange, top and bottom).  Zweig’s lengthy paper, which independently predicted the same properties of quarks as those Gell-Mann predicted, was censored from publication by the peer-reviewers of a major American journal, but Gell-Mann’s simpler model in a briefer two page paper was published with the title ‘A Schematic Model of Baryons and Mesons’ in the European journal Physics Letters, v8, pp. 214-5 (1964).  Gell-Mann in that paper argues that his quark model is ‘a simpler and more elegant scheme’ than just having the eightfold way as the explanation.  (The name quark was taken from page 383 of James Joyce’s Finnegans Wake, Viking Press, New York, 1939.)  David J. Gross has his nice Nobel lecture published here [Proc. Natl. Acad. Sci. U S A. 2005 June 28; 102(26): 9099–9108], where he begins by commenting:

‘The progress of science is much more muddled than is depicted in most history books. This is especially true of theoretical physics, partly because history is written by the victorious. Consequently, historians of science often ignore the many alternate paths that people wandered down, the many false clues they followed, the many misconceptions they had. These alternate points of view are less clearly developed than the final theories, harder to understand and easier to forget, especially as these are viewed years later, when it all really does make sense. Thus, reading history one rarely gets the feeling of the true nature of scientific development, in which the element of farce is as great as the element of triumph.

‘The emergence of QCD is a wonderful example of the evolution from farce to triumph. During a very short period, a transition occurred from experimental discovery and theoretical confusion to theoretical triumph and experimental confirmation. …’

To get back to colour charge: what is it physically?  The labels colour and flavour are just abstract labels for known mathematical properties.  It’s interesting that the Pauli exclusion principle suggested colour charge, via the problem of needing three strange quarks with the same spin state in the omega minus particle.  The causal mechanism of the Pauli exclusion principle is probably related to magnetism caused by spin: the system energy is minimised (so the system is most stable) when the spins of adjacent particles are opposite to one another, cancelling out the net magnetic field instead of having it add up.  This is why most materials are not strongly magnetic, despite the fact that every electron has a magnetic moment and atoms are arranged regularly in crystals.  Wherever magnetism does occur, such as in iron magnets, it is due to the complex spin alignments of electrons in different atoms, not to the orbital motion of electrons, which is of course largely chaotic (there are shaped orbitals where the probability of finding the electron is higher than elsewhere, but the direction of the electron’s motion is still random, so the magnetic fields caused by the ordinary orbital motions of electrons in atoms cancel out naturally).

As stated in the previous post, what happens when two or three fermions are confined in close proximity is that they acquire new charges, such as colour charge, and this avoids violating the Pauli exclusion principle.  Hence the energy of the system doesn’t make it unstable: the extra energy results in new forces, mediated by new vacuum charges created in the strong fields by vacuum pair production and polarization phenomena.

Peter Woit’s Not Even Wrong is an exciting book because it gives a motivational approach and historical introduction to the group representation theory that you need to know to really start understanding the basic mathematical background to empirically based modern physics.  Hermann Weyl worked on Lie group representation theory in the late 1920s, and wrote a book about it which was ignored at the time.  The Lie groups had been defined in 1873 by Sophus Lie.

It was only when things like the ‘particle zoo’ – the hundreds of unexplained particles discovered using the early particle accelerators (with cloud chambers, and later bubble chambers, to record interactions, unlike modern solid state electronic detectors) after World War II – were finally explained by Murray Gell-Mann and Yuval Ne’eman around 1960 using symmetry ideas, that Weyl’s work was taken seriously.  Woit writes on page 7 (London edition):

‘The positive argument of this book will be that, historically, one of the main sources of progress in particle theory has been the discovery of new symmetry groups of nature, together with new representations of these groups.  The failure of the superstring theory programme can be traced back to its lack of any fundamental new symmetry group.’

On page 15 (London edition), Woit explains that in special relativity: ‘if I try to move at high speed in the same direction as a beam of light, no matter how fast I go, the light will always be moving away from me at the same speed.’

This is an excellent way to express what special relativity says.  The physical mechanism is time-dilation for the observer.  If you are moving at high speed, your clocks and your brain all slow down, so you suffer from the illusion that even a snail is going like a rocket.  That’s why you don’t see the velocity of light appear to slow down: your measurements of speed are skewed by time-dilation.  That’s physically the mechanism responsible for special relativity in this particular case.  There’s no weird paradox involved, just physics.

If we jump to Lee Smolin’s The Trouble with Physics (New York edition), page 34, we again find a problem of this sort.  Lee Smolin points out that the aether theory was wrong because light was treated as basically some sort of sound wave in the aether, requiring an enormous aether density, and it is paradoxical for something filling space with high density to offer no resistance.

Clearly the fundamental particles don’t get much resistance because they’re so small, unlike macroscopic matter, and the resistance is detected as the Lorentz-FitzGerald contraction of special relativity.  But the standard model has exchange radiation filling spacetime, causing forces, and it’s clear that the exchange radiation is causing these effects.  Move into exchange radiation, and you get contracted in the direction of your motion.  If you want to think about a fluid ‘Dirac sea’, you get no drag whatsoever because the vacuum – unlike matter – doesn’t heat up.  (The temperature of radiation in space, such as the temperature of the microwave background, is the effective temperature of a blackbody emitter corresponding to the energy spectrum of those photons, and is not the temperature of the vacuum; if the vacuum were radiating energy due to its own temperature – which it is not – then the microwave background would not be redshifted thermal radiation from the big bang, but heat emitted spontaneously from the vacuum.)

There are two aspects of the physical resistance to motion in a fluid.  The first is an inertial resistance due to the shifting of the fluid out of the path of the moving object.  Once the object is moving (think of a ship), the fluid pushed out of the way at the front travels around and pushes in at the stern (the back) of the ship, returning some of the energy.  The percentage of the energy returned is small for a ship, because of dissipative energy losses: the water molecules that hit the front of the ship are speeded up and hit other molecules, frothing and heating the water slightly, and setting up waves.  But there is still some return, and there is also length contraction in the direction of motion.

In the case of matter moving in the Dirac sea or exchange radiation field (equivalent to the spacetime fabric of general relativity, responsible for inertial and gravitational forces), the exchange radiation does not just act externally on the macroscopic object; it penetrates to the fundamental particles, which are very small (so mutual shielding is trivial for particles of small mass), and so the whole thing is contracted irrespective of the mechanical strength of the material (if the exchange radiation acted only on the front layer of atoms, the contraction would depend on the strength of the material).

Where this spacetime fabric analogy gets useful is that it allows a prediction for the strength of gravity which is accurate to within experimental error.  This works as follows.  The particles in the surrounding universe are receding from us in spacetime, where bigger apparent distances imply greater times into the past (due to the travel or delay time of light in reaching us).  As these particles recede at speeds increasing with distance in spacetime, and assuming that the ‘Dirac sea’ fluid analogy holds, there will be a net inward flow of Dirac sea fluid towards us, filling in the spatial volumes vacated as the matter of the universe recedes from us.

The mathematics allows us to calculate the inward force that results, and irrespective of the actual size (cross-sectional area and volume) of the receding particles, the gravity parameter G can be calculated fairly accurately from this inward force equation.  A second calculation was developed by assuming that the spacetime fabric can be viewed either as a Dirac sea or as exchange radiation, on the basis that Maxwell’s ‘displacement current’ can be virtual fermions where there are loops, i.e., above the IR cutoff of quantum field theory, but must be radiation where there are no virtual fermion effects, i.e., at distances greater than ~1 fm from a particle, where the electric field is below 10^18 V/m (below the IR cutoff).  This second calculation is based on exchange radiation doing the compression (rather than a fluid Dirac sea), and when it is normalized against the first equation, we can calculate a second parameter: the exact shielding area per fundamental particle.  The effective cross-sectional shielding area for gravity, of a particle of mass m, is Pi*(2Gm/c^2)^2, where 2Gm/c^2 is the black hole event horizon radius; this seems to tie in with another calculation here.
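As a minimal numerical sketch (my own, using standard values of the constants) of the shielding-area formula just quoted, evaluated for the electron mass:

```python
# Minimal numerical sketch of the shielding-area formula quoted above:
# cross-section = pi * (2Gm/c^2)^2, where 2Gm/c^2 is the black hole
# event horizon radius for a mass m (here, the electron mass).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
m_electron = 9.109e-31   # electron mass, kg

r_horizon = 2 * G * m_electron / c**2   # event horizon radius, m
area = math.pi * r_horizon**2           # claimed shielding cross-section, m^2

print(f"event horizon radius: {r_horizon:.3e} m")   # ~1.35e-57 m
print(f"shielding area:       {area:.3e} m^2")      # ~5.7e-114 m^2
```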

Getting back to Not Even Wrong, Dr Woit then introduces the state-vector which describes the particle states in the universe, and the Hamiltonian which describes the energy of a state-vector and its rate of change.  What is interesting is that Woit then observes that:

‘The fact that the Hamiltonian simultaneously describes the energy of a state-vector, as well as how fast the state-vector is changing with time, implies that the units in which one measures energy and the units in which one measures time are linked together.  If one changes one’s unit of time from seconds to half-seconds, the rate of change of the state-vector will double and so will the energy.  The constant that relates time units and energy units is called Planck’s constant … It is generally agreed that Planck made an unfortunate choice of how to express the new constant he needed …’

Planck defined his constant as h in the equation E = hf, where f is wave frequency.  The point Woit makes here is that Planck should have represented it using angular (rotational) frequency.  Angular frequency (measured in radians per second, where 1 rotation = 2*Pi radians) is 2*Pi*f, so Planck would have got a constant equal to h/(2*Pi), which is now called h-bar. 
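In symbols (standard definitions), with f the wave frequency and ω = 2πf the angular frequency:

$$E = hf = \frac{h}{2\pi}\,(2\pi f) = \hbar\omega, \qquad \hbar \equiv \frac{h}{2\pi}$$

so quoting ħ with ω gives the same energy as quoting h with f; the choice is purely one of notation.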

This is usually considered a trivial point, but it is important.  When people go on about Planck’s discovery of the quantum theory of radiation in 1900, they forget that classical radio waves were well known and were actually being used at the time.  This brings up the question of the reason for the difference between quantum and classical electromagnetic waves.

Dr Bernard Haisch has a site with links to various papers of interest here: http://www.calphysics.org/research.html.  Alfonso Rueda and Bernard Haisch have actually investigated some of the important ideas needed to sort out the foundations of quantum field theory, although their papers are incomplete and don’t produce the predictions of important phenomena that are needed to convince string theorists to give up hyping their failed theory.  The key thing is that the electron does radiate in its ground state.  The reason it doesn’t fall below the ground state is that all electrons are radiating, and there are many in the universe, so every electron is also receiving radiation.  The electron can’t spiral in due to losing energy, because when it radiates while in the ground state it is in gauge boson radiation equilibrium with its surroundings, receiving the same gauge boson power back as it emits!

The reason why quantum radiation is emitted is that this ground state (equilibrium) exists because all electrons are radiating.  So Yang-Mills quantum field theory really does contain the exchange radiation dynamics for forces which should explain to everyone what is occurring in the ground state of the atom.

The reason why radio waves and light are distinguished from the normally invisible gauge boson exchange radiation is that exchange radiation is received symmetrically from all directions and causes no net forces.  Radio waves and light, on the other hand, can cause net forces, setting up electron motions (electric currents) which we can detect!  I don’t like Dr Haisch’s statement that string theory might be sorted out by this mechanism:

‘It is suggested that inertia is indeed a fundamental property that has not been properly addressed even by superstring theory. The acquisition of mass-energy may still allow for, indeed demand, a mechanism to generate an inertial reaction force upon acceleration. Or to put it another way, even when a Higgs particle is finally detected establishing the existence of a Higgs field, one may still need a mechanism for giving that Higgs-induced mass the property of inertia. A mechanism capable of generating an inertial reaction force has been discovered using the techniques of stochastic electrodynamics (origin of inertia). Perhaps this simple yet elegant result may be pointing to a deep new insight on inertia and the principle of equivalence, and if so, how this may be unified with modern quantum field theory and superstring theory.’

Superstring theory is wrong, and undermines M-theory.  The cost of supersymmetry seems five-fold:

(1) It requires unobserved supersymmetric partners, and doesn’t predict their energies or anything else that is a checkable prediction.

(2) It assumes that there is unification at high energy.  Why?  Obviously a lot of electric field energy is shielded by the polarized vacuum near the particle core.  That shielded electromagnetic energy goes into short ranged virtual particle loops, which will include gauge bosons (W+/-, Z, etc.).  In this case, there’s no high-energy unification.  At really high energy (small distance from the particle core), the electromagnetic charge approaches its high bare core value, and there is less shielding between core and observer by the vacuum, so there is less effective weak and strong nuclear charge, and those charges fall toward zero (because they’re powered by the energy shielded from the electromagnetic field by the polarized vacuum).  This gets rid of the high energy unification idea altogether.

(3) Supersymmetry requires 10 dimensions and the rolling up of 6 of those dimensions into the Calabi-Yau manifold creates the complexity of string resonances that causes the landscape of 10^500 versions of the standard model, preventing the prediction of particle physics.

(4) Supersymmetry, using the measured weak SU(2) and electromagnetic U(1) forces, predicts the SU(3) force incorrectly, 10-15% too high.

(5) Supersymmetry when applied to try to solve the cosmological constant problem, gives a useless answer, at least 10^55 times too high.

The real test for whether something is a religion is its clinging to physically useless orthodoxy.

Gravity and the Quantum Vacuum Inertia Hypothesis
Alfonso Rueda & Bernard Haisch, Annalen der Physik, Vol. 14, No. 8, 479-498 (2005).

Review of Experimental Concepts for Studying the Quantum Vacuum Fields
E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole, Space Technology and Applications International Forum (STAIF 2006), p. 1390 (2006).

Analysis of Orbital Decay Time for the Classical Hydrogen Atom Interacting with Circularly Polarized Electromagnetic Radiation
Daniel C. Cole & Yi Zou, Physical Review E, 69, 016601, (2004).

Inertial mass and the quantum vacuum fields
Bernard Haisch, Alfonso Rueda & York Dobyns, Annalen der Physik, Vol. 10, No. 5, 393-414 (2001).

Stochastic nonrelativistic approach to gravity as originating from vacuum zero-point field van der Waals forces
Daniel C. Cole, Alfonso Rueda, Konn Danley, Physical Review A, 63, 054101, (2001).

The Case for Inertia as a Vacuum Effect: a Reply to Woodward & Mahood
Y. Dobyns, A. Rueda & B.Haisch, Foundations of Physics, Vol. 30, No. 1, 59 (2000).

On the relation between a zero-point-field-induced inertial effect and the Einstein-de Broglie formula
B. Haisch & A. Rueda, Physics Letters A, 268, 224, (2000).

Contribution to inertial mass by reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Foundations of Physics, Vol. 28, No. 7, pp. 1057-1108 (1998).

Inertial mass as reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Physics Letters A, vol. 240, No. 3, pp. 115-126, (1998).

Reply to Michel’s “Comment on Zero-Point Fluctuations and the Cosmological Constant”
B. Haisch & A. Rueda, Astrophysical Journal, 488, 563, (1997).

Quantum and classical statistics of the electromagnetic zero-point-field
M. Ibison & B. Haisch, Physical Review A, 54, pp. 2737-2744, (1996).

Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids
A. Rueda, B. Haisch & D.C. Cole, Astrophysical Journal, Vol. 445, pp. 7-16 (1995).

Inertia as a zero-point-field Lorentz force
B. Haisch, A. Rueda & H.E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678-694 (1994).

The articles above have various problems.  The claim that the source of inertia is the same zero-point electromagnetic radiation that causes the Casimir force, and that gravitation arises in the same way, is in a sense correct, but you have to increase the number of gauge bosons in electromagnetism in order to explain why gravity is 10^40 times weaker than electromagnetism.  This is actually a benefit, rather than a problem, as shown here.  In order to causally explain both the mechanisms for repulsion and attraction between similar and dissimilar charges and gravity with the correct strength – from the diffusion of gauge bosons between similar charges throughout the universe, a drunkard’s walk whose vector sum has a strength equal to the square root of the number of charges in the universe, multiplied by the gravity force mediated by photons – the electromagnetic theory ends up with 3 gauge bosons, like the weak SU(2) force.  So this looks as if it can incorporate gravity into the standard model of particle physics.
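The square-root scaling invoked in the drunkard’s-walk argument above is a standard statistical fact, and easy to check numerically.  Here is a minimal Monte Carlo sketch (my own, not from any of the cited papers): the magnitude of a sum of N randomly-signed unit contributions grows as sqrt(N).

```python
# Monte Carlo sketch of the drunkard's-walk claim: the magnitude of a
# vector sum of N randomly-signed unit contributions scales as sqrt(N).
import random
import math

def random_walk_sum(n_charges: int) -> float:
    """Magnitude of the sum of n_charges random +1/-1 'charge signs'."""
    return abs(sum(random.choice((-1, 1)) for _ in range(n_charges)))

n = 10_000
trials = 200
mean_magnitude = sum(random_walk_sum(n) for _ in range(trials)) / trials

# The mean |sum| is ~ sqrt(2n/pi) ~ 0.8*sqrt(n); the sqrt(n) scaling
# is the point being illustrated.
print(f"mean |vector sum| for N={n}: {mean_magnitude:.1f}")
print(f"sqrt(N) for comparison:      {math.sqrt(n):.1f}")
```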

The conventional treatment of how photons can cause attractive and repulsive forces just specifies the right number of polarizations and the right spin.  If you want a purely attractive gauge boson, you would have a spin-2 ‘graviton’.  But this comes from abstract symmetry principles; it isn’t dynamical physics.  For example, you can get all sorts of different spins and polarizations when radiation is exchanged, depending on how you define what is going on.  If, for example, two transverse electromagnetic (TEM) waves of the same amplitude pass through one another while travelling in opposite directions, the curls of their respective magnetic fields will cancel out for the duration of the overlap.  So the polarization number will be changed!  As a result, the exchange of radiation in two directions is easier than a one-way transfer of radiation.  Normally you need two parallel conductors to propagate an electromagnetic wave along a cable, or you need an oscillating wave (with as much negative electric field as positive electric field in it) for energy to propagate.  The reason for this is that a wave of purely one type of electric field (positive only or negative only) would have an uncancelled infinite self-inductance due to the magnetic field it creates.  You have to ensure that the net magnetic field is zero, or the wave won’t propagate (whether guided by a wire, or launched into free space).  Normally the only way of getting rid of this infinite self-inductance is to fire off two electric field waves, one positive and one negative, so that the magnetic fields from each have opposite curls and the long range magnetic field is zero (perfect cancellation).

This explains why you normally need two wires to send logic signals.  The old explanation for two wires is false: you don’t need a complete circuit.  In fact, because electricity can never go instantly around a circuit when you press the on switch, it is impossible for the electricity to ‘know’ whether the circuit it is entering is open or is terminated by a load (or short-circuit), until the light speed electromagnetic energy completes the circuit. 

Whenever energy first enters a circuit, it does so the same way regardless of whether the circuit is open or closed, because it goes at light speed for the surrounding insulator, and can’t (and doesn’t, in experiments) tell what the resistance of the whole circuit will turn out to be.  The effective resistance, until the energy completes the circuit, is equal to the resistance of the conductors up to the position of the front of the energy current (which is going at light speed for the insulator), plus the characteristic impedance of the geometry of the pair of wires, which is the 377 ohm impedance of the vacuum from Maxwell’s theory, multiplied by a dimensionless correction factor for the geometry.  The 377 ohm impedance arises here because Maxwell’s so-called ‘displacement current’ is (for physics at energies below the IR cutoff of QFT) radiation, rather than virtual electron and virtual positron motion.
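For reference, the 377 ohm figure follows from the vacuum permeability and permittivity; a quick numerical check (my own, using the standard constants):

```python
# The impedance of free space from Maxwell's theory: Z0 = sqrt(mu0/eps0).
import math

mu_0 = 4 * math.pi * 1e-7    # permeability of free space, H/m
epsilon_0 = 8.854e-12        # permittivity of free space, F/m

Z_0 = math.sqrt(mu_0 / epsilon_0)
print(f"impedance of free space: {Z_0:.2f} ohms")  # ~376.7 ohms
```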

The point is that the photon’s nature is determined by what is required to get propagation to work through the vacuum.  Some configurations are ruled out physically, because the self-inductance of uncancelled magnetic fields is infinite, so such proto-photons literally get nowhere (they can’t even set out from a charge).  It’s really like evolution: anything can try to work, but those things that don’t succeed get screened out.

The photon, therefore, is not the only possibility.  You can make exchange radiation work without photons if each oppositely-directed component of the exchange radiation has a magnetic field curl that cancels the magnetic field of the other component.  This means that two other types of electromagnetic gauge boson are possible beyond what is normally considered to be the photon: negatively charged electromagnetic radiation will propagate provided that it is propagating in opposite directions simultaneously (exchange radiation!) so that the magnetic fields are cancelled in this way, preventing infinite self-inductance.  Similarly for positive electromagnetic gauge bosons.  See this post.

For those who are easily confused, I’ll recap.  The usual photon has an equal amount of positive and negative electric field energy, spatially separated as implied by the size or wavelength of the photon (it’s a transverse wave, so it has a transverse wavelength).  Each of these propagating positive and negative electric fields has a magnetic field, but because the magnetic field curl from the moving positive electric field is in the opposite direction to the curl from the moving negative electric field, the two curls cancel out when the photon is seen from a distance large compared to its wavelength.  Hence, near a photon there are electric fields and magnetic fields, but at a distance large compared to the wavelength of the photon, these fields are both cancelled out.  This is the reason why a photon is said to be uncharged.  If the photon’s fields did not cancel, it would have charge.  Now, in the weak force theory there are three gauge bosons which have some connection to the photon: two charged W bosons and a neutral Z boson.  This suggests a workable, predictive revision to electromagnetic theory.

I’ve gone seriously off on a tangent here from comparing the books Not Even Wrong and The Trouble with Physics.  However, I think these are important points to make.

Update, 24 March ’07: the following is the part of a comment to Clifford’s blog which was snipped off.

In order to be really convincing someone has got to come up with a way of making checkable predictions from a defensible unification of general relativity and the standard model. Smolin has a longer list in his book:

1. Combine quantum field theory and general relativity
2. Determine the foundations of quantum mechanics
3. Unify all standard model particles and forces
4. Explain the standard model constants (masses and forces)
5. Explain dark matter and dark energy, or come up with some modified theory of gravity that eliminates them but is defensible.

Any non-string solution to these problems is almost by definition a joke and won’t be taken seriously by the mainstream string theorists. Typical argument:

String theorist: “String theory includes 6/7 extra dimensions and predicts superpartners, gravitons, branes, landscape of standard models, anthropic principle, etc.”

Alternative theorist: “My theory resolves real problems that are observable, by explaining existing data!”

String theorist: “That sounds boring/heretical to me.”

What’s unique about string theory is that it has managed to acquire public respect and credulity in advance of any experimental confirmation.

This is mainly due to public relations hype. That’s what makes it so tough on alternatives.

The correct unification scheme

The top post here gives a discussion of the problem of unifying gravity and the standard model forces: gauge boson radiation is exchanged between all charges in the universe, while electromagnetic forces only result in particular situations (dissimilar or similar charges).  As discussed below, gravitational exchange radiation interacts indirectly with electric charges, via some vacuum field particles which become associated with electric charges.  [This has nothing to do with the renormalization problem in speculative (string theory) quantum gravity, which predicts nothing.  Firstly, this does make predictions of particle masses and of gravity and cosmology.  Secondly, renormalization is accounted for by vacuum polarization shielding electric charge.  The mass changes in the same way, since the field which causes mass is coupled to the charge by the already renormalized (shielded) electric charge.]

The whole idea that gravity is a regular quantum field theory, which causes pair production if the field is strong enough, is totally speculative, and there is not the slightest evidence for it.  The pairs produced by an electric field above the IR cutoff, corresponding to 10^18 V/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge, published in PRL in 1997.  Koltick et al. found that electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (observable electric charge is independent of distance only beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because there is less polarized dielectric between you and the electron core as you get closer, so less of the electron’s core field gets cancelled by the intervening dielectric).
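The ~7% figure can be checked roughly against the standard running-coupling values (my own check; the 1/128.5 value at high energy is an assumption taken from the standard literature, not from the text above):

```python
# Rough check of the ~7% figure: the electromagnetic coupling
# (proportional to the square of the effective charge) runs from
# 1/137.036 at low energy to roughly 1/128.5 around 91 GeV.
alpha_inv_low = 137.036   # 1/alpha at low energy (large distance)
alpha_inv_hi = 128.5      # approximate 1/alpha at ~91 GeV

increase = alpha_inv_low / alpha_inv_hi - 1.0
print(f"fractional increase in coupling: {increase:.1%}")  # ~6.6%, i.e. ~7%
```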

There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons).  How can gravitational charge be renormalized?  There is no mechanism for pair production whereby the pairs would become polarized in a gravitational field.  For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that a pair of charges becomes polarized.  If both are displaced in the same direction by the field, they aren’t polarized.  So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells’s Cavorite.

There is no evidence for this.  Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, but only the renormalization of electric charge is explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value.  However, this is not a problem.  The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude).  Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.

Finally, supersymmetry is totally flawed.  What is occurring in quantum field theory seems to be physically straightforward, at least regarding force unification.  You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).

The energy sapped from the gauge boson mediated field of electromagnetism is being used.  It’s being used to create pairs of charges, which get polarized and shield the field.  This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory.  Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.

Now take the situation where you put N electrons close together, so that their cores are very nearby.  What will happen is that the surrounding vacuum polarization shells of the electrons will overlap.  The electric field is N times stronger (two or three times stronger for N = 2 or 3), so pair production and vacuum polarization are N times stronger.  So the shielding of the polarized vacuum is N times stronger!  This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron.  Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!

This has three remarkable consequences.  First, the observer at a long distance (>1 fm), who knows from high energy scattering that there are N charges present in the core, will see only 1 unit of charge at low energy.  Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.

Second, the Pauli exclusion principle prevents two fermions from sharing the same set of quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (usually, at low pressure, it is the quantum number for spin which changes, so adjacent electrons in an atom have opposite spins relative to one another; Dirac’s theory implies a strong association of intrinsic spin and magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials).  If you could extend the Pauli exclusion principle, you could allow particles to acquire short-range nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field, which produces a lot of pair production, allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.

Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do.  Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.

The right handed electron has a weak hypercharge of -2.  The left handed electron has a weak hypercharge of -1.  The left handed downquark (with observable low energy electric charge of -1/3) has a weak hypercharge of 1/3, while the right handed downquark has a weak hypercharge of -2/3.

It’s totally obvious what’s happening here.  What you need to focus on is the hadron (meson or baryon), not the individual quarks.  The quarks are real, but their electric charges as implied from low energy physics considerations are totally fictitious for trying to understand an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks).  The shielded electromagnetic charge energy is used in weak and strong nuclear fields, and is being shared between them.  It all comes from the electromagnetic field.  Supersymmetry is false because at high energy, where you see through the vacuum, you are going to arrive at the unshielded electric charge of the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces.  Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric).  This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.

It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength; this is done when electric field energy is stored in a capacitor.  In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose.  See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy.  (The collision energy is easily translated into distance using the Coulomb scattering law for the closest approach of two electrons in a head on collision, although at higher collision energies things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge.  The assumption of perfectly elastic Coulomb scattering will also need modification, leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.)  The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces.  This allows predictions and more checks.  It’s totally tied down to hard facts, anyway.  If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields and the vacuum pair production and polarization phenomena, so something will be learned either way.
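As a minimal sketch (mine, not from the linked paper) of the capacitor-style energy density formula mentioned above, u = (1/2)ε0E², evaluated at the 10^18 V/m field strength quoted for the IR cutoff:

```python
# Electric field energy density u = 0.5 * epsilon_0 * E^2 (J/m^3),
# evaluated at the ~1e18 V/m field strength quoted at ~1 fm from an electron.
epsilon_0 = 8.854e-12   # permittivity of free space, F/m

def field_energy_density(E_field: float) -> float:
    """Energy density in J/m^3 for an electric field strength in V/m."""
    return 0.5 * epsilon_0 * E_field**2

E_cutoff = 1e18  # V/m
print(f"energy density at 1e18 V/m: {field_energy_density(E_cutoff):.2e} J/m^3")
# ~4.4e24 J/m^3
```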

To give an example from https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron.  Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances.  This figure is the reason why the short ranged strong nuclear force is so much stronger than electromagnetism.
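For reference, 137.036 is the reciprocal of the fine structure constant, the standard dimensionless measure of the low energy electromagnetic coupling (a textbook identity, not something specific to the linked post):

$$\frac{1}{\alpha} = \frac{4\pi\varepsilon_0\hbar c}{e^2} \approx 137.036$$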

Smolin, Woit, the failure of string theory, and how string theory responds

Professor Lee Smolin has been attacked by various string theorists (particularly Aaron Bergman and Lubos Motl), and now Professor Clifford Johnson has seemingly joined Aaron and Lubos, in a post where he claims that pointing out the failure of string theory in books is unsatisfactory because it puts “their rather distorted views on the issues into the public domain in a manner that serves only to muddle”.

This seems to be a slightly unfair attack to me.  Clifford is certainly trying hardest of all the string theorists to be reasonable, but he has stated that he has not read the books critical of string theory, which means that his claim that the books contain ‘distorted views’ which ‘muddle’ the issues is really unfounded on fact (like the claims of string theory).

Dr Peter Woit has a nice set of notes summarising some problems with string theory here.  These are far more sketchy than his book and don’t explain the Standard Model and its history as the book does, but the notes do summarise a few of the many problems in string theory.  String theorists, if they even acknowledge the existence of critics at all (Witten has written a letter to Nature saying that he doesn’t; instead he suggests that string theorists should ignore objections while continuing to make, or stand by, misleading claims that string theory ‘predicts’ gravity, such as Witten’s own claim to that effect in the April 1996 issue of Physics Today), dismiss any problem with string theory as a ‘storm in a teacup’, refuse to read the books of critics, and misrepresent what the critics are saying, so the arguments don’t address the deep problems.

For instance, Clifford wrote in a particularly upsetting comment:

“For example, a great deal of time was spent by me arguing with Peter Woit that his oft-made public claim that string theory has been shown to be wrong is not a correct claim. I asked him again and again to tell us what the research result is that shows this. He has not, and seems unable to do so. I don’t consider that to be informed criticism, but a very very strong and unfair overstatement of what the current state of on-going research is.”

Peter Woit explains on page 177 of Not Even Wrong (which, admittedly, Clifford is unaware of since he has not read the book!) that using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. So that’s failure #1.

Moreover, Peter Woit also explains on page 179 that supersymmetry makes another false prediction: it predicts a massive amount of dark energy in the vacuum and an immense cosmological constant, totally contradicted by astronomy and too high by a factor of 10^55 or 10^113 depending on whether the string theory is minimally supersymmetric or a supersymmetric grand unified theory, respectively.

Either way, Dr Woit explains: ‘This is almost surely the worst prediction ever made by a physical theory that anyone has taken seriously.’ So that’s failure #2.

This is not a problem with the standard model of particle physics: comparing string theory to the standard model is a false comparison.  A student who answers one question on a paper and gets it wrong derives no excuse from pointing to another student who achieved 99% despite happening to get that same question wrong.  Any assessment by comparison needs to take account of successes, not just errors.  In one case the single error marks complete failure, while in the other it is trivial.

It’s still a string theory error, whether or not the standard model makes it as well.  String theorists adopt a different definition of the standard model for this argument, treating it more like a speculative theory than an empirical model of particle physics.  The standard model isn’t claimed to be the final theory; string theory is.  The standard model is extremely well based on empirical observations and makes checked predictions.  String theory doesn’t.

That’s why Smolin and Woit are more favourable to the standard model.  String theory, if of any use, should sort out any problems with the standard model.  This is why the errors of string theory are so alarming.  It is supposed to theoretically sort things out, unlike the standard model, which is an empirically based model, not a claimed final theory of unification.

Asymptotia


More Scenes From the Storm in a Teacup, VII

by Clifford, at 2:18 am, March 13th, 2007 in science, science in the media, string theory

“You can catch up on some of the earlier Scenes by looking at the posts listed at the end of this one. Through the course of doing those posts I’ve tried hard to summarize my views on the debate about the views of Smolin and Woit – especially hard to emphasize how the central point of their debate that is worth some actual discussion actually has nothing to do string theory at all. Basically, the whole business of singling out string theory as some sort of great evil is rather silly. If the debate is about anything (and it largely isn’t) it is about the process of doing scientific research (in any field), and the structure of academic careers in general. For the former matter, Smolin and Woit seem to have become frustrated with the standard channels through which detailed scientific debates are carried out and resolved, resorting to writing popular level books that put their rather distorted views on the issues into the public domain in a manner that serves only to muddle.  …” 

Everything that happens involves particle physics, so particle physics determines the nature of everything, using just a few types of fundamental particle and four basic fundamental forces (or three at high energy, where electroweak unification occurs).

It’s better to have debates and disputes over scientific matters that can potentially be resolved than to have arguments over interminable political opinions which can’t be resolved factually, even in principle.  I don’t agree that a lack of debate (until new experimental data arrive) is the best option.  The issue is that experiments may resolve the electroweak symmetry breaking mechanism, but they won’t necessarily change the facts in the string theory debate one bit.  Penrose explains the problem on pp. 1020-1 of The Road to Reality (UK ed.):

‘34.4 Can a wrong theory be experimentally refuted? … One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. … We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

I’ve written a very brief review of Lee Smolin’s book on Amazon.co.uk, which for brevity concentrates on reviewing the science of the book that I can review objectively (I ignore discussions of academic problems).  Here is a copy of it:

Professor Lee Smolin is one of the founders of the Perimeter Institute in Canada. He worked on string theory in the 1980s and switched to loop quantum gravity when string theory failed.

Before reading this book, I read Dr Peter Woit’s book about the failure of string theory, Not Even Wrong, read his blog, and watched Smolin’s lectures (available streamed online from the Perimeter Institute website), Introduction to Quantum Gravity, which explain the loop quantum gravity theory very clearly.

Smolin concentrates on the subject from the perspective of understanding gravity, although he also helped develop a twisted braid representation of the standard model particles.  Loop quantum gravity is built on firmer ground than string theory, and tackles the dynamics behind general relativity.

This is quite different from the approach of string theory, which completely ignores the dynamics of quantum gravity.  I should qualify this by saying that although stringy 11-dimensional supergravity, which is the bulk of the mainstream string theory, M-theory (in M-theory the 10-dimensional superstring is the brane or membrane on the bulk, like an N-1 dimensional surface on an N-dimensional material), does contain a spin-2 mode which (if real) corresponds to a graviton, that’s not a complete theory of gravitation.

In particular, in reproducing general relativity, string theory suggests a large negative cosmological constant, while the current observation-based cosmological model has a small positive cosmological constant.

In addition to failing there, string theory also fails to produce any of the observable particles of the standard model of particle physics.  This is because of the nature of string theory, which is constructed from a worldsheet (a 1-dimensional string, when moved, gains a time dimension, becoming a 1,1 dimensional “worldsheet”) to which 8 additional dimensions are added to satisfy the conformal symmetry of particle physics, assuming that there is supersymmetric unification of the standard model forces (which requires the assumption that every fermion in the universe has a bosonic superpartner, which nobody has ever observed in an experiment).  If supersymmetry is ignored, then you have to add three times as many dimensions to the worldsheet for conformal symmetry, giving 26 dimensional bosonic string theory.  That theory traditionally had problems in explaining fermions, although Tony Smith (now censored off arXiv by the mainstream) has recently come up with some ideas to get around that.

The failure of string theory is due to the 10 dimensions of supersymmetric superstring theory arising from the worldsheet and conformal symmetry requirements.  Clearly, we don’t see that many dimensions, so string theorists rise to the challenge with a trick first performed on Kaluza’s 5-dimensional theory back in the 1920s: Klein argued that an extra spatial dimension can be compactified by being curled up into a small size.  Historically, the smallest size assumed in physics has been the Planck length (which comes purely from dimensional analysis by combining physical constants, not from an experimentally validated theory or from observation).

With 10 dimensional superstring theory, the dimensions must be reduced on a macroscopic scale to 3 spatial dimensions plus 1 time dimension, so 6 spatial dimensions need compactification.  The method used to do this is the Calabi-Yau manifold.  But this causes a massive problem in string theory, called the landscape.  String theory claims that particles are vibrating strings, which becomes very problematic when 6 dimensions are compactified, because the vibration modes possible for a string then depend critically on the size and shape parameters of those 6 compactified dimensions.  The possibilities are vast, maybe infinite.

It turns out that there are at least 10^500 ways of producing a standard model vacuum ground state from such strings containing Calabi-Yau manifolds.  Nobody can tell whether any of those solutions is the real standard model of particle physics.  For comparison, the age of the universe is something like 10^17 seconds.  Hence, if you had a massive computer trying to compute all the solutions to string theory from the moment of the big bang to now, it would have to work at a speed of 10^483 solutions per second to solve the problem (a practically impossible speed, even with such timescales available).  A few string theorists hope to find a way to tackle this problem statistically, in a non-rigorous way (without checking every single solution), before the end of the universe, but most have given up and try to explain particle physics by the anthropic principle, whereby it is assumed that there is one universe for each of the 10^500 solutions to string theory, and we see the one standard model whose parameters are able to result in humans.
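The arithmetic behind the quoted rate is simply:

$$\frac{10^{500}\ \text{solutions}}{10^{17}\ \text{seconds}} = 10^{483}\ \text{solutions per second}$$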

More radical string theorists proclaim that if you fiddle around with the field theories underlying general relativity and the standard model, you can create a landscape of unobserved imaginary universes from those theories, similar to string theory. Therefore, they claim, the problems in string theory are similar to those in general relativity and the standard model. However, this analogy is flawed because those checked theories are built up on the basis of observations of particle symmetries, electrodynamics, energy conservation and gravitation, and they also produce checkable predictions. In short, there is no problem due to the imaginary landscape in those theories, whereas there is a real problem caused by the landscape in string theory, because it prevents a reproduction (post-diction) of existing physics, let alone predictions.

Smolin suggests that the failure of string theory to explain general relativity and the standard model of particle physics means that it may be helpful if physicists get off the string theory bandwagon and start investigating other ideas.  Woit makes the same point and gives the technical reasons.

The problem is that string theory has over the past two decades become a cult topic supported by endless marketing hype, magazine articles, books, even sci-fi films.  Extra dimensions are popular, and the heroes of string theory have become used to being praised despite having not the slightest shred of evidence for their subject.  Recently, they have been claiming that string theory mathematics is valuable for tackling some technical problems in nuclear physics, or may be validated by the discovery of vast cosmic strings in space.  But even the mathematics of the epicycles of Ptolemy’s earth centred universe had uses elsewhere, so this defence of string theory is disingenuous.  It’s not clear that string theory maths solves any nuclear physics problems that can’t be solved by other methods.  Even if it does, that’s irrelevant to the issue of whether people should be hyping string theory as the best theory around.

Smolin’s alternative is loop quantum gravity.  The advantage of this is that it builds up Einstein’s field equation, less a metric (so it is background independent), from a simple summing of interaction graphs for the nodes of a Penrose spin network in the 3 spatial dimensions plus 1 time dimension we observe.  This sum is equivalent to taking a Feynman path integral, which is a basic method of doing quantum field theory.  The result is general relativity without a metric.  It is not a complete theory yet, and is the very opposite of string theory in many ways.

While string theory requires unobservables like extra dimensions and superpartners, loop quantum gravity works in observable spacetime using quantum field theory to produce a quantum gravity consistent with general relativity. Ockham’s razor, the principle of economy in science, should tell you that loop quantum gravity is tackling real physics in a simple way, whereas string theory is superfluous (at least until there is some evidence for it).

Obviously there is more progress to be made in loop quantum gravity, which needs to become a full Yang-Mills quantum theory if gravity is indeed a force like the other standard model forces. However, maybe the relationship between gravity and the other long-range force, electromagnetism, will turn out to be different to what is expected.

For instance, loop quantum gravity needs to address the problem of whether gravity is a renormalizable quantum field theory like the standard model Yang-Mills theories.  This will depend on the way in which gravitational charge, i.e. mass, is attached to or associated with standard model charges by way of some sort of “Higgs field”.  The Large Hadron Collider is due to investigate this soon.  Renormalization involves using a corrected “bare charge” value for electric charge and nuclear charges which is higher than that observed.  The justification is that very close to a particle, vacuum pair production occurs in the strong field, and the pairs polarize and shield the bare core charge down to the observed value seen at long distances and low energies.  For gravity, renormalization poses the problem of how gravitational charge can be shielded.  Clearly, masses don’t polarize in a gravitational field (they all move the same way, unlike electrons and positrons in an electric field), so the mass-giving “Higgs field” effect is not directly capable of renormalization; but it is capable of indirect renormalization if the Higgs field is associated with particles by another field, like the electric field, which is renormalized.

These are just aspects which appeal to me. One of the most fun parts of the book is where Smolin explains the reason behind “Doubly Special Relativity”.

Peter Woit’s recent book Not Even Wrong has a far deeper explanation of the standard model and the development of quantum field theory, the proponents and critics of string theory, and gives the case for a deeper understanding of the standard model in observed spacetime dimensions using tools like the well established mathematical modelling methods of representation theory.

Both books should really be read to understand the overall problem and possibilities for progress by alternative ideas despite the failure of string theory.

Update: in the comments on Asymptotia, Peter Woit has made some quick remarks from a web cafe in Pisa, Italy.  Instead of arguing about the substance of his remarks, Aaron Bergman and Jacques Distler are repeatedly attacking one nonsense sentence he typed, in which he wrote the contradiction that a cosmological constant can correspond to flat spacetime, whereas the cosmological constant implies a small curvature.  Unable to defend string theory against the substance of the charge that it is false, they are now attacking this one sentence as a straw man.  It’s completely unethical.  The fact that a string theorist will refuse to read the carefully written and proof-read books, and then choose instead to endlessly attack a spurious comment on a weblog, just shows the level to which their professionalism has sunk.  Jacques Distler does point out correctly that in flat spacetime the vacuum energy does not produce a cosmological constant.  Instead of hair-splitting attacks on critics of completely failed theories, he should perhaps admit the theory has no claim to be science.