I read Peter Woit’s Not Even Wrong last summer, and it is certainly the most important book I’ve read: it gives a clear explanation of chiral symmetry in the electroweak sector of the Standard Model, as well as a brilliant outline of the mathematical thinking that led to the gauge groups of the Standard Model.
Previously, I had learned how particle physics experiments provided the input used to build the Standard Model. Here’s a sketch of the sort of empirical foundation I mean:
The same symmetry principles also describe the mesons in a similar way (mesons are quark-antiquark pairs, not triplets of quarks as in the case of the baryons illustrated in my sketch above). Baryons and mesons together form the hadrons, the strongly interacting particles. These are all composed of quarks, and the symmetry responsible for the strong force is the special unitary group SU(3). Although the idea of colour charge, whereby each quark carries a strong charge in addition to its electric and weak charges, seems speculative, there is evidence from the fact that the omega minus particle is composed of three strange quarks. By the Pauli exclusion principle, you simply can’t confine three identical fermions like strange quarks together, because a spin-1/2 particle has only two spin states, so at least two of the three would have to occupy the same quantum state. (You could confine two strange quarks, because one could take the opposite spin state of the other, which the Pauli exclusion principle allows, but this trick doesn’t work for three similar quarks.) In fact, from the measured spin of 3/2 for the omega minus, all three of its spin-1/2 strange quarks must have the same spin state. The easiest way to account for this is the new ‘quantum number’ (or, rather, property) of ‘colour charge’.
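To make the Pauli argument concrete, here is a minimal Python sketch (my own illustration, not from Woit’s book) of why colour rescues the omega minus: the colour part of a baryon’s wavefunction is the totally antisymmetric Levi-Civita combination of the three colours, so swapping any two quarks flips its sign, and three strange quarks with identical spin and flavour states are then permitted overall.

```python
# Minimal sketch: the colour-singlet part of a baryon wavefunction is the
# totally antisymmetric Levi-Civita combination eps(i, j, k) of the three
# colour indices (r, g, b). Exchanging any two quarks flips its sign, so
# three strange quarks with identical spin and flavour can still satisfy
# the Pauli exclusion principle overall.
import itertools

def levi_civita(i, j, k):
    """Sign of the permutation (i, j, k) of (0, 1, 2); 0 if any index repeats."""
    if len({i, j, k}) < 3:
        return 0
    perm = (i, j, k)
    # Count inversions to get the parity of the permutation.
    inversions = sum(1 for a in range(3) for b in range(a + 1, 3)
                     if perm[a] > perm[b])
    return 1 if inversions % 2 == 0 else -1

# Verify total antisymmetry: exchanging any pair of colour indices reverses
# the sign, which is exactly what Pauli demands of identical fermions.
for i, j, k in itertools.product(range(3), repeat=3):
    assert levi_civita(j, i, k) == -levi_civita(i, j, k)  # swap quarks 1 and 2
    assert levi_civita(i, k, j) == -levi_civita(i, j, k)  # swap quarks 2 and 3
    assert levi_civita(k, j, i) == -levi_civita(i, j, k)  # swap quarks 1 and 3
print("colour-singlet part is totally antisymmetric")
```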
This story, whereby the composition and spin of the omega minus mean that Pauli’s exclusion principle forces a new quantum number, colour charge, on quarks, is actually back-to-front. What happened was that Murray Gell-Mann and Yuval Ne’eman in 1961 independently arranged the particles into families of 8 particles each by the SU(3) symmetry scheme above, and found in one of these families that there was no known particle to fill the spin-3/2, charge -1 gap: this was the prediction of the omega minus! The omega minus was predicted in 1961, and after two years of experiments it was found in a bubble chamber photograph taken in 1964. This verified the eight-fold way SU(3) symmetry. The story of the quark, which is the underlying explanation for the SU(3) symmetry, came afterwards. Both Gell-Mann and George Zweig put forward the quark concept in 1964, although Zweig called the particles ‘aces’, on the basis of the incorrect assumption that there were four flavours altogether (it is now known that there are six quark flavours, in three generations of two quarks each: up and down, charm and strange, top and bottom). Zweig’s lengthy paper, which independently predicted the same properties of quarks as those Gell-Mann predicted, was rejected by the peer-reviewers of a major American journal, but Gell-Mann’s simpler model in a brief two-page paper was published under the title ‘A Schematic Model of Baryons and Mesons’ in the European journal Physics Letters, v8, pp. 214-5 (1964). Gell-Mann argues in that paper that his quark model is ‘a simpler and more elegant scheme’ than the eight-fold way alone. (The name quark was taken from page 383 of James Joyce’s Finnegans Wake, Viking Press, New York, 1939.) David J. Gross’s Nobel lecture [Proc. Natl. Acad. Sci. U S A. 2005 June 28; 102(26): 9099–9108] begins by commenting:
‘The progress of science is much more muddled than is depicted in most history books. This is especially true of theoretical physics, partly because history is written by the victorious. Consequently, historians of science often ignore the many alternate paths that people wandered down, the many false clues they followed, the many misconceptions they had. These alternate points of view are less clearly developed than the final theories, harder to understand and easier to forget, especially as these are viewed years later, when it all really does make sense. Thus, reading history one rarely gets the feeling of the true nature of scientific development, in which the element of farce is as great as the element of triumph.
‘The emergence of QCD is a wonderful example of the evolution from farce to triumph. During a very short period, a transition occurred from experimental discovery and theoretical confusion to theoretical triumph and experimental confirmation. …’
To get back to colour charge: what is it physically? Colour and flavour are just abstract labels for known mathematical properties. It’s interesting that the Pauli exclusion principle suggested colour charge, via the problem of needing three strange quarks with the same spin state in the omega minus particle. The causal mechanism of the Pauli exclusion principle is probably related to the magnetism caused by spin: the system energy is minimised (so the system is most stable) when the spins of adjacent particles are opposite to one another, cancelling out the net magnetic field instead of letting it add up. This is why most materials are not strongly magnetic, despite the fact that every electron has a magnetic moment and atoms are arranged regularly in crystals. Where magnetism does occur, as in iron magnets, it is due to the alignment of electron spins in different atoms, not to the orbital motion of electrons, which is largely chaotic (there are shaped orbitals where the probability of finding the electron is higher than elsewhere, but the direction of the electron’s motion is still random, so the magnetic fields caused by the ordinary orbital motions of electrons in atoms naturally cancel out).
As stated in the previous post, when two or three fermions are confined in close proximity they acquire new charges, such as colour charge, and this avoids violating the Pauli exclusion principle. Hence the energy of the system doesn’t make it unstable: the extra energy goes into new forces, mediated by new vacuum charges in the strong fields, which result in vacuum pair production and polarization phenomena.
Peter Woit’s Not Even Wrong is an exciting book because it gives a motivational approach and historical introduction to the group representation theory you need to know to really start understanding the basic mathematical background to empirically based modern physics. Hermann Weyl worked on Lie group representation theory in the late 1920s and wrote a book about it, which was ignored at the time. The Lie groups themselves had been defined by Sophus Lie in 1873.
It was only when things like the ‘particle zoo’ – the hundreds of unexplained particles discovered using the early particle accelerators (with cloud chambers, and later bubble chambers, to record interactions, unlike modern solid state electronic detectors) after World War II – were finally explained by Murray Gell-Mann and Yuval Ne’eman around 1960 using symmetry ideas, that Weyl’s work was taken seriously. Woit writes on page 7 (London edition):
‘The positive argument of this book will be that, historically, one of the main sources of progress in particle theory has been the discovery of new symmetry groups of nature, together with new representations of these groups. The failure of the superstring theory programme can be traced back to its lack of any fundamental new symmetry group.’
On page 15 (London edition), Woit explains that in special relativity: ‘if I try to move at high speed in the same direction as a beam of light, no matter how fast I go, the light will always be moving away from me at the same speed.’
This is an excellent way to express what special relativity says. The physical mechanism is time-dilation for the observer. If you are moving at high speed, your clocks and your brain all slow down, so you suffer from the illusion that even a snail is going like a rocket. That’s why you don’t see the velocity of light appear to slow down: your measurements of speed are skewed by time-dilation. That is, physically, the mechanism responsible for special relativity in this particular case. There’s no weird paradox involved, just physics.
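For concreteness, here is the standard time-dilation factor gamma = 1/[1 - (v/c)^2]^0.5 that this argument appeals to, evaluated at a few illustrative speeds (a trivial sketch, nothing beyond the textbook formula):

```python
# Lorentz time-dilation factor gamma = 1/sqrt(1 - v^2/c^2): the moving
# observer's clocks run slow by this factor relative to the rest frame.
from math import sqrt

c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Time-dilation factor for speed v (m/s)."""
    return 1.0 / sqrt(1.0 - (v / c) ** 2)

for fraction in (0.1, 0.9, 0.99, 0.999):
    print(f"v = {fraction:5.3f} c  ->  gamma = {gamma(fraction * c):8.3f}")
```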
If we jump to Lee Smolin’s The Trouble with Physics (New York edition), page 34, we again find a problem of this sort. Smolin points out that the aether theory was wrong because light was treated as basically a sort of sound wave in the aether, which would require an enormous aether density, and it is paradoxical for something that fills space at high density to offer no resistance to motion.
Clearly the fundamental particles don’t experience much resistance because they’re so small, unlike macroscopic matter, and the resistance is detected as the Lorentz-FitzGerald contraction of special relativity. But the Standard Model has exchange radiation filling spacetime and causing forces, and it’s clear that the exchange radiation is causing these effects. Move relative to the exchange radiation, and you get contracted in the direction of your motion. If you want to think of a fluid ‘Dirac sea’, you get no drag whatsoever because the vacuum – unlike matter – doesn’t heat up. (The temperature of radiation in space, such as the temperature of the microwave background, is the effective temperature of a blackbody emitter corresponding to the energy spectrum of those photons, not the temperature of the vacuum itself; if the vacuum were radiating energy due to its own temperature – which it is not – then the microwave background would not be redshifted thermal radiation from the big bang, but heat emitted spontaneously by the vacuum.)
There are two aspects of the physical resistance to motion in a fluid. The first is an inertial resistance due to the shifting of the fluid out of the path of the moving object. Once the object is moving (think of a ship), the fluid pushed out of the way at the bow travels around and pushes in at the stern, returning some of the energy. The percentage of the energy returned is small for a ship because of dissipative losses: the water molecules that hit the front of the ship are speeded up and hit other molecules, frothing and heating the water slightly and setting up waves. But some energy is still returned, and there is also length contraction in the direction of motion.
In the case of matter moving in the Dirac sea or exchange radiation field (equivalent to the spacetime fabric of general relativity, responsible for inertial and gravitational forces), the exchange radiation does not act merely on the exterior of a macroscopic object; it penetrates to the fundamental particles, which are very small (so mutual shielding is trivial for the particles of a small mass), and so the whole thing is contracted irrespective of the mechanical strength of the material. (If the exchange radiation acted only on the front layer of atoms, the contraction would depend on the strength of the material.)
Where this spacetime fabric analogy gets useful is that it allows a prediction for the strength of gravity which is accurate to within experimental error. This works as follows. The particles in the surrounding universe are receding from us in spacetime, where bigger apparent distances imply greater times into the past (due to the travel or delay time of light in reaching us). As these particles recede at speeds that increase with spacetime distance, and assuming that the ‘Dirac sea’ fluid analogy holds, there will be a net inward flow of Dirac sea fluid towards us, filling in the spatial volumes vacated as the matter of the universe recedes.
The mathematics allows us to calculate the inward force that results, and irrespective of the actual size (cross-sectional area and volume) of the receding particles, the gravity parameter G can be calculated fairly accurately from this inward force equation. A second calculation was developed assuming that the spacetime fabric can be viewed either as a Dirac sea or as exchange radiation, on the basis that Maxwell’s ‘displacement current’ can be virtual fermions where there are loops (i.e., above the IR cutoff of quantum field theory), but must be radiation where there are no virtual fermion effects (i.e., at distances greater than ~1 fm from a particle, where the electric field is below ~10^18 V/m, under the IR cutoff). This second calculation takes the compression to be done by exchange radiation rather than a fluid Dirac sea, and when it is normalized against the first equation, a second parameter can be calculated: the exact shielding area per fundamental particle. The effective cross-sectional shielding area for gravity, for a particle of mass m, is Pi*(2Gm/c^2)^2 – the area of a circle whose radius is the black hole event horizon radius, 2Gm/c^2 – which seems to tie in with another calculation here.
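For concreteness, here is a minimal sketch evaluating that shielding cross-section, Pi*(2Gm/c^2)^2, for an electron. The constants are standard values; this just evaluates the stated formula and is not an independent derivation:

```python
# Effective gravitational shielding cross-section quoted in the text for a
# particle of mass m: the area of a circle whose radius is the black hole
# event horizon radius r = 2Gm/c^2, i.e. area = pi * (2Gm/c^2)^2.
from math import pi

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0       # speed of light, m/s
m_electron = 9.109e-31  # electron mass, kg

r_horizon = 2 * G * m_electron / c**2
area = pi * r_horizon**2
print(f"event horizon radius for an electron: {r_horizon:.3e} m")
print(f"shielding cross-section: {area:.3e} m^2")
```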
Getting back to Not Even Wrong, Dr Woit then introduces the state-vector which describes the particle states in the universe, and the Hamiltonian which describes the energy of a state-vector and its rate of change. What is interesting is that Woit then observes that:
‘The fact that the Hamiltonian simultaneously describes the energy of a state-vector, as well as how fast the state-vector is changing with time, implies that the units in which one measures energy and the units in which one measures time are linked together. If one changes one’s unit of time from seconds to half-seconds, the rate of change of the state-vector will double and so will the energy. The constant that relates time units and energy units is called Planck’s constant … It is generally agreed that Planck made an unfortunate choice of how to express the new constant he needed …’
Planck defined his constant as h in the equation E = hf, where f is wave frequency. The point Woit makes here is that Planck should have expressed the constant using angular (rotational) frequency instead. Angular frequency (measured in radians per second, where 1 rotation = 2*Pi radians) is 2*Pi*f, so Planck would have obtained a constant equal to h/(2*Pi), which is now called h-bar.
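A trivial numerical check that the two conventions are the same physics, expressed per cycle versus per radian (the frequency is just an illustrative visible-light value):

```python
# Planck's E = h*f versus the equivalent E = hbar*omega, where
# omega = 2*pi*f is the angular frequency; hbar = h/(2*pi) is the same
# constant expressed per radian instead of per cycle.
from math import pi

h = 6.62607015e-34   # Planck constant, J*s (exact SI value)
hbar = h / (2 * pi)  # reduced Planck constant

f = 5.0e14           # illustrative visible-light frequency, Hz
omega = 2 * pi * f   # angular frequency, rad/s

print(f"E = h*f        = {h * f:.6e} J")
print(f"E = hbar*omega = {hbar * omega:.6e} J")  # identical, by construction
```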
This is usually considered a trivial point, but it is important. When people go on about Planck’s discovery of the quantum theory of radiation in 1900, they forget that classical radio waves were well known and actually in use at the time. This raises the question of the reason for the difference between quantum and classical electromagnetic waves.
Dr Bernard Haisch has a site with links to various papers of interest here: http://www.calphysics.org/research.html. Alfonso Rueda and Bernard Haisch have investigated some of the important ideas needed to sort out the foundations of quantum field theory, although their papers are incomplete and don’t produce the predictions of important phenomena that would be needed to convince string theorists to give up hyping their failed theory. The key thing is that the electron does radiate in its ground state. The reason it doesn’t fall below the ground state is that all electrons are radiating, and there are many in the universe, so every electron is also receiving radiation. The electron can’t spiral in by losing energy, because while in the ground state it is in gauge boson radiation equilibrium with its surroundings, receiving the same gauge boson power back as it emits!
The reason why quantum radiation is emitted is that this ground state (equilibrium) exists precisely because all electrons are radiating. So Yang-Mills quantum field theory really does contain the exchange radiation dynamics for forces, which should explain to everyone what is occurring in the ground state of the atom.
The reason why radio waves and light are distinguished from the normally invisible gauge boson exchange radiation is that exchange radiation is received symmetrically from all directions and causes no net forces. Radio waves and light, on the other hand, can cause net forces, setting up electron motions (electric currents) which we can detect! I don’t like Dr Haisch’s statement that string theory might be sorted out by this mechanism:
‘It is suggested that inertia is indeed a fundamental property that has not been properly addressed even by superstring theory. The acquisition of mass-energy may still allow for, indeed demand, a mechanism to generate an inertial reaction force upon acceleration. Or to put it another way, even when a Higgs particle is finally detected establishing the existence of a Higgs field, one may still need a mechanism for giving that Higgs-induced mass the property of inertia. A mechanism capable of generating an inertial reaction force has been discovered using the techniques of stochastic electrodynamics (origin of inertia). Perhaps this simple yet elegant result may be pointing to a deep new insight on inertia and the principle of equivalence, and if so, how this may be unified with modern quantum field theory and superstring theory.’
Superstring theory is wrong, and this undermines M-theory. The cost of supersymmetry seems five-fold:
(1) It requires unobserved supersymmetric partners, and doesn’t predict their energies or anything else that is a checkable prediction.
(2) It assumes that there is unification at high energy. Why? Obviously a lot of electric field energy is being shielded by the polarized vacuum near the particle core. That shielded electromagnetic energy goes into short-ranged virtual particle loops, which will include gauge bosons (W+/-, Z, etc.). In that case, there’s no high-energy unification. At really high energy (small distance from the particle core), the electromagnetic charge approaches its high bare core value; there is less shielding by the vacuum between core and observer, so less energy is available to power the weak and strong nuclear charges (which are powered by the energy shielded from the electromagnetic field by the polarized vacuum), and those effective charges fall toward zero. This gets rid of the high energy unification idea altogether. (For contrast, see the standard one-loop running sketched after this list.)
(3) Supersymmetry requires 10 dimensions, and the rolling up of 6 of them into a Calabi-Yau manifold creates a complexity of string resonances that produces the landscape of 10^500 versions of the Standard Model, preventing any prediction of particle physics.
(4) Supersymmetry, using the measured weak SU(2) and electromagnetic U(1) forces as inputs, predicts the SU(3) force incorrectly: 10-15% too high.
(5) Supersymmetry when applied to try to solve the cosmological constant problem, gives a useless answer, at least 10^55 times too high.
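For contrast with point (2), here is the standard textbook one-loop running of the three Standard Model couplings (with the GUT-normalised U(1)), using approximate input values at the Z mass. It shows the inverse couplings approaching one another at high energy but failing to meet at a single point without supersymmetry; this is the conventional calculation, not the vacuum energy-shielding mechanism argued for above:

```python
# Textbook one-loop running of the Standard Model gauge couplings
# (GUT-normalised U(1)). The inverse couplings evolve linearly in
# log(mu) and come close at high energy but do not meet at one point
# without supersymmetry. Input values at the Z mass are approximate.
from math import log, pi

MZ = 91.19  # Z boson mass, GeV
alpha_inv_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}  # ~1/alpha at MZ
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7}        # SM one-loop coefficients

def alpha_inv(group, mu):
    """One-loop inverse coupling at energy scale mu (GeV)."""
    return alpha_inv_MZ[group] - b[group] / (2 * pi) * log(mu / MZ)

for mu in (1e3, 1e9, 1e13, 1e16):
    row = ", ".join(f"1/alpha_{g} = {alpha_inv(g, mu):6.1f}" for g in b)
    print(f"mu = {mu:8.1e} GeV: " + row)
```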
The real sign that you are dealing with a religion is the clinging to physically useless orthodoxy.
Gravity and the Quantum Vacuum Inertia Hypothesis
Alfonso Rueda & Bernard Haisch, Annalen der Physik, Vol. 14, No. 8, pp. 479-498 (2005).
Review of Experimental Concepts for Studying the Quantum Vacuum Fields
E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda & D. C. Cole, Space Technology and Applications International Forum (STAIF 2006), p. 1390 (2006).
Analysis of Orbital Decay Time for the Classical Hydrogen Atom Interacting with Circularly Polarized Electromagnetic Radiation
Daniel C. Cole & Yi Zou, Physical Review E, 69, 016601 (2004).
Inertial mass and the quantum vacuum fields
Bernard Haisch, Alfonso Rueda & York Dobyns, Annalen der Physik, Vol. 10, No. 5, pp. 393-414 (2001).
Stochastic nonrelativistic approach to gravity as originating from vacuum zero-point field van der Waals forces
Daniel C. Cole, Alfonso Rueda & Konn Danley, Physical Review A, 63, 054101 (2001).
The Case for Inertia as a Vacuum Effect: a Reply to Woodward & Mahood
Y. Dobyns, A. Rueda & B. Haisch, Foundations of Physics, Vol. 30, No. 1, p. 59 (2000).
On the relation between a zero-point-field-induced inertial effect and the Einstein-de Broglie formula
B. Haisch & A. Rueda, Physics Letters A, 268, 224 (2000).
Contribution to inertial mass by reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Foundations of Physics, Vol. 28, No. 7, pp. 1057-1108 (1998).
Inertial mass as reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Physics Letters A, Vol. 240, No. 3, pp. 115-126 (1998).
Reply to Michel’s “Comment on Zero-Point Fluctuations and the Cosmological Constant”
B. Haisch & A. Rueda, Astrophysical Journal, 488, 563 (1997).
Quantum and classical statistics of the electromagnetic zero-point field
M. Ibison & B. Haisch, Physical Review A, 54, pp. 2737-2744 (1996).
Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids
A. Rueda, B. Haisch & D. C. Cole, Astrophysical Journal, Vol. 445, pp. 7-16 (1995).
Inertia as a zero-point-field Lorentz force
B. Haisch, A. Rueda & H. E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678-694 (1994).
The articles above have various problems. The claim that the source of inertia is the same zero-point electromagnetic radiation that causes the Casimir force, and that gravitation arises in the same way, is in a sense correct, but you have to increase the number of gauge bosons in electromagnetism in order to explain why gravity is 10^40 times weaker than electromagnetism. This is actually a benefit rather than a problem, as shown here. To causally explain the mechanisms of repulsion and attraction between similar and dissimilar charges, as well as gravity with the correct strength, the electromagnetic theory treats the gauge bosons as diffusing between similar charges throughout the universe: a drunkard’s walk whose vector sum has a strength equal to the square root of the number of charges in the universe, multiplied by the gravity force that is mediated by photons. The theory then ends up with 3 gauge bosons, like the weak SU(2) force. So it looks as if this can incorporate gravity into the Standard Model of particle physics.
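A quick Monte Carlo check of the drunkard’s-walk claim (my own sketch; the trial counts are arbitrary): the magnitude of the vector sum of N randomly directed unit contributions grows roughly as the square root of N, not as N:

```python
# Monte Carlo check of the drunkard's-walk scaling: the vector sum of N
# randomly directed unit contributions has an average magnitude of order
# sqrt(N), far below the coherent sum N.
import random
from math import cos, sin, pi, sqrt

def mean_resultant(n_steps, trials=1000):
    """Average magnitude of the sum of n_steps random 2-D unit vectors."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * pi)
            x += cos(theta)
            y += sin(theta)
        total += sqrt(x * x + y * y)
    return total / trials

for n in (100, 400, 1600):
    print(f"N = {n:5d}: mean |vector sum| = {mean_resultant(n):7.1f} "
          f"(sqrt(N) = {sqrt(n):5.1f})")
```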
The conventional treatment of how photons can cause attractive and repulsive forces just specifies the right number of polarizations and the right spin. If you want a purely attractive gauge boson, you posit a spin-2 ‘graviton’. But this comes from abstract symmetry principles; it isn’t dynamical physics. For example, you can get all sorts of different spins and polarizations when radiation is exchanged, depending on how you define what is going on. If two transverse electromagnetic (TEM) waves with the same amplitude pass through one another while travelling in opposite directions, the curls of their respective magnetic fields cancel out for the duration of the overlap. So the polarization number will be changed! As a result, the exchange of radiation in two directions is easier than a one-way transfer of radiation. Normally you need two parallel conductors to propagate an electromagnetic wave along a cable, or you need an oscillating wave (with as much negative electric field as positive electric field in it) for energy to propagate. The reason is that a wave of purely one sign of electric field (positive only or negative only) would have an uncancelled infinite self-inductance due to the magnetic field it creates. You have to ensure that the net long-range magnetic field is zero, or the wave won’t propagate (whether guided by a wire or launched into free space). The only way normally of getting rid of this infinite self-inductance is to fire off two electric field waves, one positive and one negative, so that the magnetic fields from each have opposite curls and the long-range magnetic field is zero (perfect cancellation).
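The cancellation claimed above for two overlapping, oppositely-directed TEM waves can be checked numerically. In this toy sketch (natural units; Gaussian pulse shapes chosen purely for illustration), each pulse carries a magnetic field B = ±E/c with the sign set by its travel direction, so at the instant of complete overlap the electric fields add while the magnetic fields cancel:

```python
# Two identical TEM pulses travelling in opposite directions: each carries
# B = +/- E/c, with the sign set by the direction of travel. At the instant
# of complete overlap the electric fields add while the magnetic fields
# cancel, illustrating the curl cancellation described above.
import numpy as np

c = 1.0                        # natural units
z = np.linspace(-10, 10, 2001)

def pulse(z, t, direction):
    """Gaussian TEM pulse; direction = +1 (toward +z) or -1 (toward -z)."""
    E = np.exp(-(z - direction * c * t) ** 2)
    B = direction * E / c      # magnetic field sign follows travel direction
    return E, B

for t in (-5.0, 0.0, 5.0):     # the pulses completely overlap at t = 0
    E1, B1 = pulse(z, t, +1)
    E2, B2 = pulse(z, t, -1)
    E, B = E1 + E2, B1 + B2
    print(f"t = {t:5.1f}: max|E| = {E.max():.3f}, max|B| = {np.abs(B).max():.3f}")
```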
This self-inductance argument explains why you normally need two wires to send logic signals. The old explanation for the two wires is false: you don’t need a complete circuit. In fact, because electricity can never travel around a circuit instantly when you press the on switch, it is impossible for the electricity to ‘know’ whether the circuit it is entering is open or is terminated by a load (or short-circuit) until the light-speed electromagnetic energy completes the circuit.
Whenever energy first enters a circuit, it does so in the same way regardless of whether the circuit is open or closed, because it travels at the speed of light for the surrounding insulator and can’t (and, in experiments, doesn’t) tell what the resistance of the whole circuit will turn out to be. The effective resistance, until the energy completes the circuit, is equal to the resistance of the conductors up to the position of the front of the energy current (which is going at light speed for the insulator), plus the characteristic impedance of the geometry of the pair of wires, which is the 377 ohm impedance of the vacuum from Maxwell’s theory multiplied by a dimensionless correction factor for the geometry. The 377 ohm impedance here arises because Maxwell’s so-called ‘displacement current’ is (for physics at energies below the IR cutoff of QFT) radiation, rather than virtual electron and virtual positron motion.
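As a concrete illustration of the ‘377 ohms multiplied by a geometry factor’ point, the textbook characteristic impedance of a parallel-wire line in vacuum is Z0 = (377/Pi)*arccosh(D/d), for centre-to-centre spacing D and wire diameter d; the dimensions below are purely illustrative:

```python
# Characteristic impedance of a two-wire transmission line in vacuum:
# the 377 ohm impedance of free space multiplied by the dimensionless
# geometry factor arccosh(D/d)/pi, where D is the centre-to-centre wire
# spacing and d the wire diameter.
from math import acosh, pi

eta0 = 376.73  # impedance of free space, ohms

def two_wire_impedance(spacing, diameter):
    """Z0 of a parallel-wire line with vacuum dielectric (same length units)."""
    return eta0 / pi * acosh(spacing / diameter)

for spacing_mm in (2.0, 5.0, 20.0):
    z0 = two_wire_impedance(spacing_mm, 1.0)  # 1 mm diameter wires
    print(f"spacing {spacing_mm:4.1f} mm: Z0 = {z0:6.1f} ohms")
```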
The point is that the photon’s nature is determined by what is required to get propagation to work through the vacuum. Some configurations are ruled out physically because the self-inductance of their uncancelled magnetic fields is infinite, so such proto-photons literally get nowhere (they can’t even set out from a charge). It’s really like evolution: anything can try to work, but the things that don’t succeed get screened out.
The photon, therefore, is not the only possibility. You can make exchange radiation work without photons, provided each oppositely-directed component of the exchange radiation has a magnetic field curl that cancels the magnetic field of the other component. This means that two other types of electromagnetic gauge boson are possible beyond what is normally considered to be the photon: negatively charged electromagnetic radiation will propagate provided it is propagating in opposite directions simultaneously (exchange radiation!) so that the magnetic fields cancel in this way, preventing infinite self-inductance. Similarly for positively charged electromagnetic gauge bosons. See this post.
For those who are easily confused, I’ll recap. The usual photon has an equal amount of positive and negative electric field energy, spatially separated as implied by the size or wavelength of the photon (it’s a transverse wave, so it has a transverse wavelength). Each of these propagating positive and negative electric fields has a magnetic field, but because the magnetic field of the moving negative field curls in the opposite direction to that of the moving positive field, the two curls cancel out when the photon is seen from a distance large compared to its wavelength. Hence, near a photon there are electric and magnetic fields, but at a distance large compared to the photon’s wavelength both fields are cancelled out. This is the reason why a photon is said to be uncharged. If the photon’s fields did not cancel, it would have charge. Now, in the weak force theory there are three gauge bosons which have some connection to the photon: two charged W bosons and a neutral Z boson. This suggests a workable, predictive revision to electromagnetic theory.
I’ve gone seriously off on a tangent here to comparing the books Not Even Wrong and The Trouble with Physics. However, I think these are important points to make.
Update, 24 March ’07: the following is the part of a comment to Clifford’s blog which was snipped off.
In order to be really convincing someone has got to come up with a way of making checkable predictions from a defensible unification of general relativity and the standard model. Smolin has a longer list in his book:
1. Combine quantum field theory and general relativity
2. Determine the foundations of quantum mechanics
3. Unify all standard model particles and forces
4. Explain the standard model constants (masses and forces)
5. Explain dark matter and dark energy, or come up with some modified theory of gravity that eliminates them but is defensible.
Any non-string solution to these problems is almost by definition a joke and won’t be taken seriously by the mainstream string theorists. Typical argument:
String theorist: “String theory includes 6/7 extra dimensions and predicts superpartners, gravitons, branes, landscape of standard models, anthropic principle, etc.”
Alternative theorist: “My theory resolves real problems that are observable, by explaining existing data!”
String theorist: “That sounds boring/heretical to me.”
What’s unique about string theory is that it has managed to acquire public respect and credulity in advance of any experimental confirmation.
This is mainly due to public relations hype. That’s what makes it so tough on alternatives.