## Quantum Field Theory Resources

I’m thinking of redesigning http://quantumfieldtheory.org/ as well as rewriting and improving all the material.  Let me have any suggestions, please.

Professor Carlo Rovelli’s Quantum Gravity book

Rovelli deals with loop quantum gravity, the background independent quantum field theory which describes general relativity without requiring unobservable extra dimensions and other speculation.  He comments in the Preface:

‘We have to understand which (possibly new) notions make sense and which old notions must be discarded, in order to describe spacetime in the quantum relativistic regime.’

In chapter 1, General Ideas and Heuristic Picture, Rovelli begins by pointing out that the inclusion of time in the Schroedinger equation, or a fixed reference frame as a spacetime background, is incompatible with the general covariance of general relativity.  In addition, there is a problem with general relativity since it uses a smooth metric in Riemannian geometry and doesn’t quantize the gravitational/inertial field.  He remarks:

‘The fact is that we do have plenty of information about quantum gravity, because we have quantum mechanics and we have general relativity.  Consistency with quantum mechanics and general relativity is an extremely strict restraint.’ (P. 4 of draft.)

I agree completely with Rovelli’s assertion in the subsection ‘The Physical Meaning of General Relativity’:

‘General relativity is the discovery that spacetime and the gravitational field are the same entity.  What we call “spacetime” is itself a physical object, in many respects similar to the electromagnetic field.  We can say that general relativity is the discovery that there is no spacetime at all.  What Newton called “space”, and Minkowski called “spacetime”, is unmasked: it is nothing but a dynamical object – the gravitational field – in a regime in which we neglect its dynamics.’ (P. 7 of draft.)

• The argument here is that the only relevant background of spacetime is the local gravitational field.  Restricted (special) relativity is based ultimately on Einstein’s argument that a magnetic field is only observed if an electric charge is moving relative to the observer; if both observer and charge are in the same state of motion, the observer experiences no magnetic field.  Hence, only relative motion is important.  (The Michelson-Morley experiment is more controversial, because FitzGerald and Lorentz interpreted the result as implying a physical contraction of the instrument in the direction of motion caused by the physical effects of the gravitational field.  This physical contraction shortened the distance and time taken for light to travel in that direction, preventing an absolute speed of light in the background field from being detected by interference of combined light beams.  Since ‘special’ relativity preserves the contraction formulae, it is consistent in that sense with the FitzGerald-Lorentz absolute background.  Nobody of course wants to go down that road, so false arguments are made that the Michelson-Morley experiment was a measuring apparatus which had arms of the same length, and wouldn’t work with arms of different lengths.  In fact, it had arms of differing lengths in terms of wavelengths of light because you can’t build a massive instrument with arms of identical length, and it didn’t measure speeds at all.  It only sought to find interference fringes from relative changes in the speed of light due to being rotated in a pool of mercury, so the null result is actually a refutation of relative, rather than absolute, motion.  Likewise, a person who accepts the solar system sees ‘sunrise’ as evidence of the daily planetary rotation, while Ptolemy saw exactly the same phenomena as being direct evidence that the sun orbits the planet daily.  
The Michelson-Morley experiment superficially supports ‘special’ relativity, but it also supports the FitzGerald-Lorentz theory, depending on the assumptions you choose to make.  It is, however, useful in ruling out all theories which don’t include the contraction.  ‘Special’ relativity is mathematically accurate in reproducing the Lorentz transformation.)

‘The success of special relativity was rapid, and the theory is today widely empirically supported and universally accepted.  Still, I do not think that special relativity has really been fully absorbed yet: the large majority of the cultivated people, as well as a surprising high number of theoretical physicists still believe, deep in the heart, that there is something happening “right now” on Andromeda; that there is a single universal time ticking away the life of the Universe.’  (P. 7 of draft.)

• Obviously there are definitions of absolute time, such as a figure for the age of the universe, 13,600 million years or whatever, based on the Hubble parameter and other observations, and there is also a way of determining speed relative to the cosmic microwave background, whereby a 3 milliKelvin blueshift in the 2.7 K microwave background shows the Milky Way is headed towards Andromeda at 600 km/s.  If somehow the microwave background (originating from about 300,000 years after the big bang) can be construed as a universal reference frame (after all, COBE’s project leader Dr George Smoot did call it ‘the face of God’), then 600 km/s – although the matter in the Milky Way may have speeded up and slowed down by, say, a factor of ten due to attractions to galaxy clusters and effects of the local supercluster – may be a reasonable order-of-magnitude estimate for the average velocity of the Milky Way since the big bang (if you think the average speed was more than an order of magnitude higher or lower, then do some computer simulations of the Milky Way’s motion relative to the surrounding cluster of galaxies to provide some evidence for that claim).  Therefore the Milky Way would have travelled a distance of (60-6,000 km/s) * (age of universe in seconds, 13,600 million years), i.e., about 0.02-2% of the radius of the universe.  Hence this estimate suggests that we are near the absolute origin of the universe, if there really exists such a thing as absolute motion and absolute time.  (The Milky Way is now being attracted towards Andromeda by gravity, and its matter has not been travelling in the same direction since the big bang, so it is not clear that the universe originated as a singularity at a distance of about 0.2% of the current radius of the universe in the direction opposite to Andromeda.)
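As a quick numerical check of the estimate above (using the round numbers from the text: speeds of 60 to 6,000 km/s and an age of 13,600 million years), the fraction of the radius travelled reduces to just v/c:

```python
# Quick check of the displacement estimate above, using the round numbers
# from the text: speeds of 60 to 6,000 km/s, age of 13,600 million years.
# Taking the radius of the universe as c times the age, the fraction is v/c.
c = 3.0e8                              # speed of light, m/s
age_s = 13.6e9 * 365.25 * 24 * 3600    # age of universe, s
for v_kms in (60.0, 600.0, 6000.0):
    v = v_kms * 1e3                    # speed in m/s
    distance = v * age_s               # distance travelled at constant speed, m
    fraction = distance / (c * age_s)  # fraction of radius c * age, equals v/c
    print(f"{v_kms:6.0f} km/s -> {100 * fraction:.2f}% of the radius")
```

So 600 km/s gives 0.2%, and the factor-of-ten uncertainty either way gives the 0.02-2% range.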

Despite this, Rovelli’s argument does hold water: ‘Mass recession speed is 0-c in spacetime of 0-15 billion years, so outward force F = m.dv/dt ~ m(c – 0)/(t, age of universe) ~ mcH ~ 10^43 N (H is Hubble parameter). Newton’s 3rd law implies equal inward force, carried by exchange radiation, predicting cosmology, accurate general relativity, SM forces and particle masses.’

The failure of Hubble’s presentation is its expression of recession velocity as a function of distance, not of time.  It is only when you write the Hubble law as the ratio of observable recession velocity to observable time past (not distance now) that you see you have constant velocity/time = dv/dt = acceleration.  Then you simply apply Newton’s second and third laws to this acceleration of the mass of the universe radially outward, as seen from our frame of reference, and as a result you can predict the coupling strength of gravity, G, from cosmological observations, which you can compare with terrestrial, experimentally determined values.
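The arithmetic behind the ~10^43 N figure can be sketched in a few lines. Note that the ~3 × 10^52 kg figure for the mass of the observable universe is my assumption for illustration (a rough mainstream estimate), not a number from the text:

```python
# Sketch of the force estimate above: write Hubble's law in terms of time
# past, so a = dv/dt ~ c/t ~ cH, then apply F = ma. The ~3e52 kg mass of
# the observable universe is an assumed rough figure, used for illustration.
c = 3.0e8             # speed of light, m/s
t = 13.6e9 * 3.156e7  # age of universe, s (~4.3e17 s)
H = 1.0 / t           # Hubble parameter taken as 1/t, s^-1
m = 3.0e52            # rough mass of the observable universe, kg
a = c * H             # effective outward acceleration, m/s^2
F = m * a             # Newton's second law: outward force
print(f"a ~ {a:.1e} m/s^2, F ~ {F:.1e} N")  # F comes out of order 10^43 N
```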

This cannot be sent for peer review or published somewhere like Physical Review Letters, or even posted on arXiv, because its successful predictions seem absurd to the mainstream, which is all tied up with non-predictive speculation, e.g., string theory.  Rovelli remarks:

‘The information that the Sun is not anymore there must travel from the Sun to Earth across space, carried by an entity.  This entity is the gravitational field.’  (P. 37 of draft.)

Rovelli uses this argument to solve the problem of Newton’s rotating bucket: Newton maintained that at least rotational motion is in principle absolute, because, if you have a bucket of water with you, you can detect rotation by observing the surface of the water becoming concave.  Rovelli shows that:

The water rotates with respect to a local physical entity: the gravitational field.

‘It is the gravitational field, not Newton’s inert absolute space, that tells objects if they are accelerating or not, if they are rotating or not.  There is no inert background entity such as Newtonian space: there are only dynamical physical entities.  Among these are the fields.  Among the fields is the gravitational field.

‘The flatness or concavity of the water surface in Newton’s bucket is not determined by the motion of the water with respect to absolute space.  It is determined by the physical interaction between the water and the gravitational field.’  (P. 40 of draft.)

This is absolutely correct and very well written, and it resolves the problem clearly.  It reminds me of a comment recently written by Professor Lee Smolin on Dr Peter Woit’s blog in another context:

‘The Hoyle argument is not a “prediction” of the anthropic principle. The Hoyle argument is based on a fallacy in which an extra statement is added to a correct argument, without changing its force. The correct argument is as follows:

‘A The universe is observed to contain a lot of carbon.
‘B That carbon could not have been produced were there not a certain line…

‘Therefore that line must exist.

‘To this correct argument Hoyle added a statement that does no work, to get:

‘U Carbon is necessary for life.
‘A The universe contains a lot of carbon
‘B That carbon could not have been produced were there not a certain line in the spectrum of carbon. …

‘I have found that every single argument purported to be a successful prediction from the AP has a fallacy at its heart. See my hep-th/0407213 for details.

‘What has been so disheartening about the current debates re the landscape is that all this was thought through a long time ago by a few of us and it has been clear since around 1990 what an appeal to the landscape would have to do to result in falsifiable predictions. The issue is not the landscape per se but the cosmological scenario in which it is studied. The fact that eternal inflation can’t yield anything other than a random distribution on the landscape is the heart of the impasse, for that leads to the AP being pulled in in an attempt to save the theory and that in turn leads to a replay of old fallacies.’

Notice that as a commentator on Not Even Wrong says, Hoyle’s prediction was claimed to be ‘the only genuine anthropic principle prediction’, according to John Gribbin and Martin Rees’s Cosmic Coincidences, quoted at http://www.novanotes.com/jan2003/anthro1.htm

This shows how top science writers (Gribbin and Rees) are plain wrong, keep on writing wrong stuff, and don’t correct it.  (We don’t need to even mention Gribbin’s Jupiter Effect or the commercial attitude of New Scientist’s editor Jeremy Webb.  I corresponded by email with Gribbin and Webb, and neither was concerned with the unvarnished facts, simply – it seems – because nature isn’t as exciting as speculation and its lucrative hype, which sells lots of copies of books and magazines.)

Wilsonian philosophy of renormalization

In a previous post, Loop Quantum Gravity, Representation Theory and Particle Physics, I illustrated the mechanisms by which the cutoffs in renormalization physically occur.  The problem is that continuous differential equations are being used in quantum field theory (as in sound wave theory and quantum mechanics) to represent discrete events in statistical average.  For example, in sound you physically only have a lot of air molecules colliding.  Before the sound wave appears, the molecules are normally colliding in the air at about 500 m/s.  The sound wave is energy conveyed at basically the existing speed, with the air molecules as a carrier.  (The 500 m/s figure is the mean speed for motion in random directions; the root-mean-square component of molecular velocity along any one direction is slower, about 290 m/s.  Sound in air travels faster than this directional component speed by a factor of the square root of the ratio of specific heat capacities of air, because sound is an adiabatic process in which the rise in pressure in the sound wave is accompanied by a rise in temperature, which increases the speed.)
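The figures in this paragraph can be checked from kinetic theory; here is a quick sketch for air at room temperature, using standard constants and gamma = 1.4:

```python
import math

# Kinetic-theory check of the figures above, for air at room temperature.
# Standard constants; 28.97 u is the mean molar mass of air.
k = 1.381e-23            # Boltzmann constant, J/K
m = 28.97 * 1.661e-27    # mean mass of one air molecule, kg
T = 293.0                # room temperature, K
gamma = 1.4              # ratio of specific heats for air

v_mean = math.sqrt(8 * k * T / (math.pi * m))  # mean molecular speed, ~460 m/s
v_1d = math.sqrt(k * T / m)                    # rms component along one axis, ~290 m/s
v_sound = math.sqrt(gamma * k * T / m)         # adiabatic sound speed, ~340 m/s

print(f"mean molecular speed ~{v_mean:.0f} m/s, sound speed ~{v_sound:.0f} m/s")
print(f"sound / one-axis component = sqrt(gamma) = {v_sound / v_1d:.2f}")
```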

The point is, the theory of sound is not a theory of individual air molecules colliding, but an abstract mathematical theory of the statistical average behaviour of the particles (air molecules).  Similarly, quantum field theory as currently built describes the statistical averages resulting from quantum effects.  For example, the magnetic moment of the electron is found to be approximated (to 5 decimal places) by 1 + alpha/(2*Pi) = 1.00116 Bohr magnetons.  The 0.00116 added to the Dirac result of 1 Bohr magneton (for the bare electron) is due to vacuum effects.  No chaotic fluctuation is predicted as such, just the average result.  Chaotic motions of electrons in quantum mechanics are similarly not directly predicted; a statistical model of the resulting distribution is given.  It is not an issue of time-dependent versus time-independent equations: there is simply a statistical model, and when someone writes down the time-dependent equation for an electron, it is a statistical equation.  This type of particle-wave duality is similar to what exists in classical sound wave theory.
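The quoted one-loop vacuum correction is easy to reproduce:

```python
import math

# The one-loop (Schwinger) vacuum correction quoted above: the electron's
# magnetic moment in Bohr magnetons is approximately 1 + alpha/(2*pi).
alpha = 1 / 137.036                 # fine structure constant
moment = 1 + alpha / (2 * math.pi)  # Dirac value plus first vacuum correction
print(f"{moment:.5f} Bohr magnetons")  # ~1.00116
```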

Clearly, classical sound wave theory breaks down when asked to deliver predictions about wave properties on distance scales smaller than the mean separation of individual molecules in air.  Similarly, quantum field theory breaks down when the equations are extended down to predict field effects on the size scale of individual particles.  Rovelli remarks:

‘… loop quantum gravity shows that the structure of spacetime at the [assumed] Planck scale [the Planck scale is a distance based just on dimensional analysis and is only presumed to be the size of particles; another, even shorter, distance scale is of course the black hole event horizon radius of an electron mass] is discrete.  Therefore physical spacetime doesn’t have a short distance structure at all.  The unphysical assumption of a smooth background … may be precisely the cause of the ultraviolet divergences.’  (P. 9 of draft.)

This is basically the argument for the cutoff to prevent UV divergence given in Loop Quantum Gravity, Representation Theory and Particle Physics: namely, there’s nothing at smaller distances than the size scale of the quantum vacuum, so the field equations are no longer describing anything there.  This kind of cutoff is very commonplace in physics.  For example, take the inverse-square law of solar radiation.  This predicts an infinite flux of solar radiation at the middle of the sun.  But you don’t get an infinite flux in the middle of the sun (the temperature is hot, about 15 million Kelvin, but that isn’t infinite).  Since it is the sun’s surface which radiates most of the energy (not the middle of the sun), the law is not valid for positions within the sun’s radius.
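The solar cutoff can be sketched in a couple of lines, using standard values for the sun’s luminosity and radius:

```python
import math

# The inverse-square law example above: the 1/r^2 flux law diverges at r = 0,
# but it is only valid outside the radiating surface, so the flux is capped
# at its value at the sun's radius (standard luminosity and radius values).
L_SUN = 3.846e26   # solar luminosity, W
R_SUN = 6.96e8     # solar radius, m

def solar_flux(r):
    """Radiation flux in W/m^2, with the cutoff applied at the solar radius."""
    r_eff = max(r, R_SUN)  # inside the sun, the inverse-square law is invalid
    return L_SUN / (4 * math.pi * r_eff ** 2)

print(f"flux at the centre (capped) ~{solar_flux(0.0):.1e} W/m^2")  # finite, ~6e7
print(f"flux at Earth's orbit ~{solar_flux(1.496e11):.0f} W/m^2")   # ~1370 W/m^2
```

The flux at Earth’s orbit comes out at the familiar solar constant, while the naive 1/r² divergence at the centre is replaced by the finite surface value.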

So when you get infinite results as a field equation is extrapolated towards zero distance, what is real is the flaw in the mathematical approximation at such small distances, not the infinity.  This is the rationale for field cutoffs to prevent infinities arising.

Professor John Baez has a good summary of this approach in his article Renormalization Made Easy: ‘This sort of idea goes back to Kenneth Wilson who won the Nobel prize in physics in 1982, for work he did around 1972 on the renormalization group and critical points in statistical mechanics. His ideas are now important not only in statistical mechanics but also in quantum field theory. For a nice short summary of the “Wilsonian philosophy of renormalization”, let me paraphrase Peskin and Schroeder:

• ‘… Wilson’s analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson’s arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.’

The cutoff simply occurs at the lattice spacing in the low energy (frozen) ‘Dirac sea’ (or, if particles are loops, the cutoff would occur at the loop radius, just as you might cut off the inverse-square law for sunlight at the sun’s radius when calculating the radiating temperature of the sun).

‘Since there is no spatial continuity at small scale, there is (literally!) no room in the theory for ultraviolet divergences.’  (P. 14 of draft.)

Different types of loops

(1) Heaviside energy current loops

The Heaviside energy current is the light speed electromagnetic logic signal propagated along a pair of conductors.  Electrons are normally moving around chaotically in each conductor.  When the Poynting-vector type electromagnetic field of the logic step propagates in the negatively charged conductor, electrons are accelerated in the direction of the logic step, but they typically reach speeds of only about 1 mm/s, whereas the field (and logic step) propagates at 300 Mm/s (the velocity of light for the insulator around and in between the two conductors).

The situation in the positively charged conductor is considerably more exciting, because electron drift current there goes the opposite way to the logic step!  What causes the electrons to move like that?  After all, the logic step is the normal mechanism by which electricity propagates in computers and other electrical equipment (before the electricity has had time to flow right around the circuit and for the resistance of the circuit to be determined thereby).

What happens is essential for understanding the electron.  The acceleration of charges in each conductor causes the acceleration of charge in the other conductor, by transverse radio wave radiation (each conductor behaves as both transmitter antenna and receiver antenna).  No radio waves are able to escape from the transmission line, however, because the radio signals transmitted by the two conductors (since the electrons accelerated at the front of the logic step in each conductor travel in opposite directions) are exact inversions of one another.  The superimposed signal at a distance from the transmission line is exactly zero; there is complete cancellation.

The Maxwell radio wave must have oscillating fields (with equal amounts of positive and negative electric field included) in order to propagate, because a non-oscillating Maxwell wave is just like the Poynting vector of the field in a single conductor of a transmission line.  That cannot propagate because the magnetic field it has gives infinite self-inductance.  The whole point of having two conductors to propagate a logic pulse is that the magnetic curls of the field from the opposite-directed electron drift currents in each conductor partly cancel, getting rid of the infinite self-inductance effect.

Therefore, we can see another solution to Maxwell’s equations: one predicting the electron.  A steady loop of Heaviside energy current can form an electric charge, because at each point on the loop, there is a corresponding point on the other side of the loop where energy current is moving in the opposite direction, and its magnetic field curl partially cancels out the magnetic field from the first point on the loop, preventing the infinite self-inductance problem.  In addition, the magnetic field lines curling around the loop add up to produce a magnetic dipole, just like an electron.  The light speed energy current is massless, like a Standard Model electron, but mass is supplied in the same ways that mass is supplied to Standard Model particles.  The spin of this loop is electron spin.

(2) Yang-Mills exchange radiation loops

Exchange radiation flows between charges in a continuous cycle, causing forces.  Because this radiation is moving to and from charges at the same time, it doesn’t need to oscillate in order to propagate (for photons, the curl of magnetic field from the returning flow of gauge boson radiation can cancel out part of the magnetic field from the outgoing gauge boson radiation through which it passes, just like the Poynting energy current in a transmission line).

(3) Space-time particle creation-annihilation loops

These are illustrated in the top diagram here.  These loops exist only above the IR energy cutoff, i.e., they exist only within a radius of about 1 fm from a charged particle, this radius being the distance of closest approach in a Coulomb scattering of two particles, each having a kinetic energy equal to the IR cutoff energy (0.511 MeV per electron).  In the vast spaces beyond 1 fm from an electron, there are no such loops, because the electric field of the electron is too weak to briefly free pairs of particles from the Dirac sea (which is effectively frozen for energies below the IR cutoff).  The photon speed in the Dirac sea should be given by some relationship of the sort in Maxwell’s original 1861 paper, http://vacuum-physics.com/Maxwell/maxwell_oplf.pdf
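The ~1 fm figure can be checked by equating the combined kinetic energy of two electrons at the IR cutoff to their Coulomb potential energy at closest approach (a head-on collision is assumed here for simplicity):

```python
# Check of the ~1 fm radius quoted above: distance of closest approach in a
# head-on Coulomb collision of two electrons, each carrying the IR cutoff
# energy of 0.511 MeV, with all kinetic energy converted to potential energy.
k_e2 = 1.44             # Coulomb constant times e^2, in convenient MeV*fm units
ke_total = 2 * 0.511    # combined kinetic energy of the two electrons, MeV
r = k_e2 / ke_total     # set ke_total = k_e2 / r and solve for r, in fm
print(f"closest approach ~{r:.1f} fm")  # ~1.4 fm, i.e. of order 1 fm
```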

Page 49 on the PDF reader (labelled page 22 on the document) gives Maxwell’s claim to have predicted the velocity of light; using the formula for transverse waves in an elastic solid he gets the right answer and immediately declares in his own italics:

“… we can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.”

Maxwell’s actual theory for light speed is simply a sound wave type effect in a crystalline solid.  Newton’s flawed formula for a sound wave is identical, because Newton didn’t know about the adiabatic effect (the increase in velocity due to the increase in temperature which accompanies the pressure wave).

Newton says that wave velocity c = [(E/V)/(M/V)]^1/2, where E/V is the kinetic energy density (i.e., static pressure) and M/V is the mass density (E is energy, M mass, V volume).  Maxwell simply finds a theory to relate E/V and M/V to the electric and magnetic constants for the vacuum (permittivity and permeability) and uses Newton’s idea to unify electricity and magnetism.  In 1865 Maxwell rebuilt the aether theory and predicted that a Michelson-Morley type experiment could prove the existence of aether, and so Maxwell’s dynamical theory of electromagnetic mechanisms has been removed from physics ever since Einstein’s special relativity was accepted (although Maxwell’s differential equations, expressed in vector calculus by Heaviside in 1893 and in tensor form by Einstein in 1915, have of course survived).

Taking the equation c = [(E/V)/(M/V)]^1/2 = (E/M)^1/2, we can immediately rearrange to obtain the formula E = Mc^2, and we might guess (as Guy Grantham insists by email) that the E/M ratio in c = (E/M)^1/2 is the ratio of binding energy to mass of Dirac sea particles in the vacuum.  Hence the IR cutoff of E = 0.511 MeV could be the binding energy per particle in a vacuum lattice which is broken (in pair production) to free polarizable loops of charges.  However, this kind of half-baked speculation really has no value in physics unless it helps to make checkable calculations and progress.  Otherwise it is clutter.  This Dirac sea idea seems to be nonsense to me, as I will explain.

Grantham claims that c = (E/M)^1/2 correctly predicts not just the speed of light in aether but the speed of sound in a salt crystal, where E is the binding energy of salt ions in the lattice, 8 eV, and M is the mass of a salt molecule (NaCl), which gives a speed c = 3.6 km/s.  Grantham, quoting Menahem Simhony, gives values for the longitudinal sound wave speed in salt as 4.5 km/s and the transverse sound wave speed in solid salt as 2.5 km/s (solids support transverse waves as well as longitudinal waves; fluids only support longitudinal waves).  Both figures are far from the result of the formula (which doesn’t come with a mechanism explaining whether it is supposed to model transverse or longitudinal waves!).  In addition, it is obviously wrong to take the bonding energy from E = Mc^2, where c is light speed, and then suddenly claim that c is the speed of sound if E is lattice bonding energy rather than rest-mass energy.  (This is not a problem in the vacuum, because the IR cutoff energy of 0.511 MeV is the rest-mass energy of an electron, as well as being the presumed bonding energy.)  It would make more sense to set the bonding energy approximately equal, at most, to the kinetic energy of ions in the lattice, E = (1/2)Mv^2, because if the kinetic energy exceeded the bonding energy the salt crystal structure would break down into a fluid.  In this case, the speed of sound is v = (2E/M)^1/2 = 5.1 km/s.  This is closer to the 4.5 km/s experimental value for longitudinal waves in salt, and is more convincing.
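Both salt-crystal estimates are easy to reproduce numerically, with the values given in the text (binding energy 8 eV per NaCl unit, molecular mass 58.44 u):

```python
import math

# Reproducing the two salt-crystal estimates above, with the values given in
# the text: lattice binding energy E = 8 eV per NaCl unit, mass 58.44 u.
EV = 1.602e-19   # joules per electron volt
U = 1.661e-27    # atomic mass unit, kg
E = 8 * EV       # binding energy per NaCl unit, J
M = 58.44 * U    # mass of one NaCl unit, kg

v_grantham = math.sqrt(E / M)     # Grantham/Simhony form sqrt(E/M), ~3.6 km/s
v_kinetic = math.sqrt(2 * E / M)  # from E = (1/2)Mv^2, ~5.1 km/s
print(f"sqrt(E/M)  = {v_grantham / 1e3:.1f} km/s")
print(f"sqrt(2E/M) = {v_kinetic / 1e3:.1f} km/s")
```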

This problem of the missing square root of 2 is also present in Maxwell’s original 1861 light wave ‘prediction’.  A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9) explains the problem.  Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated: ‘history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

Maxwell deliberately adjusted his original calculation in order to obtain the anticipated value for the speed of light, as is proven by Part 3 of his paper, On Physical Lines of Force (January 1862).  Chalmers explains:

‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of 2^1/2 smaller than the velocity of light.’

It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’

If the adiabatic effect on speed can be ignored (i.e., if there is little difference between the specific heat capacities of the material), then one way to mechanistically estimate the speed of sound in a material is to begin with the fact that outer bound electrons have a typical speed of about c/137 (where c is light velocity).  These electrons interact with the heavier ion nuclei by electromagnetic field effects, which transfer momentum from the electrons to the nuclei of the ions.  The interaction here is a bit like Coulomb elastic scattering, because the Schroedinger electron orbitals are not circles but are chaotic, and the electrons repeatedly move towards and away from the nucleus, which constitutes Coulomb elastic scattering type energy and momentum transfer.  By conservation of momentum, after an elastic collision of a particle of velocity c/137 and mass m/1836 (i.e., an electron) with one of mass m (representing nuclei; m = 1 for a proton), the average resulting velocity is v = c/(137*1836) = 1200 m/s.  This assumes that the ion is hydrogen; for heavier elements the neutron to proton ratio in the nucleus is about unity, so there is then one orbital electron per two nucleons, and the speed is half that in hydrogen, v = c/(137*3600) = 600 m/s.  However, many factors are ignored in this back-of-the-envelope estimate based on the ground state of hydrogen.  (E.g., temperature will add kinetic energy to the air molecules or ions, speeding up motion and increasing the speed of sound.)
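The two back-of-the-envelope figures above come out as:

```python
# The back-of-the-envelope estimate above: outer electrons at ~c/137 share
# momentum with nuclei roughly 1836 times heavier (per proton), using the
# same round figures as the text (137*3600 for heavier elements).
c = 3.0e8                      # speed of light, m/s
v_hydrogen = c / (137 * 1836)  # one electron per proton: ~1200 m/s
v_heavier = c / (137 * 3600)   # ~one electron per two nucleons: ~600 m/s
print(f"hydrogen estimate ~{v_hydrogen:.0f} m/s")
print(f"heavier elements  ~{v_heavier:.0f} m/s")
```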