Aaron’s and Lubos’ anti-Loop Quantum Gravity propaganda

There is some discussion of the breakdown of special relativity at Steinn Sigurðsson‘s blog here, which is mentioned by Louise Riofrio here.  Several string theorists, including Aaron Bergman and Lubos Motl, have savagely attacked the proposed replacement for special relativity, termed ‘doubly special relativity’, because they misunderstand the physical basis of the theory and ignore the supporting evidence for its predictions.

Professor Lee Smolin explains why Lorentz invariance breaks down at, say, the Planck scale in his book The Trouble with Physics.  Simply put, in loop quantum gravity spacetime is composed of units with some ultimate, absolute grain size, such as the Planck length (a distance on the order of 10^-35 metre), which is totally independent of, and therefore in violation of, Lorentz contraction.  Hence special relativity must break down at very small scales: the ultimate grain size is absolute.  Doubly special relativity is any scheme whereby you retain special relativity at large distance scales, but lose it at small ones.  Because you need higher energy to bring particles closer together in collisions, small distance scales are, for practical purposes in physics, equivalent to high-energy collisions.  So the failure of Lorentz invariance occurs at very small distances and at correspondingly high energy scales.
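To make the conflict concrete: the ordinary Lorentz transformation contracts any length, with no lower bound, as the relative velocity approaches c, so a grain size that is the same for all observers simply cannot obey it.  A minimal Python sketch (my own illustration, not from Smolin’s book):

```python
import math

PLANCK_LENGTH = 1.616e-35  # metres: the assumed absolute grain size

def contracted_length(rest_length, v_over_c):
    """Ordinary special relativity: L = L0 * sqrt(1 - v^2/c^2)."""
    return rest_length * math.sqrt(1.0 - v_over_c**2)

# A boosted observer would measure the 'absolute' grain to be shorter
# than the Planck length, contradicting its status as a minimum length.
for beta in (0.5, 0.9, 0.999):
    print(f"v = {beta}c -> grain appears as {contracted_length(PLANCK_LENGTH, beta):.3e} m")
```

Doubly special relativity modifies the transformation laws so that the Planck scale, like c in ordinary special relativity, is the same for all observers.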

Doubly special relativity was applied by Giovanni Amelino-Camelia in 2001 to explain why some cosmic rays have been detected with energies exceeding the Greisen-Zatsepin-Kuzmin (GZK) cutoff of 5 x 10^19 eV (about 8 J), a limit derived by assuming exact Lorentz invariance.  So it’s not just the case that LQG makes a speculative prediction of doubly special relativity: there is also experimental evidence validating it!
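As a quick check on that figure (my own arithmetic, just the eV-to-joule conversion):

```python
JOULES_PER_EV = 1.602e-19  # one electronvolt in joules

gzk_cutoff_eV = 5e19
print(f"GZK cutoff: {gzk_cutoff_eV * JOULES_PER_EV:.1f} J")
# ~8 J: a macroscopic amount of energy carried by a single proton
```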

Actually, there are quite a lot of indications of this non-Lorentzian behaviour in quantum field theory, even at lower energies, where space does not look quite the same to all observers due to pair production phenomena.  For example, on page 85 of their online Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040, Professors Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo explain:

‘In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state. The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.

‘This is precisely the situation when fields are quantized on curved backgrounds. In particular, if the background is time-dependent (as it happens in a cosmological setup or for a collapsing star) different observers will identify different vacuum states. As a consequence what one observer call the vacuum will be full of particles for a different observer. This is precisely what is behind the phenomenon of Hawking radiation.’

Sadly, some string theorists are just unable to face the facts and understand them:

‘… the rules of the so-called “doubly special relativity” (DSR) to transform the energy-momentum vectors are nothing else than the ordinary rules of special relativity translated to awkward variables that parameterize the energy and the momentum.’ – Lubos Motl, http://motls.blogspot.com/2006/02/doubly-special-relativity-is-just.html

‘… Still, I just want to say again: DSR and Lorentz violation just aren’t in any way predictions of LQG.’ – Aaron Bergman, http://scienceblogs.com/catdynamics/2007/03/strings_and_apples.php#comment-364824

Loop quantum gravity (LQG) does quantize spacetime. Smolin makes the point clearly in “The Trouble with Physics” that whatever the spin network grain size in LQG, the grains will have an absolute size scale (such as Planck scale, or whatever).

This fixed grain size contradicts Lorentz invariance, and so you have to modify special relativity to make it compatible with LQG. Hence DSR in some form (there are several ways of predicting Lorentz violation at small scales while preserving SR at large scales) is a general prediction of LQG.  String theorists are just looking at the mathematics, ignoring the physical basis, and then complaining that they don’t understand the need for the mathematics.  It’s clear why they have got into such theoretical difficulties themselves.

STRING. In string theory, it is assumed that the fundamental particles are all vibrating strings of around the Planck size, and that the various possible vibration modes and frequencies determine the nature of the particle. Nobody can actually ever prove this, because string theory only describes gravity with spin-2 gravitons if there are 11 dimensions, and only describes unification near the Planck scale if there are 10 dimensions (which allows supersymmetry, namely the pairing of an unobserved superpartner boson to every observed fermion, required to make forces unify in the stringy paradigm). The problem is the 6/7 extra dimensions required to make today’s string theory work. The final (but still incomplete in detail) framework of string theory is named M-theory after ‘membrane’, since 10-dimensional superstring theory is a membrane on 11-dimensional supergravity, analogous to the 2-dimensional surface or membrane of a bubble enclosing a 3-dimensional volume; the membrane has one dimension fewer than the bulk. To account for why we don’t see the extra dimensions, 6 of them are conveniently curled up in a Calabi-Yau manifold (a massive extension of the old Kaluza-Klein unification of the 1920s, which postulated a 5-dimensional spacetime, because the metric including the extra dimension could be interpreted as predicting the photon). The 6 extra dimensions in the Calabi-Yau manifold give rise to a ‘landscape’ of as many as 10^1000 different models of particle physics as solutions. It’s now very clear that such a hopelessly vague theory is a Hollywood-hyped religion of groupthink.

LQG. In loop quantum gravity (LQG), however, one possibility (not the only possibility) is that the different particles come from the twists of braids of spacetime (see the illustration here, which is based on the paper of Bilson-Thompson, Markopoulou, and Smolin). This theory also contains the speculative Planck scale, but in a different way: the spacetime fabric is assumed to be a Penrose spin network, whose grain size is assumed to be the Planck scale. However, since loop quantum gravity so far does not produce quantitative predictions, the assumption of the Planck scale is not crucial to the theory. Loop quantum gravity is actually more scientific than string theory, because it at least explains observables using other observables, instead of explaining non-observables (the spin-2 graviton and unification near the Planck scale) by way of other non-observables (extra dimensions and supersymmetry). In loop quantum gravity, interactions occur between nodes of the spin network. The summation of all interactions is equivalent to the Feynman path integral, and the result is background-independent general relativity (without a metric). The physical theory of gravity is therefore likely to be a variant or extension of loop quantum gravity, rather than anything to do with super-speculative M-theory.

DOUBLY SPECIAL RELATIVITY. The problem Smolin discusses with special relativity and the Planck scale is that distance contracts in the direction of motion in special relativity. Because the Planck distance scale is a fixed length independent of velocity, special relativity clearly cannot apply at Planck-scale distances. Hence ‘doubly special relativity’ was constructed to allow normal special relativity to work as usual at large distance scales, but to break down as distance approaches the Planck scale, which does not obey the Lorentz transformation. Because the Planck distance is related to the Planck energy (a very high energy, at which forces are assumed by many to unify), this is the same as saying that special relativity breaks down at extremely high energy. The insertion of the Planck scale (as a threshold length or maximum energy) gives rise to ‘doubly special relativity’.

It isn’t just special relativity which is incomplete.  Supersymmetry (a 1:1 boson-to-fermion correspondence for all particles in the universe, postulated just to unify forces at the Planck scale in string theory) also needs to be abandoned because of a failure in quantum field theory. Another example of incompleteness in modern physics is that in quantum field theory there do not appear to be any proper constraints on conservation of field energy where the charge of the field is varying due to pair polarization phenomena; the correction of this problem will tell us where the energy of the various short-range fields comes from! It is easy to calculate the energy density of an electromagnetic field. Now, quantum field theory and experimental confirmation show that the effective electric charge of an electron is 7% bigger at 92 GeV than at collision energies up to and including 0.511 MeV (this latter energy corresponds to a distance of closest approach in elastic Coulomb scattering of electrons of about 10^-15 m, i.e. 1 fm; if we can assume elastic Coulomb-type scattering and ignore inelastic radiation effects, then the energy is inversely proportional to the distance of closest approach).
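The quoted 7% rise is consistent with the standard one-loop running of the QED coupling.  A sketch of the textbook formula (nothing specific to this argument): with only the electron loop included it gives roughly a 2% rise between 0.511 MeV and 92 GeV, and including the muon, tau and quark loops brings it to the measured figure of about 1/128.5 at the Z mass, i.e. the ~7% increase:

```python
import math

ALPHA_0 = 1 / 137.036      # low-energy fine structure constant
M_E = 0.000511             # electron mass-energy in GeV

def alpha_one_loop(q_gev):
    """One-loop QED running with only the electron loop:
    alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) * ln(Q^2 / m_e^2))."""
    log_term = math.log(q_gev**2 / M_E**2)
    return ALPHA_0 / (1.0 - (ALPHA_0 / (3.0 * math.pi)) * log_term)

a = alpha_one_loop(92.0)
print(f"alpha(92 GeV) = 1/{1/a:.1f}, a {100 * (a / ALPHA_0 - 1):.1f}% rise")
# Electron loop alone: ~1.9%. Adding the heavier charged fermion loops
# gives the measured ~7% increase in effective charge quoted above.
```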

So the increasing electric charge of the electron as you get closer to the core of the electron poses a problem for energy conservation: where is the energy? Clearly, we know the answer from Dyson’s http://arxiv.org/abs/quant-ph/0608140 page 70 and also Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040 page 85: the electric field creates observable pairs (which annihilate into radiation and so on, causing vacuum ‘loops’ as plotted in spacetime) above a threshold electric field strength of 1.3 x 10^18 V/m. This occurs at a distance on the order of 1 fm from an electron, and corresponds to the IR cutoff energy of 0.511 MeV for Coulomb collisions in quantum field theory.
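That threshold is the Schwinger critical field, E_c = m²c³/(eħ), the field strength at which the work done on a charge over a reduced Compton wavelength reaches the electron’s rest-mass energy.  A quick numerical check with standard constants:

```python
M_E = 9.109e-31      # electron mass, kg
C = 2.998e8          # speed of light, m/s
E_CHARGE = 1.602e-19 # elementary charge, C
HBAR = 1.055e-34     # reduced Planck constant, J s

# Schwinger critical field: above this strength, electron-positron
# pair creation in the vacuum becomes unsuppressed.
e_critical = M_E**2 * C**3 / (E_CHARGE * HBAR)
print(f"Schwinger critical field: {e_critical:.2e} V/m")  # ~1.3e18 V/m
```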

It is clear that stronger electric fields are attenuated by pair production and the polarization of these pairs (the virtual positrons being closer to the real electron core than the virtual electrons), so that they produce a radial electric field pointing the opposite way to the particle core’s field. As you get closer to the real electron core, there is less intervening shielding, because there are fewer polarized pairs between you and the core. It’s like travelling upwards through thick cloud in an aircraft: the illumination gradually increases, simply because the amount of cloud intervening between you and the sun is diminishing.

Therefore, the pair production and polarization of vacuum loops of virtual charges absorb the shielded energy of the electric field out to a distance of 1 fm. The virtual charges are limited to electrons and positrons only at the lowest energies. Higher energies, corresponding to stronger electric field strengths, result in the production of heavier pairs. At distances closer than 0.005 fm, pairs of virtual muons occur, because the muon’s rest mass is equivalent to Coulomb scattering at 105.6 MeV energy. At still higher energies you get quark pairs forming.
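Using the inverse proportionality between collision energy and distance of closest approach (calibrated by the 0.511 MeV ↔ 1 fm correspondence above), the onset distances for heavier virtual pairs can be estimated; this is my own arithmetic applying the scaling stated earlier:

```python
# Calibration from the discussion above: 0.511 MeV collision energy
# corresponds to 1 fm closest approach, with energy inversely
# proportional to distance for elastic Coulomb scattering.
CAL_ENERGY_MEV = 0.511
CAL_DISTANCE_FM = 1.0

def closest_approach_fm(energy_mev):
    """Distance of closest approach for elastic Coulomb scattering."""
    return CAL_DISTANCE_FM * CAL_ENERGY_MEV / energy_mev

for name, mass_mev in [("electron", 0.511), ("muon", 105.66), ("tau", 1776.9)]:
    print(f"{name} pairs appear inside ~{closest_approach_fm(mass_mev):.4f} fm")
# muon: ~0.0048 fm, matching the 0.005 fm quoted above
```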

It seems that by the pair production and polarization mechanisms, electromagnetic energy is being transferred into the loop energy of virtual particles.  We know experimentally that the strong force charge falls as particle collision energy increases (beyond the threshold energy at which the nuclear charge peaks), while the electromagnetic charge increases as particle collision energy increases.  Surely this confirms, at least qualitatively, that electromagnetic gauge boson energy is being converted (via the pair production and polarization mechanism) into nuclear force gauge bosons (pions etc. between nucleons, gluons between quarks).

If so, there is no Planck scale unification of Standard Model forces, because the conservation of gauge boson energy shared between all the forces very near a particle core means that the fall in the strong charge is caused by the increase in the electromagnetic charge as you get closer to a particle.  If this is the mechanism for nuclear forces, then although at some energy the strong and electromagnetic coupling curves will happen to cross, they won’t unify, because as collision energy becomes ever higher, the electromagnetic charge approaches the bare core value.  At that point no energy is being absorbed from the electromagnetic field, so no energy is available for the nuclear charge.  Thus, if this mechanism for nuclear charge is real, at extremely high energies the nuclear charge continues to fall after coinciding with the electromagnetic charge, until the nuclear charge falls to zero where the electromagnetic charge equals the bare core charge.  This discredits stringy supersymmetry, which is based on the assumption that all Standard Model charges merge into a superforce of one fixed charge value above the grand unification energy.  That supersymmetry is just speculative rubbish, and is disproved by this mechanism.

This mechanism is not speculative: it is based entirely on the accepted, experimentally verified picture of vacuum polarization shielding the core charge of an electron, plus the empirically based principle that the energy of an electromagnetic field is conserved.

Something has to happen to the field energy lost via charge renormalization.  We know what the energy is used for: pair production of ever more massive (nuclear) particle loops in spacetime.  These virtual particles mediate nuclear forces. 

It should be noted, however, that although ever more massive particles are created by pair production closer and closer to a fundamental charged particle, in its intense electric field, the pairs do not cause divergent (ever increasing, instead of decreasing) energy problems, for two reasons. Firstly, Heisenberg’s uncertainty principle limits the time that a pair of virtual charges can last: this time is inversely proportional to the energy of the pair (a rough lifetime estimate is sketched after the quotation below). Hence, loops of ever more massive virtual particles closer to a real particle core exist for shorter and shorter intervals of time before they annihilate back into the gauge boson energy of the electromagnetic field. Secondly, there is an upper energy limit (called the UV cutoff) corresponding physically to the coarseness of the background quantum nature of spacetime: observable pairs result as strong electric field energy breaks up the quantized spacetime fabric. The quantized spacetime fabric has a limit to how many particles you will find in a given spatial volume. If you look in a volume that is too small (smaller than the grain size of quantized spacetime) you won’t find anything. So although the mathematical differential equations of quantum field theory show an increasingly strong field creating increasingly high-energy pairs, this breaks down at very short distances, where there simply aren’t any particles because the region of spacetime is too small to accommodate them:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
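On the first of the two points above, the Heisenberg lifetime of a virtual pair, t ≈ ħ/E, is easy to estimate.  A minimal sketch with standard constants (the pair energies used are just the rest-mass energies of the pairs discussed earlier):

```python
HBAR = 1.055e-34        # reduced Planck constant, J s
JOULES_PER_EV = 1.602e-19

def pair_lifetime_s(pair_energy_ev):
    """Rough Heisenberg lifetime of a virtual pair: t ~ hbar / E."""
    return HBAR / (pair_energy_ev * JOULES_PER_EV)

# Heavier pairs (which appear only closer to the particle core, where
# the field is stronger) survive for proportionately shorter times.
for name, energy_ev in [("e+e- pair", 2 * 0.511e6), ("mu+mu- pair", 2 * 105.66e6)]:
    print(f"{name}: ~{pair_lifetime_s(energy_ev):.1e} s")
```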

This physical view of the cutoffs (i.e., of the renormalization of charge in quantum field theory) was championed by the Nobel Laureate Kenneth Wilson, as Professor John Baez explains, quoting Peskin and Schroeder:

‘In Chapter 10 we took the philosophy that the distance cutoff D should be disposed of by taking the limit D -> 0 as quickly as possible. We found that this limit gives well-defined predictions only if the Lagrangian contains no coupling constants with dimensions of length^d with d > 0. From this viewpoint, it seemed exceedingly fortunate that quantum electrodynamics, for example, contained no such coupling constants since otherwise this theory would not yield well-defined predictions.

‘Wilson’s analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson’s arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.’

I have an extensive discussion of the causal physics behind the mathematics of quantum field theory here (also see later posts and this new domain), but the point I want to make here concerns unification. To me, it is entirely logical that the long-range electromagnetic and gravity forces are classical in nature beyond the IR cutoff (i.e., for scattering energies below those required for pair production, or distances from particles of more than 1 fm). At such long distances, there are no pair production (annihilation-creation) loops in spacetime (see this blog post for a full discussion). All this implies that the nature of any ‘final theory’ of everything will be causal, with for example:

quantum mechanics = classical physics + mechanisms for chaos.

My understanding is that in any orbital system where the orbiting masses are fairly similar (i.e., within an order of magnitude) to each other and to the central mass, classical orbits disappear and you have chaos. Hence you might describe the probability of finding a given planet at some distance by some kind of Schroedinger equation. I think this is a major limitation of classical physics: it works only because the planets are all far, far smaller in mass than the sun. In an atom, electric charge is the equivalent of gravitational mass, so the atom is entirely different from the simple solar system, because the fairly similar charges on electrons and nuclei mean that the motion is going to be chaotic if you have more than one electron in orbit.

There are other issues with classical physics which are clearly just down to missing physics. For example, the randomly occurring loops of virtual charges in the strong field around an electron will, when the electron is observed on a small scale, cause its path to be erratic, by analogy to the drunkard’s-walk Brownian motion of a pollen grain being affected by the random impacts of air molecules. So: quantum mechanics = classical physics + mechanisms for chaos.

Another mechanism for chaos is Yang-Mills exchange radiation. Within 1 fm of an electron, the Yang-Mills radiation-caused electric field is so strong that the gauge bosons of electromagnetism, photons, produce short-lived spacetime loops of virtual charges in the vacuum, which quickly annihilate back into gauge bosons.
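The drunkard’s-walk analogy is easy to make quantitative.  A minimal sketch (standard random-walk statistics, nothing specific to electrons) showing that many small random impacts produce an erratic net path whose RMS displacement grows as the square root of the number of impacts:

```python
import random
import math

def rms_displacement(n_steps, n_trials=2000):
    """RMS net displacement of a 1D drunkard's walk of unit steps."""
    total = 0.0
    for _ in range(n_trials):
        x = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return math.sqrt(total / n_trials)

for n in (100, 400, 1600):
    print(f"{n} impacts: RMS displacement ~ {rms_displacement(n):.1f} "
          f"(sqrt(n) = {math.sqrt(n):.0f})")
```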

But at greater distances, the gauge bosons lack the energy to polarize the vacuum, so the majority of the vacuum (i.e., the vacuum beyond about 1 fm from any real fundamental particle) is just a classical-type continuum of exchange radiation, which does not involve any chaotic loops at all.

This is partly why general relativity works so well on large scales (quite apart from the fact that planets have small masses compared to the sun): there really is an Einstein-type classical field, a continuum, outside the IR cutoff of QFT.

Of course, on small scales this exchange of gauge boson radiation causes the weird interactions you get in the double-slit experiment: the path-integral effect, where a particle seems to be affected by every possible route it could take.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The solar system would be as chaotic as a multi-electron atom if the gravitational charges (masses) of the planets were all the same (as for electrons) and if the sum of planetary masses equalled the sun’s mass (just as the sum of electron charges in an atom equals the electric charge of the nucleus). This is the 3+ body problem of classical mechanics (a toy simulation follows the quotation below):

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
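To see Poston and Stewart’s point concretely, here is a crude toy simulation (my own sketch, in G = 1 units with a simple symplectic-Euler integrator): three equal gravitating masses are evolved twice from initial conditions differing by one part in a million, and the two runs end up macroscopically different.

```python
import math

def accelerations(pos, masses, g=1.0):
    """Pairwise Newtonian gravitational accelerations (2D, G = 1 units)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += g * masses[j] * dx / r3
            acc[i][1] += g * masses[j] * dy / r3
    return acc

def evolve(pos, vel, masses, dt=0.001, steps=20000):
    """Crude symplectic-Euler integration of the three-body problem."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

# Run twice with one coordinate shifted by 1 part in a million, and
# watch the trajectories diverge: deterministic chaos.
masses = [1.0, 1.0, 1.0]
start = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vels = [[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]]
end_a = evolve(start, vels, masses)
start[0][0] += 1e-6  # tiny perturbation
end_b = evolve(start, vels, masses)
for a, b in zip(end_a, end_b):
    print(f"separation between the two runs: {math.hypot(a[0]-b[0], a[1]-b[1]):.3f}")
```

This sensitivity to initial conditions is why a system of comparable interacting charges or masses can only ever be described statistically.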

Obviously Bohr did not know anything about this chaos in classical systems when he came up with the complementarity and correspondence principles of the Copenhagen Interpretation. Nor did even David Bohm, who sought the Holy Grail of a potential which becomes deterministic at large scales and chaotic (due to hidden variables) at small scales.

What is interesting is this: if chaos does produce the statistical effects in multi-body phenomena (atoms with a nucleus and at least two electrons), what produces the statistically describable interference phenomena (the Schroedinger equation model) when a single photon has a choice of two slits, or when a single electron orbits a proton in hydrogen?

Quantum field theory phenomena obviously contribute to quantum chaotic effects. The loops of charges spontaneously and randomly appearing around a fermion between the IR and UV cutoffs could cause chaotic deflections of the motion of even a single orbital electron:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.] … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Yang-Mills exchange radiation is what constitutes electromagnetic fields: both the fields of the electrons in the screen containing the double slits, and the fields of the actual photon of light itself. Again, consider the remarks of Feynman quoted earlier:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

Comments about further work planned:

Above electroweak unification energy, the electroweak gauge bosons (W+, W-, Z_0 and photon) are all massless and form a simple symmetry (the Z_0 being similar to the photon), but below electroweak unification energy, everything apart from the photon acquires mass from the Higgs field (or some other mechanism!).

This does fit in perfectly with the mass model that predicts (to within a couple of percent error) the masses of leptons and hadrons from a mechanism whereby mass is usually coupled electromagnetically, in quantized units, external to the polarized vacuum of a particle.  The electron is indicated to be the odd case here, involving a double polarization of the vacuum.  The quantized mass-causing particle of the vacuum has a mass equal to the Z_0 mass, 91 GeV.  Since there is evidence (see http://thumbsnap.com/vf/FBeqR0gc.gif ) that the polarization shielding factor of 1/alpha, i.e. 137.036…, reduces the bare charge of QED to the observed electron charge beyond 1 fm distance, the electron mass is on the order of 91 GeV/137^2 (ignoring small geometric factors like twice Pi), while the muon and heavier particle masses involve only a single vacuum polarization shielding the coupling of charge to quantized mass, so those masses are on the order of 91 GeV/137 (again ignoring small integer and Pi geometry factors).  The same link contains a very quick summary of how the masses are related.
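A quick numerical check of those order-of-magnitude relations (my own arithmetic; the 3π and 2π divisors are examples of the ‘small integer and Pi geometry factors’ mentioned above, chosen here simply because they close the gap, and the linked image has the details):

```python
Z0_MASS_MEV = 91188.0   # Z_0 mass in MeV
ALPHA_INV = 137.036     # 1/alpha shielding factor
PI = 3.14159265

# Double polarization shielding for the electron, single for the muon.
electron_estimate = Z0_MASS_MEV / ALPHA_INV**2   # ~4.9 MeV raw
muon_estimate = Z0_MASS_MEV / ALPHA_INV          # ~665 MeV raw

print(f"electron: {electron_estimate:.2f} MeV raw; /(3 pi) = "
      f"{electron_estimate / (3 * PI):.3f} MeV (measured 0.511)")
print(f"muon: {muon_estimate:.1f} MeV raw; /(2 pi) = "
      f"{muon_estimate / (2 * PI):.1f} MeV (measured 105.7)")
```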

Doubtless there are other factors at work.  The periodic table itself had various anomalies of detail at first: it took a long time to explain the mass discrepancies by isotopic abundances, and the theoretical reason for the existence of isotopes of elements was of course only explained when the neutron was discovered, as late as 1932.

There is plenty of evidence that everything may be explained by a causal model, albeit one based on the experimentally well-established facts of quantum field theory and general relativity (excluding the cosmological constant/dark energy speculation).  It is just a great pity that the mainstream has gone off into speculations, so that, if the causal model is right, there will be some sort of conflict of interest with string theorists before many people take the facts seriously.

I’m planning to publish a free online book which presents this experimental evidence (not the speculative parts and philosophy) for quantum mechanics, quantum field theory including the Standard Model, general relativity, and the big bang (the recession, nucleosynthesis, and cosmic background radiation evidence).
