# Lee Smolin and Peter Woit: Comparison of String Critical Books

I read Peter Woit’s Not Even Wrong book last summer, and it is certainly the most important book I’ve read, for it gives a clear explanation of chiral symmetry in the electroweak sector of the Standard Model, as well as a brilliant outline of the mathematical thinking that led to the gauge groups of the Standard Model.

Previously, I had learned how particle physics had provided input to build the Standard Model.  Here’s a sketch of the typical sort of empirical foundation to physics that I mean:

The same symmetry principles also describe the mesons in a similar way (mesons are pairs of quarks, not triplets of quarks as in the case of the baryons illustrated in my sketch above).  Baryons and mesons together form the hadrons, the strongly interacting particles.  These are all composed of quarks, and the symmetry responsible for the strong force is the unitary group SU(3).  Although the idea of colour charge, whereby each quark carries a strong charge in addition to its electric and weak charges, seems speculative, there is evidence from the fact that the omega minus particle is composed of three strange quarks.  By the Pauli exclusion principle, you simply can’t have three identical fermions like strange quarks confined together, because at least two would have to share the same spin state.  (You could confine two strange quarks, because one could take the opposite spin state to the other, which is fine according to the Pauli exclusion principle, but this doesn’t allow three similar quarks to do so.)  In fact, from the measured spin of 3/2 for the omega minus, all three of its spin-1/2 strange quarks must have the same spin state.  The easiest way to account for this seems to be the new ‘quantum number’ (or, rather, property) of ‘colour charge’.
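
The state-counting in this argument can be illustrated with a toy enumeration (a sketch of my own, not a real quantum calculation: the Pauli-allowed assignments of three identical fermions are simply the ways of choosing three distinct single-particle states):

```python
from itertools import combinations

def distinct_triplets(states):
    """Pauli-allowed ways to place 3 identical fermions in these states:
    each fermion must occupy a distinct single-particle state."""
    return len(list(combinations(states, 3)))

# Omega minus: spin 3/2 means all three strange quarks are spin-up.
# With spin alone there is only one available state, so it is forbidden:
print(distinct_triplets(["up"]))                                # 0

# With a 3-valued colour charge, the three spin-up quarks can differ
# in colour, and exactly one such combination exists:
print(distinct_triplets([("up", c) for c in ("r", "g", "b")]))  # 1
```

With spin alone there is no way to build the spin-3/2 omega minus from three identical strange quarks; adding a three-valued colour label gives exactly one allowed combination, which is the point of the argument above.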

This story, whereby the composition and spin of the omega minus mean that Pauli’s exclusion principle forces a new quantum number, colour charge, on quarks, is actually back-to-front.  What happened was that Murray Gell-Mann and Yuval Ne’eman in 1961 independently arranged the particles into families of 8 particles each by the SU(3) symmetry scheme above, and found in one of these families that there was no known particle to fill the spin 3/2, charge -1 gap: this was in effect the prediction of the omega minus!  The omega minus was predicted in 1961, and after two years of experiments it was found in a bubble chamber photograph taken in 1964.  This verified the eight-fold way SU(3) symmetry.  The story of the quark, which is the underlying explanation for the SU(3) symmetry, came afterwards.  Both Gell-Mann and George Zweig in 1964 put forward the quark concept, although Zweig called them ‘aces’, on the basis of the incorrect assumption that there were four flavours of such particles altogether (it is now known that there are six quark flavours, in three generations of two quarks each: up and down, charm and strange, top and bottom).  Zweig’s lengthy paper, which independently predicted the same properties of quarks as those Gell-Mann predicted, was censored from publication by the peer-reviewers of a major American journal, but Gell-Mann’s simpler model in a brief two-page paper was published with the title ‘A Schematic Model of Baryons and Mesons’ in the European journal Physics Letters, v8, pp. 214-5 (1964).  Gell-Mann argues in that paper that his quark model is ‘a simpler and more elegant scheme’ than just having the eight-fold way as the explanation.  (The name quark was taken from page 383 of James Joyce’s Finnegans Wake, Viking Press, New York, 1939.)  David J. Gross’s nice Nobel lecture, published here [Proc. Natl. Acad. Sci. USA, 2005 June 28; 102(26): 9099–9108], begins by commenting:

‘The progress of science is much more muddled than is depicted in most history books. This is especially true of theoretical physics, partly because history is written by the victorious. Consequently, historians of science often ignore the many alternate paths that people wandered down, the many false clues they followed, the many misconceptions they had. These alternate points of view are less clearly developed than the final theories, harder to understand and easier to forget, especially as these are viewed years later, when it all really does make sense. Thus, reading history one rarely gets the feeling of the true nature of scientific development, in which the element of farce is as great as the element of triumph.

‘The emergence of QCD is a wonderful example of the evolution from farce to triumph. During a very short period, a transition occurred from experimental discovery and theoretical confusion to theoretical triumph and experimental confirmation. …’

To get back to colour charge: what is it physically?  Colour and flavour are just abstract labels for known mathematical properties.  It’s interesting that the Pauli exclusion principle suggested colour charge, via the problem of needing three strange quarks with the same spin state in the omega minus particle.  The causal mechanism of the Pauli exclusion principle is probably related to magnetism caused by spin: the system energy is minimised (so the system is most stable) when the spins of adjacent particles are opposite to one another, cancelling out the net magnetic field instead of having it add up.  This is why most materials are not strongly magnetic, despite the fact that every electron has a magnetic moment and atoms are arranged regularly in crystals.  Where magnetism does occur, as in iron magnets, it is due to the complex spin alignments of electrons in different atoms, not to the orbital motion of electrons, which is largely chaotic (there are shaped orbitals where the probability of finding the electron is higher than elsewhere, but the direction of the electron’s motion is still random, so the magnetic fields caused by the ordinary orbital motions of electrons in atoms cancel out naturally).

As stated in the previous post, what happens when two or three fermions are confined in close proximity is that they acquire new charges, such as colour charge, and this avoids violating the Pauli exclusion principle.  Hence the energy of the system doesn’t make it unstable: the extra energy goes into new forces, mediated by the new vacuum charges which appear in the strong fields through vacuum pair production and polarization phenomena.

Peter Woit’s Not Even Wrong is an exciting book because it gives a motivational approach and historical introduction to the group representation theory that you need to know to really start understanding the basic mathematical background to empirically based modern physics.  Hermann Weyl worked on Lie group representation theory in the late 1920s, and wrote a book about it which was ignored at the time.  The Lie groups had been defined in 1873 by Sophus Lie.

It was only when things like the ‘particle zoo’ – which consisted of hundreds of unexplained particles discovered using the early particle accelerators (with cloud chambers and later bubble chambers to record interactions, unlike modern solid state electronic detectors) after World War II – were finally explained by Murray Gell-Mann and Yuval Ne’eman around 1960 using symmetry ideas, that Weyl’s work was taken seriously.  Woit writes on page 7 (London edition):

‘The positive argument of this book will be that, historically, one of the main sources of progress in particle theory has been the discovery of new symmetry groups of nature, together with new representations of these groups.  The failure of the superstring theory programme can be traced back to its lack of any fundamental new symmetry group.’

On page 15 (London edition), Woit explains that in special relativity: ‘if I try to move at high speed in the same direction as a beam of light, no matter how fast I go, the light will always be moving away from me at the same speed.’

This is an excellent way to express what special relativity says.  The physical mechanism is time-dilation for the observer.  If you are moving at high speed, your clocks and your brain all slow down, so you suffer from the illusion that even a snail is going like a rocket.  That’s why you don’t see the velocity of light appear to slow down: your measurements of speed are skewed by time-dilation.  That’s physically the mechanism responsible for special relativity in this particular case.  There’s no weird paradox involved, just physics.

If we jump to Lee Smolin’s The Trouble with Physics (New York edition), page 34, we again find a problem of this sort.  Lee Smolin points out that the aether theory was wrong because light was supposed to be essentially a sound-type wave in the aether, which required the aether density to be enormous, and it is paradoxical for something filling space with high density to offer no resistance to motion.

Clearly the fundamental particles don’t get much resistance because they’re so small, unlike macroscopic matter, and the resistance is detected as the Lorentz-FitzGerald contraction of special relativity.  But the standard model has exchange radiation filling spacetime, causing forces, and it’s clear that the exchange radiation is causing these effects.  Move through exchange radiation, and you get contracted in the direction of your motion.  If you want to think about a fluid ‘Dirac sea’, you get no drag whatsoever because the vacuum – unlike matter – doesn’t heat up.  (The temperature of radiation in space, such as the temperature of the microwave background, is the effective temperature of a blackbody emitter corresponding to the energy spectrum of those photons, and is not the temperature of the vacuum; if the vacuum were radiating energy due to its own temperature – which it is not – then the microwave background would not be redshifted thermal radiation from the big bang, but heat emitted spontaneously by the vacuum.)

There are two aspects of the physical resistance to motion in a fluid.  The first is an inertial resistance due to the shifting of the fluid out of the path of the moving object.  Once the object is moving (think of a ship), the fluid pushed out of the way at the bow travels around and pushes in at the stern, returning some of the energy.  The percentage of the energy returned is small for a ship, because of dissipative energy losses: the water molecules that hit the front of the ship are sped up and hit other molecules, frothing and heating the water slightly, and setting up waves.  But there is still some return, and there is also length contraction in the direction of motion.

In the case of matter moving in the Dirac sea or exchange radiation field (equivalent to the spacetime fabric of general relativity, responsible for inertial and gravitational forces), the exchange radiation is not just acting externally on the macroscopic object; it penetrates to the fundamental particles, which are very small (so mutual shielding between the particles of a small mass is trivial), and so the whole thing is contracted irrespective of the mechanical strength of the material (if the exchange radiation acted only on the front layer of atoms, the contraction would depend on the strength of the material).

Where this spacetime fabric analogy gets useful is that it allows a prediction for the strength of gravity which is accurate to within experimental error.  This works as follows.  The particles in the surrounding universe are receding from us in spacetime, where bigger apparent distances imply greater times into the past (due to the travel or delay time of light in reaching us).  As these particles recede at increasing speeds with increasing spacetime distance, assuming that the ‘Dirac sea’ fluid analogy holds, there will be a net inward flow of Dirac sea fluid towards us, filling in the spatial volumes vacated as the matter of the universe recedes from us.

The mathematics allows us to calculate the inward force that results, and irrespective of the actual size (cross-sectional area and volume) of the receding particles, the gravity parameter G can be calculated fairly accurately from this inward force equation.  A second calculation was developed by treating the spacetime fabric as exchange radiation rather than a fluid Dirac sea, on the basis that Maxwell’s ‘displacement current’ can be virtual fermions where there are loops, i.e., above the IR cutoff of quantum field theory, but must be radiation where there are no virtual fermion effects, i.e., at distances greater than ~1 fm from a particle, where the electric field is below 10^18 V/m (below the IR cutoff).  When this second calculation, with exchange radiation doing the compression, is normalized against the first equation, we can calculate a second parameter: the exact shielding area per fundamental particle.  The effective cross-sectional shielding area for gravity, of a particle of mass m, is Pi*(2Gm/c^2)^2, where 2Gm/c^2 is the black hole event horizon radius; this seems to tie in with another calculation here.
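
As a quick numerical illustration of the final formula (standard values of G, c and the electron mass; the electron is just an example particle, nothing else here depends on the rest of the argument):

```python
import math

G = 6.674e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_e = 9.109e-31     # electron mass, kg

r_s = 2 * G * m_e / c**2   # black hole event horizon radius for an electron
sigma = math.pi * r_s**2   # the proposed shielding cross-section Pi*(2Gm/c^2)^2

print(r_s)     # ~1.35e-57 m
print(sigma)   # ~5.7e-114 m^2
```

The resulting cross-section is fantastically small, consistent with the earlier claim that mutual gravitational shielding between fundamental particles is trivial.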

Getting back to Not Even Wrong, Dr Woit then introduces the state-vector which describes the particle states in the universe, and the Hamiltonian which describes the energy of a state-vector and its rate of change.  What is interesting is that Woit then observes that:

‘The fact that the Hamiltonian simultaneously describes the energy of a state-vector, as well as how fast the state-vector is changing with time, implies that the units in which one measures energy and the units in which one measures time are linked together.  If one changes one’s unit of time from seconds to half-seconds, the rate of change of the state-vector will double and so will the energy.  The constant that relates time units and energy units is called Planck’s constant … It is generally agreed that Planck made an unfortunate choice of how to express the new constant he needed …’

Planck defined his constant as h in the equation E = hf, where f is wave frequency.  The point Woit makes here is that Planck should have represented it using angular (rotational) frequency.  Angular frequency (measured in radians per second, where 1 rotation = 2*Pi radians) is 2*Pi*f, so Planck would have got a constant equal to h/(2*Pi), which is now called h-bar.
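
The equivalence of the two conventions is a one-line check (the SI value of h; the sample frequency is arbitrary):

```python
import math

h = 6.62607015e-34        # Planck's constant, J s (exact SI value)
hbar = h / (2 * math.pi)  # 'h-bar', ~1.0546e-34 J s

f = 5.0e14                # an arbitrary optical frequency, Hz
omega = 2 * math.pi * f   # the same frequency expressed in rad/s

E_from_h = h * f             # E = h f
E_from_hbar = hbar * omega   # E = h-bar * omega
print(abs(E_from_h - E_from_hbar) < 1e-30)  # True: same photon energy either way
```

So nothing physical changes; h-bar just absorbs the 2*Pi that otherwise litters the equations of quantum mechanics.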

This is usually considered a trivial point, but it is important.  When people go on about Planck’s discovery of the quantum theory of radiation in 1900, they forget that classical radio waves were well known and were actually in use at the time.  This raises the question of what causes the difference between quantum and classical electromagnetic waves.

Dr Bernard Haisch has a site with links to various papers of interest here: http://www.calphysics.org/research.html.  Alfonso Rueda and Bernard Haisch have actually investigated some of the important ideas needed to sort out the foundations of quantum field theory, although their papers are incomplete and don’t produce the predictions of important phenomena that are needed to convince string theorists to give up hyping their failed theory.  The key thing is that the electron does radiate in its ground state.  The reason it doesn’t fall below the ground state is that it is exchanging radiation, because all electrons are radiating, and there are many in the universe.  The electron can’t spiral in by losing energy, because while radiating in the ground state it is in gauge boson radiation equilibrium with its surroundings, receiving the same gauge boson power back as it emits!

The reason why quantum radiation is emitted is that this ground state (equilibrium) exists because all electrons are radiating.  So Yang-Mills quantum field theory really does contain the exchange radiation dynamics for forces which should explain to everyone what is occurring in the ground state of the atom.

The reason why radio waves and light are distinguished from the normally invisible gauge boson exchange radiation is that exchange radiation is received symmetrically from all directions and causes no net forces.  Radio waves and light, on the other hand, can cause net forces, setting up electron motions (electric currents) which we can detect!  I don’t like Dr Haisch’s statement that string theory might be sorted out by this mechanism:

‘It is suggested that inertia is indeed a fundamental property that has not been properly addressed even by superstring theory. The acquisition of mass-energy may still allow for, indeed demand, a mechanism to generate an inertial reaction force upon acceleration. Or to put it another way, even when a Higgs particle is finally detected establishing the existence of a Higgs field, one may still need a mechanism for giving that Higgs-induced mass the property of inertia. A mechanism capable of generating an inertial reaction force has been discovered using the techniques of stochastic electrodynamics (origin of inertia). Perhaps this simple yet elegant result may be pointing to a deep new insight on inertia and the principle of equivalence, and if so, how this may be unified with modern quantum field theory and superstring theory.’

Superstring theory is wrong, and it undermines M-theory.  The cost of supersymmetry seems five-fold:

(1) It requires unobserved supersymmetric partners, and doesn’t predict their energies or anything else that is a checkable prediction.

(2) It assumes that there is unification at high energy.  Why?  Obviously a lot of electric field energy is being shielded by the polarized vacuum near the particle core.  That shielded electromagnetic energy goes into short-ranged virtual particle loops, which will include gauge bosons (W+/-, Z, etc.).  In this case, there’s no high-energy unification.  At really high energy (small distance from the particle core), the electromagnetic charge approaches its high bare core value, and there is less shielding between core and observer by the vacuum, so there is less effective weak and strong nuclear charge, and those charges fall toward zero (because they’re powered by the energy shielded from the electromagnetic field by the polarized vacuum).  This gets rid of the high energy unification idea altogether.

(3) Supersymmetry requires 10 dimensions and the rolling up of 6 of those dimensions into the Calabi-Yau manifold creates the complexity of string resonances that causes the landscape of 10^500 versions of the standard model, preventing the prediction of particle physics.

(4) Supersymmetry, using the measured weak SU(2) and electromagnetic U(1) forces, predicts the SU(3) force incorrectly, too high by 10-15%.

(5) Supersymmetry when applied to try to solve the cosmological constant problem, gives a useless answer, at least 10^55 times too high.

The real test of whether something is a religion is the clinging to physically useless orthodoxy.

Gravity and the Quantum Vacuum Inertia Hypothesis
Alfonso Rueda & Bernard Haisch, Annalen der Physik, Vol. 14, No. 8, 479-498 (2005).

Review of Experimental Concepts for Studying the Quantum Vacuum Fields
E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole, Space Technology and Applications International Forum (STAIF 2006), p. 1390 (2006).

Analysis of Orbital Decay Time for the Classical Hydrogen Atom Interacting with Circularly Polarized Electromagnetic Radiation
Daniel C. Cole & Yi Zou, Physical Review E, 69, 016601, (2004).

Inertial mass and the quantum vacuum fields
Bernard Haisch, Alfonso Rueda & York Dobyns, Annalen der Physik, Vol. 10, No. 5, 393-414 (2001).

Stochastic nonrelativistic approach to gravity as originating from vacuum zero-point field van der Waals forces
Daniel C. Cole, Alfonso Rueda, Konn Danley, Physical Review A, 63, 054101, (2001).

The Case for Inertia as a Vacuum Effect: a Reply to Woodward & Mahood
Y. Dobyns, A. Rueda & B. Haisch, Foundations of Physics, Vol. 30, No. 1, 59 (2000).

On the relation between a zero-point-field-induced inertial effect and the Einstein-de Broglie formula
B. Haisch & A. Rueda, Physics Letters A, 268, 224, (2000).

Contribution to inertial mass by reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Foundations of Physics, Vol. 28, No. 7, pp. 1057-1108 (1998).

Inertial mass as reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Physics Letters A, vol. 240, No. 3, pp. 115-126, (1998).

Reply to Michel’s “Comment on Zero-Point Fluctuations and the Cosmological Constant”
B. Haisch & A. Rueda, Astrophysical Journal, 488, 563, (1997).

Quantum and classical statistics of the electromagnetic zero-point-field
M. Ibison & B. Haisch, Physical Review A, 54, pp. 2737-2744, (1996).

Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids
A. Rueda, B. Haisch & D.C. Cole, Astrophysical Journal, Vol. 445, pp. 7-16 (1995).

Inertia as a zero-point-field Lorentz force
B. Haisch, A. Rueda & H.E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678-694 (1994).

The articles above have various problems.  The claim that the source of inertia is the same zero-point electromagnetic radiation that causes the Casimir force, and that gravitation arises in the same way, is in a sense correct, but you have to increase the number of gauge bosons in electromagnetism in order to explain why gravity is 10^40 times weaker than electromagnetism.  This is actually a benefit, rather than a problem, as shown here.  In order to causally explain the mechanisms for repulsion and attraction between similar and dissimilar charges, as well as gravity of the correct strength, from the diffusion of gauge bosons between similar charges throughout the universe (a drunkard’s walk with a vector sum of strength equal to the square root of the number of charges in the universe, multiplied by the strength of the photon-mediated gravity force), the electromagnetic theory ends up with 3 gauge bosons, like the weak SU(2) force.  So this looks as if it can incorporate gravity into the standard model of particle physics.
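
The drunkard’s-walk claim, that a vector sum of N randomly-directed contributions grows as the square root of N, is easy to check numerically (a generic 2-D random-walk simulation of my own; this checks only the statistics, not the physics):

```python
import math
import random

random.seed(42)

def net_displacement(n_steps):
    """Magnitude of the vector sum of n unit steps in random directions."""
    x = y = 0.0
    for _ in range(n_steps):
        theta = random.uniform(0, 2 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

# The average net displacement over many trials grows like sqrt(N):
for n in (100, 10000):
    trials = [net_displacement(n) for _ in range(200)]
    mean = sum(trials) / len(trials)
    print(n, round(mean, 1), round(math.sqrt(n), 1))
```

For each N, the mean net displacement comes out of the order of sqrt(N), which is the scaling invoked in the argument above.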

The conventional treatment of how photons can cause attractive and repulsive forces just specifies the right number of polarizations and the right spin.  If you want a purely attractive gauge boson, you would have a spin-2 ‘graviton’.  But this comes from abstract symmetry principles; it isn’t dynamical physics.  For example, you can get all sorts of different spins and polarizations when radiation is exchanged, depending on how you define what is going on.  If, for example, two transverse electromagnetic (TEM) waves of the same amplitude pass through one another while travelling in opposite directions, the curls of their respective magnetic fields will cancel out for the duration of the overlap.  So the polarization number will be changed!  As a result, the exchange of radiation in two directions is easier than a one-way transfer of radiation.  Normally you need two parallel conductors to propagate an electromagnetic wave along a cable, or you need an oscillating wave (with as much negative electric field as positive electric field in it) for energy to propagate.  The reason for this is that a wave of purely one sign of electric field (positive only or negative only) will have an uncancelled infinite self-inductance due to the magnetic field it creates.  You have to ensure that the net magnetic field is zero, or the wave won’t propagate (whether guided by a wire, or launched into free space).  The only way normally of getting rid of this infinite self-inductance is to fire off two electric field waves, one positive and one negative, so that the magnetic fields from each have opposite curls, and the long range magnetic field is thus zero (perfect cancellation).

This explains why you normally need two wires to send logic signals.  The old explanation for two wires is false: you don’t need a complete circuit.  In fact, because electricity can never go instantly around a circuit when you press the on switch, it is impossible for the electricity to ‘know’ whether the circuit it is entering is open or is terminated by a load (or short-circuit), until the light speed electromagnetic energy completes the circuit.

Whenever energy first enters a circuit, it does so the same way regardless of whether the circuit is open or closed, because it goes at light speed for the surrounding insulator and can’t (and doesn’t, in experiments) tell what the resistance of the whole circuit will turn out to be.  The effective resistance, until the energy completes the circuit, is equal to the resistance of the conductors up to the position of the front of the energy current (which is going at light speed for the insulator), plus the characteristic impedance of the geometry of the pair of wires, which is the 377 ohm impedance of the vacuum from Maxwell’s theory, multiplied by a dimensionless correction factor for the geometry.  The 377 ohm impedance arises because Maxwell’s so-called ‘displacement current’ is (for physics at energies below the IR cutoff of QFT) radiation, rather than virtual electron and virtual positron motion.
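
The 377 ohm figure quoted here is the impedance of free space from Maxwell’s theory, sqrt(mu0/eps0):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

Z0 = math.sqrt(mu0 / eps0)  # characteristic impedance of free space
print(round(Z0, 2))         # 376.73 ohms, the '377 ohm' figure in the text
```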

The point is that the photon’s nature is determined by what is required to get propagation to work through the vacuum.  Some configurations are ruled out physically, because the self-inductance of uncancelled magnetic fields is infinite, so such proto-photons literally get nowhere (they can’t even set out from a charge).  It’s really like evolution: anything can try to work, but those things that don’t succeed get screened out.

The photon, therefore, is not the only possibility.  You can make exchange radiation work without photons if each oppositely-directed component of the exchange radiation has a magnetic field curl that cancels the magnetic field of the other component.  This means that two other types of electromagnetic gauge boson are possible beyond what is normally considered to be the photon: negatively charged electromagnetic radiation will propagate provided that it is propagating in opposite directions simultaneously (exchange radiation!) so that the magnetic fields are cancelled in this way, preventing infinite self-inductance.  Similarly for positive electromagnetic gauge bosons.  See this post.

For those who are easily confused, I’ll recap.  The usual photon has an equal amount of positive and negative electric field energy, spatially separated as implied by the size or wavelength of the photon (it’s a transverse wave, so it has a transverse wavelength).  Each of these propagating positive and negative electric fields has a magnetic field, but because the magnetic field of the moving positive electric field curls in the opposite direction to that of the moving negative electric field, the two curls cancel out when the photon is seen from a distance large compared to its wavelength.  Hence, near a photon there are electric fields and magnetic fields, but at a distance large compared to the wavelength of the photon, these fields are both cancelled out.  This is the reason why a photon is said to be uncharged.  If the photon’s fields did not cancel, it would have charge.  Now, in the weak force theory there are three gauge bosons which have some connection to the photon: two charged W bosons and a neutral Z boson.  This suggests a workable, predictive revision to electromagnetic theory.

I’ve gone seriously off on a tangent here to comparing the books Not Even Wrong and The Trouble with Physics.  However, I think these are important points to make.

Update, 24 March ’07: the following is the part of a comment to Clifford’s blog which was snipped off.

In order to be really convincing someone has got to come up with a way of making checkable predictions from a defensible unification of general relativity and the standard model. Smolin has a longer list in his book:

1. Combine quantum field theory and general relativity
2. Determine the foundations of quantum mechanics
3. Unify all standard model particles and forces
4. Explain the standard model constants (masses and forces)
5. Explain dark matter and dark energy, or come up with some modified theory of gravity that eliminates them but is defensible.

Any non-string solution to these problems is almost by definition a joke and won’t be taken seriously by the mainstream string theorists. Typical argument:

String theorist: “String theory includes 6/7 extra dimensions and predicts superpartners, gravitons, branes, landscape of standard models, anthropic principle, etc.”

Alternative theorist: “My theory resolves real problems that are observable, by explaining existing data!”

String theorist: “That sounds boring/heretical to me.”

What’s unique about string theory is that it has managed to acquire public respect and credulity in advance of any experimental confirmation.

This is mainly due to public relations hype. That’s what makes it so tough on alternatives.

# The correct unification scheme

The top post here gives a discussion of the problem of unifying gravity and the standard model forces: gauge boson radiation is exchanged between all charges in the universe, while electromagnetic forces only result in particular situations (dissimilar or similar charges).  As discussed below, gravitational exchange radiation interacts indirectly with electric charges, via some vacuum field particles which become associated with electric charges.  [This has nothing to do with the renormalization problem in speculative (string theory) quantum gravity, which predicts nothing.  Firstly, this does make predictions of particle masses and of gravity and cosmology.  Secondly, renormalization is accounted for by vacuum polarization shielding electric charge.  The mass changes in the same way, since the field which causes mass is coupled to the charge by the already renormalized (shielded) electric charge.]

The whole idea that gravity is a regular quantum field theory, which causes pair production if the field is strong enough, is totally speculative, and there is not the slightest evidence for it.  The pairs produced by an electric field above the IR cutoff, corresponding to 10^18 V/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge, published in PRL in 1997.  Koltick et al. found that electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (observable electric charge is independent of distance only beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because there is less polarized dielectric between you and the electron core, so less of the core’s field gets cancelled by the intervening dielectric).
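
Reading the 7% as the increase in the effective coupling (the usual quoted running of alpha from about 1/137 at low energy to about 1/128.5 near 91 GeV, a value I am assuming here rather than taking from the original paper), the arithmetic checks out roughly:

```python
# Running of the effective electromagnetic coupling with energy:
alpha_low = 1 / 137.036     # fine structure constant at low energy
alpha_91GeV = 1 / 128.5     # approximate effective value near the Z mass

increase = alpha_91GeV / alpha_low - 1
print(round(increase * 100, 1))  # ~6.6 percent, close to the 7% quoted
```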

There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons).  How can gravitational charge be renormalized?  There is no mechanism for pair production whereby the pairs will become polarized in a gravitational field.  For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized.  If they are both displaced in the same direction by the field, they aren’t polarized.  So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells’s Cavorite.

There is no evidence for this.  Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, with only the renormalization of electric charge being explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value.  However, this is not a problem.  The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude).  Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that requires quantum gravity on the basis that mass is the unit of gravitational charge.

Finally, supersymmetry is totally flawed.  What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification.  You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).

The energy sapped from the gauge boson mediated field of electromagnetism is being used.  It’s being used to create pairs of charges, which get polarized and shield the field.  This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory.  Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.

Now take the situation where you put N electrons close together, so that their cores are very nearby.  What will happen is that the surrounding vacuum polarization shells of the electrons will overlap.  The electric field is N times stronger, so pair production and vacuum polarization are N times stronger, and hence the shielding of the polarized vacuum is N times stronger!  This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron.  Put another way, the additional charges cause additional polarization which cancels out the additional electric field!

This has three remarkable consequences.  First, an observer at a long distance (>1 fm), who knows from high-energy scattering that there are N charges present in the core, will see only 1 unit of charge at low energy.  Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.

Second, the Pauli exclusion principle prevents two fermions from sharing the same set of quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties.  (Most usually, at low pressure, it is the spin quantum number which changes, so that adjacent electrons in an atom have opposite spins; Dirac’s theory implies a strong association of intrinsic spin with magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials.)  Extending the Pauli exclusion principle, particles could acquire short-range nuclear charges under compression; the mechanism for the acquisition of nuclear charges is the stronger electric field, which produces enough pair production to allow vacuum particles like W and Z bosons and pions to mediate nuclear forces.

Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do.  Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.

The right-handed electron has a weak hypercharge of -2.  The left-handed electron has a weak hypercharge of -1.  The left-handed down quark (with an observable low-energy electric charge of -1/3) has a weak hypercharge of +1/3, while the right-handed down quark has a weak hypercharge of -2/3.

It’s totally obvious what’s happening here.  What you need to focus on is the hadron (meson or baryon), not the individual quarks.  The quarks are real, but their electric charges, as implied from low-energy physics considerations, are fictitious for the purpose of understanding an individual quark (which can’t be isolated anyway, because isolating one takes more energy than creating a pair of quarks).  The shielded electromagnetic charge energy is used in the weak and strong nuclear fields, and is shared between them.  It all comes from the electromagnetic field.  Supersymmetry is false because at high energy, where you see through the vacuum, you arrive at the unshielded electric charge of the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces.  Hence, at the usually assumed Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric).  This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being the dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.

It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength.  This is done when electric field energy is stored in a capacitor.  In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose.  See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy.  (The collision energy is easily translated into distances from the Coulomb scattering law for the closest approach of two electrons in a head on collision, although at higher energy collisions things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge.  The assumption of perfectly elastic Coulomb scattering will also need modification leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.)  The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces.  This allows predictions and more checks.  It’s totally tied down to hard facts, anyway.  If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields, and the vacuum pair production and polarization phenomena, so something will be learned either way.
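As a minimal illustration of the capacitor-style calculation described above, this sketch evaluates the classical field energy density u = (1/2)eps0*E^2 at the ~10^18 V/m field strength quoted earlier for the IR cutoff; the constants are standard values, and this is only the classical formula, not the full quantum field theory result.

```python
# Classical energy density of an electric field, u = (1/2) * eps0 * E^2,
# i.e. the same formula used for energy stored in a capacitor.
eps0 = 8.854e-12   # vacuum permittivity, F/m (standard value, assumed)
E = 1e18           # field strength, V/m (figure quoted in the text)

u = 0.5 * eps0 * E**2
print(f"u = {u:.3e} J/m^3")  # ~4.4e24 J/m^3
```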

To give an example from https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron.  Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances.  This figure is the reason why the short ranged strong nuclear force is so much stronger than electromagnetism.

# Smolin, Woit, the failure of string theory, and how string theory responds

Professor Lee Smolin has been attacked by various string theorists (particularly Aaron Bergmann and Lubos Motl), but now Professor Clifford Johnson has seemingly joined in with Aaron and Lubos in a post where he claims that pointing out the failure of string theory in books is unsatisfactory because it puts “their rather distorted views on the issues into the public domain in a manner that serves only to muddle”.

This seems a slightly unfair attack to me.  Clifford is certainly trying hardest of all the string theorists to be reasonable, but he has stated that he has not read the books critical of string theory, which means that his claim that the books contain ‘distorted views’ which ‘muddle’ the issues is not founded on fact (much like the claims of string theory).

Dr Peter Woit has a nice set of notes summarising some problems with string theory here.  These are far more sketchy than his book and don’t explain the Standard Model and its history as the book does, but they do summarise a few of the many problems in string theory.  String theorists rarely acknowledge the existence of critics at all (Witten has written a letter to Nature suggesting that string theorists should ignore objections, while continuing to make or stand by misleading claims that string theory ‘predicts’ gravity, such as Witten’s own claim to that effect in the April 1996 issue of Physics Today).  They dismiss any problem with string theory as a ‘storm in a teacup’, refuse to read the books of critics, and misrepresent what the critics are saying, so the arguments never address the deep problems.

For instance, Clifford wrote in a particularly upsetting comment:

“For example, a great deal of time was spent by me arguing with Peter Woit that his oft-made public claim that string theory has been shown to be wrong is not a correct claim. I asked him again and again to tell us what the research result is that shows this. He has not, and seems unable to do so. I don’t consider that to be informed criticism, but a very very strong and unfair overstatement of what the current state of on-going research is.”

Peter Woit explains on page 177 of Not Even Wrong (which, admittedly, Clifford is unaware of, since he has not read the book!) that, using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force strength 10-15% too high, when the experimental data are accurate to a standard deviation of about 3%.  So that’s failure #1.

Moreover, Peter Woit also explains on page 179 that supersymmetry makes another false prediction: it predicts a massive amount of dark energy in the vacuum and an immense cosmological constant, totally contradicted by astronomy and too high by a factor of 10^55 or 10^113 depending on whether the string theory is minimally supersymmetric or a supersymmetric grand unified theory, respectively.

Either way, Dr Woit explains: ‘This is almost surely the worst prediction ever made by a physical theory that anyone has taken seriously.’ So that’s failure #2.

This is not a problem with the standard model of particle physics: comparing string theory to the standard model is a false analogy.  A student who gets one question wrong on a paper derives no excuse from pointing to another student who scored 99% despite happening to get that same question wrong.  Any assessment by comparison needs to take account of successes, not just errors.  In one case the single error marks complete failure; in the other it is trivial.

It’s still a string theory error, whether or not the standard model makes it as well.  String theorists adopt a different definition of the standard model for this argument, treating it more like a speculative theory than an empirical model of particle physics.  The standard model isn’t claimed to be the final theory; string theory is.  The standard model is extremely well based on empirical observations and makes checked predictions.  String theory doesn’t.

That’s why Smolin and Woit are more favourable to the standard model.  String theory, if it is of any use, should sort out the problems with the standard model.  This is why the errors of string theory are so alarming: it is supposed to sort things out theoretically, unlike the standard model, which is an empirically based model, not a claimed final theory of unification.

# Asymptotia

### More Scenes From the Storm in a Teacup, VII

by Clifford, at 2:18 am, March 13th, 2007 in science, science in the media, string theory

“You can catch up on some of the earlier Scenes by looking at the posts listed at the end of this one. Through the course of doing those posts I’ve tried hard to summarize my views on the debate about the views of Smolin and Woit – especially hard to emphasize how the central point of their debate that is worth some actual discussion actually has nothing to do string theory at all. Basically, the whole business of singling out string theory as some sort of great evil is rather silly. If the debate is about anything (and it largely isn’t) it is about the process of doing scientific research (in any field), and the structure of academic careers in general. For the former matter, Smolin and Woit seem to have become frustrated with the standard channels through which detailed scientific debates are carried out and resolved, resorting to writing popular level books that put their rather distorted views on the issues into the public domain in a manner that serves only to muddle.  …”

Everything that happens involves particle physics, so particle physics determines the nature of everything; it comes down to just a few types of fundamental particle and four fundamental forces (three at high energy, where electroweak unification occurs).

It’s better to have debates and disputes over scientific matters that can potentially be resolved, than have arguments over interminable political opinions which can’t be resolved factually, even in principle. I don’t agree that a lack of debate (until new experimental data arrives) is the best option. The issue is that experiments may resolve the electroweak symmetry breaking mechanism, but they won’t necessarily change the facts in the string theory debate one bit. Penrose explains the problem here on pp. 1020-1 of Road to Reality (UK ed.):

‘34.4 Can a wrong theory be experimentally refuted? … One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. … We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

I’ve written a very brief review of Lee Smolin’s book on Amazon.co.uk, which for brevity concentrates on reviewing the science of the book that I can review objectively (I ignore discussions of academic problems).  Here is a copy of it:

Professor Lee Smolin is one of the founders of the Perimeter Institute in Canada. He worked on string theory in the 1980s and switched to loop quantum gravity when string theory failed.

Before reading this book, I read Dr Peter Woit’s book about the failure of string theory, Not Even Wrong, read his blog, and watched Smolin’s lectures (available streamed online from the Perimeter Institute website), Introduction to Quantum Gravity, which explain the loop quantum gravity theory very clearly.

Smolin concentrates on the subject from the perspective of understanding gravity, although he helped develop a twisted braid representation of the standard model particles.  Loop quantum gravity is built on firmer ground than string theory, and tackles the dynamics behind general relativity.

This is quite different from the approach of string theory, which completely ignores the dynamics of quantum gravity.  I should qualify this by saying that stringy 11-dimensional supergravity, which is the bulk of the mainstream string theory, M-theory (in M-theory, 10-dimensional superstring theory is the brane or membrane on the bulk, like an N-1 dimensional surface on an N-dimensional material), does contain a spin-2 mode which (if real) corresponds to a graviton; but that is not a complete theory of gravitation.

In particular, in reproducing general relativity, string theory suggests a large negative cosmological constant, while the current observation-based cosmological model has a small positive cosmological constant.

In addition to failing there, string theory also fails to produce any of the observable particles of the standard model of physics.  This is because of the way string theory is constructed: starting from a worldsheet (a 1-dimensional string, when moved, gains a time dimension, becoming a 1+1 dimensional “worldsheet”), 8 additional dimensions are added to satisfy the conformal symmetry of particle physics, assuming that there is supersymmetric unification of standard model forces (which requires the assumption that every fermion in the universe has a bosonic superpartner, something nobody has ever observed in an experiment).  If supersymmetry is dropped, you have to add three times as many dimensions to the worldsheet for conformal symmetry, giving 26-dimensional bosonic string theory.  That theory traditionally had problems in explaining fermions, although Tony Smith (now censored off arXiv by the mainstream) has recently come up with some ideas to get around that.

The failure of string theory is due to the 10 dimensions of supersymmetric superstring theory required by the worldsheet and conformal symmetry considerations.  Clearly, we don’t see that many dimensions, so string theorists rise to the challenge by a trick first performed with Kaluza’s 5-dimensional theory back in the 1920s: Klein argued that an extra spatial dimension can be compactified by being curled up into a small size.  Historically, the smallest size assumed in physics has been the Planck length (which comes purely from dimensional analysis, by combining physical constants, not from an experimentally validated theory or from observation).
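The dimensional-analysis origin of the Planck length mentioned above can be sketched directly; the CODATA constant values are standard assumptions supplied here, not taken from the text.

```python
import math

# Planck length from dimensional analysis: l_P = sqrt(hbar * G / c^3).
# This is a combination of constants, not an experimentally measured scale.
hbar = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.2e} m")  # ~1.6e-35 m
```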

With 10-dimensional superstring theory, the dimensions must be reduced on a macroscopic scale to 3 spatial dimensions plus 1 time dimension, so 6 spatial dimensions need compactification.  The method for doing this is the Calabi-Yau manifold.  But this causes a massive problem in string theory, called the landscape.  String theory claims that particles are vibrating strings, which becomes very problematic when 6 dimensions are compactified, because the vibration modes possible for a string then depend critically on the size and shape parameters of those 6 compactified dimensions.  The possibilities are vast, maybe infinite.

It turns out that there are at least 10^500 ways of producing a standard model vacuum ground state from such strings containing Calabi-Yau manifolds.  Nobody can tell if any of those solutions is the real standard model of particle physics.  For comparison, the age of the universe is something like 10^17 seconds.  Hence, if you had a massive computer trying to compute all the solutions to string theory from the moment of the big bang to now, it would have to work at a speed of 10^483 solutions per second (a practically impossible speed, even given such a timescale).  A few string theorists hope to find a way to tackle this problem statistically, in a non-rigorous way (without checking every single solution), before the end of the universe; but most have given up and try to explain particle physics by the anthropic principle, whereby it is assumed that there is one universe for each of the 10^500 solutions to string theory, and we see the one standard model which has parameters able to result in humans.
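The timing argument in the paragraph above is simple logarithm arithmetic, sketched here (the numbers are far too large for ordinary floating point, so everything stays in powers of ten).

```python
# Landscape arithmetic: checking 10^500 candidate vacua against the
# ~10^17 s age of the universe, working with base-10 exponents.
log10_vacua = 500          # at least 10^500 candidate vacua
log10_age_seconds = 17     # age of the universe ~ 10^17 seconds

log10_rate = log10_vacua - log10_age_seconds
print(f"required rate ~ 10^{log10_rate} solutions per second")  # 10^483
```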

More radical string theorists proclaim that if you fiddle around with the field theories underlying general relativity and the standard model, you can create a landscape of unobserved imaginary universes from those theories, similar to string theory. Therefore, they claim, the problems in string theory are similar to those in general relativity and the standard model. However, this analogy is flawed because those checked theories are built up on the basis of observations of particle symmetries, electrodynamics, energy conservation and gravitation, and they also produce checkable predictions. In short, there is no problem due to the imaginary landscape in those theories, whereas there is a real problem caused by the landscape in string theory, because it prevents a reproduction (post-diction) of existing physics, let alone predictions.

Smolin suggests that the failure of string theory to explain general relativity and the standard model of particle physics means it may be helpful if physicists got off the string theory bandwagon and started investigating other ideas.  Woit makes the same point and gives the technical reasons.

The problem is that string theory has, over the past two decades, become a cult topic supported by endless marketing hype, magazine articles, books, even sci-fi films.  Extra dimensions are popular, and the heroes of string theory have gotten used to being praised despite having not the slightest shred of evidence for their subject.  Recently, they have been claiming that string theory mathematics is valuable for tackling some technical problems in nuclear physics, or may be validated by the discovery of vast cosmic strings in space.  But even the mathematics of the epicycles of Ptolemy’s earth-centred universe had uses elsewhere, so this defence of string theory is disingenuous.  It’s not clear that string theory maths solves any nuclear physics problems that can’t be solved by other methods.  Even if it does, that’s irrelevant to the issue of whether people should be hyping string theory as the best theory around.

Smolin’s alternative is loop quantum gravity. The advantage of this is that it builds up Einstein’s field equation less a metric (so it is background independent) from a simple summing of interaction graphs for the nodes of a Penrose spin network in the 3 spatial dimensions plus time dimension we observe. This sum is equivalent to taking a Feynman path integral, which is a basic method of doing quantum field theory. The result of this is general relativity without a metric. It is not a complete theory yet, and is the very opposite of string theory in many ways.

While string theory requires unobservables like extra dimensions and superpartners, loop quantum gravity works in observable spacetime using quantum field theory to produce a quantum gravity consistent with general relativity. Ockham’s razor, the principle of economy in science, should tell you that loop quantum gravity is tackling real physics in a simple way, whereas string theory is superfluous (at least until there is some evidence for it).

Obviously there is more progress to be made in loop quantum gravity, which needs to become a full Yang-Mills quantum theory if gravity is indeed a force like the other standard model forces. However, maybe the relationship between gravity and the other long-range force, electromagnetism, will turn out to be different to what is expected.

For instance, loop quantum gravity needs to address the problem of whether gravity is a renormalizable quantum field theory like the standard model Yang-Mills theories.  This will depend on the way in which gravitational charge, i.e. mass, is attached to or associated with standard model charges by way of some sort of “Higgs field”.  The Large Hadron Collider is due to investigate this soon.  Renormalization involves using a corrected “bare charge” value for electric charge and nuclear charges which is higher than that observed.  The justification is that very close to a particle, vacuum pair production occurs in the strong field; the pairs polarize and shield the bare core charge down to the value observed at long distances and low energies.  For gravity, renormalization poses the problem of how gravitational charge can be shielded.  Clearly, masses don’t polarize in a gravitational field (they all move the same way, unlike electrons and positrons in an electric field), so the mass-giving “Higgs field” effect is not directly capable of renormalization; but it is capable of indirect renormalization if the Higgs field is associated with particles by another field, like the electric field, which is renormalized.

These are just aspects which appeal to me. One of the most fun parts of the book is where Smolin explains the reason behind “Doubly Special Relativity”.

Peter Woit’s recent book Not Even Wrong has a far deeper explanation of the standard model and the development of quantum field theory, the proponents and critics of string theory, and gives the case for a deeper understanding of the standard model in observed spacetime dimensions using tools like the well established mathematical modelling methods of representation theory.

Both books should really be read to understand the overall problem and possibilities for progress by alternative ideas despite the failure of string theory.

Update: in the comments on Asymptotia, Peter Woit has made some quick remarks from a web cafe in Pisa, Italy.  Instead of arguing about the substance of his remarks, Aaron Bergmann and Jacques Distler are repeatedly attacking one nonsense sentence he typed, in which he wrote the contradiction that a cosmological constant can correspond to flat spacetime, whereas a cosmological constant implies a small curvature.  Unable to defend string theory against the substance of the charge that it is false, they are attacking this one sentence as a straw man.  It’s completely unethical.  The fact that a string theorist will refuse to read the carefully written and proof-read books, and choose instead to endlessly attack a spurious comment on a weblog, just shows the level to which their professionalism has sunk.  Jacques Distler does point out correctly that in flat spacetime the vacuum energy does not produce a cosmological constant.  Instead of attacking critics of completely failed theories, he should perhaps admit the theory has no claim to be science.

# Why old discarded theories won’t be taken seriously

Heating occurs when the fields in incident radiation set up an oscillation in a charge.  Thus, a radio wave causes an electron to oscillate, and the resistance to the motion of the electron in the metal (or whatever the material is) causes some heating.  That’s the basic mechanism whereby radiation energy is converted into kinetic heat energy in matter.  This mechanism doesn’t hold, however, for extremely high energy radiation.  A gamma ray has a wavelength too short to merely oscillate electrons, and behaves more like a particle when it hits matter (although it behaves like a wave when travelling, hence the interference pattern which results from the double-slit experiment, even with single photons).  The gamma ray imparts energy by knocking into charges: ejecting them from the material if it has enough energy to do so (the photoelectric effect), Compton scattering (where the incident gamma ray is degraded in energy and changed in angle, imparting some energy to the electron like a billiard-ball collision), and pair production (where the gamma ray has enough energy to create a free pair of particles from the Dirac sea).
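For the Compton scattering process mentioned above, the standard wavelength-shift formula can be evaluated directly; the constant values and the 90-degree example angle are illustrative assumptions, not from the text.

```python
import math

# Compton scattering: wavelength shift of a photon scattering off an
# electron at angle theta is
#   delta_lambda = (h / (m_e * c)) * (1 - cos(theta)),
# where h/(m_e*c) is the electron's Compton wavelength.
h = 6.626e-34      # Planck constant, J*s (standard value, assumed)
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

lambda_compton = h / (m_e * c)                    # ~2.43e-12 m
shift_90 = lambda_compton * (1 - math.cos(math.pi / 2))
print(f"Compton wavelength: {lambda_compton:.3e} m")
print(f"shift at 90 degrees: {shift_90:.3e} m")
```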

It’s clear from experience that whatever the detailed mechanism for Yang-Mills exchange radiation is, it doesn’t cause things to heat up.  The claim that the energy locked up in fields must be turned to heat by masses is absurd.  This is because heat energy is different to field energy.  An electron has an energy of 0.511 MeV.  This energy is locked up but can only be released by annihilation with a positron to give two gamma rays, each of 0.511 MeV.  However, it would be wrong to ridicule the existence of electric field energy by claiming that if electrons had a rest mass energy of 0.511 MeV, matter would get very hot indeed.  The claim that exchange radiation of a field is the same thing as heat energy is totally false. The two are completely different.

Feynman chose the LeSage gravity mechanism, out of all the discarded gravity mechanisms, as the one to describe in detail in his Lectures on Physics, and it is also included, with an illustration, in his nice little book The Character of Physical Law.  Feynman shows it predicts the inverse-square law, but also predicts drag, which would cause the planets to slow down and spiral into the sun if the radiation were intense enough to cause the observed gravitation; so he concluded that it was wrong.  So it is.  This is also a general fault of aether ideas.

The modern quantum field theory, at least the successful (non-stringy) work on it, is based on the principle mentioned in the previous post, namely Feynman’s ‘shut up and calculate’ advice.  The Standard Model of particle physics is based on exchange radiation as the physical cause of forces; this is the Yang-Mills quantum field theory first proposed in 1954.  Such mathematical physics is based on symmetry principles, not causal mechanism.  But it is still physics, because it enables you to make calculations that can be checked, although it is not necessarily the complete story.  Any final theory should mathematically correspond to Yang-Mills theory where we observe the symmetry principles to hold, but should also include some deeper understanding.

Where you could go off the deep end is to deny that there is any deeper understanding underlying the symmetry principles.

Suppose you are given a causal mechanism that exactly predicts everything observed in the universe, but doesn’t predict the unobservable speculations of string theory.  This is not going to be wrong just because the existing Standard Model and general relativity were originally derived using different ideas to causal mechanisms.  Progress occurs not so much by sticking like a parrot to proclaiming the beauty of existing ideas, but by trying some new things until something useful occurs.  Crackpots occur not by making calculations and finding correct new physics methods, but by people sticking to dead ends and refusing to give up false ideas, defending the falsehoods with more nonsense.

From: Nigel Cook
To: Guy Grantham
Cc: Whan Peter ; Montgomery Ian
Sent: Friday, March 16, 2007 12:03 PM
Subject: Re: objections to LeSage type gravity

Dear Guy,

Forget gravity for LeSage’s model!  The LeSage model correctly explains the pion mediated strong nuclear attractive force, not gravity.  Sir Karl Popper discusses how the uncertainty principle arises from impacts at high energy (i.e., in the intense electric field at small distances from a charge), in his book “Logic of Scientific Discovery”, which I quote on my homepage.

The “problems” which you get from trying to apply the LeSage mechanism to gravity become assets when you use it to explain how pion exchange via the vacuum causes protons and neutrons to be pushed together in the nucleus, if they start nearby.  Fusion occurs when protons are brought close enough that the strong attractive effect from pions exceeds the Coulomb repulsion, so the particles approach.  Obviously, they don’t approach endlessly, or the nucleus would become a singularity; instead, there is a shorter-range, rho-particle-mediated exchange which causes repulsion over smaller distances, so the nucleons (neutrons and protons) are kept a certain distance apart, something of the order of 1 fm, by pion attraction at longer ranges (but with a limit of a few fm) and rho repulsion at shorter distances.  Repulsion due to rho particles is just the recoil of particles being mutually exchanged: imagine two thugs shooting machine-gun bullets at each other; each will suffer a repulsion force caused partly by impacts from the other thug’s bullets, and partly by the recoil (Newton’s 3rd law) of his own machine gun as it fires.
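The few-fm range quoted above for the pion-mediated force follows from the uncertainty principle as range ~ hbar/(m_pi * c); the charged pion rest energy of ~139.6 MeV is a standard value assumed here, not from the text.

```python
# Rough range of a pion-mediated force from the uncertainty principle:
# range ~ hbar / (m_pi * c) = (hbar * c) / (m_pi * c^2).
hbar_c_MeV_fm = 197.3    # hbar*c in MeV*fm (standard value, assumed)
m_pi_c2 = 139.6          # charged pion rest energy, MeV (assumed)

range_fm = hbar_c_MeV_fm / m_pi_c2
print(f"pion force range ~ {range_fm:.2f} fm")  # ~1.4 fm
```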

Now for gravity.  It’s a long range force.  There are no charges in motion involved, only radiation, because pair production only occurs out to 1 fm from a charge.  So gravity, which predominates at large distances, is due to exchange radiation.

Photons of light don’t interact with each other.  They exert pressure when they are reflected or absorbed by surfaces because of the change in momentum, p = E/c for absorption and p = 2E/c for reflection.  But they don’t form a gas.  Photons don’t obey the exclusion principle, so you can fit an endless number of photons into a given space without any pressure arising!

This is why they obey Bose-Einstein statistics, rather than Fermi-Dirac statistics.
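The pressure figures implied by the momentum relations above (p = E/c absorbed, p = 2E/c reflected) are easy to put in numbers. A minimal sketch; the 1 kW beam power is just an illustrative choice, not a figure from the correspondence:

```python
# Radiation pressure from photon momentum: p = E/c (absorbed), p = 2E/c (reflected).
# Force is momentum delivered per second, so F = P/c or F = 2P/c for a beam of power P.

c = 299_792_458.0  # speed of light, m/s

def radiation_force(power_watts: float, reflected: bool = False) -> float:
    """Force in newtons exerted by a light beam of the given power."""
    factor = 2.0 if reflected else 1.0
    return factor * power_watts / c

P = 1000.0  # a 1 kW beam (illustrative)
print(radiation_force(P))                   # absorbed:  ~3.3e-6 N
print(radiation_force(P, reflected=True))   # reflected: ~6.7e-6 N
```

The doubling for reflection is just the doubled momentum change of the photons.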

Even with water waves, you can see that there is no permanent interaction between them when they pass through one another!  If you send two waves travelling in different directions through one another, they superimpose temporarily, giving a resultant that can even be zero where a peak “cancels” a trough, but after that transitory overlap each wave emerges and continues as before with its original form!

This is totally different from firing bullets at one another, where the superposition causes a permanent effect.
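The water-wave point is just the linearity of the wave equation; d’Alembert’s solution u(x,t) = f(x - ct) + g(x + ct) makes it explicit, since each pulse keeps its shape no matter how the two overlap in between. A toy sketch with two Gaussian pulses (the shapes, positions and speed are arbitrary choices of mine):

```python
import math

c = 1.0                                  # wave speed (arbitrary units)
f = lambda x: math.exp(-(x + 5) ** 2)    # right-moving pulse, starts centred at x = -5
g = lambda x: math.exp(-(x - 5) ** 2)    # left-moving pulse, starts centred at x = +5

def u(x, t):
    """d'Alembert solution of the 1-D wave equation: superposition of the two pulses."""
    return f(x - c * t) + g(x + c * t)

# At t = 5 the pulses sit on top of each other and simply add...
print(u(0.0, 5.0))   # ~2.0: constructive overlap at x = 0
# ...but by t = 10 each has passed through the other and emerged unchanged:
print(u(5.0, 10.0))  # ~1.0: the right-moving pulse alone, shape intact, now at x = +5
```

Bullets colliding, by contrast, exchange momentum irreversibly; linear waves do not.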

Unless you can see the difference between bosons and fermions and that gravity is a boson effect while the “errors” of LeSage are due to fermion radiation assumptions, we’re not getting anywhere.  Once again, gravity is a massless boson (integer spin) exchange radiation effect.  LeSage assumed material particles (fermions, or their composites like mesons such as pions) were the exchange radiation.  LeSage’s particle assumption is only valid for pions, etc., in the strong nuclear attractive force.  There, the “errors” which would be true of gravity are bonuses: the attraction is predicted to have a short range on the order of a mean free path of scatter before radiation pressure equalization in the shadows quenches the attractive force.  This short range is real for nuclear forces.

For gravitation, curvature is the same thing as exchange boson radiation, and in loop quantum gravity curvature is equivalent to the effect of the full cycle of exchange radiation going from mass A to mass B and back again.

Curvature is a name for the radial contraction due to masses. To speak of curvature as being an alternative to exchange radiation causing general relativity, is as absurd as claiming that 1+1 and 2 are not the same thing.  Of course they are merely different mathematical expressions for the same thing, physically.  See http://cosmicvariance.com/2007/03/12/catholic-priest-proposes-new-model-for-creation/#comment-221007

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’

– New Scientist, 17 April 1993, pp32-3.

Spacetime contracts around masses; the earth’s radius is contracted by 1.5 mm radially (the circumference or transverse dimension is unaffected, hence the fourth dimension is needed to keep Pi constant via curvature) by its gravitation. Time is also slowed down.

The cause is pretty obvious – exchange radiation causes radial contraction of masses in general relativity, just as in special relativity you get contraction of moving masses. Take the Lorentz contraction, stick the Newtonian escape velocity into it, and you get Feynman’s simplified (1/3)MG/c^2 formula for gravitational radial contraction in general relativity (you have to put in the 1/3 factor manually, because a moving object is contracted in only one dimension, whereas the contraction is shared over 3 dimensions in GR). The justification here is that the escape velocity is also the velocity acquired by an object falling from an infinite distance, so it is the velocity corresponding to a kinetic energy equal to the gravitational potential energy involved.
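The 1.5 mm figure quoted above for the Earth follows directly from that (1/3)GM/c^2 formula; a quick check with standard values for G, c and the Earth’s mass:

```python
# Feynman's simplified radial contraction in general relativity: delta-R = (1/3) GM/c^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # Earth's mass, kg

contraction = G * M_earth / (3 * c ** 2)  # metres
print(contraction * 1000)                 # ~1.5 mm, the figure quoted above
```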

It’s obvious that spacetime is contracted by gravitation. Expanding space really just refers to the recession of masses, i.e., expanding volume.

All the experimentally or observationally confirmed parts of general relativity mathematically correspond to simple physical phenomena of exchange radiation in a Yang-Mills quantum field theory. (Ad hoc theorizing to model observations is not observational confirmation. E.g., dark energy speculation based on redshift observations, isn’t confirmed by the observations which suggested the speculation. A better model is that whatever exchange radiation causes quantum gravity when exchanged by receding masses, gets some kind of redshift like light due to the recession of masses, which weakens gravitational effects over large distances. OK, I know you don’t want to know all the correct predictions which come from this physics, so I’ll stop here.)

*****

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – Peter Woit, Not Even Wrong, Cape, London, 2006, p189. (Emphasis added.)
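Woit’s description of holonomy can be made concrete with a tiny computation. On a sphere, parallel transport along a great-circle arc is a rigid rotation about that arc’s axis, so transporting a tangent vector around the closed triangle pole → equator → equator → pole (one octant, enclosing solid angle π/2) composes three 90° rotations and returns the vector rotated by exactly 90°. A minimal sketch (the octant loop is my choice of example, not Woit’s):

```python
import math

def rot(axis, theta):
    """3x3 rotation matrix about the x, y or z axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return {
        'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
        'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }[axis]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

q = math.pi / 2
v = [1.0, 0.0, 0.0]           # tangent vector at the north pole (0, 0, 1)
for axis in ('y', 'z', 'x'):  # three geodesic legs of the octant loop
    v = apply(rot(axis, q), v)

print(v)  # ~[0, 1, 0]: back at the pole, the vector has rotated by 90 degrees
```

The 90° rotation is the holonomy of that loop, equal to the solid angle it encloses.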

Professor Lee Smolin also has some excellent online lectures about loop quantum gravity at the Perimeter Institute site, here (you need to scroll down to ‘Introduction to Quantum Gravity’ in the left hand menu bar). Basically, Smolin explains that loop quantum gravity gets the Feynman path integral of quantum field theory by summing all interaction graphs of a Penrose spin network, which amounts to general relativity without a metric (i.e., background independent).

Best wishes,

Nigel

—– Original Message —–
From: Guy Grantham
To: Nigel Cook
Cc: Whan Peter ; Montgomery Ian
Sent: Friday, March 16, 2007 9:59 AM
Subject: objections to LeSage type gravity

Dear Nigel

I found a comment on the original objections to the Fatio-LeSage type of gravitation, which I understand you to use in modified form, with the impact of (redshifted) gauge particles and their shadowing by masses.

The Fatio-LeSage theory was criticised because the impact of inelastic particles would overheat the recipient; elastic particles would interact between themselves or be travelling in the wrong direction, etc., giving regions where gravitation did not appear; and so on.

The Wiki has a recent article at http://en.wikipedia.org/wiki/Le_Sage%27s_theory_of_gravitation .

Nigel, would you please work through for me how the criticisms in section 4 of that article would or would not apply to your preferred theory, and the ways your model differs from LeSage? I prefer to think of [can only imagine!] ‘space’ being curved rather than as a ‘particle’ impact model, or process driven like Cahill’s and Ian’s theories, and really would like to get a handle on the alternative points of view.

Best regards, Guy

# Feynman versus Bohr over the Copenhagen Interpretation

‘Anybody who is not shocked by quantum mechanics has not understood it!’ – Niels Bohr.

‘Nobody understands quantum mechanics!’ – Richard P. Feynman.

There’s a total lack of respect in Feynman’s writings for the brainwashing non-calculational philosophical baggage of Bohr, a disrespect which Feynman also generously dished out to string theorists.  Feynman’s attitude to string theorists and Copenhagen Interpretationists is summed up by the suggestion ‘Shut up and calculate!’ (often attributed to Feynman, though the phrase is actually due to David Mermin).

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures [path integrals] – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’

– Feynman, QED: The Strange Theory of Light and Matter, Penguin, London, 1990, footnote on pages 55-6.

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories – absolutely none! … I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say, “Well, it might be true.” For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there’s no reason whatsoever in superstring theory that it isn’t eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn’t produce anything…’

– Feynman, in the book by Davies & Brown, ‘Superstrings’ 1988 at pages 194-195.

The analogy of the Copenhagen Interpretation religion to the string theory religion is very interesting.  The stringers are nearly all devout believers, they have faith in philosophical speculative (non-calculating) orthodoxy because they believe it is beautiful and a safe bet from experimental refutation.  That’s why all these people, the worshippers who prefer to believe in what they don’t understand to the hard work of making predictive calculations, vandalise science, polluting the experimental work of generations with extra-dimensional fantasy that doesn’t connect to reality at all.

Nobody is allowed to say that a connection between non-observable spin-2 gravitons and non-observable near-Planck scale unification based on a non-observable 10 dimensional superstring (mem)brane floating on an 11 dimensional supergravity bulk is a lot of hype, even less scientific than a mathematical theory of leprechauns, or arguing over how many fairies can fit on the end of a pin.  It’s worse because mathematics is being abused by M-theorists (the superstring-supergravity unification ideas) who obfuscate to cover up the fact of their non-existent ‘theory’.  Even the name ‘M-theory’ is a falsehood: their speculations contain no dynamical theory, just an empty framework, like Pauli’s box.

Consider Heisenberg’s crackpotism that Wolfgang Pauli discredited with an anti-Heisenberg campaign.

It is just a piece of paper with an empty box on it, the label of which reads: ‘Comment on Heisenberg’s radio advertisement. This is to show the world that I can paint like Titian. Only technical details are missing. W. Pauli.’

Dr Woit explains: ‘With such a dramatic lack of experimental support, string theorists often attempt to make an aesthetic argument, professing that the theory is strikingly ‘elegant’ or ‘beautiful.’ Because there is no well-defined theory to judge, it’s hard to know what to make of these assertions, and one is reminded of another quotation from Pauli. Annoyed by Werner Heisenberg’s claims that, though lacking in some specifics, he had a wonderful unified theory (he didn’t), Pauli sent letters to some of his physicist friends each containing a blank rectangle and the text, ‘This is to show the world that I can paint like Titian. Only technical details are missing.’ Because no one knows what ‘M-theory’ is, its beauty is that of Pauli’s painting. Even if a consistent M-theory can be found, it may very well turn out to be something of great complexity and ugliness.’

– Dr Peter Woit, ‘Is string theory even wrong?’, American Scientist, March-April 2002, http://www.americanscientist.org/template/AssetDetail/assetid/18638/page/2#19239

Worse still, the 6-dimensional Calabi-Yau manifold needed to compactify the 6 extra dimensions introduces a vast amount of complexity into the theory of vibrating strings as fundamental particles: so much complexity (because each extra dimension can take a whole ‘landscape’ of different size and shape parameters) that there are an estimated 10^500 or more different ways of producing particle physics from the model (these different solutions are called the ‘landscape’, after a crude two-dimensional way of plotting some parameters as a graph that superficially resembles terrain), and no reason why any of them should be the Standard Model of particle physics we observe.  Even deciding whether the model is right might take some time, even with fast computers, seeing that the universe is only something like 10^17 seconds old: if you had the entire age of the universe to investigate the 10^500 solutions, you would need to check 10^483 solutions per second, which is still a massive computational problem!
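That parenthetical arithmetic can be confirmed with exact integers; a trivial sketch:

```python
# Checking 10^500 candidate vacua within the universe's age of ~10^17 seconds
# would require examining 10^500 / 10^17 = 10^483 solutions per second.
solutions = 10 ** 500
age_seconds = 10 ** 17   # round figure; ~4.3e17 s for a 13.7 Gyr age
rate_needed = solutions // age_seconds

print(rate_needed == 10 ** 483)       # True
print(len(str(rate_needed)) - 1)      # 483: exponent of the required checking rate
```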

However, no matter what you do to replace the current false and time wasting empty box with genuine physics, you’ll just be insulting the religion founded by the ‘fathers’ of M-theory like Edward Witten and they won’t thank you for correcting their errors.  Nobody has yet discovered a way to disprove a religion.  It can’t be done.  You see, a religion like M-theory isn’t built on any solid facts in the first place, so it’s completely invulnerable.  Point out to the priest that you don’t see any angels or heaven or extra-dimensions floating around, and their response is to educate you that you must believe the theory because it is a beautiful tale, that all will be proved on the day of judgement.  Can’t wait!

Speculation that can’t be checked is religion.

Bad science turned religion which is opposed rationally cannot be defended by rational argument (because the speculations have no empirical basis or confirmation), so such a thing is defensible only by fascism, with disastrous consequences for objective studies:

‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime … The result will easily be guessed. Egypt stood still … Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.

‘Whatever ceases to ascend, fails to preserve itself and enters upon its inevitable path of decay. It decays … by reason of the failure of the new forms to fertilise the perceptive achievements which constitute its past history.’ – Alfred North Whitehead, F.R.S., Sc.D., Religion in the Making, Cambridge University Press, 1927, p. 144.

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

Once you have committed to a false theory, based on speculations that don’t come from observation or experiment but from vain fantasy, science is finished.  This is because all reasonable advances based on facts will be viciously attacked by the speculators, who use ad hominem methods lacking objectivity to censor those making checkable calculations.

# Hawking radiation from black hole electrons has the right radiating power to cause electromagnetic forces; it therefore seems to be the electromagnetic force gauge boson exchange radiation

Here’s a brand new calculation in email to Dr Mario Rabinowitz which seems to confirm the model of gravitation and electromagnetism proposed by Lunsford and others, see discussion at top post here.  The very brief outline for gravity mechanism is:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. There’s outward force F ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

From: Nigel Cook

To: Mario Rabinowitz

Sent: Thursday, March 08, 2007 10:54 PM

Subject: Re: Science is based on the seeking of truth

Dear Mario,

Thank you very much for the information about Kaluza being pre-empted by Nordstrom, http://arxiv.org/PS_cache/physics/pdf/0702/0702221.pdf

I notice that it was only recently (25 Feb) added to arXiv.  Obviously this unification scheme was worked out before the Einstein-Hilbert field equation of general relativity.  It doesn’t make any predictions anyway and as a “unification” is drivel.

My idea of a unification between gravity and electromagnetism is a theory which predicts the ratio of the forces of gravity to electromagnetism between electrons, protons, etc.

Lunsford has some more abstract language for the problem with a 5-dimensional unification, but I think it amounts to the same thing.  If you add dimensions, there are many ways of interpreting the new metric, including the light wave.  But it achieves nothing physical, explains nothing in mechanistic terms, predicts nothing, and has a heavy price: because there are other ways of interpreting an extra dimension, the theory becomes mathematically more complex, instead of becoming simpler.

Edward Witten is making the same sort of claim for M-theory that Kaluza-Klein made in the 1920s.  Witten claims 10/11-d M-theory unifies everything and “predicts” gravity.  But it’s not a real prediction.  It’s just hype.  It just gives censors an excuse to ban people from arxiv, on the false basis that the mainstream theory already is proven correct.

Thank you for the arXiv references to your papers on black holes and gravitational tunnelling.

One thing I’m interested in as regards these areas is Hawking radiation from black holes.  Quarks and electrons have a cross-sectional shielding area equal to the event horizon area of a black hole with their mass.

This conclusion comes from comparing two different calculations I did for gravitational mechanism.  The first calculation is based on a Dirac sea.  This includes an argument that the shielding area needed to completely stop all the pressure from receding masses in the surrounding universe is equal to the total area of those masses.  Hence, the relative proportion of the total inward pressure which is actually shielded is equal to the mass of the shield (say the Earth) divided by the mass of the universe.  An optical-type inverse square law correction is applied for the geometry, because obviously the masses in the universe effectively appear to have a smaller area because the average distance of the masses in the universe is immensely larger than the distance to the middle of the earth (the effective location of all Earth’s mass, as Newton showed geometrically).

Anyway, this type of calculation (completed in 2004/5) gives the predicted gravity strength G, based on Hubble parameter and density of universe locally.  It doesn’t involve the shielding area per particle.

A second (different) calculation I completed in 2005/6 ends up with a relationship between shielding area (unknown) for a particle of given mass, and G.

If the result of this second calculation is set equal to that of the first calculation, the shielding cross-sectional area per particle is found to be Pi*(2GM/c^2)^2, so the effective radius of a particle is 2GM/c^2, which is the black hole event horizon radius.  (Both calculations are at http://quantumfieldtheory.org/Proof.htm which I will have to re-write, as it is the result of ten years of evolving material on an old free website I had, and has never been properly edited.  It has been built up in an entirely ramshackle way by adding bits and pieces, and contains much obsolete material.)
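For the record, the numbers for an electron mass in the formulae just quoted are easy to reproduce (standard constants; this only evaluates the stated horizon radius and cross-section, not the two gravity-mechanism calculations themselves):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg

r = 2 * G * m_e / c ** 2    # black hole event horizon radius, 2GM/c^2
sigma = math.pi * r ** 2    # shielding cross-section, Pi*(2GM/c^2)^2

print(r)      # ~1.35e-57 m
print(sigma)  # ~5.8e-114 m^2
```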

I have not considered Hawking radiation from these black hole sized electrons, because Hawking’s approximations mean his formula doesn’t hold for small masses.

From the Hawking radiation perspective, it is interesting that exchange radiation is being emitted and received by black hole electrons.  I believe this to be the case, because the mainstream uses a size for fundamental particles equal to the Planck scale, which has no physical basis (you can get all sorts of numerology from the dimensional analysis which Planck used), and is actually a lot bigger than the event horizon radius for an electron mass.

http://en.wikipedia.org/wiki/Black_hole_thermodynamics#Problem_two states the formula for the black hole effective black-body radiating temperature.  The radiation power from a black hole is proportional to the fourth power of absolute temperature by the Stefan-Boltzmann radiation law.  That wiki page states that the black hole temperature for radiating is inversely proportional to the mass of the black hole.

Hence, a black hole of very small mass would by Hawking’s formula be expected to have an astronomically large radiating temperature; for an electron mass it works out to 1.35*10^53 Kelvin.  You can’t even get the fourth power of that on a standard pocket calculator because it is too big a number, although obviously you just multiply the exponent by 4 to get 10^212 and multiply that by 1.35^4 = 3.32, so (1.35*10^53)^4 = 3.32*10^212.

The radiating power is P/A = sigma * T^4 where sigma = Stefan-Boltzmann constant, 5.6704*10^{-8} W*m^{-2}*K^{-4}.

Hence, P/A = 1.9*10^205 watts/m^2.

The total spherical radiating surface area is A = 4*Pi*R^2 where R = 2GM/c^2 = 1.351*10^{-57} m, so A = 2.301*10^{-113} m^2.

Hence the Hawking radiating power of the black hole electron is: P = A * sigma * T^4 = 2.301*10^{-113} * 1.9*10^205 = 4*10^92 watts.

At least the result has now become a number which can be displayed on a pocket calculator.  It is still an immense radiating power.  I’ve no idea whether this is a real figure or not, and I know that Hawking’s argument is supposed to break down on the quantum scale.

But this might be true.  After all, the force of matter receding outward from each point, if my argument is correct, is effectively something like 7*10^43 N.  The inward force is equal to that.  The force of exchange radiation is reflected back the way it came when it reaches the black hole event horizon of a particle.  So you would expect each particle to be radiating energy at an astronomical rate all the time.  Unless spacetime were filled with gauge-boson exchange radiation, there wouldn’t be any inertial force or curvature.

The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

The force of this radiation is the rate of change of momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

Hence my inward gauge boson calculation F = 7*10^43 N should be compared (if Hawking’s formula is right) with the force from the exchange of 4*10^92 watts of energy:

F = 7*10^43 N (my gravity model)

F = 2P/c = 2*(4*10^92 watts)/c = 3*10^84 N.

So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of 3*10^84 / [7*10^43] = 4*10^40.
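The whole chain of figures in this email can be re-run in a few lines with standard constants. A careful recomputation reproduces the temperature exactly and gives a power of about 4*10^92 W and a force ratio of about 4*10^40, the same order of magnitude as argued above (the 7*10^43 N figure is taken from the email, not computed here):

```python
import math

# Standard SI constants
G     = 6.674e-11           # gravitational constant
c     = 2.99792458e8        # speed of light
hbar  = 1.054571817e-34     # reduced Planck constant
k_B   = 1.380649e-23        # Boltzmann constant
sigma = 5.670374419e-8      # Stefan-Boltzmann constant
m_e   = 9.1093837e-31       # electron mass

T = hbar * c ** 3 / (8 * math.pi * G * m_e * k_B)  # Hawking temperature
R = 2 * G * m_e / c ** 2                           # event horizon radius
A = 4 * math.pi * R ** 2                           # radiating surface area
P = sigma * A * T ** 4                             # black-body radiating power
F = 2 * P / c                                      # force of reflected exchange radiation
ratio = F / 7e43                                   # vs. the email's 7e43 N gravity figure

print(T)      # ~1.35e53 K
print(P)      # ~4.3e92 W
print(ratio)  # ~4e40
```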

So the Hawking radiation force is the electromagnetic force!  Electromagnetism between fundamental particles is about 10^40 times stronger than gravity!  The exact figure of the ratio depends on whether the comparison is for two electrons, an electron and a proton, or two protons (the Coulomb force is identical in each case, but the ratio varies because of the different masses affecting the gravity force).

So I think that’s the solution to the problem: Hawking radiation is the electromagnetic gauge boson exchange radiation.

It must be either that or a coincidence.  If it is true, does that mean that the gauge bosons (Hawking radiation quanta) being exchanged are extremely energetic gamma rays?  I know that there is a technical argument that exchange radiation is different from ordinary photons because it has extra polarizations (4 polarizations, versus 2 for a photon), but that might be related to the fact that exchange radiation is passing continually in two directions at once while being exchanged from one particle to another and back again, so you get superposition effects (like the period of overlap when sending two logic pulses down a transmission line at the same time in opposite directions).

I only did this calculation while writing this email.  This is my whole trouble, it takes so long to fit all the bits together properly.  I nearly didn’t bother working through the calculation above, because the figures looked too big to go in my calculator.

Best wishes,

Nigel

—– Original Message —–

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, March 07, 2007 11:52 PM

Subject: Science is based on the seeking of truth

Dear Nigel,

You covered a lot of material in your letter of 3-6-07, to which I responded a little in my letter of 3-6-07 and am now responding some more.

I noticed that you mentioned Kaluza in your very interesting site, http://electrogravity.blogspot.com/ .  Since science is based on the seeking of truth, I think acknowledgement of priority must be a high value coin of the realm in our field. Did you know that G. Nordstrom of the Reissner-Nordstrom black hole fame, pre-empted Kaluza’s 1921 paper (done in 1919) by about 7 years?  Three of his papers have been posted in the arXiv by Frank G. Borg who translated them.
physics/0702221 Title: On the possibility of unifying the electromagnetic and the gravitational fields

Authors: Gunnar Nordström
Journal-ref: Physik. Zeitschr. XV (1914) 504-506

Since you are interested in D. R. Lunsford’s unification of gravitation and electrodynamics, in which he has 3 space & 3 time dimensions, Nordstrom’s work may also be of interest to you.

When I was younger, I too wanted to write a book(s).  I have written Chapters for 3 books:

physics/0503079 Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions

astro-ph/0302469 Consequences of Gravitational Tunneling Radiation.

So I know how hard it is to do.  Good luck with your book.  Some very prominent people have posted Free Online Books.

Best regards,
Mario

***************************

Further discussion:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 12:59 AM

Subject: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Nigel,  3-8-07

I differ with your conclusion that: “So the Hawking radiation force is the electromagnetic force!”

Hawking Radiation is isotropic in space and can be very small or very large depending on whether one is dealing respectively with a very large or very small black hole.  My Gravitational Tunneling Radiation (GTR) is beamed between a black hole and another body.  In the case of two black holes that are very close, Hawking Radiation is also beamed, and the two radiations produce a similar repulsive force.

One of my early publications on this was in the Hadronic J. Supplement 16, 125 (2001) arXiv.org/abs/astro-ph/0104055. ” Macroscopic Hadronic Little Black Hole Interactions.”  See Eqs. (3.1) and (3.2).  I also discussed this in my Chapter  “Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions.”
This is in the book: “Progress in Dark Matter Research. Editor J. Blain; NovaScience Publishers, Inc. N.Y., (2005), pp. 1 – 66.  It is also in the ArXiv:
physics/0503079.  This is calculated in eq. (11.9) p.26 where I say:
“Thus F = 10^43 N may also be the largest possible repulsive force in nature between two masses.”  I think it is just a coincidence that this is close to the ratio of electrostatic force to gravitational force ~ 10^40 between as you point out 2 electrons, an electron and a proton, or 2 protons.   As  I  point out, my calculation is for  Planck  Mass  ~10^-5 gm LBH, which is the smallest mass one can expect to be able to use in the Hawking and GTR equations.

Little black holes (LBH) (with Hawking radiation) which may result in the early universe can only be short-lived and don’t play the game very long.  As I pointed out over a decade ago, my LBH (with Gravitational Tunneling Radiation) are a different kind of player in the early and later universe in terms of beaming and much, much greater longevity. This is now being rediscovered by people who have not referenced my work.

One needn’t do the complicated equation you did in terms of T^4, etc.  I also found that the quite complicated black hole blackbody expression for Hawking radiation power can be exactly reduced to

P = G(rho)h-bar/90 where rho is the density of the black hole.  The 90 seems out of place for a fundamental equation.  However the 90 goes away for Gravitational Tunneling Radiation where the radiated power is

P= (2/3)G(rho)h-bar x (transmission probability).  This is in the Arxiv.  Eqs. (3) and (4) of my “Black Hole Radiation and Volume Statistical Entropy.” International Journal of Theoretical Physics 45, 851-858 (2006).   arXiv.org/abs/physics/0506029

Best,
Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Friday, March 09, 2007 10:13 AM

Subject: Re: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Mario,

Thank you for these criticisms, and I agree the gamma rays (Hawking radiation) will be isotropic in the absence of any shield or any motion.  But they are just gamma rays, and suffer from exactly the shielding and the redshift effects I’ve been calculating:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

I don’t see that any theoretical or physical evidence has ever been proposed or found for a Planck mass or Planck length, etc.; they are numerology from dimensional analysis.  You can get all sorts of dimensions.  If Planck had happened to set the “Planck length” as GM/c^2, where M is the electron mass, he would have had a length much smaller than the one he chose (the Planck length), and close to the black hole horizon radius 2GM/c^2.  Planck’s length formula is more complex and lacks any physical explanation: (hG/c^3)^(1/2).  The people who popularise the Planck scale as being fundamental to physics now are mainly string theorists, who clearly are quite willing to believe things without proof (spin-2 gravitons, a 10 dimensional superstring as a brane on 11 dimensional supergravity, supersymmetric bosonic partners for every particle to make forces unify near the Planck scale, all of which are totally unobserved).  It is likely that they skipped courses in the experimental basis of quantum theory, and believe that the hypothetical Planck scale is somehow proved by Planck’s earlier empirically confirmed theory of quantum radiation.
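The size comparison made here is easy to put in numbers. A short sketch (standard constants; note the text writes the Planck length with h, while the modern convention uses h-bar, so both are computed):

```python
import math

G, c = 6.674e-11, 2.998e8   # gravitational constant, speed of light (SI)
h    = 6.626e-34            # Planck's constant
hbar = h / (2 * math.pi)    # reduced Planck constant
m_e  = 9.109e-31            # electron mass, kg

l_h    = math.sqrt(h * G / c ** 3)     # (hG/c^3)^(1/2), as written in the letter
l_hbar = math.sqrt(hbar * G / c ** 3)  # modern Planck length convention
r_e    = 2 * G * m_e / c ** 2          # black hole horizon radius for an electron mass

print(l_h)     # ~4.1e-35 m
print(l_hbar)  # ~1.6e-35 m
print(r_e)     # ~1.35e-57 m: some 10^22 times smaller than either Planck length
```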

I should have written “So the Hawking radiation force is [similar in strength to] the electromagnetic force!”  However, it seems to be an interesting result.

Thank you again for the comments and I look forward to reading these additional papers you mention.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 7:32 PM

Subject: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Nigel,

The conventional wisdom, including that of Stephen Hawking, is that Hawking radiation is mainly by the six kinds of neutrinos, with photons far down the list.  In Hawking radiation, as a Little Black Hole (LBH) radiates, its mass quickly diminishes and the radiated power goes up (since it is inversely proportional to M^2) until it evaporates away.  In my Gravitational Tunneling Radiation (GTR), the LBH radiation is exponentially lower by the tunneling probability (more correctly the transmission probability), and the LBH live much longer.  The radiation tunnels through the gravitational potential energy barrier between a black hole (BH) and other bodies.  If there is a nearby body, the radiation is beamed, predominantly between the BH and the nearby body.  Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.

Now I’ll respond to more of the points you raised in your letter of 3-6-07.   You said “I don’t quite see why you state the time-dependent form of Schroedinger’s wave function, which is more complicated than the time-independent form that generally is adequate for describing a hydrogen atom.  Maybe you must use the most rigorous derivation available mathematically?  However, it does make the mathematics relatively complicated and this makes the underlying physics harder for me to grasp.”

This was in reference to my paper, “Deterrents to a Theory of Quantum Gravity,” http://arxiv.org/abs/physics/0608193 accepted for publication in the International Journal of Theoretical Physics.  This paper goes the next important step following my earlier paper “A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle,” http://arxiv.org/abs/physics/0601218 published in Concepts of Physics.

Einstein’s General Relativity (EGR) is founded on the Strong Equivalence Principle (SEP), which states that locally a gravitational field and an accelerating frame are equivalent.  Einstein was motivated by the Weak Equivalence Principle (WEP), which states that gravitational mass is equivalent to inertial mass.  The SEP implies the WEP.  In my earlier paper, I showed that Quantum Mechanics (QM) violates the WEP and thus violates the SEP, since if A implies B, then (not B) implies (not A).  This demonstrated an indirect violation of the SEP.

In the second paper I went a step further, and showed a direct violation of the SEP.  It was necessary for full generality to deal with the time-dependent form of Schroedinger’s equation. Since the relativistic Klein-Gordon and Dirac equations reduce to the Schroedinger equation, my conclusion also holds for them.  In addition to showing this violation theoretically, I also referenced experimental evidence that indicates that the equivalence principle is violated in the quantum domain.

Also in your letter of 3-6-07, you mentioned Feynman’s book on QED.  I greatly enjoyed reading Feynman’s little book on QED.  Did you notice that he allows the speed of light to exceed c?

Best,
Mario

From: Nigel Cook

Sent: Monday, March 12, 2007 11:36 AM

Subject: Re: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Mario,

“…Hawking radiation is mainly by the six kinds of neutrinos with photons far down the list.”

The neutrinos are going to interact far less with matter than photons, so they aren’t of interest as gauge bosons for gravity unless the ratio of neutrinos to gamma radiation in Hawking radiation is really astronomical.

“Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”

Maybe you are accounting for something fictitious here, which is unfortunate.  I think I mentioned that the universe isn’t accelerating in the dark energy sense; the error there is the assumption that G is constant regardless of the redshift of gauge boson radiation between receding masses (i.e., any distant masses in this particular expanding universe we inhabit).  Clearly this is the error: G does fall if the two masses are receding relativistically, because the exchanged gauge bosons are redshifted.

It would violate conservation of energy for gauge bosons exchanged between receding masses not to be reduced in energy when received.  Correct this error in the application of GR to the big bang, and the cosmological constant with its associated dark energy vanishes.  See https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/
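The energy-conservation point here reduces to the standard redshift relation: radiation emitted with energy E by a source receding at redshift z arrives with energy E/(1 + z). A minimal sketch (the z values are illustrative, not taken from the letters):

```python
# Received gauge boson / photon energy after redshift z: E / (1 + z).
def received_energy(E_emitted, z):
    """Energy of radiation received from a source at redshift z."""
    return E_emitted / (1.0 + z)

# Illustrative redshifts: nearby, moderate, and very distant sources.
for z in (0.0, 0.5, 1.0, 7.0):
    print(f"z = {z}: received fraction = {received_energy(1.0, z):.3f}")
```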

What is interesting, however, is that in spacetime the universe IS accelerating, although this is simply the correct way of interpreting the Hubble law, which should be written in the observable form: (recession velocity) is proportional to (time past).  We can unambiguously measure and state what the recession velocity is as a function of time past, which makes the recession a kind of acceleration seen in spacetime, see https://nige.wordpress.com/2007/03/01/a-checkably-correct-theory-of-yang-mills-quantum-gravity/

The gauge boson exchange radiation in a finite universe will contribute to the expansion of the universe, just as the pressure due to molecular impacts in a balloon causes the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.

Obviously, at early times in the universe, the expansion rate would be much higher because, in addition to gauge boson exchange pressure, there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.

The mechanism for gravity suggested at http://quantumfieldtheory.org/Proof.htm shows that G, in addition to depending on the recession of the masses (gravitational charges) depends on the time since the big bang, increasing in proportion to time.

My mechanism gives the same basic relationship as Louise Riofrio’s and your equation, something like GM  = {dimensionless constant}*tc^3.  (See https://nige.wordpress.com/2007/01/09/rabinowitz-and-quantum-gravity/ and links to earlier posts.)

Louise has assumed that this equation means that the right hand side is constant, and so the velocity of light decreases inversely as the cube-root of the age of the universe.

However, by the mechanism I gave, the velocity of light is constant with age of universe, but instead G increases in direct proportion to age of universe.  (This doesn’t cause the sun’s brightness to vary or the big bang fusion rate to vary at all, because fusion depends on gravitational compression offsetting Coulomb repulsion of protons so that protons approach close enough to be fused by the strong force.  Hence, if you vary G, you don’t affect fusion rates in the big bang or in stars, because the mechanism unifies gravity and the standard model so all forces vary in exactly the same way; a rise in G doesn’t increase the fusion rate because it is accompanied by a rise in Coulomb repulsion which offsets the effect of rising G.)
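The two readings of the relation GM = {dimensionless constant}*tc^3 can be contrasted in a small sketch. The units and the constant k below are illustrative (k absorbs M and the dimensionless factor); the point is just that both scalings satisfy the same equation:

```python
# Two readings of GM = k * t * c^3, as discussed above.
# Interpretation A (Riofrio, as described): G and M fixed, so c ∝ t^(-1/3).
# Interpretation B (this post): c fixed, so G ∝ t.

def c_riofrio(t, c0, t0):
    """c falls as the inverse cube root of the age of the universe."""
    return c0 * (t0 / t) ** (1.0 / 3.0)

def G_nigel(t, G0, t0):
    """G grows in direct proportion to the age of the universe."""
    return G0 * (t / t0)

t0, G0, c0 = 1.0, 1.0, 1.0   # arbitrary units, M taken as 1
k = G0 / (t0 * c0**3)        # fix k so both satisfy GM = k*t*c^3 at t = t0

for t in (0.5, 1.0, 2.0, 4.0):
    rhs_a = k * t * c_riofrio(t, c0, t0) ** 3   # A: should equal constant G0
    rhs_b = k * t * c0 ** 3                     # B: should equal G(t)
    assert abs(G0 - rhs_a) < 1e-12
    assert abs(G_nigel(t, G0, t0) - rhs_b) < 1e-12
print("both scalings satisfy GM = k*t*c^3 at all sampled times")
```

Algebraically this is immediate: in A, t·c(t)^3 = t·c0^3·(t0/t) is constant; in B, k·t·c0^3 grows linearly with t, exactly as G does.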

The smaller value of G at earlier times in the universe produces the effects normally attributed to “inflation”, without requiring the complexity of inflation.  The smoothness (small size of ripples) of the cosmic background radiation is due to the lower value of G at times up to the time of emission of that radiation, 300,000 years.  A list of other predictions is included at http://quantumfieldtheory.org/Proof.htm but it needs expansion and updating, plus some re-writing.

Thank you very much for your further explanation of your work on the violation of the equivalence principle by quantum gravity.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Monday, March 12, 2007 12:40 PM

Subject: what is the presently claimed acceleration of the universe?

Dear Nigel,

Sean M. Carroll takes the approach that one way to account for the acceleration of the universe is to modify general relativity, rather than introducing dark energy.  His published paper is also in the ArXiv.
astro-ph/0607458
Modified-Source Gravity and Cosmological Structure Formation
Authors: Sean M. Carroll, Ignacy Sawicki, Alessandra Silvestri, Mark Trodden
Comments: 22 pages, 6 figures, uses iopart style
Journal-ref: New J.Phys. 8 (2006) 323

In your letter of 3-12-07 you say:
“The gauge boson exchange radiation in a finite universe will contribute to the expansion of the universe, just as the pressure due to molecular impacts in a balloon causes the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.
Obviously, at early times in the universe, the expansion rate would be much higher because, in addition to gauge boson exchange pressure, there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.”

This seems inconsistent with your criticism in this letter of my statement:
MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”
NC:  “Maybe you are accounting for something fictitious here, which is unfortunate.”

Would you agree with my statement if the word accelerated were deleted so that it would read:
MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the expansion of the universe.”

If so, then I ask how you can be sure that the acceleration is 0 and not just some [small] number?  In fact what is the presently claimed acceleration?

Best,
Mario

From: Nigel Cook

Sent: Monday, March 12, 2007 1:35 PM

Subject: Re: what is the presently claimed acceleration of the universe?

Dear Mario,

The true acceleration has nothing to do with the “acceleration” which is put into GR (via the cosmological constant) to counter long range gravity.

1. There is acceleration implied by the Hubble recession in spacetime, i.e., a variation of recession velocity with respect to observable time past is an acceleration, a = dv/dt = d(Hr)/dt = Hv, where H is the Hubble parameter and v quickly approaches the limit c at great distances, so a = Hc ~ 6 *10^{-10} ms^{-2}.
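The figure a = Hc ~ 6 × 10^{-10} ms^{-2} quoted above is a one-line calculation; a minimal check, assuming a conventional present-day value H ≈ 70 km/s/Mpc (the exact result depends on which value of H is adopted):

```python
# Check the quoted acceleration a = H*c, with H = 70 km/s/Mpc assumed.
Mpc = 3.086e22                 # metres per megaparsec
H = 70e3 / Mpc                 # Hubble parameter in SI units, s^-1
c = 2.998e8                    # speed of light, m/s

a = H * c
print(f"H = {H:.3e} s^-1, a = H*c = {a:.2e} m/s^2")
```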

Any contribution to this true acceleration of the universe (i.e., the Hubble law in spacetime) has nothing to do with the fictitious dark energy/cc.

2. Fictional acceleration of universe is the idea that gravity applies perfectly to the universe with no reduction due to the redshift of force mediating exchange radiation due to the recession of gravitational charges (masses) from one another.  This fictional acceleration is required to make the most appropriate Friedmann-Robertson-Walker solution of GR fit the data which show that the universe is not slowing down.

In other words, an acceleration is invented by inserting a small positive cosmological constant into GR to make it match observations made by Perlmutter et al., since 1998.  The sole purpose of this fictional acceleration is to cancel out gravity.  In fact, it doesn’t do the job well, because it doesn’t cancel out gravity correctly at all distances.  Hence the recent controversy over “evolving dark energy” (i.e., the need for different values of lambda, the cc, for supernovae at different spacetime distances/times from us.)

For the acceleration see https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/ .  There is no evidence for dark energy.  What is claimed as evidence for dark energy is evidence of no long range gravitational deceleration.  The insistence that the evidence from supernovae is evidence for dark energy is like Ptolemy’s insistence that astronomy is evidence for an Earth-centred universe, the insistence of other faith-based belief systems that combustion and thermodynamics are evidence for caloric and phlogiston, and Lord Kelvin’s insistence that the existence of atoms is evidence for his vortex atom model.  The evidence doesn’t specifically support a small positive cosmological constant, see: http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta

I can’t see why people are so duped by the application of GR to cosmology that they believe it perfect, and respond to any problem by invoking extra epicycles, rather than quantum gravity effects like the redshift of exchange radiation between receding masses.

Einstein’s 1917 cosmological constant had a very large value, chosen to make gravity become zero at the distance of the average separation between galaxies, and to become repulsive at greater distances than that.

That fiddle proved wrong.  What is occurring is that the exchange radiation can cause both attractive and repulsive effects at the same time.  The exchange radiation pressure causes the local curvature.  As a whole spacetime is flat and has no observed curvature, so all curvature is local.

The curvature is due to the radial compression of masses, being squeezed by the exchange radiation pressure.  This is similar in mechanism to the Lorentz contraction physically of moving bodies.

If a large number of masses are exchanging radiation in a finite sized universe, they will recoil apart while being individually compressed.  The expansion of the universe and the contraction of gravity are two entirely different things that have the same cause.  There is no contradiction.

The recoil is due to Newton’s 3rd law, and doesn’t prevent the radial compressive force.  In fact, the two are interdependent, you can’t have one without the other.  Gravitation is an aspect of the radial pressure, just a shielding effect:

The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.
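The order of magnitude of the outward force F ~ 10^43 N quoted above can be sketched as F = m·a with a = Hc. The mass used below (~3 × 10^52 kg for the observable universe) is a commonly quoted rough figure and is an assumption here, not a value taken from the original calculation:

```python
# Order-of-magnitude sketch of F = m * dv/dt = m * H * c, as quoted above.
# The mass of the observable universe is an assumed round figure.
Mpc = 3.086e22          # metres per megaparsec
H = 70e3 / Mpc          # assumed Hubble parameter (70 km/s/Mpc), s^-1
c = 2.998e8             # speed of light, m/s
m = 3e52                # assumed mass of the observable universe, kg

F = m * H * c           # outward force, N
print(f"F ~ {F:.1e} N")   # of order 10^43 N
```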

Regards MOND (modified Newtonian dynamics), such ideas are not necessarily mechanistic and checkable or fact based.  They’re usually speculations that to my mind are in the LeSage category – they do not lead anywhere unless you can inject enough factual physics into them to make them real.  Another thing which I’ve been thinking about in relation to Sean’s writings on his Cosmic Variance group blog, is the problem of religion.

I recall a recent survey which showed that something like 70% of American physicists are religious.  Religion is not a rational belief.  If physicists are irrational with respect to religion, why expect them to be rational about deciding which scientific theories to investigate?  There is a lot of religious or pseudoscientific speculation in physics, as witnessed by the rise of 10/11 dimensional M-theory.  If mainstream physicists decide what to investigate based on irrational belief systems akin to their religion or other prejudices (clearly religion instills one form of prejudice, but there are others, such as mathematical elitism and racism, all based on the wholesale application of ad hominem arguments against all other people who are different in religion or whatever), why expect them to do anything other than end up in blind alleys like epicycles, phlogiston, caloric, vortex atoms, mechanical aether, M-theory?

It’s very unfortunate that probably 99.9% of those who praise Einstein do so for the wrong (metaphysical, religious) reasons, instead of scientific reasons (his early, quickly corrected false claims, such as that clocks run slower at the equator, that there is a massive cosmological constant, that the universe is static, that quantum mechanics is wrong, and that the final theory will be purely geometric with no particles, don’t discredit him).  People like to claim Einstein is a kind of religious-scientific figure who didn’t make mistakes and was infallible.  At least that’s the popular image the media feel keen on promoting, and it is totally wrong.  It sends out the message that people must not ever make mistakes.  I feel this is an error.  As long as there is some mechanism in place for filtering out errors or correcting them, it doesn’t matter if there are errors.  What does matter is when there is no risk of an error ever being exposed, because someone is modelling non-observed spin-2 gravitons interacting with a non-observed 10 dimensional superstring brane on a bulk of non-observed 11 dimensional supergravity.  That matters, because it’s not even wrong.

I prefer investigating physics from Feynman’s idea of ultimate simplicity, and emergent complexity:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

If and when this is ever found to be wrong, it will then make sense to start investigating extra dimensions on the basis that the universe can’t be explained in fewer dimensions.

Best wishes,

Nigel

# Aaron’s and Lubos’ anti-Loop Quantum Gravity propaganda

There is some discussion of the breakdown of special relativity at Steinn Sigurðsson‘s blog here, which is mentioned by Louise Riofrio here.  Several string theorists, including Aaron Bergman and Lubos Motl, have savagely attacked the replacement theory for special relativity, termed ‘doubly special relativity’, because they both misunderstand the physical basis of the theory and ignore the supporting evidence for its predictions.

Professor Lee Smolin explains why Lorentz invariance breaks down at, say, the Planck scale in his book The Trouble with Physics.  Simply put, in loop quantum gravity spacetime is composed of particles with some ultimate, absolute grain size, such as the Planck scale of length (a distance on the order of 10^-35 metre), which is totally independent of, and therefore in violation of, Lorentz contraction.  Hence, special relativity must break down at very small scales: the ultimate grain size is absolute.  Doubly special relativity is any scheme whereby you retain special relativity for large distance scales, but lose it for small ones.  Because you need higher energy to bring particles closer in collisions, small distance scales are for practical purposes in physics equivalent to high energy collisions.  So the failure of Lorentz invariance occurs at very small distances and at correspondingly high energy scales.
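The conflict described above is just the standard Lorentz contraction formula applied to a supposedly absolute length. A small sketch: any fixed length L, including a Planck-scale grain size, contracts to L·sqrt(1 − v²/c²) for a moving observer, so no length can be both Lorentz-covariant and absolute:

```python
import math

def contracted(L, beta):
    """Lorentz-contracted length for speed v = beta * c."""
    return L * math.sqrt(1.0 - beta**2)

# A supposedly absolute minimum length (~Planck length) would look
# different to observers at different speeds under special relativity.
L_planck = 1.6e-35   # metres, conventional Planck length
for beta in (0.0, 0.5, 0.9, 0.99):
    print(f"beta = {beta}: contracted length = {contracted(L_planck, beta):.2e} m")
```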

Doubly special relativity was applied by Giovanni Amelino-Camelia in 2001 to explain why some cosmic rays have been detected with energies exceeding the Greisen-Zatsepin-Kuzmin limit of 5 x 10^19 eV (about 8 J).  So it’s not just a case that LQG makes a speculative prediction of doubly special relativity, because there’s also experimental evidence validating it!
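The unit conversion above is worth making explicit, since 5 × 10^19 eV is a startlingly macroscopic energy for a single particle:

```python
# Convert the GZK limit from electronvolts to joules.
eV = 1.602e-19                 # joules per electronvolt
E_gzk = 5e19 * eV
print(f"GZK limit: {E_gzk:.1f} J")   # ~8 J, roughly a well-thrown baseball
```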

Actually, there are quite a lot of indications of this non-Lorentzian behaviour in quantum field theory, even at lower energies, where space does not look quite the same to all observers due to pair production phenomena. For example, on page 85 of their online Introductory Lectures on Quantum Field Theory, Professors Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo explain in http://arxiv.org/abs/hep-th/0510040:

‘In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state. The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.

‘This is precisely the situation when fields are quantized on curved backgrounds. In particular, if the background is time-dependent (as it happens in a cosmological setup or for a collapsing star) different observers will identify different vacuum states. As a consequence what one observer call the vacuum will be full of particles for a different observer. This is precisely what is behind the phenomenon of Hawking radiation.’

Sadly, some string theorists are just unable to face the facts and understand them:

‘… the rules of the so-called “doubly special relativity” (DSR) to transform the energy-momentum vectors are nothing else than the ordinary rules of special relativity translated to awkward variables that parameterize the energy and the momentum.’ – Lubos Motl, http://motls.blogspot.com/2006/02/doubly-special-relativity-is-just.html

‘… Still, I just want to say again: DSR and Lorentz violation just aren’t in any way predictions of LQG.’ – Aaron Bergman, http://scienceblogs.com/catdynamics/2007/03/strings_and_apples.php#comment-364824

Loop quantum gravity (LQG) does quantize spacetime. Smolin makes the point clearly in “The Trouble with Physics” that whatever the spin network grain size in LQG, the grains will have an absolute size scale (such as Planck scale, or whatever).

This fixed grain size contradicts Lorentz invariance, and so you have to modify special relativity to make it compatible with LQG.  Hence, DSR in some form (there are several ways of predicting Lorentz violation at small scales while preserving SR at large scales) is a general prediction of LQG.  String theorists are just looking at the mathematics and ignoring the physical basis, and then complaining that they don’t understand the need for that mathematics.  It’s clear why they have got into such difficulties themselves, theoretically.

STRING. In string theory, it is assumed that the fundamental particles are all vibrating strings of around the Planck size, and that the various possible vibration modes and frequencies determine the nature of the particle. Nobody can actually ever prove this because string theory only describes gravity with spin-2 gravitons if there are 11 dimensions, and only describes unification near the Planck scale if there are 10 dimensions (which allows supersymmetry, namely a pairing of an unobserved superpartner boson with every observed fermion, which is required to make forces unify in the stringy paradigm). The problem is the 6/7 extra dimensions required to make today’s string theory work. The final (but still incomplete in detail) framework of string theory is named M-theory after ‘membrane’, since the 10 dimensional superstring theory is a membrane on 11 dimensional supergravity, analogous to a 2-dimensional bubble surface or membrane on a 3-dimensional bubble volume; the membrane has one dimension fewer than the bulk. To account for why we don’t see the extra dimensions, 6 of them are conveniently curled up in a Calabi-Yau manifold (a massive extension of the old Kaluza-Klein unification of the 1920s, which postulated 5-dimensional spacetime, because the metric including the extra dimension could be interpreted as giving a prediction of the photon). The 6 extra dimensions in the Calabi-Yau manifold can have a ‘landscape’ consisting of as many as 10^1000 different models of particle physics as solutions. It’s now very clear that such a hopelessly vague theory is a Hollywood-hyped religion of groupthink.

LQG. In loop quantum gravity (LQG), however, one possibility (not the only possibility) is that the different particles are supposed to come from the twists of braids of spacetime (see illustration here which is based on the paper of Bilson-Thompson, Markopoulou, and Smolin). This theory also contains the speculative Planck scale, but in a different way: spacetime fabric is assumed to contain a Penrose spin network. The grain size of this spin network is assumed to be the Planck scale. However, since loop quantum gravity so far does not produce any quantitative predictions, the assumption of the Planck scale is not crucial to the theory. Loop quantum gravity is actually more scientific than string theory, because it at least explains observables using other observables, instead of explaining non-observables (spin-2 graviton and unification near the Planck scale) by way of other non-observables (extra-dimensions and supersymmetry). In loop quantum gravity, interactions occur between nodes of the spin network. The summation of all interactions is equivalent to the Feynman path integral, and the result is background independent general relativity (without a metric). The physical theory of gravity is therefore likely to be a variant or extension of loop quantum gravity, rather than anything to do with super-speculative M-theory.

Doubly Special Relativity. The problem Smolin discusses with special relativity and the Planck scale is that distance contracts in the direction of motion in special relativity. Clearly because the Planck distance scale is a fixed distance independent of velocity, special relativity cannot apply to Planck scale distances. Hence ‘doubly special relativity’ was constructed to allow normal special relativity to work as usual at large distance scales, but to break down as distance approaches the Planck scale, which does not obey the Lorentz transformation.
Because the Planck distance is related to the Planck energy (a very high energy, at which forces are assumed by many to unify), this is the same as saying that special relativity breaks down at extremely high energy. The insertion of the Planck scale (as a threshold length or maximum energy) gives rise to ‘doubly special relativity’.

It isn’t just special relativity which is incomplete.  Supersymmetry (1:1 boson to fermion correspondence for all particles in the universe, just to unify forces at the Planck scale in string theory) also needs to be abandoned because of a failure in quantum field theory. Another example of a problem of incompleteness in modern physics is that in quantum field theory there do not appear to be any proper constraints on conservation of field energy where the charge of the field is varying due to pair polarization phenomena; the correction of this problem will tell us where the energy of the various short-range fields comes from! It is easy to calculate the energy density of an electromagnetic field. Now, quantum field theory and experimental confirmation show that the effective electric charge of an electron is 7% bigger at 92 GeV than at collision energies up to and including 0.511 MeV (this latter energy corresponds to a distance of closest approach in elastic Coulomb scattering of electrons of about 10^-15 m or 1 fm, and if we can assume elastic Coulomb type scattering and ignore inelastic radiation effects, then the energy is inversely proportional to the distance of closest approach).
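The 7% figure above can be checked against the standard measured values, reading it as applying to the running coupling alpha: alpha(m_Z) ≈ 1/128 at 92 GeV versus the low-energy value 1/137.036. A minimal numerical check (the two alpha values are standard measured inputs, not derived here):

```python
# Check the "7% bigger at 92 GeV" figure using the measured running of
# the fine-structure constant alpha (standard values, not derived here).
alpha_0 = 1 / 137.036      # low-energy fine-structure constant
alpha_mZ = 1 / 128.0       # effective value at the Z mass, ~92 GeV

rise = alpha_mZ / alpha_0 - 1
print(f"alpha rises by {100 * rise:.1f}% between low energy and 92 GeV")
```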

So the increasing electric charge of the electron as you get closer to the core of the electron poses a problem for energy conservation: where is the energy? Clearly, we know the answer from Dyson’s http://arxiv.org/abs/quant-ph/0608140 page 70 and also Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040 page 85: the electric field creates observable pairs (which annihilate into radiation and so on, causing vacuum ‘loops’ as plotted in spacetime) above a threshold electric field strength of 1.3 x 10^18 V/m. This occurs at a distance on the order of 1 fm from an electron and is similar to the IR cutoff energy of 0.511 MeV Coulomb collisions in quantum field theory.
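The threshold field quoted above is the Schwinger critical field, E_c = m²c³/(eħ) for the electron; a minimal check with CODATA constants:

```python
# Schwinger critical field for electron-positron pair production,
# E_c = m_e^2 * c^3 / (e * hbar), in V/m.
m_e = 9.109e-31       # electron mass, kg
c = 2.998e8           # speed of light, m/s
e = 1.602e-19         # elementary charge, C
hbar = 1.055e-34      # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")   # ~1.3e18 V/m
```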

It is clear that stronger electric fields are attenuated by pair production and polarization of these pairs (virtual positrons being closer to the real electron core than the virtual electrons) so that they cause a radial electric field pointing the other way to the particle core’s field. As you get closer to the real electron core, there is less intervening shielding because there are fewer polarized pairs between you and the core. It’s like travelling upwards through thick cloud in an aircraft: the illumination gradually increases, simply since the amount of cloud intervening between you and the sun is diminishing.

Therefore, the pair production and polarization of vacuum loops of virtual charges are absorbing the shielded energy of the electric field out to a distance of 1 fm. The virtual charges are only limited to electrons and positrons at the lowest energy. Higher energies, corresponding to stronger electric field strengths, result in the production of heavier pairs. At a distance closer than 0.005 fm, pairs of virtual muons occur because muons have a rest mass equivalent to Coulomb scattering at 105.6 MeV energy. At still higher energies you get quark pairs forming.

It seems that by the pair production and polarization mechanisms, electromagnetic energy is being transferred into the loop energy of virtual particles.  We know experimentally that the strong force charge falls as particle collision energy increases (after the threshold energy at which the nuclear charge peaks), while the electromagnetic charge increases as particle collision energy increases.  Surely this confirms, at least qualitatively, that electromagnetic gauge boson energy is being converted (via the pair production and polarization mechanism) into nuclear force gauge bosons (pions etc. between nucleons, gluons between quarks).

If so, there is no Planck scale unification of standard model forces, because the conservation of gauge boson energy shared between all forces very near a particle core means that the fall in the strong charge is caused by the increase in the electromagnetic charge as you get closer to a particle.  If this is the mechanism for nuclear forces, then although at some energy the strong and electromagnetic charges will happen to coincide, they won’t unify, because as collision energy becomes ever higher, the electromagnetic charge will approach the bare core value.  This implies that there is no energy then being absorbed from the electromagnetic field, and so no energy available for the nuclear charge.  Thus, if this mechanism for nuclear charge is real, at extremely high energies the nuclear charge continues to fall after coinciding with the electromagnetic charge, until the nuclear charge falls to zero where the electromagnetic charge equals the bare core charge.  This discredits stringy supersymmetry, which is based on the assumption that all standard model charges merge into a superforce of one fixed charge value above the grand unification energy.  This supersymmetry is just speculative rubbish, and is disproved by the mechanism.

This mechanism is not speculative: it is based entirely on the accepted, experimentally verified, picture of vacuum polarization shielding the core charge of an electron, plus the empirically based concept that the energy of an electromagnetic field is conserved.

Something has to happen to the field energy lost via charge renormalization.  We know what the energy is used for: pair production of ever more massive (nuclear) particle loops in spacetime.  These virtual particles mediate nuclear forces.

It should be noted, however, that although you get ever more massive particles being created closer and closer to a fundamental charged particle due to pair production in the intense electric field, the pairs do not cause divergent (ever increasing, instead of decreasing) energy problems for two reasons. Firstly, Heisenberg’s uncertainty principle limits the time that a pair of virtual charges can last: this time is inversely proportional to the energy of the pair. Hence, loops of ever more massive virtual particles closer to a real particle core exist for shorter and shorter intervals of time before they annihilate back into the gauge boson energy of the electromagnetic field. Secondly, there is an upper energy limit (called the UV cutoff) corresponding physically to the coarseness of the background quantum nature of spacetime: observable pairs result as strong electric field energy breaks up the quantized spacetime fabric. The quantized spacetime fabric has a limit to how many particles you will find in a given spatial volume. If you look in a volume too small (smaller than the size of the grain in quantized spacetime) you won’t find anything. So although the mathematical differential equations of quantum field theory show an increasingly strong field creates increasingly high energy pairs, this breaks down at very short distances where there simply aren’t any particles because the spacetime is too small spatially to accommodate them:
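The first of the two reasons above, the uncertainty-principle lifetime, can be sketched numerically: a virtual pair of total rest energy E persists for roughly t ~ ħ/E, so heavier loops live for shorter times. This is an order-of-magnitude sketch only (factors of 2 and π are ignored):

```python
# Order-of-magnitude lifetime t ~ hbar / E for virtual pairs, showing
# that heavier pairs (muons) last for less time than lighter ones.
hbar = 1.055e-34      # reduced Planck constant, J s
eV = 1.602e-19        # joules per electronvolt

for name, rest_MeV in [("e+e- pair", 2 * 0.511), ("mu+mu- pair", 2 * 105.7)]:
    E = rest_MeV * 1e6 * eV           # total rest energy of the pair, J
    t = hbar / E                      # rough lifetime, s
    print(f"{name}: E = {rest_MeV:.1f} MeV, t ~ {t:.1e} s")
```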

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

This physical explanation of the cutoffs (i.e., of the renormalization of charge in quantum field theory) was championed by the Nobel Laureate Kenneth Wilson, as Professor John Baez explains, paraphrasing Peskin and Schroeder:

‘In Chapter 10 we took the philosophy that the distance cutoff D should be disposed of by taking the limit D -> 0 as quickly as possible. We found that this limit gives well-defined predictions only if the Lagrangian contains no coupling constants with dimensions of length^d with d > 0. From this viewpoint, it seemed exceedingly fortunate that quantum electrodynamics, for example, contained no such coupling constants since otherwise this theory would not yield well-defined predictions.

‘Wilson’s analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson’s arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.’

I have an extensive discussion of the causal physics behind the mathematics of quantum field theory here (also see later posts and this new domain), but the point I want to make here concerns unification.  To me, it is entirely logical that the long-range electromagnetic and gravity forces are classical in nature beyond the IR cutoff (i.e., for scattering energies below those required for pair production, or distances from particles of more than 1 fm).  At such long distances, there are no pair-production (annihilation-creation) loops in spacetime (see this blog post for a full discussion).  All this implies that the nature of any ‘final theory’ of everything will be causal, with, for example:

quantum mechanics = classical physics + mechanisms for chaos.

My understanding is that in any orbital system where the orbiting masses are fairly similar (i.e., within an order of magnitude) to each other and to the central mass, classical orbitals disappear and you have chaos.  Hence you might describe the probability of finding a given planet at some distance by some kind of Schroedinger equation.  I think this is a major problem with classical physics; it works only because the planets are all far, far smaller in mass than the sun.  In an atom, electric charge is the equivalent of gravitational mass, so the atom is entirely different from the simplicity of the solar system: the fairly similar charges on electrons and nuclei mean that it is going to be chaotic if you have more than one electron in orbit.

There are other issues with classical physics which are clearly just down to missing physics.  For example, the randomly occurring loops of virtual charges in the strong field around an electron will, when the electron is seen on a small scale, cause the path of the electron to be erratic, by analogy to drunkard’s-walk Brownian motion: the motion of a pollen grain being affected by the random impacts of air molecules.  So: quantum mechanics = classical physics + mechanisms for chaos.  Another mechanism for chaos is Yang-Mills exchange radiation.  Within 1 fm of an electron, the Yang-Mills radiation-caused electric field is so strong that the gauge bosons of electromagnetism, photons, produce short-lived spacetime loops of virtual charges in the vacuum, which quickly annihilate back into gauge bosons.
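The drunkard’s-walk analogy above has a definite statistical signature: after N random kicks, the net displacement grows as the square root of N rather than linearly.  A minimal sketch (step sizes and counts are arbitrary choices for illustration):

```python
import math
import random

# Drunkard's-walk (Brownian motion) sketch: a particle receiving N
# random unit kicks wanders an RMS distance of about sqrt(N), not N.
# Random impacts jiggle the path without producing systematic drift.

random.seed(42)
N_STEPS = 400
N_WALKERS = 5000

total_sq = 0.0
for _ in range(N_WALKERS):
    x = 0
    for _ in range(N_STEPS):
        x += random.choice((-1, 1))   # one random kick, left or right
    total_sq += x * x

rms = math.sqrt(total_sq / N_WALKERS)
print(f"RMS displacement after {N_STEPS} kicks: {rms:.1f} "
      f"(sqrt(N) = {math.sqrt(N_STEPS):.1f})")
```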

But at greater distances, they lack the energy to polarize the vacuum, so the majority of the vacuum (i.e., the vacuum beyond about 1 fm distance from any real fundamental particle) is just a classical-type continuum of exchange radiation which does not involve any chaotic loops at all.

This is partly why general relativity works so well on large scales (quite apart from the fact that planets have small masses compared to the sun): there really is an Einstein-type classical field, a continuum, outside the IR cutoff of QFT.

Of course, on small scales, this exchange of gauge boson radiation causes the weird interference you get in the double-slit experiment, the path-integral effect, where a particle seems to be affected by every possible route it could take.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.
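Feynman’s ‘core of nearby paths’ can be checked numerically.  For reflection in a mirror, the phase of each bounce path varies slowly near the classical reflection point and rapidly near the mirror’s edge, so only the central paths add up coherently.  A minimal sketch with arbitrary geometry and wavelength, chosen purely for illustration:

```python
import math

# Source A and detector D sit above a mirror lying along the x-axis.
# A path bounces off the mirror at point (x, 0); its phase is k * length.
A = (-1.0, 0.5)           # source position (arbitrary units)
D = (1.0, 0.5)            # detector position
K = 2 * math.pi / 0.01    # wavenumber for wavelength 0.01 units

def path_phase(x):
    la = math.hypot(x - A[0], A[1])   # leg: A -> mirror point
    ld = math.hypot(D[0] - x, D[1])   # leg: mirror point -> D
    return K * (la + ld)

dx = 0.001
# Phase change between neighbouring paths, near the centre vs near an edge:
step_centre = abs(path_phase(dx) - path_phase(0.0))
step_edge = abs(path_phase(0.9 + dx) - path_phase(0.9))
print(f"phase step near centre: {step_centre:.5f} rad")
print(f"phase step near edge:   {step_edge:.5f} rad")
# Near the classical reflection point (x = 0) neighbouring paths are
# almost in phase and reinforce; near the edge they differ by a large
# fraction of a cycle and cancel.
```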

The solar system would be as chaotic as a multi-electron atom if the gravitational charges (masses) of the planets were all the same (as for electrons) and if the sum of the planetary masses equalled the sun’s mass (just as the sum of electron charges is equal to the electric charge of the nucleus).  This is the 3+ body problem of classical mechanics:
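The sensitivity of three comparable gravitating bodies can be demonstrated directly.  Below is a sketch of the classic ‘Pythagorean’ three-body configuration (masses 3, 4, 5 starting at rest), integrated twice with initial positions differing by one part in 10^8; the two runs diverge by many orders of magnitude.  Units with G = 1 and a small force softening (to keep the fixed-step integrator stable during close encounters) are my own illustrative choices:

```python
import math

# Chaos in the classical three-body problem: two runs of the same system
# whose initial conditions differ by 1e-8 separate rapidly.

G = 1.0
SOFT2 = 0.02 ** 2   # softening length squared (numerical safeguard)
DT = 0.0005
STEPS = 40000       # total time = 20 units

def accelerations(masses, pos):
    acc = [[0.0, 0.0] for _ in masses]
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + SOFT2
            f = G * masses[j] / (r2 * math.sqrt(r2))
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

def run(perturb):
    masses = [3.0, 4.0, 5.0]
    pos = [[1.0 + perturb, 3.0], [-2.0, -1.0], [1.0, -1.0]]
    vel = [[0.0, 0.0] for _ in masses]
    acc = accelerations(masses, pos)
    for _ in range(STEPS):                       # leapfrog: kick-drift-kick
        for i in range(3):
            vel[i][0] += 0.5 * DT * acc[i][0]
            vel[i][1] += 0.5 * DT * acc[i][1]
            pos[i][0] += DT * vel[i][0]
            pos[i][1] += DT * vel[i][1]
        acc = accelerations(masses, pos)
        for i in range(3):
            vel[i][0] += 0.5 * DT * acc[i][0]
            vel[i][1] += 0.5 * DT * acc[i][1]
    return pos

p1 = run(0.0)
p2 = run(1e-8)
sep = math.sqrt(sum((a - b) ** 2
                    for r1, r2 in zip(p1, p2)
                    for a, b in zip(r1, r2)))
print(f"trajectory separation after t = 20: {sep:.3e} (initial: 1e-08)")
```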

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

Obviously Bohr did not know anything about this chaos in classical systems when coming up with the complementarity and correspondence principles in the Copenhagen Interpretation. Nor did even David Bohm, who sought the Holy Grail of a potential which becomes deterministic at large scales and chaotic (due to hidden variables) at small scales.

What is interesting is that, if chaos does produce the statistical effects for multi-body phenomena (atoms with a nucleus and at least two electrons), what produces the interference/chaotic statistically describable (Schroedinger equation model) phenomena when a single photon has a choice of two slits, or when a single electron orbits a proton in hydrogen?

Quantum field theory phenomena obviously contribute to quantum chaotic effects. The loops of charges spontaneously and randomly appearing around a fermion between IR – UV cutoffs could cause chaotic deflections on the motion of even a single orbital electron:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.] … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Yang-Mills exchange radiation is what constitutes electromagnetic fields, both of the electrons in the screen containing the double slits, and also the electromagnetic fields of the actual photon of light itself. Again, consider the remarks of Feynman quoted earlier:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.