The physics of quantum field theory

Quantum field theory is the most successful physical theory ever, encompassing all known nuclear interactions and electromagnetism, and it has many more successful predictions and experimental tests than general relativity, so it is apparent that general relativity needs modification to accommodate quantum field theory, rather than the other way around.  [General relativity requires a stress-energy tensor as the source of the gravitational field, and that source is represented by continuous differential equations rather than by discontinuous (lumpy) quantized matter.  So the smooth curvature which results from general relativity (a continuous, smooth acceleration curve on a Feynman diagram) is a product of the approximation used to statistically average the gravity field source, instead of properly representing the lumps.  There are other reasons why general relativity is just a flawed – classical – approximation as well.  For instance, in addition to falsely assuming that mass is smoothly distributed in space instead of coming in lumps, general relativity simply ignores any possibility of quantized field quanta, gravitons.]  Quantum field theory has also been successfully applied to explain superfluid properties, because in condensed matter physics (low temperature phenomena generally) pairs of half-integer spin fermions can associate to produce composite particles that have the properties of integer spin particles, bosons. 

In 1925, Max Born and Pascual Jordan recognised that a quantum transition, such as the fall of an electron from an excited to a ground state (accompanied by the emission of a photon), is a complicated problem because the number of particles changes (a photon is created or is absorbed).  Classical Maxwellian electrodynamics does describe radiation emission due to acceleration of charge, but does not explain why radiation is quantized.

The quantum theory of Planck, and Bohr’s atomic model, deal with specific problems (blackbody radiation spectra and line spectra, respectively), but are not general theories.

It is now recognised that the correct explanation of quantum electrodynamics lies in Yang-Mills quantum field theory, in which exchange of radiation (as depicted by Feynman diagrams) is the underlying mechanism.  We will come back to this later in the post.

In 1926, Werner Heisenberg, together with Born and Jordan, developed a quantum theory of electromagnetism by a process called canonical quantization, whereby quanta are treated as separate oscillators of given frequency.  Their treatment neglected polarization and charge, and was inconsistent with relativity considerations.  Jordan in 1927 employed a second quantization to include charges and thus quantum mechanics, while Dirac discovered a Hamiltonian for Schroedinger’s time-dependent equation which is consistent with relativity.

Schroedinger’s time-dependent equation is essentially saying the same thing as this electromagnetic energy mechanism of Maxwell’s ‘displacement current’: Hψ = iħ.dψ/dt = (½ih/π).dψ/dt, where ħ = h/(2π). The energy flow is directly proportional to the rate of change of the wavefunction. This is identical to the classical Maxwell ‘displacement current’ term, which states that the rate of flow of energy (via virtual ‘displacement current’) across the vacuum in a capacitor or radio system is directly proportional to the rate of change of the electric field!  (Update: a simple explanation of this real electric field energy-transfer mechanism can be found here.)  Maxwell’s displacement current is i = dD/dt = ε.dE/dt.  In a charging capacitor, the displacement current falls as a function of time as the capacitor charges up.  The solution for the fall of ‘displacement current’ flow across the vacuum (and through the circuit) as the capacitor charges up to its maximum capacity is: i_t = i_0 e^(-t/RC).  This energy-based solution is exponential, just like the solution to Schroedinger’s equation: ψ_t = ψ_0 exp[-2πiH(t – t_0)/h].
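As a rough numerical illustration of the analogy above (not part of the original argument), the following Python sketch evaluates both exponential solutions side by side. The circuit values R, C, the initial current, and the energy E used for the wavefunction phase are arbitrary assumed numbers chosen only for illustration; note that the Schroedinger solution is a complex exponential of constant modulus (a rotating phase), while the capacitor current is a decaying real exponential.

import numpy as np

hbar = 1.054571817e-34        # reduced Planck constant, J s
R, C = 1.0e3, 1.0e-6          # assumed resistance (ohm) and capacitance (farad)
i0 = 1.0e-3                   # assumed initial displacement current, amperes
E = 13.6 * 1.602176634e-19    # assumed energy scale (hydrogen ground state), joules

t = np.linspace(0.0, 5 * R * C, 6)
i_displacement = i0 * np.exp(-t / (R * C))   # falling real exponential, i_t = i_0 e^(-t/RC)
psi_phase = np.exp(-1j * E * t / hbar)       # Schroedinger phase factor for an energy eigenstate

for tk, ik, pk in zip(t, i_displacement, psi_phase):
    print(f"t = {tk:.2e} s   i = {ik:.3e} A   psi phase angle = {np.angle(pk):+.3f} rad")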

The non-relativistic hamiltonian is defined as:

H = ½p²/m.

However it is of interest that the ‘special relativity’ prediction of

H = [(mc²)² + p²c²]^(1/2),

was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

-ħ²d²ψ/dt² = [(mc²)² + p²c²]ψ.

While this is physically correct, it deals only with second-order variations of the wavefunction, being second order in time rather than first order. Dirac’s equation simply makes the time-dependent Schroedinger equation (Hψ = iħ.dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = αpc + βmc²,

where p is the momentum operator. The values that the constants α and β can take are represented by 4 x 4 = 16 component matrices, and the corresponding four-component wavefunction is called the Dirac ‘spinor’.  This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates).  The justification for Dirac’s equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:
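The 4 x 4 matrix structure mentioned above can be checked directly. The short sketch below (my own numerical check, not taken from the post’s references) builds α and β in the standard Dirac representation and verifies the anticommutation relations which guarantee that H = αpc + βmc² squares to (pc)² + (mc²)², so that the energy formula which follows is an immediate consequence.

import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]]),          # sigma_x
         np.array([[0, -1j], [1j, 0]]),       # sigma_y
         np.array([[1, 0], [0, -1]])]         # sigma_z

beta = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
alpha = [np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for s in sigma]

def anticommutator(a, b):
    return a @ b + b @ a

I4 = np.eye(4)
assert np.allclose(beta @ beta, I4)                                 # beta^2 = 1
for i in range(3):
    assert np.allclose(alpha[i] @ alpha[i], I4)                     # alpha_i^2 = 1
    assert np.allclose(anticommutator(alpha[i], beta), np.zeros((4, 4)))
    for j in range(i + 1, 3):
        assert np.allclose(anticommutator(alpha[i], alpha[j]), np.zeros((4, 4)))
print("Dirac alpha_i and beta satisfy the required 4x4 anticommutation relations")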

E = ±[(mc²)² + p²c²]^(1/2).

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions! The electron is spin-1/2, so it has a spin of only half the amount of a spin-1 particle, which means that the electron must rotate 720 degrees (not 360 degrees!) to undergo one complete revolution, like a Mobius strip (a strip of paper with a twist before the ends are glued together, so that there is only one surface and you can draw a continuous line around that surface which is twice the length of the strip, i.e. you need 720 degrees of turning to return to the beginning!). Since the spin of the electron generates its intrinsic magnetic moment, the spin affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that ‘a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum’, a fact discovered by Dirac the day after he found his equation (see: A. Zee, Quantum Field Theory in a Nutshell, Princeton University Press, 2003, pp. 177-8). The other two solutions become obvious when considering the case of p = 0, for then E = ±mc².  This equation proves the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc². The fact that in reality E = ±mc² proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity.  E = ±mc² allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle, such as an electron and a positron).
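A quick way to see the 720-degree property mentioned above is to apply the spin-1/2 rotation operator exp(-iθσ_z/2) numerically: a 360-degree rotation returns minus the identity (the state picks up a sign change), and only a 720-degree rotation restores the identity. This is just an illustrative check, not a derivation.

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation_z(theta):
    # exp(-i*theta*sigma_z/2), evaluated directly since sigma_z is diagonal
    return np.diag(np.exp(-1j * theta * np.diag(sigma_z) / 2))

print(np.round(rotation_z(2 * np.pi), 6))   # minus the identity: 360 degrees flips the sign
print(np.round(rotation_z(4 * np.pi), 6))   # plus the identity: 720 degrees restores the state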

Later it was discovered that Dirac’s prediction of the magnetic moment of the electron was low by a factor of about 1.00116, and this small error was resolved by Julian Schwinger in 1948. This arises as follows. Dirac’s calculation (which accounts for 99.88% of the magnetic moment of the electron) corresponds to a virtual photon from the external magnetic field interacting directly with the electron. Schwinger’s correction takes account of another possibility which sometimes occurs, and which results in a stronger magnetic moment, statistically increasing the overall measured value of the magnetic moment. The actual possibility which Schwinger’s calculation considers is that the electron happens to emit a virtual photon just before interacting with the virtual photon from the external magnetic field. This increases its magnetic moment briefly. Then, after it has interacted with the external magnetic field, the electron reabsorbs the virtual photon it emitted earlier. There are various other possibilities which also affect the magnetic moment slightly. For instance, pair production of virtual fermions can occur in between interactions of the electron with the magnetic field. Feynman diagrams are needed to draw the qualitative interaction possibilities, and then the contributions of these various interaction possibilities can be worked out using the rules of quantum electrodynamics. However, there are an infinite number of possibilities, and in low-energy situations like magnetic moment calculations, only a few diagrams need to be calculated to give very accurate answers. For example, the simplest interaction or first Feynman diagram (given by Dirac’s calculation) predicts the magnetic moment of the electron accurately to 3 significant figures, 1.00 Bohr magnetons, and when the first virtual particle correction or second Feynman diagram (Schwinger’s calculation) is included, the accuracy is then 6 significant figures, 1 + 1/(2π × 137.036) = 1.00116 Bohr magnetons. Therefore, the contribution of complex interactions of virtual particles to the magnetic moment of the electron is trivial: the vast majority of interactions between the electron and the magnetic field are very simple in nature! The calculation of Schwinger’s correction to the magnetic moment of the electron is neatly summarised on pages 179-81 of Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003).
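The arithmetic of the two figures quoted above is trivial to verify; the snippet below just evaluates the Dirac term plus Schwinger’s α/(2π) correction, using 1/137.036 for the fine-structure constant.

import math

alpha = 1 / 137.036                      # fine-structure constant (low-energy value)
schwinger = 1 + alpha / (2 * math.pi)    # first two terms of the QED series for the moment
print(f"magnetic moment ~ {schwinger:.6f} Bohr magnetons")   # prints ~1.001161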

It became clear in the work of Schwinger and Feynman on quantum electrodynamics that the electron does not have a fixed electric charge under all conditions: if you approach an electron very closely (such as in a high-energy collision above about 1 MeV energy or so), the observable electric charge of the electron increases above the value measured at long distances or in weaker collisions. This is called a ‘running coupling’, since the charge is treated as a gauge theory ‘coupling constant’ which determines how strongly virtual photons interact with electric charges in quantum electrodynamics. The charge is not strictly a ‘coupling constant’, but its value ‘runs’ with the energy of the collision above some threshold. Just as light is only visible between infrared and ultraviolet cutoff energies, the coupling constants only vary between two extremes: an infrared cutoff (typically 1 MeV or so, corresponding to a collision energy which brings particles just within the radius of the charge where the electric field strength is high enough to create pairs of virtual charges) and an ultraviolet cutoff (which is supposed to exist at the unification energy, where the couplings for the different types of fundamental forces are supposed to converge to a fixed value, often placed at the Planck scale, whose length is taken as the grain-size of the vacuum, i.e. the minimum length corresponding to anything physical).
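To put a rough number on how slowly the coupling ‘runs’, here is a minimal sketch using the standard one-loop QED leading-log formula with only the electron-positron loop included. This simplification is mine: keeping just the electron loop gives only about a 2% rise in the coupling by 91 GeV, and the larger figure quoted in the next paragraph arises when the loops of all charged fermions (muons, taus, quarks) are included.

import math

alpha0 = 1 / 137.036          # low-energy (IR) value of the fine-structure constant
m_e = 0.000511                # electron rest mass-energy, GeV

def alpha_running(Q_GeV):
    """One-loop QED running with the electron loop only (illustrative approximation)."""
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q_GeV / m_e))

for Q in (0.001, 1.0, 91.0):
    print(f"Q = {Q:>6} GeV   alpha(Q)/alpha(0) = {alpha_running(Q) / alpha0:.4f}")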

The physical explanation for this running coupling or increase of observed electric charge on small distance scales (high collision energies) is pair production, which Schwinger showed requires a threshold steady electric field strength of at least m²c³/(eħ) = 1.3×10^18 volts/metre; the Coulomb field of an electron exceeds this threshold only out to a radius of about 33 fm (femtometres). (Equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040.) So in low energy collisions where the colliding particles encounter field strengths of less than 1.3×10^18 volts/metre, no pair production occurs and the electric charge of the electron is the normal value given in databooks. But at higher energies, it starts to increase because of the production of virtual loops (in spacetime) of positron plus electron pairs, which soon annihilate back into virtual photons of the electric field, in an endless cycle of pair production followed by annihilation, more pair production, and so on. The effect of this pair production of virtual fermions (virtual electrons and positrons at low energy, but virtual muon and anti-muon pairs and other particles appear at higher energies) is that while they briefly exist before annihilation, they become ‘polarized’ by the electric field in which they are situated (and from which they were created). In other words, the virtual positron on average tends to approach (due to Coulomb attraction) slightly closer to the real electron core than the virtual electron (due to Coulomb repulsion). This causes the virtual fermions to create a net radial electric field which opposes, and therefore partially cancels out, the radial electric field from the electron core. The electron’s core electric field lines (always arrowed from positive towards negative charge) point in towards the middle, while the radial electric field lines from the polarization of virtual pairs of fermions around the core (and out to the 33 fm radius of an electron or other unit charge) point outwards. This results in screening or shielding, reducing the observable electric charge at long distances, and increasing the observable charge when you penetrate into the shielding zone within 33 fm (as with an aircraft flying upwards into the clouds: the less shielding cloud there is in between you and the light or charge source, the stronger the light or charge that you can see!). The equation for the running coupling or variation in effective electric charge shows that it depends on the logarithm of the collision energy, varying slowly. Electron scattering experiments have validated this effect up to 91 GeV collisions, where the electron has an effective electric charge which is 7% higher than the measured charge of the electron at low energy.
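The two numbers quoted above (the threshold field strength and the 33 fm radius) can be checked directly from the formula given, assuming standard CODATA values for the constants; the following short calculation is just such a check.

import math

m_e  = 9.1093837015e-31     # electron mass, kg
c    = 2.99792458e8         # speed of light, m/s
e    = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34      # reduced Planck constant, J s
k_e  = 8.9875517923e9       # Coulomb constant 1/(4*pi*eps0), N m^2/C^2

E_crit = m_e**2 * c**3 / (e * hbar)      # Schwinger threshold field, ~1.3e18 V/m
r_crit = math.sqrt(k_e * e / E_crit)     # radius where a unit charge's Coulomb field equals E_crit

print(f"Schwinger critical field ~ {E_crit:.2e} V/m")
print(f"Coulomb field of one electron exceeds it out to r ~ {r_crit * 1e15:.1f} fm")   # ~33 fm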

*************************
Some material should be inserted here to explain:

(a) the tensor form of the Maxwell equations (see sections 2.8 and 2.9 of Ryder’s Quantum Field Theory, 2nd ed., Cambridge University Press, 1996, pp. 64-76),

(b) the Lagrangian formulation of particle physics and its relationship to Noether’s theorem of symmetries and Weyl’s gauge theory, including Abelian and Yang-Mills theories (see sections 3.1-3.6 of chapter 3 of Ryder’s Quantum Field Theory, 2nd ed., Cambridge University Press, 1996, pp. 80-112),

(c) the path integral formulation of quantum field theory. Feynman found that the amplitude for any given history is proportional to e^(iS/ħ), and that the probability is proportional to the square of the modulus (positive value) of e^(iS/ħ). Here, S is the action for the history under consideration, which depends on the Lagrangian. Integrating this exponential over all possible histories gives the Feynman path integral. The integral can be expanded into a series with an infinite number of terms, called the perturbative expansion. Each term in this perturbative expansion corresponds to a Feynman diagram of increasing complexity. It should be noted that because the mathematics of the calculus deal with continuous variables, it isn’t a perfect model for reality, although it makes useful predictions. E.g., when 100 radioactive atoms are decaying, the exponential decay law tells you that after 8 half lives there will be 0.390625 of an atom remaining. The law gives a continuous number as output, because the exponential formula comes from the use of calculus in the underlying theory. It is quite wrong, because the number of radioactive atoms remaining is never anything but an integer. The prediction that of 100 atoms 0.390625 of an atom will remain after 8 half lives is often re-interpreted as a prediction that the probability of 1 atom being left will be 0.390625 at that time. However, a prediction of probability is quite different from a realistic model. If you want a prediction that looks realistic, you need one which produces integer results for discrete, quantized phenomena. Otherwise the maths is classical, rather than fully compatible with quantum field phenomena. For some introductory material on Feynman path integrals, see chapter 1 of Zee’s Quantum Field Theory in a Nutshell, http://press.princeton.edu/chapters/s7573.pdf
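As a concrete illustration of the discrete-versus-continuous point in (c) above, the following sketch compares the continuous exponential prediction (0.390625 of an atom surviving from 100 after 8 half-lives) with a simple Monte Carlo in which each atom either survives or decays; the random seed and trial count are arbitrary choices for this example.

import random

random.seed(1)               # assumed seed, purely for repeatability of this sketch
half_lives = 8
continuous_prediction = 100 * 0.5 ** half_lives
print(f"continuous exponential prediction: {continuous_prediction} atoms")

def surviving_atoms(n_atoms=100, n_half_lives=8):
    """Each atom independently survives one half-life with probability 1/2."""
    count = n_atoms
    for _ in range(n_half_lives):
        count = sum(1 for _ in range(count) if random.random() < 0.5)
    return count

trials = [surviving_atoms() for _ in range(10000)]
print(f"example discrete outcomes: {trials[:10]}")                   # always integers
print(f"mean over 10000 trials: {sum(trials) / len(trials):.3f}")     # close to 0.390625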

See also: Quantum Field Theory Resources:

http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040v2.pdf is very useful for beginners, as is http://arxiv.org/PS_cache/hep-th/pdf/9803/9803075v2.pdf, http://arxiv.org/PS_cache/hep-th/pdf/9912/9912205v3.pdf, and http://arxiv.org/PS_cache/quant-ph/pdf/0608/0608140v1.pdf

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, TTWP, 2006, p. 307).

Mainstream frontier fundamental physics, string theory, isn’t scientific because it can’t ever predict real, quantitative, checkable phenomena, since 10-dimensional superstring and 11-dimensional supergravity yield 10^500 ‘possibilities’ based not on observed gravity and particle physics facts, but merely on other unobserved, guesswork speculations – namely, unobserved Planck scale unification and unobserved spin-2 gravitons. Even the AdS/CFT correspondence conjecture in string theory is physically empty, since AdS (anti de Sitter space) requires a negative cosmological constant. So you can’t evaluate the conformal field theory (CFT) of particles with AdS, because AdS isn’t real spacetime!

‘Science n. The observation, identification, description, experimental investigation, and theoretical explanation of phenomena.’ – www.answers.com

Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory, which has been actively promoted in the media since 1985 despite opposition from people like Feynman because it fails to predict anything. Despite endless hype, string theory is now in a state called ‘not even wrong’, which is less objective than the wrong theories of caloric, phlogiston, aether, flat earth, and epicycles, which were theories that tried to model some observed phenomena of heat, combustion, electromagnetism, geography, and astronomy.


 

String theory fails because it postulates that 6 dimensions are compactified into unobservably small manifolds in particles; these 6 unobservable dimensions need about 100 parameters to describe them, and it turns out that there are 10^500 or more configurations possible, each describing a different set of particles (different particles within any set arise from the different possible vibration modes or resonances of a given string). This makes it the vaguest, least falsifiable mainstream speculation ever: to make genuine predictions, the state of the extra unobserved 6 dimensions must be known, which means either building a particle accelerator the size of the galaxy and scattering particles to reveal their Planck scale nature, or eliminating the false 10^500 guesses, which would take billions of years with supercomputers. But there is some experimental evidence that key stringy assumptions, e.g., spin-2 gravitons and supersymmetry, are false.

For supersymmetry, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly, 10-15% too high, when the experimental data is accurate to a standard deviation of about 3%. Supersymmetry is also a disaster for increasing the number of Standard Model parameters (coupling constants, masses, mixing angles, etc.) from 19 in the empirically based Standard Model to at least 125 parameters (mostly unknown!) for supersymmetry. Supersymmetry in string theory is 10 dimensional and involves a massive supersymmetric boson as a partner for every observed fermion, just in order to make the three Standard Model forces unify at the Planck scale (which is falsely assumed to be the grain size of the vacuum just because it was the smallest size dimensional analysis gave before the electron mass was known; the black hole radius for an electron is far smaller than the Planck size).


At first glance, this 10-dimensional superstring theory for supersymmetry contradicts the 11-dimensional supergravity ideas, but this 10/11 dimensional issue was conveniently explained or excused by Dr Witten in his 1995 M-theory, which shows that you can make the case that 10-dimensional superstrings are a brane (a kind of extra-dimensional equivalent of a surface) on 11-dimensional supergravity, similarly to how an n – 1 = 2 dimensional area is a surface (or mem-brane) on an n = 3 dimensional object (or bulk). 11-dimensional supergravity arises from the old Kaluza-Klein idea, which was debunked and corrected by Lunsford in a peer-reviewed, published paper (see International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pp. 161-177, for publication details, and here for a downloadable PDF file); that paper was immediately censored from arXiv, which seems to be partly influenced in the relevant sections by a string professor at the University of Texas, Austin.


On the speculative nature of conjectures concerning spin-2 (attractive or ‘suck’) gravitons, Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has not been observed. Renormalization works in the standard model (for electromagnetic, weak nuclear and strong nuclear charges) because the gauge bosons which mediate force do not interact with themselves to create massive problems. This is not the case with the spin-2 gravitons in general. Spin-2 gravitons, because they have energy, should according to general relativity, themselves be sources for gravity on account of their energy, and should therefore themselves emit gravitons, which usually makes the renormalization technique ineffective for quantum gravity. String theory is supposed to dispense with renormalization problems because strings are not point particles but of Planck-length. The mainstream 11-dimensional supergravity theory includes a superpartner to the unobserved spin-2 graviton, called the spin-3/2 gravitino, which is just as unobserved and non-falsifiable as the spin-2 graviton. The reason is that this supersymmetric scheme gets rid of problems which the spin-2 graviton idea would lead to at unobservably high energy where gravity is speculated to unify with other forces into a single superforce.

So a supersymmetric partner for the spin-2 attractive graviton is postulated in mainstream supergravity to make the spin-2 graviton theory work by cancelling out the unwanted effects of the grand unified theory speculations. Hence, you have to add extra speculations to spin-2 gravitons just to cancel out the inconsistencies in the original speculation that all forces should have equal coupling constants (relative charges) at unobservably high energy. The inventing of new uncheckable speculations to cover up inconsistencies in old uncheckable speculations is not new. (It is reminiscent of the proud Emperor who used invisible cloaks to try to cover up his gullibility and shame, at the end of an 1837 Hans Christian Andersen fairytale.) There is no experimental justification for the speculative mainstream spin-2 graviton scheme, nor any way to check it, which is discussed in detail here (discussion of alleged reason for spin-2 gravitons) and here (the stringy landscape of 10^500 spin-2 attractive graviton theories really do suck in more ways than one; spin-1 gravitons avert the normal problems of quantum gravity, and make proper predictions without inconsistencies).


String theory predictions are not analogous to Wolfgang Pauli’s prediction of neutrinos, which was indicated by the solid experimentally-based physical facts of energy conservation and the mean beta particle energy being only about 30% of the total mass-energy lost per typical beta decay event: Pauli made a checkable prediction, Fermi developed the beta decay theory and then invented the nuclear reactor which produced enough decay in the radioactive waste to provide a strong source of neutrinos (actually antineutrinos) which tested the theory because conservation principles had made precise predictions in advance, unlike string theory’s ‘heads I win, tails you lose’ political-type, fiddled, endlessly adjustable, never-falsifiable pseudo-‘predictions’. Contrary to false propaganda from certain incompetent string ‘defenders’, Pauli correctly predicted that neutrinos are experimentally checkable, in a 4 December 1930 letter to experimentalists: ‘… Dear Radioactives, test and judge.’ (See footnote on p12 of this reference.)


‘The one thing the journals do provide which the preprint database does not is the peer-review process. The main thing the journals are selling is the fact that what they publish has supposedly been carefully vetted by experts. The Bogdanov story shows that, at least for papers in quantum gravity in some journals [including the U.K. Institute of Physics journal Classical and Quantum Gravity], this vetting is no longer worth much. … Why did referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don’t understand things.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 223.


‘Worst of all, superstring theory does not follow as a logical consequence of some appealing set of hypotheses about nature. Why, you may ask, do the string theorists insist that space is nine dimensional? Simply because string theory doesn’t make sense in any other kind of space.’ – Nobel Laureate Sheldon Glashow (quoted by Dr Peter Woit in Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, Jonathan Cape, London, 2006, p181).


‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft (quoted by Dr Peter Woit in Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, Jonathan Cape, London, 2006, p181).


‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195. (Quotation courtesy of Tony Smith.)

Tony Smith’s CERN document server paper, EXT-2004-031, uses the Lie algebra E6 to avoid 1-1 boson-fermion supersymmetry: ‘As usually formulated string theory works in 26 dimensions, but deals only with bosons … Superstring theory as usually formulated introduces fermions through a 1-1 supersymmetry between fermions and bosons, resulting in a reduction of spacetime dimensions from 26 to 10. The purpose of this paper is to construct … using the structure of E6 to build a string theory without 1-1 supersymmetry that nevertheless describes gravity and the Standard Model…’ However, that research was censored off arXiv, apparently because mainstream string theorists are bigoted against 26-dimensional ideas since 10/11-dimensional M-theory was discovered in 1995. They don’t exactly encourage alternatives, even within the general framework of string theory (26-dimensional bosonic string theory is similar to 10-dimensional superstring theory in having a 2-dimensional spacetime worldsheet; the difference is that conformal field theory requires 24 dimensions in the absence of supersymmetry and 8 dimensions if there is supersymmetry).

Worse, attempts to explain observed particle physics with string theory result in 10^500 or more different vacuum states, each with its own set of particle physics. 10^500 solutions is so many that it eliminates falsifiability from string theory. This large number of solutions is named the ‘cosmic landscape’ because Professor Susskind claims that each solution exists in a different parallel universe, and when you plot the resulting vacuum ‘cosmological constants’ as a function of two variables, in string theory, you produce a landscape-like three dimensional graph. The reason for the immense ‘cosmic landscape’ is the fact that string theories only ‘work’ (i.e., satisfy the basic criteria for conformal field theory, CFT) in 10 or more dimensions, so the unobserved dimensions have to be ‘compactified’ by a Calabi-Yau manifold, which – conveniently – curls up the extra dimensions into a small volume, explaining why nobody has ever observed any of them. In superstring theory, two dimensions (one space and one time) form a ‘worldsheet’ and another eight are required for the CFT of supersymmetric particle physics. Sadly, the Calabi-Yau manifold has many parameters (or moduli) describing the size and shape of those unobserved conjectured extra dimensions, which must have unknown values (since we can’t observe them), so it is the immense number of possible combinations of these unknown parameters which makes string theory fail to produce specific results, by producing too many results to ever rigorously evaluate, even given a supercomputer running for the age of the universe. The 10^500 figure might not be right: the true figure might be infinity. String theory results depend on many things, e.g., how the moduli are ‘stabilized’ by ‘Rube-Goldberg machines’, monstrous constructions added to the theory just to stop string field properties from conflicting with existing physics! It’s presumably hoped by Dr Witten, discoverer of a 10/11-dimensional superstring-supergravity unification called M-theory, that somehow a way will turn up to pick out the correct solution from the landscape and start making checkable predictions.

However, the best idea of how to go about this is to assume that cosmology is correctly modelled by the Lambda-CDM general relativity solution, which attributes the observed lack of gravitational deceleration in the universe to dark energy, represented by a small positive cosmological constant in general relativity field equations. Then you can try to evaluate parts of the landscape of solutions to string theory which have a suitably small positive cosmological constant. Unfortunately, general relativity does not include quantum gravity, and even the mainstream quantum gravity candidate of an attractive force mediated by spin-2 gravitons, implies that gravity should be weakened over vast distances due to redshift of gravitons exchanged between receding masses, which lowers the energy of the gravitons received in interactions and reduces the coupling constant for gravity. Thus, dark energy may be superfluous if quantum gravity is correct, so it is clear that string theory is really a belief system, a faith-based initiative, with no physics or science of any kind to support it. String theory produces endless research, and inspires new mathematical ideas, albeit less impressively than Ptolemy’s universe, Maxwell’s aether and Kelvin’s vortex atom (e.g., the difficulties of solving Ptolemy’s false epicycles inspired Indian and Arabic mathematicians to develop trigonometry and algebra in the dark ages), but this doesn’t justify Ptolemy’s earth-centred universe, Maxwell’s mechanical aether, Kelvin’s stable vortex atom, and string theory. Another problem of this stringy mainstream research is that it leads to so many speculative papers being published in physics journals that the media and the journals concentrate on strings, and generally either censor out or give less attention to alternative ideas. Even if many alternative theories are wrong, that may be less harmful to the health of physics than one massive mainstream endeavour that isn’t even wrong…

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.


 

Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:


 

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.
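A toy numerical example of the point Woit makes above (my own illustration, not his): with a small coupling constant the successive terms proportional to g^n shrink rapidly and a few terms suffice, while with a coupling near 1 the terms barely shrink and the truncated expansion is nearly useless.

# Toy perturbative-style series: partial sums of g^n for n = 0..N, illustrating how
# the usefulness of a truncated expansion depends on the size of the coupling constant g.
def partial_sums(g, n_terms=10):
    total, sums = 0.0, []
    for n in range(n_terms):
        total += g ** n          # each higher-order 'diagram' picks up another factor of g
        sums.append(total)
    return sums

for g in (0.1, 0.9):
    sums = partial_sums(g)
    exact = 1.0 / (1.0 - g)      # the full sum of this geometric series, for comparison
    print(f"g = {g}: partial sums {', '.join(f'{s:.4f}' for s in sums[:4])} ...  exact = {exact:.4f}")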


 

‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. … It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – P. Woit,


 

‘String theory has the remarkable property of predicting gravity.’ – E. Witten (M-theory originator), Physics Today, April 1996.


 

‘50 points for claiming you have a revolutionary theory but giving no concrete testable predictions.’ – J. Baez (crackpot Index originator), comment about crackpot mainstream string ‘theorists’ on the Not Even Wrong weblog here.


 

‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p64. (Here is a link to a checkable quantum gravity framework which made published predictions in 1996 that were confirmed by observations in 1998, but which was censored out due to the immensely loud noise generators in vacuous string theory.)


 

Background information

Quantum field theory is the basis of the Standard Model of particle physics and is the best tested of all physical theories, more general in application and better tested within its range of application than the existing formulation of general relativity (which needs modification to include quantum field effects), describing all electromagnetic and nuclear phenomena. The Standard Model does not as yet include quantum gravity, so it is not yet a replacement for general relativity. However, the elements of quantum gravity may be obtained from an application of quantum field theory to a Penrose spin network model of spacetime (the path integral is the sum over all interaction graphs in the network, and this yields background independent general relativity). This approach, ‘loop quantum gravity’, is entirely different from that in string theory, which is based on building extra-dimensional speculation upon other speculations, e.g., the speculation that gravity is due to spin-2 gravitons (this is speculative: there is no experimental evidence for it). In loop quantum gravity, by contrast to string theory, the aim is merely to use quantum field theory to derive the framework of general relativity as simply as possible. Other problems in the Standard Model are related to understanding how electroweak symmetry is broken at low energy and how mass (gravitational charge) is acquired by some particles. There are several forms of speculated Higgs field which may give rise to mass and electroweak symmetry breaking, but the details are as yet unconfirmed by experiment (the Large Hadron Collider may do it). Moreover, there are questions about how the various parameters of the Standard Model are related, and about the nature of fundamental particles (string theory is highly speculative, and there are other possibilities).


 

There are several excellent approaches to quantum field theory: at a popular level there is Wilczek’s 12-page discussion of Quantum Field Theory, Dyson’s Advanced Quantum Mechanics and the excellent approach by Alvarez-Gaume and Vazquez-Mozo, Introductory Lectures on Quantum Field Theory. A good mathematics compendium introducing, in a popular way, some of the maths involved is Penrose’s Road to Reality (Penrose’s twistors inspired some concepts in an Electronics World article of April 2003). For a very brief (47 pages) yet more abstract or mathematical (formal) approach to quantum field theory, see for comparison Crewther’s http://arxiv.org/abs/hep-th/9505152. For a slightly more ‘stringy’-orientated approach, see Mark Srednicki’s 608-page textbook, via http://www.physics.ucsb.edu/~mark/qft.html, and there is also Zee’s Quantum Field Theory in a Nutshell on Amazon to buy if you want something briefer but with that mainstream speculation (stringy) outlook.


 

Ryder’s Quantum Field Theory also contains supersymmetry unification speculations and is available on Amazon here. Kaku has a book on the subject here, Weinberg has one here, Peskin and Schroeder’s is here, while Einstein’s scientific biographer, the physicist Pais, has a history of the subject here. Baez, Segal and Zhou have an algebraic quantum field theory approach available on

http://math.ucr.edu/home/baez/bsz.html, while Dr Peter Woit has a link to handwritten quantum field theory lecture notes from Sidney Coleman’s course, which is widely recommended, here. For background on representation theory and the Standard Model see Woit’s page here for maths background and also his detailed suggestion, http://arxiv.org/abs/hep-th/0206135. For some discussion of quantum field theory equations without the interaction picture, polarization, or renormalization of charges due to a physical basis in pair production cutoffs at suitable energy scales, see Dr Chris Oakley’s page http://www.cgoakley.demon.co.uk/qft/: ‘… renormalization failed the “hand-waving” test dismally.

‘This is how it works. In the way that quantum field theory is done – even to this day – you get infinite answers for most physical quantities. Are we really saying that particle beams will interact infinitely strongly, producing an infinite number of secondary particles? Apparently not. We just apply some mathematical butchery to the integrals until we get the answer we want. As long as this butchery is systematic and consistent, whatever that means, then we can calculate regardless, and what do you know, we get fantastic agreement between theory and experiment for important measurable numbers (the anomalous magnetic moment of leptons and the Lamb shift in the Hydrogen atom), as well as all the simpler scattering amplitudes. …

‘As long as I have known about it I have argued the case against renormalization. [What about the physical mechanism of virtual fermion polarization in the vacuum, which explains the case for a renormalization of charge since this electric polarization results in a radial electric field that opposes and hence shields most of the core charge of the real particle, and this shielding due to polarization occurs wherever there are pairs of charges that are free and have space to become aligned against the core electric field, i.e. in the shell of space around the particle core that extends in radius between a minimum radius equal to the grain size of the Dirac sea – i.e. the UV cutoff – and an outer radius of about 1 fm which is the range at which the electric field strength is Schwinger’s threshold for pair-production (i.e. the IR cutoff)? This renormalization mechanism has some physical evidence in several experiments, e.g., Levine, Koltick et al., Physical Review Letters, v.78, no.3, p.424, 1997, where the observable electric charge of leptons does indeed increase as you get closer to the core, as seen in higher energy scatter experiments.] …

‘[Due to Haag’s theorem] it is not possible to construct a Hamiltonian operator that treats an interacting field like a free one. Haag’s theorem forbids us from applying the perturbation theory we learned in quantum mechanics to quantum field theory, a circumstance that very few are prepared to consider. Even now, the text-books on quantum field theory gleefully violate Haag’s theorem on the grounds that they dare not contemplate the consequences of accepting it.

‘… The next paper I wrote, in 1986, follows this up. It takes my 1984 paper and adds two things: first, a direct solving of the equal-time commutators, and second, a physical interpretation wherein the interaction picture is rediscovered as an approximation.

‘With regard to the first thing, I doubt if this has been done before in the way I have done it3, but the conclusion is something that some may claim is obvious: namely that local field equations are a necessary result of fields commuting for spacelike intervals. Some call this causality, arguing that if fields did not behave in this way, then the order in which things happen would depend on one’s (relativistic) frame of reference. It is certainly not too difficult to see the corollary: namely that if we start with local field equations, then the equal-time commutators are not inconsistent, whereas non-local field equations could well be. This seems fine, and the spin-statistics theorem is a useful consequence of the principle. But in fact this was not the answer I really wanted as local field equations lead to infinite amplitudes. It could be that local field equations with the terms put into normal order – which avoid these infinities – also solve the commutators, but if they do then there is probably a better argument to be found than the one I give in this paper. …

‘With regard to the second thing, the matrix elements consist of transients plus contributions which survive for large time displacements. The latter turns out to be exactly that which would be obtained by Feynman graph analysis. I now know that – to some extent – I was just revisiting ground already explored by Källén and Stueckelberg4.

‘My third paper [published in Physica Scripta, v41, pp292-303, 1990] applies all of this to the specific case of quantum electrodynamics, replicating all scattering amplitudes up to tree level. …

‘Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

‘The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The five prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. …

‘Although by these awards the Swedish Academy is in my opinion endorsing shoddy science, I would say that, if anything, particle physicists have grown to accept renormalization more rather than less as the years have gone by. Not that they have solved the problem: it is just that they have given up trying. Some even seem to be proud of the fact, lauding the virtues of makeshift “effective” field theories that can be inserted into the infinitely-wide gap defined by infinity minus infinity. Nonetheless, almost all concede that things could be better, it is just that they consider that trying to improve the situation is ridiculously high-minded and idealistic. …

‘The other area of uncertainty is, to my mind, the ‘strong’ nuclear force. The quark model works well as a classification tool. It also explains deep inelastic lepton-hadron scattering. The notion of quark “colour” further provides a possible explanation, inter alia, of the tendency for quarks to bunch together in groups of three, or in quark-antiquark pairs. It is clear that the force has to be strong to overcome electrostatic effects. Beyond that, it is less of an exact science. Quantum chromodynamics, the gauge theory of quark colour is the candidate theory of the binding force, but we are limited by the fact that bound states cannot be done satisfactorily with quantum field theory. The analogy of calculating atomic energy levels with quantum electrodynamics would be to calculate hadron masses with quantum chromodynamics, but the only technique available for doing this – lattice gauge theory – despite decades of work by many talented people and truly phenomenal amounts of computer power being thrown at the problem, seems not to be there yet, and even if it was, many, including myself, would be asking whether we have gained much insight through cracking this particular nut with such a heavy hammer.’

The humorous and super-intelligent (no joke intended) Professor Warren Siegel has an 885-page free textbook, Fields, http://arxiv.org/abs/hep-th/9912205, the first chapters of which consist of a very nice introduction to the technical mathematical background of experimentally validated quantum field theory (it also has chapters on speculative supersymmetry and speculative string theory toward the end). Gerard ’t Hooft has a brief (69 pages) review article, The Conceptual Basis of Quantum Field Theory, here, and Meinard Kuhlmann has an essay on it for the Stanford Encyclopedia of Philosophy here.

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189. (Emphasis added.)
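The holonomy idea in the quotation above can be illustrated numerically. The sketch below (an illustration of the standard textbook result for a curved surface, nothing specific to loop quantum gravity) parallel-transports a tangent vector around a circle of constant latitude on the unit sphere by repeatedly projecting onto the local tangent plane; the net rotation it picks up matches the solid angle enclosed by the loop, up to a small discretization error. The latitude and step count are arbitrary choices.

import numpy as np

theta = np.deg2rad(30.0)          # colatitude of the loop (an arbitrary example value)
steps = 100000                    # number of small transport steps around the loop
phis = np.linspace(0.0, 2 * np.pi, steps)

def point(phi):
    """Position on the unit sphere at colatitude theta and longitude phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# initial tangent vector at phi = 0, pointing towards increasing colatitude ('south')
v = np.array([np.cos(theta), 0.0, -np.sin(theta)])
v0 = v.copy()

for phi in phis[1:]:
    n = point(phi)                # unit normal to the sphere = the position vector itself
    v = v - np.dot(v, n) * n      # project the carried vector onto the new tangent plane
    v = v / np.linalg.norm(v)     # keep it unit length (parallel transport preserves length)

holonomy = np.arccos(np.clip(np.dot(v0, v), -1.0, 1.0))
solid_angle = 2 * np.pi * (1 - np.cos(theta))
print(f"rotation picked up by the transported vector: {holonomy:.4f} rad")
print(f"2*pi*(1 - cos(theta)), the enclosed solid angle: {solid_angle:.4f} rad")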

‘Plainly, there are different approaches to the five fundamental problems in physics.’ – Lee Smolin, The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, Houghton Mifflin, New York, 2006, p254.

The major problem today seems to be that general relativity is fitted to the big bang without applying corrections for quantum gravity which are important for relativistic recession of gravitational charges (masses): the redshift of the gravity-causing gauge boson radiation reduces the gravitational coupling constant G, weakening long range gravitational effects on cosmological distance scales (i.e., between rapidly receding masses). This mechanism for a lack of gravitational deceleration of the universe on large scales (high redshifts) has counterparts even in alternative push-gravity graviton ideas, where gravity – and generally curvature of spacetime – is due to shielding of gravitons (in that case, the mechanism is more complicated, but the effect still occurs).

Professor Carlo Rovelli’s Quantum Gravity is an excellent background text on loop quantum gravity, and is available in PDF format as an early draft version online at http://www.cpt.univ-mrs.fr/~rovelli/book.pdf and in the final published version from Amazon here. Professor Lee Smolin also has some excellent online lectures about loop quantum gravity at the Perimeter Institute site, here (you need to scroll down to ‘Introduction to Quantum Gravity’ in the left hand menu bar). Basically, Smolin explains that loop quantum gravity gets the Feynman path integral of quantum field theory by summing all interaction graphs of a Penrose spin network, which amounts to general relativity without a metric (i.e., background independent). Smolin also has an arXiv paper, An Invitation to Loop Quantum Gravity, here, which contains a summary of the subject from the existing framework of mathematical theorems of special relevance to the more peripheral technical problems in quantum field theory and general relativity.

However, possibly the major future advantage of loop quantum gravity will be as a Yang-Mills quantum gravity framework, with the physical dynamics implied by gravity being caused by full cycles or complete loops of exchange radiation being exchanged between gravitational charges (masses) which are receding from one another as observed in the universe. There is a major difference between the chaotic space-time annihilation-creation massive loops which exist between the IR and UV cutoffs, i.e., within 1 fm distance from a particle core (due to chaotic loops of pair production/annihilation in quantum fields), and the more classical (general relativity and Maxwellian) force-causing exchange/vector radiation loops which occur outside the 1 fm range of the IR cutoff energy (i.e., at lower energy than the closest approach – by Coulomb scatter – of electrons in collisions with a kinetic energy similar to the rest mass-energy of the particles).

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3×10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, 1990, page 84-5.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.


Feynman points out in that book QED that there is a simple physical explanation via Feynman diagrams and path integrals for why the mathematics of electron orbits and photon paths is classical on large scales and chaotic on small ones:

‘… when seen on a large scale, they [electrons, photons, etc.] travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from quantum interactions, each represented by a Feynman diagram] becomes very important, and we have to sum the arrows [amplitudes] to predict where an electron is likely to be.’

– R. P. Feynman, QED, Penguin, 1990, page 84-5.

So according to Feynman, an electron inside the atom has a chaotic path which is physically the result of the small scale involved, which prevents individual virtual photon exchanges from statistically averaging out the way they do on large scales. For analogy, think of the different effects of the impacts of air molecules on a micron sized dust particle – i.e. chaotic Brownian motion – and on a football, where such large numbers of impacts [are] involved that they can be accurately represented by the classical approximation of ‘air pressure’.
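The dust-grain versus football analogy above is easy to quantify: the net of N random impulses fluctuates about its mean like 1/sqrt(N) of the total, so the fluctuations are obvious for small N and negligible for huge N. Below is a minimal numerical sketch (the seed and trial counts are arbitrary choices for illustration).

import numpy as np

rng = np.random.default_rng(0)        # assumed seed, purely for repeatability
for n in (10, 1_000, 1_000_000):
    heads = rng.binomial(n, 0.5, size=1000)     # per trial: impacts pushing in the +x direction
    net = 2 * heads - n                         # net impulse after n impacts of +1 or -1
    relative = net.std() / n                    # fluctuation relative to the number of impacts
    print(f"N = {n:>9}   relative fluctuation ~ {relative:.2e}   (1/sqrt(N) = {n ** -0.5:.2e})")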

But Feynman uses integration (requiring non-quantized continuous variables) to average out the effects of these many paths or interaction histories, where strictly speaking he should be using discrete (sigma symbol) summation of all individual (quantum) interactions.

If you look at general relativity and quantum field theory (QFT), both represent fields using calculus: they both use differential equations describing continuous variables to represent fields which should strictly be sigma sums for the action in discrete interactions. This is why differential QFT leads to perturbative expansions with an infinite number of terms, each term corresponding to a Feynman diagram:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.

Maybe this effect is what Prof. John Baez was thinking about in his comment at http://www.math.columbia.edu/~woit/wordpress/?p=615.


   

 

Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests


Acknowledgement


 

Professor Jacques Distler of the University of Texas inspired recent reformulations by suggesting in a comment on Professor Clifford V. Johnson’s discussion blog that I’d be taken more seriously if only I’d use tensor analysis in discussing the mathematical physics of general relativity.


Part 1: Summary of experimental and theoretical evidence, and comparison of theories

Part 2: The mathematics and physics of general relativity [Currently this links to a paper by Drs. Baez and Bunn]

Part 3: Quantum gravity approaches: string theory and loop quantum gravity [Currently this links to Dr Rovelli’s Quantum Gravity]

Part 4: Quantum mechanics, Dirac’s equation, spin and magnetic moments, pair-production, the polarization of the vacuum above the IR cutoff and its role in the renormalization of charge and mass [Currently this links to Dyson’s QED introduction]

Part 5: The path integral of quantum electrodynamics, compared with Maxwell’s classical electrodynamics [Currently this links to Siegel’s Fields, which covers a large area in depth, one gem for example is that it points out that the ‘mass’ of a quark is not a physical reality, firstly because quarks can’t be isolated and secondly because the mass is due to the vacuum particles in the strong field surrounding the quarks anyway]

Part 6: Nuclear and particle physics, Yang-Mills theory, the Standard Model, and representation theory [Currently this links to Woit’s very brief Sketch showing how simple low-dimensional modelling can deliver particle physics, which hopefully will turn into a more detailed, and also slower-paced, technical book very soon]

Part 7: Methodology of doing science: predictions and postdictions of the theory based purely on empirical facts (vacuum mechanism for mass and electroweak symmetry breaking at low energy, including Hans de Vries’ and Alejandro Rivero’s ‘coincidence’) [Currently this links to Alvarez-Gaume and Vazquez-Mozo, Introductory Lectures on Quantum Field Theory]

Part 8: Riofrio’s and Hunter’s equations, and Lunsford’s unification of electromagnetism and gravitation [Currently this links to Lunsford’s paper]

Part 9: Standard Model mechanism: vacuum polarization and gauge boson field mediators for asymptotic freedom and force unification [Currently this links to Wilczek’s brief summary paper]

Part 10: Evidence for the ‘stringy’ nature of fundamental particle cores? [Currently links to Dr Lubos Motl’s list of 12 top superstring theory ‘results’, with literature references]




‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here


‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here


‘Part of the reason string theory makes no new predictions is that it appears to come in an infinite number of versions. … With such a vast number of theories, there is little hope that we can identify an outcome of an experiment that would not be encompassed by one of them. Thus, no matter what the experiments show, string theory cannot be disproved. But the reverse also holds: No experiment will ever be able to prove it true. … if string theorists are wrong, they can’t be just a little wrong. If the new dimensions and symmetries do not exist, then we will count string theorists among science’s greatest failures, like those who continued to work on Ptolemaic epicycles while Kepler and Galileo forged ahead. Theirs will be a cautionary tale of how not to do science, how not to let theoretical conjecture get so far beyond the limits of what can rationally be argued that one starts engaging in fantasy.’ – Professor Lee Smolin, The Trouble with Physics: The Rise of String Theory, the Fall of a Science and What Comes Next, Houghton Mifflin Company, New York, 2006, pp. xiv-xvii.


THE ROAD TO REALITY: A COMPREHENSIVE GUIDE TO THE LAWS OF THE UNIVERSE by Sir Roger Penrose, published by Jonathan Cape, London, 2004. The first half of the 1,094-page hardback book (2.5 inches/6.5 cm thick) briefly summarises fairly well known mathematics of background importance to the subject at issue. The remaining half of the book deals with quantum mechanics and attempts to unify it with general relativity. On page 785, Penrose neatly quotes Professor Stephen Hawking (his co-author on The Nature of Space and Time):


 

  • ‘I don’t demand that a theory correspond to reality because I don’t know what it is. Reality is not a quality you can test with litmus paper. All I’m concerned with is that the theory should predict the results of measurements.’ [Quoted from: Stephen Hawking in S. Hawking and R. Penrose, The Nature of Space and Time, Princeton University Press, Princeton, 1996, p. 121.] 

    But acidity is a reality which you can, indeed, test with litmus paper! On page 896, Penrose analyses those who use string ‘theory’ as an obfuscation (or worse) of the meaning of ‘prediction’:

    ‘In the words of Edward Witten [E. Witten, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996]:

     

  • ‘String theory has the remarkable property of predicting gravity, 

    ‘and Witten has further commented:

     

  • ‘the fact that gravity is a consequence of string theory is one of the greatest theoretical insights ever. 

    ‘It should be emphasised, however, that in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory …’


    Hence string ‘theory’, as hyped up by genius Witten in 1996 as predicting gravity, is really misleading. String ‘theory’ has no proof of a physical mechanism and predicts nothing checkable, not even the strength of gravity, unlike the causal mechanism! (In the apt words of exclusion-principle proposer Wolfgang Pauli, string ‘theory’ is in the class of belief junk: ‘not even wrong’.)


    On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’


    Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’


    On page 1026, Penrose points out: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’


    ‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.


    ************************

    Classical Electromagnetism

    Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/(square root of product of magnetic force permeability and electric force permittivity) = light speed.

    Maxwell, after a lot of failures (like Kepler’s trial-and-error road to the planetary laws), ended up with a cyclical light model in which a changing electric field creates a magnetic field, which creates an electric field, and so on. Sadly, his picture of a light ray in Article 791, showing in-phase electric and magnetic fields at right angles to one another, has been accused of causing confusion and of being incompatible with his light-wave theory (the illustration is still widely used today!).

    In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges.

    Maxwell’s equation for Faraday’s law (in one dimension): dE/dx = -dB/dt

    Maxwell’s equation for the displacement current:

    -dB/dx = με·dE/dt

    where μ is the magnetic permeability of space, ε is the electric permittivity of space, E is the electric field strength, and B is the magnetic field strength. To solve these simultaneously, differentiate both:

    d²E/dx² = -d²B/(dx·dt)

    -d²B/(dx·dt) = με·d²E/dt²

    Since d²B/(dx·dt) occurs in each of these equations, they can be combined into the wave equation d²E/dx² = με·d²E/dt², whose solutions propagate at speed c = 1/√(με) ≈ 300,000 km/s. Eureka!  The velocity of light comes out of electromagnetism!  Maxwell arrogantly and condescendingly tells us in his Treatise that ‘The only use made of light’ in finding μ and ε was to ‘see the instrument.’  Sadly it was only in 1885 that J. H. Poynting and Oliver Heaviside independently discovered the ‘Poynting-Heaviside vector’ (Phil. Trans., 1885, p. 277).  Maxwell’s error in jumping to this fantastic conclusion is called prejudice, because he admitted he had not a clue about the velocity of electric energy in circuits:
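
    As a quick numerical check of c = 1/√(με), here is a sketch in Python using the modern SI values of the vacuum permeability and permittivity (standard constants, not figures taken from the text above):

        import math

        mu0 = 4e-7 * math.pi            # magnetic permeability of the vacuum, H/m
        eps0 = 8.8541878128e-12         # electric permittivity of the vacuum, F/m

        c = 1.0 / math.sqrt(mu0 * eps0)
        print(c)                        # ~2.998e8 m/s, i.e. about 300,000 km/s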

    ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second.’ – James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 574.

    It turned out, from the experimental work of Oliver Heaviside on the Newcastle-Denmark cable in 1875, that what Maxwell thought of as ‘light’ is the electric field carrying the energy of electricity (the electron drift mass and velocity are so small that the drift carries negligible energy compared to the field): this field is called the Poynting-Heaviside vector (discovered independently by Poynting and by Heaviside).  Obviously, electricity seems to be related to light since both have the same velocity, but they are not the same.

    Shamefully, Maxwell first made his elaborate claims about ‘predicting light’ in an 1861 article based on his faulty ‘elastic solid’ mechanism of light rays, by analogy to longitudinal seismic waves in a solid, which was later shown to contain an error if applied to transverse vibrations instead of longitudinal ones.  Faraday had first suggested the electromagnetic line vibration nature of light in a well reasoned article of 1846 entitled Thoughts on Ray Vibrations.  Weber showed in 1856 that the square root of the ratio of the electric force constant (from Coulomb’s empirical law of electric force between electric charges) and the magnetic force constant (for the empirical law of magnetic force between electromagnets powered by a known amount of electric current) predicted the velocity of light in the correct dimensions.  Maxwell merely set out to cook up some maths to link up Faraday’s idea with Weber’s ratio.

    Maxwell found some interesting connections and his final equations are quantitatively accurate, but he failed to build a successful theory of the vacuum (despite a lot of vacuous hype to the contrary), which is why there are problems in understanding the physics of Maxwell’s classical model for light in terms of virtual charges in the vacuum.  Hence, a new mechanism is needed, and has been developed, which is almost impossible to publish.

    One source is A. F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9). Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated:

    ‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

    Maxwell deliberately tried to fix his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper On Physical Lines of Force (January 1862) shows; as Chalmers explains: ‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of root 2 smaller than the velocity of light.’

    It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error.  Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’  Thus, Maxwell set his great precedent for dishonest hype, fudging and fiddling results, obfuscation and complete bull, affecting all future unified field theorists (like string theorists today).

    Maxwell did, however, usefully (and very heretically) write:

    ‘As I proceeded with the study of Faraday, I perceived that his method of conceiving the phenomena was also a mathematical one, though not exhibited in the conventional form of mathematical symbols. I also found that these methods were capable of being expressed in the ordinary mathematical forms … For instance, Faraday, in his mind’s eye, saw lines of force transversing all space where the mathematicians saw centres of force attracting at a distance: Faraday saw a medium where they saw nothing but distance: Faraday sought the seat of the phenomena in real actions going on in the medium, they were satisfied that they had found it in a power of action at a distance…’ – Dr J. Clerk Maxwell, Preface, A Treatise on Electricity and Magnetism, 3rd ed., 1873.

    ‘In fact, whenever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other… I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its action…’ – Dr J. Clerk Maxwell, Conclusion, A Treatise on Electricity and Magnetism, 3rd ed., 1873.

    Quantum field theory describes the relativistic quantum oscillations of fields. The case of zero spin leads to the Klein-Gordon equation. However, everything tends to have some spin. Maxwell’s equations for electromagnetic propagating fields are compatible with an assumption of spin h/(2π), hence the photon is a boson since it has integer spin in units of h/(2π). Dirac’s equation models electrons and other particles that have only half a unit of spin, as known from quantum mechanics. These half-integer particles are called fermions and have antiparticles with opposite spin. Obviously you can easily make two electrons (neither the antiparticle of the other) have opposite spins, merely by having their spin axes pointing in opposite directions: one pointing up, one pointing down.  (This is totally different from Dirac’s antimatter, where the opposite spin occurs while both matter and antimatter spin axes are pointed in the same direction. It enables the Pauli-pairing of adjacent electrons in the atom with opposite spins and makes most materials non-magnetic; since all electrons have a magnetic moment, everything would be potentially magnetic in the absence of the Pauli exclusion process.)

    From constraints imposed by Pauli’s exclusion principle from quantum mechanics, Eugene Wigner and Jordan in 1928 found the correct way to include creation-annihilation operators in the theory to allow for pair production phenomena as loops in spacetime (i.e., creation of pairs followed by annihilation into radiation, in an endless cycle or loop).

    List of developments in Quantum Field Theory 

    The following list of developments is excerpted from a longer one given in Dr Peter Woit’s notes on the mathematics of QFT.  He emphasises:

    ‘Quantum field theory is not a subject which is at the point that it can be developed axiomatically on a rigorous basis. There are various sets of axioms that have been proposed (for instance Wightman’s axioms for non-gauge theories on Minkowski space or Segal’s axioms for conformal field theory), but each of these only captures a limited class of examples. Many quantum field theories that are of great interest have so far resisted any useful rigorous formulation. …’ Dr Woit lists the major events in QFT to give a sense of chronology to the mathematical developments:

    ‘1925: Matrix mechanics version of quantum mechanics (Heisenberg)

    ‘1925-6: Wave mechanics version of quantum mechanics, Schroedinger equation (Schroedinger)

    ‘1927-9: Quantum field theory of electrodynamics (Dirac, Heisenberg, Jordan, Pauli)

    ‘1928: Dirac equation (Dirac)

    ‘1929: Gauge symmetry of electrodynamics (London, Weyl)

    ‘1931: Heisenberg algebra and group (Weyl), Stone-von Neumann theorem

    ‘1948: Feynman path integrals formulation of quantum mechanics

    ‘1948-9: Renormalised quantum electrodynamics, QED (Feynman, Tomonaga, Schwinger, Dyson)

    ‘1954: Non-abelian gauge symmetry, Yang-Mills action (Yang, Mills, Shaw, Utiyama)

    ‘1959: Wightman axioms (Wightman)

    ‘1962-3: Segal-Shale-Weil representation (Segal, Shale, Weil)

    ‘1967: Glashow-Weinberg-Salam gauge theory of weak interactions (Weinberg, Salam)

    ‘1971: Renormalised non-abelian gauge theory (’t Hooft)

    ‘1971-2: Supersymmetry

    ‘1973: Non-abelian gauge theory of strong interactions, QCD (Gross, Wilczek, Politzer)

    (I’ve omitted the events on Dr Woit’s list after 1973.)

    Dr Chris Oakley has an internet site about renormalisation in quantum field theory, which is also an interest of Dr Peter Woit. Dr Oakley starts by quoting Nobel Laureate Paul A.M. Dirac’s concerns in the 1970s:

    ‘[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.’

    Nobel Laureate Richard P. Feynman described both the accuracy of the predictions of the magnetic moments of leptons (electron and muon) and of the Lamb shift, and two major problems of QFT, namely ‘renormalisation’ and the unknown rationale for the ‘137’ electromagnetic force coupling factor:

    ‘… If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair. That’s how delicately quantum electrodynamics has, in the past fifty years, been checked … I suspect that renormalisation is not mathematically legitimate … we do not have a good mathematical way to describe the theory of quantum electrodynamics … the observed coupling … 137.03597 … has been a mystery ever since it was discovered … one of the greatest damn mysteries …’ – QED, Penguin, 1990, pp. 7, 128-9.

    ‘I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it when it is infinitely great and you do not want it! … Simple changes will not do … I feel that the change required will be just about as dramatic as the passage from the Bohr theory to quantum mechanics.’ – Paul A. M. Dirac, lecture in New Zealand, 1975 (quoted in Directions in Physics).

    Dr Chris Oakley writes: ‘… I believe we already have all the ingredients for a compact and compelling development of the subject. They just need to be assembled in the right way. The important departure I have made from the ‘standard’ treatment (if there is such a thing) is to switch round the roles of quantum field theory and Wigner’s irreducible representations of the Poincaré group. Instead of making quantising the field the most important thing and Wigner’s arguments an interesting curiosity, I have done things the other way round. One advantage of doing this is that since I am not expecting the field quantisation program to be the last word, I need not be too disappointed when I find that it does not work as I may want it to.’

    Describing the problems with ‘renormalisation’, Dr Oakley states: ‘Renormalisation can be summarised as follows: developing quantum field theory from first principles involves applying a process known as ‘quantisation’ to classical field theory. This prescription, suitably adapted, gives a full dynamical theory which is to classical field theory what quantum mechanics is to classical mechanics, but it does not work. Things look fine on the surface, but the more questions one asks the more the cracks start to appear. Perturbation theory, which works so well in ordinary quantum mechanics, throws up some higher-order terms which are infinite, and cannot be made to go away.

    ‘This was known about as early as 1928, and was the reason why Paul Dirac, who (along with Wolfgang Pauli) was the first to seriously investigate quantum electrodynamics, almost gave up on field theory. The problem remains unsolved to this day. Perturbation theory is done slightly differently, using an approach based on the pioneering work of Richard Feynman, but, other than that, nothing has changed. One seductive fact is that by pretending that infinite terms are not there, which is what renormalisation is, the agreement with experiment is good. … I believe that our failure to really get on top of quantum field theory is the reason for the depressing lack of progress in fundamental physics theory. … I might also add that the way that the whole academic system is set up is not conducive to the production of interesting and original research. … The tone is set by burned-out old men who have long since lost any real interest and seem to do very little other than teaching and politickering. …’

    In addition to Dirac’s Hamiltonian based formulation of quantum field theory (energy in terms of position and momenta), there is a Lagrangian formulation called Feynman’s path integrals, which calculates the difference between kinetic and potential energy and follows the trajectory of particles:

    ‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’


    Renormalization, the Standard Model, Supersymmetry

    Renormalization is the scaling of charges and mass in a quantum field theory to make the theory consistent with experimental results.  However, it does have a physical basis in polarization.  The vacuum loops contain virtual charges which are radially polarized around a charged particle by the particle’s field.  Virtual positrons in the vacuum are closer to an electron than virtual electrons (due to Coulomb attraction and repulsion), so the polarization of the vacuum gives rise to a net radial field which opposes the field from the particle in the middle.  This shields the charge of the core particle, because some of its electric field is cancelled out by the polarized field of virtual fermions surrounding it which extends to the IR cutoff range, at about 1 fm radius.

    Hence, there is a physical mechanism by which electric charge should be renormalized.  In addition to electric charge, mass is also renormalized, which usually creates problems in a theory in which mass is a type of charge (quantum gravity would have masses as its units of charge), because all observed masses move in the same direction in a gravitational field.  Hence, mass is not directly a renormalizable quantity by the polarization mechanism.  However, if mass is indirectly associated with electric charge by a coupling effect, then mass will be renormalizable in the same way as electric charge, because the amount of mass coupled will be dependent on the renormalized (i.e., shielded) magnitude of the electric charge.  This indirect renormalization mechanism, for mass, indicates that the mass-giving field particles must be a distance away from an electron, with the polarized vacuum separating the mass-giving particle from the electron core.

    In the Standard Model of particle physics, the real electron and positron are complicated particles, their nature being determined by vacuum polarization phenomena.  What exist in the vacuum, going by names like ‘virtual electron’ and ‘virtual positron’, are quite different things, because they are part of the vacuum field and don’t exist long enough to have their own vacuum field (and hence polarization effects).  This means that what you might call a vacuum ‘electron’ has an effective mass of zero, and the entire nature of a real electron (distinguished from a ‘virtual electron’ of the vacuum) is that it has sufficient energy and lifespan for a mass-giving boson to couple to it, making it into a real electron.

    This is not like speculative supersymmetry, which is the idea that UV divergence problems (infinite momenta and other effects presumed to exist at unobservably high energy, near the Planck scale) can be cancelled out by speculating that there is a supersymmetric (SUSY) bosonic partner for every observed fermion in the universe: these postulated (unobserved) partners are not Higgs (spin zero) mass-giving bosons, and even the postulated Higgs bosons are not assumed, in mainstream models, to be paired up to fermions, but instead to form a general sea in the vacuum which causes inertia by drag-less miring (like a perfect fluid, which resists only acceleration and deceleration, not inertial motion).

    There is only one type of massive particle (having one fixed mass) and one type of charged particle in the universe, since representation theory suggests that all charges (leptons and quarks) can be generated by simple transformation symmetries of Clifford algebra; the observed differences are physically determined by the vacuum.  For example, if N charges are nearby and hence share the same vacuum polarization shield out to a radius of 1 fm (the low energy cutoff for polarization), then the vacuum shielding factor will be stronger than that from a single charge by a factor N, so the shielded value of the electric charge per particle will be 1/N, i.e., a fraction, of that from a positron or electron.  Of course the simplicity of this explanation of fractional charges is partly cloaked by complex effects from weak charge.

    All the leptons and hadrons observed may be combinations of these two types of particle (electron-like electric charge and Z0 boson-like mass) with vacuum effects contributing different observable charges and forces: there is an illustration here for how vacuum polarization allows a single mass-giving particle to give rise to all known particle masses (to within a couple of percent), and also a table of comparisons here (if you scroll down; that page is now under revision).

    Yang-Mills Gauge Theory: Exchange Radiation is the Force Mechanism

    The observation that like charges repel while unlike charges attract is explained in terms of exchange radiation by Yang-Mills gauge theory.

    Symmetry is an invariance: rotate a square by 90N degrees, where N is an integer, and it looks just the same before and after the rotation.  Another example is that field effects result from differences in relative potentials, not absolute potentials.  Hence, an electron is not accelerated by the absolute number of volts, but just by the local field gradient, which is the number of volts per metre by which the potential falls.  Electric force is F = qE, where E is the field gradient in volts/metre and q is charge, while gravitational force is similarly F = ma, where m is the gravitational charge (mass) and a is the acceleration due to the gravitational field.  This gives a universal or ‘global’ symmetry because the relative field strengths, the gradients, are independent of the absolute potential.

    Similarly, the gradient of a mountain is independent of the absolute height of the mountain.  If you cut the top off a mountain, it doesn’t affect the gradient of the slopes on the remainder.  So Maxwell’s equations have a global invariance because they are independent of the absolute field strength (the height of the mountain).  They also have a local invariance because when an electric field is disturbed, a magnetic field is created and vice-versa, which depends on relative motion of the observer to the charge or field, not absolute motion (if you move relative to an electric charge, you experience a magnetic field from that charge, but if you move along with the charge, you don’t get a magnetic field). 
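
    As a minimal numerical sketch of this global invariance (in Python, with made-up numbers): shifting the whole potential by a constant leaves the gradient, and hence the force on a charge, unchanged.

        def field_from_potential(V, dx):
            """Central-difference estimate of E = -dV/dx at interior points, in V/m."""
            return [-(V[i + 1] - V[i - 1]) / (2 * dx) for i in range(1, len(V) - 1)]

        dx = 0.5                                   # metres
        x = [i * dx for i in range(50)]
        V = [100.0 * xi for xi in x]               # a linear potential, in volts
        V_shifted = [v + 1.0e6 for v in V]         # the same potential raised by a million volts

        print(field_from_potential(V, dx)[:3])          # [-100.0, -100.0, -100.0]
        print(field_from_potential(V_shifted, dx)[:3])  # identical: the force qE depends only on
                                                        # the gradient, not the absolute potential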

    Hermann Weyl in 1929 came up with a principle of local phase symmetry or invariance of electromagnetic and other waves, called ‘gauge invariance’.  According to this gauge invariance, the phase of photon or particle can only vary if there is a local change in the field through which it propagates.

    Another example of symmetry is isotopic spin, ‘isospin’, whereby (as in analogous uses of iso, such as isotope and isothermal) particles have the same spin (or other properties, such as nuclear charge and mass) despite having different electric charges.

    Yang and Mills in 1954 worked out a gauge theory for isospin.  Whereas Weyl’s original gauge theory showed how phase shifts of a photon’s wave are due to changes in the electromagnetic field, the Yang-Mills theory shows how changes in the isospin of a particle are a result of changes in the electromagnetic field (and vice-versa).  This Yang-Mills theory requires that the electromagnetic field itself must contain charges, and it forms the foundation of the Standard Model, where electroweak unification has a field which, above the electroweak symmetry breaking energy scale, is composed of four gauge bosons, two of which have net electric charges.  If colour charge is substituted for isospin, Yang-Mills theory describes the quantum chromodynamics theory of strong nuclear interactions between quarks, where the force mediators for three types of nuclear colour charge are (3 x 3) – 1 = 8 types of colour-charged ‘gluons’.  The gluons carry charge (colour charge), so this is a Yang-Mills theory like the electroweak theory.
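
    The gluon count quoted above is the standard group-theory result that an SU(N) gauge field has N² – 1 mediators; a one-line check in Python:

        def gauge_boson_count(n):
            """Adjoint dimension (number of gauge bosons) for SU(n)."""
            return n * n - 1

        print(gauge_boson_count(2))   # 3 gauge bosons for SU(2) isospin
        print(gauge_boson_count(3))   # 8 gluons for SU(3) colour: (3 x 3) - 1 as stated above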

    Initially in 1954 the Yang-Mills theory set out to model the strong force with isospin, and the recognition of the electroweak gauge bosons as Yang-Mills gauge bosons occurred first to Glashow, Salam and Weinberg around 1967.  This electroweak theory was then proved to be renormalizable by ’t Hooft and Veltman in 1971.  The first experimental evidence for it was the discovery of effects due to the neutral but massive Z gauge boson (‘weak neutral currents’) in 1973, and full confirmation came when the Z and W gauge bosons were discovered in 1983.

    There is a good published essay on the Yang-Mills equation by Dr Christine Sutton (which is reviewed in mathematical detail by the mathematician Professor William G. Faris here), where she points out that a Yang-Mills type theory was independently investigated by others, including Pauli in 1953 (who dismissed it because it gave massless particles in the fields; he wanted massive particles as field carriers to explain the short range of the strong nuclear force, which of course is now known to be due to gluons, which don’t require mass to have a limited range), Ronald Shaw in England and Ryoyu Utiyama in Japan, both in 1954.  (The reason why Yang-Mills is the widely known name for the theory is that Yang and Mills published it in the Physical Review in October 1954.)

    Yang-Mills Exchange Radiation

    The exchange radiations in Yang-Mills theory can have charge (such as in the case of the W+ and W- weak force gauge bosons, and all gluons), but they don’t have any mass.  So mass results from a separate field, possibly some version of the Higgs field.  However, other mechanisms (and several variations in the detail of a Higgs-type mechanism) for mass are possible, so an experimental determination of the correct theory is required before any particular theory of mass should be asserted as verified fact.

    Rutherford’s empirical evidence for the nuclear atom, from an analysis of Geiger and Marsden’s alpha particle scattering angles from gold foil, should have led to two developments which never occurred.

    First, the discovery that the atom is mainly void is a confirmation of the general prediction from the Fatio-LeSage gravity mechanism that – for exchange radiation to cause gravity – gravity acts on subatomic constituents which are small enough for the gravity causing radiation to penetrate the volume of the earth and not merely affect the superficial atomic surface area of a planet or other object.

    Second, Rutherford’s nuclear atom had the problem that all the positive charges were confined to a small nucleus.  To prevent the Coulomb repulsion of those protons from blowing the nucleus apart, there is a strong but short-ranged nuclear attractive force, mediated by pion exchange.  One of the physical reasons for the short range of this attractive force is indicated by an objection to the Fatio-LeSage mechanism: massive exchange radiation (such as pions) which pushes particles together will undergo scattering reactions in the vacuum, a little like a gas, which over a distance on the order of the mean-free-path will deflect pressure into the ‘shadow’ region between particles, preventing ‘attraction’ at greater distances by equalizing pressure.  A mathematically equivalent way of describing this range limitation is the uncertainty principle, as Popper showed.  The old idea that the limited range of the strong force is due to the massiveness of the gauge bosons is simply plain wrong: gluons in quantum chromodynamics don’t have any mass.
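
    As a rough check of the ‘uncertainty principle’ way of stating the range limit, the range of a pion-mediated force is of order ħ/(mc); here is a sketch in Python using the standard charged-pion mass (an order-of-magnitude estimate only, not a figure from the text above):

        hbar = 1.0545718e-34                     # J s
        c = 2.998e8                              # m/s
        m_pion = 139.57e6 * 1.602e-19 / c**2     # charged pion mass in kg, from 139.57 MeV

        print(hbar / (m_pion * c))               # ~1.4e-15 m, the familiar ~1 fm nuclear range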

    This isn’t the whole story.  There is also a cutoff limit on the production of loops of pions in the vacuum, which means that they are only created out to a short range around a particle.  The basis of quantum field theory is abstract mathematical modelling in which all physical mechanisms, beyond perhaps the vital Feynman diagrams, are seen as heuristic philosophy, unnecessary baggage, or inconsistent, unpredictive speculation.  If you do overcome these objections, you then have the problem that people are prejudiced against the approach without first checking whether these objections have been overcome.  Hence the situation of submitting a physical prediction which agrees with reality to a journal and having it rejected, unread, as being ‘speculative’ or even merely an ‘alternative to a currently accepted theory’.  However, as the failure of mainstream string theory indicates, even widely published and celebrated, ‘currently accepted’ speculations can fail.

    The cosmological constant’s ‘supporting’ data just show a lack of gravitational deceleration of distant supernovae, with no proof that this is due to dark energy offsetting gravity; instead it was predicted long before the observations, because a fall in gravitational strength is consistent with the serious redshift-caused energy drop of gravitons (or whatever exchange radiation causes gravity) when the gravitational charges (masses in the universe) are receding from one another relativistically due to recession in the big bang.

    The GUT (grand unified theory) scale unification may be wrong itself. The Standard Model might not turn out to need completion by supersymmetry: the QED electric charge rises as you get closer to an electron because there’s less polarized vacuum to shield the core charge. Thus, a lot of electromagnetic energy is absorbed in the vacuum above the IR cutoff, producing loops. It’s possible that the short-ranged nuclear forces are powered by this energy absorbed by the vacuum loops.

    In this case, energy from one force (electromagnetism) gets used indirectly to produce pions and other particles that mediate nuclear forces. This mechanism for sharing gauge boson energy between different forces in the Standard Model would get rid of supersymmetry which is an attempt to get three lines to numerically coincide near the Planck scale. With the strong and weak forces caused by energy absorbed when the polarized vacuum shields electromagnetic force, when you get to very high energy (bare electric charge), there won’t be any loops because of the UV cutoff so both weak and strong forces will fall off to zero. That’s why it’s dangerous to just endlessly speculate about only one theory, based on guesswork extrapolations of the Standard Model, which doesn’t have evidence to confirm it.
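
    For comparison, the textbook one-loop QED running of the coupling (electron loop only) shows the electric charge rising as the polarized vacuum is penetrated. This Python sketch is standard QED, not the energy-sharing mechanism proposed above:

        import math

        alpha0 = 1 / 137.036          # low-energy fine structure constant
        m_e = 0.000511                # electron mass, GeV

        def alpha(Q):
            """One-loop QED running coupling (electron loop only), valid for Q >> m_e."""
            return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log((Q / m_e) ** 2))

        for Q in (0.001, 1.0, 91.0):  # GeV
            print(Q, 1 / alpha(Q))

        # 1/alpha falls from ~137 towards ~134.5 at the Z mass with the electron loop
        # alone; including all charged fermion loops gives the measured ~128.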

    UPDATE, 25 Feb. ’07: 

    Here’s a recent comment by Q on the Not Even Wrong blog which explains the problems in applying string theory to particle physics, as opposed to the lesser problems in applying general relativity to cosmology:

    Q Says:
    February 25th, 2007 at 6:52 am

    … The small positive cosmological constant, or some alternative idea which does the same job, is required to fit GR to the observations of supernovae redshifts, explaining why gravity isn’t slowing down the recession.

    String theory by contrast provides speculative explanations for things which are not observed, such as unification at the Planck scale (inventing supersymmetry to explain that speculation), and inventing gravitons (inventing supergravity to explain gravitons). String theory thus ‘explains’ speculations, not observations.

    The point Peter made before about cosmology is that at least it is being led by observations, unlike string theory.

    Mainstream models for observations might turn out wrong, but that’s better than being not even wrong. Approximations like flat earth theory, caloric, and phlogiston could later be disproved by evidence. Epicycles could provide an endlessly complex mathematical way of representing observed data, but were eventually replaced by Kepler’s simpler laws for convenience, which in turn could be explained by a simple inverse square force law.

    String theory doesn’t even model or duplicate any existing observations, so it is worse than Ptolemy’s epicycles: Feynman’s criticism of string theory was largely that it doesn’t address the parameters of the standard model. It’s not even wrong because it doesn’t even model anything known to exist.

    ———–

    Dark matter similarly has this problem (lack of evidence), because, as Q points out, Cooperstock and Tieu have explained galactic rotation ‘evidence’ for dark matter as a GR effect (instead of an effect of dark matter):

    ‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

    –  http://arxiv.org/abs/astro-ph/0507619, pp. 17-18.

    For an estimate of the density of the universe from a gravity mechanism, see the proof for G here which, given G, predicts the density.

    The charges in quantum gravity are masses, which are the Higgs field effect on standard model charges.

    The Higgs field – which so far isn’t detected – would need to interact with standard model fields for it to give the standard model charges their masses.

    Comment to Cosmic Variance:

     nc on Feb 26th, 2007 at 12:33 pm

    … suppose it were possible to couple a classical theory of gravity to QFT. How do you know which classical theory of gravity? There are infinitely many background-independent classical theories of gravity, the Einstein-Hilbert action is just the one that keeps the dominant term at low energies. So it appears you still have a problem at the Planck scale.

    (The quantum version of this question is the problem of nonrenormalizability of gravitational theories, which as far I can tell from zillions of blog threads on the topic, LQG ignores completely.) – anon.

    To answer the first point.  You choose the classical theory of gravity which, when coupled to QFT, is based on verified facts and makes accurate predictions.

    Regarding the second point.  Renormalization in gravitation will be a change in the effective value of gravitational charge, i.e., mass.  Mass is supposed to be given by the Higgs mechanism, which must be gravitational charge because Einstein’s equivalence principle says gravitational mass is the same as inertial mass.

    Renormalization of electric charge in QED is explained by the polarization of the vacuum around the bare core charge, which cancels part of the latter as observed from a distance.  You can’t apparently polarize the vacuum to shield gravitational force as you can for electric force.

    Polarization in an electric field works because virtual positive charges get attracted closer to the bare core negative charge of a particle than virtual negative charges do, so the virtual charges give a net radial electric field which opposes and cancels part of the core charge.

    Clearly this can’t occur in a gravitational field because all mass moves the same way; there are no opposite poles for mass and gravity, so no polarization or shielding occurs, at least directly.  This makes it hard to see how any quantum theory of gravity can physically include renormalization of gravitational charge (mass).

    However, the equivalence principle between gravitational and inertial mass in the context of quantum gravity has been attacked by Rabinowitz in http://arxiv.org/abs/physics/0601218 where it is argued:

    “… a theory of quantum gravity may not be possible unless it is not based upon the equivalence principle, or if quantum mechanics can change its mass dependence. …”

    In QED, both electric charge and electron mass are renormalized parameters and are scaled by similar factors.  This seems to suggest that maybe the source of the electron’s mass is the mass-giving (‘Higgs’) vacuum field outside the polarization region, if mass is associated with the electron by a coupling depending on the electric field of the electron.  Thus, the polarization-shielded electric charge (not the core or bare electron charge) would be responsible for coupling external mass-giving Higgs bosons to the electron.  So renormalization of the electric field automatically causes renormalization of the gravitational charge (mass), because the shielded electron charge is responsible for the vacuum field effects which give mass to an electron.

    In the Standard Model, all masses are given to particles by a field which is separate from electric charge.  The only way such a mass-giving field can couple to an electron core that has no mass of its own is obviously by some kind of coupling to the electron’s electric field.  So renormalization effects in quantum gravity are likely to be indirect, i.e., the effect of electric field renormalization (which does have a very simple, empirically confirmed physical mechanism: vacuum charge radial polarization).

    Copy of a comment, 26 Feb. 2007:

    http://riofriospacetime.blogspot.com/2007/02/photons-behind-bars-breaking-loose.html

    Hi Louise,

    For decades Niels Bohr’s Complementarity Principle was thought to prevent the wave and particle qualities of light from being measured simultaneously. Recently physicist Shahriar Afshar proved this wrong with a very simple experiment. As a reward, the physics community attacked everything from Afshar’s religion to his ethnicity. Prevented from publishing a paper, even on arxiv, he “went public” to NEW SCIENTIST magazine.

    Bohr’s Complementarity and Correspondence Principles are just religion; they’re not based on evidence.

    The experimental evidence is that Maxwell’s empirical equations are valid apart from vacuum effects which appear close to electrons, where the electric field is above the pair-production threshold of about 10^18 V/m.

    This is clear even in Dyson’s Advanced Quantum Mechanics. There is a physical mechanism – pair production – which causes chaotic phenomena above the IR cutoff, that is within a radius of approx. 10^{-15} m from a unit electric charge like an electron.
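
    A rough consistency check in Python (ordinary Coulomb’s law with standard constants; the ~10^18 V/m pair-production threshold is the figure quoted above):

        import math

        e = 1.602e-19          # electron charge, C
        eps0 = 8.854e-12       # vacuum permittivity, F/m
        r = 1e-15              # metres, the radius quoted above

        E_field = e / (4 * math.pi * eps0 * r ** 2)
        print(E_field)         # ~1.4e21 V/m at 1e-15 m, well above the ~1e18 V/m
                               # threshold, so pair-production (loop) effects dominate
                               # inside this radius.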

    Beyond that range, the field is far simpler (better described by classical physics), because the field doesn’t have enough energy to create loops of particles from the Dirac sea.

    What Bohr tries to do is to freeze the understanding of quantum theory at the 1927 Solvay Congress level, which is unethical.

    Bohr went wrong with his classical theory of the atom in 1917 or so.

    Rutherford wrote to Bohr asking a question like “how do the electrons know when to stop when they reach the ground state (i.e., why don’t they carry on spiralling into the nucleus, radiating more and more energy as Maxwell’s light model suggests)?”

    Bohr should have had the sense to investigate whether radiation continues. We know from Yang-Mills theory and the Feynman diagrams that electric force results from virtual (gauge boson) photon exchanges between charges!

    What is occurring is that Bohr ignored the multibody effects of radiation whereby every atom and spinning charge is radiating! All charges are radiating, or else they wouldn’t have electric charge! (Yang-Mills theory.)

    Let the normal rate of exchange of energy (emission and reception per electron) be X watts. When an electron in an excited state radiates a real photon, it is radiating at a rate exceeding X.

    As it radiates, it loses energy and falls to the ground state, where it reaches equilibrium, with emission and reception of gauge boson radiant power equal to X.  [Although you might naively expect the classical Maxwellian radiation emission rate to be greatest in the ground state, you also need to take account of the effect of electron spin changes on the radiation emission rate in the full analysis; see ‘An Electronic Universe, Part 2’, Electronics World, April 2003.  I will try to put a detailed paper about this effect on the internet soon.]

    I did a rough calculation of the transition time at http://cosmicvariance.com/2006/11/01/after-reading-a-childs-guide-to-modern-physics/#comment-131020

    Once you know that the Yang-Mills theory suggests electric and other forces are due to exchange of radiation, you know why there is a ground state (i.e., why the electron doesn’t keep converting its kinetic energy into radiation and spiral into the hydrogen nucleus).

    In the Yang-Mills picture, the ground state energy level corresponds to equilibrium: the power the electron radiates balances the Yang-Mills exchange radiation power it receives.

    The way Bohr should have analysed this was to first calculate the radiative power of an electron in the ground state using its acceleration, which is a = (v^2)/x. Here x = 5.29*10^{-11} m (see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydr.html) and the value of v is only c.alpha = c/137.

    Thus the appropriate (non-relativistic) radiation formula to use is: power P = (e^2)(a^2)/(6*Pi*Permittivity*c^3), where e is electron charge. The ground state hydrogen electron has an astronomical centripetal acceleration of a = 9.06*10^{22} m/s^2 and a radiative power of P = 4.68*10^{-8} Watts.

    That is the precise amount of background Yang-Mills power being received by electrons in order for the ground state of hydrogen to exist. The historic analogy for this concept is Prevost’s 1792 idea that constant temperature doesn’t correspond to no radiation of heat, but instead corresponds to a steady equilibrium (as much power radiated per second as received per second). This replaced the old Bohr-like Phlogiston and Caloric philosophies with two separate real, physical mechanisms for heat: radiation exchange and kinetic theory. (Of course, the Yang-Mills radiation determines charge and force-fields, not temperature, and the exchange bosons are not to be confused with photons of thermal radiation.)

    Although P = 4.68*10^{-8} Watts sounds small, remember that it is the power of just a single electron in orbit in the ground state, and when the electron undergoes a transition, the photon carries very little energy, so the equilibrium quickly establishes itself: the real photon of heat or light (a discontinuity or oscillation in the normally uniform Yang-Mills exchange process) is emitted in a very small time!

    Take a photon of red light, which has a frequency of 4.5*10^{14} Hz. By Planck’s law, E = hf = 3.0*10^{-19} Joules. Hence the time taken for an electron with a ground state power of P = 4.68*10^{-8} Watts to emit a photon of red light in falling back to the ground state from a suitably excited state will be only on the order of E/P = (3.0*10^{-19})/(4.68*10^{-8}) = 6.4*10^{-12} second.
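
    A numerical check in Python of the figures in the last few paragraphs (non-relativistic Larmor formula and standard constants):

        import math

        e = 1.602e-19                 # electron charge, C
        eps0 = 8.854e-12              # vacuum permittivity, F/m
        c = 2.998e8                   # speed of light, m/s
        h = 6.626e-34                 # Planck's constant, J s
        alpha = 1 / 137.036           # fine structure constant
        r = 5.29e-11                  # Bohr radius, m

        v = alpha * c                                        # ground-state speed, ~c/137
        a = v ** 2 / r                                       # centripetal acceleration, ~9.0e22 m/s^2
        P = e ** 2 * a ** 2 / (6 * math.pi * eps0 * c ** 3)  # Larmor power, ~4.7e-8 W

        E_photon = h * 4.5e14                                # red-light photon energy, ~3.0e-19 J
        print(a, P, E_photon / P)                            # emission time E/P, ~6e-12 s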

    This agrees with the known facts. So the quantum theory of light is compatible with classical Maxwellian theory!

    Now we come to the nature of a light photon and the effects of spatial transverse extent: path integrals.

    ‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

    – Feynman, QED, Penguin, 1990, page 54.

    That’s the double-slit experiment, etc. The explanation behind it is a flaw in Maxwell’s electromagnetic wave illustration:

    http://www.edumedia-sciences.com/a185_l2-transverse-electromagnetic-wave.html

    The problem with the illustration is that the photon goes forward with the electric (E) and magnetic (B) fields orthogonal to both the direction of propagation and to each other, but with the two phases of electric field (positive and negative) behind one another.

    That way can’t work, because the magnetic field curls don’t cancel one another’s infinite self inductance.

    First of all, the illustration is a plot of E, B versus propagation dimension, say the X dimension. So it is one dimensional (E and B depict field strengths, not distances in Z and Y dimensions!).

    The problem is that for the photon to propagate, the two different curls of the magnetic field (one way in the positive electric field half cycle, the other way in the negative electric field half cycle) must partly cancel one another to prevent the photon having infinite self inductance: this is similar to the problem of sending a propagating pulse of electric energy into a single wire.

    It doesn’t work: the wire radiates, the pulse dies out quickly. (This is only useful for antennas.)

    To send a propagating pulse of energy, a logic step, in an electrical system, you need two conductors forming a transmission line. In a Maxwellian photon, there can be no cancellation of infinite inductance from each opposing magnetic curl, because each is behind or in front of the other. Because fields only travel at the speed of light, and the whole photon is moving ahead with that speed, there can be no influence of each half cycle of a light photon upon the other half cycle.

    I’ve illustrated this here:

    photon

    If you look at Maxwell’s equations, they describe how, cyclically, a varying electric field induces a “displacement current” in the vacuum which in turn creates a magnetic field curling around the current, and so on. They don’t explain the dynamics of the photon or light wave in detail.  [For the correct physical mechanism behind the ‘displacement current’ equation – which again is a radiation exchange effect, not vacuum charge, because the electric fields and frequencies involved are too small (below the IR cutoff) for pair production and vacuum loop effects – see http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html.  For the actual Maxwell equations and how they are related to the photon see for example http://quantumfieldtheory.org/Proof.htm, which will soon be edited and improved in structure with a table of contents linked to the different sections.]

    One thing that’s interesting about it is this: electromagnetic fields are composed of exchange radiation according to Yang-Mills quantum field theory.

    The photon is composed of electromagnetic fields according to Maxwell’s theory.

    Hence, the photon is composed of electromagnetic fields which in turn are composed of gauge bosons exchange radiation. The photon is a disturbance in the existing field of exchange radiation between the charges in the universe. The apparently cancelled electromagnetic field you get when you pass two logic steps with opposite potentials through each other in a transmission line, is not true cancellation since although you get zero electric field (and zero electrical resistance, as Dr David S. Walton showed!) while those pulses overlap, their individual electric fields re-emerge when they have passed through one another.

    So if you are some distance from an atom, the “neutral” electric field is not the absence of any field, but the superposition of two fields. (The background “cancelled” electromagnetic field is probably the cause of gravitation, as Lunsford’s unification suggests; I’ve done a calculation of this here (top post).)

    Aspect’s “entanglement” seems to be due to a wavefunction collapse error in quantum mechanics, as Dr Thomas S. Love has shown: when you take a measurement on a steady state system like an atom, you need to switch over the mathematical model you are using from the time-independent to the time-dependent Schroedinger equation, because your measurement causes a perturbation to the state of the system (e.g., your probing electron upsets the state of the atom, causing a time-dependent effect). This switch-over in equations causes “wavefunction collapse”; it is not a real physical phenomenon travelling instantly! This is the perfect example of confusing a mathematical model with reality.

    Aspect’s experimental results show that the polarizations of the same-source photons do correlate. All this means is that the measurement paradox doesn’t apply to photons. A photon is moving at light speed, so it doesn’t have any internal time whatsoever (unlike electrons!). Something which is frozen in time, like a photon, can’t change state. To change the nature of the photon it has to be absorbed and re-emitted, as in the case of Compton scattering.

    Electrons can have their state changed by being measured, since they aren’t going at light speed. Time only halts for things going at speed c.

    So Heisenberg’s uncertainty principle should strictly apply to measuring electron spins, as Einstein, Podolsky, and Rosen suggested in the Physical Review in 1935, but it shouldn’t apply to photons. It’s the lack of physical dynamics in modern physics which creates such confusion. The mathematician who lacks physical mechanisms is in fairyland, and drags down too many experimental physicists and others who believe the metaphysical (non-mechanistic) interpretations of the theory. That’s why string theory and other unconnected-to-any-experimental-fact drivel flourishes.

    ——-

    For more information on the power transmission line TEM (transverse electromagnetic) logic step which is useful background for the discussion above, see:

  • One of Ivor Catt’s few useful pages
  • Another useful (semi-correct) Ivor Catt page
  • Ivor Catt’s half-correct article from Wireless World 1978
  • Catt, Davidson, Walton book (semi-correct): Digital Hardware Design
  • Catt’s entirely false attack on “Maxwell’s equations”
  • Correction of Catt’s errors
  •  Quantum mechanics

    There is a strong analogy between the ‘string theory’ mentality and the ‘Copenhagen Interpretation’ mentality, with dictatorial ‘leaders’ of science abusing power to claim that spin-2 gravitons via string are the only way forward allowed.  This is a repeat of the Copenhagen Interpretation groupthink, which was supported by John von Neumann’s false ‘disproof’ of hidden variables theories.  When Bohm refuted von Neumann’s ‘disproof’, Oppenheimer and others simply ridiculed Bohm personally and refused to discuss his alternative ideas.  The actual so-called ‘hidden variables’ are gauge bosons and virtual fermions.

    It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

    ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)

    Experimental evidence:

    ‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

    ‘Statistical Uncertainty. This is the kind of uncertainty that pertains to fluctuation phenomena and random variables. It is the uncertainty associated with ‘honest’ gambling devices…

    Real Uncertainty. This is the uncertainty that arises from the fact that people believe different assumptions…’ – H. Kahn & I. Mann, Techniques of systems analysis, RAND, RM-1829-1, 1957.

    Let us deal with the physical interpretation of the periodic table using quantum mechanics very quickly. Niels Bohr in 1913 came up with an orbit quantum number, n, which comes from his theory and takes positive integer values (1 for the first or K shell, 2 for the second or L shell, etc.). In 1915, Arnold Sommerfeld (of 137-number fame) introduced an elliptical-orbit number, l, which can take values of n – 1, n – 2, n – 3, … 0. Back in 1896 Pieter Zeeman’s work on orbital direction magnetism gave a quantum number m with possible values l, l – 1, l – 2, …, 0, …, –(l – 2), –(l – 1), –l. Finally, in 1925 George Uhlenbeck and Samuel Goudsmit introduced the electron’s magnetic spin direction effect, s, which can only take values of +1/2 and –1/2. (Back in 1896, Zeeman had observed the phenomenon of spectral lines splitting when the atoms emitting the light are in a strong magnetic field, which was later explained by the spin of the electron. Other experiments confirm electron spin. The actual spin is in units of h/(2π), so the actual amounts of spin angular momentum are +½h/(2π) and –½h/(2π).)

    To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has n, l, m, and s values of 1, 0, 0, +1/2 and 1, 0, 0, -1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’, as it prevents electrons duplicating one another. (Proposed by Wolfgang Pauli in 1925; note the exclusion principle only applies to fermions with half-integral spin like the electron, and does not apply to bosons, which all have integer spin, like light photons and gravitons. While you use Fermi–Dirac statistics for fermions, you have to use Bose–Einstein statistics for bosons, on account of spin. Non-spinning particles, like gas molecules, obey Maxwell–Boltzmann statistics.) Hence, the first shell can take only 2 electrons before it is full. (It is physically due to a combination of magnetic and electric force effects from the electron, although the mechanism must be officially ignored by order of the Copenhagen Interpretation ‘Witchfinder General’, like the issue of the electron spin speed.)
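    As a quick check on this bookkeeping, here is a minimal Python sketch (my own addition, not from the original text) that enumerates the allowed (n, l, m, s) combinations and reproduces the familiar shell capacities 2, 8, 18, 32:

```python
# Enumerate allowed quantum-number combinations (n, l, m, s) per shell,
# applying the Pauli exclusion principle: no two electrons share all four numbers.
from fractions import Fraction

def shell_capacity(n):
    """Count the distinct (l, m, s) combinations allowed for principal number n."""
    count = 0
    for l in range(n):                      # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):          # m = -l, ..., 0, ..., +l
            for s in (Fraction(1, 2), Fraction(-1, 2)):   # spin up / spin down
                count += 1
    return count

for n in range(1, 5):
    print(f"Shell n = {n}: capacity {shell_capacity(n)}")
# Prints 2, 8, 18, 32 - the closed-shell electron counts behind the periodic table.
```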

    For the second shell, we find it can take 8 electrons, with l = 0 for the first two (an elliptical subshell, if we ignore the chaotic effect of wave interactions between multiple electrons), and l = 1 for the other 6.  Experimentally we find that elements with closed, full shells of electrons, i.e., a total of 2 or 8 electrons in these shells, are very stable. Hence, helium (2 electrons) and neon (2 electrons in the first shell plus 8 filling the second shell) will not burn. Now read the horses*** from ‘expert’ Sir James Jeans: ‘The universe is built so as to operate according to certain laws. As a consequence of these laws atoms having certain definite numbers of electrons, namely 6, 26 to 28, and 83 to 92, have certain properties, which show themselves in the phenomena of life, magnetism and radioactivity respectively … the Great Architect of the Universe now begins to appear as a pure mathematician.’ – Sir James Jeans, MA, DSc, ScD, LLD, FRS, The Mysterious Universe, Penguin, 1938, pp. 20 and 167.

    One point I’m making here, aside from the simplicity underlying the use of quantum mechanics, is that it has a physical interpretation for each aspect (it is also possible to predict the quantum numbers from abstract mathematical ‘law’ theory, which is not mechanistic, so is not enlightening). Quantum mechanics is only statistically exact if you have one electron, i.e., a single hydrogen atom. As soon as you get to a nucleus plus two or more electrons, you have to use mathematical approximations or computer calculations to estimate results, which are never exact. This problem is not the statistical problem (uncertainty principle), but a mathematical problem in applying the theory exactly to difficult situations. For example, if you estimate a 2% probability with the simple theory, it is exact provided the input data is reliable. But if you have 2 or more electrons, the calculations estimating where the electron will be have an uncertainty of their own, so you might get 2% +/- a factor of 2, or something similar, depending on how much computer power and skill you use on the approximate solution.

    Derivation of the Schroedinger equation (an extension of a Wireless World heresy of the late Dr W. A. Scott-Murray), a clearer alternative to Bohm’s ‘hidden variables’ work…

    The equation for waves in a three-dimensional space, extrapolated from the equation for waves in gases, is: ∇²ψ = –ψ(2πf/v)², where ψ is the wave amplitude. Notice that this sort of wave equation is used to model waves in particle-based situations, i.e., waves in situations where there are particles of gas (gas molecules, sound waves). So we have particle-wave duality resolved by the fact that any wave equation is a statistical model for the orderly/chaotic group behaviour of particles (3+ body Poincare chaos). The term ∇²ψ is just a shorthand (the ‘Laplacian operator’) for the sum of second-order differentials: ∇²ψ = d²ψ/dx² + d²ψ/dy² + d²ψ/dz².

    (Another popular use for the Laplacian operator is heat diffusion when convection doesn’t occur – such as in solids – since the rate of change of temperature is dT/dt = (k/Cv)∇²T, where k is thermal conductivity and Cv is the specific heat capacity measured at fixed volume.) The symbol f is the frequency of the wave, while v is the velocity of the wave. The 2π is in there because f/v has units of reciprocal metres, so 2π is needed to turn these ‘reciprocal metres’ into ‘reciprocal wavelengths’. Get it? All waves obey the wave relation v = λf, where λ is wavelength. Hence: ∇²ψ = –ψ(2π/λ)².

    Louis de Broglie, who invented ‘wave-particle duality’ (as waves in the physical, real ether, but that part was suppressed), gave us the de Broglie equation for momentum: p = mc = (E/c²)c = [(hc/λ)/c²]c = h/λ. Hence: ∇²ψ = –ψ(2πmv/h)². Isaac Newton’s theory suggests the equation for kinetic energy, E = ½mv² (although the term ‘kinetic theory’ was I think first used in an article published in a magazine edited by Charles Dickens, a lot later). Hence, v² = 2E/m. So we obtain: ∇²ψ = –8ψmE(π/h)². Finally, the total energy, W, for an electron is in part electromagnetic energy U, and in part kinetic energy E (already incorporated). Thus, W = U + E. This rearranges using very basic algebra to give E = W – U. So now we have: ∇²ψ = –8ψm(W – U)(π/h)².

    This is Schroedinger’s basic equation for the atomic electron! The electromagnetic energy is U = –qe²/(4πεR), where qe is the charge of the electron and ε is the electric permittivity of the spacetime vacuum or ether. By extension of Pythagoras’ theorem into 3 dimensions, R = (x² + y² + z²)^½. So now we understand how to derive Schroedinger’s basic wave equation, and as Dr Scott-Murray pointed out in his Wireless World series of the early 1980s, it’s child’s play. It would be better to teach this to primary school kids to illustrate the value of elementary algebra, than hide it as heresy or unorthodox, contrary to Bohr’s mindset!
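    As a numerical sanity check on the equation just derived (my own sketch, not part of the original argument), the following Python script substitutes the known hydrogen ground-state wavefunction ψ = exp(–r/a₀) and total energy W = –13.6 eV into ∇²ψ = –8ψm(W – U)(π/h)² and confirms that both sides agree at several radii:

```python
# Verify that psi = exp(-r/a0) with W = -13.6 eV satisfies
#   laplacian(psi) = -8 * psi * m * (W - U) * (pi/h)**2,  U = -e^2/(4 pi eps0 r).
import math

h    = 6.62607015e-34      # Planck constant, J s
m    = 9.1093837015e-31    # electron mass, kg
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
a0   = 5.29177210903e-11   # Bohr radius, m

W = -e**2 / (8 * math.pi * eps0 * a0)   # ground-state total energy, about -13.6 eV

def psi(r):
    return math.exp(-r / a0)

def laplacian_psi(r, dr=1e-14):
    # Radial Laplacian of a spherically symmetric function, (1/r^2) d/dr (r^2 dpsi/dr),
    # evaluated by nested central differences.
    f = lambda x: x**2 * (psi(x + dr) - psi(x - dr)) / (2 * dr)
    return (f(r + dr) - f(r - dr)) / (2 * dr) / r**2

print(f"W = {W / e:.2f} eV")
for r in (0.5 * a0, a0, 2 * a0, 5 * a0):
    U   = -e**2 / (4 * math.pi * eps0 * r)
    lhs = laplacian_psi(r)
    rhs = -8 * psi(r) * m * (W - U) * (math.pi / h)**2
    print(f"r = {r / a0:.1f} a0:  LHS = {lhs:.4e}   RHS = {rhs:.4e}")
# The two columns match, confirming the algebra above for the hydrogen ground state.
```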

    Let us now examine the work of Erwin Schroedinger and Max Born. Since the nucleus of hydrogen is 1836 times as massive as the electron, it can in many cases be treated as at rest, with the electron zooming around it. Schroedinger in 1926 took the concept of particle-wave duality and found an equation that could predict the probability of an electron being found within any distance of the nucleus. The full theory includes, of course, electron spin effects and the other quantum numbers, and so the mathematics at least looks a lot harder to understand than the underlying physical reality that gives rise to it. At first, Schroedinger could not calculate anything with his equation because he had no idea what the hell he was doing with the wavefunction ψ. Max Born, naively perhaps, suggested it is like water waves, where it is the amplitude of the wave that needs to be squared to get the energy of the wave, and thus a measure of the mass-energy to be found within a given space. (Likewise, the ‘electric field strength’ (volts/metre) from a radio transmitter mast falls off generally as the inverse of distance, although the energy intensity (watts per square metre) falls off as the inverse-square law of distance.)

    Hence, by Born’s conjecture, the energy per unit volume of the electron around the atom is E ~ ψ². If the volume is a small, 3-dimensional cube in space, dx·dy·dz in volume, then the proportion of (or probability of finding) the electron within that volume will thus be: dx·dy·dz·ψ²/[∫∫∫ψ² dx·dy·dz]. Here, ∫ is the integral from 0 to infinity. Thus, the relative likelihood of finding the electron in a thin shell between radii of r and a will be the integral of the product of surface area (4πr²) and ψ², over the range from r to a. The number we get from this integral is converted into an absolute probability of finding the electron between radii r and a by normalising it: in other words, dividing it by the similarly calculated relative probability of finding the electron anywhere between radii of 0 and infinity. Hence we can understand what we are doing for a hydrogen atom.

    The version of Schroedinger’s wave equation above is really a description of the time-averaged (or time-independent) chaotic motion of the electron, which is why it gives a probability of finding the electron in a given zone, not an exact location for the electron. There is also a time-dependent version of the Schroedinger wave equation, which can be used to obfuscate rather well. But let’s have a go anyhow. To find the time-dependent version, we need to treat the electromagnetic energy U as varying in time. If U = hf, from de Broglie’s use of Planck’s equation, and because the electron obeys the wave equation, its time-dependent frequency satisfies: f² = –(2πψ)⁻²(dψ/dt)², where f² = U²/h². Hence, U² = –h²(2πψ)⁻²(dψ/dt)². To find U we need to remember from basic algebra that we will lose possible mathematical solutions unless we allow for the fact that U may be negative. (For example, if I think of a number, square it, and then get 4, that does not mean I thought of the number 2: I could have started with the number –2.) So we need to introduce i = √(–1). Hence we get the solution: U = ih(2πψ)⁻¹(dψ/dt). Remembering E = W – U, we get the time-dependent Schroedinger equation.

    Let us now examine how fast the electrons go in the atom in their orbits, neglecting spin speed. Assuming simple circular motion to begin with, the inertial ‘outward’ force on the electron is F = ma = mv²/R, which is balanced by the electric ‘attractive’ inward force F = (qe/R)²/(4πε). Hence, v = ½qe/(πεRm)^½. (A quick numerical check of this formula is given at the end of this section, below.)

    Now for Werner Heisenberg’s ‘uncertainty principle’ of 1927. This is mathematically sound in the sense that the observer always disturbs the signals he observes. If I measure my car tyre pressure, some air leaks out, reducing the pressure. If you have a small charged capacitor and try to measure the voltage of the energy stored in it with an old-fashioned analogue volt meter, you will notice that the volt meter itself drains the energy in the capacitor pretty quickly. A digital meter contains an amplifier, so the effect is less pronounced, but it is still there. A Geiger counter held in a fallout area absorbs some of the gamma radiation it is trying to measure, reducing the reading, as does the presence of the body of the person using it. A blind man searching for a golf ball by swinging a stick around will tend to disturb what he finds. When he feels and hears the click of the impact of his stick hitting the golf ball, he knows the ball is no longer where it was when he detected it. If he prevents this by not moving the stick, he never finds anything. So it is a reality that the observer always tends to disturb the evidence by the very process of observing the evidence. If you even observe a photograph, the light falling on the photograph very slightly fades the colours. With something as tiny as an electron, this effect is pretty severe. But that does not mean that you have to make up metaphysics to stagnate physics for all time, as Bohr and Heisenberg did when they went crazy. Really, Heisenberg’s law has a simple causal meaning to it, as I’ve just explained. If I toss a coin and don’t show you the result, do you assume that the coin is in a limbo, indeterminate state between two parallel universes, in one of which it landed heads and in the other of which it landed tails?
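    Here is the promised check of v = ½qe/(πεRm)^½ (again my own sketch): evaluated at the Bohr radius it gives about 2.2 × 10⁶ m/s, i.e. the well-known c/137 ground-state electron speed mentioned further down this post.

```python
# Electron orbital speed from the force balance m v^2 / R = (qe/R)^2 / (4 pi eps0),
# i.e. v = (1/2) * qe / sqrt(pi * eps0 * R * m), evaluated at the Bohr radius.
import math

qe   = 1.602176634e-19      # electron charge, C
m    = 9.1093837015e-31     # electron mass, kg
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c    = 2.99792458e8         # speed of light, m/s
R    = 5.29177210903e-11    # Bohr radius, m

v = 0.5 * qe / math.sqrt(math.pi * eps0 * R * m)
print(f"v = {v:.3e} m/s = c/{c / v:.1f}")    # about 2.188e6 m/s, i.e. c/137
```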

    Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment.  Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out?  Clearly, this would violate conservation of energy!  Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject.  Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation was correct, then half of the total energy would be unaccounted for in the dark fringes.  The photons are not arriving at the dark fringes.  Instead, they arrive in the bright fringes.
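    To illustrate the energy bookkeeping numerically (my own illustrative sketch, not Chalmers’ actual calculation; the wavelength, slit separation and geometry are arbitrary assumed values), the following Python script sums the amplitude contributions from the two slits over a distant screen.  The interference term redistributes energy from the dark fringes into the bright fringes, but the total integrated intensity is essentially unchanged, so no energy goes missing:

```python
# Two-slit phasor sum: interference redistributes energy into the bright fringes
# without changing the total energy arriving at the screen.
import numpy as np

wavelength = 500e-9                      # 500 nm light (assumed value)
k = 2 * np.pi / wavelength
slit_sep = 50e-6                         # 50 micron slit separation (assumed value)
L = 1.0                                  # slit-to-screen distance, m
x = np.linspace(-0.05, 0.05, 200001)     # positions across the screen, m
dx = x[1] - x[0]

# Path lengths from each slit to each point on the screen
r1 = np.hypot(L, x - slit_sep / 2)
r2 = np.hypot(L, x + slit_sep / 2)

# Complex amplitudes (Feynman 'arrows') with 1/r fall-off
a1 = np.exp(1j * k * r1) / r1
a2 = np.exp(1j * k * r2) / r2

interfering = np.sum(np.abs(a1 + a2) ** 2) * dx              # slits acting together
separate    = np.sum(np.abs(a1) ** 2 + np.abs(a2) ** 2) * dx  # intensities simply added

print(f"Integrated intensity, interfering : {interfering:.6e}")
print(f"Integrated intensity, added       : {separate:.6e}")
# The totals agree closely: the photons missing from the dark fringes
# turn up in the bright fringes, as argued above.
```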

    The interference of radio waves and other phased waves is also known as the Hanbury-Brown-Twiss effect, whereby if you have two radio transmitter antennae, the signal that can be received depends on the distance between them: moving them slightly apart or together changes the relative phase of the transmitted signal from one with respect to the other, cancelling the signal out or reinforcing it.  (It depends on the frequencies and amplitudes as well: if both transmitters are on the same frequency and have the same output amplitude and radiated power, then perfectly destructive interference occurs when they are exactly out of phase, and perfect reinforcement – constructive interference – when they are exactly in phase.)  This effect also actually occurs in electricity, replacing Maxwell’s mechanical ‘displacement current’ of vacuum dielectric charges.

    Feynman quotation

    The Feynman quotation I located is this:

    ‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out.  But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on.  The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths.  But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude.  The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’ – R. P. Feynman, QED, Penguin, London, 1990, pp. 84-5. (Emphasis added in bold.)

    ‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.  (Emphasis added in bold.)

    That’s wave particle duality explained. The path integrals don’t mean that the photon goes on all possible paths but as Feynman says, only a “small core of nearby space”.

    The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path, and near it. Similarly, you find people saying that QFT says the vacuum is full of loops of annihilation-creation. When you check what QFT actually says, those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, i.e., below the IR cutoff (beyond about 1 fm from a charge), then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, i.e., right down to zero distance from a particle, then the loops would have infinite energy and momenta and the effects of those loops on the field would be infinite, again causing problems.

    So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles).

    Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions such as superposed spin states) is a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between the time-dependent and time-independent versions of the Schroedinger equation at the time of measurement.

    Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset the orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal, there is no metaphysics involved.

    The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare.

    ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

    – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

    Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

    (Update: There are some awful grammatical errors in this post, but they are easy to spot and correct and in any case don’t detract from the mathematical physics, so editing the text is not a high priority.  It will be done when time permits.  Readers should also see http://quantumfieldtheory.org/ for some additional resources which are available online.  There is also an introduction to some other technical aspects of quantum field theory at http://en.wikipedia.org/wiki/Quantum_field_theory.)

     

    Quantum Field Theory Resources

    I’m thinking of redesigning http://quantumfieldtheory.org/ as well as rewriting and improving all the material.  Let me have any suggestions, please.

    Professor Carlo Rovelli’s Quantum Gravity book

    Rovelli deals with loop quantum gravity, the background independent quantum field theory which describes general relativity without requiring unobservable extra dimensions and other speculation.  He comments in the Preface:

    ‘We have to understand which (possibly new) notions make sense and which old notions must be discarded, in order to describe spacetime in the quantum relativistic regime.’

    In chapter 1, General Ideas and Heuristic Picture, Rovelli begins by pointing out that the inclusion of time in the Schroedinger equation, or a fixed reference frame as a spacetime background, is incompatible with the general covariance of general relativity.  In addition, there is a problem with general relativity since it uses a smooth metric in Riemannian geometry and doesn’t quantize the gravitational/inertial field.  He remarks:

    ‘The fact is that we do have plenty of information about quantum gravity, because we have quantum mechanics and we have general relativity.  Consistency with quantum mechanics and general relativity is an extremely strict restraint.’ (P. 4 of draft.)

    I agree completely with Rovelli’s assertion in subsection The Physical Meaning of General Relativity:

    ‘General relativity is the discovery that spacetime and the gravitational field are the same entity.  What we call “spacetime” is itself a physical object, in many respects similar to the electromagnetic field.  We can say that general relativity is the discovery that there is no spacetime at all.  What Newton called “space”, and Minkowski called “spacetime”, is unmasked: it is nothing but a dynamical object – the gravitational field – in a regime in which we neglect its dynamics.’ (P. 7 of draft.)

    • The argument here is that the only relevant background of spacetime is the local gravitational field.  Restricted (special) relativity is based ultimately on Einstein’s argument that a magnetic field is only observed if an electric charge is moving relative to the observer; if both observer and charge are in the same state of motion, the observer experiences no magnetic field.  Hence, only relative motion is important.  (The Michelson-Morley experiment is more controversial, because FitzGerald and Lorentz interpreted the result as implying a physical contraction of the instrument in the direction of motion caused by the physical effects of the gravitational field.  This physical contraction shortened the distance and time taken for light to travel in that direction, preventing an absolute speed of light in the background field from being detected by interference of combined light beams.  Since ‘special’ relativity preserves the contraction formulae, it is consistent in that sense with the FitzGerald-Lorentz absolute background.  Nobody of course wants to go down that road, so false arguments are made that the Michelson-Morley experiment was a measuring apparatus which had arms of the same length, and wouldn’t work with arms of different lengths.  In fact, it had arms of differing lengths in terms of wavelengths of light, because you can’t build a massive instrument with arms of identical length, and it didn’t measure speeds at all.  It only sought to find interference fringes from relative changes in the speed of light due to being rotated in a pool of mercury, so the null result is actually a refutation of relative, rather than absolute, motion.  Likewise, a person who accepts the solar system sees ‘sunrise’ as evidence of the daily planetary rotation, while Ptolemy saw exactly the same phenomena as being direct evidence that the sun orbits the planet daily.  The Michelson-Morley experiment superficially supports ‘special’ relativity, but it also supports the FitzGerald-Lorentz theory, depending on the assumptions you choose to make.  It is, however, useful in ruling out all theories which don’t include the contraction.  ‘Special’ relativity is mathematically accurate in reproducing the Lorentz transformation.)

    Rovelli comments:

    ‘The success of special relativity was rapid, and the theory is today widely empirically supported and universally accepted.  Still, I do not think that special relativity has really been fully absorbed yet: the large majority of the cultivated people, as well as a surprising high number of theoretical physicists still believe, deep in the heart, that there is something happening “right now” on Andromeda; that there is a single universal time ticking away the life of the Universe.’  (P. 7 of draft.)

    • Obviously there are definitions of absolute time, such as a figure for the age of the universe, 13,600 million years or whatever, based on the Hubble parameter and other observations, and there is also a way of determining speeds relative to the cosmic microwave background, by which a 3 milliKelvin blueshift in the 2.7 K microwave background shows the Milky Way is headed for Andromeda at 600 km/s.  If somehow the microwave background (originating from 300,000 years after the big bang) can be construed as a universal reference frame (after all, COBE’s project leader Dr George Smoot did call it ‘the face of God’), then the 600 km/s – although the matter in the Milky Way may have speeded up and slowed down by say a factor of ten due to attractions to galaxy clusters and effects of the local super group – may be a reasonable order-of-magnitude estimate for the velocity of the Milky Way since the big bang (if you think that the average speed was more than an order of magnitude higher or lower and has been slowing down or speeding up a lot, then do some computer simulations of the Milky Way’s motion relative to the cluster of surrounding galaxies to provide some evidence for that claim).  Therefore, the Milky Way would have travelled a distance of 60–6,000 km/s × (age of universe in seconds, 13,600 million years) = 0.02–2% of the radius of the universe.  Hence, this estimate shows that we are near the absolute origin of the universe, if there really exists such a thing as absolute motion and absolute time.  (The Milky Way is now being attracted toward Andromeda by gravity, and its matter has not been going in the same direction since the big bang.  Hence, it is not clear that the universe originated as a singularity at a distance of about 0.2% of the current radius of the universe, located in the direction opposite to that of Andromeda.)
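    A quick back-of-the-envelope check of those percentages (my own sketch; the 600 km/s dipole speed and 13,600 million year age are the figures quoted above):

```python
# Fraction of the universe's radius traversed by the Milky Way since the big bang,
# assuming a roughly constant peculiar velocity relative to the microwave background.
c = 3.0e8                       # speed of light, m/s
age_s = 13.6e9 * 3.156e7        # age of universe used in the text, converted to seconds
radius = c * age_s              # naive radius of the universe = c * age

for v_kms in (60, 600, 6000):   # factor-of-ten spread about the 600 km/s dipole speed
    distance = v_kms * 1e3 * age_s
    print(f"v = {v_kms:5d} km/s -> {100 * distance / radius:.2f}% of the radius")
# Prints 0.02%, 0.20% and 2.00%: on this crude measure we sit near the 'origin'.
```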

    Despite this, Rovelli’s argument does hold water: ‘Mass recession speed is 0–c in spacetime of 0–15 billion years, so outward force F = m.dv/dt ~ m(c – 0)/(t, age of universe) ~ mcH ~ 10^43 N (H is Hubble parameter). Newton’s 3rd law implies equal inward force, carried by exchange radiation, predicting cosmology, accurate general relativity, SM forces and particle masses.’

    The failure of Hubble’s presentation is its expression of recession velocities as a function of distance, not of time.  It’s only when you write the Hubble law in terms of the ratio of observable recession velocity to observable time past (not distance now) that you see you have a constant velocity/time = dv/dt = acceleration.  Then you simply apply Newton’s second and third laws to this acceleration of the mass of the universe radially outward as seen from our frame of reference, and as a result you can predict the coupling strength of gravity, G, using cosmological observations, which you can compare to terrestrial, experimentally determined values.
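    To see where the ~10^43 N figure in the quotation above comes from, here is a rough numerical check (my own sketch; the mass of the observable universe, taken here as of order 10^52–10^53 kg, is an assumed input that the text does not state):

```python
# Back-of-envelope check of the outward force F ~ m c H ~ m c / t quoted above.
c = 3.0e8                        # speed of light, m/s
t = 13.6e9 * 3.156e7             # age of universe, s
H = 1 / t                        # Hubble parameter approximated as 1/t, per second

for mass in (1e52, 3e52, 1e53):  # assumed mass of the observable universe, kg
    F = mass * c * H             # F = m dv/dt ~ m c / t
    print(f"m = {mass:.0e} kg -> F ~ {F:.1e} N")
# For m ~ 3e52 kg this gives F ~ 2e43 N, i.e. the 10^43 N order of magnitude claimed.
```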

    This cannot be sent for peer review or published somewhere like Physical Review Letters, or even on arXiv, because its successful predictions seem absurd to the mainstream, which is all tied up with non-predictive speculation, e.g., string.  Rovelli remarks:

    ‘The information that the Sun is not anymore there must travel from the Sun to Earth across space, carried by an entity.  This entity is the gravitational field.’  (P. 37 of draft.)

    Rovelli uses this argument to solve the problem of Newton’s rotating bucket: Newton maintained that at least rotational motion is in principle absolute, because, if you have a bucket of water with you, you can detect rotation by observing the surface of the water becoming concave.  Rovelli shows that:

    The water rotates with respect to a local physical entity: the gravitational field.

    ‘It is the gravitational field, not Newton’s inert absolute space, that tells objects if they are accelerating or not, if they are rotating or not.  There is no inert background entity such as Newtonian space: there are only dynamical physical entities.  Among these are the fields.  Among the fields is the gravitational field.

    ‘The flatness of concavity of the water surface in Newton’s bucket is not determined by the motion of the water with respect to absolute space.  It is determined by the physical interaction between the water and the gravitational field.’  (P. 40 of draft.)

    This is absolutely correct and very well written, and resolves the problem by providing a clear solution.  It reminds me of a comment recently written by Professor Lee Smolin on Dr Peter Woit’s blog in another context:

    ‘The Hoyle argument is not a “prediction” of the anthropic principle. The Hoyle argument is based on a fallacy in which an extra statement is added to a correct argument, without changing its force. The correct argument is as follows:

    ‘A The universe is observed to contain a lot of carbon.
    ‘B That carbon could not have been produced were there not a certain line…

    ‘Therefore that line must exist.

    ‘To this correct argument Hoyle added a statement that does no work, to get:

    ‘U Carbon is necessary for life.
    ‘A The universe contains a lot of carbon
    ‘B That carbon could not have been produced were there not a certain line in the spectrum of carbon. …

    ‘I have found that every single argument proported to be a successful prediction from the AP has a fallacy at its heart. See my hep-th/0407213 for details.

    ‘What has been so disheartening about the current debates re the landscape is that all this was thought through a long time ago by a few of us and it has been clear since around 1990 what an appeal to the landscape would have to do to result in falsifiable predictions. The issue is not the landscape per se but the cosmological scenario in which it is studied. The fact that eternal inflation can’t yield anything other than a random distribution on the landscape is the heart of the impass, for that leads to the AP being pulled in in an attempt to save the theory and that in turn leads to a replay of old fallacies.’

    Notice that as a commentator on Not Even Wrong says, Hoyle’s prediction was claimed to be ‘the only genuine anthropic principle prediction’, according to John Gribbin and Martin Rees’s Cosmic Coincidences, quoted at http://www.novanotes.com/jan2003/anthro1.htm

    This shows how top science writers (Gribbin and Rees) are plain wrong, keep on writing wrong stuff, and don’t correct it.  (We don’t need to even mention Gribbin’s Jupiter Effect or the commercial attitude of New Scientist’s editor Jeremy Webb.  I corresponded by email with Gribbin and Webb, and neither was concerned about the unvarnished facts simply – it seems – because nature isn’t as exciting as speculations and its lucrative hype, which sells lots of copies of books and magazines.)

    Wilsonian philosophy of renormalization

    In a previous post, Loop Quantum Gravity, Representation Theory and Particle Physics, I illustrated the mechanisms by which the cutoffs in renormalization physically occur.  The problem is that continuous differential equations are being used in quantum field theory (as in sound wave theory and quantum mechanics) to represent discrete events in statistical average.  For example, in sound you physically only have a lot of air molecules colliding.  Before the sound wave appears, the molecules are already colliding in the air at about 500 m/s.  The sound wave is energy conveyed at basically the existing speed, with the air molecules as a carrier.  (The 500 m/s figure is for motion in random directions, so the mean speed along any one direction is slower; sound in air travels faster than that one-directional component by a factor of the square root of the ratio of specific heat capacities, because sound is an adiabatic process where the rise in pressure in the sound wave is accompanied by a rise in temperature, which increases the speed.)
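    A quick numerical illustration of those molecular versus sound speeds for air (my own sketch, using standard kinetic-theory formulae at room temperature):

```python
# Kinetic-theory molecular speeds for air versus the adiabatic sound speed.
import math

R = 8.314        # gas constant, J/(mol K)
T = 293.0        # room temperature, K
M = 0.029        # molar mass of air, kg/mol
gamma = 1.4      # ratio of specific heat capacities for (mostly diatomic) air

v_rms_3d = math.sqrt(3 * R * T / M)       # ~500 m/s, the figure quoted in the text
v_rms_1d = math.sqrt(R * T / M)           # component along one direction, ~290 m/s
v_sound  = math.sqrt(gamma * R * T / M)   # adiabatic sound speed, ~343 m/s

print(f"3-D rms molecular speed : {v_rms_3d:.0f} m/s")
print(f"1-D rms component       : {v_rms_1d:.0f} m/s")
print(f"Adiabatic sound speed   : {v_sound:.0f} m/s   (= 1-D rms x sqrt(gamma))")
```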

    The point is, the theory of sound is not a theory of individual air molecules colliding, but is an abstract level mathematical theory of the statistical average behaviour of the particles (air molecules).  Similarly, quantum field theory as currently built describes the statistical averages resulting from quantum effects.  For example, the magnetic moment of the electron is found to be (to 5 decimal places) approximated by 1 + alpha/(2*Pi) = 1.00116 Bohr magnetons.  The 0.00116 added to the Dirac result of 1 Bohr magneton (for the bare electron) is due to vacuum effects.  No chaotic fluctuation is predicted as such, just the average result.  Chaotic motions of electrons in quantum mechanics are similarly not directly predicted, a statistical model of the resulting distribution is given.  It is not a time-dependent versus time-independent equation issue; there is simply a statistical model and when someone writes down the time-dependent equation for an electron, it is a statistical equation.  This type of particle-wave duality is similar to what exists in classical sound wave theory.
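    The Schwinger term quoted here is a one-line calculation (my own sketch; alpha is taken as 1/137.036):

```python
# First-order QED (Schwinger) correction to the electron's magnetic moment.
import math

alpha = 1 / 137.036                    # fine structure constant (assumed value)
moment = 1 + alpha / (2 * math.pi)     # in Bohr magnetons
print(f"1 + alpha/(2 pi) = {moment:.5f} Bohr magnetons")   # 1.00116, as quoted above
```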

    Clearly, classical sound wave theory breaks down when asked to deliver predictions about wave properties on distance scales smaller than the mean separation of individual molecules in air.  Similarly, quantum field theory breaks down when the equations are extended down to predict field effects on the size scale of individual particles.  Rovelli remarks:

    ‘… loop quantum gravity shows that the structure of spacetime at the [assumed] Planck scale [the Planck scale is a distance based just on dimensional analysis and is only presumed to be the size of particles; another, even shorter, distance scale is of course the black hole event horizon radius of an electron mass] is discrete.  Therefore physical spacetime doesn’t have a short distance structure at all.  The unphysical assumption of a smooth background … may be precisely the cause of the ultraviolet divergences.’  (P. 9 of draft.)

    This is basically the argument for the cutoff to prevent UV divergence at Loop Quantum Gravity, Representation Theory and Particle Physics, namely there’s nothing at smaller distances than the size scale of the quantum vacuum, so the field equations are no longer describing anything.  This kind of cutoff is very commonplace in physics.  For example, take the inverse-square law of solar radiation.  This predicts an infinite flux of solar radiation at the middle of the sun.  But you don’t get an infinite flux in the middle of the sun (the temperature is hot, about 15 million Kelvin, but that isn’t infinite).  Since it is the sun’s surface which is radiating most of the energy (not the middle of the sun), the law is not valid for positions within the sun’s radius.

    So when you get infinite results occurring as a field equation goes towards zero distance, it is a flaw in the mathematical approximation at such small distances which is real, not an infinity which is real.  This is the rationale for field cutoffs to prevent infinities arising.

    Professor John Baez has a good summary of this approach in his article Renormalization Made Easy: ‘This sort of idea goes back to Kenneth Wilson who won the Nobel prize in physics in 1982, for work he did around 1972 on the renormalization group and critical points in statistical mechanics. His ideas are now important not only in statistical mechanics but also in quantum field theory. For a nice short summary of the “Wilsonian philosophy of renormalization”, let me paraphrase Peskin and Schroeder:

  • ‘… Wilson’s analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson’s arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.’

    The cutoff simply occurs at the lattice spacing in the low energy (frozen) ‘Dirac sea’ (or if  particles are loops, then the cutoff would occur at the loop radius, just as you might cutoff the inverse-square law for sunlight at the sun’s radius when calculating the radiating temperature of the sun).

    ‘Since there is no spatial continuity at small scale, there is (literally!) no room in the theory for ultraviolet divergences.’  (P. 14 of draft.)

    Different types of loops

    (1) Heaviside energy current loops

    The Heaviside energy current is the light speed electromagnetic logic signal propagated along a pair of conductors.  Electrons are normally moving around chaotically in each conductor.  When the Poynting-vector type electromagnetic field of the logic step propagates in the negatively charged conductor, electrons are accelerated in the direction of the logic step, but they typically reach speeds of only about 1 mm/s, whereas the field (and logic step) propagate at 300 Mm/s (the velocity of light for the insulator around and in between the two conductors).
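    The ~1 mm/s drift figure can be checked with the standard relation v = I/(nqA) (my own sketch; the 1 A current and the wire cross-sections are illustrative assumptions, not values given in the text):

```python
# Electron drift speed in a copper conductor, v = I / (n q A).
n = 8.5e28         # conduction-electron density of copper, per m^3
q = 1.602e-19      # electron charge, C
I = 1.0            # assumed current, A

for area_mm2 in (1.0, 0.1):          # illustrative wire cross-sections
    A = area_mm2 * 1e-6              # mm^2 -> m^2
    v = I / (n * q * A)
    print(f"A = {area_mm2} mm^2 -> drift speed ~ {v * 1e3:.2f} mm/s")
# Of order 0.1 to 1 mm/s depending on current and wire gauge - vastly slower
# than the ~300 Mm/s at which the logic step itself propagates.
```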

    The situation in the positively charged conductor is considerably more exciting, because electron drift current there goes the opposite way to the logic step!  What causes the electrons to move like that?  After all, the logic step is the normal mechanism by which electricity propagates in computers and other electrical equipment (before the electricity has had time to flow right around the circuit and for the resistance of the circuit to be determined thereby).

    What happens is essential for understanding the electron.  The acceleration of charges in each conductor causes the acceleration of charge in the other conductor, by transverse radio wave radiation (each conductor behaves as both transmitter antenna and receiver antenna).  No radio waves are able to escape from the transmission line, however, because the radio signals transmitted by the two conductors (since the electrons accelerated at the front of the logic step in each conductor travel in opposite directions) are exact inversions of one another.  The superimposed signal at a distance from the transmission line is exactly zero; there is complete cancellation.

    The Maxwell radio wave must have oscillating fields (with equal amounts of positive and negative electric field included) in order to propagate, because a non-oscillating Maxwell wave is just like the Poynting vector of the field in a single conductor of a transmission line.  That cannot propagate because the magnetic field it has gives infinite self-inductance.  The whole point of having two conductors to propagate a logic pulse is that the magnetic curls of the field from the opposite-directed electron drift currents in each conductor partly cancel, getting rid of the infinite self-inductance effect.

    Therefore, we can see another solution to Maxwell’s equations: one predicting the electron.  A steady loop of Heaviside energy current can form an electric charge, because at each point on the loop, there is a corresponding point on the other side of the loop where energy current is moving in the opposite direction, and its magnetic field curl partially cancels out the magnetic field from the first point on the loop, preventing the infinite self-inductance problem.  In addition, the magnetic field lines curling around the loop add up to produce a magnetic dipole, just like an electron.  The light speed energy current is massless, like a Standard Model electron, but mass is supplied in the same ways that mass is supplied to Standard Model particles.  The spin of this loop is electron spin.

    (2) Yang-Mills exchange radiation loops

    Exchange radiation flows between charges in a continuous cycle, causing forces.  Because this radiation is moving to and from charges at the same time, it doesn’t need to oscillate in order to propagate (for photons, the curl of magnetic field from the returning flow of gauge boson radiation can cancel out part of the magnetic field from the outgoing gauge boson radiation through which it passes, just like the Poynting energy current in a transmission line).

    (3) Space-time particle creation-annihilation loops

    These are illustrated in the top diagram here.  These loops exist only above the IR energy cutoff, i.e., they exist only within a radius of about 1 fm from a charged particle, this radius being the distance of closest approach in a Coulomb scattering of two particles each having a kinetic energy equal to the IR cutoff energy (0.511 MeV per electron).  In the vast spaces beyond 1 fm from an electron, there are no such loops, because the electric field of the electron is too weak to briefly free pairs of particles from the Dirac sea (which is effectively frozen for energies below the IR cutoff).  The photon speed in the Dirac sea should be given by some relationship of the sort in Maxwell’s original 1861 paper, http://vacuum-physics.com/Maxwell/maxwell_oplf.pdf
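    The ~1 fm figure follows from the Coulomb closest-approach condition, setting the available kinetic energy equal to the mutual potential energy e²/(4πε₀r) (my own sketch of the arithmetic):

```python
# Distance of closest approach in head-on Coulomb scattering of two unit charges:
# kinetic energy fully converted to potential energy e^2 / (4 pi eps0 r).
import math

e    = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
MeV  = 1e6 * e               # joules per MeV

for E_MeV in (0.511, 1.022):     # one particle's 0.511 MeV, or both particles' combined energy
    r = e**2 / (4 * math.pi * eps0 * E_MeV * MeV)
    print(f"E = {E_MeV} MeV -> closest approach ~ {r * 1e15:.2f} fm")
# Gives about 2.8 fm and 1.4 fm respectively - the femtometre scale quoted above.
```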

    Page 49 on the PDF reader (labelled page 22 on the document) gives Maxwell’s claim to have predicted the velocity of light; using the formula for transverse waves in an elastic solid he gets the right answer and immediately declares in his own italics:

    “… we can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.”

    Maxwell’s actual theory for light speed is simply a sound wave type effect in a crystalline solid.  Newton’s flawed formula for a sound wave is identical, because Newton didn’t know about the adiabatic effect (the increase in velocity due to the increase in temperature which accompanies the pressure wave).

    Newton says that wave velocity c = [(E/V)/(M/V)]^½, where E/V is the kinetic energy density (i.e., static pressure) and M/V is the mass density (E is energy, M mass, V volume).  Maxwell simply finds a theory to relate E/V and M/V to the electric and magnetic constants for the vacuum (permittivity and permeability) and uses Newton’s idea to unify electricity and magnetism.  In 1865 Maxwell rebuilt the aether theory and predicted that a Michelson-Morley type experiment could prove the existence of aether, and so Maxwell’s dynamical theory of electromagnetic mechanisms has been removed from physics ever since Einstein’s special relativity was accepted (although Maxwell’s differential equations, expressed in vector calculus by Heaviside in 1893 and in tensor form by Einstein in 1915, have of course survived).

    Taking the equation c = [(E/V)/(M/V)]^½ = (E/M)^½, we immediately can rearrange to obtain the formula E = Mc², and we might guess (as Guy Grantham insists by email) that the E/M ratio in c = (E/M)^½ is the ratio of binding energy to mass of Dirac sea particles in the vacuum.  Hence the IR cutoff of E = 0.511 MeV could be the binding energy per particle in a vacuum lattice which is broken (in pair production) to free polarizable loops of charges.  However, this kind of half-baked speculation really has no value in physics unless it helps to make checkable calculations and progress.  Otherwise it is clutter.  This Dirac sea idea seems to be nonsense to me, as I will explain.

    Grantham claims that c = (E/M)^½ correctly predicts not just the speed of light in aether but the speed of sound in a salt crystal, where E is the binding energy of salt ions in the lattice, 8 eV, and M is the mass of a salt molecule (NaCl), which gives a speed c = 3.6 km/s.  Grantham, quoting Menahem Simhony, gives values for the longitudinal sound wave speed in salt as 4.5 km/s and the transverse sound wave speed in solid salt as 2.5 km/s (solids support transverse waves as well as longitudinal waves; fluids only support longitudinal waves).  Both figures are far from the result of the formula (which doesn’t come with a mechanism explaining whether it is supposed to model transverse or longitudinal waves!).  In addition, it is obviously wrong to take the bonding energy E = Mc², where c is light speed, and then suddenly claim that c is the speed of sound if E is lattice bonding energy rather than rest-mass energy.  (This is not a problem in the vacuum, because the IR cutoff energy of 0.511 MeV is the rest mass energy of an electron, as well as being the presumed bonding energy.)  It would make more sense to set the bonding energy approximately equal, at most, to the kinetic energy of ions in the lattice, E = ½Mv², because if the kinetic energy exceeded the bonding energy the salt crystal structure would be broken down into a fluid.  In this case, the speed of sound v = (2E/M)^½ = 5.1 km/s.  This is closer to the 4.5 km/s experimental value for longitudinal waves in salt, and is more convincing.
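    The arithmetic is easy to reproduce (my own sketch of the numbers described above):

```python
# Sound-speed estimates for solid NaCl from the quoted lattice binding energy.
import math

eV = 1.602e-19             # joules per electron-volt
u  = 1.661e-27             # atomic mass unit, kg

E = 8 * eV                 # binding energy per NaCl formula unit quoted in the text
M = 58.44 * u              # mass of one NaCl formula unit (Na ~23 u + Cl ~35.45 u)

print(f"(E/M)^(1/2)  = {math.sqrt(E / M) / 1e3:.1f} km/s")      # Grantham's formula, ~3.6 km/s
print(f"(2E/M)^(1/2) = {math.sqrt(2 * E / M) / 1e3:.1f} km/s")  # kinetic-energy version, ~5.1 km/s
# Measured values quoted above: 4.5 km/s (longitudinal) and 2.5 km/s (transverse).
```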

    This problem of the missing square root of 2 is also present in Maxwell’s original 1861 light wave ‘prediction’.  A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9) explains the problem.  Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated: ‘history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

    Maxwell deliberately tried to fix his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper, On Physical Lines of Force (January 1862), proves; Chalmers explains:

    ‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of 2^½ smaller than the velocity of light.’

    It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’

    If the adiabatic effect on speed can be ignored (i.e., if there is little difference between the specific heats for the material), then one way to mechanistically estimate the speed of sound in a material is to begin with the fact that outer bound electrons have a typical speed of about c/137 (where c is light velocity). These electrons interact with the heavier ion nuclei by electromagnetic field effects, which transfer momentum from the electron to the nuclei of the ions.  The interaction here is a bit like Coulomb elastic scatter because the Schroedinger electron orbitals are not circles but are chaotic and the electrons move away from and towards the nucleus a lot, which constitutes Coulomb elastic scatter type energy and momentum transfer.  By conservation of momentum, after an elastic collision of a particle of velocity c/137 and of mass say m/1836 (i.e., an electron) with one of mass m (representing nuclei, m = 1 for a proton), the average resulting velocity is v = c/(137*1836) = 1200 m/s.  This assumes that the ion is hydrogen; obviously for heavier elements the neutron to proton ratio in the nucleus is about unity so there is then one orbital electron per neutron plus proton, so the speed is half that in hydrogen, v = c/(137*3600) = 600 m/s.   However, many factors are ignored in this back-of-the-envelope estimate based on the ground state of hydrogen.  (E.g., temperature will add kinetic energy to the air molecules or ions, speeding up motion, increasing the speed of sound.)
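    The arithmetic of this estimate, for what it is worth (my own sketch of the numbers quoted above):

```python
# Order-of-magnitude sound speed from momentum handed to ions by orbital electrons
# moving at roughly c/137, following the argument in the text.
c = 3.0e8    # speed of light, m/s

v_hydrogen = c / (137 * 1836)     # roughly one electron per proton (hydrogen)
v_heavier  = c / (137 * 3600)     # roughly one electron per two nucleons (heavier elements)

print(f"Hydrogen-like estimate : {v_hydrogen:.0f} m/s")   # ~1200 m/s
print(f"Heavier elements       : {v_heavier:.0f} m/s")    # ~600 m/s
# Order-of-magnitude only, as the text emphasises: real sound speeds range from
# ~340 m/s in air to a few km/s in solids.
```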