Quantum field theory is the most successful physical theory ever, encompassing all known nuclear interactions and electromagnetism, and it has many more successful predictions and experimental tests than general relativity, so it is apparent that general relativity needs modification to accommodate quantum field theory, rather than the other way around. [General relativity takes a stress-energy tensor as the source of the gravitational field, and that source is represented by continuous differential equations, rather than by discontinuous (lumpy) quantized matter. So the smooth curvature that results from general relativity (a continuous, smooth acceleration curve) is a product of the *approximation* used to statistically average the gravity field source, instead of properly representing the lumps. There are other reasons why general relativity is just a flawed, classical approximation as well. For instance, in addition to *falsely* assuming that mass is smoothly distributed in space instead of coming in lumps, general relativity also simply ignores any possibility of quantized field quanta, gravitons.] Quantum field theory has also been successfully applied to explain superfluid properties, because in condensed matter physics (low temperature phenomena generally) pairs of half-integer spin fermions can associate to produce composite particles that have the properties of integer-spin particles, bosons.

In 1925, Max Born and Pascual Jordan recognised that a quantum transition, such as the fall of an electron from an excited to a ground state (accompanied by the emission of a photon), is a complicated problem because the number of particles changes (a photon is created or is absorbed). Classical Maxwellian electrodynamics does describe radiation emission due to acceleration of charge, but does not explain why radiation is quantized.

The quantum theory of Planck, and Bohr’s atomic model, deal with specific problems (blackbody radiation spectra and line spectra, respectively), but are not general theories.

It is now recognised that the correct explanation of quantum electrodynamics lies in Yang-Mills quantum field theory, in which exchange of radiation (as depicted by Feynman diagrams) is the underlying mechanism. We will come back to this later in the post.

In 1926, Werner Heisenberg, together with Born and Jordan, developed a quantum theory of electromagnetism by a process called canonical quantization, whereby quanta are treated as separate oscillators of given frequency. Their treatment neglected polarization and charge, and was inconsistent with relativity. In 1927 Jordan employed a second quantization to include charges and thus quantum mechanics, while Dirac discovered a Hamiltonian for Schroedinger’s time-dependent equation which is consistent with relativity.

Schroedinger’s time-dependent equation is essentially saying the same thing as this electromagnetic energy mechanism of Maxwell’s ‘displacement current’: Hψ = iħ.dψ/dt, where ħ = h/(2π). *The energy flow is directly proportional to the rate of change of the wavefunction.* This is identical in form to the classical Maxwell ‘displacement current’ term, which states that the rate of flow of energy (via virtual ‘displacement current’) across the vacuum in a capacitor or radio system is directly proportional to the rate of change of the electric field! (Update: a simple explanation of this real electric field energy-transfer mechanism can be found here.) Maxwell’s displacement current is i = dD/dt = ε.dE/dt. In a charging capacitor, the displacement current falls as a function of time as the capacitor charges up. The solution for the fall of ‘displacement current’ flow across the vacuum (and through the circuit) as the capacitor charges up to its maximum capacity is: i_{t} = i_{0}e^{- t / RC}. This exponential, energy-based solution is similar in form to the solution of Schroedinger’s equation: ψ_{t} = ψ_{0} exp[-2πiH(t – t_{0})/h].
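
The exponential fall of the displacement current can be sketched numerically; the component values R, C and V below are illustrative assumptions, not figures from the text.

```python
import math

# Maxwell's 'displacement current' i = dD/dt = eps * dE/dt falls off
# exponentially as a capacitor charges: i_t = i_0 * exp(-t / RC).
R, C, V = 1.0e3, 1.0e-6, 10.0   # ohms, farads, volts (hypothetical values)
i0 = V / R                      # initial charging current, amperes

def displacement_current(t):
    """Charging current at time t seconds after the switch closes."""
    return i0 * math.exp(-t / (R * C))

tau = R * C                          # time constant (1 ms for these values)
print(displacement_current(tau) / i0)  # falls to 1/e of its initial value
```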

The non-relativistic hamiltonian is defined as:

H = ½ **p**^{2}/m.

However, it is of interest that the ‘special relativity’ prediction of

H = [(mc^{2})^{2} + **p**^{2}c^{2}]^{1/2},

was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

-ħ^{2}d^{2}ψ/dt^{2} = [(mc^{2})^{2} + **p**^{2}c^{2}]ψ.

While this is physically correct, it deals only with second-order variations of the wavefunction in time, rather than first-order variations like Schroedinger’s equation. Dirac’s equation simply makes the time-dependent Schroedinger equation (Hψ = iħ.dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = α**p**c + βmc^{2},

where **p** is the momentum operator. The constants α and β are 4 x 4 matrices (16 components each), and the 4-component wavefunction they act upon is called the Dirac ‘spinor’. This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates). The justification for Dirac’s equation is both theoretical and experimental. Firstly, it reproduces the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc^{2})^{2} + p^{2}c^{2}]^{1/2}.

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ± ½ ħ = ± h/(4π). This explains two of the four solutions! The electron is spin-1/2, so it has only half the spin of a spin-1 particle, which means that the electron must rotate through 720 degrees (not 360 degrees!) to undergo one complete revolution, like a Mobius strip (a strip of paper given a twist before the ends are glued together, so that there is only one surface, and a continuous line drawn around that surface is twice the length of the strip, i.e. you need 720 degrees of turning to return to the beginning!). Since the spin rate of the electron generates its intrinsic magnetic moment, this affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that ‘a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum’, a fact discovered by Dirac the day after he found his equation (see: A. Zee, *Quantum Field Theory in a Nutshell,* Princeton University Press, 2003, pp. 177-8). The other two solutions are evident when considering the case of p = 0, for then E = ± mc^{2}. This equation shows the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc^{2}. The fact that in fact E = ± mc^{2} proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity. E = ± mc^{2} allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle, such as an electron and a positron).
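
The four solutions can be checked numerically; a minimal sketch using the approximate electron rest energy of 0.511 MeV:

```python
import math

# Dirac's equation gives E = +/- sqrt((m c^2)^2 + (p c)^2); for p = 0 this
# reduces to E = +/- m c^2, the matter and antimatter branches.
MC2 = 0.511  # electron rest energy in MeV (approximate)

def dirac_energies(pc):
    """The two energy branches for a given momentum (pc supplied in MeV)."""
    e = math.sqrt(MC2**2 + pc**2)
    return +e, -e

print(dirac_energies(0.0))   # reduces to +/- the rest energy at p = 0
```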

Later it was discovered that Dirac’s prediction of the magnetic moment of the electron was low by a factor of about 1.00116, and this small error was resolved by Julian Schwinger in 1948. This arises as follows. Dirac’s calculation (which accounts for 99.88% of the magnetic moment of the electron) corresponds to a virtual photon from the external magnetic field interacting directly with the electron. Schwinger’s correction takes account of another possibility which sometimes occurs, and which results in a stronger magnetic moment, statistically increasing the overall measured value of the magnetic moment. The actual possibility which Schwinger’s calculation considers is that the electron happens to emit a virtual photon just before interacting with the virtual photon from the external magnetic field. This increases its magnetic moment briefly. Then, after it has interacted with the external magnetic field, the electron reabsorbs the virtual photon it emitted earlier. There are various other possibilities which also affect the magnetic moment slightly. For instance, pair production of virtual fermions can occur in between interactions of the electron with the magnetic field. Feynman diagrams are needed to draw the qualitative interaction possibilities, and then the contributions of these various interaction possibilities can be worked out using the rules of quantum electrodynamics. However, there are an infinite number of possibilities, and in low-energy situations like magnetic moment calculations, only a few diagrams need to be calculated to give very accurate answers. For example, the simplest interaction or first Feynman diagram (given by Dirac’s calculation) predicts the magnetic moment of the electron accurately to 3 significant figures, 1.00 Bohr magnetons, and when the first virtual particle correction or second Feynman diagram (Schwinger’s calculation) is included, the accuracy is then 6 significant figures, 1 + 1/(2*Pi*137.036) = 1.00116 Bohr magnetons.
Therefore, the contribution of complex interactions of virtual particles to the magnetic moment of the electron is trivial: *the vast majority of interactions between the electron and the magnetic field are very simple in nature!* The calculation of Schwinger’s correction to the magnetic moment of the electron is neatly summarised on pages 179-81 of Zee’s book *Quantum Field Theory in a Nutshell* (Princeton University Press, 2003).
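
Schwinger’s first-order correction quoted above is easy to reproduce:

```python
import math

# First two Feynman-diagram contributions to the electron's magnetic moment,
# in Bohr magnetons: Dirac's term (1) plus Schwinger's alpha/(2 pi) correction.
ALPHA_INVERSE = 137.036          # 1/alpha, the fine-structure constant

dirac_term = 1.0
schwinger_term = 1.0 / (2.0 * math.pi * ALPHA_INVERSE)
moment = dirac_term + schwinger_term
print(round(moment, 5))          # 1.00116 Bohr magnetons, as quoted above
```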

It became clear in the work of Schwinger and Feynman on quantum electrodynamics that the electron does not have a fixed electric charge under all conditions: if you approach an electron very closely (such as in a high-energy collision above about 1 MeV energy or so), the observable electric charge of the electron increases above the value measured at long distances or in weaker collisions. This is called a ‘running coupling’, since the charge is treated as a gauge theory ‘coupling constant’ which determines how strongly virtual photons interact with electric charges in quantum electrodynamics. The charge is not strictly a ‘coupling constant’, but its value ‘runs’ with the energy of the collision above some threshold. Just as light is only visible between infrared and ultraviolet cutoff energies, the coupling constants are only variable between two extremes, an infrared cutoff (typically 1 MeV or so, corresponding to a collision energy which brings particles just within the radius of the charge where the electric field strength is high enough to create pairs of virtual charges) and an ultraviolet cutoff (which is supposed to exist at unification energy, where couplings for different types of fundamental forces are supposed to converge at a fixed value, often deemed to be the Planck length which is taken as the grain-size of the vacuum, i.e. the minimum length corresponding to anything physical).

The physical explanation for this running coupling, or increase of observed electric charge on small distance scales (high collision energies), is pair production, which Schwinger showed requires a threshold steady electric field strength of *m*^{2}*c*^{3}/(eħ) = 1.3*10^{18} volts/metre; the Coulomb field of a unit charge exceeds this threshold only out to a radius of about 33 fm (femtometres). (Equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040.) So in low energy collisions, where the colliding particles encounter field strengths of less than 1.3*10^{18} volts/metre, no pair production occurs and the electric charge of the electron is the normal value given in databooks. But at higher energies, the charge starts to increase because of the production of virtual loops (in spacetime) of positron plus electron pairs, which soon annihilate back into virtual photons of the electric field, in an endless cycle of pair production followed by annihilation, more pair production, and so on. The effect of this pair production of virtual fermions (virtual electrons and positrons at low energy, with virtual muon and anti-muon pairs and other particles appearing at higher energies) is that, while they briefly exist before annihilation, they become ‘polarized’ by the electric field in which they are situated (and from which they were created). In other words, the virtual positron on average tends to approach slightly closer to the real electron core (due to Coulomb attraction) than the virtual electron does (due to Coulomb repulsion). This causes the virtual fermions to create a net radial electric field which opposes, and therefore partially cancels out, the radial electric field from the electron core.
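
Both the threshold field and the 33 fm radius can be reproduced from standard constants; a sketch of the Coulomb-field comparison described above:

```python
import math

# Schwinger's pair-production threshold field E_c = m^2 c^3 / (e hbar), and
# the radius out to which a unit charge's Coulomb field k e / r^2 exceeds it.
M_E  = 9.1093837e-31    # electron mass, kg
C    = 2.99792458e8     # speed of light, m/s
E_CH = 1.602176634e-19  # elementary charge, C
HBAR = 1.054571817e-34  # reduced Planck constant, J.s
K_E  = 8.9875517873e9   # Coulomb constant, N m^2 / C^2

E_crit = M_E**2 * C**3 / (E_CH * HBAR)   # threshold field, ~1.3e18 V/m
r = math.sqrt(K_E * E_CH / E_crit)       # radius where k e / r^2 = E_crit
print(E_crit, r * 1e15)                  # field in V/m, radius in fm (~33)
```
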
The electron’s core electric field lines (always arrowed from positive towards negative charge) point inwards towards the middle, while the radial electric field lines from the polarization of virtual pairs of fermions around the core (out to the 33 fm radius of an electron or other unit charge) point outwards. This results in screening or shielding, reducing the observable electric charge at long distances, and increasing the observable charge when you penetrate into the shielding zone within 33 fm (as with an aircraft flying upwards into the clouds: the less shielding cloud you have between you and the light or charge source, the stronger the light or charge you see!). The equation for the running coupling, or variation in effective electric charge, shows that it depends on the logarithm of the collision energy, varying slowly. Electron scattering experiments have validated this effect up to 91 GeV collisions, where the electron has an effective electric charge which is 7% higher than the measured charge of the electron at low energy.
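
The slow logarithmic running can be sketched with the standard one-loop QED formula. Only the electron loop is included here, for brevity, so this understates the full rise; the ~7% figure at 91 GeV quoted above requires all the charged fermion loops (muon, tau, quarks).

```python
import math

# One-loop running of the QED coupling (electron loop only):
#   alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) * ln(Q^2 / m_e^2))
ALPHA0 = 1.0 / 137.036   # low-energy fine-structure constant
M_E = 0.000511           # electron mass, GeV

def alpha(Q):
    """Effective coupling at collision energy Q (GeV), electron loop only."""
    return ALPHA0 / (1.0 - (ALPHA0 / (3.0 * math.pi)) * math.log(Q**2 / M_E**2))

# The coupling grows slowly with the log of energy: ~2% from this loop alone.
print(alpha(91.0) / ALPHA0)
```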

*************************

Some material should be inserted here to explain:

(a) the tensor form of the Maxwell equations (see sections 2.8 and 2.9 of Ryder’s *Quantum Field Theory*, 2nd ed., Cambridge University Press, 1996, pp. 64-76),

(b) the Lagrangian formulation of particle physics and its relationship to Noether’s theorem of symmetries and Weyl’s gauge theory, including Abelian and Yang-Mills theories (see sections 3.1-3.6 of chapter 3 of Ryder’s *Quantum Field Theory*, 2nd ed., Cambridge University Press, 1996, pp. 80-112),

(c) the path integral formulation of quantum field theory. Feynman found that the amplitude for any given history is proportional to e^{iS/ħ}, and that the probability is proportional to the square of the modulus of e^{iS/ħ}. Here, S is the action for the history under consideration, which depends on the Lagrangian. Integrating this exponential over all possible histories gives the Feynman path integral. The integral can be expanded into a series with an infinite number of terms, called the perturbative expansion. Each term in this perturbative expansion corresponds to a Feynman diagram of increasing complexity. It should be noted that because the mathematics of the calculus deals with continuous variables, it isn’t a perfect model for reality, although it makes useful predictions. E.g., when 100 radioactive atoms are decaying, the exponential decay law tells you that after 8 half-lives there will be 0.390625 of an atom remaining. The law gives a continuous number as output, because the exponential formula comes from the use of calculus in the underlying theory. This is quite wrong, because the number of radioactive atoms remaining is never anything but an integer. The prediction that 0.390625 of an atom will remain out of 100 after 8 half-lives is often re-interpreted as a probabilistic statement about how many atoms will be left at that time. However, a prediction of probability is quite different from a realistic model. If you want a prediction that looks realistic, you need one which produces integer results for discrete, quantized phenomena. Otherwise the maths is classical, rather than fully compatible with quantum field phenomena. For some introductory material on Feynman path integrals, see chapter 1 of Zee’s *Quantum Field Theory in a Nutshell*, http://press.princeton.edu/chapters/s7573.pdf
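
The contrast between the continuous decay law and integer atom counts can be made concrete:

```python
import random

# The continuous exponential decay law gives a non-integer 'number of atoms',
# while any actual count is an integer: compare for 100 atoms after 8 half-lives.
N0, HALF_LIVES = 100, 8

continuous = N0 * 0.5**HALF_LIVES   # 0.390625 'atoms' -- clearly not a count
print(continuous)

# A discrete simulation: each atom independently survives each half-life with
# probability 1/2, so the surviving count is always an integer (often 0 or 1).
random.seed(1)
n = N0
for _ in range(HALF_LIVES):
    n = sum(1 for _ in range(n) if random.random() < 0.5)
print(n)   # an integer, unlike the continuous prediction
```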

See also the following quantum field theory resources:

http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040v2.pdf is very useful for beginners, as is http://arxiv.org/PS_cache/hep-th/pdf/9803/9803075v2.pdf, http://arxiv.org/PS_cache/hep-th/pdf/9912/9912205v3.pdf, and http://arxiv.org/PS_cache/quant-ph/pdf/0608/0608140v1.pdf

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, *TTWP*, 2006, p. 307).

Mainstream frontier fundamental physics, string theory, isn’t scientific because it can’t ever predict real, quantitative, *checkable* phenomena, since 10-dimensional superstring and 11-dimensional supergravity yield 10^{500} ‘possibilities’ *based not on observed gravity and particle physics facts,* but merely on *other unobserved, guesswork speculations* – namely, *unobserved Planck scale unification and unobserved spin-2 gravitons*. Even the AdS/CFT correspondence conjecture in string theory is physically empty, since AdS (anti de Sitter space) requires a *negative* cosmological constant. So you can’t evaluate the conformal field theory (CFT) of particles with AdS, because AdS isn’t real spacetime!

‘Science *n.* The observation, identification, description, experimental investigation, and theoretical explanation of phenomena.’ – www.answers.com

Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a Standard Model-type, Yang-Mills theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory, which has been actively promoted in the media since 1985, despite opposition from people like Feynman, because it fails to predict anything. Despite endless hype, string theory is now in a state called ‘not even wrong’, which is less objective than the *wrong* theories of caloric, phlogiston, aether, flat earth, and epicycles, which at least tried to model some *observed* phenomena of heat, combustion, electromagnetism, geography, and astronomy.

String theory fails because it postulates that 6 dimensions are compactified into unobservably small manifolds in particles; these 6 unobservable dimensions need about 100 parameters to describe them, and it turns out that there are 10^{500} or more configurations possible, each describing a different set of particles (different particles within any set arise from the different possible vibration modes or resonances of a given string). This makes it the vaguest, least falsifiable mainstream speculation ever: to make genuine predictions, the state of the extra unobserved 6-dimensions must be known, which means either building a particle accelerator the size of the galaxy and scattering particles to reveal their Planck scale nature, or eliminating the false 10^{500} guesses, which would take billions of years with supercomputers. But there is some experimental evidence that key stringy assumptions, e.g., spin-2 gravitons and supersymmetry, are false.

For supersymmetry, in the book *Not Even Wrong* (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. Supersymmetry is also a disaster for increasing the number of Standard Model parameters (coupling constants, masses, mixing angles, etc.) from 19 in the empirically based Standard Model to at least 125 parameters (mostly unknown!) for supersymmetry. Supersymmetry in string theory is 10 dimensional and involves a massive supersymmetric boson as a partner for every observed fermion, just in order to make the three Standard Model forces unify at the Planck scale (which is falsely assumed to be the grain size of the vacuum just because it was the smallest size dimensional analysis gave before the electron mass was known; the black hole radius for an electron is far smaller than the Planck size).

At first glance, this 10-dimensional superstring theory for supersymmetry contradicts the 11-dimensional supergravity ideas, but this 10/11 dimensional issue was conveniently explained or excused by Dr Witten in his 1995 M-theory, which shows that you can make the case that 10-dimensional superstrings are a brane (a kind of extra-dimensional equivalent of a surface) on 11-dimensional supergravity, similarly to how an *n* – 1 = 2 dimensional area is a surface (or mem-*brane*) on an *n* = 3 dimensional object (or *bulk*). 11-dimensional supergravity arises from the old Kaluza-Klein idea, which was debunked and corrected by Lunsford in a peer-reviewed, published paper – see International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pp. 161-177 for publication details and here for a downloadable PDF file – which was immediately censored from arXiv, which seems to be partly influenced in the relevant sections by a string professor at the University of Texas, Austin.

On the speculative nature of conjectures concerning spin-2 (attractive or ‘suck’) gravitons, Richard P. Feynman points out in *The Feynman Lectures on Gravitation*, page 30, that gravitons do not have to be spin-2, which has not been observed. Renormalization works in the Standard Model (for electromagnetic, weak nuclear and strong nuclear charges) because the gauge bosons which mediate force do not interact with themselves to create massive problems. This is not the case with spin-2 gravitons in general. Spin-2 gravitons, because they have energy, should according to general relativity themselves be sources for gravity, and should therefore themselves emit gravitons, which usually makes the renormalization technique ineffective for quantum gravity. String theory is supposed to dispense with renormalization problems because strings are not point particles but of Planck length. The mainstream 11-dimensional supergravity theory includes a superpartner to the unobserved spin-2 graviton, called the spin-3/2 gravitino, which is just as unobserved and non-falsifiable as the spin-2 graviton. The reason is that this supersymmetric scheme gets rid of problems which the spin-2 graviton idea would lead to at unobservably high energy, where gravity is speculated to unify with other forces into a single superforce.

So a supersymmetric partner for the spin-2 attractive graviton is postulated in mainstream supergravity to make the spin-2 graviton theory work by *cancelling out the unwanted effects of the grand unified theory speculations.* Hence, you have to add extra speculations to spin-2 gravitons just to cancel out the inconsistencies in the original speculation that all forces should have equal coupling constants (relative charges) at unobservably high energy. The inventing of new uncheckable speculations to cover up inconsistencies in old uncheckable speculations is not new. (It is reminiscent of the proud Emperor who used invisible cloaks to try to cover up his gullibility and shame, at the end of an 1837 Hans Christian Andersen fairytale.) *There is no experimental justification for the speculative mainstream spin-2 graviton scheme, nor any way to check it,* which is discussed in detail here (discussion of alleged reason for spin-2 gravitons) and here (the stringy landscape of 10^{500} spin-2 attractive graviton theories really do *suck* in more ways than one; spin-1 gravitons avert the normal problems of quantum gravity, and make proper predictions without inconsistencies).

String theory predictions are *not analogous to* Wolfgang Pauli’s prediction of neutrinos, which was indicated by the solid experimentally-based physical facts of energy conservation and the mean beta particle energy being only about 30% of the total mass-energy lost per typical beta decay event. Pauli made a checkable prediction; Fermi developed the beta decay theory and then invented the nuclear reactor, which produced enough decay in the radioactive waste to provide a strong source of neutrinos (actually antineutrinos) that tested the theory, because conservation principles had made precise predictions in advance – unlike string theory’s ‘heads I win, tails you lose’ political-type, fiddled, endlessly adjustable, never-falsifiable pseudo-‘predictions’. Contrary to false propaganda from certain incompetent string ‘defenders’, Pauli correctly stated that neutrinos *are* experimentally checkable, in a 4 December 1930 letter to experimentalists: ‘… Dear Radioactives, test and judge.’ (See the footnote on p. 12 of this reference.)

‘The one thing the journals do provide which the preprint database does not is the peer-review process. The main thing the journals are selling is the fact that what they publish has supposedly been carefully vetted by experts. The Bogdanov story shows that, at least for papers in quantum gravity in some journals [including the U.K. Institute of Physics journal *Classical and Quantum Gravity*], this vetting is no longer worth much. … Why did referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don’t understand things.’ – P. Woit, *Not Even Wrong,* Jonathan Cape, London, 2006, p. 223.

‘Worst of all, superstring theory does not follow as a logical consequence of some appealing set of hypotheses about nature. Why, you may ask, do the string theorists insist that space is nine dimensional? Simply because string theory doesn’t make sense in any other kind of space.’ – Nobel Laureate Sheldon Glashow (quoted by Dr Peter Woit in *Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics,* Jonathan Cape, London, 2006, p. 181).

‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft (quoted by Dr Peter Woit in *Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics,* Jonathan Cape, London, 2006, p. 181).

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195. (Quotation courtesy of Tony Smith.)

Tony Smith’s CERN document server paper, EXT-2004-031, uses the Lie algebra E6 to avoid 1-1 boson-fermion supersymmetry: ‘As usually formulated string theory works in 26 dimensions, but deals only with bosons … Superstring theory as usually formulated introduces fermions through a 1-1 supersymmetry between fermions and bosons, resulting in a reduction of spacetime dimensions from 26 to 10. The purpose of this paper is to construct … using the structure of E6 to build a string theory without 1-1 supersymmetry that nevertheless describes gravity and the Standard Model…’ However, that research was censored off arXiv, apparently because mainstream string theorists are bigoted against 26-dimensional ideas since 10/11-dimensional M-theory was discovered in 1995. They don’t exactly encourage alternatives, even within the general framework of string theory (26-dimensional bosonic string theory is similar to 10-dimensional superstring theory in having a 2-dimensional spacetime worldsheet; the difference is that conformal field theory requires 24 dimensions in the absence of supersymmetry and 8 dimensions if there is supersymmetry).

Worse, attempts to explain observed particle physics with string theory result in 10^{500} or more different vacuum states, each with its own set of particle physics. 10^{500} solutions are so many that falsifiability is eliminated from string theory. This large number of solutions is named the ‘cosmic landscape’ because Professor Susskind claims that each solution exists in a different parallel universe, and when you plot the resulting vacuum ‘cosmological constants’ as a function of two variables in string theory, you produce a landscape-like three dimensional graph. The reason for the immense ‘cosmic landscape’ is the fact that string theories only ‘work’ (i.e., satisfy the basic criteria for conformal field theory, CFT) in 10 or more dimensions, so the unobserved dimensions have to be ‘compactified’ by a Calabi-Yau manifold, which – conveniently – curls up the extra dimensions into a small volume, explaining why nobody has ever observed any of them. In superstring theory, two dimensions (one space and one time) form a ‘worldsheet’ and another eight are required for the CFT of supersymmetric particle physics. Sadly, the Calabi-Yau manifold has many parameters (or moduli) describing the size and shape of those unobserved, conjectured extra dimensions, and these parameters must have unknown values (since we can’t observe them); it is the immense number of possible combinations of these unknown parameters which makes string theory fail to produce specific results, by producing too many results to ever rigorously evaluate, even given a supercomputer running for the age of the universe. The 10^{500} figure might not be right: the true figure might be infinity. String theory results depend on many things, e.g., how the moduli are ‘stabilized’ by ‘Rube-Goldberg machines’, monstrous constructions added to the theory just to stop string field properties from conflicting with existing physics!
It’s presumably hoped by Dr Witten, discoverer of a 10/11-dimensional superstring-supergravity unification called M-theory, that somehow a way will turn up to pick out the correct solution from the landscape and start making checkable predictions.
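
The 10^{500} figure is combinatorial in origin: with of order a few hundred independent moduli/flux choices and a handful of discrete values each, the vacuum count explodes. The 10-values-per-modulus assumption below is purely illustrative, not taken from string theory itself.

```python
# Toy combinatorics behind the 'landscape': independent discrete choices
# multiply, so 500 moduli with 10 values each give 10^500 vacua.
choices_per_modulus = 10
moduli = 500
vacua = choices_per_modulus ** moduli

# A 501-digit number: hopeless to enumerate, let alone evaluate one by one.
print(len(str(vacua)) - 1)   # the exponent, 500
```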

However, the best idea of how to go about this is to assume that cosmology is correctly modelled by the Lambda-CDM general relativity solution, which attributes the observed lack of gravitational deceleration in the universe to dark energy, represented by a small positive cosmological constant in the general relativity field equations. Then you can try to evaluate parts of the landscape of solutions to string theory which have a suitably small positive cosmological constant. Unfortunately, general relativity does not include quantum gravity, and even the mainstream quantum gravity candidate of an attractive force mediated by spin-2 gravitons implies that gravity should be weakened over vast distances due to redshift of gravitons exchanged between receding masses, which lowers the energy of the gravitons received in interactions and reduces the coupling constant for gravity. Thus, dark energy may be superfluous if quantum gravity is correct, so it is clear that string theory is really a belief system, a faith-based initiative, with no physics or science of any kind to support it. String theory produces endless research and inspires new mathematical ideas, albeit less impressively than Ptolemy’s universe, Maxwell’s aether and Kelvin’s vortex atom (e.g., the difficulties of solving Ptolemy’s false epicycles inspired Indian and Arabic mathematicians to develop trigonometry and algebra in the dark ages), but this didn’t justify Ptolemy’s earth-centred universe, Maxwell’s mechanical aether or Kelvin’s stable vortex atom, and it doesn’t justify string theory. Another problem of this stringy mainstream research is that it leads to so many speculative papers being published in physics journals that the media and the journals concentrate on strings, and generally either censor out or give less attention to alternative ideas. Even if many alternative theories are wrong, that may be less harmful to the health of physics than one massive mainstream endeavour that isn’t even wrong…

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ - R. P. Feynman, *The Character of Physical Law,* November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

*Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:*

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, *Not Even Wrong,* Jonathan Cape, London, 2006, p. 182.
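Woit’s point can be illustrated with a toy sketch (my own illustration, not from his book, and not the real QED series, which is ultimately only asymptotic): take an expansion in which the order-n term simply picks up one more factor of the coupling constant at each order, and watch the partial sums converge for a small coupling but blow up for a large one.

```python
def partial_sums(coupling, orders=30):
    """Toy perturbative expansion: the order-n term is coupling**n,
    mimicking the extra factor of the coupling constant picked up at
    each higher order. (A toy geometric model, not real QED.)"""
    sums, total, term = [], 0.0, 1.0
    for _ in range(orders):
        total += term
        sums.append(total)
        term *= coupling  # next order: one more factor of the coupling
    return sums

weak = partial_sums(0.0073)  # QED-like fine-structure coupling: converges fast
strong = partial_sums(1.5)   # strong coupling: terms grow, expansion is useless
print(weak[-1], strong[-1])
```

For the weak coupling the partial sums settle almost immediately; for the strong coupling each extra Feynman-diagram order makes the answer worse, which is exactly why perturbation theory is useful for QED but not for low-energy QCD.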

‘String theory has the remarkable property of predicting gravity.’ – E. Witten (M-theory originator), *Physics Today,* April 1996.

‘50 points for claiming you have a revolutionary theory but giving no concrete testable predictions.’ – J. Baez (Crackpot Index originator), comment about crackpot mainstream string ‘theorists’ on the *Not Even Wrong* weblog here.

‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, *Space Time and Gravitation,* Cambridge University Press, 1921, p. 64. (Here is a link to a checkable quantum gravity framework which made published predictions in 1996 that were confirmed by observations in 1998, but which was censored out due to the immensely loud noise generators of vacuous string theory.)

### Background information

Quantum field theory is the basis of the Standard Model of particle physics and is the best tested of all physical theories: it is more general in application, and better tested within its range of application, than the existing formulation of general relativity (which needs modification to include quantum field effects), and it describes all electromagnetic and nuclear phenomena. The Standard Model does not as yet include quantum gravity, so it is not yet a replacement for general relativity. However, the elements of quantum gravity may be obtained by applying quantum field theory to a Penrose spin network model of spacetime (the path integral is the sum over all interaction graphs in the network, and this yields background-independent general relativity). This approach, ‘loop quantum gravity’, is entirely different from string theory, which builds extra-dimensional speculation upon other speculations, e.g., the speculation that gravity is due to spin-2 gravitons (speculative because there is no experimental evidence for it). In loop quantum gravity, by contrast, the aim is merely to use quantum field theory to derive the framework of general relativity as simply as possible. Other problems in the Standard Model concern how electroweak symmetry is broken at low energy and how mass (gravitational charge) is acquired by some particles. Several forms of Higgs field have been proposed which may give rise to mass and electroweak symmetry breaking, but the details are as yet unconfirmed by experiment (the Large Hadron Collider may settle them). Moreover, there are questions about how the various parameters of the Standard Model are related, and about the nature of fundamental particles (string theory is highly speculative, and there are other possibilities).

There are several excellent approaches to quantum field theory: at a popular level there is Wilczek’s 12-page discussion of *Quantum Field Theory*, Dyson’s *Advanced Quantum Mechanics*, and the excellent approach by Alvarez-Gaume and Vazquez-Mozo, *Introductory Lectures on Quantum Field Theory*. A good compendium introducing, in a popular way, some of the maths involved is Penrose’s *Road to Reality* (Penrose’s twistors inspired some concepts in an *Electronics World* article of April 2003). For a very brief (47 pages) yet more abstract and mathematically formal approach to quantum field theory, see for comparison Crewther’s http://arxiv.org/abs/hep-th/9505152. For a slightly more ‘stringy’-orientated approach, see Mark Srednicki’s 608-page textbook via http://www.physics.ucsb.edu/~mark/qft.html, and there is also Zee’s *Quantum Field Theory in a Nutshell* on Amazon if you want something briefer but with that mainstream speculative (stringy) outlook.

Ryder’s *Quantum Field Theory* also contains supersymmetry unification speculations and is available on Amazon here. Kaku has a book on the subject here, Weinberg has one here, Peskin and Schroeder’s is here, while Einstein’s scientific biographer, the physicist Pais, has a history of the subject here. Baez, Segal and Zhou have an algebraic quantum field theory approach available on

http://math.ucr.edu/home/baez/bsz.html, while Dr Peter Woit has a link to handwritten quantum field theory lecture notes from Sidney Coleman’s course, which is widely recommended, here. For background on representation theory and the Standard Model see Woit’s page here for maths background and also his detailed suggestion, http://arxiv.org/abs/hep-th/0206135. For some discussion of quantum field theory equations without the interaction picture, polarization, or renormalization of charges due to a physical basis in pair-production cutoffs at suitable energy scales, see Dr Chris Oakley’s page http://www.cgoakley.demon.co.uk/qft/:

‘… renormalization failed the “hand-waving” test dismally.

‘This is how it works. In the way that quantum field theory is done – even to this day – you get infinite answers for most physical quantities. Are we really saying that particle beams will interact infinitely strongly, producing an infinite number of secondary particles? Apparently not. We just apply some mathematical butchery to the integrals until we get the answer we want. As long as this butchery is systematic and consistent, whatever that means, then we can calculate regardless, and what do you know, we get fantastic agreement between theory and experiment for important measurable numbers (the anomalous magnetic moment of leptons and the Lamb shift in the Hydrogen atom), as well as all the simpler scattering amplitudes. …

‘As long as I have known about it I have argued the case against renormalization. [*What about the physical mechanism of virtual fermion polarization in the vacuum? This explains the case for a renormalization of charge: the electric polarization produces a radial electric field that opposes, and hence shields, most of the core charge of the real particle. This shielding occurs wherever pairs of charges are free, and have space, to become aligned against the core electric field, i.e. in the shell of space around the particle core extending from a minimum radius equal to the grain size of the Dirac sea (the UV cutoff) out to a radius of about 1 fm, the range at which the electric field strength falls to Schwinger's threshold for pair-production (the IR cutoff). This renormalization mechanism has some physical evidence from several experiments, e.g., Levine, Koltick et al., *Physical Review Letters,* v. 78, no. 3, p. 424, 1997, where the observable electric charge of leptons does indeed increase as you get closer to the core, as seen in higher-energy scattering experiments.*] …

‘[Due to Haag’s theorem] it is not possible to construct a Hamiltonian operator that treats an interacting field like a free one. Haag’s theorem forbids us from applying the perturbation theory we learned in quantum mechanics to quantum field theory, a circumstance that very few are prepared to consider. Even now, the text-books on quantum field theory gleefully violate Haag’s theorem on the grounds that they dare not contemplate the consequences of accepting it.

‘With regard to the first thing, I doubt if this has been done before in the way I have done it^{3}, but the conclusion is something that some may claim is obvious: namely that local field equations are a necessary result of fields commuting for spacelike intervals. Some call this causality, arguing that if fields did not behave in this way, then the order in which things happen would depend on one’s (relativistic) frame of reference. It is certainly not too difficult to see the corollary: namely that if we start with local field equations, then the equal-time commutators are not inconsistent, whereas non-local field equations could well be. This seems fine, and the spin-statistics theorem is a useful consequence of the principle. But in fact this was not the answer I really wanted as local field equations lead to infinite amplitudes. It could be that local field equations with the terms put into normal order – which avoid these infinities – also solve the commutators, but if they do then there is probably a better argument to be found than the one I give in this paper. …

‘With regard to the second thing, the matrix elements consist of transients plus contributions which survive for large time displacements. The latter turns out to be exactly that which would be obtained by Feynman graph analysis. I now know that – to some extent – I was just revisiting ground already explored by Källén and Stueckelberg^{4}.

‘My third paper [published in *Physica Scripta,* v41, pp292-303, 1990] applies all of this to the specific case of quantum electrodynamics, replicating all scattering amplitudes up to tree level. …

‘Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

‘The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The five prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. …

‘Although by these awards the Swedish Academy is in my opinion endorsing shoddy science, I would say that, if anything, particle physicists have grown to accept renormalization more rather than less as the years have gone by. Not that they have solved the problem: it is just that they have given up trying. Some even seem to be proud of the fact, lauding the virtues of makeshift “effective” field theories that can be inserted into the infinitely-wide gap defined by infinity minus infinity. Nonetheless, almost all concede that things could be better, it is just that they consider that trying to improve the situation is ridiculously high-minded and idealistic. …

‘The other area of uncertainty is, to my mind, the ‘strong’ nuclear force. The quark model works well as a classification tool. It also explains deep inelastic lepton-hadron scattering. The notion of quark “colour” further provides a possible explanation, inter alia, of the tendency for quarks to bunch together in groups of three, or in quark-antiquark pairs. It is clear that the force has to be strong to overcome electrostatic effects. Beyond that, it is less of an exact science. Quantum chromodynamics, the gauge theory of quark colour is the candidate theory of the binding force, but we are limited by the fact that bound states cannot be done satisfactorily with quantum field theory. The analogy of calculating atomic energy levels with quantum electrodynamics would be to calculate hadron masses with quantum chromodynamics, but the only technique available for doing this – lattice gauge theory – despite decades of work by many talented people and truly phenomenal amounts of computer power being thrown at the problem, seems not to be there yet, and even if it was, many, including myself, would be asking whether we have gained much insight through cracking this particular nut with such a heavy hammer.’

The humorous and super-intelligent (no joke intended) Professor Warren Siegel has an 885-page free textbook, *Fields*, http://arxiv.org/abs/hep-th/9912205, the first chapters of which are a very nice introduction to the technical mathematical background of experimentally validated quantum field theory (it also has chapters on speculative supersymmetry and speculative string theory toward the end). Gerard ’t Hooft has a brief (69-page) review article, *The Conceptual Basis of Quantum Field Theory,* here, and Meinard Kuhlmann has an essay on it for the *Stanford Encyclopedia of Philosophy* here.

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that *an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation.* This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, *Not Even Wrong,* Jonathan Cape, London, 2006, p189. (Emphasis added.)
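Woit’s description of holonomy can be checked numerically. The sketch below (a minimal illustration of my own, not from his book) parallel-transports a tangent vector around a circle of colatitude θ on the unit sphere by repeatedly projecting the vector into the local tangent plane, a standard discrete approximation to parallel transport; after one full loop the vector comes back rotated by the solid angle the loop encloses, 2π(1 − cos θ).

```python
import numpy as np

def holonomy_angle(theta, steps=20000):
    """Parallel-transport a unit tangent vector around the circle of
    colatitude theta on the unit sphere, stepping along the circle and
    projecting the vector into each new tangent plane. Returns the angle
    (mod 2*pi) by which the vector has rotated relative to the local
    (e_theta, e_phi) frame after one full loop."""
    def point(phi):  # position on the circle = outward unit normal
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    v = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # e_theta at phi = 0
    for phi in np.linspace(0.0, 2 * np.pi, steps + 1)[1:]:
        n = point(phi)
        v = v - np.dot(v, n) * n     # project into the new tangent plane
        v = v / np.linalg.norm(v)    # keep it a unit vector
    e_theta = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # frame at phi = 0
    e_phi = np.array([0.0, 1.0, 0.0])
    return np.arctan2(np.dot(v, e_phi), np.dot(v, e_theta)) % (2 * np.pi)

theta = np.pi / 3  # 60 degrees colatitude
measured = holonomy_angle(theta)
enclosed_solid_angle = 2 * np.pi * (1 - np.cos(theta))  # = pi for this theta
print(measured, enclosed_solid_angle)
```

The rotation of the transported vector is exactly the ‘rotational transformation’ Woit describes, and the fact that it equals the enclosed solid angle is the curvature information that loop quantum gravity encodes in holonomies rather than in pointwise curvature fields.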

‘Plainly, there *are* different approaches to the five fundamental problems in physics.’ – Lee Smolin, *The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next*, Houghton Mifflin, New York, 2006, p. 254.

The major problem today seems to be that general relativity is fitted to the big bang *without* applying corrections for quantum gravity, which are important for the relativistic recession of gravitational charges (masses): the redshift of the gauge boson radiation that causes gravity reduces the gravitational coupling constant *G*, weakening long-range gravitational effects on cosmological distance scales (i.e., between rapidly receding masses). This mechanism for a lack of gravitational deceleration of the universe on large scales (high redshifts) has counterparts even in alternative push-gravity graviton ideas, where gravity – and the curvature of spacetime generally – is due to shielding of gravitons (in that case the mechanism is more complicated, but the effect still occurs).

Professor Carlo Rovelli’s *Quantum Gravity* is an excellent background text on loop quantum gravity; an early draft is available online in PDF format at http://www.cpt.univ-mrs.fr/~rovelli/book.pdf, and the final published version is on Amazon here. Professor Lee Smolin also has some excellent online lectures about loop quantum gravity at the Perimeter Institute site, here (you need to scroll down to ‘Introduction to Quantum Gravity’ in the left-hand menu bar). Basically, Smolin explains that loop quantum gravity gets the Feynman path integral of quantum field theory by summing all interaction graphs of a Penrose spin network, which amounts to general relativity without a metric (i.e., background independent). Smolin also has an arXiv paper, *An Invitation to Loop Quantum Gravity*, here, which summarises the subject’s established mathematical theorems of special relevance to the more peripheral technical problems in quantum field theory and general relativity.

However, possibly the major future advantage of loop quantum gravity will be as a Yang-Mills quantum gravity framework, with the dynamics of gravity caused by complete loops of exchange radiation passing between gravitational charges (masses) which are receding from one another, as observed in the universe. There is a major difference between the chaotic spacetime annihilation-creation loops which exist between the IR and UV cutoffs, i.e. within about 1 fm of a particle core (chaotic loops of pair-production and annihilation in quantum fields), and the more classical (general relativity and Maxwellian) force-causing loops of exchanged vector radiation which occur *outside the 1 fm range of the IR cutoff energy* (i.e., at lower energy than the closest approach – by Coulomb scatter – of electrons in collisions with kinetic energy similar to the rest mass-energy of the particles).

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, *QED*, Penguin, 1990, page 54.

‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3×10^{18} V/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’- R. P. Feynman, *QED*, Penguin, 1990, page 84-5.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in *Analog: Science Fiction/Science Fact,* Vol. C1, No. 129, Davis Publications, New York, November 1981.

Feynman points out in *QED* that there is a simple physical explanation via Feynman diagrams and path integrals for why the *mathematics* of electron orbits and photon paths is classical on large scales and chaotic on small ones:

‘… when seen on a large scale, they [electrons, photons, etc.] travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from quantum interactions, each represented by a Feynman diagram] becomes very important, and we have to sum the arrows [amplitudes] to predict where an electron is likely to be.’

- R. P. Feynman, *QED,* Penguin, 1990, page 84-5.

So according to Feynman, an electron inside the atom has a chaotic path which is physically the result of the small scale involved, which prevents the individual virtual-photon exchanges from statistically averaging out the way they do on large scales. For an analogy, think of the different effects of the impacts of air molecules on a micron-sized dust particle – chaotic Brownian motion – and on a football, where such large numbers of impacts are involved that they can be accurately represented by the classical approximation of ‘air pressure’.
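The analogy can be made quantitative with a minimal Monte Carlo sketch (my own illustration, not Feynman’s): if each of N impacts delivers an independent random kick, the residual net kick as a fraction of the total falls off as 1/√N, which is why the dust grain jitters visibly while the football feels only smooth ‘air pressure’.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(n_impacts, trials=400):
    """Each impact delivers a random +1 or -1 momentum kick.
    Returns the typical net kick as a fraction of the number of
    impacts, i.e. the departure from the smooth 'pressure' average."""
    kicks = rng.choice([-1.0, 1.0], size=(trials, n_impacts))
    return kicks.sum(axis=1).std() / n_impacts

dust = relative_fluctuation(100)     # few impacts: ~ 1/sqrt(100) = 10%
ball = relative_fluctuation(10_000)  # many impacts: ~ 1/sqrt(10000) = 1%
print(dust, ball)
```

A hundredfold increase in the number of impacts cuts the relative fluctuation tenfold, so on large scales the discrete kicks become indistinguishable from a smooth classical force.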

But Feynman uses *integration (requiring non-quantized continuous variables)* to average out the effects of these many paths or interaction histories, where strictly speaking he should be using discrete (sigma symbol) summation of all individual (quantum) interactions.
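The mathematical relationship between the two pictures is simply that a continuous integral is the limit of a discrete sigma-sum as the number of contributions grows; a minimal sketch (my own illustration) of that convergence:

```python
import math

def sigma_sum(f, a, b, n):
    """Discrete midpoint sigma-sum of n contributions to f over [a, b],
    the kind of finite sum that an integral smooths over."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

exact = 2.0  # the integral of sin(x) over [0, pi]
for n in (10, 100, 1000):
    print(n, abs(sigma_sum(math.sin, 0.0, math.pi, n) - exact))
```

The discrepancy shrinks rapidly as n grows, which is why calculus is an excellent approximation when the number of discrete contributions is enormous, and a questionable one when it is not.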

If you look at general relativity and quantum field theory (QFT), both represent fields using calculus: they both use differential equations describing *continuous variables* to represent fields which should strictly be sigma sums for the action in *discrete interactions.* This is why differential QFT leads to perturbative expansions with an *infinite* number of terms, each term corresponding to a Feynman diagram:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, *The Character of Physical Law,* BBC Books, 1965, pp. 57-8.

Maybe this effect is what Prof. John Baez was thinking about in his comment at http://www.math.columbia.edu/~woit/wordpress/?p=615.

**Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests**

**Acknowledgement**

Professor Jacques Distler of the University of Texas inspired recent reformulations by suggesting, in a comment on Professor Clifford V. Johnson’s discussion blog, that I’d be taken more seriously if only I’d use tensor analysis in discussing the mathematical physics of general relativity.

**Part 1: **Summary of experimental and theoretical evidence, and comparison of theories

**Part 2: **The mathematics and physics of general relativity [Currently this links to a paper by Drs. Baez and Bunn]

**Part 3: **Quantum gravity approaches: string theory and loop quantum gravity [Currently this links to Dr Rovelli's Quantum Gravity]

**Part 4: **Quantum mechanics, Dirac’s equation, spin and magnetic moments, pair-production, the polarization of the vacuum above the IR cutoff and its role in the renormalization of charge and mass [Currently this links to Dyson's QED introduction]

**Part 5: **The path integral of quantum electrodynamics, compared with Maxwell’s classical electrodynamics [Currently this links to Siegel's Fields, which covers a large area in depth, one gem for example is that it points out that the 'mass' of a quark is not a physical reality, firstly because quarks can't be isolated and secondly because the mass is due to the vacuum particles in the strong field surrounding the quarks anyway]

**Part 6: **Nuclear and particle physics, Yang-Mills theory, the Standard Model, and representation theory [Currently this links to Woit's very brief Sketch showing how simple low-dimensional modelling can deliver particle physics, which hopefully will turn into a more detailed, and also slower-paced, technical book very soon]

**Part 7: **Methodology of doing science: predictions and postdictions of the theory based purely on empirical facts (vacuum mechanism for mass and electroweak symmetry breaking at low energy, including Hans de Vries’ and Alejandro Rivero’s ‘coincidence’) [Currently this links to Alvarez-Gaume and Vazquez-Mozo, Introductory Lectures on Quantum Field Theory]

**Part 8: **Riofrio’s and Hunter’s equations, and Lunsford’s unification of electromagnetism and gravitation [Currently this links to Lunsford's paper]

**Part 9: **Standard Model mechanism: vacuum polarization and gauge boson field mediators for asymptotic freedom and force unification [Currently this links to Wilczek's brief summary paper]

**Part 10: **Evidence for the ‘stringy’ nature of fundamental particle cores? [Currently links to Dr Lubos Motl's list of 12 top superstring theory 'results', with literature references]

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of *proving* once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

‘Part of the reason string theory makes no new predictions is that it appears to come in an infinite number of versions. … With such a vast number of theories, there is little hope that we can identify an outcome of an experiment that would not be encompassed by one of them. Thus, no matter what the experiments show, string theory cannot be disproved. But the reverse also holds: No experiment will ever be able to prove it true. … if string theorists are wrong, they can’t be just a little wrong. If the new dimensions and symmetries do not exist, then we will count string theorists among science’s greatest failures, like those who continued to work on Ptolemaic epicycles while Kepler and Galileo forged ahead. Theirs will be a cautionary tale of how not to do science, how not to let theoretical conjecture get so far beyond the limits of what can rationally be argued that one starts engaging in fantasy.’ – Professor Lee Smolin, *The Trouble with Physics: The Rise of String Theory, the Fall of a Science and What Comes Next,* Houghton Mifflin Company, New York, 2006, pp. xiv-xvii.

**THE ROAD TO REALITY: A COMPREHENSIVE GUIDE TO THE LAWS OF THE UNIVERSE** by Sir Roger Penrose, published by Jonathan Cape, London, 2004. The first half of the 1094-page hardback book (2.5 inches/6.5 cm thick) briefly summarises fairly well known mathematics of background importance to the subject at issue. The remaining half of the book deals with quantum mechanics and attempts to unify it with general relativity. On page 785, Penrose neatly quotes his co-author Professor Stephen Hawking:

[*The Nature of Space and Time,* Princeton University Press, Princeton, 1996, p. 121.]

But acidity is a reality which you can, indeed, test with litmus paper! On page 896, Penrose analyses those who use string ‘theory’ as an obfuscation (or worse) of the meaning of ‘prediction’:

‘In the words of Edward Witten [E. Witten, ‘Reflections on the Fate of Spacetime’, *Physics Today,* April 1996]:

‘and Witten has further commented:

‘It should be emphasised, however, that in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory …’

Hence, string ‘theory’ as hyped up by genius Witten in 1996 as predicting gravity is really misleading. String ‘theory’ has no proof of a physical mechanism and predicts nothing checkable, *not even* the strength of gravity, unlike the causal mechanism! (In the apt words of exclusion-principle proposer Wolfgang Pauli, string ‘theory’ is in the class of belief junk, ‘not even wrong’.)

On page 1020 of chapter 34, ‘Where lies the road to reality?’, section *34.4 Can a wrong theory be experimentally refuted?*, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’

Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

On page 1026, Penrose points out: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’

‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, *The Pleasure of Finding Things Out,* 1999, p. 187.

************************

**Classical Electromagnetism**

Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/√(με) – the reciprocal of the square root of the product of the magnetic force permeability and the electric force permittivity – equals the speed of light.

Maxwell, after a lot of failures (like Kepler’s trial-and-error road to the planetary laws), ended up with a cyclical light model in which a changing electric field creates a magnetic field, which creates an electric field, and so on. Sadly, his picture of a light ray in Article 791, showing *in-phase* electric and magnetic fields at right angles to one another, has been accused of causing confusion and of being incompatible with his light-wave theory (the illustration is still widely used today!).

In empty vacuum, the divergences of the magnetic and electric fields are zero, as there are no real charges.

Maxwell’s equation for Faraday’s law: dE/dx = -dB/dt

Maxwell’s equation for displacement current:

-dB/dx = με·dE/dt

where μ is the magnetic permeability of space, ε is the electric permittivity of space, E is the electric field strength, and B is the magnetic field strength. To solve these simultaneously, differentiate the first equation with respect to x and the second with respect to t:

d^{2}E/dx^{2} = -d^{2}B/(dx·dt)

-d^{2}B/(dx·dt) = με·d^{2}E/dt^{2}

Since d^{2}B/(dx·dt) occurs in each of these equations, they combine to give the wave equation d^{2}E/dx^{2} = με·d^{2}E/dt^{2}, whose solutions propagate at a speed satisfying (dx/dt)^{2} = 1/(με), so Maxwell got c = 1/√(με) = 300,000 km/s. Eureka! The velocity of light comes out of electromagnetism! Maxwell arrogantly and condescendingly tells us in his *Treatise* that ‘The only use made of light’ in finding μ and ε was to ‘see the instrument.’ Sadly, it was only in 1885 that J. H. Poynting and Oliver Heaviside independently discovered the ‘Poynting-Heaviside vector’ (*Phil. Trans.* 1885, p. 277). Maxwell’s error in jumping to this fantastic conclusion is called prejudice, because *he admitted he had not a clue of the velocity of electric energy in circuits:*
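Weber’s numerical coincidence, which Maxwell’s derivation turns into a wave speed, can be checked in a couple of lines (a minimal sketch, assuming the modern CODATA values of the vacuum constants):

```python
import math

# Vacuum constants (CODATA values assumed here)
mu_0 = 4 * math.pi * 1e-7    # magnetic permeability, H/m (classical defined value)
eps_0 = 8.8541878128e-12     # electric permittivity, F/m

# Maxwell's conclusion from the wave equation: c = 1/sqrt(mu * eps)
c = 1 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:,.0f} m/s")   # ~299,792,458 m/s, i.e. Maxwell's 300,000 km/s
```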

‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second.’ – James Clerk Maxwell, *Treatise on Electricity and Magnetism,* 3rd ed., Article 574. It turned out, from the experimental work of Oliver Heaviside on the Newcastle-Denmark cable in 1875, that what Maxwell thought of as ‘light’ is *the electric field carrying the energy of electricity (the electron drift mass and velocity are so small that the electrons carry negligible energy compared to the field)*: this field is called the Poynting-Heaviside vector (discovered independently by Poynting and by Heaviside). Obviously, electricity seems to be related to light since both have the same velocity, but they are not the same. Shamefully, Maxwell first made his elaborate claims about ‘predicting light’ in an 1861 article based on his faulty ‘elastic solid’ mechanism of light rays, by analogy to longitudinal seismic waves in a solid, which was later shown to contain an error when applied to transverse vibrations instead of longitudinal ones. Faraday had first suggested the electromagnetic line-vibration nature of light in a well-reasoned article of 1846 entitled *Thoughts on Ray Vibrations*. Weber showed in 1856 that the square root of the ratio of the electric force constant (from Coulomb’s empirical law of electric force between electric charges) to the magnetic force constant (from the empirical law of magnetic force between electromagnets powered by a known amount of electric current) gives the velocity of light in the correct dimensions. Maxwell merely set out to cook up some maths to link up Faraday’s idea with Weber’s ratio.
Maxwell found some interesting connections and his final equations are quantitatively accurate, but he failed to build a successful theory of the vacuum (despite a lot of vacuous hype to the contrary), which is why there are problems in understanding the physics of Maxwell’s classical model for light in terms of virtual charges in the vacuum. Hence, a new mechanism is needed, and has been developed, which is almost impossible to publish.

One source is A. F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (*Physics Education*, vol. 10, 1975, pp. 45-9). Chalmers states that Orwell’s novel *1984* helps to illustrate how the tale was fabricated: ‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

Maxwell tried to fix his original calculation deliberately in order to obtain the anticipated value for the speed of light, as proven by Part 3 of his paper, *On Physical Lines of Force* (January 1862). As Chalmers explains: ‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of root 2 smaller than the velocity of light.’

It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’ Thus, Maxwell set his great precedent for dishonest hype, fudging and fiddling results, obfuscation and complete bull, affecting all future unified field theorists (like string theorists today).

Maxwell did however usefully (very heretically) write:


‘In fact, whenever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other… I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its action…’ – Dr J. Clerk Maxwell, Conclusion, *A Treatise on Electricity and Magnetism, *3rd ed., 1873.

Quantum field theory describes the relativistic quantum oscillations of fields. The case of zero spin leads to the Klein-Gordon equation. However, everything tends to have some spin. Maxwell’s equations for electromagnetic propagating fields are compatible with an assumption of spin ħ = h/(2π); hence the photon is a boson, since it has integer spin in units of ħ. Dirac’s equation models electrons and other particles that have only half a unit of spin, as known from quantum mechanics. These half-integer particles are called fermions and have antiparticles with opposite spin. Obviously you can easily make two electrons (neither the antiparticle of the other) have opposite spins, merely by having their spin axes point in opposite directions: one pointing up, one pointing down. (This is totally different from Dirac’s antimatter, where the opposite spin occurs while both matter and antimatter spin axes are pointed in the same direction. It enables the Pauli-pairing of adjacent electrons in the atom with opposite spins and makes most materials non-magnetic; since all electrons have a magnetic moment, everything would be potentially magnetic in the absence of the Pauli exclusion process.)

From constraints imposed by Pauli’s exclusion principle in quantum mechanics, Eugene Wigner and Pascual Jordan in 1928 found the correct way to include creation-annihilation operators in the theory to allow for pair-production phenomena as loops in spacetime (i.e., creation of pairs followed by annihilation into radiation, in an endless cycle or loop).

**List of developments in Quantum Field Theory**

The following list of developments is excerpted from a longer one given in Dr Peter Woit’s notes on the mathematics of QFT. He emphasises:

‘Quantum field theory is not a subject which is at the point that it can be developed axiomatically on a rigorous basis. There are various sets of axioms that have been proposed (for instance Wightman’s axioms for non-gauge theories on Minkowski space or Segal’s axioms for conformal field theory), but each of these only captures a limited class of examples. Many quantum field theories that are of great interest have so far resisted any useful rigorous formulation. …’ Dr Woit lists the major events in QFT to give a sense of chronology to the mathematical developments:

‘1925: Matrix mechanics version of quantum mechanics (Heisenberg)

‘1925-6: Wave mechanics version of quantum mechanics, Schroedinger equation (Schroedinger)

‘1927-9: Quantum field theory of electrodynamics (Dirac, Heisenberg, Jordan, Pauli)

‘1928: Dirac equation (Dirac)

‘1929: Gauge symmetry of electrodynamics (London, Weyl)

‘1931: Heisenberg algebra and group (Weyl), Stone-von Neumann theorem

‘1948: Feynman path integrals formulation of quantum mechanics

‘1954: Non-abelian gauge symmetry, Yang-Mills action (Yang, Mills, Shaw, Utiyama)

‘1959: Wightman axioms (Wightman)

‘1962-3: Segal-Shale-Weil representation (Segal, Shale, Weil)

‘1967: Glashow-Weinberg-Salam gauge theory of weak interactions (Weinberg, Salam)

‘1971: Renormalised non-abelian gauge theory (’t Hooft)

‘1971-2: Supersymmetry

‘1973: Non-abelian gauge theory of strong interactions, QCD (Gross, Wilczek, Politzer)

(*I’ve omitted the events on Dr Woit’s list after 1973*.)

Dr Chris Oakley has an internet site about renormalisation in quantum field theory, which is also an interest of Dr Peter Woit. Dr Oakley starts by quoting Nobel Laureate Paul A.M. Dirac’s concerns in the 1970s:

*‘[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.’*

The Nobel Laureate Richard P. Feynman described two things: the accuracy of the prediction of the magnetic moment of leptons (electron and muon) and of the Lamb shift, and two major problems of QFT, namely ‘renormalisation’ and the unknown rationale for the ‘137’ electromagnetic force coupling factor:

‘… If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair. That’s how delicately quantum electrodynamics has, in the past fifty years, been checked … I suspect that renormalisation is not mathematically legitimate … we do not have a good mathematical way to describe the theory of quantum electrodynamics … the observed coupling … 137.03597 … has been a mystery ever since it was discovered … one of the *greatest* damn mysteries …’ – *QED, *Penguin, 1990, pp. 7, 128-9.

‘I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it when it is infinitely great and you do not want it! … Simple changes will not do … I feel that the change required will be just about as dramatic as the passage from the Bohr theory to quantum mechanics.’ – Paul A. M. Dirac, lecture in New Zealand, 1975 (quoted in *Directions in Physics*).

Dr Chris Oakley writes: ‘… I believe we already have all the ingredients for a compact and compelling development of the subject. They just need to be assembled in the right way. The important departure I have made from the ‘standard’ treatment (if there is such a thing) is to switch round the roles of quantum field theory and Wigner’s irreducible representations of the Poincaré group. Instead of making quantising the field the most important thing and Wigner’s arguments an interesting curiosity, I have done things the other way round. One advantage of doing this is that since I am not expecting the field quantisation program to be the last word, I need not be too disappointed when I find that it does not work as I may want it to.’

Describing the problems with ‘renormalisation’, Dr Oakley states: ‘Renormalisation can be summarised as follows: developing quantum field theory from first principles involves applying a process known as ‘quantisation’ to classical field theory. This prescription, suitably adapted, gives a full dynamical theory which is to classical field theory what quantum mechanics is to classical mechanics, but it does not work. Things look fine on the surface, but the more questions one asks the more the cracks start to appear. Perturbation theory, which works so well in ordinary quantum mechanics, throws up some higher-order terms which are infinite, and cannot be made to go away.

‘This was known about as early as 1928, and was the reason why Paul Dirac, who (along with Wolfgang Pauli) was the first to seriously investigate quantum electrodynamics, almost gave up on field theory. *The problem remains unsolved to this day.* Perturbation theory is done slightly differently, using an approach based on the pioneering work of Richard Feynman, but, other than that, nothing has changed. One seductive fact is that by pretending that infinite terms are not there, which is what renormalisation is, the agreement with experiment is good. … I believe that our failure to really get on top of quantum field theory is the reason for the depressing lack of progress in fundamental physics theory. … I might also add that the way that the whole academic system is set up is not conducive to the production of interesting and original research. … The tone is set by burned-out old men who have long since lost any real interest and seem to do very little other than teaching and politickering.
…’

In addition to Dirac’s Hamiltonian-based formulation of quantum field theory (energy in terms of positions and momenta), there is a Lagrangian formulation called Feynman’s path integrals, which calculates the difference between kinetic and potential energy and follows the trajectories of particles:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here

**Renormalization, the Standard Model, Supersymmetry**

Renormalization is the scaling of charges and mass in a quantum field theory to make the theory consistent with experimental results. However, it does have a physical basis in polarization. The vacuum loops contain virtual charges which are radially polarized around a charged particle by the particle’s field. Virtual positrons in the vacuum are closer to an electron than virtual electrons (due to Coulomb attraction and repulsion), so the polarization of the vacuum gives rise to a net radial field which opposes the field from the particle in the middle. This shields the charge of the core particle, because some of its electric field is cancelled out by the polarized field of virtual fermions surrounding it which extends to the IR cutoff range, at about 1 fm radius.
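As a rough quantitative illustration of this shielding, here is the standard one-loop QED running of the electromagnetic coupling, which shows the effective charge rising as you probe inside the polarized vacuum. This is a sketch of textbook QED (electron loop only; including all charged fermions brings the value at the Z mass to about 1/128), not part of the mechanism argued above:

```python
import math

ALPHA_0 = 1 / 137.035999   # low-energy fine-structure constant (below the IR cutoff)
M_E = 0.000511             # electron mass-energy, GeV

def alpha_eff(q_gev):
    """One-loop QED running coupling: the charge seen through the
    polarized-vacuum shield rises as you probe closer (higher Q)."""
    # vacuum-polarization logarithm from the electron loop only
    log_term = math.log(q_gev**2 / M_E**2)
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * log_term)

# At the Z mass (~91.19 GeV) the electron loop alone gives ~1/134.5
print(1 / alpha_eff(91.19))
```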

Hence, there is a physical mechanism by which electric charge should be renormalized. In addition to electric charge, mass is also renormalized, which usually creates problems in a theory in which mass is a type of charge (quantum gravity would have masses as its units of charge), because all observed masses move in the same direction in a gravitational field. Hence, mass is not directly renormalizable by the polarization mechanism. However, if mass is indirectly associated with electric charge by a coupling effect, then mass will be renormalizable in the same way as electric charge, because the amount of mass coupled will depend on the renormalized (i.e., shielded) magnitude of the electric charge. This indirect renormalization mechanism for mass indicates that the mass-giving field particles must be some distance away from an electron, with the polarized vacuum separating the mass-giving particle from the electron core.

In the Standard Model of particle physics, the real electron and positron are complicated particles, their nature being determined by vacuum polarization phenomena. What exists in the vacuum, going by names like ‘virtual electron’ and ‘virtual positron’, is quite different, because virtual particles are part of the vacuum field and don’t exist long enough to have their own vacuum field (and hence polarization effects). This means that what you might call a vacuum ‘electron’ has an effective mass of zero, and the entire nature of a real electron (as distinguished from a ‘virtual electron’ of the vacuum) is that it has sufficient energy and lifespan that a mass-giving boson has coupled to a virtual electron, making it into a real electron.

This is not like speculative supersymmetry, which is the idea that UV divergence problems (infinite momenta and other effects presumed to exist at unobservably high energy, near the Planck scale) can be cancelled out by speculating that there is a supersymmetric (SUSY) bosonic partner for every observed fermion in the universe: these postulated (unobserved) partners are not Higgs (spin zero) mass-giving bosons, and even the postulated Higgs bosons are not assumed, in mainstream models, to be paired up to fermions, but instead to be a general sea in the vacuum which causes inertia by drag-less miring (like a perfect fluid, which resists only acceleration and deceleration, not inertial motion).

There is only one type of massive particle (having one fixed mass) and one type of charged particle in the universe: representation theory suggests that all charges (leptons and quarks) can be generated by simple transformation symmetries of Clifford algebra. The observed variety is physically determined by the vacuum. For example, if *N* charges are nearby and hence share the same vacuum polarization shield out to a radius of 1 fm (the low-energy cutoff for polarization), then the vacuum shielding factor will be stronger than that from a single charge by a factor *N,* so the shielded value of the electric charge per particle will be 1/*N,* i.e., fractional, of that from a positron or electron. Of course the simplicity of this explanation of fractional charges is partly cloaked by complex effects from weak charge.

All the leptons and hadrons observed may be combinations of these two types of particle (electron-like charge and Z_{0}-boson-like mass) with vacuum effects contributing different observable charges and forces: there is an illustration here for how vacuum polarization allows a single mass-giving particle to give rise to all known particle masses (to within a couple of percent), and also a table of comparisons here (if you scroll down; that page is now under revision).

**Yang-Mills Gauge Theory: Exchange Radiation is the Force Mechanism**

The observation that like charges repel while unlike charges attract is explained in terms of exchange radiation by Yang-Mills gauge theory.

Symmetry is an invariance: rotate a square by 90*N* degrees, where *N* is an integer, and it looks just the same before and after the rotation. Another example is that field effects result from differences in relative potentials, not absolute potentials. Hence, an electron is not accelerated by the absolute number of volts, but just by the local field gradient, which is the number of volts per metre that the field decreases by. Electric force is *F = qE* where *E* is the field gradient in volts/metre and *q* is charge, while gravitational force is similarly *F = ma* where *m* is the gravitational charge (mass) and *a* is the acceleration due to the gravitational field. This gives a universal or ‘global’ symmetry because the relative field strengths, gradients, are independent of the absolute potential.
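The square example of ‘symmetry as invariance’ can be checked directly (a toy sketch; the vertex coordinates are my own illustrative choice):

```python
import math

# Vertices of a square centred on the origin (illustrative choice)
square = {(1, 1), (1, -1), (-1, -1), (-1, 1)}

def rotate(points, degrees):
    """Rotate a set of points about the origin, rounding to kill float noise."""
    t = math.radians(degrees)
    return {(round(x * math.cos(t) - y * math.sin(t), 9),
             round(x * math.sin(t) + y * math.cos(t), 9)) for (x, y) in points}

# A 90-degree rotation is a symmetry: the square maps onto itself
print(rotate(square, 90) == square)  # → True

# A 45-degree rotation is not: the vertex set changes
print(rotate(square, 45) == square)  # → False
```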

Similarly, the gradient of a mountain is independent of the absolute height of the mountain. If you cut the top off a mountain, it doesn’t affect the gradient of the slopes on the remainder. So Maxwell’s equations have a global invariance because they are independent of the absolute field strength (the height of the mountain). They also have a local invariance because when an electric field is disturbed, a magnetic field is created and vice-versa, which depends on relative motion of the observer to the charge or field, not absolute motion (if you move relative to an electric charge, you experience a magnetic field from that charge, but if you move along with the charge, you don’t get a magnetic field).

Hermann Weyl in 1929 came up with a principle of local phase symmetry or invariance of electromagnetic and other waves, called ‘gauge invariance’. According to this gauge invariance, the phase of a photon or particle can only vary if there is a local change in the field through which it propagates.

Another example of symmetry is isotopic spin, ‘isospin’, whereby (as in analogous uses of iso, such as isotope and isothermal) particles have the same spin (or other properties, such as nuclear charge and mass) despite having different electric charges.

Yang and Mills in 1954 worked out a gauge theory for isospin. Whereas Weyl’s original gauge theory showed how phase shifts of a photon’s wave are due to changes in the electromagnetic field, the Yang-Mills theory shows how changes in the isospin of a particle are a result of changes in the corresponding gauge field (and vice-versa). This Yang-Mills theory requires that the field itself must contain charges, and it forms the foundation of the Standard Model, where electroweak unification has a field which, above the electroweak symmetry-breaking energy scale, is composed of four gauge bosons, two of which have net electric charges. If colour charge is substituted for isospin, Yang-Mills theory describes the quantum chromodynamics theory of strong nuclear interactions between quarks, where the force mediators for three types of nuclear colour charge are (3 x 3) – 1 = 8 types of colour-charged ‘gluons’. The gluons have charge (colour charge), so this is a Yang-Mills theory like the electroweak theory.
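The ‘(3 x 3) – 1 = 8’ counting generalizes: for gauge group SU(N) the number of gauge bosons is the dimension of the adjoint representation, N² – 1. A quick sketch:

```python
def gauge_boson_count(n):
    """Number of gauge bosons for gauge group SU(n): n^2 - 1,
    the count of independent traceless Hermitian n x n generators."""
    return n * n - 1

# SU(3) colour: 8 gluons, the text's (3 x 3) - 1 = 8
print(gauge_boson_count(3))  # → 8

# SU(2) weak isospin: 3 gauge bosons (W+, W-, and the neutral one
# that mixes with hypercharge to give the Z and the photon)
print(gauge_boson_count(2))  # → 3
```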

Initially, in 1954, the Yang-Mills theory set out to model the strong force with isospin; the recognition of the electroweak gauge bosons as Yang-Mills gauge bosons came with Glashow, Salam, and Weinberg by around 1967. This electroweak theory was then proved to be renormalizable by ’t Hooft and Veltman in 1971. The first experimental evidence for it was the discovery of effects due to the neutral but massive Z gauge boson (‘weak neutral currents’) in 1973, and full confirmation came when the Z and W gauge bosons were discovered directly in 1983.

There is a good published essay on the Yang-Mills equation by Dr Christine Sutton (which is reviewed in mathematical detail by the mathematician Professor William G. Faris here) where she points out that a Yang-Mills type theory was independently investigated by others, including Pauli in 1953 (who dismissed it for giving massless particles in the fields, because he wanted massive particles as field carriers to explain the short range of the strong nuclear force; which of course is now known to be due to gluons which don’t require mass to have a limited range), Ronald Shaw in England and Ryoyo Utiyama in Japan, both in 1954. (The reason why Yang-Mills is the widely known name for the theory is that Yang and Mills published it in the *Physical Review* in October 1954.)

**Yang-Mills Exchange Radiation**

The exchange radiations in Yang-Mills theory can have charge (such as in the case of the W^{+} and W^{-} weak force gauge bosons, and all gluons), but they don’t have any mass. So mass results from a separate field, possibly some version of the Higgs field. However, other mechanisms (and several variations in the detail of a Higgs-type mechanism) for mass are possible, so an experimental determination of the correct theory is required before any particular theory of mass should be asserted as being verified fact.

Rutherford’s empirical evidence for the nuclear atom, from an analysis of Geiger and Marsden’s alpha particle scattering angles from gold foil, should have led to two developments which never occurred.

*First,* the discovery that the atom is mainly void is a confirmation of the general prediction from the Fatio-LeSage gravity mechanism that – for exchange radiation to cause gravity – gravity acts on subatomic constituents which are small enough for the gravity causing radiation to penetrate the volume of the earth and not merely affect the superficial atomic surface area of a planet or other object.

*Second,* Rutherford’s nuclear atom had the problem that all the positive charges were confined to a small nucleus. To prevent the Coulomb repulsion of those protons from blowing the nucleus apart, there is a strong but short-ranged nuclear attractive force, mediated by pion exchange. One of the physical reasons for the short range of this attractive force is indicated by an objection to the Fatio-LeSage mechanism: massive exchange radiation (such as pions) which pushes particles together will undergo scattering reactions in the vacuum, a little like a gas, which over a distance on the order of the mean-free-path will deflect pressure into the ‘shadow’ region between particles, preventing ‘attraction’ at greater distances by equalizing pressure. A mathematically equivalent way of describing this range limitation is the uncertainty principle, as Popper showed. The old idea that the limited range of the strong force is due to the massiveness of the gauge bosons is simply plain wrong: gluons in quantum chromodynamics don’t have any mass.
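The uncertainty-principle range limit mentioned above is the standard estimate: a mediator of mass-energy mc² can only travel about ħ/(mc) before the borrowed energy must be repaid. A quick sketch for pion exchange (standard textbook values):

```python
# Range of a force mediated by a massive particle: r ~ (hbar*c) / (m*c^2)
HBAR_C = 197.327   # hbar*c in MeV*fm
M_PION = 139.57    # charged pion mass-energy in MeV

r = HBAR_C / M_PION
print(f"pion-exchange range ~ {r:.2f} fm")  # ~1.41 fm, about nuclear size
```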

This isn’t the whole story. There is also a cutoff limit on the production of loops of pions in the vacuum, which means that they are only created out to a short range around a particle. The basis of quantum field theory is abstract mathematical modelling in which all physical mechanisms, beyond perhaps the vital Feynman diagrams, are seen as heuristic philosophy, unnecessary baggage, or inconsistent, unpredictive speculation. If you do overcome these objections, you then have the problem that people are prejudiced against the approach without first checking whether these objections have been overcome. Hence the situation of submitting a physical prediction which agrees with reality to a journal and having it rejected, unread, as being ‘speculative’ or even merely an ‘alternative to a currently accepted theory’. However, as the failure of mainstream string theory indicates, even widely published and celebrated, ‘currently accepted’ speculations can fail.

The cosmological constant’s ‘supporting’ data just shows a lack of gravitational deceleration of distant supernovae, with no proof that this is due to dark energy offsetting gravity; instead, the effect was predicted long before the observations, because a fall in gravitational strength is consistent with the serious redshift-caused energy drop of gravitons (or whatever exchange radiation causes gravity) when the gravitational charges (masses in the universe) are receding from one another relativistically due to recession in the big bang.

The GUT (grand unified theory) scale unification may itself be wrong. The Standard Model might not turn out to be incomplete in the sense of requiring supersymmetry: the QED electric charge rises as you get closer to an electron because there’s less polarized vacuum to shield the core charge. Thus, a lot of electromagnetic energy is absorbed in the vacuum above the IR cutoff, producing loops. It’s possible that the short-ranged nuclear forces are powered by this energy absorbed by the vacuum loops.

In this case, energy from one force (electromagnetism) gets used indirectly to produce pions and other particles that mediate nuclear forces. This mechanism for sharing gauge boson energy between different forces in the Standard Model would get rid of supersymmetry, which is an attempt to get three lines to numerically coincide near the Planck scale. With the strong and weak forces caused by energy absorbed when the polarized vacuum shields the electromagnetic force, when you get to very high energy (bare electric charge) there won’t be any loops, because of the UV cutoff, so both weak and strong forces will fall off to zero. That’s why it’s dangerous to endlessly speculate about only one theory, based on guesswork extrapolations of the Standard Model, which doesn’t have evidence to confirm it.

UPDATE, 25 Feb. ’07:

Here’s a recent comment by Q on the Not Even Wrong blog which explains the problems in applying string theory to particle physics, as opposed to the lesser problems in applying general relativity to cosmology:

*Q* Says:

February 25th, 2007 at 6:52 am

… The small positive cosmological constant, or some alternative idea which does the same job, is required to fit GR to the observations of supernovae redshifts, explaining why gravity isn’t slowing down the recession.

String theory by contrast provides speculative explanations for things which are not observed, such as unification at the Planck scale (inventing supersymmetry to explain that speculation), and inventing gravitons (inventing supergravity to explain gravitons). String theory thus ‘explains’ speculations, not observations.

The point Peter made before about cosmology is that at least it is being led by observations, unlike string theory.

Mainstream models for observations might turn out wrong, but that’s better than being not even wrong. Approximations like flat earth theory, caloric, and phlogiston could later be disproved by evidence. Epicycles could provide an endlessly complex mathematical way of representing observed data, but were eventually replaced by Kepler’s simpler laws for convenience, which in turn could be explained by a simple inverse square force law.

String theory doesn’t even model or duplicate any existing observations, so it is worse than Ptolemy’s epicycles: Feynman’s criticism of string theory was largely that it doesn’t address the parameters of the standard model. It’s not even wrong because it doesn’t even model anything known to exist.

———–

Dark matter similarly has this problem (lack of evidence), because, as Q points out, Cooperstock and Tieu have explained galactic rotation ‘evidence’ for dark matter as a GR effect (instead of an effect of dark matter):

‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

- http://arxiv.org/abs/astro-ph/0507619, pp. 17-18.

For an estimate of the density of the universe from a gravity mechanism, see the proof for *G* here which, given *G,* predicts the density.

The charges in quantum gravity are masses, which are the Higgs field effect on standard model charges.

The Higgs field – which so far isn’t detected – would need to interact with standard model fields for it to give the standard model charges their masses.

*… suppose it were possible to couple a classical theory of gravity to QFT. How do you know which classical theory of gravity? There are infinitely many background-independent classical theories of gravity, the Einstein-Hilbert action is just the one that keeps the dominant term at low energies. So it appears you still have a problem at the Planck scale.*

*(The quantum version of this question is the problem of nonrenormalizability of gravitational theories, which as far I can tell from zillions of blog threads on the topic, LQG ignores completely.) – anon.*

To answer the first point: you choose the classical theory of gravity which, when coupled to QFT, is based on verified facts and makes accurate predictions.

Regarding the second point: renormalization in gravitation will be a change in the effective value of the gravitational charge, i.e., mass. Mass is supposed to be given by the Higgs mechanism, which must be the gravitational charge, because Einstein’s equivalence principle says gravitational mass is the same as inertial mass.

Renormalization of electric charge in QED is explained by the polarization of the vacuum around the bare core charge, which cancels part of the latter as observed from a distance. Apparently you cannot polarize the vacuum to shield gravitational force the way you can for electric force.

Polarization in an electric field works because virtual positive charges get attracted closer to the bare core negative charge of a particle than virtual negative charges, so the virtual charges give a net radial electric field which opposes and cancels part of the core charge.

Clearly this can’t occur in a gravitational field because all mass moves the same way; there are no opposite poles for mass and gravity, so no polarization or shielding occurs, at least directly. This makes it hard to see how any quantum theory of gravity can physically include renormalization of gravitational charge (mass).

However, the equivalence principle between gravitational and inertial mass in the context of quantum gravity has been attacked by Rabinowitz in http://arxiv.org/abs/physics/0601218 where it is argued:

“… a theory of quantum gravity may not be possible unless it is not based upon the equivalence principle, or if quantum mechanics can change its mass dependence. …”

In QED, both electric charge and electron mass are renormalized parameters and are scaled by similar factors. This seems to suggest that maybe the source of the electron’s mass is the mass-giving (‘Higgs’) vacuum field outside the polarization region, if mass is associated with the electron by a coupling depending on the electric field of the electron. Thus, the polarization-shielded electric charge (not the core or bare electron charge) would be responsible for coupling external mass-giving Higgs bosons to the electron. So renormalization of the electric field automatically causes renormalization of the gravitational charge (mass), because the shielded electron charge is responsible for the vacuum field effects which give mass to an electron.

In the Standard Model, all masses are given to particles by a field which is separate from electric charge. The only way such a mass-giving field can couple to a massless electron core is obviously by some kind of coupling to the electron’s electric field. So renormalization effects in quantum gravity are likely to be indirect, i.e., the effect of electric field renormalization (which does have a very simple, empirically confirmed physical mechanism: vacuum charge radial polarization).

Copy of a comment, 26 Feb. 2007:

http://riofriospacetime.blogspot.com/2007/02/photons-behind-bars-breaking-loose.html

Hi Louise,

*For decades Niels Bohr’s Complementarity Principle was thought to prevent the wave and particle qualities of light from being measured simultaneously. Recently physicist Shahriar Afshar proved this wrong with a very simple experiment. As a reward, the physics community attacked everything from Afshar’s religion to his ethnicity. Prevented from publishing a paper, even on arxiv, he “went public” to NEW SCIENTIST magazine.*

Bohr’s Complementarity and Correspondence Principles are just religion; they’re not based on evidence.

The experimental evidence is that Maxwell’s empirical equations are valid apart from vacuum effects which appear close to electrons, where the electric field is above the pair-production threshold of about 10^18 V/m.

This is clear even in Dyson’s Advanced Quantum Mechanics. There is a physical mechanism – pair production – which causes chaotic phenomena above the IR cutoff, that is within a radius of approx. 10^{-15} m from a unit electric charge like an electron.

Beyond that range, the field is far simpler (better described by classical physics), because the field doesn’t have enough energy to create loops of particles from the Dirac sea.

What Bohr tries to do is to freeze the understanding of quantum theory at the 1927 Solvay Congress level, which is unethical.

Bohr went wrong with his classical theory of the atom in 1917 or so.

Rutherford wrote to Bohr asking a question like “how do the electrons know when to stop when they reach the ground state (i.e., why don’t they carry on spiralling into the nucleus, radiating more and more energy as Maxwell’s light model suggests)?”

Bohr should have had the sense to investigate whether radiation continues. We know from Yang-Mills theory and Feynman diagrams that the electric force results from virtual (gauge boson) photon exchanges between charges!

What is occurring is that Bohr *ignored the multibody effects of radiation* whereby every atom and spinning charge is radiating! All charges are radiating, or else they wouldn’t have electric charge! (Yang-Mills theory.)

Let the normal rate of exchange of energy (emission and reception per electron) be X watts. When an electron in an excited state radiates a real photon, it is radiating at a rate exceeding X.

As it radiates, it loses energy and falls to the ground state where it reaches equilibrium, with emission and reception of gauge boson radiant power equal to X. [Although you might naively expect the classical Maxwellian radiation emission rate to be greatest in the ground state, you also need to take account of the effect of electron spin changes on the radiation emission rate in the full analysis; see 'An Electronic Universe, Part 2', Electronics World, April 2003. I will try to put a detailed paper about this effect on the internet soon.]

I did a rough calculation of the transition time at http://cosmicvariance.com/2006/11/01/after-reading-a-childs-guide-to-modern-physics/#comment-131020

Once you know that Yang-Mills theory suggests electric and other forces are due to exchange of radiation, you know why there is a ground state (i.e., why the electron doesn’t keep converting its kinetic energy into radiation, and spiral into the hydrogen nucleus).

In the Yang-Mills picture, the ground state energy level corresponds to equilibrium: the power the electron radiates balances the power it receives as Yang-Mills exchange radiation.

The way Bohr should have analysed this was to first calculate the radiative power of an electron in the ground state using its acceleration, which is a = (v^2)/x. Here x = 5.29*10^{-11} m (see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydr.html) and the value of v is only c.alpha = c/137.

Thus the appropriate (non-relativistic) radiation formula to use is: power P = (e^2)(a^2)/(6*Pi*Permittivity*c^3), where e is electron charge. The ground state hydrogen electron has an astronomical centripetal acceleration of a = 9.06*10^{22} m/s^2 and a radiative power of P = 4.68*10^{-8} Watts.

That is the precise amount of background Yang-Mills power being received by electrons in order for the ground state of hydrogen to exist. The historic analogy for this concept is Prevost’s 1792 idea that constant temperature doesn’t correspond to no radiation of heat, but instead corresponds to a steady equilibrium (as much power radiated per second as received per second). This replaced the old Bohr-like Phlogiston and Caloric philosophies with two separate real, physical mechanisms for heat: radiation exchange and kinetic theory. (Of course, the Yang-Mills radiation determines charge and force-fields, not temperature, and the exchange bosons are not to be confused with photons of thermal radiation.)

Although P = 4.68*10^{-8} Watts sounds small, remember that it is the power of just a single electron in orbit in the ground state, and when the electron undergoes a transition, the photon carries very little energy, so the equilibrium quickly establishes itself: the real photon of heat or light (a discontinuity or oscillation in the normally uniform Yang-Mills exchange process) is emitted in a very small time!

Take a photon of red light, which has a frequency of 4.5*10^{14} Hz. By Planck’s law, E = hf = 3.0*10^{-19} Joules. Hence the time taken for an electron with a ground state power of P = 4.68*10^{-8} Watts to emit a photon of red light in falling back to the ground state from a suitably excited state will be only on the order of E/P = (3.0*10^{-19})/(4.68*10^{-8}) = 6.4*10^{-12} second.
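As a sanity check on these figures (my own sketch, not part of the original article), the whole chain of arithmetic above can be reproduced with standard constants and the non-relativistic Larmor formula quoted earlier:

```python
# Re-checking the ground-state radiation figures quoted above with the
# non-relativistic Larmor formula P = (e^2)(a^2)/(6*Pi*Permittivity*c^3).
import math

e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck constant, J.s
x = 5.29e-11       # Bohr radius, m
v = c / 137.0      # ground-state orbital speed, v = alpha*c

a = v**2 / x                                    # centripetal acceleration
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor radiated power
E = h * 4.5e14                                  # red-light photon energy
t = E / P                                       # emission time at power P

print(a)   # ~9.06e22 m/s^2
print(P)   # ~4.7e-8 W
print(t)   # ~6.4e-12 s
```

The acceleration and power match the figures in the text; the emission time E/P comes out at roughly 6.4 picoseconds.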

This agrees with the known facts. So the quantum theory of light is compatible with classical Maxwellian theory!

Now we come to the nature of a light photon and the effects of spatial transverse extent: path integrals.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

- Feynman, QED, Penguin, 1990, page 54.

That’s the double-slit experiment, etc. The explanation behind it is a flaw in Maxwell’s electromagnetic wave illustration:

http://www.edumedia-sciences.com/a185_l2-transverse-electromagnetic-wave.html

The problem with the illustration is that the photon goes forward with the electric (E) and magnetic (B) fields orthogonal both to the direction of propagation and to each other, but with the two phases of the electric field (positive and negative) behind one another.

That arrangement can’t work, because the magnetic field curls don’t cancel one another’s infinite self-inductance.

First of all, the illustration is a plot of E, B versus propagation dimension, say the X dimension. So it is one dimensional (E and B depict field strengths, not distances in Z and Y dimensions!).

The problem is that for the photon to propagate, the two different curls of the magnetic field (one way in the positive electric field half cycle, the other way in the negative electric field half cycle) must partly cancel one another to prevent the photon having infinite self inductance: this is similar to the problem of sending a propagating pulse of electric energy into a single wire.

It doesn’t work: the wire radiates, the pulse dies out quickly. (This is only useful for antennas.)

To send a propagating pulse of energy, a logic step, in an electrical system, you need two conductors forming a transmission line. In a Maxwellian photon, there can be no cancellation of infinite inductance from each opposing magnetic curl, because each is behind or in front of the other. Because fields only travel at the speed of light, and the whole photon is moving ahead with that speed, there can be no influence of each half cycle of a light photon upon the other half cycle.

I’ve illustrated this here:

If you look at Maxwell’s equations, they describe how, cyclically, a varying electric field induces a “displacement current” in the vacuum which in turn creates a magnetic field curling around the current, and so on. They don’t explain the dynamics of the photon or light wave in detail. [For the correct physical mechanism behind the 'displacement current' equation - which again is a radiation exchange effect not vacuum charge because the electric fields and frequencies involved are too small (below the IR cutoff) for pair production and vacuum loop effects - see http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html. For the actual Maxwell equations and how they are related to the photon see for example http://quantumfieldtheory.org/Proof.htm, which will soon be edited and improved in structure with a table of contents linked to the different sections.]

One thing that’s interesting about it is this: electromagnetic fields are composed of exchange radiation according to Yang-Mills quantum field theory.

The photon is composed of electromagnetic fields according to Maxwell’s theory.

Hence, the photon is composed of electromagnetic fields which in turn are composed of gauge boson exchange radiation. The photon is a disturbance in the existing field of exchange radiation between the charges in the universe. The apparently cancelled electromagnetic field you get when you pass two logic steps with opposite potentials through each other in a transmission line is not true cancellation: although you get zero electric field (and zero electrical resistance, as Dr David S. Walton showed!) while those pulses overlap, their individual electric fields re-emerge when they have passed through one another.
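This “apparent cancellation then re-emergence” is just linear superposition, and can be illustrated numerically (my own toy sketch, not from the article): two equal and opposite field pulses travelling in opposite directions sum to zero while they overlap, then re-emerge unchanged.

```python
# Two opposite-sign pulses propagating in opposite directions: the total
# field is zero at the instant of full overlap, yet both pulses survive.
import numpy as np

x = np.linspace(-10, 10, 2001)
c = 1.0                            # propagation speed (arbitrary units)

def pulse(u):
    return np.exp(-u**2)           # Gaussian pulse shape

def total_field(t):
    # right-moving pulse +f(x - ct) plus left-moving pulse -f(x + ct)
    return pulse(x - c * t) - pulse(x + c * t)

print(np.max(np.abs(total_field(0.0))))   # ~0: fields cancel at overlap
print(np.max(np.abs(total_field(8.0))))   # ~1: both pulses re-emerge
```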

So if you are some distance from an atom, the “neutral” electric field is not the absence of any field, but the superposition of two fields. (The background “cancelled” electromagnetic field is probably the cause of gravitation, as Lunsford’s unification suggests; I’ve done a calculation of this here (top post).)

Aspect’s “entanglement” seems to be due to wavefunction collapse error in quantum mechanics, as Dr Thomas S. Love has shown: when you take a measurement on a steady-state system like an atom, you need to switch the mathematical model you are using from the time-independent to the time-dependent Schroedinger equation, because your measurement causes a perturbation to the state of the system (e.g., your probing electron upsets the state of the atom, causing a time-dependent effect). This switch-over in equations causes “wavefunction collapse”; it is not a real physical phenomenon travelling instantly! This is the perfect example of confusing a mathematical model with reality.

Aspect’s experimental results show that the polarizations of same-source photons do correlate. All this means is that the measurement paradox doesn’t apply to photons. A photon is moving at light speed, so it doesn’t have any internal time whatsoever (*unlike* electrons!). Something which is frozen in time like a photon can’t change state. To change the nature of the photon it has to be absorbed and re-emitted, as in the case of Compton scattering.

Electrons can have their state changed by being measured, since they aren’t going at light speed. Time only halts for things going at speed c.

So Heisenberg’s uncertainty principle should strictly apply to measuring electron spins as Einstein, Podolsky, and Rosen suggested in the Physical Review in 1935, but it shouldn’t apply to photons. It’s the lack of physical dynamics in modern physics which creates such confusion. The mathematician who lacks physical mechanisms is in fairyland, and drags down too many experimental physicists and others who believe the metaphysical (non-mechanistic) interpretations of the theory. That’s why string theory and other unconnected-to-any-experimental-fact drivel flourishes.

——-

For more information on the power transmission line TEM (transverse electromagnetic) logic step which is useful background for the discussion above, see:

**Quantum mechanics**

There is a strong analogy between the ‘string theory’ mentality and the ‘Copenhagen Interpretation’ mentality, with dictatorial ‘leaders’ of science abusing power to claim that spin-2 gravitons via string are the only way forward allowed. This is a repeat of the Copenhagen Interpretation groupthink, which was supported by John von Neumann’s false ‘disproof’ of hidden-variables theories. When Bohm refuted von Neumann’s ‘disproof’, Oppenheimer and others simply ridiculed Bohm personally and refused to discuss his alternative ideas. *The actual so-called ‘hidden variables’ are gauge bosons and virtual fermions.*

It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)
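Popper’s statistical-scatter reading can be illustrated numerically (my own sketch, not Popper’s): take a Gaussian wave packet, compute the standard deviations (the ‘scatter’) of its position and wavenumber distributions, and the product comes out at the Heisenberg minimum of 1/2 (in units where momentum is ħk).

```python
# Position/wavenumber scatter of a Gaussian wave packet via the FFT:
# sigma_x * sigma_k = 1/2 exactly for a minimum-uncertainty Gaussian.
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
sigma = 2.0
psi = np.exp(-x**2 / (4 * sigma**2))     # Gaussian packet, sigma_x = sigma

prob_x = np.abs(psi)**2
prob_x /= prob_x.sum() * dx              # normalise to unit area
sigma_x = np.sqrt((x**2 * prob_x).sum() * dx)

phi = np.fft.fftshift(np.fft.fft(psi))   # wavenumber-space amplitude
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dk = k[1] - k[0]
prob_k = np.abs(phi)**2
prob_k /= prob_k.sum() * dk
sigma_k = np.sqrt((k**2 * prob_k).sum() * dk)

print(sigma_x * sigma_k)   # ~0.5: the minimum scatter product
```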

Experimental evidence:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.
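The screening described in that quotation can be put into numbers with the standard one-loop running of the electromagnetic coupling. A hedged sketch (mine, not from the quoted paper; it includes only the electron loop, so it understates the full Standard Model result of about 1/128 at the Z mass):

```python
# One-loop QED running coupling with only the electron loop:
# alpha(Q) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(Q/m_e)),
# showing the coupling growing with energy as vacuum screening is penetrated.
import math

alpha0 = 1 / 137.036   # fine-structure constant at low energy
m_e = 0.000511         # electron mass, GeV

def alpha(Q):
    """Effective coupling at energy scale Q (GeV), electron loop only."""
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

print(1 / alpha(m_e))   # 137.0: unchanged at the electron-mass scale
print(1 / alpha(91.2))  # ~134.5: stronger coupling at the Z scale
```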

‘*Statistical Uncertainty. *This is the kind of uncertainty that pertains to fluctuation phenomena and random variables. It is the uncertainty associated with ‘honest’ gambling devices…

‘*Real Uncertainty.* This is the uncertainty that arises from the fact that people believe different assumptions…’ – H. Kahn & I. Mann, *Techniques of systems analysis, *RAND, RM-1829-1, 1957.

Let us deal with the physical interpretation of the periodic table using quantum mechanics very quickly. Niels Bohr in 1913 came up with an orbit quantum number, *n,* which comes from his theory and takes positive integer values (1 for the first or K shell, 2 for the second or L shell, etc.). In 1915, Arnold Sommerfeld (of 137-number fame) introduced an elliptical orbit-shape number, *l,* which can take values of *n –* 1, *n –* 2, *n –* 3, … 0. Back in 1896 Pieter Zeeman introduced orbital direction magnetism, which gives a quantum number *m* with possible values *l, l –* 1, *l –* 2, …, 0, … –(*l –* 2), –(*l –* 1), –*l.* Finally, in 1925 George Uhlenbeck and Samuel Goudsmit introduced the electron’s magnetic spin direction effect, *s,* which can only take values of +1/2 and –1/2. (Zeeman had observed the phenomenon of spectral lines splitting when the atoms emitting the light are in a strong magnetic field, which was later explained by the spin of the electron. Other experiments confirm electron spin. The actual spin is in units of h/(2π), so the actual amounts of angular spin are +½ h/(2π) and –½ h/(2π).)

To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has *n, l, m,* and *s* values of 1, 0, 0, +1/2 and 1, 0, 0, -1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’, as it prevents electrons duplicating one another. (Proposed by Wolfgang Pauli in 1925; note the exclusion principle only applies to fermions with half-integral spin like the electron, and does not apply to bosons, which all have integer spin, like light photons and gravitons. While you use Fermi-Dirac statistics for fermions, you have to use Bose-Einstein statistics for bosons, on account of spin. Non-spinning particles, like gas molecules, obey Maxwell-Boltzmann statistics.) Hence, the first shell can take only 2 electrons before it is full. (It is physically due to a combination of magnetic and electric force effects from the electron, although the mechanism must be officially ignored by order of the Copenhagen Interpretation ‘Witchfinder General’, like the issue of the electron spin speed.)
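The bookkeeping just described is simple enough to do by machine. A minimal sketch (my own, using the quantum-number rules stated above) that enumerates the distinct (n, l, m, s) sets and counts how many electrons each shell can hold:

```python
# Enumerate the allowed (n, l, m, s) quantum-number sets for shell n and
# count them: the Pauli exclusion principle gives the shell capacity 2n^2.
def shell_capacity(n):
    states = [(n, l, m, s)
              for l in range(n)             # l = 0 .. n-1
              for m in range(-l, l + 1)     # m = -l .. +l
              for s in (+0.5, -0.5)]        # two spin directions
    return len(states)

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```

The first two capacities, 2 and 8, are the helium and neon closed shells discussed in the text.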

For the second shell, we find it can take 8 electrons, with *l =* 0 for the first two (an elliptical subshell, if we ignore the chaotic effect of wave interactions between multiple electrons), and *l =* 1 for the other 6. Experimentally we find that elements with closed, full shells of electrons, i.e., a total of 2 or 8 electrons in these shells, are very stable. Hence, helium (2 electrons) and neon (2 electrons in the first shell and 8 electrons filling the second shell) will not burn.

Now read the horses*** from ‘expert’ Sir James Jeans: ‘The universe is built so as to operate according to certain laws. As a consequence of these laws atoms having certain definite numbers of electrons, namely 6, 26 to 28, and 83 to 92, have certain properties, which show themselves in the phenomena of life, magnetism and radioactivity respectively … the Great Architect of the Universe now begins to appear as a pure mathematician.’ – Sir James Jeans, MA, DSc, ScD, LLD, FRS, *The Mysterious Universe,* Penguin, 1938, pp. 20 and 167.

One point I’m making here, aside from the simplicity underlying the use of quantum mechanics, is that it has a *physical interpretation for each aspect* (it is also possible to predict the quantum numbers from abstract mathematical ‘law’ theory, which is not mechanistic, so is not enlightening). Quantum mechanics is only statistically exact if you have one electron, i.e., a single hydrogen atom. As soon as you get to a nucleus plus two or more electrons, you have to use mathematical approximations or computer calculations to estimate results, which are never exact. This problem is not the statistical problem (uncertainty principle), but a mathematical problem in applying it exactly to difficult situations. For example, if you estimate a 2% probability with the simple theory, it is exact providing the input data is reliable.
But if you have 2 or more electrons, the calculations estimating where the electron will be will have an uncertainty, so you might have 2% +/- a factor of 2, or something, depending on how much computer power and skill you use to do the approximate solution.

Derivation of the Schroedinger equation (an extension of a Wireless World heresy of the late Dr W. A. Scott-Murray), a clearer alternative to Bohm’s ‘hidden variables’ work. The equation for waves in a three-dimensional space, extrapolated from the equation for waves in gases, is:

∇²Ψ = -Ψ(2πf/v)²

where Ψ is the wave amplitude. Notice that this sort of wave equation is used to model waves in particle-based situations, i.e., waves in situations where there are particles (gas molecules carrying sound waves). So we have particle-wave duality resolved by the fact that any wave equation is a statistical model for the orderly/chaotic group behaviour of particles (3+ body Poincare chaos). The term ∇²Ψ is just a shorthand (the ‘Laplacian operator’) for the sum of second-order differentials:

∇²Ψ = d²Ψ/dx² + d²Ψ/dy² + d²Ψ/dz²

(Another popular use for the Laplacian operator is heat diffusion when convection doesn’t happen, such as in solids, since the rate of change of temperature is dT/dt = (k/C_v)∇²T, where k is thermal conductivity and C_v is specific heat capacity measured under fixed volume.)

The symbol f is the frequency of the wave, while v is the velocity of the wave. Now 2π is in there because f/v has units of reciprocal metres, so 2π is needed to turn ‘reciprocal metres’ into ‘reciprocal wavelength’. Get it? All waves obey the wave axiom, v = λf, where λ is wavelength. Hence:

∇²Ψ = -Ψ(2π/λ)²

Louis de Broglie, who invented ‘wave-particle duality’ (as waves in the physical, real ether, but that part was suppressed), gave us the de Broglie equation for momentum: p = mc = (E/c²)c = [(hc/λ)/c²]c = h/λ. Hence:

∇²Ψ = -Ψ(2πmv/h)²

Isaac Newton’s theory suggests the equation for kinetic energy, E = ½mv² (although the term ‘kinetic theory’ was I think first used in an article published in a magazine edited by Charles Dickens, a lot later). Hence v² = 2E/m, so we obtain:

∇²Ψ = -8Ψ mE(π/h)²

Finally, the total energy, W, for an electron is in part electromagnetic energy U, and in part kinetic energy E (already incorporated). Thus W = U + E. This rearranges using very basic algebra to give E = W – U. So now we have:

∇²Ψ = -8Ψ m(W – U)(π/h)²

For the electron in the hydrogen atom, the electromagnetic energy is the attractive Coulomb energy U = -q_e²/(4πεR), where q_e is the charge of the electron, and ε is the electric permittivity of the spacetime vacuum or ether. By extension of Pythagoras’ theorem into 3 dimensions, R = (x² + y² + z²)^{1/2}. So now we understand how to derive Schroedinger’s basic wave equation, and as Dr Scott-Murray pointed out in his Wireless World series of the early 1980s, it’s child’s play. It would be better to teach this to primary school kids to illustrate the value of elementary algebra, than hide it as heresy or unorthodox, contrary to Bohr’s mindset!
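As a quick numerical sanity check on the wave relation used in this derivation (my own sketch, not Scott-Murray’s), one can verify that Ψ = sin(2πx/λ) obeys the one-dimensional form d²Ψ/dx² = -(2π/λ)²Ψ:

```python
# Check d^2(psi)/dx^2 = -(2*pi/lambda)^2 * psi numerically for a sine wave.
import numpy as np

lam = 3.0                            # wavelength (arbitrary units)
x = np.linspace(0, 10, 100001)
dx = x[1] - x[0]
psi = np.sin(2 * np.pi * x / lam)

d2psi = np.gradient(np.gradient(psi, dx), dx)   # numerical second derivative
expected = -(2 * np.pi / lam) ** 2 * psi

# compare away from the endpoints, where finite differences degrade
err = np.max(np.abs(d2psi[100:-100] - expected[100:-100]))
print(err)   # tiny compared with the (2*pi/lam)^2 ~ 4.4 scale of the terms
```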

Let us now examine the work of Erwin Schroedinger and Max Born. Since the nucleus of hydrogen is 1836 times as massive as the electron, it can in many cases be treated as at rest, with the electron zooming around it. Schroedinger in 1926 took the concept of particle-wave duality and found an equation that could predict the probability of an electron being found within any distance of the nucleus. The full theory includes, of course, electron spin effects and the other quantum numbers, and so the mathematics at least looks a lot harder to understand than the underlying physical reality that gives rise to it.

First, Schroedinger could not calculate anything with his equation because he had no idea what the hell he was doing with the wavefunction Ψ. Max Born, perhaps naively, suggested it is like water waves, where it is the amplitude of the wave that needs to be squared to get the energy of the wave, and thus a measure of the mass-energy to be found within a given space. (Likewise, the ‘electric field strength’ (volts/metre) from a radio transmitter mast falls off generally as the *inverse of distance*, although the energy intensity (watts per square metre) falls off as the *inverse-square law of distance*.)

Hence, by Born’s conjecture, the energy per unit volume of the electron around the atom is E ~ Ψ². If the volume is a small, 3-dimensional cube in space, dx.dy.dz in volume, then the proportion of (or probability of finding) the electron within that volume will thus be: dx.dy.dz.Ψ²/[∫∫∫ Ψ² dx.dy.dz], where each ∫ is the integral from 0 to infinity. Thus, the relative likelihood of finding the electron in a thin shell between radii of r and a will be the integral of the product of surface area (4πr²) and Ψ² over the range from r to a. The number we get from this integral is converted into an absolute probability of finding the electron between radii r and a by normalising it: in other words, dividing it by the similarly calculated relative probability of finding the electron anywhere between radii of 0 and infinity. Hence we can understand what we are doing for a hydrogen atom.

The version of Schroedinger’s wave equation above is really a description of the time-averaged (or time-independent) chaotic motion of the electron, which is why it gives a probability of finding the electron in a given zone, not an exact location for the electron. There is also a time-dependent version of the Schroedinger wave equation, which can be used to obfuscate rather well. But let’s have a go anyhow. To find the time-dependent version, we need to treat the electrostatic energy U as varying in *time*. If U = hf, from de Broglie’s use of Planck’s equation, and because the electron obeys the wave equation, its time-dependent frequency obeys f² = -(2πΨ)^{-2}(dΨ/dt)², where f² = U²/h². Hence U² = -h²(2πΨ)^{-2}(dΨ/dt)². To find U we need to remember from basic algebra that we will lose possible mathematical solutions unless we allow for the fact that U may be negative. (For example, if I think of a number, square it, and then get 4, that does not mean I thought of the number 2: I could have started with the number –2.) So we need to introduce i = √(-1). Hence we get the solution U = ih(2πΨ)^{-1}(dΨ/dt). Remembering E = W – U, we get the time-dependent Schroedinger equation.

Let us now examine how fast the electrons go in the atom in their orbits, neglecting spin speed. Assuming simple circular motion to begin with, the inertial ‘outward’ force on the electron is F = ma = mv²/R, which is balanced by the electric ‘attractive’ inward force F = (q_e/R)²/(4πε). Hence v = ½q_e/(πεRm)^{1/2}.
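The orbital-speed formula just derived can be checked against the ground-state value quoted earlier (v = c*alpha = c/137). A minimal sketch with standard constants (mine, not part of the original text):

```python
# Check that v = (1/2) * q_e / sqrt(pi * eps0 * R * m) gives v = alpha*c
# for the hydrogen ground state, R = Bohr radius.
import math

q_e = 1.602e-19    # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m = 9.109e-31      # electron mass, kg
R = 5.29e-11       # Bohr radius, m
c = 2.998e8        # speed of light, m/s

v = 0.5 * q_e / math.sqrt(math.pi * eps0 * R * m)
print(v)       # ~2.19e6 m/s
print(c / v)   # ~137, i.e. v = alpha*c as stated earlier in the text
```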
This is mathematically sound in the sense that the observer always disturbs the signals he observes. If I measure my car tyre pressure, some air leaks out, reducing the pressure. If you have a small charged capacitor and try to measure the voltage of the energy stored in it with an old-fashioned analogue volt meter, you will notice that the volt meter itself drains the energy in the capacitor pretty quickly. A digital meter contains an amplifier, so the effect is less pronounced, but it is still there. A geiger counter held in a fallout area absorbs some of the gamma radiation it is trying to measure, reducing the reading, as does the presence of the body of the person using it.

A blind man searching for a golf ball by swinging a stick around will tend to disturb what he finds. When he feels and hears the click of the impact of his stick hitting the golf ball, he knows the ball is no longer where it was when he detected it. If he prevents this by not moving the stick, he never finds anything. So it is a reality that the observer always tends to disturb the evidence by the very process of observing the evidence. If you even observe a photograph, the light falling on the photograph very slightly fades the colours. With something as tiny as an electron, this effect is pretty severe.

But that does not mean that you have to make up metaphysics to stagnate physics for all time, as Bohr and Heisenberg did when they went crazy. Really, Heisenberg’s law has a simple causal meaning to it, as I’ve just explained. If I toss a coin and don’t show you the result, do you assume that the coin is in a limbo, indeterminate state between two parallel universes, in one of which it is heads and in the other of which it landed tails?

In *Science World* magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject. Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation were correct, then half of the total energy would be unaccounted for in the dark fringes. The photons are not arriving at the dark fringes; instead, they arrive in the bright fringes.

The interference of radio waves and other phased waves is also known as the Hanbury Brown-Twiss effect, whereby if you have two radio transmitter antennae, the signal that can be received depends on the distance between them: moving them slightly apart or closer together changes the relative phase of the transmitted signal of one with respect to the other, cancelling the signal out or reinforcing it. (It depends on the frequencies and amplitudes as well: if both transmitters are on the same frequency and have the same output amplitude and radiated power, then perfectly destructive interference occurs if they are exactly out of phase, and perfect reinforcement – constructive interference – occurs if they are exactly in phase.) This effect also actually occurs in electricity, replacing Maxwell’s mechanical ‘displacement current’ of vacuum dielectric charges.
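The two-transmitter case reduces to adding two phasors. An illustrative sketch (mine, not from the text): the received amplitude from two equal-amplitude, equal-frequency sources swings between 0 and 2 as the phase difference set by the spacing varies.

```python
# Sum the phasors from two unit-amplitude coherent sources: the received
# amplitude depends only on their relative phase (set by the spacing).
import cmath
import math

def received_amplitude(phase_difference):
    # one unit phasor plus a second delayed by the path phase difference
    return abs(1 + cmath.exp(1j * phase_difference))

print(received_amplitude(0.0))       # 2.0: in phase, constructive
print(received_amplitude(math.pi))   # ~0: out of phase, destructive
```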

**Feynman quotation**

The Feynman quotation I located is this:

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. **But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on. **The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. **But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude.** The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’ – R. P. Feynman, *QED, *Penguin, London, 1990, pp. 84-5. (Emphasis added in **bold**.)

‘Light … “smells” the neighboring paths around it, and uses a **small core of nearby space**. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54. (Emphasis added in **bold**.)

That’s wave-particle duality explained. The path integrals don’t mean that the photon goes on all possible paths; as Feynman says, it uses only a “small core of nearby space”.

The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path, and the rest near it. Similarly, you find people saying that QFT says the vacuum is full of loops of annihilation-creation. When you check what QFT actually says, those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, i.e. below the IR cutoff energy (beyond about 1 fm from a charge), then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, i.e. at zero distance from a particle, then the loops would have infinite energy and momenta and the effects of those loops on the field would be infinite, again causing problems.

So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles).

Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions, such as superimposed spin states) is a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between the time-dependent and time-independent versions of the Schroedinger equation at the time of measurement.

Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset its orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal; there is no metaphysics involved. The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book *The Logic of Scientific Discovery*]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, *Objective Knowledge,* Oxford University Press, 1979, p. 303.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– **Dr Tim Poston** and **Dr Ian Stewart**, ‘Rubber Sheet Physics’ (science article, not science fiction!) in **Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981**.

(Update: There are some awful grammatical errors in this post, but they are easy to spot and correct and in any case don’t detract from the mathematical physics, so editing the text is not a high priority. It will be done when time permits. Readers should also see http://quantumfieldtheory.org/ for some additional resources which are available online. There is also an introduction to some other technical aspects of quantum field theory at http://en.wikipedia.org/wiki/Quantum_field_theory.)

Copy of a comment:

http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html

Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, Fitzgerald-Lorentz, et al.

The mathematical model follows experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then; the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)

Regarding supersymmetry, which is an example of a theory that makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book *Supersymmetry and String Theory: Beyond the Standard Model*: http://golem.ph.utexas.edu/~distler/blog/

“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α₁, α₂ up to the point where they unify, then run α₃ down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”

The motivation for supersymmetry is the issue of getting the electromagnetic, weak, and strong forces to unify at 10^16 GeV or whatever, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that in the absence of supersymmetry, unification is incompatible with the Standard Model.

The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.

Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pair is pointed the opposite way. (I.e., the electric field lines point inwards towards an electron. The electric field lines between virtual electron-positron pairs, which are polarized with virtual positrons closer to the real electron core than virtual electrons, produce an outwards radial electric field which cancels out part of the real electron’s field.)

So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.

Now, what is happening to the energy of the field when it is shielded like this by polarization?

Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?

Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge’s field.

That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.

This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.

Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).

This increase in strength, in turn, means that there is less energy in the smaller distance of vacuum which has been absorbed from the electromagnetic field to produce loops.

As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you have asymptotic freedom effects (obviously this involves gluon and other particles and is complex) for quarks.

Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.

These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.

As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is 7% higher at 90 GeV as found experimentally.

The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is – as in Wilson’s argument – no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.

What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.

It is easy to calculate this: the electromagnetic field energy being used in creating loops up to the 90 GeV scale, for example, is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is hidden by vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424).

So this physical understanding should be investigated. Instead, the mainstream censors this physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.
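A rough numerical check on that 7% figure, using the standard one-loop QED running formula (which is not derived in this post, and which I give here with only the electron loop contributing, so it understates the full measured rise):

```python
import math

alpha_0 = 1 / 137.036   # low-energy fine structure constant (value below the IR cutoff)
m_e = 0.511e-3          # electron mass-energy in GeV
Q = 90.0                # probe energy in GeV (the Z-mass scale of the Levine/Koltick data)

# One-loop QED running, electron loop only:
#   alpha(Q) = alpha_0 / (1 - (2*alpha_0 / (3*pi)) * ln(Q / m_e))
alpha_Q = alpha_0 / (1 - (2 * alpha_0 / (3 * math.pi)) * math.log(Q / m_e))

increase = alpha_Q / alpha_0 - 1  # fractional rise in the effective charge
```

The electron loop alone gives roughly a 2% rise; the measured ~7% at 90 GeV includes the polarization loops of all the heavier charged fermions (muons, taus, quarks) as well.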

Supersymmetry shows how all forces would have the same strength at 10^16 GeV.

This can’t be tested, but maybe it can be disproved theoretically as follows.

The energy of the loops of particles which are causing nuclear forces comes from the energy absorbed by the vacuum polarization phenomena.

As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.

Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.

So this would seem to discredit supersymmetry, whereby at “unification” energy, you get all forces having the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetry to “explain” a unification which is not observed.

***************************

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
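Schwinger’s one-loop term quoted above is simple enough to evaluate directly (a quick sketch; the constant value is the standard CODATA figure):

```python
import math

alpha = 1 / 137.035999                 # fine structure constant
schwinger = 1 + alpha / (2 * math.pi)  # Dirac's 1, plus the first vacuum loop correction
# The alpha/(2*pi) term is Schwinger's first-order radiative (coupling) correction,
# giving the electron's magnetic moment in Bohr magnetons.
```

This evaluates to about 1.00116 Bohr magnetons, matching the figure in the text.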

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Copy of a comment:

http://riofriospacetime.blogspot.com/2007/02/slayer-and-ever-changing-moods.html

nige said…

… I’d like to see evidence, which is not an ad hoc model involving dark energy. Ptolemy’s epicycles were an ad hoc model of the known cosmos in 150 AD.

You can’t claim that the evidence on which the model is based is evidence for the model. The model has to correctly predict something interesting (i.e., a far out extrapolation from existing data might do, but just a prediction which is an interpolation between known data points won’t usually be impressive) which didn’t go into the data it was originally constructed upon, to be taken seriously.

For example, Kepler’s laws were only a better model than Ptolemy’s and Copernicus’ epicycles because they were much simpler (both Ptolemy and Copernicus used epicycles to model the known cosmos; Ptolemy has 40 for the Earth-centred universe, and Copernicus has 80 for the solar system with false circular orbits).

What really established Kepler’s laws scientifically as fact was when they were explained by an inverse square law of gravity, which worked for the moon just as for apples falling on the earth.

An apple falls with an acceleration of 32 ft/s/s at earth’s radius. Hooke’s and Newton’s inverse-square law predicts that at the moon’s distance from us, 60 earth radii, gravity is 32/(60^2) = 0.0089 ft/s/s.

Newton validated this figure by showing it is the same (within experimental error at that time) as the centripetal acceleration due to the Moon’s orbit of the earth, a = (v^2)/r, where v is moon’s orbital velocity (i.e., circumference of orbit divided by period of orbit) and r is the average distance to the Moon from the middle of the earth.
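Newton’s check can be reproduced in a few lines (a sketch in SI units rather than the ft/s/s of the text; the input figures are standard modern values):

```python
import math

g_surface = 9.81                 # m/s^2, the apple's acceleration at Earth's surface
earth_radius = 6.371e6           # m
moon_distance = 3.844e8          # m, mean Earth-Moon distance (~60 Earth radii)
sidereal_month = 27.32 * 86400   # s, the Moon's orbital period

# Inverse-square prediction: surface gravity diluted by (r_earth / r_moon)^2
g_predicted = g_surface * (earth_radius / moon_distance) ** 2

# Centripetal acceleration of the Moon's orbit: a = v^2 / r, v = 2*pi*r / T
v = 2 * math.pi * moon_distance / sidereal_month
a_moon = v ** 2 / moon_distance
```

The two accelerations agree to within about 1–2%, which is the comparison Newton made.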

This is the sort of thing a scientific theory must do. Dark energy is pseudoscience by comparison, lacking theory, lacking objective evidence, relying on the assertions of groupthink and mainstream ideology, hyped by obsequious people in the media and stringers who don’t understand why experiments are required to validate theories before they gain widespread attention.

5:41 AM

Copy of a comment:

http://terrytao.wordpress.com/2007/02/26/quantum-mechanics-and-tomb-raider/#comment-16

February 27th, 2007 at 3:46 am

nc

“Anyway, its quite clear that Terry Tao is not talking about a classical ensemble of Lara’s, because the various Lara’s are interacting with each other. Its exactly the same as an electron behaving as though there were multiple slits between it and a screen – interference results.”

The solar system would be as chaotic as a multi-electron atom if the gravitational charges (masses) of the planets were all the same (as for electrons) and if the sum of planetary masses was the sun’s mass (just as the sum of electron charges is equal to the electric charge of the nucleus). This is the 3+ body problem of classical mechanics:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

Obviously Bohr did not know anything about this chaos in classical systems when coming up with the complementarity and correspondence principles in the Copenhagen Interpretation. Nor did even David Bohm, who sought the Holy Grail of a potential which becomes deterministic at large scales and chaotic (due to hidden variables) at small scales.

What is interesting is that, if chaos does produce the statistical effects for multi-body phenomena (atoms with a nucleus and at least two electrons), what produces the interference/chaotic statistically describable (Schroedinger equation model) phenomena when a single photon has a choice of two slits, or when a single electron orbits a proton in hydrogen?

Quantum field theory phenomena obviously contribute to quantum chaotic effects. The loops of charges spontaneously and randomly appearing around a fermion between IR – UV cutoffs could cause chaotic deflections on the motion of even a single orbital electron:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.] … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Yang-Mills exchange radiation is what constitutes electromagnetic fields, both of the electrons in the screen containing the double slits, and also the electromagnetic fields of the actual photon of light itself.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The electrons are exchanging a net amount of gauge boson energy wherever you have electromagnetic forces doing work. The claimed failure of Maxwell’s classical electromagnetism (that an orbiting charge should radiate, so there should be no ground state) comes from ignoring the fact that a static electric field is simply an equilibrium of light-speed radiation exchange between all charges. There could be plenty of interesting checkable mathematical results from rigorously explaining the distinction between classical and quantum results without invoking speculation.

Copy of a comment:

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

anonymous,

See http://www.iop.org/EJ/abstract/0034-4885/66/11/R04, a publication in Rep. Prog. Phys. 66 2025-2068 states:

“We review recent work on the possibility of a varying speed of light (VSL). We start by discussing the physical meaning of a varying-c, dispelling the myth that the constancy of c is a matter of logical consistency. …”

The fixed velocity of light was only accepted in 1961, and it is fixed by consensus not by science.

A similar consensus fix is Benjamin Franklin’s guess that there is an excess of free electric charges at the anode of a battery, which he labelled positive for surplus, just based on guesswork.

Hence, now we all have to learn that in electric circuits, electrons flow in the opposite direction (i.e., in the direction from – to +) to Franklin’s conventional current (+ toward -).

This has all sorts of effects you have to be aware of. Electrons being accelerated upwards in a vertical antenna consequently results in a radiated signal which starts off with a negative half cycle, not a positive one, because electrons in Franklin’s scheme carry negative charge.

Similarly, the idea of a fixed constant speed of light was appealing in 1961, but it would be as unfortunate to argue that the speed of light can’t change because of a historical consensus as to insist that electrons can’t flow around a circuit from the – terminal to the + terminal of a battery, because Franklin’s consensus said otherwise.

Sometimes you just need to accept that consensus doesn’t take precedence over scientific facts. What matters is not what a group of people decided was for the best in their ignorance 46 years ago, but what is really occurring.

The speed of light in vacuum is hard to define because it’s clear from Maxwell’s equations that light depends on the vacuum, which may be carrying a lot of electromagnetic field or gravitational field energy per cubic metre, even when there are no atoms present.

This vacuum field energy causes curvature in general relativity, deflecting light, but it also helps light to propagate.

Start off with the nature of light given by Maxwell’s equations.

In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges. Hence the two Maxwell divergence equations are irrelevant and we just deal with the two curl equations.

For a Maxwellian light wave where E field and B field intensities vary along the propagation path (x-axis), Maxwell’s curl equation for Faraday’s law reduces to simply: dE/dx = -dB/dt, while Maxwell’s curl equation for Maxwell’s equation for the magnetic field created by vacuum displacement current is: -dB/dx = m*e*dE/dt, where m is magnetic permeability of space, e is electric permittivity of space, E is electric field strength, B is magnetic field strength. To solve these simultaneously, differentiate both:

d^2 E /dx^2 = – d^2 B/(dx*dt)

-d^2 B /(dx*dt) = m*e*d^2 E/dt^2

Since d^2 B/(dx*dt) occurs in each of these equations, they are equivalent, so Maxwell got (dx/dt)^2 = 1/(me), i.e. c = dx/dt = 1/(me)^{1/2} = 300,000 km/s.
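Maxwell’s result c = 1/(me)^{1/2} can be checked numerically (a sketch using the standard SI values of the vacuum permeability and permittivity):

```python
import math

mu_0 = 4e-7 * math.pi     # magnetic permeability of free space, H/m (pre-2019 defined value)
eps_0 = 8.854187817e-12   # electric permittivity of free space, F/m

# Maxwell's wave-equation result: the propagation speed is 1/sqrt(mu_0 * eps_0)
c = 1 / math.sqrt(mu_0 * eps_0)   # ~3.00e8 m/s, i.e. ~300,000 km/s
```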

However, there’s a problem introduced by Maxwell’s equation -dB/dx = m*e*dE/dt, where e*dE/dt is the displacement current.

Maxwell’s idea is that an electric field which varies in time as it passes a given location, dE/dt, induces the motion of vacuum charges along the electric field lines while the vacuum charges polarize, and this motion of charge constitutes an electric current, which in turn creates a curling magnetic field, which by Faraday’s law of induction completes the electromagnetic cycle of the light wave, allowing propagation.

The problem is that the vacuum doesn’t contain any mobile virtual charges (i.e. virtual fermions) below a threshold electric field of about 10^18 v/m, unless the frequency is extremely high.

If the vacuum contained charge that is polarizable by any weak electric field, then virtual negative charges would be drawn to the protons and virtual positive charges to electrons until there was no net electric charge left, and atoms would no longer be bound together by Coulomb’s law.

Renormalization in quantum field theory shows that there is a limited effect only present at very intense electric fields above 10^18 v/m or so, and so the dielectric vacuum is only capable of pair production and polarization of the resultant vacuum charges in immensely strong electric fields.
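The ~10^18 V/m threshold quoted above corresponds to the Schwinger critical field for pair production, E_c = m²c³/(eħ); a quick numerical check (a sketch with rounded SI constants):

```python
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C
hbar = 1.0546e-34  # reduced Planck constant, J*s

# Schwinger critical field: pair production of virtual fermions becomes
# significant only for electric fields stronger than this.
E_c = m_e**2 * c**3 / (e * hbar)   # ~1.3e18 V/m
```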

Hence, Maxwell’s “displacement current” of i = e*dE/dt amps, doesn’t have the mechanism that Maxwell thought it had.

Feynman, who with Schwinger and others discovered the limited vacuum dielectric shielding in quantum electrodynamics when inventing the renormalization technique (where the bare core electron charge is stronger than the shielded charge seen beyond the IR cutoff, because of the effect of shielding by polarization of the vacuum out to 1 fm radius or 10^18 v/m), should have solved this problem.

Instead, Feynman wrote:

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, 1964, c18, p2.

Feynman is correct here, and he does go further in his 1985 book QED, where he discusses light from the path integrals framework:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.

I’ve got some comments about the real mechanism for Maxwell’s “displacement current” from the logic signal cross-talk perspective here, here and here.

The key thing is in a quantum field theory, any field below the IR cutoff is exchange radiation with no virtual fermions appearing (no pair production). The radiation field has to do the work which Maxwell thought was done by the displacement and polarization of virtual charges in the vacuum.

The field energy is sustaining the propagation of light. Feynman’s path integral shows this pretty clearly too. Professor Clifford Johnson kindly pointed out here:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

This is also the approach in Professor Zee’s “Quantum Field Theory in a Nutshell” (Princeton University Press, 2003), Chapter I.2, Path Integral Formulation of Quantum Mechanics.

The idea is that light can go on any path and is affected most strongly by neighboring paths within a wavelength (transverse spatial extent) of the line the photon appears to follow.

What you have to notice, however, is that photons tend to travel between fermions. So does exchange radiation (gauge boson photons) that cause the electromagnetic field. So fermions constitute a network of nodes along which energy is being continuously exchanged, with observable photons of light, etc., travelling along the same paths as the exchange radiation.

It is entirely possible that light speed in the vacuum depends on the energy density of the background vacuum field (which could vary as the universe expands), just as the speed of light is slower in glass or air than in a vacuum.

Light speed, however, tends to slow down when the energy density of the electromagnetic fields through which it travels is higher: hence it slows down more in dense glass than in air. This is well worth investigating in more detail.

Copy of a comment:

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“One also has to bear in mind that there are incredibly stringent experimental bounds on the breaking of Lorentz symmetry, as Magueijo refers to at the end of the abstract you linked to. Any theory where c changes (in a meaningful way, not as the result of an odd choice of units) will break Lorentz invariance and be subject to such constraints.” – Anonymous

Lorentz invariance is allegedly broken in many ways already.

First, as Smolin and others say in discussing “doubly special relativity”, quantum field theory seems to have some fixed minimum grain size in the vacuum. That breaks Lorentz invariance because the length scale of the grain size doesn’t obey Lorentz invariance.

I.e., the Lorentz contraction doesn’t apply to the vacuum grain size, which is usually taken to be an absolute size irrespective of the motion of the observer, such as the Planck length.

That’s the basis of Smolin’s argument, described on p227 of his book “The Trouble with Physics.”

I don’t find Smolin’s argument there totally convincing, purely because the Planck length is supposed to be the smallest length you can obtain from physical constants, but it isn’t. If you take the black hole event horizon radius 2GM/c^2, for an electron mass M this distance is far smaller than the Planck scale.

Nobody has any theoretical, let alone experimental, basis for the Planck scale. There are loads of ways of combining fundamental constants to get distances. So until there is evidence, say from a particle accelerator the size of the galaxy that can probe the Planck scale, it’s speculative.
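The claim above that 2GM/c² for an electron mass is far below the Planck length checks out numerically (a sketch with rounded SI constants):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg

planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
r_electron = 2 * G * m_e / c**2             # horizon radius for an electron mass, ~1.4e-57 m
```

The electron-mass horizon radius comes out more than twenty orders of magnitude below the Planck length, illustrating that combining fundamental constants gives distances spanning a huge range.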

But there are other indications that Lorentz invariance is just the result of a physical mechanism and not a universal law.

Quantum field theory implies that the number of virtual vacuum particles an observer interacts with is not independent of his or her motion, but depends on absolute motion:

“… what we learned has important applications to the study of quantum fields in curved backgrounds. In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state.

The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.“This is precisely the situation when fields are quantized on curved backgrounds. …”

- p. 85 of Introductory Lectures on Quantum Field Theory by Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040 (emphasis added to the reason why Lorentzian invariance is violated by quantum field theory, which is the fundamental physics of the Standard Model of particle physics).

In addition, the whole basis of general relativity is a move away from the fixed Lorentzian background dependence of special relativity; it is a move away from a definite Lorentzian metric. In general relativity, the metric is the result of the field equations for specified conditions.

About 99.9% of people using general relativity and writing about it don’t understand Einstein’s general covariance. So you get “Lorentzian covariance” being discussed. However, general covariance, which is the basis of general relativity, is actually very simple, as I found out in reading Einstein’s original paper:

‘The special theory of relativity… does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’

– Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916. (The emphasis is Einstein’s own italics in the original paper.)

So the widely held idea of “Lorentzian covariance” is just nonsense. What matters is general covariance, which is background independence, i.e., the Einstein field equation without a fixed assumed metric.

The metric is the result of solving the field equation.

The Lorentz contraction is a physical result of moving a charge in an exchange radiation field. You are going to get directional compressions. It’s a consequence of Yang-Mills exchange radiation under certain conditions, not a universal law. There’s a simple analogy to the gravitational contraction you get in a mass field. In each case, exchange radiation is causing contractions in the direction of gravitational field lines or the direction of motion relative to some external observer.

Really, general relativity is background independent: the metric is always the solution to the field equation, and can vary in form depending on the assumptions used, because the shape of spacetime (the type and amount of curvature) depends on the mass distribution, the cosmological constant value, etc. The weak-field solutions like the Schwarzschild metric have a simple relationship to the FitzGerald-Lorentz transformation: just change v^2 to 2GM/r, and you get the Schwarzschild metric from the FitzGerald-Lorentz transformation. This follows from the energy equivalence of kinetic and gravitational potential energy:

E = (1/2)mv^2 = GMm/r, hence v^2 = 2GM/r.

Hence the contraction factor gamma = (1 – v^2/c^2)^{1/2} becomes gamma = (1 – 2GM/(rc^2))^{1/2}, which is the contraction and time dilation form of the Schwarzschild metric.
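As a numerical sanity check, here is a sketch in Python (Earth values assumed for M and r) showing that the two forms of the contraction factor agree once v^2 = 2GM/r is substituted:

```python
# Substituting v^2 = 2GM/r (from (1/2)mv^2 = GMm/r) into the
# FitzGerald-Lorentz contraction factor (1 - v^2/c^2)^(1/2) gives the
# Schwarzschild form (1 - 2GM/(r*c^2))^(1/2). Evaluated at Earth's surface.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # Earth mass, kg
r = 6.371e6     # Earth radius, m

v2 = 2 * G * M / r                           # escape-velocity squared
factor_sr = (1 - v2 / c**2) ** 0.5           # FitzGerald-Lorentz form
factor_gr = (1 - 2*G*M / (r * c**2)) ** 0.5  # Schwarzschild form

print(f"FitzGerald-Lorentz factor: {factor_sr:.12f}")
print(f"Schwarzschild factor:      {factor_gr:.12f}")
print(f"Deviation from 1: {1 - factor_gr:.2e}")  # ~7e-10 at Earth's surface
```

The two factors are numerically identical, and the deviation from unity at the Earth’s surface is of order a part per billion.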

Einstein’s equivalence principle between inertial and gravitational mass in general relativity, when combined with his equivalence between mass and energy in special relativity, implies that the kinetic energy of a mass (E = (1/2)mv^2) is equivalent to the gravitational potential energy of that mass with respect to the surrounding universe (i.e., the amount of energy that would be released per mass m if the universe collapsed, E = GMm/r, where r is the effective size scale of the collapse). So there are reasons why the nature of the universe is probably simpler than the mainstream suspects:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

copy of a comment

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“I’m afraid the supernova data is not in fact consistent with R~t^{2/3}. Indeed it was precisely the supernova data that first showed that the universe is no longer matter dominated, and that the expansion is accelerating. If your solution is equivalent to that of matter-dominated FRW, as it looks, you will find thousands of papers explaining why that simply does not fit the data. It was just this mismatch that forced cosmologists to posit the existence of dark energy.” – anonymous

You may well have reason to be afraid, because you’re plain wrong about dark energy! Louise’s result R ~ t^{2/3} for the expanding size scale of the universe is indeed similar to what you get from the Friedmann-Robertson-Walker metric with no cosmological constant. However, her analysis works because she has a varying velocity of light, which affects the redshifted distance-luminosity relationship; and the data don’t show an expansion rate that is being slowed by gravity and then boosted by dark energy, as a Nobel Laureate explains:

‘the flat universe is just not decelerating, it isn’t really accelerating’

- Professor Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Louise’s main analysis has a varying light velocity which affects several relationships. For example, the travel time of the light will be affected, influencing the distance-luminosity relationship.

What prevents long-range gravitational deceleration isn’t dark energy.

All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons, in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation weakens its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy: the additional decrease, beyond the geometric divergence of field lines (or exchange radiation divergence), comes from the redshift of the exchange radiation, whose energy is proportional to its frequency after redshift, E = hf.

The universe therefore is not like the lab. All forces between receding masses should, according to Yang-Mills QFT, fall off faster than the inverse square law. Basically, where the redshift of visible light is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.
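A minimal sketch of the energy bookkeeping being claimed here (assuming the standard cosmological redshift relation, so that a quantum emitted with energy E = hf arrives carrying E/(1 + z)):

```python
# Sketch of the claimed mechanism: exchange radiation emitted with energy
# E = h*f is received redshifted, carrying only E/(1 + z). The larger the
# redshift z of the receding source, the weaker the received exchange energy.
def received_fraction(z):
    """Fraction of the emitted quantum energy received at redshift z."""
    return 1.0 / (1.0 + z)

for z in (0.0, 0.1, 1.0, 7.0):
    print(f"z = {z}: received energy fraction = {received_fraction(z):.3f}")
```

On this picture the gravity coupling between strongly redshifted masses is cut by the same factor, which is the proposed alternative to dark energy.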

When you check the facts, you see that the role of “cosmic acceleration” as produced by dark energy (the cosmological constant in general relativity) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long-range gravitational deceleration of the expansion at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift, and the accompanying energy loss by E = hf, of the exchange radiation causing gravity. It’s simply a quantum gravity effect: redshifted exchange radiation weakens the effective gravity coupling G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to the redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low-energy limit by GR, and that the results must be explained by adding in a repulsive force due to dark energy, causing an acceleration sufficient to offset the gravitational deceleration and thereby making the model fit the data.

Back to Anderson’s comment, “the flat universe is just not decelerating, it isn’t really accelerating”: supporting this, and proving that the cosmological constant must vanish in order for electromagnetism to be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity, on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R

Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting it to arxiv: the International Journal of Theoretical Physics, vol. 43 (2004), no. 1, pp. 161-177. The paper shows that unification implies that the cosmological constant is exactly zero: no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arxiv and then claim “look at arxiv, there are no valid alternatives”.

“it is certainly not the case that (Lorentz invariant) quantum field theory by itself has a minimum size or violates Lorentz invariance spontaneously,” – anonymous

You haven’t read what I wrote. I stated precisely where the problem is alleged to be by Smolin, which is in the fine graining.

In addition, you should learn a little about renormalization and Wilson’s approach to it, which is to explain the UV cutoff by some grain size in the vacuum. Simply put, the reason why UV divergences aren’t physically real (infinite momenta as you go down toward zero distance from the middle of a particle) is that there’s nothing there: once you get down to size scales smaller than the grain size, there are no loops.

If there is a grain size to the vacuum – and that seems to be the simplest explanation for the UV cutoff – that grain size is absolute, not relative to motion. Hence special relativity’s Lorentzian invariance is wrong on that scale. But hey, we know it’s not a law anyway: there’s radiation in the vacuum (Casimir force, Yang-Mills exchange radiation, etc.), and when you move you get contracted by the asymmetry of that radiation pressure. No need for stringy extradimensional speculations, just hard facts. The cause of Lorentzian invariance is a physical mechanism, and so Lorentzian invariance ain’t a law; it’s the effect of a physical process that operates under particular conditions.
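A toy numerical illustration of the Wilson picture (not real renormalization, just the arithmetic of a cutoff): a loop-momentum integral such as the integral of k dk grows without bound as the upper limit increases, but a finite vacuum grain size a would cap the momentum at roughly 1/a, keeping it finite.

```python
# Toy cutoff illustration: I(L) = integral of k dk from 0 to L grows as
# L^2/2, diverging as the cutoff L -> infinity. A finite vacuum grain size
# a would impose L ~ 1/a, so the integral stays finite.
def loop_integral(cutoff, steps=100000):
    """Crude midpoint-rule estimate of the integral of k dk from 0 to cutoff."""
    dk = cutoff / steps
    return sum((i + 0.5) * dk * dk for i in range(steps))

for cutoff in (10.0, 100.0, 1000.0):
    print(f"cutoff {cutoff:7.1f}: I = {loop_integral(cutoff):.1f}")  # grows as cutoff^2/2
```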

“… and that is not what Alverez-Gaume and V-M are saying in the quote you give.” – anonymous

I gave the quote so you can see what they are saying by reading the quote. You don’t seem to understand even the reason for giving a quotation. The example they give of curvature is backed up by other material based on experiment. They’re not preaching like Ed Witten:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

“One other thing you might recall is that all smooth manifolds – including all the solutions to the equations of general relativity that we can control – are locally flat (and therefore locally Lorentz invariant).” – anonymous

Wrong, curvature is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). Curvature is flat globally because there’s no long range gravitational deceleration.

Locally, curvature has a value dependent upon the gravitation field or your acceleration relative to the gravitational field.

The local curvature due to the planet Earth comes down to the radius of the Earth being contracted by (1/3)GM/c^2 = 1.5 mm in the radial but not the transverse direction.

So the radius of earth is shrunk 1.5 mm, but the circumference is unaffected (just as in the FitzGerald-Lorentz contraction, length is contracted in the direction of motion, but not in the transverse direction).

Hence, the curvature of spacetime locally due to the planet earth is enough to violate Euclidean geometry so that circumference is no longer 2*Pi*R, but is very slightly bigger. That’s the “curved space” effect.
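The 1.5 mm figure can be checked directly; a quick sketch in Python, using standard values for the Earth’s mass:

```python
# Check the radial contraction (1/3)GM/c^2 quoted above for the Earth.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # Earth mass, kg

contraction = G * M / (3 * c**2)   # radial contraction in metres
print(f"Radial contraction of Earth: {contraction * 1000:.2f} mm")  # ~1.48 mm
```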

Curvature only exists locally. It can’t exist globally, throughout the universe, because over large distances spacetime is flat. It does exist locally near masses, because curvature is the whole basis for describing gravitation/acceleration effects in general relativity.

Your statement that spacetime is flat locally is just plain ignorance because in fact it isn’t flat locally due to spacetime curvature caused by masses and energy.

A correction to one sentence [in previous comment] above:

…Wrong, spacetime is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). …

copy of a comment

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“I don’t think it will be productive for either of us. If you want to learn something, take your favorite metric (which can be a solution to GR with or without a non-zero T_{\mu \nu}, you choose), expand it around any non-singular point, and you will discover it is indeed locally flat (locally flat doesn’t mean flat everywhere – it means flat spacetime is a good approximation to it close to any given point). Or if you are more geometrically inclined, read about tangent spaces to manifolds – or just think about using straight tangent lines to approximate a small part of a curvy line, and you’ll get the idea.” – anonymous

Anonymous, even if you take all the matter and energy out of the universe in order to avoid curvature and make it flat, you don’t end up with flat spacetime because spacetime disappears itself, in the mainstream picture.

You can’t generally say that on small scales spacetime is flat, because that depends how far you are from matter.

Your analogy of magnifying the edge of a circle until it looks straight, as an example of flat spacetime emerging from curvature as you go to smaller scales, is wrong: on smaller scales gravitation is stronger and curvature is greater. This is precisely the cause of the chaos of spacetime on small distance scales, which prevents general relativity from working as you approach the Planck-scale distance!

In quantum field theory, as you go down to smaller and smaller size scales, far from spacetime getting smoother as in your example, it gets more chaotic:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Anonymous, your argument about flat spacetime on small scales requires putting a uniform matter distribution into T_ab, which is the sort of false approximation that leads to misunderstandings.

Mass and energy are quantized; they occur in lumps. They’re not continuous, and the error you are implying is the statistical one of averaging out the discontinuities in T_ab, and then falsely claiming that the smooth result on small scales proves spacetime is flat on small scales.

No, it isn’t. It’s quantized. It’s just amazing how much rubbish comes out of people who don’t understand physically that a statistical average isn’t proof that things are continuous. As an analogy, children come in integers, and the fact that you get 2.5 kids per family as an average (or whatever the figure is) doesn’t disprove the quantization.

You can’t argue that a household can have any fractional number of children, because the mean for a large number of households is a fraction.
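The analogy in numbers (made-up family sizes, purely for illustration):

```python
# Family sizes are integers, yet the mean is fractional; a fractional mean
# does not imply that any family actually has 2.5 children.
families = [1, 2, 3, 2, 4, 2, 3, 3, 2, 3]   # hypothetical data
mean = sum(families) / len(families)

print(f"Mean children per family: {mean}")        # 2.5
print(all(isinstance(n, int) for n in families))  # True: every sample is an integer
```

The same statistical trap applies to averaging a lumpy mass distribution into a smooth T_ab.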

Similarly, if you put an average into T_ab as an approximation, assuming that the source of gravity is of uniform density, you’re putting in an assumption that doesn’t hold on small scales, only on large scales. You can’t therefore claim that locally spacetime is flat; that contradicts what we know about the quantization of mass and energy. Only on large scales is it flat.

Any readers who want links to other online material on quantum field theory can find a discussion and links at http://quantumfieldtheory.org/

When, or should I say; How does anyone get you people to Tell the Truth? It like the Dying Man’s Last Cry; “If you kiss my…, I’ll let you save my Life!

In other words, everything is wrong… Biology, Chemistry, Mathematics, and Physics… there aren’t any Physicists who actually know or understand Physics.

In fact; Ask any Physicist if ‘F = Ma’ is true… and Yet, you people have Nuclear Energy and Bombs… all of which was discovered by accident… Killing everyone, for what, a few dollars, a News Story, and perhaps, the Nobel Prize.

When you’ve learned physics, then you should post the information… But right now… you do not have a clue!

“In other words, everything is wrong… Biology, Chemistry, Mathematics, and Physics… there aren’t any Physicists who actually know or understand Physics.” – e. terrell

It’s not “wrong” so much as incomplete; and in the case of chemistry and physics there is a lot of dishonesty about what electrons actually do on small distance scales, say inside the atom, or when a light photon hits a double slit where the distance between the two slits is on the order of the transverse wavelength of the photon.

Feynman is honest enough to explain that chaotic behaviour of real particles on small distance scales is due to virtual particles (the quantum field) of the vacuum, which jostle and jiggle the real particles just as small dust particles of less than a critical size (about 5 microns diameter for dust grains) undergo Brownian motion in air due to frequent high speed (~500 metres/second) impacts of (unseen) air molecules:

‘… But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

- R. P. Feynman, QED, Penguin, 1990, pp. 84-5.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

Now let’s return to your comment:

“When you’ve learned physics, then you should post the information… But right now… you do not have a clue!” – e. terrell

Wrong. I have studied this and I do have clues and solid calculations which predict things usefully, unlike the mainstream M-theory failure. So you are shooting the messenger.

You are also wrong about the nuclear bomb and nuclear energy “killing everyone”. See my blog http://glasstone.blogspot.com/ particularly the updates in the comments to the most recent posts.

For example, the comment about Hiroshima and Nagasaki firestorm and fallout effects propaganda:

http://glasstone.blogspot.com/2007/03/above-3.html

http://glasstone.blogspot.com/2006/04/ignition-of-fires-by-thermal-radiation.html

http://glasstone.blogspot.com/2006/03/fires-from-nuclear-explosions.html

Nuclear weapons ended WWII because they caused Russia to declare war on Japan (Russia already knew about America’s atomic bombs, due to spies like Dr Fuchs and the Rosenbergs) and Russia feared it might otherwise lose out on being on the official list of victors in the war against Japan. Japan had been hoping Russia would mediate a peaceful settlement with America; so as soon as Russia declared war, Japan’s leaders had to give up hope that Russia would mediate a settlement on their behalf, and instead Japan offered conditional surrender to America.

It’s very likely that a lot of lives were saved, because more people were being killed in the firestorms from incendiary raids, such as that on Tokyo, than by the nuclear attacks.

In addition, nuclear deterrence successfully prevented WWIII. I do think that you are exaggerating when you claim that nuclear weapons can kill everybody, when even the 200-teraton (200*10^12 tons of TNT equivalent, i.e., 200 million-million tons or 200 million megatons) explosion of the KT event 65 million years ago failed to kill off all life on earth!

(This figure comes from David W. Hughes, “The approximate ratios between the diameters of terrestrial impact craters and the causative incident asteroids”, Monthly Notices of the Royal Astronomical Society, Vol. 338, Issue 4, pp. 999-1003, February 2003. The KT boundary impact energy was 200,000,000 megatons of TNT equivalent, for the 200 km diameter of the Chicxulub crater at Yucatan which marks the KT impact site. Hughes shows that the impact energy (in ergs) is E = (9.1*10^24)*(D^2.59), where D is the impact crater’s diameter in km. To convert from ergs to teratons of TNT equivalent, divide the result by the conversion factor of 4.2*10^28 ergs/teraton.)
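The arithmetic in the parenthetical above can be verified in a couple of lines (Python sketch, figures exactly as quoted from Hughes):

```python
# Hughes crater-scaling check: E (ergs) = 9.1e24 * D^2.59, with D in km.
# Convert to teratons of TNT via 4.2e28 ergs/teraton.
D = 200.0                      # Chicxulub crater diameter, km
E_ergs = 9.1e24 * D**2.59      # impact energy in ergs
E_teratons = E_ergs / 4.2e28   # ~200 teratons, matching the text

print(f"KT impact energy: {E_teratons:.0f} teratons of TNT")
```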

Most sickening of all are the propaganda and mainstream lies about “terrific” radiation effects from the bomb: http://glasstone.blogspot.com/2007/03/above-3.html

So maybe you haven’t a clue. I think what you mean to say in your comment is that some scientists are charlatans, claiming they know that the universe is a 10/11 dimensional superstring etc., when in fact they don’t have a shred of evidence, and they are just as self-duped as (or perhaps far more so than) the most arrogant religious fanatics who ever lived. If that is what you mean, fine. But please don’t go on a “shoot the messenger” rampage. Not all scientists are charlatans:

‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p187.

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, TTWP, 2006, p. 307).