Here’s the current solution to the old problem of whether Haag’s theorem prevents axiomatic proof of the self-consistency of the (essential) running charge cut-off (charge renormalization) in quantum field theory:

The argument Dr Woit was responding to was between Dr Chris Oakley and Dr Igor Khavkine. The successive terms in a path integral’s perturbative expansion each represent the magnitude of the contribution from a successively more complex Feynman diagram, which pictorially describes interactions between off-shell (virtual) particles. Virtual fermions are polarized around a real charge, absorbing energy from the field and reducing (shielding) the charge as seen from a greater distance (i.e. a distance beyond the location of the polarized pairs of virtual fermions, which extend out to the low-energy or IR cutoff, the limit for spontaneous pair production in the vacuum given by Schwinger).

There is groupthink denial about the details of this physical mechanism and about the mathematics of renormalization procedures. The fashion is wooden mathematics. Weyl in 1918 gave a flawed quantum gravity gauge theory by trying to quantize the metric of general relativity, scaling it by a complex exponential function of the electromagnetic field S, exp(iS). After Einstein pointed out that it was wrong, Schroedinger in 1922 modified Weyl’s idea into a new mathematical “eigenvalue” model of the Bohr atom, changing the scaling from the metric to the probability of a discrete energy level existing as a function of the electron’s orbital path; the periodic real-plane solutions to exp(iS) = cos S + i sin S represented the eigenvalues for “stationary states” of orbital electrons. Finally, after de Broglie’s particle-wave duality became fashionable, Schroedinger published the famous complex-plane time-dependent wave equation to which exp(iS) is a solution.

In quantum field theory, as Dirac developed it, Schroedinger’s time-dependent wave equation is supplied with a new Hamiltonian (Dirac’s spinor) to make it treat space and time the same way, to meet relativistic requirements. The new Hamiltonian, however, quantizes the field. Instead of just having one particle interacting and behaving unpredictably with no mechanism (which is what the single-wavefunction model in Schroedinger’s equation or Heisenberg’s matrix mechanics says), in quantum field theory you suddenly have a mechanism: lots of virtual particles deflecting an electron whose path action is small compared to h-bar. Each interaction between a virtual particle in the field and the electron has an amplitude and thus a wavefunction. The 2nd quantization (QFT) path integral, as Feynman points out in his book QED (1985), is now a physical sum of physical processes, so the 1st quantization (non-relativistic QM) “uncertainty principle” is “not needed” (Feynman). Uncertainty is then not a metaphysical law from the mind of Heisenberg; it is good old multipath interference, exactly the effect that causes radio interference.
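The radio-interference analogy can be made concrete with a toy two-path sum (my own sketch, with invented numbers, not Feynman’s calculation): two interfering paths with actions S1 and S2 (in units of h-bar) give the probability |exp(iS1) + exp(iS2)|^2 = 2 + 2 cos(S1 − S2), the same arithmetic as two radio signals arriving by different routes.

```python
import cmath
import math

# Two interfering paths with actions S1 and S2 (in units of h-bar).
# The resulting probability is |exp(iS1) + exp(iS2)|^2 = 2 + 2*cos(S1 - S2),
# the same arithmetic as two radio signals arriving by different routes.
def probability(S1, S2):
    amplitude = cmath.exp(1j * S1) + cmath.exp(1j * S2)
    return abs(amplitude) ** 2

S1, S2 = 0.4, 2.1
print(abs(probability(S1, S2) - (2 + 2 * math.cos(S1 - S2))) < 1e-12)  # True

print(round(probability(1.0, 1.0), 6))            # 4.0 (in phase: constructive)
print(round(probability(1.0, 1.0 + math.pi), 6))  # 0.0 (out of phase: destructive)
```

Note that even though each path’s amplitude is complex, the output probability is always a real number.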

So why exp(iS)? If we have the mechanism of multiple-path interference determining eigenvalues in 2nd quantization, why use Schroedinger’s purely ad hoc complex wave equation, whose complex Hilbert space defies a self-consistent axiomatic proof of renormalization (Haag’s theorem)? Why not accept that exp(iS) and the complex wave equation are a historical vestige? What we must replace it with is real space: exp(iS) can be replaced simply with cos S, as Feynman demonstrates graphically in his 1985 book, QED. Thinking physically (without the wooden fuzziness of “believing” in ad hoc mathematical models as a religious belief), you can see that the path integral always gives a real-plane solution: the only variable is the amplitude, not the direction of the arrow on an Argand diagram. A path integral can either add up unit-length arrows with variable directions, which is the mainstream method today using exp(iS), or you can get precisely the same result by making the arrows all point in the same direction (the real axis) but have varying lengths. The path integral is always the same so far as observation is concerned: nobody can see any non-real-plane final arrows in the laboratory. Interferences only affect amplitudes on the real plane so far as we observe them. The cross-sections and probabilities you get from the path integral are always real numbers, never containing i. If that is true, exp(iS) = cos S + i sin S can be replaced by dropping the i sin S term, to give cos S. This should have been done by Dirac and Feynman when 2nd quantization was developed. Instead, Hilbert space – despite Haag’s theorem – is a religion in quantum field theory.
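The two bookkeeping schemes can be sketched numerically (my own toy example, with an action set assumed symmetric about the stationary path, which is the condition under which the final arrow lands exactly on the real axis):

```python
import cmath
import math

# Toy "sum over histories": path actions in units of h-bar, chosen symmetric
# about the classical (stationary) action S = 0.
actions = [-1.2, -0.7, -0.3, 0.0, 0.3, 0.7, 1.2]

# Mainstream bookkeeping: unit-length arrows exp(iS), variable direction.
final_arrow = sum(cmath.exp(1j * s) for s in actions)

# Proposed bookkeeping: variable-length arrows cos(S), fixed (real) direction.
real_sum = sum(math.cos(s) for s in actions)

# For this symmetric set the i*sin(S) contributions cancel pairwise, so the
# final arrow is real and the two prescriptions agree:
print(abs(final_arrow.imag) < 1e-12)             # True
print(abs(final_arrow.real - real_sum) < 1e-12)  # True
```

For an asymmetric set of actions the final arrow acquires an imaginary part, so the numerical equivalence shown here holds only under the symmetry assumption stated in the comments.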

**Update (15 February 2012): relevant extracts from an email on this subject to Dr Mario Rabinowitz**

From: Nige Cook

To: Mario Rabinowitz

Sent: Wednesday, February 15, 2012 5:15 PM

… “As you may recall I think that existing QM and GR are presently mutually incompatible, being a deterrent to a consistent theory of QG.”

Years ago, I read your excellent paper, “Deterrents to a theory of quantum gravity”, which is very helpful and provides some vital insights. Your approach defines QM by the mainstream Schroedinger 1926 equation of 1st quantization:

i * {h-bar} * d{Psi}/dt = H * {Psi}

Any equation of this form (where the rate of change of a variable, Psi, is directly proportional to Psi) will have an exponential solution, i.e.

{Psi}_t = {Psi}_0 exp(-iHt/{h-bar}).
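As a quick numerical sanity check of the exponential solution (a sketch of my own, for the toy case where H is just a real number, i.e. a single energy eigenvalue, rather than an operator):

```python
import cmath

# Scalar toy version of i*h_bar*dPsi/dt = H*Psi, with H a real number
# (a single energy eigenvalue) rather than an operator.
h_bar = 1.0
H = 2.5            # hypothetical eigenvalue, chosen for the demonstration
psi0 = 1.0 + 0.0j

def psi(t):
    # the exponential solution Psi(t) = Psi(0) * exp(-i*H*t/h_bar)
    return psi0 * cmath.exp(-1j * H * t / h_bar)

# Central-difference check that the solution satisfies the equation:
t, dt = 0.8, 1e-6
lhs = 1j * h_bar * (psi(t + dt) - psi(t - dt)) / (2 * dt)  # i*h_bar*dPsi/dt
rhs = H * psi(t)
print(abs(lhs - rhs) < 1e-6)                 # True

# The phase rotates but the amplitude is constant: |Psi(t)| = |Psi(0)|.
print(abs(abs(psi(t)) - abs(psi0)) < 1e-12)  # True
```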

This is what Dirac came up with in 1933. I’m sure you’re well aware of the mathematics, but maybe the history and the physical interpretation are less familiar:

1. Weyl came up with the complex exponent, exp(iX), in 1918 as a multiplying factor for the metric of general relativity. This quantized the metric, the original gauge theory of quantum gravity (references are in my paper). Weyl’s factor X was a function of the electromagnetic field, so he claimed to unify electromagnetism and gravity in his theory. Einstein pointed out that Weyl’s 1918 theory contradicted observed data (e.g. line spectra from stars with strong gravitational fields).

2. In 1922, Schroedinger reapplied Weyl’s exp(iX) factor to model the quantized electron energy levels in the atom (the references are in my paper): exp(iX) is a cyclic function on an Argand diagram (complex plane). Schroedinger’s 1922 paper defined the periodic real-plane solutions of exp(iX) = cos X + i sin X as the observed electron states corresponding to line spectra, so that exp(iX) was unity (probability of finding the electron = 1) for “real” (observable) electron states.

This was a brilliant application of mathematical intuition to “explain” why spectral lines are quantized: the electron is in some sense in a complex plane (unobservable) when in between discrete energy levels.

3. In 1926, after being asked to give a lecture on de Broglie’s wave-particle duality, Schroedinger presented his famous reverse-engineered wave equation, to which his 1922 paper’s probability = exp(iX) is the solution. (Feynman claimed in his Lectures on Physics that the wave equation was a guess which came out of the “mind of Schroedinger”. It actually came out of Weyl’s 1918 gauge theory, but was changed by Schroedinger from scaling the gravitational metric to scaling the wavefunction.)

4. In 1928, Dirac had to change the non-relativistic Hamiltonian to an SU(2) matrix-type spinor in order to make the Schroedinger theory relativistic. Dirac found that this quantizes the field (2nd quantization).

5. In 1933, Dirac suggested following the wavefunction over a path by {Psi}_t = {Psi}_0 exp(-iHt/{h-bar}). This is really a circular argument physically, since it is what Schroedinger did in his 1922 paper.

My argument is that the amplitude exp(iHt) or its equivalent in the path integral for least action, exp(iS), is only necessary in 1st quantization quantum mechanics where you have a single wavefunction. In this case, you have to rely on the complex conjugate to quantize phenomena.

In 2nd quantization, you have more than one wavefunction (the path integral has one wavefunction for every path). All the many wavefunctions interfere to produce probabilities. For classical situations (path actions minimal compared to h-bar), exp(iS) ~ exp(i*0) ~ 1, so the classical path is roughly 100% likely (and all non-classical paths make trivial contributions).
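That classical-limit claim can be sketched numerically (my own toy, with invented numbers): paths whose actions are tiny compared with h-bar all contribute with phase ~ exp(i*0) = 1 and add coherently, while paths with large, widely spread actions interfere destructively.

```python
import cmath
import random

random.seed(0)
N = 10_000

# Near-classical paths: actions tiny compared with h-bar, so exp(iS) ~ 1.
small_actions = [random.uniform(-0.01, 0.01) for _ in range(N)]
coherent = abs(sum(cmath.exp(1j * s) for s in small_actions))

# Far-from-classical paths: actions large and widely spread compared with h-bar.
large_actions = [random.uniform(-1000.0, 1000.0) for _ in range(N)]
incoherent = abs(sum(cmath.exp(1j * s) for s in large_actions))

# The coherent sum grows like N; the random-phase sum only like sqrt(N).
print(coherent / N > 0.99)   # True: contributions add in phase
print(incoherent / N < 0.1)  # True: contributions largely cancel
```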

Benefits:

1. No more complex Hilbert space, and no more complex Schroedinger wave equation. No complex space. No problems of Hilbert space in trying to reconcile quantum mechanics and gravity!

2. No more problems axiomatically in quantum field theory! Haag’s disproof of self-consistent axioms for renormalization is based on complex (Hilbert) space. Drop complex (Hilbert) space, and self-consistent renormalization is no longer a crack to be covered with renormalization group wall-paper.

3. Nobody has ever seen any need for a complex space in 2nd quantization. The path integral’s outputs are always real numbers: real-plane cross-sections and real probabilities. If quantum mechanics is defended on the basis of empiricism by Bohr and Heisenberg, why include non-observables like complex space when they are no longer needed? As Feynman states in his 1985 book QED, in the path integral probabilities arise from multipath (multiple-wavefunction) interferences, just like the old HF sky-wave radio interference from partial reflection of radio by several different (D, E and F) layers in the ionosphere. There is a physical mechanism in 2nd quantization, so you no longer need exp(iS), which is only vital to explain energy-level quantization if you have a single wavefunction (1st quantization).

Notice that if you replace the path wavefunction (amplitude) Psi = exp(iS) with Psi = cos S (obviously S is in units of h bar), what you are doing in the path integral looks a bit different graphically, but is an exact mathematical duality for all real outputs (probabilities, cross-sections).

On an Argand diagram, using exp(iS) as a path’s amplitude means that to determine the “sum over histories” (path integral) you add arrows of fixed length but variable direction for each path, and the path integral is then the resultant arrow (the sum over the histories).

This sounds as if it involves a vector result, i.e. it generates two variables: the path integral (final arrow) has both length (amplitude) and direction. However, although technically “true” in a “wooden” mathematical sense, it is contrived sophistry in a physical sense, because the direction of the vector is always zero, i.e. on the real (horizontal) axis. To repeat, the path integral only produces scalar (not complex vector) probabilities and cross-sections, since the direction of the final arrow is always real. There are only two axes on the Argand diagram: real and imaginary (complex). The fact that the path integral is always on the real axis allows us to replace exp(iS) with cos S (using Euler’s identity), without loss of information. We’re not breaking any mathematical rules by “replacing a vector with a scalar”. It’s a scalar at output anyway.

In other words, the only true variable in every experimentally checked and confirmed path integral is the amplitude of a path along the real axis, i.e. cos S. So forget exp(iS), it’s unnecessary in 2nd quantization where we’re calculating real numbers like real probabilities and real cross-sections.

I’m arguing that the whole of 1st quantization is rendered obsolete by 2nd quantization, and that in view of Haag’s theorem, to achieve self-consistent axiomatic renormalization we need to dump the Weyl/Schroedinger/Dirac/Feynman exp(iS) wavefunction amplitude and move over to using cos S as its replacement.

This ends all the doublethink and mathematical duplicity that have held up the development of quantum field theory for the past 80 years. Each path now has no complex vector, just the scalar amplitude cos S. The path integral produces precisely the same checkable cross-sections and probabilities with cos S as with exp(iS). However, we are now dealing with real spacetime, not complex space, so the mathematical barriers to axiomatic progress and unification with gravity are eliminated.

The drawback is that there is a great deal of “genius” invested in exp(iS) and the complex Schroedinger equation, and we can expect a great deal of hostility to progress by replacing exp(iS) with cos S. Mathematical geeks (Pythagorean cult worshippers) like Ed Witten will not find my humble suggestion praiseworthy, but destructive to educational syllabuses, existing textbooks, and the confusion of students. It would make physics less arcane, less mysterious, less attractive to B-grade pure mathematics students. You would get more technician-calibre Michael Faradays getting into the ivory towers and upsetting the status quo by making discoveries “out of turn”. Physics might start making some real, revolutionary progress again, like it did in the 1920s.* Tragic for the old guard.

_____

*Politically incorrect footnote: a tragedy nearly occurred with Oliver Heaviside, who turned Maxwell’s differential equations into vector calculus without bringing any kudos to Oxbridge (or any academia), but fortunately Sir William Preece had Heaviside censored out when Heaviside started including in published papers sarcastic “bitter” jokes at the expense of perplexed leading Oxbridge educated academia. (Here’s a beautiful specific example from a published article of Heaviside, reprinted in his book Electromagnetic Theory, vol 1, 1893, p337: “*Internal obstruction and superficial construction* … If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday. Beetles could do that. Besides, the old practitioner [any so-called “professional” scientist in general as well] is apt to measure the value of science by the number of dollars he thinks it is likely to bring into his pocket, and if he does not see the dollars, he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaky manner [plagiarism], not unmixed with bluster, and make believe he knew about it when he was a little boy! He sees a prospect of dollars in the distance, that is the reason. The perfect obstructor [“peer”-review bias] having failed, try the perfect conductor. … Prof. Tait [the famed quaternionic hyper] says he cannot understand my vectors, though he can understand much harder things. 
But men who have no quaternionic prejudices can understand them, and do.”) As another example, Dirac studied electrical engineering at Bristol University (which also taught bricklaying and shoemaking) before coming up with the Dirac spinor (the foundation of quantum field theory), but despite his arguments with Heisenberg over whether 1st quantization QM was a subject “closed” for all time or not, at least he was politically correct enough to end up Lucasian Professor of Mathematics at Cambridge from 1932 to 1969. The brilliant new groupthink ideology is to encourage a diversity of ideas in physics by eliminating anybody who doesn’t think within the (existing flawed status quo) box. The elimination technique is based on mathematical sophistry. If you accept superfluous unobservables and use them to hold back progress, all is well.

**Further discussion:**

From: Nige Cook

To: Mario Rabinowitz

Sent: Wednesday, February 15, 2012 8:28 PM

Subject: Re: I was just skimming your paper, “U(1) × SU(2) × SU(3) quantum gravity successes.”

… My approach is that the Schroedinger equation is misleading because it only has a single wavefunction and was an ad hoc model formulated before the path integral. People cling on to vestiges long after the reason for them has disappeared. In 2nd quantization you don’t need exp(iS), because cos S does the same job, better, avoiding Hilbert space (Haag’s theorem). If you accept the necessity for a path integral, then each path has a separate wavefunction, and as Feynman explains in QED (his lucid 1985 book), multipath interference between many wavefunctions – one for each path – produces all indeterminacy. There is no intrinsic indeterminacy. All indeterminacy is due to multipath interference. Keeping 1st quantization vestiges in place after 2nd quantization had made them unnecessary obfuscations is like Copernicus’s attempt to retain epicycles in the solar system: it is a half-baked mainstream theory.
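A sketch of that “indeterminacy as multipath interference” claim (my own toy numbers): sum many unit-amplitude wavefunctions with random phases, one per path, and the received “signal power” fluctuates between deep fades and strong peaks, just like multipath fading of an HF sky-wave signal.

```python
import cmath
import math
import random

random.seed(1)

# One unit-amplitude wavefunction per path, each with a random phase;
# the observed power is |sum of amplitudes|^2.
def received_power(n_paths):
    amplitude = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                    for _ in range(n_paths))
    return abs(amplitude) ** 2

powers = [received_power(5) for _ in range(2000)]

# Deterministic arithmetic per path, yet the interference produces a wide,
# apparently "random" spread of outcomes (fades and peaks):
print(min(powers) < 0.5)   # True: deep fades occur
print(max(powers) > 10.0)  # True: strong constructive peaks occur
```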

Love is an ex-USAF pilot with a maths PhD, and he emailed me a paper called “Towards an Einsteinian Quantum Theory”, which tries to replace the Standard Model; however, he doesn’t seem to find any problem with U(1) electrodynamics, and just replaces the SU(2) and SU(3) weak and strong gauge group symmetries.

My approach is the opposite. There is an enormous amount of evidence for SU(2) weak and SU(3) strong symmetries. The problem, I find, is U(1) electrodynamics, which is really a disguised SU(2) Yang-Mills symmetry. You can see the SU(2) nature of electrodynamics both in Dirac’s SU(2) spinor of relativistic QED, and in the asymmetry in Maxwell’s vector calculus equations: div.B = 0 is not matched for electricity, where div.E = charge density divided by permittivity. It really seems that magnetic fields are not a U(1) symmetry but an SU(2) symmetry, deriving from spin. This lack of magnetic monopoles is an asymmetry between electricity and magnetism, analogous to the left-handed asymmetry (parity violation) in the weak interaction when electromagnetism is represented by a massless-boson SU(2) symmetry. Weyl actually predicted in 1929 that Dirac’s spinor (Weyl’s spinor) breaks parity in electromagnetic interactions, although he didn’t interpret this physically as the lack of magnetic monopoles in Maxwell’s equations, and Pauli dismissed it. Parity violation was only confirmed, in weak interactions (beta decay), in the late 1950s, and nobody bothered to see if electromagnetism could be derived from SU(2) Yang-Mills with massless gauge bosons.
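The div.B = 0 half of that asymmetry is easy to verify numerically for a spin-type (dipole) source; a minimal sketch of my own, using a unit magnetic dipole along z in arbitrary units:

```python
# Numerical check that div B = 0 for a magnetic dipole field (no monopole
# term), in contrast to div E = charge density / permittivity for a charge.
def B(x, y, z, m=(0.0, 0.0, 1.0)):
    # field of a point dipole m at the origin: B = 3(m.r)r/r^5 - m/r^3
    r = (x * x + y * y + z * z) ** 0.5
    mdotr = m[0] * x + m[1] * y + m[2] * z
    return tuple(3.0 * mdotr * c / r**5 - mc / r**3
                 for c, mc in zip((x, y, z), m))

def divergence(field, p, h=1e-5):
    # central-difference divergence of a vector field at point p
    total = 0.0
    for i in range(3):
        plus = list(p); plus[i] += h
        minus = list(p); minus[i] -= h
        total += (field(*plus)[i] - field(*minus)[i]) / (2.0 * h)
    return total

# At any point away from the origin, the divergence vanishes: no magnetic "charge".
print(abs(divergence(B, (0.7, -0.4, 1.1))) < 1e-6)  # True
```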

Instead of unifying electromagnetism and weak interactions by treating electromagnetism as an SU(2) Yang-Mills theory which reduces to an asymmetric U(1) Maxwell theory due to the massless bosons of electromagnetism (which prevent the charge-transfer term in the Yang-Mills equation from operating), the simplistic mainstream (wooden mathematics) approach has been to “predict (non-observed) magnetic monopoles”, and, despite failing to discover magnetic monopoles in searches, to continue looking and hyping the “prediction” (analogous to the politically convenient “search” for cosmic strings). Maxwell’s original 1861 paper, “On Physical Lines of Force”, as quoted in my paper, argues that magnetic fields are just the angular momentum of field-quanta spin. Maxwell used vacuum vortices, not field quanta, which did not arise until QED was developed to the stage of Moller scattering theory with virtual photon exchange. The virtual photons convey magnetic fields by spin angular momentum.

**Update (1 March 2012)**

Mathematician Dr Marni Sheppeard has closed her Arcadian Pseudofunctor blog, in a post commenting (sarcastically) that the scientific conference disclaimer “In particular, no **bona fide** scientist will be excluded from participation on the grounds of national origin, nationality, or political considerations unrelated to science” is “cute”. “Hubris” is perhaps the best word for the censorship of politically incorrect nascent science by elite greasy-pole-climbing geniuses who use what they call “science” to fill their wallets. Great, I say, just be careful to give honest results in return for your wages. What is wrong is not just groupthink science or the politics of science that comes from commercializing research with fancy PR conferences, fancy brochure magazine journals, and other elitist advertising, but the corruption of fashion and orthodoxy in frontier research which gradually creeps into science *indirectly* as a result, and the labelling of the corruption as “science methodology”. Like Orwellian Big Brother politics, once you have an establishment which knows its heart is in the right place, it finds making excuses for extending the corruption very easy, just as it finds it very easy to keep making promises to discover exciting new epicycles if the taxpayer or big business stumps up ever more cash.

Eventually, this defensiveness of labelling all critics as conspiracy theorists, merely for suggesting that the existing research directions are failures being pursued because they bring in research grants from deceived sponsors, starts to look like bitter paranoia, even to Brezhnev-era jobsworths who would rather be verbally crucified by long-oppressed dissenters and critics than be disloyal to their dear Party Comrade. Another post of Dr Sheppeard’s quotes a string theorist: “Is it really the case that there are brilliant loners out there and that there is some kind of conspiracy by the physics ‘establishment’ to prevent their voices being heard?” Again, too much of this kind of defensiveness can eventually sound like bitter or paranoid hubris. By analogy, if the medical establishment is reducing suffering in return for taxpayers’ cash grants, then fine. But if it were to go off into some kind of alternative therapy for 30 years which failed to achieve any checkable evidence in that time, and then started to burn critics merely for suggesting that alternative nascent ideas exist that have been starved of funding, then the credibility and respect of the public in that medical establishment might be affected. “Weak point, shout louder” is advice that has a limited shelf-life; after that it looks like propaganda, or even the dictatorship of a band of corrupted, self-deceived geniuses.

But maybe I’m completely wrong about this. I hope so.