Propagator derivations

Peter Woit is writing a book, Quantum Theory, Groups and Representations: An Introduction, and has a PDF of the draft version linked here.  He has now come up with the slogan “Quantum Theory is Representation Theory”, after postulating “What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics”.

I’ve recently become interested in the mathematics of QFT, so I’ll just make a suggestion for Dr Woit regarding his section “42.4 The propagator”, which is incomplete (he has only the heading there on page 404 of the 11 August 2014 revision, with no text under it at all).

The propagator is the greatest part of QFT from the perspective of Feynman’s 1985 book QED: you evaluate the propagator from either the Lagrangian or the Hamiltonian, since the propagator is simply the Fourier transform of the potential energy (the interaction part of the Lagrangian provides the couplings for Feynman’s rules, not the propagator).  Fourier transforms are simply Laplace transforms with a complex number in the exponent.  The Laplace and Fourier transforms are used extensively in analogue electronics for transforming waveforms (amplitudes as a function of time) into frequency spectra (amplitudes as a function of frequency).  Taking the concept at its simplest, the Laplace transform of a constant amplitude is just the reciprocal (inverse), e.g. an amplitude pulse lasting 0.1 second corresponds to a frequency of 1/0.1 = 10 Hertz.  You can verify that from dimensional analysis.  For integration between zero and infinity, with F(f) = 1 we have:

Laplace transform, F(t) = Integral [F(f) * exp(-ft)] df

= Integral [exp(-ft)] df

= 1/t.

If we change from F(f) = 1 to F(f) = f, we now get:

Laplace transform, F(t) = Integral [f * exp(-ft)] df = 1/(t squared).
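As a quick numeric sanity check of the two results above (a Python sketch using scipy’s adaptive quadrature; my choice of tools, not part of the text):

```python
# Numeric check of the two transform results above: with F(f) = 1 the
# integral of F(f)*exp(-f*t) over f from 0 to infinity is 1/t, and with
# F(f) = f it is 1/t^2.
import numpy as np
from scipy.integrate import quad

def laplace_transform(F, t):
    """Integral of F(f) * exp(-f*t) df, for f from 0 to infinity."""
    value, _err = quad(lambda f: F(f) * np.exp(-f * t), 0, np.inf)
    return value

t = 0.1  # the 0.1-second pulse from the text
print(laplace_transform(lambda f: 1.0, t))  # ~ 1/t   = 10
print(laplace_transform(lambda f: f, t))    # ~ 1/t^2 = 100
```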

The trick of the Laplace transform is the integration property of the exponential function, i.e. its unique property of remaining unchanged by integration (because e is the base of natural logarithms), apart from a factor of the constant in its power (the part which is not a function of the variable you’re integrating over).  The Fourier transform is the same as the Laplace transform, but with a factor of “i” included in the exponential power:

Fourier transform, F(t) = Integral [F(f) * exp(-ift)] df

In quantum field theory, instead of inversely linked frequency f and time t, you have inversely linked variables like momentum p and distance x.  This comes from Heisenberg’s ubiquitous relationship, p*x = h-bar; thus p ~ 1/x.  Suppose that the potential energy of a force field is given by V = 1/x.  Note that field potential energy V is part of the Hamiltonian, and also part of the Lagrangian when given a minus sign where appropriate.  You want to convert V from position space, V = 1/x, into momentum space, i.e. to make V a function of momentum p.  The Fourier transform of the potential energy over 3-d space shows that V ~ 1/p squared.  (Since this blog isn’t very suitable for lengthy mathematics, I’ll write up a detailed discussion of this in a vixra paper soon, to accompany the one on renormalization and mass.)

What’s interesting here is that this shows that the propagator terms in Feynman’s diagrams, which, when integrated over, produce the running couplings and thus renormalization, are simply dependent on the field potential, which can be written in terms of classical Coulomb field potentials or quantum Yukawa type potentials (Coulomb field potentials with an exponential decrease included).  There are of course two types of propagator: bosonic (integer spin) and fermionic (half integer spin).  It turns out that the classical Coulomb field law gives a potential of V = 1/x which, when Fourier transformed, gives you V ~ 1/p squared; when you include a Yukawa exp(-mx) short-range attenuation or decay term, i.e. V = (1/x)exp(-mx), the static 3-d Fourier transform has denominator (p squared) + (m squared), which under the Minkowski sign convention becomes the familiar 1/[(p squared) – (m squared)], the same result that a Fourier transform of the spin-1 field propagator (boson propagator) gives using a Klein-Gordon Lagrangian.
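For a spherically symmetric potential, the 3-d Fourier transform collapses to a 1-d radial integral, so this can be checked numerically.  (Note the sign convention: the static 3-d transform of the Yukawa potential comes out as 4*pi/(p² + m²); the familiar 1/(p² − m²) propagator is the Minkowski-signature form of the same result.)  A Python sketch, with illustrative values of m and p:

```python
# Numeric check that the 3-d Fourier transform of a spherically
# symmetric potential V(r) reduces to the radial integral
#   V~(p) = (4*pi/p) * Integral[ r * V(r) * sin(p*r) ] dr,  r: 0..infinity,
# and that the Yukawa potential exp(-m*r)/r transforms to
# 4*pi/(p^2 + m^2), reducing to the Coulomb result 4*pi/p^2 as m -> 0.
import numpy as np
from scipy.integrate import quad

def radial_fourier(V, p, r_min=1e-9, r_max=400.0):
    # r_min dodges the 1/r evaluation at exactly r = 0; the omitted
    # sliver contributes negligibly for these integrands.
    value, _err = quad(lambda r: r * V(r) * np.sin(p * r),
                       r_min, r_max, limit=2000)
    return 4.0 * np.pi / p * value

m, p = 1.0, 2.0
yukawa = lambda r: np.exp(-m * r) / r
print(radial_fourier(yukawa, p))   # ~ 4*pi/(p**2 + m**2) = 4*pi/5 ~ 2.513

eps = 0.05                         # small regulator standing in for m -> 0
coulomb = lambda r: np.exp(-eps * r) / r
print(radial_fourier(coulomb, p))  # ~ 4*pi/p**2 = pi, for p = 2
```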
However, using the Dirac Lagrangian, which is basically a square-root version of the Klein-Gordon equation with Dirac’s gamma matrices to avoid losing solutions (since minus signs and complex numbers tend to disappear when you square them), you get a quite different propagator: 1/(p – m).  The squares on p and m which occur for Klein-Gordon equation propagators disappear for Dirac’s fermion (half integer spin) propagator!
So what does this tell us about the physical meaning of Dirac’s equation?  Put another way: we know that Coulomb’s law in QFT (QED more specifically) physically involves field potentials consisting of exchanged spin-1 virtual photons, which is why the Fourier transform of Coulomb’s law gives the same result as the propagator from the Klein-Gordon equation but without a mass term (Coulomb’s virtual photons are non-massive, so the electromagnetic force is infinite ranged).  But what is the equivalent of Coulomb’s law for Dirac’s spin-1/2 fermion fields?  Doing the Fourier transform in the same way but ending up with Dirac’s 1/(p – m) fermion propagator gives an interesting answer, which I’ll discuss in my forthcoming vixra paper.
Another question is this: the Higgs field and the renormalization mechanism only deal with problems of mass at high energy, i.e. UV cutoffs as discussed in detail in my previous paper.  What about the loss of mass at low energy, the IR cutoff, to prevent the coupling from running due to the presence of a mass term in the propagator?
In other words, in QED we have the issue that the running coupling’s polarizable pair production only kicks in at 1.02 MeV (the energy needed to briefly form an electron-positron pair).  Below that energy, or in weak fields beyond the classical electron radius, the coupling stops running, so the effective electronic charge is constant.  This is why there is a standard low energy electronic charge, the one measured by Millikan.  Below the IR cutoff, or at distances larger than the classical electron radius, the charge of an electron is constant and the force merely varies with the Coulomb geometric law (the spreading or divergence of field lines or field quanta over an increasing space, diluting the force, but with no additional vacuum polarization screening of charge, since this screening is limited to distances shorter than the classical electron radius or energies beyond about 1 MeV).
So how and why does the Coulomb potential suddenly change from V = 1/x beyond a classical electron radius, to V = (1/x)exp(-mx) within a classical electron radius? Consider the extract below from page 3 of
Integrating using a massless Coulomb propagator to obtain correct low energy mass
The key problem for the existing theory is very clear when looking at the integrals in Fig. 1.  Although we have an upper case Lambda symbol included as an upper limit (or high energy, i.e. UV cutoff) on the integral which includes an electron mass term, we have not included a lower integration limit (IR cutoff): this is in keeping with the shoddy mathematics of most (all?) quantum field theory textbooks, which either deliberately or carelessly cover up the key (and really interesting or enlightening) problems in the physics, by obfuscating or by getting bogged down in mathematical trivia, like a clutter of technical symbolism.  What we’re suggesting is that there is a big problem with the concept that the running coupling merely increases the “bare core” mass of a particle: this standard procedure conflates and confuses the high energy bare core mass that isn’t seen at low energy with the standard value of electron mass, which is what you observe at low energy.
In other words, we’re arguing for a significant re-interpretation of physical dogmas within the existing mathematical structure of QFT, in order to get useful predictions; nothing useful is lost by our approach, while there is everything to be gained from it.  Unfortunately, physics is now a big money-making industry in which journals and editors are used as a professional status-political-financial-fame-fashion-bigotry-enforcing-consensus-enforcing power-grasping tool, rather than an informative tool designed solely and exclusively to speed up the flow of information that is helpful to those people focused merely upon making advances in the basic science.  But that’s nothing new.  [When Mendel’s genetics were finally published after decades of censorship, his ideas had been (allegedly) plagiarized by two other sets of bigwig researchers whose papers the journal editors had more to gain from publishing than they had from publishing the original research of someone then obscure and dead!  Neat logic, don’t you agree?  Note that this statement of fact is not “bitterness”; it is just fact.  A lot of the bitterness that does arise in science comes not from the hypocrisy of journals and groupthink, but from the fact that these are censored out from discussion.  (Similarly, the Oscars are designed to bring attention to the Oscars, since the prize recipients are already famous anyway.  There is no way to escape the fact that the media in any subject, be it science or politics, deems one celebrity more worthy of publicity than the diabolical murder of millions by left wing dictators.  The reason is simply that the more “interesting” news sells more journals than the more difficult to understand problems.)]
 17 August 2014 update:

(1) The Fourier transform of the Coulomb potential (or the Fourier transform of the potential energy term in the Lagrangian or Hamiltonian) gives rest mass.

(2) Please note in particular the observation that since the Coulomb (low energy, below IR cutoff) potential’s Fourier transform gives a propagator omitting a mass term, this propagator does not contribute a logarithmic running coupling.  This lack of a running coupling at low energy is observed in classical physics for energy below about 1 MeV, where no vacuum polarization or pair production occurs, because pair production requires at least the mass of the electron and positron pair, 1.02 MeV.  The Coulomb non-mass term propagator contribution at one loop to electron mass is then non-logarithmic and simply equal to a factor like alpha times the integral (between 0 and A) of (1/k^3)d^4k = alpha * A.  As shown in the diagram, we identify this “contribution” from the Coulomb low energy propagator without a mass term to be the actual ground state mass of the electron, with the cutoff A corresponding to the neutral currents that mire down the electron charge core, causing mass, i.e. A is the mass of the uncharged Z boson of the electroweak scale (91 GeV).  If you have two one-loop diagrams, this integral becomes alpha * A squared.
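The distinction between this linear cutoff dependence and the usual logarithmic running can be checked with a toy integral.  In 4-d Euclidean momentum space d^4k = 2*pi^2 * k^3 dk, so a 1/k^3 integrand grows linearly with the cutoff A, while the usual 1/k^4 (massive propagator) case grows only logarithmically.  A sketch (the IR limit of 1 is illustrative):

```python
# Toy check of the cutoff dependence claimed above.  In 4-d Euclidean
# momentum space the measure is d^4 k = 2*pi^2 * k^3 dk, so an
# integrand ~ 1/k^3 gives a one-loop contribution growing linearly
# with the UV cutoff A, whereas the usual massive-propagator ~ 1/k^4
# integrand grows only logarithmically with A.
import numpy as np
from scipy.integrate import quad

def loop_contribution(power, cutoff, ir=1.0):
    """Integral of (1/k**power) * 2*pi^2 * k^3 dk from ir to cutoff."""
    value, _err = quad(lambda k: 2.0 * np.pi**2 * k**(3 - power), ir, cutoff)
    return value

for A in (10.0, 100.0, 1000.0):
    linear = loop_contribution(3, A)        # grows ~ A
    logarithmic = loop_contribution(4, A)   # grows ~ ln(A)
    print(A, linear, logarithmic)
```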

(3) The one loop corrections shown on page 3 to electron mass for the non-Coulomb potentials (i.e. including mass terms in the propagator integrals) can be found in many textbooks, for example equations 1.2 and 1.3 on page 8 of “Supersymmetry Demystified”. As stated in the blog post, I’m writing a further paper about propagator derivations and their importance.

If you read Feynman’s 1985 QED (not his 1965 book with Hibbs, which misleads most people about path integrals and which Zee and Distler prefer to the 1985 book), the propagator is the brains of QFT.  You can’t directly evaluate the path integral over spacetime: the Lagrangian is integrated to give the action S, and the amplitude exp(iS) is then integrated over all possible geometric paths.  So, as Feynman argues, you have to construct a perturbative expansion, each term becoming more complex and representing pictorially the physical interactions between particles.  Feynman’s point in his 1985 book is that this process essentially makes QFT simple.  The contribution from each diagram involves multiplying the charge by a propagator for each internal line and ensuring that momentum is conserved at vertices.
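The bookkeeping Feynman describes can be sketched as a toy calculation: one coupling factor per vertex, one propagator per internal line, momentum conserved at each vertex.  All the numbers below (charge, masses, momenta) and the scalar 1/(q² − m²) propagator form are illustrative only:

```python
# Toy version of the Feynman-rules bookkeeping: multiply one coupling
# factor per vertex by one propagator per internal line, with momentum
# conserved at every vertex.

def propagator(q_squared, m):
    # Klein-Gordon type factor for an internal line of mass m
    return 1.0 / (q_squared - m**2)

def tree_exchange_amplitude(e, q_squared, m):
    # simplest diagram: two vertices (factor e each), one internal line
    return e**2 * propagator(q_squared, m)

# momentum conservation at the first vertex fixes the internal momentum:
p_in, p_out = 5.0, 3.0
q = p_in - p_out
print(tree_exchange_amplitude(e=0.3, q_squared=q**2, m=1.0))  # 0.09/3 = 0.03
```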

Rank 1 quantum gravity

NC Cook paper

Above: extract from “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does Not Turn Them Into Rank-2 Spacetime Curvature”.

Above: a serious problem for unfashionable new alternative theories in a science dominated by noisy ignorant bigots.

Dr Woit has a post (here, with comments here) about the “no alternatives argument” used in science to “justify” a research project by “closing down arguments”, dismissing any possibility of an alternative direction (the political side of it arises in pure politics too).  I tried to make a few comments, but it proved impossible to defend my position without using maths of a sort which could not be typed in a comment, so I’ll place the material in this post, responding to criticisms here too:

“… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

It’s weird that Dr Peter Woit claims that this “there is no alternative so you must believe in M-theory” argument is difficult to respond to, seeing that he debunked it in his own 2002 arXiv paper “Quantum field theory and representation theory”.

In that paper he makes the specific point about the neglect of alternatives due to M-theory hype, by arguing there that a good alternative is to find a symmetry group in low dimensions that encompasses and explains better the existing features of the Standard Model.

Woit gives a specific example, showing how to use Clifford algebra to build a representation of a symmetry group that for 4 dimensional spacetime predicts the electroweak charges including left handed chiral weak interactions, which the Standard Model merely postulates.

But he also expresses admiration for Witten, whose first job was in left wing politics, working for George McGovern, a Democratic presidential nominee in 1972. In politics you brainwash yourself that your goal is a noble one, some idealistic utopia, then you lie to gain followers by promising the earth. I don’t see much difference with M-theory, where a circular argument emerges in which you must

(1) shut down alternative theories as taboo, simply because they haven’t (yet) been as well developed or hyped as string theory, and

(2) use the fact that you have effectively censored alternatives out as being somehow proof that there are “no alternatives”.

I don’t think Dr Woit is making the facts crystal clear, and he fails badly to make his own theory crystal clear in his 2002 paper where he takes the opposite approach to Witten’s hype of M-theory. Woit introduces his theory on page 51 of his paper, after a very abstruse 50 pages of advanced mathematics on group symmetry representations using Lie and Clifford algebras. The problem is that alternative ideas that address the core problems are highly mathematical and need a huge amount of careful attention and development. I believe in censorship for objectivity in physics, instead of censorship to support fashion.

” Indeed as Einstein showed, gravity is *not* a force, it is a manifestation of spacetime curvature.”

This is a pretty good example of a “no alternatives” delusion: if gravity is quantized in quantum field theory, the gravitational force will then be mediated by graviton exchange (gauge bosons), just like any Standard Model force, not spacetime curvature as it is in general relativity. Note that Einstein used rank-2 tensors for spacetime curvature to model gravitational fields because that Ricci tensor calculus was freshly minted and available in the early 20th century.

Rank-2 tensors hadn’t been developed to that stage at the time of Maxwell’s formulation of electrodynamics laws, which uses rank-1 tensors or ordinary vector calculus to model fields as bending or diverging “lines” in space. Lines in space are rank 1, spacetime distortion is rank 2. The vector potential version of Maxwell’s equations doesn’t replace field lines with spacetime curvature for electromagnetic fields, it merely generalizes the rank-1 field description of Maxwell. It’s taboo to point out that electrodynamics and general relativity arbitrarily and dogmatically use different mathematical descriptions for reasons of historical fluke, not physical utility (rank 1 equations for field lines versus rank 2 equations for spacetime curvature). Maxwell worked in a pre-tensor era, Einstein in a post-tensor era. Nobody bothered to try to replace Maxwell’s field line description of electrodynamics with a spacetime curvature description, or vice-versa to express gravitational fields in terms of field lines. It’s taboo to even suggest thinking about it! Sure there will be difficulties in doing so, but you learn about physical reality by overcoming difficulties, not by making it taboo to think about.
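To make the contrast concrete, here is the standard textbook comparison (written in Heaviside-Lorentz units with c = 1): two of Maxwell’s equations in their rank-1 vector calculus form, and the same content compressed into the rank-2 antisymmetric field strength tensor, which is built from the rank-1 potential rather than being spacetime curvature:

```latex
% Rank-1 (vector calculus) form of two of Maxwell's equations,
% and the rank-2 compressed form using the field strength tensor.
\nabla \cdot \mathbf{E} = \rho, \qquad
\nabla \times \mathbf{B} - \partial_t \mathbf{E} = \mathbf{J}
\quad\Longleftrightarrow\quad
\partial_\mu F^{\mu\nu} = J^\nu,
\qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
```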

The standard dogma is to assert that somehow just because Maxwell’s model is rank 1 and involves spin 1 gauge boson exchange when quantized as QED, general relativity involves a different spin to couple to the rank 2 tensor, spin 2 gravitons. However, since 1998 it’s been observationally clear that the cosmological acceleration implies a repulsive long range force between masses, akin to spin-1 boson exchange between similar charges (mass-energy being the gravitational charge). Now, if you take this cosmological acceleration or repulsive interaction or “dark energy” as the fundamental interaction, you can obtain general relativity’s “gravity” force (attraction) in the way the Casimir force emerges, with checkable predictions that were subsequently confirmed by observation (the dark energy predicted in 1996, observed 1998).  Hence, understanding the maths allows you to find the correct physics!

Jesper: what doesn’t make sense is your reference to Ashtekar variables, which don’t convert spacetime curvature into rank-1 equations for field lines.  What they do is introduce more obfuscation, without any increase in understanding of nature.  LQG, which resulted from Ashtekar variables, has been a failure.  The fact is, there is no mathematical description of GR in terms of field lines, and no mathematical description of QED in terms of spacetime curvature, and this for purely historical, accidental reasons!  The two different descriptions are long-held dogma, and it’s taboo to mention this.

(For a detailed technical discussion of the difference between spacetime curvature maths and Maxwell’s field lines, please see my 2013 paper “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra).

Geometrodynamics doesn’t express electrodynamics’ rank 1 field lines as spacetime curvature, any more than vortices do, or any more than Ashtekar variables can express spacetime curvature as field lines.

The point is, if you want to unify gravitation with standard model forces, you first need to express them with the same mathematical field description so you can properly understand the differences. You need both Maxwell’s equations and gravitation expressed as field lines (rank 1 tensors), or you need them both expressed as spacetime curvature (rank 2 tensors). The existing mixed description (rank 1 field lines for QED, spacetime curvature for GR) follows from historical accident and has become a hardened dogma to the extent that merely pointing out the error results in attacks of the sort you make, where you mention some other totally irrelevant description and speculatively claim that I haven’t heard of it.

The issue is not “which is the more fundamental one”.  The issue is expressing all the fundamental interactions in the *same* common field description, whatever that is, be it rank-1 or rank-2 equations.  It doesn’t matter if you choose field lines or spacetime curvature.  What does matter is that every force is expressed in a *common* field description.  The existing system expresses all SM particle interactions as rank-1 tensors and gravitation as rank-2 tensors.  Your comment ignores this, and you claim it is “personal prejudice” to choose “which fundamental theory is correct”, which “cannot be established by making dogmatic statements”.  I’m not prejudiced in favour of any particular description; I am against the confusion of mixing up different descriptions.  That mixture is what rests on dogmatic prejudice!

“Yang-Mills theory (Maxwell, QED, QCD etc.) is a theoretical framework of connections (rank 1 tensor) and curvature of connections (rank 2 tensor).”

Wrong: the rank-2 field strength tensor is not spacetime curvature, as I prove in my paper on fibre connections; see “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra.

Maxwell’s equations of electromagnetism describe three dimensional electric and magnetic field line divergence and curl (rank 1 tensors, or vector calculus), but were compressed by Einstein by including those rank-1 equations as components of rank-2 tensors by gauge fixing, as I showed there.  The SU(N) Yang-Mills equations for weak and strong interactions are simply an extension of this, adding on a quadratic term, the Lie product.  As for the connection of gauge theory to fibre bundles, as I showed in that paper, Yang merely postulates that the electromagnetic field strength tensor equals the Riemann tensor and that the Christoffel matrix equals the covariant vector potential.  These are efforts to paper over the physical distinctions between the field line description of gauge theory and the curved spacetime description of general relativity.  I go into all this in detail in that 2013 paper.
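The identification referred to above can be written side by side.  The structural analogy (a postulated dictionary, not an equality of physical content) pairs the gauge potential A with the Christoffel connection Γ, and the field strength with the Riemann tensor:

```latex
% Gauge field strength built from the potential A, compared with the
% Riemann tensor built from the Christoffel connection Gamma.
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + ig\,[A_\mu, A_\nu]
\\
R^{\rho}{}_{\sigma\mu\nu} = \partial_\mu \Gamma^{\rho}_{\nu\sigma}
  - \partial_\nu \Gamma^{\rho}_{\mu\sigma}
  + \Gamma^{\rho}_{\mu\lambda}\Gamma^{\lambda}_{\nu\sigma}
  - \Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\sigma}
```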

The fact that only ignorant responses are made to factual data also exists in all other areas of science where non-mainstream ideas have been made taboo, and where you have to fight a long war merely to get a fact reviewed without bigoted insanity or apathy.

Karl Popper’s 1935 correspondence arguments with Einstein are vital reading.  See, in particular, Einstein’s letter to Karl Popper dated 11 September 1935, published in Appendix xii to the 1959 English edition of Popper’s “Logic of Scientific Discovery”, pages 482-484.  Einstein writes in that letter that he has physical objections to the trivial arguments of Heisenberg, which are based on the single wavefunction collapse idea of non-relativistic QM.  Note that wavefunction collapse doesn’t occur at all in relativistic 2nd quantization, as expressed in Feynman’s path integrals, where multipath interference allows physical path interference processes to replace the metaphysical collapse of a single indeterminate wavefunction amplitude.  You instead integrate over many wavefunction amplitude contributions, one representing every possible path, including specifically the paths that represent physical interactions with a measuring instrument.

“I regard it as trivial that one cannot, in the range of atomic magnitudes, make predictions with any desired degree of precision … The question may be asked whether, from the point of view of today’s quantum theory, the statistical character of our experimental … statistical description of an aggregate of systems, rather than a description of one single system.  But first of all, he ought to say so clearly; and secondly, I do not believe that we shall have to be satisfied for ever with so loose and flimsy a description of nature. …

“I wish to say again that I do not believe that you are right in your thesis that it is impossible to derive statistical conclusions from a deterministic theory. Only think of classical statistical mechanics (gas theory, or the theory of Brownian movement). Example: a material point moves with constant velocity in a closed circle; I can calculate the probability of finding it at a given time within a given part of the periphery. What is essential is merely this: that I do not know the initial state, or that I do not know it precisely!” – Albert Einstein, 11 September 1935 letter to Karl Popper.


E.g., groupthink political fashion against looking at alternative explanations of facts – apart from those which are screamed by a noisy “elite” of political activists – also prevails in climate “science”: CO2 is correlated to “temperature data”, and any other correlation is banned, e.g. water vapour – a greenhouse gas which contributes far more, about 25 times more, to the greenhouse effect than CO2 – has been declining since 1948 according to NOAA measurements.  This water vapour decline is enough to cancel most of the temperature rise, CO2 having a trivial contribution owing to the negative feedback from cloud cover, which the IPCC ignored in all of its 21 over-hyped models.

water vapour fall cancels out CO2 rise

Above: NOAA data on declining humidity (non droplet water, which absorbs heat as a greenhouse gas).  Below: satellite data on Wilson cloud chamber cosmic radiation effects on cloud droplet formation and the long term heating caused by a fall in the abundance of cloud water droplets, which reflect back solar radiation into space, cooling altitudes below the clouds.

cosmic rays vs cloud cover

When the IPCC does select an “alternative” theory to discuss in a report, it is always a strawman target, a false model that they can easily debunk.  E.g. cosmic rays don’t carry any significant energy into earth’s climate, so “solar forcing” (which the IPCC analyses and correctly debunks) is a strawman target.  But we don’t need a lengthy analysis to see this.  Cosmic radiation produces a radiation dose of 1 Gray for every 1 Joule of ionizing radiation absorbed in a kilogram of matter.  The prompt lethal dose of ionizing radiation is less than 10 Grays, or 10 Joules per kg.  Therefore it’s obvious from the energy-to-radiation dose conversion factor alone that cosmic rays can’t affect the energy balance in the atmosphere, for if they could we’d be getting lethal doses of radiation.  What instead happens is a very indirect effect on climate, which produces the very opposite effect to that of the “solar forcing” which the IPCC considered.
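The arithmetic behind this dose argument can be made explicit.  The input numbers in the sketch below are my assumptions, not from the text: a sea-level cosmic-ray dose rate of roughly 0.4 mGy per year, the solar constant of about 1361 W/m², and an atmospheric column mass of about 10^4 kg/m²:

```python
# Order-of-magnitude arithmetic for the dose argument above.  Input
# numbers are assumptions: sea-level cosmic-ray dose rate ~0.4 mGy/year,
# solar constant ~1361 W/m^2, atmospheric column mass ~1.0e4 kg/m^2.
SECONDS_PER_YEAR = 3.156e7

# 1 Gy = 1 J/kg, so a dose rate in Gy/s is a power deposition in W/kg
cosmic_dose_rate = 0.4e-3 / SECONDS_PER_YEAR       # W/kg, roughly 1.3e-11
solar_power_per_kg = 1361.0 / 1.0e4                # W per kg of air column

ratio = solar_power_per_kg / cosmic_dose_rate
print(f"cosmic rays deposit ~{cosmic_dose_rate:.1e} W/kg")
print(f"sunlight delivers ~{solar_power_per_kg:.2f} W/kg of atmosphere")
print(f"solar input exceeds cosmic-ray energy input by ~{ratio:.0e}x")
```

So the direct energy delivered by cosmic rays is about ten orders of magnitude below the solar input, consistent with the point that any cosmic-ray influence on climate must be indirect.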

While solar forcing – that is to say, direct energy delivery by cosmic rays, causing climate heating – would imply that an increase in cosmic rays causes an increase in temperature, the opposite correlation occurs with the “Wilson cloud chamber mechanism”, because cosmic rays leave ionization trails around which cloud droplets condense, which cool (not heat up) the altitudes below the cloud.  This is validated by data (graphs above).  But the media sticks to considering the false “solar forcing” theory as being the only “(in)credible alternative” to the CO2-temperature correlation with no negative feedback IPCC models.  There is no media discussion of any alternative that is remotely correct.

The reason for stamping out dissent and making taboo any discussion of realistic alternative hypotheses is the hubris of dictatorship, which is similar in some ways to pseudo-democratic politics.  The claim in democratic ideology is that we have freedom of the democratic sort, but democracy in ancient Greece was a daily referendum on issues, not a vote only once in four years (i.e., 4 x 365 times fewer votes than real democracy) for an effective choice between one of two dictator parties of groupthink ideology, ostensibly different but really both joined together in an unwritten cartel agreement to maintain a fashionable status quo, even if that involves an ever increasing national debt, threats to security from fashionable disarmament ideology, and the funding of groupthink money-grabbing quack scientists who only want to award each other prizes and shut down “unorthodox” or honest research.

Anyone who points out the problems of calling this “democracy” and suggests methods for achieving actual democracy (e.g. with daily online referendums using secure databases of the sort used for online banking) is attacked falsely as being in favor of anarchy, or whatever.  In this way, no progress is possible and the status quo is maintained.  (Analogous to groupthink dictatorship in contemporary politics and science is the money-spinning law profession, as described by former law court reporter Charles Dickens in Bleak House: “The one great principle of the English law is, to make business for itself. There is no other principle distinctly, certainly, and consistently maintained through all its narrow turnings. Viewed by this light it becomes a coherent scheme, and not the monstrous maze the laity are apt to think it. Let them but once clearly perceive that its grand principle is to make business for itself at their expense…”  Notice that I’m not critical here of the status quo, but of the hypocrisy used to cover up its defects with lying deceptions.  If only people were honest about the lack of freedom and the need for censorship, that would reduce the stigma of the bigoted dictatorial coercion behind “freedom”.  As it is, we instead have a “freedom of the press” to tell lies and make facts taboo, and to endlessly proclaim falsehoods as urgent “news” in an effort to brainwash everyone.)

Dr Woit argues rightly “… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

But he states: “There is a long history and a deeply-ingrained culture that helps mathematicians figure out the difference between promising and empty speculation, and I believe this is something theoretical physicists could use to make progress.”

Well, prove it!

On March 26, 2014, The British Journal for the Philosophy of Science published a paper by philosophers Richard Dawid, Stephan Hartmann, and Jan Sprenger, “The No Alternatives Argument”:

“Scientific theories are hard to find, and once scientists have found a theory, H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this article. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the ‘no alternatives argument’) is frequently used in science and therefore deserves a careful philosophical analysis.”  (A PDF of their draft paper is linked here.)

The problem for them is that the “no alternatives argument” is used in the popular media and popular politics to “close down discussion” of any argument as being simply taboo or heresy, if there is even a hint that it could constitute “distracting noise” that draws any attention, let alone funding, away from the mainstream bigots and the mainstream hubris.  This is well described by Irving Janis in his treatment of “groupthink”, which shows that collective dogma eventually fails when it resorts to direct or indirect subjective censorship of alternative viewpoints.  The whole notion of “dictatorship” being bad comes down to the banning of discussion of alternative viewpoints which turn out correct; in other words it’s not “leadership” which is the inherent problem but:

“leadership + stupid, bigoted, coercive lying about alternatives being rubbish, when the leadership hasn’t even bothered to read or properly evaluate the alternatives.”

Historically, progress of a radical form has – simply because it has had to be radical – been unorthodox, been done by unorthodox people, and been censored by the mainstream accordingly.  The argument the mainstream makes is tantamount to claiming that anyone with an alternative idea must be a wannabe dictator who should try to overthrow the existing Hitler by first joining the Nazi Party, then working up the greasy pole, and finally reasoning in a gentlemanly way with the Great Dictator.  That’s absurd, based on the history of science.  Joule, the brewer who discovered the mechanical equivalent of heat via the energy needed to stir vats of beer mechanically, did not go about trying to get his “fact” (ahem, “pet theory” to mainstream bigots) accepted by becoming a professor of mathematical physics and a journal editor.  You cannot get a “peer” reviewer to read a radical paper.  The people who did try to go down the orthodox route when they had a radical idea, like Mendel, were censored out, and their facts were eventually “re-discovered” when others deemed it useful to do so, in order to resolve a priority dispute.

Put another way, the key problem of dictatorship is that it turns paranoid, seeing enemies everywhere in merely honest criticisms and suggestions for improvements, and eliminates those: the “shoot the messenger” fallacy.  What we need is honest, not dishonest, censorship.  We need to censor out quacks, the people who “make money in return for falsehood”, and encourage objectivity.  Power corrupts, so even if you start off with an honest leader, you can end up with that leader turning into a paranoid quack.  Only by censoring in the honest interests of objectivity, rather than to protect fashion from scrutiny, criticism, and improvement, can progress be made.

Woit rejects philosopher Richard Dawid’s invocation of the “no alternatives” delusion to defend string theory from critics, by stating: “This seems to just be an argument that HEP theorists have been successful in the past, so one should believe them now …”.  Dawid uses standard “successful” obfuscation techniques, taking an obscure and poorly defined argument and making it even more abstruse with Bayesian probability theory, in which previous successes of a mainstream theory are used to argue quantitatively that it is “improbable” that an alternative theory dismissed by the mainstream will overturn it.  This invites many objections which Dawid does not discuss.  The basic problem is illustrated by Hitler, who used precisely this implicit Bayesian trust, built up from his “successes” in unethically destroying opponents, to gain and gather further support for his increasingly mainstream party.  Anyone who objected was simply reminded of Hitler’s “good record”: not just the Iron Cross first class, but his tireless struggle, etc.  The fault here is that probability theory is non-deterministic and assumes the absence of bias-causing mechanisms which control the probability.

If you want to model the failure risk of a theory, you should look at the theory itself, e.g. eugenics for Hitler or the cosmic landscape for string theory, and see whether it is scientific in the useful sense, beyond providing corrupt bigots with the power and authority to suppress more objective research which disproves it.  Instead, Dawid merely looks at the history of mainstream theory successes, ignoring the issues with the theories themselves, and simply concludes that since mainstream hubris is generally good at ignoring better ideas, it will continue to prevail.

This of course is what Bell’s inequality did when it set up a hypothesis test between equally false alternatives, including a “proof” of quantum mechanics’ viability based on the false assumption that quantum mechanics consists solely of a non-relativistic single-wavefunction amplitude for an electron (no path-integral second quantization, with an amplitude for every path).  By setting up a false default hypothesis, you can “prove” it with false logic.

For example, in 1967 Alexander Thom falsely proved by probability theory that there was a 99% probability that the ancient Britons who built Stonehenge used a “megalithic yard” of 83 cm length.  He did this by a standard two-theory comparison hypothesis test with standard probability theory: he compared the goodness of fit of two hypotheses only, excluding the real solution!  The two false hypotheses he compared were his pet theory of the 83 cm megalithic yard, and random spacing.  He showed, correctly, that if the correct solution is one of these two options (it isn’t, of course), then the data gives a 99% probability that the 83 cm megalithic yard is the correct option.

Thom’s error, and the error of all such probability theory and statistical hypothesis tests (chi-squared, Student’s t), is that they compare only one candidate hypothesis or theory with one other, i.e. you assume without any evidence or proof that the correct theory is one of the two options investigated.  The calculation then tells you the probability that the data corresponds to one of those two options.  This is fake, because in the real world there are more than just two options, or two theories, to compare.  Bell’s inequality likewise neglects path integrals, with relativistic second-quantization multipath interference causing indeterminacy, in favour of the first-quantization non-relativistic “single wavefunction collapse” metaphysics.

Similarly, in 1973 Hugh Porteous disproved Thom’s “megalithic yard” by invoking a third hypothesis: that distances were paced out.  Porteous modelled the pacing option using a normal distribution and showed it fitted the data better than Thom’s megalithic yard!  This is a warning from history about the dangers of “settling the science”, “closing down the argument”, and banning alternative ideas!
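The two-versus-three hypothesis trap is easy to demonstrate numerically.  The sketch below is a minimal stdlib-Python simulation, not Thom’s actual survey data: the site counts, step-length spread and distance ranges are invented for illustration.  It generates monument distances by pacing (Porteous’s model), then shows that a test comparing only the “exact yardstick” and “random spacing” hypotheses would wrongly favour the yardstick:

```python
import random

random.seed(42)
YARD = 0.83  # Thom's conjectured "megalithic yard", in metres

def paced_distances(n_sites, mean_step=0.83, sd_step=0.04):
    """Distances laid out by pacing: k steps per site, each step drawn
    independently, so the total accumulates a random error."""
    out = []
    for _ in range(n_sites):
        k = random.randint(3, 12)
        out.append(sum(random.gauss(mean_step, sd_step) for _ in range(k)))
    return out

def random_distances(n_sites):
    """The rival 'no unit at all' hypothesis: uniformly random spacings."""
    return [random.uniform(2.0, 10.0) for _ in range(n_sites)]

def quantum_residual(ds, quantum=YARD):
    """Mean squared distance to the nearest multiple of the quantum;
    small = good fit to the 'exact yardstick' hypothesis."""
    folded = []
    for d in ds:
        r = d % quantum
        folded.append(min(r, quantum - r) ** 2)
    return sum(folded) / len(folded)

paced = paced_distances(500)   # data actually generated by pacing
rand = random_distances(500)

# A two-way test (yardstick vs. random) sees the paced data cluster
# near multiples of 0.83 m, so the yardstick "wins" -- even though
# no yardstick was ever used to lay the distances out.
print("paced data, yardstick misfit:", quantum_residual(paced))
print("random data, yardstick misfit:", quantum_residual(rand))
```

Run as-is, the pacing-generated data fits the yardstick hypothesis far better than random spacing does, which is exactly the false “confirmation” Thom obtained; only adding the third (pacing) model to the comparison, as Porteous did, exposes the mistake.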

Conjectured theory SO(6) = SO(4) x SO(2) = SU(2) x SU(2) x U(1)

Conjectured electroweak/dark energy/gravity symmetry theory:

SO(6) → SO(4) x SO(2)
→ SU(2) x SU(2) x U(1)

(Here “→” denotes symmetry breaking to a subgroup: SO(4) x SO(2) is a subgroup of SO(6), not isomorphic to it, while SO(4) is locally isomorphic to SU(2) x SU(2) and SO(2) to U(1).)

If this is true, the Standard Model should be replaced by SU(3) x SO(6), or maybe just SO(6) if SO(6) breaks down two ways: once as shown above, and also as in the old Georgi-Glashow SU(5) grand unified theory (given below), where SO(6) is locally isomorphic to SU(4) (its double cover is Spin(6) ≅ SU(4)), which contains the strong force’s color charge symmetry, SU(3).  (See also Table 10.2 in the introduction to group theory for physicists, linked here.)

Why do we want SO(6)? Answer: Lunsford shows that SO(3,3), a non-compact real form of SO(6), unifies gravitation and electrodynamics in 6 dimensions.

SO(4) = SU(2) x SU(2) is well known as a mathematical isomorphism (see previous post) as is the fact that SO(2) = U(1).
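These isomorphisms, and the breaking chains below, can be sanity-checked by counting Lie algebra generators, since dim SO(n) = n(n-1)/2 and dim SU(n) = n^2 - 1.  A minimal Python check (plain arithmetic, no libraries):

```python
# Generator (Lie-algebra dimension) counting for the groups discussed.
def dim_so(n): return n * (n - 1) // 2   # dim SO(n) = n(n-1)/2
def dim_su(n): return n * n - 1          # dim SU(n) = n^2 - 1
dim_u1 = 1                               # U(1) has a single generator

# Local isomorphisms: the dimensions must match exactly.
assert dim_so(2) == dim_u1               # SO(2) ~ U(1): 1 = 1
assert dim_so(3) == dim_su(2)            # SO(3) ~ SU(2): 3 = 3
assert dim_so(4) == 2 * dim_su(2)        # SO(4) ~ SU(2) x SU(2): 6 = 3 + 3
assert dim_so(6) == dim_su(4)            # Spin(6) ~ SU(4): 15 = 15

# Symmetry *breaking* to a subgroup: the dimension strictly drops, so
# SO(6) -> SO(4) x SO(2) and SO(10) -> SO(6) x SO(4) are embeddings,
# not isomorphisms.
assert dim_so(4) + dim_so(2) < dim_so(6)   # 6 + 1 = 7 < 15
assert dim_so(6) + dim_so(4) < dim_so(10)  # 15 + 6 = 21 < 45
print("all generator-counting checks pass")
```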

In olden times (circa 1975-84) the media was saturated with the (wrong) prediction of proton decay via the (now long failed) grand unified theory of SU(5), which embeds in SO(10). The idea was to break SO(10) down into SO(6) x SO(4), and from there one of the steps, namely the reduction SU(2, Right) → U(1, Hypercharge) (based on the fact that the weak force is left-handed, so the Yang-Mills SU(2) model reduces to a single-element U(1) theory for right-handed spinors), may be of use to us for recycling purposes (to produce a better theory):

SO(10)
→ SO(6) x SO(4)
= SU(4) x SU(2, Left) x SU(2, Right)
→ SU(3) x SU(2, Left) x U(1)

Well, maybe we don’t need the reduction of SU(4) to SU(3), but we do want to consider the symmetry breakdown of SO(6), because Lunsford found that group useful:

SO(6)
→ SO(4) x SO(2)
= SU(2, Left) x SU(2, Right) x U(1, Dark energy/gravity)
→ SU(2, Left) x U(1, Hypercharge) x U(1, Dark energy/gravity)

This is pretty neat because it also fits in with Woit’s conjecture, which shows how to obtain the normal electroweak sector charges with their handedness (chiral) features by using a correspondence between the vacuum charge vector and Clifford algebra to represent SO(4), whose U(2) subgroup contains the 2 x 2 = 4 particles in one generation of Standard Model quarks or leptons, together with their correct Standard Model charges; for details see pages 13-17 together with page 51 of Woit’s 2002 paper, QFT and Representation Theory.

(It’s abstract, but when you think about it, you’re just using a consistent representation theory to select the 4 elements of the U(2) matrix from the 16 of SO(4). Most of the technical trivia in the paper is superfluous to the key example we’re interested in, which occurs in the table on page 51. Likewise, when you compare the elements of the three 2×2 Pauli matrices of SU(2) with the eight 3×3 Gell-Mann matrices of SU(3), you can see that the first three of the SU(3) matrices simply contain the three SU(2) matrices as their upper-left 2×2 blocks. In other words, you can pictorially see what’s going on if you write out the matrices and circle those which correspond to one another.)
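To make that correspondence concrete, here is a minimal stdlib-Python check (nested lists stand in for matrices) that each of the first three Gell-Mann matrices is just the matching Pauli matrix padded with a third row and column of zeros:

```python
# Pauli matrices: the three generators of SU(2).
sigma = [
    [[0, 1], [1, 0]],                      # sigma_x
    [[0, -1j], [1j, 0]],                   # sigma_y
    [[1, 0], [0, -1]],                     # sigma_z
]

# The first three of the eight Gell-Mann matrices of SU(3).
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],     # lambda_1
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],  # lambda_2
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],    # lambda_3
]

# Each lambda_k is sigma_k embedded in the upper-left 2x2 block,
# exactly (element by element), with zeros elsewhere.
for s, l in zip(sigma, lam):
    for i in range(2):
        for j in range(2):
            assert l[i][j] == s[i][j]
print("sigma_k sits exactly in the top-left block of lambda_k")
```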

SU(2) x SU(2) = SO(4) and the Standard Model

The Yang-Mills SU(N) equation for field strength is Maxwell’s U(1) Abelian field strength law plus a quadratic term which represents net charge transfer and contains the structure constants of the Lie algebra generators of the group.  It is interesting that the special orthogonal group in four dimensions, SO(4) (the Euclidean counterpart of the Lorentz group SO(3,1) of three dimensions of space and one of time), corresponds to two linked SU(2) groups, i.e.

SO(4) = SU(2) x SU(2),

rather than just one SU(2), as the Standard Model, U(1) x SU(2) x SU(3), would suggest.  This is one piece of “evidence” for the model proposed in, where U(1) is simply dark energy (the cosmological repulsion between masses, proved in that paper to accurately predict the observed quantum gravity coupling by a Casimir force analogy!), and SU(2) occurs in two versions.  One has massless bosons, which automatically reduces the SU(2) Yang-Mills equation to Maxwell’s by giving a physical mechanism for the Lie algebra SU(2) charge transfer term to be constrained to zero (any other value makes massless charged gauge bosons acquire infinite magnetic self-inductance if they are exchanged at an asymmetric rate that fails to cancel the magnetic field curls).  The other SU(2) is the regular one we observe, which has massive gauge bosons, giving the weak force.
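The group dependence of that quadratic term can be illustrated with a small stdlib-Python check of the su(2) structure constants: the commutator of two Pauli matrices is non-zero (proportional to the third), and it is exactly this non-Abelian commutator contribution that vanishes in the U(1) Maxwell case:

```python
# Verify [sigma_x, sigma_y] = 2i sigma_z, the su(2) commutation relation
# whose structure constants (eps_abc) enter the quadratic term of the
# Yang-Mills field strength; for Abelian U(1) this term is absent.
def matmul(a, b):
    """2x2 matrix product over nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def msub(a, b):
    """2x2 matrix difference."""
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

sigma = [
    [[0, 1], [1, 0]],       # sigma_x
    [[0, -1j], [1j, 0]],    # sigma_y
    [[1, 0], [0, -1]],      # sigma_z
]

# commutator [sigma_x, sigma_y]
comm = msub(matmul(sigma[0], sigma[1]), matmul(sigma[1], sigma[0]))
expected = [[2j * x for x in row] for row in sigma[2]]  # 2i sigma_z
assert comm == expected
print("[sigma_x, sigma_y] = 2i sigma_z confirmed")
```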

Maybe we should say, therefore, that our revision of the Standard Model is

U(1) x SU(2) x SU(2) x SU(3)

or equivalently

U(1) x SO(4) x SU(3).

As explained in, the spin structure of standard quantum mechanics is given by the SU(2) Pauli matrices.  Any SU(N) group is simply the subgroup of the unitary group U(N) consisting of those matrices with determinant +1; its Lie algebra has N² − 1 generators.  This is why SU(2) has 3 Pauli spin matrices, and why SU(3) has the 8 Gell-Mann matrices.

Now what is interesting is that this SU(2) spinor representation in quantum mechanics also arises with the Weyl spinor, which Pauli dismissed originally in 1929 for being chiral, i.e. permitting violation of parity conservation (left and right spinors having different charge or other properties).  Much to Pauli’s surprise, in 1956 it was discovered experimentally, from the spin of beta particles emitted by cobalt-60, that parity is not a true universal law (a universal law would be like the 3rd law of thermodynamics, where no exceptions exist).  Rather, parity conservation is violated at least in weak interactions, where only left-handed spinors undergo weak interactions.

Parity conservation had to be replaced by the CPT theorem, which states that to get a universally applicable conservation law involving charge, parity and time which applies to weak interactions, you must simultaneously reverse charge, parity and time for a particle.  Only this combination of three properties is conserved universally; you can’t merely reverse parity alone and expect the particle to behave the same way!  If you reverse all three values (charge, parity and time), you end up, in effect, with a left-handed spinor again (if you started with one; a right-handed spinor if you started with that), but the result is an antiparticle which is moving the opposite way in time as plotted on a Feynman diagram.  In other words, the reversals of charge and time cancel the parity reversal.
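The determinant condition is what cuts U(N) down to SU(N) and fixes the generator count at N² − 1.  A tiny stdlib-Python illustration, using the diagonal generator sigma_z (whose matrix exponential has a simple closed form because sigma_z is diagonal):

```python
import cmath

# A traceless generator exponentiates to a determinant-1 ("special")
# group element: exp(i*theta*sigma_z) = diag(e^{i theta}, e^{-i theta}).
theta = 0.7  # arbitrary rotation angle, for illustration only
u = [[cmath.exp(1j * theta), 0],
     [0, cmath.exp(-1j * theta)]]
det = u[0][0] * u[1][1] - u[0][1] * u[1][0]
assert abs(det - 1) < 1e-12   # det = e^{i theta} * e^{-i theta} = 1

# Generator counting: SU(N) is the det = +1 subgroup of U(N), so it
# has N^2 - 1 generators: 3 Pauli matrices for SU(2), 8 Gell-Mann
# matrices for SU(3).
assert 2**2 - 1 == 3
assert 3**2 - 1 == 8
print("determinant-1 and generator-count checks pass")
```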

But why did Pauli not know that Maxwell, in deriving the equations of the electromagnetic force in 1861, modelled magnetic fields as mediated by gauge bosons, implying that charges and field quanta are parity-violating (Weyl-type chiral handed) spinors?  We discuss this Maxwell 1861 spinor in, which basically amounts to the fact that Maxwell thought the handed curl of the magnetic field around an electric charge moving in space is a result of the spin of vacuum quanta which mediate the magnetic force.  Charge spin, contrary to naive first-quantization notions of wavefunction indeterminacy, is not indeterminate but takes a preferred handedness relative to the motion of the charge, thus being responsible for the preferred handedness of the magnetic field at right angles to the direction of motion of the charge (magnetic fields, according to Maxwell, are the conservation of angular momentum when spinning field quanta are exchanged by spinning charges).  Other reasons for SU(2) electromagnetism are provided in, such as the prediction of the electromagnetic field strength coupling.

Instead of the 1956 violation of parity conservation in weak interactions provoking a complete return to Maxwell’s SU(2) theory from 1861, what happened instead was a crude epicycle-type “fix” for the theory, in which U(1) continued to be used for electrodynamics despite the fact that the fermion charges of electrodynamics are spin-half particles which obey SU(2) spinor matrices, and in which the U(1) pseudo-electrodynamics (hypercharge theory) was eventually (by 1967, due to Glashow, Weinberg and Salam) joined to the SU(2) weak interaction theory by an ad hoc mixing scheme in which electric charge is given arbitrarily by the empirical Gell-Mann-Nishijima relation

electric charge = SU(2) weak isospin charge + half of U(1) hypercharge, i.e. Q = T3 + Y/2.
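As a quick check that the relation reproduces the observed charges, here is a minimal stdlib-Python table for one generation, in the common convention Q = T3 + Y/2 (the hypercharge assignments below are the standard textbook values in that convention):

```python
from fractions import Fraction as F

# (T3, Y) assignments for one Standard Model generation, in the
# convention where electric charge Q = T3 + Y/2.
particles = {
    "nu_L": (F(1, 2),  F(-1)),     # left-handed neutrino
    "e_L":  (F(-1, 2), F(-1)),     # left-handed electron
    "e_R":  (F(0),     F(-2)),     # right-handed electron (weak singlet)
    "u_L":  (F(1, 2),  F(1, 3)),   # left-handed up quark
    "d_L":  (F(-1, 2), F(1, 3)),   # left-handed down quark
    "u_R":  (F(0),     F(4, 3)),   # right-handed up quark
    "d_R":  (F(0),     F(-2, 3)),  # right-handed down quark
}

for name, (t3, y) in particles.items():
    q = t3 + y / 2
    print(f"{name:5s} T3={t3!s:5s} Y={y!s:5s} -> Q={q}")

# The right-handed states have T3 = 0 (no weak isospin), yet the
# relation still returns the correct electric charges.
assert particles["e_R"][0] + particles["e_R"][1] / 2 == F(-1)
assert particles["u_L"][0] + particles["u_L"][1] / 2 == F(2, 3)
```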

Figure 30 on page 36 of gives an alternative interpretation of the facts, more consistent with reality.

Although, as stated above, SO(4) = SU(2) x SU(2), the U(1), SU(2) and SU(3) symmetries here can be related to simple spin orthogonal groups

SO(2) ~ U(1)

SO(3) ~ SU(2)

SO(4) ~ SU(3)

(the first two are genuine local isomorphisms; the third is not an isomorphism, since SO(4) has 6 generators against SU(3)’s 8, so within this scheme it can only be an analogy).

It’s pretty tempting therefore to suggest, as we did, that the U(1), SU(2) and SU(3) groups are all spinor relations derived from the basic geometry of spacetime.  In other words, for U(1) Abelian symmetry, particles can spin alone; for SU(2) they can be paired up with parallel spin axes, and each particle in the pair can then have either symmetric or antisymmetric spin: both spinning in the same direction (0 degrees difference in spin axis directions), so that their spins add together, doubling the net angular momentum and magnetic dipole moment and creating a Bose-Einstein condensate or effective boson from two fermions; or alternatively spinning in opposite directions (180 degrees difference in spin axis directions), as in Pauli’s exclusion principle, which cancels the net magnetic dipole moment.  (Although wishy-washy anti-understanding first-quantization QM dogma insists that only one indeterminate wavefunction exists for spin direction until measured, in fact the absence of strong magnetic fields from most matter in the universe is continuously “collapsing” that “indeterminate” wavefunction into a determinate state, telling us that Pauli is right and that spins do generally pair up to cancel intrinsic magnetic moments for most matter!)  Finally, for SU(3), three particles can form a triplet in which the spin axes are all orthogonal to one another (i.e. the spin axis directions for the 3 particles are at 90 degrees to each other, one lying along each of the x, y and z directions, relative of course to one another, not to any absolute frame).  This is the color force.

Technically speaking, of course, there are other possibilities.  Woit’s 2002 arXiv paper 0206135, Quantum field theory and representation theory, conjectures on page 4 that the Standard Model can be understood in the representation theory of “some geometric structure”, and on page 51 he gives a specific suggestion: pick U(2) out of SO(4) expressed via a Spin(2n) Clifford algebra with n = 2, and this U(2) subgroup of SO(4) then has a spin representation with the correct chiral electroweak charges.  In other words, Woit suggests replacing the U(1) x SU(2) arbitrary charge structure with a properly unifying U(2) symmetry picked out of the special orthogonal group SO(4).  Woit represents SO(4) by a Spin(4) Clifford algebra element (1/2)(e_i)(e_j) which corresponds to the Lie algebra generator L_(ij):

(1/2)(e_i)(e_j) = L_(ij).

The Woit idea, of getting the chiral electroweak charges by picking out U(2) charges from SO(4), can potentially be combined with the previously mentioned suggestion of SO(4) = SU(2) x SU(2), where one effective SU(2) symmetry is electromagnetism and the other is the weak interaction.

My feeling is that there is no mystery: one day people will accept that the various spin-axis combinations needed to avoid or overcome intrinsic magnetic dipole anomalies in nature are the source of the fact that fundamental particles exist in groupings of 1, 2 or 3 particles (leptons, mesons, baryons), and that this is also the source of the U(1), SU(2) and SU(3) symmetry groups of interactions, once you look at the problems of magnetic inductance associated with the exchange of field quanta to cause fundamental forces.

Quack money making pseudophysics hype by John Gribbin, according to Peter Woit’s “Past the End of Science” article

Mathematician Peter Woit, on Not Even Wrong, points out that John Gribbin is “author of the 2009 multiverse-promotional effort In Search of the Multiverse. I don’t know how Gleiser treats this, but Gribbin emphasizes the multiverse as new progress in science… Gribbin and his multiverse mania for untestable theories provides strong ammunition for Horgan, since it’s the sort of thing he was warning about.”

In an email to me about a decade ago, author John Gribbin asked me if a theory had any confirmed falsifiable predictions.  When these were supplied, he didn’t reply and showed no further interest.  Catering to prejudice, or entering popular (media-aware) controversy, is more profitable and rewarding for the “free media” than getting into backwaters.  The rise of popular quack physics like the multiverse is an infiltration tactic in the physics lobby, a tactic first employed successfully by communist infiltrators of socialist parties.  The reason for infiltration tactics is that an honest call to turn physics into a religion or quackery is unpopular, just as an honest call for Communism leads to defeat in elections.  So proponents are “forced” into duplicity and sailing under false flags:

“In 1950 all the Communist Party’s 100 candidates were defeated, including the two Communist MPs who had sat in the 1945 Parliament.  This heightened the determination of the Communists to control the Labour Party by indirect means since they could not establish themselves in Parliament under their own name.”

- Woodrow Wyatt, What’s left of the Labour Party?, Sidgwick and Jackson, London, 1977, p43.

Wyatt, himself one of the authors of the 1947 Keep Left book, goes on to document how religious-style bigotry by the hard-left control of the Labour Party (and eventually of the British Government) was indirectly established when in 1956 the Kremlin’s Khrushchev fan, Mr Frank Cousins, became general secretary of the Transport and General Workers Union, which held the swing vote at the Labour Party Conference; in 1969 Cousins was succeeded by the even more militant, eye-to-eye-with-Brezhnev Jack Jones, leading to Britain’s strife, strikes, IMF bailout (due to national bankruptcy), winter of discontent, etc. in the 1970s.  Our point is: indirect infiltration and subversion tactics are used by fanatics to overcome direct barriers.

It’s like the Maginot Line, the French fortifications supposedly guaranteeing peace for all time by physically preventing German tanks from entering France.  The problem was, the tanks went around it.  Similarly, Nagasaki actually had bomb shelters for 70,000 which survived the nuclear explosion intact with 100% survival rate for the 400 people in them, but because it was a surprise attack, nobody took notice of the single B-29 in the sky and most people were not in the shelters.  The point we’re driving at is that if you pass a law or build a barrier, you must expect that opponents will try to seek a way around it.  In other words, you must deliberately focus on seeking out the weakest link in your defense, and strengthening it, or else the enemy will exploit it.  It’s not good enough to try to close down this argument by using propaganda which promotes the strongest links in your defense to try to stifle criticisms, or to label critics of mainstream defense propaganda as paranoid.  What usually gets dismissed as paranoid is actually often valid criticism.  To assume that the enemy will not exploit weaknesses in your defense is not anti-paranoia, but rather is insanity.

If in science you have a law saying “Law 1: Falsifiable predictions only”, and if opponents of the law can’t directly overturn it to make science a religion by “honest” (i.e. open, fairly stated) democracy means, they simply agitate to add an exception that effectively reverses the law: “Law 1, exception 1: theories that are incomplete need not make falsifiable predictions”.

Similarly, the Soviet Union dismissed critics who claimed that it was an unequal, unjust, non-communist dictatorship of hatred by claiming that once it had disposed of all its enemies like capitalists, it would then be able to become the promised utopia.  Because it was never able to achieve its aims, it had the perfect excuse to remain a fascist-type dictatorship of censorship and enforced poverty.

The propaganda level of science, driven by ruthless fanatics of quackery, makes it far exceed the threat to liberal equality that the USSR presented.  The USSR was a failed version of capitalism, pretending to be on the road to utopia and maintained by force; quack science today is far better at manipulating the media and taxpayer funding than the USSR ever was.


Inflation theory debunked by Paul Steinhardt in Nature

QG fundamentals 1

The quantum gravity theory which quantitatively predicted dark energy in 1996, and predicts the low curvature of the early universe that’s normally attributed to “inflation” speculation, also predicts the electromagnetic coupling from the same verified cross-section used in quantum gravity:

QG fundamentals

Testable alternative to inflation theory: quantum gravity theory provably flattens early spacetime curvature as observed, without introducing any epicycles; please see and other papers.

“The … truth about inflationary theory. The common view is that it is a highly predictive theory. … the inflationary paradigm is so flexible that it is immune to experimental and observational tests. First, inflation is driven by a hypothetical scalar field, the inflaton, which has properties that can be adjusted to produce effectively any outcome. Second, inflation does not end with a universe with uniform properties, but almost inevitably leads to a multiverse with an infinite number of bubbles, in which the cosmic and physical properties vary from bubble to bubble. The part of the multiverse that we observe corresponds to a piece of just one such bubble. … No experiment can rule out a theory that allows for all possible outcomes. Hence, the paradigm of inflation is unfalsifiable… Taking this into account, it is clear that the inflationary paradigm is fundamentally untestable, and hence scientifically meaningless.”

– Paul Steinhardt, Big Bang blunder bursts the multiverse bubble, Nature, 3 June 2014.

Rival for inflation theory to explain cosmology

  1. A commenter at Not Even Wrong asks the following question:
    1. Monty says:

      I watched the video from the World Science Festival. Could I ask Peter, or the commenters, what would be wrong with the following response to Steinhardt’s complaint that whatever the BICEP2 results had been, they could be made to fit with some variant of inflationary theory: yes, the observation of B-mode polarisation (let’s assume it’s not an artifact of foreground dust–this will be shown one way or the other soon in any case) cannot by itself prove inflation. But it lets us choose appropriate candidate theories from within the previous set of inflationary theories, and, more importantly for inflation-backers, it adds another item to the list of things requiring explanation for any competing theory. So now we have not only isotropy, flatness, absence of relics, and large-scale structure to explain, but we also have otherwise unexplained B-mode polarisation of the CMB. An appropriately narrowed inflation model can account for all of those things at once; that makes it correspondingly harder for an alternative theory to be equally successful. Doesn’t that make it a stronger theory than it was this time last year?

    2. My response: No, not if it’s an epicycle theory which “explains” stuff by using censorship of alternatives: “that makes it correspondingly harder for an alternative theory to be equally successful.”  The success of epicycles was that it gave vacuous ammunition to those who wanted, for subjective reasons, to ignore Aristarchus’s unpopular, unfashionable, apparently more complex system of the earth rotating and orbiting the sun.  There were various spurious (false law-based, rather than direct evidence-based) no-go theorems against Aristarchus’s solar system, but the alleged simplicity and elegance of epicycles won over in the minds of charlatans (“let’s simply have everything orbit the earth, adding epicycles to make it work!  How beautiful!  The landscape of possible models is big enough to be non-falsifiable, yippee!  Great science!”).  Of course, inflation is different, since it’s seeking to close down the scientific search for better alternatives before they’ve even emerged into public view…

    3. I will add a paper specifically concerning the quantum gravity alternative to “inflation theory” to vixra when time permits.