Electromagnetism SU(2) theory experiment

Catt and Walton’s lecture at Nottingham University on a key experiment

There are actually two different electric charges, hence SU(2) electromagnetism, not merely one charge which can travel backwards in time to reverse its sign (as Abelian U(1) QED dogma untruthfully asserts). On 9 October 2013, Ivor Catt (author of ‘Crosstalk (Noise) in Digital Systems,’ in IEEE Trans. on Elect. Comp., vol. EC-16, December 1967,  pp. 749–58), and David Walton (who after his PhD in atomic physics worked under Nobel laureate E. T. S. Walton at Trinity College) presented a new experiment to the ASL Electromagnetism Seminar at Nottingham University:

Before reviewing the physical mechanism of electromagnetism yet again, a brief comment on personal bad attitude problems in science. What’s curious about these guys is that, like all academic hotshots, they’re bigoted “elitists” camouflaged as caring, brilliant socialists, just like the sometimes abusive, reliably patronising quantum field theorist Dr Peter Woit of Columbia University’s maths department (string theory critic and author of an interesting new textbook, Quantum Theory, Groups and Representations, which will be published by Springer in 2015), or category theorist Dr Marni Dee Sheppeard: they’re all people who won’t spend a second listening or objectively responding to anybody, but just attack under a false charge, thus refusing to engage with the actual argument.  They do this by inventing a vacuous claim that anyone who has a technical argument must be ignorant of science or anti-feminist, etc., and launching into a tirade about that, completely ignoring the point.  I’ve spent countless hours talking to Catt and publishing videos (also here and here) and journal articles about his work (Electronics World feature articles in August 2002, April 2003, January 2004, plus an op-ed), but that doesn’t lead to anything but abusive shouting as soon as I make an objective criticism.  It’s the same with all egotists, inside the mainstream or not!  Simplify the 10 commandments to 1, and you are crucified.

Feynman versus Dirac

“Already in the beginning I had said that I’ll deal with single electrons, and I was going to describe this idea about a positron being an electron going backward in time, and Dirac asked, ‘Is it unitary?’ I said, ‘Let me try to explain how it works, and you can tell me whether it is unitary or not!’ I didn’t even know then what ‘unitary’ meant. So I proceeded further a bit, and Dirac repeated his question: ‘Is it unitary?’ So I finally said: ‘Is what unitary?’ Dirac said: ‘The matrix which carries you from the present to the future position.’ I said, ‘I haven’t got any matrix which carries me from the present to the future position. I go forwards and backwards in time, so I do not know what the answer to your question is.’”

– Feynman about his problem with Dirac at the 1948 Pocono Conference.  The argument was over Feynman’s pictorial diagrams for U(1) QED, a theory in which positrons (positively charged electrons) are represented as electrons travelling backwards in time, i.e. the simplistic single-charge U(1) QED theory.  Quotation source: J. Mehra and K. A. Milton, Climbing the Mountain: The Scientific Biography of Julian Schwinger, Oxford University Press, 2000, page 233.  See also the excellent discussion in that book, on pages 231-234, of how badly Feynman was treated by Bohr, Teller, Dirac and others, who instead of being constructive and helping to build and improve Feynman’s theory of multipath interference via the path integral, just preferred to try to shoot it down.  Actually, the path integral works for any symmetry group, for example SU(2) and SU(3), and not just for U(1) electrodynamics, which has the “Dirac problem” that positrons are represented as electrons going backwards in time in the Feynman diagrams.


Perhaps this excessive and damaging egotism from Catt and also from leading quantum field theorists is due to bad experiences with other people who have wasted their time with alternative ideas in the past, but that’s no excuse for taking out those frustrations on people who are offering constructive theories that are admittedly incomplete and in need of further development and wider circulation before funding.  Now, when I read a book, I try to see what the strongest arguments are, and focus on those.  That’s called objectivity.  “Critics” who ignore the substance of the argument and make a show out of inventing spurious problems are being subjective.  Emotional rants always leak into “peer” reviews, because if a “peer” reviewer abuses power when possible to reduce workload, they then tell lies about “rudeness” to escape the complaints.  So they’ll always end up, in the analysis of a radical innovation that does have evidence, claiming it is an “exceptional” case that doesn’t deserve their normal objective methods!

Anyway, the physics.  The new experimental data for the charged capacitor justifies the following interpretation of the Dirac equation’s spinor.



As the diagram above proves, the classical electromagnetism which Dirac and Weyl assumed true when building U(1) Abelian quantum electrodynamics contains an error due to a mathematical coincidence (or accident, as Catt put it in his 1995 book Electromagnetism 1): the false squaring of the superimposed electric fields for trapped Heaviside energy current causes a four-fold increase in energy density, which exactly compensates for the false assumption that the magnetic field energy disappears.  The maths of electromagnetism works, but that is provably due to a fluke, a coincidence, an accident of mere numerology.  (Catt buries or hides the key evidence amid much clutter on his internet site.)  The physics of the EM mechanism is quite different to U(1).  However, Catt himself makes an error of inconsistency in his analysis, in not applying his own “separation of fields” (so vital for him when separating the TEM wave fronts flowing into the cable from the reflected TEM wave coming back from the open end of the cable to superimpose) to the individual fields from each conductor within the transmission line!  You then get the fact that the field quanta are charged, and you also get SU(2) QED.
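The coincidence is easy to check numerically. Below is a minimal sketch (standard classical electromagnetism only, nothing specific to Catt’s interpretation; the field value E is an arbitrary illustrative assumption): for a single TEM wave the electric and magnetic energy densities are equal, so when two counter-propagating waves superpose with electric fields adding and magnetic fields cancelling, the quadrupled electric term exactly equals the sum of the two waves’ total (electric plus magnetic) energy densities.

from math import pi, sqrt

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
MU0 = 4e-7 * pi              # vacuum permeability, H/m
C = 1 / sqrt(EPS0 * MU0)     # speed of light, m/s

E = 100.0                    # V/m: arbitrary field of each travelling TEM wave
B = E / C                    # T: magnetic field of each TEM wave

# One travelling wave: electric plus magnetic energy density (the two terms are equal).
u_one_wave = 0.5 * EPS0 * E**2 + B**2 / (2 * MU0)

# Two superposed counter-propagating waves, with E adding and B cancelling:
# the electric term quadruples, the magnetic term vanishes.
u_superposed = 0.5 * EPS0 * (2 * E)**2

print(u_superposed, 2 * u_one_wave)   # identical: energy conserved by "coincidence"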

The classical Maxwell (and apparently Abelian) electromagnetic equations are derived from a simple mechanism which cuts the SU(2) Yang-Mills equations down to the Maxwell ones, by eliminating the Yang-Mills charge transfer term. Since the charged EM bosons are massless, they cannot propagate on their own: the magnetic field resulting from moving charge prevents the motion of massless charge! The ONLY possible way massless charges can move is by cancelling out their magnetic fields using a two-way equilibrium of exchange. Any two similar charges must always exchange equal charge in opposite directions each second, to cancel out the magnetic fields, eliminating infinite self-inductance. Unless this happens, there is no electromagnetic field. So this mechanism means that massless SU(2) charged field quanta can never change the amount of charge present in an onshell particle: if it emits 1 coulomb per second, it also absorbs precisely the same amount. Hence, the Yang-Mills net charge transfer term is always zero for massless field quanta, effectively turning the Yang-Mills equations into Maxwell’s in that case (massless charged field quanta). The Catt analysis of the Heaviside transmission line theory, when extended to include an examination of what’s happening in each conductor of the transmission line, shows that accelerating free electrons radiate radio-frequency energy to electrons in the other conductor, and this mutual exchange enables the propagation of electricity. The acceleration of electrons at the electrical wave front on the surface of each conductor in a transmission line is then in the opposite direction to that in the other conductor.
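A trivial sketch of that cancellation claim, using only the textbook Ampère-law field of a long straight current filament (the two-way exchange equilibrium itself is this blog’s hypothesis, not established physics; the current and distance are arbitrary illustrative values):

from math import pi

MU0 = 4e-7 * pi   # vacuum permeability, T*m/A

def b_field(current, distance):
    """Azimuthal magnetic field of a long straight filament (Ampere's law), tesla."""
    return MU0 * current / (2 * pi * distance)

# Equal charge flows in opposite directions along the same path, as in the
# hypothesized two-way exchange equilibrium: the net field, and hence the
# magnetic energy density B^2/(2*mu0), cancels exactly.
I, r = 1.0, 0.01   # amperes; metres
print(b_field(+I, r) + b_field(-I, r))   # 0.0 -> no net magnetic field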

This is the cause of all the shouting from Catt when I tried to get a discussion of this point. His response to any objective criticism or improvement suggestion is to endlessly repeat himself more loudly, precisely what the mainstream does when string theory is criticised.  (A complete waste of time, since I had already read his books.)  This paranoid and delusional approach prevents any kind of critical collaborative progress, except homage.  I’m not interested in a dead science that either makes no progress, or is dictatorially constrained to stumble or crawl along worn paths in a very few research directions, with the most obvious and vital applications blocked by rude, silly censorship.

Weyl applied Abelian U(1) gauge theory to electromagnetism, despite the fact that U(1) has only one charge, whereas electromagnetism has two (positive and negative).  There are also 2 charges in SU(2) isospin for weak forces, and 3 charges in SU(3).  Since electromagnetism has 2 charges, not one as in U(1), why not therefore use SU(2) for electromagnetism?  This is just the weak theory with a massless neutral Z and massless charged W particles.  What is “charge”?  Has anyone ever seen or measured an “electron”?  What they have measured is a trapped field with a quantized charge and mass, and named that an “electron”, much the way Pluto was once falsely classified as a planet!

(When interpretations of nature change, like naming conventions, nature remains unaltered.  Pluto didn’t suddenly change when the consensus of expert opinion downgraded it from planet to planetoid.  You cannot therefore use today’s interpretation of any fact in science as a dogmatic “fact” with which to censor out advances, which may involve new theories that re-interpret the facts differently.  Put another way, confusing facts with their interpretations is a censorship tool used by peer-consensus dogma-worshipping education, not by objective science, which tests new interpretations of facts.)

Planck scale electron cores are unobservable: nobody has seen them.  Only the fields and charge to mass ratio of “electrons” have been observed.  The sign of the charge (positive or negative) of the electron is therefore inferred from the field that is measured: the only thing observable about the “charge” of an electron is its field.  Therefore, the field quanta convey the information on whether the “charge” is positive or negative.  This can only happen if the 2 “extra polarizations” that field quanta have are charge indicators.

Field quanta have 4 polarizations, whereas uncharged photons (which don’t convey charge sign information) have only 2 observable polarizations.  The extra 2 polarizations of field quanta must therefore carry the information to the observer as to whether the electric field is due to a “charge” which is positive, or negative.  Therefore, observations of fields do indicate that massless charged electromagnetic field quanta exist, in the context of trapped light-velocity field energy in Catt’s experiment, the charge of the field quanta being denoted by the quantum numbers of the 2 additional degrees of polarization.

Dirac’s equation predicted antimatter, suggesting that protons are the anti-electrons.  J. Robert Oppenheimer rejected Dirac’s interpretation in a paper in 1930, due to the mass difference between the electron and the proton (which we explain as conversion of screened QED charge into nuclear field energy, see discussion further below).  Dirac then accepted Oppenheimer’s arguments and revised this to predict what came to be called the positron, discovered by Carl Anderson in 1932.  This positron discovery has since been used by bigots tragically to “shut down the argument”, preventing a more careful analysis.

Sure, positrons arise as anti-electrons.  But that can’t be the full story, or there’d be equal numbers of positrons and electrons around, since Dirac’s equation predicts that they are created (by pair production) in equal quantities.  So although Dirac’s simplistic assignation of the proton as the anti-electron runs into Oppenheimer’s mass-asymmetry objection, it did at least offer a solution to the question of “where is all the antimatter?”, a question which Oppenheimer himself doesn’t answer.  We argue that both Dirac and Oppenheimer were being far too simplistic, and in doing so laid down a shaky dogmatic interpretative foundation for U(1) electrodynamics which has messed up today’s electroweak theory.

SU(2) is now the electroweak group, with U(1) being dark energy, which causes gravitation by a Casimir-type pseudo-attraction mechanism (plates being pushed together by spin-1 quanta on their external sides, not by the simplistic idea of a mutual exchange of spin-2 superstring gravitons, which don’t need to exist in order for gravity to work).  Once you have a chiral SU(2) theory of electromagnetism, the handed curl of the magnetic field vector around the direction of propagation of a charge becomes analogous to the handedness of the left-handed weak force SU(2).  Woit showed neatly in 2002 that you can get the handedness of SU(2) electroweak charges by picking out U(2) as a subset of SO(4).  (However, that doesn’t mean he is interested in really being objective.)

We can’t observe Planck size charges, only their fields, and we find two different kinds of fields: positive and negative.  Positive is not merely an absence of negative charge (hole theory): an absence of negative charge is zero charge.  Sure, in a sea of electrons, a “hole” behaves analogously to positive charge.  But an empty vacuum is not positively charged.  So absence of negative charge is not automatically positive charge.  Electromagnetism then is a 2-charge SU(2) gauge theory, a massless gauge boson version of SU(2).

If antimatter is produced in equal quantities by pair-production at energies above 1.022 MeV in the big bang, then where is it?  It is in upquarks.  They have 2/3rds of the positron’s charge.  The missing 1/3 of the charge is due to the fact that electromagnetic pair production within femtometres of the core of a charge absorbs electromagnetic energy density in the polarization process, creating virtual particles which – when you have pairs (mesons) or triplets (baryons) in close proximity (sharing an overlapping vacuum polarization veil of virtual particles) – convert EM field/charge energy into nuclear (weak and/or strong) fields.  The vacuum polarization effect occurs out to a radius where the field strength is about 1.3 x 10^18 volts/metre, Julian Schwinger’s IR cutoff for the running of the electric charge in QED.  Many textbooks wrongly follow Dirac’s sea analogy, which ignores the IR cutoff and claims that pair production occurs throughout the entire vacuum.  It doesn’t occur through the whole vacuum, because only gauge bosons – not virtual fermions – occur beyond a few femtometres: all observed couplings cease to run with energy when the energy is too low for onshell particles to be created (i.e. twice the 0.511 MeV electron rest mass energy equivalent, or 1.022 MeV).
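The quoted ~1.3 x 10^18 volts/metre can be checked from the standard expression for Schwinger’s critical field for pair production, E_c = m_e^2 c^3 / (e h-bar); a quick sketch:

# Schwinger critical field: the field strength at which vacuum pair
# production becomes significant, E_c = m_e^2 * c^3 / (e * hbar).
m_e = 9.1093837e-31    # electron mass, kg
c = 2.99792458e8       # speed of light, m/s
e = 1.60217663e-19     # elementary charge, C
hbar = 1.05457182e-34  # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"{E_c:.2e} V/m")   # ~1.32e+18 V/m, the ~1.3 x 10^18 figure quoted above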

When EM fields are attenuated by vacuum polarization (causing the effective charges to run as some logarithmic function of distance and energy), electromagnetic energy density is converted into the virtual mesons, virtual quarks and gluons that constitute nuclear fields, the SU(2) weak force and the SU(3) strong force. Thus, the reduction in the apparent electric charge of the upquark from the positron’s value of +1 to the value +2/3 needed to fit observations for protons (two upquarks and one downquark) and other hadrons is explained and turned into a prediction, since we can make detailed calculations with this simple approach. If upquarks are electron antiparticles formed at very high energy, in addition to the simplistic Dirac/Oppenheimer antiparticle (the free positron which Anderson observed in 1932, produced where a >1.022 MeV gamma ray approaches a nucleus), you:

1. Explain the apparent paucity of antimatter in the universe, and

2. Have a new way to predict the weak and strong nuclear force running couplings, by combining the principle of conservation of energy with the fact that one-third of the electric field energy of the electron exists in nuclear fields around upquarks in hadrons.  Coulomb field energy (half the permittivity times the square of the electric field strength, in Joules per cubic metre) that’s converted into virtual-particle nuclear force gluon fields around upquarks in QED renormalization allows us to do simple QCD calculations of nuclear interaction running couplings from energy conservation! Genius or what?  Anyone can calculate the Coulomb field strength and energy density around an electron, and once you integrate over shells of expanding radius you get the total energy (a numerical sketch follows this list); incorporating the logarithmic running of the charge from QED then tells you how the QCD color force varies inversely, getting stronger as the QED running charge gets weaker: the sum of both is equal to that of an electron.

3. As Julian Schwinger explained in 1948, the running of couplings that causes charge renormalization in QED is accompanied by a renormalization of mass; in other words, the virtual particles created by pair production in intense fields around a quark core contribute some mass to the quark core.  In fact, most of the mass of hadrons is generated in this way.  A simple model of this allows precise predictions to be made (see http://vixra.org/abs/1408.0151, linked here).  Nobel Laureate Dr Gerardus ‘t Hooft responded that the paper was unsuitable for his Foundations of Physics “because it does not cite current peer-reviewed literature”.  (That’s a catch-22, because this is “new stuff”, with no literature and no “peers” in the field, as such. Duh!)
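As promised in point 2 above, here is a minimal numerical sketch of the energy bookkeeping (standard classical electromagnetism only; the femtometre scale is the one the text associates with the vacuum-polarization region): integrating the Coulomb energy density (epsilon_0/2)E^2 over all space beyond radius a gives U = e^2/(8 pi epsilon_0 a).

from math import pi

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
e = 1.60217663e-19         # elementary charge, C
J_PER_EV = 1.60217663e-19  # joules per electron-volt

def field_energy_outside(a):
    """Classical Coulomb field energy beyond radius a (metres), in eV:
    U = integral over r > a of (eps0/2)*E(r)^2 * 4*pi*r^2 dr = e^2/(8*pi*eps0*a)."""
    return e**2 / (8 * pi * EPS0 * a) / J_PER_EV

for a in (1e-15, 1e-12, 1e-10):   # femtometre, picometre, angstrom scales
    print(f"a = {a:.0e} m : {field_energy_outside(a):.2e} eV")
# ~7.2e5 eV (0.72 MeV) beyond a femtometre, falling as 1/a at larger radii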

(In 2006, Harvard string theorist Dr Lubos Motl knocked the nail on the head when he wrote, on Woit’s blog: “Virtually all of string theorists are nice people who never argue with anyone else, they’re not chauvinists, and most of them are feminists. Most of them also think that string/M-theory are robust twin towers that are not threatened by any social effect or passionate proponents of alternative theories or proponents of no theories, and they almost always try to avoid interactions that could lead to tension which also gives them more time for serious work. Almost no string theorists drive SUV and they produce a minimum amount of carbon dioxide.”  Dr Motl was right that they usually “never argue with anyone else”; that’s the whole problem: they’re elitists who sit on their high horses in the clouds and refuse to engage in discussions with objective critics or to participate in constructive arguments, despite all their camouflaged journals of bigotry that paint their work as being precisely the opposite of that.  I made this point in my 2011 paper by quoting the famous string theorist Ed Witten, who actually wrote to Nature instructing string theorists to deny critics the oxygen of publicity by refusing to engage in discussions.  Woit, Smolin, Catt, Her Majesty the Queen, and many others maintain prestige that way.)

Physical Review Letters’ and arXiv’s weird, egotistic, and frankly vile (not peer) “elitist” moderators proved not only to lack interest in non-standard alternative ideas beyond superstring theory that actually work (predicting the cosmological constant accurately in 1996, long before dark energy was even discovered in 1998 from supernova red-shift observations), but to display demented bigotry against an attitude of no-bullshit progress:

Nigel says: July 7, 2005 at 7:15 pm Editor of Physical Review Letters says

Sent: 02/01/03 17:47

Subject: Your_manuscript LZ8276 Cook


Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories … Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

Now, why has this nice genuine guy still not published his personally endorsed proof of what is a “currently accepted” prediction for the strength of gravity? Will he ever do so? …

Peter Woit says: July 7, 2005 at 7:27 pm I’m tempted to delete the previous comment, but am leaving it since I think that, if accurate, it is interesting to see that the editor of PRL is resorting to an indefensible argument in dealing with nonsense submitted to him (although the “…” may hide a more defensible argument). Please discuss this with the author of this comment on his weblog, not here. I’ll be deleting any further comments about this.

[Note to Dr Woit: the email correspondence went on with PRL’s associate editor for months, with them repeatedly moving the goalposts as I revised the paper to incorporate suggestions, until they simply refused to publish anything on this topic.  They thus deliberately wasted my time with lies.  The same goes for Physics Forums, on which it is heresy to engage in a serious objective campaign to make progress; trivial discussion of mainstream dogma is fine.  In the same way, serious politics campaigning is banned from the House of Commons coffee bars because it always ends in punch-ups; small talk about football or cricket results is, however, encouraged.  Freedom of speech is something that supposedly makes our democracy different to Hitler’s and Stalin’s regimes.  Except that there is a rule in small print that it must not be heard if it is heretical.  Nobody points out that Hitler and Stalin had no real problems with people shouting their praise.  It is therefore only in the treatment of heretics and outsiders that “freedom” or its absence can really be assessed.  Nobody seriously disputes that Hitler wanted everyone he was friends with to be free to praise the Nazis.  Whether freedom exists depends on whether progress is free and unopposed, or is blocked by bigots requiring tougher means.  Historian Edward Gibbon wrote controversially that education is only of use where it is practically superfluous; he would have been less controversial, I suspect, if he had written instead that DIPLOMACY is only of use where it is practically superfluous.  Diplomacy only seems to “prevent wars” and fights where the people are honest, engaging with critics and thus non-dictatorial/civilized on both sides; diplomacy fails where it is most needed, where one side is a bigoted dictator who refuses to engage in objective discussion.

One possible way to proceed would be to publish a book quoting all the errors in fashionable textbooks, debunking each, and thus ridiculing the aptitude, PhD credentials, educational background, Nobel prizes awarded, etc., behind the substandard behaviour and the confusion of facts with interpretations in QFT textbooks.  Woit’s book contains many excellent cameos but is organized in such a way that the few key understandable mechanisms in QFT are totally ignored, e.g. Feynman’s 1985 QED book explanation that the uncertainty principle arises from multipath interference (the basis of the path integral) of QED field quanta jiggling the bound electron chaotically as it orbits.  The uncertainty principle is pre-second quantization, pre-Feynman’s path integral.  Woit simply ignores this, and also the fact that vacuum polarization provides a testable, evolving calculation method and mechanism to understand and predict how couplings run and how masses of particles occur.  But to a great extent, this bigoted, anti-progress approach is used in many textbooks, which seek to misinform readers.  Woit also starts by recommending Eugene Wigner’s worst ever paper, which asserts the ignorance-based dogma, on false premises, that the universe is intrinsically non-understandable mathematics.  Of course, as the non-PhD quantum field theory professor Freeman Dyson keeps pointing out, the PhD system is pseudoscientific: an “abomination … a gross distortion of the educational process … the student is condemned to work on a single problem in order to write a thesis, for maybe 2-3 years … this straight-jacket which was imposed on the students … all the PhD students had these same constraints imposed on them which I basically disapprove of.  I just don’t like the system.  I think it is an evil system and it has ruined many lives.”  (See video of Dyson explaining this, linked here.)]

Another specious “no go theorem” test

Another specious “no go theorem” test, full of speculative and false assumptions, claims to disprove time-varying G:


“Gravitational constant appears universally constant, pulsar study suggests
“The fact that scientists can see gravity perform the same in our solar system as it does in a distant star system helps confirm that the gravitational constant truly is universal.”
By NRAO, Charlottesville, VA | Published: Friday, August 07, 2015

This is a good example of the quack direction in which mainstream “science” is going: papers taking some measurements, then using an analysis riddled with speculative assumptions to “deduce” a result that doesn’t stand up to scrutiny, but serves simply to defend speculative dogma from the only real danger to it, that people might work on alternative ideas. Like racism, the “no go theorem” uses ad hoc but consensus-appearing implicit and explicit assumptions, with a small sprinkling of factual evidence, to allow the mainstream trojan horse of orthodoxy to survive close scrutiny.

This mainstream-defending “no go theorem” game was started by Ptolemy’s false claim in 150 AD that the solar system can’t be right, because – if it were right – the earth’s surface at the equator would move at about 1,000 miles an hour, and – according to Aristotle’s laws of motion (which remained in force for over 1,400 years, until Descartes and Newton came up with rival laws of motion) – clouds would whiz by at 1,000 miles an hour and people would be thrown off the earth by that “violent” motion.

Obviously this no-go theorem was false, but the equator really does move at that speed. So there was some fact and some fiction, blended together, in Ptolemy’s ultimate defense of the earth-centred universe against Aristarchus of Samos’s 250 BC theory of the solar system and the rotating earth. The arguments about a varying gravitational coupling are similarly vacuous.

Please let me explain. The key fact is, if gravity is due to an asymmetry in forces, which is the case for the Casimir force, then you don’t vary the “gravitational” effect by varying the underlying force for a stable orbit, or any other equilibrium system, like the balance of Coulomb repulsion between hydrogen nuclei in a star, and the gravitational compression.

Put in clearest terms, if you have a tug of war on a rope where there is an equilibrium, then adding more pullers equally to each end of the rope has no net effect, nothing changes.

Similarly, if two matched arm wrestlers were to increase their muscle sizes by the same amount, nothing changes. Similarly, in an arms race if both sides in military equilibrium (parity) increase their weapons stockpiles by the same factor, neither gains an advantage contrary to CND’s propaganda (in fact, the extra deterrence makes a war less likely).

Similarly, if you increase the gravitational compression inside a star by increasing the coupling G, while the electromagnetic (Coulomb repulsion) force increases similarly due to a shared ultimate (unified force theory) mechanism, then the sun doesn’t shine brighter or burn out quicker. The only way that a varying G can have any observable effect is if you make an assumption – either implicitly or explicitly – that G varies with time in a unique way that isn’t shared by other forces. Such an assumption is artificial, speculative, and totally specious, and a good tell-tale sign that a science is turning corruptly conservative and anti-radical with a poor form of propaganda (good propaganda being honest promotion of objective successes, rather than fake dismissals of all possible alternatives to mainstream dogma), by inventing false or fake reasons to defend the status quo and “shut down the argument”. Ptolemy basically said: “Look, the SCIENCE IS SETTLED; the solar system must be wrong because (1) our gut reaction rejects it as contrived, (2) it disagrees with the existing laws of motion by the over-hyped expert Aristotle, and (3) we have a mathematically fantastic theory of epicycles that can be arbitrarily fiddled to fit any planetary motion, without requiring the earth to rotate or orbit the sun.” That was “attractive” for a long time!

Edward Teller in 1948 first claimed to disprove Dirac’s varying-G idea, using an argument as flawed as the one he used to delay the development of the hydrogen bomb. If you remember the story, Teller at first claimed falsely that compression has no effect on thermonuclear reactions in the hydrogen bomb. He claimed that if you compress deuterium and tritium (fusion fuel), the compressed fuel burns faster, but the same efficiency of burn results. He forgot that the ratio of surface area for escaping heat (X-rays in the H-bomb) to mass is reduced if you compress the fuel, so his scaling-law argument is fake. If you compress the fuel, the reduced surface area causes a reduced loss of X-ray energy from the hot surface, so a higher temperature is maintained in the core, allowing much more fusion than occurs in uncompressed fuel.
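The scaling at stake is elementary: for a fixed fuel mass, the radiating surface area falls as the square of the radius, so compression reduces the X-ray loss per unit mass. A one-parameter sketch:

from math import pi

def surface_per_unit_mass(radius, mass=1.0):
    """Radiating surface area per unit mass for a sphere of fixed mass."""
    return 4 * pi * radius**2 / mass

r0 = 1.0   # arbitrary initial radius
for k in (1.0, 0.5, 0.1):   # compression factors on the radius
    ratio = surface_per_unit_mass(k * r0) / surface_per_unit_mass(r0)
    print(f"radius x {k}: surface/mass x {ratio}")
# 0.5 -> 0.25, 0.1 -> 0.01: less radiating surface per unit mass,
# so lower X-ray losses and a hotter, more efficient burn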

Likewise, Teller’s 1948 argument against Dirac’s varying gravitational coupling theory is bogus, because of his biased and wooden, orthodox thinking: if you vary G with time in the sun, it doesn’t affect the fusion rate, because there’s no reason why the coupling of the similar inverse-square-law Coulomb force shouldn’t vary the same way. Fusion rates depend on a balance between the Coulomb repulsion of positive ions (hydrogen nuclei) and gravitational compression. If both forces in an equilibrium are changed the same way, no imbalance occurs. Things remain the same. If you have one baby at each end of a see-saw in balance in a park, and then add another similar baby to each end, nothing changes!

It proves impossible to explain this to a biased mathematical physicist who is obsessed with supergravity and refuses to think logically and rationally about alternatives. What happens then is that you get a dogma being defended by false “no go theorems” that purport to close down all alternative ideas that might possibly threaten funding or prestige, or, more likely, that threaten “anarchy” and “disorder”. Really, when a religious dogma starts burning heretics, it is not a conspiracy of self-confessed bigots who know they are wrong, trying to do evil to prevent the truth coming out. What really happens is that these people are ultra-conservative dogmatic elitists, camouflaged as caring, understanding, honest liberals. They really believe they’re right, and that their attempts to stifle or kill off honest alternatives using specious no-go theorems are a real contribution to physics.

Feynman’s “rules” for calculating Feynman diagrams, which represent terms in the perturbative Taylor-series-type expansion of a path integral in quantum field theory, allow very simple calculations of quantum gravity. The Casimir force of a U(1) Abelian dark energy (repulsive force) theory is able to predict the coupling correctly for quantum gravity.  We do this via Feynman’s rule for a two-vertex Coulomb-type force diagram (which contributes far more to the result than diagrams with more vertices), which implies that the ratio of cross-sections for interactions is proportional to the square of the ratio of the couplings.  We know the cross-section for the weak nuclear force and we know the couplings for both gravity and the weak nuclear force.  This gives us the gravitational interaction cross-section.

To get the couplings into similar dimensional units for this application of Feynman’s rules, we use a well-established method to put each coupling into units of GeV^{-2}.  The Fermi constant for the weak interaction is divided by the cube of the product of h-bar and the velocity of light, while the Newtonian constant G is divided by the product of h-bar and c^5.  This gives a Fermi coupling of 1.166 x 10^{-5} GeV^{-2} and a Newtonian coupling for gravity of 6.709 x 10^{-39} GeV^{-2}, the ratio of which is squared, using Feynman’s rules, to obtain the ratio of cross-sections for the fundamental interactions.  This is standard physics procedure.  All we’re doing is taking standard procedures and doing something new with them, predicting dark energy (and vice versa, calculating gravity from dark energy).  Nobody seems to want to know; even Gerard ‘t Hooft rejected a paper, using the specious argument that because we’re not “citing recent theoretical journal papers” it can’t appear in his Foundations of Physics, which seems to require prior published work in the field, not a new idea.  (Gerard ‘t Hooft’s silly argument would, in effect, demand that Newton extend Ptolemy’s theory of epicycles, or be censored out.)
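A sketch of that arithmetic, using the two coupling values just quoted:

# Couplings in GeV^-2, as quoted above: G_F/(hbar*c)^3 for the weak
# interaction, G/(hbar*c^5) for gravity. Per the two-vertex argument,
# the ratio of cross-sections is taken as the square of the ratio of couplings.
G_FERMI = 1.166e-5    # GeV^-2
G_NEWTON = 6.709e-39  # GeV^-2

coupling_ratio = G_FERMI / G_NEWTON
print(f"coupling ratio:      {coupling_ratio:.3e}")     # ~1.74e+33
print(f"cross-section ratio: {coupling_ratio**2:.3e}")  # ~3.02e+66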

In this theory, particles are pushed together locally by the fact that we’re surrounded by the mass of the universe, and gauge bosons for dark energy (observed cosmological acceleration) are being exchanged between the masses in an apple and the masses in the surrounding universe.

Here’s a new idea. If one tenth of the energy currently put into inventing negative, false “no go theorem” objections to facts that are “heretical” or “taboo” in physics were instead directed towards constructive criticism and the development of new predictions, physics could probably break out of its current stagnation today. Arthur C. Clarke long ago made the observation that when subjective, negative scientists claim to invent theorems to “disprove” the possibility of a certain research direction achieving results and overthrowing mainstream dogma, they’re more often wrong than when they do objective work.

It’s very easy to point out that any new, radical idea is incompatible with some old dogma that is “widely held and established orthodoxy”, partly because that’s pretty much the definition of progress (unless you define all progress as merely adding another layer of epicycles to a half-baked mainstream theory in order to make it compatible with the latest data on the cosmological acceleration), and partly because a new idea is born half-baked, not having been researched with lavish funding for decades or longer by many geniuses.

Fighting inflation with observations of the cosmic background

From Dr Peter Woit’s 14 June 2015 Not Even Wrong blog:

Last week Princeton hosted what seems to have been a fascinating conference, celebrating the 50th anniversary of studies of the CMB. … The third day of the conference featured a panel where sparks flew on the topics of inflation and the multiverse, including the following:

Neil Turok: “even from the beginning, inflation looked like a kluge to me… I rapidly formed the opinion that these guys were just making it up as they went along… Today inflation is the junk food of theoretical physics… Inflation isn’t radical enough – it’s too much a patchwork. It all rests on rare initial conditions… Akin to solving electron stability with springs… all we have is proof of expansion, not that the driving force is inflation… “because the alternatives are bad you must believe it” isn’t an option that I ascribe to, and one that is prevalent now…  we should encourage young to … be creative (not just do designer inflation)

David Spergel: papers on anthropics don’t teach us anything – which is why it isn’t useful…

Slava Mukhanov: inflation is defined as exponential expansion (physics) + non-necessary metaphysics (Boltzmann brains etc) … In most papers on initial conditions on inflation, people dig a hole, jump in, and then don’t succeed in getting out… unfortunately now we have three new indistinguishable inflation models a day – who cares?

Paul Steinhardt: inflation is a compelling story, it’s just not clear it is right… I’d appreciate that astronomers presented results as what they are (scale invariant etc) rather than ‘inflationary’… Everyone on this panel thinks multiverse is a disaster.

Roger Penrose: inflation isn’t falsifiable, it’s falsified… BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye.

Marc Davis: astronomers don’t care about what you guys are speculating about …

I was encouraged by Steinhardt’s claim that “Everyone on this panel thinks multiverse is a disaster.” (although I think he wasn’t including moderator Brian Greene). Perhaps as time goes on the fact that “the multiverse did it” is empty as science is becoming more and more obvious to everyone.

Inflation theory, a phase change at the Planck scale that allows the universe to expand for a brief period faster than light, is traditionally defended by:

(a) the need to correct general relativity by reducing the early gravitational curvature, since general relativity by itself predicts too great a density of the early universe to account for the smallness of the ripples in the cosmic background radiation which decoupled from matter when the universe became transparent at 300,000 years after zero time.  (The transparency occurs when electrons combine with ions to form neutral molecules, which are relatively transparent to electromagnetic radiation, unlike free charges which are strong absorbers of radiation.)

Thus, inflation is being used here to reduce the effective gravitational field strength by dispersing the ionized matter over a larger volume, which reduces the rate of gravitational clumping to match the small amount of clumping observed at 300,000 years after zero.

Another way of doing the same thing is to use a theory of gravitation as a Casimir force resulting from dark energy, which correctly predicts G from dark energy and makes the gravitational coupling G a linear function of time, so that at 300,000 years G is merely 2.3 x 10^{-5} of today’s value, and it is even smaller at earlier times (the smallness of the CBR ripples is not determined solely by the curvature when they were emitted, but by the time-integrated effect of the curvature up to that time).  The standard “no-go theorem” by Edward Teller (1948) used against any variation of G is false, as we have shown, because it makes an implicit assumption that’s wrong: the Teller no-go theorem assumes that G varies with time in one specific way.  Teller assumes, for the sake of his no-go theorem, that the gravitational coupling varies inversely with time as Dirac assumed, rather than linearly with time as in a Casimir mechanism for quantum gravity as an emergent effect of dark energy pushing masses together.  He also assumes implicitly that G varies only by itself, without any variation of the Coulomb coupling.  Teller thus relies on an assumed imbalance between gravity and Coulomb forces to “disprove” varying G, as well as assuming that any variation of G is inverse with time.  All he does is disprove his own flawed assumptions, not the facts!
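The 2.3 x 10^{-5} figure is just the ratio of ages under the assumed linear variation; the present age used below (~13 billion years) is an assumption chosen here to reproduce the figure quoted above:

# If G grows linearly with time, its value at the 300,000-year decoupling
# era relative to today is simply the ratio of the two ages.
t_decoupling = 3.0e5   # years after zero time
t_today = 1.3e10       # years: assumed present age of the universe

print(f"G(decoupling)/G(today) = {t_decoupling / t_today:.1e}")   # ~2.3e-05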

These assumptions are all wrong, as we showed.  Gravity and Coulomb forces are analogous inverse square law interactions so their couplings will vary the same way; therefore, no devastating Teller-type imbalance between Coulomb repulsion of protons in stars and gravitational compression forces arises.  The QG theory works neatly.

(b) Inflation, like string theory, is defended by elitist snobbery of the dictatorial variety: “you must believe this theory because there are no alternatives; we know this to be true because we don’t have any interest in taking alternatives seriously, particularly if they are not hyped by media personalities or “big science” propaganda budgets, and if anyone suggests one we’ll kill it by groupthink “peer” review.  Therefore, like superstring theory, if you want to work at the cutting edge, you’d better work in inflation, or we’ll kill your work.”

That is what it boils down to.  There’s an attitude problem, with two kinds of scientists, defined more by attitude than by methodology.  One kind wants to find the truth; the other wants to become a star or, failing that, at least to be a groupie.  This corruption is not confined to science, but also occurs in political parties, organized religion, and capitalism.  Some people want to make the world a better place; others are more selfish.  Ironically, those who are the most corrupt are also the most expert at camouflaging themselves as the opposite, a fact that emerged with the BBC’s star Jimmy Savile.  (While endlessly exaggerating correlations between temperature and CO2, snubbing others, and making money from the taxpayer, they present themselves as moral.)

Youhei Tsubono’s criticism of the magnetic spin alignment mechanism for the Pauli exclusion principle

“All QM textbooks describe the effects of the Exclusion Principle but its explanation is either avoided or put down to symmetry considerations. The importance of the Exclusion Principle as a foundational pillar of modern physics cannot be overstated since, for example, atomic structure, the rigidity of matter, stellar evolution and the whole of chemistry depend on its operation.” – Mike Towler, “Exchange, antisymmetry and Pauli repulsion. Can we ‘understand’ or provide a physical basis for the Pauli exclusion principle?”, TCM Group, Cavendish Laboratory, University of Cambridge, pdf slide 23.

Japanese physicist Youhei Tsubono, who has a page criticising spin-orbit coupling, points out an apparent discrepancy between the magnetic field energy available for electron spin alignment and the energy difference between the 1s and 2s states, which raises the question of how the spinning charge (magnetic dipole) alignment of electrons can be the mechanism for the Pauli exclusion principle.

Referring to Quantum Chemistry, 6th edition, by Ira N. Levine, p. 292, Tsubono argues that the difference in lithium’s energy between having three electrons in the 1s state (forbidden by the Pauli exclusion principle) and having two electrons in the 1s state (with opposite spins) plus the third electron in the 2s state is 11 eV, which he claims is far greater than the assumed value of the magnetic dipole (spinning charge) field energy, only about 10^-5 eV.  I can’t resist commenting here to resolve this alleged anomaly:

Japanese physicist Youhei Tsubono on the Pauli exclusion principle mechanism by alignment of magnetic dipoles from spinning electrons.

In a nutshell, the error Tsubono makes here is conflating the energy of alignment of magnetic spins for electrons at a given distance from the nucleus with the energy needed not only to flip spin states but also to move to a greater distance from the nucleus. It is true that the repulsive magnetic dipole field energy between similarly-aligned electron spins is only about 10^-5 eV, but because both electrons are in the same subshell, that’s enough to account for the observed Pauli exclusion principle.  The underlying error Tsubono makes is to start from the false model (see the left-hand side of the diagram above) showing three electrons in the 1s state, and then to raise the rhetorical question of how the small magnetic repulsive energy is able to drive one electron into the 2s state.  This situation never arises. The nucleus is formed first of all, in fully ionized form, by some nuclear reaction. The first electrons therefore approach the nucleus from a large distance.  The realistic question is therefore not “how does the third electron in the 1s state get enough energy to move to the 2s state from the weak magnetic repulsion that causes the Pauli exclusion principle?”  The third electron stops in the 2s state because of a mechanism: it’s unable to radiate the energy it would gain by approaching any closer to the nucleus.  The electron in the 2s state can only radiate energy in units of hf, so even a small discrepancy in energy is enough to prevent it approaching closer to the nucleus.  (Similarly, if an entry ticket costs $10, you don’t get in with $9.99.)
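For scale, here is a standard order-of-magnitude estimate of the aligned dipole-dipole interaction energy between two electron magnetic moments; the separations are illustrative assumptions, and the point is only that the energy is in the 10^-5 to 10^-4 eV range, minute beside the 11 eV level gap:

from math import pi

MU0 = 4e-7 * pi            # vacuum permeability, H/m
MU_B = 9.2740101e-24       # Bohr magneton, J/T
A0 = 5.29177e-11           # Bohr radius, m
J_PER_EV = 1.60217663e-19  # joules per electron-volt

def dipole_dipole_energy(r):
    """Characteristic magnetic dipole-dipole energy at separation r, in eV
    (order-of-magnitude: (mu0/4pi) * mu_B^2 / r^3, angular factors dropped)."""
    return (MU0 / (4 * pi)) * MU_B**2 / r**3 / J_PER_EV

for n in (1, 2, 3):   # separations of 1, 2 and 3 Bohr radii (assumed)
    print(f"r = {n} a0 : {dipole_dipole_energy(n * A0):.1e} eV")
# ~3.6e-04 eV at a0, ~4.5e-05 eV at 2*a0, ~1.3e-05 eV at 3*a0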

Similarly, the objection Tsubono raises to the supposedly faster-than-light spin speed at the classical electron radius is false, because the core size of the electron is far smaller than the classical electron radius.

The core can therefore spin fast enough to explain the magnetic dipole moment without violating the speed of light, which would only be violated if the classical electron radius model were true.  What’s annoying about Tsubono’s page, like many other popular critiques of “modern physics”, is that it tries to throw out the baby with the bathwater.  The spinning electron’s dipole magnetic field alignment mechanism for the Pauli exclusion principle is one of the few really impressive, understandable mechanisms in quantum mechanics, and it is therefore important to defend it.  Having chucked out the physical mechanism that comes from quantum field theory, Tsubono then argues that “quantum field theory is not physics, just maths.”

Richard P. Feynman reviews nonsensical “mathematical” (aka philosophical) attacks on objective critics of quantum dogma in the Feynman Lectures on Physics, volume 3, chapter 2, section 2-6:

“Let us consider briefly some philosophical implications of quantum mechanics. … making observations affects a phenomenon … The problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves … Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) … The question is whether the ideas of the exact position of a particle and the exact momentum of a particle are valid or not. The classical theory admits the ideas; the quantum theory does not. This does not in itself mean that classical physics is wrong.

“When the new quantum mechanics was discovered, the classical people—which included everybody except Heisenberg, Schrödinger, and Born—said: “Look, your theory is not any good because you cannot answer certain questions like: what is the exact position of a particle?, which hole does it go through?, and some others.” Heisenberg’s answer was: “I do not need to answer such questions because you cannot ask such a question experimentally.” … It is always good to know which ideas cannot be checked directly, but it is not necessary to remove them all. … In quantum mechanics itself there is a probability amplitude, there is a potential, and there are many constructs that we cannot measure directly. The basis of a science is its ability to predict. … We have already made a few remarks about the indeterminacy of quantum mechanics. … we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom of will, and of the idea that the world is uncertain. Of course we must emphasize that classical physics is also indeterminate … if we start with only a tiny error it rapidly magnifies to a very great uncertainty. … For already in classical mechanics there was indeterminability from a practical point of view.”

Most QM and QFT textbook authors (excepting Feynman’s 1985 QED) ignore the mechanism for quantum field theory, in order to cater to Pythagorean-style mathematical mythology.  This mythology is reminiscent of the elitist warning over Plato’s doorway: only mathematicians are welcome.  To enforce this policy, an obfuscation of physical mechanisms is usually undertaken, in a pro-“Bohring” effort to convince students that physics at the basic level is merely a matter of dogmatically applying certain mathematical rules from geniuses, which lack any physical understanding.  Tsubono has other criticisms of modern dogma, e.g. that dark energy provides a modern ad hoc version of “ether” to make general relativity compatible with observation (just the opposite of Einstein’s basis for special relativity).  So why not go back to Lorentz’s mechanism for mass increase and length contraction as a field interaction, accompanied by radiation which occurs upon acceleration?  The answer seems to be that there is a widespread resistance to trying to understand physics objectively.  It seems that the status quo is easier to defend.

There is a widespread journalistic denial of the freedom to ask basic questions in quantum mechanics about what is really going on and what the mechanism is, and efforts are made to close down discussions that could lead in a revolutionary, unorthodox or heretical direction:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up [trying to explain it further].”

– Richard P. Feynman, as quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’ … electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important, and we have to sum the arrows[*]  to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5. [*Arrows = wavefunction amplitudes, each proportional to exp(iS) = cos S + i sin S, where S is the action of the potential path in units of h-bar.]

Nobel Laureate Gell-Mann debunked single-wavefunction entanglement using colored socks.  A single, and thus entangled/collapsible, wavefunction for each quantum number of a particle only occurs in non-relativistic 1st quantization QM, such as Schroedinger’s equation.  By contrast, in relativistic 2nd quantization there is a separate wavefunction amplitude for each potential/possible interaction, not merely one wavefunction amplitude per quantum number.  This difference gets rid of “wavefunction” collapse, “wavefunction” entanglement philosophy, and so on.  Instead of a single wavefunction that becomes deterministic only when measured, we have the path integral, where you add together all the possible wavefunction amplitudes for a particle’s transition.  The paths with the smallest action compared to Planck’s constant (thus having the smallest energy and time) are in phase and contribute most, while paths of larger action (large energy and/or time) have phases that interfere and cancel out.
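A minimal numerical sketch of that phase cancellation (a toy one-parameter family of paths whose action grows quadratically away from the stationary path, with h-bar set to 1; the numbers are illustrative, not tied to any particular interaction):

import cmath

def path_sum(action_curvature, n_paths=2001, spread=10.0):
    """Normalized |sum of exp(i*S)| over paths with S = curvature * d^2,
    where d is the deviation from the stationary (least-action) path."""
    total = 0j
    for k in range(n_paths):
        d = -spread + 2 * spread * k / (n_paths - 1)
        total += cmath.exp(1j * action_curvature * d**2)
    return abs(total) / n_paths

# Small curvature: actions stay within ~h-bar of the minimum, phases add.
# Large curvature: phases oscillate wildly away from the minimum and cancel.
for curvature in (0.001, 0.1, 10.0):
    print(f"curvature {curvature}: surviving amplitude {path_sum(curvature):.3f}")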

Virtual (or offshell) particles travel along the cancelled paths of large action; real (or onshell) particles do not. So there’s a simple mechanism which replaces the single-wavefunction chaos of ordinary quantum mechanics with destructive and constructive interference between the multiple wavefunction amplitudes per particle in quantum field theory, which is the correct, relativistic theory.  Single-wavefunction theories like Schroedinger’s model of the atom (together with Bell’s inequality, which falsely assumes a single wavefunction per particle, like quantum computing hype) are false, because they are non-relativistic and thus ignore the fact that the Coulomb field is quantized, and ignore the field quanta or virtual photon interactions mediating the force binding an orbital electron to a nucleus.  Once you switch to quantum field theory, the chaotic motion of an orbital electron has a natural origin in its random, discrete interactions with the quantum Coulomb field.  (The classical Coulomb field in Schroedinger’s model is a falsehood.)

Relativistic quantum field theory, unlike quantum mechanics (1st quantization), gets over Rutherford’s objection to Bohr’s atomic electron: the emission of radiation by an accelerating charge.  Charges in quantum field theory have fields which are composed of the exchange of what is effectively offshell radiation: the ground state is thus defined by an equilibrium between emission and reception of virtual radiation. We only “observe” onshell photons emitted while an electron accelerates, because the act of acceleration throws the usual balanced equilibrium (of virtual photon exchange between all charges) temporarily out of equilibrium, by preventing the usual complete cancellation of field phases.  Evidence: consider Young’s double slit experiment using Feynman’s path integral.  We can see that virtual photons go through all slits, in the process interacting with the fields of the electrons at the slit edges (causing diffraction), but only the uncancelled field phases arriving at the screen are registered as a real (onshell) photon.  It’s simple.

This is analogous to the history of radiation in thermodynamics. Before Prevost’s suggestion in 1792 that an exchange of thermal energy explains the absence of cooling while all bodies are continuously radiating energy, thermodynamics was in grave difficulties, with heroic Niels Bohr-style “God characters” grandly dismissing as ignorant anyone who discovered an anomaly in theories of fluid heat like caloric and phlogiston. Today we grasp that a room at 15 C is radiating because its temperature is 288 K above absolute zero. Cooling is not synonymous with radiating.  If the surrounding parts of the building are also at 15 C, the room will not cool, since the radiating effect is compensated by the receipt of radiation from the surrounding rooms, floor and roof.  Likewise, the electron in the ground state can radiate energy without spiralling into the nucleus, if it is in equilibrium, receiving as much energy as it radiates.
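In Stefan-Boltzmann terms (standard thermodynamics, nothing specific to this blog’s argument): a surface at absolute temperature T emits sigma*T^4 per unit area, so two surfaces at the same temperature each radiate strongly while the net flow between them is zero:

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(T):
    """Blackbody power radiated per unit area at absolute temperature T, W/m^2."""
    return SIGMA * T**4

T_room = 288.15    # 15 C expressed in kelvin
T_walls = 288.15   # surroundings at the same temperature

print(f"room emits:   {emitted_flux(T_room):.0f} W/m^2")                       # ~391 W/m^2
print(f"net transfer: {emitted_flux(T_room) - emitted_flux(T_walls):.0f} W/m^2")  # 0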

False no-go theorems, based on false premises, have always been used to quickly end any progressive suggestions without objective discussion.  This censorship deliberately retarded the objective development of new ideas which were contrary to populist dogma.  It was only when the populist dogma became excessively boring or when a rival idea was evolved into a really effective replacement, that “anomalies” in the old theory ceased to be taboo.  Similarly, the populist and highly misleading Newtonian particle theory of light still acts to prevent objective discussions of multipath interference as the explanation of Young’s double-slit experiment, just as it did in Young’s day:

“Commentators have traditionally asked aloud why the two-slit experiment did not immediately lead to an acceptance of the wave theory of light. And the traditional answers were that: (i) few of Young’s contemporaries were willing to question Newton’s authority, (ii) Young’s reputation was severely damaged by the attacks of Lord Brougham in the Edinburgh Review, and that (iii) Young’s style of presentation, spoken and written, was obscure. Recent historians, however, have looked instead for an explanation in the actual theory and in its corpuscular rivals (Kipnis 1991; Worrall 1976). Young had no explanation at the time for the phenomena of polarization: why should the particles of his ether be more willing to vibrate in one plane than another? And the corpuscular theorists had been dealing with diffraction fringes since Grimaldi described them in the 17th century: elaborate explanations were available in terms of the attraction and repulsion of corpuscles as they passed by material bodies. So Young’s wave theory was thus very much a transitional theory. It is his ‘general law of interference’ that has stood the test of time, and it is the power of this concept that we celebrate on the bicentennial of its publication in his Syllabus of 1802.”

– J. D. Mollon, “The Origins of the Concept of Interference”, Phil. Transactions of the Royal Society of London, vol. A360 (2002), pp. 807-819.

Feynman remarks in his Lectures on Physics that if you deny all “unobservables” as Mach and Bohr do, then you can kiss the wavefunction Psi goodbye. You can observe probabilities and cross-sections via reaction rates, but as Feynman argues, that’s not a direct observation of the existence of the wavefunction. Lots of things in physics are founded on indirect evidence, raising the possibility that an alternative theory may explain the same evidence using a different basic model. This is exactly the situation that occurred in explaining sunrise: either the sun orbits the earth daily, or the earth rotates daily while the sun moves only about one degree across the sky.

Propagator derivations

Peter Woit is writing a book, Quantum Theory, Groups and Representations: An Introduction, and has a PDF of the draft version linked here.  He has now come up with the slogan “Quantum Theory is Representation Theory”, after postulating “What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics”.

I’ve recently become interested in the mathematics of QFT, so I’ll just make a suggestion for Dr Woit regarding his section “42.4 The propagator”, which is incomplete (he has only the heading there on page 404 of the 11 August 2014 revision, with no text under it at all).

The propagator is the greatest part of QFT from the perspective of Feynman’s 1985 book QED: you evaluate the propagator from either the Lagrangian or the Hamiltonian, since the propagator is simply the Fourier transform of the potential energy (the interaction part of the Lagrangian provides the couplings for Feynman’s rules, not the propagator).  Fourier transforms are simply Laplace transforms with a complex number in the exponent.  The Laplace and Fourier transforms are used extensively in analogue electronics for transforming waveforms (amplitudes as a function of time) into frequency spectra (amplitudes as a function of frequency).  Taking the concept at its simplest, the Laplace transform of a constant amplitude is just the reciprocal (inverse), e.g. an amplitude pulse lasting 0.1 second has a frequency of 1/0.1 = 10 Hertz.  You can verify that from dimensional analysis.  For integration between zero and infinity, with F(f) = 1 we have:

Laplace transform, F(t) = Integral [F(f) * exp(-ft)] df

= Integral [exp(-ft)] df

= 1/t.

If we change from F(f) = 1 to F(f) = f, we now get:

F(t) = Integral [f * exp(-ft)] df = 1/t².

The trick of the Laplace transform is the integration property of the exponential function: its unique property of remaining unchanged by integration (because e is the base of natural logarithms), apart from multiplication by the constant in its power (the factor which is not a function of the variable you’re integrating over).  The Fourier transform is the same as the Laplace transform, but with a factor of “i” included in the exponential power:

Fourier transform, F(t) = Integral [F(f) * exp(-ift)] df

In quantum field theory, instead of the inversely linked frequency f and time t, you have inversely linked variables like momentum p and distance x.   This comes from Heisenberg’s ubiquitous relationship, p*x ~ h-bar.  Thus, p ~ 1/x.  Suppose that the potential energy of a force field is given by V = 1/x.  Note that field potential energy V is part of the Hamiltonian, and also part of the Lagrangian, given a minus sign where appropriate.  You want to convert V from position space, V = 1/x, into momentum space, i.e. to make V a function of momentum p.  The Fourier transform of the potential energy over 3-d space shows that V ~ 1/p².  (Since this blog isn’t very suitable for lengthy mathematics, I’ll write up a detailed discussion of this in a vixra paper soon, to accompany the one on renormalization and mass.)
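As a check on these claims, here is a minimal sympy sketch (my own verification, with variable names chosen for illustration). It verifies the “Laplace transform of a constant amplitude is the reciprocal” statement, and then does the 3-d Fourier transform of the Yukawa potential exp(-mr)/r after reducing it to the standard radial integral (4*pi/p) * Integral [r * V(r) * sin(pr)] dr; the Coulomb case discussed above is the m → 0 limit.

import sympy as sp

f, t, r, p, m = sp.symbols('f t r p m', positive=True)

# Laplace-type transform of a constant amplitude F(f) = 1: gives 1/t.
print(sp.integrate(sp.exp(-f*t), (f, 0, sp.oo)))        # 1/t
# And of F(f) = f: gives 1/t^2.
print(sp.integrate(f*sp.exp(-f*t), (f, 0, sp.oo)))      # t**(-2)

# 3-d Fourier transform of the Yukawa potential V = exp(-m*r)/r,
# after the angular integrals are done analytically:
# F(p) = (4*pi/p) * Integral_0^oo exp(-m*r) * sin(p*r) dr
radial = sp.integrate(sp.exp(-m*r)*sp.sin(p*r), (r, 0, sp.oo))
print(sp.simplify(4*sp.pi/p * radial))      # 4*pi/(m**2 + p**2)

# Coulomb limit m -> 0 recovers the 1/p^2 momentum-space potential:
print(sp.limit(4*sp.pi/p * radial, m, 0))   # 4*pi/p**2

This also anticipates the Yukawa discussion below: the transform of (1/x)exp(-mx) lands you on 1/(p² + m²), i.e. the massive boson propagator shape.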

What’s interesting here is that this shows that the propagator terms in Feynman’s diagrams (which, integrated over, produce the running couplings and thus renormalization) are simply dependent on the field potential, which can be written in terms of classical Coulomb field potentials or quantum Yukawa type potentials (Coulomb field potentials with an exponential decrease included).  There are of course two types of propagator: bosonic (integer spin) and fermionic (half integer spin).  It turns out that the classical Coulomb field law gives a potential of V = 1/x which, when Fourier transformed, gives you V ~ 1/p², and when you include a Yukawa exp(-mx) short-range attenuation or decay term, i.e. V = (1/x)exp(-mx), you get a Fourier transform of 1/(p² + m²), which (up to the sign convention set by the metric signature) is the same result, 1/(p² – m²), that you get for the spin-1 field propagator (boson propagator) using a Klein-Gordon Lagrangian.
However, using the Dirac Lagrangian, which is basically a square-root version of the Klein-Gordon equation, with Dirac’s gamma matrices to avoid losing solutions (minus signs and complex numbers tend to disappear when you square them), you get a quite different propagator: 1/(p – m).  The squares on p and m which occur in the Klein-Gordon (boson) propagator disappear for Dirac’s fermion (half integer spin) propagator!
So what does this tell us about the physical meaning of Dirac’s equation?  Put another way: we know that Coulomb’s law in QFT (QED more specifically) physically involves field potentials consisting of exchanged spin-1 virtual photons, which is why the Fourier transform of Coulomb’s law gives the same result as the propagator from the Klein-Gordon equation without a mass term (Coulomb’s virtual photons are massless, so the electromagnetic force is infinite ranged).  But what is the equivalent of Coulomb’s law for Dirac’s spin-1/2 fermion fields?  Doing the Fourier transform in the same way but ending up with Dirac’s 1/(p – m) fermion propagator gives an interesting answer, which I’ll discuss in my forthcoming vixra paper.
Another question is this: the Higgs field and the renormalization mechanism only deal with problems of mass at high energy, i.e. UV cutoffs, as discussed in detail in my previous paper.  What about the loss of mass at low energy, the IR cutoff, which prevents the coupling from running by removing the mass term from the propagator?
In other words, in QED we have the issue that the running coupling’s polarizable pair production only kicks in at 1.02 MeV (the energy needed to briefly form an electron-positron pair).  Below that energy, or in weak fields beyond the classical electron radius, the coupling stops running, so the effective electronic charge is constant.  This is why there is a standard low energy electronic charge, the one measured by Millikan.  Below the IR cutoff, i.e. at distances larger than the classical electron radius, the charge of an electron is constant and the force merely varies with the Coulomb geometric law (the spreading or divergence of field lines or field quanta over an increasing space, diluting the force, but with no additional vacuum polarization screening of charge, since this screening is limited to distances shorter than the classical electron radius, or energies beyond about 1 MeV).
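To see numerically what “the coupling stops running below the IR cutoff” means, here is a sketch using the textbook one-loop QED running formula (electron loop only), with the freezing below the 1.02 MeV pair production threshold implemented as a simple if-branch. The branching and the function name are my own schematic simplification, not a standard implementation.

import math

ALPHA_0 = 1/137.036      # low energy fine structure constant (the Millikan-regime charge)
M_E = 0.511              # electron mass-energy in MeV, the reference scale in the log
IR_CUTOFF = 1.022        # MeV: e+e- pair production threshold, per the text above

def alpha_eff(Q_MeV):
    """One-loop QED running coupling, frozen below the IR cutoff.
    Sketch only: includes just the electron loop, no other fermions."""
    if Q_MeV <= IR_CUTOFF:
        return ALPHA_0   # no pair production, no vacuum polarization: constant coupling
    return ALPHA_0 / (1 - (ALPHA_0/(3*math.pi)) * math.log(Q_MeV**2 / M_E**2))

for Q in (0.001, 0.511, 10.0, 91000.0):   # from the keV scale up to the Z mass (91 GeV)
    print(Q, alpha_eff(Q))

With only the electron loop included, the coupling creeps up from 1/137 at low energy to roughly 1/134.5 at 91 GeV; including all charged fermion loops would take it to the familiar ~1/128.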
So how and why does the Coulomb potential suddenly change from V = 1/x beyond a classical electron radius, to V = (1/x)exp(-mx) within a classical electron radius? Consider the extract below from page 3 of http://vixra.org/pdf/1407.0186v1.pdf:
Integrating using a massless Coulomb propagator to obtain correct low energy mass
The key problem for the existing theory is very clear when looking at the integrals in Fig. 1.  Although we have an upper case Lambda symbol included as an upper limit (a high energy, i.e. UV, cutoff) on the integral which includes an electron mass term, we have not included a lower integration limit (IR cutoff): this is in keeping with the shoddy mathematics of most (all?) quantum field theory textbooks, which either deliberately or maliciously cover up the key (and really interesting or enlightening) problems in the physics by obfuscating, or by getting bogged down in mathematical trivia, like a clutter of technical symbolism.  What we’re suggesting is that there is a big problem with the concept that the running coupling merely increases the “bare core” mass of a particle: this standard procedure conflates and confuses the high energy bare core mass that isn’t seen at low energy with the standard value of electron mass, which is what you observe at low energy.
In other words, we’re arguing for a significant re-interpretation of physical dogmas in the existing mathematical structure of QFT, in order to get useful predictions: nothing useful is lost by our approach, while there is everything to be gained from it.  Unfortunately, physics is now a big money making industry in which journals and editors are used as a professional status-political-financial-fame-fashion-bigotry-enforcing-consensus-enforcing power grasping tool, rather than an informative tool designed solely and exclusively to speed up the flow of information that is helpful to those people focused merely upon making advances in the basic science.  But that’s nothing new.  [When Mendel’s genetics were finally published after decades of censorship, his ideas had been (allegedly) plagiarized by two other sets of bigwig researchers, whose papers the journal editors had more to gain from publishing than they had from publishing the original research of someone then obscure and dead!  Neat logic, don’t you agree?  Note that this statement of fact is not “bitterness”; it is just fact.  A lot of the bitterness that does arise in science comes not from the hypocrisy of journals and groupthink, but because these are censored out from discussion.  (Similarly, the Oscars are designed to bring attention to the Oscars, since the prize recipients are already famous anyway.  There is no way to escape the fact that the media in any subject, be it science or politics, deems one celebrity more worthy of publicity than the diabolical murder of millions by left wing dictators.  The reason is simply that the more “interesting” news sells more journals than the more difficult to understand problems.)]
17 August 2014 update:

(1) The Fourier transform of the Coulomb potential (or the Fourier transform of the potential energy term in the Lagrangian or Hamiltonian) gives rest mass.

(2) Please note in particular the observation that since the Coulomb (low energy, below IR cutoff) potential’s Fourier transform gives a propagator omitting a mass term, this propagator does not contribute a logarithmic running coupling. This lack of a running coupling at low energy is observed in classical physics for energy below about 1 MeV, where no vacuum polarization or pair production occurs, because pair production requires at least the mass of the electron and positron pair, 1.02 MeV. The Coulomb non-mass-term propagator contribution at one loop to electron mass is then non-logarithmic and simply equal to a factor like alpha times the integral (between 0 and A) of (1/k³) d⁴k = alpha * A (see the sketch after this list). As shown in the diagram, we identify this “contribution” from the Coulomb low energy propagator without a mass term to be the actual ground state mass of the electron, with the cutoff A corresponding to the neutral currents that mire down the electron charge core, causing mass, i.e. A is the mass of the uncharged Z boson of the electroweak scale (91 GeV). If you have two one-loop diagrams, this integral becomes alpha * A squared.

(3) The one loop corrections shown on page 3 to electron mass for the non-Coulomb potentials (i.e. including mass terms in the propagator integrals) can be found in many textbooks, for example equations 1.2 and 1.3 on page 8 of “Supersymmetry Demystified”. As stated in the blog post, I’m writing a further paper about propagator derivations and their importance.
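The sketch promised in item (2): to see why that integral is linear rather than logarithmic in the cutoff, note that in 4-d Euclidean momentum space the measure after angular integration is d⁴k = 2π² k³ dk, so the k³ volume factor exactly cancels the 1/k³. A minimal sympy check (my own verification; the 2π² angular factor is absorbed into the “alpha-like” constants of the schematic notation above):

import sympy as sp

k, Lam, m = sp.symbols('k Lambda m', positive=True)

# 4-d Euclidean measure after angular integration: d^4k -> 2*pi**2 * k**3 dk
measure = 2*sp.pi**2 * k**3

# Massless (Coulomb-type) integrand ~ 1/k^3: linear in the cutoff, no logarithm.
print(sp.integrate(measure / k**3, (k, 0, Lam)))    # 2*pi**2*Lambda

# Compare a 1/k^4-type integrand, which is what produces the familiar
# logarithmic running (and needs both an IR and a UV cutoff to stay finite):
print(sp.integrate(measure / k**4, (k, m, Lam)))    # 2*pi**2*(log(Lambda) - log(m))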

If you read Feynman’s 1985 QED (not his 1965 book with Hibbs, which misleads most people about path integrals, and which Zee and Distler prefer to the 1985 book), the propagator is the brains of QFT. You can’t directly do a path integral over spacetime, with a Lagrangian integrated to give action S and then re-integrated in the path integral, the integral of amplitude exp(iS) taken over all possible geometric paths. So, as Feynman argues, you have to construct a perturbative expansion, each term becoming more complex and representing pictorially the physical interactions between particles. Feynman’s point in his 1985 book is that this process makes QFT simple. The contribution from each diagram involves multiplying the charge (coupling) at each vertex by a propagator for each internal line, while ensuring that momentum is conserved at the vertices.
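As a toy illustration of “coupling times propagator” (my own minimal sketch, not Feynman’s notation; the function name and numbers are invented for the example), here is the tree-level amplitude for two charges exchanging one virtual boson of squared momentum transfer q²:

# Toy tree-level exchange amplitude: one internal boson line joining two vertices.
# Sketch only: Minkowski-signature propagator 1/(q^2 - m^2), i*epsilon factor omitted.

def tree_amplitude(g, q2, m=0.0):
    """g: coupling at each vertex; q2: squared 4-momentum transfer carried by
    the internal line; m: exchanged boson mass. Momentum conservation at each
    vertex is what fixes the internal line to carry exactly q."""
    propagator = 1.0 / (q2 - m**2)
    return g * propagator * g    # vertex * internal line * vertex

# Massless exchange reproduces the Coulomb-like 1/q^2 behaviour:
print(tree_amplitude(g=0.3, q2=1.0))    # 0.09
print(tree_amplitude(g=0.3, q2=2.0))    # 0.045, falling off as 1/q^2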

Rank 1 quantum gravity


Above: extract from http://vixra.org/pdf/1301.0187v1.pdf, “Einstein’s rank-2 tensor compression of Maxwell’s equations does not turn them into rank-2 spacetime curvature”.


Above: a serious problem for unfashionable new alternative theories in a science dominated by noisy ignorant bigots.

Dr Woit has a post (here, with comments here) about the “no alternatives argument” used in science to “justify” a research project by “closing down arguments”, dismissing any possibility of an alternative direction (the political side of it appears in pure politics too).  I tried to make a few comments, but it proved impossible to defend my position without using maths of a sort which could not be typed in a comment, so I’ll place the material in this post, responding to criticisms here too:

“… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

It’s weird that Dr Peter Woit claims that this “there is no alternative so you must believe in M-theory” argument is difficult to respond to, seeing that he debunked it in his own 2002 arXiv paper “Quantum field theory and representation theory”.

In that paper he makes the specific point about the neglect of alternatives due to M-theory hype, by arguing there that a good alternative is to find a symmetry group in low dimensions that encompasses and explains better the existing features of the Standard Model.

Woit gives a specific example, showing how to use Clifford algebra to build a representation of a symmetry group that for 4 dimensional spacetime predicts the electroweak charges including left handed chiral weak interactions, which the Standard Model merely postulates.

But he also expresses admiration for Witten, whose first job was in left wing politics, working for George McGovern, a Democratic presidential nominee in 1972. In politics you brainwash yourself that your goal is a noble one, some idealistic utopia, then you lie to gain followers by promising the earth. I don’t see much difference with M-theory, where a circular argument emerges in which you must

(1) shut down alternative theories as taboo, simply because they haven’t (yet) been as well developed or hyped as string theory, and

(2) use the fact that you have effectively censored alternatives out as being somehow proof that there are “no alternatives”.

I don’t think Dr Woit is making the facts crystal clear, and he fails badly to make his own theory crystal clear in his 2002 paper where he takes the opposite approach to Witten’s hype of M-theory. Woit introduces his theory on page 51 of his paper, after a very abstruse 50 pages of advanced mathematics on group symmetry representations using Lie and Clifford algebras. The problem is that alternative ideas that address the core problems are highly mathematical and need a huge amount of careful attention and development. I believe in censorship for objectivity in physics, instead of censorship to support fashion.

” Indeed as Einstein showed, gravity is *not* a force, it is a manifestation of spacetime curvature.”

This is a pretty good example of a “no alternatives” delusion: if gravity is quantized in quantum field theory, the gravitational force will then be mediated by graviton exchange (gauge bosons), just like any Standard Model force, not spacetime curvature as it is in general relativity. Note that Einstein used rank-2 tensors for spacetime curvature to model gravitational fields because that Ricci tensor calculus was freshly minted and available in the early 20th century.

Rank-2 tensors hadn’t been developed to that stage at the time of Maxwell’s formulation of electrodynamics laws, which uses rank-1 tensors or ordinary vector calculus to model fields as bending or diverging “lines” in space. Lines in space are rank 1, spacetime distortion is rank 2. The vector potential version of Maxwell’s equations doesn’t replace field lines with spacetime curvature for electromagnetic fields, it merely generalizes the rank-1 field description of Maxwell. It’s taboo to point out that electrodynamics and general relativity arbitrarily and dogmatically use different mathematical descriptions for reasons of historical fluke, not physical utility (rank 1 equations for field lines versus rank 2 equations for spacetime curvature). Maxwell worked in a pre-tensor era, Einstein in a post-tensor era. Nobody bothered to try to replace Maxwell’s field line description of electrodynamics with a spacetime curvature description, or vice-versa to express gravitational fields in terms of field lines. It’s taboo to even suggest thinking about it! Sure there will be difficulties in doing so, but you learn about physical reality by overcoming difficulties, not by making it taboo to think about.

The standard dogma is to assert that somehow just because Maxwell’s model is rank 1 and involves spin 1 gauge boson exchange when quantized as QED, general relativity involves a different spin to couple to the rank 2 tensor, spin 2 gravitons. However, since 1998 it’s been observationally clear that the cosmological acceleration implies a repulsive long range force between masses, akin to spin-1 boson exchange between similar charges (mass-energy being the gravitational charge). Now, if you take this cosmological acceleration or repulsive interaction or “dark energy” as the fundamental interaction, you can obtain general relativity’s “gravity” force (attraction) in the way the Casimir force emerges, with checkable predictions that were subsequently confirmed by observation (the dark energy predicted in 1996, observed 1998).  Hence, understanding the maths allows you to find the correct physics!

Jesper: what doesn’t make sense is your reference to Ashtekar variables, which don’t convert spacetime curvature into rank-1 equations for field lines. What they do is introduce more obfuscation, without any increase in understanding of nature. LQG, which resulted from Ashtekar variables, has been a failure. The fact is, there is no mathematical description of GR in terms of field lines, and no mathematical description of QED in terms of spacetime curvature, and this for purely historical, accidental reasons! The two different descriptions are long-held dogma, and it’s taboo to mention this.

(For a detailed technical discussion of the difference between spacetime curvature maths and Maxwell’s field lines, please see my 2013 paper “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra).

Geometrodynamics doesn’t express electrodynamics’ rank 1 field lines as spacetime curvature, any more than vortices do, or any more than Ashtekar variables can express spacetime curvature as field lines.

The point is, if you want to unify gravitation with standard model forces, you first need to express them with the same mathematical field description so you can properly understand the differences. You need both Maxwell’s equations and gravitation expressed as field lines (rank 1 tensors), or you need them both expressed as spacetime curvature (rank 2 tensors). The existing mixed description (rank 1 field lines for QED, spacetime curvature for GR) follows from historical accident and has become a hardened dogma to the extent that merely pointing out the error results in attacks of the sort you make, where you mention some other totally irrelevant description and speculatively claim that I haven’t heard of it.

The issue is not “which is the more fundamental one”. The issue is expressing all the fundamental interactions in the *same* common field description, whatever that is, be it rank-1 or rank-2 equations. It doesn’t matter if you choose field lines or spacetime curvature. What does matter is that every force is expressed in a *common* field description. The existing system expresses all SM particle interactions as rank-1 tensors and gravitation as rank-2 tensors. Your comment ignores this, and you claim it is “personal prejudice” to choose “which fundamental theory is correct”, which “cannot be established by making dogmatic statements”. I’m not prejudiced in favour of any particular description; I am against the confusion of mixing up different descriptions. It is the existing mixture that is based on dogmatic prejudice!

“Yang-Mills theory (Maxwell, QED, QCD etc.) is a theoretical framework of connections (rank 1 tensor) and curvature of connections (rank 2 tensor).”

Wrong: the rank-2 field strength tensor is not spacetime curvature, as I prove in my paper on fibre connections: see http://vixra.org/pdf/1301.0187v1.pdf, “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra.

Maxwell’s equations of electromagnetism describe three dimensional electric and magnetic field line divergence and curl (rank-1 tensors, or vector calculus), but were compressed by Einstein by including those rank-1 equations as components of rank-2 tensors by gauge fixing, as I showed there. The SU(N) Yang-Mills equations for weak and strong interactions are simply an extension of this, adding a quadratic term, the Lie product. As for the connection of gauge theory to fibre bundles: as I showed in that paper, Yang merely postulates that the electromagnetic field strength tensor equals the Riemann tensor, and that the Christoffel matrix equals the covariant vector potential. These are efforts to paper over the physical distinctions between the field line description of gauge theory and the curved spacetime description of general relativity. I go into all this in detail in that 2013 paper.
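For readers wanting to see concretely what “rank 1 versus rank 2” means here, this sympy sketch (my own illustration; the potential chosen is a standard textbook configuration, and index-raising conventions are glossed over) builds the antisymmetric field strength tensor F_mu_nu = d_mu A_nu − d_nu A_mu from a rank-1 potential A. The resulting rank-2 object packages the E and B field “lines”, but it is an antisymmetric tensor on a fixed background, not a curvature of spacetime.

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# A rank-1 object: a 4-potential A_mu whose curl gives a uniform magnetic field along z.
A = sp.Matrix([0, -y/2, x/2, 0])

# Rank-2 field strength: F_mu_nu = d_mu A_nu - d_nu A_mu (an antisymmetric 4x4 matrix).
F = sp.Matrix(4, 4, lambda mu, nu: sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu]))
print(F)                            # +/- 1 entries in the x-y block: a uniform B_z "field line" density
print(F + F.T == sp.zeros(4, 4))    # True: totally antisymmetric, unlike the symmetric Ricci tensor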

The fact that only ignorant responses are made to factual data also exists in all other areas of science where non-mainstream ideas have been made taboo, and where you have to fight a long war merely to get a fact reviewed without bigoted insanity or apathy.

Karl Popper’s 1935 correspondence arguments with Einstein are vital reading. See, in particular, Einstein’s letter to Karl Popper dated 11 September 1935, published in appendix xii to the 1959 English edition of Popper’s “Logic of Scientific Discovery”, pages 482-484. Einstein writes in that letter that he has physical objections to the trivial arguments of Heisenberg, based on the single wavefunction collapse idea of non-relativistic QM. Note that wavefunction collapse doesn’t occur at all in relativistic 2nd quantization as expressed by Feynman’s path integrals, where multipath interference allows physical path interference processes to replace the metaphysical collapse of a single indeterminate wavefunction amplitude. You instead integrate over many wavefunction amplitude contributions, one representing every possible path, including specifically the paths that represent physical interactions with a measuring instrument.

“I regard it as trivial that one cannot, in the range of atomic magnitudes, make predictions with any desired degree of precision … The question may be asked whether, from the point of view of today’s quantum theory, … a statistical description of an aggregate of systems, rather than a description of one single system. But first of all, he ought to say so clearly; and secondly, I do not believe that we shall have to be satisfied for ever with so loose and flimsy a description of nature. …

“I wish to say again that I do not believe that you are right in your thesis that it is impossible to derive statistical conclusions from a deterministic theory. Only think of classical statistical mechanics (gas theory, or the theory of Brownian movement). Example: a material point moves with constant velocity in a closed circle; I can calculate the probability of finding it at a given time within a given part of the periphery. What is essential is merely this: that I do not know the initial state, or that I do not know it precisely!” – Albert Einstein, 11 September 1935 letter to Karl Popper.


E.g., groupthink political fashion against looking at alternative explanations of facts – apart from those which are screamed by a noisy “elite” of political activists – also prevails in climate “science”: CO2 is correlated to “temperature data”, and any other correlation is banned, e.g. water vapour – a greenhouse gas which on this analysis contributes far more, about 25 times more, to the greenhouse effect than CO2 – has been declining since 1948 according to NOAA measurements.  This water vapour decline is enough to cancel most of the temperature rise, CO2 having a trivial contribution owing to the negative feedback from cloud cover, which the IPCC ignored in all its 21 over-hyped models.

[Graph: water vapour fall cancels out CO2 rise]

Above: NOAA data on declining humidity (non-droplet water vapour, which absorbs heat as a greenhouse gas).  Below: satellite data on Wilson cloud chamber cosmic radiation effects on cloud droplet formation, and the long term heating caused by a fall in the abundance of cloud water droplets, which reflect solar radiation back into space, cooling altitudes below the clouds.

[Graph: cosmic rays vs cloud cover]

When the IPCC does select an “alternative” theory to discuss in a report, it is always a strawman target, a false model that they can easily debunk.  E.g. cosmic rays don’t carry any significant energy into earth’s climate, so “solar forcing” by cosmic rays (which the IPCC analyses and correctly debunks) is a strawman target.  But we don’t need a lengthy analysis to see this.  Cosmic radiation produces a radiation dose of 1 Gray for every 1 Joule of ionizing radiation absorbed per kilogram of matter.  The prompt lethal dose of ionizing radiation is less than 10 Grays, or 10 Joules per kg.  Therefore it’s obvious from the energy-to-dose conversion factor alone that cosmic rays can’t affect the energy balance of the atmosphere, for if they could, we’d be getting lethal doses of radiation.  What instead happens is a very indirect effect on climate, which produces the very opposite effect to the “solar forcing” which the IPCC considered.
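The energy-balance argument can be made quantitative with a back-of-envelope sketch. All inputs are my own round-number assumptions: a sea-level cosmic ray absorbed dose of roughly 0.3 mGy per year, an atmospheric column mass of about 1.0e4 kg per square metre, and a deliberately generous factor of 100 to allow for the greater absorption at high altitude.

# Back-of-envelope bound on cosmic ray power input versus solar input.
SECONDS_PER_YEAR = 3.15e7
DOSE_RATE = 3e-4          # Gy/year = J/kg/year: rough sea-level cosmic ray dose (assumed)
COLUMN_MASS = 1.0e4       # kg of air above each square metre of ground (assumed)
ALTITUDE_FACTOR = 100     # generous allowance: absorption is much larger high up (assumed)
SOLAR_CONSTANT = 1361     # W/m^2 at the top of the atmosphere

cosmic_flux = DOSE_RATE * COLUMN_MASS * ALTITUDE_FACTOR / SECONDS_PER_YEAR  # W/m^2
print(cosmic_flux)                      # ~1e-5 W/m^2
print(cosmic_flux / SOLAR_CONSTANT)     # ~7e-9: direct energy delivery by cosmic rays is negligible

Even with the generous altitude factor, cosmic rays deliver of order a hundred-millionth of the solar input, which is the point being made above: any climate effect must be indirect.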

While solar forcing – that is to say, direct energy delivery by cosmic rays, causing climate heating – would imply that an increase in cosmic rays causes an increase in temperature, the opposite correlation occurs with the “Wilson cloud chamber mechanism”, because cosmic rays leave ionization trails around which cloud droplets condense, which cool (not heat) the altitudes below the cloud.  This is validated by data (graphs above).  But the media sticks to considering the false “solar forcing” theory as the only “(in)credible alternative” to the IPCC’s no-negative-feedback CO2-temperature correlation models.  There is no media discussion of any alternative that is remotely correct.

The reason for stamping out dissent, and making taboo any discussion of realistic alternative hypotheses, is the hubris of dictatorship, which is similar in some ways to pseudo-democratic politics.  The claim in democratic ideology is that we have freedom of the democratic sort, but democracy in ancient Greece was a daily referendum on issues, not a vote only once in four years (i.e., 4 x 365 times fewer votes than democracy) for an effective choice between one of two dictator parties of groupthink ideology, ostensibly different but really joined together in an unwritten cartel agreement to maintain a fashionable status quo, even if that involves an ever increasing national debt, threats to security from fashionable disarmament ideology, and the funding of groupthink money-grabbing quack scientists who only want to award each other prizes and shut down “unorthodox” or honest research.

Anyone who points out the problems of calling this “democracy”, and suggests methods for achieving actual democracy (e.g. with daily online referendums using secure databases of the sort used for online banking), is attacked falsely as being in favour of anarchy or whatever.  In this way, no progress is possible and the status quo is maintained.  (Analogous to groupthink dictatorship in contemporary politics and science is the money-spinning law profession as described by former law court reporter Charles Dickens in Bleak House: “The one great principle of the English law is, to make business for itself. There is no other principle distinctly, certainly, and consistently maintained through all its narrow turnings. Viewed by this light it becomes a coherent scheme, and not the monstrous maze the laity are apt to think it. Let them but once clearly perceive that its grand principle is to make business for itself at their expense…”  Notice that I’m not critical here of the status quo, but of the hypocrisy used to cover up its defects with lying deceptions.  If only people were honest about the lack of freedom and the need for censorship, that would reduce the stigma of bigoted dictatorial coercion behind “freedom”.  As it is, we instead have a “freedom of the press” to tell lies and make facts taboo, and to endlessly proclaim falsehoods as urgent “news” in an effort to brainwash everyone.)

Dr Woit argues rightly “… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

But he states: “There is a long history and a deeply-ingrained culture that helps mathematicians figure out the difference between promising and empty speculation, and I believe this is something theoretical physicists could use to make progress.”

Well, prove it!

On March 26, 2014, The British Journal for the Philosophy of Science published a paper by philosophers Richard Dawid, Stephan Hartmann, and Jan Sprenger, “The No Alternatives Argument”:

“Scientific theories are hard to find, and once scientists have found a theory, H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this article. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the ‘no alternatives argument’) is frequently used in science and therefore deserves a careful philosophical analysis.”  (A PDF of their draft paper is linked here.)

The problem for them is that the “no alternatives argument” is used in the popular media and popular politics to “close down discussion” of any argument as simply taboo or heresy, if there is even a hint that it could constitute “distracting noise” drawing attention, let alone funding, away from the mainstream bigots and the mainstream hubris.  This is well described by Irving Janis in his treatment of “groupthink”, which shows that collective enforcement of dogma eventually fails when it resorts to direct or indirect subjective censorship of alternative viewpoints.  The whole notion of “dictatorship” being bad comes down to the banning of discussion of alternative viewpoints which turn out correct; in other words, it’s not “leadership” which is the inherent problem but:

“leadership + stupid, bigoted, coercive lying about alternatives being rubbish, when the leadership hasn’t even bothered to read or properly evaluate the alternatives.”

Historically, progress of a radical form has – simply because it has had to be radical – been unorthodox, been done by unorthodox people, and been censored by the mainstream accordingly.  The argument the mainstream makes is tantamount to claiming that anyone with an alternative idea must be a wannabe dictator, who should try to overthrow the existing Hitler by first joining the Nazi Party, then working up the greasy pole, and finally reasoning in a gentlemanly way with the Great Dictator.  That’s absurd, based on the history of science.  Joule the brewer, who discovered the mechanical equivalent of heat from the energy needed to stir vats of beer mechanically, did not go about trying to get his “fact” (ahem, “pet theory” to mainstream bigots) accepted by becoming a professor of mathematical physics and a journal editor.  You cannot get a “peer” reviewer to read a radical paper.  The people who did try to go down the orthodox route when they had a radical idea, like Mendel, were censored out, and their facts were eventually “re-discovered” when others deemed it useful to do so, in order to resolve a priority dispute.

Put another way, the key problem of dictatorship is that it turns paranoid, seeing enemies everywhere in merely honest criticisms and suggestions for improvements, and eliminates those: the “shoot the messenger” fallacy.  What we need is honest, not dishonest, censorship.  We need to censor out quacks, the people who “make money in return for falsehood”, and encourage objectivity.  Power corrupts, so even if you start off with an honest leader, you can end up with that leader turning into a paranoid quack.  Only by censoring in the honest interests of objectivity, rather than to protect fashion from scrutiny, criticism, and improvement, can progress be made.

Woit rejects philosopher Richard Dawid’s invocation of the “no alternatives” delusion to defend string theory from critics, by stating: “This seems to just be an argument that HEP theorists have been successful in the past, so one should believe them now …”.   Dawid uses standard “successful” obfuscation techniques, consisting of taking an obscure and poorly defined argument and making it even more abstruse with Bayesian probability theory, in which previous successes of a mainstream theory can be used to quantitatively argue that it is “improbable” that an alternative theory dismissed by the mainstream will overturn the mainstream.  This has many objections which Dawid doesn’t discuss.  The basic problem is that of Hitler, who used precisely this implicit Bayesian trust, built from his “successes” in unethically destroying opponents, to gather further support for his increasingly mainstream party.  Anyone who objected was simply reminded of Hitler’s “good record”: not just the iron cross first class, but his tireless struggle, etc.  The fault here is that probability theory is non-deterministic and assumes a lack of bias-causing mechanisms which control the probability.

If you want to model the failure risk of a theory, you should look at the theory itself, e.g. eugenics for Hitler or the cosmic landscape for string theory, and see whether it is scientific in the useful sense, beyond providing corrupt bigots with the power and authority to censor more objective research which disproves it.  Instead, Dawid merely looks at the history of mainstream theory successes, ignoring the issues with the theories, and simply concludes that since mainstream hubris is generally good at ignoring better ideas, it will continue to prevail.

This of course is what Bell’s inequality did when it set up a hypothesis test between equally false alternatives, including a “proof” of the viability of quantum mechanics based on the false assumption that quantum mechanics consists solely of a non-relativistic single-wavefunction amplitude for an electron (no path integral second quantization, with an amplitude for every path).  By setting up a false default hypothesis, you can “prove” it with false logic.

For example, in 1967 Alexander Thom falsely proved by probability theory that there was a 99% probability that the ancient Britons who built Stonehenge used a “megalithic yard” of 83 cm length.  He did this by a standard two-theory comparison hypothesis test with standard probability theory: he compared the goodness of fit of two hypotheses only, excluding the real solution!  The two false hypotheses he compared in his probability theory test were his pet theory of the 83 cm megalithic yard, and random spacing.  He proved, correctly, that if the correct solution is one of these two options (it isn’t, of course), then the data show a 99% probability that the 83 cm megalithic yard is the correct option.  Thom’s error, and the error of all probability theory and statistical hypothesis tests (chi-squared, Student’s t), is that they compare only one candidate hypothesis or theory with one other, i.e. you assume without any evidence or proof that you know the correct theory is one of the two options that have been investigated.  The calculation then tells you the probability that the data corresponds to one of those two options.  This is fake, because in the real world there are more than just 2 options, or 2 theories to compare.  Bell’s inequality neglects inclusion of path integrals, with relativistic 2nd quantization multipath interference causing indeterminacy, rather than the 1st quantization non-relativistic “single wavefunction collapse” metaphysics.  Similarly, in 1973 Hugh Porteous disproved Thom’s “megalithic yard” by invoking a third hypothesis: that distances were paced out.  Porteous modelled the pacing option using a normal distribution and showed it better fitted the data than Thom’s megalithic yard!  This is a warning from history about the dangers of “settling the science”, “closing down the argument”, and banning alternative ideas!
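The Thom/Porteous fallacy is easy to reproduce numerically. A minimal sketch (entirely synthetic data, with all distributions and numbers invented for illustration, so this is the logic of the fallacy rather than Thom’s actual survey data): generate data from an excluded third hypothesis, then watch a two-way likelihood contest “prove” the wrong theory with overwhelming confidence.

import math, random

random.seed(1)

def log_likelihood(data, mu, sigma=1.0):
    """Gaussian log-likelihood of the sample under a candidate hypothesis."""
    return sum(-0.5*math.log(2*math.pi*sigma**2) - (x-mu)**2/(2*sigma**2) for x in data)

# True process (hypothesis C) -- excluded from the original two-way comparison:
data = [random.gauss(5.0, 1.0) for _ in range(200)]

ll_A = log_likelihood(data, 4.0)   # the pet theory
ll_B = log_likelihood(data, 8.0)   # the strawman rival
ll_C = log_likelihood(data, 5.0)   # the real answer, never tested at first

# Two-way contest: A crushes B, "proving" A with near-certainty...
print(ll_A - ll_B)    # large positive log-likelihood ratio in favour of A
# ...until the excluded hypothesis is finally allowed into the comparison:
print(ll_C - ll_A)    # also large and positive: C beats the "proven" theory A

The hypothesis test never measures the probability that a theory is true; it only ranks the candidates you chose to admit into the contest.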

Conjectured theory: SO(6) → SO(4) x SO(2) = SU(2) x SU(2) x U(1)

Conjectured electroweak/dark energy/gravity symmetry theory:

SO(6) → SO(4) x SO(2)
= SU(2) x SU(2) x U(1)

If this is true, the Standard Model should be replaced by SU(3) x SO(6), or maybe just SO(6), if SO(6) breaks down two ways: once as shown above, and also as in the old Georgi-Glashow SU(5) grand unified theory (given below), where SO(6) is isomorphic to SU(4), which contains the strong force’s colour charge symmetry, SU(3).  (See also Table 10.2 in the introduction to group theory for physicists, linked here.)

Why do we want SO(6)? Answer: Lunsford shows that SO(3,3), a non-compact real form of SO(6), unifies gravitation and electrodynamics in 6 dimensions.

SO(4) = SU(2) x SU(2) is a well known isomorphism at the Lie algebra level (see previous post), as is the fact that SO(2) = U(1).

In olden times (circa 1975-84) the media was saturated with the (wrong) prediction of proton decay via the (now long failed) grand unified theory SU(5), usually embedded in the larger group SO(10). The idea was to break SO(10) down into SO(6) x SO(4), and from there one of the steps, namely the reduction SU(2, Right) → U(1, Hypercharge) (based on the fact that the weak force is left-handed, so the Yang-Mills SU(2) model reduces to a simple single element U(1) theory for right-handed spinors), may be of use to us for recycling purposes (to produce a better theory):

SO(10)
→ SO(6) x SO(4)
= SU(4) x SU(2, Left) x SU(2, Right)
→ SU(3) x SU(2, Left) x U(1)

Well, maybe we don’t need the reduction of SU(4) to SU(3), but we do want to consider the symmetry breakdown of SO(6), because Lunsford found that group useful:

SO(6)
→ SO(4) x SO(2)
= SU(2, Left) x SU(2, Right) x U(1, Dark energy/gravity)
→ SU(2, Left) x U(1, Hypercharge) x U(1, Dark energy/gravity)
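A quick generator count (my own sanity check, using the standard dimension formulas) makes clear which steps in these chains are genuine Lie algebra isomorphisms, and which are symmetry-breaking reductions to a subgroup:

def dim_SO(n): return n*(n-1)//2      # number of generators of SO(n)
def dim_SU(n): return n*n - 1         # number of generators of SU(n)
dim_U1 = 1

print(dim_SO(6), dim_SU(4))                  # 15 15: so(6) = su(4) is a true isomorphism
print(dim_SO(4), dim_SU(2) + dim_SU(2))      # 6 6:   so(4) = su(2) + su(2), true isomorphism
print(dim_SO(2), dim_U1)                     # 1 1:   so(2) = u(1)

# But SO(4) x SO(2) has only 6 + 1 = 7 generators, against SO(6)'s 15:
print(dim_SO(4) + dim_SO(2))                 # 7: a subgroup, i.e. a breaking, not an equality
# Likewise SO(6) x SO(4) has 15 + 6 = 21 generators, against SO(10)'s 45:
print(dim_SO(6) + dim_SO(4), dim_SO(10))     # 21 45: again a breaking, not an isomorphism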

This is pretty neat, because it also fits in with Woit’s conjecture showing how to obtain the normal electroweak sector charges with their handedness (chiral) features, by using a correspondence between the vacuum charge vector and Clifford algebra to represent SO(4), whose U(2) symmetry group subset contains the 2 x 2 = 4 particles in one generation of Standard Model quarks or leptons, together with their correct Standard Model charges; for details see pages 13-17, together with page 51, of Woit’s 2002 paper, QFT and Representation Theory.

(It’s abstract, but when you think about it, you’re just using a consistent representation theory to select the 4 elements of the U(2) matrix from the 16 of SO(4). Most of the technical trivia in the paper is superfluous to the key example we’re interested in, which occurs in the table on page 51. Likewise, when you compare the elements of the three 2×2 Pauli matrices of SU(2) to the eight 3×3 Gell-Mann matrices of SU(3), you can see that the first three of the SU(3) matrices are analogous to the three SU(2) matrices; in fact the Pauli matrices appear exactly as the upper-left 2×2 blocks of the first three Gell-Mann matrices. In other words, you can pictorially see what’s going on if you write out the matrices and circle those which correspond to one another.)
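To see that correspondence explicitly, here is a short sympy sketch (my own illustration, using the standard textbook definitions of the Pauli and Gell-Mann matrices) checking that the upper-left 2×2 block of each of the first three Gell-Mann matrices is exactly the corresponding Pauli matrix:

import sympy as sp

# Pauli matrices (generators of SU(2), up to a factor of 1/2):
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]

# First three Gell-Mann matrices (generators of SU(3), same normalization convention):
lam = [sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),
       sp.Matrix([[0, -sp.I, 0], [sp.I, 0, 0], [0, 0, 0]]),
       sp.Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 0]])]

# The upper-left 2x2 block of each lambda_i is exactly the corresponding Pauli matrix:
for s, l in zip(sigma, lam):
    print(l[:2, :2] == s)   # True, True, True

This is the pictorial “circling of corresponding elements” done by machine: the SU(2) structure sits inside SU(3) as the top-left block, acting on the first two colour states.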