Calculation demonstrating that quantum gravity is a dark energy effect

Calculation demonstrating that quantum gravity is a dark energy effect by analogy to the way the Casimir force arises from the zero point electromagnetic field. References:

http://vixra.org/abs/1305.0012 Quantum Gravity is a Result of U(1) Repulsive Dark Energy

http://vixra.org/abs/1111.0111 U(1) X SU(2) X SU(3) Quantum Gravity Successes

http://vixra.org/abs/1301.0187 Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature

http://vixra.org/abs/1408.0151 Massless Electroweak Field Propagator Predicts Mass Gap

http://vixra.org/abs/1301.0188 The Quantum Gravity Lagrangian

http://vixra.org/abs/1302.0004 Understanding Confirmed Predictions in Quantum Gravity

http://vixra.org/abs/1304.0175 Improved Diagram for Explaining Quantum Gravity

http://vixra.org/abs/1511.0037 Populist journal autistic arrogance of stringy editor

http://vixra.org/abs/1511.0067 Hubris: general populist autism used by mass media to censor any real innovation until fashionable enough to sell using false hype arguments (popularity-based hype, not evidence-based).

Popperazi: Lenny Susskind’s crass dismissal of string theory critics

“You might be guessing the probability of something that–unlike cancer—does not even exist, such as strings … In this way, Bayes’ theorem can promote pseudoscience and superstition … If you aren’t scrupulous in seeking alternative explanations for your evidence … Bayes’ theorem, far from counteracting confirmation bias, enables it.” – John Horgan, Bayes’ Theorem.

Professor Leonard Susskind invented the word “Popperazi” for the string critics, over 10 years ago in the New Scientist: “Susskind … attacking those who ask for falsifiable predictions as “Popperazi” ” – http://www.math.columbia.edu/~woit/wordpress/?p=312 So I guess Susskind is one of the people who defend ad hoc, untested, elitist speculation hype, because they benefit from it. Witten’s 16 November 2006 letter in Nature v. 444, p. 265, advised string theorists not to engage directly in discussions about their use of elitist power to suppress dissent, for fear of fuelling the controversy: “A direct response may just add fuel to controversies.” Convenient, if Witten can’t give a direct answer!

It’s groupthink hubris: the mainstream wants to defend itself by saying “there are no alternatives” while it publicly dismisses or ridicules “alternatives” as being speculative (without acknowledging the hypocrisy). I don’t see a problem with purely mathematical speculations in physics. Some of the basic superstring theory mathematics of spinors is interesting, and defenders point out that string theory does have relevance in post-dictions, like explaining the Regge trajectory relationship between hadron spin and mass. What’s irritating is the way “string theory” (a set of ad hoc ideas) is built on implicit faith in existing foundations. I’d prefer, for instance, to allow publication of speculative ideas on replacing “gravitation” with a Casimir effect of dark energy, e.g. http://vixra.org/abs/1305.0012

However, one problem is that even amongst friends with similar interests, radical ideas are very easy to censor and ignore. Both leading string theory critics who wrote books about string theory in 2006, namely Lee Smolin and Peter Woit, have their own pet theories. Woit’s is the deepest, because he claims to derive the electroweak symmetry charges (including zero weak isospin charge for right handed spinors) by using Clifford algebras in representation theory to pick out the unitary group U(2) as a subset of SO(4), the usual 4-d spin orthogonal group, which is topologically isomorphic to SU(2) X SU(2). See page 51 of Woit’s 2002 paper, http://arxiv.org/abs/hep-th/0206135 where, after showing at the top of that page that “the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model”, he adds a damning criticism of string theory later on the same page: “a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings…” (See also page 17 of that paper for the Lie algebra generators.)

Thus, the limelight of string theory distracts attention from alternative ideas, and this seems to be the motivation of Woit and Smolin. On a lesser level, in 1996 – two years prior to experimental (or rather observational) confirmation – I submitted a calculation replacing “gravitation” with a Casimir effect of dark energy to 9 journals, predicting a cosmological acceleration of 10^{-10} m/s^2, i.e. a small, positive cosmological constant (a dark energy effect). Nobody wanted to publish the calculation, because the result was either “too small” to measure or “the wrong sign and far too small” to be compatible with string theory’s massive negative cosmological constant. I got these excuses from Classical and Quantum Gravity, from Nature’s editor Philip Campbell, etc.: string theory had “ruled it out”, despite the lack of evidence supporting string theory. Fair enough, I wasn’t a big shot.

But I also received rejections from relatively friendly physicists who were also apparently biased by string theory’s claims of a large negative cosmological constant. For instance, in 1996 the physics teacher and fellow Electronics World magazine contributor Mike Renardson set up a journal for alternative ideas called “First Thoughts”. He wrote to say that 10^{-10} m/s^2 was too small to ever measure, rejecting the paper! By laboratory standards and normal experiment times he was right, of course, but such a small acceleration adds up to a large velocity when it has been continuing for half the age of the universe, as astronomer Saul Perlmutter discovered in 1998. Supernovae at half the age of the universe (over 5 billion light years away) were accelerating at 10^{-10} m/s^2. Thus dark energy had been predicted in advance.
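To see why “too small to ever measure” fails over cosmological times, here is a minimal back-of-envelope sketch in Python; the constant-acceleration assumption and the 7 billion year figure are illustrative round numbers, not part of the original calculation:

# A tiny acceleration sustained for billions of years accumulates into a
# velocity that is a sizeable fraction of c (constant a assumed for simplicity).
a = 1.0e-10               # predicted cosmological acceleration, m/s^2
year = 3.156e7            # seconds in a year
t = 7.0e9 * year          # roughly half the age of the universe, in seconds
c = 2.998e8               # speed of light, m/s

v = a * t                 # velocity accumulated (simple non-relativistic estimate)
print(f"v = {v:.2e} m/s, i.e. {v/c:.1%} of c")
# prints: v = 2.21e+07 m/s, i.e. 7.4% of c -- easily visible in red-shifts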

(I did get the prediction published as an 8 page paper, both via the letters column of Electronics World, October 1996, page 896, and in full in February 1997 in “Science World” journal, ISSN 1367-6172. However, Renardson’s attitude predominated at more suitable journals like Nature and Classical and Quantum Gravity, and even at New Scientist, which adopts what I’d call a hypocritical attitude: publishing celebrity or mainstream science “news” after it has gone viral, while refusing the hard work of checking the facts and publishing news purely on objective merit.)

If anyone’s interested in replacing “gravitation” with a Casimir effect of dark energy, see http://vixra.org/abs/1305.0012. Susskind’s ad hoc explanation of the small positive cosmological constant using the anthropic landscape is still frequently and conveniently used to dismiss calculations!

Electromagnetism SU(2) theory experiment

Catt and Walton’s lecture at Nottingham University on a key experiment

There are actually two different electric charges, hence SU(2) electromagnetism, not merely one charge which can travel backwards in time to reverse its sign (as Abelian U(1) QED dogma untruthfully asserts). On 9 October 2013, Ivor Catt (author of ‘Crosstalk (Noise) in Digital Systems,’ in IEEE Trans. on Elect. Comp., vol. EC-16, December 1967, pp. 749–58), and David Walton (who after his PhD in atomic physics worked under Nobel laureate E. T. S. Walton at Trinity College) presented a new experiment to the ASL Electromagnetism Seminar at Nottingham University.

Before reviewing the physical mechanism of electromagnetism yet again, a brief comment on personal bad attitude problems in science. What’s curious about these guys is that, like all academic hotshots, they’re bigoted “elitists” camouflaged as caring, brilliant socialists, just like the sometimes abusive, reliably patronising quantum field theorist Dr Peter Woit of Columbia University maths department (string theory critic and author of an interesting new textbook, Quantum Theory, Groups and Representations, to be published by Springer in 2015) or category theorist Dr Marni Dee Sheppeard: they’re all people who won’t spend a second listening or objectively responding to anybody, but just attack under a false charge, thus refusing to engage with the actual argument. They do this by inventing a vacuous claim that anyone who has a technical argument must be ignorant of science, or anti-feminist, etc., and launching into a tirade about that, completely ignoring the point. I’ve spent countless hours talking to Catt and publishing videos (also here and here) and journal articles about his work (Electronics World feature articles in August 2002, April 2003, January 2004, plus an op-ed), but that doesn’t lead to anything but abusive shouting as soon as I make an objective criticism. It’s the same with all egotists, inside the mainstream or not! Simplify the 10 commandments to 1, and you are crucified.

Feynman versus Dirac

“Already in the beginning I had said that I’ll deal with single electrons, and I was going to describe this idea about a positron being an electron going backward in time, and Dirac asked, ‘Is it unitary?’ I said, ‘Let me try to explain how it works, and you can tell me whether it is unitary or not!’ I didn’t even know then what ‘unitary’ meant. So I proceeded further a bit, and Dirac repeated his question: ‘Is it unitary?’ So I finally said: ‘Is what unitary?’ Dirac said: ‘The matrix which carries you from the present to the future position.’ I said, ‘I haven’t got any matrix which carries me from the present to the future position. I go forwards and backwards in time, so I do not know what the answer to your question is.’”

– Feynman on his problem with Dirac at the 1948 Pocono Conference. The argument concerned Feynman’s U(1) theory as applied to pictorial Feynman diagrams, in which positrons (positively charged electrons) are represented as electrons travelling backwards in time, i.e. the simplistic single-charge U(1) QED theory. Quotation source: J. Mehra and K. A. Milton, Climbing the Mountain: The Scientific Biography of Julian Schwinger, Oxford University Press, 2000, page 233. See also the excellent discussion in that book, on pages 231-234, of how badly Feynman was treated by Bohr, Teller, Dirac and others, who, instead of being constructive and helping to build and improve Feynman’s theory of multipath interference via the path integral, just preferred to try to shoot it down. Actually, the path integral works for any symmetry group, for example SU(2) and SU(3), and not just for U(1) electrodynamics, which has the “Dirac problem” that positrons are represented as electrons going backwards in time in the Feynman diagrams.


Perhaps this excessive and damaging egotism from Catt, and also from leading quantum field theorists, is due to bad experiences with other people who have wasted their time with alternative ideas in the past, but that’s no excuse for taking out your frustrations on other people who are offering constructive theories that are admittedly incomplete and in need of further development and wider circulation before funding. Now, when I read a book, I try to see what the strongest arguments are, and focus on those. That’s called objectivity. “Critics” who ignore the substance of the argument and make a show out of inventing spurious problems are being subjective. Emotional rants always leak into “peer” reviews, because if “peer” reviewers abuse power where possible to reduce their workload, they then tell lies about “rudeness” to escape the complaints. So they’ll always end up claiming that a radical innovation which does have evidence behind it is an “exceptional” case that doesn’t deserve their normal objective methods!

Anyway, the physics.  The new experimental data for the charged capacitor justifies the following interpretation of the Dirac equation’s spinor.

Catt anomaly diagram (animated GIF by Eugen Hockenjos).

As the diagram above proves, the classical electromagnetism which Dirac and Weyl assumed true when building U(1) Abelian quantum electrodynamics contains an error due to a mathematical coincidence (or accident, as Catt put it in his 1995 book Electromagnetism 1): the false squaring of the superimposed electric fields for trapped Heaviside energy current causes a four-fold increase in energy density, which exactly compensates for the false assumption that the magnetic field energy disappears. The maths of electromagnetism works, but that is provably due to a fluke, a coincidence, an accident of mere numerology. (Catt buries or hides the key evidence amid much clutter on his internet site.) The physics of the EM mechanism is quite different from U(1). However, Catt himself makes an error of inconsistency in his analysis, in not applying his own “separation of fields” (so vital for him when separating the TEM wave fronts flowing into the cable from the reflected TEM wave coming back from the open end of the cable to superimpose) to the individual fields from each conductor within the transmission line! You then get the fact that the field quanta are charged, and you also get SU(2) QED.

The classical Maxwell (and apparently Abelian) electromagnetic equations are derived from a simple mechanism which cuts the SU(2) Yang-Mills equations down to the Maxwell ones, by eliminating the Yang-Mills charge transfer term. Since the charged EM bosons are massless, they cannot propagate alone, because the magnetic field resulting from moving charge prevents the motion of massless charge! The ONLY possible way massless charges can move is by cancelling out their magnetic fields using a two-way equilibrium of exchange. Any two similar charges must always be exchanging equal charge in opposite directions each second, to cancel out the magnetic fields, eliminating infinite self-inductance. Unless this happens, there is no electromagnetic field. So this mechanism means that massless SU(2) charged field quanta can never change the amount of charge present in an onshell particle: if it emits 1 coulomb per second, it also absorbs precisely the same amount. Hence, the Yang-Mills net charge transfer term is always zero for massless field quanta, effectively turning the Yang-Mills equations into Maxwell’s in that case (massless charged field quanta). The Catt analysis of the Heaviside transmission line theory, when extended to include an examination of what’s happening in each conductor within the transmission line, shows that accelerating free electrons radiate radio frequency energy to electrons in the other conductor, and this mutual exchange enables the propagation of electricity. The acceleration of electrons at the electrical wave front on the surface of each conductor in a transmission line is then in the opposite direction to that in the other conductor.

This is the cause of all the shouting from Catt when I tried to get a discussion of this point. His response to any objective criticism or improvement suggestion is to endlessly repeat himself more loudly, precisely what the mainstream does when string theory is criticised.  (A complete waste of time, since I had already read his books.)  This paranoid and delusional approach prevents any kind of critical collaborative progress, except homage.  I’m not interested in a dead science that either makes no progress, or is dictatorially constrained to stumble or crawl along worn paths in a very few research directions, with the most obvious and vital applications blocked by rude, silly censorship.

Weyl applied Abelian U(1) gauge theory to electromagnetism, despite the fact that U(1) has only one charge, whereas electromagnetism has 2 charges (positive and negative). There are also 2 charges in SU(2) isospin for weak forces, and 3 charges for SU(3). Since electromagnetism has 2 charges, not one as in U(1), why not therefore use SU(2) for electromagnetism? This is just the weak theory with a massless neutral Z and massless charged W particles. What is “charge”? Has anyone ever seen or measured an “electron”? What they have measured is a trapped field with a quantized charge and mass, and they named that an “electron”, much the way Pluto was once falsely classified as a planet!

(When interpretations of nature change, like naming conventions, nature remains unaltered. Pluto didn’t suddenly change when the consensus of expert opinion downgraded it from planet to planetoid. You cannot therefore use today’s interpretation of any fact in science as a dogmatic “fact” with which to censor out advances, which may involve new theories that re-interpret the facts differently. Put another way, confusing facts with their interpretations is a censorship tool used by peer-consensus, dogma-worshipping education, not by objective science, which tests new interpretations of facts.)

Planck scale electron cores are unobservable: nobody has seen them.  Only the fields and charge to mass ratio of “electrons” have been observed.  The sign of the charge (positive or negative) of the electron is therefore inferred from the field that is measured: the only thing observable about the “charge” of an electron is its field.  Therefore, the field quanta convey the information on whether the “charge” is positive or negative.  This can only happen if the 2 “extra polarizations” that field quanta have are charge indicators.

Field quanta have 4 polarizations, whereas uncharged photons (which don’t convey charge sign information) have only 2 observable polarizations.  The extra 2 polarizations of field quanta must therefore carry the information to the observer as to whether the electric field is one due to a “charge” which is positive, or negative.  Therefore, observations of field do indicate that massless charged electromagnetic field quanta exist, in the context of trapped light velocity field energy in Catt’s experiment, the charge of the field quanta being denoted by the quantum numbers for the 2 additional degrees of polarization.

Dirac’s equation predicted antimatter, at first suggesting that protons are the anti-electrons. J. Robert Oppenheimer rejected Dirac’s interpretation in a paper in 1930, due to the mass difference between the electron and the proton (which we explain as conversion of screened QED charge into nuclear field energy; see the discussion further below). Dirac then accepted Oppenheimer’s arguments and revised the prediction to what came to be called the positron, discovered by Carl Anderson in 1932. This positron discovery has since been used, tragically, by bigots to “shut down the argument”, preventing a more careful analysis.

Sure, positrons arise as anti-electrons. But that can’t be the full story, or there’d be equal amounts of positrons and electrons around, since Dirac’s equation predicts that they are created (by pair production) in equal quantities. So although Oppenheimer’s objection to Dirac’s simplistic assignation of the proton as the anti-electron was sound on the mass asymmetry, Dirac’s identification did at least offer a solution to the question of “where is all the antimatter?”, a question which Oppenheimer doesn’t answer. We argue that both Dirac and Oppenheimer were being far too simplistic, and in doing so laid down a shaky dogmatic interpretative foundation for U(1) electrodynamics, which has messed up today’s electroweak theory.

SU(2) is now the electroweak group, with U(1) being dark energy, which causes gravitation by a Casimir type pseudo-attraction mechanism (plates being pushed together by spin-1 quanta on their external sides, not by the simplistic idea of a mutual exchange of spin-2 superstring gravitons, which don’t need to exist in order for gravity to work). In a chiral SU(2) theory of electromagnetism, the handed curl of the magnetic field vector around the direction of propagation of a charge is analogous to the left-handedness of the weak force SU(2). Woit showed neatly in 2002 that you can get the handedness of SU(2) electroweak charges by picking out U(2) as a subset of SO(4). (However, that doesn’t mean he is interested in really being objective.)

We can’t observe Planck size charges, only their fields, and we find two different kinds of fields: positive and negative. Positive is not merely an absence of negative charge (hole theory): an absence of negative charge is zero charge, not positive charge. Sure, in a sea of electrons a “hole” behaves analogously to positive charge. But an empty vacuum is not positively charged. So absence of negative charge is not automatically positive charge. Electromagnetism then is a 2 charge SU(2) gauge theory, a massless gauge boson version of SU(2).

If antimatter is produced in equal quantities by pair-production at energies above 1.022 MeV in the big bang, then where is it? It is in upquarks. They have 2/3rds of the positron charge. The missing 1/3 of the charge is due to the fact that electromagnetic pair production within femtometres of the core of a charge absorbs electromagnetic energy density in the polarization process, creating virtual particles which – when you have pairs (mesons) or triplets (baryons) in close proximity, sharing an overlapping vacuum polarization veil of virtual particles – convert EM field/charge energy into nuclear (weak and/or strong) fields. The vacuum polarization effect occurs out to a radius where the field strength is about 1.3 x 10^{18} volts/metre, Julian Schwinger’s IR cutoff for the running of the electric charge in QED. Many textbooks wrongly follow Dirac’s sea analogy, which ignores the IR cutoff and claims that pair production occurs throughout the entire vacuum. It doesn’t occur through the whole vacuum, because only gauge bosons – not virtual fermions – occur beyond a few femtometres: all observed couplings cease to run with energy when the energy is too low for onshell particles to be created (i.e. below twice the 0.511 MeV electron rest mass energy equivalent, or 1.022 MeV).
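As a numerical sanity check on the two figures quoted above (the ~1.3 x 10^{18} V/m Schwinger field strength and the “few femtometres” cutoff radius), here is a minimal Python sketch using only standard constants; the identification of this radius with the charge-conversion zone is of course this paper’s hypothesis, not textbook physics:

import math

m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J s
k    = 8.988e9      # Coulomb constant 1/(4 pi eps0), N m^2 / C^2

# Schwinger critical field: E_c = m_e^2 c^3 / (e hbar)
E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger field  = {E_c:.2e} V/m")       # ~1.32e18 V/m, as quoted

# Radius at which the Coulomb field k e / r^2 of a unit charge falls to E_c:
r = math.sqrt(k * e / E_c)
print(f"IR cutoff radius = {r * 1e15:.0f} fm")   # ~33 fm: 'within femtometres'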

When EM fields are attenuated by vacuum polarization (causing the effective charges to run by some logarithmic function of distance and energy), electromagnetic energy density is converted into the virtual mesons and virtual quarks and gluons that constitute nuclear fields, the SU(2) weak force and the SU(3) strong force. Thus, the reduction in the apparent electric charge of the upquark from the positron’s value of +1 to the value +2/3 needed to fit observations for protons (two upquarks and one downquark) and other hadrons is explained, and turned into a prediction, since we can make detailed calculations with this simple approach. If upquarks are electron antiparticles formed at very high energy, in addition to the simplistic Dirac/Oppenheimer antiparticle – the free positron which Anderson observed in 1932, produced where a >1.022 MeV gamma ray approaches a nucleus – then you:

1. Explain the apparent paucity of antimatter in the universe, and

2. Have a new way to predict the weak and strong nuclear force running couplings, by combining the principle of conservation of energy with the fact that one-third of the electric field energy of the electron exists in nuclear fields around upquarks in hadrons. Coulomb field energy (half the permittivity times the square of the electric field strength, in Joules per cubic metre) that’s converted into virtual-particle nuclear force gluon fields around upquarks in QED renormalization allows us to do simple QCD calculations of nuclear interaction running couplings from energy conservation! Genius or what? Anyone can calculate the Coulomb field strength and energy density around an electron, and once you integrate over shells of expanding radius you get the total energy (see the sketch after this list); incorporating the logarithmic running of the charge from QED then tells you how the QCD color force varies inversely, getting stronger as the QED running charge gets weaker: the sum of both is equal to that of an electron.

3. As Julian Schwinger explained in 1948, the running of couplings that causes charge renormalization in QED is accompanied by a renormalization of mass; in other words, the virtual particles created by pair production in intense fields around a quark core contribute some mass to the quark core. In fact, most of the mass of hadrons is generated in this way. A simple model of this allows precise predictions to be made (see http://vixra.org/abs/1408.0151). Nobel Laureate Dr Gerardus ‘t Hooft responded that the paper was unsuitable for his Foundations of Physics “because it does not cite current peer-reviewed literature”. (That’s a catch-22, because this is new stuff, with no literature and no “peers” in the field as such. Duh!)
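As a sketch of the energy bookkeeping in point 2 above, the classical Coulomb field energy stored between two radii follows from integrating the energy density over spherical shells. The integral itself is standard classical electromagnetism; the further step of equating the absorbed portion with nuclear field energy is this paper’s hypothesis, and the logarithmic running of the charge is deliberately omitted to keep the sketch minimal:

import math

e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
eV   = 1.602e-19    # joules per electronvolt

def coulomb_field_energy(a, b):
    """Classical field energy (J) between radii a and b around a point
    charge e: the integral of (eps0/2) E^2 over shells 4 pi r^2 dr, with
    E = e / (4 pi eps0 r^2), gives (e^2 / (8 pi eps0)) (1/a - 1/b)."""
    return e**2 / (8 * math.pi * eps0) * (1.0/a - 1.0/b)

# Field energy outside the ~33 fm Schwinger cutoff (out to infinity):
print(f"{coulomb_field_energy(33e-15, float('inf')) / eV / 1e3:.0f} keV")
# ~22 keV; the energy absorbed inside the cutoff by pair polarization is
# what the text identifies with the nuclear (weak/strong) field contributions.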

(In 2006, Harvard string theorist Dr Lubos Motl hit the nail on the head when he wrote, on Woit’s blog: “Virtually all of string theorists are nice people who never argue with anyone else, they’re not chauvinists, and most of them are feminists. Most of them also think that string/M-theory are robust twin towers that are not threatened by any social effect or passionate proponents of alternative theories or proponents of no theories, and they almost always try to avoid interactions that could lead to tension which also gives them more time for serious work. Almost no string theorists drive SUV and they produce a minimum amount of carbon dioxide.” Dr Motl was right that they usually “never argue with anyone else”, and that’s the whole problem: they’re elitists who sit on their high horses in the clouds and refuse to engage in discussions with objective critics, or to participate in constructive arguments, despite all their camouflaged journals of bigotry that paint their work as being precisely the opposite of that. I made this point in my 2011 paper by quoting the famous string theorist Ed Witten, who actually wrote to Nature instructing string theorists to deny critics the oxygen of publicity by refusing to engage in discussions. Woit, Smolin, Catt, Her Majesty the Queen, and many others maintain prestige that way.)

Physical Review Letters’ and arXiv’s weird, egotistic, and frankly vile “elitist” (not peer) moderators proved not only a lack of interest in non-standard alternative ideas beyond superstring theory that actually work (predicting the cosmological constant accurately in 1996, long before dark energy was even discovered in 1998 from supernovae red-shift observations), but demented, mad bigotry against an attitude of no-bullshit progress:

Nigel says: July 7, 2005 at 7:15 pm Editor of Physical Review Letters says

Sent: 02/01/03 17:47

Subject: Your_manuscript LZ8276 Cook

MECHANISM OF GRAVITY

Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories … Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

Now, why has this nice genuine guy still not published his personally endorsed proof of what is a “currently accepted” prediction for the strength of gravity? Will he ever do so? …

Peter Woit says: July 7, 2005 at 7:27 pm I’m tempted to delete the previous comment, but am leaving it since I think that, if accurate, it is interesting to see that the editor of PRL is resorting to an indefensible argument in dealing with nonsense submitted to him (although the “…” may hide a more defensible argument). Please discuss this with the author of this comment on his weblog, not here. I’ll be deleting any further comments about this.

[Note to Dr Woit: the email correspondence went on with PRL’s associate editor for months, with them repeatedly moving the goalposts as I revised the paper to incorporate suggestions, until they simply refused to publish anything on this topic. They thus deliberately wasted my time with lies. The same goes for Physics Forums, on which it is heresy to engage in a serious objective campaign to make progress; trivial discussion of mainstream dogma is fine. In the same way, serious politics campaigning is banned from the House of Commons coffee bars because it always ends in punch ups; small talk about football or cricket results is, however, encouraged.

Freedom of speech is something that makes our democracy different to Hitler’s and Stalin’s regimes. Except that there is a rule in small print that it must not be heard if it is heretical. Nobody points out that Hitler and Stalin had no real problems with people shouting their praise. It is therefore only in the treatment of heretics and outsiders that “freedom” or its absence can really be assessed. Nobody seriously disputes that Hitler wanted everyone he was friends with to be free to praise the Nazis. Freedom depends on whether progress is free and unopposed, or is blocked by bigots, requiring tougher means. Historian Edward Gibbon wrote controversially that education is only of use where it is practically superfluous; he would have been less controversial, I suspect, if he had written instead that DIPLOMACY is only of use where it is practically superfluous. Diplomacy only seems to “prevent wars” and fights where the people are honest, engaging with critics and thus non-dictatorial/civilized on both sides; diplomacy fails where it is most needed, where one side is a bigoted dictator who refuses to engage in objective discussion.

One possible way to proceed would be to publish a book quoting all the errors in fashionable textbooks, debunking each, and thus exposing how aptitude, PhD credentials, educational background, awarded Nobel prizes, etc., are attached to substandard behaviour and to the confusion of facts with interpretations in QFT textbooks. Woit’s book contains many excellent cameos, but is organized in such a way that the few key understandable mechanisms in QFT are totally ignored, e.g. Feynman’s 1985 QED book explanation that the uncertainty principle arises from multipath interference (the basis of the path integral) of QED field quanta jiggling the bound electron chaotically as it does its orbit. The uncertainty principle is pre-second quantization, pre-Feynman’s path integral. Woit simply ignores this, and also the fact that vacuum polarization provides a testable, evolving calculation method and mechanism to understand and predict how couplings run and how the masses of particles arise. To a great extent this bigoted, anti-progress approach is used in many textbooks, which seek to misinform readers. Woit also starts by recommending Eugene Wigner’s worst ever paper, which asserts the ignorance-based dogma bias, on false premises, that the universe is intrinsically non-understandable mathematics.

Of course, as the non-PhD quantum field theory professor Freeman Dyson keeps pointing out, the PhD system is pseudoscientific: “abomination … a gross distortion of the educational process … the student is condemned to work on a single problem in order to write a thesis, for maybe 2-3 years … this straight-jacket which was imposed on the students … all the PhD students had these same constraints imposed on them which I basically disapprove of. I just don’t like the system. I think it is an evil system and it has ruined many lives.” (See video of Dyson explaining this, linked here.)]

Another specious “no go theorem” test

Another specious “no go theorem” test, full of speculative and false assumptions, claims to disprove time-varying G:

http://www.astronomy.com/news/2015/08/gravitational-constant-appears-universally-constant-pulsar-study-suggests

“Gravitational constant appears universally constant, pulsar study suggests
“The fact that scientists can see gravity perform the same in our solar system as it does in a distant star system helps confirm that the gravitational constant truly is universal.”
By NRAO, Charlottesville, VA | Published: Friday, August 07, 2015

This is a good example of the quack direction in which mainstream “science” is going: papers taking some measurements, then using an analysis riddled with speculative assumptions to “deduce” a result that doesn’t stand up to scrutiny, but serves simply to defend speculative dogma from the only real danger to it, that people might work on alternative ideas. Like racism, the “no go theorem” uses ad hoc but consensus-appearing implicit and explicit assumptions, with a small sprinkling of factual evidence, to allow the mainstream trojan horse of orthodoxy to survive close scrutiny.

This mainstream-defending “no go theorem” game was started by Ptolemy’s false claim in 150 AD that the solar system can’t be right, because – if it were right – the earth’s equator would rotate at about 1,000 miles an hour, and – according to Aristotle’s laws of motion (which remained in place for over 1,400 years, until Descartes and Newton came up with rival laws of motion) – clouds would whiz by at 1,000 miles an hour and people would be thrown off the earth by that “violent” motion.

Obviously this no-go theorem was false, yet the equator really does rotate at that speed. So there was some fact and some fiction, blended together, in Ptolemy’s ultimate defense of the earth centred universe against Aristarchus of Samos’s 250 BC theory of the solar system and the rotating earth. The arguments against a varying gravitational coupling are similarly vacuous.

Please let me explain. The key fact is, if gravity is due to an asymmetry in forces, which is the case for the Casimir force, then you don’t vary the “gravitational” effect by varying the underlying force for a stable orbit, or any other equilibrium system, like the balance of Coulomb repulsion between hydrogen nuclei in a star, and the gravitational compression.

Put in clearest terms, if you have a tug of war on a rope where there is an equilibrium, then adding more pullers equally to each end of the rope has no net effect, nothing changes.

Similarly, if two matched arm wrestlers were to increase their muscle sizes by the same amount, nothing changes. Similarly, in an arms race if both sides in military equilibrium (parity) increase their weapons stockpiles by the same factor, neither gains an advantage contrary to CND’s propaganda (in fact, the extra deterrence makes a war less likely).

Similarly, if you increase the gravitational compression inside a star by increasing the coupling G, while the electromagnetic (Coulomb repulsion) force increases similarly due to a shared ultimate (unified force theory) mechanism, then the sun doesn’t shine brighter or burn out quicker. The only way that a varying G can have any observable effect is if you make an assumption – either implicitly or explicitly – that G varies with time in a unique way that isn’t shared by the other forces. Such an assumption is artificial, speculative, and totally specious, and a good tell-tale sign that a science is going corruptly conservative and anti-radical in a poor form of propaganda (good propaganda being honest promotion of objective successes, rather than fake dismissals of all possible alternatives to mainstream dogma), by inventing false or fake reasons to defend the status quo and “shut down the argument”. Ptolemy basically said: “Look, the SCIENCE IS SETTLED, the solar system must be wrong because (1) our gut reaction rejects it as contrived, (2) it disagrees with the existing laws of motion of the over-hyped expert Aristotle, and (3) we have a mathematically fantastic theory of epicycles that can be arbitrarily fiddled to fit any planetary motion, without requiring the earth to rotate or orbit the sun.” That was “attractive” for a long time!

Edward Teller in 1948 first claimed to disprove Dirac’s varying G idea by using an analogously flawed argument to the one he used to delay the development of the hydrogen bomb. If you remember the story, Teller at first claimed falsely that compression has no effect on thermonuclear reactions in the hydrogen bomb. He claimed that if you compress deuterium and tritium (fusion fuel), the compressed fuel burns faster, but with the same burn efficiency. He forgot that the ratio of surface area for escaping heat (x-rays in the H bomb) to mass is reduced if you compress the fuel, so his scaling-laws argument is fake. If you compress the fuel, the reduced surface area causes a reduced loss of x-ray energy from the hot surface, so that a higher temperature in the core is maintained, allowing much more fusion than occurs in uncompressed fuel.
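Teller’s scaling oversight is easy to make quantitative. A minimal Python sketch for an idealized sphere of fixed fuel mass (the 10 kg mass and the ~160 kg/m^3 liquid D-T density are illustrative numbers only):

import math

def surface_to_mass_ratio(mass, density):
    """Surface area per unit mass (m^2/kg) of a sphere of the given mass
    (kg) and density (kg/m^3): the radius scales as density**(-1/3), so
    this ratio falls as density**(-2/3) under compression."""
    radius = (3.0 * mass / (4.0 * math.pi * density)) ** (1.0 / 3.0)
    return 4.0 * math.pi * radius**2 / mass

mass = 10.0                                # kg of fusion fuel (illustrative)
for compression in (1, 10, 100, 1000):     # density as multiples of ~160 kg/m^3
    ratio = surface_to_mass_ratio(mass, 160.0 * compression)
    print(f"{compression:5d}x compression: {ratio:.4f} m^2/kg")
# 1000x compression cuts the radiating surface per unit mass by a factor
# of 1000**(2/3) = 100, so far less x-ray energy escapes per kg of fuel.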

Likewise, Teller’s 1948 argument against Dirac’s varying gravitational coupling theory is bogus, because of his biased, wooden, orthodox thinking: if you vary G with time in the sun, it doesn’t affect the fusion rate, because there’s no reason why the similar inverse square law Coulomb force’s coupling shouldn’t vary the same way. Fusion rates depend on a balance between Coulomb repulsion of positive ions (hydrogen nuclei) and gravitational compression. If both forces in an equilibrium are changed the same way, no imbalance occurs. Things remain the same. If you have one baby at each end of a see-saw in balance in a park, and then add another similar baby to each end, nothing changes!

It proves impossible to explain this to a biased mathematical physicist who is obsessed with supergravity and refuses to think logically and rationally about alternatives. What happens then is that you get a dogma being defended by false “no go theorems” that purport to close down all alternative ideas that might possibly threaten their funding and prestige, or, more likely, that threaten “anarchy” and “disorder”. Really, when a religious dogma starts burning heretics, it is not a conspiracy of self-confessed bigots who know they are wrong, trying to do evil to prevent the truth coming out. What really happens is that these people are ultra-conservative dogmatic elitists, camouflaged as caring, understanding, honest liberals. They really believe they’re right, and that their attempts to stifle or kill off honest alternatives using specious no-go theorems are a real contribution to physics.

Feynman’s “rules” for calculating Feynman diagrams, which represent terms in the perturbative Taylor-series type expansion of a path integral in quantum field theory, allow very simple calculations of quantum gravity. The Casimir force of a U(1) Abelian dark energy (repulsive force) theory is able to predict the coupling correctly for quantum gravity. We do this by Feynman’s rule for a two vertex Coulomb type force diagram (which contributes far more to the result than diagrams with more vertices), which implies that the ratio of cross-sections for interactions is proportional to the square of the ratio of the couplings. We know the cross-section for the weak nuclear force, and we know the couplings for both gravity and the weak nuclear force. This gives us the gravitational interaction cross-section.

To get the cross-sections in similar dimensional units for this application of Feynman’s rules, we use a well-established method to get each coupling into units of GeV^{-2}. The Fermi constant for the weak interaction is divided by the cube of the product of h-bar and the velocity of light, while the Newtonian constant G is divided by the product of h-bar and c^5. This gives a Fermi coupling of 1.166 x 10^{-5} GeV^{-2} and a Newtonian coupling for gravity of 6.709 x 10^{-39} GeV^{-2}, the ratio of which is squared, using Feynman’s rules, to obtain the ratio of cross-sections for the fundamental interactions. This is standard physics procedure. All we’re doing is taking standard procedures and doing something new with them, predicting dark energy (and vice versa, calculating gravity from dark energy). Nobody seems to want to know; even Gerard ‘t Hooft rejected a paper with the specious argument that because we’re not “citing recent theoretical journal papers” it can’t appear in his Foundations of Physics, which seems to require prior published work in the field, not a new idea. (Gerard ‘t Hooft’s silly argument would, in effect, have demanded that Newton extend Ptolemy’s theory of epicycles, or be censored out.)
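The unit conversions and the two-vertex squaring rule described above can be checked numerically. A minimal Python sketch with standard constants (the use of the squared ratio as a gravitational cross-section is this paper’s application, not textbook procedure):

G    = 6.674e-11    # Newtonian gravitational constant, m^3 kg^-1 s^-2
G_F  = 1.436e-62    # Fermi weak interaction constant, J m^3
hbar = 1.055e-34    # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s
GeV  = 1.602e-10    # joules per GeV

# Express both couplings in GeV^-2, as described in the text:
fermi_coupling   = G_F / (hbar * c)**3 * GeV**2   # -> ~1.166e-5 GeV^-2
gravity_coupling = G / (hbar * c**5) * GeV**2     # -> ~6.70e-39 GeV^-2
print(f"Fermi coupling:   {fermi_coupling:.3e} GeV^-2")
print(f"gravity coupling: {gravity_coupling:.3e} GeV^-2")

# Two-vertex rule: cross-sections scale as the square of the coupling, so
# the cross-section ratio is the square of the coupling ratio (~3e-67).
print(f"cross-section ratio: {(gravity_coupling / fermi_coupling)**2:.1e}")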

In this theory, particles are pushed together locally by the fact that we’re surrounded by the mass of the universe, and gauge bosons for dark energy (observed cosmological acceleration) are being exchanged between the masses in an apple and the masses in the surrounding universe.

Here’s a new idea. If one tenth of the energy currently put into inventing negative, false “no go theorem” objections to established facts that are “heretical” or “taboo” in physics were instead directed towards constructive criticism and the development of new predictions, physics could probably break out of its current moratorium today. Arthur C. Clarke long ago observed that when subjective, negative scientists claim to invent theorems to “disprove” the possibility of a certain research direction achieving results and overthrowing mainstream dogma, they’re more often wrong than when they do objective work.

It’s very easy to point out that any new, radical idea is incompatible with some old dogma that is “widely held and established orthodoxy”. That is partly because such incompatibility is pretty much the definition of progress (unless you define all progress as merely adding another layer of epicycles to a half-baked mainstream theory in order to make it compatible with the latest data on the cosmological acceleration), and partly because a new idea is born half-baked, not having been researched with lavish funding for decades or longer by many geniuses.

Fighting inflation with observations of the cosmic background

From Dr Peter Woit’s 14 June 2015 Not Even Wrong blog:

Last week Princeton hosted what seems to have been a fascinating conference, celebrating the 50th anniversary of studies of the CMB. … The third day of the conference featured a panel where sparks flew on the topics of inflation and the multiverse, including the following:

Neil Turok: “even from the beginning, inflation looked like a kluge to me… I rapidly formed the opinion that these guys were just making it up as they went along… Today inflation is the junk food of theoretical physics… Inflation isn’t radical enough – it’s too much a patchwork. It all rests on rare initial conditions… Akin to solving electron stability with springs… all we have is proof of expansion, not that the driving force is inflation… “because the alternatives are bad you must believe it” isn’t an option that I ascribe to, and one that is prevalent now…  we should encourage young to … be creative (not just do designer inflation)

David Spergel: papers on anthropics don’t teach us anything – which is why it isn’t useful…

Slava Mukhanov: inflation is defined as exponential expansion (physics) + non-necessary metaphysics (Boltzmann brains etc) … In most papers on initial conditions on inflation, people dig a hole, jump in, and then don’t succeed in getting out… unfortunately now we have three new indistinguishable inflation models a day – who cares?

Paul Steinhardt: inflation is a compelling story, it’s just not clear it is right… I’d appreciate that astronomers presented results as what they are (scale invariant etc) rather than ‘inflationary’… Everyone on this panel thinks multiverse is a disaster.

Roger Penrose: inflation isn’t falsifiable, it’s falsified… BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye.

Marc Davis: astronomers don’t care about what you guys are speculating about …

I was encouraged by Steinhardt’s claim that “Everyone on this panel thinks multiverse is a disaster.” (although I think he wasn’t including moderator Brian Greene). Perhaps as time goes on the fact that “the multiverse did it” is empty as science is becoming more and more obvious to everyone.

Inflation theory, a phase change at the Planck scale that allows the universe to expand for a brief period faster than light, is traditionally defended by:

(a) the need to correct general relativity by reducing the early gravitational curvature, since general relativity by itself predicts too great a density in the early universe to account for the smallness of the ripples in the cosmic background radiation, which decoupled from matter when the universe became transparent at 300,000 years after zero time. (The transparency occurs when electrons combine with ions to form neutral atoms, which are relatively transparent to electromagnetic radiation, unlike free charges, which are strong absorbers of radiation.)

Thus, inflation is being used here to reduce the effective gravitational field strength by dispersing the ionized matter over a larger volume, which reduces the rate of gravitational clumping to match the small amount of clumping observed at 300,000 years after zero.

Another way of doing the same thing is to use a theory of gravitation as a Casimir force resulting from dark energy, which correctly predicts gravity from dark energy and makes the gravitational coupling G a linear function of time, so that at 300,000 years it is merely 2.3 x 10^{-5} of today’s value, and even smaller at earlier times (the smallness of the CBR ripples is not determined solely by the curvature when they were emitted, but by the time-integrated effect of the curvature up to that time). The standard “no-go theorem” by Edward Teller (1948), used against any variation of G, is false, as we have shown, because it makes an implicit assumption that’s wrong: the Teller no-go theorem assumes that G varies with time in one specific way. Teller assumes, for the sake of his no-go theorem, that the gravitational coupling varies inversely with time, as Dirac assumed, rather than linearly with time, as in a Casimir mechanism for quantum gravity as an emergent effect of dark energy pushing masses together. He also assumes implicitly that G varies by itself, without any variation of the Coulomb coupling. Teller thus relies on an assumed imbalance between gravity and Coulomb forces to “disprove” varying G, as well as assuming that any variation of G is inversely with time. All he does is disprove his own flawed assumptions, not the facts!
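A one-line check of the quoted 2.3 x 10^{-5} figure (the linear G(t) law is this theory’s hypothesis; the present age of the universe is taken here as roughly 13 billion years, which is what the quoted figure assumes):

t_decoupling = 3.0e5     # years after zero time (universe becomes transparent)
t_now        = 13.0e9    # assumed present age of the universe, years

# If G grows linearly with time, its value at decoupling is simply the
# ratio of the two ages times today's value:
print(f"G(decoupling)/G(now) = {t_decoupling / t_now:.1e}")   # 2.3e-05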

These assumptions are all wrong, as we showed.  Gravity and Coulomb forces are analogous inverse square law interactions so their couplings will vary the same way; therefore, no devastating Teller-type imbalance between Coulomb repulsion of protons in stars and gravitational compression forces arises.  The QG theory works neatly.

(b) Inflation, like string theory, is defended by elitist snobbery of the dictatorial variety: “you must believe this theory because there are no alternatives; we know this to be true because we don’t have any interest in taking alternatives seriously, particularly if they are not hyped by media personalities or ‘big science’ propaganda budgets, and if anyone suggests one we’ll kill it by groupthink ‘peer’ review. Therefore, like superstring theory, if you want to work at the cutting edge, you’d better work on inflation, or we’ll kill your work.”

That is what it boils down to. There’s an attitude problem, with two kinds of scientists, defined more by attitude than by methodology. One kind wants to find the truth; the other wants to become a star, or, failing that, at least to be a groupie. This corruption is not confined to science, but also occurs in political parties, organized religion, and capitalism. Some people want to make the world a better place; others are more selfish. Ironically, those who are the most corrupt are also the most expert at camouflaging themselves as the opposite, a fact that emerged with the BBC’s star Jimmy Savile. (While endlessly exaggerating correlations between temperature and CO2, snubbing others, and making money from the taxpayer, they present themselves as moral.)

Youhei Tsubono’s criticism of the magnetic spin alignment mechanism for the Pauli exclusion principle

“All QM textbooks describe the effects of the Exclusion Principle but its explanation is either avoided or put down to symmetry considerations. The importance of the Exclusion Principle as a foundational pillar of modern physics cannot be overstated since, for example, atomic structure, the rigidity of matter, stellar evolution and the whole of chemistry depend on its operation.” – Mike Towler, “Exchange, antisymmetry and Pauli repulsion. Can we ‘understand’ or provide a physical basis for the Pauli exclusion principle?”, TCM Group, Cavendish Laboratory, University of Cambridge, pdf slide 23.

Japanese physicist Youhei Tsubono, who has a page criticising the spin-orbit coupling, points out that there is an apparent discrepancy between the magnetic field energy for electron alignment and the energy difference between the 1s and 2s states, which creates a question of how the spinning charge (magnetic dipole) alignment mechanism of electrons creates the mechanism for the Pauli exclusion principle.

Referring to Quantum Chemistry, 6th edition, by Ira N. Levine, p. 292, Tsubono argues that the difference in lithium’s energy between having three electrons in the 1s state (forbidden by the Pauli exclusion principle) and having two electrons in the 1s state (with opposite spins) and the third electron in the 2s state is 11 eV, which he claims is far greater than the magnetic dipole (spinning charge) field energy of only about 10^{-5} eV. I can’t resist commenting here to resolve this alleged anomaly:

Japanese physicist Youhei Tsubono on the Pauli exclusion principle mechanism by alignment of magnetic dipoles from spinning electrons.

In a nutshell, the error Tsubono makes here is conflating the energy of alignment of magnetic spins for electrons at a given distance from the nucleus with the energy needed not only to flip spin states but also to move to a greater distance from the nucleus. It is true that the repulsive magnetic dipole field energy between similarly-aligned electron spins is only about 10^{-5} eV, but because both electrons are in the same subshell, that’s enough to account for the observed Pauli exclusion principle. The underlying error Tsubono makes is to start from the false model (see the left hand side of the diagram above) showing three electrons in the 1s state, then raising the rhetorical question of how the small magnetic repulsive energy is able to drive one electron into the 2s state. This situation never arises. The nucleus is formed first of all, in fully ionized form, by some nuclear reaction. The first electrons therefore approach the nucleus from a large distance. The realistic question therefore is not: “how does the third electron in the 1s state get enough energy to move to the 2s state from the weak magnetic repulsion that causes the Pauli exclusion principle?” The 3rd electron stops in the 2s state because of a mechanism: it’s unable to radiate the energy it would gain by approaching any closer to the nucleus. The electron in the 2s state can only radiate energy in units of hf, so even a small discrepancy in energy is enough to prevent it approaching closer to the nucleus. (Similarly, if an entry ticket costs $10, you don’t get in with $9.99.)
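For what it’s worth, the ~10^{-5} eV magnetic energy scale is easy to reproduce as an order-of-magnitude estimate. A minimal Python sketch using the standard point-dipole formula, with an illustrative electron separation of 2 angstroms and angular factors of order one dropped:

mu0_over_4pi = 1.0e-7     # magnetic constant mu_0 / (4 pi), T m / A
mu_B         = 9.274e-24  # Bohr magneton, J/T (electron spin moment ~ 1 mu_B)
eV           = 1.602e-19  # joules per electronvolt

def dipole_dipole_energy(r):
    """Order-of-magnitude magnetic interaction energy (J) between two
    electron spin dipoles a distance r (m) apart: mu0/(4 pi) * mu_B^2 / r^3."""
    return mu0_over_4pi * mu_B**2 / r**3

r = 2.0e-10               # illustrative electron-electron separation, m
print(f"{dipole_dipole_energy(r) / eV:.1e} eV")   # ~7e-6 eV, i.e. ~1e-5 eV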

Similarly, the objection Tsubono raises to the supposedly faster-than-light spin speed implied by the classical electron radius is false, because the core size of the electron is far smaller than the classical electron radius.

The core can therefore spin fast enough to explain the magnetic dipole moment without violating the speed of light, which would only be a problem if the classical electron radius were the true core size. What’s annoying about Tsubono’s page, like those of many other popular critics of “modern physics”, is that it tries to throw out the baby with the bathwater. The spinning electron’s dipole magnetic field alignment mechanism for the Pauli exclusion principle is one of the few really impressive, understandable mechanisms in quantum mechanics, and it is therefore important to defend it. Having chucked out the physical mechanism that comes from quantum field theory, Tsubono then argues that “quantum field theory is not physics, just maths.”

Richard P. Feynman reviews nonsensical “mathematical” (aka philosophical) attacks on objective critics of quantum dogma in the Feynman Lectures on Physics, volume 3, chapter 2, section 2-6:

“Let us consider briefly some philosophical implications of quantum mechanics. … making observations affects a phenomenon … The problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves … Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) … The question is whether the ideas of the exact position of a particle and the exact momentum of a particle are valid or not. The classical theory admits the ideas; the quantum theory does not. This does not in itself mean that classical physics is wrong.

“When the new quantum mechanics was discovered, the classical people—which included everybody except Heisenberg, Schrödinger, and Born—said: “Look, your theory is not any good because you cannot answer certain questions like: what is the exact position of a particle?, which hole does it go through?, and some others.” Heisenberg’s answer was: “I do not need to answer such questions because you cannot ask such a question experimentally.” … It is always good to know which ideas cannot be checked directly, but it is not necessary to remove them all. … In quantum mechanics itself there is a probability amplitude, there is a potential, and there are many constructs that we cannot measure directly. The basis of a science is its ability to predict. … We have already made a few remarks about the indeterminacy of quantum mechanics. … we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom of will, and of the idea that the world is uncertain. Of course we must emphasize that classical physics is also indeterminate … if we start with only a tiny error it rapidly magnifies to a very great uncertainty. … For already in classical mechanics there was indeterminability from a practical point of view.”

Most QM and QFT textbook authors (excepting Feynman’s 1985 QED) ignore the mechanism for quantum field theory, in order to cater to Pythagorean style mathematical mythology. This mythology is reminiscent of the elitist warning over Plato’s doorway: only mathematicians are welcome. To enforce this policy, an obfuscation of physical mechanisms is usually undertaken in a pro-“Bohring” effort to convince students that physics at the basic level is merely a matter of dogmatically applying certain mathematical rules from geniuses, which lack any physical understanding. Tsubono has other criticisms of modern dogma, e.g. that dark energy provides a modern ad hoc version of “ether” to make general relativity compatible with observation (just the opposite of Einstein’s basis for special relativity). So why not go back to Lorentz’s mechanism for mass increase and length contraction as being a field interaction, accompanied by radiation, which occurs upon acceleration? The answer seems to be that there is a widespread resistance to trying to understand physics objectively. It seems that the status quo is easier to defend.

There is a widespread journalistic denial of the freedom to ask basic questions in quantum mechanics about what is really going on and what the mechanism is, and efforts are made to close down discussions that could lead in a revolutionary, unorthodox or heretical direction:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up [trying to explain it further].”

– Richard P. Feynman, as quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’ … electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important, and we have to sum the arrows[*]  to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5. [*Arrows = wavefunction amplitudes, each proportional to exp(iS) = cos S + i sin S, where S is the action of the potential path.]

Nobel Laureate Gell-Mann debunked single-wavefunction entanglement using colored socks.  A single, and thus entangled/collapsible, wavefunction for each quantum number of a particle only occurs in non-relativistic 1st quantization QM, such as Schroedinger’s equation.  By contrast, in relativistic 2nd quantization, there is a separate wavefunction amplitude for each potential/possible interaction, not merely one wavefunction amplitude per quantum number.  This difference gets rid of “wavefunction” collapse, “wavefunction” entanglement philosophy, and so on.  Instead of a single wavefunction that becomes deterministic only when measured, we have the path integral, where you add together all the possible wavefunction amplitudes for a particle’s transition.  The paths with the smallest action compared to Planck’s constant (thus having the smallest energy and/or time) are in phase and contribute most, while paths of larger action (larger energy and/or time) have phases that interfere and cancel out.
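To make the phase cancellation concrete, here is a minimal numerical sketch in Python (the one-parameter family of paths, the natural units and all names are illustrative assumptions of this sketch, not taken from any textbook):

import numpy as np

# Toy stationary-phase demo: a free particle of mass m goes from x = 0 at
# t = 0 to x = 1 at t = T.  Each path is x(t) = t/T + a*sin(pi*t/T), so the
# single parameter a measures the deviation from the classical straight line.
# For this family the action evaluates to S(a) = m/(2T) + m*(pi^2)*(a^2)/(4T).
hbar, m, T = 1.0, 1.0, 1.0                        # natural units (an assumption)

a = np.linspace(-20.0, 20.0, 40001)               # deviation parameter, one per path
S = m / (2 * T) + (m * np.pi ** 2 / (4 * T)) * a ** 2   # action of each path
arrows = np.exp(1j * S / hbar)                    # one “arrow” (phasor) per path

near = arrows[np.abs(a) < 1.0]                    # small-action, near-classical paths
far = arrows[np.abs(a) >= 1.0]                    # large-action paths

print(abs(near.sum()) / near.size)                # ~0.8: near-classical arrows add in phase
print(abs(far.sum()) / far.size)                  # ~0.01: large-action arrows cancel out

The per-arrow coherence printed at the end is the whole point: the small-action arrows point nearly the same way and add up, while the large-action arrows spin round in phase and almost completely cancel.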

Virtual (or offshell) particles travel along the cancelled paths of large action; real (or onshell) particles do not.  So there is a simple mechanism which replaces the single-wavefunction chaos of ordinary quantum mechanics with destructive and constructive interference between multiple wavefunction amplitudes per particle in quantum field theory, which is the correct, relativistic theory.  Single-wavefunction theories like Schroedinger’s model of the atom (together with Bell’s inequality, which falsely assumes a single wavefunction per particle, like quantum computing hype) are false, because they are non-relativistic and thus ignore the fact that the Coulomb field is quantized, i.e. they ignore the field quanta or virtual photon interactions mediating the force that binds an orbital electron to the nucleus.  Once you switch to quantum field theory, the chaotic motion of an orbital electron has a natural origin in its random, discrete interactions with the quantum Coulomb field.  (The classical Coulomb field in Schroedinger’s model is a falsehood.)

Relativistic quantum field theory, unlike quantum mechanics (1st quantization), gets over Rutherford’s objection to Bohr’s atomic electron, namely the emission of radiation by an accelerating charge.  Charges in quantum field theory have fields which are composed of the exchange of what is effectively offshell radiation: the ground state is thus defined by an equilibrium between emission and reception of virtual radiation.  We only “observe” onshell photons emitted while an electron accelerates, because the act of acceleration temporarily throws the usual balanced equilibrium (of virtual photon exchange between all charges) out of kilter, by preventing the usual complete cancellation of field phases.  Evidence: consider Young’s double-slit experiment using Feynman’s path integral.  We can see that virtual photons go through all slits, in the process interacting with the fields of the electrons on the slit edges (causing diffraction), but only the uncancelled field phases arriving on the screen are registered as a real (onshell) photon.  It’s simple.
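For illustration, here is a minimal sketch of the phase-arrow bookkeeping for the two-slit case (Python; the geometry, wavelength and names are assumptions made for the example, and only the two leading slit paths are summed):

import numpy as np

# Two-slit interference by summing one phase arrow per slit path at each
# screen position, then squaring the total amplitude for the intensity.
wavelength = 500e-9                    # metres (illustrative green light)
k = 2 * np.pi / wavelength             # wavenumber
d = 50e-6                              # slit separation (illustrative)
L = 1.0                                # slit-to-screen distance

x = np.linspace(-0.05, 0.05, 2001)     # positions along the screen
r1 = np.hypot(L, x - d / 2)            # path length through slit 1
r2 = np.hypot(L, x + d / 2)            # path length through slit 2

amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amplitude) ** 2     # bright where phases agree, dark where they cancel

# The bright fringes repeat every wavelength*L/d = 0.01 m on this screen:
print(wavelength * L / d)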

This is analogous to the history of radiation in thermodynamics.  Before Prevost’s suggestion in 1792 that an exchange of thermal energy explains the absence of cooling while all bodies are continuously radiating energy, thermodynamics was in grave difficulties, with heroic Niels Bohr-style “God characters” grandly dismissing as ignorant anyone who discovered an anomaly in fluid theories of heat like caloric and phlogiston.  Today we grasp that a room at 15 C is radiating because its temperature is 288 K above absolute zero.  Cooling is not synonymous with radiating.  If the surrounding parts of the building are also at 15 C, the room will not cool, since the radiating effect is compensated by the receipt of radiation from the neighbouring rooms, floor and roof.  Likewise, the electron in the ground state can radiate energy without spiralling into the nucleus, if it is in equilibrium, receiving as much energy as it radiates.
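The bookkeeping is easy to check with the Stefan-Boltzmann law; a minimal sketch (treating the room surfaces and surroundings as ideal black bodies, which is of course an idealisation):

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_room = 288.0          # 15 C expressed in kelvin
T_surroundings = 288.0  # neighbouring rooms, floor and roof at the same 15 C

emitted = SIGMA * T_room ** 4           # ~390 W/m^2 radiated away
received = SIGMA * T_surroundings ** 4  # ~390 W/m^2 absorbed in exchange

print(emitted, received, emitted - received)  # net flow zero: radiating, but not cooling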

False no-go theorems, based on false premises, have always been used to quickly end progressive suggestions without objective discussion.  This censorship deliberately retarded the objective development of new ideas which ran contrary to populist dogma.  It was only when the populist dogma became excessively boring, or when a rival idea had evolved into a really effective replacement, that “anomalies” in the old theory ceased to be taboo.  Similarly, the populist and highly misleading Newtonian particle theory of light still acts to prevent objective discussion of multipath interference as the explanation of Young’s double-slit experiment, just as it did in Young’s day:

“Commentators have traditionally asked aloud why the two-slit experiment did not immediately lead to an acceptance of the wave theory of light. And the traditional answers were that: (i) few of Young’s contemporaries were willing to question Newton’s authority, (ii) Young’s reputation was severely damaged by the attacks of Lord Brougham in the Edinburgh Review, and that (iii) Young’s style of presentation, spoken and written, was obscure. Recent historians, however, have looked instead for an explanation in the actual theory and in its corpuscular rivals (Kipnis 1991; Worrall 1976). Young had no explanation at the time for the phenomena of polarization: why should the particles of his ether be more willing to vibrate in one plane than another? And the corpuscular theorists had been dealing with diffraction fringes since Grimaldi described them in the 17th century: elaborate explanations were available in terms of the attraction and repulsion of corpuscles as they passed by material bodies. So Young’s wave theory was thus very much a transitional theory. It is his ‘general law of interference’ that has stood the test of time, and it is the power of this concept that we celebrate on the bicentennial of its publication in his Syllabus of 1802.”

– J. D. Mollon, “The Origins of the Concept of Interference”, Phil. Transactions of the Royal Society of London, vol. A360 (2002), pp. 807-819.

Feynman remarks in his Lectures on Physics that if you deny all “unobservables”, as Mach and Bohr did, then you can kiss the wavefunction Psi goodbye.  You can observe probabilities and cross-sections via reaction rates, but as Feynman argues, that is not a direct observation of the wavefunction’s existence.  There are lots of things in physics that are founded on indirect evidence, giving rise to the possibility that an alternative theory may explain the same evidence using a different basic model.  This is exactly the situation that occurred in explaining sunrise: either the sun orbits the earth daily, or the earth rotates daily while the sun moves only about one degree across the sky.

Propagator derivations

Peter Woit is writing a book, Quantum Theory, Groups and Representations: An Introduction, and has a PDF of the draft version linked here.  He has now come up with the slogan “Quantum Theory is Representation Theory”, after postulating “What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics”.

I’ve recently become interested in the mathematics of QFT, so I’ll just make a suggestion for Dr Woit regarding his section “42.4 The propagator”, which is incomplete (he has only the heading there on page 404 of the 11 August 2014 revision, with no text under it at all).

The propagator is the greatest part of QFT from the perspective of Feynman’s 1985 book QED: you evaluate the propagator from either the Lagrangian or the Hamiltonian, since the propagator is simply the Fourier transform of the potential energy (the interaction part of the Lagrangian provides the couplings for Feynman’s rules, not the propagator).  Fourier transforms are simply Laplace transforms with a complex number in the exponent.  The Laplace and Fourier transforms are used extensively in analogue electronics for transforming waveforms (amplitudes as a function of time) into frequency spectra (amplitudes as a function of frequency).  Taking the concept at its simplest, the Laplace transform of a constant amplitude is just the reciprocal (inverse): e.g. an amplitude pulse lasting 0.1 second has a frequency of 1/0.1 = 10 Hertz.  You can verify that from dimensional analysis.  For integration between zero and infinity, with input F(f) = 1 and writing G(t) for the transform, we have:

Laplace transform, G(t) = Integral [F(f) * exp(-ft)] df

= Integral [exp(-ft)] df

= 1/t.

If we change from F(f) = 1 to F(f) = f, we now get:

G(t) = Integral [f * exp(-ft)] df = 1/(t squared).
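Both results are easy to verify numerically; a quick sketch using scipy’s standard quad integrator (the choice t = 2.5 is arbitrary):

import numpy as np
from scipy.integrate import quad

t = 2.5  # any positive value will do

# Transform of F(f) = 1: the integral of exp(-f*t) over f from 0 to infinity is 1/t.
value1, _ = quad(lambda f: np.exp(-f * t), 0, np.inf)
print(value1, 1 / t)        # both 0.4

# Transform of F(f) = f: the integral of f*exp(-f*t) over the same range is 1/t^2.
value2, _ = quad(lambda f: f * np.exp(-f * t), 0, np.inf)
print(value2, 1 / t ** 2)   # both 0.16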

The trick of the Laplace transform is the integration property of the exponential function: its unique property of remaining unchanged in form by integration (because e is the base of natural logarithms), apart from division by the constant in its power (the constant being the factor which is not a function of the variable you’re integrating over).  The Fourier transform is the same as the Laplace transform, but with a factor of “i” included in the exponential power:

Fourier transform, G(t) = Integral [F(f) * exp(-ift)] df

In quantum field theory, instead of inversely linked frequency f and time t, you have inversely linked variables like momentum p and distance x.   This comes from Heisenberg’s ubiquitous relationship, p*x = h-bar.  Thus, p ~ 1/x.  Suppose that the potential energy of a force field is given by V = 1/x.  Note that field potential energy V is part of the Hamiltonian, and also part of the Lagrangian, when given a minus sign where appropriate.  You want to convert V from position space, V = 1/x, into momentum space, i.e. to make V a function of momentum p.  The Fourier transform of the potential energy over 3-d space shows that V ~ 1/p squared.  (Since this blog isn’t very suitable for lengthy mathematics, I’ll write up a detailed discussion of this in a vixra paper soon to accompany the one on renormalization and mass.)
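For readers who want the key step now, the standard 3-d calculation runs as follows (regulating the Coulomb case with a Yukawa factor exp(-mx) and letting m tend to zero at the end); in LaTeX notation:

\int d^3x \, \frac{e^{-mx}}{x} \, e^{-i\mathbf{p}\cdot\mathbf{x}} \;=\; \frac{4\pi}{p} \int_0^\infty e^{-mx} \sin(px) \, dx \;=\; \frac{4\pi}{p^2 + m^2}

(the angular integration over the sphere supplies the factor 4\pi \sin(px)/(px)).  Setting m = 0 then gives 4\pi/p^2 for the Coulomb potential, i.e. V ~ 1/(p squared) up to constant factors, as claimed.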

What’s interesting here is that this shows that the propagator terms in Feynman’s diagrams, which, when integrated over, produce the running couplings and thus renormalization, depend simply on the field potential, which can be written as a classical Coulomb potential or a quantum Yukawa-type potential (a Coulomb potential with an exponential decay factor included).  There are of course two types of propagator: bosonic (integer spin) and fermionic (half-integer spin).  It turns out that the classical Coulomb law gives a potential V = 1/x which, when Fourier transformed, gives V ~ 1/(p squared); and when you include a Yukawa exp(-mx) short-range attenuation or decay term, i.e. V = (1/x)exp(-mx), you get a Fourier transform proportional to 1/[(p squared) + (m squared)], which (up to sign conventions) is the static limit of the familiar boson propagator 1/[(p squared) – (m squared)] that the Klein-Gordon Lagrangian gives.
 
However, using the Dirac Lagrangian, which is basically a square-root version of the Klein-Gordon equation, with Dirac’s gamma matrices to avoid losing solutions (since minus signs and complex numbers tend to disappear when you square them), you get a quite different propagator: 1/(p – m).  The squares on p and m which occur in the Klein-Gordon propagator disappear in Dirac’s fermion (half-integer spin) propagator!
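The disappearance of the squares is pure gamma-matrix algebra; rationalising the Dirac propagator (a standard identity, writing p-slash for the contraction of the gamma matrices with the momentum) connects the two forms:

\frac{1}{\not{p} - m} \;=\; \frac{\not{p} + m}{(\not{p} - m)(\not{p} + m)} \;=\; \frac{\not{p} + m}{p^2 - m^2}

since \not{p}\not{p} = p^2 follows from the Clifford algebra relation \{\gamma^\mu, \gamma^\nu\} = 2g^{\mu\nu}.  So the Klein-Gordon style denominator, squares and all, is still there underneath the fermion propagator.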
 
So what does this tell us about the physical meaning of Dirac’s equation?  Put another way: we know that Coulomb’s law in QFT (QED more specifically) physically involves field potentials consisting of exchanged spin-1 virtual photons, which is why the Fourier transform of Coulomb’s law gives the same result as the Klein-Gordon propagator without a mass term (Coulomb’s virtual photons are massless, so the electromagnetic force has infinite range); but what is the equivalent of Coulomb’s law for Dirac’s spin-1/2 fermion fields?  Doing the Fourier transform in the same way, but ending up with Dirac’s 1/(p – m) fermion propagator, gives an interesting answer which I’ll discuss in my forthcoming vixra paper.
 
Another question is this: the Higgs field and the renormalization mechanism only deal with problems of mass at high energy, i.e. UV cutoffs, as discussed in detail in my previous paper.  What about the loss of mass at low energy, i.e. the IR cutoff needed to stop the coupling from running, given the presence of a mass term in the propagator?
 
In other words, in QED we have the issue that the polarizable pair production which makes the coupling run only kicks in at 1.02 MeV (the energy needed to briefly form an electron-positron pair).  Below that energy, or in weak fields beyond the classical electron radius, the coupling stops running, so the effective electronic charge is constant.  This is why there is a standard low-energy electronic charge, the one measured by Millikan.  Below the IR cutoff, or at distances larger than the classical electron radius, the charge of an electron is constant and the force merely varies with the Coulomb geometric law (the spreading or divergence of field lines or field quanta over an increasing space, diluting the force, but with no additional vacuum polarization screening of charge, since this screening is limited to distances shorter than the classical electron radius, i.e. energies beyond about 1 MeV).
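A minimal sketch of this behaviour, using the standard one-loop (electron-loop-only) leading-logarithm formula and simply freezing the coupling below the 1.02 MeV pair-production threshold (the sharp freezing rule and the names here are simplifying assumptions of the sketch, not a full treatment of threshold effects):

import math

ALPHA_0 = 1 / 137.035999   # measured low-energy (Millikan-regime) coupling
M_E = 0.511                # electron mass in MeV

def running_alpha(q_mev):
    """One-loop QED running coupling with only the electron loop.
    Below the e+e- pair-production threshold the coupling does not run."""
    if q_mev <= 2 * M_E:   # below the IR cutoff: constant effective charge
        return ALPHA_0
    log = math.log(q_mev ** 2 / M_E ** 2)
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * log)

for q in (0.1, 1.0, 92_000.0):   # 0.1 MeV, 1 MeV, roughly the Z-boson scale
    print(q, 1 / running_alpha(q))
# Prints 1/alpha = 137.0 at both low energies, falling to about 134 at 92 GeV
# (the measured Standard Model value ~128 needs all charged particle loops).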
 
So how and why does the Coulomb potential suddenly change from V = 1/x beyond the classical electron radius, to V = (1/x)exp(-mx) within the classical electron radius?  Consider the extract below from page 3 of http://vixra.org/pdf/1407.0186v1.pdf:
 
Integrating using a massless Coulomb propagator to obtain correct low energy mass
 
 
The key problem for the existing theory is very clear when looking at the integrals in Fig. 1.  Although we have an upper-case Lambda symbol included as an upper limit (high energy, i.e. UV cutoff) on the integral which includes an electron mass term, we have not included a lower integration limit (IR cutoff): this is in keeping with the shoddy mathematics of most (all?) quantum field theory textbooks, which either deliberately or maliciously cover up the key (and really interesting or enlightening) problems in the physics, by obfuscating or by getting bogged down in mathematical trivia, like a clutter of technical symbolism.  What we’re suggesting is that there is a big problem with the concept that the running coupling merely increases the “bare core” mass of a particle: this standard procedure conflates and confuses the high-energy bare core mass, which isn’t seen at low energy, with the standard value of electron mass, which is what you observe at low energy.
 
In other words, we’re arguing for a significant re-interpretation of the physical dogmas attached to the existing mathematical structure of QFT, in order to get useful predictions; nothing useful is lost by our approach, while there is everything to be gained from it.  Unfortunately, physics is now a big money-making industry in which journals and editors are used as a status-political-financial-fame-fashion-bigotry-enforcing, consensus-enforcing, power-grasping tool, rather than an informative tool designed solely and exclusively to speed up the flow of information that is helpful to those people focused merely upon making advances in the basic science.  But that’s nothing new.  [When Mendel’s genetics were finally published after decades of censorship, his ideas had (allegedly) been plagiarized by two other sets of bigwig researchers, whose papers the journal editors had more to gain by publishing than they had from publishing the original research of someone then obscure and dead!  Neat logic, don’t you agree?  Note that this statement of fact is not “bitterness”; it is just fact.  A lot of the bitterness that does arise in science comes not from the hypocrisy of journals and groupthink, but because these are censored out from discussion.  (Similarly, the Oscars are designed to bring attention to the Oscars, since the prize recipients are already famous anyway.  There is no way to escape the fact that the media in any subject, be it science or politics, deems one celebrity more worthy of publicity than the diabolical murder of millions by left-wing dictators.  The reason is simply that the more “interesting” news sells more journals than the more difficult-to-understand problems.)]
 
17 August 2014 update:
 
Summary

(1) The Fourier transform of the Coulomb potential (or of the potential energy term in the Lagrangian or Hamiltonian) gives the propagator, and the propagator’s one-loop integral gives the rest mass.

(2) Please note in particular the observation that since the Fourier transform of the Coulomb (low energy, below the IR cutoff) potential gives a propagator omitting a mass term, this propagator does not contribute a logarithmic running coupling.  This lack of a running coupling at low energy is observed in classical physics for energies below about 1 MeV, where no vacuum polarization or pair production occurs, because pair production requires at least the combined mass of the electron and positron pair, 1.02 MeV.  The contribution of this massless Coulomb propagator at one loop to the electron mass is then non-logarithmic, and simply equal to a factor like alpha times the integral (between 0 and A) of (1/k^3) d^4k, i.e. proportional to alpha * A.  As shown in the diagram, we identify this “contribution” from the massless low-energy Coulomb propagator as the actual ground-state mass of the electron, with the cutoff A corresponding to the neutral currents that mire down the electron charge core, causing mass; i.e. A is the mass of the uncharged Z boson of the electroweak scale (91 GeV).  If you have two one-loop diagrams, this integral becomes proportional to alpha * A squared.  (The linearity of the single integral in the cutoff A is checked in the short calculation following this summary.)

(3) The one loop corrections shown on page 3 to electron mass for the non-Coulomb potentials (i.e. including mass terms in the propagator integrals) can be found in many textbooks, for example equations 1.2 and 1.3 on page 8 of “Supersymmetry Demystified”. As stated in the blog post, I’m writing a further paper about propagator derivations and their importance.
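As promised in point (2) above, the linearity in the cutoff is quick to check.  Writing the 4-d volume element as d^4k = 2 pi^2 k^3 dk (2 pi^2 being the surface area of the unit 3-sphere), we have, in LaTeX notation:

\int_0^{A} \frac{d^4 k}{k^3} \;=\; 2\pi^2 \int_0^{A} \frac{k^3}{k^3} \, dk \;=\; 2\pi^2 A

so the result is linear, not logarithmic, in the cutoff A, with the constant 2\pi^2 absorbed into the alpha-like prefactor; two such loops give the A-squared dependence quoted in point (2).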

If you read Feynman’s 1985 QED (not his 1965 book with Hibbs, which misleads most people about path integrals, and which Zee and Distler prefer to the 1985 book), the propagator is the brains of QFT.  You can’t directly do the path integral over spacetime, i.e. the integral of the amplitude exp(iS) taken over all possible geometric paths, where the action S is itself the integral of the Lagrangian.  So, as Feynman argues, you have to construct a perturbative expansion, each term becoming more complex and representing pictorially the physical interactions between particles.  Feynman’s point in his 1985 book is that this process renders QFT simple.  The contribution from each diagram involves multiplying the charge (coupling) at each vertex by a propagator for each internal line, and ensuring that momentum is conserved at the vertices.
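Schematically, the bookkeeping for a single tree diagram is just coupling times propagator times coupling; here is an illustrative toy (the function names and the photon-exchange example are assumptions of this sketch, with spin factors and numerators dropped):

import math

ALPHA = 1 / 137.036                  # fine structure constant
E = math.sqrt(4 * math.pi * ALPHA)   # electric charge in natural units

def photon_propagator(q_squared):
    """Internal-line factor for a massless spin-1 exchange: 1/q^2."""
    return 1.0 / q_squared

def tree_diagram(q_squared):
    """One coupling at each vertex, one propagator for the internal line;
    momentum conservation at the vertices fixes the internal momentum q."""
    return E * photon_propagator(q_squared) * E

# The Coulomb/Rutherford behaviour drops straight out: the amplitude goes
# as e^2/q^2, so the scattering rate goes as 1/q^4.
print(tree_diagram(1.0), tree_diagram(4.0))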