Another specious “no go theorem” test

Another specious “no go theorem” test, full of speculative and false assumptions, claims to disprove time-varying G:

“Gravitational constant appears universally constant, pulsar study suggests
“The fact that scientists can see gravity perform the same in our solar system as it does in a distant star system helps confirm that the gravitational constant truly is universal.”
By NRAO, Charlottesville, VA | Published: Friday, August 07, 2015

This is a good example of the quack direction in which mainstream “science” is going: papers take some measurements, then use an analysis riddled with speculative assumptions to “deduce” a result that doesn’t stand up to scrutiny, but serves simply to defend speculative dogma from the only real danger to it, that people might work on alternative ideas. Like racism, the “no go theorem” uses ad hoc but consensus-appearing implicit and explicit assumptions, with a small sprinkling of factual evidence, to allow the mainstream trojan horse of orthodoxy to survive close scrutiny.

This mainstream-defending “no go theorem” game was started by Ptolemy’s false claim in 150 AD that the solar system can’t be right, because – if it was right – then the earth’s equator would rotate at about 1,000 miles an hour, and – according to Aristotle’s laws of motion (which remained in force for over 1,400 years, until Descartes and Newton came up with rival laws of motion) – clouds would whiz by at 1,000 miles an hour and people would be thrown off the earth by that “violent” motion.

Obviously this no-go theorem was false, but the equator really does rotate at that speed. So there was some fact and some fiction, blended together in Ptolemy’s ultimate defense of the earth-centred universe against Aristarchus of Samos’s 250 BC theory of the solar system and the rotating earth. The arguments against a varying gravitational coupling are similarly vacuous.

Please let me explain. The key fact is, if gravity is due to an asymmetry in forces, which is the case for the Casimir force, then you don’t vary the “gravitational” effect by varying the underlying force for a stable orbit, or any other equilibrium system, like the balance of Coulomb repulsion between hydrogen nuclei in a star, and the gravitational compression.

Put in clearest terms, if you have a tug of war on a rope where there is an equilibrium, then adding more pullers equally to each end of the rope has no net effect, nothing changes.

Similarly, if two matched arm wrestlers were to increase their muscle sizes by the same amount, nothing changes. Similarly, in an arms race if both sides in military equilibrium (parity) increase their weapons stockpiles by the same factor, neither gains an advantage contrary to CND’s propaganda (in fact, the extra deterrence makes a war less likely).

Similarly, if you increase the gravitational compression inside a star by increasing the coupling G, while the electromagnetic (Coulomb repulsion) force increases similarly due to a shared ultimate (unified force theory) mechanism, then the sun doesn’t shine brighter or burn out quicker. The only way that a varying G can have any observable effect is if you make an assumption – either implicitly or explicitly – that G varies with time in a unique way that isn’t shared by other forces. Such an assumption is artificial, speculative, and totally specious, and a good tell-tale sign that a science is turning corruptly conservative and anti-radical, in a poor form of propaganda (good propaganda being honest promotion of objective successes, rather than fake dismissals of all possible alternatives to mainstream dogma), by inventing false or fake reasons to defend the status quo and “shut down the argument”. Ptolemy basically said “look, the SCIENCE IS SETTLED, the solar system must be wrong because (1) our gut reaction rejects it as contrived, (2) it disagrees with the existing laws of motion by the over-hyped expert Aristotle, and (3) we have a mathematically fantastic theory of epicycles that can be arbitrarily fiddled to fit any planetary motion, without requiring the earth to rotate or orbit the sun.” That was “attractive” for a long time!

Edward Teller in 1948 first claimed to disprove Dirac’s varying-G idea, using an argument analogously flawed to the one he used to delay the development of the hydrogen bomb. If you remember the story, Teller at first claimed falsely that compression has no effect on thermonuclear reactions in the hydrogen bomb. He claimed that if you compress deuterium and tritium (fusion fuel), the compressed fuel burns faster, but the same efficiency of burn results. He forgot that the ratio of surface area for escaping heat (X-rays in the H-bomb) to mass is reduced if you compress the fuel, so his scaling-laws argument is fake. If you compress the fuel, the reduced surface area causes a reduced loss of X-ray energy from the hot surface, so that a higher temperature in the core is maintained, allowing much more fusion than occurs in uncompressed fuel.

Likewise, Teller’s 1948 argument against Dirac’s varying gravitational coupling theory is bogus, because of his biased and wooden, orthodox thinking: if you vary G with time in the sun, it doesn’t affect the fusion rate, because there’s no reason why the similar inverse-square-law Coulomb force’s coupling shouldn’t vary the same way. Fusion rates depend on a balance between Coulomb repulsion of positive ions (hydrogen nuclei) and gravitational compression. If both forces in an equilibrium are changed the same way, no imbalance occurs. Things remain the same. If you have one baby at each end of a see-saw in balance in a park, and then add another similar baby to each end, nothing changes!

It proves impossible to explain this to a biased mathematical physicist who is obsessed with supergravity and refuses to think logically and rationally about alternatives. What happens then is that you get a dogma being defended by false “no go theorems” that purport to close down all alternative ideas that might possibly threaten their funding, prestige or, more likely, that threaten “anarchy” and “disorder”. Really, when a religious dogma starts burning heretics, it is not a conspiracy of self-confessed bigots who know they are wrong, trying to do evil to prevent the truth coming out. What really happens is that these people are ultra-conservative dogmatic elitists, camouflaged as caring, understanding, honest liberals. They really believe they’re right, and that their attempts to stifle or kill off honest alternatives using specious no-go theorems are a real contribution to physics.

Feynman’s “rules” for calculating Feynman diagrams, which represent terms in the perturbative Taylor-series type expansion of a path integral in quantum field theory, allow very simple calculations of quantum gravity. The Casimir force of a U(1) Abelian dark energy (repulsive force) theory is able to predict the coupling correctly for quantum gravity.  We do this using Feynman’s rule for a two-vertex Coulomb-type force diagram (which contributes far more to the result than diagrams with more vertices), which implies that the ratio of cross-sections for two interactions is proportional to the square of the ratio of their couplings.  We know the cross-section for the weak nuclear force, and we know the couplings for both gravity and the weak nuclear force.  This gives us the gravitational interaction cross-section.

To get the cross-sections into similar dimensional units for this application of Feynman’s rules, we use a well-established method to put each coupling into units of GeV^{-2}.  The Fermi constant for the weak interaction is divided by the cube of the product of h-bar and the velocity of light, while the Newtonian constant G is divided by the product of h-bar and c^5.  This gives a Fermi coupling of 1.166 x 10^{-5} GeV^{-2} and a Newtonian coupling for gravity of 6.709 x 10^{-39} GeV^{-2}, the ratio of which is squared, using Feynman’s rules, to obtain the ratio of cross-sections for the fundamental interactions.  This is standard physics procedure.  All we’re doing is taking standard procedures and doing something new with them, predicting dark energy (and, vice versa, calculating gravity from dark energy).  Nobody seems to want to know: even Gerard ‘t Hooft rejected a paper, using the specious argument that because we’re not “citing recent theoretical journal papers” it can’t be hyped in his Foundations of Physics, which seems to require prior published work in the field, not a new idea.  (Gerard ‘t Hooft’s silly argument would, in effect, demand that Newton extend Ptolemy’s theory of epicycles, or be censored out.)
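
For concreteness, here is a minimal numerical sketch (in Python) of the unit conversion and coupling ratio described above. The constant values are standard rounded reference figures, and the final step of squaring the coupling ratio to obtain the cross-section ratio is simply the rule stated above, not something the code derives.

G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
GeV = 1.602e-10      # joules per GeV

G_natural = G / (hbar * c**5) * GeV**2   # G / (h-bar c^5), converted from J^-2 to GeV^-2
print(G_natural)                          # roughly 6.7e-39 GeV^-2

G_F = 1.166e-5                            # Fermi coupling, G_F / (h-bar c)^3, in GeV^-2

ratio = G_natural / G_F                   # ratio of the two couplings
print(ratio)                              # roughly 5.8e-34
print(ratio**2)                           # squared ratio, i.e. the cross-section ratio, roughly 3.3e-67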

In this theory, particles are pushed together locally by the fact that we’re surrounded by the mass of the universe, and gauge bosons for dark energy (observed cosmological acceleration) are being exchanged between the masses in an apple and the masses in the surrounding universe.

Here’s a new idea. If one tenth of the energy currently put into inventing negative, false “no go theorem” objections to established facts that are “heretical” or “taboo” in physics were instead directed towards constructive criticisms and developments of new predictions, physics could probably break out of its current stagnation today. Arthur C. Clarke long ago made the observation that when subjective, negative scientists claim to invent theorems to “disprove” the possibility of a certain research direction achieving results and overthrowing mainstream dogma, they’re more often wrong than when they do objective work.

It’s very easy to point out that any new, radical idea is incompatible with some old dogma that is “widely held and established orthodoxy”, partly because that’s pretty much the definition of progress (unless you define all progress as merely adding another layer of epicycles to a half-baked mainstream theory in order to make it compatible with the latest data on the cosmological acceleration), and partly because the new idea is born half-baked, not having been researched with lavish funding for decades or longer by many geniuses.

Fighting inflation with observations of the cosmic background

From Dr Peter Woit’s 14 June 2015 Not Even Wrong blog:

Last week Princeton hosted what seems to have been a fascinating conference, celebrating the 50th anniversary of studies of the CMB. … The third day of the conference featured a panel where sparks flew on the topics of inflation and the multiverse, including the following:

Neil Turok: “even from the beginning, inflation looked like a kluge to me… I rapidly formed the opinion that these guys were just making it up as they went along… Today inflation is the junk food of theoretical physics… Inflation isn’t radical enough – it’s too much a patchwork. It all rests on rare initial conditions… Akin to solving electron stability with springs… all we have is proof of expansion, not that the driving force is inflation… “because the alternatives are bad you must believe it” isn’t an option that I ascribe to, and one that is prevalent now…  we should encourage young to … be creative (not just do designer inflation)

David Spergel: papers on anthropics don’t teach us anything – which is why it isn’t useful…

Slava Mukhanov: inflation is defined as exponential expansion (physics) + non-necessary metaphysics (Boltzmann brains etc) … In most papers on initial conditions on inflation, people dig a hole, jump in, and then don’t succeed in getting out… unfortunately now we have three new indistinguishable inflation models a day – who cares?

Paul Steinhardt: inflation is a compelling story, it’s just not clear it is right… I’d appreciate that astronomers presented results as what they are (scale invariant etc) rather than ‘inflationary’… Everyone on this panel thinks multiverse is a disaster.

Roger Penrose: inflation isn’t falsifiable, it’s falsified… BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye.

Marc Davis: astronomers don’t care about what you guys are speculating about …

I was encouraged by Steinhardt’s claim that “Everyone on this panel thinks multiverse is a disaster.” (although I think he wasn’t including moderator Brian Greene). Perhaps as time goes on the fact that “the multiverse did it” is empty as science is becoming more and more obvious to everyone.

Inflation theory, a phase change at the Planck scale that allows the universe to expand for a brief period faster than light, is traditionally defended by:

(a) the need to correct general relativity by reducing the early gravitational curvature, since general relativity by itself predicts too great a density of the early universe to account for the smallness of the ripples in the cosmic background radiation which decoupled from matter when the universe became transparent at 300,000 years after zero time.  (The transparency occurs when electrons combine with ions to form neutral molecules, which are relatively transparent to electromagnetic radiation, unlike free charges which are strong absorbers of radiation.)

Thus, inflation is being used here to reduce the effective gravitational field strength by dispersing the ionized matter over a larger volume, which reduces the rate of gravitational clumping to match the small amount of clumping observed at 300,000 years after zero.

Another way of doing the same thing is a theory of gravitation as a Casimir force resulting from dark energy, which correctly predicts G from dark energy and makes the gravitational coupling G a linear function of time, so that at 300,000 years G is merely 2.3 x 10^{-5} of today’s value, and it is even smaller at earlier times (the smallness of the CBR ripples is not determined solely by the curvature when they were emitted, but by the time-integrated effect of the curvature up to that time).  The standard “no-go theorem” by Edward Teller (1948) used against any variation of G is false, as we have shown, because it makes an implicit assumption that’s wrong: the Teller no-go theorem assumes that G varies with time in one specific way.  Teller assumes, for the sake of his no-go theorem, that the gravitational coupling varies inversely with time as Dirac assumed, rather than linearly with time as in a Casimir mechanism for quantum gravity as an emergent effect of dark energy pushing masses together.  He also assumes implicitly that G varies by itself, without any variation of the Coulomb coupling.  Teller thus relies on an assumed imbalance between gravity and Coulomb forces to “disprove” varying G, as well as assuming that any variation of G is inverse with time.  All he does is disprove his own flawed assumptions, not the facts!
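
As a quick sanity check on that figure (a back-of-the-envelope sketch, assuming G grows linearly with time and taking the age of the universe to be roughly 13 billion years):

t_decoupling = 3.0e5   # years after zero time (CBR decoupling)
t_now = 1.3e10         # assumed present age of the universe, years
print(t_decoupling / t_now)   # about 2.3e-5, the fraction of today's G if G grows linearly with time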

These assumptions are all wrong, as we showed.  Gravity and Coulomb forces are analogous inverse square law interactions so their couplings will vary the same way; therefore, no devastating Teller-type imbalance between Coulomb repulsion of protons in stars and gravitational compression forces arises.  The QG theory works neatly.

(b) Inflation, like string theory, is defended by elitist snobbery of the dictatorial variety: “you must believe this theory because there are no alternatives; we know this to be true because we don’t have any interest in taking alternatives seriously, particularly if they are not hyped by media personalities or “big science” propaganda budgets, and if anyone suggests one we’ll kill it by groupthink “peer” review.  Therefore, like superstring theory, if you want to work at the cutting edge, you’d better work on inflation, or we’ll kill your work.”

That is what it boils down to.  There’s an attitude problem, with two kinds of scientists, defined more by attitude than by methodology.  One kind wants to find the truth; the other wants to become a star, or, failing that, at least to be a groupie.  This corruption is not confined to science, but also occurs in political parties, organized religion, and capitalism.  Some people want to make the world a better place; others are more selfish.  Ironically, those who are the most corrupt are also the most expert at camouflaging themselves as the opposite, a fact that emerged with the BBC’s star Jimmy Savile.  (While endlessly exaggerating correlations between temperature and CO2, snubbing others, and making money from the taxpayer, they present themselves as moral.)

Youhei Tsubono’s criticism of the magnetic spin alignment mechanism for the Pauli exclusion principle

“All QM textbooks describe the effects of the Exclusion Principle but its explanation is either avoided or put down to symmetry considerations. The importance of the Exclusion Principle as a foundational pillar of modern physics cannot be overstated since, for example, atomic structure, the rigidity of matter, stellar evolution and the whole of chemistry depend on its operation.” – Mike Towler, “Exchange, antisymmetry and Pauli repulsion. Can we ‘understand’ or provide a physical basis for the Pauli exclusion principle?”, TCM Group, Cavendish Laboratory, University of Cambridge, pdf slide 23.

Japanese physicist Youhei Tsubono, who has a page criticising spin-orbit coupling, points out that there is an apparent discrepancy between the magnetic field energy for electron spin alignment and the energy difference between the 1s and 2s states, which raises the question of how the spinning-charge (magnetic dipole) alignment of electrons can provide the mechanism for the Pauli exclusion principle.

Referring to Quantum Chemistry, 6th edition, by Ira N. Levine, p. 292, Tsubono argues that the difference in lithium’s energy between having three electrons in the 1s state (forbidden by the Pauli exclusion principle) and having two electrons in the 1s state (with opposite spins) and the third electron in the 2s state is 11 eV, which he claims is far greater than the magnetic dipole (spinning charge) field energy, only about 10^{-5} eV.  I can’t resist commenting here to resolve this alleged anomaly:

Japanese physicist Youhei Tsubono on the Pauli exclusion principle mechanism by alignment of magnetic dipoles from spinning electrons.

In a nutshell, the error Tsubono makes here is conflating the energy of alignment of magnetic spins for electrons at a given distance from the nucleus with the energy needed not only to flip spin states but also to move to a greater distance from the nucleus. It is true that the repulsive magnetic dipole field energy between similarly-aligned electron spins is only about 10^{-5} eV, but because both electrons are in the same subshell that is enough to account for the observed Pauli exclusion principle.  The underlying error Tsubono makes is to start from the false model (see left hand side of the diagram above) showing three electrons in the 1s state, then raising the rhetorical question of how the small magnetic repulsive energy is able to drive one electron into the 2s state.  This situation never arises. The nucleus is formed first of all, in fully ionized form, by some nuclear reaction. The first electrons therefore approach the nucleus from a large distance.  The realistic question is therefore not: “how does the third electron in the 1s state get enough energy to move to the 2s state from the weak magnetic repulsion that causes the Pauli exclusion principle?”  The third electron stops in the 2s state because of a mechanism: it’s unable to radiate the energy it would gain in approaching any closer to the nucleus.  The electron in the 2s state can only radiate energy in units of hf, so even a small discrepancy in energy is enough to prevent it approaching closer to the nucleus.  (Similarly, if an entry ticket costs $10, you don’t get in with $9.99.)

Similarly, the objection Tsubono raises to the supposedly faster-than-light spin speed at the classical electron radius is false, because the core size of the electron is far smaller than the classical electron radius.

The core can therefore spin fast enough to explain the magnetic dipole moment without violating the speed of light, which would only be the case if the classical electron radius were the true size.  What’s annoying about Tsubono’s page, like those of many other popular critics of “modern physics”, is that it tries to throw out the baby with the bathwater.  The spinning electron’s dipole magnetic field alignment mechanism for the Pauli exclusion principle is one of the few really impressive, understandable mechanisms in quantum mechanics, and it is therefore important to defend it.  Having chucked out the physical mechanism that comes from quantum field theory, Tsubono then argues that “quantum field theory is not physics, just maths.”

Richard P. Feynman reviews nonsensical “mathematical” (aka philosophical) attacks on objective critics of quantum dogma in the Feynman Lectures on Physics, volume 3, chapter 2, section 2-6:

“Let us consider briefly some philosophical implications of quantum mechanics. … making observations affects a phenomenon … The problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves … Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) … The question is whether the ideas of the exact position of a particle and the exact momentum of a particle are valid or not. The classical theory admits the ideas; the quantum theory does not. This does not in itself mean that classical physics is wrong.

“When the new quantum mechanics was discovered, the classical people—which included everybody except Heisenberg, Schrödinger, and Born—said: “Look, your theory is not any good because you cannot answer certain questions like: what is the exact position of a particle?, which hole does it go through?, and some others.” Heisenberg’s answer was: “I do not need to answer such questions because you cannot ask such a question experimentally.” … It is always good to know which ideas cannot be checked directly, but it is not necessary to remove them all. … In quantum mechanics itself there is a probability amplitude, there is a potential, and there are many constructs that we cannot measure directly. The basis of a science is its ability to predict. … We have already made a few remarks about the indeterminacy of quantum mechanics. … we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom of will, and of the idea that the world is uncertain. Of course we must emphasize that classical physics is also indeterminate … if we start with only a tiny error it rapidly magnifies to a very great uncertainty. … For already in classical mechanics there was indeterminability from a practical point of view.”

Most QM and QFT textbook authors (excepting Feynman’s 1985 QED) ignore the mechanism of quantum field theory, in order to cater to Pythagorean-style mathematical mythology.  This mythology is reminiscent of the elitist warning over Plato’s doorway: only mathematicians are welcome.  To enforce this policy, an obfuscation of physical mechanisms is usually undertaken, in a pro-“Bohring” effort to convince students that physics at the basic level is merely a matter of dogmatically applying certain mathematical rules from geniuses, without any physical understanding.  Tsubono has other criticisms of modern dogma, e.g. that dark energy provides a modern ad hoc version of “ether” to make general relativity compatible with observation (just the opposite of Einstein’s basis for special relativity).  So why not go back to Lorentz’s mechanism for mass increase and length contraction as a field interaction, accompanied by radiation which occurs upon acceleration?  The answer seems to be that there is a widespread resistance to trying to understand physics objectively.  It seems that the status quo is easier to defend.

There is a widespread journalistic denial of the freedom to ask basic questions in quantum mechanics about what is really going on and what the mechanism is, and efforts are made to close down discussions that could lead in a revolutionary, unorthodox or heretical direction:

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up [trying to explain it further].”

– Richard P. Feynman, as quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’ … electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important, and we have to sum the arrows[*]  to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5. [*Arrows = wavefunction amplitudes, each proportional to exp(iS) = cos S + i sin S, where S is the action of the potential path.]

Nobel Laureate Gell-Mann debunked single-wavefunction entanglement using colored socks.  A single, and thus entangled/collapsible, wavefunction for each quantum number of a particle only occurs in non-relativistic 1st quantization QM, such as Schroedinger’s equation.  By contrast, in relativistic 2nd quantization there is a separate wavefunction amplitude for each potential/possible interaction, not merely one wavefunction amplitude per quantum number.  This difference gets rid of “wavefunction” collapse, “wavefunction” entanglement philosophy, and so on.  Instead of a single wavefunction that becomes deterministic only when measured, we have the path integral, where you add together all the possible wavefunction amplitudes for a particle’s transition.  The paths with smallest action compared to Planck’s constant (thus having the smallest energy and/or time) are in phase and contribute most, while paths of larger action (large energy and/or time) have phases that interfere and cancel out.

Virtual (or offshell) particles travel along the cancelled paths of large action; real (or onshell) particles do not. So there’s a simple mechanism which replaces the single-wavefunction chaos of ordinary quantum mechanics with destructive and constructive interference between multiple wavefunctions per particle in quantum field theory, which is the correct, relativistic theory.  Single-wavefunction theories like Schroedinger’s model of the atom (together with Bell’s inequality, which falsely assumes a single wavefunction per particle, like quantum computing hype) are false, because they are non-relativistic and thus ignore the fact that the Coulomb field is quantized, i.e. that field quanta or virtual photon interactions mediate the force binding an orbital electron to a nucleus.  Once you switch to quantum field theory, the chaotic motion of an orbital electron has a natural origin in its random, discrete interactions with the quantum Coulomb field.  (The classical Coulomb field in Schroedinger’s model is a falsehood.)
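
The phase-cancellation mechanism is easy to illustrate numerically. The toy sketch below (my own illustration, not a real path integral calculation) assumes a simple quadratic variation of the action about the stationary, classical path, and adds up the arrows exp(iS/h-bar) for a family of trial paths: paths near the stationary action add coherently, while the large-action paths interfere and largely cancel.

import numpy as np

hbar = 1.0
a = np.linspace(-10.0, 10.0, 20001)   # deviation of each trial path from the classical path
S = 5.0 * a**2                         # assumed quadratic action about the stationary (classical) path
arrows = np.exp(1j * S / hbar)         # one "arrow" (wavefunction amplitude) per trial path

near = arrows[np.abs(a) < 1.0].sum()   # contributions from near-stationary-action paths
far = arrows[np.abs(a) > 5.0].sum()    # contributions from large-action paths
print(abs(near), abs(far))             # near-stationary paths dominate; the large-action paths mostly cancel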

Relativistic quantum field theory, unlike quantum mechanics (1st quantization), gets over Rutherford’s objection to Bohr’s atomic electron, namely the emission of radiation by an accelerating charge.  Charges in quantum field theory have fields which are composed of the exchange of what is effectively offshell radiation: the ground state is thus defined by an equilibrium between emission and reception of virtual radiation. We only “observe” onshell photons emitted while an electron accelerates, because the act of acceleration throws the usual balanced equilibrium (of virtual photon exchange between all charges) temporarily out of equilibrium, by preventing the usual complete cancellation of field phases.  Evidence: consider Young’s double slit experiment using Feynman’s path integral.  We can see that virtual photons go through all slits, in the process interacting with the fields of the electrons on the slit edges (causing diffraction), but only the uncancelled field phases arriving at the screen are registered as a real (onshell) photon.  It’s simple.

This is analogous to the history of radiation in thermodynamics. Before Prevost’s suggestion in 1792 that an exchange of thermal energy explains the absence of cooling if all bodies are continuously radiating energy, thermodynamics was in grave difficulties, with heroic Niels Bohr style “God characters” grandly dismissing as ignorant anyone who discovered an anomaly in the theories of fluid heat like caloric and phlogiston. Today we grasp that a room at 15 C is radiating because its temperature is 288 K above absolute zero. Cooling is not synonymous with radiating.  If the surrounding parts of the building are also at 15 C, the room will not cool, since the radiating effect is compensated by the receipt of radiation from the surrounding rooms, floor and roof.  Likewise, the electron in the ground state can radiate energy without spiralling into the nucleus, if it is in equilibrium and is receiving as much energy as it radiates.

False no-go theorems, based on false premises, have always been used to quickly end any progressive suggestions without objective discussion.  This censorship deliberately retarded the objective development of new ideas which were contrary to populist dogma.  It was only when the populist dogma became excessively boring or when a rival idea was evolved into a really effective replacement, that “anomalies” in the old theory ceased to be taboo.  Similarly, the populist and highly misleading Newtonian particle theory of light still acts to prevent objective discussions of multipath interference as the explanation of Young’s double-slit experiment, just as it did in Young’s day:

“Commentators have traditionally asked aloud why the two-slit experiment did not immediately lead to an acceptance of the wave theory of light. And the traditional answers were that: (i) few of Young’s contemporaries were willing to question Newton’s authority, (ii) Young’s reputation was severely damaged by the attacks of Lord Brougham in the Edinburgh Review, and that (iii) Young’s style of presentation, spoken and written, was obscure. Recent historians, however, have looked instead for an explanation in the actual theory and in its corpuscular rivals (Kipnis 1991; Worrall 1976). Young had no explanation at the time for the phenomena of polarization: why should the particles of his ether be more willing to vibrate in one plane than another? And the corpuscular theorists had been dealing with diffraction fringes since Grimaldi described them in the 17th century: elaborate explanations were available in terms of the attraction and repulsion of corpuscles as they passed by material bodies. So Young’s wave theory was thus very much a transitional theory. It is his ‘general law of interference’ that has stood the test of time, and it is the power of this concept that we celebrate on the bicentennial of its publication in his Syllabus of 1802.”

– J. D. Mollon, “The Origins of the Concept of Interference”, Phil. Transactions of the Royal Society of London, vol. A360 (2002), pp. 807-819.

Feynman remarks in his Lectures on Physics that if you deny all “unobservables”, as Mach and Bohr do, then you can kiss the wavefunction Psi goodbye. You can observe probabilities and cross-sections via reaction rates, but as Feynman argues, that’s not a direct observation of the wavefunction’s existence. There are lots of things in physics that are founded on indirect evidence, giving rise to the possibility that an alternative theory may explain the same evidence using a different basic model. This is exactly the situation that occurred in explaining sunrise by either the sun orbiting the earth daily, or the earth rotating daily while the sun moves only about one degree across the sky.

Propagator derivations

Peter Woit is writing a book, Quantum Theory, Groups and Representations: An Introduction, and has a PDF of the draft version linked here.  He has now come up with the slogan “Quantum Theory is Representation Theory”, after postulating “What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics”.

I’ve recently become interested in the mathematics of QFT, so I’ll just make a suggestion for Dr Woit regarding his section “42.4 The propagator”, which is incomplete (he has only the heading there, on page 404 of the 11 August 2014 revision, with no text under it at all).

The propagator is the greatest part of QFT from the perspective of Feynman’s 1985 book QED: you evaluate the propagator from either the Lagrangian or the Hamiltonian, since the propagator is simply the Fourier transform of the potential energy (the interaction part of the Lagrangian provides the couplings for Feynman’s rules, not the propagator).  Fourier transforms are simply Laplace transforms with a complex number in the exponent.  The Laplace and Fourier transforms are used extensively in analogue electronics for transforming waveforms (amplitudes as a function of time) into frequency spectra (amplitudes as a function of frequency).  Taking the concept at its simplest, the Laplace transform of a constant amplitude is just the reciprocal (inverse), e.g. an amplitude pulse lasting 0.1 second has a frequency of 1/0.1 = 10 Hertz.  You can verify that from dimensional analysis.  For integration between zero and infinity, with F(f) = 1, we have:

Laplace transform, F(t) = Integral [F(f) * exp(-ft)] df

= Integral [exp(-ft)] df

= 1/t.

If we change from F(f) = 1 to F(f) = f, we now get:

Laplace transform, F(t) = Integral [f * exp(-ft)] df = 1/(t squared).
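
Those two results can be checked symbolically in a couple of lines (a sketch using sympy, with f and t the frequency and time variables used above):

import sympy as sp

f, t = sp.symbols('f t', positive=True)
print(sp.integrate(sp.exp(-f*t), (f, 0, sp.oo)))     # F(f) = 1 gives 1/t
print(sp.integrate(f*sp.exp(-f*t), (f, 0, sp.oo)))   # F(f) = f gives 1/t**2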

The trick of the Laplace transform is the integration property of the exponential function, i.e. its unique property of remaining unchanged by integration (because e is the base of natural logarithms), apart from multiplication by the constant in its power (the factor which is not a function of the variable you’re integrating over).  The Fourier transform is the same as the Laplace transform, but with a factor of “i” included in the exponential power:

Fourier transform, F(t) = Integral [F(f) * exp(-ift)] df

In quantum field theory, instead of inversely linked frequency f and time t, you have inversely linked variables like momentum p and distance x.   This comes from Heisenberg’s ubiquitous relationship, p*x = h-bar.  Thus, p ~ 1/x.  Suppose that the potential energy of a force field is given by V = 1/x.  Note that field potential energy V is part of the Hamiltonian, and also part of the Lagrangian, when given a minus sign where appropriate.  You want to convert V from position space, V = 1/x, into momentum space, i.e. to make V a function of momentum p.  The Fourier transform of the potential energy over 3-d space shows that V ~ 1/p squared.  (Since this blog isn’t very suitable for lengthy mathematics, I’ll write up a detailed discussion of this in a vixra paper soon to accompany the one on renormalization and mass.)

What’s interesting here is that this shows that the propagator terms in Feynman’s diagrams, which, integrated over, produce the running couplings and thus renormalization, are simply dependent on the field potential, which can be written in terms of classical Coulomb field potentials or quantum Yukawa-type potentials (Coulomb field potentials with an exponential decrease included).  There are of course two types of propagator: bosonic (integer spin) and fermionic (half integer spin).  It turns out that the classical Coulomb field law gives a potential of V = 1/x which, when Fourier transformed, gives you V ~ 1/p squared; and when you include a Yukawa exp(-mx) short-range attenuation or decay term, i.e. V = (1/x)exp(-mx), you get a Fourier transform of 1/[(p squared) – (m squared)], which is the same result as the spin-1 field (boson) propagator obtained from a Klein-Gordon lagrangian.
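
Before moving on to the Dirac case, here is a short symbolic sketch of the three-dimensional Fourier transform of a radial potential, as a check on the 1/p squared claim (it uses the standard reduction of the angular integral; the pure Coulomb case needs a convergence regulator exp(-εr), removed at the end). Note that the static transform of the Yukawa potential comes out with (p squared + m squared) in the denominator; the relative sign of the m squared term in the Minkowski-signature propagator quoted above is a matter of the metric convention used for the four-momentum.

import sympy as sp

r, k, m, eps = sp.symbols('r k m epsilon', positive=True)

# 3-D Fourier transform of a radial potential V(r), after doing the angular integrals:
#   V~(k) = (4*pi/k) * Integral [ r * V(r) * sin(k*r) ] dr, with r running from 0 to infinity
coulomb = (4*sp.pi/k) * sp.integrate(r * (1/r) * sp.exp(-eps*r) * sp.sin(k*r), (r, 0, sp.oo))
print(sp.simplify(sp.limit(coulomb, eps, 0)))   # 4*pi/k**2, i.e. V ~ 1/p squared

yukawa = (4*sp.pi/k) * sp.integrate(r * sp.exp(-m*r)/r * sp.sin(k*r), (r, 0, sp.oo))
print(sp.simplify(yukawa))                       # 4*pi/(k**2 + m**2)
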
However, using the Dirac lagrangian, which is basically a square-root version of the Klein-Gordon equation with Dirac’s gamma matrices (to avoid losing solutions due to the problem that minus signs and complex numbers tend to disappear when you square them), you get a quite different propagator: 1/(p – m).  The squares on p and m which occur for spin-1 Klein-Gordon equation propagators disappear for Dirac’s fermion (half integer spin) propagator!
So what does this tell us about the physical meaning of Dirac’s equation?  Put another way: we know that Coulomb’s law in QFT (QED more specifically) physically involves field potentials consisting of exchanged spin-1 virtual photons, which is why the Fourier transform of Coulomb’s law gives the same result as the propagator from the Klein-Gordon equation but without a mass term (Coulomb’s virtual photons are massless, so the electromagnetic force is infinite ranged).  But what is the equivalent of Coulomb’s law for Dirac’s spin-1/2 fermion fields?  Doing the Fourier transform in the same way, but ending up with Dirac’s 1/(p – m) fermion propagator, gives an interesting answer which I’ll discuss in my forthcoming vixra paper.
Another question is this: the Higgs field and the renormalization mechanism only deal with problems of mass at high energy, i.e. UV cutoffs as discussed in detail in my previous paper.  What about the loss of mass at low energy, the IR cutoff, to prevent the coupling from running due to the presence of a mass term in the propagator?
In other words, in QED we have the issue that the running coupling due to polarizable pair production only kicks in at 1.02 MeV (the energy needed to briefly form an electron-positron pair).  Below that energy, or in weak fields beyond the classical electron radius, the coupling stops running, so the effective electronic charge is constant.  This is why there is a standard low-energy electronic charge, the one that was measured by Millikan.  Below the IR cutoff, or at distances larger than the classical electron radius, the charge of an electron is constant and the force merely varies with the Coulomb geometric law (the spreading or divergence of field lines or field quanta over an increasing space, diluting the force, but with no additional vacuum polarization screening of charge, since this screening is limited to distances shorter than the classical electron radius, or energies beyond about 1 MeV).
So how and why does the Coulomb potential suddenly change from V = 1/x beyond a classical electron radius, to V = (1/x)exp(-mx) within a classical electron radius? Consider the extract below, from page 3 of:
Integrating using a massless Coulomb propagator to obtain correct low energy mass
The key problem for the existing theory is very clear when looking at the integrals in Fig. 1.  Although we have an upper case Lambda symbol included for an upper limit (a high-energy, i.e. UV, cutoff) on the integral which includes an electron mass term, we have not included a lower integration limit (IR cutoff): this is in keeping with the shoddy mathematics of most (all?) quantum field theory textbooks, which either deliberately or maliciously cover up the key (and really interesting or enlightening) problems in the physics by obfuscating, or by getting bogged down in mathematical trivia, like a clutter of technical symbolism.  What we’re suggesting is that there is a big problem with the concept that the running coupling merely increases the “bare core” mass of a particle: this standard procedure conflates and confuses the high-energy bare core mass, which isn’t seen at low energy, with the standard value of electron mass, which is what you observe at low energy.
In other words, we’re arguing for a significant re-interpretation of physical dogmas within the existing mathematical structure of QFT, in order to get useful predictions; nothing useful is lost by our approach, while there is everything to be gained from it.  Unfortunately, physics is now a big money-making industry in which journals and editors are used as a professional status-political-financial-fame-fashion-bigotry-enforcing-consensus-enforcing power-grasping tool, rather than an informative tool designed solely and exclusively to speed up the flow of information that is helpful to those people focused merely upon making advances in the basic science.  But that’s nothing new.  [When Mendel’s genetics were finally published after decades of censorship, his ideas had (allegedly) been plagiarized by two other sets of bigwig researchers, whose papers the journal editors had more to gain by publishing than they had to gain from publishing the original research of someone then obscure and dead!  Neat logic, don’t you agree?  Note that this statement of fact is not “bitterness”; it is just fact.  A lot of the bitterness that does arise in science comes not from the hypocrisy of journals and groupthink, but because these are censored out from discussion.  (Similarly, the Oscars are designed to bring attention to the Oscars, since the prize recipients are already famous anyway.  There is no way to escape the fact that the media in any subject, be it science or politics, deems one celebrity more worthy of publicity than the diabolical murder of millions by left-wing dictators.  The reason is simply that the more “interesting” news sells more journals than the more difficult-to-understand problems.)]
 17 August 2014 update:

(1) The Fourier transform of the Coulomb potential (or the Fourier transform of the potential energy term in the Lagrangian or Hamiltonian) gives rest mass.

(2) Please note in particular the observation that since the Coulomb (low energy, below the IR cutoff) potential’s Fourier transform gives a propagator omitting a mass term, this propagator does not contribute a logarithmic running coupling. This lack of a running coupling at low energy is observed in classical physics for energy below about 1 MeV, where no vacuum polarization or pair production occurs, because pair production requires at least the mass of the electron and positron pair, 1.02 MeV. The Coulomb non-mass-term propagator contribution at one loop to electron mass is then non-logarithmic and simply equal to a factor like alpha times the integral (between 0 and A) of (1/k^3) d^4k = alpha * A. As shown in the diagram, we identify this “contribution” from the Coulomb low-energy propagator without a mass term as the actual ground state mass of the electron, with the cutoff A corresponding to the neutral currents that mire down the electron charge core, causing mass, i.e. A is the mass of the uncharged Z boson of the electroweak scale (91 GeV). If you have two one-loop diagrams, this integral becomes alpha * A squared.
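
The integral quoted above is easy to check symbolically: in four-dimensional Euclidean momentum space d^4k = 2 pi^2 k^3 dk, so the result is indeed proportional to A (the 2 pi^2 angular factor just joins the overall numerical coefficient referred to as “a factor like alpha”). A minimal sympy sketch:

import sympy as sp

k, A = sp.symbols('k A', positive=True)
# d^4k in 4-D spherical coordinates is 2*pi**2 * k**3 dk
print(sp.integrate((1/k**3) * 2*sp.pi**2 * k**3, (k, 0, A)))   # 2*pi**2*A, i.e. proportional to the cutoff A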

(3) The one loop corrections shown on page 3 to electron mass for the non-Coulomb potentials (i.e. including mass terms in the propagator integrals) can be found in many textbooks, for example equations 1.2 and 1.3 on page 8 of “Supersymmetry Demystified”. As stated in the blog post, I’m writing a further paper about propagator derivations and their importance.

If you read Feynman’s 1985 QED (not his 1965 book with Hibbs, which misleads most people about path integrals and is preferred to the 1985 book by Zee and Distler), the propagator is the brains of QFT. You can’t directly evaluate the path integral over spacetime, i.e. the integral of the amplitude exp(iS) taken over all possible geometric paths, where the action S is itself the integral of the lagrangian. So, as Feynman argues, you have to construct a perturbative expansion, each term becoming more complex and representing pictorially the physical interactions between particles. Feynman’s point in his 1985 book is that this process essentially makes QFT simple. The contribution from each diagram involves multiplying the charge by a propagator for each internal line and ensuring that momentum is conserved at vertices.

Rank 1 quantum gravity

NC Cook paper

Above: extract from “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does Not Turn Them Into Rank-2 Spacetime Curvature”.

Above: a serious problem for unfashionable new alternative theories in a science dominated by noisy ignorant bigots.

Dr Woit has a post (here, with comments here) about the “no alternatives argument” used in science to “justify” a research project by “closing down arguments”, dismissing any possibility of an alternative direction (the political side of it, and also its use in pure politics).  I tried to make a few comments, but it proved impossible to defend my position without using maths of a sort which could not be typed in a comment, so I’ll place the material in this post, responding to criticisms here too:

“… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

It’s weird that Dr Peter Woit claims that this “there is no alternative so you must believe in M-theory” argument is difficult to respond to, seeing that he debunked it in his own 2002 arXiv paper “Quantum field theory and representation theory”.

In that paper he makes the specific point about the neglect of alternatives due to M-theory hype, by arguing there that a good alternative is to find a symmetry group in low dimensions that encompasses and explains better the existing features of the Standard Model.

Woit gives a specific example, showing how to use Clifford algebra to build a representation of a symmetry group that for 4 dimensional spacetime predicts the electroweak charges including left handed chiral weak interactions, which the Standard Model merely postulates.

But he also expresses admiration for Witten, whose first job was in left wing politics, working for George McGovern, a Democratic presidential nominee in 1972. In politics you brainwash yourself that your goal is a noble one, some idealistic utopia, then you lie to gain followers by promising the earth. I don’t see much difference with M-theory, where a circular argument emerges in which you must

(1) shut down alternative theories as taboo, simply because they haven’t (yet) been as well developed or hyped as string theory, and

(2) use the fact that you have effectively censored alternatives out as being somehow proof that there are “no alternatives”.

I don’t think Dr Woit is making the facts crystal clear, and he fails badly to make his own theory crystal clear in his 2002 paper where he takes the opposite approach to Witten’s hype of M-theory. Woit introduces his theory on page 51 of his paper, after a very abstruse 50 pages of advanced mathematics on group symmetry representations using Lie and Clifford algebras. The problem is that alternative ideas that address the core problems are highly mathematical and need a huge amount of careful attention and development. I believe in censorship for objectivity in physics, instead of censorship to support fashion.

” Indeed as Einstein showed, gravity is *not* a force, it is a manifestation of spacetime curvature.”

This is a pretty good example of a “no alternatives” delusion: if gravity is quantized in quantum field theory, the gravitational force will then be mediated by graviton exchange (gauge bosons), just like any Standard Model force, not spacetime curvature as it is in general relativity. Note that Einstein used rank-2 tensors for spacetime curvature to model gravitational fields because that Ricci tensor calculus was freshly minted and available in the early 20th century.

Rank-2 tensors hadn’t been developed to that stage at the time of Maxwell’s formulation of electrodynamics laws, which uses rank-1 tensors or ordinary vector calculus to model fields as bending or diverging “lines” in space. Lines in space are rank 1, spacetime distortion is rank 2. The vector potential version of Maxwell’s equations doesn’t replace field lines with spacetime curvature for electromagnetic fields, it merely generalizes the rank-1 field description of Maxwell. It’s taboo to point out that electrodynamics and general relativity arbitrarily and dogmatically use different mathematical descriptions for reasons of historical fluke, not physical utility (rank 1 equations for field lines versus rank 2 equations for spacetime curvature). Maxwell worked in a pre-tensor era, Einstein in a post-tensor era. Nobody bothered to try to replace Maxwell’s field line description of electrodynamics with a spacetime curvature description, or vice-versa to express gravitational fields in terms of field lines. It’s taboo to even suggest thinking about it! Sure there will be difficulties in doing so, but you learn about physical reality by overcoming difficulties, not by making it taboo to think about.

The standard dogma is to assert that, just because Maxwell’s model is rank-1 and involves spin-1 gauge boson exchange when quantized as QED, general relativity must somehow involve a different spin to couple to the rank-2 tensor: spin-2 gravitons. However, since 1998 it’s been observationally clear that the cosmological acceleration implies a repulsive long-range force between masses, akin to spin-1 boson exchange between similar charges (mass-energy being the gravitational charge). Now, if you take this cosmological acceleration or repulsive interaction or “dark energy” as the fundamental interaction, you can obtain general relativity’s “gravity” force (attraction) in the way the Casimir force emerges, with checkable predictions that were subsequently confirmed by observation (the dark energy predicted in 1996, observed 1998).  Hence, understanding the maths allows you to find the correct physics!

Jesper: what doesn’t make sense is your reference to Ashtekar variables, which don’t convert spacetime curvature into rank-1 equations for field lines. What they do is introduce more obfuscation without any increase in understanding of nature. LQG, which resulted from Ashtekar variables, has been a failure. The fact is, there is no mathematical description of GR in terms of field lines, and no mathematical description of QED in terms of spacetime curvature, and this is for purely historical, accidental reasons! The two different descriptions are long-held dogma, and it’s taboo to mention this.

(For a detailed technical discussion of the difference between spacetime curvature maths and Maxwell’s field lines, please see my 2013 paper “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra).

Geometrodynamics doesn’t express electrodynamics’ rank 1 field lines as spacetime curvature, any more than vortices do, or any more than Ashtekar variables can express spacetime curvature as field lines.

The point is, if you want to unify gravitation with standard model forces, you first need to express them with the same mathematical field description so you can properly understand the differences. You need both Maxwell’s equations and gravitation expressed as field lines (rank 1 tensors), or you need them both expressed as spacetime curvature (rank 2 tensors). The existing mixed description (rank 1 field lines for QED, spacetime curvature for GR) follows from historical accident and has become a hardened dogma to the extent that merely pointing out the error results in attacks of the sort you make, where you mention some other totally irrelevant description and speculatively claim that I haven’t heard of it.

The issue is not “which is the more fundamental one”. The issue is expressing all the fundamental interactions in the *same* common field description, whatever that is, be it rank-1 or rank-2 equations. It doesn’t matter whether you choose field lines or spacetime curvature. What does matter is that every force is expressed in a *common* field description. The existing system expresses all SM particle interactions as rank-1 tensors and gravitation as rank-2 tensors. Your comment ignores this, and you claim it is “personal prejudice” to choose “which fundamental theory is correct”, which “cannot be established by making dogmatic statements”. I’m not prejudiced in favour of any particular description; I am against the confusion of mixing up different descriptions. That mixture is what’s based on dogmatic prejudice!

“Yang-Mills theory (Maxwell, QED, QCD etc.) is a theoretical framework of connections (rank 1 tensor) and curvature of connections (rank 2 tensor).”

Wrong: the rank-2 field strength tensor is not spacetime curvature, as I prove in my paper on fibre connections; see “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does Not Turn Them Into Rank-2 Spacetime Curvature”, on vixra.

Maxwell’s equations of electromagnetism describe three dimensional electric and magnetic field line divergence and curl (rank-1 tensors, or vector calculus), but were compressed by Einstein by including those rank-1 equations as components of rank-2 tensors by gauge fixing, as I showed there. The SU(N) Yang-Mills equations for weak and strong interactions are simply an extension of this, by adding on a quadratic term, the Lie product. As for the connection of gauge theory to fibre bundles, as I showed in that paper, Yang merely postulates that the electromagnetic field strength tensor equals the Riemann tensor and that the Christoffel matrix equals the covariant vector potential. These are efforts to paper over the physical distinctions between the field line description of gauge theory and the curved spacetime description of general relativity. I go into all this in detail in that 2013 paper.

The fact that only ignorant responses are made to factual data also exists in all other areas of science where non-mainstream ideas have been made taboo, and where you have to fight a long war merely to get a fact reviewed without bigoted insanity or apathy.

Karl Popper’s 1935 correspondence arguments with Einstein are vital reading. See, in particular, Einstein’s letter to Karl Popper dated 11 September 1935, published in Appendix xii to the 1959 English edition of Popper’s “Logic of Scientific Discovery,” pages 482-484. Einstein writes in that letter that he has physical objections to the trivial arguments of Heisenberg based on the single-wavefunction collapse idea of non-relativistic QM. Note that wavefunction collapse doesn’t occur at all in relativistic 2nd quantization, as expressed in Feynman’s path integrals, where multipath interference allows physical path-interference processes to replace the metaphysical collapse of a single indeterminate wavefunction amplitude. You instead integrate over many wavefunction amplitude contributions, one representing every possible path, including specifically the paths that represent physical interactions with a measuring instrument.

“I regard it as trivial that one cannot, in the range of atomic magnitudes, make predictions with any desired degree of precision … The question may be asked whether, from the point of view of today’s quantum theory, the statistical character of our experimental statistical description of an aggregate of systems, rather than a description of one single system. But first of all, he ought to say so clearly; and secondly, I do not believe that we shall have to be satisfied for ever with so loose and flimsy a description of nature. …

“I wish to say again that I do not believe that you are right in your thesis that it is impossible to derive statistical conclusions from a deterministic theory. Only think of classical statistical mechanics (gas theory, or the theory of Brownian movement). Example: a material point moves with constant velocity in a closed circle; I can calculate the probability of finding it at a given time within a given part of the periphery. What is essential is merely this: that I do not know the initial state, or that I do not know it precisely!” – Albert Einstein, 11 September 1935 letter to Karl Popper.


E.g., groupthink political fashion against looking at alternative explanations of facts – apart from those which are screamed by a noisy “elite” of political activists – also prevails in climate “science”: CO2 is correlated to “temperature data”, and any other correlation is banned, e.g. water vapour – a greenhouse gas which contributes far more, about 25 times more, to the greenhouse effect than CO2 – has been declining since 1948 according to NOAA measurements.  This water vapour decline is enough to cancel most of the temperature rise, CO2 having a trivial contribution owing to the negative feedback from cloud cover, which the IPCC ignored in all 21 of its over-hyped models.

[Image: water vapour fall cancels out CO2 rise]

Above: NOAA data on declining humidity (non-droplet water vapour, which absorbs heat as a greenhouse gas).  Below: satellite data on Wilson cloud chamber cosmic radiation effects on cloud droplet formation, and the long-term heating caused by a fall in the abundance of cloud water droplets, which reflect solar radiation back into space, cooling altitudes below the clouds.

[Image: cosmic rays vs cloud cover]

When the IPCC does select an “alternative” theory to discuss in a report, it is always a strawman target, a false model that they can easily debunk.  E.g. cosmic rays don’t carry any significant energy into earth’s climate, so “solar forcing” by cosmic rays (which the IPCC analyses and correctly debunks) is a strawman target.  But we don’t need a lengthy analysis to see this.  A radiation dose of 1 gray corresponds, by definition, to 1 joule of ionizing radiation absorbed per kilogram of matter.  The prompt lethal dose of ionizing radiation is under 10 grays, i.e. 10 joules per kg.  Therefore, it’s obvious from the energy-to-dose conversion factor alone that cosmic rays can’t affect the energy balance of the atmosphere, for if they could we’d be getting lethal doses of radiation.  What instead happens is a very indirect effect on climate, which produces the very opposite effect to that of the “solar forcing” which the IPCC considered.
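
As a rough back-of-envelope version of this argument (the dose rate, column mass and absorbed solar power below are assumed order-of-magnitude values, not figures quoted from any source), even crediting the whole atmospheric column with a sea-level cosmic-ray dose rate leaves the cosmic-ray energy input around a billion times smaller than the absorbed solar power:

```python
# Rough order-of-magnitude sketch (assumed typical values, not from any cited dataset):
# compare the energy delivered by cosmic-ray ionization with absorbed solar power.

DOSE_RATE_GY_PER_HOUR = 0.3e-6    # ~0.3 microgray/hour, typical sea-level cosmic-ray dose rate (assumption)
AIR_COLUMN_KG_PER_M2 = 1.0e4      # mass of the atmospheric column above 1 m^2 (~10 tonnes)
SOLAR_ABSORBED_W_PER_M2 = 240.0   # globally averaged absorbed solar power (assumption)

# 1 gray = 1 joule absorbed per kilogram, so a dose rate converts directly to power per kilogram:
cosmic_power_per_kg = DOSE_RATE_GY_PER_HOUR / 3600.0              # W/kg
cosmic_power_per_m2 = cosmic_power_per_kg * AIR_COLUMN_KG_PER_M2  # W/m^2 for the whole air column

print(f"Cosmic-ray power deposited: ~{cosmic_power_per_m2:.2e} W/m^2")
print(f"Absorbed solar power:        ~{SOLAR_ABSORBED_W_PER_M2:.0f} W/m^2")
print(f"Ratio: ~{cosmic_power_per_m2 / SOLAR_ABSORBED_W_PER_M2:.1e}")
# Using the sea-level dose rate for the whole column understates the high-altitude
# contribution, but not by anywhere near enough to change the conclusion.
```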

While solar forcing – that is to say, direct energy delivery by cosmic rays, causing climate heating – would imply that an increase in cosmic rays causes an increase in temperature, the opposite correlation occurs with the “Wilson cloud chamber mechanism”: cosmic rays leave ionization trails around which cloud droplets condense, and the resulting clouds cool (not heat up) the altitudes below them.  This is validated by the data (graphs above).  But the media sticks to treating the false “solar forcing” theory as the only “(in)credible alternative” to the IPCC’s CO2-temperature correlation models with no negative feedback.  There is no media discussion of any alternative that is remotely correct.

The reason for stamping out dissent and making taboo any discussion of realistic alternative hypotheses is the hubris of dictatorship, which is similar in some ways to pseudo-democratic politics.  The claim in democratic ideology is that we have freedom of the democratic sort, but democracy in ancient Greece was a daily referendum on issues, not a vote only once in four years (i.e., roughly 4 x 365 times fewer votes than daily referendums) for an effective choice between one of two dictator parties of groupthink ideology, ostensibly different but really joined together in an unwritten cartel agreement to maintain a fashionable status quo, even if that involves ever-increasing national debt, threats to security from fashionable disarmament ideology, and the funding of groupthink, money-grabbing quack scientists who only want to award each other prizes and shut down “unorthodox” or honest research.

Anyone who points out the problems of calling this “democracy” and suggests methods for achieving actual democracy (e.g. daily online referendums using secure databases of the sort used for online banking) is falsely attacked as favouring anarchy or whatever.  In this way, no progress is possible and the status quo is maintained.  (An analogy to groupthink dictatorship in contemporary politics and science is the money-spinning law profession as described by former law court reporter Charles Dickens in Bleak House: “The one great principle of the English law is, to make business for itself. There is no other principle distinctly, certainly, and consistently maintained through all its narrow turnings. Viewed by this light it becomes a coherent scheme, and not the monstrous maze the laity are apt to think it. Let them but once clearly perceive that its grand principle is to make business for itself at their expense…”  Notice that I’m not critical here of the status quo, but of the hypocrisy used to cover up its defects with lying deceptions.  If only people were honest about the lack of freedom and the need for censorship, that would reduce the stigma of bigoted dictatorial coercion behind “freedom”.  As it is, we instead have a “freedom of the press” to tell lies and make facts taboo, and to endlessly proclaim falsehoods as urgent “news” in an effort to brainwash everyone.)

Dr Woit argues rightly “… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”

But he states: “There is a long history and a deeply-ingrained culture that helps mathematicians figure out the difference between promising and empty speculation, and I believe this is something theoretical physicists could use to make progress.”

Well, prove it!

On March 26, 2014, The British Journal for the Philosophy of Science published a paper by philosopher Richard Dawid, Stephan Hartmann, and Jan Sprenger, “The No Alternatives Argument”:

“Scientific theories are hard to find, and once scientists have found a theory, H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this article. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the ‘no alternatives argument’) is frequently used in science and therefore deserves a careful philosophical analysis.”  (A PDF of their draft paper is linked here.)

The problem for them is that the “no alternatives argument” is used in the popular media and popular politics to “close down discussion” of any argument as simply taboo or heresy, if there is even a hint that it could constitute “distracting noise” that draws any attention, let alone funding, away from the mainstream bigots and the mainstream hubris.  This is well described by Irving Janis in his treatment of “groupthink”, proving that collective enforcement of dogma eventually fails when it resorts to direct or indirect subjective censorship of alternative viewpoints.  The whole notion of “dictatorship” being bad comes down to the banning of discussion of alternative viewpoints which turn out to be correct; in other words, it’s not “leadership” which is the inherent problem but:

“leadership + stupid, bigoted, coercive lying about alternatives being rubbish, when the leadership hasn’t even bothered to read or properly evaluate the alternatives.”

Historically, progress of a radical form has – simply because it has had to be radical – been unorthodox, been done by unorthodox people, and been censored by the mainstream accordingly.  The argument the mainstream makes is tantamount to claiming that anyone with an alternative idea must be a wannabe dictator who should try to overthrow the existing Hitler by first joining the Nazi Party, then working up the greasy pole, and finally reasoning in a gentlemanly way with the Great Dictator.  That’s absurd, based on the history of science.  Joule the brewer, who discovered the mechanical equivalent of heat from the energy needed to stir vats of beer mechanically, did not go about trying to get his “fact” (ahem, “pet theory” to mainstream bigots) accepted by becoming a professor of mathematical physics and a journal editor.  You cannot get a “peer” reviewer to read a radical paper.  The people who did try to go down the orthodox route when they had a radical idea, like Mendel, were censored out, and their facts were eventually “re-discovered” when others deemed it useful to do so, in order to resolve a priority dispute.

Put another way, the key problem of dictatorship is that it turns paranoid, seeing enemies everywhere in merely honest criticisms and suggestions for improvements, and eliminates those: the “shoot the messenger” fallacy.  What we need is honest, not dishonest, censorship.  We need to censor out quacks, the people who “make money in return for falsehood”, and encourage objectivity.  Power corrupts, so even if you start off with an honest leader, you can end up with that leader turning into a paranoid quack.  Only by censoring in the honest interests of objectivity, rather than to protect fashion from scrutiny, criticism, and improvement, can progress be made.

Woit rejects philosopher Richard Dawid’s invocation of the “no alternatives” delusion to defend string theory from critics, by stating: “This seems to just be an argument that HEP theorists have been successful in the past, so one should believe them now …”.   Dawid uses standard “successful” obfuscation techniques, consisting of taking an obscure and poorly defined argument and making it even more abstruse with Bayesian probability theory, in which previous successes of a mainstream theory can be used to quantitatively argue that it is “improbable” that an alternative theory dismissed by the mainstream will overturn the mainstream.  This has many objections which Dawid doesn’t discuss.  The basic problem is illustrated by Hitler, who used precisely this implicit Bayesian building of trust from his “successes” in unethically destroying opponents to gain and further gather support for his increasingly mainstream party.  Anyone who objected was simply reminded of Hitler’s “good record”, not just the Iron Cross First Class but his tireless struggle, etc.  The fault here is that probability theory is non-deterministic and assumes a lack of bias-causing mechanisms which control the probability.

If you want to model the failure risk of a theory, you should look at the theory itself, e.g. eugenics for Hitler or the cosmic landscape for string theory, and see if it is scientific in the useful sense, rather than merely providing corrupt bigots with the power and authority to suppress more objective research which disproves it.  Instead, Dawid merely looks at the history of mainstream theory successes, ignores the issues with the theories, and simply concludes that since mainstream hubris is generally good at ignoring better ideas, it will continue to prevail.

Which, of course, is what Bell’s inequality did when it set up a hypothesis test between equally false alternatives, including a “proof” of the viability of quantum mechanics based on the false assumption that quantum mechanics consists solely of a non-relativistic single-wavefunction amplitude for an electron (no path-integral second quantization, with an amplitude for every path).  By setting up a false default hypothesis, you can “prove” it with false logic.

For example, in 1967 Alexander Thom falsely proved by probability theory that there was a 99% probability that the ancient Britons who built Stonehenge used a “megalithic yard” of 83 cm length.  He did this by a standard two-theory comparison hypothesis test with standard probability theory: he compared the goodness of fit of two hypotheses only, excluding the real solution!  The two false hypotheses he compared were his pet theory of the 83 cm megalithic yard, and random spacing.  He proved, correctly, that if the correct solution is one of these two options (it isn’t, of course), then the data shows a 99% probability that the 83 cm megalithic yard is the correct option.  Thom’s error, and the error of all probability theory and statistical hypothesis tests (chi-squared, Student’s t), is that they compare only one candidate hypothesis or theory with one other; i.e. you assume, without any evidence or proof, that the correct theory is one of the two options that have been investigated.  The calculation then tells you the probability that the data corresponds to one of those two options.  This is fake, because in the real world there are more than just two options, or two theories to compare.  Bell’s inequality neglects inclusion of path integrals with relativistic 2nd quantization multipath interference causing indeterminacy, rather than the 1st quantization non-relativistic “single wavefunction collapse” metaphysics.  Similarly, in 1973 Hugh Porteous disproved Thom’s “megalithic yard” by invoking a third hypothesis, that distances were paced out.  Porteous modelled the pacing option using a normal distribution and showed it better fitted the data than Thom’s megalithic yard!  This is a warning from history about the dangers of “settling the science”, “closing down the argument”, and banning alternative ideas!
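
Here is a small Monte Carlo sketch of the same trap (a toy of my own construction, not a reconstruction of Thom’s or Porteous’s actual statistics, and the pace length, jitter and quantum values are assumptions): data generated by pacing, with the pacing hypothesis deliberately left off the table, still “confirms” the quantum hypothesis against the only rival allowed, random spacing.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy illustration only: NOT Thom's or Porteous's actual data or statistics. ---
# True (but excluded) hypothesis: distances were paced out, each pace ~ N(0.83 m, 0.04 m).
n_sites = 60
n_paces = rng.integers(3, 13, size=n_sites)
distances = np.array([rng.normal(0.83, 0.04, n).sum() for n in n_paces])

Q = 0.83  # the "megalithic yard" quantum being tested

def quantum_score(data, q):
    """Mean squared distance to the nearest multiple of q (small = good fit to the quantum)."""
    r = np.mod(data, q)
    d = np.minimum(r, q - r)
    return np.mean(d ** 2)

observed = quantum_score(distances, Q)

# Two-hypothesis comparison, as in the text: "quantum of 0.83 m" versus "purely random spacing".
# Monte Carlo p-value: how often does random spacing fit the quantum at least this well?
n_trials = 5000
lo, hi = distances.min(), distances.max()
random_scores = np.array([
    quantum_score(rng.uniform(lo, hi, n_sites), Q) for _ in range(n_trials)
])
p_random = np.mean(random_scores <= observed)

print(f"quantum-fit score for the paced (true-model) data: {observed:.4f}")
print(f"typical score under random spacing:                {random_scores.mean():.4f}")
print(f"fraction of random trials fitting as well:         {p_random:.4f}")
# The quantum hypothesis "wins" the two-way test even though the data were paced,
# because the true hypothesis (pacing) was never put on the table.
```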

Conjectured theory SO(6) = SO(4) x SO(2) = SU(2) x SU(2) x U(1)

Conjectured electroweak/dark energy/gravity symmetry theory:

SO(6) = SO(4) x SO(2)
= SU(2) x SU(2) x U(1)

If this is true, the Standard Model should be replaced by SU(3) x SO(6), or maybe just SO(6) if SO(6) breaks down two ways: once as shown above, and also as in the old Georgi-Glashow SU(5) grand unified theory (given below), where SO(6) is isomorphic to SU(4), which contains the strong force’s color charge symmetry, SU(3).  (See also Table 10.2 in the introduction to group theory for physicists, linked here.)

Why do we want SO(6)? Answer: Lunsford shows SO(3,3) = SO(6) unifies gravitation and electrodynamics in 6d.

SO(4) = SU(2) x SU(2) is well known as a mathematical isomorphism (see previous post), as is the fact that SO(2) = U(1).
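
A quick numerical way to see the SO(4) = SU(2) x SU(2) correspondence (a sketch of my own, not taken from Lunsford or the previous post): treat a 4-vector as a quaternion x, and act on it with a pair of unit quaternions (each unit quaternion group being a copy of SU(2)) via x → qL x q̄R; the result is always a rotation of 4-dimensional space, i.e. an SO(4) element, with only the shared sign (qL, qR) → (−qL, −qR) left over.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a, b given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def random_unit_quaternion(rng):
    q = rng.normal(size=4)
    return q / np.linalg.norm(q)

rng = np.random.default_rng(1)
qL, qR = random_unit_quaternion(rng), random_unit_quaternion(rng)

# Build the 4x4 matrix of x -> qL * x * conj(qR) by acting on the basis quaternions.
R = np.column_stack([qmul(qmul(qL, e), qconj(qR)) for e in np.eye(4)])

print("orthogonal:", np.allclose(R.T @ R, np.eye(4)))    # R is in O(4)
print("det = +1:  ", np.isclose(np.linalg.det(R), 1.0))  # in fact in SO(4)

# SO(2) = U(1): a plane rotation by angle t is the same data as the phase exp(i t).
t = 0.7
rot2 = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print("SO(2) matrix acts like multiplication by exp(i t):",
      np.allclose(rot2 @ np.array([1.0, 0.0]), [np.cos(t), np.sin(t)]))
```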

In olden times (circa 1975-84) the media was saturated with the (wrong) prediction of proton decay via the (now long failed) Georgi-Glashow grand unified theory SU(5), which embeds in SO(10). The idea was to break SO(10) down into SO(6) x SO(4), and from there one of the ingredients, namely the identification SU(2, Right) = U(1, Hypercharge) (based on the fact that the weak force is left-handed, so the Yang-Mills SU(2) model reduces to a simple single-element U(1) theory for right-handed spinors), may be of use to us for recycling purposes (to produce a better theory):

SO(10)
= SO(6) x SO(4)
= SU(4) x SU(2, Left) x SU(2, Right)
= SU(3) x SU(2, Left) x U(1)

Well, maybe we don’t need the reduction of SU(4) to SU(3), but we do want to consider the symmetry breakdown of SO(6), because Lunsford found that group useful:

SO(6)
= SO(4) x SO(2)
= SU(2, Left) x SU(2, Right) x U(1, Dark energy/gravity)
= SU(2, Left) x U(1, Hypercharge) x U(1, Dark energy/gravity)

This is pretty neat because it also fits in with Woit’s conjecture, which shows how to obtain the normal electroweak sector charges with their handedness (chiral) features by using a correspondence between the vacuum charge vector and Clifford algebra to represent SO(4), whose U(2) symmetry group subset contains the 2 x 2 = 4 particles in one generation of Standard Model quarks or leptons, together with their correct Standard Model charges; for details see pages 13-17 together with page 51 of Woit’s 2002 paper, QFT and Representation Theory.

(It’s abstract, but when you think about it, you’re just using a consistent representation theory to select the 4 elements of the U(2) matrix from the 16 of SO(4). Most of the technical trivia in the paper is superfluous to the key example we’re interested in, which occurs in the table on page 51. Likewise, when you compare the elements of the three 2×2 Pauli matrices of SU(2) to the eight 3×3 Gell-Mann matrices of SU(3), you can see that the first three of the SU(3) matrices are analogous to the three SU(2) matrices, give or take a global multiplication factor of i. In other words, you can pictorially see what’s going on if you write out the matrices and circle those which correspond to one another.)
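
To make the “write out the matrices and circle them” suggestion concrete, here is a minimal check (standard matrices only): in the usual basis the first three Gell-Mann matrices carry the Pauli matrices directly as their upper-left 2×2 blocks.

```python
import numpy as np

# The three Pauli matrices (SU(2) generators)...
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),        # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),      # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),        # sigma_z
]

# ...and the first three of the eight Gell-Mann matrices (SU(3) generators).
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),     # lambda_1
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),  # lambda_2
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),    # lambda_3
]

# The correspondence: each lambda_i carries sigma_i in its upper-left 2x2 block.
for i in range(3):
    assert np.allclose(lam[i][:2, :2], sigma[i])
    print(f"lambda_{i+1} upper-left 2x2 block equals sigma_{i+1}")
```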

SU(2) x SU(2) = SO(4) and the Standard Model

The Yang-Mills SU(N) equation for field strength is Maxwell’s U(1) Abelian field strength law plus a quadratic term which represents net charge transfer and contains the matrix constants for the Lie algebra generators of the group.  It is interesting that the spin orthogonal group in three dimensions of space and one of time, SO(4), corresponds to two linked SU(2) groups, i.e.

SO(4) = SU(2) x SU(2),

rather than just one SU(2) as the Standard Model, U(1) x SU(2) x SU(3), would suggest.  This is one piece of “evidence” for the model proposed in, where U(1) is simply dark energy (the cosmological repulsion between masses, proved in that paper to accurately predict the observed quantum gravity coupling by a Casimir force analogy!), and SU(2) occurs in two versions: one with massless bosons, which automatically reduces the SU(2) Yang-Mills equation to Maxwell’s by giving a physical mechanism for the Lie algebra SU(2) charge transfer term to be constrained to a value of zero (any other value makes massless charged gauge bosons acquire infinite magnetic self-inductance if they are exchanged at an asymmetric rate that fails to cancel the magnetic field curls).  The other SU(2) is the regular one we observe, which has massive gauge bosons, giving the weak force.
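
For reference, the field strength mentioned in the first sentence of this section is, in standard notation (the reading of the quadratic term as “net charge transfer” is this blog’s interpretation, not the textbooks’):

```latex
% U(1) (Maxwell) field strength:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% Yang-Mills SU(N) field strength: the same thing plus the quadratic (Lie bracket) term,
% with f^{abc} the structure constants of the group's Lie algebra generators:
F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc} A^b_\mu A^c_\nu
```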

Maybe we should say, therefore, that our revision of the Standard Model is

U(1) x SU(2) x SU(2) x SU(3)

= U(1) x SO(4) x SU(3).

As explained in, the spin structure of standard quantum mechanics is given by the SU(2) Pauli matrices of quantum mechanics.  Any SU(N) group is simply a subgroup of the unitary group U(N), containing specifically those matrices of U(N) with determinant +1.  SU(N) therefore has N² – 1 generators: SU(2) has the 3 Pauli spin matrices, and SU(3) has the 8 Gell-Mann matrices.

Now what is interesting is that this SU(2) spinor representation in quantum mechanics also arises with the Weyl spinor, which Pauli dismissed originally in 1929 as being chiral, i.e. permitting violation of parity conservation (left and right spinors having different charge or other properties).  Much to Pauli’s surprise, in 1956 it was discovered experimentally from the spin of beta particles emitted by cobalt-60 that parity is not a true universal law (a universal law would be like the 3rd law of thermodynamics, where no exceptions exist).  Rather, parity conservation is at least violated in weak interactions, where only left-handed spinors undergo weak interactions.

Parity conservation had to be replaced by the CPT theorem, which states that to get a universally applicable conservation law involving charge, parity and time, one which applies to weak interactions, you must simultaneously reverse charge, parity and time for a particle together.  Only this combination of three properties is conserved universally; you can’t merely reverse parity alone and expect the particle to behave the same way!  If you reverse all three values – charge, parity and time – you end up, in effect, with a left-handed spinor again (if you started with one, or a right-handed spinor if you started with that), but the result is an antiparticle which is moving the opposite way in time as plotted on a Feynman diagram.  In other words, the reversals of charge and time cancel the parity reversal.
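
A minimal numerical illustration of the determinant condition just described (standard facts; the code is only a sketch): exponentiating i times a real combination of the Pauli matrices always lands on a unitary matrix of determinant +1, i.e. an SU(2) element, and the generator count N² − 1 gives 3 for SU(2) and 8 for SU(3).

```python
import numpy as np

# Pauli matrices: the 3 = 2**2 - 1 generators of SU(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(2)
theta = rng.normal(size=3)      # arbitrary real parameters
angle = np.linalg.norm(theta)
n = theta / angle               # unit axis vector

# exp(i (theta . sigma) / 2) in closed form: cos(|theta|/2) I + i sin(|theta|/2) (n . sigma)
U = np.cos(angle / 2) * np.eye(2) + 1j * np.sin(angle / 2) * (n[0]*sx + n[1]*sy + n[2]*sz)

print("unitary:      ", np.allclose(U.conj().T @ U, np.eye(2)))
print("determinant 1:", np.isclose(np.linalg.det(U), 1.0))  # det = +1, so U lies in SU(2), not just U(2)

# Parameter counts: U(N) has N**2 real parameters; imposing det = 1 removes one,
# leaving N**2 - 1 generators for SU(N): 3 for SU(2) (Pauli), 8 for SU(3) (Gell-Mann).
for N in (2, 3):
    print(f"SU({N}) generators: {N**2 - 1}")
```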

But why did Pauli not know that Maxwell, in deriving the equations of the electromagnetic force in 1861, modelled magnetic fields as mediated by gauge bosons, implying that charges and field quanta are parity-conservation-breaking (Weyl-type chiral, handed) spinors?  We discuss this Maxwell 1861 spinor in, which basically amounts to the fact that Maxwell thought that the handed curl of the magnetic field around an electric charge moving in space is a result of the spin of vacuum quanta which mediate the magnetic force.  Charge spin, contrary to naive 1st quantization notions of wavefunction indeterminacy, is not indeterminate but takes a preferred handedness relative to the motion of the charge, thus being responsible for the preferred handedness of the magnetic field at right angles to the direction of motion of the charge (magnetic fields, according to Maxwell, are the conservation of angular momentum when spinning field quanta are exchanged by spinning charges).  Other reasons for SU(2) electromagnetism are provided in, such as the prediction of the electromagnetic field strength coupling.

Instead of the 1956 violation of parity conservation in weak interactions provoking a complete return to Maxwell’s SU(2) theory from 1861, what happened was a crude epicycle-type “fix” for the theory, in which U(1) continued to be used for electrodynamics despite the fact that the fermion charges of electrodynamics are spin-half particles which obey SU(2) spinor matrices, and in which the U(1) pseudo-electrodynamics (hypercharge theory) was eventually (by 1967, due to Glashow, Weinberg and Salam) joined to the SU(2) weak interaction theory by a linkage with an ad hoc mixing scheme, in which electric charge is given arbitrarily by the empirical Weinberg–Gell-Mann–Nishijima relation

electric charge = SU(2) weak isospin charge + half of U(1) hypercharge
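
Written out with a few standard worked values (the hypercharge assignments below are the textbook ones, in the convention where the relation reads Q = T₃ + Y/2):

```latex
% Q = T_3 + Y/2, checked against one lepton doublet and one quark (standard hypercharge convention):
Q_{e_L}   = -\tfrac{1}{2} + \tfrac{1}{2}(-1)            = -1
Q_{\nu_L} = +\tfrac{1}{2} + \tfrac{1}{2}(-1)            = 0
Q_{u_L}   = +\tfrac{1}{2} + \tfrac{1}{2}(+\tfrac{1}{3}) = +\tfrac{2}{3}
Q_{e_R}   = 0             + \tfrac{1}{2}(-2)            = -1
```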

Figure 30 on page 36 of gives an alternative interpretation of the facts, more consistent with reality.

Although, as stated above, SO(4) = SU(2) x SU(2), the individual SU(2) symmetries here are related to simple spin orthogonal groups:

SO(2) ~ U(1)

SO(3) ~ SU(2)

SO(4) ~ SU(3)

It’s pretty tempting therefore to suggest, as we did, that the U(1), SU(2) and SU(3) groups are all spinor relations derived from the basic geometry of spacetime.  In other words, for U(1) Abelian symmetry, particles can spin alone; and for SU(2) they can be paired up with parallel spin axes, and each particle in this pair can then have either symmetric or antisymmetric spin.  In other words, both spinning in the same direction (0 degrees difference in spin axis directions) so that their spins add together, doubling the net angular momentum and magnetic dipole moment and creating a Bose-Einstein condensate or effective boson from two fermions; or alternatively spinning in opposite directions (180 degrees difference in spin axis directions) as in Pauli’s exclusion principle, which cancels out the net magnetic dipole moment.  (Although wishy-washy anti-understanding 1st quantization QM dogma insists that only one indeterminate wavefunction exists for spin direction until measured, in fact the absence of strong magnetic fields from most matter in the universe is continuously “collapsing” that “indeterminate” wavefunction into a determinate state, by telling us that Pauli is right and that spins do generally pair up to cancel intrinsic magnetic moments in most matter!)  Finally, for SU(3), three particles can form a triplet in which the spin axes are all orthogonal to one another (i.e. the spin axis directions for the 3 particles are 90 degrees from each other, one lying along each of the x, y, and z directions, relative of course to one another, not to any absolute frame).  This is the color force.

Technically speaking, of course, there are other possibilities.  Woit’s 2002 arXiv paper 0206135, Quantum field theory and representation theory, conjectures on page 4 that the Standard Model can be understood in the representation theory of “some geometric structure”, and on page 51 he gives a specific suggestion: you pick U(2) out of SO(4), expressed as a Spin(2n) Clifford spin algebra where n = 2, and this U(2) subgroup of SO(4) then has a spin representation that has the correct chiral electroweak charges.  In other words, Woit suggests replacing the U(1) x SU(2) arbitrary charge structure with a properly unifying U(2) symmetry picked out from the SO(4) spacetime special orthogonal group.  Woit represents SO(4) by a Spin(4) Clifford algebra element (1/2)(e_i)(e_j) which corresponds to the Lie algebra generator L_(ij)

(1/2)(e_i)(e_j) = L_(ij).
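
A small numerical sketch of this correspondence (my own illustration, not taken from Woit’s paper): represent the four Euclidean Clifford generators e_i by 4×4 matrices satisfying e_i e_j + e_j e_i = 2δ_ij, form L_ij = (1/2) e_i e_j for i ≠ j, and check one representative so(4) commutation relation, [L_12, L_23] = L_13.

```python
import numpy as np

# Pauli matrices used as building blocks.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Four generators of the Euclidean Clifford algebra Cl(4): e_i e_j + e_j e_i = 2 delta_ij.
e = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx), np.kron(I2, sy)]

# Check the Clifford (anticommutation) relations.
for i in range(4):
    for j in range(4):
        anti = e[i] @ e[j] + e[j] @ e[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(4))

# Spin(4) / so(4) generators as in the text: L_ij = (1/2) e_i e_j for i != j.
L = lambda i, j: 0.5 * e[i] @ e[j]
comm = lambda A, B: A @ B - B @ A

# One representative so(4) commutation relation: [L_12, L_23] = L_13 (0-based indices below).
print(np.allclose(comm(L(0, 1), L(1, 2)), L(0, 2)))  # True
```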

The Woit idea, of getting the chiral electroweak charges by picking out U(2) charges from SO(4), can potentially be combined with the previously mentioned suggestion of SO(4) = SU(2) x SU(2), where one effective SU(2) symmetry is electromagnetism and the other is the weak interaction.

My feeling is that there is no mystery, one day people will accept that the various spin axis combinations needed to avoid or overcome intrinsic magnetic dipole anomalies in nature are the source of the fact that fundamental particles exist in groupings of 1, 2 or 3 particles (leptons, mesons, baryons), and that is also the source of the U(1), SU(2) and SU(3) symmetry groups of interactions, once you look at the problems of magnetic inductance associated with the exchange of field quanta to cause fundamental forces.