Category morphisms for quantum gravity masses, and draft material for new paper





Above: later in this blog post a U(1) Abelian quantum gravity will be outlined in which mass is generated without a Higgs field by the mixing of U(1) with SU(2); the weak gauge bosons are given masses by this mixing at all energies (there is no intrinsic Higgs-type electroweak symmetry at high energy). The neutral currents of massive, graviton-interacting Z0 bosons of the weak field around leptons and quarks then act like a Higgs field in physically "miring" accelerating particles, providing inertia and thus mass. Because weak bosons are generated by the decays of virtual fermions which annihilate in the vacuum fields, their abundance and distribution in spacetime is controlled by the vacuum polarization phenomenon whereby the electric field causes virtual fermions of opposite charge sign to move apart, gaining energy from the field in this process and in very strong fields becoming effectively closer to being on the relativistic mass-shell, so that the fermions become more real (less virtual), and begin to conform to the exclusion principle which controls the distribution and spin alignments of fermions in a given volume of space (on-shell fermions in states effectively bound by electric fields must have a different set of quantum numbers from their neighbours, so they need spin states different from adjacent fermions, and shell structures like atoms or nuclei). This, as we shall see later, leads to the mathematical form of Schwinger's first Feynman diagram vacuum correction to Dirac's magnetic moment for the electron: the virtual fermions in the electron's pair-production field are not completely random (the uncertainty principle is modified by vacuum polarization, which by moving opposite charges further apart increases their lifespan over that given by the uncertainty principle), so the magnetic field of the electron's core is increased by the factor 1 + alpha/(2*Pi) = 1.00116 (accurate to 6 significant figures), i.e. by 0.116%, as a result of the vacuum field around the core. In other words, some of the electric charge energy of the electron which is shielded by the use of energy to form virtual pairs and to polarize them around the core gets converted into magnetic field energy, which adds to that from the spinning core. It is important to note the presence of alpha and twice Pi in the vacuum field contribution. Looking at the category morphisms of masses between various leptons and quarks, we find such factors, and present in this post the theory to explain them.

The muon is a heavy electron which is radioactive, undergoing beta decay via the intermediate stage in which a negatively charged weak vector (or gauge) boson is emitted, which then decays with electron emission. In this diagram, the decay width formula (related to the particle's mean lifetime) is given which includes GF = (2^(1/2)/8)(h-bar*c)^3(g/mw)^2, which is Fermi's constant for the weak force strength (g is the weak coupling constant). In Fermi's original theory of beta decay, the intermediate weak boson stage was excluded.

The weak force has a strength equal to the Coulomb (electromagnetic) force law multiplied down by the dimensionless ratio Pi^2*h*M^4/(T*c^2*m^5) (reference: derived from Matthews' Quantum Mechanics textbook and given in my Electronics World vol. 109, issue 1804, pp. 47-52, article 'The Electronic Universe Part 2'; notice that this is the dimensionless force strengths ratio derived from the phase space for beta decay, and is not equal to the dimensionful Fermi constant GF), where h is Planck's constant, T is the mean lifetime of the particle (which for the observed exponential decay rate is simply the half-life multiplied up by the factor 1/ln2 ~ 1.44), m is the mass of the emitted fermion, and M is the mass of the neutron (which is the decaying particle in Fermi's theory) before it decays.
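
As a rough numerical illustration only (a sketch with my own inserted values: CODATA-style constants and a free-neutron mean lifetime of about 880 s, not figures taken from the Electronics World article), the dimensionless ratio can be evaluated directly:

    from math import pi

    # Approximate constants (assumed values, not taken from the cited article)
    h = 6.626e-34        # Planck's constant, J s
    c = 2.998e8          # speed of light, m/s
    M = 1.675e-27        # neutron mass, kg (the decaying particle)
    m = 9.109e-31        # electron mass, kg (the emitted fermion)
    T = 880.0            # free-neutron mean lifetime, s (~ half-life / ln 2)

    # Dimensionless weak/electromagnetic strength ratio quoted in the text
    ratio = (pi**2 * h * M**4) / (T * c**2 * m**5)
    print(f"Pi^2 h M^4 / (T c^2 m^5) = {ratio:.2e}")   # of order 1e-9 with these inputs

The result is of order 10^-9 with these inputs, illustrating the weakness of the interaction; the exact number obviously depends on which decaying-particle and emitted-fermion masses and which lifetime are inserted.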

The mass dependence of the weak force is therefore very sensitive and weak interaction rates depend critically on the masses of the particles involved. For example, later we will discuss in detail the CKM parameters, which show that the relative weak interaction strengths for Standard Model type transitions between leptons only, between quarks of the same generation only, and between quarks of different generations are 1, 24/25, and 1/25 respectively (Cabibbo’s numbers for two generations, ignoring the small, 1/1000 modification introduced by the presence of the third generation). The electromagnetic, weak and strong coupling “constants” (effective charges) also vary or “run” with the energy of the collision (which controls how closely particles approach one another in collisions, and therefore how much polarized vacuum is intervening between the particles to shield their charge, or augment it in the case of the strong force).
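
For comparison (a sketch using approximate present-day PDG-style CKM magnitudes, which are my assumed inputs, not Cabibbo's 1964 figures), the "sum of the decay routes equals 1" idea discussed in detail later in this post can be checked directly:

    # Approximate CKM first-row magnitudes (assumed, PDG-style central values)
    V_ud, V_us, V_ub = 0.9743, 0.2250, 0.0037

    print("Cabibbo 1964:  same-generation 24/25 =", 24/25, " cross-generation 1/25 =", 1/25)
    print("Measured:      |V_ud|^2 =", round(V_ud**2, 4), " |V_us|^2 =", round(V_us**2, 4))

    # Unitarity of the first row: the decay routes available to an up quark sum
    # to ~1, restoring the lepton-like universal weak strength described here.
    print("Row sum |V_ud|^2 + |V_us|^2 + |V_ub|^2 =", round(V_ud**2 + V_us**2 + V_ub**2, 4))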

In order to calculate interaction amplitudes for the simple Feynman diagrams above (trees with no spacetime loops), we just follow very simple Feynman rules (this link is to an overly complex and exact Wiki page; see the QFT textbooks cited later in this post for a more physical, hands-on summary of the vital features of the rules for making quick and useful estimates), which followed and updated Fermi’s “golden rule”:

1. each vertex in a diagram has a coupling “constant”, g, for the relative strength of the interaction,
2. each internal line (which in each case we're considering is a weak gauge boson) has a "propagator" which is a function of the momentum transfer along that line, which for the simplistic case of no spin is i/(q^2 – m^2), where q is momentum and m is the mass of the particle propagating along the internal line, and
3. multiply the propagator by the factor -ig for each vertex, and then integrate the product over momentum flowing through the interaction, which for a spinless propagator and the Feynman diagrams depicted above gives an interaction amplitude of the form A = ig^2/(p^2 – m^2), where p is the sum of the momenta of the propagator's decay products (a short numerical sketch of this rule follows below).
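
As a toy illustration of rule 3 (a sketch only; a real calculation needs the spin-dependent propagators and the phase-space integration, and the coupling and momentum values below are illustrative assumptions):

    # Toy spinless-propagator amplitude A = i g^2 / (p^2 - m^2), natural units (GeV)
    # The coupling value and momentum below are illustrative assumptions only.

    def amplitude(g, p2, m):
        """Tree-level amplitude for a spinless internal line of mass m (GeV),
        with p2 = square of the summed momenta of the decay products (GeV^2)."""
        return 1j * g**2 / (p2 - m**2)

    g = 0.65          # illustrative weak-type coupling (assumed)
    m_w = 80.4        # internal W boson mass, GeV
    p2 = 0.011        # e.g. (muon mass)^2 ~ 0.105^2 GeV^2, far below m_w^2

    A = amplitude(g, p2, m_w)
    print(A)          # tiny because p2 << m_w^2: the propagator ~ -i g^2 / m_w^2
    print(abs(A))     # ~ g^2 / m_w^2 ~ 6.5e-5

The smallness of the result, of order g^2/mw^2, is the propagator suppression that makes the weak interaction so weak at low energies.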

Beta decay is vitally important in particle physics because it allows leptons and quarks to change their nature, as shown in the Feynman diagrams above. Although quarks and leptons can interact electromagnetically, they supposedly "can't transform into one another by weak interactions" in the Standard Model; yet in fact the decay of a quark can produce an electron!

The Standard Model fanatics dogmatically interpret all quark decays as ending up with quarks, and just arm-wavingly dismiss any idea that the quark decays into the leptons like electrons and neutrinos which result from such transitions; they believe dogmatically that the leptons resulting from quark decays are merely secondary emissions! In other words, they see the equivocal or ambiguous multistable gestalt the way they want, and ignore the other possibility just because of historical chance or prejudice. In lepton to lepton decays (like the decay of the muon into an electron), the muon first transforms into a weak boson before emerging as an electron. Therefore, for consistency with lepton decays, quark decay analysis should look to what emerges from the weak boson, which can be leptons! The clue (diagram at the top of this blog post) is that the quarks branch off the "tree" of the Feynman diagram directly, not via a weak boson! The false quark to quark decay interpretation is just a dogmatic groupthink speculation which seems to be suspect, as will be discussed in more detail later. In the Standard Model view (which may be wrong), the lepton-quark and quark-lepton mass morphisms shown above, while correctly depicting the differences in masses between quarks and leptons, supposedly don't correspond to observed transitions or decays; only the lepton-lepton and quark-quark mass morphisms are supposed to be real. The problem with such groupthink is that it soon becomes a self-fulfilling myth and is defended just because it has so much groupthink bias behind it, not because it has any factual proof. In fact, it makes the Standard Model ugly and ad hoc, full of epicycle-like parameters present to "glue" it together (in Feynman's words, which we will quote later).

Notice the factor of Pi^3 in the muon decay formula and other numerical multiplying factors of Pi in the mass morphisms: these are very important in such interactions (they will be discussed in more detail later in this post). Very briefly, as we will discuss in detail later, there is a confusion in the Standard Model between quarks and leptons and their antiparticles. Our approach indicates that the universe contains as much matter as antimatter, because downquarks are a morphism of electrons; the universe is mainly hydrogen (two upquarks, one downquark and an electron), which is perfectly symmetrical if there is a simple morphism and mechanism for the electron charge to fall by a factor of 3 and for it to gain short-range weak and strong charges due to a vacuum mechanism when confined. In this view, the downquark is a disguised electron, and the antiparticle of both is the upquark (the doubling of electric charge corresponding to a halving of gravitational charge, mass). We will give concrete evidence to support this later on. Therefore, it is possible to completely reinterpret particle interactions while retaining consistency with experiments, or to put it another way, most of the claims like "quarks can't be converted into leptons" are based on interpretations and groupthink, not direct evidence: quarks do undergo beta decay and it is possible to revise the weak interaction theory so that the main transition path is quark to lepton (electron), via a weak boson.

In electroweak theory, the large mass of the weak bosons discovered in 1983, W–, W+, and Z0, reduces the transition rate and makes the coupling (the relative force strength) very small, because the massive weak bosons of the field have a short range and a great deal of inertia. The Z0 weak boson mass is Mz = 91.1876 ± 0.0021 GeV, while the W– and W+ charged weak bosons have the same mass, which is smaller than Mz and is given by the combination of Mz with the Weinberg electroweak mixing angle Thetaw:

Mw = Mz cos(Thetaw) = 80.398 ± 0.023 GeV.
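
A quick consistency check (a sketch using only the two masses quoted above) recovers the Weinberg mixing angle implied by Mw = Mz cos(Thetaw):

    from math import acos, degrees, sin

    M_Z = 91.1876   # GeV
    M_W = 80.398    # GeV

    theta_w = acos(M_W / M_Z)                          # from Mw = Mz cos(theta_w)
    print(f"theta_w        = {degrees(theta_w):.2f} degrees")
    print(f"sin^2(theta_w) = {sin(theta_w)**2:.4f}")   # ~0.22, the usual on-shell value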

In the Glashow-Weinberg-Salam electroweak model, which has problems we will discuss and resolve in this post, U(1) represents hypercharge and SU(2) the weak interaction, with electromagnetism emerging from a mixing of the two symmetries, so that the photon is a mixture of the weak SU(2) neutral boson W0 and the hypercharge boson B:

electromagnetic photon: A = W0 sin(Thetaw) + B cos(Thetaw),

an equation we object to, since it is not experimentally defensible; the electromagnetic theory's quanta don't need to be represented in this complex fashion, with electric charge coming from a mixture of weak hypercharge and weak isospin. As we will prove, electromagnetism is far better described by a massless-field SU(2) symmetry! We have explained (and will explain below) that massless SU(2) charged bosons represent electromagnetism better than neutral massless U(1) bosons, and get rid of the Higgs field in the Standard Model: the mixed "photon" equivalent then becomes a spin-1 graviton whose charge is mass, so the mixing of U(1) quantum gravity with SU(2) can give mass to the left-handed half of the SU(2) bosons, making them massive weak bosons, while allowing the other half to become charged massless electromagnetic bosons. These automatically cut the Yang-Mills equation down to Maxwell's, because only an equilibrium of two-way exchange of such massless charged bosons is possible, cancelling out the magnetic field vectors of each component (charged massless radiation can't go in one direction only, so it can't carry net charge; only an equilibrium of two-way exchange is possible, the electromagnetic field can't change charges, and the Yang-Mills equation for SU(2) electromagnetism thus automatically gets limited to the bare Maxwell structure!).

The Z0 boson is still produced by a U(1) and SU(2) mixing in our model, but gains its mass from the quantum-gravity-like U(1) hypercharge in the mix, instead of from a Higgs field. In the misinterpreted Glashow-Weinberg-Salam model:

weak neutral current "photon": Z0 = W0 cos(Thetaw) – B sin(Thetaw),

where B is hypercharge. The funny thing about the Glashow-Weinberg-Salam model is that it was formulated in 1967-8, but was not well received until its renormalizability had been demonstrated years later by 't Hooft. The electroweak theory they formulated was perfectly renormalizable prior to the addition of the Higgs field, i.e. it was renormalizable with massless SU(2) gauge bosons (which we use for electromagnetism), because the Lagrangian had a local gauge invariance. 't Hooft's trivial proof that it was also renormalizable after "symmetry breaking" (the acquisition of mass by all of the SU(2) gauge bosons, a property again not justified by experiment, because the weak force is left-handed, so it would be natural for only half of the SU(2) gauge bosons to acquire mass to explain this handedness) merely showed that the W-boson propagator expressions in the Feynman path integral are independent of mass when the momentum flowing through the propagator is very large. I.e., 't Hooft just showed that for large momentum flows, mass makes no difference and the proof of renormalization for massless electroweak bosons is also applicable to the case of massive electroweak bosons. 't Hooft pathetically seems to be unaware of the trivial nature of his proof, since his personal website makes the false claim: "…I found in 1970 how to renormalize the theory, and, more importantly, we identified the theories for which this works, and what conditions they must fulfil. One must, for instance, have a so-called Higgs-particle. These theories are now called gauge theories." That claim that he has a proof that the Higgs particle must exist is totally without justification. He merely showed that if the Higgs field provides mass, the electroweak theory is still renormalizable just as it is with massless bosons. He did not disprove mass-energy as being the charge of quantum gravity (which in our theory is an alternative to a Higgs boson, not a supplement); we have a very effective way of showing that a "pseudo-Higgs field" exists in nature very simply, consisting of Z0 field bosons, which breaks electroweak symmetry (probably at all energies, not just at low energy like the Higgs field) without needing any Higgs bosons, as we will prove in this post. ('t Hooft sent me a bigoted, egotistical email when the problems with electromagnetism were pointed out, claiming he was right without proving it, and refusing to discuss the matter any further. Well, he hadn't discussed the matter at all, just preaching on the back of his trivial proof of four decades ago, so it was not really much of a loss. Although it is a pity that there is so much hatred for new ideas in quantum field theory among the people supposedly working in that very area.)
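
For reference, the two mixing relations quoted above are simply an orthogonal rotation of the (B, W0) pair through Thetaw; a minimal sketch (using the sin^2 Thetaw value implied by the masses quoted earlier, an assumption for illustration) makes the structure explicit:

    import numpy as np

    sin2_theta_w = 0.2227                     # implied by Mw/Mz quoted above (assumed)
    s = np.sqrt(sin2_theta_w)
    c = np.sqrt(1.0 - sin2_theta_w)

    # (A, Z0) = R(theta_w) . (B, W0): photon and Z0 as rotated combinations
    R = np.array([[ c,  s],                   # A  =  B cos(theta_w) + W0 sin(theta_w)
                  [-s,  c]])                  # Z0 = -B sin(theta_w) + W0 cos(theta_w)

    print(R)
    print(np.allclose(R @ R.T, np.eye(2)))    # True: the mixing preserves normalization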

After 't Hooft did that, the Glashow-Weinberg-Salam model became popular, because of the mainstream obsession with renormalizable fields. They seemed to assume that a proof of consistency of a theory with the procedure of renormalization was a proof of the theory itself, which it wasn't. The electroweak theory retains numerous ad hoc parameters, and as Feynman writes in his 1985 book QED (chapter 4, p. 142; Penguin, London, 1990 edition):

“Stephen [sic] Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you look at the results they get you can see the glue, so to speak. It’s very clear that the photon and the three W’s are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.”

Cabibbo in 1964 discovered the first in a series of CKM parameters which relate the relative weak interaction strengths for the conversion of different leptons and quarks into other particles, like the example above of the decay of a muon into an electron. Cabibbo found that if the relative weak interaction strength between leptons is designated 1 unit, then the relative strength between pairs of quarks in the same generation is 24/25 unit, and the relative strength for transformations between different quark generations (e.g., c to d and u to s) is only 1/25 unit. He then explained these numbers 1, 24/25, and 1/25 by simply suggesting that the lepton has only one way to decay, but the quark has more choice because there are more quarks than leptons, so the relative probabilities of the two quark decay possibilities are equal to A^2 = 1/25 and 1 – A^2 = 24/25, so that the sum of the two decay route probabilities equals (1/25) + (24/25) = 1, like leptons. In other words, the overall weak interaction strength between quarks and leptons is absolutely identical, and the apparent differences are merely due to the fact that quarks come in more charges than leptons, so you have more than one possible decay route and the sum of the branching fractions (rather than any single branching fraction) of the decay tree equals the lepton weak force. There is a 1/1000 discrepancy in Cabibbo's original scheme as compared to experiment, which was fully explained when the third generation of quarks was discovered (only two generations of quarks were known in 1964). My argument is that because the Cabibbo CKM numbers describe the relative weak interaction strengths for transitions or rather category morphisms between different leptons and quarks, and because in Fermi's theory these interaction strengths depend on masses (the whole reason why the weak force is weaker than electromagnetism is due to the massiveness of the weak bosons), it follows that there are simple multiplying factors relating the different masses of leptons and quarks:




Above: mass is the charge of quantum gravity, so it doesn't follow the usual patterns of quark flavours and electric charges. As discussed in this post, the downquark has double the mass of the upquark, but only half of the magnitude of the electric charge (with the opposite sign). Just as the electric charges of electrons (-1) and downquarks (-1/3) differ by the "numerological" factor of 3, there are a variety of geometric multiplication factors involved in the charges of quantum gravity, i.e. fundamental particle masses. In 1964, Cabibbo first discovered that lepton and quark weak interaction rates are very similar, suggesting a universality between quarks and leptons which is discussed in this post. The muon (a lepton) and the strange quark have similar masses; the truth quark mass is Pi^4 times the tauon lepton mass; the beauty quark mass is Pi times the charm quark mass, and so on. In this diagram, you can reverse the direction of an arrow if you divide by the factor on the arrow, instead of multiplying. As indicated in the table below (which is only approximate for the electron mass, which is so small and therefore is subjected to relatively large effects from second-order field-coupling corrections, unlike the larger masses in this theory), all these masses are functions of the Z0 massive neutral electroweak boson mass, which is present in the electroweak fields of all the particles and, like an interacting "slow photon", acts a little like the currently postulated Higgs field in miring fermions and thus providing inertia and thus mass (its mass interacts directly with gravitons).
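
A minimal check of the stated morphism factors against rough present-day mass values (the numbers below are my assumed inputs; quark masses are scheme-dependent, so only the rough level of agreement matters here):

    from math import pi

    # Assumed approximate masses in GeV (PDG-style; quark masses are scheme-dependent)
    m_tau, m_top      = 1.777, 172.7
    m_charm, m_beauty = 1.27, 4.18
    m_muon, m_strange = 0.1057, 0.095

    print("pi^4 * m_tau   =", round(pi**4 * m_tau, 1), "GeV  vs  m_top    ~", m_top)
    print("pi   * m_charm =", round(pi * m_charm, 2),  "GeV  vs  m_beauty ~", m_beauty)
    print("m_muon         =", m_muon, "GeV  vs  m_strange ~", m_strange)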

The Z0 is the building block of all particle masses in the theory because in the U(1) x SU(2) electroweak-gravity theory we use to replace the electroweak sector of the Standard Model, the U(1) hypercharge is the basis of quantum gravity; it mixes with SU(2) to produce the graviton and the Z0 boson, which are deeply connected (the field quanta of electromagnetism are charged massless W– and W+ bosons, as explained in this post in detail); so the Z0 boson is the pure quantum gravity charge unit. The presence of Pi (with its multiplication factors and powers) is a geometrical effect of the nature of the coupling between massive Z0 bosons in the field and the particle core of the fermion, as discussed in several previous posts; the factor of alpha is present due to the vacuum field effect in this theory, where all the mass comes from the vacuum field around the core, due to pair production as a result of the electromagnetic field at short distances, where it exceeds about 1.3 x 10^18 V/m (the pair production threshold) at 33 fm from a lepton.
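
The 1.3 x 10^18 V/m and 33 fm figures can be reproduced from elementary formulas (a sketch using standard constants; identifying the radius at which the Coulomb field reaches Schwinger's threshold with a "pair production threshold" range is the interpretation argued in this post):

    from math import sqrt

    # Standard constants (SI)
    m_e  = 9.109e-31      # electron mass, kg
    c    = 2.998e8        # m/s
    e    = 1.602e-19      # C
    hbar = 1.055e-34      # J s
    k    = 8.988e9        # Coulomb constant, N m^2 / C^2

    # Schwinger's critical field for pair production
    E_c = m_e**2 * c**3 / (e * hbar)
    print(f"Schwinger threshold E_c = {E_c:.2e} V/m")        # ~1.3e18 V/m

    # Radius at which the bare Coulomb field of a unit charge reaches E_c
    r = sqrt(k * e / E_c)
    print(f"Radius where |E| = E_c : {r*1e15:.0f} fm")       # ~33 fm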

Although popular accounts of pair production in the vacuum describe it as a purely random process governed solely by the uncertainty principle, this is not true, as seen for example in the phenomenon of the polarization of fermion pairs, which shields the electric field by taking energy from it. The process works like this. In strong electric fields, within 33 fm of a fermion of unit charge, pair production occurs spontaneously, but it is not governed solely by the uncertainty principle because, after the pair forms, the charge in the pair with the same sign as the real fermion is repelled from it, while the opposite charge is attracted to it. This is "vacuum polarization"; it shields the core electric field by using energy, and the fact that the pair of virtual charges get driven further apart is what uses up the energy, and also increases the average survival time (above that of the uncertainty principle for energy-time) of the pair of virtual particles before they annihilate back into field bosons. Polarization forces the virtual fermion pair of opposite charges to move further apart than they would in the absence of an electric field, so it takes longer on average before they can come back together and be annihilated.

In other words, they gain energy from the field, and so move slightly towards becoming more like real particles than virtual ones (they approach the mass shell more closely than the uncertainty principle alone allows). During the time that they approach being "on shell" due to vacuum polarization, they have the time to behave more like real particles: for example, they can "make some effort" to obey the Pauli exclusion principle, which geometrically controls how many fermions (of a given set of quantum characteristics or quantum numbers, which includes two states of spin) can occupy a given space. This is a geometric effect on the structure of the virtual fermion field surrounding real fermions, where the electric field strength is so strong that it polarizes the virtual charges to an extent sufficient to make them nearly on-shell for brief periods, so that they start structuring their locations and densities in spacetime into shells (almost like orbital electrons) before they annihilate. When they annihilate they can produce neutral currents of Z0 bosons, which are thus produced in an almost "structured" space around the real particle core, in a way that closely follows the structure of the virtual fermions when they annihilated.

A familiar analogy for scientists is Schwinger's first-order radiative field self-interaction correction to Dirac's magnetic moment of the electron: 1 + alpha/(2*Pi) = 1.00116. This is the multiplying factor that you use to increase Dirac's equation estimate (which ignored the effect on the electron's magnetic moment from self-interactions with its own field) of 1 Bohr magneton, to get the observed 1.00116 Bohr magnetons measured in experiments. The number 1 is the contribution from the core of the particle, and the alpha/(2*Pi) is the relative contribution from the vacuum field to first order (the 1st term in the perturbative expansion, i.e. the simplest Feynman diagram, which in this case gives a prediction accurate to 6 significant figures; even for the much heavier muon this first vacuum field correction is accurate to 5 significant figures!). For quantum gravity masses, all of the mass of fermions comes from the physical miring of the particle core by the Z0 massive bosons in the vacuum field which surrounds it; this miring also accounts, via the "snowplough mechanism", for relativistic mass increase: as you approach velocity c, the Z0 bosons pile up because they can't move fast enough to get out of the way, effectively adding to the total mass like snow piling up higher on the blade of a snowplough which tries to go faster, making the whole system heavier through the added inertia. Since all of the fermion's mass comes from the vacuum field, it carries the geometric factors for the shape of the arrangement and the physical way that interactions can occur with spinning particles. (A simple example is that when we fire a laser beam side-on at a spinning missile, the interaction rate per unit area is reduced by a factor of Pi if the missile is spinning rapidly, as opposed to not spinning. Different arrangements of spin in the exchange of gauge bosons between different numbers of particles with different relative spins can therefore simply insert factors like 1/2, 2 or 3 times Pi into the mass morphisms.)
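
The first-order vacuum correction quoted above is trivially reproduced (a sketch; the measured electron value quoted in the comment is my added reference number, approximate only):

    from math import pi

    alpha = 1 / 137.036                  # fine structure constant (approximate)
    schwinger = 1 + alpha / (2 * pi)     # first-order (one-loop) correction factor

    print(f"1 + alpha/(2*pi)      = {schwinger:.6f}")    # 1.001161
    print("measured electron g/2 = 1.001160 (approx.)")  # agreement to ~6 significant figures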

There are further corrections for more complex vacuum interactions represented by loopy Feynman diagrams, and each additional loop adds another factor of alpha (as proved later in this post, alpha is a polarization shielding factor; although such polarization occurs for the electromagnetic interactions rather than gravitational fields, the whole point is that the electromagnetic field's pair production creates the massive Z0 bosons which mire the core charge, providing its mass in a way which is reminiscent of Higgs field explanations). Hence the masses containing alpha-squared or other power terms, rather than just alpha, can ultimately be traced to particular looped Feynman diagrams which make Z0 bosons due to pair production, since a spacetime loop in a Feynman diagram consists of the cycle of pair production followed by annihilation into bosons, and back again, in an endless closed loop. There are literally an infinite number of perturbative corrections, with more and more complex accompanying Feynman diagrams: it is in principle impossible to depict them all or calculate them all. Schwinger's theory was sketchy (e.g., Dirac rejected his renormalization of charges as a fiddle) and incomplete (he didn't have a scheme to consistently analyze further higher-order radiative corrections in his first paper), and was only taken seriously when combined by Dyson with Feynman's theory (which described each term in the perturbative expansion to the path integral with a simple pictorial diagram of what was physically occurring in that term).

The point of the analogy is that the entire mass of a fermion comes from vacuum field interactions in this quantum gravity theory, so the role of field correction terms such as alpha or alpha-squared, and geometric factors such as twice Pi, is important in all cases (see the table below). So there is a solid theoretical basis for these predictions, and despite some sketchy arguments and semi-empirical extrapolations based on the appearance of symmetry (see the patterns in the categorical type multiplication factors diagram above), and despite the fact that at first glance the ignorant and the arrogant may prefer to dismiss them as mere "numerology", they are in fact quite reliable, accurate and checkable as masses are better determined by the LHC experiments (apart from the rough estimate for the electron mass):


Above: a PDF version of these tables is linked here. As noted in previous posts, the prediction of quark masses from this theory has been slow, most attention having been given to predicting the strengths of fundamental interactions, and cosmological implications. However, some progress has recently been made and it is hoped, therefore, that a complete paper will be published soon.

Quantum field theory

OVERVIEW

The reductionist problem – breaking a system down into small bits to simplify it enough to be able to understand it – seems to apply to quantum gravity, if there’s nothing to prevent graviton exchanges between all masses throughout the universe. It was a nice simplistic guess by Pauli and Fierz that gravitons exchanged between just two particles cause gravitation (hence gravitons have spin-2 since similar gravitational charges are thus assumed to attract rather than repel), but this is based on the reductionist assumption that all the surrounding masses have no net influence. The failures of spin-2 quantum gravity theories to make checkable predictions support the thesis that you cannot understand quantum gravity by the reductionist method. You cannot get gravity by just considering two particles of matter, but must consider graviton exchanges with all of the matter located all around them in the universe. The resulting model permits spin-1 gravitons, suggesting a repulsive gauge theory of gravity which easily fits within the Standard Model of particle physics. Like a raisin cake baking, the dough (graviton) pressure will in general push raisins apart (cosmological acceleration); but where two raisins are nearby (i.e. with little or no dough pressure acting between them, but a lot of dough pressing on the opposite sides) those raisins will be pushed together in a clump by the dough pressure (gravitation) on the outward sides which exceeds the mutual repulsion between them.

This paper reviews the mathematics of quantum field theory used in the Standard Model, then reformulates the Standard Model into a simpler theory incorporating a gauge group for quantum gravity which makes checkable predictions. On Page 195 of the original 1954 Yang-Mills paper in Physical Review, where they tried to make an isospin SU(2) theory of the strong (not weak!) interaction, they had problems with the mass of field quanta: “We next come to the question of the mass of the b quantum, to which we do not have a satisfactory answer. One may argue that without a nucleon field the Lagrangian would contain no quantity of the dimension of a mass, and that therefore the mass of the b quantum in such case is zero.”

In 1956, Schwinger and his student Glashow first attempted to apply the Yang-Mills SU(2) isospin field theory to unify electromagnetic and weak nuclear interactions, as described in Glashow’s Nobel Prize Lecture of 1979: “The charged massive vector intermediary and the massless photon were to be the gauge mesons. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2). … We come to my own work done in Copenhagen in 1960, and done independently by Salam and Ward. … I was led to SU(2) x U(1) by analogy with the appropriate isospin-hypercharge group which characterizes strong interactions. In this model there were two electrically neutral intermediaries: the massless photon and a massive neutral vector meson which I called B but which is now known as Z. The weak mixing angle determined to what linear combination of SU(2) x U(1) generators B would correspond.”

There is a problem with Glashow’s rejection of SU(2) for the unification of electromagnetism and weak interactions: Schwinger and Glashow assumed that the massless neutral current was the electromagnetic current. In fact, massless but electrically charged gauge bosons can propagate as electromagnetic gauge boson exchange radiation in both directions along a path, so that the opposing magnetic curls cancel out, which (1) prevents infinite magnetic self-inductance (which prevents massless charged radiations from propagating along a one-way trajectory), and (2) ensures exact equilibrium in the rate of exchange of charge by the field, so that fermions can’t spontaneously change their charges due to electromagnetic fields. I.e., this two-way (exchange) equilibrium, required to cancel out the infinite self-inductance from the exchange of massless electrically charged gauge bosons, negates the Yang-Mills term for charge transfer by electromagnetic fields (effectively reducing the Yang-Mills equations to Maxwell’s).

The new argument is that SU(2) is the unified gauge group of weak and electromagnetic interactions, while U(1) is a spin-1 quantum gravity group which mixes with SU(2) to give gravitons and the left-handed bosons mass, creating the weak interaction without a Higgs field. The unmixed SU(2) massless charged bosons remain as electromagnetic field quanta.

Since this mixing with the gauge group for quantum gravity is ignored by the Standard Model, Yang-Mills and Abelian gauge symmetries are supposed to require massless field quanta, implying a separate "Higgs field" to add the observed masses. Massless field quanta imply equal couplings, hence unbroken symmetry, but at low energy the observed couplings differ (e.g. the weak field is weak!), and this is conventionally explained by the gauge bosons acquiring immense masses from a "Higgs field", slowing them down, thus reducing the weak interaction rate, so that the weak force is very small. However, the Higgs field is contrived so that it doesn't give the weak gauge bosons mass above the electroweak unification energy, so perfect U(1) x SU(2) symmetry is assumed to exist at high energy by making electromagnetism and weak interactions of similar strength. Another Higgs-type field is supposedly needed to break the U(1) x SU(2) symmetry from the SU(3) QCD symmetry. No Higgs field has been observed. It is clear, however, that if there is quantum gravity, then the "charge" of the quantum gravity symmetry group will be mass-energy, so mass-energy will be a fundamental charge, rather than something always given by a separate Higgs field. The weak bosons simply acquire mass from a Weinberg-type mixing with the quantum gravity group.

The new procedure leaves the SU(3) symmetry and the SU(2) weak (isospin) symmetry intact, but replaces the bosonic Higgs field with gravitational charge and changes the way electromagnetism is generated by U(1) mixing with SU(2). Electromagnetism is generated by massless, charged SU(2) gauge bosons, while the massless neutral W0 mixes with the hypercharge U(1) photon to produce spin-1 bosons, causing cosmological acceleration by repulsion of similar gravitational charges, and gravity by the mutual shadowing of relatively small nearby masses from the convergence of gravitons exchanged with relativistically receding distant masses in the isotropic universe. In 1996 this accurately predicted the value of the cosmological acceleration first observed in 1998. Relatively small masses (compared to the mass of the surrounding universe) that aren’t cosmologically accelerating away with any appreciable force from one another, fail to mutually repel because they don’t exchange gravitons strongly; they are pushed together by asymmetric graviton exchange with the surrounding universe on their “unshadowed” sides. These off-mass shell gravitons don’t cause the drag and heating of on-shell radiation, and the sizes of the forces involved are predictable by applying Newton’s 2nd and 3rd laws to the cosmological acceleration of masses.

The charge for quantum gravity (mass) replaces the Higgs field, and a fact-based field energy conservation model replaces the speculative mainstream field unification programmes (Higgs and SUSY), which were speculatively based on an assumed, unobservable equality of coupling parameters at the Planck scale. The energy density of an electromagnetic field is known as a function of field strength, and using the running couplings in QFT we can calculate the fall in field strength due to virtual fermion pair polarization in field strengths exceeding Schwinger's threshold of 1.3 x 10^18 V/m for pair production (which occurs at a radius of 33 fm from an electron or proton). The loss of energy density from the electromagnetic field as it is shielded due to vacuum polarization of virtual fermion pairs is used to drive those pairs apart, increasing their survival time before annihilation, and allowing the time for the gauge bosons which accompany them to mediate short-ranged weak and strong nuclear interactions. If the real fermion producing the electromagnetic field has isospin charge, the weak gauge bosons formed with virtual fermions will be able to mediate weak interactions between real fermions. Likewise for color charge with gluons.

Energy attenuated from the electromagnetic interaction is converted into short-ranged weak and strong field energy by this mechanism. The fact-based model automatically generates the various couplings and masses which have to be put in by hand in the existing Standard Model. A large variety of checkable predictions are included. Another of the simplest findings is the theoretical basis of a version of Riofrio's equation for the universal gravitational coupling, G = tc^3/M, where t and M are the age and mass of the universe. It is shown that this time-variation of G, making gravity weaker at earlier times after the big bang, accurately explains the early-time flatness of the universe, observed in the very small anisotropy of the CBR. (This avoids the need to explain the flatness of the early universe by the speculation of "inflation", i.e. a proposed faster-than-light exponential expansion for a brief interval after the Planck time, reducing gravitational field strengths by dispersing the mass-energy of the universe over an immense volume.) Teller's objection to a variation of G in theories such as Dirac's large numbers hypothesis (where G decreases with time rather than increasing) is falsified, since it assumes a variation of G independent of electromagnetic forces. The unification of electromagnetism and gravity implies that both couplings vary similarly with time, so that Teller's variation of fusion rates in the sun or in the big bang as a function of time no longer occurs. This is because fusion depends on gravitational compression to overcome the Coulomb (electromagnetic) barrier. If both gravitational compression and electromagnetic repulsion are varied in a similar way, fusion rates are unaltered.
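
As a sanity check on G = tc^3/M (a sketch with assumed round figures for the age of the universe and the standard value of G), the mass implied by the relation is of the same order as commonly quoted rough estimates of the mass of the observable universe, around 10^53 kg:

    # Riofrio-type relation G = t c^3 / M, rearranged for the implied mass M = t c^3 / G
    t = 13.8e9 * 3.156e7      # age of the universe in seconds (~13.8 Gyr, assumed)
    c = 2.998e8               # m/s
    G = 6.674e-11             # m^3 kg^-1 s^-2

    M = t * c**3 / G
    print(f"Implied mass of the universe M = {M:.2e} kg")   # ~1.8e53 kg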

Therefore, there is no Higgs and no SUSY (both are modern-day ethers), electroweak symmetry is broken at all energies, there's no high-energy unification of running couplings into equality as the Planck scale is approached, and the deep change needed in the SM lies in the failure of its electroweak symmetry description. Instead of U(1) being the basic constituent (after Weinberg's mixing with the W0) of electromagnetism, U(1) is quantum gravity, so the graviton is a spin-1, uncharged (Abelian rather than Yang-Mills) off-shell repulsion force mediator. Newton's apple was repelled down to earth by spin-1 gravitons exchanged with convergence inward from the immense mass of distant galaxies above it, since the Earth below the apple has a smaller mass and no graviton convergence, so its repulsion of the apple is trivial by comparison, and most importantly, it is interacting with and thus shielding some of the upward flux of gravitons coming from distant galaxies below the apple. The deflection of light by the sun's gravity during eclipses no more proves that energy is gravitational charge (implying Yang-Mills gravity) than the deflection of light in a lens of glass proves that, since the normal photon interacts with electromagnetic fields in the glass, it "must" be electrically charged. A path integral QFT approach to the deflection of light by quantum gravity averts the need for energy to be gravitational charge: light just takes the path of least action when gravitational field energy is properly taken into account. So there is no Higgs field needed to give mass to energy. Mass as a charge is only seen in its pure form in the 91 GeV Z-boson. The masses of other particles are derived from this 91 GeV building block via their ability to create Z-bosons in the pair-production fields around them, and to acquire some mass from them, allowing us to "predict" not only lepton masses but also existing and new, so-far unobserved, hadron particle masses independently of the usual lattice QFT calculations. In this very limited way, the Z-boson mixed U(1) and SU(2) field produced by pair production around real particles behaves like the Higgs field in "miring" such particles to a certain extent, indirectly giving them quantum gravitational and thus inertial mass, the inertia being due to the equilibrium of graviton exchange fluxes with all the distant masses distributed throughout the universe.

Hawking radiation is suppressed by Julian Schwinger's threshold electric field strength required for pair production, implying the requirement that black holes must have a large net electric charge in order to radiate. Schwinger's formula shows that you need at least 1.3 x 10^18 V/m of electric field strength at the event horizon in order to create pairs of fermions there, so that you get some of the partners escaping and annihilating into gamma rays beyond the event horizon. No massive black hole with a small or nil electric charge can radiate. Hawking and others are unaware of the IR cutoff. The vacuum isn't full of pair production: pair production is limited to strong electric fields near charges, i.e. within 33 fm of an electron. This imposes electric charge requirements upon black holes for their emission of Hawking radiation. If the whole vacuum was full of pair production, it would be polarizable, so there would be no ~1 MeV IR cutoff upon the logarithmic-type running coupling of shielded charge with decreasing collision energy or increasing distance, so the observable electric charge of the electron would keep falling with increasing distance, instead of ceasing to decrease at collision energies below ~1 MeV. Hence, the only abundant black holes that can be theoretically proven to be capable of Hawking radiation (i.e. have a strong enough electric field for pair production at their event horizon) are charged fermions. (See equation 359 in Dyson's http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo's http://arxiv.org/abs/hep-th/0510040.) It might be claimed that the pair production in spacetime near the event horizon of an electrically uncharged black hole, required for Hawking radiation to be emitted, could be provided by a strong quantum gravity field, but unlike pair production in electric fields, there is neither an experimental nor a theoretical treatment of quantum gravity to back up this speculation.
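
A sketch of the charge requirement this argument places on a black hole (assuming, as my simplification of the argument, an unscreened Coulomb field evaluated at the Schwarzschild radius):

    # Minimum charge for the Coulomb field at the event horizon to reach the
    # Schwinger pair-production threshold, per the argument in the text.
    G   = 6.674e-11        # m^3 kg^-1 s^-2
    c   = 2.998e8          # m/s
    k   = 8.988e9          # Coulomb constant, N m^2 / C^2
    E_c = 1.3e18           # Schwinger threshold, V/m (as quoted above)

    def q_min(mass_kg):
        r_s = 2 * G * mass_kg / c**2          # Schwarzschild radius
        return E_c * r_s**2 / k               # charge giving |E| = E_c at r = r_s

    M_sun = 1.989e30
    print(f"Solar-mass black hole: minimum charge ~ {q_min(M_sun):.1e} C")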

In summary, quantum field theory is plagued at present with popular errors which have been disproved and corrected, but are nevertheless still being taught and falsely promoted. Examples are first quantization (a classical Coulomb field with indeterminacy falsely applied intrinsically to the particle motion, instead of via quantum field interactions) instead of second quantization (where the indeterminacy is produced by random field quanta interactions); obfuscation in teaching the physical basis of the low-energy (IR) cutoff for the running couplings in charge renormalization, as opposed to the better-known high-energy (UV) cutoff which eliminates unphysical divergence problems in perturbative expansions; and the Pauli-Fierz graviton spin theorem. Many textbook authors do not appear to grasp the physical basis of second quantization and the unphysical nature of popular first-quantization quantum mechanics, where the Coulomb field is treated classically and indeterminacy is instead falsely introduced by making the uncertainty principle act directly upon position and momentum (instead of indirectly via the real non-classical, quantized electromagnetic field). It has become the norm for physicists to live with a wide range of compartmentalised ideas leading to contradictory dualities where equations don't have to correspond to consistent physical phenomena, let alone mechanisms. By carefully examining the physical basis for the mathematics, the resulting errors are exposed and easily corrected.

One particularly important example is general relativity, a classical theory of gravitation in which curved spacetime differential geometry is set up to model accelerations as a smooth curve on a graph of distance versus time (curved spacetime), with Newtonian gravitation the low-energy limit, and a contraction term for relativistic energy conservation (because the divergence of the stress-energy field-creating tensor is not zero, which would violate energy conservation if it was directly proportional to spacetime curvature). Because it incorporates relativity, energy conservation and Newtonian gravitation for the weak field asymptotic limit, general relativity is an accurate approximation to gravitation on small scales, but it fails for cosmology since it is compatible with an infinite spectrum of different universes, by ad hoc selection of dark matter and dark energy parameters, but despite this apparent flexibility it is weak in making the implicit assumption that gravitation is the same everywhere.

This implicit assumption is that gravitation is not a consequence of the net motion of the surrounding matter in the universe, whose cosmological acceleration implies an outward force F = ma which by Newton's 3rd law produces an equal and opposite inward graviton-mediated force, hence gravity. There are a number of failures of general relativity related to this, and some surprising consequences for cosmology. In his May 1978 Scientific American article, "The Cosmic Background Radiation and the New Aether Drift", Richard A. Muller reported his finding of a 600 km/s absolute motion of our Milky Way galaxy with respect to the cosmic background radiation emitted just 400,000 years after the big bang, from U2 aircraft studies which showed a large anisotropy in the 2.7 K blackbody radiation, with temperatures about 3 mK hotter in the direction of motion towards Andromeda (blueshift) and 3 mK cooler in the opposite direction (redshift), the variation depending on the cosine of the angle.
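
A sketch of the standard dipole formula behind these numbers (the velocity values below are my inserted reference figures): the roughly 3 mK amplitude corresponds to the solar system's motion relative to the cosmic background radiation, while the larger ~600 km/s figure for the Milky Way is obtained after correcting for the Sun's orbital motion within the galaxy.

    # CMB dipole amplitude: Delta_T ~ (v/c) * T_cmb, varying as cos(angle)
    T_cmb = 2.725          # K
    c = 2.998e5            # km/s

    for label, v in [("solar system", 370.0), ("Milky Way (quoted)", 600.0)]:
        dT = (v / c) * T_cmb * 1e3           # mK
        print(f"{label:20s} v = {v:5.0f} km/s  ->  dipole ~ {dT:.1f} mK")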

Since 1978, this large anisotropy has been confirmed by all subsequent investigations, which have utilized the COBE and WMAP satellites and have documented much smaller anisotropies as well, such as the galaxy-seeding anisotropies allegedly explained by inflation. There is an interesting psychological phenomenon called the Copernican Principle, which originally stated that there is no physical evidence to support the dogma that the sun orbits the Earth, but has since been corrupted by relativity propaganda into a new dogma of its own, which states that there are scientific grounds to refute the Earth being at any "special place" in the universe. The basis for this assertion was the mere speculation by Einstein that over long distances the universe is curved, so light will bend back and the universe will appear similar from any position. However, in 1998, this gravitational curvature was found to be offset by the cosmological acceleration.

The relativistic view of the 600 km/s "absolute motion" of the Milky Way towards Andromeda with respect to the reference frame of the cosmic background radiation is that it is simply a gravitational attraction towards an invisible dark matter attractor or "great attractor", and that the cosmic background radiation does not provide an adequate basis for an absolute motion reference frame (because, if it did, it would be opposed by Einstein propagandists like his biographer Abraham Pais). The alternative and correct view is that the 600 km/s motion is indeed a refutation of relativity. The universe is uncurved on the largest, cosmological, distance scales according to the cosmological acceleration data, so light doesn't orbit the universe. Contraction is produced simply by spin-1 gravitons compressing masses radially; this quantum gravity unifies the Lorentz-FitzGerald contraction in the direction of motion of moving objects (due to head-on pressure while accelerating) with the radial "excess radius" of gravitational curvature. The 600 km/s motion of the Milky Way is not necessarily due to gravitation towards invisible, ad hoc, speculated mass, but may be largely a residual of the absolute motion of the Milky Way's mass from early times. In this fact-based model, all relativistic results are physical effects of graviton exchanges. Although passing near other galaxies will influence the motion of the Milky Way, taking the figure of 600 km/s as at least an approximate figure for the speed over the past 13,700 million years of the universe suggests a distance from a "point of origin" (singularity) of under 1% of the radius of the uncurved universe, refuting the dogma of the Copernican Principle. So crass are relativistic prejudices that the Copernican Principle, proposed originally as a defense of objectivity against dogmatic religious speculation, has itself become a dogmatic relativistic speculation which is used by so-called peer-reviewers against scientific objectivity and progress. The fact that we are so close to the centre explains the high degree of isotropy of the universe around us, so that the principle of relativity is then superfluous, while its mathematical nature is a consequence of quantum gravity.
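
Taking the figures in the text at face value (a sketch only: a constant 600 km/s over 13,700 million years, and a universe radius taken as roughly c times its age), the "under 1%" statement checks out:

    # Distance travelled at ~600 km/s over the age of the universe, compared with
    # a universe radius taken as roughly c * t (the "uncurved" radius used above).
    v = 600e3                     # m/s
    c = 2.998e8                   # m/s
    t = 13.7e9 * 3.156e7          # s

    d = v * t
    R = c * t
    print(f"d = {d:.2e} m, R = {R:.2e} m, d/R = {100*d/R:.2f}%")   # ~0.2%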

CONTENTS

1. Introduction to the repulsive quantum gravity model
2. Mathematics in the Standard Model
2.1 Field operators, Maxwell and Yang-Mills equations
2.2 Path integrals and renormalization
2.3 Higgs fields and SUSY
3. The new fact based replacement to the Standard Model

4. Historical background and the hostile reception of useful innovations in mainstream physics

1. Introduction

In 1905 Einstein discarded the vacuum ether as "superfluous" since it introduced needless complexity without being required to make falsifiable predictions; today we reject the Higgs field and SUSY as being similarly superfluous ethers. This paper is conservative in seeking to ascertain, then build upon, the confirmed facts of quantum field theory. In quantum field theory the curved continuum spacetime of classical physics is replaced by a second quantization utilizing field operators, in which indeterminacy is produced by random interactions of real charges with field quanta. This second quantization differs from the first quantization of quantum mechanics, where fields are treated classically and indeterminacy is artificially generated by making the positions or momenta of real particles intrinsically indeterminate. In other words, quantum field theory (second quantization) says that fields cause quantum chaos, whereas quantum mechanics (first quantization) falsely keeps fields classical and introduces indeterminacy using an unmechanistic model of motion.

2. Mathematics in the Standard Model
2.1 Field operators, Maxwell and Yang-Mills equations

Maxwell's equations are classical field equations for electromagnetism, in which the fields, while mediating the effects of the charge through space (charges are only known via the fields; nobody has ever seen the core of a charge), do not change the signs of charges.

The Yang-Mills equations are similar to Maxwell's equations but have an additional term representing the charge carried by the field (i.e., field quanta constitute currents travelling through the vacuum), enabling the field to change the signs of charges by delivering charge as well as force. This 1954 Yang-Mills idea of charged fields to mediate weak charge (a 1932 idea of Heisenberg's, named "isospin" by Wigner in 1937) was first investigated before Yang and Mills by Pauli, who abandoned his investigations after finding that the gauge bosons involved were massless and therefore of infinite rather than short range.

Currently this objection is overcome by a Higgs field which makes the three weak gauge bosons acquire large masses at low energies (giving them a short range and a slow speed, so that the interaction rate is relatively weak, thus the "weak interaction"), while at higher energy the gauge bosons move faster, thus interacting less with the slow, massive Higgs field bosons, giving the weak gauge bosons less mass and allowing the interaction strength to numerically "unify" with electromagnetism above the Higgs electroweak unification energy. One purpose of this paper is to show that this Higgs theory is a contrived, superfluous epicycle in the Standard Model. The three weak bosons do not all acquire mass from a separate field; the charged massless versions are actually the field quanta of electromagnetism, while the supposed virtual photon of electromagnetism (which in QED has to have 4 polarizations, not just 2 like a real photon, in order to account for attraction as well as repulsion) arising from the mixing of the neutral massless SU(2) boson with the U(1) boson in the Standard Model is actually the graviton, and the U(1) weak hypercharge does not yield electric charge after mixing, but rather mass (quantum gravitational charge). In this new model, therefore, the well known "mass-gap" problem with Yang-Mills theories is averted by the mixing of the U(1), which does have a mass gap (because its hypercharge yields mass, the origin of quantum gravity), with the massless symmetry of SU(2) (electromagnetism and weak isospin fields). This replaces the Higgs field. Since the isospin charge has a handedness, it is possible to couple mass only to left-handed isospin field interactions. Instead of all SU(2) bosons acquiring mass at low energies, only one handedness of the chiral symmetry does, regardless of energy.

Gravitational fields have energy and therefore are a source of gravitation (i.e. carry charge) in classical general relativity, which naively is often assumed to imply the need for “charged” gravitons. But if the Higgs field were the origin of mass, the Higgs field would be the charge for quantum gravity, because mass is gravitational charge. The observed gravitational deflection of massless photons of starlight passing near the sun during eclipses proves that gravity can act directly upon energy, not just upon rest mass. Our “graviton” is the neutral virtual photon, so the energy of gravitational fields due to these graviton exchanges between masses structures the spacetime so that it affects the motion of real (on mass shell) photons of light, in just the same way that the loading of photon fields by the “neutral” electromagnetic fields in a block of glass called a lens will deflect the motion of such photons. The use of virtual photons as gravitons therefore simplifies quantum gravity.

The best explanation of the Maxwell equations in vector calculus is in Feynman's Lectures on Physics, while the clearest explanation of their conversion into tensor form is given in Ryder's Quantum Field Theory (2nd ed., 1996, section 2.8, pp. 64-5). Ryder also gives a discussion of the electroweak theory in section 8.5, pp. 298-306, although to understand the basis of the electroweak theory equations the reader must supplement this section in Ryder with pages 215-32 in chapter 10 ("Electroweak Theory") of McMahon's 2008 Quantum Field Theory Demystified, and the excellent original paper by Yang and Mills, "Conservation of Isotopic Spin and Isotopic Gauge Invariance", Physical Review, v96, 1954, pp. 191-5. Weinberg's The Quantum Theory of Fields vol. 1, chapters 1 and 11, are a particularly important historical introduction to the subject and the mechanism of one-loop radiative corrections (the polarization of virtual fermion pairs produced in the vacuum near real charges); section 21.3 in chapter 21 of vol. 2 covers the electroweak theory briefly, but Weinberg's writings are not a replacement for the Yang-Mills paper or other sources. The most recent reformulations of the Yang-Mills equations may look mathematically simpler and thus more elegant, but they only achieve visual prettiness at the ugly hidden cost of obfuscating still further the already abstruse physical phenomena they model, so we prefer to use the easily understandable but lengthy notation of Yang and Mills.

Zee's textbook on quantum field theory is worth reading but begins with a statement claiming to convey Feynman's perspective on path integrals, which appears to conflict with Feynman's own statements in his self-illustrated, excellent 1985 path integrals book QED; for instance, Feynman draws paths for the path integral for the refraction and reflection of light as straight lines, with the integral summing the paths of different straight lines and reflection angles to determine the path of least action, such as that of least time. Zee instead conveys the idea that the path integral includes non-straight lines produced by the diffraction by the physical fields of atoms at the edges of slits in a screen, even if the screen is completely removed by having so many holes or slits drilled in it that it ceases to exist altogether. Zee praises this misrepresentation of Feynman as "very Zen", but actually it's very pseudoscientific, like "the memory of water" in homeopathy, where you dilute a treatment so much that only pure water remains and try to claim that it still has an effect beyond that of a placebo.

Textbooks including non-falsifiable, half-baked “string theory” are unable to convey a scientific understanding of nature based on such speculations, but deliver to the student the kind of delusions which once were found in religion, and now have crept into the sales hype of modern physics. First quantization is the best example of the intrusion of not mystery but error into the teaching of modern physics; the experimental confirmation of second quantization – field quanta causing indeterminacy – should have debunked first quantization, which is belief in a mysterious direct action of the “uncertainty principle” upon the position or momentum of a particle.

Understanding the infinite self-inductance problem for the propagation of electrically charged, massless radiations is central to the innovation. Classical electromagnetism presents a confused picture, supposedly determined by Maxwell’s equations, which were formulated over thirty years before the electron was actually discovered in 1897. The confusion stems from Kirchhoff’s First Law, which states that electricity can only flow in a complete circuit, i.e. it cannot flow in an open circuit. The law supposes that somehow electrons all instantaneously begin moving throughout the circuit simultaneously. At the time Kirchhoff formulated that “law”, it was not known that electricity is constrained by the velocity of light. Maxwell addressed the issue that electricity will flow into a capacitor even if the dielectric is nothing but a vacuum, but he did not address the velocity of electricity itself, noting in his Treatise on Electricity and Magnetism, 3rd ed., 1873, Article 574: “… there is, as yet, no experimental evidence to shew whether the electric current … velocity is great or small as measured in feet per second.”

It was eventually shown by Heaviside that electricity propagates at the velocity of light for the dielectric material, violating Kirchhoff’s First Law by flowing into even incomplete circuits at light velocity (since electricity is unable to sense a broken circuit ahead). This velocity suggested to Heaviside that electricity behaves like light, i.e. essentially all of the energy of electricity propagates as disturbances in gauge boson mediated electromagnetic fields, rather than as the kinetic energy of drifting electrons. A logic pulse in a computer is therefore a good approximation to a pulse of massless fields representing two opposite charges, forming a transverse electromagnetic (TEM) wave. This has strong implications for modern quantum field theory: each conductor carries an inversion of the electric signal in the other, corresponding to opposite directions of gradient for the acceleration of charges in each conductor, so that the magnetic field vectors generated around each conductor have opposing curl directions and cancel out the net magnetic field.

This cancellation of magnetic field vectors prevents the infinite magnetic self-inductance per unit length of each conductor which would occur if it were carrying a current alone. A similar effect occurs with the exchange of electrically charged field quanta. Massless versions of the charged W quanta of SU(2) should be unable to propagate in one direction along any given path because of Lenz’s law, i.e. the induced magnetic field opposes the motion. But if there is an equilibrium in the exchange in two opposite directions (which is the whole basis of the idea of gauge field theory, the exchange of field quanta between charges!), then the magnetic field vectors from the “return” current will neatly cancel out the magnetic field from the simultaneous, equilibrium “primary” current. How the equilibrium is established in the first place is irrelevant to this innovation, since an equilibrium of exchange currents is necessary for the stability of any field based upon the exchange of field quanta.

(I can’t easily incorporate the mathematics typesetting here from the draft paper done using, appropriately, Quark XPress – to avoid layout problems in Word – but will post a link here to the final PDF paper version once it is completed.)


Fig. 1: it is an observed fact that there is an isotropic outward acceleration of about a ~ Hc = 6.9 x 10^-10 ms^-2 of 3 x 10^52 kg of matter distributed isotropically around us. F = ma tells you that the small acceleration of the immense mass in the universe around you constitutes a radial outward isotropic force of 2 x 10^43 Newtons. Newton’s 3rd law tells you that every action has an equal and opposite reaction, a 2 x 10^43 Newtons isotropic inward force, which it turns out is mediated by gravitons. Every fundamental particle is exchanging gravitons giving rise to that force on its surface, causing no net force (although there is a lot of chaotic motion on small scales from individual field quanta exchanges in electromagnetism and gravitation, akin to Brownian motion). Two particles shadow one another from the immense inward graviton flux from the rest of the universe. The shadow area cast on one fundamental particle by another is the cross-sectional area of the fundamental particle for quantum gravity, multiplied by the square of the ratio of the particle radius to the separation distance. The ratio of shadow area to total surface area for quantum gravity interactions is equal to the proportion of the 2 x 10^43 Newtons isotropic inward force which causes gravity. Hence quantum gravity yields a prediction for Newton’s gravity coupling G that can be compared to experiments. Putting that differently, we can predict the value for the acceleration of an apple toward the Earth from quantum gravity, from knowing the cosmological acceleration and the mass of the surrounding universe.
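A minimal numerical sketch (Python) of the Fig. 1 arithmetic, using only the two figures quoted above for M and a; this is just Newton’s 2nd and 3rd laws applied to the quoted values, not an independent derivation:

M = 3e52       # mass of the surrounding universe in kg, as quoted above
a = 6.9e-10    # cosmological acceleration a ~ Hc in m/s^2, as quoted above

F_outward = M * a     # Newton's 2nd law: outward force of the receding matter
F_inward = F_outward  # Newton's 3rd law: equal and opposite inward reaction force

print(f"Outward force F = Ma = {F_outward:.1e} N")  # ~2 x 10^43 N, as stated above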

Fig. 2: quantum gravity is the same thing as “dark energy”: both are purely repulsive effects of the exchange of spin-1 field quanta. The more mass there is in the Earth below you, the greater the fraction of the upward coming graviton flux that gets shadowed, and therefore the greater net downward flux at the Earth’s surface (gravitational acceleration). Although the amount of the downward flux from the universe above your head does not depend on the mass below your feet, the fraction of that flux which is not being cancelled out is directly proportional to the mass below your feet which shields the upward flux. More mass gives more shielding, which gives less cancellation of the net force from the graviton flux which is pushing you down. So you get accelerated down faster to a big mass than to a small one at the same distance. If you had a perfect shield below your feet, you would be pushed down with a graviton flux force equal to half the inward force of the universe, i.e. 10^43 Newtons. The fact is, this can never happen because the gravitational cross-sections of fundamental particles are so small that all masses are very ineffective graviton interactors! Only a tiny fraction of the upward graviton flux gets shadowed by the Earth. The acceleration of the universe observed from supernovae redshifts in 1998 is roughly a ~ Hc = 7 x 10^-10 ms^-2. In 1996 it was predicted that this acceleration exists, in order to explain the observed small coupling parameter of quantum gravity. Very simply, this acceleration a tells us that the mass of the surrounding isotropic universe M has an outward force F = Ma. This comes from a well tested piece of physics known as Newton’s 2nd law of motion, which seems to have been forgotten by some cosmologists. Newton’s equally well founded 3rd law then tells us that the outward force must be accompanied by an equal and opposite inward reaction force (which is mediated by spin-1 bosons, gravitons). Hence we know that gravitons carry a converging inward force to us equal to the product of the mass of the receding universe and the cosmological acceleration. Knowing the geometry of how gravity is produced from a tiny fraction of this force being shadowed by the mass of the Earth or other effectively non-receding masses, allowed us to predict the cosmological acceleration of the universe. One of the problems in explaining this theory is getting people to grasp the relative mass of the Earth to that of the surrounding isotropically distributed masses in the universe, which are immense and more important due to the inward convergence of gravitons. Most quantum field theorists simply follow the 1939 error of Wolfgang Pauli and Markus Fierz, which naively ignores all the masses in the universe except for the two masses which appear to be attracting. This is a false foundational assumption with disastrous consequences, leading to the mistaken “proof” that gravitons need to have spin-2 rather than spin-1.

Fig. 3: the detailed geometry of the shielding process outlined in Fig. 2. Around us, the accelerating mass of the isotropic universe causes an outward force that can be calculated by Newton’s 2nd law, which in turn gives an equal inward reaction force by Newton’s 3rd law. The fraction of that inward force which causes gravity is simply equal to the fraction of the total effective surface area of the particle which is “shadowed” by relatively nearby, non-receding masses. If the distance R between two such particles is much larger than their effective radii r for graviton scatter (exchange), then by geometry the area of the shadow cast on the surface area of one of the particles, 4*Pi*r^2, by the other similar sized fundamental particle is simply its cross-sectional area scaled down by the inverse square law, i.e. multiplied by the square of the ratio of the particle radius to the separation: the shadow area is Pi*r^2*(r/R)^2 = Pi*r^4/R^2, so the fraction of the total surface area of one particle which is shadowed by another similar particle is (Pi*r^4/R^2)/(4*Pi*r^2) = (1/4)(r/R)^2. This fraction merely has to be multiplied by the total inward force generated through Newton’s 3rd law by distant mass m undergoing radial outward observed cosmological acceleration a, i.e. force F = ma, in order to predict the net gravitational force. The total observed mass of the accelerating universe is known from Hubble space telescope observations to be around 9 x 10^21 stars with mean mass roughly that of the sun, ignoring relativistic effects and dark matter (which may not be accelerating like visible matter anyway). The graviton radiation in this model is “off-mass shell” virtual bosons, so it doesn’t slow moving masses down continuously by drag or heat them up; it is not the same thing as LeSage’s problematic “on-mass shell” radiation or gas pressure shadowing (which is to quantum gravity what Lamarck’s theory was to Darwin’s evolution, or what Aristotle’s laws of motion were to Newton’s, i.e. mainly wrong). In other words, the source of gravity and dark energy is the same thing: spin-1 bosonic gravitons. Spin-2 gravitons are a red-herring, originating from a calculation which assumed falsely that gravitons either would not be exchanged with distant masses, or that any effect would somehow cancel out or be negligible.
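A minimal sketch (Python) of the shadow geometry just described. The particle radius r and separation R used below are purely illustrative placeholders, not values asserted in the text; the point is only the (1/4)(r/R)^2 shadow fraction and the way it scales the total inward force:

import math

def shadow_fraction(r, R):
    # Fraction of a particle's surface area, 4*Pi*r^2, shadowed by a similar
    # particle of cross-section Pi*r^2 at distance R: equals (1/4)*(r/R)^2.
    return (math.pi * r**4 / R**2) / (4.0 * math.pi * r**2)

F_inward = 2e43      # total inward graviton force in N, from Fig. 1
r = 1.0e-57          # hypothetical effective particle radius in metres (illustrative only)
R = 1.0              # hypothetical separation in metres (illustrative only)

f = shadow_fraction(r, R)
print(f"shadow fraction = {f:.3e}")
print(f"net gravitational force = {f * F_inward:.3e} N")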

The effective cross-sectional size of any real (on-mass shell) fundamental particle for fundamental force field quanta interactions is equal to the cross-sectional area of its black hole event horizon, which has radius 2GM/c^2. This assertion is not just an ad hoc assumption to force the gravity model to correctly predict cosmological acceleration or the accelerative fall of an apple; it is easily and independently proved to be the correct cross-section to use by the fact that the emission of pseudo (off-shell) “Hawking radiation” from black hole electrons will produce a force similar to that of Coulomb’s law. A black hole electron has a Hawking radiation temperature of 1.35*10^53 K, which by the Stefan–Boltzmann law radiates 1.3*10^205 W/m^2. For the black hole event horizon surface area, this gives a power output of 3*10^92 watts. The absorption of energy E implies momentum gain E/c, and when that absorbed energy is re-emitted back in the direction it came from, there is an additional recoil which imparts similar momentum, so that the “reflection” like process in the exchange of gauge bosons of energy E imparts total effective momentum p = 2E/c, thus generating force, F = dp/dt ~ (2E/c)/t = 2P/c, where P is the radiating power (J/s). Hence F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N, which is larger than the total graviton exchange force (F = ma = 7*10^43 N) by a factor around 10^40, a correct prediction of the ratio of Coulomb to gravitational force coupling parameters, thus proving the validity of the assumption that the cross-section of a fundamental particle is its black hole event horizon size.
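A minimal check (Python) of the order-of-magnitude figures quoted in the paragraph above, using standard values of the constants; the exact outputs differ slightly from the quoted numbers (rounding and the choice of inward-force figure), but the ~10^40 to 10^41 scale of the force ratio comes out the same way:

import math

hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
k_B   = 1.381e-23    # J/K
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
m_e   = 9.109e-31    # electron mass, kg

T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)  # Hawking temperature, ~1.35e53 K
r = 2 * G * m_e / c**2                           # event horizon radius, ~1.4e-57 m
P = sigma * T**4 * 4 * math.pi * r**2            # radiated power over the horizon area
F = 2 * P / c                                    # "reflection" force 2P/c
F_gravity = 2e43                                 # inward graviton force from Fig. 1, N

print(f"T = {T:.2e} K, P = {P:.2e} W, F = {F:.2e} N")
print(f"F / F_gravity = {F / F_gravity:.1e}")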

(There is still another way to understand this ratio in the new theory. Electromagnetism is mediated by electrically charged massless gauge bosons, the usual infinite self-inductance objection being averted by the cancellation of the magnetic field vectors in the two-way exchange between charges. The universe contains an equal number of positive and negative charges, randomly distributed. Any radial line from us outward across the universe will thus on average hold an equal number of positive and negative charges, nearly causing cancellation, although 50% of such paths will contain an odd number of charges, where the odd charge causes a tiny resultant force. But a “drunkard’s walk” summation will also occur, in which the vector summation of all the little effective “capacitors” composed of hydrogen atoms at any instant – positive nucleus and negative electron with a vacuum dielectric between and around them – add up to give a net resultant equal to the charge of one atom multiplied by the square root of the number of hydrogen atoms in the universe. With 10^80 atoms you get a multiplication factor of 10^40. The drunkard’s walk on average involves as much divergence as convergence effects with distance, with these factors simply cancelling out. This field quanta path integral approach offers a physical mechanism for the large difference in field strength between electromagnetism and gravitation.)
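A minimal toy simulation (Python, standard library only) of the square-root scaling invoked above: the average magnitude of the sum of N randomly signed unit charges grows like sqrt(N) (more precisely about 0.8*sqrt(N)), which is the scaling that turns 10^80 atoms into a factor of order 10^40. The 1-D sum below is only an illustration of the statistics, not a model of the actual field geometry:

import math
import random

def average_net_charge(n_charges, trials=100):
    # Average |sum| of n randomly signed unit charges, over a number of trials.
    total = 0
    for _ in range(trials):
        total += abs(sum(random.choice((-1, 1)) for _ in range(n_charges)))
    return total / trials

for n in (100, 10_000, 100_000):
    print(f"N = {n:>7}: average |net charge| ~ {average_net_charge(n):8.1f}  (sqrt(N) = {math.sqrt(n):7.1f})")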

Since the outward force – predicted by Newton’s 2nd law from cosmological acceleration – must by Newton’s 3rd law be balanced by an equal and opposite inward force, the energy in the “explosive” outward cosmological acceleration driving the outward expansion of the universe is equal to the inward directed “implosive” gravitational potential energy of the universe, or E = Mc^2 = M^2*G/R, where R = ct is the average distance masses in the universe would have to fall to revert the universe into a singularity. This is one simple way, based on explaining John Hunter’s hypothesis E = Mc^2 = M^2*G/R, for producing Riofrio’s equation for G. The same result comes from the quantum gravity shadowing model above.

Put another way, the reason why the universe expanded at all, instead of being a gravitational singularity, is that classical general relativity is just an approximation to the quantum gravity mechanism. While there is a definite balance between the net outward force of cosmological expansion and inward gravitational force, the quantum gravity mechanism explains how expansion predominates over the greatest distance scales, and gravitational “attraction” predominates over smaller distance scales. This is consistent with observations.

Black hole event horizon radius R = ct = MG/c^2, hence Riofrio’s equation GM = tc^3
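For readers who want the algebra spelled out, here is the rearrangement as I read it, written in LaTeX; it assumes nothing beyond the two relations already stated (E = Mc^2 = M^2G/R and R = ct):

E = Mc^2 = \frac{GM^2}{R} \;\Rightarrow\; R = \frac{GM}{c^2}; \qquad \text{setting } R = ct \;\Rightarrow\; ct = \frac{GM}{c^2} \;\Rightarrow\; GM = tc^3 .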

Riofrio starts with GM = tc^3 and assumes that GM is constant, so that c = (GM/t)^(1/3), i.e. the velocity of light falls with time. This one option should not be promoted as if it is the only possibility, because nature doesn’t always conform to what initially appears the most obvious solution. Riofrio’s equation has other possibilities, such as a simpler time variation of M with t. A still simpler possibility is that G is directly proportional to t, which a successful theory of quantum gravity proves correct. Consequently, the gravitational coupling G was about 45,000 times smaller when the cosmic background radiation was emitted (0.3 Myr after the big bang) than it is today (13.7 Gyr), showing why the fluctuations were so much smaller at that early time than models of galaxy evolution require when assuming constant G. In other words, the small size of G at early times suppresses the rate at which matter clumps together, doing exactly the same job as so-called ‘inflation’ (which reduces curvature by faster-than-c exponential expansion near the Planck time), without the epicycles. Dirac in the 1930s investigated the wrong ‘large numbers hypothesis’ that G is inversely proportional to t, so G starts off very big (unified with electromagnetism) and decays with time as the universe expands. Teller in 1948 disproved Dirac’s assumptions by pointing out that a varying G will affect the compression of protons that overcomes Coulomb’s law inside the sun, allowing fusion. Hence, Teller showed that under Dirac’s assumption the sea would have been boiling at the time of the Cambrian explosion! However, Teller’s argument is null and void if electromagnetism and gravity are unified in such a way that – instead of having equal couplings at high energy or early times – their couplings differ due to a simple mechanism and always remain in the same ratio (~10^40), both increasing linearly with time. This allows both Coulomb repulsion (opposing fusion) and gravitational compression (allowing fusion) to increase with time, so that the net fusion rate is unaffected in stars, the first three minutes, etc.
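A one-line check (Python) of the “45,000 times smaller” figure above, which is just the ratio of the two ages quoted, assuming G is directly proportional to t:

t_now = 13.7e9   # age of the universe today, in years (quoted above)
t_cmb = 0.3e6    # age at emission of the cosmic background radiation, in years (quoted above)
print(f"G_now / G_cmb = t_now / t_cmb ~ {t_now / t_cmb:,.0f}")  # ~45,700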

Fundamental unification research and the Standard Model

Above: the fermions (leptons and quarks) of the first generation in particle physics, arranged for unification. As shown in this blog post, the downquark is a disguised electron. The difference in apparent electric charges between, say, the electron and the downquark, 1 – 1/3 = 2/3 of a unit, tells us that 2/3 of the electromagnetic field energy of the downquark is being converted into short-range nuclear binding energy and mass by the exchange of the virtual particles (created by pair production and polarization of the vacuum from the energy of the electric field) between the confined quarks.

This is needed to properly predict the break in the Standard Model symmetry pertaining to the SU(3) colour charge which only applies to quarks, not leptons. The explanation is the use of shielded field energy in vacuum polarization of pair production fermions which have accompanying vector bosons. The other two generations of particle physics have identical charges to those in the table above; they are merely more massive, occur in shorter lived particles, and have different names. The heavier versions of the electron and neutrino are the muon and tauon and their neutrinos, while the heavier versions of the down quark are the strange and bottom quarks; the heavier versions of the up quark are the charm and the top. The heavy version of the down quark called the strange quark is very important in understanding the mechanism: three strange quarks occur together in the omega minus baryon, which has a net charge of -1. Because the electric fields from the three similar charges are all superimposed, the shielding of the quark charges in the omega minus by the polarized pair production virtual fermion cloud should be three times stronger than in the case of an electron, and this will attenuate each of the apparent quark electric charges by 3 times as much as the polarization of the vacuum around an electron core does. This is like putting three blankets on your bed instead of one; the core charges of strange quarks are therefore 3 times higher than you naively expect when simply dividing the total shielded charge observed (-1) into 3. You should multiply that naive result by 3 again, to undo the extra shielding. The result is that strange quarks have a relative core charge of -1 each, just like the electron, and the apparent strange quark electric charge of -1/3 is due to ignoring the fact that the polarized vacuum for three similar electric charges is cumulative due to overlap. As a simple analogy, if you have a radioactive source behind a lead shield and measure an attenuated radiation level of 1 unit, this can represent a single electron with its vacuum polarization. If you now triple the amount of shielding by adding two more sets of sources and shields, but making sure that the shields add up so that the radiation must then penetrate 3 times as much shielding, you won’t increase the observed (shielded) radiation level because the increase in source strength is being offset or negated by the increase in shielding where each shield overlaps the other shields and increases the overall attenuation factor.

Therefore, in the case of the omega minus you know (1) that you’ve tripled the amount of core electric charge shielding by vacuum polarization in having three identical charges together, sharing their overlapping electric fields and associated vacuum pair production and polarization shields, and (2) that you’ve got the same shielded value of the electric charge that you have for an electron with just a single vacuum polarization shield due to a single core. Therefore, you can expect the strange quark’s effective charge to be the same as that of the electron, -1, and to merely appear to be -1/3 due to having three times as much shielding from the overlap of the electric fields from the three strange quarks in the omega minus baryon. The apparent discrepancy between the -1/3 charge of a down, strange or bottom quark and the -1 charge of a lepton (electron, muon, or tauon) is therefore a physical artifact from the vacuum polarization overlap effect in shielding the core charges differently for a single core (lepton) and triplet core (omega minus). We will discuss the mechanism for the +2/3 charges of up, charm and top quarks later on in detail. Basically, as the table above shows, what happens is that while the downquark is a modified electron, the upquark is related to the neutrino.

I summarized Dr Woit’s issues with the Standard Model on the earlier post linked here. Marni Sheppeard and Carl Brannen have made some progress in developing mathematical models for the Standard Model’s CKM parameters. I want to do two things in this post. First, I want to explain what the CKM parameters are and why they are so important in physically understanding the Standard Model and the nature of everything that exists. Secondly, I want to outline my research on this subject, which ties the left-handed weak interaction to the observed excess of matter over antimatter in the universe (the Standard Model is simplistic in making the weak force left-handed by only including left-handed and not right-handed neutrinos; this is an ad hoc way of constraining the weak interaction to particles with left-handed spin, and is not the whole story).

1. The CKM parameters

The first of these was discovered by Cabibbo (the C in CKM) in 1964. The weak force, like other gauge interactions, is a force due to the exchange of field quanta between charges. Because the field quanta (vector bosons) have rest mass, they travel slower than the velocity of light and can carry a net flow of charge, so that a fermion interacting with a charged vector boson can gain or lose charge, transforming it. A neutrino is a lepton with very little mass and no net electric charge, only weak charge. In ordinary beta radioactive decay, such as that of the fission products from radioactive waste or fallout, most of the energy is actually carried off by an anti-neutrino, not by the beta particle. The neutrino is a very important particle, but it has always been controversial and poorly presented in popular journalism. Wolfgang Pauli first suggested it in 1930, when it was known that most of the energy lost in beta decay was not being detected, proposing that the missing energy was carried by a weakly interacting neutral particle that could be detected if sufficiently strong beta sources were available (they were first detected using a nuclear reactor; even though they are incredibly weakly interacting, if you simply have a strong enough source then statistically you will get some interactions). Bohr preferred a rival wishy-washy (non-checkable) theory in which the uncertainty principle controls the conservation of energy, which is only conserved on the average for all interactions, so that in the specific example of beta decay, energy is simply not conserved. Like Born’s application of the uncertainty principle to social politics, Bohr’s idea was not even wrong. Murray Gell-Mann complains in his autobiography that the 1950s New York Times science editor, William L. Laurence (a very important figure in the history of the nuclear age, being the only journalist present at the first nuclear test and the nuclear bombing of Nagasaki), did not take the neutrino seriously because it was so hard to detect due to being so weakly interacting. However, they have been discovered, and they are a fact.

A neutrino can be converted into an electron plus a W+ weak boson, which can annihilate each other back into a neutrino. This is analogous to the well known pair-production process with gamma rays (electromagnetic radiation) in electromagnetism, where you use lead to shield gamma rays because the gamma rays that pass near a heavy lead nucleus experience a strong electromagnetic field which can convert the gamma ray into an electron + positron “pair” if the gamma ray exceeds the energy corresponding to the rest mass of the pair (1.02 MeV). This is a more effective form of radiation shielding than Compton scattering, where gamma rays lose energy in scattering off orbital electrons. Pair-production can also occur when a gamma ray enters the electromagnetic field of an electron, but this is far less likely than pair production near a lead nucleus where there is a much stronger electromagnetic field present from the 82 protons in its nucleus (pair production probability is strongly dependent upon the strength of the electromagnetic field, i.e. the amount of charge).

Rearranging this neutrino pair production interaction, it is possible for an electron to be converted into a neutrino plus a weak W gauge boson. This interaction is related to the beta decay predominating in the mundane fission product nuclear reactor waste and bomb fallout, where a downquark in a neutron is converted into an upquark (converting the neutron into a proton) by the emission of an antineutrino and a beta particle (electron). This is pretty interesting physics for several reasons. For one thing, nobody has ever seen a free proton decay into a neutron. Free neutrons decay into protons with a half life of about 10.3 minutes, but free protons don’t decay by positron emission into a neutrino, a positron and a neutron!

Remember, I’m talking of free neutrons and protons (hydrogen-1 is effectively a free proton from the perspective of nuclear interactions, but protons in every other type of nucleus are not free), not the “bound states” of those particles inside nuclei where they are within range of the weak and strong fields from other nucleons. Bound neutrons can be prevented from decaying or have their half-lives increased or shortened from 10.3 minutes, while many instances of proton decay in bound states exist, e.g. carbon-11, potassium-40, nitrogen-13, oxygen-15, fluorine-18, and iodine-121. These examples of proton decay are important in medicine, because the patient can be given a positron emitter, and the positrons emitted by the proton decays have a short range in the body, being quickly annihilated into a pair of gamma rays moving in opposite directions. Detectors outside the body can identify many of these simultaneously emitted gamma rays, allowing the position of the emitting atom to be determined by nanosecond timescale resolution of the gamma ray detections by scintillation counters. This data is then plotted on a three-dimensional virtual image of the body in a computer, allowing the internal distribution of uptake of the positron emitter by the body tissues to be displayed, which gives a picture of the internal body tissues which is more detailed than you can achieve with X-rays. Of course, the body is irradiated with gamma rays in the process, but this “positron emission tomography” (PET) scanning method is very important in modern medicine. My purpose in outlining the method is to try to convey a sense of the reality of positron emission by decaying protons. It’s too easy to write about particle physics in a way that seems totally abstract and irrelevant to daily life, when in fact of course particle physics is the ultimate basis for life, the universe and everything.

Going back to quark decays involving neutrinos and charged W bosons, the rate of these weak interactions was observed by the Italian physicist Nicola Cabibbo to be identical, to within 1 part in 25, to the rate when the weak force acts on leptons, as in the conversion of an electron into a neutrino plus a W. To be precise, the weak force acting on quarks was 1 part in 25 weaker than between leptons, i.e. it was 24/25 that of the weak force between leptons. This led to the concept of “universality”, i.e., a deep similarity between leptons and quarks.

Furthermore, the small 1 part in 25 discrepancy had a simple explanation. Cabibbo noticed that the weak interaction strength is only 1/25 as strong when quarks of different generations are involved. Upquarks and downquarks, and their antiparticles, are one generation of quarks. Charm and strange are the second generation, while top and bottom are the third generation of quarks, the most recently discovered (there is evidence from particle interactions that there cannot be more than these three generations, or the additional interactions would have shown up in certain types of interaction).

“This inspired him [Cabibbo] to the following insight into the nature of the weak interaction acting on quarks and leptons. It is as if a lepton has only one way to decay, whereas a quark can choose one of two paths, with relative chances of A^2 = 1/25 and 1 – A^2 = 24/25, the sum of the two paths being the same as that for the lepton. Today we know that this is true to better than one part in a thousand. This one part in a thousand is itself a real deviation from Cabibbo’s original theory, and is due to the effects of a third generation, which was utterly unknown in 1964.”

– Professor Frank Close, The New Cosmic Onion, Taylor and Francis, N.Y., 2007, p. 158.

In other words, for quarks the overall sum of weak interaction coupling over all three generations is identical to that for leptons, but if you are only dealing with one generation of quarks, the coupling appears to be weaker than for leptons by the proportion of the coupling which applies to the interaction.

A good example of the important effect of the flavour is the change of flavours of neutrinos from the sun. The nuclear fusion of protons in the sun releases only one flavour of neutrino, namely electron neutrinos. Hence, throughout the 1980s and 1990s efforts were made to measure the electron neutrinos from the sun, to see if the emission rate is that predicted by the Standard Model of particle physics. They consistently found that only 1/3rd of the amount of electron neutrinos predicted to arrive at the Earth could be detected. During the 1980s, the consensus of opinion was that either the fusion model for the sun was wrong (e.g., the core temperature of the sun or the type of fusion fuel it uses was different to that expected, which seems unrealistic because observations on the other particles emitted from the sun and on the spectroscopy, radiant power and mass of the sun strongly constrain the range of possibilities for what is going on deep inside it), or that the electron neutrino detectors were somehow mis-calibrated (which seems unrealistic, because you can easily stick a strong beta decay neutrino source into them to accurately calibrate them).

So it turned out that the neutrino detectors here on Earth were accurately measuring just one third of the solar electron neutrinos. The explanation is simply that on the way from the sun to the Earth, a journey of about 8.3 minutes for such relativistic particles, the neutrinos were being randomly distributed between the three different flavours, so that by the time they arrived on Earth they had gone from 100% electron neutrinos to 1/3rd tauon neutrinos, 1/3rd muon neutrinos, and 1/3rd electron neutrinos (which alone could be measured by the detectors as designed originally). This was confirmed in 2001 by the Sudbury Neutrino Observatory (SNO) in Canada, which detected all types of neutrinos coming from the sun, not just electron neutrinos.

As a result, the neutrino description in the Standard Model had to be changed, not the theory of the sun or the calibration of the electron neutrino detectors. Neutrinos had to be allowed to change or oscillate in flavour between the three different generations during their passage through the vacuum. This required giving a small mass to the neutrinos, so that they are not completely “frozen” in nature due to relativistic time dilation when travelling at the velocity of light (which is the velocity of totally massless particles). Giving neutrinos small masses allowed them to travel slightly slower than light and thus to oscillate in flavour somehow.

The Standard Model is a numerical prediction system, not a strictly mechanistic theory of particle physics; e.g. it doesn’t specify whether masses are given to particles before or after the Weinberg mixing of U(1) and SU(2) occurs. A particle in the Standard Model is represented by a matrix of properties. E.g., suppose (A|B) is a matrix for A and B (in a matrix, A should be printed vertically above B, but to keep the symbolism simple and non-intimidating for this blog I will just use a vertical line, |, to separate matrix rows; the uppermost row will be given first and separated by “|” from the next row down, and so on). If A is the probability of a particle being a neutrino and B is the probability of it being an electron, then the matrix (1|0) represents a neutrino and (0|1) represents an electron, while (0 1|0 0) can represent the weak boson W+, (0 0|1 0) represents W–, and (1 0|0 -1) represents the W0. These 2 x 2 W+, W–, W0 matrices all have zero traces (the trace is the sum of the numbers on the diagonal from top left to bottom right in the matrix), and such zero trace matrices are the generators of SU(2). SU(N) has N^2 – 1 such generator matrices, so SU(2) represents 3 vector boson descriptions while SU(3) represents 8 gluons.

The clever bit of this matrix description is that you can employ the standard mathematical rules for the multiplication of ordinary matrices, so that when you multiply the matrices for say a W+ and an electron, the mathematics of matrix multiplication gives you the matrix for the neutrino: (0 1|0 0) x (0|1) = (1|0). Thus, multiplying the matrices represents the interaction: W+ + electron = neutrino.

This is interesting because ordinary matrix product is not commutative: it matters in which order you do the multiplication! E.g., when you collide and interact W+ and W–, the multiplication of matrices for W+ and W– gives (1 0|0 0) while multiplying W– and W+ matrices gives (0 0|0 1). The sum of these two different results is the photon of U(1), namely (1 0|0 1), while the difference of the results is the W0, namely (1 0|0 -1). Therefore, the annihilation of W+ with W– produces two possibilities: either an electromagnetic non-traceless U(1) photon, or a traceless SU(2) W0 weak boson.
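A minimal sketch (Python/NumPy) of this matrix bookkeeping, using exactly the state vectors and 2 x 2 matrices given in the two paragraphs above; nothing else is assumed:

import numpy as np

neutrino = np.array([1, 0])            # (1|0)
electron = np.array([0, 1])            # (0|1)

W_plus  = np.array([[0, 1], [0, 0]])   # (0 1|0 0)
W_minus = np.array([[0, 0], [1, 0]])   # (0 0|1 0)
W_zero  = np.array([[1, 0], [0, -1]])  # (1 0|0 -1)

print(W_plus @ electron)                    # [1 0]: W+ acting on an electron gives a neutrino
print(W_plus @ W_minus)                     # [[1 0],[0 0]]: one ordering of the product
print(W_minus @ W_plus)                     # [[0 0],[0 1]]: the other ordering differs
print(W_plus @ W_minus + W_minus @ W_plus)  # [[1 0],[0 1]]: the non-traceless U(1) "photon" matrix
print(W_plus @ W_minus - W_minus @ W_plus)  # [[1 0],[0 -1]]: the traceless W0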

One thing that should be noted about the new symmetry is that right-handed neutrinos exist in it, which they don’t in the existing standard model. Right-handed neutrinos are however currently postulated as being the explanation for so-far unobserved “dark matter” as indicated by the discrepancy between the models and observations for the rotation rates of the arms of spiral galaxies, etc. (Attempts to detect other forms of postulated dark matter which do interact by non-gravitational interactions have failed; see the results from the XENON100 dark matter experiment.) Such right-handed neutrinos are non-interacting in the conventional sense as they possess no weak charge, but they can have mass and have been called “sterile neutrinos”. (The Scientific American has an article on it where it falsely tries to link evidence to stringy lies.)

2. The relationship between the left handedness of all weak interactions and the excess of matter over antimatter in the universe

The Weinberg mixing angle was discussed in previous posts. Basically, the Weinberg mixing angle would be equal to zero if U(1) were electromagnetism and SU(2) were the weak interaction. It isn’t. There is a significant “mixing” of U(1) and SU(2) gauge theories needed. The photon of electromagnetism and the Z0 weak neutral boson are both results of mixing of the hypercharge U(1) boson B with the W0 SU(2) boson. This electroweak mixing is quite separate from the speculative unification of electroweak interactions at high energy using a Higgs field which makes the SU(2) bosons massless above the supposed unification energy, so they have the same interaction rate as photons, making the weak force of identical strength with electromagnetism and thus unified with it in the current paradigm.

As explained in previous posts, both of these mainstream ideas, mixing and unification via equality of force coupling parameters at high energy, are speculative and cover up a simpler approach to electroweak unification which includes quantum gravity. Instead of the mixing producing the photon of electromagnetism, it produces a spin-1 graviton, so the hypercharge U(1) gives mass (the charge of quantum gravity), while electromagnetism is mediated by massless versions of the W+ and W– which can only be exchanged in a perfect equilibrium due to being massless (the self-induction magnetic field problem prevents them from propagating along any one-way path in the vacuum, so they can only travel in both directions in perfect equilibrium between charges, thus cancelling out the magnetic field curls and related self-inductance which normally prevent the propagation of charged massless radiations). So although they are technically governed by Yang-Mills field equations due to their charge (rather than the Maxwell equations), the physical circumstances in which they can be exchanged prevent any imbalance in equilibrium and thus prevent any net delivery of charge; the Yang-Mills term for charge transfer by the field is always zero for massless charged bosons, reducing the Yang-Mills equations effectively to Maxwell’s equations.

Furthermore, because we have U(1) based quantum gravity and SU(2) electromagnetism from this, we have mass generated as the charge of quantum gravity and don’t have a separate Higgs field. The electroweak symmetry is permanently broken regarding strength; it doesn’t numerically unify in coupling strength via the weak gauge bosons becoming massless at high energy. The couplings run with energy for a different reason, namely energy conservation in the shielding of the electromagnetic field by vacuum polarization. As the electromagnetic field energy is used for pair production, the field is screened, i.e. weakened, by the absorption of energy by the polarization of the virtual fermion pairs. The polarization draws the positive and negative virtual pairs further apart, increasing the time taken on the average for their annihilation back into field quanta. Hence, the electromagnetic field energy they absorb in becoming polarized is effectively being converted into vacuum field fermions which are accompanied by their own gauge bosons, and which exist for a longer time in stronger electromagnetic fields which polarize them more effectively, drawing apart the oppositely charged virtual fermions for a longer average time before they can annihilate back into field quanta. While they exist, the energy used in this way creates the gauge bosons that accompany each type of virtual fermion, and these gauge bosons are available for mediating fields between the long-lived particles whose electromagnetic field energy is being used to produce and polarize the virtual fermions. In other words, the field quanta of the virtual fermions resulting from the pair production and polarization phenomena in electromagnetic fields can mediate and indeed represent the weak and strong fields. Conservation of energy then allows you to use the variation in the electromagnetic running coupling with distance to predict the energy density of the attenuated electromagnetic field in the short ranged polarization region which is available to produce the weak and strong fields by this mechanism.

Now for the link between the left-handedness of the weak interaction and the fact that matter predominates over antimatter in the universe. Both of these things are asymmetries. There seems to be a linkage between the electromagnetic model discussed above, i.e. using massless charged SU(2) bosons with an appropriate mixing with U(1) hypercharge to produce electromagnetism, and the handedness. This is because we must get rid of the ubiquitous Higgs field as a mass generating mechanism that gives mass to all of the SU(2) bosons at low energy, and to none of them at high energy. This Higgs model is the wrong approach to the search for symmetry in nature; it takes a path of least resistance, failing to achieve anything really known for sure or even to make falsifiable predictions, as Woit points out. The Higgs field is a weak point in the Standard Model. What we want instead is the acquisition of mass not by a Higgs field, but as the charge of a quantum gravity U(1) Abelian symmetry. In the previous post and other posts we have discussed the failure of general relativity and the propaganda spread by relativists about the need for spin-2 gravitons. We explained that the deflection of light by a glass lens does not prove that light photons have a net electrical charge; they are neutral. Yet neutral photons can still be deflected by interactions with the electromagnetic fields in glass. Similarly, gravitons don’t need to have gravitational charge in order to be deflected by gravity, just as photons don’t need to have electric charge in order to be deflected by electromagnetic fields in a block of glass.

What we have therefore is a model in which the coupling of mass from U(1) quantum gravity to the SU(2) charged bosons only occurs to some of those bosons, creating the weak bosons; the remainder of the charged SU(2) bosons don’t acquire mass from mixing with U(1) and instead propagate as massless electrically charged bosons which constitute the signed (not neutral) electric fields observed around positive and negative charges. This explains physically why the U(1) force-carrying photon needs 4 “polarizations”, unlike the directly observed photon which has only 2 polarizations (electric and magnetic field planes): the extra 2 polarizations of the gauge photon are simply positive and negative charges.

Now, why do only some of the SU(2) charged bosons gain mass from mixing with U(1), instead of all of them? This appears to be related to the left-handedness of the weak interaction for the massive SU(2) gauge bosons. In the Standard Model, the weak force is made left-handed by means of only including neutrinos of left-handed spin. However, nobody has ever observed the spin of a neutrino. They’re so weakly interacting it is hard enough to detect them at all, let alone determine their spin. They’re detected using detectors resembling, for instance, swimming pools full of transparent dry cleaning fluid with which they can interact, producing flashes of light which can be picked up with photomultipliers and identified from background noise by calibration. You can’t pick up neutrinos in conventional high energy physics detectors like those used in colliders or ordinary laboratory radiation detectors. So you can’t justify the assumption that the left-handedness of the weak force is really due to neutrinos only existing in a left-handed variety. It could be that there is a more interesting and helpful explanation. (You can see here that we’re relying on the weak assumptions in the mainstream theory. Most people seem to be unaware that there is any possibility of different parts of the Standard Model having different levels of reliability and justification. It is more convenient for many people to accept the whole thing without question or to ignore it entirely; in other words to treat modern physics as an all-or-nothing religion, rather than with a constructively critical skepticism for the guesswork areas and more respect for those parts which have direct experimental confirmation.)

The explanation seems to be that the left-handedness of weak interactions is related to the excess of matter over antimatter in the universe. The vast majority of the matter observed in the universe is hydrogen, i.e. the fermionic matter consists mainly of two upquarks for each downquark and for each electron! Now if you just look at this fact and want symmetry, you come up with the idea that, in order to explain the Cabibbo “universality” discussed above, i.e. the similarity of the way weak interactions act on leptons and quarks, the downquark is a disguised electron, or, vice-versa, the electron is a modified quark that can’t fit into the proton but has to orbit it. This sounds crude, but the vacuum polarization model neatly explains the -1/3 charge of the quarks in the omega minus baryon, a vitally important particle in the history of physics, since it has three strange quarks each with -1/3 charge, a situation where at least 2 of the 3 quarks would have similar spin and thus identical quantum numbers, which would be prohibited by the exclusion principle unless quarks also have another kind of charge, colour charge, giving another quantum number.

As already explained in some previous posts, the omega minus contains three quarks each with similar amounts of negative charge. However, the amount is not necessarily -1/3. Nobody has isolated, or can (even in principle) isolate, a quark to measure its charge. It is only supposed to be -1/3 from the fact that the triplet of identical charges adds up to -1, so you divide by 3. However, this is physically at odds with the facts known about vacuum polarization in shielding the core charge of a quark or lepton. The vacuum polarization is proportional to the field strength.

The core charge of a quark is not going to be -1/3 because this is just the average of the low-energy net charge per quark observed at long distances (low energy) for the omega minus. As you get closer to an electron or an omega minus hadron, the electric charge begins to “run”, i.e. the coupling for electromagnetism increases. The bare core charge can be shown to be about 137.036, or 1/alpha, times the low energy charge, i.e. the charge observed at distances beyond which the vacuum doesn’t polarize. The vacuum polarizes, as shown by Schwinger’s formula, only out to a distance from an electron or omega minus where the electric field strength is above 1.3 x 10^18 volts/metre, i.e. out to a radius of 33 femtometres from an electron or an omega minus hadron, but not beyond that radius.
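A minimal check (Python) of the 33 femtometre figure: the radius at which the Coulomb field of a single unit charge falls to the 1.3 x 10^18 volts/metre threshold quoted above, using standard values of e and epsilon0:

import math

e           = 1.602e-19   # elementary charge, C
epsilon0    = 8.854e-12   # vacuum permittivity, F/m
E_threshold = 1.3e18      # Schwinger pair-production threshold field, V/m (quoted above)

# Solve e/(4*Pi*epsilon0*r^2) = E_threshold for r:
r = math.sqrt(e / (4 * math.pi * epsilon0 * E_threshold))
print(f"r = {r:.2e} m = {r * 1e15:.1f} femtometres")  # ~33 fm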

As explained in the previous post, most quantum field theorists are so far gone into fairy-land mathematical abstraction that they’ve lost all understanding of physical mechanism and don’t seem to even grasp the fact that Schwinger’s electric field strength threshold for pair production is the important mechanism for the IR cutoff on the running coupling for electric charge. When I tried to raise the point with Tommaso that there is a simple physical demonstration that the bare core charge of a particle (i.e. the running coupling at the upper energy or short range limit, namely the UV cutoff) is 1/alpha times the low energy asymptotic value of the electronic charge (i.e. the running coupling at the low energy limit or IR cutoff, the textbook value for the electronic charge), he had no interest. Hence, you cannot get anywhere by trying to present advances piecemeal. They’re not interested. You have to assemble all of the corrections to the mainstream understanding together as a complete picture, not a dribble of bits and pieces which are technical and not particularly convincing when isolated by themselves.

The fact that the bare core charge of an electron is 1/alpha (about 137) times the low energy (below ~1 MeV) value is shown by using the uncertainty principle to model the electromagnetic field. Hence, dp*dx = h-bar. Now by definition, force F = dp/dt = d(h-bar/dx)/dt, where for relativistic gauge bosons dt = dx/c, so

F = c*d(h-bar/dx)/dx = c*h-bar/x^2

which is a factor of 1/alpha greater than Coulomb’s law for 2 electrons or 2 omega minus hadrons colliding at low energy.
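A minimal numerical check (Python) of that factor: the ratio of F = c*h-bar/x^2 to the Coulomb force e^2/(4*Pi*epsilon0*x^2) between two unit charges is independent of x and equals 1/alpha, about 137:

import math

hbar     = 1.0546e-34   # J s
c        = 2.998e8      # m/s
e        = 1.602e-19    # C
epsilon0 = 8.854e-12    # F/m

x = 1e-15               # any distance cancels in the ratio; 1 femtometre chosen arbitrarily
F_field   = c * hbar / x**2
F_coulomb = e**2 / (4 * math.pi * epsilon0 * x**2)
print(f"ratio = {F_field / F_coulomb:.1f}")  # ~137, i.e. 1/alpha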

Note that Sir Roger Penrose doesn’t know this logic. In his book The Road to Reality he claims that the bare core charge of an electron is unknown, and guesses it is bigger than the low energy value by the factor of the square root of 1/alpha, i.e. by a factor of 11.7 instead of the true value of about 137. Penrose justifies his guess by the mere numerology of observing that the electric charge is squared in the formula for alpha, so he takes this as an indication that the square root of alpha is related to charge, but this is just a red herring of numerology, since electric charge is dimensionful and alpha is dimensionless. Part of Penrose’s problem with this is that he assumes a Planck length scale UV cutoff (i.e. a vacuum grain size of the Planck length, with no further vacuum pair production or polarization at shorter distances, and thus an end to the running coupling at that scale), which is the mainstream theory and which suggests a bare core charge of roughly what Penrose assumes when the running coupling of electromagnetism is extrapolated to the Planck scale. In fact, as explained in the previous post, the UV cutoff appears to occur at the black hole size scale, which is far smaller (higher energy) than the Planck scale. There’s no physical basis for the Planck scale; it’s just dimensional analysis utilizing Planck’s constant by an egotistical Planck. Planck could just as well have got a much smaller length by the combination of fundamental constants Me*G/c^2, which is the product of the electron mass and G divided by the square of the velocity of light. This is far smaller and thus more fundamental than the Planck scale, and when doubled to produce the black hole event horizon radius for an electron, it is also physically more meaningful (as shown in the previous post). Nobody has any evidence for Planck’s scale, but we have very strong evidence for the black hole scale, shown in the previous post.

The omega minus at low energy has an observable, long-range charge of -1, like the electron. In the case of the electron, the core charge is being shielded by a factor of 137 at low energy (at higher energy, the factor is less than 137, because only a fraction of the polarized vacuum intervenes, e.g. the shielding factor has been predicted and experimentally measured to be 128.5 for 91 GeV collision energy).

But in the case of the omega minus, things are now different. Each quark in the omega minus is contributing to the electric field. The core charge per strange quark need not be 137e/3. It could be 137e, just like the electron. The reason is that the stronger electric field you get from 3 strange quarks each of electron sized bare core charge will make the vacuum polarization 3 times stronger, because the strange quarks are confined within a small region of space and their long range electromagnetic fields overlap over the pair production and polarization region. So the net core charge of the omega minus could be 3*137e, and this three-fold boosted electromagnetic field (compared to an electron) would then simply produce 3 times the amount of shielding due to vacuum polarization than you get around the core of an electron, so that instead of having a 137 shielding factor giving the omega minus a charge of -3, you have a 3*137 shielding factor giving it the observed -1 charge.
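A minimal sketch (Python) of the shielding arithmetic just described, with the bare core charge and shielding factor taken from this model (i.e. these are the model’s own assumptions, not measured values): if each strange quark has a bare core charge of -137e, like the electron’s in this model, and the three overlapping fields triple the vacuum polarization shielding from 137 to 3*137, then the observed long-range charge of the omega minus still comes out at -1e, i.e. an apparent -1/3e per quark:

shielding_single = 137.0                       # shielding factor for a single core (this model)
core_electron = -1.0 * shielding_single        # bare electron core charge in units of e (this model)

n_quarks = 3
core_omega = n_quarks * core_electron          # three strange-quark cores: -3*137 e
shielding_omega = n_quarks * shielding_single  # overlapping fields: 3*137 shielding

print(core_electron / shielding_single)         # -1.0 : observed electron charge
print(core_omega / shielding_omega)             # -1.0 : observed omega minus charge
print(core_omega / shielding_omega / n_quarks)  # -0.333... : apparent charge per quark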

So it is understandable for the charge of a strange quark to be -1 rather than -1/3, and for the apparent observed charge per strange quark to be only -1/3 as a consequence of the cancellation whereby the extra charge in the core produces more shielding due to the stronger electromagnetic field, which cancels out the stronger field at long distances. This model for fractional quark charges being due to vacuum polarization phenomena, while the true quark charge is the same as that for leptons, is only clearly shown in the case of the omega minus. For all other baryons and particularly for mesons (quark-antiquark pairs), nature covers up the mechanism by making sure that three identically charged quarks don’t appear together: in those cases, the fractional electric charge is conserved by the respective weak and strong fields binding those particles together. (We can come up with a simple anthropic explanation for why nobody else has apparently come up with this mechanism based scheme: if someone had, it would already have been explained so we wouldn’t need to address it here and now. However, the explanation for why this was missed until now is probably just down to the groupthink and obfuscation of the mainstream approach to particle physics in general, as mentioned in the previous post. By being outside the mainstream, it is easier to be constructively critical and to learn slowly, testing each step before trusting too much weight upon it.)

The point of all this is that the universe is mainly hydrogen, i.e. the ratio of two upquarks and one downquark per electron. Looking for symmetry here, we want the downquark with its inconvenient -1/3 electric charge to be a camouflaged electron with -1 charge. We accomplish this with a mechanistic argument of vacuum shielding of observed charges by electric fields using the example of the omega minus baryon which contains three strange quarks, each of apparent electric charge -1/3 like its relative the downquark. We argue that the combination of quarks can create electric field polarization phenomena which mask the -1 charge of down and strange quarks, making it appear as a fraction, -1/3, when the quarks are confined in triplets or pairs.

(The alternative model for unifying quarks and leptons would be to make leptons like the electron a composite of three -1/3 charged particles, each with a different colour charge so that the whole lepton is colourless overall. This seems to be the more obvious and attractive scheme to most people. However, it is junk because the structure of three quarks in baryons has been indicated by high energy scattering experiments; no such structure has been revealed for leptons like the electron despite very high energy scattering collisions. The three similarly charged -1/3 preons supposed to exist in an electron would repel one another, so this repulsion would need to be countered by another force such as the strong nuclear force, making it just a duplicate of the omega minus hadron containing 3 strange quarks of -1/3 charge each. It’s not possible to manufacture a useful model of the electron as composed of -1/3 particles without a lot of ad hoc epicycles to suppress conflicts with existing experiments, and a lack of any falsifiable prediction to be checked. It’s junk. The scheme discussed on this blog is different because it makes falsifiable predictions and is based on hard facts such as the conservation of energy for shielded electric fields in the polarization region of pair production.)

Now this model explains the absence of antimatter, because we end up with a different kind of symmetry being fundamental: downquarks are heavily field-disguised electrons. Hence there is a symmetry which is normally hidden: the majority of the matter in the universe is hydrogen, in which vacuum field (strong) confinement has converted two positron-like charges of +1 charge each into +2/3 charged upquarks (the lost 1/3 charge being converted from electromagnetic field energy into short range fields by vacuum polarization phenomena), while two electron-like -1 charged particles exist as the -1/3 (screened) downquark (where 2/3rds of the electromagnetic charge field energy is converted into short range nuclear fields) and the orbital electron around this proton. In other words, 100% of the positive positron-like charges have been trapped, reducing their apparent charges to fractions by the use of electromagnetic energy to produce short-range nuclear fields, while only 50% of the negative charges have been trapped. This difference masks the perfect symmetry between matter and antimatter: there is a perfect symmetry because there are two positron-like charges per two electron-like particles. This perfect symmetry is being hidden by the vacuum polarization confinement effects which make the positron-like particles appear to have +2/3 charge rather than +1 charges, and make the confined downquark appear to have a charge of -1/3.

So there is really no asymmetry between matter and antimatter in the universe; there is just a mechanism (vacuum polarization) which hides the symmetry and misleads mainstream physicists. The only asymmetry involved here is that the two negative charges behave differently: one is confined and the other is unconfined and exists as the orbital electron. Why is this? The clue is that 100% of positive charges are being confined and only 50% of negative charges. This seems to be a chiral effect.

Lack of matter-antimatter asymmetry in the universe

To recap the foregoing discussion, the usual distinction of matter and antimatter is incorrect; the anti-particle of an electron is not a positron but is a disguised upquark; a positron is just an electron with a positive charge. The difference here is real, because it affects the symmetry group responsible for electromagnetism. Instead of having positive and negative charges assumed to be just one charge in a U(1) symmetry (albeit with mixing by the Weinberg angle), where opposite charges are "antimatter" or real particles assumed to be travelling backward in time, what we have is an SU(2) symmetry for two distinct charges: the antiparticle versions of the positive and negative charges are linked to the massive charged isospin fields by the chiral symmetry effect, where the handedness of the spin (left or right) is reversed for antimatter, but is not reversed for a particle of given spin, which can have positive or negative charge independently of its spin. This is why SU(2) needs to be used for electromagnetism directly, rather than U(1) hypercharge which is currently used for electromagnetism in the Standard Model.

This is not to junk U(1) hypercharge. U(1) hypercharge still has a role to play in mixing with the W0 of SU(2) to produce neutral spin-1 massless bosons (gravitons) and the Z0; but the main function of U(1) is now as the gauge group of quantum gravity, which has a charge of only positive sign called mass. The Weinberg mixing of the electrically neutral bosons of U(1) with SU(2) thereby now gives the Z0 mass (gravitational charge), without the need for a separate Higgs field. Instead of a Higgs field being added to the Standard Model to give mass to the weak bosons, we have a quantum gravity U(1) charge mixing with SU(2) to give the weak bosons mass directly. The Z0 boson exists in the polarized cloud of pair-production-annihilation "loops" in the spacetime around every fermion, and these bosons can physically mire the fermion when it moves, by interactions rather like photon pressure, which creates forces of inertia, i.e. conveys mass. This miring is analogous to physical explanations of mass creation by the (generally incorrect) "Higgs field". We borrow from research into an inaccurate Higgs field only the concept that bosons in the vacuum can interact with moving particles to give them inertia, thus explaining mass.

The weak force only acts on left-handed spinors, hence the deep reason for the structure of the universe is that one handedness of electron preons is confined within protons and the other handedness is unconfined. The explanation for why free neutrons undergo beta decay whereas free protons don't is the difference in the mass of up and downquarks. Most of the mass of the neutron (940 MeV) and proton (938 MeV) is the mass of the virtual fermion and virtual boson field, which, as we have explained in previous posts, is something often ignored in attempts to understand the charge of quantum gravity (mass). The masses of the up and downquarks are estimated to be only 2.4 and 4.8 MeV, respectively (the rest mass of the electron is 0.511 MeV). Because the downquark is heavier, neutrons are able to decay into protons, but free protons can't decay because they can't convert a 2.4 MeV upquark into a 4.8 MeV downquark without violating the conservation of mass-energy. Free neutrons are radioactive, unlike free protons, because the downquark is heavier than the upquark, so a downquark is able to decay (losing energy) into an upquark etc. We see the handedness effect most clearly when considering the neutron, where the two downquarks in our model are matched by a neutrino and upquark pair; the neutrino is supposed to be purely left-handed, so there is a handedness effect suggested here. Why do downquarks have twice the mass (gravitational charge) of upquarks? Clearly this is linked to the fact that they have only half of the apparent magnitude of electric charge that upquarks have. There is a balance: upquarks have more electric charge (twice as much) while the downquarks compensate for having half the amount of charge by having twice the amount of mass that an upquark has! It's a beautiful illustration of Feynman's checker board argument for apparent complexity emerging from a deep simplicity which just looks complicated due to various phenomena which distract you from the simplicity if you cannot focus on simple mechanisms and follow up the few strong clues which nature offers to the observer.
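A rough numerical restatement of that energy argument, using the approximate quark and electron masses quoted above (a sketch of the simplified quark-level comparison only; most of the nucleon mass is field energy, as the text notes):

# Python sketch: why d -> u decay is energetically allowed but u -> d is not,
# using the approximate current-quark masses quoted in the text (MeV).
m_up, m_down, m_electron = 2.4, 4.8, 0.511   # MeV, approximate values

energy_released_d_to_u = m_down - m_up       # +2.4 MeV: decay can proceed
energy_needed_u_to_d = m_up - m_down         # -2.4 MeV: forbidden for a free proton
print(energy_released_d_to_u, energy_needed_u_to_d)
# On this simplified picture the released ~2.4 MeV covers the 0.511 MeV electron
# rest mass created in neutron beta decay (the remainder going to kinetic energy
# and the antineutrino), which is why free neutrons decay but free protons do not.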

As we pointed out in the previous post on this blog, the gravitational force coupling is not intrinsically 10^40 times weaker than electromagnetism; it differs from electromagnetism because of the way that the path integral of exchanged field bosons throughout the universe differs for a two-sign charge theory from a single-sign charge gravitational theory. Intrinsically, electromagnetism and gravitation have exactly the same coupling, and the difference in observed coupling is due to the effect of the mechanism by which contributions from the surrounding universe add together. Hence, the relative gravitational charge and relative electric charge are more closely connected than would appear to be the case when just staring at the 10^40 difference and assuming that difference to be intrinsic, rather than a mechanical result of vector summation for all the charges in the universe. In other words, the pertinent symmetry is being hidden by nature, causing mainstream physicists to be confused and unable to see the wood for trees.
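For concreteness (my illustrative numbers, not part of the original argument), the often-quoted "~10^40" figure is just the ratio of the Coulomb force to the Newtonian gravitational force between two fundamental charges; for a proton-electron pair it works out at roughly 2 x 10^39:

# Python sketch: the Coulomb-to-gravity force ratio for a proton-electron
# pair, the usual source of the "~10^40" figure (the distance cancels out).
k = 8.988e9        # Coulomb constant, N m^2 / C^2
G = 6.674e-11      # Newton's gravitational constant, N m^2 / kg^2
e = 1.602e-19      # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
m_e = 9.109e-31    # electron mass, kg

ratio = (k * e**2) / (G * m_p * m_e)
print(f"{ratio:.2e}")   # roughly 2.3e39, i.e. of order 10^39 to 10^40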




The Standard Model, U(1) x SU(2) x SU(3), implies some kind of symmetry between leptons, which ignore SU(3), and quarks, which don't ignore SU(3). Why in the Standard Model do leptons ignore SU(3)? Or to put that another way, what breaks the U(1) x SU(2) x SU(3) symmetry for leptons so that they experience only U(1) x SU(2)? The Standard Model adds a Higgs field to break U(1) x SU(2) symmetry into the weak and electromagnetic forces of different strengths that we see at low energy, but it doesn't contain any mechanism to break U(1) x SU(2) x SU(3) into U(1) x SU(2) for leptons.

Instead, the Standard Model simply sets the colour charges for leptons to zero, with no mechanistic explanation for this break in symmetry for leptons as compared to quarks. It might just as well get rid of the Higgs field for breaking electroweak symmetry, and instead break the symmetry by giving masses (from a gauge group of quantum gravity) to weak field bosons as required, without there being any high-energy symmetry. Notice that the Standard Model doesn't attempt to unify leptons and quarks by making them both have colour charges at high energy, while only quarks have colour charges at low energy.

In other words, instead of the Higgs field applying to electroweak symmetry as is assumed at present, some such mechanism operates to only give the leptons colour charges at very high energy, leaving them colourless at low energy. In this way, leptons and quarks will be unified. We know that at existing particle collider energies, leptons show no evidence of strong interactions, i.e. they appear colourless.

In an effort to make definite predictions and to establish a fact-based model, let's go back to the idea that a hydrogen atom contains a hidden symmetry, with the downquark and electron being together equivalent to the two upquarks. The sum of apparent charges of the two upquarks is +4/3, while the sum of apparent charges of the electron and downquark is -4/3. This is also justified by the downquark and the electron having similar weak isospin charges. E.g., for left-handed downquarks and left-handed spinor electrons, the weak isospin charge in both cases is -1/2, while for right-handed spinor versions the weak isospin charge is 0 in both cases.

When a proton is converted into a neutron (e.g. in positron emission in some bound states), the proton either emits a positron or takes in an electron (both processes being similar on a Feynman diagram), and also either emits a neutrino or takes in an antineutrino. Now consider the neutron together with the neutrino. You then have a symmetry: the two downquarks form one pair, while the upquark and the neutrino form the other. In the Standard Model, neutrinos only occur with left-handed spin. The left-handed neutrino has the same weak isospin charge (+1/2) as the left-handed upquark. The total electric charge for the neutrino plus upquark pair is the upquark's charge of +2/3, equal in magnitude but opposite in sign to the sum of the charges of the two downquarks. So the neutrino and the left-handed upquark form a pair with identical weak isospin but different electric charge, with the total electric charge being equal in magnitude but opposite in sign to the sum of the electric charges of the two downquarks. Specifically, the total charge of the pair is carried by the upquark. So it seems that one way of explaining why the upquark has a charge of +2/3, unlike the -1/3 charge of the downquark, is that the upquark comes in a doublet with the neutrino and carries all of the electric charge of the doublet, instead of sharing it democratically with the neutrino.
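A compact tabulation of the bookkeeping in the last two paragraphs (the apparent charges and weak isospin values are the standard ones quoted above; the pairing interpretation is this post's):

# Python sketch: the charge and weak-isospin bookkeeping described above.
from fractions import Fraction as F

# (apparent electric charge, weak isospin of the left-handed state)
particles = {
    "up quark":   (F(2, 3),  F(1, 2)),
    "down quark": (F(-1, 3), F(-1, 2)),
    "electron":   (F(-1),    F(-1, 2)),
    "neutrino":   (F(0),     F(1, 2)),
}

# Proton-side pairing: electron + down quark versus the two up quarks.
print(particles["electron"][0] + particles["down quark"][0])   # -4/3
print(particles["up quark"][0] * 2)                            # +4/3

# Neutron-side pairing: neutrino + up quark versus the two down quarks.
print(particles["neutrino"][0] + particles["up quark"][0])     # +2/3
print(particles["down quark"][0] * 2)                          # -2/3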

Concluding word

I've included some ideas in this post which are developments of older ideas. Originally in May 1996 I came up with a model for quantum gravity predictions which predicted cosmological acceleration correctly, although I received a lot of flak for the presentation and arguments used when they were published in EW and later on a Physics Forums discussion site. My original idea was to modernize the LeSage pushing gravity idea by the use of spin-1 gravitons instead of on-shell gas particles. The original calculation in May 1996 was quite different. I differentiated the observed Hubble recession velocity with respect to time and interpreted the result as an acceleration, a = dv/dt = d(HR)/dt = H*dR/dt + R*dH/dt = Hv + R*0 = Hv = H(HR) = H^2*R = H^2*cT = Hc (taking H = 1/T for flat spacetime, as observed), where H is Hubble's "constant" and T is the age of the universe. Although clearly the time t is not time progressing from the time of the big bang but is the travel time of light coming from stars located at distance R = ct from us, there is a simple relationship between the time measured from the big bang and the time in the formula R = ct, where R is the distance of an object we see and t is the time light takes to reach us from that object (see the About page on this blog), and this gives the same type of prediction for cosmological acceleration. Once that acceleration prediction was made in May 1996 and submitted to EW, which published it via the letters page of the October 1996 issue, the acceleration was used to predict gravity. Newton's 2nd and 3rd laws (F = ma and F = -F') do this: matter accelerating radially outward from us is accompanied by an inward radial force mediated by gravitons, causing the curvature in general relativity by the Feynman "excess radius" effect, illustrated neatly by Glasstone and Dolan in their diagram of "implosion" (The Effects of Nuclear Weapons, 1977, chapter 1); LeSage's mechanism, corrected from on-shell to off-shell gauge boson radiation, does the rest. However, I preferred to argue using a physical mechanism, not Newton's empirical laws of motion. I considered space as filled with either a pair-production "sea" or some kind of "Higgs field" which was like the perfect fluid source term (the T_uv stress-energy tensor) which falsely fabricates a smooth, continuous distribution of mass-energy as the source of gravity (a damnable lie, because matter is quantized and discontinuous; features that general relativity can't use because you can't get smooth curvature using differential equations from discontinuities, so general relativity is a lying, overhyped classical physics fiddle). My mechanism was that the perfect sea would have to fill in volumes being vacated by moving particles. If a man or submarine of volume A moves at speed B, then a volume A of air or water will move in the opposite direction at speed B simply in order to fill in the void being vacated. You don't get a vacuum formed behind you when you walk down a corridor and air pressure piling up on your front face (unless you move at sound velocity, so the air can't get out of your way fast enough, and a shock wave of compressed air develops which is hard to accelerate beyond; analogously to mass increase in relativistic motion when approaching the speed barrier of c). This mechanism of an equal volume of spacetime fluid moving in the opposite direction at the same rate is effectively equivalent to Newton's 3rd law when applied to the gravity mechanism. However, nobody wanted it.
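Putting a number on that a = Hc prediction (my illustrative evaluation, using a round present-day value of the Hubble parameter as an assumption; the 1996 figure would differ slightly):

# Python sketch: the predicted cosmological acceleration a = H*c for a
# round, assumed value of the Hubble parameter.
H_km_s_Mpc = 70.0                      # assumed Hubble parameter, km/s/Mpc
Mpc_in_m = 3.086e22                    # metres per megaparsec
H_si = H_km_s_Mpc * 1e3 / Mpc_in_m     # convert to 1/s, about 2.27e-18
c = 2.998e8                            # speed of light, m/s

a = H_si * c
print(f"H = {H_si:.2e} /s, a = H*c = {a:.1e} m/s^2")   # roughly 7e-10 m/s^2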
My initial belief was that I would not have the time to develop the theory, and that someone else in the mainstream would take it up and apply it to quantum field theory. Instead, I've had to do it myself because of arrogance and abuse from the people who should have discovered this stuff themselves, but who were too busy wasting their time on abject, non-falsifiable speculative stringy ideas and loopy quantum gravity. The particle mass quantization mechanism, applied to quarks and leptons in this post for the first time, was earlier used (see About page) a few years ago to predict hadron masses, because the virtual fermions gain energy through polarization and exist long enough to show some signs of exclusion principle response, forming shells by analogy to the shells of nucleons inside any nucleus, complete with magic numbers of stable structures, which help to predict the masses of mesons and baryons that are stable enough (i.e. have long enough half-lives) for us to observe and measure. Finally I need to mention that I hope to get an understanding of the CKM parameters of the Standard Model from the matrix multiplication modelling of Marni and Carl, who have also investigated neutrino masses.

Copies of comments to Marni’s post Top Down Numbers, http://pseudomonad.blogspot.com/2010/05/top-down-numbers.html:

“If you look at it by hand, the quotient of phases comes down to a number close to π.”

The quotient of which phases? The presence of Pi may be significant. Please consider using the category morphisms between all the different lepton and quark masses, because the CKM parameters govern the difference between weak interactions of quarks and leptons within and between different generations, and masses are the major difference between generations of particles: differences in particle lifetimes must be related to differences in masses!

(That new link isn’t to my terribly long blog pages, just to a very small, easy to load jpg image of symmetries in the category morphism between the lepton and quark masses. Delete this comment if you like, but please consider trying to use category morphisms between different particles to pictorially look for symmetry patterns in the way that the CKM elements relate interactions between quarks and leptons. I don’t know if your matrix multiplications can be represented that way easily or not.)

Just to clarify:

(1) the CKM parameters determine the relative weak force interaction strengths.

(2) the weak force interaction rate depends on the relative mass of the weak gauge boson to the particle being considered. Neglecting electroweak theory for a moment, in Fermi's theory of beta decay, the weak force has a strength equal to the electromagnetic force strength multiplied by the impressive looking ratio Pi^2*h*M^4/(T*c^2*m^5) (reference: Matthews' Quantum Mechanics textbook), where T is the effective lifetime of the particle (i.e. half-life times the factor 1/ln2 ~ 1.44), m is the mass of the emitted fermion, and M is the mass of the particle before it decays. (A rough numerical evaluation of this ratio for the free neutron is sketched just after this comment.)

(3) therefore, there seems to be a link between the CKM parameters and the masses of fundamental particles.

Do you agree?
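For reference, here is a rough numerical evaluation of the dimensionless ratio quoted in point (2) above, for free-neutron beta decay (my illustrative numbers, not part of the original comment; since the ratio is dimensionless, SI units can be used throughout):

# Python sketch: evaluating Pi^2*h*M^4/(T*c^2*m^5) for free neutron decay.
import math

h = 6.626e-34        # Planck's constant, J s
c = 2.998e8          # speed of light, m/s
M = 1.675e-27        # neutron mass, kg
m = 9.109e-31        # electron (emitted fermion) mass, kg
T = 880.0            # neutron mean lifetime, s (half-life x 1/ln2)

ratio = math.pi**2 * h * M**4 / (T * c**2 * m**5)
print(f"{ratio:.1e}")   # of order 1e-9, i.e. on this measure the weak force is
                        # roughly nine orders of magnitude weaker than electromagnetism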

Update: Marni responded by deleting the key comment and leaving the other, then adding abusive and factually incorrect replies: “Nigel, you must stop putting big links on my blog, or I will ban you from the blog.” Well, I can’t help making comments if I read a blog, so I will stop reading that blog to comply with the request (the small jpg linked was not a big link, as the comment stated!). It’s very interesting that it is widely claimed in communist style political brainwashing that progress comes from collaboration and groupthink, when in fact it is a waste of time and effort and trying to even communicate with other people working on similar problems is completely hopeless when they’re just the kind of non-listening and deliberately rude human beings I had to put up with as a kid when I had audio related speech problems. At first it’s actually funny to be mimicked etc., and you can take it for a while, but after a few years it gets boring and grinds you down, so eventually you do become bitter that so many other people – while claiming to be great, sane human beings – nevertheless behave like a lynch mob and all band together against those with differences, just like the Chamberlain-praised and collaborated-with thugs who attacked those who didn’t side with the eugenics or other mainstream-funded politically-biased groupthink travesty of science. In Kea’s case, maybe she has the excuse that she had a PhD and needs to find funding, so she has to ever mimic the physically ignorant mainstream lemmings in building epicycles without a solid physical foundation, and even naming her mysterious epicycles after stringy M-theory. What I can’t tolerate is the hypocrisy that such people pretend to be tolerant of physics, when like string theorists, they’re really anti-physics, i.e. pro-pseudophysics. Maybe it is true that grants only go to pseudophysics, but in that case these people could do what Einstein did and try to get work outside the mainstream and work on physics as a spare time hobby without mainstream grant funding pressure. I try to be understanding, but there are limits. How far should you go in trying to be understanding towards people who are just ignorant and rude towards solid facts of nature?

Update (12 May 2010):

At the “Not Even Wrong” blog of Peter Woit, which I read sometimes, Marni has left a comment supporting the false, non-relativistic, groupthink 1st quantization uncertainty principle entanglement “Quantum Information Theory” (QIT) pseudophysics by Duff: “Perhaps instead of bickering like stupid little boys, you could all spend some time reading, for instance, the papers of Levay, who is mentioned in Duff’s article. Sure, Duff is deluded about strings, but he has done some cool stuff here and if it turns out that the QI is in fact at the foundations of QG, then it will be relevant. And won’t you want to know something about it?”

My refutation of the Duff pseudophysics is submitted there in the comment moderation queue (where it will probably be lost amongst millions of other, briefer submitted comments, so I will copy it below):

Your comment is awaiting moderation.
May 12, 2010 at 5:31 am
Nige Cook says:

“EPR concluded rightly that if quantum mechanics is correct then nature is nonlocal, and if we insist on local “realism” then quantum mechanics must be incomplete. Einstein himself favoured the latter hypothesis. However, it was not until 1964 that CERN theorist John Bell proposed an experiment that could decide which version was correct – and it was not until 1982 that Alain Aspect actually performed the experiment. Quantum mechanics was right, Einstein was wrong and local realism went out the window. As QIT developed, the impact of entanglement went far beyond the testing of the conceptual foundations of quantum mechanics. Entanglement is now essential to numerous quantum-information tasks such as quantum cryptography, teleportation and quantum computation.” – Duff’s article

Duff is claiming groupthink is right, when in fact the manipulated experimental results are all just spin, with no significance unless there is an arbitrary subtraction of inconvenient “accidental” data. See: http://arxiv.org/abs/quant-ph/9903066

Feynman explains very clearly in his 1985 book QED that all this uncertainty principle stuff is 1st quantization quantum mechanics, which is non-relativistic. He writes:

“there is no need [emphasis by Feynman] for an uncertainty principle.”

Instead, Feynman explains that there is a quantum field which acts along all possible paths, producing randomness via interferences, like Brownian motion. The quantum field has been verified by the Casimir effect tests, so the entanglement and wavefunction collapse claims come from the disproved, unnecessary uncertainty principle.

Why can’t QIT people SIMPLY READ Feynman’s book, and admit they’re wrong?

Alain Aspect’s false theoretical analysis of his “adjusted” experimental data

Alain Aspect hasn't so far won the Nobel Prize, and if he had really proved entanglement or wavefunction collapse was real, it is likely he would have won it long ago. In fact, Alain Aspect might then have 10^500 Nobel prizes, one for each stringy parallel universe. But that doesn't affect the error in the theoretical physics (the Bell theorem) used in the analysis of Alain Aspect's experimental results: his Bell theorem "first-quantization" (uncertainty principle based quantum mechanics) analysis, with its use of the uncertainty principle for describing the motion of real particles, remains incompatible with empirically-verified relativity equations (regardless of whether there is an absolute reference frame via the cosmic background radiation). In second quantization (second quantization is quantum field theory), the uncertainty principle can still be used to relate statistically the energy and survival time of the space-time loops of pair production and annihilation of virtual particles, providing that polarization phenomena are trivial, but the uncertainty principle there is not applied directly to real particles, so there is no wavefunction collapse for real, on-shell (directly observable) particles like photons.

As Feynman shows in his 1985 book QED, the whole point of quantum field theory is that the off-shell radiation, field quanta, are randomly exchanged rather than being steady classical fields. The random fluctuations in the Coulomb field in the atom between electrons and positive nuclei are what make the electrons move chaotically. There is no smooth, continuously acting field curvature in quantum field theory, despite the commonplace mathematical use of differential equations that don’t apply to discontinuities, or else infinities or zeros result. We need to beware of the mathematical concerns pointed out by Thomas Love:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84:

‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

The road ahead

Peter Woit has just made a comment on his blog (which exposes the fraud of string theory hype as a theory of everything) stating: “By formal QFT I just mean the investigation of the structure of QFTs, often using mathematical methods. This has been going on not for 25 years, but since the invention of the subject back in the 1930s. An example of success would be Veltman-’t Hooft’s work on the quantization and renormalization of gauge theories, which was crucial for the Standard Model. Current ideology is that we understand completely gauge theories, that formal theoretical work on their structure is a waste of time. I disagree. … I may be beating a dead horse, with the particle theory community completely in agreement with me about the multiverse and string theory unification. I do find some encouragement to continue when I see that instead of just boring people and making them go away, it continues to upset some who want me to stop interfering with the continuing production of hype like Duff’s. It’s also interesting to note that while half the criticism I get is that I should be ignored since I’m a marginal figure who doesn’t know what he is talking about, the other half seems to be that I should be ignored since I’m just saying well-known things that almost everyone in the community agrees upon.”

Woit wrote a preprint paper called Quantum Field Theory and Representation Theory: A Sketch http://arxiv.org/PS_cache/hep-th/pdf/0206/0206135v1.pdf which is extremely painstaking and formal in the way it discusses particle physics symmetries and their mathematical representations, but includes some interesting ideas for understanding the Standard Model in section 10, pages 48-51, followed by criticisms of string theory groupthink and its brainwashing effects in section 11. In particular, he presents a summary of the ideas in his 1988 paper, “Supersymmetric quantum mechanics, spinors and the standard Model”, Nuclear Physics, vol. B303, pp. 329-342 (although that paper considered supersymmetry ideas, Woit criticises the lack of falsifiability of supersymmetry ideas in his 2002 paper):

“There it is argued that the standard model should be defined over a Euclidean signature four dimensional space time since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a subset of] SO(4) (perhaps better thought of as a U(2) [is a subset of] Spin^c(4)) and in [Woit’s 1988 paper cited previously] it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n), Λ*(C^n), applied to a “vacuum” vector. Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons:

Λ^0(C^2) = 1 produces SU(2) × U(1) charges of (0, 0) => right-handed neutrino

Λ^1(C^2) = C^2 produces SU(2) × U(1) charges of (½, -1) => left-handed neutrino and left-handed electron

Λ^2(C^2) produces SU(2) × U(1) charges of (0, -2) => right-handed electron

“A generation of quarks has the same transformation properties except that one has to take the ‘vacuum’ vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero.

“The above comments are … meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles. …

“While historically the attempt to make progress in theoretical physics by pursuing mathematical elegance in the absence of experimental guidance has had few successes (general relativity being a notable exception), we may now not have any choice in the matter. The exploitation of symmetry principles has lead to much of the progress in theoretical physics made during the past century. Representation theory is the central mathematical tool here and in various forms it has also been crucial to much of twentieth century mathematics. The striking lack of any underlying symmetry principle for string/M-theory is matched by the theory’s complete inability to make any predictions about nature. This is probably not a coincidence.”

These ideas may offer an alternative to non-falsifiable string theory, in searching for the simplest mathematical structures which represent observables. In trying to understand quantum gravity with the mass morphisms and related mechanisms of this post, we're using the tried and tested approach to physics of utilizing experimental guidance, which historically was used by Dalton in atomic theory and by Mendeleev in formulating the periodic table, which made some empirical predictions (just as the exact relations in our scheme predict definite masses that will be checked when the LHC measures heavy quark masses more accurately than presently known). The early atomic theory came under attack for failing to explain non-integer masses (later found to be due to isotopes), while the detailed theoretical explanation of the periodic table using quantum mechanics had to wait for decades. However, we might make faster progress with this idea, as it is starting to look exciting and makes real sense.

Extract from an earlier post (linked here) on the second quantization lying cover-up in quantum mechanics teaching to undergraduates in physics

Quantum tunnelling is possible because electromagnetic fields are not classical, but are mediated by field quanta randomly exchanged between charges. For large charges and/or long times, the number of field quanta exchanged is so large that the result is similar to a steady classical field. But for small charges and small times, such as the scattering of charges in high energy physics, there is some small probability that no or few field quanta will happen to be exchanged in the time available, so the charge will be able to penetrate through the classical "Coulomb barrier". If you quantize the Coulomb field, the electron's motion is indeterministic in the atom because it's randomly exchanging Coulomb field quanta which cause chaotic motion. This is second quantization as explained by Feynman in QED. This is not what is done in quantum mechanics, which is based on first quantization, i.e. treating the Coulomb field V classically, and falsely representing the chaotic motion of the electron by a wave-type equation. This is a physically false mathematical model since it omits the physical cause of the indeterminacy (although it gives convenient predictions, somewhat like Ptolemy's accurate epicycle-based predictions of planetary positions):

Fig. 1: The Schrodinger equation, based on quantizing the momentum p in the classical Hamiltonian (the sum of kinetic and potential energy for the particle), H. This is an example of 'first quantization', which is inaccurate and is also used in Heisenberg's matrix mechanics. Correct quantization will instead quantize the Coulomb field potential energy, V, because the whole indeterminacy of the electron in the atom is physically caused by the chaos of the randomly timed individual interactions of the electron with the discrete Coulomb field quanta which bind the electron to orbit the nucleus, as Feynman proved (see quotations below). The triangular symbol (nabla) is the gradient operator (simply the sum of the first-order derivatives in all applicable spatial dimensions, for whatever it operates on), which when squared becomes the laplacian operator (simply the sum of second-order derivatives in all applicable spatial dimensions, for whatever it operates on). We illustrate the Schrodinger equation in just one spatial dimension, x, since the terms for other spatial dimensions are identical.
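For reference (since the figure may not display in every copy of this post), the one-dimensional equation the caption describes is the standard time-dependent Schroedinger equation,

iħ dψ/dt = -[ħ^2/(2m)] d^2ψ/dx^2 + Vψ,

obtained by substituting p -> -iħ d/dx into the classical Hamiltonian H = p^2/(2m) + V and then writing Hψ = iħ dψ/dt.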

Dirac’s quantum field theory is needed because textbook quantum mechanics is simply wrong: the Schroedinger equation has a second-order dependence on spatial distance but only a first-order dependence on time. In the real world, time and space are found to be on an equal footing, hence spacetime. There are deeper errors in textbook quantum mechanics: it ignores the quantization of the electromagnetic field and instead treats it classically, when the field quanta are the whole distinction between classical and quantum mechanics (the random motion of the electron orbiting the nucleus in the atom is caused by discrete field quanta interactions, as proved by Feynman).

Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman’s path integrals in his 1933 paper “The Lagrangian in Quantum Mechanics” published in Physikalische Zeitschrift der Sowjetunion where he states:

“Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue …

“Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. … The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. … the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form …”

Schroedinger's time-dependent equation is: Hψ = iħ dψ/dt, which has the exponential solution:

ψ_t = ψ_0 exp[-iH(t - t_0)/ħ].

This equation is accurate, because the error in Schroedinger's equation comes only from the expression used for the Hamiltonian, H. This exponential law represents the time-dependent value of the wavefunction for any Hamiltonian and time. The square of the modulus of this wavefunction gives the relative probability for a given Hamiltonian and time. Dirac took this amplitude, e^(-iHt/ħ), and derived the more fundamental Lagrangian amplitude for action S, i.e. e^(iS/ħ). Feynman showed that summing this amplitude factor over all possible paths or interaction histories gave a result proportional to the total probability for a given interaction. This is the path integral.

Schroedinger's incorrect, non-relativistic hamiltonian before quantization (ignoring the inclusion of the Coulomb field potential energy, V, which is an added term) is: H = ½p^2/m. Quantization is done using the substitution for momentum, p -> -iħ∇ (the gradient operator), as in Fig. 1 above. The Coulomb field potential energy, V, remains classical in Schroedinger's equation, instead of being quantized as it should.

The bogus 'special relativity' prediction to correct the expectation H = ½p^2/m is simply: H = [(mc^2)^2 + p^2c^2]^(1/2), but that was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

-ħ^2 d^2ψ/dt^2 = [(mc^2)^2 + p^2c^2]ψ.

While this is relativistically consistent, it deals only with second-order time variations of the wavefunction. Dirac's equation simply makes the time-dependent Schroedinger equation (Hψ = iħ dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = a·pc + b·mc^2,

where p is the momentum operator. The constants a and b are represented by 4 x 4 matrices (the Dirac matrices), which act on a four-component wavefunction called the Dirac 'spinor'. This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates). The justification for Dirac's equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc^2)^2 + p^2c^2]^(1/2).

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions! The electron is spin-1/2 so it has a spin of only half the amount of a spin-1 particle, which means that the electron must rotate 720 degrees (not 360 degrees!) to undergo one revolution, like a Mobius strip (a strip of paper with a twist before the ends are glued together, so that there is only one surface and you can draw a continuous line around that surface which is twice the length of the strip, i.e. you need 720 degrees of turning to return it to the beginning!). Since the spin rate of the electron generates its intrinsic magnetic moment, the spin direction affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that 'a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum', a fact discovered by Dirac the day after he found his equation (see: A. Zee, Quantum Field Theory in a Nutshell, Princeton University Press, 2003, pp. 177-8.) The other two solutions are evident when considering the case of p = 0, for then E = ±mc^2. This equation proves the fundamental distinction between Dirac's theory and Einstein's special relativity. Einstein's equation from special relativity is E = mc^2. The fact that in fact E = ±mc^2 proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity. E = ±mc^2 allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle such as an electron and a positron).
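As a quick numerical illustration of the two energy roots (a sketch; the momentum value p = mc is an arbitrary assumption chosen only for illustration):

# Python sketch: the +/- energy roots of E = [(m*c^2)^2 + (p*c)^2]^(1/2)
# for an electron, with the momentum arbitrarily set to p = m*c.
m = 9.109e-31      # electron rest mass, kg
c = 2.998e8        # speed of light, m/s
p = m * c          # illustrative momentum (assumption)
E = ((m * c**2)**2 + (p * c)**2) ** 0.5
for sign in (+1, -1):
    print(sign * E, "joules")   # roughly +/- 1.16e-13 J, i.e. +/- 0.72 MeV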

Much of the material above is from the previous post (I’m putting it here on a separate post because that previous post began with sorting out errors in mainstream cosmology, which may have put off some bigoted and dogmatic people who are only interested in non-cosmology aspects of quantum field theory; it also helps me towards assembling background/draft material for a forthcoming book/paper).

To understand how the path integrals approach explains the double slit experiment, see this post. To see how scientific criticisms of mainstream first quantization lies have been censored out of mainstream journals by dogmatic mathematical simpletons who lack a grasp of the nature of science itself (‘Science is the organized skepticism in the reliability of expert opinion.’ – Richard Feynman in Lee Smolin, The Trouble with Physics, Houghton-Mifflin, 2006, p. 307), see this post. There’s a completely causal explanation: the photon is not a point but has transverse spatial extent; when it encounters two nearby slits (closer than a wavelength) part diffracts through each slit and the recombination on the other side gives rise to the photon whose probability of landing at any point depends on both slits, not just one of them.
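To make the "depends on both slits" point concrete, here is the standard two-slit interference pattern (a sketch; the wavelength and slit spacing are arbitrary assumptions, and this is ordinary wave optics rather than anything specific to this blog's model):

# Python sketch: two-slit relative intensity, showing that the probability
# of arrival at a given angle depends on both slit paths.
import numpy as np

wavelength = 500e-9      # 500 nm light (assumption)
d = 10e-6                # slit separation, 10 micrometres (assumption)
theta = np.linspace(-0.05, 0.05, 9)             # small angles, radians
phase = np.pi * d * np.sin(theta) / wavelength  # half the path-difference phase
intensity = np.cos(phase) ** 2                  # two-path interference term
for angle, I in zip(theta, intensity):
    print(f"angle {angle:+.3f} rad  relative intensity {I:.3f}")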

String theorists who believe dogmatically that mathematical elegance, mystery and beauty, rather than hard evidence of agreement with experiment, are the central requirements of physics should listen to Einstein and Boltzmann:

“I adhered scrupulously to the precept of that brilliant theoretical physicist L. Boltzmann, according to whom matters of elegance ought to be left to the tailor and to the cobbler.”

– A. Einstein, December 1916 Preface to his book Relativity: The Special and General Theory, Methuen & Co., London, 1920.

Mathematical relationship between the Hamiltonian formalism of first quantization quantum mechanics (bound states of particles) and the Lagrangian path integral formalism necessary to adequately describe quantum fields

Heisenberg’s uncertainty principle (for minimum uncertainty, i.e. intrinsic uncertainty):

px = ħ

is quantized in first quantization (Heisenberg and Schroedinger methods) by turning uncertainties in momentum p and position x into non-commuting operators (which I'll signify by simply placing square brackets around them), and replacing ħ with iħ. This gives the commutator [x,p] = iħ. The two solutions to that are firstly

[x] = iħd/dp with [p]=p,

and secondly

[p] = -iħd/dx with [x] = x.

Either of these solutions is a first quantization of classical physics. Then you do the same thing replacing momentum p = E/c and x = ct for light, giving px = (E/c)(ct) = Et, which allows you to replace the product of uncertainties px in Heisenberg's uncertainty principle with the product of uncertainties in energy and time, Et. Repeating the previous recipe for quantization on this energy-time Heisenberg uncertainty principle then gives us [E,t] = iħ. This has the two solutions:

[E] = iħ d/dt with [t] = t,

and

[t] = -iħ d/dE with [E] = E.
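Before using these operators, a quick symbolic check of the recipe (a sketch using the sympy library; the position-momentum case is shown, and the energy-time assignments above work the same way with the sign conventions given here):

# Python sketch: verify that [x, p] psi = i*hbar*psi when [p] = -i*hbar d/dx.
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

def p_op(f):                     # the momentum operator acting on f
    return -sp.I * hbar * sp.diff(f, x)

commutator = x * p_op(psi) - p_op(x * psi)   # [x, p] acting on psi
print(sp.simplify(commutator))               # prints I*hbar*psi(x)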

Taking [E] = iħ d/dt, this gives Schroedinger's time-dependent equation when it acts on the wavefunction ψ, with energy operator [E] = H, the Hamiltonian:

Hψ = iħ dψ/dt

Rearranging

(1/ψ) dψ = H dt/(iħ)

integrating this gives:

ln ψ = Ht/(iħ)

(ln ψ_t) - (ln ψ_0) = Ht/(iħ)

Taking both sides to natural exponents to get rid of the natural logarithms on the left hand side:

(ψ_t)/(ψ_0) = exp(Ht/(iħ))

hence

ψ_t = ψ_0 exp(Ht/(iħ))

Thus the time-dependent wavefunction equals simply the time-independent wavefunction multiplied by the exponential amplitude factor, exp(Ht/(iħ)), in which the fraction can be rewritten by multiplying both its numerator and denominator by i, giving:

exp(Ht/(iħ)) = exp(iHt/(i·iħ)) = exp(-iHt/ħ).

The product Ht in this exponent is analogous to the time integral of the Lagrangian, the action S = ∫L dt, since the Lagrangian is the Legendre transform of the Hamiltonian (L = p(dq/dt) - H, which supplies the change of sign); following Dirac's correspondence quoted above, the relative amplitude of a wavefunction (representing the contribution from one Feynman diagram or one "path" in the path integral) is therefore given by:

exp(iS/ħ).

So the path integral amplitude factor is mathematically equivalent to both the Heisenberg matrix mechanics and the Schroedinger wave equation. However, there are physical differences. First quantization is physically wrong. Second quantization is physically correct in the way Feynman presents it.
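A small numerical illustration of why this amplitude factor behaves as described (a sketch with arbitrary toy "actions", assumed purely for the phase arithmetic; it is not a real path integral): paths whose actions are close to stationary add constructively, while paths with wildly varying actions mostly cancel.

# Python sketch: summing exp(i*S/hbar) over near-stationary versus random actions.
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0                               # work in units where hbar = 1

S_near = 100.0 + 0.01 * rng.standard_normal(10_000)  # actions near the classical path
S_far = 100.0 + 50.0 * rng.standard_normal(10_000)   # actions far from it

amp_near = np.exp(1j * S_near / hbar).sum()
amp_far = np.exp(1j * S_far / hbar).sum()
print(abs(amp_near))   # roughly 10,000: phases nearly equal, constructive
print(abs(amp_far))    # roughly sqrt(10,000) ~ 100: random phases largely cancel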

For a detailed derivation of the time-dependent Schroedinger equation using the path integrals formulation, see David Derbes, “Feynman’s derivation of the Schroedinger equation”, Am. J. Phys. v64, issue 7, July 1996, pp. 881-4. For discussions of random or stochastic quantization, see Poul Henrik Damgaard and Helmuth Hüffel, Stochastic quantization (1988) and Mikio Namiki, Stochastic quantization (1992).

The mathematician in modern physics resolutely refuses to see the physical difference between the Hamiltonian and the Lagrangian approaches, 1st and 2nd quantization, instead seeing them as mathematically equivalent descriptions of the same thing. This is totally bogus, because in 1st quantization you keep the field classical (falsely) and then make particle motions intrinsically indeterminate (falsely) with no mechanism for this (hence leading to wave function collapses upon measurement and multiple universe entanglement speculations which are provably false in consequence of the falsehood of the 1st quantization model), and in doing this your model is non-relativistic, i.e. contravenes physically confirmed equations of relativity. But 2nd quantization correctly attributes the indeterminacy of real, relativistically on-shell particles to simple random interactions with the Coulomb field quanta, instead of having the Coulomb field classical. This is just like air pressure, which is approximately classical and continuous on large scales (where individual random air molecule bombardments are large enough in number to average out statistically), but which produces chaotic, random motion on small scales, called Brownian motion, because on small scales there is not enough space for good averaging and cancellation of randomness by large numbers of interactions, so that individual impacts become relatively more important and randomness predominates in the Coulomb field quanta exchanged by atomic electrons and nuclei. There is no magic of the sort that the string theorists and science fiction buffs would like to believe in, such as wave functions collapsing and being entangled, leading to quantum information theory, etc. That is a myth. Caroline H. Thompson has shown how Alain Aspect's entanglement experiments are not good experimental data, but are adjusted to make them agree with prejudiced beliefs like a religion: http://arxiv.org/abs/quant-ph/9903066, Subtraction of "accidentals" and the validity of Bell tests:

"In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of 'accidentals' from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known 'loopholes'. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. 'Local causal' explanations for the observations have been wrongfully neglected."

Just as Bohr’s atom is taught in school physics, most mainstream general physicists with training in quantum mechanics are still trapped in the use of the “anything goes” false (non-relativistic) 1927-originating “first quantization” for quantum mechanics (where anything is possible because motion is described by an uncertainty principle instead of a quantized field mechanism for chaos on small scales). The physically correct replacement is called “second quantization” or “quantum field theory”, which was developed from 1929-48 by Dirac, Feynman and others.

The discoverer of the path integrals approach to quantum field theory, Nobel laureate Richard P. Feynman, has debunked the mainstream first-quantization uncertainty principle of quantum mechanics. Instead of anything being possible, the indeterminate electron motion in the atom is caused by second-quantization: the field quanta randomly interacting and deflecting the electron.

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger’s wave equation and Heisenberg’s matrix mechanics being the first two attempts, which both generate nonsense ‘interpretations’]. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

'… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young's double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle's motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.'

– Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

There are other serious and well-known failures of first quantization aside from the nonrelativistic Hamiltonian time dependence:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

First quantization for QM (e.g. Schroedinger) quantizes the product of position and momentum of an electron, rather than the Coulomb field, which is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy's epicycles, where all planets were assumed to orbit Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals), because indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman's path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization by Heisenberg's and Schroedinger's approaches is just a semi-classical, non-relativistic mathematical approximation useful for obtaining simple mathematical solutions for bound states like atoms:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.

This is a fact, not my opinion or speculation: Professor David Bohm in 1952 proved that “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that quantum field interactions produce the “Brownian motion” of the electron, as described by Feynman’s path integrals for simple random field-quanta interactions with the electron.
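A minimal numerical sketch of this analogy (my illustration in Python with numpy, not Bohm’s actual 1952 analysis, just the generic statistics of random impacts): each simulated electron below receives a long sequence of random kicks, so any individual trajectory is unpredictable, yet the spread of the whole ensemble follows the smooth square-root-of-time law of the deterministic diffusion limit, in the same way that chaotic molecular collisions average out into a simple acoustic wave equation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps, step = 10_000, 400, 1.0

    # Each row is one simulated electron; each entry is one random kick of +/- one step.
    kicks = rng.choice([-step, step], size=(n_particles, n_steps))

    # Any single trajectory is random and unpredictable...
    print(np.cumsum(kicks[0])[:10])

    # ...but the spread of the whole ensemble matches the deterministic prediction
    # (standard deviation = step * sqrt(number of kicks)) of the diffusion limit.
    final_positions = kicks.sum(axis=1)
    print(final_positions.std())        # close to 20
    print(step * np.sqrt(n_steps))      # exactly 20.0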

Relevant copy of a comment to Professor Johnson’s Asymptotia:

“Gell-Mann is best known as the person who came up with the idea of quarks, the particles that make up (for example) protons and neutrons, the building blocks of atomic nuclei.”

It took genius to publish such a speculative idea. According to William H. Cropper’s book Great physicists (Oxford U.P., p. 418), George Zweig’s paper on that theory was “emphatically rejected” by Physical Review but Murray Gell-Mann was “older and wiser” so he “anticipated a negative reception at the Physical Review to such bizarre entities as unobservable, fractionally charged elementary particles, and he published his first quark paper in Physics Letters. Zweig’s theory went unpublished except in a CERN report, but it and its author acquired a certain reputation. When Zweig sought an appointment at a major university, the head of the department pronounced him a ‘charlatan’.”

It’s good that Gell-Mann managed to anticipate and avoid that censorship so cleverly, or we wouldn’t have quark theory, with the SU(3) strong interaction part of the Standard Model. Another example is Pauli’s attempt to censor Yang-Mills theory in February 1954 because the particles are massless (Pauli had already discarded the idea over this “failure”); Yang simply sat down when Pauli persisted in objecting.

Consider also Oppenheimer’s attempt to censor Feynman’s path integrals without listening at all, as described by Freeman Dyson (Stueckelberg was working on the same idea independently, but was ignored and – as with Zweig’s quarks – received no Nobel Prize). It’s remarkable that genius in the past has consisted to such a large degree in overcoming apathy. Oppenheimer was not just a stubborn exception who objected to path integrals: Feynman is quoted by Jagdish Mehra in The Beat of a Different Drum, pp. 245-248, saying that Teller, Dirac and Bohr all also claimed to have “disproved” path integrals. Teller’s “disproof” was the objection that Feynman did not take account of the exclusion principle; Dirac’s, that the theory lacked a unitary operator; and Bohr’s, his belief that Feynman didn’t know the uncertainty principle: “it was hopeless to try to explain it further.” So without Dyson’s brilliance at explaining ideas, Feynman’s path integrals would probably have been ignored.

“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, in Jagdish Mehra, The Beat of a Different Drum (Oxford, 1994, pp. 245-248).

Update: New Scientist issue on debunking mass delusions

The New Scientist on page 45 has an article by Michael Shermer which correctly explains: “Denialist movements can be beaten. Patient rebuttal is a powerful weapon.”

Patient rebuttal is exactly what arrogant, egotistical, impatient scientists, engineers and politicians lack when dealing with propaganda claims, whether from lobby groups or from genuine critics. The arrogant, egotistical and impatient prefer to ignore criticisms instead of answering them, or they simply adopt the stance Prime Minister Gordon Brown took towards Gillian Duffy: take the perceived path of least resistance when making decisions, then later try to sweep concerns under the carpet and dismiss the critics as mere bigots, instead of undertaking the harder work of making the difficult choices, listening to the concerns, doing the research to find out the facts, and patiently explaining and discussing them in detail:

Living in denial: The truth is our only weapon
12 May 2010 by Michael Shermer

ENGAGING with people who doubt well-established theories is a perennial challenge. How should we respond?

My answer is this: let them be heard. Examine their evidence. Consider their interpretation. If they have anything of substance to say, then the truth will out.

What do you do, however, with people who, after their claim has been fully discussed and thoroughly debunked, continue to make the claim anyway? This, of course, is where scepticism morphs into denialism. Does there come a point when it is time to move on to other challenges? Sometimes there does.

Case in point: Holocaust denial. In the 1990s, a number of us engaged Holocaust deniers in debate and outlined in exhaustive detail the evidence for the Nazi genocide. It had no effect. They sailed on through into the 2000s making the same discredited arguments. At that point I threw up my hands and moved on to other challenges. By the late 2000s the Holocaust deniers had largely disappeared.

Throwing up your hands is not always an option, though. Holocaust denial has always been on the fringe, but other forms – notably creationism and climate denial – wield considerable influence and show no signs of going away. In such cases, eternal vigilance is the price we must pay for both freedom and truth. Those who are in possession of the facts have a duty to stand up to the deniers with a full-throated debunking repeated often and everywhere until they too go the way of the dinosaurs.

Those in possession of the facts have a duty to stand up to deniers with a full-throated debunking.

We should not, however, cover up, hide, suppress or, worst of all, use the state to quash someone else’s belief system. There are several good arguments for this:

1. They might be right and we would have just squashed a bit of truth.
2. They might be completely wrong, but in the process of examining their claims we discover the truth; we also discover how thinking can go wrong, and in the process improve our thinking skills.
3. In science, it is never possible to know the absolute truth about anything, and so we must always be on the alert for where our ideas need to change.
4. Being tolerant when you are in the believing majority means you have a greater chance of being tolerated when you are in the sceptical minority. Once censorship of ideas is established, it can work against you if and when you find yourself in the minority.

No matter what ideas the human mind generates, they must never be quashed. When evolutionists were in the minority in Tennessee in 1925, powerful fundamentalists were passing laws making it a crime to teach evolution, and the teacher John Scopes was put on trial. I cannot think of a better argument for tolerance and debate than his lawyer Clarence Darrow’s plea in the closing remarks of the trial.

“If today you can take a thing like evolution and make it a crime to teach it in the public schools, tomorrow you can make it a crime to teach it in the private schools, and next year you can make it a crime to teach it in the church. At the next session you can ban books and the newspapers. Ignorance and fanaticism are ever busy… After a while, your honour, it is the setting of man against man, creed against creed, until the flying banners and beating drums are marching backwards to the glorious ages of the 16th century when bigots lighted fagots to burn the man who dared to bring any intelligence and enlightenment and culture to the human mind.”

I want to take issue with Michael Shermer’s climate change assertion above: he builds a straw man by missing the whole point that human-caused climate change is smaller in both rate and magnitude than natural climate change, just as global fallout from nuclear testing was smaller than natural background radiation. The temperature on this planet is never constant! It is always either increasing or decreasing!

Hence, at any random time in history you can predict a 50% chance that the temperature will be rising, and a 50% chance that it will be falling. It’s always one or the other; it’s not normally constant. Moreover, the temperature has been rising almost continuously for the 18,000 years since the last ice age started to thaw, so for this period the expectancy of warming is higher than 50%. Over the past 18,000 years global warming has caused sea levels to rise by 120 metres, a mean rise of 0.67 cm/year, with even higher rates of rise during parts of this period. Over the century 1910-2010, sea levels rose roughly linearly by a total of 20 cm, a mean rate of rise of 0.2 cm/year.
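For anyone who wants to check this arithmetic, here is a trivial sketch (Python, simply reusing the round figures quoted above; the 120 m, 18,000 year and 20 cm values are the post’s own numbers, not new data):

    # Post-glacial average rate of sea-level rise: ~120 m over ~18,000 years.
    natural_rate = (120 * 100) / 18_000          # cm/year, ~0.67
    # Twentieth-century rate: ~20 cm over the 100 years 1910-2010.
    modern_rate = 20 / 100                       # cm/year, 0.2

    print(round(natural_rate, 2))                # 0.67
    print(round(modern_rate / natural_rate, 2))  # 0.3, i.e. less than one third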

Hence, the current rate of rise of the oceans (0.2 cm/year) is less than one third of the average rate which occurred naturally over the past 18,000 years (0.67 cm/year). This tells you that the current rate of sea-level rise, and hence the flooding risk from climate change, is not record-breaking and is not unprecedented in history. This is a fact you won’t hear from the propaganda cranks. Whatever the cause of the current global warming, it is not unprecedented, and the fact that we’re alive proves that it is possible to survive and thrive during such climate change without spending trillions fighting it. In particular, nuclear power creates very little carbon dioxide relative to fossil fuels, and generally produces very little physical waste, whose radiation is easily shielded and has been safely contained for literally two thousand million years, as proved by the waste containment of the Oklo natural uranium-ore reactors in Gabon. The reactor that exploded at Chernobyl in 1986 was an inherently unsafe, positive-void-coefficient design, the product of careless communist technological incompetence. With recent reassessments of low-level radiation effects data from the radium dial painters and Hiroshima, and of the effect of DNA-repair enzymes such as protein P53 on repairing DNA damage, certain low dose-rate safety limits can be revised in a way that would make nuclear power far more cost-effective (by greatly reducing routine maintenance, decontamination and reprocessing costs).

In assuming that climate change is a problem under attack from deluded critics, and thereby failing to see it as a politically hyped, taxpayer-funded mainstream “science” propaganda delusion by environmental fanatics with funding agendas like the deluded “string theorists”, Michael Shermer makes the mistake of adopting groupthink himself: he forces himself to see an equivocal, multistable gestalt the way he wants, ignoring the other possibility merely because of historical chance or prejudice (as discussed in a different context in a post on my physics blog, linked here; see also the brief Wikipedia article on multistability). Facts are not multistable.

“She’s just a sort of bigoted woman!” – Prime Minister Gordon Brown’s dismissal of the points raised by disabled-children’s worker Gillian Duffy, who had objected to and questioned some of his failures. Brown dismisses Gillian privately in his car in a way which may suggest that he was dangerously egotistical, deluded and paranoid about the motives of such critics, despite being the British Prime Minister at the time (the full video of the incident is below, together with another video of the only previous occasion on which someone was allowed to tell him to his face that he was making errors, namely Daniel Hannan’s speech in the European Parliament, during which the Prime Minister was present and happily smirking with arrogant self-denial).




The latest issue of New Scientist, 15 May 2010, has a special report on Age of Denial: Why so many people refuse to believe the truth, which is relevant to the issues on this blog, such as the “controversies” over nuclear weapons, radiation (see earlier posts linked here, here and here), and whether weakness and pacifism encourages war-mongers in terrorist states and dictatorships to take advantage and push ahead with aggression, instead of making them reform and be reasonable.

In “controversial” subjects, you experience a strong bias against the facts, whereby efforts to present the facts to objectors don’t easily succeed in convincing them that they are wrong and making them apologise and correct their assertions: for reasons of intrinsic prejudice, pride and egotistical conceit, people may prefer to dismiss others as bigoted and to live in denial, like the Prime Minister in these videos. Gillian Duffy, a disabled-children’s worker who had been a lifelong supporter of Prime Minister Gordon Brown’s political party, Labour, has a discussion with the Prime Minister which seems to end very well, until the Prime Minister gets into his car (forgetting first to hand back a Sky News lapel radio microphone) and “privately” vents his fury at having to speak to a critic, and his disrespect and distrust of Gillian. (Prime Minister Brown’s claim that a million people from Britain had moved abroad to compensate for the influx from Eastern Europe was inaccurate: most Eastern Europeans working in Britain have been sending money back to families in Poland, not spending it here. The million Britons who have left during Labour’s government have not done the same, and many have been retired people who took their savings with them; they have not left behind jobs which need to be filled by Eastern Europeans. Hence the two migrations do not cancel one another: there is a net flow of money out of Britain, and jobs have gone to Eastern Europeans while unemployment has been rising for native Britons. State benefits in Britain are also a magnet for Eastern Europeans.)

This demonstrates beautifully some of the problems in trying to make people really listen and take you seriously when there is too great a difference of opinion. Prime Minister Brown was determined not to admit that he had made costly mistakes. He persuaded himself that every decision he made was backed by “solid” reasoning; for example, he was under pressure from the trade unions which prop up the Labour party to create many unproductive but expensive state-sector jobs at a time when the economy was shrinking and government tax revenues were falling, not increasing. To him, these decisions were inevitable and necessary. In fact, they were the height of folly, a sign of his weakness and preference for the path of least resistance in his own party politics, rather than a sign of strength in taking the difficult choices in the national interest. Moreover, like antinuclear protestors, he surrounded himself with like-minded “media spin-doctors” to “educate” and “inform” the “ignorant”, a situation slightly analogous to one German Chancellor’s use of spin-doctor Dr Goebbels (propaganda minister) to “explain” the “morality” of racist eugenics and ethnic extermination. Labour rewarded the spin doctor Mandelson with a Lordship, despite claiming to be critical of the democratic basis of the House of Lords! Even when confronted with examples of failure which he could not deny, like the sale of Britain’s gold reserves at the bottom of the gold market, which cost Britain a massive £7 billion loss, he refused to accept responsibility and selectively tried to blame his predecessors, who were not responsible because they were no longer making the decisions:



In summary, Scottish MP Gordon Brown, between 1997 and 2010 (first as Chancellor of the Exchequer and later as Prime Minister), ran up a record-breaking British national debt of £777 billion and record-breaking annual government borrowing of £163 billion by the time he resigned on 11 May 2010. Part of the reason why he failed to see this as a problem, let alone a responsibility, was that he is a Scottish MP (Scotland has its own devolved Parliament), and Scotland is protected against cuts by massive subsidy under the Barnett formula. Brown surrounded himself with Scots, who applauded and worshipped him for doing the right thing for Scotland, never mind the cost to the rest of Britain. Seen from this Scottish viewpoint, Brown was truly a genius. The Barnett formula is the reason why Scotland, unlike England, has free university tuition for students. It supports inequality: Scotland gets special treatment, with £8,623 spent per person, at the expense of England, which gets only £7,121 per person despite having by far the larger population! Thus England props up Scotland with its taxes, and Scottish MP Gordon Brown supported this inequality, despite demands by Lord Barnett himself for his formula to be revised: “I only meant the Barnett formula to last a year, not 30”. So he had a fan base of vested interests behind him whom he could always rely on for support, removing any need to listen to critics! This is typical of how groupthink errors persist. By analogy, Stalin and Hitler surrounded themselves with loyalists who called critics dangerous bigots, thus insulating themselves from reality; you find the same thing in science, where peer review is used not to assess science but to censor it according to the orthodox dogma of the mainstream-supported journal. (The Liberal Democrats, sharing power with the Conservatives in government since Brown resigned, have pledged equality for all by abolishing the Barnett formula.)

25 thoughts on “Category morphisms for quantum gravity masses, and draft material for new paper”

  1. Just an editorial comment. The category morphism from s -> d is marked as being pi alpha in some places and 2 pi alpha in others. Based on the table of masses you give, 2 pi alpha seems the correct value.

  2. “Why can’t QIT people SIMPLY READ Feynman’s book, and admit they’re wrong?”

    They are too busy hyping pseudophysics to read something based on factual evidence. They see physics as a collaboration, a welding together of people united in a common hope that the mystery of entanglement is at the heart of the universe, and a desire to implement this in the way that eugenics was enforced in Germany in the period 1933-40, prior to the actual gassing of millions of Jews, when people who dissented were treated “merely” to abusive behaviour from the “glorious” collaboration of millions of people with pseudoscientific thuggery. That’s why, like Hitler’s propaganda minister Goebbels, they promote lies in the media.

    Woit unfortunately underestimates the public appetite for pseudophysics. The real problem is to make factual alternatives to string theory popular, because the only way to sink string theory is to prove that there is an alternative theory which really does 100% of the job of sorting out unresolved issues in the Standard Model and quantum gravity, leaving 0% of problems to be “explained” by the 10^500 metastable vacua of stringy, not fact-based, non-falsifiable propaganda!

  3. Hi Nigel,

    I saw this blog, from Jacques Distler’s website and I thought his reply was maybe a little harsh.

    If I’ve interpreted the diagrams at the top correctly, I think you’re claiming that current particle physics says that a strange quark cannot decay into an electron. Please delete this post if I’ve misunderstood your argument.

    However, it is well known that a strange quark does decay into an electron, an electron neutrino and an up quark. For example just check the Particle Data Group listing where a K^- decays to a pi^0 e^- nu_e.

    In quark language, this is s + (anti u) -> u + (anti u) + e^- + nu_e.

    Technically, the pion is more complicated than this, but this will do as an example.

    I don’t want to sound insulting but you should really check that you understand the basics before putting so much effort into new research. By the way, Distler’s recommendation of Griffiths is a good one, it really is an excellent book.

    I do hope you take this comment in the right spirit, it’s not meant to be insulting.

    REPLY: “I think you’re claiming that current particle physics says that a strange quark cannot decay into an electron” – WRONG. I state in the diagram that this is precisely what happens!!!!!!!!!!!! Your comment shows that you haven’t read the post which states that quarks DO decay into electrons and that the mainstream beta decay theory makes the error of claiming that the electron should be viewed as an indirect decay product because the Standard Model is based on the dogma that the principal decay product for quark decay is another quark. What you’ve done is just what Distler did, which is to not bother to read the point I’m making, and then you repeat what I stated as if I hadn’t already stated it and moved on past it.

    The point I’m making is that if we agree on all the observed decay products of quark decay, we don’t have to hold on to the Standard Model dogma that quarks can’t transform into leptons (e.g. electrons, muons, tauons). If we analyze the situation of beta decay from the new perspective illustrated in this post, we can make progress theoretically.

  4. nigel, hi,

    i note that you say that you haven’t found a fundamental basis for the electron in the theory that you’ve been working on, but dr andrew worsley has. the formula is as follows: star = (-1 + (2*bohr_magneton_ratio)); mass = (h/c^2) * (c^2.5) / (4 * pi * star). in other words the mass of the electron can be derived in terms of planck’s constant and the speed of light. in his 2nd book dr worsley goes over the derivation in some detail, and goes to some lengths to ensure that the reader is made fully aware of the dimensions of each of the intermediate calculations and shows clearly and logically that the resultant calculation’s dimensions are solely and exclusively “mass”. the figure he derives is accurate to something like 5 decimal places. although i do not have the same level of knowledge as either you or dr worsley, being a reverse-engineer and software engineer not a physicist, i have come up with some corrective factors which i do not fully understand but are based on integer fractions that get that down to the level of experimental accuracy for current measurements of the electron’s mass.

    also i thought you might like to know that the corrective factor that you use (1/137.06) is actually related to a cube-root of the speed of light divided by pi. dr worsley, again, has covered the same area that you have, and has again derived this figure from first principles. i believe he has however made some logical deduction errors in his book, covering the mass of the up and down quark in particular, but they are based on a finding that is extremely hard to ignore: a logically-derived figure for the mass of the neutron which is accurate to five or six decimal places.

    anyway, between the two of you i believe there is definitely merit to the approach taken, indicative of something fundamental which the Standard Model is too complex to notice at this point in its development cycle. if you would like to discuss this further please do contact me at luke.leighton@gmail.com

  5. nigel i thought you might appreciate being made aware that i’ve since also managed to derive a second integer fraction “magic constant”, the use of which corrects dr worsley’s formula for the mass of the muon to an accuracy equal to that for which current experimental observations exist: nine decimal places.

    the formula is posted here:
    http://www.physicsdiscussionforum.org/rfc-expanded-rishon-model-draft-paper-t500-20.html#p5328

    in your posting here it was the mention of the adjustment by dirac (1.0016) and the format of the formula used that allowed me to come up with the correction.

    anyway, i have absolutely no idea of the significance of this new magic number, 2.627, just as i have no idea of the significance of the electron-derived magic constant (4^3+6/5)/(3^3). i hope someone else does though!

  6. “Murray Gell-Mann complains in his autobiography that the 1950s New York Times science editor, William L. Laurence (a very important figure in the history of the nuclear age, being the only journalist present at the first nuclear test and the nuclear bombing of Nagasaki), did not take the neutrino seriously because it was so hard to detect due to being so weakly interacting. However, they have been discovered, and they are a fact.” – post text.

    Just an update correcting this. Vincent Kiernan of Georgetown University contacted me by email, pointing out that a source of Laurence’s reputation for objecting to the indirect evidence for the neutrino is George Johnson’s biography of Gell-Mann, which refers obliquely to “a two-time winner of the Pulitzer Prize, who infuriated him by refusing to believe in the existence of the famously elusive particle called the neutrino.”

    Because the neutrino has only been discovered via its weak charge, its interactions are very hard to detect owing to the small cross-section for weak interactions (with a very small mass, its gravitational interaction is weaker still), so typically you need a very high flux of neutrinos, or else a very long counting time and a large detector such as a swimming-pool-sized tank of chlorine-rich cleaning fluid or of water viewed by photomultiplier tubes, to detect any. I’ve recently reread Gell-Mann’s “The Quark and the Jaguar”, so it seems that I must have read about his condemnation of Laurence in Johnson’s biography, as Kiernan did.

    Journalists in science should be objective and refuse to pander to either popular fashion or personal prejudice. Unfortunately, many sensationalize, and Laurence has been condemned for refusing to exaggerate the radiation effects from Hiroshima which Stalin’s paid agent Wilfred Burchett hyped in the Daily Express in September 1945.
