The final theory

Update (11 Dec 2023): Copy of a comment submitted to Dr Wilson’s blog at https://robwilson1.wordpress.com/2023/12/11/is-spacetime-right-handed/:

When you go into the physics of this in detail (e.g. see https://vixra.org/pdf/1111.0111v1.pdf and other papers there), there’s a very different natural result from the one Dr Woit is trying to get. The SU(2) left-handed weak force is fine; the problem is the Abelian U(1) group dogma (hypercharge/EM).

When Woit Wick-rotates the Lorentz group to get Spin(4) = SU(2)_L × SU(2)_R, with SU(2)_L being the weak force and SU(2)_R being right-handed spacetime, he needs to know that SU(2)_R is electromagnetism, with the charge-transfer term disappearing from the Yang-Mills theory (so the Yang-Mills equations become simply Maxwell’s equations), because charged massless field quanta have infinite magnetic self-inductance unless exchanged in an equilibrium (simultaneously from charge A to B and back from B to A, so that the magnetic fields cancel, preventing the infinite self-inductance).

Dr Woit just deletes anything like this as “crackpot” but it works.

See for example p. 1 of https://vixra.org/pdf/1111.0111v1.pdf. The two SU(2) charges (rather than positive charge being the antimatter of negative charge, as presently assumed in the Abelian U(1) hypercharge/Maxwell part of the SM) are needed for the mechanism of attraction and repulsion; see the diagrams at quantumfieldtheory.org

This yields a lot of accurate physical predictions, e.g. the quantitative prediction of cosmological acceleration/dark energy years prior to its experimental discovery in supernova redshifts, particle masses, etc.:

I’m glad you managed to widen Dr Woit’s vision with your comment. Maybe at some point other considerations on the physical dynamics needed to revise the SM can be rapidly incorporated into a paper. I was suppressed while submitting papers to Nature, Physical Review, Physical Review Letters, and Classical and Quantum Gravity in the 1990s, while at Surrey University and the Open University, UK. It’s not just that there’s no interest in anything outside mainstream dogma; it’s far worse than that, because people with alternative ideas are also suffering groupthink autism, deliberately confusing proved predictions with nonsense, to avoid the heresy of making rapid progress in finding the truth.

Copy of a comment to another post, about censorship and lying by arXiv et al.:

The arXiv’s suppression of papers is down to Jacques Distler, string “theorist” at the University of Texas at Austin. I was able to upload a predictive QG paper to arXiv using my Gloucestershire Uni email address in 2003, but Distler or one of his postdocs removed it within seconds.

Later I had a discussion with Distler (on his blog; at least he didn’t immediately delete it!), which explained everything. He’s a wooden mathematical barrel-organ handle winder, who can’t accept that new theories don’t need to accept old stringy ideas as a subset. To give a specific example: in his 1985 book QED, Richard P. Feynman himself replaces his own traditional complex-space (Argand diagram) particle phase vector, exp(iS), with simply its real part, cos S, in diagrams (and explanations) of how a mirror works.

Distler, however, kept coming back with the claim that the book was a completely different one, which Feynman co-authored with his student Albert Hibbs in 1965, twenty years earlier, before Feynman met Bohm and developed his path-integral explanation of the reflection and refraction of light using real space only, thus cos S, not exp(iS).

Distler then insisted, as a “no-go theorem” against my fully proof-checked QG, that because the optical theorem employs complex space, Feynman was wrong to simplify exp(iS) to cos S in his 1985 book to explain the properties of light simply! This isn’t true. The origin of complex space in quantum field theory, and indeed in QM, was Weyl’s original gauge theory of around 1918, which Einstein dismissed but which Schroedinger later used to explain quantum energy levels and get his equation. exp(iX) gives a series of discrete real solutions for X, which makes it ideal for modelling spectral lines, which Schroedinger correlated to Bohr’s energy levels. The continuous solutions are conveniently in complex space, and exp(iX) is also the solution to Schroedinger’s equation. However, it’s a mathematical tool being used to represent discrete events. You can’t assume that the first simplistic model that is used must disprove all subsequent alternatives!
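
A toy numerical check of the cos S point (my own example, not from Feynman’s book; the mirror geometry and wavenumber are arbitrary choices): summing exp(iS) over reflection points on a mirror, the cos S sum is literally just the real part of the complex path sum, and the least-time region around the specular point supplies essentially all of the amplitude.

```python
import numpy as np

# Source at (-1, 1), detector at (1, 1), mirror along y = 0.
# Each mirror point x contributes a path phase exp(i*k*L(x)).
k = 200.0                                     # wavenumber, arbitrary units
x = np.linspace(-3.0, 3.0, 2001)              # reflection points on the mirror
L = np.hypot(x + 1.0, 1.0) + np.hypot(x - 1.0, 1.0)  # source -> mirror -> detector length
S = k * L                                     # phase (action with hbar = 1)

full = np.sum(np.exp(1j * S))                 # complex path sum over exp(iS)
real_only = np.sum(np.cos(S))                 # Feynman's cos S version

# cos S is exactly the real part of the exp(iS) sum:
print(np.isclose(full.real, real_only))       # True

# Stationary-phase check: the least-time (specular) region near x = 0
# dominates the total amplitude; the oscillating tails largely cancel.
centre = np.abs(x) < 0.5
centre_amp = abs(np.sum(np.exp(1j * S[centre])))
print(centre_amp / abs(full))                 # close to 1
```

The magnitude ratio printed at the end is not exactly 1 (the window boundary truncates some oscillating tail), but it shows the near-specular paths carrying almost the whole sum, which is the physical content of the least-action argument.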

If that’s the level of Distler’s intelligence, God help physics.

{additional bitter commentary, not submitted in the comments reproduced above: suppressive “elitists” who vandalise the progress of physics by suppressing reality tend to all vote Democrat, in an effort to atone for their actions, or at least, perhaps, to deflect criticism, just as 80s BBC “stars” repeatedly ran marathons on TV to raise money for charity, to ensure those complaining about their actions wouldn’t be listened to}

Update: copy of another comment to Robert Wilson’s blog post “Is spacetime right-handed?”:

Hi Robert: if you look at Peter Woit’s original 1988 paper, “Supersymmetric quantum mechanics, spinors and the standard model”, Nuclear Physics B303, pp329-342, which he kindly posted (after I emailed him, since I don’t currently have easy journal access outside academia) at https://www.math.columbia.edu/~woit/ssym-nuclphysb.pdf you can see the issue at page 332.

U(2) = (SU(2) × U(1))/Z_2

Woit identifies SU(2) with weak isospin and U(1) with weak hypercharge, showing that the hypercharge is Y = -1 for doublets like the pairing of the left-handed electron with the left-handed neutrino,

yet is correctly Y = -2 for the right-handed electron, which doesn’t partake in weak interactions (simply because there is no right-handed neutrino for it to interact with: right-handed neutrinos simply don’t exist).

(Woit’s more recent papers tend to try to apply this to Penrose’s twistors.)

He’s trying to explain the existing SU(3) × SU(2)_L × U(1) standard model, focussing quite correctly on the electroweak group SU(2)_L × U(1), and showing that by looking at 4-d Euclidean spacetime (forget the Lorentz signature: it doesn’t fit QFT path integrals, and you can get relativity from the contraction of particles moving in a quantum field anyway, so it’s just a mechanism, not a spacetime law) you can explain weak interactions that way, suggesting that the electroweak sector is unified by U(2), so that the SM is really SU(3) × U(2).

However, U(2) is just a subgroup of SO(4). Looking at the full equivalence

SO(4) = SU(2) × SU(2)

surely proves mathematically that you must have some duality between the U(1) hypercharge of the SM and a Yang-Mills electromagnetic SU(2) version of Maxwell’s equations. This is interesting because it throws light on a physical model for why like charges repel and opposite charges attract. It also allows you to look more closely at the mechanism of electroweak symmetry breaking:

basically, one SU(2) reduces to U(1) by failing to couple to mass. This physically prevents its gauge bosons from undergoing one-way propagation, due to massless charged particles having infinite self-inductance. This is what really breaks the electroweak symmetry. What the SM is modelling is a broken symmetry. The Higgs mechanism is being improperly applied.

What’s really occurring is that it’s chiral: the SU(2)_R fails to acquire mass for its W and Z bosons, which therefore remain massless. This means a zero charge-transfer term in the Yang-Mills equations (physically the only distinction between those equations and Abelian classical Maxwell theory): infinite magnetic self-inductance for massless charged bosons means they can’t propagate along any one-way path that changes the charge of a particle. (Massless charged bosons can be exchanged only if they are transferred back and forth in an equilibrium, at a rate which allows the magnetic field curls to cancel out. This means the Yang-Mills charge-transfer term is zero, because there’s no net flow of charge. In the same way, if you have no overdraft, it is still possible to pay money out of your bank account, provided that it is being replenished by income.)

The point is, there’s a lot of progress that could be made if you can avoid seeing the explanation of the existing SM as the holy grail in applied mathematics. You need to think carefully about the physics of the U(1) hypercharge, which really is a complete misunderstanding.

{In other words, Woit is making the “law of octaves” error of Newlands, rather than allowing for omissions/errors in the existing data like Mendeleev, when model-building by looking at data/empirical equations. Before you can “explain” the SM, you need to check that it’s a complete model. E.g. where in the SM is the gauge group for the Higgs mechanism’s mass, the quantum charge of gravity? Or, why should electromagnetism, alone of all the forces in the SM, emerge only after “mixing” between the hypercharge group U(1) and weak isospin SU(2)?}

One final comment there, just on the Dirac spinors in the SM: I hope you’re aware of the anomaly in beta decay which arose when the old Fermi theory was replaced by electroweak theory, after being used to explain muon decay and strange quark decay. See for example Figure 34 on page 44 of https://vixra.org/pdf/1111.0111v1.pdf. The electroweak sector of the SM is bunk. You can’t understand the maths of an error. You have to build models on solid facts…

“My job as a mathematician, I feel, is to find the best mathematics I can to describe what experimental physicists, astronomers, etc, tell me actually happens in the real universe.”

You still confront huge problems in finding mathematics to model something very incomplete. The fact that the SM isn’t complete is blindingly clear even at the elementary level: the Higgs sector, which gives electroweak symmetry breaking and mass, isn’t depicted as a gauge group.

It should be, because “mass” should be the quantum charge for gravitation. Also, the SM obviously omits dark matter, dark energy. You might as well go back a thousand years and try to find maths to understand better the “epicycles” of the earth-centred cosmology.

What they “tell you” contains their ad hoc, back-of-the-envelope interpretation of the data, not just the data. The first empirical equation they can find to fit some data is labelled “what actually happens”, so they can publish fast. This is what happened with “facts” about epicycles. It was only by ignoring the “established wisdom” and finding the best equations to model fresh observational data of the same phenomenon that Kepler came up with elliptical orbits.

Surely, it’s a cop-out to say humbly that you’re just a “mathematician”. Surely you’re an applied mathematician, which means getting mud on boots.

He has another post: “First of all, we must include the gauge group U(1) in the picture, so that the full group is SU(2)_L x [U(1) x SL(2,C)_R]/Z_2. Then the restriction to the compact subgroup becomes SU(2)_L x U(2)_R. Then S_L is the natural representation of SU(2)_L and S_R is the natural representation of U(2)_R and SL(2,C). Minkowski spacetime arises in the standard picture from S_R tensored with its dual. In Woit’s picture it arises from S_R tensored with its contragredient. Now do you see why it is absolutely vital to distinguish between dual and contragredient? Woit’s picture is not the same as the standard picture. He presents it as a better alternative to the standard picture. He does not want anyone to tell him that it is really the same as the standard picture!”

He had it coming. But please just look at the basis in fact for what guided his approach. Again, if you follow Woit’s history back to 1988, namely page 332 of Nuclear Physics B303, pp329-342, https://www.math.columbia.edu/~woit/ssym-nuclphysb.pdf , you should be able to see where this whole thing starts:

U(2) = (SU(2) × U(1))/Z_2

He finds there in that paper that you get the correct weak hypercharges from this, which the SM doesn’t give you. This is also better summarized on page 51 of his 2002 paper https://arxiv.org/abs/hep-th/0206135 “the standard model should be defined over a Euclidean signature four dimensional space time since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) ⊂ SO(4) (perhaps better thought of as a U(2) ⊂ Spin^c (4)) and in [48] it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2^n) as Λ∗(C^n) applied to a “vacuum” vector. Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons [table of results follows, showing how progress is made beyond the SM by getting these numbers from theory] A generation of quarks has the same transformation properties except that one has to take the “vacuum” vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero.”

Can you find any faults with this foundation stone?
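
One numerical check that can be run on the quoted construction (a sketch only: the convention that each exterior degree of Λ*(C^2) lowers the U(1) charge by one, and the lepton “vacuum” charge of 0, are my assumptions, chosen to reproduce the hypercharges listed earlier, Y = -1 for the lepton doublet and Y = -2 for the right-handed electron; the quark vacuum charge 4/3 is from Woit’s quoted text):

```python
from math import comb
from fractions import Fraction

def hypercharges(q0):
    """U(1) charges of the exterior powers Lambda^k(C^2), k = 0, 1, 2,
    acting on a vacuum of charge q0; assumed convention Y(k) = q0 - k,
    with multiplicity C(2, k) states at degree k."""
    return [(comb(2, k), q0 - k) for k in range(3)]

leptons = hypercharges(Fraction(0))     # nu_R (Y=0), doublet (Y=-1), e_R (Y=-2)
quarks  = hypercharges(Fraction(4, 3))  # u_R (4/3), doublet (1/3), d_R (-2/3)

# Average U(1) charge over a full generation, quarks in 3 colours:
total  = sum(m * y for m, y in leptons) + 3 * sum(m * y for m, y in quarks)
states = sum(m for m, _ in leptons) + 3 * sum(m for m, _ in quarks)
print(total, states, total / states)    # 0 16 0
```

With these conventions the lepton charges come out as 0, -1, -2 and the quark charges as 4/3, 1/3, -2/3, and the generation-wide average U(1) charge is exactly zero, which is the property the quoted passage says fixes the quark vacuum charge at 4/3.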

Update (13 Dec 2023): it turns out Robert A. Wilson is one of the people filling arXiv with papers that brilliantly wind handles on mathematical barrel organs, yet don’t seem to make any contact with reality, by which I mean any kind of numerical checks or contact with measurements that could test or falsify the models: https://arxiv.org/search/math?searchtype=author&query=Wilson,+R+A e.g. “A New Division Algebra Representation of E_6”, “An octonionic construction of E_8 and the Lie algebra magic square”, etc. Rule of thumb: anyone who is desperate to apply quaternions where they aren’t needed is best avoided; the reason why “mathematicians” have a bad reputation with the public is down to showy obfuscation, elitism, etc. If you look at his paper https://arxiv.org/pdf/2202.08263.pdf, the maths is all disconnected from any real-world checkable numbers (no masses, no coupling values, etc.), whereas such topics are covered in the paper not with maths but with a lot of vague words (the following is from p. 10):

“… mass cannot be a fundamental physical concept, but must be emergent. In other words, mass is not a cause, but an effect. In particular, mass cannot be the ultimate cause of gravity, however much it may seem like that to us. Perhaps this is why it has proved so difficult to quantise gravity: have we misunderstood its cause? I emphasise that this conclusion is not as ridiculous as it sounds: it is a necessary consequence of following ’t Hooft’s line of reasoning to its logical conclusion.

  1. Consequences for mass and gravity
    “4.1. The measurement of mass. If we accept this conclusion, then it becomes
    much easier to understand such mysteries as why the electron mass is so small,
    and why the proton and neutron masses are so close to each other. These facts
    then cease to be fundamental properties of the universe, but properties that exist
    only in the eye of the beholder. This on its own does not explain the facts, but
    it does show that we have been looking in the wrong place for the answer. We
    need to look more closely into our own eyes. If the electron/proton/neutron mass
    ratios have no fundamental meaning, but only a practical meaning in effective field
    theory, then the near equality of proton and neutron masses can be put down to
    pure coincidence and nothing more.
    “Moreover, the mass ratios that we use in our effective field theories depend on
    our choice of SO(3, 1), which in practice is determined by our assumption that the
    laboratory frame of reference is near enough inertial that it doesn’t matter. But
    the laboratory frame of reference is not inertial, so that the actual copy of SO(3, 1)
    that we use is crucially dependent on such accidents as the relative lengths of the
    day, the month and the year, the angle of tilt of the Earth’s axis, the eccentricity
    of the orbits of the Earth and the Moon, and many other factors. That does not
    mean that the masses change when any of these parameters changes, because we are
    free to choose a ‘standard’ copy of SO(3, 1) that only approximately describes the
    laboratory frame of reference, so that the practical variability can be moved into
    the identification of SU(2) × SU(2) with SL(2, C) instead. In order to investigate
    whether this choice of SO(3, 1) actually matters or not, we need to look at the
    history of this choice, and see whether it has left its imprint on the parameters of
    the standard model, in the form of suspicious coincidences.”

As for the remainder of https://arxiv.org/pdf/2202.08263.pdf, I don’t see any contact with physics. This probably explains why the paper is hosted on arxiv. Ouch. Sorry to be honest. Or maybe not sorry.

Update (15 Dec 2023): on a related topic (at least as regards the censorship of vital information by groupthink political “secrecy for socialism” lunacy), a PDF has been uploaded of UK DAMAGE BY NUCLEAR WEAPONS, D1/57 (1957, updated to 1960): a secret 1000-page UK and USA nuclear weapon test effects analysis, with protective measures determined at those tests (not guesswork), relevant to Russia’s escalation threats of EU invasion in response to Ukraine potentially joining the EU (this is now fully declassified without deletions, and is in the UK National Archives at Kew):

Update (17 December 2023):

Wilson has today (17 Dec 23) added to his blog another discussion of Woit’s “God is right” talk, https://robwilson1.wordpress.com/2023/12/17/duals-contragredients-and-opposites/ , basically “Woit is right but he is not saying anything new.” He then goes on about electron spin directions not being real, which is contrary to the Stern-Gerlach experiment and to the whole basis of quantum field theory, from Dirac’s spinor to the left-handed nature of beta decay. If you read Maxwell’s theory (not to be confused with Heaviside’s version of his equations), you will also see the direction of curl of the magnetic field, in the plane perpendicular to the motion of an electron, treated as a spin effect, conveyed by the “virtual particles” of the vacuum. As for the uncertainty principle debunking determinism: the uncertainty principle only applies when you’re NOT measuring anything; e.g. it applies to philosophical estimates of Mr Schroedinger’s poor cat in a box still being alive. As soon as you take a measurement, opening the box to take the cat’s pulse, the mathematical wavefunction of probability collapses into a measured result, and there is no uncertainty anymore (at least not of the sort Wilson refers to; the cat may have a weak pulse and need oxygen to live, but that’s a different kind of uncertainty from the naive mathematics of probability). The same is true of all the double slit paradox/“entanglement”/Bell theorem stuff: all such “weird interpretations” are plain Heisenberg, single-wavefunction quantum mechanics, 1st quantization (pre-path integrals) stuff, so it doesn’t allow for the reality that every path is taken (2nd quantization, quantum field theory: there is no single wavefunction for a photon going through a slit; instead, photons take all paths all of the time, each with a wavefunction amplitude, most of which cancel; it’s path-integral reality).

Wilson refers to his paper https://arxiv.org/pdf/2102.02817.pdf which I hadn’t seen before. It does at least address the ~33 degree electron neutrino and muon neutrino (inter-generation) mixing angle at page 7, the ~28 degree Weinberg electroweak mixing angle on page 8, and the weak boson W/Z mass ratio of ~0.88 at page 9. Then there’s some blather about quantization of mass (mass is neatly quantized in the very simple equations for QG, universally ignored by the great geniuses; see the third sentence on page 3 of https://vixra.org/pdf/1111.0111v1.pdf); the next contact with anything supposedly real is at page 41, where we get “10.3. The wave-function. … To simplify the problem almost to the point of caricature, we may ask, what is the wavefunction, and how does it ‘collapse’?” ARRRRGGGGGG: there’s no “single wavefunction” per particle. There is one wavefunction per possible path, {Psi} ~ exp(iS), so you have many wavefunctions per particle, and you sum over all of them to allow most to cancel one another out: hence the uncancelled paths have least action (i.e. least time, if the energy remains constant). The “wavefunction collapse” paradoxes are all due to Dirac’s flawed 1933 solution, {Psi} ~ exp(-iHt), which assumes a single wavefunction for an interaction. Feynman debunked it! What would be interesting here is to physically model the path integral; that way you get predictions you can test, and kill off superstring hype.

Update (29 December 2023): on the more important world crisis, a detailed review of “Britain and the H-bomb”, and why the “nuclear deterrence issue” isn’t about “whether we should deter evil”, but about precisely what design of nuclear warhead we should have in order to do that cheaply, credibly, safely and efficiently, without guaranteeing either escalation or the failure of deterrence. When we disarmed our chemical and biological weapons, it was claimed by nutters that the West could easily deter those weapons by using strategic nuclear weapons to bomb Moscow (which has shelters, unlike us). That failed when Putin used sarin and chlorine in Syria to support Assad, and used Novichok in the UK, killing Dawn Sturgess in 2018. It’s not a credible deterrent to say you will bomb Moscow if Putin invades Europe or uses his 2000 tactical nuclear weapons. – nukegate.org

A few interesting reports by Teller, and also Oppenheimer’s secret 1949 report opposing the H-bomb project as it then stood on the grounds of low damage per dollar – precisely the exact opposite of the “interpretation” the media and anti-deterrence dictator-supporting fools will assert until the cows come home – are above. The most interesting is Teller’s 14 August 1952 Top Secret paper debunking Hans Bethe’s propaganda, by explaining that, contrary to Bethe’s claims, Stalin’s spy Klaus Fuchs had the key secret of the H-bomb, “radiation implosion” (see the second para on p. 2), because he attended the April 1946 Superbomb Conference, which was not even attended by Bethe! It was this very fact, noted by two British attendees of the 1946 Superbomb Conference before collaboration was ended later that year by the 1946 Atomic Energy Act, that led to Sir James Chadwick’s secret use of “radiation implosion” for stages 2 and 3 of his triple-staged H-bomb report the next month, “The Superbomb”, a still-secret document that inspired Penney’s original Tom/Dick/Harry staged and radiation-imploded H-bomb thinking, which is summarized by security-cleared official historian Arnold in Britain and the H-Bomb. Teller’s 24 March 1951 letter to Los Alamos director Bradbury was written just 15 days after his historic Teller-Ulam 9 March 1951 report on radiation coupling and “radiation mirrors” (i.e. plastic casing lining to re-radiate soft x-rays on to the thermonuclear stage to ablate and thus compress it), and states: “Among the tests which seem to be of importance at the present time are those concerned with boosted weapons. Another is connected with the possibility of a heterocatalytic explosion, that is, implosion of a bomb using the energy from another, auxiliary bomb. A third concerns itself with tests on mixing during atomic explosions, which question is of particular importance in connection with the Alarm Clock.”

This debunks the fake news that Teller’s and Ulam’s 9 March 1951 report LAMS-1225 itself gave Los Alamos the Mike H-bomb design, ready for testing! Teller was proposing a series of nuclear tests of the basic principles, not 10 Mt Ivy-Mike! Note also that, contrary to official historian Arnold’s book (which claims, due to a misleading statement by Dr Corner, that all the original 1946 UK copies of Superbomb Conference documentation were destroyed after being sent from AWRE Aldermaston to London between 1955-63), the documents did exist in the AWRE TPN series (theoretical physics notes, 100% of which have been preserved) and are at the UK National Archives: e.g. AWRE-TPN 5/54 is listed in the National Archives Discovery catalogue under ref ES 10/5, “Miscellaneous super bomb notes by Klaus Fuchs”; see also the 1954 report AWRE-TPN 6/54, “Implosion super bomb: substitution of U235 for plutonium”, ES 10/6; the 1954 report AWRE-TPN 39/54, “Development of the American thermonuclear bomb: implosion super bomb”, ES 10/39; ES 10/21, “Collected notes on Fermi’s super bomb lectures”; ES 10/51, “Revised reconstruction of the development of the American thermonuclear bombs”; ES 1/548 and ES 1/461, “Superbomb Papers”; etc. Many reports are secret and retained, despite containing “obsolete” designs!

UPDATE (30 Dec 2023): Dr Wilson has another post up, “Is spacetime left-handed?” https://robwilson1.wordpress.com/2023/12/30/is-spacetime-left-handed/ I commented there:

“The Standard Model makes the distinction here between spin (W and W) and weak isospin (H), and further splits the spin into left-handed (which we may take to be W) and right-handed (that is then W). Woit makes the same distinction in different words. Both attempt to unify electromagnetism (described by tensors of W and W*) with the weak interaction (described by H) in terms of H x W. But both destroy the fundamental structure of H x W: the Standard Model destroys the quaternion structure, and hence destroys the three generations; and Woit destroys the complex structure, and hence destroys Minkowski spacetime. If you want both, my model is the only game in town. And in my game you can win the jackpot, because H x W is also the strong force.”

But Woit argues that Minkowski spacetime is just plain wrong, because it isn’t the Euclidean spacetime that the path integral (the heart of all the calculational successes of the Standard Model) requires; so please don’t resort to Witten’s/Susskind’s stringy “only game in town” justification for a theory just because it enables the survival of Minkowski spacetime (that simply censors alternative ideas and turns science into dogma). The essential geometric fact of 4-d Euclidean spacetime, in terms of symmetry, is SO(4), of which Woit’s starting point U(2) is a proper subgroup:

U(2) ⊂ SO(4).

Now,

SO(4) = SU(2) × SU(2)

Surely the lack of chirality in electromagnetism must in some sense halve the isotopic charge suggested here (from 2 to 1), reducing the SU(2) to U(1):

SU(2) -{chirality}-> U(1)

It’s very simple to come up with a physically correct model of fundamental forces that does this mechanically: charged massless SU(2) bosons would have infinite magnetic self-inductance, preventing any one-way net massless current, and reducing the effective SU(2) Yang-Mills equations down to an effective Abelian U(1) theory, by forcing the Yang-Mills charge-transfer term to be 0.

If that’s true (it must be), I’d argue Woit is just a step away from the final theory. But he’s stuck in explaining the standard model in its existing form, without allowing for any historical error – I’d argue that there is an error in the standard model in the SU(2) × U(1) = spin × hypercharge mixing. The correct electroweak group must be U(2), including a simple U(1)-type dark energy force whose charge is mass (and which produces the effect called “gravity” by Casimir shielding of its gauge bosons, which aren’t Pauli/Fierz spin-2 gravitons), plus SU(2) × SU(2), which then reduces to an effective “visible” (illusory) SU(2) × U(1) by the mechanism that electrically charged massless field quanta can’t propagate to transfer charge on a one-way path (due to back-reaction/magnetic self-inductance).

“The great advantage of factorising the strong force as 2×4 instead of 3×3 is that you automatically lose the 9th dimension. You also get colour confinement from the 2, and a fourth (lepton) “colour” from the 4. “Confinement” of lepton “colour” doesn’t make sense said like that, but what it really means is neutrino oscillations. The complex 2 has chosen a direction (of spin), and if we now ignore the spin up/down distinction then what is left of 2×4 is a complex 4-space that breaks up as a complex amplitude plus a complex 3-vector. The complex amplitude distinguishes the three generations of leptons…”

This is very interesting to me because I have been assuming that at least the SU(3) colour charge theory of strong interactions in the standard model is OK. But surely the dimension of the Lie group SU(n) is

dim SU(n) = n^2 - 1,

so for n=3 the dimension is 8 gluons, without any need to factorize it as 2×4 to reduce the 9 to 8. Put another way:

SU(3) = SO(8) [CORRECTION: SU(3) ≠ SO(8).]

The 8 here is the number of gluons of the strong force, and the 3 is the number of colour charges of quarks. Surely this is one piece of maths that’s OK? Entia non sunt multiplicanda praeter necessitatem.

Copy of another comment to Wilson’s blog (I have the feeling he uses his blog as a convenient public open blackboard for his own ideas only, and he may not appreciate my suggestions, so it may end its life in his comments moderation queue over there):

Just seen your paper on SL(4,R), https://www.newton.ac.uk/files/preprints/ni19014.pdf where you mention in Table 4 that it acts on a 6d Lie algebra:

SL(4,R) → SO(3,3)

This is particularly interesting (aside from the 6 leptons and 6 quarks in the standard model), because Lunsford unified electrodynamics and gravitation with SO(3,3). See his paper: https://cdsweb.cern.ch/record/688763/files/ext-2003-090.pdf which produces the Pauli-Lubanski spin vector from 6d spacetime (3 time dimensions, 3 spatial).

I strongly argue that this must be true, because every big step of progress in quantum field theory has involved putting space and time on an equal footing. Spacetime in the first place led to the special and general relativity theories, and then Dirac’s equation, giving the original Dirac spinor (predicting antimatter), was based on putting space and time on an equal footing to correct Schroedinger’s equation. Extrapolating, this needs to be done with Feynman’s path integral. The path amplitude exp(iS) must be made reversible between space and time, to put both sets of dimensions on an equal footing. One way here is simply to relate space and time dimensionally: the age of the universe, t = 1/H (where H is the Hubble parameter), can be measured in three perpendicular directions (of space or of time), just as the size of the universe can be. So there is a simple physical model.
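
As a quick sanity check on the t = 1/H figure used here (a sketch; the round value H ≈ 70 km/s/Mpc is my assumed input, not from the text):

```python
# Convert H from astronomers' units (km/s/Mpc) to 1/s, then invert.
H = 70.0                      # Hubble parameter, km/s/Mpc (assumed round value)
KM_PER_MPC = 3.0857e19        # kilometres per megaparsec
SEC_PER_YR = 3.156e7          # seconds per (Julian) year

H_per_sec = H / KM_PER_MPC    # H in 1/s
t_seconds = 1.0 / H_per_sec   # Hubble time t = 1/H
t_gyr = t_seconds / (SEC_PER_YR * 1e9)

print(round(t_gyr, 1))        # ~14.0 billion years
```

The result, about 14 billion years, is indeed close to the measured age of the universe, which is the coincidence the 1/H argument leans on.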

Maybe we can discuss the possibly relevant groups and how they relate to spins:

U(1) = SO(2)
SU(2) = SO(3) [strictly, SU(2) is the double cover: SU(2)/Z_2 = SO(3)]
SU(2) × SU(2) = SO(4) [again only up to a Z_2 quotient]
SU(3) = SO(8) [CORRECTION: SU(3) ≠ SO(8); dim SU(3) = 8 but dim SO(8) = 28]
SU(3) × SU(3) = SO(9) [CORRECTION: false; the dimensions are 16 and 36]
SU(4) = SO(6) [at the Lie algebra level; SU(4) = Spin(6) double-covers SO(6)]
SU(4) × SU(2) = SO(9) [CORRECTION: false; the dimensions are 18 and 36]
SU(5) ⊂ SO(10)
SU(5) ⊃ SU(3) × SU(2) × U(1)
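
A dimension count is a cheap filter on lists like the one above (my own sanity check, not from any cited paper): equal Lie groups must have equal dimensions, with dim SU(n) = n^2 - 1 and dim SO(n) = n(n-1)/2, so a mismatch rules an identity out outright, while a match proves nothing by itself (e.g. SU(2) merely double-covers SO(3)).

```python
def dim_su(n):
    """Dimension of SU(n)."""
    return n * n - 1

def dim_so(n):
    """Dimension of SO(n)."""
    return n * (n - 1) // 2

# Claimed identity -> (dimension of left side, dimension of right side).
claims = {
    "U(1) = SO(2)":          (1, dim_so(2)),              # dim U(1) = 1
    "SU(2) = SO(3)":         (dim_su(2), dim_so(3)),
    "SU(2) x SU(2) = SO(4)": (2 * dim_su(2), dim_so(4)),
    "SU(3) = SO(8)":         (dim_su(3), dim_so(8)),
    "SU(3) x SU(3) = SO(9)": (2 * dim_su(3), dim_so(9)),
    "SU(4) = SO(6)":         (dim_su(4), dim_so(6)),
    "SU(4) x SU(2) = SO(9)": (dim_su(4) + dim_su(2), dim_so(9)),
}
for name, (lhs, rhs) in claims.items():
    verdict = "possible" if lhs == rhs else "ruled out"
    print(f"{name}: {lhs} vs {rhs} -> {verdict}")
```

This flags SU(3) = SO(8) (8 vs 28), SU(3) × SU(3) = SO(9) (16 vs 36) and SU(4) × SU(2) = SO(9) (18 vs 36) as impossible, while the 8 gluons of QCD drop straight out of dim SU(3) = 8.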

The SU(5) Georgi-Glashow symmetry yielding the SU(3) × SU(2) × U(1) standard model (last line above) was debunked by proton stability, but there is an alternative route, the Pati-Salam group, which embeds in SO(10) rather than arising by breaking SU(5):

SO(10) ⊃ SU(4) × SU(2) × SU(2)

(Pati and Salam, Physical Review D10 (1974), p. 275).

The SU(4) gives 4 colour charges, the additional colour being interpreted as lepton charge. The SU(2) × SU(2) can be interpreted as a chiral electroweak theory, with the gauge bosons of one SU(2) handedness failing to acquire mass and so reducing from Yang-Mills to what we see as an effective Abelian U(1) electromagnetism.

Glashow, Georgi and de Rujula came up with another simple idea in 1984, “Trinification”:

SU(3)_c × SU(3)_L × SU(3)_R

The first SU(3) is QCD, the SU(3)_L × SU(3)_R breaks down into SU(2)_L × U(1)_R.

I suspect that this work, done around the time Woit was a student of Glashow at Harvard, influenced Woit. He’s basically trying to succeed where Glashow failed, and get a simple explanation for everything in low dimensions! I do believe that in some sense this is the way to go…

————————————————————————

Update: an old post of this blog is SU(2) x SU(2) = SO(4) and the Standard Model at https://nige.wordpress.com/2014/06/21/su2-x-su2-so4-and-the-standard-model/ However, since it is a decade old, it may be suffering senile dementia; likewise for https://nige.wordpress.com/2014/07/02/conjectured-theory-so6-so4-x-so2-su2-x-su2-x-u1/. I’m not working top-down from some mathematical SU(5) or SO(10) symmetry, breaking it to see what comes out and shouting Eureka! Quite the opposite: this work began at the solid, empirical foundations and builds models upward. But at some point you get some vague idea of what size of roof you will need, and try various options to see what seems to fit… don’t judge the foundations by some crazy ideas about what the roof might eventually look like.

Further update: I just found a few kilos of A4-sized notes on group theory, spin, etc., so I will summarize this material here so I can clear up paper clutter. First, I should have referred to the Zeeman effect (splitting of spectral lines in a magnetic field), or even NMR, rather than the Stern-Gerlach experiment. However, I remember actually doing the Stern-Gerlach experiment in a lab (but not Zeeman or NMR). Also, the whole basis of “quantum computing” (to the extent you can disengage it from the pseudophysics brigade of “entangled” collapsing single-wavefunction 1st quantization crap) is that you use a quantum state, like the spin of a single electron, to store a bit of information — an idea that needs an incredible amount of lying hype to “sell”, since as of today it remains in the crackpot Parkinson’s Law category of “eternal development, jobs for life, with no risk of ever completing a practical product”, not the kind of invention that you can buy.

Woit had a draft of his “Quantum Theory, Groups and Representations: An Introduction”, dated September 30, 2015, from which I made notes:

  1. Woit notes (p1) that, “read in context”, Feynman’s statement “no one understands quantum mechanics” refers to Feynman “contrasting the mathematical formalism of quantum mechanics with that of the theory of general relativity.” This is pretty honest. Usually, Feynman’s statement is quoted out of context to claim that Feynman denied QM can be understood. On the contrary, in his 1985 book QED, Feynman dismisses the Heisenberg uncertainty principle and all that, and says yes, quantum mechanics is easy to understand: particles take all possible paths, each path has a rotating phase-vector amplitude exp(iS), and most contributions arrive at the detector or your eye in random phase and cancel out. Only paths with very small actions S (which means the least possible path time, e.g. when reflecting from a mirror, or being diffracted through glass that slows the light down) escape such cancellation and are therefore mistaken for the only paths taken. This accounts for the interference in the double-slit experiment, and for the “entanglement” of the single-wavefunction crap: there are no single wavefunctions; in the path integral approach there are infinitely many wavefunction amplitudes, one for each path, so there is multipath interference and you sum all the wavefunction amplitudes. Also, Feynman’s 1985 book shows clearly that particles obey Newton’s first law between interactions, so “curved paths” are just CLASSICAL calculus approximations. The reality is that a particle is only deflected by a discrete interaction — for example, photons are deflected by gravitons discretely, not by the “curved spacetime” of general relativity. Calculus is just an approximation to a discrete series of little impulses from discrete quantum interactions… a hard empirical fact that thugs in the mainstream reject.
  2. Exactly the same occurs with water waves started from each end of a tank of water towards each other: when their amplitudes (heights) cancel as they overlap, you see calm water temporarily, but the waves are still present, passing each other and “reappearing” when the overlap-cancellation ends! The same used to occur regularly with HF radiowaves in old-fashioned multipath interference, when a signal bounced off the ionosphere hits several layers (D, E, F) at different altitudes, taking varying paths, causing signal fading at the detector when multiple out-of-phase components (each with a different path length and thus a different travel time) arrive and partially cancel out. This is a classical example. You can find other examples of path integrals in physics everywhere you look, which help to give an understanding of the path integral as a straightforward piece of mechanics, nothing mysterious or arcane.
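The multipath cancellation described above is easy to demonstrate with a toy numerical sum over paths (illustrative geometry and wavenumber only — not tied to any specific experiment): paths near the least-action route add coherently, while an equal-width bundle of paths far from it almost entirely cancels.

```python
import cmath
import math

# Toy "sum over paths": a particle goes from a source at (-1, 0) to a
# detector at (+1, 0) via an intermediate point (0, x).  Each path
# contributes an amplitude exp(i*k*S(x)) where S(x) is the path length.
# Paths near the stationary point x = 0 add coherently; paths far from
# it oscillate in phase and cancel in pairs.

k = 1000.0      # wavenumber: large k (short wavelength) = strong cancellation
dx = 0.0005     # spacing of sampled intermediate points

def path_length(x):
    """Total length of the two straight segments via the point (0, x)."""
    return 2.0 * math.sqrt(1.0 + x * x)

def window_amplitude(x_lo, x_hi):
    """Sum of path amplitudes exp(i*k*S) for intermediate points in [x_lo, x_hi]."""
    total = 0.0 + 0.0j
    x = x_lo
    while x < x_hi:
        total += cmath.exp(1j * k * path_length(x)) * dx
        x += dx
    return total

near = abs(window_amplitude(-0.05, 0.05))   # bundle around the least-action path
far  = abs(window_amplitude(0.50, 0.60))    # same-width bundle far from it
print(f"|near-stationary window| = {near:.2e}")
print(f"|far window|             = {far:.2e}")
```

The near-stationary bundle dominates by more than an order of magnitude, which is the whole content of “only paths of small action survive the cancellation”.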
  3. Woit states that Hermann Weyl created the Gruppenpest (group theory plague) in gauge theory: Lie groups describe the gauge theories, since exponentiating iθQ gives exp(iθQ), which gives the U(1) group of 1×1 matrices, while for the more complex set of SU(2) Pauli-type spin-1/2 matrices, the exponent itself contains matrices, e.g. exp(θL) where L is itself a 2×2 matrix containing elements 0, -1, +1, 0. Basically, the Lie group Spin(3) can be constructed either using quaternions or using 2×2 unitary matrices of determinant 1, in other words SU(2). This SU(2) models spin-1/2 particles, fermions, thus the Pauli matrices. The Lie group Spin(4), the double cover of SO(4), is isomorphic to SU(2) × SU(2). Basically, each generator of SO(3) is a 3×3 matrix containing 7 zero elements, in addition to -1 and +1, arranged for all intents and purposes as they occur in the SU(2) matrices. (Really, the correct terminology is that Spin(4) is “isomorphic to” SU(2) × SU(2), rather than the less mathematically precise “equal to”. God help us.) The U(1) hypercharge group is just the spin-1 rotation group SO(2) in disguise: U(1) ≅ SO(2). Thus, since electromagnetism (allegedly) uses spin-1 gauge bosons, U(1) “must” model electrodynamics and Maxwell’s equations. But spin-1 can be an effective composite of two spin-1/2 particles, so don’t trust orthodoxy here; it’s very well known even in superconductivity that pairs of spin-1/2 fermions can pair up to behave like bosons, and this is not controversial but just plain factual physics. Always beware of misinterpreted “data”.
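The exponentiation just described can be made concrete with a brute-force power-series matrix exponential (a plain-Python sketch, not taken from Woit’s book): the SO(2)-type generator L exponentiates to a rotation matrix, and i times a Pauli matrix exponentiates to a determinant-1 unitary, i.e. an SU(2) element.

```python
import math

# Exponentiating Lie-algebra generators by a brute-force power series,
# illustrating the exp(theta*L) construction discussed above.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(A, s):
    return [[A[i][j] * s for j in range(2)] for i in range(2)]

def mat_exp(A, terms=40):
    """exp(A) via the power series sum_n A^n / n! (fine for small 2x2 matrices)."""
    result = [[1, 0], [0, 1]]   # identity
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = mat_scale(mat_mul(term, A), 1.0 / n)
        result = mat_add(result, term)
    return result

theta = 0.7

# SO(2): the generator L (with L^2 = -I) exponentiates to a rotation matrix.
L = [[0, -1], [1, 0]]
R = mat_exp(mat_scale(L, theta))
print("exp(theta*L) =", R)    # entries ~ [[cos, -sin], [sin, cos]]

# SU(2): exp(i*theta*sigma_x/2) is unitary with determinant exactly 1.
sigma_x = [[0, 1], [1, 0]]
U = mat_exp(mat_scale(sigma_x, 1j * theta / 2))
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
print("det U =", det)         # ~ 1
```

The det = 1 check is precisely the “special” in SU(2): exp(iθσ/2) = cos(θ/2)·I + i·sin(θ/2)·σ, whose determinant is cos² + sin² = 1.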
  4. On p328 (section 30.5, “An example: spinors for SO(4)”) Woit says Spin(4) is isomorphic to SU(2) × SU(2), and has a matrix representation via Cliff(4, 0, R). In a nutshell, Clifford algebras build on the quaternions invented by Hamilton in 1843: -1 = ijk = i² = j² = k². William Clifford (1845-79; he died young in the pre-antibiotics TB plague) wrote in his essay The Ethics of Belief: “It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” Surely the best reason to dive deep into his mathematics of Clifford algebras (God help us)… The Clifford algebra Cl(0,1) is merely the complex numbers, whereas Cl(0,2) is Hamilton’s quaternions i, j, k, representable by 2×2 complex matrices. The late lawyer/physicist Tony Smith used to explain that for real numbers of Euclidean signature, the Clifford algebra Cl(n) has the childhood-maths binomial triangle structure with total dimension 2^n: for n=0, the dimension is 2^0 = 1; for n=1, you have 1 + 1 = 2^1; for n=2, you have 1 + 2 + 1 = 2^2 = 4; for n=3, you have 1 + 3 + 3 + 1 = 2^3 = 8; and so on. See Clifford’s Mathematical Papers, Macmillan, 1882.
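Tony Smith’s binomial-triangle count is trivial to verify: the grade-k piece of Cl(n) has dimension C(n,k) (a row of Pascal’s triangle), so the total dimension is 2^n.

```python
from math import comb

# The graded pieces of the real Clifford algebra Cl(n) have dimensions
# C(n,0), C(n,1), ..., C(n,n) -- one row of Pascal's triangle -- so the
# total dimension is 2^n, exactly the count described above.

for n in range(5):
    row = [comb(n, k) for k in range(n + 1)]
    total = sum(row)
    print(f"Cl({n}): {' + '.join(map(str, row))} = {total} = 2^{n}")
    assert total == 2 ** n
```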
  5. (Tony Smith used to comment on Woit’s blog. He made the interesting comment on 12 March 2005 at 3:13pm there, replying to quantoken: “my objective is NOT to make my model THE accepted theory. In fact, I think that the current abysmal situation with superstring theory shows that it is BAD for ANY single model … to become so ‘accepted’ that alternatives cannot be made available in the archived records of physics for evaluation by anyone with interest. The ‘archive’ part is important, because sometimes it is decades before some useful stuff is appreciated as being useful.“ At that time, Tony Smith was running his offer of a $100,000 reward to anyone who could rewrite his theory to get it past so-called “peer” review. Once a lawyer, always a lawyer. But maybe doing this stuff is the science equivalent of entering a charity lottery: not with any great expectation of winning, but hoping to make a small but useful contribution to a cause that you care about. Woit had the previous day, 11 March 2005 at 9:43am, encouraged Tony Smith: “what seems to me to be missing is some deeper link between the geometry and the dynamics.” He also at 9:16am that day dismissed the notion of elegance in a theory: “I’m a bit leery of the term ‘beauty’ … what with ‘all in the eye of the beholder’ and everything. Somewhere on the internet there’s an argument between me and Lubos [Motl] about the beauty of string theory.” His posting on Not Even Wrong where these comments occurred was headed “Clifford Modules” and Woit stated: “We know spinors are fundamental … But there are several different ways of constructing spinors … spinor fields, the Dirac equation, connections and the Yang-Mills functional are fundamental parts of the story, because we have strong experimental evidence for this. But there is quite possibly some more fundamental way of looking at these things, one which would give us the right idea about how to get beyond the standard model.
I’m guessing that such a new idea exists, but is missing now, and that whatever it is, it will be related to some new perspective on the geometry of spin.”)
  6. Woit’s chapter 41, Representations of the Lorentz group SO(3,1), ends with the Majorana (real Clifford algebra matrices) representation of Dirac’s spinor, which applies only to neutral particles where antimatter is identical to matter: ψ = ψ_c. (Woit re-visits Majorana fields in Chapter 48: basically, the Klein-Gordon equation’s real solution is the Majorana spinor.)
  7. Woit goes wrong by following mainstream dogma at p447 (part of “43.2 The Klein-Gordon equation in momentum space”), giving Feynman’s usual U(1) picture where “A positive energy particle moving forward in time” is equivalent to its antiparticle moving “backwards in time”. This is the deep failure of U(1) electrodynamics (set aside for a second that U(1) is hypercharge in the SM, and has to be mixed with SU(2) to yield Maxwell). SU(2) gives a cleaner model of electrodynamics which works (as we’ve explained above and for many years elsewhere), and it is easier to unify with SU(2)-based weak isospin. Woit tries to wear the soft teaching cap as well as the construction worker’s hard hat, and has problems keeping both in place. You can’t teach orthodoxy for exams while completely revolutionising everything (see Plato for the issues created by teachers innovating in front of kids, who then get confused when examined for competency by other folk). On p442, Woit states: “It turns out though that one cannot consistently make a physical interpretation of these [Klein-Gordon equation wavefunction] solutions as single-particle wavefunctions.”
  8. Chapter 47, “The Dirac Equation and Spin-1/2 fields” remarks: “Elementary matter particles (quarks and lepton) are spin 1/2” and are modelled by “a remarkable construction that uses Clifford algebras and its action on spinors to find a square root of the Klein-Gordon equation. The result, the Dirac operator … In the massless case, the Dirac operator decouples into two separate Weyl equations, for left-handed (1/2, 0) representations of the Lorentz Group and right-handed (0, 1/2) representation Weyl spinor fields.” The Minkowski (3,1) space is represented by Clifford algebra Cl(3,1). Dirac’s The Principles of Quantum Mechanics (4th ed.) explains how his equation is a square root of the Klein-Gordon equation (an alternative formulation of Dirac’s spinor is Weyl’s left and right handed chiral version, useful for helicity; Woit states on p491 “Helicity in some sense measures the component of the spin along the direction of the momentum of a particle, and particles with spin in the same direction as the momentum are said to have ‘right-handed helicity’…”):

Update (1 Jan 2024): copies of comments to Wilson’s “Hidden assumptions” post. First:

“Supersymmetry, as a physical concept uniting fermions and bosons, is well and truly dead, thanks to experiments at the Large Hadron Collider.”

Yes. But that doesn’t stop some physical mechanism from unifying fermions and bosons. We know from superconductivity etc that fermions can be paired up to make an effective boson, so that could happen at the elementary particle level. Maybe fermions are fundamental, and bosons are composites?

And:

I didn’t understand your reply comment on the previous post, that symmetry groups are not the way forward, when you use the Mathieu group M12 to model particles. Maybe M12 is related to symmetry groups?

My issue is that the strong interaction is well modelled by SU(3), weak interactions by SU(2). There are problems with mixing, symmetry breaking, masses and other fiddled parameters, and (in my outrageous/outraged view) the issue that there is no symmetry group included for dark energy/gravity, but really there is evidence symmetry groups are part of the big picture.

Update (2 January 2024): Dr Woit is still pondering Wilson’s no-go theorem, it seems, but he has responded to an anonymous comment objecting to his call to cut off funding for vacuous mainstream hype that fails to make testable predictions: “How am I or anyone else supposed to tell the difference between a serious argument from an informed person and something unserious that should be ignored? … You’re arguing that more hep-th funding is needed, to go to support the people adding to the 20,000 papers on the heavily overhyped topic of holography. Yes, there are too many such people on the market, so their job prospects are not great. What is needed though is not more funding to keep them doing holography, but redirection of current funding to other “non-mainstream” topics.”

Regarding the first sentence of this quotation, Sir Richard Hoare sensibly started and ended a book on a contentious subject with the fine motto: “WE SPEAK FROM FACTS NOT THEORY.” If I build a theory, I build it from facts, not from theory. It is possible. That’s not what happened in superstring theory, built on the quicksand of unobserved spin-2 gravitons, plus extra dimensions invented to accommodate those unobservable spin-2 gravitons. Woit – or even a mathematician like Wilson – could in principle check the hard physical (non-speculative) basis of this and this (the total length of both together is just 1 + 3 = 4 pages). It’s not superstring drivel. But they won’t, because of fashion about what is kosher and what is taboo. I don’t actually like those papers myself (although they compress a lot), but they show a problem with the second half of the quotation from Woit, above. It’s not always about money, mate. It’s about taboos: something far more sinister altogether. It’s about the kind of belief systems that led to the “science” of eugenics implemented for “humanity” at Auschwitz, the Marxist economic “science” that enslaved half the world, and pseudo-scientific religious superstitions defended as mainstream orthodoxy by the lives of millions of committed, moronic fools. Ah, Happy New Year everybody!

Update (2 Jan 2024), copy of a comment to Wilson’s blog:

OK, so you suggest PSU(3), which is a subgroup of M12. The problem here is that a google search for PSU(3) and M12 produces one paper by Larry Finkelstein written in 1979, which might as well be in double Dutch — or a contribution to Bertrand Russell’s “Principia Mathematica” (I’ve always thought he reversed the title of Newton’s work to make clear his book was the exact opposite…). I’m not familiar with this.

In looking at standard model particles in terms of how to unify leptons (like neutrinos) and quarks, and what charges or masses they have, I’d suggest taking a look at fig 24 at the bottom right of p30 and fig 34 on p44 of https://vixra.org/pdf/1111.0111v1.pdf (ignore the remainder of the paper for the present). (This paper is a compilation of decades of collecting useful insights; it’s not off-the-cuff conjecture/speculation.)

The key thing is to start by ignoring the fractional “electric charges” of quarks, because they’re emergent from the very large vacuum polarization shielding of a pair or triplet of quarks. The Rosetta Stone is the omega minus: a triplet of strange quarks with total electric charge -1, so the strange quarks are each -1/3.

But look at the maths in this: you have THREE similar electric charges in close proximity, which must physically produce a vacuum polarization (pairs of charged virtual fermions which align to shield the core charge within) THREE times stronger than a single charge would produce. Hence each strange quark’s hypothetical isolated electric charge is 3 × (-1/3) = -1, the same as the electron’s.

Sure, you can’t isolate a quark, but my point is that the theory used to get “fractional charges” is bunk because it doesn’t take account of the difference between the strength of the vacuum polarization shielding electric charges of a single lepton like the electron, and a triplet like the omega minus, a triplet of identical electric charges! So there’s your quark-lepton unification.

What happens to the missing “shielded” energy? The virtual particles acquire this energy, adding to their survival time beyond Heisenberg’s t = ℏ/E (virtual fermions only exist between the UV and IR cutoff energies, which translate into distances out to ~33 fm). So this acquired electric field energy allows them to briefly behave like real particles, obeying Pauli’s exclusion principle and thus gaining a quasi nuclear shell structure (near the UV cutoff, where virtual quark pairs exist) and a quasi electron structure further out (nearer the IR cutoff, where electron-positron pair production occurs). Simple calculations prove this predicts particle masses: Table 1 in https://vixra.org/pdf/1408.0151v1.pdf

Now back to https://vixra.org/pdf/1111.0111v1.pdf at Fig 34 in the middle of p44 (this figure should be in colour, but isn’t, follow the grey lines). Conventionally, Fermi’s point theory of beta decay says a muon decays into an electron, and strange quarks decay into upquarks. But when the W- propagator was added to Fermi’s theory, an anomaly emerged (compare top half of diagram to bottom half): if a muon decays into an electron, then the corresponding Feynman diagram shows a strange quark decaying also into an ELECTRON. I rest my case, your honour. Beware mainstream self-contradictory quackery.

I tried to shorten and improve that argument:

Can I just ask please if you’re aware of the following “anomalies” with the strange quark (electric charge -1/3) and the omega minus (a triplet of strange quarks), which are normally swept under the carpet but have enormous implications (I’ll try to keep this brief and clear).

  1. Fermi’s point theory of beta decay says a muon decays into an electron, and strange quarks decay into upquarks.

But when the W- propagator was added to Fermi’s theory, an anomaly emerged: if a muon decays into an electron, it must briefly become a W- boson en route; the corresponding Feynman diagram for beta decay then shows a strange QUARK, decaying via a W- boson, turning into an electron! There you have quark-lepton unification. (Diagram: https://vixra.org/pdf/1111.0111v1.pdf, Fig 34 in the middle of p44.)

  2. Fractional “electric charges” of quarks are artifacts emergent from the very large vacuum polarization shielding of a pair or triplet of quarks, and this is proved by the omega minus, which should be viewed as the Rosetta Stone for understanding everything: it is a triplet of strange quarks with total electric charge -1, so the strange quarks are each -1/3. This makes it understandably simple!

Look at the maths in this: you have THREE similar electric charges in close proximity, which must physically produce a vacuum polarization (pairs of charged virtual fermions which align to shield the core charge within) THREE times stronger than a single charge would produce. It’s like wearing three pairs of sunglasses at once: you get three times as much filtering. Hence each strange quark’s hypothetical isolated electric charge is 3 × (-1/3) = -1, the same as the electron’s.

So there’s your quark-lepton unification.

The missing “shielded” energy can easily be calculated here: 2/3 of the -1 electric charge per quark is shielded, so you see an apparent total omega-minus charge of -1. The virtual particles acquire this energy, adding to their survival time beyond Heisenberg’s t = ℏ/E (virtual fermions only exist between the UV and IR cutoff energies, which translate into distances out to ~33 fm). So this acquired electric field energy allows them to briefly behave like real particles, obeying Pauli’s exclusion principle and thus gaining a quasi nuclear shell structure (near the UV cutoff, where virtual quark pairs exist) and a quasi electron structure further out (nearer the IR cutoff, where electron-positron pair production occurs). Simple calculations prove this predicts particle masses: Table 1 in https://vixra.org/pdf/1408.0151v1.pdf (which is merely based on the nuclear shell model magic numbers analogy). It should also be possible to make more detailed calculations by computing the statistically average mass of the polarized vacuum particles using the easily deduced omega minus shielded electric field energy.
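For what it’s worth, the bookkeeping of this shielding argument can be written out explicitly (this just restates the arithmetic above — it is the post’s conjecture, not mainstream QFT):

```python
# Bookkeeping for the vacuum-polarization shielding argument above.
# (This restates the post's conjectured arithmetic; it is not standard QFT.)

bare_charge_per_quark = -1.0   # hypothetical isolated ("electron-like") charge
n_quarks = 3                   # omega minus: a triplet of strange quarks

# The argument: three like charges in proximity polarize the vacuum three
# times as strongly, so each core charge is shielded by a factor n_quarks.
observed_charge_per_quark = bare_charge_per_quark / n_quarks
observed_total = n_quarks * observed_charge_per_quark

# Fraction of each quark's electric field energy absorbed by the vacuum:
shielded_fraction = 1 - observed_charge_per_quark / bare_charge_per_quark

print(f"observed charge per strange quark: {observed_charge_per_quark:+.3f}")  # -1/3
print(f"observed omega-minus charge:       {observed_total:+.3f}")             # -1
print(f"shielded fraction per quark:       {shielded_fraction:.3f}")           # 2/3
```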

The bottom line is: if you are going to build a model of particles, beware that part (maybe all) of the existing stagnation may come from people trying to model the standard model quarks etc. with all their supposed features. If it turns out (as per the above) that the “anomalies” imply the quarks are “really” integer charged and merely appear fractionally charged from a distance, due to strong vacuum polarization shielding from existing in pairs/triplets rather than singletons, then you’re done for. Newlands made the mistake of assuming chemists had discovered all the elements when he formulated his Law of Octaves. They had missed key elements, so his model was false.

THE ROAD AHEAD:

Contrast the above approach to unification with superstrings, i.e. SUSY unification using an argument consisting of running couplings supposedly converging around 10^15 GeV.


For unification, just calculate energy conservation. Ignore collision energy in GeV; just look at distance from a particle core, what the couplings are, and why they vary due to vacuum polarization mechanisms etc.


First, it’s clear that not all particles have the strong charge: with leptons there is just electroweak. If “unification” is true, there must be some way to transform a lepton into a quark.


Consider field energy conservation: quarks have less electric charge than leptons. The electric field extends to infinity because its vector bosons are massless, but for the strong and weak fields the vector bosons are massive with a short range.


Take the simplest example to analyse: the omega minus, composed of three strange quarks, each with -1/3 of the electric charge of a charged lepton (electron/muon/tauon).


Now consider the vacuum polarization: if strong interactions and the associated color charge are an emergent property of having 2 or 3 particles in proximity, a simple way to understand this, for the omega minus, is to imagine 2 or 3 particles with unit electric charge like a charged lepton: the only way they can exist in proximity without violating Pauli’s exclusion principle is to have an extra quantum number for color charge, which means a gluon field that contains energy.


You can integrate the total gluon field energy per strange quark over radial distance. This shouldn’t be that difficult, because although the coupling gets bigger at much higher energy, that corresponds to very tiny distances with very small volumes (volume is proportional to the cube of the radius), so you’re multiplying a large, uncertain, coupling-controlled field energy density by an extremely small volume, which goes to zero as the coupling gets large near zero radius.


(The mainstream “unification” approach, whereby radii from a particle core are inverted and called “energy scale”, seriously obfuscates the whole issue, particularly as the volume gets smaller as the cube of the radius, and thus as the inverse cube of the energy scale!)


Therefore, if “unification” between quarks and leptons is real, that calculation should give the total gluon-mediated colour-field (strong interaction) energy around a strange quark, in Joules. You then simply compare that result to the energy in the electric field of a charged lepton. If you can’t decide what small radius to use, don’t worry: simply assume the total electric field energy of an electron is its rest mass energy, 0.511 MeV.
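As a template for the suggested calculation, here is the same radial-shell integration pattern applied to a case with a known closed-form answer: the classical electrostatic field energy of a point charge outside an inner cutoff radius (taking the classical electron radius as cutoff, which gives half the electron rest energy). This is ordinary electrostatics, not the proposed gluon-field calculation — it only demonstrates that the shell integral ∫ u(r)·4πr² dr is straightforward to do numerically and converges:

```python
import math

# Energy in the classical Coulomb field of a point charge e outside radius a:
#   U = integral_a^inf (eps0*E^2/2) * 4*pi*r^2 dr = e^2 / (8*pi*eps0*a),
# evaluated numerically on logarithmically spaced shells, then compared
# with the analytic result.  (CODATA constants; SI units throughout.)

E_CHARGE = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
A = 2.8179403262e-15           # inner cutoff: classical electron radius, m
R_OUT = 1e-9                   # outer cutoff; the tail beyond is negligible, m

def field_energy_density(r):
    """eps0 * E(r)^2 / 2 for the Coulomb field of a point charge."""
    E = E_CHARGE / (4 * math.pi * EPS0 * r * r)
    return 0.5 * EPS0 * E * E

def shell_integral(r_in, r_out, steps=20000):
    """Sum u(r) * 4*pi*r^2 * dr over logarithmically spaced shells."""
    total = 0.0
    ratio = (r_out / r_in) ** (1.0 / steps)
    r = r_in
    for _ in range(steps):
        r_next = r * ratio
        r_mid = math.sqrt(r * r_next)   # geometric midpoint of the shell
        total += field_energy_density(r_mid) * 4 * math.pi * r_mid**2 * (r_next - r)
        r = r_next
    return total

numeric = shell_integral(A, R_OUT)
analytic = E_CHARGE**2 / (8 * math.pi * EPS0 * A)
print(f"numeric  = {numeric:.6e} J")
print(f"analytic = {analytic:.6e} J (about 0.2555 MeV, half the electron rest energy)")
```

The gluon-field version of this would swap `field_energy_density` for a running-coupling energy density; the point of the toy is that the small shell volumes near r = 0 tame the growing coupling, exactly as argued above.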


If the former is 2/3 of the latter, this is evidence that the “unification” theory for the transformation between leptons and quarks amounts to electric charge field energy being converted into strong colour charge energy, thus explaining why quarks have fractional charges; i.e. a strange quark has 1/3 of the electron’s electric charge because 2/3 of the electric charge energy is turned into colour charge energy.


There’s still plenty of fun here, e.g. calculating the energy in the weak (W/Z mediated) field of a particle, and looking at different quarks’ charges (beyond the strange quark).

Since the electroweak unification successes of the 1960s and the 1970s successes of the standard model incorporating strong interactions, quantum gravity has been pursued mostly through supersymmetry, which assumes (1) the simplest possible quantum gravitational interaction, depicted as an exchange of spin-2 quanta between two masses, together with (2) the assumption that the word “unification” implies equal coupling constants for all forces at very high energy scales, corresponding to early times in big bang cosmology. Both of these assumptions are fundamentally wrong: (1) gravity is not the simplest possible exchange of field quanta but a Casimir-type exclusion of radiation (dark energy is the cause of it), and (2) unification implies that the polarized-vacuum, volume-integrated field energy densities for different particles differ according to the proximity of 1, 2 or 3 lepton or quark cores in leptons, mesons or baryons, respectively: the strong force coupling for a lepton singleton is zero because none of its electromagnetic coupling is converted into a strong field, whereas this is not the case for quarks! This is a new direction for physics where calculations can be checked against data immediately, to make progress!

6 thoughts on “The final theory”

  1. I’m not proud of my papers that wind handles on mathematical barrel organs. I know that they have nothing to do with physics. But it gets my name to the point where some physicists (not many, but a few) are prepared to listen to some of my real ideas. I have fought hard for ten years to get my co-authors to allow some real physics into the papers, but I have consistently failed. So please don’t blame me for winding the handle on the barrel organ – that’s what my co-authors do.

    And you are right to think that 2202.08263 is only hosted on the arxiv because it has little or no contact to physics. As soon as I put any physics into my papers, the arXiv reject them. Again, please don’t blame me for having to remove all the physics in order to get my papers accepted.

    1. “As soon as I put any physics into my papers, the arXiv reject them.”

      Are you telling me? It’s run by string advocates. The whole defence of string theory by Witten is that there are no better theories. So anything better must be airbrushed out. Marxism was defended the same way: as soon as Trotsky criticised Stalin, he was in trouble.

      But you could put papers with practical calculations elsewhere, even if just on Internet Archive or vixra, or you can upload PDF files straight to your wordpress blog (although I’m not sure how permanent this platform is).

  2. If you want some actual numbers, you should look at some other papers of mine, for example 2109.06626, especially section 9, where I analyse in great detail the difference between the inertial mass ratio of electron to proton, and the gravitational mass ratio, and find a 3 sigma signal in the literature in 1949-1969 that indicates that these may not be the same. From 1973 onwards, the standard model took over from classical electromagnetism, so that the gravitational mass ratio no longer contributed to any experimental analysis, and all trace of the discrepancy disappeared. But if I extrapolate my findings to 2023, I predict an average discrepancy between inertial and gravitational mass of a lump of platinum-iridium of around 35 micrograms per kilogram. This is pretty much the level of uncertainty or inconsistency in calibrations of copies of the IPK over 50 years, that eventually forced a change in the definition of the kilogram.

    But my model has much the same effect as yours, because between 1973 and 2023 the gravitational relationship between the Earth and the “rest of the universe” has changed measurably, principally as a result of the decrease in angle of tilt of the Earth’s axis. I put in some numbers, and mix gravitational and inertial mass of the electron in the ratio sin^2 to cos^2 of the Weinberg angle, and this is the prediction I get. Taking into account that the gravitational mass is measured by quantum interactions (in my model, with neutrinos, but it really doesn’t matter what sort of particles they are), there is a spread of values that are measured, not a single precise value as in a classical theory, and local variations in quantum gravitational deviations from classical theory may create significant noise to mask the signal. Experiment seems to show a signal to noise ratio of around 1, which is a bit disappointing, so we need more targeted experiments.

  3. The Stern-Gerlach experiment is an experiment on silver atoms, not on elementary particles. A silver atom is large enough to be practically “macroscopic”, so that it has a spin direction that can be quite precisely “measured”, and yet this direction is still experimentally discrete. If you try and pretend this direction is intrinsically continuous, you get nonsense. The illusion of continuity is provided by the external environment and the macroscopic experiment. The same applies to the left-handed nature of beta decay, in which the spin in the Wu experiment is a property of a cobalt-60 nucleus, which contains a huge number of elementary particles. The “direction of spin” can thus be defined relatively precisely, but not exactly. On the other hand, experiments on electron spin and photon polarisation show that at the elementary particle level the discreteness is much more coarse-grained.
