Edward A. Schuert for understanding the path integral

Above: is there any relationship between Edward A. Schuert’s method of plotting particle trajectories, falling from a height through air with differing wind speeds and directions at different altitudes, and the quantum field path integral? At first sight, none, because the path integral considers multiple paths with the same point of origin and termination, whereas the fallout particle trajectories shown above don’t all end up at the same termination point (shown for the 4.5 megaton “cleaner” 95% fusion Los Alamos Redwing-Navajo test at Bikini lagoon in 1956, a device of only 5% fission yield, or one tenth the fallout of typical Teller-Ulam devices; an innovation opposed automatically by everyone in the world, arms controllers and the military alike, who all loved dirty bombs). Additionally, the whole point of the path integral is that each particle has a phase vector, usually denoted by exp(iS) in complex space (although if you lack appreciation for mathematics you can, for bog-standard calculations, replace exp(iS) with cos S, S being the action, expressed in units of Planck’s constant, h, divided by twice Pi). This action is the Lagrangian energy of a given path integrated over time. So the path integral is a double integral, in which you’re calculating the sum of the phases of all possible paths, each with its own Lagrangian integral over time. If we are going to consider only very simple phenomena, perhaps we can go one step further, replacing S in cos S with the Hamiltonian-time product, S ~ -Ht, so that exp(iS) gets replaced with cos(-Ht), where H is the Hamiltonian energy (multiplied by twice Pi, and divided by Planck’s constant, h). Then, as Feynman shows in the Schuert-like spatial “path integral” diagrams of his 1985 book QED for light reflection, you just have a phase vector in each photon which rotates with a frequency equal to the frequency of the photon (duh!) as the photon moves!
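As a concrete toy version of that rotating-arrow picture (a minimal numpy sketch with invented geometry and wavelength, not Feynman’s actual figures): each path from a source to a detector via a point on a mirror gets a unit arrow rotated in proportion to the path length, and summing the arrows shows the near-stationary-phase paths around the mirror’s centre dominate the total amplitude. The cos S shortcut mentioned above just amounts to summing the real parts of the arrows.

```python
import numpy as np

# Feynman-style "arrows" for light reflecting off a mirror: one path per
# reflection point x on the mirror (y = 0), phase proportional to path length.
wavelength = 1.0
source   = np.array([-5.0, 3.0])
detector = np.array([ 5.0, 3.0])
xs = np.linspace(-6, 6, 201)                          # candidate reflection points
paths = (np.hypot(xs - source[0], source[1]) +
         np.hypot(xs - detector[0], detector[1]))     # source -> mirror -> detector
arrows = np.exp(1j * 2 * np.pi * paths / wavelength)  # exp(iS)-style phase arrows
print("all paths:     ", abs(arrows.sum()))
print("central third: ", abs(arrows[67:134].sum()))   # stationary-phase region dominates
```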

So, looking again at Schuert’s graph, and comparing it to the QFT path integral as Feynman depicts it in several spatial (not space versus time, as classically done) graphs in his 1985 QED book, you can develop a clearer understanding of what’s really going on in the latter. For example, suppose Schuert had wanted not to see the “big picture” of where particles end up, but merely wanted to see which fallout particles arrive at a fixed spatial point in the fallout area. Then he would ignore all particle trajectories that didn’t end up at that termination point. All he wants to know, then, is what arrives at the designated location. This helps to understand what the hell is going on, if you want a mechanical understanding of the universe, which we do, if we aren’t all 100% total quacks.

In the path integral, you’re working out the multipath interference amplitude by summing all possible spatial paths, where the individual paths have a phase amplitude that’s a function of the action (K.E. – P.E. integrated over a fixed time for a path; the amplitude is always for multipath interference where, at a given time, the paths arrive at a fixed spatial point to interfere). This treats space and time differently, and Dr Woit argues for using the Euclidean, not Minkowski, signature for such integrals. In the usual path integral for SM particle physics cross-section and reaction-rate type calculations, the amplitudes for different paths vary, due to varying SPATIAL configurations over a FIXED TIME for all the paths involved (every path integrated must arrive at the spatial end point at the SAME time), and are summed to give the total amplitude at a FIXED SPATIAL ENDPOINT LOCATION and for a FIXED TIME. Schuert’s plots, and Feynman’s revolutionary all-spatial path integral diagrams in his 1985 QED book, are a step forward in physical understanding that will forever be ignored by the epicycles/supersymmetry nuts. Shame!
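Here is that “double integral” as a toy computation (a sketch under assumed units, h-bar = 1, free particle, invented numbers): the inner integral is the Lagrangian integrated over the fixed time interval for each sampled path, and the outer sum adds the phases of paths sharing the same spatial endpoints and arrival time.

```python
import numpy as np

# Sum exp(iS) over random spatial paths from x=0 at t=0 to x=1 at t=T,
# all pinned to the SAME endpoints and the SAME total time, as above.
rng = np.random.default_rng(0)
m, T, steps, n_paths = 1.0, 1.0, 50, 20000
dt = T / steps
t = np.linspace(0, T, steps + 1)
straight = t / T                                 # the classical least-action path
amp = 0j
for _ in range(n_paths):
    wiggle = rng.normal(0, 0.05, steps + 1)
    wiggle[0] = wiggle[-1] = 0.0                 # endpoints fixed
    x = straight + wiggle
    v = np.diff(x) / dt
    S = np.sum(0.5 * m * v**2) * dt              # inner integral: S = ∫ L dt (V = 0)
    amp += np.exp(1j * S)                        # outer sum over the paths
print("phase-sum magnitude per path:", abs(amp) / n_paths)
```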

There are loads of other “clues”. One massive issue which again is totally ignored by the mainstream (including PW) and by popular science writers is that the quantum electrodynamic propagator has the form 1/(m + k)^2, where m is the term for the virtual massive (short-ranged) electromagnetic field quanta (e.g. the virtual electrons and positrons that contribute vacuum polarization shielding and other effects between IR and UV cutoffs), and k is the term for the massless (infinite-range) classical Coulomb field quanta (virtual photons, which cause all electromagnetic interactions at energies below the IR cutoff, i.e. below collision energies of about 1 MeV, which is the minimum energy needed for pair production of electrons and positrons).
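Taking the propagator form quoted above at face value (a sketch of this claim only), the two contributions separate cleanly in the two limits:

```python
import sympy as sp

# 1/(m + k)^2: the massless limit recovers the long-range Coulomb-like 1/k^2
# term, while k -> 0 leaves only the short-range massive 1/m^2 term.
k, m = sp.symbols('k m', positive=True)
prop = 1 / (m + k)**2
print(sp.limit(prop, m, 0))   # k**(-2): massless, infinite-range quanta
print(prop.subs(k, 0))        # m**(-2): massive, short-range quanta
```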

The point is, you have two separate contributions to the mass of a particle from such a propagator: k gives you the rest mass, while m gives you the additional mass due to the running coupling at collision energies >1 MeV. (See for instance Fig. 1 in https://vixra.org/pdf/1408.0151v1.pdf .)

The fact that you can separate the Coulomb propagator’s classical mass of a fermion at low energy (<1 MeV) from the increased mass due to the running coupling at higher energy proves that there’s a physical mechanism for particle masses in general: the virtual fermions must contribute the increase in mass at high energy via vacuum polarization, which pulls them apart, taking Coulomb field energy and thus shielding the electric charge (the experimentally measured and proved running of the QED coupling with energy). In being polarized by the electric field, the virtual positron and electron pair (or muon, or tauon, or whatever) soaks up real electric field energy E in addition to Heisenberg’s borrowed vacuum energy (h-bar/t). So the virtual particles must have a total energy E + (h-bar/t), which allows them to turn the energy they have absorbed (in being polarized) into mass. This understanding of the two terms in the propagator, m and k, therefore gives you a very simple mechanistic basis for predicting all particle masses, m, and shows how the mass gap originates from treating the propagator as a simple physical model of real phenomena, and not as sacred scripture dictated by a God. But efforts like this to explain all SM parameters and masses are unfashionable to superstringers.
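The ~1 MeV IR cutoff and the Heisenberg “borrowed” energy h-bar/t invoked above follow from standard constants; a quick arithmetic check:

```python
# Pair-production threshold (twice the electron rest energy) and the
# corresponding Heisenberg lifetime of a borrowed-energy fluctuation.
electron_rest_energy_MeV = 0.511
hbar_MeV_s = 6.582e-22                        # h-bar in MeV*seconds
E_threshold = 2 * electron_rest_energy_MeV    # ~1.02 MeV, the quoted IR cutoff
t_lifetime = hbar_MeV_s / E_threshold         # ~6.4e-22 s for the virtual pair
print(E_threshold, "MeV;", t_lifetime, "s")
```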

The media’s continued obsession with Nukemap lies (run by a colleague of John Horgan’s at Stevens Institute in the home of evil, America) has been driving me stir crazy with anger, as it continues the lying “WMD disarmament will lead to universal love and world peace” mythology (debunked for poison gas in the 1930s, when Angell, Noel-Baker, and other Nobel Peace Prize winning liars used it to start WWII, with no retribution at all, just as Nobel Laureate Dr Carrel’s call for gas chamber eugenics in 1938, in a book that became a German best-seller called Man the Unknown, went unpunished), the mythology that nuclear weapons shouldn’t be used to deter the invasions that kill kids. The Nukemap guy has a blog, and years ago I explained he was wrong in a post about his deceit on my nuclear weapons facts blog, receiving the first and only comment from him on my blog; I then deleted the post (though it is saved on a HDD with his comment), hoping he’d correct his stuff. But he didn’t. It delivers what people want: lies that nuclear weapons can only be used to kill huge numbers of civilians, not to stop invasions and DETER war. There’s nothing about the most important effect of nuclear weapons in Nukemap: DETERRENCE IS AN EFFECT OF NUCLEAR WEAPONS YOU IGNORE AT YOUR PERIL, AND AT THE PERIL OF UKRAINIAN KIDS, AND, IN FUTURE, AMERICAN KIDS.

I remember the same frustration at groupthink fascism (yep, fascism is what killing kids for eugenics pseudoscience, or whatever Marxists use as their so-called “excuse”, amounts to; and get lost, Mr “Godwin’s law”) back in 1997, when I met the physicist David A. Chambers, who had done integrals of the energy delivered on the screen in the Feynman-style double-slit experiment, using a laser and photomultiplier. The key thing here, as I saw when he showed me, was that you can make pinholes (yep, with a small pin!) in the screen at key places and analyse what light gets through: the “interference fringe” spots where photons are arriving and cancel out or reinforce (depending on whether their amplitudes are in-phase or out-of-phase, as discussed two paragraphs above – see the figures in Feynman’s 1985 book, QED: The Strange Theory of Light and Matter) can be the pinhole locations! You can learn a lot about the mechanics this way, even without firing photons one at a time, if you measure the amount of light getting through. Also, when you actually do the experiment (rather than looking at the exaggerated diagrams in certain obfuscating books), the gap between the two slits in front of the screen has to be so small that there is no mystery as to what is going on: the transverse extent of a photon will always overlap both slits, and so be affected by both! The whole mystery arises from the artificially exaggerated gap between slits shown in textbooks. With such a large gap between two slits, you simply don’t get interference fringes! Anyway, I checked and published Chambers’s paper in the first issue of Science World magazine, ISSN 1367-6172 (if I remember the barcode correctly), and then got simply ill-informed abuse and death threats in response, from (I think) some nutters at Hull University. The police weren’t interested in this, unsurprisingly. You can’t tell the facts without upsetting nutters!
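The point about exaggerated textbook slit gaps is easy to quantify with the standard fringe-spacing formula, Δy = λL/d (illustrative numbers assumed here, not Chambers’s actual apparatus):

```python
# Fringe spacing on the screen: widen the slit gap d by a factor of 50 and
# the fringes shrink from millimetres to a barely resolvable smear.
wavelength = 633e-9        # HeNe laser, metres
L = 1.0                    # slits-to-screen distance, metres
for d in (0.1e-3, 5e-3):   # realistic vs textbook-exaggerated slit separation
    print(f"d = {d*1e3:.1f} mm -> fringe spacing = {wavelength*L/d*1e3:.3f} mm")
```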

Going back to the Stevens Institute where John Horgan is, I followed a link from Woit’s blog to Horgan’s site, containing back-numbers of some of his Scientific American articles and some general posts, before realising he was at Stevens, where Nukemap originates, allegedly (actually it goes back to the terrible Carter administration’s politically correct – i.e. trash – 1977 version of Glasstone’s book, The Effects of Nuclear Weapons, which deletes all the useful nuclear test data on protective measures found in previous versions, creating the delusion that a nuclear bomb on an unobstructed desert creates the same effect as in a highly shielded concrete city, where buildings PROVABLY absorb all the effects – radiation, and also blast, as proved by Lord Penney to the continuing horror of the Pentagon’s nuke disarmament freaks – VERY effectively, reducing casualties by a factor on the order of 100 from what you get with Glasstone’s assumption of nukes over nudist beaches). Anyway, Horgan has a blog post claiming we have free-will. I don’t think he wants my comment on his site, because, like all the Scientific American fascists and the general American fascism in the media over there, he dismisses anyone with proven facts as a quack without bothering to check his facts first; also, he’s at the same place as the Nukemap charlatan who seemingly wants Putin to go on murdering people without credible nuclear deterrence (correct me if you have proof he’s corrected Nukemap now, please), so I’ll comment here instead: Jews in Nazi territory had a very limited free-will choice of slavery or death, if lucky (if unlucky, they were first used in science experiments to determine survival time in a vacuum chamber or in ice-cold water, etc.). Most people in the world have constraints on their free-will, hopefully not all as bad as those in the most extreme fascist regimes! In 2021, 1.07 per 10,000 people in the UK committed suicide, the most extreme form of free-will. Guess which country tops the charts of suicide rates? Correct! Russia, just above the figure for the UK’s media dictatorship and pseudo-democracy (Russia had 1.16 per 10,000 people in 2019!). So certainly, free-will appears to exist, in some “form”, for everyone: you can always go jump off a cliff if reading my stuff depresses you even more than it depresses me to write it. (Don’t do that!) What’s in question is not the qualitative existence of free-will, but its quantitative distribution as a function of wealth, country, race, education, and so on. Clearly, we don’t all have the same amount of free-will, and those with the greatest choices tend to be most loath to make good use of this luxury by going off the beaten track in an honest way, particularly if they have got to where they are by being conformist. Those who got millions easily to fund their adventures, like Trump and Meg/Harry, display the most free-will (alas, usually the easy “controversial” forms of it, rather than 100% originality), and become polarizing media figures (called “marmite” if you are British; i.e. something 50%-loathed, 50%-loved).

UPDATE (31 DECEMBER 2022): if you want a really good WICKED laugh, and you are like me a practical mathematician and NOT an elitist snob who believes God, the universe, and everything is a “beautiful equation” (Feynman has a pretty good debunking of this, based on the difficulty of finding a non-perturbative – i.e. mathematically exact – solution to anything, such as the infinite series of terms in the perturbative expansion of even the simplest two-particle interaction in the REAL WORLD, rather than some BS world certain elitist mathematicians live in), then Dr Peter Woit has a new blog post for you to enjoy: “Earlier this year I bought a copy of the recently published version of Grothendieck’s Récoltes et Semailles, and spent quite a lot of time reading it. I wrote a bit about it here, intended to write something much longer when I finished reading, but I’ve given up on that idea. At some point this past fall I stopped reading, having made it through all but 100 pages or so of the roughly 1900 total. I planned to pick it up again and finish, but haven’t managed to bring myself to do that, largely because getting to the end would mean I should write something, and the task of doing justice to this text looks far too difficult.” Ha. Ha Ha. Serves you right. Bertrand Russell took over 100 pages to “prove” 1 + 1 = 2 in his acclaimed pure mathematics book. In the real world, 1 + 1 = 2 is always a lie, because no two real-world electrons have precisely the same polarized vacuum state around them (which partly shields their core electric charge and has effects on mass, spin, magnetic moment, etc., etc.), and this state is inherently non-deterministic and NON-MATHEMATICAL, due to the random nature of pair production in that vacuum shield. Mathematics is a human invention of ego, not a real-world phenomenon. The fact anyone thinks differently, with all the physics evidence against them, tells you to STAY CLEAR OF THEM. They’re the nutters.

Above: very brief PDF “flavour” extracts from two long books that contain data vital to the nuclear weapons deterrence use and deterrence effects debate today, but which are basically as unavailable to most people as Top Secret classified bomb design documents: John A. Northrop, Handbook of Nuclear Weapon Effects Abstracted from EM-1, and Frank H. Shelton, Reflections of a Nuclear Weaponeer (2nd revised, updated and expanded edition, 1990). In my opinion, for what it is worth, the entire subject as presented in the populist media is corrupted; that media has always lied maliciously to encourage wars while pretending to do the opposite.

Update 5 Jan 2023: John Horgan’s blog now contains the nonsense “no-go theorem” style BS quotation:

“It is true that you can convince yourself that you understand quantum mechanics without calculus (or using simpler mathematics than that which professional physicists use) but that’s totally delusional. Good luck calculating the Lamb shift!”

The whole point of Feynman diagrams is that any perturbative correction whatsoever in QM or QFT, including the Lamb shift in hydrogen spectra, can be calculated very simply by a two-year-old, drawing two-year-old style simple interaction diagrams and then applying Feynman’s very simple calculating rules (counting up the number of vertices in the diagrams to determine the total power of the interaction coupling, etc.). Sorry, this guy doesn’t understand that each term in the perturbative expansion (equivalent to the “path integral”, which CANNOT be directly evaluated using calculus in general) has a simple calculating procedure [using Feynman’s rules]. Things get more complicated for the strong nuclear interaction, but even then you can use simple lattice approximations to evaluate it reasonably well on a computer, without getting an [exact] analytical solution using calculus to [the] Lagrangian (impossible to date). Cheers!

(I’ve submitted this comment there but it has as much chance of appearing as a comment on the Nukemap guy’s site. They only allow comments that appease them, to make it appear as if they don’t make mistakes. Typical media charlatan con-trick! Duh!)
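To put a number on the vertex-counting point in the comment above (standard QED coupling factors; a sketch, not a Lamb shift calculation):

```python
# Each QED vertex contributes one factor of the charge e; since
# alpha = e^2/(4*pi), a diagram with n vertices scales as alpha^(n/2),
# so each extra pair of vertices suppresses the term by ~1/137.
alpha = 1 / 137.036
for n_vertices in (2, 4, 6):
    power = n_vertices // 2
    print(f"{n_vertices} vertices -> relative size ~ alpha^{power} = {alpha**power:.2e}")
```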

Update: 8 January 2023. Professor John Horgan at Stevens has another blog post up, referring to a new 2023-dated Nature paper: Michael Park, Erin Leahey and Russell J. Funk, “Papers and patents are becoming less disruptive over time”, Nature, volume 613, pages 138–144 (2023), which states: “Theories of scientific and technological change view discovery and invention as endogenous processes [1,2], wherein previous accumulated knowledge enables future progress by allowing researchers to, in Newton’s words, ‘stand on the shoulders of giants’ [3,4,5,6,7]. Recent decades have witnessed exponential growth in the volume of new scientific and technological knowledge, thereby creating conditions that should be ripe for major advances [8,9]. Yet contrary to this view, studies suggest that progress is slowing in several major fields [10,11]. Here, we analyse these claims at scale across six decades, using data on 45 million papers and 3.9 million patents from six large-scale datasets … We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions. This pattern holds universally across fields and is robust across multiple different citation- and text-based metrics [1,13,14,15,16,17]. Subsequently, we link this decline in disruptiveness to a narrowing in the use of previous knowledge, allowing us to reconcile the patterns we observe with the ‘shoulders of giants’ view. We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices or field-specific factors. Overall, our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.” Yup. It’s called “practicality”, and is what happened when Marx and Engels’s Communist Manifesto was implemented by Stalin, requiring the massacre of millions of Ukrainians.

My dad was an idealistic Communist, as am I, in principle. However, no humane person who is sane and rational can go along with the “practicalities of Communism”, including the genocide and dictatorship involved in what Stalinists like Putin refer to as the “practicality of implementation”. The early 1920s USSR had 20,000,000 small farms. Lenin, Stalin and Trotsky wanted more efficient, social, Communist big farms, called collectives – the State-owned “kolkhozy” – and tried to use propaganda to make this work from 1921-1928. Problem: the propaganda failed, and by 1927 USSR statistics proved that only 0.8% of the 20,000,000 small farmers had joined the efficient State-owned collectives or kolkhozy. Stalin was thus “forced” to use starvation under Five-Year Plans to encourage obedience. At this point, “Communism” became a fascist dictatorship of theft and genocide. Millions were starved to death in Ukraine for the crime of not joining the collectives. They were driven off their land, their properties were burned, their livestock stolen by the state, and they were told that people who refuse to work on the State collectives (not their own land) “shall not eat”. So they starved to death under state compulsion. What difference to Anne Frank in Belsen under Hitler? Not a bit. Using Comrade Corbyn’s preferred adjectives (but not, of course, his preferred analogy!), you can’t put a cigarette paper between Stalin and Hitler. They jointly invaded Poland in September 1939. They both massacred scapegoat “capitalist Jews”. They were both evil. The good news? In 1931, USSR statistics showed that 52.7% of farmers had joined the kolkhozy, rather than see their kids starve to death before their eyes. Wonderful! By 1940, it was 97% membership of the kolkhozy, and 99.9% of cultivated USSR land under state control. Communism made practical. This is analogous to what has happened to “big science”.

By bringing everything under dictatorial control for “efficiency”, it has been turned into a similar conformity of rigor mortis:

Above: graph from the 2023 Nature paper, “A new study by Russell Funk et al shows a sharp decline in “disruptive” science over the past 60 years” – Professor Horgan, from https://www.johnhorgan.org/blog/posts/42122.

THE FOLLOWING IS A BRIEF, RELEVANT EXTRACT FROM A 2015 POST on our other blog: Lapp’s 1965 book The New Priesthood begins (page 1) with the following quotation from President Woodrow Wilson, on the dangers of [BIG SCIENCE] dictatorship by secretive expert advisers, like a Manhattan project:

“What I fear is a government of experts.  God forbid that in a democratic society we should resign the task and give the government over to experts.  What are we for if we are to be scientifically taken care of by a small number of gentlemen who are the only men who understand the job?  Because if we don’t understand the job, then we are not a free people.”

Lapp then points out how he saw science change during WWII from a poorly funded, low-prestige business of struggling individuals pursuing unpopular technical questions to find the truth, into today’s “big science” of groupthink-dominated government (taxpayer)-funded teams of aim-biased technicians, seeking wealth and prestige, paying only lip-service to freedom and objectivity:

“Today … the lone researcher is a rara avis (rare bird); most scientists team up to work together toward agreed upon objectives [not an unbiased agenda]. … A single experiment may involve a hundred scientists … the research is no longer unspecified as to objective … democracy faces its most severe test in preserving its traditions in an age of scientific revolution. … scientists in key advisory positions wield enormous power.  The ordinary checks and balances in a democracy fail when the Congress, for example, is incapable of intelligent discourse on vital issues.  The danger to our democracy is that national policy will be decided by the few acting without even attempting to enter a public discourse … our democracy will become a timocracy. … Even if no formal secrecy is invoked by the government, an issue might as well be classified ‘secret’ if the people in a democracy are incapable of carrying on an intelligent discussion of it. … The danger is that a new priesthood of scientists may usurp the traditional roles of democratic decision-making”
– Dr Ralph E. Lapp, The New Priesthood: The Scientific Elite and the Uses of Power, Harper, New York, 1965, pages 2-3.

Lapp on page 8 quotes President Thomas Jefferson:
“To furnish the citizens with full and correct information is a matter of the highest importance.  If we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education.”

Education in fact, not groupthink indoctrination nor the propaganda substitutes for fact used by dictatorships.

Lapp on page 14 quotes President Dwight Eisenhower’s 17 January 1961 farewell address:
“Today the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists … In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution … Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. … The prospect of domination of the nation’s scholars by federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.”

Lapp on page 16 quotes Dr Alvin Weinberg (director of Oak Ridge National Laboratory, 1955-1973):
“I do believe that big science can ruin our universities, by diverting the universities from their primary purpose and by converting our university professors into administrators, housekeepers and publicists.”

Alvin Weinberg expanded on his critique of “big science” in his 1967 book, Reflections on Big Science.

We quoted Alvin Weinberg’s analogy of populist anti-nuclear pseudoscientific rants to witch hunts, in a previous post (linked here). Weinberg wrote Appendix B: Civil Defense and Nuclear Energy, pages 275-7 of The Control of Exposure of the Public to Ionizing Radiation in the Event of Accident or Attack, Proceedings of a Symposium Sponsored by the National Council on Radiation Protection and Measurements (NCRP), April 27-29, 1981, Held at the International Conference Center, Reston, Virginia. (The proceedings were published on May 15, 1982, by the U.S. National Council on Radiation Protection and Measurements, Bethesda, Md.):

“That people will eventually acquire more sensible attitudes towards low level radiation is suggested by an analogy, pointed out by William Clark, between our fear of very low levels of radiation insult and of witches. In the fifteenth and sixteenth centuries, people knew that their children were dying and their cattle were getting sick because witches were casting spells on them. During these centuries no fewer than 500,000 witches were burned at the stake. Since the witches were causing the trouble, if you burn the witches, then the trouble will disappear. Of course, one could never be really sure that the witches were causing the trouble. Indeed, though many witches were killed, the troubles remained. The answer was not to stop killing the witches – the answer was: kill more witches. … I want to end on a happy note. The Inquisitor of the south of Spain, Alonzo Frias, in 1610 decided that he ought to appoint a committee to examine the connection between witches and all these bad things that were happening. The committee could find no real correlation … So the Inquisitor decided to make illegal the use of torture to extract a confession from a witch. … it took 200 years for the Inquisition to run its course on witches.”

Above: Herman Kahn’s graph of the massive rise in U.S. government taxpayer-funded research and development from 1940-1960, about 20% of which is military and 80% civilian.  (Lapp states on page 45 of The New Priesthood that in 1939 the entire U.S. Federal research and development budget was just $50 million, mostly for agricultural science, with a small portion for ship studies at the Naval Research Laboratory, and just $2 million for physics research, by the National Bureau of Standards.)  The Manhattan project, which resulted in the first nuclear weapons used in 1945, was reported to have cost $2 billion from 1942-1945.  Thus began “big science”.  By the 1960s, six times as much as that was being spent per year.  Essentially all of this expenditure is decided in advance by timetable- and grant-proposal-dominated groupthink bureaucracy and officialdom, not by a completely unbiased search for the truth by individuals who are free to follow the evidence.  You cannot find the unknown by a search governed by planned timetables.

Lapp quotes an editorial by Science editor Dr Philip Abelson on page 30 of The New Priesthood:

“The witness in questioning the wisdom of the establishment pays a price and incurs hazards.  He is diverted from his professional activities.  He stirs the enmity of powerful foes.  He fears that reprisals may extend beyond him to his institution.  Perhaps he fears shadows, but … prudence seems to dictate silence.”

Critics have their own critics: some rational, others crazy

On 14 December 2022, shortly after dear old CIA Director William Burns poetically announced that there is “no evidence” of Putin being ready for war (maybe a special military operation, you understand, but not a war), Putin ordered the loading of an ICBM into a Russian silo to be filmed and published; just love the fact they tilt the entire lorry with the wheels up into the air, don’t you? So practical, not needing a huge crane and a million bucks. This publication is purely a matter of setting the record straight, true, and honest. Just to clarify, you understand. To make sure Biden doesn’t get fake news. From his CIA Director. Fake news that is classified Top Secret – Restricted Data, Sigma 1, so nobody who actually has the facts can see the fake news to debunk it. So we’re all good. (Putin, the kindly old son of a gun, has lovingly also been declassifying and publishing all the other Russian nuclear weapons secrets, too. Just for clarification. Maybe he thinks, unlike America, that for deterrence to work the other side has to believe it? Who knows. Who cares?)

Above: moving on from the deplorable way that Putin has disgracefully debunked the lies of the CIA Director, William Burns, we have another type of “Criticism of Criticism” (c-squared, henceforth). This c-squared example is an “anonymous” hate attack on Woit’s Not Even Wrong blog. It mentions Prof. Gil Kalai, who has put the case that quantum computers are fake news, like the fake claim that the USA exploded a “hydrogen bomb” in 1952 (it was over 75% fission, under 25% fusion). The argument here is that at the quantum level, noise prohibits practical computing. I have no interest in this. Freeman Dyson has a lecture about heresy being rejected by “experts” before being proved right. The point I’d like to make here is that, if you are going to censor freedom of speech, either using “secrecy laws” or attacks on critics, then we might as well surrender to Putin and live in the utopian Russian Federation. (Tip for the kind of people who love mad censorship: I’m being sarcastic, not endorsing Putin.)

Above: beyond c-squared (critics of critics) there is c-cubed (criticism of critics of critics), as evidenced by Ivor Catt (he quotes bits from me and other critics on that page, naturally failing to lower himself to respond to the criticisms), whose successfully peer-reviewed paper “Cross-talk in Digital Systems”, I.E.E.E. Trans. Electronic Comput., 16 (1967), went to his head, leading him to claim to “disprove” first displacement current (Maxwell’s term to explain the charging and discharging of capacitors, which also gives us a theory of electromagnetic radiation, a very uncontroversial “theory”, well substantiated by evidence transmitted by radio waves!), and then electric current itself! (Yeah, he thinks he can use Occam’s Razor to replace it with electromagnetic radiation in the form of Heaviside’s “slab of energy current”, and in his own little world there are no vacuum tubes or old-fashioned TV screens with electron beams, or beta particles from the Sr-90 source I gave him; all these are little packets of trapped Heaviside “energy current” that acquire rest mass by magic.) I first met him shortly after my first article was published in the November 1994 issue of Electronics World. Like a Sir Jimmy Savile or a superstring theorist, at first he appears humble and genuinely interested in defending freedom of progress, but you eventually discover he’s just a conman, like a typical “superstring theorist”. Actually, Catt’s 1967 paper, though peer-reviewed, is based on what is just an approximation (by Heaviside) to more interesting quantum electrodynamic mechanics, and it is the fake part of the approximation that Catt assumes to be Gospel truth, not the empirical evidence which it fits. In other words, it’s the old problem of “factoids” in science; usually the hardest-to-kill factoids are mixtures of measured evidence and approximate theory, which are fashionably presented as if they were 100% empirical evidence, and are then used to falsely denounce alternative analyses of the data that are more accurate. (Classic examples being caloric and phlogiston; in other words, once you have a name for a symbol in an equation, and it has received, say, a trillion-dollar funding grant from Harry and Meghan’s charitable foundation for universal love and anti-racism, the factoid becomes as “real” to simpletons as wormholes in computer simulations, or, dare I say, the non-water-vapour exponential increase in the amount of Marx-media hot air. You might think that “dark energy” and “dark matter” are similar epicycles, but there is evidence that they are in fact more fundamental than currently believed by populist fashion.)
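For reference, the displacement current Catt claims to “disprove” is the term that keeps the current continuous through a charging capacitor; a minimal numerical check with illustrative component values:

```python
# Parallel-plate capacitor: Maxwell's displacement current eps0 * A * dE/dt
# between the plates equals the conduction current C * dV/dt in the wires.
eps0 = 8.854e-12            # permittivity of free space, F/m
A, gap = 1e-4, 1e-3         # plate area (m^2) and separation (m), assumed values
dV_dt = 1e6                 # charging rate, V/s
C = eps0 * A / gap
I_conduction = C * dV_dt
dE_dt = dV_dt / gap                      # field between the plates is E = V/gap
I_displacement = eps0 * A * dE_dt
print(I_conduction, I_displacement)      # equal (~8.85e-7 A) by construction
```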

Woit’s Euclidean Twistor Unification slides at the Algebra, Particles and Quantum Theory Seminar, Feb. 14, 2022

Highlights of Woit’s Twistor Unification Theory

Over the past few years, Woit has been developing a twistor unification framework for the basic ideas he first published back in 1988, which I discussed on page 23 of a 2011 paper, https://vixra.org/abs/1111.0111 – “For right-handed particles, Woit(30) shows how the most trivial possible Clifford algebra representation of U(2) spinors in Euclidean spacetime yields the chiral electroweak isospin and hypercharge law. This proves the claim that left-handed helicity electrons have a hypercharge of -1, and right-handed electrons have a hypercharge of -2 because, in a left-handed electron, half of the hypercharge field energy appears as weak isospin charge, which doesn’t happen in right-handed electrons. This is because left-handed electrons are not a mirror-image of right-handed electrons; there is no such thing as parity.”

In other words, already in 1988 Woit had shown how to get the observed left-handedness of the weak nuclear force modelled by using a Clifford algebra U(2) in Euclidean spacetime, not Minkowski spacetime, and since Feynman’s discovery of the path integral, which is Euclidean (not Minkowski), you’d think this would gain attention. Nope, because of string theory hype (political groupthink in modern physics). Anyway, Woit has now expanded his sketchy 1988 hypothesis onto a broader canvas, in the manner of Titian:

“One can do four-dimensional complex geometry by identifying C^4 with 2×2 complex matrices
(z_0, z_1, z_2, z_3) ↔ z = z_0 1 − i(z_1 σ_1 + z_2 σ_2 + z_3 σ_3)
and defining
|z|^2 = det z
Pairs g_L, g_R ∈ SL(2, C) × SL(2, C) = Spin(4, C) act preserving |z| by
z → g_L z g_R^−1
We are interested in “real forms” of this (real 4d subspaces that give above after complexification).

Three real forms are:
(2, 2) signature inner product: Spin(2, 2) = SL(2, R) × SL(2, R),
using g_L, g_R ∈ SL(2, R).
(3, 1) signature inner product: Spin(3, 1) = SL(2, C), using
g_R = (g_L^†)^−1
This is Minkowski space-time.
(4, 0) signature inner product: Spin(4, 0) = SU(2) × SU(2), using
g_L, g_R ∈ SU(2).
This is Euclidean space-time.
Our interest will be in the Minkowski and Euclidean cases, together with the analytic continuation relating them.

In Euclidean signature, can use quaternions instead of complex matrices
(x_0, x_1, x_2, x_3) ↔ x = x_0 1 + x_1 i + x_2 j + x_3 k …

Going back from number theory to physics, the philosophy we will pursue is that fundamental theory should be defined in Euclidean signature space-time; our observed physical space-time is an analytic continuation. One reason is that QFT has inherent definitional problems in Minkowski signature that don’t occur in Euclidean signature. …

There have been attempts to unify the weak interactions with gravity, using the chiral decomposition of the spin connection as above, with SU(2)_R a space-time symmetry giving a gravity theory, and SU(2)_L the internal symmetry of a Yang-Mills theory of the weak interactions. Our proposal is of this nature, but with the following different features:

Take the Euclidean signature QFT theory as fundamental, with Minkowski signature physics to be found later by analytic continuation. Note that in Euclidean QFT one component of the vierbein is distinguished (the imaginary time direction). Use twistor geometry to get not just an SU(2)_L internal symmetry but the full electroweak SU(2)_L × U(1) electroweak internal symmetry, with the imaginary time component of the vierbein behaving like a Higgs field.

If one works on the projective twistor space PT, one can get the idea of gravi-weak unification to work (in its Euclidean form):

  • There is not just an SU(2) internal symmetry, but also a U(1), given by the complex structure specified by the point in the fiber. This complex structure picks out a U(2) ⊂ SO(4), the complex structure preserving orthogonal transformations of the tangent space to the point on the base S^4. This is the electroweak U(2) symmetry, to be gauged to get the standard electroweak gauge theory.
  • If one lifts the choice of vector in the imaginary time direction up to PT, it transforms like the Higgs field: it is a vector in C^2 (using the complex structure on the tangent space given by the point in the fiber). The U(2) act on this C^2 in the usual way. Each choice of Higgs field breaks the U(2) down to a U(1) subgroup, which will be the unbroken gauge symmetry of electromagnetism. [Emphasis added; Woit then goes on to SU(4) which acts on C^4, getting U(3), which includes an SU(3) and a U(1).]

A generation of SM [Standard Model of particle physics] matter fields has exactly the transformation properties under the SM gauge groups as maps from C^4 to itself. … In this proposal, there’s a profound reorganization of fundamental degrees of freedom. They now live on points of PT which one can think of as light-rays, rather than on points of space-time. Mathematically, one needs to find a formalism on PT that corresponds to the usual Yang-Mills formalism on the base S^4. Need to use holomorphicity on the CP^1 fibers to match degrees of freedom on S^4 and on PT. The Penrose-Ward correspondence does this for anti-self-dual connections. Similarly need to match the Dirac equation on S^4 and equations on PT. For bundles on the base with ASD connections, this is done by the Penrose-Ward correspondence, but the U(1) and SU(3) bundles are on PT, vary on a fiber. Have mostly just rewritten the usual electroweak and GR theory. One difference though is that one component of the vierbein is now the Higgs, which has the electroweak dynamics.”

  • Spinors are tautological objects (a point in space-time is a space of Weyl spinors), rather than complicated objects that must be separately introduced in the usual geometrical formalism.
  • Analytic continuation between Minkowski and Euclidean space-time can be naturally performed in twistor geometry.
  • Exactly the internal symmetries of the Standard Model occur.
  • The intricate transformation properties of a generation of Standard Model fermions correspond to a simple construction.
  • One gets a new chiral formulation of gravity, unified with the SM.
  • Conformal symmetry is built into the picture in a fundamental way.
  • Points in space-time are described by the p = ∞ analog of the Fargues-Fontaine description of the “points” p of number theory – from Woit’s “Euclidean Twistor Unification and the Twistor P^1” slides, Algebra, Particles and Quantum Theory Seminar, Feb. 14, 2022; see also his blog post.
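The C^4 ↔ 2×2-matrix identification at the start of the extract is easy to verify numerically; a minimal numpy sketch (random test values, my own check rather than Woit’s):

```python
import numpy as np

# Check: z = z_0*I - i*(z_1*s1 + z_2*s2 + z_3*s3) has
# det z = z_0^2 + z_1^2 + z_2^2 + z_3^2, and z -> g_L z g_R^(-1)
# preserves det z whenever det g_L = det g_R = 1 (an SL(2,C) pair).
I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
z0, z1, z2, z3 = np.random.randn(4)
z = z0 * I2 - 1j * (z1 * s1 + z2 * s2 + z3 * s3)
print(np.linalg.det(z), z0**2 + z1**2 + z2**2 + z3**2)  # agree
g = np.array([[1, 2], [0, 1]], dtype=complex)           # det g = 1
h = np.array([[1, 0], [3j, 1]])                         # det h = 1
print(np.linalg.det(g @ z @ np.linalg.inv(h)))          # same det as z
```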

I’d like to again point out – see note 30 on page 63 of the 2011 paper, vixra.org/abs/1111.0111 – that Woit shows “that the most trivial possible Clifford algebra representation of U(2) spinors in Euclidean spacetime produces this chiral electroweak isospin and hypercharge association: P. Woit, “Supersymmetric quantum mechanics, spinors and the standard model,” Nuclear Physics, v. B303 (1988), pp. 329-42.”

The problem is that once “anomalies” in the Standard Model became encoded into a hardened empirical orthodoxy of mixing angles and parameter sets, theories like string theory set out to reproduce them (along with a landscape of 10^500 or more other variants), rather than to really challenge them, let alone explain them. Whenever a difficulty arises, you also get two psychological reactions from any groupthink infrastructure:

  1. Glass is half empty: give up the climb, accept the existing dogma as being the pinnacle, and kick away constructive criticisms into the long grass as being crackpot or outsider nonsense.
  2. Glass is half full: keep doing the same general thing (not a really radical change in direction).

What you don’t get from groupthink is radically objective but sensible, non-paranoid activity. This didn’t matter that much before, say, 1945, since there was no groupthink “big science” with the smell of money to corrupt mainstreams into a transistor-style logic-gate switch-over mentality, whereby all media ignore new ideas until they generate enough current to break down a barrier and activate a switch. In groupthink, “noise” is suppressed by a squelch circuit until the signal strength exceeds a threshold, which allows mainstream publicity. This slows down the emergence of new ideas into the mainstream, while accelerating the development of mainstream ideas. The older, non-groupthink model of science was the opposite: far more “noise” (radical nascent ideas) was publishable, which speeded up the emergence of new ideas, but this came at the price of slowing down the development of mainstream ideas which had already emerged (there was little funding for them, as most of the money was spent generating “noise”, the radical nascent ideas). The more money you pump into something, the more red-tape groupthink.

A problem for people like Woit is that the sort of people interested in radical objective physics models today are not always the Directors of Stringy Groupthink Corporation, but outsiders.

We became interested in Woit’s work after he started blogging in 2004. We wanted a critical revision of the basis of the standard model of particle physics, rather than the usual stringy notion that what everybody needs is to assume, as the 100% true Holy Grail, the ad hoc, parameter-filled standard model and existing quantum gravity speculation (i.e. all anomalies are assumed to be fundamental facets of nature, not merely indications of mathematical approximations that are incomplete or a bootstrap fix), treated as a subset of some bigger, more complex superstring framework in 10/11 dimensions.

For various reasons, e.g. family health issues, and also the groupthink demotivation of mainstream hype, we haven’t published physics papers on vixra since “Massless Electroweak Field Propagator Predicts Mass Gap”, https://vixra.org/abs/1408.0151, which uses the Euclidean Laplace transform in 3 dimensions to understand masses, as opposed to the usual obfuscating Fourier transform. Don’t get us wrong: we don’t object to any particular mathematical tool in general, just to the specific problem that a bulldozer is not always the best tool for recovering a china cup. Sometimes, despite hype from funding-obsessed fools, cheaper, simpler tools have their uses and are the most appropriate.

By obfuscating, we mean that while Fourier transforms have their uses (they create frequency spectra from waveforms, etc.), their problem of poles in complex space (integrations around the origin on Argand diagrams) means that nobody else notices that particle masses result from integrating cut-offs.
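As a one-dimensional illustration of that Laplace-versus-Fourier point (a sketch only, not the 3-dimensional calculation in the vixra paper):

```python
import sympy as sp

# Laplace-transforming the Yukawa-style exponential decay e^(-m*r) gives
# 1/(k + m): no poles to chase around an Argand diagram, and the massless
# (m -> 0) and massive terms sit additively in the denominator.
r, k, m = sp.symbols('r k m', positive=True)
print(sp.integrate(sp.exp(-m * r) * sp.exp(-k * r), (r, 0, sp.oo)))  # 1/(k + m)
```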

Another mathematician, besides Woit, has recently been coming up against the mainstream groupthink problem: see Professor Robert Arnott Wilson’s comments on being censored by arXiv for heretical papers submitted to maths journals. His London university page is here. A pre-print of his January 2022 submission to the International Journal of Geometric Methods in Modern Physics is uploaded here, “Remarks on the Group-Theoretical Foundations of Particle Physics”, whose abstract states: “I propose using the group SL(4, R) as a generalisation of the Dirac group SL(2, C) used in quantum mechanics, in an attempt to match the symmetries of the ‘internal’ spacetime of elementary particles to the symmetries of the ‘external’ spacetime of general relativity”. On his blog, he writes: “the arXiv refused to post it”.

The problem is, this censorship makes the mainstream appear as it is: corrupt. It all recalls the old 1930s poems about the degenerative censorship methods used against the various groups of critics and outsiders, shutting down controversies about the beloved Leader. “First they came for those who objected to the Leader’s theory, then they came for those with alternative suggestions, then they rounded up and deported all those who reported what was really going on, then in the end they became so paranoid, deluded and contemptuous of objective, useful criticisms, and so submerged in their own one-sided hype and glory, that they lost sight of realistic objectives and methods, and lost their costly war.”