Electroweak theory beta decay error

Fig. 1: an edited-down (de-cluttered) version of the figure in previous posts. The point is that there is an inconsistency due to historical prejudice, which affects the interpretation of the CKM matrix values that measure electroweak mixing. We have a choice in how to view these diagrams, just as Copernicus had a choice between interpreting sunrise as daily earth rotation or as the daily orbit of the sun around the earth. Either we remain stuck in the inconsistent traditional model, which leads to epicycles and a messy Standard Model CKM matrix, or we change the perception of the facts to a consistent treatment of beta decay, and view the quark as decaying into an electron via a weak W boson (propagator). For consistency, we should interpret all beta decays (decays of quarks and of heavy leptons like muons) by the same analysis. Otherwise, the distinction we introduce between quarks and leptons is just a subjective human discrimination, which introduces stupidity because it is merely an artifact of defective analysis.

Fig. 2: another way of explaining the allegedly “subtle” point I’m getting at. It’s the ultimate heresy to even raise the question of whether quarks have been changing into leptons in beta decay all along! The mainstream “interpretation” is an historical accident due to the way beta decay was discovered and modelled in the first place by Fermi, and then applied to quark decays in 1964, before the advent of the W boson intermediary.

If you omit the W (weak boson), the “propagator” in the Feynman diagrams above, my whole objection disappears. The objection thus did not exist until 1967, when the existence of the W was proposed. When it was proposed it was radical (the W wasn’t discovered until CERN found it in 1983), so the innovators (Weinberg, Salam, Glashow) were focussed on justifying what they were doing, not on looking for physical consistency errors of the sort shown in Fig. 1. They were preoccupied with the mathematics. So my argument is that the inconsistency “sneaked into” the electroweak theory when the dogma founded on Fermi’s theory (in which there is no W boson, so the conflict in Fig. 1 doesn’t exist) was “extended” to include the W boson propagator. This is just an historical accident, and a classic route by which science becomes corrupted: extending theories without re-checking whether the foundations can take the extra weight. Adding a W boson completely messes up the separation of quark and lepton decays in the naive Fermi theory of beta decay. W bosons should force you to take another look at precisely how you are analyzing what is decaying into what in beta decay, but this was not done by Weinberg, Salam and Glashow. Just like Ptolemy, they and their successors were distracted by the mathematical problems, and failed to confront key issues of physical consistency.

Beta decay transforms neutrons into protons, so it was assumed that a downquark decays into an upquark by emitting a negative weak boson (W). I’m not stating that you don’t get a proton when a neutron decays, and I’m not stating that a downquark isn’t replaced by an upquark! You do get quarks when quarks decay. What I am stating is that there is a misleading error in the precise statement of what is occurring. Put it like this: Copernicus didn’t deny “sunrise” when he argued for a solar system; he got the sunrise by making the earth rotate instead of the sun orbiting the earth daily. You can see the kind of problem that occurs when you question or move foundations: the basic observations are re-interpreted in the new theory.

Nobody seems to have ever raised the question of whether this is the correct way to look at the evidence! The reason to be suspicious is the beta decay of leptons into leptons via W bosons. If muons decayed directly into electrons, not via the intermediary or metamorphosis stage of first transforming into a weak W boson, then all would be rosy with the electroweak theory ideology and Standard Model beta decay analysis as it stands. It isn’t, because muons don’t decay that way!

Let’s assume that Fig. 1 is a valid objection to the whole way quarks and leptons are separated in electroweak theory. What precisely does it mean for the Standard Model? It means that quarks and leptons can transform into one another, because the well-established, experimentally proved data has been misinterpreted by a contradictory, epicycle-like model. It also means that if you say quarks routinely transform into leptons in normal low-energy decays, you’re going to get a dialogue of the deaf (or worse, exchanges of crackpot insults) with mainstream Standard Model fanatics (much like Copernicus telling Ptolemy’s followers that the earth orbits the sun, despite all the elaborate calculations and predictions and the immense number of mainstream followers of the earth-centred universe framework). So it’s important to be crystal clear about what the evidence is. It stems from the inconsistency shown in Fig. 1.

Fig. 3: beta decay error for muon decay on Wikipedia: note that the beta decay equation for a negative muon as written in the text states the opposite of the diagram on the right. The text states that an electron, an electron antineutrino and a muon neutrino are emitted in negative muon decay, whereas the diagram shows an inward arrow on the electron antineutrino, making it equivalent to the emission of an electron neutrino. The equation in the text is therefore correct, but the diagram is wrong: either the arrow on the electron antineutrino needs to be reversed, to show the electron antineutrino being emitted rather than absorbed, or else the absorbed particle needs to be changed from an electron antineutrino to an electron neutrino (ν without the overbar that signifies an antiparticle). This persistent error clearly indicates the current level of sloppiness in looking at electroweak diagrams, because the mathematical calculations are currently considered fundamentally more important than understanding what is physically occurring.

Fig. 4: we can check weak interaction equations to see what kinds of particles are emitted in particle decays using the principles of conservation of electric and weak isospin charge: the total sum of each kind of charge is the same before and after an interaction, like a beta decay. Muons have similar electroweak charges to electrons; strange quarks have similar electroweak charges to downquarks (we’re just into the second generation of Standard Model particles).
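As a toy illustration of the bookkeeping described in Fig. 4, here is a minimal sketch in Python. The particle labels, the (Q, T3) table (values taken for left-handed states, with T3 = −1 assumed for the W−) and the `conserved` helper are my own illustrative choices, not standard library code:

```python
# Hedged sketch: checking electric charge (Q) and weak isospin (T3) balance
# in beta decay steps. Quantum numbers are for left-handed particles;
# antiparticles carry the negated values. Names are illustrative only.
CHARGES = {
    "d":         (-1/3, -1/2),
    "u":         (+2/3, +1/2),
    "e-":        (-1.0, -1/2),
    "nu_e":      ( 0.0, +1/2),
    "anti-nu_e": ( 0.0, -1/2),
    "W-":        (-1.0, -1.0),
}

def totals(particles):
    """Sum electric charge and weak isospin over a list of particle names."""
    q = sum(CHARGES[p][0] for p in particles)
    t3 = sum(CHARGES[p][1] for p in particles)
    return (round(q, 6), round(t3, 6))

def conserved(before, after):
    """True if both kinds of charge balance between initial and final states."""
    return totals(before) == totals(after)

# Neutron beta decay at the quark level: d -> u + W-, then W- -> e- + anti-nu_e.
print(conserved(["d"], ["u", "W-"]))           # Q: -1/3 = +2/3 - 1
print(conserved(["W-"], ["e-", "anti-nu_e"]))  # Q: -1 = -1 + 0
```

Both checks print `True`; swapping the emitted antineutrino for a neutrino (the Wikipedia-diagram error of Fig. 3) would break the T3 balance.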

CKM matrix obfuscations by mixing angles, and CKM transition probabilities compared to mass transition factors (mass category morphisms)

Fig. 5: symmetries in the relationships between fundamental particle masses (from a previous post on mass morphisms), showing that they do not best follow the SM categories of particles (the “see text” reference in the diagrams is to an earlier blog post, linked here)! The actual theory of mass is discussed in earlier posts. Basically, the virtual (off-shell) fermions created in pair production by off-shell bosons (vector bosons) in the intense quantum fields near a fundamental particle are “polarized” by the field; the polarization supplies energy to the off-shell fermions by pushing them apart in opposite directions, which increases their average lifespan before annihilation in the vacuum. In other words, their lifespan is increased above the expected off-shell value of (h-bar)/(energy equivalent to the rest mass of the fermion pair). This makes them effectively on-shell particles for the additional time they exist before annihilation, and they have time to be affected by on-shell considerations like the Pauli exclusion principle, which organizes them into shells. So the vacuum polarization of pair production is not entirely random in the strong electric fields at very high energy! The virtual particles, by their interaction with the field, effectively add mass to the real on-shell particles, and the various relatively “stable” organized shell structures of the vacuum at very high energies determine the masses of the various leptons and quarks, but not in the obvious way you’d expect from the analogy to shells in quantum mechanics or even nuclear shell structures.
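The off-shell lifespan quoted above, ħ divided by the rest-mass energy of the pair, is easy to put a number on. A back-of-envelope sketch for a virtual electron-positron pair (the constants are standard values; the calculation itself is mine, for illustration):

```python
# Quick check of the off-shell lifetime hbar/E mentioned in the text,
# for a virtual electron-positron pair (rest energy 2 x 0.511 MeV).
HBAR_MEV_S = 6.582e-22   # reduced Planck constant, in MeV*s
M_E_MEV = 0.511          # electron rest-mass energy, in MeV

pair_energy = 2 * M_E_MEV            # e+e- pair at rest, MeV
lifetime = HBAR_MEV_S / pair_energy  # expected off-shell lifespan, seconds

print(f"{lifetime:.2e} s")  # roughly 6.4e-22 s before annihilation
```

Any polarization mechanism that stretches the pair’s survival beyond this ~10⁻²² s scale would, on the argument above, let on-shell physics (like exclusion-principle shell ordering) act on the pair.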

This is actually expected, because if it were all that easy, we’d have had the final theory of particle mass long ago! Maybe, therefore, the conventional discrimination between leptons and quarks – based upon whether they feel strong interactions or not – is inappropriate when considering masses. This recalls the original rejection of isospin symmetry by some people: it was based on the fact that neutrons and protons in the nucleus in many ways behave alike, despite having very different net electric charges (zero and plus one). If you become too straitjacketed by conventional “wisdom” on what is supposed to be the “most fundamental symmetry” (when it is just the first symmetry to be found, by historical accident, and not the most fundamental to what you are concerned with), you’ll get stuck in a dead end, sooner rather than later. There is an anthropic selection principle at work, not in the universe, but in human prejudices in physics: it often turns out that the correct theories are heresies, not in the direction of the groupthink consensus. Why is this? It is precisely because the ignored, unfashionable ideas are the least explored by the groupthink consensus that they are never properly ruled out; so when physics appears to be approaching a dead end, it is more likely that physics has created the dead end itself through excessive mind control (the groupthink fashion of permissible research topics and methodology).

To make a name for learning
When other ways are barred
Take something very easy
And make it very hard

[e.g. by obfuscation using an elaborate mixing angle representation of transition amplitudes in the CKM matrix, akin to epicycles]

One of the problems you confront endlessly when reintroducing physics into mathematical physics is the allergy of elite obfuscators to simple processes; in other words, the problem of short-circuiting their denial of Occam’s razor. The CKM matrix contains nine amplitudes for transitions between quarks. Squaring an amplitude of course gives the relative probability of the transition. What the numbers mean is that when a beta decay or related “weak interaction” occurs, there are various branching fractions. Once you know a transition occurs, the total probability for the various branching possibilities is obviously 1 (for 1 event), and the fractions making up that 1 denote the relative occurrence of different interactions. The amplitude for a downquark to decay into an upquark in the CKM matrix is 0.974, by which we mean that, given a downquark interaction, it is most likely that an upquark is formed. Very high amplitudes near 1 (and thus high relative probabilities) also occur for charm-to-strange and top-to-bottom quark transitions; in other words, quarks are highly likely to transform within their own generation rather than to jump generations. The amplitude for a top quark to change into a strange quark is just 0.040, and it is even lower, 0.0086, for a top quark to decay into a down quark. So we represent the CKM matrix as a diagram of quarks with amplitudes written on the arrows between them, showing relative transition strengths in interactions:

Fig. 6: diagram illustrating what the decay amplitudes in the CKM matrix (shown) apply to. It seems that the less massive quarks are more likely to change flavour than the very heavy top and bottom quarks. Problem: relate these quark transition amplitudes to the mass morphisms in Fig. 5 above, somehow, and see if you can learn the deep hidden secret of mass and the CKM matrix of the universe in the process, whatever that secret may be. Presumably there’s a simple solution that produces all the apparent complexity. Note: lepton transition amplitudes are all similar (~1) within a generation, but inter-generation neutrino mixing ONLY occurs over long distances in spacetime, which permit “neutrino oscillations”. E.g., 2 electron neutrinos are released by the sun for every helium-4 atom formed by nuclear fusion, but only 1/3rd of these electron neutrinos are detected here on earth. We can detect electron neutrinos accurately in instruments calibrated with beta-radioactive sources, utilizing large detectors (usually tanks of dry-cleaning fluid). The accepted neutrino oscillation theory is extremely reasonable so far as it goes (which is not far enough); but it is not a “classical oscillation”: it is a quantum process whereby discrete interactions of neutrinos with something cause a change of flavour. The neutrinos pick up mass from this process on the 8.3-minute journey from the sun to the earth, and become randomly scattered into three flavours, arriving at earth as 1/3rd electron neutrinos, 1/3rd muon neutrinos, and 1/3rd tauon neutrinos. Because the instruments were designed and calibrated to detect only electron neutrinos, before neutrino oscillation was known there was an anomaly between predicted and observed solar neutrino flux: detectors were only detecting 1/3rd of the total. The resolution is simply that the flavours became mixed uniformly during the journey to the earth.
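The squaring step for the CKM amplitudes quoted above can be sketched numerically. The |V| magnitudes below are the values cited in the text plus approximate standard magnitudes for the remaining entries (my own illustrative table, not an official dataset); squaring each row and summing should give roughly 1, the unitarity condition:

```python
# Sketch: turn CKM amplitudes |V_ij| into relative probabilities by squaring,
# and check that each row's probabilities sum to ~1 (one decay event).
CKM = {  # rows: up-type quarks; columns: down-type quarks
    "u": {"d": 0.974,  "s": 0.225, "b": 0.004},
    "c": {"d": 0.225,  "s": 0.973, "b": 0.041},
    "t": {"d": 0.0086, "s": 0.040, "b": 0.999},
}

for up, row in CKM.items():
    probs = {down: amp ** 2 for down, amp in row.items()}
    total = sum(probs.values())
    # each up-type quark's branching probabilities should total ~1
    print(up, {k: round(v, 4) for k, v in probs.items()}, round(total, 3))
```

The dominance of the diagonal (0.974², 0.973², 0.999²) is the numerical content of the statement that quarks overwhelmingly stay within their own generation.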
If you put a cobalt-60 source or a nuclear reactor near an electron neutrino detector, you don’t get this problem: the distance is so small that the neutrinos don’t have space to oscillate in flavour, so you detect effectively 100% electron neutrinos. Lederman shared the 1988 Nobel prize for proving experimentally that muon neutrinos are different from electron neutrinos: he found that muon neutrinos hitting neutrons on 51 occasions produced a proton plus a muon, but never produced a proton plus an electron.
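The uniform 1/3–1/3–1/3 arrival mixture claimed above can be caricatured with a toy Monte Carlo. This is emphatically not the standard oscillation formula; it is just a sketch of the idea that repeated random, discrete flavour-changing interactions in transit equalise the three flavours, with the `arrive` helper and its parameters being my own illustrative invention:

```python
import random

random.seed(0)
FLAVOURS = ["electron", "muon", "tauon"]

def arrive(n_neutrinos, n_scatters=50):
    """Start every neutrino as electron flavour, then randomise its flavour
    repeatedly (each discrete 'interaction' picks a flavour uniformly)."""
    counts = {f: 0 for f in FLAVOURS}
    for _ in range(n_neutrinos):
        flavour = "electron"
        for _ in range(n_scatters):
            flavour = random.choice(FLAVOURS)
        counts[flavour] += 1
    return counts

counts = arrive(30_000)
fractions = {f: round(c / 30_000, 2) for f, c in counts.items()}
print(fractions)  # each flavour fraction comes out close to 1/3
```

An electron-neutrino-only detector sampling this arrival flux would see about a third of the emitted flux, which is exactly the solar neutrino anomaly described above. Over a reactor-to-detector distance (effectively zero “scatters”), all the flux stays in the electron flavour.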
It’s worth emphasising this contrast clearly: when a muon decays “into an electron”, it makes more sense to view this in the Standard Model as the transformation of a muon into a muon neutrino, accompanied by the pair-production of an electron and an electron antineutrino. If this is so, then Fig. 1 at the top of this blog post should be redrawn to show the need for a different kind of consistency: weak interactions of electrons always require electron neutrinos (or their antiparticles moving in the opposite direction), those of muons always require muon neutrinos, and those of tauons always require tauon neutrinos.

Neutrino oscillations, unless classical, require a quantum field theory in which the change in neutrino flavour while travelling through the vacuum occurs discretely, in a random interaction. An interaction with what? Something in the vacuum, which permits them to change flavour. While we’re being heretical, here’s the old pion decay heresy again:

Fig. 7: pion decay violates the conservation of spin angular momentum! How does a spin-0 pion decay into a spin-1 weak boson? Once you start getting too many exceptions to a set of textbook rules, maybe you should consider altering the rules of nature so that they actually agree with nature in the first place, instead of having solid rules which have to be broken all the time by exceptions, anomalies, “interesting questions”, etc?

CKM mixing matrices and the final theory, dictatorship smokescreen AGW politics dressed up as science, etc

First, I have a new vixra.org paper summarising the negative-feedback evidence on CO2 emissions. The basic physical fact is that if you increase temperature slightly by increasing atmospheric CO2 on a planet covered 71% by water, the extra evaporation creates moist, sunlight-warmed air which rises to form increased cloud cover, increasing albedo and effectively cancelling out the “greenhouse” CO2 effect. In short, if you want an accurate greenhouse model, you need to include negative feedback from enhanced cloud cover. So what is causing the massive hockey-stick curve of temperature rise, fabricated to fit CO2 emissions? Answer: data-set splicing by the climategate heroes, like Dr Jones. What’s the cause of the conspiracy? James Delingpole says it’s the “watermelon” effect: environmentalists are green on the outside, red socialist inside. The red socialist fanatic believes that “the ends justify the means”: the sacking of any Trotsky character who raises criticisms, and the redefinition of “science” from skepticism and the refusal to believe in any dogma back to a “consensus of expert opinion” and authoritative experts, which constituted the “natural philosophy” of the earth-centred universe. Big science is now a gigantic multibillion-dollar enterprise which is proudly political in the non-democratic sense, the politics of the Brezhnev-era USSR dictatorship. The media loves this science dictatorship because its whole five W’s ethos – Who? What? Where? When? Why? – is tied to writing “stories” around “famous people” or at least important “events”, not to “skepticism about facts” (which to the media is contradictory nonsense).

It’s significant that Sir Paul Nurse’s January BBC Horizon “documentary”, Science Under Attack briefly throws off English graduate climategate journalist James Delingpole, by bringing up the “skepticism about facts” issue under disguise of a patient questioning a “consensus of medical opinion” on a diagnosis. This is the difference between politics and science in a nutshell. What is a “fact”? If a fact is the “consensus of expert opinion”, then you must accept that if and when that consensus changes due to fashion, the “facts” will change! So then your definition of “fact” is not something immutable. If you want to define a fact as an immutable statement about nature, then you have to prove that you haven’t misinterpreted anything, made any errors, been lied to (Piltdown Man), etc. Skepticism is the opposite of accepting a consensus of expert opinion. Skepticism is the bedrock of freedom and liberty. Once you start to ban, suppress, censor skepticism, you are doing exactly what dictatorial regimes do. So you have to accept that science is a subset of democratic politics; it’s not apolitical as practised. To say science “should be apolitical” is a statement of ideals that doesn’t apply to the real world, like saying “everything should be perfect always”, or “there should be universal peace”. Science is about making progress, which is not always a matter of happy incremental additions, but sometimes requires a rebuilding of the foundations, a process that causes conflict which is eventually going to be dealt with by some kind of political-type arrangement, whether you want politics in science or not. Omitting democratic principles from the organizational politics of science is not a way to “force politics out of science”, just to force the politics of science to be the worst sort, dictatorship under a smokescreen.

Second, I’m really trying to complete a new paper, setting down a more conventional and slowly written (easier to read) version of the material in recent QFT blog posts like https://nige.wordpress.com/2011/02/26/test/ and https://nige.wordpress.com/2011/02/26/the-standard-model-and-quantum-gravity-identifying-and-correcting-errors/. Carl Brannen and Marni Sheppeard have been working on CKM mixing matrix phase factors: see http://vixra.org/pdf/1008.0015v5.pdf, in particular equations 19 and 20 on page 9. Don’t worry if you don’t understand the paper’s introductory pages, because the basis of the paper is mathematical modelling of empirical CKM matrix data, and if you don’t grasp a clear simple explanation in the text, it’s possible that either (1) a simple explanation doesn’t exist, or (2) the authors haven’t found it (yet). I’ve got to evaluate this because the CKM matrix is vital to what I’m doing. My basic approach to physics is entirely different: looking first at the mechanism and trying to see if some errors in interpretation or guesswork assumptions in the Standard Model can be rectified to improve comprehension. In the CKM case (see posts here and here), the key error seems to stem from the way the electroweak theory was rushed out to replace Fermi’s theory of beta decay in 1967, completely ignoring the following anomaly:

Above: do you see the anomaly I’ve pointed out? It seems that a lot of people don’t grasp it, so let’s try once again. The diagram on the left is undisputed; the diagram in the middle is “wrong” by mainstream analysis standards (which claim quarks don’t decay into leptons as a “direct” decay product), yet it is consistent with the diagram on the left (in the sense that the decay product is interpreted the same way), while the diagram on the far right is the mainstream model showing one quark decaying “directly” into another quark, with leptons emitted as “side effects”. What I’m stating is that the whole structure of the Standard Model is self-inconsistent and wrong, because beta decay “products” are viewed inconsistently between quark and lepton (e.g. muon) decays, and I’m stating that strange quarks should be viewed as transforming into electrons.

An alternative would be to change the far left diagram to make the muon decay into a muon neutrino, with the weak boson emission considered a side show. Either way, the definition of what is the “primary” or “direct” product of a decaying lepton or quark needs to be analyzed consistently, not inconsistently as is done in the electroweak theory.