The invisible glass ceiling of the global greenhouse, versus factual data

Fig. 1: the latest UAH satellite-based global lower-troposphere temperature data (credit: Dr Roy Spencer).

According to all IPCC greenhouse effect models, air warmed by infrared absorption in increasing atmospheric CO2 causes increased water evaporation, and the evaporated water is assumed to have a bigger warming effect than the CO2 itself. This is the “positive feedback” assumption, essential to all IPCC climate change models. However, this assumption contravenes Archimedes’ law of buoyancy. Archimedes’ law shows that if the tiny temperature rise from the tiny increase in CO2 causes extra water evaporation, and the evaporated water (humid air) then absorbs infrared and warms up, it should rise buoyantly, so that the average amount of cloud cover increases; the extra cloud shadows the surface and causes overall negative feedback. Figure 1 shows microwave oxygen temperature measurements for the lower atmosphere (troposphere); it does not show surface temperatures where the surface lies under cloud cover, which is the only situation in which negative feedback could be detected in real-world data:

“Since 1978 Microwave sounding units (MSUs) on National Oceanic and Atmospheric Administration polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen, which is proportional to the temperature of broad vertical layers of the atmosphere. Measurements of infrared radiation pertaining to sea surface temperature have been collected since 1967.”

Therefore, Figure 1 exaggerates global warming at the surface, where increased cloud cover (negative feedback from H2O) opposes and essentially offsets CO2-induced temperature rises. Air near the upper (sunlight-reflecting) layer of clouds is warmed because infrared (long wavelengths) is absorbed by the upper parts of a cloud, but the air below clouds is cooled. There is no way for a satellite to measure surface temperatures below clouds: satellites have two methods of measuring temperature, and neither penetrates cloud cover effectively. One uses the microwave emissions from oxygen (which is distributed throughout the atmosphere, above and below clouds); the other uses the Planck radiating spectrum, which measures surface temperatures only if and when there is no cloud cover obscuring the surface (otherwise it tells you the temperature of the upper parts of the cloud cover).

Fig. 2: a comparison of direct surface Planck temperature measurements, which are not possible through cloud cover (and are therefore limited to clear skies, which rules out the inclusion of negative-feedback data and ensures only positive feedback from H2O can be included), with the UAH/RSS tropospheric oxygen microwave-emission temperature. There is a very close fit, as you would expect. None of the data curves is true surface temperature, because none includes negative feedback from shadows on the surface caused by evaporated water which has been heated by sunshine and risen buoyantly, by Archimedes’ law, to high altitudes, gradually forming extra cloud cover. As soon as the clouds form, the satellites cannot measure surface temperatures in cloud-covered areas, so negative-feedback data is always excluded.

The deceit in this graph is two-fold. First, satellites cannot by any means measure negative feedback effects which only occur under cloud cover, so they are biased in favour of clear skies where H2O feedback on CO2 can only ever be positive. Second, the straight line through the data points is deliberately misleading.

Fig. 3: negative feedback (increased cloud cover) implies that surface temperature, if it could be detected under clouds without the positive-feedback bias in satellite data, increases with CO2-induced warming only until extra cloud cover cancels further temperature rises. The sky becomes slightly more cloudy to compensate for CO2: a self-regulation mechanism, like a thermostat, as far as the surface is concerned. This negative feedback can never be seen properly in existing satellite data, which either average the air temperature over the entire height of the troposphere (microwave oxygen-emission sensors), obscuring the negative feedback in the smaller depth of air below the clouds, or else exclude negative-feedback data altogether by measuring the Planck temperature of the surface only in cloud-free clear skies (which automatically excludes all negative-feedback effects from cloud cover).
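The thermostat argument in this caption can be sketched with the standard linearized feedback-gain relation, ΔT = ΔT₀/(1 − f). This is a textbook form used here purely as an illustration; the feedback factor f and the 1 K direct warming are assumed values, not measurements from any cited dataset.

```python
# Linearized feedback-gain sketch (illustrative only; the feedback
# factor f is an assumed parameter, not a measured value).
def equilibrium_warming(delta_t0, f):
    """Closed-loop warming dT = dT0 / (1 - f), valid for f < 1."""
    if f >= 1:
        raise ValueError("f >= 1 means a runaway (divergent) response")
    return delta_t0 / (1.0 - f)

direct = 1.0  # assumed 1 K of direct CO2-only warming
print(equilibrium_warming(direct, 0.5))   # positive feedback amplifies: 2.0 K
print(equilibrium_warming(direct, -1.0))  # negative feedback damps: 0.5 K
```

The sign of f decides everything: f = 0.5 doubles the direct warming, while f = −1 halves it, which is the cloud-cover thermostat claimed above.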

The temperature proxy data is all a fraud: before 1960, tree-ring growth must be used as a proxy despite its failure to correlate with direct temperature measurements after 1960 (leading to the “hide the decline” Phil Jones/Michael Mann IPCC hockey-stick curve). It’s clear why tree-ring growth isn’t a reliable proxy: trees simply don’t grow as a function of temperature variations alone. Cloud cover and rainfall sensitively determine growth, so it is a falsehood first to assume temperature is the only variable, and then to turn this assumption about data interpretation into “evidence” that somehow defends the assumption in the first place. From 1960 until satellite data arrived in 1980, they used weather-station data affected by “urban heat island” hot-air pollution from nearby growing cities, which has nothing to do with the CO2 greenhouse effect but conveniently gives data that can be manipulated to contribute to a hockey-stick curve. So the IPCC choose different unreliable data sources that fit different parts of a curve that mimics the CO2 rise curve, and then join them together, omitting the parts of the temperature proxy data which did not convey the intended correlation.

Summary: all IPCC climate models assume H2O causes positive feedback which amplifies a tiny amount of warming from CO2 into a major problem. This assumption is only valid if Archimedes’ law of buoyancy (the rising of infrared-heated moist air to condense and form clouds) is ignored. They give no reason for ignoring buoyancy. The greenhouse effect is exactly what the IPCC models assume, but the earth isn’t a greenhouse, because clouds form in the earth’s atmosphere (not in a greenhouse) in response to temperature-dependent ocean-water evaporation, and the clouds shadow the surface and thus have a cooling, negative-feedback effect. This negative feedback can’t be seen by the Planck-spectrum surface temperature instruments in satellites because they can’t see through cloud cover. Although the microwave sensors in satellites do respond in part to oxygen temperatures below clouds, they obfuscate negative feedback by averaging the temperature of all the oxygen in the troposphere, including positive feedback from air near the upper (sunlight-heated) parts of clouds. Using a greenhouse, whose glass ceiling prevents cloud cover, as a model for the earth is a lie. The earth doesn’t have a glass ceiling to prevent increasing cloud cover from increasingly CO2-heated ocean evaporation. All IPCC models and data are frauds. Clearly, there is a small temperature rise from CO2 alone, but this causes an increase in cloud cover which largely offsets it. The IPCC lie is to assume falsely that climatic cloud cover is independent of temperature, and then to fiddle the data to coincide with the false predictions from its wide array of false models.

Sure, the climate is changing and CO2 is increasing, but the climate is always changing, so at any time in history there is a 50% chance of rising temperatures and a 50% chance of falling temperatures. In the 1970s, fanatical experts sought funding for a scare story that predicted a new ice age due to falling temperatures caused by pollution blocking out sunlight. Now it’s the opposite. But the CO2-temperature correlation is qualitatively meaningless, because there is a massive 50% chance by sheer coincidence that temperatures will be rising like CO2, and the correlation is quantitatively a fiddle, because there is no reliable data that properly includes negative feedback for the whole planet (i.e. surface temperatures under cloud cover).
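The coincidental-correlation point can be illustrated with a toy simulation (hypothetical random walks, no climate data involved): two independently generated trending series, with zero causal connection, frequently show a large sample correlation.

```python
# Toy simulation: two independent random walks (trending series with no
# causal link) frequently show a large sample correlation by chance.
import random

random.seed(1)  # fixed seed so the run is repeatable

def random_walk(n):
    """Cumulative sum of n standard normal steps."""
    x, path = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        path.append(x)
    return path

def corr(a, b):
    """Pearson correlation coefficient, computed by hand."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

trials = 500
big = sum(abs(corr(random_walk(200), random_walk(200))) > 0.5
          for _ in range(trials))
print(big / trials)  # typically a large fraction, despite zero causation
```

This is the classic spurious-regression effect: correlation between two trending series, on its own, says nothing about causation.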

People need to be told:

(1) that the earth is not a greenhouse, because cloud cover increases and cools the earth (cancelling most of the CO2 effect, not amplifying it) as the oceans are warmed slightly by CO2 in the atmosphere (something that does not happen inside a greenhouse, because greenhouses contain no oceans or clouds), and

(2) satellite temperature data either use clear-sky Planck-spectrum measurements or else average the oxygen microwave emissions over the entire troposphere, thus excluding the cloud cover. In neither case does the satellite data include negative feedback on surface temperatures from increasing cloud cover overhead. So the satellite data is all biased against including observed negative feedback from H2O, and includes only positive feedback from H2O in the early stage of heating (which occurs over the oceans prior to the development of cloud cover). There is absolutely no evidence for the massive amplification of temperature rise by H2O positive feedback assumed in all IPCC CO2 scare-mongering computer models, while there is objective evidence (from both the Archimedes buoyancy of infrared-warmed moist air, and from Spencer’s negative-feedback cloud-cover evidence) that this assumption is false. Liars conflate this false assumption with half-baked data which is misinterpreted using the same assumption, and then pass off this abuse of data as evidence substantiating it (an entirely circular argument, just like claiming the sun’s apparent motion across the sky proves that the sun orbits the earth daily). (Don’t get me wrong: we’re only biased against quackery, and nobody has ever published any reliable scientific evidence for positive feedback from H2O that disproves Archimedes’ law of buoyancy.)

Update (19 March 2012):

A revised version of my comment submitted to Calder’s blog, concerning the opposition to genuine scientific debate by political activists who use green scare policies as a pseudoscientific propaganda tool for implementing USSR-type state control of industry and individuals (the “Reichstag fire” mechanism):

Regardless of the precise altitude and cloud types involved (cold saturated low pressure air in all cases), the cosmic ray mechanism for climate change is the Wilson cloud chamber effect. I suggest this instrument would be a handy way of getting photos and video to communicate what is going on to people in a clear, hard-hitting manner.

A former climate change editor for Scientific American has just (17 March 2012) written a Scientific American blog post called “Effective World Government Will Be Needed to Stave Off Climate Catastrophe”, which states:

“To be effective, a new set of institutions would have to be imbued with heavy-handed, transnational enforcement powers. There would have to be consideration of some way of embracing head-in-the-cloud answers to social problems that are usually dismissed by policymakers as academic naivete.”

This is scare-mongering for political world government, the way nuclear war dangers were used as a cover for political ideology by Brezhnev’s Moscow “World Peace Council” during the Cold War. Anyone questioning the pseudoscientific propaganda was simply dismissed as a nuclear war advocate or an anti-communist (regardless of the distinction between ideology and realism); i.e. the whole scientific debate was shut down in advance of resolution, by the use of pseudo-moral censorship. Although you would naively expect some experts to continue sticking up for the scientific facts, the corruption spreads. The world government idealism never ended: when the Cold War ended, it was simply transferred from nuclear war to pollution and climate change fear-mongering, designed to scare and panic people into the desired political activity. The dream persists of world government achieved by scaring people into it, using “scientific consensus authority”. Maybe the aim is right, maybe not. It is misleading for “science” to be turned into a religious-style dogma of bigoted consensus in order to motivate political actions. These people want to achieve a goal using underhand methods. Why use underhand methods? They think it’s the only way. In other words, the aim itself (world government, communism, fascism) is unattractive to the majority, so scare stories are required to force the majority to be interested or tolerant. It was eugenics pseudoscience, plus other lies and stunts like the burning of the Reichstag and the faked “Protocols of the Elders of Zion”, which were essential to the Nazis. Science is killed by making it a mere dogma for use as a political tool.

Green propaganda is effectively working as a replacement for the old Moscow World Peace Council nuclear-war fear-mongering, which sought to scare people into agreeing with communism rather than be blown up. Green propaganda today produces socialist state-controlled industry based on national subsidies for inefficient industry (as in the USSR), via the back door. When national socialist state control was abandoned, socialists converted to green state ideology because it involves increasing state control of industry and individuals, by laws and taxation. Jimmy Delingpole has named these world government ideologues “Watermelons”, because they’re red on the inside but green on the outside. The subtext is that they don’t really care about science, be it the effects of nuclear war or the effect of the natural climate-change Wilson cloud chamber mechanism. What they do care about is using and promoting any currently fashionable spurious arguments to scare people into state-control activism. This is the old ideologue tactic. It was used by Lenin and Hitler, and more recently by self-deluded fanatics like Saddam and Gadaffi. These people use creepy lying and scare-tactic propaganda, not facts.

The incorrect amplitude for the “Higgs boson” predicted by the Standard Model

“The start of the LHC 2012 physics run is still a while off, scheduled for around the beginning of April, with beam energy likely raised a bit, to 8 TeV total in the center of mass. So, it’s going to be quite a few more months before the LHC experiments have enough new data to analyze that will allow a conclusive determination of whether the evidence seen for a Higgs around 125 GeV is confirmed, with a significance high enough to claim discovery. … the best fit size of the bump is, as with ATLAS, about twice what the SM predicts. The errors are large, so quite possibly both experiments just got a bit lucky, in which case the first few months of 2012 data may not quickly add much to the significance of the signal.” – Dr Peter Woit

The chi-squared test for the “Higgs boson” has two “possibilities”: either it doesn’t exist, or it does exist and is the particle in the mainstream electroweak theory. This is fraud. It’s precisely Joseph Priestley’s error in his phlogiston experiment: either phlogiston exists, or it doesn’t. There was a third possibility: oxygen exists, replacing phlogiston theory. This was recognised by Lavoisier. You need to take account of alternative theories to the Higgs mechanism and the standard electroweak theory before you can claim that the spin-0 boson (if it exists) is the one you are actually looking for. Otherwise, it’s like interpreting the “motion of the sun” across the sky as clear evidence that the sun orbits the earth daily.

The Standard Model doesn’t predict, prima facie, a Higgs boson mass, but given an experimentally determined mass, the Standard Model with the Higgs mechanism does constrain the amplitude of the Higgs signal (the cross-section for the Higgs boson production reactions). Woit points out that the observed amplitude is about twice what is predicted for the apparently observed mass. As we explained earlier, Karl Popper’s falsifiable-prediction methodology is not science: you make a prediction, the experiment confirms the prediction, and then you use this as Stalinist propaganda to claim that the experiment has confirmed the theory (hoping nobody notices the subtle conflation of prediction with theory). Example: Ptolemy’s epicycle theory could “predict” planetary positions using a complex metaphysics. Many of the predictions worked well enough within the accuracy of early observations, so there was no need for Kepler’s more accurate elliptical laws of planetary motion until after Brahe had made more accurate observations. If you claim to set out to “test” Ptolemy’s epicycle theory using a statistical correlation test with only two possible outcomes (Ptolemy’s predictions are true, or the data are random as the null hypothesis), with no proper analysis of alternative models allowed, then the statistical correlation test will “confirm” the flawed model over no correlation.

Statistical correlation tests are the most easily corrupted form of science, and this corruption is rife: you test for “correlation” between one model and the experimental data, given a null (default) hypothesis that the “correlation” is just random coincidence. The flaw is that the “evidence” you gain from a successful correlation test only tells you that the model accords with the data better than random noise. It tells you nothing about the problem that another theory may also agree: e.g. FitzGerald’s, Lorentz’s, Poincaré’s and Larmor’s equations match special relativity’s transformation and the E = mc² law, so “experimental tests” of these equations do not specifically support Einstein’s theory over the more mechanical derivations of the same equations by the earlier investigators. It has also been shown that the confirmed predictions of general relativity come from energy conservation and are not specific confirmation of the geometric space-time continuum model. Therefore, it is Popperian sophistry to claim that a specific theory is “confirmed” by experiments merely when its predictions are confirmed, unless you have somehow disproved the possibility of any other theory predicting the same results by a different route. Politically, this sophistry gives rise to the “historical accident syndrome”, whereby the first theory which gives the correct prediction in a politically correct, fashionable manner is hyped by the popular media as having been “confirmed” by experiment, when in fact only the predictions (which are also given by totally different theoretical frameworks sharing the same mathematical duality in the limits of the experimental regime) are confirmed. This is fascist hubris. We saw it with the earth-centred universe of Ptolemy.
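The null-only testing flaw can be sketched in a few lines. This is a toy example with made-up data and a hand-tuned rival model (nothing here comes from any real experiment): data generated from one model are “confirmed” by a different model, because the test only compares each model against random noise.

```python
# Toy illustration of the null-only test flaw: data generated from one
# model (y = x^1.1) are "confirmed" by a different, hand-tuned straight
# line, because the test only compares each model against the null.
xs = [1.0, 1.5, 2.0, 2.5, 3.0]
data = [x ** 1.1 for x in xs]            # "true" model
linear = [1.17 * x - 0.2 for x in xs]    # rival model, hand-tuned fit
mean = sum(data) / len(data)
null = [mean] * len(xs)                  # null hypothesis: no trend

def sum_sq(model):
    """Sum of squared residuals against the data (unweighted chi-squared)."""
    return sum((d - m) ** 2 for d, m in zip(data, model))

# The wrong (linear) model beats the null decisively, so a null-only
# test "confirms" it even though it is not the true model.
print(sum_sq(linear) < sum_sq(null))  # True
```

Beating the null is a necessary condition for a model, never a sufficient one: only a comparison between rival models can discriminate them.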
Once you have a fashionable model, it gets into the educational textbooks, it is “understood” by the popular media, and any alternative framework is wrongly dismissed as superfluous, unnecessary, boring, etc., without first being properly investigated to see if it fits more data more accurately.

It’s important to note that this is a general problem in politics and human endeavour generally. The advice is to keep to well-worn paths or you will get lost. However, you’re unlikely to find much on well-worn paths, because so many people keep to them, and the probability of finding anything on them is therefore low. Ironically, this point is “controversial” because you get the counter-argument that you’re unlikely to find anything if you go off the beaten track. More to the point, if you do find anything off the beaten track, you still have a difficulty in convincing anybody that it actually exists, as Niccolò Machiavelli explains in the political context (The Prince, Chapter VI): “the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.”

It’s quite correct that a lukewarm argument on a radical and unpopular proposal leads either nowhere or to failure (suppression). You cannot easily overthrow a tyrant with kindly, gentle words alone. By the time a tyrant is susceptible to arguments (in dementia), it is easier to overthrow the regime by other means anyway. Diplomacy is the policy of feeding wolves in the expectation of achieving peace through appeasement. Groupthink is never revolutionary: it is always counter-revolutionary, developing political structures to stabilize a success by preventing a further revolution. New ideas are only welcome within the narrow confines of an existing theory, like epicycles.

Irving L. Janis, Victims of Groupthink, Houghton Mifflin, Boston, 1972

Janis, civil defense research psychologist and author of Psychological Stress (Wiley, N.Y., 1958), Stress and Frustration (Harcourt Brace, N.Y., 1971), and Air War and Emotional Stress (RAND Corporation/McGraw-Hill, N.Y., 1951), begins Victims of Groupthink with a study of classic errors by “groupthink” advisers to four American presidents (page iv):

“Franklin D. Roosevelt (failure to be prepared for the attack on Pearl Harbor), Harry S. Truman (the invasion of North Korea), John F. Kennedy (the Bay of Pigs invasion), and Lyndon B. Johnson (escalation of the Vietnam War) … in each instance, the members of the policy-making group made incredibly gross miscalculations about both the practical and moral consequences of their decisions.”

Joseph de Rivera’s The Psychological Dimension of Foreign Policy showed how a critic of Korean War tactics was excluded from the advisory group, to maintain a complete consensus for President Truman. Schlesinger’s A Thousand Days shows how President Kennedy was misled by a group of advisers on the decision to land 1,400 Cuban exiles in the Bay of Pigs to try to overthrow Castro’s 200,000 troops, a 1:143 ratio. Janis writes in Victims of Groupthink:

“I use the term “groupthink” … when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.”(p. 9)

“… the group’s discussions are limited … without a survey of the full range of alternatives.”(p. 10)

“The objective assessment of relevant information and the rethinking necessary for developing more differentiated concepts can emerge only out of the crucible of heated debate [to overcome inert prejudice/status quo], which is anathema to the members of a concurrence-seeking group.”(p.61)

“One rationalization, accepted by the Navy right up to December 7 [1941], was that the Japanese would never dare attempt a full-scale assault against Hawaii because they would realize that it would precipitate an all-out war, which the United States would surely win. It was utterly inconceivable … But … the United States had imposed a strangling blockade … Japan was getting ready to take some drastic military counteraction to nullify the blockade.”(p.87)

“… in 1914 the French military high command ignored repeated warnings that Germany had adopted the Schlieffen Plan, which called for a rapid assault through Belgium … their illusions were shattered when the Germans broke through France’s weakly fortified Belgian frontier in the first few weeks of the war and approached the gates of Paris. … the origins of World War II … Neville Chamberlain’s … inner circle of close associates … urged him to give in to Hitler’s demands … in exchange for nothing more than promises that he would make no further demands”(pp.185-6)

“Eight main symptoms run through the case studies of historic fiascoes … an illusion of invulnerability … collective efforts to … discount warnings … an unquestioned belief in the group’s inherent morality … stereotyped views of enemy leaders … dissent is contrary to what is expected of all loyal members … self-censorship of … doubts and counterarguments … a shared illusion of unanimity … (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent)… the emergence of … members who protect the group from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions.”(pp.197-8)

“… other members are not exposed to information that might challenge their self-confidence.”(p.206)

Higgs versus Nambu-Goldstone bosons, supersymmetry and a neutrino condensate

In 1973, D.V. Volkov and V.P. Akulov published a paper entitled “Is the neutrino a goldstone particle?”, in Physics Letters B, Volume 46, Issue 1, Pages 109-110. A neutrino is a spin-1/2 fermion, not a boson. Suppose two massive neutrinos form a Bose-Einstein condensate, with effective spin-0 (analogous to Cooper pairs of electrons, an effective boson in superconductivity).

W+ + W- + Z0 -> 2H0

80.4 + 80.4 + 91.2 = 2(126) GeV

where each boson is a condensate of a pair of spin-1/2 fermions.
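The arithmetic of this proposed relation can be checked directly, using the rounded boson masses quoted in the text (this verifies only the sum, not the physics):

```python
# Direct check of the arithmetic in the relation above, using the
# rounded weak-boson masses quoted in the text (GeV).
m_w = 80.4  # W boson mass
m_z = 91.2  # Z boson mass
m_h = (m_w + m_w + m_z) / 2  # half the summed triplet mass
print(m_h)  # ~126 GeV
```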

To annoy Ed Witten and confuse the string theorists (who are in knots anyway), let’s name as “supersymmetry” the checkable theory that Standard Model bosons are Bose-Einstein condensates of Standard Model fermions. (This has nothing to do with the mythical 1:1 boson:fermion supersymmetry theory in string theory, which increases the number of parameters from 18 in the Standard Model to 125 without predicting any of their values, just to try to make couplings similar at the uncheckable Planck scale.) For a massive Nambu-Goldstone or Higgs boson, this ties up the loose ends in electroweak theory:

“Higgs did not resolve the dilemma between the Goldstone theorem and the Higgs mechanism. … I emphasize that the Nambu-Goldstone boson does exist in the electroweak theory. It is merely unobservable by the subsidiary condition (Gupta condition). Indeed, without Nambu-Goldstone boson, the charged pion could not decay into muon and antineutrino (or antimuon and neutrino) because the decay through W-boson violates angular-momentum conservation. … I know that it is a common belief that pion is regarded as an “approximate” NG boson. But it is quite strange to regard pion as an almost massless particle. It is equivalent to regard nuclear force as an almost long-range force! The chiral invariance is broken in the electroweak theory. And as I stated above, the massless NG boson does exist.”

– Professor N. Nakanishi, Not Even Wrong blog comment, November 14, 2010 at 9:42 pm (See our diagram of this pion spin “anomaly” above.)

“Pion’s spin is zero, while W-boson’s spin is one. People usually understand that the pion decays into a muon and a neutrino through an intermediate state consisting of one W-boson. But this is forbidden by the angular-momentum conservation law in the rest frame of the pion.”

– Professor N. Nakanishi, Not Even Wrong blog comment, November 15, 2010 at 1:46 am.
Nakanishi states that despite the Higgs mechanism which produces massive weak bosons (Z and W massive particles), a massless Nambu-Goldstone boson is also required in electroweak theory, in order to permit the charged pion with spin-0 to decay without having to decay into a spin-1 massive weak boson. In other words, there must be a “hidden” massless alternative to weak bosons as intermediaries. This is explained clearly in our theory of SU(2).

The nature of neutrinos (Majorana or Dirac) is involved. Please see our paper for a discussion of the difference and its importance for chiral symmetry and dark matter: right-handed neutrinos don’t undergo weak interactions, so they would be dark matter. The fact that neutrinos change flavour in transit is evidence for a small mass and thus is indirect evidence for the existence of right-handed massive neutrinos. We discussed the recent CERN LHC evidence for a massive ~126 GeV Nambu-Goldstone boson in posts linked here and in the previous post here and here.

The Standard Model as it stands can’t predict the mass of the Higgs boson, and the Higgs mass mechanism ignores quantum gravity considerations (where mass is quantized gravitational charge). It’s not even proved (only surmised by groupthink dogma) that ~126 GeV is a rest mass. As a predictive mechanism in place of the Higgs mechanism, we showed a chiral SU(2) electromagnetic Yang-Mills theory in which the chiral left-handedness of spin appears as Lenz’s law of the magnetic field curl helicity around the direction of motion of an electric charge. This fact comes from Maxwell’s electromagnetism treatise of 1873 and is defensible using Weyl’s 1929 chiral parity-breaking interpretation of Dirac’s spinor, which Pauli opposed; it is completely separate from the SU(2) left-handed spinor evidence which is incorporated in the Standard Model (by excluding right-handed neutrinos). Suppose electroweak symmetry breaking involves some kind of annihilation of the triplet of weak bosons to form a pair of spin-0 H-bosons (H standing preferably for Hypothetical, not Higgs):

W+ + W- + Z0 -> 2H0

80.4 + 80.4 + 91.2 = 2(126) GeV

(Dharwadker and Khachatryan’s prediction from 2009. See also their guest post on Dr Dorigo’s blog. It seems that any abstract reasoning behind their formula is as physically impenetrable as the Koide formula. However, like Rydberg’s empirical formula in the hands of Bohr years later, it may prove useful in developing physics.)

If the two spin-0 bosons have equal masses, each has a mass of 126 GeV. If one H-boson spinor is left-handed and one is right-handed, only the left-handed one is seen, because it is the only one which undergoes weak interactions. Notice an analogy between this simple H formula and one side of the Koide formula, summing lepton masses:

(MW+ + MW- + MZ0)/2 = MH

(Me + Mmuon + Mtauon) = (2/3)(Me^1/2 + Mmuon^1/2 + Mtauon^1/2)^2.
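For comparison, the Koide relation itself can be checked numerically with the measured charged-lepton masses (the values below are taken from standard particle-data tables, in MeV):

```python
# Numerical check of the Koide relation, using measured charged-lepton
# masses in MeV (values taken from standard particle-data tables).
from math import sqrt

m_e, m_mu, m_tau = 0.511, 105.658, 1776.86  # MeV

lhs = m_e + m_mu + m_tau
rhs = (2.0 / 3.0) * (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(abs(lhs - rhs) / lhs)  # fractional mismatch, well below 0.1%
```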

Does the H-boson have 126 GeV rest mass or not? Not necessarily! Many assume that when a H-boson is converted into four leptons or two gamma rays in the LHC ATLAS and CMS detectors, it is a massive 126 GeV spin-0 particle decaying. However, a massless boson like a gamma ray can undergo pair-production in a strong field (and LHC collisions create strong fields!), despite having no rest mass. The fact that you always see 126 GeV as the total energy of the spin-0 boson interaction does not prove that a particle of 126 GeV rest mass is decaying because of its mass. A gamma ray can undergo pair-production to form a pair of particles only if it has a total energy of 1.022 MeV or more, because pair-production is only possible when the gamma ray energy exceeds the combined rest mass of the particles it forms. (Pair-production is a non-reversible process: when an electron and positron annihilate, conservation of momentum shows you get a pair of gamma rays coming off in opposite directions, each being the recoil momentum of the other. You can argue that there is symmetry if a gamma ray interacts with a virtual photon which behaves like a gamma ray to cause pair-production in strong fields, although here the virtual photon is off-shell, not on-shell like a gamma ray.) So you could be fooled by this false pair-production logic when considering the case of H-boson “decay” into four leptons or two gamma rays: you could claim that gamma rays have a rest mass of 1.022 MeV, or that the H-boson has a “rest mass” of 126 GeV. Both claims would be equally false. Higgs electroweak interactions are new territory: electroweak mixing in the Standard Model is empirically checked, but the details of electroweak symmetry breaking are not yet fully established. Speculative theoretical conjecture must not be confused with facts.
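The threshold logic used in this paragraph reduces to a one-line condition (a minimal sketch; the function name is ours, not from any library):

```python
# Pair-production threshold logic from the paragraph above: a photon can
# create an electron-positron pair only if its energy covers both rest
# masses; carrying 1.022 MeV does not mean the photon has rest mass.
ELECTRON_REST_MEV = 0.511

def can_pair_produce(photon_energy_mev):
    """True if the photon energy covers the pair's combined rest energy."""
    return photon_energy_mev >= 2 * ELECTRON_REST_MEV

print(can_pair_produce(1.022))  # True: at threshold
print(can_pair_produce(0.8))    # False: below threshold
```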

A spin-0 Nambu-Goldstone boson therefore doesn’t have to have rest mass, or to “decay”, in order to produce 126 GeV four-lepton or two-gamma-ray products. Like a massless gamma ray, which always produces fermion pairs from an energy of 1.022 MeV or more, the spin-0 Nambu-Goldstone boson could be massless, carrying energy without rest mass. The 126 GeV energy (mistaken for the Higgs boson rest mass) is then a result of the interaction above: half the sum of the weak boson masses. The electroweak symmetry-breaking boson only has rest mass if there is explicit symmetry breaking in U(1) × SU(2), such as occurs in the standard electroweak theory, where electromagnetism is treated as a U(1) parity-conserving interaction and SU(2) as a parity-breaking (left-handed spinor) interaction. If electrodynamics and weak interactions both have the same chiral properties, there is no explicit symmetry breaking, only spontaneous symmetry breaking.


Standard model electroweak theory: requires massive spin-0 Higgs boson because of explicit electroweak symmetry breaking, since U(1) conserves parity but SU(2) doesn’t conserve parity (it is left-handed).

Alternative electroweak theory: spontaneous symmetry breaking produces a massless (not massive) spin-0 boson. Both electrodynamics and weak interactions are derived from SU(2); massless SU(2) bosons give the electromagnetic interaction, massive SU(2) bosons give the weak interaction. Both electromagnetism and weak interactions are chiral: the chiral handedness of the electromagnetic interaction is seen in the handedness of the magnetic field helicity around the path of a moving charge. Magnetic fields wouldn’t exist according to Maxwell’s theory of the mechanism for magnetism (gauge boson spin handedness) if the electromagnetic interaction obeyed parity conservation, so it doesn’t conserve parity.

Copy of a comment submitted to

There is an illustration here.

2W + Z -> 2H

2(80.4) + 91.2 = 2(126) GeV.

Note that 2W -> H is one Standard Model Higgs production interaction, while

truth quark + anti-truth quark -> H

is another Standard Model Higgs production interaction. If we treat this second example as equivalent to a Bose-Einstein condensate (each quark being one fermion in the condensate boson), the Z boson is in some sense equivalent to a spin-1 version of the H spin-0 boson, so

2W + Z -> 2H

is feasible, although only one of the two H bosons has the observed spin-0; the other is spin-1 (a right-handed spinor which, if it doesn’t participate in weak interactions, remains invisible to ATLAS and CMS).
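The rest-mass bookkeeping in the 2W + Z -> 2H reaction above is trivially checkable (masses in GeV are the rounded values used in the text):

```python
# Rest mass-energy bookkeeping for the proposed 2W + Z -> 2H reaction,
# using the rounded boson masses quoted in the text (GeV).
M_W = 80.4   # W boson mass
M_Z = 91.2   # Z boson mass

input_energy = 2 * M_W + M_Z      # total rest mass-energy of 2W + Z
energy_per_h = input_energy / 2   # shared equally between the two H bosons

print(input_energy, energy_per_h)  # 252.0 126.0
```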

Dr Peter Woit’s representation theory for electroweak symmetry

Understanding spin is crucial to QFT. As shown in our paper, page 23, Figure 17, fermion spin results in angular momentum transfer in gauge boson exchange, producing magnetic fields, as Maxwell found in articles 822-3 of the third and final edition of his 1873 Treatise on Electricity and Magnetism: “The … action of magnetism on polarized light leads … to the conclusion that in a medium … is something belonging to the mathematical class as an angular velocity … We must therefore conceive the rotation to be that of very small portions of the medium, each rotating [spin angular momentum].” (See Fig. 15 on page 21 of my paper for the origin of Maxwell’s theory.)

Maxwell’s deterministic magnetic field model, with the spin of what are now called “field quanta” as the basis of magnetic fields, makes electromagnetism an SU(2) Yang-Mills theory, not essentially a U(1) theory as assumed by Feynman and Pauli for QED (see Figure 31 in my paper for how isospin and electric charge are then related under SU(2) in the standard model). Abelian U(1) hypercharge still exists, but only as the basis for quantum gravity, giving mass to the weak bosons via Weinberg-Glashow mixing, which replaces the Higgs mass mechanism. The left-handed symmetry breaking due to mixing can still produce spin-0 Nambu-Goldstone bosons with a mass/gravitational charge of half the sum of the gravitational charges of the three weak bosons, (80 + 80 + 91)/2 = 125.5 GeV, and this accords with the Dirac spinor, the SU(2) Pauli spin matrix, and Weyl’s 1929 argument that Dirac’s spinor is chiral.

Copy of a comment submitted to: concerning the 2009 prediction by Dharwadker and Khachatryan of a (80 + 80 + 91)/2 = 125.5 GeV spin-0 massive Nambu-Goldstone boson:

Cooper pairs of spin-1/2 fermions produce a spin-1 boson (condensate), explaining superconductivity; so, since the Higgs spin-0 boson is already a boson, your case is that you’re not going to have two “Higgs fermions” forming a Cooper pair.

However, they do point out on pages 2-3:

“Theoretically, it is known that the SM Higgs boson is one neutral quantum component of the Higgs field, along with another neutral and two charged components acting as Goldstone bosons.”


What they are really doing (so far as their prediction is valid, ignoring BS arm-waving) is replacing this SM Higgs mechanism with a ~126 GeV spin-0 Higgs boson formed from two half-integer spin particles (fermions).

While “supersymmetry” (postulating an additional high mass boson for every fermion in order to try to achieve similar couplings for all interactions at the Planck scale) is arm-waving unfalsifiable speculation, there is a glimmer of relevant physics you can gain here, if you go for a simpler and more predictive “supersymmetry” in which all bosons are composites of either massless or massive fermions.

Hence, SU(2) can be thought of as having two different charges of spin-1/2 fermions and their antiparticles, which can combine in 2 × 2 = 4 ways, producing three distinct bosons with electric charges +1, -1, and 0 (there are two ways to get zero electric charge, thus a total of only three kinds of bosons from two charges of fermions).
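The 2 × 2 counting above is easy to make concrete. A minimal sketch, in which the ±1/2 charge assignments for the two fermion “charges” are an illustrative assumption (not taken from the text), chosen so the composite bosons come out at +1, -1 and 0:

```python
from itertools import product

# Two fermion "charges" and their antiparticles. The +/-0.5 values are an
# illustrative assumption, chosen to yield composite charges +1, -1, 0.
fermion_charges = {"f1": +0.5, "f2": -0.5}
antifermion_charges = {"anti-f1": -0.5, "anti-f2": +0.5}

# All 2 x 2 = 4 fermion-antifermion pairings and their composite charges:
pairings = {
    (f, af): qf + qaf
    for (f, qf), (af, qaf) in product(fermion_charges.items(),
                                      antifermion_charges.items())
}

print(len(pairings))                   # 4 pairings ...
print(sorted(set(pairings.values())))  # [-1.0, 0.0, 1.0]: only 3 distinct bosons
```

Two of the four pairings carry zero net charge, which is exactly why four combinations yield only three distinct boson charges.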

Please see page 51 of Woit’s 2002 paper “QFT and representation theory” (part 10, Speculative remarks about the standard model), where he shows that taking U(2) as a subset of SO(4) gives the standard model electroweak fermions with chiral features, for both leptons and quarks, if the hypercharge is selected to make the “overall average U(1) charge of a generation of leptons and quarks to be zero.”

This is the underlying physics of the so-called “Higgs boson” mass (mass is quantum gravitational charge, which the “Higgs mechanism” ignores), because since 1996 we have been publishing a predictive U(1) gauge gravity theory, and the charge of quantum gravity is mass: so Woit’s 2002 argument about averaging hypercharge should also apply to the masses of the particles. If there are right- and left-handed weak gauge bosons, half of the mass (the right-handed spinors) is “dark matter”, because of its short range (due to the mass) and the fact that it doesn’t undergo weak interactions. So Woit’s 2002 argument of averaging charges, applied to the gravitational charges (masses) of the weak bosons, with only half of them engaging in weak interactions, could substantiate the formula (80.4 + 80.4 + 91.2)/2 = 126 GeV.

Weyl’s chiral electromagnetism was rejected by Pauli, who believed in parity conservation and thus apparently didn’t understand Lenz’s law in electromagnetism: all electrons in motion produce the same chiral helicity of magnetic field curl around the current, and this chiral magnetic field is, in Maxwell’s theory, due to spin. Thus, the curl of the magnetic field around a current is indicative of the chiral SU(2) nature of electromagnetism, if Maxwell’s theory of electromagnetism is correct. The problem for Woit is that spin quantum numbers are essential in quantum mechanics for the Pauli exclusion principle, which is an electromagnetic effect, not a weak-force SU(2) interaction. Thus, there is empirical evidence for SU(2) spinor phenomena in electrodynamics, indicating that U(1) is not the QED symmetry. Dr Thomas S. Love also makes this point by quoting Hans C. Ohanian’s article “What is spin?” from the American Journal of Physics, v54, 1986, pp. 500-505:

“… contrary to the common prejudice, the spin of the electron has a close classical analog: it is an angular momentum of exactly the same kind as carried by the fields of a circularly polarized electromagnetic wave.”

However, Gerard ‘t Hooft rejects the spin by using a false argument based on a solid electron (which doesn’t exist), stating on page 27 of his 1997 Cambridge University Press book In Search of the Ultimate Building Blocks: “the ‘surface of the electron’ would have to move 137 times as fast as the speed of light.” This is a false objection to spin, since the classical solid model of the electron upon which this calculation is based is wrong: the electron doesn’t have a surface moving faster than light. The spin is conveyed by field quanta, not by a classical solid electron revolving like a planet.

Nitpicker (January 3, 2012 at 11:02 am): “A tad puzzled why you say “Spin(2n) as a double cover of SO(2n)”. For example Spin(3) = SU(2) is the double cover of SO(3) is the classic example of spin angular momentum.”

Peter Woit (January 3, 2012 at 11:43 am): “In the course I’ll certainly discuss the relationship between SO(3) and Spin(3)=SU(2) and their reps, but for the general case of SO(n) and Spin(n), even and odd n behave somewhat differently. In the even case there’s a beautiful parallelism with the symplectic group which I want to discuss, so that’s the case I’ll work out in detail. If you take a look at the old lecture notes linked to, maybe you can see what I’m doing.”

Woit’s 2002 paper on QFT and representation theory offers, on page 51, an interesting and relevant U(2) representation in 4-d spacetime SO(4), which yields the correct chiral electroweak particle charges. This is interesting because, as far as Woit is concerned, U(2) produces U(1) x SU(2), which is fair enough mathematically; but from our point of view the U(1) quantum gravity still contributes effectively (akin to hypercharge in the standard model) to SU(2) by Weinberg-Glashow mixing, although the actual mechanism is that the fractional SU(2) electric charges simply share field energy with mass (gravitational charge), as our model predicts. U(1) not only gives mass to the SU(2) left-handed weak bosons by Weinberg-Glashow mixing, replacing the Higgs mass mechanism (although you can still have spin-0 massive Nambu-Goldstone bosons from the resulting symmetry breaking); it also checkably predicted dark energy accurately, two years ahead of its discovery, and predicts gravitation. General relativity is just a classical approximation, in which the Weyl quantum gauge type backreaction on the gravitational field is modelled by the contraction of the metric due to mass-energy. (This has nothing to do with Weyl’s earlier 1918 quantum gravity theory, which incorrectly quantized the metric, as explained in my paper.)

In the Einstein field equation, which relates the Ricci curvature tensor to the stress-energy field source tensor, the product of the Ricci scalar and the metric represents the equivalent of the minimal coupling procedure in QED: the gravitational field is contracted due to the gravitational energy expended on mass. In other words, the contraction term in general relativity is the nearest gravitational equivalent to the running coupling behind charge renormalization in QED. The gravitational field comes with only one sign of charge, not two as in electromagnetism, so it is not renormalized due to pair-production polarization like electromagnetism. But it is renormalized in the sense that mass-energy is conserved, and the use of gravitational field energy affects the mass-energy which is the source of the gravitational field. You can’t do work by gravity without taking energy out of the gravitational field. Similarly, in electromagnetism, an electric charge can’t polarize virtual charges without some of the electric charge energy being used (core field “screening”). If an apple falls off a tree and hits the ground with a thump, the energy of the sound waves has come from gravitons in the gravitational field which accelerated the apple, converting gravitational potential energy (offshell field energy) into the kinetic energy of the apple (onshell energy).
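The contraction term discussed above (the product of the Ricci scalar and the metric) appears explicitly in the standard form of the field equation; writing it out identifies which term carries the energy-conservation role:

```latex
R_{\mu\nu} \;-\; \underbrace{\tfrac{1}{2}\,R\,g_{\mu\nu}}_{\text{contraction term}} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```

The -(1/2)Rg term is what makes the left-hand side divergenceless (via the Bianchi identity), matching the divergenceless stress-energy tensor on the right, which is the conservation requirement described in the text.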

This is the gravitational field “backreaction”. In QED, when the electromagnetic field does work, for instance in polarizing the vacuum, the energy used to polarize the vacuum has a backreaction upon the charge, “screening” it. This is just conservation of mass-energy. You cannot do work ordering the vacuum without expending energy. Einstein’s field equation contraction (needed to make both sides divergenceless, for energy conservation) is analogous to this backreaction in the electromagnetic field. The energy of the work done by the gravitational field on mass (holding a planet together, for example) is exhibited by the conversion of gravitational charge (mass) into this energy. This is equivalent to a contraction of spacetime in the vicinity of mass. In other words, general relativity is already equivalent to QED in terms of quantum field theory. The major flaw of general relativity is the stress-energy tensor source term, which cannot correctly model discontinuous particles, but has to use “perfect fluid continuum” classical (smooth) approximations for the actually discontinuous distribution of matter. But the basic structure of Einstein’s field equation, with the relativistic effects of the contraction term, correctly models energy conservation.

Multipath interference causes the indeterminism in quantum field theory

On 30 Nov 2011, we completed a 63-page, 7.5 MB draft revision of the standard model, including quantum gravity predictions and confirmations for particle masses, couplings, etc. This is based on quantum field theory in Feynman’s approach, not Woit’s. Woit’s article in the American Scientist, “Grappling with Quantum Weirdness”, claims that “quantum mechanics” (he doesn’t distinguish between 1st and 2nd quantization, i.e. between one wavefunction and a path integral over separate wavefunctions for every path) “postulates that the state of a physical system is completely characterized by a vector in an infinite-dimensional vector space (the familiar quantum-mechanical “wavefunction”)”. Actually, each wavefunction amplitude is given by exp(iS/ħ), and you sum an infinite number of these wavefunctions, one for each path. So, yes, on an Argand diagram this is represented by an infinite number of vectors (an infinite-dimensional Hilbert space), and the resultant (the integral over an infinite number of wavefunctions) is then equivalent to a single wavefunction for 1st quantization, but this is a false and woolly way of thinking. 1st quantization (a single wavefunction) is not relativistic and is not real: it’s a mathematical artifact of non-relativistic quantum mechanics. It’s wrong physically: there are field quanta, and the multipath interferences caused by these field quanta produce indeterminacy. The uncertainty principle is not a physical limit to understanding: in QFT it is caused by multipath interference from field quanta, as Feynman proves in his book QED (1985). Woit ignores this, proceeding instead with:

“The general consensus of the physics community is that Bohr’s point of view triumphed, enshrined in what became known as the “Copenhagen interpretation” of quantum mechanics. According to Bohr, the state-vector of a physical system evolves in time according to the Schrödinger equation and does not typically have a well-defined value for classical observables like position and velocity. When the system interacts with an experimental apparatus, the state-vector “collapses” into a state with a well-defined value of the observable being measured. In general, Bohr’s interpretation works perfectly well operationally, but it is conceptually incoherent and leaves important questions unanswered. How exactly does this “collapse” take place? … Most physicists generally believe that quantum mechanics, in its relativistic version as a theory of quantum fields, is a complete, consistent and highly successful conceptual framework.”

Woit shows he has no grasp of how 2nd quantization physically differs from 1st quantization. There is no single wavefunction for any particle: every particle has a separate wavefunction amplitude for every single potential and real interaction with an onshell or offshell particle. Its own field consists of offshell particles, with which it interacts. There is no single wavefunction! You always have a path integral, summing an infinite number of possible interaction paths. The Schrödinger equation has only a single wavefunction and is thus wrong: the real wavefunctions don’t “evolve” or “collapse”:

“If you … use the ideas that I’m explaining in these lectures – adding arrows [wavefunctions] for all the ways an event can happen – there is no need for an uncertainty principle! … The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.” – Richard P. Feynman, QED, 1990, pp. 55-56, and 84-85.

(Feynman’s position is a path integral over off-shell scattering interactions of a particle with its own field, which is just Sir Karl Popper’s argument on page 303 of his 1979 Oxford University Press book Objective Knowledge: “… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [Popper, The Logic of Scientific Discovery, German ed., 1934] … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.”)
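Feynman’s “adding arrows” procedure can be illustrated numerically: one unit-length complex “arrow” exp(iS/ħ) per path, all summed. A toy sketch, in which the action values are invented purely for illustration, with units chosen so that ħ = 1:

```python
import cmath

# Toy illustration of "summing arrows": one wavefunction amplitude
# exp(iS/hbar) per path. The action values below are made up purely for
# illustration; units are chosen so that hbar = 1.
HBAR = 1.0

def total_amplitude(actions):
    """Sum one unit 'arrow' exp(iS/hbar) for each path's action S."""
    return sum(cmath.exp(1j * s / HBAR) for s in actions)

# Paths with nearly equal actions reinforce (constructive interference):
coherent = total_amplitude([1.00, 1.01, 0.99])

# Paths with widely spread actions largely cancel (destructive interference):
spread = total_amplitude([0.0, 2.1, 4.2])

print(abs(coherent) > abs(spread))  # True: near-stationary-action paths dominate
```

The resultant arrow is dominated by bundles of paths whose actions barely differ, which is the path-integral origin of apparently classical trajectories.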

As Dr Love explains, the eigenstates in quantum mechanics are artificial discontinuities which produce wavefunction “collapse” mathematically, not physically, whenever a measurement is taken. There are no real eigenstates. The electron has a path integral of field quanta interference which determines (to the electron, not to a human, who can’t do the path integral accurately or non-perturbatively) where it is at any time, so there is no real wavefunction collapse (except in the 1st quantization non-relativistic Schrödinger equation) when a measurement is taken. The point is, as Feynman explains very clearly, there is a difference between reality and 1st quantization. It is a lie that a single wavefunction exists; this is proved by the fact that the Schrödinger equation is non-relativistic and hence wrong. It is quantum mechanics double-talk to claim that 1st quantization is not replaced by 2nd quantization. This double-talk is equivalent to claiming that phlogiston theory is a duality to oxygen theory, that epicycles are a duality to Kepler’s elliptical orbits, or that Piltdown man was not really a fraud but a very helpful evolutionary pedagogical tool for convincing/teaching students, until discredited.