The invisible glass ceiling of the global greenhouse, versus factual data


Fig. 1: the latest UAH satellite-based global (lower troposphere) temperature data (credit: Dr Roy Spencer).

According to all IPCC greenhouse effect models, air warmed by infrared absorption due to increasing atmospheric CO2 causes increased water evaporation, and the evaporated water is assumed to have a bigger warming effect than the CO2 itself. This is the “positive feedback” assumption, essential to all IPCC climate change models. However, this assumption contravenes Archimedes’ law of buoyancy. Archimedes’ law shows that if the tiny temperature rise from the tiny increase in CO2 causes an increase in water evaporation, and the evaporated water (humid air) then absorbs infrared and warms up, it should rise buoyantly, so that the average amount of cloud cover increases, which shadows the surface and causes overall negative feedback. Figure 1 shows microwave oxygen temperature measurements for the lower atmosphere (troposphere), and does not show surface temperatures where the surface is under cloud cover, which is the only situation in which negative feedback could be detected in real-world data:

“Since 1978 Microwave sounding units (MSUs) on National Oceanic and Atmospheric Administration polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen, which is proportional to the temperature of broad vertical layers of the atmosphere. Measurements of infrared radiation pertaining to sea surface temperature have been collected since 1967.”

Therefore, Figure 1 exaggerates the global warming at the surface, where increased cloud cover (negative feedback from H2O) opposes and essentially offsets CO2-induced temperature rises. Air near the upper (sunlight-reflecting) layer of clouds is heated because infrared (long wavelength) radiation is absorbed by the upper parts of a cloud, but the air below clouds is cooled. There is no way for a satellite to measure surface temperatures below clouds: satellites have two methods of measuring temperature, and neither gives the surface temperature under cloud cover. One is the microwave emission from oxygen (which is distributed through the atmosphere, above and below clouds), and the other is the Planck radiation spectrum, which only measures surface temperature when there is no cloud cover obscuring the surface (otherwise it tells you the temperature of the upper parts of the cloud cover).


Fig. 2: a comparison of direct surface Planck temperature measurements, which are not possible through cloud cover (and are therefore limited to clear skies, which rules out the inclusion of negative feedback data and ensures that only positive feedback from H2O can be included), with the UAH/RSS tropospheric oxygen microwave emission temperatures. There is a very close fit, as you would expect. None of the data curves is a true surface temperature record, because none includes negative feedback from shadows cast on the surface by evaporated water which has been heated by sunshine, risen buoyantly to high altitude by Archimedes’ law, and gradually formed extra cloud cover. As soon as the clouds form, the satellites cannot measure surface temperatures in cloud-covered areas, so negative feedback data is always excluded.

The deceit in this graph is two-fold. First, satellites cannot by any means measure negative feedback effects which only occur under cloud cover, so they are biased in favour of clear skies where H2O feedback on CO2 can only ever be positive. Second, the straight line through the data points is deliberately misleading.


Fig. 3: negative feedback (increased cloud cover) implies that surface temperature – if it could be detected under clouds, without the positive feedback bias in satellite data – increases with CO2-induced warming only until the extra cloud cover cancels out further temperature rises. The sky becomes slightly more cloudy to compensate for CO2: a self-regulation mechanism, like a thermostat, as far as the surface is concerned. This negative feedback effect can never be seen properly in existing satellite data, which either average the air temperature over the entire height of the troposphere, obscuring the negative feedback in the smaller depth of air under the clouds (microwave oxygen emission sensors), or else exclude negative feedback data altogether by measuring the Planck temperature of the surface only under cloud-free clear skies (which automatically excludes all negative feedback effects from cloud cover).
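
To make the thermostat analogy concrete, here is a minimal numerical sketch of the feedback loop just described. Every parameter in it (the no-feedback warming, the cloud-fraction response and the shading strength) is an invented illustrative value, not a measurement; only the structure of the loop is the point.

```python
# Minimal sketch of the cloud "thermostat" loop described above.
# All parameters are invented for illustration; none are measured values.

dT0 = 1.0               # assumed no-feedback CO2 warming, kelvin
k_cloud = 0.05          # assumed extra cloud fraction per kelvin of warming
cool_per_cloud = 15.0   # assumed surface cooling (K) per unit of extra cloud fraction

dT = dT0                # start from the no-feedback warming
for _ in range(100):    # iterate the feedback loop to equilibrium
    extra_cloud = k_cloud * dT
    dT = dT0 - cool_per_cloud * extra_cloud

# Closed form of the same loop: dT = dT0 / (1 + k_cloud * cool_per_cloud)
print(f"warming with negative cloud feedback: {dT:.2f} K (vs {dT0:.2f} K with none)")
```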

The temperature proxy data is all a fraud: before 1960, tree ring growth must be used as a proxy despite its failure to correlate with direct temperature measurements after 1960 (leading to the “hide the decline” Phil Jones/Michael Mann IPCC hockey stick curve). It’s clear why tree ring growth isn’t a reliable proxy: trees simply don’t grow as a function of temperature variations alone. The amounts of cloud cover and rainfall sensitively determine growth, so it is a falsehood first to assume that temperature is the only variable, and then to turn this assumption about data interpretation into “evidence” that somehow defends the assumption in the first place. From 1960 until satellite data arrived around 1980, they used weather station data affected by “urban heat island” hot air pollution from nearby growing cities, which has nothing to do with the CO2 greenhouse effect but conveniently gives data which can be manipulated to contribute to a hockey stick curve. So the IPCC chooses different unreliable data sources that fit different parts of a curve mimicking the CO2 rise curve, then joins them together, omitting the parts of the temperature proxy data which do not convey the intended correlation.

Summary: all IPCC climate models assume H2O causes positive feedback which amplifies a tiny amount of warming from CO2 into a major problem. This assumption is only valid if Archimedes’ law of buoyancy (the rising of infrared-heated moist air to condense and form clouds) is ignored. They give no reason for ignoring buoyancy. The greenhouse effect is exactly what the IPCC models assume, but the earth isn’t a greenhouse, because clouds form in the earth’s atmosphere (not in a greenhouse) in response to temperature-dependent ocean water evaporation, and the clouds shadow the surface and thus have a cooling, negative feedback effect. This negative feedback can’t be seen by the Planck spectrum surface temperature instruments in satellites because they can’t see through cloud cover. Although the microwave sensors in satellites do respond in part to oxygen temperatures below clouds, they obfuscate negative feedback by averaging the temperature of all the oxygen in the troposphere, including positive feedback from air near the upper (sunlight-heated) parts of clouds. Using a greenhouse – with a glass ceiling that prevents cloud cover – as a model for the earth is a lie. The earth doesn’t have a glass ceiling to prevent increasing cloud cover from increasingly CO2-heated ocean evaporation. All IPCC models and data are frauds. Clearly there is a small temperature rise from CO2 alone, but this causes an increase in cloud cover which largely offsets it. The IPCC lie is to assume falsely that climatic cloud cover is independent of temperature, and then to fiddle the data to coincide with the false predictions from its wide array of false models.

Sure, the climate is changing and CO2 is increasing, but the climate is always changing, so at any time in history there is a 50% chance of rising temperatures and a 50% chance of falling temperatures. In the 1970s, fanatical experts sought funding for a scare story that predicted a new ice age due to falling temperatures caused by pollution blocking out sunlight. Now it’s the opposite. But the CO2-temperature correlation is qualitatively meaningless, because there is a massive 50% chance that by sheer coincidence temperatures will be rising like CO2, and the correlation is quantitatively a fiddle, because there is no reliable data that properly includes negative feedback for the whole planet (i.e. surface temperatures under cloud cover).
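
The coincidence point can be illustrated numerically. The toy sketch below uses independent random walks standing in for any two drifting quantities (it uses no climate data at all, and the 0.5 threshold is an arbitrary choice): series that merely trend, with no causal link whatsoever, routinely show sizeable correlations by chance.

```python
# Toy sketch: independent random walks (no causal link) frequently show
# strong correlation purely by coincidence. No climate data is used here.
import numpy as np

rng = np.random.default_rng(0)
trials, length = 1000, 200
strong = 0
for _ in range(trials):
    x = np.cumsum(rng.normal(size=length))   # one drifting series
    y = np.cumsum(rng.normal(size=length))   # an unrelated drifting series
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:   # arbitrary "strong" threshold
        strong += 1

print(f"{strong / trials:.0%} of independent random-walk pairs have |r| > 0.5")
```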

People need to be told:

(1) that the earth is not a greenhouse, because cloud cover increases and cools the earth (cancelling most of the CO2 effect, not amplifying it) as the oceans are warmed slightly by CO2 in the atmosphere (something that does not happen inside a greenhouse, because a greenhouse contains no oceans or clouds), and

(2) that satellite temperature data either use clear-sky-area Planck spectrum measurements or else average the oxygen microwave emissions from the entire troposphere, which dilutes the effect of the cloud cover. In neither case does the satellite data include negative feedback on surface temperatures from increasing cloud cover overhead. So the satellite data is all biased against including observed negative feedback from H2O, and towards including only positive feedback from H2O in the early stage of heating (which occurs over the oceans prior to the development of cloud cover); a toy numerical illustration of this dilution follows below. There is absolutely no evidence for the massive amplification of temperature rise by H2O positive feedback assumed in all IPCC CO2 scaremongering computer models, while there is objective evidence (from both Archimedes’ buoyancy of infrared-warmed moist air, and from Spencer’s negative feedback cloud cover evidence) that this assumption is false. Liars conflate this false assumption with half-baked data which is misinterpreted using this false assumption, and then pass off this abuse of data as evidence to substantiate the false assumption (an entirely circular argument, just like claiming the sun’s apparent motion across the sky proves that the sun orbits the earth daily). (Don’t get me wrong: we’re only biased against quackery, and nobody has ever published any reliable scientific evidence for positive feedback from H2O which disproves Archimedes’ law of buoyancy.)
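
The dilution claim in (2) can be illustrated with a toy weighted average. Every number below is invented purely for illustration (the weighting fraction and both anomalies are assumptions, not measurements); the sketch only shows how a layer average can carry the opposite sign to the sub-cloud air it contains.

```python
# Toy sketch of how a sub-cloud cooling could be hidden in a whole-troposphere
# layer average. All numbers are invented for illustration, not measurements.

sub_cloud_weight = 0.2        # assumed fraction of the sensor weighting below cloud
sub_cloud_anomaly = -0.25     # assumed cooling (K) of the air under the cloud deck
above_cloud_anomaly = +0.15   # assumed warming (K) of air near sunlit cloud tops

layer_average = (sub_cloud_weight * sub_cloud_anomaly
                 + (1 - sub_cloud_weight) * above_cloud_anomaly)

print(f"layer-average anomaly reported by the sensor: {layer_average:+.2f} K")
print(f"sub-cloud anomaly hidden inside that average: {sub_cloud_anomaly:+.2f} K")
```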

Update (19 March 2012):

A revised version of my comment submitted to Calder’s blog, concerning the opposition to genuine scientific debate by political activists using green scare policies as a pseudoscientific propaganda tool for implementing a USSR-type state control of industry and individuals (the “Reichstag fire” mechanism):

Regardless of the precise altitude and cloud types involved (cold saturated low pressure air in all cases), the cosmic ray mechanism for climate change is the Wilson cloud chamber effect. I suggest this instrument would be a handy way of getting photos and video to communicate what is going on to people in a clear, hard-hitting manner.

A former climate change editor for Scientific American has just (17 March 2012) written a Scientific American blog post called “Effective World Government Will Be Needed to Stave Off Climate Catastrophe”, http://blogs.scientificamerican.com/observations/2012/03/17/effective-world-government-will-still-be-needed-to-stave-off-climate-catastrophe/ which states:

“To be effective, a new set of institutions would have to be imbued with heavy-handed, transnational enforcement powers. There would have to be consideration of some way of embracing head-in-the-cloud answers to social problems that are usually dismissed by policymakers as academic naivete.”

This is scaremongering for political world government, the way that nuclear war dangers were used as a cover for political ideology by Brezhnev’s Moscow “World Peace Council” during the Cold War. Anyone questioning the pseudoscientific propaganda was simply dismissed as a nuclear war advocate or an anti-communist (regardless of the distinction between ideology and realism), i.e. the whole scientific debate was shut down in advance of resolution by the use of pseudo-moral censorship. Although you would naively expect some experts to continue sticking up for the scientific facts, the corruption spreads. The world government idealism never ended: when the Cold War ended, it was simply transferred from nuclear war to pollution and climate change fear-mongering, designed to scare and panic people into the desired political activity. The dream persists of world government achieved by scaring people into it, using “scientific consensus authority”. Maybe the aim is right, maybe not. It is misleading for “science” to be turned into a religious-style dogma of bigoted consensus in order to motivate political actions. These people want to achieve a goal using underhand methods. Why use underhand methods? They think it’s the only way. In other words, the aim itself (world government, communism, fascism) is unattractive to the majority, so scare stories are required to force the majority to be interested or tolerant. It was eugenics pseudoscience, plus other lies and stunts like the burning of the Reichstag and the faked “Protocols of the Elders of Zion”, which were essential to the Nazis. Science is killed by making it a mere dogma for use as a political tool.

Green propaganda is effectively working as a replacement for the old Moscow World Peace Council nuclear war fear-mongering, which sought to scare people into agreeing with communism rather than be blown up. Green propaganda today results in socialist state-controlled industry, based on national subsidies for inefficient industry (as in the USSR), via the back door. When national socialist state control was abandoned, socialists converted to green state ideology because it involves increasing state control of industry and individuals, by laws and taxation. James Delingpole has named these world government ideologues “Watermelons”, because they’re red on the inside but green on the outside. The subtext is that they don’t really care about science, be it the effects of nuclear war or the effect of the natural climate change Wilson cloud chamber mechanism. What they do care about is using and promoting any currently fashionable spurious arguments to scare people into state control activism. This is the old ideologue tactic. It was used by Lenin and Hitler, and more recently by self-deluded fanatics like Saddam and Gaddafi. These people use creepy lying and scare-tactic propaganda, not facts.

The incorrect amplitude for the “Higgs boson” predicted by the Standard Model

“The start of the LHC 2012 physics run is still a while off, scheduled for around the beginning of April, with beam energy likely raised a bit, to 8 TeV total in the center of mass. So, it’s going to be quite a few more months before the LHC experiments have enough new data to analyze that will allow a conclusive determination of whether the evidence seen for a Higgs around 125 GeV is confirmed, with a significance high enough to claim discovery. … the best fit size of the bump is, as with ATLAS, about twice what the SM predicts. The errors are large, so quite possibly both experiments just got a bit lucky, in which case the first few months of 2012 data may not quickly add much to the significance of the signal.” – Dr Peter Woit
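
As a rough aside on the last point in the quotation, here is a back-of-envelope sketch of why a few months of extra data may not settle matters quickly. It assumes the usual counting approximation (significance roughly proportional to signal over the square root of background, both scaling with the amount of data); the starting significance is an assumed round number for illustration, not a quoted experimental value.

```python
# Back-of-envelope sketch: under the counting approximation, significance
# grows like the square root of the accumulated data, so going from an
# assumed ~3 sigma excess to the 5 sigma discovery convention needs roughly
# (5/3)^2 times the data. The 3 sigma figure is illustrative, not official.

current_sigma = 3.0
target_sigma = 5.0
extra_data_factor = (target_sigma / current_sigma) ** 2

print(f"roughly {extra_data_factor:.1f}x the data needed to go from "
      f"{current_sigma:.0f} sigma to {target_sigma:.0f} sigma")
```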

The chi-squared test for the “Higgs boson” has two “possibilities”: either it doesn’t exist, or it does exist and is the particle in the mainstream electroweak theory. This is fraud. It’s precisely Joseph Priestley’s error in his phlogiston experiment: either phlogiston exists, or it doesn’t. There was a third possibility, recognised by Lavoisier: oxygen exists, replacing phlogiston theory. You need to take account of alternative theories to the Higgs mechanism and the standard electroweak theory before you can claim that the spin-0 boson (if it exists) is the one you are actually looking for. Otherwise, it’s like interpreting the “motion of the sun” across the sky as clear evidence that the sun orbits the earth daily.
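
The two-outcome trap can be made concrete with a toy goodness-of-fit example. Everything below is an assumption invented for illustration (the background shape, the bump position, width and amplitudes are toy numbers, not LHC data): pseudo-data generated with twice the “SM-sized” bump still rejects the background-only null decisively when tested against the SM template, so passing that test does not identify which model actually produced the bump.

```python
# Toy sketch of the two-outcome chi-squared trap. All shapes and numbers
# are invented for illustration; this is not LHC data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(100.0, 150.0, 51)                # toy "mass" bins, GeV
background = 1000.0 * np.exp(-x / 60.0)          # smooth falling background
sm_bump = 60.0 * np.exp(-0.5 * ((x - 126.0) / 3.0) ** 2)   # toy "SM-sized" signal

data = rng.poisson(background + 2.0 * sm_bump)   # true bump: TWICE the SM size

def chi2(expected):
    return float(np.sum((data - expected) ** 2 / expected))

print(f"chi-squared, background-only null : {chi2(background):.0f}")
print(f"chi-squared, SM-template signal   : {chi2(background + sm_bump):.0f}")
# The SM template beats the null by a wide margin even though the pseudo-data
# were generated from a different hypothesis (a double-strength signal).
```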

The Standard Model doesn’t predict, prima facie, a Higgs boson mass, but given an experimentally determined mass, the Standard Model with the Higgs mechanism does constrain the amplitude of the Higgs signal (the cross-section for the Higgs boson production reactions). Woit points out that the observed amplitude is greater than predicted for the apparently observed mass. As we explained earlier, Karl Popper’s falsifiable prediction methodology is not science: you make a prediction, the experiment confirms the prediction, and then you use this as Stalinist-style propaganda to claim that the experiment has confirmed the theory. (Hoping nobody notices the subtle conflation of prediction with theory.) Example: Ptolemy’s epicycle theory could “predict” planetary positions using a complex metaphysics. Many of the predictions worked well enough within the accuracy of early observations, so there was no need for Kepler’s more accurate elliptical laws of planetary motion until after Brahe had made more accurate observations. If you claim to set out to “test” Ptolemy’s epicycle theory, using a statistical correlation test with only two possible outcomes (Ptolemy’s predictions are true, or the data are random, as the null hypothesis), with no proper analysis of alternative models allowed, then the statistical correlation test will “confirm” the flawed model statistically over no correlation.
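
That epicycles really can reproduce planetary longitudes to within early observational accuracy is easy to check numerically. The sketch below is a deliberate simplification (harmonic corrections to uniform circular motion standing in for epicycles, fitted to the longitude of a body on a Kepler ellipse; the Mars-like eccentricity and the 720 samples are assumptions made for illustration): a couple of “epicycle” terms already push the worst error below the roughly ten-arcminute accuracy of pre-telescopic observations, which is why the model survived until Brahe.

```python
# Toy sketch: a few harmonic ("epicycle") corrections to uniform circular
# motion reproduce a Kepler-ellipse longitude to within pre-telescopic
# observational accuracy. Eccentricity and sampling are illustrative choices.
import numpy as np

e = 0.0934                                                 # Mars-like eccentricity (assumed)
M = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)     # mean anomaly samples

# Solve Kepler's equation E - e*sin(E) = M by fixed-point iteration
# (adequate for small eccentricity).
E = M.copy()
for _ in range(50):
    E = M + e * np.sin(E)

# True anomaly (actual angular position) from the eccentric anomaly.
nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
departure = nu - M                                         # deviation from uniform motion

# Fit the departure with n harmonics of the mean anomaly; each harmonic plays
# the role of one epicycle riding on the uniform (deferent) motion.
for n_epicycles in (1, 2, 3):
    cols = []
    for k in range(1, n_epicycles + 1):
        cols += [np.sin(k * M), np.cos(k * M)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, departure, rcond=None)
    worst_arcmin = np.degrees(np.max(np.abs(departure - A @ coeffs))) * 60
    print(f"{n_epicycles} epicycle term(s): worst longitude error = {worst_arcmin:5.1f} arcmin")
```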

Statistical correlation tests are the most easily corrupted form of science, and this corruption is rife: you test for “correlation” between one model and the experimental data, given a null (default) hypothesis that the “correlation” is just random coincidence. The flaw here is that the “evidence” you gain from a successful correlation test only tells you that the model accords with the data better than random noise. It doesn’t tell you anything about the problem that another theory may also agree: e.g. FitzGerald’s, Lorentz’s, Poincaré’s and Larmor’s equations match Einstein’s special relativity transformation and E = mc² law, so “experimental tests” of these equations don’t specifically support Einstein’s theory over the more mechanical derivations of the same equations by the earlier investigators. It has also been shown that the confirmed predictions of general relativity come from energy conservation and are not specific confirmation of the geometric space-time continuum model. Therefore, it is Popperian sophistry to claim that a specific theory is “confirmed” by experiments merely when its predictions are confirmed, unless you have somehow disproved the possibility of any other theory predicting the same results by a different route. Politically, this sophistry gives rise to the “historical accident syndrome”, whereby the first theory which gives the correct prediction in a politically correct, fashionable manner is hyped by the popular media as having been “confirmed” by experiment, when in fact only the predictions (which are also given by totally different theoretical frameworks sharing the same mathematical duality in the limits of the experimental regime) are confirmed. This is fascist hubris. We saw it with the earth-centred universe of Ptolemy. Once you have a fashionable model, it gets into the educational textbooks, it is “understood” by the popular media, and any alternative framework is wrongly dismissed as superfluous, unnecessary, boring, etc., without first being properly investigated to see if it fits more data more accurately.

It’s important to note that this is a general problem in politics and human endeavour. The advice is to keep to well-worn paths or you will get lost. However, you’re unlikely to find much on well-worn paths, because so many people keep to them, and the probability of finding anything on them is therefore low. Ironically, this point is “controversial” because you get the counter-argument that you’re unlikely to find anything if you go off the beaten track. More to the point, if you do find anything off the beaten track, you still have difficulty convincing anybody that it actually exists, as Niccolò Machiavelli explains in the political context (The Prince, Chapter VI): “the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.”

It’s quite correct that a lukewarm argument on a radical and unpopular proposal leads either nowhere or to failure (suppression). You cannot easily overthrow a tyrant with kindly, gentle words alone. By the time a tyrant is susceptible to arguments (in dementia), it is easier to overthrow the regime by other means anyway. Diplomacy is the policy of feeding wolves in the expectation of achieving peace through appeasement. Groupthink is never revolutionary: it is always counter-revolutionary, developing political structures to stabilize a success by preventing a further revolution. New ideas are only welcome within the narrow confines of an existing theory, like epicycles.

Irving L. Janis, Victims of Groupthink, Houghton Mifflin, Boston, 1972

Janis, civil defense research psychologist and author of Psychological Stress (Wiley, N.Y., 1958), Stress and Frustration (Harcourt Brace, N.Y., 1971), and Air War and Emotional Stress (RAND Corporation/McGraw-Hill, N.Y., 1951), begins Victims of Groupthink with a study of classic errors by “groupthink” advisers to four American presidents (page iv):

“Franklin D. Roosevelt (failure to be prepared for the attack on Pearl Harbor), Harry S. Truman (the invasion of North Korea), John F. Kennedy (the Bay of Pigs invasion), and Lyndon B. Johnson (escalation of the Vietnam War) … in each instance, the members of the policy-making group made incredibly gross miscalculations about both the practical and moral consequences of their decisions.”

Joseph de Rivera’s The Psychological Dimension of Foreign Policy showed how a critic of Korean War tactics was excluded from the advisory group, to maintain a complete consensus for President Truman. Schlesinger’s A Thousand Days shows how President Kennedy was misled by a group of advisers on the decision to land 1,400 Cuban exiles in the Bay of Pigs to try to overthrow Castro’s 200,000 troops, a 1:143 ratio. Janis writes in Victims of Groupthink:

“I use the term “groupthink” … when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.”(p. 9)

“… the group’s discussions are limited … without a survey of the full range of alternatives.”(p. 10)

“The objective assessment of relevant information and the rethinking necessary for developing more differentiated concepts can emerge only out of the crucible of heated debate [to overcome inert prejudice/status quo], which is anathema to the members of a concurrence-seeking group.”(p.61)

“One rationalization, accepted by the Navy right up to December 7 [1941], was that the Japanese would never dare attempt a full-scale assault against Hawaii because they would realize that it would precipitate an all-out war, which the United States would surely win. It was utterly inconceivable … But … the United States had imposed a strangling blockade … Japan was getting ready to take some drastic military counteraction to nullify the blockade.”(p.87)

“… in 1914 the French military high command ignored repeated warnings that Germany had adopted the Schlieffen Plan, which called for a rapid assault through Belgium … their illusions were shattered when the Germans broke through France’s weakly fortified Belgian frontier in the first few weeks of the war and approached the gates of Paris. … the origins of World War II … Neville Chamberlain’s … inner circle of close associates … urged him to give in to Hitler’s demands … in exchange for nothing more than promises that he would make no further demands”(pp.185-6)

“Eight main symptoms run through the case studies of historic fiascoes … an illusion of invulnerability … collective efforts to … discount warnings … an unquestioned belief in the group’s inherent morality … stereotyped views of enemy leaders … dissent is contrary to what is expected of all loyal members … self-censorship of … doubts and counterarguments … a shared illusion of unanimity … (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent)… the emergence of … members who protect the group from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions.”(pp.197-8)

“… other members are not exposed to information that might challenge their self-confidence.”(p.206)

Higgs versus Nambu-Goldstone bosons, supersymmetry and a neutrino condensate

In 1973, D.V. Volkov and V.P. Akulov published a paper entitled “Is the neutrino a goldstone particle?”, in Physics Letters B, Volume 46, Issue 1, Pages 109-110. A neutrino is a spin-1/2 fermion, not a boson. Suppose two massive neutrinos form a Bose-Einstein condensate, with effective spin-0 (analogous to Cooper pairs of electrons, an effective boson in superconductivity).

W+ + W- + Z0 -> 2H0

80.4 + 80.4 + 91.2 = 2(126) GeV

where each boson is a condensate of a pair of spin-1/2 fermions.
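
As a trivial arithmetic check of this proposed reaction, here is a sketch using the rounded masses already quoted in the text:

```python
# Arithmetic check of W+ + W- + Z0 -> 2H0 using the rounded masses quoted
# in the text (GeV); nothing here goes beyond the sum itself.
m_w, m_z, m_h = 80.4, 91.2, 126.0

print(f"W+ + W- + Z0 mass sum: {2 * m_w + m_z:.1f} GeV")   # 252.0
print(f"2 x H0 mass:           {2 * m_h:.1f} GeV")          # 252.0
```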

To annoy Ed Witten and confuse the string theorists (who are in knots anyway), let’s give the name “supersymmetry” to the checkable theory that Standard Model bosons are Bose-Einstein condensates of Standard Model fermions. (This has nothing to do with the mythical 1:1 boson:fermion supersymmetry of string theory, which increases the number of parameters from 18 in the Standard Model to 125 without predicting any of their values, just to try to make the couplings similar at the uncheckable Planck scale.) For a massive Nambu-Goldstone or Higgs boson, this ties up the loose ends in electroweak theory:

“Higgs did not resolve the dilemma between the Goldstone theorem and the Higgs mechanism. … I emphasize that the Nambu-Goldstone boson does exist in the electroweak theory. It is merely unobservable by the subsidary condition (Gupta condition). Indeed, without Nambu-Goldstone boson, the charged pion could not decay into muon and antineutrino (or antimuon and neutrino) because the decay through W-boson violates angular-momentum conservation. … I know that it is a common belief that pion is regarded as an “approximate” NG boson. But it is quite strange to regard pion as an almost massless particle. It is equivalent to regard nuclear force as an almost long-range force! The chiral invariance is broken in the electroweak theory. And as I stated above, the massless NG boson does exist.”

– Professor N. Nakanishi, Not Even Wrong blog comment, November 14, 2010 at 9:42 pm (See our diagram of this pion spin “anomaly” above.)

“Pion’s spin is zero, while W-boson’s spin is one. People usually understand that the pion decays into a muon and a neutrino through an intermediate state consisting of one W-boson. But this is forbidden by the angular-momentum conservation law in the rest frame of the pion.”

– Professor N. Nakanishi, Not Even Wrong blog comment, November 15, 2010 at 1:46 am.

Nakanishi states that, despite the Higgs mechanism which produces massive weak bosons (the massive Z and W particles), a massless Nambu-Goldstone boson is also required in electroweak theory, in order to permit the charged pion with spin-0 to decay without having to decay into a spin-1 massive weak boson. In other words, there must be a “hidden” massless alternative to weak bosons as intermediaries. This is explained clearly in our theory of SU(2).

The nature of neutrinos (Majorana or Dirac) is involved. Please see our paper for a discussion of the difference and its importance for chiral symmetry and dark matter: right-handed neutrinos don’t undergo weak interactions, so they would be dark matter. The fact that neutrinos change flavour in transit is evidence for a small mass and thus is indirect evidence for the existence of right-handed massive neutrinos. We discussed the recent CERN LHC evidence for a massive ~126 GeV Nambu-Goldstone boson in posts linked here and in the previous post here and here.

The Standard Model as it stands can’t predict the mass of the Higgs boson, and the Higgs mass mechanism ignores quantum gravity considerations (where mass is quantized gravitational charge). It’s not even proved (only surmised by groupthink dogma) that the ~126 GeV figure is a rest mass. If you have a predictive mechanism in place of the Higgs mechanism, the picture changes: we showed a chiral SU(2) electromagnetic Yang-Mills theory in which the chiral left-handedness of spin appears as Lenz’s law, the helicity of the magnetic field curl around the direction of motion of an electric charge. This fact comes from Maxwell’s electromagnetism treatise of 1873, is defensible using Weyl’s 1929 chiral parity-breaking interpretation of Dirac’s spinor (which Pauli opposed), and is completely separate from the SU(2) left-handed spinor evidence which is incorporated in the Standard Model (by excluding right-handed neutrinos). Suppose electroweak symmetry breaking involves some kind of annihilation of the triplet of weak bosons to form a pair of spin-0 H-bosons (H standing preferably for Hypothetical, not Higgs):

W+ + W- + Z0 -> 2H0

80.4 + 80.4 + 91.2 = 2(126) GeV

(Dharwadker and Khachatryan’s prediction from 2009. See also their guest post on Dr Dorigo’s blog. It seems that any abstract reasoning behind their formula is as physically impenetrable as the Koide formula. However, like Rydberg’s empirical formula, which proved useful years later in the hands of Bohr, it may prove useful in developing physics.)

If the two spin-0 bosons have equal masses, each has a mass of 126 GeV. If one H-boson spinor is left-handed and one is right-handed, only the left-handed one is seen, because it is the only one which undergoes weak interactions. Notice an analogy between this simple H formula and one side of the Koide formula, summing lepton masses:

(MW+ + MW- + MZ0)/2 = MH

(Me + Mmuon + Mtauon)/2 = (Me^1/2 + Mmuon^1/2 + Mtauon^1/2)^2/3.
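
For the record, here is a quick numerical check of both sides of this analogy; it is only a sketch, using rounded masses (leptons in MeV, roughly PDG values; bosons in GeV as quoted in the text), not a precision test.

```python
# Quick numerical check of the two "half the sum" relations above, using
# rounded masses: leptons in MeV (approximate PDG values), bosons in GeV
# as quoted in the text. A sketch, not a precision test.
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86   # MeV
m_w, m_z = 80.4, 91.2                        # GeV

# H formula: half the sum of the weak boson masses.
print(f"(M_W+ + M_W- + M_Z0)/2 = {(2 * m_w + m_z) / 2:.1f} GeV   (compare ~126 GeV)")

# Koide relation, written in the same "half the sum" form as in the text.
lhs = (m_e + m_mu + m_tau) / 2
rhs = (m_e ** 0.5 + m_mu ** 0.5 + m_tau ** 0.5) ** 2 / 3
print(f"(Me + Mmuon + Mtauon)/2 = {lhs:.1f} MeV")
print(f"(sqrt sum)^2 / 3        = {rhs:.1f} MeV")
```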

Does the H-boson have 126 GeV rest mass or not? Not necessarily! Many assume that if an H-boson is converted into four leptons or two gamma rays in the LHC ATLAS and CMS detectors, it must be a massive 126 GeV spin-0 particle decaying. However, a massless boson like a gamma ray can undergo pair-production in a strong field (the LHC collisions create strong fields!), despite having no rest mass. The fact that you always see 126 GeV as the total energy of the spin-0 boson interaction doesn’t prove that it is a particle of 126 GeV rest mass which is decaying because of that mass. A gamma ray can undergo pair-production to form a pair of particles only if it carries a total energy of 1.022 MeV or more, because pair-production is only possible when the gamma ray energy exceeds the combined rest mass energy of the particles it forms. (Pair-production is not simply the reverse of annihilation: when an electron and positron annihilate, conservation of momentum shows that you get a pair of gamma rays coming off in opposite directions, each carrying the recoil momentum of the other. You can argue that there is a symmetry if a gamma ray interacts with a virtual photon which behaves like a gamma ray to cause pair production in strong fields, although here the virtual photon is off-shell, not on-shell like a gamma ray.) So you could be fooled by this false pair-production logic when considering the case of H-boson “decay” into four leptons or two gamma rays: by the same reasoning you could claim that gamma rays have a rest mass of 1.022 MeV, or that the H-boson has a “rest mass” of 126 GeV. Both claims would be equally fallacious. Higgs electroweak interactions are new territory: electroweak mixing in the Standard Model is empirically checked, but the details of electroweak symmetry breaking are not yet fully established. You must not confuse speculative theoretical conjecture with facts.
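
The 1.022 MeV threshold quoted above is just twice the electron rest-mass energy; a one-line check:

```python
# The pair-production threshold quoted above is simply twice the electron
# rest-mass energy.
m_electron_mev = 0.511                 # electron rest-mass energy, MeV
print(f"e+e- pair-production threshold: {2 * m_electron_mev:.3f} MeV")   # 1.022 MeV
```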

A spin-0 Nambu-Goldstone boson therefore doesn’t have to have rest mass or “decay” in order to produce 126 GeV four-lepton or two-gamma-ray products. Like a massless gamma ray, which produces fermion pairs only when it carries 1.022 MeV or more, the spin-0 Nambu-Goldstone boson could be massless, carrying energy without rest mass. The 126 GeV energy (mistaken for a Higgs boson rest mass) is then a result of the interaction above: half the sum of the weak boson masses. The electroweak symmetry breaking boson only has rest mass if there is explicit symmetry breaking in U(1) × SU(2), such as occurs in the standard electroweak theory, where electromagnetism is treated as a U(1) parity-conserving interaction and SU(2) as a parity-breaking (left-handed spinor) interaction. If electrodynamics and weak interactions both have the same chiral properties, there is no explicit symmetry breaking, only spontaneous symmetry breaking.

Comparison

Standard Model electroweak theory: requires a massive spin-0 Higgs boson because of explicit electroweak symmetry breaking, since U(1) conserves parity but SU(2) does not (it is left-handed).

Alternative electroweak theory: spontaneous symmetry breaking produces a massless (not massive) spin-0 boson. Both electrodynamics and weak interactions are derived from SU(2): massless SU(2) bosons give the electromagnetic interaction, massive SU(2) bosons give the weak interaction. Both electromagnetism and weak interactions are chiral; the chiral handedness of the electromagnetic interaction is seen in the handedness of the magnetic field helicity around the path of a moving charge. Magnetic fields wouldn’t exist, according to Maxwell’s theory of the mechanism for magnetism (gauge boson spin handedness), if the electromagnetic interaction obeyed parity conservation, so it doesn’t.

Copy of a comment submitted to http://snarxivblog.blogspot.com/2012/01/dharwadker-and-khachatryans-prediction.html:

There is an illustration here.

2W + Z -> 2H

2(80.4) + 91.2 = 2(126) GeV.

Note that 2W -> H is one Standard Model Higgs production interaction, while

truth quark + anti-truth quark -> H

is another Standard Model Higgs production interaction. If we treat this second example as equivalent to a Bose-Einstein condensate (each quark being one fermion in the condensate boson), the Z boson is in some sense equivalent to a spin-1 version of the H spin-0 boson, so

2W + Z -> 2H

is feasible, although only one H boson has the observed spin-0, and the other is spin-1 (a right-handed spinor which, if it doesn’t participate in weak interactions, remains invisible to ATLAS and CMS).