The Standard Model and Quantum Gravity: Identifying and Correcting Errors (part 1)



Above: spin-1 quantum gravity illustration from the old 2009 version of quantumfieldtheory.org (a PDF linked here, containing useful Feynman quotations about this). To hear a very brief, tongue-in-cheek Feynman talk on spin-1 graviton mechanism problems, please click here.


Above: the dilemma of “looking clever” or being humble and honestly searching for the facts, no matter how “heretical” or unexpected they turn out to be. This review of Surely You’re Joking, Mr. Feynman! is a lot better than the autobiography itself, which rambles on a lot and needs severe editing for busy readers, like all of Feynman’s books. Feynman does relate several incidents that led him to the conclusion that a major source of error in fashionable consensus is groupthink. Working on the bomb at Los Alamos, he found he could break into any secret safe very easily. People left the last digit of their combination on the lock dial, and he could extrapolate the other digits using logic about the simple mind of the physicist or mathematician: a three-digit combination safe left showing 7 suggests the combination 137 (the fine-structure constant), 4 suggests 314 (pi), 1 suggests 271 (e), and so on. When a very complex safe belonging to the military top brass was opened very quickly by the locksmith at Los Alamos, Feynman spent weeks getting to know the guy to find out the “secret”. It turned out that there was no magic involved: the combination that opened the safe was simply the manufacturer’s factory setting, which the top brass hadn’t got around to changing! Feynman was then told by a painter that he made yellow by mixing white and red paint, which sounded like “magic”. After a mishap (pink), he went back to the painter, who informed him he added yellow to the mixture to give it the right tint. Another time, he was taken for a magician for fixing radios by switching over valves/vacuum tubes (they used the same kind of vacuum tube in different circuits, so an old output amplifier tube which was failing under high current could be swapped with a similar valve used for lower currents in a pre-amplifier circuit, curing the problem). A later book, What Do You Care What Other People Think?, covers Feynman’s time on the Presidential investigation into NASA’s January 1986 Challenger explosion. Upon close inspection, Challenger was destroyed not by some weird mathematical error in the fashionable, magical “rocket science” that is supposedly beyond mortal understanding, but by regular groupthink delusion: the low-level engineers and technicians in charge of the O-rings knew that rubber turns brittle in cold weather, that brittle rubber O-rings sealing the joints of the Challenger booster rockets would leak as the rocket vibrated, and that the escaping flame would reach the fuel tank, blowing the shuttle up.

However, those technicians who knew the facts had Orwellian doublethink and crimestop: if they made a big scene in order to insist that the Challenger space shuttle launch be postponed until warmer weather, when the rubber O-ring seals in the boosters would be flexible and work properly, they would infuriate their NASA bosses at launch control and all the powerful senators who had turned up to watch the Challenger take off, so the NASA bigwigs might give contracts to other contractors in future. They would be considered un-American fear-mongers, decrepit incompetent fools with big egos. It was exactly the same for the radar operators and their bosses at Pearl Harbor. There are no medals given out for preventing disasters that aren’t obvious threats splashed over the front pages of the Washington Post. It was not 100% certain the shuttle would explode anyway. So they crossed their fingers, said little, and nervously watched Challenger blow up on TV. Feynman was told the truth not by fellow committee investigator Neil Armstrong, or by any NASA contractor (they were just as good at covering up afterwards as at keeping quiet beforehand), but by the military missile expert who investigated the 1980 Arkansas Titan missile explosion. Feynman used a piece of rubber and a plastic cup of iced water to expose the cause at a TV news conference, but the media didn’t want to know about the corruption of science and the peer-reviewed risk-prediction rubbish in NASA’s computers and groupthink lies. His written report was nearly censored out, despite the committee chairman being a former student! It was included as a minority report, Appendix F, which concluded that NASA safety analyses were a confidence trick for public relations:

“… reality must take precedence over public relations, for Nature cannot be fooled.”

Nobel Laureate Professor Brian Josephson emailed me (exchanged email PDFs are located here and here) that he used 2nd quantization in his Nobel Prize QM calculations, but is still stuck in 1st quantization groupthink when it comes to “wavefunction collapse” in the EPR paradox! Er, Brian, nobody has ever seen an epicycle or a wavefunction! Nobody has ever measured an epicycle or wavefunction! Schroedinger guessed H*Psi = i*h-bar*d{Psi}/dt. This is a complex transmogrification of Maxwell’s displacement current law, {energy transfer rate} = constant*dE/dt (energy transfer via the “displacement current” effect of a time-varying electric field E in the vacuum, akin to half a cycle of a radio wave). Note that H*Psi = i*h-bar*d{Psi}/dt is a relativistic equation (it is only non-relativistic when his non-relativistic Hamiltonian H for energy is inserted; Dirac’s equation is no different from Schroedinger’s except in replacing H with a relativistic spinor Hamiltonian in which particle spin is included, hence making the law relativistic). Dirac later showed that H*Psi = i*h-bar*d{Psi}/dt has the solution Psi_t/Psi_0 = exp(-iHt/h-bar), which Feynman modified by replacing -Ht with the action S, defined in units of h-bar, so the “wavefunction” (epicycle) varies in direct proportion to exp(iS). This creates the complex circle (rotation of a unit-length vector on an Argand diagram, as a cyclic function of S). Feynman in his 1985 book QED reduced this exp(iS), using Euler’s “jewel”, to simply cos S, where the Lagrangian for S is expressed so that the direction of the vector is fixed as the relativistic axis (the relativistic axis is simply the direction of the arrow for the path of zero action S = 0, because the “relativistic action” is actually defined as that action which is invariant to a change of coordinates!). So we now have the “reinvented wheel” called a Euclidean circle, whose resultant along the on-shell or relativistic axis is simply the scalar amount cos S, for each path. This gets rid of complex Hilbert space and with it Haag’s theorem as an objection to the mathematical self-consistency of renormalized QFT. All photons have 4 polarizations (like virtual or off-shell photons), not just 2 polarizations (as presumed from direct measurements). The extra 2 polarizations determine the cancellations and additions of phases: there is no “wavefunction collapse upon measurement”. The photon goes through both slits in Young’s experiment and interferes with itself, with no need for an observer. As Feynman writes in QED (1985), we don’t “need” the 1st quantization “uncertainty principle” if we sum the paths.
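For readers who want the chain of steps just described in one place, here is a compact restatement in standard notation (my summary, not a quotation; the final step of dropping the imaginary part is the 1985-book simplification discussed further below):

```latex
% Schroedinger/Dirac equation -> Dirac's exponential solution -> Feynman's
% phase factor exp(iS) -> the real part, cos S, used graphically in QED (1985).
\[
i\hbar\,\frac{\partial\psi}{\partial t} = H\psi
\;\;\Longrightarrow\;\;
\psi(t) = e^{-iHt/\hbar}\,\psi(0)
\;\;\xrightarrow{\;-Ht/\hbar\,\to\,S\;}\;\;
e^{iS} = \cos S + i\sin S
\;\;\to\;\;
\cos S .
\]
```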


Above: here we have Feynman pushed to explain why similar poles of magnets repel, using it as an excuse to talk about why ice is slippery and why good husbands call an ambulance for their wives who slip on the ice and break their hip, unless they are drunk and violent. He does end up saying that he can’t explain why magnets repel in terms of anything else with which the non-mathematician is familiar. However, in his 1985 book QED he explains that virtual photons are exchanged between magnets, and this process creates the magnetic force field. The problem for Feynman was knowing what the virtual photon wavefunction means physically. In the 1985 book, he draws pictures of a rotating arrow accompanying each virtual photon, that rotates in step with the frequency of oscillation of the photon, i.e. each oscillation of the virtual photon is accompanied by a full rotation of the phase factor (which is the “hidden variable” behind the so-called “wavefunction”, itself just an epicycle from 1st quantization, with no direct physical reality behind it, despite obfuscation efforts from the “nobody understands quantum mechanics”-Gestapo and the “parallel worlds” fantasy of Hugh Everett III and, with varying laws of nature, the 10^500 parallel universes of the superstring theory Gestapo/arXiv “peer”-reviewers).

Above: like Dr Zaius said to Charlton Heston in 1968, don’t search for the facts if you have a weak stomach. It might turn out that a “unified theory” is analogous to merely a bunch of bananas, so many groupthink “bury my head in the sand” simplicity-deniers will feel sub-ape because they can’t, don’t, and won’t be assed to put two sticks together to reach the facts. A pretty good example, discussed in detail in one way later in this post and in other ways two posts back, is Einstein’s relativity, which has multiple levels of explanation. The strongest formulation of relativity is the statement that our laws of motion must give the same predictions regardless of the chosen reference frame, i.e. we get the same prediction whether the reference frame is that of the earth or that of the sun. This makes the laws “invariant” with respect to the selected reference frame. Then there are progressively weaker formulations of relativity, used in “simplified” explanations for the layman, such as “there is no spacetime fabric, there is nothing in space which can produce forces”, or “relativity doesn’t say an absolute reference frame is unnecessary for doing our sums, relativity actually disproves the existence of any absolute reference frame!”

These “simplified” relativism “explanations” are a continuation of the best traditions of the Egyptian priesthood and the Pythagorean mathematical cult. The objective of such science is to act as a magician, to make the masses of the people believe whatever you say: “trust me with political power, I’m a scientist!” Then you trust them and you get mishaps, because they turn out to be humans, or more often than not, subhumans, even subape! Hence the mishaps of caloric, phlogiston, Maxwell’s mechanical gear cog aether, Kelvin’s stable vortex atom, Piltdown Man, peer-review, unprecedented climate change, nuclear winter theory, lethal cobalt bomb theory, superstring theory, etc. Groupthink science is not the kind of thing Newton and Darwin were doing, or Feynman was doing before and during the 1948 Pocono conference. Groupthink science education doesn’t train people to put up with the taunts for doing unorthodox revolutionary work, so such work progresses only slowly and haltingly: the “alternative ideas” are developed slowly, while the mainstream ignores them.

Above: Dr Zaius is alive and well, ensuring that consensus censors facts, as shown in this BBC propaganda programme, Horizon: Science Under Attack, where groupthink pseudophysics is labelled “science” and the facts are dismissed because they have been censored out by “peer”-review pseudoscientific bigotry. Telegraph online journalist James Delingpole, who exposed to the world the “hide the decline” Climategate email of Dr Phil Jones, is dismissed by Dr Zaius on the pretext that people must define science as the consensus of “peer”-reviewed literature. Great. So we can go on pretending that there is nothing to worry about, and using “peer”-review to prevent human progress. Ah, if only it were that easy to sweep the facts under the carpet or wallpaper over them. A PDF version of the errors in the BBC Horizon: Science Under Attack episode is located here, with additional relevant data (20 pages, 2 MB download). To read the 1960s background about Dr Zaius, see the Wikipedia page linked here: “Zaius serves a dual role in Ape society, as Minister of Science in charge of advancing ape knowledge, and also as Chief Defender of the Faith. In the latter role, he has access to ancient scrolls and other information not given to the ape masses. [Dr Phil Jones and the FOIA/Freedom of Information Act “harassment” controversy.] Zaius … blames human nature for it all. Zaius seems to prefer an imperfect, ignorant ape culture that keeps humans in check, to the open, scientific, human-curious one … The idea of an intelligent human … threatening the balance of things frightens him deeply.”

“The common enemy of humanity is man. In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself.” – Club of Rome, The First Global Revolution (1993). (That report is available here, a site that also contains a very similar but less fashionable pseudoscientific groupthink delusion on eugenics.)

The error in the Club of Rome‘s groupthink approach is the lie that the common enemy is humanity. This lie is the dictatorial approach taken by paranoid fascists, both on the right wing and the left wing, such as Stalin and Hitler. (Remember that the birthplace of fascism was not Hitler’s Germany, but Italy in October 1914, when the left-wing, ex-socialist Mussolini joined the new Revolutionary Fascio for International Action after World War I broke out.) The common enemy of humanity is not humanity but fanaticism, defined here by the immoral code: “the ends justify the means”. It is this fanaticism that is used to defend exaggerations and lies for political ends. Exaggerating and lying about weapons effects in the hope that ending war will justify the means is also fanaticism. Weapons effects exaggerations both motivated aggression in 1914 and prevented early action against Nazi aggression in the mid-1930s.

From: Phil Jones

To: “Michael E. Mann”
Subject: HIGHLY CONFIDENTIAL
Date: Thu Jul 8 16:30:16 2004

… I didn’t say any of this, so be careful how you use it – if at all. Keep quiet also that you have the pdf. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is! …

– Dr Phil Jones to Dr Michael Mann, Climategate emails, July 8th 2004.

For NASA’s “peer-review” suppression of its own climate research contractor, please see:

https://nige.files.wordpress.com/2011/02/dr-miskolczi-nasa-resignation-letter-2005.pdf

“Since the Earth’s atmosphere is not lacking in greenhouse gases, if the system could have increased its surface temperature it would have done so long before our emissions. It need not have waited for us to add CO2: another greenhouse gas, H2O, was already to hand in practically unlimited reservoirs in the oceans. … The Earth’s atmosphere maintains a constant effective greenhouse-gas content [although the percentage contributions to it from different greenhouse gases can vary greatly] and a constant, maximized, “saturated” greenhouse effect that cannot be increased further by CO2 emissions (or by any other emissions, for that matter). … During the 61-year period, in correspondence with the rise in CO2 concentration, the global average absolute humidity diminished about 1 per cent. This decrease in absolute humidity has exactly countered all of the warming effect that our CO2 emissions have had since 1948. … a hypothetical doubling of the carbon dioxide concentration in the air would cause a 3% decrease in the absolute humidity, keeping the total effective atmospheric greenhouse gas content constant, so that the greenhouse effect would merely continue to fluctuate around its equilibrium value. Therefore, a doubling of CO2 concentration would cause no net “global warming” at all.”

– https://nige.files.wordpress.com/2011/02/saturated-greenhouse-effect-fact.pdf page 4.

CO2 only drives climate change in NASA and IPCC computer climate fantasies when positive feedback from H2O water vapour is assumed. In the real world, there is negative feedback from H2O which cancels out the small effect of CO2 rises: the hot moist air rises to form clouds, so less sunlight gets through to the surface air. Homeostasis! All changes in CO2 levels are irrelevant to temperature variations: CO2 doesn’t drive temperature; it is balanced by cloud cover variations. Temperature rises in the geological record have increased the rate of growth of tropical rainforests relative to animals, causing a fall in atmospheric CO2, while temperature falls kill off rainforests faster than animals (since rainforests can’t migrate like animals), thus causing a rise in atmospheric CO2. These mechanisms for CO2 variations are being ignored. Cloud cover variations prevent useful satellite data on global mean temperature, the effects of cloud cover on tree growth obfuscate the effects of temperature, and the effects of upwind city heat output obfuscate the CO2–temperature data from weather stations. Thus we have to look to sea level rise rates to determine global warming.

We’ve been in global warming for 18,000 years, during which time the sea level has risen 120 metres (a 0.67 cm/year mean, often faster than this mean rate). Over the past century, sea level has risen at an average rate of 0.20 cm/year, and even the maximum rate of nearly 0.4 cm/year recently is less than the rates humanity has adapted to and flourished with in the past. CO2 annual output limits and wind farms etc. are no use in determining the ultimate amount of CO2 in the atmosphere anyway: if you supplement fossil fuels with wind farms, the same CO2 simply takes longer to be emitted, maybe 120 years instead of 100 years. The money spent on lying “green” eco-fascism carbon credit trading bonuses can be spent on humanity instead.
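A quick arithmetic check of the rates just quoted (my own back-of-envelope sketch, using only the figures given above):

```python
# Sea-level rise rates quoted above, converted to cm/year.
post_glacial_rise_cm = 120.0 * 100     # 120 metres of rise, in centimetres
post_glacial_years = 18_000            # years since the post-glacial rise began
century_rise_cm = 20.0                 # ~20 cm rise over the past century

mean_rate = post_glacial_rise_cm / post_glacial_years   # ~0.67 cm/year
recent_rate = century_rise_cm / 100.0                   # ~0.20 cm/year
print(round(mean_rate, 2), round(recent_rate, 2))       # 0.67 0.2
```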

For background info on how H2O cloud cover feedback cancelling CO2 variations on temperature has been faked in IPCC NASA “peer”-review bigotry, see http://www.examiner.com/civil-rights-in-portland/blacklisted-scientist-challenges-global-warming-orthodoxy and https://nige.files.wordpress.com/2011/02/the-saturated-greenhouse-effect-theory-of-ferenc-miskolczi.pdf:

(1) increased cloud cover doesn’t warm the earth. True, cloud cover prevents rapid cooling at night. But it also reduces the sunlight energy received in the day, which is the source of the heat emitted during the night. Increase cloud cover, and the overall effect is a cooling of air at low altitudes.

(2) rainfall doesn’t carry latent heat down to be released at sea level. The latent heat of evaporation is released in rain as soon as the droplets condense from vapour, at high altitudes in clouds. Air drag rapidly cools the drops as they fall, so the heat is left at high altitudes in clouds, and the only energy you get when the raindrops land is the kinetic energy (from their trivial gravitational potential energy).

Obfuscation of the fact that hot moist air rises and condenses to form clouds from oceans that cover 70% of the earth (UNLIKE any “greenhouse”!) caused this whole mess. IPCC models falsely assume that H2O vapour doesn’t rise and condense into clouds high above the ground: they assume hot air doesn’t rise! That’s why they get the vital (FALSE) conclusion that H2O vapour doubles (amplifies) projected temperature rises from CO2, instead of cancelling them out! Their models are plain wrong.




Above: electric current is essentially displacement current in disguise. The juice (joules) coming out of wires isn’t due to the ~1 mm/second drift of conduction-band electrons so much as to the Heaviside energy current. Moreover, charge up a capacitor which has a vacuum for its “dielectric”, and energy flows in at light velocity, has no mechanism to slow down, and when discharged flows out at light velocity in a pulse twice as long as the capacitor (i.e. lasting twice the light-speed transit time) and with just half the voltage (PD) of its static charged state. It turns out that the simplest way to understand electricity is as electromagnetic energy, so we’re studying the off-shell field quanta of QED, which makes the slow electron drift current more of a side-show than the main show. So by looking at IC’s published cross-talk experiments, we can learn about how the phase cancellations work. E.g., Maxwell’s wave theory of light can be improved upon by reformulating it in terms of path integrals.
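Here is a minimal numerical sketch of that capacitor-discharge claim, on the assumption that the charged capacitor behaves as an ideal lossless transmission line (modelled as two counter-propagating energy currents of V/2 each) discharged into a matched resistive load; the discretisation and variable names are my own illustration, not a reproduction of anyone’s experiment:

```python
# "Charged capacitor as a transmission line": two counter-propagating energy
# currents of V/2, discharged at one end into a matched (reflectionless) load.
N = 50                        # number of cells along the line
V = 1.0                       # static (charged) voltage
f = [V / 2] * N               # current travelling towards the load (cell 0)
b = [V / 2] * N               # current travelling towards the open far end
output = []

for step in range(3 * N):
    output.append(f[0])       # matched load absorbs whatever arrives, no reflection
    f = f[1:] + [b[-1]]       # open far end reflects b into f (+1 coefficient)
    b = [0.0] + b[:-1]        # nothing is reflected back into b at the load end

pulse = [v for v in output if v > 0]
print(len(pulse), pulse[0])   # 100 steps (= 2N, twice the transit time) at 0.5 V:
                              # a pulse twice the line's length at half the voltage
```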

Above: Dr Robert D. Klauber in 2010 accidentally misrepresented Feynman’s 1985 book QED in terms of Feynman’s earlier (complex) phase factor! The actual idea of Feynman in his 1985 book dispenses with the Argand diagram and converts exp(iS) into cos S (where S is in units of h-bar of course), as shown above. Notice that the path integral of cos S gives the resolved component of the resultant (final arrow) which lies in the x-direction only. To find the total magnitude (length) of the final arrow we simply have to choose the x-axis to be the direction of the resultant arrow, which is easy: the direction of the resultant is always that of the classical action, because the contributions to the resultant are maximized by the coherent summation of paths with the least amounts of action (the classical laws correspond to least action!). In other words, we don’t need to find the direction of the quantum field theory resultant arrow in the path integral, we only need to find its length (scalar magnitude). We easily know the arrow’s direction from the principle of least action, so the work of doing the path integral is then just concerned with finding the length, not the direction, of the resultant arrow. In practice, this is done automatically by the relativistic formulation of the Lagrangian for action S. The definition of the path of least action as the real or “on shell” relativistic path automatically sets up the path integral coordinate system correctly.
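To make the “length only” point concrete, here is a minimal numerical sketch (my own toy example, in the spirit of Feynman’s mirror diagram rather than a quotation of it): once the direction (phase) of the resultant is fixed, the complex sum over paths reduces to a real sum of cosines with the same magnitude.

```python
# Toy "sum over paths" for light reflecting off a mirror: the length of the
# complex resultant equals a purely real sum of cosines once the reference
# axis (the resultant's own direction, set by the near-classical paths) is chosen.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength                 # phase accumulated per unit path length
source = np.array([-5.0, 3.0])             # hypothetical source position
detector = np.array([5.0, 3.0])            # hypothetical detector position
mx = np.linspace(-10.0, 10.0, 2001)        # candidate reflection points on the mirror (y = 0)

S = k * (np.hypot(mx - source[0], source[1]) + np.hypot(mx - detector[0], detector[1]))

resultant = np.sum(np.exp(1j * S))         # usual complex (Argand-diagram) sum
phi = np.angle(resultant)                  # direction of the final arrow
real_sum = np.sum(np.cos(S - phi))         # real-plane sum along that axis

print(abs(resultant), real_sum)            # identical lengths: only the length matters
```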

“… every particle is associated with waves and these waves may be considered as a field. … very close to the charges that are producing the fields, one may have to modify Maxwell’s field theory so as to make it a non-linear electrodynamics. … with field theory, we have an infinite number of degrees of freedom, and this infinity may lead to trouble [Haag’s theorem implies that the renormalization process for taking account of field polarization is ambiguous and flawed if done in the complex, infinite dimensional “Hilbert space”]. We have to solve equations in which the unknown … involves an infinite number of variables [i.e., an infinite number of Feynman diagrams for a series of ever more complicated quantum interactions, which affect the classical result by an ever increasing amount if the field quanta are massive and significantly charged, compared to the charges of the on-shell particles whose fields they constitute]. The usual method … is to use perturbative methods in which … one tries to get a solution step by step [by adding only the first few terms of the increasingly complicated infinite number of terms in the perturbative expansion series to the path integral]. But one usually runs into the difficulty that after a certain stage the equations lead to divergent integrals [thus necessitating an arbitrary “cutoff” energy to prevent infinite field quanta momenta occurring, as you approach zero distance between colliding fundamental particles].”

– Paul A. M. Dirac, Lectures on Quantum Mechanics, Dover, New York, 2001, pages 1, 2, and 84.

Above: Feynman’s 1985 book QED is actually an advanced and sophisticated treatment of path integrals without mathematics (replacing complex space with real-plane rotation of a polarization plane during motion along every possible path for virtual photons, as shown for reflection and refraction of light in the “sum over histories” given graphically above), unlike his 1965 co-authored Quantum Mechanics and Path Integrals. The latter however makes the point in Fig. 7-1 on page 177 (of the 2010 Dover reprint) that real particles only follow differentiable (smooth) “classical” paths when seen on macroscopic scales (where the action is much larger than h-bar): “Typical paths of a quantum-mechanical particle are highly irregular on a fine scale … Thus, although a mean velocity can be defined, no mean-square velocity exists at any point. In other words, the paths are nondifferentiable.” The fact that real paths are actually irregular and not classical when looked at closely is what leads Feynman away from the belief in differential geometry, the belief for instance that space is curved (which is what Einstein argued in his general relativity tensor analysis of classical motion). The idea that real (on shell) particle paths are irregular on very small scales was suggested by Schroedinger in 1930, when arguing (from an analysis of Dirac’s spinor) that the half-integer spin of a fermion, moving or stationary in free space, can be modelled by a “zig-zag” path which he called “zitterbewegung”: a light-velocity oscillation with an angular frequency of 2mc^2/h-bar, about 10^21 per second for an electron. Zitterbewegung suggests that an electron is not a static particle but is trapped light-velocity electromagnetic energy, oscillating very rapidly.
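A quick check of that figure, using standard constants (nothing here beyond the formula quoted above):

```python
# Zitterbewegung angular frequency, omega = 2 m c^2 / h-bar, for an electron.
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s

omega = 2 * m_e * c**2 / hbar
print(f"{omega:.2e}")   # ~1.55e21 per second, i.e. of order 10^21 as stated
```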

My recent comment to Dr Woit’s blog:

“One of my criticisms of the two organizations would be that they don’t support research of the sort that Witten has had success with, at the intersection of mathematics and quantum field theory.”

Do you think it possible that the future of quantum field theory could lie in a completely different direction, namely Feynman’s idea of greater mathematical simplicity? E.g. the path integral sums many virtual particle path phase amplitudes, each of the form of Dirac’s exp(-iHt) -> exp(iS). The sum over these histories is the path integral: the real resultant path is that for small actions S. Feynman in his 1985 book QED shows graphically how the summation works: you don’t really need complex Hilbert space from the infinite number of Argand diagrams.

To plot his non-mathematical (visual) path integral for light reflecting off a mirror, Feynman shows that you can simply have a plane of polarization rotate (in real, not complex, space) in accordance with the frequency of the light. What he does is to take Euler’s exp(iS) = (i*sin S) + cos S and drop the complex term, so the phase factor exp(iS) is replaced with cos S, which oscillates with exactly the same period as exp(iS), but with the imaginary axis replaced by a second real axis: a spinning plane of polarization for a photon! This gets rid of the objection from Haag’s theorem, since you get rid of complex Hilbert space when you dump the imaginary axis for every history!

Feynman makes the point in his Lectures on Physics that the origin of exp(iS) is the Schroedinger/Dirac equation for energy transfer via a rate of change of a wavefunction (Dirac’s of course has a relativistic spinor Hamiltonian), which just “came out of the mind of Schroedinger”. It’s just an approximation, a guess. Dirac solved it to get exp(-iHt), which Feynman reformulated to exp(iS). The correct phase amplitude is indeed cos S (S measured in units of h-bar, of course). Small actions always have phase amplitudes of ~1, while large actions have phase amplitudes that vary periodically between +1 and -1, and so on average cancel out.

Are graphs mathematics? Are Feynman diagrams mathematics? Is mathematical elitism (in the mindlessly complexity-loving sense) obfuscating a simple truth about reality?

Feynman points out in his 1985 book QED that Heisenberg’s and Schroedinger’s intrinsic indeterminacy is just the old QM theory of 1st quantization, which is wrong because it assumes a classical Coulomb field and attributes randomness to an intrinsic (direct) application of the uncertainty principle; that principle is non-relativistic (Schroedinger’s 1st quantization Hamiltonian treats space and time differently) and is unnecessary, since Dirac’s 2nd quantization shows that it is the field which is quantized, not the classical Coulomb field. Dirac’s theory is justified by predicting magnetic moments and antimatter, unlike 1st quantization. The annihilation and creation operators of the quantized field only arise in 2nd quantization, not in Schroedinger’s 1st quantization, where indeterminacy has no physical explanation in terms of chaotic field-quanta interactions:

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

“… Bohr [at Pocono, 1948] … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

Bohr and other 1st quantization people never learned that uncertainty is caused by field quanta acting on fundamental particles, like the Brownian motion of air molecules acting on pollen grains. Feynman was censored out at Pocono in 1948 and only felt free to explain the facts after winning the Nobel Prize. In the Preface to his co-authored 1965 book Quantum Mechanics and Path Integrals he describes his desire to relate quantum to classical physics via the least action principle, making classical physics appear for actions greater than h-bar. But he couldn’t make any progress until a visiting European physicist mentioned Dirac’s solution to Schroedinger’s equation, namely that the wavefunction’s change over time t is directly proportional to, and therefore (in Dirac’s words) “analogous to”, the complex exponential exp(-iHt). Feynman immediately assumed that the wavefunction change factor is indeed equal to exp(-iHt), and then showed that -Ht -> S, the action for the path integral (expressed in units of h-bar).

Hence, Feynman sums path phase factors exp(iS), which is just a cyclic function of S on an Argand diagram. In his 1985 book QED, Feynman goes further still and uses what he called in his Lectures on Physics (vol. 1, p. 22-10) the “jewel” and “astounding” (p. 22-1) formula of mathematics, Euler’s equation exp(iS) = i (sin S) + cos S, to transfer from the complex to the real plane by dropping the complex term, so the simple factor cos S replaces exp(iS) in his graphical version of the path integral. He explains in the text that the cos S factor works because it’s always near +1 for actions small compared to h-bar, allowing those paths near (but not just at) least action to contribute coherently to the path integral, but varies cyclically between +1 and -1 as a function of the action for actions large compared to h-bar, so those paths on average cancel out each other’s contributions to the path integral. The advantage of replacing exp(iS) with cos S is that it gets rid of the complex plane that makes renormalization mathematically inconsistent due to the ambiguity of having a complex, infinite-dimensional Hilbert space, so Haag’s theorem no longer makes renormalization a difficulty in QFT.
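Here is a minimal numerical sketch of that cancellation (my own toy model: paths labelled by a deflection x from the classical path, with a hypothetical quadratic action in units of h-bar):

```python
# cos S is ~ +1 for paths whose action is small compared with h-bar, while for
# large actions it swings between +1 and -1 and averages out to ~zero.
import numpy as np

x = np.linspace(-40.0, 40.0, 400001)   # paths labelled by deflection from the classical path
S = 0.5 * x**2                         # toy action in units of h-bar; least action at x = 0

small = S < 0.5                        # within half an h-bar of the least-action path
large = S > 100.0                      # far from the classical path

print(np.mean(np.cos(S[small])))       # ~ +0.98: near-classical paths add coherently
print(np.mean(np.cos(S[large])))       # ~ 0.00: far-from-classical paths cancel out
```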

The bottom line is that Feynman shows that QFT is simple, not amazingly complex mathematics: Schroedinger’s equation “came out of the mind of Schroedinger” (Lectures on Physics). It’s just an approximation. Even Dirac’s equation is incorrect in assuming that the wavefunction varies smoothly with time, which is a classical approximation: quantum fields ensure that a wavefunction changes in a discontinuous (discrete) manner, merely when each quantum interaction occurs:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

“When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

“You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

“Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

Above: the path integral interference for light relies on the cancelling of photon phase amplitudes with large actions, but for the different case of the fundamental forces (gravitation, electromagnetism, weak and strong), the path integral for the virtual or “gauge” bosons involves a geometrical cancellation. E.g., an asymmetry in the isotropic exchange can cause a force! The usual objections against virtual particle path integrals of this sort are the kind of mindless arguments that would equally apply to any quantum field theory, not specifically this predictive one. E.g., physicists are unaware that the event horizon size for a black hole electron is smaller than the Planck length, and thus (on this argument) a radius of 2GM/c^2 is more physically meaningful as the basis for the grain-size cross-section of fundamental particles than Planck’s ad hoc formulation of his “Planck length” from dimensional analysis. Quantum fields like the experimentally-verified Casimir radiation which pushes metal plates together don’t cause drag or heating; they just deliver forces. “Critics” are thus pseudo-physicists!
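A quick check of that size comparison, using standard constants (the comparison itself is the author’s argument; the numbers below are just the two formulas evaluated):

```python
# "Black hole electron" event horizon radius, 2*G*m/c^2, versus the Planck length.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
m_e = 9.109e-31      # electron mass, kg

r_horizon = 2 * G * m_e / c**2           # ~1.4e-57 m
l_planck = (hbar * G / c**3) ** 0.5      # ~1.6e-35 m
print(r_horizon, l_planck, r_horizon / l_planck)   # ratio ~ 8e-23
```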

My great American friend Dr Mario Rabinowitz brilliantly points out the falsehood of Einstein’s general relativity “equivalence principle of inertial and gravitational mass” as the basis for mainstream quantum gravity nonsense in his paper Deterrents to a Theory of Quantum Gravity, pages 1 and 7 (18 August 2006), http://arxiv.org/abs/physics/0608193. General relativity is based on the false equivalence principle of inertial and gravitational mass, whereby Einstein assumed that Galileo’s law for falling bodies is exact, whereas of course it is only an approximation, because the mass in any falling body is not only accelerated by the Earth’s mass, but is also “pulling” the Earth upwards (albeit by a tiny amount in the case of an apple or a human, where one of the two masses is very small compared to the mass of the Earth). But for equal masses of fundamental particles (e.g. for the simplest gravitational interaction of two similar masses), this violation of Galileo’s law due to mutual attraction violates Einstein’s equivalence principle, as explained below by Dr Rabinowitz. Einstein forgot about the error in Galileo’s principle when formulating general relativity on the basis of the equivalence principle (note that genuine errors are not a crime; the real crime against progress in science was arrogantly using media worship as a sword to “defend” the errors of general relativity against genuine, competent critics for another 40 years, a tactic which continues to this day):

“As shown previously, quantum mechanics directly violates the weak equivalence principle in general and in all dimensions, and thus violates the strong equivalence principle in all dimensions. …

“Most bodies fall at the same rate on earth, relative to the earth, because the earth’s mass M is extremely large compared with the mass m of most falling bodies for the reduced mass … for M [much bigger than] m. The body and the earth each fall towards their common center of mass, which for most cases is approximately the same as relative to the earth. … When [heavy relative to earth’s mass] extraterrestrial bodies fall on [to] earth, heavier bodies fall faster relative to the earth [because they “attract” the earth towards them, in addition to the earth “attracting” them; i.e., they mutually shield one another from the surrounding inward-converging gravity field of distant immense masses in the universe] making Aristotle correct and Galileo incorrect. The relative velocity between the two bodies is v_rel = [2G(m + M)(1/r_2 – 1/r_1)]^(1/2), where r_1 is their initial separation, and r_2 is their separation when they are closer.

“Even though Galileo’s argument (Rabinowitz, 1990) was spurious and his assertion fallacious in principle – that all bodies will fall at the same rate with respect to the earth in a medium devoid of resistance – it helped make a significant advance [just like Copernicus’s solar system with incorrect circular orbits and epicycles prior to Kepler’s correct elliptical orbits, or Lamarck’s incorrect early theory of “acquired characteristic” evolution paving the way for Darwin’s later genetic theory of evolution] in understanding the motion of bodies. Although his assertion is an excellent approximation … it is not true in general. Galileo’s alluring assertion that free fall depends solely and purely on the milieu and is entirely independent of the properties of the falling body, led Einstein to the geometric concept of gravity. [Emphasis added to key, widely censored, facts against GR.]

“Einstein and his successors have regarded the effects of a gravitational field as producing a change in the geometry of space and time. At one time it was even hoped that the rest of physics could be brought into a geometric formulation, but this hope has met with disappointment, and the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like “metric,” “affine connection,” and “curvature,” but is not otherwise very useful. The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effect of gravitational fields on the motion of planets and photons or to a curvature of space and time.”

– Professor Steven Weinberg, Gravitation and Cosmology, Wiley, New York, 1972, p. 147.
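As a hedged numerical illustration of Rabinowitz’s point (my own sketch, standard two-body mechanics, not taken from his paper): the acceleration of a falling body relative to the Earth is G(M + m)/r^2, so a sufficiently heavy body really does fall (very slightly) faster relative to the Earth than a light one.

```python
# Two-body "fall rate" relative to the Earth: a = G * (M_earth + m) / r^2.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg

def relative_acceleration(m, r=6.371e6):
    """Acceleration of a body of mass m relative to the Earth, at distance r."""
    return G * (M_earth + m) / r**2

apple = relative_acceleration(0.1)       # a 100 g apple
big = relative_acceleration(1e20)        # a hypothetical 1e20 kg extraterrestrial body
print(apple, big, (big - apple) / apple) # fractional excess ~ m/M_earth ~ 1.7e-5
```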

Above: “Could someone please explain how or why, if, as SR tells us, c is the ceiling velocity throughout the Universe, and thus gravity presumably cannot propagate at a speed faster than the ceiling velocity, the Earth is not twice as far away from the Sun every thousand years or so which is the obvious consequence of gravity propagating at such a low speed as c and not, as everyone since Newton had always supposed, near-instantaneously?” – James Bogle (by email). Actually this supposed problem is down to just ignoring the facts: gravity isn’t caused by gravitons between Earth and Sun; it’s caused instead by exchange of gravitons between us and the surrounding immense distant masses isotropically distributed around us, with particles in the Sun acting as a slight shield.

If you have loudspeakers on your PC and use an operating system that supports sound files in websites, turn up the volume and visit www.quantumfieldtheory.org. The gravitons are spin-1, not spin-2; hence they are not simply going between the sun and earth: the sun is an asymmetry in an all-round exchange, and the speed at which “shadows” move is not light speed but effectively infinite, because shadows don’t exist physically as light-velocity moving radiation! The sun causes a pre-existing shadowing of gravitons ahead of any position that the earth moves into, so the speed of the gravitons has nothing to do with the speed at which the earth responds to the sun’s gravity. The sun sets up an anisotropy in the graviton field of space in all directions around it in advance of the motion of the earth. The mainstream “gravity must go at light speed” delusions are based purely on the ignorant, false assumption that the field only exists between earth and sun. Wrong. The sun’s “gravity field” (an anisotropy in the graviton flux in space from distant immense masses) is pre-existing in the space ahead of the motion of the planet, so the speed of gravitational effects is instant, not delayed.

The exchange of gravitons between masses (gravitational charges) has a repulsive-only effect. The Pauli-Fierz spin-2 graviton is a myth; see the following two diagrams. Spin-1 gravitons we’re exchanging with big distant masses result in a bigger repulsion from such massive distant masses (which are very isotropically distributed, around us in all directions) than from nearby masses, so the nearby masses have an asymmetric LeSage shadowing effect: we’re pushed towards them with the very accurately predicted coupling G (diagrams below for quantitative proof) by the same particles that cause the measured cosmological acceleration ~Hc. So we have a quantitative prediction, connecting the observed cosmological acceleration with the gravitational coupling G; this connection was made in May 1996 and was published two years before the cosmological acceleration was even discovered! Notice that the acceleration and expansion of the universe is not an effective expansion! I.e., as discussed later in this post, all the fundamental forces have couplings (e.g. G, alphaEM, etc.) that are directly proportional to the age of the universe. This is implied by the formula derived below, which is the correct quantum gravity proof for Louise Riofrio’s empirical equation GM = tc^3, where M is the mass of the universe and t is its age.
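A rough order-of-magnitude check of that equation (hedged: the mass of the observable universe is highly uncertain; ~1.5 × 10^53 kg is a commonly quoted ballpark for its ordinary matter, and t ~ 13.8 billion years):

```python
# Riofrio's GM = t*c^3, evaluated with rough, assumed inputs.
G = 6.674e-11                 # m^3 kg^-1 s^-2
c = 2.998e8                   # m/s
M = 1.5e53                    # kg (rough, assumed mass of the observable universe)
t = 13.8e9 * 3.156e7          # assumed age of the universe, in seconds

print(G * M, t * c**3)        # ~1.0e43 vs ~1.2e43: the same order of magnitude
```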

Edward Teller in 1948 made the erroneous claim that any variation of G (which had been predicted in an error-filled, guesswork way by Dirac) is impossible because it would vary the fusion rate in the big bang or in a star, but actually a variation of G does not have the effect Teller calculated, because (as shown in this and several earlier posts) all fundamental couplings are varying, electromagnetic as well as gravity! Hence Teller was incorrect in claiming that a doubling of G increases fusion via proton-proton gravitational compression in a star or in big bang fusion: it can’t do that, because the force of electromagnetic repulsion between colliding protons is increased by exactly the same factor as the gravitational force is increased! Therefore, fusion rates are essentially unaffected (uncharged particles like radiation pressure have a relatively small effect). Louise’s investigation of a presumed variation of c as the reciprocal of the cube-root of the age of the universe in her equation is spurious: her GM = tc^3 is instead evidence of a direct proportionality of G to the age of the universe t (the full reasons are explained in earlier posts). Now, galaxies, solar systems, atoms and nuclei are all orbital systems of stars, planets, shells of electrons and shells of nucleons, respectively, with their size controlled by fundamental forces through equations like F = m1*m2*G/r^2 = m2*v^2/r, i.e. m1*G/r = v^2, so if m1 and v are constants while the coupling G (or a Standard Model force coupling like alphaEM) is directly proportional to the age of the universe, it follows that the radius r must be directly proportional to G, so that the radius of a galaxy, solar system, atom or nucleus is directly proportional to the age of the universe t. If the horizon radius of the flat spacetime universe and the radii of galaxies, solar systems, atoms and nuclei are all directly proportional to t, it follows that although distant matter is receding from us, the expansion of all objects prevents any relative change in the overall (scaled) universe: rulers, people, planets, etc. expand at the same rate, so receding galaxy clusters will not appear smaller. You might think that this is wrong, and that increasing G and alphaEM should pull the Earth’s orbit in closer to the sun, and do the same for the electron. However, this more obvious solution of increasing orbital velocities is not necessarily consistent with the very slow rate of increase of G, so it’s more consistent to think of everything scaling up in size as force strengths increase: a stronger gravitational field can stabilize a galaxy of larger radius but containing the same mass! It is possible that a stronger alphaEM can stabilize a larger electron ground state radius; whether this is the case depends on whether or not the orbital velocity is altered as the electromagnetic coupling is varied.

However, of course, the Lambda-CDM cosmological model, basically a Friedmann-Robertson-Walker metric from general relativity which implicitly assumes constant G, is totally incorrect when viewed from the new quantum gravity theory. Is spacetime really flat? From a naive extrapolation using the false old framework of cosmology, hyped by Sean Carroll and other bigots who refuse to accept these facts of quantum gravity proved over a decade ago, you might expect that the linear increase of G with the age of the universe will cause the universe to eventually collapse. However, remember that the cosmological acceleration (a repulsion that supposedly flattens out the spacetime curvature on cosmological distance scales by opposing gravitation) is itself a quantum gravity effect: on the largest scales, mutual repulsion of masses predominates over LeSage shadowing and its pseudo-attraction.

Nevertheless, back in 1996 we predicted the same cosmological acceleration using two completely different calculations; only one was from the quantum gravity theory. The other correct prediction of the a ~ Hc cosmological acceleration came simply from the effect of spacetime on the observed Hubble recession rate law, v = HR (see proof linked here). That analysis is complementary to the quantum gravity calculation: the cosmological acceleration should be viewed as an artifact of the fact that the receding galaxies we see are being seen at times in our past, which are related to distances by R = ct. If we represent time since the big bang by t, then the time T in our past of a supernova apparently a distance R away is related to t simply by t + T = 1/H. So the cosmological acceleration is just a result of the fact that radiation comes back to us at light velocity, not instantly. If there were no time delay, we wouldn’t see any cosmological acceleration: the acceleration is physically being caused by the effective reference frame in which greater distances correspond to looking backwards in time. The universe’s horizon radius expands at the velocity of light, a linear expansion. This produces cosmological acceleration forces, and thus gravitation, due to the increasing time-lag for the exchange of all forms of radiation, including gravitons. At the same time, masses and rulers expand by the mechanism already explained, so the relative scale of the universe remains constant while gravitation and cosmological acceleration operate.
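For the record, the a ~ Hc figure is easy to evaluate (assuming a Hubble parameter of roughly 70 km/s/Mpc, which is my own illustrative input):

```python
# Cosmological acceleration of order H*c.
H = 70e3 / 3.086e22        # assumed Hubble parameter, converted to 1/s
c = 2.998e8                # m/s
print(H * c)               # ~ 7e-10 m/s^2
```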

Correction of mainstream errors in Electroweak Symmetry

Over Christmas, Dr Dorigo kindly permitted some discussion and debate over electroweak symmetry at his blog posting comments section, http://www.science20.com/quantum_diaries_survivor/blog/rumors_about_old_rumor, which helped to clarify some of the sticking points in the mainstream orthodoxy and possibly to highlight the best means of overcoming them in a public arena.

Some arguments against electroweak symmetry follow, mostly from replies to Dr Dorigo and Dr Rivero. The flawed logic of the “Higgs boson” assumption is based on the application of gauge theory for symmetry breaking to the supposed “electroweak symmetry” (never observed in nature). Only broken “electroweak symmetry”, i.e. an absence of symmetry and thus separate electromagnetic and weak interactions, has actually been observed in nature. So the Higgs boson required to break the “electroweak symmetry” is an unobserved epicycle required to explain an unobserved symmetry! What’s interesting is the nature of the groupthink “electroweak symmetry”. Above the “electroweak unification” energy, there is supposed to be equality of the electromagnetic and weak forces, merged into a single electroweak force. Supposedly, this is where the massive weak bosons lose their mass and thus gain light velocity, long range, and a stronger coupling, equal in strength to the electromagnetic field.

This unification guess has driven other possibilities out of sight. There are two arguments for it. First, the breaking of Heisenberg’s neutron-proton SU(2) chiral “isospin symmetry” leads to pions as Nambu-Goldstone bosons; so by analogy you can argue for Higgs bosons from breaking electroweak symmetry. This is unconvincing because, as stated, there is no electroweak symmetry known in nature; it’s just a guess. (It’s fine to have a guess. It’s not fine to have a guess, and use the guess as “evidence” for “justifying” another guess! That’s just propaganda or falsehood.) Secondly, there is the supposed “electroweak theory” of Weinberg and others. Actually, that is better called a hypercharge-weak theory, since U(1) in the standard model is hypercharge, which isn’t directly observable. The electromagnetic theory is produced by an adjustable epicycle (the Weinberg angle) that is forced to make the hypercharge and weak theories produce the electromagnetic field by ad hoc mixing. The prediction of the weak boson masses from the Weinberg angle isn’t proof of the existence of an electroweak symmetry, because the weak bosons only have mass when the “symmetry” is broken. All evidence to date suggests that electroweak symmetry (like aliens flying around in UFOs) is just a fiction, the Higgs is a fiction, and mass is not generated through symmetry breaking. Yet so much hype based on self-deception continues.
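For reference, the weak boson mass prediction referred to above is just the standard tree-level mixing-angle relation m_W = m_Z cos(theta_W); checking it numerically requires nothing beyond that textbook bookkeeping (the values below are commonly quoted, not derived here):

```python
# Tree-level relation between the W and Z masses and the Weinberg angle.
import math

sin2_theta_w = 0.223       # approximate on-shell value of sin^2(theta_W)
m_Z = 91.19                # Z boson mass, GeV

m_W = m_Z * math.sqrt(1 - sin2_theta_w)
print(round(m_W, 1))       # ~80.4 GeV, close to the measured W mass
```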

The funny thing about the Glashow-Weinberg-Salam model is that it was formulated in 1967-8, but was not well received until its renormalizability had been demonstrated years later by ‘t Hooft. The electroweak theory they formulated was perfectly renormalizable prior to the addition of the Higgs field, i.e. it was renormalizable with massless SU(2) gauge bosons (which we use for electromagnetism), because the Lagrangian had a local gauge invariance. ‘t Hooft’s trivial proof that it was also renormalizable after “symmetry breaking” (the acquisition of mass by all of the SU(2) gauge bosons, a property again not justified by experiment because the weak force is left-handed so it would be natural for only half of the SU(2) gauge bosons to acquire mass to explain this handedness) merely showed that the W-boson propagator expressions in the Feynman path integral are independent of mass when the momentum flowing through the propagator is very large. I.e., ‘t Hooft just showed that for large momentum flows, mass makes no difference and the proof of renormalization for massless electroweak bosons is also applicable to the case of massive electroweak bosons.
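The point about mass-independence at large momentum can be sketched with the standard textbook propagator forms (in the Feynman-’t Hooft gauge; this is my illustration of the statement above, not a reproduction of ’t Hooft’s proof):

```latex
% Massive versus massless vector-boson propagator at large momentum transfer.
\[
\frac{-i\,g_{\mu\nu}}{k^{2}-m_{W}^{2}}
\;\xrightarrow{\;k^{2}\,\gg\,m_{W}^{2}\;}\;
\frac{-i\,g_{\mu\nu}}{k^{2}},
\]
% i.e. for very large momentum k the massive propagator tends to the massless
% one, so a renormalization proof for the massless case carries over.
```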

‘t Hooft plays down the trivial physical nature of his admittedly mathematically impressive proof since his personal website makes the misleading claim: “…I found in 1970 how to renormalize the theory, and, more importantly, we identified the theories for which this works, and what conditions they must fulfil. One must, for instance, have a so-called Higgs-particle. These theories are now called gauge theories.”

That claim that he has a proof that the Higgs particle must exist is totally without justification. He merely showed that if the Higgs field provides mass, the electroweak theory is still renormalizable (just as it is with massless bosons). He did not disprove all hope of alternatives to the Higgs field, so he should not claim that! He just believes in electroweak theory and won a Nobel Prize for it, and is proud. Similarly, the string theorists perhaps are just excited and proud of the theory they work on, and they believe in it. But the result is misleading hype!

I’m not denying that the interaction strengths run with energy and may appear to roughly converge when extrapolated towards the Planck scale. But you get too much noise from hadron jets in such collisions to get an unambiguous signal. Even if you just collide leptons at such high energy, hadrons are created in pair production, and then you are reliant on extremely difficult QCD jet calculations to subtract the gluon-field “noise” before you can see any clear signals from the relatively weak (compared to QCD) electromagnetic and weak interactions.

… continued with part 2 here

The Standard Model and Quantum Gravity: Identifying and Correcting Errors (part 2)

[… continued from part 1]

I’m simply pointing out that there is no evidence given for electroweak symmetry, by which I refer to the claim that the weak bosons lose their mass at high energy. I don’t accept as evidence for electroweak symmetry a mere (alleged) similarity of the weak and electromagnetic cross-sections at very high energy (differing rates of running with energy in different couplings, due to unknown vacuum polarization effects, could cause apparent convergence simply by coincidence, without proving a Higgs field mechanism or the existence of electroweak symmetry). It’s hard to interpret the results of high energy collisions because you create hadronic jets which add epicycles into the calculations needed to deduce the relatively small electromagnetic and weak interactions. The energies needed to try to test for electroweak symmetry are so high that they cause a lot of noise which fogs the accuracy of the data. If you wanted to use these HERA data to prove the existence of electroweak symmetry (massless weak bosons), you would need to do more than show convergence in the cross-sections.

“I am talking about very clean events of hard deep inelastic scattering, where the bosons are seen with great clarity due to their leptonic decays.”

You’re thinking possibly about weak SU(2) symmetry and electromagnetic symmetry, and you think of these two separate symmetries together as “electroweak symmetry”. I’m 100% behind the extensive evidence for gauge theory of weak interactions and 100% behind gauge theory for electromagnetic interactions. These separate symmetries, produced in the “electroweak theory” by mixing the U(1) hypercharge boson with the SU(2) bosons, are not however “electroweak symmetry”, which only exists if massless weak bosons exist at very high energy. The Higgs field is supposed to give mass to those bosons at low energy, breaking the symmetry. At high energy, the weak bosons are supposed to lose mass, allowing symmetry of weak isospin and electromagnetic interactions by making the range of both fields the same.

I really need to find any alleged evidence for “electroweak symmetry” in my research for a paper, so if you ever recall the paper with the HERA data which you say contains evidence for electroweak symmetry, please let me know! So far I’ve read all the QFT books I can get (Weinberg, Ryder, Zee, etc.) and electroweak theory papers on arXiv, and I have not found any evidence for electroweak symmetry.

My understanding (correct me if I’m wrong here) is that if you collide protons and electrons at TeV energies, you knock free virtual quarks with the sheer energy of the collision? These virtual quarks gain the energy to become real (on-shell) quarks, forming hadron jets. These jets are difficult to predict accurately because they are dominated by QCD/strong forces and the perturbative expansion for QCD is divergent, so you need lattice calculations, which are inaccurate. So you can’t compare what you see with a solid prediction. You can measure what you see, but you can’t analyze the data very accurately. The color charge of the QCD jets can’t interact with the weak bosons, but the jets also have electromagnetic and weak charges which do interact with weak bosons. So you cannot do a precise theoretical analysis of the entire event. All you can really do is produce particles and see what they are and how they interact. You can’t do a complete theoretical analysis that’s accurate enough to deduce electroweak symmetry.

Yes, definitely SU(2) weak symmetry is based on an enormous amount of good empirical evidence: what I’m questioning is “electroweak symmetry”. Evidence for the broken and mixed U(1) symmetry and SU(2) symmetry is not at issue. What should be regarded as an open question is whether electroweak symmetry exists. The simplest default alternative to the Higgs-electroweak theory is to have a mixed but broken “electroweak symmetry”, i.e. no electroweak symmetry. This is precisely what Feynman argued in the 1980s. Instead of having a Higgs field which makes weak field quanta massive at low energy but massless at high energy, you instead add a quantum gravity gauge theory to the standard model, which gives mass to the weak field quanta at all energies (as well as giving masses to other massive particles). The quantum gravity gauge theory has mass-energy as its charge and it has gravitons as its bosons. In other words, the Higgs/electroweak symmetry theory is a complete red-herring. If its advocates are allowed to continue their propaganda, then there will be no well-developed alternative to the Higgs/electroweak symmetry when the LHC rules out the Higgs. The result will be the usual last-minute panic with a consensus of ill-informed opinions promoting new epicycles to prop up nonsense (save face).

Feynman’s opposition to “electroweak symmetry” is in Gleick’s biography of Feynman:

When a historian of science pressed him on the question of unification in his Caltech office, he resisted. “Your career spans the period of the construction of the standard model,” the interviewer said.

” ‘The standard model,’ ” Feynman repeated dubiously. . . .

The interviewer was having trouble getting his question onto the table. “What do you call SU(3) X SU(2) X U(1)?”

“Three theories,” Feynman said. “Strong interactions, weak interactions, and the electromagnetic. . . . The theories are linked because they seem to have similar characteristics. . . . Where does it go together? Only if you add some stuff we don’t know. There isn’t any theory today that has SU(3) X SU(2) X U(1) — whatever the hell it is — that we know is right, that has any experimental check. . . . “

Virtual quarks form in pairs due to pair production around the proton. The pairs get knocked free in high energy collision. I do know that individual quarks can’t exist by themselves. I wrote that the quarks are produced in pair production, and get knocked free of the field of the proton in a high energy inelastic collision. I didn’t write that individual quarks exist alone.

The mass term in the lagrangian always exists, but it doesn’t have the same value. If m = 0, that is the same as getting rid of the mass term. Reference is for instance Zee’s QFT book. You can’t formulate a QFT very conveniently without the field having mass. Sidney Coleman is credited by Zee with the trick of adding a mass term for the massless QED virtual photon field, for example. You have to have a mass term in the field to get the gauge theory lagrangian, but at the end you can set the mass equal to zero. It’s a mathematical trick. It’s not physics, just math.

The precise reference is Zee, 1st ed., 2003, pp 30-31: “Calculate with a photon mass m and set m = 0 at the end … When I first took a field theory course as a Student of Sidney Coleman this was how he treated QED in order to avoid discussing gauge invariance.” He ends up with an electromagnetic potential of (e^{-mr})/(4 Pi r). The exponential part of this, e^{-mr}, is due to the mass term. Setting m = 0 gives e^{-mr} = 1, so the mass term has no effect, and you get the expected potential for a massless field. By exactly the same argument, mass terms in the weak field need to be eliminated for “electroweak symmetry” by making m = 0 where such symmetry exists. Otherwise, you end up with a weak field potential which has an exponential term (reducing the range and field strength) due to the mass of the weak field quanta. To get “electroweak symmetry”, the weak field potential must become similar to the electromagnetic field potential at unification energy. That’s the definition of this “symmetry”.
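To see Zee’s trick numerically, here is a minimal Python sketch (my own illustration, not from Zee’s book) of the potential (e^{-mr})/(4 Pi r): as the mass m is reduced towards zero, the exponential factor tends to 1 and the familiar long-range massless form is recovered.

```python
import numpy as np

def potential(r, m):
    """Static potential from a boson propagator with mass m (natural units):
    V(r) = exp(-m*r) / (4*pi*r). Setting m = 0 removes the exponential
    suppression and recovers the massless (Coulomb-like) form."""
    return np.exp(-m * r) / (4.0 * np.pi * r)

r = np.array([0.5, 1.0, 2.0, 5.0])   # sample distances in natural units
for m in (1.0, 0.1, 0.0):            # shrink the mass term towards zero
    print(f"m = {m}: V(r) = {potential(r, m)}")
```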

Pauli first applied Weyl’s gauge theory to electrodynamics and was well aware that, for electromagnetic interactions, it really doesn’t matter if you have a mass term in the propagator like 1/[(k^2) – (m^2)], because it just represents the momentum delivered by the field boson in the Feynman diagram. You can treat the relativistic field quanta (moving with velocity c) as non-relativistic, allow the rest mass momentum in the propagator to represent the relativistic momentum of photons, and then simply edit out the problem of field quanta mass in the field potential by letting m = 0 in the final stage. This math trick complements the physics of gauge invariance, so there is no problem. Pauli however knew that the mass in the propagator is a real problem for non-Abelian fields that carry electric charge, so he objected to the Yang-Mills theory when Yang gave his lecture in 1954. Yang and Mills could not treat the mass of the field, and Pauli made such a fuss that Yang had to sit down. Electrically charged field quanta can’t propagate without rest mass (their magnetic self-inductance opposes their motion), so they must really have a mass in the propagator, as far as Pauli was concerned. This doesn’t apply to uncharged field quanta like photons, where you don’t need a massive propagator. Now the problem is: how do you get electroweak symmetry with electrically charged, massless SU(2) quanta at electroweak unification energy? As far as I can see, most of the authors of modern physics textbooks ignore or obfuscate the physics (which they mostly disrespect or frankly hate as being a trivial irrelevance in “mathematical physics”). But Noether makes all of the “mathematical symmetries” simple physical processes:

Noether’s theorem: every conservation law corresponds to an invariance or symmetry.

Gauge symmetry: conservation of charge (electric, weak, or color).

Electroweak symmetry: equality of couplings (strengths) of electromagnetic and weak interactions at electroweak unification energy.

Lagrangian symmetry or local phase invariance: produced by a lagrangian that varies with changes in the wavefunction, so that the emission of field quanta compensates for the energy used to change the wavefunction.

When you switch from describing massive to massless field quanta in electromagnetism, the equation for field potential loses its exponential factor and thus ceases to have short range and weak strength. However, the field quanta still carry momentum because they have energy, and energy has momentum. So there is no problem. Contrast this to the problems with getting rid of mass for SU(2) electrically charged W bosons!

“… concentrate on the hard subprocess, where the real (perturbative) physics is. There, the gamma and the W/Z have similar strengths once you reach virtualities of the order of the boson masses.”

You seem to be arguing that “electroweak symmetry” is defined by the similarity of the strengths of the weak and electromagnetic forces at energies equivalent to the weak boson masses (80 and 91 GeV). There is some confusion in QFT textbooks about exactly what the difference is between “electroweak symmetry” and “electroweak unification”.

At energies of 80 and 91 GeV (weak W and Z boson masses), the electromagnetic (gamma) and W/Z don’t seem to have very similar strengths: http://www.clab.edc.uoc.gr/materials/pc/proj/running_alphas.html

Yes, the electrically neutral Z weak boson has higher mass (91 GeV) than the electrically charged W weak bosons (80 GeV), but that’s just because the weak isospin coupling (g_W) has a value of only half the weak hypercharge coupling (g_B). The weak hypercharge for left-handed leptons (i.e. those which actually participate in weak interactions) is always Y = -1, while they have a weak isospin charge of +/-1/2. (Forget the right-handed lepton hypercharge, because right-handed leptons don’t participate in weak interactions.) So the weak isospin charge has just half the magnitude of the weak hypercharge! The Weinberg mixing angle Theta_W is defined by:

tan (Theta_W) = (g_W)/(g_B)

The masses of the weak bosons Z and W then have the ratio:

cos (Theta_W) = (M_W)/(M_Z)

Therefore, the theory actually predicts the difference in masses of the Z and W weak bosons from the fact that the isospin charge is half the hypercharge. This is all obfuscated in the usual QFT textbook treatment, and takes some digging to find. You would get exactly the same conclusion for the left-handed weak interaction if you replaced weak hypercharge by electric charge for leptons (not quarks, obviously) above. Because isospin charge takes a value of +/-1/2 while electric charge for leptons takes the value +/-1, the ratio of isospin to electric charge magnitude is a half. Obviously for quarks you need an adjustment for the fractional electric charges, hence the invention of weak hypercharge. Physically, this “(electric charge) = (isospin charge) + (half of hypercharge)” formula models the compensation for the physical fact that quarks appear to have fractional electric charges. (Actually, the physics may go deeper than this neat but simplistic formula, if quarks and leptons are unified in a preon model.) I’m well aware of the need for some kind of mixing, and am well aware that the difference in W and Z boson masses was predicted ahead of its discovery at CERN in 1983.
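As a quick numerical sketch of the above (my own check, using the standard published W and Z masses of roughly 80.4 and 91.2 GeV), take tan θ_W = 1/2 together with the textbook relation cos θ_W = M_W/M_Z:

```python
import math

# Ratio of couplings argued for in the text: tan(theta_W) = 1/2.
theta_W = math.atan(0.5)                    # mixing angle, radians
print(f"theta_W = {math.degrees(theta_W):.2f} degrees")   # ~26.57 degrees

# Textbook relation quoted above: cos(theta_W) = M_W / M_Z.
predicted_ratio = math.cos(theta_W)         # ~0.894
measured_ratio = 80.4 / 91.2                # published W and Z masses, GeV
print(f"predicted M_W/M_Z = {predicted_ratio:.3f}, measured = {measured_ratio:.3f}")
```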

I’m writing a paper clarifying all this, and it is good to be able to discuss and defend a criticism of electroweak symmetry here, to see what kind of arguments are used to defend it. It will help me to write the paper in a more concise, focussed way. Thank you Alejandro, and thanks to Tommaso and the other commentators for tolerating this discussion.

For the record: the essential “tan (Theta_W) = (g_W)/(g_B)” is equation 10.21 in David McMahon’s 2008 “QFT Demystified” textbook.

The problem with the usual interpretation of the top quark mass for Higgs boson studies is that, to counter this argument, I would have to discuss an alternative theory in detail, instead of just pointing out inconsistencies in the mainstream theory. Then critics will dismiss me as a crackpot and stop listening. But the top quark coupling seems to me to be evidence pointing exactly the other way, towards a quantum gravity gauge theory. The top quark mass fits perfectly into a simple model for particle masses. The foundation of this model for masses was a relationship between the Z boson mass and the electron mass (or similar) in a paper you wrote with Hans de Vries, so thank you for that. To summarize the essentials, we put a quantum gravity gauge group into the standard model in a very neat way (treating it like hypercharge), and remove the Higgs mass model. Mixing gives masses to the massive particles in a way which is not actually very novel. A charged fundamental particle, e.g. a lepton, has a vacuum field around it with pair production producing pairs of fermions which are briefly polarized by the electric field of the fermion, and this shields the core charge (thus renormalization). The energy absorbed from the field by the act of polarization (reducing the electric field strength observed at long distances) moves the virtual fermions apart, and thus gives them a longer life on average before they annihilate. I.e., it causes a statistical violation of the uncertainty principle: the energy the off-shell (virtual) fermions absorb moves them closer towards being on-shell. For the brief extra period of time (due to polarization) for which they exist before annihilation, they therefore start to feel the Pauli exclusion principle and to behave more like on-shell fermions with a structured arrangement in space. One additional feature of this vacuum polarization effect in giving energy to virtual particles is that they briefly acquire a real mass. So vacuum polarization has the effect of turning off-shell virtual fermions briefly into nearly on-shell fermions, simply by the energy they absorb from the electric field as they polarize! This vacuum mass and the Pauli exclusion principle have the effect of turning leptons into effectively the nuclei of little atoms, surrounded by virtual fermions which, when being polarized, add a Pauli exclusion principle structured real mass. It is this vacuum mass effect which is all-important for the tauon and also the top quark. The neutral Z acquires its mass by mixing of SU(2) with a quantum gravity gauge group. https://nige.wordpress.com/2010/05/07/category-morphisms-for-quantum-gravity-masses-and-draft-material-for-new-paper/

Theta_W or θ_W is empirically determined to be 29.3 degrees at 160 MeV energy using the 2005 data from parity violation in Møller scattering (sin^2 θ_W = 0.2397 ± 0.0013 was obtained at 160 MeV), and it was determined to be 28.7 degrees at 91.2 GeV energy in 2004 data using the minimal subtraction renormalization scheme (sin^2 θ_W = 0.23120 ± 0.00015). This difference is usually cited as evidence of the running of the Weinberg angle with energy, due to the running coupling which is caused by vacuum polarization (shielding of the core charges, which is a bigger effect at low energy than at high energy). See http://en.wikipedia.org/wiki/Weinberg_angle

What I stated was that, ignoring the running coupling effect (which is smaller for the weak isospin field than in QED, because of the weakness of the weak force field relative to QED), the Weinberg angle is indeed

tan θ_W = 1/2.

This gives θ_W = 26.57 degrees. Remember, empirically it is 29.3 degrees at 160 MeV and 28.7 degrees at 91.2 GeV. The higher the energy, the less vacuum polarization we see (we penetrate closer to the core of the particle, so there is less intervening polarized vacuum to shield the field). Therefore, the figure for higher energy, 28.7 degrees, is predicted to be closer to the theoretical bare core value (26.57 degrees) than the figure observed at low energy (29.3 degrees). The value of θ_W falls from 29.3 degrees at 160 MeV to 28.7 degrees at 91.2 GeV, and towards an asymptotic bare core value of 26.57 degrees at much higher energy.
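A one-line check of those angles (my own arithmetic, not taken from the cited sources) simply converts the quoted sin^2 θ_W values into degrees:

```python
import math

# Convert the quoted sin^2(theta_W) measurements into angles in degrees.
for label, sin2 in [("160 MeV (Moller scattering, 2005)", 0.2397),
                    ("91.2 GeV (minimal subtraction, 2004)", 0.23120)]:
    theta = math.degrees(math.asin(math.sqrt(sin2)))
    print(f"{label}: theta_W = {theta:.1f} degrees")   # ~29.3 and ~28.7 degrees

# Compare with the bare-core value argued for in the text:
print(f"atan(1/2) = {math.degrees(math.atan(0.5)):.2f} degrees")
```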

Yes, there must be a mixing of SU(2) and U(1). But no, I’ve never been against such a mixing. My incomplete draft paper from last October explains what I mean: https://nige.files.wordpress.com/2010/10/paper-draft-pages-1-5-2-oct-2010.pdf (ignore underlined Psi symbols; they should have an overbar). My argument is that the mathematics of the Standard Model is being misapplied physically. The electroweak unification is achieved by mixing SU(2) with U(1), but not anywhere near the way it is done in the Standard Model. SU(2) is the electroweak symmetry: the three gauge bosons exist in massless and massive forms. Massless charged bosons can’t propagate unless the magnetic self-inductance is cancelled, which can only happen in certain circumstances (e.g. a perfect equilibrium of exchange between two similar charges, so that the charged bosons going in each opposite direction have magnetic vectors that cancel one another, preventing infinite self-inductance, just as electromagnetic energy propagates in a light-velocity logic step along a two-conductor power transmission line). This effectively makes electric charge the extra polarizations that virtual photons need to account for attraction and repulsion in electromagnetism. The massive versions of those SU(2) bosons are the weak bosons, and arise not from a Higgs field but from a U(1) hypercharge/spin-1 quantum gravity theory.

There is a massive error in the Standard Model’s CKM parameter matrix in the “electroweak” theory, which has the contradiction that when a lepton like a muon or tauon decays, it decays via the intermediary step of a weak gauge boson to give a lepton, but when a quark decays it doesn’t decay into a lepton via the weak gauge boson, but instead into another quark: https://nige.files.wordpress.com/2010/08/diagram1.jpg. See
https://nige.wordpress.com/2010/05/07/category-morphisms-for-quantum-gravity-masses-and-draft-material-for-new-paper/ and
https://nige.wordpress.com/2010/06/29/professor-jacques-distler-disproves-the-alleged-anomaly-in-beta-decay-analysis/. When you correct this theoretical beta decay analysis error, all of the problems of the Standard Model evaporate and you get a deep understanding (this draft PDF paper is incomplete and underlined Psi symbols should have overbars, but most of the rest of the theory is on other blog posts).

www.quantumfieldtheory.org

“… it comes about that, step by step, and not realizing the full meaning of the process, mankind has been led to search for a mathematical description … mathematical ideas, because they are abstract, supply just what is wanted for a scientific description of the course of events. This point has usually been misunderstood, from being thought of in too narrow a way. Pythagoras had a glimpse of it when he proclaimed that number was the source of all things. In modern times the belief that the ultimate explanation of all things was to be found in Newtonian mechanics was an adumbration of the truth that all science as it grows towards perfection becomes mathematical in its ideas. … In the sixteenth and seventeenth centuries of our era great Italians, in particular Leonardo da Vinci, the artist (born 1452, died 1519), and Galileo (born 1564, died 1642), rediscovered the secret, known to Archimedes, of relating abstract mathematical ideas with the experimental investigation of natural phenomena. Meanwhile the slow advance of mathematics and the accumulation of accurate astronomical knowledge had placed natural philosophers in a much more advantageous position for research. Also the very egoistic self-assertion of that age, its greediness for personal experience, led its thinkers to want to see for themselves what happened; and the secret of the relation of mathematical theory and experiment in inductive reasoning was practically discovered. … It was an act eminently characteristic of the age that Galileo, a philosopher, should have dropped the weights from the leaning tower of Pisa. There are always men of thought and men of action; mathematical physics is the product of an age which combined in the same men impulses to thought with impulses to action.”

– Dr Alfred North Whitehead, An Introduction to Mathematics, Williams and Norgate, London, revised edition, undated, pp. 13-14, 42-43.

Einstein’s tensors (second order differential equations) presuppose a classical distribution of matter and a classical, continuously acting acceleration. Einstein and others had problems with the fact that all mass and energy is particulate when setting up the stress-energy tensor (the gravity charge for causing spacetime curvature) in general relativity. How do we use a tensor formulation, which can only model a continuous distribution of matter, to represent discrete particles of mass and energy?

Simple: we don’t. They average out the density of discrete particle mass-energy in a volume of space by replacing it with the helpful approximation of an imaginary “perfect fluid”, which is a continuum, not composed of particles. So all the successes of general relativity are based on lying: averaging out the discrete locations of quanta in a volume to feed into the stress-energy tensor. If you don’t tell this lie, general relativity fails completely: for discrete point-like particles in the stress-energy tensor, the curvature takes just two possible values, both of them unreal (zero and infinity!). So general relativity is just a classical approximation, based on lying about the nature of quantum fields and discrete particles!

“In many interesting situations… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighbouring fluid elements is pressure.”

– B. Schutz, A First Course in General Relativity, Cambridge University Press, 1986, pp. 89-90.

However, there is one thing that Einstein did do that was a step beyond Newton in general relativity, which is explained well at http://www.mathpages.com/home/kmath103/kmath103.htm:

It is this “trace” term, which Einstein had to introduce to make the stress-energy tensor’s divergence zero (satisfying the conservation of mass-energy), that makes light deflect twice as much due to gravity as Newton’s law predicts. But as Feynman showed in the final chapter of the second edition (not included in the first edition!) of the second volume of his “Lectures on Physics”, this special feature of curved spacetime is simple to understand as a gravitational field version of the Lorentz-FitzGerald contraction. Earth’s radius is contracted by (1/3)MG/c^2 = 1.5 millimetres to preserve mass-energy conservation in general relativity. Just as Maxwell predicted displacement current by looking physically at how capacitors with a vacuum for a dielectric allow current to flow through a circuit while they charge up, you don’t need a physically false tensor system to predict this. The fact that Maxwell used physical intuition and not mathematics to predict displacement current is contrary to the lying revisionist history at http://www.mathpages.com/home/kmath103/kmath103.htm, the author of which is apparently ignorant of the fact that Maxwell never used vector calculus (an innovation due to the self-educated Oliver Heaviside, a quarter century later), messed up his theory of light, never unified electricity and magnetism consistently despite repeated efforts, and came up with an electrodynamics which (contrary to Einstein’s ignorant claims in 1905 and for fifty years thereafter) is only relativistic for a (non-existent) “zero action” approximation, and by definition fails to be relativistic for all real-world situations (which comprise small but non-zero actions that vary as a function of the coordinate system and thus the motion, and so are not generally invariant). You don’t need tensors to predict the modifications to Newtonian gravity that arise when conservation of mass-energy in fields is included; you don’t need general relativity to predict the excess radius that causes the apparent spacetime curvature, because a LeSage type quantum gravity predicts that spin-1 gravitons bombarding masses will compress them, explaining the contraction. And a light photon deflects twice as much in a perpendicular gravity field as slow-moving bullets deflect, because of the Lorentz-FitzGerald contraction of the energy in the light photon: 100% of the energy is in the plane of the gravitational field, instead of just 50% for a bullet. So light photons interact twice as strongly with the gravity field. There is no magic!
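The 1.5 millimetre figure quoted above is easy to reproduce; a minimal sketch using standard values for G, c and the Earth’s mass:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # Earth's mass, kg

# Radial contraction (1/3)MG/c^2 discussed in the text.
excess_radius = M_earth * G / (3.0 * c**2)
print(f"Earth's radial contraction ~ {excess_radius * 1000:.2f} mm")   # ~1.5 mm
```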

Prediction of gravitational time-dilation

When light travels through a block of glass it slows down because the electromagnetic field of the light interacts with the electromagnetic fields in the glass. This is why light is refracted by glass. Light couples to gravitational fields as well as electromagnetic. The gravitational time dilation from the Einstein field equation is proved in an earlier blog post to be simply the same effect. The gravitons are exchanged between gravitational charges (mass/energy). Therefore, the concentration of gravitons per cubic metre is higher near mass/energy than far away. When a photon enters a stronger gravitational field, it interacts at a faster rate with that field, and is consequently slowed down. This is the mechanism for gravitational time dilation. It applies to electrons and nuclei, indeed anything with mass that is moving, just as it applies to light in a glass block. If you run through a clear path, you go faster than if you try to run through a dense crowd of people. There’s no advanced subtle mathematical “magic” at work. It’s not rocket science. It’s very simple and easy to understand physically. You can’t define time without motion, and motion gets slowed down by dense fields just like someone trying to move through a crowd.

Length contraction with velocity, and mass increase by the reciprocal of the same factor, are simply physical effects, as FitzGerald and Lorentz explained. A moving ship has more inertial mass than its own mass, because of the flow of water set up around it (like “Aristotle’s arrow”, fluid moving out at the bows flows around the sides and pushes in at the stern). As explained in previous posts, the “ideal fluid” approximation for the effect of velocity on the drag coefficient of an aircraft in the 1920s was predicted theoretically to be the factor (1 – v^2/c^2)^{-1/2}, where c is the velocity of sound: this is the “sound barrier” theory. It breaks down because, although the shock wave formation at sound velocity carries energy off rapidly in the sonic boom, it isn’t 100% efficient at stopping objects from going faster. The effect is that you get an effective increase in inertial mass from the layer of compressed, dense air in the shock wave region at the front of the aircraft, and the nose-on force has a slight compressive effect on the aircraft (implying length contraction). Therefore, from an idealized understanding of the basic physics of moving through a fluid, you can grasp how quantum field theory causes “relativity effects”!

“… light … “smells” the neighboring paths around it, and uses a small core of nearby space.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

Above: classical light “wave” illustration from Wikipedia. Most people viewing such diagrams confuse the waving lines, whose axes are labelled with field strengths along a single physical dimension, for field lines waving in three dimensional space! Don’t confuse field strength varying along one axis with a field line waving in two dimensions. It’s interesting that field lines are just a mathematical convenience or abstract model invented by Faraday, and are no more real in the physical sense than isobars on weather maps or contour lines on maps. If you scatter iron filings on a piece of paper held over a magnet several times, the absolute positions of the apparent “lines” that the filings clump along occur in randomly distributed locations, although they are generally spaced apart by similar distances. The random “hotspot” locations in which high random concentrations of the first-deposited filings land form “seeds”, which – under the presence of the magnetic field – have induced magnetism (called paramagnetism), and these attract further filings in a pole-to-pole arrangement that creates the illusion of magnetic field lines.

This classical theory of light (the diagram is a colour version of the one in Maxwell’s original Treatise on Electricity and Magnetism, final 3rd ed., 1873) is wrong: it shows fields along a single, non-transverse, dimension: a longitudinal “pencil of light” which violates the experimental findings of the double slit experiment! (If you look, you will see only one spatial direction shown, the z axis! The other two apparent axes are not actually spatial dimensions but just the electric E and magnetic B field strengths, respectively! You can draw a rather similar 3-dimensional diagram of the speed and acceleration of a car as a function of distance, with speed and acceleration plotted as if they are dimensions at right angles to the distance the car has gone. Obfuscating tomfoolery doesn’t make the graph spatially real in three dimensions.)

The real electromagnetic photon, needed to explain the double slit experiment using single photons (as Feynman shows clearly in his 1985 book QED), is entirely different to Maxwell’s classical photon guesswork of 1873: it is spatially extended in the transverse direction, due to the reinforcement of multiple paths (in the simultaneous sum of histories) whose action is small by comparison with about 15.9% of Planck’s constant (i.e., with h-bar, or h divided by twice Pi). However, this quantum path integral theory of the light photon is today still being totally ignored in preference to Maxwell’s rubbish in the ignorant teaching of electromagnetism. The classical equations of electromagnetism are just approximations valid in an imaginary, unreal world, where there is simply one path with zero action! We don’t live in such a classical universe. In the real world, there are multiple paths, and we have to sum all paths. The classical laws are only “valid” for the physically false case of zero action, by which I mean an action which is not a function of the coordinates for motion of the light, and which therefore remains invariant of the motion (i.e. a “pencil” of light, following one path: this classical model of a photon fails to agree with the results of the double slit diffraction experiment using photons fired one at a time).

To put that another way, classical Maxwellian physics is only relativistic because its (false) classical action is invariant of the coordinates for motion. As soon as you make the action a variable of the path, so that light is not a least-action phenomenon but instead takes a spread of actions each with different motions (paths), special relativity ceases to apply to Maxwell’s equations! Nature isn’t relativistic as soon as you correct the false classical Maxwell equations for the real world multipath interference mechanism of quantum field theory on small scales, precisely because action is a function of the path coordinates taken. If it wasn’t a function of the motion, there would simply be no difference between classical and quantum mechanics. The invariance of path action as a false classical principle and its variance in quantum field theory is a fundamental fact of nature. Just learn to live with it and give up worshipping Dr Einstein’s special relativity fraud!

Thus, in quantum field theory we recover the classical laws by specifying no change in the action when the coordinates are varied, or as Dirac put it in his 1964 Lectures on Quantum Mechanics (Dover, New York, 2001, pp. 4-5):

“… when one varies the motion, and puts down the conditions for the action integral to be stationary, one gets the [classical, approximately correct on large-scales but generally incorrect on small scales] equations of motion. … In terms of the action integral, it is very easy to formulate the conditions for the theory to be relativistic [in the real contraction, FitzGerald-Lorentz-Poincare spacetime fabric, emergent relativity mechanism, not Einstein’s damnable lies against a quantum field existing in the vacuum; remember Dirac’s public exposure of Einstein’s damned lies in his famous Nature v168, 1951, pp. 906-7 letter, “Is there an aether?”: ‘Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation has again changed. If one examines the question in the light of present-day knowledge, one finds that the aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an aether. . . . Thus, with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’!]: one simply has to require that the action integral shall be invariant. … [this] will automatically lead to equations of motion agreeing with [Dirac’s aether-based] relativity, and any developments from this action integral will therefore also be in agreement with [Dirac’s aether-based] relativity.”

Classical physics corresponds falsely to just the path of least action, or least time, whereas real (“sum over multiple path interference”) physics shows us that even in simple situations, light does not just follow the path of least action, but the energy delivered by a photon is actually spread over a range of paths with actions that are small compared to h-bar, but are not zero! There is a big difference between a path having zero action and a spread of paths having actions which are not zero but merely small compared to h-bar! This “subtle” difference (which most mathematical physicists fail to clearly grasp even today) is, as Feynman explained in his 1985 book QED, the basis of the entirely different behaviour of quantum mechanics from the behaviour of classical physics!

We have experimental evidence (backed up with a theory which correctly predicts the observed force couplings) that the force-causing off-shell radiation of the vacuum isn’t a one-way inflow, but is falling in to the event horizon radius of a fundamental particle, then being re-emitted in the form of charged (off-shell) Hawking exchange radiation. The reflection process is in some sense analogous to the so-called normal reflection of on-shell light photons by mirrors, as Feynman explained in QED in 1985. Light isn’t literally reflected by a mirror, as Feynman showed by graphical illustration of path integral phase amplitude summation in QED (1985): light bounces off a mirror randomly in all directions, and all paths of large action have random phase amplitudes which cancel one another out, leaving just paths with small path actions to add together coherently. The path integral for off-shell virtual photons (gauge bosons) is exactly the same. They go everywhere, but the net force occurs in the direction of least action, where their phases add together coherently, rather than cancelling out at random! The effective reflection of similarly charge-polarized gauge bosons between similar charges is just the regular exchange process as depicted in basic (non-loopy) Feynman diagrams for fundamental interactions.
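The phase-cancellation argument is easy to demonstrate numerically. The following toy sketch (my own illustration of the general principle, not a calculation from QED) labels each path by a single deviation parameter x with extra action S(x) = x^2 in units of h-bar; summing the phasors exp(iS) shows that the small-action region adds coherently while the large-action region largely cancels out:

```python
import numpy as np

# Toy path integral: one deviation parameter x per "path", extra action
# S(x) = x^2 (in units of hbar), stationary at x = 0.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
phasors = np.exp(1j * x**2) * dx      # exp(i*S/hbar) weight for each path

near = abs(phasors[np.abs(x) < 0.5].sum())   # small-action paths (coherent)
far = abs(phasors[np.abs(x) > 5.0].sum())    # large-action paths (mostly cancel)
print(f"near-stationary contribution ~ {near:.3f}, large-action contribution ~ {far:.3f}")
```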


Above: electric fields carry electric charge (nobody has ever seen the core charge of an electric field), although this is contrary to the mainstream reasoning based on historical accident, which assumes that the virtual photons of electromagnetism are electrically neutral and are distinguished for positive and negative fields by some magical, unobservable extra polarizations! It’s obvious that charged massless exchange radiation can propagate simultaneously along paths in opposite directions (although it can’t go along a one-way path only, due to infinite magnetic self-inductance at light velocity!), because of the cancellation of the superimposed magnetic field vectors, as shown in the diagram above (for a theoretical introduction, see the linked paper here, although note that underlined Psi symbols should have overbars).

Wow. You’d think this would be immediately taken up in education and the media and explained clearly to the world, wouldn’t you? No chance! What’s wrong is that Feynman’s 1985 book QED is simply ignored. When Feynman first tried to publish his simple “Feynman diagrams” and his multipath interference theory of quantum mechanics at the Pocono conference in 1948, he was opposed bitterly by the old 1st quantization propagandists like Niels Bohr, Oppenheimer, Pauli, and many others. They thought he didn’t understand Heisenberg’s uncertainty principle! They hated the idea of simple Feynman diagrams to guide physical understanding of nature by allowing the successive terms in the path integral’s perturbative expansion to be given a simple physical meaning and mechanism, in place of obscure, obfuscating guesswork pseudo-mathematical physics. They hated progress then. People still do!

In 2002 and 2003 I wrote two papers in the Electronics World journal (thanks to the kind interest or patience of two successive editors) about a sketchy quantum field theory that replaces, and makes predictions way beyond, the Standard Model. Now in 2011, we can try an alternative presentation to clarify all of the technical details, not by simply presenting the new idea, but by going through errors in the Standard Model and general relativity. This is because, after my articles had been published and attacked with purely sneering ad hominem “academic” non-scientific abuse, Leslie Green then wrote a paper in the August 2004 issue of the same journal, called “Engineering versus pseudo-science”, making the point that any advance that is worth a cent by definition must conflict with existing well established ideas. The whole idea that new ideas are supplementary additions to old ideas is disproved time and again. The problem is that the old false idea will be held up as some kind of crackpot evidence that the new idea must be wrong. Green stated in his paper:

“The history of science and technology is littered with examples of those explorers of the natural world who merely reported their findings or theories, and were vehemently attacked for it. … just declaring a theory foolish because it violates known scientific principles [e.g. Aristotle’s laws of motion, Ptolemy’s idea that the sun orbits the earth, Kelvin’s stable vortex atoms of aether, Einstein’s well-hyped media bigotry – contrary to experimental evidence – that quantum field theory is wrong, Witten’s M-theory of a 10 dimensional superstring brane on an 11 dimensional supergravity theory, giving a landscape of 10^500 parallel universes, etc.] is not necessarily good science. If one is only allowed to check for actions that agree with known scientific principles, then how can any new scientific principles be discovered? In this respect, Einstein’s popularisation of the Gedankenexperiment (thought-experiment) is potentially a backward step.”

So the only way to get people to listen to facts is to kill the rubbish holding them back from being free to think about a vital innovation that breaks past the artificial barriers imposed by mainstream groupthink ideology and its suppressive and corrosive treason to the cause of genuine scientific advance and human progress in civilization.

Fig. 1a: the primary Feynman diagram describing a quantum field interaction with a charge is similar for mathematical modelling purposes for all of the different interactions in the Standard Model of particle physics. The biggest error in the Standard Model is the assumption that the physically simplest or correct model for electromagnetism is an Abelian gauge theory in which the field is mediated by uncharged photons, rather than a Yang-Mills theory in which the field carries charge. This blog post will explain in detail the very important advantages to physics to be obtained by abandoning the Abelian theory of electromagnetism, and replacing it by a physically (but not mathematically) simpler Yang-Mills SU(2) theory of electromagnetism, in which the massless field quanta can be not merely neutral, but can carry either positive or negative electric charge. (Source: Exchange Particles internet page. For clarity I’ve highlighted an error in the direction of an arrow in the weak interaction diagram; this is of course nothing to do with the error in electromagnetism which I’m describing in this post.)

Note also the very important point that, for high-energy physics where particles approach very closely into field strengths exceeding Schwinger’s 1.3 x 10^18 volts/metre cutoff for vacuum fermion pair production, the spacetime annihilation and creation “loops” shown on Feynman diagrams have been excluded from these simplified diagrams. In understanding the long range forces pertinent to the kind of low energy physics we see everyday, we can usually ignore spacetime loops in Feynman diagrams, because in QED the biggest effect for low energy physics is from the simplest Feynman diagram, which doesn’t contain any loops. The general effect of such spacetime loops due to pair-production at high energies is called “vacuum polarization”: the virtual fermions suck in energy from the field, depleting its strength, and as a result the average distance between the positive and negative virtual fermions is increased slightly owing to the energy they gain from their polarization by the field. This makes them less virtual, so they last slightly longer than predicted by Heisenberg’s uncertainty principle before they approach and annihilate back into bosonic field quanta. While gaining extra energy from the field, they modify the apparent strength of the charge as seen at lower energies or longer distances, hence the need to renormalize the effective value of the charge for QFT calculations, by allowing it to run as a function of energy. In QCD there is gluon antiscreening, which we explained in previous posts is due to the creation of gluons to accompany virtual hadrons created by pair production in very strong electric fields, so the QCD running coupling at the highest energies runs the opposite way to the QED running coupling. Field energy must be conserved, so the QED field loses energy, the QCD field gains energy, hence asymptotic freedom for quarks over a certain range of distances. This total field energy conservation mechanism is completely ignored by QFT textbooks! As the virtual fermions gain some real energy from the field via the vacuum polarization, they not only modify the apparent charge of the particle’s core, but they also get modified themselves. On-shell fermions obey the Pauli exclusion principle. Thus, the virtual fermions in strong fields can actually start to become structured like electron shells around the particle core. This mechanism for vacuum structuring, as shown in earlier blog posts, gives rise to the specific discrete spectrum of fundamental particle masses, a fact that has apparently led to the repeated immediate deletion of arXiv-submitted papers, due to ignorance, apathy, and hostility of mainstream physicists towards checkable, empirically based mechanisms in QFT. Elitist superstring theorists preach (off the record, on Dr Lubos Motl’s superstring theory blog, or in anonymous sneering comments) that all of this progress is merely “heuristically based” physics, that such experimentally guided theory makes them sick, is mathematically naive, inelegant or repulsive, and that it would “just” reduce physics to a simple mechanical understanding of nature that the person in the street could grasp and understand. (Amen to that last claim!)
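The Schwinger cutoff quoted above can be checked from the standard expression E_c = m_e^2 c^3/(e ħ); a minimal sketch with standard constants:

```python
# Check of the pair-production cutoff quoted in the text:
# E_c = m_e^2 * c^3 / (e * hbar) ~ 1.3e18 volts/metre.
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field ~ {E_c:.2e} V/m")   # ~1.3e18 V/m
```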

Fig. 1b: Maxwell’s equations (Maxwell wrote them as long-hand first-order differential terms summarizing the laws of Gauss, Ampere and Faraday, with the addition of his own, now textbook-obfuscated, law of “displacement current” through the aether for the vital case of open circuits, e.g. the effects of net energy transfer through space from accelerating and decelerating currents in the plates of a charging or discharging capacitor which has a vacuum as its “dielectric”; the advanced curl and div operator notation was introduced by the self-taught mathematical physicist Oliver Heaviside) contradict reality experimentally in what is called the Aharonov–Bohm effect (or Ehrenberg–Siday–Aharonov–Bohm effect). The failure of Maxwell’s equations is their neglect of energy in fields in general, and neglect of the conservation of energy in supposedly “cancelled” fields in particular! E.g., inside a block of glass through which light travels, there is positive electric field energy density from atomic nuclei and negative electric field energy density from orbital electrons. The two fields superimpose and neatly “cancel”, leaving no effect according to Maxwell’s equations (which don’t predict the variation of relative permittivity as a function of “cancelled” fields!). So why does light slow down and thus refract in glass? Answer: the energy density of the “cancelled” electric fields is still there, and “loads” the vacuum. The photon’s electromagnetic field interacts with the electromagnetic energy in the glass, and this slows it and can deflect its direction. All you can do with Maxwell’s equations to allow for this is to make an ad hoc modification to the permittivity of the vacuum, fiddling with the “constants” in the equation to make it agree with experiments! The same effect applies to magnetic fields, as the experimental confirmation of the Aharonov–Bohm effect proves. To correct Maxwell’s equations, we replace them with a similarly first-order but more comprehensive “field potential” vector which includes a term that allows for the energy of cancelled fields in the vacuum. Note, however, that this modification to Maxwell’s equations under some conditions leads to conflicts with “special relativity”. E.g., if the zero point vacuum itself is viewed as consisting of “cancelled” field energy by analogy to a block of glass, then the modified Maxwell equations no longer necessarily necessitate the principle of special relativity, but under some circumstances necessitate absolute motion instead. This fact is usually obfuscated either to defend mathematical mysticism in theoretical physics, or to “protect Einstein’s authority”, much as people used to reject Newton’s laws in deference to the more ancient “authority” of Aristotle’s laws of motion.
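To illustrate the “fiddled permittivity” point numerically: in the classical treatment the only handle on the slowing of light in glass is the relative permittivity, via n = sqrt(ε_r μ_r) and v = c/n. A minimal sketch (assuming a typical optical-frequency ε_r of about 2.25 for non-magnetic glass):

```python
import math

c = 2.998e8                      # speed of light in vacuum, m/s
epsilon_r, mu_r = 2.25, 1.0      # assumed relative permittivity/permeability of glass

n = math.sqrt(epsilon_r * mu_r)  # refractive index ~ 1.5
print(f"n = {n:.2f}, phase velocity = {c / n:.2e} m/s")   # ~2.0e8 m/s
```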

The problem that the zero-point electromagnetic energy in the vacuum might constitute an absolute frame of reference due to gravitational effects is clearly stated by Richard P. Feynman and Albert R. Hibbs, Quantum Mechanics and Path Integrals, Dover, New York, corrected edition, 2010, page 245:

“… if we were to sum this ground-state energy over all of the infinite number of possible modes of ever-increasing frequency which exist even for a finite box, the answer would be infinity. This is the first symptom of the difficulties which beset quantum electrodynamics. … Suppose we choose to measure energy from a different zero point. … Unfortunately, it is really not true that the zero point of energy can be assigned completely arbitrarily. Energy is equivalent to mass, and mass has a gravitational effect. Even light has a gravitational effect, for light is deflected by the sun. So, if the law that action equals reaction has qualitative validity, then the sun must be attracted by the light. This means that a photon of energy {h-bar}*{omega} has a gravity-producing effect, and the question is: Does the ground-state energy term {h-bar}*{omega}/2 [this assumes two modes per k] also have an effect? The question stated physically is: Does a vacuum act like a uniform density of mass in producing a gravitational field?”

On page 254, they point out that if the charged and neutral Pi mesons differ only in charge, then their observed differences in mass (the charged Pi meson has a greater mass than the neutral Pi meson) imply that this extra mass in the case of a charged particle comes from “the different way they couple to the electromagnetic field. So presumably the mass difference … represents energy in the electromagnetic field.” Using the same cutoff that works here for the electromagnetic field of an electron, on page 255 they find that the corresponding correction to the mass of the electron for electromagnetic field interactions “is only about 3 percent, but there is no way to test this, for we do not recognize a neutral counterpart to the electron.” As we have pointed out since 1996, there are two separate long-range zero-point fields in the vacuum: gravitational (gravitons) and electromagnetic (off-shell photons), with very different energy densities due to the factor of 10^40 difference in their long-distance couplings (the coupling at the low-energy IR cutoff limit, i.e. the asymptotic limit of the running coupling that is valid for the low-energy physics domain, below ~1 MeV kinetic energy). The confusion in the value of the pseudo “cosmological constant” from the zero point vacuum comes from confusing the immense electromagnetic field energy density of the vacuum for the relatively tiny gravitational field energy density of the vacuum. It is the latter, manifested (as we effectively proved in 1996) by spin-1 gravitons, which causes the small observed cosmological acceleration of the universe, a ~ Hc. This is so because electric charge comes in two forms which balance, preventing long-range electromagnetic forces in the universe, whereas all observed gravitational charge has the same single sign and cannot cancel out. Gravitation thus pushes the matter apart (over long distances), causing cosmological acceleration. (On relatively small distance scales, the shielding of an observer by the presence of a relatively nearby mass, from the immense convergence of exchange gravitons with the surrounding isotropic universe, pushes the observer towards the nearby mass. The details of this have been carefully checked and confirmed to experimental accuracy!)
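As an order-of-magnitude check of the a ~ Hc estimate (my own arithmetic, assuming a Hubble parameter of roughly 70 km/s/Mpc):

```python
Mpc = 3.086e22            # metres per megaparsec
H0 = 70e3 / Mpc           # assumed Hubble parameter, 1/s
c = 2.998e8               # speed of light, m/s

a = H0 * c                # the text's a ~ H*c estimate of the cosmological acceleration
print(f"a ~ H*c ~ {a:.1e} m/s^2")   # ~7e-10 m/s^2
```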

Fig. 1c: the SU(2) Yang-Mills field strength equation for electromagnetism utilizing massless charged field quanta reduces to the Maxwellian U(1) equation (equivalent to uncharged gauge bosons) under all necessary conditions, because of the motion-denying magnetic self-inductance of the charged massless field quanta of SU(2). Note that the transfer of electric charge by Yang-Mills gauge bosons is not unaccompanied by a force. The charged gauge bosons carry both force-causing energy and charge. SU(2) includes one neutral boson as well as two charged bosons, so the neutral boson can deliver forces without carrying charge. SU(2) is thus a rich mathematical theory that can do a lot, and it is tempting with massless exchange radiation to attribute the neutral boson to the graviton and the charged ones to electromagnetism (with left-handed interacting massive versions also existing to produce weak interactions). An additive “drunkard’s walk” of charged massless gauge bosons between the ~10^80 real fermion pairs in the universe produces a path integral resultant that “conveniently” predicts the low-energy electromagnetic coupling IR limit to be (~10^80)^{1/2} times stronger than gravitation, because the neutral bosons (gravitons) don’t undergo such an additive path integral! However, the theory is stronger than such superficial conveniences suggest, because it also predicted, two years ahead of observation, the correct observed cosmological acceleration of the universe, and vice-versa, it predicts the observed gravitational coupling (not using the 10^40 factor just mentioned). It turns out that the simplest fully-consistent theory of nature has the graviton emerge from the U(1) hypercharge which mixes with the neutral massless gauge boson of SU(2). Ignorant critics may claim that this correct limit proves that the SU(2) model is unnecessary under Occam’s Razor, since for most cases it reduces to U(1) for practical calculations in electromagnetism, but this is a false criticism. The SU(2) electromagnetic theory is necessary to properly understand the relationship between electromagnetism and weak interactions (only left-handed interacting spin field quanta effectively acquire mass and partake in weak interactions)! The Abelian U(1) theory is a hypercharge which – when mixed with SU(2) – gives rise to the masses of the weak field quanta and also gives rise to a neutral field quantum, a spin-1 graviton. This is necessary. The spin-1 graviton pushes masses together, and this was falsely rejected by Pauli and Fierz in 1939 on the basis of a hidden implicit assumption which has been proved false. The currently fashionable claim that, because Maxwell’s equations are rank-1 tensors and general relativity’s Ricci curvature tensor is rank-2, electromagnetic field quanta are spin-1 and gravitons are spin-2, is a complete fraud; it is an expression of the most puerile physical and mathematical confusion between physical reality and the different mathematical models that can be used to represent that physical reality. We can, for instance, express electromagnetic forces in terms of rank-2 curvature equations. We don’t, not because this is the “wrong” thing to do, but because it is unnecessary, and it is far more convenient to use rank-1 equations (divs and curls of Faraday’s “field lines”).
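The sqrt(N) scaling invoked in the “drunkard’s walk” argument is a standard statistical result, and is easy to illustrate numerically (this sketch only demonstrates the scaling of a randomly-phased sum; it is not a derivation of the coupling ratio):

```python
import numpy as np

# Resultant of N randomly-phased unit contributions grows like sqrt(N).
rng = np.random.default_rng(0)
for N in (100, 10_000, 1_000_000):
    phases = rng.uniform(0.0, 2.0 * np.pi, N)
    resultant = abs(np.exp(1j * phases).sum())
    print(f"N = {N}: |sum| = {resultant:.1f}, sqrt(N) = {np.sqrt(N):.1f}")
```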

Regarding mathematics being confused for reality, the great Eugene Wigner in 1960 published a paper called “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, Communications in Pure and Applied Mathematics, vol. 13, No. I. It’s mainly hand-waving groupthink fashion, that is a “not even wrong” confusion between reality and continuously-evolving mathematical models that are just approximations, e.g. differential equations for wavefunctions in quantum mechanics implicitly assume that wavefunctions are continuously variable – not discretely variable – functions, which disagrees with the physical premise of quantum field theory, namely that every change in the state of a particle is a discrete event! (This definition, of a change of “state” as being a discrete change, doesn’t include purely rotational phase amplitudes due to the spin of a particle which has a transverse polarization; the wavefunction for phase amplitude will be a classical-type continuous variable, but other properties such as accelerations involve forces which in quantum field theory are mediated by discrete particle interactions, not continuous variables in a spacetime continuum.)

The first false specific claim that Wigner makes in his paper is his allusion, very vaguely (the vagueness is key to his confusion), to the fact that the integral of the Gaussian distribution, exp(-x^2), over all values of x between minus infinity and plus infinity, is equal to the square root of Pi. He feels uneasy that the square root of Pi (Pi being the ratio of the circumference to the diameter of a circle) is the result of a probability distribution. However, he ignores the fact that there is -x^2 in the natural exponent, so this is a natural geometric factor. If x is a scaled distance, then x^2 is an area, and you’re talking geometry. It’s no longer simply a probability that is unconnected to geometry. For example, the great RAND Corporation physicist Herman Kahn, in Appendix I to his 1960 treatise on deterrence, On Thermonuclear War, shows that the normal or Gaussian distribution applies to the effect of a missile aimed at a target; the variable x is the ratio of the distance from the intended ground zero to the CEP standard error distance for the accuracy of the missile. We see in this beautiful natural example of falling objects hitting targets that the Gaussian distribution is implicitly geometric, and it is therefore no surprise that its integral should contain the geometric factor of Pi.

The circular area that objects fall into is the product Pi*r^2 where r is radius, which is directly proportional to the scaled radius, x. This is mathematically why the square root of Pi comes out of the integral of exp(-x^2) over x from minus to plus infinity (i.e., over an infinitely extensive flat plane, that the objects fall upon). Quite simply, the Gaussian distribution law fails to include the factor Pi in its exponent, so you get the square root of Pi coming out of the integral (thus the square root of Pi is the normalization factor for the Gaussian distribution in statistics). If only the great Gauss in 1809 had half a brain and knew what he was doing, he’d have included the Pi factor in the exponent, giving an integral output of 1, so we wouldn’t get the fictitious square root of Pi! The Gaussian or normal distribution is just the plain old negative exponential distribution of coin-tossing, with relative area as its variable! It’s therefore simply the insertion of area as the variable that introduces Pi (either directly in the exponent, or else as the square root of Pi in the integral result and related normalization factor). The error of Wigner was in not recognising that the square of dimensionless relative radius, x^2, needs to be accompanied by the equally dimensionless geometric factor Pi, in the negative exponent. It is a classic error of theoretical physicists to believe, on the basis of a mistaken understanding of dimensional analysis, that dimensionless geometric conversion factors like Pi only apply to dimensionful, absolute distances or areas, not to relative distances or areas. In fact, factors like Pi obviously also apply to dimensionless relative measures of distance or area, because it is self-evident that if the radius of a circle is one dimensionless unit, then its area is obviously Pi dimensionless units, and not one dimensionless unit, as confused people like Gauss and Wigner believed with their obfuscating formula for the normal distribution!
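Both normalizations are trivial to verify numerically; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# Integral of exp(-x^2) over the real line gives sqrt(pi);
# writing pi into the exponent, exp(-pi*x^2), integrates to exactly 1.
gauss, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
stigler, _ = quad(lambda x: np.exp(-np.pi * x**2), -np.inf, np.inf)

print(gauss, np.sqrt(np.pi))   # ~1.7725 in both cases
print(stigler)                 # ~1.0
```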

Statistician Stephen M. Stigler (best known for Stigler’s law of eponymy) first suggested replacing the Gaussian distribution exp(−x²) with exp(−πx²) in his 1982 paper, “A modest proposal: a new standard for the normal”, The American Statistician, vol. 36, no. 2. However, Stigler was too modest and therefore failed to make the point with sufficient physical force to get the world’s mathematics teachers and users to dump Laplace’s and Gauss’s obfuscating, fumbling nonsense and make statistics physically understandable to clear-thinking students. So even today, Wigner’s lie continues to be believed by the fashionable groupthink ideology of pseudo-mathematical physics prevailing in the world, as the following illustration indicates (note that the hoax began with Laplace, who infamously claimed that God was an unnecessary hypothesis in his crackpot mathematics!):

Wigner also ignores the fact that the mathematical concept of Pi is ambiguous in physics because of the excess radius of a mass in general relativity: general relativity and quantum gravity predict that around a spherical mass M, the measured radius shrinks by the excess radius GM/(3c²) metres, while the transverse direction (circumference) is unaffected, so the measured circumference-to-diameter ratio departs from the textbook Pi unless you invoke curved spacetime. Since curved spacetime seems to be a classical large-scale approximation, incompatible on the deeper level with quantum fields – where all actions consist not of continuously variable differential equations but rather of a series of discrete impulsive particle interactions – it appears that the “excess radius” effect proves that the mathematical textbook value of Pi is wrong, and the real value of Pi is a variable quantity, which is the effect of the gravitational field warping spatial dimensions. Wigner simply ignores this mathematical failure of Pi, implicitly assuming that the textbook formula is correct. Actually, nobody has verified the textbook formula by direct physical measurement to more than a few significant figures, and since gravity is so weak, the variation in Pi is small. So the point remains: mathematics has nothing to do with physics, beyond constituting a puerile tool or model for imperfect but often helpful calculations, and is a danger in leading to arcane worship as an alternative to religion, a problem that goes back to the very roots of mathematics in the Egyptian priesthood and in the Greek Pythagorean cult.
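For scale, the excess radius GM/(3c²) mentioned above is about 1.5 millimetres for the Earth and roughly half a kilometre for the Sun, which is why no direct survey of circumference against radius has ever been accurate enough to detect it. A minimal sketch of the arithmetic (the GM values are standard published gravitational parameters, not figures from this post):

```python
# Excess radius GM/(3c^2) predicted by general relativity, for the Earth and the Sun.
c = 2.998e8          # speed of light, m/s
GM_earth = 3.986e14  # gravitational parameter GM of the Earth, m^3/s^2
GM_sun   = 1.327e20  # gravitational parameter GM of the Sun, m^3/s^2

for name, GM in (("Earth", GM_earth), ("Sun", GM_sun)):
    excess = GM / (3.0 * c**2)   # metres
    print(f"excess radius of the {name}: {excess:.3g} m")
```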

The failure of mathematics to make precise deterministic predictions even for classical systems, like the collision of three balls in the “three body problem” – which has no general closed-form solution under Newton’s laws – shows this mathematical failure very clearly. Newton only came up with laws of motion that are deterministic when applied to an artificially simplistic situation which never really occurs precisely in our universe! Even if you tried to collide two balls in the vacuum of space, particles of radiation would affect the outcome! Nature isn’t mathematical! It’s physical. So Pi isn’t really the ratio of the circumference of a circle to its diameter; that’s only an approximation!

Wigner’s “mathematical reality” ideology nearly cost America the vital Nagasaki plutonium bomb that finally convinced Japan to agree to a conditional surrender without a horrific million plus casualties in an invasion of Japan, after Hiroshima and the Russian declaration of war against Japan failed. Wigner designed the plutonium production reactors but arrogantly tried to prevent the engineers from enlarging the core size to allow for unknowns. He raged that the engineers were ignorant of the accuracy of the cross-sections for fission and the accuracy of the mathematical physics of nuclear chain reactions, and were delaying plutonium production by insisting on bigger reactor cores than were needed. After the first reactor started up, it shut itself down a few hours later. Wigner’s data on the 200 fission products had been incomplete, and it turned out that some fission products like Xe-135 had large cross-sections to absorb neutrons, so after a few hours enough had been produced to “poison” the chain reaction. It was only because the engineers had made the cores bigger than Wigner specified, knowing that mathematical physics predictions are often wrong, that they were able to overcome the poisoning by adding extra uranium to the core to keep it critical!

Fig. 1d: he was unable to understand the immoral perils of relativism in blocking progress in physics, and was unable to understand the simplicity of physical mechanisms for fundamental forces, but at least Einstein was able to make the equations look pretty and attractive to the children who have only learned to count up to the number three, and who like patterns and very simple equations (a PDF version of the above table is linked here, since I can’t easily put Greek symbols into html blog posts that will display correctly in all browsers; notice that the top-left to bottom-right diagonal of zero terms sums to the trace of the tensor, which is zero in this case). Actually, using the field tensor formulation to represent the various components of electric and magnetic fields is quite a useful – albeit usually obfuscated – reformulation of Maxwell’s equations. However, mathematical models should never be used to replace physical understanding of physical processes, e.g. by deliberate attempts to obfuscate the simplicity of nature. If you’re not blinded by pro-tensor hype, you can see an “anthropic landscape” issue very clearly with Einstein’s tensor version of Maxwell’s equations in this figure: the field strength tensor and its partial derivative are indeed capable of modelling Maxwell’s equations, but only in certain ways, which are “picked out” specifically because they agree with nature. In other words, it’s just ad hoc mathematical modelling; it’s not a predictive theory. If you chisel a beautiful woman out of marble, all well and good; but you are a liar if you claim she was already in the marble waiting to be chiselled out. Your chisel work created the statue: it’s not natural. Similar arguments apply to mathematical modelling in Maxwell’s theory!
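To make the field-strength-tensor talk concrete, here is a minimal sketch, assuming one common textbook convention (the contravariant tensor with metric signature +,−,−,−; the E and B values are arbitrary illustrative numbers of mine): the tensor is just an antisymmetric 4×4 repackaging of the six electric and magnetic field components, and its zero diagonal is why the trace vanishes, as noted above.

```python
import numpy as np

def field_strength_tensor(E, B, c=2.998e8):
    """Contravariant electromagnetic field tensor built from the E (V/m) and
    B (tesla) 3-vectors, in the (+,-,-,-) signature convention."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0,  -Ex/c, -Ey/c, -Ez/c],
        [Ex/c,  0.0,  -Bz,    By ],
        [Ey/c,  Bz,    0.0,  -Bx ],
        [Ez/c, -By,    Bx,    0.0]])

F = field_strength_tensor(E=(1.0, 2.0, 3.0), B=(0.1, 0.2, 0.3))
print("antisymmetric:", np.allclose(F, -F.T))   # True: only six independent components
print("trace:", np.trace(F))                    # 0.0, the zero diagonal noted above
```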

(On the subject of Einstein’s relativism worship as an alternative to religion, see the earlier post linked here. While many liars still try to “defend” relativism by claiming falsely that proponents of quantum field theory are racists out to gas Jews, the sad fact is precisely the opposite: Einstein tried to get a handful of Jews out of Germany, including Leopold Infeld, but his popular relativism helped Professor Cyril Joad attack Winston Churchill’s call for an arms race with the Nazis in the early 1930s, making it politically unacceptable to the nation, and thus weakening the hand of the already weak-brained Prime Minister at the Munich watershed in September 1938. E.g., Joad was standing at the back of one of Churchill’s popular lectures. Churchill made the point that we could deter Hitler by having an arms race. Joad then stood up and “innocently” asked Churchill “whether this advice was what he would tell the enemy”, triggering cheers, applause and media criticism of Churchill. It is certainly true that if everything were relative, with no absolute truth and no absolute distinction between good and evil, Churchill’s advice would be rubbish. This relativism, however, is not the case in morality, any more than in light velocity under a real FitzGerald-Lorentz contraction. Joad’s popular deceit led to millions of unnecessary deaths, as Kahn proved in 1960. Joad’s successors simply attacked Kahn while ignoring the facts, and then tried the same error of relativism during the Cold War with the Soviet Union. “The people suffering in the Soviet Union had a right to be free to be forced by the KGB to live under Soviet communism, just as we are free to have ‘a different system of government’, you see! Relatively speaking, neither side was right, and it was just ‘playground politics’ to have a Cold War instead of sensibly disarming to ensure peace and safety from the horrible risk of deterring invasions, you see!” After President Nixon’s Watergate scandal and failure in Vietnam, to deflect media attacks from Nixon, America pressed ahead with negotiations with the Soviet Union for SALT treaties just when the Soviet threat was reaching parity with the Western arms stockpile, and when Soviet civil defense was being transferred from civilian control to military control with vastly increased spending. If the arms race had been stopped, the Soviet Union might have survived instead of going effectively bankrupt when Reagan manipulated oil prices in the 1980s. In 1975, America signed the Helsinki Act, for the first time agreeing to the borders of the Soviet Union and its Warsaw Pact in Europe. This officially handed over those countries and people to Soviet control. After it was signed, the Chairman of the Soviet KGB (secret police), Yuri Andropov, stated in a letter to the Soviet Central Committee on 29 December 1975: “It is impossible at present to cease criminal prosecutions of those individuals who speak out against the Soviet system, since this would lead to an increase in especially dangerous state crimes and anti-social phenomena.” Einstein’s “peaceful co-existence” propaganda was a falsehood. How on earth can anyone surrender to such lying relativist evil?)

Fig. 1e: clever field strength tensor in SO(3,3): Lunsford, using 3+3d, obtains the Pauli-Lubanski vector for particle spin, hence obtaining a quantum phenomenon from classical electrodynamics! The quantum number of particle spin is crucial to classical physics because, as we shall see, it determines how the phase amplitudes of paths with different actions vary. The quantum path with least action in the path integral has the classical equations of motion. The other paths are excluded due to spin-related phase amplitude cancellation. It’s really that simple! Spin-1 bosons are transformed by Dirac-Anderson pair-production into pairs of spin-1/2 fermions (the charged radiations in pair-production are trapped in loops by gravitation, thus giving the black hole event horizon cross-sectional area for quantum gravity interactions, which is confirmed by empirically-checked quantum gravity calculations; this loop trapping allows the magnetic field of any portion of the loop to be cancelled by the oppositely-directed magnetic field from the opposite side of the loop, allowing stable spin without self-inductance problems; this is shown in my 2003 Electronics World paper). So, just as fermions combine at low temperatures into a Bose-Einstein condensate composed of Cooper pairs of electrons (or other fermions) that together behave like a frictionless, superconducting, low-viscosity boson, so too a spin-1 boson of radiation at any temperature is physically equivalent to a superposition of two spin-1/2 fermion-like components. (Higher temperatures cause random Brownian motion with enough energy to break up the delicate Cooper pair spin-coupling, thus preventing superconductivity, etc.)

Fermion amplitudes during scatter subtract, while boson amplitudes add together with a positive sign, because of the superposition of the magnetic field self-induction vectors that are the consequence of spinning charges! (This rule applies to the scatter of similar particles in similar spin states with one another, not to unpolarized beams.) It is related to the Pauli exclusion principle, because Pauli stipulated that no two fermions with the same set of quantum numbers can exist in the same location; in a sense, therefore, the Pauli exclusion principle (only an empirically confirmed principle, not a mechanism or a really deep explanation) causes fermions with originally similar sets of quantum numbers to change their states when they approach closely enough to interact. Bosons don’t obey Pauli’s exclusion principle, so they don’t need to change their states when they scatter! This problem is discussed – but its simple solution is ignored – by Feynman in the Lectures on Physics, v3, p. 4-3:

“We apologise for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two [boson and fermion interaction amplitude sign rules] must necessarily go together, but we have not been able to find a simple way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply [for particles with identical spin states: fermion scattering amplitudes subtract in scatter, but boson scattering amplitudes add with a positive sign], but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics [QFT]. This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.”

(But don’t be fooled. Just because Feynman said that doesn’t prove that peer-reviewers and journal editors are interested in the nurture and publication of deep explanations of long-standing problems. Instead, the situation is the exact opposite. The longer an anomaly or “issue” has existed, the better the textbook authors learn to live with it, to camouflage it behind a wallpaper of obfuscating symbolism, and to reinterpret it as a badge of pride: “nobody understands quantum mechanics”. This is spoken with the “nobody” snarled as a threat, accompanied by a motion of the hand towards the bulging holster, after you have just explained the answer! Progress comes from change, which is violently opposed by bigots. Niccolò Machiavelli, The Prince (1513), Chapter 6: “And let it be noted that there is no more delicate matter to take in hand, nor more dangerous to conduct, nor more doubtful in its success, than to set up as the leader in the introduction of changes. For he who innovates will have for his enemies all those who are well off under the existing order of things, and only lukewarm supporters in those who might be better off under the new.” The struggle for progress against the vested interests of the status quo is called politics, and the extension of politics against unreasonable opponents who won’t really listen, or who actively try to block progress, is, as Clausewitz defined it, war: “War is not merely a political act, but also a real political instrument, a continuation of political commerce, a carrying out of the same by other means.”)

Danny Ross Lunsford’s magnificent paper Gravitation and Electrodynamics over SO(3,3) overcame the hurdles required to unify gravitation and electrodynamics dynamically, making confirmed predictions (unlike the reducible gravitation-electrodynamics unification ideas of 4+1d Kaluza-Klein, Pauli, Einstein-Mayer, and Weyl; Pauli showed that “any generally covariant theory may be cast in Kaluza’s form”, hence the mindless and fruitless addition of 6 or 7 extra spatial dimensions in “not even wrong” string theory). But despite acceptance and publication in a peer-reviewed journal (International Journal of Theoretical Physics, vol. 43, no. 1, pp. 161-177), and despite supplying the required arXiv endorsement, his brilliant paper was mindlessly removed from the string-theory-dominated arXiv pre-print server (partly funded by the U.S. Government), thus denying its circulation via the accepted mainstream electronic route to physicists around the world. Lunsford’s great idea is very easy to summarize. There is not one time dimension, but three, making a total of three spatial and three time dimensions. In other words, spacetime is symmetric, with one timelike dimension per spatial dimension.

One way to grasp this is to note that the age of the universe can be deduced (since the universe has been found to have a flat overall geometry, i.e. dark energy offsets the gravitational curvature on large scales) from the redshift of the universe, which gives the Hubble parameter: the age of the universe is the reciprocal of that parameter. Since we build geometry on the basis of 90 degree angles between spatial dimensions, we have three orthogonal dimensions of space, SO(3). Measuring the Hubble constant in these 3 orthogonal dimensions, by pointing a telescope in the three 90-degree different directions and measuring the redshift-distance Hubble parameter in each of them, would give 3 separate ages for the universe, i.e. 3 time dimensions! Obviously, if we happen to see isotropic redshift, all 3 age measurements for the universe will be similar, and we will live under the delusion that there is only one time dimension, not three. But in reality, there may be a simple reason why the universe has an isotropic expansion rate in all directions, and thus why time appears to have only one discernible dimension: nature may be covering up two time dimensions by making all time dimensions appear similar to us observers. If this sounds esoteric, remember that unlike string theorists who compactify 6 or 7 unobservable extra spatial dimensions, creating a landscape of 10^500 metastable vacua, Lunsford’s SO(3,3) is the simplest possible and thus the best dynamical electromagnetic-gravitational unification according to Occam’s razor. Lunsford proves that the SO(3,3) unification of electrodynamics and gravitation eliminates the spurious “cosmological constant” from general relativity, so that the “dark energy” causing the acceleration must be spin-1 repulsive quantum gravity, just as we predicted in 1996 when predicting the small but later measured acceleration of the universe, a ~ Hc. (A prediction published via Electronics World, October 1996, p. 896, and also Science World, ISSN 1367-6172, February 1997, after the paper had been rejected – for “being inconsistent with superstring theory”, an as yet “unconfirmed speculation”, etc. – by the so-called “peer-reviewers” who censor predictive theories from publication at CQG, Nature, et al.; after the acceleration was confirmed, they simply gave no reason for rejection when repeated submissions were made! Unfortunately, just like those mainstream bigots, IC – despite claiming to champion progress, and despite my efforts to write about his work, which culminated in publications – has never in fifteen years agreed to host a single discussion of QFT on his website, nor in his numerous scientific publications, but instead, like the crank string theorists, has resorted to shouting the idea down and wasting time!)
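As a rough numerical illustration of the two quantities mentioned above – the age of a flat universe as the reciprocal of the Hubble parameter, and the claimed cosmological acceleration a ~ Hc – here is a minimal sketch (the round value of 70 km/s/Mpc for the Hubble parameter is my assumption, not a figure from the text):

```python
# Rough arithmetic: age scale 1/H for a flat universe, and the claimed a ~ Hc.
km_per_Mpc = 3.086e19          # kilometres in one megaparsec
H = 70.0 / km_per_Mpc          # Hubble parameter: 70 km/s/Mpc converted to 1/s
c = 2.998e8                    # speed of light, m/s

age_seconds = 1.0 / H
age_gyr = age_seconds / (3.156e7 * 1e9)   # seconds -> billions of years
acceleration = H * c                      # the claimed cosmological acceleration, m/s^2

print(f"1/H   ~ {age_gyr:.1f} billion years")
print(f"H * c ~ {acceleration:.1e} m/s^2")
```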

Lunsford finishes his paper: “It thus appears that the indeterminate aspect of the Einstein equations represented by the ordinary cosmological constant, is an artifact [in general relativity, not in nature!] of the decoupling of gravity and electromagnetism. … the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system. One striking feature of these equations that distinguishes them from Einstein’s equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour (see Weinberg, 1972 [S. Weinberg, Gravitation and Cosmology, Wiley, p. 7.1, p. 10.8, 1972]).”


Fig. 1f: Oleg D. Jefimenko and Richard P. Feynman (equation 28.3 in the Feynman Lectures on Physics, vol. 1) independently solved Maxwell’s equations in the early 1960s, extending Coulomb’s force law for static charges into an equation which allows for charge motion, so that quantum field theory effects can easily be seen. The Jefimenko-Feynman equation for electric field strength is a sum of three components, the first of which is Coulomb’s law (Gauss’s field divergence equation in the Maxwell equations), where force F = qE so that the electric field is E = q/(4πεR²), with ε the permittivity of free space. The Feynman-Jefimenko solution to Maxwell’s equations, for field directions along the line of the motion and acceleration of a charge, yields the simple summation of terms:

E (volts/metre) = [q/(4πε)] { 1/R² + v(cos z)/(cR²) + a(sin z)/(Rc²) }

The sine and cosine factors in the two motion-related terms are due to the fact that they depend on whether the motion of a charge is towards you or away from you (they come from vectors in the Feynman-Jefimenko solution; z is the angle between the direction of the motion of the charge and the direction of the observer). The first term in the curly brackets is the Coulomb law for static charges. The second term in the curly brackets, with a linear dependence on v/c, is simply the effect of the redshift (observer receding from the charge) or blue shift (observer approaching the charge) of the force field quanta, which depends on whether you are moving towards or away from the charge q; as the Casimir effect shows, field quanta or virtual photons do have physically significant wavelengths. The third term in the curly brackets is the effect of accelerations of charge, i.e. the real (on-shell) photon radio wave emission: this radio emission field strength drops off inversely with distance rather than as the inverse square of distance. (The time-dependence of E at distance R in the equation is evaluated at the retarded time t − R/c, which allows for the light-speed delay due to the field being composed of electromagnetic field quanta and waves which must traverse that distance from charge to observer before the field can be observed.)
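A minimal sketch evaluating the three terms of the along-the-line expression quoted above, just to display their different distance dependences (the charge, speed, acceleration, angle and distances are arbitrary illustrative values of mine): the Coulomb and velocity terms fall as 1/R², while the acceleration (radiation) term falls only as 1/R and therefore eventually dominates at sufficiently large distances.

```python
import math

# Terms of the simplified Feynman-Jefimenko field expression quoted above:
# E = [q/(4*pi*eps)] * ( 1/R^2 + v*cos(z)/(c*R^2) + a*sin(z)/(R*c^2) )
eps = 8.854e-12   # permittivity of free space, F/m
c = 2.998e8       # speed of light, m/s

def field_terms(q, R, v, a, z):
    k = q / (4.0 * math.pi * eps)
    coulomb   = k / R**2                          # static term, falls as 1/R^2
    velocity  = k * v * math.cos(z) / (c * R**2)  # red/blue-shift term, also 1/R^2
    radiation = k * a * math.sin(z) / (R * c**2)  # acceleration (radio) term, falls as 1/R
    return coulomb, velocity, radiation

for R in (1.0, 1e3, 1e6):   # distances in metres
    t = field_terms(q=1e-9, R=R, v=1e6, a=1e12, z=math.radians(45))
    print(f"R = {R:9.0f} m -> Coulomb {t[0]:.2e}, velocity {t[1]:.2e}, radiation {t[2]:.2e} V/m")
```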

This solution to Maxwell’s equations is important for the analysis of quantum field theory effects due to gauge bosons.

Physical mechanism of electric forces

Fig. 1a shows the Feynman diagrams used for the main force-causing interactions (there are many others too; for example the pions aren’t the only mesons involved in the strong nuclear force that operates between nucleons).

Fig. 2: mathematical concepts like plots of electric and magnetic field strengths, or even “field lines” inside photons, are not physically real, but they do constitute a useful tool, when shown on a graph, for establishing the physical distinctions and mechanisms for on-shell (real) and off-shell (virtual) radiations in quantum field theory. It should be remembered that Maxwell’s equations are an incomplete description of electromagnetism: the field potential Aμ is needed to account for the effects of the superimposed energy density in so-called “cancelled fields”, e.g. the Aharonov-Bohm effect, where the superimposed field energy loads the vacuum and thus affects quantum phenomena, just as the “cancelled” negative and positive fields from electrons and nuclei in a block of glass load the vacuum with energy density and thus slow down light.

This diagram is a revision of one from my 2003 Electronics World article, the main updates being due to a continuing study of IC’s experimental work on electromagnetic energy currents (which he sadly interpreted using an obsolete theory), and a forceful argument in an email from Guy Grantham, which stated that the only realistic way to make a simple exchange-radiation mechanism work for both attraction and repulsion is to have electrically charged field quanta (although he didn’t help in actually working out the details shown above!). The whole point is that off-shell, electrically charged, massless field quanta can’t propagate one-way in the vacuum, due to magnetic self-inductance! Therefore, they will only propagate if there is an ongoing exchange in both directions, such that the magnetic fields are cancelled out. This physical mechanism for transmission and cancellation is obviously at the root of the phase amplitude in quantum field theory, whereby spinning quanta can be supposed to take all possible routes through the vacuum, although the wildly varying phases at large actions cause the paths with large actions to cancel one another out, e.g. to be stopped by field effects like non-cancellation of magnetic self-inductance.

Fig. 3: physical basis of path integrals for the simple case of light reflection by a mirror. Classically the reflection law is that the angle of incidence equals the angle of reflection, which is of course the path that light travels in the least time or least “action” (action is defined as the integral of the lagrangian over time; for classical systems the lagrangian is the difference between the kinetic and potential energy of a particle at any given time). Light follows all paths, but most of them have randomly orientated “phases” and thus cancel out in the vector summation. Only for small actions do the phases add together coherently. Thus, light effectively occupies not a one-dimensional line as it propagates, but is spread out spatially due to the reinforcement of all those paths whose actions differ by amounts small compared to Planck’s constant, h = E/f (which has units of action and, when divided by 2π, gives ħ, the proper unit of quantum action in quantum field theory). Hence Feynman’s great statement: “Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)” – R. P. Feynman, QED (Penguin, 1990, page 54).
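Here is a minimal numerical sketch of that phase-cancellation argument (the mirror geometry, the 1 mm wavelength and the uniform sampling of candidate reflection points are all arbitrary illustrative choices of mine): summing one phasor per candidate path shows that a narrow window of paths around the classical equal-angles reflection point supplies a resultant comparable to the sum over all paths, while an equal-width window far from the classical point contributes almost nothing, because its arrows wind round and cancel.

```python
import numpy as np

# Toy "sum over paths" for light reflecting off a flat mirror: source at (0, a),
# detector at (d, b), candidate reflection points x along the mirror (y = 0).
a, b, d = 1.0, 1.0, 2.0          # geometry in metres (arbitrary example)
wavelength = 1e-3                # 1 mm wavelength, so the phase varies rapidly with x
k = 2.0 * np.pi / wavelength

x = np.linspace(-2.0, 4.0, 20001)
L = np.sqrt(a**2 + x**2) + np.sqrt(b**2 + (d - x)**2)   # path length via point x
phasors = np.exp(1j * k * L)                            # one "arrow" per candidate path

near = np.abs(x - d / 2) < 0.05   # window around the classical point x = d/2
far  = np.abs(x - 3.5) < 0.05     # equal-width window far from the classical point

print("|sum over all paths|            =", abs(phasors.sum()))
print("|sum over near-classical paths| =", abs(phasors[near].sum()))  # comparable to the full sum
print("|sum over far-window paths|     =", abs(phasors[far].sum()))   # tiny: those arrows cancel
```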

Fig. 4: a minor mathematical modification of Feynman’s path integral theory, involving replacing the imaginary (complex) phase amplitude with the real term in its expansion by Euler’s equation. This is needed to overcome Dr Chris Oakley’s mathematical problem with today’s sloppy (mathematically non-rigorous) textbook quantum field theory, namely Haag’s theorem, which proves that essential renormalization is impossible in a complex space like Fock space (an infinite-dimensional vector space) or Hilbert space (a complex inner product space, in which a complex number is associated with each pair of elements), because the isomorphism that maps the free-field Hilbert space on to the renormalized-field Hilbert space is ambiguous! (This theorem was proved by Hall and Wightman. The reason why the mainstream ignores Haag’s theorem is that Haag postulated that the whole interaction picture doesn’t exist, an interesting possibility which was investigated without great success by Dr Chris Oakley. Nobody seems to have grasped the obvious solution, namely that Hilbert space doesn’t exist and the complex phase factor is mathematically fictitious and in the real world must lose its complexity; this lack of sense is probably due to groupthink or mathematical respect for Euler, Hilbert, Schroedinger, Dirac, et al. – by analogy, should Newton have resisted suggesting his laws of motion purely out of respect for the dead genius Aristotle?) We must express the phase vectors as arrows in real space if we want quantum field theory to be renormalizable in a self-consistent, non-ambiguous manner. The path integral as shown above works just as well this way; it just eliminates the problem of Haag’s theorem. (Haag’s theorem is the argument behind Dr Oakley’s quotations from both Feynman and Dirac, who point out that because of renormalization, quantum field theory can’t be proved to be self-consistent. As Feynman wrote in his 1985 classic, QED, this lack of proof of self-consistency is embarrassing to any self-respecting mathematical physicist working in quantum field theory.) The diagram proves the equivalence of the resultant amplitudes when using e^(iS) and cos S for the phase factor in the path integral (sum over path histories). Basically, what we are suggesting is that we take Euler’s e^(iS) = cos S + i sin S and then drop the complex term i sin S, which cuts out the imaginary axis of the Argand diagram, leaving only real space!

Fig. 5: how simply replacing the complex e^(iS) phasor with its real component, cos S, replaces complex space with real space, averting the inability to prove self-consistency in quantum field theory due to Haag’s theorem. This allows the spatially distributed (truly transverse) on-shell and off-shell photons (unlike Maxwell’s idea of the photon) shown in Fig. 2 to be modelled with a physically real phase factor, with the phase denoting a real physical property of the photons taking different paths, e.g. the phase factor can denote differing angles of spin polarization or differing charge combinations, unlike the imaginary, unphysical phase factor. The reason why this isn’t done in textbooks is the fashionable groupthink argument that, historically, the origins of the textbook complex exponential phase factor are rooted in the solution to the time-dependent form of Schroedinger’s equation, and the time-dependent form of Schroedinger’s equation survives as Dirac’s equation because Dirac’s equation differs from Schroedinger’s only in its Hamiltonian (i.e., the spacetime-compatible Dirac “spinor” Hamiltonian). However, as Feynman explained in his Lectures on Physics, Schroedinger’s equation was just a guess that “came out of the mind of Schroedinger”! It’s not a physical fact, and it’s actually contrary to physical facts, because in quantum field theory it should take a discrete quantum interaction to cause a discrete wavefunction change, but Schroedinger’s equation intrinsically assumes a classical, continuously varying wavefunction! The error here is obvious. Why defend a guesswork derivation error which prevents renormalized quantum field theory from being rigorously, unambiguously formulated mathematically and proved self-consistent? Dr Thomas Love has explained that all of the problems of wavefunction collapse in quantum mechanics originate from this guess by Schroedinger: “The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
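Continuing the toy mirror sum from the earlier sketch, here is a minimal check of the uncontroversial half of the replacement being proposed: by Euler’s formula, summing cos S over the paths gives exactly the real part of the complex e^(iS) resultant. Whether that real projection is an adequate substitute for the full complex amplitude is the author’s argument, not something this little sketch can settle; all numerical choices are again my own.

```python
import numpy as np

# Same toy mirror geometry as before: compare the complex phasor resultant with
# the "real axis only" resultant that keeps cos(S) and drops i*sin(S).
a, b, d, wavelength = 1.0, 1.0, 2.0, 1e-3
k = 2.0 * np.pi / wavelength
x = np.linspace(-2.0, 4.0, 20001)
phase = k * (np.sqrt(a**2 + x**2) + np.sqrt(b**2 + (d - x)**2))

complex_resultant = np.exp(1j * phase).sum()   # textbook e^(iS)-style arrows
cosine_resultant  = np.cos(phase).sum()        # real-part-only arrows

# By Euler's formula e^(iS) = cos S + i sin S, the two printed values agree
# to rounding error: dropping i*sin S leaves the real projection untouched.
print("Re(complex resultant) =", complex_resultant.real)
print("cosine-only resultant =", cosine_resultant)
```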

Just as Bohr’s atom is taught in school physics, most mainstream general physicists with training in quantum mechanics are still trapped in the use of the “anything goes” false (non-relativistic) 1927-originating “first quantization” for quantum mechanics (where anything is possible because motion is described by an uncertainty principle instead of a quantized field mechanism for chaos on small scales). The physically correct replacement is called “second quantization” or “quantum field theory”, which was developed from 1929-48 by Dirac, Feynman and others.

The discoverer of the path integrals approach to quantum field theory, Nobel laureate Richard P. Feynman, has debunked the mainstream first-quantization uncertainty principle of quantum mechanics. Instead of anything being possible, the indeterminate electron motion in the atom is caused by second-quantization: the field quanta randomly interacting and deflecting the electron.

“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn’t give up!)

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]”, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger’s wave equation and Heisenberg’s matrix mechanics being the first two attempts, which both generate nonsense ‘interpretations’]. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the [particle’s] motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

– Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

There are other serious and well-known failures of first quantization aside from the nonrelativistic Hamiltonian time dependence:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

First quantization for QM (e.g. Schroedinger) quantizes the position and momentum of an electron (imposing the uncertainty principle on their product), rather than the Coulomb field, which is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy’s epicycles, where all planets were assumed to orbit Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals), because indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman’s path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization by Heisenberg’s and Schroedinger’s approaches is just a semi-classical, non-relativistic mathematical approximation useful for obtaining simple mathematical solutions for bound states like atoms:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.

This is fact, it isn’t my opinion or speculation: Professor David Bohm in 1952 proved that “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his messy treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that the quantum field interactions produce the “Brownian motion” of the electron, as described by Feynman’s path integrals for simple random field quanta interactions with the electron.

Quantum tunnelling is possible because electromagnetic fields are not classical, but are mediated by field quanta randomly exchanged between charges. For large charges and/or long times, the number of field quanta exchanged is so large that the result is similar to a steady classical field. But for small charges and small times, such as the scattering of charges in high energy physics, there is some small probability that no or few field quanta will happen to be exchanged in the time available, so the charge will be able to penetrate through the classical “Coulomb barrier”. If you quantize the Coulomb field, the electron’s motion is indeterministic in the atom because it’s randomly exchanging Coulomb field quanta which cause chaotic motion. This is second quantization as explained by Feynman in QED. This is not what is done in quantum mechanics, which is based on first quantization, i.e. treating the Coulomb field V classically, and falsely representing the chaotic motion of the electron by a wave-type equation. This is a physically false mathematical model since it omits the physical cause of the indeterminacy (although it gives convenient predictions, somewhat like Ptolemy’s accurate epicycle-based predictions of planetary positions):

Schroedinger error

Fig. 6: Schroedinger’s equation, based on quantizing the momentum p in the classical Hamiltonian (the sum of kinetic and potential energy for the particle), H. This is an example of ‘first quantization’, which is inaccurate and is also used in Heisenberg’s matrix mechanics. Correct quantization will instead quantize the Coulomb field potential energy, V, because the whole indeterminacy of the electron in the atom is physically caused by the chaos of the randomly timed individual interactions of the electron with the discrete Coulomb field quanta which bind the electron into orbit around the nucleus, as Feynman proved (see quotations below). The triangular symbol is the nabla (gradient) operator, which takes the gradient in each applicable spatial dimension of whatever it operates on; when squared it becomes the Laplacian operator (simply the sum of the second-order derivatives in all applicable spatial dimensions, for whatever it operates on). We illustrate the Schroedinger equation in just one spatial dimension, x, above, since the terms for other spatial dimensions are identical.
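For readers whose browsers do not render the figure, the one-dimensional equation it presumably illustrates can be written in standard notation as follows (with the momentum substitution p → −iħ∂/∂x already made, and the potential energy V left classical, exactly as criticised above):

```latex
i\hbar \frac{\partial \psi(x,t)}{\partial t}
  \;=\; \hat{H}\,\psi(x,t)
  \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} \;+\; V(x)\,\psi(x,t)
```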

Dirac’s quantum field theory is needed because textbook quantum mechanics is simply wrong: the Schroedinger equation has a second-order dependence on spatial distance but only a first-order dependence on time. In the real world, time and space are found to be on an equal footing, hence spacetime. There are deeper errors in textbook quantum mechanics: it ignores the quantization of the electromagnetic field and instead treats it classically, when the field quanta are the whole distinction between classical and quantum mechanics (the random motion of the electron orbiting the nucleus in the atom is caused by discrete field quanta interactions, as proved by Feynman).

Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman’s path integrals in his 1933 paper “The Lagrangian in Quantum Mechanics” published in Physikalische Zeitschrift der Sowjetunion where he states:

“Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue …

“Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. … The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. … the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form …”

Schroedinger’s time-dependent equation is: Hψ = iħ·∂ψ/∂t, which has the exponential solution:

ψ(t) = ψ(to) exp[−iH(t − to)/ħ].

This equation is accurate, because the error in Schroedinger’s equation comes only from the expression used for the Hamiltonian, H. This exponential law represents the time-dependent value of the wavefunction for any Hamiltonian and time. The squared modulus of this wavefunction gives the relative probability for a given Hamiltonian and time. Dirac took this amplitude, e^(−iHt/ħ), and derived the more fundamental lagrangian amplitude for action S, i.e. e^(iS/ħ). Feynman showed that summing this amplitude factor over all possible paths or interaction histories gives a result proportional to the total probability for a given interaction. This is the path integral.
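A minimal sketch of that exponential time-evolution law for a toy two-state system (the 2×2 Hermitian matrix standing in for H, the natural units and the sample times are all arbitrary choices of mine): ψ(t) = exp(−iHt/ħ)ψ(0) shuffles probability between the two states while the total, the sum of the squared moduli, stays exactly 1.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                  # natural units for the toy model
H = np.array([[1.0, 0.3],                   # an arbitrary Hermitian 2x2 "Hamiltonian"
              [0.3, 2.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)  # start entirely in state 1

for t in (0.0, 1.0, 5.0):
    U = expm(-1j * H * t / hbar)            # exp[-iH(t - t0)/hbar] with t0 = 0
    psi_t = U @ psi0
    probs = np.abs(psi_t)**2                # squared moduli = relative probabilities
    print(f"t = {t:3.1f}  probabilities = {probs.round(4)}  total = {probs.sum():.6f}")
```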

Schroedinger’s incorrect, non-relativistic hamiltonian before quantization (ignoring the inclusion of the Coulomb field potential energy, V, which is an added term) is: H = p²/(2m). Quantization is done using the substitution for momentum, p → −iħ∇, as in Fig. 6 above. The Coulomb field potential energy, V, remains classical in Schroedinger’s equation, instead of being quantized as it should be.
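As a minimal numerical illustration of that substitution (the grid, the wavenumber and the use of a simple central finite difference are all illustrative choices of mine): applying −iħ d/dx to a plane wave exp(ikx) hands back ħk times the same wave, i.e. the operator really does extract the momentum ħk.

```python
import numpy as np

hbar = 1.0545718e-34            # reduced Planck constant, J*s
k = 5.0e9                       # wavenumber of the test plane wave, 1/m
x = np.linspace(0.0, 1e-8, 200001)
psi = np.exp(1j * k * x)        # plane wave exp(ikx)

# The substitution p -> -i*hbar*d/dx, implemented as a central finite difference.
p_psi = -1j * hbar * np.gradient(psi, x)

# Away from the grid edges, p_psi should equal (hbar*k) * psi.
measured = (p_psi[1:-1] / psi[1:-1]).real.mean()
print("measured momentum eigenvalue:", measured)
print("expected hbar*k             :", hbar * k)
```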

The bogus ‘special relativity’ prediction to correct the expectation H = p²/(2m) is simply: H = [(mc²)² + p²c²]^(1/2), but that was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

−ħ²·∂²ψ/∂t² = [(mc²)² + p²c²]ψ.

While this is relativistically correct, it deals only with second-order variations of the wavefunction in time. Dirac’s equation simply makes the time-dependent Schroedinger equation (Hψ = iħ·∂ψ/∂t) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = α·pc + βmc²,

where p is the momentum operator. The coefficients α and β cannot be ordinary numbers: they are 4 × 4 matrices (16 components each), and the four-component wavefunction they act on is called the Dirac ‘spinor’. This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates). The justification for Dirac’s equation is both theoretical and experimental. Firstly, squaring it yields the Klein-Gordon equation for the second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc²)² + p²c²]^(1/2).

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions! The electron is spin-1/2, so it has only half the spin of a spin-1 particle, which means that the electron must rotate 720 degrees (not 360 degrees!) to undergo one complete revolution, like a Mobius strip (a strip of paper with a twist before the ends are glued together, so that there is only one surface, and you can draw a continuous line around that surface which is twice the length of the strip, i.e. you need 720 degrees of turning to return to the beginning!). Since the spin rate of the electron generates its intrinsic magnetic moment, it affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that ‘a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum’, a fact discovered by Dirac the day after he found his equation (see: A. Zee, Quantum Field Theory in a Nutshell, Princeton University Press, 2003, pp. 177-8). The other two solutions are obvious when considering the case of p = 0, for then E = ±mc². This equation shows the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc². The fact that E = ±mc² proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity. E = ±mc² allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle such as an electron and a positron).
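A minimal numerical sketch of why Dirac’s α and β cannot be ordinary numbers (using the standard Dirac-Pauli representation, which is one conventional choice among several; the momentum components are arbitrary numbers of mine, in natural units with c = 1): built from the 2×2 Pauli matrices, they satisfy the anticommutation rules, and H = α·pc + βmc² then squares to (p²c² + m²c⁴) times the identity matrix, which is exactly what forces the four solutions E = ±√(p²c² + m²c⁴).

```python
import numpy as np

# Standard (Dirac-Pauli) representation of the alpha_i and beta matrices,
# built from the 2x2 Pauli matrices. This is a conventional choice, not unique.
I2 = np.eye(2)
zero2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

alpha = [np.block([[zero2, s], [s, zero2]]) for s in sigma]
beta  = np.block([[I2, zero2], [zero2, -I2]])

# Anticommutation rules: alpha_i^2 = beta^2 = identity, and each alpha_i anticommutes with beta.
print(all(np.allclose(a @ a, np.eye(4)) for a in alpha), np.allclose(beta @ beta, np.eye(4)))
print(all(np.allclose(a @ beta + beta @ a, 0) for a in alpha))

# Natural units (c = 1, m = 1) so the entries are of order one and the check is meaningful.
c, m = 1.0, 1.0
p = np.array([0.3, -0.7, 1.2])                  # arbitrary momentum components
H = c * sum(pi * ai for pi, ai in zip(p, alpha)) + beta * m * c**2
# H squared equals (p^2 c^2 + m^2 c^4) times the 4x4 identity:
print(np.allclose(H @ H, (np.dot(p, p) * c**2 + (m * c**2)**2) * np.eye(4)))
```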

(To be continued when time allows. In the meanwhile, as linked on an earlier post, the introductory pages from my draft PDF paper can be found at https://nige.files.wordpress.com/2010/10/paper-draft-pages-1-5-2-oct-2010.pdf, although please note that there are some trivial mathematical symbol typos that are outside my control, e.g. the QuarkXpress software I used doesn’t contain any apparent way of writing Psi with an overbar, so I’ve had to underline Psi instead. I also gave some comments about errors in “electroweak symmetry” on Tommaso’s blog which are of relevance; other posts on this blog discuss particle masses and the quantum gravity mechanism.)



Above: a quantitative prediction of the cosmological acceleration of the universe in 1996, two years ahead of the discovery, was ignored! Pseudo-physicists at the so-called Classical and Quantum Gravity and also Physical Review Letters think anything fundamental that doesn’t agree with superstring liars must be wrong! Maybe the gravitons heat up or slow down planets? If so, this should also apply to the well-established off-shell Casimir radiation in the vacuum, which would have dragged on and slowed the planets, making them glow, slow down, and spiral into the sun millions of years ago. They didn’t. Contrary to string theorists who are ignorant of the basics of quantum field theory, field quanta are off-shell particles, which impart kinetic energy to accelerate charges and thus cause forces, without causing direct heating or drag, merely the Lorentz mass increase and the real FitzGerald-Lorentz contraction effect. Maybe rank-2 tensors prove spin-2 gravitons? Nope: rank-1 tensors are first-order field line gradients, and rank-2 tensors are second-order equations of motion. You can use either rank-1 or rank-2 equations for electromagnetism or gravity; it depends not on spin but purely on whether the theory is formulated as field lines (rank-1 equations) or accelerations in spacetime (rank-2).

Update (20 January 2011):

Sadly, superstring theorist Dr Lubos Motl, a Facebook friend who is 100% right about global warming hype, left-wing dangers and political correctness, has called for the famous superstring theorist Professor Greene at Columbia University to fire superstring critic Dr Peter Woit. Dr Woit, whose blog and paper on representation theory and quantum field theory since 2002 has led me to my current approach to the problem of fundamental interactions and unification, has replied robustly: “It seems that some unemployed guy in Pilsen who reads this blog thinks Brian Greene is my employer and is upset that Brian is not having me fired. For the record, my position as “Senior Lecturer” in the math department is not tenured, but I have a long-term contract and whether it gets renewed at some point in the distant future will have nothing to do with what Brian thinks about this blog, or with what I think about his books. Actually, my impression is that if most string theorists could choose one well-known blog dealing with string theory to shut down, it wouldn’t be this one …”

Elsewhere, Dr Woit writes: “The controversy over the multiverse is … the idea that string theory implies a multitude of completely separate universes with different physical laws. This is quite different than many-worlds, which is an interpretation of standard quantum mechanics, with one fixed set of physical laws.”

Dr Peter Woit, “Is the Multiverse Immoral?”: “One of the lessons of superstring theory unification is that if a wrong idea is promoted for enough years, it gets into the textbooks and becomes part of the conventional wisdom about how the world works. This process is now well underway with multiverse pseudo-science, as some theorists who should know better choose to heavily promote it, and others abdicate their responsibility to fight pseudo-science as it gains traction in their field.”

Friend says (January 29, 2011 at 8:00 pm): “I think that multiverses are a misinterpretation of the Path Integral used in QFT, etc. Instead of it predicting the actual existence of alternative paths/universes, it really predicts that it takes ALL possibilities to make just one universe. Thus it is impossible for multiverses to exist.”

{NC note: the “Friend” who wrote this comment, which goes on to another paragraph of abject speculation, is not me, although I have contributed comments under anonymity where I can’t otherwise contribute comments. The probability that “Friend” is Dr Woit writing an anonymous comment on his own blog, or a friend of his doing so, is not 0. However I don’t really know what Dr Woit thinks about Feynman’s 1985 book QED. My wild guess from reading Dr Woit’s 2002 arXiv paper on “Quantum Field Theory and Representation Theory” is that he hasn’t really spent time on Feynman’s 1985 book, doesn’t physically put too much stress on the “heuristic” picture of 2nd quantization/QFT as virtual particles following every path and interfering to cause “wavefunction” chaos; he works in a mathematics department and is fixed into a belief that sophisticated maths is good, only objecting to misrepresentations of mathematics for hype and funding by superstring theorists and others. “Friend”, in a later comment time-stamped 9:29pm, writes about another pet interest of Dr Woit’s: “how about the financial market;-) Unsatisfied with economic progress, they’ve invented extravagant financial theories of prime-lending rates and complicated security instruments. Funny, I’ve heard that some physicists have found work in the financial industry. Perhaps their theories work in some other universe.” Dr Baez replied to Friend: ‘That’s a nice analogy because it seems to have been caused by a desperate search for “high rates of return”.’}

John Baez says (January 29, 2011 at 8:45 pm): “Maybe a branch of science is ripe for infection by pseudoscience whenever it stops making enough progress to satisfy the people in that field: as a substitute for real progress, they’ll be tempted to turn to fake progress. One could expect this tendency to be proportional to the loftiness of the goals the field has set for itself… and to the difficulty its practitioners have in switching to nearby fields that are making more progress. But is this really true? …”

Thomas Larsson says (January 31, 2011 at 12:12 pm): “Medieval astronomers knew that the universe is a mechanical clockwork with at least 13 epicycles. The point is that Nature’s answers depend on how the question is posed. If you ask her about epicycles, she will answer with epicycles, even if that has little to do with the correct dynamics. And if you ask her about dark matter and dark energy, she will answer in terms of dark matter and energy. Perhaps this is the right framework. But perhaps it is not.”

Update (2 Feb 2011): IC’s February 2011 Electronics World article has now been published, linked here. For the first part (in the same excellent on-line user friendly format), see the link here. I disagree with IC’s simplistic theoretical interpretation of his experimentally valid findings, for the reasons given in the blog post on QFT help for electromagnetic experiments, linked here. IC suffers from the same problem Einstein had with relativism, and like Einstein he is “sticking to his guns”, despite gaining no real benefit from promoting a false interpretation of his results. If IC were genuinely famous for a discovery which turned out to be misinterpreted, he would have some unethical but at least “logical” reason to censor attempts to improve the theoretical analysis of his results. He has no such fame to lose. His self-promotion on the internet, through several “personal name” websites, exploits his unusual name to give high-ranking google results when his name is searched; but if you look at the webcounters on his sites, he gets little interest, because few people google his name in the first place. If I buy a domain called “zzzuuuyy” and it gets top-ranked results on google when searching for that “name”, it doesn’t mean anything, because nobody will search for it.

I regularly write that groupthink fashion and popularity are no measure of science. But that doesn’t mean that writing such useless and boring material is a measure of helpfulness! IC should aim to write in a useful way, which unfortunately for him means confronting the depressing fact that his research work is relevant to path integrals in quantum field theory. Remaining prejudiced against QFT is irrational. The only way to destroy classical Maxwellian lies about light is to do so using the best replacement theory, Feynman’s QFT. By the time IC ever tries that, it will be too late and he will have polluted the world with too much boring, vacuous drivel to be taken seriously. There are only so many times you can cry wolf and get away with it. The best thing you can say about IC is that, with friends like him, who needs enemies?

The politics of science

http://www.cgoakley.org/qft/corres.pdf (extract from page 8, demonstrating hegemony and possible corruption of power-politics in the most arcane of physical sciences, quantum field theory)

“It would be unrealistic to believe that dogmatism in science ended … flagrant examples [are] the Nazi doctrine of Aryan racial supremacy and the Communist credo of dialectic materialism … less publicized instances … are known in every discipline in small or large degree. Every area of knowledge at the present time has its ‘big names’ whose opinions in science … prevail over the views of lesser lights just because they are recognised … Dogmatism is a frequent concomitant of a systematized creed and a well-institutionalized priestly hierarchy … unified control with a discipline that is dedicated to its unquestioning support. This condition directly parallels the requirement for authoritative secular administration. … there be only one source of truth … the source be afforded enough power to enforce its dictates. … Heretical views may not be tolerated … because they threaten the economic and the ideological commitment …”

– Professor H. G. Barnett, Innovation: the Basis of Cultural Change, McGraw-Hill, New York, 1953, pages 69-70.

“… when innovations creep into their games and constant changes are made in them, the children cease to have a sure standard of what is right … There can be no worse evil … Change … is most dangerous …”

– Plato (429-347 B.C.), The Laws, Book VII, 360 B.C. (A general defense of authoritative despotism.)

“Fallible as we may be in our upbringing of children, we now cherish and defend their freedom to develop their own minds. It seems unnatural to us that these growing minds, in which the future of the human race lies, should be subjected to gross manipulation at the hands of propagandists. People who are inclined to say that we could be just as well off under the ****s should pause to reflect … For if you want children’s minds to develop, you must not poison them with important illusions. You must let their minds be free to observe and judge.”

– Dr Edward Glover, The Psychology of Fear and Courage, Penguin, 1940, pp. 125-6.

“A general State education is a mere contrivance for moulding people to be exactly like one another: and the mould in which it casts them is that which pleases the predominant power in the government, whether this be a monarch, a priesthood, an aristocracy, or the majority of the existing generation; in proportion as it is efficient and successful, it establishes a despotism over the mind …”

– John Stuart Mill, On Liberty, 1859.

“The very magnitude of the power over men’s minds that a highly centralised and government-dominated system of education places in the hands of the authorities ought to make one hesitant before accepting it too readily.”

– Professor F. A. Hayek, The Constitution of Liberty, Routledge and Kegan Paul, London, 1960, p. 379.

“The student … is accustomed to being told what he should believe, and to the arbitration of authority. … Ultimately, self-confidence requires a rational foundation. … we should face our tasks with confidence based upon a dispassionate appreciation of attested merits. It is something gained if we at least escape the domination of inhibiting ideas.”

– Professor Cecil Alec Mace, The Psychology of Study, 1963, p. 90.

“Children lose interest … because a natural interest in the world around them has been replaced by an unnatural acceptance of the soundness of certain views, the correctness of particular opinions and the validity of specific claims.”

– David Lewis, You can teach your child intelligence, Book Club Associates, London, 1982, p. 258.

“Scepticism is … directed against the view of the opposition and against minor ramifications of one’s own basic ideas, never against the basic ideas themselves. Attacking the basic ideas evokes taboo reactions … scientists only rarely solve their problems, they make lots of mistakes … one collects ‘facts’ and prejudices, one discusses the matter, and one finally votes. But while a democracy makes some effort to explain the process so that everyone can understand it, scientists either conceal it, or bend it … No scientist will admit that voting plays a role in his subject. Facts, logic, and methodology alone decide – this is what the fairy-tale tells us. … This is how scientists have deceived themselves and everyone else … It is the vote of everyone concerned that decides fundamental issues … and not the authority of big-shots hiding behind a non-existing methodology. … Science itself uses the method of ballot, discussion, vote, though without a clear grasp of its mechanism, and in a heavily biased way.”

– Professor Paul Feyerabend, Against Method, 1975, final chapter.

Here’s the distinction between the two “kinds of science”, quoted on my other blog, which campaigns against dangerous groupthink delusions used for propaganda:

“There are two distinct meanings to the word ‘science’. The first meaning is what physicists and mathematicians do. The second meaning is a magical art … What is of harm is the blind faith in an imposed system that is implied. ‘Science says’ has replaced ‘scripture tells us’ but with no more critical reflection on the one than on the other. … reason is no more understandable this year than prayer a thousand years ago. Little Billy may become a scientist as earlier he might have turned priest, and know the sacred texts … The chromed apparatus is blessed by distant authority, the water thrice-filtered for purity, and he wears the white antiseptic gown … But the masses still move by faith. … I have fear of what science says, not the science that is hard-won knowledge but that other science, the faith imposed on people by a self-elected administering priesthood. … In the hands of an unscrupulous and power-grasping priesthood, this efficient tool, just as earlier … has become an instrument of bondage. … A metaphysics that ushered in the Dark Ages is again flourishing. … Natural sciences turned from description to a ruminative scholarship concerned with authority. … On the superstition that reduction to number is the same as abstraction, it permits any arbitrary assemblage of data to be mined for relations that can then be named and reified in the same way as Fritz Mauthner once imagined that myths arise. … Our sales representatives, trained in your tribal taboos, will call on you shortly. You have no choice but to buy. For this is the new rationalism, the new messiah, the new Church, and the new Dark Ages come upon us.”

– Jerome Y. Lettvin, The Second Dark Ages, paper given at the UNESCO Symposium on “Culture and Science”, Paris, 6-10 September 1971 (in Robin Clarke, Notes for the Future, Thames and Hudson, London, 1975, pp. 141-50).

“Crimestop means the faculty of stopping short at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments … and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.”

– George Orwell, 1984

“Denialism” can be directed both ways in science. It’s just a vacuous piece of playground name-calling. What matters is the substance of the science, not how fashionable something is. Fashionability matters for getting funding, of course, and this is where Lord Acton’s “All power corrupts…” comes in. Scientists are no more ethical than anyone else.

Educational psychologist Lawrence Kohlberg (Lawrence Kohlberg, “Stage and Sequence: the Cognitive Development Approach to Socialization,” in D. A. Goslin, Ed., Handbook of Socialization Theory and Research, Rand-McNally, Co., Chicago, 1969, pp. 347-380) found that people go through six stages of ethical development:

(1) Conformity to rules and obedience to authority, to avoid punishment.
(2) Conformity to gain rewards.
(3) Conformity to avoid rejection.
(4) Conformity to avoid censure. (Chimps and baboons.)
(5) Arbitrariness in enforcing rules, for the common good.
(6) Conscious revision and replacement of unhelpful rules.

The same steps could be expected to apply to scientific ethical development. However, the disguised form of politics which exists in science, where decisions are taken behind closed doors and with no public discussion of evidence, stops at stage (4), the level of ethics that chimpanzees and baboons have been observed to achieve socially in the wild.

“… it is once for all clear from the very appearances that the earth is in the middle of the world and all weights move towards it. … Now some people, although they have nothing to oppose to these arguments, agree on something, as they think, more plausible. … the earth as turning on the same axis from west to east very nearly one revolution a day … never would a cloud be seen to move toward the east nor anything else that flew or was thrown into the air. For the earth would always outstrip them in its eastward motion, so that all other bodies would seem to be left behind and to move towards the west.”

– Claudius Ptolemy (100-178 AD), Almagest, Book I, part 7, That the Earth does not in any way move locally. Translated by R. C. Taliaferro, Great Books of the Western World, volume 16, 1952, pp. 11-12. (This proves that Aristarchus’s solar system was not simply ignored, but was falsely attacked by the mainstream using deluded “arguments” which were speculative and built on a basis of fluff or quicksand. Note also that when Bruno was burned at the stake in February 1600 for saying that the earth rotates, he had evidence for the solar system: the planets Venus and Mercury are always observed to be in the same hemisphere as the sun when seen from Earth; neither planet is ever seen in the opposite direction to the sun. This, Bruno argued, is because they orbit the Sun, not the Earth, and orbit closer to the sun than the earth does. That is the reason Bruno was burned. If he had simply been talking without evidence, he would have been ignored, which is the first line of defense of status quo against radical progress. The second line of defense is to ridicule progressives. The third is to burn them. Many politically biased “historians” and “scientists” incorrectly claim that the problem was simply a lack of evidence for the solar system proposed in 250 BC by Aristarchus of Samos. Not so. It was bias. Note in particular that Copernicus failed to get rid of epicycles; he simply applied epicycles to Aristarchus’s solar system. It was Kepler in 1609 who began making progress in removing epicycles, by replacing them with elliptical orbits which better fitted the motion of the planet Mars as observed carefully by Brahe.)

“Ptolemy and the Peripatetics think that nature must be thrown into confusion, and the whole structure and configuration of our globe destroyed by the Earth’s so rapid rotation … what structure of iron can be imagined so strong, so tough, that it would not be wrecked and shattered to pieces by such a mad and unimaginable velocity? …all atmosphere … rotate with the globe: the space above … is a vacuum; in passing through vacuum even the lightest bodies and those of least coherence are neither hindered nor broken up. Hence the entire terrestrial globe, with all its appurtenances, revolves placidly and meets no resistance.”

– Dr William Gilbert (1540-1603), On the Loadstone and Magnetic Bodies and on the Great Magnet the Earth, 1600, book 6, chapter 3. (Translation: P. Fleury Mottelay, John Wiley and Sons.) (This shows how the vacuous arguments attacking a new theory were dismissed. However, the bigoted would simply ignore or dismiss Gilbert’s refutation as being – ironically – “speculative”. This is still the political method used in “science” to censor out alternative ideas from being carefully studied, checked, and discussed. The key problem for status quo is maintaining hegemony, even hubris. It is not the number one priority of status quo to permit radical discussions of the foundations of mainstream theories.)

“It is indeed a most absurd fiction to explain natural phenomena by false causes.”

– Kepler, quoted by G. Abetti, History of Astronomy, London, 1974, p. 74.

“… the evidence of an ecological Kristallnacht is as clear as the sound of glass shattering in Berlin.”

– Al Gore, Earth in the Balance, 1992.

“A fascinating article by Mark Musser in American Thinker on one of the pioneers of apocalyptic global warming theory. Turns out – whoulda thunk? – that he was a eugenicist and a Nazi. … the quest for Lebensraum [habitat/living space] did not die with Hitler in his bunker in 1945 …”

– James Delingpole, Why do I call them Eco Nazis? Because they ARE Eco Nazis, Telegraph.

“After the war in the 1950’s, Guenther Schwab’s brand of environmentalism also played a fundamental role in the development of the green anti-nuclear movement in West Germany. The dropping of the atom bomb and the nuclear fallout of the Cold War helped to globalize the greens into an apocalyptic ‘peace’ movement with Guenther Schwab being one of its original spokesmen. The unprecedented destruction in Germany brought on by industrialized warfare never before seen in the history of the world only served to radicalize the German greens into an apocalyptic movement. Their hatred toward global capitalism became even more vitriolic precisely because the capitalists were now in charge of a dangerous nuclear arsenal that threatened the entire planet.”

– Mark Musser, “The Nazi Origins of Apocalyptic Global Warming Theory”, American Thinker, February 15, 2011.

Above: Dr Alexis Carrel, a medical Nobel Laureate and eugenicist, wrote the pro-Nazi “scientific” eugenics gas chamber-recommending bestseller Man the Unknown, the first book to popularize Hitler’s “final solution”. It was still going strong in 1948 when Penguin reprinted it, after simply removing text that praised Hitler (which can be found in the 1936 and 1939 editions). Page 291: “Those who have … misled the public in important matters [to the Nazis this meant the Jews], should be humanely and economically disposed of in small euthanasic institutions supplied with proper gases.”

Carrel also uses his alleged “authority” as a “scientific expert” conveyor of consensus to “pass off” lies as fact: on page 91, the lie that all feminists are ignorant of basic biology (analogous to the ignorant claim in the BBC Horizon: Science Under Attack propaganda that the only possible problem with GM food critics is that the “critics” don’t know plants contain genes); on page 121, the lie that telepathy pseudoscience is science; on page 205, the lie that Mussolini built up a “great nation” (Penguin/Pelican books in 1948 quietly edited out Carrel’s praise of Hitler’s Nazis from the 1936 German and 1939 American editions); on page 249, the lie that democracy is wrong (where he claims “The feeble-minded and the man of genius should not be equal before the law”, without realizing that he himself is feeble-minded for writing eugenics gas chamber evil); on page 269, the lie that cities are “inhuman”; on page 273, the lie that “Modern nations will save themselves by developing the strong, not by protecting the weak”; and on page 274, the lie that “Eugenics is indispensable for the perpetuation of the strong. A great race must propagate its best elements.”

The environment is always changing and the problems are always changing. So how on earth can anybody know today, ahead of time and even in principle, what is going to be “best” for the rest of eternity? It’s complete rubbish, composed of ignorant assertions that contradict the facts of evolution, which requires diversity in order to allow natural selection. If you choose to propagate an “element” that seems to be doing well today, you may find it lacks some vital gene necessary for protection against a new disease that appears tomorrow! In 1970, an analogous narrow-genetic-base plant eugenics failure was demonstrated in the USA: 70-90% of corn hybrids carried the T gene for male sterility, and these were highly vulnerable to the corn leaf blight fungus. So eugenics is a lie: reducing diversity increases uniformity, so all individuals share the same vulnerability, and this lack of diversity is a weakness, not a strength.

Eugenics is strongly connected in psychology with uniformity of thought: groupthink. If everyone follows the same leader (a dictator figure like Hitler or whoever), then if the leader is wrong, they all suffer. This is why, even outside of genetics, eugenics of thought is a bad idea. Diversity = freedom. Eugenics = lack of diversity. Ironically, since the time of Marx, socialism has been on the side of “thought eugenics”: trying to censor out alternative ideas, trying to make everyone think the same thoughts, trying to kill off any criticism of mainstream thinking. Trying to stamp out dissent is the problem with socialism which Churchill warned of in 1945 in the first postwar election campaign. He lost the election, in no small part because he went over the top in making his point crystal clear: comparing socialist groupthink to “Gestapo” state police. While Churchill is widely deplored for this, it should be noted that this is actually the greatest threat to freedom, since force was required to stamp out dissent and “subversion” within fascist and USSR dictatorships. Churchill’s political honesty and openness was inexpedient, but he gave the electorate a real warning of what groupthink could lead to. Attlee, the socialist Prime Minister who was elected in 1945, did secretly order the building of Britain’s first nuclear weapons against the USSR threat in January 1947 (less than a year after Churchill’s “iron curtain” speech), although he had been a pacifist opposed to rearmament during the 1930s when the Hitler regime was expanding its borders. Labour only forgot the lessons of the appeasers when the bomb’s effects were exaggerated so much that dictatorship seemed a smaller threat in the 1960s.

“The process of indoctrination is made even easier by the fact that a small success rate is sufficient. During World War II, Dr H. V. Dicks made an extensive study of the psychological and political characteristics of German prisoners. Only 11 percent were Nazi ‘fanatics’, all others having some or many reservations about Nazi doctrine. This percentage did not change with the fortunes of war, nor did it change much after the war ended. In 1948, 15 percent of Germans expressed an admiration for Goebbels; and even by 1955, 10 or 11 percent of Germans under twenty-five still admired Hitler. … Ten percent, coupled with powerful leaders, can bring about world war. War, it seems, is an activity fomented by the few for the detriment of the many.”

– Robin Clarke, Science of War and Peace, Jonathan Cape, London, 1971, page 220.

“We cannot go on trying to separate the responsible from the irresponsible, punish the guilty … We are not capable of judging men. However, the community must be protected against troublesome and dangerous elements. How can this be done? Certainly not by building larger and more comfortable prisons, just as real health will not be promoted by larger and more scientific hospitals. Criminality and insanity can be prevented only by a better knowledge of man, by eugenics… Those who have … misled the public in important matters [Jews in Nazi propaganda], should be humanely and economically disposed of in small euthanasic institutions supplied with proper gases.”

– bestseller by Alexis Carrel, 1912 medical Nobel Prize winning eugenicist and Nazi eugenics praiser and appeaser, Man the Unknown, 1939 edition.

So it seems that Nazi ideas like hot air and eugenics racism survived the destruction of WWII and were simply relabelled “eco-warriorism” and “political correctness”. The idea that evil fascist ideas died because Hitler was defeated is a big lie, according to Frederick Forsyth in the Daily Express, 11 February 2011, page 13:

FASCISM DIDN’T GO – IT FOUND ANOTHER NAME

MANY years ago … I spent hours with an elderly rabbi who had fought fascism all his life. One of the wisest men I had ever met, he had the rare gift of original thought.

He was adamant fascism was not a political creed but a deeply imbued standard of behaviour. In other words, if you treat your fellow man in a fascist way, that makes you one. And he insisted there were four pillars to this behaviour.

One was a total and blind commitment to the current political and moral orthodoxy. The second was the angry repudiation of any possibility of variant thought.

He concluded this blinkered bigotry was seldom the standard of the truly evil (these were right at the very top) but of the deeply stupid.

At number three he listed a relentless no-mercy persecution of those refusing or unable to conform to the imposed orthodoxy often stemming from the anonymous denunciation and presaged by the intimidating phrase: “We have received a complaint that you …”

The final criterion of fascist behaviour is the demand for total control of thought, speech, writing – even body language and gesture.

Looking round at the persecution, often at staggering public expense and on the basis of anonymous denunciation, of harmless Christians and others, I am struck by this. The rabbi’s four criteria of practising fascism are absolutely identical to the tenets of political correctness.

In the 1960s the BBC banned Frederick Forsyth’s broadcasts of eugenics holocaust facts from the Nigerian Civil War, where British Prime Minister Harold Wilson’s supply of arms to Federal Nigeria was resulting in genocide of the Biafrans, who had declared themselves an independent state. It is important to remember that Anne Frank was murdered by neglect in Bergen-Belsen concentration camp, dying not in a gas chamber but from typhus in March 1945: this murder by lack of humanity is the threat from the new Hitler Youth Movement, which is using a new Goebbels propaganda liar to saturate the media, diverting money from humanity to anti-humanity activities that will kill not by gas chambers but by this kind of evil deliberate neglect, as was the case for Anne Frank, for tens of millions under the Stalin communist regime genocide, and for many in other dictatorships after 1945:

“The common enemy of humanity is man. In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself.” – Club of Rome, The First Global Revolution (1993). (That report is available here, a site that also contains a very similar but less fashionable pseudoscientific groupthink delusion on eugenics.)

The error in the Club of Rome’s groupthink approach is the lie that the common enemy is humanity. This lie is the dictatorial approach taken by paranoid fascists, on both the right wing and the left wing, such as Stalin and Hitler. (Remember that the birthplace of fascism was not Hitler’s Germany, but Rome in October 1914, when the left-wing, ex-socialist Mussolini joined the new Revolutionary Fascio for International Action after World War I broke out.) The common enemy of humanity is not humanity but fanaticism, defined here by the immoral code: “the ends justify the means”. It is this fanaticism that is used to defend exaggerations and lies for political ends. Exaggerating and lying about weapons effects in the hope that it will be justified by ending war is also fanaticism. Weapons effects exaggerations both motivated aggression in 1914 and prevented early action against Nazi aggression in the mid-1930s.

It is a fact that six million Jews were murdered by the Nazis, and this gas chamber fact is not grounds to refuse to acknowledge the even bigger genocide by Stalin and other nutters, in what has been politely named “ethnic cleansing” by the BBC Hitler Youth, to make murder sound “clean”.

 

Irving L. Janis, Victims of Groupthink, Houghton Mifflin, Boston, 1972:

“I use the term “groupthink” … when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.”(p. 9)

“… the group’s discussions are limited … without a survey of the full range of alternatives.”(p. 10)

“The objective assessment of relevant information and the rethinking necessary for developing more differentiated concepts can emerge only out of the crucible of heated debate [to overcome inert prejudice/status quo], which is anathema to the members of a concurrence-seeking group.”(p.61)

“Eight main symptoms run through the case studies of historic fiascoes … an illusion of invulnerability … collective efforts to … discount warnings … an unquestioned belief in the group’s inherent morality … stereotyped views of enemy leaders … dissent is contrary to what is expected of all loyal members … self-censorship of … doubts and counterarguments … a shared illusion of unanimity … (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent)… the emergence of … members who protect the group from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions.”(pp.197-8)

“… other members are not exposed to information that might challenge their self-confidence.”(p.206)

Don’t be fooled: we’re not arguing that censorship is wrong or that individualism is right, but that subjective censorship is wrong (we need more objective censorship, i.e. fewer authority-based dismissals of hard evidence, and more technical fact-driven debate rather than debates driven by the mere opinions of famous bigots or personalities who act as “expert” authorities asserting lies), and that socialism in science can only work if heated debates are allowed to break down Hitler-type eugenics pseudoscience fantasies. The present version of socialism used in science protects bigots by (1) ignoring polite statements of facts and (2) censoring more assertive statements of facts as being “rude”, precisely the Nazi eugenics “hard words make wounds” censorship technique. (The idea of “penetrating” the existing regime in disguise to force revolutionary change is like saying Churchill in 1935 should have volunteered to serve as a Nazi concentration camp guard in order to try to destroy eugenics pseudoscience from within. Bad, rather than good, comes from collaboration with bigots.)

The only way to make real progress is not to assert individualism or to ban censorship, but to ban bigotry within socialism and to enforce fact-based rather than dogma-based censorship. We need more censorship in science of the fact-based type, to get rid of existing dogmatic eugenics-type pseudosciences (the incremental progress side of science, which fills the journals up with politically-correct trivia, such as adding more and more epicycles to mainstream pseudosciences). We need more socialism in science of the unbigoted type, with heated debates rather than dictatorship by Stalin-like bigots (who claim they are morally and ethically “maintaining nice politeness” in debates by sending “rude” critics into exile or worse).


“In Fiscal Year 2010, NASA spent over 7.5% – over a billion dollars – of its budget on studying global warming/climate change. The bulk of the funds NASA received in the stimulus went toward climate change studies. Excessive growth of climate change research has not been limited to NASA. Overall, the government spent over $8.7 billion across 16 Agencies and Departments throughout the federal government on these efforts in FY 2010 alone.” – Reps Posey, Adams and Bishop Join Colleagues in Calling on House Leaders to Reprioritize NASA for Human Space Flight Missions, Drop Climate Change, U.S. House of Representatives, Tuesday, February 8, 2011.

“… wisdom itself cannot flourish, and even the truth not be established, without the give and take of debate and criticism. The facts, the relevant facts … are fundamental to an understanding of the issue of policy.” – J. Robert Oppenheimer, 1950

“We can’t notice and know everything: the cognitive limits of our brain simply won’t let us. That means we have to filter or edit what we take in. So what we choose to let through and to leave out is crucial. We mostly admit the information that makes us feel great about ourselves, while conveniently filtering whatever unsettles our fragile egos and most vital beliefs. It’s a truism that love is blind; what’s less obvious is just how much evidence it can ignore. Ideology powerfully masks what, to the uncaptivated mind, is obvious, dangerous or absurd … Fear of conflict, fear of change keeps us that way. An unconscious (and much denied) impulse to obey and conform shields us from confrontation … It oils the wheels of social intercourse … Perhaps it is the sheer utility of wilful blindness that sucks us into the habit in the first place. It seems innocuous and feels efficient. … Ideologues, refusing to see data and events that challenge their theories, doom themselves to irrelevance. Fraudsters succeed because they rely on our desire to blind ourselves to the questions that would expose their schemes.”

– Margaret Heffernan, Wilful Blindness, Simon and Schuster, 2011, Introduction.


The best documented analogy to the climate conspiracy, in terms of NASA’s life-costing groupthink, is NASA’s “accidental” 1986 Challenger space shuttle explosion. It blew up because the booster rocket’s rubber seal rings become brittle and leak at icy temperatures (the shuttle boosters were reusable, so were composed of a series of sections). (Physics professor Feynman proved this using a glass of iced water on TV, after a tip-off from military expert General Donald Kutyna, not from NASA staff who knew but covered up the problem, or from the “expert” astronaut Armstrong, who was vice-chair of the Presidential Commission!)

An engineer from the Thiokol Company, a Mr. [Allan] McDonald, wanted to tell us something. He had come to our meeting on his own, uninvited. Mr. McDonald reported that the Thiokol engineers had come to the conclusion that low temperatures had something to do with the seals problem, and they were very, very worried about it. On the night before the launch, during the flight readiness review, they told NASA the shuttle shouldn’t fly if the temperature was below 53 degrees — the previous lowest temperature — and on that morning it was 29.

Mr. McDonald said NASA was “appalled” by that statement. The man in charge of the meeting, a Mr. [Lawrence] Mulloy [manager of the NASA booster rocket program], argued that the evidence was “incomplete” — some flights with erosion and blowby had occurred at higher than 53 degrees — so Thiokol should reconsider its opposition to flying.

Thiokol reversed itself, but McDonald refused to go along, saying, “If something goes wrong with this flight, I wouldn’t want to stand up in front of a board of inquiry and say that I went ahead and told them to go ahead and fly this thing outside what it was qualified to.”

That was so astonishing that Mr. Rogers had to ask, “Did I understand you correctly, that you said…,” and he repeated the story. And McDonald says, “Yes, sir.”

The whole commission was shocked, because this was the first time any of us had heard this story: not only was there a failure in the seals, but there may have been a failure in management, too.

– Professor Richard P. Feynman, “What Do You Care What Other People Think?”, Bantam Books, London, pp. 101-104

… it struck me that there were several fishinesses associated with the big cheeses at NASA.

Every time we talked to higher level managers, they kept saying they didn’t know anything about the problems below them. … this kind of situation was new to me: either the guys at the top didn’t know, in which case they should have known, or they did know, in which case they’re lying to us.

When we learned that Mr. Mulloy [Lawrence Mulloy, manager of the NASA booster rocket program] had put pressure on Thiokol to launch, we heard time after time that the next level up at NASA knew nothing about it. You’d think Mr. [Lawrence] Mulloy would have notified a higher-up during this big discussion, saying something like, “There’s a question as to whether we should fly tomorrow morning, and there’s been some objection by the Thiokol engineers, but we’ve decided to fly anyway — what do you think?” But instead, Mulloy said something like, “All the questions have been resolved.” There seemed to be some reason why guys at the lower level didn’t bring problems up to the next level.

– Richard P. Feynman, “What Do You Care What Other People Think?”, Bantam Books, London, pp. 158-159.

Lawrence Mulloy, manager of the NASA booster rocket program since 1982, was the culprit. Howard Berkes was the first to discover the way NASA groupthink and obfuscation of errors and uncertainties deliberately deceived everyone and caused the disaster:

“NASA’s Lawrence Mulloy reacted to the resistance this way: “My God, Thiokol. When do you want me to launch? Next April?” That turned the tide of the discussion. The Thiokol managers pressed their engineers to reverse themselves. When that failed, the managers simply overruled them, and submitted their own launch recommendation.

“The next morning, two of the engineers told us, they fully expected Challenger to blow up at launch ignition. One of the engineers silently prayed during the countdown. At liftoff, with no explosion, he began to wonder whether he’d been wrong. The relief didn’t last. Seventy-three seconds into the flight, as the spacecraft began an expected roll, the forces on the solid rocket motors began to pull one of them apart. The cold and stiff o-rings at one joint didn’t flex and seal as designed. Searing hot gasses escaped. In an instant, the sky was filled with smoke and debris. The engineers were filled with grief. And as one later told Zwerdling, “…we all knew exactly what happened.””

The important thing is that the NASA manager Lawrence Mulloy felt pressurised to launch the space shuttle and not to worry people higher up with the evidence that the risk of a disaster was present, despite having been told that it was too cold to launch. Part of the problem here is the 1 in 100,000 risk factor NASA was using (which Feynman condemns), which wasn’t a function of temperature. Mulloy probably had no clear idea of precisely what the numerical risk was. It was all intuitive judgement, wishy-washy emotional thinking, not a hard estimate that the shuttle would have a 30% chance of exploding if launched in the cold conditions that morning. In addition, the whole media TV coverage showmanship by NASA was such that the “tragedy” would be a DELAY to the launch schedule for health and safety reasons, with Mulloy taking the blame for the disappointment on children’s faces if the shuttle did not launch on schedule and they had to wait months for warmer weather. There is always a set of competing “risks” in the real world, which is why errors occur, especially when there are no hard numbers on exactly what the risks of disaster are.
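
To make the “no hard numbers” point concrete, here is a minimal sketch (Python, with made-up coefficients chosen purely for illustration and not fitted to any real O-ring flight data) of the difference between quoting one fixed risk figure and quoting a risk that depends on launch temperature, which is what the Thiokol engineers were effectively warning about:

import math

# Toy logistic model of launch failure risk as a function of temperature.
# The parameters a and b are hypothetical placeholders, NOT fitted to the
# actual O-ring data; they only illustrate the shape of such a curve.
def failure_probability(temp_f, a=5.0, b=-0.12):
    return 1.0 / (1.0 + math.exp(-(a + b * temp_f)))

# A mild day, the previous coldest launch (53 F), and Challenger's morning (29 F).
for t in (70, 53, 29):
    print(f"{t} F: illustrative failure probability ~ {failure_probability(t):.2f}")

Even a crude curve like this makes the qualitative warning quantitative: a risk quoted for a 70 F launch says nothing about the risk at 29 F.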

This tragic situation of “conflicting interests” and poor judgement is very much analogous to the peer-pressure “groupthink” backed appeasement of aggressors by pacifists in the 1930s that Herman Kahn blamed for causing WWII, angering the publisher and book reviewer of Scientific American.

“I remember very vividly, a few months after the famous pacifist resolution at the Oxford Union visiting Germany and having a talk with a prominent leader of the young Nazis. He was asking about this pacifist motion and I tried to explain it to him. There was an ugly gleam in his eye when he said, ‘The fact is that you English are soft’. Then I realized that the world enemies of peace might be the pacifists.”

– Liberal MP Robert Bernays, House of Commons, 20 July, 1934.

Nobody wanted a criminal case against those idiots, and there are “revisionist” historians like David Irving who try to defend the neo-Nazi fellow travellers who ignored the risk of “peaceful” gas chambers in their endless bleating exaggerations of aerial bombing effectiveness for “peaceful” disarmament propaganda. Groupthink continues to this day because there is no accountability and responsibility. These people behave like Nazis because there is no risk of ever being personally imprisoned. Even if Sir Paul Nurse or the BBC ever admit to getting it wrong, so what? If the defendant can’t pay, you lose your legal costs in suing even if you do prove criminal neglect (Sir Paul will say he “made an error”, then walk away laughing with nothing lost, and lots of green eco-fascists in the Guardian will write what a great big guy he is to admit to being wrong, as if that repays defrauded taxpayers). Nobody will ever be able to claim back the immense sums of money being squandered on carbon credits by suing Sir Paul Nurse, the BBC, or anybody else behind the lies. But forget money, and concentrate on human lives. Who is going to resurrect the dead when Sir Paul Nurse’s or “Mein Fuhrer” the BBC Director General’s pension funds have been pumped up by the bubble of green carbon credit trading capitalist liars? Who is going to do something about the lives lost due to this pseudoscientific fashionable groupthink horseshit, Hitler’s final solution?

“I’m against ignorance, I’m against sloppy, emotional thinking. I’m against fashionable thinking. I am against the whole cliché of the moment.”

– Herman Kahn (quoted in Paul Dragos Aligica and Kenneth R. Weinstein, Editors, “The Essential Herman Kahn: In Defense of Thinking,” Lexington Books, 2009, p. 271).

Above: 1986 Challenger space shuttle explosion cover-up exposed by Feynman (not Armstrong, the vice-chairman of the Rogers-chaired Presidential Commission!) on TV. Feynman used a cheap cup of iced water, but found that there was a crazy management groupthink culture of lying at NASA: the experts knew that the rubber O-rings sealing the boosters lost ductility and became brittle at low temperatures, but they dared not say so for fear of angering the TV crews and VIPs assembled to watch in the freezing January conditions. So they obfuscated their knowledge and blew the shuttle up, “accidentally” killing everyone on board, rather than have the GUTS to say it was dangerous to launch. The great Armstrong failed to spot the simple cause of the disaster, despite being supposedly an expert on rockets because he knew how to walk on the Moon! Feynman had help from a real expert on rockets, General Kutyna, the military expert who determined the cause of the Titan II missile silo explosion at Damascus, Arkansas in 1980. (In that case, a technician had dropped a wrench socket down the silo, which set off a chain of unpleasant events resulting in the detonation of the 100 tons of propellant, with the missile’s massive 9 megaton W-53 thermonuclear warhead being blown straight out of the silo.)

Above: my YouTube video exposing the always-known lies of climate change propaganda directed against Telegraph journalist James Delingpole (who first exposed Dr Phil Jones’ “hide the decline” Nature peer-reviewed journal “climategate”) by the unelected BBC quango, which, just like the unelected British Government Civil Service, is really in control of the country, keeping the politicians in the dark or, more often than not, actually in receipt of lies masquerading as facts. These quangos have always made Britain a danger to the world. Either they go, or Britain is finished. The Prime Minister David Cameron is a bigot (albeit not as dangerous to the economy as the previous prime minister, Brown) who refuses to listen to criticisms of climate propaganda or anything else. He must be made to listen, or removed one way or another. The problem is that decent people in Britain are not very good at the lying needed to climb the greasy poles of political expediency, so the whole of British politics is almost as corrupt as its lying “science” propaganda (and that is, believe me, a very big insult to the politicians). This is not a joke; it is hard fact.

Yesterday the BBC sent me a deceptive fact-dodging “response” to my complaint about Horizon: Science Under Attack. I’ve published their response as a PDF here: https://nige.files.wordpress.com/2011/02/bbc-horizon-science-under-attack-complaint-response.pdf

Dear Mr Cook

Reference CAS-561749

Thanks for your correspondence regarding ‘Horizon: Science Under Attack’ , broadcast on BBC Two on 24 January.

I understand that you feel this edition of the programme was biased in favour of the theory of man-made climate change.

Your concerns were raised with the producer of the programme – Emma Jay who replies as follows:

“I’m sorry you felt the film was biased. … In the course of the programme Paul Nurse argued that scientists need to focus on the science and keep politics and ideologies out of the way; that scientists need to be more open in the way they do their science, and be more willing to communicate the uncertainties that are sometimes inherent in their work.

“A substantial part of the film did use the example of climate science to look at this dynamic between science and society, and at the question of public trust. But I don’t accept that the film was biased in its representation of the state of the scientific debate about anthropogenic global warming. The overwhelming majority of scientists and scientific institutions accept the link; in scientific terms it is not controversial and the programme’s approach reflected that.

“I fully acknowledge that, even now, not everyone accepts this view and that there is still a continuing political debate. That is why the programme included Professor Fred Singer’s views on the primacy of solar activity and James Delingpole’s views on ‘Climategate’, the perils of scientific consensus, and how peer review in science was being challenged by peer-to-peer review. These were significant parts of the film.”

… Kind Regards

Mark Roberts
BBC Complaints

Emma made no comment about the lie I specifically raised in my complaint, namely the fact that the ONE reliable indicator of the rate of climate change (aside from cloud-cover-affected tree ring temperature “proxies” and weather stations downwind of direct heat sources like growing cities) is the rate of sea level rise: 120 metres over the past 18,000 years = 0.67 cm/year mean, compared to much smaller rates of rise at all times over the past century. Emma doesn’t answer my scientific complaint. It’s absolutely sickening propaganda, just like Dr Goebbels claiming that the inclusion of edited film of Jews in his racist propaganda films made the Nazis unbiased. If any decent politician ever censors this lying left-wing quango (Cameron won’t), people like her will be to blame.
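
As a quick arithmetic check of the figures quoted in the complaint, a short Python sketch (the past-century comparison value below is an assumed, commonly quoted tide-gauge figure, not taken from the programme):

# Mean post-glacial sea level rise rate from the figures quoted above.
rise_m = 120.0        # metres over the past 18,000 years
years = 18_000.0
mean_rate_cm_per_year = rise_m * 100.0 / years
print(f"Mean post-glacial rate: {mean_rate_cm_per_year:.2f} cm/year")  # ~0.67

# Assumed 20th-century rate for comparison (commonly quoted tide-gauge value).
assumed_recent_rate = 0.2  # cm/year
print(f"Ratio: {mean_rate_cm_per_year / assumed_recent_rate:.1f} times higher")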

See Darrell Huff, “How to Lie with Statistics”, 1954. You plot graphs and find that the number of telegraph poles is rising and the infectious disease rate is rising, and hence you have “proof” that telegraph poles are causing disease. There is no evidence whatsoever that CO2 causes temperature rises. Correlation does not imply causation. There are lies, damned lies, and statistics.
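
Huff’s telegraph-pole point can be demonstrated in a few lines. The Python sketch below uses purely synthetic, made-up numbers: two series that each merely trend upward over time produce a correlation coefficient close to 1, with no causal link whatsoever:

import random

random.seed(1)
years = range(1900, 1960)
# Two independently generated, upward-trending synthetic series.
telegraph_poles = [10_000 + 500 * (y - 1900) + random.gauss(0, 2_000) for y in years]
disease_cases = [1_000 + 40 * (y - 1900) + random.gauss(0, 300) for y in years]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"Correlation: {pearson(telegraph_poles, disease_cases):.2f}")  # near 1, yet no causation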

1. H2O due to water evaporation is a far bigger greenhouse gas than CO2, and the annual emission of CO2 from “unnatural” sources is only

2. Cloud cover presently covers 62% of the surface area of the globe, and the fraction increases as a function of injected CO2, caused by evaporation of water from the oceans and lakes that cover 70% of the area of the globe. This additional “global dimming” causes a negative feedback cancelling out the temperature rise from CO2 as the oceans warm up (there is a slight time lag due to the high specific heat capacity of water and the wintertime mixing of warm thermocline waters with deeper water during storms). A toy numerical sketch of this claimed cancellation is given below, after point 6.

3. Cloud cover has an average altitude of 2 km, so the lower-altitude air and the surface below the cloud are unable to benefit from CO2, which only absorbs infrared (the infrared energy is absorbed or reflected near the top of the clouds, where the heated air rises and is unable to transfer warmth efficiently to lower altitudes due to the buoyancy of warm air).

4. “… there is … a very grave danger for science in so close an association with the State … it may lead to dogmatism in science and to the suppression of opinions which run counter to official theories.”

– J. B. S. Haldane (1892–1964), The Causes of Evolution, Longmans, London, 1932, p. 225.

5. Beware of NASA’s $1,000,000,000 annual budget to fight Delingpole with big lies:

To make a name for learning
When other ways are barred
Take something very easy
And make it very hard

“[Hitler’s] primary rules were: never allow the public to cool off; never admit a fault or wrong; never concede that there may be some good in your enemy; never leave room for alternatives; never accept blame; concentrate on one enemy at a time and blame him for everything that goes wrong; people will believe a big lie sooner than a little one; and if you repeat it frequently enough people will sooner or later believe it.” – http://en.wikipedia.org/wiki/Big_Lie

6. Dr Ferenc Miskolczi points out that NOAA data on atmospheric H2O over 61 years from 1948-2008 show a fall in humidity. It seems that this implies an increase in cloud cover so that H2O varies in such a way as to cancel out the effect of CO2 on temperature.

Notice that Rob van Dorland and Piers M. Forster wrote a paper “Rebuttal of Miskolczi’s alternative greenhouse theory” (hosted at realclimate.org) which falsely states on page 4:

“… there is ample observational evidence that the most important greenhouse gases, water vapour and carbon dioxide have increased in the last four decades, meaning that the total infrared optical depth is indeed increasing. Finally, direct satellite observations of the outgoing infrared spectrum show that the greenhouse effect has been enhanced over this period.”

This contradicts the NOAA data Dr Miskolczi gives. I don’t have any interest in Miskolczi’s idealized calculations, which are irrelevant to the real world (regardless of whether they are correct or not); I am interested only in the actual data from observations and the mechanism he proposed. All of Miskolczi’s critics ignore the data and the cloud cover mechanism and focus on showing that his model is imperfect or beyond their understanding (by which they try to imply he is wrong, rather than admit they haven’t made the effort to understand the details!), which is obvious anyway, since it is just an idealized model.

As Dr Miklos Zagoni shows in his paper (CO2 cannot cause any more “global warming”: Dr Ferenc Miskolczi’s saturated greenhouse effect theory, SPPI Original paper, December 18, 2009), when you ignore Dr Miskolczi’s idealized calculations, and simply look at the data he unearthed from NOAA, you see the evidence for the cloud cover feedback mechanism.
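
Purely as bookkeeping, the cancellation claim described above (and referred to in point 2) can be sketched numerically. The toy Python sketch below illustrates only the arithmetic of the argument as stated in the quoted papers, namely that if the total effective greenhouse content is held constant, a rise in the CO2 contribution must be offset by a fall in the H2O/cloud contribution; the contribution shares are rough, commonly quoted values assumed here for illustration, and this is not a climate model:

# Toy bookkeeping of the "saturated greenhouse" cancellation argument.
co2_share = 0.20      # assumed fraction of the greenhouse effect from CO2
h2o_share = 0.75      # assumed fraction from water vapour and cloud
co2_increase = 0.30   # an illustrative 30% rise in the CO2 contribution

added_by_co2 = co2_share * co2_increase
required_h2o_drop = added_by_co2 / h2o_share  # fractional fall in H2O contribution
print(f"H2O contribution would need to fall by ~{required_h2o_drop:.1%} "
      f"to hold the total constant")  # ~8% for these assumed shares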

The variations of CO2 during Earth’s geological record were all caused by rapid temperature changes brought about by means other than CO2 variations, such as cycles in the Earth’s orbit or geological processes that created large mountain ranges. These variations produce the climate change, which in turn caused an imbalance between CO2 absorbers and emitters. Rainforests (CO2 sinks) can be killed off by rapid temperature falls faster than CO2-emitting animals, which can compensate by migrating. A drop in global temperature therefore caused an increase in the atmospheric CO2 level indirectly, because rainforests cannot migrate as quickly as animals and are more likely to be killed. An increase in global temperature had the opposite effect, allowing dense rainforests to proliferate faster than the rate of increase of CO2-emitting animals. Therefore, the fossil record correlation between CO2 and temperature has nothing to do with a direct mechanism for CO2 to affect temperature.

“Since the Earth’s atmosphere is not lacking in greenhouse gases, if the system could have increased its surface temperature it would have done so long before our emissions. It need not have waited for us to add CO2: another greenhouse gas, H2O, was already to hand in practically unlimited reservoirs in the oceans. … The Earth’s atmosphere maintains a constant effective greenhouse-gas content [although the percentage contributions to it from different greenhouse gases can vary greatly] and a constant, maximized, “saturated” greenhouse effect that cannot be increased further by CO2 emissions (or by any other emissions, for that matter). … During the 61-year period, in correspondence with the rise in CO2 concentration, the global average absolute humidity diminished about 1 per cent. This decrease in absolute humidity has exactly countered all of the warming effect that our CO2 emissions have had since 1948. … a hypothetical doubling of the carbon dioxide concentration in the air would cause a 3% decrease in the absolute humidity, keeping the total effective atmospheric greenhouse gas content constant, so that the greenhouse effect would merely continue to fluctuate around its equilibrium value. Therefore, a doubling of CO2 concentration would cause no net “global warming” at all.”

– Dr Miklos Zagoni, CO2 cannot cause any more “global warming”: Dr Ferenc Miskolczi’s saturated greenhouse effect theory, SPPI Original paper, December 18, 2009, page 4.

1. I’m quoting qualified climate scientists Dr Ferenc Miskolczi and Dr Miklos Zagoni. I’m a qualified technical author (not a PhD yet, but the PhD is just a badge of groupthink consensus outside the specialism concerned anyway), who has read the “peer”-reviewed crap.

2. Hitler’s “big lie” propaganda trick is ESSENTIAL to understanding climate groupthink.

3. Lawyer Godwin’s law in his own words: “When you get these glib comparisons you lose perspective on what made the Nazis and the Holocaust particularly terrible.”
(Source: http://www.bbc.co.uk/news/10618638 .)

I think that says it all. However, it appears Godwin takes his bible too seriously and believes in the myth that the Jews alone are God-win’s “chosen people”, and that the “ethnic cleansing” of many peoples since 1945 doesn’t count as a holocaust worthy of comparison to the six million gassed by Hitler’s brainwashing “science” of genetics.

See how genetics and weapons effects were perverted for appeasement of the Nazis; it all started out with the big lie from 1912 Nobel Laureate Alexis Carrel’s best-selling eugenics book:

“Those who have … misled the public in important matters, should be humanely and economically disposed of in small euthanasic institutions supplied with proper gases.”

– L’Homme, cet Inconnu (Man the Unknown)

Adding in the 1936 German edition preface:

“… German government has taken energetic measures against the propagation of the defective, the mentally diseased, and the criminal.”

To the 1936 German government, the “defective” included Jews. Now maybe you get the drift? If Carrel could succeed in misleading the world into not grasping the danger of Nazi eugenics in 1936, allowing Britain to appease Hitler instead of stopping him, does that not tell you the danger from “Godwin’s law” today? Sea levels have risen 120 metres over the past 18,000 years, at faster rates than they’re rising today. We’re still here. Trying to stop the sea level rising was attempted without success by King Canute. It cost him a fortune, and he’d have been better off spending the money helping people, not putting carbon credit trading billions into politicians’ pockets, BBC pension funds, and politically correct windfarms.

The real problem is that the media is being manipulated by an age-old conspiracy of fascist officialdom in science, which gets beaten back at every scientific revolution (Copernicus, Galileo, Newton, Einstein), then creeps back to shore up the status quo against simple facts.

“Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

– Dr Phil Jones to Warwick Hughes.

There is nothing complex to understand, Disgruntled. It’s very simple. You heat water and it evaporates, right? Steam rises? Steam condenses when it reaches cool air high up? Clouds form? This simply adds to the “global dimming” effect which works against CO2 induced temperature rises. The global dimming effect caused the failure of tree ring data to proxy temperatures after 1960, contrary to Nurse’s claim.

There is evidence, therefore, from several different sources, not just global humidity measurements since 1948, indicating that cloud cover has been increasing, offsetting temperature effects from CO2. It keeps low-altitude air cool, and warm air layers high in the atmosphere are unable to warm the ground because of their positive buoyancy.

I’ve put a PDF of medical Nobel Laureate Alexis Carrel’s 1935 eugenics bestseller, Man, the Unknown, online on my domain:

http://quantumfieldtheory.org/ALEXIS%20CARREL%20Man%20the%20Unknown%201935.pdf

https://nige.files.wordpress.com/2011/02/alexis-carrel-man-the-unknown-1935.pdf

Page 165 of the PDF:

“There remains the unsolved problem of the immense number of defectives … an enormous burden … Why do we preserve these useless and harmful beings? … Why should society not dispose of the criminals and the insane in a more economical manner? We cannot go on trying to separate the responsible from the irresponsible, punish the guilty … We are not capable of judging men. However, the community must be protected against troublesome and dangerous elements. How can this be done? Certainly not by building larger and more comfortable prisons, just as real health will not be promoted by larger and more scientific hospitals. Criminality and insanity can be prevented only by a better knowledge of man, by eugenics… Those who have … misled the public in important matters [Jews in Nazi propaganda], should be humanely and economically disposed of in small euthanasic institutions supplied with proper gases.” – bestseller by Alexis Carrel, 1912 medical Nobel Prize winning eugenicist and Nazi eugenics praiser and appeaser, Man the Unknown, 1935 and 1939 (died awaiting trial for collaboration).

My argument is that if such fashionable gas chamber bullshit in best-selling books by pseudoscientific “leaders” had not been so adored and loved due to Nobel’s warmongering prize (financed by deliberately supplying explosives to both sides in the Crimean War, an act of abject evil that makes Hitler’s gas chamber massacres look heavenly), maybe Nazi appeasement could have been stopped by Churchill. As it was, peer-review groupthink prevailed. Godwin should study what caused the holocaust, which was the racism due to eugenics pseudoscience in the 1930s. There was a form of Godwin’s law then, too, where anyone criticising fascist evil was simply censored out in Britain as being unpleasant. It didn’t help Churchill to stop Hitler.

All evil springs from perverted “science”, pseudoscience, enforced by petty dictatorial officialdom, wasting money. The abuse of anonymous “peer”-review power politics to censor rival theories is manifest in particle physics, where money from our pockets funds CERN’s LHC fascist search for imaginary particles, in a fascist attempt to prove mainstream hocus pocus theories that Feynman long ago exposed as speculative claptrap. These fascists are always portrayed as great Nobel Laureates in the right wing media, when their success comes not from originality or hard work, but from the corruption of “peer”-review.

The BBC has just made a big issue of a fiddled Nature journal CO2 computer-model flood risk assessment (remember the “Mike’s Nature trick” Climategate email? Dr Phil Campbell, editor at Nature, and his Physical Sciences Editor Dr Karl Zemelis in 1996 used Edward Witten’s pseudoscientific M-theory string theory obfuscation to censor my paper on quantum gravity, which correctly predicted a cosmological acceleration of a ~ Hc from a quantum gravity mechanism in 1996, two years before it was measured and confirmed):

“Using publicly volunteered distributed computing [11,12], we generate several thousand seasonal-forecast-resolution climate model simulations of autumn 2000 weather, both under realistic conditions, and under conditions as they might have been had these greenhouse gas emissions and the resulting large-scale warming never occurred. … The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth-century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.”

– Pardeep Pall et al., “Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000”, Nature, vol. 470, pp. 382–385, 17 February 2011, http://www.nature.com/nature/journal/v470/n7334/full/nature09762.html

There are two immediate obfuscation problems in this paper. First, the computer models used assume positive feedback from water vapour (not the negative feedback due to buoyant moist air forming extra cloud and thus “global dimming”, which cancels out the CO2 effect over timescales of decades), so they are just assuming that CO2 is the cause of the temperature rises in the massaged data from tree rings, from weather stations in or downwind of direct heat pollution (e.g. growing cities, factories, etc.), and from satellites (62% of the surface is covered by cloud, so the satellite data is biased towards seeing ground blackbody temperatures for clear-sky areas, etc.). Second, they are giving increased percentage risks as 20% and 90%, and the BBC is interpreting this (as evidently intended by the dishonest presentation of the abstract in terms of percentages) as a causal proof that the floods of 2000 were 20% or 90% likely to be due to global warming.

Nothing could be further from the facts. Those are just percentage increases on very small percentages. If the risk is 1%, then a 90% increase means 1.9%, not 90%. But the spin the BBC gives is misleading because of their green pension fund managers. They have a vested interest in reporting deceitful spin, and adding to the spin even more. The biggest British floods, with the worst casualty toll, were actually in 1953, when my father (in the Civil Defence Corps in Essex) helped out. The whole of the Essex coast was affected to some degree; 1,600 km of coast was flooded in Britain and over 300 people were killed. The BBC seem to have forgotten this event, which was FAR worse than the 2000 floods! If they want climate change flood probabilities, here’s one that they can’t go wrong with: over the past 18,000 years sea level rose 120 metres, causing massive floods. The probability that it was due to climate change was 100%.
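To make the relative-versus-absolute risk distinction concrete, here is a minimal Python sketch. The 1% baseline flood risk is purely an illustrative assumption of mine, not a figure from the Nature paper; only the “20%” and “90%” relative increases come from the abstract quoted above.

```python
# Illustrative only: the 1% baseline risk is an assumed figure, not data from Pall et al.
def absolute_risk(baseline, relative_increase):
    """Absolute risk after applying a fractional relative increase."""
    return baseline * (1.0 + relative_increase)

baseline = 0.01            # assumed 1% chance of such a flood in a given autumn
for rel in (0.20, 0.90):   # the "more than 20%" and "more than 90%" increases quoted above
    print(f"A {rel:.0%} relative increase on a {baseline:.0%} baseline "
          f"gives an absolute risk of {absolute_risk(baseline, rel):.2%}")
# Prints 1.20% and 1.90% - small absolute changes, not "20%" or "90%" chances
# that the 2000 floods were caused by CO2.
```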

Holocaust deniers have jumped in to defend Al Gore’s association with Holocaust denial, by diverting money from life-supporting schemes to holocaust-supporting ones, based on the principle of murder by starvation enforced by NASA eugenics HQ, as proved in my video in the previous blog post (where the NASA spokesman lies on camera about CO2 emissions!). NASA, please remember, hired the Nazi Dr Wernher von Braun (previously employed by Hitler to kill British kids using supersonic V2 rockets) to design the Saturn V for God’s greatest human being Neil Armstrong (yep, the great genius who – as reported in the last post – couldn’t find the rubber O-ring failure mechanism because he didn’t look for it during the Challenger inquiry in 1986) to pollute the moon, diverting vital money and American public sympathy from the Vietnam War for freedom and liberty from communist tyranny. I explained the following graphs on Delingpole’s blog, linked here: “Climate scepticism: not just the new paedophilia, but the new racism and homophobia too!” NASA is a really great organization, not.

Fig. 1: the earth’s surface is 70% water and currently about 62% is covered with clouds at a mean altitude of 2 km. There is some evidence that pushing extra CO2 into the atmosphere slightly increases cloud cover rather than global climatic mean temperature. We’ve been in global warming for 18,000 years, during which time the sea level has risen 120 metres (0.67 cm/year mean, often faster than this mean rate). Over the past century, sea level has risen at an average rate of 0.20 cm/year, and even the maximum rate of nearly 0.4 cm/year recently is less than the rates humanity has adapted to and flourished with in the past. CO2 annual output limits and wind farms etc. are no use in determining the ultimate amount of CO2 in the atmosphere anyway: if you supplement fossil fuels with wind farms, the same CO2 simply takes longer to be emitted, maybe 120 years instead of 100 years.
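As a back-of-envelope check of the rates quoted in the Fig. 1 caption, here is a short Python sketch using only the figures already stated above (120 metres over roughly 18,000 years, 0.20 cm/year over the past century, and a recent maximum of about 0.4 cm/year):

```python
# Back-of-envelope check of the sea level rates quoted in the Fig. 1 caption above.
postglacial_rise_cm = 120.0 * 100.0   # 120 metres expressed in centimetres
postglacial_years = 18_000.0

mean_rate = postglacial_rise_cm / postglacial_years          # cm/year
print(f"Mean post-glacial rate: {mean_rate:.2f} cm/year")    # ~0.67 cm/year

century_average = 0.20   # cm/year, 20th-century average quoted above
recent_maximum = 0.40    # cm/year, approximate recent maximum quoted above
print(f"Recent rates are {century_average / mean_rate:.0%} to "
      f"{recent_maximum / mean_rate:.0%} of the long-term post-glacial mean.")
# Prints roughly 30% to 60%.
```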

Dr Ferenc Miskolczi resigned from a NASA contractor due to their censorship of observational evidence of global humidity levels between 1948 and 2008, showing that global mean humidity has not increased and consequently the positive-feedback effect of CO2 emissions on H2O is not substantiated. Moist warm air rises, expands, and condenses into increased cloud cover. This is the “anti-greenhouse effect”. Put simply, the Earth has large oceans and a large mass, which effectively prevents water vapour escaping into outer space, unlike small planets with runaway CO2 greenhouse effects due to loss of water.

The variations of CO2 in Earth’s geological record were all caused by rapid temperature changes produced by means other than CO2 variations, such as cycles in the Earth’s orbit or geological processes that created large mountain ranges. These variations produced the climate change, which in turn caused an imbalance between CO2 absorbers and emitters. Rainforests (CO2 sinks) can be killed off by rates of temperature fall that CO2-emitting animals can cope with by migrating. A drop in global temperature therefore caused an increase in the atmospheric CO2 level indirectly, because rainforests cannot migrate as quickly as animals and are more likely to be killed. An increase in global temperatures had the opposite effect, allowing dense rainforests to proliferate faster than the rate of increase of CO2-emitting animals. Therefore, the fossil record correlation between CO2 and temperature has nothing to do with a direct mechanism for CO2 to affect temperature.

Venus, which is closer to the sun than earth is, allegedly has a runaway greenhouse effect due to an atmosphere which is 96.5% CO2 and a surface temperature of 462 °C, but the CO2 percentage is not the sole cause: the fact that the atmospheric pressure at the surface of Venus is 93 earth atmospheres is also to blame. Neglecting for the moment effects due to orbital radii, Mars is similar to Venus in having a large fraction of its atmosphere composed of CO2 (96%), but it has a low total surface air pressure, only about 0.64% of earth’s, and its mean surface temperature is a chilly −46 °C. The “runaway greenhouse effect” that keeps Venus roasting hot is not possible on earth, where the large oceans regulate the climate (Figure 2 below). Venus only has a runaway greenhouse effect because its total atmospheric pressure is so high, 93 times earth’s sea-level atmospheric pressure, and it is nearer the sun than earth!

Fig. 2: “Since the Earth’s atmosphere is not lacking in greenhouse gases, if the system could have increased its surface temperature it would have done so long before our emissions. It need not have waited for us to add CO2: another greenhouse gas, H2O, was already to hand in practically unlimited reservoirs in the oceans. … The Earth’s atmosphere maintains a constant effective greenhouse-gas content [although the percentage contributions to it from different greenhouse gases can vary greatly] and a constant, maximized, “saturated” greenhouse effect that cannot be increased further by CO2 emissions (or by any other emissions, for that matter). … During the 61-year period, in correspondence with the rise in CO2 concentration, the global average absolute humidity diminished about 1 per cent. This decrease in absolute humidity has exactly countered all of the warming effect that our CO2 emissions have had since 1948. … a hypothetical doubling of the carbon dioxide concentration in the air would cause a 3% decrease in the absolute humidity, keeping the total effective atmospheric greenhouse gas content constant, so that the greenhouse effect would merely continue to fluctuate around its equilibrium value. Therefore, a doubling of CO2 concentration would cause no net “global warming” at all.”

– Dr Miklos Zagoni, CO2 cannot cause any more “global warming”: Dr Ferenc Miskolczi’s saturated greenhouse effect theory, SPPI Original paper, December 18, 2009, page 4.

Now the 1930s was not the gas chamber era in Nazi history; it was the time when the Nazis could still have been stopped by the threat of force, if only eugenics pseudoscience had not been supported by Nobel Laureates like surgeon Alexis Carrel, who won the 1912 medical Nobel prize and then collaborated with the Nazi propagandists of eugenics by suggesting the use of gas chambers to create a super-race (claiming this is scientifically confirmed by Darwin’s “survival of the fittest”, when of course you need diversity, which eugenics eliminates). Carrel is still today celebrated by fascists and the evil for his contributions to genocide, and of course his end-to-end technique for arterial anastomoses, yet he died awaiting trial for collaboration. His support for the Nazi eugenics programme in the 1930s is a published fact:

In 1935, Carrel published a book titled L’Homme, cet inconnu (Man, The Unknown), which became a best-seller. The book discussed “the nature of society in light of discoveries in biology, physics, and medicine”.[2] It contained his own social prescriptions, advocating, in part, that mankind could better itself by following the guidance of an elite group of intellectuals, and by implementing a regime of enforced eugenics. Carrel claimed the existence of a “hereditary biological aristocracy” and argued that “deviant” human types should be suppressed using techniques similar to those later employed by the Nazis.

“A euthanasia establishment, equipped with a suitable gas, would allow the humanitarian and economic disposal of those who have killed, committed armed robbery, kidnapped children, robbed the poor or seriously betrayed public confidence,” Carrel wrote in L’Homme, cet Inconnu. “Would the same system not be appropriate for lunatics who have committed criminal acts?” he suggested.

In the 1936 preface to the German edition of his book, Alexis Carrel added a praise to the eugenics policies of the Third Reich, writing that:

(t)he German government has taken energetic measures against the propagation of the defective, the mentally diseased, and the criminal. The ideal solution would be the suppression of each of these individuals as soon as he has proven himself to be dangerous.[16]

Carrel also wrote in his book that:

(t)he conditioning of petty criminals with the whip, or some more scientific procedure, followed by a short stay in hospital, would probably suffice to insure order. Those who have murdered, robbed while armed with automatic pistol or machine gun, kidnapped children, despoiled the poor of their savings, misled the public in important matters, should be humanely and economically disposed of in small euthanasic institutions supplied with proper gasses. A similar treatment could be advantageously applied to the insane, guilty of criminal acts.[17]

The danger of Al Gore’s global warming denial movement is that diverting funds from impoverished nations into green carbon credit traders’ wallets and quack green pension funds will starve life-saving third world projects, which in times of global economic recession will have to be sacrificed.

Like Carrel, Al Gore certainly does not want a eugenics holocaust, but like Carrel, his policies and propaganda lies stamp on scientific facts and are a danger to civilization through the very fascist tactics employed to censor out genuine scientific criticism.

To read my book-length blog post on the exact mechanism by which scientific lying in Britain about fascism led to false claims that eugenics is science, to exaggerated claims about weapons effects and the “impossibility” of civil defence against Nazi bombing, and hence to many leading “experts” convincing Prime Minister Chamberlain to appease fascists, please click here. People need to oppose lying cover-ups and deceptions from NASA and the British Royal Society President!

The IPCC ignores the increasing future depletion of fossil fuels, and predicts that spending $100 billion will constrain temperature rises by 1.5 °C. In any case, the suggested “countermeasure” of throwing billions upon billions of dollars at building alternative technology such as wind power stations (which shut down in strong winds to prevent damage, and also generate no power in hot calm periods when there is a major power demand for air conditioning) will just supplement fossil fuel use. It therefore will not reduce the eventual CO2 release from the use of fossil fuels, but will merely stretch out the rate of its release, so the politics of global warming suffer from inherent problems:

(1) CO2 is not a pollutant but is the vital source of carbon for all plant growth on land and in the sea on this planet, and rising levels of CO2 therefore promote life rather than destroying it – it is an essential gas for life on Earth. It doesn’t lead to rapid temperature rises on any planet with large quantities of water, since any initial slight temperature rise causes more water to evaporate, forming clouds, thus increasing cloud cover and protecting the planet against further temperature rises from the increasing level of atmospheric CO2. As Dr Lubos Motl points out, CO2 only becomes unpleasant for humans at concentrations of around 10,000 ppm, while the current one is 388 ppm, and with the depletion of fossil fuel reserves it cannot ever exceed 1,000 ppm. CO2 has a net positive impact on life on Earth.

(2) fossil fuels are not inexhaustible and are being depleted anyway. As oil and coal supplies dwindle, the remaining reserves are more expensive to tap, so the price rises and people are pushed naturally away from such fuels towards safe nuclear energy (which doesn’t produce collateral CO2 emissions if nuclear power is used to generate the electricity that powers the trains that deliver the fuel, etc.) and renewable biofuels (plants which lock up the same amount of CO2 while growing that they release on subsequent burning, so there is no net increase in global CO2). So global warming is not a long term doomsday problem anyway, unless fossil fuels can be shown to be inexhaustible.

(3) the immense expenditure on trying to reduce CO2 emissions from existing sources and building wind power stations doesn’t cause a significant reduction in global carbon dioxide. For example, if the total fossil fuel reserve (oil, coal, etc.) is X tons, then supplementing it with wind power will simply mean that the carbon in the X tons of fuel is given out over a longer period of time, say 120 years instead of 100 years. Once all of the fossil fuels have been used up, all of the CO2 will have been released, and “countermeasures” which merely reduce the rate at which the CO2 is released will not affect the ultimate level of CO2 in the atmosphere. So it is a confidence trick to waste taxpayers’ money under false pretenses.
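Point (3) can be illustrated with a toy Python sketch. The reserve size X and the 100-year versus 120-year exhaustion times are the illustrative assumptions already used in the paragraph above, not real-world data; the point is simply that the cumulative release is identical in both cases.

```python
# Toy model of point (3): a fixed fossil carbon reserve X is burned either at the
# full rate or at a reduced rate (fossil fuel partly displaced by wind power).
# The assumed reserve size and rates are illustrative, not real-world data.
X = 1.0e12   # assumed total recoverable fossil carbon, arbitrary units

scenarios = {
    "fossil fuel only":   X / 100.0,   # reserve exhausted in 100 years
    "fossil fuel + wind": X / 120.0,   # same reserve stretched over 120 years
}

for label, annual_rate in scenarios.items():
    years = X / annual_rate
    total_released = annual_rate * years
    print(f"{label}: exhausted in {years:.0f} years, "
          f"total carbon released = {total_released:.2e}")
# Both scenarios release the same total X; only the timing of the release differs.
```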

(4) It’s also pretty obvious that before the coal and oil deposits were formed, the atmosphere had a higher CO2 content, because the carbon in fossil fuels came from the atmosphere in the first place. True, the coal and oil were formed over many millions of years, but nevertheless at the beginning the carbon which is now locked in fossil fuels was essentially all present in the atmosphere. The oxygen levels over the Phanerozoic have been analyzed in detail by Berner and Canfield’s model.

Large amounts of atmospheric CO2 were what fuelled the plant growth which produced much of the fossil fuels around 300 million years ago, when the terrific conversion of carbon dioxide into wood released enough oxygen by photosynthesis to make the earth’s atmosphere 35% oxygen (compared to 21% today), fuelling the early inefficient lungs of the first amphibians when they moved on to the land, and also fuelling giant, now extinct, flying insects which utilized the high oxygen levels. It’s interesting that such high oxygen levels are associated with high ignition probabilities under today’s conditions. E.g., for typical forest fine kindling (dry leaves, etc.) today, there is a 70% increase in the probability of a fire being started by lightning for every 1% rise in the oxygen percentage. However, this fire risk would automatically be compensated for over long time periods by a structuring of the forests by evolution to reduce intense fire risks: regular fires reduce ignition probabilities by clearing away kindling like deadwood and underbrush, trees would be spaced on average further apart than they are now, and so fires would spread less easily and burn less fiercely than you would expect by simply scaling up the oxygen percentage and assuming that primeval forests were similar to those of today.
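To see why naive scaling of the oxygen percentage is misleading, here is a minimal Python sketch that simply compounds the 70%-per-1%-oxygen figure quoted above, ignoring the compensating forest restructuring just described. The 21% and 35% oxygen levels are the figures from the paragraph above; the compounding itself is my illustrative assumption.

```python
# Naive compounding of the quoted "70% more lightning ignitions per 1% rise in
# oxygen", ignoring the compensating restructuring of forests discussed above.
# Purely illustrative of why simple scaling of the oxygen percentage is misleading.
per_percent_factor = 1.70      # +70% ignition probability per +1 percentage point O2
oxygen_today = 21              # % oxygen in today's atmosphere
oxygen_carboniferous = 35      # % oxygen around 300 million years ago

naive_factor = per_percent_factor ** (oxygen_carboniferous - oxygen_today)
print(f"Naive ignition-probability factor for 21% -> 35% O2: {naive_factor:,.0f}x")
# Roughly 1,700x, which is why forest structure (tree spacing, kindling clearance)
# must be considered rather than scaling up the oxygen percentage alone.
```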

Between 300 and 250 million years ago, the oxygen content of the atmosphere fell from 35% to 21%, and then dropped to around 15% about 200 million years ago, before rising to 27% about 30 million years ago and falling to 21% now (the current level seems to be on a downward slope).

(5) human beings are not unnatural and have always been changing the world. The world can adapt to changes, as it has done many times before in the history of this planet, which has included long periods with much higher temperatures than are forecast for global warming even under the most pessimistic conditions. There was a period when do-gooders tried to stop forest fires: they extinguished all the fires, and gradually the amount of dead wood and underbrush increased until the forest had become a massive bonfire waiting to be ignited. Eventually a fire started which couldn’t be extinguished, and the forest was destroyed completely, not just to the superficial (surface charring of bark) extent that fires usually caused. Then they realized that the policy of trying to stop fires in the forest had been an error. Interfering with global warming may seem just as “obvious” as trying to stop fires in a forest. But are we sure that such interference is the right thing to do? Could the money be better spent on defenses against sea level rises and extreme weather? Global temperature naturally varies, so it is not clear exactly what value you are even trying to change the global mean temperature to. Never mind, ignorant politicians don’t care about these “technical details”, just about being seen to address a problem by flushing trillions of taxpayers’ money down the drain so more people will vote for them.

“Ever since writing my TV shows in the Eighties I have been talking to students, teachers and the general public and enthusing about the amazing possibilities for science and technology in the future. But over 30 years I have seen a terrible change in science education. Role models such as Dalton, Faraday and Curie are hardly ever mentioned … Kids are introduced to science as something that is life-threatening and deprived of exploration … They are being brainwashed into believing that science and technology is crippling the Earth and our future when exactly the opposite is true. Science education has been turned upside down by worry merchants and it is already costing us dearly in a widespread lack of understanding – it is ignorance that breeds fear … If we scrapped completely the foolhardy and scientifically unsound chase to reduce carbon, while still aiming for greater efficiency in energy usage, we would have all the money needed to bring the Third World out of poverty, save millions of lives year on year, and create a fairer and far more balanced world …”

– Johnny Ball, “It’s Not the End of the World”, Daily Express, 21 December 2009, p. 13.

The lies of Al Gore’s Oscar-winning film, An Inconvenient Truth

1. Gore, who lost the 2000 Presidential election to Bush, claims in An Inconvenient Truth that the injury to his child by a car converted him into a genuine environmentalist. But after winning his Oscar for An Inconvenient Truth, the media revealed that Gore’s household consumed 221,000 kilowatt hours of energy in 2006, which is over 20 times the American average. So Gore was proved to be a traditional “Do as I say, not as I do” lying politician, not an honest environmentalist.

2. Gore falsely claims that the only solution to global carbon dioxide increases is to reduce emissions. This is a lie, for it neglects the fact that proper sea wall defenses in Holland today permit much of the country to operate safely while being 15 feet below sea level! Gore also ignores other countermeasures, such as growing crops further north as the earth warms, and instead just insists that the only solution is to reduce emissions.

3. Gore, with political expediency, avoids the nuclear solution to global warming explained right back in 1958 by Edward Teller and Albert L. Latter in their book Our Nuclear Future: Facts, Dangers, and Opportunities (Criterion Books, New York, 1958), page 167:

‘If we continue to consume [fossil] fuel at an increasing rate, however, it appears probable that the carbon dioxide content of the atmosphere will become high enough to raise the average temperature of the earth by a few degrees. If this were to happen, the ice caps would melt and the general level of the oceans would rise. Coastal cities like New York and Seattle might be inundated. Thus the industrial revolution using ordinary chemical fuel could be forced to end … However, it might still be possible to use nuclear fuel.’

4. Gore lies that sea levels could rise by 20 feet due to global warming causing the Antarctic ice sheet to melt. The report of the Intergovernmental Panel on Climate Change (which probably overestimated the effect greatly) predicted a rise of just over 1 foot by 2100.

5. Gore claims of temperature rise: “in recent years, it is uninterrupted and it is intensifying.” Actually, the “effective temperature” for tree growth (which includes cloud cover effects on sunlight), as measured by tree rings, has been declining, and this has been deliberately covered up by the fraudulent “scientists” assembling the Intergovernmental Panel on Climate Change data, who have had to resort to data manipulation tricks to “hide the decline”.

6. Gore lies by including Hurricane Katrina and its devastation of New Orleans in 2005 as a global warming debate phenomenon: the effects of the hurricane were a random result of happening to strike a highly populated coast with poor defenses and actually imply that better sea defenses are needed for such cities, because cutting CO2 emissions can’t stop hurricanes any more than Gore’s lying hot air!

7. Gore lies that the disappearing glaciers and snow on places like Mount Kilimanjaro are due to global warming, when in fact deforestation around those areas is the reason for the reduced precipitation (snowfall), just as deforestation in warmer areas reduces local rainfall! (This is well established: “deforestation of Amazonia was found to severely reduce rainfall in the Gulf of Mexico, Texas, and northern Mexico during the spring and summer seasons when water is crucial for agricultural productivity. Deforestation of Central Africa has a similar effect, causing a significant precipitation decrease in the lower U.S Midwest during the spring and summer and in the upper U.S. Midwest in winter and spring.”)

8. The film’s images of the abandoned ships on the dried-up bed of the Aral Sea are a massive irrelevancy for global warming because it is very well-known that the Soviet Union actually caused the Aral Sea to dry up by diverting the rivers which fed that sea! The Aral Sea did not dry up due to global warming!

9. Gore claims global warming threats are all real because a peer-reviewed review paper of 928 peer-reviewed articles found that none disagreed with global warming. Professor Feynman warned that such peer-reviewed pseudoscience claims about authority and consensus are actually political rubbish of no consequence to the natural world around us and are hence anti-science in their very nature:

“You must here distinguish – especially in teaching – the science from the forms or procedures that are sometimes used in developing science. … great religions are dissipated by following form without remembering the direct content of the teaching of the great leaders. In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers. … We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on … They are merely an imitative form of science … The result of this pseudoscientific imitation is to produce experts, which many of you are. …. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.”

– Richard P. Feynman, “What is Science?”, presented at the fifteenth annual meeting of the National Science Teachers Association, 1966, in New York City; published in The Physics Teacher, vol. 7, issue 6, 1969, pp. 313-320.

Science is the belief in the ignorance of expert opinion, of political consensus. Science is the rejection of everything except factual evidence. The object of science is not to achieve harmony or consensus but, on the contrary, to find the facts no matter whether the facts agree with expert opinions and expert prejudices, or not!

In case anyone doesn’t grasp this point by Feynman that statistics alone don’t prove causes, remember the example from How to Lie With Statistics of the Dutch researcher who proved a definite correlation between the number of babies in families and the number of storks’ nests on the roofs of their homes! This didn’t prove that the storks were the cause and delivered the babies, as in traditional mythology! There was a simple alternative reason: the bigger families tended to buy larger, older houses, which naturally tended to have more storks’ nests on their roofs because they were both bigger and older!

Statistics don’t prove causes. Science isn’t about finding correlations and then lying that the correlation itself proves the cause of the correlation to be this or that! Science is about searching for facts, not making lying claims founded on prejudice. “Coincidence” is a word often said with a sneer, but sometimes it is the factual explanation for a correlation! A statistically proved correlation between curve A and curve B is not statistical proof that A causes B or that B causes A, or even that there is any connection at all: it merely proves that the curves are similar, a fact that may be down to pure coincidence, like it or not! A good example of this problem (with the scientific “elbow grease” type of solution required) was given on page 1001 of the June 1957 U.S. Congressional Hearings on the Nature of Radioactive Fallout and Its Effects on Man, in testimony by Dr H. L. Friedell, Director of the Atomic Energy Medical Research Project in the School of Medicine at Western Reserve University:

“It is difficult trying to make this decision from the statistics alone.

“An example of how this might occur is something that was presented by George Bernard Shaw … Statistics were presented to him to show that as immunization increased, various communicable diseases decreased in England. He hired somebody to count up the telegraph poles erected in various years … and it turned out that telegraph poles were being increased in number. He said, ‘Therefore, this is clear evidence that the way to eliminate communicable diseases is to build a lot more telegraph poles’.

“All I would like to say here is that the important point is that if you really want to understand it, you have to look at the mechanism of the occurrence. I think this is where the emphasis should lie.”
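A minimal Python sketch of the stork/baby point: here an invented confounder (a “house size” variable standing in for the larger, older houses mentioned above) drives both the number of children and the number of stork nests, producing a strong correlation between two variables that have no causal link to each other. The numbers are made up purely for illustration.

```python
# Illustrative sketch (invented numbers, not data): a confounder ("house size",
# standing in for the larger, older houses) drives both family size and the number
# of stork nests, producing a strong correlation with no causal link between them.
# Requires Python 3.10+ for statistics.correlation.
import random
import statistics

random.seed(1)
house_size = [random.uniform(1.0, 5.0) for _ in range(1000)]        # confounder
children = [1.5 * h + random.gauss(0, 0.5) for h in house_size]     # bigger family, bigger house
stork_nests = [0.8 * h + random.gauss(0, 0.5) for h in house_size]  # bigger/older roof, more nests

r = statistics.correlation(children, stork_nests)
print(f"Correlation between children and stork nests: {r:.2f}")
# Typically around 0.85: a strong correlation, yet storks do not deliver babies.
```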

Above: Al Gore’s falsehood about geothermal energy, claiming that the centre of the Earth is at a temperature of “several million degrees”, when in fact it is well established from mining data that the Earth’s temperature rises by roughly 25-30 °C per km of depth in the crust (the gradient falls off at greater depths) and is only around 5,400 °C at the core (the core is hot essentially due partly to tidal effects from the gravity of the Moon as it orbits, and partly to the radioactive decay energy from dense, high mass number elements such as uranium and thorium). But all such scientific facts are apparently irrelevant to the political propaganda lies of Al Gore. Dr Lubos Motl comments:

“It’s very clear that he can’t possibly have the slightest clue about physics, geology, and energy flows on the Earth. It’s sad that many politicians lack the basic science education. …

“Don’t get me wrong, I am no foe of geothermal energy. But it currently produces about 0.3% of the global energy demand. Only near the tectonic plate boundaries, the installation is relatively doable today. That’s why geothermal power plants may thrive in Iceland but not in the bulk of Europe or America.

“There’s surely some room for expansion of this source of energy but it doesn’t seem realistic to expect that geothermal energy will replace the fossil fuels in the bulk of their current applications.

“If you want to have a sensible idea about the amount of geothermal energy we can get by sensible tools, it’s excellent to imagine the ‘hot water bubbling up at some places’ (usually in combination with lots of methane, ammonia, and hydrogen sulfide, besides innocent carbon dioxide) – exactly the right idea that Al Gore doesn’t like because it cools the irrational hype (or downright lies) surrounding the alternative sources of energy.”

Read more about Herr Führer Al Gore’s evil associations with Nazi eugenics and other pseudoscientific lies in my blog post from over a year ago here, and my other blog post on weapons effects exaggeration and civil defence denying lies here. I’m writing a lengthy book exposing ALL of the evil thugs who refuse to stand up to continuing neo-Nazi eugenicists and their new pseudoscience today. If, like Sir Paul Nurse and the BBC Nazis, you support diverting life-saving money into the pockets of Al Gore’s shady henchmen, you may find yourself in it! Please just leave a comment below and we’ll track you down from your IP address.

Janis, Irving L., Victims of Groupthink, Boston: Houghton Mifflin Company, 1972. http://en.wikipedia.org/wiki/Groupthink

“Groupthink is a type of thought within a deeply cohesive in-group whose members try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. It is a second potential negative consequence of group cohesion.

“Irving Janis studied a number of ‘disasters’ in American foreign policy, such as failure to anticipate the Japanese attack on Pearl Harbor (1941); the Bay of Pigs fiasco (1961) when the US administration sought to overthrow Fidel Castro; and the prosecution of the Vietnam War (1964–67) by President Lyndon Johnson. He concluded that in each of these cases, the decisions were made largely due to the cohesive nature of the committees which made them. Moreover, that cohesiveness prevented contradictory views from being expressed and subsequently evaluated. As defined by Janis, “A mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action”.[1].

“Individual creativity, uniqueness, and independent thinking are lost in the pursuit of group cohesiveness, as are the advantages of reasonable balance in choice and thought that might normally be obtained by making decisions as a group.[citation needed] During groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.”

I’ve collected a lot of information for a book on censored ideas, but I’ve had difficulty finding a way to assemble the information in order to overcome censorship.

Clearly some things, like lying Nazi propaganda and porn, need censorship, so for Ivor Catt or anyone else to cling to “censorship” as the root problem is obviously wrong.

Science is the experimentally-based censorship of drivel and lies. Ivor Catt wants lies to be censored out: his term “spring cleaning” or “Occam’s Razor” means censorship of bad ideas. Why then is he engaging in doublespeak elsewhere on his silly internet site where he claims he is against censorship? This is the problem!

Groupthink seems a more specific way to address the problem than censorship. We want censorship to flush lying pseudoscientific dogma down the drain. We want censorship to get rid of ideas that are being falsely defended by “science” without experimental backup. The problem, then, is not censorship per se, but a perversion of genuine censorship by a religion of orthodox beliefs dressed up as science, which is a mystical phenomenon going back historically to the Ancient Egyptian priesthood with their star-aligned pyramids, the Stonehenge Beaker People, Witchcraft, Voodoo, Scientism, Epicycles, Caloric, Phlogiston, Relativism, Supersymmetry, Alchemy, Creationism, and the Pythagorean mathematical cult that asserted without any evidence that atoms are regular geometric solids.

Has Ivor read Irving Janis’s 1972 “Victims of Groupthink” or not? Janis should have added EUGENICS and the 1986 CHALLENGER NASA disaster to his list of groupthink failures: http://www.quantumfieldtheory.org

http://www.swans.com/library/art9/xxx099.html contains an excerpt from Irving L. Janis, “Victims of Groupthink,” 1972; Houghton Mifflin Company; ISBN: 0-395-14044-7 (pp. 197-204)

The groupthink syndrome: Review of the major symptoms

In order to test generalization about the conditions that increase the chances of groupthink, we must operationalize the concept of groupthink by describing the symptoms to which it refers. Eight main symptoms run through the case studies of historic fiascoes. Each symptom can be identified by a variety of indicators, derived from historical records, observer’s accounts of conversations, and participants’ memoirs. The eight symptoms of groupthink are:

1. an illusion of invulnerability, shared by most or all the members, which creates excessive optimism and encourages taking extreme risks;

2. collective efforts to rationalize in order to discount warnings which might lead the members to reconsider their assumptions before they recommit themselves to their past policy decisions;

3. an unquestioned belief in the group’s inherent morality, inclining the members to ignore the ethical or moral consequences of their decisions;

4. stereotyped views of enemy leaders as too evil to warrant genuine attempts to negotiate, or as too weak and stupid to counter whatever risky attempts are made to defeat their purposes;

5. direct pressure on any member who expresses strong arguments against any of the group’s stereotypes, illusions, or commitments, making clear that this type of dissent is contrary to what is expected of all loyal members;

6. self-censorship of deviations from the apparent group consensus, reflecting each member’s inclination to minimize to himself the importance of his doubts and counterarguments;

7. a shared illusion of unanimity concerning judgments conforming to the majority view (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent);

8. the emergence of self-appointed mindguards – members who protect the group from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions.

When a policy-making group displays most or all of these symptoms, the members perform their collective tasks ineffectively and are likely to fail to attain their collective objectives. Although concurrence-seeking may contribute to maintaining morale after a defeat and to muddling through a crisis when prospects for a successful outcome look bleak, these positive effects are generally outweighed by the poor quality of the group’s decision-making. My assumption is that the more frequently a group displays the symptoms, the worse will be the quality of its decisions. Even when some symptoms are absent, the others may be so pronounced that we can predict all the unfortunate consequences of groupthink.

[…]

Psychological functions of the eight symptoms

Concurrence-seeking and the various symptoms of groupthink to which it gives rise can be best understood as a mutual effort among the members of a group to maintain self-esteem, especially when they share responsibility for making vital decisions that pose threats of social disapproval and self-disapproval. The eight symptoms of groupthink form a coherent pattern if viewed in the context of this explanatory hypothesis. The symptoms may function in somewhat different ways to produce the same result.

A shared illusion of invulnerability and shared rationalizations can counteract unnerving feelings of personal inadequacy and pessimism about finding an adequate solution during a crisis. Even during noncrisis periods, whenever the members foresee great gains from taking a socially disapproved or unethical course of action, they seek some way of disregarding the threat of being found out and welcome the optimistic views of the members who argue for the attractive but risky course of action. (4) At such times, as well as during distressing crises, if the threat of failure is salient, the members are likely to convey to each other the attitude that “we needn’t worry, everything will go our way.” By pooling their intellectual resources to develop rationalizations, the members build up each other’s confidence and feel reassured about unfamiliar risks, which, if taken seriously, would be dealt with by applying standard operating procedures to obtain additional information and to carry out careful planning.

The member’s firm belief in the inherent morality of their group and their use of undifferentiated negative stereotypes of opponents enable them to minimize decision conflicts between ethical values and expediency, especially when they are inclined to resort to violence. The shared belief that “we are a wise and good group” inclines them to use group concurrence as a major criterion to judge the morality as well as the efficacy of any policy under discussion. “Since our group’s objectives are good,” the members feel, “any means we decide to use must be good.” This shared assumption helps the members avoid feelings of shame or guilt about decisions that may violate their personal code of ethical behavior. Negative stereotypes of the enemy enhance their sense of moral righteousness as well as their pride in the lofty mission of the in-group.

(4) Campbell, D. T., “Stereotypes and the perception of group differences.” American psychologist, 1967, 22, 817-829.

· · · · · ·

Irving L. Janis (1918-1990) obtained a Ph.D. in Social Psychology from Columbia University. He was a faculty member in the Psychology Department at Yale from 1947 to 1985, and was appointed Adjunct Professor of Psychology Emeritus at the University of California, Berkeley in 1986.

“War is an ugly thing, but not the ugliest of things. The decayed and degraded state of moral and patriotic feeling which thinks that nothing is worth war is much worse. The person who has nothing for which he is willing to fight, nothing which is more important than his own personal safety, is a miserable creature and has no chance of being free unless made and kept so by the exertions of better men than himself.”

– John Stuart Mill

If you debunk any exaggeration, the debunking has no news value because it fails to motivate and inspire people to the degree that the exaggeration does. You can only kick out one king by imposing another. You need a new peril, like CO2 taxation induced starvation, to outweigh the previous one, before people want to know.

I’m optimistic and pray that at some point the evidence will become so obvious to even the most gullible that there will be a proper revolution, hopefully along the civilized lines of the French Revolution (complete with guillotine to save money and prevent victims paying tax to keep their tormentors living in luxury hotels or prisons), so that bad heads will literally roll. A pleasant dream.