Quantum Gravity Film and Scientific Paper in Amazon Paperback Format

Spin-2 graviton deceivers: the stress-energy tensor of general relativity is a classical, continuously differentiable entity that can’t represent discrete quantum fields realistically (they are put in as “perfect fluids”, not real discrete particles). So the argument that quantum gravity must be spin-2 because classical general relativity says so is using the most flawed part of the classical theory to dictate what quantum gravity must look like. A complete delusion. See also this link and this paper. Please admit you repeatedly censored out the hard facts and abused the ethics of science, and apologise now.

Typical spin-2 delusion example: Steven Weinberg’s paper “Photons and Gravitons in S-Matrix Theory: Derivation of Charge Conservation and Equality of Gravitational and Inertial Mass,” Phys. Rev. 135 (1964) B1049-B1056, shows that spin-2 gravitons couple to the rank-2 stress energy tensor. Steven Weinberg refuted the stress-energy tensor in “Gravitation and Cosmology” Wiley, 1972, page 147:

“At one time it was even hoped that the rest of physics could be brought into a geometric formulation, but this hope has met with disappointment, and the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like ‘metric’, ‘affine connection’, and ‘curvature’, but is not otherwise very useful. The important thing is to be able to make predictions about the images on the astronomer’s photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effect of a gravitational field on the motion of planets and photons or to a curvature of space and time.”

I’m sure Weinberg is back to the stress-energy tensor now to shore up spin-2 graviton delusion-based string theory hype. Nevertheless, the stress-energy tensor is a 4×4 matrix of continuous differential equations which can’t represent discrete particles; you typically have to represent actual mass (particles of matter, quanta of energy) by a physically false “perfect fluid” continuum distribution, just so that general relativity pops out the smooth (pseudo) curvature (it is not a quantum field theory).

Using a classical entity like the rank-2 stress-energy tensor to “determine” the spin of the quanta of quantum gravity is like using epicycles to determine the structure of the universe. It’s absurd. If Riemann had never been born, and Einstein had formulated gravity in rank-1 (curving field lines) tensors instead of rank-2 (spacetime curvature), you would still have been able to include all the observed features of relativistic gravitation correctly with spin-1 field quanta. Hence there’s no logic here. Going to rank-1 tensors (ordinary vectors) gives the confirmed 1996 quantitatively accurate prediction that spin-1 graviton exchange between similar sign gravitational charges (e.g. masses) causes cosmological repulsion aka “dark energy”, as well as observed gravity effects (you can still keep your rank-2 general relativity as a classical duality for constructing the metric with its energy conservation spacetime contraction effects):

Watermelons by James Delingpole (a book review)
Quantum Gravity Successes

Quantum gravity paper overview: http://vixra.org/abs/1111.0111
Further information: http://www.quantumfieldtheory.org
Paperback book version of the paper: http://www.amazon.co.uk/dp/1470997452/ or http://www.amazon.com/Quantum-Gravity-Standard-Model-Nigel/dp/1470997452

If you sweep away 1st quantization, and allow all “wave effects” and eigenvalues (discrete energy levels of electrons in the atom, etc.) to arise from multipath interference, then the “uncertainty principle” becomes a result of multipath interference. No wavefunction collapses, because there is no single wavefunction in the path integral. The whole basis of the path integral is summing wavefunction amplitudes from all paths between source and receiver (instrument). The instrument only plays a part in determining the end point for the path; hence the understandable mechanism for relativistic quantum mechanics:

“… My way of looking at things was completely new [path integrals], and I could not deduce it from other known mathematical schemes … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …”

- The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994, pp. 245-248).

“Scepticism is … directed against the view of the opposition and against minor ramifications of one’s own basic ideas, never against the basic ideas themselves. Attacking the basic ideas evokes taboo reactions … scientists only rarely solve their problems, they make lots of mistakes … one collects ‘facts’ and prejudices, one discusses the matter, and one finally votes. But while a democracy makes some effort to explain the process so that everyone can understand it, scientists either conceal it, or bend it … No scientist will admit that voting plays a role in his subject. Facts, logic, and methodology alone decide – this is what the fairy-tale tells us. … This is how scientists have deceived themselves and everyone else … Science itself uses the method of ballot, discussion, vote, though without a clear grasp of its mechanism, and in a heavily biased way.”

– Professor Paul Feyerabend, Against Method, 1975, final chapter

“‘Science says’ has replaced ‘scripture tells us’ but with no more critical reflection on the one than on the other. … the masses still move by faith. … I have fear of what science says, not the science that is hard-won knowledge but that other science, the faith imposed on people by a self-elected administering priesthood. … In the hands of an unscrupulous and power-grasping priesthood, this efficient tool, just as earlier … has become an instrument of bondage. … A metaphysics that ushered in the Dark Ages is again flourishing. … Natural sciences turned from description to a ruminative scholarship concerned with authority. … Our sales representatives, trained in your tribal taboos, will call on you shortly. You have no choice but to buy. For this is the new rationalism, the new messiah, the new Church, and the new Dark Ages come upon us.”

- Jerome Y. Lettvin, The Second Dark Ages, paper given at the UNESCO Symposium on “Culture and Science”, Paris, 6-10 September 1971 (in Robin Clarke, Notes for the Future, Thames and Hudson, London, 1975, pp. 141-50).

“Crimestop means the faculty of stopping short at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments … and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.”

- George Orwell, 1984

“Denialism” can be directed both ways in science. It’s just a vacuous piece of playground name-calling. What matters is the substance of the science, not how fashionable something is. Fashionability matters for getting funding, of course, and this is where Lord Acton’s “All power corrupts…” comes in. Scientists are no more ethical than anyone else.

Educational psychologist Lawrence Kohlberg’s “Stage and Sequence: the Cognitive Development Approach to Socialization” (in D. A. Goslin, Ed., Handbook of Socialization Theory and Research, Rand-McNally, Co., Chicago, 1969, pp. 347-380) lists six stages of ethical development:

(1) Conformity to rules and obedience to authority, to avoid punishment.
(2) Conformity to gain rewards.
(3) Conformity to avoid rejection.
(4) Conformity to avoid censure. (Chimps and baboons.)
(5) Arbitrariness in enforcing rules, for the common good.
(6) Conscious revision and replacement of unhelpful rules.

The same steps could be expected to apply to scientific ethical development. However, the disguised form of politics which exists in science, where decisions are taken behind closed doors and with no public discussion of evidence, stops at stage (4), the level of ethics that chimpanzees and baboons have been observed to achieve socially in the wild.

(It’s a fact that “entanglement” is 1st quantization – non-relativistic – single-wavefunction nonsense. There are no single wavefunctions for particles, as Feynman discovered! There’s a separate wavefunction amplitude for every possible path, and indeterminacy is not due to wavefunction collapse, but instead is due to multipath interference. Do you grasp the analogy between multipath interference of HF skywave radio from partial reflection by different regions of the ionosphere – D, E, and F layers – and multipath interference in the path integral? The whole of Bell’s inequality/wavefunction collapse/entanglement is a propaganda exercise of 1st quantization disinformation. Its aim, like Complementarity, is to promote mathematical misunderstanding and obfuscation to revert science to ancient metaphysical dogma. Bohr’s statements prove that he wanted no understanding of nature: he wanted to freeze 1st quantization at the 1926 level for all time with the correspondence and complementarity principles. He and others wanted nobody to understand, or to progress physics realistically with proved predictions empirically confirmed. The “nobody understands quantum mechanics” statement, presented as a factual proof of the non-existence of simple mechanisms, is extremely destructive. If people are prejudiced and not looking, even if they find facts they’ll ignore them. They will declare that the people promoting facts are giving out boring propaganda, or are just plain wrong because they have been denied any publicity compared to the over-hyped mainstream liars. They will object to calling Hitler a “liar” because of the Nazi dogma that “hard words make wounds”. They are socially evil dictators: deliberately marketing ignorant propaganda and drivel that makes no confirmed predictions (unlike this paper, which correctly predicted the cosmological acceleration in 1996), and their aim is to increase the “noise level” in journals and popular media to help the mainstream use “guilt by false conflation of all alternative ideas” as a pseudo-argument in order to “justify” censoring calculative papers that predict facts later demonstrated in nature. What I mean is the increase in the noise level to drown out real physics, analogous to the sheep bleating continually and loudly “Four Legs Good, Two Legs Bad” in George Orwell’s Animal Farm: the objective of people like Lee Smolin and Garrett Lisi, in addition to consistent-histories propaganda, is to drown out all realistic physics. Then people like Ed Witten can announce that the sheep are making a lot of incoherent noise, and he gets applauded for it. Real physics remains unheard. Few today – as a result of this successful arXiv.org policy – have time to even read, never mind check, confirmed predictions, when both the mainstream and the loudest-bleating alternative ideas, which are even more decrepit, put them off the whole subject of understanding the world.)

“I would like to put the [1st quantization dogma/wavefunction collapse/entanglement/quantum computing/quackery] uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [wavefunction phase amplitudes] for all the ways an event can happen – there is no need for an [1st quantization lying] uncertainty principle! … on a small scale [path actions small compared to h-bar], such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by 2nd quantization field quanta] becomes very important [providing an understandable multipath interference mechanism for indeterminacy, taking the metaphysics from quackery and replacing it with understandable, predictive path integrals which work, unlike metaphysics dogma; quack obfuscators who fill up the journals with pseudoscientific non-relativistic gibberish increase the "noise level" in a way that helps the mainstream dogma censors to find an excuse to "discredit" alternatives in general without bothering to even check them properly first; hence it is completely taboo to understand physics and a sign of stupidity to make checkable predictions]”

- Richard P. Feynman, QED, Penguin, 1990, pp. 55-6 and 84. (Beware of Feynman’s older books from the 1960s, which are pro-quackery and contain statements like “nobody understands quantum mechanics”. Once you get lots of people making incoherent claims that QM or SR are wrong, while ignoring 2nd quantization, criticisms backfire and enable mainstream thought eugenicists to censor all future critics of the status quo by peer-review politics.)

“The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

- Dr Thomas Love, Departments of Physics and Mathematics, California State University, by email.

The first half of the video disproves quacks by proving that there is a physical difference between 1st and 2nd quantization beyond the description of antimatter and the path integral: as Feynman explained in his 1985 book “QED”, in the path integral formulation of quantum mechanics, all wave-particle duality effects arise from a physical mechanism, multipath interference of the cyclically varying wavefunction amplitudes for each path. This means, quoting Feynman’s book, “you don’t NEED an uncertainty principle”; in other words, multipath interference is the mechanism normally ascribed to the equation of the uncertainty principle. Put another way, you can derive the uncertainty principle from the multipath interference mechanism of the path integral. In relativistic 2nd quantization (contrary to Bohr/Schroedinger/Heisenberg/Bell/Bohm 1st quantization wavefunction entanglement/collapse) there is no single wavefunction for any particle, and no collapse of that single wavefunction or entanglement of that single wavefunction (instead there is a sum over histories of many wavefunctions, with multipath interference totally replacing the uncertainty principle of 1st quantization with a simple physical mechanism for indeterminacy); wavefunction amplitudes must be added up for each different possible path; there is no single wavefunction to collapse and thus no “entanglement” or Bell inequality test as in non-relativistic 1st quantization “quantum computing” quackery.
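As a minimal numerical sketch of this summing rule (the two path actions below are made-up illustrative numbers, not anything taken from the paper or the video), the Python fragment adds the phase amplitudes of two alternative paths and squares the magnitude of the sum, then contrasts that with naively adding the two path probabilities, which throws away the interference term:

```python
import numpy as np

hbar = 1.0            # work in units where h-bar = 1
S1, S2 = 3.0, 5.5     # illustrative (made-up) actions for two alternative paths

# Path-integral rule: add the phase amplitudes exp(iS/h-bar) for every path,
# then square the magnitude of the sum to get a (relative) probability.
amplitude = np.exp(1j * S1 / hbar) + np.exp(1j * S2 / hbar)
p_interference = abs(amplitude) ** 2

# "Single path" rule: add the probabilities of each path separately,
# which contains no interference term at all.
p_no_interference = abs(np.exp(1j * S1 / hbar)) ** 2 + abs(np.exp(1j * S2 / hbar)) ** 2

print(p_interference)     # anywhere between 0 and 4, depending on the action difference
print(p_no_interference)  # always 2: no multipath interference
```

Changing the action difference S2 - S1 sweeps the first number between complete cancellation and complete reinforcement, while the second number never changes; that gap is the whole content of the multipath interference mechanism invoked above.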

There’s a mechanism for indeterminacy, the interference between multiple paths, and it’s similar to the multipath interference mechanism that causes skywave interference in HF radio. This occurs when some radio energy is reflected back to earth by the different layers of the ionosphere, at different altitudes, so that signals arriving by the different paths are received out of phase, causing the received signal to suffer from self-interference. The exchange of quanta is behind the Coulomb field binding electrons to nuclei, and since this is a discrete interaction, the electron’s motion on small scales is non-classical and indeterministic.
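The radio analogy can be put in numbers with an equally crude sketch (the 10 MHz carrier and the path-length differences below are invented for illustration, not measured ionospheric data): two equal copies of the same carrier arriving by paths of different length add up to anything between full signal and complete fade, depending only on the path difference.

```python
import numpy as np

freq = 10e6                # a 10 MHz HF carrier (illustrative)
c = 3e8                    # speed of light, m/s
wavelength = c / freq      # 30 m

# Illustrative differences in path length (metres) between, say, E-layer and
# F-layer reflections of the same transmission.
for delta_path in (0.0, 7.5, 15.0, 22.5, 30.0):
    phase_diff = 2 * np.pi * delta_path / wavelength
    received = abs(1.0 + np.exp(1j * phase_diff))   # two equal-amplitude copies superposed
    print(f"path difference {delta_path:5.1f} m -> received amplitude {received:.2f}")
# A half-wavelength (15 m) difference cancels the signal completely; a full
# wavelength (30 m) restores it: the classical face of multipath interference.
```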

A “wave” is just a periodic oscillation. Every particle’s path has an oscillating phase or “wave amplitude”, exp(iS), where S is the “action” (the particle energy multiplied by the time taken, or more precisely, the integral of the Lagrangian) in Planck units. Since exp(iS) = cos S + i sin S, it follows that the wave amplitude is a periodically oscillating function of the distance the particle goes on a path between emission and absorption (i.e. of the time taken). If it is possible for it to go several different ways (e.g. if two slits exist in a screen), there’s a separate wave amplitude for each possible path, and you add them up (the path integral). The result maximises contributions from paths of least action (or least time, if the energy is constant). Feynman explains in his 1985 book QED how this accounts for all wave-particle duality issues, e.g. the double slit experiment, so the wave-type nature of light is due to multipath interference of periodically oscillating amplitudes from each path.
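To show the summing rule doing some work, here is a deliberately crude two-slit sketch in Python (straight-line paths only, arbitrary units, and the free-particle action approximated by path length times a constant wavenumber; none of these numbers come from the paper). Summing exp(iS) over the two available paths at each detection point reproduces a fringe pattern:

```python
import numpy as np

k = 10 * np.pi          # wavenumber in arbitrary units (sets the fringe spacing)
slit_y = (-0.5, +0.5)   # positions of the two slits
L = 10.0                # distance from the slit screen to the detection screen

# For a free particle the action along a straight path is proportional to the
# path length, so the phase of each path is taken as k * (distance travelled).
ys = np.linspace(-5.0, 5.0, 201)        # detection points on the far screen
intensity = []
for y in ys:
    amplitude = 0.0 + 0.0j
    for y_slit in slit_y:               # one phase amplitude per possible path
        path_length = np.hypot(L, y - y_slit)
        amplitude += np.exp(1j * k * path_length)
    intensity.append(abs(amplitude) ** 2)   # interference of the two path amplitudes

peak = max(intensity)
print("relative intensity across the screen (every 20th point):")
print(np.round(np.array(intensity[::20]) / peak, 2))   # alternating bright and dark fringes
```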

Note that the path integral always gives real (not complex) results for cross-sections and probabilities, so the “resultant arrow” on an Argand diagram (the sum over histories that Feynman draws in many illustrations in his 1985 book QED) is always parallel to the real axis, with no complex component (Feynman’s diagrams don’t make this detail crystal clear). This means that you don’t really need exp(iS) for the phase amplitude, at least mathematically. You can drop i sin S from Euler’s equation and just replace exp(iS) by cos S in all path integral calculations. It makes absolutely no difference in all real-world checked calculations whether you use complex space (an Argand diagram with one axis in units of i) or whether the phase amplitude rotates in the real Euclidean plane. It turns out that the only reason why exp(iS) is dogma instead of cos S is that Weyl’s first attempt to quantize gravity in 1918 tried to scale the metric in proportion to exp(iX), where X is a function of the electromagnetic field.
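A quick numerical test of this substitution (my own toy check, reusing the crude two-slit numbers from the previous sketch) is to compare |sum of exp(iS)| squared with (sum of cos S) squared point by point. The two agree exactly wherever the summed amplitude is parallel to the real axis, which is the condition stated above; the snippet lets the reader see how far that holds for a given choice of phase reference.

```python
import numpy as np

# Same toy two-slit geometry as the previous sketch (illustrative numbers only).
k, L, slit_y = 10 * np.pi, 10.0, (-0.5, +0.5)

for y in (0.0, 0.7, 1.3):                                       # a few detection points
    phases = [k * np.hypot(L, y - y_slit) for y_slit in slit_y]
    full_rule = abs(sum(np.exp(1j * p) for p in phases)) ** 2   # |sum of exp(iS)|^2
    cosine_rule = sum(np.cos(p) for p in phases) ** 2           # (sum of cos S)^2
    print(f"y = {y:3.1f}:  exp(iS) rule -> {full_rule:5.2f},  cos S rule -> {cosine_rule:5.2f}")
# The two rules coincide when the resultant arrow lies along the real axis (the
# condition claimed in the text); the choice of phase reference therefore
# matters when checking the cos S replacement in a concrete calculation.
```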

Einstein debunked it, but Schroedinger loved the idea and in 1922 scaled the periodic real solutions to exp(iN) to represent Bohr’s discrete energy levels for an electron in a hydrogen atom (with all unobserved energy levels conveniently located in the complex plane!). (Ref: Schroedinger, “On a remarkable property of the quantum-orbits of a single electron”, Zeitschrift f. Physik, v. 12, 1922, p. 13.) After Heisenberg’s matrix mechanics, Schroedinger then reformulated the idea as his “wave equation”, since i dY/dt = HY has the solution that Y is proportional to exp(iHt); it is basic mathematics that exponential solutions always exist for equations of the form dY/dt = Y.

Dirac then converted Schroedinger’s equation back into exp(iHt) in 1933, and in 1948 Feynman reinterpreted it correctly as applying to each possible path (not just to a single path or a classical path), so that different paths interfere to produce wave effects. So exp(iHt), or its more general form exp(iS), is really a relic of 1st quantization and Weyl. It’s not needed any more. We can replace it with cos S if we forget 1st quantization (Schroedinger’s single wave amplitude equation) and go for path integrals instead. Quantization occurs due to multipath interference, not complex space, which was just a stopgap idea dating back to Weyl and Schroedinger in 1922. There is no effect on a path integral’s mathematical results whatsoever: it’s still a real cross-section or probability in all checked experiments.

The electron’s discrete energy levels occur because “non-permitted” orbits don’t have paths of minimal action and so are eliminated by multipath interference. In QFT, the integral of exp(iS) over all paths can be replaced with the integral of cos S, which has no effect except eliminating Hilbert space and, with it, all the problems of Haag’s theorem for renormalization. This makes quantum mechanics explicable and understandable in terms of simple physical concepts not requiring complex space.

My argument follows Maxwell’s SU(2) electromagnetic theory: Maxwell makes the point in his 1861-2 papers “On Physical Lines of Force” that magnetic fields are physically propagated by spinning field quanta or vortices. The handedness of the magnetic field vector, which loops or curls in the perpendicular plane around the direction of an electric current, is therefore a chiral handedness effect. In other words, magnetism involves a chiral handedness of gauge bosons, by analogy to SU(2) weak interactions which are chiral in involving left-handed neutrinos. Thus, by making SU(2) a full electroweak theory, the role of U(1) is no longer the SM’s fiddled hypercharge (fractional quark charges result from a vacuum polarization cloaking mechanism, where some electromagnetic field energy is converted into strong field quanta energy by vacuum interactions between pairs or triplets of nearby quarks which exchange gluons).

U(1) charge is now gravitational charge (mass), i.e. the charge of quantum gravity. U(1) mixing with chiral SU(2) gives rise to massive weak bosons. By including gravitation correctly as a gauge symmetry in the Standard Model, it is possible to predict masses, since mass is the natural unit for the “charge” of quantum gravity. Because U(1) hypercharge mixing with SU(2) now yields predominantly (from the mixing angle standpoint) U(1) gravity and predominantly SU(2) electroweak forces, rather than U(1) electrodynamics and SU(2) weak interactions as in the SM, we can see that the breaking of SU(2) into weak and electromagnetic interactions is chiral and depends on the spin of the single U(1) charge (mass). Giving mass to left-handed SU(2) gauge bosons breaks the SU(2) symmetry (creating Goldstone bosons which gain mass and which have been experimentally mistaken for the SM’s “Higgs bosons” by SM propagandists) and thus gives massive bosons which undergo weak interactions (their mass provides inertia to overcome magnetic self-inductance, which blocks one-way flows of charged massless gauge bosons); the remaining massless SU(2) gauge bosons convey electromagnetic fields, as explained in detail in the video.

Massless electric charges can’t move in one direction only, because they have no inertial mass to overcome the magnetic self-inductance due to their motion. However, they can be exchanged in an equilibrium (exchange radiation) between charges of similar sign, because the geometry then cancels out the magnetic field curls, totally preventing the self-inductance problem! This mechanism of equilibrium for current exchange (well known in electric circuits and logic signals) has another vital effect: it constrains to zero the net charge transfer term in the SU(2) Yang-Mills equation! This physical constraint reduces the massless-boson SU(2) Yang-Mills equation to Maxwell’s equations. So the theory is completely self-consistent, in addition to having made confirmed predictions!

It’s the fashionable preoccupation with string theory which has drawn mainstream attention away from efforts to find a simple and useful way to put gravity into the Standard Model as a gauge theory, since the arguments for this rely on a spin-2 graviton (the basis of string theory arguments) which is based on the rank-2 general relativity field tensors. It doesn’t seem to be a strong scientific fact, bearing in mind that general relativity is not a quantum theory, and you can describe gravity using rank-1 vector field equations like Poisson’s equation, with a relativistic metric to correct for spacetime contraction.

The theory successfully predicted the cosmological acceleration (dark energy) of the universe in May 1996, published in October 1996, two years before experimental confirmation by Perlmutter. In 1996, when the cosmological acceleration calculation was sent to Mike Renardson, editor of “First Thoughts” magazine, his initial reaction was that the roughly 10^-10 m/s^2 cosmological acceleration predicted was far too small to ever observe in the real world.

Yet just two years later, Perlmutter’s computer-automated CCD (charge-coupled device) telescopes detected the signature of supernovae of fixed intrinsic energy (standard candles) at half the age of the universe, confirming quantum gravity’s prediction!
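For readers who want a rough sanity check on the size of the quoted figure, the arithmetic below evaluates the product of the speed of light and the present Hubble parameter, which is the natural acceleration scale of the expanding universe and comes out at a few times 10^-10 m/s^2. Whether that product is exactly the quantity derived in the 1996 calculation should be checked against the paper itself; this is only an order-of-magnitude cross-check.

```python
# Order-of-magnitude cross-check on the quoted ~1e-10 m/s^2 acceleration
# (not a reproduction of the paper's derivation).
c = 2.998e8            # speed of light, m/s
H0_km_s_Mpc = 70.0     # Hubble parameter, km/s per megaparsec (round value)
Mpc = 3.086e22         # metres per megaparsec

H0 = H0_km_s_Mpc * 1e3 / Mpc   # Hubble parameter in 1/s, roughly 2.3e-18
print(c * H0)                  # about 7e-10 m/s^2, the same order as quoted above
```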

Understanding science corruptions and deception

Science liars don’t think they are liars, for the most part: take the typical example of “eugenics” pseudoscience, popularized through media censorship in democracies by fashionable bigots like Sir Francis Galton, and by the gas chamber “final solution” proposer and later Nazi collaborator, Medical Nobel Laureate Alexis Carrel. Society needs a diversity of ideas. The mixing of the “educational” establishment with science research in the 1850s was a disaster, standardizing theories and thinking methodologies prematurely (it’s always premature to edit diversity down to a single way of thinking) in order to set simplistic teaching syllabuses and exams, cloning scientists into a groupthink approach to fundamentals which are taboo.

Now you might construct a counter-argument against this. You might say: OK, but in the real world lots of diversity leads to chaos, and we don’t have time to teach lots of diverse ideas, but need to simplify and censor for time constraints (Plato’s defense of bigotry). In that case you can still lean over backwards to point out repeatedly what you are doing in defining the theory you select as preferable in particular, specific ways: for example, Bertrand Russell noted that, as an alternative to evolution, God could have made the universe 5 minutes ago complete with the fossil record. In other words, you use evolution not because it is the only possible theory (it isn’t) but because it is the most useful theory for scientific reasons (although for religious-promotion purposes, other theories may be more useful). In other words, you test theories but don’t “prove” them. This is a very difficult point. I can’t “prove” quantum gravity; I “just” have evidence which nobody else has: the prediction, however, doesn’t involve any extravagant hypotheses, just well-defensible empirical data.

Nature’s editors Phil Campbell and Karl Ziemelis wrote letters refusing to publish the “extremely unlikely” prediction in 1996 (along with several other prominent journals), and then chickened out of publishing the fact that they had got their decision wrong when the experiments confirmed the prediction in 1998 (despite repeated letters sent by recorded delivery, and un-returned phone calls)! So did New Scientist’s “ecowarrior” editors Richard Fifield and Jeremy Webb, and even their letters editor, who all went silent, threw abusive tantrums, or claimed that confirmed predictions were a “waste of time for the science news media” unless they were FIRST published in journals “peer” reviewed by (bigoted) string theorists (EW publications were ignored by New Scientist). Classical and Quantum Gravity (Institute of Physics, Bristol) sent the paper for “peer” review to an anonymous and brilliant string theorist, who astutely reported: “This paper is detached from current work in superstring theory.” Duh, we told you! Superstring theory doesn’t make a single falsifiable prediction despite thirty years of mainstream effort, so it has turned into crank groupthink (worse than phlogiston crackpotism, which at least was a falsifiable conjecture that could be disproved by experiments!). The only role of the defenders of superstring theory is to silence falsifiable predictions from genuine alternative theories, using the bogus “peer” (not!) review system!

The recent BBC “news” bias controversy (censoring a programme exposing its own 1980s star TV hero to abuse allegations in December 2011, and then on Friday last week admitting that it transmitted a programme making damning allegations against a senior Thatcher politician without even bothering to first check whether the alleged attacker had been correctly identified, which he hadn’t been) should tell you that we don’t live in a free world: priesthoods of unelected greasy-pole-climbing liars act as both censors and witchfinder generals, and control the TV channel that you are forced to pay for by law and threat of prison if you have a television, precisely the propaganda trick of the Nazis and the USSR (which also had public elections where voters could choose between two public-relations-expert clones every few years, a dictatorial propaganda process which in our country is given the misleading term “democracy”, despite having nothing to do with democracy, which was a daily referendum on issues – something that would be easy using the internet, given internet banking security techniques today – not an election choice between two dictators for a period of five years).

Nothing wrong there: I am not objecting to any of these decisions to tell lies, but just to the censorship of those who point out the facts which disprove the lies; we are merely suggesting that defensible, free criticism of these mainstream lies is being censored out in order to allow the lies to continue to brainwash people in an undemocratic, non-free, Third Reich style corrupted media groupthink of “politically correct” thought dictatorship. If you want to tell lies, do so by all means, but my point is that in order to make progress we need to be able to criticise lying statements objectively. THE FREEDOM TO DO OBJECTIVE CENSORSHIP OF LIES IN THE FASHIONABLY PREJUDICED MEDIA IS MISSING. Censorship is only wrong when used in a one-sided dictatorial manner, an emotional and fact-evading manner by thugs who ignore (will not respond to) objective criticisms because they don’t need to. It’s not wrong when done objectively to sort out and distinguish the facts, and to ask challenging questions. You can’t make progress if you can’t criticise the status quo. The doublethink whereby we pretend we live with freedom of speech, when in fact one-to-many USSR-type quango media like the BBC saturate the world with groupthink lying propaganda from dictators like Paul Nurse (see linked page here), is dangerous. It happened before with eugenics and the use of the gas chamber, proposed first for use against critics of government policy by medical Nobel Laureate Alexis Carrel in his 1935 French bestseller – a bestseller in Germany with a Nazi foreword in 1936 – Man the Unknown. Deliberate misunderstanding of evolution for eugenics was convenient, so the big shots did not scientifically criticise it. Would the holocaust have occurred if Darwin had shot down Sir Francis Galton’s eugenics? Maybe not! Science isn’t an abstract game. It has human consequences.

Deliberate misunderstanding of quantum mechanics (non-relativistic 1st quantization complex space lies – not simple multipath interference in relativistic 2nd quantization real space path integrals – being the implicit assumption for “nobody understands” propaganda lies) is unnecessary and leads to a culture of misunderstanding and hatred towards progress in understanding physics. The liquid droplet model of the nucleus explains why the nuclear fission of a large nucleus into smaller nuclei releases energy: the surface tension (binding energy) is proportional to the surface area of the nucleus, which scales as the square of the radius, whereas the number of nucleons present in a heavy nucleus scales roughly as the volume, i.e. as the cube of the radius. So the binding energy per nucleon gets smaller in very heavy nuclei, because they have less surface area per nucleon. If you look at the curve of binding energy, you see that very heavy nuclei like uranium have about 7 MeV of binding energy per nucleon, compared to about 8.7 MeV/nucleon for iron. It’s this fall in binding energy per nucleon for very heavy nuclei which allows them to fission.

However, the total amount of binding energy increases after fission: from about 7 MeV/nucleon for uranium to well over 8 MeV/nucleon for the fission fragments. Nuclear binding energy is not being released; it’s getting bigger in fission! Fission doesn’t release energy from the nuclear (strong) force; on the contrary, fission increases that binding energy. What physicists call “nuclear energy” from fission is electromagnetic (Coulomb field) energy. The electromagnetic repulsion between protons in the nucleus is trying to push it apart, while the strong nuclear force, mediated (at its maximum range) by pions (gluons are exchanged between quarks on shorter distance scales), holds it together. When you hit the nucleus of uranium with a neutron (preferably a uranium isotope with an odd number of nucleons, which is less stable than the closed nuclear shell structures with even numbers of nucleons), it causes a distortion of the nucleus which may allow the electromagnetic repulsion force briefly to overcome the strong binding force and break the nucleus up. The point is, “nuclear energy” is not nuclear energy.
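The liquid-drop picture sketched above can be made quantitative with the standard semi-empirical mass formula, which adds volume, surface, Coulomb, symmetry and pairing terms. The short script below uses one common coefficient set (different fits give slightly different values) to compare binding energy per nucleon for iron-56, uranium-236 and a typical pair of fission fragments; the isotopes and coefficients are my choices for illustration, not taken from the text.

```python
def binding_energy(A, Z):
    """Semi-empirical (liquid-drop) binding energy in MeV, one common coefficient fit."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    B = (aV * A                            # volume term
         - aS * A ** (2 / 3)               # surface term (the "surface tension")
         - aC * Z * (Z - 1) / A ** (1 / 3) # Coulomb repulsion between protons
         - aA * (A - 2 * Z) ** 2 / A)      # symmetry term
    if A % 2 == 0:                         # pairing term (zero for odd A)
        B += aP / A ** 0.5 if Z % 2 == 0 else -aP / A ** 0.5
    return B

for name, A, Z in [("Fe-56", 56, 26), ("U-236", 236, 92),
                   ("Ba-141", 141, 56), ("Kr-92", 92, 36)]:
    print(f"{name:6s}: {binding_energy(A, Z) / A:.2f} MeV per nucleon")
# Typical output: about 8.8 for iron, 7.6 for uranium, and 8.3-8.5 for the
# fragments, i.e. the same pattern as the figures quoted in the text: binding
# energy per nucleon rises after fission, while the kinetic energy of the
# fragments is delivered by their mutual Coulomb repulsion.
```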

It’s electromagnetic energy. It’s Coulomb repulsion of protons, accelerating the fission fragments apart and thus imparting energy to them. It should be called electromagnetic energy (or atomic energy) to reduce confusion. But it isn’t. It’s called “nuclear energy” for political purposes, not scientific ones. The energy doesn’t come from nuclear binding energy. The fission fragments have altogether more nuclear binding energy than the unfissioned uranium! Now if you look in popular books on science, they say that nuclear energy is explained by Einstein’s E = mc^2. Nope. An equation doesn’t explain the mechanism, and it’s not even “matter” that is being converted into energy anyway as we have already explained: some electromagnetic Coulomb field energy is released in fission, and for the most part this energy is being converted into matter (increasing the binding energy per nucleon of the fission products; essentially all of the mass of atoms comes not from quarks or electrons but from the short-ranged, short-lived field quanta around the quarks). Fission converts Coulomb electromagnetic field energy (which has mass but is not real, on-shell matter) into the kinetic energy of matter. Now most physicists have learned that “there is nothing to be understood” and see no value in understanding mechanisms, which just obfuscate complex numbers in their equations, so they censor this out. What’s new? Remember Feynman’s immediate acceptance by all the great despots of the age:

“… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …”

- The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994, pp. 245-248).

This attitude of Bohr persists today with regard to the difference between 1st and 2nd quantization; the attitude is that because non-relativistic 1st quantization was discovered first, and is taught first in courses, it must somehow take precedence over the mechanism for indeterminacy in quantum field theory (2nd quantization). The doublethink of most textbooks omits this and glues on 2nd quantization as a supplement to 1st quantization, rather than as a replacement for it! Why not have doublethink, with two reasons for indeterminacy: intrinsic, unexplained, magical indeterminacy typified by the claim “nobody understands quantum mechanics” (1st quantization), plus the mechanism that virtual particles in every field randomly deflect charges on small scales (like the Brownian motion of dust)!

Einstein and Infeld in their book “Evolution of Physics” discuss the randomness of Brownian motion. When the random, indeterministic motion of fragments of pollen grains was first seen under a microscope, the water molecules bombarding the fragments were invisible, and Brown actually believed that the motion was intrinsic to small particles, an inherent indeterminacy on small scales in space and time! This error is precisely Bohr’s 1st quantization error. It is no wonder that Bohr was so ignorantly opposed to Feynman’s path integral, or that most people still profess that they can’t understand mechanisms.
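To make the Brownian analogy concrete, here is a standard random-walk toy (nothing here is taken from Einstein and Infeld’s book; the kick size and drag factor are invented). Every individual impact in the loop is a definite, discrete event, yet the resulting trajectory is unpredictable in practice unless every impact is tracked, which is the kind of mechanical indeterminism being described above.

```python
import random

random.seed(1)                  # reproducible toy run
position, velocity = 0.0, 0.0
damping = 0.9                   # crude drag from the surrounding fluid

for step in range(10):
    kick = random.choice([-1.0, 1.0])     # one discrete molecular impact per step
    velocity = damping * velocity + kick
    position += velocity
    print(f"step {step}: impact {kick:+.0f}, position {position:+.2f}")
# Each impact is perfectly definite, but the path wanders unpredictably:
# apparent indeterminism arising from a discrete, mechanical cause.
```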

Feynman’s answer, of course, is that 1st quantization is plain wrong, since it is non-relativistic, and Occam’s Razor tells us that we need only 2nd quantization, because it explains everything mechanically without needing a 1st quantization (intrinsic or magical) uncertainty principle:

“I would like to put the [1st quantization] uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [wavefunction phase amplitudes] for all the ways an event can happen – there is no need for an [1st quantization] uncertainty principle! … on a small scale [path actions small compared to h-bar], such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by 2nd quantization field quanta] becomes very important …”

- Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84.

Statistical correlation tests are the most easily corrupted form of science, and this is rife: you test for “correlation” between one model and the experimental data, given a null (default) hypothesis that the “correlation” is just random coincidence. The flaw here is that the “evidence” you gain from a successful correlation test only tells you that the model accords with the data better than random noise. It doesn’t tell you anything about the problem that another theory may also agree; e.g. FitzGerald’s, Lorentz’s, Poincare’s and Larmor’s equations match the transformation and the E = mc^2 law of Einstein’s special relativity, so “experimental tests” of these equations don’t specifically support Einstein’s theory over the more mechanical derivations of the same equations by the earlier investigators. It’s also been shown that the confirmed predictions of general relativity come from energy conservation and are not specific confirmation of the geometric space-time continuum model. Therefore, it is Popperian sophistry to claim that a specific theory is “confirmed” by experiments merely when its predictions are confirmed, unless you have somehow disproved the possibility of any other theory predicting the same results by a different route. Politically, this sophistry gives rise to the “historical accident syndrome” whereby the first theory which gives the correct prediction in a politically correct, fashionable manner is hyped by the popular media as having been “confirmed” by experiment, when in fact only the predictions (which are also given by totally different theoretical frameworks sharing the same mathematical duality in the limits of the experimental regime) are confirmed. This is fascist hubris. We saw it with the earth-centred universe of Ptolemy. Once you have a fashionable model, it gets into the educational textbooks, it is “understood” by the popular media, and any alternative framework is wrongly dismissed as superfluous, unnecessary, boring, etc., without first being properly investigated to see if it fits more data more accurately.
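The point about correlation tests can be illustrated with a toy fit (the data and both “laws” below are invented purely for illustration): two quite different functional forms fitted to the same noisy data both correlate with it far better than the random-noise null hypothesis, so a bare correlation test cannot choose between them.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 3.0, 40)
data = 2.0 * x ** 2 + rng.normal(0.0, 0.5, x.size)   # invented "true law" plus noise

# Two rival models fitted to the same data set by least squares:
quadratic = np.polyval(np.polyfit(x, data, 2), x)                    # y = a*x^2 + b*x + c
exponential = np.exp(np.polyval(np.polyfit(x, np.log(data), 1), x))  # y = A*exp(k*x)

for name, model in (("quadratic", quadratic), ("exponential", exponential)):
    r = np.corrcoef(model, data)[0, 1]
    print(f"{name:11s} fit: correlation with data r = {r:.3f}")
# Both correlations come out close to 1, so "passing the correlation test"
# does not single out either functional form; it only rejects the pure-noise null.
```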

It’s important to note that this is a general problem in politics and human endeavour generally. The advice is to keep to well-worn paths or you will get lost. However, you’re unlikely to find much on well-worn paths, because so many people keep to them, and the probability of finding anything on them is therefore low. Ironically, this point is “controversial” because you get the counter-argument that you’re unlikely to find anything if you go off the beaten track. More to the point, if you do find anything off the beaten track, you still have a difficulty in convincing anybody that it actually exists, as Niccolò Machiavelli explains in the political context (The Prince, Chapter VI): “the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.”

It’s quite correct that a lukewarm argument on a radical and unpopular proposal leads either nowhere or to failure (suppression). You cannot easily overthrow a tyrant with kindly, gentle words alone. By the time a tyrant is susceptible to arguments (in dementia), it is easier to overthrow the regime by other means anyway. Diplomacy is the policy of feeding wolves in the expectation of achieving peace through appeasement. Groupthink is never revolutionary: it is always counter-revolutionary, developing political structures to stabilize a success by preventing a further revolution. New ideas are only welcome within the narrow confines of an existing theory, like epicycles.

Irving L. Janis, Victims of Groupthink, Houghton Mifflin, Boston, 1972

Janis, civil defense research psychologist and author of Psychological Stress (Wiley, N.Y., 1958), Stress and Frustration (Harcourt Brace, N.Y., 1971), and Air War and Emotional Stress (RAND Corporation/McGraw-Hill, N.Y., 1951), begins Victims of Groupthink with a study of classic errors by “groupthink” advisers to four American presidents (page iv):

“Franklin D. Roosevelt (failure to be prepared for the attack on Pearl Harbor), Harry S. Truman (the invasion of North Korea), John F. Kennedy (the Bay of Pigs invasion), and Lyndon B. Johnson (escalation of the Vietnam War) … in each instance, the members of the policy-making group made incredibly gross miscalculations about both the practical and moral consequences of their decisions.”

Joseph de Rivera’s The Psychological Dimension of Foreign Policy showed how a critic of Korean War tactics was excluded from the advisory group, to maintain a complete consensus for President Truman. Schlesinger’s A Thousand Days shows how President Kennedy was misled by a group of advisers on the decision to land 1,400 Cuban exiles in the Bay of Pigs to try to overthrow Castro’s 200,000 troops, a 1:143 ratio. Janis writes in Victims of Groupthink:

“I use the term “groupthink” … when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.”(p. 9)

“… the group’s discussions are limited … without a survey of the full range of alternatives.”(p. 10)

“The objective assessment of relevant information and the rethinking necessary for developing more differentiated concepts can emerge only out of the crucible of heated debate [to overcome inert prejudice/status quo], which is anathema to the members of a concurrence-seeking group.”(p.61) ["Let's all be friends" was the initial approach of both Hitler and Stalin to their enemies; Hitler, especially, hated rudeness and encouraged his enemies to stick to the rules of gentlemanly behavior. The German proverb was that "hard words make wounds". It's easier for tyrants to censor those who are polite without "even making a scene". Hence Hitler's repeated meetings and peace accords - later broken, of course - with Neville Chamberlain, which gave Hitler time to start WWII. Also the cold-blooded use of gas and of classical music played to keep concentration camps "in order" with minimal conflict and hot-blooded violence, which would be "bad for morale". The only useful, understandable communication with despots is hot-blooded violence, as proved by WWII. The pacifist belief in the "reasonableness of man" to resolve problems by the method of calm negotiation is a delusion prevalent in those who have had a cushy time in life, away from desperate thugs. This was why Chamberlain was taken in, later lying that Britain had been "rearming" when the arms gap had been widening with every second prior to war, increasing the relative strength of the Nazis and making the war, when it did come, more and more dangerous and costly in human lives. For the "let's all be friends" approach to Hitler, see the book by Professor Cyril Joad, Why War? (1st ed. August 1939, 2nd ed. September 1939), which exaggerates weapons effects and then tells the reader that the author believes in his heart, without proof, that all people are reasonable and we can just negotiate with Hitler. Sure we could. Just what Hitler wanted and tried to do: peaceful conquests and genocide conducted entirely in concentration camps and gas chambers, without expending ammunition. The problem isn't war. The problem is socialist Professor Joad, who led the Oxford Union's 1933 pacifist "we won't fight" motion to victory straight after Hitler's election as Chancellor. No popular historian mentions this, naturally. Are they all liars, or ignorant?]

“One rationalization, accepted by the Navy right up to December 7 [1941], was that the Japanese would never dare attempt a full-scale assault against Hawaii because they would realize that it would precipitate an all-out war, which the United States would surely win. It was utterly inconceivable … But … the United States had imposed a strangling blockade … Japan was getting ready to take some drastic military counteraction to nullify the blockade.”(p.87)

“… in 1914 the French military high command ignored repeated warnings that Germany had adopted the Schlieffen Plan, which called for a rapid assault through Belgium … their illusions were shattered when the Germans broke through France’s weakly fortified Belgian frontier in the first few weeks of the war and approached the gates of Paris. … the origins of World War II … Neville Chamberlain’s … inner circle of close associates … urged him to give in to Hitler’s demands … in exchange for nothing more than promises that he would make no further demands”(pp.185-6)

“Eight main symptoms run through the case studies of historic fiascoes … an illusion of invulnerability … collective efforts to … discount warnings … an unquestioned belief in the group’s inherent morality … stereotyped views of enemy leaders … dissent is contrary to what is expected of all loyal members … self-censorship of … doubts and counterarguments … a shared illusion of unanimity … (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent)… the emergence of … members who protect the group from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions.”(pp.197-8)

“… other members are not exposed to information that might challenge their self-confidence.”(p.206)

“Higgs boson” propaganda

Dr Woit reports “On Monday LHCb will report the latest results on B(s)->mu+mu-, and the latest Higgs news should come at the Higgs parallel session on Wednesday.” The problems for the SM Higgs boson arise from the non-prediction of the mass of the SM Higgs boson:

“Higgs did not resolve the dilemma between the Goldstone theorem and the Higgs mechanism. … I emphasize that the Nambu-Goldstone boson does exist in the electroweak theory. It is merely unobservable by the subsidiary condition (Gupta condition). Indeed, without Nambu-Goldstone boson, the charged pion could not decay into muon and antineutrino (or antimuon and neutrino) because the decay through W-boson violates angular-momentum conservation. … I know that it is a common belief that pion is regarded as an “approximate” NG boson. But it is quite strange to regard pion as an almost massless particle. It is equivalent to regard nuclear force as an almost long-range force! The chiral invariance is broken in the electroweak theory. And as I stated above, the massless NG boson does exist.”

- Professor N. Nakanishi (Not Even Wrong blog comment, November 14, 2010 at 9:42 pm).

“Pion’s spin is zero, while W-boson’s spin is one. People usually understand that the pion decays into a muon and a neutrino through an intermediate state consisting of one W-boson. But this is forbidden by the angular-momentum conservation law in the rest frame of the pion.”

- Professor N. Nakanishi, Not Even Wrong blog comment, November 15, 2010 at 1:46 am.

Nakanishi states that despite the Higgs mechanism, which produces the massive weak bosons (the Z and W particles), a massless Nambu-Goldstone boson is also required in electroweak theory, in order to permit the spin-0 charged pion to decay without having to go through a spin-1 massive weak boson. In other words, there must be a “hidden” massless alternative to the weak bosons as intermediaries. This is explained clearly in our theory of SU(2).

Update (12 November 2012):

Leo McKinstry on BBC Common Purpose fanatics and liars

Biggest BBC Science Politics Ignorant Fact-Abuser and Obfuscator to be Appointed next BBC Director General?

The Guardian newspaper’s Stalinist clone (Daily Telegraph) blogs editor Damian Thompson has written a piece headed: “The next director-general of the BBC should be Jeremy Paxman. No, seriously” (linked here). This is typical of the ploys used to enforce evil: the pretence that Stalinist extremists are actually right-wingers and are putting forward objective ideas. A typical example of Jeremy Paxman’s science groupthink propaganda falsehoods was exposed by Dr Julian Lewis MP, who points out in his letter to the Sunday Telegraph on 29 August 1999 that Paxman’s own statements disingenuously contradict information given in his own book:

“In recounting the story of the discovery of deadly nerve gases by the Nazis, Jeremy Paxman surprisingly states: “Why Hitler chose not to use the weapons is one of the enduring mysteries of the Second World War” (Comment, August 22). … “… no matter how tempted he felt to use his secret gases, Hitler had always to balance in his mind the conviction of his scientists that the Allies had them too.” That quotation is to be found on page 64 of a book about chemical and biological warfare, entitled A Higher Form of Killing and published in 1982. Its authors were Robert Harris and Jeremy Paxman.”

Of course, Paxman wasn’t “simply lying”. Let’s invent some typical political style excuses for this “little anomaly”. (1) Maybe Paxman can’t or won’t proof-read book galleys before publication, and the quotation is stuff written by his co-author which he signed off without even reading. (2) Maybe he forgot his own views. (3) Maybe he changed his mind on the issue. Yes. Of course. Lots of excuses. (When someone in the media like Paxman gets everything wrong, it’s other people’s fault or has some “simple explanation”, but when these “big shots” inquisition others who make a similar slip up, it’s a different story; they’re “hero” witchfinder generals!)

Note that Dr Julian Lewis in 1982 reviewed Paxman’s highly biased pseudo-science book A Higher Form of Killing in The Times (8 April 1982):

“… in June 1940, Sir John Dill, Chief of the Imperial General Staff, declared: ‘At a time when our National existence is at stake, when we are threatened by an implacable enemy who himself recognizes no rules save those of expediency, we should not hesitate to adopt whatever means appear to offer the best chance of success.’

“What the authors of this book clearly demonstrate – albeit reluctantly and with various critical asides – is the sheer irrelevance of unenforceable conventions aimed at limiting the application of science to warfare. … The 1925 Geneva Protocol on Gas and Bacteriological Warfare was to have negligible influence upon the conflicts that followed. Its prohibition of the first use of Chemical weapons did nothing to deter Mussolini in Abyssinia in 1936, and would probably not have prevailed with the British had an invasion been mounted after Dunkirk. Hitler’s failure to exploit his monopoly in nerve-gases was likewise determined by purely military factors [LEWIS IS SADLY IGNORANT OF THE FACTUAL BASIS FOR THE EFFICIENCY OF SIMPLE BRITISH WWII ANTI-NERVE GAS-PROOFING OF ROOMS AGAINST SKIN CONTAMINATION BY LIQUID NERVE AGENT DROPLETS OR NERVE GAS VAPOUR AND THE UTILITY OF WWII GAS MASKS AGAINST NERVE GASES, see detailed experimental proof in the papers I have personally published on the internet archive, linked here and also the 1999 nerve gas absorption experiments by buildings with closed windows, linked here, William K. Blewett and Victor J. Arca, Experiments in Sheltering in Place: How Filtering Affects Protection Against Sarin and Mustard Vapor (report ADA365348): “sorption of the agent by the shell and interior surfaces of the building ... was found to produce substantially higher protection factors than are predicted simply by air exchange. In hour-long challenges with mustard vapor, passive filtering increased the protection provided by the cottage by a factor ranging from 15 to 50. Increases in protection factor were significant with sarin, the more volatile agent ..."] …

“Faced with the problem of retained documents and incomplete archives, Messrs Harris and Paxman inevitably tend to stray into the realms of speculation. … At least Robert Harris’s notorious televised claim that Churchill pressed for a biological attack which would have left German cities indefinitely contaminated, is not resurrected. The Prime Minister’s advocacy of gas retaliation to the V-weapons is now carefully distinguished from questions of germ warfare.”

Frederick Forsyth on EUSSR intimidation attempt

Above: brief extract from Frederick Forsyth’s exposure of the problems of Hitler’s “National Socialism” as it now stands, with Britain bailing out Greece’s communist credit card spending sprees as well as its own home-brewed thugs. Britain and the USA had to drop 1.3 megatons of conventional bombs on Germany in WWII to stop Hitler’s eugenics racism pseudoscience and “European Integration” lunacy back in the 40s. Why on earth do German Chancellors keep on trying to start wars they can’t win? For more words of wisdom from Mr Forsyth please see page 58 of my paper http://vixra.org/abs/1111.0111 linked here, and also the blog post on the politics of science linked here (quotation from Frederick Forsyth on evil fascism dressed up as “political correctness” or groupthink do-gooder fascist-socialist “liberalism” lies).

James Delingpole, “28 Gates later”, The Telegraph, November 13th, 2012 :

“…the unsuccessful attempt by blogger Tony Newbery (Harmless Sky) to get to the truth of the now-infamous January 2006 seminar where the BBC decided to give up even pretending to be balanced on the climate change issue and start reporting it like a full-on Greenpeace activist. The BBC’s excuse: clever experts made us do it. But this won’t wash …

“Here are allegedly ‘the best scientific experts’ who attended:

BBC Television Centre, London
Specialists:

Blake Lee-Harwood, Head of Campaigns, Greenpeace
Li Moxuan, Climate campaigner, Greenpeace China
Kevin McCullough, Director, Npower Renewables
Sacha Baveystock, Executive Producer, Science
Helen Boaden, Director of News
Andrew Lane, Manager, Weather, TV News
Anne Gilchrist, Executive Editor Indies & Events, CBBC
Dominic Vallely, Executive Editor, Entertainment
Eleanor Moran, Development Executive, Drama Commissioning
Elizabeth McKay, Project Executive, Education
Emma Swain, Commissioning Editor, Specialist Factual
Fergal Keane, (Chair), Foreign Affairs Correspondent
Fran Unsworth, Head of Newsgathering
George Entwistle, Head of TV Current Affairs
Glenwyn Benson, Controller, Factual TV
John Lynch, Creative Director, Specialist Factual
Jon Plowman, Head of Comedy
Jon Williams, TV Editor Newsgathering
Karen O’Connor, Editor, This World, Current Affairs
Catriona McKenzie, Tightrope Pictures catriona@tightropepictures.com
BBC Television Centre, London (cont)
Liz Molyneux, Editorial Executive, Factual Commissioning
Matt Morris, Head of News, Radio Five Live
Neil Nightingale, Head of Natural History Unit
Paul Brannan, Deputy Head of News Interactive
Peter Horrocks, Head of Television News
Peter Rippon, Duty Editor, World at One/PM/The World this Weekend
Phil Harding, Director, English Networks & Nations
Steve Mitchell, Head Of Radio News
Sue Inglish, Head Of Political Programmes
Frances Weil, Editor of News Special Events …

Good work, Maurizio. Nice job! … ‘…now the BBC has yet another big problem on its hands. It turns out it has lied to the public who pay for it … This is no small matter considering the billions of pounds involved in the Green energy industry. Additional carbon taxation has directly led to fuel poverty for hundreds of thousands. The excess cold-related deaths in the UK have shot up in the last few years.’

“BBC’s latest excuse: forget Jimmy Savile, blame Nigel Lawson

“by James Delingpole, November 12th, 2012

“The other day I argued that, following the Jimmy Savile and Lord McAlpine disasters [McAlpine has been hired for advertising by the BBC using a massive out-of-court settlement paid for with licence payers’ cash as an allegedly contractual bribe to declare the BBC immaculate; conservatives will always lie for brown envelopes of cash], the BBC will learn nothing and do nothing. Patten – I’ll bet you: and there’s no bet I’d more happily lose – will keep his well-upholstered rear stuck firmly in the Chairman’s seat. The BBC will remain, as it is now, a bastion of entrenched left-liberal orthodoxy. If you need proof, have a read of this astonishing speech just delivered to Oxford University by the BBC’s ex-Director General Mark Thompson.

“Though Thompson probably bore more responsibility than anyone for the Jimmy Savile fiasco – he was in charge when the BBC took its ludicrous decision to shelve a programme exposing Savile and run one praising him instead – he escaped in the nick of time to go to his new cushy £4 million a year job editing one of the few media institutions in the world even more nauseatingly bien-pensant than the BBC – the New York Times, aka Pravda. … Nigel Lawson’s Global Warming Policy Foundation has been a consistent thorn in the side of the BBC, by exposing the lamentable bias of its climate change coverage. Its publications include Christopher Booker’s devastating report The BBC and Climate Change: A Triple Betrayal … No one likes being told they’ve been naughty and done a bad thing. Especially not the gingery-beardie Mark Thompson … He quotes the Doran survey (‘97 per cent of scientists say…’), quite unaware that it has been exposed as rubbish; he is impressed by Bob Ward whom he seeks to brandish as an expert in the field; he constructs his whole speech around the argumentum ad verecundiam – blissfully unaware throughout that by citing supposed authorities such as the Royal Society he is guilty of precisely the rhetorical fallacy he is striving to criticise. … he resorts to yet another rhetorical fallacy (the argumentum ad populum) to demonstrate that ‘scientists’ are considered in opinion surveys to be much more trustworthy than ‘journalists.’ … my immediate thought is: wow! These people are so shameless.”

Update

Feedback on first quantum gravity YouTube video from Dr Mario Rabinowitz:

From: Mario Rabinowitz
To: Nige Cook
Sent: Sunday, November 11, 2012 4:26 PM
Subject: Re: My recent paper, Challenges to Bohr’s Wave-Particle Complementarity Principle

Hello Nige,

NG: “What do you think of my brief YouTube video which goes through my quantum gravity paper quickly? https://nige.wordpress.com/2012/11/11/quantum-gravity-film-11-november-2012-upload/”

Thanks, I enjoyed your well done YouTube video for a number of reasons:

*You cover some material that I and probably most physicists have not seen before.

*I think the historical perspective you provide is valuable.

*Most people do not have the time to unearth the material you provide.

*I’m glad you credited Maxwell for some of his seminal contributions. Maxwell is one of my heroes. He was an exceptionally gifted and honest scientist. I read many of his papers decades ago. I recall one in which he said that he had been wrong on a thermodynamic question, and Clausius was right.

*It is the first time I’ve heard your voice and seen your face.

*You are to be applauded for the hard work you put into it.

Here are a couple of suggestions in case you revise your video:

*Feynman’s voice is not identified until well after it is heard. Might be good to do it sooner. You may have done this intentionally to raise the viewer’s curiosity. However, it may be unnecessarily distracting to many who don’t recognize his voice right away.

*There is a flash of something from Glasstone & Dolan. You may have done this intentionally to be subliminal. But it may be puzzling if not perplexing to those who are not familiar with their book on the effects of nuclear weapons.

Thanks also for the Feynman info. He is another of my heroes who honored me by visiting with me in my office for about an hour.

In my recent paper, Challenges to Bohr’s Wave-Particle Complementarity Principle, in the ArXiv at

http://arxiv.org/abs/1211.1916

I conclude that violation of complementarity breaches the prevailing probabilistic (Copenhagen) interpretation of Quantum Mechanics. Do you agree with me?

Best, Mario

On Sun, Nov 11, 2012 at 1:19 AM, Nige Cook wrote:

Hello Mario,

Thank you very much. It is very interesting!

What do you think of my brief YouTube video which goes through my quantum gravity paper quickly? https://nige.wordpress.com/2012/11/11/quantum-gravity-film-11-november-2012-upload/

“… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …”

- The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994, pp. 245-248).

This attitude of Bohr persists today with regard to the difference between 1st and 2nd quantization; the attitude is that because non-relativistic 1st quantization was discovered first, and is taught first in courses, it must somehow take precedence over the mechanism for indeterminacy in quantum field theory (2nd quantization). The doublethink of most textbooks omits this and glues on 2nd quantization as a supplement to 1st quantization, rather than as a replacement of it! Why not have doublethink, with two reasons for indeterminacy: intrinsic, unexplained, magical indeterminacy typified by the claim “nobody understands quantum mechanics (1st quantization)”, plus the mechanism that virtual particles in every field randomly deflect charges on small scales (like Brownian motion on dust)!

Einstein and Infeld in their book “Evolution of Physics” discuss the randomness of Brownian motion. When the random, indeterministic motion of fragments of pollen grains was first seen under a microscope, the water molecules bombarding the fragments were invisible, and Brown actually believed that the motion was intrinsic to small particles, an inherent indeterminacy on small scales in space and time! This error is precisely Bohr’s 1st quantization error. It is no wonder that Bohr was so ignorantly opposed to Feynman’s path integral, or that most people still profess that they can’t understand mechanisms.

Feynman’s answer, of course, is that 1st quantization is plain wrong: it is non-relativistic, and Occam’s Razor tells us that 2nd quantization is all we need, because it explains everything mechanically without requiring a 1st quantization (intrinsic or magical) uncertainty principle:

“I would like to put the [1st quantization] uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [wavefunction phase amplitudes] for all the ways an event can happen – there is no need for an [1st quantization] uncertainty principle! … on a small scale [path actions small compared to h-bar], such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by 2nd quantization field quanta] becomes very important …”

- Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84.

Kind regards,

Nige

Yuri Milner’s Fundamental Physics Prize of $3 million each to Edward Witten, Alan Guth, Andrei Linde, Nima Arkani-Hamed, Juan Maldacena, Nathan Seiberg, Alexei Kitaev, Maxim Kontsevich, and Ashoke Sen (4 September 2012 update)


Above: here’s something really annoying to me. Facebook suggested adding $3 million Yuri Milner prize winner Edward Witten (“Luboš Motl and Jack Sarfatti are mutual friends”) – who is widely regarded as both the greatest and quietly modest mathematical physics genius of his generation. To see why this automated nonsense is funny, look at the jokes and Witten’s Nature article telling string theory colleagues to abstain from controversy, linked here. Witten currently uses a picture of a brain on his profile, which is hardly the emblem of “quiet modesty” which the big “science” hyping media prefers. (A quick scan through the 176 friends listed reveals anthropic landscape multiverse proponent Professor Susskind, drug smuggling-charged Professor Paul Frampton, and popular physics author Professor Lawrence Krauss, so this appears to be Witten’s genuine Facebook profile.) One thing you realize early on is that any association with fashionable status quo groupthink is toxic poison.

Dr Peter Woit of Columbia University maths department comments on Not Even Wrong, the fundamental physics blog critical of non-falsifiable speculations:

“… I noticed what is odd about this prize, after realizing that the winners are kind of a list of the most prominent people in the field who haven’t won a Nobel Prize. What this does is turn the Nobel Prize on its head; you get it for doing work that is untestable or wrong, but that has a high profile … The Fundamental Physics Prize winners get about six times more [than the Nobel Prize] for ideas that have gotten a lot of hype, but no experimental test (or at least not enough to satisfy the Nobel Committee of physicists). Even better, you get the prize for your over-hyped ideas even if experiment does show them to be wrong … One wonders about the implications of this for the future of theoretical physics: why should young theorists work on unpopular ideas and/or try hard to find testable ones? …”


“… it’s now all too clear where we end up: the textbooks of string theory and supersymmetry have already been written, and that will be codified as humanity’s best understanding of fundamental physical reality for the indefinite future. …” – Dr Woit.

Dr Woit: “… if string theorists all of a sudden calculate accurately all the parameters of the SM, using a string theory landscape calculation that implies a multiverse, that would be good evidence for a multiverse. It’s also true that if I discover tonight a wonderful TOE which explains everything perfectly and has no multiverse, that will be evidence against the multiverse. At this point, both of those eventualities seem equally irrelevant: I’m a lot more likely to get run over by a truck on my way home tonight. As for … the smart high school student, what if he or she instead of getting excited about the hype sees it for what it is, decides “frontier science” is BS, and decides to become a lawyer instead of a scientist? I think a lot of that has been happening in recent years ….”
Nine string theorists, inflationary cosmology theorists, and salesmen of non-existent quantum computers have been awarded $3 million each and the opportunity to control the awarding of future similar prizes, funded by Russian Facebook investor Yuri Milner. All the ideas have failed, and quantum computing hype rests on logic based upon non-relativistic 1st quantization, i.e. single wavefunction quantum mechanics from 1926, not the 1948 Feynman path integral (relativistic 2nd quantization) in which electrons have multiple wavefunctions for different paths, which interfere on small scales (path actions near Planck’s constant) to cause indeterminacy.

The foundations of quantum field theory and quantum mechanics

As documented (with literature references) in detail here and in several blog posts, the basis of quantum mechanics and quantum field theory as they now stand is Weyl’s 1918 gauge theory of quantum gravity. Weyl there suggested that the metric of general relativity is directly proportional to exp(iF), where F is a function of the electromagnetic field. Euler’s equation gives: exp(iF) = cos F + i sin F. In other words, there is an infinite series of real-valued, discrete (“quantized”) multipliers for the metric, like a series of railway tracks of different widths (which is where the name “gauge” theory comes from).

Weyl’s theory was wrong (as Einstein pointed out, it would mess up spectral lines in light from heavy stars with intense gravitational curvature), but the idea of using the discrete real solutions of exp(iF) to quantize or “gauge” a theory persisted, since Schroedinger read Weyl’s paper and tried to apply exp(iF) to explain Bohr’s discrete quantum states in a paper he wrote in 1922. Schroedinger was well aware that any equation of the type dX/dt = X must have an exponential solution, because separating the variables and integrating gives a natural logarithm: the integral of dX/X = dt gives ln(X1/X2) = t (where X1 is the value at time t and X2 is the initial value), which when you get rid of the natural logarithm is equivalent to X1 = X2 exp(t). If you have dX/dt = -iHX, then the solution is X1 = X2 exp(-iHt).
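
As a quick sanity check on that algebra (my own illustrative sketch in Python, treating H as an ordinary constant rather than an operator), a few lines of sympy confirm that X1 = X2 exp(-iHt) satisfies dX/dt = -iHX:

import sympy as sp

t, H, X2 = sp.symbols('t H X2', real=True)
X = X2 * sp.exp(-sp.I * H * t)                    # proposed solution X(t) = X2 exp(-iHt)
print(sp.simplify(sp.diff(X, t) + sp.I * H * X))  # prints 0, so dX/dt = -iHX is satisfied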

Hence, the phase factor exp(-iHt) or exp(iS), which is integrated over paths to give the path integral in quantum field theory, is very simply a solution to an equation of the form dX/dt = -iHX, which we call Schroedinger’s equation (X is the wavefunction, since this blog does not support the Greek symbol for the letter Psi). Schroedinger in 1926 came up with his equation dX/dt = -iHX after becoming familiar with Weyl’s gauge theory and being given de Broglie’s wave-particle duality paper; Feynman writes in his Lectures on Physics that it isn’t possible to derive the wave equation, i.e. it is taken as an axiom rather than a result of quantum mechanics theory. Clearly Schroedinger was just extending his 1922 idea of applying Weyl’s gauge theory to quantum mechanics. Weyl eventually (following London’s suggestion) reapplied his gauge theory of gravity to the electromagnetic field, scaling the local phase of the wavefunction (rather than the metric of general relativity) in direct proportion to exp(iF), where F is a function of the electromagnetic field. This gave electromagnetic gauge theory. Put in causal terms (rather than Noether’s mathematical theorem): if a force field does work upon a particle to change its wavefunction, then the conservation of energy implies that there is an effect on the field which did work in causing the phase change.

Basically, the wavefunction amplitude for each actual offshell or “virtual” quantum in the double-slit experiment (or each possible interaction involving a virtual quantum and a fermion in bound states like the quantized Coulomb field binding between orbital electrons and nuclei) is proportional to exp(iF). The periodic circular variation of the resulting complex wavefunction amplitude around the origin of an Argand diagram allows paths of different lengths (taken by different offshell quanta) to arrive with varying phases, which will largely cancel out if their actions are much larger than h-bar (Planck’s constant divided by twice Pi).
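
The cancellation is easy to see numerically. Here is a toy sketch (my own illustration with an arbitrary quadratic action; the scale numbers and the function name are made up for the demonstration): each path contributes an arrow exp(iS/h-bar) on the Argand diagram, and when the actions spread over many multiples of h-bar the arrows point in all directions and the resultant shrinks towards zero, whereas when all the actions lie within about h-bar of each other the arrows add up.

import numpy as np

hbar = 1.0
x = np.linspace(-1.0, 1.0, 2001)            # labels a family of variational paths

def resultant_length(scale):
    S = scale * x**2                         # toy action, stationary at x = 0
    arrows = np.exp(1j * S / hbar)           # one phase arrow per path
    return abs(arrows.sum()) / len(arrows)   # normalised length of the resultant arrow

print(round(resultant_length(1.0), 3))       # actions of order h-bar: arrows add up (close to 1)
print(round(resultant_length(1e4), 3))       # actions much larger than h-bar: arrows cancel (close to 0)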

What Weyl did in producing the first quantum mechanics gauge theory in 1929 was to consider the derivative of this relationship. As proved on page 7 of our paper, what Weyl noticed was that the equation

wavefunction(t) = wavefunction(0) · exp(iF)

does not yield the same ratio when the wavefunctions are differentiated with respect to spacetime variables to find their rates of change. The product rule of differentiation and the rule for differentiating any exponent give an extra term, which turns out to be a compensation for the energy taken from the electromagnetic field in order to produce a change in the wavefunction amplitude of a particle. In other words, it’s the mechanism of quantum field theory: to change the state (wavefunction amplitude) of a particle you must supply energy from the surrounding quantum field by means of the absorption of an offshell field quantum. The energy lost from the field is the energy gained by the onshell particle which absorbs the field quantum, as depicted in Feynman diagrams. Noether’s theorem linking conservation laws to symmetries in physics influenced Weyl’s original development of gauge theory: if two electrons are paired in an orbital with opposite directions of spin (by the Pauli exclusion principle), then inverting the relative direction of both spins will preserve symmetry because, although each electron will now have a changed spin, the two will still have opposite spins. This conserves energy because symmetry is preserved. (Electrons 1 and 2 have spins up and down before inversion; then down and up respectively afterwards. Although the spins have inverted, each electron still has spin opposite to its neighbor’s.)
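
The product-rule point can be checked symbolically in a few lines (a sketch in my own placeholder notation, with psi0 and F as arbitrary functions; this is not the notation of page 7 of the paper): differentiating psi0 exp(iF) produces an extra i (dF/dx) psi term on top of the derivative of psi0, and a “gauge” derivative which subtracts a field-dependent term restores the original ratio.

import sympy as sp

x = sp.symbols('x', real=True)
psi0 = sp.Function('psi0')(x)                # placeholder wavefunction factor
F = sp.Function('F')(x)                      # placeholder function of the field
psi = psi0 * sp.exp(sp.I * F)

naive = sp.diff(psi, x)
extra = sp.simplify(naive - sp.exp(sp.I * F) * sp.diff(psi0, x))
print(extra)                                 # i F'(x) psi0(x) exp(iF(x)): the extra term

gauged = naive - sp.I * sp.diff(F, x) * psi  # "gauge" derivative (d/dx - i dF/dx) acting on psi
print(sp.simplify(gauged - sp.exp(sp.I * F) * sp.diff(psi0, x)))  # prints 0: the ratio is restored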

What Weyl did physically here was to quantify the modification (“gauge transformation”) to the electromagnetic field (Maxwell field potential) which corresponds to a given rate of change or derivative of the wavefunction. Weyl found that the derivative of the wavefunction is a function of the electromagnetic field; symmetry is preserved where the energy of the electromagnetic field is conserved. Only when a quantum of field energy is imparted to a particle is work done. Classical physics ignores this entirely. Newton’s law of gravity contains no allowance for the gravitational field energy to be reduced by an amount equal to the kinetic energy gained by a falling apple. This is a flaw in classical physics, which generally ignores the application of the conservation of energy to fields.

In terms of gravity, if you drop an apple, the acquisition of kinetic energy by the apple’s mass comes from the gravitational force field which surrounds it (relativity shows that the apple gains mass slightly by accelerating, so the acceleration of the apple is not due to energy originally stored in its mass). The gravitational force field surrounding the apple is depleted, therefore, by precisely the amount of energy which the apple gains from the field as it falls. When it hits the ground, the sound waves are energy which was originally gravitational field energy (the potential energy of the apple in the gravitational field). So we know (from energy conservation) that some graviton energy is used, i.e. converted into kinetic energy.

Similarly, in Weyl’s electromagnetic gauge theory, every phase change of a particle requires the absorption of a virtual particle from the electromagnetic field in which it is immersed. This is the basis of the simple “tree” level Feynman diagrams which tend to contribute the vast majority of the amplitude of fundamental forces like electromagnetism, at least at low energy. Weyl’s theory quantifies the relationship between the size of the phase change and the amount of electromagnetic field energy required for that phase change. Quantum gravity works the same way.

Euler’s equation shows that the phase amplitude exp(iS) = cos S + i sin S. You can’t have a non-real phase amplitude or a non-real path integral result (the non-observable complex states are precisely what quantize quantum mechanics). So the resultant of any path integral comes only from the real term, i.e. exp(iS) can be replaced by cos S. (Ever heard of a cross-section or probability which is complex? Me neither, so goodbye i sin S.) Then the phase amplitude is a simple rotation of some hidden variable in real space. If you don’t like hidden variables, then you don’t like quantum mechanics, because the complex wavefunction itself is a “hidden variable”; you can only see probabilities and cross-sections which are proportional to the square of its modulus, so it’s not directly observable and thus is a hidden variable. This is entirely consistent with all experimental results in QFT and QM!
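
The replacement of exp(iS) by cos S in the resultant is an identity rather than an approximation, as a two-line numerical check shows (a sketch using arbitrary toy actions of my own choosing): the real part of a sum of exp(iS) phase factors is exactly the sum of cos S over the same paths.

import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(0.0, 1000.0, size=100000)    # arbitrary toy path actions (in units of h-bar)

lhs = np.exp(1j * S).sum().real              # real part of the usual exp(iS) path sum
rhs = np.cos(S).sum()                        # the same sum done with cos S as the phase factor
print(np.isclose(lhs, rhs))                  # True: nothing on the real axis is lost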

This has nothing to do with 1st quantization “hidden variables” such as Bohm’s derivation of the Schroedinger equation or the experimental tests of the Bell inequality, both of which are 1st quantization quantum mechanics, i.e. they implicitly assume a single wavefunction exists. In fact, a single wavefunction doesn’t exist in relativistic quantum mechanics, i.e. 2nd quantization. Each path has a wavefunction amplitude, and there is no (single) “wavefunction” to collapse when a measurement is made. As Feynman states in his book QED (1985, not to be confused with his earlier works), you “don’t need” the Heisenberg uncertainty principle as an axiom in 2nd quantization, because multipath wavefunction interference causes indeterminacy and produces quantitatively the same result, without any single wavefunction collapsing. It is the superposition of varying real wavefunction amplitudes from multiple paths that causes the interference phenomena in the double slit experiment and the indeterminacy of the electron’s orbit in the atom. Feynman explains in QED that for an orbital electron, the path integral is indeterministic because the Coulomb field (binding it) consists of discrete interactions with field quanta, which cause deflections to the motion of the electron. In other words, the electron doesn’t take all possible paths; it’s just that to model the electron statistically (to find the probability of finding it at any given location), you need to include in the calculation all of the various possible interactions which can occur between the electron and field quanta.

Similarly, if you want to predict the path of a pollen grain fragment, you need to include the various possible random interactions with water molecules in the calculation; the result is Brownian motion “random walk” statistics. The pollen fragment doesn’t take all possible paths. But because a large number of paths are possible, the model that mathematically describes the motion is indeterministic (giving only a statistical description), and this originally caused confusion for Brown, who discovered Brownian motion but attributed it initially to some kind of intrinsic law of nature that particles are chaotic on small scales (the reason was that he could not see the water molecules which were impacting the pollen grain fragments with his microscope). Einstein and Infeld discuss this history in their book, The Evolution of Physics. It should be noted, however, that although by Occam’s razor the simplest path integral model for the electron is that it doesn’t take all possible paths (you just have to take all possible paths into account in the calculation of probabilities by the path integral), the photon does take multiple paths (since the double slit experiment gives interference with “single” photons, which must therefore be spatially extended and influenced by both paths which are given by the two slits).
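
A toy random-walk sketch makes the point numerically (my own illustration; the numbers of fragments and impacts are made up): each fragment follows one definite path determined by unseen impacts, yet the ensemble statistics are indeterministic, with the root-mean-square displacement growing as the square root of the number of impacts.

import numpy as np

rng = np.random.default_rng(1)
fragments, impacts = 2000, 2500
kicks = rng.choice([-1.0, 1.0], size=(fragments, impacts))   # unseen molecular impacts
paths = kicks.cumsum(axis=1)                                 # each row is one definite path

rms = np.sqrt((paths[:, -1]**2).mean())
print(round(rms, 1), round(np.sqrt(impacts), 1))   # both about 50: random-walk statistics, no intrinsic chance needed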

Those who are proud of mathematics will try to defend epicycles as elegant, by obfuscation techniques. Are we “losing solutions” if we replace exp(iS) with cos S, in an analogous manner to the fact that you do lose negative solutions if you square a real function which can be positive or negative? No, because the resultant arrow for a sum over histories (path integral) on an Argand diagram must always be parallel to the real axis! It can’t be pointed any other way, or else the result of a path integral or quantum mechanics probability will not be a real number! “Maybe”, they will allege (with lots of arm-waving), “the complex exp(iS) epicycle carries vital interference information which magically goes to work in the multipath interference of the path integral, and ensures the Standard Model and QM give valid results?” Wrong!

It is a fact that cos S is just as periodic as i sin S; the only difference is that one is on the real plane (which we deal with) and the other isn’t on the real plane (so it’s a hidden variable that’s not only hidden, but superfluous as well). Schroedinger needed the complex number, i, because he used first quantization, i.e. a single wavefunction rather than wavefunctions interfering for all paths. He did not have the path integral (second quantization) where discrete results arise from multipath interference, each path having a separate wavefunction amplitude which contributes to the path integral’s resultant arrow or amplitude.

Schroedinger had to explain discrete lines and discrete orbit diameters (most probable electron locations) with a defective, non-relativistic model in which there is only one wavefunction and there is no multipath interference mechanism for the quantization phenomena. The path integral provides this mechanism. We no longer need Schroedinger’s complex conjugate, if we use second quantization rather than first quantization.

The multipath interference mechanism of the path integral makes Hilbert space obsolete. We simply don’t need the exp(iS) wavefunction amplitude in the path integral; we can use cos S instead. It does the same job. No information is lost; there is no information input which gets lost in the output if we do the path integral as an integral of phase factor cos S in place of exp(iS). The only differences are advantages: we stop thinking about imaginary (Hilbert) space, which is at best a spurious epicycle in the theory, and we lose the problems with Haag’s theorem (which makes renormalization impossible to prove self-consistent if we continue to use imaginary Hilbert space). Haag’s theorem doesn’t apply to a real phase factor of cos S, only to exp(iS). So cos S solves a lot of problems, with no drawbacks. It reduces the number of hidden variables in QFT and QM. Very nice!

Errors from fashionably dogmatic ignorance and Orwellian doublethink

Let’s examine the string theorists like Ed Witten and the inflationary universe people like Alan Guth. Our paper explains on pages 1-14 what’s wrong with the stress-energy tensor’s spin-2 gravity coupling, and on pages 15-16 what’s wrong with inflation and the correct prediction for the flatness of the early universe.

Feynman explains clearly in his 1985 book QED that particles can’t exist in a 1st quantization Schroedinger-Heisenberg superposition of classical states, because the indeterminacy is produced by the mechanism of multiple path interference. So there is simply no single real wavefunction with an eternal “superposition” of states. There is just one state for each quantum number of a fermion, whose indeterminism is due to the multiple wavefunctions from field quanta exchange which interfere with one another, without a constant superposition that can store any data whatsoever. Although a two-state, spin-1/2 “qubit” such as the up/down spin direction of an electron may appear to store information, it’s a fantasy because field quanta from the electrons’ own fields are continually interacting with them and reversing their spins; the only “information” is that each electron in a pair has opposite spin directions. The actual spin directions are continually changing. David Deutsch simply ignores Feynman’s 1985 QED book and in his article on “Quantum Computation” in Physics World, 1/6/92, misleadingly asserts that Feynman backed quantum computing in 1982! To make a quantum computer requires turning the universe non-relativistic so there is just one wavefunction per fermion, or else using massless bosons – photons – which store information during their light-velocity journey because of relativity. The problem is the same with quantum computing as with string theory and inflation cosmology: these are failed superficial speculations built on shifting foundations (the stress-energy tensor in general relativity doesn’t represent discontinuous matter, so there is no smooth curvature unless it is fiddled with a false continuous fluid approximation), shored up with false public relations claims. The media fails to report these deliberate or inadvertent religious-dogma-style deceptions (science fantasy is popular and sells).

Woit’s comment on the Milner prize, quoted earlier, describes the worst version of the Matthew effect. Awards to celebrities are self-publicity for the award-giver. To put it rudely, if Milner had rewarded unhyped ideas which have been confirmed experimentally, such as the 1996 prediction of the dark energy of the universe, then the media would have either ignored him or at best crucified him with their usual self-righteous drivel about everyone unfamous being wrong/unworthy/failures/bitter/pathetic or simply non-newsworthy. It’s exceptionally hard for the media to hype anything unless it already has popular appeal, because the media is useless at marketing completely unpopular ideas (I obtained grade A on a marketing course dealing with the use of the media for publicity, so I do know how this works). No editor can sell plain old news (ordinary births, deaths, marriages, etc.) without front-page controversy/fame/infamy/celebrity gossip, which is popular. Nobody has the time to know what every Tom, Dick and Harry does.

The media interest is in promoting the press-releases of the popular status quo “Max Clifford” spin-doctors, the serial murderers, megarich, or megapowerful politicians, actors, or rock stars.  Even in the supposedly fair and level playing fields of the Olympic Games coverage, the mainstream media in fact hypes, interviews, and endlessly promotes the profile and public recognition of the “expected” winners (or the previous winners) ahead of the new contests and new results, and in preference to the new competitors who are unknowns.  This is the Matthew effect.  Take Michael Phelps who has 17 Olympic medals from previous Games.  The media has already devoted more coverage to the “news” that he didn’t win a medal in his first Olympic competition of 2012, than to the new winners who did!  This “paradox” has a simple reason: Phelps established a profile and a massive fan base, and well deserved his publicity for the 17 previous medals.  The self-serving media priority is thus pandering to this established popularity and established interests of the viewers and readers, not simply reporting what I’d consider to be the real “news”.  Past history prejudices the media coverage, by defining in advance what “news” is considered worthy of reporting!  In other words, the power of the media is corrupted (like all other power), and for precisely the same reason, there is no media market for real “news” (non-string) of checked predictions in fundamental physics!  Dr Woit stated yesterday, 30 July 2012 (even before Milner’s prize was announced):

“The lesson … from the failure of … one trendy subject is just to change to a different trendy subject.” – Dr Woit

The danger is that fashionable groupthink will be encouraged, wasting money by concentrating too many eggs in one basket or in baskets located on one bandwaggon:

“You really think that this year’s winners will continue to do research as if nothing has happened? And given their financial power over the rest of their colleagues, you think their relationships will stay as natural? If someone had an incredible amount of money and wanted to sabotage a subject, you think there is a more effective way? Mr Milner could have started a new, well-funded institute dedicated to fundamental research in physics, along the lines of the Perimeter Institute, but this time in a different continent. He could have subsidized the research of a very large number of young, talented scientists (including many in Russia who live hand to mouth). But he decided to take the easy way and splash incredible amounts of cash on those who need it least.” – Not Even Wrong commentator MathPhysics

“As several have pointed out, it makes the problem of follow-my-leader physics worse. As it is there are too many young people whose work is based on what is fashionable at Princeton, and the prospect of a 100k/3M dollar carrot will just make this worse.” – Not Even Wrong commentator P.

“… these theories have (or this one [Witten’s "M-theory"] has) the remarkable property of predicting gravity [this emphasis is Witten's own] - that is, of requiring the existence of a massless spin-2 particle whose couplings at long distances are those of general relativity. (There are also calculable, generally covariant corrections that are unfortunately unmeasurably small under ordinary conditions.) This result is in striking contrast to the situation in conventional quantum field theory, where gravity is impossible because of the singularities of the Feynman graphs.”

- Edward Witten, “Reflections on the fate of spacetime”, Physics Today, April 1996, p24.

Notice that Pauli and Fierz introduced spin-2 gravitons in 1939 as “gravitational waves” (which will be composed of gravitons) by stating:

“In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …”

– Conclusion of the paper by M. Fierz and W. Pauli, “On relativistic wave equations for particles of arbitrary spin in an electromagnetic field”, Proc. Roy. Soc. London, v. A173, pp. 211-232 (1939).

It is well “accepted” by the theoretical physics community that gravitational waves “must” be composed of gravitons of spin-2, because they couple to the rank-2 stress-energy tensor of general relativity.  This is despite the fact that the stress-energy tensor requires an unrealistic ideal/perfect classical fluid approximation to yield a differentiably smooth distribution of mass, energy etc., so that the resulting curvature is a smooth, differentiable function of spacetime.  Similarly, it was well accepted by the theoretical physics community in 1867 that atoms were composed of the stable aether vortices of Lord Kelvin, von Helmholtz, and Tait, because they agreed mathematically with the “established laws of nature” such as the conservation of mass and momentum. Any messenger who ridiculed Kelvin for this error of not making falsifiable predictions was simply censored out.  Planck, inventor of the quantum theory of radiation and editor of the journal which published Einstein’s first papers, remarked depressingly in his autobiography that new ideas triumph “one death at a time” as famous bigots/leaders are rat poisoned/die. Should critics of any status quo group-think fashion really be “ridiculed” and then banned from the right to reply properly? Or should the media stop chickening out and start investigating the liars instead of believing the current epicycle theory like religious dogma?

“The student … is accustomed to being told what he should believe, and to the arbitration of authority. … Ultimately, self-confidence requires a rational foundation. … we should face our tasks with confidence based upon a dispassionate appreciation of attested merits. It is something gained if we at least escape the domination of inhibiting ideas.”

- Cecil Alec Mace, The Psychology of Study, 1963, p90

String theorists have arXiv to host their preprints (and censor out critical trackbacks), so they don’t really need Physical Review Letters for peer review. Once upon a time, peer reviewed journals were the mechanism for peer-to-peer communication. Now, however, the top journals are merely a prestige publicity/marketing mechanism used to communicate to the media and thus the public (advertising media). Top string theorists can now simply put out arXiv papers with “press releases”, instead.

That’s precisely the problem: all hard-core stringers do think that dissent from the dominant theoretical approach should be discouraged (i.e. ignored). This is because they feel that string theory is the only decent idea out there and – unless or until a better option emerges – it makes sense to focus on string and temper old-fashioned Popperian prejudices about theories being judged on their falsifiable predictions. Dissent amounts to defeatism, which lowers morale. The power of status quo is measured by its ability to ignore all opposition (critics and dissenters).

Woit also comments dismally on Witten’s defense of string theory:

“I don’t think any of his examples addressed the real issue, which is not that practical tests of string theory are far away, but that it makes no predictions, even if you had the technology to test it. To defend the falsifiability of string theory he gave the dubious argument that if table-top experiments showed quantum mechanics to be wrong, that would show string theory was wrong.”

Witten’s 1996 “defense” of string theory using spin-2 gravitons implicitly assumes that the rank-2 stress-energy tensor, which requires spin-2 graviton coupling, is correct. In fact, the rank-2 stress-energy tensor is long known to be false because it can’t model discontinuities: the discrete particles of mass and energy are not represented accurately by the stress-energy tensor! Instead, a falsely smooth distribution is required to force the stress-energy tensor to give a smooth Ricci curvature. In addition, you can write a rank-1 tensor, i.e. a vector, equation for gravity – e.g. the Poisson equation – which is analogous to a spin-1 force law in QED, so Witten’s argument is subject to the easily disproved assumption that the smooth distributions (required for the differential rank-2 stress-energy tensor in general relativity) are a perfectly correct model of mass distributions in quantum gravity! Duh! Even Phoebe grasped this!

Quantizing general relativity is the deeper argument for string theory, not falsifiable experiments. The problem is that this gives an opportunity to move the goalposts from tests to the need to overcome singularities in general relativity, by replacing them with Planck length strings of compactified dimensions.

“Wow, the theoretical physics field is crazy, now a bunch of ‘top’ physicists in string theory and other areas with untestable theories get 3 million dollars each for ostensibly over hyping their discoveries? It seems you should be a better PR guy than physicist now a days and you’ll be more successful. Plus as several people have stated earlier in posts, this just reinforces the old guard. They get to choose who gets next years prizes? Wow. As an aside, how can physicists who champion untestable, unproven ideas past any reasonable time frame remain so revered?” – Hack.

They remain so revered because it is taboo to send them a message, just as the Emperor’s New Clothes were a taboo subject for discussion. It’s no coincidence that society’s liars just happen to be protected by the rules of taboo. On the contrary, these people seek to hide their attire from criticism precisely because they work in subject areas which the media considers taboo for discussion. Why didn’t someone point out to Hitler that eugenics theory – promoted by revered Medical Nobel Laureate Alexis Carrel and Darwinian pseudoscientist Sir Francis Galton – was a lie simply because evolution utilises diversity? Answer: taboo. Nobody ever wants to discuss mechanisms, causes, and understanding the evidence objectively. People want lies and spin, and that’s therefore precisely what they get, masquerading as fact.

Another popular example: IPCC taboos on negative feedback data from cloud cover

The IPCC entirely ignores the cloud feedback data from Spencer et al. It’s precisely because of the lower air pressure and lower temperature above the surface that the rising water vapour expands and condenses: the Wilson cloud chamber effect. If you reduce the air pressure, the parcel of warm moist air expands, so its temperature falls, and cool air holds less water vapour, so the super-saturated (excess) water content then condenses into cloud droplets.

Go further up in altitude and the air gets cleaner, with less dust and fewer condensation nuclei. It’s exactly like a Wilson cloud chamber, in which air ions from cosmic rays act as condensation nuclei which attract water molecules and set off cloud formation. This produces vapour trails around the tracks of alpha and beta particles, and charged cosmic ray collision particles. Nigel Calder, former New Scientist editor, has correlated the inverse cosmic ray cycle with radiosonde temperature: http://calderup.files.wordpress.com/2012/03/101.jpg and http://calderup.wordpress.com/2012/03/03/climate-physics-101/ The lower the cosmic ray intensity, the greater the temperature! This is precisely what the Wilson cloud chamber mechanism predicts for cloud cover such as cirrus (around 15,000 feet). The more Wilson cloud cover, the greater Earth’s albedo, and thus the cooler the temperature because more sunshine is reflected away by the cloud cover. The fewer the cosmic rays, the less high-altitude cloud cover, and the warmer the surface is.
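
The sign of the effect follows from simple energy-balance arithmetic. Here is a zero-dimensional toy sketch (my own illustrative albedo numbers, not Calder’s data): raise the planetary albedo by a couple of percent through extra cloud and the equilibrium radiating temperature falls by roughly a degree or two.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def radiating_temperature(albedo):
    # Equilibrium energy balance: (1 - albedo) * S0 / 4 = SIGMA * T^4
    return ((1.0 - albedo) * S0 / (4.0 * SIGMA)) ** 0.25

for albedo in (0.28, 0.30, 0.32):            # assumed cloud-driven changes in reflectivity
    print(albedo, round(radiating_temperature(albedo), 1), "K")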

The Wilson cloud chamber is not an opinion or a speculative theory, it’s hard fact. Whether Calder’s correlation is based on the world’s best data for temperatures is another question, but I think this is the kind of mechanism that at least contributes to the Earth’s temperature fluctuations. By ignoring this physical mechanism entirely, the IPCC descends into pseudoscience. Their approach is to ignore Spencer and Calder, instead of objectively investigating mechanisms other than AGW.

The hockey stick curve is wrong due to negative feedback from cloud cover. The variation in cloud cover as a function of temperature opposes the effect of air temperature on tree growth and ice molecule sublimation. When the Earth is hot, there is more high-altitude cirrus cloud due to evaporation of water, and this reduces the sunshine for photosynthesis and ice sublimation. Fact. This effect opposes the effect of air temperature (which promotes tree growth). Fact.

The effect is quantitative: greenhouse experiments on the rate of growth of trees under varying air temperature do not allow for the fact that there is more cloud cover when the planet heats up. Therefore, the correlation used between air temperature and tree growth is inaccurate. I state that this is a quantitative effect on the error margins in the IPCC tree ring proxies: they underestimate the error bars on the temperature fluctuations. The actual air temperature varies dramatically, but because cloud cover increases with global warming, tree growth is less affected than their greenhouse data suggest.

Trees of identical species in similar soil grow at very different rates depending on exposure to sunshine for photosynthesis. (What stops this kind of objective quantitative research is the fact that it’s not going to profit anyone, apart from the taxpayer. The politicians and professional (quack) “scientists” are in it for research grants, political “saving the universe” hero worship/votes, etc. However, I’m more interested in the science.)

I’ve explained in detail what’s wrong with the “error bounds” in the hockey stick curve. Earth’s temperature fluctuates widely, but this has less effect on tree ring growth and ice sublimation than the IPCC believes, because as the air temperature goes up the cirrus cloud cover increases which partially cancels the increased growth of trees and the increased sublimation of ice (both of which depend on sunlight exposure to trees and ice, not just air temperature as the IPCC assume).

The models are incorrect because they omit the Wilson cloud chamber effect entirely, and Spencer’s negative feedback water data. All of the IPCC models are wrong!

Try saying this, and you are into classic taboo territory, in which it is socially nice to tell lies and pretend that CO2 is causing the temperature rise in the contrived hockey stick, which mashes together a horizontal line from tree ring proxies where naturally variable temperature swings are cancelled out by corresponding cloud cover variations, to more recent satellite data which shows a real temperature swing upward which isn’t seen in the tree ring proxy falsehood. There is a 50% chance of increasing or decreasing natural temperature swings, since a variable can increase or decrease with time (two possibilities). CO2 has an effect, but due to negative feedback (increased cloud cover to reflect sunlight away as the earth warms up), there is a thermostat in place which the IPCC exclude from the entire range of their climate models. The IPCC assumes (without evidence) that 100% of the temperature rise since satellite data arrived has been due to CO2 and related greenhouse gases.

To make this assumption look credible, the IPCC uses the lie of the tree ring proxy data, which don’t correlate to temperature since cloud cover affects photosynthesis, just as cloud cover affects the sublimation of oxygen isotopes from surface ice which goes on to form the ice-core “temperature record”. This allows them the hockey stick fiddle, and to claim that recent temperature changes are unprecedented, correlate with CO2 output, and are not natural random fluctuations. The geological evidence shows that negative feedback from cloud cover prevents CO2 rises from affecting temperature: most major CO2 level changes lag behind temperature swings. Temperature is regulated by the Wilson cirrus cloud chamber effect, which controls the natural global variations in temperature. When cloud cover decreases, temperature rises, and this results in a rise in CO2 due to a proliferation of CO2-emitting animals in the warmer climate, faster than CO2-absorbing rainforests can expand. Hence, in the geological record, temperature rises preceded CO2 rises. The IPCC approach to science is epicycles and lying propaganda.

There is no data correction method known for cloud cover; all these studies assume implicitly (never explicitly) that by taking more and more data, local variations in cloud cover will cancel out. This assumes that the mean global cloud cover is not varying as a function of the global mean temperature. In fact, as temperature rises, mean cloud cover increases due to evaporation, and this reduces the mean amount of sunlight available to trees. Tree ring growth consequently doesn’t correlate with mean global temperature as strongly as the greenhouse-calibrated tree ring proxies assume, so those proxies falsely suggest that temperature variations prior to, say, 1900 were smaller than the real temperature variations. This is why the “official” error bars on individual data sets of tree-ring proxies are far too small. The real fluctuations in individual temperature data sets would be far greater still than the fluctuations on the data in the “official” hockey stick curve.

This also applies to the temperature proxy of using the ratio of oxygen-16/18 isotopes, since sunshine on the surface of Vostok ice increases sublimation of ice to H2O vapour, regardless of the air temperature: sunlight supplies infrared energy directly to the water molecules in the ice crystals. This cloud cover effect on Vostok ice core data is ignored by the IPCC. There is no foolproof correction method. You simply can’t resolve two variables from one piece of measured data. You cannot deduce both temperature and cloud cover variations from tree rings or oxygen ratios.

There is direct evidence in the data since 1960, where tree ring proxies indicate a smaller temperature variation than direct temperature data measurements. Ice cores aren’t available over the entire earth’s surface for a fairly obvious reason (summer temperatures), so it’s polar data only. Tree rings are the major proxies, introducing random noise into the “temperature” data sets whose average gives the flat part of the hockey stick curve.

There’s a huge scatter and disagreement in the temperature proxies (oxygen isotope ratios, tree ring data) used for the hockey stick prior to circa 1900, when you take account of the cloud cover effect I’ve explained. So Mann averaged a huge number of differently fluctuating temperature proxies, to obtain the constant temperature part of the hockey stick. If that’s wrong, and the temperature really was fluctuating wildly before the 20th century (as critics claim citing the Medieval warm period and the iced Dickensian Thames in the 1850s), then the correlation between recent CO2 and temperature rise may not be so impressive. If the temperature is always fluctuating with a period of a century or so, then for any given century there’s a 50% chance of rising temperature and 50% of falling. So the correlation is not proof of causation. Even if you have a billion or a trillion falsely analyzed oxygen isotope ice core and tree ring data sets, if you ignore cloud cover variations (increasing cloud cover as global temperature rises), you’re not doing science.
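
A toy statistical sketch shows why averaging can manufacture flatness (my own illustration using synthetic random-walk series, not Mann’s data): generate many strongly fluctuating “proxy” series whose swings are mutually uncorrelated, and their average is far flatter than any individual series, the amplitude shrinking roughly as one over the square root of the number of series averaged.

import numpy as np

rng = np.random.default_rng(2)
n_proxies, n_years = 500, 1000
proxies = rng.normal(size=(n_proxies, n_years)).cumsum(axis=1)   # synthetic, strongly fluctuating series

typical_swing = proxies.std(axis=1).mean()     # typical swing of a single series
composite_swing = proxies.mean(axis=0).std()   # swing of the averaged composite
print(round(typical_swing, 1), round(composite_swing, 1))        # the composite is far flatter than any one series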

The only way the IPCC get a big disaster prediction is to assume positive feedback from water evaporation, boosting global warming. However, water vapour can’t have a self-feedback that’s positive, or else Earth would be boiling in a runaway greenhouse effect. Because Earth isn’t in a runaway greenhouse effect naturally, you know that the greenhouse properties of ocean evaporated H2O are somehow limited in nature. You don’t see anyone announcing that dihydrogen oxide must be banned because it could all evaporate from the oceans and roast the world.

Although water vapour absorbs IR, when too much water evaporates, it heats up, rises buoyantly, then expands and cools until the air gets saturated and the water turns to cloud droplets which shadow (and cool) the surface below. Dr Roy Spencer published some data on this negative feedback from clouds in monsoons; it seems H2O has positive feedback (as the IPCC assume) for small temperature rises due to CO2, but has negative feedback (opposing CO2) for higher temperature rises. This subtle effect is what’s been missed out. Clearly, it must exist or we wouldn’t exist; the Earth would be a runaway greenhouse world.
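
The distinction can be made concrete with the standard linear-feedback algebra (a minimal sketch with illustrative feedback factors of my own choosing, not Spencer’s numbers): a no-feedback warming dT0 becomes dT = dT0/(1 - f) with feedback factor f, so positive f amplifies the response, f approaching 1 runs away, and negative f (extra cloud albedo) damps it like a thermostat.

def warming_with_feedback(dT0, f):
    # Linear feedback: dT = dT0 / (1 - f); f >= 1 is the runaway regime
    return float('inf') if f >= 1.0 else dT0 / (1.0 - f)

for f in (0.9, 0.5, 0.0, -0.5, -1.0):        # assumed illustrative feedback factors
    print(f, round(warming_with_feedback(1.0, f), 2))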

The danger is that science is being perverted by the usual conflation of data and politically correct interpretations (involving “reasonable-looking” assumptions that must not be questioned for political correctness reasons, not scientific reasons). It is a fact, not a mere hypothesis, that there is a conflict between individualism and the union of people into tribes, religious sects, and so on. If you want evidence that mainstream science can become corrupted, take a look at medical Nobel Laureate Alexis Carrel’s 1935 book “Man the Unknown”, which suggested gas chambers as a totally civilized and humane method to deal with people deemed undesirable to the state government. The book was published with an enthusiastic foreword in Germany the next year, and its suggestion was later implemented with disastrous consequences. The origins of “civilized” gas chamber eugenics go back to “greats” like Sir Francis Galton, who asserted Darwinian evolution could be put to good use to purify humanity. This is the danger: it has happened before when fashionable authority has been worshipped by the public and its attending media in Nazi and USSR scams. If it is an “insult” to claim this occurs in “democracies”, then democracies as they function today need insulting badly. Very badly.

The eugenics society was still powerful in Britain in the late 1940s (Penguin reprinted Carrel in 1946, omitting to mention the use of gas chambers and Carrel’s collaboration charges), until the full evidence of eugenics results – in photos and films – was published after being presented in evidence at Nuremberg. But there are lots of examples. Marxism as a perversion of science went right through a major segment of Western academic idealism from 1917 to the end of the Cold War, with endless physicists claiming that nuclear power is proof we must embrace socialism or be vapourised. They were wrong. In those parts of science where personal attacks take the place of scientific criticism, there is a problem of groupthink (corruption due to funding pressures) which is like fascism. Fascism is the short cut whereby you assert and suppress dissent. Pseudosciences like eugenics were promoted with paranoid scare-mongering tactics; why take the risk with the Jews? If some famous and fashionable people become dictators who suppress the facts, then they need to be insulted, which is the only option left after they have banned all discussion and rational debate about understanding the facts scientifically. Conflicts don’t occur because the weaker side prefers war to rational discussion, but because the weaker side is banned from discussion. The problem with AGW dogma is the money sucked into the industry. Get trade unions fighting for the jobs of workers who build solar panels and windfarms, and the media supporting them, and there will be a political intervention to control science. Once you have invested too much in something, you have to see it through even if it turns out wrong. There is too much momentum to stop or reverse it. If Michael Mann or Phil Jones reversed their position, that wouldn’t reverse the financial bandwaggon built on the back of AGW dogma. There is no conspiracy, just momentum. If a large ball of snow begins rolling down a ski slope, getting bigger as it goes, you don’t need a conspiracy to explain why people who get in its way disappear suddenly.

“Never ascribe to malice that which is adequately explained by incompetence” (Napoleon). AGW started out in 1896 with the “genuine” idea of Arrhenius (famous for his reaction rate equation) that CO2 will increase temperature. He falsely believed, without any evidence about cloud cover and its negative feedback, and without explaining why there is not a runaway greenhouse effect from water vapour on Earth, that trace gases were responsible for ice ages. If you look at the actual correlations in Vostok ice core data for trapped CO2 bubbles and oxygen isotope ratios, you see some correlation, but, vitally, in many spikes the temperature changes slightly before the CO2 does. So it suggests that a temperature rise killed off some rainforests and thus reduced global photosynthesis of CO2 to oxygen, allowing CO2 levels to rise as a result of temperature rises. This is the exact opposite of the CO2 temperature-driving mechanism Arrhenius speculated about. Arrhenius was wrong. The problem I have is with authority being mistaken for fact in mainstream science. I think the whole basis of science as a political or officialdom-based totem-pole of power is wrong. This is not about “conspiracy”.

We see this in the January 1986 Challenger space shuttle enquiry. The engineers testing the rubber O-rings knew that an immense risk was being taken by launching the shuttle in cold conditions, because the rubber was brittle and leaked fuel when icy cold. Their boss however had to maintain the contract with NASA, or they might all lose their jobs. He asked them sarcastically if he should tell NASA to delay launch until April. No concerns were raised with NASA until the shuttle exploded and the Presidential inquiry with Feynman was done. It’s no good claiming a “conspiracy”. Once you have an error but money is flowing, you don’t need to order people to shut up. The pressures to conform in the usual management structure prevent any clear message being passed upwards to a level where it will do any good. In fact, like the “Emperor’s New Clothes”, even if the Emperor at the top hears that he’s been conned, he is in a jam and can’t do anything without becoming unpopular or looking even more of a complete fool. So he keeps his Godly act up until he is out of power, or dies.

You can never be “wrong” when you want to save the planet. It’s not about science, so much as emotional claptrap. The same is true of superstring theory: it’s emotionally defended as “beautiful” and “the work of great minds”. This kind of emotion is a goalpost that is switched for falsifiable predictions whenever needed. It’s pure hubris, the kind of propaganda poured out by Dr Goebbels and later the Moscow based World Peace Council during the Cold War. There is a point at which conformity becomes dogma, and professionalism becomes conformity. Then professionalism is concerned with dogma. At some point, however, let’s assume that the critics come up with a viable alternative theory. By that time, the mainstream science has hardened into an orthodoxy supported by billions of people and trillions of dollars. The facts will then be received about as well as Jesus was by the High Priest. The only option is to ignore or shoot the messenger. This is where the fascism comes in. Take the “Physics Forums” issue. I posted a discussion thread on gravity. The only people to comment hadn’t bothered to read my paper, but asserted it was wrong because of errors in other people’s papers, or asserted that spin-2 graviton dogma is a proved fact because two masses exchanging gravitons must exchange spin-2 gravitons in order to attract (which is true, but irrelevant, since two masses exchanging gravitons would actually repel if a two-mass universe existed, which it doesn’t; the observed “attraction” results from repulsion because there are lots of masses all around). Anyway, one string theory student commented there that if he thought I was really correct, he would give up physics, because the universe would be ugly and mechanical and the elegance of the maths of string theory was the whole reason for his passion in physics. I think this is really bitter fascism, this complete and unashamed loss of scientific honesty in favour of fashionable lies. Someone else, Professor Sean Carroll, who has Feynman’s old desk at Caltech, blogged that there is no censorship of alternative ideas, and that if anyone really has the final theory of quantum gravity he would see it into print. He hasn’t done so, but by making such false claims he appeases those who would otherwise be worried by groupthink in science.

(Furthermore, if only a “final theory” is going to be supported, that would rule out Newton’s and Einstein’s papers, and everything previously done in science. If you claim that everyone is free to pursue any new idea in science because, should they find the final theory, some Professor claims he will publish it, you’re ruling out anything short of a final theory, which would rule out everything in science to the present time. In other words, it’s too stringent a criterion. Newton and Einstein in any case didn’t work out new ideas in complete isolation, they relied on the data and tools of people like Galileo, Brahe and Kepler, and Riemann, Levi-Civita and Ricci. If you block off alternative ideas unless or until a “final theory” emerges from them, it’s just fascism, because it takes away the motivation to try to publish the intermediate stages on the way to a final theory in the alternative framework. AGW does the same thing, by asserting authority to suppress controversy using peer-review politics.)

This was the Lebensraum problem tackled by a German chancellor in the 1930s. The traditional solution to overcrowding is starvation. AGW will help ensure this, because the economic resources being invested in AGW will take away those resources from the usual poverty-fighting efforts as the global recession deepens. You can’t have your cake and eat it. Sustainable wind power and carbon balancing schemes are expensive, and what is spent preventing an imaginary AGW disaster will be unavailable to help prevent mass starvation when harvests fail. Debt will limit responses. However, I don’t think any doomsday scenario is real. There are automatic feedback mechanisms in place. When overpopulation really gets bad, most people (with the exception of some regional irresponsibles) will start having smaller families, because the expense of having many kids is excessive. Similarly, when pollution really gets bad, if something can be done about it, people will do it. E.g., the histories of the New York and London sewage systems. People live with problems until they become a real nuisance, then solve them. Predictions of doom creeping up by accident while everyone looks the other way, except for scientific journals that censor alternatives and criticisms of the lying propaganda, are absurd (see Herman Kahn’s “The Resourceful Earth”). Doom creeps up because of censorship of criticisms by mainstream dictatorial fascist movements which disguise themselves as planet-saving, zero-risk groupthink idealism. The pacifist movement led by Cyril Joad’s Oxford Union 1933 pacifist motion (which encouraged the new dictator Hitler to do what he wanted) is a perfect example of the “why take the risk?” approach of these idealists. They always claim – without proof – that the only risk is from the alleged danger they hype (i.e., the “risk” that Britain would become “war minded” if it tried to stop the Nazis by force rather than by peaceful collaboration, civilized talking, peace deals, and mutual cooperation pacts). They ignore the risks from the courses of action they propose, while exaggerating the risks from the course of action they oppose. In order to prevent criticism, they shoot the messenger in fascist style whenever anyone disagrees with them, e.g. see Cyril Joad’s attack on Winston Churchill in his August 1939 best-seller “Why War?” The danger since the time of Jeremiah has been excessive doom-mongering (usually for fame, political power, or financial profit), not doomsday. Doomsday claims are used to “justify” costly political moves like unjustified wars, dictatorships, and genocide. Ignoring critics is key to this ongoing process.

Update: Dr Woit has published an article in the left-wing Italian newspaper Il Manifesto and comments depressingly on his blog:

http://www.math.columbia.edu/~woit/wordpress/?p=4997

“… it’s now all too clear where we end up: the textbooks of string theory and supersymmetry have already been written, and that will be codified as humanity’s best understanding of fundamental physical reality for the indefinite future. …”

This historically has always happened in what the media call “science”: the social education side of knowledge (exam syllabus and media hero-worship of alleged-“genius” bigots) forces fundamental physics to turn its reigning “best guess” theory into educational dogma. Then the best guess theory (flat earth/creationism/epicycles/vortex atoms/aether) hardens into orthodoxy, and fascist doorkeepers shoot down alternative ideas with the simple lie that anything that disagrees with the mainstream belief must be “wrong” by definition. The arguments for this are:

(1) There are no viable alternatives, so you must support it or you are a terrible proponent of anarchy. (This amounts to saying if you live in dictatorial regimes, you must support dictatorship because you have “no alternative”.)

(2) For harmony, civilized behaviour and politeness in science, everybody must always sing from the same hymn sheet for the common good or for socialist/fascist/environmentalist/universe preserving/antinuclear ideals. Otherwise, the result is confusion or ugly chaos. (This amounts to the support of power through corrupt unity; sheer group-think power politics. By analogy, the argument would be that if you oppose USSR/Nazi dictatorship, you should join it, because then you will have more chance of reforming them in the direction you want, than you ever had while on the outside of that group. If you refuse to cooperate with the dictators, Dr Goebbels becomes angry with you, calling you a “rebel”.)

(3) Science is defined by human socialist consensus, and not by experiments or confirmed predictions. (Despite the lessons of Ptolemy’s epicycles, Maxwell’s mechanical aether, Kelvin’s vortex aether atoms, Witten’s M-theory, and so on, this statement is still taboo. Those brainwashed in lies will claim that science is a consensus of experimental evidence, despite string theory, and then they move the goalposts specifically to excuse the difficulties with today’s dogma.)

My first contact with the problems of science was when my hair changed colour from red to brown as a teenager, despite the reigning educational genetics dogma that genes produce permanent, unchangeable characteristics. This was not dye, and it was not a speculative theory. We inherit two versions of each gene, and the old genetic theory of dominant and recessive genes is wrong: no gene is 100% dominant or 100% recessive. (I’m not saying that hair colour is controlled by just a single dominant gene, but gene switching does control colour change.) Further, the “actual” percentages of deviation from Mendel’s simplistic dominant/recessive genetics theory (based on peas) are simply not fixed constants, as was originally believed when epicycles were inserted into the original theory to allow for discrepancies. They are variable, depending on circumstances: hence “gene switching” between the supposedly dominant and supposedly recessive gene is possible. During life, the concentrations of different chemicals in the blood stream vary (due to hormones, diet, stress, exercise, etc.) and these chemical changes can sometimes be sufficient to cause “gene switching”; the “dominant” gene in the pair of genes in each cell is not fixed, but depends on the chemical environment it is immersed in. Hence the “genetic” diseases inherited identically by both individuals in a pair of identical twins do not occur with equal likelihood in each twin. Although they each contain the same pairs of genes, the dominance of each gene in a pair within an individual is a function of the environmental circumstances (work stress, diet, exercise, sunshine exposure, etc.). Therefore, it is possible for differences to occur between identical twins, due to gene switching.

Suppose you have a faulty gene for the protein p53, which helps trigger the repair of breaks in DNA strands that result from free radicals and natural water molecule bombardment at body temperature. If the DNA breaks are not repaired rapidly enough (before further breaks occur), the DNA fragments when eventually repaired can be transposed (out of order), causing a cancer risk. Therefore, if gene switching at some point during your life turns on a faulty version of p53, you are at risk from cancer. If this gene switching does not occur, your good version of p53 remains in operation, and you have very much better protection. It follows, then, that the old fatalistic idea that “genes are immutable” is false dogma. The way to prevent cancers and other genetically related diseases is to understand the epigenetic mechanism by which “dominant” genes are expressed, as a function of their chemical environment. Thus, the action of some empirically discovered cancer drugs, whose chemical mechanisms are not well understood, is probably related to gene switching. Some of these chemicals probably work by gene switching: turning off the genes of defective cancer-suppressing proteins, and turning back on the working versions of those genes. This would explain some statistical anomalies in the effectiveness of these treatments. E.g., a person who has inherited two versions of a defective cancer-suppressing gene will be at risk from cancer from an early age and will not respond to these chemical treatments, because switching from one defective gene to the other equally defective gene will make no difference to the cancer situation. Most people will statistically be likely to have only one bad gene, and therefore will respond to treatment. In summary, the switching role of drugs used for disease treatment at present may be obfuscated by ignorant accepted dogma. This affects the funding and the research priorities.

Another emerging taboo is the effect of insulin-like growth factor 1 (IGF-1) in the ageing process and disease. By promoting rapid cell division and inhibiting cell death, high levels of IGF-1 in the blood promote cancer proliferation and ageing. Goodwin and others showed in 2002 (Journal of Clinical Oncology, v20, pp42-51) that excess insulin promotes cancer growth and correlates with mortality. (Unfortunately Goodwin’s research studied the end results in people who already had cancer, not the risk of getting cancer in the first place as a function of insulin level.) Malignant cells divide continuously, have high energy requirements, and cannot survive fasting. Non-cancer cells can regulate their metabolism to survive fasting. Fasting therefore affects cancer risks. Pity this isn’t better researched (drug companies have a very different approach!).

Dr Woit: how to be greasy on the subject of Gerard ’t Hooft

’t Hooft won a Nobel Prize share for proving mathematically that the Higgs mechanism used for electroweak symmetry breaking in the Standard Model of particle physics is renormalizable. I.e., at very high energy the Higgs mechanism (which makes weak bosons massive at low energy) allows symmetry to exist between electromagnetic and weak interactions, by making the weak gauge bosons (W and Z bosons) massless. Since it is the mass of the weak bosons at low energy which slows them down and makes the weak force less strong than the electromagnetic force at low energy, taking away their mass at high energy makes the weak force coupling the same as that of the electromagnetic force, thus “unifying” electromagnetism and weak interactions. However, as people like Dr Woit have pointed out, the problem with electroweak unification is that the weak force is chiral (only acting on left-handed helicity spinors), but the electromagnetic force isn’t supposed to be. Maxwell in 1861 argued that magnetism is due to the spin of field quanta (he called them vacuum vortices or aether, but that was the fashion in 1861), the result of moving charges imparting some angular momentum to the force-mediating vacuum field quanta. According to Maxwell, therefore, the fixed direction of the curl of the magnetic field which loops around a wire carrying an electric current is evidence that electromagnetism is a chiral effect, so electromagnetism has a preferred handedness. This is completely ignored in textbook QFT. The chiral handedness of electrons for the weak force only emerges as a function of their velocity. At low velocity they don’t have a definite helicity, just a spin whose axis is not necessarily aligned with the direction of motion. However, as the velocity approaches that of light, the spin becomes aligned along the direction of motion due to relativity (i.e. the Lorentz contraction, which flattens the electron): this is helicity. For an electric current of 1 amp in a wire, the electrons typically drift at only about 1 mm/second, so you don’t expect much helicity, since their velocity is so small compared to the velocity of light. However, the magnetic force is relatively weak, and the way it emerges as a function of the velocity of the electrons is what you would expect for helicity of spin on the basis of Maxwell’s model of magnetic fields. All of this is ignored in the Standard Model, which does not explain the emergence of the left-handed weak force when electroweak symmetry breaks at low energy.
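As a rough check on the drift-speed figure quoted above, here is a minimal sketch using the standard relation v = I/(nAe); the copper carrier density and the thin 0.1 mm² wire cross-section are illustrative assumptions, not values taken from the text:

```python
# Electron drift speed for a 1 A current: v_d = I / (n * A * e).
# Illustrative values only: the copper carrier density and the thin 0.1 mm^2 wire are assumptions.
I = 1.0                 # current, amperes
n = 8.5e28              # free-electron density of copper, per m^3 (standard textbook value)
A = 0.1e-6              # wire cross-sectional area, m^2 (0.1 mm^2, a thin hook-up wire)
e = 1.602e-19           # electron charge, coulombs

v_drift = I / (n * A * e)    # metres per second
c = 3.0e8                    # speed of light, m/s

print(f"drift speed ~ {v_drift * 1e3:.2f} mm/s")   # of order 1 mm/s
print(f"v/c ~ {v_drift / c:.1e}")                  # tiny: negligible helicity from drift alone
```

The point of the sketch is only that v/c for conduction electrons is of order 10^-12, so any helicity effect from the drift motion is minute, as stated above.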

In his post http://www.math.columbia.edu/~woit/wordpress/?p=5022 , Dr Woit states:

“Gerard ’t Hooft in recent years has been pursuing some idiosyncratic ideas about quantum mechanics … Personally I find it difficult to get at all interested in this (for reasons I’ll try and explain in a moment) … One of ’t Hooft’s motivations is a very common one, discomfort with the non-determinism of the conventional interpretation of quantum mechanics. The world is full of crackpots with similar feelings who produce reams of utter nonsense. ’t Hooft is a scientist though of the highest caliber, and as with some other people who have tried to do this sort of thing, I don’t think what he is producing is nonsense. It is, however, extremely speculative, and, to my taste, starting with a very unpromising starting point.

“Looking at the results he has, there’s very little of modern physics there, including pretty much none of the standard model (which ’t Hooft himself had a crucial role in developing). If you’re going to claim to solve open problems in modern physics with some radical new ideas, you need to first show that these ideas reproduce the successes of the established older ones. From what I can tell, ‘t Hooft may be optimistic he can get there, but he’s a very long way from such a goal.

“Another reason for taking very speculative ideas seriously, even if they haven’t gotten far yet, is if they seem to involve a set of powerful and promising ideas. This is very much a matter of judgement: what to me are central and deep ideas about mathematics and physics are quite different than someone else’s list. In this case, the central mathematical structures of quantum mechanics fit so well with central, deep and powerful insights into modern mathematics (through symmetries and representation theory) that any claim these should be abandoned in favor of something very different has a big hurdle to overcome. Basing everything on cellular automata seems to me extremely unpromising: you’re throwing out deep and powerful structures for something very simple and easy to understand, but with little inherent explanatory power.”

’t Hooft commented on these remarks on the blog post (August 13, 2012 at 6:24 pm):

http://www.math.columbia.edu/~woit/wordpress/?p=5022&cpage=1#comment-121935

“Even though my work is here sketched as “not even wrong”, I will avoid any glimpse of hostility, as requested; I do think I have the right to say something here in my defense … I want to stress as much as I can that I am striving at a sound and interesting mathematical basis to what I am doing; least of all I would be tempted to throw away any of the sound and elegant mathematics of quantum mechanics and string theory. Symmetries, representation theory, and more, will continue to be central themes. I am disappointed about the reception of my paper on string theory, as I was hoping that it would open some people’s eyes. Perhaps it will, if some of my friends would be prepared to put their deeply rooted scepsis against the notion of determinism on hold. I think the mathematics I am using is interesting and helpful. I encounter elliptic theta functions, and hit upon an elegant relation between sets of non-commuting operators p and q on the one hand, with integer, commuting variables P and Q on the other. All important features of Quantum Mechanics are kept intact as they should. I did not choose to side with Einstein on the issue of QM, it just came out that way, I can’t help that. It is also not an aversion of any kind that I would have against Quantum Mechanics as it stands, it is only the interpretation where I think I have non-trivial observations.
If you like the many world interpretation, or Bohm’s pilot waves, fine, but I never thought those have anything to do with the real world; my interpretation I find far superior, but I just found out from other blogs as well as this one, that most people are not ready for my ideas. Since the mud thrown at me is slippery, it is hard to defend my ideas but I think I am making progress. They could well lead to new predictions, such as a calculable string coupling constant g_s, and (an older prediction) the limitations for quantum computers. They should help investigators to understand what they are doing when they discuss “quantum cosmology”, and eventually, they should be crucial for model building. G. ’t H.”

Dr Woit then responded (August 13, 2012 at 6:39 pm):

http://www.math.columbia.edu/~woit/wordpress/?p=5022&cpage=1#comment-121939

“Prof. ‘t Hooft,

“Thanks for writing here with your reaction to and comments on the blog posting. I hope you’ll keep in mind that I often point out that “Not Even Wrong” is where pretty much all speculative ideas start life. Some of the ideas I’m most enthusiastic about are certainly now “Not Even Wrong”, in the sense of being far, far away from something testable.

“While my own enthusiasms are quite different than yours, and lead me to some skepticism about your starting point, the reason for this blog posting was not to launch a hostile attack, but to point others to what I thought was an interesting discussion, one which many of my readers might find valuable to know about.

“Good luck pursuing these ideas, may you show my skepticism and that of others to be mistake…”

Subsequent anonymous comments, which Dr Woit allowed to be published, falsely claimed that ’t Hooft was wrong because Bell’s inequality had dismissed deterministic hidden-variable theories:

Anonymous says:
August 13, 2012 at 8:10 pm

http://www.math.columbia.edu/~woit/wordpress/?p=5022&cpage=1#comment-121956

“Prof. ‘t Hooft,

“While I am not familiar with your particular work, I am familiar with previous explorations on the theme of interpretations on quantum mechanics and determinism, particularly with old things such as de Broglie-Bohm’s theory, Bell’s contextual ontological model, Kochen-Specker’s model, and newer things such as Harrigan & Spekkens classification of ontological models, Lewis et al. psi-epistemic model, Hardy’s excess baggage theorem, etc. But after studying them with interest for a while, I gradually developed the opinion that they have no good motivation, use uninteresting mathematics, and have been generally fruitless. Since then I have stopped paying attention to this area of research …”

What Dr Woit should learn is that Darwin was deterred from publishing his evolution theory for twenty years, not just because of the religious taboo, but because of the Lamarckian evolution theory which came earlier, but contained errors and was rejected. There is an industry of “peer” review censorship liars, which responds to every new advance with something like:

“There is nothing new under the sun. While I am not familiar with your particular work, and can’t be bothered to read it and check it carefully, I am familiar with previous explorations on the theme. Because these are well known, their authors are of higher profile than you are, which proves them more intelligent than you. If they got it all wrong, what hope is there that your paper contains anything worthy of being published? Previous research had no good motivation, used uninteresting mathematics, and was generally fruitless. Since then I have stopped paying attention to this area of research. If you can convince someone like me, who won’t read your work or check it, that it is correct, then I will read your work. But note: I won’t read your paper until after you have convinced me. If I need to read your paper to be convinced, then too bad…”

Deterministic hidden variables theories and Bell’s inequality have nothing to do with real world physics, which isn’t 1st quantization. It’s like epicycles. You can still use Ptolemaic epicycles today to give rough predictions of apparent (two dimensional celestial sphere) planetary positions, despite the theory having nothing to do with real world physics (planets have elliptical orbits around the sun, not epicycle orbits around the earth; Ptolemy’s model was complex and failed to account correctly for the distances of the planets from the earth). Just because 1st quantization looks good at first glance (just as the sun appears to orbit the earth, at first glance), does not prove it to be relativistic and correct. This is of course completely taboo, despite being factually correct. The industry of “wavefunction collapse” popularization has succeeded in selling false epicycles to the public. There is no indeterministic wavefunction that collapses upon measurement; multiple wavefunctions exist, one wavefunction amplitude per path, and it is the interference of these multiple wavefunctions which gives rise to indeterminism. We still use words like “sunrise” even though we know that the earth’s rotation is bringing the sun into our field of view; the sun is not orbiting the earth daily. This is the situation with 1st quantization; the taboo over mechanisms in quantum field theory allows both 1st and 2nd quantization to co-exist side by side. Fine for rough calculations. Not so good for understanding what is really going on. When is multipath interference of many wavefunctions going to replace the non-relativistic single-wavefunction “collapse” dogma?

History is the problem. Dirac in 1928 only half introduced 2nd quantization: he made the 1st quantization Schroedinger equation relativistic by his relativistic spinor equation for the Hamiltonian energy (which replaced the non-relativistic Hamiltonian of 1st quantization). While the spinor Dirac introduced implied that the field was quantized, Dirac failed to realize that the single wavefunction of the Dirac equation (the Schroedinger equation with Dirac’s relativistic Hamiltonian energy operator) was rendered obsolete by the quantized field. Interviewed in America when the Weyl gauge theory of quantum electrodynamics was published, Dirac stated that he didn’t understand Weyl’s work. The fact is, there is an amplitude (wavefunction) for every possible quantum field interaction with a charge, so you require a path integral, summing all the amplitudes, to make a probabilistic prediction of what will occur due to the interference of those amplitudes. This was finally grasped by Feynman, but continued to be opposed (and rejected) until Dyson battled Oppenheimer in 1948. There were numerous spurious reasons given by “greats” like Einstein, Bohr (who said that modelling electron orbit paths with a path integral was against the dogma of the uncertainty principle) and others, to dismiss path integrals as obviously wrong. They aren’t, but the taboo over their reality persists today, to the detriment of progress in physics. Instead of having progress in the mechanism of quantum field theory well funded and published, it is censored out, and the messengers with useful confirmed predictions are dismissed by people who are too grand even to read the messages and check them.

A long term solution to this problem would involve replacing today’s subjective and abusive form of so-called “peer” review (“your theory is about evolution so it must be wrong, because Lamarck came up with a theory of evolution which turned out to be wrong, and he is more famous and thus more intelligent than you are!”) with objective and scientific genuine peer review, where the “peer” reviewers are actual peers, interested in communicating progress in science more than in publishing fashionable papers by fashionable scientists. You know how this works in the real world. You put forward a confirmed prediction, and the response is a rhetorical question (to which answers are not permitted) or inaccuracy-filled “responses” which ignore the point you make and point out the errors in somebody else’s theory instead. As you calmly correct the errors and give scientific answers to rhetorical questions, the “critic” becomes more and more infuriated, instead of being won over. Such people are not behaving rationally. The problem with science is not peer review, therefore, but the absence of peer review. If there were constructive criticism, there would be no problem. Instead, there is dogmatic bigotry masquerading as peer review. The corruption of peer review by power is getting worse.

One way to look at this is the nature of evolution or special relativity in the context of Popper’s definition of science. In special relativity, Lorentz contraction, time-dilation and mass increase are all functions of the velocity of a particle relative to the observer. This relativism is also present in Maxwell’s equations, where a magnetic field is observed if an electric charge is in motion relative to the observer. This is all well justified by experiments. What’s not so clear is whether the great utility of relativism is a proof that there is no absolute motion or absolute time. This is where pedagogical sophistry comes in. What is science? Is it a proof of the nature of the universe, or just a way of making some falsifiable predictions? The teacher wanted both, despite the failures of past theories. The teacher had to attract students, and you do that better by offering truths about curved space or extra dimensions than by just making predictions with a handy mathematical model (whose ultimate physical validity is controversial).

Popper insisted that science is not absolute truth and is just a best guess theory, justified by the failure of experiments to disprove (falsify) it. Occam’s razor says science is the simplest theory to fit the facts. Feyerabend says in his book “Against Method” that science is pragmatic: it is whatever method works best for those who have to use it. Thus, if the people using a theory don’t need very great accuracy they can choose non-relativistic physics such as 1st quantization, but if they want better accuracy they have to go over to using relativistic 2nd quantization quantum field theory. Similarly, the Bohr atom is still taught in high school physics courses simply because it uses less sophisticated mathematics than 1st quantization quantum mechanics or 2nd quantization path integrals. This mathematics effect is very important: it introduces Orwellian doublethink into physics. People get used to false models being used for pragmatic purposes, to facilitate quick calculations.

Anyone trying to point out the “correct” theory in this situation is then dismissed as being ignorant of the fact that simplistic theories can be used for convenient calculations. In other words, wrong theories end up surviving and cluttering up the scene, preventing the right questions from being asked (since they allow the goalposts to be changed whenever a question is asked), and advanced mathematical theories are less widely understood than pedagogical sophistry like the claim that general relativity has experimentally proved space to be curved. Eugenics is such a wrong theory. Popper’s idea that you can falsify a theory by experimental test is naive. Anyone can usually add epicycles to a wrong theory to bring it into agreement with the data. The world is complicated, and sometimes it is impossible to avoid modifying a theory to include variables which were originally omitted and ignored. In other cases, it might be best to re-examine the foundations of the theory when experiments come out against it.

Newtonian gravity failed to predict the precession of the perihelion of Mercury. This did not “falsify” Newtonian gravity. People use the most useful available theory for the problem they have. There are issues with all theories, but this doesn’t falsify all theories in any sense. If science research runs up against a wall, there are two popular pieces of advice: (1) “when in a hole, stop digging”, and (2) “when going through hell, keep going.” These contradict. Diversity is needed in science, because it’s a subjective judgement call when the groupthink herd decides to move away from one particular idea, or to approach another idea. If everyone sticks to existing fashion, you end up with a technician-led science which is just concerned with applying and using existing theories like superstrings, not developing new ideas. So the superstring technicians keep hyping their fiddling as being “new” in press-releases. Likewise, Ptolemaic epicycles were added to and modified for generations, giving the appearance of a dynamic, progressive scientific discipline, with spin-offs like trigonometry. It is the cult-like dogma of reigning “scientific” orthodoxies which leads to uninformed claims about them being justified by predicting gravity (non-quantitatively). Bertrand Russell said that, as a rival theory to evolution, God could have created the universe 5 minutes ago, including the fossil record, the works of Darwin, and everybody’s memories, just for entertainment. You cannot disprove this “simple theory”, because there are no falsifiable predictions. Just like superstring theory.

Nobel Laureate Prof. Josephson has a discussion of arXiv initially barring a paper of his at http://www.tcm.phy.cam.ac.uk/~bdj10/archivefreedom/main.html

Professor Josephson’s discussion of arXiv censorship finishes:

“It is true, of course, that standards should be maintained. But the problem with the uninspired persons who operate the archive is that they seem unable to make the distinction between ‘nutty’ ideas (which either have no scientific meaning or contain serious errors), which should be barred from the archive, and unusual ideas which may or may not be right, and also may turn out to be important, which should be allowed on the archive.”

The arXiv itself states at http://arxiv.org/help/endorsement :

What are my responsibilities as an endorser?

“The endorsement process is not peer review. You should know the person that you endorse or you should see the paper that the person intends to submit. We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author if the author is unfamiliar with the basic facts of the field, or if the work is entirely disconnected with current work in the area.”

This bans endorsers from permitting radical new ideas (“entirely disconnected with current work”), while permitting the more usual incremental development of stringy ideas. I.e., Copernicus would be banned, since his solar system was entirely disconnected from current work on epicycles in the Earth-centred universe. Likewise, other radical new breakthroughs from outsiders like Patent Examiner Einstein would not fit into the current work. This disconnection from current work is the whole definition of a radical breakthrough. If arXiv had been around before quantum theory, it could have kept physics classical by deleting quantum submissions and blocking the hosting of those papers. “Better safe than sorry” has two sides to it when it comes to censorship. If you want to ban ideas without reading them to check them (you don’t have time, like Hitler), you’re into Nazi book burning territory. It’s amazing how many Guardian or Washington Post newspaper readers have no concern about the early symptoms of dictatorial fascism, and are prepared to declare that the press is free because their bigoted and incorrect views are represented without informed debate. In true Orwellian “1984” style, emotional claptrap is used to “justify” the banning of any meaningful dissent against the fashionable and popular ideology which aims to “save the world” by causing an insignificant decrease in carbon emissions at economically disastrous cost. “Four legs good, two legs bad”, as Orwell put the endless “protestor” bleatings in another book. This endless chanting of hype and half-truths actually works. That’s why they use it in adverts!

Update (5 September 2012): Dr Woit on the alleged abc conjecture proof by Shinichi Mochizuki

http://www.math.columbia.edu/~woit/wordpress/?p=5104

http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20IV.pdf

“In the case of the Szpiro proof, the techniques he was using were relatively straightforward and well-understood, so experts very quickly could read through his proof and identify places there might be a problem. This is a very different situation. What Mochizuki is claiming is that he has a new set of techniques, which he calls “inter-universal geometry”, generalizing the foundations of algebraic geometry in terms of schemes first envisioned by Grothendieck. In essence, he has created a new world of mathematical objects, and now claims that he understands them well enough to work with them consistently and show that their properties imply the abc conjecture.

“What experts tell me is that, very much unlike the case of Szpiro’s proof, here it may take a very long time to see if this is really a proof. They can’t just rely on their familiarity with the usual scheme-theoretic world, but need to invest some serious time and effort into becoming familiar with Mochizuki’s new world. Only then can they hope to see how his proof is supposed to work, and be able to check carefully that a proof is really there, not just a mirage. It’s important to realize that this is being taken seriously because such experts have a high opinion of Mochizuki and his past work. If someone unknown were to write a similar paper, claiming to have solved one of the major open questions in mathematics, with an invention of a strange-sounding new world of mathematical objects, few if any experts would think it worth their time to figure out exactly what was going on, figuring instead this had to be a fantasy. Even with Mochizuki’s high reputation, few were willing in the past to try and understand what he was doing, but the abc conjecture proof will now provide a major motivation.” [Emphasis added to key sentences in bold print.]

This is precisely analogous to the rebuilding of the foundations of quantum field theory and the Standard Model built on it, which yields quantum gravity with checked predictions. The whole way of thinking about the “problems” in unifying the Standard Model with general relativity is traditionally biased in favour of the existing framework, built on foundations which are inadequate and misleading in some key respects. This means that, as with Mochizuki’s proof, you have a situation where “few if any experts would think it worth their time to figure out exactly what was going on, figuring instead this had to be a fantasy.” In other words, there is a pedagogical and marketing problem in presenting a predictive theory that renovates the foundations of a subject in order to work.

This statement by Dr Woit is enlightening in view of his statements in the past about “elitism” in science, which are only partly helpful. The world has always had different kinds of “elitism”:

1. Dictatorial obfuscation. Become “respected” by force or by cunning sneakiness. Appear mysterious by using secrecy or by obscuring the unpleasant facts that most people don’t want to hear.

2. Innovate, predict, check results, correct errors.

Both Woit and Witten have commented on “elitism” unhelpfully, by failing to distinguish which “elitism” they refer to. The word has two diametrically-opposed meanings. It can mean the elite leadership of a dictatorship, media or popular fashion, or it can mean an attempt to achieve genuine scientific integrity (hence people like Galileo being put under house arrest for innovation). It is convenient for most people to conflate both these opposing meanings together into Orwellian “doublethink”, so as to pretend that “science” is an all-inclusive term both for teaching “established” educational group-think dogma about today’s fashionable theories, and for innovating and being critical. Then they can switch between opposite meanings of the same word when people object to “elitism”. If critics object to “elitism”, they’re objecting to ignorant dictatorship, but Witten’s letter to Nature seems to conveniently move the goalposts at the critical moment, interpreting “elitism” not as ignorant dictatorship but as scientific integrity. We need more good “elitism”, and less bad “elitism”.

Identical semantic sophistry occurs with the word “censorship”, something that again we need more of in the positive sense. We need more censorship that objectively criticises fashionable speculation and that publishes factual, confirmed predictions and corrections to errors in existing “well established” theories. We need less censorship of ideas on the basis that they contradict unconfirmed fashionable speculation. This fact, that we need more objective censorship, is routinely ignored. If you are constructively critical of censorship, censors try to “defend” themselves by lying that you are simply against “censorship”, and then “explaining” why “censorship” is necessary to reduce the noise level. Yes, censorship is necessary to reduce the noise level and so to allow communication of facts, but it must be objective, not based on fashion. We need objective censorship, not lazy censorship.

Data on cross-sections (relative reaction rates) for Higgs boson decay processes

Enrico Fermi suggested that when a neutron decays into a proton, electron, and antineutrino, the process is equivalent to a neutron and a neutrino scattering (a reaction with an effective cross-sectional target area or “cross-section”) with a change of charge and mass, so that a proton and an electron emerge. This enabled weak decay to be treated as a “simple” particle scattering interaction, with an effective cross-section. In 1967 the “electroweak theory” was developed, which unified this feeble weak interaction with the far stronger (at low energy) electromagnetic gauge theory force by inserting a massive (80 GeV) charged W vector boson into the weak interaction process, this mass being necessary to explain the observed weakness of the weak force relative to the electromagnetic force (a numerical sketch of this mass suppression follows the quotation below). The W boson with 80 GeV mass and other properties as predicted was discovered in 1983 at CERN, and now the “Higgs” particle which is postulated to give the 80 GeV mass to the W boson has supposedly been discovered, again at CERN:

“The only fly in the ointment is its decay rate to two photons. This is nearly twice as large as expected. The significance of the discrepancy with the standard model is about 2.5 sigma. It could be a fluke. We have learnt to show some healthy skepticism when it comes to observations of physics beyond the standard model. However it is also consistent with an enhancement due to the presence of another charged boson. If that boson exists it must have a mass at least a bit larger than the W otherwise the Higgs would decay to this particle in pairs and we would see the effect on the other decay rates. It can’t be too massive otherwise it would not enhance the diphoton rate enough.” – Dr Philip Gibbs
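To put a rough number on the W-mass point made before the quotation above: the standard tree-level electroweak relation G_F/sqrt(2) = g^2/(8 m_W^2) shows how an 80 GeV W boson turns a gauge coupling of roughly electromagnetic strength into the feeble low-energy weak interaction. A minimal sketch, with the SU(2) coupling g taken as a rough textbook value (an assumption, not a figure quoted in the text):

```python
import math

# Standard tree-level electroweak relation: G_F / sqrt(2) = g^2 / (8 * m_W^2).
# The numbers are rough textbook values, used only as an illustration.
g   = 0.65       # SU(2) weak gauge coupling (approximate)
m_W = 80.4       # W boson mass, GeV

G_F = math.sqrt(2) * g**2 / (8.0 * m_W**2)   # in GeV^-2
print(f"G_F ~ {G_F:.2e} GeV^-2 (measured value: ~1.17e-5 GeV^-2)")

# With the same coupling g but a massless W there would be no 1/m_W^2 suppression:
# it is the heavy W propagator that makes the "weak" force weak at low energy.
```

The point of the sketch is that the coupling itself is not small; the 1/m_W^2 factor from the massive W propagator is what suppresses low-energy weak reaction rates.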

The latest data on the quantities of 125 GeV massive spin-0 bosons seen by the CMS and ATLAS detectors at CERN’s LHC can be compared to the Higgs boson cross-sections for different reactions (e.g. decay processes) predicted by the Standard Model of particle physics (the electroweak theory).  The results show that the ratios of observed/expected signals for different decays are:

1.0 for two neutral weak bosons (ZZ),

1.75 for two gamma rays, and

0.75 for two charged weak bosons (WW).

Dr Woit comments: “The bottom line is that, within errors, everything is consistent with the SM predictions. The gamma-gamma channel is the one to watch, it is about 2 sigma high.”

A preprint issued yesterday by Pier Paolo Giardino and others, called “Is the resonance at 125 GeV the Higgs boson?”, states: “The recently discovered resonance at 125 GeV has properties remarkably close to those of the Standard Model Higgs boson.”

A comment today by Mohit Sinha on Woit’s blog discusses the discrepancies in decays, suggesting that the 2-sigma excess in the double gamma ray production (i.e. 2 statistical standard deviations in a Gaussian/normal distribution error curve; not to be confused with the observed/expected ratios) “could be pointing to another not-yet-discovered boson along with the Higgs-like boson just discovered”, while the underestimated double W production decay data may weaken the case for spin-0 and instead suggest that the new 125 GeV boson is a massive spin-2 boson (of relevance to quantum gravity gauge theories).  The detection of double gamma ray decay rules out spin-1, which would violate the conservation of angular momentum, since gamma rays are spin-1, but doesn’t rule out spin-0 or spin-2.  If the low WW production debunks spin-0, then that would leave spin-2 by elimination.  However, gravity itself is long-ranged and so its quanta can’t have rest mass, so if there is a spin-2 massive boson it’s not the graviton, although if quantum gravity is a gauge theory which connects into the Standard Model, you can expect some symmetry breaking boson (although conventional stringy ideas would suggest that the quantum gravity symmetry breaking scale would be near the immense Planck mass, far greater than the LHC can see).  But the most probable explanation is simply that the relatively small amount of data available on WW production in spin-0 decays has given an inaccurate result, which will improve when more data is accumulated.
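For readers unfamiliar with the “sigma” language used above, the sketch below shows how an observed/expected signal-strength ratio is converted into a number of standard deviations. The one-sigma uncertainties here are illustrative guesses chosen only to roughly reproduce the “about 2 sigma high” gamma-gamma statement; they are not the published ATLAS/CMS errors:

```python
# How an observed/expected signal-strength ratio translates into "sigma":
# z = (mu_observed - 1) / uncertainty.  The uncertainties below are illustrative
# guesses, NOT the published ATLAS/CMS errors, which are not quoted in the text.
channels = {
    "ZZ":          (1.00, 0.40),   # (observed/expected ratio, assumed 1-sigma error)
    "gamma-gamma": (1.75, 0.40),
    "WW":          (0.75, 0.40),
}

for name, (mu, err) in channels.items():
    z = (mu - 1.0) / err           # deviation from the Standard Model value mu = 1
    print(f"{name:12s} mu = {mu:.2f}  ->  {z:+.1f} sigma from SM")
```

With these assumed error bars the gamma-gamma channel comes out about +1.9 sigma and the WW channel about -0.6 sigma, which is the sense in which “everything is consistent with the SM predictions, within errors”.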

One good example of a symmetry-breaking massive pseudo-Goldstone boson which acts as a force-mediating boson is the pion, which mediates the strong nuclear attractive force between nucleons (neutrons, protons) in the nucleus, keeping it bound together against the mutual electromagnetic repulsion of the protons.  The pion is a pseudo-Goldstone boson of QCD symmetry breaking, but acts as a force mediator.  Note that the pion is a composite particle, containing one quark and one anti-quark, each having spin-1/2.  The combination acts as an effective integer-spin boson, in the same way that superconductivity arises from Cooper pairs of electrons (fermions, each spin-1/2) coupling together to form effective “bosons” of integer spin, which lose all electrical resistance and propagate like massive (slower than light) photons.  It’s possible that the spin-0 massive boson is a composite, by analogy to these examples.  The pion is not a fundamental particle, since it contains two fundamental particles, but nevertheless (1) it arises through symmetry breaking, and (2) it acts as the force mediator for the nucleon-scale strong force (gluons of course mediate the QCD force between individual quarks).  What concerns me, as my paper shows, is that the electroweak Z boson’s 91 GeV mass seems to be the building block of the masses of fundamental particles.

Another commentator today on Woit’s blog, “truth” (who seems to think like a string theorist) claims: “The Goldstone models couple to the W, Z bosons to give them mass and the vev gives mass to the fermions. None of that requires the extra degree of freedom which is the Higgs boson. The only reason we have to add this extra degree of freedom is to ensure the theory is unitary at high energies. So what the LHC has discovered is that unitarity is respected by nature. This is the real content of the discovery. It is quite interesting to me that unitarity is the guiding principle of string theory, i.e., string theory is the only known consistent theory of gravity that exactly respects unitarity. This is extremely interesting.”

This is just a circular argument or assertion of dogma.  The data available are no proof that the massive spin-0 boson detected is precise confirmation of the electroweak theory with Higgs mechanism, so interpreting the data this way and then asserting that this speculative assertion amounts to a proof of unitarity and string theory is absurd.

Another commentator today on Woit’s blog, David Nataf, points out that there is a “4-sigma signal of a gamma-ray emission line (that could be a dark matter annihilation line) toward the Galactic center at an energy 130 GeV”, i.e. close in energy to the 125 GeV massive spin-0 LHC particle, in the papers “A Tentative Gamma-Ray Line from Dark Matter Annihilation at the Fermi Large Area Telescope” by Christoph Weniger, http://arxiv.org/abs/1204.2797 and “Strong Evidence for Gamma-ray Line Emission from the Inner Galaxy” by Meng Su and Douglas P. Finkbeiner, http://arxiv.org/abs/1206.1616 Nataf states: “The first version of the abstract of the second paper comments on how the energy is very close to that of the Higgs, I think they suggest the dark matter particle might decay into the Higgs.”

Peter Shor in the same comments section on Woit’s blog states: “you can easily add sterile heavy right-handed neutrinos to the Standard Model, and that these could both explain dark matter and the low mass of the left-handed neutrinos [using the see-saw mechanism], so maybe Occam’s razor actually predicts the Standard Model with added heavy sterile neutrinos.”  Massive (125 GeV) right-handed neutrinos could decay, but since they are fermions (with spin-1/2) it’s hard to see how they can decay into bosons (with integer spin), unless there is some mechanism for spin angular momentum to be conserved.  For example, to conserve spin angular momentum, a massive spin-0 boson could be emitted when a 125 GeV right-handed neutrino decayed into a left-handed, trivial-mass neutrino.
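The see-saw mechanism Peter Shor alludes to can be caricatured with the schematic type-I see-saw relation m_light ≈ m_D²/M_R. The mass values in the sketch below are conventional textbook choices (an electroweak-scale Dirac mass and a very heavy Majorana mass), assumed purely for illustration and not a claim about the 125 GeV resonance:

```python
# Type-I see-saw relation (schematic): m_light ~ m_D**2 / M_R.
# The mass values below are conventional illustrative choices, not taken from the text.
m_D = 100.0        # Dirac mass scale in GeV (roughly the electroweak scale, assumed)
M_R = 1.0e15       # heavy right-handed Majorana mass in GeV (assumed GUT-ish scale)

m_light_GeV = m_D**2 / M_R
m_light_eV  = m_light_GeV * 1.0e9       # 1 GeV = 1e9 eV

print(f"light neutrino mass ~ {m_light_eV:.2e} eV")
# ~0.01 eV: the heavier the right-handed partner, the lighter the left-handed neutrino.
```

The sketch only illustrates why adding very heavy sterile right-handed neutrinos can simultaneously explain the tiny left-handed neutrino masses, which is the Occam’s razor point Shor makes.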

Wave-particle duality

David A. Chalmers’ 1997 article on the double slit experiment:

Double slit experiment energy balance (David A. Chalmers, Science World, ISSN 1367-6172, Feb. 1997).

Wave-particle duality: the conflict of 1st quantization (one wavefunction per onshell particle) and 2nd quantization (multiple wavefunctions per particle, with a sum over histories)

The Schroedinger 1st quantization wavefunction amplitude is exp(iS), where S is the action of the path, measured in units of h-bar.  This is a simple solution to Schroedinger’s wave equation, which has a single wavefunction.

Feynman’s genius was explicitly updating this 1st quantization wavefunction to the multipath interference of 2nd quantization, where the uncertainty principle is no longer a mystery but a simple result of multipath (multiple wavefunctions) interference.

2nd quantization quantizes the field, and the field quanta then provide the stochastic or random interactions with charges that account for non-classical behaviour of particles whose path actions are small compared to h-bar.

1st quantization is based on mystery, and merely re-asserts Planck’s E = hf in the form of the uncertainty principle Et = h (remember f = 1/t), allowing no mechanism.  Dirac proved the necessity for 2nd quantization in 1927, on the basis that the Hamiltonian energy as written in Schroedinger’s 1st quantization wave equation makes Schroedinger’s single-wavefunction equation (basically the whole of undergraduate quantum mechanics) non-relativistic and thus wrong.  Dirac’s replacement, the relativistic spinor Hamiltonian, has negative energy states, predicting pair production in the vacuum, so the vacuum’s field is quantized.  Feynman recognised that the virtual particles of the quantized force field interact with charges in a manner partly analogous to real radiation, at least from the perspective of imparting force by interaction, i.e. virtual photon scattering by charges in Feynman diagrams.  Summing all possible virtual photon interactions with a charge, appropriately weighted using exp(iS) as the amplitude for each path of action S, gives the path integral.

However, we live in an Orwellian “doublethink” world when it comes to 1st and 2nd quantization.  For a pure mathematician, no equation that has any applicability is “wrong”.  Take epicycles, the incorrect earth-centred universe of Ptolemy’s highly popular Almagest (published 150 AD, some 400 years after Aristarchus of Samos had correctly postulated the solar system with a spinning earth in 250 BC!).  If you are a mathematician, it really doesn’t matter whether the sun orbits the earth or vice-versa, so long as the equations are interesting.  A mathematician will happily try to find dualities.  Dr Lubos Motl, I recall, suggested to me that Ptolemy’s epicycle equations for planetary motions were not wrong, in the sense that they were useful for predictions.  (However, although Ptolemy could predict the positions of planets as seen in the sky, where only two degrees of freedom – latitude and longitude – are the variables, Ptolemy’s epicycle model is not a mathematical duality of the correct 3-dimensional motion of the planets, since it fails to properly model the variations in the distances of the planets from the earth.)

The point is, Maxwell, unifier of electricity and magnetism, tried to find a mechanism for the equations of electromagnetism using a complex model of space, filled with moving parts.  When that model failed, more “pure” mathematicians and philosophers like Mach combined forces in a mathematical revolution in physics which, like all revolutions, is sustained by Orwellian “doublethink”.  While it is the basis of the Standard Model that Feynman’s 2nd quantization gives fundamental particles uncertainty by random interactions with quantized (non-classical) field quanta, no efforts are made to deal with this by radiation transport Monte Carlo simulations of the vacuum dynamics.  Instead, the physicist hides behind the mathematics of the path integral, just as medieval Ptolemaic believers hid behind the trigonometry of epicycles, for obfuscation.  In addition, 1st quantization (single wavefunction per particle!) non-relativistic quantum mechanics continues to be taught because it is easier for students to apply to atomic energy levels than 2nd quantization.  In this “doublethink”, the errors of 1st quantization persist as a hardened dogma, despite being overturned by relativistic 2nd quantization, where indeterminacy arises simply from field quanta interactions.
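The kind of Monte Carlo treatment suggested above can at least be caricatured in a few lines: a particle receiving many small random impulses from field quanta executes a random walk whose r.m.s. displacement grows as the square root of the number of impacts, exactly like the Brownian motion discussed in the next paragraph. A toy sketch, with arbitrary step sizes assumed (this is only an illustration of the statistics, not a simulation of any real vacuum field):

```python
import random, math

# Toy Monte Carlo: a particle is kicked N times by randomly directed impulses
# of fixed small size (arbitrary units).  This caricatures the idea that random
# field-quanta interactions produce Brownian-style indeterminacy.
random.seed(1)

def rms_displacement(n_kicks, n_trials=1000, step=1.0):
    total = 0.0
    for _ in range(n_trials):
        x = y = 0.0
        for _ in range(n_kicks):
            theta = random.uniform(0.0, 2.0 * math.pi)   # random kick direction
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_trials)

for n in (100, 400, 1600):
    # r.m.s. displacement scales as sqrt(n): it doubles each time n quadruples
    print(f"{n:5d} kicks: rms displacement ~ {rms_displacement(n):.1f}")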

Einstein and Infeld in their book “Evolution of Physics” discuss the randomness of Brownian motion.  When the random, indeterministic motion of fragments of pollen grains was first seen under a microscope, the water molecules bombarding the fragments were invisible, and Brown actually believed that the motion was intrinsic to small particles, an inherent indeterminacy on small scales in space and time!  This error is precisely Bohr’s 1st quantization error.  It is no wonder that Bohr was so ignorantly opposed to Feynman’s path integral:

“… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …”

-  The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994) (pp. 245-248).

This attitude of Bohr persists today with regard to the difference between 1st and 2nd quantization; the attitude is that because non-relativistic 1st quantization was discovered first, and is taught first in courses, it must somehow take precedence over the mechanism for indeterminacy in quantum field theory (2nd quantization).  The doublethink of most textbooks omits this and glues on 2nd quantization as a supplement to 1st quantization, rather than as a replacement for it!  Why not have doublethink, with two reasons for indeterminacy: intrinsic, unexplained, magical indeterminacy typified by the claim “nobody understands quantum mechanics (1st quantization)”, plus the mechanism that virtual particles in every field randomly deflect charges on small scales (like Brownian motion on dust)!  Feynman’s answer of course is that 1st quantization is plain wrong, since it is non-relativistic, and also Occam’s Razor tells us that we need only 2nd quantization, because it explains everything mechanically without needing a 1st quantization (intrinsic or magical) uncertainty principle:

“I would like to put the [1st quantization] uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [wavefunction phase amplitudes] for all the ways an event can happen – there is no need for an [1st quantization] uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by 2nd quantization field quanta] becomes very important …” – Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84.

This blog post is motivated by a kind email from Dr Mario Rabinowitz on wave-particle duality in the double-slit experiment, which was sent as a result of yet another “not even wrong” paper published in a journal which uses non-relativistic (single wavefunction!) 1st quantization quantum mechanics to analyze quantum indeterminacy in the double-slit experiment.  Whenever you use the earth-centred planetary theory of Ptolemy to try to get higher accuracy, you always “discover” more evidence for endless epicycles, so the dogma becomes a self-fulfilling cult, sucking in research funding and peddling science fantasy in place of fact.  The facts don’t speak for themselves, because they aren’t as exciting as dogmatic indeterminism:

Dear Mario,

Thank you for emailing me your paper “Examination of wave-particle duality via two-slit interference”. In Section 5.1, at page 26, you state:

“In Bohm’s quantum mechanical theory, there is no wave-particle duality. [The Undivided Universe, 1993] For Bohm, the particles shot at the slit-plate have definite trajectories, and each particle goes through only one slit or the other. In this theory as excellently presented by Holland [Quantum Theory of Motion, 1993], the interference pattern results from the interaction of each particle with the quantum potential determined by its own wave function and the presence of the two slits. … 5.2 Prosser, and Wesley’s Poynting vector particle guidance In 1976 Prosser made a ground-breaking suggestion that, at least for the case of light, the underlying causal reality for the formation of interference and diffraction patterns is the energy flow given by the Poynting vector. [Intl. J. Theoretical Phys. 15, 169 (1976).]“

I don’t know what you mean by “wave-particle duality”, which is as vague as the word “God”. Feynman explains the double slit using path integrals, although he notes that the transverse spatial extent of a photon is a “small core of space” surrounding the classical path (the path of least action). In his 1985 book QED he states:

“Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)” – R. P. Feynman, QED, Penguin Books, London, 1990, page 54.

The amplitude contribution of each path with action S to the path integral is exp(iS/[h bar]), which by Euler’s equation reduces to cos(S/[h bar]) relative to the path of least action (where we don’t need the complex exponent: the additional information it carries is merely the direction of the resultant, which is always parallel to the axis of the least-action path when the path integral is done by summing arrows on an Argand diagram).

Therefore, with amplitude cos (S/[h bar]), only paths with actions within plus-or-minus h-bar around the path of least action contribute to the net amplitude significantly (the paths with larger actions cancel one another out). So it is indeed a very small core of space around the path of least action where the alternative paths of the path integral are significant and cause the double-slit phenomena.
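
To make that “small core” statement concrete, here is a minimal numerical sketch (a toy example of my own, in Python, not anything from your paper): a free particle goes from x = 0 back to x = 0 via a single intermediate point, so each kinked path has action S(x) = 2mx^2/T, and one arrow exp(iS/[h bar]) is summed per path. Comparing the full sum of arrows with the sum restricted to paths within a few h-bar of the least action shows what the far-from-classical paths contribute:

import numpy as np

# Toy check of the "small core of paths" point: a free particle goes from
# x = 0 at t = 0 back to x = 0 at t = T via one intermediate point x at t = T/2.
# Each kinked path (two straight legs) has action S(x) = 2*m*x**2/T, so the
# least-action (classical) path is x = 0 with S = 0. One unit arrow exp(i*S/hbar)
# is added per path over a dense grid of intermediate points; the full sum is
# then compared with the sum over paths within a few hbar of the least action.

hbar, m, T = 1.0, 1.0, 1.0                  # toy units
x = np.linspace(-40.0, 40.0, 2_000_001)     # intermediate-point positions
dx = x[1] - x[0]
S = 2.0 * m * x**2 / T                      # action of each kinked path

arrows = np.exp(1j * S / hbar)              # one arrow per path
full = arrows.sum() * dx                    # "sum over histories"

core = arrows[S <= 5.0 * hbar].sum() * dx   # only the near-classical paths
tail = full - core                          # all the far-from-classical paths

print(f"|full sum|      = {abs(full):.4f}")
print(f"|core-only sum| = {abs(core):.4f}")
print(f"|far-tail sum|  = {abs(tail):.4f}  (large-action arrows mostly cancel)")

In this toy the far tail spans a far wider range of paths than the core, yet its net arrow comes out comparatively small, because the rapidly rotating arrows largely cancel.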

The actual mechanism for the diffraction is very simple: the slits in the screen contain atoms with electromagnetic field quanta, which interact with a passing photon, diffracting it. When a photon travels through an electromagnetic field, it interacts with the virtual photons of the field, which is also why light is slowed down and refracted by glass. Because these field quanta are stochastic or random in timing and paths taken, an element of uncertainty is thereby introduced into the change of momentum of the passing photon. In addition, the “small core” of paths taken around the classical path means that if the slits are close enough together, some of the multiple paths taken by a “single” (sum over histories) photon will pass through each of the slits, before recombining on the other side. This causes the double slit interference pattern, seen with so-called “single” photons.

On page 27 you state:

“In 1984 Wesley [J. P. Wesley, Found. Phys. 14, 155 (1984)] independently formulated a similar theoretical concept of the role of the Poynting vector in two-slit interference. Wesley gave due credit to Prosser, and referenced his two papers. He pointed out that smaller slits with wider separation would more clearly show the flow needed to explain two-slit interference.”

On page 30, you state:

“5.4 Marmet’s relativistic waveless and photonless two-slit interference Marmet [Absurdities in Modern Physics: A Solution,1993] uses an original if not peculiar invocation of relativity theory to obtain interference without either waves or photons. He says, “The wave or photon interpretations are not only useless, they are not compatible with physical reality. Waves are simply the relativistically distorted appearance of relativistic coupling between two atoms exchanging energy.”

This is not a “peculiar” idea as far as I can see, it is QED, the standard model’s gauge theory of electrodynamics. All electric charges and magnets produce force fields by the exchange of offshell photons with one another. This “fills the vacuum” with offshell radiation which produces only fundamental forces. What I’ve never been able to understand is why it is still taboo to try to produce a Monte Carlo or simple geometric model of this exchange of virtual photons, as a duality to the usual mathematical technique of integrating exp(iS/[h bar]) over all paths.
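
A first, very modest gesture in that direction is easy to write down: a Monte Carlo estimate of a path sum, sampling paths at random and averaging their arrows instead of integrating exp(iS/[h bar]) over a grid. The sketch below (again only a toy: the same one-intermediate-point free-particle sum as above, with h-bar = 1, and certainly not a model of virtual photon exchange itself) shows the idea:

import numpy as np

# Minimal Monte Carlo sketch of a "sum over histories": instead of integrating
# exp(i*S/hbar) over a grid of paths, sample intermediate points at random and
# average their arrows. Same toy setup as before: kinked paths 0 -> x -> 0 with
# action S(x) = 2*m*x**2/T. This is only an estimator for the path sum, not a
# model of QED virtual-photon exchange.

rng = np.random.default_rng(0)
hbar, m, T = 1.0, 1.0, 1.0
L = 40.0                                    # sample intermediate points in [-L, L]

n = 5_000_000
x = rng.uniform(-L, L, n)
S = 2.0 * m * x**2 / T
estimate = 2.0 * L * np.exp(1j * S / hbar).mean()   # Monte Carlo path sum

print(f"Monte Carlo |sum over histories| ~ {abs(estimate):.3f}")
print("(compare with the grid sum in the previous sketch)")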

This anti-mechanism taboo is a “doublethink” disease of the mathematical priesthood in physics. What happened when Maxwell’s mechanical aether failed was that mechanical models became taboo, and this taboo survives today. It is a lurch from one extreme to another. Really, QED is a theory of offshell radiation being exchanged between charges to produce fundamental forces, and the appearance of the real (onshell) photon is just an asymmetry in the normally unobservable exchange of virtual photons between charges (the asymmetry being caused by the acceleration of a charge).

You finish:

“6. Conclusion … If a photon goes through both slits at the same time, there is little or no momentum transfer to the slit plate compared with a photon traversing only one slit. … It is extraordinary from a particle point of view that more photons reach the screen when one slit is closed than when both slits are open.”

What is the evidence that more photons reach the screen when one slit is closed than when both are open? If I make lots of pinholes in a screen, more light will definitely get through. Have you thought about the conservation of energy, taken over the pattern of light and dark fringes in the interference pattern?

What happens to photon energy if a single photon “lands” at a “dark interference fringe”?

Clearly, it would violate conservation of energy, since a photon’s energy doesn’t “disappear” from the universe when it travels through two slits and “interferes with itself”. What is the “cancellation” process? If I send two water waves in opposite directions towards one another from oscillators at opposite ends of a water tank, when the waves meet and pass through one another, for a brief period the water surface is completely calm. The wave amplitudes have temporarily cancelled out. However, the energy still exists, and is seen a moment later when both waves, having passed through one another, magically reappear and the calm water surface rears up into two waves travelling away from one another.

Photons only arrive at the bright bands in the interference pattern. Nothing arrives at the dark bands in the interference pattern, which means that the usual explanation by Young is plain wrong: individual photons don’t arrive “out of phase” to form the dark bands. This fact is obfuscated by the usual diagrams based on Young’s analysis which show that light waves arrive out of phase at the dark bands. Are you aware of this?

Kind regards,

Nigel

 

—– Original Message —–

From: Mario Rabinowitz
To: Nigel Cook
Sent: Thursday, June 07, 2012 1:36 AM
Subject: May be one of the most important experiments in the last two centuries on the Wave-Particle Duality

Hi Nigel,

I hope all is going well for you.

In case you haven’t already heard, I thought you might like to know a little about what may be one of the most important experiments in the last two centuries on the Wave-Particle Duality.

 

A recent experiment by Menzel et al observes through which of two slits a photon passes, while still preserving the customary interference pattern of Young’s original 1802 experiment. This violates both the Uncertainty Principle and Bohr’s Complementarity Principle.

 

Interestingly I was the first to propose and analyze this experiment in my 1995 paper, Examination of Wave-Particle Duality Via Two-Slit Interference. It was published in Modern Physics Letters B 9 pp. 763 – 789 (1995), and appeared as ArXiv 0302062 in 2003. My description and analysis of this novel experiment that determines which slit the particle/photon goes through and still preserves the interference pattern is in Sec. 4 pp. 18 – 38, and illustrated in Figs. 1 & 2 of my ArXiv paper.

 

A copy of my ArXiv paper is attached; as is the Abstract of the Menzel et al paper which is expected to be published in the Proceedings of the National Academy of sciences. The Web Sites for these two papers are:

http://arxiv.org/abs/physics/0302062

http://www.pnas.org/content/early/2012/05/23/1201271109.abstract?sid=5bb98396-2381-49b5-ba43-6d738aeb3734

I recall that Thomas Young’s monumental paper was roundly criticized by some authors in the same 1802 issue of Philosophical Transactions in which his paper was published. Lucky for him and for posterity, they were not in a position to reject his paper from publication.

 

Warm Regards,

Mario

Aside

Who cares about…

Update (18 March 2012):

Copy of my comment submitted to Calder’s blog post on Quantum Computing:

“If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be both states at once.”

Thanks for this article. The quantum computing idea depends on intrinsic indeterminism, the single wavefunction of Schrodinger’s equation. This gives a spread of probabilities for the energy state, until the wavefunction is “collapsed” by an actual measurement.

The quantum computing question is whether the single wavefunction (1st quantization quantum mechanics) mathematical model is an accurate, experimentally justified model. It’s non-relativistic, and in 1928 Dirac showed that the Hamiltonian in Schroedinger’s equation needs to be replaced by an SU(2) spinor to make it relativistic, which quantizes the field.

This is Feynman’s path integral (2nd quantization, or QFT), where there is no single wavefunction amplitude. Instead, each path has a separate wavefunction amplitude, and apparent indeterminism is just multipath interference from the virtual particles (similar to multipath interference of old HF radio waves due to partial reflection by different charged layers in the ionosphere). Feynman explains this fact clearly in his 1985 book QED, stating that Heisenberg’s uncertainty principle is unnecessary. All indeterminism is multipath interference, a physical mechanism. So if Feynman is right, there is no real mathematical magic, and the 1st quantization single wavefunction states at the heart of quantum computing research are a delusion.

The Majorana fermion news is very interesting, but again it is a spin story. The “pair of Majorana fermions” described in the paper referenced by the Nature article (R. M. Lutchyn et al. http://arxiv.org/abs/1002.4033; 2010) is simply an electron and a semiconductor “hole” at the interface between a superconductor and a semiconducting nanowire. The hole behaves as a fermion, and is electrically like a positron. So this Majorana pair is electrically neutral, and with entangled wavefunctions would prove useful for quantum computing.

But according to Feynman, the only entangled wavefunctions are from the 1st quantization non-relativistic model. Aspect’s experiments alleging quantum entanglement, and others, are fully explained by Feynman’s 2nd quantization multipath interference mechanism in path integrals, which simply isn’t included in Bell’s inequality (a statistical test of 1st quantization). There is no discrimination between 1st and 2nd quantization in these experiments. Experimental spin correlation is assumed to be the entanglement of single wavefunctions. They simply ignore the path integral’s multipath interference mechanism. The statistical hypothesis testing is fiddled by a false selection of explanations: it is assumed that the experiments are a test of whether 1st quantization is right or wrong. Of course, under this assumption, it appears correct.

A more scientific version of Bell’s inequality would include a third possibility, namely Feynman’s path integral where all indeterminism is due to multipath interference, so there are no single wavefunctions to begin with. Supposed pairs of spin-correlated particles actually follow all paths, most of which cancel one another. There is no single wavefunction; instead, Aspect’s two apparently correlated wavefunctions (one for each detected particle) are each the sum of wavefunction amplitudes for all the virtual paths taken. This provides the physical mechanism for what is actually taking place.

Renormalization in quantum field theory and its physical mechanism

Here’s the current solution to the old problem of whether Haag’s theorem prevents axiomatic proof of the self-consistency of the (essential) running charge cut-off (charge renormalization) in quantum field theory:

“Enough about renormalization, please. At this level of arguing about whether perturbative divergences in general are

1. A serious problem indicating the theory is ill-defined
2. Things that can be eliminated with mathematical machine X
3. Not there if you use the renormalization group properly

the discussion is stuck in a 40-50 year old time warp. I don’t think that endlessly repeating geriatric arguments is fruitful.” – Peter Woit

The argument Dr Woit was responding to was between Dr Chris Oakley and Dr Igor Khavkine.  The successive terms in a path integral’s perturbative expansion each represent the magnitude of the contribution from a successively more complex Feynman diagram, which pictorially describes interactions between off-shell (virtual) particles.  Virtual fermions are polarized around a real charge, absorbing energy from the field and reducing (shielding) the charge as seen from a greater distance (i.e. a distance beyond the location of the polarized pairs of virtual fermions, which extend out to the low-energy or IR cutoff, the limit for spontaneous pair production in the vacuum given by Schwinger).
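
As a purely numerical illustration of that screening (using nothing more than the textbook one-loop running of the electromagnetic coupling with a single electron loop, taking the electron mass as the low-energy cutoff; the muon, tau and quark loops, which strengthen the running, are deliberately left out), the rise of the effective charge between the IR cutoff and collider energies can be tabulated as follows:

import math

# Textbook one-loop QED running coupling with a single electron loop:
#   alpha(Q) = alpha0 / (1 - (alpha0 / (3*pi)) * ln(Q^2 / m_e^2)),  Q >> m_e.
# Below the IR cutoff (Q ~ m_e, i.e. distances beyond the polarized vacuum
# region) the coupling is held at its low-energy value. Including the other
# charged fermion loops would make the running stronger than shown here.

alpha0 = 1.0 / 137.036      # low-energy fine-structure constant
m_e = 0.000511              # electron mass in GeV

def alpha_running(Q_GeV):
    """One-electron-loop running coupling at momentum transfer Q (GeV)."""
    if Q_GeV <= m_e:
        return alpha0                      # no running below the IR cutoff
    log_term = math.log(Q_GeV**2 / m_e**2)
    return alpha0 / (1.0 - (alpha0 / (3.0 * math.pi)) * log_term)

for Q in (0.000511, 0.1, 1.0, 91.19, 1000.0):
    print(f"Q = {Q:>8.4g} GeV   1/alpha(Q) = {1.0/alpha_running(Q):7.2f}")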

 

There is a groupthink denial about the details of this physical mechanism and the mathematics of renormalization procedures. The fashion is wooden mathematics. Weyl in 1918 gave a flawed quantum gravity gauge quantization by trying to quantize the metric of general relativity, scaling it by a complex exponential function of the electromagnetic field S, using exp(iS). After Einstein pointed out it was wrong, Schroedinger in 1922 modified Weyl’s idea into a new mathematical “eigenvalue” model of the Bohr atom, changing the scaling from the metric to the probability of a discrete energy level existing as a function of the electron’s orbital path; the periodic real-plane solutions to exp(iS) = cos S + i sin S represented the eigenvalues for “stationary states” of orbital electrons. Finally, after de Broglie’s particle-wave duality became fashionable, Schroedinger published the famous complex-plane time-dependent wave equation to which exp(iS) is a solution.

 

In quantum field theory, as Dirac developed it, Schroedinger’s time-dependent wave equation is supplied with a new Hamiltonian (Dirac’s spinor) to make it treat space and time the same way, to meet relativistic requirements. The new Hamiltonian, however, quantizes the field. Instead of just having one particle interacting and behaving unpredictably with no mechanism (which is what the single wavefunction model in Schroedinger’s equation or Heisenberg’s matrix mechanics says), in quantum field theory you suddenly have a mechanism: lots of virtual particles deflecting an electron whose path action is small compared to h-bar. Each of the interactions between a virtual particle in the field and the electron has an amplitude and thus a wavefunction. The 2nd quantization (QFT) path integral, as Feynman points out in his book QED (1985), is now a physical sum of physical processes, so the 1st quantization (non-relativistic QM) “uncertainty principle” is “not needed” (Feynman). Uncertainty is now not a metaphysical law from the mind of Heisenberg; it’s good old “multipath interference”, exactly the effect that causes radio interference.

 

So why exp(iS)? If we have the mechanism of multiple-path interference determining eigenvalues in 2nd quantization, why use Schroedinger’s purely ad hoc complex wave equation, whose complex Hilbert space defies a self-consistent axiomatic proof of renormalization (Haag’s theorem)? Why not accept that exp(iS) and the complex wave equation are a historical vestige? What we must replace them with is real space: exp(iS) can be replaced simply with cos S, as Feynman demonstrates graphically in his 1985 book, QED. Thinking physically (without the wooden fuzziness of “believing” in ad hoc mathematical models as a religious belief), you can see that the path integral is always giving a real-plane solution: the only variable is the amplitude, not the direction of the arrow on an Argand diagram. A path integral can either add up unit-length arrows with variable directions, which is the mainstream method today, using exp(iS), or you can get precisely the same result by making the arrows all point in the same direction (the real axis) but have varying lengths. The path integral is always the same so far as observation is concerned: nobody can see any non-real-plane final arrows in the laboratory. Interferences only affect amplitudes on the real plane so far as we observe them. The cross-sections and probabilities you get from the path integral are always real numbers, never containing i. If that is true, exp(iS) = i sin S + cos S can be replaced by dropping the i sin S term to give cos S. This should have been done by Dirac and Feynman when 2nd quantization was developed. Instead, Hilbert space – despite Haag’s theorem – is a religion in quantum field theory.
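
Anyone who wants to test this proposed replacement can do so numerically in a few lines. The sketch below (my own toy example again: the one-intermediate-point free-particle sum, with the action measured relative to the least-action path and h-bar = 1) evaluates the path sum both ways (summing arrows exp(iS), and summing cos S along the real axis) and prints the amplitudes and their squares so the two prescriptions can be compared directly:

import numpy as np

# Evaluate the same toy path sum (free particle, one intermediate point x,
# action S(x) = 2*m*x**2/T measured relative to the least-action path, hbar = 1)
# two ways:
#   (a) the standard prescription: sum the arrows exp(i*S),
#   (b) the proposed replacement:  sum cos(S) along the real axis,
# then print the amplitudes and their squares so the two can be compared.

hbar, m, T = 1.0, 1.0, 1.0
x = np.linspace(-40.0, 40.0, 2_000_001)
dx = x[1] - x[0]
S = 2.0 * m * x**2 / T                     # S = 0 on the classical path

standard = np.exp(1j * S / hbar).sum() * dx    # (a) complex arrow sum
proposed = np.cos(S / hbar).sum() * dx         # (b) cos-only sum

print(f"(a) sum of exp(iS): Re = {standard.real:.4f}, Im = {standard.imag:.4f}, "
      f"|sum| = {abs(standard):.4f}, |sum|^2 = {abs(standard)**2:.4f}")
print(f"(b) sum of cos(S):  {proposed:.4f}, squared = {proposed**2:.4f}")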

Update (15 February 2012): relevant extracts from an email on this subject to Dr Mario Rabinowitz

From: Nige Cook
To: Mario Rabinowitz
Sent: Wednesday, February 15, 2012 5:15 PM

… “As you may recall I think that existing QM and GR are presently mutually incompatible, being a deterrent to a consistent theory of QG.”

Years ago, I read your excellent paper, “Deterrents to a theory of quantum gravity”, which is very helpful and provides some vital insights. Your approach defines QM by the mainstream Schroedinger 1926 equation of 1st quantization:

i * {h-bar} * d{Psi}/dt = H *{Psi}

Any equation of this form (where the rate of change of the variable Psi is directly proportional to Psi itself) has an exponential solution, i.e.

{Psi}_t = {Psi}_0 exp(-iHt/{h-bar}).
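
As a quick check that the exponential really does solve the equation, here is a two-state toy model (my own illustration: an arbitrary 2×2 Hermitian Hamiltonian, with h-bar = 1) which integrates i*d{Psi}/dt = H*{Psi} step by step and compares the result with exp(-iHt) acting on the initial state:

import numpy as np

# Check, in a minimal two-state toy model with hbar = 1, that the equation
# i*dPsi/dt = H*Psi has the exponential solution Psi(t) = exp(-i*H*t)*Psi(0):
# integrate the equation with classical 4th-order Runge-Kutta and compare
# against the matrix exponential built from the eigenvectors of H.

hbar = 1.0
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                 # an arbitrary 2x2 Hermitian "Hamiltonian"
psi0 = np.array([1.0, 0.0], dtype=complex) # start in the first basis state

def rhs(psi):
    return -1j * H @ psi / hbar            # dPsi/dt = -i*H*Psi/hbar

t_final, n_steps = 5.0, 20_000
dt = t_final / n_steps
psi = psi0.copy()
for _ in range(n_steps):                   # step forward with RK4
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Exact exponential solution via the eigen-decomposition of H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t_final / hbar)) @ evecs.conj().T
psi_exact = U @ psi0

print("RK4 integration :", np.round(psi, 6))
print("exp(-iHt) * psi0:", np.round(psi_exact, 6))
print("max difference  :", np.max(np.abs(psi - psi_exact)))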

This exponential solution is what Dirac came up with in 1933. I’m sure you’re well aware of the mathematics, but maybe the history and the physical interpretation are less familiar:

1. Weyl came up with the complex exponent, exp(iX), in 1918 as a multiplying factor for the metric of general relativity. This quantized the metric, the original gauge theory of quantum gravity (references are in my paper). Weyl’s factor X was a function of the electromagnetic field, so he claimed to unify electromagnetism and gravity in his theory. Einstein pointed out that Weyl’s 1918 theory contradicted observed data (e.g. line spectra from stars with strong gravitational fields).

2. In 1922, Schroedinger reapplied Weyl’s exp(iX) factor to model the quantized electron energy levels in the atom (the references are in my paper): exp(iX) is a cyclic function on an Argand diagram (complex plane). Schroedinger’s 1922 paper defined the periodic real plane of exp(iX) = i sin X + cos X as the observed electron states corresponding to line spectra, so that exp(iX) was unity (probability of finding the electron = 1) for “real” (observable) electron states.

This was a brilliant application of mathematical intuition to “explain” why lines are quantized: the electron is in some sense in a complex plane (unobservable) when in between discrete energy levels.

3. In 1926, after being asked to give a lecture on de Broglie’s wave particle duality, Schroedinger presented his famous reverse-engineered wave equation, to which his 1922 paper’s probability = exp(iX) is the solution. (Feynman claimed in his Lectures on Physics that the wave-equation was a guess which came out of the “mind of Schroedinger”. It actually came out of the mind of Weyl’s gauge theory in 1918, but was changed by Schroedinger from scaling the gravitational metric to scaling the wavefunction.)

4. In 1928, Dirac had to change the non-relativistic Hamiltonian to an SU(2) matrix-type spinor in order to make the Schroedinger theory relativistic. Dirac found that this quantizes the field (2nd quantization).

5. In 1933, Dirac suggested following the wavefunction over a path by {Psi}_t = {Psi}_0 exp(-iHt/{h-bar}). This is really a circular argument physically, since it is what Schroedinger did in his 1922 paper.

My argument is that the amplitude exp(-iHt/{h-bar}), or its equivalent in the path integral for least action, exp(iS), is only necessary in 1st quantization quantum mechanics where you have a single wavefunction. In this case, you have to rely on the complex conjugate to quantize phenomena.

In 2nd quantization, you have more than one wavefunction (the path integral, one wavefunction for every path). All the many wavefunctions interfere to produce probabilities. For classical situations (path actions minimal compared to h-bar), exp(iS) ~ exp(i*0) ~ 1, so the classical path is roughly 100% likely (thus all non-classical paths have trivial contributions).

Mathematically you don’t need the complex wavefunction amplitude exp(iS) in the path integral (2nd quantization). It’s just a vestige of 1st quantization. Use Euler’s equation for exp(iS) and drop the complex terms which have no value: exp (iS) = i sin S + cos S. You can replace exp(iS) with cos S in the path integral.

Benefits:

1. No more complex Hilbert space, and no more complex Schroedinger wave equation. No complex space. No problems of Hilbert space in trying to reconcile quantum mechanics and gravity!

2. No more problems axiomatically in quantum field theory! Haag’s disproof of self-consistent axioms for renormalization is based on complex (Hilbert) space. Drop complex (Hilbert) space, and self-consistent renormalization is no longer a crack to be covered with renormalization group wall-paper.

3. Nobody has ever seen any need for a complex space in 2nd quantization. The path integral’s outputs are always real numbers: real-plane cross-sections, and real probabilities. If quantum mechanics is defended on the basis of empiricism by Bohr and Heisenberg, why include non-observables like complex space, when they are no longer needed? As Feynman states in his 1985 book QED, in the path integral probabilities arise from multipath (multiple wavefunction) interferences, just like the old HF sky-wave radio interference from partial reflection of radio by several different (D, E, and F) layers in the ionosphere. There is a physical mechanism in 2nd quantization, so you no longer need exp(iS), which is vital only to explain energy level quantization if you have a single wavefunction (1st quantization).

Notice that if you replace the path wavefunction (amplitude) Psi = exp(iS) with Psi = cos S (obviously S is in units of h bar), what you are doing in the path integral looks a bit different graphically, but is an exact mathematical duality for all real outputs (probabilities, cross-sections).

On an Argand diagram, using exp(iS) as a path’s amplitude means that to determine the “sum over histories” (path integral) you add arrows of fixed length but variable direction for each path, and the path integral is then the resultant arrow (the sum over the histories).

This sounds as if it involves a vector result, i.e. generates two variables: the path integral (final arrow) has both length (amplitude) and direction. However, although technically “true” in a “wooden” mathematical sense, it is contrived sophistry in a physical sense, because the direction of the vector is always zero, i.e. on the real (horizontal) axis. To repeat, the path integral only produces scalar (not complex vector) probabilities and cross-sections, since the direction of the final arrow is always real. There are only two axes on the Argand diagram: real and imaginary (complex). The fact that the path integral is always on the real axis allows us to replace exp(iS) with cos S (using Euler’s identity), without loss of information. We’re not physically breaking any mathematical rules by “replacing a vector with a scalar”. It’s a scalar at output anyway.
In other words, the only true variable in every experimentally checked and confirmed path integral is the amplitude of a path along the real axis, i.e. cos S. So forget exp(iS), it’s unnecessary in 2nd quantization where we’re calculating real numbers like real probabilities and real cross-sections.

I’m arguing that the whole of 1st quantization is rendered obsolete by 2nd quantization, and that, given Haag’s theorem, to achieve self-consistent axiomatic renormalization we need to dump the Weyl/Schroedinger/Dirac/Feynman exp(iS) wavefunction amplitude and move over to using cos S as its replacement.

This ends all the doublethink and mathematical duplicity that have held up the development of quantum field theory for the past 80 years. Each path now has no complex vector, just the scalar amplitude cos S. The path integral produces precisely the same checkable cross-sections and probabilities with cos S as with exp(iS). However, we are now dealing with real spacetime, not complex space, so the mathematical barriers to axiomatic progress and unification with gravity are eliminated.

The drawback is that there is a great deal of “genius” invested in exp(iS) and the complex Schroedinger equation, and we can expect a great deal of hostility to progress by replacing exp(iS) with cos S. Mathematical geeks (Pythagorean cult worshippers) like Ed Witten will not find my humble suggestion praiseworthy, but destructive to educational syllabuses and existing textbooks, and confusing to students. It would make physics less arcane, less mysterious, less attractive to B-grade pure mathematics students. You would get more technician-calibre* Michael Faradays getting into the ivory towers and upsetting the status quo by making discoveries “out of turn”. Physics might start making some real, revolutionary progress again, like it did in the 1920s.* Tragic for the old guard.

_____
*Politically incorrect footnote: a tragedy nearly occurred with Oliver Heaviside, who turned Maxwell’s differential equations into vector calculus without bringing any kudos to Oxbridge (or any academia), but fortunately Sir William Preece had Heaviside censored out when Heaviside started including in published papers sarcastic “bitter” jokes at the expense of perplexed leading Oxbridge-educated academics. (Here’s a beautiful specific example from a published article of Heaviside, reprinted in his book Electromagnetic Theory, vol 1, 1893, p337: “Internal obstruction and superficial construction … If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday. Beetles could do that. Besides, the old practitioner [any so-called “professional” scientist in general as well] is apt to measure the value of science by the number of dollars he thinks it is likely to bring into his pocket, and if he does not see the dollars, he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaky manner [plagiarism], not unmixed with bluster, and make believe he knew about it when he was a little boy! He sees a prospect of dollars in the distance, that is the reason. The perfect obstructor [“peer”-review bias] having failed, try the perfect conductor. … Prof. Tait [the famed quaternionic hyper] says he cannot understand my vectors, though he can understand much harder things. But men who have no quaternionic prejudices can understand them, and do.”) As another example, Dirac studied electrical engineering at Bristol University (which also taught bricklaying and shoemaking) before coming up with the Dirac spinor (the foundation of quantum field theory), but despite his arguments with Heisenberg over whether 1st quantization QM was a subject “closed” for all time or not, at least he was politically correct enough to end up Lucasian Professor of Mathematics at Cambridge from 1932 to 1969. The brilliant new groupthink ideology is to encourage a diversity of ideas in physics by eliminating anybody who doesn’t think within the (existing flawed status quo) box. The elimination technique is based on mathematical sophistry. If you accept superfluous unobservables and use them to hold back progress, all is well. :-)

Further discussion:

From: Nige Cook
To: Mario Rabinowitz
Sent: Wednesday, February 15, 2012 8:28 PM
Subject: Re: I was just skimming your paper, “U(1) × SU(2) × SU(3) quantum gravity successes.”

… My approach is that the Schroedinger equation is misleading because it only has a single wavefunction and was an ad hoc model formulated before the path integral. People cling on to vestiges long after the reason for them has disappeared. In 2nd quantization you don’t need exp(iS), because cos S does the same job, better, avoiding Hilbert space (Haag’s theorem). If you accept the necessity for a path integral, then each path has a separate wavefunction, and as Feynman explains in QED (his lucid 1985 book), multipath interference between many wavefunctions – one for each path – produces all indeterminacy. There is no intrinsic indeterminacy. All indeterminacy is due to multipath interference. Keeping 1st quantization vestiges in place after 2nd quantization had made them unnecessary obfuscations is like Copernicus’s attempt to retain epicycles in the solar system: it is a half-baked mainstream theory.

Love is an ex-USAF pilot with a maths PhD, and he emailed me a paper called “Towards an Einsteinian Quantum Theory”, which tries to replace the Standard Model; however, he doesn’t seem to find any problem with U(1) electrodynamics, and just replaces the SU(2) and SU(3) weak and strong gauge group symmetries.

My approach is the opposite. There is an enormous amount of evidence for SU(2) weak and SU(3) strong symmetries. The problem, I find, is U(1) electrodynamics, which is really a disguised SU(2) Yang-Mills symmetry. You can see the SU(2) nature of electrodynamics both in Dirac’s SU(2) spinor of relativistic QED, and in the asymmetry in Maxwell’s vector calculus equations: div.B = 0 is not matched by div.E = charge density per unit permittivity. It really seems that magnetic fields are not a U(1) symmetry but an SU(2) symmetry, deriving from spin. This lack of magnetic monopoles is an asymmetry between electricity and magnetism, analogous to the left-handed asymmetry (parity violation) in the weak interaction, when electromagnetism is represented by a massless-boson SU(2) symmetry. Weyl actually predicted in 1929 that Dirac’s spinor (Weyl’s spinor) breaks parity in electromagnetic interactions, although he didn’t interpret this physically as the lack of magnetic monopoles in Maxwell’s equations, and Pauli dismissed it. Parity violation was only confirmed for weak interactions (beta decay) in the late 1950s, and nobody bothered to see if electromagnetism could be derived from SU(2) Yang-Mills with massless gauge bosons.

Instead of unifying electromagnetism and weak interactions by making electromagnetism an SU(2) Yang-Mills theory which reduces to an asymmetric U(1) Maxwell theory due to the massless bosons in electromagnetism (which prevent the charge-transfer term in the Yang-Mills equation from operating), the simplistic mainstream (wooden mathematics) approach has been to “predict (non-observed) magnetic monopoles”, and despite failing to discover magnetic monopoles in searches, to continue looking and hyping the “prediction” (analogous to the politically convenient “search” for cosmic strings). Maxwell’s original 1861 paper, “On Physical Lines of Force”, as quoted in my paper, argues that magnetic fields are just the angular momentum of field quanta spin. Maxwell used vacuum vortices, not field quanta, which did not arise until QED was developed to the stage of Moller scattering theory due to virtual photon exchange. The virtual photons will convey magnetic fields by spin angular momentum.

Update (1 March 2012)

Mathematician Dr Marni Sheppeard has closed her Arcadian Pseudofunctor blog, one post after commenting (sarcastically) that the scientific conference disclaimer “In particular, no bona fide scientist will be excluded from participation on the grounds of national origin, nationality, or political considerations unrelated to science” is “cute”. “Hubris” is perhaps the best word for the censorship of politically incorrect nascent science by elite greasy-pole-climbing geniuses who use what they call “science” to fill their wallets. Great, I say, just be careful to give honest results in return for your wages. What is wrong is not just groupthink science or the politics of science that comes from commercializing research with fancy PR conferences, fancy brochure magazine journals, and other elitist advertising, but the corruption of fashion and orthodoxy in frontier research which gradually creeps into science indirectly as a result, and the labelling of that corruption as “science methodology”. Like Orwellian Big Brother politics, once you have an establishment which knows its heart is in the right place, it finds making excuses for extending the corruption very easy, just as it finds it very easy to keep making promises to discover new exciting epicycles if the taxpayer or big business stumps up ever more cash.

Eventually, its defensiveness in labelling all critics as conspiracy theorists, merely for suggesting that the existing research directions are failures which are being pursued because they bring in research grants from deceived sponsors, starts to look like bitter paranoia, even to Brezhnev-era jobsworths who would rather be verbally crucified by long-oppressed dissenters and critics than be disloyal to their dear Party Comrade. Another post of Dr Sheppeard’s quotes a string theorist: “Is it really the case that there are brilliant loners out there and that there is some kind of conspiracy by the physics ‘establishment’ to prevent their voices being heard?” Again, too much of this kind of defensiveness can eventually sound like bitter or paranoid hubris. By analogy, if the medical establishment is reducing suffering in return for taxpayers’ cash grants, then fine. But if it were to go off into some kind of alternative therapy for 30 years which failed to achieve any checkable evidence in that time, and then started to burn critics merely for suggesting that alternative nascent ideas exist that have been starved of funding, then the credibility and respect of the public for that medical establishment might be affected. “Weak point, shout louder” is advice that has a limited shelf-life; after that it looks like propaganda, or even the dictatorship of a band of corrupted, self-deceived geniuses.

But maybe I’m completely wrong about this. I hope so.