The Standard Model and Quantum Gravity: Identifying and Correcting Errors (part 1)



Above: spin-1 quantum gravity illustration from the old 2009 version of quantumfieldtheory.org (a PDF linked here, containing useful Feynman quotations about this). To hear a very brief, tongue-in-cheek Feynman talk on spin-1 graviton mechanism problems, please click here.


Above: the dilemma of "looking clever" or being humble and honestly searching for the facts, no matter how "heretical" or unexpected they turn out to be. This review of Surely You're Joking, Mr. Feynman! is a lot better than the autobiography itself, which rambles on and needs severe editing for busy readers, like all of Feynman's books. Feynman relates several incidents that led him to the conclusion that a major source of error in fashionable consensus is groupthink. Working on the bomb at Los Alamos, he found he could break into any secret safe very easily. People left the last digit of their combination on the lock dial, and he could extrapolate the other digits using logic about the simple mind of the physicist or mathematician. E.g., a 3-digit combination safe showing 7 implies the combination 137, 4 implies the combination 314, 1 implies 271, and so on. When a very complex safe belonging to a military top brass was opened very quickly by the locksmith at Los Alamos, Feynman spent weeks getting to know the guy to find out the "secret". It turned out that there was no magic involved: the combination that opened the safe was simply the safe manufacturer's factory setting, which the top brass hadn't got around to changing! Feynman was then told by a painter that he made yellow by mixing white and red paint, which sounded like "magic". After a mishap (pink), he went back to the painter, who informed him he added yellow to the mixture to give it the right tint. Another time, he was falsely accused of being a magician for fixing radios by switching over valves/vacuum tubes (they used the same kind of vacuum tube in different circuits, so an old output amplifier tube which was failing under high current could be swapped with a similar valve used for lower currents in a pre-amplifier circuit, curing the problem). In a later book, What Do You Care What Other People Think?, Feynman describes his time on the Presidential commission investigating NASA's January 1986 Challenger explosion. Upon close inspection, Challenger was blown up not through some weird mathematical error in the fashionable magical "rocket science" that is supposedly beyond mortal understanding, but through regular groupthink delusion: the low-level engineers and technicians in charge of the O-rings knew that rubber turns brittle at low temperatures in cold weather, that brittle rubber O-rings sealing the Challenger booster rockets would leak fuel as the rocket vibrated, and that gravity and air drag would cause the leaking fuel to run towards the rocket flames, blowing it up.

However, those technicians who knew the facts had Orwellian doublethink and crimestop: if they made a big scene in order to insist that the Challenger space shuttle launch be postponed until warmer weather, when the rubber O-ring seals in the boosters would be flexible and work properly, they would infuriate their NASA bosses at launch control and all the powerful senators who had turned up to watch the Challenger take off, so the NASA bigwigs might give contracts to other contractors in future. They would be considered un-American fear-mongers, decrepit incompetent fools with big egos. It was exactly the same for the radar operators and their bosses at Pearl Harbor. There are no medals given out for preventing disasters that aren't obvious threats splashed over the front pages of the Washington Post. It was not 100% certain the shuttle would explode anyway. So they crossed their fingers, said little, and nervously watched Challenger blow up on TV. Feynman was told the truth not by fellow committee investigator Neil Armstrong, or by any NASA contractor (they were just as good at covering up afterwards as keeping quiet beforehand), but by the military missile expert who investigated the 1980 Arkansas Titan military missile explosion. Feynman used a piece of rubber and a plastic cup of iced water to expose the cause at a TV news conference, but the media didn't want to know about the corruption of science and the peer-reviewed risk-prediction rubbish in NASA's computers and groupthink lies. His written report was nearly censored out, despite the committee chairman being a former student! It was included as a minority report, Appendix F, which concluded that NASA safety analyses were a confidence trick for public relations:

“… reality must take precedence over public relations, for Nature cannot be fooled.”

Nobel Laureate Professor Brian Josephson emailed me (exchanged email PDFs are located here and here) that he used 2nd quantization in his Nobel Prize QM calculations, but is still stuck in 1st quantization groupthink when it comes to "wavefunction collapse" in the EPR paradox! Er, Brian, nobody has ever seen an epicycle or a wavefunction! Nobody has ever measured an epicycle or wavefunction! Schrodinger guessed i*h-bar*d{Psi}/dt = H*Psi. This is a complex transmogrification of Maxwell's displacement current law, {energy transfer rate} = constant*dE/dt (for energy transfer via "electric current" flowing through the vacuum by an electric field E effect, akin to half a cycle of a radio wave). Note that i*h-bar*d{Psi}/dt = H*Psi is a relativistic equation (it is only non-relativistic when his non-relativistic Hamiltonian H for energy is inserted; Dirac's equation is no different from Schroedinger's except in replacing H with a relativistic spinor Hamiltonian that includes particle spin, hence making the law relativistic). Dirac later showed that i*h-bar*d{Psi}/dt = H*Psi is "analogous to" its solution, Psi_t/Psi_0 = exp(-iHt/h-bar), which Feynman modified by replacing -Ht with the action S (defined in units of h-bar), so the "wavefunction" (epicycle) varies in direct proportion to exp(iS). This creates the complex circle (rotation of a unit length vector on an Argand diagram, as a cyclic function of S). Feynman in his 1985 book QED reduced this exp(iS) using Euler's "jewel" to simply cos S, where the lagrangian for S is expressed so that the direction of the vector is fixed as the relativistic axis (the relativistic axis is simply the direction of the arrow for the path of zero action, S = 0, because the "relativistic action" is actually defined as that action which is invariant to a change of coordinates!). So we now have the "reinvented wheel" called a Euclidean circle, whose resultant in the on-shell or relativistic axis is simply the scalar amount cos S for each path. This gets rid of complex Hilbert space and with it Haag's theorem as an objection to the mathematical self-consistency of renormalized QFT. All photons have 4 polarizations (like virtual or off-shell photons), not just 2 polarizations (as presumed from direct measurements). The extra 2 polarizations determine the cancellations and additions of phases: there is no "wavefunction collapse upon measurement". The photon goes through both slits in Young's experiment and interferes with itself, with no need for an observer. As Feynman writes in QED (1985), we don't "need" the 1st quantization "uncertainty principle" if we sum the paths.
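The "cos S with the axis chosen along the classical path" claim above is easy to check numerically. Here is a minimal sketch (my own, with made-up path actions, not anything from Feynman's text): summing the Argand-diagram arrows exp(iS) and then re-measuring each phase against the direction of the resultant gives exactly the same final arrow length as a plain sum of cosines, with no imaginary axis needed.

```python
import cmath, math, random

# Minimal sketch with made-up path actions S (in units of h-bar): the length of the
# resultant of the complex arrows exp(iS) equals a plain sum of cos(S - theta), where
# theta is the direction of the resultant (the "least action" reference axis).
random.seed(1)
actions = [0.1 * k + 0.02 * random.random() for k in range(50)]  # hypothetical actions

resultant = sum(cmath.exp(1j * S) for S in actions)  # Argand-diagram (complex) sum
theta = cmath.phase(resultant)                       # direction of the final arrow

cos_sum = sum(math.cos(S - theta) for S in actions)  # real-plane version: cosines only

print(abs(resultant))  # length of the final arrow from the complex sum
print(cos_sum)         # identical length, recovered without any imaginary axis
```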


Above: here we have Feynman pushed to explain why similar poles of magnets repel, using it as an excuse to talk about why ice is slippery and why good husbands call an ambulance for their wives who slip on the ice and break their hip, unless they are drunk and violent. He does end up saying that he can't explain why magnets repel in terms of anything else with which the non-mathematician is familiar. However, in his 1985 book QED he explains that virtual photons are exchanged between magnets, and this process creates the magnetic force field. The problem for Feynman was knowing what the virtual photon wavefunction means physically. In the 1985 book, he draws pictures of a rotating arrow accompanying each virtual photon, which rotates in step with the frequency of oscillation of the photon, i.e. each oscillation of the virtual photon is accompanied by a full rotation of the phase factor (which is the "hidden variable" behind the so-called "wavefunction", itself just an epicycle from 1st quantization, with no direct physical reality behind it, despite obfuscation efforts from the "nobody understands quantum mechanics"-Gestapo and the "parallel worlds" fantasy of Hugh Everett III and, with varying laws of nature, the 10^500 parallel universes of the superstring theory Gestapo/arXiv "peer"-reviewers).

Above: like Dr Zaius said to Charlton Heston in 1968, don't search for the facts if you have a weak stomach. It might turn out that a "unified theory" is analogous to merely a bunch of bananas, so many groupthink "bury my head in the sand" simplicity-deniers will feel sub-ape because they can't, don't, and won't be assed to put two sticks together to reach the facts. A pretty good example, discussed in detail in one way later in this post and in other ways two posts back, is Einstein's relativity, which has multiple levels of explanation. The strongest formulation of relativity is the statement that our laws of motion must give the same predictions regardless of the chosen reference frame, i.e. we get the same prediction whether the reference frame is that of the earth or that of the sun. This makes the laws "invariant" of the selected reference frame. Then there are progressively weaker formulations of relativity, used in "simplified" explanations for the layman, such as "there is no spacetime fabric, there is nothing in space which can produce forces", or "relativity doesn't say an absolute reference frame is unnecessary for doing our sums, relativity actually disproves the existence of any absolute reference frame!"

These "simplified" relativism "explanations" are a continuation of the best traditions of the Egyptian priesthood and the Pythagorean mathematical cult. The objective of such science is to act as a magician, to make the masses of the people believe whatever you say: "trust me with political power, I'm a scientist!" Then you trust them and you get mishaps, because they turn out to be humans, or more often than not, subhumans, even subape! Hence the mishaps of caloric, phlogiston, Maxwell's mechanical gear cog aether, Kelvin's stable vortex atom, Piltdown Man, peer-review, unprecedented climate change, nuclear winter theory, lethal cobalt bomb theory, superstring, etc. Groupthink science is not the kind of thing Newton and Darwin were doing, or Feynman was doing before and during the 1948 Pocono conference. Groupthink science education doesn't train people to put up with the taunts for doing unorthodox revolutionary work, so progress is slow and halting: the "alternative ideas" are developed slowly, with the mainstream ignoring them.

Above: Dr Zaius is alive and well, ensuring that consensus censors facts, as shown in the BBC propaganda programme Horizon: Science Under Attack, where groupthink pseudophysics is labelled "science" and the facts are dismissed because they have been censored out by "peer"-review pseudoscientific bigotry. Telegraph online journalist James Delingpole, who exposed to the world the "hide the decline" climategate email of Dr Phil Jones, is dismissed by Dr Zaius on the pretext that people must define science as the consensus of "peer"-reviewed literature. Great. So we can go on pretending that there is nothing to worry about, and using "peer"-review to prevent human progress. Ah, if only it were that easy to sweep the facts under the carpet or wallpaper over them. A PDF version of the errors in the BBC Horizon: Science Under Attack episode is located here, with additional relevant data (20 pages, 2 MB download). To read the 1960s background about Dr Zaius, see the wikipedia page linked here: "Zaius serves a dual role in Ape society, as Minister of Science in charge of advancing ape knowledge, and also as Chief Defender of the Faith. In the latter role, he has access to ancient scrolls and other information not given to the ape masses. [Dr Phil Jones and the FOIA/Freedom of Information Act "harassment" controversy.] Zaius … blames human nature for it all. Zaius seems to prefer an imperfect, ignorant ape culture that keeps humans in check, to the open, scientific, human-curious one … The idea of an intelligent human … threatening the balance of things frightens him deeply."

“The common enemy of humanity is man. In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself.” – Club of Rome, The First Global Revolution (1993). (That report is available here, a site that also contains a very similar but less fashionable pseudoscientific groupthink delusion on eugenics.)

The error in the Club of Rome's groupthink approach is the lie that the common enemy is humanity. This lie is the dictatorial approach taken by paranoid fascists, both on the right wing and the left wing, such as Stalin and Hitler. (Remember that the birthplace of fascism was not Hitler's Germany, but Italy in October 1914, when the left-wing, ex-socialist Mussolini joined the new Revolutionary Fascio for International Action after World War I broke out.) The common enemy of humanity is not humanity but fanaticism, defined here by the immoral code: "the ends justify the means". It is this fanaticism that is used to defend exaggerations and lies for political ends. Exaggeration and lying about weapons effects in the hope that it will be justified by ending war is also fanaticism. Weapons effects exaggerations both motivated aggression in 1914, and prevented early action against Nazi aggression in the mid-1930s.

From: Phil Jones

To: “Michael E. Mann”
Subject: HIGHLY CONFIDENTIAL
Date: Thu Jul 8 16:30:16 2004

… I didn’t say any of this, so be careful how you use it – if at all. Keep quiet also that you have the pdf. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is! …

– Dr Phil Jones to Dr Michael Mann, Climategate emails, July 8th 2004.

For NASA’s “peer-review” suppression of its own climate research contractor, please see:

https://nige.files.wordpress.com/2011/02/dr-miskolczi-nasa-resignation-letter-2005.pdf

“Since the Earth’s atmosphere is not lacking in greenhouse gases, if the system could have increased its surface temperature it would have done so long before our emissions. It need not have waited for us to add CO2: another greenhouse gas, H2O, was already to hand in practically unlimited reservoirs in the oceans. … The Earth’s atmosphere maintains a constant effective greenhouse-gas content [although the percentage contributions to it from different greenhouse gases can vary greatly] and a constant, maximized, “saturated” greenhouse effect that cannot be increased further by CO2 emissions (or by any other emissions, for that matter). … During the 61-year period, in correspondence with the rise in CO2 concentration, the global average absolute humidity diminished about 1 per cent. This decrease in absolute humidity has exactly countered all of the warming effect that our CO2 emissions have had since 1948. … a hypothetical doubling of the carbon dioxide concentration in the air would cause a 3% decrease in the absolute humidity, keeping the total effective atmospheric greenhouse gas content constant, so that the greenhouse effect would merely continue to fluctuate around its equilibrium value. Therefore, a doubling of CO2 concentration would cause no net “global warming” at all.”

– https://nige.files.wordpress.com/2011/02/saturated-greenhouse-effect-fact.pdf page 4.

CO2 only drives climate change in NASA and IPCC computer climate fantasies when positive feedback from H2O water vapour is assumed. In the real world, there is negative feedback from H2O which cancels out the small effect of CO2 rises: the hot moist air rises to form clouds, so less sunlight gets through to surface air. Homeostasis! All changes in the CO2 levels are irrelevant to temperature variations. CO2 doesn't drive temperature; its effect is balanced by cloud cover variations. Temperature rises in the geological record have increased the rate of growth of tropical rainforests relative to animals, causing a fall in atmospheric CO2, while temperature falls kill off rainforests faster than animals (since rainforests can't migrate like animals), thus causing a rise in atmospheric CO2. These mechanisms for CO2 variations are being ignored. Cloud cover variations prevent useful satellite data on global mean temperature, the effects of cloud cover on tree growth obfuscate the effects of temperature, and the effects of upwind city heat output obfuscate CO2 temperature data from weather stations. Thus we have to look to sea level rise rates to determine global warming.

We've been in global warming for 18,000 years, during which time the sea level has risen 120 metres (0.67 cm/year mean, often faster than this mean rate). Over the past century, sea level has risen at an average rate of 0.20 cm/year, and even the maximum rate of nearly 0.4 cm/year recently is less than the rates humanity has adapted to and flourished with in the past. CO2 annual output limits and wind farms etc. are no use in determining the ultimate amount of CO2 in the atmosphere anyway: if you supplement fossil fuels with wind farms, the same CO2 simply takes longer to be emitted, maybe 120 years instead of 100 years. The money spent on lying "green" eco-fascism carbon credit trading bonuses can be spent on humanity instead.
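For what it is worth, the rates quoted above follow from one line of arithmetic (a trivial sketch using only the figures already given in this paragraph):

```python
# Arithmetic check of the figures quoted above: 120 m of sea level rise over ~18,000 years.
mean_rate_cm_per_year = 100 * 120.0 / 18_000
print(round(mean_rate_cm_per_year, 2))      # ~0.67 cm/year, as stated
print(0.20 / mean_rate_cm_per_year)         # the 20th-century average is ~30% of that mean
print(0.40 / mean_rate_cm_per_year)         # even the recent ~0.4 cm/year peak is ~60% of it
```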

For background info on how H2O cloud cover feedback cancelling CO2 variations on temperature has been faked in IPCC NASA “peer”-review bigotry, see http://www.examiner.com/civil-rights-in-portland/blacklisted-scientist-challenges-global-warming-orthodoxy and https://nige.files.wordpress.com/2011/02/the-saturated-greenhouse-effect-theory-of-ferenc-miskolczi.pdf:

(1) increased cloud cover doesn’t warm the earth. True, cloud cover prevents rapid cooling at night. But it also reduces the sunlight energy received in the day, which is the source of the heat emitted during the night. Increase cloud cover, and the overall effect is a cooling of air at low altitudes.

(2) rainfall doesn’t carry latent heat down to be released at sea level. The latent heat of evaporation is released in rain as soon as the droplets condense from vapour, at high altitudes in clouds. Air drag rapidly cools the drops as they fall, so the heat is left at high altitudes in clouds, and the only energy you get when the raindrops land is the kinetic energy (from their trivial gravitational potential energy).

Obfuscation of the fact that hot moist air rises and condenses to form clouds from oceans that cover 70% of the earth (UNLIKE any "greenhouse"!) caused this whole mess. IPCC models falsely assume that H2O vapour doesn't rise and condense into clouds high above the ground: they assume hot air doesn't rise! That's why they get the vital (FALSE) conclusion that H2O vapour doubles (amplifies) projected temperature rises from CO2, instead of cancelling them out! Their models are plain wrong.




Above: electric current is essentially displacement current in disguise. Juice in Joules coming out of wires isn't due to the 1 mm/second drift of conduction band electrons, so much as to the Heaviside energy current. Moreover, charge up a capacitor which has a vacuum for its "dielectric", and energy flows in at light velocity, has no mechanism to slow down, and when discharged flows out at light velocity in a pulse twice the length of the capacitor and with just half the voltage (PD) of its static charged state. It turns out that the simplest way to understand electricity is as electromagnetic energy, so we're studying the off-shell field quanta of QED, which makes the slow electric drift current more of a side-show than the main show. So by looking at IC's published cross-talk experiments, we can learn about how the phase cancellations work. E.g., Maxwell's wave theory of light can be improved upon by reformulating it in terms of path integrals.
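The "pulse twice as long, at half the voltage" claim is the classic pulse-forming-line result, and it can be illustrated with a toy bounce-diagram model of the Heaviside picture described above (a minimal sketch of my own, not taken from any of the cross-talk papers): treat the statically charged line as two counter-propagating energy currents of V/2 each, discharge the left end into a matched load, and watch the output.

```python
# Toy bounce-diagram sketch: a lossless line charged to V volts is modelled as two
# counter-propagating "energy current" components of V/2 each. Discharging into a
# matched resistor at the left end gives a flat V/2 pulse lasting two transit times,
# i.e. a pulse twice the line's electrical length at half the static voltage.
N = 10                      # cells along the line (one cell = one time step of transit)
V = 1.0                     # static charged voltage
right = [V / 2] * N         # rightward-travelling component
left = [V / 2] * N          # leftward-travelling component
output = []                 # voltage delivered to the matched load at the left end

for step in range(3 * N):
    output.append(left[0])          # matched load: the exiting wave is absorbed, not reflected
    left = left[1:] + [right[-1]]   # shift left-goers; the open right end reflects right-goers
    right = [0.0] + right[:-1]      # shift right-goers; nothing re-enters from the load end

print(output)   # V/2 for the first 2*N steps (a pulse twice the line length), then zero
```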

Above: Dr Robert D. Klauber in 2010 accidentally misrepresented Feynman's 1985 book QED in terms of Feynman's earlier (complex) phase factor! The actual idea of Feynman in his 1985 book dispenses with the Argand diagram and converts exp(iS) into cos S (where S is in units of h-bar, of course), as shown above. Notice that the path integral of cos S gives the resolved component of the resultant (final arrow) which lies in the x-direction only. To find the total magnitude (length) of the final arrow we simply have to choose the x-axis to be the direction of the resultant arrow, which is easy: the direction of the resultant is always that of the classical action, because the contributions to the resultant are maximized by the coherent summation of paths with the least amounts of action (the classical laws correspond to least action!). In other words, we don't need to find the direction of the quantum field theory resultant arrow in the path integral, we only need to find its length (scalar magnitude). We easily know the arrow's direction from the principle of least action, so the work of doing the path integral is then just concerned with finding the length, not the direction, of the resultant arrow. In practice, this is done automatically by the relativistic formulation of the Lagrangian for action S. The definition of the path of least action as the real or "on shell" relativistic path automatically sets up the path integral coordinate system correctly.

“… every particle is associated with waves and these waves may be considered as a field. … very close to the charges that are producing the fields, one may have to modify Maxwell’s field theory so as to make it a non-linear electrodynamics. … with field theory, we have an infinite number of degrees of freedom, and this infinity may lead to trouble [Haag’s theorem implies that the renormalization process for taking account of field polarization is ambiguous and flawed if done in the complex, infinite dimensional “Hilbert space”]. We have to solve equations in which the unknown … involves an infinite number of variables [i.e., an infinite number of Feynman diagrams for a series of ever more complicated quantum interactions, which affect the classical result by an ever increasing amount if the field quanta are massive and significantly charged, compared to the charges of the on-shell particles whose fields they constitute]. The usual method … is to use perturbative methods in which … one tries to get a solution step by step [by adding only the first few terms of the increasingly complicated infinite number of terms in the perturbative expansion series to the path integral]. But one usually runs into the difficulty that after a certain stage the equations lead to divergent integrals [thus necessitating an arbitrary “cutoff” energy to prevent infinite field quanta momenta occurring, as you approach zero distance between colliding fundamental particles].”

– Paul A. M. Dirac, Lectures on Quantum Mechanics, Dover, New York, 2001, pages 1, 2, and 84.

Above: Feynman's 1985 book QED is actually an advanced and sophisticated treatment of path integrals without mathematics (replacing complex space with real plane rotation of a polarization plane during motion along every possible path for virtual photons, as shown for reflection and refraction of light in the "sum over histories" given graphically above), unlike his 1965 co-authored Quantum Mechanics and Path Integrals. The latter however makes the point in Fig. 7-1 on page 177 (of the 2010 Dover reprint) that real particles only follow differentiable (smooth) "classical" paths when seen on macroscopic scales (where the action is much larger than h-bar): "Typical paths of a quantum-mechanical particle are highly irregular on a fine scale … Thus, although a mean velocity can be defined, no mean-square velocity exists at any point. In other words, the paths are nondifferentiable." The fact that real paths are actually irregular and not classical when looked at closely is what leads Feynman away from the belief in differential geometry, the belief for instance that space is curved (which is what Einstein argued in his general relativity tensor analysis of classical motion). The idea that real (on shell) particle paths are irregular on very small scales was suggested by Schroedinger in 1930, when arguing (from an analysis of Dirac's spinor) that the half integer spin of a fermion moving or stationary in free space can be modelled by a "zig-zag" path which he called "zitterbewegung", which is a light-velocity oscillation with a frequency of 2mc^2/h-bar or about 10^21 Hz for an electron. Zitterbewegung suggests that an electron is not a static particle but is trapped light-velocity electromagnetic energy, oscillating very rapidly.
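As a quick numeric check of the zitterbewegung figure quoted above (a one-liner of my own using standard constants):

```python
# Evaluate Schroedinger's zitterbewegung rate 2mc^2/h-bar for the electron.
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s

rate = 2 * m_e * c**2 / hbar
print(rate)          # ~1.6e21 per second, i.e. of the order of the quoted 10^21 Hz
```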

My recent comment to Dr Woit’s blog:

“One of my criticisms of the two organizations would be that they don’t support research of the sort that Witten has had success with, at the intersection of mathematics and quantum field theory.”

Do you think it possible that the future of quantum field theory could lie in a completely different direction, namely Feynman's idea of greater mathematical simplicity? E.g. the path integral sums many virtual particle path phase amplitudes, each of the form of Dirac's exp(-iHt) -> exp(iS). The sum over these histories is the path integral: the real resultant path is that for small actions S. Feynman in his 1985 book QED shows graphically how the summation works: you don't really need complex Hilbert space from the infinite number of Argand diagrams.

To plot his non-mathematical (visual) path integral for light reflecting off a mirror, Feynman shows that you can simply have a phase/polarization arrow rotate (in real, not complex, space) in accordance with the frequency of the light: what he does is to take Euler's exp(iS) = (i*sin S) + cos S and drop the complex term, so the phase factor exp(iS) is replaced with cos S, which is exactly the same periodic circular oscillation function as exp(iS), but with the imaginary axis replaced by a second real axis. E.g., a spinning plane of polarization for a photon! This gets rid of the objection of Haag's theorem, since you get rid of Hilbert space when you dump the imaginary axis for every history!
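A rough numeric sketch of that mirror picture (my own construction, with made-up geometry and units, not a reproduction of Feynman's figure): every point x on a mirror strip contributes an arrow cos(S_x - S_classical), with the phase set by the path length in wavelengths and the reference axis taken along the least-time path, exactly as described above. The strip near the middle of the mirror adds coherently; an equal-width strip near the edge oscillates and nearly cancels.

```python
import math

# Made-up geometry: source and detector 20 units above a mirror strip running from
# x = -100 to +100, wavelength 1 unit. Each bounce point x contributes cos(phase - phase0),
# where phase0 is the least-time (classical) path's phase, i.e. the reference axis.
wavelength = 1.0
src_x, det_x, height = -50.0, 50.0, 20.0

def phase(x):
    d1 = math.hypot(x - src_x, height)       # source to mirror point
    d2 = math.hypot(det_x - x, height)       # mirror point to detector
    return 2 * math.pi * (d1 + d2) / wavelength

dx = 0.02
xs = [i * dx for i in range(-5000, 5001)]    # finely sampled mirror strip
p0 = phase(0.0)                              # classical (least-time) reference phase

centre = sum(math.cos(phase(x) - p0) for x in xs if abs(x) < 10)   # 20-unit centre strip
edge = sum(math.cos(phase(x) - p0) for x in xs if 80 < x <= 100)   # 20-unit strip at the edge
print(centre, edge)   # the centre strip dominates; the edge strip largely cancels itself out
```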

Feynman makes the point in his Lectures on Physics that the origin of exp(iS) is the Schroedinger/Dirac equation for energy transfer via a rate of change of a wavefunction (Dirac’s of course has a relativistic spinor Hamiltonian), which just “came out of the mind of Schroedinger”. It’s just an approximation, a guess. Dirac solved it to get exp(-iHt) which Feynman reformulated to exp(iS). The correct phase amplitude is indeed cos S (S measured in units of h-bar, of course). Small actions always have phase amplitudes of ~1, while large actions have phase amplitudes that vary periodically in between +1 and -1, and so on average cancel out.

Are graphs mathematics? Are Feynman diagrams mathematics? Is mathematical eliticism (in the mindlessly complexity-loving sense) obfuscating a simple truth about reality?

Feynman points out in his 1985 book QED that Heisenberg's and Schroedinger's intrinsic indeterminacy is just the old QM theory of 1st quantization, which is wrong because it assumes a classical coulomb field, with randomness attributed to intrinsic (direct) application of the uncertainty principle, which is non-relativistic (Schroedinger's 1st quantization Hamiltonian treats space and time differently) and is unnecessary, since Dirac's 2nd quantization shows that it is the field that is quantized, not the classical coulomb field. Dirac's theory is justified by predicting magnetic moments and antimatter, unlike 1st quantization. The annihilation and creation operators of the quantized field only arise in 2nd quantization, not in Schroedinger's 1st quantization, where indeterminacy has no physical explanation in chaotic field quanta interactions:

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

“… Bohr [at Pocono, 1948] … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”

– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

Bohr and other 1st quantization people never learned that uncertainty is caused by field quanta acting on fundamental particles like Brownian motion of air molecules acting on pollen grains. Feynman was censored out at Pocono in 1948 and only felt free to explain the facts after winning the Nobel Prize. In the Preface to his co-authored 1965 book Quantum Mechanics and Path Integrals he describes his desire to relate quantum to classical physics via the least action principle, making classical physics appear for actions greater than h-bar. But he couldn’t make any progress until a visiting European physicist mentioned Dirac’s solution to Schroedinger’s equation, namely that the wavefunction’s change over time t is directly proportional to, and therefore (in Dirac’s words) “analogous to” the complex exponent, exp(-iHt). Feynman immediately assumed that the wavefunction change factor indeed is equal to exp(-iHt), and then showed that -Ht -> S, the action for the path integral (expressed in units of h-bar).

Hence, Feynman sums path phase factors exp(iS), which is just a cyclic function of S on an Argand diagram. In his 1985 book QED, Feynman goes further still and uses what he called in his Lectures on Physics (vol. 1, p. 22-10) the "jewel" and "astounding" (p. 22-1) formula of mathematics, Euler's equation exp(iS) = i (sin S) + cos S, to transfer from the complex to the real plane by dropping the complex term, so the simple factor cos S replaces exp(iS) in his graphical version of the path integral. He explains in the text that the cos S factor works because it's always near +1 for actions small compared to h-bar, allowing those paths near (but not just at) least action to contribute coherently to the path integral, but varies cyclically between +1 and -1 as a function of the action for actions large compared to h-bar, so those paths on average cancel out each other's contribution to the path integral. The advantage of replacing exp(iS) with cos S is that it gets rid of the complex plane that makes renormalization mathematically inconsistent due to the ambiguity of having complex infinite dimensional Hilbert space, so Haag's theorem no longer makes renormalization a difficulty in QFT.

The bottom line is that Feynman shows that QFT is simple, not amazingly complex mathematics: Schroedinger’s equation “came out of the mind of Schroedinger” (Lectures on Physics). It’s just an approximation. Even Dirac’s equation is incorrect in assuming that the wavefunction varies smoothly with time, which is a classical approximation: quantum fields ensure that a wavefunction changes in a discontinuous (discrete) manner, merely when each quantum interaction occurs:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

“When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

“You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

“Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.”

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

Above: the path integral interference for light relies on the cancelling of photon phase amplitudes with large actions, but for the different case of the fundamental forces (gravitation, electromagnetism, weak and strong), the path integral for the virtual or "gauge" bosons involves a geometrical cancellation. E.g., an asymmetry in the isotropic exchange can cause a force! The usual objections against virtual particle path integrals of this sort are the kind of mindless arguments that would apply equally to any quantum field theory, not specifically this predictive one. E.g., physicists are unaware that the event horizon size for a black hole electron is smaller than the Planck length, and thus (by Planck's own style of argument) a radius of 2GM/c^2 is more physically meaningful as the basis for the grain-size cross-section for fundamental particles than Planck's ad hoc formulation of his "Planck length" from dimensional analysis. Quantum fields like the experimentally-verified Casimir radiation which pushes metal plates together don't cause drag or heating, they just deliver forces. "Critics" are thus pseudo-physicists!
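The black-hole-electron size claim above is just a plug-in-the-numbers comparison, so here is a short check (my own, using standard constants):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
m_e = 9.109e-31      # electron mass, kg

r_event = 2 * G * m_e / c**2              # event horizon radius 2GM/c^2 for the electron mass
l_planck = math.sqrt(hbar * G / c**3)     # Planck length from dimensional analysis

print(r_event)               # ~1.4e-57 m
print(l_planck)              # ~1.6e-35 m
print(r_event / l_planck)    # ~1e-22: the horizon radius is ~22 orders of magnitude smaller
```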

My great American friend Dr Mario Rabinowitz brilliantly points out the falsehood of Einstein's general relativity "equivalence principle of inertial and gravitational mass" as the basis for mainstream quantum gravity nonsense in his paper Deterrents to a Theory of Quantum Gravity, pages 1 and 7 (18 August 2006), http://arxiv.org/abs/physics/0608193. General relativity is based on the false equivalence principle of inertial and gravitational mass, whereby Einstein assumed that Galileo's law for falling bodies is exact, whereas of course it is only an approximation, because the mass in any falling body is not accelerated purely by Earth's mass, but is also "pulling" the Earth upwards (albeit by a small amount in the case of an apple or human, where one of the two masses is tiny compared to the mass of the Earth). But for equal masses of fundamental particles (e.g. for the simplest gravitational interaction of two similar masses) this violation of Galileo's law due to mutual attraction violates Einstein's equivalence principle, as explained below by Dr Rabinowitz. Einstein forgot about the error in Galileo's principle when formulating general relativity on the basis of the equivalence principle (genuine errors are not a crime; the real crime against progress in science was arrogantly using media personal worship as a sword to "defend" those errors against genuine competent critics for another 40 years, a defence which continues to this day):

“As shown previously, quantum mechanics directly violates the weak equivalence principle in general and in all dimensions, and thus violates the strong equivalence principle in all dimensions. …

“Most bodies fall at the same rate on earth, relative to the earth, because the earth’s mass M is extremely large compared with the mass m of most falling bodies for the reduced mass … for M [much bigger than] m. The body and the earth each fall towards their common center of mass, which for most cases is approximately the same as relative to the earth. … When [heavy relative to earth’s mass] extraterrestrial bodies fall on [to] earth, heavier bodies fall faster relative to the earth [because they “attract” the earth towards them, in addition to the earth “attracting” them; i.e., they mutually shield one another from the surrounding inward-converging gravity field of distant immense masses in the universe] making Aristotle correct and Galileo incorrect. The relative velocity between the two bodies is v_rel = [2G(m + M)(r2^-1 – r1^-1)]^1/2, where r1 is their initial separation, and r2 is their separation when they are closer.

“Even though Galileo’s argument (Rabinowitz, 1990) was spurious and his assertion fallacious in principle – that all bodies will fall at the same rate with respect to the earth in a medium devoid of resistance – it helped make a significant advance [just like Copernicus’s solar system with incorrect circular orbits and epicycles prior to Kepler’s correct elliptical orbits, or Lamarck’s incorrect early theory of “acquired characteristic” evolution paving the way for Darwin’s later genetic theory of evolution] in understanding the motion of bodies. Although his assertion is an excellent approximation … it is not true in general. Galileo’s alluring assertion that free fall depends solely and purely on the milieu and is entirely independent of the properties of the falling body, led Einstein to the geometric concept of gravity. [Emphasis added to key, widely censored, facts against GR.]
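To make Rabinowitz's point concrete, here is a small numeric illustration (my own, with assumed round-number inputs) using the v_rel formula quoted above: the relative fall speed scales as the square root of (m + M), so a falling body whose mass is not negligible compared with the Earth's really does arrive faster relative to the Earth than an apple does.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24             # mass of the Earth, kg
r1, r2 = 7.0e6, 6.371e6  # assumed initial separation and closer separation, metres

def v_rel(m):
    """Rabinowitz's quoted relative speed when the two bodies close from r1 to r2."""
    return math.sqrt(2 * G * (m + M) * (1 / r2 - 1 / r1))

print(v_rel(1.0))                    # ~3.4 km/s for an apple-sized mass: the Galileo result
print(v_rel(7.35e22))                # a Moon-mass body closes measurably faster
print(v_rel(7.35e22) / v_rel(1.0))   # ratio = sqrt(1 + m/M) > 1, violating "equal fall rates"
```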

“Einstein and his successors have regarded the effects of a gravitational field as producing a change in the geometry of space and time. At one time it was even hoped that the rest of physics could be brought into a geometric formulation, but this hope has met with disappointment, and the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like “metric,” “affine connection,” and “curvature,” but is not otherwise very useful. The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effect of gravitational fields on the motion of planets and photons or to a curvature of space and time.”

– Professor Steven Weinberg, Gravitation and Cosmology, Wiley, New York, 1972, p. 147.

Above: “Could someone please explain how or why, if, as SR tells us, c is the ceiling velocity throughout the Universe, and thus gravity presumably cannot propagate at a speed faster than the ceiling velocity, the Earth is not twice as far away from the Sun every thousand years or so which is the obvious consequence of gravity propagating at such a low speed as c and not, as everyone since Newton had always supposed, near-instantaneously?” – James Bogle (by email). Actually this supposed problem is down to just ignoring the facts: gravity isn’t caused by gravitons between Earth and Sun; it’s caused instead by exchange of gravitons between us and the surrounding immense distant masses isotropically distributed around us, with particles in the Sun acting as a slight shield.

If you have loudspeakers on your PC and use an operating system that supports sound files in websites, turn up the volume and visit www.quantumfieldtheory.org. The gravitons are spin-1, not spin-2, so the mechanism is not gravitons being exchanged just between the sun and earth; the sun instead acts as an asymmetry in the all-round exchange. The speed at which "shadows" move is not light speed but infinite, because shadows don't exist physically as light-velocity moving radiation! The sun causes a pre-existing shadowing of gravitons ahead of any position that the earth moves into, so the speed of the gravitons has nothing to do with the speed at which the earth responds to the sun's gravity. The sun sets up an anisotropy in the graviton field of space in all directions around it, in advance of the motion of the earth. The mainstream "gravity must go at light speed" delusions are based purely on the ignorant false assumption that the field only exists between earth and sun. Wrong. The sun's "gravity field" (an anisotropy in the graviton flux in space from distant immense masses) is pre-existing in the space ahead of the motion of the planet, so the speed of gravitational effects is instant, not delayed.

The exchange of gravitons between masses (gravitational charges) has a repulsive-only effect. The Pauli-Fierz spin-2 graviton is a myth; see the following two diagrams. The spin-1 gravitons we're exchanging with big distant masses produce a bigger repulsion from those massive distant masses (which are very isotropically distributed around us, in all directions) than from nearby masses, so the nearby masses have an asymmetric LeSage shadowing effect: we're pushed towards them with the very accurately predicted coupling G (see the diagrams below for quantitative proof) by the same particles that cause the measured cosmological acceleration ~Hc. So we have an accurate quantitative prediction, connecting the observed cosmological acceleration with the gravitational coupling G, and this connection was made in May 1996 and was published two years before the cosmological acceleration was even discovered! Notice that the acceleration and expansion of the universe is not an effective expansion! I.e., as discussed later in this post, all the fundamental forces have couplings (e.g. G, alpha_EM, etc.) that are directly proportional to the age of the universe. This is implied by the formula derived below, which is the correct quantum gravity proof for Louise Riofrio's empirical equation GM = tc^3, where M is the mass of the universe and t is its age.
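A rough order-of-magnitude check of the two quantitative claims in that paragraph (my own sketch, with an assumed round-number Hubble parameter of ~70 km/s/Mpc; nothing here depends on the precise value):

```python
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s

H = 70 * 1000 / 3.086e22   # Hubble parameter, ~70 km/s/Mpc converted to 1/s
t = 1 / H                  # flat-spacetime age estimate, ~4.4e17 s

print(H * c)               # a ~ Hc ~ 7e-10 m/s^2, the order of the observed acceleration
print(t * c**3 / G)        # M from GM = t*c^3: ~2e53 kg, of the order of the universe's mass
```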

Edward Teller in 1948 made the erroneous claim that any variation of G (which had been predicted, in an error-filled guesswork way, by Dirac) is impossible because it would vary the fusion rate in the big bang or in a star, but actually a variation of G does not have the effect Teller calculated, because (as shown in this and several earlier posts) all fundamental couplings are varying, electromagnetic as well as gravity! Hence Teller was incorrect in claiming that a doubling of G increases fusion via proton-proton gravitational compression in a star or in big bang fusion: it can't do that, because the force of electromagnetic repulsion between colliding protons is increased by exactly the same factor as the gravitational force is increased! Therefore, fusion rates are essentially unaffected (uncharged contributions like radiation pressure have a relatively small effect). Louise's suggestion of a variation of c as the reciprocal of the cube-root of the age of the universe in her equation is spurious: her GM = tc^3 is instead evidence of a direct proportionality of G to the age of the universe t (the full reasons are explained in earlier posts). Now, galaxies, solar systems, atoms and nuclei are all orbital systems of stars, planets, shells of electrons and shells of nucleons, respectively, with their size controlled by fundamental forces through equations like F = m1*m2*G/r^2 = m2*v^2/r, i.e. m1*G/r = v^2, so if m1 and v are constants while the coupling G (or a Standard Model force coupling like alpha_EM) is directly proportional to the age of the universe, it follows that G must be directly proportional to radius r, so that the radius of a galaxy, solar system, atom or nucleus is directly proportional to the age of the universe t. If the horizon radius of the flat spacetime universe and the radii of galaxies, solar systems, atoms and nuclei are all directly proportional to t, it follows that although distant matter is receding from us, the expansion of all objects prevents any relative change in the overall (scaled) universe: rulers, people, planets, etc. expand at the same rate, so receding galaxy clusters will not appear smaller. You might think that this is wrong, and that increasing G and alpha_EM should pull the Earth's orbit in closer to the sun, and do the same for the electron. However, this more obvious solution of increasing orbital velocities is not necessarily consistent with the very slow rate of increase of G, so it is more consistent to think of a scaling up of the sizes of everything as force strengths increase: a stronger gravitational field can stabilize a galaxy of larger radius but containing the same mass! It is possible that a stronger alpha_EM can stabilize a larger electron ground state radius; whether this is the case depends on whether or not the orbital velocity is altered as the electromagnetic coupling is varied.

However, of course, the Lambda-CDM cosmological model, basically a Friedmann-Robertson-Walker metric from general relativity which implicitly assumes constant G, is totally incorrect viewed from the new quantum gravity theory. Is spacetime really flat? From a naive extrapolation using the false old framework of cosmology, hyped by Sean Carroll and other bigots who refuse to accept these facts of quantum gravity proved over a decade ago, you might expect that the linear increase of G with the age of the universe will cause the universe to eventually collapse. However, remember that the cosmological acceleration (a repulsion that supposedly flattens out the spacetime curvature on cosmological distance scales by opposing gravitation) is itself a quantum gravity effect: on the largest scales, mutual repulsion of masses predominates over LeSage shadowing and its pseudo-attraction.

Nevertheless, back in 1996 we predicted the same cosmological acceleration using two completely different calculations, only one of which was from the quantum gravity theory. The other correct prediction of the a ~ Hc cosmological acceleration was simply from the effect of spacetime on the Hubble v = HR observed recession rate law (see proof linked here). That analysis is complementary to the similar prediction of cosmological acceleration from a different calculation method via quantum gravity: the cosmological acceleration should be viewed as an artifact of the fact that the receding galaxies we see are being seen at times in our past which are related to distances by R = ct. If we represent time since the big bang by t, then the time T in our past of a supernova apparently at distance R is related to t by simply t + T = 1/H. So the cosmological acceleration is just a result of the fact that radiation comes back to us at light velocity, not instantly. If there were no time delay, we wouldn't see any cosmological acceleration: the acceleration is physically being caused by the effective reference frame in which greater distances correspond to looking backwards in time. The universe's horizon radius expands at the velocity of light, a linear expansion. This produces cosmological acceleration forces and thus gravitation, due to the increasing time-lag for the exchange of all forms of radiation, including gravitons. At the same time, masses and rulers expand by the mechanism already explained, so the relative scale of the universe remains constant while gravitation and cosmological acceleration operate.

Correction of mainstream errors in Electroweak Symmetry

Over Christmas, Dr Dorigo kindly permitted some discussion and debate over electroweak symmetry at his blog posting comments section, http://www.science20.com/quantum_diaries_survivor/blog/rumors_about_old_rumor, which helped to clarify some of the sticking points in the mainstream orthodoxy and possibly to highlight the best means of overcoming them in a public arena.

Some arguments against electroweak symmetry follow, mostly from replies to Dr Dorigo and Dr Rivero. The flawed logic of the "Higgs boson" assumption is based on the application of gauge theory for symmetry breaking to the supposed "electroweak symmetry" (never observed in nature). Only broken "electroweak symmetry", i.e. an absence of symmetry and thus separate electromagnetic and weak interactions, has actually been observed in nature. So the Higgs boson required to break the "electroweak symmetry" is an unobserved epicycle required to explain an unobserved symmetry! What's interesting is the nature of the groupthink "electroweak symmetry". Above the "electroweak unification" energy, there is supposed to be equality of electromagnetic and weak forces in a single electroweak force. Supposedly, this is where the massive weak bosons lose their mass and thus gain light velocity, long range, and thus stronger coupling, equal in strength to the electromagnetic field.

This unification guess has driven other possibilities out of sight. There are two arguments for it. First, the breaking of Heisenberg's neutron-proton SU(2) chiral "isospin symmetry" leads to pions as Nambu-Goldstone bosons; so by analogy you can argue for Higgs bosons from breaking electroweak symmetry. This is unconvincing because, as stated, there is no electroweak symmetry known in nature; it's just a guess. (It's fine to have a guess. It's not fine to have a guess, and use the guess as "evidence" to "justify" another guess! That's just propaganda or falsehood.) Secondly, there is the supposed "electroweak theory" of Weinberg and others. Actually, it is better called a hypercharge-weak theory, since U(1) in the standard model is hypercharge, which isn't directly observable. The electromagnetic theory is produced by an adjustable epicycle (the Weinberg angle) that is forced to make the hypercharge and weak theories produce the electromagnetic field by ad hoc mixing. The prediction of the weak boson masses from the Weinberg angle isn't proof of the existence of an electroweak symmetry, because the weak bosons only have mass when the "symmetry" is broken. All evidence to date suggests that electroweak symmetry (like aliens flying around in UFOs) is just a fiction, the Higgs is a fiction, and mass is not generated through symmetry breaking. Yet so much hype based on self-deception continues.
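For reference, the weak boson mass "prediction from the Weinberg angle" mentioned above is just the standard tree-level relation; a short sketch (my own numbers, with an assumed on-shell sin^2(theta_W) of about 0.223) shows how the masses come out once that mixing angle is dialled in, with radiative corrections then shifting them by a few GeV:

```python
import math

alpha = 1 / 137.036          # fine-structure constant (low-energy value)
G_F = 1.1663787e-5           # Fermi constant, GeV^-2
sin2_thetaW = 0.223          # assumed on-shell weak mixing angle

A = math.sqrt(math.pi * alpha / (math.sqrt(2) * G_F))   # ~37.3 GeV
m_W = A / math.sqrt(sin2_thetaW)                        # tree-level W mass
m_Z = m_W / math.sqrt(1 - sin2_thetaW)                  # tree-level Z mass
print(m_W, m_Z)   # ~79 GeV and ~90 GeV at tree level (measured: ~80.4 and ~91.2 GeV)
```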

The funny thing about the Glashow-Weinberg-Salam model is that it was formulated in 1967-8, but was not well received until its renormalizability had been demonstrated years later by ‘t Hooft. The electroweak theory they formulated was perfectly renormalizable prior to the addition of the Higgs field, i.e. it was renormalizable with massless SU(2) gauge bosons (which we use for electromagnetism), because the lagrangian had a local gauge invariance. ‘t Hooft’s trivial proof that it was also renormalizable after “symmetry breaking” (the acquisition of mass by all of the SU(2) gauge bosons, a property again not justified by experiment because the weak force is left-handed so it would be natural for only half of the SU(2) gauge bosons to acquire mass to explain this handedness) merely showed that the W-boson propagator expressions in the Feynman path integral are independent of mass when the momentum flowing through the propagator is very large. I.e., ‘t Hooft just showed that for large momentum flows, mass makes no difference and the proof of renormalization for massless electroweak bosons is also applicable to the case of massive electroweak bosons.
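The point attributed to 't Hooft above, that mass in the propagator becomes irrelevant at large momentum, can be seen with a trivial numeric comparison (a sketch of my own, using the W mass for scale):

```python
# Ratio of a massive propagator 1/(q^2 - m^2) to the massless 1/q^2 as momentum grows.
m = 80.4   # W boson mass, GeV

for q in (100.0, 300.0, 1000.0, 10000.0):     # momentum scales in GeV
    ratio = (1.0 / (q**2 - m**2)) / (1.0 / q**2)
    print(q, ratio)   # tends to 1 for q >> m, so the massless renormalization argument carries over
```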

‘t Hooft plays down the trivial physical nature of his admittedly mathematically impressive proof since his personal website makes the misleading claim: “…I found in 1970 how to renormalize the theory, and, more importantly, we identified the theories for which this works, and what conditions they must fulfil. One must, for instance, have a so-called Higgs-particle. These theories are now called gauge theories.”

That claim that he has a proof that the Higgs particle must exist is totally without justification. He merely showed that if the Higgs field provides mass, the electroweak theory is still renormalizable (just as it is with massless bosons). He did not disprove all hope of alternatives to the Higgs field, so he should not claim that! He just believes in electroweak theory and won a Nobel Prize for it, and is proud. Similarly, the string theorists perhaps are just excited and proud of the theory they work on, and they believe in it. But the result is misleading hype!

I'm not denying that the interaction strengths run with energy and may appear to roughly converge when extrapolated towards the Planck scale. But you get too much noise from hadron jets in such collisions to get an unambiguous signal. Even if you just collide leptons at such high energy, hadrons are created in the pair production at such high energies, and then you're reliant on extremely difficult QCD jet calculations to subtract the gluon field "noise" before you can see any signals clearly from the relatively weak (compared to QCD) electromagnetic and weak interactions.
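On the "roughly converge" point, the standard one-loop running is easy to sketch (my own script, using the textbook Standard Model beta coefficients and approximate couplings at the Z mass, with alpha_1 in the usual GUT normalisation); the three inverse couplings drift towards each other at very high energy but do not meet exactly:

```python
import math

M_Z = 91.19                           # Z boson mass, GeV
inv_alpha_MZ = [59.0, 29.6, 8.5]      # approximate 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z
b = [41 / 10, -19 / 6, -7]            # one-loop Standard Model beta coefficients

def inv_alpha(Q):
    """One-loop running of the three inverse gauge couplings up to scale Q (GeV)."""
    t = math.log(Q / M_Z)
    return [a0 - bi * t / (2 * math.pi) for a0, bi in zip(inv_alpha_MZ, b)]

for Q in (1e3, 1e10, 1e16, 1.2e19):   # up to roughly the Planck scale
    print(f"{Q:.1e}", [round(x, 1) for x in inv_alpha(Q)])
```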

… continued with part 2 here
