Update (12 January 2010): around us, the accelerating mass of the universe causes an outward force that can be calculated by Newton’s 2nd law, which in turn gives an equal inward reaction force by Newton’s 3rd law. The fraction of that inward force which causes gravity is simply equal to the fraction of the effective surface area of the particle which is shadowed by relatively nearby, non-accelerating masses. If the distance R between the two particles is much larger than their effective radii r for graviton scatter (exchange), then by geometry the area of the shadow cast on surface area 4*Pi*r^{2} by the other fundamental particle is Pi*r^{4}/R^{2}, so the fraction of the total surface area of the particle which is shadowed is (Pi*r^{4}/R^{2})/(4*Pi*r^{2}) = (1/4)(r/R)^{2}. This fraction merely has to be multiplied by the inward force generated by distant mass m undergoing radial outward observed cosmological acceleration a, i.e. force F = ma, in order to predict the gravitational force. This is not the same thing as LeSage’s non-factual, non-predictive gas shadowing (which is to quantum gravity what Lamarck’s theory was to Darwin’s evolution, or what Aristotle’s laws of motion were to Newton’s, i.e. mainly wrong). In other words, the source of gravity and dark energy is the same thing: spin-1 vacuum radiation. Spin-2 gravitons are a red herring, originating from a calculation which falsely assumed that gravitons either would not be exchanged with distant masses, or that any effect would somehow cancel out or be negligible. Woit states:
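The shadow-fraction geometry just stated can be checked numerically; this is a minimal sketch using arbitrary sample values for r and R (with R much greater than r), not values from any measurement:

```python
from math import pi

# Sample values only: r is the effective graviton-scatter radius of a
# fundamental particle, R the separation between the two particles (R >> r).
r = 1.0e-57
R = 1.0e10

shadow_area = pi * r**4 / R**2       # shadow cast on the particle's surface
total_area = 4 * pi * r**2           # total effective surface area, 4*pi*r^2
fraction = shadow_area / total_area  # shadowed fraction of the surface

# Algebraically this reduces to (1/4)(r/R)^2:
print(fraction)
print(0.25 * (r / R)**2)
```

The two printed values agree, confirming the algebraic reduction quoted above.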

“Many of the most well-known theorists are pursuing research programs with the remarkable features that:

Although making the most basic quantum gravity predictions can be done with “high school mathematics”, the deeper gauge symmetry connection of quantum gravity to the Standard Model of particle physics does require more advanced mathematics, as does the job of deriving a classical approximation (i.e. a corrected general relativity for cosmology) to this quantum gravity theory, for more detailed checks and predictions. When Herman Kahn was asked, at the 1959 congressional hearings on nuclear war, whether he agreed with the Atomic Energy Commission name of “Sunshine Unit” for strontium-90 levels in human bones, he replied that although he engaged in a number of controversies, he tried to keep the number down. He wouldn’t get involved. Doubtless, Woit has the same policy towards graviton spin. What I’m basically saying is that the fundamental particle is the one causing cosmological repulsion, which has spin-1. This causes gravity as a “spin-off” (no pun intended). So if spin-1 gravitons are hard to swallow, simply rename them spin-1 dark energy particles! Whatever makes the facts easier to digest…

Above: the latest illustration (updated 27 September 2009) which has replaced the older illustration included in the post below. Improvements have been made.

We’re surrounded by immense visible, receding masses totalling *m* = 3 × 10^{52} kg (the Hubble Space Telescope gives an estimate of 9 × 10^{21} observable stars in the observable universe, with a mean mass assumed to be the solar mass of 2 × 10^{30} kg on the basis that the large population of dwarf stars balances out the population of stars whose mass is greater than the solar mass; the source of this estimate is page 5 of the NASA report linked here, so complain to NASA and the Hubble Space Telescope inventors if you don’t like scientific facts, not me!), which are accelerating radially away from us at acceleration *a = Hc* = 6 × 10^{-10} ms^{-2} (L. Smolin, *The Trouble With Physics,* Houghton Mifflin, N.Y., 2006, p. 209), giving an outward effective force by Newton’s 2nd law of about *F = ma* = (3 × 10^{52})×(6 × 10^{-10}) = 1.8 × 10^{43} Newtons!
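As a sanity check on the arithmetic, using the figures quoted above:

```python
m = 3e52    # estimated total receding mass of the universe, kg (NASA/HST figure above)
a = 6e-10   # observed cosmological acceleration a = Hc, m/s^2 (Smolin, p. 209)

F = m * a   # Newton's 2nd law: outward effective force
print(F)    # 1.8e+43 newtons
```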

By Newton’s 3rd law of motion, every force has an equal and opposite reaction force, so there is an inward force towards us from distant receding masses of 1.8 × 10^{43} Newtons! What particles do we know of that can mediate such large forces? As proved above, they act like spin-1 gauge boson radiation in causing gravity *by pushing relatively small masses (compared to the mass of the universe) together,* so should be called gravitons (gravity field quanta). Why apply Newton’s old laws (first published on 5 July 1687) to the acceleration of the universe? Professor Feynman said:

‘… we must take our concepts and extend them to places where they have not yet been checked.’

- R. P. Feynman *et al., Feynman Lectures on Physics, v. 3, Quantum Mechanics,* Addison-Wesley, 1965, c. 2, p. 9.

Professor Alfred North Whitehead stated (in his Presidential Address to the London Branch of the Mathematical Association in 1914):

‘The art of reasoning consists in getting hold of the subject at the right end, of seizing on the few general ideas that illuminate the whole, and of persistently organizing all subsidiary facts round them. Nobody can be a good reasoner unless by constant practice he has realized the importance of getting hold of the big ideas and hanging on to them like grim death.’

Notice in the diagram above that the calculation of gravity *appears at first glance* to be using the suspect assumption of fixed distance *R* for the radius of the entire accelerating mass of the universe around us. But notice from the diagram that the radius *R* actually cancels out in the calculation! It does not come into the final result because it is in both the numerator and the denominator! The spherical area of shell *R* is 4 Pi *R*^{2}, which represents the *full* area of the sky that contributes the inward 1.8 × 10^{43} Newtons reaction force. This full force from the whole sky cancels out in the absence of nearby masses, because it’s extremely isotropic. But a nearby mass introduces an asymmetry, since it interacts with gravitons coming towards you from the direction of that mass! The area of a fundamental particle of such a nearby mass is *A*. The fraction of the area 4 Pi *R*^{2} which is being shadowed by *A* is equal to the fraction of the 1.8 × 10^{43} Newtons net inward graviton force which is pushing you towards the nearby mass with area *A*. So we project *A* from radius *r* (near you) to radius *R* to see how much of the area 4 Pi *R*^{2} is covered (and blocked) by the local mass.
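The cancellation of R can be demonstrated numerically; this is a minimal sketch, with the cross-sectional area A taken from the electron figure used later in this post (r_bh = 1.35 × 10^{-57} m), and arbitrary sample values for r and the shell radius R:

```python
from math import pi

A = pi * (1.35e-57)**2   # graviton interaction cross-section of one particle, m^2
r = 1.0                  # distance from you to the nearby shadowing mass (sample)

def shadow_fraction(R):
    """Fraction of the shell area 4*pi*R^2 blocked by A projected from r to R."""
    A_projected = A * (R / r)**2   # inverse-square projection of the shadow
    return A_projected / (4 * pi * R**2)

# The choice of shell radius R makes no difference: R cancels.
print(shadow_fraction(1e25))
print(shadow_fraction(1e27))
```

Both calls print the same fraction, A/(4 Pi r^{2}), regardless of R.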

This tells us that the net force acting on two fundamental particle masses, each with graviton interaction cross-sectional area *A* = Pi(2*GM/c*^{2})^{2}, is for two electron masses:

*F* = 2*maA*/(Pi *r*^{2}) = 8*maG*^{2}*M*^{2}/(*c*^{4}*r*^{2}) = 2(1.8 × 10^{43})(Pi(1.35 × 10^{-57})^{2})/(Pi *r*^{2})

**= 6.6 × 10 ^{-71}/r^{2} Newtons.**
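A quick check of the numerical coefficient above, assuming the 1.8 × 10^{43} Newtons inward force and the 1.35 × 10^{-57} m electron event horizon radius quoted in this post:

```python
from math import pi

F_total = 1.8e43     # net inward graviton force, newtons
r_bh = 1.35e-57      # black hole event horizon radius of an electron, metres
A = pi * r_bh**2     # graviton interaction cross-sectional area, m^2

coeff = 2 * F_total * A / pi   # coefficient of 1/r^2 in F = 2*F_total*A/(pi*r^2)
print(coeff)                   # about 6.6e-71
```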

Now compare this to Newton’s law (derived by combining Kepler’s observational third law with a guess, and checking it for the Moon; although Newton’s law was of course not useful in general until Cavendish first measured the gravitational coupling constant *G,* which was not included in the qualitative relationship by Newton but was added by Laplace), for two electrons each of mass *M:* *F* = *GM*^{2}/*r*^{2}

**= 5.5 × 10^{-71}/r^{2} Newtons.**

QED. Compare the above result of quantum gravity *F* = 8*maG*^{2}*M*^{2}/(*c*^{4}*r*^{2}) with the result of Newtonian classical gravity *F* = *GM*^{2}/*r*^{2}. See how close the results are.
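The closeness of the two coefficients can be checked directly; this sketch uses the standard values of G and the electron mass:

```python
G = 6.674e-11    # gravitational coupling constant, m^3 kg^-1 s^-2
M = 9.11e-31     # electron mass, kg

coeff_newton = G * M**2                   # GM^2: Newton's coefficient of 1/r^2
coeff_qg = 2 * 1.8e43 * (1.35e-57)**2     # 2*F_total*r_bh^2: quantum gravity result

print(coeff_newton)              # about 5.5e-71
print(coeff_qg)                  # about 6.6e-71
print(coeff_qg / coeff_newton)   # within about 20 per cent of unity
```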

Now you must go back to deluding yourself that non-predictive extradimensional stringy speculations about spin-2 gravitons are still the truth, and that the spin-1 quantum gravity is plain wrong, ‘somehow’.

Hopefully Gates will put the second Messenger lecture, ‘The Relation of Mathematics to Physics’ on YouTube permanently. This is what Feynman says there:

‘What does the planet do? Does it look at the sun, see how far away it is, and decide to calculate on its internal adding machine the inverse of the square of the distance, which tells it how much to move? This is certainly no explanation of the machinery of gravitation!’

Feynman then gives the LeSage ‘mechanism’ whereby gravitons cause gravity by scattering off nuclei, but dismisses it without bothering to distinguish between on-shell (real) particles and off-shell (virtual) particles; so when he says that gravitons would slow down planets like real dust would (by being heated up), his objection would debunk the spin-2 gravitons of string theory just as much as any other particle. In fact, virtual bosons don’t steal energy when interacting with steadily moving charges (you get a net emission of real radiation, and hence energy loss, only when charges accelerate; if the accelerating charge is electric then this radiation is photons, and if it is gravitational charge, i.e. mass-energy, the radiation is gravitational waves, which are real particles carrying net energy and are related to gravitons as virtual photons are related to real photons: *the virtual particles are off-shell and the real particles are on-shell*). Feynman was the author of *Feynman Lectures on Gravitation,* and was not ignorant of the fact that virtual particles (gauge bosons) don’t behave like real particles! His objection to LeSage is only applicable to on-shell real particles as gravitons, not to gauge bosons. Gauge bosons or virtual radiations still impart momentum, as proved by the experimentally substantiated Casimir force. However, vacuum radiation phenomena never steal kinetic energy from moving bodies unless there is acceleration! Acceleration causes contraction of the body, which readjusts the exchange radiation (graviton) equilibrium so that when the acceleration ends, no further energy is lost. This is a fact of nature.

Feynman’s ‘objection’ to any exchange radiation theory of gravitation, in claiming that vacuum radiation would cause drag, is obviously bunk; we know the vacuum radiation pushes plates together in the Casimir force without slowing down charged particles in the vacuum! We know that on-shell particles don’t behave like off-shell particles! If his objection were valid, it would debunk all the Standard Model forces, which rely on vacuum exchange radiations! It isn’t valid. What Feynman was really doing was *popularizing the idea that some kind of simple mechanism might underlie physics.* Feynman said:

‘Suppose that in the world everywhere there are a lot of particles, flying through us at very high speed. They come equally in all directions – just shooting by – and once in a while they hit us in a bombardment. We, and the sun, are practically transparent for them, practically but not completely, and some of them hit. Look, then, at what would happen.

‘If the sun were not there, particles would be bombarding the Earth from all sides, giving little impulses by the rattle, bang, bang of the few that hit. This will not shake the Earth in any particular direction, because there are as many coming from one side as from the other, from top as from bottom.

‘However, when the sun is there the particles which are coming from that direction are partly absorbed [or reflected, as in the case of Yang-Mills gravitons, an exchange radiation!] by the sun, because some of them hit the sun and do not go through. Therefore, the number coming from the sun’s direction towards the Earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see that the farther the sun is away, of all the possible directions in which particles can come, a smaller proportion of the particles are being taken out.

‘The sun will appear smaller – in fact inversely as the square of the distance. Therefore there will be an impulse on the Earth towards the sun that varies inversely as the square of the distance. And this will be a result of large numbers of very simple operations, just hits, one after the other, from all directions. Therefore the strangeness of the mathematical relation will be very much reduced, because the fundamental operation is much simpler than calculating the inverse square of the distance. This design, with the particles bouncing, does the calculation.

‘The only trouble with this scheme is that … If the Earth is moving, more particles will hit it from in front than from behind. (If you are running in the rain, more rain hits you in the front of the face than in the back of the head, because you are running into the rain.) So, if the Earth is moving it is running into the particles coming towards it and away from the ones that are chasing it from behind. So more particles will hit it from the front than from the back, and there will be a force opposing any motion. This force would slow the Earth up in its orbit… So that is the end of that theory.

‘“Well,” you say, “it was a good one … Maybe I could invent a better one.” Maybe you can, because nobody knows the ultimate. …

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

Notice that when we equate the quantum gravity and classical gravity force laws, we get

*F* = 8*maG*^{2}*M*^{2}/(*c*^{4}*r*^{2}) = *GM*^{2}/*r*^{2}.

Inserting *a = Hc* (this relationship is proved in detail further on in this post), we get 8*mHG* = *c*^{3} where for the observed flat spacetime *H* = 1/*t* where *t* is the age of the universe, thus 8*mG* = *c*^{3}*t*. Apart from the dimensionless factor of 8, this is Louise Riofrio’s fundamental equation, *Gm* = *tc*^{3}. String theorist Lubos Motl denounced it and then published a picture of Louise beside a lizard and, together with other string theory supporters, made sexist comments, so we showed how it is equivalent to not just our work but also that of John Hunter who first argued (in a notice in *New Scientist*) that the rest mass energy *E* = *mc*^{2} of a particle may be equated to its gravitational potential energy with respect to the distant immense masses in the surrounding universe! We developed this simple idea into a more detailed derivation of Louise Riofrio’s equation linked here. Naturally, one thing heretical about any equation like *Gm* = *tc*^{3} is that because *t* increases (it is the age of the universe), *G, m* and *c* can’t all be constants! What varies?
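A rough numerical check of 8*mG* = *c*^{3}*t*, assuming the m = 3 × 10^{52} kg figure above and an assumed age of the universe of about 13.8 billion years (4.35 × 10^{17} s); both inputs are uncertain, so only order-of-magnitude agreement can be expected:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m = 3e52        # mass of the receding universe, kg (estimate quoted above)
c = 2.998e8     # velocity of light, m/s
t = 4.35e17     # assumed age of the universe, seconds (~13.8 Gyr)

lhs = 8 * m * G
rhs = c**3 * t
print(lhs / rhs)   # of order unity, given the uncertain inputs
```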

Louise has investigated a solution to her equation assuming that *c* varies inversely as the cube root of the age of the universe, i.e., *c* = (*Gm/t*)^{1/3}. We know that light slows down in glass because the photon’s electromagnetic fields get ‘loaded’ by interacting with the electromagnetic fields of the electrons and nuclei in the glass, so by analogy if the vacuum is full of virtual charges and is expanding as the universe expands, then you might expect some kind of effect on the velocity of light: one problem here is, *you’d expect light to speed up with time rather than slow down*. There is also an error in the popular assumption that the vacuum is populated with virtual charges (virtual fermions) beyond 33 fm from fundamental unit charges: the assumption neglects Julian Schwinger’s proof that vacuum virtual pair production requires very strong steady electric field strengths and can’t occur beyond 33 fm from electric charges (we’ll discuss his formula and the mechanism in detail later in this post).
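Evaluating Louise’s *c* = (*Gm/t*)^{1/3} with the same rough inputs (the m and t here are only the crude estimates used earlier, so this is merely an order-of-magnitude sketch): without the dimensionless factor of 8 it gives a speed within a factor of two of *c*, and with the factor of 8 included it lands close to the measured value.

```python
G = 6.674e-11   # gravitational constant
m = 3e52        # mass of the universe, kg (rough estimate used above)
t = 4.35e17     # assumed age of the universe, seconds

c_louise = (G * m / t) ** (1 / 3)       # Louise Riofrio's c = (Gm/t)^(1/3)
c_with_8 = (8 * G * m / t) ** (1 / 3)   # including the dimensionless factor of 8

print(c_louise)   # about 1.7e8 m/s
print(c_with_8)   # about 3.3e8 m/s, near the measured c
```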

The velocity of light may vary effectively when it is emitted by a distant receding mass, e.g. it may slow down because its frequency is redshifted, and the frequency is measured by us as the number of crests we *receive* per second. The mainstream formula for the cosmological background radiation says that its energy density falls as the fourth power of the age of the universe. This fourth-power-of-time fall rests on an arm-waving argument: a cube-of-time fall for spherical *divergence* of radiation in a linearly expanding volume of space (which doesn’t apply when the radiation is *converging* inward to any observer from a shell nearly 13,700 million light years distant), plus an additional fall in direct proportion to time for the stretching out of the radiation due to redshift (this part is fine). The real motivation for the fourth-power-of-time fall is that it fits other evidence about the big bang, i.e. it is an *ad hoc* fiddle used to make the theory work.

It may well be correct, but it needs to be replaced by a proper derivation, which will undoubtedly throw light on what is occurring physically. One possibility is that the cosmic background radiation isn’t effectively coming to us at velocity *c,* but at a much slower speed (reduced in proportion to the amount of redshift of the light). The Michelson-Morley experiment merely determined that it is not possible to observe effects of a change in the velocity of light, because the instrument contracts in the direction of its motion (clocks also slow down), so you can’t measure a change: the distortion of the instrument offsets the effect of any change in the velocity of light. The effect of ‘restricted’ (not the hype term ‘special’!) relativity is similar to that of the Heisenberg ‘uncertainty principle’, where you can’t measure both the position and the momentum of a particle with perfect accuracy even in principle: it generates pseudoscientific dogma. Actually, nothing can ever be measured with absolute precision. This doesn’t tell you anything about physics. (Dogmatic belief systems don’t operate like rational science. When they fail to measure something, the failure is interpreted as decisive proof that it is impossible for *anybody ever to achieve that thing*. E.g., if string theory can’t predict anything, then that’s not the failure of string theory. It’s proof that ‘nature is simply not predictable’. See? We then have to live with that big lie, and every journal then has to censor predictions from being published, like the dictatorship Orwell describes in *1984*.)

What is actually varying is *G*. Dirac first investigated this but got the facts wrong and generally did a poor job (when he earlier predicted the positron as the anti-particle of the electron, it was only after years of wrongly trying to make the anti-electron the proton, which was already known but has far too much mass!), and was opposed successfully by an obviously vacuous argument from Edward Teller (who ignored the fact that the other inverse-square law force, electromagnetism, would vary in the same way as gravity, and this would prevent any effect of a varying *G* on fusion rates: increased electrostatic repulsion would cancel out the effect of increased gravitational compression upon the ability of the strong force to fuse colliding protons). Since *G* increases in direct proportion to the age of the universe, the universe was extremely flat (not bumpy) at the early time when the cosmic background radiation was emitted, because gravity was so weak then, and gravity is what causes clumping. This successful prediction gets rid of the *ad hoc* cosmological inflation theory of the big bang.

Another groupthink delusion that I’ve argued against since 1990 is the lying ‘Planck scale’: Planck used deceptive *dimensional analysis* to derive what lying physicists call the minimum length, 1.6 × 10^{-35} metres, when in fact there is no theoretical proof of this scale: it’s just dimensional-analysis numerology which, even if it used the correct constants in the correct ways, could still be missing any combination of *dimensionless* multiplying factors like Pi or 2, etc. The black hole event horizon radius for an electron is 1.35 × 10^{-57} metres, way smaller than the Planck scale. Nobody listened. Groupthink lies are still shamelessly hyped by quacks. The black hole event horizon radius is physically significant because black hole electrons radiate!
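The comparison is easy to check with standard constants; a minimal sketch:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # velocity of light, m/s
M = 9.11e-31     # electron mass, kg

r_horizon = 2 * G * M / c**2   # black hole event horizon radius of an electron
planck = 1.6e-35               # 'Planck length' from dimensional analysis

print(r_horizon)               # about 1.35e-57 m
print(planck / r_horizon)      # the horizon radius is some 22 orders of magnitude smaller
```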

We proved this on 8 March 2007 in an email to Dr Rabinowitz, see: Hawking’s formula for the radiating power of the black hole electron tells us it radiates at *P* = 3 × 10^{92} Watts. The momentum of absorbed radiation is *p = E/c,* but in this case the exchange mechanism means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where *p *= 2*E/c.* (When a photon is absorbed it imparts momentum *p = E/c,* but when that photon is re-emitted back in the direction it came from, it delivers an additional momentum of *p = E/c* due to the recoil, so the total momentum delivered is *p = E/c + E/c* = 2*E/c*.)

The force of this radiation is the rate of change of the momentum, *F = dp/dt* ~ (2*E/c*)/*t* = 2*P/c,* where *P* is power. Hence, *F* = 2*P/c* = 2(3 × 10^{92})/c = 2 × 10^{84} Newtons. This is 10^{41} times the *F* = 1.8 × 10^{43} Newtons total inward graviton force, so this Hawking radiation force is the electromagnetic force strength, which has a ‘coupling constant’ (this is a constant for energies below the IR cutoff, but of course is not constant at higher energies where it increases as a weak logarithmic function of energy due to penetrating through the shield of polarized virtual fermions which exist above the IR cutoff energy) that is higher than gravitation by roughly such a factor (the precise factor depends on the vacuum polarization shielding of the electromagnetic undressed charge, but the result is extremely good, unlike the errors in supersymmetric speculations of the vacuum energy which are off by massive factors like 10^{120}).
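Hawking’s power formula, *P* = ħ*c*^{6}/(15360 Pi *G*^{2}*M*^{2}), can be evaluated directly for an electron mass; the exact prefactor depends on the constants used, so only order-of-magnitude agreement with the figures quoted above is claimed here:

```python
from math import pi

hbar = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant
c = 2.998e8        # velocity of light, m/s
M = 9.11e-31       # electron mass, kg

P = hbar * c**6 / (15360 * pi * G**2 * M**2)   # Hawking radiating power, watts
F = 2 * P / c                                  # reflected-radiation force, p = 2E/c

print(P)            # of order 1e92 watts
print(F / 1.8e43)   # of order 1e41: the quoted electromagnetism/gravity ratio
```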

Notice that there is an error in the physical arm-waving that Hawking delivers with his formula: he claims that Hawking radiation is emitted when one virtual fermion of a pair spontaneously appearing near the event horizon falls in, allowing the other to escape and become real, then to annihilate with an oppositely charged fermion created in the same way, producing uncharged gamma rays (Hawking radiation). However, this arm-waving mechanism implicitly ignores Julian Schwinger’s cut-off for pair production in quantum field theory: you need electric fields of at least 1.8 × 10^{18} volts/metre to get spontaneous pair production in the vacuum, hence the spacetime creation-annihilation loops only populate the vacuum out to 33 femtometres from a unit fundamental electric charge! (See equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040.)
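The 33 fm figure is easy to sanity-check: the Coulomb field of a unit charge falls to the ~10^{18} V/m pair-production threshold at roughly that range. A minimal sketch using standard constants:

```python
k = 8.988e9     # Coulomb constant, N m^2 C^-2
e = 1.602e-19   # elementary (unit fundamental) charge, C
r = 33e-15      # 33 femtometres, in metres

E = k * e / r**2   # Coulomb field strength at 33 fm, volts/metre
print(E)           # about 1.3e18 V/m, the order of the pair-production threshold
```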

*Despite explaining this point time and again, the quacks still don’t grasp its implications: the vacuum isn’t filled with chaotic loops of particles appearing and disappearing.* If it was, there would be no IR cutoff, and the charge of an electron seen at macroscopic distances would be exactly zero (it would be *completely* cancelled out by vacuum polarization shielding, because such shielding would exist *all over the vacuum* not merely out to the small 33 fm distance of approach of fermions in collisions with the IR cutoff energy). *It is therefore only because of Schwinger’s threshold limit on pair production for steady electric fields that we see electric charge at all; without the cutoff, the observable electric charge of the electron seen even a micron away would be zero!* The vacuum (at distances beyond 33 fm from real long-lived fermions) is *only* filled with exchange bosons, *not* with polarizable charged loops containing virtual fermions! This point is so simple and yet so widely misunderstood that we must recognise that not only string theorists, but also all published quantum field theorists so far who don’t grasp it, are deluded quacks. It severely affects the mechanism of Hawking radiation, for it prevents the Hawking mechanism working at all unless a black hole has a sufficient electric charge to produce an electric field of at least 1.8 × 10^{18} volts/metre at the event horizon radius!

Electrons don’t have a problem here, but uncharged black holes do. Hence, Hawking radiation is not emitted from large black holes which don’t carry a net electric charge; it is *only* emitted from electrically charged fundamental particles. Not only that, but the very fact that the fundamental particle has to have an electric charge in the mechanism we are analyzing has another effect: it prejudices the fall of virtual fermions into the black hole! Only those with opposite sign to the core particle fall in; the others are repelled away from the event horizon. The charge accumulating beyond the event horizon is then all of one sign (similar sign to the core), so it *can’t* annihilate into gamma rays. This thus modifies the entire Hawking radiation theory for black holes. What we find is that electrically charged black holes radiate charged bosonic radiation: the usual objection to massless charged radiation (infinite magnetic self-inductance) is prohibited from applying because the magnetic fields are cancelled out by the exchange of equal fluxes in opposite directions (exchange radiation means just that; an exchange equilibrium).

This radiation gives us the electromagnetic force mechanisms of attraction and repulsion, as well as telling us how to correct the U(1) × SU(2) electroweak symmetry of the standard model. One objection you can think about is that the electrically charged radiation as gauge bosons in electromagnetism would mean that electromagnetism should be a non-Abelian SU(2) theory described by the Yang-Mills equation, instead of Maxwell’s equations. But that’s wrong: the Maxwell model only differs from the Yang-Mills equation by a term for the ability of charged gauge bosons to change the charge they act on. This term is automatically suppressed in nature by the mechanism, simply because of the condition that the exchange of charged radiation is only possible (i.e., via the need for the elimination of infinite self-inductance) *if there is a perfect equilibrium of exchange of charged radiation* (with each electron receiving the same number of Coulombs per unit time as it radiates!), so the Yang-Mills term that allows charges to be changed by the exchange of charged field quanta is thus prevented from operating in the case of electromagnetism! This mechanistic physical effect disabling a term in the Yang-Mills equation is totally alien to the kind of groupthink worshipping stringy quacks who pretend that they are physicists. Their mathematical skills have become inflexible or ‘wooden’; they literally worship the mathematics like a religion, instead of seeing it for what it is, merely a rough calculating tool that may or may not usefully apply in any situation.

String theorists (yes, that is Edward Witten’s big but hot air-filled head being pushed down to earth by spin-1 gravitons above) use a *lying* Pauli-Fierz ‘proof’ that gravitons (gravity field quanta) exchanged between *two* masses must have spin-2 (in other words, must have 180 degree rotational symmetry, so outgoing gravitons would be identical to incoming gravitons with 180 degrees rotation) in order to achieve universal attraction! Duh! This ‘proof’ implicitly *ignores all the masses in the entire universe surrounding the two masses in question!* It ignores 3 × 10^{52} kg of *distant surrounding masses* when you calculate the gravity of exchange between an apple and the earth or the earth and the sun, or the sun and the centre of the Milky Way. Notice that due to *F = ma* giving a trivially small force for small mass *m,* local masses like the earth below you don’t repel you significantly, you’re instead *repelled downwards* by the distant mass above your head. Not only does this ‘theory’ (it isn’t a stringy theory, it is empirically justified facts, unlike popular groupthink string ‘theory’) predict gravity, it predicts cosmological acceleration and the flatness of the early universe when the CBR was emitted, and it predicted these things in 1996 before the cosmological acceleration was observationally discovered!

Notice from the diagram above that the gravitons coming in are radially *converging,* not diverging. Hence, far from having a smaller effect than nearby small masses, they in fact cause a *way greater* effect than the exchange of gravitons with nearby, relatively small masses! They cause gravity as illustrated above (masses small compared to the mass of the universe get pushed together because they don’t exchange gravitons with much force between themselves, but are pushed together by immense exchange forces on the side opposite to the other mass!). This proves that the spin of at least the majority of gravitons making up most of the observed strength of gravitation is spin-1. Consequently, we can disprove (falsify) string theory’s basis in assuming gravity to be mediated entirely by spin-2 gravitons. In addition, as predicted in 1996 (before the acceleration of the universe was discovered in 1998), spin-1 gravitons actually cause the acceleration of the universe, since their exchange between very large distant masses does not involve the shadowing mechanism in the above diagram, and *simply pushes masses apart.* The physics is discussed on the About page of this blog, which links to other posts. Quite an effort will be made in this new post to see why my research into this new Feynman path integral graviton theory is being violently censored out by vicious string theory groupthink quacks.

I’ve tried to be understanding to these people in the past, but following censorship since 1996 and more recently the ignorant abuse from CERN physicist Tommaso (see the previous post), who presumably was trying to be ‘nice’ to me without having the first clue about the scientific basis of this breakthrough in 1996, I think the time has come to start opposing quackery. I’m not ‘disproving’ as such (any more than Peter Woit or Lee Smolin are!) the existence of ‘strings’, spin-2 gravitons, extra dimensions, parallel universes, and entanglement (there is certainly lots and lots of entanglement in string theorists’ minds, that’s a hard fact); merely that all of this pseudoscientific fantasy and speculative hype (like UFOs, aliens, God, Cold Fusion, Piltdown Man, and the Loch Ness Monster) is *simply not required in the calculations we made and confirmed* for quantum gravity and – most important of all – *that the lies that these things are useful – lies politely called ‘hype’ by certain string theorists – are preventing the discussion of the facts of physics (hype drowns out this fact-based physics).*

(I now don’t intend to be censored out by abusive dictators, but to hit them back so hard that the result will act as a deterrent to the other quacks; I know people suppressed by such quacks who just end up dying from cancer while still censored out like dissidents in a communist or fascist dictatorship, and I personally don’t believe that this is the freedom from dictatorship that millions of people died to preserve in world wars; see also the epilogue to this blog post about democracy and dictatorship. These thuggish dictators just want to destroy the work of others without bothering to check it properly first. They are evil. I used to think that patience was the way to overcome hostility, but actually the Jews were patient with Hitler, and we all know the result.)

Dr Peter Woit was then attacked out of the blue in a comment made by Professor Jacques Distler on June 12, 2009 at 7:09 am (who was an arXiv adviser when my paper was deleted from arXiv, unread, a few seconds after upload from my university in 2002!) on the Asymptotia blog: ‘There’s a *reason* Peter Woit is such a big fan of his [i.e. algebraic field theorist Professor Bert Schroer, who pointed out to the string theorists that perturbative string theory isn’t really a theory of 1-dimensional extended objects, it’s just an infinite-component local field theory].’ This led Professor Clifford Johnson to attack Woit at June 12, 2009 at 7:17 am: ‘Well, even if it does not hit the mark, at least he tries to write equations and make his objections at least *look* somewhat like science, as opposed to the babble, deception, and fakery tactics of his admirer.’

At this point, Dr Peter Woit commented politely at June 12, 2009 at 10:05 am: ‘Hi Jacques and Clifford, For the record I have no views one way or another concerning Schroer’s claims about world-sheets and T-duality. If you just can’t help yourselves and must find excuses to insult me, I think you can do better than this. Why not actually address something I write? One thing I do agree with Schroer about is the pathetic dishonesty and lack of professionalism of certain string theorists, especially certain prominent bloggers…’

Professor Clifford Johnson refused to read Dr Peter Woit’s book, *Not Even Wrong* (and Dr Smolin’s book too), which amounted to ignoring all the points Dr Woit made. He then wrote fanatically that string theory’s 30 years of failure are not enough to judge it a lie, and that we must live with ever more ‘research’ quacks and their eternal, viciously abusive censorship, just because a handful of dictators are bad sportsmen who won’t graciously admit to being losers, at June 12, 2009 at 3:14 pm:

‘Excellent. This is truly funny. This is your proof that the entire community of string theorists is misguided and wrong? Before, I might add, we’ve even done with the research? Before we’re done with even fully understanding and defining the theory? This is your proof that it is all a waste of time? That’s just too funny. The point, Peter, is that we don’t know whether it is right or wrong either, nor what the outcome of the entire program of research will be, ultimately. But we’re not (or at least, that large percentage of the field I know and trust) are not presuming the answer at the outset, like you are. That’s why it is called research. We don’t know. You don’t know. You claim (and try to convince the public at large) that you know the outcome.

‘And you offer this joke as a proof of this strong claim. Can you not see how ridiculous that is? (Combined with the fact that you’ve never written a credible paper demonstrating competence in this field that you are condemning.) This is another fancy talk with slides of pretty pictures, and a few equations here and there. Looks much like the thing you offered as laughable rigorous proof of the wrongness of string theory a year or two ago.

‘I’d hope that after two years there’s an actual paper out by now with actual computations, and that it has been discussed in the community? I think I missed it. People would love to see that, so that they can know whether to go work on something else. Do point people to it.’

It is important to try to understand the mentality behind this kind of string delusion, which is called groupthink.

It is not just Professor Johnson of course. Here is the delusion of Edward Witten, creator of M-theory quackery:

‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies [which must be avoided at all costs, to preserve the quack M-theory hoax].’ – Dr Edward Witten, M-theory originator, *Nature, *Vol 444, 16 November 2006.

All he is saying at the end is that string theorists should ignore critics:

‘*Crimestop* means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. *Crimestop,* in short, means protective stupidity.’ – George Orwell, *Nineteen Eighty Four,* Chancellor Press, London, 1984, p. 225.

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, *Daily Express,* 7 October 2005, p. 11.

‘I have observed in teaching quantum mechanics (and also in learning it) that students go through the following experience: The student begins by learning how to make calculations in quantum mechanics and get the right answers; it takes about six months. This is the first stage in learning quantum mechanics, and it is comparatively easy and painless. The second stage comes when the student begins to worry because he does not understand what he has been doing. He worries because he has no clear physical picture in his head. He gets confused in trying to arrive at a physical explanation for each of the mathematical tricks he has been taught. He works very hard and gets discouraged because he does not seem able to think clearly. This second stage often lasts six months or longer, and it is strenuous and unpleasant. Then, quite unexpectedly, the third stage begins. The student suddenly says to himself, “I understand quantum mechanics”, or rather he says, “I understand now that there isn’t anything to be understood”. The difficulties which seemed so formidable have mysteriously vanished. What has happened is that he has learned to think directly and unconsciously in quantum mechanical language, and he is no longer trying to explain everything in terms of pre-quantum conceptions.’ – Freeman Dyson, ‘Innovations in Physics’, *Scientific American,* Vol. 199, No. 3, September 1958, pp. 74-82.

Professor Paul Feyerabend explained in the concluding chapter of his 1975 book *Against Method* that anything goes *which works in science,* regardless of the method:

‘The idea that science can, and should, be run according to fixed and universal rules, is both unrealistic and pernicious. … the idea is *detrimental to science,* for it neglects the complex physical and historical conditions which influence scientific change. It makes our science less adaptable and more dogmatic: every methodological rule is associated with cosmological assumptions, so that using the rule we take it for granted that the assumptions are correct. Naive falsificationism takes it for granted that the laws of nature are manifest and not hidden beneath disturbances of considerable magnitude. … Putting them to a test means that we stop using the methodology … and see what happens. … such tests occur all the time … they speak *against* the universal validity of any rule. All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’. …

‘Scepticism is at a minimum; it is directed against the view of the opposition and against minor ramifications of one’s own basic ideas, never against the basic ideas themselves. Attacking the basic ideas evokes taboo reactions which are no weaker than are the taboo reactions in so-called “primitive societies.” Basic beliefs are protected by this reaction … and whatever fails to fit into the established category system or is said to be incompatible with this system is either viewed as something quite horrifying or, more frequently, it *is simply declared to be non-existent.* …

‘Scientists do not solve problems because they possess a magic wand – methodology, or a theory of rationality – but because they have studied a problem for a long time, because they know the situation fairly well, because they are not too dumb (though that is rather doubtful nowadays when almost anyone can become a scientist), and because the excesses of one scientific school are almost always balanced by the excesses of some other school. (Besides, scientists only rarely solve their problems, they make lots of mistakes, and many of their solutions are quite useless.) Basically there is hardly any difference between the process that leads to the announcement of a new scientific law and the process preceding passage of a new law in society: one informs either all citizens or those immediately concerned, one collects ‘facts’ and prejudices, one discusses the matter, and one finally votes. But while a democracy makes some effort to *explain* the process so that everyone can understand it, scientists either *conceal* it, or *bend* it, to make it fit their sectarian interests.

‘No scientist will admit that voting plays a role in his subject. Facts, logic, and methodology alone decide – this is what the fairy-tale tells us. … This is how scientists have deceived themselves and everyone else about their business, but without any real disadvantage: they have more money, more authority, more sex appeal than they deserve, and the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down in size, and to give them a more modest position in society. …

‘It is the *vote of everyone concerned* that decides fundamental issues such as the teaching methods used, or the truth of basic beliefs such as the theory of evolution, or the quantum theory, and not the authority of big-shots hiding behind a non-existing methodology. There is no need to fear that such a way of arranging society will lead to undesirable results. Science itself uses the method of ballot, discussion, vote, though without a clear grasp of its mechanism, and in a heavily biased way. But the rationality of our beliefs will certainly be considerably increased.’

*Above:* the implosion bomb principle, deployed at Nagasaki on 9 August 1945, relied on the same principle as quantum gravity: the inward force *F = ma* that accelerates an apple downward comes from the cosmological acceleration *a* of the distant masses *M* radially outward from an observer. In implosion, TNT detonates around a subcritical plutonium core. Half the force of the TNT explosion goes outward (and is wasted), but by Newton’s 3rd law of motion (the reaction-force rocket principle) half the force of the TNT explosion goes radially inward as an implosion wave which hits the metal core and compresses it to over double its normal density. This makes it supercritical for a combination of three very obvious reasons: (1) the ratio of (surface area from which neutrons are lost)/(mass of plutonium in which neutrons are generated) decreases, so the number of neutrons lost per fission falls (only the spaces between atoms decrease, not the cross-sectional areas of the nuclei themselves!); (2) the distance between nuclei decreases, so the average time between fissions is decreased, increasing the fission rate; and (3) by reducing the amount of empty space between nuclei while the size of nuclei is unaffected, the probability of a neutron hitting a nearby nucleus is increased, just as the probability of an archer hitting a target board is increased if the space between the archer and the board is diminished (while the board size is unaffected).
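The density argument in reasons (2) and (3) can be put into rough numbers. The sketch below is my own illustration, not from the text: the cross-section and density are approximate textbook values, and it simply shows how compression shortens the neutron mean free path in plutonium.

```python
N_A = 6.022e23     # Avogadro's number, atoms/mol
A_PU = 239.0       # Pu-239 atomic mass, g/mol
RHO = 19.8         # normal plutonium density, g/cm^3
SIGMA = 1.8e-24    # approx. fast-fission cross-section, cm^2 (1.8 barns)

def mean_free_path(density_factor):
    """Neutron mean free path 1/(n*sigma), in cm; the nuclear number
    density n scales linearly with the compression factor."""
    n = density_factor * RHO * N_A / A_PU   # nuclei per cm^3
    return 1.0 / (n * SIGMA)

normal = mean_free_path(1.0)
compressed = mean_free_path(2.0)   # 'over double its normal density'
print(f"mean free path: {normal:.1f} cm normal, {compressed:.1f} cm compressed")
```

Doubling the density halves the mean free path, so with a simultaneously shrunken core far fewer neutrons escape per fission, which is the essence of reasons (1)–(3).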

The outward force of the universe is the product of the mass and cosmological acceleration of the universe; an equal and opposite reaction force (by Newton’s 3rd law, which the mainstream neglects without physically valid reason) presses radially inward causing the ‘curvature’ of spacetime by radial compression and thus excess radius (e.g., Earth’s radius is contracted 1.5 mm, causing a distortion to Euclidean geometry which is predictable both from this quantum gravity and from general relativity), and causing gravity where a nearby mass causes an asymmetry in the inward graviton force flux.
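The 1.5 mm figure quoted above can be checked against Feynman’s excess-radius expression, GM/(3c²) – my assumption being that this is the formula intended here:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
C = 2.998e8          # speed of light, m/s

# Feynman's 'excess radius' GM/(3c^2) for the Earth
excess_radius = G * M_EARTH / (3 * C**2)   # metres
print(f"excess radius = {excess_radius * 1000:.2f} mm")   # -> 1.48 mm
```

which reproduces the quoted radial contraction of about 1.5 mm.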

But the ignorant string ‘theorists’ don’t even know the basic physics of Newton’s laws of motion, or the need to apply such laws to every possible situation in order to discover more about nature! They don’t understand the basic fact that the graviton can’t have spin-2, because the spin-2 ‘proof’ relies on ignoring all the mass in the universe except for two masses, and offers no reason why gravitons *shouldn’t* be exchanged between all masses in the universe. Once you include in the path integral *all* the masses in the universe (which is very simple to do, and visually geometric as in the diagram below for low energy, since there are no vacuum loops beyond the IR cutoff energy, which corresponds to all physics at distances beyond 33 femtometres from a unit charge, due to Schwinger’s threshold electric field strength of 1.3×10^{18} V/m for spontaneous pair production in the vacuum; this geometric application of path integral techniques is very similar to Feynman’s visual path integrals for light refraction in his book *QED,* as we will show later in this blog post), you find that *spin-1 gravitons are required for universal ‘attraction’ and that they also predict long-range repulsion* (cosmological acceleration, i.e. dark energy). The converging inward flux of exchanged spin-1 gravitons from the distant immense masses of the universe (clusters of galaxies, etc.) pushes the apple down to the Earth because it *swamps the local repulsive exchange of spin-1 gravitons between the apple and the Earth*. This is why there is universal attraction of masses within the galaxy. But over immense, cosmological distances, the masses involved (clusters of galaxies) are so big that their mutual spin-1 graviton exchange repulsion begins to exceed the attraction effect of still more distant masses pushing them toward one another. Eventually, at sufficiently large distances, the repulsive effect predominates, so the cosmological acceleration becomes apparent between such immense masses but not between smaller masses (like an apple and the Earth, or the Earth and Sun).
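For completeness, the shadow-fraction geometry from the update at the top of this post – the fraction (Pi*r^{4}/R^{2})/(4*Pi*r^{2}) = (1/4)(r/R)^{2} for R much larger than r – can be sketched as:

```python
import math

def shadow_fraction(r, R):
    """Fraction of one particle's surface area (4*pi*r^2) shadowed by
    another particle of effective radius r at distance R (valid R >> r)."""
    shadow_area = math.pi * r**4 / R**2   # shadow cast on the sphere
    total_area = 4 * math.pi * r**2       # full surface area
    return shadow_area / total_area       # = (1/4) * (r/R)**2

# illustrative (non-physical) numbers, just to exercise the closed form
print(shadow_fraction(2.0, 100.0))   # (1/4)*(2/100)^2 = 1e-4
```

Multiplying this fraction by the inward force F = Ma then gives the gravitational force, as stated in the update.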

The gravity force is simply mediated by the spin-1 graviton, which is the uncharged photon; the way that these photons interact with massive vacuum particles (gravitational charges) gives rise to particle masses, due to electromagnetic coupling between particle cores and the vacuum. The gravity force is predicted theoretically from the Hubble expansion, as proved below. First, it is an empirical fact that expansion of the universe, and not ‘tired light’, is the mechanism of redshift (string theorists who deny these facts of science need to read Ned Wright’s page, ‘Errors in Tired Light Cosmology’). Second, in 1996 Hubble’s law predicted the acceleration of the universe (proof below), which was subsequently discovered by observation: *a* = 6×10^{-10} ms^{-2}. This is a radially outward acceleration, opposing the radially inward gravitational retardation of the big bang. Newton’s 2nd law tells us that the mass of the universe *M* gives an outward force *F = Ma,* while his 3rd law tells us that there’s an equal and opposite force, i.e., an immense inward force, which is what produces the excess radius in general relativity: Earth’s radius is compressed radially 1.5 mm by gravitons, and according to general relativity this occurs like the directional contraction of special relativity, i.e. without a corresponding non-radial contraction, so the transverse lines – e.g. the circumference – are unaffected.

(General relativity speculatively assumes that Pi is constant in 4 dimensions and that this distortion to Euclidean 3-dimensional geometry by gravity is due to an extra time-like dimension causing distortion, so that the 3 spatial dimensions exist as the surface or ‘brane’ of a 4-dimensional universe. However, Pi is not a feature of the universe but a human construction, and an alternative to general relativity’s speculation is to vary Pi and keep space Euclidean; in the simplest quantum gravity model, general relativity is wrong and the radial contraction is accompanied by a reduction of circumference, keeping Pi constant! All of the checkable parts of general relativity are matched in quantum gravity – such as the deflection of light by twice the classical Newtonian amount in relativistic situations, due to conservation of energy: the field lines of the energy in a photon extend only transversely to its motion, not longitudinally as well as occurs with non-relativistic particles, so you get twice as much deflection because the radial graviton field interacts twice as strongly with the transverse field energy of a passing photon as with the field energy of a non-relativistic mass. The only discrepancies are *for situations where general relativity has never ever been checked and is therefore unreliable*. The best classical unification of general relativity and electromagnetism which I’ve seen is Lunsford’s, which has 3 spatial dimensions and 3 corresponding time dimensions. This suggests a simple relationship between any given spatial dimension and the corresponding time dimension. Time may naturally be measured from the origin of the universe by the inverse of the Hubble parameter in our flat spacetime universe: *t* = 1/*H*. Since *H* can in principle be measured in 3 different spatial dimensions – looking at stars receding to the left, straight ahead, and upwards – we can in principle have 3 measured values of *H* for those 3 spatial dimensions and thus 3 corresponding times, *t* = 1/*H*. This illustrates that the curved-space idea of general relativity is just a fantasy ‘interpretation’ of accelerations, totally devoid of physical substance, and liable – as seen in cosmology – to cause confusion and mislead mainstream cult physicists who believe in it as a dogmatic religion.)

This quantitative prediction gives you gravity, and from this – by the electromagnetic coupling theory – you get the strength of electromagnetism. The weak force is weaker than electromagnetism on account of the massive field quanta it uses. The strong force is generated by the electromagnetic energy soaked up by the polarized vacuum at small distances, creating gluons and other particles. Every force can thus be predicted. The spin-1 graviton exchange which causes gravity also causes cosmological-scale repulsion of masses.

The observed Hubble recession law states that recession *v* = *HR*, where *R* = *cT*, *T* being time past (when the light was emitted), *not* the time after the big bang for the Earth.

As shown in the diagram below, this time past *T* is related to the time since the big bang *t* for the distance of the star in question by the simple expression *t* + *T* = 1/*H*, for flat spacetime as has been observed since 1998 (the observed acceleration of the universe cancels the gravitational deceleration of distant objects, so there is no curvature on large distance scales).

Hence: *v* = *HR* = *HcT* = *Hc*[(1/*H*) – *t*] = *c* – *Hct*. Thus, *a* = d*v*/d*t* = d[*c* – *Hct*]/d*t* = –*Hc*, with magnitude *Hc* = 6×10^{-10} ms^{-2}, which is the cosmological acceleration of the universe (since observed to be reality, from supernova redshifts!). E.g., Professor Lee Smolin writes in the chapter ‘Surprises from the Real World’ in his 2006 book *The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next* (Allen Lane, London), page 209:

‘… *c*^{2}/*R* [which for *R = ct = c/H* gives *a* = *c*^{2}/(*ct*) = *Hc*, exactly the theoretical prediction we earlier published in Oct. 1996 via page 896 of *Electronics World* and then in more detail in Feb. 1997 via the *Science World* peer-reviewed journal, ISSN 1367-6172]… is in fact the acceleration by which the rate of expansion of the universe is increasing – that is, the acceleration produced by the cosmological constant.’

The figure 6×10^{-10} ms^{-2} is the outward acceleration which Smolin quotes as *c*^{2}/*R*. Full credit to Smolin for actually stating what the acceleration of the universe was measured to be! There are numerous popular media articles, books and TV documentaries about the acceleration of the universe which are all so metaphysical that they don’t even state that it is measured to be 6×10^{-10} ms^{-2}!
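The prediction *a* = *Hc* is easy to check numerically. The Hubble parameter value below (about 70 km/s/Mpc) is my assumed input, not a figure taken from this post:

```python
MPC_IN_M = 3.086e22          # metres per megaparsec
C = 2.998e8                  # speed of light, m/s
H = 70e3 / MPC_IN_M          # assumed Hubble parameter, converted to s^-1

# predicted magnitude of the cosmological acceleration, a = Hc
a = H * C
print(f"a = Hc = {a:.1e} m/s^2")   # -> 6.8e-10 m/s^2, the order quoted
```

With this assumed *H*, the result is ≈7×10^{-10} ms^{-2}, the same order of magnitude as the quoted 6×10^{-10} ms^{-2}.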

Back in May 1996, I made the mistake of discovering something: gravity implies that the universe is accelerating. Worse, I could make quantitative predictions. The problem with this discovery? The month previously, renowned string ‘theorist’ (speculator) Edward Witten had lied in a journal:

‘String theory has the remarkable property of predicting gravity.’

- Dr Edward Witten, M-theory originator, ‘Reflections on the Fate of Spacetime’, *Physics Today,* April 1996.

‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’

– Conclusion of the paper by M. Fierz and W. Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, Proc. Roy. Soc. London., volume A173, pp. 211-232 (1939). [Notice that Pauli did make errors, such as predicting in a famous 4 December 1930 letter that the neutrino has the mass of the electron!]

‘It is said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’

- A. S. Eddington, *Space Time and Gravitation,* Cambridge University Press, 1920, p. 64.

The Feynman diagrams below explain Witten’s lie: spin-2 ‘gravitons’ in string theory are a lie because they’re based on a Pauli-Fierz theorem which applies to a universe with *only two masses in it,* with no exchange of gravitons with distant masses, which would overwhelm local exchanges; gravitons coming from the immense galaxies surrounding us in all directions will be converging inward towards our two little test masses and will dwarf the graviton exchange between them, so gravity is due not to spin-2 gravitons but to spin-1 gravitons, disproving Witten and his string ‘theory’ work. Note that, as proved on the About page for this blog, the vacuum is provably full of exchange radiation which:

(1) causes the Casimir force (pushing two parallel metal plates together, because the gap excludes virtual radiation of wavelengths longer than the distance between the plates; the full spectrum of virtual radiation pushes the plates together from outside, while only the spectrum with the long wavelengths cut off pushes them apart from within, so the net force is attraction),

(2) causes inertia, Newton’s 1st law of motion (resistance to acceleration due to head-on pressure until exchange equilibrium is re-established by the emission of radiation, which always accompanies the acceleration of charged particles),

(3) causes the FitzGerald-Lorentz contraction of moving objects in the direction of their motion which explains special relativity: measuring rods shrink with increasing velocity, causing mass to increase so that the oscillating parts of any clock – regardless of whether it is mechanical or atomic in nature – gain mass and must therefore slow down in oscillatory speed *v* for momentum *mv* to be conserved, so time-dilation accompanies length contraction.
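Two of the effects above can be put into numbers. The sketch below uses standard textbook formulas, not derivations from this post: the ideal parallel-plate Casimir pressure for effect (1), and the FitzGerald-Lorentz factor for effect (3):

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s

def casimir_pressure(d):
    """Ideal parallel-plate Casimir pressure P = pi^2*hbar*c/(240*d^4), Pa."""
    return math.pi**2 * HBAR * C / (240 * d**4)

def contraction_factor(v):
    """FitzGerald-Lorentz factor sqrt(1 - v^2/c^2): rods shrink and clocks
    slow by this same factor, as described in (3)."""
    return math.sqrt(1.0 - (v / C)**2)

print(f"Casimir: {casimir_pressure(1e-6):.1e} Pa at a 1 micron gap")  # ~1.3e-3 Pa
print(f"contraction at 0.5c: {contraction_factor(0.5 * C):.3f}")      # 0.866
```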

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’

– Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, *Space Time and Gravitation: An Outline of the General Relativity Theory,* Cambridge University Press, Cambridge, 1921, pp. 20, 152.

*Above:* as proved on the About page (linked here), the final theory is a new physical application of the Standard Model U(1) x SU(2) x SU(3) where SU(2) is no longer just the source of left-handed isospin charge and massive weak gauge bosons due to an indiscriminately mass-producing ‘Higgs field’, but instead (with a modified ‘Higgs field’ which makes checkable predictions about particle masses) not all massless SU(2) gauge bosons are given mass, and those which don’t get mass behave as charged electromagnetic field quanta; leaving the usual mixed U(1) and neutral SU(2) field quanta to produce spin-1 gravitons together with the usual massive neutral weak boson. Thus, instead of having a neutral photon with 4 polarizations as the gauge boson of electromagnetism, massless versions of weak field quanta (unsupplied with mass from a Higgs field) exist at low energy and mediate electromagnetism. The positive electric field around a proton is due to positively charged gauge bosons around a proton. The full gauge boson exchange dynamics of this model predicts the difference in force strengths between electromagnetism and gravity as being due to a random walk of charged gauge bosons between similar charges in the universe (straight line exchange being non-permissible because on average there will be an equal number of positive and negative charges along any given straight line, thus cancelling out). The random walk means that the vector sum of electromagnetism mediated by charged gauge bosons exceeds that from gravity by a factor equal to the square root of the number of charges. Charged massless radiations are of course unable to propagate in one direction only along a single path due to infinite magnetic self inductance (they can’t accelerate by themselves), which is the reason why they can only propagate in both directions: the magnetic field curls of each component then cancel, preventing infinite self inductance.

This feature also conveniently cancels out the term in the Yang-Mills equation that allows charged gauge bosons to alter the electric charge signs of electrons and other particles in the way that weak gauge bosons alter isospin by carrying isospin themselves between particles: the self inductance effect obviously ensures that electrically charged gauge bosons can’t alter electric charges because the charged gauge bosons can only propagate in a perfect equilibrium. (I.e., there is as much negative charge radiated on gauge bosons leaving an electron every second as received by it, so nothing changes. Any disturbance to this equilibrium is called an acceleration and is accompanied by the emission of a photon, which enables the equilibrium to be re-established.)
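The random walk of charged gauge bosons described above has a simple numerical consequence: for N similar charges, the net vector sum scales as the square root of N. With N ~ 10^{80} charges (a round figure I am assuming for the observable universe; it is not given in this post), this yields the familiar ~10^{40} ratio of electromagnetic to gravitational force strengths:

```python
import math

# assumed round number of fundamental charges in the observable universe
N_CHARGES = 1e80

# a drunkard's-walk vector sum over N charges scales as sqrt(N)
ratio = math.sqrt(N_CHARGES)
print(f"predicted EM/gravity strength ratio ~ {ratio:.0e}")   # -> 1e+40
```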

*Above:* the photons and exchange radiations implied by advances in electromagnetic cross-talk analysis, which show that charged massless gauge bosons can propagate as exchange radiation in both directions between two charges, since such an exchange cancels out the magnetic field components and prevents infinite self-inductance.

*Path integrals*

*Above:* the double slit experiment is, as Feynman stated, the ‘central paradox of quantum mechanics’. Every single photon gets diffracted by both of two nearby slits in a screen because photon energy doesn’t travel along a single path but instead, as Feynman states, travels along multiple paths, most of which normally cancel out to create the illusion that light only travels along the path of least time (where action is minimized); the double slit and a few other situations are the rare special cases that show up the true nature of light photons as individually traveling along spatially extended paths:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, *QED,* Penguin, 1990, page 54.

If there are two effective paths that deliver energy to the screen, path 1 and path 2 (as in the double slit experiment with a single photon), then the probability will be given by the squared modulus of the *sum* of the amplitudes for the two paths, |*A*_{1} + *A*_{2}|^{2} (not |*A*_{1}|^{2} + |*A*_{2}|^{2}, which lacks the interference cross-term), where the squaring is just Born’s suggestion to avoid negative probabilities (it has its roots in the Schroedinger wavefunction, which can take negative values, so you need to square that wavefunction to find the relative probability of finding an electron within a given volume of space represented by the value of the wavefunction, so that the probability is always a number between 0 and 1, and is never negative!).
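This two-path probability can be illustrated numerically: the amplitude for each path is a unit complex number, the probability is the squared modulus of their sum, and the relative phase produces the fringes. A minimal sketch in arbitrary units:

```python
import cmath

def two_path_probability(phase1, phase2):
    """|e^{i*phase1} + e^{i*phase2}|^2, each path normalised to unit amplitude."""
    amplitude = cmath.exp(1j * phase1) + cmath.exp(1j * phase2)
    return abs(amplitude) ** 2

print(two_path_probability(0.0, 0.0))        # paths in phase: bright fringe (4.0)
print(two_path_probability(0.0, cmath.pi))   # paths out of phase: dark fringe (~0)
```

Note that the in-phase result is 4, not 2: the interference cross-term is exactly what a naive sum of separate path probabilities would miss.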

Feynman’s genius in discovering path integrals was the amazing intuition it took to realize that Dirac’s ‘propagator’ (derived by Dirac in 1933 from the time-dependent Schroedinger equation’s result for the probability amplitude of a path: *e*^{-iHT/h-bar}, where *H* is the Hamiltonian for a path, i.e. simply the kinetic energy if dealing with a free particle, and *T* is simply time), namely *e*^{iS/h-bar} where *S* is action, *could be used to represent each path, with the squaring of the modulus done only once, on the total summed amplitude!* The complex number in the exponent does it all for you, so you just need to integrate *e*^{iS/h-bar} for all paths contributing energy that affects the overall amplitude. Hence, the total amplitude for the two paths of the double slit experiment is simply: *A*_{1} + *A*_{2} = *B*[*e*^{iS(1)/h-bar} + *e*^{iS(2)/h-bar}]

where *B* is a constant of proportionality (easily determined by adding up all paths and normalizing the summation to a total probability of 1, since energy is conserved and the photon definitely ends up *somewhere,* so the sum of all possible path amplitudes must be equal to a probability of exactly 1 of finding the photon!). Dirac had taken the Hamiltonian amplitude *e*^{-iHT/h-bar} and derived the more fundamental lagrangian amplitude for action *S,* i.e. *e*^{iS/h-bar}. Dirac, however, restricted his work on this problem to merely the *classical* action *S,* whereas Feynman had the genius to extend it to sum over the actions *S* for *all* paths, not just the classical action!

However, notice that this summation over *all* paths has never, ever, ever been proved to require a summation of any curved paths, where there is no mechanism for such curvature in quantum fields. Curved geodesics in general relativity are merely the results of using differential geometry with a necessarily *false* smoothed-out source term tensor *T*_{ab} to *deliberately and artificially* give rise to a smooth curvature! In place of the *factually proved discontinuous distribution* of particles of matter and energy (photons, etc.) which give rise to all gravitational fields, the stress-energy-momentum tensor *T*_{ab} in the field equation of general relativity uses an artificially smoothed-out averaged distribution, with the real-world particulate field discontinuities falsely eliminated! E.g., all of the particles of matter and energy are just ignored and replaced by a totally fictitious ‘perfect fluid’ continuum in general relativity: this false source field continuum then gives rise to the equally unphysical curved spacetime continuum, because it is equated to the Ricci curvature tensor minus a contraction term for conservation of mass-energy.

So, instead of calculating the gravitational fields from a large number of discontinuous particles, general relativity averages out the mass per unit volume and uses the average, giving rise to a false model of gravity which is only approximately valid for certain conditions where the statistical number of gravitons is large enough to average out and appear like a classical field! General relativity is therefore not ‘only missing’ a vital ingredient (quantum fields), but it is entirely a false framework to start off with because of the mass-energy-momentum tensor which doesn’t describe real particulate gravity-causing fields, but only represents at best artificial approximations to such fields which are roughly applicable for large masses.

Anyone with a knowledge of calculus and more than one brain cell knows that discontinuities cause problems to differential equations; vertical steps produce infinities when differentiated to find gradients! There is actually no mechanism for a smooth curvature of geodesics in quantum field theory, where nobody has ever proved that particles (including virtual particles and cancelled particle paths in path integrals!) *don’t* travel according to Newton’s 1st law of motion (straight lines in the absence of quantum interactions which impart forces!). Crackpottery is often introduced into mainstream accounts of path integrals by false claims that curved particle paths are ‘permitted’ by the path integral formulation, but that these paths are cancelled out.

This is false, and the reason for it is to introduce false mythology into physics. There is no evidence for it, there is no checkable prediction from it, and it is pseudoscience. It is a lie to claim that physics requires curved paths of particles to be included in path integrals. It doesn’t. See Feynman’s treatment of the refraction of light using graphical illustrations of path integrals (without any equations at all!) in his 1985 book *QED*: you don’t need wiggly curved paths to be included. All you need to include are straight line paths from light bulb to the water surface, and then, after a discrete deflection at the water surface, another straight line path in the water to the receiver. The differing paths consist solely of straight lines with varying angles of deflection at the water surface! You don’t need to include any curved lines.
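You can check this with a toy Python script (the geometry and the phase scale *k* below are made-up illustrative values; only the refractive index of water is real). Each path is two straight segments meeting at a crossing point on the water surface, one ‘arrow’ per path; the arrows far from the least-time crossing point spin round in phase and cancel, while the straight-segment pairs near it add up, and that least-time point obeys Snell’s law:

```python
import cmath
import math

n = 1.33                 # refractive index of water (real value; rest is made up)
source = (-5.0, 3.0)     # lamp above the surface (y > 0)
receiver = (5.0, -3.0)   # detector under the water (y < 0)
k = 40.0                 # arbitrary phase scale (plays the role of a wavenumber)

def optical_path(x):
    """Straight line in air plus straight line in water, meeting at (x, 0)."""
    air = math.hypot(x - source[0], source[1])
    water = math.hypot(receiver[0] - x, receiver[1])
    return air + n * water  # proportional to the travel time

# One arrow (phase amplitude) per two-straight-segment path:
xs = [i * 0.01 for i in range(-1000, 1001)]
arrows = {x: cmath.exp(1j * k * optical_path(x)) for x in xs}

x_best = min(xs, key=optical_path)  # the least-time crossing point

# Snell's law at the least-time point: sin(theta_air) = n * sin(theta_water)
sin_air = (x_best - source[0]) / math.hypot(x_best - source[0], source[1])
sin_water = (receiver[0] - x_best) / math.hypot(receiver[0] - x_best, receiver[1])
assert abs(sin_air - n * sin_water) < 0.02

# Arrows near the least-time point dominate; the rest spin round and cancel:
near = sum(a for x, a in arrows.items() if abs(x - x_best) <= 1.0)
far = sum(a for x, a in arrows.items() if abs(x - x_best) > 1.0)
assert abs(near) > abs(far)
```

No curved line appears anywhere: the whole refraction result comes out of straight segments with a single discrete deflection at the surface.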

*Above:* Professor Zee lies in Chapter I.2 of his book *Quantum Field Theory in a Nutshell* (Princeton, 2003) that if the screen with two slits in the double-slit experiment has more and more holes drilled into it so that it eventually disappears altogether, you get chaotic path integrals because – so he falsely claims on page 9 – the photons will still diffract just as if they are going through small slits! Zee is so stupid that he ignores the whole *mechanism* for diffraction by a slit: the photon interacts with the electromagnetic fields of the electrons in the material along the edge of a slit, and is thus diffracted. When you remove the slits altogether, there are no edges left to cause photons to diffract, so contrary to Zee, photons don’t go loopy in empty space as if they are being diffracted by an infinite number of slits! Think about the refraction of light when entering glass: the electromagnetic fields in the photon interact with those of the electrons in the glass, and the result is a change in the velocity of light, causing refraction of light by glass. The edge of a slit has electrons in it, and the electromagnetic fields of those electrons interact with the nearby photon, causing it to diffract. Drill lots of holes and yes you get more complex interferences, but if you remove the material altogether you suddenly have no edges of slits left to cause diffraction, so the chaos disappears and things become simple!

Not only is he so gullible and mad that Zee ignores this obvious physical mechanism, but *he falsely attributes his crank analysis to Feynman, who did not author it!* (See Feynman’s book, *QED,* Princeton, 1985, for the facts Zee ignores!) Zee is just a liar and a fraudster: he is not just a charlatan but he draws a salary from teaching lies to people and he sells books with lies in them, which makes him a quack. Quack science often becomes mainstream: Hitler’s genocide was based on quack genetics, for example. So we need to catch these perpetrators and prosecute them for fraud, and convict them for willful deception for profit. Zee also makes some purely physical errors about particle spins, and promotes them with false propaganda. E.g., his path integral for quantum gravity presupposes spin-2 gravitons and then tries to justify this assumption by excluding all the mass in the universe except for two small test masses. Obviously for just two masses, you would indeed need spin-2 graviton exchange to pull them together. But he does not state that if you *include* all the other masses in the universe (all carry gravitational charge, so there is no way to prevent them from exchanging gravitons with your two little test masses), you don’t need spin-2 gravitons anymore, because you can predict gravity with spin-1 gravitons, allowing you to incorporate gravity into the revised Standard Model and have the final theory! But that is just a mistake by Zee, unlike his deception over what Feynman’s path integrals say about the double slit experiment, so it isn’t necessarily a fraud, just plain incompetence which suggests Zee should be sacked from his job for ignorance in the basics of physics. However, I’d like to see Witten kicked out of the Institute for Advanced Study in Princeton for his massive lie:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, ‘Reflections on the Fate of Spacetime’, *Physics Today,* April 1996.

This lie damaged my chances of getting my discovery published in *Classical and Quantum Gravity,* so it has held back scientific progress. I know Witten would possibly argue that my work would have been rejected anyway, but there is such a thing as the straw that breaks the camel’s back; such lies don’t help physics.

In high energy physics above the IR cutoff you get pair production, as explained in detail in the previous post. This causes ‘loops’ in which bosonic field quanta knock pairs of virtual fermions free from the unobservable ground state of the vacuum/ether, which soon annihilate back into radiation again in a ‘loop’ of repeated virtual fermion creation and annihilation. Although it is convenient to depict this process by a circular loop on a spacetime Feynman diagram, even this situation (which is irrelevant below the 1 MeV IR cutoff for all low-energy physics anyway) is not physically composed of curved particle paths. All apparent cases of ‘curvature’ are merely a lot of straight lines joined up with particle interactions occurring at the vertices! Starlight deflected by the sun is deflected in a series of quantum graviton interactions in the vacuum, and the overall result can be statistically modelled to a good approximation by ‘curvature’, but such curvature remains just an approximation. There is no curved continuum spacetime, there is quantum spacetime. This is even clear when you look at the lies needed in general relativity: as soon as you introduce the properly quantized *T*_{ab} energy-momentum-stress tensor as the source of the gravitational field, the theory falls to pieces because the Ricci tensor only represents a continuously variable curved geodesic, not a straight line with discontinuities.

The whole of general relativity is just a classical approximation that usefully allows calculations to be made (albeit with a loss of physical intuition for the nature of the real world) incorporating the conservation of field energy into classical gravitation. It’s a lie to presume that general relativity, or any theory representing discontinuous fields as continuous variables in differential calculus, is a physically correct model. Such calculations are fairly complex approximations to the awesomely simple nature of the physical world, which doesn’t use the calculus.

With Feynman’s innovation, any problem in quantum mechanics can generally be evaluated by integrating the Dirac propagator over all path actions; thus, instead of having to follow Born and add up the squares of the moduli of amplitudes for each path, we just add up a linear summation of *e*^{iS(n)/h-bar} terms, which is much easier and quicker (even a bright two-year-old can do it without making a mistake on a calculator). There is no mathematics beyond the trick of summing the amplitudes in such a way that they add up in a physically logical, simple way without negative probabilities! For large numbers of paths, we can sum using calculus, by integrating *e*^{iS(n)/h-bar} for an infinite number *n* of possible geometric paths with differing actions *S(n).* (This integration may be mathematically hard, and may lead to infinities and problems in some cases, but that’s a human mathematical problem of using the calculus; it’s not a proof that nature is complex! Duh!) Just so that readers who don’t understand quantum field theory can see what we’re doing, *S* is action: action is the integral of the lagrangian over time, and the lagrangian for a free particle in a field is simply the difference between the kinetic energy, *E* = (1/2)*mv*^{2} for non-relativistic situations, and the potential energy it has from the field it is immersed in. If a free massive particle has no potential energy and only kinetic energy, then the lagrangian is just the kinetic energy, (1/2)*mv*^{2}. Integrate that over time and the result is the action, *S*. The amplitude for the path integral just requires the action *S* and Planck’s constant, *h*. The bar through *h* (i.e. *h-bar*) signifies *h* divided by twice Pi, a result of the geometry of rotational symmetry. There’s absolutely no complex mathematics whatsoever, no stringiness whatsoever, within nature; instead it is beautifully simple and factual.
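To show just how simple this recipe is, here it is in a few lines of Python (the electron speed and flight time are made-up illustrative numbers): the free-particle action is just (1/2)*mv*^{2} multiplied by the time, and each path’s arrow *e*^{iS/h-bar} always has unit length, so only its direction (phase) matters:

```python
import cmath

hbar = 1.0545718e-34  # J*s: reduced Planck constant, h divided by 2*pi

def free_particle_action(m, v, t):
    """S = integral of the lagrangian over time; with no potential energy the
    lagrangian is just the kinetic energy (1/2)mv^2, constant here, so the
    integral is simply (1/2)*m*v**2*t."""
    return 0.5 * m * v ** 2 * t

def path_arrow(S):
    """Feynman's phase amplitude e^{iS/h-bar} for a single path."""
    return cmath.exp(1j * S / hbar)

# Illustrative (made-up) numbers: an electron at 1e6 m/s for 1e-15 seconds
S = free_particle_action(9.109e-31, 1e6, 1e-15)
arrow = path_arrow(S)

# Every path's arrow has length 1; paths differ only in phase direction.
assert abs(abs(arrow) - 1.0) < 1e-9
```

That really is all the mathematics there is to one path’s contribution; the rest of the path integral is just adding such arrows up.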
It’s really important for me to stress that Feynman was not, definitely not, *merely* trying to solve the problem of the infinite momenta of field quanta close to the middle of an electron, and other quantum field theory problems, with path integrals by imposing cutoffs for infrared and ultraviolet divergences (i.e. renormalization) in his theory. That is a lie, spread by liars in the mainstream who believe in extradimensional crap. Yes, Feynman did solve problems with renormalization, but what is being suppressed is that his innovation is not a mere abstract addition to the existing theory of quantum mechanics. It’s a revolution which replaces Bohring physics of multiverse speculations and other nonsense with facts, as you can see by reading the key paper by Feynman, which was inevitably rejected by the *Physical Review* (see page 2 of 0004090v1; due to egotistical cranky ‘peer’ reviewers who worship false dogma and abhor factual physics, the *Physical Review* has regularly acted as a typical pseudoscience propaganda journal which believes that religious lobbying is a substitute for hard facts from experimental work on quantum gravity), before being published in *Reviews of Modern Physics,* vol. 20 (1948), p. 367:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger's wave equation and Heisenberg's matrix mechanics being the first two attempts, which both generate nonsense 'interpretations']. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

Wow, what an *understatement!* I’m *not alone* in supporting Feynman’s case against the crackpot, backward mainstream which is still stuck in 1927 with obsolete physics and hasn’t grasped path integrals at all. E.g., Richard MacKenzie clearly supports what I’m saying about Feynman where he writes in his paper *Path Integral Methods and Applications,* pages 2-13:

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed *h-bar,* paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> *h-bar*), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

So far so good, but I must point out that MacKenzie goes on to make a terrible error in his analysis of the Aharonov-Bohm effect, where a shielded box containing a magnetic field is placed between the two slits in the double slit experiment, and the electron interference pattern is affected by the magnetic field in the box. (This experiment was first done by Chambers in 1960.) The fatal mainstream error MacKenzie makes is the implicit assumption that the ‘shield’ which eliminates the *observable* magnetic field actually *stops* that magnetic field instead of merely cancelling it by superposition! Magnetic fields work by polarization. Little magnets such as fundamental spinning charges align against an external field in such a way as to oppose and partially ‘cancel’ that field: but this cancellation is a superposition of two fields, not the elimination of a field. Think simply: if you put a child on each end of a see-saw, it may balance, but that doesn’t mean you have cancelled out all the forces. You have only cancelled out some of the forces: you have ensured that the forces balance, but there is still a force on the fulcrum that isn’t ‘cancelled out’. Similarly, if you have $1000 credit in one bank account and a debt of $1000 in another, you aren’t free from debt *unless you transfer the money across.*

What happens with magnetic fields is that any material is full of magnetic fields, because all fundamental charged particles have electric charge and spin, but normally the random orientations or the paired-up spins (adjacent electrons in an atom are paired with opposite spins under the Pauli exclusion principle) mean that the magnetism cancels out. Only when you have an asymmetry, aligning more of the spins one way than the opposite way, do you see the magnetic field. In the absence of alignment, the fields cancel by superposition, but the energy is still there in the field (energy is conserved). Therefore, in the Aharonov-Bohm effect, the influence of the ‘shielded’ magnetic field on the interference pattern isn’t ‘magical’ or unexpected. The fields of the diffracting electron are affected by the *energy density* of the ‘cancelled’ magnetic fields, just as light slows down in a block of glass due to the energy density of the electromagnetic fields from the charged matter making up glass!

All of the mainstream ‘physicists’ (quacks) I’ve spoken to believe wrongly that because an ‘uncharged’ block of glass contains as many protons as electrons and hence has a net electric charge of zero, the electric fields ‘don’t exist’ anymore there, just as they claim the magnetic fields ‘don’t exist’ in the Aharonov-Bohm effect. They are so far gone into mystical eastern entanglement quackery that they just ignore anomalies and become abusive when disproved time after time, and of course they get still more angry when you predict gravity factually and all the related predictions from the corrected physics. They are all totally insane, they are bad losers, they hate real physics, they hate the way the world really is!

This is essential to the *checkable aspects* of quantum gravity, i.e., low energy quantum gravity stuff like predicting the gravity force coupling parameter *G,* because at low energy graviton fields will carry a very low energy density (gravity is 10^{40} times weaker than electromagnetism at low energy, everyday physics). Therefore, at low energy, we can ignore the effects of graviton emission from the energy of the gravitational fields (because they are so *weak* at low energy) which ensures that the path integrals for quantum gravity will be similar to those of electromagnetism for low energy physics, where the checkable predictions of quantum gravity will be found. Who – apart from nutty string theorists – cares about the uncheckable speculations of Planck scale quantum gravity? If we first get a quantum gravity theory that makes correct checkable predictions at low energy, then we will be in a position to make confident extrapolations from that particular theory to higher energies. We can’t have that confidence if we start with speculations of high energy that can’t be checked! Duh! Get a grip on reality, all you string theorists and fellow-travellers in the media!

This makes quantum gravity path integrals very simple for low energy, like electromagnetism. So let’s deal with electromagnetism first, then move on to quantum gravity.

Feynman explains that all light sources radiate photons in all directions, along *all* paths, but most of those cancel out due to destructive interference. If you throw a stone at an apple, the apple won’t move significantly if someone on the other side of the apple does the same thing with a similar stone! The two impacts will cancel out, apart from a compression of the apple! In other words, there are natural situations where exchange radiation causes destructive interference, and the nature of light is exactly this situation.

The amplitudes of the paths near the classical path reinforce each other because their phase factors, representing the relative amplitude of a particular path, exp(-*iHT*/*h*-bar) = exp(*iS*/*h*-bar), are nearly aligned; here *H* is the Hamiltonian (kinetic energy in the case of a free particle), and *S* is the action for the particular path measured in quantum action units of *h*-bar (action *S* is the integral of the lagrangian field equation over time for a given path).

Because you have to integrate the phase factor exp(*iS*) over all paths to obtain the resultant overall amplitude, clearly radiation is being exchanged over all paths, but is being cancelled over most of the paths somehow. The phase factor equation models this as interferences without saying physically what process causes the interferences.
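This cancellation takes only a few lines of Python to demonstrate (with *h-bar* set to 1 and the action spreads made-up illustrative numbers): paths whose actions differ by much less than *h-bar* add up coherently, while paths whose actions are spread over many units of *h-bar* point in effectively random directions and wipe each other out:

```python
import cmath
import random

random.seed(0)
hbar = 1.0   # natural units (illustrative)
N = 2000     # number of sample paths in each group

S_classical = 123.4  # the extremal (classical) action: arbitrary value

# Paths near the classical path: actions within +/- 0.1 h-bar, phases align.
near = sum(cmath.exp(1j * (S_classical + random.uniform(-0.1, 0.1)) / hbar)
           for _ in range(N))

# Paths far from it: actions spread over +/- 1000 h-bar, phases random.
far = sum(cmath.exp(1j * (S_classical + random.uniform(-1000.0, 1000.0)) / hbar)
          for _ in range(N))

coherent = abs(near) / N    # close to 1: constructive interference
cancelled = abs(far) / N    # close to 0: destructive interference
assert coherent > 0.9
assert cancelled < 0.2
```

The uncancelled core of near-classical paths is what survives; everything else interferes itself away.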

Thus, in Feynman’s path integral explanation in his 1985 book *QED,* an electron when it radiates actually sends out radiation in *all directions, along all possible paths,* but most of this gets cancelled out because all of the other electrons in the universe around it are doing the same thing, so the radiation just gets exchanged, cancelling out in ‘real’ photon effects. (The electron doesn’t lose energy, because it gains as much by receiving such virtual radiation as it emits, so there is equilibrium). Any “real” photon accompanying this exchange of unobservable (virtual) radiation is then represented by a small core of uncancelled paths, where the phase factors tend to add together instead of cancelling out.

All electrons have centripetal acceleration from spin and so are always radiating, so there is an equilibrium of emission and reception established in the universe, called exchange radiation/vector bosons/gauge bosons, which can only be ’seen’ via force fields they produce; ‘real’ radiation simply occurs when the normally invisible exchange equilibrium gets temporarily upset by the acceleration of a charge.

A conspiracy of mainstream string worshipping physics quacks claims that quantum entanglement exists and that the universe can’t be described in terms of Feynman’s simplicity, but this is a lie as exposed by the following facts:

Editorial policy of the American Physical Society journals (including PRL and PRA):

From: Physical Review A [mailto:pra@aps.org]

Sent: 19 February 2004 19:47

To: ch.thompson1@virgin.net

Subject: To_author AG9055 Thompson

Re: AG9055

Dear Dr. Thompson,

… With regard to local realism, our current policy is summarized succinctly, albeit a bit bluntly, by the following statement from one of our Board members:

“In 1964, John Bell proved that local realistic theories led to an upper bound on correlations between distant events (Bell’s inequality) and that quantum mechanics had predictions that violated that inequality. Ten years later, experimenters started to test in the laboratory the violation of Bell’s inequality (or similar predictions of local realism). No experiment is perfect, and various authors invented ‘loopholes’ such that the experiments were still compatible with local realism. Of course nobody proposed a local realistic theory that would reproduce quantitative predictions of quantum theory (energy levels, transition rates, etc.). This loophole hunting has no interest whatsoever in physics.” …’

The late Dr Caroline Thompson’s paper abstract set out the suppressed facts plainly:

‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.’

After her tragic death from cancer in 2006, her website was preserved, where she wrote in defiance of the *Physical Review* editor:

http://freespace.virgin.net/ch.thompson1/EPR_Progress.htm:

‘The story, as you may have realised, is that there is no evidence for any quantum weirdness: quantum entanglement of separated particles just does not happen. This means that the theoretical basis for quantum computing and encryption is null and void. It does not necessarily follow that the research being done under this heading is entirely worthless, but it does mean that the funding for it is being received under false pretences. It is not surprising that the recipients of that funding are on the defensive. I’m afraid they need to find another way to justify their work, and they have not yet picked up the various hints I have tried to give them. There are interesting correlations that they can use. It just happens that they are ordinary ones, not quantum ones, better described using variations of classical theory than quantum optics.

‘Why do I seem to be almost alone telling this tale? There are in fact many others who know the same basic facts about those Bell test loopholes, though perhaps very few who have even tried to understand the real correlations that are at work in the PDC experiments. I am almost alone because, I strongly suspect, nobody employed in the establishment dares openly to challenge entanglement, for fear of damaging not only his own career but those of his friends.’

The stringy mainstream still ignores Feynman’s path integrals as a reformulation of quantum mechanics (a third option), treating them instead as merely a piece of QFT machinery: Feynman’s paper ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, *Reviews of Modern Physics,* volume 20, page 367 (1948), makes it clear that his path integrals are a reformulation of quantum mechanics which gets rid of the uncertainty principle and all the pseudoscience it brings with it.

Richard P. Feynman, *QED,* Penguin, 1990, pp. 55-6, and 84:

‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = path phase amplitudes in the path integral, i.e. *e*^{iS(n)/h-bar}] for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

*So classical and quantum field theories differ due to the physical exchange of field quanta between charges. This exchange of discrete virtual quanta causes chaotic interferences to individual fundamental charges in strong force fields.* Field quanta induce Brownian-type motion of individual electrons inside atoms, but this does not arise for very large charges (many electrons in a big, macroscopic object), because statistically the virtual field quanta average out the randomness in such cases. If the average rate of exchange of field quanta is *N* quanta per second, then the random standard deviation is 100/*N*^{1/2} percent. Hence the statistics prove that the bigger the rate of field quanta exchange, the smaller the amount of chaotic variation. For large numbers of field quanta resulting in forces over long distances and for large charges like charged metal spheres in a laboratory, the rate at which charges exchange field quanta with one another is so high that the Brownian motion imparted to individual electrons by chaotic exchange gets statistically cancelled out, so we see a smooth net force and classical physics is accurate to an extremely good approximation.
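The 100/*N*^{1/2} percent rule is just standard counting statistics, which the Python snippet below verifies by brute force for a toy quanta-counting process (the mean rate of 100 quanta per second is a made-up illustrative number; the Poisson sampler is the classic Knuth multiplication method):

```python
import math
import random

random.seed(1)

def predicted_fluctuation_percent(N):
    """Relative standard deviation of a count with mean N: 100/sqrt(N) percent."""
    return 100.0 / math.sqrt(N)

def poisson_count(mean):
    """Draw a Poisson-distributed count (Knuth's multiplication method)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Monte Carlo check: count quanta over 2000 one-second trials at an average
# exchange rate of N = 100 quanta per second (illustrative rate).
N = 100
counts = [poisson_count(N) for _ in range(2000)]
mean = sum(counts) / len(counts)
variance = sum((c - mean) ** 2 for c in counts) / len(counts)
measured = 100.0 * math.sqrt(variance) / mean

# The prediction is 100/sqrt(100) = 10 percent; the simulation agrees:
assert abs(measured - predicted_fluctuation_percent(N)) < 1.0

# For a macroscopic charge the residual chaos is utterly negligible:
assert abs(predicted_fluctuation_percent(1e18) - 1e-7) < 1e-12
```

So a single bound electron (tiny *N*) jitters chaotically, while a charged metal sphere (enormous *N*) feels a force smooth to a part in a billion: classical physics emerges purely statistically.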

Thus, chaos on small scales has a provably beautiful simple physical mechanism and mathematical model behind it: path integrals with phase amplitudes for every path. This is analogous to the Brownian motion of individual 500 m/sec air molecules striking dust particles which creates chaotic motion due to the randomness of air pressure on small scales, while a ship with a large sail is blown steadily by averaging out the chaotic impacts of immense numbers of air molecule impacts per second. So nature is extremely simple: there is no evidence for the mainstream ‘uncertainty principle’-based metaphysical selection of parallel universes upon wavefunction collapse. (Stringers love metaphysics.) Dr Thomas Love, who writes comments at Dr Woit’s Not Even Wrong blog sometimes, kindly emailed me a preprint explaining:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

‘… nature has a simplicity and therefore a great beauty.’

- Richard P. Feynman (*The Character of Physical Law,* p. 173)

The double slit experiment, Feynman explains, proves that light uses a small core of space where the phase amplitudes for paths add together instead of cancelling out, so if that core overlaps two nearby slits the photon diffracts through both the slits:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– R. P. Feynman, *QED,* Penguin, 1990, page 54.

Hence nature is very simple, with no need for the wavefunction collapse or the ‘multiverse’ lie of crackpot Hugh Everett III, who wouldn’t even incorporate the physical dynamics of fallout particle sizes and deposition phenomena in his purely statistical paper allegedly predicting fallout casualties:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, *The Character of Physical Law,* November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

‘The history of science teaches that the greatest advances in the scientific domain have been achieved by bold thinkers who perceived new and fruitful approaches that others failed to notice. If one had taken the ideas of these scientific geniuses who have been the promoters of modern science and submitted them to committees of specialists, there is no doubt that the latter would have viewed them as extravagant and would have discarded them for the very reason of their originality and profundity. As a matter of fact, the battles waged, for example by Fresnel and by Pasteur suffice to prove that some of these pioneers ran into a lack of understanding from the side of eminent scholars which they had to fight with vigor before emerging as the winners. More recently, in the domain of theoretical physics, of which I can speak with knowledge, the magnificent novel conceptions of Lorentz and Planck, and particularly Einstein also clashed with the incomprehension of eminent scientists. The new ideas here triumphed; but, in proportion as the organization of research becomes more rigid, the danger increases that new and fruitful ideas will be unable to develop freely.

‘Let us state in a few words the conclusion to be drawn from the foregoing. While, by the very force of circumstances, research and teaching are weighted down by administrative structures and financial concerns and by the heavy armature of strict regulations and planning, it becomes more indispensable than ever to preserve the freedom of scientific research and the freedom of initiative for the original investigators, because these freedoms have always been and will always remain the most fertile sources for the grand progress of science.’

- Nobel Laureate Louis de Broglie, April 25, 1978

‘The mind likes a strange idea as little as the body likes a strange protein and resists it with similar energy. It would not perhaps be too fanciful to say that a new idea is the most quickly acting antigen known to science. If we watch ourselves honestly we shall often find that we have begun to argue against a new idea even before it has been completely stated.’ – Wilfred Trotter, 1941

‘The study of history is a powerful antidote to contemporary arrogance. It is humbling to discover how many of our glib assumptions, which seem to us novel and plausible, have been tested before, not once but many times and in innumerable guises; and discovered to be, at great human cost, wholly false.’ – Paul Johnson

‘The expression of dissenting views may not seem like much of a threat to a powerful organization, yet sometimes it triggers an amazingly hostile response. The reason is that a single dissenter can puncture an illusion of unanimity. … The existence of suppression of dissent as a pervasive feature of science calls for a reconceptualization of the enterprise. Rather than being solely a search for the truth, science is closely bound up with the exercise of power. This is normally acknowledged for totalitarian regimes and for military dictatorships, where intellectual suppression is overt. But the same sorts of processes occur, usually in a more subtle fashion, in liberal democracies. From Copernicus to Darwin to Einstein, as well as countless others who have challenged the conventional wisdom, it has been the dissidents, the outsiders, the contrarians who have spurred science on. We should protect and encourage dissent, even when we disagree with the dissidents.’ – Brian Martin

‘The notion that a scientific idea cannot be considered intellectually respectable until it has first appeared in a “peer” reviewed journal did not become widespread until after World War II. Copernicus’s heliocentric system, Galileo’s mechanics, Newton’s grand synthesis – these ideas never appeared first in journal articles. They appeared first in books, reviewed prior to publication only by their authors, or by their authors’ friends. Even Darwin never submitted his idea of evolution driven by natural selection to a journal to be judged by “impartial” referees. Darwinism indeed first appeared in a journal, but one under the control of Darwin’s friends. And Darwin’s article was completely ignored. Instead, Darwin made his ideas known to his peers and to the world at large through a popular book: *On the Origin of Species.* I shall argue that prior to the Second World War the refereeing process, even where it existed, had very little effect on the publication of novel ideas, at least in the field of physics. But in the last several decades, many outstanding physicists have complained that their best ideas – the very ideas that brought them fame – were rejected by the refereed journals. Thus, prior to the Second World War, the refereeing process worked primarily to eliminate crackpot papers. Today, the refereeing process works primarily to enforce orthodoxy. I shall offer evidence that “peer” review is NOT peer review: the referee is quite often not as intellectually able as the author whose work he judges. We have pygmies standing in judgment on giants. I shall offer suggestions on ways to correct this problem, which, if continued, may seriously impede, if not stop, the advance of science.’ – Frank J. Tipler, Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?

For the proof that redshift is not caused by light getting tired and losing energy by interacting with particles, please see the really excellent article on the internet by Professor Ned Wright, *Errors in Tired Light Cosmology.* The whole spectrum of redshifted light is uniformly shifted to lower frequencies, which wouldn’t occur if light were simply being reddened by scattering or absorption effects. We know that force-causing exchange radiation has specific frequencies because that’s how the Casimir effect is produced (long wavelengths of vacuum radiation are excluded between two metal plates, which get pushed together by the full spectrum of radiation beyond the plates). The redshift part of the big bang is solid science with plenty of facts behind it.
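The plate-separation dependence of the Casimir effect is quantitative: for ideal parallel conducting plates the standard result is an attractive pressure P = Pi^{2}*(h-bar)*c/(240*a^{4}). A short numerical sketch (my illustration only; the separations chosen are arbitrary):

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J s
C = 2.998e8       # speed of light, m/s

def casimir_pressure(a):
    """Attractive Casimir pressure (Pa) between ideal parallel
    conducting plates separated by a metres:
    P = pi^2 * hbar * c / (240 * a^4)."""
    return math.pi**2 * HBAR * C / (240 * a**4)

# Pressure at 1 micrometre and 100 nm separations:
print(casimir_pressure(1e-6))  # ~1.3e-3 Pa
print(casimir_pressure(1e-7))  # 10^4 times larger, ~13 Pa
```

The a^{-4} dependence means the push from excluded long wavelengths grows enormously as the plates approach.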

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, *Evolution of Physics,* 1938, pp. 184-5. (This is a very political comment by them, and shows them acting in a very political – rather than purely scientific – light.)

‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, *Relativity Theory of Protons and Electrons,* Cambridge University Press, Cambridge, 1936, p. 180.

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, *General Relativity,* Cambridge University Press, 1986, pp89-90.

‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, *New Pathways in Science,* vol. 2, p39, 1935.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., *Physical Review Letters,* v.78, 1997, no.3, p.424.

‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, *A History of the Theories of the Aether and Electricity,* 2nd ed., v1, p. v, 1951.

‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, *Modern Physics,* Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), *Philosophy in the Open,* Open University Press, Milton Keynes, 1974.

If he were writing today, maybe he would have to reverse a lot of that to account for the hype-type “success” of string theory ideas that fail to make definite (quantitative) checkable predictions, while alternatives are censored out completely.

No longer could Dr Lakatos claim that:

“What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.”

It’s quite the opposite. The mainstream, dominated by string theorists like Jacques Distler and others at arXiv, can actually stop “silly” alternatives from going on to arXiv and being discussed, as they did with me:

http://arxiv.org/help/endorsement -

‘We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’

What serious researcher is going to treat quantum field theory objectively and work on the simplest possible mechanisms for a spacetime continuum, when it will result in their censorship from arXiv, their inability to find any place in academia to study such ideas, and continuous hostility and ill-informed ‘ridicule’ from physically ignorant string ‘theorists’ who know a lot of very sophisticated maths and think that gives them the authority to act as ‘peer-reviewers’ and censor stuff from journals that they refuse to first read?

Sent: 02/01/03 17:47

Subject: Your_manuscript LZ8276 Cook

Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories.

Yours sincerely,

Stanley G. Brown, Editor,

Physical Review Letters

Now, why has this ‘nice genuine guy’ still not published his personally endorsed proof of what is a ‘currently accepted’ prediction for the strength of gravity? Will he ever do so? Surely the editor of Physical Review Letters isn’t just another quack? I’m afraid so:

‘… in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory.’

- Sir Roger Penrose, *The Road to Reality,* Jonathan Cape, London, 2004, page 896.

Richard P. Feynman points out in *The Feynman Lectures on Gravitation,* page 30, that gravitons do not have to be spin-2, which has never been observed! Despite this, the censorship of the facts by mainstream ‘stringy’ theorists persists, with professor Jacques Distler and others at arXiv believing with religious zeal that (1) the rank-2 tensors of general relativity prove spin-2 gravitons and (2) string theory is the only consistent theory for spin-2 gravitons, despite Einstein’s own warning shortly before he died:

‘I consider it quite possible that physics cannot be based on the [smooth geometric] field principle, i.e., on continuous structures. In that case, *nothing* remains of my entire castle in the air.’

- Albert Einstein in a letter to friend Michel Besso, 1954.

**Epilogue on groupthink**

*Above:* a still from the video below, showing Britain’s Prime Minister Gordon Brown smirking at the complaints made in the European Parliament by Daniel Hannan MEP about the £100 billion of taxpayers’ debt he ran up as Chancellor and Prime Minister. Groupthink means he will profit from what he does, just as Witten will escape justice for lying about string theory predicting gravity:

1. His multibillion-pound ‘New Deal’ for the young unemployed has failed, just as was predictable (he made no effort to make it work; it was just a back-of-the-envelope media spin idea to waste money): there are now 850,000 young people who are ‘NEET’: Not in Employment, Education, or Training. What a failure!

2. His tax credits system has rewarded single mothers for having as many children by different men as they can, fuelling dependency, juvenile delinquency and family breakdown.

3. He blocked Frank Field’s attempts for welfare reform, allowing alcoholics and drug addicts to live on premium-rate incapacity benefit.

4. He threw tens of billions of pounds into the unreformed National Health Service where it was poured down the drain, while he was reducing the freedoms to be offered to foundation hospitals.

5. He frustrated Tony Blair’s plans to give more freedom to head teachers and more choice to parents while he was Chancellor.

6. He sank the country into debt to make the powerful Labour Party backers (the public sector unions and the left wingers) support his leadership ambitions. It was the personal greed of one man for power and glory as a leading statesman which sank Britain into crisis.

7. “Mr Brown’s expenses claim receipts, part of a batch of ministerial claims obtained by The Daily Telegraph, show that he paid his brother, a senior executive of EDF Energy, £6,577 over 26 months for cleaning services. Downing Street said that the brothers had shared a cleaner for a number of years.” – Philippe Naughton, ‘No 10 releases Gordon Brown’s cleaning contract’, From Times Online, May 8, 2009.

This proves how sinister Prime Minister Gordon Brown is: squandering vast sums of taxpayers’ money in expenses on cleaning his flat after adding £100 billion to the British national debt as Chancellor and Prime Minister. Brown was Chancellor before becoming Prime Minister, and is responsible for the money-wasting and UK debt over the last 12 years or so (since 1997). During the global economic boom years of 1997-2008, he squandered taxpayers’ money on rubbish nobody wanted, like the Millennium Dome, and funded the squandering by borrowing, adding £100 billion to the public debt. Recently newspapers have exposed that he personally has been claiming thousands of pounds in expenses for having a small flat cleaned. As Chancellor a decade ago, he deregulated the banks in the UK, enabling them to lend vast amounts to risky debtors and thus cause the recent UK banking crisis. Not only that, he sold off the UK’s gold reserves when gold was at its lowest price, just before its value shot up, thus making a fantastic loss for the taxpayer. But he didn’t worry, because he doesn’t use his own elbow grease, let alone pay out of his own pocket, to have his flat cleaned. The taxpayer gets the bill, as always, for his incompetence and failure.

*Above:* this is the YouTube video of the attack on him, to his face, in the European Parliament by Daniel Hannan MEP:

‘The truth, Prime Minister, is that you have run out of our money. The country as a whole is now in negative equity. Every British child is born owing around £20,000. Servicing the interest on that debt is going to cost more than educating the child. … it is true that we are all sailing together into the squall – but not every vessel in the convoy is in the same dilapidated condition. Other ships used the good years to caulk their hulls and clear up their rigging – in other words, to pay off debt – but you used the good years to raise borrowing yet further. As a consequence, under your captaincy, our hull is pressed deep into the water line, under the accumulated weight of your debt. We are now running a deficit that touches almost 10% of GDP – an unbelievable figure. More than Pakistan, more than Hungary – countries where the IMF has already been called in.

‘Now, it’s not that you’re not apologising – like everyone else, I’ve long accepted that you’re pathologically incapable of accepting responsibility for these things – it’s that you’re carrying on, wilfully worsening the situation, wantonly spending what little we have left. Last year, in the last twelve months, 125,000 private sector jobs have been lost – and yet you’ve created 30,000 public sector jobs. Prime Minister you cannot go on forever squeezing the productive bit of the economy in order to fund an unprecedented engorging of the unproductive bit.

‘You cannot spend your way out of recession or borrow your way out of debt. And when you repeat, in that wooden and perfunctory way, that our situation is better than others, that we’re well placed to weather the storm, I have to tell you, you sound like a Brezhnev-era Apparatchik giving the party line. You know, and we know, and you know that we know that it’s nonsense. Everyone knows that Britain is the worst placed to go into these hard times. The IMF has said so. The European Commission has said so. The markets have said so, which is why our currency has devalued by 30% …’

Just as with the continuing debacle of Gordon Brown as Prime Minister, we will have to put up with Edward Witten and his fellow travellers like Lubos Motl ignoring the factual reality of the world and promoting lies with continuing spin and propaganda. People like Gordon Brown and Edward Witten have obtained positions of power and trust, but the same was true of Hitler in 1933, so that doesn’t prove they are currently acting professionally. Politics is the opposite of science:

‘Unfortunately, I am very sceptical of the potential impact of this book on the field of particle physics. The Emperor is naked, but he is perceived as irrelevant as well.’

- Dr. Bojan Tunguz, http://www.amazon.com/Not-Even-Wrong-Failure-Physical/dp/0465092756

I’m sure that anyone reading this particular post will grasp Dr Tunguz’s point. String theory cheapens physics by replacing the well-proven structure of mathematical physics with abject unproved speculations. String theory has not been shown to be internally self-consistent; it has not even been proved consistent to a few terms in the perturbative expansion. Furthermore, it can’t make predictions and it doesn’t deal with reality: it’s a religion. This is why people don’t study physics so much now in England (A-level uptake of physics is falling, and university physics departments have been closing, as I have reported on this blog for years); intelligent people see the dogma for the political spin confidence trick it is, irrelevant to the real world. The mainstream sees the solution as more lying spin and hype, just as Gordon Brown sees the solution to his political problems as more lies, spin and hype. In other words, treat people with hatred and contempt and insult their intelligence! Those living under Witten, like those who in the past lived under Hitler, can do nearly nothing about it. Once you give a dictator power, it’s too late to change your mind. If you try to oppose the lies and spin, you will just draw attention to yourself in a negative way and receive abuse in consequence! If you try to be reasonable, you will be ignored completely! All your objections to fashionable dictators like Gordon Brown, Hitler, and Witten have as much relevance to politics or physics as those men themselves have to useful politics or useful physics. In other words, the mere ability to recognise a problem and its solution is irrelevant, because there is no mechanism available to get anyone to listen and implement the facts. Everything must be approved by groupthink, which is automatically hostile to advance!

*Forum: On the importance of being creative – Innovative thinkers should be allowed to come to the fore. From issue 1692 of New Scientist magazine, 25 November 1989, by HOWARD FIRTH (Howard Firth is an independent science consultant, and was director of the first Edinburgh Science Festival.)*

‘It’s not merely that people with creativity and flair are not properly paid; in many places they are not wanted, as they unsettle those in more established positions.

‘The problem is that the result of all the training in the dominant disciplines of finance, personnel and marketing is not to encourage new ways of thinking, but to keep people thinking along established lines. The skills we are recruiting for are those of the fast talker and the forceful personality, the utilisers of the here and now, rather than the creative minds that constantly question the given order of things.

‘And, of course, each new layer of conventional-thinking, establishment-minded people has to protect itself by appointing more conventional-thinking and establishment-minded people below, thereby building up every year an even stronger wall against the creative thinkers who find that, as time goes on, even their most positive attitudes crumble into bitterness. Every year, some new government initiative comes along – and successive governments deserve credit for at least trying. The trouble with enterprise and training initiatives is that the people who are put in charge of them are often the type of people who have got there because of their ability in conventional ways of doing things. …

‘Catt argues that as bodies of knowledge grow, they become stronger in keeping out any new items of knowledge that appear to question the fundamental base of the established knowledge and its practitioners. To assist the propagation of new ideas, he proposes the creation of an electronic information-sharing network.’

As Machiavelli noted in Chapter VI of his Renaissance book of guidance for politicians, *The Prince*: ’… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new.’

This, plus being on the receiving end of groupthink orthodoxy as a kid with a speech problem, is what has led me to hate the kind of majority-is-always-right politics and politicians which so often win the majority of votes in so-called ‘democracy’ today: a ‘democracy’ in which the amount of choice is pitifully small, basically a choice every four years between two parties which are relatively similar in trying to be popular enough to win! This type of ‘democracy’ is fairly distant from the original form of democracy practised in the city states of Ancient Greece, where all citizens would vote daily on policies. The number of decisions each citizen is able to participate in has therefore fallen by a factor of at least (365 days)*(4 years) = 1,460, and in modern ‘democracy’ people have to form pressure groups, get the media on their side, and try to shame elected politicians into taking notice of some problem and acting upon it: in the real world you have to fight, not vote, for freedom even in a democracy, using tactics of pestering the media and those who don’t want to know which are hardly much different from those used by many of the dissenters in the Soviet Union (which had effectively one-party elections, and claimed in its propaganda to be democratic, too!). So there are different forms of democracy, and ours is not necessarily as good as it could be. This problem applies not only to mainstream party politics, but all the way through the expert quangos, educational committees, technical journal editorial committees and ‘peer’-reviewer politics. Science is entirely different in nature from the political system which prints, promotes and teaches science. Propaganda can make a scientific failure look like a success by excluding a better theory and by dismissing errors as ‘mere anomalies’.

UPDATE:

copy of a comment to Louise Riofrio’s blog:

http://riofriospacetime.blogspot.com/2009/06/another-voice.html

Hi Louise,

Thank you very much for blogging about the problems in the mainstream lambda cold dark model.

General relativity falsely replaces the discontinuous (particulate) distribution of fields and matter with a smooth artificial stress-energy-momentum tensor, T_{ab}.

This is equated, via Einstein’s field equation, to the Ricci curvature tensor minus its trace contraction term, so the whole of general relativity is artificial to begin with, regardless of gravitons! It’s a false continuum source model being used to represent particulate (quantized) energy fields and particulate matter!

Anyone can see general relativity is obsolete classical junk, only relevant for physics as a fully relativistic correction to classical Newtonian gravity, which gets the light curvature etc. correct by energy conservation!

It’s not a complete theory of gravity. General relativity implicitly assumes (by taking a universal fixed Newtonian coupling constant G for the entire universe) – without any evidence – that there is no gravitational mechanism within the universe.

When I point out these problems, the mainstream says falsely “well it’s the best theory of gravity until we have a quantum theory of gravity”. Duh! Some people have worked out a theory of gravity, and they just suppress it from journals and even from arXiv,

http://arxiv.org/help/endorsement -

‘We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’

Thanks also for the update about Jacqui Smith going. That’s excellent news. Maybe if the new Home Secretary spends less time submitting claims to get the taxpayer to pay for her husband’s dirty porn on Virgin Cable, work visas will be done more efficiently! (Or is that just wishful thinking?)

I’ve suspended my Facebook account to avoid distraction until I’ve finished writing a new paper on gravity, reviewing all the theories.

Copy of a comment to Louise Riofrio’s blog:

http://riofriospacetime.blogspot.com/2009/05/inconsistent-with-inflationary-lcdm.html

Hi Qubit,

Planck had to introduce quantum theory in order to explain the blackbody radiation spectrum curve, which does not go to infinity at high frequencies as classical theory predicted; and Bohr had to introduce quantized orbits to explain observed line spectra.
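As a quick numerical illustration of this (a sketch of my own using standard constants, not part of the argument above), the Planck law stays finite at high frequencies while the classical Rayleigh-Jeans law diverges:

```python
import math

h = 6.626e-34  # Planck constant, J s
c = 2.998e8    # speed of light, m/s
k = 1.381e-23  # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B(nu, T), W sr^-1 m^-2 Hz^-1."""
    return (2 * h * nu**3 / c**2) / (math.exp(h * nu / (k * T)) - 1)

def rayleigh_jeans(nu, T):
    """Classical Rayleigh-Jeans law, which grows without limit as nu^2."""
    return 2 * nu**2 * k * T / c**2

T = 5800.0  # roughly the solar surface temperature, K
for nu in (1e13, 1e14, 1e15, 1e16):
    print(f"nu = {nu:.0e} Hz: Planck = {planck(nu, T):.3e}, "
          f"Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}")
```

At low frequencies the two laws agree; at high frequencies the classical curve keeps climbing while the Planck curve is exponentially cut off, which is exactly the divergence Planck’s quanta removed.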

The trouble was started by Ernest Rutherford’s impatient and insulting letter to Niels Bohr dated 20 March 1913, where Rutherford stated:

“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”

(Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)

Rutherford made two errors here.

1. New correct theories will introduce anomalies at first, until a lot more research is sponsored to sort out the problems. E.g., Prout’s hypothesis that all atomic weights are whole-number multiples of hydrogen’s was ridiculed and rejected at first because the mass of chlorine is 35.5 times that of hydrogen. Later, it became clear that chlorine contains isotopes with the same basic chemistry (the same number of protons and electrons) but differing numbers of neutrons.
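The chlorine figure is just a weighted average over the isotopes. A trivial sketch (the abundance figures below are from memory, roughly 76% chlorine-35 and 24% chlorine-37; treat them as approximate):

```python
# Approximate natural abundances of chlorine isotopes
# (mass number -> fraction; illustrative values, not exact):
abundances = {35: 0.758, 37: 0.242}

average_mass = sum(m * f for m, f in abundances.items())
print(f"Weighted average mass of chlorine: {average_mass:.2f}")  # about 35.5
```

So the anomalous 35.5 is no anomaly at all once isotopes are admitted: it is the abundance-weighted mean of two whole numbers.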

2. The real reason why electrons ‘know’ when to stop radiating (i.e., when they are in the ground state) is simply that *all electrons in the universe radiate and exchange gauge boson radiation with one another. This radiation constitutes the electric fields around charges. Because they are all radiating, they achieve equilibrium, radiating as much energy as they receive when in the ground state!*

Rutherford by similar “reasoning” could have also denied Prevost’s 1792 thermodynamic discovery that hot objects forever radiate, by claiming that if this were so, the ground would freeze! Once a hot object has cooled to the same temperature as the surroundings, its temperature can’t decrease any further due to its continued emission of radiation, because it then *receives back from the surroundings radiation at the same rate that it emits radiation!*
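The thermal analogy can be made quantitative with the Stefan-Boltzmann law: emission never stops, but the *net* flow vanishes at equilibrium. A minimal sketch (idealised black body; my illustration, not Prevost’s own formulation):

```python
# Net radiated power of a body exchanging radiation with its
# surroundings (Stefan-Boltzmann law): emission continues always,
# but the net flow is zero at thermal equilibrium.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_power(T_body, T_env, area=1.0, emissivity=1.0):
    """Emitted minus absorbed power, W; zero at equilibrium."""
    return emissivity * SIGMA * area * (T_body**4 - T_env**4)

print(net_power(300.0, 290.0))  # warmer than surroundings: net loss > 0
print(net_power(290.0, 290.0))  # equilibrium: still emitting, but net = 0
```

The body at 290 K is radiating just as furiously as before; it simply receives radiation back at the same rate, which is exactly the point being made about ground-state electrons.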

Bohr used wavefunction collapse to oppose realism in nature, but Dr Thomas S. Love of California State University emailed me the following:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

Caroline H. Thompson of University of Wales, Aberystwyth, stated in http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf:

‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified.’

http://freespace.virgin.net/ch.thompson1/EPR_Progress.htm:

‘The story, as you may have realised, is that there is no evidence for any quantum weirdness: quantum entanglement of separated particles just does not happen. This means that the theoretical basis for quantum computing and encryption is null and void. …. the funding for it is being received under false pretences.’

http://freespace.virgin.net/ch.thompson1/Papers/Crasemann-CHT%20correspondence%202004.htm

Editorial policy of the American Physical Society journals (including PRL and PRA):

“This loophole hunting has no interest whatsoever in physics.”

(For more information about this, please see my earlier blog post: https://nige.wordpress.com/2009/05/10/feynman-versus-mainstream-quantum-mechanics-uncertainty-principle/.)

Copy of a comment to Arcadian Functor:

http://kea-monad.blogspot.com/2009/06/question.html

Qubit,

Should I presume that your insult is directed to my comment?

I’m sorry for being born if that helps. But [there] is no reason why gravity isn’t simple in nature, being mediated by radiation (gravitons) exchanged by gravitational charges (mass and energy, like photons and electrons).

The falling apple is forced to accelerate due to graviton exchange. Feynman’s path integral sums a lot of graviton interactions by weighting them according to their influence. Many cancel out due to geometric reasons. E.g., if equal amounts of graviton exchange with distant masses occurs to the left and right of the apple, it is not accelerated right to left. The asymmetry is vertical.

String theorists begin with the Fierz-Pauli argument that quantum gravity is due to only the apple and the earth, thus ignoring the surrounding mass of 9 × 10^21 stars, totalling 3 × 10^52 kg.

By ignoring the 3 × 10^52 kg observable mass around us and assuming that the apple only exchanges gravitons with the earth, Fierz and Pauli found that gravitons would need to be spin-2 (180 degrees rotational symmetry, so outgoing and incoming gravitons look identical):

‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’

– Conclusion of the paper by M. Fierz and W. Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, *Proc. Roy. Soc. London,* volume A173, pp. 211-232 (1939).

This is where string theory starts, building on error. What’s needed is a correct summation of graviton exchanges. I can do it geometrically using various mathematical tricks, but don’t have the time to build up an elaborate mathematical obfuscation that looks professionally impressive to mainstream physicists. It would be great if category theorists could sort out quantum gravity!
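The shadow-fraction geometry set out in the update at the top of this page can be sketched numerically. This is only a toy illustration: the function names and the values for m, a and r below are placeholders and assumptions, not fitted numbers; the only thing taken from the text is the geometry F = m*a*(1/4)*(r/R)^{2}.

```python
import math

def shadow_fraction(r, R):
    """Fraction of a particle's surface area (4*pi*r^2) shadowed by
    another particle of effective radius r at distance R >> r:
    (pi*r^4/R^2) / (4*pi*r^2) = (1/4)*(r/R)**2."""
    return 0.25 * (r / R) ** 2

def gravity_force(m, a, r, R):
    """Inward reaction force F = m*a from receding mass m with
    cosmological acceleration a, times the shadowed fraction."""
    return m * a * shadow_fraction(r, R)

# Placeholder numbers (illustrative only): m is the observable mass
# quoted above; a ~ 7e-10 m/s^2 is an assumed cosmological
# acceleration; r is an arbitrary effective graviton-scatter radius.
f1 = gravity_force(m=3e52, a=7e-10, r=1e-15, R=1.0)
f2 = gravity_force(m=3e52, a=7e-10, r=1e-15, R=2.0)
print(f2 / f1)  # 0.25: doubling the distance quarters the force
```

Whatever the absolute numbers, the (r/R)^{2} factor automatically reproduces the inverse-square law, which is the whole point of the shadow geometry.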