Quantum gravity mechanism and predictions (6 May 2009 update)


Why the rank-2 stress-energy tensor of general relativity does not imply a spin-2 graviton

“If it exists, the graviton must be massless (because the gravitational force has unlimited range) and must have a spin of 2 (because the source of gravity is the stress-energy tensor, which is a second-rank tensor, compared to electromagnetism, the source of which is the four-current, which is a first-rank tensor). To prove the existence of the graviton, physicists must be able to link the particle to the curvature of the space-time continuum and calculate the gravitational force exerted.” – False claim, Wikipedia.

Previous posts explaining why general relativity requires spin-1 gravitons, and rejects spin-2 gravitons, are linked here, here, here, here, and here. But let’s take the false claim that gravitons must be spin-2 because the stress-energy tensor is rank-2. A rank-1 tensor is a first-order (once differentiated, e.g. da/db) differential summation, such as the divergence operator (the sum of field gradients) or the curl operator (the sum of the differences between field gradients for each pair of mutually orthogonal directions in space). A rank-2 tensor is some defined summation over second-order (twice differentiated, e.g. d^2a/db^2) differential equations. The field equation of general relativity has a different structure from Maxwell’s field equations for electromagnetism: as the Wikipedia quotation above states, Maxwell’s equations of classical electromagnetism are vector calculus (rank-1 tensors or first-order differential equations), while the tensors of general relativity are second-order differential equations, rank-2 tensors.
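For reference, the standard vector-calculus forms of the two first-order operators just mentioned (these are the textbook definitions, nothing specific to the argument here) are:

```latex
\nabla \cdot \mathbf{E} = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z},
\qquad
\nabla \times \mathbf{E} = \left(
\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z},\;
\frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x},\;
\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y}
\right).
```

Each involves only first derivatives of the field components, which is the sense in which they are called first-order here; an acceleration a = d^2x/dt^2, by contrast, involves a second derivative.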

The lie, however, is that this is physically deep. It’s not. It’s purely a choice of how to express the fields conveniently. For simple electromagnetic fields, where there is no contraction of mass-energy by the field itself, you can do it easily with first-order equations, i.e. gradients. These equations express fields as first-order (rank-1) gradients, e.g. electric field strength, which is the gradient of potential with distance, measured in volts/metre. Maxwell’s equations don’t directly represent accelerations (second-order, rank-2 equations would be needed for that). For gravitational fields, you have to work with accelerations, because the gravitational field contracts the source of the gravitational field itself, so gravitation is more complicated than electromagnetism.

The people who promote the lie that because rank-1 tensors apply to spin-1 field quanta in electromagnetism, rank-2 tensors must imply spin-2 gravitons, offer no evidence for this assertion. It’s arm-waving lying. It’s true that you need rank-2 tensors in general relativity, but it is not necessary in principle to use rank-1 tensors in electromagnetism: it’s merely easiest to use the simplest mathematical method available. You could in principle use rank-2 tensors to rebuild electromagnetism, by writing the equations in terms of observable accelerations instead of unobservable rank-1 electric and magnetic fields. Nobody has ever seen an electric field: only accelerations and forces caused by charges. (Likewise for magnetic fields.)

There is no physical correlation between the rank of the tensor and the spin of the gauge boson. It’s a purely historical accident that rank-1 tensors (vector calculus, first-order differential equations) are used to model fictitious electric and magnetic “fields”. We don’t directly observe electric field lines or electric charges (nobody has seen the charged core of an electron; what we see are effects of forces and accelerations, which can merely be described in terms of field lines and charges). We observe accelerations and forces. The field lines and charges are not directly observed. The mathematical framework for a description of the relationship between the source of a field and the end result depends on the definition of the end result. In Maxwell’s equations, the end result of an electric charge which is not moving relative to the observer is a first-order field, defined in volts/metre. If you convert this first-order differential field into an observable effect, like force or acceleration, you get a second-order differential equation, acceleration a = d^2x/dt^2. General relativity doesn’t describe gravity in terms of a first-order field like Maxwell’s equations do, but instead describes gravitation in terms of a second-order observable, i.e. spacetime curvature producing an acceleration, a = d^2x/dt^2.
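As a trivial numerical illustration of this point, here is a minimal sketch (in Python; the charge, mass and distance values are arbitrary placeholders, not taken from anywhere in this post) of the same Coulomb interaction written both ways: once as a first-order field strength in volts/metre, and once as the directly observable acceleration.

```python
# Minimal sketch: the same Coulomb interaction written two ways.
# All numerical values are arbitrary placeholders for illustration only.

K = 8.9875e9    # Coulomb constant, N m^2 / C^2
Q = 1.0e-9      # source charge, C (placeholder)
q = 1.602e-19   # test charge, C (electron charge magnitude)
m = 9.109e-31   # test particle mass, kg (electron mass)
r = 0.01        # separation, m (placeholder)

# Faraday/Maxwell-style description: a field strength in volts/metre.
E = K * Q / r**2   # V/m (the first-order, "field gradient" description)

# Observable description: the acceleration a = d^2x/dt^2 of the test charge.
F = q * E          # force, newtons
a = F / m          # m/s^2 (the second-order, directly observable description)

print(f"Field-strength description: E = {E:.3e} V/m")
print(f"Acceleration description:   a = {a:.3e} m/s^2")
```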

So the distinction between rank-1 and rank-2 tensors in electromagnetism and general relativity is not physically deep: it’s a matter of human decisions on how to represent electromagnetism and gravitation.

In Maxwell’s equations we choose to represent not second-order accelerations but Michael Faraday’s imaginary concept of a pictorial field: radiating and curving “field lines”, represented by first-order field gradients and curls. In Einstein’s general relativity, by contrast, we don’t represent gravity by such a half-baked, unobservable field concept, but in terms of directly observable accelerations.

Like first-quantization (undergraduate quantum mechanics) lies, the “spin-2” graviton deception is a brilliant example of historical physically-ignorant mathematical obfuscation in action, leading to groupthink delusions in theoretical physics. (Anyone who criticises the lie is treated with a degree of delusional, paranoid hostility similar to that directed at dissenters in evil dictatorships. Instead of examining the evidence and seeking to correct the problem – which in the case of an evil dictatorship is obviously a big challenge – the messenger is inevitably shot or the “message” is “peacefully” deleted from the arXiv, reminiscent of the scene from Planet of the Apes where Dr Zaius – serving a dual role as Minister of Science and Chief Defender of the Faith – has to erase the words written in the sand which would undermine his religion and social tea-party of lying beliefs. In this analogy, the censors of the arXiv or journals like Classical and Quantum Gravity are not defending objective science, but are instead defending subjective pseudo-science – the groupthink orthodoxy which masquerades as science – from being exposed as a fraud.)

Dissimilarities in the tensor ranks used to describe two different fields originate from dissimilarities in the field definitions for those two different fields, not from the spin of the field quanta. Any gauge field whose field is written as a second-order differential equation, e.g. an acceleration, can be classically approximated by a rank-2 tensor equation. Comparing Maxwell’s equations, in which fields are expressed in terms of first-order gradients like electric fields (volts/metre), with general relativity, in which fields are accelerations or curvatures, is comparing chalk and cheese. They are not just different units, but have different purposes. For a summary of textbook treatments of curvature tensors, see Dr Kevin Aylward’s General Relativity for Teletubbys: “the fundamental point of the Riemann tensor [the Ricci curvature tensor in the field equation of general relativity is simply a cut-down, rank-2 version of the Riemann tensor: the Ricci curvature tensor is R_ab = R^x_axb, where R^x_axb is the Riemann tensor contracted over the repeated index x], as far as general relativity is concerned, is that it describes the acceleration of geodesics with respect to one another. … I am led to believe that many people don’t have a … clue what’s going on, although they can apply the formulas in a sleepwalking sense. … The Riemann curvature tensor is what tells one what that acceleration between the [particles] will be. This is expressed by …

[Beware of some errors in the physical understanding of some of these general relativity internet sites, however. E.g., some suggest – following a popular 1950s book on relativity – that the inverse-square law is discredited by general relativity, because the relativistic motion of Mercury around the sun can be approximated within Newton’s framework by increasing the inverse-square law power slightly from 1/R^2 to 1/R^(2 + X), where X is a small fraction, so that the force appears to get stronger nearer the sun. This is fictitious and is just an approximation to roughly accommodate relativistic effects that Newton ignored, e.g. the small increase in planetary mass due to its higher velocity when the planet is nearer the sun on part of its elliptical orbit than when it is moving slower far from the sun. This isn’t a physically correct model; it’s just a back-of-the-envelope fudge. A physically correct version of planetary motion in the Newtonian framework would keep the geometric inverse-square law and would then correctly modify the force by making the right changes for the relativistic mass variation with velocity. Ptolemy’s epicycles demonstrated the danger of constructing approximate mathematical models which have no physical validity but which then become fashionable.]”

Maxwell’s theory, based on Faraday’s field lines concept, employs only rank-1 equations. For example, the divergence of the electric field strength, E, is directly proportional to the charge density, q (charge density is here defined as the charge per unit surface area, not the charge per unit volume): div.E ~ q. The reason this is a rank-1 equation is simply that the divergence operator is the sum of the gradients of the operand in all three perpendicular directions of space. All it says is that a unit charge contributes a fixed number of diverging radial lines of electric field, so the total field is directly proportional to the total charge.
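A small numerical sketch of that last statement (the charge and radii below are arbitrary placeholders): for an inverse-square radial field, the total flux through any sphere drawn around the charge is the same whatever the radius, and is directly proportional to the charge.

```python
import math

# Sketch: for an inverse-square radial field, the flux through any sphere
# centred on the charge is constant and proportional to the enclosed charge.
# The charge value and radii are arbitrary placeholders.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
q = 2.5e-9         # enclosed charge, C (placeholder)

def radial_E(r):
    """Coulomb (inverse-square) field magnitude at radius r."""
    return q / (4 * math.pi * EPS0 * r**2)

for r in (0.1, 1.0, 10.0):                    # sphere radii, metres
    flux = radial_E(r) * 4 * math.pi * r**2   # field strength times sphere area
    print(f"r = {r:5.1f} m   flux = {flux:.4e} V m   q/EPS0 = {q / EPS0:.4e}")
```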

But this is just Faraday’s way of visualizing the way the electric force operates! Remember that nobody has yet seen or reported detecting an “electric field line” of force! With our electric meters, iron filings, and compasses we only see the results of forces and accelerations, so the number and locations of electric or magnetic field lines depicted in textbook diagrams are due to purely arbitrary conventions. It’s merely an abstract aetherial legacy from the Faraday-Maxwell era, not a physical reality that has any experimental evidence behind it. If you are going to confuse Faraday’s and Maxwell’s imaginary concept of field “lines” with experimentally defensible reality, you might as well write down an equation in which the invisible intermediary between charge and force is an angel, a UFO, a fairy or an elephant in an imaginary extra dimension. Quantum field theory tells us that there are no physical lines. Instead of Maxwell’s “physical lines of force”, we have known since QED was verified that there are field quanta being exchanged between charges.

So if we get rid of our ad hoc prejudices, getting rid of “electric field strength, E” in volts/metre and just expressing the result of the electric force in terms of what we can actually measure, namely accelerations and forces, we’d have a rank-2 tensor, basically the same field equation as is used in general relativity for gravity. The only differences would be the factor of ~10^40 difference between the field strengths of electromagnetism and gravity, the differences in the signs for the curvatures (like charges repel in electromagnetism, but attract in gravity) and the absence of the contraction term that makes the gravitational field contract the source of the field, but supposedly does not exist in electromagnetism. The tensor rank will be 2 for both cases, thus disproving the arm-waving yet popular idea that the rank number may be correlated to the field quanta spin. In other words, the electric field could be modelled by a rank-2 equation if we simply make the electric field consistent with the gravitational field by expressing both fields in terms of accelerations, instead of using the gradient of the Faraday-legacy volts/metre “field strength” for the electric field. This is however beyond the understanding of the mainstream, who are deluded by fashion and historical ad hoc conventions. Most of the problems in understanding quantum field theory and unifying Standard Model fields with gravitational fields result from the legacy of field definitions used in Maxwellian and Yang-Mills fields, which for purely ad hoc historical reasons are different from the field definition in general relativity. If all fields are expressed in the same way as accelerative curvatures, all field equations become rank-2 and all rank-1 divergences automatically disappear, since they are merely a historical legacy of the Faraday-Maxwell volts/metre field “line” concept, which isn’t consistent with the concept of acceleration due to curvature in general relativity!

However, we’re not advocating the use of any particular differential equations for any quantum fields, because discontinuous quantized fields can’t in principle be correctly modelled by differential equations, which is why you can’t properly represent the source of gravity in general relativity as being a set of discontinuities (particles) in space to predict curvature, but must instead use a physically false averaged distribution such as a “perfect fluid” to represent the source of the field. The rank-2 framework of general relativity has relatively few easily obtainable solutions compared to the simpler rank-1 (vector calculus) framework of electrodynamics. But both classical fields are false in ignoring the random field quanta responsible for quantum chaos (see, for instance, the discussion of first-quantization versus second-quantization in the previous post here, here and here).

Summary:

1. The electric field is defined, following Michael Faraday, as simply a field gradient measured in volts/metre, which Maxwell correctly models with a first-order differential equation, leading to a rank-1 tensor equation (vector calculus). Hence, electromagnetism with spin-1 field quanta has a rank-1 tensor purely because of the way it is formulated. Nobody has ever seen Faraday’s electric field, only accelerations/forces. There is no physical basis for electromagnetism being intrinsically rank-1; it’s just one way to mathematically model it, by describing it in terms of Faraday’s rank-1 fields rather than the directly observable rank-2 accelerations and forces which we see/feel.

2. The gravitational field has historically never been expressed in terms of a Faraday-type rank-1 field gradient. Due to Newton, who was less pictorial than Faraday, gravity has always been described and modelled directly in terms of the end result, i.e. accelerations/forces we see/feel.

This difference between the human formulations of the electromagnetic and gravitational “fields” is the sole reason for the fact that the former is currently expressed with a rank-1 tensor and the latter is expressed with a rank-2 tensor. If Newton had worked on electromagnetism instead of aether crackpots like Maxwell, we would undoubtedly have a rank-2 mathematical model of electromagnetism, in which electric fields are expressed not in volts/metre, but directly in terms of rank-2 acceleration (curvatures), just like general relativity.

Both electromagnetism and gravitation should define fields the same way, with rank-2 curvatures. The discrepancy that electromagnetism instead uses rank-1 tensors is due to the inconsistency that in electromagnetism fields are not defined in terms of curvatures (accelerations) but in terms of Faraday’s imaginary abstraction of field lines. This has nothing whatsoever to do with particle spin. Rank-1 tensors are used in Maxwell’s equations because the electromagnetic fields are defined (inconsistently with gravity) in terms of rank-1 unobservable field gradients, whereas rank-2 tensors are used in general relativity purely because the definition of a field in general relativity is acceleration, which requires a rank-2 tensor to describe it. The difference is purely down to the way the field is described, not the spin of the field.

The physical basis for rank-2 tensors in general relativity

I’m going to rewrite the paper linked here when time permits.

Groupthink delusions

The real reason why gravitons supposedly “must” be spin-2 is due to the mainstream investment of energy and time in worthless string theory, which is designed to permit the existence of spin-2 gravitons. We know this because whenever the errors in spin-2 gravitons are pointed out, they are ignored. These stringy people aren’t interested in physics, just grandiose fashionable speculations, which is the story of Ptolemy’s epicycles, Maxwell’s aether, Kelvin’s vortex atom, Piltdown Man, S-matrices, UFOs, Marxism, fascism, etc. All were very fashionable with bigots in their day, but:

“… reality must take precedence over public relations, for nature cannot be fooled.” – Feynman’s Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.

Update (12 January 2010): around us, the accelerating mass of the universe causes an outward force that can be calculated by Newton’s 2nd law, which in turn gives an equal inward reaction force by Newton’s 3rd law. The fraction of that inward force which causes gravity is simply equal to the fraction of the effective surface area of the particle which is shadowed by relatively nearby, non-accelerating masses. If the distance R between the two particles is much larger than their effective radii r for graviton scatter (exchange), then by geometry the area of the shadow cast on surface area 4*Pi*r^2 by the other fundamental particle is Pi*r^4/R^2, so the fraction of the total surface area of the particle which is shadowed is (Pi*r^4/R^2)/(4*Pi*r^2) = (1/4)(r/R)^2. This fraction merely has to be multiplied by the inward force generated by distant mass m undergoing radial outward observed cosmological acceleration a, i.e. force F = ma, in order to predict the gravitational force (a numerical sketch of this shadow-fraction arithmetic is given below, after the quotations). This is not the same thing as Le Sage’s non-factual, non-predictive gas shadowing (which is to quantum gravity what Lamarck’s theory was to Darwin’s evolution, or what Aristotle’s laws of motion were to Newton’s, i.e. mainly wrong). In other words, the source of gravity and dark energy is the same thing: spin-1 vacuum radiation. Spin-2 gravitons are a red herring, originating from a calculation which assumed falsely that gravitons either would not be exchanged with distant masses, or that any effect would somehow cancel out or be negligible. Woit states:

“Many of the most well-known theorists are pursuing research programs with the remarkable features that:

“•You don’t need to have any idea what the fundamental degrees of freedom are.
“•You don’t need any fundamental dynamical laws either.
“•You can do everything with high school mathematics.”

Although making the most basic quantum gravity predictions can be done with “high school mathematics”, the deeper gauge symmetry connection of quantum gravity to the Standard Model of particle physics does require more advanced mathematics, as does the job of deriving a classical approximation (i.e. a corrected general relativity for cosmology) to this quantum gravity theory, for more detailed checks and predictions. When Herman Kahn was asked, at the 1959 congressional hearings on nuclear war, whether he agreed with the Atomic Energy Commission name of “Sunshine Unit” for strontium-90 levels in human bones, he replied that although he engaged in a number of controversies, he tried to keep the number down. He wouldn’t get involved. Doubtless, Woit has the same policy towards graviton spin. What I’m basically saying is that the fundamental particle here is the one causing cosmological repulsion, which has spin-1. This causes gravity as a “spin-off” (no pun intended). So if spin-1 gravitons are hard to swallow, simply name them instead spin-1 dark energy particles! Whatever makes the facts easier to digest…
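The ‘high school mathematics’ referred to is essentially the shadow-fraction geometry in the 12 January 2010 update above. Here is a minimal sketch of that arithmetic; every numerical value below is an arbitrary placeholder, not a figure used elsewhere in this post for the quantitative prediction.

```python
import math

def shadow_fraction(r, R):
    """Fraction of a particle's surface area 4*pi*r^2 shadowed by a second
    particle of effective radius r at distance R, i.e.
    (pi*r^4/R^2) / (4*pi*r^2) = (1/4)*(r/R)^2."""
    return 0.25 * (r / R) ** 2

# Placeholder values only, to show the arithmetic:
r = 1.0e-15    # effective graviton-scatter radius of a fundamental particle, m
R = 1.0        # distance between the two nearby particles, m
m = 3.0e52     # distant receding mass, kg
a = 7.0e-10    # observed outward cosmological acceleration, m/s^2

F_inward  = m * a                              # Newton's 2nd/3rd law reaction force
F_gravity = F_inward * shadow_fraction(r, R)   # the shadowed fraction of that force

print(f"shadow fraction = {shadow_fraction(r, R):.3e}")
print(f"gravitational force (placeholder inputs) = {F_gravity:.3e} N")
```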
[Figure: area-shielding quantum gravity mechanism illustration.]
Above: the latest illustration (updated 27 September 2009) which has replaced the older illustration included in the post below. Improvements have been made.

[Figure: gravity mechanism.]

String ‘theory’ (abject uncheckable speculation) combines a non-experimentally justifiable speculation about forces unifying at the Planck scale, with another non-experimentally justifiable speculation that gravity is mediated by spin-2 particles which are only exchanged between the two masses in your calculation, and somehow avoid exchanging with the way bigger masses in the surrounding universe.

When you include in your path integral the fact that exchange gravitons coming from distant masses will be converging inwards towards an apple and the earth, it turns out that this exchange radiation with distant masses actually predominates over the local exchange and pushes the apple down to the earth, so it is easily proved that gravitons are spin-1 not spin-2.

The proof below also makes checkable predictions and tells us exactly how quantum gravity fits into the electroweak symmetry of the Standard Model alongside the other long range force at low energy, electromagnetism, thus altering the usual interpretation of the Standard Model symmetry groups and radically changing the nature of electroweak symmetry breaking from the usual poorly predictive mainstream Higgs field.

The entire mainstream modern physics bandwagon has ignored Feynman’s case for simplicity and understanding what is known for sure, and has gone off in the other direction (magical unexplainable religion) and built up a 10 dimensional superstring model whose conveniently ‘explained’ Calabi-Yau compactification of the unseen 6 dimensions can take 10^500 different forms (conveniently explained away as a ‘landscape’ of unobservable parallel universes, from which ours is picked out using the anthropic principle that because we exist, the values of fundamental parameters we observe must be such that they allow our existence).

Professor Richard P. Feynman’s paper ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, volume 20, page 367 (1948), makes it clear that his path integrals are an explicit reformulation of quantum mechanics – albeit a censored one – not merely an extension to sweep away infinities in quantum field theory!

Richard P. Feynman explains in his book, QED, Penguin, 1990, pp. 55-6, and 84:

‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

Take the case of simple exponential decay: the mathematical exponential decay law predicts that the dose rate never reaches zero, so the effective dose rate for exposure to an exponentially decaying source needs clarification: taking an infinite exposure time will obviously underestimate the dose rate regardless of the total dose, because any dose divided by an infinite exposure time will give a false dose rate of zero. Part of the problem here is that the exponential decay curve is false: it is based on calculus for continuous variations, and doesn’t apply to radioactive decay, which isn’t continuous but is a discrete phenomenon. This mathematical failure undermines the interpretation of real events in quantum mechanics and quantum field theory, because discrete quantized fields are being falsely approximated by the use of the calculus, which ignores the discontinuous (lumpy) changes which actually occur in quantum field phenomena, e.g., as Dr Thomas Love of California State University points out, the ‘wavefunction collapse’ in quantum mechanics when a radioactive decay occurs is a mathematical discontinuity due to the use of continuously varying differential field equations to represent a discrete (discontinuous) transition!

[‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’ – Dr Thomas Love, Departments of Physics and Mathematics, California State University, ‘Towards an Einsteinian Quantum Theory’, preprint emailed to me.]

Alpha radioactive decay occurs when an alpha particle undergoes quantum tunnelling to escape from the nucleus through a ‘field barrier’ which should confine it perfectly, according to classical physics. But as Professor Bridgman explains, the classical field law falsely predicts a definite sharp limit on the distance of approach of charged particles, which is not observed in reality (in the real world, there is a more gradual decrease). The explanation for alpha decay and ‘quantum tunnelling’ is not that the mathematical laws are perfect and nature is ‘magical and beyond understanding’, but simply that the differential field law is just a statistical approximation and wrong at the fundamental level: electromagnetic forces are not continuous and steady on small scales, but are due to chaotic, random exchange radiation, which only averages out and approaches the mathematical ‘law’ over long distances or long times. Forces are actually produced by lots of little particles, quanta, being exchanged between charges.

On large scales, the effect of all these little particles averages out to appear like Coulomb’s simple law, just as on large scales, air pressure can appear steady, when in fact on small scales it is a random bombardment of air molecules which causes Brownian motion. On small scales, such as the distance between an alpha particle and other particles in the nucleus, the forces are not steady but fluctuate as the field quanta are randomly and chaotically exchanged between the nucleons. Sometimes the field is stronger and sometimes weaker than the potential predicted by the mathematical law. When the field confining the alpha particle is weaker, the alpha particle may be able to escape, so there is no magic to ‘quantum tunnelling’. Therefore, radioactive decay only follows the smooth exponential decay law as a statistical approximation for large numbers of decays. In general the exponential decay law is false, and for a nuclide of short half-life all the radioactive atoms decay after a finite time; the prediction of that ‘law’ that radioactivity continues forever is false.
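To illustrate the statistical point, here is a minimal Monte Carlo sketch (the per-step decay probability and sample size are arbitrary placeholders): a finite discrete sample of atoms always reaches exactly zero after a finite time, whereas the continuous exponential curve only approaches zero asymptotically.

```python
import math
import random

# Sketch: discrete decay of a finite sample versus the continuous exponential
# law.  The decay probability per step (p) and sample size (N0) are arbitrary
# placeholders; the point is only that the discrete sample reaches exactly
# zero in finite time while the continuous curve never does.
random.seed(1)
p, N0 = 0.05, 1000

n, t = N0, 0
while n > 0:
    n -= sum(1 for _ in range(n) if random.random() < p)   # decays this step
    t += 1

print(f"discrete sample: all {N0} atoms gone after {t} time steps (finite)")

for steps in (t, 10 * t, 100 * t):
    remaining = N0 * math.exp(-p * steps)   # the continuous 'law' never reaches zero
    print(f"continuous law at t = {steps:6d} steps: N = {remaining:.3e}")
```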

There is a stunning lesson from human ‘groupthink’ arrogance today that Feynman’s fact-based physics is still censored out by mainstream string theory, despite the success of path integrals based on this field quanta interference mechanism!

Regarding string theory, Feynman said in 1988:

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’

– Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195.

Regarding reality, he said:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Mathematical physicist Dr Peter Woit at Columbia University mathematics department has written a blog post reviewing a new book about Dirac, the discoverer of the Dirac equation, a relativistic wave equation which lies at the heart of quantum field theory (the Schroedinger equation of quantum mechanics is a good approximation for some low energy physics, but is not valid for relativistic situations, i.e. it doesn’t ensure the field moves with the velocity of light while conserving mass-energy, so it is not a true basis for quantum field descriptions; additionally, in quantum field theory but not in the mathematics of quantum mechanics, pair-production occurs, i.e. “loops” in spacetime on Feynman diagrams, caused by particles and antiparticles briefly gaining energy to free themselves from the normally unobservable ground state of the vacuum of space or Dirac sea, before they annihilate and disappear again, analogous to steam temporarily evaporating from the ocean to create visible clouds which condense into droplets of rain and disappear again, returning back to the sea), http://www.math.columbia.edu/~woit/wordpress/?p=1904 where he writes:

‘As it become harder and harder to get experimental data relevant to the questions we want to answer, the guiding principle of pursuing mathematical beauty becomes more important. It’s quite unfortunate that this kind of pursuit is becoming discredited by string theory, with its claims of seeing “mathematical beauty” when what is really there is mathematical ugliness and scientific failure.’

In 1930 Dirac wrote:

‘The only object of theoretical physics is to calculate results that can be compared with experiment.’

– Paul A. M. Dirac, The Principles of Quantum Mechanics, 1930, page 7.

But he changed slightly in his later years and on 7 May 1963 Dirac actually told Thomas Kuhn during an interview:

‘It is more important to have beauty in one’s equations, than to have them fit experiment.’

– Dirac, ‘The Evolution of the Physicist’s Picture of Nature’, Scientific American, May 1963, 208, 47.

Other guys stuck to their guns:

‘… nature has a simplicity and therefore a great beauty.’

– Richard P. Feynman (The Character of Physical law, p. 173)

‘The beauty in the laws of physics is the fantastic simplicity that they have … What is the ultimate mathematical machinery behind it all? That’s surely the most beautiful of all.’

– John A. Wheeler (quoted by Paul Buckley and F. David Peat, Glimpsing Reality, 1971, p. 60)

‘If nature leads us to mathematical forms of great simplicity and beauty … we cannot help thinking they are true, that they reveal a genuine feature of nature.’

– Werner Heisenberg (http://www.ias.ac.in/jarch/jaa/5/3-11.pdf, page 2 here)

‘A theory is the more impressive the greater the simplicity of its premises is, the more different kinds of things it relates, and the more extended is its area of applicability.’

– Albert Einstein (in Paul Arthur Schilpp’s Albert Einstein: Autobiographical Notes, p. 31)

‘My work always tried to unite the true with the beautiful; but when I had to choose one or the other, I usually chose the beautiful.’
– Hermann Weyl (http://www.ias.ac.in/jarch/jaa/5/3-11.pdf, page 2 here)

Now in a new blog post, ‘The Only Game in Town’, http://www.math.columbia.edu/~woit/wordpress/?p=1917, Dr Woit quotes The First Three Minutes author and Nobel Laureate Steven Weinberg continuing to depressingly hype string theory using the poorest logic imaginable to New Scientist:

“It has the best chance of anything we know to be right,” Weinberg says of string theory. “There’s an old joke about a gambler playing a game of poker,” he adds. “His friend says, ‘Don’t you know this game is crooked, and you are bound to lose?’ The gambler says, ‘Yes, but what can I do, it’s the only game in town.’ We don’t know if we are bound to lose, but even if we suspect we may, it is the only game in town.”

Dr Woit then writes in response to a comment by Dr Thomas S. Love of California State University, asking Woit if he plans to write a sequel to his book Not Even Wrong:

‘Someday I would like to write a technical book on representation theory, QM and QFT, but that project is also a long ways off right now.’

– Peter Woit, May 5, 2009 at 12:48 pm, http://www.math.columbia.edu/~woit/wordpress/?p=1917&cpage=1#comment-48177

Well, I need such a book now, but I’m having to make do with the information currently available in lecture notes and published quantum field theory textbooks.

It may be interesting to compare the post below to the physically very impoverished situation five years ago when I wrote http://cdsweb.cern.ch/record/706468?ln=en which contains the basic ideas, but with various trivial errors and without the rigorous proofs, predictions, applications and diagrams which have since been developed. Note that Greek symbols in the text display in Internet Explorer with symbol fonts loaded but do not display in Firefox:

1. Masses that attract due to gravity are in fact surrounded by an isotropic distribution of distant receding masses in all directions (clusters of galaxies), so they must exchange gravitons with those distant masses as well as with nearby masses (a fact ignored by the flawed mainstream path-integral extensions of the Fierz-Pauli argument that gravitons must have spin-2 in order for ‘like’ gravitational charges to attract rather than repel, as like electric charges of course do; see for instance pages 33-34 of Zee’s 2003 quantum field theory textbook).

2. Because the isotropically distributed distant masses are receding with a cosmological acceleration, they have a radial outward force, which by Newton’s 2nd law is F = ma, and which by Newton’s 3rd law implies an equal inward-directed reaction force, F = -ma.

3. The inward-directed force, from the possibilities known in the Standard Model of particle physics and quantum gravity considerations, is carried by gravitons:

[Figure: quantum gravity mechanism diagram.]
R in the diagram above is the distance to distant receding galaxy clusters of mass m. The distribution of matter around us in the universe can simply be treated as a series of shells of receding mass at progressively larger distances R, and the sum of contributions from all the shells gives the total inward graviton-delivered force on masses.
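Purely as a schematic of the shell summation just described (not a reproduction of any detailed calculation: the uniform density, constant acceleration and cutoff radius below are crude placeholders), the sum can be sketched as:

```python
import math

# Schematic shell summation: treat the surrounding receding matter as thin
# shells at radius R, each contributing an inward reaction force F = m_shell * a.
# Density, acceleration and cutoff are placeholder values only.
rho   = 9.0e-27    # assumed mean density of receding matter, kg/m^3
a     = 7.0e-10    # assumed outward cosmological acceleration, m/s^2
R_max = 1.0e26     # assumed cutoff radius, m
n_shells = 1000

dR = R_max / n_shells
F_inward = 0.0
for i in range(n_shells):
    R = (i + 0.5) * dR
    m_shell = rho * 4 * math.pi * R**2 * dR   # mass of the thin shell at radius R
    F_inward += m_shell * a                   # its inward contribution, F = m a

print(f"total inward reaction force from all shells ~ {F_inward:.3e} N")
```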

This works for spin-1 gravitons, because:

a. the gravitons coming to you from distant masses (ignored completely by speculative spin-2 graviton hype) are radially converging upon you (not diverging), and

b. the distant masses are immense in size (clusters of galaxies) compared to local masses like the planet earth, the sun or the galaxy.

Consequently, the flux from distant masses is way, way stronger than from nearby masses; so the path integral of all spin-1 gravitons from distant masses reduces to the simple geometry illustrated above and will cause ‘attraction’ or push you down to the earth by shadowing (the repulsion between two nearby masses from spin-1 graviton exchange is trivial compared to the force pushing them together).

Above: an analogous effect, well demonstrated experimentally (the experimental data now match the theory to within 15%), is the Casimir force, where the virtual quantum field bosons of the vacuum push two flat metal surfaces together (‘attractive’ force) if they get close enough. The metal plates ‘attract’ because their reflective surfaces exclude virtual photons of wavelengths longer than the separation distance between the plates (the same happens with a waveguide, a metal box-like tube which is used to pipe microwaves from the source magnetron to the disc antenna; a waveguide only carries wavelengths smaller than the width and breadth of the metal box!). The exclusion of long wavelengths of virtual radiation from the space between the metal plates in the Casimir effect reduces the energy density between the plates compared with that outside, so that – just like external air pressure collapsing a slightly evacuated petrol can in the classic high school demonstration (where you put some water in a can and heat it so the can fills with steam with the cap off, then put the cap on and allow the can to cool so that the water vapour in it condenses, leaving a partial vacuum in the can) – the Casimir force pushes the plates toward one another. From this particular example of a virtual particle mediated force, note that virtual particles do have specific wavelengths! This is especially important for redshift considerations when force-causing virtual particles (gauge bosons) are exchanged between receding matter in the expanding universe. ‘String theorists’ like Dr Lubos Motl have ignorantly stated to me that virtual particles can’t be redshifted, claiming that they don’t have any particular frequency or wavelength. [Illustration credit: Wikipedia.]
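For reference, the standard textbook expression for the Casimir force per unit area between two parallel, perfectly conducting plates a distance d apart (quoted here for orientation, not derived in this post) is:

```latex
\frac{F}{A} = -\,\frac{\pi^{2}\hbar c}{240\, d^{4}},
```

the negative sign indicating attraction, and the 1/d^4 dependence being what the experiments mentioned above test.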

Above: incorporation of quantum gravity, mass (without a Higgs field) and charged electromagnetic gauge bosons into the Standard Model. Normally U(1) is weak hypercharge. The key point about the new model is that (as we will prove in detail below) the Yang-Mills equation applies to electromagnetism if the gauge bosons are charged, but is restricted to Maxwellian force interactions for massless charged gauge bosons due to the magnetic self-inductance of such massless charges. Magnetic self-inductance requires that charged massless gauge bosons must simultaneously be transferring as much charge from charge A to charge B as from B to A. In other words, only an exact equilibrium of exchanged charge per second in both directions between two charges is permitted, which prevents the Yang-Mills equation from changing the charge of a fermion: it is restricted to Maxwellian field behaviour and can only change the motion (kinetic energy) of a fermion. In other words, it can deliver net energy but not net charge. So it prevents the massless charged gauge bosons from transferring any net charge (they can only propagate in equal measure in both directions between charges so that the curling magnetic fields cancel, an equilibrium which can deliver field energy but cannot deliver field charge). Therefore, the Yang-Mills equation reduces for massless charged gauge bosons to the Maxwell equations, as we will prove later on in this blog post. The advantages of this model are many for the Standard Model, because it means that U(1) now does the job of quantum gravity and the ‘Higgs mechanism’, which are both speculative (untested) additions to the Standard Model, but does it in a better way, leading to predictions that can be checked experimentally.

In the case of electromagnetism, like charges repel due to spin-1 virtual photon exchange, because the distant matter in the universe is electrically neutral (equal amounts of charge of positive and negative sign at great distances cancel). This is not the case for quantum gravity, because the distant masses have the same gravitational charge sign, say positive, as nearby masses (there is only one observed sign for all gravitational charges!). Hence, nearby like gravitational charges are pushed together by gravitons from distant masses, while nearby like electric charges are pushed apart by exchange spin-1 photons with one another but not significantly by virtual photon exchanges with distant matter (due to that matter being electrically neutral).

The new model has particular advantages for electromagnetism and leads to quantitative predictions of the masses of particles and of the force coupling strengths for the various interactions. E.g., as shown in previous posts, a random walk of charged electromagnetic gauge bosons between similar charges randomly scattered around the universe gives a path integral with a total force coupling that is 10^40 times that from quantum gravity, and so it predicts quantitatively how much stronger electromagnetism is than gravity.

[Figure: quantized mass model.]

Above: If a discrete number of fixed-mass gravitational charges clump around each fermion core, ‘miring it’ like treacle, you can predict all lepton and hadron inertial and gravitational masses. The gravitational charges have inertia because they are exchanging gravitons with all other masses around the universe, which physically holds them where they are (if they move, they encounter extra pressure from graviton exchange in the direction of their motion, which causes contraction, requiring energy; hence resistance to acceleration, which is just Newton’s 1st law, inertia). The illustration of a miring particle mass model shows a discrete number of 91 GeV mass particles surrounding the IR cutoff outer edge of the polarized vacuum around a fundamental particle core, giving mass. A PDF table comparing crude model mass estimates to observed masses of particles is linked here. There is evidence for the quantization of mass from the way the mathematics work for spin-1 quantum gravity. If you treat two masses being pushed together by spin-1 graviton exchanges with the isotropically distributed mass of the universe accelerating radially away from them (viewed in their reference frame), you get the expected correct prediction of gravity as illustrated here. But if you do the same spin-1 quantum gravity analysis but only consider one mass and try to work out the acceleration field around it, as illustrated here, you get (using the empirically defensible black hole event horizon radius to calculate the graviton scatter cross-section) a prediction that gravitational force is proportional to mass^2, which suggests all particle masses are built up from a single fixed-size building block of mass. The identification of the number of mass particles for each fermion (fundamental particle) in the illustration and in the table here is by analogy with nuclear magic numbers: in the shell model of the nucleus the exceptional stability of nuclei containing 2, 8, 20, 50 or 82 protons or 2, 8, 20, 50, 82 or 126 neutrons (or both), which are called ‘magic numbers’, is explained by the fact that these numbers represent successive completed (closed) shells of nucleons, by analogy to the shell structure of electrons in the atom. (Each nucleon has a set of four quantum numbers and obeys the exclusion principle in the nuclear structure, like electrons in the atom; the difference being that for orbital electrons there is generally no interaction of the orbital angular momentum and the spin angular momentum, whereas such an interaction does occur for nucleons in the nucleus.) Geometric factors like twice Pi appear to be obtained from spin considerations, as discussed in earlier blog posts, and they are common in quantum field theory. E.g., Schwinger’s correction factor for Dirac’s magnetic moment of the electron is 1 + (alpha)/(2*Pi).

[Figure: electromagnetic force mechanism.]

Above: the mechanism for electromagnetic forces explains physically how those force interactions occur by the exchange of gauge bosons (without invoking the magical ‘4-polarization photon’), allowing the understanding of fields and resolving anomalies in electromagnetism. The first evidence that the gauge bosons of electromagnetism are charged was the transmission line of electricity: both conductors propagate a charged field at the velocity of light for the surrounding insulator, as demonstrated on the blog page here. So in addition to the new quantum gravity model’s ability to predict masses of fundamental particles and the coupling constant for gravity, it also deals with electromagnetism properly, showing why its force strength differs from gravity so much (electromagnetism theoretically involves a random walk between all charges in the universe, which makes it stronger than gravity by the square root of the number of fermions, (10^80)^(1/2) = 10^40).

[Figure: random walk of charged gauge bosons between charges in the universe.]

Above: a random walk of charged massless electromagnetic gauge bosons between N = 10^80 fermions in the universe would create a force that adds up to the square root of that number (N^(1/2) = 10^40) multiplied by the force between just two particles, explaining why the force we calculate for quantum gravity between just two particles is smaller by a factor of 10^40 than the force of electromagnetism! In other words, electromagnetism is much stronger than gravity because its path integral includes additive contributions from all the charges in the universe (albeit with gross inefficiency due to the vector directions not adding up coherently, i.e. the random walk summation gives an effective sum of only 10^40 not 10^80 times gravity), whereas gravity carried by uncharged spin-1 gauge bosons does not involve force-enhancing contributions from all the mass in the universe but just the line-of-sight shadowing of one mass by another.
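The square-root factor invoked here is just the standard statistics of a random walk; a quick Monte Carlo sketch (with step counts that are tiny placeholders, nothing like 10^80) shows the scaling:

```python
import math
import random

# Sketch: the resultant of N randomly-directed unit steps grows like sqrt(N),
# not like N.  The values of N are tiny placeholders.
random.seed(2)

def mean_resultant(N, trials=200):
    """Average length of the vector sum of N random unit steps in the plane."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(N):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / trials

for N in (100, 400, 1600):
    print(f"N = {N:5d}   mean resultant = {mean_resultant(N):7.1f}   sqrt(N) = {math.sqrt(N):6.1f}")
```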

[Figure: real and virtual photons.]

Above: photons, real and virtual, compared to Maxwell’s photon illustration. The Maxwell model photon is always drawn as electric and magnetic ‘fields’ both at right angles (orthogonal) to the direction of propagation; however this causes confusion, because people assume that the ‘fields’ are directions, whereas they are actually field strengths. When you plot a graph of field strength versus propagation distance, the field-strength axis doesn’t indicate a transverse distance. It is true that a transverse wave like a photon has a transverse extent, but this is not indicated by a plot of E-field strength and B-field strength versus propagation distance! People get confused and think it is a three dimensional plot of a photon, when it is just a 1-dimensional plot which merely indicates how the magnetic field strength and electric field strength vary in the direction of propagation! Maxwell’s theory is empty when you recognise this, because you are left with a 1-dimensional photon, not a truly transverse photon as observed. So we illustrate above how photons really propagate, using hard facts from the study of the propagation of light velocity logic signals by Heaviside and Catt, with corrections for their errors. The key thing is that massless charges won’t propagate in a single direction only, because the magnetic fields they produce cause self-inductance which prevents motion. Massive charges overcome this by radiating electromagnetic waves as they accelerate, but massless charges will only propagate if there is an equal amount of charge flowing in the opposite direction at the same time, so as to cancel out the magnetic field (because the magnetic fields curl around the direction of propagation, they cancel in this situation if the charges are similar). So we can deduce the mechanism of propagation of real photons and virtual (exchange) gauge bosons, and the mechanism is compatible with path integrals, the double slit diffraction experiment with single photons (the transverse extent of the photon must be bigger than the distance between slits for an interference pattern), etc.

[Figure: revised Standard Model charge and gauge boson assignments.]

Above: the incorporation of U(1) charge as mass (gravitational vacuum charge is quantized and always has identical mass to the Z0, as already shown) and mixed neutral U(1) x SU(2) gauge bosons as quantum spin-1 gravitons into the empirical, heuristically developed Standard Model of particle physics. The new model is illustrated on the left and the old Standard Model is illustrated on the right. The SU(3) colour charge theory for strong interactions and quark triplets (baryons) is totally unaltered. The electroweak U(1) x SU(2) symmetry is radically altered in interpretation but not in mathematical structure! The difference is that the massless charged SU(2) gauge bosons are assumed to all acquire mass at low energy from some kind of unobserved ‘Higgs field’ (there are several models with differing numbers of Higgs bosons). This means that in the Standard Model, a ‘special’ 4-polarization photon mediates the electromagnetic interactions (requiring 4 polarizations so it can mediate both positive and negative force fields around positive and negative charges, not merely the 2 polarizations we observe with photons!).

Correcting the Standard Model so that it deals with electromagnetism correctly and contains gravity simply requires the replacement of the Higgs field with one that only couples to one spin handedness of the electrically charged SU(2) bosons, giving them mass. The other handedness of electrically charged SU(2) bosons remains massless even at low energy and mediates electromagnetic interactions!

To understand how this works, notice that the weak isospin charges of the weak bosons, such as W– and W+, are identical to their electric charges! Isospin is acquired when an electrically charged massless gauge boson (with no isotopic charge) acquires mass from the vacuum. The key difference between isotopic spin and electric charge is the massiveness of the gauge bosons, which alone determines whether the field obeys the Yang-Mills equation (where particle charge can be altered by the field) or the Maxwell equations (where a particle’s charge cannot be affected by the field). This is a result of the magnetic self-inductance created by the motion of a charge:

(1) A massless electric charge can’t propagate in one direction by itself, because such motion of massless charge would cause infinite magnetic self-inductance that prevents motion. (Massless radiations can’t be accelerated because they only travel at the velocity of light, so unlike massive charged radiations, they cannot compensate for magnetic effects by radiating energy while being accelerated.) Therefore massless charged radiation cannot propagate to deliver charge to a particle! Massless charged radiation can only ever behave as ‘exchange radiation’, whereby there is an equal flux of charged massless radiation from charge A to B and simultaneously back from B to A, so that the opposite magnetic curls of each opposite-directed flux of exchange radiation oppose one another and cancel out, preventing the problem of magnetic self-inductance. In this situation, the charge of fermions remains constant and cannot ever vary, because the charge each fermion loses per second is equal to the amount of charge delivered to it by the equilibrium of exchange radiation. In other words, the energy delivered by the charged massless gauge bosons can vary (if charges move apart for instance, it can be redshifted), but the electric charge delivered is always in equilibrium with that emitted each second. Hence, the Yang-Mills equation is severely constrained in the case of electromagnetism and reduces to the Maxwell theory, as observed.

(2) Massive charged gauge bosons can propagate by themselves in one direction because they can accelerate, due to having mass which makes them move slower than light (if they were massless this would not be true, because a massless gauge boson goes at light velocity and cannot be accelerated to any higher velocity): this acceleration permits the charged massive particle to get around infinite magnetic self-inductance by radiating electromagnetic waves while accelerating. Therefore, massive charged gauge bosons can carry a net charge and can affect the charge of the fermions they interact with, which is why they obey the Yang-Mills equation, not Maxwell’s.

Electromagnetism is described by SU(2) isospin with massless charged positive and negative gauge bosons

The usual argument against massless charged radiation propagating is infinite self-inductance, but as discussed in the blog page here this doesn’t apply to virtual (gauge boson) exchange radiations, because the curls of magnetic fields around the portion of the radiation going from charge A to charge B is exactly cancelled out by the magnetic field curls from the radiation going the other way, from charge B to charge A. Hence, massless charged gauge bosons can propagate in space, provided they are being exchanged simultaneously in both directions between electric charges, and not just from one charge to another without a return current.

You really need electrically charged gauge bosons to describe electromagnetism, because the electric field between two electrons is different in nature to that between two positrons: so you can’t describe this difference by postulating that both fields are mediated by the same neutral virtual photons, unless you grant the 2 additional polarizations of the virtual photon (the ordinary photon has only 2 polarizations, while the virtual photon must have 4) to be electric charge!

The virtual photon mediated between two electrons is negatively charged and that mediated between two positrons (or two protons) is positively charged. Only like charges can exchange virtual photons with one another, so two similar charges exchange virtual photons and are pushed apart, while opposite electric charges shield one another and are pushed together by a random-walk of charged virtual photons between the randomly distributed similar charges around the universe as explained in a previous post.

What is particularly neat about having electrically charged electromagnetic virtual photons is that it automatically requires an SU(2) Yang-Mills theory! The mainstream U(1) Maxwellian electromagnetic gauge theory makes a change in the electromagnetic field induce a phase shift in the wave function of a charged particle, not in the electric charge of the particle! But with charged gauge bosons instead of neutral gauge bosons, the bosonic field is able to change the charge of a fermion just as the SU(2) charged weak bosons are able to change the isospin charges of fermions.

We don’t see electromagnetic fields changing the electric charge of fermions normally because fermions radiate as much electric charge per second as they receive, from other charges, thereby maintaining an equilibrium. However, the electric field of a fermion is affected by its state of motion relative to an observer, when the electric field line distribution appears to make the electron “flatten” in the direction of motion due to Lorentz contraction at relativistic velocities. To summarize:

U(1) electromagnetism: is described by Maxwellian equations. The field is uncharged and so cannot carry charge to or from fermions. Changes in the field can only produce phase shifts in the wavefunction of a charged particle, such as acceleration of charges, and can never change the charge of a charged particle.

SU(2) electromagnetism (two charged massless gauge bosons): is described by the Yang-Mills equation because the field is electrically charged and can change not just the phase of the wavefunction of a charged particle to accelerate a charge, but can also in principle (although not in practice) change the electric charge of a fermion. This simplifies the Standard Model, because SU(2) with two massive charged gauge bosons is already needed, and it naturally predicts (in the absence of a Higgs field without a chiral discrimination for left-handed spinors) the existence of massless uncharged versions of these massive charged gauge bosons (which were observed at CERN in 1983).

The Yang-Mills equation is used for any bosonic field which carries a charge and can therefore (in principle) change the charge of a fermion. The weak force SU(2) charge is isospin, and the electrically charged massive weak gauge bosons carry an isospin charge which is IDENTICAL to the electric charge, while the massive neutral weak boson has zero electric charge and zero isospin charge. The Yang-Mills equation is:

dF^mn/dx_n + 2e(A_n × F^mn) + J^m = 0

which is similar to Maxwell’s equations (F^mn is the field strength and J^m is the current), apart from the second term, 2e(A_n × F^mn), which describes the effect of the charged field upon itself (e is the charge and A_n is the field potential). The term 2e(A_n × F^mn) doesn’t appear in Maxwell’s equations for two reasons:

(1) an exact symmetry between the rate of emission and reception of charged massless electromagnetic gauge bosons is forced by the fact that charged massless gauge bosons can only propagate in the vacuum where there is an equal return current coming from the other direction (otherwise they can’t propagate, because charged massless radiation has infinite self-inductance due to the magnetic field produced, which is only cancelled out if there is an identical return current of charged gauge bosons, i.e. a perfect equilibrium or symmetry between the rates of emission and reception of charged massless gauge bosons by fermionic charges). This prevents fermionic charges from increasing or decreasing, because the rate of gain and rate of loss of charge per second is always the same.

(2) the symmetry between the number of positive and negative charges in the universe keeps electromagnetic field strengths low normally, so the self-interaction of the charge of the field with itself is minimal.

These two symmetries act together to prevent the Yang-Mills 2e(A_n × F_mn) term from having any observable effect in laboratory electromagnetism, which is why the mainstream empirical Maxwellian model works as a good approximation, despite being incorrect at a more fundamental physical level of understanding.

Quantum gravity is supposed to be similar to a Yang-Mills theory in that the energy of the gravitational field is supposed (in general relativity, which ignores vital quantum effects such as the mass-giving ‘Higgs field’ – or whatever replaces it – and its interaction with gravitons) to be a source for gravity itself. In other words, like a Yang-Mills field, the gravitational field is supposed to generate a gravitational field simply by virtue of its energy, and therefore should interact with itself. If this simplistic idea from general relativity is true, then according to the theory presented on this blog page, the massless electrically neutral gauge boson of SU(2) is the spin-1 graviton. However, the structure of the Standard Model implies that some field is needed to provide mass, even if the mainstream Higgs mechanism for electroweak symmetry breaking is wrong.

Therefore, the massless electrically neutral (photon-like) gauge boson of SU(2) may not be the graviton, but is instead an intermediary gauge boson which interacts in a simple way with massive (gravitational charge) particles in the vacuum: these massive (gravitational charge) particles may be described by the simple Abelian symmetry U(1). So U(1) then describes quantum gravity: it has one charge (mass) and one gauge boson (spin-1 graviton).

‘Yet there are new things to discover, if we have the courage and dedication (and money!) to press onwards. Our dream is nothing else than the disproof of the standard model and its replacement by a new and better theory. We continue, as we have always done, to search for a deeper understanding of nature’s mystery: to learn what matter is, how it behaves at the most fundamental level, and how the laws we discover can explain the birth of the universe in the primordial big bang.’ – Sheldon L. Glashow, The Charm of Physics, American Institute of Physics, New York, 1991. (Quoted by E. Harrison, Cosmology, Cambridge University press, London, 2nd ed., 2000, p. 428.)

Conventionally U(1) represents weak hypercharge and SU(2) the weak interaction, with the unobserved B gauge boson of U(1) ‘mixing’ (according to the Glashow/Weinberg mixing angle) with the W0 unobserved gauge boson of SU(2) to produce the observed electromagnetic photon, and the observed weak neutral current gauge boson, the Z0. (It’s not specified in the Standard Model whether this mixing is supposed to occur before or after the weak gauge bosons have actually acquired mass from the speculative Higgs field. The Higgs field is a misnomer since there isn’t one specific theory but various speculations, including contradictory theories with differing numbers of ‘Higgs bosons’, none of which have been observed. Woit mentions in Not Even Wrong that since Weinberg put a Higgs mechanism into the electroweak theory in 1967, the Higgs theory has been called ‘Weinberg’s toilet’ by Glashow – Woit’s undergraduate adviser – because, although a mass-giving field is needed to give mass to weak bosons and thus break electroweak symmetry into separate electromagnetic and weak forces at low energy, Higgs’ theories stink.)

In the empirical model we will describe in this post, U(1) is still weak hypercharge, but SU(2) without mass is electromagnetism (with charged massless gauge bosons, positive for positive electric fields and negative for negative electric fields) and left-handed isospin charge (forming quark doublets and mesons). The Glashow/Weinberg mixing remains, but the massless electrically neutral product is the graviton, an unobservably high-energy photon whose wavelength is so small that it only interacts with the tiny black hole-sized cores of the gravitational charges (masses) of fundamental particles. This has the advantage of making U(1) x SU(2) x SU(3) a theory of all interactions, without changing the experimentally-confirmed mathematical structure significantly (we will show below how the Yang-Mills equation reduces to the Maxwell equations for charged massless gauge bosons). The addition of mass to half of the electromagnetic gauge bosons gives them their left-handed weak isospin charge, so they interact only with left-handed spinors. The other features of the weak interaction (apart from acting only on left-handed spinors), such as the weak interaction strength and the short range, are also due to the massiveness of the weak gauge bosons.

Beta radioactivity, controlled by the weak force, is the process whereby neutrons decay into protons, electrons and antineutrinos: a downquark decays into an upquark by emitting a W– weak boson, which then decays into an electron and an antineutrino.

This weak interaction is asymmetric due to the massive gauge bosons: free protons can’t ever decay by an upquark transforming into a downquark through emitting a W+ weak boson which then decays into a neutrino and a positron. The reason? Violation of mass-energy conservation! The decay of free protons into neutrons is banned because neutrons are heavier than protons, and mass-energy is conserved. (Protons bound in nuclei get extra effective mass from the binding energy of the strong force field in the nucleus, so in some cases – such as radioactive carbon-11, which is used in PET scanners – protons decay into neutrons by emitting a positive weak gauge boson which decays into a positron and a neutrino.) The left-handedness of the weak interaction is produced by the coupling of the gauge bosons to massive vacuum charges. The short range, strength and left-handedness of weak interactions are all due to the effect of mass on electromagnetic gauge bosons and charges. Mass limits the range and strength of the weak interaction, and it prevents right-handed spinors undergoing weak interactions. The whole point about electroweak theory is that the electromagnetic and weak interactions are identical in strength and nature apart from the fact that the weak gauge bosons are massive. Whereas the electromagnetic force charge for beta decay is {alpha} ~ 1/137.036…, the corresponding weak force charge at low energies (proton-sized distances) is {alpha}*(M_proton/M_W)^2, i.e. it depends on the square of the ratio of the mass of the proton (the decay product in beta decay) to the mass of the weak gauge boson involved, M_W. Since M_proton ~ 1 GeV and M_W ~ 80 GeV, the low-energy weak force charge is on the order of 1/80^2 of the electromagnetic charge, alpha. It is the fact that the weak interaction involves massive virtual photons, with over 80 times the mass of the decay products (!), which causes it to be so much weaker than electromagnetism at low energies (nuclear-sized distances). Neglecting the effects of mass on the interaction strength and range of the weak force, it is the same thing as electromagnetism. At very high energies exceeding 100 GeV (short distances, less than 10^-18 metre), the massive weak gauge bosons can be exchanged without the distance being an issue, and the weak force is then similar in strength to the electromagnetic force! The weak and strong forces can only act over a maximum distance of about 1 fermi (10^-15 metre) due to the limited range of the gauge bosons (massive W’s for the weak interaction, and pions for the longest-range residual component of the strong colour force).
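As a rough numerical check of the figures quoted in the paragraph above (a back-of-envelope sketch only, using alpha, M_proton and M_W as named in the text, not a full electroweak calculation):

```python
# Rough check of the low-energy weak coupling suppression quoted above:
# weak charge ~ alpha * (M_proton / M_W)^2, and a massive boson's range ~ h-bar*c / (M*c^2).

alpha = 1 / 137.036        # electromagnetic coupling (fine structure constant)
M_proton = 0.938           # proton mass-energy, GeV
M_W = 80.4                 # W boson mass-energy, GeV

weak_coupling = alpha * (M_proton / M_W) ** 2
print("weak coupling ~ %.2e (vs alpha = %.2e)" % (weak_coupling, alpha))
print("suppression factor ~ 1/%d" % round((M_W / M_proton) ** 2))   # ~1/7300, i.e. of order 1/80^2

# Range of a massive gauge boson, r ~ h-bar*c / (M*c^2), in metres:
hbar_c_GeV_m = 0.19733e-15     # h-bar*c = 0.19733 GeV*fm = 0.19733e-15 GeV*m
print("W boson range ~ %.1e m" % (hbar_c_GeV_m / M_W))   # ~2.5e-18 m, i.e. below the 10^-18 m quoted above
```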

Electromagnetic and strong forces conserve the number of interacting fermions, so that the number of fermion reactants is the same as the number of fermion products, but the weak force allows the number of fermion products to differ from the number of fermion reactants. The weak force involves neutrinos which have weak charges, little mass and no electromagnetic or strong charge, so they are weakly interacting (very penetrating).

Above: beta decay is controlled by the weak force, which is similar to the electromagnetic interaction (on a Feynman diagram, an ingoing neutrino is equivalent to having an antineutrino as an interaction product). In place of electromagnetic photons mediating particle interactions, there are three weak gauge bosons. If these weak gauge bosons were massless, the strength of the weak interaction would be the same as the electromagnetic interaction. However, the weak gauge bosons are massive, and that makes the weak interaction much weaker than electromagnetism at low energies (i.e. relatively big, nucleon-sized distances). This is because the massive virtual weak gauge bosons are short-ranged due to their massiveness (they suffer rapid attenuation with distance in the quantum field vacuum), so the weak boson interaction rate drops sharply with increasing distance. Hence, by analogy to Yukawa’s famous 1935 theoretical prediction of the pion mass using the experimentally known radius of pion-mediated nuclear interactions, it was possible to predict the masses of the weak gauge bosons using Glashow’s theory of weak interactions and the experimentally known weak interaction strength, giving about 82 GeV (W– and W+) and 92 GeV (Z0), predictions which were closely confirmed by the UA1 and UA2 experiments at CERN’s proton-antiproton collider, announced on 21 January 1983 (the masses were later measured precisely at the 27-km circumference LEP, the Large Electron-Positron collider, and are now established to be 80.4 and 91.2 GeV respectively). Neutral currents due to exchange of electrically neutral massive W0 or Z0 (as it is known after it has undergone Weinberg-Glashow mixing with the photon in electroweak theory) gauge bosons had already been confirmed experimentally in 1973, leading to the award of the 1979 Nobel Prize to Glashow, Salam and Weinberg for the SU(2) weak gauge boson theory (Glashow’s work of 1961 had been extended by Weinberg and Salam in 1967). (No Nobel Prize has been awarded for the entire electroweak theory because nobody has detected the speculative Higgs field boson(s) postulated to give mass, and thus electroweak symmetry breaking, in the mainstream electroweak theory.) One neutral current interaction is illustrated above. However, other Z0 neutral currents exist and are very similar to electromagnetic interactions: e.g. the Z0 can mediate electron scattering, although at low energies this process is trivial in comparison to electromagnetic (Coulomb) scattering, on account of the mass of the Z0, which makes the massive neutral current interaction weak and trivial compared to electromagnetism at low energies (i.e. large distances).
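Yukawa’s 1935 reasoning, referred to in the caption above, can be run numerically: a force mediated by a massive quantum has a range of roughly r ~ h-bar/(mc), so a measured range implies a mediator mass. A minimal sketch (the 1.4 fm nuclear range is a typical textbook figure, used here purely for illustration):

```python
# Yukawa-style estimate: mediator mass from interaction range, m*c^2 ~ h-bar*c / r.
hbar_c = 0.19733e-15      # h-bar*c in GeV*m

r_nuclear = 1.4e-15       # ~1.4 fm residual strong-force range (illustrative figure)
print("pion mass estimate ~ %.0f MeV" % (1e3 * hbar_c / r_nuclear))   # ~140 MeV, close to the observed pion mass

r_weak = 2.5e-18          # sub-10^-18 m weak interaction range quoted in the text
print("weak boson mass estimate ~ %.0f GeV" % (hbar_c / r_weak))      # ~80 GeV
```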

How this gravity mechanism updates the Standard Model of particle physics

‘The electron and antineutrino [both emitted in beta decay of neutron to proton as argued by Pauli in 1930 from energy conservation using the experimental data on beta energy spectra; the mean beta energy is only 30% of the energy lost in beta decay so 70% on average must be in antineutrinos] each have a spin 1/2 and so their combination can have spin total 0 or 1. The photon, by contrast, has spin 1. By analogy with electromagnetism, Fermi had (correctly) supposed that only the spin 1 combination emerged in the weak decay. To further the analogy, in 1938, Oscar Klein suggested that a spin 1 particle (‘W boson’) mediated the decay, this boson playing a role in weak interactions like that of the photon in the electromagnetic case [electron-proton scattering is mediated by virtual photons, and is analogous to the W-mediated ‘scattering’ interaction between a neutron and a neutrino (not antineutrino) that results in a proton and an electron/beta particle; since an incoming (reactant) neutrino has the same effect on a reaction as a released (resultant) antineutrino, this W-mediated scattering process is equivalent to the beta decay of a neutron].

‘In 1957, Julian Schwinger extended these ideas and attempted to build a unified model of weak and electromagnetic forces by taking Klein’s model and exploiting an analogy between it and Yukawa’s model of nuclear forces [where pion exchange between nucleons causes the attractive component of the strong interaction, binding the nucleons into the nucleus against the repulsive electric force between protons]. As the pion+, pion–, and pion0 are exchanged between interacting particles in Yukawa’s model of the nuclear force, so might the W+, W–, and [W0] photon be in the weak and electromagnetic forces.

‘However, the analogy is not perfect … the weak and electromagnetic forces are very sensitive to electrical charge: the forces mediated by W+ and W– appear to be more feeble than the electromagnetic force.’ – Professor Frank Close, The New Cosmic Onion, Taylor and Francis, New York, 2007, pp. 108-9.

The Yang-Mills SU(2) gauge theory of 1954 was first (incorrectly but interestingly) applied to weak interactions by Schwinger and Glashow in 1956, as Glashow explains in his Nobel prize award lecture:

‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).

‘Another electroweak synthesis without neutral currents was put forward by Salam and Ward in 1959. Again, they failed to see how to incorporate the experimental fact of parity violation. Incidentally, in a continuation of their work in 1961, they suggested a gauge theory of strong, weak and electromagnetic interactions based on the local symmetry group SU(2) x SU(2) [A. Salam and J. Ward, Nuovo Cimento, 19, 165 (1961)]. This was a remarkable portent of the SU(3) x SU(2) x U(1) model which is accepted today.

‘We come to my own work done in Copenhagen in 1960, and done independently by Salam and Ward. We finally saw that a gauge group larger than SU(2) was necessary to describe the electroweak interactions. Salam and Ward were motivated by the compelling beauty of gauge theory. I thought I saw a way to a renormalizable scheme. I was led to SU(2) x U(1) by analogy with the appropriate isospin-hypercharge group which characterizes strong interactions. In this model there were two electrically neutral intermediaries: the massless photon and a massive neutral vector meson which I called B but which is now known as Z. The weak mixing angle determined to what linear combination of SU(2) x U(1) generators B would correspond. The precise form of the predicted neutral-current interaction has been verified by recent experimental data. …’

Glashow in 1961 published an SU(2) model which had three weak gauge bosons, the neutral one of which could mix with the photon of electromagnetism to produce the observed neutral gauge bosons of electroweak interactions. (For some reason, Glashow’s weak mixing angle is now called Weinberg’s mixing angle.) Glashow’s theory predicted massless weak gauge bosons, not massive ones.

For this reason, a mass-giving field suggested by Peter Higgs in 1964 was incorporated into Glashow’s model by Weinberg as a mass-giving and symmetry-breaking mechanism (Woit points out in his book Not Even Wrong that this Higgs field is known as ‘Weinberg’s toilet’ because it was a vague theory which could exist in several forms with varying numbers of speculative ‘Higgs bosons’, and it couldn’t predict the exact mass of a Higgs boson).

I’ve explained in a previous post, here, where we depart from Glashow’s argument: Glashow and Schwinger in 1956 investigated SU(2) using for the 3 gauge bosons 2 massive weak gauge bosons and 1 uncharged electromagnetic massless gauge boson. This theory failed to include the massive uncharged Z gauge boson that produces neutral currents when exchanged. Because this specific SU(2) electro-weak theory is wrong, Glashow claimed that SU(2) is not big enough to include both weak and electromagnetic interactions!

However, this is an arm-waving dismissal and ignores a vital and obvious fact: SU(2) has 3 vector bosons but you need to supply mass to them by an external field (the Standard Model does this with some kind of speculative Higgs field, so far unverified by experiment). Without that (speculative) field, they are massless. So in effect SU(2) produces not 3 but 6 possible different gauge bosons: 3 massless gauge bosons with long range, and 3 massive ones with short range which describe the left-handed weak interaction.

It is purely the assumed nature of the unobserved, speculative Higgs field which tries to get rid of the 3 massless versions of the weak field quanta! If you replace the unobserved Higgs mass mechanism with another mass mechanism which makes checkable predictions about particle masses, you then arrive at an SU(2) symmetry with, in effect, 2 versions (massive and massless) of the 3 gauge bosons of SU(2), and the massless versions of those will give rise to long-ranged gravitational and electromagnetic interactions. This reduces the Standard Model from U(1) x SU(2) x SU(3) to just SU(2) x SU(3), while incorporating gravity as the massless uncharged gauge boson of SU(2). I found the idea that the chiral symmetry features of the weak interaction connect with electroweak symmetry breaking in Dr Peter Woit’s 21 March 2004 ‘Not Even Wrong’ blog posting The Holy Grail of Physics:

‘An idea I’ve always found appealing is that this spontaneous gauge symmetry breaking is somehow related to the other mysterious aspect of electroweak gauge symmetry: its chiral nature. SU(2) gauge fields couple only to left-handed spinors, not right-handed ones. In the standard view of the symmetries of nature, this is very weird. The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time. Surely there’s a connection here… So, this is my candidate for the Holy Grail of Physics, together with a guess as to which direction to go looking for it.’

As discussed in previous blog posts, e.g. this, the fact that the weak force is left-handed (affects only particles with left-handed spin) arises from the coupling of massive bosons in the vacuum to the weak gauge bosons: this coupling prevents the weak gauge bosons from interacting with particles of right-handed spin. The massless versions of the 3 SU(2) gauge bosons don’t get this spinor discrimination because they don’t couple with massive vacuum bosons, so the massless 3 SU(2) gauge bosons (which give us electromagnetism and gravity) are not limited to interacting with just one handedness of particles in the universe, but affect left- and right-handed particles equally. Further research on this topic is underway. The ‘photon’ of U(1) is mixed via the Weinberg mixing angle in the Standard Model with the electrically neutral gauge boson of SU(2), and in any case U(1) doesn’t describe electromagnetism without postulating, unphysically, that positrons are electrons ‘going backwards in time’. This kind of objection is an issue you will get with any new theory, due to problems in the bedrock assumptions of the subject, and so such issues should not be used as an excuse to censor the new idea out. In this case the problem is resolved either by Feynman’s speculative time argument – speculative because there is no evidence that positive charges are negative charges going back in time! – or, as suggested on this blog, by dumping U(1) symmetry for electrodynamics and adopting instead SU(2) electrodynamics without the Higgs field, which allows two charges – positive and negative, without one going backwards in time – and three massless gauge bosons, and can therefore incorporate gravitation with electrodynamics. Evidence from electromagnetism:

‘I am a physicist and throughout my career have been involved with issues in the reliability of digital hardware and software. In the late 1970s I was working with CAM Consultants on the reliability of fast computer hardware. At that time we realised that interference problems – generally known as electromagnetic compatibility (emc) – were very poorly understood.’

– Dr David S. Walton, co-discoverer in 1976 (with Catt and Malcolm Davidson) that the charging and discharging of capacitors can be treated as the charging and discharging of open-ended power transmission lines. This is a discovery with a major but neglected implication for the interpretation of Maxwell’s classical electromagnetism equations in quantum field theory: because energy flows into a capacitor or transmission line at light velocity and is then trapped in it with no way to slow down (the magnetic fields cancel out when the energy is trapped), charged fields propagating at the velocity of light constitute the observable nature of apparently ‘static’ charge, and therefore the electromagnetic gauge bosons of electric force fields are not neutral but carry net positive and negative electric charges. Electronics World, July 1995, page 594.

Above: the Catt-Davidson-Walton theory showed that the transmission line section treated as a capacitor could be modelled by the Heaviside theory of a light-velocity logic pulse. The capacitor charges up in a lot of small steps as energy current flows in, bounces off the open circuit at the far end of the capacitor, and then reflects and adds to further incoming energy current. The steps are approximated by the classical theory of Maxwell, which gives the exponential curve. Unfortunately, Heaviside’s mathematical theory is an over-simplification (wrong physically, although for most purposes it gives approximately valid results numerically) because it assumes that at the front of a logic step (Heaviside signalled using Morse code in 1875 in the undersea cable between Newcastle and Denmark) the rise is a discontinuous or abrupt step, instead of a gradual rise! We know this is wrong because at the front of a logic step the gradual rise in electric field strength with distance is what causes conduction electrons to accelerate to drift velocity from the normal randomly directed thermal motion they have.
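A minimal numerical sketch of the staircase charging described above, for an idealised lossless line charged through a series resistor much larger than the line impedance (the component values are illustrative and are not taken from the Catt-Davidson-Walton papers):

```python
# Stepwise charging of an open-ended transmission line (treated as a capacitor),
# charged through a series resistor R from a battery V. Each transit of the
# light-velocity energy current adds one small step at the open end; for R >> Z0
# the staircase approximates the classical exponential charging curve.
import math

V = 5.0         # battery voltage
Z0 = 50.0       # characteristic impedance of the line (ohms)
R = 1000.0      # series charging resistance (ohms); R >> Z0 gives many small steps

rho_source = (R - Z0) / (R + Z0)   # reflection coefficient at the source end
v_step = V * Z0 / (R + Z0)         # amplitude of the first launched voltage step
far_end = 0.0                      # voltage at the open (far) end of the line

print("step   staircase V   approx exponential V")
for n in range(1, 11):
    far_end += 2 * v_step * rho_source ** (n - 1)   # the open end doubles each incident step
    # Classical smooth model: the staircase after n steps is V*(1 - rho^n), which for
    # R >> Z0 is approximately V*(1 - exp(-2*n*Z0/(R + Z0))), i.e. an exponential rise.
    smooth = V * (1 - math.exp(-2 * n * Z0 / (R + Z0)))
    print("%4d   %11.3f   %20.3f" % (n, far_end, smooth))
```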

Above: some of the errors in Heaviside’s theory are inherited by Catt in his theoretical work and in his so-called “Catt Anomaly” or “Catt Question”. If you look logically at Catt’s original anomaly diagram (based on Heaviside’s theory), you can see that no electric current can occur: electric current is caused by the drift of electrons, which is due to the change of voltage with distance along a conductor. E.g. if I have a conductor uniformly charged to 5 volts with respect to another conductor, no electric current flows, because there is simply no voltage gradient to cause a current. If you want an electric current, connect one end of a conductor to say 5 volts and the other end to some different potential, say 0 volts. Then there is a gradient of 5 volts along the length of the conductor, which accelerates electrons up to the drift velocity for that resistance. If you connect both ends of a conductor to the same 5 volts potential, there is no gradient in the voltage along the conductor, so there is no net electromotive force on the electrons. The vertical front on Catt’s original Heaviside diagram depiction of the “Catt Anomaly” doesn’t accelerate electrons in the way that we need, because it shows an instantaneous rise in volts, not a gradient with distance.

Once you correct some of the Heaviside-Catt errors by including a real (ramping) rise time at the front of the electric current, the physics at once becomes clear and you can see what is actually occurring. The acceleration of electrons in the ramps of each conductor generates a radiated electromagnetic (radio) signal which propagates transversely to the other conductor. Since each conductor radiates an exactly inverted image of the radio signal from the other conductor, both superimposed radio signals exactly cancel when measured from a distance that is large compared to the separation between the two conductors. This is perfect interference, and it prevents any escape of radiowave energy in this mechanism. The radiowave energy is simply exchanged between the ramps of the logic signals in each of the two conductors of the transmission line. This is the mechanism for electric current flow at light velocity via power transmission lines: what Maxwell attributed to ‘displacement current’ of virtual charges in a mechanical vacuum is actually just an exchange of radiation!

There are therefore three related radiations flowing in electricity: surrounding one conductor there are positively-charged massless electromagnetic gauge bosons flowing parallel to the conductor at light velocity (to produce the positive electric field around that conductor); around the other there are negatively-charged massless gauge bosons going in the same direction, again parallel to the conductor; and between the two conductors the accelerating electrons exchange normal radiowaves which flow in a direction perpendicular to the conductors and play the role which is mathematically represented by Maxwell’s ‘displacement current’ term (enabling continuity of electric current in open circuits, i.e. circuits containing capacitors with a vacuum dielectric that stops real electric current from flowing, or long open-ended transmission lines which allow electric current to flow while charging up, despite not being a completed circuit).

Commenting on the mainstream focus upon string theory, Dr Woit states (http://arxiv.org/abs/hep-th/0206135 page 52):

‘It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental “M-theory” is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbation expansion. This whole situation is reminiscent of what happened in particle theory during the 1960’s, when quantum field theory was largely abandoned in favor of what was a precursor of string theory. The discovery of asymptotic freedom in 1973 brought an end to that version of the string enterprise and it seems likely that history will repeat itself when sooner or later some way will be found to understand the gravitational degrees of freedom within quantum field theory. While the difficulties one runs into in trying to quantize gravity in the standard way are well-known, there is certainly nothing like a no-go theorem indicating that it is impossible to find a quantum field theory that has a sensible short distance limit and whose effective action for the metric degrees of freedom is dominated by the Einstein action in the low energy limit. Since the advent of string theory, there has been relatively little work on this problem, partly because it is unclear what the use would be of a consistent quantum field theory of gravity that treats the gravitational degrees of freedom in a completely independent way from the standard model degrees of freedom. One motivation for the ideas discussed here is that they may show how to think of the standard model gauge symmetries and the geometry of space-time within one geometrical framework.’

That last sentence is the key idea that gravity should be part of the gauge symmetries of the universe, not left out as it is in the mainstream ‘standard model’, U(1) x SU(2) x SU(3).

How the pressure mechanism of quantum gravity reproduces the contraction in general relativity

As long ago as 1949 a Dirac sea was shown to mimic the relativistic length contraction and mass-energy relation:

‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_0/(1 – v^2/c^2)^(1/2), where E_0 is the potential energy of the dislocation at rest.’ – C. F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, vol. A62 (1949), pp. 131-4.

Feynman explained that the contraction of space around a static mass M due to curvature in general relativity is a reduction in radius by (1/3)MG/c^2, which is 1.5 mm for the Earth. You don’t need the tensor machinery of general relativity to get such simple results for the low energy (classical) limit. (Baez and Bunn similarly have a derivation of Newton’s law from general relativity that doesn’t use tensor analysis: see http://math.ucr.edu/home/baez/einstein/node6a.html.) We can do it just using the equivalence principle of general relativity plus some physical insight:

The velocity needed to escape from the gravitational field of a mass M (ignoring atmospheric drag), beginning at distance x from the centre of the mass, is by Newton’s law v = (2GM/x)^(1/2), so v^2 = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards. This is just a simple result of the conservation of energy! Therefore, the gravitational potential energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling down to that distance from an infinite distance, and this gravitational potential energy is – by the conservation of energy – equal to the kinetic energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, the effects of the gravitational acceleration field are identical to other accelerations such as those produced by rockets and elevators. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the Fitzgerald-Lorentz contraction, giving g = (1 – v^2/c^2)^(1/2) = [1 – 2GM/(xc^2)]^(1/2).

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!), with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect: g = x/x_0 = t/t_0 = m_0/m = (1 – v^2/c^2)^(1/2) = 1 – ½v^2/c^2 + …

Gravitational contraction effect: g = x/x_0 = t/t_0 = m_0/m = [1 – 2GM/(xc^2)]^(1/2) = 1 – GM/(xc^2) + …,

where for radial symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the case of the FitzGerald-Lorentz contraction: x/x_0 + y/y_0 + z/z_0 = 3r/r_0. Hence the relative radial contraction of space around a mass is r/r_0 = 1 – GM/(3rc^2).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. Space is contracted radially around mass M by the distance (1/3)GM/c^2.
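A quick numerical check of the figures used above, treating the Earth as a static, spherically symmetric mass and using standard values of the constants:

```python
# Check the radial contraction (1/3)GM/c^2 and the gravitational factor [1 - 2GM/(xc^2)]^(1/2)
# for the Earth, following the equivalence-principle argument given above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

contraction = G * M_earth / (3 * c ** 2)
print("radial contraction of the Earth ~ %.1f mm" % (contraction * 1e3))   # ~1.5 mm, as stated

gamma = (1 - 2 * G * M_earth / (R_earth * c ** 2)) ** 0.5
print("gravitational factor at the surface = %.12f" % gamma)
print("first-order approx. 1 - GM/(xc^2)   = %.12f" % (1 - G * M_earth / (R_earth * c ** 2)))
```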

Space is not contracted in the transverse direction, i.e. along the circumference of the Earth (the direction at right angles to the radial lines which originate from the centre of mass). This is the physical explanation in quantum gravity of so-called curved spacetime: because graviton exchange compresses masses radially but leaves them unaffected transversely, the radius is reduced but the circumference is unaffected, so the value of Pi (circumference/diameter of a circle) would be altered for a massive circular object if we used Euclidean 3-dimensional geometry! General relativity’s spacetime is a system for keeping Pi from changing by simply invoking an extra dimension: by treating time as a spatial dimension, Euclidean 3-dimensional space can be treated as a surface on 4-dimensional spacetime, with the curvature ensuring that Pi is never altered! Spacetime curves due to the extra dimension, instead of Pi altering. However, this is a speculative explanation and there is no proof that contraction effects are really due to this curvature. For example, Lunsford published a unification of electrodynamics and general relativity with 6 dimensions including 3 time-like dimensions, so that there is a perfect symmetry between space and time, with each spatial dimension having a corresponding time dimension. This makes sense when you measure time from the big bang by means of t = 1/H, where H is the Hubble parameter H = v/x: because there are 3 spatial dimensions, the expansion rate measured in each of those three spatial dimensions will give you 3 separate ages of the universe, i.e. 3 time dimensions (unless the expansion is isotropic, when all three times are indistinguishable, as appears to be the case!). (As with a paper of mine, Lunsford’s paper was censored off arXiv after being published in a peer-reviewed physics journal under the title “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pages 161-177, because it disagrees with the number of speculative unobserved dimensions in arXiv-endorsed mainstream string theory sh*t: it can be downloaded here. Therefore when ‘string theorists’ claim that there is ‘no alternative’ to their brilliant sh*t landscape of 10^500 metastable vacua solutions from all the combinations of 6 unobservable, compactified Calabi-Yau extra spatial dimensions in 10-dimensional string theory, tell them and their stupid arXiv censors to go and f*ck off with their extra spatial dimensions insanity.)


Above: Feynman’s illustration of general relativity by the 1.5 mm radial contraction of the Earth. Experiments like the Casimir effect demonstrate that the vacuum is filled with virtual bosonic quantum radiation which causes forces. When you move in this sea of bosonic radiation, you experience inertia (resistance to acceleration) from the radiation, and during your acceleration you get contracted in the direction of motion. Gravitational fields have a similar effect: graviton exchange radiation pressure contracts masses radially but not transversely, so the radius of the Earth is contracted by 1.5 mm, but the circumference isn’t affected. Hence there would be a slight change to Pi if space were Euclidean. This is why in general relativity 3-dimensional space is treated as a curved surface on a 4-dimensional spacetime, so that curvature of 3-dimensional space keeps Pi from being altered. However, in quantum gravity we have a physical mechanism for the contraction, so the 4-dimensional spacetime theory and ‘curved space’ are just a classical approximation or calculating trick for the real quantum gravity effects! Gravitational time-dilation accompanies curvature because all measures of time are based on motion, and the contraction of distance means that moving clock parts (including things like oscillating electrons, oscillating quartz crystal atoms, etc.) travel a smaller distance in a given time, making time appear to slow down. It also applies to the electric currents in nerve impulses, so the electric impulses in a person’s brain will move a smaller distance in a given interval of time, making the person slow down: everything slows down in time-dilation. Professor Richard P. Feynman explains this time-dilation effect on page 15-6 of volume 1 of The Feynman Lectures on Physics (Addison Wesley, 1963) by considering the motion of light inside a clock:

‘… it takes a longer time for light to go from end to end in the moving clock than in the stationary clock. Therefore the apparent time between clicks is longer for the moving clock … Not only does this particular kind of [light-based] clock run more slowly, but … any other clock, operating on any principle whatsoever, would also appear to run slower, and in the same proportion …

‘Now if all moving clocks run slower, if no way of measuring time gives anything but a slower rate, we shall just have to say, in a certain sense, that time itself appears to be slower in a space ship. All the phenomena there – the man’s pulse rate, his thought processes, the time he takes to light a cigar, how long it takes to grow up and get old – all these things must be slowed down in the same proportion, because he cannot tell he is moving.’

Reference frame for the calculations

The confirmation that the universe has a small positive (outward) acceleration via computer-automated CCD-telescope observations of the signature flashes of distant, redshifted supernovae in 1998 confirmed the prediction of a = Hc made in 1996 and published via Electronics World (October 1996, letters pages) and Science World (ISSN 1367-6172), February 1997. But regardless of the fact that we predicted it before it was observed, the acceleration is well confirmed by observations and is therefore a fact. It is an acceleration seen from our frame of reference on the universe, in which we are looking back in time by the amount x/c when we look out to distance x, due to the delay time in light reaching us. Gravitational fields propagate at the velocity of light according to the empirically defensible basics of general relativity (which has little to do with its fitting to cosmology with arbitrary ad hoc adjustment factors). So the reference frame for calculating the force of the outward accelerating matter of the universe is that of the observer, for whom the surrounding universe has a small apparent acceleration and large apparent mass, giving a large outward force, F = ma.

Suppose we look to distance R. We’re looking to earlier times due to the travel time of light to us! The age of the universe, t, at distance R, plus the time T that light takes to travel from distance R to us, is equal to 1/H in flat spacetime: t + T = 1/H.

Suppose a supernova is a billion light years away. In that case

t = 13.7 – 1 = 12.7 billion years after big bang

T = 1 billion years light travel time to reach us

T + t = 13.7 billion years = 1/H, in the observed (flat) spacetime. This is a simple fact!

Hence Hubble’s empirical law tells us another simple fact, namely that the distant supernova is subject to acceleration as seen from our frame of reference:

v = HR = H(cT) = Hc[(1/H) – t] = c – (Hct)

a = dv/dt = d[c – (Hct)]/dt = –Hc ≈ –6×10^-10 m s^-2
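A one-line numerical check of this result, taking 1/H = 13.7 billion years as in the example above (the exact figure depends on the value adopted for H):

```python
# a = dv/dt = -Hc, with H = 1/(13.7 billion years)
c = 2.998e8                                   # m/s
year = 3.156e7                                # seconds in a year
H = 1.0 / (13.7e9 * year)                     # Hubble parameter, s^-1
print("a = -Hc = %.1e m/s^2" % (-H * c))      # ~ -7e-10 m/s^2, the order of magnitude quoted above
```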

This is a tiny cosmological acceleration, only observable when looking at very distant objects, so you have to observe constant-energy (Type Ia) supernova explosions in order to have a bright enough flash of light to observe redshifted spectra from that distance. This is what was discovered in 1998 by two teams of astronomers, led respectively by Saul Perlmutter of the Lawrence Berkeley National Laboratory and by Brian Schmidt of the Australian National University.

Above: in 1998 two independent groups, the High-z Supernova Search Team (Riess et al., Astronomical Journal, v.116, p.1009, 1998) and the Supernova Cosmology Project (Perlmutter et al., Astrophysical Journal, v.517, p.565, 1999), both came up with observational evidence that the universe is accelerating, not slowing down due to gravitation as had been expected from Friedmann’s metric of general relativity. So cosmologists quickly introduced an ad hoc ‘correction’ in the form of a cosmological constant (lambda) to make the universe 70% dark energy and 30% matter (most of this being dark matter, which has never been directly observed in the laboratory; only 5% of the universe is normal matter and another 5% seems to be neutrinos and antineutrinos). For a criticism of the resulting ad hoc lambda-CDM ‘theory’ see the paper by Richard Lieu of the Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462.

However, the acceleration itself, which offsets gravitational attraction at great distances, is not in question. The Type Ia supernovae all release a similar amount of energy (7×10^26 megatons of TNT equivalent; 3×10^42 J) because they result from the collapse of white dwarfs when their mass just exceeds 1.4 times the mass of the sun (the Chandrasekhar limit). When this mass limit is exceeded (due to matter from a companion star falling into the white dwarf), the free electron gas in the white dwarf can no longer support the star against gravity, so the white dwarf collapses due to gravity, releasing a lot of gravitational potential energy which heats the star up to such a high temperature and pressure that the nuclei of carbon and oxygen can then fuse together, creating large amounts of radioactive nickel-56 and a massive explosion in space called a supernova. These Type Ia supernovae occur roughly once every 400 years in the Milky Way galaxy. They all have a similar light spectrum regardless of distance from us, indicating that they are similar in composition. So their relative brightness indicates their distance from us (by the inverse-square law of radiation), while the redshift of the line spectra, such as lines from nickel-56, indicates the recession velocity. Redshift is only explained by recession.
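A toy illustration of the standard-candle logic described above: equal absolute luminosity means the flux ratio fixes the distance ratio by the inverse-square law, while the line redshift gives the recession velocity. All numbers here are invented purely for illustration:

```python
# Two Type Ia supernovae with the same absolute luminosity L = 4*pi*d^2*F,
# so the flux ratio gives the distance ratio: d_far = d_near * sqrt(F_near / F_far).
flux_near = 4.0e-14      # W/m^2, measured flux of a nearby reference supernova (made-up value)
flux_far = 1.0e-16       # W/m^2, measured flux of a distant supernova (made-up value)
d_near = 5.0e24          # m, distance of the reference supernova (assumed known)

d_far = d_near * (flux_near / flux_far) ** 0.5
print("distance of the far supernova ~ %.1e m" % d_far)

# Redshift of an identified spectral line gives recession velocity (non-relativistic limit v ~ zc):
c = 2.998e8
z = 0.05                 # fractional wavelength shift, delta-lambda / lambda (made-up value)
print("recession velocity ~ %.1e m/s" % (z * c))
```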

Now consider how this prediction of the cosmological acceleration differs from the mainstream treatment of cosmological acceleration and dark energy. Professor Sean Carroll, who uses Feynman’s old desk, has a paper here called ‘Why is the Universe Accelerating?’ Einstein’s field equation of general relativity relates the geometry of spacetime (i.e., the curvature of space which would be needed to cause the observed accelerations if space is not quantum but is instead a continuum that can be curved by the presence of a fourth, time-like dimension) to the sources of gravitational fields, such as mass-energy and pressure, which produce that supposed continuum curvature.

Because of relativistic effects on the source of the gravitational field (i.e., accelerating bodies contract in the direction of motion and gain mass, which is gravitational charge, so a falling apple becomes heavier while it accelerates), the curvature of spacetime is affected in order for energy to be conserved when the gravitational source is changed by relativistic motion. This means that the Ricci tensor for curvature is not simply equal to the source of the gravitational field. Instead, another factor (equal to half the product of the trace of the Ricci tensor and the metric tensor) must be subtracted from the Ricci curvature to ensure the conservation of energy. As a result, general relativity makes predictions which differ from Newtonian physics. General relativity is correct as far as it goes, which is a mathematical generalization of Newtonian gravity with a correction for energy conservation. It’s not, however, the end of the story. There is every reason to expect general relativity to hold good in the solar system, and to be a good approximation. But if gravity has a gauge theory (exchange radiation) mechanism in the expanding universe which surrounds a falling apple, there is a reason why general relativity is incomplete when applied to cosmology.

Sean’s paper ‘Why is the Universe Accelerating?’ asks why the energy of the vacuum is so much smaller than predicted by grand unification theories of supersymmetry, such as supergravity (a string theory). This theory states that the universe is filled with a quantum field of virtual fermions which have a ground state or zero-point energy of E = (1/2){h-bar}{angular frequency}. Each of these oscillating virtual charges radiates energy E = hf, so integrating over all frequencies gives you the total amount of vacuum energy. This is infinite if you integrate frequencies between zero and infinity, but the problem isn’t real, because the highest frequencies correspond to the shortest wavelengths, and we already know from the physical need to renormalize quantum field theory that the vacuum has a minimum size scale (the grain size of the vacuum), and you can’t have shorter wavelengths (or the corresponding higher frequencies) than that size. Renormalization introduces cutoffs on the running couplings for interaction strengths; such couplings would become infinite at zero distance, causing infinite field momenta, if they were not cut off by a vacuum grain-size limit. The mainstream string and other supersymmetric unification ideas assume that the grain size is the Planck length, although there is no theory of this (dimensional analysis isn’t a physical theory) and certainly no experimental evidence for this particular grain-size assumption; a physically more meaningful and also smaller grain size would be the black hole horizon radius for an electron, 2GM/c^2.
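For comparison, the two candidate grain sizes mentioned above can be evaluated numerically (a sketch; the “electron horizon” here just means 2GM/c^2 with the electron mass inserted, as the text defines it):

```python
# Compare the Planck length with the black hole event horizon radius 2GM/c^2 for an electron.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
hbar = 1.055e-34       # J*s
m_electron = 9.109e-31 # kg

planck_length = (hbar * G / c ** 3) ** 0.5
electron_horizon = 2 * G * m_electron / c ** 2

print("Planck length              ~ %.1e m" % planck_length)     # ~1.6e-35 m
print("electron horizon 2Gm/c^2   ~ %.1e m" % electron_horizon)  # ~1.4e-57 m, far smaller
```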

But to explain the mainstream error: the assumption of the Planck length as the grain size tells the mainstream how closely the grains (virtual fermions) are packed together in the spacetime fabric, allowing the vacuum energy to be calculated. Integrating the energy over frequencies corresponding to vacuum oscillator wavelengths longer than the Planck scale gives exactly the same answer for the vacuum energy as working out the energy density of the vacuum from the grain-size spacing of virtual charges. This is the Planck mass (expressed as energy using E = mc^2) divided by the cube of the Planck length (the volume which each of the supposed virtual Planck-mass vacuum particles is supposed to occupy within the vacuum).

The answer is 10^112 ergs/cm^3 in Sean’s quaint American stone age units, or 10^111 J m^-3 in physically sensible S.I. units (1 erg is 10^-7 J, and there are 10^6 cm^3 in 1 m^3). The problem for Sean and other mainstream people is why the measured ‘dark energy’ from the observed cosmological acceleration implies a vacuum energy density of merely 10^-9 J m^-3. In other words, string theory and supersymmetric unification theories in general exaggerate the vacuum energy density by a factor of 10^111 J m^-3 / 10^-9 J m^-3 = 10^120.
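A rough order-of-magnitude check (this sketch uses the ordinary Planck energy and Planck length, whereas Carroll’s quoted figure uses slightly different conventions, so the exponents differ by two or three; the famous ~10^120 discrepancy and its fourth root ~10^30 come out to within an order of magnitude regardless):

```python
# Vacuum energy density estimated as one Planck energy per Planck volume,
# compared with the observed dark-energy scale of ~1e-9 J/m^3 quoted above.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J*s

planck_energy = (hbar * c ** 5 / G) ** 0.5       # ~2e9 J
planck_length = (hbar * G / c ** 3) ** 0.5       # ~1.6e-35 m

rho_planck = planck_energy / planck_length ** 3  # Planck-scale vacuum energy density
rho_observed = 1e-9                              # J/m^3, observed dark-energy scale

ratio = rho_planck / rho_observed
print("Planck-scale vacuum energy density ~ 10^%.1f J/m^3" % math.log10(rho_planck))
print("discrepancy ratio                  ~ 10^%.1f" % math.log10(ratio))
print("fourth root (energy, not density)  ~ 10^%.1f" % (math.log10(ratio) / 4))
```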

That’s an error! (Although, of course, to be a little cynical, such errors are common in string theory, which also predicts 10^500 different universes, exceeding the observed number.)

Now we get to the fun part. Sean points out in section 1.2.2 ‘Quantum zero-point energy’ at page 4 of his paper that:

‘This is the famous 120-orders-of-magnitude discrepancy that makes the cosmological constant problem such a glaring embarrassment. Of course, it is somewhat unfair to emphasize the factor of 10^120, which depends on the fact that energy density has units of [energy]^4.’

What Sean is saying here is that, since the Planck length is inversely proportional to the Planck energy, the mainstream-predicted vacuum energy density {Planck energy}/{Planck length}^3 ~ {Planck energy}^4, which exaggerates the error in the predicted energy. So if we look at the error in terms of energy rather than energy density of the vacuum, the error is only a factor of 10^30, not 10^120.

What is pretty obvious here is that the more meaningful 10^30 error factor is relatively close to the factor 10^40, which is the ratio between the coupling constants of electromagnetism and gravity. In other words, the mainstream analysis is wrong in using the electromagnetic (electric charge) oscillator photon radiation theory, instead of the mass oscillator graviton radiation theory: the acceleration of the universe is due to graviton exchange.

In a blog post dated 2004, Sean wrote:

‘Yesterday we wondered out loud whether cosmological evidence for dark matter might actually be pointing to something more profound: a deviation of the behavior of gravity from that predicted by Einstein’s general relativity (GR). Now let’s ask the same question in the context of dark energy and the acceleration of the universe. … For example, maybe general relativity works for ordinary bound systems like stars and galaxies, but breaks down for cosmology, in particular for the expansion rate of the universe. In GR the expansion rate is described by the Friedmann equation … So maybe Friedmann was somehow wrong. For example, maybe we can solve the problem of the mismatch between theory and experiment by saying that the vacuum energy somehow doesn’t make the universe accelerate like ordinary energy does.’

Duh! The error is the assumption of fundamental force unification, which would make all the fundamental interaction couplings identical at the unification energy:
Above: supersymmetry is based on the false guess that, at very high energy, all fundamental force couplings have the same numerical value; to be specific, the minimal supersymmetric standard model – the one which contains 125 parameters instead of just the 19 in the Standard Model – makes all force couplings coincide at alpha = 0.04, near the Planck scale. Although this extreme prediction can’t be tested, quite a lot is known about the strong force at lower and intermediate energies from nuclear physics and also from various particle experiments and observations of very high energy cosmic ray interactions with matter; so, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly high by 10-15%, when the experimental data are accurate to a standard deviation of about 3%. At the top of the diagram above is the theory that there is no ‘unification’ of force couplings at high energy, and that the unification instead consists of energy conservation for the different fields at high energy.

The relative strength of electromagnetic interactions has been experimentally observed (in electron scattering) to increase from Coulomb’s low-energy law by 7% as the collision energy increases from about 0.5 MeV to about 90 GeV (I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424), but the strong force behaves differently, falling as energy increases (except for a peaking effect at relatively low energy). It is as if the strong force is powered by gauge bosons created in the vacuum from the energy that the virtual fermions absorb from the electromagnetic field in the act of being radially polarized by that field, the virtual fermions being unleashed from the ground state of the vacuum by pair production in electric fields exceeding Schwinger’s 10^18 volts/metre IR cutoff (equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040):

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

‘The cloud of virtual particles acts like a screen or curtain that shields the true value of the central core. As we probe into the cloud, getting closer and closer to the core charge, we ’see’ less of the shielding effect and more of the core. This means that the electromagnetic force from the electron as a whole is not constant, but rather gets stronger as we go through the cloud and get closer to the core. Ordinarily when we look at or study an electron, it is from far away and we don’t realize the core is being shielded.’ – Professor David Koltick

‘… we [experimentally] find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

What these people don’t consider is what happens to the electromagnetic field energy which is absorbed by the virtual fermions within 1 femtometre of a particle core! It turns out that this energy powers short-range interactions. The way unification works isn’t by making force strengths equal at a very high energy; it’s by sharing energy via the absorption of electromagnetic field energy by polarized virtual fermions close to the core of a real particle. The energy density of an electromagnetic field is known as a function of field strength, and the field strength can be calculated for any distance from an electric charge using Maxwell’s equations (specifically Gauss’s law, the electric field form of Coulomb’s force law). There is no positive evidence for coupling strength unification, there is some evidence (quoted by Woit, as explained) that it is in error, and there is a good reason – from energy conservation – why the fact that the strong interaction charge gets bigger with increasing distance (out to a certain limit!) requires that it is powered by energy being absorbed over that distance from the electromagnetic field by virtual fermions!

Furthermore, since quantum gravity is a two-step mechanism, with gravitons only interacting with observed particles via an intermediary unobserved Higgs-type field that provides ‘gravitational charge’ to mass-energy, we must face the possibility that gravitons don’t necessarily have gravitational charge (mass). In this case, gravitational couplings don’t run, but stay small at all distances and energies. This is the reason why the unification theory overestimates the cosmological acceleration of the universe:

The acceleration is caused by gravitons, and since gravitons don’t carry gravitational charge (only the Higgs-like field is charged) they are like the photons in the old gauge theory of U(1) electrodynamics and don’t interact with one another to cause the coupling to increase at short distances or high collision energies. Since the gravity coupling actually remains small at high energies and small distances, and since it is powering the cosmological acceleration, we can see why the mainstream assumption that gravity is enhanced by a factor of 10^40 at the smallest distances causes the mainstream estimate of cosmological acceleration/dark energy to be too high (another error, albeit a smaller one, in the mainstream calculation is their assumption of the Planck scale as the grain size or cutoff wavelength instead of the black hole event horizon size, which is smaller and more meaningful physically than the bigger length given by Planck’s arbitrary dimensional analysis).

(On a related topic, it is a fact that gravity waves haven’t been observed from the acceleration and oscillation of gravitational charges (masses), unlike the observation of electromagnetic waves from accelerating and oscillating electric charges, precisely because the gravity coupling is 10^40 times smaller than the electromagnetic coupling. Gravity waves are related to gravitons in the way that photons of light are related to the virtual photons that mediate electromagnetic fields.)

Pair-production, vacuum polarization, and the physical explanation of the IR and UV cutoffs which prevent infinities in quantum field theory

The so-called ultraviolet (UV) divergence problem in quantum field theory is the need to impose an upper-limit cutoff in charge renormalization, to prevent massive loops from diverging at extremely high energy. The solution to this problem is straightforward (it is not a physically real problem): there physically just isn’t room for massive loops to be polarized above the UV cutoff, because at higher energy you get closer to the particle core, so the space is simply too small in size to have massive loops with charges being polarized along the electric field vector. There is a normally unobservable Dirac sea in the vacuum which affects reality when charges in it gain enough energy to allow pairs of fermions to pop into observable existence close to electrons, where the electric field strength is above Schwinger’s pair-production cutoff, 1.3×10^18 volts/metre. This lower limit on the energy required for pair production explains physically the cause of the IR cutoff on running couplings and loop effects in quantum field theory. The UV cutoff at extremely high energy is also explained by a correspondingly simple mechanism: at high energy, the corresponding distance is so small that there are unlikely to be any Dirac sea particles available in that small space (i.e., the distance becomes smaller than the ultimate grain size of the vacuum, or the physical size of normally unobservable ground state Dirac sea fermions), so you physically can’t get pair production or vacuum polarization, because the distance is too small to allow those processes to occur! So the intense electric field strength is unable to produce any massive loops if the distance you are applying your calculations to is smaller than the size of the vacuum particles:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
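As a quick numerical check of the pair-production threshold quoted above, the Schwinger critical field is given by the standard textbook formula E_c = m_e^2 c^3/(e h-bar); the sketch below (illustrative only, using standard constants) reproduces the 1.3×10^18 volts/metre figure:

```python
# Minimal sketch: the Schwinger critical electric field E_c = m_e^2 c^3 / (e * hbar),
# above which electron-positron pairs can be pulled out of the vacuum.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")   # ~1.3e18 V/m
```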

[Figure: gravity2]

Above: how graviton exchanges cause both the attraction of masses which are nearby (compared to the size scale of the universe) and small (compared to the mass of the universe), and the repulsion of masses which are at relatively large distances (on the size scale of the universe) and large (on the mass scale of the universe). Think of a raisin cake baking: the dough exerts pressure and pushes nearby raisins together (because there is not much dough pressure between them, but lots of dough pressure acting on the other sides!) while pushing distant raisins apart. There is no genius required to see that the long-distance repulsion of mass inherent in the acceleration of the universe is caused by the same gravitons which cause ‘attraction’ locally.

Consider the force strength (coupling constant) in addition to the inverse-square law: Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is at least h-bar. Let the uncertainty in momentum be p = mc, and the uncertainty in distance be x = ct. Hence the product of momentum and distance, px = (mc)(ct) = (mc^2)t = Et = h-bar, where E is energy (Einstein’s mass-energy equivalence). This Heisenberg relationship (the product of energy and time equalling h-bar) is used in quantum field theory to determine the relationship between particle energy and lifetime: E = h-bar/t. The maximum possible range of a virtual particle is equal to its lifetime t multiplied by c. Now for the slightly clever bit:

px = h-bar implies (when remembering p = mc, and E = mc2):

x = h-bar /p = h-bar /(mc) = h-bar*c/E

so E = h-bar*c/x

when using the classical definition of energy as force times distance (E = Fx):

F = E/x = (h-bar*c/x)/x

= h-bar*c/x^2

So we get the force strength, and we just need to remember that this inverse-square law only holds for ranges shorter than the limiting distance a particle can go at nearly c (if relativistic) in the time allowed by the uncertainty principle, x = h-bar /p = h-bar /(mc) = h-bar*c/E. Notice that the force strength this treatment gives for repulsion between two electrons is 137.036… times the force given by Coulomb’s law. This can be explained by the vacuum polarization or virtual fermions around a charge core screening the core electric charge by the 137.036 factor, so that a proportionately lower electric charge and Coulomb force is observed at a distance beyond a few femtometres from an electron core. (This effect was experimentally confirmed by Koltick et al., in high energy electron scattering experiments published in the journal Physical Review Letters in 1997.)
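As a numerical sanity check of the 137.036 factor claimed above (and, for comparison, the roughly 10^40 electromagnetism/gravity ratio mentioned in the next paragraph), here is a minimal sketch using standard constants; the 1 femtometre separation is an arbitrary illustrative choice:

```python
# Sketch: compare F = hbar*c/x^2 (derived above) with Coulomb's law at the same
# separation; the ratio comes out close to 1/alpha = 137.036. Standard constants.
hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m/s
e = 1.602176634e-19          # C
k_e = 8.9875517873681764e9   # Coulomb constant, N m^2 C^-2
G = 6.67430e-11              # N m^2 kg^-2
m_e = 9.1093837015e-31       # kg (electron mass)
m_p = 1.67262192369e-27      # kg (proton mass)

x = 1.0e-15                  # separation: 1 femtometre (illustrative choice)

F_uncertainty = hbar * c / x**2          # F = h-bar*c/x^2, the result derived above
F_coulomb = k_e * e**2 / x**2            # Coulomb force between two electron charges

print(f"F (uncertainty estimate): {F_uncertainty:.3e} N")
print(f"F (Coulomb's law)       : {F_coulomb:.3e} N")
print(f"ratio                   : {F_uncertainty / F_coulomb:.3f}")   # ~137.04

# For the next paragraph: the Coulomb/gravity force ratio for an electron and a
# proton, the usual source of the ~10^40 figure (it evaluates to ~2e39).
print(f"Coulomb/gravity (e-p)   : {(k_e * e**2) / (G * m_e * m_p):.2e}")
```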

What is important to notice is that this treatment from quantum theory naturally gives the electromagnetic force result, not gravity, which is about 10^40 times weaker. So the cosmological acceleration estimated from the mainstream treatment of photon radiation from oscillating virtual electric charges in the ground state of the vacuum will exaggerate the graviton emission from oscillating virtual gravitational charges (masses) in the ground state of the vacuum by a similar factor.

The graviton exchanges between masses will cause expansion on cosmological distance scales and attraction of masses over smaller distances! If masses are significantly redshifted, the exchanged gravitons between them push them apart (cosmological acceleration, dark energy); if they aren’t receding they won’t exchange gravitons with a net force, so the gravitons which are involved then are those exchanged between each mass and the distant receding masses in the universe. Because nearby masses don’t exchange gravitons forcefully, they shield one another from gravitons coming from distant masses in the direction of the other (nearby) mass, and so get pushed together.

That ‘attraction’ and repulsion can both be caused by the same spin-1 gravitons (which are dark energy) can be understood by a semi-valid analogy, the baking raisin cake. As the cake expands, the distant raisins in it recede with the expansion of the cake, as if there is a repulsion between them. But nearby raisins in the cake will be pressed even closer together by the surrounding pressure from the dough on each side (the dough – not the raisins – is what physically expands as carbon dioxide is released in it from yeast or baking soda), because the raisins are being pressed on all sides apart from the sides facing nearby adjacent raisins. So because there is no significant pressure of dough inbetween them but plenty of dough pressure from other directions, nearby raisins shield one another and so get pressed closer together by the expansion of the surrounding dough! Therefore, the raisin cake analogy serves to show how one physical process – a pressure in space against mass-energy created by graviton exchange radiation – causes both the repulsion that accelerates the expansion of the universe on large scales, as well as causing the attraction of gravity on smaller distance scales where the masses involved are not substantially receding (redshifted) from one another.

[Figure: raisin-cake]

Above: think of the analogy of a raisin cake expanding due to the motion of the baking dough. Nearby raisins (with little or no dough between them) will be pushed closer together like ‘attraction’ because there is little or no dough pressure between them but a lot of dough pressure from other directions, while distant raisins will be accelerated further apart during the cooking because they will have a lot of expanding dough around them on all sides, causing a ‘repulsion’ effect. So there are two phenomena – cosmological repulsive acceleration and gravitation – provided for the price of one graviton field! No additional dark energy or CC, just the plain old gravitational field. I think this is missed by the mainstream because:

(1) they think LeSage came up with quantum gravity and was disproved (he didn’t, and the objections to his ideas arose because he proposed a gas of material particles rather than exchanged radiation, i.e. gravitons), and

(2) they believe the false Pauli-Fierz ‘proof’ that gravitons exchanged between 2 masses, in order to cause them to attract, must suck, which implies spin-2 suckers.

Actually the Pauli-Fierz proof is fine if the universe only contains 2 masses which ‘attract’. Problem is, the universe doesn’t just contain 2 masses. Instead, we’re surrounded by masses, clusters of immense receding galaxies with enormous redshift and accelerating with a large outward force away from us in all directions, and there is no mechanism to stop graviton exchanges between those masses and the two little masses in our study. As the gravitons propagate from such distant masses to the two little nearby ones we are interested in, they converge (not diverge), so the effects of the distant masses are larger (not smaller) than those of nearby masses. This destroys the mainstream ‘proof’ using path integrals that aims to show that spin-2 gravitons are required to provide universal attraction, because the path integral is no longer that between just two lumps of mass-energy. It must take account of all the mass-energy in the whole universe, as in Fig. 1 above. Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has never been observed!

As explained in the earlier blog post on path integrals, for the low energy situations that constitute long-range force field effects, you don’t have pair production loops, so there is only one kind of simple (‘tree’ type) Feynman diagram (without loops and with just a single interaction vertex) involved in the path integral, and the integral is just summing that one kind of interaction over all geometric possibilities to reproduce classical physics: this is exactly what we do in Fig. 1 above. (Feynman shows in his book QED that for low energy physics, the path integral formulation reduces to the classical limit of simply finding the path of least action for a light ray where most paths have different phases which cancel out; the case of spin-1 quantum gravity by analogy reduces to the situation whereby most graviton exchanges produce force effects that geometrically cancel out, so the resultant is due to asymmetry and is simple to calculate.)

Once you include the masses of the surrounding universe in the path integral, the whole basis of the Pauli-Fierz proof evaporates; gravitons no longer need to be suckers and thus don’t need to have a spin of 2. Instead of having spin-2 gravitons sucking 2 masses together in an otherwise empty universe, you really have those masses being pushed together by graviton exchanges with the immense masses in the surrounding universe. This is so totally obvious, it’s amazing why the mainstream is so obsessed with spin-2 suckers. (Probably because string theory is the framework for spin-2 suckers.)

‘The problem is not that there are no other games in town, but rather that there are no bright young players who take the risk of jeopardizing their career by learning and expanding the sophisticated rules for playing other games.’

– Prof. Bert Schroer, http://arxiv.org/abs/physics/0603112, p46

‘It is said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’

– A. S. Eddington, Space Time and Gravitation, Cambridge University Press, 1920, p64. This problem continues with Witten’s stringy hype:

‘String theory has the remarkable property of predicting gravity.’ [He means spin-2 gravitons, which don’t lead to any facts about gravity.]

– Edward Witten, superstring 10/11 dimensional M-theory originator, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996.

Spin of gauge boson

‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’ – Conclusion of the paper by M. Fierz and W. Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, Proc. Roy. Soc. London., volume A173, pp. 211-232 (1939). [Notice that Pauli did make errors, such as predicting in a famous 4 December 1930 letter that the neutrino has the mass of the electron!]

To explain the mainstream spin-2 graviton idea: any particle having spin 2 will look identical after a 180 degree rotation in physical space, which returns the particle to its original form; in general, a spin-n particle needs to be rotated by 360/n degrees to be returned to its original state. The spin of a particle dictates whether it obeys Bose-Einstein statistics (which applies to bosons, i.e. where n is an integer), allowing particles to condense together into the same state, or Fermi-Dirac statistics (which applies to fermions, i.e. where n is a half-integer), where particles can only pair up with opposite spins and cannot individually condense into the same state as bosons do. However, when two half-integer fermions pair up together, either as individual free particles or as the single electrons bound in the outer orbits of atoms, the resulting pair of fermions behaves as a boson if both are in exactly the same energy state: this happens for example in helium-4, whose atoms (containing paired electrons and an even number of nucleons) behave as bosons and form a zero-viscosity “superfluid” at very low temperatures, about 2 Kelvin or less, where they condense into the same “ground” energy state (the so-called “Bose-Einstein condensate”). Similarly in superconductivity, two conduction electrons pair up to form a “Cooper pair” of electrons, which behaves like a Bose-Einstein condensate, moving with very little resistance!

  1. For spin-(1/2) particles such as electrons, the particle is like a Mobius strip loop (with half a twist, so that both surfaces on the strip are connected into one surface on the loop) and so it needs to be rotated by 720 degrees to be restored to its original form.
  2. For spin-1 particles such as photons, the particle is simple and needs only be rotated by 360 degrees to be returned to its original state.
  3. For spin-2 particles such as the mainstream stringy graviton idea, the particle needs to be rotated by only 180 degrees to be returned to its original state.

The idea is that when spin-2 gravitons are exchanged between 2 masses, A and B, those going from A to B will be in the same state as those going from B to A, because they differ by only a 180 degree rotation, which makes them indistinguishable and produces an always-attractive force. However, as explained above, this idea neglects other masses in the universe, and requires extra spatial dimensions which can’t be observed, so that the compactification of the unobserved spatial dimensions can be achieved in a vast number of different ways, precluding any possibility of making falsifiable predictions.

Crackpotism and the spin-2 graviton of stringy theory

Regarding Pauli’s crackpot spin-2 graviton theory, Pauli also made an error in predicting the neutrino: he thought it had the mass of the electron! But being wrong was better than being ‘not-even-wrong’ like the stringy landscape of Witten and others. See Pauli’s original letter of 4 Dec 1930 predicting neutrinos: http://wwwlapp.in2p3.fr/neutrinos/aplettre.html

This is significant because string theorists often falsely claim that Pauli’s prediction of the neutrino was speculative or apparently uncheckable (notice that Pauli’s letter ends by saying to the experimentalists: ‘dear radioactive people, look [test] and judge’). Pauli’s evidence for predicting the neutrino (which he called the ‘neutron’, before Chadwick used that word in 1932 to name a different, really massive neutral particle) was beta decay. By 1930 the beta spectrum was known, as was the mass change during beta decay: the beta particle emitted carries on average only about 30% of the energy released. Hence about 70% on average must be carried by a so-far unobserved particle. By conservation of energy, angular momentum and charge, Pauli could predict its properties.

Feynman explains that there was only one rival explanation to the neutrino on p. 75 of his book The Character of Physical Law (Penguin, London, 1992):

‘Two possibilities existed. … it was proposed by Bohr for a while that perhaps the conservation law worked only statistically, on the average. But it turns out now that the other possibility is the correct one … there is something else coming out [besides a beta particle], something which we now call an anti-neutrino.’

This isn’t like the landscape of 10^500 vacua ‘predicted’ by the speculations of string theory. Apart from neutrinos, quarks and atoms are claimed by string theorists as examples of stringy-type speculative predictions made with no evidence behind them. But there was scientific evidence for quarks, from the fact that the electrically neutral neutron has a magnetic moment from its spin (indicating that it contains electric charges) and from the SU(3) symmetry of hadron properties; SU(3) symmetry also correctly made predictions such as the omega-minus baryon. For atoms, see Glasstone’s Sourcebook on Atomic Energy, Van Nostrand, 2nd ed., London, 1958, p. 2:

‘… Dalton made the theory quantitative … The atomic theory of the classical thinkers was somewhat in the nature of a vague philosophical speculation, but the theory as enunciated by Dalton … acted as a guide to further experimentation …’

DISCUSSION OF STRINGY SPIN-2 GRAVITON AND STRING THEORY AT http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/

‘I hear this “string theory demands the graviton” thing a lot, but the only explanation I’ve seen is that it predicts a spin-2 particle.’ – Rob Knop, May 24th, 2007 at 12:37 pm, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28901

‘A massless spin 2 particle is pretty much required to be a graviton by some results that go back to Feynman, I think.’ – String theorist Aaron Bergman, May 24th, 2007 at 12:44 pm, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28902 (Actually Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has not been observed.)

‘Rob, in addition to all the excellent reasons why a massless spin 2 particle must be the graviton, there are also explicit calculations demonstrating that forgone conclusion …’ – Moshe, May 24th, 2007 at 2:12 pm, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28891

‘Aaron Bergman wrote: ‘A massless spin 2 particle is pretty much required to be a graviton by some results that go back to Feynman, I think.’

‘Hmm. That sounds like a “folk theorem”: a theorem without assumptions, proof or even a precise statement.

‘Whatever these results are, they need to have extra assumptions. … Well, you can write down lots of ways a spin-2 particle can interact with other fields. Most of these have nothing to do with gravity. A graviton has got to interact with every other field — and in a very specific way.

‘Of course, most of these ways give disgusting quantum field theories that probably don’t make sense: they’re nonrenormalizable.

‘But, so is gravity!

‘So, it would be interesting to look at the results you’re talking about, and see what they actually say. Maybe the Einstein-Hilbert action is the “least nonrenormalizable” way for a spin-2 particle to interact with other particles… whatever that means?’ – Professor John Baez, May 25th, 2007 at 10:55 am, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28868

‘B writes: ‘The spin 2 particle can only couple to the energy-momentum tensor – as gravity does.’
Oh? Why?’ – Professor John Baez, May 25th, 2007 at 12:23 pm, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28878

‘…the point is that the massless spin-2 field is described by a symmetric two-index [rank-2] tensor with a certain gauge symmetry. … So its source must be a symmetric divergenceless two-index tensor. Basically you don’t have that many of them lying around, although I don’t know the rigorous statement to that effect.’ – Professor Sean Carroll, May 25th, 2007 at 12:31 pm, http://blogs.discovermagazine.com/cosmicvariance/2007/05/24/string-theory-not-dead-yet/#comment-28879

Dr Christine Dantas then draws attention at http://egregium.wordpress.com/2007/05/24/is-there-more-to-gravity-than-gravitons/ to the following paper:

T. Padmanabhan, ‘From Gravitons to Gravity: Myths and Reality’, http://arxiv.org/abs/gr-qc/0409089, which states on page 1:

‘There is a general belief, reinforced by statements in standard textbooks, that: (i) one can obtain the full non-linear Einstein’s theory of gravity by coupling a massless, spin-2 field h_ab self-consistently to the total energy momentum tensor, including its own; (ii) this procedure is unique and leads to Einstein-Hilbert action and (iii) it only uses standard concepts in Lorentz invariant field theory and does not involve any geometrical assumptions. After providing several reasons why such beliefs are suspect — and critically re-examining several previous attempts — we provide a detailed analysis aimed at clarifying the situation. First, we prove that it is impossible to obtain the Einstein-Hilbert (EH) action, starting from the standard action for gravitons in linear theory and iterating repeatedly. … Second, we use the Taylor series expansion of the action for Einstein’s theory, to identify the tensor S_ab, to which the graviton field h_ab couples to the lowest order (through a term of the form S^ab h_ab in the Lagrangian). We show that the second rank tensor S_ab is not the conventional energy momentum tensor T_ab of the graviton and provide an explanation for this feature. Third, we construct the full nonlinear Einstein’s theory with the source being spin-0 field, spin-1 field or relativistic particles by explicitly coupling the spin-2 field to this second rank tensor S_ab order by order and summing up the infinite series. Finally, we construct the theory obtained by self consistently coupling h_ab to the conventional energy momentum tensor T_ab order by order and show that this does not lead to Einstein’s theory.’

Now we will check a predictive proof of the acceleration of the universe which originated before the observation that the universe is accelerating, a = Hc.

Notice that the outward acceleration of repulsion is opposed by the inward acceleration due to the attraction of gravity, so the data show an absence of net acceleration. Because cosmologists knew from the Friedmann metric of general relativity that the recession should be slowing down (expansion should be decelerating) at large distances, the absence of that deceleration implied the presence of an acceleration. Thus, as Nobel Laureate Phil Anderson says, the observed fact regarding the imaginary cosmological constant and dark energy is merely that:

“… the flat universe is just not decelerating, it isn’t really accelerating …”

http://cosmicvariance.com/2006/01/03/danger-phil-anderson

This flat spacetime occurs where the outward acceleration of the universe offsets the inward acceleration due to gravitation, making spacetime flat (no acceleration/‘curvature’). However, this balance with exactly flat spacetime only applies to receding matter at a particular distance from us (a few thousand million light years): for bigger distances than that, the cosmological acceleration exceeds gravitation and those receding objects have a net acceleration.

[Figure: fig1]
Fig. 2 – the basis of the Hubble acceleration. This figure comes from a previous post, here. But now I will add some clarifying comments about it. The diagram distinguishes the time since the big bang (which from our perspective on the universe is about 13,700 million years) from the time past that we see when we look to greater distances, due to the delay caused by the transit time of the light radiation in its propagation to us observers here on Earth from a large distance. It’s best to build physical models upon directly observed facts like the Hubble recession law, not upon the speculative metrics of general relativity, which firstly is at best only an approximation to quantum gravity (which will differ from general relativity because quantum field gravitons will be subject to redshift when exchanged between receding masses in the expanding universe), and secondly depends on indirect observations such as theories of unobserved dark matter and unobserved dark energy to overcome observational anomalies. The observed Hubble recession law states that recession v = HR, where R = cT, T being time past (when the light was emitted), not the time after the big bang for the Earth.

As shown in Fig. 2, this time past T is related to time since the big bang t for the distance of the star in question by the simple expression: t + T = 1/H, for flat spacetime as has been observed since 1998 (the observed acceleration of the universe cancels gravitational deceleration of distant objects, so there is no curvature on large distance scales).

Hence:

v = HR = HcT = Hc[(1/H) – t] = c – (Hct)

a = dv/dt = d[c – (Hct)]/dt = -Hc = 6×10^-10 m/s^2

which is the cosmological acceleration of the universe (since observed to be real, from supernova redshifts!). E.g., Professor Lee Smolin writes in the chapter ‘Surprises from the Real World’ in his 2006 book The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next (Allen Lane, London), page 209:

‘… c^2/R [which for R = ct = c/H gives a = c^2/(ct) = Hc, the result we derived theoretically in 1996, unlike Smolin’s ad hoc dimensional analysis numerology of 2006]… is in fact the acceleration by which the rate of expansion of the universe is increasing – that is, the acceleration produced by the cosmological constant.’

The figure 6×10^-10 m/s^2 is the outward acceleration which Smolin quotes as c^2/R. Full credit to Smolin for actually stating what the acceleration of the universe was measured to be! There are numerous popular media articles, books and TV documentaries about the acceleration of the universe which are all so metaphysical that they don’t even state that it is measured to be 6×10^-10 m/s^2! On the next page, 210, Smolin however ignores my published prediction of the cosmological acceleration two years before its discovery and instead discusses an observation by Mordehai Milgrom, who ‘published his findings in 1983, but for many years they were largely ignored’. Smolin explains that galactic rotation curves obey Newtonian gravitation near the middle of the galaxy and only require unobserved ‘dark matter’ near the outside: Milgrom found that the radius where Newtonian gravity broke down and required ‘dark matter’ assumptions was always where the gravitational acceleration was 1.2×10^-10 m/s^2, on the order of the cosmological acceleration of the universe. Smolin comments on page 210:

‘As long as the [centripetal] acceleration of the star [orbiting the centre of a galaxy] exceeds this critical value, Newton’s law seems to work and the acceleration predicted [by Newton’s law] is the one seen. There is no need to posit any dark matter in these cases. But when the acceleration observed is smaller than the critical value, it no longer agrees with the prediction of Newton’s law.’

(The theoretical derivation of the acceleration we have given above is valid for all matter regardless of distance, but as we have noted there is a mechanism involved: gravitons only produce a net repulsive acceleration where the masses are extremely large, though in some cases this could influence the galactic rotation curves where the centripetal accelerations involved are of the same order of magnitude as the cosmological acceleration. Milgrom’s 1983 ‘Modified Newtonian Dynamics (MOND)’ claimed that Newton’s law only holds down to accelerations of a = MG/r^2 = 1.2×10^-10 m/s^2, and that for lower accelerations the gravitational acceleration falls only inversely with distance rather than as the inverse-square law. This would get rid of the need for dark matter. But the actual law including cosmological acceleration may be more like a = (MG/r^2) – Hc, where the cosmological repulsive acceleration Hc is given a minus sign because it acts in the opposite direction to gravitational attraction.)
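To put rough numbers on the two acceleration scales discussed above, here is a sketch with an assumed Hubble constant of about 70 km/s/Mpc and a purely hypothetical galaxy mass (the specific values are illustrative assumptions, not measurements):

```python
# Rough numerical sketch: the cosmological acceleration a = Hc, Milgrom's critical
# acceleration a0, and the radius at which an assumed galaxy's Newtonian
# acceleration GM/r^2 falls to a0.
import math

G = 6.67430e-11       # m^3 kg^-1 s^-2
c = 2.99792458e8      # m/s
H = 2.27e-18          # Hubble parameter in SI units (~70 km/s/Mpc) - an assumed value
a0 = 1.2e-10          # Milgrom's critical acceleration, m/s^2

a_cosmo = H * c
print(f"a = Hc = {a_cosmo:.1e} m/s^2")       # ~6.8e-10, same order as the 6e-10 quoted above

# Hypothetical galaxy with 1e11 solar masses of enclosed mass (a round-number assumption):
M = 1e11 * 1.989e30                           # kg
r_crit = math.sqrt(G * M / a0)                # radius where GM/r^2 has dropped to a0
print(f"GM/r^2 falls to a0 at r = {r_crit / 3.086e19:.0f} kpc")   # of order 10 kpc
```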

This is a simple result obtained in May 1996 and published via Electronics World in October 1996 (journals like Classical and Quantum Gravity and Nature censored it because it leads to a quantum gravity theory which is different from mainstream-defended string theory: one that makes checkable predictions and survives tests, unlike string theory). It was only in 1998 that Dr Saul Perlmutter finally made the discovery, using CCD telescopes, that yes, indeed, the universe is accelerating as predicted in May 1996, although for an obvious reason (ignorance) he did not refer to the prediction made earlier! The editors of Nature, which published Perlmutter, have since 1998 refused to publish the fact that the observation confirmed the earlier prediction! This is the problem with the scientific method: the politics of censorship ban radical advances. Relevant to this fact is Professor Freeman Dyson’s observation in his 1981 essay Unfashionable Pursuits (quoted by Tony Smith):

‘… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …’

Smith quotes Professor Richard P. Feynman complaining about how he was censored out by famous physicists Teller, Dirac and Bohr when he tried to explain his path integrals formulation of quantum electrodynamics to them at the 1948 Pocono conference:

‘… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right. … For instance, take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory.

‘I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … I gave up, I simply gave up …”.’ (The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra, Oxford University Press, 1994, pp. 245-248.)

Teller dismissed Feynman’s work because it ignored the exclusion principle; Dirac dismissed it because it didn’t have a unitary operator to make the sum of probabilities for all alternatives always equal to 1 (only the final result of the path integral was normalized to a total probability of 1, so that only one electron arrives at, say, the screen in the double-slit experiment: clearly the whole basis of the path integral seems to violate unitarity at intermediate times, when the electron is supposed to take all paths like an infinite number of particles, and thus interfere with ‘itself’ before arriving – as a single particle – at the screen!); and Bohr dismissed it because he claimed Feynman didn’t know the uncertainty principle, and claimed that the uncertainty principle dismissed any notion of path integrals representing the trajectory of an electron!

As a result of such dismissive peer-review, Feynman’s brilliant paper reformulating quantum field theory, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, was actually rejected for publication by the Physical Review (see http://arxiv.org/PS_cache/quant-ph/pdf/0004/0004090v1.pdf page 2) before finally being published instead by Reviews of Modern Physics (v. 20, 1948, p. 367).

Feynman concluded: ‘… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”.’ (The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra, Oxford University Press, 1994, pp. 245-248.)

Feynman in 1985 in his book QED explained clearly in a footnote that the uncertainty principle is not a separate law from path integrals so Bohr’s objection was invalid; interferences due to the random virtual photon exchanges between the charges in the atom – which cause the non-classical Coulomb force – cause the uncertainty in the position of an electron within an atom!

But if this kind of ignorant dismissal and rejection from numerous top ‘experts’ can happen to Feynman’s path integrals, it surely can happen to any radical-enough-to-be-helpful quantum gravity ideas! Consequently, Feynman denounced such ‘expert’ opinion/belief when it is not based on facts:

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The Trouble with Physics, 2006, p. 307).

‘Science is the belief in the ignorance of experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.

Teller, Dirac and Bohr had a very easy job dismissing Feynman’s path integrals; they simply picked out bits of his work they couldn’t grasp, falsely declared those bits to be wrong or nonsense, and then ignored the rest!

Against this kind of unconstructive ‘criticism’ [do you really suffer a ‘criticism’ if someone falsely attacks you and uses their prestige to silence you from making any reply or defending yourself against their ignorant assertions, or are they really the ones who are making fools of themselves? – the answer to this will depend on whether there are any bystanders of influence and if they can grasp the facts or are duped, or unwilling/unable to help science], Feynman didn’t see any point in responding. If the egos of other people prevent them from taking a real interest in your work, if those other people have more to gain to their already massive egos by dismissing little people than by listening to those they consider to be little people, what is the point in trying to talk to them? You would need to be a politician to diplomatically nurture their egos enough to get them to invest a moment in your advance. They won’t do it willingly; they don’t do it for the sake of physics. They live in a string community that they call physics, a community which exists to offer help and assistance to group members, which believes in speculative groupthink and isn’t concerned with factual predictions that have been confirmed, and which seeks to defend itself and seek status by attacking others.

Professor Freeman Dyson in a dramatic interview on Google Video explains how in addition to the nonsensical egotistical ‘objections’ by Teller, Dirac and Bohr, the famous physicist Oppenheimer also tried to destroy Dyson’s efforts to explain Feynman’s work, using the tactic of meaningless, rude interruptions to his lecture.

[Figure: fig3]

Above: Dyson explains how leading physicist Oppenheimer was a ‘bigoted old fool’ in egotistically sneering at the wording of new ideas and refusing to listen to new ideas outside his area of interest. Dyson and Bethe had to struggle to get him to listen, in order to get Feynman’s work taken seriously. Without the eventual backing of Oppenheimer, it would have remained hidden from mainstream attention. [This is quite common mainstream methodology to secure continued attention by stamping on alternative ideas, contrary to the claim certain string theorists make that there would be an overnight scientific revolution if only someone came up with a theory of quantum gravity that works better than string theory – see the comments section of this post.]

Just to add another example, apart from Feynman’s path integral, of an idea which is now central to quantum field theory yet which started off being ridiculed and objected to, take Yang-Mills theory which is central to the Standard Model of particle physics! In this case, Pauli objected to C. N. Yang’s presentation of the Yang-Mills theory at Princeton in February 1954 so strongly that Yang had to stop and sit down, although – to give the devil his due – Oppenheimer was more reasonable at that time, and encouraged Yang to continue his lecture. Yang writes:

‘Wolfgang Pauli (1900-1958) was spending the year in Princeton, and was deeply interested in symmetries and interactions…. Soon after my seminar began … Pauli asked, “What is the mass of this field …?” I said we did not know. Then I resumed my presentation but soon Pauli asked the same question again. I said something to the effect that it was a very complicated problem, we had worked on it and had come to no definite conclusions.

‘I still remember his repartee: “That is not sufficient excuse”. I was so taken aback that I decided, after a few moments’ hesitation, to sit down. There was general embarrassment. Finally Oppenheimer said, “We should let Frank proceed”. I then resumed and Pauli did not ask any more questions during the seminar.’

This episode is somewhat reminiscent of Samuel Cohen’s account of Oppenheimer’s own behaviour at Los Alamos when a nervous physicist – Dick Erlich – was trying to give a lecture:

‘On another occasion, to expose another side of Oppenheimer’s personality, which could be intolerant and downright sadistic, he showed up at a seminar to hear Dick Erlich, a very bright young physicist with a terrible stuttering problem, which got even worse when he became nervous. Poor Dick, who was having a hard enough time at the blackboard explaining his equations, went into a state of panic when Oppenheimer walked in unexpectedly. His stuttering became pathetic, but with one exception everyone loyally stayed on trying to decipher what he was trying to say. This exception was Oppenheimer, who sat there for a few minutes, then got up and said to Dick: “You know, we’re all cleared to know what you’re doing, so why don’t you tell us.” With that he left, leaving Dick absolutely devastated and unable to continue. Also devastated were the rest of us who worshipped Oppenheimer, for very good reasons, and couldn’t believe he could act so cruelly.’

– S. Cohen, ‘F— You! Mr. President: Confessions of the Father of the Neutron Bomb’, 3rd Edition, 2006, page 24.

Path integrals for fundamental forces in quantum field theory

[Figure: zee]

Above: the path integral performed for the Yukawa field, the simplest system in which the exchange of massive virtual pions between two nucleons causes an attractive force. Virtual pions will exist all around nucleons out to a short range, and if two nucleons get close enough for their virtual pion fields to overlap, they will be attracted together. This is a little like LeSage’s idea where massive particles push charges together over a short range (the range being limited by the diffusion of the massive particles into the ‘shadowing’ regions). (See page 26 of Zee for discussion, and page 29 for integration technique. But we will discuss the components of this and other path integrals in detail below.) Zee comments on the result above on page 26: “This energy is negative! The presence of two … sources, at x1 and x2, has lowered the energy. In other words, two sources attract each other by virtue of their coupling to the field … We have derived our first physical result in quantum field theory.” This 1935 Yukawa theory explains the strong nuclear attraction between nucleons in a nucleus by predicting that the exchange of pions (discovered later with the mass Yukawa predicted) would overcome the electrostatic repulsion between the protons, which would otherwise blow the nucleus apart.

But the way the mathematical Yukawa theory has been generalized to electromagnetism and gravity is the basic flaw in today’s quantum field theory: to analyze the force between two charges, located at positions in space x1 and x2, the path integral is done including only those two charges, and just ignoring the vast number of charges in the rest of the universe which – for infinite range inverse-square law forces – are also exchanging virtual gauge bosons with x1 and x2!

A potential energy which varies inversely with distance implies a force which varies as the inverse-square law of distance, because work energy W = Fr, where force F acts over distance r, hence F = W/r, and since energy W is inversely proportional to r, we get F ~ 1/r^2. (Distances x in the integrand result in the radial distance r in the result for potential energy above.) In the case of gravity and electromagnetism, the effective mass of the gauge boson in this equation is m ~ 0, which gets rid of the exponential term. (Spin-2 gravitons are supposed to have mass to enable graviton-graviton interactions to enhance the strength of the graviton interaction at high energy in strong fields, enough to “unify” the strength of gravity with Standard Model forces near the Planck scale – an assumption about “unification” which is physically in error, see Figures 1 and 2 in the blog post https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/ – and additionally, we’ve shown why spin-2 gravitons are based on error; in any case, in the Standard Model all mass arises from an external vacuum “Higgs” field and is not intrinsic.) The exponential term is however important in the short-range weak and strong interactions. Weak gauge bosons are supposed to get their mass from some vacuum (Higgs) field, while massless gluons bind quarks into pions, and the pion-mediated attraction of nucleons involves pions which have mass, so the effective field theory for nuclear physics is of the Yukawa sort.
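A minimal numerical sketch of the point just made (arbitrary illustrative units, not a calculation from Zee’s book): the Yukawa-type potential energy W ~ –e^(–mr)/(4πr) has a finite range when m > 0, but in the massless limit it reduces to a 1/r potential, whose gradient gives an inverse-square force.

```python
# Illustrative sketch: Yukawa potential energy and the force derived from it,
# showing that the massless limit recovers the inverse-square law (F*r^2 constant).
import numpy as np

r = np.linspace(0.5, 10.0, 2000)          # radial distance, arbitrary units

def yukawa_potential(r, m):
    """W(r) ~ -exp(-m*r)/(4*pi*r); m is the mass of the exchanged particle."""
    return -np.exp(-m * r) / (4 * np.pi * r)

for m in (1.0, 0.1, 0.0):                 # massive -> short range; massless -> 1/r
    W = yukawa_potential(r, m)
    F = -np.gradient(W, r)                # radial force component F = -dW/dr
    Fr2 = F * r**2
    # For m = 0, F*r^2 is (numerically almost) constant: the inverse-square law.
    print(f"m = {m}: relative spread of F*r^2 = {(Fr2.max() - Fr2.min()) / abs(Fr2).mean():.3f}")
```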

A path integral calculates the amplitude for an interaction which can occur by a variety of different possible paths through spacetime. The numerator of the path integral integrand above is the phase factor representing the relative amplitude of a particular path: the exponential term e^(iS), which arises from the time-evolution factor e^(-iHT), where H is the Hamiltonian (which for a free particle of mass m simply represents the kinetic energy of the particle, H = E = p^2/(2m) = (1/2)mv^2; if a force field is present, the potential energy V due to the field is added to the kinetic energy in the Hamiltonian, H = [p^2/(2m)] + V, whereas it is subtracted in the Lagrangian, L = [p^2/(2m)] – V), T is the time, and S is the action for the particular path measured in quantum action units of h-bar (the action S is the integral of the Lagrangian over time for the particular path, S = ∫L dt).

The origin of this phase factor for the amplitude of a particular path is simply the time-dependent Schroedinger equation of quantum mechanics: i(h-bar)dΨ/dt = HΨ, where H is the Hamiltonian (energy operator). Solving this gives the wavefunction amplitude Ψ ∝ e^(-iHt/h-bar), or simply e^(-iHt) if we express Ht in units of h-bar. Behind the mathematical symbolism it’s extremely simple physics: just a description of the way that waves reinforce and add together if they are in phase, or cancel out if they are out of phase.
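To make the ‘reinforce or cancel’ point concrete, here is a minimal numerical sketch (illustrative only, with arbitrary units and a deliberately small value of h-bar; it is not taken from Feynman’s or Zee’s texts) summing the phase factor exp(iS/h-bar) over a one-parameter family of trial paths for a free particle. Contributions from paths near the least-action (classical) path add coherently, while those far from it oscillate rapidly and largely cancel:

```python
# Minimal sketch: sum the path-integral phase factor exp(iS/hbar) over trial paths
# x_A(t) = (L/T)*t + A*sin(pi*t/T) for a free particle going from x=0 at t=0 to
# x=L at t=T. A=0 is the classical (least-action) straight-line path.
import numpy as np

m, L, T = 1.0, 1.0, 1.0          # arbitrary illustrative values
hbar = 0.02                      # small compared to the action scale, so the
                                 # classical-limit cancellation is visible
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def action(A):
    """S = integral of (1/2) m v^2 dt along the trial path with bulge amplitude A."""
    v = L / T + A * (np.pi / T) * np.cos(np.pi * t / T)   # dx/dt along the path
    return np.sum(0.5 * m * v**2) * dt

amplitudes = np.linspace(-1.0, 1.0, 801)
phases = np.exp(1j * np.array([action(A) for A in amplitudes]) / hbar)

# Paths near A=0 (stationary action) add coherently; the rest largely cancel.
near = np.abs(amplitudes) < 0.1
print("|sum of phases|, all paths :", abs(phases.sum()))
print("|sum of phases|, |A| < 0.1 :", abs(phases[near].sum()))
print("|sum of phases|, |A| >= 0.1:", abs(phases[~near].sum()))
```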

The denominator of the path integral integrand above is derived from the propagator, D(x), which Zee on page 23 describes as being: ‘the amplitude for a disturbance in the field to propagate from the origin to x.’ This amplitude for calculating a fundamental force using a path integral is constructed using Feynman’s basic rules for conservation of momentum (see page 53 of Zee’s 2003 QFT textbook).

1. Draw the Feynman diagram for the basic process, e.g. a simple tree diagram for Møller scattering of electrons via the exchange of virtual photons.
2. Label each internal line in the diagram with a momentum k and associate it with the propagator i/(k^2 – m^2 + iε). (Note that when k^2 = m^2, momentum k is “on shell” and is the momentum of a real, long-lived particle, but k can also take many values which are “off shell”, and these represent “virtual particles” which are only indirectly observable from the forces they produce. Also note that iε is infinitesimal and can be dropped where k^2 – m^2 is non-zero, see Zee page 26. A minimal numerical sketch of this propagator factor follows the list below.)
3. Associate each interaction vertex with the appropriate coupling constant for the type of fundamental interaction (electromagnetic, weak, gravitational, etc.) involved, and set the sum of the momentum going into that vertex equal to the sum of the momentum going out of that vertex.
4. Integrate the momentum associated with each internal line over the measure d^4k/(2π)^4.
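As promised in rule 2, here is a minimal sketch of that propagator factor (an illustration only, not code from any textbook); k2 below stands for the Lorentz square of the four-momentum, and the numerical values are arbitrary:

```python
def scalar_propagator(k2: float, m2: float, epsilon: float = 1e-6) -> complex:
    """Feynman propagator factor i/(k^2 - m^2 + i*epsilon) for one internal line."""
    return 1j / (k2 - m2 + 1j * epsilon)

# "On shell" (k^2 = m^2): the factor blows up, signalling a real, long-lived particle;
# "off shell" (k^2 != m^2): a finite value, representing a virtual particle.
print(abs(scalar_propagator(1.0, 1.0)))   # on shell: ~1e6 (limited only by epsilon)
print(abs(scalar_propagator(5.0, 1.0)))   # off shell: 0.25
```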

Clearly, this kind of procedure is feasible when a few charges are considered, but is not feasible at step 1 when you have to include all the charges in the universe! The Feynman diagram would be way too complicated if trying to include 10^80 charges, which is precisely why we have used geometry to simplify the graviton exchange path integral when including all the charges in the universe.

Zee gives some interesting physical descriptions of the way that forces are mediated by the exchange of virtual photons (which seem to interact in some respects like real photons scattering off charges to impart forces, or being created at a charge, propagating to another charge, being annihilated on absorption by that charge, with a fresh gauge boson then being created and propagating back to the first charge again) on pages 20, 24 and 27:

“Somewhere in space, at some instant in time, we would like to create a particle, watch it propagate for a while, then annihilate it somewhere else in space, at some later instant in time. In other words, we want to create a source and a sink (sometimes referred to collectively as sources) at which particles can be created and annihilated.” (Page 20.)

“We thus interpret the physics contained in our simple field theory as follows: In region 1 in spacetime there exists a source that sends out a ‘disturbance in the field’, which is later absorbed by a sink in region 2 in spacetime. Experimentalists choose to call this disturbance in the field a particle of mass m.” (Page 24.)

“That the exchange of a particle can produce a force was one of the most profound conceptual advances in physics. We now associate a particle with each of the known forces: for example, the [virtual, 4-polarizations] photon with the electromagnetic force, and the graviton with the gravitational force; the former is experimentally well established [virtual photons push measurably nearby metal plates together in the Casimir effect] and the latter, while it has not yet been detected experimentally, hardly anyone doubts its existence. We … can already answer a question smart high school students often ask: Why do Newton’s gravitational force and Coulomb’s electric force both obey the 1/r^2 law?

“We see from [E = –e^(–mr)/(4πr)] that if the mass m of the mediating particle vanishes, the force produced will obey the 1/r^2 law.” (Page 27.)

The problem with this last claim Zee makes is that mainstream spin-2 gravitons are supposed to have mass, so gravity would have a limited range, but this is a trivial point in comparison to the errors already discussed in mainstream (spin-2 graviton) approaches to quantum gravity. Zee in the next chapter, Chapter I.5 “Coulomb and Newton: Repulsion and Attraction”, gives a slightly more rigorous formulation of the mainstream quantum field theory for electromagnetic and gravitational forces, which is worth study. It makes the same basic error as the 1935 Yukawa theory, in treating the path integral of gauge bosons between only the particles upon which the forces appear to act, thus inaccurately ignoring all the other particles in the universe which are also contributing virtual particles to the interaction!

Because of the involvement of mass with the propagator, Zee uses a trick from Sidney Coleman where you work through the electromagnetic force calculation using a photon mass m and then set m = 0 at the end, to simplify the calculation (to avoid dealing with gauge invariance). Zee then points out that the electromagnetic Lagrangian density L = –(1/4)F_μν F^μν (where F_μν = ∂_μ A_ν – ∂_ν A_μ, and A_μ(x) is the vector potential) has an overall minus sign in the Lagrangian, so that action is lost when there is a variation in time! Doing the path integral with this negative Lagrangian (with a small mass added to the photon to make the field theory work) results in a positive sign for the potential energy between two lumps of similar charge, so: “The electromagnetic force between like charges is repulsive!” (Zee, page 31.)

This is quite impressive and tells us that the quantum field theory gives the right result without fiddling in this repulsion case: two similar electric charges exchange gauge bosons in a relatively simple way with one another, and this process, akin to people firing objects at one another, causes them to repel (if someone fires something at you, they are forced away from you by the recoil and you are knocked away from them when you are hit, so you are both forced apart!). Notice that such exchanged virtual photons must be stopped (or shielded) by charges in order to impart momentum and produce forces! Therefore, there must be an interaction cross-section for charges to physically absorb (or scatter back) virtual photons, and this fact offers a simple alternative formulation of the Coulomb force quantum field theory using geometry instead of path integrals!

Zee then goes on to gravitation, where the problem – from his perspective – is how to get the opposite result for two similar-sign gravitational charges than you get for similar electric charges (attraction of similar charges, not repulsion!). By ignoring the fact that the rest of the mass in the universe is of like charge to his two little lumps of energy, and so is contributing gravitons to the interaction, Zee makes the mainstream error of having to postulate a spin-2 graviton for exchange between his two masses (in a non-existent, imaginary empty universe!) just as Fierz and Pauli had suggested back in 1939.

At this point, Zee goes into fantasy physics, with a spin-2 graviton having 5 polarizations being exchanged between masses to produce an always attractive force between two masses, ignoring the rest of the mass in the universe.

It’s quite wrong of him to state on page 34 that because a spin-2 graviton Lagrangian results in universal attraction for a totally false, misleading path integral of graviton exchange between merely two masses, “we see that while like [electric] charges repel, masses [gravitational charges] attract.” This is wrong because even neglecting the error I’ve pointed out of ignoring gravitational charges (masses) all around us in the universe, Zee has got himself into a catch 22 or circular argument: he first assumes the spin-2 graviton to start off with, then claims that because it would cause attraction in his totally unreal (empty apart from two test masses) universe, he has explained why masses attract. However, the only reason why he assumes a spin-2 graviton to start off with is because that gives the desired properties in the false calculation! It isn’t an explanation. If you assume something (without any physical evidence, such as observation of spin-2 gravitons) just because you already know it does something in a particular calculation, you haven’t explained anything by then giving that calculation which merely is the basis for the assumption you are making! (By analogy, you can’t pull yourself up in the air by tugging at your bootstraps.)

This perversion of physical understanding gets worse. On page 35, Zee states:

“It is difficult to overstate the importance (not to speak of the beauty) of what we have learned: The exchange of a spin 0 particle produces an attractive force, of a spin 1 particle produces a repulsive force, and of a spin 2 particle an attractive force, realized in the hadronic strong interaction, the electromagnetic interaction, and the gravitational interaction, respectively.”

Notice the creepy way that the spin-2 graviton – completely unobserved in nature – is steadily promoted in stature as Zee goes through the book, ending up the foundation stone of mainstream physics, string theory:

‘String theory has the remarkable property of predicting [spin-2] gravity.’ – Professor Edward Witten (M-theory originator), ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996.

“For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy … It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” [Emphasis added.]

– Dr Peter Woit, http://arxiv.org/abs/hep-th/0206135, page 52.

The consequence of Witten’s spin-2 graviton mindset (adopted by string theorists without any reservations) is that when I submitted a paper to Classical and Quantum Gravity ten years ago (by post), the editor sent it for ‘peer review’ and received a rejection from an anonymous ‘referee’, which he forwarded to me: just an ignorant attack which ignored the physics altogether and claimed the paper was wrong simply because it didn’t connect with the spin-2 graviton of string theory!

Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook

Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories.

Yours sincerely,
Stanley G. Brown, Editor,
Physical Review Letters

Now, why has this nice genuine guy still not published his personally endorsed proof of what is a “currently accepted” prediction for the strength of gravity? Will he ever do so?

“… in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory.”

– Sir Roger Penrose, The Road to Reality, Jonathan Cape, London, 2004, page 896.

Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has never been observed! Despite this, the censorship of the facts by mainstream “stringy” theorists persists:

nigel says:
February 24, 2006 at 5:26 am

http://arxiv.org/help/endorsement

‘We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’

They don’t want any really strong evidence of dissent. This filtering means that the arxiv reflects pro-mainstream bias. It sends out a powerful warning message that if you want to be a scientist, don’t heckle the mainstream or your work will be deleted.

In 2002 I failed to get a single brief paper about a crazy-looking yet predictive model on to arxiv via my university affiliation (there was no other endorsement needed at that time). In emailed correspondence they told me to go get my own internet site if I wasn’t contributing to mainstream [stringy] ideas.

Now let’s examine what Feynman (1918-88) says about this mechanism. In November 1964, the year before receiving the Nobel Prize for path integrals, Feynman gave a series of lectures at Cornell University on ‘The Character of Physical Law’, which were filmed by the BBC for transmission on BBC2 TV in 1965. The transcript has been published as a book by the BBC in 1965 and MIT press in 1967, ‘The Character of Physical Law,’ and is still in print as a publication of Penguin Books in England.

I’ll discuss the Penguin Books edition. In the first lecture, The Law of Gravitation, an example of Physical Law, Feynman explains that Kepler [1571-1630, the discoverer of the three laws of planetary motion and assistant to Brahe] used the heuristic scientific method – trial and error – to discover the way planets go around the sun, saying on page 16:

‘At one stage he thought he had it … they went round the sun in circles with the sun off centre. Then Kepler noticed that … Mars was eight minutes of arc off, and he decided that this was too big for Tycho Brahe [1546-1601, the astronomer who collected Kepler’s data] to have made an error, and that this was not the right answer. So because of the precision of the experiments, he was able to proceed to another trial and ultimately found out three things. … the planets went in ellipses around the sun with the sun as a focus. … the area that is enclosed in the orbit of the planet and the two lines [from sun to planet] that are separated by the planet’s position three weeks apart is the same, in any part of the orbit. So that the planet has to go faster when it is closer to the sun, and slower when it is farther away, in order to show precisely the same area [equal areas are swept out in equal times].

‘Some several years later, Kepler found a third rule … It said that the time the planet took to go all around the sun … varied as the square root of the cube of the size of the orbit and for this the size of the orbit is the diameter across the biggest distance on the ellipse.’

These laws allowed Hooke and Newton to formulate the inverse-square law of gravity. They knew the Moon is 60 times as far from the centre of the Earth as an observer on the Earth’s surface is, so the Earth’s gravitational acceleration should be 60*60 = 3,600 times weaker at the Moon than at the Earth’s surface. Hence the centripetal acceleration needed to keep the Moon in its orbit, a = v^2/r (the square of its orbital speed divided by the distance of the Moon from the centre of the Earth), which works out at roughly 0.003 ms^(-2), should be 3,600 times smaller than the acceleration due to gravity measured by Galileo at the Earth’s surface. Since it is, the inverse-square law inferred from Kepler’s laws of planetary motion had been extended from the solar system to falling apples here on the Earth!
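As a check on this arithmetic, here is a minimal Python sketch; the orbital radius, period and surface gravity values are standard textbook figures, not taken from Feynman’s lecture:

import math

g_surface = 9.8              # acceleration due to gravity at Earth's surface, m/s^2
r_moon = 3.84e8              # mean Earth-Moon distance, m (about 60 Earth radii)
period = 27.32 * 24 * 3600   # sidereal month, seconds

v = 2 * math.pi * r_moon / period     # Moon's orbital speed, ~1.02 km/s
a_orbit = v**2 / r_moon               # centripetal acceleration, ~0.0027 m/s^2
a_inverse_square = g_surface / 60**2  # surface gravity diluted by 1/60^2

print(a_orbit, a_inverse_square)      # both come out near 0.0027 m/s^2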

At the end of that first lecture, The Law of Gravitation, an example of Physical Law, Feynman says: ‘You will say to me, “Yes, you told us what happens, but what is gravity? Where does it come from? What is it? Do you mean to tell me that a planet looks at the sun, sees how far it is, calculates the inverse square of the distance and then decides to move in accordance with that law?” In other words, although I have stated the mathematical law, I have given no clue about the mechanism. I will discuss the possibility of doing this in the next lecture, The relation of mathematics to physics.

‘In this lecture I would like to emphasize, just at the end, some characteristics that gravity has in common with the other laws … it is mathematical in its expression … it is not exact; Einstein had to modify it, and we know it is not quite right yet, because we have still to put the quantum theory in. That is the same with all our laws – they are not exact.’

(This is the opposite of Eugene Wigner’s false claim in 1960 about the ‘unreasonable effectiveness of mathematics in the physical sciences’ that implicitly suggests that mathematics surprisingly provides a perfect, totally accurate description of nature! Wigner had the job of designing the large-scale plutonium reactors in World War II for nuclear weapons production. When the engineers deliberately increased the reactor core size so that it could hold additional uranium, Wigner was extremely offended and actually threatened to resign, complaining that the engineers didn’t understand how precise the nuclear physics measurements and calculations were. But it turned out that the reactor wouldn’t operate without a lot more uranium because the measurements and calculations were useless: they didn’t include the effect of the gradual build-up over a few hours in a high flux reactor of fission products that absorbed a lot of the neutron flux! So the mathematical physics as wielded by Wigner was wrong, and the cynical engineers were right not to trust the accuracy estimates of Wigner’s calculations. Feynman, who worked applying computers to test bomb designs in the Manhattan Project, was well aware of this lesson from trying to use mathematical laws in the real world: see his signature on the last page of this copy of the Los Alamos Handbook of Nuclear Physics, LA-11)

In the second lecture, The Relation of Mathematics to Physics, Feynman questions how deep the mathematical basis of physical law goes. He starts with the example of Faraday’s law of electrolysis, which states that the amount of material electroplated is proportional to the current and to the time the current flows. He points out that this means the amount of matter plated is simply proportional to the total charge that flows, and since each atom deposited requires one electron, the physical mechanism behind the law is easy to understand: it is not a mathematical mystery.

Then he moves on to gravity on page 37:

‘What does the planet do? Does it look at the sun, see how far away it is, and decide to calculate on its internal adding machine the inverse of the square of the distance, which tells it how much to move? This is certainly no explanation of the machinery of gravitation! You might want to look further, and various people have tried to look further.

‘… I would like to describe one theory which has been invented, among others, of the type you might want. This theory suggests that this effect is the result of large numbers of actions, which would explain why it is mathematical.

‘Suppose that in the world everywhere there are a lot of particles, flying through us at very high speed. They come equally in all directions – just shooting by – and once in a while they hit us in a bombardment. We, and the sun, are practically transparent for them, practically but not completely, and some of them hit. Look, then, at what would happen.

[Illustration credit: http://www.mathpages.com/HOME/kmath131/kmath131.htm]

‘If the sun were not there, particles would be bombarding the Earth from all sides, giving little impulses by the rattle, bang, bang of the few that hit. This will not shake the Earth in any particular direction, because there are as many coming from one side as from the other, from top as from bottom.

‘However, when the sun is there the particles which are coming from that direction are partly absorbed [or reflected, as in the case of Yang-Mills gravitons, an exchange radiation!] by the sun, because some of them hit the sun and do not go through. Therefore, the number coming from the sun’s direction towards the Earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see that the farther the sun is away, of all the possible directions in which particles can come, a smaller proportion of the particles are being taken out.

‘The sun will appear smaller – in fact inversely as the square of the distance. Therefore there will be an impulse on the Earth towards the sun that varies inversely as the square of the distance. And this will be a result of large numbers of very simple operations, just hits, one after the other, from all directions. Therefore the strangeness of the mathematical relation will be very much reduced, because the fundamental operation is much simpler than calculating the inverse square of the distance. This design, with the particles bouncing, does the calculation.

‘The only trouble with this scheme is that … If the Earth is moving, more particles will hit it from in front than from behind. (If you are running in the rain, more rain hits you in the front of the face than in the back of the head, because you are running into the rain.) So, if the Earth is moving it is running into the particles coming towards it and away from the ones that are chasing it from behind. So more particles will hit it from the front than from the back, and there will be a force opposing any motion. This force would slow the Earth up in its orbit… So that is the end of that theory.

‘“Well,” you say, “it was a good one … Maybe I could invent a better one.” Maybe you can, because nobody knows the ultimate. …

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

The ‘drag’ force Feynman describes (in debunking the obsolete LeSage gas pressure mechanism) doesn’t slow down moving objects in any quantum field, because the exchange radiation can’t continually carry away kinetic energy the way a gas can. As we observe, it is only when an electron accelerates that it radiates waves which carry energy away into the surrounding field, e.g. radio waves from an accelerating charge. Fundamental charged particles therefore only show a resistance to being accelerated (which takes away energy as radiation), not a continuous energy-losing drag that can slow down particles moving at steady velocity. The interaction between a moving particle and the surrounding quantum field doesn’t cause continuous drag; it is actually the mechanism of inertia (resistance to acceleration, i.e. the 1st law of motion) and of the FitzGerald-Lorentz contraction of bodies in the direction of their motion in space. Feynman’s objection doesn’t hold water; if it did, it would discredit all quantum graviton theories, and there would be no physical mechanism for inertia and the Lorentz-FitzGerald contraction, nor for the (1/3)MG/c^2 = 1.5 mm contraction of the Earth’s radius (by graviton exchange pressure!) predicted by general relativity on the basis of conservation of energy in gravitational fields!
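For the record, the (1/3)MG/c^2 figure quoted above is easy to verify numerically. A minimal sketch, using standard values for the Earth’s mass and the constants (the formula itself is the one stated in the text):

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
c = 2.998e8         # speed of light, m/s

contraction = (1.0/3.0) * M_earth * G / c**2
print(contraction)  # ~1.5e-3 m, i.e. about 1.5 mm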

The FitzGerald-Lorentz contraction is demonstrated by the Michelson-Morley experiment; it occurs whenever acceleration occurs, but remains constant if the velocity is constant. This radiation pressure effect is analogous to the contraction in length of an aircraft or ship when moving nose or bow first head-on into air or water, although the graviton field behaves as a perfect, dragless fluid: massless gravitons travel at light velocity and cannot be sped up like the molecules of a massive gas, so they cannot carry away energy and slow down an object moving at steady velocity. There is a force on the front which tends to cause a small contraction that depends on the velocity. The report http://arxiv.org/abs/gr-qc/0011064 on page 3 shows how the FitzGerald-Lorentz contraction formula for gamma can result from head-on pressure when a charge is moving in the Dirac sea quantum field theory vacuum, citing C. F. Frank, Proceedings of the Physical Society of London, vol. A 62 (1949), pp. 131–134. For further details, see also the previous post on this blog, https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/, plus the information in the comments following that post.

For emphasis: the Feynman ‘drag’ objection to a quantum field that causes gravity is bunk not just because of the contraction of moving bodies and the fact that charges only radiate (lose energy) in a field when accelerating, but also because we know quantum fields exist in space from the Casimir effect and other experiments. Reproducible, highly accurate quantum force experiments prove beyond a shadow of a doubt that quantum fields exist in space and produce forces without causing drag, other than a resistance to accelerations (observed inertia, observed Lorentz-FitzGerald contraction). There is therefore motion of charges in quantum fields without drag, because we see it without seeing drag! While it is good to seek theoretical objections to theories where there is some evidence that such objections are real, it is not sensible to keep clinging to an objection based on an effect which simply isn’t observed in the real world. There is no evidence that gauge boson fields cause drag, so drop the dead donkey. Motion in quantum virtual particle fields doesn’t slow down planets; tough luck if you wanted nature to behave differently. Because gravitons don’t slow things down, they don’t heat up the planets. So Poincare’s objection that planets should be red hot from moving through the quantum field vacuum is bunk. No net energy flows into or out of the gravitational field unless the gravitational potential energy of a mass varies, which requires a force F to do work E = F*x by moving the object the distance x in the direction of the force. In order to do such work, acceleration a = F/m is required according to Newton’s second law of motion.

So the Feynman drag argument is today disproved by the Casimir effect and other experiments of quantum field theory, a theory which he formulated!

Above: the Casimir effect. Two nearby metal plates are electrically conductive, so only short wavelengths of virtual photons from the vacuum’s quantum field can exist by fitting into the small gap between them, but virtual particles of all wavelengths can push the plates together from outside! So the plates appear to ‘attract’ one another, by the mechanism of the vacuum virtual photons pushing them together! The Casimir force is precisely predicted by quantum field theory and has been confirmed by accurate experimental observations. [Illustration credit: Wikipedia.]
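The ‘precisely predicted’ Casimir pressure referred to in the caption is the standard quantum field theory result F/A = π²ħc/(240 d^4) for ideal parallel plates at separation d. A minimal sketch evaluating it (the 1 micrometre separation is just an illustrative choice):

import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
d = 1e-6           # plate separation, m (illustrative)

pressure = math.pi**2 * hbar * c / (240 * d**4)
print(pressure)    # ~1.3e-3 Pa: the attractive Casimir pressure at 1 micrometre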

Virtual radiations which cause forces simply don’t behave like ‘real’ particles. They don’t cause heating (which was an objection Poincare had to LeSage, claiming that the exchange of particles to cause gravity or any stronger force would make masses red hot!) because of the mechanism explained in my post Electricity and Quantum field theory:

1. When electric energy enters a vacuum-dielectric capacitor (or vacuum insulator open ended transmission line, which behaves like the capacitor!), it does so as electromagnetic field energy, at light velocity. Around each conductor there is charged (positive or negative) gauge boson field energy propagating forward at the velocity of light for the surrounding insulator (the vacuum in this case).

2. When the electromagnetic energy reaches the end of the capacitor plate or the end of the transmission line, it reflects back, still travelling at the velocity of light! It never slows or stops! Any ‘charged’ capacitor contains light-velocity energy trapped in it. The Heaviside ‘energy current’ or TEM wave (transverse electromagnetic wave) which is then travelling in each direction with equal energy (once a capacitor has been charged up and is in a ‘steady state’) causes no drag to electrons and therefore no electrical resistance (heating) whatsoever because there is no net drift of electrons along the wires or plates: such a drift requires a net variation of the field along the conductor, but that doesn’t happen because the flows of energy in opposite directions are equal. Electrons (and thus electric currents) only flow when there is an asymmetry in the gauge boson exchange rates in different directions.

Exchange radiations are normally in equilibrium. If an electron accelerates, it suffers a drag due to radiation resistance (i.e. it emits radiation in a direction perpendicular to the acceleration direction), while it is contracted in length by the Lorentz-FitzGerald contraction, so its geometry is automatically distorted by acceleration, which restores the equilibrium of gauge boson exchange. Once this occurs (during acceleration), equilibrium of gauge boson exchange to different directions is restored, so no further drag occurs.

[Figure: contraction of a moving charge]

Above: the flattening of a charge in the direction of its motion reduces drag (instead of increasing it!) because the relative number of field lines is reduced in the direction of motion but is unaffected in other directions, such as the transverse direction. This compensates for the motion of the particle by reducing drag from the field quanta. A net force only acts during acceleration, when the shape is changing; this force is the inertia! A particle moving at the velocity of light, such as a photon, is a 1-dimensional pencil in the direction of motion, which makes its field lines 100% transverse since they stick out at right angles. This makes the photon a ‘disc’ shape when you look at the field lines. The more lines per unit volume pointing in one direction, the stronger the field in that direction. There is endless confusion about the ‘shape’ of particles in electromagnetism!

See the recent article by Carlos Barceló and Gil Jannes, ‘A Real Lorentz-FitzGerald Contraction’, published in the peer-reviewed journal Foundations of Physics, Volume 38, Number 2, February 2008, pp. 191-199:

‘Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity.’

Full text: http://arxiv.org/PS_cache/arxiv/pdf/0705/0705.4652v2.pdf, where page 4 states:

‘The reason that special relativity was considered a better explanation than the Lorentz-FitzGerald hypothesis can best be illustrated by Einstein’s own words: “The introduction of a ‘luminiferous ether’ will prove to be superfluous inasmuch as the view here to be developed will not require an ‘absolutely stationary space’ provided with special properties.” The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.’

Now back to Feynman’s book ‘The Character of Physical Law’, Penguin, 1992, page 97: during his November 1964 lecture Symmetry in Physical Law, Feynman debunks the religion of ‘special’ relativity:

‘We cannot say that all motion is relative. That is not the content of relativity. Relativity says that uniform velocity in a straight line relative to the nebulae is undetectable.’

On pages 7-9 and 7-10 of volume 1 of The Feynman Lectures on Physics, Feynman states: ‘What is gravity? … What about the machinery of it? … Newton made no hypotheses about this; he was satisfied to find what it did without getting into the machinery of it. No one has since given any machinery. It is characteristic of the physical laws to have this abstract character. … Why can we use mathematics to describe nature without a mechanism behind it? No one knows. We have to keep going because we find out more that way.

‘Many mechanisms of gravitation have been suggested [‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p. 64; nowadays 10 dimensional supergravity adds a landscape of another 10^500 spin-2 gravity theories to the 200 of Eddington’s time, all of which are wrong or not even wrong]. It is interesting to consider one of these, which many people have thought of from time to time. At first, one is quite excited and happy when he “discovers” it, but he soon finds that it is not correct. It was first discovered about 1750. Suppose there were many particles moving in space at a very high speed in all directions and being only slightly absorbed in going through matter. When they are absorbed, they give an impulse to the earth. However, since there are as many going one way as another, the impulses all balance. But when the sun is nearby, the particles coming toward the earth through the sun are partially absorbed, so fewer of them are coming from the sun than are coming from the other side. Therefore, the earth feels a net impulse toward the sun and it does not take one long to see that it is inversely as the square of the distance – because of the variation of the solid angle that the sun subtends as we vary the distance. What is wrong with that machinery? … the earth would feel a resistance to motion [Duh! Lorentz-FitzGerald contraction and inertia are both resistance effects in the vacuum! Also, the Casimir effect known since 1948 demonstrates that the vacuum is full of virtual radiation, which does not slow down the planets!] and would be slowing up in its orbit [and becoming red hot with the heat from the energy delivered by all the impacts of gauge bosons, as Poincare argued in stupidly dismissing quantum fields].’

Feynman further on that same page (p. 7-10, vol. 1) discusses the unification of electricity and gravitation:

‘Next we shall discuss the possible relation of gravitation to other forces. There is no explanation of gravitation in terms of other forces at the present time. … However … the force of electricity between two charged objects looks just like the law of gravitation … Perhaps gravitation and electricity are much more closely related than we think. Many attempts have been made to unify them; the so-called unified field theory is only a very elegant attempt to combine electricity and gravitation; but in comparing gravitation and electricity, the most interesting thing is the relative strength of the forces. Any theory that contains them both must also deduce how strong gravity is. … it has been proposed that the gravitational constant is related to the age of the universe. If that were the case, the gravitational constant would change with time … It turns out that if we consider the structure of the sun – the balance between the weight of its material and the rate at which radiant energy is generated inside it – we can deduce that if gravity were 10 percent stronger [1,000 million years ago], the sun would be much more than 10 percent brighter – by the sixth power of the gravity constant! … the earth would be about 100 degrees centigrade hotter, and all of the water would not have been in the sea, but vapor in the air, so life would not have started in the sea. So we do not now believe that the gravity constant is changing …’

This is wrong because Feynman is firstly following Teller’s stupidity in assuming that, despite the connection between electricity and gravitation, only the gravitational constant varies, and secondly he assumes that if it varied it was bigger rather than smaller in the past! If gravitation and electromagnetism are unified, both coupling constants vary together, so the sun’s brightness is independent of either constant. This is because fusion rates are increased by the compression that gravity exerts on the mass of the star, but fusion rates are decreased by the electromagnetic repulsion between protons, which offsets the effect of gravity. So the enhanced fusion effect of a variation of the gravity ‘constant’ with time will be masked by the corresponding reduced fusion effect of the variation in the electromagnetic force constant. The same applies to the big bang fusion processes, where again the effect of gravitational constant variations on compression will be masked by corresponding Coulomb repulsion variations.

I pointed this out to Professor Sean Carroll (who simply ignored me), who in a blog comment stated he knew a student writing papers claiming to disprove varying G by showing that fusion rates in the big bang depend on G! This is complete nonsense, because any variation of G will be accompanied by a variation of Coulomb force between protons which will mask the effect on fusion rates. The fact is that G increases with time instead of falling; this has no major effect on fusion rates in the big bang or in stars because the accompanying increase in Coulomb force repulsion offsets the effect of increasing gravitational compression. One important effect of the time variation of G is in the seeding of galaxies by density fluctuations in the early universe. The tiny ripples in the cosmic background radiation were enough to seed galaxies because G has been increasing all the time; this replaces the need for Guth’s ‘inflationary universe’. Guth’s inflationary theory assumes constant G and hence falsely requires faster-than-light expansion within a fraction of a second of the early universe in order to explain why the ripples in the cosmic background radiation were so smooth all over the sky at 300,000 years when the cosmic background radiation was emitted (just before the universe became de-ionized and transparent as electrons combined with ions).

Instead of ‘inflationary’ faster-than-light expansion explaining why the density fluctuations across the universe were so small at 300,000 years after the big bang, the correct explanation is that G was far smaller than currently believed at that time, because G is in fact directly proportional to the age of the universe.

[Fig. 1 image]

Fig. 1 – quantum gravity. Note that general relativity with an ad hoc small positive cosmological constant, lambda (the so-called ‘lambda-Cold Dark Matter’ or ‘lambda-CDM’ model of cosmology), is useful in some ways but is a classical theory which is fitted to observations using ad hoc adjustments for dark energy and dark matter, which it doesn’t predict. General relativity is a step forward from Newtonian physics because it includes relativistic phenomena and also the conservation of energy in gravitational fields, which Newtonian gravitation ignores; but it is still an unquantized classical approximation which can be fitted to a whole ‘landscape’ of different universe models, so its predictive power in cosmology is limited: see the paper by Richard Lieu of the Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462.

For more on the relationship of general relativity to quantum gravity, see the previous blog posts https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/ (which contains a discussion of the mathematics of general relativity) and https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks (its Fig. 1 shows the difference between a Feynman diagram for general relativity and one for quantum gravity); https://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy and https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl also have some relevance. Four other earlier posts which contain some relevant material are https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics, https://nige.wordpress.com/2007/06/20/path-integrals-for-gauge-boson-radiation-versus-path-integrals-for-real-particles-and-weyls-gauge-symmetry-principle, https://nige.wordpress.com/path-integrals (which is under revision and will be improved), and https://nige.wordpress.com/2007/02/20/the-physics-of-quantum-field-theory (which is also being revised).

In Fig. 1 above, the observer (or test particle of mass) is in the centre of a frame of reference with isotropically receding matter at distance R. Beneath the observer at distance r there is a fundamental particle with mass, which introduces an asymmetry by interacting with some of the gravitons that the observer exchanges with the surrounding universe.

The result is gravity: gravitons accelerate the observer towards the fundamental particle of mass, as air pressure pushes a suction cup against a smooth surface. As proved (see Fig. 2 below with the proof following it), there is a cosmological acceleration of matter a = Hc where H is Hubble’s parameter. We observe an isotropic expansion of the universe about us, so receding masses M give rise to a radial outward force by Newton’s 2nd law F = Ma = MHc.

Newton’s 3rd law tells us that this action has an equal and opposite reaction. From the possibilities known, this suggests the source of the gravitational field: the quantum gravity exchange radiation (exchanged between fundamental particles with mass/energy to cause gravitational interactions), i.e. gravitons, carries an inward force from distant receding matter. Where this is shielded, the observer is pushed down towards the particle which acts as the shield. (Small amounts of nearby matter have little outward force, so they don’t exchange gravitons as forcefully as the immense distant receding masses; nearby mass therefore automatically acts as a shield against the forceful graviton exchange with the masses in the universe beyond it.)

Time past T in Hubble’s galaxy cluster recession law v = HR = HcT is related to the time t since the big bang by the relation

t + T = 1/H

(see Figure 1, above)

=> v = HcT = Hc[(1/H) – t] = c – (Hct)

=> a = dv/dt = d[c – (Hct)]/dt = –Hc,

the outward acceleration (of magnitude Hc) discovered observationally in 1998 (it was predicted in 1996). Force: F = ma. Newton’s 3rd law gives the reaction force, carried by inward-directed gravitons. Since non-receding nearby masses don’t cause such a reaction force, the non-receding nearby mass below the central observer (in Fig. 1 above) shields that observer from graviton exchange with more distant masses in that direction; an asymmetry which produces gravity.
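As a numerical sanity check of the magnitude of this acceleration, a minimal sketch; the Hubble parameter value of roughly 70 km/s/Mpc is an assumed illustrative input, not a value fitted within this model:

H = 70e3 / 3.086e22   # Hubble parameter: 70 km/s/Mpc converted to 1/s
c = 2.998e8           # speed of light, m/s

a = H * c
print(a)              # ~7e-10 m/s^2, the order of magnitude quoted later in this post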

The spin-2 mainstream graviton idea is not even wrong because it falsely assumes that two masses are attracted by graviton exchange, and gives no mechanism to prevent the stronger exchange of gravitons between those masses and all the other masses in the entire universe (the proofs of spin-2 graviton theories fatally ignore graviton exchanges with all other masses in the universe, by implicitly assuming falsely that the universe is completely empty apart from the two attracting masses being considered; correcting this error changes everything!). This model is fact-based unlike extradimensional string theory, and makes falsifiable quantitative predictions!

The cross-sectional area of a fundamental particle of matter for quantum gravity interactions is found (independently of the fact-based assumptions behind this particular calculation) to be the black hole event horizon cross-sectional area for the mass m of the fundamental particle, π(2Gm/c^2)^2. The net force (downward) in Fig. 1 is the simple product:

F = {total-inward directed graviton force, i.e. F = MHc}.{fraction of total force which is uncancelled, due to the asymmetry caused by mass m below the observer}

= MHc.{fraction of total force which is uncancelled, due to the asymmetry caused by mass m below the observer}

= MHc.{[π(2Gm/c^2)^2].[(R/r)^2]/[4πR^2]}

Introducing M = (4/3)πR^3ρ (using a constant density ρ is just an approximation here, to get the key concept of the basic physics across; see previous posts for corrections for the variation in effective density with observable spacetime distance R) gives us three things immediately: (1) the inverse square law of general relativity for weak fields, (2) a checkable quantitative prediction for the strength of gravitation G, and also (3) a basis for quantizing mass in quantum field theory, since the force is proportional to the square of m, showing that m is the building block of all particle masses.
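To make the structure of this product explicit, here is a minimal numerical sketch of the formula as written above. All the input values (Hubble parameter, mean density, shield particle mass, distance, the choice R = c/H) are assumed illustrative numbers, not the corrected values derived in the previous posts; the point is only that the mechanism gives a force scaling as m^2/r^2, which can be compared directly with Newton’s law:

import math

H = 2.27e-18     # Hubble parameter, 1/s (assumed, ~70 km/s/Mpc)
c = 2.998e8      # speed of light, m/s
G = 6.674e-11    # Newton's constant (used here only inside the cross-section term)
rho = 9.5e-27    # assumed mean density of the universe, kg/m^3 (illustrative)
R = c / H        # effective radius of the receding universe (assumption)
m = 9.11e-31     # mass of the shielding fundamental particle (electron, illustrative)
r = 1.0          # distance from observer to the shielding particle, m

M = (4.0/3.0) * math.pi * R**3 * rho    # mass of the surrounding universe
F_inward = M * H * c                    # total inward graviton force (3rd law reaction)
shield_fraction = math.pi * (2*G*m/c**2)**2 * (R/r)**2 / (4*math.pi*R**2)
F_mechanism = F_inward * shield_fraction

F_newton = G * m**2 / r**2              # Newtonian force between two masses m at separation r
print(F_mechanism, F_newton, F_mechanism / F_newton)

With these illustrative inputs the two forces come out within a factor of order unity of each other; how closely they agree depends on the density and the spacetime corrections discussed in the earlier posts, which is where the quantitative prediction of G comes from.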

Copy of comment to http://kea-monad.blogspot.com/2008/12/standing-still.html

If I can be a bit unpopular, there is a reality to “dark energy”. The error is in the original fitting of general relativity to Hubble’s recession law. There are two times, time since the big bang t and time past T, which are related to one another by the formula t + T = 1/H (for proof of this relationship, simply see Fig. 1 here) in a flat spacetime cosmology (H being Hubble’s parameter). Hubble’s empirical law v = HR can – if Minkowski’s concept of spacetime R = ct is true – then be written as v = HR = H(ct) = Hc[(1/H) – T] = c(1 – HT). If we differentiate the expansion rate v as a function of time past T, we get acceleration, a = dv/dT = d[c(1 – HT)]/dT = –Hc, of magnitude Hc = 6*10^(-10) ms^(-2), which is the observed tiny acceleration of the universe (so small that it is only detectable over immense amounts of spacetime, hence the reason why it was only discovered in 1998 by Perlmutter et al., for extremely redshifted supernovae at around half the age of the universe). This was predicted and published well before Perlmutter, but that isn’t the point. The main point is that it is still ignored.

The problem isn’t so much “dark energy” as the way spacetime is used when general relativity is applied to cosmology. By choosing to interpret the Hubble recession as v = HR instead of (Minkowski’s equivalent of) v = Hct, the effective variation of velocity as a function of time (acceleration = dv/dt) is obscured from sight, and valuable physical insight is lost from mainstream cosmology. When the facts are pointed out, instead of cosmologists grasping the significance of this, they try to ignore it.

The significance is that the acceleration of the flat universe is inherent in the way the universe is expanding according to Hubble’s 1929 discovery. In other words, the expansion and the acceleration are not two separate things, but different aspects of the same thing: the dark energy isn’t just causing the acceleration of the universe, it’s causing the very expansion of the universe, too. So there is a general repulsive force, powered by dark energy, causing the expansion of the universe.

Now we have to introduce gravitons. Fierz and Pauli in the 1930s originally ignored all the mass-energy in the universe except for two small test particles when analyzing quantum gravity. If a quantum exchange between two masses (similar gravitational charges) results in attraction and there is no other process going on, then the graviton would have to have spin-2.

However, clearly there is a lot more going on, because there is no mechanism to stop gravitons being exchanged not merely between two test masses, but between each of those masses and all the other masses in the universe.

Normally we can ignore the other masses in the universe when thinking of gravity, but not with graviton exchange. The problem with ignoring the rest of the mass of the universe is that it is nearly 100% of the total mass partaking in the interaction between your test masses, so [you] would be ignoring nearly all of the mass involved. Although classically you can often ignore the rest of the distant mass of the universe because it is quite uniformly distributed across the sky, this doesn’t cancel out when you are considering quantum graviton exchanges. In any case, gravitons will be converging as they travel from the distant masses in the universe to any particular small test mass. This convergence of gravitons has geometric effects. The short story is that Pauli and Fierz’s approximation of ignoring 99.9999…% of the mass in the universe when “proving” that gravitons must be spin-2 is plain wrong. Once you involve the entire mass of the universe – because there is no mechanism known which can stop such graviton exchanges becoming involved in every single quantum gravitational interaction between a few small masses – you find that gravitons must have spin-1 and must produce observed gravitation by pushing masses together over distances up to something like the average supercluster separation distance.

Beyond that distance, the exchange causes the net repulsion that is responsible for the expansion and also the acceleration of the universe.

That “attraction” and repulsion can both be caused by the same spin-1 gravitons (which are dark energy) can be understood by a semi-valid analogy, the baking cake. As the cake expands, the particles in it recede as if there is a repulsion between them. But if there are some nearby raisins in the cake, they will be pressed even closer together by this pressure, because they are more strongly bound against expansion than the dough, and because they are being pressed on all sides apart from the sides facing adjacent raisins. So because there is no significant amount of expanding dough between them, they shield one another and get pressed closer together by the expansion of the surrounding dough.

In quantum gravity, one simple way to analyze this mathematically is by the empirical laws of mechanics. The acceleration of the universe means that distant receding masses have an acceleration outward from the observer. If the mass of a particular receding object is m and its acceleration a, for non-relativistic recession velocities this mass possesses an effective outward force given by Newton’s 2nd law, F = ma. So a 1 kg mass receding at 6*10^(-10) ms^(-2) will have an outward force of 6*10^(-10) Newtons. This sounds trivial, but actually the mass of the receding universe is very large, so the total outward force is very large indeed. Newton’s 3rd law then tells you of an equal and opposite reaction force. This is the inward-directed graviton-mediated exchange force. So you can make quantitative predictions immediately.
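A minimal sketch of this bookkeeping; the total mass figure for the receding universe is an assumed illustrative value, not something derived here:

a = 6e-10          # cosmological acceleration, m/s^2 (as quoted above)
m_small = 1.0      # a 1 kg receding mass
M_universe = 9e52  # assumed illustrative mass of the receding universe, kg

print(m_small * a)     # ~6e-10 N: trivial outward force for 1 kg
print(M_universe * a)  # ~5e43 N: enormous total outward force, hence a large 3rd-law reaction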

The clever thing is that for two nearby masses which are not significantly receding from one another (say apple and Earth), this mechanics tells you immediately that there is no significant reaction force of gravitons from apple to Earth or vice-versa.

So by this effect there is a “shadowing” of gravitational charges by each other (because gravitons interact with gravitational charges in order to mediate the force of gravity, and don’t go straight through unaffected), providing that they are nearby enough that they are not receding significantly. Thus, the gravitons exchanged between the apple and the receding masses in the universe above it cause most of the observed gravitational effect: the apple is pushed downwards towards the earth by spin-1 gravitons.

So the mainstream QFT gets off beam by focussing on (1) spin-2 graviton errors without correcting them, (2) Hubble v = HR obfuscation in place of the more physically helpful spacetime equivalent v = Hct, and (3) high energy quantum graviton interactions such as Planck scale unification, instead of focussing on building an empirically-defensible, checkable, testable, falsifiable model of quantum gravity which is successful at the low-energy scale and which resolves problems in general relativity by predicting things such as

(1) G,

(2) the amount of dark energy/cosmological acceleration, and

(3) the flatness of the universe without the speculative inflation hypothesis.

I.e., it’s a non-speculative theory, a fact-based theory which at each step is defensible, and which produces predictions that can be checked.

The reason for the ignorance of the simplicity of QFT at low energy is that mainstream QFT is contradictory in:

(1) accepting Schwinger’s renormalization work, in which the vacuum is only chaotic (with spacetime loops of pair-production virtual fermions continually annihilating into virtual bosons, and back again) in electric fields above ~10^18 volts/metre, which occurs only out to a short distance (a matter of femtometres) from fundamental particles like quarks and electrons. These virtual spacetime creation-annihilation “loops” therefore don’t fill the entirety of spacetime, just a small volume around particles of real matter. Hence, the vacuum as a whole isn’t filled with chaotic annihilation-creation loops. If it was, the IR cutoff energy for the QED running coupling would be zero, which it isn’t. There has to be a limiting range to the distance out to which there is any virtual pair production in the vacuum around a real fermion, otherwise the virtual fermions would be able to polarize sufficiently to totally cancel out the electric charge of real fermions. Penrose makes this clear with a diagram in “Road to Reality”. The virtual fermions polarize radially around a real fermion core, cancelling out much of the field and explaining why the “bare core” charge of a real fermion is higher in QFT than the charge of a fermion as observed in low energy physics. If there was no limit on this range of vacuum polarization due to pair production, you would end up with the electron having an electric charge of zero at low energy. This isn’t true, so as Schwinger argued, the vacuum is only polarized in strong electric fields (ref.: eq. 359 of http://arxiv.org/abs/quant-ph/0608140 or see eq. 8.20 of http://arxiv.org/abs/hep-th/0510040 – this is all entirely mainstream stuff, and is very well tested in QED calculations, and is not speculative guesswork). A quick numerical check of this ~10^18 volts/metre threshold is sketched after point (2) below.

(2) claiming that the entire vacuum is filled with chaotic creation-annihilation loops. This claim is made in most popular books by Hawking and many others. They don’t grasp that if the vacuum were filled with virtual fermions in such loops, you’d get not just geometric (inverse-square law) divergence of electric field lines from charges, but also a massive exponential attenuation factor which would cancel out those radial electric field lines within a tiny distance. Even if we take Penrose’s guess that the core electric charge of the electron is 11.7 times the value observed at low energy, then the polarized vacuum reduces the electric charge by this factor over a distance of merely femtometres. Hence, without the Schwinger cutoff on pair-production below ~10^18 v/m, you would get zero observable electric charge at distances beyond a nanometre from an electron. Clearly, therefore, the vacuum is not filled with polarizable virtual fermions, and isn’t therefore filled with annihilation-creation loops of virtual particles.
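The ~10^18 volts/metre figure quoted in point (1) is the standard Schwinger critical field for electron-positron pair production, E = m^2 c^3/(e*hbar). A minimal sketch evaluating it:

m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # electron charge, C
hbar = 1.055e-34   # reduced Planck constant, J s

E_schwinger = m_e**2 * c**3 / (e * hbar)
print(E_schwinger) # ~1.3e18 V/m, the threshold above which the vacuum polarizes strongly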

This argument is experimentally defensible, and so is extremely strong. The vacuum effects which cause chaos are limited to strong fields, very close to fermions. Beyond a matter of mere femtometres, the vacuum isn’t chaotic and is far simpler, with merely virtual (gauge) bosons which can’t undergo pair production until they enter the strong field near a fermion.

It’s simple to understand all this if you know about radiation. Lead, and other high atomic number elements, attenuates real gamma rays of high energy primarily by pair production. The gamma ray passes into the strong field near the nucleus, and is transformed into an electron and positron pair. This pair can then be polarized by an external electric field, attenuating or shielding that external electric field, before the pair annihilate back into a fresh gamma ray. The field shielding process with virtual photons and virtual fermions is similar in principle to that observed with real radiation and with the real dielectrics you put inside capacitors between the plates, so the dielectrics polarize to store large amounts of energy. (There is nothing mysterious or speculative in this basic physics.)

Away from the strong fields that exist very close to real fermions, the vacuum is very simple and just contains virtual bosons flying around. Because they don’t (unlike fermions) obey the exclusion principle, they don’t behave like a compressed gas. They mediate fundamental forces by being exchanged between fermions, simply, without loopy chaos.

For this reason, the complexity normally present in a QFT path integral – due to an infinite number of terms that correct for vacuum loops – [is simply] not present in the real vacuum dynamics that model low energy QED and quantum gravity phenomena. The path integral reduces to a simple geometric summation of straight lines where there are no loops (i.e. at low energy), as shown by Feynman for the case of light diffraction by glass in his book QED.

Quantum gravity can be done the same way at low energy! It’s a simple geometric situation. Loops are important only at high energy where they occur due to pair-production as already proved, so it’s amazing how much ignorance, apathy and sheer insulting dumbness there is amongst some QFT theorists, obsessed with unobservable Planck scale phenomena and uncheckable imaginary spin-2 gravitons.

“Dark energy” is badly understood by the mainstream, and having a Lambda term in the field equation of GR is not sufficient physics. It’s ad hoc juggling. I just think that for the record, there is evidence that “dark energy” is real, it’s spin-1 gravitons and low energy quantum field theory physics is nothing like the unphysical mathematical obfuscation currently being masqueraded as QFT. Fields are due to physical phenomena, not equations that are approximate models. To understand QFT, what is needed is not just a Lie algebra textbook but understanding of physical processes like pair production (which is real and occurs when high energy gamma rays enter strong fields), polarization of such charges (again a physical fact, well known in electronics since it’s used in electrolytic capacitors), and spacetime.

The right way to deny all progress in the world is to be reasonable and quiet to fit in with status quo, in an attempt to win or keep friends. As Shaw wrote in 1903:

“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

I think Louise is right in her basic equation, and also in dismissing the terrible ad hoc mainstream approach to “dark energy”, but that doesn’t mean that fundamentally there is [no] dark energy in the form of gravitons flying around, allowing predictions to be checked.

Please delete if this comment is unhelpful to the status quo here. (I’ll copy it to my blog as proof of my unreasonableness. Maybe it’s just too long, but it does take space to explain anything in sufficient detail to get the main points across.)

  • **************************

Notes on Lunsford’s comment to the blog post:

http://dorigo.wordpress.com/2009/01/08/black-holes-the-winged-seeds/

Lunsford refers to Cooperstock and Tieu, General Relativity Resolves Galactic Rotation Without Exotic Dark Matter, http://arxiv.org/abs/astro-ph/0507619, pp. 17-18:

‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

Sean Carroll criticised it on a technical level because he felt it wasn’t rigorous: http://blogs.discovermagazine.com/cosmicvariance/2005/10/17/escape-from-the-clutches-of-the-dark-sector/

But I’ve seen a different, cleaner or more straightforward-looking analysis of the galactic rotation curves by Hunter that appears to tackle the dark matter problem at http://www.gravity.uk.com/galactic_rotation_curves.html (I want to point out, though, that I don’t agree with or recommend the cosmology pages on the rest of that site). His interesting starting point is the equivalence of rest mass energy to the gravitational potential energy of the mass with respect to the surrounding universe. If the universe collapsed under gravity, such potential energy would be released. It’s thus a nice conjecture (equivalent to Louise’s equation, since cancelling m and inserting r = ct into E = mc^2 = mMG/r gives c^2 = MG/(ct), or Louise’s tc^3 = MG), and leads to flat galactic rotation curves without the intervention of enormous quantities of unobserved matter within galaxies (there is obviously some dark matter, from other observations like neutrino masses, etc.).
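A minimal order-of-magnitude sketch of Louise’s relation tc^3 = MG; the age of the universe used here is an assumed round figure (about 13.8 billion years), purely for illustration:

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
t = 4.35e17     # assumed age of the universe, s (~13.8 billion years)

M = t * c**3 / G   # mass implied by tc^3 = MG
print(M)           # ~1.8e53 kg, within an order of magnitude of typical estimates
                   # for the mass of the observable universe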

But this is pretty trivial compared to the issue of quantum gravity. What should be up for discussion is Lunsford’s paper http://cdsweb.cern.ch/record/688763?ln=en but it is just too abstract for most people. Even people who have done QM and cosmology courses (including basic GR) don’t have the mathematical physics background to understand the use of GR and QFT in that paper, e.g. the differential geometry, variational principle, and so on.

I wish he could write a basic textbook of the mathematical foundations used in his paper. I’ve learned some useful mathematics from McMahon’s Quantum Field Theory Demystified (2008), Zee’s Quantum Field Theory in a Nutshell (2003), Dyson’s http://arxiv.org/PS_cache/quant-ph/pdf/0608/0608140v1.pdf and the QFT lectures http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040v2.pdf

A QFT of gravity will differ from GR. Instead of curved spacetime, you have discrete graviton exchanges causing the interactions (gravity, inertia, and the contraction of both stationary and moving bodies composed of mass-energy). Some energy will exist in the graviton field, and surely this is the dark energy. There is no supplemental ‘cosmological constant’ of dark energy in addition to the gravitational field. Instead, the gravitational field of graviton exchanges between masses will cause expansion on large distances and attraction on smaller ones.

Think of the analogy of a raisin cake expanding due to the motion of the dough. Nearby raisins (with little or no dough between them) will be pushed closer together like ‘attraction’, while distant raisins will be accelerated further apart during the cooking, like a ‘repulsion’ effect. Two phenomena for the price of one graviton field! No additional dark energy or CC, just the plain old gravitational field. I think this is missed by the mainstream because they (1) think LeSage came up with quantum gravity and was disproved (he didn’t, and the objections to his ideas arose because he didn’t have gravitons as exchange radiation, but a gas), and (2) accept the Pauli-Fierz ‘proof’ that gravitons exchanged between 2 masses, to cause them to attract, must suck, which implies spin-2 suckers.

Actually the Pauli-Fierz proof is fine if the universe only contains 2 masses which ‘attract’. Problem is, it doesn’t contain 2 masses. We’re surrounded by masses, and there is no mechanism to stop graviton exchanges with those masses. As gravitons propagate from distant masses to nearby ones, they converge (not diverge), so the effects of the distant masses are larger (not smaller) that that of nearby masses. Once you include these masses, the whole basis of the Pauli-Fietz proof evaporates; gravitons no longer need to be suckers and thus don’t need to have a spin of 2. Instead of having spin-2 gravitons sucking 2 masses together in an otherwise empty universe, you really have those masses being pushed together by graviton exchanges with the immense masses in the surrounding universe. This is so totally obvious, it’s amazing why the mainstream is so obsessed with spin-2 suckers. (Probably because string theory is the framework for spin-2 suckers.)

Update (18 February 2009):

Dr Woit’s Not Even Wrong weblog has a nice discussion about the current status of the string theory propaganda war:

http://www.math.columbia.edu/~woit/wordpress/?p=1630

Mission Accomplished

A few years ago the asset value of string theory in the market-place of ideas started to take a tumble due to the increasingly obvious failure of the idea of unifying physics with a 10/11 dimensional string/M-theory. Since then a few string theorists and their supporters have decided to fight back with an effort to regain market-share by misleading the public about what has happened. Because the nature of this failure is sometimes summarized as “string theory makes no experimental predictions”, the tactic often used is to claim that “string theory DOES make predictions”, while neglecting to explain that this claim has nothing to do with string theory unification.

A favorite way to do this is to invoke recent attempts to use conjectural string/gauge dualities to provide an approximate calculational method for some strongly coupled quantum systems. There are active on-going research programs to try and see if such calculational methods are useful in the case of heavy-ion collisions and various condensed-matter systems. In the heavy-ion case, we believe we know the underlying theory (QCD), so any contact between such calculations and experiment is a test not of the theory, but of the calculational method. For the condensed matter systems, what is being tested is the combination of the strongly-coupled model and the calculational method. None of this has anything to do with testing the idea that string theory provides a fundamental unified theory. …

The one string theorist involved in all this was Clifford Johnson, who gives a minute-by-minute description of his participation here. It ends by invoking the phrase made famous by the last US president:

“Mission accomplished. (Hurrah!)”

http://www.math.columbia.edu/~woit/wordpress/?p=1630&cpage=1#comment-46927

Acknowledging that this work does not prove string theory unification isn’t the point. Instead of just stating that the research under discussion has nothing to do with the string theory unification, Clifford is claiming that it does (using the logic: “we don’t understand string theory, maybe comparing AdS/CFT-motivated approximations to experimental results in heavy-ion physics will help us understand string theory, and once we understand string theory, we’ll see how to do string theory unification”). He’s welcome to that bit of wishful thinking, but when he uses it on non-experts in the way quoted, it’s not at all surprising that what they take away is the message that string theory unification is moving forward due to this first connection between string theory and experiment. …

There follows an anonymous attack on Dr Woit:

‘As I have said repeatedly, you adamantly refuse to recognize the UNDERSTANDING we have gleamed through string theory, while knocking it for the lack of experiments.’ – Somebody [anonymous attack on Dr Woit]

I thought the kind of physics string theorists were claiming to do was of the “Shut up and calculate!” variety. But now suddenly we are told that a benefit of string theory is an extradimensional “understanding” of physics, because of the (unproved) conjecture that 5-dimensional AdS space (which is not cosmological space, because it has a negative CC instead of a positive one) may be helpful in modelling strong interactions.

Duh! Yes, maybe it’s helpful in a new approximation for QCD calculations, but that’s not understanding physical reality because AdS isn’t real spacetime. It’s just a calculational tool. Similarly, classical physics like GR is just a calculational tool; it doesn’t help you to understand (quantum) nature, just to do approximate calculations.

Because the orbits of planets are ellipses with the sun at one focus, the planets speed up when near the sun, and this causes effects like time dilation and also a relativistic increase in mass (this is significant for Mercury, which is closest to the sun and orbits fastest). Although this effect is insignificant over a single orbit, so it didn’t affect the observations of Brahe or Kepler’s laws upon which Newton’s inverse square law was based, the effect accumulates and is substantial over a period of centuries, because the perihelion of the orbit precesses. Only part of the precession is due to relativistic effects, but it is still an important anomaly in the Newtonian scheme. Einstein and Hilbert developed general relativity to deal with such problems. Significantly, the failure of Newtonian gravity is most important for light, which is deflected by gravity when passing the sun twice as much as predicted by Newton’s a = MG/r^2.

Einstein recognised that gravitational acceleration and all other accelerations are represented by a curved worldline on a plot of distance travelled versus time. This is the curvature of spacetime; you see it as the curved line when you plot the height of a falling apple versus time.

Einstein then used tensor calculus to represent such curvatures by the Ricci curvature tensor, Rab, and he tried to equate this with the source of the accelerative field, the tensor Tab, which represents all the causes of accelerations such as mass, energy, momentum and pressure. In order to represent Newton’s gravity law a = MG/r^2 with such tensor calculus, Einstein began with the assumption of a direct relationship such as Rab = Tab. This simply says that the curvature of spacetime is directly proportional to the mass-energy. However, it is false since it violates the conservation of mass-energy. To make it consistent with the experimentally confirmed conservation of mass-energy, Einstein and Hilbert in November 1915 realised that you need to subtract from Tab on the right hand side the product of half the metric tensor, gab, and the trace, T (the sum of the scalar terms along the diagonal of the matrix for Tab). Hence

Rab = Tab – (1/2)gabT.

[This is usually re-written in the equivalent form, Rab – (1/2)gabR = Tab.]

There is a very simple way to demonstrate some of the applications and features of general relativity. Simply ignore 15 of the 16 terms in the matrix for Tab, and concentrate on the energy density component, T00, which is a scalar (it is the first term on the diagonal of the matrix), so in this approximation it is exactly equal to its own trace:

T00 = T.

Hence, Rab = Tab – (1/2)gabT becomes

Rab = T00 – (1/2)gabT, and since T00 = T, we obtain

Rab = T[1 – (1/2)gab]

The metric tensor gab = ds^2/(dx^a dx^b), and it depends on the relativistic Lorentzian metric gamma factor, (1 – v^2/c^2)^(-1/2), so in general gab falls from about 1 towards 0 as velocity increases from v = 0 to v = c.

Hence, for low speeds where, approximately, v = 0 (i.e., v << c), gab is generally close to 1 so we have a curvature of

Rab = T[1 – (1/2)(1)] = T/2.

For high speeds where, approximately, v = c, we have gab = 0 so

Rab = T[1 – (1/2)(0)] = T.

The curvature experienced for an identical gravity source if you are moving at the velocity of light is therefore twice the amount of curvature you get at low (non-relativistic) velocities. This is the explanation of why a photon moving at speed c gets twice as much curvature from the sun’s gravity (i.e., it gets deflected twice as much) as Newton’s law predicts for low speeds. It is important to note that general relativity doesn’t supply the physical mechanism for this effect. It works quantitatively because it is a mathematical package which accounts accurately for the use of energy.
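For readers who want the arithmetic above spelled out, here is a minimal numerical sketch (my own illustration, not a full general relativity calculation): it treats gab as a single scalar factor that runs from about 1 at low speed towards 0 at v = c, as argued above, and evaluates Rab = T[1 – (1/2)gab] in the two limits. The particular form chosen for the factor, and the function names, are assumptions of the sketch.

```python
# Illustrative sketch only: treat g_ab as one scalar factor g(v) that falls
# from ~1 at v = 0 towards 0 at v = c (the assumption argued above), and
# evaluate the simplified relation R = T * [1 - (1/2) * g(v)] in both limits.

def g_factor(v, c=299792458.0):
    # Assumed form for this sketch: sqrt(1 - v^2/c^2), which is ~1 for v << c
    # and 0 at v = c, matching the limits used in the text.
    return (1.0 - (v / c) ** 2) ** 0.5

def curvature(T, v, c=299792458.0):
    # The simplified one-component relation used above: R = T * [1 - g/2].
    return T * (1.0 - 0.5 * g_factor(v, c))

T = 1.0  # arbitrary source term
low_speed = curvature(T, v=0.0)             # gives T/2
light_speed = curvature(T, v=299792458.0)   # gives T
print(low_speed, light_speed, light_speed / low_speed)  # ratio = 2
```

The factor of 2 between the two limits is the same factor by which light is deflected more strongly than a slow-moving body in the same field, as discussed above.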

However, it is clear from the way that general relativity works that the source of gravity doesn’t change when such velocity-dependent effects occur. A rapidly moving object falls faster than a slowly moving one because of the difference produced in the way the moving object is subject to the gravitational field, i.e., the extra deflection of light is dependent upon the Lorentz-FitzGerald contraction (the gamma factor already mentioned), which alters length (for an object moving at speed c there are no electromagnetic field lines extending along the direction of propagation whatsoever, only at right angles to the direction of propagation, i.e., transversely). This increases the amount of interaction between the electromagnetic fields of the photon and the gravitational field. Clearly, in a slow moving object, half of the electromagnetic field lines (which normally point randomly in all directions from matter, apart from minor asymmetries due to magnets, etc.) will be pointing in the wrong direction to interact with gravity, and so slow moving objects only experience half the curvature that fast moving ones do in a similar gravitational field.

Some issues with general relativity centre on the assumed accuracy of Newtonian gravity, which is put into the theory as the normalization for the low-speed, weak-field solution. E.g., as explained above in this post, gravitons cause both long-range cosmological repulsion (between substantially redshifted masses) and “attraction” between masses which aren’t strongly redshifted (rapidly receding) from one another, just as gas pressure has both “repulsive” (push) effects and apparent “attraction” effects: a sink plunger or rubber “suction” cup is “attracted” to a surface by air pressure pushing on it, while the pressure of gas in an exploding bomb accelerates bits of matter outward in all directions like a “repulsive” force. None of this is very hi-tech.

Here’s a funny claim by the mathematician Gil Kalai defending the spin-2 stringy crackpot theory of gravity which has a landscape of 10^500 vacua and can’t predict anything checkable:

http://www.math.columbia.edu/~woit/wordpress/?p=1630&cpage=1#comment-46932

‘Successful applications of ST calculations in other areas can be regarded as a (weak) support for the theory itself.’ – Gil Kalai

Applying Gil Kalai’s argument, the major direct successes of classical physics can be regarded as major support for classical theory over quantum theory! Yeah, right!

Update:

Gil has queried my criticism above in the comments section (below), and I have replied there as follows:

You argue that successful applications of a theory (ST) to other areas than the key areas provides weak support for the original theory!

Applying your argument to classical physics, the much stronger successes of Maxwell’s equations of classical electromagnetism (for instance to thousands of situations in electromagnetism) will make that theory win hands-down over the relatively few things that you can specifically calculate using quantum electrodynamics (magnetic moments of leptons, Lamb shift in hydrogen spectra).

Your argument that you can have support for a theory from indirect successes ignores alternative ideas which do a lot better! E.g., there are alternatives to string theory which do make calculations that are checkable. Your argument specifically gives support to a failed theory of gravity because you are:

(1) not demanding that the failed theory of gravity (ST) require falsifiable predictions

and

(2) ignoring alternative ideas to string. Once you include alternative ideas, ST “successes” are shown to be failures by comparison.

What you are neglecting is that indirect calculational successes are no support for a theory: Ptolemy’s epicycles could enable predictions of apparent planetary motions, but that is not evidence. Model building by the AdS/CFT conjecture is not a falsifiable physics, any more than building theories of epicycles. You need falsifiable predictions of the key elements to the theory, to provide scientific evidence that supports the theory. To hype or defend a theory without even a single falsifiable prediction is appalling:

‘String theory has the remarkable property of predicting gravity.’ – Ed Witten, M-theory originator, Reflections on the Fate of Spacetime, Physics Today, April 1996.

This abuse of science is just the defence made for epicycles, phlogiston, caloric, etc. It just stagnates the entire field by leading to hype of drivel which creates so much noise that the more useful ideas can’t be heard.

Update:

Gil has responded in the comments below, saying that he doesn’t understand and that Ptolemy’s predictions via ad hoc epicycles were a major intellectual and scientific advance. My response:

Hi Gil,

Making predictions from a false mathematical model such as Ptolemy’s earth centred system, which is endlessly adjustable, or ST which relies on unobservables such as extra dimensions, may be useful until a better theory comes along, but it is not scientific! Such predictions are not falsifiable because the model is adjusted when it fails, for example by adding more epicycles to “correct” errors. With ST you can select different brane models, different parameters for the moduli of the Calabi-Yau manifold, etc.

In 250 BC, Aristarchus of Samos had the solar system theory. Ptolemy in 150 AD ridiculed Aristarchus’ solar system, falsely claiming that if it was true, with the earth spinning daily, clouds would be continuously shooting over the equator at 1000 miles/hour, and people would be thrown off by the motion. The problem with a false theory being defended for non-falsifiable applications is that it becomes dogma, leading to unwarranted attacks on more useful ideas.

I don’t agree that it was a giant intellectual and scientific achievement of the time. It was a step backwards from Aristarchus’ earlier solar system theory.

Going back to what you say you do not understand: you claimed that indirect applications of a theory provide weak support for the main theory. I point out that if indirect applications provide weak support as you claim, then direct applications of a theory, such as those of classical theories, will by analogy provide relatively strong support. Since you are ignoring other criteria (like the existence of alternative theories which do a better job) in judging whether ST is deemed to be supported by AdS/CFT and company, it follows from your way of judging support for a theory that, since the direct support for classical electromagnetism is relatively strong, classical electromagnetism would win out over QED, which has fewer specifically unique predictions. Your argument that indirect applications lend some support to string theory is a big step backwards scientifically. Indirect applications don’t support a theory at all; successful falsifiable predictions of the main claims of the theory are needed to provide any support. Even then, the theory isn’t proved. Your statement on Not Even Wrong supports a retreat from Popper’s criterion of science back to the kind of low standards which accommodate fashionable nonsense: ad hoc modelling that doesn’t lead to progress in fundamental physics, but instead creates a belief system akin to religious groupthink, which becomes dogma and leads to correct alternative ideas being ignored or falsely dismissed.

Update:

I just have to quote a new attack on Dr Woit on the Not Even Wrong blog, which is so absurd and stuck-up it makes me laugh out loud!

http://www.math.columbia.edu/~woit/wordpress/?p=1630&cpage=1#comment-46967

‘This blog indeed has some scientific content, but in my opinion it does not qualify as genuine science research activity. You mentioned BRST, which is indeed a scientific topic, but presenting it in a blog, without peer review, without going through the usual channels of academic research, it remains the same category as science journalism and popular science writing. For example, if you would submit your work on BRST to a science journal where others would have the chance to seriously review it and it would get published, it would be a different story. But so far you did not do that. Even if you did, the publication on the blog, in my opinion, still doesn’t count as scientific research.

‘… this is not an attack on you, but the fact must be stated that you operate in an entirely different way than ordinary scientists who are doing active research.’ – Troy (anonymous attack on Dr Woit)

I’ve got to say that this is the funniest comment ever! It’s the most stuck-up example of officialdom in science, groupthink, etc., ever. Especially the ending: ‘you operate in an entirely different way than ordinary scientists’. What about the non-falsifiable speculations of string theorists? What about the culture of groupthink anti-science lies in the increasingly filthy mainstream peer-reviewed journals:

‘String theory has the remarkable property of predicting gravity’: false claim by Edward Witten in his grandiose piece Reflections on the Fate of Spacetime in the April 1996 issue of Physics Today,

and what about the use of such peer-reviewed mainstream lies to censor out the correct facts, as occurred when – to give just one little example – I submitted a paper for ‘peer-review’ to Classical and Quantum Gravity and the editor sent it to string theorists, who sent back a report (which the editor forwarded to me without the names of the ‘peer-reviewers’) that ignored the physics and simply dismissed it for not being string theory work!

This argument that anybody who works far outside the fashionable mainstream, where normal peer-review breaks down (because of a lack of any genuine ‘peers’ capable of reviewing the work), can be dismissed as not a scientist because ‘you operate in an entirely different way than ordinary scientists’, in terms of publishing policy, is immensely funny! The way ‘ordinary scientists’ work on string ‘theory’ is a failure, and no amount of peer-review, hype, lying and mutual backslapping congratulations between members of the mainstream string camp will turn their non-falsifiable speculations into scientific facts. All those people have is mutual citation counts, an indicator of fashion not fact, and they really believe that popularity and fashion are scientific criteria! They believe that because they are failures judged by the real criteria of science, falsifiable scientific predictions and experimental checks of key ideas.

(It’s quite appropriate that the anonymous attacker used the name ‘Troy’, the besieged city which was so gullible that it let in the enemy soldiers who had hidden inside a large wooden horse presented to Troy as a gift. Believers of absurd claims for 10/11 dimensional string theory that cannot be tested are just as gullible.)

Update:

Copy of a comment to Louise’s blog:

http://riofriospacetime.blogspot.com/2009/02/race-for-higgs-or-no-higgs.html

The interesting thing about the Higgs field is that it is linked to quantum gravity by being gravitational “charge”, i.e. mass. So sorting out the electroweak symmetry breaking mechanism in the Standard Model is a major step towards understanding the nature of mass and therefore the charge of quantum gravity. This is a point Dr Woit makes in his 2002 paper on electroweak symmetry, where he shows that you can potentially come up with symmetry groups that give the chiral symmetry features of the Standard Model using Lie and Clifford algebras. The Higgs field, like string theory, is something not yet observed but prematurely celebrated and treated as orthodoxy. But it is not confirmed and is not part of the Standard Model symmetries, U(1) x SU(2) x SU(3). These symmetry groups describe the observed and known particles and symmetries of the universe, not the Higgs boson(s) and graviton. There is no evidence that U(1) x SU(2) is broken at low energy by a Higgs field. This symmetry is not there at low energy, but that doesn’t prove that the Higgs mechanism breaks it!

In addition to providing mass to SM particles, the role of the Higgs field is to break the electromagnetic interaction U(1) away from the whole U(1) x SU(2) x SU(3) symmetry, so that only U(1) exists at low energy because its gauge boson is massless (it doesn’t couple to the supposed Higgs field) unlike the other gauge bosons which acquire mass by coupling to the Higgs field boson(s).

The way a Higgs field is supposed to break electroweak symmetry is to give mass to all SU(2) weak gauge bosons at low energy, but leave them massless at high energy where you have symmetry.

This is just one specific way of breaking the U(1) x SU(2) symmetry, which has no experimental evidence to justify it, and it is not the simplest way. One simple way of adding gravitons and masses to the SM, as seen from my mechanistic gauge interaction perspective, might be that, instead of having a Higgs field to give mass to all SU(2) gauge bosons at low energy but to none of them at high [i.e. above electroweak unification] energy, we could have a chiral effect where one handedness of the SU(2) gauge bosons always has mass and the other is always massless.

The massless but electrically charged SU(2) gauge bosons then replace the usual U(1) electromagnetism, so you have positively charged massless bosons around protons giving rise to the positive electric field observed in the space there, and negatively charged massless bosons around electrons. (This model can causally explain the physics of electromagnetic attraction and repulsion, and makes falsifiable predictions about the strength of the electromagnetic interaction.) The massless, uncharged SU(2) gauge boson left over is the graviton, which explains why the gravitational coupling is 10^40 times weaker than electromagnetism, in terms of the different ways in which exchanged charged massless and uncharged massless gauge bosons interact with all the particles in the universe.

The one handedness of SU(2) gauge bosons which does have mass then gives rise to the weak interaction as observed so far in experiments, explaining chiral symmetry because only one handedness of particles can experience weak interactions.

So my argument is that the Higgs mechanism for mass is a wrong guess.

Symmetry is hidden in a different way. The gauge bosons of electromagnetism [and gravity] are the one massless handedness of SU(2) gauge bosons, the [massive handedness of SU(2) gauge bosons being the] particles that mediate short-range weak interactions. Although SU(2) x SU(3) expresses the symmetry of this theory, it is not a unified theory because SU(3) strong interactions shouldn’t have identical coupling strength to SU(2) at arbitrarily high energy: SU(2) couplings increase with energy due to seeing less vacuum polarization shielding, and this energy is at the expense of the SU(3) strong interaction which is physically powered by the energy used from SU(2) in producing polarized pair production loops of vacuum particles.

So my argument is that the symmetry of the universe is SU(2) x SU(3). Here, SU(3) is just as in the mainstream Standard Model, but SU(2) does a lot more than just weak interactions; massless versions of its 3 gauge bosons also provide electromagnetism (the 2 electrically charged massless gauge bosons) and gravity (the single electrically uncharged massless gauge boson is a spin-1 graviton).

Instead of the vacuum being filled with a Higgs field of massive bosons that mire charges, a discrete number of massive bosons interact via the electromagnetic interaction with each particle to give it mass; the origin of mass/gravitational charge as distinct from electromagnetic charge arises because the discrete number of massive bosons which interact with each fundamental particle (by analogy to a shell of electrons around a nucleus) each interact directly with gravitons. Electromagnetic charges (particle cores) do not interact directly with gravitons, only indirectly via interaction with massive bosons in the vacuum. This models all lepton and hadron masses if all the massive bosons in the vacuum have a mass equal to that of the Z_0 weak gauge boson, i.e. 91 GeV. I have to try to write up a paper on this.

My (now old) blog post which includes this topic is badly in need of being rewritten and condensed down a lot, to improve its clarity. I’m trying to follow the work of Carl and Kea with respect to neutrino mass matrices and extensions of the Koide formula to hadron masses, as well as working my way through Zee’s book, which tackles most of the questions in quantum field theory which motivate my interest (unlike several other QFT books, such as most of Weinberg).

At first glance, Ryder’s second edition QFT book seemed more accessible than Zee, but it turns out that the best explanation Ryder gives is of the tensor form of Maxwell’s equations and how it relates to the vector calculus forms, which is neat. Zee gives path integral calculations for fundamental forces in gauge theory and for QED essentials such as perturbation theory for calculating magnetic moments, which I find more motivating than the totally meaningless drivel that takes up vast amounts of math yet calculates nothing in several other QFT books (particularly those which end up declaring the beauty of string theory in the final part!).

Update: I should add that Ryder’s 2nd ed., pp. 298-306 (section 8.5, ‘The Weinberg-Salam Model’) is also very important and well written.

Update: the funniest blog comment I’ve read so far is Woit’s summary of a blog post by Motl, ‘He does draw some historical lessons, noting correctly that a theory developed for one purpose may turn out not to work for that, but find use elsewhere. For instance, a theory once thought to be a spaceship capable of giving a TOE may turn out to be a toaster capable of approximately describing the viscosity of a quark-gluon plasma….’

Update (25 March 2009): Dr Woit has another summary in his blog post called ‘The Nature of Truth’, which is also well worth quoting:

‘It is the fact that one needs to postulate a huge landscape in string theory in order to have something complicated and intractable enough to evade conflict with experiment that is the problem. … The failure … is … attributable to … the string theory-based assumption that fundamental physical theory involves a hopelessly complicated set of possibilities for low-energy physics.’

Emphasis added. It’s nice to document Dr Woit’s occasional hopeful argument that hopelessly complicated theories that predict nothing are headed in the wrong direction. However, this doesn’t mean that he supports the simplicity of the approach in this blog post. Maybe if I can disentangle the nascent excitement of piecemeal advances reported on posts in this blog and write up a new properly structured scientific paper which sets out the information in a better presentation, it will be more worthy of attention. Still, there is a strong link between elitism – which Dr Woit supports – and extremely complicated mathematical modelling which leads nowhere. String theory has been successfully hyped because – although it doesn’t exist [is not even wrong] at the scientific falsifiability hurdle – it

(a) does exist in popular stringy hype with all sorts of fairy tales of extra spatial dimensions with branes, sparticles, and so on, and

(b) does contain a lot of mathematics, which makes it look impressive.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Update (3 April 2009): Dr Woit on 1 April wrote a sarcastic post on his weblog “Not Even Wrong” called Origin of the World, about the pseudo-scientific hyping of mainstream general relativity-based cosmology by the University of Arizona with big-name experts who know everything and are really humble in telling us so,

“The event will be webcast, so the rest of the world can get the inside dope … Arizona is putting on quite a show, with a major effort to attract cutting-edge researchers in physics to the state, including the recent announcement of proposed new legislation.”

To be truthful, I read Dr Woit’s post on 1 April, visiting the link above, and didn’t realise it was an April fool’s joke, because general relativity cosmologists are so crazy anyway that this just looked like what they would be doing. He should be careful: when people claiming to be scientists believe in, and try to sell the world, a stringy landscape containing an infinite or large (10^500) number of unseen parallel universes with 11 dimensions, with the anthropic principle selecting our universe, it’s hard to make an April fool’s day joke about them which is recognised as such.

There is however a serious comment on the post by Joey Ramone:

Joey Ramone says:
April 2, 2009 at 10:30 am
P.S. Here is the video for Rock’n’Roll Cosmology Center: http://www.youtube.com/watch?v=DhRALq8IsL4

Towards the end of the video, the LHC is turned on and it
1) proves string theory
2) finds seven multiverses
3) locates 17 higher dimensions
4) proves the anthropic principle
5) creates baby black holes which become bouncing universes
6) explodes before any of this can be recorded

————————————————————
So I guess when the Nobel Prize committee read Joey’s comment, they’ll award Professors Ed Witten and Lenny Susskind a prize for string theory’s spin-2 graviton prediction and the anthropic landscape’s prediction that the constants of nature are suitable to allow life to exist in the universe where humans happen to exist. Cool science!

But there’s always the risk that this won’t ever happen and that the genius of Witten and Susskind on superstring will be censored and suppressed, and will therefore go totally unnoticed and unhyped by the sinister media such as Woit.

In this regard, Juan R. gives the terrible censorship statistics that confront the bravery of censored brilliant poor stringers:

‘… I would note that history of science is full of theories which were initially considered crackpot (by some referee or even by entire communities), but broadly accepted at the end. It has been well documented at least 27 cases of future Nobel Laureates encountered resistance on part of scientific community towards their discoveries and instances in which 36 future Nobel Laureates encountered resistance on part of scientific journal editors or referees to manuscripts that dealt with discoveries that on later date would assure them the Nobel Prize. A beautiful example of last is the rejection letter to Hideki Yukawa by a referee of the Physical Review journal to be wrong in a number of important points: forces too small by a factor of 10-20, wrong spin dependence, etc. But his work was not wrong and, some years after, Yukawa received the Nobel Prize for that work.

‘Hermann Staudinger (Nobel Prize for Chemistry, 1953):

‘“It is no secret that for a long time many colleagues rejected your views which some of them even regarded as abderitic.”

‘Howard M. Temin (Nobel Prize for Physiology or Medicine, 1975):

‘“Since 1963-64, I had been proposing that the replication of RNA tumour viruses involved a DNA intermediate. This hypothesis, known as the DNA provirus hypothesis apparently contradicted the so-called ‘central dogma’ of molecular biology and met with a generally hostile reception…that the discovery took so many years might indicate the resistance to this hypothesis.” …

‘Among the more notorious instances of resistance to scientific discovery previous to existence of Nobels, we can cite the Mayer’s difficulties to publish a first version of the first law of thermodynamics. …’

Amusingly, Dr Woit replied to Juan’s statistics by dismissing them as an anti-scientific establishment rant, but didn’t delete it since it mentioned the censorship of Dr Woit’s own great-uncle who won a Nobel Prize for chemistry in 1953:

‘… Sure, there are lots of examples in history of good scientific ideas being discounted and suppressed, but going on about those doesn’t have much to do with the case at hand.

‘I’ll leave Juan’s last rant up for a personal reason. The chemist Hermann Staudinger was my great-uncle.’

In a blog post about Einstein’s own ‘peer-review’ dispute with the Physical Review editor in 1936 (Einstein was so affronted by so-called ‘peer-review’ upon encountering it for the first time, with his error-containing first draft of a paper on gravity waves, that he immediately complained ‘I see no reason to address the in any case erroneous comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere’, withdrew his paper and never submitted to that journal again), Professor Sean Carroll (who has Feynman’s old desk in his office at the California Institute of Technology) has amusingly written:

‘If there are any new Einsteins out there with a correct theory of everything all LaTeXed up, they should feel quite willing to ask me for an endorsement for the arxiv; I’d be happy to bask in the reflected glory and earn a footnote in their triumphant autobiography. More likely, however, they will just send their paper to Physical Review, where it will be accepted and published, and they will become famous without my help.

‘If, on the other hand, there is anyone out there who thinks they are the next Einstein, but really they are just a crackpot, don’t bother; I get things like that all the time. Sadly, the real next-Einsteins only come along once per century, whereas the crackpots are far too common.’

It’s amusing because Sean is claiming that he is:

1. better at spotting genuine work than Teller, Pauli, Bohr, Oppenheimer and others were when they decided Feynman’s work was nonsense at Pocono in 1948 (already discussed in detail in this post),

2. better than Pauli was when he dismissed the Yang-Mills theory in 1954 (already discussed in detail in this post), and generally

3. better than all the other ‘ignorant-in-the-new-idea but expert-in-frankly-obsolete-and-therefore-irrelevant-old-ideas’ critics of science.

Furthermore, he is assuming that anyone who wants to help science is really motivated by the desire for fame or its result, prizes. According to him, no censorship has ever really occurred in the world, because it would be illogical for anybody to censor a genuine advance! Seeing the history of the censorship of path integrals and Yang-Mills theory, building blocks of today’s field theories, Sean’s rant is just funny!

Sean amplifies his ignorant attack in a later post, quoting his earlier ‘advice’ and trying to justify it with more hot air:

‘You are not the only person from an alternative perspective who purports to have a dramatic new finding, and here you are asking established scientists to take time out from conventional research to sit down and examine your claims in detail. Of course, we know that you really do have a breakthrough in your hands, while those people are just crackpots. But how do you convince everyone else? All you want is a fair hearing.

‘Scientists can’t possibly pay equal attention to every conceivable hypothesis, they would literally never do anything else. Whether explicitly or not, they typically apply a Bayesian prior to the claims that are put before them. Purported breakthroughs are not all treated equally; if something runs up against their pre-existing notions of how the universe works, they are much less likely to pay it any attention. So what does it take for the truly important discoveries to get taken seriously? … So we would like to present a simple checklist of things that alternative scientists should do in order to get taken seriously by the Man. And the good news is, it’s only three items! How hard can that be, really? True, each of the items might require a nontrivial amount of work to overcome. Hey, nobody ever said that being a lonely genius was easy. …

‘1. Acquire basic competency in whatever field of science your discovery belongs to. …

‘2. Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science. …

‘3. Present your discovery in a way that is complete, transparent, and unambiguous. …’

Duh! These three simple rules are just what Feynman and his acolyte Dyson, not to mention Yang and Mills, and all the others who were suppressed, actually followed! They are so obvious that everyone does spend a lot of time on these points before formulating a theory, while checking a theory, and when writing up the theory. Is Sean saying that Feynman, Dyson, Yang and Mills and everyone else were suppressed because they were ignorant of their field, ignored genuine objections, and were unclear? No, they were suppressed because of a basic flaw in human nature called fashion, which is exactly why Feynman later attacked fashion in science (after receiving his Nobel Prize in 1965, conveniently):

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The Trouble with Physics, 2006, p. 307).

‘The one thing the journals do provide which the preprint database does not is the peer-review process. The main thing the journals are selling is the fact that what they publish has supposedly been carefully vetted by experts. The Bogdanov story shows that, at least for papers in quantum gravity in some journals [including the U.K. Institute of Physics journal Classical and Quantum Gravity], this vetting is no longer worth much. … Why did referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don’t understand things.’ – Peter Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 223.

The one thing mainstream people don’t admit to is being ignorant or heaven forbid wrong. This is why string theory is so popular, it doesn’t make a falsifiable prediction so it isn’t even wrong.

Basically, Sean Carroll’s advice to people censored is the ‘let them eat cake!’ advice that Queen Marie Antoinette allegedly gave when receiving complaints that people had no bread to eat. In other words, useless advice that is not only unhelpful but is also abusive in the sense that it implies that there is an obvious solution to a problem that the other person is too plain stupid to see for themselves. However, maybe I’m wrong about Sean and he is really a genius and a nice guy to boot!

Mainstream cosmology: the big bang is an observational fact with evidence behind it, but General Relativity’s Lambda-CDM is a religion

Aristarchus in 250 B.C. argued that the earth rotates daily and orbits the sun annually. This was correct. But he was ignored for 17 centuries. In 150 A.D. the leading astronomer Ptolemy, author of a massive compendium on the stars, claimed to disprove Aristarchus’ solar system. If the earth was rotating towards the East, Ptolemy claimed, a person jumping up would always land to the West of the position they jumped from! The earth would be rotating at about 1,000 miles per hour near the equator (the circumference of the Earth in miles divided by 24 hours). Therefore, to Ptolemy and his naive admirers (everybody), Aristarchus was disproved. In addition, Ptolemy claimed that clouds would appear to zoom around the sky at 1,000 miles/hour.

Ptolemy’s disproof was in error. But the funny thing is, nobody argued with it. They preferred to believe the Earth-centred cosmology of Ptolemy as it made more sense intuitively. Sometimes scientific facts are counter-intuitive. The error Ptolemy made was ignoring Newton’s first law of motion, inertia: standing on the earth, you are being carried towards the East as the Earth rotates and you continue to do so (because there is no mechanism to suddenly decelerate you!) when you jump up. So you continue going Eastwards with the rotating Earth while you are in the air. So does the air itself, which is why the clouds don’t lag behind the Earth’s rotation (the Coriolis acceleration is quite different!). French scientist Pierre Gassendi (1592-1655) dropped stones from the mast of a moving sail ship to test Ptolemy’s claim, and disproved it. The stones fell to the foot of the mast, regardless of whether the ship was moving or not.

Now let’s consider the false application of general relativity to cosmology. The big bang idea was proposed first by Erasmus Darwin, grandfather of the evolutionist, in his 1790 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

The tragedy is that before evidence for the big bang came along, Einstein falsely believed that his general relativity – despite merely being a classical perturbative correction to classical physics which ignored quantum theory – would describe the universe.

Imagine what would have happened if the big bang had been discovered before general relativity: if general relativity had not been discovered by the time of Hubble’s 1929 discovery of redshifts correlating with distance, of Gamow, Alpher and Herman’s 1948 calculation that the big bang would produce the observed amounts of hydrogen and helium in the universe, and of Penzias and Wilson’s 1965 discovery of the redshifted heat flash of the big bang!

The spacetime dependence of light coming from great distances would have been studied more objectively, with the recognition that because we’re looking back in time as we look out to greater distances (due to the time taken for light to reach us), the Hubble law formulated as v = HR is misleading and is better stated as v = HcT, where T is the time past when the light was emitted.

Since c is the ultimate speed limit, setting v = c gives the age of the universe: c = HcT, thus T = 1/H. The age of the universe, t, at the time the light we now see from stars at distance R was emitted is then t = (1/H) – T, which rearranges to give T = (1/H) – t. Inserting this into v = HcT,

v = HcT = Hc[(1/H) – t]

differentiating this gives us the acceleration of such receding matter

a = dv/dt = d{Hc[(1/H) – t]}/dt

= d[c – (Hct)]/dt

= –Hc

= –6 × 10^-10 m/s^2.
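As a quick check of that number, here is a short numerical sketch (my own illustration; the Hubble parameter used, roughly 70 km/s per megaparsec, is an assumption, and modestly different choices of H give the ~6 × 10^-10 m/s^2 figure quoted above):

```python
# Numerical check of a = -Hc for an assumed Hubble parameter.

c = 299792458.0          # speed of light, m/s
Mpc = 3.0857e22          # metres per megaparsec
H = 70e3 / Mpc           # assumed Hubble parameter, ~2.27e-18 s^-1

a = -H * c
print(f"a = -Hc = {a:.1e} m/s^2")   # about -6.8e-10 m/s^2
```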

So cosmology would have predicted the acceleration of the universe based on observational facts! The tragedy of general relativity is that it confuses people into ad hoc modelling without quantum gravity, without mechanisms, with unexplained dark energy, and without falsifiable predictions. Once they predicted the acceleration, lacking general relativity’s infinitely adjustable pseudo-science they would have been able to apply empirical laws, Newton’s 2nd and 3rd laws of motion, to the acceleration of matter and thus predicted gravity quantitatively as I have shown. Hence, they would have had solid physics.

Just in case anyone reading this blog post disagrees with the Hubble recession law, see Ned Wright’s page linked here for the reasons why it is a fact, and if you need evidence for the basic facts of the big bang see the page linked here for a summary (unfortunately that page confuses speculative metrics of general relativity with the big bang theory, which doesn’t address quantum gravity in an expanding universe, but it does give some empirical data mixed in with the speculations required to fit general relativity to the facts). See Richard Lieu of the Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462. (If you need evidence for the other so-called ‘assumptions’ I use in ‘my theory’, which is not a theory but is called a proof on this blog page, e.g. if you think that, say, the formula for the surface area of a sphere or Newton’s F = ma is a speculation in a ‘theory’, then you simply need to learn basic mathematics and physics to understand which parts are fact and which – extra dimensions and so on – are speculative, and spend less time listening to stringers whose goal is to get research money for mixing up fact and fiction.)

The general-relativity-as-cosmology hype was started by Sir Arthur Eddington’s 1933 book The Expanding Universe (Pelican Books, New York, 1940). The lesson of Ptolemy’s error is not that we must believe (without any proof) that the Earth is not in the centre of the universe; it is that fashion is not in the centre of the universe! Mainstream cosmologists derive from the error of Ptolemy a conclusion which is diametrically opposed to scientific fact. They derive the conclusion that science is a religion which must believe the Earth is not in a special place, instead of deriving the conclusion that science is not a religion.

Hence, they merely change the religious belief objective instead of abandoning non-factual beliefs altogether: they substitute one prejudice for another prejudice, and start religiously believing that!

Professor Edward Harrison of the University of Arizona is religiously prejudiced in this way on pages 294-295 of his book, Cosmology: the Science of the Universe, 2nd ed., Cambridge University Press, London, 2000:

“… a bounded finite cloud of galaxies expanding at the boundary at the speed of light in an infinite static space restores the cosmic centre and the cosmic edge, and is contrary to modern cosmological beliefs.”

Who cares about f&%king modern cosmological beliefs? Science isn’t a fashion parade! Science isn’t a religion of believing that the Milky Way isn’t in a particular place. In any case, the major 3 mK anisotropy in the cosmic background radiation suggests that the Milky Way is nearly in the centre of the universe: it tells us the Milky Way has a speed relative to the cosmic background radiation emitted soon after the big bang, and that provides an order of magnitude estimate for the motion of the Milky Way matter since the big bang. Multiplying speed by age of universe, we get a distance which is a fraction of 1% of the horizon radius of the universe, suggesting we’re near the centre:

R. A. Muller, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, p. 64-74:

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s. It is noted that in a frame of reference moving with the original plasma emitted by the big bang, the blackbody radiation would have a temperature of 4500 K.’
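As a rough check of the “fraction of 1%” estimate above (my own arithmetic, using the 600 km/s figure from the Muller article): if the horizon radius is taken as roughly c times the age of the universe, the age cancels out and the ratio of the distance moved to the horizon radius is simply v/c.

```python
# Rough check: distance moved by the Milky Way since the big bang, as a
# fraction of the horizon radius (taken here as c times the age of the
# universe, so the age cancels out of the ratio).

v = 600e3            # m/s, Milky Way speed relative to the CMB (Muller)
c = 299792458.0      # m/s, speed of light

print(f"v/c = {v / c:.3%}")   # about 0.2%, i.e. a fraction of 1%
```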

Notice that I stated this in Electronics World and the only reaction I received was ignorance. One guy wrote an article – which didn’t even directly mention my article – in the same journal, claiming that the 3 mK anisotropy in the cosmic background radiation was too small to be accurately determined and should therefore be ignored! Duh! It is a massive anisotropy, detected by U2 aircraft back in the 70s, way bigger than the tiny anisotropy (the ripples which indicate the density fluctuations which are the basis for the formation of galaxy clusters in the early universe) discovered by the COBE microwave background explorer satellite with its liquid helium cold-load in 1992! The anisotropies in the cosmic background radiation were measured even more accurately by the WMAP satellite. It’s not ignored because it is inaccurate. It’s ignored due to religion/fashion:

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5. (This is a very political comment by them, and shows them acting in a very political – rather than purely scientific – light.)

‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, ‘New Pathways in Science’, v2, p39, 1935.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.

‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

If he was writing today, maybe he would have to reverse a lot of that to account for the hype-type “success” of string theory ideas that fail to make definite (quantitative) checkable predictions, while alternatives are censored out completely.

No longer could Dr Lakatos claim that:

“What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.”

It’s quite the opposite. The mainstream, dominated by string theorists like Jacques Distler and others at arXiv, can actually stop “silly” alternatives from going on to arXiv and being discussed, as they did with me:

http://arxiv.org/help/endorsement

‘We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’

What serious researcher is going to treat quantum field theory objectively and work on the simplest possible mechanisms for a spacetime continuum, when it will result in their censorship from arXiv, their inability to find any place in academia to study such ideas, and continuous hostility and ill-informed “ridicule” from physically ignorant string “theorists” who know a lot of very sophisticated maths and think that gives them the authority to act as “peer-reviewers” and censor stuff from journals that they refuse to first read?

Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook

Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories.

Yours sincerely,
Stanley G. Brown, Editor,
Physical Review Letters

Now, why has this nice genuine guy still not published his personally endorsed proof of what is a “currently accepted” prediction for the strength of gravity? Will he ever do so?

“… in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory.”

– Sir Roger Penrose, The Road to Reality, Jonathan Cape, London, 2004, page 896.

Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has never been observed! Despite this, the censorship of the facts by mainstream “stringy” theorists persists, with professor Jacques Distler and others at arXiv believing with religious zeal that (1) the rank-2 tensors of general relativity prove spin-2 gravitons and (2) string theory is the only consistent theory for spin-2 gravitons, despite Einstein’s own warning shortly before he died:

‘I consider it quite possible that physics cannot be based on the [smooth geometric] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air.’

– Albert Einstein in a letter to friend Michel Besso, 1954.

Sir Arthur Eddington versus Edward Milne

Sir Arthur Eddington’s book The Expanding Universe first popularized the major prejudice:

“For a model of the universe let us represent spherical space by a rubber balloon. Our three dimensions of length, breadth, and thickness ought all to lie on the skin of the balloon; but there is only room for two, so the model will have to sacrifice one of them. That does not matter very seriously. Imagine the galaxies to be embedded in the rubber. Now let the balloon be steadily inflated. That’s the expanding universe.”

(Eddington, quoted on page 294 of Harrison’s Cosmology, 2nd ed., Cambridge University Press, 2000. This statement can also be found on page 67 of the 1940 edition of The Expanding Universe, Pelican, New York.)

This confusion based on general relativity is wrong:

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

The radial contraction (1/3)MG/c^2 of spacetime around a mass (the Earth’s radius is contracted 1.5 mm as predicted by general relativity) is a real pressure effect from the quantum gravitons. General relativity attributes this to distortion by a fourth dimension (time, acting as an extra spatial dimension!) so that the radial contraction without transverse contraction (circumference contraction) doesn’t affect Pi. But you get a better physical understanding from quantum gravity, as explained on previous posts which treat this in detail as a quantum gravity effect: the pressure of gravitons squeezes masses. It also causes the Lorentz-FitzGerald contraction. You get the predictions of restricted and general relativity from quantum gravity, but without the mystery and religious manure.
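As a quick check of the 1.5 mm figure quoted above, here is the arithmetic for (1/3)MG/c^2 with standard values for the Earth’s mass and the constants (my own sketch of the calculation):

```python
# Check of the Earth's radial contraction (1/3)MG/c^2 quoted above.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
c = 299792458.0      # speed of light, m/s

contraction = M * G / (3.0 * c ** 2)
print(f"radial contraction = {contraction * 1e3:.2f} mm")   # about 1.48 mm
```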

Sir Arthur Eddington in The Expanding Universe (1933; reprinted by Pelican, New York, 1940) writes on page 21:

“The unanimity with which the galaxies are running away looks almost as though they had a pointed aversion to us. We wonder why we should be shunned as though our system were a plague spot in the universe. But that is too hasty an inference, and there is really no reason to think that the animus is especially directed against our galaxy. … In a general dispersal or expansion every individual observes every other individual to be moving away from him. … We should therefore no longer regard the phenomenon as a movement away from our galaxy. It is a general scattering apart, having no particular centre of dispersal.”

Notice the sneaky way that Eddington moves from fact to speculative assertion: he has no evidence whatsoever that there is no centre of dispersal, he merely shows that that is one possibility. Yet – after showing that it is a possibility – he then claims that we should regard it as being the correct explanation, with no science to back up why he is selecting that explanation! But then he adds more honestly on the same page:

“I do not want to insist on these observational facts dogmatically. It is granted that there is a possibility of error or misinterpretation.”

On page 22 he writes:

“For the present I make no reference to any ‘expansion of space’. I am speaking of nothing more than the expansion or dispersal of a material system. Except for the large scale of the phenomenon the expansion of the universe is as commonplace as the expansion of a gas.”

On page 25 he writes about prejudice in science:

“A scientist commonly professes to base his beliefs on observations, not theories. Theories, it is said, are useful in suggesting new ideas and new lines of investigation for the experimenter; but ‘hard facts’ are the only proper ground for conclusion. I have never come across anyone who carries this profession into practice [Eddington never met me] – certainly not the hard-headed experimentalist, who is the more swayed by his theories because he is less accustomed to scrutinise them. Observation is not sufficient. We do not believe our eyes unless we are first convinced that what they appear to tell us is credible.”

On page 30 he writes about the effect of positive cosmological constant, positive lambda:

“It is a dispersive force like that which I imagined as scattering apart the audience in the lecture-room. Each thinks it is directed away from him. We may say that the repulsion has no centre, or that every point is a centre of repulsion.

“Thus in straightening out his law of gravitation to satisfy ideal conditions, Einstein almost inadvertently added a repulsive scattering force to the Newtonian attraction of bodies. We call this force the cosmological repulsion, for it depends on and is proportional to the cosmological constant. It is utterly imperceptible within the solar system or between the sun and neighbouring stars.

“But since it increases proportionately to the distance we have only to go far enough to find it appreciable, then strong, and ultimately overwhelming. In practical observation the farthest we have yet gone is 150 million light-years. Well within that distance we find that celestial objects are scattering apart as if under a repulsive force. Provisionally we conclude that here cosmological repulsion has become dominant and is responsible for the dispersion.

“We have no direct evidence [in 1933] of an outward acceleration of the nebulae, since it is only the velocities that we observe. But it is reasonable to suppose that the nebulae, individually as well as collectively, follow the rule – the greater the distance the faster the recession. If so, the velocity increases as the nebula recedes, so that there is an outward acceleration. Thus from the observed motions we can work backwards and calculate the repulsive force, and so determine observationally the cosmological constant lambda.”

On page 61, Eddington states:

“To suppose that velocity of expansion in the (fictitious) radial direction involves kinetic energy, may seem to be taking our picture of spherical space too literally; but the energy is so far real that it contributes to the mass of the universe. In particular a universe projected from B to reach A necessarily has greater mass than one which falls back without reaching the vertical (Einstein) position.

“Lemaitre does not share my idea of an evolution of the universe from the Einstein state. His theory of the beginning is a fireworks theory [big bang] – to use his own description of it. The world began with a violent projection from position B, i.e., from the state in which it was condensed to a point or atom; the projection was strong enough to carry it past the Einstein state, so that it is now falling down towards A as observation requires.”

Then on page 65 Eddington discusses Edward Milne’s work on the physics of the real big bang:

“E. A. Milne [Nature, 2 July 1932] has pointed out that if initially the galaxies, endowed with their present speeds, were concentrated in a small volume, those with highest speed would by now have travelled farthest. If gravitational and other forces are negligible, we obtain in this way a distribution in which speed and distance from the centre are proportional. Whilst accounting for the dependence of speed on distance, this hypothesis creates a new difficulty as to the occurrence of the speeds. To provide a moderately even distribution of nebulae up to 150 million light years distance, high speeds must be very much more frequent than low speeds; this peculiar anti-Maxwellian distribution of speeds becomes especially surprising when it is supposed to have occurred originally in a compact aggregation of galaxies.”

The error here is that the big bang is not a simple explosion: graviton exchange between fundamental particles of mass causes the accelerating expansion. It’s also curious that Eddington and Milne seem to entirely neglect the fact that as we look to greater distances, we’re looking to earlier times in the universe, when the universe was more compressed and therefore of higher density! So it seems that Milne backed the common-sense big bang idea, published in the mainstream journal Nature, but got the details wrong, paving the way for general relativity to be preferred as an endlessly adjustable cosmological model. So on page 67 Eddington writes (as quoted above):

“For a model of the universe let us represent spherical space by a rubber balloon. Our three dimensions of length, breadth, and thickness ought all to lie on the skin of the balloon; but there is only room for two, so the model will have to sacrifice one of them. That does not matter very seriously. Imagine the galaxies to be embedded in the rubber. Now let the balloon be steadily inflated. That’s the expanding universe.”

He adds on pages 67-8:

“The balloon, like the universe, is under two opposing forces; so we may take the internal pressure tending to inflate it to correspond to the cosmological repulsion, and the tension of the rubber trying to contract it to correspond to the mutual attraction of the galaxies, although here the analogy is not very close.”

On page 103, Eddington popularises another speculation, namely the large numbers hypothesis, stating that the universe contains about 10^79 atoms, a number which is about the square of the ratio of the electromagnetic force to gravitational force between two unit charges (electron and proton). However, he doesn’t provide a checkable theoretical connection, just numerology.
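The order-of-magnitude coincidence itself is easy to verify with standard constants (this sketch only checks the numbers Eddington was playing with, not any theoretical connection):

```python
# Ratio of electrostatic to gravitational force between an electron and a proton
# (the separation cancels out of the ratio), and its square.
k   = 8.988e9      # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C
G   = 6.674e-11    # gravitational constant, N m^2 kg^-2
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

ratio = k * e**2 / (G * m_e * m_p)
print(ratio)       # ~2.3e39
print(ratio**2)    # ~5e78, i.e. of the order of 10^79
```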

On page 111 he goes further into numerology by trying to find an ad hoc connection between the Sommerfeld dimensionless fine structure constant (137.036…) and the ratio of proton to electron mass, suggesting that the two solutions for mass m of the quadratic equation 10m² – 136m + 1 = 0 are in the ratio of the mass of the proton to the mass of the electron. The numbers 10 and 136 come from very shaky numerology (maybe we have 10 fingers, so that explains 10, and 137 – 1 degree of freedom = 136). The result is not accurate when the latest data for the mass of the proton and electron are put into the equation. It worked much better with the now-obsolete data Eddington had available in 1932.
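The arithmetic is simple to check (this sketch just solves Eddington’s quadratic and compares the root ratio with the modern measured proton-to-electron mass ratio):

```python
import math

# Roots of Eddington's quadratic 10m^2 - 136m + 1 = 0.
a, b, c = 10.0, -136.0, 1.0
disc = math.sqrt(b * b - 4 * a * c)
m1 = (-b + disc) / (2 * a)
m2 = (-b - disc) / (2 * a)

print(m1 / m2)     # ~1847.6 from the quadratic
print(1836.15)     # modern measured proton/electron mass ratio, for comparison
```

The root ratio comes out near 1848, against the measured 1836.15, a discrepancy of roughly 0.6% rather than something within experimental error.

On page 116 Eddington states: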

“It would seem that the expansion of the universe is another one-way process parallel with the thermodynamical running-down [the second law of thermodynamics]. One cannot help thinking that the two processes are intimately connected; but, if so, the connection has not yet been found.”

It’s obvious that the expansion of the universe is linked to the second law of thermodynamics if you think as follows.

First, if the universe was static (not expanding), the radiation of energy by stars would lead to everywhere gradually reaching a thermal equilibrium, in which everything would have equal temperature. In this event, there would be “heat death” because no work would be possible: there would be no heat sink anywhere so you would be unable to transfer waste energy anywhere. The energy all around you would be useless because it could not be directed. You would no more be able to extract useful (work-causing) energy from that chaotic energy than you can extract power from the air molecules bombarding you randomly from all directions at 500 metres per second average speed all the time! You have to have an asymmetry to get energy to do useful work, and without a heat sink you get nowhere: energy doesn’t go anywhere or produce any effect you want. It just makes you too hot.

Second, considering the expansion of the universe: it prevents thermal equilibrium by ensuring that the heat every star radiates into space is redshifted, so that other stars receive less power than each star emits. The expansion of the universe therefore provides a heat sink, preventing the thermal equilibrium and “heat death” predicted by the second law of thermodynamics for a static universe.
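As a minimal illustration of the bookkeeping (standard redshift arithmetic, not anything specific to the mechanism argued for in this post): a photon emitted with energy E by a receding star arrives with energy E/(1 + z), and the arrival rate of photons is diluted by a further factor of (1 + z), so the power received from that star scales as

P_received = P_emitted/(1 + z)²,

which is always less than the power the star pours out; hence a heat sink always exists while the recession continues.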

In his conclusion on page 118, Eddington retreats from his earlier arrogant claims, stating:

“Science has its showrooms and its workshops. The public today, I think rightly, is not content to wander round the showrooms where the tested products are exhibited; the demand is to see what is going on in the workshops. You are welcome to enter; but do not judge what you see by the standards of the showroom.

“We have been going round a workshop in the basement of the building of science. The light is dim, and we stumble sometimes. About us is confusion and mess which there has not been time to sweep away. The workers and their machines are enveloped in murkiness. But I think that something is being shaped here – perhaps something rather big. I do not quite know what it will be when it is completed and polished for the showroom. But we can look at the present designs and the novel tools that are being used in its manufacture; we can contemplate too the little successes which make us hopeful.”

This reminds you of the kind of political spin used by cranks to defend crackpot extradimensional stringy theory. Now that we have finished looking at Eddington’s hype for a lambda-CDM general relativity metric of the expanding universe, let’s return to Professor Edward Harrison’s mainstream Cosmology: The Science of the Universe, 2nd ed., Cambridge University Press, 2000. There are some sections from pages 294 to 507 which are worth discussing. On page 294 Harrison states:

“The [expanding universe] debate began at a British Association science meeting in 1931 and was published as a collection of contributions in Nature under the title ‘The evolution of the universe.’ From this symposium Edward Milne emerged as a principal contributor … But Milne rejected general relativity and strenuously opposed the expanding space paradigm [matter recedes from other matter, but the fabric of space does not expand; it flows around moving fundamental particles and pushes in at the rear, giving a net inward motion of spacetime fabric – exerting a pressure causing gravity – when there is a net acceleration of matter radially away from the observer]. He refused to attribute to space … the properties of curvature and expansion. … Of the expanding space paradigm, he said in 1934, ‘This concept, though mathematically significant, has by itself no physical content; it is merely the choice of a particular mathematical apparatus for describing and analyzing phenomena. An alternative procedure is to choose a static space, as in ordinary physics, and analyze the expansion phenomena as actual motions in this space’.”

It is this statement which infuriated Harrison into responding on the same page:

“… a bounded finite cloud of galaxies expanding at the boundary at the speed of light in an infinite static space restores the cosmic centre and the cosmic edge, and is contrary to modern cosmological beliefs.”

So, facts are contrary to modern pseudoscientific religious beliefs, so the facts must be ignored! Nice one, Harrison. Here’s another one on page 428:

“An explosion occurs at a point in space, whereas the big bang embraces all of space. In an explosion, gas is driven outward by a steep pressure gradient, and a large pressure difference exists between the centre and edge of the expanding gas. In the universe … no center and edge exist.”

Harrison has, you see, been throughout the entire universe and scientifically confirmed that “the big bang embraces all of space” and that there is no center and no edge. Wow! Hope he gets a Nobel Prize for his amazing discovery. Or maybe he will get the Templeton Prize for Religion instead? It’s hard to know what to do to discredit fashionable horsesh*t that is believed by many with religious zeal – but no evidence whatsoever – to be fact.

Notice that Harrison’s claim that an explosion has a steep pressure gradient bears no relation to the facts of explosions whatsoever: the 1.4 megaton Starfish nuclear test at 400 km altitude sent out its explosive energy as X-rays and radiation, and did not create any blast wave or pressure gradient. There was no sound from the explosion, just a flash of light and other radiation. For the full facts about the 1962 nuclear explosions in space, see my posts here, here, here and here. In a low altitude air burst, you get a pressure gradient, but in a high altitude explosion in space you don’t. Harrison is totally confused about explosions. Does he deny on religious grounds that supernova explosions are “explosions” and choose to call them something else instead? Sir Fred Hoyle, who named the “big bang” explosive universe, was a plain-talking Yorkshire man who believed in being clear. He wrote:

‘But compared with a supernova a hydrogen bomb is the merest trifle. For a supernova is equal in violence to about a million million million million hydrogen bombs all going off at the same time.’ – Sir Fred Hoyle (1915-2001), The Nature of the Universe, Pelican Books, London, 1963, p. 75.

Nuclear explosions are very helpful in understanding the world in general:

‘Dr Edward Teller remarked recently that the origin of the earth was somewhat like the explosion of the atomic bomb…’ – Dr Harold C. Urey, The Planets: Their Origin and Development, Yale University Press, New Haven, 1952, p. ix.

‘In fact, physicists find plenty of interesting and novel physics in the environment of a nuclear explosion. Some of the physical phenomena are valuable objects of research, and promise to provide further understanding of nature.’ – Dr Harold L. Brode, The RAND Corporation, ‘Review of Nuclear Weapons Effects,’ Annual Review of Nuclear Science, Volume 18, 1968, pp. 153-202.

‘It seems that similarities do exist between the processes of formation of single particles from nuclear explosions and formation of the solar system from the debris of a [7×10^26 megatons of TNT equivalent type Ia] supernova explosion. We may be able to learn much more about the origin of the earth, by further investigating the process of radioactive fallout from the nuclear weapons tests.’ – Dr P. K. Kuroda, University of Arkansas, ‘Radioactive Fallout in Astronomical Settings: Plutonium-244 in the Early Environment of the Solar System,’ Radionuclides in the Environment (Dr E. C. Freiling, Symposium Chairman), Advances in Chemistry Series No. 93, American Chemical Society, Washington, D.C., 1970, pp. 83-96.

Copy of a comment to Arcadian Functor:

Thanks for that link:

“Theory Failure #1: In order to make string theory work on paper our four dimensional real world had to be increased to eleven dimensions. Since these extra dimensions can never be verified, they must be believed with religious-like faith — not science.

Theory Failure #2: Since there are an incalculable number of variations of the extra seven dimensions in string theory there are an infinite number of probable outcomes.

Theory Failure #3: The only prediction ever made by string theory — the strength of the cosmological constant — was off by a factor of 55, which is the difference in magnitude of a baseball and our sun.

Theory Failure #4: While many proponents have called string theory “elegant,” this is the furthest thing from the truth. No theory has ever proven as cumbrous and unyielding as string theory. With all of its countless permutations it has established itself to be endless not elegant.

Theory Failure #5: The final nail in the coffin of string theory is that it can never be tested.”

Point #2 is wrong and should say a landscape of 10^500 different universes result from the different compactifications of the 6-d manifold, not an incalculable number.

Failure #3 contains a typing error and should read 10^55 not 55. If string theory predicted the cosmological constant off by just a factor of 55, it would be hailed a success. (Interestingly I predicted the acceleration of the universe and hence CC accurately in 1996 and published it, but nobody wants to know because it’s not fashionable to build theory on facts!)

It’s becoming clear that string theory won’t die, and attacking it just leads to greater censorship of alternative ideas.

This is precisely because stringers defend themselves by suppressing alternatives, either by taking the funding and research students who would otherwise go into alternatives, or by directly deleting papers from arXiv as occurred to me, or by pretending to be “peers” of people working on alternative ideas so they can work as “peer-reviewers” and censor alternative ideas from journal publication simply for not being related to string theory. Then they are free to proclaim without the slightest embarrassment that no alternatives exist.

So there is no easy solution to this problem, and pointing out the problem accomplishes nothing. It’s like those people who pointed out that Hitler was up to no good in the 30s before war was declared. Such people were simply ignored.

‘Most people would think that someone who runs around saying they have a wondrous TOE that predicts amazing new things, but they’re not sure whether the amazing new things happen at the Planck scale or the scale of a galaxy, would have to be almost by definition a crackpot.’ – Dr Peter Woit, December 18, 2004 at 12:03 pm, http://www.math.columbia.edu/~woit/wordpress/?p=123&cpage=1#comment-1782

‘Peter and a large number of others, including myself, are looking for a specific predictions which can be tested with specific experiments. If you cannot advance any, then what you do does not deserve the label “physics”.’ – Dr Chris Oakley, December 18, 2004 at 2:20 pm, http://www.math.columbia.edu/~woit/wordpress/?p=123&cpage=1#comment-1779

‘Chris Oakley, you may be looking for unique exact predictions or whatever, but what you’re looking for is absolutely irrelevant for the question how Nature works. If there happens to be a cosmic superstring – macroscopic fundamental string, for example – 10,000 light years from the Sun, then it will become a fact of Nature and we will have to live with it – and scientists will have to give a proper explanation. If this turns out to be the case, it will be absolutely obvious that no one could have predicted this string in advance. … Let me emphasize once again that cynicism of sourballs like you, Chris Oakley, has no consequences for physics whatsoever. You’re just annoying and obnoxious, but your contributions to science are exactly zero. If you think that it is easy to make reliable and unique new predictions of phenomena beyond the Standard Model, try to compete with us.’ – String theorist [then an Assistant Professor of Physics at Harvard University] Dr Lubos Motl, December 18, 2004 at 2:59 pm, http://www.math.columbia.edu/~woit/wordpress/?p=123&cpage=1#comment-1778

‘People like Oakley should be dealt with by the US soldiers with the gun – and I am sort of ashamed to waste my time with such immoral idiots.’ – Dr Lubos Motl, December 19, 2004 at 10:11 am, http://www.math.columbia.edu/~woit/wordpress/?p=123&cpage=1#comment-1768

‘Peter:

‘What’s the point writting a letter to ARXIV? They already said they are not interested in your opinion. Predictable it will take another 3 months before you see any response and the only response you will get is that they ignore you.

‘And it is ridiculous for you to defend ARXIV for having to protect themselves against crackpots. They welcome the biggest crackpot of all, the super string theory. I know that till this day you are not willing to consider super string theory as a crackpot, and you still want to consider it as a science. The point is any theory that fails to make a useful prediction is considered crackpot. It doesn’t matter that string theorists are honestly making the effort to try to come up with a prediction. That is simply not good enough to differentiate their theory from crackpot. All crackpot theorists DO honestly hope for a useful prediction.

‘Until string theorists can show that they can make meaningful predictions, and that their prediction can be verified by experiments, I think it is fair and square that super string theory be classified as a crackpot theory. ARXIV therefore is a major crackpot depository.

‘You might as well instead write to New York Times, or any of the public media.’ – Quantoken, February 25, 2006 at 4:42 am, http://www.math.columbia.edu/~woit/wordpress/?p=353&cpage=1#comment-8765

‘We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’ – http://arxiv.org/help/endorsement

‘They don’t want any really strong evidence of dissent. This filtering means that the arxiv reflects pro-mainstream bias. It sends out a powerful warning message that if you want to be a scientist, don’t heckle the mainstream or your work will be deleted.

‘In 2002 I failed to get a single brief paper about a crazy-looking yet predictive model on to arxiv via my university affiliation (there was no other endorsement needed at that time). In emailed correspondence they told me to go get my own internet site if I wasn’t contributing to mainstream [stringy] ideas.’ – nigel, February 24, 2006 at 5:26 am, http://www.math.columbia.edu/~woit/wordpress/?p=353&cpage=1#comment-8728

‘Witten has made numerous major contributions to string theory, most famously in 1995 after coming up with ideas which spawned a more general 11-dimensional framework called M-theory while on a flight from Boston to Princeton.

‘The 1980s and 90s were dotted with euphoric claims from string theorists. Then in 2006 Peter Woit of Columbia University in New York and Lee Smolin of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, published popular books taking string theory to task for its lack of testability and its dominance of the job market for physicists. Witten hasn’t read either book, and compares the “string wars” surrounding their publication – which played out largely in the media and on blogs – to the fuss caused by the 1995 book The End of Science, which argued that the era of revolutionary scientific discoveries was over. “Neither the publicity surrounding that book nor the fact that people lost interest in talking about it after a while reflected any change in the intellectual underlying climate.”

‘Not that Witten would claim string theory to be trouble-free. He has spent much of his career studying the possible solutions that arise when projecting string theory’s 10 or 11 dimensions onto our 4D world. There is a vast number of possible ways to do this – perhaps 10^500 by some counts. But a decade ago what seemed like a problem became a virtue in the eyes of many string theorists. Astronomers discovered that the expansion of the universe is accelerating. This suggests that what appears to us as empty space is in fact pervaded by a mysterious substance characterised by a concept dreamed up by Einstein called the “cosmological constant”.

‘Witten calls it the most shocking discovery since he’s been in the field. … Witten majored in history and then dabbled in economics before switching to mathematics and physics.’ – Matthew Chalmers, ‘Inside the tangled world of string theory’, New Scientist magazine issue 2703, 15 April 2009, http://www.newscientist.com/article/mg20227035.600-inside-the-tangled-world-of-string-theory.html

‘Aside from everything else, what exactly is the prediction that string theory made about RHIC?

‘That viscosity over entropy density (eta/s) is 1/4 pi? Well, this is not anymore a prediction (see, for example, http://arxiv.org/abs/0812.2521): eta/s in theories with string duals can go to lower values to 1/4pi, perhaps all the way to 0 (or quantum mechanics could prevent this. But this was known way before string theory (see, e.g. Physical Review, vol. D31, pp. 53-62, 1985).

‘That eta/s is “low” in a strongly coupled theory? Well, that’s a pretty obvious point that transcends string theory.

‘It is cute that AdS/CFT reproduces many phenomena also observed in hydrodynamics, but there is NO AdS/CFT result that can be sensibly compared with data and used to make a prediction. NONE. Not one. If anyone disagrees, please give an example.

‘AdS/CFT is, currently, a very interesting conceptual exercise. Perhaps tomorrow someone WILL extract predictions relevant to heavy ion collisions out of it. But it hasn’t happened yet. And to claim it has is dishonest Public Relations.’ – luny, April 16, 2009 at 4:52 am, http://www.math.columbia.edu/~woit/wordpress/?p=1817&cpage=1#comment-47974

‘The string theory side of AdS/CFT gives you gravity in 5 dimensional AdS space, not four dimensional space. For this and many other reasons you can’t use it for unification. The 4d physics of the theory is supposed to be N=4 SYM (no gravity), this may be a useful approximation to QCD, but it’s not a unified theory.

‘If you believe in much much more general conjectures about gauge duals of string theories in different “string vacua”, then you could imagine that there are gauge theory duals of the kind of string theory used in unification. These would be 3d gauge theories, and looking for them is an active field of research. As far as I can tell though, if it is successful, all you will get is a different parametrization of the “Landscape”, an infinite number of complicated qfts, corresponding to the infinite number of complicated “string vacua”.’ – Dr Peter Woit, April 16, 2009 at 9:45 am, http://www.math.columbia.edu/~woit/wordpress/?p=1817&cpage=1#comment-47978

‘“So if AdS CFT turns out to work correctly it would be a good argument for string theory. Is this not true?”

‘But does it work correctly? In a recent discussion here, I became aware of the paper arXiv:0806.0110v2. Therein, the following statements are proven:

‘1. AdS/CFT makes a prediction for some quantities c’/c and k’/k, eqn (5).

‘2. This prediction is compared to the exactly known values for the 3D O(n) model at n = infinity, eqns (28) and (30).

‘3. The values disagree. Perhaps not by so much, but they are not exactly right.

‘This may be expressed by saying that the d-dimensional O(n) model does not have a gravitational dual (an euphemism for “AdS/CFT screwed up”?), at least not in some neighborhood of n = infinity, d = 3, and hence not for generic n and d. There might be exceptional cases where a gravitational dual exist, e.g. the line d = 2, but generically it seems disproven by the above result. In particular, I find it unlikely that the 3D Ising, XY and Heisenberg models (n = 1, 2, 3) can be treated with AdS/CFT.’ – Thomas Larsson, April 16, 2009 at 1:33 pm, http://www.math.columbia.edu/~woit/wordpress/?p=1817&cpage=1#comment-47983

‘What I see as a big negative coming out of string theory is the ideology that the way to unify particle physics and gravity is via a 10/11d string/M-theory. This is the idea that I think has completely failed. Not only has it led to nothing good, it has led a lot of the field into bad pseudo-science (anthropics, the landscape, the multiverse…), and this has seriously damaged the reputation of the whole subject.

‘… people keep publicly pushing the same failed idea, discrediting the subject completely. In the process they have somehow managed to discredit the whole idea of using sophisticated mathematics to investigate QFT and string theory at a truly deep level, convincing people that this was a failure caused by not being “physical” enough. Instead, it was a failure of a specific, “physical” idea: that you can get a unified theory by changing from quantum fields to strings.

‘If you want concrete suggestions for what to work on, note that we don’t understand the electroweak theory non-perturbatively at all. There are all sorts of questions about non-perturbative QFT that we don’t understand. Sure, these are not easy problems, but then again, all the problems in string theory are now supposed to be too hard, why not instead work on QFT problems that are too hard?’ – Dr Peter Woit, April 13th, 2009 at 2:49 pm, http://blogs.discovermagazine.com/cosmicvariance/2009/04/09/string-wars-the-aftermath/#comment-71663

‘The problem with your [Peter Woit] book and blog is that they do not offer any way of making progress – all they do is call for a shutdown of string theory (which as you yourself admit above, has lead to useful things). What do you recommend as a better, concrete, alternate way of making progress? Lets hear it, dammit.’ – jamie, April 13th, 2009 at 1:03 pm

‘“jamie” … If you want concrete suggestions for what to work on, note that we don’t understand the electroweak theory non-perturbatively at all. There are all sorts of questions about non-perturbative QFT that we don’t understand. Sure, these are not easy problems, but then again, all the problems in string theory are now supposed to be too hard, why not instead work on QFT problems that are too hard? Personally I’m currently fascinated by the BRST formalism.’ – Peter Woit, April 13th, 2009 at 2:49 pm

‘And about your research advice for me: don’t you think it is more prudent if I took advice from somebody who has, you know, actually made it in academia???’ – jamie, April 13th, 2009 at 4:40 pm

‘First Jamie asks Dr Woit for advice, Dr Woit gives the requested advice, then Jamie says he doesn’t want advice from Dr Woit! It’s funny to see rhetorical questions backfire when answered honestly. Everytime a string theorist asks what alternative ideas there are to work on (as a rhetorical question, the implicit message being ‘string theory is only game in town’), they have to be abusive to the alternative ideas they receive in reply.’ – April 16th, 2009 at 9:24 am, http://blogs.discovermagazine.com/cosmicvariance/2009/04/09/string-wars-the-aftermath/#comment-72004

‘If you look at the history of any failed speculative idea about physics, what you’ll find is that the proponents of the failed idea rarely publicly admit that it’s wrong. Instead they start making excuses about how it could still be right, but it’s just too hard to make progress. … This is what is happening to the speculative idea of string-based unification.’ – Peter Woit, April 13th, 2009 at 8:42 am, http://blogs.discovermagazine.com/cosmicvariance/2009/04/09/string-wars-the-aftermath/#comment-71626

‘Some cases I was thinking of are:

‘1. Heisenberg’s unified field theory
‘2. Chew’s S-matrix theory of the strong interactions
‘3. Cold fusion

‘2. is a complicated story interrelated with string theory. But, one aspect of the story is that in 1973-74 it became clear that QCD was the correct theory of the strong interactions, but there were quite a few people who for the next decade wouldn’t admit this. With AdS/CFT, some of the string theory ideas that grew out of this period did get connected to gauge theory and turned out to be useful. By analogy, I think it’s entirely possible that in the future some very different way of thinking about string theory and unification will have something to do with reality. The problem is that all known ways of doing this have failed, and that’s something proponents are not willing to admit.’ – Peter Woit,
April 13th, 2009 at 10:09 am, http://blogs.discovermagazine.com/cosmicvariance/2009/04/09/string-wars-the-aftermath/#comment-71640

Gauge symmetry: whenever the motion or charge or angular momentum of spin, or some other symmetry, is altered, it is a consequence of the conservation laws for momentum and energy that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl, giving the concept of gauge symmetry. Fundamental interactions are modelled by Feynman diagrams of scattering between gauge bosons or ‘virtual radiation’ (virtual photons, gravitons, etc.) and charges (electric charge, mass, etc.). The Feynman diagrams are abstract, and don’t represent the gauge bosons as taking any time to travel between charges (massless radiations travel at light velocity). Two extra polarizations (giving a total of 4 polarizations!) have to be added to the 2-polarization observed photon in the mainstream model of quantum electrodynamics, to make it produce attractive forces between dissimilar charges. This is an ad hoc modification, similar to changing the spin of the graviton to spin-2 to ensure universal attraction between similar gravitational charge (mass/energy).

If you look at the physics more carefully, you find that the spin of the graviton is actually spin-1 and gravity is a repulsive effect: we’re exchanging gravitons (as repulsive scatter-type interactions) more forcefully with the immense masses of receding galaxies above us than we are with the masses in the hemisphere of the universe below us, because of the Earth’s slight attenuation of gravitons, so the resultant is a downward acceleration. What’s impressive about this is that it makes checkable predictions including the strength (coupling G) of gravity and many other things (see calculations below), unlike string ‘theory’, which is a spin-2 graviton framework that leads to 10^500 different predictions (so vague it could be made to fit anything that nature turns out to be, but makes no falsifiable predictions, i.e. junk science). When you look at electromagnetism more objectively, the virtual photons carry an additional polarization in the form of a net electric charge (positive or negative). This again leads to checkable predictions for the strength of electromagnetism and other things. The most important single correct prediction, however, was the acceleration of the universe, due to the long-distance repulsion between large masses in the universe mediated by spin-1 gravitons. This was published in 1996 and confirmed by observations in 1998 published in Nature by Saul Perlmutter et al., but it is still censored out by charlatans such as string ‘theorists’ (quotes are around that word because it is no genuine scientific theory, just a landscape of 10^500 different speculations).
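As a textbook reminder of the Noether–Weyl step referred to above (standard quantum field theory bookkeeping, independent of the argument of this post): a global phase symmetry of the matter field gives a conserved current, and demanding that the symmetry hold locally forces a gauge field into the derivative:

ψ → e^{iα} ψ (global symmetry) gives a conserved Noether current, ∂_μ J^μ = 0;

ψ → e^{iα(x)} ψ (local symmetry) requires ∂_μ → D_μ = ∂_μ – ieA_μ, with A_μ → A_μ + (1/e) ∂_μ α(x),

so the gauge boson A_μ (the photon of electromagnetism) is the price of making the symmetry local; the same logic, with larger symmetry groups, gives the other Standard Model gauge bosons.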

Typical string theory deception: ‘String theory has the remarkable property of predicting gravity.’ (E. Witten, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996.)

Actually what he means but can’t be honest enough to say is that string theory in 10/11 dimensions is compatible with a false spin-2 graviton speculation. Let’s examine the facts:

Above: Spin-1 gravitons causing apparent “attraction” by repulsion, the “attraction” being due to similar charges being pushed together by repulsion from massive amounts of similar sign gravitational charges in the distant surrounding universe.

Nearby gravitational charges don’t exchange gravitons forcefully enough to compensate for the stronger exchange with converging gravitons coming in from immense masses (clusters of galaxies at great distances, all over the sky), due to the physics discussed below, so their graviton interaction cross-section effectively shields them on facing sides. Thus, they get pushed together. This is what we see as gravity.

By wrongly ignoring the rest of the mass in the universe and focussing on just a few masses (right hand side of diagram), Pauli and Fierz in the 1930s falsely deduced that for similar signs of gravitational charges (all gravitational charge so far observed falls the same way, downwards, so all known mass/energy has similar gravitational charge sign, here arbitrarily represented by “-” symbols, just to make an analogy to negative electric charge to make the physics easily understood), spin-1 gauge bosons can’t work because they would cause gravitational charges to repel! So they changed the graviton spin to spin-2, to “fix it”.

This mechanism proves that a spin-2 graviton is wrong; instead, the spin-1 graviton does the job of both ‘dark energy’ (the outward acceleration of the universe, due to repulsion of similar sign gravitational charges over long distances) and gravitational ‘attraction’ between relatively small, relatively nearby masses, which are pushed towards one another by the distant masses of the universe more strongly than they repel one another!

Above: Spin-1 gauge bosons for fundamental interactions. In each case the surrounding universe interacts with the charges, a vital factor ignored in existing mainstream models of quantum gravity and electrodynamics.

The massive versions of the SU(2) Yang-Mills gauge bosons are the weak field quanta which only interact with left-handed particles. One half (corresponding to exactly one handedness for weak interactions) of SU(2) gauge bosons acquire mass at low energy; the other half are the gauge bosons of electromagnetism and gravity. (This diagram is extracted from the more detailed discussion and calculations made below in the more detailed treatment, which is vital for explaining how massless electrically charged bosons can propagate as exchange radiation while they can’t propagate – due to infinite magnetic self-inductance – on a one-way path. The exchange of electrically charged massless bosons in two directions at once along each path – which is what exchange amounts to – means that the curls of the magnetic fields due to the charge from each oppositely-directed component of the exchange will cancel out the curl of the other. This means that the magnetic self-inductance is effectively zero for massless charged radiation being endlessly exchanged from charge A to charge B and back again, even though it is infinite and thus prohibited for a one-way path such as from charge A to charge B without a simultaneous return current of charged massless bosons. This was suggested by the fact that something analogous occurs in another area of electromagnetism.)
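A minimal illustration of the cancellation being invoked (ordinary magnetostatics only, not the author’s full argument, which is developed below): superpose two equal currents I flowing in opposite directions along the same path, as a stand-in for the two oppositely-directed components of an exchange. Ampère’s law around the pair then gives

∮ B · dl = μ₀(I – I) = 0,

so the net magnetic field, and hence the net self-inductance, of the balanced two-way flow vanishes, whereas a single one-way current has the usual non-zero field and inductance.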

Masses are receding due to being knocked apart by gravitons which cause cosmological-scale repulsion between masses, as already explained (pushing distant galaxies apart and also pushing nearby masses together). The inward force, presumably mediated by spin-1 gravitons, from a receding mass m arises because mass accelerating away from us has an outward force due to Newton’s 2nd law (F = ma), and an equal and opposite (inward) reaction force mediated by gravitons under Newton’s 3rd law (action and reaction are equal and opposite). If its mass m is small, then the inward force of gravitons (being exchanged), which is directed towards you from that small nearby mass, is trivial. So a nearby small mass (like the planet Earth) slightly shields the graviton exchange radiation from immense distant masses in the hemisphere of the universe below you (half the mass of the universe), instead of adding to it (the planet Earth isn’t receding from you). So very large masses beyond the Earth (distant receding galaxies) are sending in a large inward force due to their large distance and mass: an extremely small fraction of these spin-1 gravitons effectively interact with the Earth by scattering back off the graviton scatter cross-sectional area of some of the fundamental particles in the Earth. So small nearby masses are pressed together: nearby, non-receding particles with mass cause an asymmetry (a reduction in the graviton field being received from more distant masses in that particular direction), so they are pushed towards each other. This gives an inverse-square law force, and it uniquely also gives an accurate prediction for the gravitational parameter G, as proved later in this post.
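To put illustrative numbers on the Newton’s-2nd/3rd-law step (the figures below are assumptions chosen only to show the scale of the bookkeeping, not the author’s own calculation, which follows later in the post):

```python
# Outward force of receding matter, F = ma, and the equal-and-opposite inward
# reaction force invoked above.  Both input numbers are rough assumptions:
# an observable-universe mass of order 3e52 kg and a cosmological acceleration
# of order 7e-10 m/s^2 (roughly c times the Hubble parameter).
m_receding = 3e52      # kg, assumed mass of the receding matter
a_cosmo    = 7e-10     # m/s^2, assumed cosmological acceleration
F_outward  = m_receding * a_cosmo   # Newton's 2nd law
F_inward   = -F_outward             # Newton's 3rd law reaction force
print(F_outward)   # ~2e43 N
# A small nearby mass that is not receding (a ~ 0) contributes ~0 to this
# inward force, which is why it acts only as a shield in the argument above.
```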

When you push two things together using field quanta such as those from the electrons on the surface of your hands, the resulting motions can be modelled as an attractive effect, but it is clearly caused by the electrons in your hands repelling those on the surface of the other object. We’re being pushed down by the gravitational repulsion of immense distant masses distributed around us in the universe, which causes not just the cosmological acceleration over large distances, but also causes gravity between relatively small, relatively nearby masses by pushing them together. (In 1996 the spin-1 quantum gravity proof given below in this post was inspired by an account of the ‘implosion’ principle, used in all nuclear weapons now, whereby the inward-directed half of the force of an explosion of a TNT shell surrounding a metal sphere compresses the metal sphere, making the nuclei in that sphere approach one another as though there was some contraction.)

Notice that in the universe the fact that we are surrounded by a lot of similar-sign gravitational charge (mass/energy) at large distances will automatically cause the accelerative expansion of the universe (predicted accurately by this gauge theory mechanism in 1996, well before Perlmutter’s discovery confirming the predicted acceleration/‘dark energy’), as well as causing gravity, and uses spin-1. It doesn’t require the epicycle of changing the graviton spin to spin-2. Similar gravitational charges repel, but because there is so much similar gravitational charge at long distances from us, with the gravitons converging inwards as they are exchanged with an apple and the Earth, the immense long-range gravitational charges of receding galaxies and galaxy clusters push two small nearby masses together harder than those two masses repel one another apart! This is why they appear to attract.

The Pauli-Fierz deduction is an error for the reason (left of diagram above) that spin-1 only appears to fail when you ignore the bulk of the similar-sign gravitational charge in the universe surrounding you. If you stupidly ignore that surrounding mass of the universe, which is immense, then the simplest workable theory of quantum gravity necessitates spin-2 gravitons!

The best presentation of the mainstream long-range force model (which uses massless spin-2 gauge bosons for gravity and massless spin-1 gauge bosons for electromagnetism) is probably chapter I.5, Coulomb and Newton: Repulsion and Attraction, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6. Zee uses an approximation due to Sidney Coleman, whereby you have to work through the theory assuming that the photon has a real mass m, to make the theory work, but at the end you set m = 0. (If you assume from the beginning that m = 0, the simple calculations don’t work, so you then need to work with gauge invariance.)

Zee starts with a Lagrangian for Maxwell’s equations, adds terms for the assumed mass of the photon, then writes down the Feynman path integral, which is ∫DA e^{iS(A)}, where S(A) is the action, S(A) = ∫d⁴x L, and L is the Lagrangian based on Maxwell’s equations for the spin-1 photon (plus, as mentioned, terms for the photon having mass, to keep it relatively simple and avoid including gauge invariance). Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson-mediated electromagnetic force between similar charges is always repulsive. So it works mathematically.

A massless spin-1 boson has only two degrees of freedom for spinning, because in one dimension it is propagating at velocity c and is thus ‘frozen’ in that direction of propagation. Hence, a massless spin-1 boson has two polarizations (electric field and magnetic field). A massive spin-1 boson, however, can spin in three dimensions and so has three polarizations.

Moving to quantum gravity, a spin-2 graviton will have 2² + 1 = 5 polarizations. Writing down a 5-component tensor to represent the gravitational Lagrangian, the same treatment for a spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative, hence masses always attract each other.
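For reference, here is the sign bookkeeping of Zee’s argument in compressed form (this follows his section I.5 conventions; it is only a sketch, and the overall factors should be checked against his text). For a massive spin-1 field coupled to a conserved current J^μ, the propagator numerator is –g_μν + k_μk_ν/m². For two static lumps of like charge only J⁰ matters, the k_μk_ν term drops out by current conservation, and the interaction energy comes out proportional to

+ ∫ d³k J⁰₁(k)* J⁰₂(k) / (k² + m²) > 0,

so like charges repel. For a massive spin-2 field coupled to T^μν, the propagator numerator is (1/2)(G_μλ G_νσ + G_μσ G_νλ) – (1/3)G_μν G_λσ with G_μν = g_μν – k_μk_ν/m². For two static lumps of mass only T⁰⁰ matters, and the 00,00 component of that numerator works out to +2/3 in the static limit, where the corresponding spin-1 factor was –g₀₀ = –1. That relative sign flip is the entire content of the claim that even-spin exchange makes like “charges” (masses) attract while odd-spin exchange makes like charges repel.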

This has now hardened into a religious dogma or orthodoxy which is used to censor the facts of the falsifiable, predictive spin-1 graviton mechanism as being ‘weird’. Even Peter Woit and Lee Smolin, who recognise that string theory’s framework for spin-2 gravitons isn’t experimentally confirmed physics, still believe that spin-2 gravitons are right!

Actually, the amount of spin-1 gravitational repulsion force between two small nearby masses is completely negligible, and it takes immense masses in the receding surrounding universe (galaxies, clusters of galaxies, etc., surrounding us in all directions) to produce the force we see as gravity. The fact that gravity is not cancelled out is due to the fact that it comes with one charge sign only, instead of coming in equal and opposite charges like electric charge. This is the reason why we have to include the gravitational charges in the surrounding universe in the mechanism of quantum gravity, while in electromagnetism it is conventional orthodoxy to ignore surrounding electric charges, which come in opposite types which appear to cancel one another out. There is definitely no such cancellation of gravitational charges from surrounding masses in the universe, because there is only one kind of gravitational charge observed (nobody has seen a type of mass which falls upward, so all gravitational charge observed has the same sign!). So we have to accept a spin-1 graviton, not a spin-2 graviton, as being the simplest theory (see the calculations below that prove it predicts the observed strength for gravitation!), and spin-1 gravitons lead somewhere: the spin-1 graviton neatly fits gravity into a modified, simplified form of the Standard Model of particle physics!

‘If no superpartners at all are found at the LHC, and thus supersymmetry can’t explain the hierarchy problem, by the Arkani-Hamed/Dimopoulos logic this is strong evidence for the anthropic string theory landscape. Putting this together with Lykken’s argument, the LHC is guaranteed to provide evidence for string theory no matter what, since it will either see or not see weak-scale supersymmetry.’ – Not Even Wrong blog post, ‘Awaiting a Messenger From the Multiverse’, July 17th, 2008.

It’s kinda nice that Dr Woit has finally come around to grasping the scale of the terrible, demented string theory delusion in the mainstream, and can see that nothing he writes affects the victory to be declared for string theory, regardless of what data is obtained in forthcoming experiments! His position and that of Lee Smolin and other critics is akin to the dissidents of the Soviet Union, traitors like Leon Trotsky and nuisances like Andrei Sakharov. They can maybe produce minor embarrassment and irritation to the Evil Empire, but that’s all. The general reaction of string theorists to his writings is that it’s inevitable that someone should complain, and they go on hyping string theory. The public will go on ignoring the real quantum gravity facts. Dr Woit writes:

‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length[…] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.

‘This whole situation is reminiscent of what happened in particle theory during the 1960’s, when quantum field theory was largely abandoned in favor of what was a precursor of string theory. The discovery of asymptotic freedom in 1973 brought an end to that version of the string enterprise and it seems likely that history will repeat itself when sooner or later some way will be found to understand the gravitational degrees of freedom within quantum field theory.

‘While the difficulties one runs into in trying to quantize gravity in the standard way are well-known, there is certainly nothing like a no-go theorem indicating that it is impossible to find a quantum field theory that has a sensible short distance limit and whose effective action for the metric degrees of freedom is dominated by the Einstein action in the low energy limit. Since the advent of string theory, there has been relatively little work on this problem, partly because it is unclear what the use would be of a consistent quantum field theory of gravity that treats the gravitational degrees of freedom in a completely independent way from the standard model degrees of freedom. One motivation for the ideas discussed here is that they may show how to think of the standard model gauge symmetries and the geometry of space-time within one geometrical framework.

‘Besides string theory, the other part of the standard orthodoxy of the last two decades has been the concept of a supersymmetric quantum field theory. Such theories have the huge virtue with respect to string theory of being relatively well-defined and capable of making some predictions. The problem is that their most characteristic predictions are in violent disagreement with experiment. Not a single experimentally observed particle shows any evidence of the existence of its “superpartner”.’

– P. Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135, p. 52.

But notice that Dr Woit was convinced in 2002 that a spin-2 graviton would explain gravity. More recently he has become less hostile to supersymmetric theories, for example by conjecturing that spin-2 supergravity without string theory may be what is needed:

‘To go out on a limb and make an absurdly bold guess about where this is all going, I’ll predict that sooner or later some variant (”twisted”?) version of N=8 supergravity will be found, which will provide a finite theory of quantum gravity, unified together with the standard model gauge theory. Stephen Hawking’s 1980 inaugural lecture will be seen to be not so far off the truth. The problems with trying to fit the standard model into N=8 supergravity are well known, and in any case conventional supersymmetric extensions of the standard model have not been very successful (and I’m guessing that the LHC will kill them off for good). So, some so-far-unknown variant will be needed. String theory will turn out to play a useful role in providing a dual picture of the theory, useful at strong coupling, but for most of what we still don’t understand about the SM, it is getting the weak coupling story right that matters, and for this quantum fields are the right objects. The dominance of the subject for more than 20 years by complicated and unsuccessful schemes to somehow extract the SM out of the extra 6 or 7 dimensions of critical string/M-theory will come to be seen as a hard-to-understand embarassment, and the multiverse will revert to the philosophers.’

Evidence

As explained briefly above, there’s a fine back of the envelope calculation – allegedly proving that a spin-2 graviton is needed for universal attraction – in the mainstream accounts, as exemplified by Zee’s online sample chapter from his ‘Quantum Field Theory in a Nutshell’ book (section 5 of chapter 1). But when you examine that kind of proof closely, it just considers two masses exchanging gravitons with one another, which ignores two important aspects of reality:

1. there are far more than two masses in the universe which are always exchanging gravitons, and in fact the majority of the mass is in the surrounding universe; and

2. when you want a law for the physics of how gravitons are imparting force, you find that only receding masses forcefully exchange gravitons with you, not nearby masses. Perlmutter’s observed acceleration of the universe gives receding matter outward force by Newton’s second law, and gives a law for gravitons: Newton’s third law gives an equal inward-directed force, which by elimination of the possibilities known in the Standard Model and quantum gravity, must be mediated by gravitons. Nearby masses which aren’t receding have outward acceleration of zero and so produce zero inward graviton force towards you for their graviton-interaction cross-sectional area. So they just act as a shield for gravitons coming from immense masses beyond them, which produces an asymmetry, so you get pushed towards non-receding masses while being pushed away from highly redshifted masses.

It’s tempting for people to dismiss new calculations without checking them, just because they are inconsistent with previous calculations such as those allegedly proving the need for spin-2 gravitons (maybe combined with the belief that “if the new idea is right, somebody else would have done it before”; which is of course a brilliant way to stop all new developments in all areas by everybody …).

The deflection of a photon by the sun is twice the amount predicted by Newtonian theory for a non-relativistic object (say a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons may presumably interact with energy via unobserved Higgs field bosons or whatever, but that’s not unique to spin-1, it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity, unlike the case of a non-relativistic object, which speeds up as it enters stronger gravitational field regions. So energy conservation forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).
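The factor of two is easy to check numerically for light grazing the sun (standard formulae and constants, nothing specific to the argument here: the Newtonian-style deflection is 2GM/(c²b) and the general relativistic deflection is 4GM/(c²b) for impact parameter b):

```python
import math

# Deflection of light grazing the sun: Newtonian-style value vs. general relativity.
G     = 6.674e-11   # gravitational constant, N m^2 kg^-2
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m (grazing impact parameter)
c     = 2.998e8     # speed of light, m/s

newtonian = 2 * G * M_sun / (c**2 * R_sun)   # "slow bullet" estimate
einstein  = 4 * G * M_sun / (c**2 * R_sun)   # general relativity: twice as large

to_arcsec = 3600 * 180 / math.pi
print(newtonian * to_arcsec)   # ~0.87 arcseconds
print(einstein * to_arcsec)    # ~1.75 arcseconds (the observed value)
```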

In general relativity this is a result of the fact that the Ricci tensor can’t be set directly proportional to the stress-energy tensor, because the divergence of the Ricci tensor isn’t generally zero, whereas the divergence of the stress-energy tensor must be zero for conservation of mass-energy. So from the Ricci tensor, half the product of the metric tensor and the trace of the Ricci tensor must be subtracted. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which is clear when it’s expressed in tensors. General relativity corrects this error. If you avoid assuming Newton’s law and obtain the correct theory direct from quantum gravity, this energy conservation issue doesn’t arise.
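For readers who want the tensor statement being described (standard general relativity, independent of the quantum gravity argument of this post): the field equation and the conservation condition are

R_μν – (1/2)g_μν R = (8πG/c⁴) T_μν,   with   ∇^μ T_μν = 0.

The combination on the left (the Einstein tensor) is used precisely because the contracted Bianchi identity gives ∇^μ [R_μν – (1/2)g_μν R] = 0 identically, so conservation of mass-energy is automatic; a naive R_μν ∝ T_μν would violate it, since ∇^μ R_μν = (1/2)∇_ν R in general. The subtracted trace term is the departure from the naive Newtonian-style equation that the paragraph above connects to the doubled light deflection checked earlier.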

Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.

Similarly, if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn’t stop the two things being pushed together as if there is ‘attraction’ occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!

String theory is widely hailed for being compatible with the spin-2 graviton, which is held to be necessary because, for universal attraction between two similar charges (all masses and all energy fall the same way in a gravitational field, so it all has similar gravitational charge), you supposedly need a gauge boson with spin-2. This argument is popularized by Professor Zee in section 5 of chapter 1 of his textbook Quantum Field Theory in a Nutshell. It’s completely false because we simply don’t live in a universe with just two gravitational charges: there are far more than two particles in the universe. The path integral that Zee and others set up explicitly assumes that only two masses are involved in the gravitational interactions which cause gravity.

If you correct this error, the repulsion of similar charges causes gravity by pushing two nearby masses together, just as on large scales it pushes matter apart, causing the accelerating expansion of the universe.

There was a sequence of comments on the Not Even Wrong blog post about Awaiting a Messenger From the Multiverse concerning the spin of the graviton (some of which have been deleted since for getting off topic). Some of these comments have been retrieved from my browser history cache and are below. There was an anonymous comment by ‘somebody’ at 5:57 am on 20 July 2008 stating:

‘Perturbative string theory has something called conformal invariance on the worldsheet. The empirical evidence for this is gravity. The empirical basis for QFT are locality, unitarity and Lorentz invariance. Strings manage to find a way to tweak these, while NOT breaking them, so that we can have gravity as well. This is oft-repeated, but still extraordinary. The precise way in which we do the tweaking is what gives rise to the various kinds of matter fields, and this is where the arbitrariness that ultimately leads to things like the landscape comes in. … It can easily give rise to things like multiple generations, non-abelain gauge symmetry, chiral fermions, etc. some of which were considered thorny problems before. Again, constructing PRECISELY our matter content has been a difficult problem, but progress has been ongoing. … But the most important reason for liking string theory is that it shows the features of quantum gravity that we would hope to see, in EVERY single instance that the theory is under control. Black hole entropy, gravity is holographic, resolution of singularities, resolution of information paradox – all these things have seen more or less concrete realizations in string theory. Black holes are where real progress is, according to me, but the string phenomenologists might disagree. Notice that I haven’t said anything about gauge-gravity duality (AdS/CFT). Thats not because I don’t think it is important, … Because it is one of those cases where two vastly different mathematical structures in theoretical physics mysteriously give rise to the exact same physics. In some sense, it is a bit like saying that understanding quantum gravity is the same problem as understanding strongly coupled QCD. I am not sure how exciting that is for a non-string person, but it makes me wax lyrical about string theory. It relates black holes and gauge theories. …. You can find a bound for the viscosity to entropy ratio of condensed matter systems, by studying black holes – thats the kind of thing that gets my juices flowing. Notice that none of these things involve far-out mathematical m***********, this is real physics – or if you want to say it that way, it is emprically based. … String theory is a large collection of promising ideas firmly rooted in the emprirical physics we know which seems to unify theoretical physics …’

To which anon. responded:

‘No it’s not real physics because it’s not tied to empirical facts. It selects an arbitary number of spatial extra dimensions in order to force the theory to give the non-falsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed but spin-2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models, it can’t incorporate let alone predict reality.’

Anon. should have added that the AdS/CFT correspondence is misleading. [AdS/CFT work, in which strong interactions are modelled using anti-de Sitter space with a negative (rather than positive) cosmological constant, is misleading. People should model phenomena with accurate models, not return physics to the days when it was argued that epicycles are a clever invention and that modelling the solar system with a false model (planets and stars orbiting the Earth in circles within circles) is a brilliant state-of-the-art calculational method! (Once you start modelling phenomenon A using a false approximation from theory B, you're asking for trouble because you're mixing up fact and fiction. E.g., if a prediction fails, you have a ready-made excuse to simply add further epicycles/fiddles to 'make it work'.) See my comment at http://kea-monad.blogspot.com/2008/07/ninja-prof.html]

somebody Says:
July 20th, 2008 at 10:42 am

Anon

The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards. I agree that it is a reasonable stand to brush these questions off as “speculation”. But IF you consider them worthy of your time, then string theory is a game you can play. THAT was my claim. I am sure you will agree that it is a bit unreasonable to expect a non-speculative solution to a problem that you consider already speculative.

Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory. So I would appreciate it if you read my posts before taking off on rants, stringing cliches, .. etc. It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gut-reactions.

anon. Says:
July 20th, 2008 at 11:02 am

‘The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards.’

Wrong again. I never said that. Quantizing gravity is not speculative by my standards, it’s a problem that can be addressed in other ways without all the speculation involved in the string framework. That’s harder to do than just claiming that string theory predicts gravity and then using lies to censor out those working on alternatives.

‘Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory.’

Wrong, because I never said that you did mention them. The reason why string theory is not empirical is precisely because it’s addressing these speculative ideas that aren’t facts.

‘It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gut-reactions.’

If you want to defend string as being empirically based, you need to do that. You can’t do so, can you?

somebody Says:
July 20th, 2008 at 11:19 am

‘Quantizing gravity is not speculative by my standards,’
Even though the spin 2 field clearly is.

My apologies Peter, I truly couldn’t resist that.

anon. Says:
July 20th, 2008 at 11:53 am

The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. (To get universal attraction, such field quanta can be shown to require a spin of 2.) This speculation is contrary to the general principle that every body is a source of gravity. You never have gravitons exchanged merely between two masses in the universe. They will be exchanged between all the masses, and there is a lot of mass surrounding us at long distances.

There is no disproof which I’m aware of that the graviton has a spin of 1 and operates by pushing masses together. At least this theory doesn’t have to assume that there are only two gravitating masses in the universe which exchange gravitons!

somebody Says:
July 20th, 2008 at 12:20 pm

‘The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. This speculation is contrary to the general principle that every body is a source of gravity.’

How many gravitationally “repelling” bodies do you know?

Incidentally, even if there were two kinds of gravitational charges, AND the gravitational field was spin one, STILL there are ways to test it. Eg: I would think that the bending of light by the sun would be more suppressed if it was spin one than if it is spin two. You need two gauge invariant field strengths squared terms to form that coupling, one for each spin one field, and that might be suppressed by a bigger power of mass or something. Maybe I am wrong about the details (i haven’t thought it through), but certainly it is testable.

somebody Says:
July 20th, 2008 at 12:43 pm

One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.

anon. Says:
July 20th, 2008 at 6:51 pm

‘How many gravitationally “repelling” bodies do you know?’
This repulsion between masses is very well known. Galaxies are accelerating away from every other mass. It’s called the cosmic acceleration, discovered in 1998 by Perlmutter. … F=ma then gives outward force of accelerating matter, and the third law of motion gives us equal inward force. All simple stuff. … Since this force is apparently mediated by spin-1 gravitons, the gravitational force of repulsion from one relatively nearby small mass to another is effectively zero. … the exchange of gravitons only produces a repulsive force over large distances from a large mass, such as a distant receding galaxy. This is why two relatively nearby (relative in cosmological sense of many parsecs) masses don’t repel, but are pushed together because they repel the very distant masses in the universe.

‘One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.’

As already explained, there is a mechanism for similar charges to ‘attract’ by repulsion if they are surrounded by a large amount of matter that is repelling them towards one another. If you push things together by a repulsive force, the result can be misinterpreted as attraction…

*******************************************************************

After this comment, 'somebody' (evidently a string supporter who couldn't grasp physics) then gave a list of issues he/she had with this comment. Anon. then responded to each:

anon. Says:
July 20th, 2008 at 6:51 pm

‘1. The idea of “spin” arises from looking for reps. of local Lorentz invariance. At the scales of the expanding universe, you don’t have local Loretz invarince.’

There are going to be graviton exchanges, whether they are spin 1 or spin 2 or whatever, between distant receding masses in the expanding universe. So if this is a problem it's a problem for spin-2 gravitons just as it is for spin-1. I don't think you have any grasp of physics at all.

‘2. … The expanding background is a solution of the underlying theory, whatever it is. The generic belief is that the theory respects Lorentz invariance, even though the solution breaks it. This could of course be wrong, …’

Masses are receding from one another. The assumption that they are being carried apart on a carpet of expanding spacetime fabric which breaks Lorentz invariance is just a classical GR solution speculation. It’s not needed if masses are receding due to being knocked apart by gravitons which cause repulsion between masses as already explained (pushing distant galaxies apart and also pushing nearby masses together).

‘3. … For spin 1 partciles, this gives an inverse square law. In particular, I don’t see how you jumped … to the claim that the graviton is spin 1.’

… there will be very large masses beyond that nearby mass (distant receding galaxies) sending in a large inward force due to their large distance and mass. These spin-1 gravitons will presumably interact with the mass by scattering back off the graviton scatter cross-section for that mass. So a nearby, non-receding particle with mass will cause an asymmetry in the graviton field being received from more distant masses in that direction, and you’ll be pushed towards it. This gives an inverse-square law force.

‘4. You still have not provided an explanation for how the solar system tests of general relativity can survive in your spin 1 theory. In particular the bending of light. Einstein’s theory works spectacularly well, and it is a local theory. We know the mass of the sun, and we know that it is not the cosmic repulsion that gives rise to the bending of light by the sun.’

The deflection of a photon by the sun is by twice the amount predicted for the theory of a non-relativistic object (say a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons may presumably interact with energy via unobserved Higgs field bosons or whatever, but that’s not unique for spin-1, it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity unlike the case of a non-relativistic object which speeds up as it enters stronger gravitational field regions. So energy conservation forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).

In general relativity this is a result of the fact that the Ricci tensor isn’t directly proportional to the stress energy tensor because the divergence of the stress energy tensor isn’t zero (which it should be for conservation of mass-energy). So from the Ricci tensor, half the product of the metric tensor and the trace of the Ricci tensor must be subtracted. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which is clear when it’s expressed in tensors. GR corrects this error. If you avoid assuming Newton’s law and obtain the correct theory direct from quantum gravity, this energy conservation issue doesn’t arise.

‘5. The problems raised by the fact that LOCALLY all objects attract each other is still enough to show that the mediator cannot be spin 1.’

I thought I'd made that clear:

Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.

This is because the actual graviton exchange force causing repulsion in the space between 2 nearby masses is totally negligible (F = mrH^2 with small m and r terms) compared to the gravitons pushing them together from surrounding masses at great distances (F = mrH^2 with big receding mass m at big distance r).

Similarly if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn't stop the two things being pushed together as if there is 'attraction' occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!

somebody Says:
July 21st, 2008 at 3:35 am

Now that you have degenerated to weird theories and personal attacks, I will make just one comment about a place where you misinterpret the science I wrote and leave the rest alone. I wrote that expanding universe cannot be used to argue that the graviton is spin 1. You took that to mean “… if this is a problem it’s a problem for spin-2 gravitons just as it is for spn-1.”

The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin. Spin arises from local Lorentz invariance.

anon. Says:
July 21st, 2008 at 4:21 am

‘The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin.’

Spin-1 causes repulsion. The universe’s expansion is accelerating. I’ve never claimed that particle spin is caused by the expansion of the universe. I’ve stated that repulsion is evident in the acceleration of the universe.

If you want to effectively complain about degeneration into weird theories and personal attacks, try looking at string theory more objectively. 10^500 universes, 10 dimensions, spin-2 gravitons, etc. (plus the personal attacks of string theorists on those working on alternative ideas).

***********************************

This shows that calculations based on checkable physics are vital, because they are something that can be checked for consistency with nature. In string theory, so far no experimental test is possible, so all of the checks done are really concerned with internal (mathematical) consistency, and with consistency with speculations of one kind or another. String theorist Professor Michio Kaku summarises the spiritual enthusiasm and hopeful religious basis for the string theory belief system as follows in an interview with the 'Spirituality' section of The Times of India, 16 July 2008, quoted in a comment by someone on the Not Even Wrong weblog (notice that Michio honestly mentions '… when we get to know … string theory…', which is an admission that it's not known, because of the landscape problem of 10^500 alternative versions with different quantitative predictions; at present it's not a scientific theory but rather 10^500 of them):

‘… String theory can be applied to the domain where relativity fails, such as the centre of a black hole, or the instant of the Big Bang. … The melodies on these strings can explain chemistry. The universe is a symphony of strings. The “mind of God” that Einstein wrote about can be explained as cosmic music resonating through hyperspace. … String theory predicts that the next set of vibrations of the string should be invisible, which can naturally explain dark matter. … when we get to know the “DNA” of the universe, i.e. string theory, then new physical applications will be discovered. Physics will follow biology, moving from the age of discovery to the age of mastery.’

As with the 200+ mechanical aether theories of force fields existing in the 19th century (this statistic comes from Eddington's 1920 book Space Time and Gravitation), string theory at best is just a model for unobservables. Worse, it comes in 10^500 quantitatively different versions, far more than the 200 or so aethers of the 19th century. The problem with theorising about the physics at the instant of the big bang and the physics in the middle of a black hole is that you can't actually test it. Similar problems exist when explaining dark matter, because your theory contains invisible particles whose masses you can't predict beyond saying they're beyond existing observations (religions similarly have normally invisible angels and devils, so you could equally use religions to 'explain dark matter'; it's not a quantitative prediction in string theory, so it's not really a scientific explanation, just a belief system). Unification at the Planck scale and spin-2 gravitons are both speculative errors.

Once you remove all these errors from string theory, you are left with something that is no more impressive than aether: it claims to be a model of reality and to explain everything, but you don't get any real use from it for predicting experimental results, because there are so many versions that it's just too vague to be a science. It doesn't connect well with anything in the real world at all. The idea that at least it tells us what particle cores are physically (vibrating loops of extradimensional 'string') doesn't really strike me as being science. People decide which version of aether to use by artistic criteria like beauty, or by fitting the theory to observations and arguing that if the aether were different from this or that version we wouldn't exist to observe its consequences (the anthropic selection principle), instead of using factual scientific criteria: there are no factual successes of aether to evaluate. So it falls into the category of a speculative belief system, not a piece of science.

By Mach’s principle of economy, speculative belief systems are best left out of science until they can be turned into observables, useful predictions, or something that is checkable. Science is not divine revelation about the structure of matter and the universe, instead it’s about experiments and related fact-based theorizing which predicts things that can be checked.

**************************************************

Update: If you look at what Dr Peter Woit has done in deleting comments, he’s retained the one from anon which states:

‘[string is] not real physics because it’s not tied to empirical facts. It selects an arbitary number of spatial extra dimensions in order to force the theory to give the non-falsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed but spin-2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models, it can’t incorporate let alone predict reality.’

Although he has kept that, Dr Woit deleted the further discussion comments about the spin 1 versus spin 2 graviton physics, as being off-topic. Recently he argued that supergravity (a spin-2 graviton theory) in low dimensions was a good idea (see post about this by Dr Tommaso Dorigo), so he is definitely biased in favour of the graviton having a spin of 2, despite that being not ‘not even wrong’ but plain wrong for reasons given above. If we go look at Dr Woit’s post ‘On Crackpotism and Other Things’, we find Dr Woit stating on January 3rd, 2005 at 12:25 pm:

‘I had no intention of promulgating a general theory of crackpotism, my comments were purely restricted to particle theory. Crackpotism in cosmology is a whole other subject, one I have no intention of entering into.’

If that statement by Dr Woit still stands, then facts from cosmology about the accelerating expansion of the universe presumably won’t be of any interest to him, in any particle physics context such as graviton spin. In that same ‘On Crackpotism and Other Things’ comment thread, Doug made a comment at January 4th, 2005 at 5:51 pm stating:

‘… it’s usually the investigators labeled “crackpots” who are motivated, for some reason or another, to go back to the basics to find what it is that has been ignored. Usually, this is so because only “crackpots” can afford to challenge long held beliefs. Non-crackpots, even tenured ones, must protect their careers, pensions and reputations and, thus, are not likely to go down into the basement and rummage through the old, dusty trunks of history, searching for clues as to what went wrong. …

‘Instead, they keep on trying to build on the existing foundations, because they trust and believe that …

‘In other words, it could be that it is an interpretation of physical concepts that works mathematically, but is physically wrong. We see this all the time in other cases, and we even acknowlege it in the gravitational area where, in the low limit, we interpret the physical behavior of mass in terms of a physical force formulated by Newton. When we need the accuracy of GR, however, Newton’s physical interpretation of force between masses changes to Einstein’s interpretation of geometry that results from the interaction between mass and spacetime.’

Doug commented on that ‘On Crackpotism and Other Things’ post at January 1st, 2005 at 1:04 pm:

‘I’ve mentioned before that Hawking characterizes the standard model as “ugly and ad hoc,” and if it were not for the fact that he sits in Newton’s chair, and enjoys enormous prestige in the world of theoretical physics, he would certainly be labeled as a “crackpot.” Peter’s use of the standard model as the criteria for filtering out the serious investigator from the crackpot in the particle physics field is the natural reaction of those whose career and skills are centered on it. The derisive nature of the term is a measure of disdain for distractions, especially annoying, repetitious, and incoherent ones.

‘However, it’s all too easy to yield to the temptation to use the label as a defense against any dissent, regardless of the merits of the case of the dissenter, which then tends to convert one’s position to dogma, which, ironically, is a characteristic of “crackpotism.” However, once the inevitable flood of anomalies begins to mount against existing theory, no one engaged in “normal” science, can realistically evaluate all the inventive theories that pop up in response. So, the division into camps of innovative “liberals” vs. dogmatic “conservatives” is inevitable, and the use of the excusionary term “crackpot” is just the “defender of the faith” using the natural advantage of his position on the high ground.

‘Obviously, then, this constant struggle, especially in these days of electronically enhanced communications, has nothing to do with science. If those in either camp have something useful in the way of new insight or problem-solving approaches, they should take their ideas to those who are anxious to entertain them: students and experimenters. The students are anxious because the defenders of multiple points of view helps them to learn, and the experimenters are anxious because they have problems to solve.

‘The established community of theorists, on the other hand, are the last whom the innovators ought to seek to convince because they have no reason to be receptive to innovation that threatens their domains, and clearly every reason not to be. So, if you have a theory that suggests an experiment that Adam Reiss can reasonably use to test the nature of dark energy, by all means write to him. Indeed, he has publically invited all that might have an idea for an experiment. But don’t send your idea to [cosmology professor] Sean Carroll because he is not going to be receptive, even though he too publically acknowledged that “we need all the help we can get” (see the Science Friday archives).’

GAUGE THEORIES

Whenever the motion or charge or angular momentum of spin, or some other symmetry, is altered, it is a consequence of conservation laws in physics that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl. Noether’s theorem (discovered 1915) connects the symmetry of the action of a system (the integral over time of the Lagrangian equation for the energy of a system) with conservation laws. In quantum field theory, the Ward-Takahashi identity expresses Noether’s theorem in terms of the Maxwell current (a moving charge, such as an electron, can be represented as an electric current since that is the flow of charge). Any modification to the symmetry of the current involves the use of energy, which (due to conservation of energy) must be represented by the emission or reception of photons, e.g. field quanta. (For an excellent introduction to the simple mathematics of the Lagrangian in quantum field theory and its role in symmetry modification for Noether’s theorem, see chapter 3 of Ryder’s Quantum Field Theory, 2nd ed., Cambridge University Press, 1996.)

So, when the symmetry of a system such as a moving electron is modified, such a change of the phase of an electron’s electromagnetic field (which together with mass is the only feature of the electron that we can directly observe) is accompanied by a photon interaction, and vice-versa. This is the basic gauge principle relating symmetry transformations to energy conservation. E.g., modification to the symmetry of the electromagnetic field when electrons accelerate away from one another implies that they emit (exchange) virtual photons.
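As a rough numerical illustration of this gauge/phase symmetry idea, here is a minimal Python sketch (my own toy example, not anything from Ryder or from the mechanism argued in this post): it checks that a global phase rotation of a complex field leaves a simple toy Lagrangian unchanged, which is exactly the kind of invariance that Noether's theorem converts into a conservation law. The toy Lagrangian L = |dphi/dt|^2 - m^2*|phi|^2 and the sample numbers are assumptions chosen purely for illustration.

import cmath

def lagrangian(phi, dphi_dt, m=1.0):
    # Toy Lagrangian density for a free complex scalar field (0+1 dimensions):
    # L = |dphi/dt|^2 - m^2 * |phi|^2
    return abs(dphi_dt)**2 - m**2 * abs(phi)**2

phi = 0.3 + 0.7j         # sample field value (arbitrary)
dphi_dt = -0.2 + 0.4j    # sample time derivative (arbitrary)

alpha = 1.234                 # arbitrary global phase angle, in radians
rot = cmath.exp(1j * alpha)   # U(1) transformation: multiply by a unit complex number

print(lagrangian(phi, dphi_dt))              # original Lagrangian value
print(lagrangian(rot*phi, rot*dphi_dt))      # identical value after the phase rotation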

All fundamental physics is of this sort: the electromagnetic, weak and strong interactions are all examples of gauge theories in which symmetry transformations are accompanied by the exchange of field quanta. Noether's theorem is pretty simple to grasp: if you modify the symmetry of something, the act of making that modification involves the use or release of energy, because energy is conserved. When the electron's field undergoes a local phase change to its symmetry, a gauge field quantum called a 'virtual photon' is exchanged. However, it is not just energy conservation that comes into play in symmetry. Conservation of charge and angular momentum are involved in more complicated interactions. In the Standard Model of particle physics, there are three basic gauge symmetries, implying different types of field quanta (or gauge bosons) which are the radiation exchanged when the symmetries are modified in interactions:

1. Electric charge symmetry rotation. This describes the electromagnetic interaction. This is supposedly the simplest gauge theory, the Abelian U(1) electromagnetic symmetry group, which has a single generator, invoking just one type of charge and one gauge boson. To get negative charge, a positive charge is represented as travelling backwards in time, and vice-versa. The gauge boson of U(1) is mixed up with the neutral gauge boson of SU(2), to the amount specified by the empirically based Weinberg mixing angle, producing the photon and the neutral weak gauge boson. U(1) represents not just electromagnetic interactions but also weak hypercharge.

The U(1) maths is based on a type of continuous group defined by Sophus Lie in 1873. Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together. It was the representation theory of these groups that Weyl was studying.

‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane. Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point. This is a symmetry of the plane. The thing that is invariant is the distance between a point on the plane and the central point. This is the same before and after the rotation. One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point. There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.


Argand diagram showing rotation by an angle on the complex plane. Illustration credit: based on Fig. 3.1 in Not Even Wrong.

‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one. If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers). As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).

‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions]. Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave. This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees. Because of this analogy, U(1) symmetry transformations are often called phase transformations. …

‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N). It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest. The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denotes SU(N). Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.

‘In the case N = 1, SU(1) is just the trivial group with one element. The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthagonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
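The statements in that quotation are easy to verify numerically. The following sketch (my own check, assuming nothing beyond the textbook definitions of SO(3) and SU(2) rotation matrices) shows that multiplying by a unit complex number is a distance-preserving rotation of the plane, and that the SU(2) element corresponding to a rotation only returns to the identity after 720 degrees, which is the 'doubled version of SO(3)' point made above.

import numpy as np

# U(1): multiplying by a unit complex number rotates the plane, preserving distance.
z = 3.0 + 4.0j
rotated = np.exp(1j * 0.7) * z
print(abs(z), abs(rotated))              # both 5.0: distance to the central point is invariant

def so3_z(angle):
    # Ordinary rotation about the z axis in three real dimensions.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def su2_z(angle):
    # Corresponding SU(2) element, exp(-i*angle*sigma_z/2).
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

print(np.round(so3_z(2*np.pi), 6))   # 360 degrees: back to the identity matrix
print(np.round(su2_z(2*np.pi), 6))   # 360 degrees: minus the identity matrix
print(np.round(su2_z(4*np.pi), 6))   # 720 degrees: back to the identity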

2. Isospin symmetry rotation. This describes the weak interaction of quarks, controlling the transformation of quarks within one family of quarks. E.g., in beta decay a neutron decays into a proton by the transformation of a downquark into an upquark, and this transformation involves the emission of an electron (conservation of charge) and an antineutrino (conservation of energy and angular momentum). Neutrinos were a falsifiable prediction made by Pauli on 4 December 1930 in a letter to radiation physicists in Tuebingen based on the spectrum of beta particle energies in radioactive decay (‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding … the continous beta-spectrum … I admit that my way out may seem rather improbable a priori … Nevertheless, if you don’t play you can’t win … Therefore, Dear Radioactives, test and judge.’ – Pauli’s letter, quoted in footnote of page 12, http://arxiv.org/abs/hep-ph/0204104). The total amount of radiation emitted in beta decay could be determined from the difference in mass between the beta radioactive material and its decay product, the daughter material. The amount of energy carried in readily detectable ionizing beta particles could be measured. However, the beta particles were emitted with a continuous spectrum of energies up to a maximum upper energy limit (unlike the line spectra of gamma ray energies): it turned out that the total energy lost in beta decay was equal to the upper limit of the beta energy spectrum, which was three times the mean beta particle energy! Hence, on the average, only one-third of the energy loss in beta decay was accounted for in the emitted beta particle energy.

Pauli suggested that the unobserved beta decay energy was carried by neutral particles, now called antineutrinos. Because they are weakly interacting, it takes a great intensity of beta decay in order to detect the antineutrinos. They were first detected in 1956 coming from intense beta radioactivity in the fission product waste of a nuclear reactor. By conservation laws, Pauli had been able to predict the exact properties to be expected. The beta decay theory was developed soon after Pauli’s suggestion in the 1930s by Enrico Fermi, who then invented the nuclear reactor used to discover the antineutrino. However, Fermi’s theory has a neutron decay directly into a beta particle plus an antineutrino, whereas in the 1960s the theory of beta decay had to be expressed in terms of quarks. Glashow, Weinberg and Salam discovered that to make it a gauge theory there had to be a massive intermediate ‘weak gauge boson’. So what really happens is more complicated than in Fermi’s theory of beta decay. A downquark interacts with a massive W weak field gauge boson, which then decays into an electron and an antineutrino. The massiveness of the field quanta is needed to explain the weak strength of beta decay (i.e., the relatively long half-lives of beta decay, e.g. a free neutron is radioactive and has a beta half life of 10.3 minutes, compared with the tiny lifetimes of a really small fraction of a second for hadrons which decay via the strong interaction). The massiveness of the weak field quanta was a falsifiable prediction, and in 1983 CERN discovered the weak field quanta with the predicted energies, confirming SU(2) weak interaction gauge theory.

There are two relative types or directions of isospin, by analogy to ordinary spin in quantum mechanics (where spin up and spin down states are represented by +1/2 and -1/2 in units of h-bar). These two isospin charges are modelled by the Yang-Mills SU(2) symmetry, which has (2*2)-1 = 3 gauge bosons (with positive, negative and neutral electric charge, respectively). Because the interaction is weak, the gauge bosons must be massive and as a result they have a short range, since massive virtual particles don't exist for long in the vacuum and can't travel far in that short lifetime. The two isospin charge states allow quark-antiquark pairs, or doublets, to form, called mesons. The weak isospin force only acts on particles with left-handed spin. At high energy, all weak gauge bosons will be massless, allowing the weak and electromagnetic forces to become symmetric and unify. But at low energy, the weak gauge bosons acquire mass, supposedly from a Higgs field, breaking the symmetry. This Higgs field has not been observed, and the general Higgs models don't make a single falsifiable prediction (there are several possibilities).
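The claim that massive field quanta can't travel far in their short lifetime can be put into rough numbers with the standard uncertainty-principle range estimate, range ~ hbar/(mc). This is a generic sketch using textbook values (hbar*c = 197.3 MeV fm, a W boson mass of about 80.4 GeV, a charged pion mass of 139.6 MeV), none of which are derived from the argument of this post:

hbar_c_MeV_fm = 197.327    # hbar*c in MeV*femtometres (standard value)
m_W_MeV = 80.4e3           # W boson rest mass-energy, about 80.4 GeV
m_pi_MeV = 139.6           # charged pion rest mass-energy (Yukawa's quantum)

# Rough range over which a virtual quantum of mass m can act: hbar/(m*c).
print(hbar_c_MeV_fm / m_W_MeV)    # about 0.0025 fm (2.5*10^-18 m): why beta decay is so weak and slow
print(hbar_c_MeV_fm / m_pi_MeV)   # about 1.4 fm: the range of the residual strong force between nucleons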

3. Colour symmetry rotation. This changes the colour charge of a quark, in the process releasing colour-charged gluons as the field quanta. Strong nuclear interactions (which bind protons into a nucleus against very strong electromagnetic repulsion, which would be expected to make nuclei explode in the absence of this strong binding force) are described by quantum chromodynamics, whereby quarks have a symmetry due to their strong nuclear or 'colour' charges. This originated with Gell-Mann's SU(3) eightfold way of arranging the known baryons by their properties, a scheme which successfully predicted the existence of the Omega Minus before it was experimentally observed in 1964 at Brookhaven National Laboratory, confirming the SU(3) symmetry of hadron properties. The understanding (and testing) of SU(3) as a strong interaction Yang-Mills gauge theory in terms of quarks with colour charges and gluon field quanta was a completely radical extension of the original convenient SU(3) eightfold way hadron categorisation scheme.

Above: the eightfold way symmetry of hadron physics.

Experiments in the 1950s, scattering very high energy electrons off neutrons and protons, first showed evidence that each nucleon had a more complex structure than a simple point, and therefore the idea that these nucleons were simply fundamental particles was undermined. Another problem with nucleons being fundamental particles was that of the magnetic moments of neutrons and protons. Dirac in 1929 initially claimed that the antiparticle his equation predicted for the electron was the already-known proton (the neutron was still undiscovered until 1932), but because he couldn't explain why the proton is more massive than the electron, he eventually gave up on this idea and predicted the unobserved positron instead (just before it was discovered). The problem with the proton being a fundamental particle was that, by analogy to the positron, it would have a magnetic moment of 1 nuclear magneton, whereas in fact the measured value is 2.79 nuclear magnetons. Also, for the neutron, you would expect zero magnetic moment for a neutral spinning particle, but the neutron was found to have a magnetic moment of -1.91 nuclear magnetons. These figures are inconsistent with the neutron and proton being fundamental point particles, but are consistent with quark structure:

‘The fact that the proton and neutron are made of charged particles going around inside them gives a clue as to why the proton has a magnetic moment higher than 1, and why the supposedly neutral neutron has a magnetic moment at all.’ – Richard P. Feynman, QED, Penguin, London, 1990, p. 134.
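The standard constituent-quark estimate behind Feynman's remark can be sketched in a few lines. The formulas mu_p = (4*mu_u - mu_d)/3 and mu_n = (4*mu_d - mu_u)/3 are the usual static quark-model results (assumed here, not derived in this post); with equal up and down constituent masses they predict a neutron/proton moment ratio of -2/3, close to the measured -1.91/2.79:

# Constituent quark model: proton = uud, neutron = udd. Each quark is treated as a
# Dirac particle with magnetic moment proportional to its electric charge, so with
# equal constituent masses mu_u = -2 * mu_d (charges +2/3 and -1/3).
mu_d = -1.0                # arbitrary unit; only the ratio matters here
mu_u = -2.0 * mu_d

mu_p = (4.0 * mu_u - mu_d) / 3.0   # spin-flavour wavefunction result for the proton
mu_n = (4.0 * mu_d - mu_u) / 3.0   # and for the neutron

print(mu_n / mu_p)           # predicted ratio: -2/3 = -0.667
print(-1.913 / 2.793)        # measured ratio: about -0.685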

To explain hadron physics, Zweig and Gell-Mann suggested the theory that baryons are composed of three quarks. But there was immediately the problem that the Omega Minus would contain three identical strange quarks, violating the Pauli exclusion principle that prevents particles from occupying the same set of quantum numbers or states. (Pairs of otherwise identical electrons in an orbital have opposite spins, giving them different sets of quantum numbers, but because there are only two spin states, you can't make three identical charges share the same orbital by having different spins. Looking at the measured 3/2-spin of the Omega Minus, all of its 1/2-spin strange quarks would have the same spin.) To get around this problem in the experimentally discovered Omega Minus, the quarks must have an additional quantum number, due to the existence of a new charge, namely the colour charge of the strong force that comes in three types (red, blue and green). The SU(3) symmetry of colour force gives rise to (3*3)-1 = 8 gauge bosons, called gluons. Each gluon is a charged combination of a colour and the anticolour of a different colour, e.g. a gluon might be charged blue-antigreen. Because gluons carry a charge, unlike photons, they interact with one another and also with virtual quarks produced by pair production due to the intense electromagnetic fields near fermions. This makes the strong force vary with distance in a different way to that of the electromagnetic force. At small distances from a quark, the net colour charge increases in strength with increasing distance, which is the opposite of the behaviour of the electromagnetic charge (which gets bigger at smaller distances, due to less intervening shielding by the polarized virtual fermions caused in pair production). The overall result is that quarks confined in hadrons have asymptotic freedom to move about over a certain range of distances, which gives nucleons their size. Before the quark theory and colour charge had been discovered, Yukawa discovered a theory of strong force attraction that predicted the strong force was due to pion exchange. He predicted the mass of the pion; unfortunately the muon was discovered before the pion, and was originally inaccurately identified as Yukawa's exchange radiation. Virtual pions and other virtual mesons are now understood to mediate the strong interaction between nucleons as a relatively long-range residue of the colour force.
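A hedged sketch of how the strong force varies with distance in the opposite sense to electromagnetism can be made with the standard one-loop running-coupling formula of quantum chromodynamics. The formula and the reference value alpha_s(91 GeV) = 0.118 with five quark flavours are conventional textbook inputs, not outputs of anything argued in this post; the point is simply that the colour coupling grows at larger distances (lower energies):

import math

def alpha_s(Q_GeV, alpha_ref=0.118, Q_ref=91.0, n_f=5):
    # One-loop QCD running coupling, referenced to the Z mass scale.
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return alpha_ref / (1.0 + alpha_ref * b0 * math.log((Q_GeV / Q_ref) ** 2))

for Q in (2.0, 10.0, 91.0, 1000.0):    # energy scale in GeV (large Q = short distance)
    print(Q, round(alpha_s(Q), 3))
# The coupling grows as Q falls (i.e. at larger distances), unlike electromagnetism.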


Above: the electroweak charges of the Standard Model of mainstream particle physics. The argument we made is that U(1) symmetry isn't real and must be replaced by SU(2) with two charges and massless versions of the weak boson triplet (we do this by replacing the Higgs mechanism with a simpler mass-giving field that gives predictions of particle masses). The two charged gauge bosons simply mediate the positive and negative electric fields of charges, instead of having neutral photon gauge bosons with 4 polarizations. The neutral gauge boson of the massless SU(2) symmetry is the graviton. The lepton singlet with right-handed spin in the standard model table above is not really a singlet: because SU(2) is now being used for electromagnetism rather than U(1), we automatically have a theory that unites quarks and leptons. The problem of the preponderance of matter over antimatter is also resolved this way: the universe is mainly hydrogen, i.e. one electron, two upquarks and one downquark per atom. The electrons are not actually produced alone. The downquark, as we will demonstrate below, is closely related to the electron.

The fractional charge is due to vacuum polarization shielding, with the accompanying conversion of electromagnetic field energy into short-ranged virtual particle mediated nuclear fields. This is a predictive theory even at low energy because it can make predictions based on conservation of field quanta energy where vacuum polarization attenuates a field, and the conversion of leptons into quarks requires higher energy than existing experiments have had access to. So electrons are not singlets: some of them ended up being converted into quarks in the big bang in very high energy interactions. The antimatter counterpart for the electrons in the universe is not absent but is present in nuclei, because those positrons were converted into the upquarks in hadrons. The handedness of the weak force relates to the fact that in the early stages of the big bang, for each two electron-positron pairs that were produced by pair production in the intense, early vacuum fields of the universe, both positrons but only one electron were confined to produce a proton. Hence the amount of matter and antimatter in the universe is identical, but due to reactions related to the handedness of the weak force, all the anti-positrons were converted into upquarks, but only half of the electrons were converted into downquarks. We’re oversimplifying a little because some neutrons were produced, and quite a few other minor interactions occurred, but this is approximately the overall result of the reactions. The Standard Model table of particles above is in error because it assumes that leptons and quarks are totally distinct. For a more fundamental level of understanding, we need to alter the electroweak portion of the Standard Model.

The apparent deficit of antimatter in the universe is simply a misinterpretation: the antimatter has simply been transformed from leptons into quarks, which from a long distance display different properties and interactions to leptons (due to cloaking by the polarized vacuum and to close confinement causing colour charge to physically appear by inducing asymmetry; the colour charge of a lepton is invisible because it is symmetrically distributed over three preons in the lepton, and cancels out to white unless an enormous field strength, due to the extremely close proximity of another particle, creates an asymmetry in the preon arrangement, allowing a net colour charge to operate on the other nearby particle), so it isn't currently being acknowledged for what it really is. (Previous discussions of the relationship of quarks to leptons on this blog include https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/ and https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/ where suggestions by Carl Brannen and Tony Smith are covered.)

Considering the strange quarks in the Omega Minus, which contains three quarks each of electric charge -1/3, vacuum polarization of three nearby leptons would reduce the -1 unit observable charge per lepton to -1/3 observable charge per lepton, because the vacuum polarization in quantum field theory which shields the core of a particle extends out to about a femtometre or so, and this zone will overlap for three quarks in a baryon like the Omega Minus. The overlapping of the polarization zones will make them three times more effective at shielding the core charges than in the case of a single charge like a single electron. So the electron's observable electric charge (seen from a great distance) is reduced by a factor of three to the charge of a strange quark or a downquark. Think of it by analogy with a couple sharing blankets which act as shields, reducing the emission of thermal radiation. If each of the couple contributes one blanket, then the overlap of blankets will double the heat shielding. This is basically what happens when N electrons are brought close together so that they share a common (combined) vacuum polarization shell around the core charges: the shielding gives each charge in the core an apparent charge (seen from outside the vacuum polarization, i.e., more than a few femtometres away) of 1/N charge units. In the case of upquarks with apparent charges of +2/3, the mechanism is more complex, since the -1/3 charges in triplets are the clearest example of the mechanism whereby shared vacuum polarization shielding transforms properties of leptons into those of quarks. The emergence of colour charge when leptons are confined together also appears to have a testable, falsifiable mechanism, because we know how much energy becomes available for the colour charge as the observable electric charge falls (conservation of energy suggests that the attenuated electromagnetic charge gets converted into colour charge energy). For the mechanism of the emergence of colour charge in quarks from leptons, see the suggestions of Tony Smith and Carl Brannen, outlined at https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/.

In particular, the Cabibbo mixing angle in quantum field theory indicates a strong universality in reaction rates for leptons and quarks: the strength of the weak force when acting on quarks in a given generation is similar to that for leptons to within 1 part in 25. The small 4% difference in reaction rates arises, as explained by Cabibbo in 1964, from the fact that a lepton has only one way to decay, but a quark has two decay routes, with probabilities of 96% and 4% respectively. The similarity between leptons and quarks in terms of their interactions is strong evidence that they are different manifestations of common underlying preons, or building blocks.
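As a quick numerical check of that 96%/4% split, using the standard Cabibbo angle of roughly 13 degrees (an input assumed here, not something derived in this post):

import math

theta_c = math.radians(13.04)                 # Cabibbo angle, roughly 13 degrees

p_within_generation = math.cos(theta_c) ** 2  # probability weight of the dominant decay route
p_cross_generation = math.sin(theta_c) ** 2   # weight of the Cabibbo-suppressed route

print(round(p_within_generation, 3))   # about 0.95
print(round(p_cross_generation, 3))    # about 0.05
print(round(p_within_generation + p_cross_generation, 3))   # the two routes sum to 1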

Above: Coulomb force mechanism for electrically charged massless gauge bosons. The SU(2) electrogravity mechanism. Spin-1 gauge bosons for fundamental interactions: the massive versions of the SU(2) Yang-Mills gauge bosons are the weak field quanta which only interact with left-handed particles. One half (corresponding to exactly one handedness for weak interactions) of SU(2) gauge bosons acquire mass at low energy; the other half are the gauge bosons of electromagnetism and gravity.

Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another's back, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.


Above: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) for electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so there is no adding up possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that it cancels the magnetic fields completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can't be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10^80 times the force of gravity between a single pair of charges (as it would be if you simply add up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 10^40 times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes the strength of electromagnetism only 10^40/10^80 = 10^-40 as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^80 randomly distributed charges, electromagnetism as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) is 10^40 times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn't, and is like the diffusive drunkard's walk where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for individual drunkard's walks, there is the factual solution that a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
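The square-root law invoked in the last two paragraphs is just drunkard's-walk statistics, and it is easy to verify numerically. This minimal sketch (my own check; the 10^80 charge count is the figure assumed in the text above) confirms that the root-mean-square of a sum of N randomly signed unit contributions is about sqrt(N), and then scales that law up to the quoted 10^40 factor:

import math
import random

def rms_sum(n_charges, trials=500):
    # Root-mean-square of a sum of n randomly signed unit contributions.
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return (total / trials) ** 0.5

for n in (100, 2500):
    print(n, math.sqrt(n), round(rms_sum(n), 1))   # RMS sum is close to sqrt(n), not n and not 0

print(math.sqrt(1e80))   # 1e40: the multiplying factor quoted above for electromagnetism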

We are pushed down to Earth because the Earth shields us from gravitons in the downward direction, creating a small amount of asymmetry in the exchange of gravitons between us and the surrounding universe (the cross-section for graviton shielding by an electron is only its black hole event horizon cross-sectional area, i.e. 5.75*10^-114 square metres). The special quasi-compressive effects of gravitons on masses account for the 'curvature' effects of general relativity, such as the fact that the Earth's radius is 1.5 mm less than the figure given by Euclidean geometry (Feynman Lectures on Physics, c42 p6, equation 42.3).
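Both numbers quoted in that paragraph can be reproduced with a few lines of arithmetic using standard constants. The identification of the graviton shielding cross-section with the black hole event horizon area is this post's assumption; the snippet below merely checks the quoted figures:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron mass, kg
M_earth = 5.972e24   # Earth mass, kg

# Event horizon radius and cross-sectional area for an electron-mass black hole:
r_s = 2.0 * G * m_e / c**2
print(math.pi * r_s**2)            # about 5.7*10^-114 square metres

# Feynman's 'excess radius' for the Earth (Lectures, eq. 42.3): G*M/(3*c^2).
print(G * M_earth / (3.0 * c**2))  # about 1.5*10^-3 m, i.e. roughly 1.5 mm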

As soon as you do include masses in the surrounding universe (which are far bigger even though they are further away, i.e., the mass of the Earth and an apple is only 1 part in 10^29 of the mass of the universe, and all masses are gravitational charges which exchange gravitons with all other masses and with energy!), you begin to see what is really occurring. Spin-1 gauge bosons are gravitons!

Cosmologically distant masses push one another apart by exchanging gravitons, explaining the lack of gravitational deceleration observed in the universe. But masses which are nearby in cosmological terms (not redshifted much relative to one another) are pushed together by gravitons from the surrounding (highly redshifted) distant universe, because they don't exert an outward force relative to one another, and so don't fire a recoil force (mediated by spin-1 gravitons) towards one another. They, in other words, shield each other. Think of the exchange simply as bullets bouncing off particles. If bullets are firing in from all directions, the proximity of a nearby mass which isn't shooting at you will act as a shield, and you'd be pushed towards that shield (which is why things fall towards large masses). This is a quantitative prediction: the strength of the gravitational coupling can be calculated and checked. So this mechanism, which predicted the lack of gravitational deceleration in the big bang in 1996 (observed in 1998 by Saul Perlmutter's automated CCD telescope software), also predicts gravitation quantitatively.

It should be noted that in this diagram we have drawn the force-causing gauge or vector boson exchange radiation in the usual convention as a horizontal wavy line (i.e., the gauge bosons are shown as being instantly exchanged, not as radiation propagating at the velocity of light and thus taking time to propagate). In fact, gauge bosons don’t propagate instantly and to be strictly accurate we would need to draw inclined wavy lines. The exchange of the gauge bosons as a kind of reflection process (which imparts an impulse in the case where it causes the mass to accelerate) would make the diagram more complex. Conventionally, Feynman diagrams are shorthand for categories of interactions, not for specific individual interactions. Therefore, they are not depicting all the similar interactions that occur when two particles attract or repel; they are way oversimplified in order to make the basic concept lucid.

Loops in Feynman diagrams and the associated infinite perturbative expansion

Because the gravitational phenomena we have observed, manifested in the checked aspects of general relativity, are low-energy phenomena, loops (whereby bosonic field quanta undergo pair production and briefly become fermionic pairs which soon annihilate back into bosons, but become briefly polarized during their existence and in so doing modify the field) can be ignored; loops are what is described by the infinite series of Feynman diagrams, each representing one term in the perturbative expansion of a Feynman path integral (this is discussed later in this post). So the direct exchange of gauge bosons such as gravitons gives us only a few possible types of Feynman diagrams for the non-loop, simple, direct exchange of field quanta between charges. These are called ‘tree diagrams’. Important results include:

1. Quantization of mass: the force of gravity is proportional not to M1*M2 but instead to M^2, which is a vital result because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. Lepton and hadron masses beyond the electron are nearly all integer denominations of 0.5*0.511*137 = 35 MeV, where 0.511 MeV is the electron’s mass and 137.036… is the well known Feynman dimensionless factor in charge renormalization (discovered much earlier in quantum mechanics by Sommerfeld); furthermore, quark doublet or meson masses are close to multiples of twice this, or 70 MeV, while quark triplet or baryon masses are close to multiples of three times this, or 105 MeV. It appears that the simplest possible model, which predicts masses of new as-yet-unobserved particles as well as explaining existing particle masses, is that the vacuum particle which is the building block of mass is 91 GeV like the Z weak boson; the muon mass for instance is 91,000 MeV divided by the product of 137 and twice Pi, which is a combination of a 137 vacuum polarization shielding factor and a dimensionless geometric shielding factor of twice Pi: e.g. spinning a particle or a missile in flight reduces the radiant exposure per unit area of its spinning surface by a factor of Pi as compared to a non-spinning particle or missile, because the entire surface area of the edge of a loop or cylinder is Pi times the cross-sectional area seen side-on, while a spin-1/2 fermion must rotate twice, i.e. by 720 not 360 degrees – like drawing a line right around the single surface of the Möbius strip – to expose its entire surface to observation and reset its symmetry. (A short numerical check of these figures is sketched after this list.) This is analysed in an earlier blog post, showing how all masses are built up from only one type of fundamental massive particle in the vacuum, and making checkable predictions. Polarized vacuum veils around particles reduce the strength of the coupling between the massive 91 GeV vacuum particles (which interact with gravitons) and the SU(2) x SU(3) particle core of interest (which doesn’t directly interact with gravitons), accounting for the observed discrete spectrum of fundamental particle masses.

The correct mass-giving field is different in some ways from the electroweak symmetry breaking Higgs field of the conventional Standard Model (which gives the Standard Model charges as well as the 3 weak gauge bosons their symmetry-breaking mass at low energies by ‘miring’ them or resisting their acceleration): a discrete number of the vacuum mass particles (gravitational charges) become associated with leptons and hadrons, either within the vacuum polarized region which surrounds them (strong coupling to the massive particles, hence large effective masses) or outside it (where the coupling, which presumably relies on the electromagnetic interaction, is shielded and weaker, giving lower effective masses to particles). In the case of the deflection of light by gravity, the photons have zero rest mass so it is their energy content which is causing deflection. The mass-giving field in the vacuum still mediates effects of gravitons, but because the photon has no net electric charge (it has equal amounts of positive and negative electric field density), it has zero effective rest mass. The quantum mechanism by which light gets deflected as predicted by general relativity has been analysed in an earlier post: due to the FitzGerald-Lorentz contraction, a photon’s field lines are all in a plane perpendicular to the direction of propagation. This means that twice the electric field’s energy density in a photon (or other light-velocity particle) is parallel to a gravitational field line that the photon is crossing at normal incidence, compared to the case for a slow-moving charge with an isotropic electric field. The strength of the coupling between the photon’s electric field and the mass-giving particles in the vacuum is generally not quantized, unless the energy of the photon is quantized.

If you are firmly attached to an accelerating horse, you will accelerate at the specific rate that the horse accelerates at. But if you are less firmly attached, the acceleration you get depends on your adhesion to the saddle. If you slide back as the horse accelerates, your acceleration is somewhat less than that of the horse you are sitting on. Particles with rest mass are firmly anchored to vacuum gravitational charges, the particles with fixed mass that replace the traditional role of Higgs bosons. But particles like photons, which lack rest mass, are not firmly attached to the massive vacuum field, and the quantized gravitational interactions – like the fixed acceleration of a horse – are not automatically conveyed to the photon. The result is that a photon gets deflected more classically by ‘curved spacetime’, created by the effect of gravitons upon the Higgs-like massive bosons in the vacuum, than particles with rest mass such as electrons.

2. The inverse square law, for distance r.

3. Many checked and checkable quantitative predictions. Because the Hubble constant and the density of the universe can be quantitatively measured (within certain error bars, like all measurements), you can use this to predict the value of G. As astronomy gets better measurements, the accuracy of the prediction gets better and can be checked experimentally.
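Referring back to point 1 above, here is a minimal numerical sketch of the quoted mass figures (the constants and the 91 GeV / 137 / twice-Pi combination are taken straight from the text; this is an arithmetic check, not a derivation):

```python
import math

m_e = 0.511           # electron mass in MeV
alpha_inv = 137.036   # inverse fine-structure constant (the '137' shielding factor)

unit = 0.5 * m_e * alpha_inv                 # ~35 MeV building block
print(f"basic unit    ~ {unit:.1f} MeV")     # ~35 MeV
print(f"meson scale   ~ {2 * unit:.1f} MeV") # ~70 MeV
print(f"baryon scale  ~ {3 * unit:.1f} MeV") # ~105 MeV

# Muon mass estimate quoted in the text: 91 GeV vacuum particle shielded
# by a factor of 137 and a geometric factor of twice Pi.
m_muon = 91000.0 / (alpha_inv * 2.0 * math.pi)
print(f"muon estimate ~ {m_muon:.1f} MeV (measured ~105.7 MeV)")
```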

In addition, the mechanism predicts the expansion of the universe: the reason why Yang-Mills exchange radiation is redshifted to lower energy by bouncing off distant masses is that energy from gravitons is being used to cause the distant masses to speed up. This makes quantitative predictions, and is a firm test of the theory. (The outward force of a receding galaxy of mass m is F = mH^2R, which requires power P = dE/dt = Fv = mH^3R^2, where E is energy.)
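To make the bracketed formulas concrete, here is a small sketch with illustrative input values (the Hubble parameter, galaxy mass and distance below are assumptions chosen only to show how F = mH^2R and P = mH^3R^2 are evaluated):

```python
H = 2.3e-18    # Hubble parameter in SI units (~70 km/s/Mpc), assumed value
m = 2.0e42     # mass of an example receding galaxy in kg, illustrative
R = 3.1e24     # its distance in metres (~100 Mpc), illustrative

F = m * H**2 * R        # outward force of the receding galaxy
P = m * H**3 * R**2     # power, P = dE/dt = F*v with recession velocity v = H*R

print(f"outward force F = {F:.2e} N")
print(f"power P = {P:.2e} W")
```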

In 1996 (published via the letters pages of the Oct. 1996 issue of the British-based journal Electronics World) the mechanism also predicted the lack of deceleration at large redshifts, which was confirmed by Perlmutter’s observations on distant supernovae redshifts in 1998. Another prediction, which occurs when you apply the same mechanism in detail to electromagnetism, is that the coupling constant for the electromagnetic interaction is bigger than that of gravitation by the square root of the number of charges in the universe. This again is accurate to within available data, and is a falsifiable prediction because, as the input data improves, the prediction becomes more accurate and can be compared in more detail to observation.

It should be noted that the gravitons in this model would have a mean free path (average distance between interactions) of 3.10*10^77 metres in water, as calculated in the earlier post here. These are able to produce gravity by interacting with the Higgs-like vacuum field, due to the tremendous flux of gravitons involved. The radially symmetric, isotropic outward force of the receding universe is of the order of 10^43 Newtons, and by Newton’s 3rd law this produces a similar equal and opposite (inward) reaction force. This is the immense field behind gravitation. Only a trivial asymmetry in the normal equilibrium of such immense forces is enough to produce gravity. Cosmologically nearby masses are pushed together because they aren’t receding much, and so don’t exert a forceful flux of graviton exchange radiation in the direction of other (cosmologically) nearby masses. Because (cosmologically) nearby masses therefore don’t exert graviton forces upon each other as exchange radiation, they are shielding one another in effect, and therefore get pushed together by the forceful exchange of gravitons which does occur with the receding universe on the unshielded side, as illustrated in Fig. 1 above.

Dr Thomas Love of California State University has pointed out:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

That looks like a factual problem, undermining the mainstream interpretation of the mathematics of quantum mechanics. If you think about it, sound waves are composed of air molecules, so you can easily write down the wave equation for sound and then – when trying to interpret it for individual air molecules – come up with the idea of wavefunction collapse occurring when a measurement is made for an individual air molecule.

Feynman writes in a footnote printed on pages 55-6 of my (Penguin, 1990) copy of his book QED:

‘… I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed … If you get rid of all the old-fashioned ideas and instead use the [path integral] ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’

Feynman on p85 points out that the effects usually attributed to the ‘uncertainty principle’ are actually due to interferences from virtual particles or field quanta in the vacuum (which don’t exist in classical theories but must exist in an accurate quantum field theory):

‘But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these [classical] rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

Hence, in the path integral picture of quantum mechanics – according to Feynman – all the indeterminacy is due to interferences. It’s very analogous to the indeterminacy of the motion of a small grain of pollen (less than 5 microns in diameter) due to jostling by individual interactions with air molecules, which represent the field quanta being exchanged with a fundamental particle.

The path integral then makes a lot of sense, as it is the statistical resultant for a lot of interactions, just as the path integral was actually used for Brownian motion (diffusion) studies in physics before its role in QFT. The path integral still has the problem that it’s unrealistic in using calculus and averaging an infinite number of possible paths determined by the continuously variable lagrangian equation of motion in a field, when in reality there are not going to be an infinite number of interactions taking place. But at least it is possible to see the problems, and entanglement may be a red herring:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.

copy of a comment:

http://asymptotia.com/2008/02/17/tales-from-the-industry-xvii-jump-thoughts/

Hi Clifford,

Thanks for these further thoughts about being science advisor […] for what is (at least partly) a sci fi film. It’s fascinating.

“What I like to see first and foremost in these things is not a strict adherence to all known scientific principles, but instead internal consistency.”

Please don’t be too hard on them if there are apparent internal inconsistencies. Such alleged internal inconsistencies don’t always matter, as Feynman discovered:

“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …” – Feynman, quoted at http://www.tony5m17h.net/goodnewsbadnews.html#badnews

I agree with you that: “Entertainment leading to curiosity, real questions, and then a bit of education …”

“… Smolin has launched a controversial attack on those working on the dominant model in theoretical physics. He accuses string theorists of racism, sexism, arrogance, ignorance, messianism and, worst of all, of wasting their time on a theory that hasn’t delivered.”


http://tls.timesonline.co.uk/article/0,,25372-2650590_1,00.html

‘rock guitars could hold secret to the universe’. It might sound like just more pathetic spin, but actually, the analogy of string theory hype to that of a community of rock groupies is sound.

The rock guitar string promoter referred to just above is Dr Lewney who has the site http://www.doctorlewney.com/. He writes on Dr Woit’s blog:

‘I’m actually very open to ideas as to how best to communicate physics to schoolkids.’

Dr Lewney, if you want to communicate real, actual physics rather than useless blathering and lies to schoolkids, that’s really excellent. But please just remember that physics is not uncheckable speculation, and that twenty years of mainstream hype of string theory in British TV, newspapers and the New Scientist has by freak ‘coincidence’ (don’t you believe it) correlated with a massive decline in kids wanting to do physics. Maybe they’re tired of sci fi dressed up as physics or something.

http://www.buckingham.ac.uk/news/newsarchive2006/ceer-physics-2.html:

‘Since 1982 A-level physics entries have halved. Only just over 3.8 per cent of 16-year-olds took A-level physics in 2004 compared with about 6 per cent in 1990.

‘More than a quarter (from 57 to 42) of universities with significant numbers of physics undergraduates have stopped teaching the subject since 1994, while the number of home students on first-degree physics courses has decreased by more than 28 per cent. Even in the 26 elite universities with the highest ratings for research the trend in student numbers has been downwards.

‘Fewer graduates in physics than in the other sciences are training to be teachers, and a fifth of those are training to be maths teachers. A-level entries have fallen most sharply in FE colleges where 40 per cent of the feeder schools lack anyone who has studied physics to any level at university.’

http://www.math.columbia.edu/~woit/wordpress/?p=651#comment-34820:

‘One thing that is clear is that hype of speculative uncheckable string theory has at least failed to encourage a rise in student numbers over the last two decades, assuming that such speculation itself is not actually to blame for the decline in student interest.

‘However, it’s clear that when hype fails to increase student interest, everyone will agree to the consensus that the problem is a lack of hype, and if only more hype of speculation was done, the problem would be addressed.’

Professor John Conway, a physicist at the University of California, has written a post called ‘What’s the (Dark) Matter?’ where someone has referred to my post here as my ‘belief’ that gravitons are of spin-1. Actually, this isn’t a ‘belief’. It’s a fact (not a belief) that so far, spin-2 graviton ideas are at best uncheckable speculation that is ‘not even wrong’, and it’s a fact (not a belief) that this post shows that spin-1 gravitons do reproduce gravitation as already known from the checked and confirmed results of general relativity, plus quantitatively predicting more stuff such as the strength of gravity. This is not a mere ‘personal belief’, such as the gut feeling that is used by string theorists, politicians and priests to justify hype in religion or politics. It is instead fact-based, not belief-based, and it makes accurate predictions so far.

Because the effective value of G at early times after the big bang is so small from our spacetime perspective, we see small gravitational effects: the universe looks very flat, i.e., gravity was so weak it was unable to clump matter very much at 400,000 years after the big bang, which is the time of our information on flatness, i.e. the time that the closely studied cosmic background radiation was emitted. The mainstream ad hoc explanation for this kind of observation is a non-falsifiable (endlessly adjustable) idea from Alan Guth that the universe expanded or ‘inflated’ at a speed faster than light for a small fraction of a second, which would have allowed the limited total mass to get very far dispersed very quickly, which would have reduced the curvature of the universe and suppressed the effects of gravitation at subsequent times in the early universe.

On the topic of variations in G, Edward Teller falsely claimed in a 1948 paper that if G had varied as Dirac suggested a few years earlier, then the gravitationally caused compression in the early universe and in stars including the sun would vary with time, affecting fusion rates dramatically because fusion is highly sensitive to the amount of compression (which he knew from his Los Alamos studies on the difficulty of producing a hydrogen bomb at that time). However, the Yang-Mills mechanism of electromagnetism (whose role in fusion is the Coulomb repulsion of protons, i.e., the stronger electromagnetism is, the less fusion you get because protons approach less closely because they are repelled more strongly, so the short-ranged strong force which causes protons to fuse together ends up causing less fusion), shows that it will vary with time in the same way that gravitation does.

This invalidates Teller’s theory, because if you for example halve the value of G (making fusion more difficult by reducing the compression of protons long ago), you simultaneously get an electromagnetic coupling charge which is halved, and the effect of the latter is to increase fusion by reducing the Coulomb barrier which protons need to overcome in order to fuse. The two effects – reduced G which tends to reduce fusion by reducing compression, and reduced Coulomb charge which allows protons to approach closer before being repelled, and therefore increases fusion – offset one another. Dirac wrongly suggested that G falls with time, because he believed that at early times G was as strong as electromagnetism and numerically ‘unified’; actually all attempts to explain the universe by claiming that the fundamental forces including gravity are the same at a particular very early time/high energy, are physically flawed and violate the conservation of energy – the whole reason why the strong force charge strength falls at higher energies is because it is being caused by pair-production of virtual particles including virtual quarks accompanied by virtual gluons. This pair-production is a result of the electromagnetic charge, which increases at higher energy.

‘A Party member … is supposed to live in a continuous frenzy of hatred of foreign enemies and internal traitors … The discontents produced by his bare, unsatisfying life are deliberately turned outwards and dissipated by such devices as the Two Minutes Hate, and the speculations which might possibly induce a skeptical or rebellious attitude are killed in advance by his early acquired inner discipline … called, in Newspeak, crimestop. Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – Orwell, 1984.

Outline of the qualitative mechanism for the coupling of mass to otherwise massless Standard Model fermions.

Above: Simplified depiction of the coupling scheme for mass to be given to Standard Model particles by a separate field, which is the man-in-the-middle between graviton interactions and electromagnetic interactions. A more detailed analysis of the model, with a couple of mathematical variations and some predictions of masses for different leptons and hadrons, is given in the earlier post here and there are updates in other recent posts on this blog. In the case of quarks, the cores are so close together that they share the same ‘veil’ of polarized vacuum, so N quarks in close proximity (asymptotic freedom inside hadrons) boost the electric charge shielding factor by a factor of N; so if you have three quarks of bare charge –j each and a normal vacuum polarization shielding factor j, the total charge is not –jN but is –jN/N, where the N in the denominator is there to account for the increased vacuum shielding. Obviously –jN/N = –j, so 3 electron-charge quarks in close proximity will only exhibit the combined charge of 1 electron, as seen at a distance beyond 33 fm from the core. Hence, in such a case, the apparent electric charge contribution per quark is only –1/N = –1/3, which is exactly what happens in the Omega Minus particle (which has 3 strange quarks of apparent charge –1/3 each, giving the Omega Minus a total apparent electric charge, as observed beyond 33 fm, of –1 unit). More impressively, this model predicts the masses of all leptons and hadrons, and also makes falsifiable predictions about the variation in coupling constants as a function of energy, which results from the conversion of electromagnetic field energy into short-range nuclear force field quanta as a result of pair production of particles including weak gauge bosons, virtual quarks and gluons in the electromagnetic field at high energy (short distances from the particle core). The energy lost from the electromagnetic field, due to vacuum polarization opposing the electric charge core, gets converted into short-range nuclear force fields. From the example of the Omega Minus particle, we can see that the electric charge per quark observable at long ranges is reduced from –1 to –1/3 unit due to the close proximity of three similarly charged quarks, as compared to a single particle core surrounded by polarized vacuum, i.e. a lepton (the Omega Minus is a unique, very simple situation; usually things are far more complicated because hadrons generally contain pairs or triplets of quarks of different flavour). Hence, 2/3rds of the electric field energy that occurs when only one particle is alone in a polarized vacuum (i.e. a lepton) is used to generate short-ranged weak and strong nuclear force fields when three such particles are closely confined.
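The –jN/N bookkeeping above amounts to a one-line calculation; here is a trivial sketch (the function name is my own) of the apparent long-range charge per quark when N quarks share one polarized-vacuum veil:

```python
def apparent_charge_per_quark(bare_charge, n_quarks):
    """Total observed charge is bare_charge * n_quarks / n_quarks (the extra vacuum
    shielding cancels the extra bare charge), so per quark it is bare_charge / n_quarks."""
    total_observed = bare_charge * n_quarks / n_quarks
    return total_observed / n_quarks

# Omega Minus: three strange quarks of bare charge -1 (electron units)
print(apparent_charge_per_quark(-1.0, 3))   # -0.333..., i.e. -1/3 per quark
```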

As discussed in earlier posts, the similarity of leptons and quarks has been known since 1964, when it was discovered by the Italian physicist Nicola Cabibbo: the rates of lepton interactions are identical to those of quarks to within just 4%, or one part in 25. The weak force when acting on quarks within one generation of quarks is identical to within 1 part in 25 of that when acting on leptons (although if the interaction is between two quarks of different generations, the interaction is weaker by a factor of 25). This similarity of quarks and leptons is called ‘universality’. Cabibbo brilliantly suggested that the slight (4%) difference between the action of the weak force on leptons and quarks is due to the fact that a lepton has only one way to decay, whereas a quark has two possible decay routes, with relative probabilities of 1/25 and 24/25, the sum being of course (1/25) + (24/25) = 1 (the same as that for a lepton). But because only the one quark decay route or the other (1/25 or 24/25) is seen in an experiment, the effective rate of quark interactions is lower than that for leptons. If the weak force involves an interaction between just one generation of quarks, it is 24/25 or 96% as strong as between leptons, but if it involves two generations of quarks, it is only 1/25th as strong as when mediating a similar interaction for leptons.
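The universality arithmetic above can be restated in a few lines; note that identifying the 1/25 fraction with sin^2 of the Cabibbo angle is standard terminology that I am adding here, not something stated in the text:

```python
import math

same_generation  = 24.0 / 25.0   # probability of a quark decay within its own generation
cross_generation = 1.0 / 25.0    # probability of a decay into the other generation

print(same_generation + cross_generation)   # = 1.0, the same total as for a lepton

# The 1/25 fraction corresponds to sin^2(theta_C) in standard notation,
# giving an implied Cabibbo angle of roughly 11.5 degrees.
theta_c = math.degrees(math.asin(math.sqrt(cross_generation)))
print(f"implied Cabibbo angle ~ {theta_c:.1f} degrees")
```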

This is very strong evidence that quarks and leptons are fundamentally the same thing, just in a different disguise due to the way they are paired or tripleted and ’dressed’ by the surrounding vacuum polarization (electric charge shielding effects, and the use of energy to mediate short-range nuclear forces).

A quick but vital update about my research (particularly updating the confusion in some of the comments to this blog post): I’ve obtained the physical understanding which was missing from the QFT textbooks I’ve been studying by Weinberg, Ryder and Zee, from the 2007 edition of Professor Frank Close’s nicely written little book The Cosmic Onion, Chapter 8, ‘The Electroweak Force’.

Close writes that the field quantum of U(1) in the Standard Model is not the photon, but a B0 field quantum.

SU(2) gives rise to field quanta W+, W− and W0. The photon and the Z0 both result from the Weinberg ‘mixing’ of the electrically neutral W0 from SU(2) with the electrically neutral B0 from U(1).

This is precisely the information I was looking for, which was not clearly stated in the QFT textbooks. It enables me to get a physical feel for how the mathematics works.

The Weinberg mixing angle determines how W0 from SU(2) and B0 from U(1) mix together to yield the photon (textbook electromagnetic field quanta) and the Z0 massive neutral weak gauge boson.

If the Weinberg mixing angle were zero, then W0 = Z0 and B0 = electromagnetic photon. However, this simple scheme fails (although this failure is not made clear in any of the QFT textbooks I’ve read, which have obfuscated instead), and an ad hoc or fudged mixing angle of about 26 degrees (this is the angle between the Z0 and W0 phase vectors) is required.
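For readers who want the mixing written out, this is the standard textbook rotation between the neutral bosons (the matrix form is the conventional one, not something specific to this blog; the 26-degree figure is the value quoted above, while standard fits give roughly 28-29 degrees):

```python
import math

theta_w = math.radians(26.0)   # mixing angle figure quoted above

def mix(B0, W0, theta):
    """Standard electroweak mixing of the neutral states:
       photon =  B0*cos(theta) + W0*sin(theta)
       Z0     = -B0*sin(theta) + W0*cos(theta)"""
    photon = B0 * math.cos(theta) + W0 * math.sin(theta)
    Z0 = -B0 * math.sin(theta) + W0 * math.cos(theta)
    return photon, Z0

# With theta = 0, a pure B0 state is exactly the photon and W0 is exactly the Z0;
# a non-zero angle mixes the two states.
print(mix(1.0, 0.0, theta_w))
print(mix(1.0, 0.0, 0.0))
```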

Here’s a brief comment about the vague concept of a ‘zero point field’, which unhelpfully ignores the differences between fundamental forces and mixes up gravitational and electromagnetic field quanta interactions to create a muddle. Traditional calculations of that ‘field’ (there isn’t only one field acting on ground-state electrons; there is gravity and there is electromagnetism, with different field quanta needed to explain why one is always attractive and the other is attractive only between unlike charges and repulsive between similar charges, not to mention explaining the 10^40 difference in field strengths for those forces) give a massive energy density to the vacuum, far higher than that observed with respect to the small positive cosmological constant in general relativity. However, two separate force fields are there being confused. The estimates of the ‘zero point field’ which are derived from electromagnetic phenomena, such as electrons in the ground state of hydrogen being in an equilibrium of emission and reception of field quanta, have nothing to do with the graviton exchange that causes the cosmic expansion (Figure 1 above has the mechanism for that). There is some material about traditional ‘zero point field’ philosophy on Wikipedia.

The radius of the event horizon of a black hole electron is of the order of 1.4*10^-57 m, the equation being simply r = 2GM/c^2 where M is the electron mass.

Compare this to the Planck length, 1.6*10^-35 metres, which is a dimensional-analysis-based (non-physical) length far larger in size, yet historically claimed to be the smallest physically significant size!

The black hole length equation is different from the Planck length equation principally in that Planck’s equation includes Planck’s constant h, and doesn’t include electron mass. Both equations contain c and G. The choice of which is the more fundamental equation should be based on physical criteria, not groupthink or the vagaries of historical precedence.
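A minimal sketch of the two lengths being compared, using rounded standard constants (my assumed values):

```python
import math

G    = 6.674e-11    # m^3 kg^-1 s^-2
c    = 2.998e8      # m/s
hbar = 1.055e-34    # J s
m_e  = 9.109e-31    # electron mass, kg

r_bh = 2 * G * m_e / c**2            # black hole electron event horizon radius
l_pl = math.sqrt(hbar * G / c**3)    # Planck length from dimensional analysis

print(f"black hole electron radius = {r_bh:.2e} m")      # ~1.4e-57 m
print(f"Planck length              = {l_pl:.2e} m")      # ~1.6e-35 m
print(f"Planck length / BH radius  = {l_pl / r_bh:.1e}") # ~1e22
```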

The Planck length is complete rubbish: it’s not based on physics, it’s unchecked physically, and it’s ‘not even wrong’ uncheckable speculation.

The smaller black hole size is checkable because it causes physical effects. According to the Wikipedia page: http://en.wikipedia.org/wiki/Black_hole_electron

“A paper titled “Is the electron a photon with toroidal topology?” by J. G. Williamson and M. B. van der Mark, describes an electron model consisting of a photon confined in a closed loop. In this paper, the confinement method is not explained. The Wheeler suggestion of gravitational collapse with conserved angular momentum and charge would explain the required confinement. With confinement explained, this model is consistent with many electron properties. This paper argues (page 20) “–that there exists a confined single-wavelength photon state, (that) leads to a model with non-trivial topology which allows a surprising number of the fundamental properties of the electron to be described within a single framework.” “

My papers in Electronics World, August 2002 and April 2003, similarly showed that an electron is physically identical to a confined charged photon trapped into a small loop by gravitation (i.e., a massless SU(2) charged gauge boson which has not been supplied with mass from the Higgs field; the detailed way that the magnetic field curls cancel when such energy goes round in a loop, or alternatively is exchanged in both directions between charges, prevents the usual infinite-magnetic-self-inductance objection to the motion of charged massless radiation).

The Wiki page on black hole electrons then claims wrongly that:

“… the black hole electron theory is incomplete. The first problem is that black holes tend to merge when they meet. Therefore, a collection of black-hole electrons would be expected to become one big black hole. Also, an electron-positron collision would be expected to produce a larger neutral black hole instead of two photons as is observed. These problems reflect the non-quantum nature of general relativity theory.
“A more serious issue is Hawking radiation. According to Hawking’s theory, a black hole the size and mass of an electron should vanish in a shower of photons (not just two photons of a given energy) within a small fraction of a second. Again, the current incompatibility of general relativity and quantum mechanics at electron scales prevents us from understanding why this never occurs.”

All of these “objections” are based on flawed versions of Hawking’s black hole radiation theory, which neglect a lot of vital physics that makes the correct theory more subtle.

See the Schwinger equation for the pair-production field strength requirements: equation 359 of the mainstream work http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of the mainstream work http://arxiv.org/abs/hep-th/0510040.

First of all, Schwinger showed that you can’t get spontaneous pair-production in the vacuum if the electromagnetic field strength is below the critical threshold of 1.3*10^18 volts/metre.
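Schwinger’s threshold quoted here follows from the standard expression E_c = m_e^2 c^3 / (e*hbar); a quick numerical check with rounded constants (my assumed values):

```python
m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J s

# Critical field strength for spontaneous pair production in the vacuum
E_crit = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field = {E_crit:.2e} V/m")   # ~1.3e18 V/m
```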

Hawking’s radiation theory depends on this pair production, because his explanation is that pair production must occur near the event horizon of the black hole.

One virtual fermion falls into the black hole, and the other escapes from the black hole and thus becomes a “real” particle (i.e., one that doesn’t get drawn to its antiparticle and annihilated into bosonic radiation after the brief Heisenberg uncertainty time).

In Hawking’s argument, the black hole is electrically uncharged, so this mechanism of randomly escaping fermions allows them to annihilate into real gamma rays outside the event horizon, and Hawking’s theory describes the emission spectrum of these gamma rays (they are described by a black body type radiation spectrum with a specific equivalent radiating temperature).

The problem is that, if the black hole does need pair production at the event horizon in order to produce gamma rays, this won’t happen the way Hawking suggests.

The electric charge needed to produce Schwinger’s 1.3*10^18 V/m electric field, which is the minimum needed to cause pair-production/annihilation loops in the vacuum, will modify Hawking’s mechanism.

Instead of virtual positrons and virtual electrons both having an equal chance of falling into the real core of the black hole electron, what will happen is that the pair will be on average polarized, with the virtual positron moving further towards the real electron core, and therefore being more likely to fall into it.

So, statistically you will get an excess of virtual positrons falling into an electron core and an excess of virtual electrons escaping from the black hole event horizon of the real electron core.

From a long distance, the sum of the charge distribution will make the electron appear to have the same charge as before, but the net negative charge will then come from the excess electrons around the event horizon.

Those electrons (produced by pair production) can’t annihilate into gamma rays, because not enough virtual positrons are escaping from the event horizon to enable them to annihilate.

This really changes Hawking’s theory when applied to fundamental particles as radiating black holes.

Black hole electrons radiate negatively charged massless radiation: gauge bosons. These are the Hawking radiation from black hole electrons. The electrons don’t evaporate to nothing, because they’re all evaporating and therefore all receiving radiation in equilibrium with emission.

This is part of the reason why SU(2), rather than U(1)xSU(2), looks to me like the best way to deal with electromagnetism as well as the weak and gravitational interactions! By simply getting rid of the Higgs mechanism and replacing it with something that provides mass to only a proportion of the SU(2) gauge bosons, we end up with massless charged SU(2) gauge bosons which mimic the charged, force-causing, Hawking radiation from black hole fermions. The massless neutral SU(2) gauge boson is then a spin-1 graviton, which fits in nicely with a quantum gravity mechanism that makes checkable predictions and is compatible with observed approximations such as the checked parts of general relativity and quantum field theory.

********

Heaviside, Wolfgang Pauli, and Bell on the Lorentz spacetime

There are a couple of nice articles by Professor Harvey R. Brown of Oxford University (he’s the Professor of the Philosophy of Physics there, see http://users.ox.ac.uk/~brownhr/), http://philsci-archive.pitt.edu/archive/00000987/00/Michelson.pdf and http://philsci-archive.pitt.edu/archive/00000218/00/Origins_of_contraction.pdf

The former paper states:

“… in early 1889, when George Francis FitzGerald, Professor of Natural and Experimental Philosophy at Trinity College Dublin, wrote a letter to the remarkable English auto-didact, Oliver Heaviside, concerning a result the latter had just obtained in the field of Maxwellian electrodynamics.

“Heaviside had shown that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the ether. In this letter, FitzGerald asked whether Heaviside’s distortion result—which was soon to be corroborated by J. J. Thompson—might be applied to a theory of intermolecular forces. Some months later, this idea would be exploited in a letter by FitzGerald published in Science, concerning the baffling outcome of the 1887 ether-wind experiment of Michelson and Morley. … It is famous now because the central idea in it corresponds to what came to be known as the FitzGerald-Lorentz contraction hypothesis, or rather to a precursor of it. This hypothesis is a cornerstone of the ‘kinematic’ component of the special theory of relativity, first put into a satisfactory systematic form by Einstein in 1905. But the FitzGerald-Lorentz explanation of the Michelson-Morley null result, known early on through the writings of Lodge, Lorentz and Larmor, as well as FitzGerald’s relatively timid proposals to students and colleagues, was widely accepted as correct before 1905—in fact by the time of FitzGerald’s premature death in 1901. Following Einstein’s brilliant 1905 work on the electrodynamics of moving bodies, and its geometrization by Minkowski which proved to be so important for the development of Einstein’s general theory of relativity, it became standard to view the FitzGerald-Lorentz hypothesis as the right idea based on the wrong reasoning. I strongly doubt that this standard view is correct, and suspect that posterity will look kindly on the merits of the pre-Einsteinian, ‘constructive’ reasoning of FitzGerald, if not Lorentz. After all, even Einstein came to see the limitations of his own approach based on the methodology of ‘principle theories’. I need to emphasise from the outset, however, that I do not subscribe to the existence of the ether, nor recommend the use to which the notion is put in the writings of our two protagonists (which was very little). The merits of their approach have, as J. S. Bell stressed some years ago, a basis whose appreciation requires no commitment to the physicality of the ether.

“…Oliver Heaviside did the hard mathematics and published the solution [Ref: O. Heaviside (1888), ‘The electro-magnetic effects of a moving charge’, Electrician, volume 22, pages 147–148]: the electric field of the moving charge distribution undergoes a distortion, with the longitudinal components of the field being affected by the motion but the transverse ones not. Heaviside [1] predicted specifically an electric field of the following form …

“In his masterful review of relativity theory of 1921, the precocious Wolfgang Pauli was struck by the difference between Einstein’s derivation and interpretation of the Lorentz transformations in his 1905 paper [12] and that of Lorentz in his theory of the electron. Einstein’s discussion, noted Pauli, was in particular “free of any special assumptions about the constitution of matter”6, in strong contrast with Lorentz’s treatment. He went on to ask:

‘Should one, then, completely abandon any attempt to explain the Lorentz contraction atomistically?’

“It may surprise some readers to learn that Pauli’s answer was negative. …

“[John S.] Bell’s model has as its starting point a single atom built of an electron circling a much more massive nucleus. Ignoring the back-effect of the electron on the nucleus, Bell was concerned with the prediction in Maxwell’s electrodynamics as to the effect on the two-dimensional electron orbit when the nucleus is set gently in motion in the plane of the orbit. Using only Maxwell’s equations (taken as valid relative to the rest frame of the nucleus), the Lorentz force law and the relativistic formula linking the electron’s momentum and its velocity—which Bell attributed to Lorentz—he determined that the orbit undergoes the familiar longitudinal “Fitzgerald” contraction, and its period changes by the familiar “Larmor” dilation. Bell claimed that a rigid arrangement of such atoms as a whole would do likewise, given the electromagnetic nature of the interatomic/molecular forces. He went on to demonstrate that there is a system of primed variables such that the description of the uniformly moving atom with respect to them is the same as the description of the stationary atom relative to the original variables—and that the associated transformations of coordinates are precisely the familiar Lorentz transformations. But it is important to note that Bell’s prediction of length contraction and time dilation is based on an analysis of the field surrounding a (gently) accelerating nucleus and its effect on the electron orbit.12 The significance of this point will become clearer in the next section. …

“The difference between Bell’s treatment and Lorentz’s theorem of corresponding states that I wish to highlight is not that Lorentz never discussed accelerating systems. He didn’t, but of more relevance is the point that Lorentz’s treatment, to put it crudely, is (almost) mathematically the modern change-of-variables, based-on-covariance, approach but with the wrong physical interpretation. …

“It cannot be denied that Lorentz’s argumentation, as Pauli noted in comparing it with Einstein’s, is dynamical in nature. But Bell’s procedure for accounting for length contraction is in fact much closer to FitzGerald’s 1889 thinking based on the Heaviside result, summarised in section 2 above. In fact it is essentially a generalization of that thinking to the case of accelerating bodies. It is remarkable that Bell indeed starts his treatment recalling the anisotropic nature of the components of the field surrounding a uniformly moving charge, and pointing out that:

‘In so far as microscopic electrical forces are important in the structure of matter, this systematic distortion of the field of fast particles will alter the internal equilibrium of fast moving material. Such a change of shape, the Fitzgerald contraction, was in fact postulated on empirical grounds by G. F. Fitzgerald in 1889 to explain the results of certain optical experiments.’

“Bell, like most commentators on FitzGerald and Lorentz, prematurely attributes to them length contraction rather than shape deformation (see above). But more importantly, it is not entirely clear that Bell was aware that FitzGerald had more than “empirical grounds” in mind, that he had essentially the dynamical insight Bell so nicely encapsulates.

“Finally, a word about time dilation. It was seen above that Bell attributed its discovery to J. Larmor, who had clearly understood the phenomenon in 1900 in his Aether and Matter [21]. 16 Indeed, it is still widely believed that Lorentz failed to anticipate time dilation before the work of Einstein in 1905, as a consequence of failing to see that the “local” time appearing in his own (second-order) theorem of corresponding states was more than just a mathematical artifice, but rather the time as read by suitably synchronized clocks at rest in the moving system. …

“One of Bell’s professed aims in his 1976 paper on ‘How to teach relativity’ was to fend off “premature philosophizing about space and time” 19. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatiotemporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of spacetime Galilean or Minkowskian, say—it is immersed in? 20 Some critics of Bell’s position may be tempted to appeal to the general theory of relativity as supplying the answer. After all, in this theory the metric field is a dynamical agent, both acting and being acted upon by the presence of matter. But general relativity does not come to the rescue in this way (and even if it did, the answer would leave special relativity looking incomplete). Indeed the Bell-Pauli-Swann lesson—which might be called the dynamical lesson—serves rather to highlight a feature of general relativity that has received far too little attention to date. It is that in the absence of the strong equivalence principle, the metric g_μv in general relativity has no automatic chronometric operational interpretation. 21 For consider Einstein’s field equations … A possible spacetime, or metric field, corresponds to a solution of this equation, but nothing in the form of the equation determines either the metric’s signature or its operational significance. In respect of the last point, the situation is not wholly dissimilar from that in Maxwellian electrodynamics, in the absence of the Lorentz force law. In both cases, the ingredient needed for a direct operational interpretation of the fundamental fields is missing.”

Interesting recent comment by anon. to Not Even Wrong:

Even a theory which makes tested predictions isn’t necessarily truth, because there might be another theory which makes all the same predictions plus more. E.g., Ptolemy’s excessively complex and fiddled epicycle theory of the Earth-centred universe made many tested predictions about planetary positions, but belief in it led to the censorship of an even better theory of reality.

Hence, I’d be suspicious of whether the multiverse is the best theory – even if it did have a long list of tested predictions – because there might be some undiscovered alternative theory which is even better. Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools. Mixing beliefs with science quickly makes the fundamental revision of theories a complete heresy. Scientists shouldn’t start to begin believing that theories are religious creeds.

David Holloway’s book, Stalin and the Bomb, is noteworthy for analysing Stalin’s state of mind over American proposals for pacifist anti-proliferation treaties after World War II. Holloway demonstrates in the book that any humility or good-will shown to Stalin by his opponents would be taken by Stalin as (1) evidence of exploitable weakness and stupidity, or (2) a suspicious trick. Stalin would not accept good will at face value. Either it marked an exploitable weakness of the enemy, or else it indicated an attempt to trick Russia into remaining weaker than America. Under such circumstances (which some would attribute to Stalin’s paranoia, and others to his narcissism), there was absolutely no chance of reaching an agreement for peaceful control of nuclear energy in the postwar era. (However, Stalin had no qualms about making the Soviet-Nazi peace pact with Hitler in 1939, to invade Poland and murder people. Stalin found it easy to trust a fellow dictator because he thought he understood dictatorship, and was astonished to be double-crossed when Hitler invaded Russia two years later.) Similarly, the facts on this blog post (the 45th post on this blog) and in previous posts are assessed the same way by the mainstream: they are ignored, not checked or investigated properly. Everyone thinks that they have nothing to gain from a theory based on solid, empirical facts!

Rutherford and Bohr were extremely naive in 1913 about the electron “not radiating” endlessly. They couldn’t grasp that in the ground state, all electrons are radiating (gauge bosons) at the same rate they are receiving them, hence the equilibrium of emission and absorption of energy when an electron is in the ground state, and the fact that the electron has to be in an excited state before an observable photon emission can occur:

“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”

– Rutherford to Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)

The ground state energy, and thus the frequency of the orbital oscillation of an electron, is determined by the average rate of exchange of electromagnetic gauge bosons between electric charges. So it’s really the dynamics of quantum field theory (e.g. the exchange of gauge boson radiation between all the electric charges in the universe) which explains the reason for the ground state in quantum mechanics. Likewise, as Feynman showed in QED, the quantized exchange of gauge bosons between atomic electrons is a random, chaotic process, and it is this chaotic, quantized nature of the electric field on small scales which makes the electron jump around unpredictably in the atom, instead of obeying the false (smooth, non-quantized) Coulomb force law and describing nice elliptical or circular orbits.

‘Ignorance of the law excuses no man; not that all men know the law; but because ’tis an excuse every man will plead, and no man can tell how to confute him.’ – John Selden (1584-1654), Table Talk.

This is one strong nail in the coffin of the mainstream ideas of

1. inflation (invoked to flatten the universe at 300,000 years after the big bang, when gravitational effects were much smaller in the cosmic background radiation than you would expect from the structures which have grown from those minor density fluctuations over the last 13,700 million years; we don’t need inflation because the weaker gravity towards time zero explains the lack of curvature then and how gravity has grown in strength since then: traditional arguments used to dismiss gravity coupling variations with time are false because they assume that electromagnetic Coulomb repulsion effects on fusion rates are time-independent instead of varying with gravity in the same way, i.e., when gravity was weaker, the big bang fusion and later the fusion in the sun weren’t producing less fusion energy, because the Coulomb repulsion between protons was also weaker, offsetting the effect of reduced gravitational compression and keeping fusion rates stable), and

2. force numerical unification to similar coupling constants, at very high energy such as at very early times after the big bang.

The point (2) above is very important because the mainstream approach to unification is a substitution of numerology for physical mechanism. They have the vague idea from the Goldstone theorem that there could be a broken symmetry to explain why the coupling constants of gravity and electromagnetism are different assuming that all forces are unified at high energy, but it is extremely vague and unpredictive because they have no falsifiable theory, nor a theory based upon known observed facts.

What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts with massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force of gravity faster with distance than the inverse-square law observed (thus giving the exponential term in the equation e^(-R/s)/R^2). So it’s easy to make the suggestion that the original LeSage gravity mechanism with limited-range massive particles, and its “problem” of the shadows getting filled in by the vacuum particles diffusing into the shadows (and cutting off the force) after a distance of a few mean free paths of radiation-radiation interactions, actually is a clue about the real mechanism in nature for the physical cause behind the short range of the strong and weak nuclear interactions, which are confined in distance to the nucleus of the atom! For gravitons, in a previous post I have calculated their mean free path in matter (not the vacuum!) to be 3.10*10^77 metres of water; because of the tiny (event horizon-sized) cross-section for particle interactions with the intense flux of gravitons that constitutes the spacetime fabric, the probability of any given graviton hitting that cross-section is extremely small. Gravity works because of an immense number of very weakly interacting gravitons. Obviously quantum chromodynamics governs strong interactions between quarks, but a residue of that allows pions and other mesons to mediate strong interactions between nucleons.

How to censor out scientific reports without bothering to read them

Here’s the standard four-staged mechanism for avoiding decisions by ignoring reports. I’ve taken it directly from the dialogue of the BBC TV series Yes Minister, Series Two, Episode Four, The Greasy Pole, 16 March 1981, where a scientific report needs to be censored because it reaches scientific but politically-inexpedient conclusions (which would be very unpopular in a democracy where almost everyone has the same prejudices and the majority bias must be taken as correct in the interests of democracy, regardless of whether it actually is correct):

Permanent Secretary Sir Humphrey: ‘There’s a procedure for suppressing … er … deciding not to publish reports.’

Minister Jim Hacker: ‘Really?’

‘You simply discredit them!’

‘Good heavens! How?’

‘Stage One: give your reasons in terms of the public interest. You point out that the report might be misinterpreted. It would be better to wait for a wider and more detailed study made over a longer period of time.’

‘Stage Two: you go on to discredit the evidence that you’re not publishing.’

‘How, if you’re not publishing it?’

‘It’s easier if it’s not published. You say it leaves some important questions unanswered, that much of the evidence is inconclusive, the figures are open to other interpretations, that certain findings are contradictory, and that some of the main conclusions have been questioned.’

‘Suppose they haven’t?’

‘Then question them! Then they have!’

‘But to make accusations of this sort you’d have to go through it with a fine toothed comb!’

‘No, no, no. You’d say all these things without reading it. There are always some questions unanswered!’

‘Such as?’

‘Well, the ones that weren’t asked!’

‘Stage Three: you undermine recommendations as not being based on sufficient information for long-term decisions, valid assessments, and a fundamental rethink of existing policies. Broadly speaking, it endorses current practice.’

‘Stage Four: discredit the man who produced the report. Say that he’s harbouring a grudge, or he’s a publicity seeker, or he’s hoping to be a consultant to a multi-national company. There are endless possibilities.’

Go to 2 minutes and 38 seconds in the YouTube video (above) to see the advice quoted on suppression!

1. A black hole with the electron’s mass would by Hawking’s theory have an effective black body radiating temperature of 1.35*10^53 K. The Hawking radiation is emitted by the black hole event horizon which has radius R = 2GM/c^2.

2. The radiating power per unit area is the Stefan-Boltzmann constant multiplied by the kelvin temperature raised to the fourth power, which gives 1.3*10^205 watts/m^2. For the black hole event horizon spherical surface area, this gives a total radiated power of 3*10^92 watts.

3. For an electron to keep radiating, it must be absorbing a similar power. Hence it looks like an exchange-radiation theory where there is an equilibrium. The electron receives 3*10^92 watts of gauge bosons and radiates 3*10^92 watts of gauge bosons. When you try to move the electron, you introduce an asymmetry into this normal equilibrium, and this asymmetry is felt as inertial resistance, in the way broadly argued (for a zero-point field) by people like Professors Haisch and Rueda. It also causes compression and mass increase effects on moving bodies, because of the snowplow effect of moving into a radiation field and suffering a net force.

When the 3*10^92 watts of exchange radiation hit an electron, the momentum imparted by the absorbed radiation is p = E/c, where E is the energy carried, and when the radiation is re-emitted back in the direction it came from (like a reflection), it gives a recoil momentum to the electron of a similar p = E/c, so the total momentum imparted to the electron by the whole reflection process is p = E/c + E/c = 2E/c.

The force imparted by successive collisions, as in the case of any radiation hitting an object, is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c = 2*10^84 Newtons, where P is power (as distinguished from momentum p).

So the Hawking exchange radiation for black holes would be 2*10^84 Newtons.
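
For anyone who wants to reproduce the figures in steps 1 to 3 above, here is a minimal Python sketch using standard SI constants. It is only a check of the arithmetic: the results come out at the same order of magnitude as the quoted figures (the exact values depend on the rounding of the constants used).

```python
# Sketch: Hawking temperature, radiated power and recoil force for a black hole
# of the electron's mass, following steps 1-3 above. Standard SI constants;
# results are of the same order as the figures quoted in the text.
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
hbar  = 1.055e-34    # reduced Planck constant, J s
k_B   = 1.381e-23    # Boltzmann constant, J/K
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
m_e   = 9.109e-31    # electron mass, kg

R = 2 * G * m_e / c**2                            # event horizon radius, ~1.35e-57 m
T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)   # Hawking temperature, ~1.35e53 K
A = 4 * math.pi * R**2                            # horizon surface area, m^2
P = sigma * T**4 * A                              # total radiated power, ~10^92 W
F = 2 * P / c                                     # absorption + re-emission force, ~10^84 N

print(f"R = {R:.2e} m, T = {T:.2e} K, P = {P:.2e} W, F = {F:.2e} N")
```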

Now the funny thing is that in the big bang, the Hubble recession of galaxies at velocity v = HR implies a force of

F = ma = Hcm = 7*10^43 Newtons.
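
As a rough check of this figure, here is a sketch assuming a Hubble parameter of about 2.3*10^-18 s^-1 (roughly 70 km/s/Mpc) and a mass of the observable universe of order 10^53 kg; both inputs are illustrative assumptions, not numbers taken from the text above.

```python
# Sketch of the outward-force estimate F = m*a with a = H*c. The values of H
# and the mass of the universe below are assumed round numbers for illustration.
H = 2.3e-18    # assumed Hubble parameter, s^-1 (~70 km/s/Mpc)
c = 3.0e8      # speed of light, m/s
m = 1.0e53     # assumed mass of the observable universe, kg

a = H * c      # outward acceleration of the most distant matter, ~7e-10 m/s^2
F = m * a      # outward force, ~7e43 N
print(f"a = {a:.1e} m/s^2, F = {F:.1e} N")
```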

If that outward force causes an equal inward force which is mediated by gravitons (according to Newton’s 3rd law of motion, equal and opposite reaction), then the cross-sectional area of an electron for graviton interactions (predicting the strength of gravity correctly) is the cross-sectional area of the black hole event horizon for the electron, i.e. Pi*(2GM/c^2)^2 m^2. (Evidence here.)

Now the fact that the black hole Hawking exchange radiation force calculated above is 2*10^84 Newtons, compared to 7*10^43 Newtons for quantum gravity, suggests that the Hawking black hole radiation is the exchange radiation of a force roughly (2*10^84)/(7*10^43) = 3*10^40 times stronger than gravity.

Such a force is of course electromagnetism.

So I find it quite convincing that the cores of the leptons and quarks are black holes which are exchanging electromagnetic radiation with other particles throughout the universe.

The asymmetry caused geometrically by the shadowing effect of nearby charges induces net forces which we observe as fundamental forces, while accelerative motion of charges in the radiation field causes the Lorentz-FitzGerald transformation features such as compression in the direction of motion, etc.

Hawking’s heuristic mechanism of his radiation emission has some problems for an electron, however, so the nature of the Hawking radiation isn’t the high-energy gamma rays Hawking suggested. Hawking’s mechanism for radiation from black holes is that pairs of virtual fermions can pop into existence for a brief time (governed by Heisenberg’s energy-time version of the uncertainty principle) anywhere in the vacuum, such as near the event horizon of a black hole. Then one of the pair of charges falls into the black hole, allowing the other one to escape annihilation and become a real particle which hangs around near the event horizon until the process is repeated, so that you get the creation of real (long-lived) fermions of both positive and negative electric charge around the event horizon. The positive and negative real fermions can annihilate, releasing a real gamma ray with an energy exceeding 1.02 MeV.

This is a nice theory, but Hawking totally neglects the fact that in quantum field theory, no pair production of virtual electric charges is possible unless the electric field strength exceeds Schwinger’s threshold for pair production of 1.3*10^18 v/m (equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume, and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040). If you check out renormalization in quantum field theory, this threshold is physically needed to explain the IR cutoff on the running coupling for electric charge. If the Schwinger threshold didn’t exist, the running coupling or effective charge of an electron would continue to fall at low energy instead of becoming fixed at the known electron charge at low energies. This would occur because the vacuum virtual fermion pair production would continue to polarize around electrons even at very low energy (long distances) and would completely neutralize all electric charges, instead of leaving a constant residual charge at low energy that we observe.
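
The 1.3*10^18 v/m threshold quoted here is easy to reproduce from the electron’s mass and charge; here is a minimal sketch using Schwinger’s critical field formula E_c = m^2*c^3/(e*hbar).

```python
# Sketch: Schwinger's critical electric field for pair production,
# E_c = m^2 c^3 / (e*hbar), which comes out at roughly 1.3*10^18 V/m.
m_e  = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s
e    = 1.602e-19   # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")   # ~1.3e18 V/m
```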

Once you include this factor, Hawking’s mechanism for radiation emission starts to have a definite backreaction on the idea, and to modify his mathematical theory. E.g., pair production of virtual fermions can only occur where the electric field exceeds 1.3*10^18 v/m, which is not the whole of the vacuum but just a very small spherical volume around fermions!

This means that black holes can’t radiate any Hawking radiation at all using Hawking’s heuristic mechanism, unless the electric field strength at the black hole event horizon radius 2GM/c^2 is in excess of 1.3*10^18 volts/metre.

That requires the black hole to have a relatively large net electric charge. Personally, from this physics I’d say that black holes the size of those in the middle of the galaxy don’t emit any Hawking radiation at all, because there’s no mechanism for them to have acquired a massive net electric charge when they formed. They formed from stars which formed from clouds of hydrogen produced in the big bang, and hydrogen is electrically neutral. Although stars give off charged radiations, they emit as much negative charge (electrons and negatively charged ions) as they do positive charge (protons and alpha particles). So there is no way they can accumulate a massive electric charge. (If they did start emitting more of one charge than another, as soon as a net electric charge developed, they’d attract back the particles whose emission had caused the net charge, and the net charge would soon be neutralized again.)

So my argument physically from Schwinger’s formula for pair production is that the supermassive black holes in the centres of galaxies have a neutral electric charge, have zero electric field strength at their event horizon radius, and thus have no pair-production there and so emit no Hawking radiation whatsoever.

The important place for Hawking radiations is the fundamental particle, because fermions have an electric charge and at the black hole radius of a fermion the electric field strength way exceeds the Schwinger threshold for pair production.
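
To see just how far above the Schwinger threshold the field is at the black hole event horizon radius of an electron, here is a sketch treating the core as a simple point charge e at radius R = 2Gm/c^2 (a bare Coulomb estimate for illustration, ignoring vacuum polarization).

```python
# Sketch: Coulomb field strength at the black-hole event horizon radius of an
# electron-mass charge, compared with the Schwinger pair-production threshold.
# Simple point-charge estimate, ignoring vacuum polarization.
import math

G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s
m_e  = 9.109e-31   # kg
e    = 1.602e-19   # C
eps0 = 8.854e-12   # F/m

R = 2 * G * m_e / c**2                 # event horizon radius, ~1.35e-57 m
E = e / (4 * math.pi * eps0 * R**2)    # Coulomb field at R, V/m
E_schwinger = 1.3e18                   # threshold quoted above, V/m

print(f"E at horizon = {E:.1e} V/m, ratio to Schwinger threshold = {E/E_schwinger:.1e}")
```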

In fact, the electric charge of the fermionic black hole modifies Hawking’s radiation, because it prejudices which of the virtual fermions near the event horizon will fall in. Because fermions are polarized in an electric field, the virtual positrons which form near the event horizon of a fermionic black hole will on average be closer to the black hole than the virtual electrons, so the virtual positrons will be more likely to fall in. This means that instead of virtual fermions of either electric charge sign falling at random into the black hole fermion, you instead get a bias in favour of virtual positrons and other virtual fermions of positive sign being more likely to fall into the black hole, and an excess of virtual electrons and other virtual negatively charged radiation escaping from the black hole event horizon. This means that a black hole electron will emit a stream of negatively charged radiation, and a black hole positron will emit a stream of positively charged radiation.

Although such radiation would appear to be massive fermions, because there is an exchange of such radiation in both directions simultaneously once an equilibrium of such radiation is set up in the universe (towards and away from the event horizon), the overlap of incoming and outgoing radiation will have some peculiar effects, turning the fermionic sub-relativistic radiation into bosonic relativistic radiation.

OLDER MATERIAL (CONTAINS VITAL CALCULATIONS):

The Standard Model of particle physics is U(1) x SU(2) x SU(3), representing respectively electromagnetism/weak hypercharge, weak isospin charge that acts only on particles of left-handed spin, and strong colour charge. This doesn’t include the Higgs field (there are several possible Higgs field versions which are to be tested in 2009 at the LHC), or gravitational interactions. Since mass-energy is gravitational charge, there is clearly a link between gravitons and Higgs bosons, but this is not predicted by the Standard Model in its current form.

The role played by U(1) in the Standard Model can actually be done by massless gauge bosons of SU(2), because not all SU(2) gauge bosons acquire mass at low energy. We know that for those that do acquire mass, the resulting massive gauge bosons only interact with left-handed spinors. It’s possible that one handedness of the SU(2) gauge bosons remains massless at low energy, and that these are the gauge bosons of electromagnetism and also gravitation (if the latter is mediated by a spin-1 graviton, not a spin-2 graviton). There are calculations and predictions which justify this. The spin-1 gravitons cause universal repulsion of masses over large distances, i.e. they are the dark energy of the cosmological acceleration. Nearby masses which are relatively small compared to the mass of the surrounding universe are pushed together by the stronger exchange of gravitons (converging inwards from great distances) with the larger mass of the universe than with the relatively small mass of the Earth, a star or a galaxy. It is the only fact-based predictive mechanism of gravity which has made falsifiable predictions and survived the tests, e.g. the published 1996 prediction that the acceleration of the universe is a = Hc, confirmed years later by CCD observations of supernovae by Perlmutter et al.

[Figure 1]

“In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …” – Conclusion of the paper by M. Fierz and W. Pauli, “On relativistic wave equations for particles of arbitrary spin in an electromagnetic field”, Proc. Roy. Soc. London., volume A173, pp. 211-232 (1939).

  • In the universe, masses that ‘attract’ due to quantum gravity are in fact surrounded by an isotropic distribution of distant receding masses in all directions (clusters of galaxies), so they must exchange gravitons with those distant masses as well as with nearby masses (a fact ignored by the flawed mainstream path-integral extensions of the Fierz-Pauli argument that gravitons must have spin-2 in order for ‘like’ gravitational charges to attract rather than repel, as like electric charges do; see for instance pages 33-34 of Zee’s 2003 quantum field theory textbook).
  • Because the isotropically distributed distant masses are receding with a cosmological acceleration, they have a radial outward force, which by Newton’s 2nd law is F = ma, and which by Newton’s 3rd law implies an equal inward-directed reaction force, F = -ma.
  • The inward-directed force, from the possibilities known in the Standard Model of particle physics and quantum gravity considerations, is carried by gravitons:

[Figure: gravity1 diagram]
R in the diagram above is the distance to distant receding galaxy clusters of mass m. The distribution of matter around us in the universe can simply be treated as a series of shells of receding mass at progressively larger distances R, and the sum of contributions from all the shells gives the total inward graviton-delivered force on masses.

This works for spin-1 gravitons, because

(a) the gravitons coming to you from distant masses (ignored completely by speculative spin-2 graviton hype) are radially converging upon you (not diverging), and

(b) the distant masses are immense in size (clusters of galaxies) compared to local masses like the planet earth, the sun or the galaxy,

so the flux from distant masses is way, way stronger than from nearby masses; consequently the path integral of all spin-1 gravitons from distant masses reduces to the simple geometry below and will cause ‘attraction’ or push you down to the earth by shadowing (the repulsion between two nearby masses from spin-1 graviton exchange is trivial compared to the force pushing them together).

In the case of electromagnetism, like charges repel due to spin-1 virtual photon exchange, because the distant matter in the universe is electrically neutral (equal amounts of charge of positive and negative sign at great distances cancel). This is not the case for quantum gravity, because the distant masses have the same gravitational charge sign, say positive, as nearby masses (there is only one observed sign for gravitational charge). Hence, nearby like gravitational charges are pushed together by gravitons from distant masses, while nearby like electric charges are pushed apart because they exchange spin-1 photons but are not pushed together by virtual photon exchanges with distant matter, due to that matter being electrically neutral.

So the Standard Model U(1) x SU(2) x SU(3) could be replaced by SU(2) x SU(3) for all known interactions plus a replacement field for a Higgs-type theory of mass. This would not affect any existing confirmed predictions of the Standard Model since it would preserve the checked predictions of electrodynamics, weak interactions and strong interactions. But it would add an enormous number of further falsifiable predictions, while simplifying the theory at the same time. Since a Higgs-type field is composed of only one kind of charge (mass), it may well be described by a simple Abelian U(1) theory, in which case the total theory is again the mathematical group combination U(1) x SU(2) x SU(3), but this now has an entirely different physical dynamics from the same mathematical structure in the Standard Model, because in this U(1) x SU(2) x SU(3) (unlike the mathematically similar Standard Model):

U(1) now produces weak hypercharge and gravitation.

SU(2) now represents weak, electromagnetic and gravitational interactions (massless spin-1 gauge bosons at low energy producing electromagnetism and gravity; massive versions of the same gauge bosons being the usual low-energy weak interaction gauge bosons), and

SU(3) still represents the usual strong interactions.

To put this another way: modern physics has been developed by making mathematical guesses and checking them, but this approach seems to have reached the end of the road because sophisticated guesses become ever harder to check. I think that if progress is to continue, a change in tactics is required. If falsifiable predictions are required, physics needs to start off by being pretty well connected to reality. If the theory involves plenty of direct empirical input, it’s likely to produce a lot of checkable predictions as output. If it has little direct empirical input, then it’s less likely to produce checkable predictions. I think this is the major problem with string theory. It’s vague because the ratio of factual input to speculative beliefs (extra dimensions, supersymmetric unification, graviton spin) is low.

I’m planning at some stage to edit a physics book (although I’ve lost most of my interest in the subject because people don’t listen to the facts, and just want to argue). Here’s a brief synopsis: Einstein insisted on classical field line modelling in general relativity, which describes accelerative fields as a curvature of lines in a continuous spacetime. However, in 1998 it was discovered that the universe isn’t curved on the largest distance scales – gravity isn’t slowing down the Hubble recession of the universe at cosmological distances. So there seems to be the need to add to general relativity an outward acceleration of matter (a repulsion force between masses) to cancel out gravitational attraction over large distances. General relativity doesn’t predict this. Quantum field theory has replaced classical theories in electromagnetism and elsewhere. Here, the exchange of particles of radiation between charges causes forces in a series of discrete interactions. On large scales these quantum gravity interactions average out to look approximately like the continuous force which is described as curvature in general relativity, but the randomness of the interactions causes chaotic effects to particle accelerations on small distance scales.

Mainstream quantum gravity work is called string theory and assumes that the particles which masses exchange to produce gravity (gravitons) have spin-2 which is a complex spin assumed to be needed so that two masses will attract when exchanging them. This spin-2 assumption requires 10 dimensions in string theory, and because 6 dimensions are too small to be seen (yet crucially affect the predictions of the theory), string theory can’t be checked. It has maybe a hundred unknown parameters concerning 6 invisible compactified dimensions, which leads to 10^500 different possibilities which can never be investigated even by a fast computer running for the age of the universe. The spin-2 graviton argument on which string theory is built simply ignores almost all of the mass involved, which is in the immense masses of galaxies in the surrounding universe!

Detailed calculations, taking account of the convergence of gravitons from that immense amount of surrounding mass, prove that spin-1 gravitons push masses together, and make checkable predictions that are confirmed by observations. E.g., the acceleration discovered by observations in 1998 had been predicted and published in 1996. As expected, this is unwelcome to string theorists and others, who just don’t want to take it seriously because it uses different mathematics from string theory, it addresses the problem in a completely different way, etc.

‘I consider it quite possible that physics cannot be based on the field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air…’

– Albert Einstein in a letter to friend Michel Besso, 1954.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to [precisely] figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC.

Most of the material below is now obsolete and replaced by the newer post: https://nige.wordpress.com/2008/01/30/book/ However, there is not an exact overlap. In any case, a lot of editing needs to be done to the material on this blog and the older blog to assemble it and improve it into a useful book.

SU(2)xSU(3) particle physics based on solid facts, giving quantum gravity predictions

Hubble’s law: v = dR/dt = HR. => Acceleration: a = dv/dt = d(HR)/dt = (H*dR/dt) + (R*dH/dt) = Hv + 0 = RH^2. 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law: equal inward reaction force (via gravitons). Since non-receding nearby masses don’t cause reaction, they cause an asymmetry, predicting gravity and in 1996 this theory predicted the ‘cosmological acceleration’ discovered in 1998.

[Figure: gravity mechanism]

Above: how the flux of Yang-Mills gravitational exchange radiation (gravitons) being exchanged between all the masses in the universe physically creates an observable gravitational acceleration field directed towards a cosmologically nearby or non-receding mass, labelled ‘shield’. (The Hubble expansion rate and the distribution of masses around us are virtually isotropic, i.e., radially symmetric.) The mass labelled ‘shield’ creates an asymmetry for the observer in the middle of the sphere, since it shields the graviton flux because it doesn’t have an outward force relative to the observer (in the middle of the circle shown), and thus doesn’t produce a forceful graviton flux in the direction of the observer according to Newton’s 3rd law (action and reaction, an empirical fact, not a speculative assumption).

Hence, any mass that is not at a vast cosmological distance (with significant redshift) physically acts as a shield for gravitons, and you get pressed towards that shield from the unshielded flux of gravitons on the other side. Gravitons act by pushing; they have spin-1. In the diagram, r is the distance to the mass that is shielding the graviton flux from receding masses located at the far greater distance R. As you can see from the simple but subtle geometry involved, the effective size of the area of sky which is causing gravity due to the asymmetry of mass at radius r is equal to the cross-sectional area of the mass for quantum gravity interactions (detailed calculations, included later in this post, show that this cross-section turns out to be the area of the event horizon of a black hole for the mass of the fundamental particle which is acting as the shield), multiplied by the factor (R/r)^2, which is how the inverse square law, i.e., the 1/r^2 dependence of gravitational force, occurs.
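
A quick numerical illustration of that geometry: the shielded area of sky scales as (R/r)^2 while the graviton flux arriving from distance R is diluted as 1/R^2, so R cancels and the net force depends only on 1/r^2. The cross-section and flux constant in the sketch below are arbitrary placeholders, not the values from the detailed calculation referred to above.

```python
# Sketch: the (R/r)^2 shadow geometry gives an inverse-square law in r,
# independent of the distance R to the receding masses. The cross-section
# sigma and the flux constant K are arbitrary placeholders for illustration.
import math

sigma = 1.0e-57   # placeholder shielding cross-section, m^2
K     = 1.0e40    # placeholder total graviton flux constant

def shadow_force(r, R):
    effective_area = sigma * (R / r)**2         # shielded area of sky, as described above
    flux_per_area  = K / (4 * math.pi * R**2)   # flux dilution with distance R
    return effective_area * flux_per_area       # note that R cancels out

for R in (1.0e25, 1.0e26):
    print([f"{shadow_force(r, R):.2e}" for r in (1.0, 2.0, 3.0)])
# Doubling r quarters the force; changing R leaves it unchanged: a 1/r^2 law.
```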

Because this mechanism is built on solid facts of expansion from redshift data that can’t be explained any other way than recession, and on experiment and observation based laws of nature such as Newton’s, it is not just a geometric explanation of gravity but it uniquely makes detailed predictions including the strength of gravity, i.e., the value of G, and the cosmological expansion rate; it is a simple theory as it uses spin-1 gravitons which exert impulses that add up to an effective pressure or force when exchanged between masses. It is quite a different theory to the mainstream model which ignores graviton interactions with other masses in the surrounding universe.

The mainstream model in fact can’t predict anything at all. It begins by ignoring all the masses in the universe except for two masses, such as two particles. It then represents gravity interactions between those two masses by a Lagrangian field equation which it evaluates by a Feynman path integral. It finds that if you ignore all the other masses in the universe, and just consider two masses, then spin-1 gauge boson exchange will cause repulsion, not attraction as we know occurs for gravity. It then ‘corrects’ the Lagrangian by changing the spin of the gauge boson to spin-2, which has 5 polarizations. This new ‘corrected’ Lagrangian with 5 tensor terms for the 5 polarizations of the spin-2 graviton being assumed, gives an always-attractive force between two masses when put into the path integral and evaluated. However, it doesn’t say how strong gravity is, or make any predictions that can be checked. Thus, the mainstream first makes one error (ignoring all the graviton interactions between masses all over the universe) whose fatally flawed prediction (repulsion instead of attraction between two masses) it ‘corrects’ using another error, a spin-2 graviton.

So one reason why the actual spin-1 gravitons don’t cause masses to repel is that the path integral isn’t just a sum of interactions between two gravitational charges (composed of mass-energy) when dealing with gravity; it’s instead a sum of interactions between all mass-energy in the universe. The reason why mainstream people don’t comprehend this is that the mathematics being used in the Lagrangian and path integral are already fairly complex, and they can’t readily include the true dynamics so they ignore them and believe in a fiction instead. (There is a good analogy with the false mathematical epicycles of the Earth-centred universe. Whenever the theory was in difficulty, they simply added another epicycle to make the theory more complex, ‘correcting’ the error. Errors were actually celebrated and simply re-labelled as ‘discoveries’ that nature must contain more epicycles.)

Some papers here, home page here. CERN Doc Server deposited draft preprint paper EXT-2004-007, 15/01/2004 (this is now obsolete and can’t be updated to a revised version, such as something similar to the discussion and mathematical proof below, because CERN now only accepts feed through arXiv.org, which is blocked (even to some string theorists who work on non-mainstream ideas) by mainstream (M-theory) string ‘theorists’ who have no testable predictions and no checkable theory, and so are not theorists in a scientific sense): ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. Witten’s claimed ‘prediction of gravity’ is just the spin-2 graviton, and it isn’t a falsifiable prediction, unlike all the predictions made and subsequently confirmed by the spin-1 gravity idea. To grasp Dr Woit’s assessment of the “not even wrong” spin-2 graviton idea, try the following passage:

“For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. […] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.

The general pressure of exchanged spin-1 gravitons causes masses to tend to just get pushed outward if they are substantially redshifted from one another (distant), instead of suffering universal gravitational attraction, which is of course caused by shielding of all-round graviton pressure. In such an expanding fireball universe where gravitation is a reaction to surrounding expansion due to exchange of gravitons, you will get both expansion and gravitation as results of the same fundamental process: exchange of gravitons.

The pressure of gravitons will cause attraction (due to mutual shadowing) between masses which are relatively nearby, but over cosmological distances the whole collection of masses will be expanding (masses receding from one another) due to the momentum imparted in the process of exchanging gravitons. This prediction was put forward via the October 1996 Electronics World, two years before evidence from Perlmutter’s supernovae observations which confirmed that the universe is not decelerating contrary to the standard predictions of cosmology at that time (i.e., that the expansion of the universe looks as if there is a small positive cosmological constant – of predictable magnitude – offsetting gravitational deceleration over cosmological distances).

Gravity is a force mediated by gravitons, which are exchanged between masses which are receding at relativistic velocities, i.e. well apart in this expanding universe; the mechanism of gravity depends on surrounding recession of masses around the point in question. This means that if general relativity is just a classical approximation to quantum gravity (due to the graviton pressure effect just explained, which implies that spacetime is not curved universally by gravitation over cosmological distances), we have to treat spacetime as finite and not bounded, so that what you see is what you get and the universe may be approximately analogous to a simple expanding fireball. We’re near the centre as proved by the 0.0027 Kelvin cosine anisotropy in the 2.7 Kelvin blackbody microwave background radiation (the redshifted thermal signature emitted when the universe was at 3000 Kelvin, when the radiation-scattering ionised hydrogen combined into radiation-transparent neutral hydrogen): this anisotropy is due to our motion relative to the cosmic background radiation, was detected in the 1970s, and is massive compared to the tiny ripples indicating the seeding of the first galaxies by density fluctuations in the early universe detected by COBE and WMAP satellites since 1992.

Hence, when you recognise that classical theories like general relativity are just an approximation for making gravitational calculations consistent with the conservation of mass-energy, quantum gravity is a non-curvature theory of the universe, which isn’t geometrically bounded. It’s more like a fireball, and our distance from the centre is of the order of magnitude of the age of the universe multiplied by our velocity relative to the fireball radiation emitted when hydrogen ions combined. This rough calculation (it assumes constant velocity, which is only an approximation, and the speed is now increasing as the Milky Way is being attracted towards the massive galaxy Andromeda) tells us that we’re pretty near the centre of the fireball universe, located 0.3% of the radius from the middle!

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, p. 64-74.

Masses near the real ‘outer edge’ (the radial distance in spacetime which corresponds to the time of big bang, i.e. 13,700 million light-years distance) of such a fireball (remember that since gravity doesn’t act over cosmological distances due to graviton redshift when exchanged between receding masses, there is no spacetime curvature causing gravitation over such distances) get an asymmetry in the exchange of gravitons: exchanging them on one side only (the side facing the core of the fireball, where other masses are located).

I’ve been preparing a Google or YouTube video about physical mechanisms, physical forces in the fireballs in the 1962 nuclear tests at high altitude (particularly the amazing films of the fireball dynamics in the Bluegill test), and exchange radiation, which will make the material and figures in the post here easier to grasp. It was a study of fireball phenomenology, the breakdown of general relativity due to a weakening of the gravity coupling constant in an expanding universe (gravitons exchanged between relativistically receding masses – quantum gravity charges – in an expanding universe are redshifted, reducing the effective strength of gravitational interactions in proportion to the amount of redshift of the gravitons and the visible light observed, since energy is related to frequency by E = hf), and the analogy to the big bang which suggested the mechanism of gravity in 1996. In an air blast wave, Newton’s 3rd law – the equality of action and reaction forces – always holds. Initially, there is extremely high pressure throughout the fireball, communicating reaction forces in spherical symmetry, i.e., the Northward portion of the shock wave exerts a net outward force equal to its pressure times its surface area, and the reaction force is found in the Southward portion of the shock wave.

But after a while, the amount of air in the shock front is so compressed that the density falls in the central region, which cools and loses pressure. Hence, the central region can no longer mediate the reaction force of the shock wave in different directions. What happens at this stage is that a negative pressure wave, directed inward towards the centre of the explosion, then develops which has lower pressures but longer duration, allowing a reaction force to be exerted. A shock wave cannot exert outward pressure (and thus force, being equal to pressure times area) without satisfying Newton’s 3rd law of action and reaction. The reversed phase of the blast wave (with pressure acting towards the point of the explosion, i.e., suction or ‘negative pressure’ – below the ambient 14.7 psi/101 kPa normal air pressure phase) is vital for maintaining Newton’s 3rd law of motion in a shock wave.

The negative pressure phase consists of an inner shell of blast with a force directed inward in response to the outward force at the shock front. This feature is vital in implosion systems used to actually cause a nuclear explosion in the first place: the implosion relies on the fact that half the force of an explosion is initially directed inward within the mass of exploding material (the inward-travelling shock wave reflects back when it reaches the centre, and the rebounded shock wave travels outward, but in the meantime it squashes very effectively anything placed at the core, like a lump of subcritical fissile material). Implosion is also a feature of the big bang:

The product rule of differentiation is: d(uv)/dx = (v*du/dx) + (u*dv/dx)

Hence the observationally based 1929 Hubble law, v = HR, differentiates as follows:

dv/dt = d(RH)/dt = (H*dR/dt) + (R*dH/dt)

The second term here, R*dH/dt, is zero because in the Hubble law v = HR the term H is a constant from our standpoint in spacetime, so H doesn’t vary as a function of R and thus it also doesn’t vary as a function of apparent time past t = R/c. In the spacetime trajectory we see as we look out to greater distances, R/t is always in the fixed ratio c, because when we see things at distance R the light from that distance has to travel that distance at velocity c to get to us, so when we look out to distance R we’re automatically looking back in time to time t = R/c seconds ago.

Hence R*dH/dt = R*dH/d[R/c] = Rc*dH/dR = Rc*0 = 0.

This is because dH/dR = 0. I.e., there is no variation of the Hubble constant as a function of observable spacetime distance R.

Thus, the acceleration of any distant, receding lump of matter as we perceive it in spacetime, is

a = dv/dt = d(RH)/dt = H*dR/dt = H*v = H*[RH] = R*H^2.
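
For anyone who wants to check this differentiation symbolically, here is a small sketch using sympy; it just applies the product rule with H held constant and then substitutes the Hubble law dR/dt = HR back in.

```python
# Sketch: symbolic check that differentiating v = H*R(t), with H treated as a
# constant and dR/dt = H*R (Hubble's law), gives the outward acceleration R*H^2.
import sympy as sp

t, H = sp.symbols('t H', positive=True)
R = sp.Function('R')(t)

v = H * R
a = sp.diff(v, t)                        # = H*dR/dt (the R*dH/dt term is zero)
a = a.subs(sp.Derivative(R, t), H * R)   # apply Hubble's law dR/dt = H*R
print(sp.simplify(a))                    # H**2*R(t)
```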

Now the outward acceleration, a, is very small. It reaches only about 6*10^{-10} ms^{-2} for the most distant receding objects. But because the mass of the receding universe is really big, that comes to an outward force on the order of say 7*10^43 Newtons. Newton’s 3rd law tells us there should be an equal and opposite reaction force. According to what is physically known about the possible particles and fields that exist, this inward reaction force might be carried by spin-1 gravitons (non-string theory gravitons; string theory hype supposes spin-2 gravitons), which cause all gravitational field and observed general relativity (contraction, etc.) effects physically by exerting pressure as a quantum field of exchange radiation.

When we calculate the universal gravitational parameter, G, by this theory we get a figure that’s good (within available experimental data). There are complexities because what counts in spacetime for graviton exchange is the observable density of the universe as a function of distance/time past, which increases towards infinity as we look back to immense distances (approaching time zero); however this massive increase in effective outward force is cancelled out by the fact that the reaction force is mediated by gravitons which are extremely redshifted from such locations where the recession velocities are very close to the velocity of light (i.e., relativistic).

One way to imagine the mechanism for why an outward-accelerating particle should fire back gravitons as a reaction force to satisfy Newton’s 3rd law of motion is very simple: walk down a corridor and observe what happens to air that vacates the region in front of you and fills in the region behind you as you walk. Or better, push a ball along while holding it underwater. There is a resistance due to motion against the water (which is a crude model for moving an electron or other object having rest mass in a graviton field in the vacuum of spacetime), which compresses the ball slightly in the direction of motion. There is then a flow of the field quanta (water in the analogy) around the particle from front to back. This flow permits things to move, and because the field flow – once set up after effort (against resistance) – has momentum, it adds inertia to the moving object. (Ships and submarines are hard to stop suddenly because they have extra momentum – not just the usual momentum, but momentum from the water field’s motion around them. This hints that the intrinsic momentum of any moving mass is due to a similar effect involving the vacuum graviton field flowing around individual fundamental particles. As Einstein pointed out, inertial and gravitational masses are indistinguishable.)

Hence, as a 70 kg (70 litre) person walks down a corridor at 1 m/s, some 70 litres of air moving at a net velocity of 1 m/s in the opposite direction flows into the void the person is vacating. (In internet discussions, some ingenious bigots claimed that when you walk, you attract air from behind which follows you to fill the volume of space you are vacating. If that were true, the air pressure along the corridor would become ever more unequal because of (1) air becoming compressed in front of you (instead of flowing around you to fill in the void behind), and (2) air pressure being reduced still further behind you as air expands to fill in the void. This doesn’t happen.) In any case, the example with water makes it clear what happens: water flows from the front to the back of a moving object.

If the object accelerates, the surrounding field responds similarly, provided the motion of the particles in it is fast enough to respond. (Air molecules have an average velocity of only 500 m/s, but spin-1 gravitons travel at 300 Mm/s.) Hence, if you have a long line of people walking in one direction only along a corridor, you have a current flowing in that direction, which is compensated for by a net flow of the surrounding field (air) in the opposite direction. Although the individual air molecules are going at about 500 m/s, the net flow of the bulk ‘field’ composed of air is equal to the speed of the current of people moving, while the net volume of the field which is effectively flowing is equal to the volume of the people who are moving.

Similarly, when matter moves away from us in the big bang, the graviton field around that matter responds by moving in the other direction at the same time, causing the graviton reaction force as described quantitatively by Newton’s 3rd law.

I’ll insert the video into a blog post on this site in the near future, along with a free PDF download link for the accompanying book. In the meanwhile, please make do with the posts on this page, especially this, this, this, this, this, this, this, this, this, this, this, and this.

To understand why mainstream hype of unchecked stringy theory with its non-falsifiable speculative extra dimensions, multiverse/landscape, and so on are destructive, see this link. The mechanism proved in detail below does work, although it is still very much in a nascent stage. The problems are (1) that it leads to interesting applications in so many directions in physics that it absorbs a great deal of time, (2) it is extremely unpopular because “mechanisms” are sneered at out of prejudice (in favour of mechanism-less mathematical “models”) , and are regarded as being “crazy” by essentially all mainstream physicists, i.e. most professional physicists. People like LeSage and Maxwell (who developed a mechanical model of space which was flawed), with false, half-baked ideas have permanently damaged the credibility of mechanisms in fundamental physics, not to mention the metaphysical (non-falsifiable) hidden variable “interpretation” of quantum mechanics.

The absurdity of this situation is demonstrated by the fact that quantum field theory postulates gauge boson radiations being exchanged in the vacuum between charges in order to mediate force fields (i.e., causing forces), yet the attitude is to believe in this without searching for the underlying physical mechanism! It’s exactly like religion, where you are allowed to believe things without investigating them scientifically. Moreover, the majority of people in the world actually want to hero-worship religious beliefs in science, in place of supporting accurate, predictive physical mechanisms based on solid facts: people are today using modern physics as an alternative religion. They have (1) abandoned the search for reality, (2) lied that it is not possible to understand physics by mechanisms (it is), and (3) embarked on a campaign to censor out the facts and replace them with false speculations. Differential equations describing smooth curvatures and continuously variable fields in general relativity and mainstream quantum field theory are wrong except for very large numbers of interactions, where statistically they become good approximations to the chaotic particle interactions which are producing accelerations (spacetime curvatures, i.e. forces). See https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/ and in particular see Fig. 1 of the post: https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/.

Think about air pressure as an analogy. Air pressure can be represented mathematically as a continuous force acting per unit area: P = F/A. However, air pressure is not a continuous force; it is due to impulses delivered by discrete random, chaotic strikes by air molecules (travelling at average speeds of 500 m/s in sea level air) against surfaces. If therefore you take a very small area of surface, you will not find a continuous uniform pressure P acting on it. Instead, you will find a series of chaotic impulses due to individual air molecules striking the surface! This is an example of how a useful mathematical fiction on large scales, like air pressure, loses its accuracy if applied on small scales. It is well demonstrated by Brownian motion. The motion of an electron in an atom is subjected to the same thing simply because the small size doesn’t allow large numbers of interactions to be averaged out. Hence, on small scales, the smooth solutions predicted by mathematical models are flawed. Calculus assumes that spacetime is endlessly divisible, which is not true when calculus is used to represent a curvature (acceleration) due to a quantum field! Instead of perfectly smooth curvature as modelled by calculus, the path of a particle in a quantum field is affected by a series of discrete impulses from individual quantum interactions. The summation of all these interactions gives you something that is approximated in calculus by the “path integral” of quantum field theory. The whole reason why you can’t predict deterministic paths of electrons in atoms, etc., using differential equations is that their applicability breaks down for individual quantum interaction phenomena. You should be summing impulses from individual quantum interactions to get a realistic “path integral” to predict quantum field phenomena. The total and utter breakdown of mechanistic research in modern physics has instead led to a lot of nonsense, based on sloppy thinking, lack of calculations, and the failure to make checkable, falsifiable predictions and obtain experimental confirmation of them. The abusiveness and hatred directed towards people like myself by those “brane”-washed with failed ideas from Dr Witten et al. is not unique to modern physics. It’s a mixture of snobbish hatred of innovation based on simple ideas, and a lack of real interest in physics by people who claim to be physicists but are in fact only crackpot mathematicians.

Predicted fundamental force strengths, all observable particle masses, and cosmology from a simple causal mechanism of vector boson exchange radiation, based on the existing mainstream quantum field theory

Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests

(For an introduction to quantum field theory concepts, see The physics of quantum field theory.)

‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p64.

Here’s an outline of the basic ‘idea’ (actually it is well-established 100% factual evidence just assembled in a 100% new way, and it is not merely a personal idea, not a speculation, not guesswork, not a pet ‘theory’, but it is scientific fact pure and simple) behind the new mechanistic physics involved (described in detail on this page and more recent pages of this blog):

Galaxy recession velocity in spacetime (Hubble’s empirical law): v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma by Newton’s 2nd empirical law. Newton’s empirical 3rd law predicts equal inward force (which according to the possibilities in quantum field theory, will be carried by gravitons, exchange radiation between gravitational charges in quantum gravity): but non-receding nearby masses don’t give rise to any reaction force according to this mechanism, so they act as shields and thus cause an asymmetry, producing gravity. This predicts the strength of gravity and electromagnetism, particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.

The underlying symmetry group physics which follows from this mechanism is to replace SU(2)xU(1) + Higgs sector in the Standard Model with simply a version of SU(2) where the 2^2 - 1 = 3 gauge bosons can exist in both massless and massive forms. Some field in the vacuum (different to the Higgs field in detail, but similar in that it provides rest mass to particles) gives masses to some of each of the 3 massless gauge bosons of SU(2), and the massive versions are the massive neutral Z, charged W-, and charged W+ weak gauge bosons just as occur in the Standard Model. However, the massless versions of Z, W- and W+ are the gauge bosons of gravity, negative electromagnetic fields, and positive electromagnetic fields, respectively.

The basic method for electromagnetic repulsion is the exchange of similar massless W- gauge bosons between negative charges, or massless W+ gauge bosons between positive charges. The charges recoil apart because they get hit repeatedly by radiation emitted by the other charge. But for a pair of opposite charges, like a negative electron and positive nucleus, you get attraction because each charge can only interact with similar charges, so the effect of opposite charges on one another is simply to shadow each other from radiation coming in from other charges in the surrounding universe. A simple vector force diagram (published in Electronics World in April 2003) shows that in this mechanism the magnitudes of the attraction and repulsion forces of electromagnetism are identical. The fact that electromagnetism is on the order of 10^40 times as strong as gravity for fundamental charges (the precise figure depends on which fundamental charges are compared) is due to the fact that in this mechanism radiation is only exchanged between similar charges, so you get a statistical-type “random walk” vector summation across the similar charges distributed in the universe. This was also illustrated in the April 2003 Electronics World article. Because gravity is carried by neutral (uncharged) gauge bosons, its net force doesn’t add up this way, so it turns out that gravity is weaker than electromagnetism by a factor equal to the square root of the number of similar charges of either sign in the universe. Since 90% of the universe is hydrogen, composed of two negative charges (electron and downquark) and two positive charges (two upquarks), it is easy to make approximate calculations of such numbers, using the density and size of the universe.
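
Here is a rough numerical sketch of that square-root argument, using an assumed mean density of the order of the critical density and an assumed radius c/H for the observable universe; both inputs are illustrative round numbers, not values taken from the text.

```python
# Sketch: order-of-magnitude estimate of sqrt(N), where N is the number of
# hydrogen atoms in the observable universe, for comparison with the ~10^40
# ratio of electromagnetism to gravity. Density and radius are assumed values.
import math

rho  = 9.5e-27   # assumed mean density, kg/m^3 (of order the critical density)
Rmax = 1.3e26    # assumed radius of the observable universe, c/H, in metres
m_H  = 1.67e-27  # mass of a hydrogen atom, kg

V = (4.0 / 3.0) * math.pi * Rmax**3
N = rho * V / m_H                # number of hydrogen atoms, ~10^80
print(f"N ~ {N:.1e}, sqrt(N) ~ {math.sqrt(N):.1e}")   # sqrt(N) ~ 10^40
```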

Obviously, massless charged radiation is a non-starter in classical physics because it won’t propagate due to its magnetic self-inductance; however, for Yang-Mills theory (exchange radiation causing forces) this objection doesn’t hold because the transit of similar radiation in two opposite directions along a path at the same time cancels out the magnetic field vectors, allowing propagation. In a different context, we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors each carrying charged currents flowing in opposite directions to enable a signal (or pulse, or logic step, or net energy flow) to propagate: the magnetic fields of each charged current (one on each conductor in the pair of conductors) cancel one another out, preventing the infinite self-inductance problem and thus allowing propagation of charged energy currents. Thus the analogy of electricity propagating in a pair of conductors when a switch is closed shows how charged exchange radiation works.

Abstract

The objective is to unify the Standard Model and General Relativity with a causal mechanism for gauge boson mediated forces which makes checkable predictions. In very brief outline, Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^2 and outward force F = ma ~ 10^43 Newtons. Newton’s 3rd law implies an inward force, which from the possibilities available seems to be carried by gauge boson radiation (gravitons), which predicts gravitational curvature, other fundamental forces, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, because they don’t present an outward force, so they act as a shield and cause an asymmetry that we experience as the attraction effect of gravity: see Fig. 1.

The symmetrical inward pressure of graviton radiation (see Fig. 2) exerts a pressure on masses (acting on masses, i.e., what is referred to as ‘Higgs field quanta’, which act on the interaction cross-sectional areas of fundamental particles, and not on the macroscopic surface area of a planet) which causes the gravitational contraction predicted by general relativity, i.e., Earth’s radius is contracted by (1/3)MG/c^2 = 1.5 mm by this graviton exchange radiation hitting masses, imparting momentum p = E/c, and then reflecting back (in the process causing another impulse on the mass, by the recoil effect, equal to p = E/c, so that the total imparted momentum is obviously p = 2E/c). (This ‘reflection’ is not the literal mechanism, because although a ball thrown against a wall can bounce back, a photon ‘reflected’ from a mirror actually undergoes a complex series of interactions, the sum of which (or path integral) is merely equivalent to a simple reflection: the photon is absorbed by the mirror and a new photon then gets emitted. Similarly with gauge boson radiations, the interactions involved are far more complex in detail than a simple reflection, although that is a useful approximation to the total process, under some circumstances.) Applying this contraction to motions, we find that the same behaviour of the gravitational field causes inertial force which resists acceleration, because of Einstein’s equivalence principle whereby inertial mass = gravitational mass!
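
The 1.5 mm contraction figure quoted above is easy to check directly; a minimal sketch:

```python
# Sketch: the gravitational contraction of the Earth's radius, (1/3)MG/c^2,
# which comes out at about 1.5 mm as stated above.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg

contraction = M_earth * G / (3 * c**2)          # metres
print(f"Earth radial contraction ~ {contraction * 1000:.2f} mm")   # ~1.5 mm
```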

To understand the picture of writing the Hubble expansion rate as an expansion in a time dimension, think of time (age of universe) as 1/Hubble constant (until 1998 it was assumed to be 0.67/Hubble constant with the 2/3 factor due to gravitational deceleration, but that gravitational deceleration was debunked by supernovae observations made by Perlmutter and published in Nature that year; so either gravitons are redshifted over large cosmological distances and lose energy by E = hf, being thus unable to slow down the expansion of the universe, or else there is some “dark energy” which produces an outward acceleration that offsets the inward acceleration of gravity).

If the Hubble constant was different in different directions, the age of the universe, 1/H, would be different in different directions. Hence the isotropy of the big bang we observe around us: there are three effective time dimensions, each corresponding to an expanding spatial dimension. (The redshift of radiation exchanged between receding masses in an expanding universe prevents thermal equilibrium being established, and therefore provides an endless heatsink.) Because of the isotropy, we see the 3 effective time dimensions as always being equal, so they are identical and can be represented by SO(3,1), hence we observe effectively 4 different dimensions including one of time and 3 of space.

Lunsford (discussed and cited below) has proved that the 3 spatial and 3 time dimension orthogonal group, SO(3,3), unifies gravity and electrodynamics correctly without the reducible problems of the old Kaluza-Klein unification. I’ve shown that this is reasonable because 3 spatial dimensions are contracted by gravity in general relativity (for example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres), while 3 time dimensions are continuously expanding: this means that the Hubble expansion should be written in terms of velocity as a function of time, not distance:

Remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^2 R. So we have a real outward acceleration in Hubble’s law!

We then use Newton’s 2nd empirical law F = ma to estimate the outward force of the big bang, and then his 3rd empirical law to estimate the inward reaction force carried by gauge bosons exchanged between local and distant receding masses. This makes quantum gravity quantitative and we can calculate the strength of gravity and lots of other things from the resulting mechanism. This post concentrates on gravity’s mechanism.

The Physical Relationship between General Relativity and Newtonian gravity

(1) Newtonian gravity

Let’s begin with a look at the Newtonian gravity law F = mMG/r^2, which is based on empirical evidence, not a speculative theory (remember Newton’s claim: hypotheses non fingo!). The inverse square law is based on Kepler’s empirical laws, which were obtained from Brahe’s detailed observations of the motion of the planet Mars. The mass dependence was more of a guess by Newton, since he didn’t actually calculate gravitational forces (he did not know or even write the symbol for G, which arrived long after from the pen of Laplace). However, Newton’s other empirical law, F = ma, was strong evidence for a linear dependence of force on mass, and there was some evidence from the observation of the Moon’s orbit. The Moon was known to be about 250,000 miles away and to take about 30 days to orbit the earth, so its centripetal acceleration could be calculated from Newton’s law, a = v^2/r. This could confirm Newton’s law in two ways. First, since 250,000 miles is about 60 times the radius of the Earth, the acceleration due to gravity from the Earth should, from the inverse-square law, be 60^2 times weaker at the Moon than it is at the Earth’s surface where it is 9.8 m/s^2.
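
That Moon check is easy to repeat numerically. The sketch below uses modern round values for the Earth-Moon distance and orbital period (slightly more accurate than the round figures quoted above), so the two accelerations come out in close agreement.

```python
# Sketch: the classic Moon test of the inverse-square law. The Moon's
# centripetal acceleration a = v^2/r should be about 1/60^2 of the surface
# value 9.8 m/s^2. Distance and period are modern round values.
import math

r = 3.84e8               # Earth-Moon distance, m
T = 27.3 * 24 * 3600.0   # orbital period, s
g = 9.8                  # surface gravity, m/s^2

v = 2 * math.pi * r / T
a_orbit   = v**2 / r     # ~2.7e-3 m/s^2
a_inverse = g / 60**2    # ~2.7e-3 m/s^2

print(f"a (orbit) = {a_orbit:.2e} m/s^2, g/60^2 = {a_inverse:.2e} m/s^2")
```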

Hence it was possible to check the inverse-square law in Newton’s day. Newton also made a good guess at the average density of the earth, which indicates G fairly accurately using Galileo’s measurement of the gravitational acceleration at the Earth’s surface and, applied also to the Moon (assumed to have a density similar to the Earth’s), gives a very approximate justification for Newton’s assumption that gravitational force is directly proportional to the product of the two masses involved. Newton worked out geometrical proofs for using his law. For example, the mass of the Earth is not located at a point at its centre, but is distributed over a large three-dimensional volume. Newton proved that you can treat the entire mass of the earth as being in a small place in the centre of the Earth for the purpose of making calculations, and this proof is as clever as his demonstration that the inverse square law applies to elliptical planetary orbits (Hooke showed that it applied to circular orbits, which is much easier). Newton treated the mass of the earth as a series of uniform shells of small thickness. He proved that outside the shell, the gravitational field is identical, at any radius from the middle of the shell, to the gravitational field from an equal mass all located in a small lump in the middle. This proof also applies to the quantum gravity mechanism (below).

Cavendish produced a more accurate evaluation of G by measuring the twisting force (torsion) in a quartz fibre due to the gravitational attraction of two heavy balls of known mass located a known distance apart.

(2) General relativity as a modification needed to include relativistic phenomena

Eventually failures in the Newtonian law became apparent. Because the orbits of planets are elliptical with the sun at one focus, the planets speed up when near the sun, and this causes effects like time dilation and it also causes their mass to increase due to relativistic effects (this is significant for Mercury, which is closest to the sun and orbits fastest). Although this effect is insignificant over a single orbit, so it didn’t affect the observations of Brahe or Kepler’s laws upon which Newton’s inverse square law was based, the effect accumulates and is substantial over a period of centuries, because the perihelion of the orbit precesses. Only part of the precession is due to relativistic effects, but it is still an important anomaly in the Newtonian scheme. Einstein and Hilbert developed general relativity to deal with such problems. Significantly, the failure of Newtonian gravity is most important for light, which is deflected by gravity when passing the sun twice as much as predicted by Newton’s a = MG/r².

Einstein recognised that gravitational acceleration and all other accelerations are represented by a curved worldline on a plot of distance travelled versus time. This is the curvature of spacetime; you see it as the curved line when you plot the height of a falling apple versus time.

Einstein then used tensor calculus to represent such curvatures by the Ricci curvature tensor, Rab, and he tried to equate this with the source of the accelerative field, the tensor Tab, which represents all the causes of accelerations such as mass, energy, momentum and pressure. In order to represent Newton’s gravity law a = MG/r² with such tensor calculus, Einstein began with the assumption of a direct relationship such as Rab = Tab. This simply says that the curvature of spacetime is directly proportional to the mass-energy source. However, it is false since it violates the conservation of mass-energy. To make it consistent with the experimentally confirmed conservation of mass-energy, Einstein and Hilbert in November 1915 realised that you need to subtract from Tab on the right hand side the product of half the metric tensor, gab, and the trace, T (the sum of scalar terms across the diagonal of the matrix for Tab). Hence

Rab = Tab – (1/2)gabT.

[This is usually re-written in the equivalent form, Rab – (1/2)gabR = Tab.]

There is a very simple way to demonstrate some of the applications and features of general relativity. Simply ignore 15 of the 16 terms in the matrix for Tab, and concentrate on the energy density component, T00, which is a scalar (it is the first term in the diagonal of the matrix), so it is exactly equal to its own trace:

T00 = T.

Hence, Rab = Tab – (1/2)gabT becomes

Rab = T00 – (1/2)gabT, and since T00 = T, we obtain

Rab = T[1 – (1/2)gab]

The metric tensor gab = ds²/(dxadxb), and it depends on the relativistic Lorentzian gamma factor, (1 – v²/c²)^(-1/2), so in general gab falls from about 1 towards 0 as velocity increases from v = 0 to v = c.

Hence, for low speeds where, approximately, v = 0 (i.e., v << c), gab is generally close to 1 so we have a curvature of

Rab = T[1 – (1/2)(1)] = T/2.

For high speeds where, approximately, v = c, we have gab = 0 so

Rab = T[1 – (1/2)(0)] = T.

The curvature experienced for an identical gravity source if you are moving at the velocity of light is therefore twice the amount of curvature you get at low (non-relativistic) velocities. This is the explanation as to why a photon moving at speed c gets twice as much curvature from the sun’s gravity (i.e., it gets deflected twice as much) as Newton’s law predicts for low speeds. It is important to note that general relativity doesn’t supply the physical mechanism for this effect. It works quantitatively because it is a mathematical package which accounts accurately for the use of energy.
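As an illustration of that factor of two (a minimal sketch; the solar mass and radius used here are standard values, not figures from this post), the deflection of light grazing the sun is 4GM/(c²b), twice the low-speed value 2GM/(c²b):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8            # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
b = 6.96e8           # impact parameter ~ solar radius, m

newton = 2 * G * M_sun / (c**2 * b)   # low-speed (Newtonian) deflection, radians
gr     = 4 * G * M_sun / (c**2 * b)   # deflection for light (v = c), radians

to_arcsec = 180 / math.pi * 3600
print(f"Newtonian prediction: {newton * to_arcsec:.2f} arcsec")
print(f"Doubled (v = c)     : {gr * to_arcsec:.2f} arcsec")
# ~0.87 vs ~1.75 arcseconds: the doubling of curvature discussed above.
```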

However, it is clear from the way that general relativity works that the source of gravity doesn’t change when such velocity-dependent effects occur. A rapidly moving object falls faster than a slowly moving one because of the difference produced in the way the moving object is subject to the gravitational field, i.e., the extra deflection of light is dependent upon the Lorentz-FitzGerald contraction (the gamma factor already mentioned), which alters length (for an object moving at speed c there are no electromagnetic field lines extending along the direction of propagation whatsoever, only at right angles to the direction of propagation, i.e., transversely). This increases the amount of interaction between the electromagnetic fields of the photon and the gravitational field. Clearly, in a slow moving object, half of the electromagnetic field lines (which normally point randomly in all directions from matter, apart from minor asymmetries due to magnets, etc.) will be pointing in the wrong direction to interact with gravity, and so slow moving objects only experience half the curvature that fast moving ones do, in a similar gravitational field.

Some issues with general relativity are focussed on the assumed accuracy of Newtonian gravity, which is put into the theory as the low speed, weak field solution normalization. As we shall show below, this is incompatible with a Yang-Mills (Standard Model type) quantum gravity theory for reasons other than the renormalization problems usually assumed to exist. First, over very large distances in an expanding universe, the exchange of gravitons is weakened because redshift reduces the frequency and thus the energy of radiation dramatically over cosmological distances. This eliminates curvature over such distances, explaining the lack of gravitational deceleration in supernova data. This is falsely explained by the mainstream by adding an epicycle, i.e.,

(gravitational deceleration without redshift of gravitons in general relativity) + (acceleration due to small positive cosmological constant due to some kind of dark energy) = (observed, non-decelerating, recession of supernovae)

instead of the simpler quantum gravity explanation (predicted in 1996, two years ahead of observation):

(general relativity with G falling for large distances due to redshift of exchange gravitons reducing the energy of gravitational interactions) = (observed, non-decelerating, recession of supernovae).

So there is no curvature of spacetime at extremely big distances! On small scales, too, general relativity is false, because the tensor describing the source of gravity uses an average density to smooth out the real discontinuities resulting from the quantized, discrete nature of the particles which have mass! The smoothness of curvature in general relativity is false in general on small scales due to the input assumption of a smoothed, continuous density distribution, which is required for the stress-energy tensor to work (it is a summation of continuous differential terms, not discrete terms for each fundamental particle). So on both very large and very small scales, general relativity is a fiddle. But this is not a problem when you understand the physical dynamics and know the limitations of the theory. It only becomes a problem when people take a lot of discrete fundamental particles representing a real mass causing gravity, average their masses over space to get an average density, and then calculate the curvature from the average density, getting a smooth result and claiming that this proves that curvature is really smooth on small scales. Of course it isn’t. That argument is like averaging the number of kids per household and getting 2.5, then claiming that the average proves that one third of kids are born with only half of their bodies. But there is also a problem with quantum gravity as usually believed (see the previous post, and also this comment, on the Cosmic Variance blog, by Professor John Baez).

Symmetry groups which include gravity

We will show how you can make checkable predictions for quantum gravity in this post. In the previous two posts, here and here, the inclusion of gravity in the Standard Model was shown to require a change of the electroweak force SU(2) x U(1) to SU(2) x SU(2), where the three electroweak gauge bosons (W+, W−, and Z0) occur in both short-ranged massive versions and massless, infinite-range versions, with the charged ones producing electromagnetic force and the neutral one producing gravitation; the issues in calculating the outward force of the big bang were also described. Depending on how the Higgs mechanism for mass will be modified, this SU(2) x SU(2) electro-weak-gravity may be replaceable by a new version of a single SU(2). In the existing Standard Model, SU(3) x SU(2) x U(1), only one handedness of fundamental particles responds to the SU(2) weak force, so if you change the electroweak groups SU(2) x U(1) to SU(2) x SU(2) it can lead to a different way of understanding chiral symmetry and electroweak symmetry breaking. See also this earlier post, which discusses quantum force effects as Hawking radiation emissions.

The understanding of the correct symmetry model behind the Standard Model requires a physical understanding of what quarks are, how they arise, etc. For instance, bring 3 electrons close together and you start getting problems with the exclusion principle. But if you could somehow force a triad of such particles together, the net charge would be 3 times stronger than normal, so the vacuum shielding veil of polarized pair-production fermions will also be 3 times stronger, shielding the bare core charges 3 times more efficiently. (Imagine it like 3 communities combining their separate castles into one castle with walls 3 times thicker. The walls provide 3 times as much shielding; so as long as they can all fit inside the reinforced castle, all benefit.) This means that the long range (shielded) charge from each of the three charges of the triad will be -1/3 instead of -1. Since pair-production, and the polarization of electric charges cancelling out part of the electric field, are experimentally validated phenomena, this mechanism for fractional charges is real. Obviously, while it is easy to explain the downquark this way, you need a detailed knowledge of electroweak phenomena like the weak charges of quarks compared to leptons (which have chiral features) and also the strong force, to explain physically what is occurring with upquarks that have a +2/3 charge. Some interesting although highly abstract mathematical assaults on trying to understand particles have been made by Dr Peter Woit in http://arxiv.org/abs/hep-th/0206135, which generates all the Standard Model particles using a U(2) spin representation (see also his popular non-mathematical introduction, Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics), and which can be compared to the more pictorial preon models of particles advocated by loop quantum gravity theorists like Dr Lee Smolin. Both approaches are suggesting that there is a deep simplicity, with the different quarks, leptons, bosons and neutrinos arising from a common basic entity by means of symmetry transformations or twists of braids:

‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ –Wiki. (Hence there is a simple relationship between leptons and fermions; more later on.)

Introduction to the basis for the dynamics of quantum gravity

You can treat the empirical Hubble recession law, v = HR, as describing a variation in velocity with respect to observable distance R, or as a variation of velocity with respect to time past, because as we look to greater distances in the universe, we’re seeing an earlier era, because of the time taken for the light to reach us. That’s spacetime: you can’t have distance without time. Because distance R = ct where c is the velocity of light and t is time, Hubble’s law can be written v = HR = Hct which clearly shows a variation of velocity as a function of time! A variation of velocity with time is called acceleration. By Newton’s 2nd law, the acceleration of matter produces force. This view of spacetime is not new:

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.

To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v·dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR·d(HR)/dR = H²R.

Radial distance elements are equal for the Hubble recession in all directions around us,

H = dv/dr = dv/dx = dv/dy = dv/dz

implying

t (the age of the universe), 1/H = dr/dv = dx/dv = dy/dv = dz/dv

implying

dv/H = dr = dx = dy = dz

1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally slowed down with no cosmological constant to offset this gravity effect by providing repulsive long range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H where 2/3 is the compensation factor for gravitational retardation.
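For a sense of scale (a minimal sketch, with an assumed Hubble parameter of about 70 km/s/Mpc):

```python
H = 70e3 / 3.086e22                # assumed Hubble parameter, 1/s
seconds_per_year = 3.156e7

age_no_deceleration = 1 / H        # 1/H: age with no gravitational slowing
age_critical = (2 / 3) / H         # (2/3)/H: age at critical density, no lambda

print(f"1/H     = {age_no_deceleration / seconds_per_year / 1e9:.1f} billion years")
print(f"(2/3)/H = {age_critical / seconds_per_year / 1e9:.1f} billion years")
```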

This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical distances in time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:

(dr)/c = (dx)/c = (dy)/c = (dz)/c

which can be written as:

dtr = dtx = dty = dtz

So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we only need one time to describe the expansion of the universe. If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result, as the hostility to this new idea from traditionalists shows.

But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal: “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, Pages 161-177, which can be downloaded here. The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions of matter are bound together but are contractible in general relativity. For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.

This sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity.

The outward motion of matter produces a force which for simplicity for the present (we will discuss correction factors for density variation and redshift effects below; see also this previous post) will be approximated by Newton’s 2nd law in the form

F = ma

= [(4/3)πR³ρ].[dv/dt],

and since dR/dt = v = HR, it follows that dt = dR/(HR), so

F = [(4/3)πR³ρ].[d(HR)/{dR/(HR)}]

= [(4/3)πR³ρ].[H²R.dR/dR]

= [(4/3)πR³ρ].[H²R]

= (4/3)πR⁴ρH².
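A minimal numeric sketch of that outward force (the density ρ and Hubble parameter H used here are assumed illustrative inputs, uncorrected for the density variation and redshift effects discussed below):

```python
import math

H = 2.3e-18          # assumed Hubble parameter, 1/s
c = 3.0e8            # speed of light, m/s
rho = 9.5e-27        # assumed mean density of the universe, kg/m^3
R = c / H            # radius of the observable universe, m

F_outward = (4 / 3) * math.pi * R**4 * rho * H**2   # F = (4/3)*pi*R^4*rho*H^2
print(f"Outward force ~ {F_outward:.2e} N")
# An enormous number (between 1e43 and 1e44 N with these inputs), which by
# Newton's 3rd law is matched by an equal inward reaction force carried by
# the gauge boson exchange radiation.
```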


Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, and experiences isotropic graviton radiation from all directions except where there is an asymmetry produced by a mass which shields that radiation; the spin-1 gravitons cause ‘attraction’ by simply pushing things, which, as we shall see, allows predictions to be made). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram:

(force of gravity) = (total inward force).(cross-sectional area of the shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R²/r²) / (total spherical area with radius R).

Later in this post, this will be evaluated proving that the shield’s cross-sectional area is the cross-sectional area of the event horizon for a black hole, π(2GM/c²)². But at present, to get the feel for the physical dynamics, we will assume this is the case without proving it. This gives

(force of gravity) = (4πR⁴ρH²/3).(π(2GM/c²)²R²/r²)/(4πR²)

= (4/3)πR⁴ρH²G²M²/(c⁴r²)

We can simplify this using the Hubble law, because HR = c gives R/c = 1/H, so

(force of gravity) = (4/3)πρG²M²/(H²r²)

This result ignores both the density variation in spacetime (the distant, earlier universe having higher density) and the effect of redshift in reducing the energy of gravitons and weakening quantum gravity contributions from extreme distances, because the momentum of a graviton will be p = E/c, where E is reduced by redshift since E = hf.

Quantization of mass

However, it is significant qualitatively that this gives a force of gravity proportional not to M₁M₂ but instead to M², because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. (Obviously ‘large masses’ are just composites of many fundamental particles.) M² should only arise if the ultimate building blocks of mass (the ‘charge’ in a theory of quantum gravity) are quantized, because it shows that two units of mass are identical. This tells us about the way the mass-giving field particles, the ‘Higgs bosons’, operate. Instead of there being a cloud of an indeterminate number of Higgs bosons around a fermion giving rise to mass, what happens is that each fermion acquires a discrete number of such mass-giving particles.

(These ‘Higgs bosons’ surrounding the fermion acquire inertial and gravitational mass by interacting with the external gravitational field, which explains why mass increases with velocity but electric charge doesn’t. The core of a fermion doesn’t interact with the inertial/gravitational field, only with the massive Higgs bosons surrounding the core, which in turn do interact with the inertial/gravitational field. The core of the fermion only interacts with Standard Model forces, namely electromagnetism, weak force, and in the case of pairs or triads of closely confined fermions – quarks – the strong nuclear force. Inertial mass and gravitational mass arise from the Higgs bosons in the vacuum surrounding the fermion, and gravitons only interact with Higgs bosons, not directly with the fermions.)

This is explicable simply in terms of the vacuum polarization of matter and the renormalization of charge and mass in quantum electrodynamics, and is confirmed by an analysis of all relatively stable (half life of 10⁻²³ second or more) known particles, as discussed in an earlier post here (for a table of the mass predictions compared to measurements see Table 1). (Note that the simple description of polarization of the vacuum as two shells of virtual fermions, a positive one close to the electron core and a negative one further away, depicted graphically on those sites, is a simplification for convenience in depicting the net physical effect, for the purpose of understanding what is going on and making accurate calculations. Obviously, in reality, all the virtual positive fermions and all the virtual negative fermions will not be located in two shells; they will be all over the place, but on average the virtual charges of like sign to the real particle core will be further away from the core than the virtual charges of unlike sign.)

Predictions of particle masses compared to experimentally determined masses, using the mass renormalization model

Table 1: Comparison of measured particle masses with predicted particle masses using a physical model for the renormalization of mass (both mass and electric charge are renormalized quantities in quantum electrodynamics, due to the polarization of pairs of charged virtual fermions in the electron’s strong electric field; see previous posts such as this). Anybody wanting a high quality, printable PDF version of this table can find it here. (The theory of masses here was inspired by an arXiv paper by Drs. Rivero and de Vries, and on a related topic I gather that Carl Brannen is using density operators to explain theoretically and extend the application of Yoshio Koide’s empirical formula, which states that the sum of the masses of the 3 leptons electron, muon and tau, multiplied by 1.5, is equal to the square of the sum of the square roots of the masses of those three particles. If that works it may well be compatible with this mass mechanism. Although the mechanism predicts the possible quantized masses fairly accurately as first approximations, it is good to try to understand better how the actual masses are picked out. The mechanism which produced the table produced a formula containing two integers which predicts a lot of particles which are too short-lived to occur. Why are some configurations more stable than others? What selection principle picks out the proton as being particularly stable, if not completely stable? We know that the nuclei of heavy elements aren’t chaotic bags of neutrons and protons, but have a shell structure to a considerable extent, with ‘magic numbers’ which determine relative stability, and which are physically explained by the number of nucleons taken to completely fill up successive nuclear shells. Probably some similar effect plays a part to some extent in the mass mechanism, so that some configurations have magic numbers which are stable, while nearby ones are far less stable and decay quickly. This, if true of the quantized vacuum surrounding fundamental particles, would lead to a new quantum theory of such particles, with similar gimmicks explaining the original ‘anomalies’ of the periodic table, viz. isotopes explaining non-integer masses, etc.)

This prediction doesn’t strictly demand perfect integers to be observable, because it’s possible that effects like isotopes exist, where different individuals of the same type of meson or baryon can be surrounded by different integer numbers of Higgs field quanta, giving non-integer average masses. (The number would be likely to actually change during a high-energy interaction, where particles are broken up.)

The early attempts of Dalton and others to work out an atomic theory were regularly criticised and even ridiculed because the measured mass of chlorine is 35.5 times the mass of hydrogen, i.e., nowhere near an integer! Here is a summary of the rules:

If a particle is a baryon, its mass should in general be close to an integer when expressed in units of 105 MeV (3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV).

If it is a meson, its mass should in general be close to an integer when expressed in units of 70 MeV (2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV).

If it is a lepton apart from the electron (the electron is the most complex particle), its mass should in general be close to an integer when expressed in units of 35 MeV (1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV).

This scheme has a simple causal mechanism in the quantization of the ‘Higgs field’ which supplies mass to fermions and bosons. By itself the mechanism just predicts that mass comes in discrete units, depending on how strong the polarized vacuum is in shielding the fermion core from the Higgs field quanta.

To predict specific masses (apart from the fact they are likely to be near integers if isotopes don’t occur), regular QCD ideas can be used. This prediction doesn’t replace lattice QCD predictions, it just suggests how masses are quantized by the ‘Higgs field’ rather than being a continuous variable.

Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ quanta giving mass to that fermion (lepton or baryon) or meson (boson) core.

From analogy to the shell structure of nuclear physics, where there are highly stable ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2 and 3, and N = 1, 2, 8 and 50, to predict the most stable masses of fermions besides the electron, and also the masses of bosons (mesons); a numerical check follows the four cases below:

For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.

For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.

For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.

For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
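A minimal numeric sketch of those four cases, with standard measured masses shown alongside for comparison:

```python
def predicted_mass(n, N):
    """Predicted mass in MeV: 35 * n * (N + 1), with n core particles and
    N 'Higgs field' quanta, as in the quantization scheme described above."""
    return 35 * n * (N + 1)

cases = [
    ("muon (lepton)",    1, 2,   105.7),   # measured ~105.7 MeV
    ("pion (meson)",     2, 1,   139.6),   # charged pion ~139.6 MeV
    ("nucleon (baryon)", 3, 8,   938.3),   # proton ~938.3 MeV
    ("tauon (lepton)",   1, 50, 1776.9),   # tau ~1776.9 MeV
]

for name, n, N, measured in cases:
    pred = predicted_mass(n, N)
    print(f"{name:18s} n={n} N={N:2d}  predicted {pred:5.0f} MeV  measured {measured:7.1f} MeV")
```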

Particle mass predictions: the gravity mechanism implies quantized unit masses. As proved, the 1/α = 137.036… number is the electromagnetic shielding factor for any particle core charge by the surrounding polarised vacuum.

This shielding factor is obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order of h-bar. The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et, where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology. Now for the slightly clever bit:

px = h-bar implies (when remembering p = mc, and E = mc²):

x = h-bar /p = h-bar /(mc) = h-bar*c/E

so E = h-bar*c/x

when using the classical definition of energy as force times distance (E = Fx):

F = E/x = (h-bar*c/x)/x

= h-bar*c/x².

So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of α. The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more.
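That ratio can be checked directly by dividing the ‘bare core’ force h-bar*c/x² by Coulomb’s law for two unit charges at the same separation; the x² cancels, leaving a pure number (a minimal sketch using standard constants):

```python
import math

hbar = 1.0546e-34      # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m

# F_bare = hbar*c/x^2 and F_Coulomb = e^2/(4*pi*eps0*x^2); the x^2 cancels.
ratio = hbar * c / (e**2 / (4 * math.pi * eps0))
print(f"F_bare / F_Coulomb = {ratio:.2f}")   # ~137: the 1/alpha shielding factor
```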

One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance. However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.

It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistically, scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)

Experimental evidence:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

Plus, in particular:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

(Levine and Koltick experimentally found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of 91 GeV or so. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at 80 GeV or so. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law value, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. So the strong force falls off in strength as you get closer by higher energy collisions, while the electromagnetic force increases! Conservation of gauge boson mass-energy suggests that the energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.)

Related to this exchange radiation are Feynman’s path integrals of quantum field theory:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here

As for the indeterminacy of electron locations in the atom, the fuzzy picture is not a result of multiple universes interacting but simply the Poincaré many-body problem, whereby Newtonian physics fails when you have more than 2 bodies of similar mass or charge interacting at once (the failure is that you lose deterministic solutions to the equations, having to resort instead to statistical descriptions like the Schroedinger equation; annihilation-creation operators in quantum field theory produce many pairs of charges randomly in location and time in strong fields, deflecting particle motions chaotically on small scales, similarly to Brownian motion; this is the ‘hidden variable’ causing indeterminacy in quantum theory, not multiverses or entangled states). Entanglement is a false physical interpretation of Aspect’s (and related) experiments: Heisenberg’s uncertainty principle only applies to slower than light velocity particles like massive fermions. Aspect’s experiment stems from the Einstein-Podolsky-Rosen suggestion to measure the spins of two molecules; if they correlate in a certain way then that would prove entanglement, because molecular spins are subject to the indeterminacy principle. Aspect used photons instead of molecules. Photons cannot change polarization when measured, as they are frozen in nature due to their velocity, c. Therefore, the correlation of photon polarizations observed merely confirms that Heisenberg’s uncertainty principle does not apply to photons, rather than implying that (believing that Heisenberg’s uncertainty principle does apply to photons) the photons ‘must’ have an entangled polarization until measured! Aspect’s results in fact discredit entanglement.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

Gravity is basically a boson shielding effect, while the errors of LeSage’s infamous pushing-gravity model are due to fermion radiation assumptions, which is why such models did not get anywhere. Once again, gravity is a massless boson (integer spin) exchange radiation effect. LeSage (or Fatio, whose ideas LeSage borrowed) assumed that very small material particles, fermions in today’s language, were the force-causing exchange radiation (discussed by Feynman in the video here). Massless bosons don’t obey the exclusion principle and they don’t interact with one another the way massive bosons and all fermions do (fermions do obey the exclusion principle, so they always interact with one another). Hence, LeSage’s attractive force mechanism is only valid for short-ranged particles like pions, which produce the strong nuclear attractive force between nucleons. Therefore, the ‘errors’ people found in the past when trying to use LeSage’s mechanism for gravity (the mutual interactions between the particles, which equalize the force in the shadow region after a mean free path) don’t apply to bosonic radiation, which doesn’t obey the exclusion principle. The short range of LeSage’s gravity becomes an advantage in explaining the pion-mediated strong nuclear force. LeSage (or actually Newton’s friend Fatio, whose ideas were allegedly plagiarised by LeSage) made a mess of it. The LeSage attraction mechanism is predicted to have a short range on the order of a mean free path of scatter, before radiation pressure equalization in the shadows quenches the attractive force. This short range is real for nuclear forces, but not for gravity or electromagnetism:


(Source: http://www.mathpages.com/home/kmath131/kmath131.htm.)

The Fatio-LeSage mechanism is useless because it makes no prediction for the strength of gravity whatsoever, and it is plain wrong because it assumes gas molecules or fermions are the exchange radiation, instead of gauge bosons. The falsehood of the Fatio-LeSage mechanism is that the gravity force would be short ranged, since the material pressure of the fermion particles (which bounce off each other due to the Pauli exclusion principle) or gas molecules causing gravity would get diffused into the shadows within a short distance, just as air pressure is only shielded by a solid for a distance on the order of a mean free path of the gas molecules. Hence, to get a rubber suction cup to be pushed strongly to a wall by air pressure, the wall must be smooth, and it must be pushed firmly. Such a short ranged attractive force mechanism may be useful in making pion-mediated Yukawa strong nuclear force calculations, but it is not gravity.

(Some of the ancient objections to LeSage are plain wrong and in contradiction of Yang-Mills theories such as the Standard Model. For example, it was alleged that gravity couldn’t be the result of an exchange radiation force because the exchange radiation would heat up objects until they all glowed. This is wrong because the mechanisms by which radiation interacts with matter don’t necessarily transfer that energy into heat; classically all energy is usually degraded to waste heat in the end, but the gravitational field energy cannot be directly degraded to heat. Masses don’t heat up just because they are exchanging radiation, the gravitational field energy. If you drop a mass and it hits another mass hard, substantial heat is generated, but this is an indirect effect. Basically, many of the arguments against physical mechanisms are bogus. For an object to heat up, the charged cores of the electrons must gain and radiate heat energy; but the gravitational gauge boson radiation isn’t being exchanged with the electron bare core. Instead, the fermion core of the electron has no mass, and since quantum gravity charge is mass, the lack of mass in the core of the electron means it can’t interact with gravitons. The gravitons interact with some vacuum particles like ‘Higgs bosons’, which surround the electron core and produce inertial and gravitational forces indirectly. The electron core couples to the ‘Higgs boson’ by electromagnetic field interactions, while the ‘Higgs boson’ at some distance from the electron core interacts with gravitons. This indirect transfer of force can smooth out the exchange radiation interactions, preventing that energy from being degraded into heat. So such objections, if correct, would also have to debunk the Standard Model, which is based on Yang-Mills exchange radiation and is well tested experimentally. Claiming that exchange radiation would heat things up until they glowed is similar to the Ptolemy followers claiming that if the Earth rotated daily, clouds would fly over the equator at 1000 miles/hour and people would be thrown off the ground! It’s a political-style junk objection and doesn’t hold up to any close examination in comparison to experimentally-determined scientific facts.)

When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have alpha shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is:

Mzα²/(1.5 × 2π) = 0.51 MeV

If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass:

Mzα/(2π) = 105.7 MeV

The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is:

Men(N + 1)/(2α) = 35n(N + 1) MeV.

(For the electron, the extra polarised shielding occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest this be dismissed as ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember we have a physical mechanism, unlike Dalton, and below we make additional predictions and tests for all the other observable particles in the universe, and compare the results to experimental measurements. There is a similarity in the physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + α/(2π) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here. Also note that Schwinger’s correction is only the first (but is by far the biggest numerically and thus the most important, allowing the magnetic moment to be accurately predicted to 6 significant figures) of an infinite series of correction terms involving higher powers of α for more complex vacuum field interactions. Each of these corrections is depicted by a different Feynman diagram. (Basically, quantum field theory is a mathematical correction for the probability of different reactions. The more classical and obvious things generally have the greatest probability by far, but stranger interactions occasionally occur in addition, so these also need to be included in calculations, which then give a prediction which is statistically very accurate.)
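A minimal numeric check of those three expressions (Mz = 91187 MeV and α = 1/137.036 are standard values; the formulas are the ones stated above):

```python
import math

M_Z = 91187.0              # Z boson mass in MeV
alpha = 1 / 137.036        # fine structure constant

electron = M_Z * alpha**2 / (1.5 * 2 * math.pi)   # Mz*alpha^2/(1.5*2*pi)
muon     = M_Z * alpha / (2 * math.pi)            # Mz*alpha/(2*pi)
schwinger = 1 + alpha / (2 * math.pi)             # first vacuum correction to the
                                                  # electron magnetic moment

print(f"electron mass   : {electron:.3f} MeV  (measured 0.511 MeV)")
print(f"muon mass       : {muon:.1f} MeV  (measured 105.66 MeV)")
print(f"magnetic moment : {schwinger:.5f} Bohr magnetons (measured 1.00116)")
```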

This kind of gravitational calculation also allows us to predict the gravitational coupling constant, G, as will be proved below. We know that the inward force is carried by gauge boson radiation, because all forces are due to gauge boson radiation according to the Standard Model of particle physics, which is the best-tested physical theory of all time and has made thousands of accurately confirmed predictions from an input of just 19 empirical parameters (don’t confuse this with the bogus supersymmetric standard model, which even in its minimal form requires 125 adjustable parameters and has a large landscape of possibilities, making no definite or precise predictions whatsoever). The Standard Model is a Yang-Mills theory in which the exchange of gauge bosons between the relevant charges for the force (i.e., colour charges for quantum chromodynamic forces, electric charges for electric forces, etc.) causes the force.

What happens is that Yang-Mills exchange radiation pushes inward, coming from the surrounding, expanding universe. Since spacetime, as recently observed, isn’t boundless (there’s no observable gravity retarding the recession of the most distant galaxies and supernovae, as discovered in 1998, and so there is no curvature at the greatest distances), the universe is spherical and is expanding without slowing down. The expansion is caused by the physical pressure of the gauge boson radiation. This radiation exerts momentum p = E/c. Gauge boson radiation is emitted towards us by matter which is receding: the reason is Newton’s 3rd law. Because, as proved above, the Hubble recession in spacetime is an acceleration of matter outwards, the matter receding has an outward force by Newton’s 2nd empirical law F = ma, and this outward force has an equal and opposite reaction, just like the exhaust of a rocket. The reaction force is carried by gauge boson radiation.

What, you may ask, is the mechanism behind Newton’s 3rd law in this case? Why should the outward force of the universe be accompanied by an inward reaction force? I dealt with this in a paper in May 1996, made available via the letters page of the October 1996 issue of Electronics World. Consider the source of gravity, the gravitational field (actually gauge boson radiation), to be a frictionless perfect fluid. As lumps of matter, in the form of the fundamental particles of galaxies, accelerate away from us, they leave in their wake a volume of vacuum which was previously occupied but is now unoccupied. The gravitational field doesn’t ignore spaces which are vacated when matter moves: instead, the gravitational field fills them. How does this occur?

What happens is like the situation when a ship moves along. It doesn’t suck in water from behind it to fill its wake. Instead, water moves around from the front to the back. In fact, there is a simple physical law: a volume of water equal to the ship’s displacement moves continuously in the opposite direction to the ship’s motion.

This water fills in the void behind the moving ship. For a moving particle, the gravitational field of spacetime does the same. It moves around the particle. If it did anything else, we would see the effects of that: for example, if the gravitational field piled up in front of a moving object instead of flowing around it, the pressure would increase with time and there would be drag on the object, slowing it down. The fact that Newton’s 1st law, inertia, is empirically based tells us that the vacuum field does flow frictionlessly around moving particles instead of slowing them down. The vacuum field does however exert a net force when an object accelerates; this increases the mass of the object and causes a flattening of the object in the direction of motion (FitzGerald-Lorentz contraction). However, this is purely a resistance to acceleration: there is no drag on motion unless the motion is accelerative.

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

(Consider motion in the Dirac sea, which is incompressible owing to the Pauli exclusion principle: all states are filled: this predicted antimatter successfully. Nobody wants to hear of modelling physical effects of particles moving in the Dirac sea. Why? A good analogy is the particle-and-hole theory used in semiconductor electronics, solid state physics. Now plug in cosmology: both positive and negative real charges are streaming away from us in all directions. Hence virtual charges in the Dirac sea will stream inward. Moving fermions can’t occupy the same space as virtual fermions; they get shoved out of the way due to the Pauli exclusion principle. It is pretty obvious to anyone that the outward force of matter in the expanding universe is balanced by an equal inward Dirac sea force, according to Newton’s 3rd law. Similarly, in a corridor, a person of 70 litre volume moving at velocity v is compensated for by 70 litres of fluid air moving at velocity –v, or the same speed but in the opposite direction to the person’s motion. This is pretty obvious because if the surrounding fluid didn’t displace around the person to fill in the volume they are vacating, there would be a vacuum left behind them and the 14.7 psi air pressure in front would exert 144*14.7 ~ 2,100 pounds of pressure per square foot on the person, which would prevent the person from being able to walk. It is absolutely crucial for the person that air is a fluid which flows around and fills in the space being vacated behind. The lack of this mechanism explains why you need to apply substantial force to remove large suction plungers from smooth surfaces against air pressure. However, to my cost, I have found that this argument using the air pressure analogy or Dirac sea analogy is fruitless. Mainstream crackpots claim that it is all wrong, and by deliberately misunderstanding the analogy they can create endless rows which have nothing to do with the point, the gravitational mechanism. As an analogy to this misunderstanding of a simple point, think about Feynman’s remark that energy was misunderstood even by the author of a physics school textbook who claimed that ‘energy’ makes everything go. Taking up Feynman’s argument, if you calculate the energy of the air in your room, the air molecules have a mean velocity of about 500 m/s, and there is 1.2 kg of air per cubic metre of your room. Let’s say you are in a small room with 10 cubic metres of air in it, 12 kg of air. The kinetic energy that air possesses is half the mass multiplied by the square of the mean speed, i.e., 1.5 MJ. However, that ‘energy’ is useless to you unless you have a way of extracting it. You can’t power your laptop from the energy of air pressure and temperature! You could of course use it like a battery if you had a big vacuum chamber with a fan powering a generator at a hole in the wall of the vacuum chamber, so that the in-rushing air would turn the fan and generate electricity. But the power it takes to create such a vacuum is more than the energy you can possibly get out of the collapsing vacuum. So the simple idea of ‘energy’ is misleading to mainstream crackpots. What counts is not gross energy, but usable energy! This is why most of the gauge boson radiation energy has nothing to do with the energy we use. Because the gauge boson radiation energy, such as ‘gravitons’, comes from all directions, most of it is not useful energy and cannot do work. Only the small asymmetries in it result in work, by creating the fundamental forces we experience!)

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

[Fig. 2 image: how general relativity effects are produced physically by quantum gravitation]

Fig. 2: The general all-round pressure from the gravitational field does of course produce physical effects. The radiation is received by mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is a compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance MG/(3c²) = 1.5 mm for the Earth; this was calculated by Feynman using general relativity in his famous Feynman Lectures on Physics. The reason why nearby, local masses shield the force-carrying radiation exchange, causing gravity, is that the distant masses in the universe are in high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from a local, non-receding mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you rather than exchanging gauge bosons with you, so you get pushed towards it. This is why apples fall.

Since there is very little shielding area (fundamental particle shielding cross-sectional areas are small compared to the Earth’s area), the Earth doesn’t block all of the gauge boson radiation being exchanged between you and the masses in the receding galaxies beyond the far side of the Earth. The shielding by the Earth is by the fundamental particles in it, specifically the fundamental particles which give rise to mass (supposed to be some form of Higgs bosons which surround fermions, giving them mass) by interacting with the gravitational field of exchange radiation. Although each local fundamental particle over its shielding cross-sectional area stops the gauge boson radiation completely, most of the Earth’s volume is devoid of fundamental particles because they are so small. Consequently, the Earth as a whole is an inefficient shield. There is little probability of different fundamental particles in the Earth being directly behind one another (i.e., overlapping of shielded areas) because they are so small. Consequently, the gravitational effect from a large mass like the Earth is just the simple sum of the contributions from the fundamental particles which make the mass up, so the total gravity is proportional to the number of particles, which is proportional to the mass.

The point is that nearby masses, which are not receding from you significantly, don’t fire gauge boson radiation towards you, because there is no reaction force! However, they still absorb gauge bosons, so they shield you, creating an asymmetry. You get pushed towards such masses by the gauge bosons coming from the direction opposite to the mass. For example, standing on the Earth, you get pushed down by the asymmetry; the upward beam of gauge bosons coming through the Earth is very slightly shielded. The shielding effect is very small, because it turns out that the effective cross-sectional shielding area of an electron (or other fundamental particle) for gravity is equal to πR² where R = 2GM/c², which is the event horizon radius of an electron. This is a result of the calculations, as is the prediction of the Newtonian gravitational parameter G! Now let’s prove it.
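Two of the numbers just quoted can be checked directly (a minimal sketch using standard constants, not figures taken from this post): the Earth’s radial contraction MG/(3c²) and the tiny shielding cross-section π(2GM/c²)² for an electron.

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8             # speed of light, m/s
M_earth = 5.97e24     # Earth mass, kg
m_e = 9.109e-31       # electron mass, kg

contraction = G * M_earth / (3 * c**2)    # radial contraction of the Earth
r_event = 2 * G * m_e / c**2              # black-hole event horizon radius of an electron
area = math.pi * r_event**2               # shielding cross-section

print(f"Earth radial contraction ~ {contraction * 1000:.2f} mm")   # ~1.5 mm
print(f"Electron event horizon radius ~ {r_event:.2e} m")
print(f"Shield cross-section ~ {area:.2e} m^2")                    # extremely small
```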

Approach 1

Referring to Fig. 1 above, we can evaluate the gravity force (which is the proportion of the total force indicated by the dark-shaded cone; the observer is in the middle of the diagram at the apex of each cone). The force of gravity is not simply the total inward force, which is equal to the total outward force. Gravity is only the proportion of the total force which is represented by the dark cone.

The total force, as proved above, is 4πR⁴ρH²/3. The fraction of this which is represented by the dark cone is equal to the volume of the cone (XR/3, where X is the area of the end of the cone), divided by the volume, 4πR³/3, of the sphere of radius R (the radius of the observable spacetime universe, defined by R = ct = c/H). Hence,

Force of gravity = (4πR⁴ρH²/3)·(XR/3)/(4πR³/3)

= R²ρH²X/3,

where the area of the end of the cone, X, is observed in Fig. 1 to be geometrically equal to the area of the shield, A, multiplied by (R/r)²:

X = A(R/r)².

Hence the force of gravity is R²ρH²[A(R/r)²]/3

= (1/3)R⁴ρH²A/r².

(Of course you get exactly the same result if you take the fraction of the total force delivered in the cone to be the area of the base of the cone, X, divided by the surface area, 4πR², of the sphere of radius R.)

If we assume that the shield area is A = π(2GM/c²)², i.e., the cross-sectional area of the event horizon of a black hole, then setting the formula above for the force of gravity equal to the Newtonian law, F = mMG/r², with m = M and H = c/R, gives the prediction that

G = (3/4)H²/(πρ).

This is of course equal to twice the false amount you get from rearranging the ‘critical density’ formula of general relativity (without a cosmological constant). What is more interesting is that we do not need to assume that the shield area is A = π(2GM/c²)². The critical density formula, and other cosmological applications of general relativity, are false because they ignore the quantum gravity dynamics which become important on very large scales due to the recession of masses in the universe: the gravitational interaction is a product of the cosmological expansion, since both are caused by gauge boson exchange radiation. The radiation pushes masses apart over large, cosmological distance scales, while pushing things together on small scales. The uniform gauge boson pressure between masses causes them to recede from all surrounding masses and fill the expanding volume of space, like raisins in an expanding cake receding from one another, where the gauge boson radiation pressure is represented by the pressure of the dough as it expands. There is no contradiction whatsoever between this effect and the local gravitational attraction which occurs when two raisins are close enough that there is no dough between them but plenty of dough around them, pushing them towards one another like gravity.
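
A minimal symbolic check of the Approach 1 algebra above (a sketch, assuming the shield area A = π(2GM/c²)² and the m = M identification used in the text):

```python
# Sympy check of Approach 1: set the shielded-cone force (1/3)*R^4*rho*H^2*A/r^2,
# with assumed shield area A = pi*(2*G*M/c^2)^2, equal to Newton's F = m*M*G/r^2 with
# m = M, then substitute R = c/H and solve for G.
import sympy as sp

R, r, rho, H, M, G, c = sp.symbols('R r rho H M G c', positive=True)

A = sp.pi * (2*G*M/c**2)**2                        # black hole event horizon cross-section
cone_force = sp.Rational(1, 3) * R**4 * rho * H**2 * A / r**2
newton_force = M * M * G / r**2                    # Newton's law with m = M

roots = sp.solve(sp.Eq(cone_force, newton_force), G)
G_predicted = [s for s in roots if s != 0][0]      # discard the trivial G = 0 root
print(sp.simplify(G_predicted.subs(R, c/H)))       # prints 3*H**2/(4*pi*rho)
```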

We get the same result by an independent method, which does not assume that the shield area is the event horizon cross section of a black hole. Now we shall prove it.

Approach 2

As in the above approach, the outward force of the universe is 4πR⁴ρH²/3, and there is an equal inward force. The fraction of the inward force which is shielded is now calculated as the mass, Y, of those atoms in the shaded cone in Fig. 1 which actually emit the gauge boson radiation that hits the shield, divided by the mass of the universe.

The important thing here is that Y is not simply the total mass of the universe in the shaded cone. (If it were, Y would be the density of the universe multiplied by the volume of the cone.)

The total mass inside the shaded cone of Fig. 1 is not what matters, because part of the gauge boson radiation it emits misses the shield, having hit other intervening masses in the universe. (See Fig. 3.)

The mass in the shaded cone which actually produces the gauge boson radiation which we are concerned with (that which causes gravity) is equal to the mass of the shield multiplied up geometrically by the ratio of the area of the base of the cone to the area of the shield, i.e., Y = M(R/r)², because of the geometric convergence of the inward radiation from many masses within the cone towards the center. This is illustrated in Fig. 3.

Hence, the force of gravity is:

(4πR⁴ρH²/3)·Y/[mass of universe]

= (4πR⁴ρH²/3)·[M(R/r)²]/(4πR³ρ/3)

= R³H²M/r².

Comparing this to Newton’s law, F = mMG/r², gives us

G = R³H²/[mass of universe]

= (3/4)H²/(πρ).

Fig. 3.

Fig. 3: The mass multiplication scheme basis of Approach 2.

So we get precisely the same result as the previous method, in which we assumed that the shield area of an electron was the cross-sectional area of the black hole event horizon! This result for G has been produced entirely without any assumption about what numerical value to take for the shielding cross-sectional area of a particle, yet it is the same result as that derived above when assuming that a fundamental particle has a shielding cross-sectional area for gravity-causing gauge boson radiation equal to the event horizon of a black hole. Hence, this result justifies and substantiates that assumption. We get two major quantitative results from this study of quantum gravity: a formula for G, and a formula for the cross-sectional area of a fundamental particle in gravitational interactions.
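
A quick check of the closing algebra of Approach 2 (a sketch: it only verifies that R³H²/[mass of universe] reduces to the same (3/4)H²/(πρ) found in Approach 1, with the mass of the universe taken as (4/3)πR³ρ):

```python
# Verify that G = R^3*H^2/M_universe, with M_universe = (4/3)*pi*R^3*rho, simplifies to
# (3/4)*H^2/(pi*rho), i.e. the same result as Approach 1.
import sympy as sp

R, rho, H = sp.symbols('R rho H', positive=True)

mass_universe = sp.Rational(4, 3) * sp.pi * R**3 * rho
G_approach2 = R**3 * H**2 / mass_universe

print(sp.simplify(G_approach2))                                            # 3*H**2/(4*pi*rho)
print(sp.simplify(G_approach2 - sp.Rational(3, 4)*H**2/(sp.pi*rho)) == 0)  # True
```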

The exact formula for G, including photon redshift and density variation

The toy model above began by assuming that the inward force carried by the gauge boson radiation is identical to the outward force represented by the simple product of mass and acceleration in Newton’s 2nd law, F = ma. In fact, taking the density of the universe to be the local average around us (at a time of 14,000 million years after the big bang) is an error, because the density increases as we look back in time with increasing distance, seeing earlier epochs which have higher density. This effect tends to increase the effective outward force of the universe, by increasing the density. In fact, the effective mass would go to infinity unless there were another factor which tends to reduce the force imparted by gravity-causing gauge bosons from the greatest distances. This second effect is redshift. The problem of how to evaluate the extent to which these two effects partly offset one another is discussed in detail in the earlier post on this blog, here. It is shown there that the effective inward force should take some more complex form, so that the inward force is no longer simply F = ma but some integral (depending on the way that the redshift is modelled, and there are several alternatives) like

F = ma = mH²r

= ∫ (4πr²ρ)(1 − rc⁻¹H)⁻³ (1 − rc⁻¹H) H²r [1 + {1.1 × 10⁻¹³ (H⁻¹ − r/c)}⁻¹]⁻¹ dr

= 4πρc² ∫ r [{c/(Hr)} − 1]⁻² [1 + {1.1 × 10⁻¹³ (H⁻¹ − r/c)}⁻¹]⁻¹ dr,

where ρ is the local density, i.e., the density of spacetime at 14,000 million years after the big bang. I have not completed the evaluation of such integrals (some of them give an infinite answer, so it is possible to rule those out as either wrong or as missing some essential factor in the model). However, an earlier idea, which takes account of the rise in density with increasing spacetime around us and at the same time of the redshift via the divergence of matter in the universe, is to set up a more abstract model.
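
Before moving to that abstract model, here is a numerical sketch of evaluating an integral of the quoted form (the 1.1 × 10⁻¹³ constant and the weighting factors are simply taken from the expression as printed above; the point is mainly that the answer depends strongly on how the upper limit, near R = c/H where the integrand blows up, is treated):

```python
# Numerical sketch of the quoted inward-force integral, showing its sensitivity to the
# upper cutoff near R = c/H (illustrating the remark that some forms give an infinite
# answer). Constants are the rough present-epoch values used later in this post.
import numpy as np

H = 2.3e-18        # Hubble parameter, s^-1
c = 3.0e8          # speed of light, m/s
rho = 2.8e-27      # local density, kg/m^3
R = c / H          # radius of the observable universe, m

def integrand(r):
    redshift = 1.0 / (1.0 + 1.0 / (1.1e-13 * (1.0/H - r/c)))
    return 4 * np.pi * rho * c**2 * r * (c/(H*r) - 1.0)**-2 * redshift

for frac in (0.9, 0.99, 0.999):
    r = np.linspace(1.0, frac * R, 2_000_000)
    y = integrand(r)
    force = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))   # trapezoidal rule
    print(f"upper limit {frac:5.3f} R: inward force ~ {force:.3e} N")
```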

Density variation with spacetime and divergence of matter in universe (causing the redshift of gauge bosons by an effect which is quantitatively similar to gauge boson radiation being ‘stretched out’ over the increasing volume of space while in transit between receding masses in the expanding universe) can be modelled by the well-known equation for mass continuity (based on the conservation of mass in an expanding gas, etc):

dρ/dt + ∇·(ρv) = 0

Or: dρ/dt = −∇·(ρv)

where the divergence term is

∇·(ρv) = {d(ρv_x)/dx} + {d(ρv_y)/dy} + {d(ρv_z)/dz}

For the observed spherical symmetry of the universe we see around us,

d(ρv_x)/dx = d(ρv_y)/dy = d(ρv_z)/dz = d(ρv_R)/dR

where R is radius.

Now we insert the Hubble equation v = HR:

dρ/dt = −∇·(ρv) = −∇·(ρHR) = −[{d(ρHR)/dR} + {d(ρHR)/dR} + {d(ρHR)/dR}]

= −3 d(ρHR)/dR

= −3ρH dR/dR

= −3ρH.

So dρ/dt = -3ρH. Rearranging:

−3H dt = (1/ρ) dρ. Integrating:

∫ −3H dt = ∫ (1/ρ) dρ.

The solution is:

−3Ht = (ln ρ₁) − (ln ρ). Using the base of natural logarithms, e, to get rid of the logarithms:

e^(−3Ht) = ρ₁/ρ

Because H = v/R = c/[radius of universe, R] = 1/[age of universe, t] = 1/t, we find:

e^(−3Ht) = ρ₁/ρ = e^(−3(1/t)t) = e⁻³.

Therefore

ρ = ρ₁e³ ≈ 20.0855 ρ₁, i.e., the effective density ρ is e³ times the locally observed density ρ₁.
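
A quick symbolic check of this step (a sketch that follows the post’s own simplification of treating H as constant during the integration, then putting Ht = 1):

```python
# Solve d(rho)/dt = -3*H*rho with H held constant, then evaluate the density ratio for
# H*t = 1, reproducing the e^3 ~ 20.09 correction factor derived above.
import sympy as sp

t, H = sp.symbols('t H', positive=True)
rho = sp.Function('rho')

solution = sp.dsolve(sp.Eq(rho(t).diff(t), -3*H*rho(t)), rho(t))
print(solution)                 # rho(t) = C1*exp(-3*H*t)
print(sp.exp(3).evalf(6))       # 20.0855, the density/redshift correction factor e^3
```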

Therefore, if this analysis is a correct abstract model for the combined effect of graviton redshift (due to the effective ‘stretching’ of radiation as a result of the divergence of matter across spacetime caused by the expansion of the universe) and the density variation of the universe across spacetime, our earlier result of G = (3/4)H²/(πρ) should be corrected, for spacetime density variation and redshift of gauge bosons, to:

G = (3/4)H²/(πρe³),

which is a factor of ~10 smaller than the G obtained by rearranging the traditional ‘critical density’ formula of general relativity, G = (3/8)H²/(πρ). Therefore, this theory predicts gravity quantitatively and checkably, and it dispenses with the need for an enormous amount of unobserved dark matter. (There is clearly some dark matter, since neutrinos are known to have some mass, but its amount can be assessed from the rotation curves of spiral galaxies and other observational checks.)

Experimental confirmation for the black hole size as the cross-sectional area for fundamental particles in gravitational interactions

In addition to the theoretical evidence above, there is independent experimental evidence. If the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and it has a magnetic field which is a torus (see Electronics World, April 2003).

Experimental evidence for why an electromagnetic field can produce gravity effects starts with the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since the electrons have a kinetic energy of only half the product of their mass and the square of their drift velocity (typically 1 mm/s for a 1 A current).

So the energy current flows into the capacitor at light speed. Take the capacitor to be simple, just two parallel conductors separated by a dielectric composed of just a vacuum (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out since the reflected energy has a magnetic field vector curling the opposite way to the inflowing energy. (If you have a fully charged, ’static’ conductor, it contains an equilibrium with similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field as observed.)

The important thing is that the energy keeps going at light velocity in a charged conductor: it can’t ever slow down. This is important because it proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).

You can prove this if you discharge any conductor x metres long, charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse which has a height of v/2 volts and a duration of 2x/c seconds. The apparently ‘static’ energy of v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must first travel to and reflect from the far end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long or 2x/c seconds in duration, instead of a pulse at v volts which is x metres long or x/c seconds in duration, which is what you would expect if the electromagnetic energy in the capacitor were static and drained out at light velocity by all flowing towards the exit.
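
A small numerical sketch of this discharge picture (illustrative values only, not a measurement): treating the ‘static’ charge on a line of length x as two counter-propagating energy currents, each carrying half the energy at v/2, the voltage sampled at the discharge end is a flat pulse of v/2 lasting 2x/c.

```python
# Model of discharging a charged line of length x through a sampling point at one end:
# half the energy current is already travelling towards the exit and leaves during
# 0 < t < x/c; the other half must first reflect from the far end and leaves during
# x/c < t < 2x/c. Either way the amplitude seen at the exit is v/2.
import numpy as np

c = 3.0e8    # propagation speed for a vacuum dielectric, m/s
x = 1.0      # line length, metres (illustrative)
v = 10.0     # 'static' charged voltage, volts (illustrative)

def output_voltage(t):
    return v / 2 if 0 <= t < 2 * x / c else 0.0

for t in np.linspace(0, 3 * x / c, 7):
    print(f"t = {t*1e9:5.2f} ns  ->  output = {output_voltage(t):.1f} V")
# Result: a 5 V pulse lasting 6.67 ns, i.e. v/2 for 2x/c, not v for x/c.
```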

This was investigated by Catt, who used it to design the first crosstalk (glitch) free wafer scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while he complained that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get into mathematical physics far enough to sort out the electron.

It’s really interesting to investigate why classical (not quantum) electrodynamics is totally false in many ways: Maxwell’s model is wrong. The calculations of quantum gravity above, based on a simple, empirically-based model (no ad hoc hypotheses), yield evidence (which needs to be independently checked) that the proper size of the electron is the black hole event horizon radius.

There is also the issue of a chicken-and-egg situation in QED where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges have to therefore arise from the electric field lines of the virtual photons being continually exchanged.

How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor is carrying equal currents in opposite directions.

Hence you end up with the conclusion that the electroweak sector of the Standard Model is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2) with three massless gauge bosons: positive and negatively charged massless bosons mediate electromagnetism and a neutral gauge boson (a photon) mediates gravitation. See Fig. 4.

The two massless charged gauge bosons produce the mechanism of electromagnetism, while the massless uncharged gauge boson produces gravitation.

Fig. 4: The SU(2) electrogravity mechanism. Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation. See Fig. 5.

Gauge bosons of electromagnetism and the prediction of the strength of electromagnetism relative to gravitation

Fig. 5: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no such adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10⁸⁰ times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 10⁴⁰ times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10⁴⁰/10⁸⁰ = 10⁻⁴⁰ as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10⁸⁰ randomly distributed charges, electromagnetism as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) is 10⁴⁰ times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t, and is like the diffusive drunkard’s walk where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for an individual drunkard’s walk, there is the factual solution that a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges (Fig. 5).
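
A Monte Carlo sketch of the square-root scaling invoked here (a toy check of the statistics only, not a derivation of the physics): adding N randomly-signed unit contributions gives a typical net magnitude of about √N, neither N nor zero.

```python
# Toy check of drunkard's-walk addition: the root-mean-square of the sum of N random
# +1/-1 contributions grows as sqrt(N). (The number of +1's in N fair coin flips is
# binomial, so the signed total is 2k - N.)
import numpy as np

rng = np.random.default_rng(0)

for N in (100, 10_000, 1_000_000):
    sums = 2.0 * rng.binomial(N, 0.5, size=5000) - N
    rms = np.sqrt(np.mean(sums**2))
    print(f"N = {N:>9,}: rms net sum = {rms:9.1f}   sqrt(N) = {np.sqrt(N):9.1f}")
# Scaling the same argument up to ~1e80 charges gives sqrt(1e80) = 1e40, the claimed
# ratio of electromagnetic to gravitational coupling.
```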

Experimentally checkable consequences of this gravity mechanism, and consistency with known physics

1. Universal gravitational parameter, G

G = (3/4)H²/(πρe³), derived in stages above, where e³ is the cube of the base of natural logarithms (the correction factor due to the effects of redshift and density variation in spacetime), is a quantitative prediction. In the previous post here, the best observational inputs for the Hubble parameter H and the local density of the universe ρ were identified: ‘The WMAP satellite in 2003 gave the best available determination: H = 71 +/- 4 km/s/Mparsec = 2.3 × 10⁻¹⁸ s⁻¹. Hence, if the present age of the universe is t = 1/H (as suggested from the 1998 data showing that the universe is expanding as R ~ t, i.e. no gravitational retardation, instead of the Friedmann-Robertson-Walker prediction for critical density of R ~ t^(2/3), where the 2/3 power is the effect of curvature/gravity in slowing down the expansion) then the age of the universe is 13,700 +/- 800 million years. … The Hubble space telescope was used to estimate the number of galaxies in a small solid area of the sky. Extrapolating this to the whole sky, we find that the universe contains approximately 1.3 × 10¹¹ galaxies, and to get the density right for our present time after the big bang we use the average mass of a galaxy at the present time to work out the mass of the universe. Taking our Milky Way as the yardstick, it contains about 10¹¹ stars, and assuming that the sun is a typical star, the mass of a star is 1.9889 × 10³⁰ kg (the sun has 99.86% of the mass of the solar system). Treating the universe as a sphere of uniform density and radius R = c/H, with the above mentioned value for H we obtain a density for the universe at the present time (~13,700 million years) of about 2.8 × 10⁻²⁷ kg/m³.’

Putting H = 2.3 × 10⁻¹⁸ s⁻¹ and ρ = 2.8 × 10⁻²⁷ kg/m³ into G = (3/4)H²/(πρe³) gives G = 2.2 × 10⁻¹¹ m³ kg⁻¹ s⁻², which is one third of the experimentally determined value, G = 6.673 × 10⁻¹¹ m³ kg⁻¹ s⁻². This factor of 3 discrepancy is within the error bars of the density estimate, because of the uncertainties in estimating the average mass of a galaxy. To put the accuracy of this prediction into perspective, try reading the statement by Eddington (quoted at the top of this blog post): how many other theories, based entirely on observationally verified facts like Hubble’s law and Newton’s laws, predict the strength of gravity? Alternatively, compare it to the classical (and incorrect) ‘critical density’ prediction from general relativity (which ignores the mechanism of gravitation): that rearranges to give a formula for G which is e³/2, or about 10 times, bigger than this prediction, and hence about 3.3 times bigger than the experimental value of G.
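
A one-line numerical check of these figures (a sketch using only the quoted inputs, H = 2.3 × 10⁻¹⁸ s⁻¹ and ρ = 2.8 × 10⁻²⁷ kg/m³):

```python
# Evaluate G = (3/4)*H^2/(pi*rho*e^3) with the quoted WMAP-era inputs and compare it
# with the measured value.
import math

H = 2.3e-18          # s^-1
rho = 2.8e-27        # kg/m^3
G_predicted = 0.75 * H**2 / (math.pi * rho * math.e**3)
G_measured = 6.673e-11

print(f"predicted G = {G_predicted:.2e} m^3 kg^-1 s^-2")       # ~2.2e-11
print(f"measured/predicted = {G_measured / G_predicted:.1f}")  # ~3, within the density uncertainty
```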

This is actually an unfair comparison, because the rough estimate for the density is about 3 times too high. Most astronomers suggest that the observable density is 5-20% of the critical density, i.e., 10% with a factor of 2 error limit. This would put the density at ρ = 10⁻²⁷ kg/m³, and our prediction is then exact, within a factor of 2 experimental error limit. The abundance of dark matter is not experimentally measured. There is some observational evidence for dark matter, and theoretically there are some solid reasons why there should be such matter in a dark, non-luminous form (neutrinos have mass, as do black holes). The mainstream takes the critical density formula from general relativity and the measured density for luminous matter, and uses the disagreement to claim that the difference is dark matter. That argument is weak, because general relativity is in error for cosmological purposes through ignoring quantum gravity effects which become important on large scales in an expanding universe (i.e., redshift of gravitons weakening the force of gravity over large distances, the nature of the Yang-Mills exchange radiation dynamical mechanism for gravity in which gravity is a result of radiation exchange with the other masses in the expanding universe, etc.). Another argument for a lot of dark matter is the flattening of galactic rotation curves, but if the final theory of quantum gravity is a departure from general relativity and Newtonian gravity, it could potentially resolve this problem at large distances, because gravitons are redshifted and there could be some significant graviton shielding effect from the immense amount of mass in a galaxy, effects which are trivial in the solar system.

Professor Sean Carroll writes a lot about cosmology, and is the author of a very useful book on general relativity. In writing about the discovery of direct evidence for dark matter in his blog post http://cosmicvariance.com/2006/08/21/dark-matter-exists/ and others, he does highlight some useful arguments. He starts by stating without evidence that 5% of the universe is ordinary matter, 25% dark matter and 70% dark energy. He then explains that the direct evidence for dark matter proves that mainstream cosmologists are not fooling themselves. The problem is that the direct evidence for dark matter doesn’t say how much dark matter there is: it’s not quantitative. It does not allow any confirmation of the theoretical guesswork behind his statement that there is 5 times as much dark matter as visible matter. He does then go on to discuss whether some kind of ‘modified Newtonian dynamics’, rather than dark matter, could resolve the problems, and he writes that he would prefer some objective resolution of that type rather than, in effect, inventing ‘dark matter’ epicycles as convenient fixes which cannot be readily checked even in principle; but there is no definite proposal discussed which is really concrete and which addresses the quantum gravity facts (such as this gravity mechanism!).

2. Small size of the cosmic background radiation ripples

The prediction of gravity by this mechanism appears to be accurate to within the experimental data, which (through the density estimate) is only accurate to within a factor of approximately two. The second major prediction of this mechanism is the small size of the sound-like ripples in the angular distribution of the cosmic background radiation, which is the earliest directly observable radiation in the universe; its emitted power peaked at 370,000 years after the big bang, when the temperature was 3,500 kelvin, and it has since been redshifted or ‘stretched out’ by cosmic expansion, which reduces its temperature to 2.7 kelvin.

Because radiation and matter were in thermal equilibrium (an ionised gas) at the time the cosmic background radiation was emitted, the radiation carries an imprint of the nature of the matter at that time. The cosmic background radiation was found to be of extremely uniform temperature, far more uniform than expected at 370,000 years after the big bang, when conventional models of galaxy formation implied that there should have been big ripples indicating the ‘seeding’ of the lumps that could become stars and galaxies.

This is called the ‘horizon problem’ or ‘isotropy problem’, because the microwave background radiation from opposite directions in the sky is similar to within 0.01%, whereas in the mainstream models gravity always has the same strength and would have caused bigger non-uniformities within 370,000 years of the big bang. A mainstream attempt to solve this problem is ‘inflation’, whereby the universe expanded at faster-than-light speed for a small fraction of a second after the big bang, making the density of the universe uniform all over the sky before gravity had a chance to magnify irregularities in the expansion process.

This ‘horizon problem’ is closely related to the ‘flatness problem’, which is the issue that in general relativity the universe, depending on its density, has three possible geometries: open, flat, and closed. At the critical density it will be flat, with gravitation causing its radius to increase in proportion to the two-thirds power of time after the big bang. Mainstream consensus was that the universe was probably flat, which means at the critical density, five to twenty times more than the observable density. The flatness problem is that if the universe was not completely flat, but of slightly different density across the universe, then the variation in density would be greatly magnified by the expansion of the universe and would be obvious today. The absence of any such large anisotropy is widely believed, by the mainstream, to be evidence for a flat geometry.

The mechanism for gravity solves these problems. It solves the flatness problem by showing that the critical density (distinguishing the open, flat, and closed solutions to the Friedmann-Robertson-Walker metric of general relativity, which is applied to cosmology) is false for ignoring quantum gravity effects: there are no long-range gravitational influences in an expanding universe, because the graviton exchange radiation of quantum gravity becomes severely redshifted, like light, and cannot produce curvature effects or forces over large distances. So the whole existing mainstream structure of using general relativity to work out cosmology falls apart.

The horizon problem as to why the cosmic background is so smooth is solved by this model in an interesting way. It is very simple. The gravitational parameter G given by the relationship above is directly proportional to the age of the universe. The older the universe gets, the stronger gravity gets. At 370,000 years after the big bang, G was about 40,000 times smaller than it is now, and at earlier times it was even smaller. The ripples in the cosmic background radiation are extremely small because the gravitational force was so small.

As proved earlier, the Hubble acceleration is a = dv/dt = H²R = H²ct, where t is the time in the past when the light was emitted, which can be set equal to the age of the universe for our purposes here. Hence the outward force, F = ma = mH²ct, is proportional to the age of the universe, as is the equal inward force given by Newton’s 3rd law of motion.

We can also see the proportionality to time in the result G = (3/4)H²/(πρe³), since H² = 1/t² and ρ is the mass of the universe divided by its volume (which is proportional to the cube of the radius, i.e., to the cube of the product ct), so this formula implies that G is proportional to (1/t²)/(1/t³) = t, i.e., directly proportional to time.
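
A quick arithmetic check of this scaling (a sketch, using only the decoupling time and present age quoted in this post):

```python
# With G directly proportional to the age of the universe, the ratio of G now to G at
# the 370,000-year decoupling era is just the ratio of the two ages.
t_now = 13_700e6     # present age, years (quoted above)
t_cmb = 0.37e6       # age at emission of the cosmic background radiation, years

print(f"G(now)/G(370,000 yr) ~ {t_now / t_cmb:,.0f}")   # ~37,000, i.e. the quoted ~40,000
```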

Dirac did not have a mechanism for a time-dependence of G but he guessed that G might vary. Unfortunately, lacking this mechanism, Dirac guessed that G was falling with time when it is actually increasing, and he did not realise that it is not just the strength constant for gravity that varies, but all the strength coupling constants vary in the same way. This disproves Edward Teller’s claim (based on just G varying) that if it were true, the sun’s radiant power would vary with time in a way incompatible with life (e.g., he calculated that the oceans would have been literally boiling during the Cambrian era if Dirac’s assumption was true).

It also disproves, in the same way, another claim that G is constant based on nucleosynthesis in the big bang. The argument here is that nuclear fusion in stars and in the big bang depends on gravity to cause the basic compressive force, causing electrically charged positive particles to collide hard enough to break through the ‘barrier’ caused by the repulsive electric Coulomb force, so that the short-ranged strong attractive force can then fuse the particles together. The big bang nucleosynthesis model correctly predicts the observed abundances of unfused hydrogen and fusion products like helium, assuming that G is constant. Because the result is correct, it is often claimed (even by students of Professor Carroll) that G must have had a value at 1 minute after the big bang that is no more than 10% different from today’s value of G. The obvious fallacy here is that both electromagnetism and gravity vary in the same way. If you double both the Coulomb force and the gravity force, the fusion rate doesn’t vary, because the Coulomb force opposes fusion while gravity causes fusion, and both are inverse-square forces. The effect of G varying is not manifested in a change to the fusion rate in the big bang or in a star, because the corresponding change in the Coulomb force offsets it.

For a discussion of why the different forces unify by scaling similarly (it is due to vacuum polarization dynamics) see this earlier post: https://nige.wordpress.com/2007/03/17/the-correct-unification-scheme/

Louise Riofrio has investigated the dimensionally correct relationship GM = tc³, which was discussed earlier on this blog here, here and here, where M is the mass of the universe and t is its age. This is algebraically equivalent to G = (3/4)H²/(πρ), i.e., the gravity prediction without the dimensionless redshift-density correction factor of e³. It is interesting that it can be derived on the basis of energy-based methods, as first pointed out by John Hunter, who suggested setting E = mc² = mMG/R, i.e., setting rest-mass energy equal to gravitational potential energy.

Since the electromagnetic charge of the electron is massless bosonic energy trapped as a black hole, the gravitational potential energy would have to be equal, to keep it trapped.

This rearranges to give the equations of Riofrio and Rabinowitz, although physically it is obviously missing some dimensionless multiplication constant, because the gravitational potential energy cannot be exactly E = mMG/R, where R is the radius of the universe. It is evident that this equation describes the gravitational potential energy which would be released if the universe were (somehow) to collapse. However, the average radial distance of the mass of the universe M will be less than the radius of the universe R. This brings up the density variation problem: gravitons and light both go at velocity c, so we see them coming from times in the past when the density was greater (density is proportional to the reciprocal of the cube of the age of the universe, due to expansion). So you cannot assume constant density and get a simple solution. You really also need to take account of the redshift of gravitons from the greatest distances, or the density will cause you problems by tending towards infinity at radii approaching R. Hence, this energy-based approach to gravity is analogous to the physical mechanism described above. See also the derivation, by mathematician Dr Thomas R. Love of California State University, of Kepler’s law at https://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy/ which demonstrates that you can indeed treat problems generally by assuming that the rest-mass energy of the spinning, otherwise static, fundamental particle, or the kinetic energy of the orbiting body, is being trapped by gravitation.

This leads to a concrete basis for John Hunter’s suggestions published as a notice in the 12 July 2003 issue of New Scientist, page 17: he suggested that if E = mc² = mMG/R, then the effective value of G depends on distance, since G = Rc²/M, which is algebraically equivalent to the expression we obtained above for the gravity mechanism, published in the article ‘Electronic Universe, Part 2’, Electronics World, April 2003 (excluding the suggested e-cube correction for density variation with distance and graviton redshift, which was published in a letter to Electronics World in 2004). Hunter’s July 2003 notice in New Scientist indicated that this solves the horizon problem of cosmology (thus not requiring the speculative mainstream extravagances of Alan Guth’s inflation theory). Hunter pointed out in his notice that his E = mc² = mMG/R, when applied to the Earth, should include another term for the influence of the nearby mass of the sun, leading to E = mc² = mMG/R + mM’G/r, where m is the mass of the Earth, M is the mass of the universe, R is the radius of the universe (which is inaccurate, as pointed out, since the average distance of the mass of the surrounding universe can hardly be the radius of the universe, but must be a smaller distance, leading to the problem of the time-variation of density and thus also the redshift of the gravitons causing gravity), M’ is the mass of the Sun, and r is the distance of the Earth from the sun. Hunter argued that since r varies and is 3.4% bigger in July than in January (when the Earth is closest to the sun), this leads to a definite experiment to test the theory: ‘Prediction: the weight of objects on the Earth will vary by 3.3 parts in 10 billion over a year, as the Earth to Sun distance changes.’ (My only problem with this prediction is simply that it is virtually impossible to test, just like the ‘not even wrong’ Planck scale unification supersymmetry ‘prediction’. Because the Earth is constantly vibrating due to seismic effects, you can never really hope to make such accurate measurements of weight. Anyone who has tried to measure masses beyond a few significant figures for quantitative chemical analysis knows how difficult such a mass measurement is: making sensitive instruments is a problem, but the increased sensitivity multiplies up background vibrations, so the instrument just becomes a seismograph. However, maybe some space-based precise measurements with clever experimentalist/observationist tricks will one day be able to check this to some extent.)

3. Electric force constant (permittivity), Hubble parameter, etc.

The proof [above] predicts gravity accurately, with G = (3/4)H²/(πρe³). Electromagnetic force (discussed above and in the April 2003 Electronics World article) in quantum field theory (QFT) is due to ‘virtual photons’ which cannot be seen except via the forces produced. The mechanism is continuous radiation from spinning charges; the centripetal acceleration a = v²/r causes energy emission which is naturally in exchange equilibrium between all similar charges, like the exchange of quantum radiation at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to their recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges that block energy exchange, and are pushed together by energy being received from other directions (shielding-type attraction). The attraction and repulsion forces are equal for similar net charges. The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse, scattering-type way like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out on any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign, multiplied by the gravity force proved above.

Hence F(electromagnetism) = mMG·N^(1/2)/r² = q₁q₂/(4πεr²) (Coulomb’s law), where G = (3/4)H²/(πρe³) as proved above, and N is, as a first approximation, the mass of the universe (4πR³ρ/3 = 4π(c/H)³ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (−1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the permittivity ε in Coulomb’s law of:

ε = q_e²(2.718…)³[ρ/(12π m_e² m_proton H c³)]^(1/2) F/m.

Using old data as in the letter published in Electronics World some years ago which gave the G formula (ρ = 4.7 × 10⁻²⁸ kg/m³ and H = 1.62 × 10⁻¹⁸ s⁻¹, for 50 km s⁻¹ Mpc⁻¹), gives ε = 7.4 × 10⁻¹² F/m, which is only 17% low as compared to the measured value of 8.85419 × 10⁻¹² F/m.
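
A numerical check of this permittivity formula (a sketch using the same older inputs, ρ = 4.7 × 10⁻²⁸ kg/m³ and H = 1.62 × 10⁻¹⁸ s⁻¹, with standard values for the other constants):

```python
# Evaluate epsilon = q_e^2 * e^3 * sqrt(rho / (12*pi*m_e^2*m_proton*H*c^3)) and compare
# with the measured permittivity of free space.
import math

q_e = 1.602e-19      # electron charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg
c = 2.998e8          # speed of light, m/s
rho = 4.7e-28        # density, kg/m^3 (older estimate quoted above)
H = 1.62e-18         # Hubble parameter, s^-1 (50 km/s/Mpc)

eps = q_e**2 * math.e**3 * math.sqrt(rho / (12 * math.pi * m_e**2 * m_p * H * c**3))
print(f"predicted permittivity = {eps:.2e} F/m")    # ~7.4e-12, vs measured 8.854e-12 F/m
```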

Rearranging this formula to yield ρ, and also rearranging G = (3/4)H²/(πρe³) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = (3/4)H²/(πρe³) to give a prediction for ρ which is independent of H:

H = 16π²G m_e² m_proton c³ ε²/[q_e⁴(2.718…)³] = 2.3391 × 10⁻¹⁸ s⁻¹, or 72.2 km s⁻¹ Mpc⁻¹, so 1/H = t = 13,550 million years. This is checkable against the WMAP result that the universe is 13,700 million years old; the prediction is well within the experimental error bar.

ρ = 192π³G m_e⁴ m_proton² c⁶ ε⁴/[q_e⁸(2.718…)⁹] = 9.7455 × 10⁻²⁸ kg/m³.
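
And a numerical check of these two rearranged predictions (a sketch using the measured G and permittivity as inputs; small differences in the last digits just reflect the rounded constants used here):

```python
# Evaluate the rearranged predictions H = 16*pi^2*G*m_e^2*m_p*c^3*eps^2/(q_e^4*e^3) and
# rho = 192*pi^3*G*m_e^4*m_p^2*c^6*eps^4/(q_e^8*e^9).
import math

G = 6.673e-11              # measured gravitational constant, m^3 kg^-1 s^-2
eps = 8.854187817e-12      # measured permittivity of free space, F/m
q_e = 1.602e-19            # electron charge, C
m_e = 9.109e-31            # electron mass, kg
m_p = 1.673e-27            # proton mass, kg
c = 2.998e8                # speed of light, m/s

H = 16 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps**2 / (q_e**4 * math.e**3)
rho = 192 * math.pi**3 * G * m_e**4 * m_p**2 * c**6 * eps**4 / (q_e**8 * math.e**9)

print(f"H   = {H:.4e} s^-1  ->  1/H = {1/H/3.156e7/1e6:,.0f} million years")  # close to the 13,550 quoted
print(f"rho = {rho:.4e} kg/m^3")                                              # ~9.7e-28
```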

Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However they clearly show the power of this mechanism-based predictive method.

Furthermore, calculations show that Hawking radiation from electron-mass black holes has the right force as exchange radiation of electromagnetism: https://nige.wordpress.com/2007/03/08/hawking-radiation-from-black-hole-electrons-causes-electromagnetic-forces-it-is-the-exchange-radiation/

4. Particle masses

Particle mass mechanism

Fig. 6: Particle mass mechanism. The ‘polarized vacuum’ shell exists between the IR and UV cutoffs. We can work out the shell’s outer radius either by using the IR cutoff energy as the collision energy to calculate the distance of closest approach in a particle scattering event (like Coulomb scattering, which predominates at low energies), or by using Schwinger’s formula for the minimum static electric field strength needed to cause fermion-antifermion pairs to pop out of the Dirac sea in the vacuum. The outer radius of the polarized vacuum around a unit charge by either calculation is of the order of 1 fm. This scheme doesn’t just explain and predict masses, it also replaces supersymmetry with a proper physical, checkable prediction of what happens to Standard Model forces at extremely high energy. The following text is an extract from an earlier blog post here:

‘The pairs you get produced by an electric field above the IR cutoff corresponding to 10^18 v/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge published in the PRL in 1997. Koltick et al. found that electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through the part of polarized vacuum shield (observable electric charge is independent of distance only at beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because you have less polarized dielectric between you and the electron core as you get closer, so less of the electron’s core field gets cancelled by the intervening dielectric).

‘There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons). How can gravitational charge be renormalized? There is no mechanism for pair production whereby the pairs will become polarized in a gravitational field. For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized. If they are both displaced in the same direction by the field, they aren’t polarized. So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells’s Cavorite.

‘There is no evidence for this. Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, with only the renormalization of electric charge being explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value. However, this is not a problem. The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude). Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.

‘Finally, supersymmetry is totally flawed. What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification. You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).

‘The energy sapped from the gauge boson mediated field of electromagnetism is being used. It’s being used to create pairs of charges, which get polarized and shield the field. This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory. Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.

‘Now take the situation where you put N electrons close together, so that their cores are very nearby. What will happen is that the surrounding vacuum polarization shells of the electrons will overlap. The electric field is N times stronger, so pair production and vacuum polarization are N times stronger. So the shielding of the polarized vacuum is N times stronger! This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron. Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!

‘This has three remarkable consequences. First, the observer at a long distance (>1 fm), who knows from high energy scattering that there are N charges present in the core, will see only 1 unit charge at low energy. Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.

‘Second, the Pauli exclusion principle prevents two fermions from sharing the same quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (most usually at low pressure it is the quantum number for spin which changes so adjacent electrons in an atom have opposite spins relative to one another; Dirac’s theory implies a strong association of intrinsic spin and magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials). If you could extend the Pauli exclusion principle, you could allow particles to acquire short-range nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field which produces a lot of pair production allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.

‘Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do. Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.

‘The right handed electron has a weak hypercharge of −2. The left handed electron has a weak hypercharge of −1. The left handed down quark (with observable low-energy electric charge of −1/3) has a weak hypercharge of 1/3, while the right handed down quark has a weak hypercharge of −2/3.

‘It’s totally obvious what’s happening here. What you need to focus on is the hadron (meson or baryon), not the individual quarks. The quarks are real, but their electric charges as implied from low energy physics considerations are totally fictitious for trying to understand an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks). The shielded electromagnetic charge energy is used in the weak and strong nuclear fields, and is being shared between them. It all comes from the electromagnetic field. Supersymmetry is false because at high energy, where you see through the vacuum, you are going to arrive at the unshielded electric charge from the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces. Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric). This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.

‘It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength. This is done when electric field energy is stored in a capacitor. In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose. See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy. (The collision energy is easily translated into distances from the Coulomb scattering law for the closest approach of two electrons in a head on collision, although at higher energy collisions things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge. The assumption of perfectly elastic Coulomb scattering will also need modification leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.) The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces. This allows predictions and more checks. It’s totally tied down to hard facts, anyway. If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields, and the vacuum pair production and polarization phenomena, so something will be learned either way.

‘To give an example from https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron. Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances. This figure is the reason why the short ranged strong nuclear force is so much stronger than electromagnetism.’

5. Quantum gravity renormalization problem is not real

The following text is an extract from an earlier blog post here:

‘Quantum gravity is supposed – by the mainstream – to only affect general relativity on extremely small distance scales, ie extremely strong gravitational fields.

‘According to the uncertainty principle, for virtual particles acting as gauge boson in a quantum field theory, their energy is related to their duration of existence according to: (energy)*(time) ~ h-bar.

‘Since time = distance/c,

‘(energy)*(distance) ~ c*h-bar.

‘Hence,

‘(distance) ~ c*h-bar/(energy)

‘Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and would be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong, in solving a ‘problem’ which might not even be real, by coming up with a renormalizable quantum gravity based on gravitons, which it then hypes as being the ‘prediction of gravity’.

‘The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

‘The problem is that gravity has only one type of ‘charge’, mass. There’s no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.

‘This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.

‘However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

‘This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated to the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances, also reduces the observable mass by the same factor. In other words, if there was no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.’
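The (distance) ~ c*h-bar/(energy) relation quoted at the start of that extract is easy to put numbers to. A minimal sketch follows; the example energies (1 GeV and the Planck energy) are my own illustrative choices.

    HBAR = 1.054571817e-34   # reduced Planck constant, J s
    C = 2.99792458e8         # speed of light, m/s
    EV = 1.602176634e-19     # joules per electron-volt

    def distance_scale(energy_eV):
        """Distance probed by a virtual quantum of the given energy,
        using (distance) ~ c * h-bar / (energy)."""
        return HBAR * C / (energy_eV * EV)

    print(distance_scale(1e9))      # ~2.0e-16 m for a 1 GeV quantum
    print(distance_scale(1.22e28))  # ~1.6e-35 m (the Planck length) at the Planck energy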

Experimental confirmation of the redshift of gauge boson radiation

All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation weakens its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy, the additional decrease beyond the geometric divergence of field lines (or exchange radiation divergence) coming from redshift of exchange radiation, with their energy proportional to the frequency after redshift, E = hf. This is because the momentum carried by radiation is p = E/c = hf/c. Any reduction in frequency f therefore reduces the momentum imparted by a gauge boson, and this reduces the force produced by a stream of gauge bosons.

Therefore, in the universe all forces between receding masses should, according to Yang-Mills quantum field theory (where forces are due to the exchange of gauge boson radiation between charges), suffer a bigger fall than the inverse square law. So, where the redshift of visible light radiation is substantial, the accompanying redshift of exchange radiation that causes gravitation will also be substantial; weakening long-range gravity.
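A minimal sketch of this momentum argument (the 1/(1 + z) frequency scaling is just the standard redshift relation; the example frequency is arbitrary):

    PLANCK_H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8            # speed of light, m/s

    def received_momentum(emitted_frequency, z):
        """Momentum p = E/c = h*f/c delivered by a quantum of exchange radiation
        whose frequency has been redshifted by the factor (1 + z)."""
        return PLANCK_H * (emitted_frequency / (1.0 + z)) / C

    # A redshift of z = 1 halves the momentum each gauge boson delivers, so the
    # force falls off faster than the inverse-square geometric divergence alone.
    print(received_momentum(1e20, 0.0), received_momentum(1e20, 1.0))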

When you check the facts, you see that the role of ‘cosmic acceleration’ as produced by dark energy (the cc in GR) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long range gravity that slows expansion down at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift and accompanying energy loss E = hf and momentum loss p = E/c of the exchange radiations causing gravity. It’s simply a quantum gravity effect due to redshifted exchange radiation weakening the gravity coupling constant G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.

Nobel Laureate Phil Anderson points out:

‘… the flat universe is just not decelerating, it isn’t really accelerating …’ –

http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Supporting this and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R

Like my paper, Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting to arxiv. It was published in the International Journal of Theoretical Physics, vol. 43 (2004) no. 1, pp.161-177. This shows that unification implies that the cc is exactly zero, no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arXiv and then claim ‘look at arxiv, there are no valid alternatives’. It’s a story of dictatorship:

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, Nineteen Eighty Four, Chancellor Press, London, 1984, p225.

The approach above focusses on gauge boson radiation shielding. We now consider the interaction. In the intense fields near charges, pair production occurs, in which the energy of gauge boson radiation is randomly and spontaneously transformed into ‘loops’ of matter and antimatter, i.e., virtual fermions which exist for a brief period (as determined by the uncertainty principle) before colliding and annihilating back into radiation (hence the spacetime ‘loop’ where the pair production and annihilation is an endless cycle).

In this framework, we have physical material pressure from the Dirac sea of virtual fermions, not just gauge boson radiation pressure. To be precise, as stated before on this blog, the Dirac sea of virtual fermions only occurs out to a radius of about 1 fm from an electron; beyond that radius there are no virtual fermions in the vacuum because the electric field strength is below 10^18 volts/metre, the Schwinger threshold for pair production. So at all distances beyond about 10^-15 metre from a fundamental particle, the vacuum only contains gauge boson radiation, and contains no pairs of virtual fermions, no chaotic Dirac sea. This cutoff of pair production is a reason why renormalization of charge is necessary with an ‘IR (infrared) cutoff’; the vacuum can only polarize (and thus shield electric charge) out to the range at which the electric field is strong enough to begin to cause pair production to occur in the first place. If it could polarize without such a cutoff, it would be able to completely cancel out all real electric charges, instead of only partly cancelling them. Since this doesn’t happen, we know there is a limit on the range of the Dirac sea of virtual fermions. (For those wanting to see the formula proving the minimum electric field strength that is required for pairs of virtual charges to appear in the vacuum, see equation 359 of Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of Luis Alvarez-Gaume and Miguel Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040.)
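The Schwinger threshold quoted above can be checked directly from the standard QED formula E_c = m_e^2 c^3/(e * h-bar); a minimal sketch, using nothing specific to this blog’s model:

    M_ELECTRON = 9.1093837015e-31   # electron mass, kg
    C = 2.99792458e8                # speed of light, m/s
    E_CHARGE = 1.602176634e-19      # elementary charge, C
    HBAR = 1.054571817e-34          # reduced Planck constant, J s

    # Critical field strength for spontaneous pair production (Schwinger limit).
    schwinger_field = M_ELECTRON ** 2 * C ** 3 / (E_CHARGE * HBAR)
    print(schwinger_field)   # ~1.3e18 V/m, the pair-production threshold cited above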

So what happens is that gauge boson exchange radiation powers the production of short ranged, massive spacetime loops of virtual fermions being created and annihilated (and polarized in the electric field between creation and annihilation).

Now let’s consider general relativity, which is the mathematics of gravity. Contrary to some misunderstandings, Newton never wrote down F = mMG/r^2, which is due to Laplace. Newton was proud of his claim ‘hypotheses non fingo’ (I feign no hypotheses), i.e., he worked to prove and predict things without making any ad hoc assumptions or guesswork speculations. He wasn’t a string theorist, basing his guesses on non-observed gravitons (which don’t exist) or extra-dimensions, or unobservable Planck-scale unification assumptions. The effort above in this blog post (which is being written totally afresh to replace obsolete scribbles at the current version of the page http://quantumfieldtheory.org/Proof.htm) similarly doesn’t frame any hypotheses.

It’s actually well-proved geometry, the well-proved Newtonian first and second laws, well-proved redshift (which can’t be explained by ‘tired light’ speculation, but is a known and provable effect of recession, since the Doppler effect – unlike ‘tired light’ – is experimentally confirmed to occur) and similar hard, factual evidence. As explained in the previous post, the U(1) symmetry in the standard model is wrong, but apart from that misinterpretation and associated issues with the Higgs mechanism of electroweak symmetry breaking, the standard model of particle physics is the best checked physical theory ever: forces are the result of gauge boson radiation being exchanged between charges.

*****

I’ve just received an email from CERN’s document server:

From: “CDS Support Team” <cds.alert@cdsweb.cern.ch>

To: <undisclosed-recipients:>

Sent: Friday, May 25, 2007 4:30 PM

Subject: High Energy Physics Information Systems Survey

Dear registered CDS user,

The CERN Scientific Information Service, the CDS Team and the
SPIRES Collaboration are running a survey about the present and the future
of HEP Scientific Information Services.

The poll will close on May 30th. If you have not already
answered it, this is the last reminder to invite you to fill an anonymous
questionnaire at

<http://library.cern.ch/poll.html>

it takes about 15 minutes to be completed and *YOUR* comments and
opinions are most valuable for us.

If you have already answered to the questionnaire, we wish to
thank you once again!

With best regards,

The CERN Scientific Information Service, the CDS Team, the
SPIRES Collaboration

*****

This email relates to my authorship of one paper on CERN, http://cdsweb.cern.ch/record/706468, and it’s really annoying that I can’t update, expand and correct that paper because CERN closed that archive and now only accepts updates to papers that are on the American archive, arXiv (American spelling). I pay my taxes in Europe where they help fund CERN. I can’t complain if arXiv don’t want to publish physics or want to eradicate physics and replace it with extra-dimensional ‘not even wrong’ spin-2 gravitons. But it is disappointing that there is no competitor to arXiv run by CERN anymore. By closing down external submissions and updates to papers hosted exclusively by CERN’s document server, they have handed total control of world physics to a bunch of yanks obsessed by the string religion, who try to dictate it to everyone and to stop physicists having the freedom to do checkable, empirically defensible research on fundamental problems. Well done, CERN.

(CERN, by the way, is a French abbreviation, and in World War II the government of France surrendered officially to another dictatorial bunch of mindless idealists, although fortunately there was an underground resistance movement. Although CERN is located on the border of France and Switzerland, France dominates Europe and seems to control the balance of power. I wouldn’t be surprised if their defeatist, collaborationist attitude towards arXiv was responsible for this travesty of freedom. However, I’m grateful to have anything on such a server at all. If I were in America, my situation would be far worse. Some arXiv people in America appear to actually try to stop physicists giving lectures in London; it demonstrates what bitter scum some of the arXiv people are. See also the comments here. However, some respectable people have papers on arXiv, so I’m not claiming that 100% of it is rubbish, although the string theory stuff is.)

Factual heresy

Below there is a little compilation of factual heresy from other people, just to well and truly finish off this post. The Michelson-Morley experiment preserves the gravitational field (‘aether’ to use an ambiguous and unhelpful term), simply because the contraction in the direction of motion (due to the behaviour of the gravitational field, causing inertial force which resists acceleration, according to Einstein’s equivalence principle whereby inertial mass = gravitational mass) means light has a shorter distance to go in the direction of motion!

The instrument is physically contracted. The fact that the photons which are slowed down by the Earth’s motion only have to travel a shorter distance than those travelling transversely (which aren’t slowed down) means that the instrument shows no interference fringes: the effect of the Earth’s motion in slowing down one beam is cancelled out by the contraction of the instrument, which means that beam has less far to travel. It’s like a race where the slower the runner, the shorter the distance their lane extends before they reach the finish post: all runners arrive at the same time, having gone unequal distances at unequal speeds:

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
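A minimal numerical check of this contraction argument (the arm length and the assumed speed through the gravitational field below are illustrative values, not those of the actual instrument):

    import math

    C = 2.99792458e8   # speed of light, m/s
    L = 11.0           # arm length, m (illustrative)
    V = 3.0e4          # assumed speed of the apparatus, m/s (roughly Earth's orbital speed)

    beta_sq = (V / C) ** 2
    gamma = 1.0 / math.sqrt(1.0 - beta_sq)

    t_transverse = (2 * L / C) * gamma                 # round trip in the transverse arm
    t_parallel_rigid = (2 * L / C) / (1.0 - beta_sq)   # parallel arm, no contraction
    t_parallel_contracted = t_parallel_rigid / gamma   # parallel arm shortened by 1/gamma

    print(t_parallel_contracted - t_transverse)   # ~0 (equal up to rounding): contraction cancels the delay
    print(t_parallel_rigid - t_transverse)        # the delay a rigid instrument would show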

One funny or stupid denial of this was in a book called Einstein’s Mirror by a couple of physics lecturers, Tony Hey and Patrick Walters. They seemed to vaguely claim, in effect, that in the Michelson-Morley experiment the arms of the instrument are of precisely the same length and measure light speed absolutely, and then they claimed that if anyone built a Michelson-Morley instrument with arms of unequal length, the contraction wouldn’t work. In fact, the arms were never of equal length to within a wavelength of light to begin with, and they only detected the relative difference in apparent light speed between two perpendicular directions by utilising interference fringes, which measure the speed in one direction relative to another, not absolute speed in any direction. You can’t measure the speed of light with the Michelson-Morley instrument; it only shows whether there is a difference between two perpendicular directions if you implicitly assume there is no length contraction!

It’s really funny that Eddington made Einstein’s special relativity (anti-aether) famous in 1919 by confirming aetherial general relativity. The media couldn’t be bothered to explain aetherial general relativity, so they explained Einstein’s earlier false special relativity instead!

‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, ‘New Pathways in Science’, v2, p39, 1935.

‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.

‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5; written quickly to get Jewish Infeld out of Nazi Germany and accepted as a worthy refugee in America.

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities… According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)

‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’ – Einstein’s Legacy – Where are the “Einsteinians?”, by Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm

‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90. (However, this is a massive source of controversy in GR because it’s a continuous approximation to discrete lumps of matter as a source of gravity which gives rise to a falsely smooth Riemann curvature metric; really continuous differential equations in GR must be replaced by a summing over discrete – quantized – gravitational interaction Feynman graphs.)

‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.

‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.

‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘… the Heisenberg formulae [virtual particle interactions cause random pair-production in the vacuum, introducing indeterminancy] can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’ – G. Builder, ‘Ether and Relativity’ in the Australian Journal of Physics, v11 (1958), p279.

(This paper of Builder on absolute velocity in ‘relativity’ is the analysis used and cited by the famous paper on the atomic clocks being flown around the world to validate ‘relativity’, namely J.C. Hafele in Science, vol. 177 (1972) pp 166-8. So it was experimentally proving absolute motion, not ‘relativity’ as widely hyped. Absolute velocities are required in general relativity because when you take synchronised atomic clocks on journeys within the same gravitational isofield contour and then return them to the same place, they read different times due to having had different absolute motions. This experimentally debunks special relativity. Einstein was wrong when he wrote in Ann. d. Phys., vol. 17 (1905), p. 891: ‘we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ See, for example, page 12 of the September 2005 issue of ‘Physics Today’, available at: http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf.)

So we see from this solid experimental evidence that the usual statement that there is no ‘preferred’ frame of reference, i.e., a single absolute reference frame, is false. Experimentally, a swinging pendulum or spinning gyroscope is observed to stay true to the stars (which, as seen from our observation point, are not moving at angular velocities large enough to cause any significant problem with using them as an absolute reference frame for most purposes).

If you need a more accurate standard, then use the cosmic background radiation, which is the truest blackbody radiation spectrum ever measured in history.

These different methods of obtaining measurements of absolute motion are not really examining ‘different’ or ‘preferred’ frames, or pet frames. They are all approximations to the same thing, the absolute reference frame. All the Copernican propaganda since the time of Einstein that: ‘Copernicus didn’t discover the earth orbits the sun, but instead Copernicus denied that anything really orbited anything because he thought there is no absolute motion, only relativism’, is a gross lie. That claim is just the sort of brainwashing double-think propaganda which Orwell accused the dictatorships of doing in his book ‘1984’. You won’t get any glory following the lemmings over the cliff. Copernicus didn’t travel throughout the entire universe to confirm that the earth is: ‘in no special place’. Even if he did make that claim, it would not be founded upon any evidence. Science is (or rather, should be) concerned with being unprejudiced in areas where there is a lack of evidence.

IMPORTANT:

The article above is extracted from the blog post here, and readers should be aware that there are vital comments with amplifications and explanations in them which are not included in the extract above. There are also further vital developments in other blog posts here, here, here and here.

LINKS (the links sidebar recently disappeared from this blog when I changed format, and I cannot retrieve them easily, so the links are listed below instead):

Links

125 thoughts on “Quantum gravity mechanism and predictions (6 May 2009 update)”

  1. copy of a comment (if you like the comment below, see also my little article http://quantumfieldtheory.org/Dingle.pdf ):

    http://cosmicvariance.com/2007/05/27/smolin-on-einstein-in-the-new-york-review-of-books/#comment-266146

    Professor Smolin has written some funny things about Einstein. His description in The Trouble with Physics of how he went to the Institute for Advanced Study to meet Freeman Dyson and find out what Einstein was like, was hilarious. (Dyson himself went there to meet Einstein in the late 40s but never did meet him, because the evening before his meeting he read a lot of Einstein’s recent research papers and decided they were rubbish, and skipped the meeting to avoid an embarrassing confrontation.) In an earlier article on Einstein, Smolin writes:

    ‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’

    – Einstein’s Legacy – Where are the “Einsteinians?”, by Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm

    This definitely isn’t what’s required by school physics teachers and string theorists, who both emphasise that special relativity is 100% correct because it’s self-consistent and has masses of experimental evidence. Their argument is that general relativity is built on special relativity, and they ignore Einstein’s own contrary statements like

    ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).

    Einstein does actually admit, therefore, that special relativity is wrong as stated in his earlier paper in Ann. d. Phys., vol. 17 (1905), p. 891, where he falsely claims:

    ‘Thence [i.e., from the SR theory which takes no account of accelerations or gravitation] we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’

    This is by consensus held to be the one error of special relativity, see for example http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf

    When clocks were flown around the world to validate ‘relativity’, they actually validated the absolute coordinate system of general relativity (the gravitational field is the reference frame). G. Builder, in a 1958 article called ‘Ether and Relativity’ in the Australian Journal of Physics, v11 (1958), p279, writes:

    ‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’

    The famous paper on the atomic clocks being flown around the world to validate ‘relativity’ is J.C. Hafele in Science, vol. 177 (1972) pp 166-8, which cites ‘G. Builder (1958)’ for the analysis of the atomic clock results. Hence the time-dilation validates the absolute velocities in Builder’s ether paper!

    In 1995, physicist Professor Paul Davies – who won the Templeton Prize for religion (I think it was $1,000,000) – wrote on pp. 54-57 of his book About Time:

    ‘Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle… who wrote … Relativity for All, published in 1922. He became Professor … at University College London… In his later years, Dingle began seriously to doubt Einstein’s concept … Dingle … wrote papers for journals pointing out Einstein’s errors and had them rejected … In October 1971, J.C. Hafele [used atomic clocks to defend Einstein] … You can’t get much closer to Dingle’s ‘everyday’ language than that.’

    Dingle wrote in the Introduction to his book Science at the Crossroads, Martin Brian & O’Keefe, London, 1972, c2:

    ‘… you have two exactly similar clocks … one is moving … they must work at different rates … But the [SR] theory also requires that you cannot distinguish which clock … moves. The question therefore arises … which clock works the more slowly?’

    This question really kills special relativity and makes you accept that general relativity is essential, even for clocks in uniform motion. I don’t think Dingle wrote the question very well. He should have asked clearly how anyone is supposed to determine which clock is moving, in order to calculate the time-dilation.

    If there is no absolute motion, you can’t determine which clock runs the more slowly. In chapter 2 of Science at the Crossroads, Dingle discusses Einstein’s error in calculating time-dilation with special relativity in 1905 and comments:

    ‘Applied to this example, the question is: what entitled Einstein to conclude from his theory that the equatorial, and not the polar, clock worked more slowly?’

    Einstein admitted even in popular books that wherever you have a gravitational field, velocities depend upon absolute coordinate systems:

    ‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

    The real brilliance of Einstein is that he corrected his own ideas when they were too speculative (e.g. his ‘biggest blunder’, the large positive CC to cancel out gravity at the mean intergalactic distance, keeping the universe from expanding). What a contrast to string theory.

  2. A little more about the ‘absolute’ reference frame provided by the real existence of the gravitational field:

    (1) The clocks time dilation experiment is experimental proof of absolute not relative motion, because you have to use an absolute reference frame to determine which clock is moving and which is not moving.

    (2) In Yang-Mills (exchange radiation) quantum gravity, all masses are exchanging some sort of gravitons (spin 1 according to the physics proved in this post; spin 2 as far as the 10/11 dimensional 10^500 universes of string ‘theorists’ are concerned).

    This means that the average locations of all the masses in the universe gives us the absolute reference frame.

    Here we get into the obvious issue as to whether space is boundless or not. Until 1998, it was believed without observational evidence that space was boundless, i.e., that gravitation (the spacetime curvature causing gravitational attraction) extends across all distance scales. Because of this, geodesics (the lines in space along which photons or small pieces of matter travel in spacetime) would be curved even on the largest scales, so all lines would curve back to their point of origin. This would allegedly mean that space is boundless, so that every person – no matter where they are in the universe – would see the same isotropic universe surrounding them.

    However there are problems with this idea. Firstly, the universe isn’t isotropic because the thing we see coming from the greatest distance, the cosmic background radiation emitted 400,000 years after the big bang (thus coming from about 13,300 million light years away) is certainly not isotropic:

    ‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’

    – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.

    However, the main problem with the old idea of boundless space, and with the implication that the universe should be isotropic around every observer wherever they are in the universe (a pretty unscientific claim even if it doesn’t disagree with observations made from here on Earth, since nobody has actually been everywhere in the universe to observe whether it looks isotropic from other places or not, and the difficulties of travelling to distant galaxies make it a ‘not even wrong’ piece of speculative guesswork which is not checkable), is that there is no actual gravity on the greatest distance scales.

    This arises because of the redshift of gravitons exchanged between masses. On the greatest distance scales, the redshift is greatest, so the gravitons have little energy E = hf and thus little momentum p = E/c = hf/c, so they can’t cause gravitational effects over such long distances! Hence, there can’t be any distant curvature of spacetime. Go a long way, and quantum gravity tells you that instead of you travelling along a closed geodesic circle which will return you sooner or later to the place you started from (which is what general relativity falsely predicts, since it ignores graviton redshift over long distances in an expanding universe), gravitational effects from curvature will diminish because the exchange of gravitons with the matter of the universe will become trivial.

    This was observationally confirmed by Perlmutter’s supernova redshift observations in 1998, which showed a lack of gravitational slowing of the most distant masses that can be observed!

    Sadly, instead of acknowledging that this is evidence for quantum gravity, the mainstream in astronomy tried to resurrect a disproved old idea called the cosmological constant to provide a repulsive force at long distances whose strength they adjust (by varying the assumed amount of unobservable ‘dark energy’ powering the assumed repulsion force) to exactly cancel out the attractive gravity force over those very long distances.

    This cosmological constant is a false idea which goes back to Einstein in 1917, who thought the universe was static and used a massive positive cosmological constant to cancel out gravity over a distance equal to the average distances between galaxies. He believed that this would make the universe stable by preventing galaxies from being attracted to one another. However, he was wrong because the role of the cosmological constant for that purpose would make the universe unstable like a pin balanced upright, standing on its point (any slight variation of an inter-galactic distance from the average value would set off the collapse of the universe!).

    The resurrection of the cosmological constant (lambda) is similar to the original Einstein cosmological constant idea. The new cosmological constant is also positive in sign like Einstein’s 1917 ‘biggest blunder’, but it differs in quantity: it is very small compared to Einstein’s 1917 value. Because it is so much smaller, the repulsive force it predicts (which increases with distance) only becomes significant in comparison to gravitational attraction when at much greater distances, where the gravity is weaker.

    There are several reasons why the new small positive cosmological constant is a fiddle. First, Lunsford using a unification of electromagnetism and gravitation in which there are 6 effective dimensions (3 expanding time dimensions which describe the expanding spacetime universe, and 3 contractable spatial dimensions which describe the contractable matter in the universe which gets squeezed by radiation pressure due to the gravitational field and motion in the gravitational field) proves that the cosmological constant is zero:

    http://cdsweb.cern.ch/record/688763

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

    Second, detailed observations of recession using gamma ray bursters suggest that the value of the cosmological ‘constant’ and dark energy is not actually constant at all but is evolving:

    http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

    http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta=

    Finally, the physical mechanism (graviton redshift) for the lack of gravitational retardation of distant receding matter is incompatible with a small positive cosmological constant! See

    https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/

    Although naively, you might expect that a small positive cosmological constant, by cancelling out gravity at great distances, does the same thing as graviton redshift, it does not have the same quantitative features.

    Graviton redshift cancels out gravity (i.e., gravitational retardation of distant receding matter) at great distances, but doesn’t cause repulsion at still greater distances. A small positive cosmological constant will at a particular distance do the same as graviton redshift (cancelling gravity), but at greater distances than that it differs from quantum gravity since it has a net repulsive force. Graviton redshift cancels gravity at all great distances without ever causing repulsion, unlike a positive cosmological constant (regardless of the size of the cosmological constant, which just determines the quantitative distance beyond which the net force is repulsive).

    Professor Phil Anderson points out that the data don’t require anything more than a cancelling of gravity at great distances:

    ‘… the flat universe is just not decelerating, it isn’t really accelerating …’ –

    http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

  3. I want to add some comments about the exact role of Loop Quantum Gravity (LQG) and also about the Zen-like interpretation of Feynman’s path integrals:

    ‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

    – Prof. Clifford V. Johnson’s comment

    ‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

    – Feynman, QED, Penguin, 1990, page 54.

    ‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’

    – Prof. Sean Carroll’s blog post on laws

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    Clifford Johnson’s argument is pictorially illustrated in Chapter I.2 (Path Integral Formulation of Quantum Physics) of Zee’s book, Quantum Field Theory in a Nutshell.
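    As a toy numerical illustration of ‘summing over the two paths’ (all the numbers below are arbitrary illustrative choices, not taken from the quoted comment), add the complex amplitude exp(i*2*pi*L/lambda) for each slit-to-screen path and square the magnitude of the sum to get the relative intensity on the screen:

        import cmath, math

        WAVELENGTH = 500e-9       # m
        SLIT_SEPARATION = 100e-6  # m
        SCREEN_DISTANCE = 1.0     # m

        def relative_intensity(x_on_screen):
            """Sum the phase factors for the two slit paths, then square."""
            amplitude = 0j
            for slit_y in (+SLIT_SEPARATION / 2, -SLIT_SEPARATION / 2):
                path_length = math.hypot(SCREEN_DISTANCE, x_on_screen - slit_y)
                amplitude += cmath.exp(2j * math.pi * path_length / WAVELENGTH)
            return abs(amplitude) ** 2

        for x in (0.0, 2.5e-3, 5.0e-3):   # central maximum, first minimum, next maximum (approx.)
            print(x, relative_intensity(x))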

    I think there is a big similarity in the underlying assumption, in this explanation of path integrals, to the underlying assumption of LQG.

    It’s clear that path integrals apply to different sorts of particles – photons, electrons, etc. – which behave to give the well known interference effects when the double-slit experiment is done. However, physically the mechanism of what is behind the success of path integrals may vary.

    One big question with bosonic radiation like photons is how it can scatter in the vacuum below the Schwinger threshold electric field strength for pair-production of fermions: photons interact with fermions, not with other photons (i.e., photons don’t obey the Pauli exclusion principle).

    In other words, the diffraction of light in the double slit experiment is due to the presence of fermions (electrons) in the edges of the slit material. Take away the physical material of the mask with its slits, and the photons will be unable to diffract at all. So the path integral or sum over histories cannot be interpreted correctly by making endless holes in the mask until the mask completely disappears.

    However, this argument ignores the presence of charged radiation in the vacuum which mediates all electromagnetic interactions, as described in this blog post. So we get the question arising: does the path integral (sum over histories) arise because photons and other particles interact with charged gauge boson exchange radiation present throughout the zero-point field of the vacuum?

    The answer seems to be yes. This is exactly where the formulation of LQG comes into the argument. Because the spin 1 gravitons (not the widely assumed spin-2 ones, see https://nige.wordpress.com/2007/05/19/sheldon-glashow-on-su2-as-a-gauge-group-for-unifying-electromagnetism-and-weak-interactions/ ) used to make the checkable predictions in this post don’t interact with one another (photons don’t interact with one another), LQG, which is based on spin-2 gravitons, does not directly apply to gravity. But it will be useful for electromagnetic forces where the gauge bosons are charged. What I like about the LQG framework is that it is applying well-validated (the double slit experiment is well checked) path integral concepts to model exchange radiation in the vacuum: as Smolin’s Perimeter Institute lectures explain clearly, the path integral to determine fundamental forces is the sum over the interaction graphs of what the gauge bosons are doing in the vacuum. This path integral naturally gives a relationship between the cause of the field and the acceleration produced (curvature of spacetime, or force effect) which is similar to Einstein’s general relativity.

  4. The key background experimental fact behind the idea that the electromagnetic force gauge bosons are charged (rather than being neutral photons) is the nature of the logic step and the theoretical modifications it necessitates to the traditional role of Maxwell’s displacement current:

    http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html :

    “… What actually happens in the sloping part of the real logic step is that electrons are accelerated in non-zero time, and in so doing radiate energy like a radio transmitter antenna. Because the current variation in each conductor [you need 2 conductors to propagate a logic step, each carrying an equal and opposite current] is an exact inversion of that in the other, the fields from the radio waves each transmits is capable of exactly cancelling the fields from the signal from the opposite conductor. Seen from a large distance, therefore, there is no radio transmission of energy whatsoever. But at short distances, between the conductors there is exchange of radio wave energy between conductors in the rising portion of the step. This exchange powers the logic step … That’s why the speed of a logic pulse is the speed of light for the insulator between and around the conductors. …”

  5. Below are copied a couple of comments from the previous post on this blog,

    https://nige.wordpress.com/2007/05/19/sheldon-glashow-on-su2-as-a-gauge-group-for-unifying-electromagnetism-and-weak-interactions/

    because they explain in several ways why the black hole size for a fundamental particle like an electron is physically more sensible than the Planck length:

    copy of a comment:

    http://kea-monad.blogspot.com/2007/05/blog-notice.html

    It’s an interesting post about the Planck units. Actually, the Planck units are very useful to befuddled lecturers who confuse fact with orthodoxy.

    The Planck scale is purely the result of dimensional analysis, and Planck’s claim that the Planck length was the smallest length of physical significance is vacuous because the black hole event horizon radius for the electron mass, R = 2GM/c^2 = 1.35*10^{-57} m, which is over 22 orders of magnitude smaller than the Planck length, R = square root (h bar * G/c^3) = 1.6*10^{-35} m.

    Why, physically, should this Planck scale formula hold, other than the fact that it has the correct units (length)? Far more natural to use R = 2GM/c^2 for the ultimate small distance unit, where M is electron mass. If there is a natural ultimate ‘grain size’ to the vacuum to explain, as Wilson did in his studies of renormalization, in a simple way why there are no infinite momenta problems with pair-production/annihilation loops beyond the UV cutoff (i.e. smaller distances than the grain size of the ‘Dirac sea’), it might make more physical sense to use the event horizon radius of a black hole of fundamental particle mass, than to use the Planck length.

    All the Planck scale has to defend it is a century of obfuscating orthodoxy.

    Comment by nc — May 21, 2007 @ 2:19 pm
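    A quick numerical check of the two length scales compared in the comment above (standard constants only; nothing here depends on this blog’s model):

        import math

        G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
        C = 2.99792458e8               # speed of light, m/s
        HBAR = 1.054571817e-34         # reduced Planck constant, J s
        M_ELECTRON = 9.1093837015e-31  # electron mass, kg

        event_horizon_radius = 2 * G * M_ELECTRON / C ** 2   # ~1.35e-57 m
        planck_length = math.sqrt(HBAR * G / C ** 3)         # ~1.6e-35 m

        print(event_horizon_radius, planck_length)
        print(planck_length / event_horizon_radius)   # ~1.2e22, i.e. about 22 orders of magnitude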

    Non-submitted comment (saved here as it is a useful brief summary of some important points of evidence; I didn’t submit the comment to the following site in the end because it covers a lot of ground and doesn’t include vital mathematical and other backup evidence):

    http://borcherds.wordpress.com/2007/05/22/toroidal-black-holes/

    This is interesting. A connection I know between a toroidal shape and a black hole is that, if the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and it has a magnetic field which is a torus.

    Experimental evidence for why an electromagnetic field can produce gravity effects involves the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since the electrons have a kinetic energy of half the product of their mass and the square of their velocity (typically 1 mm/s for a 1 A current).
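    A rough check of the claim that the drift-electron kinetic energy is negligible; the wire cross-section and the copper free-electron density below are my own assumed illustrative values, not figures from the comment:

        E_CHARGE = 1.602176634e-19     # elementary charge, C
        M_ELECTRON = 9.1093837015e-31  # electron mass, kg

        current = 1.0               # A
        electron_density = 8.5e28   # free electrons per m^3 (copper, approximate)
        cross_section = 0.1e-6      # wire cross-section, m^2 (0.1 mm^2, assumed)

        # Drift velocity from I = n * e * A * v, then the kinetic energy per electron.
        drift_velocity = current / (electron_density * E_CHARGE * cross_section)
        kinetic_energy = 0.5 * M_ELECTRON * drift_velocity ** 2

        print(drift_velocity)   # ~7e-4 m/s, i.e. under a millimetre per second
        print(kinetic_energy)   # ~2e-37 J per electron: utterly negligible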

    So the energy current flows into the capacitor at light speed. Take the capacitor to be simple, just two parallel conductors separated by a dielectric composed of just a vacuum (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out since the reflected energy has a magnetic field vector curling the opposite way to the inflowing energy. (If you have a fully charged, ’static’ conductor, it contains an equilibrium with similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field as observed.)

    The important thing is that the energy keeps going at light velocity in a charged conductor: it can’t ever slow down. This is important because it proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).

    You can prove this if you discharge any conductor x metres long which is charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse which has a height of v/2 volts and a duration of 2x/c seconds. The apparently ’static’ energy of v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must proceed and reflect from one end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long or 2x/c seconds in duration, instead of a pulse at v volts and x metres long or x/c seconds in duration, which you would expect if the electromagnetic energy in the capacitor was static and drained out at light velocity by all flowing towards the exit.
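    A minimal simulation of that discharge picture, treating the charged conductor as two counter-propagating energy currents of v/2 each, as described above (the cell count and voltage are arbitrary; this is a sketch of the idea, not of any particular experiment):

        # N cells, each holding a left-moving and a right-moving component of v/2.
        # At t = 0 a matched sampling load is connected at cell 0; the far end stays open.
        N = 100
        v = 1.0
        left = [v / 2.0] * N    # components moving toward the load at cell 0
        right = [v / 2.0] * N   # components moving toward the open far end

        output = []
        for _ in range(3 * N):
            output.append(left[0])             # voltage sampled at the matched load
            new_left = left[1:] + [0.0]        # left-movers advance one cell toward the load
            new_right = [0.0] + right[:-1]     # right-movers advance toward the far end
            new_left[N - 1] += right[N - 1]    # open far end: right-movers reflect back
            left, right = new_left, new_right  # matched load: left-movers simply leave

        # The sampled pulse is v/2 high and lasts 2N steps, i.e. 2x/c for a line of length x.
        print(output[0], output[2 * N - 1], output[2 * N])   # 0.5, 0.5, 0.0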

    This was investigated by Catt, who used it to design the first crosstalk (glitch) free wafer scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while he complained that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get into mathematical physics far enough to sort out the electron.

    It’s really interesting to investigate why classical (not quantum) electrodynamics is totally false in many ways. I think quantum electrodynamics and particle-wave duality have blocked progress.

    There are some calculations of quantum gravity based on a simple, empirically-based model (no ad hoc hypotheses), which yield evidence (still needing to be independently checked) that the proper size of the electron is the black hole event horizon radius.

    There is also the issue of a chicken-and-egg situation in QED where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges have to therefore arise from the electric field lines of the virtual photons being continually exchanged.

    How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor is carrying equal currents in opposite directions.

    Hence you end up with the conclusion that the electroweak sector of the SM is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2) with three massless gauge bosons: positive and negatively charged massless bosons mediate electromagnetism and a neutral gauge boson (a photon) mediates gravitation. This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. Since there are around 10^80 charges, electromagnetism is 10^40 times gravity. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
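    A minimal Monte Carlo check of the ‘drunkard’s walk’ scaling invoked above: the root-mean-square sum of N random plus-or-minus contributions grows as the square root of N. (The trial counts are arbitrary; the extrapolation to 10^80 charges and the 10^40 ratio are the claims in the comment, not outputs of the simulation.)

        import math
        import random

        def rms_net_sum(n_contributions, trials=400):
            """Root-mean-square of the sum of n random +1/-1 contributions."""
            total_sq = 0.0
            for _ in range(trials):
                s = sum(random.choice((-1, 1)) for _ in range(n_contributions))
                total_sq += s * s
            return math.sqrt(total_sq / trials)

        for n in (100, 10000):
            print(n, rms_net_sum(n), math.sqrt(n))   # the two agree to within a few per cent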

  6. copy of a comment:

    http://cosmicvariance.com/2007/05/21/guest-post-joe-polchinski-on-science-or-sociology/#comment-273147

    Peter claims: ” the equations one has to solve in order to find the “vacuum state” corresponding to our world are at least as complicated if not more so than the ones that define the SM”

    So what? Are these equations required to be simpler? Is such a simplification always the case when a large, deeper, and more comprehensive physics theory includes an earlier theory of much more limited scope? – gina

    More complex mathematical models for the physical world are justified if there is a pay-back in terms of solid predictions which can validate the need for the additional complexity. The nemesis of physics is the endlessly complex theory which makes no falsifiable predictions and is proudly defended for being incomplete.

    Ockham’s razor (entia non sunt multiplicanda praeter necessitatem): “Entities should not be multiplied beyond necessity”.

    String theory multiplies entities without necessity. Where is the necessity for anything in string? If there is no falsifiable prediction, there is no necessity.

    Don’t get me wrong: I’m all for complex theories when there is a pay-back for the additional complexity. In certain ways, Kepler’s elliptical orbits were more ‘complex’ than the circular orbits of both Ptolemy and Copernicus, oxidation is more complex than phlogiston, and caloric was replaced by two theories to explain heat: kinetic theory of gases, and radiation.

    These increases in complexity didn’t violate Ockham’s razor because they were needed. Maxwell’s aether violated Ockham’s razor because it required moving matter to be contracted in the direction of motion (FitzGerald contraction) in order for the Michelson-Morley experiment to be explained. This was an ad hoc adjustment, and aether had to be abandoned because it did not make falsifiable predictions. Notice that aether was the leading mathematical physics theory of its heyday, circa 1865 when Maxwell’s equations based on the theory were published.

    String theory has not even led to anything predictive like Maxwell’s equations. String theorists should please just try to understand that, until they get a piece of solid evidence that the world is stringy, they should stop publishing totally speculative papers which saturate the journals with one form of speculative, uncheckable orthodoxy which makes it impossible for others to get checkable ideas published and on to arxiv. (Example: Sent: 02/01/03 17:47 Subject: Your_manuscript LZ8276 Cook {gravity unification proof} Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters.)

  7. The Greek symbols used in this and other posts and sites will not display properly on computers which don’t have Greek symbol fonts installed. Hence on such a computer you will see r appear in place of the Greek letter rho, and p appear in place of pi, etc.
    The problem here is that I use r a lot for radius and p a lot for momentum, so the maths will not be readable on such computers. One solution is to produce PDF files of the pages, since PDF documents contain the symbol set used in the pages within the file itself. All people have to get is the free PDF reader from Adobe, http://www.adobe.com/products/reader/
    Hence I have put this blog (unfortunately posts only, no comments – and since I’ve been using comments to add supplementary information this is a problem) on PDF format at
    http://quantumfieldtheory.org/1.pdf
    (204 pages, 3.27 MB)
    while this post (including the comments above) is in PDF format at:
    http://quantumfieldtheory.org/1a.pdf
    (50 pages, 1.29 MB)
    I’ve now finished my blogging activities (which scientifically has been fairly fruitful, although the results are messy long essays), and will devote future spare time not to blogging but to writing up the content to go on the site http://quantumfieldtheory.org
    It is too hard writing long mathematical blog posts on either blogspot or wordpress without sophisticated mathematical software and lots of time. What is on this blog is enough to be going on with. I think it is a disaster to try writing anything on a computer, because you end up always typing at the speed you are thinking, so the results are inclined to be a stream of ideas. It’s vital at this stage to write the papers on paper using a pen, which will permit better editing. You can’t edit efficiently on a computer because there is just too much material you don’t want to cut out. The only way to get something well written and concise is to write it by hand, correct it on paper, and then revise it again while typing it into a computer. Better still, use the computer as a typewriter and don’t save files – just print them out. That forces you to retype the whole thing from beginning to end, and while you do so, you naturally condense that material which is the least concise to save retyping it. It’s expensive in terms of time, but it’s probably the only way to get a lot of detailed editing done efficiently on a complex topic like this.

  8. Just a final note about probability theory abuse in modern physics, and how this can lead to experimental tests.

    As quoted in this post, Heisenberg’s uncertainty principle and the associated mechanism for Schroedinger’s equation in quantum mechanics are due to the scattering of electrons by one another as they orbit.

    Schroedinger’s equation can only be solved exactly for single electron atoms, like hydrogen isotopes (hydrogen, deuterium, tritium). For all heavier atoms, the solutions require approximations to be made, and are inexact.

    The claim here is that even if you picture the atom as a miniature solar system, it would still look like a Schroedinger atom, with chaotic orbits instead of classical elliptical orbits, because the electrons strongly perturb one another. The reason the solar system doesn’t have chaotic (Schroedinger) orbits is mainly due to the fact that the sun has 99.8% of the mass of the solar system, and the mass is the gravitational charge holding the solar system together. Hence the planets don’t affect the orbits of each other much, because they are relatively light and far apart. By far the main source of gravity they experience is due to the sun’s mass.

    That’s why the planets have deterministic, classical orbits; chaotic effects are trivial.

    If you made an atom like the solar system, the electric charges of the electrons would need to be far smaller than they are while keeping the nuclear electric charge high. In such a case, the electrons would interfere with one another less, so the orbits would become more classical.

    However, it is likely that the pair-production mechanism (spontaneous appearance and annihilation of pairs of virtual positrons and electrons out as far as 1 fm from an electron core) causes random, spontaneous deflections to the motion of an electron on a small scale, such as in an atom. This mechanism would explain why the hydrogen atom’s electron doesn’t have a classical orbit.

    The problem with the Schroedinger equation is that it implies that there is some chance of finding the electron at any distance from the nucleus; the peak probability corresponds to the classical Bohr atom radius, but there is some chance of finding the electron at arbitrarily greater distances.

    This is probably unphysical, as there is probably a maximum range on the electron determined by physical considerations. The electron loses its radial kinetic energy as it moves further from the nucleus, due to the deceleration effect produced by the Coulomb attraction. At some distance, the electron’s outward directed radial velocity will fall to zero, and it will then stop receding from the nucleus and start falling back. This physical model doesn’t contradict Schrodinger’s model as a first-approximation, it just supplements it with a physical limitation due to conservation of energy.

    Similarly, sometimes you hear crackpot physics lecturers claiming that there is a definite small probability of all the air in a room happening to be in one corner. That actually isn’t possible unless the dimensions of the room are on the order of the mean free path of the air molecules. The reason is that pressure fluctuations physically result from collisions of air molecules, which disperse the kinetic energy isotropically, instead of concentrating it in a particular place. In order to have all the air in a room located in one corner, you have to explain the intermediate steps required to achieve that result: before all the air gets into one corner, there must be a period where the air pressure is increasing in the corner and decreasing elsewhere. As the air pressure in the corner begins to increase, the air in the corner will expand in causal consequence, and the pressure will fall. Hence, it is impossible to get all the air in a room in a corner by random fluctuations of pressure: the probability is exactly zero, rather than being low but finite. (Unless the room is very small and contains so few air molecules that there are negligible interactions between them to dissipate pressure isotropically.)

    Similarly, a butterfly flapping its wings has zero probability of triggering off a hurricane, because of the stability of the atmosphere to small scale fluctuations. Hurricanes are triggered by the large scale deflection due to the Coriolis effect (Earth’s rotation) of rising warm moist air which has been evaporated from a large area of warm ocean (surface temperature above 27 C). They are not triggered by small scale irregularities.

    Small scale irregularities and random chances can be multiplied up into massive effects, but only if there is instability to begin with. For example, on a level surface nothing can generate a landslide. But on a steeply sloping surface covered by loose rocks which have been loosened from the surface by weathering, the system may become increasingly unstable, until the rocks over a wide area of the sloping surface need only a minor trigger to set off an avalanche.

    The same effect occurs when too much snow lands on steep mountain slopes. Another example of an unstable situation is trying to balance a pyramid on its apex. If it is balanced like that temporarily, it is highly unstable and the slightest impulse (even from a butterfly landing on it) would trigger off a much bigger event.

    You can see why the physics in this comment – which is obvious – is officially ignored by mainstream physicists. They’re not completely stupid, but they believe in selling hoaxes and falsehoods which sound romantic to them and the gullible, naive fools who buy those claims. At the end of the day, modern physics is in a dilemma.

    It is immoral to knowingly sell modern physics packaged in the usual extra-dimensional, stringy magic multiverse, in which anything is possible to some degree of probability.

    The sociology of group-think in physics:

    ‘(1). The idea is nonsense.

    (2). Somebody thought of it before you did.

    (3). We believed it all the time.’

    – Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in his autobiography, Home is Where the Wind Blows, Oxford University Press, 1997, p154).

    ‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’

    – Nicolo Machiavelli, The Prince, Chapter VI:
    Concerning New Principalities Which Are Acquired By One’s Own Arms And Ability, http://www.constitution.org/mac/prince06.htm

  9. Just found an interesting brief independent comment about gravity mechanism (I’ve omitted the sections which are either 100% wrong or trivial, and have interpolated one amplification in square brackets). Notice, however, that critical comments which follow below it on the physicsmathforums page link by ‘Epsilon=One’ are largely rubbish (example of rubbish: ‘Gravity travels at many, many times the SOL; otherwise the Cosmos would have the motions of the balls on a pool table. Gravity’s universal “entanglement” requires speeds that are near infinite’):

    http://physicsmathforums.com/showthread.php?p=8084#post8084

    09-22-2006, 05:47 PM
    Member bpj1138

    The Shielding Theory

    … Le Sage … proposed that gravity was caused by shielding of universal flux surrounding bodies in close proximity. …

    The first principle of the theory is really that no force can be made to pull objects, it must always be a push force, so gravity is actually caused by pushing of the surrounding space, rather than a planet emitting “pulling particles” that emanate outwards from the planet.

    The space that pushes objects towards each other is the complete surrounding space around a locality (where the two objects are in proximity), so it is a very large sphere with its radius reaching to the light wall (where objects are traveling at the speed of light relative to the center of this sphere, and therefore are redshifted out of view, and out of influence on the center). …

    The push concept also helps to explain why there is no [spin-2, attractive] graviton. It is actually the lack of light, or shadow that causes the effect. Moreover, it explains why gravity travels at the speed of light.

    Another thing that the theory seems to explain is why there is a universal expansion. At some point, roughly on a galactic scale, there is not enough shielding to cause attraction, rather the flux becomes just a force of expansion.

    What’s also remarkable is that even though the theory is based on the geometry of shielding of background flux, the equations (though not based on squares) still yield a function that is almost exactly in the form of the inverse squared distance, as described by Newton.

    … if you think about it, it is often hard to change something from the inside, such as with politics. It’s also an advantage not to have any preconceived ideas. …

    … I don’t think anybody has expressed the idea quite as bluntly as I did, and I say this because again, other people are aware of this idea. I only hope my writing suits the purpose.

    Regards,
    Bart Jaszcz
    http://www.geocities.com/bpj1138

  10. copy of a comment:

    http://kea-monad.blogspot.com/2007/06/light-speed-ii.html

    “… the strange spiral nebulae were in fact distant galaxies, not unlike our own. The same seminar concluded, on the observation that accelerated publication rates had not produced as many major breakthroughs in the last two decades, that technology had finally caught up with observations over the electromagnetic spectrum and that we may well have seen most of what there is to see. Naturally there was some dissent. To believe that in a mere 80 years humanity can go from a relatively trivial understanding of the cosmos to complete comprehension is hubris. …”

    When I did a cosmology course in 1996, one thing that worried me was that the galaxies are back in time. If I were Hubble in 1929, I wouldn’t have just written recession velocity/distance = constant with units of [time]^-1. Because of spacetime, I’d have considered velocity/time past = constant with units of [acceleration].

    This is the major issue. I think it is a fact that the matter is accelerating outward, simply by Hubble’s law

    v = dx/dt = Hx (Hubble’s law)

    Hence, dt = dx/v

    Now, acceleration

    a = dv/dt = dv/(dx/v)

    = v*dv/dx

    = (Hx)*d(Hx)/dx

    = (H^2)x

    Because the universe isn’t decelerating due to gravity, H = 1/t where t is the age of the universe [if the universe were decelerating like a critical density universe, then the relationship would be H = (2/3)/t].

    Hence, a = (H^2)x

    = x/t^2

    which, evaluated at the greatest distance x = ct, gives

    a = c/t

    = c/(x/c)

    = (c^2)/x

    So there’s cosmic acceleration.
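    A quick numerical check of this acceleration, using an assumed round Hubble parameter of about 70 km/s/Mpc (my own illustrative figure, not a number taken from the comment) and evaluating a = (H^2)x at the greatest distance x = c/H:

    c = 3.0e8                   # m/s
    H = 70e3 / 3.086e22         # Hubble parameter in s^-1 (assumed ~70 km/s per megaparsec)
    x = c / H                   # greatest distance, metres
    a = (H ** 2) * x            # equals c*H, and also c**2/x
    print(a)                    # ~7e-10 m/s^2, a very small cosmic acceleration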

    Lee Smolin on page 209 of his book The Trouble with Physics:

    “The next simplest thing [by guesswork dimensional analysis] is (c^2)/R [where R is radius of universe]. This turns out to be an acceleration. It is in fact the acceleration by which the rate of expansion of the universe is increasing – that is, the acceleration produced by the cosmological constant.”

    You can see my problem. The empirically verified Hubble law acceleration happens to be similar to the metaphysical dark energy acceleration. Result: endless confusion about the acceleration of the universe.

    It would be useless to try to discuss it with Smolin or anyone else who believes that the two things are the same thing. The real acceleration of the universe is just an expression of Hubble’s law in terms of velocity/time past.

    The assumed dark energy is completely different, not a real acceleration but a fictional outward acceleration which is supposed to cancel out the (also fictional!) inward acceleration due to gravity over great distances.

    In fact, the inward acceleration due to gravity which the mainstream supposed to be slowing down distant supernovae (until this effect was disproved in 1998 by Perlmutter’s group) isn’t real, because the enormous redshift of light from such great distances applies not just to visible light but also to the gauge bosons of gravity. For this reason, the gravity coupling constant is weakened, because the exchanged energy arrives in a very severely redshifted (very low energy) condition.

    So the cosmological constant (outward acceleration fiddled to be just enough to cancel out inward gravity for Perlmutter’s supernovae, so that the observations fit the lambda-CDM model) is actually spurious.

    There is really no long-range gravity over distances where redshift of light is extreme, because the masses are receding and gravitons are redshifted. Hence, there is no cosmological constant, because the role of the cosmological constant is to cancel out gravitational acceleration, not to accelerate the universe.

    In virtually all the popular accounts of dark energy, popularisers lie and say that the role of dark energy is to accelerate the universe.

    Actually, it’s not real, and even if the dark energy theory were correct, it is a fictional acceleration made up to cancel out gravity’s effect, just like the way the Coulomb force between your bum and your chair stops you from accelerating downward at 9.8 m s^{-2}.

    You don’t hear people describe the normal reaction force of a chair as an upward acceleration.

    So why do people refer to dark energy as the cause of cosmic acceleration? It’s really pathetic. There is an acceleration as big as the fictional dark energy acceleration, but it isn’t due to a cosmological constant or dark energy. It’s just the normal expansion of the universe, due to vector bosons. There’s too much disinformation and confusion out there for anybody to listen. My case is that you take the real acceleration, use Newton’s second law and calculate outward force F=ma, and then the 3rd law to give the inward reaction force carried by vector bosons, and you get gravity after simple calculations.

  11. Mechanisms for the lack of gravitational deceleration at large redshifts (i.e., between gravitational charges – masses – which are relativistically receding from one another)

    One thing I didn’t list in this post (which is otherwise fairly comprehensive, apart from the nature of the electron as trapped negative electromagnetic field energy; see Electronics World, April 2003) is the first major confirmed prediction. This prediction was published in October 1996 and confirmed in 1998.

    This is the lack of gravitational retardation on the big bang at large redshifts, i.e., great distances. There are several mechanisms depending on the theory of quantum gravity you decide to use, but all dispense with the need for a cosmological constant:

    (1) if gravity is due to the direct exchange of spin-2 gravitons between two receding masses, the redshift of the gravitons weakens the effective gravitational charge (coupling constant) at big redshifts, because the energy exchanged by redshifted gravitons will be small, E = hf where h is Planck’s constant and f is frequency of received quanta.

    (2) if gravity is due to the blocking of exchange (shielding as described in detail in this post) then the small amount of receding matter beyond the distant supernovae of interest will only produce a similarly small retarding force of gravity on the supernova.

    This will be cancelled approximately by the redshifted exchange radiation from the much larger mass within a shell centred on us of radius equal to the distance to the supernova. Hence, gravity will be approximately cancelled. For this mechanism, which is the one with evidence behind it, see the post (particularly its illustration) on the old blog:

    http://electrogravity.blogspot.com/2006/04/professor-phil-anderson-has-sense-flat.html

  12. Experimental evidence that quarks are confined leptons with some minor symmetry transformations

    (1) In this blog post it is shown that if in principle you could somehow have enough energy – working against the Pauli exclusion principle and Coulomb repulsion – to press three electrons together in a small space, their polarized vacuum veils would overlap, and all three electrons would share the same polarized dielectric field.

    Instead of the core electric charge of each electron being shielded by a factor of 137.036, the core electric charge of each electron would be shielded by the factor 3 x 137.036, so the electric charge per electron in the closely confined triad, when seen from a long distance, would be -1/3, the downquark charge.
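    The arithmetic of that shared-shielding argument can be sketched as follows (a toy check of the numbers above, not a field-theory calculation):

    shield = 137.036                  # vacuum polarization shielding factor for one isolated lepton
    core = -1.0 * shield              # bare core charge, in units where the observed electron charge is -1
    isolated = core / shield          # -1: observed long-range charge of an isolated electron
    confined = core / (3 * shield)    # -1/3: charge per lepton when three share one polarized region
    print(isolated, confined)         # -1.0  -0.333...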

    This argument doesn’t apply to the case of 3 closely confined positrons: the upquark charge is +2/3, not +1/3. However, Peter Woit explains very simply the complex chiral relationship between weak and electric charges in the book “Not Even Wrong”: positive and negative charges are not the same because of the handedness effects which determine the weak force hypercharge.

    According to electroweak unification theory, there are two symmetry properties of the electroweak unification: weak force isospin and weak force hypercharge. The massive weak force gauge bosons couple to weak isospin, while the electromagnetic force gauge boson assumed by SU(2)xU(1) couples to weak hypercharge.

    There is a relationship between electric charge Q, weak isospin charge Tz and weak hypercharge Yw:

    Q = Tz + Yw/2. – http://en.wikipedia.org/wiki/Weak_hypercharge

    “Type “u” fermions (quarks u, c, t and neutrinos) have Tz = +1/2, while type “d” fermions (quarks d, s, b and charged leptons) have Tz = −1/2.” – http://en.wikipedia.org/wiki/Weak_isospin

    “Yw = −1 for left-handed leptons (+1 for antileptons)

    “Yw = +1/3 for left-handed quarks (−1/3 for antiquarks)” – http://en.wikipedia.org/wiki/Weak_hypercharge
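    As a check, the relation Q = Tz + Yw/2 with the quoted isospin and hypercharge assignments reproduces the familiar charges (a minimal sketch; the function name is just for illustration):

    def charge(Tz, Yw):
        return Tz + Yw / 2.0

    print(charge(+0.5, +1/3))   # up-type quark:   +2/3
    print(charge(-0.5, +1/3))   # down-type quark: -1/3
    print(charge(-0.5, -1.0))   # charged lepton:  -1
    print(charge(+0.5, -1.0))   # neutrino:         0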

    So the factor of 2 discrepancy in the vacuum polarization logic can be addressed by considering the weak force charges.

    (2) When a massive amount of energy is involved in particle collisions (interactions within a second of the big bang, for example), the leptons can be collided hard enough against Coulomb repulsion and Pauli exclusion principle force, that they approach closely.

    The Pauli exclusion principle isn’t violated; what happens is that the tremendous energy just changes the quantum numbers of the weak and strong force charges (leptons well isolated have zero strong force colour charge!), so that the quantum numbers of 2 or 3 confined leptons are extended to permit them to not violate the exclusion principle.

    This change modifies the electric charge because, as stated above, Q = Tz + Yw/2.

    (3) Direct experimental evidence for the fact that leptons and quarks are different forms of the same underlying entity exists: universality. This was discovered when it was found that the weak force controlled beta decay event

    muon (a LEPTON) -> e + electron antineutrino + muon neutrino

    is nearly equal to

    neutron (a QUARK COMPOSED hadron) -> proton + electron + electron antineutrino

    Compelling further experimental evidence that quarks are just leptons with some minor symmetry transformations was analysed by Nicola Cabibbo:

    “Cabibbo’s major work on the weak nuclear interaction originated from a need to explain two observed phenomena:
    “the transitions between up and down quarks, between electrons and electron neutrinos, and between muons and muon neutrinos had similar amplitudes; and

    “the transitions with change in strangeness had amplitudes equal to 1/4 of those with no change in strangeness.

    “Cabibbo solved the first issue by postulating weak universality, which involves a similarity in the weak interaction coupling strength between different generations of particles. He solved the second issue with a mixing angle θc, now called the Cabibbo angle, between the down and strange quarks.

    “After the discovery of the third generation of quarks, Cabibbo’s work was extended by Makoto Kobayashi and Toshihide Maskawa to the Cabibbo-Kobayashi-Maskawa matrix.”

    http://en.wikipedia.org/wiki/Nicola_Cabibbo

  13. To clarify the previous comment further: the minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion simply says that in a confined system, each particle has a unique set of quantum numbers.)

  14. On the causality of the Heisenberg uncertainty principle in its energy-time form: this form is equivalent to the momentum-distance form, as shown in the post, and Popper showed that the latter is causal. The mechanism for pair production by Heisenberg’s uncertainty principle with Popper’s scattering mechanism is that, at high energy (above the Schwinger threshold for pair production, i.e., electric field strengths exceeding 10^18 v/m), the flux of electromagnetic gauge bosons is sufficient to knock pairs of Dirac sea fermions out of the ground state of the vacuum temporarily. This is a bit like the photoelectric effect: photons hitting bound (Dirac sea of vacuum) particles hard enough can temporarily free them for a short period of time, when they become visible as “virtual fermions”. Obviously the ground state of the Dirac sea is invisible to detection, since it doesn’t polarize. Maxwell’s error in assuming that it does polarize, without knowing Schwinger’s threshold for pair production, totally contradicts QED results on renormalization (Maxwell’s electrons would have zero electric charge seen from a long distance, because the polarization of the vacuum would be able to extend far enough – without Schwinger’s limit which corresponds to the IR cutoff – that the real electric charge of the electron would be completely cancelled out!), and so Maxwell’s displacement current is false in fields below 10^18 v/m. What happens instead of displacement current in weak electric fields is radiation exchange, which produces results that have been misinterpreted to be displacement current, as proved at: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

  15. Evidence for redshift implying expansion

    Alternative redshift ideas like tired light theories can’t work as proved here: http://www.astro.ucla.edu/~wright/tiredlit.htm

    Recession-caused redshift does work; it is empirically confirmed by redshifts from stars in different parts of rotating galaxies, etc.

    There is no evidence for tired light whatsoever. The full spectrum of redshifted light is uniformly displaced to lower frequencies, which rules out most fanciful ideas (scattering of light is frequency dependent, so redshift as observed isn’t due to intergalactic dust).

  16. copy of a comment:

    http://motls.blogspot.com/2007/05/varying-speed-of-light-vsl-theories.html

    c = 186,000 miles/second or 300 megametres/second in a vacuum, less in a medium filled with strong electromagnetic fields which slow the photon down (e.g. light travelling inside a block of glass).

    The vacuum has some electric permittivity and magnetic permeability because it has a Dirac sea in it. The Dirac sea only produces observable pairs of charges above Schwinger’s threshold field strength of 10^18 v/m, which occurs at 1 fm from the middle of an electron (you can estimate that by simply setting Coulomb’s law equal to F = qE where E is electric field strength in v/m and q is charge).
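    The ~10^18 v/m threshold figure can be checked against the standard expression for the Schwinger critical field, E_c = (m^2)(c^3)/(e*h-bar); the constants below are ordinary SI values, an addition of mine rather than numbers taken from this comment:

    m_e  = 9.109e-31    # electron mass, kg
    c    = 2.998e8      # speed of light, m/s
    e    = 1.602e-19    # elementary charge, C
    hbar = 1.055e-34    # reduced Planck constant, J*s

    E_c = (m_e ** 2) * (c ** 3) / (e * hbar)
    print(E_c)          # ~1.3e18 V/m, the pair-production threshold field strength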

    This creates a problem for Maxwell’s theory of light, because his displacement current of vacuum gear cogs and idler wheel dynamics isn’t approximated by the real vacuum unless the electric field is above 10^18 v/m. So Maxwell doesn’t explain how radio waves propagate where the field strength is just a few v/m. The actual mechanism in weak electric fields mimics the Maxwell displacement current mathematically, but is entirely different in terms of physical processes: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

    If the spacetime fabric or Dirac sea was really expanding, you might expect the permittivity and permeability of the vacuum to alter over time, like the velocity of light in a block of glass increasing as the density of the glass decreases due to the glass expanding.

    However, what is expanding is the matter of the universe, which is receding. There is no evidence that the spacetime fabric is expanding:

    ‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3.

    (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

    Think of a load of people running along a corridor in one direction. The air around them doesn’t end up following the people, leaving a vacuum at the end the people come from. Instead, a volume of air equal to the volume of the people moves in the opposite direction to the people, filling in the displaced volume. In short, the people end up at the other end of the corridor while the air moves the opposite way and fills in the volume of space the people have vacated.

    There is no reason why the gravitational field should not do the same thing. Indeed, to make it possible to use the stress-energy tensor T_{ab} we have to treat the source of the curvature mechanism as being an ideal fluid spacetime fabric:

    ‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

    This works and is a useful concept. Think about a ship in the sea (like a Dirac sea). The ship goes in one direction, and the water travels around it and fills in the void behind it. So water of volume equal to the ship goes in the opposite direction with a speed equal to the ship’s, and if the ship is accelerating, then the same volume of water has an acceleration equal to the ship’s acceleration but in the opposite direction (this is merely a statement of Newton’s 2nd and 3rd laws of motion: the ship needs a forward force to accelerate, and the water carries the recoil force which is equal and opposite to the forward force, like a rocket’s exhaust gas).

    Evidence that the spacetime fabric does push inward as matter recedes outward is easy to obtain. The Hubble recession

    v = HR

    implies outward acceleration

    a = dv/dt

    where dt = dR/v (because v = dR/dt)

    a = dv/(dR/v) = v*dv/dR

    Putting in v = HR gives

    a = (H^2)R

    That’s the outward acceleration of the mass of the receding universe. By Newton’s 2nd law

    F = ma

    where m is mass of universe. That gives outward force.

    By Newton’s 3rd law, there is an equal inward force, which is the graviton force and predicts the strength of gravity, https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/
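    A very rough numerical sketch of that outward force follows. This is not the detailed calculation in the linked post; both the Hubble parameter and the mass of the receding matter below are assumed round figures of my own:

    c = 3.0e8
    H = 70e3 / 3.086e22        # assumed Hubble parameter, s^-1
    a = c * H                  # outward acceleration (H^2)R evaluated at R = c/H, m/s^2
    m = 1e53                   # assumed order-of-magnitude mass of receding matter, kg
    F = m * a                  # Newton's 2nd law: outward force; the 3rd-law reaction is the inward force
    print(a, F)                # ~7e-10 m/s^2 and ~7e43 N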

    This is evidence that the spacetime fabric isn’t expanding, so light velocity probably is constant.

  17. On the CERN document server closure to updates to papers and to new submissions (except via arxiv) see:

    http://www.math.columbia.edu/~woit/wordpress/?p=154#comment-2339

    Tony Smith Says:

    March 6th, 2005 at 7:22 pm

    Roger Penrose’s book The Road to Reality comes in two editions:
    UK edition (ISBN: 0224044478, Publisher: Jonathan Cape, July 29, 2004)
    and
    USA edition (ISBN: 0679454438, Publisher: Knopf, February 22, 2005).

    The two editions are NOT identical.

    For example:

    The UK edition on page 1050 says in part:

    “… Bibliography
    There is one major breakthrough in 20th century physics that I have yet to touch upon, but which is nevertheless among the most important of all! This is the introduction of arXiv.org, an online repository where physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …as a consequence the pace of research activity has accelerated to unheard of heights. … In fact, Paul Ginsparg, who developed arXiv.org, recently won a MacArthur ‘genius’ fellowship for his innovation. …”
    but
    the USA edition on its corresponding page (also page 1050) says in part:
    “… Bibliography
    … modern technology and innovation have vastly improved the capabilities for disseminating and retrieving information on a global scale. Specifically, there is the introduction of arXiv.org, an online repository where physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …as a consequence the pace of research activity has accelerated to an unprecedented (or, as some might consider, an alarming) degree. …”.
    However,
    the USA edition omits the laudatory reference to Paul Ginsparg that is found in the UK edition.

    For another example:
    The USA edition adds some additional references, including (at page 1077):
    “… Pitkanen, M. (1994). p-Adic description of Higgs mechanism I: p-Adic square root and p-adic light cone. [hep-th/9410058] …”.
    Note that Matti Pitkanen was in 1994 allowed to post papers on the e-print archives now known as arXiv (obviously including the paper
    referenced immediately above), but that since that time Matti Pitkanen has been blacklisted by arXiv and is now barred from posting his work there. His web page account of being blacklisted is at http://www.physics.helsinki.fi/~matpitka/blacklist.html

    It seems to me that it is likely that the omission of praise of arXiv’s Paul Ginsparg and the inclusion of a reference to the work of now-blacklisted physicist Matti Pitkanen are deliberate editorial decisions.

    Also, since the same phrase “… physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …” appears in both editions, it seems to me that Roger Penrose favors the option of posting on arXiv without the delay (and sometimes page-charge expense) of journal publication with its refereeing system.

    Therefore, a question presented by these facts seems to me to be:

    What events between UK publication on July 29, 2004 and USA publication on February 22, 2005 might have influenced Roger Penrose to make the above-described changes in the USA edition ?

    There are two possibly relevant events in that time frame of which I am aware:

    1 – The appearance around November 2004 of the ArchiveFreedom web site at http://www.physics.helsinki.fi/~matpitka/blacklist.html which web site documents some cases of arXiv blacklisting etc;

    2 – According to a CERN web page at http://documents.cern.ch/EDS/current/access/action.php?doctypes=NCP “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …”. Note that the CERN EXT-series had been used as a public repository for their work by some people (including me) who had been blacklisted by arXiv .

    Maybe either or both of those two events influenced Roger Penrose in making the above-described changes in the USA edition.
    If anyone has any other ideas as to why those changes were made, I would welcome being informed about them.

    Tony Smith http://valdostamuseum.org/hamsmith/

  18. Browsing Tony Smith’s internet site, I just came across http://valdostamuseum.org/hamsmith/Sets2Quarks10.html#sub3 which contains a nice illustration of the spin-2 graviton from Feynman, plus a concise quotation (in an illustration extract of a scanned page):

    “the graviton has an energy content equal to (h-bar)*(angular velocity omega), and therefore it is itself a source of gravitons. We speak of this as the nonlinearity of the gravitational field.”

    However, as proved at the page http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html , what is normally attributed to vacuum “displacement current” caused by polarization of vacuum charges in a photon is really due not to vacuum charge polarization, but to radiation effects:

    i.e., a photon itself is – in a sense – a source of photons (which travel in the transverse direction, at right angles to the direction of propagation, and produce the effects normally attributed to Maxwell’s displacement current); the only limitation is that the photon absorbs all the photons it emits, as proved for the example of the two-conductor transmission line with a vacuum dielectric.

    However, at high energy – above the Schwinger threshold field strength for pair production in the vacuum by an electromagnetic field – Maxwell’s model is physically justified because the pairs of free charges above that ~10^18 volts/metre field strength threshold (closer than ~1 fm to the middle of an electron) really do polarize and in order to polarize they drift along the electric field lines, causing a “displacement current” in the vacuum. Maxwell’s theory is only false (or incomplete) for the mechanism of a light wave (or other phenomena requiring “displacement current” to propagate) in which the electric field strength is below ~10^18 volts/metre.

    At weaker field strengths, bosonic radiation has the same effect as that due to fermion displacement currents above the IR cutoff.

    Another thing to watch out for is the spin. Spin 1 means regular spin, like a spinning loop made from a strip of paper with the ends stuck together, which has no twist: a 360 degree turn brings it back to the starting point.

    Now imagine making a mark on one side of a Mobius strip and rotating it while looking only at one side of the strip while rotating. Because the Mobius strip (a looped piece of paper with a half twist in it: http://en.wikipedia.org/wiki/M%C3%B6bius_strip ) has only one surface (not two surfaces – you can prove this by drawing a pen line along the surface of the strip, which will go right the way around ALL surfaces, with a length exactly twice the circumference of the loop!), it follows that you need to rotate it not just 360 degrees but 720 degrees to get back to any given mark on the surface.

    Hence, for a 360 degree rotation, the Mobius strip will only complete 360/720 = half a rotation. This is similar to the electron, which has a half-integer spin.

    The stringy graviton, assumed by the mainstream to be a spin-2 particle, is the opposite of the Mobius strip: it has twice the normal rotational symmetry instead of just half of it.

    This is supposed to make the exchange of such particles result in an attractive force. One well-known (unlike the unobserved spin-2 graviton) example of such a known attractive force is the strong force, mediated between protons and neutrons in the nucleus by vector bosons or gauge bosons called pions: http://en.wikipedia.org/wiki/Pion

    The story of the strong force is this. Japanese physicists first came up with the nuclear atom in 1903, but it was dismissed because the protons would all be confined together in the nucleus with nothing to stop it blowing itself to pieces under the tremendous Coulomb repulsion of electromagnetism.

    Then Geiger and Marsden discovered, from the quantitatively large amount of “backscatter” (180 degree scattering angle) of positively charged alpha particles hitting gold, that gold atoms must contain clusters of very intense positive electric charge which can reflect (backscatter) the amount of alpha radiation which was measured to be reflected back towards the source.

    Rutherford made some calculations, showing that the results support a nuclear atom with a nucleus containing all the positive charges (the positive charge being equal to all the negative charge, in a neutral atom).

    People then had the problem of explaining why the nucleus didn’t explode due to the repulsion between positively charged protons all packed together. It was obvious that there was some unknown factor at work. Either the Coulomb law didn’t hold on very small scales like the nuclear size, or else it did work but there was another force, an attractive force, capable of cancelling out the Coulomb repulsion and preventing the nucleus from decaying in a blast of radiation. The latter theory was shown to be true (obviously, some atoms – radioactive ones and particularly fissile ones like uranium-235 and plutonium-239 – are not entirely stabilized by the attractive force, and randomly decay, or can be caused to break up by being hit).

    In 1932, Chadwick discovered the neutron, but because it is neutral, it didn’t help much with explaining why the nucleus didn’t explode due to Coulomb repulsion between protons.

    Finally in 1935, the Japanese physicist Yukawa came up with the strong nuclear force theory in his paper “On the Interaction of Elementary Particles, I,” (Proc. Phys.-Math. Soc. Japan, v17, p48), in which force is produced by the exchange of “field particles”. From the uncertainty principle and the known experimentally determined nuclear size (i.e., the radius of the cross-sectional area for the nucleus to shield nuclear radiations which penetrate the electron clouds of the atom without difficulty), Yukawa calculated that the field particle would have a mass 200 times the electron’s mass.

    In 1936 the muon (a lepton) was discovered with the right mass and was hyped as being the Yukawa field particle, but of course it was not the right particle and this was eventually revealed by an analysis of muon reaction rates in 1947 by Fermi, Teller, and Weisskopf: the muon mediated reactions are too slow to mediate strong nuclear forces.

    In 1947 the correct Yukawa field particle, the pion, was finally discovered, with a mass of about 270 electron masses.

    If we take the Heisenberg uncertainty principle in its energy-time form, E*t = h-bar.

    Putting in the mass-energy of the pion (140 MeV), we get a time of t = (h-bar)/E = 5*10^{-24} second.

    The maximum possible range for the pion is therefore c*t = 1.5*10^{-15} metre = 1.5 fm.
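    That estimate is easy to verify numerically from E*t = h-bar and range = c*t:

    hbar = 1.055e-34           # J*s
    E = 140e6 * 1.602e-19      # pion rest mass-energy, ~140 MeV, in joules
    c = 3.0e8                  # m/s

    t = hbar / E               # ~5e-24 s
    r = c * t                  # ~1.4e-15 m, i.e. about 1.5 fm, the nuclear-scale range
    print(t, r)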

    This is the size of the nucleus (for constant density the nuclear radius only increases slowly with mass number – in proportion to the cube-root of mass number, to be exact – so a plutonium nucleus is only about 6 times the radius of a hydrogen nucleus; the fact that the range of the strong nuclear attractive force does not scale up in proportion to the cube-root of mass number tends to make it less effective at holding the bigger nuclei together, which is why the very heaviest nuclei are all unstable and decay). It is also the range of the IR cutoff, and of the related Schwinger limit for pair production in the vacuum (a field strength of ~10^18 v/m occurs at about this distance from a unit charge).
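    The cube-root scaling of nuclear radius mentioned above is easy to check; r0 ~ 1.2 fm is the usual empirical constant (an assumed figure, not taken from this comment):

    r0 = 1.2                          # fm, empirical nuclear radius constant (assumed)
    for A in (1, 239):                # hydrogen and plutonium mass numbers
        print(A, r0 * A ** (1 / 3))   # radii in fm; the ratio is 239**(1/3), about 6.2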

    Pions, causing the attractive strong force, have zero spin: http://en.wikipedia.org/wiki/Pion#Basic_properties

    It’s so obvious that you don’t need spin-2 gravitons to cause gravity that the whole thing is a joke. Looking at another of Tony Smith’s interesting pages:

    http://valdostamuseum.org/hamsmith/goodnewsbadnews.html#badnews

    He makes the point that even people like Feynman had terrible problems getting anyone to listen to a too-new idea, and he quotes Feynman’s reactions to dismissive ridicule from Teller, Dirac and Bohr at the 1948 Pocono conference:

    “… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right.

    … For instance,

    take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

    … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

    … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    I gave up, I simply gave up …”.

    What happened was that Dyson wrote a paper showing that Feynman’s approach is equivalent to Schwinger’s and Tomonaga’s [unfortunately, Dyson ignored someone whose work had preceded all of them, namely E.C.G. Stueckelberg, Annalen der Physik, vol. 21 (1934), whose paper was even harder for contemporaries to grasp than Feynman’s; people like Pauli spent great efforts trying to grasp Stueckelberg in the 1930s and gave up, which to me shows a danger in being too abstract, mathematically speaking, although some like Chris Oakley – http://www.cgoakley.demon.co.uk/qft/ and to some degree also Peter Woit, Danny Lunsford, and others who may also have too much respect for the mathematical rigor over the physical processes being modelled mathematically – view pictorial mechanisms, and possibly solid predictions as well, as being kid’s stuff and seem to think that the more mathematically abstract, and less readily understandable, your work is to your contemporaries, the better].

    Next, Dyson was ridiculed, see him talking about it at http://video.google.co.uk/videoplay?docid=-77014189453344068&q=Freeman+Dyson+Feynman

    “… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. He always knew everything better, and was a terribly bad organiser of seminars.

    “I mean he would – he had to have the centre stage for himself and couldn’t shut up [like string theorists today!], and we couldn’t tell him to shut up. So in fact, there was very little communication at all.

    “Well, I felt terrible and I remember going out after this seminar and going to Cecile for consolation, and Cecile was wonderful …

    “I always felt Oppenheimer was a bigoted old fool. … And then a week later I had the second seminar and it went a little bit better, but it still was pretty bad, and so I still didn’t get much of a hearing. And at that point Hans Bethe somehow heard about this and he talked with Oppenheimer on the telephone, I think. …

    “I think that he had telephoned Oppy and said ‘You really ought to listen to Dyson, you know, he has something to say and you should listen. And so then Bethe himself came down to the next seminar which I was giving and Oppenheimer continued to interrupt but Bethe then came to my help and, actually, he was able to tell Oppenheimer to shut up, I mean, which only he could do. …

    “So the third seminar he started to listen and then, I actually gave five altogether, and so the fourth and fifth were fine, and by that time he really got interested. He began to understand that there was something worth listening to. And then, at some point – I don’t remember exactly at which point – he put a little note in my mail box saying, ‘nolo contendere’.”

  19. Another nice quotation from Tony Smith’s page
    http://valdostamuseum.org/hamsmith/goodnewsbadnews.html#badnews
    According to Freeman Dyson, in his 1981 essay Unfashionable Pursuits (reprinted in From Eros to Gaia (Penguin 1992, at page 171)), “… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …”

  20. One more quotation from Tony Smith about censorship by mainstream world-leading (dictatorial?) physicist Oppenheimer [simply imagine some mainstream M-theory leader in the role of Oppenheimer]:

    http://www.math.columbia.edu/~woit/wordpress/?p=189#comment-3222

    What about David Bohm’s expulsion from Princeton?

    According to the Bohm biography Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133:

    “… when his [Bohm’s] … Princeton University … teaching … contract came up for renewal, in June [1951], it was terminated. … Renewal of his contract should have been a foregone conclusion … Clearly the university’s decision was made on political and not on academic grounds … Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …”.

    Tony Smith
    http://www.valdostamuseum.org/hamsmith/

  21. The Schwinger correction for the magnetic moment of leptons (i.e., the first Feynman diagram coupling correction term)

    In the text of this blog post I wrote:

    ‘There is a similarity in the physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + {alpha}/(2*{Pi}) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here.’

    In comment 18 above, I wrote:

    ‘Now imagine making a mark on one side of a Mobius strip and rotating it while looking only at one side of the strip while rotating. Because the Mobius strip (a looped piece of paper with a half twist in it: http://en.wikipedia.org/wiki/M%C3%B6bius_strip ) has only one surface (not two surfaces – you can prove this by drawing a pen line along the surface of the strip, which will go right the way around ALL surfaces, with a length exactly twice the circumference of the loop!), it follows that you need to rotate it not just 360 degrees but 720 degrees to get back to any given mark on the surface.

    ‘Hence, for a 360 degree rotation, the Mobius strip will only complete 360/720 = half a rotation. This is similar to the electron, which has a half-integer spin.’

    What I should have mentioned in the post is the mechanism for why the first (i.e., the Schwinger) vacuum virtual charge coupling correction term is alpha/(2*{Pi}) added to the electron’s core magnetic moment of 1 Bohr magneton (Dirac’s result).

    Schwinger’s result, 1 + {alpha}/(2*{Pi}) = 1.00116 Bohr magnetons, is:

    {magnetic moment of bare electron core, neglecting the interaction picture with the surrounding virtual charges in the vacuum}*(1 + {alpha}/(2*{Pi}))

    because

    the virtual particle which is adding to the bare core magnetic moment is shielded from the bare core by the intervening polarized vacuum, which causes the alpha (i.e., 1/137.036) dimensionless shielding factor to appear in the correction term.

    In addition, there is a spin correction factor which reduces the contribution of the magnetism from the virtual charge by a factor of 2*{Pi}.

    Notice that we may need to think of a Mobius strip in a particular way (comment 18, as quoted above) to explain causally why the electron is spin-1/2.

    The 2*Pi reduction factor is equal to the ratio between the exposed electron perimeter (if the electron is a loop) when seen along the longitudinal axis of symmetry passing through the middle of the loop and when seen side on.

    Hence, if you look at a loop side on, you see a length (and an associated area) which is smaller by a factor of 2*{Pi} than the length and area visible when looking at the loop from above (or from below): the circumference is 2*{Pi} times the radius.

    There was a trick suggested during President Reagan’s 1980s ‘Star Wars’ (SDI project) whereby you can protect a missile partly from laser bursts by giving it a rapid spin. This means that the flux per unit area which will be received on the side of the missile from a laser (or whatever) is reduced by a factor of 2*{Pi} if the missile is spinning about its long axis, compared to the non-spinning scenario.

    So there you have candidate explanations for why the first virtual particle correction to the magnetic moment of a lepton doesn’t give a total of 1 + 1 = 2 Bohr magnetons, but merely 1 + 1/(137.036*2*Pi) = 1.00116 Bohr magnetons.
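    The arithmetic is easy to check:

    import math
    alpha = 1 / 137.036
    moment = 1 + alpha / (2 * math.pi)   # Schwinger's first-order correction to Dirac's 1 Bohr magneton
    print(round(moment, 5))              # 1.00116 Bohr magnetons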

    Obviously, it’s a sketchy explanation. However, it becomes clearer when you look at what a black hole/trapped TEM (transverse electromagnetic) wave looks like (Electronics World, April 2003 issue): draw a circle to represent the path of propagation of the trapped TEM wave, and draw radial (outward) lines of electric field coming from that line, which are orthogonal to the direction of propagation of the TEM wave at the point they radiate from. Next, draw the magnetic field lines looping around the direction of propagation: the magnetic field lines are orthogonal to both the direction of propagation and to the electric field lines (see the Poynting-Heaviside vector). What you discover is that at big distances (distances many times the diameter of the electron loop), the magnetic field lines (which have a toroidal shape close to the electron) do form a magnetic dipole with the poles (radial magnetic field lines) running along the axis of rotation of the electron’s loop.

    Those polar magnetic field lines which are parallel to the radial electric field lines (at big distances from the electron) are – unlike the electric field lines – not shielded by the electric polarization of the vacuum. The electric field lines are shielded simply because the electric polarization (displacement of virtual charges) opposes the electric field of the electron core by creating a radial electric field pointing in the opposite direction to that from the core of the electron (conventionally, the electric field vector or arrow points inwards, towards a negative charge, so the radial electric field created by the polarized vacuum points outward, partially cancelling the electron’s charge as seen from great distances). There is no interaction between parallel electric and magnetic field lines (if there were, they wouldn’t be separate fields; the electric and magnetic fields interact according to Maxwell’s two curl equations, which are composed of Faraday’s law and Ampere’s law plus Maxwell’s displacement current term for vacuum effects – both of which have the magnetic field and electric field at right angles for biggest interaction and show that there is never an interaction for parallel magnetic and electric force field vectors).

    Hence, the polar magnetic dipole field from the electron is not shielded by the pairs of virtual charges in the vacuum (the other magnetic field lines, which are not parallel but more nearly at right angles to the radial lines from the middle of the electron, will generally be shielded like electric field lines, of course).

    This justifies why there are two important terms (neglecting the higher order corrections for other vacuum interactions, which are trivial in size), 1 + alpha/(2*{Pi}) where 1 is the bare core charge and the second term is the contribution from a vacuum charge aligning with the core.

    There is obviously more to be done to illustrate this mechanism clearly with diagrams and to use the resulting simple physical principles to make predictive calculations of other vacuum interactions in a far simpler and quicker way than existing mainstream methods.

    In the meanwhile, here are two other aspects of the loop electron. Hermann Weyl points out in The Theory of Groups and Quantum Mechanics, Dover, 2nd ed., 1931, page 217:

    “[In the Schroedinger equation] The charge contained in V [volume] is … capable of assuming only the values -e and 0, i.e. according to whether the electron is found in V or not. In order to guarantee the atomicity of electricity, the electric charge must equal -e times the probability density. But if we base our theory on the de Broglie wave equation, modified by introducing the electromagnetic potentials in accordance with the rule [replacing (1/i)d/dx_a by {(1/i)d/dx_Alpha} + {(e/{hc}){Phi_Alpha}], we find as the expression for the charge density one involving the temporal derivative d{Psi}/dt in addition to Psi; this expression has nothing to do with the probability density and is not even an idempotent form. According to Dirac, this is the most conclusive argument for the stand that the differential equations for the motion of an electron in an electromagnetic field must contain only first order derivatives with respect to the time. Since it is not possible to obtain such an equation with a scalar wave function which satisfies at the same time the requirement of relativistic invariance, the spin appears as a phenomenon necessitated by the theory of relativity.” (Emphasis as in original.)

    The simple way to link relativity to spin was published in the Electronics World Apr. 2003 article: the spin is the flow of the TEM wave energy around the loop (for a half-integer spin fermion, the polarization vector changes direction as it goes around the loop, hence making the field twisted, like a Mobius strip):

    Let the whole electron (TEM wave loop) propagate at velocity v along its axis of symmetry, i.e., it propagates along the axis of “spin”. Let the spin speed of the TEM wave energy going around the loop be x. Since the vectors v and x are perpendicular, their resultant is given by Pythagoras’ theorem:

    (v^2) + (x^2) = c^2

    We let the resultant speed be c (so that the squares of v and x sum to c^2) because of the requirement of empirically confirmed electromagnetism and relativity.

    A measure of time for an electron is the spin speed (just as the measure of time for a thrown watch is how fast the hands spin around the watch face). Time, and hence relative spin speed, taking time to pass at unit rate when the electron propagation speed is v = 0, is

    t’/t = x/c = [[(c^2) – (v^2)]^{1/2}]/c

    = [1 – (v/c)^2]^{1/2}

    which is the usual time-dilation factor in the FitzGerald-Lorentz contraction.
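
    A trivial numerical sketch of this relation (my own check, nothing more): picking a few propagation speeds v and computing the spin speed x from the Pythagorean sum shows that x/c always equals the FitzGerald-Lorentz factor:

        # Sketch: with v^2 + x^2 = c^2, the relative spin rate x/c equals the
        # usual time-dilation factor [1 - (v/c)^2]^(1/2).  Units with c = 1.
        import math

        c = 1.0
        for v in (0.0, 0.6, 0.8, 0.99):
            x = math.sqrt(c**2 - v**2)            # internal spin speed of the loop
            factor = math.sqrt(1 - (v / c)**2)    # FitzGerald-Lorentz factor
            print(v, round(x / c, 6), round(factor, 6))   # last two columns agree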

    To get the length contraction in the direction of motion, you can argue that the observable (distance)/(time) = c, hence to preserve this ratio any time-dilation factor must be accompanied by an identical length contraction factor. A more physical explanation, however, is that the vacuum “drag” (actually not continuous drag, but just resistance to changes in speed, i.e., resistance to acceleration) causes contraction:

    In 1949 it was shown that a crystal-like ground state of the vacuum (i.e., the Dirac sea at electric field strengths lower than Schwinger’s threshold for pair production of about 10^18 v/m, for situations below the IR cutoff energy so that there are no pairs of virtual particles forming a dissipative gas, like steam coming off crystalline water – ice – by sublimation) explains the length contraction and mass-energy effects. The reference is

    C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, volume A62, pp 131-4:

    ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor [1 – (v/c)^2]^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/[1 – (v/c)^2]^{1/2}, where E(o) is the potential energy of the dislocation at rest.’

  22. copy of a comment:
    http://www.math.columbia.edu/~woit/wordpress/?p=562#comment-25739
    nc Says:
    June 4th, 2007 at 10:19 am
    ‘If I accept that these theories (or “schemes”) make zero predictions, do they still give me a unified description of the fundamental forces?’
    1. Supersymmetry is the theory required to ‘unify’ forces and just does that by getting all the SM forces to have equal charges (coupling constants) near the Planck scale, presumably because that looks prettier on a graph than the SM prediction (which does not show that the 3 interaction coupling constants – for EM, weak, and strong forces – meet at a single point near the Planck scale).
    The problems here are massive. The experimental evidence available confirms the SM, and the supersymmetric model extrapolated to low energy seems to contradict experimental results. See Woit’s book Not Even Wrong, UK ed., page 177 [using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%].
    In addition, why should the three SM forces unify by having the same strength at the Planck scale? It’s group-think galore with no solid facts behind any of it. Planck never determined the Planck scale from a solid theory, let alone observed it. It just seems to be a way to glorify his constant, h (e.g. the length 2mG/c^2 where m is electron mass is not only much smaller than the Planck length, but it is also more meaningful since i[t] is a black hole radius). Why should they meet at a point anyway?
    2. String theory hasn’t succeeded in usefully putting gravity into the standard model. Furthermore, unlike supersymmetry which at least has been found to disagree with experiment (as mentioned), string theory says nothing remotely checkable about gravity. All it does is to allow vacuous claims like:
    ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.
    What he means is presumably that it predicts speculative things called gravitons, or that it might predict something about gravity, some day. Congratulations, in advance! But why can’t these string theorists admit that their ‘theory’ doesn’t even exist, and the claims made for it are just a lot of hype that help censor alternatives?

  23. A bit more about the Weyl-suggested link between particle spin and relativity outlined in comment 22 above: the reason why a particle with spin will generally move in a direction along the axis of spin, so that the direction of spin is orthogonal to the direction of the propagation of the whole particle (relative to the surrounding “gravitational field”, i.e., the absolute spacetime of general relativity, the evidence for which was discussed in an earlier comment), is that this makes the spin speed consistent at each point around the loop of the electron. If the angle between the electron’s propagation and the plane in which the electron spins is anything other than 90 degrees, the speed will vary around the circumference of the electron instead of remaining constant, and this will result in a net transmission of energy as oscillating waves (in addition to the usual equilibrium of Yang-Mills force-causing exchange radiation which constitutes the electromagnetic field).

    When electrons are deflected from a straight path, they do indeed emit “synchrotron radiation”. This is a well known effect and is well confirmed experimentally! It’s not speculation in any way. It’s a plain fact.

    What I need to do now is to investigate Hawking’s radiation as a source for the electromagnetic charged gauge bosons:

    Furthermore, calculations show that Hawking radiation from electron-mass black holes has the right force to be the exchange radiation of electromagnetism: https://nige.wordpress.com/2007/03/08/hawking-radiation-from-black-hole-electrons-causes-electromagnetic-forces-it-is-the-exchange-radiation/

    Conventionally, Hawking radiation is supposed to occur when a pair of fermion-antifermion particles appears near a black hole event horizon. One of them (fermion or antifermion) falls into the black hole, while the other escapes and later annihilates with another such particle (this is the key point: there is an implicit assumption in Hawking’s theory which states that, on average, as many virtual positrons as virtual electrons escape, i.e., you get gamma rays given off because you end up with an equal number of escaped particles and escaped anti-particles, so they can all annihilate into uncharged gamma rays). So in Hawking’s theory, all the radiation gets converted into gamma radiation. Simple.

    However, the black holes we’re dealing with are not the same as those Hawking’s calculations apply to. We’re dealing with fermions as black holes, which means they carry a net electric charge. This electric charge dramatically alters the selection of which one of the pair (fermion and antifermion, for example: electron and positron), falls into the black hole, and which one escapes.

    We’re back to vacuum polarization and displacement current again: the particles which will be swallowed up by the black hole will tend to have an opposite charge to the black hole’s charge.

    This means that a fermion black hole does not tend to produce uncharged gamma rays: the particles it allows to escape all have like sign and can’t annihilate into gamma rays.

    So it is fully consistent with the mechanism in this blog post: Hawking radiation doesn’t produce gravity, it produces the charged exchange radiation we need for electromagnetism.

    Notice that, as calculated at https://nige.wordpress.com/2007/03/08/hawking-radiation-from-black-hole-electrons-causes-electromagnetic-forces-it-is-the-exchange-radiation/ , the electron as black hole – because of its small mass – has an extremely high Hawking radiating temperature, 1.35*10^53 Kelvin.

    Any black-body at that temperature radiates 1.3*10^205 watts/m^2 (via the Stefan-Boltzmann radiation law). We calculate the spherical radiating surface area 4*Pi*r^2 for the black hole event horizon radius r = 2Gm/c^2 where m is electron mass, hence an electron has a total Hawking radiation power of

    3*10^92 watts

    But that’s Yang-Mills electromagnetic force exchange (vector boson) radiation. Electrons don’t evaporate; they are in equilibrium with the reception of radiation from other radiating charges.

    So the electron core both receives and emits 3*10^92 watts of electromagnetic gauge bosons, simultaneously.

    The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

    The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

    Using P = 3*10^92 watts as just calculated,

    F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N.

    For gravity, the model in this blog post gives an inward and outward gauge boson force of F = 7*10^43 N.

    So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of [2*10^84] / [7*10^43] = 3*10^40.

    This figure of approximately 10^40 is indeed the ratio between the force coupling constant for electromagnetism and the force coupling constant for gravity.

    So the Hawking radiation force seems to indeed be the electromagnetic force!

    Electromagnetism between fundamental particles is about 10^40 times stronger than gravity.

    The exact figure of the ratio depends on whether the comparison is for electrons only, electron and proton, or two protons (the Coulomb force is identical in each case, but the ratio varies because of the different masses affecting the gravity force).
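
    For anyone who wants to re-run the black-hole-electron numbers above, here is a short sketch using standard textbook formulas (Hawking temperature, the Stefan-Boltzmann law, r = 2Gm/c^2) and standard constants; the exact figures come out slightly higher than quoted above because of rounding in the constants, but the ~10^40 ratio is unaffected:

        # Sketch of the electron-mass black hole calculation described above.
        import math

        G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34    # SI units
        k_B, sigma = 1.381e-23, 5.670e-8               # Boltzmann and Stefan-Boltzmann constants
        m_e = 9.109e-31                                # electron mass, kg

        T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)   # Hawking temperature, ~1.35e53 K
        r = 2 * G * m_e / c**2                            # event horizon radius, ~1.35e-57 m
        P = sigma * T**4 * 4 * math.pi * r**2             # radiated power, ~4e92 W
        F = 2 * P / c                                     # force of reflected exchange radiation

        print("T = %.3g K, P = %.3g W, F = %.3g N" % (T, P, F))
        print("F / (7e43 N) = %.3g" % (F / 7e43))         # of order 10^40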

  24. To lucidly clarify the distinction between a “virtual fermion” and a real (long lived) fermion, the best thing to do is to quote the example Glasstone gives in a footnote in his 1967 Sourcebook on Atomic Energy (3rd ed.):

    “In the interaction of a nucleon of sufficiently high (over 300 MeV) energy with another nucleon, a virtual pion can become a real (or free) pion, i.e., at a distance greater than about 1.5*10^{-13} cm from the nucleon. Such a free pion can then be detected before it is either captured by a nucleon or decays into a muon. This is the manner by which pions are produced …”

    See Figure 5 in the post to explain how two fermion-like exchange radiation components, while passing through one another due to the exchange process between charges, acquire boson-like behaviour. This explains how the Hawking-type fermion radiation constitutes vector bosons. It’s experimentally validated by transmission line logic signals which propagate like bosons if there are two conductors with opposite currents in each, see http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

    (There are other examples of pairing of fermions to create bosons, such as the Bose-Einstein condensate. Simply get the outer electrons in two atoms to pair up coherently, and the result is like a boson. This is how you get superconductivity, frictionless fluids, and such like in low temperature physics. Vibrations, due to thermal energy, at higher temperatures destroy the coherence.)

  25. (To be precise, superconductivity is the pairing of conduction electrons into Cooper pairs. Presumably, as explained in comment 22 above, the reason why the vacuum virtual particles increase the magnetic moments of leptons by the amount they do is that on average there is always one virtual lepton pairing with the real lepton core, allowing for a weakening in coherence due to the shielding effect of the polarized vacuum, and the geometry.)

  26. Plan for further research:

    Work out the correct way that SU(2) symmetry works under the lepton => quark transformation mechanism (see, for instance, comment 13 above).

    The reason why a downquark (which is the key clue for the mechanism) has -1/3 unit charge is that precisely 2/3rds of the electron charge energy is being transformed into the weak and strong forces.

    It’s easy to calculate the energy of a field: the electric field strength is given by Gauss’ law, i.e., you take Coulomb’s force law and put force F = qE (this is a vector equation but since the E field lines are radial and the Coulomb force acts in the radial direction, along E field lines, for our purposes it is fine to take it as a scalar as long as we are dealing with small unit charges with radial E field lines), where q is the charge being acted upon and E is electric field strength in volts/metre, so E = F/q. (This means that E is identical to Coulomb’s law except that there is just one term for charge in the numerator, instead of two terms for charge.)

    The amount of electric field energy per unit volume at a given E strength is well known: charge up a capacitor with a known volume between the capacitor plates and measure the energy you put into it as well as the uniform E field strength in it, and you have the relationship:

    Electric field energy density =

    (1/2)*{electric permittivity of free space}*E^2

    J/m^3.

    Similarly for magnetic fields:

    Magnetic field energy density =

    (1/2)*{1/magnetic permeability of free space}*B^2

    J/m^3.
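
    As a worked example of those two formulas (the field values chosen below are arbitrary, just to show the units working out):

        # Sketch: energy densities for example field strengths.
        eps0 = 8.854e-12      # electric permittivity of free space, F/m
        mu0 = 1.2566e-6       # magnetic permeability of free space, H/m

        E = 1.0e6             # example electric field strength, V/m
        B = 1.0               # example magnetic flux density, tesla

        u_E = 0.5 * eps0 * E**2   # ~4.4 J/m^3
        u_B = 0.5 * B**2 / mu0    # ~4.0e5 J/m^3
        print(u_E, u_B)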

    However, there is a minor problem in that the trapped energy in a capacitor isn’t “static”:

    “a) Energy current can only enter a capacitor at the speed of light.

    “b) Once inside, there is no mechanism for the energy current to slow down below the speed of light.”

    http://www.ivorcatt.com/1_3.htm (To avoid confusion in linking to Catt, I have to add the note that although there are important findings by Catt, I disagree with many of Catt’s own slipshod interpretations of the results of his own research, and I note that he refuses to engage in discussions with a view to improving the material.)

    If you look at the top diagram on the page http://www.ivorcatt.com/1_3.htm , you see that electromagnetic energy in a charged object is in equilibrium, containing trapped Poynting-Heaviside “energy current” (TEM wave) that oscillates in both directions, travelling through itself, so that the B field curls oppose each other and appear to cancel, while the E fields add together.

    So we get into the problem of how energy conservation applies to the situation where fields cancel out: half way between a positron and an electron, is there no field energy density, or is there equal positive and negative electromagnetic field energy? Put it like this: if you have a tug of war, and the teams are evenly matched so that the rope doesn’t move, is that the same thing as having no strain on the rope? Force fields are just like the tug of war: you only see them when they make charges accelerate. The late Dr Arnold Lynch, a leading expert in microwave beaming interference problems, pointed out to me by letter that because radiowaves are boson like radiation they can pass through each other in opposite directions and, if they are out of phase during the overlap, the fields “cancel out” totally, but the energy continues propagating and re-emerges after the overlap just as before. Hence, spacetime can contain “hidden” electromagnetic field energy.

    [This has nothing to do with Young’s “explanation” of the famous double slit experiment, where he claimed that you should get light waves arriving in the dark fringes but their fields are out of sync and interfere, cancelling out. Clearly, firing one photon at a time, when you consider the need for energy conservation in the double slit experiment, you have to admit that Young’s “explanation” is just plain wrong. You can do the double slit experiment with fairly efficient photomultipliers, and nobody has found that half the photons (those that should arrive in the dark fringes in Young’s explanation) are unaccounted for.]

    My reason for this diversion is that, when you integrate the energy density for the electron over radial distance using the E field of an electron, you get a result that is generally far too big.

    So what traditionally is done is a normalization in which the known (or assumed) rest mass-energy of the electron is set equal to the integration result, and the latter is adjusted to give an electron radius which yields the correct answer!

    This calculation yields what is known as the “classical electron radius”, http://en.wikipedia.org/wiki/Classical_electron_radius , ~2.818 fm.

    Notice that this result is the same as the radius of the 0.511 MeV IR cutoff commonly used in QFT (assuming Coulomb scattering you calculate the closest approach of two electrons each with a kinetic energy of 0.511 MeV – equal to the rest-mass energy of an electron – and this distance is the classical electron radius). It is also close to the radius for the pair-production threshold electric field strength as calculated from Schwinger’s formula, so it marks the maximum range from the electron core, out to which electron-positron pairs can briefly pop into existence, introducing chaos and electric polarization (charge shielding) effects into the otherwise classical-type electric field, which is the simple field at greater distances.
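
    The two figures mentioned in the last paragraph are easy to check (a sketch with standard constants; the closest-approach step simply sets the Coulomb potential energy equal to 0.511 MeV):

        # Sketch: classical electron radius, and the separation at which the
        # Coulomb potential energy of two electrons equals 0.511 MeV.
        import math

        e, eps0 = 1.602e-19, 8.854e-12       # elementary charge (C), permittivity (F/m)
        m_e, c = 9.109e-31, 2.998e8          # electron mass (kg), speed of light (m/s)

        r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)
        d_closest = e**2 / (4 * math.pi * eps0 * (m_e * c**2))  # same expression

        print("classical electron radius ~ %.3f fm" % (r_classical * 1e15))  # ~2.818 fm
        print("0.511 MeV closest approach ~ %.3f fm" % (d_closest * 1e15))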

    The classical electron radius needs to be explained in terms of QFT: the chaotic field of virtual particles within 2.8 fm of an electron core means that it doesn’t contribute to the rest-mass energy at all. So when an electron and a positron annihilate, only part of the total energy is converted into a pair of gamma rays, so E=mc^2 is not the total energy, merely that portion of the total which is released by annihilation. “Energy” is always a problem in physics because we have to distinguish directed, useful energy from random, useless energy (the 2nd law of thermodynamics tells us that not all energy is equal; energy can only do work if a heat sink is available, which is not the case for degraded energy).

    Presumably the classical electron radius just marks the boundary of chaotic, useless energy which can’t ever be released to do useful work.

    To return to the downquark charge question, the -1/3 charge of the downquark is correlated to the -1 charge of the electron as discussed earlier: 2/3rds of the electron energy is transformed into hadron binding energy when leptons are transformed into quarks.

    This electric charge energy is not all transformed into the strong force energy, because there is the weak force to consider as well. Chiral symmetry needs to be taken into account. Working out the full details for the correct replacement for the Higgs mass giving mechanism when replacing SU(2)xU(1) by some modified SU(2) or SU(2)xSU(2) scheme is the priority. Hopefully this will be SU(2) with a mass-giving field that gives masses to the bosons in a particular high-energy zone, in such a way that chiral symmetry and the excess of matter over antimatter in the universe are explained.

  27. Feynman writes usefully on the evidence that the weak gauge bosons (charged W’s, and neutral Z) are just variants on the photon (see Figure 5 in this blog post for the details):

    “The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.] Steven Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak. It’s very clear that the photon and the three W’s [W+, W- and Wo or Zo] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.” [Emphasis added.]

    – R. P. Feynman, QED, Penguin, 1990, pp141-142.

  28. Distinction between virtual and real photons

    Referring to Fig. 5 in this post, we see the distinction between real and virtual (gauge boson) “photons”. The gauge boson type “photon” has extra polarizations. This is Feynman’s explanation in his book QED, Penguin, 1990, p120:

    “Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)”

    This accords with Figure 5 in this blog post where an electromagnetic force-carrying gauge/vector boson has important time polarization because of the exchange mechanism.

  29. copy of a comment directed to a landscape popularizer:

    http://www.math.columbia.edu/~woit/wordpress/?p=564#comment-25893

    Your comment is awaiting moderation.
    nigel cook Says:
    June 7th, 2007 at 7:04 am

    If you think it permissible to bring up the McCarthy era as an analogy to criticisms of the cosmic landscape, you may escalate the hostilities because others will draw analogies between string propaganda and the propaganda of certain historical dictatorships, etc.

    The following question in my opinion can more appropriately be directed to those who popularize pseudoscience, than to those who combat it:

    ‘Have you no sense of decency?’ – http://www.americanrhetoric.com/speeches/welch-mccarthy.html

  30. Calculating the magnetic moments of leptons

    More about comment 22. The first coupling correction or Feynman diagram (which gives Schwinger’s alpha-over-two-pi addition to Dirac’s 1 Bohr magneton for the magnetic moment of leptons) is for the electron to emit and then absorb a photon.

    In order for this physical process to occur, a photon is emitted by the real electron and then reflected back (absorbed and re-emitted) by one of the virtual electrons in the surrounding vacuum.

    That’s the mechanism.

  31. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/06/holes-in-mars-and-beyond.html

    This is interesting, but remember that super-tiny black holes are just as exciting!

    Uncharged black holes emit gamma radiation because virtual fermion pair production near the event horizon leads one of the pair to fall in and the other to escape and become a real particle.

    So if the black hole is uncharged, on average you will get as many fermions as antifermions leaking from the event horizon, which will annihilate each other to form gamma rays.

    The way to really confirm black holes is to detect this radiation.

    However, I’ve got two developments on this.

    First, pair production doesn’t occur everywhere in space. It only occurs above a minimum electric field strength calculated by Schwinger in 1948, the threshold being about 1.3*10^18 v/m (equation 8.20 in http://arxiv.org/abs/hep-th/0510040 and equation 359 in http://arxiv.org/abs/quant-ph/0608140).
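
    To check that threshold figure (a sketch using the standard formula E_c = m^2*c^3/(e*hbar) for the Schwinger critical field, with standard constants):

        # Sketch: Schwinger's pair-production threshold field strength.
        m_e, c = 9.109e-31, 2.998e8     # electron mass (kg), speed of light (m/s)
        e, hbar = 1.602e-19, 1.055e-34  # elementary charge (C), reduced Planck constant (J s)

        E_c = m_e**2 * c**3 / (e * hbar)
        print("Schwinger critical field ~ %.2e V/m" % E_c)   # ~1.3e18 V/m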

    So in order for there to be Hawking radiation, the black hole needs to be accompanied by a strong electric field of more than 1.3*10^18 volts/metre at the event horizon. Hence, black holes must be charged to emit Hawking radiation.

    That’s the bad news for Hawking.

    The good news is that calculations show that fermions seem to behave like black holes, and since they are electrically charged, they give off Hawking radiation! The particular types of radiation seem to have the right characteristics to explain the Yang-Mills quantum field theory vector bosons (exchange radiation constituting fields) for electromagnetism and gravity.

  32. copy of a comment:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “One has to note however that for non-abelian groups the curvature of the additional dimensions will not vanish, thus flat space is no longer a solution to the field equations. However, it turns out that the number of additional dimensions one needs for the gauge symmetries of the Standard Model U(1)xSU(2)xSU(3) is 1+2+4=7 [10]. Thus, together with our usual four dimensions, the total number of dimensions is D=11. Now exponentiate this finding by the fact that 11 is the favourite number for those working on supergravity, and you’ll understand why KK was dealt as a hot canditate for unification.”

    Thanks for a nice brief summary of the mainstream KK and supergravity idea, but that vital reference [10] in the text doesn’t occur in your list of references, which only goes to reference [9]. It’s maybe interesting that the one successful, peer-reviewed and published alternative to KK was immediately censored off arxiv without explanation.

    The usual assumption of only one time-like dimension is a bit crazy! According to spacetime, distances can be described by time.

    Hence, the 3 orthogonal spatial dimensions can be represented by 3 orthogonal time dimensions.

    Back in 1916 when general relativity was formulated, this was no use because the universe was supposed to be a static universe, but with 3 continuously expanding dimensions in the big bang (there’s no gravitational deceleration observable), it makes sense to relate those expanding dimensions to time, and distinguish them from the 3 spatial dimensions of matter which don’t expand because the forces holding them together are very strong. Indeed, matter or energy generally is contracted spatially in gravitation. Feynman calculated that Earth suffers a radial contraction of GM/(3c^2) = 1.5 millimetres. That really shows that spatial dimensions describing matter should be distinguished from those describing the continuous expansion of the universe. There’s no overlap; you just have 3 overall dimensions split into two parts: those that describe contractable matter and those which describe expanding space.
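
    Feynman’s 1.5 mm figure for the Earth is easy to reproduce (a sketch with standard values for G, the Earth’s mass and c):

        # Sketch: radial contraction of the Earth, GM/(3c^2).
        G = 6.674e-11     # m^3 kg^-1 s^-2
        M = 5.972e24      # Earth's mass, kg
        c = 2.998e8       # m/s

        contraction = G * M / (3 * c**2)
        print("Earth radial contraction ~ %.2f mm" % (contraction * 1000))   # ~1.5 mm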

    The one successful, peer-reviewed and published alternative to KK predicts that the cosmological constant is zero, in agreement with observations that the universe isn’t decelerating because at big redshifts (over large distances) the gravitational coupling constant falls: vector boson radiation exchanged between gravitational charges (masses) will be redshifted (depleted in energy when received) so there’s no effective gravity at high redshifts. That’s why there’s no deceleration of the universe:

    Nobel Laureate Phil Anderson points out:

    ‘… the flat universe is just not decelerating, it isn’t really accelerating …’

    http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

    Hence the real cosmological constant is zero. Moreover, this scheme predicts G.

    Take the expanding dimensions to be time dimensions. Then the Hubble constant is not velocity/distance but velocity/time.

    This gives outward acceleration for the matter in the universe, resulting in outward force:

    F = ma

    = m*dv/dt,

    and since dR/dt = v = HR, it follows that dt = dR/(HR), so

    F = m*d(HR)/[dR/(HR)]

    = m*(H^2)*R*(dR/dR)

    = m*R*H^2

    By Newton’s 3rd law you get an equal inward force – which is carried by gravitons that cause compression and curvature – and this quantitatively allows you to explain general relativity and to work out the gravitational constant G, which turns out to be correct within the experimental error of the data (like the Hubble constant and density) you put into the relationship.
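
    As a rough numerical illustration of the acceleration a = RH^2 (the Hubble constant value and the choice of evaluating at R = c/H below are my own illustrative assumptions, not figures from the calculation above):

        # Sketch: outward cosmological acceleration a = R*H^2 and the outward
        # force F = m*a on an example receding mass.
        c = 2.998e8       # m/s
        H = 2.3e-18       # assumed Hubble constant in SI units (~70 km/s/Mpc)

        R = c / H         # evaluate at the Hubble radius (assumed choice)
        a = R * H**2      # equals c*H at this radius, ~7e-10 m/s^2

        m = 1.0           # an example 1 kg lump of receding matter
        F_outward = m * a # Newton's 2nd law; the 3rd law then gives an equal inward force
        print("a = %.2e m/s^2, F = %.2e N per kg" % (a, F_outward))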

    It’s kind of funny that falsifiable, easily checkable work based on observed facts and extending the applications of Newton’s 2nd and 3rd laws to gauge boson radiation, is so ignored by the mainstream.

    Even if my mechanism and predictions are ignorable (because for instance, they were only rough calculations at first and only published in the journal Electronics World), you’d think arXiv would have taken Lunsford’s highly technical paper seriously as he was a student of David Finkelstein and he published his paper in International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pp. 161-177.

  33. A quick clarification of comment 32 above: fermion particle cores exchange charged vector bosons (giving electromagnetic forces) but the uncharged vector bosons that give rise to gravity are exchanged not between the cores of fermions, but between the mass-giving particles in the polarized vacuum surrounding each fermion out to a radius of about 1 fm.

    copy of a follow-up comment:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    ‘I have sympathized with the idea of having 3 time like dimensions as well, but eventually gave up because I couldn’t make sense out of it. Does a particle move on a trajectory in all 6 dimension, and if so what happens to two particles that have a distances in time but not in space?’

    Bee, thanks for giving me the opportunity to explain a bit more. The particle only moves in 3 dimensions (the curvature that corresponds to an effective extra time dimension could have a physical explanation in quantum gravity; curvature is a mathematical convenience not the whole story).

    The particle doesn’t ‘move’ in any time dimension except on a graph of a spatial dimension plotted against time. Since there are 3 spatial dimensions, the simplest model is to have 3 corresponding time dimensions.

    The assumption that there’s only one time dimension is the same as you would have if everything around you was radially symmetric, like living in the middle of an onion, where only changes as a function of radial distance (one effective dimension) are readily apparent: the Hubble constant, and hence the age of the universe (time), is independent of the direction you are looking in.

    If the Hubble constant varied with direction then t = 1/H would vary with direction, and we’d need more than one effective time dimension to describe the universe.

    The role of time as due to expansion is proved by the following:

    Time requires organized work, e.g., a clock mechanism, or any other regular phenomena.

    If the universe was static rather than expanding, then it would reach thermal equilibrium (heat death) so there would be no heat sinks, and no way to do organized work. All energy would be disorganised and useless.

    Because it isn’t static but continuously expanding, thermal radiation received is always red-shifted, preventing thermal equilibrium being attained between received and emitted radiation, and thereby ensuring that there is always a heat sink available, so work is possible.

    Hence time is dependent upon the expansion of the universe.

    You can’t really measure ‘distance’ by using photons because things can move further apart (or together) while the photons are in transit.

    All cosmological dimensions should be – and usually are – measured as time. That’s why Hermann Minkowski said in 1908:

    ‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’

    It’s remarkable that as soon as you start thinking like this, you see that the recession of matter by Hubble’s empirical law v = HR at distance R is velocity v = dR/dt, so dt = dR/v, which can be put straight into the definition of acceleration, a = dv/dt giving us a = dv/dt = dv/[dR/v] = v*dv/dR = [HR]*d[HR]/dR = RH^2.

    So there really is an outward acceleration of the matter in the universe, which means we are forced to try to apply Newton’s 2nd (F=ma) and 3rd (equal and opposite reaction) laws of motion to this outward acceleration of matter. This process quantitatively predicts gravity as mentioned, and other stuff too. Why is this physics being censored out of arxiv in place of stringy guesswork stuff which isn’t based on empirically confirmed laws but on a belief in unobservables, and doesn’t predict a thing? It’s sad in some ways but really funny in others. 😉

  34. I’ve recently revised the page http://quantumfieldtheory.org/proof.htm which contains supplementary information.

    another follow-up comment to Backreaction:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “This is exactly the problem, because it’s definitely not what we observe. Everyone of us, every measurement makes gives the same H(t).

    “Plus, I’d like to know how you get stable objects like planets – another problem I stumbled across. … To make that point clear: write down Einstein’s field equations in Vacuum, spherical symmetric, solve them. In 3+1 you find Schwarzschild, what do you find? ”

    1. Spacetime means that for each observed spatial dimension characterized by length element dx, there’s a time-like equivalent, dt = dx/c.

    2. We observe 3 spatial dimensions, so we have 3 time-like corresponding dimensions:

    dt_x = dx/c
    dt_y = dy/c
    dt_z = dz/c

    The fact that time flows at the same rate is here indicated by the fact that c is a constant and doesn’t depend on the direction.

    Similarly, for situations of spherical symmetry, where r is radius

    dx = dr
    dy = dr
    dz = dr

    So we can represent all three spatial dimensions by the same thing, dr. This greatly simplifies the treatment of spherically symmetric things like stars. It’s just the same with time.

    3. Einstein’s 3+1 d general relativity uses the fact that time dimensions are all similar, so all 3 time dimensions can be represented by 1 time dimension in general relativity.

    The Riemann metric of Minkowski spacetime is normally (in 3+1 d):

    ds^2 = {Eta_uv}*(dx^u)*(dx^v) = dx^2 + dy^2 + dz^2 – d(ct)^2

    The correct full 3+3 d spacetime Riemann metric will be just the same because this is just a Pythagorean sum of line elements in three orthogonal spatial dimensions with the resultant equal to ct.

    4. It’s the same for the Schwarzschild metric of general relativity. Two time dimensions are conveniently ignored because they are the same, and it’s more economical to treat 3 time dimensions as 1 time dimension.

    The splitting of 3 spatial dimensions into 6 dimensions isn’t a real increase in the number of dimensions, just in the treatment.

    It’s just more convenient to consider the parts of the 3 dimensions which are within matter (like a planet or a ruler or measuring rod) as distance-like, contractable and not expanding, while the parts of dimensions where distance is uniformly increasing are time-like due to the expansion of the universe.

    Obviously, the framework of general relativity for the Schwarzschild metric isn’t affected by this. It’s only adding to general relativity a mechanism for unification and for making additional predictions about the strength of gravity, other cosmological implications, etc.

    Thanks for your interest.

  35. copy of a follow-up comment

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    Garrett, Lubos Motl previously dismissed Lunsford’s SO(3,3) unification on the basis that time-like dimensions might form circles and violate causality. However, the time-like dimensions of spacetime are orthogonal just like the three large spatial dimensions.

    It’s pretty easy to list failures that people in the past have had, and that simply aren’t applicable.

    However, giving a list of objections without backing them up is only helpful if you or someone reading is already biased. There is evidence that all observed fermion masses can be built up from a simple quantized mass similar to the Z_0 mass of 91 GeV. Whether you actually find experimentally confirmed evidence based on a physical mechanism convincing is your own choice.

    How can the spacetime correspondence explained above, where dt = (1/c)dx, etc., give rise to wave solutions that go faster than c? It doesn’t look as if that is the case here. Perhaps it occurs with incorrect use of extra time dimensions? Thanks.

  36. “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).” -Bee

    Nope, x, y and z are just orthogonal directions, so by the definition of spherical symmetry a variation of any function F with respect to any of them is identical.

    (dF_x)/dx = (dF_y)/dy = (dF_z)/dz

    here F_x = F_y = F_z and dx = dy = dz = dr, where r is radius.

    Radius is the same in directions x, y, and z, because, as stated, we’re dealing with spherical symmetry. The expansion rate and Hubble constant don’t vary with direction.

    As I mentioned before, if you’re modelling a star, you can do it by taking spherical symmetry and just considering radial dimension r, which is equal to dimensions x, y and z, because of spherical symmetry.

    Suppose the star’s radius is a million miles. This will be the same in the x, the y, and the z directions, and all are equal to the radius r.

    Similarly by spherical symmetry,

    (d^n)(F_x)/dx^n = (d^n)(F_y)/dy^n = (d^n)(F_z)/dz^n

    where n is 2 for your example (Pythagorean sum).

    Here, F_x = F_y = F_z = F_r

    because of radial symmetry, which, because (d^n)(F_x)/dx^n = (d^n)(F_y)/dy^n = (d^n)(F_z)/dz^n, implies:

    dx^n = dy^n = dz^n = dr^n

    where r is radial dimension. The dimensions x, y and z and all resulting elements are indistinguishable from the radial dimension r, because of the spherical symmetry.

    Notice now that dx^n = dy^n = dz^n = dr^n with n = 2 gives not the Pythagorean sum but dx^2 = dy^2 = dz^2 = dr^2, and with n = 1 it gives dx = dy = dz = dr.

    So your statement that “it is not dx=dy=dz=dr but dr^2 = dx^2 + dy^2 + dz^2” isn’t making any sense, and is just confusing spherical symmetry with the Riemann metric.

    Both results are consequences of the same equation; they’re not contradictions. It’s down to the spherical symmetry involved.

    I’m sure you must know this as it is all elementary stuff, and that your question is a tease.

    Never mind, thanks for at least having some kind of discussion.

    Best regards.

  37. copy of a comment:

    http://scienceblogs.com/framing-science/2007/06/on_framing_science_thoughts_fr.php

    “The point is that even in the context of a journal article or traditional science coverage, packaging is unavoidable. Facts, rarely if ever, speak for themselves.

    “When attention to science shifts from the science pages to other media beats, new audiences are reached, new interpretations emerge, and new voices gain standing in coverage. These rival voices strategically frame issues around dimensions that feed on the biases of journalists, commentators, and their respective audiences.

    “If scientists do not adapt to the rules of an increasingly fragmented media system, shifting from frames that only work at the science beat to those that fit at other media outlets, then they risk ceding their important role as communicators. Scientists should remain true to the science, but recast issues in ways that connect more closely to the social background of non-traditional audiences and that fit the imperatives of the particular media beat.

    “E.O Wilson’s latest book The Creation is a leading example. He defines environmental problems both in terms of science but also morality, arguing that scientists and religious Americans should share similar goals when it comes to stewardship of the planet. In doing so, he appeals to a religious audience who might not otherwise read popular science books. His framing efforts have also functioned as a news peg at religious media outlets, sparking incidental exposure to coverage of the environment and climate change. In this effective message strategy, certainly no one would accuse E.O Wilson of engaging in spin!”

    So you’re saying that scientists should start preaching their messages to the religious public, dressed up as morality?

    I can’t believe what you’re writing: you’re actively and concisely advocating an end to the scientific era and a return to the religious authority of the man/woman in a white coat who should be worshipped because she/he has special knowledge.

    The mix-up of religion and science was tragic when the monk Giordano Bruno was burned at the stake on 17 February 1600 for proposing that stars are not pinholes in the celestial sphere that let in the light of heaven, but suns.

    When scientists stick their noses into religion and morality, as was the case when the works of Ptolemy and Aristotle were adopted by the early Christian church as absolute authority, they corrupt science by making further developments – which may contradict the earlier views – politically impossible.

    Just five hundred years ago, everyone believed that it was common sense and obvious that the Earth was the centre of the universe, because the stars, planets, and sun appeared to revolve around it.

    Add to the prejudice of common sense the ‘scientific authority’ of the endlessly complex mathematical epicycles of Ptolemy, and the subject was beyond discussion.

    Now remember Feynman’s definition of science:

    ‘Science is the belief in the ignorance of experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p187.

    Feynman makes it clear there that science is not groupthink, it’s not consensus, it’s not led by authority of any form, but by checkable evidence.

    What you are suggesting, a religious and moral application of science, has been done many times before, always with precisely the same result, the banning of innovation which must upset the apple-cart of religious morality and dictatorial authority:

    ‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.

    ‘Whatever ceases to ascend, fails to preserve itself and enters upon its inevitable path of decay. It decays … by reason of the failure of the new forms to fertilise the perceptive achievements which constitute its past history.’ – Alfred North Whitehead, F.R.S., Sc.D., Religion in the Making, Cambridge University Press, 1927, p. 144.

    A mechanism for the conversion is the burning of heretics:

    ‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. … But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

    This works because it prevents research into heretical areas:

    ‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, Nineteen Eighty Four, Chancellor Press, London, 1984, p225.

    The innovator is then in serious difficulty:

    ‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’ – Nicolo Machiavelli, The Prince, Chapter VI: Concerning New Principalities Which Are Acquired By One’s Own Arms And Ability.

    As a result, the popular response to innovation is to hate it, and try to ignore it or ridicule it:

    ‘(1). The idea is nonsense.
    (2). Somebody thought of it before you did.
    (3). We believed it all the time.’

    – Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in his autobiography, Home is Where the Wind Blows, Oxford University Press, 1997, p154).

    Tony Smith quotes the hostile reception Feynman had in 1948 from leading physicists Teller, Dirac and Bohr, to his brilliant new approach to doing QED:

    “… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right.

    … For instance,

    take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

    … Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

    … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    I gave up, I simply gave up …” – The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994), pp. 245-248.

    Tony Smith also quotes the case of Professor David Bohm, inventor of the heresy of hidden variables in quantum mechanics (which contradicted the mainstream Copenhagen Interpretation, which had no evidence whatsoever, just groupthink). Bohm disproved a theorem by the mathematician John von Neumann which was claimed to rule out hidden variables, and this was unpopular:

    “… when his [Bohm’s] … Princeton University … teaching … contract came up for renewal, in June [1951], it was terminated. … Renewal of his contract should have been a foregone conclusion … Clearly the university’s decison was made on political and not on academic grounds … Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …” – Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133.

    This is science working at the best of times. What would censorship look like with a handful of Oppenheimer-type dictators popularising their own favorite prejudices as moral and religious necessity, and censoring critics? Answer: science would cease to make radical innovations, and would become a religious-prejudice controlled lab. Lots of small innovations would occur, as were permitted under the Inquisition. For example, various advances in mathematics and chemistry occurred in the dark ages (algebra, trigonometry, distillation, etc.). But the foundations of subjects were ‘protected’ from innovations, because they were tied down to metaphysics, i.e., obsolete scientific assumptions which have become ingrained in groupthink and morality.

    When Galileo discovered the four moons of Jupiter on 7 January 1610 using a self-made telescope, he was attacked by the Florentine astronomer, Francesco Sizzi, who claimed to logically disprove the existence of such things:

    “… the satellites are invisible to the naked eye, and therefore can have no influence on the earth, and therefore would be useless, and therefore do not exist. … the Jews and other ancient nations as well as modern Europeans have adopted the division of the week into seven days, and have named them from the seven planets: now if we increase the number of the planets this whole system falls to the ground.” – Sir Oliver Lodge, Galileo Overthrows Ancient Philosophy

    Galileo wrote to Kepler of the amusing difficulties in publishing news of his scientific discovery against scientific-religious prejudices:

    “Oh, my dear Kepler, how I wish that we could have one hearty laugh together! Here, at Padua, is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass, which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa laboring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.”

    [From http://history-world.org/galileo_overthrows_ancient_philo.htm ]

  38. copy of a comment:

    http://cosmicvariance.com/2007/06/11/latest-declamations-about-the-arrow-of-time/#comment-284988

    Well done, that’s a fairly good presentation! The 2nd law of thermodynamics is linked to cosmology, but you miss out just a few tiny considerations. In a static universe (such as Einstein’s model in 1917 or Hoyle’s in 1950), entropy (disorder) was supposed to always increase because the temperature becomes more and more uniform.

    There are several issues with this idea. Firstly, the lab experiments on chemical reactions which showed that entropy rises (and the theoretical calculations backing them up) applied to instances where the gravitational attraction between the reacting molecules was small, and where the molecules weren’t receding from one another at immense speeds.

    So the 2nd law is hardly a good model for the macroscopic universe! Three flaws:

    1) Entropy increases are limited due to redshift in an expanding universe: thermal equilibrium (heat death through maximum entropy) between receding bits of well-separated matter would require the uniform exchange of thermal radiation, but in an expanding universe such radiation is always received in a redshifted (lower-energy) state compared with that emitted. Hence, all matter emits into the outer space more energy than it receives back. This redshift effect prevents thermal equilibrium from being attained while any energy remains, so outer space remains an effective heat sink. (This redshift of incoming radiation is also the solution to Olbers’ paradox, i.e. why the sky doesn’t look uniformly bright and why we aren’t scorched to a cinder by the 3000 K infrared cosmic background radiation flash still reaching us from 13,700 million light years away.)

    2) Entropy (as temperature disorder) actually falls with time in the universe due to gravitation. When the CBR was emitted at 400,000 years after the BB, the temperature was uniform to within one part in 10,000 or whatever. Now, the temperature is grossly non-uniform, hardly an advert for rising entropy. Space is at 2.7 K and the middle of the sun is at 15,000,000 K. So goodbye 2nd ‘law’. The reason is that the ‘theory’ (that temperatures should become more uniform by diffusion of heat from hot to cool areas) behind the 2nd law of thermodynamics neglects gravitation, which is trivial for molecules in a test-tube in the lab, but is big on cosmic scales.

    The universe is 74% hydrogen (by mass), so gravity causes stars to form by inducing nuclear fusion when it pushes this matter together into compressed lumps. The fusion creates the non-uniformity of temperatures because it’s exothermic. This debunks Rudolf Clausius’s definition of the 2nd law: ‘The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium.’ The universe is an ‘isolated system’ (there’s no evidence against it being an isolated system), and the universe is ‘not in equilibrium’ because of the redshift phenomenon [see point 1) above]. Hence, however well the 2nd law works in chemistry, it fails spectacularly in cosmology unless redefined.

    3) Eddington pontificated in The Nature of the Physical World (1927): ‘The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.’

    This kind of scientifically-vacuous but authoritative pontification (which was made before Hubble discovered the redshift relationship) should make any genuine scientist deeply skeptical of the ‘law’. Arthur C. Clarke points out that, historically: ‘When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.’

  39. copy of a follow up comment:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    Bee, in radial symmetry,

    (dF_x)/dx = (dF_y)/dy = (dF_z)/dz

    This equation says that the gradient of function F is isotropic, i.e., spherically symmetric.

    Think about radial variations in electric field strength, or density, or recession velocities of stars, in directions x, y and z around us.

    Look in x direction from the middle of your sphere, and the electric field or density, or recession velocity varies in the same way with distance that it does in directions z, and y.

    All are the same.

    In

    (dF_x)/dx = (dF_y)/dy = (dF_z)/dz

    the numerators are all equal and the denominators are all equal,

    dF_x = dF_y = dF_z

    and

    dx = dy = dz

    Because of this latter property, it is usual to simplify spherically symmetric cases to just dr:

    dx = dy = dz = dr

    Now in your claim:

    “it is not dx=dy=dz=dr but dr^2 = dx^2 + dy^2 + dz^2”

    you’re wrong, because dx=dy=dz=dr, which applies to spherical symmetry, has simply nothing to do with dr^2 = dx^2 + dy^2 + dz^2. I don’t know why you claim they are the same, but they aren’t.

    The whole physical point of dr^2 = dx^2 + dy^2 + dz^2 is that the squares of the line elements are not equal; hence dr^2 = dx^2 + dy^2 + dz^2 doesn’t generally get used for spherically symmetric cases.

    If we do apply spherical symmetry to dr^2 = dx^2 + dy^2 + dz^2, then we get:

    dr^2 = dx^2 = dy^2 = dz^2

    Hence:

    3dr^2 = dx^2 + dy^2 + dz^2

    Notice that the number 3 appears, which you don’t include!

    I thought you were maybe joking before, but now it seems as if you are not:

    “When I said your point 2 is just wrong this was neither a question nor a tease, but a fact. And it is still wrong. It seems to me you slept through the GR session, you evidently have no idea how to make a coordinate transformation. Please look up every standard textbook on differential geometry to find out that I am correct.

    “A differential operator (or the 1 form rspt) in 3 dimensions with spherical symmetry is not identical to three times the one-dimensional case. If it was as you said, then the Coulomb force law in 3 dimensions wouldn’t fall with 1/r^2 as it does, but the force of a point charge would remain constant as it does in 1 dimension. This follows if what you said was correct. If you convert the one-forms appropriately with a coordinate transformation, you’ll instead find that the Laplace operator in 3 dimensions is not just d^2/dr^2 but actually 1/r^2 d/dr r^2 d/dr (which correctly gives 0 for r neq 0 when applied to 1/r). For a very brief into, see e.g. Wiki on the Laplace operator.”

    This is just plain wrong, and if you are going to be that abrasive you really do need to have just cause for doing so! You are simply confusing the Laplacian operator with the divergence operator.

    Divergence of electric field is

    div.F = [(dF_x)/dx] + [(dF_y)/dy] + [(dF_z)/dz]

    for spherical symmetry, this produces Coulomb’s law from Gauss’s law:

    Gauss’s law:

    div.E = charge density/permittivity

    hence:

    div.E = [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    because there is spherical symmetry:

    [(dE_x)/dx] = [(dE_y)/dy] = [(dE_z)/dz]

    = 3(dE_x)/dx
    = 3(dE_r)/dr

    where r is a radial direction (equal to x, y and z, because of the spherical symmetry of the electric field around a point charge).

    Hence:

    div.E = charge density /permittivity

    = 3(dE_r)/dr

    When we insert charge per spherical volume (4/3)*{Pi}*r^3 for the “charge density”, and rearrange we get:

    E_r = charge/(4*{Pi}*{permittivity}*r^2)

    This immediately becomes Coulomb’s law when we remember: force_r = {charge acted upon} * E_r

    So that’s why you’re wrong for the divergence operator, where dx=dy=dz=dr for spherical symmetry.

    For the Laplacian operator in spherical symmetry, we have the same kind of thing:

    {nabla}^2 F = 3(d^2 F_x)/dx^2

    = 3(d^2 F_r)/dr^2

    where F_x = F_y = F_z = F_r

    and

    dx^2 = dy^2 = dz^2 = dr^2

    This is all easy stuff, and it’s shocking that you weren’t taught any of it.

    I forgive you for all the abrasiveness, but insist that you learn the facts. If you can find any book which disproves the facts, let me know please. I think someone other than me fell asleep during the math lectures, unless your lecturer was the one who fell asleep. Thanks a lot.

  40. minor correction (the bold print for the + signs doesn’t seem noticeable!):

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    There are two plus signs missing, corrected below in bold:

    Divergence of electric field is

    div.F = [(dF_x)/dx] + [(dF_y)/dy] + [(dF_z)/dz]

    for spherical symmetry, this produces Coulomb’s law from Gauss’s law:

    Gauss’s law:

    div.E = {charge density}/{permittivity}

    hence:

    div.E = [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    because there is spherical symmetry:

    [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    = 3(dE_x)/dx
    = 3(dE_r)/dr

    where r is a radial direction (equal to x, y and z, because of the spherical symmetry of the electric field around a point charge).

    Hence:

    div.E = charge density /permittivity

    = 3(dE_r)/dr

    When we insert charge per spherical volume (4/3)*{Pi}*r^3 for the “charge density”, and rearrange we get:

    E_r = charge/(4*{Pi}*{permittivity}*r^2)

    This immediately becomes Coulomb’s law when we remember: force_r = {charge acted upon} * E_r

    So that’s why you’re wrong for the divergence operator, where dx=dy=dz=dr for spherical symmetry.

  41. copy of another (hopefully final!) comment in reply to Dr Bee:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “I haven’t talked about the divergence,” – Bee.

    Bee, you wrote above:

    “If it was as you said, then the Coulomb force law in 3 dimensions wouldn’t fall with 1/r^2 as it does, ”

    The Coulomb law in operator symbolism is:

    divergence.E = charge density/permittivity

    with force F_r = {charge acted upon} * E_r

    So you need to learn that in vector calculus, Coulomb’s law does involve divergence.

    “I tried to explain why your reasoning is faulty. I used the example of the Laplace operator trying to show that your argumentation is wrong. If it was as you said, then the Laplace operator was d^2/dr^2, which is definitely not the case. Go on and try to derive the correct form using your argumentation.” – Bee

    The Laplacian operator is defined as the sum:

    {nabla}^2 F = [(d^2 F_x)/dx^2] + [(d^2 F_y)/dy^2] + [(d^2 F_z)/dz^2]

    Source: Feynman lectures on physics, section on Maxwell’s equations (I’m travelling myself and don’t have access to a library at present, so can’t give page number, but it’s in there).

    For spherical symmetry,

    [(d^2 F_x)/dx^2] = [(d^2 F_y)/dy^2] = [(d^2 F_z)/dz^2]

    = [(d^2 F_r)/dr^2]

    Hence,

    [(d^2 F_x)/dx^2] + [(d^2 F_y)/dy^2] + [(d^2 F_z)/dz^2]

    = [(d^2 F_r)/dr^2] + [(d^2 F_r)/dr^2] + [(d^2 F_r)/dr^2]

    = 3[(d^2 F_r)/dr^2]

    Notice the factor of 3 appear here too!

    Notice that in the derivation of Coulomb’s force in simple algebra from Maxwell’s div.E equation in vector calculus (which I showed in a comment above), it is essential that the sum of three orthogonal differential terms in vector calculus is equal to three times any one of those terms (or the radial direction term). This factor of three cancels out the 3 in the volume of a sphere, (4/3)*Pi*r^3, giving the 4*Pi we find in the denominator of Coulomb’s law.

    “dx^2 = dy^2 = dz^2 = dr^2” – Bee

    This is correct, and what I was saying!

    Earlier you claimed “it is not dx=dy=dz=dr but dr^2 = dx^2 + dy^2 + dz^2”.

    Your earlier statement that “dr^2 = dx^2 + dy^2 + dz^2” (which is wrong) is incompatible with your new, correct comment that “dx^2 = dy^2 = dz^2 = dr^2”.

    Because dx^2 = dy^2 = dz^2 = dr^2, we can set them all equal to 1 unit, for convenience.

    Then your previous claim that “dr^2 = dx^2 + dy^2 + dz^2” becomes:

    1 = 1 + 1 + 1.

    This is why you need the factor of three! Hence if dx^2 = dy^2 = dz^2 = dr^2, then it follows

    3dr^2 = dx^2 + dy^2 + dz^2

    i.e.,

    3 = 1 + 1 + 1.

    Which is correct.

    Your new statement

    “ds^2 = dx^2 + dy^2 + dz^2”

    is correct and has nothing to do with spherical symmetry whatsoever, because ds^2 has nothing to do with dr^2.

    ds^2 = dx^2 + dy^2 + dz^2 is nothing to do with 3dr^2 = dx^2 + dy^2 + dz^2 where r is radial distance from the origin in spherical symmetry (the origin being defined x = 0, y = 0, z = 0).

    However, you then write (quoting my statement “Hence: 3dr^2 = dx^2 + dy^2 + dz^2. Notice that the number 3 appears, which you don’t include!” and replying):

    “This is complete bullshit. Would you please to me a favor, write down your transformation, transform your one-forms according to it, and derive the above. Good luck.” – Bee.

    This is just wrong. You are, as shown above, completely confused between the radial distance r and the Pythagorean resultant s.

    “Of course in spherical symmetry the gradient is spherical symmetric, this is not the point.” – Bee.

    You claimed earlier that this was the point, stating:

    “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).” -Bee

    You here got confused between

    ds^2 = dx^2 + dy^2 + dz^2

    and

    dr^2 = dx^2 + dy^2 + dz^2 (which you write, and which is wrong).

    The ds^2 = dx^2 + dy^2 + dz^2 formula is a metric and has nothing to do with the Laplacian operator!

    If you want to write dr^2 equals some sum, the correct formula you should write is

    3dr^2 = dx^2 + dy^2 + dz^2

    for spherical symmetry, or

    dr^2 = (1/3)*(dx^2 + dy^2 + dz^2)

    This is the reason why the radial contraction of the earth in general relativity is (1/3)GM/c^2 = 1.5 mm, instead of GM/c^2 = 4.5 mm (see Feynman’s lectures).

    But even if you write dr^2 = (1/3)*(dx^2 + dy^2 + dz^2), that still has nothing to do with dx=dy=dz=dr.

    You next state:

    “What you still don’t get is that three dimensions – even with spherical symmetry – are not the same as three times one dimension.” – Bee

    If

    dx = dy = dz = dr

    as in spherical symmetry, then even a child can write down the sum

    dx + dy + dz = 3dr

    You simply seem unable to add up 1 + 1 + 1 = 3. You keep falsely claiming that because 1 = 1 = 1 = 1, that means 1 + 1 + 1 = 1! Please go and learn to add up to the number 3.

    “This is not a derivation, but a hand-wavy way of putting together factors. I presume you want to derive the field of a point charge, the source is \delta(0) not ~1/r^3. Write down the equations (Laplace on potential equals charge), solve them.” – Bee

    The Laplace operation on potential isn’t a Maxwell equation as noted above: the relevant Maxwell equation is div.E. You can write this in other forms but the most useful form is using the divergence operator (see Feynman lectures on physics for example), which you claim is hand-waving nonsense.

    It’s not a hand-wavy argument:

    1. Newton proved that in any spherically-symmetric distribution of inverse-square law force sources, the resultant is the same as for a point source located in the centre.

    2. Maxwell’s equation for Gauss’s law is div.E = charge density/permittivity of free space

    charge density = Q/[(4/3)*Pi*r^3]

    Hence

    div.E = Q/[{permittivity}*(4/3)*Pi*r^3] [Eq. 1]

    Now

    div.E = [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    where for spherical symmetry

    [(dE_x)/dx] = [(dE_y)/dy] = [(dE_z)/dz] = [(dE_r)/dr]

    Hence

    div.E =
    [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    = 3(dE_r)/dr

    Setting this equal to Eq. 1 gives:

    div.E = Q/[{permittivity}*(4/3)*Pi*r^3] = 3(dE_r)/dr

    Hence:

    E_r = Q/[4*Pi*{permittivity}*r^2]

    Remembering F = q*E_r (where q is charge acted upon by charge Q),

    F = qQ/[4*Pi*{permittivity}*r^2]

    Which is Coulomb’s law. Where is the hand-waving? It’s rigorous. We have to use spherical symmetry in it where

    [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    = 3(dE_r)/dr

    If the number 3 didn’t occur here, we’d have a result three times stronger than Coulomb’s law, which is experimentally wrong. So the figure 3 must occur here, even if you don’t believe that adding three identical functions leads to a resultant which is three times as big as any one of the three!

    You need to know this derivation because otherwise you won’t be able to derive Coulomb’s law in algebra from the vector calculus form in Maxwell’s equations (which are written with divergences and curls, not Laplacian operators). If you don’t know this, one day you might be caught out on it.
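    As a neutral numerical cross-check of the end result itself (the inverse-square Coulomb field and the Gauss’s law flux), independent of the operator notation argued over above, here is a minimal Python sketch; the 1 nC charge is an arbitrary value assumed purely for illustration:

    import math

    eps0 = 8.854187817e-12   # vacuum permittivity, F/m
    Q = 1.0e-9               # assumed illustrative value: 1 nC point charge

    def E_r(r):
        """Radial Coulomb field of a point charge Q at distance r (metres)."""
        return Q / (4.0 * math.pi * eps0 * r**2)

    # Gauss's law check: the total flux through a sphere of radius r
    # should equal Q/eps0, independent of r (hence the 1/r^2 fall-off).
    for r in (0.1, 1.0, 10.0):
        flux = E_r(r) * 4.0 * math.pi * r**2
        print(f"r = {r:>4} m  E_r = {E_r(r):.3e} V/m  flux = {flux:.6e}  Q/eps0 = {Q/eps0:.6e}")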

    Best,
    nige

  42. response to arun:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    Arun,

    So you believe, as Bee claimed, that the equality of gradients in all directions in a spherically symmetric space somehow disproves – or is replaced by – a metric? And this from someone who confuses ds^2 with dr^2, and whose claim that dr^2 = dx^2 + dy^2 + dz^2 amounts arithmetically to claiming that 1 = 1 + 1 + 1, since dr^2, dx^2, dy^2 and dz^2 are all equal in the spherically symmetric case, where the variation of any quantity with distance is the same regardless of direction?

  43. My response to more attacks from Bee:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “I have marked all your quotations in italics. Where you remark
    This is correct, and what I was saying! you have wrongly attributed your own sentence to me.” – Bee

    Well in that case you maybe need to do something other than use italics when you write a formula, or it looks as if you are just emphasising it with italics.

    “You should know that the electromagnetic fields can be written as a gradient of a potential, in which case the equation becomes what I have written (Laplacian on the potential).” – Bee

    I don’t know why you continue to attack me by making false claims. I of course know that, which is precisely why I wrote in my comment above:

    “You can write this in other forms but the most useful form is using the divergence operator (see Feynman lectures on physics for example), which you claim is hand-waving nonsense.”

    So what you are doing now is just ignoring what I’m saying and hurling abuse at me, which is all completely wrong. I’m sorry to see you write such abuse of a commentator. All this extra-dimensional KK stuff has proved not even wrong. Maybe your replies prove why its supporters can’t be reasoned with. If you claim that Feynman is “arm-waving”, then maybe you need to write your criticisms of him, not of me.

    The reason I gave the derivation from Maxwell’s divergence equation and not the Laplacian is – as I explained to you above – because it clearly shows you that:

    [(dE_x)/dx] + [(dE_y)/dy] + [(dE_z)/dz]

    = 3(dE_r)/dr

    which shows that for equal gradients the sum of three of them is three times the radial gradient, which you don’t grasp:

    “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).” -Bee

    Here you are claiming dr^2 = dx^2 + dy^2 + dz^2 when in fact the correct formula is

    3dr^2 = dx^2 + dy^2 + dz^2

    because all three are equal to dr^2. Similarly, for the case dr = dx = dy = dz, we get

    3dr = dx + dy + dz

    Not what you claimed which was dr = dx + dy + dz.

    In addition, there is as I explained, still another error in your claim that you still don’t seem to grasp:

    “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).” -Bee

    The fact is, dx=dy=dz=dr has nothing whatsoever to do with dr^2 = dx^2 + dy^2 + dz^2.

    Even if you had the factor of three in dr^2 = dx^2 + dy^2 + dz^2,

    i.e., 3dr^2 = dx^2 + dy^2 + dz^2,

    your argument that dx=dy=dz=dr should be replaced with dr^2 = dx^2 + dy^2 + dz^2 is entirely wrong.

    You then tried to justify this claim by referring to the Laplacian operator for Coulomb’s law, and I proved you don’t need it – even if your claim was substantive – because you can use Maxwell’s equation for divergence of an electric field instead. You really are wrong, and you still can’t see it?

    “You conclusion that dx=dy=dz is nonsensical because these forms are basis elements and independend of each other – otherwise you’ve managed to collaps a three dimensional space to a one dimensional one.” – Bee

    Bee, you are still totally confused so I suggest you read the Feynman lectures in physics! In spherical symmetry, the direction-dependent vectors are all equal in x, y, and z directions.

    (dE_x)/dx is the same in spherical symmetry as (dE_y)/dy, which is the same as (dE_z)/dz, which is equal to the general gradient as a function of radial distance, (dE_r)/dr.

    Because of the spherical symmetry, dE_x = dE_y = dE_z = dE_r and dx = dy = dz = dr.

    This is a fact because of the spherical symmetry! You still don’t seem to grasp this fact, and instead of admitting that you don’t understand it and trying to learn some physics, you instead attack me?

    “You conclusion that dx=dy=dz is nonsensical because these forms are basis elements and independend of each other – otherwise you’ve managed to collaps a three dimensional space to a one dimensional one. Do yourself a favor and check your above equation with the simplest case you can think of, like Coulomb force or so.” – Bee

    I’ve already done that above, where the gradients and line elements are equal to the radial line element in spherical symmetry. This is why I can derive Coulomb’s law from the divergence of an electric field. The spherical symmetry means that gradients and line elements are all equal to the radial gradient.

    So I’ve already done you the favor of explaining it to you. Maybe you need to read it to see where your errors are.

    Best,
    Nigel

  44. further comment:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “I am not attacking you.” – Bee

    You seem to be because you falsely wrote

    “It seems to me you slept through the GR session, you evidently have no idea how to make a coordinate transformation. Please look up every standard textbook on differential geometry to find out that I am correct.”

    I never mentioned coordinate transformations, which have nothing to do with the existence of spherical symmetry of gradients you object to. It’s abusive for you to keep on calling the facts as I state them “bullshit” and making the false claim that because I don’t mention something irrelevant, I “evidently have no idea how to make a coordinate transformation.”

    You would feel abused if someone made false statements about your work being wrong, and then said that they evidently didn’t know something which they do know, but which isn’t relevant for spherical symmetry and equal gradients! So why do you claim that you were not being abusive to me by making false claims about my work and false sneers about what I know?

    I don’t understand why you keep repeating this endlessly, after I explained why your dismissal is totally vacuous.

    The spherical symmetry of gradients has nothing to do with general relativity or with metrics, so you are falsely attacking me.

    Spherical symmetry means that x = y = z = r and anything that varies with one direction varies in the other directions in exactly the same way.

    Hence dx = dy = dz = dr has nothing to do with the metric.

    You claimed

    “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).”

    When in fact dx=dy=dz=dr has nothing whatsoever to do with dr^2 = dx^2 + dy^2 + dz^2, which in spherical symmetry would be 3dr^2 = dx^2 + dy^2 + dz^2, since dr^2 = dx^2 = dy^2 = dz^2 (the definition of spherical symmetry being that anything varying with distance varies identically in the x, y and z directions).

    You again repeat that 3dr^2 = dx^2 + dy^2 + dz^2 is “just wrong. I have explained it above. I neither have the time nor the patience to repeat myself.”

    It’s abusive of you to keep ignoring the facts and keep insisting that they are wrong:

    Fact: in spherical symmetry,

    x = y = z = r where r is radial distance from the origin.

    dx = dy = dz = dr follows from this because of the spherical symmetry. Similarly,

    (dx^n) = (dy^n) = (dz^n) = dr^n

    This has nothing to do with general relativity and nothing to do with a metric

    Hence the sum

    (dx^n) + (dy^n) + (dz^n)

    = (dr^n) + (dr^n) + (dr^n)

    = 3dr^n

    Hence inserting n = 2 we get

    3dr^2 = (dx^2) + (dy^2) + (dz^2)

    It’s elementary stuff, and has nothing to do with a metric, despite the fact that it looks a bit like one – however you were the one who started talking about dr^2 = (dx^2) + (dy^2) + (dz^2), not me.

    I then proved you are wrong for spherical symmetry, which is what I’m talking about.

    In general relativity you do get the factor of 3 coming in for spherical symmetry when dealing with the Earth’s contraction. In the Lorentz contraction, the contraction factor is

    [1 – (v^2)/(c^2)]^{1/2}

    In general relativity, you have a similar contraction factor, except that v^2 = 2GM/r, i.e. v^2/c^2 = 2GM/(rc^2). For small curvature, the binomial expansion then gives a reduction in the Earth’s radius by the small amount GM/c^2.

    But this contraction for gravity is spread over 3 dimensions, as compared to only being a contraction in the direction of motion for the Lorentz contraction! So we have to correct it.

    Using Feynman’s spherical symmetry,

    {delta}x = {delta}y = {delta}z = {delta}r

    where r is radial direction, giving:

    {delta}x + {delta}y + {delta}z = 3{delta}r

    So:

    3{delta}r = GM/c^2

    Hence: {delta}r = (1/3)GM/c^2.

    This result is given in Feynman’s lectures on physics, for the Earth’s mass

    (1/3)GM/c^2 = 1.5 mm,

    and since this contraction is radial only (i.e. along radial lines in the x, y and z directions) and not transverse (the circumference is not contracted), you see the need for non-Euclidean geometry.

    So even if you are going to bring general relativity into this discussion of spherical symmetry, the sum of small line elements in three perpendicular dimensions is still three times the line element in any one dimension.
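    As an arithmetic check on the figure quoted from Feynman above, here is a minimal sketch evaluating (1/3)GM/c^2 for the Earth, using standard values of G, the Earth’s mass and c:

    # Check of the Earth's radial contraction figure quoted above: (1/3)GM/c^2.
    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24  # mass of the Earth, kg
    c = 2.998e8         # speed of light, m/s

    contraction = G * M_earth / (3.0 * c**2)
    print(f"(1/3)GM/c^2 = {contraction * 1000:.2f} mm")      # roughly 1.5 mm
    print(f"GM/c^2      = {3 * contraction * 1000:.2f} mm")  # roughly 4.4 mm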

    “I am not attacking you, I am trying to explain basics of differential geometry that you happily ignore. Maybe try to derive the volume element from your argumentation. Is it dV=dxdydz=dr^3 – as according to your statement that dx = dy = dz = dr?” – Bee

    In spherical symmetry where everything in direction x is the same as everything in directions y and z and all three directions are the same as the radial parameter r, then the product dxdydz = dr^3.

    This only holds for spherical symmetry.

    “Aha. Unfortunately, it’s my calculation that is mathematically consistent where yours is not.” – Bee

    I explained where your dismissal of my work is flawed. dx is the element of x, and in spherical symmetry such elements for any gradient are identical in all directions.

    “So I thank you very much for that ‘favor’ of yours but your confused argumentation that completely ignores what ‘dx’ actually means doesn’t indicate any error in my argument.” – Bee

    You earlier asked me to do a “favor”:

    “Do yourself a favor and check your above equation with the simplest case you can think of, like Coulomb force or so.” – Bee

    I had already done it! It proves you wrong. The fact that you make a false claim and refuse to read the facts does indicate an error in your argument. You need to check what you are claiming before you make assertions like:

    “Your point 2 is just wrong, in case of spherical symmetry it is not dx=dy=dz=dr but
    dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).” – Bee

    Once again, dx is not the same as dx^2 and a sum of squares of line elements doesn’t have anything to do with the equality of line elements in spherical symmetry.

    That you should invent such a claim as this is like claiming someone who says oranges are orange is somehow wrong because apples are green. You are inventing a false ‘disproof’ that is totally wrong, and then you’re claiming I don’t know general relativity.

    Maybe you need to be sure you are correct before making assertions, or – if you don’t have the time to check everything at the time of making an assertion – you should at least correct errors when pointed out to you.

    Best,
    Nigel

  45. Hopefully this really is the last of the responses I need to make to Bee:

    http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “This is wrong. Correct is dxdydx = r^2 sin \theta dr d\theta d\phi, where x,y,z, are the Euclidean coordinates.”

    No, in spherical symmetry r is defined as radial direction which is equal to x, to y and to z. The page you link to is for spherical coordinates, not spherical symmetry!

    The equation

    dxdydz = r^2 sin \theta dr d\theta d\phi

    applies to the general case, not specifically to spherical symmetry.

    Now we see why you are wrong: you are claiming that a specific solution of a formula (spherical symmetry where dr = dx and is equal to all other perpendicular elements) somehow is disproved by the general formula.

    If you look at the page you actually linked to,

    http://www.tf.uni-kiel.de/matwis/amat/elmat_en/kap_3/basics/b3_2_2.html

    you can see quite clearly that it has nothing to do with spherical symmetry, just with spherical coordinates.

    “‘Spherical symmetry means that x = y = z = r …’ Is just wrong. Spherical symmetry of a function means that that function only depends on the distance to the origin, that distance being r = \sqrt(x^2 + y^2 + z^2) where x,y,z are the Euclidean coordinates. Obviously x is not equal to y and z everywhere.” – Bee

    It’s not obvious, it’s plain wrong because the whole x axis is symmetric to the whole y axis and to the whole z axis. They’re all similar!

    Spherical symmetry around a point means that all the parameters varying as a function of distance from that point vary as the same function of distance, i.e., as a function of radial distance.

    If a star is spherically symmetrical, then any variation in any parameter as a function of distance in x direction is mirrored by y and z directions.

    This is why we can avoid talking of three separate directions in spherical symmetry and set them all equal.

    Suppose that you want to use calculus to find the volume of a sphere. You integrate the surface area, 4*Pi*r^2 over increasing radius,

    volume of sphere =

    {integral symbol}(4*Pi*r^2)dr

    = (4/3)*Pi*r^3.

    There is no inclusion of x, y and z dimensions, because they’re all the same as r in spherical symmetry!

    This relies on the fact that spherical symmetry means that x = y = z = r.
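    A quick symbolic check of that integral, as a minimal sketch using sympy:

    import sympy as sp

    r, R = sp.symbols('r R', positive=True)

    # Volume of a sphere by integrating the surface area 4*pi*r^2 over the radius.
    volume = sp.integrate(4 * sp.pi * r**2, (r, 0, R))
    print(volume)                                                  # 4*pi*R**3/3
    print(sp.simplify(volume - sp.Rational(4, 3) * sp.pi * R**3))  # 0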

    If you want to integrate to find the total mass observable in spacetime from us, you need to use x = y = z = r, because the universe looks the same in each direction. The fact that there is spherical symmetry means you can dispense with your formulae like

    dxdydz = r^2 sin \theta dr d\theta d\phi,

    because you are just dealing with the radial direction, outwards from us. For example, to find the total mass of the universe observable to us you would need to integrate the density over increasing radial distance, allowing for the fact that the density increases with apparent distance because we’re looking back to earlier epochs of the big bang (when density was higher).

    This is the sort of calculation I’ve been doing on the basis of spherical symmetry.

    “I am not in the mood to correct all other points that follow from that misunderstanding.” – Bee

    Well, I suppose that this saves me the time required to point out other inaccuracies, so thank you for that. Good luck with your research.

    Best,
    Nigel

  46. http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    For example, finding the observable mass of the universe by integrating over radial distance:

    M = {integral symbol}(4πr^2){rho} dr

    Density, {rho}, increases with distance r in proportion to the factor

    (1 – rH/c)^{-3}

    this is caused by the decreasing age of the universe that we observe at earlier times (i.e. at great distances).

    The integral taken out to r = c/H gives infinity, because the density tends towards infinity at such great distances (corresponding to small times since the big bang).

    This raises quite a lot of questions if gravitons are exchanged between all the masses in the universe. You get a problem with an infinite effective mass of the universe.

    So you are forced to take account of other factors such as redshift of exchanged gravitons, which reduces the strength of gravitational interactions when gravitons are exchanged over large distances (large redshifts). There are other factors like inflation, which would affect the result. But there are a lot of simple calculations that can be made like this which give clues about the nature of gravity.
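    Here is a minimal numerical sketch of the divergence just described: it integrates M = {integral symbol}(4πr^2){rho} dr with {rho} scaling as (1 – rH/c)^{-3}, out to an increasing fraction of r = c/H. The values assumed for H and for the present-day density rho0 are placeholders purely for illustration, and none of the corrections mentioned above (graviton redshift, inflation) are included:

    import math

    H = 2.3e-18        # assumed Hubble parameter, 1/s (about 70 km/s/Mpc)
    c = 3.0e8          # speed of light, m/s
    rho0 = 9.5e-27     # assumed present-day density, kg/m^3 (illustrative only)
    r_limit = c / H    # distance at which the density factor blows up

    def mass_out_to(fraction, steps=200000):
        # Riemann-sum estimate of the integral of 4*pi*r^2 * rho0*(1 - rH/c)^-3 dr,
        # taken out to the given fraction of r = c/H.
        r_max = fraction * r_limit
        dr = r_max / steps
        total = 0.0
        for i in range(steps):
            r = (i + 0.5) * dr
            rho = rho0 * (1.0 - r * H / c) ** -3
            total += 4.0 * math.pi * r**2 * rho * dr
        return total

    # The estimates keep growing as the fraction approaches 1, illustrating the divergence.
    for f in (0.5, 0.9, 0.99, 0.999):
        print(f"out to {f} of c/H: M is roughly {mass_out_to(f):.3e} kg")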

    Best,
    nige

  47. http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    “(you seem to implicitly assume without stating it that the origin of your coordinates is the center of the symmetry.)” – Bee

    I explicitly stated:

    “The assumption that there’s only one time dimension is the same as you would have if everything around you was radially symmetric, like living in the middle of an onion, where only changes as a function of radial distance (one effective dimension) are readily apparent: the Hubble constant, and hence the age of the universe (time), is independent of the direction you are looking in.”

    This was the whole point I’m making: that the spherical symmetry applies where the origin of the coordinates is the centre of symmetry.

    “I will repeat it one last time: spherical symmetry does not mean that x=y=z when x,y,z are Euclidean coordinates. The equation x=y=z is a constraint that singles out a one-dimensional curve in space-time (the one on which x=y=z).” – Bee

    That’s what I’ve been telling you: Euclidean coordinates in general have nothing to do with x=y=z. I’m only dealing with a very specific, particular situation in which everything around the observer is extremely similar regardless of direction. This is the situation of cosmology.

    “In case of spherical symmetry one can reduce the differential equations (depending on r, theta, phi) to a system with only a dependence on the radial coordinate, but this equation will not be the same as in the case where you’d only have a space-time with one dimension …” – Bee

    That’s only correct if the symmetry is not centred on the coordinate origin (0,0,0). You are apparently thinking about the general case, where the coordinate origin is not the centre of symmetry. This doesn’t have anything to do with any of my calculations, which are simplified by the very fact that the expansion rate of the universe, its density, etc., are very nearly isotropic.

    “You are taking your words and translating them incorrectly into equations.” – Bee.

    Ouch! I think you are mistaken, and I’d like to know where I’m wrong. Misunderstanding my discussion of being at the middle of a spherically symmetric onion (which I still think is a very clear example of what I mean, and translates perfectly into equations) doesn’t indicate that my equations are incorrect. Maybe in future I’ll add a more detailed discussion of spherical symmetry when I write about it, and show how it is derived from general spherical coordinates. (Then physicists will have to find something else to misunderstand so they can go on falsely claiming it to be complete nonsense.)

    Thank you very much for your time.

    Best,
    nige

  48. http://backreaction.blogspot.com/2007/06/early-extra-dimensions.html

    Hi Stefan,

    I’m very sorry that the discussion went the way it did but I did my best to be clear and was still misunderstood.

    I don’t have a “private version of vector analysis”. Maths is maths, it’s not my private version.

    If Bee didn’t understand it she could have said so in the first place instead of insulting my intelligence by claiming – falsely – that I’m somehow wrong when Bee missed the point, probably because she was busy with something more important.

    I’ve been answering claims made against my intelligence when I stated facts, not deliberately “hijacking the thread”.

    Time has been wasted, mine mainly because it’s more work to refute insults to your intelligence than to merely claim someone else is wrong without first checking what they are saying. This is elementary background stuff everyone should know, and diverts attention from the physics which is vital!

    I have no interest in responding to any more insults, and hope you have the decency to leave it at that. I won’t bring up physics points again on your blog, as it just leads to attacks on me lacking any substance, and wasting time.

    Still, thanks for your patience. The physics is here.

    Best,
    nige

  49. Nige,

    I’m interested in Bee’s comment to you where she claimed

    “… in case of spherical symmetry it is not dx=dy=dz=dr but dr^2 = dx^2 + dy^2 + dz^2 (provided that your x,y,z are euclidean coordinates).”

    She is confusing dx with dx^2, which has nothing to do with a linear gradient like the Hubble recession dv/dr = H, a constant, where dr is equal to dx, to dy and to dz,

    i.e.,

    Hubble recession

    dv/dr = H

    where

    dr = dx = dy = dz.

    This is due to the spherical symmetry of the Hubble recession. So she’s just wrong. It has nothing to do with dr^2 = dx^2 + dy^2 + dz^2. The fact that she continues to claim this shows she is unable to grasp basic concepts.

    I noticed she has an attitude, as expressed in her comment replying to Professor Bert Schroer on the “Not Even Wrong” blog on 17 Nov 06:

    “I think I made quite clear that I was pointing out there are different ways to argue, and I am not willing to start a discussion …” – Bee, http://www.math.columbia.edu/~woit/wordpress/?p=489#comment-19096

    Interesting attitude! She admits that there are different ways to argue to Prof. Schroer, but when it comes to physics, she is unable to grasp a different way.

  50. anon.,

    The equation you quote, dv/dr = H, is written differently from the Hubble law, which is v/r = H.
    You are correct, however, in that the ratio

    dv/dr = v/r = H

    for the cosmology we are interested in. I need to be clear that Bee and others are thinking of something completely different to this, and as a result there is confusion.
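    A one-line symbolic check of that ratio, as a minimal sketch assuming the linear Hubble law v = Hr with H treated as a constant:

    import sympy as sp

    r, H = sp.symbols('r H', positive=True)
    v = H * r                   # linear Hubble law, H treated as constant
    print(sp.diff(v, r))        # H  ->  dv/dr = H
    print(sp.simplify(v / r))   # H  ->  v/r  = H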

    Where Bee quotes some formulae, she is actually correct that the formulae have validity generally, but the formulae she quotes do not have anything to do with the particular statement I made.

    Because the Hubble recession is isotropic (the same in every direction we look), dr = dx = dy = dz in Hubble’s law:

    dv/dr = H where H is constant and r is observable radial distance from us.

    Bee is welcome to try to replace the isotropic equivalence of line elements dr = dx = dy = dz with dr^2 = dx^2 + dy^2 + dz^2, or with any other formula she likes. I don’t think she will get anywhere because the Hubble law

    dv/dr = H

    does not involve squares of line elements!

    The problem I have with this is just the way she is using false arguments to attack me, for example she is saying that dr = dx = dy = dz is wrong and the correct formula is dr^2 = dx^2 + dy^2 + dz^2. You can see this is wrong as an “argument”, because as you said,

    H = dv/dr

    in which isotropy demands the condition:

    dr = dx = dy = dz.

    This has simply nothing to do with Bee’s formula dr^2 = dx^2 + dy^2 + dz^2.

    (Furthermore, even granting that dr^2 = dx^2 = dy^2 = dz^2, Bee’s formula dr^2 = dx^2 + dy^2 + dz^2 is still wrong, because dr^2 = dx^2 = dy^2 = dz^2 implies that 3dr^2 = dx^2 + dy^2 + dz^2, so Bee has missed out the factor of 3.)

    The main point is that dr = dx = dy = dz is simply nothing to do with dr^2 = dx^2 + dy^2 + dz^2.

    There is simply no physical connection in Hubble law.

    I think what happens is that when some people see dr = dx = dy = dz, they remember a formula from their textbook that says ds^2 = dx^2 + dy^2 + dz^2, and then muddle the two up.

    Some people out there aren’t aware that in the Hubble recession dv/dr = H, where H is a constant which is inversely proportional to the age of the universe, and dr = dx = dy = dz, in fact dr is the radial line element in any dimension due to spherical symmetry.

    If those people support Bee’s claim that dx=dy=dz=dr is wrong and should be replaced by dr^2 = dx^2 + dy^2 + dz^2 , then that’s up to them.

    I now remember being asked (very rudely by somebody anonymous) about this sort of thing a few years ago at a science forum. I’ve obviously absolutely no idea who the anonymous person was and they did not allow me to reply to their question (it was a “rhetorical question” which they insisted on answering incorrectly themselves, and just dismissing the facts that I had written).

    At some stage it would be sensible to add a kids-type “frequently asked questions” section to the blog, answering common questions like this. Then, when the next person comes up with the same old question about replacing dr = dx = dy = dz in the Hubble law with dr^2 = dx^2 + dy^2 + dz^2 , they can be directed to “question number ###”, and it will avoid having time-wasting discussions of elementary basics.

    But, please, let there be no more discussion about this! It has nothing to do with real physics, so I’m tired of discussing it.

    If anyone has any problems or suggestions with the actual physics of the blog post, please let me know.

    Kindergarten stuff, like explaining why the Hubble law is a spherically symmetric velocity gradient and age-of-the-universe gradient, will be addressed when time permits.

    Thank you.

    Nigel

  51. Just in case anyone asks how photons get deflected by gravity in this mechanism, it should be pointed out that anything with momentum is given that momentum by the mass-giving ‘Higgs field’ or whatever, and the stress-energy tensor of general relativity (whether or not you agree with the smoothness of gravity source employed) shows that the origin of gravity fields can be mass, momentum, energy, etc.

    Maybe some readers aren’t convinced by facts but just by the authority of renowned experts, so in that case let us quote Professor P.C.W. Davies (before he made a fool of himself for adopting a flawed theory of time as a religion and collecting the million dollar Templeton Prize – see comment 1 above):

    “The fact that photons have no rest mass isn’t a problem because … they can never be at rest anyway …”

    – page 21 of P.C.W. Davies, The Forces of Nature, Cambridge University Press, London, 2nd ed., 1986.

  52. Relating to the CERN Document Server issue mentioned in this post, is the following statement:

    “In October 2004, CERN shut down their website service to arxiv.org blacklistees. They did this most likely under pressure from the “Americans”.”

    http://archivefreedom.org/freedom/furthersuppression.html

    Also see:

    “Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?

    by Frank J. Tipler

    Abstract- The notion that a scientific idea cannot be considered intellectually respectable until it has first appeared in a “peer” reviewed journal did not become widespread until after World War II. Copernicus’s heliocentric system, Galileo’s mechanics, Newton’s grand synthesis — these ideas never appeared first in journal articles. They appeared first in books, reviewed prior to publication only by their authors, or by their authors’ friends. Even Darwin never submitted his idea of evolution driven by natural selection to a journal to be judged by “impartial” referees. Darwinism indeed first appeared in a journal, but one under the control of Darwin’s friends. And Darwin’s article was completely ignored. Instead, Darwin made his ideas known to his peers and to the world at large through a popular book: On the Origin of Species. I shall argue that prior to the Second World War the refereeing process, even where it existed, had very little effect on the publication of novel ideas, at least in the field of physics. But in the last several decades, many outstanding physicists have complained that their best ideas — the very ideas that brought them fame — were rejected by the refereed journals. Thus, prior to the Second World War, the refereeing process worked primarily to eliminate crackpot papers. Today, the refereeing process works primarily to enforce orthodoxy. I shall offer evidence that “peer” review is NOT peer review: the referee is quite often not as intellectually able as the author whose work he judges. We have pygmies standing in judgment on giants. I shall offer suggestions on ways to correct this problem, which, if continued, may seriously impede, if not stop, the advance of science.”

    http://www.iscid.org/boards/ubb-get_topic-f-10-t-000059.html

  53. Info about the guy quoted in previous comment:

    “Frank J. Tipler is Professor of Mathematical Physics at Tulane University and a fellow with the International Society for Complexity Information and Design.”

  54. From: “Nigel Cook”
    To: “David Tombe”
    Sent: Sunday, July 01, 2007 10:41 PM
    Subject: Re: Trapped Energy and Vortices

    Dear David,

    Thanks for your email. There’s an error in the Standard Model but that doesn’t detract from the basic idea that mass (i.e., gravitational charge and inertial charge) is a separate field from electromagnetic charge.

    An electron is two things: negative electric field and accompanying magnetic field, and a separate mass charge in the vacuum around it.

    If you get rid of the mass charge, all you have left is the electric field.

    So a photon is basically like a positron and electron pair, minus the mass charges.

    My model, summarised at http://quantumfieldtheory.org/1.pdf with updates at
    https://nige.wordpress.com/ indicates that there is evidence that the electron has black hole size, which seems to suggest that it is like a black hole. I.e., gravity is the force trapping the energy current into a loop.

    If the gravity is provided by the separate mass field of the electron, then if you remove the mass field from the electron, it will have no reason to be confined anymore and will go off at a tangent still moving at speed c.

    One way to get rid of the rest masses from a positron and electron is to bring them together. The removal of the rest masses in this way transforms them into photons.

    Best wishes,
    Nigel

    —– Original Message —–
    From: “David Tombe”
    To: “Nigel Cook”
    Sent: Sunday, July 01, 2007 10:27 PM
    Subject: Trapped Energy and Vortices

    > Dear Nigel,
    > I can only think of one formula that could compromise
    > between your idea of trapped energy current and my idea of rotating
    > electron positron dipoles.
    >
    > It occurred to me last September when I was debating with
    > Norman Albers.
    >
    > He suggested to me that my rotating electron positron
    > dipoles in the non-radiative state represented a unit of EM radiation in a
    > radiative equilibrium.
    >
    > My own view is that EM radiation is a propagation of
    > angular acceleration through a sea of these dipoles.
    >
    > Hence, Albers has a point. A single dipole in the steady,
    > non-radiating state, is like a trapped EM wave.
    >
    > Such a dipole is in fact an aether vortex and in many
    > respects it is the fundamental particle in the universe (albeit dielectric
    > and splittable).
    >
    > This would be the best that I could do as regards linking
    > particles to trapped energy currents.
    >
    > Yours sincerely
    > David Tombe
    >
    > —-Original Message Follows—-
    > From: “Nigel Cook”
    > Reply-To: “Nigel Cook”
    > To: "David Tombe"
    > CC:
    > Subject: Re: Landis cites the Angel Jackson
    > Date: Sun, 1 Jul 2007 09:40:19 +0100
    >
    > David,
    >
    > You’re missing the whole point because Ivor Catt has gone off as usual on
    > some tangent instead of sticking to the key discovery of importance:
    >
    > When you treat the spread of charge across the capacitor plate as a long,
    > flat-topped logic signal entering the capacitor, it bounces around and
    > never slows down.
    >
    > So you get a new, dynamic, picture of what “static” charge is: chop up a
    > charged capacitor until you get unit charges and hence by reductio ad
    > absurdum you see that an electron is trapped electromagnetic energy. This
    > tells you the nature of matter: all charges are gravitationally trapped
    > energy currents.
    >
    > (Unfortunately, bigoted charlatans won’t consider the overwhelming
    > evidence for this, such as
    > https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/ )
    >
    > This is vital to understanding physics. That’s your “big deal”. The fact
    > that Ivor Catt is incompetent at explaining his own work is irrelevant.
    >
    > Cheers,
    >
    > nigel
    >
    >
    >
    > —– Original Message —– From: “David Tombe”
    > To:
    > Cc: "Nigel Cook"
    >
    > Sent: Thursday, June 28, 2007 5:21 PM
    > Subject: Re: Landis cites the Angel Jackson
    >
    >
    >>In 1975 while studying A-Level physics, I was taught that charge spreads
    >>out in the plates of a capacitor. What’s the big deal? It’s certainly not
    >>a new idea.

  55. Interesting comparison:

    “I have a lot of trouble believing that special relativity is false; if it is, then there is a preferred state of rest and both the direction and speed of motion must be ultimately detectable.” – Professor Lee Smolin, The Trouble with Physics, Houghton Mifflin, U.S. ed., 2006, p314.

    “U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.” – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.

    The apparent absurdity of the contradictions between the first and second quotations brings to mind the traditional children’s pantomime where the audience has to shout “look behind you” to warn Goldilocks that there is a bear eating her porridge.

    However, the mainstream physicist doesn’t see the bear eating the porridge even when he is looking at it. The +/-3 mK anisotropy in the cosmic background radiation with direction (and this is a far bigger anisotropy than the widely hyped “ripples” which indicate galaxy seeding), which allows the earth’s motion to be deduced absolutely with regard to the cosmic background radiation, is traditionally ignored (despite the reference to the “new aether drift” in the title of Muller’s May 1978 Scientific American article on the subject).

    Mainstream physicists actually prefer to ignore inconvenient evidence by pretending it is invisible to their eyes – much as the emperor in the fairytale ignored the fact that his clothes were invisible (even after the little boy pointed this out to the crowd) – than to face the consequences of not ignoring it.

    The cosmic background radiation is not the only evidence for an absolute reference frame: all rotation is absolute motion. The fact that the frame of absolute motion is only approximately represented by the cosmic background radiation (for linear motion) and by nearby stars (for rotational motion) doesn’t discredit the existence of an absolute frame of reference. In reality, all physical theories are approximate and all measurements are approximate. So the standard objections to such absolute motion and an absolute reference frame are totally bunk. Special relativity would only be true if the amount of matter and energy in the universe was zero. Instead of admitting this straight fact and its consequences, the integrity of physicists is being destroyed by empty obfuscation.

  56. copy of an email:

    —–
    From: “Nigel Cook”
    To: "David Tombe"
    Cc:
    Sent: Friday, July 20, 2007 12:29 PM
    Subject: Re: It’s as obvious as the Continental Divide

    Since you failed to delete me as requested, I’ll reply to this email you
    sent. In the Standard Model, which makes thousands of accurate predictions
    in particle physics from just 19 empirically observable inputs, exchange
    radiation mediates all forces over distances exceeding the range of the IR
    cutoff (0.5 MeV particle scattering energy), i.e., beyond 10^{-15} metre or
    1 femtometre, there is just bosonic exchange radiation mediating forces
    (within that distance, there are also pair-production loops of fermions with
    rest-mass present, hence short range forces occur in high energy physics).
    The exchange of such bosons causes gravity and inertial effects (resistance
    to acceleration) between field particles (some kind of “higgs” bosons) that
    give mass to the standard model charges in each particle, such as protons.

    Regarding the aberration of starlight: the fact that you need to tilt your
    telescope if the source of the light is in motion doesn’t imply an aether
    being dragged by the earth. Instead, just stick to the facts. The exchange
    radiation of quantum field theory is “dragged around” if charges move. As
    the earth orbits the sun, the sun is receiving exchange radiation
    (gravitons) from the earth in a direction which is changing. Similarly, as
    the earth rotates, you get the same effect. The Lorentz-FitzGerald
    contraction changes the shape of a mass when it accelerates (rotation is a
    centripetal acceleration, a = (v^2)/R). Once acceleration stops, the rate
    of contraction is zero. This is caused by (and compensates for) changes in
    exchange radiation due to motion of one body with respect to others. During
    acceleration, there is a net emission of gauge boson exchange radiation
    above that of reception, so you get gravitational waves or light waves being
    emitted. The normal exchange radiation is only manifested as ordinary
    forces such as inertia, gravity, electromagnetism, etc; because there is
    normally an equilibrium of exchange radiation (i.e. the power from place A
    to B equals that from B to A), the exchange radiation is invisible except
    for things like electric and gravitational fields.

    I’ve gone into this in detail.

    Nigel

    Click to access 1.pdf

    —– Original Message —–
    From: “David Tombe”
    To:

    Cc:

    Sent: Friday, July 20, 2007 11:34 AM
    Subject: It’s as obvious as the Continental Divide

    > Roger,
    > You may call it legitimate maths but it disguises the fact that
    > the
    > laboratory reference frame possesses distinct physical characteristics.
    >
    > A street light does not aberrate. Starlight does aberrate. Does
    > that
    > not tell you that the luminiferous medium is entrained with the Earth?
    >
    > Do pictures of the magnetosphere with its long tail not tell you
    > that solar gravity shapes the laboratory frame?
    >
    > Yours sincerely
    > David Tombe

  57. Bee and Lubos have had a difference in opinion over each other’s competency in physics:

    http://motls.blogspot.com/2007/07/phenomenology-of-quantum-gravity.html :

    Saturday, July 21, 2007
    Phenomenology of quantum gravity

    Sabine’s musings

    I consider Sabine’s (and Stefan’s) blog to be one of three most inspiring physics blogs in the world. Her openness about various topics is refreshing and it is also helpful that we share certain influences of the Central European cultural space.

    However, her opinions about a majority physics questions that go beyond the college material look scarily uninformed to me if not downright dumb.

    Her article “Phenomenological quantum gravity” is no exception. I guess that because of political correctness and her sex, no one ever tells her why her constructs are physically nonsensical. At “Loops 2007”, there is one more reason why she wasn’t told so – namely that all participants except for Moshe Rozali were cranks.

    The first oxymoron of the article is its title. No characteristic phenomena based on quantum gravity can be observed today and unless extra dimensions are very large or strongly warped, this situation will continue for quite some time. The assertion in the previous sentence is not a random guess because extra dimensions are necessary to change the dimensional analysis. Every honest person who understands both reality and theoretical physics and who has ever started to study quantum gravity must have known very well that quantum gravity has been a theoretical enterprise.

    In fact, today, it is a purely theoretical enterprise. Anyone who is doing quantum gravity and who says that she is doing so by a more physical analysis of experiments than others is cheating herself and the rest of the world, too. It is impossible because no such experiments are available and careful calculations show that in the likely scenarios, they will continue to be out of reach. They have been impossible for those 40 years during which the people were talking about quantum gravity, and frankly speaking, I estimate that they will continue to be impossible at least for next few decades. If someone promises that he or she will surely transform these phenomena into observational science in a foreseeable future is a liar.

    Quantum gravity is about doing the theory and mathematics carefully and right. It has never been motivated by easily doable experiments, it is not motivated by them today, and it will probably not be motivated by them in a foreseeable future.

    Questions to be answered

    At the beginning, Sabine outlines some of the questions that should be answered by the research and they make sense:

    But [the Standard Model] has also left us with several unsolved problems, questions that can not be answered – that can not even be addressed within the SM. There are the mysterious whys: why three families, three generations, three interactions, three spatial dimensions? Why these interactions, why these masses, and these couplings? There are the cosmological puzzles, there is dark matter and dark energy. And then there is the holy grail of quantum gravity.
    If you neglect the subtlety that families and generations are the same thing 🙂 – she might have mentioned three colors – everything is justifiable and somewhat conventional.

    However, you may notice that what she actually writes about later has absolutely nothing to do with any of these big questions about the Standard Model. And I will argue that it has nothing to do with quantum gravity either.

    She introduces the top-down and bottom-up approaches. That would be fine except that all of her additions are strange. Let me start with the terminological issues. First, she gives a new name to the top-down approach: she turns it into a “reductionist approach”. That’s a very bizarre identification. In reality, the particle phenomenologists and model builders are pretty much as staunch believers in reductionism as string theorists. The top-down vs bottom-up dichotomy is not about reductionism. Reductionism is a rational belief that a theoretical tunnel can be built to connect relatively complex perceptions with pure, elementary forms of existence – with fundamental forces and particles. Top-down and bottom-up approaches differ in the strategy how to dig this tunnel. If you wanted to find an approach that questions reductionism, you would have to talk to some condensed matter physicists such as Robert Laughlin who would like to talk about fundamental physics of space but who have no idea what they’re talking about.

    Principled vs constructivist theories

    Sabine’s own name for the bottom-up approach is “constructivist approach”. That’s less inaccurate than the “reductionist approach” but it is still a historically and logically misleading term. Recall that Albert Einstein divided physical theories into principled and constructivist theories. Principled theories start with a grand principle and then mathematically derive its consequences for our observations. Einstein named thermodynamics (the non-existence of perpetuum mobiles of various kinds being its principles) and both theories of relativity (with their postulates) as examples. On the other hand, constructivist theories, represented by statistical physics or quantum mechanics, build their new insights by grouping several known phenomena and quantitatively abstracting their common features.

    Most string theorists surely prefer the first approach even though the actual history of string theory has been a wiggly, phenomenological one. While we understand the theory pretty well these days, we still don’t know what is the universal principle behind all of it and whether such a principle exists at all.

    You might think that Einstein’s principled theories are results of top-down work while the constructivist theories are bottom-up, phenomenological models. However, the adjectives top-down and bottom-up contain something that Einstein couldn’t have understood well: the scales. When we talk about top-down and bottom-up things today, we literally talk about the pyramid of energy scales, with the top represented by the huge Planck energy and the bottom represented by the 0.1 TeV scale that is accessible to current experiments. These simple insights about the renormalization group have allowed us to organize our ignorance in a logical and visually satisfactory way. The long-distance physics is pretty much independent of the short-distance details which is both good as well as bad news. It’s good because we can learn long-distance physics without knowing the short-distance details. It’s bad because of the same reason, namely because we can’t directly learn short-distance physics from its long-distance manifestations.

    Concerning the “reductionist” approach, Sabine writes that

    The difficulty with this approach is that not only one needs that ‘promising candidate for the fundamental theory’, but most often one also has to come up with a whole new mathematical framework to deal with it.
    Well, developing and understanding mathematical frameworks has always been a difficulty for most people. I just wonder when theoretical physicists started to consider this a difficulty, too. Developing new mathematical frameworks has been the most important and most exciting part of theoretical physics at least since the era of Isaac Newton.

    Middle-of-nowhere approach

    All these things were just details. The real fun starts when Sabine tries to describe what is her own approach: does she prefer top-down over bottom-up? We learn that she can’t pronounce “phenomenology” so instead,
    I picture myself somewhere in the middle. People have called that ‘effective models’ or ‘test theories’. Others have called it ‘cute’ or ‘nonsense’. I like to call it ‘top-down inspired bottom-up approaches’. That is to say, I take some specific features that promising candidates for fundamental theories have, add them to the standard model and examine the phenomenology.
    I like to call it nonsense, too.

    Whenever we consider new physics, we either believe that it could be real, or we don’t believe it could be real. If we believe that it is not real, we shouldn’t talk about it. If we believe that it is real, we want to answer more detailed and refined questions about it, in order to make progress. There are two known classes of methods how to do so. One of them is to carefully analyze the internal structure of the new phenomena at their typical scale which is usually inaccessible to existing experiments because it is too high. This is the top-down approach. The other approach, the bottom-up approach, is to study consequences of the new phenomena for physics at accessible scales. Sabine’s words clearly sound more like the bottom-up approach than the top-down approach: so what does the extra fog mean?

    I think that her bizarre “compromise” of the two approaches is meant to allow one to be more sloppy than phenomenologists in their work but simultaneously pretend to be as fundamental as top-down theorists. In other words, the middle-of-nowhere approach is a systematic algorithm to create confusion. One of the main conceptual results of the last 35 years in theoretical physics has been the renormalization group – especially its insight that our knowledge can be organized according to scales. As far as I can say, if someone claims that there can be a middle-of-nowhere approach, she misunderstands this fundamental insight due to Ken Wilson et al.

    Sabine’s examples of her middle-of-nowhere approach are extremely diverse in character and require separate discussions. …

  58. More on the battle of Bee (Sabine) versus Lubos:

    http://backreaction.blogspot.com/2007/07/consistency.html#c4938917282854008220

    “At 5:39 AM, July 31, 2007, Bee said…
    Dear Lubos:

    Your ‘answer’ to my comment has hardly anything to do with what I wrote. As so often, you either deliberately or mistakenly misinterpret me, construct an opinion I don’t have, and then use it to confirm your already present belief that I am stupid. …”

  59. copy of a comment:

    https://nige.wordpress.com/about

    As mentioned in a comment on the original version of this post,

    https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

    there are two factors to take account of in dealing with the lack of deceleration of galaxy clusters at extreme redshifts predicted in 1996. See

    http://electrogravity.blogspot.com/2006/04/professor-phil-anderson-has-sense-flat.html

    This shows one simple mechanism: the calculation is made for the spacetime reference frame we are observing, so masses which appear to us to be near the visible horizon will also be near the visible horizon for calculational purposes in working out their recession. You can’t muddle up reference frames: in the reference frame we observe, objects at extreme red-shifts are near the boundary of the observable universe and we must calculate their gravity accordingly:

    we are only interested in calculating what we can observe from our reference frame, not in taking account of masses that may be at greater distances, which don’t contribute to what we are observing because they are beyond the visible horizon caused by the age of the universe dropping toward zero as radius approaches 13,700,000,000 light years.

    In addition to this mechanism by which recession velocities at high red-shifts are affected (asymmetry of gauge boson radiation due to location of galaxy with respect to observable centre of mass of universe being where we are, in our frame of reference [this is not a claim that we are in the centre of the universe, just that the surrounding mass of the universe is uniformly distributed around us, so gravitational deceleration of distant galaxies can be calculated simply by assuming the entire mass of the universe is where we are; similarly, in calculating gravity at Earth’s surface from Newton’s law, we can quite accurately assume that the entire mass of the Earth is located at a point in the middle of the Earth]), there is another mechanism at work:

    See my comment to:

    http://riofriospacetime.blogspot.com/2007/08/dark-energy-is-bad-for-astronomy_09.html

    Louise, this is very good. This “dark energy” groupthink is mainstream mythology, mob culture in physics. It was always like this.

    Back in 1667, Johann Joachim Becher “discovered” a substance later called “phlogiston”, in order to explain how some things (but not others) could burn. This idea caught on, with German chemist Georg Ernst Stahl naming it phlogiston after the Greek word for fire, and applying the idea to all sorts of problems in chemistry, finding it a useful descriptive model in many ways.

    The “phlogiston” was supposed to be released when something burns, and the fact that some things don’t burn was simply “explained” by postulating the absence of “phlogiston” inside them. All problems with the theory were automatically new discoveries; instead of writing that the theory was wrong, people would write that they had discovered that the theory needed such-and-such modifications to make it account for this-and-that.

    You see, once this was given a name, it entered science because it was “needed” to explain why certain things burn.

    Then “phlogiston theory” was taught in scientific education, as the only self-consistent theory of combustion (just like string theory is supposed to explain gravity today, because it’s self consistent).

    Unfortunately, although it was wonderfully self-consistent and it was easy to cook up a lot of maths to describe certain aspects of combustion based on this “phlogiston” theory, there was no experimental evidence for it. It became a self-propagating fantasy. How could something be named by a scientist if it had never been discovered? That seemed absurd, people thought, so they took the “evidence” for it (so indirect that it didn’t rule out alternative ideas) and the consensus behind it as making it scientific.

    If you burn wood, the ash is lighter, and the loss in mass was attributed to a loss of phlogiston from the wood. (Actually, the wood has simply released things like CO2 gas to the air during combustion, which accounts for the decrease in mass.)

    This was supposedly the proof of phlogiston theory. It was debunked by Antoine Lavoisier (the French chemist who was beheaded in the Revolution) in 1783, who showed that fire is primarily a process of oxidation, the gaining of oxygen from the air. (This had previously been obscured in studies of fire by the natural production of gases like CO and CO2.)

    Sadly, Lavoisier’s discovery that the air contains a vital ingredient for combustion, oxygen, and his dismissal of phlogiston, were both negated by his claim in his 1783 paper Réflexions sur le phlogistique that there is a fluid substance of heat called caloric.

    This caloric was supposed to be composed of particles which repel one another and thereby flow from hot bodies to cool ones, explaining how temperatures equalize over time.

    Sadi Carnot’s heat engine theory (which is quantitatively correct) was also developed from the false theory of caloric. Caloric as a fundamental fluid of conserved heat was disproved in 1798 by Count Rumford who showed that an endless amount of heat can be released by friction in boring holes in metal to make cannons. Caloric is not conserved.

    The “dark energy” theory is far worse than phlogiston and caloric.

    I think “aether” is an interesting thing to compare to dark energy. The problem is that the universe is expanding at an accelerating rate in the conventional analysis, which assumes that the field equation of general relativity, not quantum gravity, describes the cosmological expansion.

    Problem is, quantum gravity accounts for the observations without a cosmological constant:

    (1) the mainstream general relativity model says that a cosmological constant (describing dark energy) causes a repulsive effect that offsets gravitational attraction at very long distances (large redshifts).

    (2) quantum gravity (gravity due to gravitons of some sort exchanged between receding gravitational charges, i.e., masses) implies a very different explanation: gravitons are red-shifted to lower energy in being exchanged between masses over long distances (high redshifts).

    So in (1) above, otherwise unobservable “dark energy” provides a repulsive force that offsets gravity at great distances, thus explaining the supernova red-shift data.

    But in (2) above, the same supernova red-shift data can be explained by the loss of energy of red-shifted gravitons being exchanged between masses which are receding at relativistic velocities (large red-shifts).

    Hence, general relativity needs to take account of quantum gravity effects like graviton red-shift weakening gravity and decreasing the effective value of gravity constant G towards zero as red-shift (and distance) increase to extremely large figures.

    If general relativity is corrected in such a way, we get a prediction of the supernova results which allegedly (in the current uncorrected general relativity paradigm) show “acceleration”. Actually that “acceleration” is an artifact of the mainstream data processing, which assumes gravity constant G is not affected by large distances (when quantum gravity suggests otherwise; this fact was censored off arXiv).

    The entire mainstream theory is built on brainwashing, prejudice, groupthink, consensus, politics, and similar. Any effort to get those people to listen leads them to think that the person with the facts is just ignorant of the “beauty” and “elegance” of the mainstream model. It’s hopeless.

    **********************

    Updated summary at top of the blog:

    http://electrogravity.blogspot.com/

    Standard Model and General Relativity mechanism with predictions

    Galaxy recession velocity v = dR/dt = HR. (R is distance.) Acceleration a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 = 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.

  60. I’ve just revised that brief “summary” (very scanty introduction to the main idea) to reduce any possible confusion (however, the more precise scientifically it is, the more abstract and boring it will look for non-mathematical readers, but you can’t please everyone all the time):

    http://electrogravity.blogspot.com/

    Galaxy recession velocity: v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.
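
    A minimal numerical check of that figure, assuming (as the summary does) that H = 1/t with t roughly 13,700 million years and R = ct for the most distant receding matter, can be run in a few lines of Python:

        # Sketch of the arithmetic in the summary above: a = R*H^2 with H = 1/t, R = c*t.
        # The value of t is an assumption taken from the age of the universe quoted earlier.
        c = 3.0e8                          # speed of light, m/s
        t = 13.7e9 * 365.25 * 24 * 3600    # ~age of universe, seconds
        H = 1.0 / t                        # Hubble parameter, 1/s
        R = c * t                          # distance to the most distant receding matter, m
        a = R * H**2                       # = c*H
        print(a)                           # ~6.9e-10 m/s^2, the order of the 6*10^-10 quoted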

  61. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/galaxies-dark-and-distant_12.html

    “If the large Voids contain 55 per cent of total volume of the universe, then maybe the smaller Voids might contain the 20 percent needed to get to the 75 percent commonly thought of as the Dark Energy proportion.

    “A guess about the SomethingElse commonly thought of as Dark Energy might be that the Voids contain free conformal graviphotons that vary c, unlike the Ordinary-Matter-Dominated regions such as where we live in which graviphotons carrying the conformal c-varying degrees of freedom are frozen and suppressed.

    “Whether or not my guess has some seeds of truth, it is interesting that the percentage of volume of our universe in Voids is similar to the WMAP Dark Energy proportion of about 75 per cent.” – Tony Smith

    I wonder if people agree on what is meant physically by “dark energy”? If dark energy just means gauge boson exchange radiation energy, i.e. energy of gravitons, then that’s more physical and more reasonable. The confusion is illustrated by Lee Smolin writing in “The Trouble with Physics” (U.S. ed., page 209) that the acceleration of the universe due to “dark energy” is (c^2)/R:

    “… c^2 /R. This turns out to be an acceleration. It is in fact the acceleration by which the rate of expansion of the universe is increasing – that is, the acceleration produced by the cosmological constant … it is a very tiny acceleration: 10^-8 centimetres per second.”

    Obviously, Smolin or the publisher’s editor gets the units wrong (acceleration is centimetres per second^2). But there is a far deeper error.

    Take Hubble’s law known in 1929: v=HR.

    Acceleration is then:

    a = dv/dt = d(HR)/dt = Hv.

    For the scale of the universe, v = c and H = 1/t = c/R, so

    a = Hv = (c/R)c = (c^2)/R.

    Hence, we have obtained Smolin’s acceleration for the universe from Hubble’s law, by a trivial but physical calculation. The fact that velocity varies with distance in spacetime automatically implies an effective outward acceleration. That’s present in Hubble’s law which is built into the Friedmann-Robertson-Walker metric of general relativity.

    So there is a massive “coincidence” that the real acceleration of the universe, given by the fact that Hubble’s v = HR implies a = (c^2)/R, is identical to the acceleration allegedly offsetting gravitational deceleration at large redshifts!
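
    As a quick check on the quoted Smolin figure (and on the misprinted unit), here is a couple of lines of Python, again assuming R = ct with t about 13,700 million years:

        # a = c^2/R, the acceleration Smolin quotes; note the unit is cm/s^2, not cm/s.
        c = 3.0e8                              # m/s
        R = c * (13.7e9 * 365.25 * 24 * 3600)  # metres
        a = c**2 / R                           # m/s^2
        print(a, a * 100.0)                    # ~6.9e-10 m/s^2, i.e. ~6.9e-8 cm/s^2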

    Maybe this can be explained if we can reinterpret the cosmological constant and dark energy as the gauge boson exchange radiation energy which is being exchanged between masses. Gravitational attraction occurs as a shadowing effect (causing an anisotropy and hence a net force towards the mass which is shielding you), whereas the isotropic graviton pressure causes gravitational contraction effects (the (1/3)MG/c^2 = 1.5 mm shrinkage of Earth’s radius which Feynman deduces from GR in his Lectures on Physics), and also the expansion of the finite-sized universe (the impacts of gravitons being exchanged between a finite number of atoms in the universe cause it to expand).
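
    The Feynman contraction figure mentioned in passing is easy to verify; here is a short Python sketch with the usual values of G, the Earth’s mass and c (these constants are assumed, not taken from the post):

        # (1/3)GM/c^2 for the Earth, the contraction Feynman quotes in the Lectures.
        G = 6.674e-11        # m^3 kg^-1 s^-2
        M_earth = 5.972e24   # kg
        c = 2.998e8          # m/s
        print(G * M_earth / (3 * c**2))   # ~1.5e-3 m, i.e. about 1.5 mm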

    I also agree that dark matter exists. What is at issue here is how much there is and what evidence there is for it. There is dark matter around in neutrinos which have mass. I’ve seen papers showing that, when galaxies merge, their central black holes can sometimes be catapulted out of their galaxy in the chaos and end up in a void of space.

    But this paper astro-ph/0610520 is exceedingly speculative and builds on the mainstream guesswork general relativity model, which doesn’t contain quantum gravity mechanism corrections for general relativity on large (cosmological) scales.

    The actual nature of “dark matter” can be determined simply by working out the correct quantum gravity theory and correcting general relativity accordingly for exchange radiation (graviton) effects: if it turns out that the quantum gravity theory differs from Lambda-CDM in dispensing with 90% of the currently-presumed quantity of dark matter, then we know that the amounts of dark matter present in the universe are relatively small and can be explained using known physics.

    The density of the universe in the Lambda-CDM mainstream model of cosmology is approximately the critical density in the Friedmann-Robertson-Walker model,

    Rho = (3/8)*(H^2)/(Pi*G).

    This is the estimate of total density which is about 10-20 times higher than the observed density of visible stars in the universe. Hence this is the key formula which leads to the quantitative “prediction” (a very non-falsifiable prediction, well in the “not even wrong” category) that 90-95% of the mass of the universe is invisible dark matter.

    However, some calculations based on a quantum gravity mechanism suggest that when quantum effects are taken into account, the correct density prediction is different, being almost exactly a factor of ten smaller:

    Rho = (3/4)*(H^2)/(Pi*G*e^3)

    where e is the base of natural logarithms, and comes into this from an integral necessary to evaluate the effect of the changing density of the universe in spacetime (the density increases with observable distance, because of looking back to a more compressed era of the universe) on graviton exchange.

    This implies that the correct density of the universe may be around 10 times less than the critical density given by general relativity (which is wrong for neglecting quantum gravity dynamics, like G falling with the redshift of gravitons exchanged between receding masses over long distances in the universe, the variation in density of the universe in spacetime, where gravitons coming from great distances come from more compressed eras of the universe, etc.).
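
    To see the factor involved, here is a short Python comparison of the two density formulas; the value H = 70 km/s/Mpc is an assumption purely for illustration, and the ratio is independent of it:

        import math

        # Critical density (3/8)H^2/(pi*G) versus the corrected (3/4)H^2/(pi*G*e^3) above.
        G = 6.674e-11                  # m^3 kg^-1 s^-2
        H = 70e3 / 3.086e22            # 70 km/s/Mpc expressed in 1/s (illustrative)
        rho_critical = (3.0/8) * H**2 / (math.pi * G)
        rho_corrected = (3.0/4) * H**2 / (math.pi * G * math.e**3)
        print(rho_critical, rho_corrected)       # ~9e-27 and ~9e-28 kg/m^3
        print(rho_critical / rho_corrected)      # = (e^3)/2 ~ 10.04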

    So, instead of there being as much as 10-20 times as much dark matter as mass in the visible stars, the total mass of dark matter is probably at most only similar to the amount of visible matter.

    This probably means that escaped black holes and neutrinos can account for dark matter, which means that by studying quantum gravity effects, it is possible to determine the nature of dark matter (simply because you know the correct abundance). Of course, orthodoxy insists (falsely) that general relativity only needs correction for quantum gravity effects on small distances (high energy physics), not over massive distances.

    But physically any form of boson, including a graviton, should be affected by recession when being exchanged between two receding gravitational charges (masses). The redshift of the graviton received should weaken the gravity coupling and thus the effective value of G for gravitational interactions between receding (highly redshifted) masses.

    I was disappointed when Stanley G. Brown, editor of PRL, rejected my paper on this when I was studying at Gloucestershire university:

    Sent: 02/01/03 17:47
    Subject: Your_manuscript LZ8276 Cook

    Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories. …

    Yours sincerely,
    Stanley G. Brown,
    Editor, Physical Review Letters

    I didn’t seriously expect to have the paper published in PRL, but I did hope for some scientific reaction. After several exchanges of emails, Stanley G. Brown resorted to sending me an email saying that an associate editor had read the paper and determined that it wasn’t pertinent to string theory or any other mainstream idea. I then responded that it obviously wasn’t intended to be. Stanley G. Brown forwarded a final response from his associate editor claiming that my calculation was just a theory “based on various assumptions”. Actually, it was based on various facts determined by observations. E.g., Hubble’s law v = HR implies acceleration a = dv/dt = H*dR/dt = H*v = H*HR = 6*10^{-10} ms^{-2} for the matter receding at the greatest distances. This implies outward force of that matter of F=ma= m(H^2)R, and by Newton’s 3rd law you have an equal inward force (by elimination of possibilities, this inward force is that carried by gauge boson radiations like gravitons) which gives a mechanism for gravity by masses shadowing the inward-directed force of graviton exchange radiation.

    Maybe the focus with black holes could be on trying to understand existing experimentally verified facts? Instead of imaginatively filling the voids of the vacuum with speculative black holes based on dark matter estimates made from discrepancies between somewhat speculative or wrong models, it might be more productive to consider what the consequences would be if fundamental particles were black holes. The radius of a black hole electron is far smaller than the Planck length. The Hawking radiation emission from small black holes is massive; perhaps it is the gauge boson exchange radiation that causes force-fields? At least you can easily check that kind of theory just by calculating all the consequences.

    The Hawking black body radiating temperature depends on the mass of the black hole, and the radiant power of Hawking radiation then depends on that temperature and on the black hole event horizon radius (2GM/c^2), which provides the radiating surface. Hence you can predict the rate of emission of Hawking radiation from a black hole of electron mass. It’s immense, but that’s the sort of radiant power needed for the physical dynamics of gauge boson exchange radiation; the cross-section for capture of the radiation by other fundamental particle masses is very small (their cross-sections are the area of a circle of radius equal to the event horizon 2GM/c^2), so you need an immense radiant power to produce the observed forces.
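
    As a rough sketch of the size of that number, here is the black-body estimate in Python (the physical constants are standard values, assumed rather than taken from the post):

        import math

        # Hawking temperature, horizon radius and black-body power for a black hole
        # of electron mass, following the prescription in the paragraph above.
        G = 6.674e-11        # m^3 kg^-1 s^-2
        c = 2.998e8          # m/s
        hbar = 1.055e-34     # J s
        k_B = 1.381e-23      # J/K
        sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
        M = 9.109e-31        # electron mass, kg

        T = hbar * c**3 / (8 * math.pi * G * M * k_B)   # ~1.3e53 K
        r = 2 * G * M / c**2                            # ~1.4e-57 m
        P = sigma * (4 * math.pi * r**2) * T**4         # ~4e92 W: "immense", as stated
        print(T, r, P)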

    Obviously the Yang-Mills theory is physically real, and the exchange radiation is normally in some sort of equilibrium: the energy of gravitons (and other force-mediating radiation) falling into black hole-sized fundamental particles gets re-emitted as Hawking radiation (behaving as gravitons). So there is an exchange of gravitons between masses at the velocity of light.

    Undoubtedly believers in spin-2 gravitons can raise objections about this being unorthodox, but spin-2 gravitons haven’t actually been observed.

    Nigel

  62. Pioneer1,

    Thanks for the criticism of this historical sentence in my blog post, but see:

    http://www.fas.harvard.edu/~scdiroff/lds/NewtonianMechanics/CavendishExperiment/CavendishExperiment.html

    “The Cavendish apparatus basically consists of two pairs of spheres, each pair forming dumbbells … One dumbbell is suspended from a quartz fiber and is free to rotate by twisting the fiber; the amount of twist measured by the position of a reflected light spot from a mirror attached to the fiber. … the gravitational attraction between two sets of spheres twists the fiber, and it is the measure of this twist that allows the magnitude of the gravitational force to be calculated. … The apparatus was originally invented by the Rev. John Michell in 1795 to measure the density of the Earth. It was modified by Henry Cavendish in 1798 to measure G and subsequently by Coulomb to measure electrical and magnetic attraction and repulsion. Apart from the historical significance of the experiment, it’s really neat to see that you can measure such an incredibly weak force using such a simple device.”

    Maybe you can direct some fire at them, and also the publishers of Dr Asimov’s books which say Cavendish used a quartz fibre to measure G?

    I know from practical experience of this and other experiments that when you are doing this kind of thing, developments occur in stages. Cavendish in his first experiments may have used a copper wire, but that gave way to a quartz fibre for greater accuracy. Similarly, in his first paper he may not have calculated or used the symbol G = (Fr^2)/(mM). However, G is just a letter representing the proportionality factor and thus relative strength of the gravitational interaction. Cavendish measured all the values in (Fr^2)/(mM).

    The exact history of when the quartz fibre and the ratio for G were introduced into the experiment is trivial for my purposes. If you can produce evidence that none of these (quartz fibre and G calculation) developments to Cavendish’s experiment were his own doing, and can identify who actually did them, then maybe I can add their names to the discussion of the outline history.

    If you want to attack historical inaccuracy which is actually damaging mainstream science today, may I suggest that you take a look at Maxwell’s theory of electromagnetism: Maxwell got the equations wrong; he never wrote what are called the vector calculus “Maxwell’s equations”, which were written by Oliver Heaviside in 1893.
    Nobody worries about the physics of Maxwell’s “displacement current” equation, although Schwinger’s threshold of 1.3*10^18 volts/metre as the minimum electric field required for pair-production of polarizable charges in the vacuum prevents any “displacement current” in the vacuum in weak electric fields, such as those measurable in all radio waves, which according to Maxwell’s theory are waves of vacuum displacement current which are caused by, and in turn cause, electromagnetic fields. Maxwell’s theory of light is expressed as the Faraday induction equation curl E = -dB/dt and Maxwell’s “displacement current” equation curl B = x*dE/dt. As a result, the classical-quantum Maxwell-Yang-Mills electromagnetic unification is totally misunderstood today, which has serious implications for electroweak theory. Correct the photon theory, and you automatically get an accurate picture of what the electromagnetic force-field vector boson is (with its 4 rather than 2 polarizations).

  63. Re: the errors in Maxwell’s equations, see Dr Alan F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9), http://www.iop.org/EJ/abstract/0031-9120/10/1/011 quoted at http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated: ‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’ Maxwell tried to fix his original calculation deliberately in order to obtain the anticipated value for the speed of light, as shown by Part 3 of his paper, ‘On Physical Lines of Force’ (January 1862); as Chalmers explains: ‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of [root 2] smaller than the velocity of light.’

  64. Nigel, thanks for the comments.

    The Cavendish apparatus basically consists of two pairs of spheres, each pair forming dumbbells

    Harvard’s Lab Tutorial for the Cavendish experiment contains many inaccuracies and highlights how widespread Cavendish mythology is. The confusion arises because they refer to the toy pendulum they use in the lab as “Cavendish apparatus” as if it were a replica of Cavendish’s pendulum.

    This is a good example of how physicists confuse themselves by naming important concepts multiple times and using the same label for many different concepts. Most concepts in physics are context sensitive. This is a characteristic sign of pre-scientific fields.

    Maybe you can direct some fire at them.

    Yes. That’s one of my ongoing projects. The page did not have contact info but I will write to some people at Harvard physics department to see if they reply.
    Previously, I have written to Britannica and Nature asking them to correct mythological statements about the Cavendish experiment. If more people wrote to them, that would be helpful. Here’s the letter I wrote to Nature (email: correspondence@nature.com) and also to Britannica (email: CustomerService@us.britannica.com). If you communicate with them please let me know their response so that I can include it in my blog.

    Development occurs in stages.

    True. I agree.

    Cavendish in his first experiment may have used a copper wire, but that gave way to quartz fibre for greater accuracy.

    Yes, exactly. C.V. Boys, an assistant professor of physics at the Royal College of Science, first introduced the quartz fibre in order to build a smaller apparatus circa 1889. This site has the historical details.

    Similarly in his first paper he may not have calculated or used the symbol G… [but] Cavendish measured all the values in Fr^2/mM.

    I dispute that Cavendish measured F. He measured the excursion of the pendulum arm. He supplied some of the constants of the pendulum precisely and others he ignored. But the values of the constants of the pendulum are not relevant to the experiment because Cavendish never measured F. He never claimed that he did.

    I agree that it is irrelevant to your discussion if Cavendish used copper or quartz but it may be relevant if he did not measure force.

    My claim is that Newtonian force is occult. Occult does not exist in nature. Therefore, force is not a magnitude and cannot be measured. Consequently, Cavendish never measured the Newtonian force. More info about the history of G can be found in my wiki.

    To me the occult nature of the Newtonian force is definitive proof of its non-existence. But occult is not enough evidence to discredit a Newtonian concept in the framework of Newtonian physics. Failing to measure the Newtonian force, physicists defined the Cavendish experiment as the posthumous measurement of it.

    Today, physicists own the Cavendish experiment and they define it to prove their Newtonian doctrine. Only when the Cavendish experiment is freed from the ownership of physicists will it be understood for what it really is: a computation of the mean density of the Earth by an application of Kepler’s rule. The apparatus is redundant.

    The exact history of when the quartz fibre and the ratio for G were introduced into the experiment is trivial for my purposes.

    Yes. I agree.

    If you can produce evidence that ….

    Yes. I can produce evidence for both. I think the Oxford site referenced above should be good evidence that quartz was introduced in the 19th century. Cavendish gave the wire’s specification as “the wire by which the arm was suspended was 39 1/4 inches long, and was of copper silvered…”

    It is well known that G was first defined in 1894 by C.V. Boys. Cavendish did not know about it and he did not use it. B.E. Clotfelter’s 1987 article ‘Cavendish experiment as Cavendish knew it’ covers the issue in detail. Here’s another link where a physicist from the University of Texas states that Cavendish did not measure G.

    If you want to attack historical inaccuracy which is actually damaging mainstream science today, may I suggest that you take a look at Maxwell’s theory of electromagnetism….

    Many thanks for your reference to Maxwell’s fabrication. Maxwell did the same regarding the Cavendish experiment. He is the source of the mythology that the Cavendish apparatus has a mirror attached to it.

    The same exact process of rewriting history to make Maxwell’s equations “consistent with reigning ideology” also happened with the Cavendish experiment.
    “Reigning ideology” of Newtonian physics is Newtonism. Force is the faith of Newtonism. Force cannot be questioned within physics. So physicists simply rewrote history to match their Newtonian doctrine.

    In the 19th century, about two hundred years after Newton’s definition of force, this force had not yet been observed even though physicists had been trying constantly to observe it. C. V. Boys solved the problem by defining the Cavendish experiment as the first measurement of the Newtonian force by defining a unit he called G. Boys is practicing Newtonian Whig history. He is fixing history to fit it into the Newtonian doctrine in order to save Newton’s sacred authority. He was very successful. To this day the Cavendish experiment is believed by physicists to be the first measurement of the Newtonian force. It is a crime against science.

    I believe this shows that science and history are the same. My goal is not to “attack historical inaccuracies damaging to mainstream science.” I agree with you that it is trivial to correct what kind of wire Cavendish used. But what Cavendish measured or did not measure is not trivial. And that’s revealed by historical analysis. In this sense history and science are synonyms. Physics, in this sense, is not science. Because physicists took an experiment to “Determine the Density of the Earth” and rebranded it as “the first measurement of the Newtonian force.”

    Thanks for the inspirational comments.

  65. copy of a relevant comment:

    http://globalpioneering.com/wp02/more-cavendish-mythology

    Pioneer1,

    Thanks. I’m interested in your statement above:

    “My claim is that Newtonian force is occult. Occult does not exist in nature. Therefore, force is not a magnitude and cannot be measured. Consequently, Cavendish never measured the Newtonian force. …

    “To me the occult nature of the Newtonian force is definitive proof of its non-existence. But occult is not enough evidence to discredit a Newtonian concept in the framework of Newtonian physics. Failing to measure the Newtonian force physicists defined the Cavendish experiment as the posthumous measurement of it.”

    The word “force” to me is just rate of change of momentum or approximately the product, mass*acceleration, i.e., F = dp/dt ~ ma.

    Mass can be measured, momentum can be measured, and acceleration can be measured.

    So I don’t see a deep problem, really. If you don’t like F = GmM/r^2, then employ F=ma and you can write down acceleration a = GM/r^2, so your problem is sorted: acceleration is definitely measurable.

    Force might be occult in one sense, but you can measure both of the things you need to calculate its value.

    If you are going to attack force as being occult, then you could also attack momentum and energy.

    The problem with momentum occurs when you ask what momentum a photon of light has. If light hits a surface and is absorbed, it imparts a momentum of p = E/c, but if the light is reflected from the surface (say a rigid mirror), the total momentum imparted to the surface is p = 2E/c. The difference is because the reflection process can be considered as two events: the absorption of the photon (giving momentum p = E/c to the mirror) and then the re-emission of the photon in the opposite direction (giving a recoil to the mirror which – because of the reversed direction of the photon – adds a second p = E/c to the first impulse, so the total momentum is p = 2E/c).

    Therefore, the amount of momentum a photon is able to deliver to a target depends on whether the photon is absorbed or reflected back the way it came. [Obviously, there is a snag in my simple argument here, because in deriving p = 2E/c for reflection, I’m assuming that the mirror is perfectly rigid. If it really is perfectly rigid (i.e., of infinite mass) then it won’t actually recoil at all. If it is not perfectly rigid, then the reflection factor will be less than 2, because some of the incident energy will be lost before the photon is re-emitted and the new photon will have a longer wavelength and less energy, giving less recoil.]
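
    A couple of lines of Python make the two cases concrete (the 1 eV photon energy is just an arbitrary illustrative value):

        # Momentum delivered by a photon: absorption gives E/c, idealised reflection 2E/c.
        c = 2.998e8            # m/s
        E = 1.602e-19          # photon energy in joules (1 eV, illustrative)
        print(E / c, 2 * E / c)   # ~5.3e-28 and ~1.1e-27 kg m/s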

    The occult problem with energy is the problem of the reference frame in which a collision process is viewed. This is well known in particle physics, of course, where the conventional reference frame is that of the centre of mass for the system under consideration:

    Consider two colliding toy cars, and the occult nature of energy becomes clear.

    (1) Take two cars each 1 kg in mass and have them each travelling at 5 metres/second towards each other (total impact speed 10 metres/second). The total kinetic energy release is E = 2[(1/2)mv^2] = 25 Joules.

    (2) Take the same two cars but keep one stationary and have the other hit it at 10 metres/second. The total kinetic energy release is E = (1/2)mv^2 = 50 Joules.

    Now why the difference? When two 1 kg masses collide at a total speed of 10 m/s you get 25 Joules if the centre-of-mass is the reference frame, but you get 50 Joules for the same masses hitting at the same relative speed when you view the collision in the reference frame of one mass!

    Clearly the difference is due to the fact that kinetic energy is proportional to the square of velocity, so you get disproportionately more energy when one body is considered to have all the motion, than you do when you consider the motion to be equally distributed.
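
    In Python, using the figures from the two cases above:

        # Total kinetic energy of the two 1 kg cars in each reference frame.
        m, v = 1.0, 5.0                      # kg, m/s (each car, centre-of-mass frame)
        ke_com = 2 * 0.5 * m * v**2          # both cars moving at 5 m/s: 25 J
        ke_rest = 0.5 * m * (2 * v)**2       # one car at 10 m/s, the other at rest: 50 J
        print(ke_com, ke_rest)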

    But surely conservation of energy is violated? Of course, it is not violated.

    I’ve had a long correspondence by email with Dr Mario Rabinowitz over various things, often consisting of politely worded arguments. One problem he had with my work was my argument that in the expanding universe, radiation being exchanged between charges (in the case of gravitational force, the charge is mass) is redshifted to lower energies. This means that masses receding from one another at immense distances will exchange redshifted radiation, including redshifted “gravitons” in any Yang-Mills quantum field theory of gravity. As a result, at extreme redshifts, quantum gravity should be reduced in strength in an expanding universe, because redshifted gravitons have lower energy (E=hf). Mario suggested that this would violate conservation of energy. However, it is just usual redshift theory.

    If you think about the redshift of any radiation in an expanding universe, say the most severely redshifted radiation there is – the cosmic background radiation – what has happened to the principle of conservation of energy there?

    It’s fairly obvious that energy is conserved, and what happens is that photons get “stretched out” longitudinally in the expanding universe, so the radiation expands in length as the universe expands, filling the same proportion of the volume of the universe.

    The total energy present remains the same; it is just that the transverse frequency falls because the photon gets longer. What is occurring as the universe expands is that the energy density of radiation in space falls (because the same energy is distributed over a bigger volume), but the total energy remains constant. However, it gets redshifted to lower frequency, so its entropy increases and it is a less useful form of energy.

    There is still a bit of mystery here when you consider a single photon that is redshifted due to the expansion of the universe. How can the frequency of a single photon in the cosmic background radiation get shifted to a lower value as the universe expands, without violating conservation of energy, E = hf?

    If individual photons in the cosmic background do lose energy as they get more and more redshifted, where does that energy go? Clearly the answer has to do with reference frames. The cosmic background radiation we see is coming from a vast distance associated with a massive recession velocity. So the paradox in energy when considering a single photon is that we’re comparing dissimilar energies, because the reference frames are different. When we consider a single photon before redshift, we’re calculating its energy in a reference frame far from us, receding at a massive rate. When we consider the redshifted photon arriving here on earth, we’re changing reference frames to one in which (to us) the photon appears to be severely redshifted.

    To understand conservation of energy here, note that energy is only conserved within a given reference frame. So to get into the reference frame of a photon of the cosmic background radiation, you’d need to be in a spaceship travelling outward from the earth at the same speed that the gaseous fireball matter (at the location where the cosmic background radiation originated) was receding at.

    Once you get up to that speed, so that you are in the same reference frame as the emitting matter, the photons of cosmic background radiation will no longer be redshifted to 2.7 K; they will have a temperature of 3000 K or so (just as they did at 400,000 years after the big bang). This is because the radiation hitting you head on is (to you) blueshifted to higher energy as you accelerate to higher speeds. So if you get into the same frame of reference as the radiation when it was emitted, energy is perfectly conserved.

    It’s really shameful how hard it is to get clear explanations of physics from textbooks and lecturers on some things. You end up having to try to work answers out for yourself. Another problem with reference frames regards Newton’s gravitational force law:

    (a) Suppose you have an apple falling to earth; then the law is F = GmM/r^2 where m is the mass of the apple and M is the mass of the earth. Here, the force F is due to the earth upon the apple (the gravitational field around an apple is trivial). This is proved because F = ma, so F = GmM/r^2 is due simply to the acceleration field around the big mass M, acceleration a = GM/r^2.

    (b) Now suppose you have two planets of approximately equal mass, m and M. In that case F = GmM/r^2 is wrong. The reason is that each mass then has its own significant gravitational acceleration field around it, so the total acceleration is then

    a = (Gm/r^2) + (GM/r^2)

    = G(m + M)/r^2.

    Hence the total force of one planet with respect to the other is generally given by:

    F = Ma = GM(m + M)/r^2.

    This is substantially different from Newton’s law because for identical masses m = M gives the solution:

    F = 2GMM/r^2

    This is obviously twice what Newton’s law says! The inaccuracy in Newton’s law is not a substantial problem because in the solar system, the mass of one body is always much smaller than the other, so for instance the acceleration of the sun towards the earth’s mass can be excluded and the only significant acceleration is a = MG/r^2 where M is sun’s mass, giving F = ma = mMG/r^2 which is Newton’s law.

    It’s only where both masses are similar (which would be the case in certain binary stars) that Newton’s law is wrong.
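
    Here is a minimal numeric illustration of that factor of two, comparing the closing (relative) acceleration of two comparable masses with the one-body field; the masses and separation are arbitrary values assumed for illustration only:

        # Relative (closing) acceleration of two comparable masses versus the
        # one-body field GM/r^2, as discussed above.
        G = 6.674e-11      # m^3 kg^-1 s^-2
        m = M = 1.0e24     # kg, illustrative equal masses
        r = 1.0e9          # separation, m
        a_one = G * M / r**2             # field of one mass alone
        a_rel = G * (m + M) / r**2       # each mass accelerates toward the other
        print(a_rel / a_one)             # = 2 for equal masses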

    This is maybe useful evidence that force, as defined in Newton’s law for gravity, is occult, and it is far less confusing to consider the accelerative field surrounding a mass than to consider the “force”. There are also problems with a = MG/r^2 due to modifications introduced by general relativity. Because radial coordinates are contracted around a mass in general relativity (just as masses in motion are contracted in the direction of motion), the effective strength of gravity gets altered.

    The reason for this effect in general relativity is that when Newton’s a = MG/r^2 is converted into tensor mathematics notation you get R_(ab) = 4*Pi*GT_(ab), which “when combined with the contracted Bianchi identity … leads to the conclusion that the trace T of the energy-momentum tensor … has to be constant throughout spacetime. This is blatantly inconsistent with ordinary (non-gravitational) physics. Accordingly Einstein … came up with [a correction factor to the Newtonian tensor model] we now know as Einstein’s field equation R_(ab) – (1/2)Rg_(ab) = 8*Pi*GT_(ab).” (Quoted from Penrose, Road to Reality, ch. 19, section 7. Penrose includes a minus sign on the source term, right hand side, for the direction of arrows on the gravitational field lines drawn between masses, but that is just convention and confuses the story for quantum gravity where the field lines represent the paths of gravitons being exchanged between masses, i.e. gravitons are travelling in both directions along such lines.)

    Still another problem with Newton’s a = MG/r^2 introduced by general relativity is that for light speed radiation crossing the radial gravitational field lines at right angles, the deflection of the radiation is not given by a = MG/r^2 but by a = 2MG/r^2. This is because a low speed object has its electric field lines extending in 3 spatial dimensions, but a light speed object is contracted in the direction of motion, so its electric field lines in the direction of propagation have zero magnitude. All of the electric field lines from a light velocity object are in the transverse direction (at right angles to the line of propagation). The geometry of how the electric field lines from a photon and a slow moving object interact with gravitational field lines then shows that for a given number of electric field lines, you get twice the interaction when the object is going at light velocity (a photon) as when it is going at low speed.

    So there are many failures in Newton’s law, although the usual way the corrections are handed out does not inspire much understanding.

  66. Obviously the comment I made above showing F = 2GMM/r^2 for two similar masses M will affect the calculation made in this post, where I assumed F = GMM/r^2. The correction factor needed is a factor of 2. I will leave the post as it is for the moment.

  67. Nigel,

    Thanks for the comment.

    The word “force” to me is just rate of change of momentum or approximately the product, mass*acceleration, i.e., F = dp/dt ~ ma.

    I would say that, the way you read it, “rate of change of momentum” and “mass*acceleration” and “force” are synonyms.

    The way I read F = dp/dt = ma all these terms F, dp, dt, m and a are placeholders that must be cancelled when we want to compute orbits. The placeholders are not magnitudes, instead they hide the two magnitudes, R=radius of the orbit and T=the period of the orbit.

    What remains after placeholders are eliminated is half of Kepler’s law R/T^2. Newton labeled this half of Kepler’s law “Force.” He also labeled the other half 1/R^2 “Force.” Since orbits cannot be computed with expressions containing placeholders Newton cancelled force terms to recover Kepler’s rule and used Kepler’s rule for astronomical calculations. Newton called Kepler’s rule Newton’s laws. Physicists still read Kepler’s rule as Newton’s law.

    Mass can be measured, momentum can be measured, and acceleration can be measured. So I don’t see a deep problem, really. If you don’t like F = GmM/r^2, then employ F=ma and you can write down acceleration a = GM/r^2, so your problem is sorted: acceleration is definitely measurable.

    You just cancelled force. It is irrelevant if I measure acceleration and assign it to GM/r^2. In a = GM/r^2 there is no force. Force terms are eliminated. What remains is

    R/T^2 = (some constant) 1/R^2

    This is Kepler’s rule. Yes. R and T are magnitudes and can be measured. Force was a placeholder not magnitude and it cancelled. You cannot cancel R or T without destroying the proportionality.

    When you eliminate the occult Newtonian force to obtain the working proportionality the force terms are gone.

    Force might be occult in one sense, but you can measure both of the things you need to calculate it’s value.

    Not true. The working equation has no force terms in it. Only non-working definitions F=R/T^2 and F=1/R^2 do.

    If you are going to attack force as being occult, then you could also attack momentum and energy.

    Why discuss momentum and energy? In this context momentum is a placeholder. We can reduce momentum to its constituents. There is no reason to discuss the small m, it gets eliminated. What remains is velocity v=R/T. I have no problem with R/T.

    But for the moment let’s think in terms of the Cavendish experiment, because in physics the Cavendish experiment is given as the first measurement of the Newtonian force.

    What did Cavendish measure? He measured the excursion of the pendulum arm.

    Cavendish did not measure momentum. He did not measure mass and he did not measure acceleration. He measured distance. How can we claim that Cavendish measured force?

    In order to claim that Cavendish measured force we need to look at the formula he used to compute the density of the earth. This is the formula:

    N^2/10844 D

    N is the period and D is the divisions the arm moved as Cavendish changed the position of the weights. There is no term for force in this expression.

    The pendulum itself did not contain something called force. Cavendish did not measure a quantity called force. The experimental equation that Cavendish used to obtain mean density of the earth did not contain a term for force either. If so, how can we claim that Cavendish measured force?

    The pendulum arm oscillates as simple harmonic motion. No force term enters into the description of the pendulum’s motion. If you know such an expression please let me know.

    What is the effect of the Newtonian force emanating from the so-called attracting weight?

    If we assume that the attracting weight indeed attracts the pendulum arm, it does so in an intelligent way: the lead weight calculates the attraction necessary according to Newtonian laws and sends that information instantaneously to the small weight attached to the arm of the pendulum. The small weight does its own intelligent calculations and sends it back to the attracting weight and it moves legally according to Newtonian laws. All this happens instantaneously in zero time.

    This is occult.

    The motion of the pendulum is not described with an equation which includes force. If we write down F=GmM/r^2 and F=ma and then equate them and eliminate Fs this means that the motion does not require force. Fs canceled. Fs were put there to save Newton’s authority.

    Again, Newton’s law is nothing other than Kepler’s rule. Newton defined 1/R^2 = F and R/T^2 = F. When he wanted to do astronomical calculations he canceled Fs and used Kepler’s law. In this sense Newton’s force is not even occult. It is a placeholder which is written to save Newton’s authority then eliminated.

    So I would like to find an equation describing the motion of the Cavendish pendulum which has a term for force in it.

    Thanks for this comment. Plenty of good information. I’ll reply to other points you make as well.

  68. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    “Theoretically if an accelerator fired enough mass into a tiny space a singularity would be created. The Black Hole would almost instantly evaporate, but could be detected via Hawking radiation. Unfortunately quantum mechanics says that a particle’s location can not be precisely measured. This quantum uncertainty would prevent us from putting enough mass into a singularity.”

    I disagree with Lisa Randall here. It depends on whether the black hole is charged or not, which changes the mechanism for the emission of Hawking radiation.

    The basic idea is that in a strong electric field, pairs of virtual positive fermions and virtual negative fermions appear spontaneously. If this occurs at the event horizon of a black hole, one of the pair can at random fall into the black hole, while the other one escapes.

    However, there is a factor Hawking and Lisa Randall ignore: the requirement of the black hole having electric charge in the first place, because pair production has only been demonstrated to occur in strong fields, the standard model fields of the strong and electromagnetic force fields (nobody has ever seen pair production occur in the extremely weak gravitational fields).

    Hawking ignores the fact that pair production in quantum field theory (according to Schwinger’s calculations, which very accurately predict other things like the magnetic moments of leptons and the Lamb shift in the hydrogen spectrum) requires a net electric field to exist at the event horizon of the black hole.

    This in turn means that the black hole must carry a net electric charge and cannot be neutral if there is to be any Hawking radiation.

    In turn, this implies that Hawking radiation in general is not gamma rays as Hawking claims it is.

    Gamma rays in Hawking’s theory are produced just beyond the event horizon of the black hole by as many virtual positive fermions as virtual negative fermions escaping and then annihilating into gamma rays.

    This mechanism can’t occur if the black hole is charged, because the net electric charge [which is required to give the electric field which is required for pair-production in the vacuum in the first place] of the black hole interferes with the selection of which virtual fermions escape from the event horizon!

    If the black hole has a net positive charge, it will skew the distribution of escaping radiation so that more virtual positive charges escape than virtual negative charges.

    This, in turn, means that the escaped charges beyond the event horizon won’t be equally positive and negative; so they won’t be able to annihilate into gamma rays.

    It’s strange that Hawking has never investigated this.

    You only get Hawking radiation if the black hole has an electric charge of Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar).

    (This condition is derived below.)

    The type of Hawking radiation you get emitted is generally going to be charged, not neutral.

    My understanding is that the fermion and boson are both results of fundamental preons. As Carl Brannen and Tony Smith have suggested, fermions may be a triplet of preons, to explain the three generations of the standard model and the colour charge in SU(3) QCD.

    Bosons of the classical photon variety would generally have two preons: because their electric field oscillates from positive to negative (the positive electric field half cycle constitutes an effective source of positive electric charge and can be considered to be one preon, while the negative electric field half cycle in a photon can be considered another preon).

    Hence, there are definite reasons to suspect that all fermions are composed of three preons, while bosons consist of pairs of preons.

    Considering this, Hawking radiation is more likely to be charged gauge boson radiation. This does explain electromagnetism if you replace the U(1)xSU(2) electroweak unification with an SU(2) electroweak unification, where you have 3 gauge bosons which exist in both massive forms (at high energy, mediating weak interactions) and also massless forms (at all energies), due to the handedness of the way these three gauge bosons acquire mass from a mass-providing field. Since the standard model’s electroweak symmetry breaking (Higgs) field fails to make really convincing falsifiable predictions (there are lots of versions of Higgs field ideas making different “predictions”, so you can’t falsify the idea easily), it is very poor physics.

    Sheldon Glashow and Julian Schwinger investigated the use of SU(2) to unify electromagnetism and weak interactions in 1956, as Glashow explains in his Nobel lecture of 1979:

    ‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).’

    This is plain wrong: Glashow and Schwinger believed that electromagnetism would have to be explained by a massless uncharged photon acting as the vector boson which communicates the force field.

    If they had considered the mechanism for how electromagnetic interactions can occur, they would have seen that it’s entirely possible to have massless charged vector bosons as well as massive ones for short range weak force interactions. Then SU(2) gives you six vector bosons:

    Massless W_+ = +ve electric fields
    Massless W_- = -ve electric fields
    Massless Z_o = graviton (neutral)

    Massive W_+ = mediates weak force
    Massive W_- = mediates weak force
    Massive Z_o = neutral currents

    Going back to the charged radiation from black holes, massless charged radiation mediates electromagnetic interactions.

    This idea, that black holes must evaporate (if they are real) simply because they are radiating, is flawed: air molecules in my room are all radiating energy, but they aren't getting cooler: they are merely exchanging energy. There's an equilibrium.

    Equations

    To derive the condition for Hawking's heuristic mechanism of radiation emission: Hawking argues that pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.

    Schwinger’s threshold electric field for pair production is: E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040

    So at least that electric field strength must exist at the event horizon, before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking's suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.

    Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so

    E = Q/(4*Pi*Permittivity*R^2) v/m.

    Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

    where Q is the black hole's electric charge, e is the electronic charge, and m is the electron's mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

    Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

    where M is black hole mass.

    So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole.
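    For anyone who wants to check the arithmetic, here is a minimal Python sketch (constants are rounded standard values, so the exact output depends slightly on the precision used) which evaluates Schwinger's threshold field, the 33 fm radius quoted above, and the threshold charge Q for a black hole of a given mass:

# Minimal numerical check of the pair-production threshold argument above.
# Constants in SI units; slightly different values give slightly different output.
pi   = 3.141592653589793
c    = 2.998e8        # speed of light, m/s
e    = 1.602e-19      # electronic charge, C
hbar = 1.055e-34      # reduced Planck constant, J.s
m_e  = 9.109e-31      # electron mass, kg
G    = 6.674e-11      # gravitational constant
eps0 = 8.854e-12      # vacuum permittivity

# Schwinger's threshold field for pair production:
E_c = (m_e**2) * (c**3) / (e * hbar)
print("Schwinger threshold field: %.2e V/m" % E_c)      # ~1.3e18 V/m

# Radius at which an electron's Coulomb field falls to E_c:
r_c = (e / (4 * pi * eps0 * E_c)) ** 0.5
print("Radius where E = E_c: %.1f fm" % (r_c * 1e15))    # ~33 fm

# Threshold charge for a black hole of mass M to radiate by the mechanism above:
def Q_threshold(M):
    return 16 * pi * eps0 * (m_e * M * G) ** 2 / (c * e * hbar)

M = m_e   # e.g. treat the electron itself as a black hole
print("Q threshold for M = electron mass: %.2e C (electron charge = %.2e C)"
      % (Q_threshold(M), e))

    For a black hole with the electron's mass, the threshold charge comes out at roughly 10^-106 coulombs, vastly less than the electronic charge, so on this argument an electron-mass black hole carrying charge e satisfies the condition with an enormous margin.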

    On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

    When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.

    To summarize: Hawking, considering uncharged black holes, says that either of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positively charged black hole similarly behaves as a source of positively charged exchange radiation.

    These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    It’s amazing how ignorant mainstream people are about this. They don’t understand that charged massless radiation can only propagate if there is an exchange (vector boson radiation going in both directions between charges) so that the magnetic field vectors cancel, preventing infinite self inductance.

    Hence the whole reason why we can only send out uncharged photons from a light source is that we are only sending them one way. Feynman points out clearly that there are additional polarizations but observable long-range photons only have two polarizations.

    It’s fairly obvious that between two positive charges you have a positive electric field because the exchanged vector bosons which create that field are positive in nature. They can propagate despite being massless because there is a high flux of charged radiation being exchanged in both directions (from charge 1 to charge 2, and from charge 2 to charge 1) simultaneously, which cancels out the magnetic fields due to moving charged radiation and prevents infinite self-inductance from stopping the radiation. The magnetic field created by any moving charge has a directional curl, so radiation of similar charge going in opposite directions will cancel out the magnetic fields (since they oppose) for the duration of the overlap.

    All this is well known experimentally from sending logic signals along transmission lines, which behave as photons. E.g. you need two parallel conductors at different potential to cause a logic signal to propagate, each conductor containing a field waveform which is an exact inverted image of that in the other (the magnetic fields around each of the conductors cancels the magnetic field of the other conductor, preventing infinite self-inductance).

    Moreover, the full mechanism for this version of SU(2) makes lots of predictions. So fermions are black holes, and the charged Hawking radiation they emit is the gauge bosons of electromagnetism and weak interactions.

    Presumably the neutral radiation is emitted by electrically neutral field quanta which give rise to the mass (gravitational charge). The reason why gravity is so weak is because it is mediated by electrically neutral vector bosons.

  69. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    Tony,

    You wrote here (that is a U.S. Amazon book discussion comment, where I can’t contribute as participants need to have bought books from the U.S. Amazon site, and being in England I’ve only bought books from Amazon.co.uk):

    … shortly after Baez described his Six Mysteries in Ontario, I sent an e-mail message to Smolin saying:

    ‘… I would like to present, at Perimenter, answers to those questions, as follows: Mysteries 2 and 3: The Higgs probably does exist, and is related to a Tquark-Tantiquark condensate, and mass comes from the Standard Model Higgs mechanism, producing force strengths and particle masses consistent with experiment, as described in http://www.valdostamuseum.org/hamsmith/YamawakiNJL.pdf and http://www.valdostamuseum.org/hamsmith/TQ3mHFII1vNFadd97.pdf

    ‘Mystery 4: Neutrino masses and mixing angles consistent with experiment are described in the first part of this pdf file http://www.valdostamuseum.org/hamsmith/NeutrinosEVOs.pdf Mystery 5: A partial answer: If quarks are regarded as Kerr-Newman black holes, merger of a quark-antiquark pair to form a charged pion produce a toroidal event horizon carrying sine-Gordon structure, so that, given up and down quark constituent masses of about 312 MeV,the charged pion gets a mass of about 139 MeV, as described in http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf Mysteries 6 and 1:The Dark Energy : Dark Matter : Ordinary Matter ratio of about 73 : 23 : 4 is described in http://www.valdostamuseum.org/hamsmith/WMAPpaper.pdf

    I’m extremely interested in this, particularly the idea that the mass-providing boson is a condensate particle formed of a Top quark and an anti-Top quark, like a meson. I’m also extremely interested in quarks modelled as Kerr-Newman black holes in the pion, to predict the mass. Your mathematical technical approach is not easy going for me, however.

    Maybe I can outline some independent information I’ve acquired regarding three basic scientific confirmations that fermions are indeed black holes, emitting gauge bosons at a tremendous rate as a form of Hawking radiation:

    (1) The “contrapuntal model for the charged capacitor”, which I’ll explain in detailed numbered steps below:

    (1.a) All electric energy carried by conductors travels at light velocity for the insulator around the conductors.

    (1.b) A small section of a (two-conductor) transmission line can be charged up like a capacitor, and behaves like a simple capacitor, storing electric energy.

    (1.c) Charge up that piece of transmission line, using sampling oscilloscopes to record what happens, and you learn that energy flows into it at light velocity for the insulator.

    (1.d) There is no mechanism for that electricity to suddenly slow down when it enters a capacitor. It can’t physically slow down. It reflects off the open circuit at the far end and is trapped in a loop, going up and down the transmission line endlessly. This produces the apparently “static” electric field in all charges. The magnetic fields from each component of the trapped energy (going in opposite directions) curl in different directions around the propagation direction, so the magnetic field cancels out.

    (1.e) The “field” (electromagnetic vector boson exchange radiation) that causes electromagnetic forces controls the speed of the logic signal, and the electron drift speed (1 millimetre/second for 1 Amp in typical domestic electric cables) has nothing to do with it.

    (1.f) Electricity in paired conductors is primarily driven by vector boson radiation (comprising the electromagnetic “field”). The electron drift current, although vital for supplying electrons to chemical reactions and to cathode emitters in vacuum tubes, is pretty much irrelevant as far as the delivery of electric energy is concerned. (It’s easy to calculate what the kinetic energy of all the electron drift in a cable amounts to, and it is insignificant compared to the amount of energy being delivered by electricity. This is because of the low speed of the electron drift in typical situations, combined with the fact that the conduction electrons have little mass so their total mass is typically just ~0.1% of the mass of the conductors. Kinetic energy E = (1/2)mv^2 tells you that for small m and tiny drift velocity v, electron drift is not the main source of energy delivery in ordinary electricity. Instead, gauge/vector bosons in the EM field are responsible for delivering the energy. Hence, by a close study of the details of how logic pulses interact and charge up capacitors – which is not modelled accurately by Maxwell’s classical model – something new about the EM vector bosons of QFT may be deduced from solid, repeatable experimental data!)

    (1.g) The trapped light velocity energy in a capacitor is unable to slow down, and the effect of it being trapped leads to the apparently "static" electric field and nil magnetic field (as explained in 1.d above). Another effect of the trapping of energy is that there is no net electric field along the charged up capacitor plate: the potential is the same number of volts everywhere, so there is no gradient (i.e., there is no volts/metre) and thus no electron drift current. Without electron drift current, we have no resistance, because resistance is due to moving conduction band electrons colliding with the conductor's metal lattice and releasing heat as a result of the deceleration. There is merely energy bouncing at light speed in all directions in any charged object.

    There is also the effect of electric charge in the form of electrons that drifts into one capacitor plate (the negative one), and out of the other plate (the positive one), while the capacitor is charging up.

    (1.h) Now for electrons. The capacitor model (1.g above) explains how gauge boson radiation (the field) gets trapped in a capacitor. Experiments by I.C., who pioneered research on logic signal crosstalk in the 60s, confirmed this: a capacitor receives energy at light speed for the insulator in the feed transmission line, the energy that gets trapped in a transmission line can't slow down, and it exits at light speed when discharged. He, together with two other engineers, also showed how to get Maxwell's exponential charging law (of the form 1 – e^{-x}) out of this model, although it contains various errors and omissions in the physics. However, the main results are correct.

    When you discharge a capacitor charged to v volts (such as a charged length of cable of length x metres), instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get a pulse of v/2 volts taking 2x/c seconds to exit. In other words, the half of the energy already moving towards the exit end exits first. That gives a pulse of v/2 volts lasting x/c seconds. Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy. This gives the second half of the output, another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse. Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds. This is experimental fact.

    It was Oliver Heaviside – who translated Maxwell's 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using 'Morse Code' (logic signals). (Heaviside's theory is flawed physically because he treated rise times as instantaneous, a "step", an unphysical discontinuity which would imply an infinite rate of change of the field at the instant of the step, causing infinite "displacement current", and this error is inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work.)
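    As a rough illustration of that discharge result (not the Catt-Davidson-Walton analysis itself, just a toy sketch), the following Python snippet models the "static" charge on a line of N cells as two counter-propagating energy currents of v/2 each, with a matched load at one end and an open circuit at the other:

# Toy simulation of discharging a transmission line initially charged to v volts,
# following the "two counter-propagating energy currents of v/2 each" picture.
N = 10                      # number of cells along the line
v = 1.0                     # initial "static" voltage on the line
right = [v / 2.0] * N       # right-going component (towards the output/load)
left  = [v / 2.0] * N       # left-going component (towards the open far end)

output = []
for step in range(3 * N):
    output.append(right[-1])        # matched load: right-going energy exits, no reflection
    right = [0.0] + right[:-1]      # shift the right-going component one cell towards the load
    reflected = left[0]             # open circuit: left-going energy reflects back...
    left = left[1:] + [0.0]         # ...while the rest shifts one cell towards the open end
    right[0] = reflected            # ...and the reflected energy becomes right-going

print(output)

    The printed output is v/2 (here 0.5) for 2N time steps, i.e. a pulse of half the static voltage lasting 2x/c, then nothing: exactly the behaviour described above.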

    Using the model of trapped gauge boson radiation to represent static charge, the electron is understood to be a trapped charged gauge boson. The only way to trap a light velocity gauge boson like this is for spacetime curvature (gravitation) to trap it in a loop, hence it’s a black hole.

    In the August 2002 issue of the British journal Electronics World there is an illustration demonstrating that, for such a looped gauge boson, the electric field lines – at long distances compared to the black hole radius – diverge as given by Gauss's/Coulomb's law, while the magnetic field lines circling around the looped propagation direction form a toroidal shape near the electron's black hole radius; at large distances the result of the cancellations is that you just see a magnetic dipole, which is a feature of leptons.

    (2) The second piece of empirical evidence that fermions can be modelled by black holes that I've come across is in connection with gravity calculations. If the outward acceleration of the mass of the universe creates a force like F = ma (a force on the order of 7*10^43 Newtons, although there are various obvious corrections you can think of, such as the effect of the higher density of the universe at earlier times and greater distances – I've undertaken some such calculations on my newer blog – or questions over how much "dark matter" there is which is behaving like mass and accelerating away from us), where m is the mass of the universe and a is its acceleration, then Newton's 3rd law suggests an equal inward force, which according to the possibilities available would seem to be carried by the vector bosons that cause forces.

    To test this, we work out what cross-sectional shielding area an electron would need to have in order that the shielding of the inward-directed force would give rise to gravity as an asymmetry effect (this asymmetry mechanism for gravity is sneered at and ignorantly dismissed for false reasons, and is variously credited to Newton's friend Fatio or to Fatio's Swiss plagiarist, Georges LeSage).

    It turns out that the cross-sectional area of the electron would be Pi*(2GM/c^2)^2 square metres where M is the electron’s rest mass, which implies an effective electron radius of 2GM/c^2, which is the event horizon radius for a black hole.

    This is the second piece of evidence that an electron is related to a black hole, although it is not a strong piece of evidence in my view because the result could be just a coincidence.

    (3) The third piece of evidence is a different calculation for the gravity mechanism discussed in (2) above. A simple physical argument allows the derivation of the actual cross-sectional shielding area for gravitation, and this calculation can be found as "Approach 2" on my blog page here.

    When combined with the now-verified earlier calculation, this new approach allows gravity strength to be predicted accurately as well as giving evidence that fermions have a cross-sectional area for gravitational interactions equal to the cross-sectional area of the black hole event horizon for the particle mass.

  70. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    One more piece of quantitative evidence that fermions are black holes:

    Using Hawking’s formula to calculate the effective black body radiating temperature of a black hole yields the figure of 1.35*10^53 Kelvin.

    Any black-body at that temperature radiates 1.3*10^205 watts/m^2 (via the Stefan-Boltzmann radiation law). We calculate the spherical radiating surface area 4*Pi*r^2 for the black hole event horizon radius r = 2Gm/c^2 where m is electron mass, hence an electron has a total Hawking radiation power of

    3*10^92 watts

    But that’s Yang-Mills electromagnetic force exchange (vector boson) radiation. Electron’s don’t evaporate, they are in equilibrium with the reception of radiation from other radiating charges.

    So the electron core both receives and emits 3*10^92 watts of electromagnetic gauge bosons, simultaneously.

    The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

    The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

    Using P = 3*10^92 watts as just calculated,

    F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N.

    For gravity, the model in this blog post gives an inward and outward gauge boson force of F = 7*10^43 N.

    So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of [2*10^84] / [7*10^43] = 3*10^40.

    This figure of approximately 10^40 is indeed the ratio between the force coupling constant for electromagnetism and the force coupling constant for gravity.

    So the Hawking radiation force seems to indeed be the electromagnetic force!

    Electromagnetism between fundamental particles is about 10^40 times stronger than gravity.
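    Here is a short Python check of that arithmetic (the printed values differ slightly from the rounded figures quoted above because of the constants used, but the orders of magnitude, 10^53 K, 10^92 W, 10^84 N, and the ~10^40 ratio, come out the same):

# Order-of-magnitude check of the figures quoted above (electron treated as a black hole).
pi    = 3.141592653589793
c     = 2.998e8      # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
hbar  = 1.055e-34    # J.s
k_B   = 1.381e-23    # J/K
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
m_e   = 9.109e-31    # electron mass, kg

T = hbar * c**3 / (8 * pi * G * m_e * k_B)        # Hawking temperature, ~10^53 K
flux = sigma * T**4                               # Stefan-Boltzmann flux, ~10^205 W/m^2
r = 2 * G * m_e / c**2                            # event horizon radius, ~1.35e-57 m
P = flux * 4 * pi * r**2                          # radiated power, ~10^92 W
F = 2 * P / c                                     # force of reflected exchange radiation, ~10^84 N
print("T = %.2e K, flux = %.2e W/m^2" % (T, flux))
print("P = %.2e W, F = %.2e N" % (P, F))
print("F / 7e43 N = %.1e" % (F / 7e43))           # ~10^40, the EM/gravity coupling ratio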

  71. copy of comment to:

    http://kea-monad.blogspot.com/2007/11/m-theory-lesson-119.html

    Thanks for the link to Carl Brannen’s use of electromagnetic field equations to explain the Koide mass formula.

    The Koide mass formula says: take the three generations of masses, such as the lepton masses, square-root each mass, add these square roots together, square the result, and multiply by 2/3; you then obtain the sum of the three lepton generation masses!

    As Carl suggests, because for an electromagnetic field the energy density (Joules per cubic metre) of the field is equal to half the product of the permittivity of the vacuum and the square of the electric field strength (volts/metre), you get

    Energy density (energy E, unit volume X)

    = E/X

    = (mc^2)/X = (1/2)*{permittivity}*{electric field strength in volts/metre}^2

    Hence,

    m ~ e^2

    i.e. mass is directly proportional to the square of the field strength.

    If masses depend on field strengths (i.e., if some vacuum field bosons which interact with gravitons to produce mass are associated with electric charges), then because the electron, muon and tauon all have similar electric charges (with slight differences due to vacuum polarization loops, the kind of thing that means that different leptons have slightly different intrinsic magnetic moments), you would expect to have to square root the masses:

    m ~ e^2

    so

    m^{1/2} ~ {+/-}e

    I think this is true because there is evidence I’ve seen that is consistent with it, and that evidence comes from several calculations and many “coincidences”.

    What is occurring here physically is very simple indeed. The core of the electron (or muon or tauon) has electric charge only and no mass; hence the core of the electron/muon/tauon is a common entity.

    The differences between these three leptons stem from the surrounding vacuum field.

    It is the surrounding vacuum field, some kind of Higgs field or a Dirac sea, which interacts with gravitons to produce the “space time curvature” of the vacuum.

    It interacts also with the cores of leptons and quarks to give them their masses.

    m^{1/2} ~ {+/-}e

    can be seen to be a consequence of the idea that masses arise from electromagnetic field interactions with a vacuum field that interacts with the gravitons,

    hence the two-step interaction for masses/gravity: charged core interacts with surrounding field in vacuum (Higgs-like or Dirac-sea like) which in turn interacts with gravitons.

    The fact that the Koide mass formula says m1 + m2 + m3 = (2/3)*[(m1^1/2) + (m2^1/2) + (m3^1/2)]^2 looks to me like a WEIGHTED AVERAGING of the ratio of relative masses to relative field strengths (expressed in mass units).

    I.e., it should be written as:

    (m1 + m2 + m3)/[(m1^1/2) + (m2^1/2) + (m3^1/2)]^2

    = 2/3.

    In plain English: the sum of the masses divided by the square of the sum of the relative field strengths (when expressed in units of mass via E=mc^2) gives you a dimensionless number, 2/3.

    The reciprocal of this number also comes up in other "coincidences", e.g., the muon mass is roughly {electron mass}/[(2/3)*{alpha}] = 205.5*{electron mass, 0.511 MeV} ~ 105 MeV.
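    A quick numerical check of the Koide ratio and of that muon-mass "coincidence", using the measured charged lepton masses (rounded values in MeV from standard tables):

# Check of the Koide ratio and the muon-mass estimate quoted above.
m_e, m_mu, m_tau = 0.511, 105.66, 1776.8   # charged lepton masses, MeV
alpha = 1 / 137.036                         # fine structure constant

koide = (m_e + m_mu + m_tau) / (m_e**0.5 + m_mu**0.5 + m_tau**0.5) ** 2
print("Koide ratio:", koide)                                # ~0.666..., i.e. close to 2/3

print("m_e / ((2/3)*alpha) =", m_e / ((2.0 / 3.0) * alpha), "MeV")   # ~105 MeV, roughly the muon mass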

    I’ve tried to come up with a mechanism, although it is still sketchy in detail, for hadron masses as well as leptons, e.g. see the “about” link on the sidebar of my nige.wordpress.com blog, but it needs to be rewritten and there are lots of loose ends.

    The Koide formula is impressive and I didn’t think about the square roots coming in via the electromagnetic field equation, so I’m grateful to Carl for pointing this out. Hopefully it is the missing link ….

  72. Just for the benefit of string theorists and other morons who can’t remember the basics of calculus:

    “… Acceleration: a = dv/dt = d[HR]/dt = H*dR/dt = Hv …”

    This line in my post comes about as follows.

    The product rule of differentiation is

    d(uv)/dx = (v*du/dx) + (u*dv/dx)

    Hence the observationally based Hubble law v = HR differentiates as follows:

    dv/dt = d(RH)/dt

    = (H*dR/dt) + (R*dH/dt)

    The second term here, R*dH/dt, is zero because in the Hubble law v = HR the term H is a constant from our standpoint in spacetime, so H doesn’t vary as a function of R and thus it also doesn’t vary as a function of apparent time past t = R/c. In the spacetime trajectory we see as we look out to greater distances, R/t is always in the fixed ratio c, because when we see things at distance R the light from that distance has to travel that distance at velocity c to get to us, so when we look out to distance R we’re automatically looking back in time to time t = R/c seconds ago.

    Hence R*dH/dt = R*dH/d[R/c] = Rc*dH/dR = Rc*0 = 0.

    This is because dH/dR = 0. I.e., there is no variation of the Hubble constant as a function of observable spacetime distance R.
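    If anyone does want the differentiation spelled out by a machine rather than by hand, a short sympy check confirms it (H is declared as a constant symbol, so dH/dt = 0 automatically):

# Symbolic check that a = d(HR)/dt = H*dR/dt = Hv when H is constant.
import sympy as sp

t, H = sp.symbols('t H', positive=True)   # H treated as a constant (dH/dt = 0)
R = sp.Function('R')(t)                   # R is the only function of t
v = H * R                                 # Hubble law, v = HR
a = sp.diff(v, t)                         # product rule: a = H*dR/dt + R*dH/dt = H*dR/dt
print(a)                                  # prints H*Derivative(R(t), t)
print(sp.simplify(a - H * sp.diff(R, t))) # 0, i.e. a = H*(dR/dt) = Hv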

    One problem in communication between myself and string theorists is that they want long mathematical proofs for results like these, which are so obvious at a glance that you don’t need to write down the long proof. Such people are not physicists, just wooden-style mathematicians who can’t grasp any facts from looking at the equations but instead need to work through long proofs. That’s all very well in abstract mathematics, but it obscures the physical mechanisms involved in nature. Nature isn’t mathematical, it’s mechanistic and the approximate mathematical models which describe the mechanism are just that: approximate mathematical representations. If you start to present it as a mathematical system, readers get the wrong impression and anyone with any sense is immediately put off.

  73. copy of a comment:

    http://kea-monad.blogspot.com/2008/01/monthly-misquote.html

    If the events supposedly creating the gravitational waves are at cosmological (massive) distances, then the reason why the gravitational waves will be much smaller than mainstream predictions suggest is down to quantum gravity:

    1. Quantum gravity works by the exchange of gravitons between masses (and fields possessing energy).

    2. If the masses or fields the detector is exchanging gravitons with are located at cosmological distances, then the exchanged gravitons are passing through an expanding distance due to cosmological recession. This will cause a redshift of the graviton energy.

    3. By Planck’s law E = hf, a shift of frequency of any field quanta is accompanied by a loss of energy of the field quanta. [This is why the light we see from the big bang is redshifted to microwaves, instead of 3000 K blackbody (mainly infrared) radiation scorching us to a cinder, which would occur in the absence of redshift!]

    4. The redshift of exchanged gravitons over cosmological (massive) distances reduces the energy of the received gravitons and thus reduces the effective value of the gravitational coupling constant, G, for the interaction.

    5. As a result, the gravitational waves from events at cosmological distances appear far, far smaller than predicted using a constant gravitational coupling constant G. The faulty prediction can be corrected by working out the relative graviton redshift energy depletion effect for the cosmological distance of interest, and simply scaling the value of G by the same factor. I.e., if the redshift is such that quanta suffer a doubling of wavelength and a halving of frequency, their energy will fall by a factor of two, and the correct scaled value of G to use in gravity wave predictions will be G/2. It’s as easy as that to compensate for graviton redshift (the amount of graviton redshift will be identical to the amount of visible light redshift at a given distance, and that is well known from the Hubble law).
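    A minimal sketch of the proposed correction, assuming the E = hf argument in step 3 above (so graviton energy, and hence the effective G for the interaction, falls by the factor 1 + z for redshift z):

# Sketch of the proposed scaling: graviton energy falls as 1/(1+z) by E = hf,
# so the effective G for an interaction over cosmological distance scales the same way.
def effective_G(G, z):
    return G / (1.0 + z)

G = 6.674e-11
for z in (0.0, 1.0, 3.0):
    print("z = %.1f  ->  effective G = %.3e" % (z, effective_G(G, z)))
# z = 1 (wavelength doubled, frequency halved) gives G/2, as stated above.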

  74. Another factor which needs to be taken into account in calculating the effective outward force of the distant galaxies that are accelerating away from us in the big bang is the relativistic mass increase of those distant, rapidly receding masses.

  75. The quotation near the end of the post text,

    ‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74,

    should be followed by a statement pointing out that if you take the 600 km/s velocity as an order of magnitude estimate of our velocity relative to the cosmic background radiation since the big bang (actually it may be higher than the average velocity since time zero, because the Milky Way is currently approaching Andromeda, which is a much bigger galaxy), you can derive a distance from the origin of the universe (assuming the universe is something like an expanding fireball with a centre, rather than assuming the "boundless curvature" lying picture from general relativity usually hyped as if it had some factual evidence to support it, which it doesn't when you take account of the long range redshift of gravitons in an expanding universe, which would destroy the idea of long-range curvature and thus would stop the universe being a boundless, curved geometric entity with no outer edge):

    distance = vt

    v = 600 km/s
    t = 13.7 thousand million years expressed in seconds

    The result is that we’re on the order of only 0.3% of the radius of the universe from the “middle” of the fireball, or the “point of origin” (to use language which is a heresy to general relativity worshippers who are nearly a century out of date and haven’t heard of graviton dynamics yet).

  76. Nobel laureate Gerard ‘t Hooft has recently begun arguing that although phenomena on small scales are non-deterministic, as described in quantum mechanics by probabilities, the evolution of the universe is the most fundamental event we have knowledge about and that is deterministic (a determinable, predictable sequence of events occurs; e.g. the clumping of hydrogen and other atoms by gravity can be predicted to form stars where nuclear fusion occurs):

    ‘… quantum mechanics as it is known today has become the theory enabling us to produce the best possible predictions for the future, given as much information as we can give about the system’s past, in any conceivable experimental setup. Quantum mechanics is not a description of the actual course of events between past and future.

    ‘Quantum mechanics will exactly reproduce the statistical features of Nature at a local scale, in our laboratories. The only effect our present considerations will have on the pursuit of an improved, accurate theory of quantum gravity and cosmology is that the universe in itself is required to be controlled by equations beyond quantum mechanics. The argument for this is simple. Quantum mechanics has been tailored by us to describe the statistical outcomes of experiments when repeated many times, locally in some laboratory. We may well assume this theory to be exact in describing local statistics. The entire Universe, however (in particular when we are talking about a closed universe), is itself a ‘experiment’ carried out only once, and all events in it are unique. The question whether a single event took place or not can only be answered by ‘yes’ or ‘no’, but there is no probabilistic answer. A theory that yields ‘maybe’ as an answer should be recognized as an inaccurate theory. If this is what we should believe, then only deterministic theories describing the entire cosmos should be accepted. There can be no ‘quantum cosmology’.’

    – Gerard ‘t Hooft, ‘Emergent Quantum Mechanics and Emergent Symmetries’, http://arxiv.org/abs/0707.4568v1 , 31 Jul 2007, page 2.

    The argument by Gerard ‘t Hooft above is very important: it is an attack on the quantum indeterminacy of Leonard Susskind’s ‘cosmic landscape’ of string theory, which claims that the universe is ruled by a quantum cosmology with a landscape of 10^500 metastable vacua (the exact number being determined by the details of the Rube-Goldberg machines which are required to stabilize the Calabi-Yau manifold’s moduli). According to Susskind, we have to religiously believe on faith (without any proof) that one of those 10^500 metastable vacua is our own vacuum, which determines the Standard Model of particle physics. To emphasise again, nobody can even prove that the universe is one of the 10^500 metastable vacua, let alone identify the particular one and use it to predict the parameters of the Standard Model. However, Susskind’s argument is that it is more logical to replace existing religious belief in ‘intelligent design’ with a religious belief in string theory, because the latter is – in his opinion – more logical. ‘t Hooft’s argument is contrary to Susskind’s.

    ‘t Hooft argues that quantum mechanics shows us that the universe as a whole must be deterministic, bringing up the religious idea of Laplace that God started the universe and it has run in deterministic fashion by applying laws to initial conditions ever since.

    Let’s examine ‘t Hooft’s argument (quoted above). He starts by pointing out that quantum mechanics is just a statistical (for large numbers of particles under examination) or probabilistic (for small numbers of observed particles) model, and is not a physical description of events: ‘Quantum mechanics is not a description of the actual course of events between past and future. … the universe in itself is required to be controlled by equations beyond quantum mechanics. The argument for this is simple. Quantum mechanics has been tailored by us to describe the statistical outcomes of experiments when repeated many times, locally in some laboratory. We may well assume this theory to be exact in describing local statistics. The entire Universe, however (in particular when we are talking about a closed universe), is itself a ‘experiment’ carried out only once, and all events in it are unique. The question whether a single event took place or not can only be answered by ‘yes’ or ‘no’, but there is no probabilistic answer. A theory that yields ‘maybe’ as an answer should be recognized as an inaccurate theory. If this is what we should believe, then only deterministic theories describing the entire cosmos should be accepted.’

    The point being made by ‘t Hooft is that although quantum mechanics is not deterministic in its present mainstream form (for example in the way that quantum mechanics models probabilities by integrating the square of a statistical wavefunction over volume), the ultimate equations describing the universe we observe cannot be defended in a statistical manner. We have definite information from cosmology, which is about the big bang event. It’s not statistical information derived from a large number of universes. Anyone trying to force the statistical/probabilistic methods of quantum mechanics on to cosmology is missing the point that the whole statistical/probabilistic basis of quantum mechanics as a model of nature is only defensible because it is impossible to observe the history of an individual electron in an atom in the same way that we can observe the history of the universe by simply looking to further distances (earlier times).

    It is for this reason that a scientific model of the universe (based on laws derived from scientific observations, not from armchair speculations) must be built on laws that are deterministic, not statistical, in nature. If science is based on laws derived from observation, then laws based on observations of statistical and probabilistic phenomena, like electron scattering measurements or random radioactive decay, will be statistical in nature, predicting only in terms of probability what will occur in the future; while laws based on observations of a once-only event, such as the evidence we have from the universe, will be deterministic rather than based on probability/speculation.

    Where I disagree with ‘t Hooft is in his claim that the universe is really ‘deterministic’. Given initial conditions, I can predict (using a semi-Monte Carlo simulation run on a computer) certain aspects of the evolution of the universe, but not others.

    Even ignoring the crucial role of the virtual particles (whose number is generally regarded as not conserved, because you get high-energy virtual photons spontaneously disappearing and producing pairs of fermions which after an uncertain amount of time tend to annihilate back into radiation) which interact with real particles and cause most of the chaos on small scales in the nucleus and the atom, you still have the problem that there are 10^80 long-lived (‘real’) particles in the observable universe.

    That’s too many interactions to accurately simulate by Monte Carlo calculations in a computer; you have to make simplifying approximations like treating distant stars, planets and galaxies as individual items of known mass. Even if you could include the 10^80 long-lived particles, you would get chaotic indeterminacy creeping into the simulation due to Poincare chaos: even classical physics fails to give deterministic predictions where you have more than two particles interacting. Where you have three bodies in orbit, chaos is a natural consequence.

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

    But even if you could somehow simulate every long-lived particle in the universe using a Monte Carlo computer code, that would not make the universe deterministic, because you don’t know the initial conditions at a tiny fraction of a second. All you know is that fermions and quarks were created at very high energy; you don’t know their initial locations and motions. In other words, the Monte Carlo computer simulation would have to start off by using random numbers (with a statistically representative distribution for the particular physics in question, e.g. a Maxwell-Boltzmann distribution or whatever statistics are appropriate to that stage in the universe) to enable the computer to represent all the unknown particles usefully.

    It can then apply physical laws to those initial conditions to simulate the universe. However, if you alter the randomly selected initial conditions, you will end up with changes in details later in the universe. E.g., you might be able to predict the approximate statistical rate of formation of spiral galaxies as a function of time after the big bang, but you won’t be able to predict exact numbers because when you run the simulation repeatedly with different random numbers to represent the initial particle motions, you get different details in the universe at later times.

    In any case, random interactions with virtual particles such as gauge bosons in the vacuum will make physics indeterministic, both at early times in the universe and at small distances from particles in the universe today. Because the initial conditions of the universe can’t be determined accurately even in principle, the subsequent history of the universe is – so far as details go – indeterministic. It is only deterministic when you look at the statistical evolution of the universe, e.g. the percentage of different kinds of galaxy at different times after the big bang (different distances from us in spacetime). Similarly, quantum mechanics is deterministic when you look at large statistics, e.g., on average the electron in the ground state of a hydrogen atom is most likely to be found near the Bohr radius, 53 picometres. If you measure the electron distance from the nucleus for a large number of hydrogen atoms (or infer it from the size and mass of a large quantity of condensed hydrogen), indeterminacy effects become trivial and you get a deterministic result.

    It’s a pity that instead of wasting time on a search for a cosmology of indeterminism (the landscape of 10^500 metastable vacua) or determinism (‘t Hooft), people don’t investigate causality in cosmology. Physics is not deterministic, but it does have a causal mechanism.

  77. Copy of a comment to:

    https://nige.wordpress.com/2007/03/16/why-old-discarded-theories-wont-be-taken-seriously/

    Teresa,

    Electromagnetism and gravity do have a certain amount in common; the inverse square law. What’s also interesting is that the electromagnetic force between a proton and electron is 10^40 times stronger than gravitation. Also, magnetism is dipolar; nobody has discovered even a single magnetic monopole in nature. You get attraction of unlike poles and repulsion of like poles. Gravitation is a monopole force field; yet it is always attractive no matter what the electric charge of the mass/energy.

    I wrote an article in Electronics World April 2003 which leads to the conclusion that the distinction between gravity and electromagnetism is a result of a simple physical difference: the charge of the gauge bosons being exchanged. This predicts the 10^40 coupling constant difference between electromagnetism and gravity, and it explains why gravitation is always attractive (over non-cosmological distances; get too far and the net effect is repulsion because the theory predicts the small positive cosmological constant which is accelerating the universe), and why unlike electromagnetic charges attract while like electromagnetic charges repel.

    Gravity is due to electrically uncharged gauge bosons being exchanged between all mass/energy in the universe. Net gravitational forces arise due to asymmetry, the Lesage shadowing effect, due to the way the exchange process works.

    In order for two masses to exchange gravitons, they must be receding from one another at a relativistic velocity in accordance with the Hubble law, v = HR. This gives them an outward acceleration from one another of a = dv/dt = d(HR)/dt = Hv = RH^2. As a result of this acceleration, they have a force outward from one another of F = ma = mRH^2. Simple!
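    To put rough numbers on that (the Hubble constant and the mass figure below are round-number assumptions for illustration only, not fitted values):

# Rough numbers for the outward acceleration a = Hv and force F = ma described above.
# H and the mass figure below are round-number assumptions for illustration only.
Mpc = 3.086e22                       # metres per megaparsec
H = 70e3 / Mpc                       # Hubble constant, ~2.3e-18 s^-1 (assumed value)
c = 2.998e8                          # m/s

a = c * H                            # a = Hv for matter receding at v ~ c, ~7e-10 m/s^2
m_universe = 1e53                    # kg, order-of-magnitude assumption for the mass
F = m_universe * a                   # outward force, ~7e43 N, the order quoted elsewhere in this post
print("a = %.1e m/s^2, F = %.1e N" % (a, F))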

    Newton’s 3rd law (action and reaction are equal and opposite) then tells us that the outward forces of each of the receding masses must result in an equal inward reaction force. This force – by elimination of all other possibilities – is carried by gravitons.

    Hence, gravitation causes distant receding masses to forcefully fire off gravitons at each other, so the relativistically receding masses end up exchanging gravitons and being repelled apart. Impulses and recoil forces when gravitons are exchanged between relativistically receding masses causes those masses to go on accelerating as they recede from one another. This gives the cosmological acceleration normally attributed to dark energy (the Lambda term).

    Now examine what happens when two masses (say me and the planet Earth) are not relativistically receding relative to one another! There is no forceful exchange of gravitons between me and the Earth! This is because the acceleration of me away from the Earth is zero, so the force of me away from the Earth is zero, and the reaction force of gravitons from me towards the Earth is zero.

    In other words, I’m not exchanging gravitons with the Earth in a forceful way, simply because I’m not receding from the Earth. So the Earth and I are unable to exchange gravitons efficiently! This is a shielding effect, because the Earth and myself are both exchanging gravitons with the distant, receding galaxies in the universe.

    The only direction in which I’m not able to efficiently exchange gravitons is downward, because some of the tiny fundamental particles in the Earth are exchanging gravitons with distant receding masses in that direction from me, but are unable to then exchange those gravitons with me because there is no graviton exchange between myself and the Earth. Hence, the fundamental particles in the Earth are shielding or shadowing a small portion of the graviton force from distant receding galaxies in the downward direction from me!

    So the net graviton force on me is the excess of gravitons pushing downwards over that coming upward through the planet below me.

    Now this is a very simple geometric effect: gravitons are electrically uncharged exchange radiation with spin-1, like photons. For electromagnetism, the only way to get a physical understanding is to change Feynman’s QED U(1) Abelian theory. There are lots of problems with U(1): it only has one type of charge (hence the 1 in U(1) symmetry), so negative and positive charges have to be treated as the same thing moving in different directions through time. But there is no evidence that anything goes backward in time. Also, there are other problems with the mainstream U(1) electromagnetism. It doesn’t predict or explain physically what the mechanism for electromagnetic forces is; it has to use a photon with 4-polarizations instead of the normal 2, so that it can include attraction and not just repulsion. It’s a very unsatisfactory physical description.

    My argument here is that electromagnetism and gravity are actually an SU(2) Yang-Mills theory, with charged massless gauge bosons. SU(2) gives rise to two types of charge and three types of gauge boson: neutral, positive and negative. I’ve worked out that charged massless gauge bosons can propagate in the vacuum despite the usual objection to charged massless radiation (infinite magnetic self-inductance): what happens is that in exchange radiation there is an equilibrium of exchange of radiation travelling in two directions at once, so the clockwise magnetic curl of, say, leftward travelling charged radiation will exactly cancel out the relatively anticlockwise curl of rightward travelling charged radiation. The cancellation of the magnetic curls in this way means that the magnetic self-inductance is no longer infinite but zero!

    Next, the exchange of charged massless gauge bosons between electromagnetic charges has more possibilities than gravitation. The random arrangement of fundamental charges (positive and negative) relative to one another throughout the universe means that all of the positive and negative electric charges in the universe will be linked up by their exchange of charged gauge bosons, like a lot of positive and negative charged capacitor plates separated by vacuum dielectric. Because the arrangement is random, they won’t add up linearly. If the addition was linear with positive and negative charges arranged in a long line with alternating sign at each charge, then the result would be like a series of batteries or capacitors in circuit, and electromagnetism would be stronger than gravitation by about a factor of 10^80 (the number of hydrogen atoms in the universe).

    Because the arrangement is random, and charged gauge bosons of one sign are stopped by half the charges in the universe, the actual addition is non-linear. It’s a drunkard’s statistical walk, like the zig-zag path of a particle undergoing Brownian motion. The vector sum can be worked out by doing a path integral calculation. It’s approximately the square root of the number of hydrogen atoms in the universe, times stronger than gravity. I.e. 10^40.
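    You can see the square-root behaviour directly with a short Monte Carlo sketch (purely illustrative; it just adds up N randomly-signed unit contributions many times and averages the magnitude of the result):

# Monte Carlo illustration of the "drunkard's walk" addition described above:
# the net sum of N randomly-signed unit contributions grows like sqrt(N), not N.
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_sum(N, trials=200):
    """Average magnitude of the sum of N random +/-1 contributions."""
    total = 0.0
    for _ in range(trials):
        total += abs(rng.choice([-1.0, 1.0], size=N).sum())
    return total / trials

for N in (100, 10_000, 1_000_000):
    print("N = %9d   mean |sum| ~ %9.1f   sqrt(N) = %9.1f" % (N, mean_abs_sum(N), N ** 0.5))
# The mean |sum| comes out close to 0.8*sqrt(N) (exactly sqrt(2N/pi)),
# i.e. the net contribution grows like sqrt(N), not like N.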

    This model also explains repulsive forces and attractive forces in electromagnetism, as a correspondent (Guy Grantham) has pointed out to me. Because you have two types of charged gauge boson, two protons have overlapping force fields composed of positively charged massless gauge bosons.

    As a result, the protons exchange positively charged gauge bosons and get repelled away from one another, rather like two people firing machine guns at one another will be forced apart both by the recoil impulses when firing each round, and by the strikes when receiving each round! (The incoming positively charged exchange radiation from distant masses in the universe to the far side of each of the protons being considered is severely redshifted and thus carries little energy and hence little momentum.)

    In the case of dissimilar charges, the positive charge and negative charge (or north pole and south pole in the case of two magnets) suffer the problem that the opposing fields cancel each other out instead of adding up. So there is no forceful exchange of radiation between them; they shield one another just like the Lesage shadowing gravity mechanism, and so opposite charges get pushed together by the exchange radiations coming from the distant receding galaxies in the universe.

    The fact that the electromagnetic attractive force between a proton and an electron is identical in strength but opposite in sign (i.e. direction) to the repulsive force between either two protons or two electrons, is explained by the energy balance of exchange radiation with the surrounding universe during the period that the force is acting, as proved graphically in my April 2003 Electronics World article.

    When two particles repel or attract due to electromagnetism, they are converting the potential energy of the redshifted incoming exchange radiation energy (from distant charges in the receding universe) into kinetic energy. The amount of energy available in this way per second (i.e., the power used to accelerate charges) to just two charges (whether they are proton and electron, proton and proton, or electron and electron) is the same because each charge has a similar cross-section for interactions with exchange radiations!

    Hence, when two protons or two electrons repel, they are being repelled by a similar power of radiant exchange radiation supplied externally by the surrounding universe as in the case of the attraction of one proton and one electron.

    The diagram in the April 2003 Electronics World article makes this energy summation clearer: the resultant of all the exchanges is that unit similar charges repel at the same force that dissimilar charges attract.

    I agree with you that light is a particle and has mass: saying light has “no rest mass”, which the literature is fond of announcing, is pathetic because light is not at rest anyway:

    “The fact that photons have no rest mass isn’t a problem because … they can never be at rest anyway …”

    – page 21 of P.C.W. Davies, The Forces of Nature, Cambridge University Press, London, 2nd ed., 1986.

    Nige

  78. Copy of a comment:

    http://coraifeartaigh.wordpress.com/2008/04/10/more-on-inflation/#comment-38

    ‘Basically, the idea is that quantum fluctuations in the early universe could have been stretched by inflation to astronomical proportions, providing the seeds for galaxy formation. The predicted spectrum of these fluctuations was calculated by Guth and others in 1982.’

    You write as if the stretching of the quantum fluctuations made them big enough to seed galaxy formation, which is totally misleading I fear.

    I studied cosmology a decade ago, and my understanding is the opposite of what seems to be implied by those sentences in your otherwise very nice post.

    General relativity (Friedmann-Robertson-Walker metric) predicts far too much curvature in the early universe, so the density fluctuations predicted by general relativity without inflation would lead to galaxy formation much too soon. With the Hubble telescope and others, the era of early galaxy formation can be determined and it is a lot later than general relativity predicts.

    In addition, the cosmic background radiation tells us what the fluctuations in radiation (and density of matter) were at 300,000 years after the big bang, when the temperature of the big bang fell below 3000 K allowing electrons and protons to combine into hydrogen, which made the universe transparent to most radiation. (At higher temperatures i.e. earlier times, the universe was basically ionised hydrogen gas, which was a strong absorber of all electromagnetic waves. Hence at earlier times than 300,000 years after the big bang, the radiation and matter temperatures were identical because they were in an equilibrium, but at all later times the radiation field temperature decoupled from that of the matter and decreased due to falling energy of photons received from 300,000 years emission time by distant matter as the universe expanded, i.e. the redshift effect.)

    Inflation was supposed to have occurred at very early times after the big bang (10^{-26} of a second or so), due to a phase change in the vacuum’s state, as you write in the post, which briefly allowed faster-than-light expansion.

    This extremely rapid ‘inflationary’ expansion epoch, at around 10^{-26} of a second into the big bang, is supposed to be wonderful because it would reduce the curvature of the universe thereafter, and would reduce galaxy formation rates subsequently. Galaxy formation requires curvature to make the quantum fluctuations grow. The role of inflation is to reduce the curvature of the universe on large scales by spreading the same amount of mass-energy over a bigger volume than suggested by the Friedmann-Robertson-Walker metric. Curvature (gravitational acceleration) is reduced if you spread the same amount of mass-energy over a bigger volume, just as gravitation would appear weaker if the Earth was made bigger in size but only contained the same mass.

    Inflation spreads out the matter over a bigger volume, hence it reduces curvature, which reduces the rate at which quantum fluctuations grow in size, which in turn reduces the rate at which star and galaxy formation is seeded.

    So the fact that as you write, ‘quantum fluctuations in the early universe could have been stretched by inflation to astronomical proportions’ is actually a bit misleading.

    The quantum fluctuations are actually reduced in size by inflation, because inflation reduces curvature, which in turn reduces the growth rate of quantum fluctuations.

    Inflation is not impressive because it makes no falsifiable predictions. It’s a false, epicycle-style piece of metaphysics.

    Instead of inflation reducing curvature by expanding the universe faster than light velocity during a phase transition in the vacuum state, what happens is that the universal gravitational constant is directly proportional to time since the big bang. This is a checkable prediction from a quantum gravity mechanism which reproduces all checked general relativity effects and which also predicts the gravitational constant G within the experimental error bars of the data.

    Hence at 300,000 years after the big bang, the universe was about 46,000 times younger than it is now, so G was smaller by a similar factor. This is the correct reason why the early universe was so much flatter (less curved, i.e. weaker gravitational field) at early times such as when the cosmic background radiation originated.
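    As a rough arithmetic check of the factor quoted above (a minimal sketch; the present age of the universe, taken here as 13.7 billion years, and today’s value of G are assumed round figures, not inputs from the comment itself):

    # Rough check: if G is directly proportional to the age of the universe,
    # then at 300,000 years G was smaller by the age ratio.
    # Assumed inputs (not stated above): present age ~13.7 billion years,
    # and the present-day value G = 6.674e-11 m^3 kg^-1 s^-2.
    year = 3.156e7                      # seconds in a year
    t_now = 13.7e9 * year               # assumed present age of the universe
    t_cmb = 3.0e5 * year                # ~300,000 years after the big bang
    ratio = t_now / t_cmb               # ~46,000
    G_now = 6.674e-11
    G_cmb = G_now / ratio               # G at recombination, if G is proportional to t
    print(f"age ratio ~ {ratio:,.0f}")
    print(f"G at 300,000 years ~ {G_cmb:.2e} m^3 kg^-1 s^-2")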

    This explanation is the correct one for the observed slow rates of galaxy formation in the early universe, not inflation. There is solid evidence behind this model, because it is simple, makes checked predictions, and reproduces empirically confirmed aspects of quantum field theory and general relativity.

    Note that the variation-of-G-with-time prediction of this alternative theory was falsely attacked by Edward Teller in 1948, after a different theory of varying G had been proposed by Dirac. Teller ignored the fact that electromagnetism is related to gravitation, so it should have a coupling constant that varies with time in the same way as G. Teller claimed that varying G would vary the compression and hence the fusion rates in stars (and obviously the fusion rate in the first minutes of the big bang) in a way incompatible with observations, but this objection is totally false, being based on Teller’s ignorance of the fact that the electromagnetic coupling varies with time in the same way as G. The variation in the electromagnetic coupling constant means that where gravitation is weaker (causing less compression of matter in big bang fusion and in stellar fusion), Coulomb electromagnetic repulsion is similarly weaker. Since it is precisely the Coulomb barrier that stops protons from easily being fused together by the short-ranged pion-mediated strong nuclear force, the variation in the Coulomb force with time cancels out the effect of G varying with time (so far as nuclear fusion is concerned). Everything works out!

  79. copy of a comment:

    http://coraifeartaigh.wordpress.com/2008/04/16/euclid-and-the-renaissance

    ‘The strange aspect of the story is that the painstaking recovery of classical maths and science made it appear sacred. In fact, we now know much of Aristotlean science was wrong (due to lack of experimentation), but it was only against the greatest resistance that reformists such as Copernicus, Kepler and Galileo gradually made headway.’

    Well, all radical ideas meet objections. A new scientific theory doesn’t make progress by overcoming objections with sheer force or counter-arrogance, but by becoming clearer, more lucid, and therefore better understood.

    Euclid is commonly dismissed for neglecting the possibility that spacetime is curved.

    However, according to quantum gravity, the effects of curvature are attributed to a spacetime fabric composed of graviton interactions.

    Therefore, the classical curved spacetime of general relativity is wrong. Albert Einstein himself expressed this in a 1954 letter to his close friend Michele Besso:

    “I consider it quite possible that physics cannot be based on the [spacetime continuum] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included…”

    Already we know that two confidence-tricks must be used to make the continuously variable differential geometry of general relativity (Ricci tensor, its trace, and the stress-energy tensor) approximate the real world.

    Instead of representing the particulate distribution of matter realistically in the stress-energy tensor, you always have to put in an artificially smoothed (averaged) distribution which ignores the discontinuities of fundamental particles at discrete locations in the vacuum. So you have to model mass-energy (the source of the curvature) using a false model such as a “perfect fluid” in which density is averaged, rather than being lumpy. You can’t use calculus on discontinuities, or you get infinities and zeros as output. E.g., true density is zero up to the edge of a fundamental particle, say the Planck scale or whatever radius you use, then density jumps discontinuously to a very high value. The rate of change of density is thus zero or infinity.

    So to get the stress-energy tensor to work smoothly, you have to put in an artificial, averaged distribution of fundamental particles instead of using the really discontinuous distribution of small fundamental particles in the vacuum.

    Then when you calculate curvature, you are again using a tensor, differential geometry. But curvature is not real because spacetime is quantized with discrete, quantum graviton interactions causing all the effects. Hence, masses accelerate in discrete impulses due to successive graviton interactions, not as the smooth acceleration suggested by 3+1 dimensional curved spacetime in classical general relativity.

    All of the curvature effects of general relativity can be better understood in terms of graviton interactions than in terms of classical 3+1 dimensional spacetime curvature. E.g., the normal way to present curvature of 3 dimensions in space is to draw a 2 dimensional diagram. As soon as you extend the 2 dimensional diagram to 3 dimensions, you end up facing the reality of the spacetime fabric, not geometry.

    Aristotle’s “Physics” made it clear that he didn’t understand what air is. He believed that when an arrow is fired, it continues to move because air displaced from the front of the arrow by its motion pushes around the moving arrow from front to rear, pressing in at the back of the arrow and thereby keeping the arrow in motion.

    What he was trying to do was to explain the physical mechanism behind Newton’s 1st law of motion (not yet formulated experimentally in Aristotle’s time), the momentum of a moving object.

    If Aristotle had known about the spacetime fabric (as distinct from air) and fundamental particles, he could have applied his mechanism for momentum to the spacetime fabric flowing around moving fundamental particles from front to rear.

    This mechanism is vital for physics. Gauge bosons which interact with fundamental particles can’t penetrate through those particles as if those particles were not there, or they wouldn’t have any interactions or any effects. Hence, gauge bosons like gravitons are stopped by the fundamental particles they interact with. Consequently, when a fundamental particle such as some fermion moves, it might be expected to create a void in the spacetime fabric behind it, and for the spacetime fabric to pile up on the moving side.

    But because the spacetime fabric (such as gravitons) is composed of light-velocity exchange radiation in perpetual motion, the spacetime fabric is capable of moving out of the way of moving fundamental particles if those particles aren’t going at light velocity themselves. Hence, the Aristotle mechanism really does seem to apply. Also, the pressure of gravitons against moving matter in the direction of motion can be shown to cause the FitzGerald length contraction and the mass increase effect.

    What really settles the issue is that in the big bang, the relative outward motion of matter away from us at velocity v = H*R (Hubble’s recession law) leads to a radially outward acceleration of matter a = dv/dt = d(HR)/dt = H*v + R*dH/dt = H*v = R*H^2 (for constant H). Thus you obtain an outward force of receding matter F = ma, and by Newton’s 3rd law an equal inward-directed reaction force, mediated by gravitons, which predicts the strength of gravity…

  80. copy of a comment:

    http://www.nonequilibrium.net/various/41-peter-woit-what-will-you-do-if-string-theory-is-wrong/#comment-242

    “It is simply true that the Planck scale is the ultimate scale below which the usual concepts about geometry have to break down.” – Dr Lubos Motl

    Dr Motl, the Planck length scale has no empirical evidence. It’s just a combination of G, c, and h by dimensional analysis to yield a small length. Actually, the black hole event horizon radius of an electron (~10^{-57} m) uses G, c and m_electron, and yields a much smaller length than the Planck scale. So if you want a small “fundamental” length from dimensional analysis, why choose Planck’s length over the smaller black hole event horizon radius for a fundamental particle? The decision of which to use is down to prejudice and familiarity, rather than being based upon empirical evidence in favour of the Planck scale which is unobservable.
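    To put numbers on that comparison, here is a minimal sketch; the constants are assumed standard round values, and the electron “event horizon radius” is simply the Schwarzschild radius 2Gm/c^2 referred to above:

    import math

    # Assumed standard constants
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    hbar = 1.055e-34     # J s
    m_e = 9.109e-31      # kg, electron mass

    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m, from dimensional analysis of G, c, hbar
    r_s_electron = 2 * G * m_e / c**2            # event horizon radius of an electron, ~1.35e-57 m

    print(f"Planck length:                 {planck_length:.2e} m")
    print(f"Electron event-horizon radius: {r_s_electron:.2e} m")
    print(f"Ratio (Planck / electron r_s): {planck_length / r_s_electron:.1e}")

    Both lengths come out of dimensional analysis alone; the point is simply that the electron’s event horizon radius is about 22 orders of magnitude smaller than the Planck length, so neither is privileged by empirical evidence.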

    “… an amazing example of an outsider’s collapsed mental abilities – a person who can only produce ad hominem attacks but cannot ever do anything useful.” – Dr Lubos Motl

    That might just be down to the fact that the work of outsiders is rejected, unread, by the mainstream string community, who seem to be regarded by journal editors as the perfect “peer-reviewers” for innovative non-string ideas. If you delete work from arXiv that is non-string based, as mine was in December 2002, then your claim that outsiders never do anything useful is just hot air. You’ve insulated yourselves from what outsiders are doing, you don’t care to read their work, and you just hype your own genius all the time. Journals are full of masses of physically-meaningless, self-consistent mathematical drivel that leads nowhere, can never be falsified, etc. I don’t see any evidence for genius in string theorist work, except for the kind of nefarious genius of snubbing work they haven’t even bothered to read before deleting it.

    “If you think that you can do research in physics – or even better than the real physicists, right? – why don’t you do it instead of the defamations and crackpot preprints of yours?” – Dr Lubos Motl

    Nice one, Lubos. Doubtless you have a genius for putting into passionate words the unspoken kind thoughts of Edward Witten, Jacques Distler, and many other heroes of the pro-string media.

  81. Comment by anon.

    http://www.math.columbia.edu/~woit/wordpress/?p=686#comment-38329

    “This is certainly true: if the string theory landscape made lots of testable predictions so that we had good reason to believe in it, and the same structure implied a multiverse, that would be good reason to believe in the multiverse.” – PW

    Even a theory which makes tested predictions isn’t necessarily truth, because there might be another theory which makes all the same predictions plus more. E.g., Ptolemy’s excessively complex and fiddled epicycle theory of the Earth-centred universe made many tested predictions about planetary positions, but belief in it led to the censorship of an even better theory of reality.

    Hence, I’d be suspicious of whether the multiverse is the best theory – even if it did have a long list of tested predictions – because there might be some undiscovered alternative theory which is even better. Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools. Mixing beliefs with science quickly makes the fundamental revision of theories a complete heresy. Scientists shouldn’t begin believing that theories are religious creeds.

  82. comment by anon.

    http://www.math.columbia.edu/~woit/wordpress/?p=686#comment-38472

    ‘… but, until recently the question of whether particle theorists were doing science or pseudo-science was not one that ever came up. You just didn’t see leading figures in the field publicly making bogus claims about what it means to test a scientific theory.’ – PW

    Witten twelve years ago wrote that the most remarkable prediction of string theory is the fact that it predicts spin-2 gravitons:

    ‘String theory has the remarkable property of predicting gravity.’ – E. Witten, Physics Today, April 1996.

    It’s quite interesting that string theory does attempt to tie together long-established speculations concerning spin-2 gravitons, coupling-constant unification at the Planck scale, and black hole entropy. These theoretical ‘tests’ of string theory – in which it is merely shown to be compatible with speculation (partly based on theoretical arguments, but embellished by prejudice over the years) about the gauge boson of quantum gravitational interactions and so on – are weaker than direct experimental verification, but they do make string theory appear consistent with such speculations.

    So I’m wondering how on earth anyone is ever going to get motivated enough to work seriously and hard on alternative ideas, to really rival string theory. Aristarchus of Samos came up with the solar system in 250 BC, but it was unable to make any headway against the mainstream for nearly two thousand years (until Kepler’s ‘inelegant’ elliptical orbits took away the need for ‘pure’ circles with epicycles in Copernicus’s complex solar system model). By analogy, maybe some crazy idea around today is basically true, but requires a lot of work scientifically before it is taken as a serious contender to rival the mainstream string theory speculation.

  83. Anon: please stop copying comments here. If you attack Witten and other people like him, it will give my blog a bad reputation, and it will look as if I’m condoning your arguments by allowing your comments to remain.

    http://dorigo.wordpress.com/2008/05/17/one-more-chunk-of-susy-parameter-space-ticked-off/

    On the reality of the big bang, can I recommend http://www.astro.ucla.edu/~wright/tiredlit.htm for an analysis of the redshift facts and the reasons why pseudoscientists can’t accept the big bang facts as valid.

    Notice also that Alpher and Gamow predicted the cosmic background radiation in 1948 and it was discovered in 1965.

    Actually, the big bang theory is incomplete, because when you take the derivative of the Hubble expansion law v = HR, you get an acceleration a = dv/dt = d(H*R)/dt = (H*dR/dt) + (R*dH/dt) = H*v = R*H^2 (for constant H). This tells you that receding masses around us have a small outward acceleration, only on the order of 10^{-10} m s^{-2} for the most distant objects. This is a tremendous prediction. I published it via Electronics World back in October 1996, well before Perlmutter confirmed it observationally.
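    Numerically, the acceleration works out as follows (a minimal sketch; the Hubble parameter below is an assumed round figure of about 70 km/s/Mpc, not a number taken from the comment itself):

    # Outward cosmological acceleration a = R*H^2 evaluated at the Hubble radius R = c/H,
    # i.e. a = Hc, using an assumed round value of the Hubble parameter.
    c = 2.998e8                          # m/s
    Mpc = 3.086e22                       # metres per megaparsec
    H = 70e3 / Mpc                       # ~70 km/s/Mpc converted to s^-1 (assumed value)
    a = H * c                            # equals R*H^2 when R = c/H
    print(f"a = Hc ~ {a:.1e} m/s^2")     # on the order of 10^-10 m/s^2, as stated above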

    This is just about the observed acceleration of the universe! Smolin points out this amount of acceleration, and the “numerical coincidence” that it is on the order of a = Hc = R*H^2, in his book “The Trouble with Physics” (2006), but neglects to state that you get this result by differentiating the Hubble recession law! Note that arXiv.org allowed my paper upload from university in 2002, but then deleted it within seconds, unread!

    Dr Bob Lambourne of the Open University years ago suggested submitting my paper to the Institute of Physics’ Classical and Quantum Gravity, the editor of which sent it for “peer-review” to a string theorist who rejected it because it added nothing to string theory!

    So some additional evidence and confirmed predictions of the big bang do definitely exist (the outward acceleration of matter leads to a radially outward force, which by Newton’s 3rd law gives a predictable inward reaction force, which allows quantitative predictions of gravity that again are confirmed by empirical facts). Don’t just believe that only stuff which survives censorship by string theorists is factual. Classical and Quantum Gravity was publishing the Bogdanovs’ string theory speculations (which the journal later had to retract) at the time it was rejecting my fact-based paper!

  84. copy of a comment of mine to Not Even Wrong (currently in moderation queue there, so it might not appear there):

    http://www.math.columbia.edu/~woit/wordpress/?p=689#comment-38515

    Your comment is awaiting moderation.

    nigel cook Says:

    May 22nd, 2008 at 11:43 am

    ‘… the problem is whether they are even in principle scientifically testable or not. If they’re not, they’re not science and promoting them to the public is a bad idea.’

    You actually need to specify precisely why it’s a bad idea to promote belief-based ideas (that have no more checkable evidence behind them than religions), otherwise some readers will assume that you’re asserting a personal opinion about what is ‘bad’. Some readers, I’m sure, think uncheckable speculation is fine science.

  85. my comment hasn’t appeared yet, but Dr Woit has stated (in a reply to a comment by somebody else):

    http://www.math.columbia.edu/~woit/wordpress/?p=689#comment-38536

    “My problem is … with the idea of writing articles for a major US popular science magazine promoting the multiverse and Boltzmann brain argumentation. This gives people the idea that this kind of empty speculation is what science is, impressing those who can’t tell the difference between science and science fiction, and turning off those who can.”

  86. http://kea-monad.blogspot.com/2008/05/m-theory-lesson-192.html

    It’s a shame if loop quantum gravity is now describing things which can’t ever be experimentally refuted if incorrect.

    Smolin put some Perimeter Institute lectures of his on quantum gravity online a few years ago, and I was impressed with the basic concept: to quantize gravity with a minimal amount of speculation.

    In particular, I liked the idea of arriving at a gravitational force path integral by summing all of the interaction graphs for gravitational interactions in spacetime.

    This seems a physically sensible approach. Smolin showed (in outline) in the first lectures that you can sum interaction graphs to get general relativity without a metric, which is what he calls background independence.

    A metric is an output of general relativity for a particular set of input assumptions. So it’s interesting that you can get the basic field equation without a metric from summing spin-foam interaction graphs over spacetime.

    However, all of this is very abstract and it doesn’t predict anything comparable to observation such as the relatively weak force of the gravitational interaction (relative to other fundamental forces).

  87. http://kea-monad.blogspot.com/2008/05/m-theory-lesson-192.html

    One thing that I don’t see any evidence for is the assumption in loop quantum gravity that the Penrose “spin network” model for gravitational interactions in spacetime is physically the best model to use! Smolin’s summation of interaction graphs is basically a summation of the Penrose spin network graphs, which is a very abstract and questionable model of spacetime.

    Penrose’s own papers on spin networks, http://math.ucr.edu/home/baez/penrose/Penrose-AngularMomentum.pdf and http://math.ucr.edu/home/baez/penrose/Penrose-OnTheNatureOfQuantumGeometry.pdf, are entirely abstract models with no checkable predictions or even solid foundations in physical facts.

    On page 18 of the first paper Penrose states:

    “When the vertex connections have been completed at every vertex of a closed spin-network, then we shall have a number of closed loops, with no open-ended strands remaining.”

    This could physically be a model for the closed loops of graviton radiation being exchanged from gravitational charge A to charge B and then back again to charge A, in an endless cycle (closed loop).

    However, the linkage between mathematical or geometric model and physical fact is so indirect and vague that it’s just not very helpful physically.

    On page 4 of the first paper, Penrose writes:

    “I have referred to these line segments [in the Penrose spin network spacetime illustration] as representing, in some way, the world-lines of particles. But I don’t want to imply that these lines stand just for elementary particles (say). Each line could represent some compound system which separates itself from other such systems for long enough that (in some sense) it can be regarded as isolated and stationary, with a well-defined total angular momentum n*(1/2)*h-bar. Let us call such a system or particle an n-unit. (We allow n = 0, 1, 2, …) For the precise model I am describing, we must also imagine that the particles or systems are not moving relative to one another. They just transfer angular momentum around, regrouping themselves into different subsystems, perhaps annihilating one another, perhaps producing new units.”

    This is needlessly vague, which is a pity. Why not physically describe something specific, such as gauge boson exchange between gravitational charges, and see where it leads? Why instead pick one very vague spacetime model (the Penrose spin network) and work on that?

  88. copy of a comment:

    http://kea-monad.blogspot.com/2008/05/neutrino08.html

    Here’s something they won’t discuss concerning Rutherford. He and Bohr were extremely naive in 1913 about the electron “not radiating” endlessly. They couldn’t grasp that in the ground state, all electrons are radiating (gauge bosons) at the same rate they are receiving them, hence the equilibrium of emission and absorption of energy when an electron is in the ground state, and the fact that the electron has to be in an excited state before an observable photon emission can occur:

    “There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”

    – Rutherford to Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)

    The ground state energy, and thus the frequency of the orbital oscillation of an electron, is determined by the average rate of exchange of electromagnetic gauge bosons between electric charges. So it’s really the dynamics of quantum field theory (e.g. the exchange of gauge boson radiation between all the electric charges in the universe) which explains the reason for the ground state in quantum mechanics. Likewise, as Feynman showed in QED, the quantized exchange of gauge bosons between atomic electrons is a random, chaotic process, and it is this chaotic, quantized nature of the electric field on small scales which makes the electron jump around unpredictably in the atom, instead of obeying the false (smooth, non-quantized) Coulomb force law and describing nice elliptical or circular orbits.

  89. copy of a comment:

    http://riofriospacetime.blogspot.com/2008/05/einsteins-sphere.html

    “He rejects a flat Universe, for his General Relativity shows that Space/Time is curved. He rejects the idea of boundaries and considers the Universe “finite yet unbounded”. The obvious analogy is a sphere.

    This 4-dimensional spherical Space has a finite volume given by:

    V = 2π^2 R^3

    Where R is radius, with dimensions of length. (If anyone can’t abide by this, please complain to Einstein.)”

    There isn’t any “curved” smooth classical spacetime, it’s just an approximation using calculus to represent effects of discrete field quanta being exchanged between gravitational charges composed of mass or energy.

    The universe isn’t curved: this was discovered by Perlmutter around 1998, when the curvature (gravitational deceleration) predicted for the redshifts of distant supernovae was found to be absent.

    Einstein wrote to Besso in 1954:

    “I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included…”

    Quantum field theory is something that definitely needs to be considered.

  90. On my newer post introducing the basic mechanism of quantum gravity, https://nige.wordpress.com/2008/01/30/book/ , I make the point that the mainstream spin-2 graviton hypothesis is wrong because it only deals with a path integral that considers 2 regions of mass-energy (i.e., 2 sources of gravitation).

    It ignores graviton exchanges with all other masses in the universe, and thereby concludes that exchange of spin-2 gravitons would cause attraction of two masses (in a universe which only consisted of two masses).

    However, when you include the surrounding masses in the universe, this argument breaks down:

    1. When you include the surrounding masses in the universe, spin-1 gravitons do the job of pushing masses together, https://nige.wordpress.com/2008/01/30/book/

    2. When you include the surrounding masses in the universe, spin-2 gravitons no longer necessarily cause the correct (ad hoc) model of attraction for 2 masses. In any case, this spin-2 model is unphysical and has no evidence, unlike the correct predictions of quantitative effects by the spin-1 graviton model, https://nige.wordpress.com/2008/01/30/book/

  91. copy of comment to:

    https://nige.wordpress.com/2007/03/16/why-old-discarded-theories-wont-be-taken-seriously/

    It should be noted that the Wikipedia article about LeSage has been considerably increased in quality and content since the discussion in this post, yet the basic flaws in the article survive untouched.

    It continues to try to debunk the idea of exchange radiations causing fundamental forces using the false argument that quantum fields would (by exchanging field quanta between particles of matter) heat up matter, despite the fact that this exchange radiation model is the MAINSTREAM Yang-Mills and Abelian Standard Model of particle physics, and quantum gravity.

    Shamefully, people like the authors of the book on LeSage gravity, “Pushing Gravity”, continue to try to ignore this fact, ignoring all of quantum field theory which is based on the exchange of field quanta between charged particles of matter.

    Another one is the drag objection: again, if exchange radiation caused drag on moving particles, this criticism would need to be leveled against the MAINSTREAM model of Yang-Mills quantum field theory, the Standard Model of quantum physics. Actually, long-range (light velocity) exchange radiations are responsible for some effects on moving bodies in lieu of drag: length contraction, mass increase, etc. These effects are discussed in the latest (and final) post on this blog, https://nige.wordpress.com/2008/01/30/book/ as well as in the earlier post https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/ and various other posts.

    However, while all these people ignore the facts using false arguments (hypocrisy, because such arguments would debunk the Standard Model of particle physics if they were true, and they don’t use such arguments to debunk that; they just focus such arguments at the LeSage model because they’re just crackpots) they have at least assembled quite a few facts about the (false) reasons for the censorship at

    http://en.wikipedia.org/wiki/Le_Sage's_theory_of_gravitation

    including notably a readable English translation of LeSage’s own paper:

    http://en.wikisource.org/wiki/The_Le_Sage_Theory_of_Gravitation

  92. Another piece of horseshit in the Wikipedia article:

    “Therefore, in order to be viable, Fatio and Le Sage postulated that the shielding effect is so small as to be undetectable, which requires that the interaction cross-section of matter must be extremely small (P10, below). This places an extremely high lower-bound on the intensity of the flux required to produce the observed force of gravity. According to standard physics any form of gravitational shielding is a violation of the equivalence principle and therefore is inconsistent with general relativity.[44]”

    Quantum fields are incompatible with general relativity too, because they deny smooth curvature and replace it with quantized effects (discrete particles of exchange radiation constituting the field, not a continuum). This means nothing, because we know that general relativity is only a classical approximation, not a religious truth declared by God.

    So all this horseshit about LeSage not being compatible with general relativity is just an example of propaganda, mud-throwing in the hope that some will stick.

    In any case, when you look at the numbers, the outward acceleration of the universe is on the order of 10^{-10} m s^{-2}, as observed from supernova redshifts by Perlmutter and published in Nature, and as calculated at https://nige.wordpress.com/2008/01/30/book/ . This gives an immense outward force F = ma on the order of 10^43 Newtons (because the mass of the receding universe is immense, despite the small acceleration).

    From the possibilities known to be available in the Standard Model and quantum gravity for what carries the equal and opposite reaction force (Newton’s 3rd law), gravitons are one candidate. The shielding area is the area of the fundamental particle’s black hole event horizon; see https://nige.wordpress.com/2008/01/30/book/ for references. This area is very small, so there is no significant error in Einstein’s equivalence principle. Overlap of particles in the Earth or even a star is not a significant effect, because the areas are so small: it is extremely improbable that two particles will be aligned so perfectly along any line of sight that their minuscule shielding areas overlap. This is an example of a quantitative effect becoming a qualitative effect because of the extreme scale of the numbers involved. Because the cross-sections for quantum gravity interactions are so small, there is no significant overlap problem of the LeSage variety.
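    As a rough order-of-magnitude sketch of the two numbers used above (the mass of the receding universe is an assumed round figure of order 10^52 kg, and the electron event-horizon radius is the ~10^{-57} m value quoted earlier; neither is a precise input of the argument):

    import math

    a = 7e-10            # m/s^2, outward acceleration of order 10^-10 (from a = Hc)
    M = 3e52             # kg, assumed order-of-magnitude mass of the receding universe
    F = M * a            # outward force; the inward reaction force is equal by Newton's 3rd law
    print(f"F = Ma ~ {F:.1e} N")          # of order 10^43 N, as stated above

    r_s = 1.35e-57                        # m, electron event-horizon radius (2Gm/c^2) quoted earlier
    area = math.pi * r_s**2               # graviton shielding cross-section per particle
    print(f"shielding area per particle ~ {area:.1e} m^2")   # so small that overlap is negligible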

    In any case, LeSage’s original theory is as far from the truth as Aristarchus of Samos’s solar system was from Kepler’s and Newton’s laws of planetary motion and gravity. The hard work isn’t proclaiming that the Earth orbits the sun, but obtaining the laws of motion, which were quite different (elliptical orbits) from Aristarchus’s and Copernicus’s circular orbits.

    In the 1750 or more years when Aristarchus’s correct idea was being suppressed and censored (i.e., from 250 BC to 1500 AD or so), the people doing the censorship could throw any horseshit abuse at the idea without bothering to read it or study it carefully, and they certainly did throw a lot of mud. For example, they asserted that because the solar system required the earth to spin on its axis (one revolution daily), it was immediately disproved by the fact we aren’t thrown off the earth (at 1000 miles/hour near the equator), and by the fact that the clouds in the sky over the equator don’t whiz by at 1000 miles per hour. These were some of the ignorant sneers made by Ptolemy in his “sun orbits earth” tract, the Almagest, published in 150 AD.

    This was all horseshit, based on hostility, ignorance, and lying political showmanship of the sneering variety. The physics of the laws of motion and meteorology didn’t exist at the time Aristarchus was around. Newton in 1687 published the laws of motion, and detailed understanding of meteorology was discovered in the centuries after that.

    That horseshit is similar to Maxwell and Kelvin’s horseshit about quantum fields being an impossibility because the exchange of field quanta would heat up objects until they glowed red hot. They invent a speculative objection, based on their own ignorance, which is like saying that people at the equator would fly off if the earth was really spinning. In science, unobserved speculations don’t disprove observed facts, except in the minds of the gullible and of confidence tricksters like string theorists.

  93. Notice that there is a flaw in the automatic hyperlinking of web-addresses by the wordpress comments code: http://en.wikipedia.org/wiki/Le_Sage's_theory_of_gravitation does not hyperlink to the relevant page because the comments page automatically formats the simple (neither 6 nor 9 shaped) apostrophe into an intelligent 9-shaped apostrophe, before attempting to hyperlink it. This makes the hyperlinking fail at the apostrophe. Wikipedia has a page http://en.wikipedia.org/wiki/Le_Sage's_theory_of_gravitation with a simple apostrophe in the name LeSage’s, not a clever apostrophe.

    Anyway, a couple of further observations about the LeSage page. It claims falsely that general relativity and the LeSage mechanism are incompatible in general, as I’ve explained above. In fact, the incompatibility is non-observable, and at a higher level general relativity is the classical (non-quantized) inaccurate approximation, not vice-versa. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/ and https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/ for a discussion of general relativity’s contraction mechanism. What happens in general relativity is a mathematical generalization of the Newtonian gravitational law in tensor calculus, followed by a correction that is needed for conservation of energy in the field: it is the correction which makes a photon of light (or anything else moving at velocity c) get deflected twice as much as a slow-moving (non-relativistic) object would in the same gravitational field.

    “As shown by Laplace, another possible Le Sage effect is orbital aberration due to finite speed of gravity. Unless the Le Sage particles are moving at speeds much greater than the speed of light, as Le Sage and Kelvin supposed, there is a time delay in the interactions between bodies (the transit time). In the case of orbital motion this results in each body reacting to a retarded position of the other, which creates a leading force component. Contrary to the drag effect, this component will act to accelerate both objects away from each other. In order to maintain stable orbits, the effect of gravity must either propagate much faster than the speed of light or must not be a purely central force. This has been suggested by many as a conclusive disproof of any Le Sage type of theory. In contrast, general relativity is consistent with the lack of appreciable aberration identified by Laplace, because even though gravity propagates at the speed of light in general relativity, the expected aberration is almost exactly cancelled by velocity-dependent terms in the interaction.[48]” – http://en.wikipedia.org/wiki/Le_Sage's_theory_of_gravitation

    The problem with the above is simply that the gravity mechanism gives rise to general relativity as a classical approximation for orbital aberration and many other purposes! See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/ . The quantum gravity theory explains the mechanism behind general relativity and shows where it is valid (it isn’t valid for cosmologically vast distance interactions, where you get cosmological acceleration and recession rather than gravitational attraction, but that is not relevant to orbital aberration). Hence the entire quotation above is horseshit.

    The discussion on “drag” in that article is wrong because it implicitly assumes a false idea that the particles causing gravity are simply like a gas. We know even in the MAINSTREAM model, that there are two components involved which are distinct from normal fermionic particles:

    (1) bosonic exchange radiation
    (2) a mass-giving bosonic field such as some kind of Higgs field

    The combined effect of these two components is approximated by an ideal fluid, which doesn’t exhibit any drag at all. Drag occurs only when there is a net energy loss to the surrounding medium due to motion of a particle relative to that medium. With bosonic field quanta, this “drag” effect only occurs when a fermion accelerates or decelerates. When accelerating or decelerating, energy is emitted as radiation and the fermion changes shape and mass due to the surrounding bosonic field. Once acceleration stops, the net emission of energy stops, because an equilibrium is established.

    The whole point is that any particle is continually emitting and receiving radiation at all times, regardless of motion: this quickly stabilises as an equilibrium so there is no net loss or gain of energy. Moving a fermion causes an upset to the equilibrium during the period of acceleration. After that, the equilibrium is re-established and there is no net loss or gain of energy. Because the particle is not able to lose energy after the Lorentz contraction process (which occurs during accelerations only, a fact covered up in special/restricted relativity by the problem that special/restricted relativity doesn’t apply to accelerations at all), it is unable to slow down. Drag can’t occur physically, because no energy is being lost.

    Drag occurs when particles of air hit a moving object and carry away some of the original kinetic energy of the moving object in the form of increased motion of air molecules. This is a process which can’t occur with massless bosonic exchange radiation (the Z boson is a massive bosonic field quantum), which is restricted to the velocity of light. Because the mechanism for quantum gravity involves inertial forces and Lorentz contraction phenomena during accelerations but not during constant-velocity motion, drag doesn’t occur.

    “In many particle models, such as Kelvin’s, the range of gravity is limited due to the nature of particle interactions amongst themselves. The range is effectively determined by the rate that the proposed internal modes of the particles can eliminate the momentum defects (shadows) that are created by passing through matter. Such predictions as to the effective range of gravity will vary and are dependent upon the specific aspects and assumptions as to the modes of interactions that are available during particle interactions. However, for this class of models the observed large-scale structure of the cosmos constrains such dispersion to those that will allow for the aggregation of such immense gravitational structures.” – http://en.wikipedia.org/wiki/Le_Sage's_theory_of_gravitation

    The lying horseshit here is the sneaky “hint” that the large-scale structure of the universe debunks a limited range for gravitational attraction: actually the acceleration of the cosmos on large cosmological scales, discovered from supernova redshifts by Perlmutter in 1998, is a universal repulsion of masses on extremely large scales. As shown in Fig. 1 at https://nige.wordpress.com/2008/01/30/book/ , the limited range of gravitational attraction and the existence of repulsive cosmological acceleration on larger scales are both predicted accurately from one mechanism. The cosmological acceleration was predicted in 1996 and published prior to Perlmutter’s observational discovery of it.

  94. copy of a comment:

    http://riofriospacetime.blogspot.com/2008/06/inflation-deflated.html

    “Why the Best Theories Aren’t Always Right” – New Scientist editorial, http://www.newscientist.com/channel/opinion/mg19826592.900-editorial-why-the-best-theories-arent-always-right.html

    Thanks for quoting this classically absurd New Scientist editorial headline! It’s a great title, telling us a lot about the thinking of the editor.

    Personally, I think that a scientist should hold the viewpoint that the best theory is the correct theory.

    As soon as you start believing that theories which are not right are the best theories, you enter the “doublespeak” world of delusion discussed by George Orwell in 1984.

    Notice that the New Scientist editorial tells the lie:

    “When Copernicus showed that the observations fitted more elegantly with a theory in which the Earth went around the sun, Ptolemy’s work became redundant.”

    This was debunked by Arthur Koestler in his 1959 masterpiece of research, “The Sleepwalkers”.

    Koestler counted the number of epicycles used by Ptolemy and by Copernicus (both needed epicycles, since they used perfect circles to describe orbits, not ellipses which were only discovered long after Copernicus by Kepler who used Brahe’s detailed observations to work out the orbit of Mars).

    Koestler found that Ptolemy used about 40, and Copernicus used 80.

    This is hardly the “elegant” simplicity that the New Scientist editorial claims; it is ugly complexity.

    The reason was that Copernicus was only partly right; he wrongly used many epicycles (twice as many as Ptolemy) because he missed out the fact that planets go in elliptical shaped orbits, rather than lots of circles within circles.

    Copernicus’ circular orbits with circular epicycles (within the circular orbits) was proposed in 1500 AD. Kepler discovered that Copernicus’s model was wrong in all the mathematical details when he discovered circa 1610 that the planets move in ellipses. It was only on the back of Kepler’s three accurate laws of planetary motion (based on new observations by Tycho Brahe, the astronomer who had lost his nose in a sword duel), that Newton was able to come up with three general laws of motion, ending the medieval era for physics.

    The New Scientist editorial continues:

    “Questioning and replacing long-held ideas is what science does best. Copernicus could not have happened without Ptolemy.”

    This is ignoring Aristarchus of Samos, who came up with the solar system of Copernicus (minus some of Copernicus’s false epicycles) in 250 BC, some 1750 years before Copernicus!

    I can’t believe that the editor of the New Scientist really believes that a false theory doctrine was helpful. It wasn’t. Copernicus failed to publish until he was on his deathbed. If it hadn’t been for Ptolemy’s rubbish, progress would have happened a lot faster.

    E.g., when you read Ptolemy’s Almagest (published in 150 AD) – you can find Ptolemy’s Almagest together with Copernicus and Kepler in volume 16 of Encyclopedia Britannica’s series from 1952, “Great Books of the Western World” (volume 11 in that series is also vital reading for scientists) – you see that Ptolemy made slighting attacks against the solar system theory.

    Ptolemy declared that if the solar system was right, the Earth would need to be spinning on its axis daily, which he claimed couldn’t be so because clouds near the equator would then be flying across the sky at an immense speed (over 1000 miles/hour). Notice that Ptolemy was writing this in 150 AD, over 1500 years before Newton wrote down the three basic laws of motion.

    So Ptolemy had no basis for claiming that the solar system was wrong because clouds should be left behind by the Earth’s spin. It was completely junk “debunking” – he was using speculative guesswork to deduce a false “prediction” from the solar system, then claiming that because the false prediction is in disagreement with nature, the solar system must be wrong!

    This is very similar to some of the crackpotism that occurs when the Fatio or LeSage gravity mechanism is discussed: physicists want to ignore mechanism, or to pretend that there is no basis for it, so they falsely claim that any exchange radiation which mediates forces would heat up objects like ordinary heat radiation, or that exchange radiation would cause drag and slow things down. These objections are insubstantial, because in any quantum field theory forces are caused by the exchange of field quanta. This has been established in the accurate tests of quantum field theories of electromagnetism, the weak interactions and the strong force. The field quanta don’t cause objects to heat up, despite the fact that all of these interactions have a much higher coupling than gravity does! The objectors are confusing real, observable radiation with the exchange radiation (which has extra polarizations, e.g. the field quanta of electromagnetism have four polarizations rather than the two polarizations of observable photons), so they aren’t the same thing. Gauge bosons don’t cause objects to heat up; they just cause fundamental forces. Nobody in the mainstream objects to exchange radiations in the Standard Model, including electromagnetism which is a long-range, inverse-square law force like gravitation, so they shouldn’t try to ridicule a basic mechanism using such hypocritical, unethical and ignorant nonsense.

    The physical effect of New Scientist’s editorials is to slow down the development of physics by defending ignorance.

    For the editor to defend Ptolemy by saying that Copernicus required Ptolemy’s bigotry and nonsense to bog down physics for 1350 years (150 AD – 1500 AD) is like saying that Churchill and his supporters really owe a debt of gratitude to Hitler because World War II would not have been won without Hitler causing the initial problem. While a moron might be swept along by such an argument, anyone sensible will raise the point that although World War II wouldn’t have been won without dictators, the world would have been better off not having the war at all!

    Ptolemy’s Almagest is the most evil work ever written, due to not just ignoring the correct model, but ridiculing it for false reasons and not properly analysing it (the correct solar system model, albeit with circular orbits not ellipses, had been published by Aristarchus of Samos in 250 BC but was lost when the library of Alexandria burned down, because it hadn’t been copied due to mainstream ignorant bigotry such as that spread by people like Ptolemy).

    I recommend the book by Robert R. Newton, “The Crime of Claudius Ptolemy”, Johns Hopkins University Press, London, 1979.

  95. copy of a comment:

    http://riofriospacetime.blogspot.com/2008/06/inflation-deflated.html

    In his book 1984, published in 1949, Orwell actually uses astronomy to illustrate doublethink:

    ‘What are the stars?’ said O’Brien indifferently. ‘They are bits of fire a few kilometres away. We could reach them if we wanted to. Or we could blot them out. The earth is the centre of the universe. The sun and the stars go round it.’ Winston made another convulsive movement. This time he did not say anything. O’Brien continued as though answering a spoken objection: ‘For certain purposes, of course, that is not true. When we navigate the ocean, or when we predict an eclipse, we often find it convenient to assume that the earth goes round the sun and that the stars are millions upon millions of kilometres away. But what of it? Do you suppose it is beyond us to produce a dual system of astronomy? The stars can be near or distant, according as we need them. Do you suppose our mathematicians are unequal to that? Have you forgotten doublethink?’

    The editor of New Scientist is actually well in tune with O’Brien’s doublethink.

  96. here is a comment I will copy here if you don’t mind in case it gets lost:

    http://kea-monad.blogspot.com/2008/06/m-theory-lesson-197.html

    Thank you for the link to the article about Galois’s last letter before his fatal duel. He must have led a very exciting life, making breakthroughs in mathematics and fighting duels. Dueling was a very permanent way to settle a dispute, unlike the uncivilized, interminable, tiresome squabbles which now take the place of duels.

    The discussion of groups is interesting. I didn’t know that geometric solids correspond to Lie algebras. Does category theory have any bearing on group theory in physics, e.g. symmetry groups representing basic aspects of fundamental interactions and particles?

    E.g., the Standard Model group structure of particle physics, U(1)*SU(2)*SU(3), is equivalent to the S(U(3)*U(2)) subgroup of SU(5), and E(8) contains many subgroups, including S(U(3)*U(2)), so SU(5) and E(8) have been considered candidate theories of everything on mathematical grounds. A rough generator count is sketched below.
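    A minimal sketch of the generator counting behind that remark (these are just the standard dimensions of the Lie groups; the actual embeddings of course require far more than counting generators):

    # Group dimensions: dim U(1) = 1, dim SU(N) = N^2 - 1, dim E8 = 248.
    def dim_su(n):
        return n**2 - 1

    dim_sm = 1 + dim_su(2) + dim_su(3)    # U(1) x SU(2) x SU(3) -> 1 + 3 + 8 = 12 generators
    dim_su5 = dim_su(5)                   # 24 generators
    dim_e8 = 248

    print(f"Standard Model gauge group: {dim_sm} generators")
    print(f"SU(5): {dim_su5} generators, E8: {dim_e8} generators")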

    Do you think that these platonic symmetry-searching methods are the wrong way to proceed in physics? Woit writes in http://arxiv.org/PS_cache/hep-th/pdf/0206/0206135v1.pdf that the Standard Model’s problems are not tied to making the symmetry groups appear from some grand theory like a rabbit from a hat, but are concerned with explaining things like why the weak isospin SU(2) force is limited to acting on just left-handed particles, why the masses of the Standard Model particles – including neutrinos – have the values they do, whether some kind of Higgs theory for mass and electroweak symmetry breaking is really solid science or whether it is like epicycles (there is quite a landscape of different versions of the Higgs theory with different numbers of Higgs bosons, so by ad hoc selection of the best fit and the most convenient mass, it’s a quite adjustable theory and not extremely falsifiable), and how quantum gravity can be represented within the symmetry group structure of the Standard Model at low energies (where any presumed grand symmetry like SU(5) or E(8) will be broken down into subgroups by various symmetry-breaking mechanisms).

    What worries me is that, because gravity isn’t included within the Standard Model, there is definitely at least one vital omission from the Standard Model. Because gravity is a long-range, inverse-square force at low energy (like electromagnetism), gravity will presumably involve a term similar to part of the electroweak SU(2)*U(1) symmetry group structure, not to the more complex SU(3) group. So maybe the SU(2)*U(1) group structure isn’t complete because it is missing gravity, which would change this structure, possibly simplifying things like the Higgs mechanism and electroweak symmetry breaking. If that’s the case, then it’s premature to search for a grand symmetry group which contains SU(3)*SU(2)*U(1) (or some isomorphism). You need to empirically put quantum gravity into the Standard Model, by altering the Standard Model, before you can tell what you are really looking for.

    Otherwise, what you are doing is what Einstein spent the 1940s doing, i.e., searching for a unification based on input that fails to include the full story. Einstein tried to unify all forces twenty years before the weak and strong interactions were properly understood from experimental data, so he was too far ahead of his time to have sufficient experimental understanding of the universe to be able to model it correctly theoretically. E.g., parity violation was only discovered after Einstein died. Einstein’s complete dismissal of quantum fields was extremely prejudiced and mistaken, but it’s pretty obvious that he was way off beam not just for his theoretical prejudices, but for trying to build a theory without having sufficient experimental input about the universe. In Einstein’s time there was no evidence of quarks, no colour force, no electroweak unification, and instead of working on trying to understand the large number of particles being discovered, he preferred to stick to classical field theory unification attempts. To the (large) extent that mainstream ideas like string theory tend to bypass experimental data from particle physics entirely, such theories seem to suffer the same fate as Einstein’s efforts at unification. To start with, they ignore most of the real problems in fundamental physics (particle masses, symmetry-breaking mechanisms, etc.), they assume that existing speculations about unification and quantum gravity are headed in the correct direction, and then they speculatively unify those guesses without making any falsifiable predictions. That’s what Einstein was doing. To those people this approach seemed like a very good idea at the time, or at least the best choice available at the time. However, a theory that isn’t falsifiable experimentally may still be discarded for theoretical reasons when a better theory comes along.

    According to Table 1 in this post, the theory of the mass mechanism gives the following relative masses for the leptons – electron, muon and tauon (which are identical apart from differences in mass and small mass-related corrections):

    1, (2+1)/(2*alpha), and (50+1)/(2*alpha)

    (as explained in the post, the 2 and 50 in the numerators for muon and tauon respectively are the stable shell structure number of massive particles associated with the high energy states of the lepton, an analogy to the ‘magic’ numbers for high nuclear stability of 2 and 50 nucleons)

    Hence the series of relative masses for electron, muon and tauon is:

    1, 3/(2*alpha), and 51/(2*alpha).

    In the case of mesons, there are two quarks per particle core, so for a pion the mass relative to an electron is

    2(1+1)/(2*alpha)

    while for a baryon there are three quarks per particle core, so for a nucleon the mass relative to an electron is

    3(8+1)/(2*alpha)

    where the number 8 is another stable shell structure of the massive particles which are the units of quantized mass (in the analogy to nuclear shell structure stability, 8 is another nuclear ‘magic number’ of nucleons of high stability – little radioactivity – in nuclear physics).

    The full explanation for these formulae is in the text, e.g. see Table 1 and Figure 6.
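    A minimal sketch checking those formulae against the measured mass ratios (the measured electron-relative masses below are standard particle-data values, inserted here purely for comparison; the comment above only gives the formulae):

    alpha = 1 / 137.036     # fine structure constant

    predicted = {
        "muon":    (2 + 1) / (2 * alpha),       # ~205.6
        "tauon":   (50 + 1) / (2 * alpha),      # ~3494
        "pion":    2 * (1 + 1) / (2 * alpha),   # ~274
        "nucleon": 3 * (8 + 1) / (2 * alpha),   # ~1850
    }
    # Measured masses relative to the electron (assumed standard values, not from the text):
    measured = {"muon": 206.77, "tauon": 3477.2, "pion": 273.13, "nucleon": 1836.15}

    for name, pred in predicted.items():
        print(f"{name:8s} predicted {pred:8.1f}   measured {measured[name]:8.1f}")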

  98. interesting comment by anon to Not Even Wrong (copied here in case it gets lost in moderation queue):

    http://www.math.columbia.edu/~woit/wordpress/?p=701#comment-39191

    All applied mathematics for real world physics is only approximate:

    1. Newtonian physics only has exact analytical solutions for two-body interactions, whereas there are many bodies present in the universe. Poincare chaos arises for orbits of more than two bodies, where each affects (alters) the orbit of the other as it moves. There is also a quantum chaos from the random exchange of field quanta that causes the electromagnetic interaction between electrons and protons, which on small scales is random (on big scales the large number of field quanta interaction statistics smooth out to give the deterministic classical Coulomb law). This prevents deterministic calculation of electron orbits inside the atom.

    2. General relativity’s stress-energy tensor uses an artificially smoothed distribution of mass and energy instead of representing the real particulate (discontinuous, i.e. atoms and quanta) distribution of matter and energy, to create an equally false smooth source for the Riemann curvature. It just ignores the QFT idea that gravity field quanta (gravitons) are exchanged in discrete interactions, not continuous acceleration (smooth curvature).

    3. Even if you just consider simple addition, counting two electrons, you haven’t got an exact mathematical model with 1 + 1 = 2 for two of the same thing, because the electrons all differ slightly in their motions and, by the uncertainty principle, you can’t in principle ever find their exact positions and momenta. So they will have slightly different velocities and therefore slightly different masses. So you’re not adding up exactly the same real thing. To make the point clearer, if you add up apples (or count sheep), you are adding up things which are approximately similar, but not exactly the same. Two similar-looking items will differ at the atomic scale. So addition is only ever exactly true when dealing with abstract tokens like money, an invention of mathematics.

    It is impossible even in principle to get exactly true input data in the real world from making measurements. Also, it’s impossible to make exact predictions, because all applied physics calculations for the real world involve making approximations. So the universe isn’t intrinsically mathematical. You can’t get completely exact input data, and – even if you did know exact initial conditions – the mathematics used to model real (complex) phenomena is an approximation only.

    In order for the universe to be intrinsically mathematical, it would be necessary in principle for there to exist some way of exactly representing the real world using mathematics, instead of relying on approximations and statistical wave equations. Mathematics is in principle at best just an approximation to the universe, so the universe can’t – even in principle – be intrinsically mathematical in nature.

  99. Copy of a comment:

    http://riofriospacetime.blogspot.com/

    Hi Louise,

    I don’t understand the details of the black hole mechanism for heat release you mention.

    Enceladus, a moon of Saturn, may generate heat in various ways. I don’t see how you can rule out radioactivity as a source of heat; its abundances of potassium-40, uranium and thorium-232 are not known. Before radioactivity was known, Kelvin worked out that tidal action wouldn’t generate enough heat inside the earth to account for volcanic action and the temperature at the bottom of deep mineshafts, so he simply made up a theory that the earth’s internal heat was mainly due to the latent heat given off by lava as it solidified (like the latent heat given off when steam condenses). This was the basis for Kelvin in 1862 trying to disprove Darwin’s 1859 theory of evolution by “proving” that the Earth couldn’t be more than 100 million years old, which wasn’t enough time for evolution in Darwin’s theory. Then in 1896, Becquerel discovered radioactivity. Although you can measure the radioactivity in surface rocks, you are limited in what you can deduce about the amount of radioactivity in the core of the Earth, so the theory isn’t completely checkable. However, the antineutrinos emitted in radioactivity within the Earth are detectable and provide limits.

    What I don’t understand about the black hole heat source idea you mention is what mechanism allows its stable consumption of 2.8 kg of matter per year. What stabilises it, preventing the black hole from either evaporating faster than it can suck in matter or, alternatively, sucking in matter faster than it can radiate energy?

    Surely if such a black hole were surrounded by a lot of matter, all the matter would soon fall into it, and the black hole would either grow or explode. Why should it remain stable? Why should just 2.8 kg of the surrounding matter drip into the black hole over the course of a year? This seems strange.
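
    (Just to put a scale on that consumption figure, here is a rough sketch of the mean power it would imply if the 2.8 kg per year were fully converted to radiated energy via E = mc^2; the year length and value of c are standard figures, and complete conversion is an assumption made only for this estimate.)

        # Rough power implied by a black hole consuming 2.8 kg of matter per year,
        # assuming (for this estimate only) complete conversion to radiated energy.
        c = 2.998e8                  # speed of light, m/s
        m_per_year = 2.8             # kg consumed per year (figure quoted above)
        seconds_per_year = 3.156e7   # seconds in one year

        energy_per_year = m_per_year * c**2            # joules per year, E = m c^2
        mean_power = energy_per_year / seconds_per_year

        print(f"Energy released per year: {energy_per_year:.2e} J")
        print(f"Mean power output:        {mean_power:.2e} W")  # roughly 8 GW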

    But on the fundamental particle level, this stability question can be solved if the Hawking radiation is gauge boson exchange radiation: the fundamental particle as a black hole is stable because it is in an equilibrium, receiving and emitting gauge boson radiations. Radiation continually falls in and is continually radiated out, giving a mechanism for the Yang-Mills theory of exchange radiation.

  100. The subject is very impressive, but I would like to know more freedom from the equations of gravity and Earth studies, and thank you

  101. Here’s an idea for a test of this theory.
    Given the speed at which we orbit the center of our galaxy, and our speeds relative to various objects in our galaxy, one should be able to deduce the amount of mass around us.

  102. Hi Michael,

    I kinda object to it being called a theory: the predictive model is based on facts and I’ve carefully avoided any speculations. There’s factual evidence for the cosmological acceleration via supernova redshifts (from Perlmutter et al.), which has been well confirmed by other observers. Fact. Simply put that acceleration of the mass of the universe into Newton’s 2nd law, F=ma, and you predict an outward force of the order of 10^43 N. Then Newton’s 3rd law tells you there should be an equal inward force, and from the known facts of particle physics gauge interactions (Standard Model and quantum gravity considerations) you conclude that this inward force is mediated by gravitons, not anything else. This allows you to predict the strength of gravity, and it predicts other things too. Where is the “speculative theory”? Newton’s 2nd law? Newton’s 3rd law? These have evidence behind them and aren’t speculative theories; they’re well tested empirical facts of nature.
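
    As a rough order-of-magnitude illustration of that F=ma step (a sketch only: the cosmological acceleration of about 7 x 10^-10 m/s^2 and a universe mass of roughly 3 x 10^52 kg are assumed round figures, not restated in this comment):

        # Order-of-magnitude check of the outward force F = m*a quoted above.
        a = 7e-10   # m/s^2, cosmological acceleration (assumed round figure)
        m = 3e52    # kg, mass of the universe (assumed round figure)

        outward_force = m * a
        print(f"Outward force ~ {outward_force:.1e} N")   # of order 10^43 N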

    Regarding your suggested test, the centripetal force is F = ma = m(v^2)/R, where v is the orbital velocity of mass m at radius R from the centre of the galaxy. This force should be equal to gravity, F = mMG/R^2 in the Newtonian approximation, or a slightly different value in this mechanism when quantum gravity effects bring about a slight departure. E.g., for very great masses, despite the very low cross-section for graviton interactions with matter (i.e. the black hole event horizon cross-sectional area for a fundamental particle), there can be some overlap, which means that the gravity force no longer increases in direct proportion to the mass when that mass becomes extremely large. Another departure from Newton in this quantum gravity mechanism is the geometric effect of the distribution of masses around you. E.g., on very large distance scales the graviton exchange between masses becomes so large that it exceeds the inward push and thus starts to push masses apart, causing the cosmological acceleration and expansion of the universe.
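
    In the Newtonian approximation those two forces give v^2 = GM/R, so the mass enclosed within a given orbit follows directly from the orbital speed. A minimal sketch of that estimate for the Sun’s orbit (the ~220 km/s speed and ~8 kpc radius are standard figures assumed here; the quantum gravity corrections described above are ignored):

        # Newtonian estimate of the mass enclosed within the Sun's galactic orbit:
        # m*v^2/R = G*m*M/R^2  =>  M = v^2 * R / G.
        G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
        v = 2.2e5             # m/s, orbital speed of the Sun (assumed standard figure)
        R = 8.0 * 3.086e19    # m, roughly 8 kpc galactocentric radius (assumed)

        M_enclosed = v**2 * R / G
        print(f"Enclosed mass ~ {M_enclosed:.1e} kg")
        print(f"              ~ {M_enclosed / 1.989e30:.1e} solar masses")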

    John Hunter has also approached the problem of the galaxy rotation curves from a perspective that is in some respects similar to my quantum gravity argument. I.e., Hunter suggested that the rest mass-energy of a particle is equivalent to its gravitational potential energy with respect to the total surrounding matter M at a weighted mean radial distance R, as illustrated at http://www.gravity.uk.com/galactic_rotation_curves.html (this gravitational potential energy would be the energy released by the particle’s mass if the universe collapsed, so it treats the gravitational field energy in a physical way). See his paper, “On Gravity and the Motion of Dark Matter”, at http://vixra.org/abs/0908.0004 (PDF: http://vixra.org/pdf/0908.0004v1.pdf). Although I disagree with some other details of his theoretical model, these aspects are important. See also http://cosmologystatement.org/ which is in some ways correct and in other ways misguided. The “big bang” Lambda-CDM model, using general relativity with ad hoc amounts of dark matter and dark energy, is misguided because it is indeed possible to fit general relativity metrics to just about any kind of universe without learning anything further in physics, but the three pieces of basic evidence for the big bang are strong. The problem is the Lambda-CDM model standing in the way of quantum gravity, like religious dogma standing in the way of Copernicus’s solar system.
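
    Hunter’s condition, that the rest mass-energy mc^2 equals the gravitational potential energy GmM/R with respect to the surrounding mass, reduces to c^2 = GM/R. A rough consistency check (the mass and radius values below are assumed order-of-magnitude figures for the observable universe, not taken from Hunter’s paper):

        # Consistency check of m*c^2 = G*m*M/R, i.e. c^2 = G*M/R, using rough
        # order-of-magnitude figures for the observable universe (assumed values).
        G = 6.674e-11   # m^3 kg^-1 s^-2
        c = 2.998e8     # m/s
        M = 9e52        # kg, rough mass of the observable universe (assumed)
        R = 1.3e26      # m, rough Hubble-scale radius (assumed)

        print(f"G*M/R = {G * M / R:.1e} m^2/s^2")
        print(f"c^2   = {c**2:.1e} m^2/s^2")   # same order of magnitude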

    I’m working on a paper which will address the prediction of “dark matter” effects as well as presenting the model, its other predictions, and comparisons with the data in a more structured way than blog posts allow. Certainly there is some dark matter, because neutrinos have mass, but the question of how much needs a careful, detailed quantitative answer, since the Lambda-CDM model “predicts” that most of the universe is dark energy and dark matter. The dark energy is the gravity field which presses small nearby masses together (gravity) and presses large, distant masses apart (cosmological acceleration and expansion).

  103. Hi. As basically a layperson in the QFT/GR arena, I am impressed by what I can pick up. I am bothered by one thing, though. Have you checked the predictions of gravitational radiation for the spin-1 graviton theory against the usual GR (equivalent to spin-2 graviton) predictions and results for the Hulse-Taylor binary pulsar PSR B1913+16, as tabulated for instance at http://en.wikipedia.org/wiki/PSR_B1913%2B16 ?

    Thanks

    [Reply: it yields precisely the same predictions because the equations are precisely the same for binary pulsars (the mechanism is equivalent to the general relativity field equation on the scale of planets, stars, etc, since it produces the full contraction term). All local predictions of general relativity are included in this mechanism, by which I mean predictions of phenomena like pulsars which are on scales much smaller than the universe. The differences in the equations between the spin-1 graviton mechanism and general relativity only occur on the very large and very small distance scales, not for the sizes of stars, planets etc. Thanks.]

    1. Thanks for the response, Nigel. I had not seen gravitational radiation directly addressed in your articles (but they are l-o-n-g, so I may have missed it), so it is reassuring that it has been done. I am wondering, though, whether the quadrupole nature usually assumed in GR – an ‘electric’-type component only, with no ‘magnetic’ component for gravitational waves – also applies to spin-1 gravity. In other words, is there any possibility of a novel means of GW detection utilizing possible ‘magnetic’ component(s)?

      Reply: Thanks, Kevin, but no, I haven’t done detailed research applying this to gravitational wave predictions, and I don’t think much experimental data is going to be gathered from gravitational wave detectors. The coupling for gravity is about 10^40 times smaller than for electromagnetism, and natural seismic “noise” will obliterate signals. Even if and when some gravitational wave is detected and correlated with a supernova or similar event, it will probably be at the limits of detection, statistically close to the noise level, not a strong, clear signal that will tell us anything useful about the details of gravity waves. I’m interested in applying the theory to make predictions that can be reliably tested, e.g. it predicted the cosmological acceleration correctly in 1996, ahead of the experimental discovery. It also sorts out problems in cosmology and, given cosmological observations, produces G accurately to within the experimental error of the input data.
