The Standard Model of particle physics is U(1) × SU(2) × SU(3), representing respectively electromagnetism/weak hypercharge, weak isospin charge (which acts only on particles of left-handed spin), and strong colour charge. This doesn’t include the Higgs field (there are several possible versions of the Higgs field, to be tested from 2009 at the LHC), or gravitational interactions. Since mass-energy is gravitational charge, there is clearly a link between gravitons and Higgs bosons, but this is not predicted by the Standard Model in its current form.
The role played by U(1) in the Standard Model can actually be performed by massless gauge bosons of SU(2), because not all SU(2) gauge bosons acquire mass at low energy. We know that for those that do acquire mass, the resulting massive gauge bosons only interact with left-handed spinors. It’s possible that one handedness of the SU(2) gauge bosons remains massless at low energy, and that these are the gauge bosons of electromagnetism and also gravitation (if the latter is mediated by a spin-1 graviton, not a spin-2 graviton). There are calculations and predictions which justify this. The spin-1 gravitons cause universal repulsion of masses over large distances, i.e. they are the dark energy behind the cosmological acceleration. Nearby masses which are relatively small compared to the mass of the surrounding universe are pushed together, because the exchange of gravitons (converging inwards from great distances) with the larger mass of the universe is stronger than the exchange with the relatively small mass of the Earth, a star or a galaxy. This seems to be the mechanism of gravity.
So the Standard Model U(1) × SU(2) × SU(3) could be replaced by SU(2) × SU(3) for all known interactions, plus a replacement field for a Higgs-type theory of mass. This would not affect any existing confirmed predictions of the Standard Model, since it would preserve the checked predictions of electrodynamics, weak interactions and strong interactions. But it would add an enormous number of further falsifiable predictions, while simplifying the theory at the same time. Since the Higgs-type field is composed of only one kind of charge (mass), it may well be described by a simple Abelian U(1) theory, in which case the total theory is again the mathematical group combination U(1) × SU(2) × SU(3), but this now has an entirely different physical dynamics within the same mathematical structure as the Standard Model, because in this U(1) × SU(2) × SU(3) (unlike the mathematically similar Standard Model):
U(1) now represents gravitational charge (mass-energy) and spin-1 ‘Higgs bosons’,
SU(2) now represents weak, electromagnetic and gravitational interactions (massless spin-1 gauge bosons at low energy producing electromagnetism and gravity; massive versions of the same gauge bosons being the usual low-energy weak interaction gauge bosons), and
SU(3) still represents the usual strong interactions.
To put this another way: modern physics has been developed by making mathematical guesses and checking them, but this approach seems to have reached the end of the road, because sophisticated guesses become ever harder to check. I think that if progress is to continue, a change in tactics is required. If falsifiable predictions are required, physics needs to start off well connected to reality. If a theory involves plenty of direct empirical input, it’s likely to produce a lot of checkable predictions as output. If it has little direct empirical input, then it’s less likely to produce checkable predictions. I think this is the major problem with string theory. It’s vague because the ratio of factual input to speculative beliefs (extra dimensions, supersymmetric unification, graviton spin) is low.
Gauge symmetry: whenever the motion, charge, angular momentum of spin, or some other symmetry is altered, it is a consequence of the conservation laws for momentum and energy that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl, giving the concept of gauge symmetry. Fundamental interactions are modelled by Feynman diagrams of scattering between gauge bosons or ‘virtual radiation’ (virtual photons, gravitons, etc.) and charges (electric charge, mass, etc.). The existing mainstream model is abstract, so it doesn’t represent the gauge bosons as taking any time to travel between charges (massless radiations travel at light velocity), and there are various other errors. E.g., two extra polarizations have to be added to the photon in the mainstream model of quantum electrodynamics, to make it produce attractive forces between dissimilar charges. This is an ad hoc modification, similar to changing the spin of the graviton to spin-2 to ensure universal attraction between similar gravitational charges (mass/energy). If you look at the physics more carefully, you find that the spin of the graviton is actually spin-1 and gravity is a repulsive effect: we’re exchanging gravitons (as repulsive scatter-type interactions) more forcefully with the immense masses of receding galaxies above us than we are with the graviton scatter cross-sections of the particles in the Earth below us, so the net effect is a downward pushing force. What’s impressive about this (unlike the spin-2 graviton rubbish endorsed by Woit) is that it makes checkable predictions, including the strength (coupling G) of gravity and many other things (see the calculations below), unlike string ‘theory’, which is a spin-2 graviton framework that leads to 10^{500} different predictions (so vague it could be made to fit anything that nature turns out to be, but makes no falsifiable predictions, i.e. junk science).
When you look at electromagnetism more objectively, the virtual photons carry an additional polarization in the form of a net electric charge (positive or negative). This again leads to checkable predictions for the strength of electromagnetism and other things. The most important single prediction, however, was the acceleration of the universe, due to the long-distance repulsion between large masses in the universe mediated by spin-1 gravitons. This was published in 1996 and confirmed by observations in 1998, published in Nature by Saul Perlmutter et al., but it is still censored out by charlatans such as string ‘theorists’ (quotes are around that word because it is no genuine scientific theory, just a landscape of 10^{500} different speculations). Drs Woit and Smolin, who have written books opposing the hype and overfunding of string theory lies, are too busy with other speculations to acknowledge these facts.
Typical string theory deception: ‘String theory has the remarkable property of predicting gravity.’ (E. Witten, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996.)
Actually, what he means but can’t be honest enough to say is that string theory in 10/11 dimensions is compatible with a false spin-2 graviton speculation. Let’s examine the facts:
Above: Spin-1 gravitons causing apparent “attraction” by repulsion, the “attraction” being due to similar charges being pushed together by repulsion from the massive amount of similar-sign gravitational charge in the distant surrounding universe.
Nearby gravitational charges don’t exchange gravitons forcefully enough to compensate for the stronger exchange with gravitons converging in from immense masses (clusters of galaxies at great distances, all over the sky), due to the physics discussed below, so their graviton interaction cross-section effectively shields them on facing sides. Thus, they get pushed together. This is what we see as gravity.
By wrongly ignoring the rest of the mass in the universe and focussing on just a few masses (right hand side of diagram), Pauli and Fierz in the 1930s falsely deduced that for similar signs of gravitational charge, spin-1 gauge bosons can’t work, because they would cause gravitational charges to repel! (All gravitational charge so far observed falls the same way, downwards, so all known mass/energy has a similar gravitational charge sign, here arbitrarily represented by “−” symbols, just to make an analogy to negative electric charge so that the physics is easily understood.) So they changed the graviton spin to spin-2, to “fix it”.
This mechanism proves that a spin-2 graviton is wrong; instead, the spin-1 graviton does the job of both ‘dark energy’ (the outward acceleration of the universe, due to repulsion of similar-sign gravitational charges over long distances) and gravitational ‘attraction’ between relatively small, relatively nearby masses, which get repelled more towards one another by the distant masses in the universe than they repel one another apart!
Above: Spin-1 gauge bosons for fundamental interactions. In each case the surrounding universe interacts with the charges, a vital factor ignored in existing mainstream models of quantum gravity and electrodynamics.
The massive versions of the SU(2) Yang-Mills gauge bosons are the weak field quanta, which only interact with left-handed particles. One half (corresponding to exactly one handedness for weak interactions) of the SU(2) gauge bosons acquire mass at low energy; the other half are the gauge bosons of electromagnetism and gravity. (This diagram is extracted from the more detailed discussion and calculations made below, which are vital for explaining how massless electrically charged bosons can propagate as exchange radiation while they can’t propagate – due to infinite magnetic self-inductance – on a one-way path. The exchange of electrically charged massless bosons in two directions at once along each path – which is what exchange amounts to – means that the curl of the magnetic field due to the charge of each oppositely-directed component of the exchange cancels out the curl of the other. This means that the magnetic self-inductance is effectively zero for massless charged radiation being endlessly exchanged from charge A to charge B and back again, even though it is infinite, and thus prohibited, for a one-way path such as from charge A to charge B without a simultaneous return current of charged massless bosons. This was suggested by the fact that something analogous occurs in another area of electromagnetism.)
Masses are receding due to being knocked apart by gravitons, which cause repulsion between masses as already explained (pushing distant galaxies apart and also pushing nearby masses together). The inward force, presumably mediated by spin-1 gravitons, from a receding mass m at distance r, receding with the Hubble velocity law v = Hr, is:
F = ma
= m*dv/dt
= m*d(Hr)/dt
= (mH*dr/dt) + (mr*dH/dt)
= mHv + 0
= mrH^{2}.
This is because a mass accelerating away from us has an outward force by Newton’s 2nd law, and an equal and opposite (inward) reaction force under Newton’s 3rd law. If its mass m is small and/or if its distance r is small, then the inward force of gravitons (being exchanged), which is directed towards you from that small nearby mass, is trivial because of the linear dependence of force on m and r in the equation F = mrH^{2}. But there will be very large masses beyond that nearby mass (distant receding galaxies) sending in a large inward force due to their large distance and mass. These spin-1 gravitons effectively interact with the mass by scattering back off the graviton scatter cross-section for that mass. So small nearby masses get pressed together, because a nearby, non-receding particle with mass will cause an asymmetry in the graviton field being received from more distant masses in that direction, and you’ll be pushed towards it. This gives an inverse-square law force, and it uniquely also gives an accurate prediction for the gravitational parameter G, as proved later in this post.
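The scaling in F = mrH^{2} can be illustrated with a quick numerical sketch (Python, with an assumed Hubble constant of about 70 km/s/Mpc; the cluster mass and distance are hypothetical round figures, chosen only to show the contrast between a nearby small mass and a distant large one):

```python
# Assumed Hubble constant: ~70 km/s/Mpc converted to SI units (1/s).
H = 70e3 / 3.086e22  # about 2.27e-18 per second

def outward_force(m, r):
    """Outward force F = m*r*H^2 of a mass m (kg) receding at apparent distance r (m)."""
    return m * r * H**2

# A small nearby (in cosmological terms) mass: the Moon.
f_moon = outward_force(7.35e22, 3.84e8)

# A hypothetical distant galaxy cluster: 1e45 kg at 1e25 m (illustrative figures only).
f_cluster = outward_force(1e45, 1e25)

print(f"Moon:    {f_moon:.3e} N")
print(f"Cluster: {f_cluster:.3e} N")
```

The force from the nearby mass is a tiny fraction of a newton, while the distant-cluster figure is dozens of orders of magnitude larger, which is the asymmetry the argument relies on.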
When you push two things together using field quanta, such as those from the electrons on the surface of your hands, the resulting motions can be modelled as an attractive effect, but the effect is clearly caused by the electrons in your hands repelling those on the surface of the other object. We’re being pushed down by the gravitational repulsion of immense distant masses distributed around us in the universe, which causes not just the cosmological acceleration over large distances, but also gravity between relatively small, relatively nearby masses, by pushing them together. (In 1996 the spin-1 quantum gravity proof given below in this post was inspired by an account of the ‘implosion’ principle, used in all nuclear weapons now, whereby the inward-directed half of the force of an explosion of a TNT shell surrounding a metal sphere compresses the metal sphere, making the nuclei in that sphere approach one another as though there were some contraction.)
Notice that the fact that we are surrounded by a lot of similar gravitational charge (mass/energy) at large distances will automatically cause the accelerative expansion of the universe (predicted accurately by this gauge theory mechanism in 1996, well before Perlmutter’s discovery confirming the predicted acceleration/’dark energy’), as well as causing gravity, and it uses spin-1. It doesn’t require the epicycle of changing the graviton spin to spin-2. Similar gravitational charges repel, but because there is so much similar gravitational charge at long distances from us, with the gravitons converging inwards as they are exchanged with an apple and the Earth, the immense long-range gravitational charges of receding galaxies and galaxy clusters push two small nearby masses together harder than those masses repel one another apart! This is why they appear to attract.
This spin-2 deduction is an error, for the reason (left of diagram above) that spin-1 only appears to fail when you ignore the bulk of the similar-sign gravitational charge in the universe surrounding you. If you stupidly ignore that surrounding mass of the universe, which is immense, then the simplest workable theory of quantum gravity necessitates spin-2 gravitons.
The best presentation of the mainstream long-range force model (which uses massless spin-2 gauge bosons for gravity and massless spin-1 gauge bosons for electromagnetism) is probably chapter I.5, Coulomb and Newton: Repulsion and Attraction, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6. Zee uses an approximation due to Sidney Coleman, whereby you have to work through the theory assuming that the photon has a real mass m, to make the theory work, but at the end you set m = 0. (If you assume from the beginning that m = 0, the simple calculations don’t work, so you then need to work with gauge invariance.)
Zee starts with a Lagrangian for Maxwell’s equations, adds terms for the assumed mass of the photon, then writes down the Feynman path integral, ∫DA e^{iS(A)}, where S(A) is the action, S(A) = ∫d^{4}x L, and L is the Lagrangian based on Maxwell’s equations for the spin-1 photon (plus, as mentioned, terms for the photon having mass, to keep it relatively simple and avoid invoking gauge invariance). Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson-mediated electromagnetic force between similar charges is always repulsive. So it works.
A massless spin-1 boson has only two degrees of freedom for spinning, because in one dimension it is propagating at velocity c and is thus ‘frozen’ in that direction of propagation. Hence, a massless spin-1 boson has two polarizations (electric field and magnetic field). A massive spin-1 boson, however, can spin in three dimensions and so has three polarizations.
Moving to quantum gravity, a massive spin-2 graviton (again using Coleman’s trick of assuming a small mass) will have 2s + 1 = 5 polarizations. Writing down a 5-component tensor to represent the gravitational Lagrangian, the same treatment for a spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative, hence masses always attract each other.
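The polarization counting used in these two paragraphs (two transverse polarizations for a massless boson, 2s + 1 spin states for a massive spin-s boson in the Coleman-style massive treatment) can be written as a trivial check:

```python
def polarizations(spin, massless):
    """Polarization count for a boson of integer spin `spin`.

    A massless boson has 2 transverse polarizations (helicity +s and -s);
    a massive spin-s boson has the full 2s + 1 spin states.
    """
    return 2 if massless else 2 * spin + 1

assert polarizations(1, massless=True) == 2   # massless spin-1 (photon)
assert polarizations(1, massless=False) == 3  # massive spin-1
assert polarizations(2, massless=False) == 5  # massive spin-2 (regulated graviton)
```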
This has now hardened into a religious dogma or orthodoxy which is used to censor the facts of the falsifiable, predictive spin-1 graviton mechanism as being “weird”. Even Peter Woit and Lee Smolin, who recognise that string theory’s framework for spin-2 gravitons isn’t experimentally confirmed physics, still believe that spin-2 gravitons are right!
Actually, the amount of spin-1 gravitational repulsion force between two small nearby masses is negligible, and it takes the immense masses in the receding surrounding universe (galaxies, clusters of galaxies, etc., surrounding us in all directions) to produce what we see as gravity. The fact that gravitational charge does not come in equal and opposite forms which cancel out is the reason why we have to include the gravitational charges of the surrounding universe in quantum gravity, while in electromagnetism it is conventional orthodoxy to ignore surrounding electric charges, which come in opposite types that appear to cancel one another out. There is definitely no such cancellation of gravitational charges from surrounding masses in the universe, because only one kind of gravitational charge has been observed! So we have to accept a spin-1 graviton, not a spin-2 graviton, as the simplest theory (see the calculations below, which prove that it predicts the observed strength of gravitation!), and spin-1 gravitons lead somewhere: the spin-1 graviton neatly fits gravity into a modified, simplified form of the Standard Model of particle physics!
‘If no superpartners at all are found at the LHC, and thus supersymmetry can’t explain the hierarchy problem, by the Arkani-Hamed/Dimopoulos logic this is strong evidence for the anthropic string theory landscape. Putting this together with Lykken’s argument, the LHC is guaranteed to provide evidence for string theory no matter what, since it will either see or not see weak-scale supersymmetry.’ – Not Even Wrong blog post, ‘Awaiting a Messenger From the Multiverse’, July 17th, 2008.
It’s kinda nice that Dr Woit has finally come around to grasping the scale of the terrible, demented string theory delusion in the mainstream, and can see that nothing he writes affects the victory to be declared for string theory, regardless of what data is obtained in forthcoming experiments! His position, and that of Lee Smolin and other critics, is akin to that of the dissidents of the Soviet Union, traitors like Leon Trotsky and nuisances like Andrei Sakharov. They can maybe produce minor embarrassment and irritation for the Empire, but that’s about all. The general opinion of string theorists about his writings is that it’s inevitable that someone should complain. The public will go on ignoring the real quantum gravity facts, and so indeed will Woit and Smolin. He writes:
‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length[...] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.
‘This whole situation is reminiscent of what happened in particle theory during the 1960’s, when quantum field theory was largely abandoned in favor of what was a precursor of string theory. The discovery of asymptotic freedom in 1973 brought an end to that version of the string enterprise and it seems likely that history will repeat itself when sooner or later some way will be found to understand the gravitational degrees of freedom within quantum field theory.
‘While the difficulties one runs into in trying to quantize gravity in the standard way are well-known, there is certainly nothing like a no-go theorem indicating that it is impossible to find a quantum field theory that has a sensible short distance limit and whose effective action for the metric degrees of freedom is dominated by the Einstein action in the low energy limit. Since the advent of string theory, there has been relatively little work on this problem, partly because it is unclear what the use would be of a consistent quantum field theory of gravity that treats the gravitational degrees of freedom in a completely independent way from the standard model degrees of freedom. One motivation for the ideas discussed here is that they may show how to think of the standard model gauge symmetries and the geometry of spacetime within one geometrical framework.
‘Besides string theory, the other part of the standard orthodoxy of the last two decades has been the concept of a supersymmetric quantum field theory. Such theories have the huge virtue with respect to string theory of being relatively well-defined and capable of making some predictions. The problem is that their most characteristic predictions are in violent disagreement with experiment. Not a single experimentally observed particle shows any evidence of the existence of its “superpartner”.’
– P. Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135, p. 52.
But notice that Dr Woit was convinced in 2002 that a spin-2 graviton would explain gravity. More recently he has become less hostile to supersymmetric theories, for example by conjecturing that spin-2 supergravity without string theory may be what is needed:
‘To go out on a limb and make an absurdly bold guess about where this is all going, I’ll predict that sooner or later some variant (“twisted”?) version of N=8 supergravity will be found, which will provide a finite theory of quantum gravity, unified together with the standard model gauge theory. Stephen Hawking’s 1980 inaugural lecture will be seen to be not so far off the truth. The problems with trying to fit the standard model into N=8 supergravity are well known, and in any case conventional supersymmetric extensions of the standard model have not been very successful (and I’m guessing that the LHC will kill them off for good). So, some so-far-unknown variant will be needed. String theory will turn out to play a useful role in providing a dual picture of the theory, useful at strong coupling, but for most of what we still don’t understand about the SM, it is getting the weak coupling story right that matters, and for this quantum fields are the right objects. The dominance of the subject for more than 20 years by complicated and unsuccessful schemes to somehow extract the SM out of the extra 6 or 7 dimensions of critical string/M-theory will come to be seen as a hard-to-understand embarrassment, and the multiverse will revert to the philosophers.’
Evidence
As explained briefly above, there’s a fine back-of-the-envelope calculation – allegedly proving that a spin-2 graviton is needed for universal attraction – in the mainstream accounts, as exemplified by Zee’s online sample chapter from his Quantum Field Theory in a Nutshell book (section 5 of chapter 1). But when you examine that kind of proof closely, it just considers two masses exchanging gravitons with one another, which ignores two important aspects of reality:
1. there are far more than two masses in the universe which are always exchanging gravitons, and in fact the majority of the mass is in the surrounding universe; and
2. when you want a law for the physics of how gravitons are imparting force, you find that only receding masses forcefully exchange gravitons with you, not nearby masses. Take Hubble’s empirical recession law, v = HR, and differentiate: a = dv/dt = d(HR)/dt = (H*dR/dt) + (R*dH/dt) = Hv + 0 = H(HR) = RH^{2}. This predicts the acceleration of the universe that Perlmutter observed. It also gives receding matter an outward force by Newton’s second law, and gives a law for gravitons: Newton’s third law gives an equal inward-directed force which, by elimination of the possibilities known from the Standard Model and quantum gravity, must be mediated by gravitons. Nearby masses which aren’t receding have an outward acceleration of zero and so produce zero inward graviton force towards you over their graviton-interaction cross-sectional area. This produces an asymmetry, so you get pushed towards non-receding masses while being pushed away from highly redshifted masses. You can then do calculations to predict the strength of gravitation:
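As a rough numerical check of a = RH^{2}: at the greatest distances, where HR approaches c, the predicted acceleration reduces to cH. A minimal sketch, assuming H of roughly 70 km/s/Mpc (the exact value is observationally uncertain):

```python
H = 70e3 / 3.086e22  # assumed Hubble constant in 1/s (~70 km/s/Mpc)
c = 2.998e8          # speed of light, m/s

R = c / H            # distance at which the recession velocity HR reaches c
a = R * H**2         # a = d(HR)/dt = H*(dR/dt) = H*v = R*H^2

print(f"predicted cosmological acceleration: {a:.2e} m/s^2")  # equals c*H
```

This gives a value of order 10^{-10} m/s², the magnitude of the acceleration inferred from the supernova observations.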
Above: the Feynman diagrams for gravity due to the classical non-quantum theory (general relativity) and to spin-1 and spin-2 gravitons, with their respective issues, plus the geometry of the asymmetry produced by a mass in the graviton field of the universe. This geometry is vital for working out, by summing vector contributions (effectively a sum over interaction histories, representing a path integral formulation), the net sum of spin-1 graviton contributions. The labelled ’shield’ area is the cross-section for spin-1 graviton back-scatter from a fundamental particle such as an electron (or rather the effective cross-sectional area for graviton interactions, because the electron’s mass or gravitational charge, according to the Standard Model, comes from a Higgs-type bosonic quantum field surrounding the core of the electron; the Higgs field interacts with both the electron core and with gravitons, so it acts as a man-in-the-middle and mediates gravitational force from gravitons to the electron). For evidence that this effective cross-section for gauge boson back-scatter (exchange of gauge bosons between gravitational charges such as masses) is the event horizon cross-sectional area of a black hole with the mass of the electron or other fundamental particle being accelerated by gravity, see this post and its links; for evidence that the black hole event horizon cross-sectional area is also that for electromagnetic interactions, see the calculations summarised in this post. (There is other evidence as well, published in other posts. I’ll try to organize everything better when time permits.)
1. Outward force of receding matter (recession velocity v = HR where H is Hubble constant and R is apparent distance) is
F = ma
= m.dv/dt
= m.d(HR)/dt
= m[H.dR/dt + R.dH/dt]
= m[Hv + 0]
= mH(HR)
= mRH^{2}.
This is on the order of F = 7 × 10^{43} newtons, but there is a correction to be applied for the apparent increase in density as we look back to earlier times (great distances in spacetime), and for the relativistic mass increase of receding matter. But for simplicity, to see how the maths works, ignore the correction:
F = ma = [(4/3)πR^{3}ρ].[dv/dt] = [(4/3)πR^{3}ρ].[H^{2}R] = 4πR^{4}ρH^{2}/3, where ρ is the density of the universe.
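A back-of-the-envelope evaluation of this outward force, assuming H of about 70 km/s/Mpc, R = c/H, and (purely as a round illustrative figure, since the text says corrections are being ignored) a density equal to the critical density 3H^{2}/(8πG):

```python
import math

H = 70e3 / 3.086e22  # assumed Hubble constant, 1/s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R = c / H            # effective radius of the receding matter, m

# Illustrative density only: the critical density, roughly 9e-27 kg/m^3.
rho = 3 * H**2 / (8 * math.pi * G)

F = (4 / 3) * math.pi * R**4 * rho * H**2  # total outward force, newtons
print(f"F = {F:.1e} N")
```

With these assumed inputs the result lands in the 10^{43}–10^{44} N range, consistent with the order-of-magnitude figure quoted above.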
2. Inward force (which must be carried by gravitons or the spacetime fabric, according to the possibilities available in the empirically defensible Standard Model and quantum gravity frameworks) is equal to the outward force (action and reaction are equal and opposite – a simple empirical law usually called Newton’s third law). However, there is a redshift of gravitons approaching us from relativistically receding, extremely redshifted masses, which reduces the effective graviton energy when received. (This redshift effect offsets the infinity-approaching outward-force effects of relativistic mass increase and the increasing density of the earlier universe at ever greater distances.)
3. Gravity force,
F(gravity)
= (total inward-directed graviton force, F = ma = m.dv/dt = mRH^{2}).(fraction of the total force which is uncancelled, due to the asymmetry in the inward graviton force imposed by the lack of graviton force from the black hole event horizon cross-sectional area π(2GM/c^{2})^{2} of the non-receding nearby mass labelled ’shield’)
= (total inward-directed graviton force).(fraction of the total inward force which is uncancelled due to the asymmetry imposed by the shield, e.g. the greyed cone area)
= (total inward-directed graviton force).(area of the end of the cone, labelled x)/(total spherical surface area out to radius R = ct, where t is the age of the universe, t = 1/H instead of the old Friedmann-Robertson-Walker prediction of t = (2/3)/H for a critical density universe with zero cosmological constant, since there is no observable long-range gravitational deceleration of the expansion rate; at long ranges there is no curvature of spacetime because the acceleration of the universe offsets gravitation)
= (total inward-directed graviton force).(x)/(4πR^{2})
= (ma).(shield area).(R/r)^{2}/(4πR^{2})
= (ma).(π(2GM/c^{2})^{2}).(R/r)^{2}/(4πR^{2})
= (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law because at great distances/early times (where the density of the universe is highest) it is a good approximation to put HR = c, which gives R/c = 1/H, so:
F(gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
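The algebraic step from the previous formula to this one (substituting R = c/H) can be machine-checked; the numerical values below are arbitrary placeholders, since only the algebra is being tested:

```python
import math

# Placeholder values; only the identity of the two expressions is being checked.
H, c, G, M, rho, r = 2.27e-18, 2.998e8, 6.674e-11, 9.11e-31, 9.2e-27, 1.0
R = c / H  # the substitution HR = c

f_full = (4 / 3) * math.pi * R**4 * rho * H**2 * G**2 * M**2 / (c**4 * r**2)
f_simple = (4 / 3) * math.pi * rho * G**2 * M**2 / (H**2 * r**2)

assert math.isclose(f_full, f_simple, rel_tol=1e-12)
```

Since R^{4} = c^{4}/H^{4}, the c^{4} factors cancel and two powers of H survive in the denominator, which is exactly what the assertion confirms.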
Notice the inverse square law, 1/r^{2}. There are several consequences beyond the obvious ability to make uniquely, theoretically justifiable quantitative calculations of the strength of gravity, compared with Newton’s semi-empirical law of gravity (which was deduced from Kepler’s laws of planetary motion and Galileo’s law of terrestrial gravitational acceleration). For example, the force of gravity for quantum phenomena between fundamental particles is proportional not to M_{1}M_{2} but instead to M^{2}, suggesting quantization of masses. This is to be compared with a similar result in QED, where the electromagnetic inverse square force is proportional to the square of the unit electric charge, not to the product of two different charges (i.e., the quantization of fundamental charges).
It’s tempting for people to dismiss new calculations without checking them, just because they are inconsistent with previous calculations, such as those allegedly proving the need for spin-2 gravitons (maybe combined with the belief that “if the new idea is right, somebody else would have done it before”, which is of course a brilliant way to stop all new developments in all areas by everybody …).
The deflection of a photon by the sun is twice the amount predicted by the theory for a non-relativistic object (say a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons may presumably interact with energy via unobserved Higgs-field bosons or whatever, but that’s not unique to spin-1; it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity, unlike the case of a non-relativistic object, which speeds up as it enters stronger gravitational field regions. Energy conservation then forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).
In general relativity this is a result of the fact that the Ricci tensor isn’t directly proportional to the stress-energy tensor, because the divergence of the stress-energy tensor must be zero (conservation of mass-energy) while the divergence of the Ricci tensor isn’t automatically zero. So half the product of the metric tensor and the trace of the Ricci tensor must be subtracted from the Ricci tensor, giving a divergence-free combination. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which becomes clear when the law is expressed in tensors; general relativity corrects this error. If you avoid assuming Newton’s law and obtain the correct theory directly from quantum gravity, this energy conservation issue doesn’t arise.
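The factor of two itself is easy to evaluate numerically for the textbook case of light grazing the sun. The deflection angles 2GM/(c^{2}b) and 4GM/(c^{2}b) are the standard Newtonian-ballistic and general-relativistic results (not derived from the mechanism proposed in this post), with standard values for G, the solar mass and the solar radius:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
b = 6.96e8         # impact parameter: the solar radius, m

newtonian = 2 * G * M_sun / (c**2 * b)  # slow-bullet ballistic deflection, radians
gr = 4 * G * M_sun / (c**2 * b)         # general-relativistic deflection, radians

to_arcsec = 180 * 3600 / math.pi
print(f"Newtonian: {newtonian * to_arcsec:.2f} arcsec")
print(f"GR:        {gr * to_arcsec:.2f} arcsec")
```

The GR figure of about 1.75 arcseconds is the value confirmed by the 1919 eclipse measurements and their successors; the ballistic figure is exactly half of it.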
Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.
This is because the actual graviton exchange force causing repulsion in the space between 2 nearby masses is totally negligible (F = mrH^{2} with small m and r terms) compared to the gravitons pushing them together from surrounding masses at great distances (F = mrH^{2} with big receding mass m at big distance r).
Similarly, if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn’t stop the two things being pushed together as if there is ‘attraction’ occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!
String theory is widely hailed for being compatible with the spin-2 graviton, which is widely held to be true because for universal attraction between two similar charges (all masses and all energy fall the same way in a gravitational field, so they all have similar gravitational charge) you need a gauge boson of spin-2. This argument is popularized by Professor Zee in section 5 of chapter 1 of his textbook Quantum Field Theory in a Nutshell. It’s completely false, because we simply don’t live in a universe with only two gravitational charges: there are more than two particles in the universe. The path integral that Zee and others perform explicitly assumes that only two masses are involved in the gravitational interactions which cause gravity.
If you correct this error, the repulsion of similar charges causes gravity by pushing two nearby masses together, just as on large scales it pushes matter apart, causing the accelerating expansion of the universe. The formula F = mrH^2 can be obtained from solid facts based on observations of nature, e.g., by Newton’s second law if we find acceleration by differentiating the Hubble expansion velocity v = rH. [F = m*dv/dt = m*d(rH)/dt = mH*dr/dt + mr*dH/dt = mHv + 0 = mrH^2, taking H as constant so that dH/dt = 0. Notice here that Newton’s third law of motion then allows us to quantify the amount of inward force we get, since it is equal to the outward force. A mass which is receding from us with a given force gives rise to an equal graviton force directed towards us in order to satisfy the third law of motion. This force is totally negligible for small nearby masses, which therefore get pushed together by the repulsion which occurs over large distances where great masses (receding galaxies) are receding.]
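The bracketed derivation is easy to check symbolically. A sketch using sympy, treating H as a constant exactly as the text assumes:

```python
import sympy as sp

t = sp.Symbol('t')
H, m = sp.symbols('H m', positive=True)   # H treated as a constant here
r = sp.Function('r')(t)                   # radial distance of the receding mass

F = m * sp.diff(H * r, t)                 # F = m*d(rH)/dt
F = F.subs(sp.diff(r, t), H * r)          # apply the Hubble law v = dr/dt = Hr
print(sp.simplify(F))                     # H**2*m*r(t), i.e. F = mrH^2
```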
There was a sequence of comments on the Not Even Wrong blog post about Awaiting a Messenger From the Multiverse concerning the spin of the graviton (some of which have since been deleted for getting off-topic). Some of these comments have been retrieved from my browser history cache and are below. There was an anonymous comment by ‘somebody’ at 5:57 am on 20 July 2008 stating:
‘Perturbative string theory has something called conformal invariance on the worldsheet. The empirical evidence for this is gravity. The empirical basis for QFT are locality, unitarity and Lorentz invariance. Strings manage to find a way to tweak these, while NOT breaking them, so that we can have gravity as well. This is oft-repeated, but still extraordinary. The precise way in which we do the tweaking is what gives rise to the various kinds of matter fields, and this is where the arbitrariness that ultimately leads to things like the landscape comes in. … It can easily give rise to things like multiple generations, non-abelian gauge symmetry, chiral fermions, etc. some of which were considered thorny problems before. Again, constructing PRECISELY our matter content has been a difficult problem, but progress has been ongoing. … But the most important reason for liking string theory is that it shows the features of quantum gravity that we would hope to see, in EVERY single instance that the theory is under control. Black hole entropy, gravity is holographic, resolution of singularities, resolution of information paradox – all these things have seen more or less concrete realizations in string theory. Black holes are where real progress is, according to me, but the string phenomenologists might disagree. Notice that I haven’t said anything about gauge-gravity duality (AdS/CFT). That’s not because I don’t think it is important, … Because it is one of those cases where two vastly different mathematical structures in theoretical physics mysteriously give rise to the exact same physics. In some sense, it is a bit like saying that understanding quantum gravity is the same problem as understanding strongly coupled QCD. I am not sure how exciting that is for a non-string person, but it makes me wax lyrical about string theory. It relates black holes and gauge theories. …. 
You can find a bound for the viscosity to entropy ratio of condensed matter systems, by studying black holes – that’s the kind of thing that gets my juices flowing. Notice that none of these things involve far-out mathematical m***********, this is real physics – or if you want to say it that way, it is empirically based. … String theory is a large collection of promising ideas firmly rooted in the empirical physics we know which seems to unify theoretical physics …’
To which anon. responded:
‘No it’s not real physics because it’s not tied to empirical facts. It selects an arbitrary number of spatial extra dimensions in order to force the theory to give non-falsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed, but spin-2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models: it can’t incorporate, let alone predict, reality.’
Anon. should have added that the AdS/CFT correspondence is misleading. [AdS/CFT work, in which strong interactions are modelled by anti-de Sitter space with a negative (rather than positive) cosmological constant, is misleading. People should be modelling phenomena with accurate models, not returning physics to the days when guys argued that epicycles were a clever invention, and that modelling the solar system using a false model (planets and stars orbiting the Earth in circles within circles) was a brilliant state-of-the-art calculational method! (Once you start modelling phenomenon A using a false approximation from theory B, you’re asking for trouble, because you’re mixing up fact and fiction. E.g., if a prediction fails, you have a ready-made excuse to simply add further epicycles/fiddles to ‘make it work’.) See my comment at http://keamonad.blogspot.com/2008/07/ninjaprof.html]
somebody Says:
July 20th, 2008 at 10:42 am
Anon
The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards. I agree that it is a reasonable stand to brush these questions off as “speculation”. But IF you consider them worthy of your time, then string theory is a game you can play. THAT was my claim. I am sure you will agree that it is a bit unreasonable to expect a non-speculative solution to a problem that you consider already speculative.
Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory. So I would appreciate it if you read my posts before taking off on rants, stringing cliches, .. etc. It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gut-reactions.
anon. Says:
July 20th, 2008 at 11:02 am
‘The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards.’
Wrong again. I never said that. Quantizing gravity is not speculative by my standards, it’s a problem that can be addressed in other ways without all the speculation involved in the string framework. That’s harder to do than just claiming that string theory predicts gravity and then using lies to censor out those working on alternatives.
‘Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory.’
Wrong, because I never said that you did mention them. The reason why string theory is not empirical is precisely because it’s addressing these speculative ideas that aren’t facts.
‘It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gutreactions.’
If you want to defend string as being empirically based, you need to do that. You can’t do so, can you?
somebody Says:
July 20th, 2008 at 11:19 am
‘Quantizing gravity is not speculative by my standards,’
Even though the spin 2 field clearly is.
My apologies Peter, I truly couldn’t resist that.
anon. Says:
July 20th, 2008 at 11:53 am
The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. (To get universal attraction, such field quanta can be shown to require a spin of 2.) This speculation is contrary to the general principle that every body is a source of gravity. You never have gravitons exchanged merely between two masses in the universe. They will be exchanged between all the masses, and there is a lot of mass surrounding us at long distances.
There is no disproof which I’m aware of that the graviton has a spin of 1 and operates by pushing masses together. At least this theory doesn’t have to assume that there are only two gravitating masses in the universe which exchange gravitons!
somebody Says:
July 20th, 2008 at 12:20 pm
‘The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. This speculation is contrary to the general principle that every body is a source of gravity.’
How many gravitationally “repelling” bodies do you know?
Incidentally, even if there were two kinds of gravitational charges, AND the gravitational field was spin one, STILL there are ways to test it. Eg: I would think that the bending of light by the sun would be more suppressed if it was spin one than if it is spin two. You need two gauge invariant field strengths squared terms to form that coupling, one for each spin one field, and that might be suppressed by a bigger power of mass or something. Maybe I am wrong about the details (i haven’t thought it through), but certainly it is testable.
somebody Says:
July 20th, 2008 at 12:43 pm
One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.
anon. Says:
July 20th, 2008 at 6:51 pm
‘How many gravitationally “repelling” bodies do you know?’
This repulsion between masses is very well known. Galaxies are accelerating away from every other mass. It’s called the cosmic acceleration, discovered in 1998 by Perlmutter.
The acceleration of the universe is the derivative of the Hubble expansion velocity formula, v = dr/dt = Hr: a = dv/dt = d(Hr)/dt = H*dr/dt + r*dH/dt = Hv + 0 = rH^2, taking H as constant.
This is factbased (differentiating a Hubble velocity to find acceleration isn’t speculative), and agrees with the observation error bars on the acceleration. F=ma then gives outward force of accelerating matter, and the third law of motion gives us equal inward force. All simple stuff. This inward force is F=ma = mrH^2.
Since this force is apparently mediated by spin-1 gravitons, the gravitational force of repulsion from one relatively nearby small mass to another is effectively zero. Because of the terms m and r in F = mrH^2, the exchange of gravitons only produces a repulsive force over large distances from a large mass, such as a distant receding galaxy. This is why two relatively nearby masses (relative in the cosmological sense of many parsecs) don’t repel, but are pushed together, because they repel the very distant masses in the universe.
‘One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.’
As already explained, there is a mechanism for similar charges to ‘attract’ by repulsion if they are surrounded by a large amount of matter that is repelling them towards one another. If you push things together by a repulsive force, the result can be misinterpreted as attraction…
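As a rough numerical aside on the magnitudes in this exchange (assuming H ≈ 70 km/s/Mpc, a value not quoted in the thread itself): the acceleration a = rH^2, evaluated at the Hubble radius r = c/H, reduces to cH, which is of the right order of magnitude for the observed cosmic acceleration:

```python
# Order-of-magnitude check of a = rH^2 at the Hubble radius r = c/H.
c = 2.998e8                 # speed of light, m/s
Mpc = 3.0857e22             # one megaparsec in metres
H = 70e3 / Mpc              # assumed Hubble parameter, ~70 km/s/Mpc, in s^-1
r = c / H                   # Hubble radius, m
a = r * H**2                # reduces to c*H
print(a)                    # ~7e-10 m/s^2
```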
*******************************************************************
After this comment, ‘somebody’ (evidently a string supporter who couldn’t grasp the physics) then gave a list of issues he/she had with this comment. Anon. then responded to each:
anon. Says:
July 20th, 2008 at 6:51 pm
‘1. The idea of “spin” arises from looking for reps. of local Lorentz invariance. At the scales of the expanding universe, you don’t have local Lorentz invariance.’
There are going to be graviton exchanges, whether they are spin-1 or spin-2 or whatever, between distant receding masses in the expanding universe. So if this is a problem, it’s a problem for spin-2 gravitons just as it is for spin-1. I don’t think you have any grasp of physics at all.
‘2. … The expanding background is a solution of the underlying theory, whatever it is. The generic belief is that the theory respects Lorentz invariance, even though the solution breaks it. This could of course be wrong, …’
Masses are receding from one another. The assumption that they are being carried apart on a carpet of expanding spacetime fabric which breaks Lorentz invariance is just a classical GR solution speculation. It’s not needed if masses are receding due to being knocked apart by gravitons which cause repulsion between masses as already explained (pushing distant galaxies apart and also pushing nearby masses together).
‘3. … For spin 1 particles, this gives an inverse square law. In particular, I don’t see how you jumped from the empirical F=mrH^2 relation to the claim that the graviton is spin 1.’
The inward force, presumably mediated by spin-1 gravitons, from a receding mass m at distance r is F = mrH^2. If its mass is small or r is small, the inward force towards you is small. But there will be very large masses beyond that nearby mass (distant receding galaxies) sending in a large inward force due to their large distance and mass. These spin-1 gravitons will presumably interact with the mass by scattering back off the graviton scattering cross-section for that mass. So a nearby, non-receding particle with mass will cause an asymmetry in the graviton field being received from more distant masses in that direction, and you’ll be pushed towards it. This gives an inverse-square law force.
‘4. You still have not provided an explanation for how the solar system tests of general relativity can survive in your spin 1 theory. In particular the bending of light. Einstein’s theory works spectacularly well, and it is a local theory. We know the mass of the sun, and we know that it is not the cosmic repulsion that gives rise to the bending of light by the sun.’
The deflection of a photon by the sun is by twice the amount predicted for a non-relativistic object (say a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons may presumably interact with energy via unobserved Higgs field bosons or whatever, but that’s not unique to spin-1; it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity, unlike a non-relativistic object, which speeds up as it enters stronger gravitational field regions. So energy conservation forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).
In general relativity this is a result of the fact that the Ricci tensor isn’t directly proportional to the stress-energy tensor, because the divergence of the stress-energy tensor isn’t zero (which it should be for conservation of mass-energy). So from the Ricci tensor, half the product of the metric tensor and the trace of the Ricci tensor must be subtracted. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which is clear when it’s expressed in tensors. GR corrects this error. If you avoid assuming Newton’s law and obtain the correct theory direct from quantum gravity, this energy conservation issue doesn’t arise.
‘5. The problems raised by the fact that LOCALLY all objects attract each other is still enough to show that the mediator cannot be spin 1.’
I thought I’d made that clear:
Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.
This is because the actual graviton exchange force causing repulsion in the space between 2 nearby masses is totally negligible (F = mrH^2 with small m and r terms) compared to the gravitons pushing them together from surrounding masses at great distances (F = mrH^2 with big receding mass m at big distance r).
Similarly, if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn’t stop the two things being pushed together as if there is ‘attraction’ occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!
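Whatever one makes of the mechanism argued for in the comment above, the inverse-square scaling in the shadowing picture is plain solid-angle geometry: an opaque disc of radius a at distance r intercepts a fraction of an isotropic flux proportional to 1/r^2 when r is much larger than a. A toy check (the function and the numbers are illustrative only, not from the thread):

```python
import math

def blocked_fraction(a, r):
    """Fraction of an isotropic flux arriving at a point that is blocked by
    an opaque disc of radius a at distance r (spherical-cap solid angle)."""
    alpha = math.atan(a / r)              # half-angle subtended by the disc
    return (1.0 - math.cos(alpha)) / 2.0  # cap solid angle / full sphere

f1 = blocked_fraction(1.0, 100.0)
f2 = blocked_fraction(1.0, 200.0)
print(f1 / f2)   # ~4: doubling the distance quarters the shadow
```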
somebody Says:
July 21st, 2008 at 3:35 am
Now that you have degenerated to weird theories and personal attacks, I will make just one comment about a place where you misinterpret the science I wrote and leave the rest alone. I wrote that the expanding universe cannot be used to argue that the graviton is spin 1. You took that to mean “… if this is a problem it’s a problem for spin-2 gravitons just as it is for spin-1.”
The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin. Spin arises from local Lorentz invariance.
anon. Says:
July 21st, 2008 at 4:21 am
‘The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin.’
Spin-1 causes repulsion. The universe’s expansion is accelerating. I’ve never claimed that particle spin is caused by the expansion of the universe. I’ve stated that repulsion is evident in the acceleration of the universe.
If you want to effectively complain about degeneration into weird theories and personal attacks, try looking at string theory more objectively: 10^500 universes, 10 dimensions, spin-2 gravitons, etc. (plus the personal attacks of string theorists on those working on alternative ideas).
***********************************
However, I’m getting way off the topic, which was that calculations are vital in physics, because they are something that can be checked for consistency with nature. In string theory, no experimental test is so far possible, so all of the checks done are really concerned with internal consistency, and with consistency with speculations of one kind or another. String theorist Professor Michio Kaku summarises the spiritual enthusiasm and hopeful religious basis for the string theory belief system as follows in an interview with the ‘Spirituality’ section of The Times of India, 16 July 2008, quoted in a comment by someone on the Not Even Wrong weblog (notice that Michio honestly mentions ‘… when we get to know … string theory…’, which is an admission that it’s not known, because of the landscape problem of 10^500 alternative versions with different quantitative predictions; at present it’s not one scientific theory but rather 10^500 of them):
‘… String theory can be applied to the domain where relativity fails, such as the centre of a black hole, or the instant of the Big Bang. … The melodies on these strings can explain chemistry. The universe is a symphony of strings. The “mind of God” that Einstein wrote about can be explained as cosmic music resonating through hyperspace. … String theory predicts that the next set of vibrations of the string should be invisible, which can naturally explain dark matter. … when we get to know the “DNA” of the universe, i.e. string theory, then new physical applications will be discovered. Physics will follow biology, moving from the age of discovery to the age of mastery.’
As with the 200+ mechanical aether theories of force fields existing in the 19th century (this statistic comes from Eddington’s 1920 book Space, Time and Gravitation), string theory at best is just a model for unobservables. Worse, it comes in 10^500 quantitatively different versions, far more than the 200 or so aethers of the 19th century. The problem with theorising about the physics at the instant of the big bang and the physics in the middle of a black hole is that you can’t actually test it. Similar problems exist when explaining dark matter, because your theory contains invisible particles whose masses you can’t predict beyond saying they’re beyond existing observations (religions similarly have normally invisible angels and devils, so you could equally use religions to ‘explain’ dark matter; it’s not a quantitative prediction in string theory, so it’s not really a scientific explanation, just a belief system). Unification at the Planck scale and spin-2 gravitons are both speculative errors.
Once you remove all these errors from string theory, you are left with something that is no more impressive than aether: it claims to be a model of reality and to explain everything, but you don’t get any real use from it for predicting experimental results, because there are so many versions that it’s just too vague to be a science. It doesn’t connect well with anything in the real world at all. The idea that at least it tells us what particle cores are physically (vibrating loops of extra-dimensional ‘string’) doesn’t really strike me as being science. People decide which version of aether to use by artistic criteria like beauty, or by fitting the theory to observations and arguing that if the aether were different from this or that version we wouldn’t exist to observe its consequences (the anthropic selection principle), instead of using factual scientific criteria: there are no factual successes of aether to evaluate. So it falls into the category of a speculative belief system, not a piece of science.
By Mach’s principle of economy, speculative belief systems are best left out of science until they can be turned into observables, useful predictions, or something that is checkable. Science is not divine revelation about the structure of matter and the universe; instead it’s about experiments and related fact-based theorizing which predicts things that can be checked.
**************************************************
Update: If you look at what Dr Peter Woit has done in deleting comments, he’s retained the one from anon which states:
‘[string is] not real physics because it’s not tied to empirical facts. It selects an arbitrary number of spatial extra dimensions in order to force the theory to give non-falsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed, but spin-2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models: it can’t incorporate, let alone predict, reality.’
Although he has kept that, Dr Woit deleted the further discussion comments about the spin-1 versus spin-2 graviton physics as being off-topic. Recently he argued that supergravity (a spin-2 graviton theory) in low dimensions was a good idea (see the post about this by Dr Tommaso Dorigo), so he is definitely biased in favour of the graviton having a spin of 2, despite that being not merely ‘not even wrong’ but plain wrong, for the reasons given above. If we look at Dr Woit’s post ‘On Crackpotism and Other Things’, we find Dr Woit stating on January 3rd, 2005 at 12:25 pm:
‘I had no intention of promulgating a general theory of crackpotism, my comments were purely restricted to particle theory. Crackpotism in cosmology is a whole other subject, one I have no intention of entering into.’
If that statement by Dr Woit still stands, then facts from cosmology about the accelerating expansion of the universe presumably won’t be of any interest to him, in any particle physics context such as graviton spin. In that same ‘On Crackpotism and Other Things’ comment thread, Doug made a comment at January 4th, 2005 at 5:51 pm stating:
‘… it’s usually the investigators labeled “crackpots” who are motivated, for some reason or another, to go back to the basics to find what it is that has been ignored. Usually, this is so because only “crackpots” can afford to challenge long held beliefs. Non-crackpots, even tenured ones, must protect their careers, pensions and reputations and, thus, are not likely to go down into the basement and rummage through the old, dusty trunks of history, searching for clues as to what went wrong. …
‘Instead, they keep on trying to build on the existing foundations, because they trust and believe that …
‘In other words, it could be that it is an interpretation of physical concepts that works mathematically, but is physically wrong. We see this all the time in other cases, and we even acknowledge it in the gravitational area where, in the low limit, we interpret the physical behavior of mass in terms of a physical force formulated by Newton. When we need the accuracy of GR, however, Newton’s physical interpretation of force between masses changes to Einstein’s interpretation of geometry that results from the interaction between mass and spacetime.’
Doug commented on that ‘On Crackpotism and Other Things’ post at January 1st, 2005 at 1:04 pm:
‘I’ve mentioned before that Hawking characterizes the standard model as “ugly and ad hoc,” and if it were not for the fact that he sits in Newton’s chair, and enjoys enormous prestige in the world of theoretical physics, he would certainly be labeled as a “crackpot.” Peter’s use of the standard model as the criteria for filtering out the serious investigator from the crackpot in the particle physics field is the natural reaction of those whose career and skills are centered on it. The derisive nature of the term is a measure of disdain for distractions, especially annoying, repetitious, and incoherent ones.
‘However, it’s all too easy to yield to the temptation to use the label as a defense against any dissent, regardless of the merits of the case of the dissenter, which then tends to convert one’s position to dogma, which, ironically, is a characteristic of “crackpotism.” However, once the inevitable flood of anomalies begins to mount against existing theory, no one engaged in “normal” science, can realistically evaluate all the inventive theories that pop up in response. So, the division into camps of innovative “liberals” vs. dogmatic “conservatives” is inevitable, and the use of the exclusionary term “crackpot” is just the “defender of the faith” using the natural advantage of his position on the high ground.
‘Obviously, then, this constant struggle, especially in these days of electronically enhanced communications, has nothing to do with science. If those in either camp have something useful in the way of new insight or problemsolving approaches, they should take their ideas to those who are anxious to entertain them: students and experimenters. The students are anxious because the defenders of multiple points of view helps them to learn, and the experimenters are anxious because they have problems to solve.
‘The established community of theorists, on the other hand, are the last whom the innovators ought to seek to convince because they have no reason to be receptive to innovation that threatens their domains, and clearly every reason not to be. So, if you have a theory that suggests an experiment that Adam Reiss can reasonably use to test the nature of dark energy, by all means write to him. Indeed, he has publically invited all that might have an idea for an experiment. But don’t send your idea to [cosmology professor] Sean Carroll because he is not going to be receptive, even though he too publically acknowledged that “we need all the help we can get” (see the Science Friday archives).’
Gauge symmetry: whenever the motion of a charge, the angular momentum of a spin, or some other symmetry is altered, it is a consequence of conservation laws in physics that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl. (Illustration credit: http://hyperphysics.phy-astr.gsu.edu/hbase/particles/expar.html. Unfortunately the arrow they show for the antineutrino in this diagram points the wrong way: the antineutrino is emitted in beta decay.)
Noether’s theorem (discovered in 1915) connects the symmetry of the action of a system (the integral over time of the Lagrangian equation for the energy of a system) with conservation laws. In quantum field theory, the Ward-Takahashi identity expresses Noether’s theorem in terms of the Maxwell current (a moving charge, such as an electron, can be represented as an electric current, since that is the flow of charge). Any modification to the symmetry of the current involves the use of energy, which (due to conservation of energy) must be represented by the emission or reception of photons, i.e. field quanta. (For an excellent introduction to the simple mathematics of the Lagrangian in quantum field theory and its role in symmetry modification for Noether’s theorem, see chapter 3 of Ryder’s Quantum Field Theory, 2nd ed., Cambridge University Press, 1996.)
So, when the symmetry of a system such as a moving electron is modified, such a change of the phase of an electron’s electromagnetic field (which, together with mass, is the only feature of the electron that we can directly observe) is accompanied by a photon interaction, and vice versa. This is the basic gauge principle relating symmetry transformations to energy conservation. E.g., the modification to the symmetry of the electromagnetic field when electrons accelerate away from one another implies that they emit (exchange) virtual photons.
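The mechanical prototype of Noether’s theorem can be sketched in a few lines: a free-particle Lagrangian is unchanged by the translation x → x + c, and the corresponding conserved quantity is the momentum. A minimal sympy sketch (ordinary mechanics, not field theory, so the gauge-field machinery is left out):

```python
import sympy as sp

m, x, v = sp.symbols('m x v', positive=True)  # mass, position, velocity

L = m * v**2 / 2       # free particle: L has no x-dependence (translation symmetry)
p = sp.diff(L, v)      # conjugate momentum, the Noether charge of x -> x + c
force = sp.diff(L, x)  # Euler-Lagrange: dp/dt = dL/dx
print(p, force)        # m*v 0, so momentum is conserved
```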
All fundamental physics is of this sort: the electromagnetic, weak and strong interactions are all examples of gauge theories, in which symmetry transformations are accompanied by the exchange of field quanta. Noether’s theorem is pretty simple to grasp: if you modify the symmetry of something, the act of making that modification involves the use or release of energy, because energy is conserved. When the electron’s field undergoes a local phase change to its symmetry, a gauge field quantum called a ‘virtual photon’ is exchanged. However, it is not just energy conservation that comes into play in symmetry: conservation of charge and angular momentum are involved in more complicated interactions. In the Standard Model of particle physics, there are three basic gauge symmetries, implying different types of field quanta (or gauge bosons), which are the radiation exchanged when the symmetries are modified in interactions:
1. Electric charge symmetry rotation. This describes the electromagnetic interaction. It is supposedly the simplest gauge theory: the Abelian U(1) electromagnetic symmetry group, invoking just one type of charge and one gauge boson. To get negative charge, a positive charge is represented as travelling backwards in time, and vice-versa. The gauge boson of U(1) is mixed with the neutral gauge boson of SU(2), in the proportion specified by the empirically based Weinberg mixing angle, producing the photon and the neutral weak gauge boson. U(1) represents not just electromagnetic interactions but also weak hypercharge.
The U(1) maths is based on a type of continuous group defined by Sophus Lie in 1873. Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together. It was the representation theory of these groups that Weyl was studying.
‘A simple example of a Lie group together with a representation is that of the group of rotations of the twodimensional plane. Given a twodimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point. This is a symmetry of the plane. The thing that is invariant is the distance between a point on the plane and the central point. This is the same before and after the rotation. One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point. There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.
Argand diagram showing rotation by an angle on the complex plane. Illustration credit: based on Fig. 3.1 in Not Even Wrong.
‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one. If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers). As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).
‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions]. Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave. This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees. Because of this analogy, U(1) symmetry transformations are often called phase transformations. …
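Woit’s identification of plane rotations with multiplication by unit-length complex numbers is easy to check numerically. A minimal sketch (the `rotate` helper is mine, not from the book):

```python
import cmath

def rotate(point: complex, angle: float) -> complex:
    """The U(1) action: multiply by a unit-length complex number."""
    return point * cmath.exp(1j * angle)

z = 3 + 4j                       # a point at distance 5 from the centre
w = rotate(z, cmath.pi / 2)      # quarter turn

# The invariant is the distance to the central point:
print(abs(z), abs(w))            # both 5.0 (up to rounding)
# Increasing the angle through 2*pi returns to the start, like a phase:
print(rotate(z, 2 * cmath.pi))   # approximately 3+4j
```
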
‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N). It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest. The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denoted SU(N). Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.
‘In the case N = 1, SU(1) is just the trivial group with one element. The first nontrivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthogonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
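The ‘doubled version’ remark can be illustrated with the standard 2x2 matrix representation (a textbook construction, not taken from the quoted passage): an SU(2) rotation about the z-axis by angle theta is exp(-i*theta*sigma_z/2), so a 360-degree rotation gives -I rather than +I, and only a 720-degree rotation returns to the identity.

```python
import numpy as np

def su2_rotation(theta: float) -> np.ndarray:
    """SU(2) rotation about the z-axis: exp(-i*theta*sigma_z/2).
    sigma_z is diagonal with entries (+1, -1), so the matrix
    exponential acts entrywise on the diagonal."""
    return np.diag(np.exp(-0.5j * theta * np.array([1.0, -1.0])))

# A 360-degree rotation in SO(3) corresponds to -I in SU(2):
print(np.round(su2_rotation(2 * np.pi)).real)   # [[-1, 0], [0, -1]]
# Only a 720-degree rotation comes back to the identity:
print(np.round(su2_rotation(4 * np.pi)).real)   # [[1, 0], [0, 1]]
```
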
2. Isospin symmetry rotation. This describes the weak interaction of quarks, controlling the transformation of quarks within one family. E.g., in beta decay a neutron decays into a proton by the transformation of a down-quark into an up-quark, and this transformation involves the emission of an electron (conservation of charge) and an antineutrino (conservation of energy and angular momentum). Neutrinos were a falsifiable prediction made by Pauli on 4 December 1930, in a letter to radiation physicists in Tuebingen, based on the spectrum of beta particle energies in radioactive decay (‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding … the continuous beta-spectrum … I admit that my way out may seem rather improbable a priori … Nevertheless, if you don’t play you can’t win … Therefore, Dear Radioactives, test and judge.’ – Pauli’s letter, quoted in a footnote on page 12 of http://arxiv.org/abs/hep-ph/0204104). The total amount of energy released in beta decay could be determined from the difference in mass between the beta radioactive material and its decay product, the daughter material. The amount of energy carried by the readily detectable, ionizing beta particles could be measured. However, the beta particles were emitted with a continuous spectrum of energies up to a maximum upper limit (unlike the line spectra of gamma ray energies): it turned out that the total energy lost per beta decay was equal to the upper limit of the beta energy spectrum, which is three times the mean beta particle energy! Hence, on average, only one-third of the energy released in beta decay was accounted for by the emitted beta particle.
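The energy bookkeeping in that argument can be sketched in a few lines. The endpoint value of 0.782 MeV for free-neutron decay is a standard figure I am supplying (it is not quoted above), and the one-third rule is the approximation used in the text:

```python
def missing_energy(endpoint_mev: float, mean_fraction: float = 1 / 3):
    """Beta-decay energy balance: the spectrum endpoint equals the
    total decay energy, but the mean beta energy is only about a
    third of it (the figure quoted above); the remainder is carried
    off by the antineutrino."""
    mean_beta = mean_fraction * endpoint_mev
    return mean_beta, endpoint_mev - mean_beta

# Free-neutron decay endpoint, about 0.782 MeV (standard value,
# supplied here as an assumption):
beta, neutrino = missing_energy(0.782)
print(f"mean beta energy     ~ {beta:.3f} MeV")      # ~0.261
print(f"mean neutrino energy ~ {neutrino:.3f} MeV")  # ~0.521
```
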
Pauli suggested that the unobserved beta decay energy was carried by neutral particles, now called antineutrinos. Because they interact only weakly, it takes an immense intensity of beta decay to detect antineutrinos: they were first detected in 1956, coming from the intense beta radioactivity of the fission product waste in a nuclear reactor. From the conservation laws, Pauli had been able to predict the exact properties to expect. The theory of beta decay was developed soon after Pauli’s suggestion in the 1930s by Enrico Fermi, who then invented the nuclear reactor later used to discover the antineutrino. However, Fermi’s theory has a neutron decaying directly into a proton, a beta particle and an antineutrino, whereas in the 1960s the theory of beta decay had to be re-expressed in terms of quarks. Glashow, Weinberg and Salam showed that to make it a gauge theory there had to be a massive intermediate ‘weak gauge boson’. So what really happens is more complicated than in Fermi’s theory of beta decay: a down-quark interacts with a massive W^{-} weak gauge boson, which then decays into an electron and an antineutrino. The massiveness of the field quanta is needed to explain the weakness of beta decay (i.e., the relatively long half-lives of beta decay; a free neutron is radioactive with a beta half-life of about 10.3 minutes, compared with the tiny lifetimes, small fractions of a second, of hadrons which decay via the strong interaction). The massiveness of the weak field quanta was a falsifiable prediction, and in 1983 CERN discovered the weak field quanta with the predicted energies, confirming the SU(2) weak interaction gauge theory.
There are two relative types or directions of isospin, by analogy to ordinary spin in quantum mechanics (where spin up and spin down states are represented by +1/2 and –1/2 in units of h-bar). These two isospin charges are modelled by the Yang-Mills SU(2) symmetry, which has 2^2 – 1 = 3 gauge bosons (with positive, negative and neutral electric charge, respectively). Because the interaction is weak, the gauge bosons must be massive, and as a result they have a short range, since massive virtual particles don’t exist for long in the vacuum and can’t travel far in that short lifetime. The two isospin charge states allow quark-antiquark pairs, or doublets, to form, called mesons. The weak isospin force only acts on particles with left-handed spin. At high energy, all weak gauge bosons would be massless, allowing the weak and electromagnetic forces to become symmetric and unify. But at low energy, the weak gauge bosons acquire mass, supposedly from a Higgs field, breaking the symmetry. This Higgs field has not been observed, and the general Higgs models don’t make a single falsifiable prediction (there are several possibilities).
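The gauge boson counts quoted here and in the colour section below both come from the same formula: SU(N) has N^2 – 1 generators.

```python
def gauge_boson_count(n: int) -> int:
    """Generators of SU(N), i.e. the number of gauge bosons: N^2 - 1."""
    return n * n - 1

print(gauge_boson_count(2))   # 3: the charged W pair plus a neutral boson
print(gauge_boson_count(3))   # 8: the gluons of SU(3)
# (U(1) is not covered by this formula; it has a single gauge boson.)
```
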
3. Colour symmetry rotation. This changes the colour charge of a quark, in the process exchanging colour-charged gluons as the field quanta. Strong nuclear interactions (which bind protons into a nucleus against the very strong electromagnetic repulsion that would otherwise be expected to make nuclei explode) are described by quantum chromodynamics, whereby quarks have a symmetry due to their strong nuclear or ‘colour’ charges. This originated with Gell-Mann’s SU(3) eightfold way of arranging the known baryons by their properties, a scheme which successfully predicted the existence of the Omega Minus before it was experimentally observed in 1964 at Brookhaven National Laboratory, confirming the SU(3) symmetry of hadron properties. The understanding (and testing) of SU(3) as a strong interaction Yang-Mills gauge theory, in terms of quarks with colour charges and gluon field quanta, was a completely radical extension of the original SU(3) eightfold way, which was merely a convenient hadron categorisation scheme based on flavour rather than colour.
Experiments scattering very high energy electrons off neutrons and protons in the 1950s first showed evidence that each nucleon has a more complex structure than a simple point, undermining the idea that nucleons are fundamental particles. Another problem with nucleons being fundamental was the magnetic moments of neutrons and protons. Dirac in 1929 initially claimed that the antiparticle his equation predicted for the electron was the already-known proton (the neutron remained undiscovered until 1932), but because he couldn’t explain why the proton is more massive than the electron, he eventually gave up on this idea and predicted the then-unobserved positron instead (just before it was discovered). The problem with the proton being a fundamental particle was that, by analogy to the positron, it would have a magnetic moment of 1 nuclear magneton, whereas the measured value is 2.79 nuclear magnetons. Also, you would expect zero magnetic moment for the neutron, a neutral spinning particle, but the neutron was found to have a magnetic moment of –1.91 nuclear magnetons. These figures are inconsistent with nucleons being fundamental particles, but are consistent with quark structure:
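For comparison, the simplest non-relativistic quark model (a standard textbook calculation, not derived in this post) already gets the ratio of the neutron and proton moments roughly right, assuming equal up and down constituent masses so that the quark moments scale with their charges:

```python
# Non-relativistic quark model (standard textbook result, supplied
# for comparison): with equal up/down constituent masses the quark
# moments scale with charge, so mu_d = -mu_u/2, and the spin-flavour
# wavefunctions give
#   mu_p = (4*mu_u - mu_d)/3,    mu_n = (4*mu_d - mu_u)/3.
mu_u = 1.0              # arbitrary units; only the ratio matters
mu_d = -mu_u / 2

mu_p = (4 * mu_u - mu_d) / 3
mu_n = (4 * mu_d - mu_u) / 3

print(mu_n / mu_p)       # model prediction: -2/3 ~ -0.667
print(-1.91 / 2.79)      # measured ratio:   ~ -0.685
```
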
‘The fact that the proton and neutron are made of charged particles going around inside them gives a clue as to why the proton has a magnetic moment higher than 1, and why the supposedly neutral neutron has a magnetic moment at all.’ – Richard P. Feynman, QED, Penguin, London, 1990, p. 134.
To explain hadron physics, Zweig and Gell-Mann suggested the theory that baryons are composed of three quarks. But there was immediately the problem that the Omega Minus would contain three identical strange quarks, violating the Pauli exclusion principle, which prevents particles from occupying the same set of quantum numbers or states. (Pairs of otherwise identical electrons in an orbital have opposite spins, giving them different sets of quantum numbers, but because there are only two spin states, you can’t make three identical charges share the same orbital by having different spins. From the measured spin of 3/2 for the Omega Minus, all of its spin-1/2 strange quarks must have the same spin.) To get around this problem in the experimentally discovered Omega Minus, the quarks must have an additional quantum number, due to the existence of a new charge: the colour charge of the strong force, which comes in three types (red, blue and green). The SU(3) symmetry of the colour force gives rise to 3^2 – 1 = 8 gauge bosons, called gluons. Each gluon carries a combination of a colour and the anticolour of a different colour, e.g. a gluon might be charged blue-antigreen. Because gluons carry a charge, unlike photons, they interact with one another and also with the virtual quarks produced by pair production in the intense electromagnetic fields near fermions. This makes the strong force vary with distance in a different way to the electromagnetic force. At small distances from a quark, the net colour charge increases in strength with increasing distance, which is the opposite of the behaviour of the electromagnetic charge (which gets bigger at smaller distances, due to less intervening shielding by the polarized virtual fermions produced in pair production). The overall result is that quarks confined in hadrons have asymptotic freedom to move about over a certain range of distances, which gives nucleons their size.
Before the quark theory and colour charge had been discovered, Yukawa proposed a theory of strong force attraction in which the strong force is due to pion exchange. He predicted the mass of the pion; unfortunately, the muon was discovered before the pion, and was originally misidentified as Yukawa’s exchange radiation. Virtual pions and other virtual mesons are now understood to mediate the strong interaction between nucleons as a relatively long-range residue of the colour force.
Above: the electroweak charges of the Standard Model of mainstream particle physics. The argument we made is that U(1) symmetry isn’t real and must be replaced by SU(2) with two charges and massless versions of the weak boson triplet (we do this by replacing the Higgs mechanism with a simpler mass-giving field that yields predictions of particle masses). The two charged gauge bosons simply mediate the positive and negative electric fields of charges, instead of having neutral photon gauge bosons with 4 polarizations. The neutral gauge boson of the massless SU(2) symmetry is the graviton. The lepton singlet with right-handed spin in the Standard Model table above is not really a singlet: because SU(2) is now being used for electromagnetism rather than U(1), we automatically have a theory that unites quarks and leptons. The problem of the preponderance of matter over antimatter is also resolved this way: the universe is mainly hydrogen, i.e. one electron, two up-quarks and one down-quark. The electrons are not actually produced alone. The down-quark, as we will demonstrate below, is closely related to the electron.
The fractional charge is due to vacuum polarization shielding, with the accompanying conversion of electromagnetic field energy into short-ranged, virtual-particle mediated nuclear fields. This is a predictive theory even at low energy, because it can make predictions based on conservation of field quanta energy where vacuum polarization attenuates a field, and because the conversion of leptons into quarks requires higher energy than existing experiments have had access to. So electrons are not singlets: some of them ended up being converted into quarks in the big bang, in very high energy interactions. The antimatter counterpart of the electrons in the universe is not absent, but is present in nuclei, because those positrons were converted into the up-quarks in hadrons. The handedness of the weak force relates to the fact that in the early stages of the big bang, for each two electron-positron pairs produced by pair production in the intense early vacuum fields of the universe, both positrons but only one electron were confined to produce a proton. Hence the amounts of matter and antimatter in the universe are identical, but due to reactions related to the handedness of the weak force, all the positrons were converted into up-quarks, while only half of the electrons were converted into down-quarks. We’re oversimplifying a little, because some neutrons were produced and quite a few other minor interactions occurred, but this is approximately the overall result of the reactions. The Standard Model table of particles above is in error because it assumes that leptons and quarks are totally distinct. For a more fundamental level of understanding, we need to alter the electroweak portion of the Standard Model.
The apparent deficit of antimatter in the universe is simply a mis-observation: the antimatter has been transformed from leptons into quarks, which from a long distance display different properties and interactions to leptons, so it isn’t currently being acknowledged for what it really is. (The differences are due to cloaking by the polarized vacuum, and to close confinement causing colour charge to appear physically by inducing asymmetry: the colour charge of a lepton is invisible because it is symmetrically distributed over three preons in the lepton and cancels out to white, unless an enormous field strength, due to the extremely close proximity of another particle, creates an asymmetry in the preon arrangement, allowing a net colour charge to act on the other nearby particle.) Previous discussions of the relationship of quarks to leptons on this blog include https://nige.wordpress.com/2007/06/13/feynmandiagramsinloopquantumgravitypathintegralsandtherelationshipofleptonstoquarks/ and https://nige.wordpress.com/2007/07/17/energyconservationinthestandardmodel/, where suggestions by Carl Brannen and Tony Smith are covered.
Considering the strange quarks in the Omega Minus, which contains three quarks each of electric charge –1/3: vacuum polarization around three nearby leptons would reduce the –1 unit of observable charge per lepton to –1/3 of observable charge per particle, because the vacuum polarization in quantum field theory which shields the core of a particle extends out to about a femtometre or so, and this zone overlaps for the three quarks in a baryon like the Omega Minus. The overlapping of the polarization zones makes them three times more effective at shielding the core charges than in the case of a single charge like a lone electron. So the electron’s observable electric charge (seen from a great distance) is reduced by a factor of three to the charge of a strange quark or a down-quark. Think of it by analogy to a couple sharing blankets, which act as shields reducing the emission of thermal radiation: if each of the couple contributes one blanket, then the overlap of blankets will double the heat shielding. This is basically what happens when N electrons are brought close together so that they share a common (combined) vacuum polarization shell around the core charges: the shielding gives each charge in the core an apparent charge (seen from outside the vacuum polarization, i.e., from more than a few femtometres away) of 1/N charge units. In the case of up-quarks with apparent charges of +2/3, the mechanism is more complex; the –1/3 charges in triplets are the clearest example of the mechanism whereby shared vacuum polarization shielding transforms the properties of leptons into those of quarks. The emergence of colour charge when leptons are confined together also appears to have a testable, falsifiable mechanism, because we know how much energy becomes available for the colour charge as the observable electric charge falls (conservation of energy suggests that the attenuated electromagnetic charge energy gets converted into colour charge energy).
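The shared-shielding rule is simple enough to state as code; note that this is the post’s own conjecture, not standard quantum field theory:

```python
def apparent_charge(core_charge: float, n_sharing: int) -> float:
    """The post's shared-shielding conjecture: N like charges inside
    one overlapping vacuum-polarization shell each display 1/N of
    the lone-particle charge when seen from far outside the shell.
    (This is the article's own idea, not standard QFT.)"""
    return core_charge / n_sharing

print(apparent_charge(-1.0, 1))   # lone electron: -1
print(apparent_charge(-1.0, 3))   # three shared cores: -1/3 each,
                                  # the strange/down-quark value
```
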
For the mechanism of the emergence of colour charge in quarks from leptons, see the suggestions of Tony Smith and Carl Brannen, outlined at https://nige.wordpress.com/2007/07/17/energyconservationinthestandardmodel/.
In particular, the Cabibbo mixing angle in quantum field theory indicates a strong universality in reaction rates for leptons and quarks: the strength of the weak force when acting on quarks in a given generation is similar to that for leptons to within 1 part in 25. The small 4% difference in reaction rates arises, as explained by Cabibbo in 1963, from the fact that a lepton has only one way to decay, but a quark has two decay routes, with probabilities of roughly 96% and 4% respectively. The similarity between leptons and quarks in terms of their interactions is strong evidence that they are different manifestations of common underlying preons, or building blocks.
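As a numerical sketch: taking sin(theta_C) of about 0.22 (a commonly quoted value for the Cabibbo angle, assumed here rather than taken from the text), the two decay routes split roughly 95%/5%, consistent with the ‘1 part in 25’ figure above.

```python
# Commonly quoted Cabibbo angle (an assumption; the text above only
# gives the approximate 96%/4% split between the two decay routes):
sin_theta_c = 0.22

p_within = 1 - sin_theta_c**2    # cos^2(theta_C): favoured route
p_across = sin_theta_c**2        # sin^2(theta_C): suppressed route

print(f"within-generation route: {p_within:.3f}")   # ~0.952
print(f"cross-generation route : {p_across:.3f}")   # ~0.048
```

By unitarity the two probabilities sum to exactly 1, which is the ‘universality’ being described: the total weak coupling is shared between the two routes, not diminished.
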
Above: Coulomb force mechanism for electrically charged massless gauge bosons. The SU(2) electrogravity mechanism.
Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel, because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs deliver relatively smaller impulses, since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s backs, and fire their submachine guns outward at the crowd. In this situation they attract, because of a net inward acceleration pushing their backs toward one another, both from the recoil of the bullets they fire and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!
This predicts the right relative strength of electromagnetism and gravity, because the charged gauge bosons exchanged between similar charges throughout the universe (drunkard’s walk statistics) multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight-line summation will on average encounter similar numbers of positive and negative charges, as they are randomly distributed, so such a linear summation over the charges between which gauge bosons are exchanged cancels out. However, if the zigzag paths of gauge bosons exchanged between similar charges only are considered, you do get a net summation.
Above: the charged gauge boson mechanism, and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity in the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no such adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that charged massless electromagnetic radiation (i.e., charged particles travelling at light velocity) is forbidden in electromagnetic theory (on account of the infinite self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is travelling solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance problem. Therefore, although you can never radiate a charged massless radiation beam in one direction only, such beams can propagate in two opposite directions while overlapping. This is of course what happens in a simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions in equilibrium, so the magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.
The price of the random-walk statistics needed to describe such a zigzag summation (avoiding opposite charges!) is that the net force is not approximately 10^{80} times the force of gravity between a single pair of charges (as it would be if you could simply add up all the charges in a coherent way, like a line of aligned charged capacitors with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zigzag inefficiency of the sum: about 10^{40} times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^{40}/10^{80} = 10^{-40} times as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^{80} randomly distributed charges, electromagnetism, as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation exchanged between all charges (including all charges of similar sign), is 10^{40} times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are scattered at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you might incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t: it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps.
If you average a large number of different random walks, then because they all have random net directions, the vector sum is indeed zero. But for an individual drunkard’s walk, a net displacement does occur; this is the basis of diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
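The square-root-of-N claim for random-walk summation is easy to test with a Monte Carlo sketch (toy numbers, nothing physical assumed):

```python
import math
import random

def net_sum(steps: int) -> int:
    """One 'drunkard's walk': the sum of randomly signed unit
    contributions (standing in for randomly signed charges)."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

def rms_sum(steps: int, trials: int = 2000) -> float:
    """Root-mean-square of the net sum over many walks; random-walk
    statistics predict this approaches sqrt(steps)."""
    return math.sqrt(sum(net_sum(steps)**2 for _ in range(trials)) / trials)

random.seed(0)
print(rms_sum(400))        # close to sqrt(400) = 20
print(math.sqrt(400))      # 20.0
```

Note that the *mean* of the signed sums over many walks does tend to zero, as the text says; it is the root-mean-square (the typical magnitude of an individual walk) that grows as sqrt(N).
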
Fig. 1a: Feynman diagrams for the quantum gravity interactions discussed in this post. M_{1} and M_{2} are two masses which accelerate towards one another. Note that in the spin-1 graviton model, the accelerating expansion of the universe is maintained by the long-range Yang-Mills exchange of gravitons between all of the masses in the universe, because the emission of gravitons causes the recoil of those masses away from one another, and when gravitons are received they also help to knock receding masses further apart. The overall effect is accelerative recession of masses.
This same mechanism, i.e. the exchange of gravitons between masses, which causes the acceleration of the universe and the Hubble expansion, also causes gravitational attraction between masses which are not substantially redshifted relative to one another. This effect is due to the fact that nearby masses don’t recede substantially from one another, so they don’t exchange gravitons forcefully: the whole basis for forceful graviton exchange in this model is that the masses must be receding. Recession leads to an outward acceleration of one mass relative to another of a = dv/dt = d(HR)/dt = H(dR/dt) = Hv = H(HR) = RH^{2} (the Hubble recession law is v = HR, which can be differentiated, treating H as constant, to find the acceleration), which results in an outward force of one mass relative to another of F = ma = mRH^{2}.
By Newton’s 3rd Law of Motion, it follows that there is an equal reaction force, which – from the possibilities implied by the Standard Model and gravitational physics – turns out to be mediated by gravitons. Hence non-receding masses have no outward force relative to one another, and thus no inward-directed, graviton-mediated reaction force. In other words, the physics tells us that non-receding masses (or masses which are not receding from one another at immense, relativistic velocities) actually shield one another from the gravitons exchanged with the rest of the universe (which is radially symmetrically distributed around the sky at the greatest distances, for which the graviton exchange contributions are most important, i.e. it gives isotropic incoming graviton exchange radiation). Hence, we are pushed down to Earth because the Earth shields us from gravitons in the downward direction, creating a small asymmetry in the exchange of gravitons between us and the surrounding universe (the cross-section for graviton shielding by an electron is only its black hole event horizon cross-sectional area, i.e. 5.75*10^{-114} square metres). The special quasi-compressive effects of gravitons on masses account for the ‘curvature’ effects of general relativity, such as the fact that the Earth’s radius is 1.5 mm less than the figure given by Euclidean geometry (Feynman Lectures on Physics, chapter 42, p. 6, equation 42.3).
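The quoted cross-section can be checked directly: it is the area of a disc with the radius of the event horizon of a black hole of the electron’s mass, r = 2Gm/c^2 (standard constants assumed, not taken from the text):

```python
import math

# Standard constants (assumed values):
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_e = 9.109e-31     # electron mass, kg

r = 2 * G * m_e / c**2      # event-horizon radius for the electron's mass
area = math.pi * r**2       # geometric cross-sectional area

print(f"horizon radius = {r:.3e} m")        # ~1.35e-57 m
print(f"cross-section  = {area:.3e} m^2")   # ~5.75e-114 m^2, as quoted
```
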
In the big bang (see http://www.astro.ucla.edu/~wright/tiredlit.htm for evidence that the big bang is the only scientifically defensible interpretation of the redshift of distant matter in the universe), the relative radial outward motion of matter away from us at velocity v = dR/dt = H*R (Hubble’s recession law) leads to an outward cosmological acceleration of matter a = dv/dt = d(H*R)/dt = (H*dR/dt) + (R*dH/dt) = H*v = R*H^2, where the term R*dH/dt vanishes because H is treated as constant in time.
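That differentiation is easy to verify numerically for constant H, where v = dR/dt = H*R implies exponential growth of R (the values of H and R0 below are toy numbers, chosen only for illustration):

```python
import math

# Toy values, chosen only for illustration:
H = 2.0       # constant Hubble parameter, 1/s
R0 = 5.0      # initial distance, m

def R(t: float) -> float:
    """Distance under v = dR/dt = H*R: exponential recession."""
    return R0 * math.exp(H * t)

def deriv(f, t: float, dt: float = 1e-5) -> float:
    """Central-difference numerical derivative."""
    return (f(t + dt) - f(t - dt)) / (2 * dt)

t = 0.3
v = deriv(R, t)                        # recession velocity dR/dt
a = deriv(lambda s: deriv(R, s), t)    # acceleration dv/dt

print(abs(v - H * R(t)))       # ~0: confirms v = H*R
print(abs(a - R(t) * H**2))    # ~0: confirms a = R*H^2
```
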
This is the cosmological acceleration observed, and it also gives us an outward force of receding matter (Newton’s 2nd law, F = dp/dt ~ ma), which by Newton’s 3rd law leads to an equal, inward-directed reaction force (which, it turns out, is mediated by gravitons, which predicts the strength of gravity as proved below). Notice that the path integral for non-loop quantum gravity interactions (those important at low energy, e.g. for determining the model of discrete-interaction quantum gravity which replaces the classical differential geometry approximation used in the theory of general relativity) is very simple: we need only sum the simple (non-loop) exchanges of gravitons between masses. Because the model denies that you get substantially forceful graviton exchange between masses which aren’t receding from one another at relativistic velocities, it follows that all relatively nearby (non-redshifted) masses like the Sun act as shields against gravitons, towards which we are pushed by the unimpeded gravitons from other directions. Unlike LeSage’s original idea, this model is substantiated by a fully predictive, falsifiable physical analysis and actually predicts the strength of gravity and other checkable facts such as the acceleration of the universe (see Fig. 3 below for a simplified version of the analysis).
The mainstream spin-2 graviton speculation is not even wrong
The mainstream spin-2 graviton model firstly makes the error of considering only two regions of energy or mass, ignoring all the other masses in the entire universe in the analysis! So it assumes – with no evidence for this whatsoever – that gravitons are only exchanged between the two masses which are attracted together. Actually, as explained above, this is the opposite of what occurs. There is no reason, in any case, why gravitons should not be exchanged with the rest of the masses in the universe; the fraction of the mass of the universe contained in an apple and the Earth is trivial. The omission of graviton exchanges with the mass of the surrounding universe from the mainstream’s physical model causes a massive error. (A good analogy to this error is Sternglass’s confusion over low-level radiation effects, where he similarly begins with a false assumption and then turns that false assumption into a fact-like arguing point to interpret his evidence wrongly. As with Sternglass and low-level radiation hype, the facts gain absolutely no publicity when published; Sternglass does not retract and apologise any more than string theorists do, and the media continues to make a living from selling lies.)
But that is not all. Because all gravitational charge is positive (mass and energy), the mainstream compounds this error by arguing that spin-1 exchange radiation would cause repulsion between such similar charges. We know that similar masses appear to attract, not repel, so there is an error somewhere in the assumptions made. But the mainstream, rather than finding the real error (which is ignoring the contributions from all the mass in the surrounding universe, which is also exchanging spin-1 gravitons with the two relatively small masses of interest in the calculation!), has instead followed into stringy fairyland a 1930s suggestion by Pauli and Fierz: that the faulty assumption is just the spin of the graviton, and that if the graviton is spin-2 instead of spin-1, it will cause universal attraction between similar charges (just as spin-1 causes universal repulsion between similar charges).
So, compounding the first error of ignoring almost all of the mass in the universe when writing down its path integral for quantum gravity, the mainstream then adds a second error: fraudulently ‘correcting’ the false prediction of repulsion that would occur with spin-1 graviton exchange between two regions of mass-energy, by adjusting the assumed spin of the mediating graviton to make the force attractive instead. Using spin-2 gives a five-polarisation graviton with a five-component tensor in the Lagrangian, which, when evaluated in a Feynman path integral, would make two masses always attract. The failure here, aside from predicting nothing checkable (unlike the spin-1 graviton), is that it is false in the first place to assume that gravitons are only exchanged between two masses. Why on Earth should the gravitons from the other masses in the rest of the universe not be exchanged with the two masses you are considering when calculating gravitation? Of course they should be! It’s obvious to one who is concerned with the mathematical physics, rather than ignorantly working mathematical machinery with no concern for the physics.
As soon as you do include the masses in the surrounding universe (which are far bigger even though they are further away – the mass of the Earth and an apple is only 1 part in 10^{29} of the mass of the universe, and all masses are gravitational charges which exchange gravitons with all other masses and with energy!), you begin to see what is really occurring. Spin-1 gauge bosons are gravitons!
The correct model is radical and extremely predictive and checkable, unlike the ‘not even wrong’ spin-2 graviton belief system which leads to the stringy landscape of pseudoscience. In simple outline: receding masses (v = HR, Hubble’s law) have an acceleration dv/dt = d(H*R)/dt = H*dR/dt + R*dH/dt = Hv + 0 = H(H*R), and thus carry an outward force F = m*dv/dt which has, by Newton’s 3rd law, an inward reaction force mediated by gravitons. Cosmologically distant masses push one another apart by exchanging gravitons, explaining the lack of gravitational deceleration observed in the universe. But masses which are nearby in cosmological terms (not redshifted much relative to one another) are pushed together by gravitons from the surrounding (highly redshifted) distant universe, because they don’t exert an outward force relative to one another, and so don’t fire a recoil force (mediated by spin-1 gravitons) towards one another. They, in other words, shield each other. Think of the exchange simply as bullets bouncing off particles: if bullets are firing in from all directions, a nearby mass which isn’t shooting at you will act as a shield, and you’d be pushed towards that shield (which is why things fall towards large masses). This is a quantitative prediction, predicting the strength of the gravitational coupling, which can be checked. So this mechanism, which in 1996 predicted the lack of gravitational deceleration in the big bang (observed in 1998 by Saul Perlmutter’s automated CCD telescope software), also predicts gravitation quantitatively.
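The ‘bullets’ analogy can be made quantitative with a toy Monte Carlo (purely illustrative, not the author’s calculation): sample isotropic bullet travel directions, remove those blocked by a shield below, and sum the surviving momentum. For a shield blocking a cone of half-angle θ, the analytic net impulse per bullet of the full flux is sin²θ/4, directed toward the shield:

```python
import math
import random

random.seed(1)

theta = math.radians(30.0)    # half-angle of the cone blocked by the shield below
cos_cap = math.cos(theta)
N = 200_000                   # number of sampled bullets

net_z = 0.0
for _ in range(N):
    z = random.uniform(-1.0, 1.0)   # z-component of an isotropic travel direction
    if z <= cos_cap:                # keep it; bullets with z > cos(theta) travel up
        net_z += z                  # inside the shielded cone and were blocked
net_z /= N

analytic = -math.sin(theta) ** 2 / 4   # expected net momentum per bullet (downward)
print(f"Monte Carlo net z = {net_z:+.4f}, analytic = {analytic:+.4f}")
```

The surviving flux has a net downward component, i.e. the test particle is pushed toward the shield, which is the sense in which mutual shielding produces an apparent attraction in this model.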
It should be noted that in this diagram the force-causing gauge (vector) boson exchange radiation is drawn in the usual convention as a horizontal wavy line – i.e. the gauge bosons are shown as being exchanged instantly, rather than as radiation propagating at the velocity of light and thus taking time to propagate. In fact, gauge bosons don’t propagate instantly, and to be strictly accurate we would need to draw inclined wavy lines, as shown in Fig. 2 below. Depicting the exchange of the gauge bosons as a kind of reflection process (which imparts an impulse in the case where it causes the mass to accelerate) would make the diagram more complex. Conventionally, Feynman diagrams are shorthand for categories of interactions, not for specific individual interactions. They do not depict all the similar interactions that occur when two particles attract or repel; they are greatly oversimplified in order to make the basic concept lucid.
Loops in Feynman diagrams and the associated infinite perturbative expansion
Because the gravitational phenomena we have observed in the checked aspects of general relativity are low-energy phenomena, loops can be ignored (this is discussed later in this post). Loops occur when bosonic field quanta undergo pair production and briefly become fermionic pairs, which soon annihilate back into bosons but become briefly polarized during their existence and in so doing modify the field; they are described by the infinite series of Feynman diagrams, each representing one term in the perturbative expansion of a Feynman path integral. So the direct exchange of gauge bosons such as gravitons gives us only a few possible types of Feynman diagram for non-loop, simple, direct exchange of field quanta between charges.
The illustration above summarises a few of the basic (widely ignored) points about the failure of existing general relativity to represent quantum fields (by making clear that curvature is an approximation to a lot of little deflections caused by many individual, discrete quantum gravity interactions), and the failure of the mainstream quantum gravity model to include graviton exchange with surrounding masses in the rest of the universe. When receding masses appear to be accelerating radially away relative to us (as observed in spacetime), they are emitting gravitons which travel towards us at the same velocity as the visible light we observe from such receding galaxies. The recoil and impulses created by the emission and reception of such gravitons explain both gravitation and the acceleration of the universe in one go, as shown in the 3rd Feynman diagram of Fig. 1 above, and in the more technical mathematics below in this blog post. (I’ve only completed the first two sections of chapter 1 so far: book draft version 1.23.)
Fig. 1b: an illustration of some of the Feynman diagrams corresponding to successive terms in the perturbative expansion for electron–electron scattering (illustration credit: http://www.answers.com/topic/feynmandiagram?cat=technology). The first Feynman diagram shown represents the low-energy (non-loop) approximation, i.e. Coulomb’s law (Gauss’s law in Maxwell’s equations describes the diverging electric field from a charge and is physically equivalent to Coulomb’s law). This simple Feynman diagram contains no loops, as it has only two vertices (it is second-order). All of the other Feynman diagrams in the illustration have four vertices and are thus fourth-order; these are the ‘loop’ corrections.
It is very important to recognise that the simplest (non-loop) Feynman diagram is of overwhelming importance for calculations in low-energy physics! It is the simplest Feynman diagram which corresponds to the classical approximation (the low-energy or long-distance asymptotic limit of a quantum field theory). Although the presence of loops does cause charge and mass renormalization, whereby the apparent values of these parameters at low energy are different from their values at high energy (due to the shielding or anti-shielding of the respective force field by pair-production virtual particles, which arise in relatively intense fields), it is a fact that at low energy the coupling parameters are constant.
This means that we can analyse the low-energy limit of a quantum field theory of gravity and electromagnetism without complex calculations of looped Feynman diagrams. For example, the first loop Feynman diagram in the calculation of the magnetic moment of the electron only increases the simple (Dirac equation) non-loop result from 1 Bohr magneton to about 1.00116 Bohr magnetons. In other words, the most important loop Feynman diagram only varies the calculated result by 0.116%. This is quite a trivial correction, and in general the more complex the Feynman diagram, the less likely it is to occur, and so the smaller the contribution it makes to a prediction of what will be observed in experiments. For this reason, we can ignore loops when we analyse the path integrals for fundamental forces. This means that the path integral has a simple approximate solution for the non-loop factor, omitting the complex perturbative expansion of looped diagram terms.
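The 0.116% figure is just Schwinger’s first-order loop correction α/2π to the electron’s magnetic moment, which is easy to verify:

```python
import math

alpha = 1 / 137.035999        # fine structure constant
a_e = alpha / (2 * math.pi)   # first-order (one-loop) correction to the magnetic moment

print(f"magnetic moment ≈ {1 + a_e:.5f} Bohr magnetons")  # ≈ 1.00116
print(f"correction ≈ {100 * a_e:.3f}%")                   # ≈ 0.116%
```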
This makes the calculations extremely straightforward and soluble by simple geometric methods, such as asymmetry analysis (see Fig. 3 below for a calculation of the force of gravity by this method of analyzing nonloop contributions to path integrals geometrically).
Fig. 1c: an illustration from a paper by Reinhard Alkofer demonstrating the complexity of loops in Feynman diagrams. This demonstrates the simple cancellation of non-loop Feynman diagrams, as opposed to the non-cancellation you get when loops occur. Generally, a loop occurs when a boson (an oscillatory electromagnetic wave, with as much negative electric field as positive electric field), in a strong (>1.3*10^18 volts/metre) electric field, briefly becomes two virtual (short-lived) fermions, one positive and one negative. The fermions quickly recombine and annihilate back into bosonic field quanta (on the timescale set by the energy-versus-time form of the Heisenberg uncertainty principle, a simple scattering law). But during their brief existence, the virtual fermions move in opposite directions in the original electric field, introducing an electric dipole which tends to oppose and partially screen the original electric field (i.e. the observable charge, which is determined from the observed electric field, since nobody can see the core charge directly). Hence, loops in electric fields tend to shield those fields as seen from a great distance. In the case of colour charge fields in QCD, the virtual charges can increase the field strength rather than shielding it. Loops are important in high-energy, short-ranged fields. For low-energy, long-range electromagnetic and gravitational physics, loops don’t exist in spacetime far from charges, because the field strengths are too weak to allow pair production; generally, field strengths must exceed Schwinger’s threshold before there is any pair production.
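The pair-production threshold quoted (about 1.3×10^18 V/m) is Schwinger’s critical field, E_c = m²c³/(eħ); checking with standard constants:

```python
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # elementary charge, C
hbar = 1.0546e-34   # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)   # Schwinger critical electric field strength
print(f"E_c ≈ {E_c:.2e} V/m")      # ≈ 1.3e18 V/m
```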
As far as quantum gravity field loops are concerned, there is no experimental evidence that they even exist, although they are assumed in vacuous string theory to exist very close to gravitational charges, as a means of making the weak force of gravity ‘unify’ with other forces at high energy. (However, that stringy approach to numerological ‘unification’ of coupling constants ignores the conservation of energy, as discussed in a previous post, and it is not a physical unification of different force fields – which has been demonstrated by different means, using a physical mechanism.)
For simpler discussion of loops in quantum fields, see this paper and this paper. This blog post is concerned primarily with non-loop interactions, i.e. force fields in low-energy physics – the classical limit of quantum field theory, where quantum field effects are relatively simple and therefore, as shown below, simply don’t require the kind of very sophisticated mathematics needed to accommodate loop effects.
“The cloud of virtual particles acts like a screen or curtain that shields the true value of the central core. As we probe into the cloud, getting closer and closer to the core charge, we ‘see’ less of the shielding effect and more of the core. This means that the electromagnetic force from the electron as a whole is not constant, but rather gets stronger as we go through the cloud and get closer to the core. Ordinarily when we look at or study an electron, it is from far away and we don’t realize the core is being shielded.” – Professor David Koltick.
Unlike the electromagnetic field which is shielded by the vacuum and gets stronger than predicted by the Coulomb inversesquare law as you approach the core of a fermion, the strong nuclear force – which is the “glue” that holds together elementary particles such as protons – actually gets weaker closer to the core charge. “Because the electromagnetic charge is in effect becoming stronger as we get closer and the strong force is getting weaker, there is a possibility that these two forces may at some energy be equal. Many physicists have speculated that when and if this is determined, an entirely new and unique physics may be discovered.” – Professor David Koltick, quoted at http://findarticles.com/p/articles/mi_m1272/is_n2625_v125/ai_19496192
‘… we [experimentally] find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron–positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arXiv: hep-th/0510040, p. 71.
Plus, in particular:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion–antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
Fig. 2: Feynman diagrams (left) by convention make various simplifications: the gauge boson radiation is not actually transmitted instantly between charges, contrary to the convention depicted in places like http://hyperphysics.phyastr.gsu.edu/hbase/particles/expar.html. Instead, as the diagram on the right shows, it takes time for radiation to be transferred between charges. If gravitons went instantly (i.e. as a horizontal wavy line on a diagram where the vertical axis depicts time), then gravity would act instantly instead of being constrained by the velocity of light. The errors introduced by oversimplified Feynman diagrams help to keep mainstream physicists insulated from reality, concentrating on non-existent ‘problems’ like working out ways to avoid the difficulties in renormalizing a spin-2 graviton theory. If they concentrated on the fact that gravitons are spin-1, as demonstrated by the empirical, observed evidence, the entire problem could be sorted out straight away, as shown below.
They don’t want to be heretics, however. Groupthink wins: ‘Groupthink is a type of thought exhibited by group members who try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. During Groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.’ – Wikipedia. ‘[Groupthink is a] mode of thinking that people engage in when they are deeply involved in a cohesive ingroup, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.’ – Irving Janis.
Fig. 3: This is the key diagram, working out – without a fancy path integral formulation – the net sum of spin-1 graviton contributions. The first few logical steps are included:
1. The outward force of receding matter (recession velocity v = HR, where H is the Hubble constant and R is apparent distance) is F = ma = m.dv/dt = m.d(HR)/dt = m[H.dR/dt + R.dH/dt] = m[Hv + 0] = mH(HR) = mRH^2. This is on the order of F = 10^43 Newtons, but a correction should be applied for the apparent increase in density as we look back to earlier times (great distances in spacetime), and for the relativistic mass increase of receding matter. For simplicity, to see how the maths works, ignore the correction:
F = ma = [(4/3)πR^{3}ρ].[dv/dt] = [(4/3)πR^{3}ρ].[H^{2}R] = 4πR^{4}ρH^{2}/3, where ρ is the density of the universe.
2. The inward force (which must be carried by gravitons or the spacetime fabric, as explained in the book draft and here) is equal to the outward force (action and reaction are equal and opposite – Newton’s 3rd law). However, there is a redshift of gravitons approaching us from relativistically receding, extremely redshifted masses, which reduces the effective graviton energy when received. (This redshift effect offsets the infinity-approaching outward-force effects of relativistic mass increase and the increasing density of the earlier universe at ever greater distances.)
3. Gravity force, F = (total inward force) × (cross-sectional area of the shield projected out to radius R, i.e. the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R^{2}/r^{2}) / (total spherical area with radius R).
In an earlier post, it is proved that the shield’s cross-sectional area is the cross-sectional area of the event horizon of a black hole, π(2GM/c^{2})^{2}. For the present, to get a feel for the physical dynamics, we will assume this is the case without proving it. This gives
(force of gravity) = (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law because at great distances/early times (where the density of the universe is highest) it is a good approximation to put HR = c, which gives R/c = 1/H, so:
(force of gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
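Equating the force just derived to Newton’s law, F = GM²/r², gives a predicted coupling G = 3H²/(4πρ), where ρ is the density of the universe. As a rough numerical check (my assumed values of H and ρ, with none of the density or redshift corrections the derivation mentions):

```python
import math

H = 70e3 / 3.086e22    # assumed Hubble constant, ~70 km/s/Mpc in 1/s
rho = 9.2e-27          # assumed mean density of the universe, kg/m^3
G_measured = 6.674e-11 # measured gravitational constant

G_pred = 3 * H**2 / (4 * math.pi * rho)   # from equating the derived force to GM^2/r^2
print(f"predicted G = {G_pred:.2e}, ratio to measured = {G_pred / G_measured:.2f}")
```

With these inputs the uncorrected estimate comes out within a factor of a few of the measured G, which is the order-of-magnitude agreement claimed before corrections are applied.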
This key result ignores both the density variation in spacetime (the distant, earlier universe having a higher density) and the effect of redshift in reducing the energy of gravitons and thus weakening quantum gravity contributions from extreme distances (the momentum of a graviton is p = E/c, where E = hf is reduced by redshift), but it does demonstrate three important things about this line of research:
1. Quantization of mass: the force of gravity is proportional not to M_{1}M_{2} but instead to M^{2}, which is a vital result because it is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. Lepton and hadron masses beyond the electron are nearly all integer multiples of 0.5*0.511*137 = 35 MeV, where 0.511 MeV is the electron’s mass and 137.036… is the well-known Feynman dimensionless factor in charge renormalization (discovered much earlier in quantum mechanics by Sommerfeld). Furthermore, quark doublet (meson) masses are close to multiples of twice this, 70 MeV, while quark triplet (baryon) masses are close to multiples of three times this, 105 MeV. It appears that the simplest possible model – one which predicts the masses of new, as yet unobserved particles as well as explaining existing particle masses – is that the vacuum particle which is the building block of mass is 91 GeV, like the Z weak boson. The muon mass, for instance, is 91,000 MeV divided by the product of 137 and twice Pi: a 137 vacuum polarization shielding factor combined with a dimensionless geometric shielding factor of twice Pi. (Spinning a particle, or a missile in flight, reduces the radiant exposure per unit area of its spinning surface by a factor of Pi compared to a non-spinning one, because the entire surface area of the edge of a loop or cylinder is Pi times the cross-sectional area seen side-on; and a spin-1/2 fermion must rotate twice – by 720, not 360, degrees, like drawing a line right around the single surface of a Möbius strip – to expose its entire surface to observation and reset its symmetry.) This is analysed in an earlier blog post, showing how all masses are built up from only one type of fundamental massive particle in the vacuum, and making checkable predictions.
Polarized vacuum veils around particles reduce the strength of the coupling between the massive 91 GeV vacuum particles (which interact with gravitons) and the SU(2) x SU(3) particle core of interest (which doesn’t directly interact with gravitons), accounting for the observed discrete spectrum of fundamental particle masses.
The correct mass-giving field differs in some ways from the electroweak symmetry-breaking Higgs field of the conventional Standard Model (which gives the Standard Model charges, as well as the 3 weak gauge bosons, their symmetry-breaking mass at low energies by ‘miring’ them, i.e. resisting their acceleration): a discrete number of the vacuum mass particles (gravitational charges) become associated with leptons and hadrons, either within the vacuum-polarized region which surrounds them (strong coupling to the massive particles, hence large effective masses) or outside it (where the coupling, which presumably relies on the electromagnetic interaction, is shielded and weaker, giving lower effective masses to particles). In the case of the deflection of light by gravity, photons have zero rest mass, so it is their energy content which is being deflected. The mass-giving field in the vacuum still mediates the effects of gravitons, but since the photon has no net electric charge (it has equal amounts of positive and negative electric field density), it has zero effective rest mass. The quantum mechanism by which light gets deflected as predicted by general relativity has been analysed in an earlier post: due to the FitzGerald–Lorentz contraction, a photon’s field lines are all in a plane perpendicular to the direction of propagation. This means that twice the electric field’s energy density in a photon (or other light-velocity particle) is parallel to a gravitational field line that the photon is crossing at normal incidence, compared to the case of a slow-moving charge with an isotropic electric field. The strength of the coupling between the photon’s electric field and the mass-giving particles in the vacuum is generally not quantized, unless the energy of the photon is quantized.
If you are firmly attached to an accelerating horse, you will accelerate at the specific rate that the horse accelerates at. But if you are less firmly attached, the acceleration you get depends on your adhesion to the saddle: if you slide back as the horse accelerates, your acceleration is somewhat less than that of the horse you are sitting on. Particles with rest mass are firmly anchored to vacuum gravitational charges, the particles with fixed mass that replace the traditional role of Higgs bosons. But particles like photons, which lack rest mass, are not firmly attached to the massive vacuum field, and the quantized gravitational interactions – like the fixed acceleration of a horse – are not automatically conveyed upon the photon. The result is that a photon gets deflected more by the ‘curved spacetime’ created by the effect of gravitons upon the Higgs-like massive bosons in the vacuum than particles with rest mass, such as electrons, are.
2. The inverse square law, for distance r.
3. Many checked and checkable quantitative predictions. Because the Hubble constant and the density of the universe can be quantitatively measured (within certain error bars, like all measurements), you can use this to predict the value of G. As astronomy gets better measurements, the accuracy of the prediction gets better and can be checked experimentally.
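The mass relations claimed in point 1 above are at least easy to check arithmetically (the 91 GeV building block and the 137 and 2π shielding factors are the author’s model, not established physics):

```python
import math

m_e = 0.511            # electron mass, MeV
alpha_inv = 137.036    # inverse fine structure constant (shielding factor in the model)

unit = 0.5 * m_e * alpha_inv                # claimed 35 MeV building-block mass
muon = 91_000 / (alpha_inv * 2 * math.pi)   # claimed muon mass from the 91 GeV particle

print(f"mass unit ≈ {unit:.1f} MeV")   # ≈ 35 MeV
print(f"muon ≈ {muon:.1f} MeV")        # measured muon mass is 105.66 MeV
```

The arithmetic does reproduce the quoted 35 MeV unit and a muon mass close to the measured value; whether the coincidence is physically meaningful is a separate question the post argues for.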
In addition, the mechanism predicts the expansion of the universe: the reason why Yang–Mills exchange radiation is redshifted to lower energy by bouncing off distant masses is that energy from gravitons is being used to speed up the distant masses. This makes quantitative predictions and is a firm test of the theory. (The outward force of a receding galaxy of mass m is F = mH^{2}R, which requires power P = dE/dt = Fv = mH^{3}R^{2}, where E is energy.)
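The quoted force and power relations can be evaluated for an illustrative receding galaxy (the mass and distance here are arbitrary example values of mine, not from the text):

```python
H = 70e3 / 3.086e22   # assumed Hubble constant, 1/s
m = 2e42              # example galaxy mass, kg (roughly Milky Way scale)
R = 1e25              # example distance, m

F = m * H**2 * R      # outward force of the receding galaxy, F = mH^2R
P = m * H**3 * R**2   # power required, P = F*v = F*(H*R) = mH^3R^2

print(f"F ≈ {F:.1e} N, P ≈ {P:.1e} W")
```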
It should be noted that the gravitons in this model would have a mean free path (average distance between interactions) of 3.10 x 10^77 metres in water, as calculated in the earlier post here. They are able to produce gravity by interacting with the Higgs-like vacuum field, owing to the tremendous flux of gravitons involved. The radially symmetric, isotropic outward force of the receding universe is on the order of 10^43 Newtons, and by Newton’s 3rd law this produces an equal and opposite (inward) reaction force. This is the immense field behind gravitation. Only a trivial asymmetry in the normal equilibrium of such immense forces is enough to produce gravity. Cosmologically nearby masses are pushed together because they aren’t receding much, and so don’t exert a forceful flux of graviton exchange radiation in the direction of other cosmologically nearby masses. Because cosmologically nearby masses therefore don’t exert graviton forces upon each other as exchange radiation, they are in effect shielding one another, and therefore get pushed together by the forceful exchange of gravitons which does occur with the receding universe on the unshielded side, as illustrated in Fig. 1 above.
Some other posts, besides the key one, that are useful to grasping the details are this, this, this, this, this and this. Some of the earlier posts contain omissions or errors which have later been corrected in later posts, or by comments added to the post. Science is not a religion or political business where a dogma or policy is agreed upon and then fixed forever. Where omissions or errors occur, they should be corrected.
Update: I’ve decided that in the finished book, every righthand page will be simply a fullpage illustration (using diagrams, graphs, etc.) of the technical content of the text on the lefthand page. Otherwise the book will rely on appealing to people who have the time to read a lot of technical text, which most people do not have. Hopefully the technical illustrations on all righthand pages will provide the ‘reader’ with the ability to grasp all the main points in a few seconds visually, and then they can refer to the text on the facing page if they want further particulars. I’ll probably wait until I finish the text for each chapter, before designing and inserting the full page illustrations.
As of 10 February 2008, I’ve changed the banner of this blog from SU(2) x SU(3) to “U(1) x SU(2) x SU(3) quantum field theory: Is electromagnetism mediated by charged, massless SU(2) gauge bosons? Is the weak hypercharge interaction mediated by the neutral massless SU(2) weak gauge boson? Is gravity mediated by the spin-1 gauge boson of U(1)? This blog provides the evidence and predictions for this introduction of gravity into the Standard Model of particle physics.” This is driven by the fact, explained in the comments to this post, that:
… SU(2) x SU(3), … [it] seems too difficult to make SU(2) account for weak hypercharge, weak isospin charge, electric charge and gravity.

I thought it would work out by changing the Higgs field so that some massless versions of the 3 weak gauge bosons exist at low energy and cause electromagnetism, weak hypercharge and gravity.

However, since the physical model I’m working on uses the two electrically charged but massless SU(2) gauge bosons for electromagnetism, that leaves only the electrically neutral massless SU(2) gauge boson to perform both the role of weak hypercharge and gravity. That doesn’t work out, because the gravitational charges (masses) are evidently going to be different from the weak hypercharge, which differs only by a factor of two between an electron and a neutrino. Clearly, an electron is immensely more massive than a neutrino. So the SU(2) x SU(3) model must be wrong.

The only possibility left seems to be similar to the Standard Model U(1) x SU(2) x SU(3), but with differences from the Standard Model. U(1) would model gravitational charge (mass) and spin-1 (push) gravitons. The massless neutral SU(2) gauge boson in the model I’m working on would then mediate weak hypercharge only, instead of mediating gravitation as well.

The whole point about my approach is that I’m working from fact-based predictive mechanisms for fundamental forces, and in this world view the symmetry group is just a mathematical model which is found to describe the symmetries suggested by the mechanisms. Here are some links to some online basic information about hypercharge, weak hypercharge and SU(2) isospin. Ryder’s book Quantum Field Theory (2nd ed., 1996), chapters 1-3 and 8-9, contains the best (physically understandable) introduction to the basic mathematics, including Lagrangians, path integrals, Yang–Mills theory and the Standard Model. From my perspective, the symmetry groups are the end product of the physics; they summarise the symmetries of the interactions.
The end product can change when the understanding of the details producing it changes. I’ve revised the latest draft book manuscript PDF file accordingly.
Dr Thomas Love of California State University has pointed out:
‘The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction, we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics; it is not inherent in the physics.’
That looks like a factual problem, undermining the mainstream interpretation of the mathematics of quantum mechanics. If you think about it, sound waves are composed of air molecules, so you can easily write down the wave equation for sound and then – when trying to interpret it for individual air molecules – come up with the idea of wavefunction collapse occurring when a measurement is made for an individual air molecule.
Feynman writes in a footnote printed on pages 55-6 of my (Penguin, 1990) copy of his book QED:
‘… I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed … If you get rid of all the old-fashioned ideas and instead use the [path integral] ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’
Feynman on p. 85 points out that the effects usually attributed to the ‘uncertainty principle’ are actually due to interferences from virtual particles or field quanta in the vacuum (which don’t exist in classical theories but must exist in an accurate quantum field theory):
‘But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these [classical] rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’
Hence, in the path integral picture of quantum mechanics – according to Feynman – all the indeterminacy is due to interferences. It’s very analogous to the indeterminacy of the motion of a small grain of pollen (less than 5 microns in diameter) due to jostling by individual interactions with air molecules, which represent the field quanta being exchanged with a fundamental particle.
The path integral then makes a lot of sense, as it is the statistical resultant for a lot of interactions, just as the path integral was actually used for Brownian motion (diffusion) studies in physics before its role in QFT. The path integral still has the problem that it’s unrealistic in using calculus and averaging an infinite number of possible paths determined by the continuously variable Lagrangian equation of motion in a field, when in reality there are not going to be an infinite number of interactions taking place. But at least it is possible to see the problems, and entanglement may be a red herring:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
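The pollen-grain analogy above is easy to make concrete. Here is a toy random-walk simulation (my own illustration, nothing to do with Feynman’s actual path-integral calculus): many individually unpredictable impacts produce a statistically lawful resultant, with the RMS displacement growing as the square root of the number of impacts.

```python
import math
import random

def rms_displacement(steps, trials, seed=0):
    """Toy pollen grain in 2D: each step is a unit kick in a random
    direction (one impact from an 'air molecule' or field quantum).
    Returns the RMS net displacement averaged over many trials."""
    rng = random.Random(seed)
    r2_total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        r2_total += x * x + y * y
    return math.sqrt(r2_total / trials)

# Individually indeterministic, statistically lawful: RMS ~ sqrt(N).
for n in (100, 400, 1600):
    print(n, round(rms_displacement(n, 1000), 1))
```

Each individual path is unpredictable, but the ensemble obeys a simple diffusion law; that statistical character is what the analogy is pointing at.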
copy of a comment:
http://asymptotia.com/2008/02/17/talesfromtheindustryxviijumpthoughts/
Hi Clifford,
Thanks for these further thoughts about being science advisor [...] for what is (at least partly) a sci fi film. It’s fascinating.
“What I like to see first and foremost in these things is not a strict adherence to all known scientific principles, but instead internal consistency.”
Please don’t be too hard on them if there are apparent internal inconsistencies. Such alleged internal inconsistencies don’t always matter, as Feynman discovered:
“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …
“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …
” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …” – Feynman, quoted at http://www.tony5m17h.net/goodnewsbadnews.html#badnews
I agree with you that: “Entertainment leading to curiosity, real questions, and then a bit of education …”
Update (23 February 2008): via Dr Woit’s blog Not Even Wrong, see the recent review of Smolin’s book in the Times Literary Supplement,
“… Smolin has launched a controversial attack on those working on the dominant model in theoretical physics. He accuses string theorists of racism, sexism, arrogance, ignorance, messianism and, worst of all, of wasting their time on a theory that hasn’t delivered.”

http://tls.timesonline.co.uk/article/0,,253722650590_1,00.html
Update (28 February 2008): via Woit, the latest hype for string is ‘rock guitars could hold secret to the universe’. It might sound like just more pathetic spin, but actually, the analogy of string theory hype to that of a community of rock groupies is sound.
Update (2 March 2008): The rock guitar string promoter referred to just above is Dr Lewney who has the site http://www.doctorlewney.com. He writes on Dr Woit’s blog:
‘I’m actually very open to ideas as to how best to communicate physics to schoolkids.’
Dr Lewney, if you want to communicate real, actual physics rather than useless blathering and lies to schoolkids, that’s really excellent. But please just remember that physics is not uncheckable speculation, and that twenty years of mainstream hype of string theory in British TV, newspapers and the New Scientist has by freak ‘coincidence’ (don’t you believe it) correlated with a massive decline in kids wanting to do physics. Maybe they’re tired of sci fi dressed up as physics or something.
http://www.buckingham.ac.uk/news/newsarchive2006/ceerphysics2.html:
‘Since 1982 A-level physics entries have halved. Only just over 3.8 per cent of 16-year-olds took A-level physics in 2004 compared with about 6 per cent in 1990.
‘More than a quarter (from 57 to 42) of universities with significant numbers of physics undergraduates have stopped teaching the subject since 1994, while the number of home students on first-degree physics courses has decreased by more than 28 per cent. Even in the 26 elite universities with the highest ratings for research the trend in student numbers has been downwards.
‘Fewer graduates in physics than in the other sciences are training to be teachers, and a fifth of those are training to be maths teachers. A-level entries have fallen most sharply in FE colleges where 40 per cent of the feeder schools lack anyone who has studied physics to any level at university.’
http://www.math.columbia.edu/~woit/wordpress/?p=651#comment34820:
‘One thing that is clear is that hype of speculative uncheckable string theory has at least failed to encourage a rise in student numbers over the last two decades, assuming that such speculation itself is not actually to blame for the decline in student interest.
‘However, it’s clear that when hype fails to increase student interest, everyone will agree to the consensus that the problem is a lack of hype, and if only more hype of speculation was done, the problem would be addressed.’
Professor John Conway, a physicist at the University of California, has written a post called ‘What’s the (Dark) Matter?’ where someone has referred to my post here as my ‘belief’ that gravitons are of spin-1. Actually, this isn’t a ‘belief’. It’s a fact (not a belief) that so far, spin-2 graviton ideas are at best uncheckable speculation that is ‘not even wrong‘, and it’s a fact (not a belief) that this post shows that spin-1 gravitons do reproduce gravitation as already known from the checked and confirmed results of general relativity, while also quantitatively predicting more, such as the strength of gravity. This is not a mere ‘personal belief’, such as the gut feeling that is used by string theorists, politicians and priests to justify hype in religion or politics. It is instead fact-based, not belief-based, and it makes accurate predictions so far as the difficult calculations and the imperfect experimental data to date can be used to check it, so there’s no belief system here, just cold hard fact. This is why I’m writing about it, and why censoring it is wrong. If science is to be based on mainstream groupthink, then it is reduced to a religion or to politics, i.e., a dictatorship of the majority over minorities which is enforced not by solid physical reasoning from facts determined in nature, but by the political tools of censorship.
On the subject of dark matter, my analysis of the gravity coupling constant G shows that the usual critical density formula (for a flat universe) from general relativity implies a density which is too high, simply because of quantum gravity effects on G which are ignored by general relativity, which is classical on large scales. Sure, there is some dark matter (neutrinos and large black holes which give off little Hawking radiation, for example), but it is nowhere near the amount suggested by general relativity’s quantum-gravity-ignoring Friedmann-Robertson-Walker metric. Actually, with graviton exchange between masses being the source of gravity, there is a difference between the classical approximation you get for fairly short range effects (like an apple falling to the earth, and the earth orbiting the sun), and very long range effects in cosmology, where the masses involved are actually receding with relativistic motion (approaching velocity c). The latter case involves gravitons being received in a redshifted condition, i.e. with lower energy than is the case over shorter distances where masses aren’t receding so rapidly. This, and related effects, could easily be included in the usual framework of general relativity by reducing the value of G at long ranges to an effective value that allows for this graviton redshift effect. General relativity is only a classical approximation, but it needn’t be completely wrong on cosmological scales: by building in corrections for the physical mechanism, it can be made to approximate cosmology far better.
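As a toy sketch only (the 1/(1+z) scaling below is my assumption for illustration, not a derived result), the proposed correction could be bolted onto the classical limit like this:

```python
def g_effective(G0, z):
    """Hypothetical toy model: if gravitons exchanged between receding
    masses arrive redshifted by a factor (1+z), scale the effective
    coupling down by the same energy-loss factor. Assumption only."""
    return G0 / (1.0 + z)

G_NEWTON = 6.674e-11  # m^3 kg^-1 s^-2
for z in (0.0, 0.5, 2.0):
    print(z, g_effective(G_NEWTON, z))
```

At z = 0 (nearby masses) this reduces to the ordinary measured G, so laboratory and solar-system gravity are untouched; only the cosmological limit changes.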
To explain the apparent dark matter manifested in the flattened shapes of observed galactic rotation curves, showing orbit velocity versus radial distance from the centre of a rotating galaxy, I recommend that the reader check a page on John Hunter’s website, http://www.gravity.uk.com/galactic_rotation_curves.html. I first came across Hunter’s idea after he published a quarter-page notice in the New Scientist a few years ago, and we corresponded. Hunter is apparently not too interested in quantum gravity (spin-1 graviton exchange as a mechanism), just with making a mathematical conjecture and checking it, but his simple idea is mathematically equivalent to the physical mechanism of graviton exchange I worked out (I didn’t investigate the idea of inertial and gravitational potential energy equivalence and its consequences for galactic rotation curves). Hunter starts off with the conjecture that the rest mass energy of any object, mc^2, is equivalent to the gravitational potential energy GMm/R with respect to distant matter of mass M at distance R, which is important (see comment 18 of this blog post) from my standpoint because general relativity rests on the principle of equivalence between inertial and gravitational masses. Inertial mass has an energy equivalent via Einstein’s famous formula, and gravitational mass also has an energy equivalent (the gravitational potential energy of that mass with respect to the surrounding universe, i.e. the energy which would be released by that mass via the gravitational field if the universe collapsed). Einstein failed to apply the equivalence principle (for inertial and gravitational masses in general relativity) to the energy equivalents of those respective inertial and gravitational masses, which are known from special relativity and from classical gravitation:
Fig. 4: John Hunter’s result: ‘So stars moving at a constant velocity at different radii means a constant m/r ratio. … For any given radius r, if the mass within this radius is such that the m/r value is higher than an average value (k), then the effective gravitational constant is lowered. This allows rotating matter to drift away from the centre, thus reducing the m/r ratio at this radius. If m/r < k (for any given radius r) then the effective gravitational constant is higher than average attracting more matter to within this radius, increasing the m/r ratio at this radius. In this way a constant m/r ratio for spiral galaxies can be maintained for different r, resulting in the constant velocity of stars and the flat shape of the rotation graphs. A reduction in the value of G at the centre of galaxies, … may lead to the phenomenon of active galactic nuclei and the emergence of jets.’
Notice that Hunter is oversimplifying the mass distribution in the universe, since due to the big bang the effective density increases with spacetime distance (the earlier universe had higher density), and he is not including all graviton interaction effects, but the basic conjecture and some of its consequences are mathematically similar to the physical mechanism of graviton exchange I’m working on. The normal equilibrium of radiated graviton power, which occurs via the exchange of gravitons between any given mass and the rest of the universe, produces the immense pressure on each mass which keeps it confined to a small volume as a tiny black hole; fermions are charged bosons which are trapped by gravitation. It is because of this graviton exchange equilibrium that the rest mass energy of a fundamental particle is equal to the gravitational potential energy of that mass with respect to the other masses in the surrounding universe: equilibrium of graviton exchange between one mass and all the other masses in the surrounding universe is the cause of the equality between inertial and gravitational masses/energies. It also shows why masses contract in the direction of motion and gain mass when in motion (as predicted by special relativity): acceleration of mass alters the exchange equilibrium, the resistance being the force of inertia, and the pressure effect of encountering gravitons in the direction of motion causes the Lorentz contraction.
If you look at Hunter’s conjecture mc^2 = mMG/R, since in spacetime R = ct, this immediately gives you Louise Riofrio’s fundamental equation, namely tc^3 = MG. Louise Riofrio is a physicist who has investigated whether this formula suggests that c is inversely proportional to the cube-root of the age of the universe. The quantum gravity mechanism gives the same equation (ignoring dimensionless multiplication factors for redshift and varying density effects) and suggests that c isn’t varying; instead the effective value of G varies, for various reasons as already discussed (see also the discussion here and updates in comments at that post).
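A quick back-of-envelope check (my own arithmetic, using round SI values and the usual ~13.7 billion year age estimate) shows that tc^3 = MG at least gives a mass of the right order for the observable universe:

```python
c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2
t = 13.7e9 * 3.156e7  # age of universe in seconds (~13.7 Gyr)

# Solve tc^3 = MG for the implied mass M of the universe:
M = t * c**3 / G
print(f"M ~ {M:.1e} kg")  # of order 10^53 kg, comparable to common
                          # estimates for the observable universe
```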
Update (3 March 2008): in Figure 4, John Hunter states that the basic result from his equivalence, G = R(c^2)/M, provides an explanation for the flatness problem. This is something I also obtained from the graviton interaction mechanism, as you can see in the detailed post here. Because Hunter’s way of writing the formula is so simplified (in certain ways oversimplified), it is maybe easier to grasp from it why the universe is so flat at the greatest distances (earliest times): gravitation was weaker then. The gravitational coupling G increases in direct proportion to the age of the universe, G = R(c^2)/M = ct(c^2)/M. This formula is oversimplified because it ignores various subtle but important physical effects, like the variation in the density of the universe with distance in spacetime (the density tends towards infinity at the greatest distances and hence earliest times after the big bang), and the effect of graviton exchange in an expanding universe, which quenches certain aspects of gravitation over immense distances because gravitons received as a result of exchange between two receding masses arrive at each mass in a redshifted state, weakening that interaction.
Because the effective value of G at early times after the big bang is so small from our spacetime perspective, we see small gravitational effects: the universe looks very flat, i.e., gravity was so weak it was unable to clump matter very much at 400,000 years after the big bang, which is the time of our information on flatness, i.e. the time that the closely studied cosmic background radiation was emitted. The mainstream ad hoc explanation for this kind of observation is a non-falsifiable (endlessly adjustable) idea from Alan Guth that the universe expanded or ‘inflated’ at a speed faster than light for a small fraction of a second, which would have allowed the limited total mass to become very widely dispersed very quickly, reducing the curvature of the universe and suppressing the effects of gravitation at subsequent times in the early universe.
Hence, the ‘peer’ review mainstream has blocked the proper publication of Hunter’s research just as it blocks mine, because non-falsifiable mainstream ideas are in place and, as Dr Stanley Brown, editor of PRL, emailed me in January 2004, checkable ‘alternatives’ to uncheckable mainstream speculation are unpublishable in mainstream journals due to the attitude of ‘peer’ reviewers.
On the topic of variations in G, Edward Teller falsely claimed in a 1948 paper that if G had varied as Dirac suggested a few years earlier, then the gravitationally caused compression in the early universe and in stars, including the sun, would vary with time, affecting fusion rates dramatically, because fusion is highly sensitive to the amount of compression (which he knew from his Los Alamos studies on the difficulty of producing a hydrogen bomb at that time). However, the Yang-Mills mechanism of electromagnetism (whose role in fusion is the Coulomb repulsion of protons: the stronger electromagnetism is, the less fusion you get, because protons are repelled more strongly and approach less closely, so the short-ranged strong force which causes protons to fuse together ends up causing less fusion) shows that the electromagnetic coupling will vary with time in the same way that gravitation does.
This invalidates Teller’s theory, because if you, for example, halve the value of G (making fusion more difficult by reducing the compression of protons long ago), you simultaneously get an electromagnetic coupling charge which is halved, and the effect of the latter is to increase fusion by reducing the Coulomb barrier which protons need to overcome in order to fuse. The two effects – reduced G, which tends to reduce fusion by reducing compression, and reduced Coulomb charge, which allows protons to approach closer before being repelled and therefore increases fusion – offset one another. Dirac wrongly suggested that G falls with time, because he believed that at early times G was as strong as electromagnetism and numerically ‘unified’; actually, all attempts to explain the universe by claiming that the fundamental forces including gravity are the same at a particular very early time/high energy are physically flawed and violate the conservation of energy. The whole reason why the strong force charge strength falls at higher energies is that it is being caused by pair-production of virtual particles, including virtual quarks accompanied by virtual gluons. This pair-production is a result of the electromagnetic charge, which increases at higher energy.
The electromagnetic force has been proved to cause pair-production (this is a major source of shielding of gamma rays above 1.022 MeV by nuclei with a high Coulomb charge like lead, and it has been very carefully studied for eighty years now using all the gear of particle physics, from the obsolete Wilson cloud chamber onwards), which produces the virtual particles, including mesons and gluons, which mediate the short-range interactions. By the principle of conservation of mass-energy, you can work out and predict exactly how this works. Electromagnetic charge increases with collision energy (and thus decreasing distance between particles) if the collision energy exceeds that which takes the particles close enough so that their electric field strengths exceed 1.3 x 10^18 V/m, Schwinger’s threshold for pair-production in the vacuum. Where the electric field exceeds this value, virtual fermions form a dielectric medium of polarized dipoles which on average tend to align to oppose the electric charge of the real particle core, reducing the value of the latter as observed from a large distance. The energy density of an electromagnetic field is precisely known from electromagnetism. Integrating it over successive radial distances, r + dr, where the charge is varying, tells you how much energy is being shielded by the polarized vacuum and is becoming available to power short-ranged nuclear forces at any particular distance. Conservation of mass-energy tells you, therefore, exactly how much energy is being used to create pairs of polarized charges at any given distance from a particle’s core. The textbook equations of quantum field theory don’t investigate this obvious physical approach to explaining the different forces; they instead simply find a logarithmic variation of effective charge as a function of energy between two cutoffs for the Standard Model forces.
The lower energy or ‘infrared’ cutoff must physically correspond to Schwinger’s pair-production threshold electric field strength, and the upper energy or ‘ultraviolet’ cutoff physically corresponds to some kind of ‘grain size’ in the Dirac sea, or – far more likely – to a minimum physical distance scale that is required for pair-production charges to become polarized before they annihilate back into bosonic field quanta. When two particles get very close, the strong nuclear charge decreases, because there is less shielding of the electromagnetic charge between them, and therefore less electromagnetic energy is being transformed by pair-production into strong force mediating pions and gluons. This is the physics, and it’s a checkable prediction, because you can calculate the details to see if they work out if you have the time.
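The infrared cutoff claim is easy to check numerically. Treating the electron core classically as a point charge (a sketch; vacuum-polarization corrections are ignored), the distance at which its Coulomb field falls to Schwinger’s 1.3 x 10^18 V/m threshold comes out at about 33 fm, the same charge-shielding radius used later in this post:

```python
import math

e = 1.602e-19     # electron charge, coulombs
k = 8.988e9       # Coulomb constant, N m^2 C^-2
E_CRIT = 1.3e18   # Schwinger pair-production threshold, V/m

# Point-charge field E = k*e/r^2; solve for r where E = E_CRIT:
r = math.sqrt(k * e / E_CRIT)
print(f"r ~ {r * 1e15:.1f} fm")  # ~33 fm
```

Outside this radius the vacuum cannot polarize, so the observed charge stops running and takes its familiar low-energy value.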
Mainstream string gatherings have the allure of attending a rock concert, i.e. social entertainment, while also maintaining some features of religious dogma and political inertia: a reluctance to listen to or investigate new ideas, except new ideas building on mainstream speculations such as string theory. When ‘peer’ reviewers are mainstream faith-based physicists, they are not the ‘peers’ of those who build on empirically confirmed facts; they are in competition with them. Expecting such ‘peers’ to behave ethically (i.e. recommend the publication of facts that don’t fit in with uncheckable mainstream Party speculation) is as irrational and misguided as expecting honest and decent behaviour from politicians or sellers of religious dogmas: they’re bored and repelled by down-to-earth, fact-based, checkable physics.
‘A Party member … is supposed to live in a continuous frenzy of hatred of foreign enemies and internal traitors … The discontents produced by his bare, unsatisfying life are deliberately turned outwards and dissipated by such devices as the Two Minutes Hate, and the speculations which might possibly induce a skeptical or rebellious attitude are killed in advance by his early acquired inner discipline … called, in Newspeak, crimestop. Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – Orwell, 1984.
Update (25 March 2008):
Maybe part of the problem here is that most people (including Catt) don’t grasp the fault in Maxwell’s electromagnetism:
Fig. 5: Maxwell’s error in electromagnetic theory and how it physically maps classical electromagnetism on to quantum field theory.
VITAL POINTS TO NOTE:
1. Maxwell and Heaviside claimed that a vacuum “displacement current” of polarized virtual charges occurs, with the process of polarization being a “displacement current” which closes the open circuit between the two conductors before the logic step has completed the full circuit (i.e., while the logic step is moving along the circuit at light velocity for the insulator which must be presumed to be a “dielectric”, even if a vacuum).
2. Julian Schwinger worked out that the quantum field theory vacuum only undergoes any polarization in electric fields above 1.3*10^18 V/m. Such fields don’t occur in computers, but they still work!
3. In each conductor, as the energy step passes a given location, the relatively loosely bound (conduction band) electrons get accelerated from a mean of zero to their full mean drift speed. This causes them to radiate and swap EM energy!!! This is the physical mechanism for what happens, replacing Maxwell’s mistaken “displacement current” with tested physics.
As Fig. 5 indicates, the electrons accelerate in opposite directions along each of the two conductors, so each conductor radiates a waveform of EM radiation which is the exact inversion of that from the other conductor. Hence, at a distance from the transmission line there is perfect cancellation by interference, removing any detectable signal. Thus, no net energy loss occurs due to the radiation. The sole effect of this radiation (ignored by Catt, and leading to a serious argument between us, even after I wrote an Electronics World cover story about Catt’s best invention) is that it is exchanged between the two conductors. This exchange is the physical mechanism that does the same job as Maxwell’s false pet theory of “displacement current”, which doesn’t exist: as Nobel Laureate Schwinger proved, the quantum field theory vacuum doesn’t polarize in electric fields below 1.3*10^18 volts/metre, and you don’t get that kind of field strength in radio waves or computers, where field strengths are very much lower.
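The cancellation argument is just linear superposition, and can be sketched in a few lines (arbitrary waveform and frequency, my illustration of the geometry, not a field-theory calculation): in the far field the path difference between the two conductors is negligible, so the inverted waveforms sum to zero, while between the conductors a residual remains.

```python
import math

FREQ = 1.0e9  # arbitrary 1 GHz logic-step frequency for illustration

def radiated_sum(t, delay=0.0):
    """Superpose the radiation from the two conductors: one waveform
    plus its exact inversion, offset by a path-difference delay."""
    wave = math.sin(2 * math.pi * FREQ * t)
    inverted = -math.sin(2 * math.pi * FREQ * (t - delay))
    return wave + inverted

# Far field (negligible path difference): exact cancellation.
print(max(abs(radiated_sum(n * 1e-10)) for n in range(50)))  # 0.0

# Near field (finite path difference between the conductors): a
# residual remains -- the energy exchanged between the conductors.
print(abs(radiated_sum(1.0e-10, delay=5e-12)) > 0.0)  # True
```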
(I’ve uploaded some vital background information on Catt’s vital background work to this blog here, here, here, here, here, here, here and here, since Wikipedia entries are being vandalised and some of Catt’s web pages are now disappearing owing to his long hospitalization.)
This changes the physical understanding of Maxwell’s equations: from it, we know that wherever Maxwell claimed “displacement currents” to exist, exchange radiation is occurring which produces the same forces and energy transfers but tells us about the previously hidden quantum field theory mechanism behind the quantum electromagnetic interaction.
Surely the quantum gravitational charge, mass, can be expected to behave as a first approximation like electric charge when accelerated.
Whereas the acceleration of electric charge produces an asymmetry in the field (which itself is mediated by gauge boson exchange radiation) that ripples outward as an observable transverse EM wave (mediated by numerous gauge bosons or field quanta), with gravity what you are doing is accelerating a mass (a unit gravitational charge), which introduces an asymmetry into the graviton exchange mechanism, that propagates as a gravitational wave (mediated by numerous gravitons, or field quanta).
Why introduce additional complexity? It looks as if the mechanism for gravitational waves is a perfect analogy to electromagnetic waves, and that the relative weakness of the gravitational waves is simply due to the relatively weakness of the gravitational coupling, as compared to electromagnetism.
Update (31 March 2008):
Fig. 6: Simplified depiction of the coupling scheme for mass to be given to Standard Model particles by a separate field, which is the man-in-the-middle between graviton interactions and electromagnetic interactions. A more detailed analysis of the model, with a couple of mathematical variations and some predictions of masses for different leptons and hadrons, is given in the earlier post here, and there are updates in other recent posts on this blog. In the case of quarks, the cores are so close together that they share the same ‘veil’ of polarized vacuum, so N quarks in close proximity (asymptotic freedom inside hadrons) boost the electric charge shielding factor by a factor N. So if you have three quarks of bare charge j each, the total charge is not jN but jN/N, where the N in the denominator accounts for the increased vacuum shielding. Obviously jN/N = j, so 3 electron-charge quarks in close proximity will only exhibit the combined charge of 1 electron, as seen at a distance beyond 33 fm from the core. Hence, in such a case, the apparent electric charge contribution per quark is only 1/N = 1/3, which is exactly what happens in the Omega Minus particle (which has 3 strange quarks of apparent charge -1/3 each, giving the Omega Minus a total apparent electric charge, as observed beyond 33 fm, of -1 unit). More impressively, this model predicts the masses of all leptons and hadrons, and also makes falsifiable predictions about the variation in coupling constants as a function of energy, which results from the conversion of electromagnetic field energy into short-range nuclear force field quanta as a result of pair-production of particles including weak gauge bosons, virtual quarks and gluons in the electromagnetic field at high energy (short distances from the particle core).
The energy lost from the electromagnetic field, due to vacuum polarization opposing the electric charge core, gets converted into short-range nuclear force fields. From the example of the Omega Minus particle, we can see that the electric charge per quark observable at long ranges is reduced from 1 to 1/3 unit due to the close proximity of three similarly charged quarks, as compared to a single particle core surrounded by polarized vacuum, i.e. a lepton (the Omega Minus is a unique, very simple situation; usually things are far more complicated, because hadrons generally contain pairs or triplets of quarks of different flavour). Hence, 2/3rds of the electric field energy that occurs when only one particle is alone in a polarized vacuum (i.e. a lepton) is used to generate short-ranged weak and strong nuclear force fields when three such particles are closely confined.
As discussed in earlier posts, the similarity of leptons and quarks has been known since 1964, when it was discovered by the Italian physicist Nicola Cabibbo: the rates of lepton interactions are identical to those of quarks to within just 4%, or one part in 25. The weak force when acting on quarks within one generation of quarks is identical to within 1 part in 25 of that when acting on leptons (although if the interaction is between two quarks of different generations, the interaction is weaker by a factor of 25). This similarity of quarks and leptons is called ‘universality’. Cabibbo brilliantly suggested that the slight (4%) difference between the action of the weak force on leptons and quarks is due to the fact that a lepton has only one way to decay, whereas a quark has two possible decay routes, with relative probabilities of 1/25 and 24/25, the sum being of course (1/25) + (24/25) = 1 (the same as that for a lepton). But because only the one quark decay route or the other (1/25 or 24/25) is seen in an experiment, the effective rates of quark interactions are lower than those for leptons. If the weak force involves an interaction between just one generation of quarks, it is 24/25 or 96% as strong as between leptons, but if it involves two generations of quarks, it is only 1/25th as strong as when mediating a similar interaction for leptons.
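In standard notation Cabibbo’s two weights are written as the cos^2 and sin^2 of a mixing angle. A minimal check (using the rounded 1/25 figure quoted above; the measured Cabibbo angle is nearer 13 degrees, since sin θ_C ≈ 0.22):

```python
import math

# The rounded branching weights for the two quark decay routes:
p_cross = 1.0 / 25.0    # between generations
p_within = 24.0 / 25.0  # within one generation

# Like a lepton's single route, the two routes exhaust the probability:
assert p_cross + p_within == 1.0

# Identify p_cross with sin^2(theta_C) to recover the implied angle:
theta_c = math.degrees(math.asin(math.sqrt(p_cross)))
print(f"implied Cabibbo angle ~ {theta_c:.1f} degrees")  # ~11.5
```

The ~11.5 degree value from the rounded 1/25 is close to, but not identical with, the measured angle; that gap is the rounding in the 4% figure.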
This is very strong evidence that quarks and leptons are fundamentally the same thing, just in a different disguise due to the way they are paired or grouped in triplets and ‘dressed’ by the surrounding vacuum polarization (electric charge shielding effects, and the use of energy to mediate short-range nuclear forces).
A quick but vital update about my research (particularly addressing the confusion in some of the comments to this blog post): I’ve obtained the physical understanding which was missing from the QFT textbooks by Weinberg, Ryder and Zee that I’ve been studying, from the 2007 edition of Professor Frank Close’s nicely written little book The Cosmic Onion, Chapter 8, ‘The Electroweak Force’.
Close writes that the field quantum of U(1) in the Standard Model is not the photon, but a neutral boson called B_{0}.
SU(2) gives rise to field quanta W_{+}, W_{−} and W_{0}. The photon and the Z_{0} both result from the Weinberg ‘mixing’ of the electrically neutral W_{0} from SU(2) with the electrically neutral B_{0} from U(1).
This is precisely the information I was looking for, which was not clearly stated in the QFT textbooks. It enables me to get a physical feel for how the mathematics works.
The Weinberg mixing angle determines how W_{0} from SU(2) and B_{0} from U(1) mix together to yield the photon (textbook electromagnetic field quanta) and the Z_{0} massive neutral weak gauge boson.
If the Weinberg mixing angle were zero, then we would have W_{0} = Z_{0} and B_{0} = electromagnetic photon. However, this simple scheme fails (although this failure is not made clear in any of the QFT textbooks I’ve read, which have obfuscated instead), and an ad hoc or fudged mixing angle of about 29 degrees (this is the angle between the Z_{0} and W_{0} phase vectors) is required.
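The mixing can be written as a simple two-state rotation. A minimal sketch (taking the modern empirical value sin²θ_W ≈ 0.23 as an assumed input) shows that the photon and Z_{0} are orthogonal combinations of B_{0} and W_{0}, and that θ_W = 0 would reduce to B_{0} = photon and W_{0} = Z_{0}:

```python
import math

# Weinberg mixing as described above: the photon and the Z_0 are a rotation
# of the U(1) boson B_0 and the neutral SU(2) boson W_0.
theta_w = math.asin(math.sqrt(0.23))  # empirical sin^2(theta_W) ~ 0.23, assumed

c, s = math.cos(theta_w), math.sin(theta_w)
photon = (c, s)      # photon =  B_0*cos(theta_W) + W_0*sin(theta_W)
z_boson = (-s, c)    # Z_0    = -B_0*sin(theta_W) + W_0*cos(theta_W)

# A rotation leaves the two mixed states orthogonal (independent):
dot = photon[0] * z_boson[0] + photon[1] * z_boson[1]
assert abs(dot) < 1e-12

# With theta_W = 0 the mixing disappears: photon = B_0, Z_0 = W_0.
print(f"theta_W ~ {math.degrees(theta_w):.1f} degrees")
```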
This mixing angle is physically unexplained in the Standard Model; it’s just an epicycle needed to make the model represent the experimentally known facts well enough to predict other things accurately (like Ptolemaic epicycles), just as force coupling constants and particle masses have to be put in by hand. Because neutrinos also mix as they propagate (changing flavour), there are mixing parameters there too. The Standard Model has 19 parameters which have to be put in by hand. My objective is to get away from such fiddled factors and to arrive at a theory which does more while requiring less speculative assertion. The total number of parameters that need to be supplied is far smaller than the Standard Model requires, because the model predicts the masses of leptons and quarks, and the force coupling parameters.
I haven’t yet had time to analyse the Weinberg mixing. My first reaction is that it is a failure of the Standard Model that you need to mix up the U(1) field quantum with the usually massive, electrically neutral weak field quantum from SU(2) in order to arrive at empirically useful descriptions of the electromagnetic field quanta and the weak field quanta. This is definitely being covered up by the textbooks, which obfuscate it terribly. In fact, the mixing angle is an epicycle or fudge factor which is needed to force an inaccurate physical description of electromagnetism to work. It has no natural explanation.
However, it is vital to understand what the existing theory is in order to get a complete grasp on what needs to go in its place. This is getting very interesting. Unfortunately, I will have no time for several weeks to work on this further.
Note that Professor Close was kind enough to email me back this evening within four hours of my emailing him about an error I spotted in his book (however, if I had sent a longer email with a paper, or a request for him to spend valuable time on my pet ideas, it would predictably have been a very different story):
From: Frank Close
To: Nigel Cook
Sent: Monday, March 31, 2008 10:48 PM
Subject: RE: The Cosmic Onion, 2007 ed., Fig 11.3 page 156
yes. well spotted. pythagaros requires 1+24 =25 (all over 25)
—–Original Message—–
From: Nigel Cook
Sent: Mon 3/31/2008 7:06 PM
To: Frank Close
Subject: The Cosmic Onion, 2007 ed., Fig 11.3 page 156
Dear Professor Close,
Should the small square in Fig 11.3 on p 156 of The Cosmic Onion (2007 ed.) be labelled 1/25 rather than 1/5? It seems to be a printing error.
Thanks for your clear discussion of universality in that book which I only discovered very recently. I’ve only done undergraduate physics at university, and am interested in quantum field theory, so it’s great to get some semi-popular discussions of basic stuff to supplement the more mathematical works by Weinberg, Ryder, Zee, et al.
Kind regards,
Nige Cook
http://quantumfieldtheory.org/
Update (23 April 2008): Below is the text of a comment, summarising what is missing from existing quantum mechanics and quantum field theory, in the moderation queue to:
http://egregium.wordpress.com/2008/03/30/legendarylecturesonqftbysidneycoleman/
Geroch’s Special Topics in Particle Physics are very concise and begin in a simple way, but soon become extremely technical.
Coleman’s notes on QFT (as written up by Tong) are slightly longer and more detailed, and in some ways address the key questions I have with QFT a lot better.
As a latecomer to QFT (I’ve only recently read Zee, Weinberg vols. I and II, and Ryder), it’s amazing that the structure of the theory is entirely based on classical field equations for the Lagrangian. I had read (before seeing the maths of QFT) that in principle the path integral can be used to model the motion of orbital electrons. Feynman gives an illustration of this in his book QED: according to Feynman, the random exchange of discrete Coulomb field quanta between electrons and the proton causes the chaotic motion and indeterminacy: fields are quantized, so they aren’t classical.
‘… when seen on a large scale, they [electrons, photons, etc.] travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from quantum interactions, each represented by a Feynman diagram] becomes very important, and we have to sum the arrows [amplitudes] to predict where an electron is likely to be.’
– R. P. Feynman, QED, Penguin, 1990, pp. 84-5.
This implies that the physical difference between Bohr’s atomic model and quantum mechanics is that the Coulomb field should be quantized properly. If you derived the Bohr model using a Coulomb force equation that correctly modelled the fluctuations in the electric attractive force on small scales (inside atoms), the key problem with Bohr’s incorrect model would be solved.
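For concreteness, here is the standard Bohr-model calculation being referred to (ordinary textbook physics, using the smooth classical Coulomb force; the argument above is that a fluctuating, quantized version of this force is what is missing):

```python
import math

# Bohr model: Coulomb attraction supplies the centripetal force,
# m*v^2/r = k*e^2/r^2, with angular momentum quantized as m*v*r = n*hbar.
hbar = 1.054571817e-34      # J s
m_e = 9.1093837e-31         # kg
e = 1.602176634e-19         # C
k = 8.9875517873681764e9    # Coulomb constant, N m^2 / C^2

n = 1
r = (n * hbar) ** 2 / (m_e * k * e ** 2)   # Bohr radius for n = 1
energy = -k * e ** 2 / (2 * r)             # total (kinetic + potential) energy
energy_ev = energy / e

print(f"r_1 ~ {r:.3e} m")          # ~5.29e-11 m
print(f"E_1 ~ {energy_ev:.2f} eV") # ~ -13.6 eV
```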
Instead of that, quantum mechanics leaves the classical Coulomb potential intact, and adopts a statistical wave equation with which to introduce indeterminacy. This leads to a lot of issues. QFT follows quantum mechanics in this model, using the classical Maxwell equations for things like the Coulomb potential.
QFT only quantizes the field indirectly, by integrating the Maxwell Lagrangian over spacetime. The integral is then evaluated as a perturbative expansion, with each term in the infinite series representing one Feynman interaction diagram (i.e., one category of interaction of the field quanta which contributes to the force). Since more complex interactions produce smaller forces, the expansion is convergent, and a few terms (the simplest Feynman diagrams in the infinite series) represent most of the actual interactions.
This seems to be the major selling point of QFT. However, the very fact that the perturbative expansion is convergent and only a few Feynman diagrams contribute significantly to the observed forces, suggests that nature is for practical purposes as simple as those few Feynman diagrams. For example, the first perturbative correction (corresponding to the Feynman diagram where there is a loop for a field quanta travelling from a magnet to the electron) for the magnetic moment of the electron that Schwinger calculated in 1948, only increases the magnetic moment of the electron by 0.116%. That’s trivial! The lesson here is surely that nature is basically very simple. So why introduce all the complex mathematics?
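Schwinger’s figure can be checked in one line: the first-order correction to the electron’s g-factor is α/(2π) ≈ 0.00116, i.e. the 0.116% quoted above, and each further order is suppressed by roughly another factor of α/π:

```python
import math

# Schwinger's 1948 one-loop correction to the electron's magnetic moment:
# g = 2*(1 + alpha/(2*pi)), an increase of about 0.116% over the Dirac value.
alpha = 1 / 137.035999  # fine-structure constant

correction = alpha / (2 * math.pi)
print(f"first-order correction: {100 * correction:.3f}%")  # ~0.116%

# Each extra loop order is suppressed by roughly another factor of alpha/pi,
# which is why only the first few Feynman diagrams matter at low energy.
print(f"rough size of next order: {100 * (alpha / math.pi) ** 2:.5f}%")
```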
The first thing to get a handle on is that Schroedinger’s non-relativistic equation and its relativistic variants, such as the Klein-Gordon and Dirac equations, are not causal models but statistical models. If they were properly representing the dynamics, you wouldn’t have a wave equation in there; you wouldn’t need it. By taking the classical model and correcting the Coulomb field equation to contain indeterminacy on small scales (caused by random field quanta interactions, rather than the classical continuum field represented by Maxwell’s equations), the chaotic interactions of field quanta would produce the indeterminacy which would lead to statistically wavy motions of orbital electrons.
So what has gone wrong in quantum mechanics and quantum field theory – assuming that Feynman’s analysis in QED suggests the correct physics – is that the field equations have never been properly quantized to simulate the chaotic, random interactions of field quanta with charges, producing forces. On the atomic scale, the chaos of field quanta in the Coulomb force results in all the wave effects and indeterminacy which are usually and falsely attributed to mathematics via the Schroedinger (plus Klein-Gordon and Dirac) time-dependent and time-independent equations.
Of course a wave equation such as Schroedinger’s is accurate statistically. If you look at sound waves in air, they are the statistical resultant of a massive number of very small chaotic interactions of air molecules hitting each other at average speeds of 500 m/s. That chaos appears to give rise to classical sound waves on large scales. Similarly, the time-averaged motion of the electron in an atom is well modelled by the Schroedinger time-independent wave equation. It’s not really a surprise. Mathematically, it’s even a correct model in the statistical limit that you average the position of the electron over a very long period of time, and the equation gives you the correct probability distribution of finding it anywhere.
What’s wrong is that quantum mechanics falsely claims that there is no better physical model. From Feynman’s argument, there is: the path integral! The whole physical reason why the electron suffers indeterminacy is the chaos of the field quanta, which gives a non-classical Coulomb potential. Instead of fixing that by making the Coulomb potential properly quantized (random on small scales of spacetime), the founders of quantum mechanics and quantum field theory ignored it completely, continued to use a classical smooth Coulomb potential, and introduced indeterminacy by employing a wave equation for the motion of the particle.
The resulting problems with wavefunction collapse are exactly analogous to trying to apply a sound wave equation to a single air molecule and ending up with all sorts of crazy interpretations of how the sound wave equation can tally with a single air molecule. Anyone can see that the problem here is that the sound wave equation is not suited to modelling a single air molecule, only a very large number of air molecules, so that the averaged motion corresponds statistically to a wave.
For some reason, the analogy between air molecules in classical pressure theory and field quanta in quantum field theory has been missed by the experts. Where you have a large rate of air molecule interactions, the result can be statistically modelled by classical wave equations and by the assumption that the resulting pressure is a constant. But on small scales (for example, when dealing with a pollen grain), the motion becomes chaotic and determinacy vanishes, because the impacts occur from random directions after random intervals of time. By analogy to this, the randomness of field quanta exchanges between atomic electrons and the proton in an atom creates indeterminacy. You can’t predict where the electron will be because you can’t predict when individual field quanta will interact with the electron.
For the life of me, I can’t see why the experts don’t see this, and move away from using classical calculus (the Maxwell-equation Lagrangian) in QFT, and towards a stochastic simulation (a Monte Carlo computer code, with random numbers – weighted in frequency according to the statistical distribution of occurrence – representing parameters for individual quantum interaction events). This would properly simulate quantum mechanics and quantum field theory, without the problems that you get with the use of continuously variable Lagrangians for quantized fields.
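A toy illustration of the kind of Monte Carlo simulation proposed here (all parameters are purely illustrative, not derived from any real field theory): a bound particle receiving discrete random kicks is individually unpredictable, yet its long-time-averaged distribution is smooth and stable:

```python
import random
import statistics

random.seed(1)

def simulate(steps, kick_probability=0.3, kick_size=1.0,
             restoring=0.1, damping=0.95):
    """1-D particle with a smooth restoring force plus random discrete kicks."""
    x, v = 0.0, 0.0
    positions = []
    for _ in range(steps):
        if random.random() < kick_probability:           # a quantum arrives...
            v += random.choice([-1.0, 1.0]) * kick_size  # ...from a random direction
        v = damping * v - restoring * x   # smooth pull back toward the centre
        x += v * 0.1
        positions.append(x)
    return positions

positions = simulate(100_000)
# Any individual position is unpredictable, but the time-averaged statistics
# are stable and centred on the origin, like a wave-equation distribution:
print(f"time-averaged position: {statistics.fmean(positions):.3f}")
print(f"spread: {statistics.stdev(positions):.3f}")
```

The damping and restoring terms here stand in for whatever keeps the real electron bound; the point is only that chaos at the level of individual events coexists with a smooth statistical distribution.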
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
copy of a comment:
http://motls.blogspot.com/2008/04/conspiracytheoriesaboutmagnets.html
http://www.haloscan.com/comments/lumidek/7922516010231994065/?a=32019#1028671
“Nope, gravitons (or gauge bosons) are not hidden variables. Gravitons (and gauge bosons) are actual measurable particles. Hidden variables are coordinates of some invisible and separately undetectable objects (particles) that should, according to the (wrong) theories of hidden variables, decide about the particular outcome of quantum experiments even though quantum mechanics can only predict the probabilities, not the particular outcomes – not even in principle. …
“As explained above, it is not true that Feynman has ever complained that Taylor expansions contain infinitely many terms because every person with an elementary qualification in maths – and Feynman belonged to that group since he was a kid – knows that virtually all functions have infinitely many terms when Taylor expanded. There is nothing wrong about it whatsoever.” – Lubos Motl
Thank you for saying that gauge bosons like gravitons are not hidden variables but real particles. Now you should be treating them like real particles, and making checkable calculations to prove it, instead of ignoring virtual particles and the chaotic effects they produce on individual electrons in an atom.
If you want to understand the simplest real interaction, your path integral is expanded into an infinite series of terms, each with a corresponding, distinct Feynman diagram. That corresponds to requiring an infinite number of gauge boson interactions even for events occurring in an arbitrarily small interval of time. This is the physical problem Feynman had. You have confused this problem (infinitely many Feynman diagram interactions in a tiny space and tiny time) with the maths of an infinite perturbative expansion. Of course in maths you can have an infinite number of terms in a Taylor series and it is not wrong; Feynman was objecting that it is wrong to have an infinite number of Feynman diagram interactions occurring between two electrons in any short period of time!
The gauge interaction equation which is written as the Lagrangian is an empirical equation, e.g., based on Maxwell’s equation treating the motion of charge as a current. Because it is an empirical equation (formulated by Maxwell from experimental observations by Faraday, Ampere, Gauss, et al.) it is a net result already containing the effects of all virtual particles. E.g., it already includes the observable effects of the Feynman diagram terms in the perturbative expansion.
When you place that empirical Lagrangian into the action for a path integral of all possible interactions and represent that integral as a perturbative expansion, you can work out the contributions of the different Feynman diagram interactions to the Lagrangian.
This is the physical way that the path integral and perturbative expansion converts classical empirical Maxwell equations into a quantized field containing discrete interactions, each represented pictorially by Feynman diagrams and mathematically by a term in the perturbative expansion which corresponds to the path integral.
The failure of the use of calculus in the path integral is that although you quantize the field into an infinite number of different Feynman diagrams (interactions or histories), you are not quantizing individual interactions, only categories of interactions which correspond to each Feynman diagram.
E.g., the interactions depicted in the simplest Feynman diagrams (which correspond to the first terms in the perturbative expansion) will each represent very common interactions of virtual particles if the perturbative expansion converges quickly (as is the case for self-interaction corrections for leptons).
The relative contribution of each term in a perturbative expansion is dependent upon how frequently that type of interaction actually occurs. Simple interactions occur more frequently than complex interactions, if the perturbative expansion converges rapidly.
As a good approximation (to two significant figures) in low energy lepton interactions, you can ignore all the Feynman diagram loops (self-interaction correction terms) in the perturbative expansion, because only the simplest (direct) interaction, a field quantum being exchanged between two charges, is important to an accuracy of two significant figures. The first loop correction for the magnetic moment of a lepton only increases the magnetic moment by 0.116%. It’s trivial.
Feynman diagrams with loops become important at three or more significant figures when calculating lepton interactions at low energy, but they are still only about 0.1% contributions! Hence 99.9% of low energy lepton physics is non-loop Feynman diagram contributions: the straight exchange of field quanta between charges. This is the priority in working on quantum gravity.
Feynman diagram loops (represented by the terms in a perturbative expansion) are only really big contributors for quarks or for lepton interactions at high energies.
Schwinger showed that there is no pair production in the vacuum at electromagnetic field strengths less than the Schwinger threshold, 1.3*10^18 volts/metre. Hence, there are no loops when gauge bosons are exchanged in the weaker fields which exist at more than a few femtometres from an electron or a proton.
Loops (for instance the brief conversion of a field boson into a pair of fermions that start to polarize in the field, disturbing the field before they attract and annihilate back into a field boson) only occur at very short distances from charges, where the field is strong. For the purpose of making predictions of gravitation at low energy over large distances from charges, there are no loops in the vacuum and everything is very simple.
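A rough classical estimate of where that threshold is crossed (ignoring vacuum polarization and quantum corrections, so an order-of-magnitude sketch only): setting the Coulomb field of an electron equal to Schwinger’s threshold gives a radius of a few tens of femtometres:

```python
import math

# Distance from an electron at which its classical Coulomb field falls to the
# Schwinger pair-production threshold of ~1.3e18 V/m. (Vacuum polarization and
# quantum corrections are ignored here, so this is only indicative.)
e = 1.602176634e-19        # C
k = 8.9875517873681764e9   # Coulomb constant, N m^2 / C^2
E_schwinger = 1.3e18       # V/m

# Solve k*e/r^2 = E_schwinger for r:
r = math.sqrt(k * e / E_schwinger)
print(f"r ~ {r:.1e} m ~ {r * 1e15:.0f} fm")  # a few tens of femtometres
```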
A simulation for the exchange of gauge bosons could therefore find the correct theory of quantum gravity in the low-energy limit (which corresponds to general relativity), by ignoring loop terms in the perturbative expansion. This will make the interactions direct and simple, allowing checkable predictions.
This is the simplicity Feynman was referring to, I think.
“In quantum gravity, the entropy in a volume – the logarithm of the dimension of the required effective Hilbert space – is bounded by the area of the region and is thus finite. In this sense, every finite region does behave as a computer with finitely many registers. Except that the number of registers is still huge because it equals the area over four times the Planck area, and the Planck area in the denominator is extraordinarily tiny.”
The Planck area is a ‘not even wrong’ conjecture, because the Planck scale has never been observed. There is no evidence for that particular scale. Planck could have dimensionally written the “fundamental” length not as the
(hbar * G/c^3)^{1/2} = 1.6*10^{−35} metre
(which is the Planck length) but instead as the smaller more fundamental size for the electron core,
2GM/c^2 = 1.353*10^{−57} metre. ( http://en.wikipedia.org/wiki/Black_hole_electron )
Wikipedia defends the Planck scale as follows:
‘The Planck length is deemed “natural” because it can be defined from three fundamental physical constants: the speed of light, Planck’s constant, and the gravitational constant.’ – http://en.wikipedia.org/wiki/Planck_length
But it’s far more “natural” to choose the three constants to be the speed of light, the electron mass and the gravitational constant, because then the length you get is the size of a black hole of electron mass! This has physical meaning because it suggests that the physics of the core of a fermion is that of a black hole. It is also a smaller, and therefore more fundamental, unit of length than the Planck scale.
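The two candidate lengths can be compared directly; each is built from three constants by dimensional analysis, using standard constant values:

```python
import math

G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s
hbar = 1.054571817e-34  # J s
m_e = 9.1093837e-31     # kg (electron mass)

planck_length = math.sqrt(hbar * G / c ** 3)   # built from hbar, G, c
electron_horizon = 2 * G * m_e / c ** 2        # built from m_e, G, c (Schwarzschild radius)

print(f"Planck length:              {planck_length:.2e} m")    # ~1.6e-35 m
print(f"Electron horizon (2GM/c^2): {electron_horizon:.2e} m") # ~1.35e-57 m
```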
[End of comment to Lubos' blog]
*******************************
Here’s a brief comment about the vague concept of a ‘zero point field’, which unhelpfully ignores the differences between the fundamental forces and mixes up gravitational and electromagnetic field quanta interactions to create a muddle. There isn’t only one field acting on ground state electrons: there is gravity and there is electromagnetism, with different field quanta needed to explain why one is always attractive while the other is attractive only between unlike charges and repulsive between similar charges, not to mention the factor of about 10^40 difference in strength between those forces. Traditional calculations of that ‘field’ give a massive energy density to the vacuum, far higher than that observed with respect to the small positive cosmological constant in general relativity. However, two separate force fields are there being confused. The estimates of the ‘zero point field’ which are derived from electromagnetic phenomena, such as electrons in the ground state of hydrogen being in an equilibrium of emission and reception of field quanta, have nothing to do with the graviton exchange that causes the cosmic expansion (Figure 1 above has the mechanism for that). There is some material about traditional ‘zero point field’ philosophy on Wikipedia.
Added on 8 May 2008: Extract from an email to SM:
If you come up with a really original idea, nobody exists to act as a “peer reviewer” because you don’t have any peers in that new discipline.
E.g., Hubble’s law for recession is simply v = HR: recession velocity is directly proportional to distance R. By calculus, acceleration a = dv/dt = d(HR)/dt = H*(dR/dt) + R*(dH/dt) = Hv + 0 = H(HR) = RH^2 (taking H to be constant in time, so dH/dt = 0).
This proof that a = RH^2 is very simple science, but is nevertheless too “out of the box” to be taken seriously by cranks like the string theorists who are “peer-reviewers” (better named “rival-reviewers” or “mainstream cranks”)!
This result tells you that expansion of the universe is the acceleration of the universe, and it gives you a quantitative prediction of the acceleration of the universe using just Hubble’s law.
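Numerically, taking H ≈ 70 km/s/Mpc (an assumed representative value) and evaluating a = RH² at the greatest distance R = c/H:

```python
# The differentiation above, evaluated numerically, assuming H is constant in
# time (dH/dt = 0, as in the text): a = dv/dt = H*dR/dt = Hv = R*H^2.
H = 2.27e-18        # Hubble parameter in SI units (~70 km/s/Mpc), assumed value
c = 2.99792458e8    # m/s

R = c / H           # greatest distance, where v -> c
a = R * H ** 2      # equivalently a = c*H at that distance
print(f"a = R*H^2 ~ {a:.1e} m/s^2")   # ~7e-10 m/s^2
```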
Not only that, but by using Newton’s second and third laws of motion with it, it enables you to predict gravity and to prove the mechanism of gravity. Simply, Newton’s second law is F = dp/dt ~ ma, which gives you an outward force (radially outward in all directions from us, which can be understood simply by analogy to the net outward force of the pressure wave in an explosion; the so-called dynamic or drag pressure of a blast wave is the outward-directed pressure component of a shock wave in an explosion, and its outward force is its pressure multiplied by the spherical surface area of the shock).
By Newton’s 3rd law – action and reaction are equal and opposite – we then get an equal inward-directed force, which from the possibilities (vector bosons) of quantum field theory, seems to be mediated by gravitons. This allows checkable predictions of the force of gravity: http://nige.wordpress.com/
Maybe all of this fact-based physics I’ve done is “crackpot”, and all the hocus-pocus based string theory graviton non-physics (e.g. see my site http://quantumfieldtheory.org/ for proof it is nonsense) is correct; but I’ve got evidence and string theory doesn’t. If society wants to throw the “crackpot” label at pseudoscience, it should do so at string theory, not at entirely fact-based theories which make tested, confirmed predictions (even if string theorists run mainstream journal peer-review teams and censor out factual alternatives to stringy spin-2 graviton trash).
Dr Samuel Glasstone authored 40 different technical textbooks ranging from “The Effects of Nuclear Weapons” to “Theoretical Chemistry” and “Nuclear Reactor Engineering”. Recently I found a very enlightening essay from one of his coauthors: http://www.garfield.library.upenn.edu/classics1988/A1988Q713800001.pdf
The article is by Glasstone’s co-author Keith J. Laidler, an Oxford graduate who went to Princeton University aged 22 in 1938 to do a PhD under Henry Eyring. Eyring had come up with a controversial theory of chemical reaction rates in 1935, which had been initially rejected by the Journal of Chemical Physics.
As a result of the lack of comprehension which his theoretical paper was greeted with, he realised that he would need to write a book in order to:
“present the basic theory in a fairly detailed way, discuss its implications and assumptions, and apply it to rate processes of various kinds. Eyring knew that he would find it difficult to settle down to long sessions of writing, which are necessary to produce a book. He therefore invited me to collaborate with him on the book, with the arrangement to be that I would do the actual writing, in regular consultation with him.”
Laidler undertook the writing of the book while working on his PhD, and Samuel Glasstone took over the editing in the Summer of 1939 when he arrived at Princeton:
“In the summer of 1939 Samuel Glasstone arrived in Princeton as a research associate in the Department of Chemistry. Glasstone, then aged about 40, had already had a successful research career at the University of Sheffield and was the author of several very successful books on physical chemistry. In view of his background, it was natural to enlist his help with the writing of the book, especially since it would be necessary for me to leave Princeton in 1940 to carry out war research. I provided Glasstone with everything I had written and continued to give him material as I wrote it during my second year at Princeton. At the same time, Glasstone, Eyring, and I collaborated on research on overvoltage, a subject on which Glasstone had previously worked. Glasstone greatly supplemented the material I gave him for the book, and he put everything into final form. Eyring himself did hardly any of the writing, but he made numerous and valuable comments on everything we wrote, and I well remember many vigorous but always very friendly arguments on a number of fundamental points. Although World War II interrupted most basic scientific work for a few years after the book’s publication in 1941, the book attracted much attention from the start, particularly as it was the first comprehensive treatment of the new rate theory and of its applications to a variety of chemical and physical processes; it also contained a good deal of previously unpublished material. Records of sales during the war years have been lost, but probably at least 10,000 copies were sold during that period. After 1947 about 10,000 further copies were sold until the book went out of print in 1970. The Science Citation Index shows that it has been frequently quoted and that it has been Eyring’s most often cited publication. In 1948 a pirated Russian translation of the book appeared, and there have also been Japanese and Spanish editions.”
This particular anecdote about the reason for writing a book and how collaboration worked is very interesting!
Update (9 May 2008): copy of comments to http://keamonad.blogspot.com/2008/05/thoofttalk.html
… I was really amazed to learn that the weak mixing angle as an ad hoc fix literally mixes up the U(1) gauge boson with the neutral SU(2) gauge boson to produce something that fits the description of the gauge boson of electromagnetism.
It’s simply not true that U(1) represents electromagnetism and SU(2) the weak interaction: instead the Standard Model weak mixing angle blends the neutral gauge boson properties of U(1) and of SU(2) (where the massiveness of the SU(2) neutral gauge boson isn’t inherently natural, but has to be explained by an external agency, the Higgs field; i.e. the intrinsic mass of the SU(2) gauge bosons is zero and they acquire virtual mass from Higgs bosons).
The gauge boson of U(1) is B, which isn’t observed in nature, and the neutral gauge boson of SU(2) is W_0, which again isn’t observed in nature. The ad hoc “epicycle” of mixing the B from U(1) with the W_0 from SU(2) yields two mixed up combinations, the observed electromagnetic gauge boson and the massless version of the observed Z_0 weak gauge boson.
So even the electroweak sector of the Standard Model is a messy ad hoc theory. I look forward to reading ‘t Hooft’s paper and seeing if it sheds light on the kind of mathematics which I’m interested in at the moment…
Wow! I’ve just started to read the ‘t Hooft paper and was struck by his slide which states:
The radius of the event horizon of a black hole electron is of the order of 1.4*10^{−57} m, the equation being simply r = 2GM/c^2 where M is the electron mass.
Compare this to the Planck length, 1.6*10^{−35} metres, which is a dimensional-analysis-based (non-physical) length far larger in size, yet historically claimed to be the smallest physically significant size!
The black hole length equation differs from the Planck length equation principally in that Planck’s equation includes Planck’s constant h and doesn’t include the electron mass. Both equations contain c and G. The choice of which is the more fundamental equation should be based on physical criteria, not groupthink or the vagaries of historical precedent.
The Planck length is complete rubbish: it’s not based on physics, it’s unchecked physically, it’s ‘not even wrong’ uncheckable speculation.
The smaller black hole size is checkable because it causes physical effects. According to the Wikipedia page: http://en.wikipedia.org/wiki/Black_hole_electron
“A paper titled “Is the electron a photon with toroidal topology?” by J. G. Williamson and M. B. van der Mark, describes an electron model consisting of a photon confined in a closed loop. In this paper, the confinement method is not explained. The Wheeler suggestion of gravitational collapse with conserved angular momentum and charge would explain the required confinement. With confinement explained, this model is consistent with many electron properties. This paper argues (page 20) “–that there exists a confined singlewavelength photon state, (that) leads to a model with nontrivial topology which allows a surprising number of the fundamental properties of the electron to be described within a single framework.” “
My papers in Electronics World, August 2002 and April 2003, similarly showed that an electron is physically identical to a confined charged photon trapped into a small loop by gravitation (i.e., a massless SU(2) charged gauge boson which has not been supplied with mass from the Higgs field; the detailed way that the magnetic field curls cancel when such energy goes round in a loop, or alternatively is exchanged in both directions between charges, prevents the usual infinite-magnetic-self-inductance objection to the motion of charged massless radiation).
The Wiki page on black hole electrons then makes claims which are wrong: all of its “objections” are based on flawed versions of Hawking’s black hole radiation theory, which neglect a lot of vital physics that makes the correct theory more subtle.
See the Schwinger equation for pair production field strength requirements: equation 359 of the mainstream work http://arxiv.org/abs/quant-ph/0608140, or equation 8.20 of the mainstream work http://arxiv.org/abs/hep-th/0510040.
First of all, Schwinger showed that you can’t get spontaneous pair production in the vacuum if the electromagnetic field strength is below the critical threshold of 1.3 × 10^18 volts/metre.
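The Schwinger critical field E_c = m_e² c³ / (e ħ) can be checked numerically against the 1.3 × 10^18 V/m figure; a minimal sketch using standard constants:

```python
c = 2.998e8        # speed of light, m/s
h_bar = 1.055e-34  # reduced Planck constant, J s
m_e = 9.109e-31    # electron rest mass, kg
e = 1.602e-19      # elementary charge, C

# Schwinger critical field for spontaneous pair production in the vacuum
E_c = m_e**2 * c**3 / (e * h_bar)
print(f"Schwinger critical field: {E_c:.2e} V/m")  # ~1.3e18 V/m
```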
Hawking’s radiation theory requires pair production, because his explanation is that pair production must occur near the event horizon of the black hole.
One virtual fermion falls into the black hole, and the other escapes from the black hole and thus becomes a “real” particle (i.e., one that doesn’t get drawn to its antiparticle and annihilated into bosonic radiation after the brief Heisenberg uncertainty time).
In Hawking’s argument, the black hole is electrically uncharged, so this mechanism of randomly escaping fermions allows them to annihilate into real gamma rays outside the event horizon, and Hawking’s theory describes the emission spectrum of these gamma rays (they are described by a black body type radiation spectrum with a specific equivalent radiating temperature).
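For context, the equivalent radiating temperature in Hawking’s standard theory is the black-body formula T = ħc³/(8πGMk_B). As a rough sketch, evaluating it for an electron-mass black hole (the case discussed below, and an illustrative choice here) gives an enormous temperature:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
h_bar = 1.055e-34  # reduced Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K
m_e = 9.109e-31    # electron rest mass, kg

# Standard Hawking black-body temperature: T = h-bar c^3 / (8 pi G M k_B)
T = h_bar * c**3 / (8 * math.pi * G * m_e * k_B)
print(f"Hawking temperature for an electron-mass black hole: {T:.2e} K")  # ~1.3e53 K
```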
The problem is that, if the black hole does need pair production at the event horizon in order to produce gamma rays, this won’t happen the way Hawking suggests.
The electric charge needed to produce Schwinger’s 1.3 × 10^18 v/m field, which is the minimum needed to cause pair-production/annihilation loops in the vacuum, will modify Hawking’s mechanism.
Instead of virtual positrons and virtual electrons both having an equal chance of falling into the real core of the black hole electron, what will happen is that the pair will be on average polarized, with the virtual positron moving further towards the real electron core, and therefore being more likely to fall into it.
So, statistically you will get an excess of virtual positrons falling into an electron core and an excess of virtual electrons escaping from the black hole event horizon of the real electron core.
From a long distance, the sum of the charge distribution will make the electron appear to have the same charge as before, but the net negative charge will then come from the excess electrons around the event horizon.
Those electrons (produced by pair production) can’t annihilate into gamma rays, because not enough virtual positrons are escaping from the event horizon to enable them to annihilate.
This really changes Hawking’s theory when applied to fundamental particles as radiating black holes.
Black hole electrons radiate negatively charged massless radiation: gauge bosons. These are the Hawking radiation from black hole electrons. The electrons don’t evaporate to nothing, because they are all evaporating and therefore all receiving radiation in equilibrium with emission.
This is part of the reason why SU(2), rather than U(1) x SU(2), looks to me like the best way to deal with electromagnetism as well as the weak and gravitational interactions! By simply getting rid of the Higgs mechanism and replacing it with something that provides mass to only a proportion of the SU(2) gauge bosons, we end up with massless charged SU(2) gauge bosons which mimic the charged, force-causing Hawking radiation from black hole fermions. The massless neutral SU(2) gauge boson is then a spin-1 graviton, which fits in nicely with a quantum gravity mechanism that makes checkable predictions and is compatible with checked parts of approximations such as general relativity and quantum field theory.
********
Dr Peter Woit has a couple of new posts up. One is called “So what will you do if string theory is wrong?” and it quotes a draft paper of that title by string theorist Moataz Emam which is to be published in the American Journal of Physics:
“So even if someone shows that the universe cannot be based on string theory, I suspect that people will continue to work on it. It might no longer be considered physics, nor will mathematicians consider it to be pure mathematics. I can imagine that string theory in that case may become its own new discipline; that is, a mathematical science that is devoted to the study of the structure of physical theory and the development of computational tools to be used in the real world. The theory would be studied by physicists and mathematicians who might no longer consider themselves either. …
“Whether or not string theory describes nature, there is no doubt that we have stumbled upon an exceptionally huge and elegant structure which might be very difficult to abandon. …”
I’ll have to get around to adding that to my domain http://quantumfieldtheory.org/ when I update it.
Another of Dr Woit’s new posts is “Witten on Dark Energy”. This contains the following text:
The crucial point of course is … how can you ever test these [string theory landscape of dark energy] ideas, making them real science and not metaphysics? At the end of [Edward Witten's] talk, Rachel Bean tried to pin him down on this question, leading to this exchange:
Bean: “If we have this landscape, this multiverse, … can we learn nothing, or is there some hope, do you have some hope, that if you were to find a universe that had remarkably small CC [cosmological constant, i.e. the term measuring the dark energy needed to model cosmological acceleration] you could also make some allusion to the other properties of that universe for example the fine structure constant, or are we saying that all of these things are random variables, uncorrelated and we’ll never get an insight.”
Witten: “Well, I don’t know of course, I’m hoping that we’ll learn more, perhaps the LHC will discover supersymmetry and maybe other unexpected discoveries will change the picture. I wasn’t meaning to advocate anything.”
Bean: “I’m asking your opinion.”
Witten (after a silence): “I don’t really know what to think has got to be the answer…”
Update (16 May 2008):
Heaviside, Wolfgang Pauli, and Bell on the Lorentz spacetime
There are a couple of nice articles by Professor Harvey R. Brown of Oxford University (he’s the Professor of the Philosophy of Physics there; see http://users.ox.ac.uk/~brownhr/): http://philsci-archive.pitt.edu/archive/00000987/00/Michelson.pdf and http://philsci-archive.pitt.edu/archive/00000218/00/Origins_of_contraction.pdf
The former paper states:
“… in early 1889, when George Francis FitzGerald, Professor of Natural and Experimental Philosophy at Trinity College Dublin, wrote a letter to the remarkable English autodidact, Oliver Heaviside, concerning a result the latter had just obtained in the field of Maxwellian electrodynamics.
“Heaviside had shown that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the ether. In this letter, FitzGerald asked whether Heaviside’s distortion result—which was soon to be corroborated by J. J. Thompson—might be applied to a theory of intermolecular forces. Some months later, this idea would be exploited in a letter by FitzGerald published in Science, concerning the baffling outcome of the 1887 ether-wind experiment of Michelson and Morley. … It is famous now because the central idea in it corresponds to what came to be known as the FitzGerald-Lorentz contraction hypothesis, or rather to a precursor of it. This hypothesis is a cornerstone of the ‘kinematic’ component of the special theory of relativity, first put into a satisfactory systematic form by Einstein in 1905. But the FitzGerald-Lorentz explanation of the Michelson-Morley null result, known early on through the writings of Lodge, Lorentz and Larmor, as well as FitzGerald’s relatively timid proposals to students and colleagues, was widely accepted as correct before 1905—in fact by the time of FitzGerald’s premature death in 1901. Following Einstein’s brilliant 1905 work on the electrodynamics of moving bodies, and its geometrization by Minkowski which proved to be so important for the development of Einstein’s general theory of relativity, it became standard to view the FitzGerald-Lorentz hypothesis as the right idea based on the wrong reasoning. I strongly doubt that this standard view is correct, and suspect that posterity will look kindly on the merits of the pre-Einsteinian, ‘constructive’ reasoning of FitzGerald, if not Lorentz. After all, even Einstein came to see the limitations of his own approach based on the methodology of ‘principle theories’.
I need to emphasise from the outset, however, that I do not subscribe to the existence of the ether, nor recommend the use to which the notion is put in the writings of our two protagonists (which was very little). The merits of their approach have, as J. S. Bell stressed some years ago, a basis whose appreciation requires no commitment to the physicality of the ether.
“…Oliver Heaviside did the hard mathematics and published the solution [Ref: O. Heaviside (1888), ‘The electromagnetic effects of a moving charge’, Electrician, volume 22, pages 147–148]: the electric field of the moving charge distribution undergoes a distortion, with the longitudinal components of the field being affected by the motion but the transverse ones not. Heaviside [1] predicted specifically an electric field of the following form …
“In his masterful review of relativity theory of 1921, the precocious Wolfgang Pauli was struck by the difference between Einstein’s derivation and interpretation of the Lorentz transformations in his 1905 paper [12] and that of Lorentz in his theory of the electron. Einstein’s discussion, noted Pauli, was in particular “free of any special assumptions about the constitution of matter”6, in strong contrast with Lorentz’s treatment. He went on to ask:
‘Should one, then, completely abandon any attempt to explain the Lorentz contraction atomistically?’
“It may surprise some readers to learn that Pauli’s answer was negative. …
“[John S.] Bell’s model has as its starting point a single atom built of an electron circling a much more massive nucleus. Ignoring the back-effect of the electron on the nucleus, Bell was concerned with the prediction in Maxwell’s electrodynamics as to the effect on the two-dimensional electron orbit when the nucleus is set gently in motion in the plane of the orbit. Using only Maxwell’s equations (taken as valid relative to the rest frame of the nucleus), the Lorentz force law and the relativistic formula linking the electron’s momentum and its velocity—which Bell attributed to Lorentz—he determined that the orbit undergoes the familiar longitudinal “Fitzgerald” contraction, and its period changes by the familiar “Larmor” dilation. Bell claimed that a rigid arrangement of such atoms as a whole would do likewise, given the electromagnetic nature of the interatomic/molecular forces. He went on to demonstrate that there is a system of primed variables such that the description of the uniformly moving atom with respect to them is the same as the description of the stationary atom relative to the original variables—and that the associated transformations of coordinates are precisely the familiar Lorentz transformations. But it is important to note that Bell’s prediction of length contraction and time dilation is based on an analysis of the field surrounding a (gently) accelerating nucleus and its effect on the electron orbit.12 The significance of this point will become clearer in the next section. …
“The difference between Bell’s treatment and Lorentz’s theorem of corresponding states that I wish to highlight is not that Lorentz never discussed accelerating systems. He didn’t, but of more relevance is the point that Lorentz’s treatment, to put it crudely, is (almost) mathematically the modern change-of-variables, based-on-covariance, approach but with the wrong physical interpretation. …
“It cannot be denied that Lorentz’s argumentation, as Pauli noted in comparing it with Einstein’s, is dynamical in nature. But Bell’s procedure for accounting for length contraction is in fact much closer to FitzGerald’s 1889 thinking based on the Heaviside result, summarised in section 2 above. In fact it is essentially a generalization of that thinking to the case of accelerating bodies. It is remarkable that Bell indeed starts his treatment recalling the anisotropic nature of the components of the field surrounding a uniformly moving charge, and pointing out that:
‘In so far as microscopic electrical forces are important in the structure of matter, this systematic distortion of the field of fast particles will alter the internal equilibrium of fast moving material. Such a change of shape, the Fitzgerald contraction, was in fact postulated on empirical grounds by G. F. Fitzgerald in 1889 to explain the results of certain optical experiments.’
“Bell, like most commentators on FitzGerald and Lorentz, prematurely attributes to them length contraction rather than shape deformation (see above). But more importantly, it is not entirely clear that Bell was aware that FitzGerald had more than “empirical grounds” in mind, that he had essentially the dynamical insight Bell so nicely encapsulates.
“Finally, a word about time dilation. It was seen above that Bell attributed its discovery to J. Larmor, who had clearly understood the phenomenon in 1900 in his Aether and Matter [21]. 16 Indeed, it is still widely believed that Lorentz failed to anticipate time dilation before the work of Einstein in 1905, as a consequence of failing to see that the “local” time appearing in his own (second-order) theorem of corresponding states was more than just a mathematical artifice, but rather the time as read by suitably synchronized clocks at rest in the moving system. …
“One of Bell’s professed aims in his 1976 paper on ‘How to teach relativity’ was to fend off “premature philosophizing about space and time” 19. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatiotemporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of spacetime (Galilean or Minkowskian, say) it is immersed in? 20 Some critics of Bell’s position may be tempted to appeal to the general theory of relativity as supplying the answer. After all, in this theory the metric field is a dynamical agent, both acting and being acted upon by the presence of matter. But general relativity does not come to the rescue in this way (and even if it did, the answer would leave special relativity looking incomplete). Indeed the Bell-Pauli-Swann lesson—which might be called the dynamical lesson—serves rather to highlight a feature of general relativity that has received far too little attention to date. It is that in the absence of the strong equivalence principle, the metric g_μν in general relativity has no automatic chronometric operational interpretation. 21 For consider Einstein’s field equations … A possible spacetime, or metric field, corresponds to a solution of this equation, but nothing in the form of the equation determines either the metric’s signature or its operational significance. In respect of the last point, the situation is not wholly dissimilar from that in Maxwellian electrodynamics, in the absence of the Lorentz force law. In both cases, the ingredient needed for a direct operational interpretation of the fundamental fields is missing.”
Interesting recent comment by anon. to Not Even Wrong:
Even a theory which makes tested predictions isn’t necessarily the truth, because there might be another theory which makes all the same predictions plus more. E.g., Ptolemy’s excessively complex and fiddled epicycle theory of the Earth-centred universe made many tested predictions about planetary positions, but belief in it led to the censorship of an even better theory of reality.
Hence, I’d be suspicious of whether the multiverse is the best theory, even if it did have a long list of tested predictions, because there might be some undiscovered alternative theory which is even better. Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools. Mixing beliefs with science quickly makes the fundamental revision of theories a complete heresy. Scientists shouldn’t begin believing that theories are religious creeds.
Update (22 May 2008):
I now understand quantum field theory sufficiently well to begin writing a book about it. What is interesting is the marketing perspective on this subject. I think Figure 1 on this blog post, for instance, is a very clear statement of the facts. However, those people unfamiliar with Feynman diagrams will not appreciate why the time and spatial distance axes are reversed from the usual display. Hence, any attempt to appeal to the technically educated will automatically alienate the ignorant masses further. I’ll have to explain things from both a non-technical and a technical perspective in the book. Glasstone’s books usually had chapters written in two sections: a non-technical section first, followed by a section with the more mathematical and technical material. Possibly this is what I will need to do with each chapter of the planned book, to cater for the widest possible audience without leaving out crucial technical evidence.
David Holloway’s book, Stalin and the Bomb, is noteworthy for analysing Stalin’s state of mind over American proposals for pacifist anti-proliferation treaties after World War II. Holloway demonstrates in the book that any humility or goodwill shown to Stalin by his opponents would be taken by Stalin as (1) evidence of exploitable weakness and stupidity, or (2) a suspicious trick. Stalin would not accept goodwill at face value: either it marked an exploitable weakness of the enemy, or else it indicated an attempt to trick Russia into remaining weaker than America. Under such circumstances (which some would attribute to Stalin’s paranoia, and others to his narcissism), there was absolutely no chance of reaching an agreement for peaceful control of nuclear energy in the postwar era. (However, Stalin had no qualms about making the Soviet-Nazi peace pact with Hitler in 1939, to invade Poland and murder people. Stalin found it easy to trust a fellow dictator because he thought he understood dictatorship, and was astonished to be double-crossed when Hitler invaded Russia two years later.) Similarly, the facts on this blog post (the 45th post on this blog) and in previous posts are assessed the same way by the mainstream: they are ignored, not checked or investigated properly. Everyone thinks that they have nothing to gain from a theory based on solid, empirical facts! Once I have written the book, I will update http://quantumfieldtheory.org/ and make the book freely available, with some video explanations.
Update (25 May 2008):
There isn’t any “curved” smooth classical spacetime; that is just an approximation using calculus to represent the effects of discrete field quanta being exchanged between gravitational charges composed of mass or energy.
The universe isn’t curved: this was discovered by Perlmutter around 1998, when it was found that the predicted curvature (gravitational deceleration), as assessed from the redshifts of distant supernovae, was absent.
Einstein wrote to Besso in 1954:
“I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included…”
Quantum field theory is something that definitely needs to be considered by the mainstream more realistically than the half-baked, non-predictive approach taken in so-called string ‘theory’ (there isn’t a string theory; there are 10^500 variants, all different, so it’s not quantitatively predictive science).
Rutherford and Bohr were extremely naive in 1913 about the electron “not radiating” endlessly. They couldn’t grasp that in the ground state, all electrons are radiating gauge bosons at the same rate they are receiving them; hence the equilibrium of emission and absorption of energy when an electron is in the ground state, and the fact that the electron has to be in an excited state before an observable photon emission can occur:
“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”
– Rutherford to Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons, which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)
The ground state energy, and thus the frequency of the orbital oscillation of an electron, is determined by the average rate of exchange of electromagnetic gauge bosons between electric charges. So it’s really the dynamics of quantum field theory (e.g. the exchange of gauge boson radiation between all the electric charges in the universe) which explains the reason for the ground state in quantum mechanics. Likewise, as Feynman showed in QED, the quantized exchange of gauge bosons between atomic electrons is a random, chaotic process, and it is this chaotic quantum nature of the electric field on small scales which makes the electron jump around unpredictably in the atom, instead of obeying the false (smooth, non-quantized) Coulomb force law and describing nice elliptical or circular orbits.
Update (27 May 2008):
The comment immediately above is taken from a submission to Kea’s Arcadian Functor blog. Apparently, Dr Sheppeard didn’t much like the comment with respect to Rutherford, who came from New Zealand, and thought I was calling him an idiot. Actually, I was just pointing out that even great geniuses can make mistakes, miss mechanisms, and so on. I didn’t call him an idiot, and I have the greatest respect for him as regards the useful work he did in nuclear physics (interpreting Geiger and Marsden’s experimental results correctly) and many other things. Notice that Rutherford did make other blunders which I haven’t even mentioned; for example, he said:
“Anyone who expects a source of power from the transformation of the atom is talking moonshine” (Quoted by Richard Rhodes, “The Making of the Atomic Bomb”, Simon and Schuster, 1986.)
FURTHER READING (SELECTED POSTS ON THIS BLOG WHICH ARE RELEVANT TO BUT MORE EXTENSIVE THAN THIS POST, although note that some of the older posts are obsolete or in error):
3. https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/
7. https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/
8. https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/
9. https://nige.wordpress.com/2007/08/27/string-theory-versus-physical-facts-in-scientific-american/
10. https://nige.wordpress.com/2007/11/28/predicting-the-future-thats-what-physics-is-all-about/
12. https://nige.wordpress.com/2008/01/22/farewell-to-blogging/
14. https://nige.wordpress.com/2007/02/20/the-physics-of-quantum-field-theory/
15. https://nige.wordpress.com/2007/03/16/why-old-discarded-theories-wont-be-taken-seriously/
comment:
http://riofriospacetime.blogspot.com/2008/05/einsteinssphere.html
Surprised,
That’s obviously what is meant, because that’s Dirac’s prediction from the spinor of his famous equation: he had to modify the Hamiltonian, and one consequence is antimatter.
It was Schwinger, however, who found that pair production occurs spontaneously in the vacuum if the electric field strength exceeds a threshold of 1.3 × 10^18 volts/metre. See equation 359 in Dyson’s 1951 Lectures on Advanced Quantum Mechanics, Second Edition, http://arxiv.org/abs/quant-ph/0608140, or equation 8.20 of Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040
One thing that really annoys me about popular books on the subject is that they claim – falsely – that pairs of fermions are constantly popping into existence and annihilating everywhere in the vacuum, without limit.
Actually, that only occurs within a distance of 32.953 fm from an electron (see https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/ ).
So all those physicists who state that the entire vacuum is a seething foam of Heisenberg-formula-controlled pair production and annihilation (i.e., looped Feynman diagrams), are talking out of their hats.
It’s been known for over fifty years that there is a cutoff on the pair production. It’s pair production that allows pairs of short-lived (virtual) fermions to become briefly polarized in a field, which opposes and partially cancels the primary electric field, thereby physically explaining the reason for electric charge renormalization.
If pair production occurred throughout the vacuum, there would be no infrared cutoff on the low-energy range for running couplings, and the observable electric charge would get ever smaller as you got further from an electron. This doesn’t happen, proving that pair production and annihilation certainly don’t occur everywhere in the vacuum.
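The roughly 33 fm cutoff figure quoted above can be reproduced by asking at what radius the classical Coulomb field of an electron falls to the Schwinger pair-production threshold. This is only a sketch of that estimate, using standard constants:

```python
import math

e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 2.998e8        # speed of light, m/s
h_bar = 1.055e-34  # reduced Planck constant, J s
m_e = 9.109e-31    # electron rest mass, kg

# Schwinger critical field for spontaneous pair production
E_c = m_e**2 * c**3 / (e * h_bar)

# Radius at which the Coulomb field e/(4 pi eps0 r^2) equals E_c
r = math.sqrt(e / (4 * math.pi * eps0 * E_c))
print(f"Pair-production cutoff radius: {r * 1e15:.1f} fm")  # ~33 fm
```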
*****
Here’s a section from my site http://feynman137.tripod.com/ which is generally obsolete now, but still contains some useful bits of information. (I’ve found a slightly different and later version of this argument in my post https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/, but it is not necessarily better, and all variants will need to be reviewed to get the most lucid presentation when writing the book):
Penrose’s Perimeter Institute lecture is interesting: ‘Are We Due for a New Revolution in Fundamental Physics?’ Penrose suggests quantum gravity will come from modifying quantum field theory to make it compatible with general relativity… I like the questions at the end, where Penrose is asked about the ‘funnel’ spatial pictures of black holes, and points out that they’re misleading illustrations, since you’re really dealing with spacetime, not a hole or distortion in 2 dimensions. The funnel picture really shows a 2-dimensional surface distorted into 3 dimensions, whereas in reality you have a 3-dimensional surface distorted into 4-dimensional spacetime. In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c^{2} = 1.5 mm for the Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved space or 4-dimensional spacetime description is needed to avoid Pi varying, since gravitation contracts radial distances but not circumferences.
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, is by Newton’s law v = (2GM/x)^{1/2}, so v^{2} = 2GM/x. The situation is symmetrical: ignoring atmospheric drag, the speed at which a ball falls back and hits you equals the speed with which you threw it upwards (conservation of energy). Therefore, the gravitational binding energy of a mass in a gravitational field at radius x from the centre of the mass producing the gravitational binding effect is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.
By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^{2} = 2GM/x) into the FitzGerald-Lorentz contraction, giving g = (1 – v^{2}/c^{2})^{1/2} = [1 – 2GM/(xc^{2})]^{1/2}. However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: length is contracted in only one dimension with velocity, whereas with spherically symmetric gravity length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each:
FitzGerald-Lorentz contraction effect:
g = x/x_{0} = t/t_{0} = m_{0}/m = (1 – v^{2}/c^{2})^{1/2} = 1 – ½v^{2}/c^{2} + …
Gravitational contraction effect:
g = x/x_{0} = t/t_{0} = m_{0}/m = [1 – 2GM/(xc^{2})]^{1/2} = 1 – GM/(xc^{2}) + …, where for radial spherical symmetry (x = y = z = r), the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x_{0} + y/y_{0} + z/z_{0} = 3r/r_{0}. Hence the radial contraction of space around a mass is r/r_{0} = 1 – GM/(3rc^{2}).
Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields; and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variation in gravitational potential energy. The amount of radial contraction of a spherically symmetric mass M is (1/3)GM/c^{2}. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
This is the 1.5 mm contraction of Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with the LeSage gravity and gave up on it in 1965.
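The 1.5 mm figure for the Earth follows directly from (1/3)GM/c²; a minimal numerical sketch:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

# Radial contraction (1/3) G M / c^2 for a spherically symmetric mass
contraction = G * M_earth / (3 * c**2)
print(f"Radial contraction of Earth: {contraction * 1e3:.2f} mm")  # ~1.5 mm
```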
*****
Copy of a comment to Dr Tommaso Dorigo’s blog:
On the reality of the big bang, can I recommend http://www.astro.ucla.edu/~wright/tiredlit.htm for an analysis of the redshift facts and the reasons why pseudoscientists can’t accept the big bang facts as valid.
Notice also that Alpher and Gamow predicted the cosmic background radiation in 1948 and it was discovered in 1965.
Actually, the big bang theory is incomplete, because when you take the time derivative of the Hubble expansion law v = HR (treating H as constant), you get acceleration a = dv/dt = d(HR)/dt = H(dR/dt) + R(dH/dt) = Hv = RH^{2}. This tells you that receding masses around us have a small outward acceleration, only on the order of 10^{−10} ms^{−2} for the most distant objects. This is a tremendous prediction. I published it via Electronics World back in October 1996, well before Perlmutter confirmed it observationally.
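The order of magnitude can be checked with a few lines, taking H ≈ 70 (km/s)/Mpc as an assumed present-day value and noting that the most distant objects sit at R ~ c/H:

```python
Mpc = 3.086e22   # metres per megaparsec
H = 70e3 / Mpc   # Hubble parameter in SI units, s^-1 (assumed ~70 (km/s)/Mpc)
c = 2.998e8      # speed of light, m/s

# Outward acceleration a = R H^2; for the most distant objects R ~ c/H,
# so a ~ c H
a = c * H
print(f"Predicted cosmological acceleration: {a:.1e} m/s^2")  # ~7e-10 m/s^2
```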
This is just about the observed acceleration of the universe! Smolin points out this amount of acceleration, and the “numerical coincidence” that it is on the order of a = Hc = RH^{2}, in his book “The Trouble with Physics” (2006), but neglects to state that you get this result by differentiating the Hubble recession law! Note that arXiv.org allowed my paper upload from university in 2002, but then deleted it within seconds, unread!
Dr Bob Lambourne of the Open University years ago suggested submitting my paper to the Institute of Physics' Classical and Quantum Gravity, the editor of which sent it for "peer-review" to a string theorist, who rejected it because it added nothing to string theory!
So some additional evidence and confirmed predictions of the big bang do definitely exist (the outward acceleration of matter leads to a radially outward force, which by Newton's 3rd law gives a predictable inward reaction force, which allows quantitative predictions of gravity that again are confirmed by empirical facts). Don't just believe that only stuff that survives censorship by string theorists is factual. Classical and Quantum Gravity was publishing the Bogdanovs' string theory speculations (which the journal later had to retract) at the time it was rejecting my fact-based paper!
**********
Above: the first two pages of the four-page article in the August 2002 issue of Electronics World, introducing the particle physics for the gravity mechanism; the second part (published in the April 2003 issue, six pages) gave the gravity mechanism in its original formulation. As explained in the earlier post on this blog here, a new formulation was developed over the last couple of years to make the basic physical principles more transparent to other physicists. The CERN Document Server hosted a preprint, but stringy moderators on arXiv instantly (within 5 seconds) deleted an updated paper submitted to arXiv in December 2002, without even taking the time to first read it. This kinda indicates paranoia, but the discovery of the basic fact that the Hubble recession law implies a very small outward acceleration, giving an immense outward force (because the mass of the universe is really huge, and force is the product of mass and acceleration), dates back twelve years and was first published via the October 1996 letters pages of Electronics World.
**********
The primary purpose of this blog is to provide evidence that electromagnetism is mediated by charged, massless gauge bosons, replacing the ad hoc and poorly predictive Higgs mechanism for electroweak symmetry breaking with a fact-based and falsifiably predictive mechanism. Another objective is to provide evidence from a working (checkable, successful) fact-based mechanism of quantum gravity that the graviton is a spin-1 gauge boson, possibly the massless neutral gauge boson of SU(2), or a gravity field described by U(1). This blog sets out to provide evidence for the correct way to introduce gravity into the Standard Model of particle physics, by making quantitative predictions that are checkable.
Update (13 June 2008):
copy of comment (in case accidentally lost) by anon. to the Not Even Wrong weblog (quotation is from Tony Smith):
“… dipole axis of amplitude – due to motion with respect to the Cosmic Microwave Background. – according to astro-ph/0302207 (First Year WMAP) “… COBE determined the dipole amplitude is 3.353 +/- 0.024 mK in the direction …”.”
‘Dipole axis of amplitude’ is a very polite euphemism for this massive +/- 3 mK cosine anisotropy in the CBR, compared to the original name given by discoverer R. A. Muller in his Scientific American article (v. 238, May 1978, pp. 64-74): see http://adsabs.harvard.edu/abs/1978SciAm.238…64M
(If the CMB is used to establish a reference frame for motion, this anisotropy indicates absolute motion with respect to that frame.)
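As an illustrative check (not from the article), the dipole amplitude quoted above corresponds, via the standard relation Delta-T/T = v/c, to a velocity through the CMB frame of roughly 370 km/s. The mean CMB temperature used here is the standard textbook value:

```python
# Velocity implied by the CMB dipole anisotropy, via Delta-T/T = v/c.
T_cmb = 2.725        # K, assumed mean CMB temperature (standard value)
dT = 3.353e-3        # K, dipole amplitude from the WMAP quote above
c = 2.998e8          # m/s

v = c * dT / T_cmb   # inferred velocity relative to the CMB rest frame
print(f"v ~ {v / 1e3:.0f} km/s")
```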
Update (21 June 2008):
Some comments have been added to the post https://nige.wordpress.com/2007/03/16/whyolddiscardedtheorieswontbetakenseriously/ which clarify the errors made by crackpots who dismiss the Standard Model and quantum gravity work on the basis that, if exchanged gauge bosons caused fundamental forces, charges would get hot or would be slowed down by drag effects due to graviton fields in space (this is the standard false dismissal of the Standard Model, quantum gravity and Le Sage gravity by exchange-radiation mechanism haters). Actually, gauge boson radiation fields in space don't have the right frequency to oscillate charges into gaining internal energy (heat), so they don't heat up matter; they just impact and thereby cause the accelerative effects of force fields, and the related contraction effects of spacetime (the contraction of length of moving bodies, and the radial-only contraction of gravitating mass/energy due to graviton compression, which results in the distorted, non-Euclidean geometry that is modelled approximately by general relativity).
Two electrons repel by exchanging positively charged massless electromagnetic gauge bosons (actually the massless positively charged radiations of SU(2), rather than the assumed uncharged massless radiation of U(1) which is currently part of the Standard Model). In the case of quantum gravity, two nearby similarly gravitationally-charged (massive) objects are pushed together, because they forcefully exchange gravitons with every mass in the receding universe, but not with non-receding masses. In the case of the repulsion of two electrons, the exchanged radiations are charged, massless Hawking-type radiation, so the situation is reversed relative to that of quantum gravity.
I.e., in quantum gravity the exchanged radiations are only able to carry a net force if the objects are receding from one another; this is due to the physics whereby objects must have a Hubble recession v = HR giving acceleration a = dv/dt = d(HR)/dt, and thus an outward force relative to the other object of F = ma = m*d(HR)/dt, so that there is an inward reaction force of similar magnitude (Newton's 3rd law of motion) mediated by the spacetime fabric of gravitons (the only thing which can carry the force). Hence, nearby masses which aren't receding cannot physically exchange any gravitons which deliver a net force. This is the physical mechanism for the shadowing effect in the Le Sage mechanism.
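A rough order-of-magnitude sketch of the outward force F = ma described above. The mass of the observable universe used here is an assumed round figure (published estimates vary by an order of magnitude or more), so this only illustrates that the resulting force is immense, as the text claims:

```python
# Order-of-magnitude sketch of the outward force F = ma from the text.
Mpc = 3.086e22
H = 70e3 / Mpc              # assumed Hubble constant, s^-1 (~70 km/s/Mpc)
c = 2.998e8                 # m/s
m_universe = 3e52           # kg -- rough assumed mass of the observable universe

a = H * c                   # outward acceleration of the most distant matter
F = m_universe * a          # outward force; Newton's 3rd law then implies an
                            # equal inward reaction force in this mechanism
print(f"F ~ {F:.0e} N")
```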
Sadly the few people who take the Le Sage mechanism seriously seem to be prejudiced against the redshift facts of the big bang. For example, the editor of a book called 'Pushing Gravity' (Matthew Edwards) initially emailed me encouragement, but later claimed it was wrong because in his opinion the big bang is not the right explanation of the Hubble redshift law. However, redshift due to recession is the only empirically demonstrated mechanism that consistently models all the observational details of redshifted spectra (e.g., if redshift were due to scattering of radiation, the spectrum of redshifted light would be skewed rather than uniformly shifted to lower frequencies): see http://www.astro.ucla.edu/~wright/tiredlit.htm
(It's fascinating to study the reactions of physicists to a = dv/dt = d(HR)/dt = R(dH/dt) + H(dR/dt) = Hv = H^2*R, and its implications. They want to dismiss it, so they say either that it is too simple to be right or too complex to be right. It's so simple that if it were right, any one of the great physicists of yesteryear would have discovered it or thought about it in this way already, and dismissed it for some good reason without recording it; so nature cannot be simple. Alternatively, they argue that it is too complex to be right: because we have to differentiate the Hubble law, it's too complex for them to follow, and it would be simpler to stick to string theory with its landscape of 10^500 combinations of invisibly small compactified Calabi-Yau 6-dimensional manifold speculation. Alternatively, they claim that the Le Sage gravity mechanism has been disproved for gauge boson radiation because 'if moving gravitons existed in space, they would slow down objects and heat them up until they glowed red hot'. When you explain the evidence that gravitons only interact with the massive particles that provide mass to the Standard Model particles (which are normally massless), and that gravitons cause effects like inertia and momentum rather than drag and heating, people just don't want to listen. Conversations begin because they think they can explain to you why they think you are wrong, and when it becomes clear that they are wrong, instead of being pleased to learn something, they still try to claim they are right, just as pseudoscientists do. Their final line of defence is that this mechanism is being censored out, and it would not be censored out – in their opinion – if it were correct.
If you counter this by citing well-known historical examples from physics of the initial hostility to correct work, they claim that you are claiming to be a genius like the people in the well-known (undisputed) historical examples, and they thereby miss the point entirely (innovation is routinely hated by society, and particularly by the media, which waits until a revolution has occurred before commenting instead of promoting the news in advance of a revolution, precisely because innovation is not considered clever but is rather a nuisance to everyone or an act of pathetic egotism). If you alternatively show that peer-reviewers haven't even bothered to read or check the results, but have just rejected it because you are not a fellow string theorist or are not working in the community of groupthink ideas which are currently popular, then that really angers them. They have no way out other than to get angry with you, while continuing to ignore the actual science you have been talking about. Actually, it is quite common that irrational actions have to be enforced by society without a sensible excuse. As an example, nobody in a modern society knows all the laws; there are too many laws, and the statute books are too long to read and remember in a lifetime. The nearest thing is to find a specialised lawyer, who knows at least the major laws in a particular area. However, any use of solicitors is expensive. So some people end up accidentally breaking laws, purely through ignorance, just because there are too many of them to know about. If ignorance could be used as a defence in law, then everyone would be immune to the law unless there was proof that the person had read the relevant law before breaking it (this is why people with security clearance traditionally must sign a copy of the Official Secrets Act, so they can be proved to know precisely what they can and cannot do with the secret information they possess).
Because in practice the law would not work if ignorance of the law were accepted as an excuse, the Latin dictum Ignorantia juris non excusat (ignorance of the law is no excuse) reigns:
‘Ignorance of the law excuses no man; not that all men know the law; but because ’tis an excuse every man will plead, and no man can tell how to confute him.’ – John Selden (1584-1654), Table Talk.
Similarly to this unhappy situation, scientific advances that don't follow the mainstream string theory groupthink are treated as guilty of error (not because they have been shown wrong, but because they take time and effort to understand and check) until they become mainstream in their turn. The process of becoming mainstream is a very ugly, very unscientific thing, all to do with scientific journal media publicity and celebrity endorsement (marketing called peer-review and citation counting). This marketing effort has nothing whatsoever to do with real science, although some people (those who likewise think that Jesus endorsed the financially lucrative dogmatic organised religion which is actually a travesty of Jesus' message) redefine science to make it compatible with dogma, officialdom, groupthink, belief systems, orthodoxy, innovation-hatred, supernatural non-falsifiable string theory and suchlike. Hence there are two totally different extremes of scientists: those obsessed with nature, and those obsessed with being good little team players who earn Brownie points for not being a nuisance by endlessly coming up with new ideas and theories.)
The physics for electromagnetic gauge boson exchange is different from gravitation. Charged particles like electrons and quarks are black holes, emitting radiation. Because the black hole electrons and quarks are electrically charged, this affects the Hawking radiation mechanism: of the virtual charged SU(2) gauge boson pairs produced near the event horizon, only the charges of opposite sign to the real particle fall in, so the other halves of the pairs (with similar sign to the real particle, but with no mass) get radiated away. So an electron is consistently radiating negatively charged massless gauge boson radiation in all directions, and is receiving similar radiation from other electrons in the surrounding universe. This exchange is possible despite the massless nature of the charged radiation, because the magnetic field curls of radiation going from charge A to charge B are cancelled out by the magnetic field curls of massless charged radiation going from charge B to charge A. Hence, the usual problem with the propagation of massless charged radiation does not apply where there is an equilibrium of radiation being exchanged in two directions at the same time (the normal equilibrium situation): we don't find any infinite magnetic field self-inductance causing problems in the theory, simply because the physics naturally cancels out the magnetic field curls. The usual equilibrium is disturbed, however, if another electric charge is nearby. Unlike the case of gravity, the forceful exchange of charged massless SU(2) gauge bosons occurs even between two charges which are not receding. Two electrons repel because they exchange radiation that is not redshifted, unlike the radiation being exchanged with most of the other electrons in the rest of the mainly distant, receding universe. As a result, the exchange of gauge bosons between two electrons is stronger between them than in other directions, so they are repelled (knocked apart).
If the universe was not receding, this would not happen, because there would be a perfect equilibrium.
This equilibrium is disturbed by the asymmetry that receding masses send back redshifted gauge bosons with lower energy, and thus less momentum, than those of nearby masses which are not receding. In both gravity and electromagnetism, the asymmetry of the gauge bosons being exchanged is vitally dependent on the Hubble expansion. Without the expansion of the universe, there would be no gravity and no electromagnetic forces. But there is also an interdependence between such forces and the big bang, because it is the exchange of gauge bosons between masses which causes the expansion of the universe.
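A minimal sketch of the asymmetry described above: radiation emitted by a receding mass arrives redshifted, with its energy (and hence, since p = E/c, its momentum) reduced by a factor 1/(1+z), while radiation from a non-receding nearby charge arrives unshifted. The redshift value here is arbitrary, purely for illustration:

```python
# Redshift reduces the momentum delivered per exchanged quantum by 1/(1+z).
E0 = 1.0                     # emitted energy (arbitrary units)
z = 1.5                      # example redshift of a distant receding mass

E_received = E0 / (1 + z)    # redshifted energy on arrival
p_ratio = E_received / E0    # momentum scales with energy, since p = E/c
print(f"momentum delivered is {p_ratio:.0%} of the unshifted case")
```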
I first worked out the vector summation that proves that electromagnetic gauge bosons cause attraction of unlike unit charges with the same force as they cause repulsion of similarly sized unit charges of like sign, at Christmas 2000, and it was published in the April 2003 issue of Electronics World. The equilibrium referred to in the previous paragraphs is not absolutely perfect: it's almost isotropic, but it is not time-independent. Because the gauge bosons are being exchanged between receding masses (and are causing the recession), there is a time delay between emission and reception, during which the redshift increases. The energy of the exchanged gravitons is being routinely converted into the kinetic energy of the receding masses of the universe. But because gravity and electromagnetism are both powered by the effect of the expansion of the universe upon the exchange of gauge bosons with the surrounding, receding masses and charges, these forces are very gradually increasing in coupling constant as time passes.
This is one strong nail in the coffin of the mainstream ideas of
1. inflation (invoked to flatten the universe at 300,000 years after the big bang, when gravitational effects were much smaller in the cosmic background radiation than you would expect from the structures which have grown from those minor density fluctuations over the last 13,700 million years; we don't need inflation, because the weaker gravity towards time zero explains the lack of curvature then, and how gravity has grown in strength since. Traditional arguments used to dismiss variations of the gravity coupling with time are false because they assume that electromagnetic Coulomb repulsion effects on fusion rates are time-independent instead of varying like gravity: when gravity was weaker, big bang fusion and later the fusion in the sun wasn't producing less fusion energy, because Coulomb repulsion between protons was also weaker, offsetting the effect of reduced gravitational compression and keeping fusion rates stable), and
2. force numerical unification to similar coupling constants, at very high energy such as at very early times after the big bang.
The point (2) above is very important because the mainstream approach to unification is a substitution of numerology for physical mechanism. They have the vague idea from the Goldstone theorem that there could be a broken symmetry to explain why the coupling constants of gravity and electromagnetism are different assuming that all forces are unified at high energy, but it is extremely vague and unpredictive because they have no falsifiable theory, nor a theory based upon known observed facts.
They are actually wrong, because if you picture a sphere of radius R around any fundamental particle, the total flux (Joules per second per square metre) of gauge boson radiation summed over that surface area, passing into and out of that spherical surface on the way to the particle in the middle, must be conserved; or if it is not conserved, then some of the energy must be transformed and used in some other way. The running couplings (the change in effective observable electromagnetic charge of the electron, for example) at short distances from electrons and other particles need explanation in terms of this conservation of energy principle. The total output of electromagnetic gauge boson radiation from a charge is varied by the polarized vacuum shielding it at small distances around a lepton. As you get closer than a few femtometres to an electron, the electric charge of the electron starts to rise. Colliding electrons at 91 GeV (making them approach very closely before bouncing off) gives a repulsion force measured to be 7% stronger than predicted using Coulomb's law, so the electric charge at that energy (close to an electron) is higher than at low energies (below 0.5 MeV scattering energy it is constant and equal to the normal textbook value). The rise in measurable electric charge at very small distances is due to seeing less intervening shielding from the polarized vacuum of pair production charges around the core of the particle. But what happens to the electromagnetic field energy that is lost from the gauge bosons emitted from a charge core as they are attenuated by the polarized vacuum surrounding the core? The energy lost from the electromagnetic gauge bosons is deposited in the virtual fermion field, and is actually used to create pairs of leptons, bosons and quarks.
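A quick hedged check of the "7% stronger" figure above, using standard textbook values for the fine structure constant: roughly 1/137 at low energy, and an effective value of roughly 1/129 as measured at the Z mass (~91 GeV). Coulomb repulsion between electrons scales with alpha, so the ratio gives the fractional increase in force at that energy (the result lands in the 6-7% range, consistent with the text's figure):

```python
# Fractional increase of the electromagnetic coupling between low energy
# and ~91 GeV, using standard (assumed) values of the fine structure constant.
alpha_low = 1 / 137.036     # low-energy fine structure constant
alpha_91GeV = 1 / 128.9     # effective value measured near the Z mass

increase = alpha_91GeV / alpha_low - 1
print(f"repulsion is ~{100 * increase:.1f}% stronger at 91 GeV")
```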
The shortranged nuclear force charge running couplings are powered by the massive virtual particles like SU(2) massive particles which are produced by electromagnetic energy being attenuated by the polarized vacuum.
Therefore, instead of all forces (including gravity) numerically unifying in strength as you get to very small distances from bare particle cores at unobservably high energies, or at very early times after the big bang, what happens is that the electromagnetic charge continues to increase towards its bare-core (maximum) value as you approach it, while the nuclear charges there are zero, because there is no attenuated electromagnetic energy left to produce such field effects. There is no direct relationship between gravitational charge (mass) and electric charge (which doesn't increase with particle velocity, unlike mass), and in the Standard Model all mass is supplied by a separate bosonic Higgs field with which gravitons interact, instead of directly interacting with the Standard Model charges. As a result, there is simply no mechanism for gravity to suddenly increase in coupling by a factor of 10^40 at unobservably high energies: the mainstream claims that this happens because gravitons possess energy and are therefore sources for gravitation in their own right. However, this misses the fact that gravitons don't produce gravity alone, but only in combination with a massive Higgs-like field which supplies mass. Energy and matter both acquire gravitational charge by coupling to the Higgs-like field bosons, which are the only things that directly interact with gravitons:
Photon <-> Higgs-type mass bosons <-> Gravitons <-> Higgs-type mass bosons <-> Electron
Above: the chain of indirect links in the deflection of a photon by the gravity field of an electron. In this picture, only Higgs-type bosons interact with gravitons, and gravitons don't have gravitational charge: energy doesn't actually have any intrinsic mass because it doesn't directly interact with gravitons. Gravitational charge is supplied to energy by a separate Higgs-type field of massive bosonic particles. Gravitons travel at light velocity, and the whole idea that they interact with one another directly is in disagreement with the nature of the simple model which predicts all of the correct (observed) features of gravitation.
In other words, it is an illusion to think that either electrons or photons have gravitational charge: they don't, and merely acquire it indirectly from a secondary Higgs-like field. This distinction is important for the issue of whether gravitons actually have gravitational charge: they don't. All gravitational effects occur due to gravitons hitting Higgs-like bosons, not other gravitons.
In the fundamental force mechanism for gauge boson exchange which has been explained in this blog post (see previous posts as listed above for more information), all fundamental forces were zero initially in the big bang, and the coupling of each has increased in direct proportion to the time since the big bang. At extremely high energy today (i.e., seeing through the polarized vacuum to bare particle cores), the fundamental forces differ from those believed by string theorists: electromagnetic charge is 137 times stronger than at low energies as you approach the black hole event horizon of an electron, and since the massive (graviton-exchanging) Higgs-like bosons are coupled to the electron by the electromagnetic interaction, the effective mass of the core you can experience varies in the same way. This is not the 10^40 factor jump in G argued for by string theorists who want gravity and electromagnetism to be identical in strength at high energy. At extremely high energy, short-ranged nuclear couplings begin to fall as you approach the black hole event horizon of a fundamental particle, where they are zero. This is based on factual mechanisms which have already made tested, confirmed, originally falsifiable predictions (such as the acceleration of the universe, two years before first observation), and conservation principles, not abjectly speculative 'good ideas' or the consensus of a lot of physicists who believe because their friends believe, or for some other religious, pseudoscientific reason.
Update (22 June 2008):
Copy of a comment in moderation queue to John Horgan’s string theory post on his blog (will probably be deleted for going off topic or being too long, or whatever):
http://www.stevens.edu/csw/cgibin/blogs/csw/?p=162
‘String theory now comes in so many versions that it “predicts” virtually anything!’
So you’re unconvinced by Lubos Motl’s top twelve results of string theory:
http://motls.blogspot.com/2006/06/toptwelveresultsofstringtheory.html
Notice that Joe Polchinski added the last couple, including the claim that the landscape of 10^500 variants of string theory is actually a major selling point, because it makes string theory cover such a large number of different models, it’s possible that one of them will include a small positive cosmological constant.
I think that Peter Woit was working on QCD lattice calculations in the early 1980s and used an impressive idea discovered by Witten to make checkable predictions. Then Woit moved on to studying the chiral symmetry problems (i.e. the issue of why the weak force only acts on left-handed spinning particles, and how this relates to electroweak theory and its symmetry breaking), which is a real problem. Then, after the first string revolution in 1985, mainstream attention moved away from trying to better understand the real-world symmetries of the standard model, and towards string theories with their non-empirical, imaginary 'problems' about unobserved grand unification and unobserved gravitons. So I think that's the reason why Dr Woit has a different perspective; he was left high and dry when the tide went out, leaving gauge theory looking like a dead end. The string theorists can't predict the standard model details because the nature of the vacuum in string theory is sensitively dependent on the compactification of 6 unobservable dimensions, and the Calabi-Yau manifold can allow many variations depending on the moduli of the extra dimensions' sizes and shapes, as well as the way that those are stabilized with Rube Goldberg machines to give metastable ground state configurations to the vacuum.
The key question, given that electroweak theory may not be completely correct (the simplest Higgs theory of electroweak symmetry breaking is wrong, and there are several variants with multiple Higgs bosons to decide between at a more complex level, assuming that the Goldstone theorem of symmetry breaking is correct), is how gravity relates to the existing U(1) x SU(2) x SU(3) standard model of electromagnetic, weak, and strong forces.
“In the standard model, at temperatures high enough so that the symmetry is unbroken, all elementary particles except the scalar Higgs boson are massless. At a critical temperature, the Higgs field spontaneously slides from the point of maximum energy in a randomly chosen direction, like a pencil standing on end that falls. Once the symmetry is broken, the gauge boson particles — such as the leptons, quarks, W boson, and Z boson — get a mass. The mass can be interpreted to be a result of the interactions of the particles with the “Higgs ocean”.” – http://en.wikipedia.org/wiki/Higgs_mechanism#General_Discussion
The U(1) x SU(2) x SU(3) Standard Model is very nice at first contact. I’m convinced that the QCD model represented by the SU(3) symmetry is correct, because it’s so firmly tied to empirical evidence such as the eightfold way of categorising and predicting particle properties. So my interest is focussed on the U(1) x SU(2) electroweak symmetry.
What disgusts me is that once the symmetry is broken at low energy, U(1) is not electromagnetism and SU(2) is not the weak force.
Instead, the ad hoc, empirical electroweak Weinberg mixing angle of the Standard Model means that the true gauge boson of electromagnetism is not simply the massless photon of U(1), but is instead a composite of the gauge boson of U(1) and the neutral gauge boson of SU(2). Similarly, the neutral massive weak gauge boson of SU(2) is not the observed weak massive gauge boson of the weak force!
To make U(1) x SU(2) model the experimentally known facts of electromagnetism, the neutral gauge bosons of each must be mixed together. The usual oversimplified books on the standard model falsely claim that electromagnetism is the U(1) symmetry and the weak force is the SU(2) symmetry.
But the SU(2) gauge bosons are W+, W-, and W0, where the W0 has never been observed. Instead, the observed neutral weak gauge boson is the Z0, which is a mix of the electromagnetic U(1) massless neutral gauge boson and the W0. Similarly, the U(1) gauge boson isn't the observed photon but is instead a B particle; the observed photon is the other result of the mixing between the B and the W0 of SU(2). There is no theoretical basis for mixing up electromagnetism and the weak force in this way. The Weinberg mixing angle is roughly 28 degrees (sin^2 of the angle is about 0.22-0.23), and this comes empirically, by fitting the theory to data; it's not a theoretical prediction that is compared to experiment.
So the U(1) x SU(2) electroweak symmetry theory is not too impressive, it’s a gigantic mess and a gigantic fraud in the way it is explained to the public as being simple and elegant. In addition, because gravity has more similarity to electromagnetism than it does to any other known force (both are infiniterange, inverse square law forces), one would expect that gravity could come into the electroweak symmetry sector of the Standard Model, when a complete theory including quantum gravity is discovered.
U(1) has other problems as a gauge theory. E.g., it has only one kind of charge, while electromagnetism comes with two types (positive and negative, or for magnetism, north pole and south pole). It also has only one type of gauge boson, which has to have 4 polarizations in order to account for being able to cause attraction of unlike charges and repulsion of like charges. It's pretty obvious to me that if you consider a single electron, it's surrounded by a negative electric field, which is due to the gauge bosons being exchanged. Hence, negative charge is mediated by charged (negative) massless gauge bosons. Because exchange radiation passes two ways along any route (from charge A to B and back to charge A), the magnetic field curls of the charged gauge bosons will automatically cancel, preventing infinite self-inductance, so there is no problem with having massless electrically charged radiation, as long as it is being exchanged along a two-way route and not just trying to propagate in one direction only.
So why not kick U(1) out of the picture and replace it by an SU(2) group for electromagnetism? Here you have two charges and three gauge bosons: by legitimately adapting the Higgs symmetry breaking theory (the Higgs is a speculative theory with no empirical confirmation, not an established fact), you can easily get some replacement to U(1) x SU(2) which looks like SU(2) x SU(2), or just SU(2), where instead of the gauge bosons being massless only at high energy and massive at low energy, some (say, one handedness) remain massless at low energy.
Hence, the massless charged SU(2) gauge bosons at low energy produce electromagnetism, while still at low energy the massive versions of those remain the W+ and W- and produce the left-handed weak force. The uncharged W0 produces the weak force's massive Z0 for neutral currents, and its massless version is the graviton, with spin-1. This fits into a gauge mechanism I'm interested in that makes predictions that can be checked.
Update (24 June 2008):
http://coraifeartaigh.wordpress.com/2008/06/15/thestandardmodel/#comment129
A big problem comes from the Abelian U(1) electromagnetic theory, e.g. the Weinberg mixing angle.
U(1) has 1 charge and 1 gauge boson and is supposed to model electromagnetism, while SU(2) has 2 charges (two isospins) and 3 gauge bosons (neutral, positive and negative in charge) for the neutral currents and W+/- bosons of the weak force.
But the observable gauge boson of electromagnetism and the observable Z_0 of the weak neutral currents are not adequately modelled by U(1) and SU(2) respectively, so an ad hoc mixing of the two is needed. So the neutral gauge bosons B and W_0, from U(1) and SU(2), need to be mixed together to produce something modelling the observable photon and the observable Z_0 gauge boson.
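The mixing described above is a simple rotation of the two neutral fields. Here is a minimal sketch, treating the B and W_0 fields as symbolic numbers and using the standard empirical value sin^2(theta_W) ~ 0.23 (an assumed textbook figure, not from the article):

```python
# Sketch of neutral electroweak mixing: (photon, Z_0) is a rotation of
# (B, W_0) through the Weinberg angle.
import math

sin2_theta = 0.23                          # assumed empirical value of sin^2(theta_W)
theta = math.asin(math.sqrt(sin2_theta))   # Weinberg mixing angle, radians
cw, sw = math.cos(theta), math.sin(theta)

def mix(B, W0):
    """Rotate the (B, W0) pair into the observed (photon, Z0) pair."""
    photon = cw * B + sw * W0
    Z0 = -sw * B + cw * W0
    return photon, Z0

print(f"theta_W ~ {math.degrees(theta):.1f} degrees")
```

Because the mixing is an ordinary rotation, the photon and Z_0 combinations remain orthogonal, which is why one empirical angle fixes both.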
This is an entirely empirical correction, with the Weinberg mixing angle coming not from theory but from adjustment to make the theory model the observables. It makes U(1) x SU(2) a very complex and inelegant theory of electroweak phenomena, even before you get into discussing the Higgs mechanism you need in order to break the symmetry and make it consistent with observations.
I think that a much better model would be to change the Higgs mechanism and just use SU(2), so that instead of the Higgs mechanism giving mass to all of the gauge bosons at low energy, it only gives mass to some of them, allowing only left-handed spinors to interact with the massive weak gauge bosons.
The rest of the W_+, W_−, and W_0 gauge bosons of SU(2) remain massless, and we observe them as electromagnetic (massless W_+ and W_−) and gravitational (massless W_0) gauge bosons. The extra polarizations of the gauge boson photon (it has 4, rather than the usual 2 polarizations of photons) come from the electric charge carried. Normally massless radiation can’t propagate if it has an electric charge, due to the infinite magnetic self-inductance which would result, but that is cancelled out in the case of exchange radiation, because the magnetic field curls cancel out between each oppositely-directed flow of gauge bosons from one charge to another and back. This scheme allows a full causal mechanism of exchange radiation causing electromagnetic and gravitational forces, predicting the coupling parameters.
When you think about U(1) for electromagnetism, it is an extremely problematic theory. You have only one electric charge, so opposite charge must be considered to be charge going backwards in time. Then you only have 1 type of gauge boson, with 4 polarizations and no explanation of what the additional 2 polarizations are, beyond the fact that you need them to allow repulsion as well as attraction forces. It is possible to remove U(1) and extend the role of SU(2) to include electromagnetism and gravitation, simply by modifying the Higgs mechanism so that it allows some massless versions of SU(2) gauge bosons to exist at low energy. This makes new checkable predictions, and is consistent with the observationally checked aspects of the existing Standard Model.
******************************************
25 June 2008:
Simple calculation of the bare-core electromagnetic charge of a single fundamental particle (without shielding by the polarized vacuum). (I’m not including the delta symbols here; the deltas cancel out in the end anyway. Also, the inclusion of mass in the field quanta here is no more wrong than the inclusion of a mass term in Professor Zee’s calculation of the Coulomb law from quantum field theory – attributed to an approximation by Sidney Coleman in a footnote – in his book Quantum Field Theory in a Nutshell. The only difference is that the calculation below makes a quantitative prediction, and the Zee/Coleman calculation doesn’t. Both agree with the inverse-square law.)
Heisenberg’s uncertainty principle (momentum-distance form):
p*s = hbar (minimum uncertainty)
For relativistic particles, momentum p = mc, and distance s = ct, so:
p*s = (mc)(ct) = t*mc^2 = t*E = hbar
This is the energy-time form of Heisenberg’s law:
E = hbar/t = hbar*c/s
Putting s = 10^{-15} metres into this (i.e. the average distance between nucleons in a nucleus) gives us the predicted energy of the strong nuclear exchange radiation, about 200 MeV. According to Ryder’s Quantum Field Theory, 2nd ed. (Cambridge University Press, 1996, p. 3), this is what Yukawa did in predicting the mass of the pion (140 MeV), which was discovered in 1947 and which causes the attraction of nucleons. In Yukawa’s theory, the strong nuclear binding force is mediated by pion exchange, and the pions have a range dictated by the uncertainty principle, s = hbar*c/E. He found that the potential energy in this strong force field is proportional to e^{-R/s}/R, where R is the distance of one nucleon from another and s = hbar*c/E, so the strong force between two nucleons is proportional to e^{-R/s}/R^2, i.e. the usual inverse-square law multiplied by an exponential attenuation factor. What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts for massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force of gravity faster with distance than the observed inverse-square law (thus giving the exponential term in the equation e^{-R/s}/R^2). So it’s easy to suggest that the original LeSage gravity mechanism with limited-range massive particles, and its “problem” of the shadows getting filled in by vacuum particles diffusing into the shadows (and cutting off the force) after a distance of a few mean free paths of radiation-radiation interactions, is actually a clue about the real mechanism in nature, the physical cause behind the short range of the strong and weak nuclear interactions, which are confined in distance to the nucleus of the atom! For gravitons, in a previous post I have calculated their mean free path in matter (not the vacuum!) to be 3.10*10^{77} metres of water; because of the tiny (event horizon-sized) cross-section for particle interactions with the intense flux of gravitons that constitutes the spacetime fabric, the probability of any given graviton hitting that cross-section is extremely small. Gravity works because of an immense number of very weakly interacting gravitons. Obviously quantum chromodynamics governs strong interactions between quarks, but a residue of that allows pions and other mesons to mediate strong interactions between nucleons. But there is still a further step, one which makes falsifiable predictions, that we can make using this result of E = hbar*c/s.
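The E = hbar*c/s estimate is easy to check numerically. Here is a minimal Python sketch (the constant values are the standard CODATA figures, my inputs) which puts s = 10^{-15} m into the formula and converts the result to MeV:

```python
# Range/energy estimate from the uncertainty principle: E = hbar*c / s
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
MeV = 1.602176634e-13    # joules per MeV

s = 1e-15                # average nucleon spacing, metres (1 fm)
E = hbar * c / s         # energy of the exchange quanta, joules

print(E / MeV)           # ~197 MeV, i.e. "about 200 MeV" as stated
```

This reproduces the order-200-MeV figure; Yukawa's pion, at 140 MeV, is the same order of magnitude.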
Work energy is force multiplied by the distance moved due to the force, in the direction of the force (we won’t need bold print to remember that these are vectors, because we know what we are doing physically):
E = Fs = hbar*c/s
F = hbar*c/s^{2}
which is the inverse-square geometric form for force. This derivation is a bit oversimplified, but it allows a quantitative prediction: it predicts a relatively intense force between two unit charges, some 137.036… times the observed (low-energy physics) Coulomb force between two electrons, hence it indicates an electric charge of about 137.036 times that observed for the electron. This is the bare-core charge of the electron (the value we would observe for the electron if it wasn’t for the shielding of the core charge by the intervening polarized vacuum veil, which extends out to a radius on the order of 1 femtometre). What is particularly interesting is that it should enable QFT to predict the bare-core radius (and the grain-size vacuum energy) for the electron, simply by setting the logarithmic running-coupling equation to yield a bare-core electron charge of 137.036 times the value observed in low-energy physics. That logarithmic equation (see http://arxiv.org/PS_cache/hepth/pdf/9803/9803075v2.pdf and particularly http://arxiv.org/PS_cache/hepth/pdf/0510/0510040v2.pdf pages 70-71) correctly predicted an increase in observable electron charge to 1.07 times the low-energy value when electrons are collided at 90 GeV. The collision energy is directly related to distance from a particle core, because the harder electrons collide, the closer they approach before being repelled (bouncing back). They reach the point of closest approach when 100% of their kinetic energy has been converted into potential energy in the Coulomb field of the particles, which stops the approaching particle; then electrostatic repulsion immediately begins to accelerate the particle backwards. However, the logarithmic equation includes a summation over all the pair-production loops of fermions which can be polarized in the electric field, from low energy right up to the extremely high energy of concern to us. There are lots of virtual fermions that exist over such an enormous span of energy scales, so the summation is a major project.
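As a sanity check on the factor of 137.036: the ratio of F = hbar*c/s^2 to the Coulomb force between two electrons, e^2/(4*Pi*epsilon_0*s^2), is exactly hbar*c*4*Pi*epsilon_0/e^2 = 1/alpha, since the s^2 cancels. A minimal sketch (CODATA constants, my inputs):

```python
import math

# Ratio of F = hbar*c/s^2 to the Coulomb force e^2/(4*pi*eps0*s^2).
# The s^2 cancels, leaving the inverse fine-structure constant 1/alpha.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # electron charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

ratio = hbar * c * 4 * math.pi * eps0 / e**2
print(ratio)             # ~137.036, the claimed bare-core/observed charge ratio
```

So the "137.036 times the Coulomb force" statement is just the identity that F = hbar*c/s^2 exceeds the observed electron-electron force by 1/alpha.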
Update (9 July 2008):
How to censor out scientific reports without bothering to read them
Here’s the standard four-stage mechanism for avoiding decisions by ignoring reports. I’ve taken it directly from the dialogue of the BBC TV series Yes Minister, Series Two, Episode Four, ‘The Greasy Pole’, 16 March 1981, where a scientific report needs to be censored because it reaches scientific but politically inexpedient conclusions (which would be very unpopular in a democracy where almost everyone has the same prejudices, and the majority bias must be taken as correct in the interests of democracy, regardless of whether it actually is correct):
Permanent Secretary Sir Humphrey: ‘There’s a procedure for suppressing … er … deciding not to publish reports.’
Minister Jim Hacker: ‘Really?’
‘You simply discredit them!’
‘Good heavens! How?’
‘Stage One: give your reasons in terms of the public interest. You point out that the report might be misinterpreted. It would be better to wait for a wider and more detailed study made over a longer period of time.’
‘Stage Two: you go on to discredit the evidence that you’re not publishing.’
‘How, if you’re not publishing it?’
‘It’s easier if it’s not published. You say it leaves some important questions unanswered, that much of the evidence is inconclusive, the figures are open to other interpretations, that certain findings are contradictory, and that some of the main conclusions have been questioned.’
‘Suppose they haven’t?’
‘Then question them! Then they have!’
‘But to make accusations of this sort you’d have to go through it with a fine toothed comb!’
‘No, no, no. You’d say all these things without reading it. There are always some questions unanswered!’
‘Such as?’
‘Well, the ones that weren’t asked!’
‘Stage Three: you undermine the recommendations as not being based on sufficient information for long-term decisions, valid assessments, and a fundamental rethink of existing policies. Broadly speaking, it endorses current practice.’
‘Stage Four: discredit the man who produced the report. Say that he’s harbouring a grudge, or he’s a publicity seeker, or he’s hoping to be a consultant to a multinational company. There are endless possibilities.’
Go to 2 minutes and 38 seconds in the YouTube video (above) to see the advice quoted on suppression!
These are the key steps used in ignoring science. They’re used not just by governments, but by everyone as an excuse to avoid the expenditure of time necessary to check out new reports which attempt to move beyond status quo. Now try reading this and deciding how easy it is to censor it:
‘In 1929 Hubble discovered a linear correlation between the redshift of distant galaxies and clusters of galaxies, and their distance. The recession velocities v were directly proportional to the radial distance r of the receding object from us. Hence v = Hr, where H is Hubble’s constant, which has the units of 1/time. This is then fitted into general relativity using the Friedmann-Robertson-Walker metric, which can’t fit any observations since it has a continuously variable landscape of infinitely many arbitrarily adjustable solutions and doesn’t predict the cosmological acceleration.
‘Because of spacetime, you are simultaneously looking back to earlier times when you look to ever greater distances. So the recession velocity is varying as a function of observable time, giving the observable acceleration.
‘Hence, Hubble could have legitimately plotted acceleration: the derivative of the Hubble law is: a = dv/dt = d(Hr)/dt = H(dr/dt) + r(dH/dt) = Hv + r*0 = Hv = rH^2.
‘For the greatest distances approaching the visible horizon (where recession is highly relativistic with massive redshifts), this predicts a = Hc = 6*10^{-10} m/s^2, which is the tiny observed cosmological acceleration at such distances.
‘Another option is that Hubble could just have plotted velocities versus times past, defined by the travel time of the light from source to the observer on the Earth, t = r/c. This would have given him a constant ratio v/t which has units of acceleration: a = v/t = (Hr)/(r/c) = Hc = 6*10^{-10} m/s^2.
‘Although this gives the same prediction for cosmological acceleration at very large distances, it differs from the prediction at smaller distances:
‘Approach 1: a = dv/dt = d(Hr)/dt = H(dr/dt) + r(dH/dt) = Hv + r*0 = Hv = rH^2.
‘Approach 2: a = v/t = (Hr)/(r/c) = Hc.
‘The first approach differs from the second because of the definition of time. All times are implicitly defined as t = r/v in the first approach, but in the second approach time is explicitly defined as t = r/c. The change from v to c results in the different prediction. Approach 1 deals with the times taken for recession, while Approach 2 deals with the time taken by light or other bosonic radiation such as gravitons to reach us.
‘However, the magnitude of the acceleration for the furthest distances, where most of the mass of the universe is effectively located from our perspective, is similar in both arguments (remembering that observable density increases with distance, since we’re looking back in time to earlier times after the big bang, where density was higher, and in any case – even if there was a uniform density distribution – most of the mass is still located at the greatest distances from an observer, because the mass contained within a given volume of uniform density is proportional to the cube of the radius of that volume, so a disproportionately large amount of mass is found at the greatest radial distances).
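The magnitude quoted in the passage above is easy to reproduce. Assuming a Hubble constant of about 70 km/s/Mpc (my illustrative input; the post's 6*10^{-10} m/s^2 corresponds to an H near this value), a = Hc works out as:

```python
# Cosmological acceleration estimate a = H*c (Approach 2 above).
c = 2.99792458e8            # speed of light, m/s
Mpc = 3.0857e22             # metres per megaparsec

H_kms_per_Mpc = 70.0        # assumed Hubble constant, km/s/Mpc
H = H_kms_per_Mpc * 1e3 / Mpc   # convert to SI, 1/s (~2.27e-18 /s)

a = H * c                   # ~6.8e-10 m/s^2, same order as the ~6e-10 quoted
print(a)
```

The exact figure depends on the value adopted for H, which is why the text quotes it only to one significant figure.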
‘Besides making the subsequently confirmed prediction of the acceleration of the universe, this also predicts the correct strength of quantum gravity, because the acceleration of matter radially outward implies an outward force: Newton’s second law (empirically confirmed) states F = ma, and this outward force implies, by Newton’s third law (empirically confirmed), that there is an equal force in the opposite direction, i.e. inward-directed (analogous to the inward-directed implosion wave you get when you set off an explosion, which is well understood and is actually used to squeeze the fissile core in all modern fission explosives). This inward force is mediated by the spacetime fabric of gravitational field quanta, gravitons. We can predict the quantitative amount of gravity because the mechanism is quantitative, being closely tied to empirical facts, unlike string theory’s cranky spin-2 graviton.
‘The fact-based prediction of the acceleration (small positive cosmological constant) was published in 1996 in Electronics World (the editor of Nature turned it down) and confirmed by observations made two years later by Perlmutter, who published in Nature. There was no mention in Nature that the acceleration had been predicted in 1996. Philip Campbell, editor of Nature, censored repeated letters sent by recorded delivery, as did other physical sciences editors at Nature. Other journals, including Classical and Quantum Gravity, submitted the paper to string ‘theorists’ for so-called ‘peer’ review who had no knowledge of this area of physics or interest in reading the paper, and didn’t read it, judging from the ignorant comments they made about facts.
‘While this has been censored out, I’ve invested my spare time in extending and checking this further. However, the mainstream doesn’t want to investigate it because it’s innovative and therefore unorthodox, and it is stuck with trying to use a classical approximation to gravitation – general relativity’s Friedmann-Robertson-Walker metric – to interpret the Hubble recession, while the crackpots who are more interested in quantum gravity (and see general relativity as just a classical approximation to quantum gravity) aren’t interested because they don’t pay attention to the Hubble recession effect at all, perhaps because of crackpot dismissals of the Hubble law, which are rightly discredited at http://www.astro.ucla.edu/~wright/tiredlit.htm
‘Whenever I set out to explain this mechanism to anybody, I face having to teach them the facts, then the difference between the facts of physics and the speculations. Instead of discussing the mechanism of gravity, the conversation or correspondence gets immediately focussed on me having to explain to them the facts of physics, e.g. the evidence for cosmological recession. If they are professional physicists who happen to have studied cosmology, then the conversation again gets bogged down with me explaining what is right in general relativity, and what is unhelpful in general relativity. E.g., general relativity models the gravitational and accelerative forces as results of a curvature in spacetime caused by a gravity source. It makes falsifiable predictions because the simple tensor field equation representing the relationship between the source of gravitation and the resulting curvature has to have a term included for relativistic effects in order for mass-energy to be conserved. It is this added term which makes light-velocity objects fall with twice the gravitational acceleration of low-velocity (non-relativistic) objects, and this is the basis for predicting that a photon gets deflected by the sun’s gravity to twice the extent that a slow bullet would be deflected by gravity when moving along the same initial trajectory. So general relativity is genuine, falsifiable physics in this sense. It’s right in this sense. It’s only wrong in the sense that it’s based on using calculus (continuous variables) to represent discontinuous field quanta (graviton) effects, and also in the stress-energy tensor, where the really discontinuous source of gravity (fundamental particles of mass and energy) is replaced by a perfect fluid approximation to make the calculus work.
‘The real tragedy of general relativity is that it prevents the professional physicist from investigating the simple facts. General relativity doesn’t include a mechanism for gravity, so the large-scale models of cosmology based on general relativity, e.g. the Friedmann-Robertson-Walker metric of general relativity, are missing vital facts because they don’t include the cause of gravity as the result of a reaction force (mediated by gravitons) to the accelerative expansion of the universe. So the mainstream professional physicist who is trying to use the Friedmann-Robertson-Walker metric of general relativity to model cosmology is in the position of Ptolemy, using epicycles. The results are impressively complex mathematical epicycles that can be fiddled to fit anything, but which don’t predict the cosmological acceleration or the strength of gravitation.
‘Professional physicists specializing in quantum field theory are either dismissive of cosmology or else dismissive of mechanisms. So they either refuse to read http://www.astro.ucla.edu/~wright/tiredlit.htm or else they proclaim their religious faith as being that the universe is purely mathematical, with no mechanisms present for fundamental interactions. Sadly, there is a third category, in which quantum field theorists have zero interest in whether Hubble’s law is being correctly applied to cosmology, and also zero interest in a mechanism for quantum gravity. Such people have no willingness to discuss physics at all, just non-falsifiable speculative mathematical models. The gauge boson exchange radiations in Feynman diagrams are to such people just imaginary, and the discovery of the weak gauge bosons in 1983, and other evidence that quantum field exchange radiations are real, are best ignored. So in every direction, this mechanism faces opposition or apathy from crackpots of one sort or another.’
Is the calculation of isolated quark masses pseudoscience (since you can’t isolate quarks)?
Mass of electron, M_e = (Mass of Z_0)*(alpha^2)/(3*Pi) = 0.51 MeV
Mass of all other isolated particles = (M_e)*n(N + 1)/(2*alpha) = 35n(N + 1) MeV
where n is number of fundamental particles in the isolated particle (n = 1 for leptons, n = 2 for 2 quarks in a meson, and n = 3 for 3 quarks in a baryon), and N is the discrete number of massive field quanta which give mass to the particle (as with nuclear physics ‘magic numbers’ for stable nucleon shell configurations, N = 1, 2, 8 and 50 give high stability results, predicting observed particle masses).
Some examples:
For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV
For mesons (n = 2 quarks per meson), N = 1 gives the pion: 35n(N+1) = 140 MeV
For baryons (n = 3 quarks per baryon), N = 8 gives nucleons: 35n(N+1) = 945 MeV
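The examples above follow directly from the two formulas as stated. The short sketch below just evaluates them (the Z_0 mass of 91187.6 MeV and alpha = 1/137.036 are my assumed inputs) and reproduces the quoted figures:

```python
import math

alpha = 1 / 137.036
M_Z0 = 91187.6                          # Z_0 mass in MeV (assumed input)

# Electron mass formula from the text: M_e = M(Z_0)*alpha^2/(3*pi)
M_e = M_Z0 * alpha**2 / (3 * math.pi)
print(round(M_e, 3))                    # ~0.515 MeV, the "0.51 MeV" quoted

def mass(n, N):
    """Predicted particle mass, 35*n*(N+1) MeV, as stated in the text."""
    return 35 * n * (N + 1)

print(mass(1, 2), mass(1, 50), mass(2, 1), mass(3, 8))
# -> 105 1785 140 945  (muon, tauon, pion, nucleon, in MeV)
```

Note that M_e/(2*alpha) evaluates to about 35.3 MeV, so the round coefficient of 35 in the second formula is itself an approximation.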
Above: observable fundamental particle masses, i.e. hadrons and leptons with lifetimes exceeding 10^{-23} second. ‘Quark masses’ tend to be a crackpot concept because they aren’t observable even in principle, since quarks can never be isolated, although you can calculate effective masses for different interactions. The mainstream focus on calculating isolated quark masses when quarks can’t be isolated is crackpot. Quarks have observable properties, such as causing effects in particle scattering and giving the neutron a magnetic moment, but it makes no sense to go on claiming that quark masses are the fundamental building blocks of hadronic matter.
In previous posts here and here, there’s evidence for the composition of all observable lepton and hadron masses from a slightly Higgs-like (miring) boson field which exists in the vacuum and interacts with gravitons. Massless gravitons are exchanged between massive Higgs-like bosons; the latter give observed particles their mass and cause photons to be deflected by gravitation.
Overemphasis on quark masses – since quarks can’t be isolated, most of the observable mass of particles that contain quarks is the strong force field energy binding the quarks together – is pseudoscience. In fact, the calculations that try to predict hadron masses are lattice gauge theories, mechanistic simulations of the real vacuum interactions of particles with fields (exactly what I’m interested in, but focussed just on strong fields not on gravitation and electromagnetism).
http://dorigo.wordpress.com/2008/07/08/atopmassmeasurementtechniqueforcmsandatlas/
Hi Tommaso, thanks for this interesting article. The way you estimate the top quark mass looks very complicated and indirect. I’m just a bit surprised by discussions everywhere of ‘quark masses’ as if such masses are somehow fundamental properties of nature. Because quarks can’t be isolated, giving them masses as if they were isolated doesn’t make sense: the data-book quoted masses of the three quarks in a baryon add up to only about 10 MeV, which is just 1% of the mass of the baryon. So most of the mass in matter is due to mass associated with virtual particle vacuum fields, not intrinsically with the long-lived quarks. Therefore the whole exercise of assigning masses to particles which can’t be isolated looks like nonsense? Surely the most striking fact here is that the observable masses of particles (hadrons) containing long-lived quarks have nothing much to do with the masses associated with those long-lived quarks? So why discuss imaginary (not isolated in practice) quark masses at all? What matters in physics is what we can observe, and since quarks can’t be isolated, the calculations have no physical correspondence to the mass of anything that really exists!
On p102 of Siegel’s book Fields, http://arxiv.org/PS_cache/hepth/pdf/9912/9912205v3.pdf, he points out:
‘The quark masses we have listed are the “current quark masses”, the effective masses when the quarks are relativistic with respect to their hadron (at least for the lighter quarks), and act as almost free. But since they are not free, their masses are ambiguous and energy dependent, and defined by some convenient conventions. Nonrelativistic quark models use instead the “constituent quark masses”, which include potential energy from the gluons. This extra potential energy is about .30 GeV per quark in the lightest mesons, .35 GeV in the lightest baryons; there is also a contribution to the binding energy from spinspin interaction. Unlike electrodynamics, where the potential energy is negative because the electrons are free at large distances, where the potential levels off (the top of the “well”), in chromodynamics the potential energy is positive because the quarks are free at high energies (short distances, the bottom of the well), and the potential is infinitely rising. Masslessness of the gluons is implied by the fact that no colorful asymptotic states have ever been observed.’
I first heard about quarks as an A-level physics student in 1988, and couldn’t understand why the masses of the three quarks in a proton weren’t each about a third of the proton mass. I did quantum mechanics and general relativity later, no particle physics or quantum field theory, so it’s only recently that I’ve come to understand the enormous importance of field energy (binding energy) in particle masses. So from my perspective, I can’t see why anybody cares about quark masses. Because quarks can’t be isolated, such masses are just a mathematical invention; quarks will always really have different masses because of the fields around them when they are in hadrons. In the Standard Model, quarks don’t have any intrinsic masses anyway; the mass is supplied externally by the Higgs field. Whether it is 99% or 100% of the mass of quark-composed matter which is in the field that binds quarks into hadrons, surely the masses of the quarks are not important. Surely, to predict observable particle masses, a theory needs to predict the binding energy tied up in the strong force field, not just add up quark masses. This seems to indicate that the mainstream [is] off in fantasy land when trying to estimate quark masses and present them as somehow ‘real’ masses, when they aren’t real masses at all.
2 Comments »
Dear Nige, it is of course true that quarks cannot be isolated, and that their “current mass” values are not measurable with infinite precision. In that sense, you could even say that those quantities are ill-defined, to a certain extent (larger for light quarks, O(100 MeV) for the top quark). What should surprise you is that that kind of fuzzy definition happens with most physical quantities we measure and use, to some extent: that is a source of systematic uncertainty which is usually neglected. What does it mean, for instance, when we say that the inner temperature of a live human body is 100+x °F? Define body, define alive, define inner, define temperature from a microscopic point of view… What does it mean to say that the gravitational acceleration at sea level is 9.8 m/s^2? Define acceleration, define sea level, explain how it depends on whether we are at the north pole or on the equator, whether there’s water or rock around. The message is the following: physical quantities we measure and use have a meaning in a certain context, less so if absolutized. The fine structure constant is a very good quantity to use in low-energy electrodynamics, but it is no constant in high-energy physics. Current quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations. You should be careful to avoid equivocating the meaning of pseudoscience, which certainly does not apply to the case you mentioned.
Cheers,
T.

Hi Tommaso, thank you for your reply. My response: http://dorigo.wordpress.com/2008/07/08/atopmassmeasurementtechniqueforcmsandatlas/#comment98677

‘Quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations.’ – Tommaso

Hi Tommaso,
Isn’t that a circular argument, because you’re defining quark masses on the basis of a calculation based on measuring cross-sections, and then using that value to calculate cross-sections?
Since cross-sections are the effective target areas for specific interactions, I don’t see how mass comes into it. Whatever cross-section you are dealing with, it will be for a Standard Model interaction, not gravitation. Since mass would be a form of gravitational charge, mass is only going to be key to calculating cross-sections for gravitational interactions in quantum gravity.
Clearly mass can come into calculations of other interactions, but only indirectly. E.g., the masses of different particles produced in a collision will determine how much velocity the particles get, because of conservation of momentum.
Re: the analogy to the temperature of the human body. In comment 2 I’m not denying that quarks have mass, just curious as to why so much mainstream attention is focussed on something that’s ambiguous. The internal temperature of the human body is easily measurable: a thermometer can be inserted into the mouth.
You can’t isolate quarks, so whatever mass you calculate for an isolated quark, you aren’t calculating a mass that exists. The actual mass will always be much higher because of the mass contribution from the strong force field surrounding the quark in a hadron.
Hi Lubos,
Quark masses can be welldefined in various different ways. They just can’t be isolated, so while they are useful parameters for calculations, they aren’t describing anything that can be isolated (even in principle). Nobody has ever measured the mass of an isolated quark, they have made measurements of interactions and inferred the quark mass, which doesn’t physically exist because quarks can’t exist in isolation. Other masses, such as lepton and hadron masses, may also involve indirect measurements, but at least there you end up with the mass of something that does exist on its own.
In any case, their masses are negligible compared to the masses of the hadrons containing quarks. So hadron masses are accounted for by the strong field energy between the 2or 3 quarks in the hadron, not the masses of the quarks themselves.
Comment by nige cook — July 9, 2008 @ 2:27 pm
Further discussion on Tommaso’s blog:
‘Nige, no, no circularity. You need a top mass as a parameter if you want to determine the theoretical prediction for the number of topantitop events you collect. The top mass is needed because the parton luminosities depend on the fraction of momentum of the parent proton or antiproton they carry. There are fewer partons at larger momentum, so the cross section decreases with the top mass, because a higher top mass “fishes out” the rarer highmomentum partons.’ – Tommaso
Thanks for this explanation, but nobody measures the isolated mass of any quark, since quarks can’t be isolated. The derivation of the mass of the quark comes from reaction cross-sections, to make the theory work, and then you use that calculated quark mass to calculate something else. At no point has the isolated quark mass been measured, because it has never been isolated.
By analogy, the original 1960s string theory of strong interactions requires that the strings have a tension of something like 10 tons weight (100 kiloNewtons of force). This figure is required to make the theory describe the strong force, and using this parameter other things about the nucleus can be calculated. However, this isn’t the same thing as ‘measuring’ the tension of strings which bind nuclei together. Just because you can indirectly use experimental data to quantify some parameter and then use that parameter to make checkable calculations, doesn’t mean that it is a real parameter or that it is a real ‘measurement’. In the case of hadronic string theory, it was soon realised that exchange of gluons caused the strong interaction, not string tension.
I think it’s fundamentally misleading for properties of quarks to be quoted where those properties aren’t observable even in principle, because of the impossibility of isolating a quark. It’s against Mach’s concept that physics be based on observables. Once you start popularising values for isolated quark masses when isolated quarks never exist even in principle, you break away from Mach’s conception of physics. Hadron masses are directly observable, and they are only about 1% current quark masses, and 99% mass associated with hadron binding energy. I think that people should be analyzing the latter as a priority, to understand masses.
Hadron masses can be correlated in a kind of periodic table summarized by the expression
M= mn(N+1)/(2*alpha) = 35n(N+1) MeV,
where m is the mass of an electron, alpha = 1/137.036, n is the number of particles in the isolatable particle (n = 2 quarks for mesons, and n = 3 quarks for baryons), and N is the number of massive field quanta (such as Higgs bosons) which give the particle its mass. The particle is a lepton or a pair or triplet of quarks surrounded by shells of massive field quanta which couple to the charges and give them mass, then the number of massive particles which have a highly stable structure might be expected to correspond to the ‘magic numbers’ of nucleon shells in nuclear physics: N = 1, 2, 8 and 50 are such numbers for high stability.
For leptons, n=1 and N=2 gives: 35n(N+1) = 105 MeV (muon)
Also for leptons, n=1 and N=50 gives 35n(N+1) = 1785 MeV (tauon)
For quarks, n=2 quarks per meson and N=1 gives: 35n(N+1) = 140 MeV (pion).
Again for quarks, n=3 quarks per baryon and N=8 gives: 35n(N+1) = 945 MeV (nucleon)
I've checked this for particles with lifespans above 10^{-23} second, and the model does correlate well with the other data: http://quantumfieldtheory.org/table1.pdf Obviously there's other complexity involved in determining masses. For example, as with the periodic table of the elements, you might get effects like isotopes, whereby different numbers of uncharged massive particles can give mass to a particular species, so that certain masses aren't integers. (For a long while, the mass of chlorine was held by some people to be a disproof of Dalton's theory of atomic weights.)
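As a numerical sanity check on the 35n(N+1) MeV formula, here is a minimal Python sketch evaluating the quoted lepton and hadron cases. The n and N assignments are the ones proposed above; the observed masses are standard values added only for comparison.

```python
# Sketch: evaluate the proposed mass formula
# M = m_e * n * (N + 1) / (2 * alpha) ~= 35 * n * (N + 1) MeV,
# using the n (core particle count) and N (massive field quanta)
# assignments given in the text.

M_E = 0.511           # electron rest mass-energy, MeV
ALPHA = 1 / 137.036   # fine structure constant

def predicted_mass(n, N):
    """Mass in MeV from the proposed formula M = m_e*n*(N+1)/(2*alpha)."""
    return M_E * n * (N + 1) / (2 * ALPHA)

# (name, n, N, observed mass in MeV) as assigned in the text
cases = [
    ("muon",    1, 2,  105.7),
    ("tauon",   1, 50, 1777.0),
    ("pion",    2, 1,  139.6),
    ("nucleon", 3, 8,  938.3),
]

for name, n, N, observed in cases:
    m = predicted_mass(n, N)
    print(f"{name:8s} n={n} N={N:2d}  predicted {m:7.1f} MeV, observed ~{observed} MeV")
```

The prefactor m_e/(2*alpha) evaluates to about 35.01 MeV, which is where the rounded 35 in the formula comes from.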
It’s just concerning that emphasis on ‘measuring’ and ‘explaining’ unobservable (isolated) quark masses deflects too much attention from the observable masses of leptons and hadrons.
nige cook – July 10, 2008
‘… Modern postmach Physics has and will continue to do very well to introduce and deal with primitive concepts that might even be unobservable as long as there are observable consequences…’
Hi goffredo,
There aren’t any observable consequences of calculating and using a value for the isolated mass of nonisolatable quarks.
All the other features of quarks apart from the calculated ‘isolated’ mass are real features that quarks have when in hadrons, not hypothetical values for isolated quarks, when quarks can’t be isolated. E.g., colour charge and spin are properties that quarks actually have when in hadrons. These are perfectly scientific because they refer to properties of quarks when in pairs or triplets inside mesons and baryons. What is less helpful in teaching the subject is to specify isolated masses for things that can’t be isolated.
nige cook – July 10, 2008
A pretty good example of the issue of the lack of observable consequences is epicycles. Ptolemy used observational measurements to calculate the sizes and speeds of epicycles of planets. Those parameters were solid numbers, based on observations. He then calculated the positions of planets based on this model. However, despite the sizes of the epicycles being based on fitting the model to real world data, and despite the calculations based on the epicycle parameters being checked by observations, at no point was the epicycle size parameter measuring anything real. (It’s the same fallacy as where theorists use hadronic string theory to work out that the strong force is due to strings with a given amount of tension, and then use that parameter to calculate other things. At no point is that parameter anything measurable or real, even though it is indirectly based on measured data, and predicts measurable data.)
nige cook – July 11, 2008
“… the parameters describing epicycles are absolutely real and every theory that had or has at least an infinitesimal chance to survive must be able to account for their values… ” – Lubos Motl
This is sadly incorrect, Lubos. Those parameters aren't real. The epicycle theory did manage to fit and predict the apparent positions of planets which could be observed in Ptolemy's time (150 A.D.), but it fails today to predict the distances of the planets from us, which can now be measured. It assumes that all the planets, the moon, and the sun orbit the Earth in circles, and also go around in small circles (epicycles) centred on that path during the orbit, in order to resolve the problems with circular orbits around the Earth.
http://everything2.com/title/Ptolemaic%2520system
“Ptolemy’s model was finally disproved by Galileo, when, using his telescope, Galileo discovered that Venus goes through phases, just like our moon does. Under the Ptolemaic system, however, Venus can only be either between the Earth and the Sun, or on the other side of the Sun (Ptolemy placed it inside the orbit of the Sun, after Mercury, but this was completely arbitrary; he could just as easily swapped Venus and Mercury and put them on the other side, or any combination of placements of Venus and Mercury, as long as they were always colinear with the Earth and Sun). If that was the case, however, it would not appear to go through all phases, as was observed. If it was between the Earth and Sun, it would always appear mostly dark, since the light from the sun would be falling mainly where we can’t see it. On the other hand, if it was on the far side, we would only be able to see the lit side. Galileo saw it small and full, and later large and crescent. The only (reasonable) way to explain that is by having Venus orbit the Sun.”
Other specific points on the error of epicycles:
1. The Moon was always a serious problem for the theory of epicycles. In order to predict where in the sky the Moon would be at any time using epicycles (instead of an elliptical orbit of the Moon as it goes around the Earth), Ptolemy's epicycles for the Moon had the unfortunate problem of making the Moon recede and approach the Earth regularly, to the extent that the apparent diameter of the Moon would vary by a factor of two. Since the Moon's apparent diameter doesn't vary by a factor of two, there is a serious disagreement between the correct elliptical orbit theory and Ptolemy's epicycle fit to observations of the path of the Moon around the sky. You can get epicycles to fit the positions of planets or the Moon in terms of latitude and longitude on the map of the sky, but they don't accurately model how far the planet or the Moon is from the Earth. The problem for the Moon also exists with the other planets, whose apparent diameters were too small to check against the epicycle theory in Ptolemy's time, when there were no telescopes.
2. If you knew the history of the solar system, you'd also be aware that the classical area of physics including Newton's laws stems ultimately from Tycho Brahe's observations of the planets. He obtained many accurate data points on the position of Mars, and Kepler tried analysing that data according to epicycles and gave up when it didn't provide a suitably accurate model. This is why he moved over to elliptical orbits. With lots of epicycles and many adjustable parameters, the landscape of possible models made from epicycles is practically infinite, so you can use the anthropic principle to select suitable epicycles and parameters to model planetary positions of longitude and latitude on the celestial sphere. Once you have determined nice fits to the empirical data by selecting suitable parameters for epicycle sizes (by analogy to the selectable size and shape moduli of the Calabi-Yau manifold when producing the string landscape), you can then 'predict' the paths of planets and the Moon around the sky as seen from the Earth. But it is not a good three-dimensional model; it fails to predict accurately how far the planets are from the Earth.
“Moreover, cook Nigel has also screwed the analogy with highenergy physics. His map is upside down.” – Lubos.
Maybe my map just appears to be upside down to you because you're standing on your head? Thanks for giving us your wisdom on how we can go on using epicycles and string theory.
Copy of a comment to Louise Riofrio’s blog (11 July 2008):
http://riofriospacetime.blogspot.com/2008/07/andromeda.html
Nice post. Andromeda is very interesting, since it’s relatively nearby and is a rare blueshifted galaxy, due to the fact that the Milky Way is being attracted to it so the two galaxies are approaching, not receding as is the case with other galaxies.
I love the fact that black holes exist in the centre of galaxies.
“If Black Holes seeded formation of these stars, their continued presence would keep the stars stable.”
Presumably the first stars, which began forming shortly after the big bang, grew very large because there were really massive clouds of hydrogen gas which collapsed to form them.
They fused hydrogen into heavier elements quickly, then exploded as supernovae (such as the one which created all the heavy elements in the solar system’s planets) or collapsed into black holes, which then seeded galaxy formation.
I realise that you are busy with spacesuit design and that other people like Kea and Carl Brannen are busy with Category Theory and Mass Operators/Koide formula theory development, but may I just summarise here some evidence about the possibility of fundamental particle cores being black holes and Hawking radiation as a gauge theory exchange radiation?
1. A black hole with the electron’s mass would by Hawking’s theory have an effective black body radiating temperature of 1.35*10^53 K. The Hawking radiation is emitted by the black hole event horizon which has radius R = 2GM/c^2.
2. The radiating power per unit area is the Stefan-Boltzmann constant multiplied by the kelvin temperature raised to the fourth power, which gives 1.3*10^205 watts/m^2. For the black hole event horizon spherical surface area, this gives a total radiated power of 3*10^92 watts.
3. For an electron to keep radiating, it must be absorbing a similar power. Hence it looks like an exchange-radiation theory, where there is an equilibrium. The electron receives 3*10^92 watts of gauge bosons and radiates 3*10^92 watts of gauge bosons. When you try to move the electron, you introduce an asymmetry into this normal equilibrium, and this asymmetry is felt as inertial resistance, in the way broadly argued (for a zero-point field) by people like Professors Haisch and Rueda. It also causes compression and mass increase effects on moving bodies, because of the snowplow effect of moving into a radiation field and suffering a net force.
When the 3*10^92 watts of exchange radiation hit an electron, the momentum imparted by the absorbed radiation is p = E/c, where E is the energy carried; and when the radiation is re-emitted back in the direction it came from (like a reflection), it gives a recoil momentum to the electron of a similar p = E/c, so the total momentum imparted to the electron from the whole reflection process is p = E/c + E/c = 2E/c.
The force imparted by successive collisions, as in the case of any radiation hitting an object, is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c = 2*10^84 Newtons, where P is power as distinguished from momentum p.
So the Hawking exchange radiation for black holes would be 2*10^84 Newtons.
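Steps 1 to 3 above can be reproduced directly from standard physical constants. A minimal Python sketch follows; the rounding differs slightly from the figures quoted in the text, but the orders of magnitude (~10^53 K and ~10^84 Newtons) come out as stated.

```python
import math

# Physical constants (SI)
HBAR = 1.054571e-34       # reduced Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
G = 6.674e-11             # gravitational constant
K_B = 1.380649e-23        # Boltzmann constant, J/K
SIGMA = 5.670374e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
M_ELECTRON = 9.10938e-31  # electron mass, kg

# Step 1: Hawking temperature, T = hbar c^3 / (8 pi G M k_B)
T = HBAR * C**3 / (8 * math.pi * G * M_ELECTRON * K_B)

# Event horizon radius, R = 2GM/c^2
R = 2 * G * M_ELECTRON / C**2

# Step 2: Stefan-Boltzmann radiated power over the horizon sphere
intensity = SIGMA * T**4                # W/m^2
power = intensity * 4 * math.pi * R**2  # total watts

# Step 3 (with the reflection argument): force F = dp/dt = 2P/c
force = 2 * power / C

print(f"T     ~ {T:.3e} K")      # ~1.35e53 K
print(f"R     ~ {R:.3e} m")
print(f"power ~ {power:.3e} W")  # ~4e92 W (text rounds to 3e92)
print(f"force ~ {force:.3e} N")  # ~3e84 N (text rounds to 2e84)
```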
Now the funny thing is that in the big bang, the Hubble recession of galaxies at velocity v = HR implies an outward acceleration of either
a = v/t = (HR)/(R/c) = Hc
or else
a = dv/dt = d(HR)/dt = H*dR/dt + R*dH/dt = Hv + R*0 = Hv = RH^2.
For distances near the horizon radius of the universe R = ct, both of these estimates for a are the same, although they differ for smaller distances.
However, since most of the mass is at great distances, an order of magnitude estimate is that this acceleration causes an outward force of
F = ma = Hcm = 7*10^43 Newtons.
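The outward-force estimate can also be sketched numerically. In the sketch below, H is taken as roughly 70 km/s/Mpc, and the effective mass of the universe is an assumed round figure of ~10^53 kg, chosen only to reproduce the order of magnitude quoted here; the comparison ratio against the Hawking exchange-radiation force is included too.

```python
# Sketch of the outward-force estimate a = Hc, F = ma.
# H ~ 70 km/s/Mpc; the effective mass of the universe (~1e53 kg)
# is an assumed order-of-magnitude figure, not a measured input.

MPC = 3.0857e22        # metres per megaparsec
C = 2.998e8            # speed of light, m/s
H = 70e3 / MPC         # Hubble parameter, s^-1

a = H * C              # outward acceleration, ~7e-10 m/s^2
m_universe = 1e53      # kg (assumed estimate)
F_out = m_universe * a # outward force, ~7e43 N

# Ratio of the Hawking exchange-radiation force (~2e84 N, from the
# black-hole electron calculation above) to this gravity-scale force:
F_hawking = 2e84
ratio = F_hawking / F_out  # ~3e40

print(f"a     ~ {a:.1e} m/s^2")
print(f"F_out ~ {F_out:.1e} N")
print(f"ratio ~ {ratio:.1e}")
```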
If that outward force causes an equal inward force which is mediated by gravitons (according to Newton's 3rd law of motion: equal and opposite reaction), then the cross-sectional area of an electron for graviton interactions (predicting the strength of gravity correctly) is the cross-sectional area of the black hole event horizon for the electron, i.e. Pi*(2GM/c^2)^2 m^2. (Evidence here.)
Now the fact that the black hole Hawking exchange radiation force calculated above is 2*10^84 Newtons, compared with 7*10^43 Newtons for quantum gravity, suggests that the Hawking black hole radiation is the exchange radiation of a force roughly (2*10^84)/(7*10^43) = 3*10^40 times stronger than gravity.
Such a force is of course electromagnetism.
So I find it quite convincing that the cores of the leptons and quarks are black holes which are exchanging electromagnetic radiation with other particles throughout the universe.
The asymmetry caused geometrically by the shadowing effect of nearby charges induces net forces which we observe as fundamental forces, while accelerative motion of charges in the radiation field causes the LorentzFitzGerald transformation features such as compression in the direction of motion, etc.
Hawking's heuristic mechanism of radiation emission has some problems for an electron, however, so the nature of the Hawking radiation isn't the high-energy gamma rays Hawking suggested. Hawking's mechanism for radiation from black holes is that pairs of virtual fermions can pop into existence for a brief time (governed by Heisenberg's energy-time version of the uncertainty principle) anywhere in the vacuum, such as near the event horizon of a black hole. Then one of the pair of charges falls into the black hole, allowing the other one to escape annihilation and become a real particle which hangs around near the event horizon until the process is repeated, so that you get the creation of real (long-lived) fermions of both positive and negative electric charge around the event horizon. The positive and negative real fermions can annihilate, releasing a real gamma ray with an energy exceeding 1.02 MeV.
This is a nice theory, but Hawking totally neglects the fact that in quantum field theory, no pair production of virtual electric charges is possible unless the electric field strength exceeds Schwinger's threshold for pair production of 1.3*10^18 v/m (equation 359 in Dyson's http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo's http://arxiv.org/abs/hep-th/0510040). If you check out renormalization in quantum field theory, this threshold is physically needed to explain the IR cutoff on the running coupling for electric charge. If the Schwinger threshold didn't exist, the running coupling or effective charge of an electron would continue to fall at low energy instead of becoming fixed at the known electron charge at low energies. This would occur because vacuum virtual fermion pair production would continue to polarize around electrons even at very low energy (long distances) and would completely neutralize all electric charges, instead of leaving the constant residual charge at low energy that we observe.
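The Schwinger threshold quoted here follows from E_c = m^2 c^3 / (e * hbar); a quick Python check with standard constants:

```python
# Schwinger critical field for electron-positron pair production:
# E_c = m_e^2 c^3 / (e hbar), the threshold quoted as ~1.3e18 v/m.

M_E = 9.10938e-31        # electron mass, kg
C = 2.99792458e8         # speed of light, m/s
E_CHARGE = 1.602177e-19  # elementary charge, C
HBAR = 1.054571e-34      # reduced Planck constant, J s

E_c = M_E**2 * C**3 / (E_CHARGE * HBAR)
print(f"Schwinger threshold ~ {E_c:.2e} V/m")  # ~1.3e18 V/m
```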
Once you include this factor, Hawking's mechanism for radiation emission starts to have a definite back-reaction on the idea, and his mathematical theory has to be modified. E.g., pair production of virtual fermions can only occur where the electric field exceeds 1.3*10^18 v/m, which is not the whole of the vacuum but just a very small spherical volume around fermions!
This means that black holes can’t radiate any Hawking radiation at all using Hawking’s heuristic mechanism, unless the electric field strength at the black hole event horizon radius 2GM/c^2 is in excess of 1.3*10^18 volts/metre.
That requires the black hole to have a relatively large net electric charge. Personally, from this physics I'd say that black holes the size of those in the middle of the galaxy don't emit any Hawking radiation at all, because there's no mechanism for them to have acquired a massive net electric charge when they formed. They formed from stars, which formed from clouds of hydrogen produced in the big bang, and hydrogen is electrically neutral. Although stars give off charged radiations, they emit as much negative charge (electrons and negatively charged ions) as positive charge (protons and alpha particles). So there is no way they can accumulate a massive electric charge. (If they did start emitting more of one charge than another, then as soon as a net electric charge developed, they'd attract back the particles whose emission had caused the net charge, and the net charge would soon be neutralized again.)
So my argument, physically from Schwinger's formula for pair production, is that the supermassive black holes in the centres of galaxies have a neutral electric charge, have zero electric field strength at their event horizon radius, and thus have no pair production there, and so emit no Hawking radiation whatsoever.
The important place for Hawking radiations is the fundamental particle, because fermions have an electric charge and at the black hole radius of a fermion the electric field strength way exceeds the Schwinger threshold for pair production.
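That the field strength at a fermion's event horizon far exceeds the Schwinger threshold is easy to check numerically. A sketch for the electron follows; treating the electron core as a black hole of the electron's mass and charge is the text's assumption, not established physics.

```python
# Coulomb field strength at the black-hole event horizon radius of
# an electron (R = 2Gm/c^2), versus the Schwinger pair-production
# threshold of ~1.3e18 V/m. Treating the electron core as a black
# hole of the electron's mass and charge is the text's assumption.

G = 6.674e-11            # gravitational constant
C = 2.99792458e8         # speed of light, m/s
M_E = 9.10938e-31        # electron mass, kg
E_CHARGE = 1.602177e-19  # elementary charge, C
K_COULOMB = 8.98755e9    # Coulomb constant, N m^2 C^-2
SCHWINGER = 1.3e18       # pair-production threshold, V/m

R = 2 * G * M_E / C**2                 # ~1.35e-57 m
E_field = K_COULOMB * E_CHARGE / R**2  # V/m at the horizon radius

print(f"R       ~ {R:.2e} m")
print(f"E field ~ {E_field:.2e} V/m")
print(f"exceeds Schwinger threshold: {E_field > SCHWINGER}")  # True
```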
In fact, the electric charge of the fermionic black hole modifies Hawking's radiation, because it prejudices which of the virtual fermions near the event horizon will fall in. Because fermion pairs are polarized in an electric field, the virtual positrons which form near the event horizon of a fermionic black hole will on average be closer to the black hole than the virtual electrons, so the virtual positrons will be more likely to fall in. This means that instead of virtual fermions of either electric charge sign falling at random into the black hole fermion, you instead get a bias in favour of virtual positrons and other virtual fermions of positive sign being more likely to fall into the black hole, and an excess of virtual electrons and other virtual negatively charged radiation escaping from the black hole event horizon. This means that a black hole electron will emit a stream of negatively charged radiation, and a black hole positron will emit a stream of positively charged radiation.
Although such radiation would appear to be massive fermions, because there is an exchange of such radiation in both directions simultaneously once an equilibrium of such radiation is set up in the universe (towards and away from the event horizon), the overlap of incoming and outgoing radiation will have some peculiar effects, turning the sub-relativistic fermionic radiation into relativistic bosonic radiation.
The reason why a fermion differs from a boson is down to spin, and can be grasped by the example of an electron and a positron annihilating into gamma rays and vice versa. When the fermionic spin-1/2 states of an electron and positron are combined, you get bosonic spin-1 radiation. Physically, what happens can be understood in terms of the magnetic field curls you get when electric charge propagates through space.
There is a back-reaction effect called self-inductance which arises when an electric charge is accelerated. The magnetic field produces a force which opposes acceleration. The increased inertial mass can be considered an effect of this. A massless charged radiation would have an infinite self-inductance, and wouldn't be able to propagate.
However, if you have two fermionic electric charges side by side, as in all examples of electricity, you get the emergence of a special phenomenon whereby energy propagates like bosonic radiation. E.g., the TEM wave logic step of electricity requires that you have two parallel conductors in a power 'transmission line'. At any moment where electric power is propagating, the electric charge in one conductor of the transmission line is opposite to that in the other conductor immediately adjacent. The mechanism for what happens is simply that the magnetic curl around the negative conductor is in the opposite direction to the magnetic curl around the positive conductor, so that the superimposed curls cancel each other, cancelling out the magnetic inductance and therefore allowing electric power to cease behaving like sub-relativistic massive fermions and to instead behave as light-velocity bosonic radiation: light-velocity electric power transmission is a case of two oppositely charged fermions (one in each conductor) combining in such a way that together they behave as a boson for the purpose of allowing light-velocity transmission of electric power. (This is clear to me from Catt's research in transmission lines, e.g. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4039191.)
Other examples of this kind of superposition are well known. For example, superconductivity occurs for exactly the same reason: you get Cooper pairs of electrons forming, which behave as bosons. Generally, in condensed matter physics (low temperature phenomena generally), pairs of half-integer spin fermions can associate to produce composite particles that have the properties of integer spin particles, bosons.
This is the mechanism by which Hawking gauge theory exchange radiations, while overlapping in space in the process of going to and coming from the event horizons of black holes, behave as bosons rather than as fermions.
The diagram here: https://nige.files.wordpress.com/2007/05/fig5.jpg?w=702&h=1065 shows in terms of electromagnetic field strengths the difference between Maxwell's imaginary photon, the real transverse path-integral-suggested photon of QED, and the exchange radiation composed of two fermion-like charges superimposed, which occurs in the case of both light-velocity electric power transmission and the exchange of Hawking radiation I've described above.
The diagram here: https://nige.files.wordpress.com/2007/05/fig4.jpg?w=505&h=548 shows how all the long-range forces (gravity and electromagnetism) arise physically from exchange radiations. E.g., why universal attraction comes from gravity, and why like charges repel and unlike charges attract, with the same force for unit charges as the repulsion of like charges. My current effort to distinguish what is correct from what is incorrect in quantum field theory is at this site, including calculations for quantum gravity. However, it's again in need of rewriting, updating and improving. (It's just as well that virtually everybody is negative about it, because if there was a fanfare of interest I'd probably soon be locked down to the theory in a particular state, and unable to keep reformulating it, finding out new details and problems and tackling them in my leisure time. It would be more stressful to have to work full-time on this. I'm developing an SQL database and ASP website at the moment, which is a welcome change from this crazy-looking but factually surprisingly solid physics.)
************
Further thoughts
U(1) describes the singlets of electroweak theory because it has only one element for charge, and SU(2) describes the doublets since it has two charges.
I think that U(1) is a flawed symmetry for electromagnetism. Electric charges come in doublets via pair production, so I don’t think that there are any real singlets. It would be nice to construct a theory of electromagnetism based on SU(2) which has two charges, positive and negative, just as in the weak isospin SU(2) symmetry you have two types of spin.
The meson consists of a quark and antiquark pair as the doublet in SU(2); or, in the case of leptons, the doublet would be a left-handed electron and a left-handed neutrino (both with equal and opposite amounts of weak isospin charge), with the right-handed spinning electron being the singlet with zero isospin charge but twice as much weak hypercharge as the left-handed electron.
I think that in a preon theory downquarks should be correlated with electrons. The left-handed electron has exactly the same weak isotopic charge as the left-handed downquark, -1/2. The left-handed electron has weak hypercharge and electric charge of -1 unit each, while the left-handed downquark has weak hypercharge of +1/3 unit and electric charge of -1/3 unit.
The fractional hypercharge and electric charge of the downquark has a pretty obvious mechanism in pair-production polarization when the simplest hadron for this purpose, the omega minus (containing three almost identical 'strange' downquark-like charges), is investigated. The electric field of a particle causes virtual fermion pair-production at high field strengths; the pairs can become briefly polarized by the electric field radially around the downquark, and as a result of this polarization they cancel out some of the primary electric field as observed from greater distances.
If you triple the charge that is causing the polarization, by having three electron-sized charges confined in a small volume, the polarization of the surrounding vacuum will be 3 times more intense, and so the shielding of the electric charge will be 3 times greater than in the case of a single charge. If the relative shielding factor is, say, N = 137 units for a single electron-sized charge, then the observed charge at a long distance is the bare core charge divided by N, e.g. 137/N = 137/137 = 1. For three strange quarks based on the same preons as electrons, we get a shielding factor of N = 3*137, because the stronger charge causes more polarization and thus more shielding of the electric field and apparent (observable) charge seen from a large distance, which becomes 137/N = 137/(3*137) = 1/3.
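The shielding arithmetic in this paragraph can be written out as a tiny sketch. Note that the bare-charge value of 137 units and the linear scaling of the shielding factor with the number of confined cores are the heuristic assumptions of the text, not established results.

```python
# Heuristic vacuum-polarization shielding arithmetic from the text:
# a bare core charge of 137 units (an assumption, not a measurement)
# is shielded by a factor N that scales with the number of confined
# electron-sized cores.

BARE_CHARGE = 137  # assumed bare core charge, in units of e

def observed_charge(n_cores):
    """Long-distance observable charge per core for n confined cores."""
    shielding = n_cores * 137  # polarization scales with total core charge
    return BARE_CHARGE / shielding

print(observed_charge(1))  # 1.0       (single electron-sized core)
print(observed_charge(3))  # 0.333...  (three strange quarks: 1/3 each)
```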
I can see why this heuristic argument isn't being taken seriously; firstly the mainstream is hostile to mechanisms, and secondly there are upquarks where the observable electric charge is +2/3 units. However, it's clear that electric charge is related to weak isotopic charge. An upquark has the same weak hypercharge as a downquark (+1/3) but has 1 unit more weak isotopic charge (+1/2 instead of -1/2), and this extra unit of weak isotopic charge directly shows in the 1 unit of extra electric charge that an upquark has over a downquark (+2/3 is 1 unit more than -1/3).
If you look at the a table of the first generation of standard model particles, e.g., https://nige.files.wordpress.com/2007/06/particles2.jpg?w=612&h=361 it’s clear that the weak isotopic charges for the three weak SU(2) gauge bosons are identical to the corresponding electric charges of those gauge bosons. So I think it’s a fact that SU(2) describes both electromagnetism (electric charge, weak hypercharge) as well as weak interactions. The differences between the two types of interaction (electromagnetism and weak) are down to the mass acquired by certain of the SU(2) gauge bosons, while electromagnetic gauge bosons remain massless. There is obviously quite a lot of mathematical work to be done to really get this to work in detail.
The key connection between the lagrangian equation for a field and the symmetry group is Noether’s theorem, but it’s pretty hard to grasp the physics in books like Ryder’s (which seems to be far more physical than Weinberg’s first two volumes of QFT maths). The things I want to understand are glossed over, and a lot of space is devoted to explaining in detail a lot of trivia which I don’t need because I don’t have much spare time or energy for irrelevant stuff which is just there to provide some basis for examination questions.
Ryder (2nd ed.) gives two sections on SU(2): firstly section 3.5 on the Yang-Mills field (which is pretty abstract maths and doesn't give any simple physics of the Noether theorem's application in getting from the SU(2) symmetry group to the lagrangian for the Yang-Mills field), and secondly section 8.5 on the Weinberg-Salam model, which is easier to grasp because it starts with the Dirac lagrangian equation, simplifies it to the massless case, then separates it into left- and right-handed components.
Then Ryder tries to introduce weak isospin as acting on left-handed particles, and claims that the resulting lagrangian is invariant under conditions which form the rotational symmetry of SU(2). He does all this in half a page, and I can't follow the physical reasoning. On the next page he discusses weak hypercharge modelled by U(1). The maths again isn't physically grounded in reality; it's just model building, and there is no reason for that particular way of building a model.
I think my next step will be to try to get hold of the original Yang and Mills paper on SU(2) and see if that helps to clear up my questions about Ryder’s approach.
*************************
SU(2)xSU(3) particle physics based on solid facts, giving quantum gravity predictions
Hubble's law: v = dR/dt = HR. => Acceleration: a = dv/dt = d(HR)/dt = (H*dR/dt) + (R*dH/dt) = Hv + 0 = RH^{2}, so 0 < a < 6*10^{-10} ms^{-2}. Outward force: F = ma. Newton's 3rd law: equal inward reaction force (via gravitons). Since non-receding nearby masses don't cause reaction, they cause an asymmetry, predicting gravity; in 1996 this theory predicted the 'cosmological acceleration' discovered in 1998.
Above: how the flux of Yang-Mills gravitational exchange radiation (gravitons) being exchanged between all the masses in the universe physically creates an observable gravitational acceleration field directed towards a cosmologically nearby or non-receding mass, labelled 'shield'. (The Hubble expansion rate and the distribution of masses around us are virtually isotropic, i.e., radially symmetric.) The mass labelled 'shield' creates an asymmetry for the observer in the middle of the sphere, since it shields the graviton flux: it doesn't have an outward force relative to the observer (in the middle of the circle shown), and thus doesn't produce a forceful graviton flux in the direction of the observer, according to Newton's 3rd law (action and reaction, an empirical fact, not a speculative assumption).
Hence, any mass that is not at a vast cosmological distance (with significant redshift) physically acts as a shield for gravitons, and you get pressed towards that shield by the unshielded flux of gravitons on the other side. Gravitons act by pushing; they have spin-1. In the diagram, r is the distance to the mass that is shielding the graviton flux from receding masses located at the far greater distance R. As you can see from the simple but subtle geometry involved, the effective size of the area of sky which is causing gravity due to the asymmetry of mass at radius r is equal to the cross-sectional area of the mass for quantum gravity interactions (detailed calculations, included later in this post, show that this cross-section turns out to be the area of the event horizon of a black hole for the mass of the fundamental particle which is acting as the shield), multiplied by the factor (R/r)^{2}, which is how the inverse square law, i.e., the 1/r^{2} dependence of gravitational force, occurs.
Because this mechanism is built on solid facts (expansion from redshift data that can't be explained any other way than recession, and experiment- and observation-based laws of nature such as Newton's), it is not just a geometric explanation of gravity but uniquely makes detailed predictions, including the strength of gravity, i.e., the value of G, and the cosmological expansion rate. It is a simple theory, as it uses spin-1 gravitons which exert impulses that add up to an effective pressure or force when exchanged between masses. It is quite a different theory to the mainstream model, which ignores graviton interactions with other masses in the surrounding universe.
The mainstream model in fact can't predict anything at all. It begins by ignoring all the masses in the universe except for two masses, such as two particles. It then represents gravity interactions between those two masses by a Lagrangian field equation which it evaluates by a Feynman path integral. It finds that if you ignore all the other masses in the universe, and just consider two masses, then spin-1 gauge boson exchange will cause repulsion, not attraction as we know occurs for gravity. It then 'corrects' the Lagrangian by changing the spin of the gauge boson to spin-2, which has 5 polarizations. This new 'corrected' Lagrangian, with 5 tensor terms for the 5 polarizations of the assumed spin-2 graviton, gives an always-attractive force between two masses when put into the path integral and evaluated. However, it doesn't say how strong gravity is, or make any predictions that can be checked. Thus, the mainstream first makes one error (ignoring all the graviton interactions between masses all over the universe) whose fatally flawed prediction (repulsion instead of attraction between two masses) it 'corrects' using another error, a spin-2 graviton.
So one reason why the actual spin-1 gravitons don't cause masses to repel is because the path integral isn't just a sum of interactions between two gravitational charges (composed of mass-energy) when dealing with gravity; it's instead a sum of interactions between all mass-energy in the universe. The reason why mainstream people don't comprehend this is that the mathematics being used in the Lagrangian and path integral is already fairly complex, and they can't readily include the true dynamics, so they ignore them and believe in a fiction instead. (There is a good analogy with the false mathematical epicycles of the Earth-centred universe. Whenever the theory was in difficulty, they simply added another epicycle to make the theory more complex, 'correcting' the error. Errors were actually celebrated, simply by being relabelled 'discoveries' that nature must contain more epicycles.)
Some papers here, home page here. CERN Document Server deposited draft preprint paper EXT-2004-007, 15/01/2004 (this is now obsolete and can’t be updated to the revised version, such as the discussion and mathematical proof below, because CERN now only accepts feeds through arXiv.org, which is blocked (even to some string theorists who work on non-mainstream ideas) by mainstream M-theory string ‘theorists’, who have no testable predictions and no checkable theory, and so are not theorists in a scientific sense). ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. Witten’s claimed ‘prediction of gravity’ is the spin-2 graviton, and it isn’t a falsifiable prediction, unlike all the predictions made and subsequently confirmed by the spin-1 gravity idea. To grasp Dr Woit’s assessment of the “not even wrong” spin-2 graviton idea, try the following passage:
“For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. [...] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.
Gravity gets weaker, relative to the inverse-square law, over immense distances in this universe. This is because gravity is mediated by gravitons which get redshifted, so the quanta lose energy when exchanged between masses which are receding from one another at relativistic velocities, i.e. masses well apart in this expanding universe; this reduces the effective value of G over immense distances. Additionally, from empirical facts (see the calculations below in this blog post), the mechanism of gravity depends on the recession of the surrounding masses around any point. This means that if general relativity is just a classical approximation to quantum gravity (due to the graviton redshift effect just explained, which implies that spacetime is not curved over cosmological distances), we have to treat spacetime as finite and not bounded, so that what you see is what you get, and the universe may be approximately analogous to a simple expanding fireball.
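The redshift argument can be sketched numerically; the 1/(1 + z) energy-dilution factor used below is my assumption (motivated by the E = hf argument used later in this post), not a formula stated here.

```python
# Sketch: a graviton exchanged with a mass receding at redshift z arrives
# with its energy E = hf reduced, so (on the assumption that the coupling
# scales with received graviton energy) the effective G is diluted.
def effective_coupling(G, z):
    """Assumed dilution: energy, hence coupling, falls as 1/(1 + z)."""
    return G / (1.0 + z)

G = 6.674e-11  # standard value of Newton's constant, m^3 kg^-1 s^-2
for z in (0.0, 1.0, 7.0):
    print(f"z = {z}: G_eff = {effective_coupling(G, z):.3e}")
```

At z = 0 (nearby masses) the full coupling survives; at large z the contribution tends to zero, which is the claimed reason gravity fails to decelerate the most distant recession.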
Masses near the real ‘outer edge’ of such a fireball (the radial distance in spacetime which corresponds to the time of the big bang, i.e. 13,700 million light-years distance) get an asymmetry in the exchange of gravitons, exchanging them on one side only: the side facing the core of the fireball, where the other masses are located. (Remember that since gravity doesn’t act over cosmological distances, due to graviton redshift in exchanges between receding masses, there is no spacetime curvature causing gravitation over such distances.)
Hence such masses tend simply to get pushed outward, instead of suffering the usual gravitational attraction, which is of course caused by shielding of the all-round graviton pressure. In such an expanding fireball, where gravitation is a reaction to the surrounding expansion due to exchange of gravitons, you get both expansion and gravitation as results of the same fundamental process: exchange of gravitons. The pressure of gravitons will cause attraction (due to mutual shadowing) between masses which are relatively nearby, but over cosmological distances the whole collection of masses will be expanding (masses receding from one another) due to the momentum imparted in the process of exchanging gravitons. This prediction was put forward in the October 1996 issue of Electronics World, two years before Perlmutter’s supernovae observations confirmed that the universe is not decelerating, contrary to the standard predictions of cosmology at that time (i.e., the expansion of the universe looks as if there is a small positive cosmological constant – of predictable magnitude – offsetting gravitational deceleration over cosmological distances).
I’ve been preparing a Google Video or YouTube video about physical mechanisms: the physical forces in the fireballs of the 1962 high-altitude nuclear tests (particularly the remarkable films of the fireball dynamics in the Bluegill test), and the exchange radiation which will make the material and figures in this post easier to grasp. It was a study of fireball phenomenology, the breakdown of general relativity due to a weakening of the gravity coupling constant in an expanding universe (gravitons exchanged between relativistically receding masses – quantum gravity charges – in an expanding universe are redshifted, reducing the effective strength of gravitational interactions in proportion to the redshift of the gravitons, just as for the visible light observed, since energy is related to frequency by E = hf), and the analogy to the big bang, which suggested the mechanism of gravity in 1996. In an air blast wave, Newton’s 3rd law – the equality of action and reaction forces – always holds. Initially, there is extremely high pressure throughout the fireball, communicating reaction forces in spherical symmetry: the northward portion of the shock wave exerts a net outward force equal to its pressure times its surface area, and the reaction force is found in the southward portion of the shock wave.
But after a while, the air in the shock front is so compressed that the density falls in the central region, which cools and loses pressure. Hence the central region can no longer mediate the reaction force of the shock wave in different directions. What happens at this stage is that a negative pressure wave, directed inward towards the centre of the explosion, develops; it has lower pressure but longer duration, allowing a reaction force to be exerted. A shock wave cannot exert outward pressure (and thus force, being equal to pressure times area) without satisfying Newton’s 3rd law of action and reaction. The reversed phase of the blast wave (with pressure acting towards the point of the explosion, i.e. suction or ‘negative pressure’ – below the ambient 14.7 psi/101 kPa normal air pressure) is vital for maintaining Newton’s 3rd law of motion in a shock wave.
The negative pressure phase consists of an inner shell of blast with a force directed inward in response to the outward force at the shock front. This feature is vital in implosion systems used to actually cause a nuclear explosion in the first place: the implosion relies on the fact that half the force of an explosion is initially directed inward within the mass of exploding material (the inwardtravelling shock wave reflects back when it reaches the centre, and the rebounded shock wave travels outward, but in the meantime it squashes very effectively anything placed at the core, like a lump of subcritical fissile material). Implosion is also a feature of the big bang:
The product rule of differentiation is: d(uv)/dx = (v*du/dx) + (u*dv/dx)
Hence the observationally based 1929 Hubble law, v = HR, differentiates as follows:
dv/dt = d(RH)/dt = (H*dR/dt) + (R*dH/dt)
The second term here, R*dH/dt, is zero because in the Hubble law v = HR the term H is a constant from our standpoint in spacetime, so H doesn’t vary as a function of R and thus it also doesn’t vary as a function of apparent time past t = R/c. In the spacetime trajectory we see as we look out to greater distances, R/t is always in the fixed ratio c, because when we see things at distance R the light from that distance has to travel that distance at velocity c to get to us, so when we look out to distance R we’re automatically looking back in time to time t = R/c seconds ago.
Hence R*dH/dt = R*dH/d[R/c] = Rc*dH/dR = Rc*0 = 0.
This is because dH/dR = 0. I.e., there is no variation of the Hubble constant as a function of observable spacetime distance R.
Thus, the acceleration of any distant, receding lump of matter as we perceive it in spacetime, is
a = dv/dt = d(RH)/dt = H*dR/dt = H*v = H*[RH] = R*H^2.
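The product-rule derivation above can be checked numerically: if v = HR with H constant, then dR/dt = HR has the solution R(t) = R0*exp(Ht), and a finite-difference derivative of v should recover a = RH^2. The values of H and R0 below are toy assumptions for illustration only.

```python
import math

# Numerical check of the derivation: with H constant and v = H*R,
# the acceleration dv/dt should equal R*H^2.
H = 2.3e-18      # assumed Hubble parameter, s^-1
R0 = 1.0e26      # assumed initial distance, m
dt = 1.0e12      # finite-difference time step, s (tiny compared with 1/H)

R = lambda t: R0 * math.exp(H * t)   # solution of dR/dt = H*R
v = lambda t: H * R(t)               # Hubble law

a_numeric = (v(dt) - v(0.0)) / dt    # finite-difference dv/dt at t = 0
a_formula = R(0.0) * H**2            # the claimed result a = R*H^2

print(a_numeric / a_formula)  # ratio very close to 1
```

The ratio differs from 1 only by the finite-difference truncation error of order H*dt, confirming that the R*dH/dt term really does vanish when H is constant.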
Now the outward acceleration, a, is very small: it reaches only about 6*10^{-10} ms^{-2} for the most distant receding objects. But because the mass of the receding universe is so big, this corresponds to an outward force on the order of 7*10^{43} Newtons. Newton’s 3rd law tells us there should be an equal and opposite reaction force. According to what is physically known about the possible particles and fields that exist, this inward reaction force might be carried by spin-1 gravitons (non-string theory gravitons; string theory hype supposes spin-2 gravitons), which physically cause all gravitational field effects and the observed general relativity effects (contraction, etc.) by exerting pressure as a quantum field of exchange radiation.
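These order-of-magnitude figures can be reproduced with assumed round values; H ~ 70 km/s/Mpc and a mass density of ~9.5*10^{-27} kg/m^3 are my assumptions for illustration, not figures given in the text.

```python
import math

# Order-of-magnitude check of a = R*H^2 and F = ma for the universe.
Mpc = 3.086e22            # metres per megaparsec
H = 70e3 / Mpc            # assumed Hubble parameter, s^-1
c = 3.0e8                 # speed of light, m/s

R = c / H                               # Hubble radius, m
a = R * H**2                            # outward acceleration, = c*H
rho = 9.5e-27                           # assumed mass density, kg/m^3
M = rho * (4.0 / 3.0) * math.pi * R**3  # mass within Hubble radius, kg
F = M * a                               # outward force, Newton's 2nd law

print(f"a = {a:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"F = {F:.1e} N")       # several times 1e43 N
```

With these inputs, a comes out near 7*10^{-10} ms^{-2} and F near 6*10^{43} N, consistent with the figures quoted above.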
When we calculate the universal gravitational constant, G, from this theory, we get a figure in good agreement with available experimental data. There are complexities, because what counts in spacetime for graviton exchange is the observable density of the universe as a function of distance/time past, which increases towards infinity as we look back to immense distances (approaching time zero); however, this massive increase in effective outward force is cancelled out by the fact that the reaction force is mediated by gravitons which are extremely redshifted from such locations, where the recession velocities are very close to the velocity of light (i.e., relativistic).
One way to imagine the mechanism for why an outwardaccelerating particle should fire back gravitons as a reaction force to satisfy Newton’s 3rd law of motion is very simple: walk down a corridor and observe what happens to air that vacates the region in front of you and fills in the region behind you as you walk. Or better, push a ball along while holding it underwater. There is a resistance due to motion against the water (which is a crude model for moving an electron or other object having rest mass in a graviton field in the vacuum of spacetime), which compresses the ball slightly in the direction of motion. There is then a flow of the field quanta (water in the analogy) around the particle from front to back. This flow permits things to move, and because the field flow – once set up after effort (against resistance) – has momentum, it adds inertia to the moving object. (Ships and submarines are hard to stop suddenly because they have extra momentum – not just the usual momentum, but momentum from the water field’s motion around them. This hints that the intrinsic momentum of any moving mass is due to a similar effect involving the vacuum graviton field flowing around individual fundamental particles. As Einstein pointed out, inertial and gravitational masses are indistinguishable.)
Hence, as a 70 kg (70 litre) person walks down a corridor at 1 m/s, some 70 litres of air moving at a net velocity of 1 m/s in the opposite direction flows into the void the person is vacating. (In internet discussions, some ingenious bigots claimed that when you walk, you attract air from behind, which follows you to fill the volume of space you are vacating. If that were true, the air pressure along the corridor would become ever more unequal, because of (1) air becoming compressed in front of you, instead of flowing around you to fill in the void behind, and (2) air pressure being reduced still further behind you as air expands to fill in the void. This doesn’t happen. In any case, the example with water makes it clear what happens: water flows from the front to the back of a moving object.)
If the object accelerates, the surrounding field responds similarly, provided the motion of the particles in it is fast enough to respond. (Air molecules have an average velocity of only 500 m/s, but spin-1 gravitons travel at 300 Mm/s.) Hence, if you have a long line of people walking in one direction only along a corridor, you have a current flowing in that direction, which is compensated for by a net flow of the surrounding field (the air) in the opposite direction. Although the individual air molecules are moving at about 500 m/s, the net flow of the bulk ‘field’ composed of air is equal to the speed of the current of people moving, while the net volume of the field which is effectively flowing is equal to the volume of the people who are moving.
Similarly, when matter moves away from us in the big bang, the graviton field around that matter responds by moving in the other direction at the same time, causing the graviton reaction force as described quantitatively by Newton’s 3rd law.
I’ll insert the video into a blog post on this site in the near future, along with a free PDF download link for the accompanying book. In the meantime, please make do with the posts on this page, especially this, this, this, this, this, this, this, this, this, this, this, and this.
To understand why mainstream hype of unchecked stringy theory, with its non-falsifiable speculative extra dimensions, multiverse/landscape, and so on, is destructive, see this link. The mechanism proved in detail below does work, although it is still very much in a nascent stage. The problems are (1) that it leads to interesting applications in so many directions in physics that it absorbs a great deal of time, and (2) that it is extremely unpopular, because ‘mechanisms’ are sneered at out of prejudice (in favour of mechanism-less mathematical ‘models’) and are regarded as ‘crazy’ by essentially all mainstream physicists, i.e. most professional physicists. People like LeSage and Maxwell (who developed a flawed mechanical model of space), with false, half-baked ideas, have permanently damaged the credibility of mechanisms in fundamental physics, not to mention the metaphysical (non-falsifiable) hidden variable ‘interpretation’ of quantum mechanics.
The absurdity of this situation is demonstrated by the fact that quantum field theory postulates gauge boson radiation being exchanged in the vacuum between charges in order to mediate force fields (i.e., to cause forces), yet the attitude is to believe in this without searching for the underlying physical mechanism! It’s exactly like religion, where you are allowed to believe things without investigating them scientifically. Moreover, the majority of people in the world actually want to hero-worship religious beliefs in science, in place of supporting accurate, predictive physical mechanisms based on solid facts: people today are using modern physics as an alternative religion. They have (1) abandoned the search for reality, (2) lied that it is not possible to understand physics by mechanisms (it is), and (3) embarked on a campaign to censor out the facts and replace them with false speculations. Differential equations describing smooth curvatures and continuously variable fields in general relativity and mainstream quantum field theory are wrong except for very large numbers of interactions, where statistically they become good approximations to the chaotic particle interactions which are producing the accelerations (spacetime curvatures, i.e. forces). See https://nige.wordpress.com/2007/07/04/metricsandgravitation/ and in particular Fig. 1 of the post https://nige.wordpress.com/2007/06/13/feynmandiagramsinloopquantumgravitypathintegralsandtherelationshipofleptonstoquarks/.
Think about air pressure as an analogy. Air pressure can be represented mathematically as a continuous force acting per unit area: P = F/A. However, air pressure is not a continuous force; it is due to impulses delivered by discrete, random, chaotic strikes by air molecules (travelling at average speeds of 500 m/s in sea-level air) against surfaces. If therefore you take a very small area of surface, you will not find a continuous uniform pressure P acting on it. Instead, you will find a series of chaotic impulses due to individual air molecules striking the surface! This is an example of how a mathematical fiction that is useful on large scales, like air pressure, loses its accuracy when applied on small scales, as is well demonstrated by Brownian motion. The motion of an electron in an atom is subject to the same thing, simply because the small size doesn’t allow large numbers of interactions to be averaged out. Hence, on small scales, the smooth solutions predicted by mathematical models are flawed. Calculus assumes that space and time are endlessly divisible, which is not true when calculus is used to represent a curvature (acceleration) due to a quantum field! Instead of the perfectly smooth curvature modelled by calculus, the path of a particle in a quantum field is affected by a series of discrete impulses from individual quantum interactions. The summation of all these interactions gives you something that is approximated in calculus by the ‘path integral’ of quantum field theory. The whole reason why you can’t predict deterministic paths of electrons in atoms, etc., using differential equations is that their applicability breaks down for individual quantum interaction phenomena. You should be summing impulses from individual quantum interactions to get a realistic ‘path integral’ to predict quantum field phenomena.
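The large-scale/small-scale point can be illustrated with a toy simulation (entirely my own construction, not from the text): the mean impulse on a surface patch is jagged when only a few molecules strike it, and smooths towards the continuum value as the number of strikes grows.

```python
import random

# Toy model: "pressure" on a patch as the average of discrete random
# molecular impulses. Few hits -> large fluctuations; many hits -> the
# smooth continuum value P = F/A emerges as a statistical average.
random.seed(1)

def mean_impulse(n_hits):
    # each hit delivers a random impulse with unit mean
    return sum(random.uniform(0.5, 1.5) for _ in range(n_hits)) / n_hits

for n in (10, 500, 20000):
    samples = [mean_impulse(n) for _ in range(200)]
    spread = max(samples) - min(samples)
    print(f"{n:>6} hits: spread of measured 'pressure' ~ {spread:.4f}")
```

The spread shrinks roughly as 1/sqrt(n), which is the usual statistical reason a chaotic micro-process looks like a smooth continuum field on large scales.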
The total and utter breakdown of mechanistic research in modern physics has instead led to a lot of nonsense, based on sloppy thinking, lack of calculations, and the failure to make checkable, falsifiable predictions and obtain experimental confirmation of them. The abusiveness and hatred directed towards people like myself by those ‘brane’-washed with failed ideas from Dr Witten et al. is not unique to modern physics. It’s a mixture of snobbish hatred of innovation based on simple ideas, and a lack of real interest in physics by people who claim to be physicists but are in fact only crackpot mathematicians.
Predicted fundamental force strengths, all observable particle masses, and cosmology from a simple causal mechanism of vector boson exchange radiation, based on the existing mainstream quantum field theory
Solution to a problem with general relativity: A YangMills mechanism for quantum field theory exchangeradiation dynamics, with prediction of gravitational strength, spacetime curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests
(For an introduction to quantum field theory concepts, see The physics of quantum field theory.)
‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p64.
Here’s an outline of the basic ‘idea’ (actually it is well-established, 100% factual evidence just assembled in a 100% new way; it is not merely a personal idea, not a speculation, not guesswork, not a pet ‘theory’, but scientific fact pure and simple) behind the new mechanistic physics involved (described in detail on this page and in more recent pages of this blog):
Galaxy recession velocity in spacetime (Hubble’s empirical law): v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = RH^{2}, so 0 < a < 6*10^{-10} ms^{-2}. Outward force: F = ma by Newton’s empirical 2nd law. Newton’s empirical 3rd law predicts an equal inward force (which, according to the possibilities in quantum field theory, will be carried by gravitons, the exchange radiation between gravitational charges in quantum gravity): but non-receding nearby masses don’t give rise to any reaction force according to this mechanism, so they act as shields and thus cause an asymmetry, producing gravity. This predicts the strength of gravity and electromagnetism, particle physics and cosmology. In 1996 it predicted the lack of gravitational deceleration at large redshifts.
The underlying symmetry group physics which follows from this mechanism is to replace the SU(2) x U(1) + Higgs sector of the Standard Model with simply a version of SU(2) in which the 2^{2} – 1 = 3 gauge bosons can exist in both massless and massive forms. Some field in the vacuum (different from the Higgs field in detail, but similar in that it provides rest mass to particles) gives masses to some of each of the 3 massless gauge bosons of SU(2), and the massive versions are the massive neutral Z, charged W– and charged W+ weak gauge bosons, just as in the Standard Model. However, the massless versions of Z, W– and W+ are the gauge bosons of gravity, negative electromagnetic fields, and positive electromagnetic fields, respectively.
The basic mechanism for electromagnetic repulsion is the exchange of massless W– gauge bosons between negative charges, or massless W+ gauge bosons between positive charges. The charges recoil apart because they get hit repeatedly by radiation emitted by the other charge. But for a pair of opposite charges, like a negative electron and a positive nucleus, you get attraction: because each charge can only interact with radiation from similar charges, the effect of opposite charges on one another is simply to shadow each other from radiation coming in from other charges in the surrounding universe. A simple vector force diagram (published in Electronics World in April 2003) shows that in this mechanism the magnitudes of the attraction and repulsion forces of electromagnetism are identical. The fact that electromagnetism is on the order of 10^{40} times as strong as gravity for fundamental charges (the precise figure depends on which fundamental charges are compared) is due to the fact that in this mechanism radiation is only exchanged between similar charges, so you get a statistical ‘random walk’ vector summation across the similar charges distributed in the universe. This was also illustrated in the April 2003 Electronics World article. Because gravity is carried by neutral (uncharged) gauge bosons, its net force doesn’t add up this way, so it turns out that gravity is weaker than electromagnetism by a factor equal to the square root of the number of similar charges of either sign in the universe. Since 90% of the universe is hydrogen, composed of two negative charges (electron and down-quark) and two positive charges (two up-quarks), it is easy to make approximate calculations of such numbers using the density and size of the universe.
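As a rough numerical sketch of this square-root claim: with assumed round values for the universe’s density, radius and the hydrogen mass (my assumptions, not figures from the text), sqrt(N) does come out near 10^{40}.

```python
import math

# Assumed round inputs: density ~9.5e-27 kg/m^3, Hubble radius ~1.3e26 m,
# hydrogen atom mass ~1.67e-27 kg.
rho = 9.5e-27
R = 1.3e26
m_H = 1.67e-27

M = rho * (4.0 / 3.0) * math.pi * R**3  # mass of observable universe, kg
N = M / m_H                              # approx. number of hydrogen atoms
ratio = math.sqrt(N)                     # claimed random-walk factor

print(f"N ~ {N:.1e}, sqrt(N) ~ {ratio:.1e}")
```

With these inputs N is of order 10^{79}-10^{80} and sqrt(N) of order 10^{40}, matching the quoted ratio of electromagnetic to gravitational force strengths to within the crudeness of the estimate.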
Obviously, massless charged radiation is a non-starter in classical physics because it won’t propagate, due to its infinite magnetic self-inductance; however, for Yang-Mills theory (exchange radiation causing forces) this objection doesn’t hold, because the transit of similar radiation in two opposite directions along the same path at the same time cancels out the magnetic field vectors, allowing propagation. In a different context, we see this effect every day in ordinary electricity, e.g. computer logic signals (Heaviside signals), which require two conductors, each carrying charged currents flowing in opposite directions, to enable a signal (or pulse, or logic step, or net energy flow) to propagate: the magnetic fields of the two charged currents (one on each conductor in the pair) cancel one another out, preventing the infinite self-inductance problem and thus allowing propagation of charged energy currents. Thus the analogy of electricity propagating in a pair of conductors when a switch is closed shows how charged exchange radiation can work.
Abstract
The objective is to unify the Standard Model and general relativity with a causal mechanism for gauge boson mediated forces which makes checkable predictions. In very brief outline: Hubble recession v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = dv/(dR/v) = v*dv/dR = vH = RH^{2}, and outward force F = ma ~ 10^{43} Newtons. Newton’s 3rd law implies an inward reaction force, which from the possibilities available seems to be carried by gauge boson radiation (gravitons); this predicts gravitational curvature, other fundamental forces, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, because they don’t present an outward force, so they act as a shield and cause an asymmetry that we experience as the attraction effect of gravity: see Fig. 1.
The symmetrical inward pressure of graviton radiation (see Fig. 2) exerts a pressure on masses (acting on the ‘Higgs field quanta’ which give mass, i.e. on the interaction cross-sectional areas of fundamental particles, not on the macroscopic surface area of a planet), which causes the gravitational contraction predicted by general relativity: the Earth’s radius is contracted by (1/3)MG/c^{2} = 1.5 mm by this graviton exchange radiation hitting masses, imparting momentum p = E/c, and then reflecting back (in the process causing another impulse on the mass, by the recoil effect, equal to p = E/c, so that the total imparted momentum is p = 2E/c). (This ‘reflection’ is not the literal mechanism: although a ball thrown against a wall can bounce back, a photon ‘reflected’ from a mirror actually undergoes a complex series of interactions, the sum of which – the path integral – is merely equivalent to a simple reflection: the photon is absorbed by the mirror and a new photon is then emitted. Similarly with gauge boson radiations, the interactions involved are far more complex in detail than a simple reflection, although reflection is a useful approximation to the total process under some circumstances.) Applying this to motion, we find that the same behaviour of the gravitational field causes the inertial force which resists acceleration, because of Einstein’s equivalence principle, whereby inertial mass = gravitational mass!
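The quoted 1.5 mm figure can be checked directly from (1/3)MG/c^{2} using standard values for the constants (the numerical constants below are assumed, not given in the text):

```python
# Check of the quoted Earth-radius contraction (1/3)MG/c^2.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

contraction = (1.0 / 3.0) * M_earth * G / c**2  # metres
print(f"contraction = {contraction * 1e3:.2f} mm")  # ~1.48 mm
```

The result is about 1.48 mm, consistent with the 1.5 mm quoted above.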
To understand the picture of writing the Hubble expansion rate as an expansion in a time dimension, think of time (age of universe) as 1/Hubble constant (until 1998 it was assumed to be 0.67/Hubble constant with the 2/3 factor due to gravitational deceleration, but that gravitational deceleration was debunked by supernovae observations made by Perlmutter and published in Nature that year; so either gravitons are redshifted over large cosmological distances and lose energy by E = hf, being thus unable to slow down the expansion of the universe, or else there is some “dark energy” which produces an outward acceleration that offsets the inward acceleration of gravity).
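As a quick check of the 1/H figure (assuming a modern Hubble constant of about 70 km/s/Mpc, a value not stated in the text):

```python
# Age of universe as 1/H, for an assumed H ~ 70 km/s/Mpc.
Mpc = 3.086e22          # metres per megaparsec
H = 70e3 / Mpc          # Hubble parameter, s^-1
seconds_per_year = 3.156e7

age_years = 1.0 / H / seconds_per_year
print(f"1/H ~ {age_years / 1e9:.1f} billion years")  # ~14.0
```

This gives roughly 14,000 million years, in line with the 13,700 million year age used elsewhere in this post, whereas 0.67/H would give only about 9,300 million years.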
If the Hubble constant was different in different directions, the age of the universe, 1/H, would be different in different directions. Hence the isotropy of the big bang we observe around us: there are three effective time dimensions, each corresponding to an expanding spatial dimension. (The redshift of radiation exchanged between receding masses in an expanding universe prevents thermal equilibrium being established, and therefore provides an endless heatsink.) Because of the isotropy, we see the 3 effective time dimensions as always being equal, so they are identical and can be represented by SO(3,1), hence we observe effectively 4 different dimensions including one of time and 3 of space.
Lunsford (discussed and cited below) has proved that the spin orthogonal group with 3 spatial and 3 time dimensions, SO(3,3), unifies gravity and electrodynamics correctly, without the reducibility problems of the old Kaluza-Klein unification. I’ve shown that this is reasonable because the 3 spatial dimensions are contracted by gravity in general relativity (for example, in general relativity the Earth’s radius is contracted by 1.5 millimetres), while the 3 time dimensions are continuously expanding: this means that the Hubble expansion should be written in terms of velocity as a function of time, not distance:
Remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^{2}R. So we have a real outward acceleration in Hubble’s law!
We then use Newton’s 2nd empirical law, F = ma, to estimate the outward force of the big bang, and then his 3rd empirical law to estimate the inward reaction force carried by gauge bosons exchanged between local and distant receding masses. This makes quantum gravity quantitative, and we can calculate the strength of gravity and much else from the resulting mechanism. This post concentrates on gravity’s mechanism.
The Physical Relationship between General Relativity and Newtonian gravity
(1) Newtonian gravity
Let’s begin with a look at the Newtonian gravity law F = mMG/r^{2}, which is based on empirical evidence, not a speculative theory (remember Newton’s claim: hypotheses non fingo!). The inverse-square law is based on Kepler’s empirical laws, which were obtained from Brahe’s detailed observations of the motion of the planet Mars. The mass dependence was more of a guess by Newton, since he didn’t actually calculate gravitational forces (he did not know, or even write the symbol for, G, which arrived long afterwards from the pen of Laplace). However, Newton’s other empirical law, F = ma, was strong evidence for a linear dependence of force on mass, and there was some evidence from the observation of the Moon’s orbit. The Moon was known to be about 250,000 miles away and to take about 30 days to orbit the Earth, so its centripetal acceleration could be calculated from a = v^{2}/r. This could confirm Newton’s law as follows: since 250,000 miles is about 60 times the radius of the Earth, the acceleration due to the Earth’s gravity should, from the inverse-square law, be 60^{2} times weaker at the Moon than at the Earth’s surface, where it is 9.8 m/s^{2}.
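Newton’s Moon test can be run numerically with modern values (the orbital radius, sidereal period and 60.2 Earth-radii figure below are assumed modern values, not the round numbers quoted above):

```python
import math

# Moon test: centripetal acceleration of the Moon vs. surface gravity
# diluted by the inverse-square law over ~60 Earth radii.
r_moon = 3.844e8          # Moon's orbital radius, m (~60.2 Earth radii)
T = 27.32 * 86400         # sidereal month, s
g_surface = 9.81          # gravitational acceleration at Earth's surface
earth_radii = 60.2        # Moon's distance in Earth radii

v = 2.0 * math.pi * r_moon / T
a_moon = v**2 / r_moon                       # measured: ~2.7e-3 m/s^2
a_predicted = g_surface / earth_radii**2     # inverse-square prediction

print(f"a_moon = {a_moon:.3e} m/s^2, predicted = {a_predicted:.3e} m/s^2")
```

The two figures agree to better than 1%, which is the check that was possible (more crudely) in Newton’s day.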
Hence it was possible to check the inverse-square law in Newton’s day. Newton also made a good guess at the average density of the Earth, which indicates G fairly accurately when combined with Galileo’s measurement of the gravitational acceleration at the Earth’s surface; applied also to the Moon (assumed to have a similar density to the Earth), this gives a very approximate justification for Newton’s assumption that gravitational force is directly proportional to the product of the two masses involved. Newton also worked out geometric proofs for using his law. For example, the mass of the Earth is not located at a point at its centre, but is distributed over a large three-dimensional volume. Newton proved that you can treat the entire mass of the Earth as concentrated at a point at its centre for the purpose of making calculations, a proof as clever as his demonstration that the inverse-square law applies to elliptical planetary orbits (Hooke showed that it applied to circular orbits, which is much easier). Newton treated the mass of the Earth as a series of uniform shells of small thickness, and proved that outside a shell, the gravitational field at any radius from its middle is identical to the field from an equal mass all located in a small lump at the middle. This proof also applies to the quantum gravity mechanism (below).
Cavendish produced a more accurate evaluation of G by measuring the twisting force (torsion) in a quartz fibre due to the gravitational attraction of two heavy balls of known mass located a known distance apart.
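As a hedged illustration of the scale of the effect Cavendish measured (the masses and separation below are approximate figures often quoted for his apparatus, used here as assumptions), the gravitational force on the torsion balance is tiny but measurable:

```python
# Gravitational force between one small and one large ball in a
# Cavendish-style torsion balance (assumed approximate values).
G = 6.674e-11                     # m^3 kg^-1 s^-2
m_small, m_large = 0.73, 158.0    # kg, assumed ball masses
r = 0.23                          # m, assumed centre-to-centre distance

F = G * m_small * m_large / r**2
print(f"F = {F:.2e} N")  # of order 1e-7 N
```

A force of order 10^{-7} N, about the weight of a grain of sand, is what the torsion fibre had to resolve; from the measured twist and the known masses and distance, G follows from F = GmM/r^{2}.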
(2) General relativity as a modification needed to include relativistic phenomena
Eventually failures in the Newtonian law became apparent. Because the orbits of planets are elliptical with the sun at one focus, the planets speed up when near the sun, which causes effects like time dilation and also increases their mass due to relativistic effects (this is significant for Mercury, which is closest to the sun and orbits fastest). Although this effect is insignificant over a single orbit, so it didn’t affect the observations of Brahe or Kepler’s laws upon which Newton’s inverse-square law was based, the effect accumulates and is substantial over a period of centuries, because the perihelion of the orbit precesses. Only part of the precession is due to relativistic effects, but it is still an important anomaly in the Newtonian scheme. Einstein and Hilbert developed general relativity to deal with such problems. Significantly, the failure of Newtonian gravity is most important for light, which is deflected by gravity when passing the sun by twice as much as that predicted by Newton’s a = MG/r^{2}.
Einstein recognised that gravitational acceleration and all other accelerations are represented by a curved worldline on a plot of distance travelled versus time. This is the curvature of spacetime; you see it as the curved line when you plot the height of a falling apple versus time.
Einstein then used tensor calculus to represent such curvatures by the Ricci curvature tensor, R_{ab}, and he tried to equate this with the source of the accelerative field, the tensor T_{ab}, which represents all the causes of accelerations such as mass, energy, momentum and pressure. In order to represent Newton’s gravity law a = MG/r^{2} with such tensor calculus, Einstein began with the assumption of a direct relationship such as R_{ab} = T_{ab}. This simply says that mass-energy is directly proportional to the curvature of spacetime. However, it is false, since it violates the conservation of mass-energy. To make it consistent with the experimentally confirmed conservation of mass-energy, Einstein and Hilbert in November 1915 realised that you need to subtract from T_{ab} on the right hand side the product of half the metric tensor, g_{ab}, and the trace, T (the sum of scalar terms across the diagonal of the matrix for T_{ab}). Hence
R_{ab} = T_{ab} − (1/2)g_{ab}T.
[This is usually rewritten in the equivalent form, R_{ab} − (1/2)g_{ab}R = T_{ab}.]
There is a very simple way to demonstrate some of the applications and features of general relativity. Simply ignore 15 of the 16 terms in the matrix for T_{ab}, and concentrate on the energy density component, T_{00}, which is a scalar (it is the first term in the diagonal of the matrix), so it is exactly equal to its own trace:
T_{00} = T.
Hence, R_{ab} = T_{ab} − (1/2)g_{ab}T becomes
R_{ab} = T_{00} − (1/2)g_{ab}T, and since T_{00} = T, we obtain
R_{ab} = T[1 − (1/2)g_{ab}]
The metric tensor g_{ab} = ds^{2}/(dx^{a}dx^{b}), and it depends on the relativistic Lorentzian factor, (1 − v^{2}/c^{2})^{1/2}, so in general g_{ab} falls from about 1 towards 0 as velocity increases from v = 0 to v = c.
Hence, for low speeds where, approximately, v = 0 (i.e., v << c), g_{ab} is generally close to 1 so we have a curvature of
R_{ab} = T[1 − (1/2)(1)] = T/2.
For high speeds where, approximately, v = c, we have g_{ab }= 0 so
R_{ab} = T[1 − (1/2)(0)] = T.
The curvature experienced for an identical gravity source if you are moving at the velocity of light is therefore twice the amount of curvature you get at low (non-relativistic) velocities. This is the explanation as to why a photon moving at speed c gets twice as much curvature from the sun’s gravity (i.e., it gets deflected twice as much) as Newton’s law predicts for low speeds. It is important to note that general relativity doesn’t supply the physical mechanism for this effect. It works quantitatively because it is a mathematical package which accounts accurately for the use of energy.
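The factor of two can be checked against the classic starlight-deflection figure. General relativity’s grazing deflection is 4GM/(c^{2}R), exactly twice the Newtonian value 2GM/(c^{2}R); standard constant values give the famous 1.75 arcseconds:

```python
import math

# Deflection of starlight grazing the Sun: GR predicts 4GM/(c^2 R), twice the
# Newtonian 2GM/(c^2 R), matching the factor-of-two curvature argument above.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m, solar radius (grazing impact parameter)
c = 2.998e8        # m/s

deflection_gr = 4 * G * M_sun / (c**2 * R_sun)      # radians
deflection_newton = 2 * G * M_sun / (c**2 * R_sun)  # radians

arcsec = math.degrees(deflection_gr) * 3600
print(arcsec)  # ~1.75 arcseconds
```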
However, it is clear from the way that general relativity works that the source of gravity doesn’t change when such velocity-dependent effects occur. A rapidly moving object falls faster than a slowly moving one because of the difference produced in the way the moving object is subject to the gravitational field, i.e., the extra deflection of light is dependent upon the Lorentz-FitzGerald contraction (the gamma factor already mentioned), which alters length (for an object moving at speed c there are no electromagnetic field lines extending along the direction of propagation whatsoever, only at right angles to the direction of propagation, i.e., transversely). This increases the amount of interaction between the electromagnetic fields of the photon and the gravitational field. Clearly, in a slow moving object, half of the electromagnetic field lines (which normally point randomly in all directions from matter, apart from minor asymmetries due to magnets, etc.) will be pointing in the wrong direction to interact with gravity, and so slow moving objects only experience half the curvature that fast moving ones do in a similar gravitational field.
Some issues with general relativity are focussed on the assumed accuracy of Newtonian gravity, which is put into the theory as the low speed, weak field solution normalization. As we shall show below, this is incompatible with a Yang-Mills (Standard Model type) quantum gravity theory, for reasons other than the renormalization problems usually assumed to exist. First, over very large distances in an expanding universe, the exchange of gravitons weakens gravity, because redshift reduces the frequency and thus the energy of the radiation dramatically over cosmological distances. This eliminates curvature over such distances, explaining the lack of gravitational deceleration in supernova data. This is falsely explained by the mainstream by adding an epicycle, i.e.,
(gravitational deceleration without redshift of gravitons in general relativity) + (acceleration due to small positive cosmological constant due to some kind of dark energy) = (observed, nondecelerating, recession of supernovae)
instead of the simpler quantum gravity explanation (predicted in 1996, two years ahead of observation):
(general relativity with G falling for large distances due to redshift of exchange gravitons reducing the energy of gravitational interactions) = (observed, nondecelerating, recession of supernovae).
So there is no curvature of spacetime at extremely big distances! On small scales, too, general relativity is false, because the tensor describing the source of gravity uses an average density to smooth out the real discontinuities resulting from the quantized, discrete nature of the particles which have mass! The smoothness of curvature in general relativity is false in general on small scales due to the input assumption required for the stress-energy tensor to work (it is a summation of continuous differential terms, not discrete terms for each fundamental particle). So on both very large and very small scales, general relativity is a fiddle. But this is not a problem when you understand the physical dynamics and know the limitations of the theory. It only becomes a problem when people take a lot of discrete fundamental particles representing a real mass causing gravity, average their masses over space to get an average density, then calculate the curvature from the average density, getting a smooth result and claiming that this proves that curvature is really smooth on small scales. Of course it isn’t. That argument is like averaging the number of kids per household and getting 2.5, then claiming that the average proves that one third of kids are born with only half of their bodies. But there is also a problem with quantum gravity as usually believed (see the previous post, and also this comment, on Cosmic Variance blog, by Professor John Baez).
Symmetry groups which include gravity
We will show how you can make checkable predictions for quantum gravity in this post. In the previous two posts, here and here, the inclusion of gravity in the Standard Model was shown to require a change of the electroweak force SU(2) x U(1) to SU(2) x SU(2), where the three electroweak gauge bosons (W_{+}, W_{−}, and Z_{0}) occur in both short-ranged massive versions and massless, infinite-range versions, with the charged ones producing the electromagnetic force and the neutral one producing gravitation; the issues in calculating the outward force of the big bang were also described. Depending on how the Higgs mechanism for mass will be modified, this SU(2) x SU(2) electroweak-gravity may be replaceable by a new version of a single SU(2). In the existing Standard Model, SU(3) x SU(2) x U(1), only one handedness of fundamental particles responds to the SU(2) weak force, so if you change the electroweak groups SU(2) x U(1) to SU(2) x SU(2) it can lead to a different way of understanding chiral symmetry and electroweak symmetry breaking. (See also this earlier post, which discusses quantum force effects as Hawking radiation emissions.)
The understanding of the correct symmetry model behind the Standard Model requires a physical understanding of what quarks are, how they arise, etc. For instance, bring 3 electrons close together and you start getting problems with the exclusion principle. But if you could somehow force a triad of such particles together, the net charge would be 3 times stronger than normal, so the vacuum shielding veil of polarized pair-production fermions will also be 3 times stronger, shielding the bare core charges 3 times more efficiently. (Imagine it like 3 communities combining their separate castles into one castle with walls 3 times thicker. The walls provide 3 times as much shielding; so as long as they can all fit inside the reinforced castle, all benefit.) This means that the long range (shielded) charge from each of the three charges of the triad will be 1/3 instead of 1. Since pair-production, and the polarization of electric charges cancelling out part of the electric field, are experimentally validated phenomena, this mechanism for fractional charges is real. Obviously, while it is easy to explain the down-quark this way, you need a detailed knowledge of electroweak phenomena like the weak charges of quarks compared to leptons (which have chiral features), and also the strong force, to explain physically what is occurring with up-quarks that have a +2/3 charge. Some interesting although highly abstract mathematical assaults on trying to understand particles have been made by Dr Peter Woit in http://arxiv.org/abs/hep-th/0206135, which generates all the Standard Model particles using a U(2) spin representation (see also his popular non-mathematical introduction, Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics), which can be compared to the more pictorial preon models of particles advocated by loop quantum gravity theorists like Dr Lee Smolin.
Both approaches are suggesting that there is a deep simplicity, with the different quarks, leptons, bosons and neutrinos arising from a common basic entity by means of symmetry transformations or twists of braids:
‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ – Wiki. (Hence there is a simple relationship between leptons and quarks; more later on.)
Introduction to the basis for the dynamics of quantum gravity
You can treat the empirical Hubble recession law, v = HR, as describing a variation in velocity with respect to observable distance R, or as a variation of velocity with respect to time past, because as we look to greater distances in the universe, we’re seeing an earlier era, because of the time taken for the light to reach us. That’s spacetime: you can’t have distance without time. Because distance R = ct where c is the velocity of light and t is time, Hubble’s law can be written v = HR = Hct which clearly shows a variation of velocity as a function of time! A variation of velocity with time is called acceleration. By Newton’s 2nd law, the acceleration of matter produces force. This view of spacetime is not new:
‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Herman Minkowski, 1908.
To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^{2}R.
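Plugging in a round value of the Hubble parameter (an assumed ~70 km/s/Mpc) gives the size of this acceleration at the Hubble radius R = c/H, where a = H^{2}R reduces to a = Hc:

```python
# Cosmological acceleration a = H^2 * R from the derivation above.
# At the horizon R = c/H this reduces to a = H*c.
c = 2.998e8        # m/s
Mpc = 3.086e22     # m, one megaparsec
H = 70e3 / Mpc     # Hubble parameter in s^-1 (assumed ~70 km/s/Mpc)

R = c / H          # Hubble radius, m
a = H**2 * R       # = H * c
print(a)           # ~7e-10 m/s^2, a tiny outward acceleration
```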
Radial distance elements are equal for the Hubble recession in all directions around us,
H = dv/dr = dv/dx = dv/dy = dv/dz
implying
t (the age of the universe) = 1/H = dr/dv = dx/dv = dy/dv = dz/dv
implying
dv/H = dr = dx = dy = dz
1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally decelerated, with no cosmological constant providing a repulsive long-range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H, where the factor 2/3 compensates for gravitational retardation.
This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical dimensions in time units like ‘light-years’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:
(dr)/c = (dx)/c = (dy)/c = (dz)/c
which can be written as:
dt_{r} = dt_{x} = dt_{y} = dt_{z}
So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we only need one time to describe the expansion of the universe. If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result, as some hostility to this new idea from traditionalists shows.
But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal: “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pages 161-177, which can be downloaded here. The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions describing matter are bound together but are contractable in general relativity. For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.
This sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity.
The outward motion of matter produces a force which for simplicity for the present (we will discuss correction factors for density variation and redshift effects below; see also this previous post) will be approximated by Newton’s 2nd law in the form
F = ma
= [(4/3)πR^{3}ρ].[dv/dt],
and since dR/dt = v = HR, it follows that dt = dR/(HR), so
F = [(4/3)πR^{3}ρ].[d(HR)/{dR/(HR)}]
= [(4/3)πR^{3}ρ].[H^{2}R.dR/dR]
= [(4/3)πR^{3}ρ].[H^{2}R]
= 4πR^{4}ρH^{2}/3,
where ρ is the density of the universe.
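As a rough numerical illustration, evaluating F = 4πR^{4}ρH^{2}/3 at the Hubble radius R = c/H, with the critical density taken as an assumed illustrative value for ρ, gives an enormous outward force:

```python
import math

# Outward force F = (4/3) * pi * R^4 * rho * H^2 from the derivation above.
# rho is taken as the critical density 3H^2/(8*pi*G) purely for illustration.
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
Mpc = 3.086e22   # m
H = 70e3 / Mpc   # s^-1 (assumed ~70 km/s/Mpc)

rho = 3 * H**2 / (8 * math.pi * G)   # critical density, ~9e-27 kg/m^3
R = c / H                            # Hubble radius, m

F = (4 / 3) * math.pi * R**4 * rho * H**2
print(F)  # of order 1e43 to 1e44 newtons
```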
Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – spin 1 gravitons which cause attraction by simply pushing things as this allows predictions as we shall see – from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram:
(force of gravity) = (total inward force).(cross-sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R^{2}/r^{2}) / (total spherical area with radius R).
Later in this post, this will be evaluated, proving that the shield’s cross-sectional area is the cross-sectional area of the event horizon for a black hole, π(2GM/c^{2})^{2}. But at present, to get the feel for the physical dynamics, we will assume this is the case without proving it. This gives
(force of gravity) = (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law, because HR = c gives R/c = 1/H, so
(force of gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
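The algebra in these last steps can be spot-checked numerically with arbitrary positive test values (this checks only the simplification, not the physics):

```python
import math
import random

# Numeric spot-check: the product of the outward force, the projected shield
# cross-section pi*(2GM/c^2)^2 * (R^2/r^2), divided by the sphere area
# 4*pi*R^2, compared against the quoted simplified forms.
random.seed(1)
R, rho, H, G, M, c, r = (random.uniform(1.0, 10.0) for _ in range(7))

F_out = (4 / 3) * math.pi * R**4 * rho * H**2
gravity = F_out * (math.pi * (2 * G * M / c**2) ** 2 * R**2 / r**2) / (4 * math.pi * R**2)

# First quoted form: (4/3) pi R^4 rho H^2 G^2 M^2 / (c^4 r^2)
simplified = (4 / 3) * math.pi * R**4 * rho * H**2 * G**2 * M**2 / (c**4 * r**2)
assert math.isclose(gravity, simplified)

# Second quoted form: substituting R = c/H cancels the R and c dependence
R_sub = c / H
gravity_sub = (4 / 3) * math.pi * R_sub**4 * rho * H**2 * G**2 * M**2 / (c**4 * r**2)
final = (4 / 3) * math.pi * rho * G**2 * M**2 / (H**2 * r**2)
assert math.isclose(gravity_sub, final)
print("algebra checks out")
```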
This result ignores both the density variation in spacetime (the distant, earlier universe having higher density) and the effect of redshift in reducing the energy of gravitons and weakening quantum gravity contributions from extreme distances, because the momentum of a graviton will be p = E/c, where E is reduced by redshift since E = hf.
Quantization of mass
However, it is significant qualitatively that this gives a force of gravity proportional not to M_{1}M_{2} but instead to M^{2}, because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. (Obviously ‘large masses’ are just composites of many fundamental particles.) M^{2} should only arise if the ultimate building blocks of mass (the ‘charge’ in a theory of quantum gravity) are quantized, because it shows that two units of mass are identical. This tells us about the way the mass-giving field particles, the ‘Higgs bosons’, operate. Instead of there being a cloud of an indeterminate number of Higgs bosons around a fermion giving rise to mass, what happens is that each fermion acquires a discrete number of such mass-giving particles.
(These ‘Higgs bosons’ surrounding the fermion acquire inertial and gravitational mass by interacting with the external gravitational field, which explains why mass increases with velocity but electric charge doesn’t. The core of a fermion doesn’t interact with the inertial/gravitational field, only with the massive Higgs bosons surrounding the core, which in turn do interact with the inertial/gravitational field. The core of the fermion only interacts with Standard Model forces, namely electromagnetism, weak force, and in the case of pairs or triads of closely confined fermions – quarks – the strong nuclear force. Inertial mass and gravitational mass arise from the Higgs bosons in the vacuum surrounding the fermion, and gravitons only interact with Higgs bosons, not directly with the fermions.)
This is explicable simply in terms of the vacuum polarization of matter and the renormalization of charge and mass in quantum electrodynamics, and is confirmed by an analysis of all relatively stable (half-life of 10^{−23} second or more) known particles, as discussed in an earlier post here (for a table of the mass predictions compared to measurements see Table 1). (Note that the simple description of polarization of the vacuum as two shells of virtual fermions, a positive one close to the electron core and a negative one further away, depicted graphically on those sites, is a simplification for convenience in depicting the net physical effect, for the purpose of understanding what is going on when making accurate calculations. Obviously, in reality, all the virtual positive fermions and all the virtual negative fermions will not be located in two shells; they will be all over the place, but on average the virtual charges of like sign to the real particle core will be further away from the core than the virtual charges of unlike sign.)
Table 1: Comparison of measured particle masses with predicted particle masses using a physical model for the renormalization of mass (both mass and electric charge are renormalized quantities in quantum electrodynamics, due to the polarization of pairs of charged virtual fermions in the electron’s strong electric field; see previous posts such as this). Anybody wanting a high quality, printable PDF version of this table can find it here. (The theory of masses here was inspired by an arXiv paper by Drs. Rivero and de Vries, and on a related topic I gather that Carl Brannen is using density operators to explain theoretically and extend the application of Yoshio Koide’s empirical formula, which states that the sum of the masses of the 3 leptons electron, muon and tau, multiplied by 1.5, is equal to the square of the sum of the square roots of the masses of those three particles. If that works, it may well be compatible with this mass mechanism. Although the mechanism predicts the possible quantized masses fairly accurately as first approximations, it is good to try to understand better how the actual masses are picked out. The mechanism which produced the table produced a formula containing two integers which predicts a lot of particles that are too short-lived to occur. Why are some configurations more stable than others? What selection principle picks out the proton as being particularly stable – if not completely stable? We know that the nuclei of heavy elements aren’t chaotic bags of neutrons and protons, but have a shell structure to a considerable extent, with ‘magic numbers’ which determine relative stability, and which are physically explained by the number of nucleons taken to completely fill up successive nuclear shells. Probably some similar effect plays a part to some extent in the mass mechanism, so that some configurations have magic numbers which are stable, while nearby ones are far less stable and decay quickly.
This, if true of the quantized vacuum surrounding fundamental particles, would lead to a new quantum theory of such particles, with similar gimmicks explaining the original ‘anomalies’ of the periodic table, viz. isotopes explaining non-integer masses, etc.)
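The Koide relation as stated in the Table 1 caption is easy to check against the measured charged-lepton masses (standard measured values, in MeV):

```python
import math

# Koide's empirical formula as stated above: 1.5 times the sum of the three
# charged-lepton masses equals the square of the sum of their square roots.
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86  # MeV, standard measured values

lhs = 1.5 * (m_e + m_mu + m_tau)
rhs = (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(lhs, rhs)  # the two sides agree to better than 0.01%
```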
This prediction doesn’t strictly demand perfect integers to be observable, because it’s possible that effects like isotopes exist, where different individuals of the same type of meson or baryon can be surrounded by different integer numbers of Higgs field quanta, giving non-integer average masses. (The number would be likely to actually change during a high-energy interaction, where particles are broken up.)
The early attempts of Dalton and others to work out an atomic theory were regularly criticised and even ridiculed because the measured mass of chlorine is 35.5 times the mass of hydrogen, i.e., nowhere near an integer! Here is a summary of the rules:
If a particle is a baryon, its mass should in general be close to an integer when expressed in units of 105 MeV (3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV).
If it is a meson, its mass should in general be close to an integer when expressed in units of 70 MeV (2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV).
If it is a lepton apart from the electron (the electron is the most complex particle), its mass should in general be close to an integer when expressed in units of 35 MeV (1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV).
This scheme has a simple causal mechanism in the quantization of the ‘Higgs field’ which supplies mass to fermions and bosons. By itself the mechanism just predicts that mass comes in discrete units, depending on how strong the polarized vacuum is in shielding the fermion core from the Higgs field quanta.
To predict specific masses (apart from the fact they are likely to be near integers if isotopes don’t occur), regular QCD ideas can be used. This prediction doesn’t replace lattice QCD predictions, it just suggests how masses are quantized by the ‘Higgs field’ rather than being a continuous variable.
Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ quanta giving mass to that fermion (lepton or baryon) or meson (boson) core.
From analogy to the shell structure of nuclear physics, where there are highly stable or ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2, and 3, and N = 1, 2, 8 and 50 to predict the most stable masses of fermions besides the electron, and also the masses of bosons (mesons):
For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.
For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.
For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
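The four quoted cases can be tabulated directly from the formula (the function name `predicted_mass` is mine, for illustration):

```python
# The mass rule stated above, mass = 35*n*(N+1) MeV, evaluated for the four
# quoted (n, N) configurations.
def predicted_mass(n, N):
    """Predicted mass in MeV: n core particles, N 'Higgs field' quanta."""
    return 35 * n * (N + 1)

predictions = {
    "muon":    predicted_mass(1, 2),   # 105 MeV
    "pion":    predicted_mass(2, 1),   # 140 MeV
    "nucleon": predicted_mass(3, 8),   # 945 MeV
    "tauon":   predicted_mass(1, 50),  # 1785 MeV
}
print(predictions)
```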
Particle mass predictions: the gravity mechanism implies quantized unit masses. As proved, the 1/α = 137.036… number is the electromagnetic shielding factor for any particle core charge by the surrounding polarised vacuum.
This shielding factor is obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order hbar. The uncertainty in momentum p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et where E is energy (Einstein’s massenergy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., product of energy and time equalling hbar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology. Now for the slightly clever bit:
px = hbar implies (when remembering p = mc, and E = mc^{2}):
x = hbar/p = hbar/(mc) = hbar*c/E
so E = hbar*c/x
when using the classical definition of energy as force times distance (E = Fx):
F = E/x = (hbar*c/x)/x
= hbar*c/x^{2}.
So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of 1/α: the bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and shield the electron’s charge any further.
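The comparison above can be checked with standard constant values; the ratio of hbar*c/x^{2} to Coulomb’s e^{2}/(4πε₀x^{2}) is 1/α by the very definition of the fine-structure constant, and is independent of x:

```python
import math

# Ratio of the uncertainty-principle force hbar*c/x^2 to Coulomb's law
# e^2/(4*pi*eps0*x^2) for two unit charges. Standard constant values.
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
e = 1.602e-19       # C
eps0 = 8.854e-12    # F/m

x = 1e-15           # 1 fm, but any distance gives the same ratio
F_bare = hbar * c / x**2
F_coulomb = e**2 / (4 * math.pi * eps0 * x**2)

ratio = F_bare / F_coulomb
print(ratio)  # ~137, the inverse fine-structure constant
```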
One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance. However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.
It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistically, scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)
Experimental evidence:
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arXiv: hep-th/0510040, p. 71.
Plus, in particular:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
(Levine and Koltick experimentally found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of 91 GeV or so. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at 80 GeV or so. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. So the strong force falls off in strength as you get closer via higher energy collisions, while the electromagnetic force increases! Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.)
Related to this exchange radiation are the Feynman path integrals of quantum field theory:
‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here
‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerlymysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here
As for the indeterminacy of electron locations in the atom, the fuzzy picture is not a result of multiple universes interacting but simply the Poincaré many-body problem, whereby Newtonian physics fails when you have more than 2 bodies of similar mass or charge interacting at once. (The failure is that you lose deterministic solutions to the equations, having to resort instead to statistical descriptions like the Schroedinger equation; in addition, annihilation-creation operators in quantum field theory produce many pairs of charges, random in location and time, in strong fields, deflecting particle motions chaotically on small scales, similarly to Brownian motion. This is the ‘hidden variable’ causing indeterminacy in quantum theory, not multiverses or entangled states.) Entanglement is a physically false interpretation of Aspect’s (and related) experiments: Heisenberg’s uncertainty principle only applies to slower-than-light particles like massive fermions. Aspect’s experiment stems from the Einstein-Podolsky-Rosen suggestion to measure the spins of two molecules; if they correlate in a certain way then that would prove entanglement, because molecular spins are subject to the indeterminacy principle. Aspect used photons instead of molecules. Photons cannot change polarization when measured, as they are frozen in nature due to their velocity, c. Therefore, the correlation of photon polarizations observed merely confirms that Heisenberg’s uncertainty principle does not apply to photons, rather than implying (on the belief that Heisenberg’s uncertainty principle does apply to photons) that the photons ‘must’ have an entangled polarization until measured! Aspect’s results in fact discredit entanglement.
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
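The loss of determinism that Poston and Stewart describe can be illustrated with a toy three-body integration: two runs differing by a 10⁻⁹ nudge to one body’s starting position diverge measurably. All masses, velocities, the timestep and the softening parameter below are arbitrary illustrative choices, not a claim about any physical system:

```python
import math

def accel(pos, eps2=0.01):
    """Softened pairwise Newtonian accelerations for three equal unit masses, G = 1."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy + eps2) ** 1.5
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
    return acc

def run(pos, vel, dt=0.001, steps=5000):
    """Crude Euler-Cromer integration of the planar three-body problem."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        a = accel(pos)
        for i in range(3):
            vel[i][0] += a[i][0] * dt
            vel[i][1] += a[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

vels = [[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]]
a = run([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]], vels)
b = run([[1e-9, 0.0], [1.0, 0.0], [0.5, 0.9]], vels)  # one body nudged by 1e-9

drift = math.dist(a[0], b[0])
print(drift)  # grows well beyond the 1e-9 perturbation as the integration time increases
```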
Gravity is basically a boson shielding effect, while the errors of LeSage’s infamous pushing-gravity model are due to its fermion radiation assumptions, which is why it got nowhere. Once again, gravity is a massless boson – integer spin – exchange radiation effect. LeSage (or Fatio, whose ideas LeSage borrowed) assumed that very small material particles – fermions in today’s language – were the force-causing exchange radiation (discussed by Feynman in the video here). Massless bosons don’t obey the exclusion principle and they don’t interact with one another, unlike massive bosons and all fermions (fermions do obey the exclusion principle, so they always interact with one another). Hence, LeSage’s attractive force mechanism is only valid for short-ranged particles like pions, which produce the strong nuclear attractive force between nucleons. Therefore, the ‘errors’ people found in the past when trying to use LeSage’s mechanism for gravity – the mutual interactions between the particles, which equalize the force in the shadow region after a mean free path – don’t apply to bosonic radiation, which doesn’t obey the exclusion principle. The short range of LeSage’s gravity becomes an advantage in explaining the pion-mediated strong nuclear force. LeSage – or actually Newton’s friend Fatio, whose ideas were allegedly plagiarised by LeSage – made a mess of it. The LeSage attraction mechanism is predicted to have a short range, on the order of a mean free path of scatter, before radiation pressure equalization in the shadows quenches the attractive force. This short range is real for nuclear forces, but not for gravity or electromagnetism:
(Source: http://www.mathpages.com/home/kmath131/kmath131.htm.)
The Fatio-LeSage mechanism is useless because it makes no prediction for the strength of gravity whatsoever, and it is plain wrong because it assumes gas molecules or fermions are the exchange radiation, instead of gauge bosons. The failing of the Fatio-LeSage mechanism is that its gravity force would be short-ranged, since the material pressure of the fermion particles (which bounce off each other due to the Pauli exclusion principle) or gas molecules causing gravity would get diffused into the shadows within a short distance; just as air pressure is only shielded by a solid for a distance on the order of a mean free path of the gas molecules. Hence, to get a rubber suction cup to be pushed strongly to a wall by air pressure, the wall must be smooth, and it must be pushed firmly. Such a short-ranged attractive force mechanism may be useful in making pion-mediated Yukawa strong nuclear force calculations, but it is not gravity.
(Some of the ancient objections to LeSage are plain wrong and in contradiction of Yang-Mills theories such as the Standard Model. For example, it was alleged that gravity couldn’t be the result of an exchange radiation force because the exchange radiation would heat up objects until they all glowed. This is wrong because the mechanisms by which radiation interacts with matter don’t necessarily transfer that energy into heat; classically all energy is usually degraded to waste heat in the end, but the gravitational field energy cannot be directly degraded to heat. Masses don’t heat up just because they are exchanging radiation, the gravitational field energy. If you drop a mass and it hits another mass hard, substantial heat is generated, but this is an indirect effect. Basically, many of the arguments against physical mechanisms are bogus. For an object to heat up, the charged cores of the electrons must gain and radiate heat energy; but the gravitational gauge boson radiation isn’t being exchanged with the electron bare core. Instead, the fermion core of the electron has no mass, and since quantum gravity charge is mass, the lack of mass in the core of the electron means it can’t interact with gravitons. The gravitons interact with some vacuum particles like ‘Higgs bosons’, which surround the electron core and produce inertial and gravitational forces indirectly. The electron core couples to the ‘Higgs boson’ by electromagnetic field interactions, while the ‘Higgs boson’ at some distance from the electron core interacts with gravitons. This indirect transfer of force can smooth out the exchange radiation interactions, preventing that energy from being degraded into heat. So such objections – if correct – would also have to debunk the Standard Model, which is based on Yang-Mills exchange radiation and is well tested experimentally.
Claiming that exchange radiation would heat things up until they glowed is similar to the Ptolemy followers claiming that if the Earth rotated daily, clouds would fly over the equator at 1000 miles/hour and people would be thrown off the ground! It’s a political-style junk objection and doesn’t hold up to any close examination in comparison to experimentally-determined scientific facts.)
When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have alpha shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is:
M_{Z}α^{2}/(1.5 × 2π) = 0.51 MeV
If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass:
M_{Z}α/(2π) = 105.7 MeV
The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is:
M_{e}n(N + 1)/(2α) = 35n(N + 1) MeV.
(For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles, like quarks, sharing a common overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest this be dismissed as ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember that unlike Dalton we have a physical mechanism, and below we make additional predictions and tests for all the other observable particles in the universe, and compare the results to experimental measurements. There is a similarity in the physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + α/(2π) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here. Also note that Schwinger’s correction is only the first (but by far the biggest numerically, and thus the most important, allowing the magnetic moment to be accurately predicted to 6 significant figures) of an infinite series of correction terms involving higher powers of α for more complex vacuum field interactions. Each of these corrections is depicted by a different Feynman diagram. (Basically, quantum field theory is a mathematical correction for the probability of different reactions. The more classical and obvious interactions generally have the greatest probability by far, but stranger interactions occasionally occur in addition, so these also need to be included in calculations, which then give a prediction which is statistically very accurate.)
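These mass relations are easy to check numerically; M_Z and α below are standard measured values, and the formulas are exactly those stated above:

```python
import math

M_Z   = 91_187.6       # Z boson mass in MeV
alpha = 1 / 137.036    # fine structure constant at low energy

m_electron = M_Z * alpha**2 / (1.5 * 2 * math.pi)  # electron formula above: ~0.51 MeV
m_muon     = M_Z * alpha / (2 * math.pi)           # muon formula above: ~105.7 MeV
base       = 0.511 / (2 * alpha)                   # M_e/(2*alpha): the ~35 MeV unit

print(round(m_electron, 3))  # 0.515 (measured: 0.511 MeV)
print(round(m_muon, 1))      # 105.9 (measured: 105.7 MeV)
print(round(base, 2))        # 35.01
```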
This kind of gravitational calculation also allows us to predict the gravitational coupling constant, G, as will be proved below. We know that the inward force is carried by gauge boson radiation, because all forces are due to gauge boson radiation according to the Standard Model of particle physics, which is the best-tested physical theory of all time and has made thousands of accurately confirmed predictions from an input of just 19 empirical parameters (don’t confuse this with the bogus supersymmetric standard model, which even in its minimal form requires 125 adjustable parameters and has a large landscape of possibilities, making no definite or precise predictions whatsoever). The Standard Model is a Yang-Mills theory in which the exchange of gauge bosons between the relevant charges for the force (i.e., colour charges for quantum chromodynamic forces, electric charges for electric forces, etc.) causes the force.
What happens is that Yang-Mills exchange radiation pushes inward, coming from the surrounding, expanding universe. Since spacetime, as recently observed, isn’t boundless (there’s no observable gravity retarding the recession of the most distant galaxies and supernovae, as discovered in 1998, so there is no curvature at the greatest distances), the universe is spherical and is expanding without slowing down. The expansion is caused by the physical pressure of the gauge boson radiation. This radiation exerts momentum p = E/c. Gauge boson radiation is emitted towards us by matter which is receding: the reason is Newton’s 3rd law. Because, as proved above, the Hubble recession in spacetime is an acceleration of matter outwards, the receding matter has an outward force by Newton’s 2nd empirical law, F = ma, and this outward force has an equal and opposite reaction, just like the exhaust of a rocket. The reaction force is carried by gauge boson radiation.
What, you may ask, is the mechanism behind Newton’s 3rd law in this case? Why should the outward force of the universe be accompanied by an inward reaction force? I dealt with this in a paper in May 1996, made available via the letters page of the October 1996 issue of Electronics World. Consider the source of gravity, the gravitational field (actually gauge boson radiation), to be a frictionless perfect fluid. As lumps of matter, in the form of the fundamental particles of galaxies, accelerate away from us, they leave in their wake a volume of vacuum which was previously occupied but is now unoccupied. The gravitational field doesn’t ignore spaces which are vacated when matter moves: instead, the gravitational field fills them. How does this occur?
What happens is like the situation when a ship moves along. It doesn’t suck in water from behind it to fill its wake. Instead, water moves around from the front to the back. In fact, there is a simple physical law: a volume of water equal to the ship’s displacement moves continuously in the opposite direction to the ship’s motion.
This water fills in the void behind the moving ship. For a moving particle, the gravitational field of spacetime does the same. It moves around the particle. If it did anything else, we would see the effects of that: for example, if the gravitational field piled up in front of a moving object instead of flowing around it, the pressure would increase with time and there would be drag on the object, slowing it down. The fact that Newton’s 1st law, inertia, is empirically based tells us that the vacuum field does flow frictionlessly around moving particles instead of slowing them down. The vacuum field does, however, exert a net force when an object accelerates; this increases the mass of the object and causes a flattening of the object in the direction of motion (FitzGerald-Lorentz contraction). However, this is purely a resistance to acceleration; there is no drag on motion unless the motion is accelerative.
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all anti-slipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
(Consider motion in the Dirac sea, which is incompressible owing to the Pauli exclusion principle: all states are filled; this predicted antimatter successfully. Nobody wants to hear of modelling physical effects of particles moving in the Dirac sea. Why? A good analogy is the particle-and-hole theory used in semiconductor electronics, i.e. solid state physics. Now plug in cosmology: both positive and negative real charges are streaming away from us in all directions. Hence virtual charges in the Dirac sea will stream inward. Moving fermions can’t occupy the same space as virtual fermions; they get shoved out of the way due to the Pauli exclusion principle. It is pretty obvious that the outward force of matter in the expanding universe is balanced by an equal inward Dirac sea force, according to Newton’s 3rd law. Similarly, in a corridor, a person of 70 litres volume moving at velocity v is compensated for by 70 litres of fluid air moving at the same speed in the opposite direction to the person’s motion. This is pretty obvious because if the surrounding fluid didn’t displace around the person to fill in the volume they are vacating, there would be a vacuum left behind them, and the 14.7 psi air pressure in front would exert 144 × 14.7 ≈ 2,100 pounds of pressure per square foot on the person, which would prevent the person from being able to walk. It is absolutely crucial for the person that air is a fluid which flows around and fills in the space being vacated behind. The lack of this mechanism explains why you need to apply substantial force to remove large suction plungers from smooth surfaces against air pressure. However, to my cost, I have found that this argument using the air pressure analogy or Dirac sea analogy is fruitless. Mainstream crackpots claim that it is all wrong, and by deliberately misunderstanding the analogy they can create endless rows which have nothing to do with the point, the gravitational mechanism.
As an analogy to this misunderstanding of a simple point, think about Feynman’s remark that energy was misunderstood even by the author of a physics school textbook who claimed that ‘energy’ makes everything go. Taking up Feynman’s argument: if you calculate the energy of the air in your room, the air molecules have a mean velocity of about 500 m/s, and there is 1.2 kg of air per cubic metre of your room. Let’s say you are in a small room with 10 cubic metres of air in it, i.e. 12 kg of air. The kinetic energy that air possesses is half the mass multiplied by the square of the mean speed, i.e., 1.5 MJ. However, that ‘energy’ is useless to you unless you have a way of extracting it. You can’t power your laptop from the energy of air pressure and temperature! You could of course use it like a battery if you had a big vacuum chamber with a fan powering a generator at a hole in the wall of the vacuum chamber, so that the inrushing air would turn the fan and generate electricity. But the power it takes to create such a vacuum is more than the energy you can possibly get out of the collapsing vacuum. So the simple idea of ‘energy’ is misleading to mainstream crackpots. What counts is not gross energy, but usable energy! This is why most of the gauge boson radiation energy has nothing to do with the energy we use. Because the gauge boson radiation energy, such as ‘gravitons’, comes from all directions, most of it is not useful energy and cannot do work. Only the small asymmetries in it result in work, by creating the fundamental forces we experience!)
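The arithmetic in this example checks out:

```python
mass_air = 1.2 * 10    # kg of air in a 10 cubic metre room (1.2 kg per cubic metre)
v_mean   = 500.0       # m/s, rough mean molecular speed
kinetic_energy = 0.5 * mass_air * v_mean**2
print(kinetic_energy)  # 1500000.0 J, i.e. the 1.5 MJ stated -- gross, not usable, energy
```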
‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp. 32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)
Fig. 2: The general all-round pressure from the gravitational field does of course produce physical effects. The radiation is received by a mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is a compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance MG/(3c²) = 1.5 mm for the Earth; this was calculated by Feynman using general relativity in his famous Feynman Lectures on Physics. The reason why nearby, local masses shield the force-carrying radiation exchange, causing gravity, is that the distant masses in the universe are in high-speed recession, while the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from a local, non-receding mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you rather than exchanging gauge bosons with you, so you get pushed towards it. This is why apples fall.
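The quoted 1.5 mm radial contraction for the Earth follows directly from the formula MG/(3c²), using standard values for the constants:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
c = 2.998e8     # speed of light, m/s

contraction = M * G / (3 * c**2)
print(contraction * 1000)  # ~1.48 mm, matching the ~1.5 mm Feynman quotes
```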
There is very little shielding area (fundamental particle shielding cross-sectional areas are small compared to the Earth’s area), so the Earth doesn’t block all of the gauge boson radiation being exchanged between you and the masses in the receding galaxies beyond the far side of the Earth. The shielding by the Earth is done by the fundamental particles in it, specifically the fundamental particles which give rise to mass (supposed to be some form of Higgs bosons which surround fermions, giving them mass) by interacting with the gravitational field of exchange radiation. Although each local fundamental particle stops the gauge boson radiation completely over its shielding cross-sectional area, most of the Earth’s volume is devoid of fundamental particles because they are so small. Consequently, the Earth as a whole is an inefficient shield. There is little probability of different fundamental particles in the Earth being directly behind one another (i.e., overlapping of shielded areas) because they are so small. Consequently, the gravitational effect from a large mass like the Earth is just the simple sum of the contributions from the fundamental particles which make the mass up, so the total gravity is proportional to the number of particles, which is proportional to the mass.
The point is that nearby masses, which are not receding from you significantly, don’t fire gauge boson radiation towards you, because there is no reaction force! However, they still absorb gauge bosons, so they shield you, creating an asymmetry. You get pushed towards such masses by the gauge bosons coming from the direction opposite to the mass. For example, standing on the Earth, you get pushed down by the asymmetry; the upward beam of gauge bosons coming through the Earth is very slightly shielded. The shielding effect is very small, because it turns out that the effective cross-sectional shielding area of an electron (or other fundamental particle) for gravity is equal to πR^{2} where R = 2GM/c^{2}, the event horizon radius of a black hole of the electron’s mass. This is a result of the calculations, as is the prediction of the Newtonian gravitational parameter G! Now let’s prove it.
Approach 1
Referring to Fig. 1 above, we can evaluate the gravity force (which is the proportion of the total force indicated by the dark-shaded cone; the observer is in the middle of the diagram at the apex of each cone). The force of gravity is not simply the total inward force, which is equal to the total outward force. Gravity is only the proportion of the total force which is represented by the dark cone.
The total force, as proved above, is F = 4πR^{4}ρH^{2}/3. The fraction of this which is represented by the dark cone is equal to the volume of the cone (XR/3, where X is the area of the end of the cone), divided by the volume (4πR^{3}/3) of the sphere of radius R (the radius of the observable spacetime universe defined by R = ct = c/H). Hence,
Force of gravity = (4πR^{4}ρH^{2}/3).(XR/3)/(4πR^{3}/3)
= R^{2}ρH^{2}X/3,
where the area of the end of the cone, X, is observed in Fig. 1 to be geometrically equal to the area of the shield, A, multiplied by (R/r)^{2}:
X = A(R/r)^{2}.
Hence the force of gravity is R^{2}ρH^{2}[A(R/r)^{2}]/3
= (1/3)R^{4}ρH^{2}A/r^{2}.
(Of course you get exactly the same result if you take the fraction of the total force delivered in the cone to be the area of the base of the cone, X, divided by the surface area, 4πR^{2}, of the sphere of radius R.)
If we assume that the shield area is A = π(2GM/c^{2})^{2}, i.e., the cross-sectional area of the event horizon of a black hole, then setting the formula above for the force of gravity equal to the Newtonian law, F = mMG/r^{2}, with m = M and c/R = H, gives the prediction that
G = (3/4)H^{2}/(ρπ).
This is of course equal to twice the false amount you get from rearranging the ‘critical density’ formula of general relativity (without a cosmological constant). What is more interesting is that we do not need to assume that the shield area is A = π(2GM/c^{2})^{2}. The critical density formula, and other cosmological applications of general relativity, are false because they ignore the quantum gravity dynamics which become important on very large scales due to the recession of masses in the universe: the gravitational interaction is a product of the cosmological expansion, since both are caused by gauge boson exchange radiation. The radiation pushes masses apart over large, cosmological distance scales, while pushing things together on small scales. This is because the uniform gauge boson pressure between masses causes them to recede from all surrounding masses and fill the expanding volume of space, like raisins in an expanding cake receding from one another, where the gauge boson radiation pressure is represented by the pressure of the dough as it expands. There is no contradiction whatsoever between this effect and the local gravitational attraction which occurs when two currants are close enough that there is no dough between them and plenty of dough around them, pushing them towards one another like gravity.
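The factor-of-two relation between this result and the rearranged critical-density formula holds identically, whatever values of H and ρ are assumed; the values below are purely illustrative:

```python
import math

H   = 2.27e-18   # s^-1, an assumed Hubble parameter (~70 km/s/Mpc)
rho = 9.5e-27    # kg/m^3, an assumed local density (illustrative only)

G_predicted = (3 / 4) * H**2 / (rho * math.pi)  # the result derived here
G_critical  = (3 / 8) * H**2 / (rho * math.pi)  # rearranged critical-density formula

print(G_predicted)               # the predicted gravitational constant for these inputs
print(G_predicted / G_critical)  # exactly 2, independent of H and rho
```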
We get the same result by an independent method, which does not assume that the shield area is the event horizon cross section of a black hole. Now we shall prove it.
Approach 2
As in the above approach, the outward force of the universe is 4πR^{4}ρH^{2}/3, and there is an equal inward force. The fraction of the inward force which is shielded is now calculated as the mass, Y, of those atoms in the shaded cone in Fig. 1 which actually emit the gauge boson radiation that hits the shield, divided by the mass of the universe.
The important thing here is that Y is not simply the total mass of the universe in the shaded cone. (If it were, Y would be the density of the universe multiplied by volume of the cone.)
That total mass inside the shaded cone of Fig.1 is not important because part of the gauge boson radiation it emits misses the shield, because it hits other intervening masses in the universe. (See Fig. 3.)
The mass in the shaded cone which actually produces the gauge boson radiation which we are concerned with (that which causes gravity) is equal to the mass of the shield multiplied up geometrically by the ratio of the area of the base of the cone to the area of the shield, i.e., Y = M(R/r)^{2}, because of the geometric convergence of the inward radiation from many masses within the cone towards the center. This is illustrated in Fig. 3.
Hence, the force of gravity is:
(4πR^{4}ρH^{2}/3)Y/[mass of universe]
= (4πR^{4}ρH^{2}/3).[M(R/r)^{2}]/(4πR^{3}ρ/3)
= R^{3}H^{2}M/r^{2}.
Comparing this to Newton’s law F = mMG/r^{2} gives us
G = R^{3}H^{2}/[mass of universe]
= (3/4)H^{2}/(ρπ).
Fig. 3: The mass multiplication scheme basis of Approach 2.
So we get precisely the same result as in the previous method, where we assumed that the shield area of an electron was the cross-sectional area of the black hole event horizon! This result for G has been produced entirely without the need for an assumption about what numerical value to take for the shielding cross-sectional area of a particle. Yet it is the same result as that derived above in the previous method when assuming that a fundamental particle has a shielding cross-sectional area for gravity-causing gauge boson radiation equal to the event horizon of a black hole. Hence, this result justifies and substantiates that assumption. We get two major quantitative results from this study of quantum gravity: a formula for G, and a formula for the cross-sectional area of a fundamental particle in gravitational interactions.
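The algebra of Approach 2 can be spot-checked numerically with arbitrary values for R, ρ, H, M and r:

```python
import math, random

random.seed(1)
R, rho, H, M, r = (random.uniform(1.0, 10.0) for _ in range(5))

total_force   = 4 * math.pi * R**4 * rho * H**2 / 3   # outward (and inward) force
mass_universe = 4 * math.pi * R**3 * rho / 3
Y = M * (R / r)**2                                    # effective mass in the cone

force = total_force * Y / mass_universe               # Approach 2 gravity force
assert math.isclose(force, R**3 * H**2 * M / r**2)    # matches the simplified form

G = R**3 * H**2 / mass_universe
assert math.isclose(G, (3 / 4) * H**2 / (rho * math.pi))
print("both identities hold")
```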
The exact formula for G, including photon redshift and density variation
The toy model above began by assuming that the inward force carried by the gauge boson radiation is identical to the outward force represented by the simple product of mass and acceleration in Newton’s 2nd law, F = ma. In fact, taking the density of the universe to be the local average around us (at a time of 14,000 million years after the big bang) is an error, because the density increases as we look back in time with increasing distance, seeing earlier epochs which have higher density. This effect tends to increase the effective outward force of the universe, by increasing the density. In fact, the effective mass would go to infinity unless there were another factor which tends to reduce the force imparted by gravity-causing gauge bosons from the greatest distances. This second effect is redshift. The problem of how to evaluate the extent to which these two effects partly offset one another is discussed in detail in the earlier post on this blog, here. It is shown there that the effective inward force should take some more complex form, so that the inward force is no longer simply F = ma but some integral (depending on the way that the redshift is modelled, and there are several alternatives) like
F = ma = mH^{2}r
= ∫(4πr^{2}ρ)(1 − rc^{−1}H)^{−3}(1 − rc^{−1}H)H^{2}r [1 + {1.1 × 10^{13}(H^{−1} − r/c)}^{−1}]^{−1} dr
= 4πρc^{2} ∫ r[{c/(Hr)} − 1]^{−2} [1 + {1.1 × 10^{13}(H^{−1} − r/c)}^{−1}]^{−1} dr.
where ρ is the local density, i.e., the density of spacetime at 14,000 million years after the big bang. I have not completed the evaluation of such integrals (some of them give an infinite answer, so it is possible to rule those out as either wrong or missing some essential factor in the model). However, an earlier idea, taking account of the rise in density with increasing spacetime around us while at the same time taking account of the redshift as a divergence of the universe, is to set up a more abstract model.
Density variation with spacetime and the divergence of matter in the universe (causing the redshift of gauge bosons by an effect which is quantitatively similar to gauge boson radiation being ‘stretched out’ over the increasing volume of space while in transit between receding masses in the expanding universe) can be modelled by the well-known equation for mass continuity (based on the conservation of mass in an expanding gas, etc.):
dρ/dt + ∇.(ρv) = 0
Or: dρ/dt = −∇.(ρv)
Where the divergence term is
∇.(ρv) = [{d(ρv)_{x}/dx} + {d(ρv)_{y}/dy} + {d(ρv)_{z}/dz}]
For the observed spherical symmetry of the universe we see around us,
d(ρv)_{x}/dx = d(ρv)_{y}/dy = d(ρv)_{z}/dz = d(ρv)_{R}/dR
where R is radius.
Now we insert the Hubble equation v = HR:
dρ/dt = −∇.(ρv) = −∇.(ρHR) = −[{d(ρHR)/dR} + {d(ρHR)/dR} + {d(ρHR)/dR}]
= −3d(ρHR)/dR
= −3ρH dR/dR
= −3ρH.
So dρ/dt = −3ρH. Rearranging:
−3H dt = (1/ρ) dρ. Integrating:
−∫3H dt = ∫(1/ρ) dρ.
The solution is:
3Ht = (ln ρ_{1}) – (ln ρ). Using the base of natural logarithms e to get rid of the ln’s:
e^{3Ht} = ρ_{1}/ρ
Because H = v/R = c/[radius of universe, R] = 1/[age of universe, t] = 1/t, we find:
e^{3Ht} = ρ_{1}/ρ = e^{3(1/t)t} = e^{3}.
Therefore
ρ_{1} = ρe^{3} ~ 20.0855ρ, i.e. the effective density ρ_{1} is e^{3} times the local density ρ.
Therefore, if this analysis is a correct abstract model for the combined effect of graviton redshift (due to the effective ‘stretching’ of radiation as a result of the divergence of matter across spacetime caused by the expansion of the universe) and the density variation of the universe across spacetime, our earlier result of G = (3/4)H^{2}/(ρπ) should be corrected for spacetime density variation and redshift of gauge bosons, to:
G = (3/4)H^{2}/(ρπe^{3}),
which is a factor of ~10 smaller than the rearranged traditional ‘critical density’ formula of general relativity, G = (3/8)H^{2}/(ρπ). Therefore, this theory predicts gravity quantitatively and checkably, and it dispenses with the need for an enormous amount of unobserved dark matter. (There is clearly some dark matter, since neutrinos are known to have some mass, but this can be assessed from the rotation curves of spiral galaxies and other observational checks.)
Experimental confirmation for the black hole size as the cross-sectional area for fundamental particles in gravitational interactions
In addition to the theoretical evidence above, there is independent experimental evidence. If the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and it has a magnetic field which is a torus (see Electronics World, April 2003).
Experimental evidence for why an electromagnetic field can produce gravity effects involves the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right-hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since the electrons have a kinetic energy of half the product of their mass and the square of their velocity (a drift velocity of typically 1 mm/s for a 1 A current).
So the energy current flows into the capacitor at light speed. Take the simplest capacitor: two parallel conductors separated by a vacuum dielectric (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out, since the reflected energy has a magnetic field vector curling the opposite way to that of the inflowing energy. (If you have a fully charged, ‘static’ conductor, it contains an equilibrium of similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field, as observed.)
The important thing is that the energy keeps going at light velocity in a charged conductor: it can never slow down. This proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken over to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core, as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).
You can prove this if you discharge any conductor x metres long, charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse which has a height of v/2 volts and a duration of 2x/c seconds. The apparently ‘static’ energy at v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must proceed to and reflect from one end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long (2x/c seconds in duration), instead of a pulse at v volts and x metres long (x/c seconds in duration), which is what you would expect if the electromagnetic energy in the capacitor were static and drained out at light velocity by all flowing towards the exit.
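The two counter-propagating energy currents can be illustrated with a toy discrete simulation (a sketch only; the cell count and voltage are arbitrary made-up values, and this is not a model of any particular oscilloscope measurement):

```python
# Toy discrete model: a charged line holds two counter-propagating energy
# currents, each at v/2. The far end (last cell) is open and reflects; the
# sampled end (cell 0) is matched and absorbs. Discharge then yields a v/2
# pulse lasting 2N steps (i.e. 2x/c for a line of length x), not v for x/c.
N = 100     # cells along the conductor (arbitrary)
v = 10.0    # charged potential, volts (arbitrary)

right = [v / 2.0] * N   # energy current moving away from the sampled end
left = [v / 2.0] * N    # energy current moving toward the sampled end

output = []
for _ in range(3 * N):
    output.append(left[0])          # voltage sampled at the exit this step
    left = left[1:] + [right[-1]]   # shift toward exit; open far end reflects
    right = [0.0] + right[:-1]      # shift away from exit; matched end
                                    # injects nothing back

print(output[:3])                        # [5.0, 5.0, 5.0]: pulse height v/2
print(sum(1 for s in output if s > 0))   # 200 steps = 2N, i.e. duration 2x/c
```

The sampled pulse is flat at v/2 for exactly 2N steps and zero thereafter, matching the sampling-oscilloscope observation described above.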
This was investigated by Catt, who used it to design the first crosstalk-free (glitch-free) wafer-scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while complaining that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get far enough into mathematical physics to sort out the electron.
It’s really interesting to investigate why classical (not quantum) electrodynamics is totally false in many ways: Maxwell’s model is wrong. Some calculations of quantum gravity based on a simple, empirically based model (no ad hoc hypotheses) yield evidence (which needs to be independently checked) that the proper size of the electron is the black hole event horizon radius.
There is also the issue of a chicken-and-egg situation in QED, where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges must therefore arise from the electric field lines of the virtual photons being continually exchanged.
How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way only, because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor carries an equal current in the opposite direction.
Hence you end up with the conclusion that the electroweak sector of the Standard Model is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2) with three massless gauge bosons: positive and negatively charged massless bosons mediate electromagnetism and a neutral gauge boson (a photon) mediates gravitation. See Fig. 4.
Fig. 4: The SU(2) electrogravity mechanism. Think of two flak-jacket-protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses, since they come from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!
This predicts the right strength of electromagnetism relative to gravity, because the charged gauge bosons will cause the effective potential of those fields, in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics), to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight-line summation will on average encounter similar numbers of positive and negative charges, as they are randomly distributed, so a linear summation over the charges between which gauge bosons are exchanged cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation. See Fig. 5.
Fig. 5: The charged gauge boson mechanism, and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.
The price of the random-walk statistics needed to describe such a zigzag summation (avoiding opposite charges!) is that the net force is not approximately 10^{80} times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zigzag inefficiency of the sum: about 10^{40} times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^{40}/10^{80} = 10^{-40} as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^{80} randomly distributed charges, electromagnetism, as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign), is 10^{40} times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are scattered at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t: it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps.
If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for individual drunkard’s walks, there is the factual solution that a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges (Fig. 5).
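The square-root scaling claimed for the drunkard’s walk is easy to check with a quick Monte Carlo sketch (the charge count and trial count below are small arbitrary values, purely for illustration):

```python
import random
import statistics

# Monte Carlo sketch of the drunkard's-walk summation: the net sum of N
# randomly signed unit contributions averages ~0 across many walks, but a
# single walk typically has magnitude ~sqrt(N), the factor claimed above.
random.seed(42)
N = 2_500        # number of +1/-1 contributions per walk (sqrt(N) = 50)
trials = 400     # number of independent walks

sums = [sum(random.choice((-1, 1)) for _ in range(N)) for _ in range(trials)]

mean = statistics.mean(sums)                           # ~0: directions cancel
rms = statistics.mean(x * x for x in sums) ** 0.5      # ~sqrt(N) = 50

print(mean, rms)
```

The mean over many walks vanishes (the vector sum of random directions), while the root-mean-square displacement of a single walk is close to sqrt(N), as the argument above requires.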
Experimentally checkable consequences of this gravity mechanism, and consistency with known physics
1. Universal gravitational parameter, G
G = (3/4)H^{2}/(ρπe^{3}), derived in stages above, where e^{3} is the cube of the base of natural logarithms (the correction factor due to the effects of redshift and density variation in spacetime), is a quantitative prediction. In the previous post here, the best observational inputs for the Hubble parameter H and the local density of the universe ρ were identified: ‘The WMAP satellite in 2003 gave the best available determination: H = 71 ± 4 km/s/Mparsec = 2.3 × 10^{-18} s^{-1}. Hence, if the present age of the universe is t = 1/H (as suggested by the 1998 data showing that the universe is expanding as R ~ t, i.e. no gravitational retardation, instead of the Friedmann-Robertson-Walker prediction for critical density of R ~ t^{2/3}, where the 2/3 power is the effect of curvature/gravity in slowing down the expansion), then the age of the universe is 13,700 ± 800 million years. … The Hubble Space Telescope was used to estimate the number of galaxies in a small solid angle of the sky. Extrapolating this to the whole sky, we find that the universe contains approximately 1.3 × 10^{11} galaxies, and to get the density right for our present time after the big bang we use the average mass of a galaxy at the present time to work out the mass of the universe. Taking our Milky Way as the yardstick, it contains about 10^{11} stars, and assuming that the sun is a typical star, the mass of a star is 1.9889 × 10^{30} kg (the sun has 99.86% of the mass of the solar system). Treating the universe as a sphere of uniform density and radius R = c/H, with the above-mentioned value for H we obtain a density for the universe at the present time (~13,700 million years) of about 2.8 × 10^{-27} kg/m^{3}.’
Putting H = 2.3 × 10^{-18} s^{-1} and ρ = 2.8 × 10^{-27} kg/m^{3} into G = (3/4)H^{2}/(ρπe^{3}) gives G = 2.2 × 10^{-11} m^{3} kg^{-1} s^{-2}, which is one third of the experimentally determined value, G = 6.673 × 10^{-11} m^{3} kg^{-1} s^{-2}. This factor-of-3 discrepancy is within the error bars for the estimates of the density, because of uncertainties in estimating the average mass of a galaxy. To put the accuracy of this prediction into perspective, try reading the statement by Eddington (quoted at the top of this blog post): how many other theories, based entirely on observably verified facts like Hubble’s law and Newton’s laws, predict the strength of gravity? Alternatively, compare it to the classical (and incorrect) ‘critical density’ prediction from general relativity (which ignores the mechanism of gravitation), which rearranges to give a formula for G that is e^{3}/2, or about 10, times bigger; thus the critical density is 3.3 times bigger than the experimental data.
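For readers who want to reproduce the arithmetic, the quoted figure follows directly from the stated inputs (a sketch; H and ρ are the rough observational estimates discussed above):

```python
import math

# Reproduce the prediction G = (3/4) H^2 / (rho * pi * e^3) with the
# inputs quoted in the text.
H = 2.3e-18      # Hubble parameter, s^-1
rho = 2.8e-27    # estimated local density, kg/m^3

G_predicted = 0.75 * H ** 2 / (rho * math.pi * math.e ** 3)
G_measured = 6.673e-11

print(G_predicted)               # ~2.2e-11 m^3 kg^-1 s^-2
print(G_measured / G_predicted)  # ~3, the factor discussed in the text
```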
This is actually an unfair comparison, because the rough estimate for the density is about 3 times too high. Most astronomers suggest that the observable density is 5-20% of the critical density, i.e., 10% with a factor of 2 error limit. This would put the density at ρ = 10^{-27} kg/m^{3}, and our prediction is then exact, within a factor of 2 experimental error limit. The abundance of dark matter is not experimentally measured. There is some observational evidence for dark matter, and theoretically there are some solid reasons why there should be such matter in a dark, non-luminous form (neutrinos have mass, as do black holes). The mainstream takes the critical density formula from general relativity and the measured density for luminous matter, and uses the disagreement to claim that the difference is dark matter. That argument is weak, because general relativity is in error for cosmological purposes through ignoring quantum gravity effects which become important on large scales in an expanding universe (i.e., redshift of gravitons weakening the force of gravity over large distances, the nature of the Yang-Mills exchange radiation mechanism for gravity, in which gravity is a result of radiation exchange with the other masses in the expanding universe, etc.). Another argument for a lot of dark matter is the flattening of galactic rotation curves, but if the final theory of quantum gravity departs from general relativity and Newtonian gravity, it could potentially resolve this problem (the departure will be at large distances, because gravitons are redshifted, and there could be some significant graviton shielding effect from the immense amount of mass in a galaxy; such effects are trivial in the solar system).
Professor Sean Carroll writes a lot about cosmology, and is the author of a very useful book on general relativity. In writing about the discovery of direct evidence for dark matter in his blog post http://cosmicvariance.com/2006/08/21/darkmatterexists/ and others, he does highlight some useful arguments. He starts by stating, without evidence, that 5% of the universe is ordinary matter, 25% dark matter and 70% dark energy. He then explains that the direct evidence for dark matter proves that mainstream cosmologists are not fooling themselves. The problem is that the direct evidence for dark matter doesn’t say how much dark matter there is: it’s not quantitative. It does not allow any confirmation of the theoretical guesswork behind his statement that there is 5 times as much dark matter as visible matter. He does then go on to discuss whether some kind of ‘modified Newtonian dynamics’, rather than dark matter, could resolve the problems – and he writes that he would prefer some objective resolution of that type, rather than in effect inventing ‘dark matter’ epicycles as convenient fixes which cannot be readily checked even in principle – but no definite proposal is discussed which is really concrete and addresses the quantum gravity facts (such as this gravity mechanism!).
2. Small size of the cosmic background radiation ripples
The prediction of gravity by this mechanism appears to be accurate to within the experimental data, which are accurate to within a factor of approximately two. The second major prediction of this mechanism is the small size of the sound-like ripples in the angular distribution of the cosmic background radiation, which is the earliest directly observable radiation in the universe; its emitted power peaked at 370,000 years after the big bang, when the temperature was 3,500 Kelvin, and it has been redshifted or ‘stretched out’ by the cosmic expansion, which reduces its temperature to 2.7 Kelvin.
Because radiation and matter were in thermal equilibrium (an ionised gas) at the time the cosmic background radiation was emitted, the radiation carries an imprint of the nature of the matter at that time. The cosmic background radiation was found to be of extremely uniform temperature, far more uniform than expected at 370,000 years after the big bang, when conventional models of galaxy formation implied that there should have been big ripples to indicate the ‘seeding’ of lumps that could become stars and galaxies.
This is called the ‘horizon problem’ or ‘isotropy problem’, because the microwave background radiation from opposite directions in the sky is similar to within 0.01%, and in the mainstream models gravity always has the same strength and would have caused bigger nonuniformities within 370,000 years of the big bang. A mainstream attempt to solve this problem is ‘inflation’ whereby the universe expanded at a faster than light speed for a small fraction of a second after the big bang, making the density of the universe uniform all over the sky before gravity had a chance to magnify irregularities in the expansion process.
This ‘horizon problem’ is closely related to the ‘flatness problem’, which is the issue that in general relativity the universe, depending on its density, has three possible geometries: open, flat, and closed. At the critical density it will be flat, with gravitation causing its radius to increase in proportion to the two-thirds power of time after the big bang. The mainstream consensus was that the universe was probably flat – which means of critical density, five to twenty times more than the observable density. The flatness problem is that if the universe were not completely flat, but of slightly different density across the universe, then the variation in density would be greatly magnified by the expansion of the universe and would be obvious today. The absence of any such large anisotropy is widely believed, by the mainstream, to be evidence for a flat geometry.
The mechanism for gravity solves these problems. It solves the flatness problem by showing that the critical density (distinguishing the open, flat, and closed solutions to the Friedmann-Robertson-Walker metric of general relativity, which is applied to cosmology) is false because it ignores quantum gravity effects: there are no long-range gravitational influences in an expanding universe, because the graviton exchange radiation of quantum gravity becomes severely redshifted, like light, and cannot produce curvature effects like forces over large distances. So the whole existing mainstream structure of using general relativity to work out cosmology falls apart.
The horizon problem (why the cosmic background is so smooth) is solved by this model in an interesting and very simple way. The relationship above makes the gravity parameter G directly proportional to the age of the universe: the older the universe gets, the stronger gravity gets. At 370,000 years after the big bang, G was about 40,000 times smaller than it is now, and at earlier times it was smaller still. The ripples in the cosmic background radiation are extremely small because the gravitational force was then so small.
As proved earlier, the Hubble acceleration is a = dv/dt = H^{2}R = H^{2}ct, where t is the time in the past when the light was emitted, but it can be set equal to the age of the universe for our purposes here. Hence the outward force F = ma = mH^{2}ct is proportional to the age of the universe, as is the equal inward force, according to Newton’s 3rd law of motion.
We can also see the proportionality to time in the result G = (3/4)H^{2}/(ρπe^{3}), since H^{2} = 1/t^{2} and ρ is the mass of the universe divided by its volume (which is proportional to the cube of the radius, i.e., the cube of the product ct), so this formula implies that G is proportional to (1/t^{2})/(1/t^{3}) = t, i.e., directly proportional to time.
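This proportionality is easy to verify numerically: holding the mass of the universe fixed and letting the density dilute as 1/(ct)^{3}, the predicted G scales linearly with t. A sketch in arbitrary units (M and c set to 1):

```python
import math

# With H = 1/t and rho = M / ((4/3)*pi*(c*t)^3), the prediction
# G = (3/4) H^2 / (rho*pi*e^3) reduces to G = t/(M*e^3), i.e. G grows
# linearly with the age of the universe.
def G_of_t(t, M=1.0, c=1.0):
    H = 1.0 / t
    rho = M / ((4.0 / 3.0) * math.pi * (c * t) ** 3)
    return 0.75 * H ** 2 / (rho * math.pi * math.e ** 3)

print(G_of_t(2.0) / G_of_t(1.0))  # 2.0: doubling the age doubles G
```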
Dirac did not have a mechanism for a time-dependence of G, but he guessed that G might vary. Unfortunately, lacking this mechanism, Dirac guessed that G was falling with time when it is actually increasing, and he did not realise that it is not just the strength constant for gravity that varies: all the coupling constants vary in the same way. This disproves Edward Teller’s claim (based on G alone varying) that, if it were true, the sun’s radiant power would vary with time in a way incompatible with life (e.g., he calculated that the oceans would have been literally boiling during the Cambrian era if Dirac’s assumption were true).
It also disproves, in the same way, another claim that G is constant, based on nucleosynthesis in the big bang. The argument here is that nuclear fusion in stars and in the big bang depends on gravity to supply the basic compressive force, causing electrically charged positive particles to collide hard enough to break through the ‘barrier’ caused by the repulsive electric Coulomb force, so that the short-ranged strong attractive force can then fuse the particles together. The big bang nucleosynthesis model correctly predicts the observed abundances of unfused hydrogen and fusion products like helium, assuming that G is constant. Because the result is correct, it is often claimed (even by students of Professor Carroll) that G must have had a value at 1 minute after the big bang that was no more than 10% different from today’s value. The obvious fallacy here is that both electromagnetism and gravity vary in the same way. If you double both the Coulomb force and the gravity force, the fusion rate doesn’t vary, because the Coulomb force opposes fusion while gravity causes it, and both are inverse square forces. The effect of G varying is not manifested as a change in the fusion rate in the big bang or in a star, because the corresponding change in the Coulomb force offsets it.
For a discussion of why the different forces unify by scaling similarly (it is due to vacuum polarization dynamics) see this earlier post: https://nige.wordpress.com/2007/03/17/thecorrectunificationscheme/
Louise Riofrio has investigated the dimensionally correct relationship GM = tc^{3}, which was discussed earlier on this blog here, here and here, where M is the mass of the universe and t is its age. This is algebraically equivalent to G = (3/4)H^{2}/(ρπ), i.e., the gravity prediction without the dimensionless redshift-density correction factor of e^{3}. It is interesting that it can be derived by energy-based methods, as first pointed out by John Hunter, who suggested setting E = mc^{2} = mMG/R, i.e., setting rest-mass energy equal to gravitational potential energy.
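The claimed algebraic equivalence can be checked numerically (a sketch using round input values; any consistent set of inputs gives the same unit ratio):

```python
import math

# Check that G = (3/4) H^2 / (rho*pi) is algebraically the same statement
# as GM = t c^3, taking M = (4/3) pi R^3 rho, with R = c/H and t = 1/H.
c = 3.0e8        # m/s (rounded)
H = 2.3e-18      # s^-1
rho = 2.8e-27    # kg/m^3

R = c / H
t = 1.0 / H
M = (4.0 / 3.0) * math.pi * R ** 3 * rho
G = 0.75 * H ** 2 / (rho * math.pi)

print(G * M / (t * c ** 3))  # 1.0, up to floating-point rounding
```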
Since the electromagnetic charge of the electron is massless bosonic energy trapped as a black hole, the gravitational potential energy would have to be equal, to keep it trapped.
This rearranges to give the equations of Riofrio and Rabinowitz, although physically it is obviously missing some dimensionless multiplication constant because the gravitational potential energy cannot be E = mMG/R, where R is the radius of the universe. It is evident that this equation describes the gravitational potential energy which would be released if the universe were (somehow) to collapse. However, the average radial distance of the mass of the universe M will be less than the radius of the universe R. This brings up the density variation problem: gravitons and light both go at velocity c so we see them coming from times in the past when the density was greater (density is proportional to the reciprocal of the cube of the age of the universe due to expansion). So you cannot assume constant density and get a simple solution. You really also need to take account of the redshift of gravitons from the greatest distances, or the density will cause you problems due to tending towards infinity at radii approaching R. Hence, this energybased approach to gravity is analogous to the physical mechanism described above. See also the derivation, by mathematician Dr Thomas R. Love of California State University, of Kepler’s law at https://nige.wordpress.com/2006/09/30/keplerslawfromkineticenergy/ which demonstrates that you can indeed treat problems generally by assuming that the rest mass energy of the spinning, otherwise static fundamental particle or the kinetic energy of the orbiting body, is being trapped by gravitation.
This leads to a concrete basis for John Hunter’s suggestions, published as a notice in the 12 July 2003 issue of New Scientist, page 17: he suggested that if E = mc^{2} = mMG/R, then the effective value of G depends on distance, since G = Rc^{2}/M, which is algebraically equivalent to the expression we obtained above for the gravity mechanism and published in the article ‘Electronic Universe, Part 2’, Electronics World, April 2003 (excluding the suggested e^{3} correction for density variation with distance and graviton redshift, which was published in a letter to Electronics World in 2004). Hunter’s July 2003 notice in New Scientist indicated that this solves the horizon problem of cosmology (thus not requiring the speculative mainstream extravagances of Alan Guth’s inflation theory). Hunter pointed out in his notice that his E = mc^{2} = mMG/R, when applied to the earth, should include another term for the influence of the nearby mass of the sun, leading to E = mc^{2} = mMG/R + mM’G/r, where m is the mass of the Earth, M is the mass of the universe, R is the radius of the universe (which is inaccurate, as pointed out, since the average distance of the mass of the surrounding universe can hardly be the radius of the universe, but must be a smaller distance, leading to the problem of the time-variation of density and thus also the redshift of the gravitons causing gravity), M’ is the mass of the Sun, and r is the distance of the Earth from the sun. Hunter argued that since r varies, and is 3.4% bigger in July than in January (when Earth is closest to the sun), this leads to a definite experiment to test the theory: ‘Prediction: the weight of objects on the Earth will vary by 3.3 parts in 10 billion over a year, as the Earth to Sun distance changes.’ (My only problem with this prediction is simply that it is virtually impossible to test, just like the ‘not even wrong’ Planck scale unification supersymmetry ‘prediction’.
Because the Earth is constantly vibrating due to seismic effects, you can never really hope to make such accurate measurements of weight. Anyone who has tried to measure masses beyond a few significant figures for quantitative chemical analysis knows how difficult such a measurement is: making sensitive instruments is a problem, and the increased sensitivity multiplies up background vibrations, so the instrument just becomes a seismograph. However, maybe some space-based precise measurements with clever experimentalist/observationist tricks will one day be able to check this to some extent.)
3. Electric force constant (permittivity), Hubble parameter, etc.
The proof [above] predicts gravity accurately, with G = (3/4)H^{2}/(ρπe^{3}). Electromagnetic force (discussed above and in the April 2003 Electronics World article) in quantum field theory (QFT) is due to ‘virtual photons’, which cannot be seen except via the forces they produce. The mechanism is continuous radiation from spinning charges; the centripetal acceleration a = v^{2}/r causes the energy emission, which is naturally in exchange equilibrium between all similar charges, like the exchange of quantum radiation at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to their recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges, which block energy exchange and are pushed together by energy received from other directions (shielding-type attraction). The attraction and repulsion forces are equal for similar net charges. The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse, scattering-type way like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out on any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign, multiplied by the gravity force proved above.
Hence F(electromagnetism) = mMGN^{1/2}/r^{2} = q_{1}q_{2}/(4πεr^{2}) (Coulomb’s law), where G = ¾H^{2}/(πρe^{3}) as proved above, and N is, as a first approximation, the mass of the universe (4πR^{3}ρ/3 = 4π(c/H)^{3}ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (−1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor ε in Coulomb’s law of:
ε = q_{e}^{2}e_{2.7…}^{3}[ρ/(12πm_{e}^{2}m_{proton}Hc^{3})]^{1/2} F/m.
Using old data as in the letter published in Electronics World some years ago which gave the G formula (ρ = 4.7 × 10^{−28} kg/m^{3} and H = 1.62 × 10^{−18} s^{−1} for 50 km.s^{−1}Mpc^{−1}) gives ε = 7.4 × 10^{−12} F/m, which is only 17% low compared to the measured value of 8.85419 × 10^{−12} F/m.
Rearranging this formula to yield ρ, and likewise rearranging G = ¾H^{2}/(πρe^{3}) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = ¾H^{2}/(πρe^{3}) to give a prediction for ρ which is independent of H:
H = 16π^{2}Gm_{e}^{2}m_{proton}c^{3}ε^{2}/(q_{e}^{4}e_{2.7…}^{3}) = 2.3391 × 10^{−18} s^{−1}, or 72.2 km.s^{−1}Mpc^{−1}, so 1/H = t = 13,550 million years. This is checkable against the WMAP result that the universe is 13,700 million years old; the prediction is well within the experimental error bar.
ρ = 192π^{3}Gm_{e}^{4}m_{proton}^{2}c^{6}ε^{4}/(q_{e}^{8}e_{2.7…}^{9}) = 9.7455 × 10^{−28} kg/m^{3}.
Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However, they clearly show the power of this mechanism-based predictive method.
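These two closed-form predictions are easy to check numerically. The sketch below plugs rounded CODATA-style constants into the formulae exactly as written above (with e_{2.7…} being the base of natural logarithms, not the electronic charge q_{e}); the small differences from the quoted figures just reflect the rounding of the constants:

```python
import math

# Rounded CODATA-style constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31     # electron mass, kg
m_p = 1.6726e-27    # proton mass, kg
c = 2.998e8         # speed of light, m/s
eps0 = 8.854e-12    # permittivity of free space, F/m
q_e = 1.602e-19     # electronic charge, C
e = math.e          # base of natural logarithms, the e_{2.7...} of the text

# H = 16*pi^2*G*m_e^2*m_p*c^3*eps0^2/(q_e^4*e^3)
H = 16.0 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps0**2 / (q_e**4 * e**3)

# Density from G = (3/4)*H^2/(pi*rho*e^3), rearranged for rho
rho = 0.75 * H**2 / (math.pi * G * e**3)

print(H)                        # ~2.34e-18 s^-1, i.e. ~72 km/s/Mpc
print(1.0 / H / 3.156e7 / 1e6)  # 1/H in millions of years, ~13,600
print(rho)                      # ~9.7e-28 kg/m^3
```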
Furthermore, calculations show that Hawking radiation from electron-mass black holes has the right force as the exchange radiation of electromagnetism: https://nige.wordpress.com/2007/03/08/hawkingradiationfromblackholeelectronscauseselectromagneticforcesitistheexchangeradiation/
4. Particle masses
Fig. 6: Particle mass mechanism. The ‘polarized vacuum’ shell exists between the IR and UV cutoffs. We can work out the shell’s outer radius either by using the IR cutoff energy as the collision energy to calculate the distance of closest approach in a particle scattering event (like Coulomb scattering, which predominates at low energies), or by using Schwinger’s formula for the minimum static electric field strength needed to cause pair production of fermion–antifermion pairs popping out of the Dirac sea in the vacuum. The outer radius of the polarized vacuum around a unit charge by either calculation is on the order of 1 fm. This scheme doesn’t just explain and predict masses; it also replaces supersymmetry with a proper physical, checkable prediction of what happens to Standard Model forces at extremely high energy. The following text is an extract from an earlier blog post here:
‘The pairs you get produced by an electric field above the IR cutoff, corresponding to 10^18 V/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge, published in PRL in 1997. Koltick et al. found that electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (observable electric charge is independent of distance only beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because you have less polarized dielectric between you and the electron core as you get closer, so less of the electron’s core field gets cancelled by the intervening dielectric).
‘There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons). How can gravitational charge be renormalized? There is no mechanism for pair production whereby the pairs will become polarized in a gravitational field. For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized. If they are both displaced in the same direction by the field, they aren’t polarized. So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells’s Cavorite.
‘There is no evidence for this. Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, with only the renormalization of electric charge being explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value. However, this is not a problem. The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude). Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.
‘Finally, supersymmetry is totally flawed. What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification. You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).
‘The energy sapped from the gauge boson mediated field of electromagnetism is being used. It’s being used to create pairs of charges, which get polarized and shield the field. This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory. Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.
‘Now take the situation where you put N electrons close together, so that their cores are very nearby. What will happen is that the surrounding vacuum polarization shells of the electrons will overlap. The electric field is N times stronger, so pair production and vacuum polarization are N times stronger. So the shielding of the polarized vacuum is N times stronger! This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron. Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!
‘This has three remarkable consequences. First, the observer at a long distance (>1 fm), who knows from high energy scattering that there are N charges present in the core, will see only 1 unit of charge at low energy. Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.
‘Second, the Pauli exclusion principle prevents two fermions from sharing the same quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (most usually, at low pressure, it is the quantum number for spin which changes, so adjacent electrons in an atom have opposite spins relative to one another; Dirac’s theory implies a strong association of intrinsic spin and magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials). If you could extend the Pauli exclusion principle, you could allow particles to acquire short-range nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field, which produces a lot of pair production, allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.
‘Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do. Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.
‘The right-handed electron has a weak hypercharge of −2. The left-handed electron has a weak hypercharge of −1. The left-handed down quark (with observable low-energy electric charge of −1/3) has a weak hypercharge of +1/3, while the right-handed down quark has a weak hypercharge of −2/3.
‘It’s totally obvious what’s happening here. What you need to focus on is the hadron (meson or baryon), not the individual quarks. The quarks are real, but their electric charges as implied from low energy physics considerations are totally fictitious for trying to understand an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks). The shielded electromagnetic charge energy is used in weak and strong nuclear fields, and is being shared between them. It all comes from the electromagnetic field. Supersymmetry is false because at high energy, where you see through the vacuum, you are going to arrive at unshielded electric charge from the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces. Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric). This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.
‘It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength. This is done when electric field energy is stored in a capacitor. In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose. See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy. (The collision energy is easily translated into distances from the Coulomb scattering law for the closest approach of two electrons in a head-on collision, although at higher energy collisions things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge. The assumption of perfectly elastic Coulomb scattering will also need modification, leading to somewhat bigger distances than otherwise obtained, due to inelastic scattering contributions.) The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces. This allows predictions and more checks. It’s totally tied down to hard facts, anyway. If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields, and the vacuum pair production and polarization phenomena, so something will be learned either way.
‘To give an example from https://nige.wordpress.com/2006/10/20/loopquantumgravityrepresentationtheoryandparticlephysics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron. Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances. This figure is the reason why the short-ranged strong nuclear force is so much stronger than electromagnetism.’
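Two of the figures quoted in this extract can be cross-checked against standard low-energy QED values. The sketch below computes 1/α = 4πε0·hbar·c/q_e² ≈ 137.04, compares it with the conventional running value 1/α ≈ 128.9 at 91 GeV (which reproduces Koltick’s ~7% rise in effective charge), and evaluates the capacitor-style field energy density u = ½ε0E² at the 10^18 V/m pair-production threshold. The 128.9 figure is the standard textbook running value, assumed here rather than derived:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
eps0 = 8.854e-12    # permittivity of free space, F/m
q_e = 1.602e-19     # electronic charge, C

# Low-energy inverse fine-structure constant, 1/alpha = 4*pi*eps0*hbar*c/q_e^2
inv_alpha = 4.0 * math.pi * eps0 * hbar * c / q_e**2
print(inv_alpha)  # ~137.0

# Conventional running value at 91 GeV is 1/alpha ~ 128.9 (assumed, not derived);
# the corresponding rise in effective charge is roughly Koltick's 7%
rise = inv_alpha / 128.9 - 1.0
print(rise)  # ~0.06

# Capacitor-style energy density of an electric field, u = (1/2)*eps0*E^2,
# evaluated at the ~1.3e18 V/m pair-production threshold
u = 0.5 * eps0 * 1.3e18**2
print(u)  # ~7e24 J/m^3
```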
5. Quantum gravity renormalization problem is not real
The following text is an extract from an earlier blog post here:
‘Quantum gravity is supposed – by the mainstream – to only affect general relativity on extremely small distance scales, ie extremely strong gravitational fields.
‘According to the uncertainty principle, for virtual particles acting as gauge boson in a quantum field theory, their energy is related to their duration of existence according to: (energy)*(time) ~ hbar.
‘Since time = distance/c,
‘(energy)*(distance) ~ c*hbar.
‘Hence,
‘(distance) ~ c*hbar/(energy)
‘Very small distances therefore correspond to very big energies. Since gravitons capable of graviton–graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong, in solving a ‘problem’ which might not even be real, by coming up with a renormalizable quantum gravity based on gravitons, which it then hypes as being the ‘prediction of gravity’.
‘The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).
‘The problem is that gravity has only one type of ‘charge’, mass. There’s no antimass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.
‘This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.
‘However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.
‘This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated to the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances, also reduces the observable mass by the same factor. In other words, if there was no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.’
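The energy–distance relation quoted in the extract above, (distance) ~ c·hbar/(energy), is straightforward to evaluate. A sketch converting collision energies to the distance scales they probe (the energies chosen are illustrative):

```python
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
J_PER_GEV = 1.602e-10

def distance_scale(energy_gev):
    """Distance corresponding to a given energy, d ~ c*hbar/E, in metres."""
    return c * hbar / (energy_gev * J_PER_GEV)

print(distance_scale(1.0))   # ~2.0e-16 m, i.e. about 0.2 fm
print(distance_scale(91.0))  # ~2.2e-18 m, the Z-boson energy scale
```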
Experimental confirmation of the redshift of gauge boson radiation
All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.
The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation reduces its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy, the additional decrease beyond the geometric divergence of field lines (or exchange radiation divergence) coming from the redshift of exchange radiation, whose energy is proportional to the frequency after redshift, E = hf. This is because the momentum carried by radiation is p = E/c = hf/c. Any reduction in frequency f therefore reduces the momentum imparted by a gauge boson, and this reduces the force produced by a stream of gauge bosons.
Therefore, in the universe all forces between receding masses should, according to Yang-Mills quantum field theory (where forces are due to the exchange of gauge boson radiation between charges), suffer a bigger fall than the inverse square law. So, where the redshift of visible light is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.
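The weakening argued for here follows directly from p = E/c = hf/c together with the standard redshift relation f(observed) = f(emitted)/(1 + z). A minimal sketch of the resulting momentum reduction factor:

```python
def momentum_reduction(z):
    """Fraction of the emitted gauge-boson momentum that arrives after
    redshift z, from p = hf/c with f reduced by the factor 1/(1 + z)."""
    return 1.0 / (1.0 + z)

# At z = 1 the exchanged momentum, and hence the force it delivers, is
# halved on top of the ordinary inverse-square geometric fall-off.
for z in (0.1, 1.0, 5.0):
    print(z, momentum_reduction(z))
```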
When you check the facts, you see that the role of ‘cosmic acceleration’ as produced by dark energy (the cosmological constant, cc, in GR) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long-range gravity that slows expansion down at high redshifts.
In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift and accompanying energy loss E = hf and momentum loss p = E/c of the exchange radiation causing gravity. It’s simply a quantum gravity effect due to redshifted exchange radiation weakening the gravity coupling constant G over large distances in an expanding universe.
The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.
Nobel Laureate Phil Anderson points out:
‘… the flat universe is just not decelerating, it isn’t really accelerating …’ 
http://cosmicvariance.com/2006/01/03/dangerphilanderson/#comment10901
Supporting this, and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R
Like my paper, Lunsford’s paper was censored off arxiv without explanation.
Lunsford had already had it published in a peer-reviewed journal prior to submitting it to arxiv. It was published in the International Journal of Theoretical Physics, vol. 43 (2004), no. 1, pp. 161-177. This shows that unification implies that the cc is exactly zero: no dark energy, etc.
The way the mainstream censors out the facts is to first delete them from arXiv and then claim ‘look at arxiv, there are no valid alternatives’. It’s a story of dictatorship:
‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, Nineteen Eighty Four, Chancellor Press, London, 1984, p. 225.
The approach above focuses on gauge boson radiation shielding. We now consider the interaction. In the intense fields near charges, pair production occurs, in which the energy of gauge boson radiation is randomly and spontaneously transformed into ‘loops’ of matter and antimatter, i.e., virtual fermions which exist for a brief period (as determined by the uncertainty principle) before colliding and annihilating back into radiation (hence the spacetime ‘loop’, where the pair production and annihilation is an endless cycle).
In this framework, we have physical material pressure from the Dirac sea of virtual fermions, not just gauge boson radiation pressure. To be precise, as stated before on this blog, the Dirac sea of virtual fermions only occurs out to a radius of about 1 fm from an electron; beyond that radius there are no virtual fermions in the vacuum because the electric field strength is below 10^{18} volts/metre, the Schwinger threshold for pair production. So at all distances beyond about 10^{−15} metre from a fundamental particle, the vacuum only contains gauge boson radiation, and contains no pairs of virtual fermions, no chaotic Dirac sea. This cutoff of pair production is a reason why renormalization of charge is necessary with an ‘IR (infrared) cutoff’; the vacuum can only polarize (and thus shield electric charge) out to the range at which the electric field is strong enough to begin to cause pair production to occur in the first place. If it could polarize without such a cutoff, it would be able to completely cancel out all real electric charges, instead of only partly cancelling them. Since this doesn’t happen, we know there is a limit on the range of the Dirac sea of virtual fermions. (For those wanting to see the formula proving the minimum electric field strength that is required for pairs of virtual charges to appear in the vacuum, see equation 359 of Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of Luis Alvarez-Gaume and Miguel Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040.)
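The 10^{18} volts/metre threshold used above is Schwinger’s critical field, E_c = m_e^{2}c^{3}/(q_e·hbar). A quick numerical check with rounded constants:

```python
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
q_e = 1.602e-19     # electronic charge, C
hbar = 1.0546e-34   # reduced Planck constant, J s

# Schwinger critical field for spontaneous electron-positron pair production
E_c = m_e**2 * c**3 / (q_e * hbar)
print(E_c)  # ~1.3e18 V/m, matching the pair-production threshold in the text
```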
So what happens is that gauge boson exchange radiation powers the production of short-ranged, massive spacetime loops of virtual fermions being created and annihilated (and polarized in the electric field between creation and annihilation).
Now let’s consider general relativity, which is the mathematics of gravity. Contrary to some misunderstandings, Newton never wrote down F = mMG/r^{2}, which is due to Laplace. Newton was proud of his claim ‘hypotheses non fingo’ (I feign no hypotheses), i.e., he worked to prove and predict things without making ad hoc assumptions or guesswork speculations. He wasn’t a string theorist, basing his guesses on unobserved gravitons (which don’t exist), extra dimensions, or unobservable Planck-scale unification assumptions. The effort above in this blog post (which is being written totally afresh to replace obsolete scribbles at the current version of the page http://quantumfieldtheory.org/Proof.htm) similarly doesn’t frame any hypotheses.
It’s actually well-proved geometry, the well-proved Newtonian first and second laws, well-proved redshift (which can’t be explained by ‘tired light’ speculation, but is a known and provable effect which occurs from recession, since the Doppler effect – unlike ‘tired light’ – is experimentally confirmed to occur), and similar hard, factual evidence. As explained in the previous post, the U(1) symmetry in the standard model is wrong, but apart from that misinterpretation and associated issues with the Higgs mechanism of electroweak symmetry breaking, the standard model of particle physics is the best checked physical theory ever: forces are the result of gauge boson radiation being exchanged between charges.
*****
I’ve just received an email from CERN’s document server:
From: “CDS Support Team” <cds.alert@cdsweb.cern.ch>
To: <undisclosed-recipients:>
Sent: Friday, May 25, 2007 4:30 PM
Subject: High Energy Physics Information Systems Survey
Dear registered CDS user,

The CERN Scientific Information Service, the CDS Team and the SPIRES Collaboration are running a survey about the present and the future of HEP Scientific Information Services.

The poll will close on May 30th. If you have not already answered it, this is the last reminder to invite you to fill an anonymous questionnaire at <http://library.cern.ch/poll.html>; it takes about 15 minutes to be completed and *YOUR* comments and opinions are most valuable for us.

If you have already answered to the questionnaire, we wish to thank you once again!

With best regards,

The CERN Scientific Information Service, the CDS Team, the SPIRES Collaboration
*****
This email relates to my authorship of one paper on CERN, http://cdsweb.cern.ch/record/706468, and it’s really annoying that I can’t update, expand and correct that paper because CERN closed that archive and now only accepts updates to papers that are on the American archive, arXiv (American spelling). I pay my taxes in Europe, where they help fund CERN. I can’t complain if arXiv don’t want to publish physics, or want to eradicate physics and replace it with extra-dimensional ‘not even wrong’ spin-2 gravitons. But it is disappointing that there is no competitor to arXiv run by CERN anymore. By closing down external submissions and updates to papers hosted exclusively by CERN’s document server, they have handed total control of world physics to a bunch of yanks obsessed by the string religion, trying to dictate it to everyone and to stop the freedom of physicists to do checkable, empirically defensible research on fundamental problems. Well done, CERN.
(CERN, by the way, is a French abbreviation, and in World War II the government of France surrendered officially to another dictatorial bunch of mindless idealists, although fortunately there was an underground resistance movement. Although CERN is located on the border of France and Switzerland, France dominates Europe and seems to control the balance of power. I wouldn’t be surprised if their defeatist, collaborative attitude towards arXiv was responsible for this travesty of freedom. However, I’m grateful to have anything on such a server at all. If I was in America, my situation would be far worse. Some arXiv people in America appear to actually try to stop physicists giving lectures in London; it demonstrates what bitter scum some of the arXiv people are. See also the comments here. However, some respectable people have submitted papers to arXiv, so I’m not claiming that 100% of it is rubbish, although the string theory stuff is.)
Factual heresy
Below is a little compilation of factual heresy from other people, just to well and truly finish off this post. The Michelson-Morley experiment preserves the gravitational field (‘aether’, to use an ambiguous and unhelpful term), simply because the contraction in the direction of motion (due to the behaviour of the gravitational field, which causes the inertial force resisting acceleration, according to Einstein’s equivalence principle whereby inertial mass = gravitational mass) means light has a shorter distance to go in the direction of motion!
The instrument is physically contracted. The fact that photons which are slowed down due to the Earth’s motion only have to travel a shorter distance than those going transversely (which aren’t slowed down) means that the instrument shows no interference fringes: the effect of the Earth’s motion in slowing down one beam is cancelled out by the contraction of the instrument, which means that beam has less far to travel. It’s like a race where the slower the runner, the shorter the distance their lane extends before they arrive at the finish post: all runners arrive at the same time, having gone unequal distances at unequal speeds:
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space, Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
One funny or stupid denial of this was in a book called Einstein’s Mirror by a couple of physics lecturers, Tony Hey and Patrick Walters. They seemed to vaguely claim, in effect, that in the Michelson-Morley experiment the arms of the instrument are of precisely the same length and measure light speed absolutely; then they claimed that if anyone built a Michelson-Morley instrument with arms of unequal length, the contraction wouldn’t work. In fact, the arms were never of equal length to within a wavelength of light to begin with, and the experiment only detected the relative difference in apparent light speed between two perpendicular directions by utilising interference fringes, which is a way to measure speed in one direction relative to another, not absolute speed in any direction. You can’t measure the speed of light with the Michelson-Morley instrument; it only shows a difference between two perpendicular directions if you implicitly assume there is no length contraction!
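The race analogy can be made quantitative. In the classical aether analysis, the round-trip time along the motion is (2L/c)/(1 − v^{2}/c^{2}) and across it is (2L/c)/(1 − v^{2}/c^{2})^{1/2}; contracting the longitudinal arm by the factor (1 − v^{2}/c^{2})^{1/2} makes each arm’s round-trip time independent of its orientation, so rotating the instrument shifts no fringes regardless of whether the arms are equal. A sketch (arm lengths and velocity are arbitrary illustrative values, in units where c = 1):

```python
import math

def round_trip_times(L_parallel, L_perp, v, c=1.0):
    """Classical aether-theory round-trip times for the two interferometer
    arms, with the arm along the motion contracted by sqrt(1 - v^2/c^2)."""
    beta2 = (v / c) ** 2
    contraction = math.sqrt(1.0 - beta2)
    t_parallel = 2.0 * (L_parallel * contraction) / (c * (1.0 - beta2))
    t_perp = 2.0 * L_perp / (c * contraction)
    return t_parallel, t_perp

# Unequal arms, v = 0.6c: 'rotate' the instrument 90 degrees by swapping
# which arm lies along the motion; each arm's time is unchanged, so no
# interference fringes shift.
t1_long, t1_short = round_trip_times(2.0, 1.0, 0.6)
t2_short, t2_long = round_trip_times(1.0, 2.0, 0.6)
print(t1_long, t2_long)    # the length-2.0 arm: same time in both orientations
print(t1_short, t2_short)  # the length-1.0 arm: same time in both orientations
```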
It’s really funny that Eddington made Einstein’s special relativity (anti-aether) famous in 1919 by confirming aetherial general relativity. The media couldn’t be bothered to explain aetherial general relativity, so they explained Einstein’s earlier, false special relativity instead!
‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, New Pathways in Science, v2, p. 39, 1935.
‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.
‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5; written quickly to get Jewish Infeld out of Nazi Germany and accepted as a worthy refugee in America.
‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities… According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)
‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’ – Einstein’s Legacy – Where are the “Einsteinians?”, by Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm
‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result that, according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p. 111.
‘The special theory of relativity … does not extend to nonuniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all anti-slipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90. (However, this is a massive source of controversy in GR because it’s a continuous approximation to discrete lumps of matter as a source of gravity which gives rise to a falsely smooth Riemann curvature metric; really continuous differential equations in GR must be replaced by a summing over discrete – quantized – gravitational interaction Feynman graphs.)
‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p. 906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v. A209, 1951, p. 291.
‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2^{nd} ed., v1, p. v, 1951.
‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.
‘U2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far-reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.
‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermionantifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arXiv: hep-th/0510040, p. 71.
‘… the Heisenberg formulae [virtual particle interactions cause random pair-production in the vacuum, introducing indeterminacy] can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’ – G. Builder, ‘Ether and Relativity’ in the Australian Journal of Physics, v11 (1958), p279.
(This paper of Builder on absolute velocity in ‘relativity’ is the analysis used and cited by the famous paper on the atomic clocks being flown around the world to validate ‘relativity’, namely J. C. Hafele in Science, vol. 177 (1972) pp. 166-8. So it was experimentally proving absolute motion, not ‘relativity’ as widely hyped. Absolute velocities are required in general relativity because when you take synchronised atomic clocks on journeys within the same gravitational isofield contour and then return them to the same place, they read different times due to having had different absolute motions. This experimentally debunks special relativity. Einstein was wrong when he wrote in Ann. d. Phys., vol. 17 (1905), p. 891: ‘we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ See, for example, page 12 of the September 2005 issue of ‘Physics Today’, available at: http://www.physicstoday.org/vol58/iss9/pdf/vol58no9p12_13.pdf.)
So we see from this solid experimental evidence that the usual statement that there is no ‘preferred’ frame of reference, i.e., a single absolute reference frame, is false. Experimentally, a swinging pendulum or spinning gyroscope is observed to stay true to the stars (which are not moving at sufficient angular velocities, as seen from our observation point, to cause any significant problem in treating them as an absolute reference frame for most purposes).
If you need a more accurate standard, then use the cosmic background radiation, which is the truest blackbody radiation spectrum ever measured in history.
These different methods of obtaining measurements of absolute motion are not really examining ‘different’ or ‘preferred’ frames, or pet frames. They are all approximations to the same thing, the absolute reference frame. All the Copernican propaganda since the time of Einstein that ‘Copernicus didn’t discover that the earth orbits the sun, but instead denied that anything really orbited anything, because he thought there is no absolute motion, only relativism’, is a gross lie. That claim is just the sort of brainwashing doublethink propaganda which Orwell accused the dictatorships of in his book ‘1984’. You won’t get any glory following the lemmings over the cliff. Copernicus didn’t travel throughout the entire universe to confirm that the earth is ‘in no special place’. Even if he had made that claim, it would not have been founded upon any evidence. Science is (or rather, should be) concerned with being unprejudiced in areas where there is a lack of evidence.
IMPORTANT:
The article above is extracted from the blog post here, and readers should be aware that there are vital comments with amplifications and explanations in them which are not included in the extract above. There are also further vital developments in other blog posts here, here, here and here.
Links
 Mahndisa’s Thoughts about Harvard Professors Lubos Motl et al.
 Tony Smith’s suppressed string theory work
 Christine Dantas LQG blogspot
 Not Even Wrong
 Louise Riofrio’s adventures in spacetime
 John Horgan’s http://discovermagazine.typepad.com/horganism/
 Stefan’s and Bee’s Backreaction Blog
 Davide Castelvecchi’s blog
 Cosmic Variance
 Professor Jacques Distler’s Musings
 The nCategory Café
 Life on the Lattice
 Electrogravity blogspot
 Professor Clifford V. Johnson’s Asymptotia
 Marni Dee Sheppeard’s Arcadian Functor
 Arun’s Musings
 Quantum Nonsense
 Carl Brannen’s Works site
 Galactic Interactions
 The Island of Doubt
 Cocktail Party Physics
 One of Ivor Catt’s few physically useful (not just electronics waffle) pages
 Another useful (physically semi-correct) Ivor Catt page
 Ivor Catt’s half-correct and vitally important article from Wireless World 1978
 Catt, Davidson, Walton book (physically semi-correct): Digital Hardware Design
 Catt’s entirely false attack on “Maxwell’s equations”
 Correction of Catt’s errors
 Quantum Field Theory domain
 Errors in Tired Light Cosmology: Tired light models invoke a gradual energy loss by photons as they travel through the cosmos to produce the redshift-distance law. This has three main problems…
**************************************
Fig. 1 – Newton’s geometric proof that an impulsive pushing graviton mechanism is consistent with Kepler’s 3rd law of planetary motion, because equal areas will be swept out in equal times (the three triangles of equal area, SAB, SBC and SBD, all have an equal base of length SB, and they all have altitudes of equal length), together with a diagram we will use for a more modern analysis. Newton’s geometric proof of centripetal acceleration, from his book Principia, applies to any elliptical orbit, not just the circular orbits covered by Hooke’s easier inverse-square law derivation. (Newton didn’t include the graviton arrow, of course.) By Pythagoras’ theorem x^{2} = r^{2} + v^{2}t^{2}, hence x = (r^{2} + v^{2}t^{2})^{1/2}. Inward motion, y = x – r = (r^{2} + v^{2}t^{2})^{1/2} – r = r[(1 + v^{2}t^{2}/r^{2})^{1/2} – 1], which upon expanding with the binomial theorem to the first two terms yields: y ~ r[(1 + (1/2)v^{2}t^{2}/r^{2}) – 1] = (1/2)v^{2}t^{2}/r. Since this result is accurate for infinitesimally small steps (the first two terms of the binomial become increasingly accurate as the steps get smaller, as does the approximation of treating the triangles as right-angled triangles so that Pythagoras’ theorem can be used), we can accurately differentiate this result for y with respect to t to give the inward velocity, u = v^{2}t/r. Inward acceleration is the derivative of u with respect to t, giving a = v^{2}/r. This is the centripetal acceleration formula which is required to obtain the inverse-square law of gravity from Kepler’s third law: Hooke could only derive it for circular orbits, but Newton’s geometric derivation (above, using modern notation and algebra) applies to elliptical orbits as well. This was the major selling point for the inverse-square law of gravity in Newton’s Principia over Hooke’s argument.
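The algebra in the caption above can be checked mechanically. Here is a minimal sympy sketch (the variable names are mine, not from the text) confirming that the binomial expansion of y and its two time-derivatives give a = v^{2}/r:

```python
import sympy as sp

r, v, t = sp.symbols('r v t', positive=True)

# Exact inward displacement from Pythagoras: y = (r^2 + v^2 t^2)^(1/2) - r
y = sp.sqrt(r**2 + v**2 * t**2) - r

# Binomial (series) expansion about t = 0, keeping the first two terms:
y_approx = sp.series(y, t, 0, 4).removeO()    # (1/2) v^2 t^2 / r

# Differentiate twice: inward velocity u, then inward acceleration a
u = sp.diff(y_approx, t)                      # v^2 t / r
a = sp.diff(u, t)                             # v^2 / r

assert sp.simplify(y_approx - v**2 * t**2 / (2 * r)) == 0
assert sp.simplify(a - v**2 / r) == 0
```

The assertions pass, so the small-step expansion really does reproduce the centripetal result claimed in the caption.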
See Newton’s Principia, Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1:
‘The areas which revolving bodies describe by radii drawn to an immovable centre of force … are proportional to the times on which they are described. For suppose the time to be divided into equal parts … suppose that a centripetal [inward directed] force acts at once with a great impulse [like a graviton], and, turning aside the body from the right line … in equal times, equal areas are described … Now let the number of those triangles be augmented, and their breadth diminished in infinitum … QED.’
This result, in combination with Kepler’s third law, gives the inverse-square law of gravity, although Newton’s argument uses geometry plus hand-waving, so it is actually far less rigorous than my algebraic version above. Newton failed to employ calculus and the binomial theorem to make his proof more rigorous because he was the inventor of them, and most readers wouldn’t have been familiar with those methods. (It doesn’t do to be so inventive as to both invent a new proof and also invent a new mathematics to use in making that proof, because readers will be completely unable to understand it without a large investment of time and effort; so Newton found that it paid to keep things simple and to use old-fashioned mathematical tools which were widely understood.)
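For the circular-orbit case that Hooke treated, the final step can also be checked symbolically: combining a = v^{2}/r with Kepler’s third law T^{2} = k r^{3} forces the inverse-square dependence. A short sympy sketch (symbols are my own illustrative choices):

```python
import sympy as sp

r, T, k = sp.symbols('r T k', positive=True)

v = 2 * sp.pi * r / T                       # orbital speed for radius r, period T
a = v**2 / r                                # centripetal acceleration from Fig. 1
a_kepler = a.subs(T, sp.sqrt(k * r**3))     # impose Kepler's third law T^2 = k r^3

print(sp.simplify(a_kepler))                # 4*pi**2/(k*r**2): an inverse-square law
```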
Newton in addition worked out an ingeniously simple proof, again geometrically, to demonstrate that a solid sphere of uniform density (or radially symmetric density) has the same net gravity on the surface and at any distance, for all of its atoms in their three-dimensional distribution, as would be the case if all the mass were concentrated at a point in the middle of the Earth. The proof is very simple: consider the sphere to be made up of a lot of concentric shells, each of small thickness. For any given shell, the geometry is such that a person on the surface experiences small gravity effects from small quantities of mass nearby on the shell, while most of the mass of the shell is located at large distances. The inverse-square effect, which means that for equal quantities of mass the most nearby mass creates the strongest gravitational field, is thereby offset by the actual locations of the masses: only small amounts are nearby, and most of the mass of the shell is at a great distance. The overall effect is that the effective location for the entire mass of the shell is in the middle of the shell, which implies that the effective location of the mass of a solid sphere seen from a distance is in the middle of the sphere (if the density of each of the little shells, considered to be parts of the sphere, is uniform).
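Newton’s shell result is easy to verify numerically: integrate the axial pull of the rings making up a thin uniform shell and compare with a point mass at the centre. This is a sketch under my own parameter choices (the function name and the Earth-like numbers are illustrative, not from the text):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def shell_accel(M, R, d, n=20000):
    """Gravitational acceleration at distance d (> R) from the centre of a
    thin uniform spherical shell of mass M and radius R, by summing the
    axial pull of n rings at polar angle theta."""
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        dm = 0.5 * M * math.sin(theta) * dtheta    # mass of one ring
        s2 = d*d + R*R - 2.0*d*R*math.cos(theta)   # squared distance to the ring
        total += G * dm * (d - R*math.cos(theta)) / (s2 * math.sqrt(s2))
    return total

M, R, d = 5.97e24, 6.37e6, 3.84e8   # Earth-mass shell, observer at Moon distance
point = G * M / d**2                # point mass at the centre
print(shell_accel(M, R, d) / point) # ratio ~ 1.0, as Newton's proof requires
```

The ratio comes out indistinguishable from 1, which is exactly the shell-to-point equivalence argued geometrically above.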
Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on google video here (55 minutes). Feynman in his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on google video (55 minutes), stated:
‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun's direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as the square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced; the fundamental operation is very much simpler; this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. …. If the earth is moving it is running into the particles …. so there is a sideways force on the earth which would slow the earth up in the orbit, and it would not have lasted for the four billion years it has been going around the sun. So that’s the end of that theory. …
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
The error Feynman makes here is that quantum field theory tells us that there are particles of exchange radiation mediating forces normally, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a once-off resistance due to the compressive length change, not continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved; i.e., uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges) this effect is experienced by us all the time as inertia and momentum: the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation: contraction of matter in the length parallel to the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum and thus inertial mass of the matter in motion. The corresponding gravitational effect is the contraction of the earth’s radius. Feynman misses this entirely. The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law, due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post.
So it is two for the price of one: the mechanism predicts gravity but also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity, as well. Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed by experiments and observations as more experiments and observations are done.
As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time there are 6 effective dimensions: three expanding timelike dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally; for example, the Earth’s radius is contracted by 1.5 mm, as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect of gravity (gravitational mass), but is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the Lorentz-FitzGerald contraction).
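The 1.5 mm figure quoted from Feynman is the Earth’s radius excess GM/(3c^{2}), which is quick to verify numerically (the constants below are standard values inserted by me, not taken from the text):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s

excess = G * M / (3 * c**2)       # Feynman's radius term GM/(3c^2)
print(f"{excess * 1e3:.2f} mm")   # ~1.48 mm, i.e. the quoted 1.5 mm
```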
This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction of subtracting half the product of the metric tensor and the trace of the Ricci tensor, i.e. the term (1/2)g_{μν}R, from the Ricci tensor of curvature, R_{μν}. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors and the full Einstein-Hilbert field equation; as explained in an earlier post, Newton’s law of gravitation, when merely expressed in terms of 4-dimensional spacetime curvature, gives the wrong deflection of starlight and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, compared to the amount of curvature that a low-velocity particle experiences. The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996. The first major confirmation came in 1998, with the observed lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model.
But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe was not isotropic) then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those timelike dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6. The three extra timelike dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.
Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered: the Pioneer Anomaly, an otherwise unaccounted-for tiny acceleration of (8.74 ± 1.33) × 10^{−10} m/s^{2} in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data) for the long-range space probes Pioneer 10 and Pioneer 11. However, these accelerations are very small, and to a very good approximation the three timelike dimensions – corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions – are very similar.
Therefore, the full 6-dimensional theory (3 spatial and 3 time dimensions) gives the unification of fundamental forces; Riemann’s suggestion of summing dimensions using the Pythagorean sum ds^{2} = Σ (dx^{2}) could obviously include time (if we live in a single-velocity universe) because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)^{2} can be included with the other dimensions dx^{2}, dy^{2}, and dz^{2}. There is then the question as to whether the term d(ct)^{2} should be added or subtracted from the other dimensions. It is clearly negative, because ct is, in the absence of acceleration, a simple resultant, i.e., dx^{2} + dy^{2} + dz^{2} = d(ct)^{2}, which implies that d(ct)^{2} changes sign when passed across the equality sign to the other dimensions: ds^{2} = dx^{2} + dy^{2} + dz^{2} – d(ct)^{2} = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds^{2} = dx^{2} + dy^{2} + dz^{2} – d(ct)^{2}, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854.
Professor Georg Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x_{1}, x_{2}, x_{3}, and so on to x_{n}, then … ds = √[Σ (dx)^{2}] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’
[The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric, which is ds^{2} = (1 – 2GM/(rc^{2}))^{–1}(dx^{2} + dy^{2} + dz^{2}) – (1 – 2GM/(rc^{2})) d(ct)^{2}. This only reduces to the special relativity metric for the impossible, unphysical, imaginary, and therefore totally bogus case of M = 0, i.e., the absence of gravitation. However, this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
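The reduction claimed in the bracketed note can be checked symbolically: setting M = 0 in the weak-field Schwarzschild form collapses it to the flat Minkowski line element. A minimal sympy sketch (treating the differentials as plain symbols, which is my simplification):

```python
import sympy as sp

G, M, r, c = sp.symbols('G M r c', positive=True)
dx, dy, dz, dct = sp.symbols('dx dy dz dct')

f = 1 - 2*G*M/(r*c**2)                          # the Schwarzschild factor
ds2 = (dx**2 + dy**2 + dz**2)/f - f*dct**2      # weak-field form quoted above

minkowski = dx**2 + dy**2 + dz**2 - dct**2
print(sp.simplify(ds2.subs(M, 0) - minkowski))  # 0: flat metric recovered at M = 0
```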
WARNING: I’ve made a change to the usual tensor notation below: apart from the conventional notation in the Christoffel symbol and Riemann tensor, I am indicating covariant tensors by positive subscripts and contravariant tensors by negative subscripts, instead of using index (superscript) notation for contravariant tensors. The reasons for doing this will be explained; it is to make this post easier to read for those unfamiliar with tensors but familiar with ordinary indices (it doesn’t matter to those who are familiar with tensors, since they will know about covariant and contravariant tensors already).
Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of coordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute coordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν}. The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). … We call four quantities A_{ν} the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B^{ν}, the sum over ν, Σ A_{ν} B^{ν} = invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
The rank is denoted simply by the number of letters of subscript notation, so that X_{a} is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over the applicable dimensions), and X_{ab} is a ‘rank 2’ tensor (for second-order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 x 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, X_{a}) and a contravariant tensor of the same variable (X^{a} in the conventional notation) are distinguished by the way they transform when converting from one system of coordinates to another, a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contravariant tensor by superscript (for example x^{n}). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the contravariant tensor x^{n} has components x^{1}, x^{2}, x^{3}, … x^{n}, whereas the power x^{n} means x multiplied by itself n times). [Another step towards ‘beautiful’ gibberish then occurs whenever a contravariant tensor is raised to a power, resulting in, say, (x^{2})^{2}, which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x^{4}, causing confusion.] We avoid the ‘beautiful’ notation by using a negative subscript to represent contravariant notation; thus x_{–n} is here the contravariant version of the covariant tensor x_{n}.
Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’
This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but it does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for μ and ν each taking values representing the four components of spacetime (1, 2, 3 and 4 representing the ct, x, y and z dimensions), we get the awfully long summation of 16 terms added up like a 4-by-4 matrix (notice that according to Einstein’s summation convention, indices which appear twice are to be summed over):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν} = Σ (g_{μν} dx_{μ} dx_{ν}) = (g_{11} dx_{1} dx_{1} + g_{21} dx_{2} dx_{1} + g_{31} dx_{3} dx_{1} + g_{41} dx_{4} dx_{1}) + (g_{12} dx_{1} dx_{2} + g_{22} dx_{2} dx_{2} + g_{32} dx_{3} dx_{2} + g_{42} dx_{4} dx_{2}) + (g_{13} dx_{1} dx_{3} + g_{23} dx_{2} dx_{3} + g_{33} dx_{3} dx_{3} + g_{43} dx_{4} dx_{3}) + (g_{14} dx_{1} dx_{4} + g_{24} dx_{2} dx_{4} + g_{34} dx_{3} dx_{4} + g_{44} dx_{4} dx_{4})
The first component has to be defined as negative, since it represents the time dimension, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in number notation, since dx_{1} dx_{1} = d(ct)^{2}, while dx_{2} dx_{2} = dx^{2}, dx_{3} dx_{3} = dy^{2}, and so on. Therefore:
g = ds^{2} = g_{ct} d(ct)^{2} + g_{x} dx^{2} + g_{y} dy^{2} + g_{z} dz^{2} + (the dozen cross terms with unequal indices, which vanish for a diagonal metric).
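The 16-term expansion above can be verified numerically. A minimal sketch, assuming the flat-spacetime diagonal metric diag(-1, 1, 1, 1) and an arbitrary displacement: the full double sum over both indices reduces to the four diagonal terms, since the twelve cross terms vanish.

```python
import numpy as np

# Flat-spacetime metric g_{mu nu} = diag(-1, 1, 1, 1): only the diagonal
# terms of the 16-term expansion survive (assumed illustrative example).
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# An arbitrary coordinate displacement (d(ct), dx, dy, dz)
dx = np.array([5.0, 3.0, 0.0, 0.0])

# Full 16-term double sum g_{mu nu} dx_mu dx_nu (Einstein summation)
ds2 = np.einsum('mn,m,n->', g, dx, dx)
print(ds2)   # -(5**2) + 3**2 = -16

# Diagonal-only form: -d(ct)^2 + dx^2 + dy^2 + dz^2 gives the same number
assert ds2 == -dx[0]**2 + dx[1]**2 + dx[2]**2 + dx[3]**2
```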
It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the ten-year delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita, who wrote the long-winded paper on the invention of tensors (hyped at the time under the name ‘absolute differential calculus’) and their application to physical laws to make them invariant of absolute coordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting some new mathematical tools into print? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without their also being presented in a radical manner. So it is rare for a single group of people to have the stamina both to invent a new method and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully. Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into spacetime geometry:
‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for “higher mathematics”, which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)
General relativity equates the mass-energy in space to the curvature of the motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez, or a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.
This point is made very clearly by Professor Lee Smolin on page 42 of the USA edition of his 2006 book, The Trouble with Physics. See Figure 1 in the post here. Next, in order to understand the Riemann curvature tensor mathematically, you need to understand the Christoffel symbol, an operator (not a tensor) denoted Γ (superscript here indicates contravariance):
Γ_{ab}^{c} = (1/2)g^{cd} [(dg_{da}/dx^{b}) + (dg_{db}/dx^{a}) – (dg_{ab}/dx^{d})]
The Riemann curvature tensor is then represented by:
R^{a}_{bec} = (dΓ_{bc}^{a}/dx^{e}) – (dΓ_{be}^{a}/dx^{c}) + (Γ_{te}^{a} Γ_{bc}^{t}) – (Γ_{tc}^{a} Γ_{be}^{t}).
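As an illustration not in the original text, the Christoffel symbol defined above can be evaluated symbolically for a simple curved space: the unit 2-sphere with metric ds^{2} = dθ^{2} + sin^{2}θ dφ^{2}, whose non-zero symbols are standard textbook results (Γ^θ_{φφ} = –sin θ cos θ and Γ^φ_{θφ} = cos θ / sin θ). The function name `christoffel` is my own.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]

# Metric of the unit 2-sphere: ds^2 = dtheta^2 + sin^2(theta) dphi^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
n = 2

def christoffel(c, a, b):
    # Gamma^c_ab = (1/2) g^{cd} (dg_da/dx^b + dg_db/dx^a - dg_ab/dx^d),
    # summed over the repeated index d
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[c, d] *
        (sp.diff(g[d, a], coords[b]) + sp.diff(g[d, b], coords[a])
         - sp.diff(g[a, b], coords[d]))
        for d in range(n)))

print(christoffel(0, 1, 1))   # Gamma^theta_phiphi
print(christoffel(1, 0, 1))   # Gamma^phi_thetaphi
```

Flat space (a constant metric) makes every derivative, and hence every symbol, vanish, which is the analytic content of the remark below that without curvature nothing accelerates.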
If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is cancelled out (within this distance there is curvature due to gravitation, and at greater distances there is curvature due to the dark energy responsible for the cosmological constant). The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.
To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c∫(g_{μν} dx_{μ} dx_{ν})^{1/2}, while the proper time is ∫(g_{μν} dx_{μ} dx_{ν})^{1/2}.
Notice that the ratio of proper length to proper time is always c. The Ricci tensor is a Riemann tensor contracted by summing over a = b, so it is simpler than the Riemann tensor and is composed of 10 second-order differentials. General relativity deals with a change of coordinates by using the FitzGerald-Lorentz contraction factor, γ = (1 – v^{2}/c^{2})^{1/2}. Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, and which reduces to the line element of special relativity for the impossible, not-in-our-universe case of zero mass. Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r^{2} (inward acceleration being defined as positive) in the form R_{μν} = 4πGT_{μν}/c^{2}, where T_{μν} is the mass-energy tensor, T_{μν} = ρu_{μ}u_{ν}. (This was incorrect, since it did not include conservation of energy.) But if we consider just a single dimension for low velocities (γ = 1), and remember E = mc^{2}, then T_{μν} = T_{00} = ρu^{2} = ρ(γc)^{2} = E/(volume). Thus, T_{μν}/c^{2} is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:
Above: the components of the stress-energy tensor (image credit: Wikipedia).
The scalar sum or ‘trace’ of the stress-energy tensor is of course the sum of the diagonal terms from the top left to the bottom right of the matrix, i.e. the sum of the terms with subscripts 00, 11, 22 and 33 (the energy-density and pressure terms).
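A minimal sketch of this trace, assuming a perfect-fluid stress-energy tensor in its rest frame (the numerical values are arbitrary, purely for illustration): the matrix is diagonal, so the trace is just the energy-density entry plus the three pressure entries.

```python
import numpy as np

# Perfect-fluid stress-energy tensor in its rest frame (arbitrary units):
# the diagonal holds the energy density T_00 and the pressures T_11..T_33.
rho_c2 = 4.0   # energy density rho * c^2
p = 1.0        # isotropic pressure

T = np.diag([rho_c2, p, p, p])

# Trace = T_00 + T_11 + T_22 + T_33
trace = np.trace(T)
print(trace)   # 4 + 1 + 1 + 1 = 7
```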
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, is by Newton’s law v = (2GM/x)^{1/2}, so v^{2} = 2GM/x. The situation is symmetrical: ignoring atmospheric drag, the speed at which a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an effect identical to ordinary motion. Therefore, we can place the square of the escape velocity (v^{2} = 2GM/x) into the FitzGerald-Lorentz contraction, giving γ = (1 – v^{2}/c^{2})^{1/2} = [1 – 2GM/(xc^{2})]^{1/2}.
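The size of this gravitational factor can be evaluated at the Earth's surface. A minimal sketch, assuming standard values for G, the Earth's mass and radius, and c:

```python
import math

# Gravitational analogue of the FitzGerald-Lorentz factor at radius x,
# using escape velocity v^2 = 2GM/x; standard constants assumed.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
c = 2.998e8      # speed of light, m/s
x = 6.371e6      # mean radius of the Earth, m

v2 = 2 * G * M / x                  # square of escape velocity
gamma = math.sqrt(1 - v2 / c**2)    # [1 - 2GM/(x c^2)]^(1/2)
print(1 - gamma)   # fractional clock slowing, roughly 7e-10
```

A clock at the Earth's surface therefore runs slow by under a part per billion relative to a distant clock, which is nevertheless easily measurable with atomic clocks.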
However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: with velocity, length is contracted in only one dimension, whereas with spherically symmetric gravity, length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each: FitzGerald-Lorentz contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = (1 – v^{2}/c^{2})^{1/2} = 1 – ½v^{2}/c^{2} + … . Gravitational contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = [1 – 2GM/(xc^{2})]^{1/2} = 1 – GM/(xc^{2}) + …, where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x_{0} + y/y_{0} + z/z_{0} = 3r/r_{0}. Hence the radial contraction of space around a mass is r/r_{0} = 1 – GM/(3rc^{2}). Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c^{2}. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
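The (1/3)GM/c^{2} contraction can be put into numbers for the Earth (standard constants assumed):

```python
# Radial contraction of the Earth by (1/3) GM/c^2; standard values assumed.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
c = 2.998e8      # speed of light, m/s

contraction_m = G * M / (3 * c**2)
print(contraction_m * 1000)   # roughly 1.5 mm
```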
This is the 1.5 mm contraction of the Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the FitzGerald-Lorentz contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without molecular viscosity. (This is due to the Schwinger threshold for pair-production by an electric field: the vacuum only contains fermion-antifermion pairs out to a small distance from charges; beyond that distance the weaker fields can’t cause pair-production – i.e., the energy is below the IR cutoff – so the vacuum there contains just bosonic radiation, without the pair-production loops that can cause viscosity. For this reason the vacuum compresses macroscopic matter without slowing it down by drag.) Feynman was unable to proceed with Le Sage gravity, and gave up on it in 1965.
More information can be found in the earlier posts on this blog.