The Standard Model of particle physics is U(1) x SU(2) x SU(3), representing respectively electromagnetism/weak hypercharge, weak isospin charge (which acts only on particles of left-handed spin), and strong colour charge. This doesn’t include the Higgs field (there are several possible versions of the Higgs field, to be tested at the LHC from 2009), or gravitational interactions. Since mass-energy is gravitational charge, there is clearly a link between gravitons and Higgs bosons, but this is not predicted by the Standard Model in its current form.
The role played by U(1) in the Standard Model can actually be performed by massless gauge bosons of SU(2), because not all SU(2) gauge bosons acquire mass at low energy. We know that those which do acquire mass become massive gauge bosons that only interact with left-handed spinors. It’s possible that one handedness of the SU(2) gauge bosons remains massless at low energy, and that these are the gauge bosons of electromagnetism and also of gravitation (if the latter is mediated by a spin-1 graviton, not a spin-2 graviton). There are calculations and predictions which justify this. The spin-1 gravitons cause universal repulsion of masses over large distances, i.e. they are the dark energy behind the cosmological acceleration. Nearby masses which are relatively small compared to the mass of the surrounding universe are pushed together, because their exchange of gravitons (converging inwards from great distances) with the larger mass of the universe is stronger than their exchange with the relatively small mass of the Earth, a star or a galaxy. This seems to be the mechanism of gravity.
So the Standard Model U(1) x SU(2) x SU(3) could be replaced by SU(2) x SU(3) for all known interactions, plus a replacement field for a Higgs-type theory of mass. This would not affect any existing confirmed predictions of the Standard Model, since it would preserve the checked predictions of electrodynamics, weak interactions and strong interactions. But it would add an enormous number of further falsifiable predictions, while simplifying the theory at the same time. Since a Higgs-type field is composed of only one kind of charge (mass), it may well be described by a simple Abelian U(1) theory, in which case the total theory is again the mathematical group combination U(1) x SU(2) x SU(3). But this now has entirely different physical dynamics attached to the same mathematical structure, because in this U(1) x SU(2) x SU(3) (unlike the mathematically similar Standard Model):
U(1) now represents gravitational charge (mass-energy) and spin-1 ‘Higgs bosons’,
SU(2) now represents weak, electromagnetic and gravitational interactions (massless spin-1 gauge bosons at low energy producing electromagnetism and gravity; massive versions of the same gauge bosons being the usual low-energy weak interaction gauge bosons), and
SU(3) still represents the usual strong interactions.
To put this another way: modern physics has been developed by making mathematical guesses and checking them, but this approach seems to have reached the end of the road, because sophisticated guesses become ever harder to check. I think that if progress is to continue, a change in tactics is required. If falsifiable predictions are required, physics needs to start off well connected to reality. If a theory involves plenty of direct empirical input, it’s likely to produce a lot of checkable predictions as output. If it has little direct empirical input, it’s less likely to produce checkable predictions. I think this is the major problem with string theory. It’s vague because the ratio of factual input to speculative beliefs (extra dimensions, supersymmetric unification, graviton spin) is low.
Gauge symmetry: whenever the motion, charge, angular momentum of spin or some other symmetry is altered, it is a consequence of the conservation laws for momentum and energy in physics that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl, giving the concept of gauge symmetry. Fundamental interactions are modelled by Feynman diagrams of scattering between gauge bosons or ‘virtual radiation’ (virtual photons, gravitons, etc.) and charges (electric charge, mass, etc.). The existing mainstream model is abstract, so it doesn’t represent the gauge bosons as taking any time to travel between charges (massless radiations travel at light velocity), and there are various other errors. E.g., two extra polarizations have to be added to the photon in the mainstream model of quantum electrodynamics, to make it produce attractive forces between dissimilar charges. This is an ad hoc modification, similar to changing the spin of the graviton to spin-2 to ensure universal attraction between similar gravitational charges (mass/energy). If you look at the physics more carefully, you find that the spin of the graviton is actually spin-1 and gravity is a repulsive effect: we’re exchanging gravitons (as repulsive scatter-type interactions) more forcefully with the immense masses of receding galaxies above us than with the graviton scatter cross-sections of the particles in the Earth below us, so the net effect is a downward pushing force. What’s impressive about this (unlike the spin-2 graviton rubbish endorsed by Woit) is that it makes checkable predictions, including the strength (coupling G) of gravity and many other things (see the calculations below), unlike string ‘theory’, which is a spin-2 graviton framework leading to 10^{500} different predictions (so vague it could be made to fit anything that nature turns out to be, but makes no falsifiable predictions, i.e. junk science).
When you look at electromagnetism more objectively, the virtual photons carry an additional polarization in the form of a net electric charge (positive or negative). This again leads to checkable predictions for the strength of electromagnetism and other things. The most important single prediction, however, was the acceleration of the universe, due to the long-distance repulsion between large masses in the universe mediated by spin-1 gravitons. This was published in 1996 and confirmed by observations in 1998 published in Nature by Saul Perlmutter et al., but it is still censored out by charlatans such as string ‘theorists’ (the quotes are around that word because it is no genuine scientific theory, just a landscape of 10^{500} different speculations). Drs Woit and Smolin, who have written books opposing the hype and overfunding of string theory, are too busy with other speculations to acknowledge these facts.
Typical string theory deception: ‘String theory has the remarkable property of predicting gravity.’ (E. Witten, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996.)
Actually what he means, but can’t be honest enough to say, is that string theory in 10/11 dimensions is compatible with a false spin-2 graviton speculation. Let’s examine the facts:
Above: Spin-1 gravitons causing apparent “attraction” by repulsion, the “attraction” being due to similar charges being pushed together by repulsion from the massive amount of similar-sign gravitational charge in the distant surrounding universe.
Nearby gravitational charges don’t exchange gravitons forcefully enough to compensate for the stronger exchange with gravitons converging in from immense masses (clusters of galaxies at great distances, all over the sky), due to the physics discussed below, so their graviton interaction cross-sections effectively shield them on facing sides. Thus, they get pushed together. This is what we see as gravity.
By wrongly ignoring the rest of the mass in the universe and focussing on just a few masses (right-hand side of the diagram), Pauli and Fierz in the 1930s falsely deduced that for similar signs of gravitational charge (all gravitational charge so far observed falls the same way, downwards, so all known mass/energy has the same sign of gravitational charge, here arbitrarily represented by “−” symbols, just to make an analogy to negative electric charge so the physics is easily understood), spin-1 gauge bosons can’t work, because they would cause gravitational charges to repel! So they changed the graviton spin to spin-2, to “fix it”.
This mechanism proves that a spin-2 graviton is wrong; instead, the spin-1 graviton does the job of both ‘dark energy’ (the outward acceleration of the universe, due to the repulsion of similar-sign gravitational charges over long distances) and gravitational ‘attraction’ between relatively small, relatively nearby masses, which get repelled towards one another by the distant masses in the universe more strongly than they repel one another apart!
Above: Spin-1 gauge bosons for fundamental interactions. In each case the surrounding universe interacts with the charges, a vital factor ignored in existing mainstream models of quantum gravity and electrodynamics.
The massive versions of the SU(2) Yang-Mills gauge bosons are the weak field quanta, which only interact with left-handed particles. One half (corresponding to exactly one handedness for weak interactions) of the SU(2) gauge bosons acquire mass at low energy; the other half are the gauge bosons of electromagnetism and gravity. (This diagram is extracted from the more detailed discussion and calculations made below, which are vital for explaining how massless electrically charged bosons can propagate as exchange radiation while they can’t propagate, due to infinite magnetic self-inductance, on a one-way path. The exchange of electrically charged massless bosons in two directions at once along each path, which is what exchange amounts to, means that the curl of the magnetic field due to the charge of each oppositely-directed component of the exchange cancels out the curl of the other. This means that the magnetic self-inductance is effectively zero for massless charged radiation being endlessly exchanged from charge A to charge B and back again, even though it is infinite, and thus prohibited, for a one-way path such as from charge A to charge B without a simultaneous return current of charged massless bosons. This was suggested by the fact that something analogous occurs in another area of electromagnetism.)
Masses are receding due to being knocked apart by gravitons, which cause repulsion between masses as already explained (pushing distant galaxies apart and also pushing nearby masses together). The inward force, presumably mediated by spin-1 gravitons, from a receding mass m at distance r, receding with the Hubble velocity law v = Hr, is:
F = ma
= m*dv/dt
= m*d(Hr)/dt
= mH*dr/dt + mr*dH/dt
= mHv + 0
= mH(Hr)
= mrH^{2}.
This is because a mass accelerating away from us has an outward force due to Newton’s 2nd law, and an equal and opposite (inward) reaction force under Newton’s 3rd law. If its mass m is small and/or its distance r is small, then the inward force of the gravitons being exchanged, directed towards you from that small nearby mass, is trivial, because of the linear dependence of the force on m and r in the equation F = mrH^{2}. But there will be very large masses beyond that nearby mass (distant receding galaxies) sending in a large inward force due to their large distance and mass. These spin-1 gravitons effectively interact with the mass by scattering back off the graviton scatter cross-section for that mass. So small nearby masses get pressed together, because a nearby, non-receding particle with mass will cause an asymmetry in the graviton field being received from more distant masses in that direction, and you’ll be pushed towards it. This gives an inverse-square law force, and it uniquely also gives an accurate prediction for the gravitational parameter G, as proved later in this post.
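As a cross-check on the differentiation above: with H treated as constant, the Hubble law v = Hr integrates to r(t) = r0*e^(Ht), and numerical differentiation recovers a = rH^{2}. This is a minimal sketch with illustrative values of H, r0 and m (they are assumptions for the demonstration, not measured inputs):

```python
import math

# Check F = m*dv/dt = m*r*H^2 for the Hubble law v = H*r with constant H,
# in which case the receding distance grows as r(t) = r0 * exp(H*t).
# H, r0 and m below are illustrative assumed values.
H = 2.3e-18   # Hubble parameter, 1/s (roughly 70 km/s/Mpc)
r0 = 1.0e25   # distance of the receding mass at t = 0, m
m = 1.0e41    # mass of a receding galaxy, kg

def r(t):
    """Distance of matter receding with v = H*r: r(t) = r0 * e^(H*t)."""
    return r0 * math.exp(H * t)

# Second derivative of r(t) at t = 0 by central finite difference
# (dt is tiny compared with the Hubble time 1/H ~ 4e17 s):
dt = 1.0e12
a_numeric = (r(dt) - 2.0 * r(0.0) + r(-dt)) / dt**2

a_analytic = r0 * H**2   # the a = r*H^2 result derived in the text
F = m * a_analytic       # outward force of the receding mass, Newton's 2nd law

print(a_numeric, a_analytic, F)
```

The numerical and analytic accelerations agree, confirming that the only surviving term of d(Hr)/dt is H*dr/dt = Hv = rH^{2} when dH/dt = 0.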
When you push two things together using field quanta, such as those from the electrons on the surface of your hands, the resulting motions can be modelled as an attractive effect, but it is clearly caused by the electrons in your hands repelling those on the surface of the other object. We’re being pushed down by the gravitational repulsion of immense distant masses distributed around us in the universe, which causes not just the cosmological acceleration over large distances but also gravity between relatively small, relatively nearby masses, by pushing them together. (In 1996, the spin-1 quantum gravity proof given below in this post was inspired by an account of the ‘implosion’ principle, used in all nuclear weapons now, whereby the inward-directed half of the force of an exploding TNT shell surrounding a metal sphere compresses the metal sphere, making the nuclei in that sphere approach one another as though there were some contraction.)
Notice that the fact that we are surrounded by a lot of similar gravitational charge (mass/energy) at large distances will automatically cause the accelerative expansion of the universe (predicted accurately by this gauge theory mechanism in 1996, well before Perlmutter’s discovery confirming the predicted acceleration/‘dark energy’), as well as causing gravity, and uses spin-1. It doesn’t require the epicycle of changing the graviton spin to spin-2. Similar gravitational charges repel, but because there is so much similar gravitational charge at long distances from us, with the gravitons converging inwards as they are exchanged with an apple and the Earth, the immense long-range gravitational charges of receding galaxies and galaxy clusters push two small nearby masses together harder than those masses repel one another apart! This is why they appear to attract.
The spin-2 deduction is an error, for the reason (left of the diagram above) that spin-1 only appears to fail when you ignore the bulk of the similar-sign gravitational charge in the universe surrounding you. If you stupidly ignore that surrounding mass of the universe, which is immense, then the simplest workable theory of quantum gravity does indeed necessitate spin-2 gravitons.
The best presentation of the mainstream long-range force model (which uses massless spin-2 gauge bosons for gravity and massless spin-1 gauge bosons for electromagnetism) is probably chapter I.5, ‘Coulomb and Newton: Repulsion and Attraction’, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6. Zee uses an approximation due to Sidney Coleman, whereby you work through the theory assuming that the photon has a real mass m, to make the theory work, and at the end set m = 0. (If you assume from the beginning that m = 0, the simple calculations don’t work, so you would then need to work with gauge invariance.)
Zee starts with a Lagrangian for Maxwell’s equations, adds terms for the assumed mass of the photon, and then writes down the Feynman path integral ∫DA e^{iS(A)}, where S(A) is the action, S(A) = ∫d^{4}x L, and L is the Lagrangian based on Maxwell’s equations for the spin-1 photon (plus, as mentioned, terms for the photon having mass, to keep it relatively simple and avoid invoking gauge invariance). Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson-mediated electromagnetic force between similar charges is always repulsive. So it works.
A massless spin-1 boson has only two degrees of freedom for spinning, because in one dimension it is propagating at velocity c and is thus ‘frozen’ in that direction of propagation. Hence, a massless spin-1 boson has two polarizations (electric field and magnetic field). A massive spin-1 boson, however, can spin in three dimensions and so has three polarizations.
Moving to quantum gravity, a spin-2 graviton will have 2^{2} + 1 = 5 polarizations. Writing down a 5-component tensor to represent the gravitational Lagrangian, the same treatment for a spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative; hence masses always attract each other.
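The polarization counting used in this and the preceding paragraph (2 states for a massless boson, 2s + 1 for a massive spin-s boson) can be written as a one-line rule. A minimal sketch; note that the 2^{2} + 1 = 5 count for the spin-2 graviton follows the massive-case formula, matching the Coleman small-mass trick described above:

```python
def polarizations(spin, massive):
    """Polarization counting used in the text: a massless boson propagating
    at c is 'frozen' along its direction of motion and keeps 2 states,
    while a massive spin-s boson can spin in three dimensions and has
    2s + 1 states."""
    return 2 * spin + 1 if massive else 2

# Massless spin-1 photon: 2 polarizations (electric and magnetic field).
print(polarizations(1, massive=False))
# Massive spin-1 weak gauge boson: 3 polarizations.
print(polarizations(1, massive=True))
# The 5-component spin-2 graviton count (2^2 + 1 = 5), using the
# massive-case formula as in the small-mass regulator treatment.
print(polarizations(2, massive=True))
```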
This has now hardened into a religious dogma or orthodoxy which is used to censor the facts of the falsifiable, predictive spin-1 graviton mechanism as being “weird”. Even Peter Woit and Lee Smolin, who recognise that string theory’s framework for spin-2 gravitons isn’t experimentally confirmed physics, still believe that spin-2 gravitons are right!
Actually, the amount of spin-1 gravitational repulsion force between two small nearby masses is negligible, and it takes the immense masses in the receding surrounding universe (galaxies, clusters of galaxies, etc., surrounding us in all directions) to produce what we see as gravity. The fact that gravitational charge does not come in equal and opposite forms which cancel out is the reason why we have to include the gravitational charges of the surrounding universe in quantum gravity, while in electromagnetism it is conventional orthodoxy to ignore surrounding electric charges, which come in opposite types that appear to cancel one another out. There is definitely no such cancellation of gravitational charge from the surrounding masses in the universe, because only one kind of gravitational charge has ever been observed! So we have to accept a spin-1 graviton, not a spin-2 graviton, as the simplest theory (see the calculations below, which prove that it predicts the observed strength of gravitation!), and spin-1 gravitons lead somewhere: the spin-1 graviton neatly fits gravity into a modified, simplified form of the Standard Model of particle physics!
‘If no superpartners at all are found at the LHC, and thus supersymmetry can’t explain the hierarchy problem, by the Arkani-Hamed/Dimopoulos logic this is strong evidence for the anthropic string theory landscape. Putting this together with Lykken’s argument, the LHC is guaranteed to provide evidence for string theory no matter what, since it will either see or not see weak-scale supersymmetry.’ – Not Even Wrong blog post, ‘Awaiting a Messenger From the Multiverse’, July 17th, 2008.
It’s kinda nice that Dr Woit has finally come around to grasping the scale of the terrible, demented string theory delusion in the mainstream, and can see that nothing he writes affects the victory to be declared for string theory, regardless of what data are obtained in forthcoming experiments! His position, and that of Lee Smolin and other critics, is akin to that of the dissidents of the Soviet Union, traitors like Leon Trotsky and nuisances like Andrei Sakharov. They can maybe produce minor embarrassment and irritation to the Empire, but that’s about all. The general attitude of string theorists to his writings is that it’s inevitable that someone should complain. The public will go on ignoring the real quantum gravity facts, and so indeed will Woit and Smolin. He writes:
‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length[…] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.
‘This whole situation is reminiscent of what happened in particle theory during the 1960’s, when quantum field theory was largely abandoned in favor of what was a precursor of string theory. The discovery of asymptotic freedom in 1973 brought an end to that version of the string enterprise and it seems likely that history will repeat itself when sooner or later some way will be found to understand the gravitational degrees of freedom within quantum field theory.
‘While the difficulties one runs into in trying to quantize gravity in the standard way are well-known, there is certainly nothing like a no-go theorem indicating that it is impossible to find a quantum field theory that has a sensible short distance limit and whose effective action for the metric degrees of freedom is dominated by the Einstein action in the low energy limit. Since the advent of string theory, there has been relatively little work on this problem, partly because it is unclear what the use would be of a consistent quantum field theory of gravity that treats the gravitational degrees of freedom in a completely independent way from the standard model degrees of freedom. One motivation for the ideas discussed here is that they may show how to think of the standard model gauge symmetries and the geometry of spacetime within one geometrical framework.
‘Besides string theory, the other part of the standard orthodoxy of the last two decades has been the concept of a supersymmetric quantum field theory. Such theories have the huge virtue with respect to string theory of being relatively well-defined and capable of making some predictions. The problem is that their most characteristic predictions are in violent disagreement with experiment. Not a single experimentally observed particle shows any evidence of the existence of its “superpartner”.’
– P. Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135, p. 52.
But notice that Dr Woit was convinced in 2002 that a spin-2 graviton would explain gravity. More recently, he has become less hostile to supersymmetric theories, for example by conjecturing that spin-2 supergravity without string theory may be what is needed:
‘To go out on a limb and make an absurdly bold guess about where this is all going, I’ll predict that sooner or later some variant (“twisted”?) version of N=8 supergravity will be found, which will provide a finite theory of quantum gravity, unified together with the standard model gauge theory. Stephen Hawking’s 1980 inaugural lecture will be seen to be not so far off the truth. The problems with trying to fit the standard model into N=8 supergravity are well known, and in any case conventional supersymmetric extensions of the standard model have not been very successful (and I’m guessing that the LHC will kill them off for good). So, some so-far-unknown variant will be needed. String theory will turn out to play a useful role in providing a dual picture of the theory, useful at strong coupling, but for most of what we still don’t understand about the SM, it is getting the weak coupling story right that matters, and for this quantum fields are the right objects. The dominance of the subject for more than 20 years by complicated and unsuccessful schemes to somehow extract the SM out of the extra 6 or 7 dimensions of critical string/M-theory will come to be seen as a hard-to-understand embarrassment, and the multiverse will revert to the philosophers.’
Evidence
As explained briefly above, there’s a fine back-of-the-envelope calculation, allegedly proving that a spin-2 graviton is needed for universal attraction, in the mainstream accounts, as exemplified by Zee’s online sample chapter from his Quantum Field Theory in a Nutshell book (section 5 of chapter 1). But when you examine that kind of proof closely, it just considers two masses exchanging gravitons with one another, which ignores two important aspects of reality:
1. there are far more than two masses in the universe which are always exchanging gravitons, and in fact the majority of the mass is in the surrounding universe; and
2. when you want a law for the physics of how gravitons impart force, you find that only receding masses forcefully exchange gravitons with you, not nearby masses. Take Hubble’s recession law, which is empirical: v = HR. Differentiate: a = d(HR)/dt = H*dR/dt + R*dH/dt = Hv + 0 = H(HR) = RH^{2}. This predicts Perlmutter’s observed acceleration of the universe. It also gives receding matter an outward force by Newton’s second law, and gives a law for gravitons: Newton’s third law gives an equal inward-directed force which, by elimination of the possibilities known in the Standard Model and quantum gravity, must be mediated by gravitons. Nearby masses which aren’t receding have zero outward acceleration, and so produce zero inward graviton force towards you for their graviton-interaction cross-sectional area. This produces an asymmetry, so you get pushed towards non-receding masses while being pushed away from highly redshifted masses. You can then do calculations to predict the strength of gravitation:
Above: the Feynman diagrams for gravity according to the classical non-quantum theory (general relativity) and to spin-1 and spin-2 gravitons, with their respective issues, plus the geometry of the asymmetry produced by a mass in the graviton field of the universe. This geometry is vital for working out, by summing vector contributions (effectively summing interaction histories to represent a path-integral formulation), the net sum of spin-1 graviton contributions. The labelled ‘shield’ area is the cross-section for spin-1 graviton back-scatter from a fundamental particle such as an electron (or rather the effective cross-sectional area for graviton interactions, because the electron’s mass or gravitational charge, according to the Standard Model, comes from a Higgs-type bosonic quantum field surrounding the core of the electron; the Higgs field interacts with both the electron core and with gravitons, so it acts as a man-in-the-middle and mediates the gravitational force from gravitons to the electron). For evidence that this effective cross-section for gauge boson back-scatter (the exchange of gauge bosons between gravitational charges such as masses) is the event-horizon cross-sectional area of a black hole with the mass of the electron or other fundamental particle being accelerated by gravity, see this post and its links; for evidence that the black hole event-horizon cross-sectional area is also that for electromagnetic interactions, see the calculations summarised in this post. (There is other evidence as well, published in other posts. I’ll try to organize everything better when time permits.)
1. The outward force of receding matter (recession velocity v = HR, where H is the Hubble constant and R is the apparent distance) is
F = ma
= m.dv/dt
= m.d(HR)/dt
= m[H.dR/dt + R.dH/dt]
= m[Hv + 0]
= mH(HR)
= mRH^{2}.
This is of the order F = 7 * 10^{43} Newtons, but corrections should be applied for the apparent increase in density ρ as we look back to earlier times (great distances in spacetime), and for the relativistic mass increase of receding matter. But for simplicity, to see how the maths works, ignore these corrections:
F = ma = [(4/3)πR^{3}ρ].[dv/dt] = [(4/3)πR^{3}ρ].[H^{2}R] = 4πR^{4}ρH^{2}/3.
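The order of magnitude quoted above can be checked numerically. This is a minimal sketch assuming round illustrative values: H of roughly 70 km/s/Mpc, R = c/H, and the critical density 3H^{2}/(8πG) standing in for the uncorrected density ρ:

```python
import math

# Rough numeric check of the outward-force estimate F = 4*pi*R^4*rho*H^2/3,
# quoted in the text as ~7e43 N (before density/redshift corrections).
# The values of H and rho are illustrative assumptions, not precise data.
c = 3.0e8                            # speed of light, m/s
H = 2.3e-18                          # Hubble parameter, 1/s (~70 km/s/Mpc)
G = 6.674e-11                        # gravitational parameter, m^3 kg^-1 s^-2
R = c / H                            # horizon radius used in the text, R = c/H
rho = 3 * H**2 / (8 * math.pi * G)   # critical density as a stand-in for rho

F_outward = 4 * math.pi * R**4 * rho * H**2 / 3
print(F_outward)   # ~6e43 N, the same order as the text's 7e43 N
```

With these substitutions the expression collapses to F = c^{4}/(2G), so the result is insensitive to the assumed H; the corrections mentioned in the text would adjust ρ and hence the numerical factor.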
2. The inward force (which must be carried by gravitons or the spacetime fabric, according to the possibilities available in the empirically defensible Standard Model and quantum gravity frameworks) is equal to the outward force (action and reaction are equal and opposite, a simple empirical law usually called Newton’s third law). However, there is a redshift of gravitons approaching us from relativistically receding, extremely redshifted masses, which reduces the effective graviton energy when received. (This redshift effect offsets the infinity-approaching outward-force effects of relativistic mass increase and the increasing density of the earlier universe at ever greater distances.)
3. Gravity force,
F(gravity)
= (total inward-directed graviton force, F = ma = m.dv/dt = mRH^{2}).(fraction of the total force which is uncancelled, due to the asymmetry in the inward graviton force imposed by the lack of graviton force from the black hole event-horizon cross-sectional area π(2GM/c^{2})^{2} of the non-receding nearby mass labelled ‘shield’)
= (total inward-directed graviton force).(fraction of the total inward force which is uncancelled due to the asymmetry imposed by the shield, e.g. the greyed cone area)
= (total inward-directed graviton force).(area of the end of the cone, labelled x)/(total spherical surface area out to radius R = ct, where t is the age of the universe, t = 1/H instead of the old Friedmann-Robertson-Walker prediction of t = (2/3)/H for a critical-density universe with zero cosmological constant, since there is no observable long-range gravitational deceleration of the expansion rate; e.g., at long ranges there is no curvature of spacetime, because the acceleration of the universe offsets gravitation)
= (total inward-directed graviton force).(area of the end of the cone, labelled x)/(4πR^{2})
= (ma).(x)/(4πR^{2})
= (ma).((shield area).(R/r)^{2})/(4πR^{2})
= (ma).(π(2GM/c^{2})^{2}.(R/r)^{2})/(4πR^{2})
= (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law because at great distances/early times (where the density of the universe is highest) it is a good approximation to put HR = c, which gives R/c = 1/H, so:
F(gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
Notice the inverse-square law, 1/r^{2}. There are several consequences beyond the obvious ability to make uniquely, theoretically justifiable quantitative calculations of the strength of gravity, to be compared with Newton’s semi-empirical law of gravity (which was deduced from Kepler’s laws of planetary motion and Galileo’s law of terrestrial gravitational acceleration). E.g., the force of gravity in quantum phenomena between fundamental particles is proportional not to M_{1}M_{2} but instead to M^{2}, suggesting a quantization of masses, to be compared with a similar result in QED, where the electromagnetic inverse-square force is proportional to the square of the unit electric charge, not to the product of two different charges (i.e., the quantization of fundamental charges).
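One way to read the claim that this fixes the strength of gravity: equating the derived F(gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2}) with the Newtonian form GM^{2}/r^{2} and solving for G gives G = 3H^{2}/(4πρ). The rearrangement and the use of the critical density as an uncorrected estimate of ρ are my own illustrative assumptions here; the text applies density/redshift corrections before comparing with data:

```python
import math

# Sketch: set (4/3)*pi*rho*G^2*M^2/(H^2*r^2) equal to the Newtonian form
# G*M^2/r^2 and solve for G. The M^2/r^2 factors cancel, leaving:
#     G_pred = 3*H^2 / (4*pi*rho)
# Using the critical density as a stand-in for the uncorrected rho:
H = 2.3e-18                                   # Hubble parameter, 1/s
G_measured = 6.674e-11                        # measured G, m^3 kg^-1 s^-2
rho = 3 * H**2 / (8 * math.pi * G_measured)   # critical density

G_pred = 3 * H**2 / (4 * math.pi * rho)
ratio = G_pred / G_measured
print(G_pred, ratio)   # within a factor of order unity of the measured G
```

With the critical density the prediction lands within a factor of order unity of the measured G; the density and redshift corrections mentioned above are what the text relies on to close the remaining gap.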
It’s tempting for people to dismiss new calculations without checking them, just because they are inconsistent with previous calculations, such as those allegedly proving the need for spin-2 gravitons (maybe combined with the belief that “if the new idea were right, somebody else would have done it before”, which is of course a brilliant way to stop all new developments in all areas by everybody…).
The deflection of a photon by the sun is twice the amount predicted by the theory for a non-relativistic object (say, a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons presumably interact with energy via unobserved Higgs-field bosons or whatever, but that’s not unique to spin-1; it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity, unlike a non-relativistic object, which speeds up as it enters stronger gravitational field regions. So energy conservation forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).
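The factor of two can be checked against the standard numbers: the non-relativistic (Newtonian) deflection at the Sun’s limb is 2GM/(c^{2}b), and doubling it gives the observed relativistic value. A minimal numeric sketch with standard solar values:

```python
import math

# Deflection of light grazing the Sun: the slow-particle (Newtonian)
# result is 2*G*M/(c^2 * b); the relativistic result is twice that.
G = 6.674e-11        # gravitational parameter, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
b = 6.96e8           # impact parameter = solar radius, m

rad_to_arcsec = 180.0 * 3600.0 / math.pi
newtonian = 2 * G * M_sun / (c**2 * b) * rad_to_arcsec
doubled = 2 * newtonian
print(newtonian, doubled)   # ~0.87 and ~1.75 arcseconds
```

The doubled figure, about 1.75 arcseconds, is the value confirmed by the classic eclipse observations.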
In general relativity this is a result of the fact that the Ricci tensor isn’t directly proportional to the stress-energy tensor, because the divergence of the stress-energy tensor isn’t zero (as it should be for conservation of mass-energy). So half the product of the metric tensor and the trace of the Ricci tensor must be subtracted from the Ricci tensor. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which becomes clear when it’s expressed in tensors. General relativity corrects this error. If you avoid assuming Newton’s law and obtain the correct theory directly from quantum gravity, this energy-conservation issue doesn’t arise.
Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.
This is because the actual graviton exchange force causing repulsion in the space between 2 nearby masses is totally negligible (F = mrH^{2} with small m and r terms) compared to the gravitons pushing them together from surrounding masses at great distances (F = mrH^{2} with big receding mass m at big distance r).
Similarly, if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn’t stop the two things being pushed together as if there is ‘attraction’ occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!
String theory is widely hailed for being compatible with the spin-2 graviton, which is widely held to be true because, for universal attraction between two similar charges (all masses and all energy fall the same way in a gravitational field, so they all have similar gravitational charge), you need a gauge boson of spin 2. This argument is popularized by Professor Zee in section 5 of chapter 1 of his textbook Quantum Field Theory in a Nutshell. It’s completely false, because we simply don’t live in a universe with only two gravitational charges: there are more than two particles in the universe. The path integral that Zee and others use explicitly assumes that only two masses are involved in the gravitational interactions which cause gravity.
If you correct this error, the repulsion of similar charges causes gravity by pushing two nearby masses together, just as on large scales it pushes matter apart, causing the accelerating expansion of the universe. The formula F = mrH^{2} can be obtained from solid facts based on observations of nature, e.g., by Newton’s second law if we find acceleration by differentiating the Hubble expansion velocity v = rH. [F = m*dv/dt = m*d(rH)/dt = mH*dr/dt + mr*dH/dt = mHv = mrH^{2}, where the second term vanishes for constant H. Notice here that Newton’s third law of motion then allows us to quantify the amount of inward force we get, since it is equal to the outward force. A mass which is receding from us with a given force gives rise to an equal graviton force directed towards us in order to satisfy the third law of motion. This force is totally negligible for small nearby masses, which therefore get pushed together by the repulsion which occurs over large distances where great masses (receding galaxies) are receding.]
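The differentiation step in brackets can be checked numerically. This is just a sketch with illustrative values of H, r0 and t; the logic only uses dr/dt = Hr with H constant:

```python
import math

# Sketch: numerically verify the step F = m*d(rH)/dt = mrH^2 for
# constant H. If dr/dt = H*r then r(t) = r0*exp(H*t), and the
# acceleration at any time equals H^2 * r(t). All values below are
# illustrative, chosen only to exercise the algebra.
H = 2.3e-18            # Hubble parameter, s^-1 (approximate)
r0 = 1.0e22            # illustrative initial distance, m

def r(t):
    return r0 * math.exp(H * t)

t = 1.0e17             # illustrative epoch, s
dt = 1.0e12            # finite-difference step, s

v = (r(t + dt) - r(t - dt)) / (2 * dt)            # dr/dt numerically
a = (r(t + dt) - 2 * r(t) + r(t - dt)) / dt**2    # d2r/dt2 numerically

assert abs(v - H * r(t)) / (H * r(t)) < 1e-6          # v = rH
assert abs(a - H**2 * r(t)) / (H**2 * r(t)) < 1e-3    # a = rH^2, so F = m*r*H^2
```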
There was a sequence of comments on the Not Even Wrong blog post about Awaiting a Messenger From the Multiverse concerning the spin of the graviton (some of which have since been deleted for getting off-topic). Some of these comments have been retrieved from my browser history cache and are below. There was an anonymous comment by ‘somebody’ at 5:57 am on 20 July 2008 stating:
‘Perturbative string theory has something called conformal invariance on the worldsheet. The empirical evidence for this is gravity. The empirical basis for QFT are locality, unitarity and Lorentz invariance. Strings manage to find a way to tweak these, while NOT breaking them, so that we can have gravity as well. This is oftrepeated, but still extraordinary. The precise way in which we do the tweaking is what gives rise to the various kinds of matter fields, and this is where the arbitrariness that ultimately leads to things like the landscape comes in. … It can easily give rise to things like multiple generations, nonabelain gauge symmetry, chiral fermions, etc. some of which were considered thorny problems before. Again, constructing PRECISELY our matter content has been a difficult problem, but progress has been ongoing. … But the most important reason for liking string theory is that it shows the features of quantum gravity that we would hope to see, in EVERY single instance that the theory is under control. Black hole entropy, gravity is holographic, resolution of singularities, resolution of information paradox – all these things have seen more or less concrete realizations in string theory. Black holes are where real progress is, according to me, but the string phenomenologists might disagree. Notice that I haven’t said anything about gaugegravity duality (AdS/CFT). Thats not because I don’t think it is important, … Because it is one of those cases where two vastly different mathematical structures in theoretical physics mysteriously give rise to the exact same physics. In some sense, it is a bit like saying that understanding quantum gravity is the same problem as understanding strongly coupled QCD. I am not sure how exciting that is for a nonstring person, but it makes me wax lyrical about string theory. It relates black holes and gauge theories. …. 
You can find a bound for the viscosity to entropy ratio of condensed matter systems, by studying black holes – thats the kind of thing that gets my juices flowing. Notice that none of these things involve farout mathematical m***********, this is real physics – or if you want to say it that way, it is emprically based. … String theory is a large collection of promising ideas firmly rooted in the emprirical physics we know which seems to unify theoretical physics …’
To which anon. responded:
‘No it’s not real physics because it’s not tied to empirical facts. It selects an arbitary number of spatial extra dimensions in order to force the theory to give the nonfalsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed but spin2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models, it can’t incorporate let alone predict reality.’
Anon. should have added that the AdS/CFT correspondence is misleading. [The AdS/CFT correspondence work, with strong interactions being modelled by anti-de Sitter space with a negative (rather than positive) cosmological constant, is misleading. People should be modelling phenomena with accurate models, not returning physics to the days when guys were arguing that epicycles are a clever invention, and that modelling the solar system using a false model (planets and stars orbiting the Earth in circles within circles) is a brilliant state-of-the-art calculational method! (Once you start modelling phenomenon A using a false approximation from theory B, you’re asking for trouble, because you’re mixing up fact and fiction. E.g., if a prediction fails, you have a ready-made excuse to simply add further epicycles/fiddles to ‘make it work’.) See my comment at http://keamonad.blogspot.com/2008/07/ninjaprof.html]
somebody Says:
July 20th, 2008 at 10:42 am
Anon
The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards. I agree that it is a reasonable stand to brush these questions off as “speculation”. But IF you consider them worthy of your time, then string theory is a game you can play. THAT was my claim. I am sure you will agree that it is a bit unreasonable to expect a nonspeculative solution to a problem that you consider already speculative.
Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory. So I would appreciate it if you read my posts before taking off on rants, stringing cliches, .. etc. It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gutreactions.
anon. Says:
July 20th, 2008 at 11:02 am
‘The problems we are trying to solve, like “quantizing gravity” are already speculative by your standards.’
Wrong again. I never said that. Quantizing gravity is not speculative by my standards, it’s a problem that can be addressed in other ways without all the speculation involved in the string framework. That’s harder to do than just claiming that string theory predicts gravity and then using lies to censor out those working on alternatives.
‘Incidentally, I never said a word about supersymmetry and Planck scale unification in my post because it was specifically a response to a question on the empirical basis of string theory.’
Wrong, because I never said that you did mention them. The reason why string theory is not empirical is precisely because it’s addressing these speculative ideas that aren’t facts.
‘It was meant for the critics of string theory who actually have scientific reasons to dislike it, and not gutreactions.’
If you want to defend string as being empirically based, you need to do that. You can’t do so, can you?
somebody Says:
July 20th, 2008 at 11:19 am
‘Quantizing gravity is not speculative by my standards,’
Even though the spin 2 field clearly is.
My apologies Peter, I truly couldn’t resist that.
anon. Says:
July 20th, 2008 at 11:53 am
The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. (To get universal attraction, such field quanta can be shown to require a spin of 2.) This speculation is contrary to the general principle that every body is a source of gravity. You never have gravitons exchanged merely between two masses in the universe. They will be exchanged between all the masses, and there is a lot of mass surrounding us at long distances.
There is no disproof which I’m aware of that the graviton has a spin of 1 and operates by pushing masses together. At least this theory doesn’t have to assume that there are only two gravitating masses in the universe which exchange gravitons!
somebody Says:
July 20th, 2008 at 12:20 pm
‘The spin-2 field for gravity is based on the false speculation that gravitons are exchanged purely between the attracting bodies. This speculation is contrary to the general principle that every body is a source of gravity.’
How many gravitationally “repelling” bodies do you know?
Incidentally, even if there were two kinds of gravitational charges, AND the gravitational field was spin one, STILL there are ways to test it. Eg: I would think that the bending of light by the sun would be more suppressed if it was spin one than if it is spin two. You need two gauge invariant field strengths squared terms to form that coupling, one for each spin one field, and that might be suppressed by a bigger power of mass or something. Maybe I am wrong about the details (i haven’t thought it through), but certainly it is testable.
somebody Says:
July 20th, 2008 at 12:43 pm
One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.
anon. Says:
July 20th, 2008 at 6:51 pm
‘How many gravitationally “repelling” bodies do you know?’
This repulsion between masses is very well known. Galaxies are accelerating away from every other mass. It’s called the cosmic acceleration, discovered in 1998 by Perlmutter.
The acceleration of the universe is the derivative of the Hubble expansion velocity formula, v = dr/dt = Hr: a = dv/dt = d(Hr)/dt = Hv + r(dH/dt) = Hv + 0 = rH^2 (taking H as constant).
This is factbased (differentiating a Hubble velocity to find acceleration isn’t speculative), and agrees with the observation error bars on the acceleration. F=ma then gives outward force of accelerating matter, and the third law of motion gives us equal inward force. All simple stuff. This inward force is F=ma = mrH^2.
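A rough order-of-magnitude check (assuming H ≈ 70 km/s/Mpc, an approximate observed value) of what a = rH^2 gives at the greatest observable distances, r ≈ c/H:

```python
# Order-of-magnitude check of a = rH^2 at the greatest distances,
# r ~ c/H, where it reduces to a = cH. The Hubble parameter value
# below (~70 km/s/Mpc) is an approximate observed figure.
Mpc = 3.086e22              # metres per megaparsec
H = 70e3 / Mpc              # Hubble parameter, s^-1 (~2.27e-18)
c = 2.998e8                 # speed of light, m/s

r = c / H                   # Hubble radius, m
a = r * H**2                # = c*H
print(a)                    # ~7e-10 m/s^2, the observed order of magnitude
```

This ~10^-10 m/s^2 scale is the order of magnitude of the acceleration inferred from the 1998 supernova observations.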
Since this force is apparently mediated by spin-1 gravitons, the gravitational force of repulsion from one relatively nearby small mass to another is effectively zero. Because of the terms m and r in F = mrH^2, the exchange of gravitons only produces a repulsive force over large distances from a large mass, such as a distant receding galaxy. This is why two relatively nearby masses (nearby in the cosmological sense of many parsecs) don’t repel, but are pushed together, because they repel the very distant masses in the universe.
‘One could have only repelling bodies with spin one, but NOT only attractive ones. Because attractive requires opposite charges.’
As already explained, there is a mechanism for similar charges to ‘attract’ by repulsion if they are surrounded by a large amount of matter that is repelling them towards one another. If you push things together by a repulsive force, the result can be misinterpreted as attraction…
*******************************************************************
After this comment, ‘somebody’ (evidently a string supporter who couldn’t grasp the physics) then gave a list of issues he/she had with this comment. Anon. then responded to each:
anon. Says:
July 20th, 2008 at 6:51 pm
‘1. The idea of “spin” arises from looking for reps. of local Lorentz invariance. At the scales of the expanding universe, you don’t have local Loretz invarince.’
There are going to be graviton exchanges, whether they are spin-1 or spin-2 or whatever, between distant receding masses in the expanding universe. So if this is a problem, it’s a problem for spin-2 gravitons just as it is for spin-1. I don’t think you have any grasp of physics at all.
‘2. … The expanding background is a solution of the underlying theory, whatever it is. The generic belief is that the theory respects Lorentz invariance, even though the solution breaks it. This could of course be wrong, …’
Masses are receding from one another. The assumption that they are being carried apart on a carpet of expanding spacetime fabric which breaks Lorentz invariance is just a classical GR solution speculation. It’s not needed if masses are receding due to being knocked apart by gravitons which cause repulsion between masses as already explained (pushing distant galaxies apart and also pushing nearby masses together).
‘3. … For spin 1 partciles, this gives an inverse square law. In particular, I don’t see how you jumped from the empirical F=mrH^2 relation to the claim that the graviton is spin 1.’
The inward force, presumably mediated by spin-1 gravitons, from a receding mass m at distance r is F = mrH^2. If its mass is small or r is small, the inward force towards you is small. But there will be very large masses beyond that nearby mass (distant receding galaxies) sending in a large inward force due to their large distance and mass. These spin-1 gravitons will presumably interact with the mass by scattering back off the graviton scatter cross-section for that mass. So a nearby, non-receding particle with mass will cause an asymmetry in the graviton field being received from more distant masses in that direction, and you’ll be pushed towards it. This gives an inverse-square law force.
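The claimed inverse-square behaviour of the shadowing asymmetry follows from pure geometry. Here is a minimal sketch; the cross-section value is hypothetical, and only the 1/r^2 scaling is the point:

```python
import math

# Geometric sketch of the shadowing argument: a nearby mass with
# effective scattering cross-section sigma blocks the fraction
# sigma/(4*pi*r^2) of an otherwise isotropic inward graviton flux,
# so the asymmetry (the net push towards that mass) falls as 1/r^2.
sigma = 1.0e-10      # hypothetical cross-section, m^2

def shadow_fraction(r):
    return sigma / (4 * math.pi * r**2)

# Doubling the distance quarters the asymmetry (inverse-square law):
ratio = shadow_fraction(2.0) / shadow_fraction(1.0)
assert abs(ratio - 0.25) < 1e-12
```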
‘4. You still have not provided an explanation for how the solar system tests of general relativity can survive in your spin 1 theory. In particular the bending of light. Einstein’s theory works spectacularly well, and it is a local theory. We know the mass of the sun, and we know that it is not the cosmic repulsion that gives rise to the bending of light by the sun.’
The deflection of a photon by the sun is by twice the amount predicted for the theory of a non-relativistic object (say, a slow bullet) fired along the same (initial) trajectory. Newtonian theory says all objects fall, as does this theory (gravitons may presumably interact with energy via unobserved Higgs field bosons or whatever, but that’s not unique for spin-1; it’s also going to happen with spin-2 gravitons). The reason why a photon is deflected twice the amount that Newton’s law predicts is that a photon’s speed is unaffected by gravity, unlike the case of a non-relativistic object, which speeds up as it enters stronger gravitational field regions. So energy conservation forces the deflection to increase due to the gain in gravitational potential energy, which in the case of a photon is used entirely for deflection (not speed changes).
In general relativity this is a result of the fact that the Ricci tensor isn’t directly proportional to the stress-energy tensor, because the divergence of the stress-energy tensor isn’t zero (which it should be for conservation of mass-energy). So from the Ricci tensor, half the product of the metric tensor and the trace of the Ricci tensor must be subtracted. This is what causes the departure from Newton’s law in the deflection of light by stars. Newton’s law omits conservation of mass-energy, a problem which is clear when it’s expressed in tensors. GR corrects this error. If you avoid assuming Newton’s law and obtain the correct theory directly from quantum gravity, this energy conservation issue doesn’t arise.
‘5. The problems raised by the fact that LOCALLY all objects attract each other is still enough to show that the mediator cannot be spin 1.’
I thought I’d made that clear:
Spin 2 graviton exchanges between 2 masses cause attraction.
Spin 1 graviton exchanges between 2 masses cause repulsion.
Spin 1 graviton exchanges between all masses will push 2 nearby masses together.
This is because the actual graviton exchange force causing repulsion in the space between 2 nearby masses is totally negligible (F = mrH^2 with small m and r terms) compared to the gravitons pushing them together from surrounding masses at great distances (F = mrH^2 with big receding mass m at big distance r).
Similarly, if you had two protons nearby and surrounded them with a spherical shell of immense positive charge, they might be pushed together. (Another example is squeezing two things together: the electrons in your hand repel the things, but that doesn’t stop the two things being pushed together as if there is ‘attraction’ occurring between them.) This is what occurs when spin-1 gravitons cause gravity by pushing things together locally. Gauge bosons are virtual particles, but they still interact to cause forces!
somebody Says:
July 21st, 2008 at 3:35 am
Now that you have degenerated to weird theories and personal attacks, I will make just one comment about a place where you misinterpret the science I wrote and leave the rest alone. I wrote that expanding universe cannot be used to argue that the graviton is spin 1. You took that to mean “… if this is a problem it’s a problem for spin2 gravitons just as it is for spn1.”
The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin. Spin arises from local Lorentz invariance.
anon. Says:
July 21st, 2008 at 4:21 am
‘The expanding universe has nothing to do with the spin of the particle was my point, not that it can be used to argue for this or that spin.’
Spin-1 causes repulsion. The universe’s expansion is accelerating. I’ve never claimed that particle spin is caused by the expansion of the universe. I’ve stated that repulsion is evident in the acceleration of the universe.
If you want to effectively complain about degeneration into weird theories and personal attacks, try looking at string theory more objectively: 10^500 universes, 10 dimensions, spin-2 gravitons, etc. (plus the personal attacks of string theorists on those working on alternative ideas).
***********************************
However, I’m getting way off the topic, which was that calculations are vital in physics, because they are something that can be checked for consistency with nature. In string theory, so far no experimental test is possible, so all of the checks done are really concerned with internal consistency, and consistency with speculations of one kind or another. String theorist Professor Michio Kaku summarises the spiritual enthusiasm and hopeful religious basis for the string theory belief system as follows in an interview with the ‘Spirituality’ section of The Times of India, 16 July 2008, quoted in a comment by someone on the Not Even Wrong weblog (notice that Michio honestly mentions ‘… when we get to know … string theory…’, which is an admission that it’s not known, because of the landscape problem of 10^500 alternative versions with different quantitative predictions; at present it’s not a scientific theory but rather 10^500 of them):
‘… String theory can be applied to the domain where relativity fails, such as the centre of a black hole, or the instant of the Big Bang. … The melodies on these strings can explain chemistry. The universe is a symphony of strings. The “mind of God” that Einstein wrote about can be explained as cosmic music resonating through hyperspace. … String theory predicts that the next set of vibrations of the string should be invisible, which can naturally explain dark matter. … when we get to know the “DNA” of the universe, i.e. string theory, then new physical applications will be discovered. Physics will follow biology, moving from the age of discovery to the age of mastery.’
As with the 200+ mechanical aether theories of force fields existing in the 19th century (this statistic comes from Eddington’s 1920 book Space Time and Gravitation), string theory at best is just a model for unobservables. Worse, it comes in 10^500 quantitatively different versions, worse than the 200 or so aethers of the 19th century. The problem with theorising about the physics at the instant of the big bang and the physics in the middle of a black hole is that you can’t actually test it. Similar problems exist when explaining dark matter, because your theory contains invisible particles whose masses you can’t predict beyond saying they’re beyond existing observations (religions similarly have normally invisible angels and devils, so you could equally use religions to ‘explain dark matter’; it’s not a quantitative prediction in string theory, so it’s not really a scientific explanation, just a belief system). Unification at the Planck scale and spin-2 gravitons are both speculative errors.
Once you remove all these errors from string theory, you are left with something that is no more impressive than aether: it claims to be a model of reality and to explain everything, but you don’t get any real use from it for predicting experimental results, because there are so many versions that it’s just too vague to be a science. It doesn’t connect well with anything in the real world at all. The idea that at least it tells us what particle cores are physically (vibrating loops of extra-dimensional ‘string’) doesn’t really strike me as being science. People decide which version of aether to use by artistic criteria like beauty, or by fitting the theory to observations and arguing that if the aether was different from this or that version we wouldn’t exist to observe its consequences (the anthropic selection principle), instead of using factual scientific criteria: there are no factual successes of aether to evaluate. So it falls into the category of a speculative belief system, not a piece of science.
By Mach’s principle of economy, speculative belief systems are best left out of science until they can be turned into observables, useful predictions, or something that is checkable. Science is not divine revelation about the structure of matter and the universe, instead it’s about experiments and related factbased theorizing which predicts things that can be checked.
**************************************************
Update: If you look at what Dr Peter Woit has done in deleting comments, he’s retained the one from anon which states:
‘[string is] not real physics because it’s not tied to empirical facts. It selects an arbitary number of spatial extra dimensions in order to force the theory to give the nonfalsifiable agreement with existing speculations about gravity, black holes, etc. Gravity and black holes have been observed but spin2 gravitons and the detailed properties of black holes aren’t empirically confirmed. Extra spatial dimensions and all the extra particles of supersymmetries like supergravity haven’t been observed. Planck scale unification is again a speculation, not an empirical observation. The entire success of string theory is consistency with speculations, not with nature. It’s built on speculations, not upon empirical facts. Further, it’s not even an ad hoc model that can replace the Standard Model, because you can’t use experimental data to identify the parameters of string theory, e.g., the moduli. It’s worse therefore than ad hoc models, it can’t incorporate let alone predict reality.’
Although he has kept that, Dr Woit deleted the further discussion comments about the spin-1 versus spin-2 graviton physics as being off-topic. Recently he argued that supergravity (a spin-2 graviton theory) in low dimensions was a good idea (see the post about this by Dr Tommaso Dorigo), so he is definitely biased in favour of the graviton having a spin of 2, despite that being not merely ‘not even wrong’ but plain wrong, for the reasons given above. If we look at Dr Woit’s post ‘On Crackpotism and Other Things’, we find Dr Woit stating on January 3rd, 2005 at 12:25 pm:
‘I had no intention of promulgating a general theory of crackpotism, my comments were purely restricted to particle theory. Crackpotism in cosmology is a whole other subject, one I have no intention of entering into.’
If that statement by Dr Woit still stands, then facts from cosmology about the accelerating expansion of the universe presumably won’t be of any interest to him, in any particle physics context such as graviton spin. In that same ‘On Crackpotism and Other Things’ comment thread, Doug made a comment at January 4th, 2005 at 5:51 pm stating:
‘… it’s usually the investigators labeled “crackpots” who are motivated, for some reason or another, to go back to the basics to find what it is that has been ignored. Usually, this is so because only “crackpots” can afford to challenge long held beliefs. Noncrackpots, even tenured ones, must protect their careers, pensions and reputations and, thus, are not likely to go down into the basement and rummage through the old, dusty trunks of history, searching for clues as to what went wrong. …
‘Instead, they keep on trying to build on the existing foundations, because they trust and believe that …
‘In other words, it could be that it is an interpretation of physical concepts that works mathematically, but is physically wrong. We see this all the time in other cases, and we even acknowlege it in the gravitational area where, in the low limit, we interpret the physical behavior of mass in terms of a physical force formulated by Newton. When we need the accuracy of GR, however, Newton’s physical interpretation of force between masses changes to Einstein’s interpretation of geometry that results from the interaction between mass and spacetime.’
Doug commented on that ‘On Crackpotism and Other Things’ post at January 1st, 2005 at 1:04 pm:
‘I’ve mentioned before that Hawking characterizes the standard model as “ugly and ad hoc,” and if it were not for the fact that he sits in Newton’s chair, and enjoys enormous prestige in the world of theoretical physics, he would certainly be labeled as a “crackpot.” Peter’s use of the standard model as the criteria for filtering out the serious investigator from the crackpot in the particle physics field is the natural reaction of those whose career and skills are centered on it. The derisive nature of the term is a measure of disdain for distractions, especially annoying, repetitious, and incoherent ones.
‘However, it’s all too easy to yield to the temptation to use the label as a defense against any dissent, regardless of the merits of the case of the dissenter, which then tends to convert one’s position to dogma, which, ironically, is a characteristic of “crackpotism.” However, once the inevitable flood of anomalies begins to mount against existing theory, no one engaged in “normal” science, can realistically evaluate all the inventive theories that pop up in response. So, the division into camps of innovative “liberals” vs. dogmatic “conservatives” is inevitable, and the use of the excusionary term “crackpot” is just the “defender of the faith” using the natural advantage of his position on the high ground.
‘Obviously, then, this constant struggle, especially in these days of electronically enhanced communications, has nothing to do with science. If those in either camp have something useful in the way of new insight or problemsolving approaches, they should take their ideas to those who are anxious to entertain them: students and experimenters. The students are anxious because the defenders of multiple points of view helps them to learn, and the experimenters are anxious because they have problems to solve.
‘The established community of theorists, on the other hand, are the last whom the innovators ought to seek to convince because they have no reason to be receptive to innovation that threatens their domains, and clearly every reason not to be. So, if you have a theory that suggests an experiment that Adam Reiss can reasonably use to test the nature of dark energy, by all means write to him. Indeed, he has publically invited all that might have an idea for an experiment. But don’t send your idea to [cosmology professor] Sean Carroll because he is not going to be receptive, even though he too publically acknowledged that “we need all the help we can get” (see the Science Friday archives).’
Gauge symmetry: whenever the motion or charge or angular momentum of spin, or some other symmetry, is altered, it is a consequence of conservation laws in physics that radiation is emitted or received. This is Noether’s theorem, which was applied to quantum physics by Weyl. (Illustration credit: http://hyperphysics.phyastr.gsu.edu/hbase/particles/expar.html. Unfortunately the arrow they show for the antineutrino in this diagram points the wrong way: the antineutrino is emitted in beta decay.)
Noether’s theorem (discovered in 1915) connects the symmetry of the action of a system (the time integral of the system’s Lagrangian) with conservation laws. In quantum field theory, the Ward-Takahashi identity expresses Noether’s theorem in terms of the Maxwell current (a moving charge, such as an electron, can be represented as an electric current, since that is the flow of charge). Any modification to the symmetry of the current involves the use of energy, which (due to conservation of energy) must be represented by the emission or reception of photons, i.e. field quanta. (For an excellent introduction to the simple mathematics of the Lagrangian in quantum field theory and its role in symmetry modification for Noether’s theorem, see chapter 3 of Ryder’s Quantum Field Theory, 2nd ed., Cambridge University Press, 1996.)
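Noether’s theorem can be illustrated in a toy classical-mechanics setting (a sketch only, not field theory): a Lagrangian with no x-dependence is translation-symmetric and conserves momentum, while an x-dependent potential breaks the symmetry and the momentum drifts.

```python
# Toy illustration of Noether's theorem in classical mechanics
# (a sketch only, not quantum field theory). With L = (1/2)*m*v^2 - V(x),
# translation symmetry (V constant in x) implies momentum conservation;
# an x-dependent V breaks the symmetry and momentum drifts.

def evolve(force, x, v, m=1.0, dt=1e-3, steps=1000):
    """Leapfrog integration of m*dv/dt = force(x) = -dV/dx."""
    for _ in range(steps):
        v += 0.5 * dt * force(x) / m
        x += dt * v
        v += 0.5 * dt * force(x) / m
    return x, v

m = 1.0
# Translation-symmetric case: V constant, so force = -dV/dx = 0.
_, v_free = evolve(lambda x: 0.0, x=0.0, v=2.0, m=m)
assert abs(m * v_free - m * 2.0) < 1e-9      # momentum conserved

# Symmetry broken: V = x gives force = -1; momentum is not conserved.
_, v_tilt = evolve(lambda x: -1.0, x=0.0, v=2.0, m=m)
assert abs(v_tilt - 1.0) < 1e-6              # momentum drifted by F*t = -1
```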
So, when the symmetry of a system such as a moving electron is modified, such a change of the phase of an electron’s electromagnetic field (which together with mass is the only feature of the electron that we can directly observe) is accompanied by a photon interaction, and viceversa. This is the basic gauge principle relating symmetry transformations to energy conservation. E.g., modification to the symmetry of the electromagnetic field when electrons accelerate away from one another implies that they emit (exchange) virtual photons.
All fundamental physics is of this sort: the electromagnetic, weak and strong interactions are all examples of gauge theories, in which symmetry transformations are accompanied by the exchange of field quanta. Noether’s theorem is pretty simple to grasp: if you modify the symmetry of something, the act of making that modification involves the use or release of energy, because energy is conserved. When the electron’s field undergoes a local phase change to its symmetry, a gauge field quantum called a ‘virtual photon’ is exchanged. However, it is not just energy conservation that comes into play in symmetry: conservation of charge and angular momentum are involved in more complicated interactions. In the Standard Model of particle physics, there are three basic gauge symmetries, implying different types of field quanta (or gauge bosons), which are the radiation exchanged when the symmetries are modified in interactions:
1. Electric charge symmetry rotation. This describes the electromagnetic interaction. It is supposedly the simplest gauge theory: the one-parameter Abelian U(1) electromagnetic symmetry group, invoking just one charge and one gauge boson. To get negative charge, a positive charge is represented as travelling backwards in time, and vice versa. The gauge boson of U(1) is mixed with the neutral gauge boson of SU(2), in the proportion specified by the empirically based Weinberg mixing angle, producing the photon and the neutral weak gauge boson. U(1) represents not just electromagnetic interactions but also weak hypercharge.
The U(1) maths is based on a type of continuous group defined by Sophus Lie in 1873. Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p. 47): ‘A Lie group … consists of an infinite number of elements continuously connected together. It was the representation theory of these groups that Weyl was studying.
‘A simple example of a Lie group together with a representation is that of the group of rotations of the twodimensional plane. Given a twodimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point. This is a symmetry of the plane. The thing that is invariant is the distance between a point on the plane and the central point. This is the same before and after the rotation. One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point. There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.
Argand diagram showing rotation by an angle on the complex plane. Illustration credit: based on Fig. 3.1 in Not Even Wrong.
‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one. If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers). As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).
‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions]. Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave. This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees. Because of this analogy, U(1) symmetry transformations are often called phase transformations. …
‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N). It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest. The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denoted SU(N). Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.
‘In the case N = 1, SU(1) is just the trivial group with one element. The first nontrivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthogonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
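Woit’s point that U(1) rotations are just multiplications by unit-length complex numbers is easy to verify numerically; a minimal Python sketch (the function name `u1_rotate` is ours, for illustration only):

```python
import cmath

# Rotation of the complex plane by angle theta = multiplication by the
# unit-length complex number e^(i*theta), as in Woit's U(1) example.
def u1_rotate(z, theta):
    return z * cmath.exp(1j * theta)

z = 3 + 4j                   # a point at distance 5 from the central point
w = u1_rotate(z, 1.234)      # rotate by an arbitrary angle

# The invariant of the transformation: distance to the central point.
assert abs(abs(w) - abs(z)) < 1e-12

# Rotating through a full 360 degrees returns to the starting point,
# just as advancing a wave's phase by a full cycle restores the wave.
full_turn = u1_rotate(z, 2 * cmath.pi)
assert abs(full_turn - z) < 1e-9
```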
2. Isospin symmetry rotation. This describes the weak interaction of quarks, controlling the transformation of quarks within one family. E.g., in beta decay a neutron decays into a proton by the transformation of a down-quark into an up-quark, and this transformation involves the emission of an electron (conservation of charge) and an antineutrino (conservation of energy and angular momentum). Neutrinos were a falsifiable prediction made by Pauli on 4 December 1930 in a letter to radiation physicists in Tuebingen, based on the spectrum of beta particle energies in radioactive decay (‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding … the continuous beta-spectrum … I admit that my way out may seem rather improbable a priori … Nevertheless, if you don’t play you can’t win … Therefore, Dear Radioactives, test and judge.’ – Pauli’s letter, quoted in a footnote on page 12 of http://arxiv.org/abs/hep-ph/0204104). The total amount of energy released in beta decay could be determined from the difference in mass between the beta radioactive material and its decay product, the daughter material, and the energy carried by the readily detectable ionizing beta particles could be measured. However, the beta particles were emitted with a continuous spectrum of energies up to a maximum upper limit (unlike the line spectra of gamma ray energies): it turned out that the total energy lost per beta decay was equal to the upper limit of the beta energy spectrum, which is three times the mean beta particle energy! Hence, on average, only one-third of the energy released in beta decay was accounted for by the emitted beta particles.
Pauli suggested that the unobserved beta decay energy was carried by neutral particles, now called antineutrinos. Because they interact so weakly, a great intensity of beta decay is needed to detect them: antineutrinos were first detected in 1956, coming from the intense beta radioactivity of the fission product waste of a nuclear reactor. By conservation laws, Pauli had been able to predict the exact properties to be expected. The theory of beta decay was developed soon after Pauli’s suggestion in the 1930s by Enrico Fermi, who then invented the nuclear reactor later used to discover the antineutrino. However, Fermi’s theory has a neutron decaying directly into a beta particle plus an antineutrino, whereas in the 1960s the theory of beta decay had to be re-expressed in terms of quarks. Glashow, Weinberg and Salam found that to make it a gauge theory there had to be a massive intermediate ‘weak gauge boson’. So what really happens is more complicated than in Fermi’s theory: a down-quark interacts with a massive W_{-} weak field gauge boson, which then decays into an electron and an antineutrino. The massiveness of the field quanta is needed to explain the weakness of beta decay (i.e., the relatively long half-lives of beta decay; a free neutron, for example, is radioactive with a beta decay half-life of 10.3 minutes, compared with the tiny lifetimes, small fractions of a second, of hadrons which decay via the strong interaction). The massiveness of the weak field quanta was a falsifiable prediction, and in 1983 CERN discovered the weak field quanta with the predicted energies, confirming the SU(2) weak interaction gauge theory.
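The 10.3-minute half-life quoted above can be turned into the usual decay bookkeeping (decay constant and mean lifetime) with standard radioactivity formulas; a minimal sketch:

```python
import math

# Standard radioactive-decay bookkeeping for the free neutron, using the
# ~10.3 minute half-life quoted in the text above.
half_life_s = 10.3 * 60                    # half-life in seconds
decay_const = math.log(2) / half_life_s    # lambda = ln(2) / t_half
mean_lifetime_s = 1 / decay_const          # tau = t_half / ln(2)

def fraction_surviving(t_seconds):
    """Fraction of free neutrons not yet beta-decayed after time t."""
    return math.exp(-decay_const * t_seconds)

print(round(mean_lifetime_s))                   # mean lifetime ~ 892 s
print(round(fraction_surviving(half_life_s), 9))  # 0.5 after one half-life
```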
There are two relative types or directions of isospin, by analogy to ordinary spin in quantum mechanics (where spin up and spin down states are represented by +1/2 and -1/2 in units of h-bar). These two isospin charges are modelled by the Yang-Mills SU(2) symmetry, which has (2 × 2) − 1 = 3 gauge bosons (with positive, negative and neutral electric charge, respectively). Because the interaction is weak, the gauge bosons must be massive, and as a result they have a short range: massive virtual particles don’t exist for long in the vacuum, so they can’t travel far in that short lifetime. The two isospin charge states allow quark-antiquark pairs, or doublets, to form, called mesons. The weak isospin force only acts on particles with left-handed spin. At high energy, all weak gauge bosons are massless, allowing the weak and electromagnetic forces to become symmetric and unify; but at low energy, the weak gauge bosons acquire mass, supposedly from a Higgs field, breaking the symmetry. This Higgs field has not been observed, and the general Higgs models don’t make a single falsifiable prediction (there are several possibilities).
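The link between gauge-boson mass and short range can be illustrated with the usual Compton-wavelength estimate, range ≈ ħ/(mc): a virtual quantum of mass m can only propagate roughly that far before it must vanish. The W mass used here (~80.4 GeV) is the measured figure, not a number quoted in the text:

```python
# Why massive gauge bosons give a short-range force: the range of a
# virtual quantum of mass m is roughly its reduced Compton wavelength,
# hbar / (m * c) = (hbar * c) / (m * c**2).
hbar_c_mev_fm = 197.327     # hbar * c in MeV * femtometres
m_w_mev = 80_400            # W boson rest mass-energy, ~80.4 GeV (measured)

range_fm = hbar_c_mev_fm / m_w_mev   # range in femtometres
range_m = range_fm * 1e-15           # range in metres

print(range_m)   # ~2.5e-18 m, about a thousandth of a proton's size
```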
3. Colour symmetry rotation. This changes the colour charge of a quark, in the process releasing colour-charged gluons as the field quanta. Strong nuclear interactions (which bind protons into a nucleus against very strong electromagnetic repulsion, which would otherwise make nuclei explode) are described by quantum chromodynamics, whereby quarks have a symmetry due to their strong nuclear or ‘colour’ charges. This originated with Gell-Mann’s SU(3) eightfold way of arranging the known baryons by their properties, a scheme which successfully predicted the existence of the Omega Minus before it was experimentally observed in 1964 at Brookhaven National Laboratory, confirming the SU(3) symmetry of hadron properties. The understanding (and testing) of SU(3) as a strong interaction Yang-Mills gauge theory, in terms of quarks with colour charges and gluon field quanta, was a completely radical extension of the original SU(3) eightfold way, which was merely a convenient hadron categorisation scheme.
Experiments in the 1950s, scattering very high energy electrons off neutrons and protons, first showed evidence that each nucleon has a more complex structure than a simple point, undermining the idea that nucleons are fundamental particles. Another problem with nucleons being fundamental particles was the magnetic moments of neutrons and protons. Dirac in 1929 initially claimed that the antiparticle his equation predicted for the electron was the already-known proton (the neutron remained undiscovered until 1932), but because he couldn’t explain why the proton is more massive than the electron, he eventually gave up on this idea and predicted the unobserved positron instead (just before it was discovered). The problem with the proton being a fundamental particle is that, by analogy to the positron, it would have a magnetic moment of 1 nuclear magneton, whereas the measured value is 2.79 nuclear magnetons. For the neutron, you would expect zero magnetic moment for a neutral spinning point particle, but the neutron was found to have a magnetic moment of -1.91 nuclear magnetons. These figures are inconsistent with protons and neutrons being fundamental particles, but are consistent with quark structure:
‘The fact that the proton and neutron are made of charged particles going around inside them gives a clue as to why the proton has a magnetic moment higher than 1, and why the supposedly neutral neutron has a magnetic moment at all.’ – Richard P. Feynman, QED, Penguin, London, 1990, p. 134.
To explain hadron physics, Zweig and Gell-Mann suggested the theory that baryons are composed of three quarks. But there was immediately a problem: the Omega Minus would contain three identical strange quarks, violating the Pauli exclusion principle, which prevents identical fermions from occupying the same set of quantum numbers or states. (Pairs of otherwise identical electrons in an orbital have opposite spins, giving them different sets of quantum numbers, but because there are only two spin states, you can’t make three identical charges share the same orbital by having different spins; given the measured spin-3/2 of the Omega Minus, all of its spin-1/2 strange quarks would have the same spin.) To get around this problem in the experimentally discovered Omega Minus, the quarks must have an additional quantum number, due to the existence of a new charge: the colour charge of the strong force, which comes in three types (red, blue and green). The SU(3) symmetry of the colour force gives rise to (3 × 3) − 1 = 8 gauge bosons, called gluons. Each gluon is a charged combination of a colour and the anticolour of a different colour, e.g. a gluon might carry blue-antigreen charge. Because gluons carry a charge, unlike photons, they interact with one another and also with virtual quarks produced by pair production in the intense electromagnetic fields near fermions. This makes the strong force vary with distance in a different way to the electromagnetic force. At small distances from a quark, the net colour charge increases in strength with increasing distance, which is the opposite of the behaviour of the electromagnetic charge (which gets bigger at smaller distances, due to less intervening shielding by the polarized virtual fermions produced in pair production). The overall result is that quarks confined in hadrons have asymptotic freedom to move about over a certain range of distances, which gives nucleons their size.
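The gauge-boson counting used above, (2 × 2) − 1 = 3 for SU(2) and (3 × 3) − 1 = 8 for SU(3), is simple enough to check mechanically, along with the usual heuristic that 8 gluons are the 9 colour-anticolour pairs minus the one colour-neutral singlet:

```python
from itertools import product

# An SU(N) gauge group has N**2 - 1 generators, hence N**2 - 1 gauge bosons.
def gauge_bosons(n):
    return n * n - 1

assert gauge_bosons(2) == 3   # weak SU(2): three bosons (+, -, neutral)
assert gauge_bosons(3) == 8   # strong SU(3): eight gluons

# Naively, colour-anticolour pairs give 3 x 3 = 9 combinations, but the
# colour-neutral singlet combination carries no net colour charge and is
# excluded, leaving the 8 gluons of SU(3).
colours = ['red', 'green', 'blue']
pairs = list(product(colours, colours))
assert len(pairs) == 9
assert len(pairs) - 1 == gauge_bosons(3)
```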
Before quarks and colour charge had been discovered, Yukawa proposed a theory of strong-force attraction which predicted that the strong force was due to pion exchange. He predicted the mass of the pion, although unfortunately the muon was discovered before the pion and was at first wrongly identified as Yukawa’s exchange radiation. Virtual pions and other virtual mesons are now understood to mediate the strong interaction between nucleons as a relatively long-range residue of the colour force.
Above: the electroweak charges of the Standard Model of mainstream particle physics. The argument we made is that the U(1) symmetry isn’t real and must be replaced by SU(2) with two charges and massless versions of the weak boson triplet (we do this by replacing the Higgs mechanism with a simpler mass-giving field that yields predictions of particle masses). The two charged gauge bosons simply mediate the positive and negative electric fields of charges, instead of having neutral photon gauge bosons with 4 polarizations. The neutral gauge boson of the massless SU(2) symmetry is the graviton. The lepton singlet with right-handed spin in the Standard Model table above is not really a singlet: because SU(2) is now being used for electromagnetism rather than U(1), we automatically have a theory that unites quarks and leptons. The problem of the preponderance of matter over antimatter is also resolved this way: the universe is mainly hydrogen, consisting of one electron plus two up-quarks and one down-quark. The electrons are not actually produced alone. The down-quark, as we will demonstrate below, is closely related to the electron.
The fractional charge is due to vacuum polarization shielding, with the accompanying conversion of electromagnetic field energy into short-ranged, virtual-particle-mediated nuclear fields. This is a predictive theory even at low energy, because it can make predictions based on conservation of field quanta energy where vacuum polarization attenuates a field, and because the conversion of leptons into quarks requires higher energy than existing experiments have had access to. So electrons are not singlets: some of them were converted into quarks in the big bang in very high energy interactions. The antimatter counterpart of the electrons in the universe is not absent but is present in nuclei, because those positrons were converted into the up-quarks in hadrons. The handedness of the weak force relates to the fact that in the early stages of the big bang, for each two electron-positron pairs produced by pair production in the intense, early vacuum fields of the universe, both positrons but only one of the two electrons were confined to produce a proton. Hence the amounts of matter and antimatter in the universe are identical, but due to reactions related to the handedness of the weak force, all the positrons were converted into up-quarks while only half of the electrons were converted into down-quarks. We’re oversimplifying a little, because some neutrons were produced and quite a few other minor interactions occurred, but this is approximately the overall result of the reactions. The Standard Model table of particles above is in error because it assumes that leptons and quarks are totally distinct. For a more fundamental level of understanding, we need to alter the electroweak portion of the Standard Model.
The apparent deficit of antimatter in the universe is simply a mis-observation: the antimatter has been transformed from leptons into quarks, which from a long distance display different properties and interactions to leptons, so it isn’t currently being acknowledged for what it really is. The difference is due to cloaking by the polarized vacuum, and to close confinement causing colour charge to physically appear by inducing asymmetry: the colour charge of a lepton is invisible because it is symmetrically distributed over the three preons in a lepton and cancels out to white, unless an enormous field strength, due to the extremely close proximity of another particle, creates an asymmetry in the preon arrangement, allowing a net colour charge to act on that nearby particle. (Previous discussions of the relationship of quarks to leptons on this blog include http://nige.wordpress.com/2007/06/13/feynmandiagramsinloopquantumgravitypathintegralsandtherelationshipofleptonstoquarks/ and http://nige.wordpress.com/2007/07/17/energyconservationinthestandardmodel/ where suggestions by Carl Brannen and Tony Smith are covered.)
Considering the strange quarks in the Omega Minus, which contains three quarks each of electric charge -1/3: vacuum polarization of three nearby leptons would reduce the -1 unit observable charge per lepton to -1/3 observable charge per lepton, because the vacuum polarization in quantum field theory which shields the core of a particle extends out to about a femtometre or so, and this zone will overlap for the three quarks in a baryon like the Omega Minus. The overlapping of the polarization zones makes them three times more effective at shielding the core charges than in the case of a single charge like a lone electron. So the electron’s observable electric charge (seen from a great distance) is reduced by a factor of three to the charge of a strange quark or a down-quark. Think of it by analogy with a couple sharing blankets, which act as shields reducing the emission of thermal radiation: if each of the couple contributes one blanket, then the overlap of blankets will double the heat shielding. This is basically what happens when N electrons are brought close together so that they share a common (combined) vacuum polarization shell around the core charges: the shielding gives each charge in the core an apparent charge (seen from outside the vacuum polarization, i.e., more than a few femtometres away) of -1/N charge units. In the case of up-quarks with apparent charges of +2/3, the mechanism is more complex; the -1/3 charges in triplets are the clearest example of the mechanism whereby shared vacuum polarization shielding transforms properties of leptons into those of quarks. The emergence of colour charge when leptons are confined together also appears to have a testable, falsifiable mechanism, because we know how much energy becomes available for the colour charge as the observable electric charge falls (conservation of energy suggests that the attenuated electromagnetic field energy gets converted into colour charge energy).
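The shared-shielding arithmetic above is the author’s own model rather than standard QFT, but as arithmetic it can be sketched directly (the function name `apparent_charge` is ours, for illustration):

```python
from fractions import Fraction

# A sketch of the shared-shell shielding argument in the text (not
# standard QFT): when N unit charges sit inside one shared vacuum
# polarization shell, the overlapping shells are taken to be N times as
# effective at shielding, so each core charge of -1 appears from far
# outside the shell as -1/N.
def apparent_charge(core_charge, n_shared):
    """Observed long-range charge per particle under the shared-shell model."""
    return Fraction(core_charge) / n_shared

assert apparent_charge(-1, 1) == Fraction(-1)     # lone electron: -1
assert apparent_charge(-1, 3) == Fraction(-1, 3)  # triplet: -1/3, as for a
                                                  # down or strange quark
```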
For the mechanism of the emergence of colour charge in quarks from leptons, see the suggestions of Tony Smith and Carl Brannen, outlined at http://nige.wordpress.com/2007/07/17/energyconservationinthestandardmodel/.
In particular, the Cabibbo mixing angle in quantum field theory indicates a strong universality in reaction rates for leptons and quarks: the strength of the weak force when acting on quarks in a given generation is similar to that for leptons to within about 1 part in 25. The small 4% difference in reaction rates arises, as explained by Cabibbo in 1963, from the fact that a lepton has only one way to decay, but a quark has two decay routes, with probabilities of 96% and 4% respectively. The similarity between leptons and quarks in terms of their interactions is strong evidence that they are different manifestations of common underlying preons, or building blocks.
Above: Coulomb force mechanism for electrically charged massless gauge bosons. The SU(2) electrogravity mechanism.
Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel, because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs deliver relatively smaller impulses, since they are coming from large distances and, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s backs, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration pushing their backs towards one another, due both to the recoil of the bullets they fire and to the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!
This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.
Above: the charged gauge boson mechanism, and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no such adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance problem. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with a simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so the magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.
The price of the random walk statistics needed to describe such a zigzag summation (avoiding opposite charges!) is that the net force is not approximately 10^{80} times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zigzag inefficiency of the sum: about 10^{40} times gravity. In other words, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^{40}/10^{80} = 10^{-40} times as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^{80} randomly distributed charges, electromagnetism, multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation exchanged between all charges (including all charges of similar sign), is 10^{40} times as strong as gravity. You can picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come in pairs of opposite charges scattered at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This follows from the geometry of the addition. Intuitively, you might incorrectly think the sum must be zero because on average it will cancel out. However, it isn’t: it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps.
If you average a large number of different random walks, their random net directions do give a vector sum of zero. But for an individual drunkard’s walk, a net displacement does typically occur; this is the basis of diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
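The square-root-of-N rule for the drunkard’s walk invoked above (and used to scale one pair’s potential by sqrt(10^{80}) = 10^{40}) is easy to confirm by simulation:

```python
import math
import random

# Monte Carlo check of the drunkard's-walk rule: the typical (RMS) net
# displacement after n random unit steps is about sqrt(n) step lengths,
# not n (coherent addition) and not zero (naive cancellation).
def rms_displacement(n_steps, n_walks=2000, seed=42):
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)  # random step direction
            x += math.cos(theta)
            y += math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / n_walks)

n = 400
r = rms_displacement(n)
# RMS displacement should be close to sqrt(400) = 20 unit step lengths.
assert abs(r - math.sqrt(n)) / math.sqrt(n) < 0.1
```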
Fig. 1a: Feynman diagrams for the quantum gravity interactions discussed in this post. M_{1} and M_{2} are two masses which accelerate towards one another. Note that in the spin-1 graviton model, the accelerating expansion of the universe is maintained by the long-range Yang-Mills exchange of gravitons between all of the masses in the universe, because the emission of gravitons causes the recoil of those masses away from one another, and when gravitons are received they also help to knock receding masses further apart. The overall effect is accelerative recession of masses.
This same mechanism, i.e. the exchange of gravitons between masses, which causes the acceleration of the universe and the Hubble expansion, also causes gravitational attraction between masses which are not substantially redshifted relative to one another. This effect is due to the fact that nearby masses don’t recede substantially from one another, so they don’t exchange gravitons forcefully; the whole basis for forceful graviton exchange is that masses must be receding. The Hubble recession law v = HR, when differentiated, gives an outward acceleration of one mass relative to another of a = dv/dt = d(HR)/dt = H(dR/dt) = Hv = H(HR) = RH^{2}, which results in an outward force of one mass relative to another of F = ma = mRH^{2}.
By Newton’s 3rd Law of Motion, it follows that there is an equal reaction force, which, from the possibilities implied by the Standard Model and gravitational physics, turns out to be mediated by gravitons. Hence non-receding masses have no outward force relative to one another, and thus no inward-directed, graviton-mediated reaction force. In other words, the physics tells us that non-receding masses (masses which are not receding from one another at immense, relativistic velocities) actually shield one another from gravitons exchanged with the rest of the universe (which is radially symmetrically distributed around the sky at the greatest distances, for which the graviton exchange contributions are most important, i.e. it leads to isotropic incoming graviton exchange radiation). Hence, we are pushed down to Earth because the Earth shields us from gravitons in the downward direction, creating a small amount of asymmetry in the exchange of gravitons between us and the surrounding universe (the cross-section for graviton shielding by an electron is only its black hole event horizon cross-sectional area, i.e. 5.75*10^{-114} square metres). The quasi-compressive effects of gravitons on masses account for the ‘curvature’ effects of general relativity, such as the fact that the Earth’s radius is 1.5 mm less than the figure given by Euclidean geometry (Feynman Lectures on Physics, chapter 42, p. 6, equation 42.3).
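The 5.75 × 10^{-114} m² cross-section quoted above follows from the Schwarzschild radius of an electron-mass object, and can be checked numerically:

```python
import math

# Check of the cross-section quoted in the text: the black-hole event
# horizon area of an electron-mass object.  Schwarzschild radius
# r = 2*G*m/c**2, cross-sectional area pi*r**2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
m_e = 9.109e-31    # electron mass, kg

r_s = 2 * G * m_e / c**2    # Schwarzschild radius of an electron-mass object
area = math.pi * r_s**2     # event-horizon cross-section in m^2

print(area)   # ~5.8e-114 m^2, matching the 5.75e-114 figure in the text
```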
In the big bang (see http://www.astro.ucla.edu/~wright/tiredlit.htm for evidence that the big bang is the only scientifically defensible interpretation of the redshift of distant matter in the universe), the relative radial outward motion of matter away from us at velocity v = dR/dt = H*R (Hubble’s recession law) leads to an outward cosmological acceleration of matter a = dv/dt = d(H*R)/dt = (H*dR/dt) + (R*dH/dt); taking H as constant in time (dH/dt = 0), this gives a = H*v = R*H^{2}.
This is the cosmological acceleration observed, and it also gives us an outward force of receding matter (Newton’s 2nd law, F = dp/dt ~ ma), which by Newton’s 3rd law leads to an equal, inward-directed reaction force (which, it turns out, is mediated by gravitons, and predicts the strength of gravity as shown below). Notice that the path integral for non-loop quantum gravity interactions (those important at low energy, e.g. for determining the model of discrete-interaction quantum gravity which replaces the classical differential geometry approximation used in general relativity) is very simple: we need only sum the simple (non-loop) exchanges of gravitons between masses. Because the model denies substantially forceful graviton exchange between masses which aren’t receding from one another at relativistic velocities, it follows that all relatively nearby (non-redshifted) masses like the sun act as shields against gravitons, towards which we are pushed by the unimpeded gravitons arriving from other directions. Unlike Le Sage’s original idea, this model is substantiated by a fully predictive, falsifiable physical analysis, and actually predicts the strength of gravity and other checkable facts such as the acceleration of the universe (see Fig. 3 below for a simplified version of the analysis).
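The outward acceleration a = R*H^{2} can be evaluated numerically; at the Hubble radius R = c/H it reduces to a = c*H. The Hubble parameter value used here (~70 km/s/Mpc) is an assumed input, not a figure taken from the text:

```python
# Numerical evaluation of the claimed cosmological acceleration a = R*H**2
# at the Hubble radius R = c/H, where it reduces to a = c*H.
c = 2.998e8            # speed of light, m/s
mpc_in_m = 3.086e22    # one megaparsec in metres
H = 70e3 / mpc_in_m    # Hubble parameter (~70 km/s/Mpc assumed), in s^-1

R = c / H              # Hubble radius, metres
a = R * H**2           # outward acceleration, a = R*H^2 = c*H

print(a)    # ~6.8e-10 m/s^2
```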
The mainstream spin-2 graviton speculation is not even wrong
The mainstream spin-2 graviton model firstly makes the error of considering only two regions of energy or mass, ignoring all the other masses in the entire universe in the analysis! So it assumes – with no evidence for this whatsoever – that gravitons are only exchanged between the two masses which are attracted together. Actually, as explained above, this is the opposite of what occurs. There is no reason, in any case, why gravitons should not be exchanged with the rest of the masses in the universe. The fraction of the mass of the universe contained in an apple and the Earth is trivial. The omission from the mainstream physical model of graviton exchanges with the mass of the surrounding universe causes a massive error. (A good analogy to this error is Sternglass’s confusion over low-level radiation effects, where he similarly begins with a false assumption and then turns that false assumption into a fact-like arguing point to interpret his evidence wrongly. As with Sternglass and low-level radiation hype, the facts gain absolutely no publicity when published, Sternglass does not retract and apologise any more than string theorists do, and the media continues to make a living from selling lies.)
But that is not all. Because all gravitational charge is positive (mass and energy), the mainstream compounds this error by arguing that spin-1 exchange radiation would cause repulsion between such similar charges. We know that similar masses appear to attract, not repel, so there is an error somewhere in the assumptions made. But the mainstream, rather than finding the real error (which is that it is ignoring the contributions from all the mass in the surrounding universe, which is also exchanging spin-1 gravitons with the two relatively small masses of interest for the calculation!), has instead (since the 1930s!) followed into stringy fairyland a suggestion by Pauli and Fierz that the faulty assumption is just the spin of the graviton, and that if the graviton is spin-2 instead of spin-1, it will cause universal attraction between similar charges (just as spin-1 causes universal repulsion between similar charges).
So, compounding the first error of ignoring almost all of the mass in the universe when writing down its path integral for quantum gravity, the mainstream then adds a second error: fraudulently ‘correcting’ the false prediction of repulsion that would occur using spin-1 graviton exchange between two regions of mass-energy, by adjusting the assumed spin of the mediating graviton to make the force attractive instead. Using spin-2 gives a 5-polarisation graviton with a 5-component tensor in the Lagrangian, which, when evaluated in a Feynman path integral, makes two masses always attract. The failure here, aside from predicting nothing checkable (unlike the spin-1 graviton), is that it is false in the first place to assume that gravitons are only going to be exchanged between two masses. Why on Earth should the gravitons from other masses in the rest of the universe not be exchanged with the two masses you are considering when calculating gravitation? Of course they should be! It’s obvious to anyone who is concerned with the mathematical physics, rather than ignorantly working mathematical machinery with no concern for the physics.
As soon as you do include the masses in the surrounding universe (which are far bigger even though they are further away – i.e., the mass of the Earth and an apple is only 1 part in 10^{29} of the mass of the universe – and all masses are gravitational charges which exchange gravitons with all other masses and with energy!), you begin to see what is really occurring. Spin-1 gauge bosons are gravitons!
The correct model is radical and extremely predictive and checkable, unlike the ‘not even wrong’ spin-2 graviton belief system which leads to the stringy landscape of pseudoscience. In simple outline: receding (v = H*R, Hubble law) masses have an acceleration dv/dt = d(H*R)/dt = H*dR/dt + R*dH/dt = H*v + 0 = H(H*R), and thus carry an outward force F = m*dv/dt which has, by Newton’s 3rd law, an inward reaction force which is mediated by gravitons. Cosmologically distant masses push one another apart by exchanging gravitons, explaining the lack of gravitational deceleration observed in the universe. But masses which are nearby in cosmological terms (not redshifted much relative to one another) are pushed together by gravitons from the surrounding (highly redshifted) distant universe, because they don’t exert an outward force relative to one another, and so don’t fire a recoil force (mediated by spin-1 gravitons) towards one another. They, in other words, shield each other. Think of the exchange simply as bullets bouncing off particles. If bullets are firing in from all directions, the proximity of a nearby mass which isn’t shooting at you will act as a shield, and you’d be pushed towards that shield (which is why things fall towards large masses). This is a quantitative prediction, predicting the strength of the gravitational coupling, which can be checked. So this mechanism, which predicted the lack of gravitational deceleration in the big bang in 1996 (observed in 1998 by Saul Perlmutter’s automated CCD telescope software), also predicts gravitation, quantitatively.
It should be noted that in this diagram we have drawn the force-causing gauge or vector boson exchange radiation in the usual convention as a horizontal wavy line (i.e., the gauge bosons are shown as being instantly exchanged, not as radiation propagating at the velocity of light and thus taking time to propagate). In fact, gauge bosons don’t propagate instantly, and to be strictly accurate we would need to draw inclined wavy lines as shown in Fig. 2 below. The exchange of the gauge bosons as a kind of reflection process (which imparts an impulse in the case where it causes the mass to accelerate) would make the diagram more complex. Conventionally, Feynman diagrams are shorthand for categories of interactions, not for specific individual interactions. Therefore, they are not depicting all the similar interactions that occur when two particles attract or repel; they are way oversimplified in order to make the basic concept lucid.
Loops in Feynman diagrams and the associated infinite perturbative expansion
Because the gravitational phenomena we have observed, manifested in the checked aspects of general relativity, are at low energy, loop phenomena can be ignored (this is discussed later in this post). Loops occur when bosonic field quanta undergo pair production and briefly become fermionic pairs, which soon annihilate back into bosons but become briefly polarized during their existence and in so doing modify the field; they are described by the infinite series of Feynman diagrams, each representing one term in the perturbative expansion of a Feynman path integral. So the direct exchange of gauge bosons such as gravitons gives us only a few possible types of Feynman diagram for non-loop, simple, direct exchange of field quanta between charges.
The illustration above summarises a few of the basic (widely ignored) points about the failure of existing general relativity to represent quantum fields (by making clear that curvature is an approximation of a lot of little deflections caused by lots of individual, discrete, quantum gravity interactions), and the failure of the mainstream quantum gravity model to include graviton exchange with surrounding masses in the rest of the universe. When receding masses appear to be accelerating radially away relative to us (as observed in spacetime), they are emitting gravitons which travel towards us at the same velocity as the visible light we observe from such receding galaxies. The recoil and impulses created by the emission and reception of such gravitons explain both gravitation and the acceleration of the universe in one go, as shown in the 3rd Feynman diagram of Fig. 1 above, and in the more technical mathematics below in this blog post. (I’ve only completed the first two sections in chapter 1 so far: book draft version 1.23.)
Fig. 1b: an illustration of some of the Feynman diagrams corresponding to successive terms in the perturbative expansion for electron–electron scattering (illustration credit: http://www.answers.com/topic/feynmandiagram?cat=technology). The first Feynman diagram shown represents the low energy (non-loop) approximation, i.e. Coulomb’s law (Gauss’s law in Maxwell’s equations describes the diverging electric field from a charge and is physically equivalent to Coulomb’s law). This simple Feynman diagram contains no loops, as it has only two vertices (it is second-order). All of the other Feynman diagrams in the illustration have four vertices and thus are fourth-order; these are the ‘loop’ corrections.
It is very important to recognise that the simplest (non-loop) Feynman diagram is of overwhelming importance for calculations in low-energy physics! It is the simplest Feynman diagram which corresponds to the classical approximation (the low-energy or long-distance asymptotic limit to a quantum field theory). Although the presence of loops does cause charge and mass renormalization, whereby the apparent values of these parameters at low energy are different to their values at high energy (due to the shielding or antishielding of the respective force field by pair-production virtual particles which arise in relatively intense fields), it is a fact that at low energy the coupling parameters are constant.
This means that we can analyse the low-energy limit of a quantum field theory of gravity and electromagnetism without complex calculations of looped Feynman diagrams. For example, the first loop Feynman diagram for the magnetic moment of the electron only increases the simple (Dirac equation) non-loop calculation from 1 Bohr magneton to about 1.00116 Bohr magnetons. In other words, the most important loop Feynman diagram only varies the calculated result by 0.116%. This is quite a trivial correction, and in general the more complex the Feynman diagram, the less likely the process is to occur, and so the smaller the contribution it makes to a prediction of what will be observed in experiments. For this reason, we can ignore loops when we analyse the path integrals for fundamental forces. This means that the path integral has a simple approximate solution for the non-loop factor, omitting the complex perturbative expansion of looped diagram terms.
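The 1.00116 figure quoted above is Schwinger's first-order correction, 1 + α/(2π), which can be checked in a couple of lines (a sketch; the low-energy value of α is assumed):

```python
import math

alpha = 1 / 137.036  # fine-structure constant (low-energy value)

# Schwinger's one-loop correction to the Dirac magnetic moment:
# mu/mu_Dirac = 1 + alpha/(2*pi)
moment = 1 + alpha / (2 * math.pi)

print(f"{moment:.5f} Bohr magnetons")  # ~1.00116, i.e. a 0.116% correction
```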
This makes the calculations extremely straightforward and soluble by simple geometric methods, such as asymmetry analysis (see Fig. 3 below for a calculation of the force of gravity by this method of analyzing nonloop contributions to path integrals geometrically).
Fig. 1c: an illustration from a paper by Reinhard Alkofer demonstrating the complexity of loops in Feynman diagrams. This demonstrates the simple cancellation of non-loop Feynman diagrams, as opposed to the non-cancellation you get when loops occur. Generally, loops occur when a boson (an oscillatory electromagnetic wave, with as much negative electric field as positive electric field), while in a strong (>1.3*10^{18} volts/metre) electric field, briefly becomes two virtual (short-lived) fermions, one positive and one negative. The fermions quickly recombine and annihilate back into bosonic field quanta (as predicted by the energy-versus-time version of the Heisenberg uncertainty principle, a simple scattering law). But during the brief phase as virtual fermions, the virtual fermions move in opposite directions in the original electric field, introducing an electric dipole which tends to oppose and partially screen the original electric field (i.e., the observable charge, which is determined from the observed electric field, since nobody can see the core charge directly). Hence, the existence of loops in electric fields tends to shield those fields as seen from a great distance. In the case of colour charge fields in QCD, the virtual charges can increase the field strength rather than shielding it. Loops are important in high-energy, short-ranged fields. For low-energy, long-range electromagnetic and gravitational physics, loops don’t exist in spacetime far from charges, because the field strengths are too weak to allow pair-production phenomena. Generally, field strengths must exceed Schwinger’s threshold before there is any pair production.
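The pair-production threshold quoted in the caption can be reproduced from the standard Schwinger formula E_c = m^{2}c^{3}/(e*hbar). This is a sketch with textbook constant values, not part of Alkofer's paper:

```python
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J*s

# Schwinger critical field for electron-positron pair production
E_c = m_e**2 * c**3 / (e * hbar)

print(f"E_c = {E_c:.2e} V/m")  # ~1.3e18 V/m
```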
As far as quantum gravity field loops are concerned, there is no experimental evidence that they even exist, although they are assumed in vacuous string theory to exist very close to gravitational charges, as a means of making the weak force of gravity ‘unify’ with other forces at high energy (however, that stringy approach to numerological ‘unification’ of coupling constants ignores the conservation of energy as discussed in a previous post, and it is not a physical unification of different force fields, which has been demonstrated by different means using a physical mechanism).
For more simple discussion on loops in quantum fields, see this paper and this paper. This blog post is concerned primarily with nonloop interactions, i.e., force fields in low energy physics, i.e. the classical limit for quantum field theory, where quantum field effects are relatively simple and therefore, as shown below, simply don’t require the kind of very sophisticated mathematics required to accommodate loop effects.
“The cloud of virtual particles acts like a screen or curtain that shields the true value of the central core. As we probe into the cloud, getting closer and closer to the core charge, we ‘see’ less of the shielding effect and more of the core. This means that the electromagnetic force from the electron as a whole is not constant, but rather gets stronger as we go through the cloud and get closer to the core. Ordinarily when we look at or study an electron, it is from far away and we don’t realize the core is being shielded.” – Professor David Koltick.
Unlike the electromagnetic field, which is shielded by the vacuum and gets stronger than predicted by the Coulomb inverse-square law as you approach the core of a fermion, the strong nuclear force – which is the “glue” that holds together elementary particles such as protons – actually gets weaker closer to the core charge. “Because the electromagnetic charge is in effect becoming stronger as we get closer and the strong force is getting weaker, there is a possibility that these two forces may at some energy be equal. Many physicists have speculated that when and if this is determined, an entirely new and unique physics may be discovered.” – Professor David Koltick, quoted at http://findarticles.com/p/articles/mi_m1272/is_n2625_v125/ai_19496192
‘… we [experimentally] find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron–positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p. 71.
Plus, in particular:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion–antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
Fig. 2: Feynman diagrams (left) by convention make various simplifications: the gauge boson radiation is not actually transmitted instantly between charges, contrary to the convention depicted in places like http://hyperphysics.phy-astr.gsu.edu/hbase/particles/expar.html. Instead, as the diagram on the right shows, it takes time for radiation to be transferred between charges. If gravitons went instantly (i.e. as a horizontal wavy line on a diagram where the vertical axis depicts time), then gravity would act instantly instead of being constrained by the velocity of light. The errors introduced by oversimplification of Feynman diagrams help to keep mainstream physicists insulated from reality, concentrating on nonexistent ‘problems’ like working out ways to avoid the difficulties in renormalizing a spin-2 graviton theory. If they concentrated on the fact that gravitons are spin-1, as demonstrated by the empirical, observed evidence, the entire problem could be sorted out straight away, as shown below.
They don’t want to be heretics, however. Groupthink wins: ‘Groupthink is a type of thought exhibited by group members who try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. During Groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.’ – Wikipedia. ‘[Groupthink is a] mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.’ – Irving Janis.
Fig. 3: This is the key diagram working out, without a fancy path integral formulation, the net sum of spin-1 graviton contributions. The first few logical steps are included:
1. Outward force of receding matter (recession velocity v = HR, where H is the Hubble constant and R is the apparent distance) is F = ma = m.dv/dt = m.d(HR)/dt = m[H.dR/dt + R.dH/dt] = m[Hv + 0] = mH(HR) = mRH^{2}. This is on the order of F = 10^{43} Newtons, but there is a correction to be applied for the apparent increase in density as we look back to earlier times (great distances in spacetime), and for the relativistic mass increase of receding matter. But for simplicity, to see how the maths works, ignore the corrections:
F = ma = [(4/3)πR^{3}ρ].[dv/dt] = [(4/3)πR^{3}ρ].[H^{2}R] = 4πR^{4}ρH^{2}/3, where ρ is the mean density of the universe.
2. Inward force (which must be carried by gravitons or the spacetime fabric, as explained in the book draft and here) is equal to the outward force (action and reaction are equal and opposite – Newton’s 3rd law). However, there is a redshift of gravitons approaching us from relativistically receding, extremely redshifted masses, which reduces the effective graviton energy when received. (This redshift effect offsets the infinity-approaching outward force effects of relativistic mass increase and the increasing density of the earlier universe at ever greater distances.)
3. Gravity force, F = (total inward force).(cross-sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R^{2}/r^{2}) / (total spherical area with radius R).
In an earlier post, it is proved that the shield’s cross-sectional area is the cross-sectional area of the event horizon for a black hole, π(2GM/c^{2})^{2}. But at present, to get a feel for the physical dynamics, we will assume this is the case without proving it. This gives
(force of gravity) = (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law because at great distances/early times (where the density of the universe is highest) it is a good approximation to put HR = c, which gives R/c = 1/H, so:
(force of gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
This key result ignores both the density variation in spacetime (the distant, earlier universe having higher density) and the effect of redshift in reducing the energy of gravitons and weakening quantum gravity contributions from extreme distances (the momentum of a graviton is p = E/c, where E is reduced by redshift since E = hf). But it does demonstrate three important things about this line of research:
1. Quantization of mass: the force of gravity is proportional not to M_{1}M_{2} but instead to M^{2}, which is a vital result, because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. Lepton and hadron masses beyond the electron are nearly all integer multiples of 0.5*0.511*137 = 35 MeV, where 0.511 MeV is the electron’s mass and 137.036… is the well known Feynman dimensionless factor in charge renormalization (discovered much earlier in quantum mechanics by Sommerfeld). Furthermore, quark doublet or meson masses are close to multiples of twice this, 70 MeV, while quark triplet or baryon masses are close to multiples of three times this, 105 MeV. It appears that the simplest possible model – which predicts the masses of new, as yet unobserved particles as well as explaining existing particle masses – is that the vacuum particle which is the building block of mass is 91 GeV, like the Z weak boson. The muon mass, for instance, is 91,000 MeV divided by the product of 137 and twice Pi: a combination of a 137 vacuum polarization shielding factor and a 2*Pi dimensionless geometric shielding factor. (Spinning a particle or a missile in flight reduces the radiant exposure per unit area of its spinning surface by a factor of Pi compared to a non-spinning particle or missile, because the entire surface area of the edge of a loop or cylinder is Pi times the cross-sectional area seen side-on; and a spin-1/2 fermion must rotate twice, i.e. by 720 not 360 degrees – like drawing a line right around the single surface of a Möbius strip – to expose its entire surface to observation and reset its symmetry.) This is analysed in an earlier blog post, showing how all masses are built up from only one type of fundamental massive particle in the vacuum, and making checkable predictions.
Polarized vacuum veils around particles reduce the strength of the coupling between the massive 91 GeV vacuum particles (which interact with gravitons) and the SU(2) x SU(3) particle core of interest (which doesn’t directly interact with gravitons), accounting for the observed discrete spectrum of fundamental particle masses.
The correct mass-giving field differs in some ways from the electroweak symmetry-breaking Higgs field of the conventional Standard Model (which gives the Standard Model charges, as well as the 3 weak gauge bosons, their symmetry-breaking mass at low energies by ‘miring’ them, i.e. resisting their acceleration): a discrete number of the vacuum mass particles (gravitational charges) become associated with leptons and hadrons, either within the vacuum polarized region which surrounds them (strong coupling to the massive particles, hence large effective masses) or outside it (where the coupling, which presumably relies on the electromagnetic interaction, is shielded and weaker, giving lower effective masses to particles). In the case of the deflection of light by gravity, photons have zero rest mass, so it is their energy content which is causing the deflection. The mass-giving field in the vacuum still mediates the effects of gravitons, but since the photon has no net electric charge (it has equal amounts of positive and negative electric field density), it has zero effective rest mass. The quantum mechanism by which light gets deflected as predicted by general relativity has been analysed in an earlier post: due to the FitzGerald–Lorentz contraction, a photon’s field lines are all in a plane perpendicular to the direction of propagation. This means that twice the electric field’s energy density in a photon (or other light-velocity particle) is parallel to a gravitational field line that the photon is crossing at normal incidence, compared to the case for a slow-moving charge with an isotropic electric field. The strength of the coupling between the photon’s electric field and the mass-giving particles in the vacuum is generally not quantized, unless the energy of the photon is quantized.
If you are firmly attached to an accelerating horse, you will accelerate at the specific rate that the horse accelerates at. But if you are less firmly attached, the acceleration you get depends on your adhesion to the saddle. If you slide back as the horse accelerates, your acceleration is somewhat less than that of the horse you are sitting on. Particles with rest mass are firmly anchored to vacuum gravitational charges, the particles with fixed mass that replace the traditional role of Higgs bosons. But particles like photons, which lack rest mass, are not firmly attached to the massive vacuum field, and the quantized gravitational interactions – like the fixed acceleration of a horse – are not automatically conveyed upon the photon. The result is that a photon gets deflected more classically, by the ‘curved spacetime’ created by the effect of gravitons upon the Higgs-like massive bosons in the vacuum, than particles with rest mass such as electrons.
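The mass relations claimed in point 1 above are easy to check arithmetically. A sketch (the 91 GeV building-block value is this model's assumption, and the variable names are mine):

```python
import math

m_e = 0.511          # electron mass, MeV
alpha_inv = 137.036  # inverse fine-structure constant (shielding factor)
m_Z = 91000.0        # assumed 91 GeV vacuum building-block mass, in MeV

# Basic mass building block: half the electron mass times 137
base = 0.5 * m_e * alpha_inv

# Muon mass: 91 GeV shielded by the 137 factor and the 2*Pi geometric factor
muon = m_Z / (alpha_inv * 2 * math.pi)

print(f"base = {base:.1f} MeV")  # ~35 MeV
print(f"muon = {muon:.1f} MeV")  # ~105.7 MeV; measured muon mass is 105.66 MeV
```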
2. The inverse square law, for distance r.
3. Many checked and checkable quantitative predictions. Because the Hubble constant and the density of the universe can be quantitatively measured (within certain error bars, like all measurements), you can use this to predict the value of G. As astronomy gets better measurements, the accuracy of the prediction gets better and can be checked experimentally.
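To see how the prediction of G works here, set the derived force (4/3)πρG^{2}M^{2}/(H^{2}r^{2}) equal to Newton's GM^{2}/r^{2}; one factor of G cancels, leaving G = 3H^{2}/(4πρ). A rough numerical sketch (the values of H and ρ are assumed round figures, with no density or redshift corrections applied):

```python
import math

H = 2.3e-18    # Hubble parameter, 1/s (assumed, ~70 km/s/Mpc)
rho = 9.5e-27  # mean density of the universe, kg/m^3 (assumed)

# Equating (4/3)*pi*rho*G^2*M^2/(H^2*r^2) with Newton's G*M^2/r^2
# cancels one factor of G, giving:
G_predicted = 3 * H**2 / (4 * math.pi * rho)

print(f"G_predicted = {G_predicted:.2e} m^3/(kg s^2)")
```

With these uncorrected inputs the result comes out on the order of 10^{-10}, the same order of magnitude as the measured G = 6.674*10^{-11}; the density and redshift corrections discussed above change the numerical factor.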
In addition, the mechanism predicts the expansion of the universe: the reason why Yang–Mills exchange radiation is redshifted to lower energy by bouncing off distant masses is that energy from gravitons is being used to cause the distant masses to speed up. This makes quantitative predictions, and is a firm test of the theory. (The outward force of a receding galaxy of mass m is F = mH^{2}R, which requires power P = dE/dt = Fv = mH^{3}R^{2}, where E is energy.)
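That power requirement can be illustrated for a single receding galaxy. This is purely an illustrative sketch: the galaxy mass and distance below are made-up round numbers, not measured values:

```python
H = 2.3e-18  # Hubble parameter, 1/s
m = 2.0e42   # hypothetical galaxy mass, kg (roughly a large spiral galaxy)
R = 1.0e25   # hypothetical distance to the galaxy, m

# Outward force F = m*H^2*R and recession velocity v = H*R,
# so the power delivered is P = F*v = m*H^3*R^2
P = m * H**3 * R**2

print(f"P = {P:.1e} W")
```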
It should be noted that the gravitons in this model would have a mean free path (average distance between interactions) of 3.10 x 10^{77} metres in water, as calculated in the earlier post here. They are able to produce gravity by interacting with the Higgs-like vacuum field, due to the tremendous flux of gravitons involved. The radially symmetric, isotropic outward force of the receding universe is on the order of 10^{43} Newtons, and by Newton’s 3rd law this produces an equal and opposite (inward) reaction force. This is the immense field behind gravitation. Only a trivial asymmetry in the normal equilibrium of such immense forces is enough to produce gravity. Cosmologically nearby masses are pushed together because they aren’t receding much, and so don’t exert a forceful flux of graviton exchange radiation in the direction of other (cosmologically) nearby masses. Because cosmologically nearby masses therefore don’t exert graviton forces upon each other as exchange radiation, they shield one another, in effect, and therefore get pushed together by the forceful exchange of gravitons which does occur with the receding universe on the unshielded side, as illustrated in Fig. 1 above.
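The 10^{43} Newton figure can be reproduced from the outward-force expression derived in step 1 above, F = 4πR^{4}ρH^{2}/3 with R = c/H. A sketch with assumed round values of H and ρ, ignoring the density and redshift corrections:

```python
import math

H = 2.3e-18    # Hubble parameter, 1/s
c = 2.998e8    # speed of light, m/s
rho = 9.5e-27  # mean density of the universe, kg/m^3 (assumed)

# Effective radius of the receding universe
R = c / H
# Outward force of the receding matter
F = (4 / 3) * math.pi * R**4 * rho * H**2

print(f"F = {F:.1e} N")  # on the order of 1e43 N
```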
Some other posts, besides the key one, that are useful for grasping the details are this, this, this, this, this and this. Some of the earlier posts contain omissions or errors which have been corrected in later posts, or by comments added to the post. Science is not a religion or political business where a dogma or policy is agreed upon and then fixed forever. Where omissions or errors occur, they should be corrected.
Update: I’ve decided that in the finished book, every righthand page will be simply a fullpage illustration (using diagrams, graphs, etc.) of the technical content of the text on the lefthand page. Otherwise the book will rely on appealing to people who have the time to read a lot of technical text, which most people do not have. Hopefully the technical illustrations on all righthand pages will provide the ‘reader’ with the ability to grasp all the main points in a few seconds visually, and then they can refer to the text on the facing page if they want further particulars. I’ll probably wait until I finish the text for each chapter, before designing and inserting the full page illustrations.
As of 10 February 2008, I’ve changed the banner of this blog from SU(2) x SU(3) to “U(1) x SU(2) x SU(3) quantum field theory: Is electromagnetism mediated by charged, massless SU(2) gauge bosons? Is the weak hypercharge interaction mediated by the neutral massless SU(2) weak gauge boson? Is gravity mediated by the spin-1 gauge boson of U(1)? This blog provides the evidence and predictions for this introduction of gravity into the Standard Model of particle physics.” This is driven by the fact, explained in the comments to this post, that:
… SU(2) x SU(3), … [it] seems too difficult to make SU(2) account for weak hypercharge, weak isospin charge, electric charge and gravity. I thought it would work out by changing the Higgs field so that some massless versions of the 3 weak gauge bosons exist at low energy and cause electromagnetism, weak hypercharge and gravity.

However, since the physical model I’m working on uses the two electrically charged but massless SU(2) gauge bosons for electromagnetism, that leaves only the electrically neutral massless SU(2) gauge boson to perform both the role of weak hypercharge and gravity. That doesn’t work out, because the gravitational charges (masses) are evidently going to be different to the weak hypercharge, which is only a factor of two different between an electron and a neutrino. Clearly, an electron is immensely more massive than a neutrino. So the SU(2) x SU(3) model must be wrong.

The only possibility left seems to be similar to the Standard Model U(1) x SU(2) x SU(3), but with differences from the Standard Model. U(1) would model gravitational charge (mass) and spin-1 (push) gravitons. The massless neutral SU(2) gauge boson in the model I’m working on would then mediate weak hypercharge only, instead of mediating gravitation as well.

The whole point about my approach is that I’m working from fact-based predictive mechanisms for fundamental forces, and in this world view the symmetry group is just a mathematical model which is found to describe the symmetries suggested by the mechanisms. Here are some links to some online basic information about hypercharge, weak hypercharge and SU(2) isospin. Ryder’s book Quantum Field Theory (2nd ed., 1996), chapters 1-3 and 8-9, contains the best (physically understandable) introduction to the basic mathematics, including Lagrangians, path integrals, Yang–Mills theory and the Standard Model. From my perspective, the symmetry groups are the end product of the physics; they summarise the symmetries of the interactions.
The end product can change when the understanding of the details producing it changes. I’ve revised the latest draft book manuscript PDF file accordingly.
Dr Thomas Love of California State University has pointed out:
‘The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction, we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics; it is not inherent in the physics.’
That looks like a factual problem, undermining the mainstream interpretation of the mathematics of quantum mechanics. If you think about it, sound waves are composed of air molecules, so you can easily write down the wave equation for sound and then – when trying to interpret it for individual air molecules – come up with the idea of wavefunction collapse occurring when a measurement is made for an individual air molecule.
Feynman writes in a footnote printed on pages 55-6 of my (Penguin, 1990) copy of his book QED:
‘… I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed … If you get rid of all the old-fashioned ideas and instead use the [path integral] ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’
Feynman on p. 85 points out that the effects usually attributed to the ‘uncertainty principle’ are actually due to interferences from virtual particles or field quanta in the vacuum (which don’t exist in classical theories but must exist in an accurate quantum field theory):
‘But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these [classical] rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’
Hence, in the path integral picture of quantum mechanics – according to Feynman – all the indeterminacy is due to interferences. It’s very analogous to the indeterminacy of the motion of a small grain of pollen (less than 5 microns in diameter) due to jostling by individual interactions with air molecules, which represent the field quanta being exchanged with a fundamental particle.
The path integral then makes a lot of sense, as it is the statistical resultant for a lot of interactions, just as the path integral was actually used for Brownian motion (diffusion) studies in physics before its role in QFT. The path integral still has the problem that it’s unrealistic in using calculus and averaging an infinite number of possible paths determined by the continuously variable Lagrangian equation of motion in a field, when in reality there are not going to be an infinite number of interactions taking place. But at least it is possible to see the problems, and entanglement may be a red herring:
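The Brownian-motion analogy can be made concrete with a toy random-walk simulation (an illustrative sketch only, with hypothetical step counts; this is a classical diffusion demonstration, not a quantum calculation): the net displacement produced by many discrete jostling interactions grows only as the square root of the number of interactions, which is the kind of statistical resultant the path integral averages over.

```python
import math
import random

def rms_displacement(n_steps, n_walkers=2000, seed=1):
    """RMS net displacement of 1-D random walkers after n_steps unit kicks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        # each kick moves the walker one unit left or right at random
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return math.sqrt(total / n_walkers)

# Diffusion prediction: RMS displacement ~ sqrt(number of interactions).
for n in (100, 400, 1600):
    print(n, round(rms_displacement(n), 1), math.sqrt(n))
```

The same square-root scaling is why the pollen grain’s path looks erratic close up but averages out on large scales, much as Feynman describes for the electron’s paths.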
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
copy of a comment:
http://asymptotia.com/2008/02/17/talesfromtheindustryxviijumpthoughts/
Hi Clifford,
Thanks for these further thoughts about being science advisor […] for what is (at least partly) a sci fi film. It’s fascinating.
“What I like to see first and foremost in these things is not a strict adherence to all known scientific principles, but instead internal consistency.”
Please don’t be too hard on them if there are apparent internal inconsistencies. Such alleged internal inconsistencies don’t always matter, as Feynman discovered:
“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …
“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …
” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …” – Feynman, quoted at http://www.tony5m17h.net/goodnewsbadnews.html#badnews
I agree with you that: “Entertainment leading to curiosity, real questions, and then a bit of education …”
Update (23 February 2008): via Dr Woit’s blog Not Even Wrong, see the recent review of Smolin’s book in the Times Literary Supplement:
“… Smolin has launched a controversial attack on those working on the dominant model in theoretical physics. He accuses string theorists of racism, sexism, arrogance, ignorance, messianism and, worst of all, of wasting their time on a theory that hasn’t delivered.”

http://tls.timesonline.co.uk/article/0,,253722650590_1,00.html
Update (28 February 2008): via Woit, the latest hype for string is ‘rock guitars could hold secret to the universe’. It might sound like just more pathetic spin, but actually, the analogy of string theory hype to that of a community of rock groupies is sound.
Update (2 March 2008): The rock guitar string promoter referred to just above is Dr Lewney who has the site http://www.doctorlewney.com. He writes on Dr Woit’s blog:
‘I’m actually very open to ideas as to how best to communicate physics to schoolkids.’
Dr Lewney, if you want to communicate real, actual physics rather than useless blathering and lies to schoolkids, that’s really excellent. But please just remember that physics is not uncheckable speculation, and that twenty years of mainstream hype of string theory in British TV, newspapers and the New Scientist has by freak ‘coincidence’ (don’t you believe it) correlated with a massive decline in kids wanting to do physics. Maybe they’re tired of sci fi dressed up as physics or something.
http://www.buckingham.ac.uk/news/newsarchive2006/ceerphysics2.html:
‘Since 1982 A-level physics entries have halved. Only just over 3.8 per cent of 16-year-olds took A-level physics in 2004 compared with about 6 per cent in 1990.
‘More than a quarter (from 57 to 42) of universities with significant numbers of physics undergraduates have stopped teaching the subject since 1994, while the number of home students on first-degree physics courses has decreased by more than 28 per cent. Even in the 26 elite universities with the highest ratings for research the trend in student numbers has been downwards.
‘Fewer graduates in physics than in the other sciences are training to be teachers, and a fifth of those are training to be maths teachers. A-level entries have fallen most sharply in FE colleges where 40 per cent of the feeder schools lack anyone who has studied physics to any level at university.’
http://www.math.columbia.edu/~woit/wordpress/?p=651#comment34820:
‘One thing that is clear is that hype of speculative uncheckable string theory has at least failed to encourage a rise in student numbers over the last two decades, assuming that such speculation itself is not actually to blame for the decline in student interest.
‘However, it’s clear that when hype fails to increase student interest, everyone will agree to the consensus that the problem is a lack of hype, and if only more hype of speculation was done, the problem would be addressed.’
Professor John Conway, a physicist at the University of California, has written a post called ‘What’s the (Dark) Matter?’ where someone has referred to my post here as my ‘belief’ that gravitons are of spin-1. Actually, this isn’t a ‘belief’. It’s a fact (not a belief) that so far, spin-2 graviton ideas are at best uncheckable speculation that is ‘not even wrong’, and it’s a fact (not a belief) that this post shows that spin-1 gravitons do reproduce gravitation as already known from the checked and confirmed results of general relativity, plus quantitatively predicting more stuff such as the strength of gravity. This is not a mere ‘personal belief’, such as the gut feeling that is used by string theorists, politicians and priests to justify hype in religion or politics. It is instead fact-based, not belief-based, and it makes accurate predictions so far as the difficult calculations and the imperfect experimental data to date can be used to check it; there’s no belief system here, just cold hard fact. This is why I’m writing about it, and why censoring it is wrong. If science is to be based on mainstream groupthink, then it is reduced to a religion or to politics, i.e., a dictatorship of the majority over minorities which is enforced not by solid physical reasoning from facts determined in nature, but by the political tools of censorship.
On the subject of dark matter, my analysis of the gravity coupling constant G shows that the usual critical density formula (for a flat universe) from general relativity implies a density which is too high, simply because of quantum gravity effects on G which are ignored by general relativity, which is classical on large scales. Sure there is some dark matter (neutrinos and large black holes which give off little Hawking radiation, for example), but it is nowhere near the amount suggested by general relativity’s quantum-gravity-ignoring Friedmann-Robertson-Walker metric. Actually, with graviton exchange between masses being the source of gravity, there is a difference between the classical approximation you get for fairly short range effects (like an apple falling to the earth, and the earth orbiting the sun), and very long range effects in cosmology where the masses involved are actually receding with relativistic motion (approaching the velocity of light, c). The latter case involves gravitons being received in a redshifted condition, i.e. with lower energy than is the case over shorter distances where masses aren’t receding so rapidly. This, and related effects, could easily be included in the usual framework of general relativity by reducing the value of G at long ranges to an effective value that allows for this graviton redshift effect. General relativity is only a classical approximation, but it needn’t be completely wrong on cosmological scales: by building in corrections for the physical mechanism, it can be made to approximate cosmology far better.
To explain the apparent dark matter manifested in the flattened shapes of observed galactic rotation curves, showing orbit velocity versus radial distance from the centre of a rotating galaxy, I recommend that the reader check a page on John Hunter’s website, http://www.gravity.uk.com/galactic_rotation_curves.html. I first came across Hunter’s idea after he published a quarter-page notice in the New Scientist a few years ago, and we corresponded. Hunter is apparently not too interested in quantum gravity (spin-1 graviton exchange as a mechanism), just with making a mathematical conjecture and checking it, but his simple idea is mathematically equivalent to the physical mechanism of graviton exchange I worked out (I didn’t investigate the idea of inertial and gravitational potential energy equivalence and its consequences for galactic rotation curves). Hunter starts off with the conjecture that the rest mass of any object, mc^2, is equivalent to the gravitational potential energy GMm/R with respect to distant matter of mass M at distance R, which is important (see comment 18 of this blog post) from my standpoint because general relativity rests on the principle of equivalence between inertial and gravitational masses. Inertial mass has an energy equivalent via Einstein’s famous formula, and gravitational mass also has an energy equivalent (the gravitational potential energy of that mass with respect to the surrounding universe, i.e. the energy which would be released by that mass via the gravitational field if the universe collapsed). Einstein failed to apply the equivalence principle (for inertial and gravitational masses in general relativity) to the energy equivalents of those respective inertial and gravitational masses, which are known from special relativity and from classical gravitation:
Fig. 4: John Hunter’s result: ‘So stars moving at a constant velocity at different radii means a constant m/r ratio. … For any given radius r, if the mass within this radius is such that the m/r value is higher than an average value (k), then the effective gravitational constant is lowered. This allows rotating matter to drift away from the centre, thus reducing the m/r ratio at this radius. If m/r < k (for any given radius r) then the effective gravitational constant is higher than average attracting more matter to within this radius, increasing the m/r ratio at this radius. In this way a constant m/r ratio for spiral galaxies can be maintained for different r, resulting in the constant velocity of stars and the flat shape of the rotation graphs. A reduction in the value of G at the centre of galaxies, … may lead to the phenomenon of active galactic nuclei and the emergence of jets.’
Notice that Hunter is oversimplifying the mass distribution in the universe, since due to the big bang the effective density increases with spacetime distance (the earlier universe had higher density), and he is not including all graviton interaction effects, but the basic conjecture and some of its consequences are mathematically similar to the physical mechanism of graviton exchange I’m working on. The normal equilibrium of radiated graviton power, which occurs via the exchange of gravitons between any given mass and the rest of the universe, produces the immense pressure on each mass which keeps it confined to a small volume as a tiny black hole; fermions are charged bosons which are trapped by gravitation. It is because of this graviton exchange equilibrium that the rest mass energy of a fundamental particle is equal to the gravitational potential energy of that mass with respect to the other masses in the surrounding universe: equilibrium of graviton exchange between one mass and all the other masses in the surrounding universe is the cause of the equality between inertial and gravitational masses/energies. It also shows why masses contract in the direction of motion and gain mass when in motion (as predicted by special relativity): acceleration of mass alters the exchange equilibrium, the resistance being the force of inertia, and the pressure effect of encountering gravitons in the direction of motion causes the Lorentz contraction.
If you look at Hunter’s conjecture mc^2 = mMG/R, since in spacetime R = ct, this immediately gives you Louise Riofrio’s fundamental equation, namely tc^3 = MG. Louise Riofrio is a physicist who has investigated whether this formula suggests that c is inversely proportional to the cube root of the age of the universe. The quantum gravity mechanism gives the same equation (ignoring dimensionless multiplication factors for redshift and varying density effects) and suggests that c isn’t varying; instead the effective value of G varies for various reasons as already discussed (see also discussion here and updates in comments at that post).
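The algebra connecting Hunter’s conjecture to Riofrio’s equation is easy to check numerically. In this sketch the values of M and m are arbitrary placeholders chosen only to exercise the formulas, not measured quantities:

```python
# Numeric consistency check: Hunter's conjecture mc^2 = GMm/R, with R = ct,
# implies Riofrio's equation t*c^3 = M*G.
G, c = 6.674e-11, 2.998e8   # gravitational constant, speed of light (SI)
M, m = 9.0e52, 1.0          # placeholder masses (arbitrary values)

t = G * M / c**3            # Riofrio's relation rearranged for t
R = c * t                   # spacetime distance R = ct

lhs = m * c**2              # rest-mass energy of the small mass m
rhs = G * M * m / R         # gravitational PE of m w.r.t. mass M at R
print(lhs, rhs)             # the two sides agree, confirming the algebra
```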
Update (3 March 2008): in Figure 4, John Hunter states that the basic result from his equivalence, G = R(c^2)/M, provides an explanation for the flatness problem. This is something I also obtained from the graviton interaction mechanism, as you can see in the detailed post here. Because Hunter’s way of writing the formula is so simplified (in certain ways it is oversimplified), it is perhaps easier to grasp from it why the universe is so flat at the greatest distances (earliest times): gravitation was weaker then. The gravitation coupling G increases in direct proportion to the age of the universe, G = R(c^2)/M = ct(c^2)/M. This formula is oversimplified because it ignores various subtle but important physical effects, like the variation in the density of the universe with distance in spacetime (the density tends towards infinity at the greatest distances and hence earliest times after the big bang), and the effect of graviton exchange in an expanding universe, which quenches certain aspects of gravitation over immense distances because gravitons received as a result of exchange between two receding masses arrive at each mass in a redshifted state, weakening that interaction.
Because the effective value of G at early times after the big bang is so small from our spacetime perspective, we see small gravitational effects: the universe looks very flat, i.e., gravity was so weak it was unable to clump matter very much at 400,000 years after the big bang, which is the time of our information on flatness, i.e. the time that the closely studied cosmic background radiation was emitted. The mainstream ad hoc explanation for this kind of observation is a non-falsifiable (endlessly adjustable) idea from Alan Guth that the universe expanded or ‘inflated’ at a speed faster than light for a small fraction of a second, which would have allowed the limited total mass to get very far dispersed very quickly, which would have reduced the curvature of the universe and suppressed the effects of gravitation at subsequent times in the early universe.
Hence, the ‘peer’ review mainstream has blocked the proper publication of Hunter’s research just as it blocks mine, because non-falsifiable mainstream ideas are in place and, as Dr Stanley Brown, editor of PRL, emailed me in January 2004, checkable ‘alternatives’ to uncheckable mainstream speculation are unpublishable in mainstream journals due to the attitude of ‘peer’ reviewers.
On the topic of variations in G, Edward Teller falsely claimed in a 1948 paper that if G had varied as Dirac suggested a few years earlier, then the gravitationally caused compression in the early universe and in stars including the sun would vary with time, affecting fusion rates dramatically, because fusion is highly sensitive to the amount of compression (which he knew from his Los Alamos studies on the difficulty of producing a hydrogen bomb at that time). However, the Yang-Mills mechanism of electromagnetism shows that the electromagnetic coupling will vary with time in the same way that gravitation does. (Electromagnetism’s role in fusion is the Coulomb repulsion of protons: the stronger electromagnetism is, the less fusion you get, because protons are repelled more strongly and so approach less closely, and the short-ranged strong force which causes protons to fuse together ends up causing less fusion.)
This invalidates Teller’s theory, because if you for example halve the value of G (making fusion more difficult by reducing the compression of protons long ago), you simultaneously get an electromagnetic coupling charge which is halved, and the effect of the latter is to increase fusion by reducing the Coulomb barrier which protons need to overcome in order to fuse. The two effects (reduced G, which tends to reduce fusion by reducing compression; and reduced Coulomb charge, which allows protons to approach closer before being repelled, and therefore increases fusion) offset one another. Dirac wrongly suggested that G falls with time, because he believed that at early times G was as strong as electromagnetism and numerically ‘unified’; actually, all attempts to explain the universe by claiming that the fundamental forces including gravity are the same at a particular very early time/high energy are physically flawed and violate the conservation of energy. The whole reason why the strong force charge strength falls at higher energies is that it is caused by pair-production of virtual particles, including virtual quarks accompanied by virtual gluons. This pair-production is a result of the electromagnetic charge, which increases at higher energy.
The electromagnetic force has been proved to cause pair-production (this is a major source of shielding of gamma rays above 1.022 MeV by nuclei with a high Coulomb charge like lead, and it has been very carefully studied for eighty years now using all the gear of particle physics, from the obsolete Wilson cloud chamber onwards), which produces the virtual particles, including mesons and gluons, which mediate the short-range interactions. By the principle of conservation of mass-energy, you can work out and predict exactly how this works. Electromagnetic charge increases with collision energy (and thus decreasing distance between particles) if the collision energy exceeds that which takes the particles close enough so that their electric field strengths exceed 1.3 x 10^18 v/m, Schwinger’s threshold for pair-production in the vacuum. Where the electric field exceeds this value, virtual fermions form a dielectric medium of polarized dipoles which on average tend to align to oppose the electric charge of the real particle core, reducing the value of the latter as observed from a large distance. The energy density of an electromagnetic field is precisely known from electromagnetism. Integrating it over successive radial distances, r + dr, where the charge is varying, tells you how much energy is being shielded by the polarized vacuum and is becoming available to power short-ranged nuclear forces at any particular distance. Conservation of mass-energy tells you, therefore, exactly how much energy is being used to create pairs of polarized charges at any given distance from a particle’s core. The textbook equations of quantum field theory don’t investigate this obvious physical approach to explaining the different forces; they instead simply find a logarithmic variation of effective charge as a function of energy between two cutoffs for the Standard Model forces.
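Schwinger’s threshold quoted above can be checked directly from its standard formula, E_c = m²c³/(eħ):

```python
# Verify Schwinger's pair-production threshold field, E_c = m^2 c^3 / (e*hbar),
# quoted in the text as ~1.3 x 10^18 V/m.
M_E   = 9.1093837e-31   # electron mass, kg
C     = 2.99792458e8    # speed of light, m/s
E_CHG = 1.60217663e-19  # elementary charge, C
HBAR  = 1.05457182e-34  # reduced Planck constant, J*s

E_crit = M_E**2 * C**3 / (E_CHG * HBAR)
print(f"Schwinger critical field: {E_crit:.2e} V/m")  # ~1.3e18 V/m
```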
The lower energy or ‘infrared’ cutoff must physically correspond to Schwinger’s pair-production threshold electric field strength, and the upper energy or ‘ultraviolet’ cutoff physically corresponds to some kind of ‘grain size’ in the Dirac sea, or (far more likely) to a minimum physical distance scale that is required for pair-production charges to become polarized before they annihilate back into bosonic field quanta. When two particles get very close, the strong nuclear charge decreases because there is less shielding of the electromagnetic charge between them, and therefore less electromagnetic energy is being transformed by pair production into the pions and gluons which mediate the strong force. This is the physics, and it’s a checkable prediction because you can calculate the details to see if they work out if you have the time.
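For comparison, the mainstream textbook form of that logarithmic variation is the one-loop running of the QED coupling. The sketch below uses the standard single-electron-loop formula (a textbook result quoted here for illustration, with the electron-mass scale playing the role of the infrared cutoff discussed above); it shows how slowly the effective charge grows with collision energy:

```python
import math

ALPHA_IR = 1.0 / 137.036  # low-energy fine-structure constant (IR cutoff value)

def alpha_running(q_gev, q0_gev=0.000511):
    """One-loop QED running of the effective charge above the IR cutoff.

    Single electron loop only; the full result sums all charged fermions.
    """
    log_term = math.log(q_gev**2 / q0_gev**2)
    return ALPHA_IR / (1.0 - (ALPHA_IR / (3.0 * math.pi)) * log_term)

# The effective charge rises only logarithmically with collision energy:
for q in (0.000511, 1.0, 91.19):
    print(f"Q = {q:8.4f} GeV  ->  1/alpha = {1.0 / alpha_running(q):7.2f}")
```

At the Z mass (91.19 GeV) this single-loop estimate gives 1/alpha of about 134.5; including all charged fermion loops brings it down to the measured value of roughly 128.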
Mainstream string gatherings have the allure of attending a rock concert, i.e. social entertainment, while also maintaining some features of religious dogma and political inertia: reluctance to listen to or investigate new ideas, except new ideas building on mainstream speculations such as string theory. When ‘peer’ reviewers are mainstream faith-based physicists, they are not the ‘peers’ of those who build on empirically confirmed facts; they are in competition with them. Expecting such ‘peers’ to behave ethically (i.e. recommend the publication of facts that don’t fit in with uncheckable mainstream Party speculation) is as irrational and misguided as expecting honest and decent behaviour from politicians or sellers of religious dogmas: they’re bored and repelled by physics of the down-to-earth, fact-based, checkable type.
‘A Party member … is supposed to live in a continuous frenzy of hatred of foreign enemies and internal traitors … The discontents produced by his bare, unsatisfying life are deliberately turned outwards and dissipated by such devices as the Two Minutes Hate, and the speculations which might possibly induce a skeptical or rebellious attitude are killed in advance by his early acquired inner discipline … called, in Newspeak, crimestop. Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – Orwell, 1984.
Update (25 March 2008):
Maybe part of the problem here is that most people (including Catt) don’t grasp the fault in Maxwell’s electromagnetism:
Fig. 5: Maxwell’s error in electromagnetic theory and how it physically maps classical electromagnetism on to quantum field theory.
VITAL POINTS TO NOTE:
1. Maxwell and Heaviside claimed that a vacuum “displacement current” of polarized virtual charges occurs, with the process of polarization being a “displacement current” which closes the open circuit between the two conductors before the logic step has completed the full circuit (i.e., while the logic step is moving along the circuit at light velocity for the insulator which must be presumed to be a “dielectric”, even if a vacuum).
2. Julian Schwinger worked out that the quantum field theory vacuum only undergoes any polarization in electric fields above 1.3*10^18 v/m. Such fields don’t occur in computers, but they still work!
3. In each conductor, as the energy step passes a given location, the relatively loosely bound (conduction band) electrons get accelerated from a mean of zero to their full mean drift speed. This causes them to radiate and swap EM energy!!! This is the physical mechanism for what happens, replacing Maxwell’s mistaken “displacement current” with tested physics.
As Fig. 5 indicates, the electrons accelerate in opposite directions along each of the two conductors, so each conductor radiates a waveform of EM radiation which is the exact inversion of that from the other conductor. Hence, at distances from the transmission line, there is perfect cancellation by interference, cancelling any detectable signal! Thus, no net energy loss occurs due to the radiation. The sole effect of this radiation (ignored by Catt, and leading to a serious argument between us, even after I wrote an Electronics World cover story about Catt’s best invention) is that it is exchanged between the two conductors. This is the physical mechanism that does the same job as Maxwell’s false pet theory of “displacement current”, which doesn’t exist: as Nobel Laureate Schwinger proved, the quantum field theory vacuum doesn’t polarize in electric fields below 1.3*10^18 volts/metre, and you don’t get that kind of field strength in radio waves or computers, where field strengths are very much lower.
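The cancellation described here is simple superposition of a waveform with its inversion. A toy numerical illustration (a deliberate simplification: amplitude fall-off with distance is ignored and each conductor is treated as a point source, so this is not a full antenna calculation):

```python
import math

def superposed_field(t, d1, d2, c=1.0):
    """Sum of the fields from two conductors radiating inverted waveforms.

    d1, d2 are the distances from each conductor to the observation point;
    c is a normalized wave speed.
    """
    f1 = math.sin(t - d1 / c)    # waveform radiated by conductor 1
    f2 = -math.sin(t - d2 / c)   # exact inversion, radiated by conductor 2
    return f1 + f2

# Far from the line the path difference is negligible (d1 == d2): perfect
# cancellation, hence no detectable signal and no net radiative energy loss.
print(superposed_field(0.7, 1000.0, 1000.0))

# Between the closely spaced conductors the cancellation is incomplete,
# which is where the energy exchange between the conductors takes place.
print(superposed_field(0.7, 0.0, 0.5))
```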
(I’ve uploaded some vital background information on Catt’s vital background work to this blog here, here, here, here, here, here, here and here, since Wikipedia entries are being vandalised and some of Catt’s web pages are now disappearing owing to his long hospitalization.)
This changes the physical understanding of Maxwell’s equations: from it, we know that wherever Maxwell claimed “displacement currents” to exist, exchange radiation is occurring which produces the same forces and energy transfers but tells us about the previously hidden quantum field theory mechanism behind the quantum electromagnetic interaction.
Surely the quantum gravitational charge, mass, can be expected to behave as a first approximation like electric charge when accelerated.
Whereas the acceleration of electric charge produces an asymmetry in the field (which itself is mediated by gauge boson exchange radiation) that ripples outward as an observable transverse EM wave (mediated by numerous gauge bosons or field quanta), with gravity what you are doing is accelerating a mass (a unit gravitational charge), which introduces an asymmetry into the graviton exchange mechanism, that propagates as a gravitational wave (mediated by numerous gravitons, or field quanta).
Why introduce additional complexity? It looks as if the mechanism for gravitational waves is a perfect analogy to electromagnetic waves, and that the relative weakness of the gravitational waves is simply due to the relative weakness of the gravitational coupling, as compared to electromagnetism.
Update (31 March 2008):
Fig. 6: Simplified depiction of the coupling scheme for mass to be given to Standard Model particles by a separate field, which is the man-in-the-middle between graviton interactions and electromagnetic interactions. A more detailed analysis of the model, with a couple of mathematical variations and some predictions of masses for different leptons and hadrons, is given in the earlier post here, and there are updates in other recent posts on this blog. In the case of quarks, the cores are so close together that they share the same ‘veil’ of polarized vacuum, so N quarks in close proximity (asymptotic freedom inside hadrons) boost the electric charge shielding factor by a factor N. So if you have three quarks of bare charge j each, with the normal vacuum polarization shielding, the total charge is not jN but jN/N, where the N in the denominator is there to account for the increased vacuum shielding. Obviously jN/N = j, so 3 electron-charge quarks in close proximity will only exhibit the combined charge of 1 electron, as seen at a distance beyond 33 fm from the core. Hence, in such a case, the apparent electric charge contribution per quark is only 1/N = 1/3, which is exactly what happens in the Omega Minus particle (which has 3 strange quarks of apparent charge -1/3 each, giving the Omega Minus a total apparent electric charge, as observed beyond 33 fm, of -1 unit). More impressively, this model predicts the masses of all leptons and hadrons, and also makes falsifiable predictions about the variation in coupling constants as a function of energy, which results from the conversion of electromagnetic field energy into short-range nuclear force field quanta as a result of pair-production of particles, including weak gauge bosons, virtual quarks and gluons, in the electromagnetic field at high energy (short distances from the particle core).
The energy lost from the electromagnetic field, due to vacuum polarization opposing the electric charge core, gets converted into short-range nuclear force fields. From the example of the Omega Minus particle, we can see that the electric charge per quark observable at long ranges is reduced from 1 to 1/3 unit due to the close proximity of three similarly charged quarks, as compared to a single particle core surrounded by polarized vacuum, i.e. a lepton (the Omega Minus is a unique, very simple situation; usually things are far more complicated, because hadrons generally contain pairs or triplets of quarks of different flavour). Hence, 2/3rds of the electric field energy that occurs when only one particle is alone in a polarized vacuum (i.e. a lepton) is used to generate short-ranged weak and strong nuclear force fields when three such particles are closely confined.
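The charge-sharing arithmetic from Fig. 6’s caption (a feature of this blog’s model, not of textbook QCD) can be spelled out in a few lines:

```python
def apparent_charge_per_quark(bare_charge, n_quarks):
    """Shared polarized-vacuum shielding, per the model in the text: N cores
    in one 'veil' multiply the shielding by N, so the combined long-range
    charge is bare_charge*N/N and each quark contributes bare_charge/N."""
    combined = bare_charge * n_quarks / n_quarks   # jN/N = j
    return combined / n_quarks

# Omega Minus: three strange quarks, each with an electron-sized bare charge.
per_quark = apparent_charge_per_quark(-1.0, 3)
print(per_quark)        # -1/3 per quark as seen beyond ~33 fm
print(3 * per_quark)    # total observed charge: -1 unit

# Fraction of a lone lepton's electromagnetic field energy diverted into
# short-range nuclear force fields in this picture: 1 - 1/3 = 2/3.
print(1.0 - abs(per_quark))
```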
As discussed in earlier posts, the similarity of leptons and quarks has been known since 1964, when it was discovered by the Italian physicist Nicola Cabibbo: the rates of lepton interactions are identical to those of quarks to within just 4%, or one part in 25. The weak force when acting on quarks within one generation of quarks is identical to within 1 part in 25 of that when acting on leptons (although if the interaction is between two quarks of different generations, the interaction is weaker by a factor of 25). This similarity of quarks and leptons is called ‘universality’. Cabibbo brilliantly suggested that the slight (4%) difference between the action of the weak force on leptons and quarks is due to the fact that a lepton has only one way to decay, whereas a quark has two possible decay routes, with relative probabilities of 1/25 and 24/25, the sum being of course (1/25) + (24/25) = 1 (the same as that for a lepton). But because only the one quark decay route or the other (1/25 or 24/25) is seen in an experiment, the effective rates of quark interactions are lower than those for leptons. If the weak force involves an interaction between just one generation of quarks, it is 24/25 or 96% as strong as between leptons, but if it involves two generations of quarks, it is only 1/25th as strong as when mediating a similar interaction for leptons.
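Cabibbo’s probability bookkeeping is easy to verify. Expressed in the standard way via the Cabibbo angle, the cross-generation probability is sin²θ_c:

```python
import math

# Quark decay routes per the text: probabilities 1/25 (cross-generation)
# and 24/25 (same generation) must sum to the lepton's single route of 1.
cross_generation = 1.0 / 25.0
same_generation = 24.0 / 25.0
print(cross_generation + same_generation)   # sums to 1 (up to float rounding)

# Equivalent statement with the Cabibbo angle: sin^2 + cos^2 = 1.
theta_c = math.asin(math.sqrt(cross_generation))
print(round(math.degrees(theta_c), 1))      # about 11.5 degrees
```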
This is very strong evidence that quarks and leptons are fundamentally the same thing, just in a different disguise due to the way they are grouped into pairs or triplets and ‘dressed’ by the surrounding vacuum polarization (electric charge shielding effects, and the use of energy to mediate short-range nuclear forces).
A quick but vital update about my research (particularly updating the confusion in some of the comments to this blog post): I’ve obtained the physical understanding which was missing from the QFT textbooks I’ve been studying by Weinberg, Ryder and Zee, from the 2007 edition of Professor Frank Close’s nicely written little book The Cosmic Onion, Chapter 8, ‘The Electroweak Force’.
Close writes that the field quantum of U(1) in the Standard Model is not the photon, but a B_{0} field quantum.
SU(2) gives rise to field quanta W_{+}, W_{−} and W_{0}. The photon and the Z_{0} both result from the Weinberg ‘mixing’ of the electrically neutral W_{0} from SU(2) with the electrically neutral B_{0} from U(1).
This is precisely the information I was looking for, which was not clearly stated in the QFT textbooks. It enables me to get a physical feel for how the mathematics works.
The Weinberg mixing angle determines how W_{0} from SU(2) and B_{0} from U(1) mix together to yield the photon (textbook electromagnetic field quanta) and the Z_{0} massive neutral weak gauge boson.
If the Weinberg mixing angle were zero, then W_{0} = Z_{0} and B_{0} = electromagnetic photon. However, this simple scheme fails (although this failure is not made clear in any of the QFT textbooks I’ve read, which have obfuscated it instead), and an ad hoc or fudged mixing angle of about 29 degrees (the angle between the Z_{0} and W_{0} phase vectors, corresponding to sin²θ_W ≈ 0.23) is required.
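To make the mixing concrete, here is a minimal numerical sketch of the textbook rotation between (B_0, W_0) and (photon, Z_0); the angle used is an illustrative low-energy value, not a fitted constant:

```python
import math

# Sketch of Weinberg mixing: the observed photon and Z_0 are orthogonal
# combinations of the unobserved B_0 (from U(1)) and W_0 (from SU(2)):
#   photon = B_0*cos(theta_W) + W_0*sin(theta_W)
#   Z_0    = W_0*cos(theta_W) - B_0*sin(theta_W)
def mix(b0, w0, theta):
    photon = b0 * math.cos(theta) + w0 * math.sin(theta)
    z0 = w0 * math.cos(theta) - b0 * math.sin(theta)
    return photon, z0

# A zero mixing angle would give photon = B_0 and Z_0 = W_0 exactly:
photon, z0 = mix(1.0, 0.0, 0.0)
assert abs(photon - 1.0) < 1e-12 and abs(z0) < 1e-12

# With a non-zero angle (~29 degrees, sin^2 ~ 0.23), a pure B_0 state
# contributes to BOTH observed bosons:
photon, z0 = mix(1.0, 0.0, math.radians(29.0))
print(photon, z0)  # cos and -sin components; photon^2 + z0^2 = 1
```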
This mixing angle is physically unexplained in the Standard Model; it’s just an epicycle needed to make the model represent the experimentally known facts well enough to predict other things accurately (like Ptolemaic epicycles), just as force coupling constants and particle masses have to be put in by hand. Because neutrinos also mix as they propagate (changing flavour), there are mixing parameters there too. The Standard Model has 19 parameters which have to be put in by hand. My objective is to get away from such fiddled factors and to get a theory which does more while requiring less speculative assertion. In the model I am proposing, the total number of parameters that needs to be supplied is far smaller than the Standard Model requires, because the model predicts the masses of leptons and quarks, and the force coupling parameters.
I haven’t yet had time to analyse the Weinberg mixing. My first reaction is that it is a failure of the Standard Model that you need to mix up the U(1) field quanta with the usually massive, electrically neutral weak field quanta from SU(2) in order to arrive at empirically useful descriptions of the electromagnetic and weak field quanta. This is definitely being covered up by the textbooks, which obfuscate it terribly. In fact, the mixing angle is an epicycle or fudge factor which is needed to force an inaccurate physical description of electromagnetism to work. It has no natural explanation.
However, it is vital to understand what the existing theory is in order to get a complete grasp on what needs to go in its place. This is getting very interesting. Unfortunately, I will have no time for several weeks to work on this further.
Note that Professor Close was kind enough to email me back this evening within four hours of my emailing him about an error I spotted in his book (however, if I had sent a longer email with a paper, or a request for him to spend valuable time on my pet ideas, it would predictably have been a very different story):
From: Frank Close
To: Nigel Cook
Sent: Monday, March 31, 2008 10:48 PM
Subject: RE: The Cosmic Onion, 2007 ed., Fig 11.3 page 156
yes. well spotted. pythagaros requires 1+24 =25 (all over 25)
—–Original Message—–
From: Nigel Cook
Sent: Mon 3/31/2008 7:06 PM
To: Frank Close
Subject: The Cosmic Onion, 2007 ed., Fig 11.3 page 156
Dear Professor Close,
Should the small square in Fig 11.3 on p 156 of The Cosmic Onion (2007 ed.) be labelled 1/25 rather than 1/5? It seems to be a printing error.
Thanks for your clear discussion of universality in that book which I only discovered very recently. I’ve only done undergraduate physics at university, and am interested in quantum field theory, so it’s great to get some semipopular discussions of basic stuff to supplement the more mathematical works by Weinberg, Ryder, Zee, et al.
Kind regards,
Nige Cook
http://quantumfieldtheory.org/
Update (23 April 2008): Below is the text of a comment, summarising what is missing from existing quantum mechanics and quantum field theory, in the moderation queue to:
http://egregium.wordpress.com/2008/03/30/legendarylecturesonqftbysidneycoleman/
Geroch’s Special Topics in Particle Physics are very concise and begin in a simple way, but soon become extremely technical.
Coleman’s notes on QFT (as written up by Tong) are slightly longer and more detailed, and in some ways address the key questions I have with QFT a lot better.
As a latecomer to QFT (I’ve only recently read Zee, Weinberg vols. I and II, and Ryder), it’s amazing that the structure of the theory is entirely based on classical field equations for the Lagrangian. I had read (before seeing the maths of QFT) that in principle the path integral can be used to model the motion of orbital electrons. Feynman gives an illustration of this in his book QED: according to Feynman, the random exchange of discrete Coulomb field quanta between electrons and the proton causes the chaotic motion and indeterminacy: fields are quantized, so they aren’t classical.
‘… when seen on a large scale, they [electrons, photons, etc.] travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from quantum interactions, each represented by a Feynman diagram] becomes very important, and we have to sum the arrows [amplitudes] to predict where an electron is likely to be.’
- R. P. Feynman, QED, Penguin, 1990, pp. 84-5.
This implies that the physical difference between Bohr’s atomic model and quantum mechanics is that the Coulomb field should be quantized properly. If you derived the Bohr model using a Coulomb force equation that correctly modelled the fluctuations in the electric attractive force on small scales (in atoms), the key problem with Bohr’s incorrect version of quantum mechanics would be solved.
Instead of that, quantum mechanics leaves the classical Coulomb potential intact, and adopts a statistical wave equation with which to introduce indeterminacy. This leads to a lot of issues. QFT follows quantum mechanics in this model, using classical Maxwell equations for things like the Coulomb potential.
QFT only quantizes the field indirectly, by integrating the Maxwell Lagrangian over space. The integral is then evaluated as a perturbative expansion, with each term in the infinite series representing one Feynman interaction diagram (i.e., one category of interaction of the field quanta which contributes to the force). Since more complex interactions produce smaller forces, the expansion is convergent, and a few terms (the simplest Feynman diagrams in the infinite series) represent most of the actual interactions.
This seems to be the major selling point of QFT. However, the very fact that the perturbative expansion is convergent and only a few Feynman diagrams contribute significantly to the observed forces, suggests that nature is for practical purposes as simple as those few Feynman diagrams. For example, the first perturbative correction (corresponding to the Feynman diagram where there is a loop for a field quantum travelling from a magnet to the electron) for the magnetic moment of the electron, which Schwinger calculated in 1948, only increases the magnetic moment of the electron by 0.116%. That’s trivial! The lesson here is surely that nature is basically very simple. So why introduce all the complex mathematics?
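The 0.116% figure is just α/(2π), Schwinger's famous one-loop result; a one-line check:

```python
import math

alpha = 1 / 137.035999  # fine-structure constant at low energy

# Schwinger (1948): the electron's magnetic moment is g/2 = 1 + alpha/(2*pi) + ...
first_loop_correction = alpha / (2 * math.pi)
print(f"{100 * first_loop_correction:.3f}%")  # 0.116%
```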
The first thing to get a handle on is that Schroedinger’s non-relativistic equation and its relativistic variants, such as the Klein-Gordon and Dirac equations, are not causal models but statistical models. If they were properly representing the dynamics, you wouldn’t have a wave equation in there; you wouldn’t need it. By taking the classical model and correcting the Coulomb field equation to contain indeterminacy on small scales (caused by random field quanta interactions, rather than the classical continuum field represented by Maxwell’s equations), the chaotic interactions of field quanta would produce the indeterminacy which leads to the statistically wavy motions of orbital electrons.
So what has gone wrong in quantum mechanics and quantum field theory – assuming that Feynman’s analysis in QED suggests the correct physics – is that the field equations have never been properly quantized to simulate the chaotic, random interactions of field quanta with charges, producing forces. On the atomic scale, the chaos of field quanta in the Coulomb force results in all the wave effects and indeterminacy which are usually and falsely attributed to mathematics via the Schroedinger (plus Klein-Gordon and Dirac) time-dependent and time-independent equations.
Of course a wave equation such as Schroedinger’s is accurate statistically. If you look at sound waves in air, they are the statistical resultant of a massive number of very small chaotic interactions of air molecules hitting each other at average speeds of around 500 m/s. That chaos gives rise to classical sound waves on large scales. Similarly, the time-averaged motion of the electron in an atom is well modelled by the Schroedinger time-independent wave equation. It’s not really a surprise. Mathematically, it’s even a correct model in the statistical limit that you average the position of the electron over a very long period of time, and the equation gives you the correct probability distribution for finding it anywhere.
What’s wrong is that quantum mechanics falsely claims that there is no better physical model. From Feynman’s argument, there is: the path integral! The whole physical reason why the electron suffers indeterminacy is the chaos of the field quanta, which gives a non-classical Coulomb potential. Instead of fixing that by making the Coulomb potential properly quantized (random on small scales of spacetime), the founders of quantum mechanics and quantum field theory ignored it completely, continued to use a classical smooth Coulomb potential, and introduced indeterminacy by employing a wave equation for the motion of the particle.
The resulting problems with wavefunction collapse are exactly analogous to trying to apply a sound wave equation to a single air molecule and ending up with all sorts of crazy interpretations of how the sound wave equation can tally with a single air molecule. Anyone can see that the problem here is that the sound wave equation is not suited to model an air molecule, but only a very large number of air molecules, so that the averaged motion corresponds statistically to a wave.
For some reason, the analogy between air molecules in classical pressure theory, and field quanta in quantum field theory, has been missed by the experts. Where you have a large rate of air molecule interactions, the result can be statistically modelled by classical wave equations and by the assumption that the resulting pressure is constant. But on small scales (for example, when dealing with a pollen grain), the motion becomes chaotic and determinacy vanishes, because the impacts occur from random directions after random intervals of time. By analogy, the randomness of field quanta exchanges between atomic electrons and the proton in an atom creates indeterminacy. You can’t predict where the electron will be, because you can’t predict when individual field quanta will interact with the electron.
For the life of me, I can’t see why the experts don’t see this, and move away from using classical calculus (Maxwell equation Lagrangian) in QFT, towards a stochastic simulation (a Monte Carlo computer code, with random numbers – weighted in frequency according to the statistical distribution of occurrence – representing parameters for individual quantum interaction events). This would properly simulate quantum mechanics and quantum field theory, without the problems you get from using continuously variable Lagrangians for quantized fields.
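As a purely illustrative toy (not a real QFT calculation; every parameter here is an arbitrary assumption), a few lines of Monte Carlo show the qualitative point: replace a smooth attractive force with discrete random impulses, and each individual path is chaotic while the long-run statistics stay smooth:

```python
import math
import random

random.seed(1)  # reproducible toy run

def simulate(steps=50_000, k=0.01, damping=0.999, kick=0.05, hit_rate=0.1):
    """1-D toy 'electron': smooth restoring force (stand-in for the average
    Coulomb attraction) plus discrete random 'field quantum' impulses."""
    x, v = 1.0, 0.0
    positions = []
    for _ in range(steps):
        v = damping * v - k * x              # smooth average force
        if random.random() < hit_rate:       # a quantum arrives at a random time...
            v += random.choice([-kick, kick])  # ...from a random direction
        x += v
        positions.append(x)
    return positions

pos = simulate()
mean = sum(pos) / len(pos)
rms = math.sqrt(sum(p * p for p in pos) / len(pos))
# Individual positions jump chaotically, but the time-averaged statistics
# (mean near zero, finite rms spread) are smooth and stable.
print(f"mean ~ {mean:.2f}, rms ~ {rms:.2f}")
```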
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
- R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
copy of a comment:
http://motls.blogspot.com/2008/04/conspiracytheoriesaboutmagnets.html
http://www.haloscan.com/comments/lumidek/7922516010231994065/?a=32019#1028671
“Nope, gravitons (or gauge bosons) are not hidden variables. Gravitons (and gauge bosons) are actual measurable particles. Hidden variables are coordinates of some invisible and separately undetectable objects (particles) that should, according to the (wrong) theories of hidden variables, decide about the particular outcome of quantum experiments even though quantum mechanics can only predict the probabilities, not the particular outcomes – not even in principle. …
“As explained above, it is not true that Feynman has ever complained that Taylor expansions contain infinitely many terms because every person with an elementary qualification in maths – and Feynman belonged to that group since he was a kid – knows that virtually all functions have infinitely many terms when Taylor expanded. There is nothing wrong about it whatsoever.” – Lubos Motl
Thank you for saying that gauge bosons like gravitons are not hidden variables but real particles. Now you should be treating them like real particles, and making checkable calculations to prove it, instead of ignoring virtual particles and the chaotic (effects) they produce on individual electrons in an atom.
If you want to understand the simplest real interaction, your path integral is expanded into an infinite series of terms, each with a corresponding, distinct Feynman diagram. That corresponds to requiring an infinite number of gauge boson interactions even for events occurring in no matter how small an interval of time. This is the physical problem Feynman had. You have confused this problem (infinitely many Feynman diagram interactions in a tiny space and tiny time) with the maths of an infinite perturbative expansion. Of course in maths you can have an infinite number of terms in a Taylor series and it is not wrong; Feynman was objecting that it is wrong to have an infinite number of Feynman diagram interactions occurring between two electrons in any short period of time!
The gauge interaction equation which is written as the Lagrangian is an empirical equation, e.g., based on Maxwell’s equation treating the motion of charge as a current. Because it is an empirical equation (formulated by Maxwell from experimental observations by Faraday, Ampere, Gauss, et al.) it is a net result already containing the effects of all virtual particles. E.g., it already includes the observable effects of the Feynman diagram terms in the perturbative expansion.
When you place that empirical Lagrangian into the action for a path integral of all possible interactions and represent that integral as a perturbative expansion, you can work out the contributions of the different Feynman diagram interactions to the Lagrangian.
This is the physical way that the path integral and perturbative expansion converts classical empirical Maxwell equations into a quantized field containing discrete interactions, each represented pictorially by Feynman diagrams and mathematically by a term in the perturbative expansion which corresponds to the path integral.
The failure of the use of calculus in the path integral is that although you quantize the field into an infinite number of different Feynman diagrams (interactions or histories), you are not quantizing individual interactions, only categories of interactions which correspond to each Feynman diagram.
E.g., the interactions depicted in the simplest Feynman diagrams (which correspond to the first terms in the perturbative expansion), will each represent very common interactions of virtual particles if the perturbative expansion converges quickly (as is the case for selfinteraction corrections for leptons).
The relative contribution of each term in a perturbative expansion is dependent upon how frequently that type of interaction actually occurs. Simple interactions occur more frequently than complex interactions, if the perturbative expansion converges rapidly.
As a good approximation (to two significant figures) in low-energy lepton interactions, you can ignore all the Feynman diagram loops (self-interaction correction terms) in the perturbative expansion, because only the simplest (direct) interaction, of a field quantum being exchanged between two charges, is important to an accuracy of two significant figures. The first loop correction for the magnetic moment of a lepton only increases the magnetic moment by 0.116%. It’s trivial.
Feynman diagrams with loops are important at three or more significant figures when calculating lepton interactions at low energy, but they are still only about 0.1% contributions! Hence 99.9% of low-energy lepton physics is non-loop Feynman diagram contributions: the straight exchange of field quanta between charges. This is the priority in working on quantum gravity.
Feynman diagram loops (represented by the terms in a perturbative expansion) are only really big contributors for quarks or for lepton interactions at high energies.
Schwinger showed that there is no pair production in the vacuum at electromagnetic field strengths less than the Schwinger threshold, 1.3*10^18 volts/metre. Hence, there are no loops when gauge bosons are exchanged in the weaker fields which exist at more than a few femtometres from an electron or a proton.
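The Schwinger threshold quoted here can be checked directly from the standard formula E_c = m_e²c³/(eħ):

```python
# Schwinger critical field for vacuum pair production, E_c = m_e^2 c^3 / (e * hbar),
# evaluated with CODATA constants (SI units):
m_e = 9.1093837e-31      # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"E_c ~ {E_c:.2e} V/m")  # ~1.32e18 V/m, the 1.3*10^18 figure above
```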
Loops (for instance the brief conversion of a field boson into a pair of fermions that start to polarize in the field, disturbing the field before they attract and annihilate back into a field boson) only occur at very short distances from charges, where the field is strong. For the purpose of making predictions of gravitation at low energy over large distances from charges, there are no loops in the vacuum and everything is very simple.
A simulation for exchange of gauge bosons could therefore find the correct theory of quantum gravity in the low-energy limit (which corresponds to general relativity), by ignoring loop terms in the perturbative expansion. This will make the interactions direct and simple, allowing checkable predictions.
This is the simplicity Feynman was referring to, I think.
“In quantum gravity, the entropy in a volume – the logarithm of the dimension of the required effective Hilbert space – is bounded by the area of the region and is thus finite. In this sense, every finite region does behave as a computer with finitely many registers. Except that the number of registers is still huge because it equals the area over four times the Planck area, and the Planck area in the denominator is extraordinarily tiny.”
The Planck area is a ‘not even wrong’ conjecture, because the Planck scale has never been observed; there is no evidence for that particular scale. Planck could equally well have written the dimensionally constructed ‘fundamental’ length not as
(hbar * G/c^3)^{1/2} = 1.6*10^{−35} metre
(which is the Planck length) but instead as the smaller, more fundamental size for the electron core,
2GM/c^2 = 1.353*10^{−57} metre. ( http://en.wikipedia.org/wiki/Black_hole_electron )
Wikipedia defends the Planck scale as follows:
‘The Planck length is deemed “natural” because it can be defined from three fundamental physical constants: the speed of light, Planck’s constant, and the gravitational constant.’ – http://en.wikipedia.org/wiki/Planck_length
But it’s far more “natural” to choose the three constants to be the speed of light, the electron mass and the gravitational constant, because then the length you get is the size of a black hole of electron mass! This has physical meaning because it suggests the physics of the core of a fermion is that of a black hole. It is also a smaller, and therefore more fundamental, unit of length than the Planck scale.
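The two candidate length scales are easy to compare numerically (a sketch using standard constants):

```python
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg

planck_length = (hbar * G / c**3) ** 0.5   # dimensional-analysis scale
r_electron_bh = 2 * G * m_e / c**2         # Schwarzschild radius for electron mass

print(f"Planck length:                 {planck_length:.2e} m")  # ~1.6e-35 m
print(f"Electron Schwarzschild radius: {r_electron_bh:.2e} m")  # ~1.35e-57 m
```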
[End of comment to Lubos’ blog]
*******************************
Here’s a brief comment about the vague concept of a ‘zero point field’, which unhelpfully ignores the differences between fundamental forces and muddles together gravitational and electromagnetic field-quanta interactions. There is not just one field acting on ground-state electrons: gravity and electromagnetism need different field quanta, to explain why one is always attractive while the other attracts only unlike charges and repels like charges, not to mention the factor of about 10^40 difference in their field strengths. Traditional calculations of that ‘field’ give a massive energy density to the vacuum, far higher than that observed with respect to the small positive cosmological constant in general relativity. However, two separate force fields are there being confused. The estimates of the ‘zero point field’ which are derived from electromagnetic phenomena, such as electrons in the ground state of hydrogen being in an equilibrium of emission and reception of field quanta, have nothing to do with the graviton exchange that causes the cosmic expansion (Figure 1 above has the mechanism for that). There is some material about traditional ‘zero point field’ philosophy on Wikipedia.
Added on 8 May 2008: Extract from an email to SM:
If you come up with a really original idea, nobody exists to act as a “peer reviewer” because you don’t have any peers in that new discipline.
E.g., Hubble’s law for recession is simply v = HR: recession velocity is directly proportional to distance R. By calculus, acceleration a = dv/dt = d(HR)/dt = H*(dR/dt) + R*(dH/dt); taking H to be constant in time, so that dH/dt = 0, this gives a = Hv + 0 = H(HR) = RH^2.
This proof that a = RH^2 is very simple science, but is nevertheless too “out of the box” to be taken seriously by cranks like the string theorists who act as “peer-reviewers” (better named “rival-reviewers” or “mainstream cranks”)!
This result tells you that expansion of the universe is the acceleration of the universe, and it gives you a quantitative prediction of the acceleration of the universe using just Hubble’s law.
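A minimal numerical sketch of this claim (assuming, as the derivation above does, that H is constant in time; the H value is an illustrative modern estimate of ~70 km/s/Mpc):

```python
# a = R*H^2; at the Hubble radius R = c/H this reduces to a = c*H.
H = 2.27e-18   # Hubble parameter in SI units (~70 km/s/Mpc)
c = 2.998e8    # speed of light, m/s

R = c / H        # Hubble radius, metres
a = R * H**2     # predicted cosmological acceleration at that distance
print(f"a = c*H ~ {a:.1e} m/s^2")  # ~7e-10 m/s^2
```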
Not only that, but by using Newton’s second and third laws of motion with it, it enables you to predict gravity and to prove the mechanism of gravity. Simply, Newton’s second law is F = dp/dt ~ ma, which gives you an outward force (radially outward in all directions from us, which can be understood simply by analogy to the net outward force of the pressure wave in an explosion; the so-called dynamic or drag pressure of a blast wave is the outward-directed pressure component of a shock wave in an explosion, and its outward force is its pressure multiplied by the spherical surface area of the shock).
By Newton’s 3rd law – action and reaction are equal and opposite – we then get an equal inward-directed force, which from the possibilities (vector bosons) of quantum field theory, seems to be mediated by gravitons. This allows checkable predictions of the force of gravity: http://nige.wordpress.com/
Maybe all of this fact-based physics I’ve done is “crackpot”, and all the hocus-pocus based string theory graviton non-physics (e.g. see my site http://quantumfieldtheory.org/ for proof it is nonsense) is correct; but I’ve got evidence and string theory doesn’t. If society wants to throw the “crackpot” label at pseudoscience, it should do so at string theory, not at entirely fact-based theories which make tested, confirmed predictions (even if string theorists run mainstream journal peer-review teams and censor out factual alternatives to stringy spin-2 graviton trash).
Dr Samuel Glasstone authored 40 different technical textbooks ranging from “The Effects of Nuclear Weapons” to “Theoretical Chemistry” and “Nuclear Reactor Engineering”. Recently I found a very enlightening essay from one of his coauthors: http://www.garfield.library.upenn.edu/classics1988/A1988Q713800001.pdf
The article is by Glasstone’s co-author Keith J. Laidler, an Oxford graduate who went to Princeton University aged 22 in 1938 to do a PhD under Henry Eyring. Eyring had come up with a controversial theory of chemical reaction rates in 1935, which had initially been rejected by the Journal of Chemical Physics.
As a result of the lack of comprehension which his theoretical paper was greeted with, he realised that he would need to write a book in order to:
“present the basic theory in a fairly detailed way, discuss its implications and assumptions, and apply it to rate processes of various kinds. Eyring knew that he would find it difficult to settle down to long sessions of writing, which are necessary to produce a book. He therefore invited me to collaborate with him on the book, with the arrangement to be that I would do the actual writing, in regular consultation with him.”
Laidler undertook the writing of the book while working on his PhD, and Samuel Glasstone took over the editing in the Summer of 1939 when he arrived at Princeton:
“In the summer of 1939 Samuel Glasstone arrived in Princeton as a research associate in the Department of Chemistry. Glasstone, then aged about 40, had already had a successful research career at the University of Sheffield and was the author of several very successful books on physical chemistry. In view of his background, it was natural to enlist his help with the writing of the book, especially since it would be necessary for me to leave Princeton in 1940 to carry out war research. I provided Glasstone with everything I had written and continued to give him material as I wrote it during my second year at Princeton. At the same time, Glasstone, Erying, and I collaborated on research on overvoltage, a subject on which Glasstone had previously worked. Glasstone greatly supplemented the material I gave him for the book, and he put everything into final form. Eyring himself did hardly any of the writing, but he made numerous and valuable comments on everything we wrote, and I well remember many vigorous but always very friendly arguments on a number of fundamental points. Although World War II interrupted most basic scientific work for a few years after the book’s publication in 1941, the book attracted much attention from the start, particularly as it was the first comprehensive treatment of the new rate theory and of its applications to a variety of chemical and physical processes; it also contained a good deal of previously unpublished material. Records of sales during the war years have been lost, but probably at least 10,000 copies were sold during that period. After 1947 about 10,000 further copies were sold until the book went out of print in 1970. The Science Citation Index shows that it has been frequently quoted and that it has been Eyring’s most often cited publication. In 1948 a pirated Russian translation of the book appeared, and there have also been Japanese and Spanish editions.”
This particular anecdote about the reason for writing a book and how collaboration worked is very interesting!
Update (9 May 2008): copy of comments to http://keamonad.blogspot.com/2008/05/thoofttalk.html
… I was really amazed to learn that the weak mixing angle as an ad hoc fix literally mixes up the U(1) gauge boson with the neutral SU(2) gauge boson to produce something that fits the description of the gauge boson of electromagnetism.
It’s simply not true that U(1) represents electromagnetism and SU(2) the weak interaction: instead the Standard Model weak mixing angle blends the neutral gauge boson properties of U(1) and of SU(2) (where the massiveness of the SU(2) neutral gauge boson isn’t inherently natural, but has to be explained by an external agency, the Higgs field; i.e. the intrinsic mass of the SU(2) gauge bosons is zero and they acquire virtual mass from Higgs bosons).
The gauge boson of U(1) is B, which isn’t observed in nature, and the neutral gauge boson of SU(2) is W_0, which again isn’t observed in nature. The ad hoc “epicycle” of mixing the B from U(1) with the W_0 from SU(2) yields two mixed up combinations, the observed electromagnetic gauge boson and the massless version of the observed Z_0 weak gauge boson.
So even the electroweak sector of the Standard Model is a messy ad hoc theory. I look forward to reading ‘t Hooft’s paper and seeing if it sheds light on the kind of mathematics which I’m interested in at the moment…
Wow! I’ve just started to read the ‘t Hooft paper and was struck by his slide which states:
The radius of the event horizon of a black hole electron is of the order of 1.4*10^{−57} m, the equation being simply r = 2GM/c^2, where M is the electron mass.
Compare this to the Planck length, 1.6*10^{−35} metres, a dimensional-analysis-based (non-physical) length far larger in size, yet historically claimed to be the smallest physically significant size!
The black hole length equation is different from the Planck length equation principally in that Planck’s equation includes Planck’s constant h, and doesn’t include electron mass. Both equations contain c and G. The choice of which is the more fundamental equation should be based on physical criteria, not groupthink or the vagaries of historical precedence.
The Planck length is complete rubbish: it’s not based on physics, it’s physically unchecked, ‘not even wrong’ uncheckable speculation.
The smaller black hole size is checkable because it causes physical effects. According to the Wikipedia page: http://en.wikipedia.org/wiki/Black_hole_electron
“A paper titled “Is the electron a photon with toroidal topology?” by J. G. Williamson and M. B. van der Mark, describes an electron model consisting of a photon confined in a closed loop. In this paper, the confinement method is not explained. The Wheeler suggestion of gravitational collapse with conserved angular momentum and charge would explain the required confinement. With confinement explained, this model is consistent with many electron properties. This paper argues (page 20) “–that there exists a confined singlewavelength photon state, (that) leads to a model with nontrivial topology which allows a surprising number of the fundamental properties of the electron to be described within a single framework.” “
My papers in Electronics World, August 2002 and April 2003, similarly showed that an electron is physically identical to a confined charged photon trapped into a small loop by gravitation (i.e., a massless SU(2) charged gauge boson which has not been supplied with mass by the Higgs field; the detailed way that the magnetic field curls cancel when such energy goes round in a loop, or alternatively is exchanged in both directions between charges, prevents the usual infinite-magnetic-self-inductance objection to the motion of charged massless radiation).
The Wiki page on black hole electrons then claims wrongly that:
All of these “objections” are based on flawed versions of Hawking’s black hole radiation theory, which neglect a lot of vital physics that makes the correct theory more subtle.
See the Schwinger equation for pair-production field strength requirements: equation 359 of the mainstream work http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of the mainstream work http://arxiv.org/abs/hep-th/0510040.
First of all, Schwinger showed that you can’t get spontaneous pair-production in the vacuum if the electromagnetic field strength is below the critical threshold of 1.3 × 10^18 volts/metre.
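Schwinger’s threshold isn’t an arbitrary number; it follows from the electron mass, the speed of light, the elementary charge and ħ. Here is a quick numerical sketch (using standard values of the constants) that reproduces it:

```python
# Schwinger critical field for spontaneous pair production:
# E_c = m_e^2 c^3 / (e * hbar), in volts per metre.
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")  # about 1.3e18 V/m
```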
Hawking’s radiation theory requires this, because his explanation is that pair production must occur near the event horizon of the black hole.
One virtual fermion falls into the black hole, and the other escapes from the black hole and thus becomes a “real” particle (i.e., one that doesn’t get drawn to its antiparticle and annihilated into bosonic radiation after the brief Heisenberg uncertainty time).
In Hawking’s argument, the black hole is electrically uncharged, so this mechanism of randomly escaping fermions allows them to annihilate into real gamma rays outside the event horizon, and Hawking’s theory describes the emission spectrum of these gamma rays (they are described by a black body type radiation spectrum with a specific equivalent radiating temperature).
The problem is that, if the black hole does need pair production at the event horizon in order to produce gamma rays, this won’t happen the way Hawking suggests.
The electrical charge needed to produce Schwinger’s 1.3 × 10^18 v/m electric field, which is the minimum needed to cause pair-production/annihilation loops in the vacuum, will modify Hawking’s mechanism.
Instead of virtual positrons and virtual electrons both having an equal chance of falling into the real core of the black hole electron, what will happen is that the pair will be on average polarized, with the virtual positron moving further towards the real electron core, and therefore being more likely to fall into it.
So, statistically you will get an excess of virtual positrons falling into an electron core and an excess of virtual electrons escaping from the black hole event horizon of the real electron core.
From a long distance, the sum of the charge distribution will make the electron appear to have the same charge as before, but the net negative charge will then come from the excess electrons around the event horizon.
Those electrons (produced by pair production) can’t annihilate into gamma rays, because not enough virtual positrons are escaping from the event horizon to enable them to annihilate.
This really changes Hawking’s theory when applied to fundamental particles as radiating black holes.
Black hole electrons radiate negatively charged massless radiation: gauge bosons. These are the Hawking radiation from black hole electrons. The electrons don’t evaporate to nothing, because they’re all evaporating and therefore all receiving radiation in equilibrium with emission.
This is part of the reason why SU(2), rather than U(1) x SU(2), looks to me like the best way to deal with electromagnetism as well as the weak and gravitational interactions! By simply getting rid of the Higgs mechanism and replacing it with something that provides mass to only a proportion of the SU(2) gauge bosons, we end up with massless charged SU(2) gauge bosons which mimic the charged, force-causing, Hawking radiation from black hole fermions. The massless neutral SU(2) gauge boson is then a spin-1 graviton, which fits in nicely with a quantum gravity mechanism that makes checkable predictions and is compatible with observed approximations such as checked parts of general relativity and quantum field theory.
********
Dr Peter Woit has a couple of new posts up. One is called “So what will you do if string theory is wrong?” and it quotes a draft paper of that title by string theorist Moataz Emam which is to be published in the American Journal of Physics:
“So even if someone shows that the universe cannot be based on string theory, I suspect that people will continue to work on it. It might no longer be considered physics, nor will mathematicians consider it to be pure mathematics. I can imagine that string theory in that case may become its own new discipline; that is, a mathematical science that is devoted to the study of the structure of physical theory and the development of computational tools to be used in the real world. The theory would be studied by physicists and mathematicians who might no longer consider themselves either. …
“Whether or not string theory describes nature, there is no doubt that we have stumbled upon an exceptionally huge and elegant structure which might be very difficult to abandon. …”
I’ll have to get around to adding that to my domain http://quantumfieldtheory.org/ when I update it.
Another of Dr Woit’s new posts is “Witten on Dark Energy”. This contains the following text:
The crucial point of course is … how can you ever test these [string theory landscape of dark energy] ideas, making them real science and not metaphysics? At the end of [Edward Witten’s] talk, Rachel Bean tried to pin him down on this question, leading to this exchange:
Bean: “If we have this landscape, this multiverse, … can we learn nothing, or is there some hope, do you have some hope, that if you were to find a universe that had remarkably small CC [cosmological constant, i.e. the term measuring the dark energy needed to model cosmological acceleration] you could also make some allusion to the other properties of that universe for example the fine structure constant, or are we saying that all of these things are random variables, uncorrelated and we’ll never get an insight.”
Witten: “Well, I don’t know of course, I’m hoping that we’ll learn more, perhaps the LHC will discover supersymmetry and maybe other unexpected discoveries will change the picture. I wasn’t meaning to advocate anything.”
Bean: “I’m asking your opinion.”
Witten (after a silence): “I don’t really know what to think has got to be the answer…”
Update (16 May 2008):
Heaviside, Wolfgang Pauli, and Bell on the Lorentz spacetime
There are a couple of nice articles by Professor Harvey R. Brown of Oxford University (he’s the Professor of the Philosophy of Physics there, see http://users.ox.ac.uk/~brownhr/): http://philsci-archive.pitt.edu/archive/00000987/00/Michelson.pdf and http://philsci-archive.pitt.edu/archive/00000218/00/Origins_of_contraction.pdf
The former paper states:
“… in early 1889, when George Francis FitzGerald, Professor of Natural and Experimental Philosophy at Trinity College Dublin, wrote a letter to the remarkable English autodidact, Oliver Heaviside, concerning a result the latter had just obtained in the field of Maxwellian electrodynamics.
“Heaviside had shown that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the ether. In this letter, FitzGerald asked whether Heaviside’s distortion result—which was soon to be corroborated by J. J. Thompson—might be applied to a theory of intermolecular forces. Some months later, this idea would be exploited in a letter by FitzGerald published in Science, concerning the baffling outcome of the 1887 ether-wind experiment of Michelson and Morley. … It is famous now because the central idea in it corresponds to what came to be known as the FitzGerald-Lorentz contraction hypothesis, or rather to a precursor of it. This hypothesis is a cornerstone of the ‘kinematic’ component of the special theory of relativity, first put into a satisfactory systematic form by Einstein in 1905. But the FitzGerald-Lorentz explanation of the Michelson-Morley null result, known early on through the writings of Lodge, Lorentz and Larmor, as well as FitzGerald’s relatively timid proposals to students and colleagues, was widely accepted as correct before 1905—in fact by the time of FitzGerald’s premature death in 1901. Following Einstein’s brilliant 1905 work on the electrodynamics of moving bodies, and its geometrization by Minkowski which proved to be so important for the development of Einstein’s general theory of relativity, it became standard to view the FitzGerald-Lorentz hypothesis as the right idea based on the wrong reasoning. I strongly doubt that this standard view is correct, and suspect that posterity will look kindly on the merits of the pre-Einsteinian, ‘constructive’ reasoning of FitzGerald, if not Lorentz. After all, even Einstein came to see the limitations of his own approach based on the methodology of ‘principle theories’.
I need to emphasise from the outset, however, that I do not subscribe to the existence of the ether, nor recommend the use to which the notion is put in the writings of our two protagonists (which was very little). The merits of their approach have, as J. S. Bell stressed some years ago, a basis whose appreciation requires no commitment to the physicality of the ether.
“…Oliver Heaviside did the hard mathematics and published the solution [Ref: O. Heaviside (1888), ‘The electromagnetic effects of a moving charge’, Electrician, volume 22, pages 147–148]: the electric field of the moving charge distribution undergoes a distortion, with the longitudinal components of the field being affected by the motion but the transverse ones not. Heaviside [1] predicted specifically an electric field of the following form …
“In his masterful review of relativity theory of 1921, the precocious Wolfgang Pauli was struck by the difference between Einstein’s derivation and interpretation of the Lorentz transformations in his 1905 paper [12] and that of Lorentz in his theory of the electron. Einstein’s discussion, noted Pauli, was in particular “free of any special assumptions about the constitution of matter”6, in strong contrast with Lorentz’s treatment. He went on to ask:
‘Should one, then, completely abandon any attempt to explain the Lorentz contraction atomistically?’
“It may surprise some readers to learn that Pauli’s answer was negative. …
“[John S.] Bell’s model has as its starting point a single atom built of an electron circling a much more massive nucleus. Ignoring the back-effect of the electron on the nucleus, Bell was concerned with the prediction in Maxwell’s electrodynamics as to the effect on the two-dimensional electron orbit when the nucleus is set gently in motion in the plane of the orbit. Using only Maxwell’s equations (taken as valid relative to the rest frame of the nucleus), the Lorentz force law and the relativistic formula linking the electron’s momentum and its velocity—which Bell attributed to Lorentz—he determined that the orbit undergoes the familiar longitudinal “Fitzgerald” contraction, and its period changes by the familiar “Larmor” dilation. Bell claimed that a rigid arrangement of such atoms as a whole would do likewise, given the electromagnetic nature of the interatomic/molecular forces. He went on to demonstrate that there is a system of primed variables such that the description of the uniformly moving atom with respect to them is the same as the description of the stationary atom relative to the original variables—and that the associated transformations of coordinates are precisely the familiar Lorentz transformations. But it is important to note that Bell’s prediction of length contraction and time dilation is based on an analysis of the field surrounding a (gently) accelerating nucleus and its effect on the electron orbit.12 The significance of this point will become clearer in the next section. …
“The difference between Bell’s treatment and Lorentz’s theorem of corresponding states that I wish to highlight is not that Lorentz never discussed accelerating systems. He didn’t, but of more relevance is the point that Lorentz’s treatment, to put it crudely, is (almost) mathematically the modern changeofvariables, basedoncovariance, approach but with the wrong physical interpretation. …
“It cannot be denied that Lorentz’s argumentation, as Pauli noted in comparing it with Einstein’s, is dynamical in nature. But Bell’s procedure for accounting for length contraction is in fact much closer to FitzGerald’s 1889 thinking based on the Heaviside result, summarised in section 2 above. In fact it is essentially a generalization of that thinking to the case of accelerating bodies. It is remarkable that Bell indeed starts his treatment recalling the anisotropic nature of the components of the field surrounding a uniformly moving charge, and pointing out that:
‘In so far as microscopic electrical forces are important in the structure of matter, this systematic distortion of the field of fast particles will alter the internal equilibrium of fast moving material. Such a change of shape, the Fitzgerald contraction, was in fact postulated on empirical grounds by G. F. Fitzgerald in 1889 to explain the results of certain optical experiments.’
“Bell, like most commentators on FitzGerald and Lorentz, prematurely attributes to them length contraction rather than shape deformation (see above). But more importantly, it is not entirely clear that Bell was aware that FitzGerald had more than “empirical grounds” in mind, that he had essentially the dynamical insight Bell so nicely encapsulates.
“Finally, a word about time dilation. It was seen above that Bell attributed its discovery to J. Larmor, who had clearly understood the phenomenon in 1900 in his Aether and Matter [21]. 16 Indeed, it is still widely believed that Lorentz failed to anticipate time dilation before the work of Einstein in 1905, as a consequence of failing to see that the “local” time appearing in his own (second-order) theorem of corresponding states was more than just a mathematical artifice, but rather the time as read by suitably synchronized clocks at rest in the moving system. …
“One of Bell’s professed aims in his 1976 paper on ‘How to teach relativity’ was to fend off “premature philosophizing about space and time” 19. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatiotemporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of spacetime—Galilean or Minkowskian, say—it is immersed in? 20 Some critics of Bell’s position may be tempted to appeal to the general theory of relativity as supplying the answer. After all, in this theory the metric field is a dynamical agent, both acting and being acted upon by the presence of matter. But general relativity does not come to the rescue in this way (and even if it did, the answer would leave special relativity looking incomplete). Indeed the Bell-Pauli-Swann lesson—which might be called the dynamical lesson—serves rather to highlight a feature of general relativity that has received far too little attention to date. It is that in the absence of the strong equivalence principle, the metric g_μν in general relativity has no automatic chronometric operational interpretation. 21 For consider Einstein’s field equations … A possible spacetime, or metric field, corresponds to a solution of this equation, but nothing in the form of the equation determines either the metric’s signature or its operational significance. In respect of the last point, the situation is not wholly dissimilar from that in Maxwellian electrodynamics, in the absence of the Lorentz force law. In both cases, the ingredient needed for a direct operational interpretation of the fundamental fields is missing.”
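Heaviside’s distortion result quoted above (longitudinal field components weakened by motion, transverse ones strengthened) is easy to check numerically. The sketch below uses the standard textbook field of a uniformly moving point charge, which reduces to Heaviside’s 1888 result; the speed and distance are arbitrary illustration values:

```python
import math

def moving_charge_field(q, r, theta, beta, eps0=8.854e-12):
    """Field magnitude of a point charge q moving uniformly at speed beta*c,
    at distance r and angle theta from the line of motion (Heaviside 1888)."""
    coulomb = q / (4 * math.pi * eps0 * r**2)
    return coulomb * (1 - beta**2) / (1 - beta**2 * math.sin(theta)**2)**1.5

e = 1.602e-19   # elementary charge, C
r = 1e-10       # arbitrary distance, m
beta = 0.8      # arbitrary speed, as a fraction of c

E_rest = moving_charge_field(e, r, 0.0, 0.0)            # static Coulomb field
E_long = moving_charge_field(e, r, 0.0, beta)           # along the motion
E_trans = moving_charge_field(e, r, math.pi / 2, beta)  # perpendicular to it

print(E_long / E_rest)   # 1 - beta^2 = 0.36: weakened along the motion
print(E_trans / E_rest)  # 1/sqrt(1 - beta^2) = gamma, ~1.67: strengthened
```

At theta = 0 the field is reduced by the factor (1 − β²), and at theta = 90° it is enhanced by the Lorentz factor γ, which is exactly the anisotropy FitzGerald seized on.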
Interesting recent comment by anon. to Not Even Wrong:
Even a theory which makes tested predictions isn’t necessarily the truth, because there might be another theory which makes all the same predictions plus more. E.g., Ptolemy’s excessively complex and fiddled epicycle theory of the Earth-centred universe made many tested predictions about planetary positions, but belief in it led to the censorship of an even better theory of reality.
Hence, I’d be suspicious of whether the multiverse is the best theory – even if it did have a long list of tested predictions – because there might be some undiscovered alternative theory which is even better. Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools. Mixing beliefs with science quickly makes the fundamental revision of theories a complete heresy. Scientists shouldn’t begin believing that theories are religious creeds.
Update (22 May 2008):
I now understand quantum field theory sufficiently well to begin writing a book about it. What is interesting is the marketing perspective on this subject. I think Figure 1 on this blog post, for instance, is a very clear statement of the facts. However, those people unfamiliar with Feynman diagrams will not appreciate why the time and spatial distance axes are reversed from the usual display. Hence, any attempt to appeal to the technically educated will automatically alienate further the ignorant masses. I’ll have to explain things from both a nontechnical and from a technical perspective in the book. Glasstone’s books usually had chapters written with two sections: a nontechnical section first, followed by a section with the more mathematical and technical material. Possibly this is what I will need to do with each chapter in the planned book, to cater for the widest possible audience without leaving out crucial technical evidence.
David Holloway’s book, Stalin and the Bomb, is noteworthy for analysing Stalin’s state of mind over American proposals for pacifist anti-proliferation treaties after World War II. Holloway demonstrates in the book that any humility or goodwill shown to Stalin by his opponents would be taken by Stalin as (1) evidence of exploitable weakness and stupidity, or (2) a suspicious trick. Stalin would not accept goodwill at face value. Either it marked an exploitable weakness of the enemy, or else it indicated an attempt to trick Russia into remaining weaker than America. Under such circumstances (which some would attribute to Stalin’s paranoia; others would call it his narcissism), there was absolutely no chance of reaching an agreement for peaceful control of nuclear energy in the postwar era. (However, Stalin had no qualms about making the Soviet-Nazi peace pact with Hitler in 1939, to invade Poland and murder people. Stalin found it easy to trust a fellow dictator because he thought he understood dictatorship, and was astonished to be double-crossed when Hitler invaded Russia two years later.) Similarly, the facts on this blog post (the 45th post on this blog) and in previous posts are assessed the same way by the mainstream: they are ignored, not checked or investigated properly. Everyone thinks that they have nothing to gain from a theory based on solid, empirical facts! Once I have written the book, I will update http://quantumfieldtheory.org/ and make the book freely available, with some video explanations.
Update (25 May 2008):
There isn’t any “curved” smooth classical spacetime, it’s just an approximation using calculus to represent effects of discrete field quanta being exchanged between gravitational charges composed of mass or energy.
The universe isn’t curved: this was discovered by Perlmutter around 1998, when it was found that the predicted curvature (gravitational deceleration of the expansion), as assessed from the redshifts of distant supernovae, was absent.
Einstein wrote to Besso in 1954:
“I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included…”
Quantum field theory is something that definitely needs to be considered by the mainstream more realistically than the half-baked, non-predictive approach taken in so-called string ‘theory’ (there isn’t a string theory, there are 10^500 variants, all different, so it’s not quantitatively predictive science).
Rutherford and Bohr were extremely naive in 1913 about the electron “not radiating” endlessly. They couldn’t grasp that in the ground state, all electrons are radiating gauge bosons at the same rate they are receiving them; hence the equilibrium of emission and absorption of energy when an electron is in the ground state, and the fact that the electron has to be in an excited state before an observable photon emission can occur:
“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”
 Rutherford to Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)
The ground state energy, and thus the frequency of the orbital oscillation of an electron, is determined by the average rate of exchange of electromagnetic gauge bosons between electric charges. So it’s really the dynamics of quantum field theory (e.g. the exchange of gauge boson radiation between all the electric charges in the universe) which explains the reason for the ground state in quantum mechanics. Likewise, as Feynman showed in QED, the quantized exchange of gauge bosons between atomic electrons is a random, chaotic process, and it is this chaotic, quantized nature of the electric field on small scales which makes the electron jump around unpredictably in the atom, instead of obeying the false (smooth, non-quantized) Coulomb force law and describing nicely shaped elliptical or circular orbits.
Update (27 May 2008):
The comment immediately above is taken from a submission to Kea’s Arcadian Functor blog. Apparently, Dr Sheppeard didn’t much like the comment with respect to Rutherford, who came from New Zealand, and thought I was calling him an idiot. Actually, I was just pointing out that even great geniuses can make mistakes, miss mechanisms, and so on. I didn’t call him an idiot, and I have the greatest respect for him with regard to the useful work he did in nuclear physics (interpreting Geiger and Marsden’s experimental results correctly) and many other things. Notice that Rutherford did make other blunders which I haven’t even mentioned; for example, he said:
“Anyone who expects a source of power from the transformation of the atom is talking moonshine” (Quoted by Richard Rhodes, “The Making of the Atomic Bomb”, Simon and Schuster, 1986.)
FURTHER READING (SELECTED POSTS ON THIS BLOG WHICH ARE RELEVANT TO BUT MORE EXTENSIVE THAN THIS POST, although note that some of the older posts are obsolete or in error):
3. http://nige.wordpress.com/2007/05/25/quantumgravitymechanismandpredictions/
7. http://nige.wordpress.com/2007/07/04/metricsandgravitation/
8. http://nige.wordpress.com/2007/07/17/energyconservationinthestandardmodel/
9. http://nige.wordpress.com/2007/08/27/stringtheoryversusphysicalfactsinscientificamerican/
10. http://nige.wordpress.com/2007/11/28/predictingthefuturethatswhatphysicsisallabout/
12. http://nige.wordpress.com/2008/01/22/farewelltoblogging/
14. http://nige.wordpress.com/2007/02/20/thephysicsofquantumfieldtheory/
15. http://nige.wordpress.com/2007/03/16/whyolddiscardedtheorieswontbetakenseriously/
comment:
http://riofriospacetime.blogspot.com/2008/05/einsteinssphere.html
Surprised,
That’s obviously what is meant because that’s Dirac’s prediction from the spinor of his famous equation. He had to modify the Hamiltonian and one consequence is antimatter.
It was Schwinger, however, who found that pair production occurs spontaneously in the vacuum if the electric field strength exceeds a threshold of 1.3 × 10^18 volts/metre. See equation 359 in Dyson’s 1951 Lectures on Advanced Quantum Mechanics, Second Edition, http://arxiv.org/abs/quant-ph/0608140, or equation 8.20 of Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040
One thing that really annoys me about popular books on the subject is that they claim – falsely – that pairs of fermions are constantly popping into existence and annihilating everywhere in the vacuum, without limit.
Actually, that only occurs with[in] a distance of 32.953 fm from an electron (see http://nige.wordpress.com/2007/06/13/feynmandiagramsinloopquantumgravitypathintegralsandtherelationshipofleptonstoquarks/ ).
So all those physicists who state that the entire vacuum is a seething foam of Heisenbergformula controlled pairproduction and annihilation (i.e., looped Feynman diagrams), are talking out of their hats.
It’s been known for over fifty years that there is a cutoff on the pair production. It’s pair production that allows pairs of short-lived (virtual) fermions to become briefly polarized in a field, which opposes and partially cancels the primary electric field, thereby explaining physically the reason for electric charge renormalization.
If pair-production occurred throughout the vacuum, there would be no infrared cutoff on the low-energy range for running couplings, and the observable electric charge would get ever smaller as you got further from an electron. This doesn’t happen, proving that pair production-annihilation certainly doesn’t occur everywhere in the vacuum.
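The cutoff distance quoted above (32.953 fm) can be sanity-checked to first order by asking at what radius the classical Coulomb field of an electron falls to Schwinger’s critical value. This rough sketch ignores vacuum polarization corrections, which is why it gives roughly 33 fm rather than the exact figure:

```python
import math

e = 1.602e-19   # elementary charge, C
k = 8.988e9     # Coulomb constant, N m^2 / C^2
E_c = 1.3e18    # Schwinger critical field, V/m

# Solve k*e/r^2 = E_c for the radius where the field drops to the threshold:
r = math.sqrt(k * e / E_c)
print(f"{r / 1e-15:.1f} fm")  # roughly 33 fm
```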
*****
Here’s a section from my site http://feynman137.tripod.com/ which is generally obsolete now, but still contains some useful bits of information (I’ve found a slightly different and later version of this argument in my post http://nige.wordpress.com/2007/07/04/metricsandgravitation/ but it is not necessarily better, and all variants will need to be reviewed to get the best lucid presentation when writing the book):
Penrose’s Perimeter Institute lecture is interesting: ‘Are We Due for a New Revolution in Fundamental Physics?’ Penrose suggests quantum gravity will come from modifying quantum field theory to make it compatible with general relativity… I like the questions at the end where Penrose is asked about the ‘funnel’ spatial pictures of black holes, and points out they’re misleading illustrations, since you’re really dealing with spacetime, not a hole or distortion in 2 dimensions. The funnel picture really shows a 2-dimensional surface distorted into 3 dimensions, where in reality you have a 3-dimensional surface distorted into 4-dimensional spacetime. In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c^{2} = 1.5 mm for Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved space or 4-dimensional spacetime description is needed to avoid π varying due to gravitational contraction of radial distances but not circumferences.
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^{1/2}, so v^{2} = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the [gravitational binding] energy [which] mass [has when it is bound] in a gravitational field at radius x from the centre of mass [which is producing the gravitational binding effect on the other mass] is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.
By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^{2} = 2GM/x) into the FitzGerald-Lorentz contraction, giving g = (1 – v^{2}/c^{2})^{1/2} = [1 – 2GM/(xc^{2})]^{1/2}. However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!) with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:
FitzGerald-Lorentz contraction effect:
g = x/x_{0} = t/t_{0} = m_{0}/m = (1 – v^{2}/c^{2})^{1/2} = 1 – ½v^{2}/c^{2} + …
Gravitational contraction effect: g = x/x_{0} = t/t_{0} = m_{0}/m = [1 – 2GM/(xc^{2})]^{1/2} = 1 – GM/(xc^{2}) + …, where for [radial] spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as is the case for the FitzGerald-Lorentz contraction: x/x_{0} + y/y_{0} + z/z_{0} = 3r/r_{0}. Hence the radial contraction of space around a mass is r/r_{0} = 1 – GM/(3rc^{2}).
Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The [amount of radial contraction of a spherically symmetric mass is] (1/3) GM/c^{2}. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
This is the 1.5 mm contraction of Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with the Le Sage gravity mechanism and gave up on it in 1965.
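Feynman’s 1.5 mm figure for the Earth follows directly from (1/3)GM/c². A numerical sketch with standard values for G and the Earth’s mass:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s

contraction = G * M / (3 * c**2)   # (1/3) GM/c^2
print(f"{contraction * 1000:.2f} mm")  # about 1.5 mm
```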
*****
Copy of a comment to Dr Tommaso Dorigo’s blog:
On the reality of the big bang, can I recommend http://www.astro.ucla.edu/~wright/tiredlit.htm for an analysis of the redshift facts and the reasons why pseudoscientists can’t accept the big bang facts as valid.
Notice also that Alpher and Gamow predicted the cosmic background radiation in 1948 and it was discovered in 1965.
Actually, the big bang theory is incomplete, because when you take the derivative of the Hubble expansion law v = HR, you get acceleration a = dv/dt = d(HR)/dt = (H dR/dt) + (R dH/dt) = Hv = RH^2 (treating H as constant in time, so that the R dH/dt term vanishes). This tells you that receding masses around us have a small outward acceleration, only on the order of 10^{−10} m/s^{2} for the most distant objects. This is a tremendous prediction. I published it via Electronics World back in October 1996, well before Perlmutter confirmed it observationally.
This is just about the observed acceleration of the universe! Smolin points out this amount of acceleration, and the “numerical coincidence” that it is on the order of a = Hc = RH^2, in his book “The Trouble with Physics” (2006), but neglects to state that you get this result by differentiating the Hubble recession law! Note that arXiv.org allowed my paper upload from university in 2002, but then deleted it within seconds, unread!
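The order of magnitude claimed above, a = Hc ≈ 10^{−10} m/s², can be reproduced with any modern value of the Hubble parameter; 70 km/s/Mpc is assumed here purely for illustration:

```python
# a = dv/dt = d(HR)/dt = H(dR/dt) = Hv, treating H as constant; for the
# most distant receding matter v -> c, giving a = Hc.
H = 70 * 1000 / 3.086e22   # 70 km/s/Mpc converted to SI units (1/s)
c = 2.998e8                # speed of light, m/s

a = H * c
print(f"{a:.1e} m/s^2")    # on the order of 1e-10 m/s^2
```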
Dr Bob Lambourne of the Open University years ago suggested submitting my paper to the Institute of Physics' Classical and Quantum Gravity, the editor of which sent it for “peer review” to a string theorist who rejected it because it added nothing to string theory!
So some additional evidence and confirmed predictions of the big bang do definitely exist (the outward acceleration of matter leads to a radially outward force, which by Newton's 3rd law gives a predictable inward reaction force, which allows quantitative predictions of gravity that again are confirmed by empirical facts). Don't just believe that only stuff that survives censorship by string theorists is factual. Classical and Quantum Gravity was publishing the Bogdanovs' string theory speculations (which the journal later had to retract) at the time it was rejecting my fact-based paper!
**********
Above: the first two pages of the four-page article in the August 2002 issue of Electronics World, introducing the particle physics for the gravity mechanism; the second part (published in the April 2003 issue, six pages) gave the gravity mechanism in its original formulation. As explained in the earlier post on this blog here, a new formulation was developed over the last couple of years to make the basic physical principles more transparent to other physicists. The CERN Document Server hosted a preprint, but stringy moderators on arXiv instantly (within 5 seconds) deleted an updated paper in December 2002 without even taking the time to read it first. This kinda indicates paranoia, but the discovery of the basic fact that the Hubble recession law implies a very small outward acceleration, giving an immense outward force (because the mass of the universe is really huge, and force is the product of mass and acceleration), dates back twelve years and was first published via the October 1996 letters pages of Electronics World.
**********
The primary purpose of this blog is to provide evidence that electromagnetism is mediated by charged, massless gauge bosons, replacing the ad hoc and poorly predictive Higgs mechanism for electroweak symmetry breaking with a fact-based and falsifiably predictive mechanism. Another objective is to provide evidence, from a working (checkable, successful) fact-based mechanism of quantum gravity, that the graviton is a spin-1 gauge boson, possibly the massless neutral gauge boson of SU(2) or a gravity field described by U(1). This blog sets out to provide evidence for the correct way to introduce gravity into the Standard Model of particle physics, by making quantitative predictions that are checkable.
Update (13 June 2008):
copy of comment (in case accidentally lost) by anon. to the Not Even Wrong weblog (quotation is from Tony Smith):
“… dipole axis of amplitude – due to motion with respect to the Cosmic Microwave Background. – according to astro-ph/0302207 (First Year WMAP) “… COBE determined the dipole amplitude is 3.353 +/- 0.024 mK in the direction …”.”
‘Dipole axis of amplitude’ is a very polite euphemism for this massive +/- 3 mK cosine anisotropy in the CBR, compared to the original name given by discoverer R. A. Muller in his Scientific American article (v. 238, May 1978, pp. 64-74): see http://adsabs.harvard.edu/abs/1978SciAm.238...64M
(If the CMB is used to establish a reference frame for motion, this anisotropy indicates absolute motion with respect to that frame.)
Update (21 June 2008):
Some comments have been added to the post http://nige.wordpress.com/2007/03/16/whyolddiscardedtheorieswontbetakenseriously/ which clarify the errors made by crackpots who dismiss the Standard Model and quantum gravity work on the basis that, if exchanged gauge bosons caused fundamental forces, charges would get hot or would be slowed down by drag effects due to graviton fields in space (this is the standard false dismissal of the Standard Model, quantum gravity, and the Le Sage gravity exchange-radiation mechanism by its haters). Actually, gauge boson radiation fields in space don't have the right frequency to oscillate charges into gaining internal energy (heat), so they don't heat up matter; they just impact and thereby cause the accelerative effects of force fields, and the related contraction effects of spacetime (the contraction of length of moving bodies, and the radial-only contraction of gravitating mass/energy due to graviton compression, which results in the distorted, non-Euclidean geometry that is modelled approximately by general relativity).
Two electrons repel by exchanging negatively charged massless electromagnetic gauge bosons (actually the massless charged radiations of SU(2), rather than the assumed uncharged massless radiation of U(1) which is currently part of the Standard Model). In the case of quantum gravity, two nearby similarly gravitationally-charged (massive) objects are pushed together, because they forcefully exchange gravitons with every mass in the receding universe, but not with non-receding masses. In the case of the repulsion of two electrons, the exchanged radiations are charged, massless Hawking-type radiation, so the situation is reversed relative to that of quantum gravity.
I.e., in quantum gravity the exchanged radiations are only able to carry a net force if the objects are receding from one another; this is due to the physics whereby objects must have a Hubble recession giving acceleration a = dv/dt = d(HR)/dt, and thus an outward force relative to the other object of F = ma = m*d(HR)/dt, so that there is an inward reaction force of similar magnitude (Newton's 3rd law of motion) mediated by the spacetime fabric of gravitons (the only thing which can carry the force). Hence, nearby masses which aren't receding cannot physically exchange any gravitons which deliver a net force. This is the physical mechanism for the shadowing effect in the Le Sage mechanism.
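As a rough sketch of the magnitudes involved in this argument: the acceleration is the a ~ Hc figure from the Hubble-law derivative, and the ~1e53 kg mass of the observable universe is my order-of-magnitude assumption, not a value given in the text:

```python
# Order-of-magnitude estimate of the outward force F = ma described above.
# M_universe is a rough assumption for the observable universe's mass;
# a is the outward acceleration a ~ Hc from the Hubble-law argument.
M_universe = 1e53   # kg, order-of-magnitude assumption
a = 7e-10           # m/s^2, outward acceleration ~ Hc

F_outward = M_universe * a
print(f"Outward force ~ {F_outward:.0e} N")
```

The product is on the order of 10^43 N, which is the "immense outward force" (and, by Newton's 3rd law, the claimed inward reaction force) referred to in the text.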
Sadly, the few people who take the Le Sage mechanism seriously seem to be prejudiced against the redshift facts of the big bang. For example, the editor of a book called 'Pushing Gravity' (Matthew Edwards) initially emailed me encouragement, but later claimed the mechanism was wrong because in his opinion the big bang is not the right explanation of the Hubble redshift law. However, recession is the only empirically demonstrated mechanism that consistently models all the observational details of redshifted spectra (e.g., if redshift were due to scattering of radiation, the spectrum of redshifted light would be skewed rather than uniformly shifted to lower frequencies): see http://www.astro.ucla.edu/~wright/tiredlit.htm
(It's fascinating to study the reactions of physicists to a = dv/dt = d(HR)/dt = (R*dH/dt) + (H*dR/dt) = H*v = H*H*R, and its implications. They want to dismiss it, so they say either that it is too simple to be right or too complex to be right. It's so simple that if it was right, any one of the great physicists of yesteryear would have discovered it or thought about it in this way already and dismissed it for some good reason without recording it; so, they conclude, nature cannot be that simple. Alternatively, they argue that it is too complex to be right: because we have to differentiate the Hubble law, it's too complex for them to follow, and it would be simpler to stick to string theory with its landscape of 10^500 combinations of invisibly small compactified Calabi-Yau 6-dimensional manifold speculation. Alternatively, they claim that the Le Sage gravity mechanism has been disproved for gauge boson radiation because 'if moving gravitons existed in space, they would slow down objects and heat them up until they glowed red hot'. When you explain the evidence that gravitons only interact with massive particles that provide mass to the Standard Model particles (which are normally massless), and that gravitons cause effects like inertia and momentum rather than drag and heating, people just don't want to listen. Conversations begin because they think they can explain to you why they think you are wrong, and when it becomes clear that they are wrong, instead of being pleased to learn something, they still try to claim they are right, just as pseudoscientists do. Their final line of defence is that this mechanism is being censored out, and it would not be censored out – in their opinion – if it was correct.
If you counter this by citing well-known historical examples from physics of the initial hostility to correct work, they claim that you are claiming to be a genius like the people in the well-known (undisputed) historical examples, and they therefore miss the point entirely (innovation is routinely hated by society and particularly the media, which waits until a revolution has occurred before commenting, instead of promoting the news in advance of a revolution, precisely because innovation is not considered clever but is rather a nuisance to everyone or an act of pathetic egotism). If you alternatively show that peer-reviewers haven't even bothered to read or check the results, but have just rejected them because you are not a fellow string theorist or working in the community of groupthink ideas which are currently popular, then that really angers them. They have no way out other than to get angry with you, while continuing to ignore the actual science you have been talking about. Actually, it is quite common that irrational rules have to be enforced by society without a sensible excuse. As an example, nobody in a modern society knows all the laws; there are too many laws and the statute books are too long to read and remember in a lifetime. The nearest thing is to find a specialised lawyer, who knows at least the major laws in a particular area. However, any use of solicitors is expensive. So some people end up accidentally breaking laws, purely through ignorance, just because there are too many of them to know about. If ignorance could be used as a defence in law, then everyone would be immune to the law unless there was proof that the person had read the relevant law in question before breaking it (this is why people with security clearance traditionally must sign a copy of the Official Secrets Act, so they can be proved to know precisely what they can and cannot do with the secret information they possess).
Because in practice the law would not work if ignorance of the law were accepted as an excuse, the Latin dictum Ignorantia juris non excusat (ignorance of the law is no excuse) reigns:
'Ignorance of the law excuses no man; not that all men know the law; but because 'tis an excuse every man will plead, and no man can tell how to confute him.' – John Selden (1584-1654), Table Talk.
Similarly to this unhappy situation, scientific advances that don't follow the mainstream string theory groupthink are treated as guilty of error (not because they have been shown wrong, but because they take time and effort to understand and check) until they become mainstream in their turn. The process of becoming mainstream is a very ugly, very unscientific thing, all to do with scientific journal media publicity and celebrity endorsement (marketing called peer review and citation counting). This marketing effort has nothing whatsoever to do with real science, although some people (those who likewise think that Jesus endorsed the financially lucrative dogmatic organised religion which is actually a travesty of Jesus' message) redefine science to make it compatible with dogma, officialdom, groupthink, belief systems, orthodoxy, innovation-hatred, supernatural non-falsifiable string theory and such like. Hence there are two totally different extremes of scientists: those obsessed with nature, and those obsessed with being good little team players who earn Brownie points for not being a nuisance by endlessly coming up with new ideas and theories.)
The physics for electromagnetic gauge boson exchange is different from gravitation. Charged particles like electrons and quarks are black holes, emitting radiation. Because the black hole electrons and quarks are electrically charged, this affects the Hawking radiation mechanism: only the virtual charged SU(2) gauge bosons of pair production with opposite sign to the real particle fall into the event horizon, so the other halves of the pairs (with similar sign to the real particle, but with no mass) get radiated away. So an electron is consistently radiating negatively charged massless gauge boson radiation in all directions, and is receiving similar radiation from other electrons in the surrounding universe. This exchange is possible despite the massless nature of the charged radiation, because the magnetic field curls of radiation going from charge A to charge B are cancelled out by the magnetic field curls of massless charged radiation going from charge B to charge A. Hence, the usual problem with the propagation of massless charged radiation does not apply where there is an equilibrium of radiation being exchanged in two directions at the same time (the normal equilibrium situation): we don't find any infinite magnetic field self-inductance causing problems in the theory, simply because the physics naturally cancels out the magnetic field curls. The usual equilibrium is disturbed, however, if another electric charge is nearby. Unlike the case of gravity, the forceful exchange of charged massless SU(2) gauge bosons occurs even between two charges which are not receding. Two electrons repel because they exchange radiation that is not redshifted, unlike the radiation being exchanged with most of the other electrons in the rest of the mainly distant, receding universe. As a result, the exchange of gauge bosons between the two electrons is stronger than in other directions, so they are repelled (knocked apart).
If the universe was not receding, this would not happen, because there would be a perfect equilibrium.
This equilibrium is disturbed by the asymmetry that receding masses send back redshifted gauge bosons with lower energy, and thus less momentum, than those of nearby masses which are not receding. In both gravity and electromagnetism, the asymmetry of the exchange of gauge bosons is vitally dependent on the Hubble expansion. Without the expansion of the universe, there would be no gravity and no electromagnetic forces. But there is also an interdependence between such forces and the big bang, because it is the exchange of gauge bosons between masses which causes the expansion of the universe.
I first worked out the vector summation proving that electromagnetic gauge bosons cause attraction of unlike unit charges with the same force as they cause repulsion of similarly sized unit charges of like sign at Christmas 2000, and it was published in the April 2003 issue of Electronics World. The equilibrium referred to in the previous paragraphs is not absolutely perfect: it's almost isotropic, but it is not time-independent. Because the gauge bosons are being exchanged between receding masses (and are causing the recession), there is a time delay between emission and reception, during which the redshift increases. The energy of the exchanged gravitons is being routinely converted into the kinetic energy of the receding masses of the universe. But because gravity and electromagnetism are both powered by the effect of the expansion of the universe upon the exchange of gauge bosons with such surrounding, receding masses and charges, these forces are very gradually increasing in coupling constant as time passes.
This is one strong nail in the coffin of the mainstream ideas of
1. inflation (invoked to flatten the universe: at 300,000 years after the big bang, gravitational effects in the cosmic background radiation were much smaller than you would expect from the structures which have grown from those minor density fluctuations over the last 13,700 million years; we don't need inflation, because the weaker gravity towards time zero explains the lack of curvature then and how gravity has grown in strength since. Traditional arguments used to dismiss gravity coupling variations with time are false, because they assume that electromagnetic Coulomb repulsion effects on fusion rates are time-independent instead of varying along with gravity; i.e., when gravity was weaker, big bang fusion and later the fusion in the sun wasn't producing less fusion energy, because Coulomb repulsion between protons was also weaker, offsetting the effect of reduced gravitational compression and keeping fusion rates stable), and
2. force numerical unification to similar coupling constants, at very high energy such as at very early times after the big bang.
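A minimal numerical sketch of the weaker-gravity-at-early-times claim in point (1); the linear G ∝ t scaling is this article's conjecture, not established physics:

```python
# The article's conjecture: gravitational coupling grows in direct
# proportion to the age of the universe, G(t) = G_now * (t / t_now).
G_now = 6.674e-11   # m^3 kg^-1 s^-2, present-day measured value
t_now = 13.7e9      # years, present age of the universe
t_cmb = 3.0e5       # years, age at last scattering (CMB emission)

G_cmb = G_now * (t_cmb / t_now)   # conjectured value at recombination
print(f"Conjectured G at recombination: {G_cmb:.1e} "
      f"({t_cmb / t_now:.1e} of today's value)")
```

On this conjecture, gravity at the time of the CMB's emission was only a few hundred-thousandths of its present strength, which is the claimed alternative to inflation for explaining the smoothness of the CMB.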
The point (2) above is very important because the mainstream approach to unification is a substitution of numerology for physical mechanism. They have the vague idea from the Goldstone theorem that there could be a broken symmetry to explain why the coupling constants of gravity and electromagnetism are different assuming that all forces are unified at high energy, but it is extremely vague and unpredictive because they have no falsifiable theory, nor a theory based upon known observed facts.
They are actually wrong, because if you picture a sphere of radius R around any fundamental particle, the total flux (Joules per second per square metre) of gauge boson radiation summed over that surface area, passing into and out of that spherical area on the way to the particle in the middle, must be conserved; if it is not conserved, then some of the energy must be transformed and used in some other way. The running couplings (the change in effective observable electromagnetic charge of the electron, for example) at short distances from electrons and other particles need explanation in terms of this conservation of energy principle. The total output of electromagnetic gauge boson radiation from a charge is varied by the polarized vacuum shielding it at small distances around a lepton. As you get closer than a few femtometres to an electron, the electric charge of the electron starts to rise. Colliding electrons at 91 GeV (making them approach very closely before bouncing off) gives a repulsion force measured to be 7% stronger than predicted using Coulomb's law, so the electric charge at that energy (close to an electron) is higher than at low energies (below 0.5 MeV scattering energy it is constant and equal to the normal textbook value). The rise in measurable electric charge at very small distances is due to seeing less intervening shielding from the polarized vacuum of pair-production charges around the core of the particle. But what happens to the electromagnetic field energy that is lost from the gauge bosons emitted from a charge core as they are attenuated by the polarized vacuum surrounding the core? The energy lost from the electromagnetic gauge bosons is deposited in the virtual fermion field, and is actually used to create pairs of leptons, bosons and quarks.
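The running of the electromagnetic coupling described above can be illustrated with the standard one-loop QED formula; the ~7% figure quoted in the text corresponds to the measured value of the coupling at the 91 GeV scale, once all charged-fermion loops are included:

```python
import math

# One-loop QED running of the fine-structure constant: the effective
# charge rises at short distance as the polarized vacuum is penetrated.
# alpha(Q) = alpha0 / (1 - (alpha0 / 3*pi) * ln(Q^2 / m^2)), one lepton loop.
alpha0 = 1 / 137.036   # low-energy fine-structure constant
m_e = 0.511e-3         # electron mass, GeV
Q = 91.0               # probe energy, GeV (the scale quoted in the text)

alpha_e_only = alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))
print(f"electron loop only: {100 * (alpha_e_only / alpha0 - 1):.1f}% rise")

# Including all charged-fermion loops, the measured value at 91 GeV is
# about 1/128.9, i.e. roughly the 7% rise quoted in the text:
alpha_measured = 1 / 128.9
print(f"measured at 91 GeV: {100 * (alpha_measured / alpha0 - 1):.1f}% rise")
```

The electron loop alone gives only a couple of percent; the full measured rise to about 1/129 is what the text's "7% stronger than Coulomb's law" refers to.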
The short-ranged nuclear force running couplings are powered by massive virtual particles, like the massive SU(2) particles, which are produced by electromagnetic energy being attenuated by the polarized vacuum.
Therefore, instead of all forces (including gravity) numerically unifying in strength as you get to very small distances from bare particle cores (unobservably high energies, or very early times after the big bang), what happens is that the electromagnetic charge continues to increase towards its bare-core (maximum) value as you approach the core, while the nuclear charges there are zero, because there is no attenuated electromagnetic energy left to produce such field effects. There is no direct relationship between gravitational charge (mass) and electric charge (which doesn't increase with particle velocity, unlike mass), and in the Standard Model all mass is supplied by a separate bosonic Higgs field with which gravitons interact, instead of interacting directly with the Standard Model charges. As a result, there is simply no mechanism for gravity to suddenly increase in coupling by a factor of 10^40 at unobservably high energies: the mainstream claims that this happens because gravitons possess energy and are therefore sources for gravitation in their own right. However, this misses the fact that gravitons don't produce gravity alone, but only in combination with a massive Higgs-like field which supplies mass. Energy and matter both acquire gravitational charge by coupling to the Higgs-like field bosons, which are the only things that directly interact with gravitons:
Photon <-> Higgs-type mass bosons <-> Gravitons <-> Higgs-type mass bosons <-> Electron
Above: the chain of indirect links in the deflection of a photon by the gravity field of an electron. In this picture, only Higgs-type bosons interact with gravitons, and gravitons don't have gravitational charge: energy doesn't actually have any intrinsic mass, because it doesn't directly interact with gravitons. Gravitational charge is supplied to energy by a separate Higgs-type field of massive bosonic particles. Gravitons travel at light velocity, and the whole idea that they interact with one another directly is in disagreement with the simple model which predicts all of the correct (observed) features of gravitation.
In other words, it is an illusion to think that either electrons or photons have gravitational charge: they don't, and they merely acquire it indirectly from a secondary Higgs-like field. This distinction is important for the issue of whether gravitons actually have gravitational charge: they don't. All gravitational effects occur due to gravitons hitting Higgs-like bosons, not other gravitons.
In the fundamental force mechanism for gauge boson exchange explained in this blog post (see previous posts as listed above for more information), all fundamental forces were zero initially in the big bang, and the couplings of each have increased in direct proportion to the time since the big bang. At extremely high energy today (i.e., seeing through the polarized vacuum to bare particle cores), the fundamental forces differ from those believed by string theorists: electromagnetic charge is 137 times stronger than at low energies as you approach the black hole event horizon of an electron, and since the massive (graviton-exchanging) Higgs-like bosons are coupled to the electron by the electromagnetic interaction, the effective mass of the core you can experience varies in the same way. This is not the 10^40 factor jump in G argued for by string theorists who want gravity and electromagnetism to be identical in strength at high energy. At extremely high energy, the short-ranged nuclear couplings begin to fall as you approach the black hole event horizon of a fundamental particle, where they are zero. This is based on factual mechanisms which have already made tested, confirmed, originally falsifiable predictions (such as the acceleration of the universe, two years before first observation), and conservation principles, not abjectly speculative 'good ideas' or the consensus of a lot of physicists who believe because their friends believe, or some other religious, pseudoscientific reason.
Update (22 June 2008):
Copy of a comment in moderation queue to John Horgan’s string theory post on his blog (will probably be deleted for going off topic or being too long, or whatever):
http://www.stevens.edu/csw/cgibin/blogs/csw/?p=162
‘String theory now comes in so many versions that it “predicts” virtually anything!’
So you’re unconvinced by Lubos Motl’s top twelve results of string theory:
http://motls.blogspot.com/2006/06/toptwelveresultsofstringtheory.html
Notice that Joe Polchinski added the last couple, including the claim that the landscape of 10^500 variants of string theory is actually a major selling point, because it makes string theory cover such a large number of different models, it’s possible that one of them will include a small positive cosmological constant.
I think that Peter Woit was working on QCD lattice calculations in the early 1980s and used an impressive idea discovered by Witten to make checkable predictions. Then Woit moved on to studying the chiral symmetry problems (i.e. the issue of why the weak force only acts on left-handed spinning particles, and how this relates to electroweak theory and its symmetry breaking), which is a real problem. Then, after the first string revolution in 1985, mainstream attention moved away from trying to better understand the real-world symmetries of the Standard Model, and towards string theories with their non-empirical, imaginary 'problems' about unobserved grand unification and unobserved gravitons. So I think that's the reason why Dr Woit has a different perspective; he was left high and dry when the tide went out, leaving gauge theory a dead end. The string theorists can't predict the Standard Model details because the nature of the vacuum in string theory is sensitively dependent on the compactification of 6 unobservable dimensions, and the Calabi-Yau manifold can allow many variations depending on the moduli of the extra dimensions' sizes and shapes, as well as the way that those are stabilized with Rube Goldberg machines to give metastable ground state configurations to the vacuum.
The key question, assuming electroweak theory is basically correct (the simplest Higgs theory of electroweak symmetry breaking may be wrong, and there are several variants with multiple Higgs bosons to decide between at a more complex level, assuming that the Goldstone theorem of symmetry breaking is correct), is how gravity relates to the existing U(1) x SU(2) x SU(3) Standard Model of electromagnetic, weak, and strong forces.
“In the standard model, at temperatures high enough so that the symmetry is unbroken, all elementary particles except the scalar Higgs boson are massless. At a critical temperature, the Higgs field spontaneously slides from the point of maximum energy in a randomly chosen direction, like a pencil standing on end that falls. Once the symmetry is broken, the gauge boson particles — such as the leptons, quarks, W boson, and Z boson — get a mass. The mass can be interpreted to be a result of the interactions of the particles with the “Higgs ocean”.” – http://en.wikipedia.org/wiki/Higgs_mechanism#General_Discussion
The U(1) x SU(2) x SU(3) Standard Model is very nice at first contact. I’m convinced that the QCD model represented by the SU(3) symmetry is correct, because it’s so firmly tied to empirical evidence such as the eightfold way of categorising and predicting particle properties. So my interest is focussed on the U(1) x SU(2) electroweak symmetry.
What disgusts me is that once the symmetry is broken at low energy, U(1) is not electromagnetism and SU(2) is not the weak force.
Instead, the ad hoc, empirical electroweak Weinberg mixing angle of the Standard Model means that the true gauge boson of electromagnetism is not simply the massless photon of U(1), but is instead a composite of the neutral gauge bosons of U(1) and SU(2). Similarly, the massive neutral gauge boson of SU(2) is not the observed massive neutral gauge boson of the weak force!
To make U(1) x SU(2) model the experimentally known facts of electromagnetism, the neutral gauge bosons of each must be mixed together. The usual oversimplified books on the Standard Model falsely claim that electromagnetism is the U(1) symmetry and the weak force is the SU(2) symmetry.
But the SU(2) gauge bosons are W+, W-, and W0, where the W0 has never been observed. Instead, the observed neutral weak gauge boson is the Z0, which is a mix of the electromagnetic U(1) massless neutral gauge boson (the B) and the W0. Similarly, the photon of electromagnetism is not the B gauge boson of U(1) alone, but the orthogonal mixture resulting from the same mix-up between the B and the W0 of SU(2). There is no theoretical basis for mixing up electromagnetism and the weak force in this way. The Weinberg mixing angle is about 29 degrees (sin^2 of the angle is about 0.23), and this comes empirically by fitting the theory to data; it's not a theoretical prediction that is compared to experiment.
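The mixing described above can be sketched numerically using the standard parametrisation (sin^2 of the Weinberg angle ≈ 0.231 is the empirically fitted value); one non-trivial check of the mixing picture is the tree-level mass relation m_W = m_Z cos(theta_W):

```python
import math

# Electroweak mixing in the Standard Model: the photon and Z0 are
# orthogonal mixtures of the U(1) boson B and the SU(2) boson W0.
#   photon = B*cos(theta_W) + W0*sin(theta_W)
#   Z0     = -B*sin(theta_W) + W0*cos(theta_W)
sin2_theta = 0.231                        # empirically fitted value
theta = math.asin(math.sqrt(sin2_theta))  # Weinberg angle in radians
print(f"Weinberg angle ~ {math.degrees(theta):.1f} degrees")

# Tree-level check of the mixing: m_W = m_Z * cos(theta_W).
m_Z = 91.19                               # GeV, measured Z0 mass
m_W_predicted = m_Z * math.cos(theta)
print(f"m_W predicted from mixing: {m_W_predicted:.1f} GeV (measured ~80.4)")
```

The tree-level prediction of about 80 GeV for the W mass, close to the measured 80.4 GeV, is the usual argument that the mixing is physically real even though the angle itself is fitted rather than derived.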
So the U(1) x SU(2) electroweak symmetry theory is not too impressive, it’s a gigantic mess and a gigantic fraud in the way it is explained to the public as being simple and elegant. In addition, because gravity has more similarity to electromagnetism than it does to any other known force (both are infiniterange, inverse square law forces), one would expect that gravity could come into the electroweak symmetry sector of the Standard Model, when a complete theory including quantum gravity is discovered.
U(1) has other problems as a gauge theory. E.g., it has only one kind of charge, while electromagnetism comes with two types (positive and negative, or for magnetism, north pole and south pole). It also has only one type of gauge boson, which has to have 4 polarizations in order to account for being able to cause attraction of unlike and repulsion of like charges. It's pretty obvious to me that if you consider a single electron, it's surrounded by a negative electric field, which is due to the gauge bosons being exchanged. Hence, negative charge is mediated by charged (negative) massless gauge bosons. Because exchange radiation passes two ways along any route (from charge A to B and back to charge A), the magnetic field curls of the charged gauge bosons will automatically cancel, preventing infinite self-inductance, so there is no problem with having massless electrically charged radiation as long as it is being exchanged along a two-way route and not just trying to propagate in one direction only.
So why not kick U(1) out of the picture and replace it by an SU(2) group for electromagnetism? Here you have two charges and three gauge bosons: by legitimately adapting the Higgs symmetry-breaking theory (Higgs is a speculative theory with no empirical confirmation, not an established fact), you can easily get some replacement to U(1) x SU(2) which looks like SU(2) x SU(2), or just SU(2), where instead of the gauge bosons being massless only at high energy and massive at low energy, some (say one handedness) remain massless at low energy.
Hence, the massless charged SU(2) gauge bosons at low energy produce electromagnetism, while still at low energy the massive versions of those remain the W+ and W- and produce the left-handed weak force. The uncharged W0 produces the weak force's massive Z0 for neutral currents, and its massless version is the spin-1 graviton. This fits into a gauge mechanism I'm interested in that makes predictions that can be checked.
Update (24 June 2008):
http://coraifeartaigh.wordpress.com/2008/06/15/thestandardmodel/#comment129
A big problem comes from the Abelian U(1) electromagnetic theory, e.g. the Weinberg mixing angle.
U(1) has 1 charge and 1 gauge boson and is supposed to model electromagnetism, while SU(2) has 2 charges (two isospins) and 3 gauge bosons (neutral, positive and negative in charge) for the neutral currents and W+/- bosons of the weak force.
But the observable gauge boson of electromagnetism and the observable Z_0 of the weak neutral currents, are not adequately modelled by U(1) and SU(2) respectively, so an ad hoc mixing of the two is needed. So neutral gauge bosons B and W_0 from U(1) and SU(2) need to be mixed together to produce something modelling the observable photon and observable Z_0 gauge boson.
This is an entirely empirical correction, with the Weinberg mixing angle coming not from theory but from adjustment to make the theory model the observables. It makes U(1) x SU(2) a very complex and inelegant theory of electroweak phenomena, even before you get into discussing the Higgs mechanism that you need to break the symmetry and make it consistent with observations.
I think that a much better model would be to change the Higgs mechanism and just use SU(2), so that instead of the Higgs mechanism giving mass to all of the gauge bosons at low energy, it gives mass to only some of them, so that only left-handed spinors interact with the weak gauge bosons.
The rest of the W_+, W_-, and W_0 gauge bosons of SU(2) remain massless, and we observe them as electromagnetic (massless W_+ and W_-) and gravitational (massless W_0) gauge bosons. The extra polarizations of the gauge-boson photon (it has 4, rather than the usual 2 polarizations for photons) come from the electric charge carried. Normally massless radiation can't propagate if it has an electric charge, due to the infinite magnetic self-inductance which would result, but that is cancelled out in the case of exchange radiation, because the magnetic field curls cancel between the oppositely-directed flows of gauge bosons going from one charge to another and back. This scheme allows a full causal mechanism of exchange radiation causing electromagnetic and gravitational forces, predicting the coupling parameters.
When you think about U(1) for electromagnetism, it is an extremely problematic theory. You have only one electric charge, so opposite charge must be considered to be charge going backwards in time. Then you only have 1 type of gauge boson, with 4 polarizations and no explanation of what the additional 2 polarizations are, beyond the fact that you need them to allow repulsion as well as attraction forces. It is possible to remove U(1) and extend the role of SU(2) to include electromagnetism and gravitation, simply by modifying the Higgs mechanism so that it allows some massless versions of SU(2) gauge bosons to exist at low energy. This makes new checkable predictions, and is consistent with the observationally checked aspects of the existing Standard Model.
******************************************
25 June 2008:
Simple calculation of the bare-core electromagnetic charge of a single fundamental particle (without shielding by the polarized vacuum). (I'm not including the delta symbols here; the deltas cancel out in the end anyway. Also, the inclusion of mass in the field quanta here is no more wrong than the inclusion of a mass term in Professor Zee's calculation of the Coulomb law from quantum field theory – attributed in a footnote to an approximation by Sidney Coleman – in his book Quantum Field Theory in a Nutshell. The only difference is that the calculation below makes a quantitative prediction, and the Zee/Coleman calculation doesn't. Both agree with the inverse-square law.)
Heisenberg’s uncertainty principle (momentum-distance form):
ps = hbar (minimum uncertainty)
For relativistic particles, momentum p = mc, and distance s = ct.
ps = (mc)(ct) = t*mc^2 = tE = hbar
This is the energy-time form of Heisenberg’s law.
E = hbar/t = hbar*c/s
Putting s = 10^{-15} metres into this (i.e. the average distance between nucleons in a nucleus) gives us the predicted energy of the strong nuclear exchange radiation, about 200 MeV. According to Ryder’s Quantum Field Theory, 2nd ed. (Cambridge University Press, 1996, p. 3), this is what Yukawa did in predicting the mass of the pion (140 MeV), which was discovered in 1947 and which causes the attraction of nucleons. In Yukawa’s theory, the strong nuclear binding force is mediated by pion exchange, and the pions have a range dictated by the uncertainty principle, s = hbar*c/E. He found that the potential energy in this strong force field is proportional to e^{-R/s}/R, where R is the distance of one nucleon from another and s = hbar*c/E, so the strong force between two nucleons is proportional to e^{-R/s}/R^{2}, i.e. the usual inverse-square law multiplied by an exponential attenuation factor. What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts with massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force faster with distance than the observed inverse-square law (thus giving the exponential term in e^{-R/s}/R^{2}). So it’s easy to suggest that the original LeSage gravity mechanism with limited-range massive particles, and its “problem” of the shadows getting filled in by vacuum particles diffusing into the shadows (and cutting off the force) after a distance of a few mean free paths of radiation-radiation interactions, is actually a clue about the real mechanism in nature: the physical cause behind the short range of the strong and weak nuclear interactions, which are confined to the nucleus of the atom! For gravitons, in a previous post I have calculated their mean free path in matter (not the vacuum!) to be 3.10*10^{77} metres of water; because of the tiny (event horizon-sized) cross-section for particle interactions with the intense flux of gravitons that constitutes the spacetime fabric, the probability of any given graviton hitting that cross-section is extremely small. Gravity works because of an immense number of very weakly interacting gravitons. Obviously quantum chromodynamics governs strong interactions between quarks, but a residue of that allows pions and other mesons to mediate strong interactions between nucleons. But there is still a further step, one which makes falsifiable predictions, that we can make using this result of E = hbar*c/s.
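The E = hbar*c/s step is easy to check numerically. A minimal sketch, where the CODATA constants are my inputs rather than the text's:

```python
# E = hbar*c/s for s = 1e-15 m, the nucleon spacing quoted in the text.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
s = 1e-15                # metres (roughly the spacing between nucleons)
E_joules = hbar * c / s
E_MeV = E_joules / 1.602176634e-13   # 1 MeV = 1.602176634e-13 J
print(E_MeV)  # ~197 MeV, the "about 200 MeV" quoted above
```

This reproduces the familiar shortcut hbar*c ~ 197 MeV·fm, which is why the pion-mass estimate lands near 200 MeV.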
Work energy is force multiplied by the distance moved due to the force, in the direction of the force (we won’t need bold print to remember that these are vectors, because we know what we are doing physically):
E = Fs = hbar*c/s
F = hbar*c/s^{2}
which is the inverse-square geometric form for force. This derivation is a bit oversimplified, but it allows a quantitative prediction: it predicts a relatively intense force between two unit charges, some 137.036… times the observed (low-energy physics) Coulomb force between two electrons, hence it indicates an electric charge of about 137.036 times that observed for the electron. This is the bare-core charge of the electron (the value we would observe for the electron if it wasn’t for the shielding of the core charge by the intervening polarized-vacuum veil, which extends out to a radius on the order of 1 femtometre). What is particularly interesting is that it should enable QFT to predict the bare-core radius (and the grain-size vacuum energy) for the electron, simply by setting the logarithmic running-coupling equation to yield a bare-core electron charge of 137.036 times the value observed in low-energy physics. That logarithmic equation (see http://arxiv.org/PS_cache/hepth/pdf/9803/9803075v2.pdf and particularly http://arxiv.org/PS_cache/hepth/pdf/0510/0510040v2.pdf pages 70-71) correctly predicted an increase in observable electron charge to 1.07 times the low-energy value when electrons are collided at 90 GeV. The collision energy is directly related to distance from a particle core, because the harder electrons collide, the closer they approach before being repelled (bouncing back). They reach the point of closest approach when 100% of their kinetic energy has been converted into potential energy in the Coulomb field of the particles, which stops the approaching particle; then electrostatic repulsion immediately begins to accelerate the particle backwards. However, the logarithmic equation includes a summation of all the pair-production loops of fermions which can be polarized in the electric field, from low energy right up to the extremely high energy of concern to us.
There are lots of virtual fermions that exist over such an enormous span of energy scales, so the summation is a major project.
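The "137.036 times the Coulomb force" claim above amounts to the statement that hbar*c/s^2 exceeds e^2/(4*pi*eps0*s^2) by the reciprocal fine-structure constant, since the s^2 cancels in the ratio. A quick hedged check with standard constants (my inputs, not the text's):

```python
import math

hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
e = 1.602176634e-19       # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
s = 1e-15                 # any distance works: the ratio is s-independent

F_heisenberg = hbar * c / s**2                  # the text's F = hbar*c/s^2
F_coulomb = e**2 / (4 * math.pi * eps0 * s**2)  # Coulomb force, two electrons
print(F_heisenberg / F_coulomb)  # ~137.036, i.e. 1/alpha
```

Whatever one makes of the physical interpretation, the numerical identity hbar*c*4*pi*eps0/e^2 = 1/alpha is exact by the definition of the fine-structure constant.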
Update (9 July 2008):
How to censor out scientific reports without bothering to read them
Here’s the standard four-stage mechanism for avoiding decisions by ignoring reports. I’ve taken it directly from the dialogue of the BBC TV series Yes Minister, Series Two, Episode Four, ‘The Greasy Pole’, 16 March 1981, where a scientific report needs to be censored because it reaches scientific but politically-inexpedient conclusions (which would be very unpopular in a democracy where almost everyone has the same prejudices, and the majority bias must be taken as correct in the interests of democracy, regardless of whether it actually is correct):
Permanent Secretary Sir Humphrey: ‘There’s a procedure for suppressing … er … deciding not to publish reports.’
Minister Jim Hacker: ‘Really?’
‘You simply discredit them!’
‘Good heavens! How?’
‘Stage One: give your reasons in terms of the public interest. You point out that the report might be misinterpreted. It would be better to wait for a wider and more detailed study made over a longer period of time.’
‘Stage Two: you go on to discredit the evidence that you’re not publishing.’
‘How, if you’re not publishing it?’
‘It’s easier if it’s not published. You say it leaves some important questions unanswered, that much of the evidence is inconclusive, the figures are open to other interpretations, that certain findings are contradictory, and that some of the main conclusions have been questioned.’
‘Suppose they haven’t?’
‘Then question them! Then they have!’
‘But to make accusations of this sort you’d have to go through it with a fine toothed comb!’
‘No, no, no. You’d say all these things without reading it. There are always some questions unanswered!’
‘Such as?’
‘Well, the ones that weren’t asked!’
‘Stage Three: you undermine the recommendations as not being based on sufficient information for long-term decisions, valid assessments, and a fundamental rethink of existing policies. Broadly speaking, it endorses current practice.’
‘Stage Four: discredit the man who produced the report. Say that he’s harbouring a grudge, or he’s a publicity seeker, or he’s hoping to be a consultant to a multinational company. There are endless possibilities.’
Go to 2 minutes and 38 seconds in the YouTube video (above) to see the advice quoted on suppression!
These are the key steps used in ignoring science. They’re used not just by governments, but by everyone, as an excuse to avoid the expenditure of time necessary to check out new reports which attempt to move beyond the status quo. Now try reading this and deciding how easy it is to censor it:
‘In 1929 Hubble discovered a linear correlation between the redshift of distant galaxies and clusters of galaxies, and their distance. The recession velocities v were directly proportional to the radial distance of the receding object from us, r. Hence v = Hr, where H is Hubble’s constant, which has units of 1/time. This is then fitted into general relativity using the Friedmann-Robertson-Walker metric, which can’t fit any observations since it has a continuously variable landscape of infinitely many arbitrarily adjustable solutions, and doesn’t predict the cosmological acceleration.
‘Because of spacetime, you are simultaneously looking back to earlier times when you look to ever greater distances. So the recession velocity is varying as a function of observable time, giving the observable acceleration.
‘Hence, Hubble could have legitimately plotted acceleration: the derivative of the Hubble law is: a = dv/dt = d(Hr)/dt = H(dr/dt) + r(dH/dt) = Hv + r*0 = Hv = rH^2.
‘For the greatest distances approaching the visible horizon (where recession is highly relativistic, with massive redshifts), this predicts a = Hc = 6*10^{-10} m/s^{2}, which is the tiny observed cosmological acceleration at such distances.
‘Another option is that Hubble could just have plotted velocities versus times past, defined by the travel time of the light from source to the observer on the Earth, t = r/c. This would have given him a constant ratio v/t which has units of acceleration: a = v/t = (Hr)/(r/c) = Hc = 6*10^{-10} m/s^{2}.
‘Although this gives the same prediction for cosmological acceleration at very large distances, it differs from the prediction at smaller distances:
‘Approach 1: a = dv/dt = d(Hr)/dt = H(dr/dt) + r(dH/dt) = Hv + r*0 = Hv = rH^2.
‘Approach 2: a = v/t = (Hr)/(r/c) = Hc.
‘The first approach differs from the second because of the definition of time. All times are implicitly defined as t = r/v in the first approach, but in the second approach time is explicitly defined as t = r/c. The change from v to c results in the different prediction. Approach 1 deals with the times taken for recession, while Approach 2 deals with the time taken by light or other bosonic radiation such as gravitons to reach us.
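The two approaches quoted above can be compared numerically. This is a hedged sketch using an assumed H = 70 km/s/Mpc, a value I'm supplying (the text doesn't state one); at the horizon distance r = c/H the two predictions coincide, as claimed:

```python
H = 70e3 / 3.0857e22   # assumed Hubble parameter, converted to 1/s
c = 2.99792458e8       # speed of light, m/s

def a_approach1(r):
    """Approach 1: a = r * H^2, growing linearly with distance r."""
    return r * H**2

def a_approach2():
    """Approach 2: a = H * c, the same at every distance."""
    return H * c

r_horizon = c / H              # distance where recession approaches c
print(a_approach2())           # ~6.8e-10 m/s^2, the text's ~6e-10
print(a_approach1(r_horizon))  # equal to Approach 2 at the horizon
```

At smaller r, Approach 1 falls below Approach 2, which is the difference the paragraph above attributes to the two definitions of time.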
‘However, the magnitude of the acceleration for the furthest distances, where most of the mass of the universe is effectively located from our perspective, is similar in both arguments (remembering that observable density increases with distance, since we’re looking back in time to earlier times after the big bang, where density was higher, and in any case – even if there was a uniform density distribution – most of the mass is still located at the greatest distances from an observer, because the mass contained within a given volume of uniform density is proportional to the cube of the radius of that volume, so a disproportionately large amount of mass is found at the greatest radial distances).
‘Besides making the subsequently confirmed prediction of the acceleration of the universe, this also predicts the correct strength of quantum gravity, because the acceleration of matter radially outward implies an outward force (Newton’s second, empirically confirmed, law states F = ma), and this outward force implies, by Newton’s third, empirically confirmed, law, that there is an equal force in the opposite direction, i.e. inward-directed (analogous to the inward-directed implosion wave you get when you set off an explosion, which is well understood and is actually used to squeeze the fissile core in all modern fission explosives). This inward force is mediated by the spacetime fabric of gravitational field quanta, gravitons. We can predict the quantitative amount of gravity because the mechanism is quantitative, being closely tied to empirical facts, unlike string theory’s cranky spin-2 graviton.
‘The fact-based prediction of the acceleration (a small positive cosmological constant) was published in 1996 in Electronics World (the editor of Nature turned it down) and confirmed by observations made two years later by Perlmutter, who published in Nature. There was no mention in Nature that the acceleration had been predicted in 1996. Philip Campbell, editor of Nature, censored repeated letters sent by recorded delivery, as did other physical-sciences editors at Nature. Other journals, including Classical and Quantum Gravity, submitted the paper for so-called ‘peer’ review to string ‘theorists’ who had no knowledge of this area of physics or interest in reading the paper, and who didn’t read it, judging from the ignorant comments they made about the facts.
‘While this has been censored out, I’ve invested my spare time in extending and checking this further. However, the mainstream doesn’t want to investigate it because it’s innovative and therefore unorthodox, and it is stuck with trying to use a classical approximation to gravitation – general relativity’s FriedmannRobertsonWalker metric – to interpret the Hubble recession, while the crackpots who are more interested in quantum gravity (and see general relativity as just a classical approximation to quantum gravity) aren’t interested because they don’t pay attention to the Hubble recession effect at all, perhaps because of crackpot dismissals of the Hubble law, which are rightly discredited at http://www.astro.ucla.edu/~wright/tiredlit.htm
‘Whenever I set out to explain this mechanism to anybody, I face having to teach them the facts first, then the difference between the facts of physics and the speculations. Instead of discussing the mechanism of gravity, the conversation or correspondence immediately gets focussed on me having to explain the facts of physics, e.g. the evidence for cosmological recession. If they are professional physicists who happen to have studied cosmology, then the conversation again gets bogged down with me explaining what is right in general relativity and what is unhelpful in it. E.g., general relativity models the gravitational and accelerative forces as results of a curvature in spacetime caused by a gravity source. It makes falsifiable predictions because the simple tensor field equation representing the relationship between the source of gravitation and the resulting curvature has to have a term included for relativistic effects in order for mass-energy to be conserved. It is this added term which makes light-velocity objects fall with twice the gravitational acceleration of low-velocity (non-relativistic) objects, and this is the basis for predicting that a photon is deflected by the sun’s gravity to twice the extent that a slow bullet would be deflected by gravity when moving along the same initial trajectory. So general relativity is genuine, falsifiable physics in this sense. It’s right in this sense. It’s only wrong in the sense that it’s based on using calculus (continuous variables) to represent discontinuous field-quanta (graviton) effects, and also in the stress-energy tensor, where the really discontinuous sources of gravity (fundamental particles of mass and energy) are replaced by a perfect-fluid approximation to make the calculus work.
‘The real tragedy of general relativity is that it prevents the professional physicist from investigating the simple facts. General relativity doesn’t include a mechanism for gravity, so the large-scale models of cosmology based on general relativity, e.g. the Friedmann-Robertson-Walker metric, are missing vital facts because they don’t include the cause of gravity as the result of a reaction force (mediated by gravitons) to the accelerative expansion of the universe. So the mainstream professional physicist who is trying to use the Friedmann-Robertson-Walker metric of general relativity to model cosmology is in the position of Ptolemy, using epicycles. The results are impressively complex mathematical epicycles that can be fiddled to fit anything, but which don’t predict the cosmological acceleration or the strength of gravitation.
‘Professional physicists specializing in quantum field theory are either dismissive of cosmology or else dismissive of mechanisms. So they either refuse to read http://www.astro.ucla.edu/~wright/tiredlit.htm or else they proclaim their religious faith that the universe is purely mathematical, with no mechanisms present for fundamental interactions. Sadly, there is a third category, in which quantum field theorists have zero interest in whether Hubble’s law is being correctly applied to cosmology, and also zero interest in a mechanism for quantum gravity. Such people have no willingness to discuss physics at all, just non-falsifiable speculative mathematical models. The gauge-boson exchange radiations in Feynman diagrams are to such people just imaginary, and the discovery of the weak gauge bosons in 1983, and other evidence that quantum-field exchange radiations are real, are best ignored. So in every direction, this mechanism faces opposition or apathy from crackpots of one sort or another.’
Is the calculation of isolated quark masses pseudoscience (since you can’t isolate quarks)?
Mass of electron, M_e = (Mass of Z_0)*(alpha^2)/(3*Pi) = 0.51 MeV
Mass of all other isolated particles = (M_e)*n(N + 1)/(2*alpha) = 35n(N + 1) MeV
where n is number of fundamental particles in the isolated particle (n = 1 for leptons, n = 2 for 2 quarks in a meson, and n = 3 for 3 quarks in a baryon), and N is the discrete number of massive field quanta which give mass to the particle (as with nuclear physics ‘magic numbers’ for stable nucleon shell configurations, N = 1, 2, 8 and 50 give high stability results, predicting observed particle masses).
Some examples:
For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV
For mesons (n = 2 quarks per meson), N = 1 gives the pion: 35n(N+1) = 140 MeV
For baryons (n = 3 quarks per baryon), N = 8 gives nucleons: 35n(N+1) = 945 MeV
Above: observable fundamental particle masses, i.e. hadrons and leptons with lifetimes exceeding 10^{-23} second. ‘Quark masses’ tend to be a crackpot concept because they aren’t observable even in principle, since quarks can never be isolated, although you can calculate effective masses for different interactions. The mainstream focus on calculating isolated quark masses, when quarks can’t be isolated, is crackpot. Quarks have observable properties, such as causing effects in particle scattering and giving the neutron a magnetic moment, but it makes no sense to go on claiming that quark masses are the fundamental building blocks of hadronic matter.
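The four examples above can be reproduced directly from the quoted formula M = M_e*n(N+1)/(2*alpha). A hedged sketch, where the constants M_e = 0.511 MeV and alpha = 1/137.036 are standard values I'm assuming:

```python
M_e = 0.511          # electron mass, MeV
alpha = 1 / 137.036  # fine-structure constant

def mass_MeV(n, N):
    """M = M_e * n * (N + 1) / (2 * alpha), i.e. roughly 35*n*(N+1) MeV;
    n = fundamental particles in the state, N = massive field quanta."""
    return M_e * n * (N + 1) / (2 * alpha)

for label, n, N in [("muon", 1, 2), ("tauon", 1, 50),
                    ("pion", 2, 1), ("nucleon", 3, 8)]:
    print(label, round(mass_MeV(n, N)))
```

With the exact constants the coefficient is 35.01 rather than 35, so the tauon comes out at about 1786 MeV rather than the 1785 MeV quoted; the other three examples round to 105, 140 and 945 MeV as in the list.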
In previous posts here and here, there’s evidence for the composition of all observable lepton and hadron masses from a slightly Higgs-like (miring) boson field which exists in the vacuum and interacts with gravitons. Massless gravitons are exchanged between massive Higgs-like bosons; the latter give observed particles their mass and cause photons to be deflected by gravitation.
Overemphasis on quark masses – since quarks can’t be isolated, most of the observable mass of particles that contain quarks is the strong force field energy binding the quarks together – is pseudoscience. In fact, the calculations that try to predict hadron masses are lattice gauge theories, mechanistic simulations of the real vacuum interactions of particles with fields (exactly what I’m interested in, but focussed just on strong fields not on gravitation and electromagnetism).
http://dorigo.wordpress.com/2008/07/08/atopmassmeasurementtechniqueforcmsandatlas/
Hi Tommaso, thanks for this interesting article. The way you estimate the top quark mass looks very complicated and indirect. I’m just a bit surprised by discussions everywhere of ‘quark masses’, as if such masses were somehow fundamental properties of nature. Because quarks can’t be isolated, giving them masses as if they were isolated doesn’t make sense: the databook-quoted masses of the three quarks in a baryon add up to only about 10 MeV, which is just 1% of the mass of the baryon. So most of the mass in matter is due to mass associated with virtual-particle vacuum fields, not intrinsically with the long-lived quarks. Therefore the whole exercise of assigning masses to particles which can’t be isolated looks like nonsense? Surely the most striking fact here is that the observable masses of particles (hadrons) containing long-lived quarks have nothing much to do with the masses associated with those long-lived quarks? So why discuss imaginary (not isolated in practice) quark masses at all? What matters in physics is what we can observe, and since quarks can’t be isolated, the calculations have no physical correspondence to the mass of anything that really exists!
On p102 of Siegel’s book Fields, http://arxiv.org/PS_cache/hepth/pdf/9912/9912205v3.pdf, he points out:
‘The quark masses we have listed are the “current quark masses”, the effective masses when the quarks are relativistic with respect to their hadron (at least for the lighter quarks), and act as almost free. But since they are not free, their masses are ambiguous and energy dependent, and defined by some convenient conventions. Nonrelativistic quark models use instead the “constituent quark masses”, which include potential energy from the gluons. This extra potential energy is about .30 GeV per quark in the lightest mesons, .35 GeV in the lightest baryons; there is also a contribution to the binding energy from spinspin interaction. Unlike electrodynamics, where the potential energy is negative because the electrons are free at large distances, where the potential levels off (the top of the “well”), in chromodynamics the potential energy is positive because the quarks are free at high energies (short distances, the bottom of the well), and the potential is infinitely rising. Masslessness of the gluons is implied by the fact that no colorful asymptotic states have ever been observed.’
I first heard about quarks as an A-level physics student in 1988, and couldn’t understand why the masses of the three quarks in a proton weren’t each about a third of the proton mass. I did quantum mechanics and general relativity later, no particle physics or quantum field theory, so it’s only recently that I’ve come to understand the enormous importance of field energy (binding energy) in particle masses. So from my perspective, I can’t see why anybody cares about quark masses. Because quarks can’t be isolated, such masses are just a mathematical invention; quarks will always really have different masses because of the fields around them when they are in hadrons. In the Standard Model, quarks don’t have any intrinsic masses anyway; the mass is supplied externally by the Higgs field. Whether it is 99% or 100% of the mass of quark-composed matter which is in the field binding quarks into hadrons, surely the masses of the quarks themselves are not important. Surely, to predict observable particle masses, a theory needs to predict the binding energy tied up in the strong force field, not just add up quark masses. This seems to indicate that the mainstream is off in fantasy land when trying to estimate quark masses and present them as somehow ‘real’ masses, when they aren’t real masses at all.
2 Comments »
 Dear Nige, it is of course true that quarks cannot be isolated, and that their “current mass” values are not measurable with infinite precision. In that sense, you could even say that those quantities are ill-defined, to a certain extent (larger for light quarks, O(100 MeV) for the top quark).
What should surprise you is that that kind of fuzzy definition does happen with most physical quantities we measure and use, to some extent: that is a source of systematic uncertainty which is usually neglected. What does it mean, for instance, when we say that the inner temperature of an alive human body is 100+x °F? Define body, define alive, define inner, define temperature from a microscopic point of view… What does it mean to say that the gravitational acceleration at sea level is 9.8 m/s^2? Define acceleration, define sea level, explain how it depends on whether we are at the north pole or on the equator, whether there’s water or rock around.
The message is the following: physical quantities we measure and use have a meaning in a certain context, less so if absolutized. The fine structure constant is a very good quantity to use in low-energy electrodynamics, but it is no constant in high-energy physics.
Current quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations. You should be careful to avoid equivocating the meaning of pseudoscience, which certainly does not apply to the case you mentioned.
Cheers,
T.
 Hi Tommaso, thank you for your reply. My response: http://dorigo.wordpress.com/2008/07/08/atopmassmeasurementtechniqueforcmsandatlas/#comment98677
‘Quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations.’ – Tommaso
Hi Tommaso,
Isn’t that a circular argument, because you’re defining quark masses on the basis of a calculation based on measured cross-sections, and then using that value to calculate cross-sections?
Since cross-sections are the effective target areas for specific interactions, I don’t see how mass comes into it. Whatever cross-section you are dealing with, it will be for a Standard Model interaction, not gravitation. Since mass would be a form of gravitational charge, mass is only going to be key to calculating cross-sections for gravitational interactions in quantum gravity.
Clearly mass can come into calculations of other interactions, but only indirectly. E.g., the masses of different particles produced in a collision will determine how much velocity the particles get, because of conservation of momentum.
Re: the analogy to the temperature of the human body. In comment 2 I’m not denying that quarks have mass; I’m just curious as to why so much mainstream attention is focussed on something that’s ambiguous. The internal temperature of the human body is easily measurable: a thermometer can be inserted into the mouth.
You can’t isolate quarks, so whatever mass you calculate for an isolated quark, you aren’t calculating a mass that exists. The actual mass will always be much higher because of the mass contribution from the strong force field surrounding the quark in a hadron.
Hi Lubos,
Quark masses can be well-defined in various different ways. They just can’t be isolated, so while they are useful parameters for calculations, they aren’t describing anything that can be isolated (even in principle). Nobody has ever measured the mass of an isolated quark; they have made measurements of interactions and inferred the quark mass, which doesn’t physically exist because quarks can’t exist in isolation. Other masses, such as lepton and hadron masses, may also involve indirect measurements, but at least there you end up with the mass of something that does exist on its own.
In any case, their masses are negligible compared to the masses of the hadrons containing quarks. So hadron masses are accounted for by the strong field energy between the 2 or 3 quarks in the hadron, not the masses of the quarks themselves.
Comment by nige cook — July 9, 2008 @ 2:27 pm
Further discussion on Tommaso’s blog:
‘Nige, no, no circularity. You need a top mass as a parameter if you want to determine the theoretical prediction for the number of topantitop events you collect. The top mass is needed because the parton luminosities depend on the fraction of momentum of the parent proton or antiproton they carry. There are fewer partons at larger momentum, so the cross section decreases with the top mass, because a higher top mass “fishes out” the rarer highmomentum partons.’ – Tommaso
Thanks for this explanation, but nobody measures the isolated mass of any quark, since quarks can’t be isolated. The derivation of the mass of the quark comes from reaction cross-sections, to make the theory work, and then you use that calculated quark mass to calculate something else. At no point has the isolated quark mass been measured, because it has never been isolated.
By analogy, the original 1960s string theory of strong interactions requires that the strings have a tension of something like 10 tons weight (100 kiloNewtons of force). This figure is required to make the theory describe the strong force, and using this parameter other things about the nucleus can be calculated. However, this isn’t the same thing as ‘measuring’ the tension of strings which bind nuclei together. Just because you can indirectly use experimental data to quantify some parameter and then use that parameter to make checkable calculations, doesn’t mean that it is a real parameter or that it is a real ‘measurement’. In the case of hadronic string theory, it was soon realised that exchange of gluons caused the strong interaction, not string tension.
I think it’s fundamentally misleading for properties of quarks to be quoted where those properties aren’t observable even in principle because of the impossibility of isolating a quark. It’s against Mach’s concept that physics be based on observables. Once you start popularising values for isolated quark masses when isolated quarks never exist even in principle, you break away from Mach’s conception of physics. Hadron masses are directly observable, and they are only 1% constituent quark masses, and 99% mass associated with hadron binding energy. I think that people should be analyzing the latter as a priority, to understand masses.
Hadron masses can be correlated in a kind of periodic table summarized by the expression
M = mn(N+1)/(2*alpha) = 35n(N+1) MeV,
where m is the mass of an electron, alpha = 1/137.036, n is the number of particles in the isolatable particle (n = 2 quarks for mesons, and n = 3 quarks for baryons), and N is the number of massive field quanta (such as Higgs bosons) which give the particle its mass. The particle is a lepton or a pair or triplet of quarks surrounded by shells of massive field quanta which couple to the charges and give them mass, then the number of massive particles which have a highly stable structure might be expected to correspond to the ‘magic numbers’ of nucleon shells in nuclear physics: N = 1, 2, 8 and 50 are such numbers for high stability.
For leptons, n=1 and N=2 gives: 35n(N+1) = 105 MeV (muon)
Also for leptons, n=1 and N=50 gives 35n(N+1) = 1785 MeV (tauon)
For quarks, n=2 quarks per meson and N=1 gives: 35n(N+1) = 140 MeV (pion).
Again for quarks, n=3 quarks per baryon and N=8 gives: 35n(N+1) = 945 MeV (nucleon)
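These four cases are easy to check numerically. A minimal sketch (the (n, N) assignments and the ~35 MeV scale factor are the author's; the measured masses are standard values):

```python
# Check of M = m*n*(N+1)/(2*alpha) = ~35*n*(N+1) MeV for the cases above.
m_e = 0.511            # electron rest mass-energy, MeV
alpha = 1 / 137.036    # fine structure constant

def mass(n, N):
    """Predicted mass (MeV) for n charges clothed by N massive field quanta."""
    return m_e * n * (N + 1) / (2 * alpha)

cases = [
    ("muon",    1, 2,  105.66),
    ("tauon",   1, 50, 1776.9),
    ("pion",    2, 1,  139.57),
    ("nucleon", 3, 8,  938.92),
]

for name, n, N, measured in cases:
    print(f"{name}: predicted {mass(n, N):.0f} MeV, measured ~{measured} MeV")
```

The scale factor m/(2*alpha) comes out at 35.01 MeV, so each prediction lands within a few percent of the quoted measured mass.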
I've checked this for particles with lifespans above 10^{-23} second, and the model does correlate well with the data: http://quantumfieldtheory.org/table1.pdf Obviously there's other complexity involved in determining masses. For example, as with the periodic table of the elements, you might get effects like isotopes, whereby different numbers of uncharged massive particles can give mass to a particular species, so that certain masses aren't integer multiples. (For a long while, the mass of chlorine was held by some people as a disproof of Dalton's theory of atomic weights.)
It’s just concerning that emphasis on ‘measuring’ and ‘explaining’ unobservable (isolated) quark masses deflects too much attention from the observable masses of leptons and hadrons.
nige cook – July 10, 2008
‘… Modern postmach Physics has and will continue to do very well to introduce and deal with primitive concepts that might even be unobservable as long as there are observable consequences…’
Hi goffredo,
There aren’t any observable consequences of calculating and using a value for the isolated mass of nonisolatable quarks.
All the other features of quarks, apart from the calculated 'isolated' mass, are real features that quarks have when in hadrons, not hypothetical values for isolated quarks. E.g., colour charge and spin are properties that quarks actually have when in hadrons. These are perfectly scientific because they refer to properties of quarks when in pairs or triplets inside mesons and baryons. What is less helpful in teaching the subject is to specify isolated masses for things that can't be isolated.
nige cook – July 10, 2008
A pretty good example of the issue of the lack of observable consequences is epicycles. Ptolemy used observational measurements to calculate the sizes and speeds of epicycles of planets. Those parameters were solid numbers, based on observations. He then calculated the positions of planets based on this model. However, despite the sizes of the epicycles being based on fitting the model to real world data, and despite the calculations based on the epicycle parameters being checked by observations, at no point was the epicycle size parameter measuring anything real. (It’s the same fallacy as where theorists use hadronic string theory to work out that the strong force is due to strings with a given amount of tension, and then use that parameter to calculate other things. At no point is that parameter anything measurable or real, even though it is indirectly based on measured data, and predicts measurable data.)
nige cook – July 11, 2008
“… the parameters describing epicycles are absolutely real and every theory that had or has at least an infinitesimal chance to survive must be able to account for their values… ” – Lubos Motl
This is sadly incorrect, Lubos. Those parameters aren't real. The epicycle theory did manage to fit and predict the apparent positions of planets which could be observed in Ptolemy's time (150 A.D.), but it fails today to predict the distances of the planets from us, which can now be measured. It assumes that all the planets, the moon, and the sun orbit the Earth in circles, while also going around small circles (epicycles) centred on that path during the orbit, in order to resolve the problems with circular orbits around the Earth.
http://everything2.com/title/Ptolemaic%2520system
“Ptolemy’s model was finally disproved by Galileo, when, using his telescope, Galileo discovered that Venus goes through phases, just like our moon does. Under the Ptolemaic system, however, Venus can only be either between the Earth and the Sun, or on the other side of the Sun (Ptolemy placed it inside the orbit of the Sun, after Mercury, but this was completely arbitrary; he could just as easily swapped Venus and Mercury and put them on the other side, or any combination of placements of Venus and Mercury, as long as they were always colinear with the Earth and Sun). If that was the case, however, it would not appear to go through all phases, as was observed. If it was between the Earth and Sun, it would always appear mostly dark, since the light from the sun would be falling mainly where we can’t see it. On the other hand, if it was on the far side, we would only be able to see the lit side. Galileo saw it small and full, and later large and crescent. The only (reasonable) way to explain that is by having Venus orbit the Sun.”
Other specific points on the error of epicycles:
1. The Moon was always a serious problem for the theory of epicycles. In order to predict where in the sky the Moon would be at any time using epicycles (instead of an elliptical orbit of the Moon as it goes around the Earth), Ptolemy's epicycles for the Moon have the unfortunate problem of making the Moon recede and approach the Earth regularly, to the extent that the apparent diameter of the Moon would vary by a factor of two. Since the Moon's apparent diameter doesn't vary by a factor of two, there is a serious disagreement between the correct elliptical orbit theory and Ptolemy's epicycle fit to observations of the path of the Moon around the sky. You can get epicycles to fit the positions of planets or the Moon in terms of latitude and longitude on the map of the sky, but the model doesn't accurately give how far the planet or the Moon is from the Earth. The problem for the Moon also exists with the other planets, whose angular diameters were too small to check against the epicycle theory in Ptolemy's time, when there were no telescopes.
2. If you knew the history of the solar system, you'd also be aware that the classical area of physics including Newton's laws stems ultimately from Tycho Brahe's observations of the planets. He obtained many accurate data points on the position of Mars, and Kepler tried analysing that data according to epicycles and gave up when it didn't provide a suitably accurate model. This is why he moved over to elliptical orbits. With lots of epicycles and many adjustable parameters, the landscape of possible models made from epicycles is practically infinite, so you can use the anthropic principle to select suitable epicycles and parameters to model planetary positions of longitude and latitude on the celestial sphere. Once you have determined nice fits to the empirical data by selecting suitable parameters for epicycle sizes (by analogy to the selectable size and shape moduli of the Calabi-Yau manifold when producing the string landscape), you can then 'predict' the paths of planets and the Moon around the sky as seen from the Earth. But it is not a good three-dimensional model; it fails to predict accurately how far the planets are from the Earth.
“Moreover, cook Nigel has also screwed the analogy with highenergy physics. His map is upside down.” – Lubos.
Maybe my map just appears to you to be upside down because you're standing on your head? Thanks for giving us your wisdom on how we can go on using epicycles and string theory.
Copy of a comment to Louise Riofrio’s blog (11 July 2008):
http://riofriospacetime.blogspot.com/2008/07/andromeda.html
Nice post. Andromeda is very interesting, since it's relatively nearby and is a rare blueshifted galaxy: the Milky Way is being attracted to it, so the two galaxies are approaching, not receding as is the case with other galaxies.
I love the fact that black holes exist in the centre of galaxies.
“If Black Holes seeded formation of these stars, their continued presence would keep the stars stable.”
Presumably the first stars that formed shortly after the big bang grew very large because there were really massive clouds of hydrogen gas which collapsed to form them.
They fused hydrogen into heavier elements quickly, then exploded as supernovae (such as the one which created all the heavy elements in the solar system’s planets) or collapsed into black holes, which then seeded galaxy formation.
I realise that you are busy with spacesuit design and that other people like Kea and Carl Brannen are busy with Category Theory and Mass Operators/Koide formula theory development, but may I just summarise here some evidence about the possibility of fundamental particle cores being black holes and Hawking radiation as a gauge theory exchange radiation?
1. A black hole with the electron’s mass would by Hawking’s theory have an effective black body radiating temperature of 1.35*10^53 K. The Hawking radiation is emitted by the black hole event horizon which has radius R = 2GM/c^2.
2. The radiating power per unit area is the Stefan-Boltzmann constant multiplied by the kelvin temperature raised to the fourth power, which gives about 1.3*10^205 watts/m^2. For the black hole event horizon spherical surface area, this gives a total radiated power of about 3*10^92 watts.
3. For an electron to keep radiating, it must be absorbing a similar power. Hence it looks like an exchange-radiation theory where there is an equilibrium. The electron receives 3*10^92 watts of gauge bosons and radiates 3*10^92 watts of gauge bosons. When you try to move the electron, you introduce an asymmetry into this normal equilibrium, and this asymmetry is felt as inertial resistance, in the way broadly argued (for a zero-point field) by people like Professors Haisch and Rueda. It also causes compression and mass increase effects on moving bodies, because of the snowplow effect of moving into a radiation field and suffering a net force.
When the 3*10^92 watts of exchange radiation hit an electron, the momentum imparted by the absorbed radiation is p = E/c, where E is the energy carried; when the radiation is re-emitted back in the direction it came from (like a reflection), it gives a recoil momentum to the electron of a further p = E/c, so the total momentum imparted to the electron by the whole reflection process is p = E/c + E/c = 2E/c.
The force imparted by successive collisions, as in the case of any radiation hitting an object, is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c = 2*10^84 Newtons, where P is power, as distinguished from momentum p.
So the Hawking exchange-radiation force for black holes would be 2*10^84 Newtons.
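The three numbered steps above can be reproduced from the standard Hawking-temperature and Stefan-Boltzmann formulae; a quick sketch, whose results agree with the quoted figures to order of magnitude:

```python
# Order-of-magnitude check of the figures above for an electron-mass black hole.
import math

hbar, c, G, k_B = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
sigma = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
M = 9.109e-31             # electron mass, kg

T = hbar * c**3 / (8 * math.pi * G * M * k_B)   # Hawking temperature
R = 2 * G * M / c**2                            # event-horizon radius
P = sigma * T**4 * 4 * math.pi * R**2           # total radiated power
F = 2 * P / c                                   # reflection force, p = 2E/c

print(f"T = {T:.2e} K")    # ~1.35e53 K
print(f"P = {P:.1e} W")    # ~4e92 W (text quotes ~3e92)
print(f"F = {F:.1e} N")    # ~3e84 N (text quotes ~2e84)
```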
Now the funny thing is that in the big bang, the Hubble recession of galaxies at velocity v = HR implies an outward acceleration of either
a = v/t = (HR)/(R/c) = Hc
or else
a = dv/dt = d(HR)/dt = H*dR/dt + R*dH/dt = Hv + R*0 = Hv = RH^2.
For distances near the horizon radius of the universe R = ct, both of these estimates for a are the same, although they differ for smaller distances.
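Both estimates can be checked at the horizon distance R = c/H; the value H ~ 70 km/s/Mpc is an assumed round figure, not taken from the text:

```python
# At R = c/H the two acceleration estimates, a = Hc and a = RH^2, coincide.
H = 70e3 / 3.086e22   # assumed Hubble parameter, s^-1 (~2.3e-18)
c = 2.998e8           # speed of light, m/s
R = c / H             # horizon distance

a1 = H * c            # a = v/t = Hc
a2 = R * H**2         # a = dv/dt = RH^2
print(f"a = {a1:.2e} m/s^2")   # ~6.8e-10 m/s^2 for this H
```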
However, since most of the mass is at great distances, an order of magnitude estimate is that this acceleration causes an outward force of
F = ma = Hcm = 7*10^43 Newtons.
If that outward force causes an equal inward force which is mediated by gravitons (according to Newton's 3rd law of motion: equal and opposite reaction), then the cross-sectional area of an electron for graviton interactions (predicting the strength of gravity correctly) is the cross-sectional area of the black hole event horizon for the electron, i.e. Pi*(2GM/c^2)^2 square metres. (Evidence here.)
Now the fact that the black hole Hawking exchange-radiation force calculated above is 2*10^84 Newtons, compared with 7*10^43 Newtons for quantum gravity, suggests that the Hawking black hole radiation is the exchange radiation of a force roughly (2*10^84)/(7*10^43) = 3*10^40 times stronger than gravity.
Such a force is of course electromagnetism.
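The comparison can be sketched numerically. The universe mass of ~10^53 kg and H ~ 70 km/s/Mpc are assumed round values used only to reproduce the orders of magnitude quoted above:

```python
# Outward cosmological force F = m*a = m*H*c versus the Hawking-exchange force.
H = 70e3 / 3.086e22        # assumed Hubble parameter, s^-1
c = 2.998e8                # speed of light, m/s
m_universe = 1e53          # kg, order-of-magnitude assumption

F_out = m_universe * H * c # ~7e43 N, as quoted in the text
F_hawking = 2e84           # N, from the Hawking-radiation estimate above

print(f"F_out = {F_out:.1e} N")
print(f"ratio = {F_hawking / F_out:.1e}")  # ~3e40, the EM/gravity ratio scale
```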
So I find it quite convincing that the cores of the leptons and quarks are black holes which are exchanging electromagnetic radiation with other particles throughout the universe.
The asymmetry caused geometrically by the shadowing effect of nearby charges induces net forces which we observe as fundamental forces, while accelerative motion of charges in the radiation field causes the Lorentz-FitzGerald transformation features, such as compression in the direction of motion, etc.
Hawking's heuristic mechanism of radiation emission has some problems for an electron, however, so the nature of the Hawking radiation isn't the high-energy gamma rays Hawking suggested. Hawking's mechanism for radiation from black holes is that pairs of virtual fermions can pop into existence for a brief time (governed by Heisenberg's energy-time version of the uncertainty principle) anywhere in the vacuum, such as near the event horizon of a black hole. One of the pair of charges falls into the black hole, allowing the other one to escape annihilation and become a real particle which hangs around near the event horizon until the process is repeated, so that you get the creation of real (long-lived) fermions of both positive and negative electric charge around the event horizon. The positive and negative real fermions can annihilate, releasing a real gamma ray with an energy exceeding 1.02 MeV.
This is a nice theory, but Hawking totally neglects the fact that in quantum field theory, no pair production of virtual electric charges is possible unless the electric field strength exceeds Schwinger's threshold for pair production of 1.3*10^18 v/m (equation 359 in Dyson's http://arxiv.org/abs/quant-ph/0608140 and equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo's http://arxiv.org/abs/hep-th/0510040). If you check out renormalization in quantum field theory, this threshold is physically needed to explain the IR cutoff on the running coupling for electric charge. If the Schwinger threshold didn't exist, the running coupling or effective charge of an electron would continue to fall at low energy, instead of becoming fixed at the known electron charge at low energies. This would occur because virtual fermion pairs would continue to polarize around electrons even at very low energy (long distances) and would completely neutralize all electric charges, instead of leaving the constant residual charge at low energy that we observe.
Once you include this factor, Hawking's mechanism for radiation emission suffers a definite back-reaction which modifies his mathematical theory. E.g., pair production of virtual fermions can only occur where the electric field exceeds 1.3*10^18 v/m, which is not the whole of the vacuum but just a very small spherical volume around fermions!
This means that black holes can’t radiate any Hawking radiation at all using Hawking’s heuristic mechanism, unless the electric field strength at the black hole event horizon radius 2GM/c^2 is in excess of 1.3*10^18 volts/metre.
That requires the black hole to have a relatively large net electric charge. Personally, from this physics I'd say that black holes the size of those in the middle of the galaxy don't emit any Hawking radiation at all, because there's no mechanism for them to have acquired a massive net electric charge when they formed. They formed from stars which themselves formed from clouds of hydrogen produced in the big bang, and hydrogen is electrically neutral. Although stars give off charged radiations, they emit as much negative charge (electrons and negatively charged ions) as positive charge (protons and alpha particles). So there is no way they can accumulate a massive electric charge. (If they did start emitting more of one charge than the other, then as soon as a net electric charge developed, they'd attract back the particles whose emission had caused it, and the net charge would soon be neutralized again.)
So my argument physically from Schwinger’s formula for pair production is that the supermassive black holes in the centres of galaxies have a neutral electric charge, have zero electric field strength at their event horizon radius, and thus have no pairproduction there and so emit no Hawking radiation whatsoever.
The important place for Hawking radiations is the fundamental particle, because fermions have an electric charge and at the black hole radius of a fermion the electric field strength way exceeds the Schwinger threshold for pair production.
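That claim can be checked against the Schwinger formula E_c = m^2*c^3/(e*hbar); the Coulomb-law estimate of the field at the horizon radius is a rough assumption at such tiny scales:

```python
# Schwinger pair-production threshold versus the electric field at the
# black-hole horizon radius of an electron.
m, c = 9.109e-31, 2.998e8        # electron mass (kg), speed of light (m/s)
e, hbar = 1.602e-19, 1.0546e-34  # elementary charge (C), reduced Planck const.
G, k_e = 6.674e-11, 8.988e9      # gravitational and Coulomb constants

E_c = m**2 * c**3 / (e * hbar)   # ~1.3e18 V/m, the quoted threshold
R = 2 * G * m / c**2             # horizon radius, ~1.35e-57 m
E_horizon = k_e * e / R**2       # Coulomb field at that radius

print(f"E_c       = {E_c:.2e} V/m")
print(f"E_horizon = {E_horizon:.1e} V/m")  # vastly exceeds the threshold
```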
In fact, the electric charge of the fermionic black hole modifies Hawking's radiation, because it prejudices which of the virtual fermions near the event horizon will fall in. Because virtual fermion pairs are polarized in an electric field, the virtual positrons which form near the event horizon of a black hole electron will on average be closer to the black hole than the virtual electrons, so the virtual positrons will be more likely to fall in. This means that instead of virtual fermions of either charge sign falling at random into the black hole fermion, you instead get a bias in favour of virtual positrons and other positively charged virtual fermions falling into the black hole, and an excess of virtual electrons and other negatively charged virtual radiation escaping from the black hole event horizon. This means that a black hole electron will emit a stream of negatively charged radiation, and a black hole positron will emit a stream of positively charged radiation.
Although such radiation would appear to be massive fermions, because there is an exchange of such radiation in both directions simultaneously once an equilibrium is set up in the universe (flowing towards and away from the event horizon), the overlap of incoming and outgoing radiation has some peculiar effects, turning the fermionic sub-relativistic radiation into bosonic relativistic radiation.
The reason why a fermion differs from a boson comes down to spin, and can be grasped by the example of an electron and a positron annihilating into gamma rays and vice versa. When the fermionic half-integer spins of an electron and positron are combined, you get bosonic spin-1 radiation. Physically, what happens can be understood in terms of the magnetic field curls you get when electric charge propagates through space.
There is a back-reaction effect called self-inductance which arises when an electric charge is accelerated. The magnetic field produces a force which opposes acceleration; the increased inertial mass can be considered an effect of this. A massless charged radiation would have an infinite self-inductance, and wouldn't be able to propagate.
However, if you have two fermionic electric charges side by side, as in all examples of electricity, you get the emergence of a special phenomenon whereby energy propagates like bosonic radiation. E.g., the TEM-wave logic step of electricity requires that you have two parallel conductors in a power 'transmission line'. At any moment where electric power is propagating, the electric charge in one conductor of the transmission line is opposite to that in the other conductor immediately adjacent. The mechanism is simply that the magnetic curl around the negative conductor is in the opposite direction to the magnetic curl around the positive conductor, so that the superimposed curls cancel each other, cancelling out the magnetic self-inductance and therefore allowing electric power to cease behaving like sub-relativistic massive fermions and to instead behave as light-velocity bosonic radiation: light-velocity electric power transmission is a case of two oppositely charged fermions (one in each conductor) combining in such a way that together they behave as a boson, allowing light-velocity transmission of electric power. (This is clear to me from Catt's research on transmission lines, e.g. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4039191.)
Other examples of this kind of superposition are well known. For example, superconductivity occurs for exactly the same reason: you get Cooper pairs of electrons forming which behave as bosons. Generally, in condensed matter physics (low-temperature phenomena generally), pairs of half-integer-spin fermions can associate to produce composite particles that have the properties of integer-spin particles, bosons.
This is the mechanism by which Hawking gauge theory exchange radiations, while overlapping in space in the process of going to and coming from the event horizons of black holes, behave as bosons rather than as fermions.
The diagram here: http://nige.files.wordpress.com/2007/05/fig5.jpg?w=702&h=1065 shows in terms of electromagnetic field strengths the difference between Maxwell's imaginary photon, the real transverse path-integral-suggested photon of QED, and the exchange radiation composed of two fermion-like charges superimposed, which occurs in the case of both light-velocity electric power transmission and the exchange of Hawking radiation I've described above.
The diagram here: http://nige.files.wordpress.com/2007/05/fig4.jpg?w=505&h=548 shows how all the long-range forces (gravity and electromagnetism) arise physically from exchange radiations: e.g., why gravity gives universal attraction, and why unlike charges attract with the same force (for unit charges) as that with which like charges repel. My current effort to distinguish what is correct from what is incorrect in quantum field theory is on my site, including calculations for quantum gravity. However, it's again in need of rewriting, updating and improving. (It's just as well that virtually everybody is negative about it, because if there were a fanfare of interest I'd probably soon be locked down to the theory in a particular state, and unable to keep reformulating it, finding new details and problems and tackling them in my leisure time. It would be more stressful to have to work full-time on this. I'm developing an SQL database and ASP website at the moment, which is a welcome change from this crazy-looking but factually surprisingly solid physics.)
************
Further thoughts
U(1) describes the singlets of electroweak theory because it has only one kind of charge, and SU(2) describes the doublets since it has two charges.
I think that U(1) is a flawed symmetry for electromagnetism. Electric charges come in doublets via pair production, so I don't think that there are any real singlets. It would be nice to construct a theory of electromagnetism based on SU(2), which has two charges, positive and negative, just as in the weak isospin SU(2) symmetry you have two isospin states.
The meson consists of a quark and antiquark pair as the doublet in SU(2); or, in the case of leptons, the doublet would be a left-handed electron and a left-handed neutrino (both with equal and opposite amounts of weak isospin charge), with the right-handed spinning electron being the singlet with zero weak isospin charge but twice as much weak hypercharge as the left-handed electron.
I think that in a preon theory downquarks should be correlated with electrons. The left-handed electron has exactly the same weak isotopic charge as the left-handed downquark, -1/2. The left-handed electron has weak hypercharge and electric charge of -1 unit each, while the left-handed downquark has weak hypercharge of +1/3 and electric charge of -1/3 unit.
The fractional hypercharge and electric charge of the downquark has a pretty obvious mechanism in pair-production polarization, seen when the simplest case, the omega minus (containing three almost identical 'strange' downquark-like charges), is investigated. The electric field of a particle causes virtual fermion pair production at high field strengths; the pairs can become briefly polarized by the electric field radially around the downquark, and as a result of this polarization they cancel out some of the primary electric field as observed from greater distances.
If you triple the charge that is causing the polarization, by having three electronsized charges confined in a small volume, the polarization of the surrounding vacuum will be 3 times more intense, and so the shielding of the electric charge will be 3 times greater than in the case of a single charge. If the relative shielding factor is say N = 137 units for a single electron sized charge, then the observed charge at a long distance is the bare core charge divided by N, e.g. 137/N = 137/137 = 1. For three strange quarks based on the same preons as electrons, we get a shielding factor of N = 3*137 because the stronger charge causes more polarization and thus more shielding of the electric field and apparent (observable) charge seen from a large distance, which becomes 137/N = 137/(3*137) = 1/3.
I can see why this heuristic argument isn't being taken seriously: firstly, the mainstream is hostile to mechanisms, and secondly, there are upquarks whose observable electric charge is +2/3 units. However, it's clear that electric charge is related to weak isotopic charge. An upquark has the same weak hypercharge as a downquark (+1/3) but has 1 unit more weak isotopic charge (+1/2 instead of -1/2), and this extra unit of weak isotopic charge shows up directly in the 1 unit of extra electric charge that an upquark has over a downquark (+2/3 is 1 unit more than -1/3).
If you look at the table of the first generation of Standard Model particles, e.g. http://nige.files.wordpress.com/2007/06/particles2.jpg?w=612&h=361, it's clear that the weak isotopic charges for the three weak SU(2) gauge bosons are identical to the corresponding electric charges of those gauge bosons. So I think it's a fact that SU(2) describes both electromagnetism (electric charge, weak hypercharge) as well as weak interactions. The differences between the two types of interaction (electromagnetic and weak) are down to the mass acquired by certain of the SU(2) gauge bosons, while electromagnetic gauge bosons remain massless. There is obviously quite a lot of mathematical work to be done to really get this to work in detail.
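The pattern described above, electric charge tracking weak isospin plus half the weak hypercharge, is the Gell-Mann-Nishijima relation, Q = T3 + Y/2. A quick check with the standard first-generation charge assignments:

```python
# Gell-Mann-Nishijima: Q = T3 + Y/2, with the convention in which the
# left-handed electron has Y = -1 and left-handed quarks have Y = +1/3.
from fractions import Fraction as Fr

particles = {
    # name: (weak isospin T3, weak hypercharge Y, electric charge Q)
    "nu_L": (Fr(1, 2),  Fr(-1),    Fr(0)),
    "e_L":  (Fr(-1, 2), Fr(-1),    Fr(-1)),
    "u_L":  (Fr(1, 2),  Fr(1, 3),  Fr(2, 3)),
    "d_L":  (Fr(-1, 2), Fr(1, 3),  Fr(-1, 3)),
    "e_R":  (Fr(0),     Fr(-2),    Fr(-1)),  # singlet: double the hypercharge
}

for name, (T3, Y, Q) in particles.items():
    assert Q == T3 + Y / 2, name
    print(f"{name}: T3 = {T3}, Y = {Y}, Q = {Q}")
```

Note how the relation also reproduces the singlet case: the right-handed electron has zero weak isospin and twice the hypercharge of the left-handed one, exactly as stated earlier in this post.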
The key connection between the Lagrangian equation for a field and its symmetry group is Noether's theorem, but it's pretty hard to grasp the physics in books like Ryder's (which seems to be far more physical than Weinberg's first two volumes of QFT maths). The things I want to understand are glossed over, and a lot of space is devoted to explaining in detail trivia which I don't need, because I don't have much spare time or energy for material that is just there to provide a basis for examination questions.
Ryder (2nd ed.) gives two sections on SU(2): firstly section 3.5 on the Yang-Mills field (which is pretty abstract maths and doesn't give any simple physics of Noether's theorem's application in getting from the SU(2) symmetry group to the Lagrangian for the Yang-Mills field), and secondly section 8.5 on the Weinberg-Salam model, which is easier to grasp because it starts with the Dirac Lagrangian, simplifies it to the massless case and then separates it into left- and right-handed components.
Then Ryder tries to introduce weak isospin as acting on left-handed particles, and claims that the resulting Lagrangian is invariant under transformations which form the rotational symmetry of SU(2). He does all this in half a page, and I can't follow the physical reasoning. On the next page he discusses weak hypercharge, modelled by U(1). The maths again isn't physically grounded in reality; it's just model building, and there is no reason given for that particular way of building the model.
I think my next step will be to try to get hold of the original Yang and Mills paper on SU(2) and see if that helps to clear up my questions about Ryder’s approach.
*************************
SU(2)xSU(3) particle physics based on solid facts, giving quantum gravity predictions
Hubble's law: v = dR/dt = HR. => Acceleration: a = dv/dt = d(HR)/dt = (H*dR/dt) + (R*dH/dt) = Hv + 0 = RH^{2}. Hence 0 < a < ~6*10^{-10} m/s^{2}. Outward force: F = ma. Newton's 3rd law: equal inward reaction force (via gravitons). Since non-receding nearby masses don't cause a reaction force, they cause an asymmetry, predicting gravity; in 1996 this theory predicted the 'cosmological acceleration' discovered in 1998.
Above: how the flux of Yang-Mills gravitational exchange radiation (gravitons) being exchanged between all the masses in the universe physically creates an observable gravitational acceleration field directed towards a cosmologically nearby or non-receding mass, labelled 'shield'. (The Hubble expansion rate and the distribution of masses around us are virtually isotropic, i.e., radially symmetric.) The mass labelled 'shield' creates an asymmetry for the observer in the middle of the sphere, since it shields the graviton flux: it doesn't have an outward force relative to the observer (in the middle of the circle shown), and thus doesn't produce a forceful graviton flux in the direction of the observer, according to Newton's 3rd law (action and reaction, an empirical fact, not a speculative assumption).
Hence, any mass that is not at a vast cosmological distance (with significant redshift) physically acts as a shield for gravitons, and you get pressed towards that shield from the unshielded flux of gravitons on the other side. Gravitons act by pushing; they have spin-1. In the diagram, r is the distance to the mass that is shielding the graviton flux from receding masses located at the far greater distance R. As you can see from the simple but subtle geometry involved, the effective size of the area of sky which is causing gravity due to the asymmetry of mass at radius r is equal to the cross-sectional area of the mass for quantum gravity interactions (detailed calculations, included later in this post, show that this cross-section turns out to be the area of the event horizon of a black hole for the mass of the fundamental particle which is acting as the shield), multiplied by the factor (R/r)^{2}, which is how the inverse-square law, i.e. the 1/r^{2} dependence of gravitational force, arises.
Because this mechanism is built on solid facts of expansion from redshift data that can't be explained any other way than recession, and on experiment- and observation-based laws of nature such as Newton's, it is not just a geometric explanation of gravity: it uniquely makes detailed predictions, including the strength of gravity, i.e. the value of G, and the cosmological expansion rate. It is a simple theory, as it uses spin-1 gravitons which exert impulses that add up to an effective pressure or force when exchanged between masses. It is quite a different theory to the mainstream model, which ignores graviton interactions with all the other masses in the surrounding universe.
The mainstream model in fact can't predict anything at all. It begins by ignoring all the masses in the universe except for two masses, such as two particles. It then represents gravity interactions between those two masses by a Lagrangian field equation which it evaluates by a Feynman path integral. It finds that if you ignore all the other masses in the universe, and just consider two masses, then spin-1 gauge boson exchange will cause repulsion, not attraction as we know occurs for gravity. It then 'corrects' the Lagrangian by changing the spin of the gauge boson to spin-2, which has 5 polarizations. This new 'corrected' Lagrangian, with 5 tensor terms for the 5 assumed polarizations of the spin-2 graviton, gives an always-attractive force between two masses when put into the path integral and evaluated. However, it doesn't say how strong gravity is, or make any predictions that can be checked. Thus, the mainstream first makes one error (ignoring all the graviton interactions between masses all over the universe) whose fatally flawed prediction (repulsion instead of attraction between two masses) it 'corrects' using another error, a spin-2 graviton.
So one reason why the actual spin-1 gravitons don't cause masses to repel is that the path integral isn't just a sum of interactions between two gravitational charges (composed of mass-energy) when dealing with gravity; it's instead a sum of interactions between all the mass-energy in the universe. The reason why mainstream people don't comprehend this is that the mathematics used in the Lagrangian and path integral are already fairly complex, and they can't readily include the true dynamics, so they ignore them and believe in a fiction instead. (There is a good analogy with the false mathematical epicycles of the Earth-centred universe. Whenever the theory was in difficulty, another epicycle was simply added to make the theory more complex, 'correcting' the error. Errors were actually celebrated, being simply relabelled as 'discoveries' that nature must contain more epicycles.)
Some papers here, home page here. CERN Document Server deposited draft preprint paper EXT-2004-007, 15/01/2004. (This is now obsolete and can’t be updated to the revised version, such as something similar to the discussion and mathematical proof below, because CERN now only accepts papers fed through arXiv.org, which is blocked (even to some string theorists who work on non-mainstream ideas) by mainstream (M-theory) string ‘theorists’, who have no testable predictions and no checkable theory, and so are not theorists in a scientific sense.) ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. Witten’s claimed ‘prediction of gravity’ is the spin-2 graviton, and it isn’t a falsifiable prediction, unlike all the predictions made and subsequently confirmed by the spin-1 gravity idea. To grasp Dr Woit’s assessment of the “not even wrong” spin-2 graviton idea, try the following passage:
“For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. […] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.
Gravity gets weaker than the inverse-square law predicts over immense distances in this universe. This is because gravity is mediated by gravitons which get redshifted, so the quanta lose energy when exchanged between masses which are receding at relativistic velocities, i.e. well apart in this expanding universe, which reduces the effective value of G over immense distances. Additionally, from empirical facts (see the calculations below in this blog post), the mechanism of gravity depends on the surrounding recession of masses around any point. This means that if general relativity is just a classical approximation to quantum gravity (due to the graviton redshift effect just explained, which implies that spacetime is not curved over cosmological distances), we have to treat spacetime as finite and not bounded, so that what you see is what you get, and the universe may be approximately analogous to a simple expanding fireball.
Masses near the real ‘outer edge’ of such a fireball (the radial distance in spacetime which corresponds to the time of the big bang, i.e. 13,700 million light-years distance) get an asymmetry in the exchange of gravitons: they exchange them on one side only, the side facing the core of the fireball, where other masses are located. (Remember that since gravity doesn’t act over cosmological distances, due to graviton redshift when exchanged between receding masses, there is no spacetime curvature causing gravitation over such distances.)
Hence such masses tend to just get pushed outward, instead of suffering the usual gravitational attraction, which is of course caused by shielding of all-round graviton pressure. In such an expanding fireball, where gravitation is a reaction to surrounding expansion due to exchange of gravitons, you will get both expansion and gravitation as results of the same fundamental process: exchange of gravitons. The pressure of gravitons will cause attraction (due to mutual shadowing) between masses which are relatively nearby, but over cosmological distances the whole collection of masses will be expanding (masses receding from one another) due to the momentum imparted in the process of exchanging gravitons. This prediction was put forward in the October 1996 Electronics World, two years before evidence from Perlmutter’s supernovae observations confirmed that the universe is not decelerating, contrary to the standard predictions of cosmology at that time (i.e., the expansion of the universe looks as if there is a small positive cosmological constant – of predictable magnitude – offsetting gravitational deceleration over cosmological distances).
I’ve been preparing a Google Video or YouTube video about physical mechanisms: physical forces in the fireballs in the 1962 nuclear tests at high altitude (particularly the amazing films of the fireball dynamics in the Bluegill test), and exchange radiation, which will make the material and figures in the post here easier to grasp. It was a study of fireball phenomenology, the breakdown of general relativity due to a weakening of the gravity coupling constant in an expanding universe (gravitons exchanged between relativistically receding masses – quantum gravity charges – in an expanding universe are redshifted, reducing the effective strength of gravitational interactions in proportion to the amount of redshift of the gravitons and the visible light observed, since energy is related to frequency by E = hf), and the analogy to the big bang, which suggested the mechanism of gravity in 1996. In an air blast wave, Newton’s 3rd law – the equality of action and reaction forces – always holds. Initially, there is extremely high pressure throughout the fireball, communicating reaction forces in spherical symmetry, i.e., the Northward portion of the shock wave exerts a net outward force equal to its pressure times its surface area, and the reaction force is found in the Southward portion of the shock wave.
But after a while, the amount of air in the shock front is so compressed that the density falls in the central region, which cools and loses pressure. Hence, the central region can no longer mediate the reaction force of the shock wave in different directions. What happens at this stage is that a negative pressure wave, directed inward towards the centre of the explosion, then develops which has lower pressures but longer duration, allowing a reaction force to be exerted. A shock wave cannot exert outward pressure (and thus force, being equal to pressure times area) without satisfying Newton’s 3rd law of action and reaction. The reversed phase of the blast wave (with pressure acting towards the point of the explosion, i.e., suction or ‘negative pressure’ – below the ambient 14.7 psi/101 kPa normal air pressure phase) is vital for maintaining Newton’s 3rd law of motion in a shock wave.
The negative pressure phase consists of an inner shell of blast with a force directed inward in response to the outward force at the shock front. This feature is vital in implosion systems used to actually cause a nuclear explosion in the first place: the implosion relies on the fact that half the force of an explosion is initially directed inward within the mass of exploding material (the inward-travelling shock wave reflects back when it reaches the centre, and the rebounded shock wave travels outward, but in the meantime it squashes very effectively anything placed at the core, like a lump of subcritical fissile material). Implosion is also a feature of the big bang:
The product rule of differentiation is: d(uv)/dx = (v*du/dx) + (u*dv/dx)
Hence the observationally based 1929 Hubble law, v = HR, differentiates as follows:
dv/dt = d(RH)/dt = (H*dR/dt) + (R*dH/dt)
The second term here, R*dH/dt, is zero because in the Hubble law v = HR the term H is a constant from our standpoint in spacetime, so H doesn’t vary as a function of R and thus it also doesn’t vary as a function of apparent time past t = R/c. In the spacetime trajectory we see as we look out to greater distances, R/t is always in the fixed ratio c, because when we see things at distance R the light from that distance has to travel that distance at velocity c to get to us, so when we look out to distance R we’re automatically looking back in time to time t = R/c seconds ago.
Hence R*dH/dt = R*dH/d[R/c] = Rc*dH/dR = Rc*0 = 0.
This is because dH/dR = 0. I.e., there is no variation of the Hubble constant as a function of observable spacetime distance R.
Thus, the acceleration of any distant, receding lump of matter as we perceive it in spacetime, is
a = dv/dt = d(RH)/dt = H*dR/dt = H*v = H*[RH] = R*H^2.
Now the outward acceleration, a, is very small. It reaches only about 6*10^{-10} m/s^{2} for the most distant receding objects. But because the mass of the receding universe is really big, that comes to an outward force on the order of 7*10^{43} Newtons. Newton’s 3rd law tells us there should be an equal and opposite reaction force. According to what is physically known about the possible particles and fields that exist, this inward reaction force might be carried by spin-1 gravitons (non-string theory gravitons; string theory hype supposes spin-2 gravitons), which cause all gravitational field and observed general relativity (contraction, etc.) effects physically by exerting pressure as a quantum field of exchange radiation.
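The arithmetic above can be checked in a few lines. The Hubble parameter, age of universe and mean density used below are assumed round figures (not values derived in this post), so the result is an order-of-magnitude sketch only:

```python
# Order-of-magnitude check of a = R*H^2 and F = m*a as quoted in the text.
# The age of the universe and the mean density are rough assumed inputs.

c = 3.0e8                        # speed of light, m/s
t_universe = 13.7e9 * 3.156e7    # assumed age ~13,700 million years, in seconds
H = 1.0 / t_universe             # Hubble parameter taken as 1/age, s^-1

R = c / H                        # radial distance corresponding to time of big bang, m
a = R * H**2                     # = c*H, outward acceleration of most distant matter

rho = 9.5e-27                    # assumed mean density, kg/m^3 (illustrative only)
M = rho * (4.0 / 3.0) * 3.14159 * R**3   # crude mass of observable universe, kg
F = M * a                        # outward force; reaction force is equal and opposite

print(f"a = {a:.1e} m/s^2")      # ~7e-10, same order as the ~6e-10 quoted
print(f"F = {F:.1e} N")          # ~6e43, same order as the ~7e43 quoted
```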
When we calculate the universal gravitational parameter, G, by this theory we get a figure that’s good (within available experimental data). There are complexities because what counts in spacetime for graviton exchange is the observable density of the universe as a function of distance/time past, which increases towards infinity as we look back to immense distances (approaching time zero); however this massive increase in effective outward force is cancelled out by the fact that the reaction force is mediated by gravitons which are extremely redshifted from such locations where the recession velocities are very close to the velocity of light (i.e., relativistic).
One way to imagine the mechanism for why an outward-accelerating particle should fire back gravitons as a reaction force to satisfy Newton’s 3rd law of motion is very simple: walk down a corridor and observe what happens to air that vacates the region in front of you and fills in the region behind you as you walk. Or better, push a ball along while holding it underwater. There is a resistance due to motion against the water (which is a crude model for moving an electron or other object having rest mass in a graviton field in the vacuum of spacetime), which compresses the ball slightly in the direction of motion. There is then a flow of the field quanta (water in the analogy) around the particle from front to back. This flow permits things to move, and because the field flow – once set up after effort (against resistance) – has momentum, it adds inertia to the moving object. (Ships and submarines are hard to stop suddenly because they have extra momentum – not just the usual momentum, but momentum from the water field’s motion around them. This hints that the intrinsic momentum of any moving mass is due to a similar effect involving the vacuum graviton field flowing around individual fundamental particles. As Einstein pointed out, inertial and gravitational masses are indistinguishable.)
Hence, as a 70 kg (70 litre) person walks down a corridor at 1 m/s, some 70 litres of air moving at a net velocity of 1 m/s in the opposite direction flows into the void the person is vacating. (In internet discussions, some ingenious bigots claimed that when you walk, you attract air from behind, which follows you to fill the volume of space you are vacating. If that were true, the air pressure along the corridor would become ever more unequal because of (1) air becoming compressed in front of you, instead of flowing around you to fill in the void behind, and (2) air pressure being reduced still further behind you as air expands to fill in the void. This doesn’t happen. In any case, the example with water makes it clear what happens: water flows from the front to the back of a moving object.)
If the object accelerates, the surrounding field responds similarly, provided the motion of the particles in it is fast enough to respond. (Air molecules have an average velocity of only 500 m/s, but spin-1 gravitons travel at 300 Mm/s.) Hence, if you have a long line of people walking in one direction only along a corridor, you have a current flowing in that direction, which is compensated for by a net flow of the surrounding field (air) in the opposite direction. Although the individual air molecules are going at about 500 m/s, the net flow of the bulk ‘field’ composed of air is equal to the speed of the current of people moving, while the net volume of the field which is effectively flowing is equal to the volume of the people who are moving.
Similarly, when matter moves away from us in the big bang, the graviton field around that matter responds by moving in the other direction at the same time, causing the graviton reaction force as described quantitatively by Newton’s 3rd law.
I’ll insert the video into a blog post on this site in the near future, along with a free PDF download link for the accompanying book. In the meanwhile, please make do with the posts on this page, especially this, this, this, this, this, this, this, this, this, this, this, and this.
To understand why mainstream hype of unchecked stringy theory, with its non-falsifiable speculative extra dimensions, multiverse/landscape, and so on, is destructive, see this link. The mechanism proved in detail below does work, although it is still very much in a nascent stage. The problems are (1) that it leads to interesting applications in so many directions in physics that it absorbs a great deal of time, and (2) that it is extremely unpopular, because “mechanisms” are sneered at out of prejudice (in favour of mechanism-less mathematical “models”), and are regarded as being “crazy” by essentially all mainstream physicists, i.e. most professional physicists. People like Le Sage and Maxwell (who developed a mechanical model of space which was flawed), with false, half-baked ideas, have permanently damaged the credibility of mechanisms in fundamental physics, not to mention the metaphysical (non-falsifiable) hidden variable “interpretation” of quantum mechanics.
The absurdity of this situation is demonstrated by the fact that quantum field theory postulates gauge boson radiation being exchanged in the vacuum between charges in order to mediate force fields (i.e., causing forces), yet the attitude is to believe in this without searching for the underlying physical mechanism! It’s exactly like religion, where you are allowed to believe things without investigating them scientifically. Moreover, the majority of people in the world actually want to hero-worship religious beliefs in science, in place of supporting accurate, predictive physical mechanisms based on solid facts: people are today using modern physics as an alternative religion. They have (1) abandoned the search for reality, (2) lied that it is not possible to understand physics by mechanisms (it is), and (3) embarked on a campaign to censor out the facts and replace them with false speculations. Differential equations describing smooth curvatures and continuously variable fields in general relativity and mainstream quantum field theory are wrong except for very large numbers of interactions, where statistically they become good approximations to the chaotic particle interactions which produce accelerations (spacetime curvatures, i.e. forces). See http://nige.wordpress.com/2007/07/04/metrics-and-gravitation/ and in particular see Fig. 1 of the post: http://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/.
Think about air pressure as an analogy. Air pressure can be represented mathematically as a continuous force acting per unit area: P = F/A. However, air pressure is not a continuous force; it is due to impulses delivered by discrete, random, chaotic strikes by air molecules (travelling at average speeds of 500 m/s in sea-level air) against surfaces. If therefore you take a very small area of surface, you will not find a continuous uniform pressure P acting on it. Instead, you will find a series of chaotic impulses due to individual air molecules striking the surface! This is an example of how a useful mathematical fiction on large scales, like air pressure, loses its accuracy if applied on small scales. It is well demonstrated by Brownian motion. The motion of an electron in an atom is subjected to the same thing, simply because the small size doesn’t allow large numbers of interactions to be averaged out. Hence, on small scales, the smooth solutions predicted by mathematical models are flawed. Calculus assumes that spacetime is endlessly divisible, which is not true when calculus is used to represent a curvature (acceleration) due to a quantum field! Instead of perfectly smooth curvature as modelled by calculus, the path of a particle in a quantum field is affected by a series of discrete impulses from individual quantum interactions. The summation of all these interactions gives you something that is approximated in calculus by the “path integral” of quantum field theory. The whole reason why you can’t predict deterministic paths of electrons in atoms, etc., using differential equations is that their applicability breaks down for individual quantum interaction phenomena. You should be summing impulses from individual quantum interactions to get a realistic “path integral” to predict quantum field phenomena.
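The point about the small-scale breakdown of the pressure approximation can be illustrated with a toy simulation; the impulse distribution and strike counts below are arbitrary illustrative choices, not a model of real air:

```python
# Toy illustration: "pressure" averaged over many discrete random impulses
# converges to a smooth value, but over a few impulses it fluctuates wildly.
import random

random.seed(1)

def mean_impulse(n_hits):
    """Average of n random impulses, each uniform between 0 and 2 units (mean 1)."""
    return sum(random.uniform(0.0, 2.0) for _ in range(n_hits)) / n_hits

few = [mean_impulse(10) for _ in range(5)]       # tiny surface: ~10 strikes
many = [mean_impulse(100000) for _ in range(5)]  # large surface: ~100,000 strikes

print("few hits:  ", [round(x, 3) for x in few])   # scattered around 1.0
print("many hits: ", [round(x, 3) for x in many])  # all very close to 1.0
```

The spread over repeated trials shrinks roughly as the square root of the number of strikes, which is why the continuous P = F/A fiction works so well macroscopically and fails on atomic scales.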
The total and utter breakdown of mechanistic research in modern physics has instead led to a lot of nonsense, based on sloppy thinking, lack of calculations, and the failure to make checkable, falsifiable predictions and obtain experimental confirmation of them. The abusiveness and hatred directed towards people like myself by those “brane”-washed with failed ideas from Dr Witten et al. is not unique to modern physics. It’s a mixture of snobbish hatred of innovation based on simple ideas, and a lack of real interest in physics by people who claim to be physicists but are in fact only crackpot mathematicians.
Predicted fundamental force strengths, all observable particle masses, and cosmology from a simple causal mechanism of vector boson exchange radiation, based on the existing mainstream quantum field theory
Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, spacetime curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests
(For an introduction to quantum field theory concepts, see The physics of quantum field theory.)
‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p. 64.
Here’s an outline of the basic ‘idea’ (actually it is well-established, 100% factual evidence assembled in a 100% new way; it is not merely a personal idea, not a speculation, not guesswork, not a pet ‘theory’, but scientific fact pure and simple) behind the new mechanistic physics involved (described in detail on this page and more recent pages of this blog):
Galaxy recession velocity in spacetime (Hubble’s empirical law): v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = RH^{2}, so: 0 < a < 6*10^{-10} m/s^{2}. Outward force: F = ma by Newton’s 2nd empirical law. Newton’s empirical 3rd law predicts an equal inward force (which, according to the possibilities in quantum field theory, will be carried by gravitons, the exchange radiation between gravitational charges in quantum gravity): but non-receding nearby masses don’t give rise to any reaction force according to this mechanism, so they act as shields and thus cause an asymmetry, producing gravity. This predicts the strength of gravity and electromagnetism, particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.
The underlying symmetry group physics which follows from this mechanism is to replace the SU(2) x U(1) + Higgs sector in the Standard Model with simply a version of SU(2) where the 2^{2} - 1 = 3 gauge bosons can exist in both massless and massive forms. Some field in the vacuum (different to the Higgs field in detail, but similar in that it provides rest mass to particles) gives masses to some of each of the 3 massless gauge bosons of SU(2), and the massive versions are the massive neutral Z, charged W-, and charged W+ weak gauge bosons, just as occur in the Standard Model. However, the massless versions of Z, W- and W+ are the gauge bosons of gravity, negative electromagnetic fields, and positive electromagnetic fields, respectively.
The basic mechanism for electromagnetic repulsion is the exchange of similar massless W- gauge bosons between negative charges, or massless W+ gauge bosons between positive charges. The charges recoil apart because they get hit repeatedly by radiation emitted by the other charge. But for a pair of opposite charges, like a negative electron and a positive nucleus, you get attraction: because each charge can only interact with similar charges, the effect of opposite charges on one another is simply to shadow each other from radiation coming in from other charges in the surrounding universe. A simple vector force diagram (published in Electronics World in April 2003) shows that in this mechanism the magnitudes of the attraction and repulsion forces of electromagnetism are identical. The fact that electromagnetism is on the order of 10^{40} times as strong as gravity for fundamental charges (the precise figure depends on which fundamental charges are compared) is due to the fact that in this mechanism radiation is only exchanged between similar charges, so you get a statistical-type “random walk” vector summation across the similar charges distributed in the universe. This was also illustrated in the April 2003 Electronics World article. Because gravity is carried by neutral (uncharged) gauge bosons, its net force doesn’t add up this way, so it turns out that gravity is weaker than electromagnetism by a factor equal to the square root of the number of similar charges of either sign in the universe. Since 90% of the universe is hydrogen, composed of two negative charges (electron and down-quark) and two positive charges (two up-quarks), it is easy to make approximate calculations of such numbers, using the density and size of the universe.
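The “random walk” statistical claim invoked here (that the sum of randomly signed contributions has a typical magnitude of about the square root of their number) is standard and easy to demonstrate; the charge counts below are tiny illustrative values, nothing like the ~10^{80} figure the argument actually uses:

```python
# A random-walk sum of n randomly signed unit contributions has a typical
# (root-mean-square) magnitude of ~sqrt(n). Purely a statistics demo.
import math
import random

random.seed(2)

def typical_sum(n, trials=200):
    """Root-mean-square of the sum of n random +/-1 contributions."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        total += s * s
    return math.sqrt(total / trials)

for n in (100, 400, 1600):
    print(n, round(typical_sum(n), 1), "expected ~", round(math.sqrt(n), 1))
```

Quadrupling the number of contributions roughly doubles the typical net sum, which is the sqrt(N) scaling the article’s 10^{40} estimate rests on.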
Obviously, massless charged radiation is a non-starter in classical physics, because it won’t propagate due to its magnetic self-inductance; however, for Yang-Mills theory (exchange radiation causing forces) this objection doesn’t hold, because the transit of similar radiation in two opposite directions along a path at the same time cancels out the magnetic field vectors, allowing propagation. In a different context, we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors, each carrying charged currents flowing in opposite directions, to enable a signal (or pulse, or logic step, or net energy flow) to propagate: the magnetic fields of each charged current (one on each conductor in the pair of conductors) cancel one another out, preventing the infinite self-inductance problem and thus allowing propagation of charged energy currents. Thus the analogy of electricity propagating in a pair of conductors when a switch is closed shows how charged exchange radiation works.
Abstract
The objective is to unify the Standard Model and General Relativity with a causal mechanism for gauge boson mediated forces which makes checkable predictions. In very brief outline, Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^{2} and outward force F = ma ~ 10^{43} Newtons. Newton’s 3rd law implies an inward force, which from the possibilities available seems to be carried by gauge boson radiation (gravitons), which predicts gravitational curvature, other fundamental forces, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, because they don’t present an outward force, so they act as a shield and cause an asymmetry that we experience as the attraction effect of gravity: see Fig. 1.
The symmetrical inward pressure of graviton radiation (see Fig. 2) exerts a pressure on masses (acting on what are referred to as ‘Higgs field quanta’, i.e., on the interaction cross-sectional areas of fundamental particles, and not on the macroscopic surface area of a planet) which causes the gravitational contraction predicted by general relativity, i.e., Earth’s radius is contracted by (1/3)MG/c^{2} = 1.5 mm by this graviton exchange radiation hitting masses, imparting momentum p = E/c, and then reflecting back (in the process causing another impulse on the mass, by the recoil effect, equal to p = E/c, so that the total imparted momentum is p = 2E/c). (This ‘reflection’ is not the literal mechanism: although a ball thrown against a wall can bounce back, a photon ‘reflected’ from a mirror actually undergoes a complex series of interactions, the sum of which (or path integral) is merely equivalent to a simple reflection – the photon is absorbed by the mirror and a new photon then gets emitted. Similarly with gauge boson radiations, the interactions involved are far more complex in detail than a simple reflection, although that is a useful approximation to the total process under some circumstances.) Applying this contraction to motions, we find that the same behaviour of the gravitational field causes inertial force which resists acceleration, because of Einstein’s equivalence principle, whereby inertial mass = gravitational mass!
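The contraction figure quoted, (1/3)MG/c^{2}, can be checked with standard constants for the Earth:

```python
# Evaluating the claimed radial contraction of the Earth, (1/3)GM/c^2,
# using standard constants; it comes out at about 1.5 mm as stated.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
c = 2.998e8        # speed of light, m/s

contraction = G * M / (3.0 * c**2)   # metres
print(f"{contraction * 1000:.2f} mm")  # ~1.48 mm
```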
To understand the picture of writing the Hubble expansion rate as an expansion in a time dimension, think of time (age of universe) as 1/Hubble constant (until 1998 it was assumed to be 0.67/Hubble constant with the 2/3 factor due to gravitational deceleration, but that gravitational deceleration was debunked by supernovae observations made by Perlmutter and published in Nature that year; so either gravitons are redshifted over large cosmological distances and lose energy by E = hf, being thus unable to slow down the expansion of the universe, or else there is some “dark energy” which produces an outward acceleration that offsets the inward acceleration of gravity).
If the Hubble constant were different in different directions, the age of the universe, 1/H, would be different in different directions. Hence the isotropy of the big bang we observe around us: there are three effective time dimensions, each corresponding to an expanding spatial dimension. (The redshift of radiation exchanged between receding masses in an expanding universe prevents thermal equilibrium being established, and therefore provides an endless heat-sink.) Because of the isotropy, we see the 3 effective time dimensions as always being equal, so they are identical and can be represented by SO(3,1); hence we observe effectively 4 different dimensions, including one of time and 3 of space.
Lunsford (discussed and cited below) has proved that the special orthogonal group with 3 spatial and 3 time dimensions, SO(3,3), unifies gravity and electrodynamics correctly without the reducibility problems of the old Kaluza-Klein unification. I’ve shown that this is reasonable because 3 spatial dimensions are contracted by gravity in general relativity (for example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres), while 3 time dimensions are continuously expanding: this means that the Hubble expansion should be written in terms of velocity as a function of time, not distance:
Remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^{2}R. So we have a real outward acceleration in Hubble’s law!
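As a sanity check on the calculus step, the derivative dv/dR can be approximated numerically; with v = HR, the acceleration a = v*dv/dR does come out as H^{2}R (the H value below is just an assumed round figure):

```python
# Numerical confirmation of the step above: with v = H*R, the acceleration
# a = v*dv/dR equals H^2*R. dv/dR is approximated by a central difference.
H = 2.3e-18          # assumed Hubble parameter, s^-1
R = 1.0e26           # sample distance, m

def v(r):
    return H * r     # Hubble law

dR = 1.0e18
dv_dR = (v(R + dR) - v(R - dR)) / (2.0 * dR)   # central difference, ~H
a = v(R) * dv_dR

print(a, H**2 * R)   # the two agree
```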
We then use Newton’s 2nd empirical law, F = ma, to estimate the outward force of the big bang, and then his 3rd empirical law to estimate the inward reaction force carried by gauge bosons exchanged between local and distant receding masses. This makes quantum gravity quantitative, and we can calculate the strength of gravity and lots of other things from the resulting mechanism. This post concentrates on gravity’s mechanism.
The Physical Relationship between General Relativity and Newtonian gravity
(1) Newtonian gravity
Let’s begin with a look at the Newtonian gravity law F = mMG/r^{2}, which is based on empirical evidence, not a speculative theory (remember Newton’s claim: hypotheses non fingo!). The inverse-square law is based on Kepler’s empirical laws, which were obtained from Brahe’s detailed observations of the motion of the planet Mars. The mass dependence was more of a guess by Newton, since he didn’t actually calculate gravitational forces (he did not know or even write the symbol for G, which arrived long after, from the pen of Laplace). However, Newton’s other empirical law, F = ma, was strong evidence for a linear dependence of force on mass, and there was some evidence from the observation of the Moon’s orbit. The Moon was known to be about 250,000 miles away and to take about 30 days to orbit the Earth, so its centripetal acceleration could be calculated from the law a = v^{2}/r. This could confirm Newton’s law in two ways. First, since 250,000 miles is about 60 times the radius of the Earth, the acceleration due to gravity from the Earth should, from the inverse-square law, be 60^{2} times weaker at the Moon than it is at the Earth’s surface, where it is 9.8 m/s^{2}.
Hence it was possible to check the inverse-square law in Newton’s day. Newton also made a good guess at the average density of the Earth, which indicates G fairly accurately using Galileo’s measurement of the gravitational acceleration at the Earth’s surface; applied also to the Moon (assumed to have a similar density to the Earth), this gives a very approximate justification for Newton’s assumption that gravitational force is directly proportional to the product of the two masses involved. Newton also worked out geometric proofs for using his law. For example, the mass of the Earth is not located at a point at its centre, but is distributed over a large three-dimensional volume. Newton proved that you can treat the entire mass of the Earth as being concentrated at a point at the centre of the Earth for the purpose of making calculations, and this proof is as clever as his demonstration that the inverse-square law applies to elliptical planetary orbits (Hooke showed that it applied to circular orbits, which is much easier). Newton treated the mass of the Earth as a series of uniform shells of small thickness. He proved that outside the shell, the gravitational field is identical, at any radius from the middle of the shell, to the gravitational field from an equal mass all located in a small lump in the middle. This proof also applies to the quantum gravity mechanism (below).
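The Moon check described above can be repeated with modern round figures (the mean orbital radius and sidereal period below are assumed standard values, slightly different from the post’s 250,000 miles and 30 days):

```python
# The Moon test: the Moon's centripetal acceleration v^2/r compared with
# g/60^2, the inverse-square prediction at ~60 Earth radii.
import math

r = 3.84e8                   # Moon's mean orbital radius, m (~239,000 miles)
T = 27.3 * 86400.0           # sidereal orbital period, s
g = 9.8                      # surface gravity of the Earth, m/s^2

v = 2.0 * math.pi * r / T    # orbital speed, m/s
a_moon = v**2 / r            # centripetal acceleration from the orbit
a_pred = g / 60.0**2         # inverse-square prediction

print(f"from the orbit:     {a_moon:.2e} m/s^2")
print(f"inverse-square law: {a_pred:.2e} m/s^2")
```

The two figures agree to within a fraction of a percent, which is the consistency check available in Newton’s day.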
Cavendish produced a more accurate evaluation of G by measuring the twisting force (torsion) in a quartz fibre due to the gravitational attraction of two heavy balls of known mass located a known distance apart.
(2) General relativity as a modification needed to include relativistic phenomena
Eventually failures in the Newtonian law became apparent. Because the orbits of planets are elliptical with the sun at one focus, the planets speed up when near the sun, which causes effects like time dilation and a relativistic increase in mass (this is significant for Mercury, which is closest to the sun and orbits fastest). Although this effect is insignificant over a single orbit – so it didn’t affect the observations of Brahe or Kepler’s laws upon which Newton’s inverse-square law was based – the effect accumulates and is substantial over a period of centuries, because the perihelion of the orbit precesses. Only part of the precession is due to relativistic effects, but it is still an important anomaly in the Newtonian scheme. Einstein and Hilbert developed general relativity to deal with such problems. Significantly, the failure of Newtonian gravity is most important for light, which is deflected by gravity when passing the sun twice as much as predicted by Newton’s a = MG/r^{2}.
Einstein recognised that gravitational acceleration and all other accelerations are represented by a curved worldline on a plot of distance travelled versus time. This is the curvature of spacetime; you see it as the curved line when you plot the height of a falling apple versus time.
Einstein then used tensor calculus to represent such curvatures by the Ricci curvature tensor, R_{ab}, and he tried to equate this with the source of the accelerative field, the tensor T_{ab}, which represents all the causes of accelerations such as mass, energy, momentum and pressure. In order to represent Newton’s gravity law a = MG/r^{2} with such tensor calculus, Einstein began with the assumption of a direct relationship such as R_{ab} = T_{ab}. This simply says that the curvature of spacetime is directly proportional to the mass-energy. However, it is false since it violates the conservation of mass-energy. To make it consistent with the experimentally confirmed conservation of mass-energy, Einstein and Hilbert in November 1915 realised that you need to subtract from T_{ab} on the right hand side the product of half the metric tensor, g_{ab}, and the trace, T (the sum of scalar terms across the diagonal of the matrix for T_{ab}). Hence
R_{ab} = T_{ab} – (1/2)g_{ab}T.
[This is usually rewritten in the equivalent form, R_{ab} – (1/2)g_{ab}R = T_{ab}.]
There is a very simple way to demonstrate some of the applications and features of general relativity. Simply ignore 15 of the 16 terms in the matrix for T_{ab}, and concentrate on the energy density component, T_{00}, which is a scalar (it is the first term in the diagonal of the matrix), so it is exactly equal to its own trace:
T_{00} = T.
Hence, R_{ab} = T_{ab} – (1/2)g_{ab}T becomes
R_{ab} = T_{00} – (1/2)g_{ab}T, and since T_{00} = T, we obtain
R_{ab} = T[1 – (1/2)g_{ab}]
The metric tensor g_{ab} = ds^{2}/(dx^{a}dx^{b}), and it depends on the relativistic Lorentzian metric factor (1 – v^{2}/c^{2})^{1/2} (the reciprocal of the gamma factor), so in general g_{ab} falls from about 1 towards 0 as velocity increases from v = 0 to v = c.
Hence, for low speeds where, approximately, v = 0 (i.e., v << c), g_{ab} is generally close to 1 so we have a curvature of
R_{ab} = T[1 – (1/2)(1)] = T/2.
For high speeds where, approximately, v = c, we have g_{ab }= 0 so
R_{ab} = T[1 – (1/2)(0)] = T.
The curvature experienced for an identical gravity source if you are moving at the velocity of light is therefore twice the amount of curvature you get at low (non-relativistic) velocities. This is the explanation of why a photon moving at speed c gets twice as much curvature from the sun’s gravity (i.e., it gets deflected twice as much) as Newton’s law predicts for low speeds. It is important to note that general relativity doesn’t supply the physical mechanism for this effect. It works quantitatively because it is a mathematical package which accounts accurately for the use of energy.
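The factor of two can be seen directly by evaluating the curvature factor 1 – (1/2)g_{ab}, with g_{ab} taken (as above) to be (1 – v^{2}/c^{2})^{1/2}, at the two limiting speeds; a minimal sketch:

```python
import math

def curvature_factor(v, c=299792458.0):
    # R_ab = T * [1 - (1/2) * g_ab] with g_ab = (1 - v^2/c^2)^(1/2)
    g_ab = math.sqrt(1 - (v / c)**2)
    return 1 - g_ab / 2

slow = curvature_factor(0.0)          # 0.5: low-speed curvature, T/2
fast = curvature_factor(299792458.0)  # 1.0: light-speed curvature, T
print(fast / slow)  # 2.0 -- light is deflected twice as much
```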
However, it is clear from the way that general relativity works that the source of gravity doesn’t change when such velocity-dependent effects occur. A rapidly moving object falls faster than a slowly moving one because of the difference produced in the way the moving object is subject to the gravitational field, i.e., the extra deflection of light is dependent upon the Lorentz-FitzGerald contraction (the gamma factor already mentioned), which alters length (for an object moving at speed c there are no electromagnetic field lines extending along the direction of propagation whatsoever, only at right angles to the direction of propagation, i.e., transversely). This increases the amount of interaction between the electromagnetic fields of the photon and the gravitational field. Clearly, in a slow-moving object, half of the electromagnetic field lines (which normally point randomly in all directions from matter, apart from minor asymmetries due to magnets, etc.) will be pointing in the wrong direction to interact with gravity, and so slow-moving objects only experience half the curvature that fast-moving ones do in a similar gravitational field.
Some issues with general relativity are focussed on the assumed accuracy of Newtonian gravity, which is put into the theory as the low-speed, weak-field solution normalization. As we shall show below, this is incompatible with a Yang-Mills (Standard Model type) quantum gravity theory for reasons other than the renormalization problems usually assumed to exist. First, over very large distances in an expanding universe, the exchange of gravitons is weakened because redshift reduces the frequency, and thus the energy, of radiation dramatically over cosmological-sized distances. This eliminates curvature over such distances, explaining the lack of gravitational deceleration in supernova data. This is falsely explained by the mainstream by adding an epicycle, i.e.,
(gravitational deceleration without redshift of gravitons in general relativity) + (acceleration due to small positive cosmological constant due to some kind of dark energy) = (observed, non-decelerating, recession of supernovae)
instead of the simpler quantum gravity explanation (predicted in 1996, two years ahead of observation):
(general relativity with G falling for large distances due to redshift of exchange gravitons reducing the energy of gravitational interactions) = (observed, non-decelerating, recession of supernovae).
So there is no curvature of spacetime at extremely big distances! On small scales, too, general relativity is false, because the tensor describing the source of gravity uses an average density to smooth out the real discontinuities resulting from the quantized, discrete nature of the particles which have mass. The smoothness of curvature in general relativity is false in general on small scales due to the input assumption required for the stress-energy tensor to work (it is a summation of continuous differential terms, not discrete terms for each fundamental particle). So on both very large and very small scales, general relativity is a fiddle. But this is not a problem when you understand the physical dynamics and know the limitations of the theory. It only becomes a problem when people take a lot of discrete fundamental particles representing a real mass causing gravity, average their masses over space to get an average density, calculate the curvature from the average density, get a smooth result, and then claim that this proves that curvature is really smooth on small scales. Of course it isn’t. That argument is like averaging the number of kids per household, getting 2.5, and then claiming that the average proves that one third of kids are born with only half of their bodies. But there is also a problem with quantum gravity as usually believed (see the previous post, and also this comment, on the Cosmic Variance blog, by Professor John Baez).
Symmetry groups which include gravity
We will show how you can make checkable predictions for quantum gravity in this post. In the previous two posts, here and here, the inclusion of gravity in the Standard Model was shown to require a change of the electroweak force SU(2) x U(1) to SU(2) x SU(2), where the three electroweak gauge bosons (W_{+}, W_{–}, and Z_{0}) occur in both short-ranged massive versions and massless, infinite-range versions, with the charged ones producing electromagnetic force and the neutral one producing gravitation; the issues in calculating the outward force of the big bang were also described there. Depending on how the Higgs mechanism for mass will be modified, this SU(2) x SU(2) electroweak-gravity may be replaceable by a new version of a single SU(2). In the existing Standard Model, SU(3) x SU(2) x U(1), only one handedness of fundamental particles responds to the SU(2) weak force, so changing the electroweak groups SU(2) x U(1) to SU(2) x SU(2) can lead to a different way of understanding chiral symmetry and electroweak symmetry breaking. See also this earlier post, which discusses quantum force effects as Hawking radiation emissions.
The understanding of the correct symmetry model behind the Standard Model requires a physical understanding of what quarks are, how they arise, etc. For instance, bring 3 electrons close together and you start getting problems with the exclusion principle. But if you could somehow force a triad of such particles together, the net charge would be 3 times stronger than normal, so the vacuum shielding veil of polarized pair-production fermions will also be 3 times stronger, shielding the bare core charges 3 times more efficiently. (Imagine it like 3 communities combining their separate castles into one castle with walls 3 times thicker. The walls provide 3 times as much shielding; so as long as they can all fit inside the reinforced castle, all benefit.) This means that the long-range (shielded) charge from each of the three charges of the triad will be 1/3 instead of 1. Since pair-production, and the polarization of electric charges cancelling out part of the electric field, are experimentally validated phenomena, this mechanism for fractional charges is real. Obviously, while it is easy to explain the down quark this way, you need a detailed knowledge of electroweak phenomena like the weak charges of quarks compared to leptons (which have chiral features), and also the strong force, to explain physically what is occurring with up quarks that have a +2/3 charge. Some interesting although highly abstract mathematical assaults on trying to understand particles have been made by Dr Peter Woit in http://arxiv.org/abs/hep-th/0206135, which generates all the Standard Model particles using a U(2) spin representation (see also his popular non-mathematical introduction, Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics), which can be compared to the more pictorial preon models of particles advocated by loop quantum gravity theorists like Dr Lee Smolin.
Both approaches are suggesting that there is a deep simplicity, with the different quarks, leptons, bosons and neutrinos arising from a common basic entity by means of symmetry transformations or twists of braids:
‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ Wiki. (Hence there is a simple relationship between leptons and fermions; more later on.)
Introduction to the basis for the dynamics of quantum gravity
You can treat the empirical Hubble recession law, v = HR, as describing a variation in velocity with respect to observable distance R, or as a variation of velocity with respect to time past, because as we look to greater distances in the universe, we’re seeing an earlier era, because of the time taken for the light to reach us. That’s spacetime: you can’t have distance without time. Because distance R = ct where c is the velocity of light and t is time, Hubble’s law can be written v = HR = Hct which clearly shows a variation of velocity as a function of time! A variation of velocity with time is called acceleration. By Newton’s 2nd law, the acceleration of matter produces force. This view of spacetime is not new:
‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.
To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^{2}R.
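Putting an assumed modern value of the Hubble parameter (roughly 70 km/s/Mpc, an illustrative figure, not one from the text) into a = H^{2}R at the Hubble radius R = c/H shows the scale of this acceleration:

```python
# Assumed Hubble parameter of about 70 km/s/Mpc
c = 299792458.0          # m/s
Mpc = 3.0857e22          # m per megaparsec
H = 70e3 / Mpc           # s^-1

# a = H^2 * R; at the Hubble radius R = c/H this is simply a = H*c
a = H * c
print(a)  # about 7e-10 m/s^2
```

The acceleration is tiny, which is why it only shows up in careful observations over cosmological distances.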
Radial distance elements are equal for the Hubble recession in all directions around us,
H = dv/dr = dv/dx = dv/dy = dv/dz
implying
t(age of universe), 1/H = dr/dv = dx/dv = dy/dv = dz/dv
implying
dv/H = dr = dx = dy = dz
1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally slowed down, with no cosmological constant to offset this gravity effect by providing a repulsive long-range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H, where 2/3 is the compensation factor for gravitational retardation.
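For concreteness, with an assumed Hubble parameter of 70 km/s/Mpc (an illustrative value), the two age estimates work out as:

```python
# Assumed Hubble parameter of 70 km/s/Mpc
H = 70e3 / 3.0857e22     # s^-1
year = 3.156e7           # s per year

age_no_gravity = 1 / (H * year)        # 1/H: about 1.4e10 years
age_critical = (2 / 3) / (H * year)    # (2/3)/H: about 9.3e9 years
print(age_no_gravity, age_critical)
```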
This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthogonal expanding timelike dimensions (we usually refer to astronomical distances in time units like ‘light-years’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To do this, we take dr = dx = dy = dz and convert them all into timelike equivalents by dividing each distance element by c, giving:
(dr)/c = (dx)/c = (dy)/c = (dz)/c
which can be written as:
dt_{r} = dt_{x} = dt_{y} = dt_{z}
So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we only need one time to describe the expansion of the universe. If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result, as some hostility to this new idea from traditionalists shows.
But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 timelike dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal: “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, Pages 161-177, which can be downloaded here. Mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions of matter are bound together but are contractible in general relativity. For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.
This sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity.
The outward motion of matter produces a force which for simplicity for the present (we will discuss correction factors for density variation and redshift effects below; see also this previous post) will be approximated by Newton’s 2nd law in the form
F = ma
= [(4/3)πR^{3}ρ].[dv/dt], where ρ is the density of the universe,
and since dR/dt = v = HR, it follows that dt = dR/(HR), so
F = [(4/3)πR^{3}ρ].[d(HR)/{dR/(HR)}]
= [(4/3)πR^{3}ρ].[H^{2}R.dR/dR]
= [(4/3)πR^{3}ρ].[H^{2}R]
= 4πR^{4}ρH^{2}/3.
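The magnitude of this outward force can be estimated by inserting assumed round values (a Hubble parameter of about 70 km/s/Mpc and an average density of roughly 9.5e-27 kg/m^3, written here as rho; neither figure is from the text):

```python
import math

# Assumed illustrative values
c = 299792458.0
H = 70e3 / 3.0857e22     # s^-1, Hubble parameter
rho = 9.5e-27            # kg/m^3, assumed average density of the universe
R = c / H                # Hubble radius, m

F = (4 / 3) * math.pi * R**4 * rho * H**2
print(F)  # on the order of 1e43 to 1e44 newtons
```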
Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – spin 1 gravitons which cause attraction by simply pushing things as this allows predictions as we shall see – from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram:
(force of gravity) = (total inward force).(cross sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s crosssectional area and the ratio R^{2}/r^{2}) / (total spherical area with radius R).
Later in this post, this will be evaluated proving that the shield’s crosssectional area is the crosssectional area of the event horizon for a black hole, π(2GM/c^{2})^{2}. But at present, to get the feel for the physical dynamics, we will assume this is the case without proving it. This gives
(force of gravity) = (4πR^{4}ρH^{2}/3).(π(2GM/c^{2})^{2}R^{2}/r^{2})/(4πR^{2})
= (4/3)πR^{4}ρH^{2}G^{2}M^{2}/(c^{4}r^{2})
We can simplify this using the Hubble law because HR = c gives R/c = 1/H so
(force of gravity) = (4/3)πρG^{2}M^{2}/(H^{2}r^{2})
This result ignores both the density variation in spacetime (the distant, earlier universe having a higher density) and the effect of redshift in reducing the energy of gravitons, weakening quantum gravity contributions from extreme distances, because the momentum of a graviton will be p = E/c, where E is reduced by redshift since E = hf.
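That the substitution HR = c really does reduce the full force expression to the simplified one can be checked numerically with arbitrary stand-in values for the symbols (the density is written as rho here):

```python
import math

# Arbitrary stand-in values, chosen only to test the algebra
rho, G, M, r, H, R = 2.0, 3.0, 5.0, 7.0, 0.25, 11.0
c = H * R    # the substitution HR = c used above

full = (4/3) * math.pi * R**4 * rho * H**2 * G**2 * M**2 / (c**4 * r**2)
simplified = (4/3) * math.pi * rho * G**2 * M**2 / (H**2 * r**2)
print(abs(full - simplified) < 1e-9 * simplified)  # True: the two forms agree
```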
Quantization of mass
However, it is significant qualitatively that this gives a force of gravity proportional not to M_{1}M_{2} but instead to M^{2}, because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. (Obviously ‘large masses’ are just composites of many fundamental particles.) M^{2} should only arise if the ultimate building blocks of mass (the ‘charge’ in a theory of quantum gravity) are quantized, because it shows that two units of mass are identical. This tells us about the way the mass-giving field particles, the ‘Higgs bosons’, operate. Instead of there being a cloud of an indeterminate number of Higgs bosons around a fermion giving rise to mass, what happens is that each fermion acquires a discrete number of such mass-giving particles.
(These ‘Higgs bosons’ surrounding the fermion acquire inertial and gravitational mass by interacting with the external gravitational field, which explains why mass increases with velocity but electric charge doesn’t. The core of a fermion doesn’t interact with the inertial/gravitational field, only with the massive Higgs bosons surrounding the core, which in turn do interact with the inertial/gravitational field. The core of the fermion only interacts with Standard Model forces, namely electromagnetism, weak force, and in the case of pairs or triads of closely confined fermions – quarks – the strong nuclear force. Inertial mass and gravitational mass arise from the Higgs bosons in the vacuum surrounding the fermion, and gravitons only interact with Higgs bosons, not directly with the fermions.)
This is explicable simply in terms of the vacuum polarization of matter and the renormalization of charge and mass in quantum electrodynamics, and is confirmed by an analysis of all relatively stable (half-life of 10^{–23} second or more) known particles, as discussed in an earlier post here (for a table of the mass predictions compared to measurements see Table 1). (Note that the simple description of polarization of the vacuum as two shells of virtual fermions, a positive one close to the electron core and a negative one further away, depicted graphically on those sites, is a simplification for convenience in depicting the net physical effect, for the purpose of understanding what is going on and making accurate calculations. Obviously, in reality, all the virtual positive fermions and all the virtual negative fermions will not be located in two shells; they will be all over the place, but on average the virtual charges of like sign to the real particle core will be further away from the core than the virtual charges of unlike sign.)
Table 1: Comparison of measured particle masses with predicted particle masses using a physical model for the renormalization of mass (both mass and electric charge are renormalized quantities in quantum electrodynamics, due to the polarization of pairs of charged virtual fermions in the electron’s strong electric field; see previous posts such as this). Anybody wanting a high quality, printable PDF version of this table can find it here. (The theory of masses here was inspired by an arXiv paper by Drs. Rivero and de Vries, and on a related topic I gather that Carl Brannen is using density operators to explain theoretically, and extend the application of, Yoshio Koide’s empirical formula, which states that the sum of the masses of the 3 leptons electron, muon and tau, multiplied by 1.5, is equal to the square of the sum of the square roots of the masses of those three particles. If that works it may well be compatible with this mass mechanism. Although the mechanism predicts the possible quantized masses fairly accurately as first approximations, it is good to try to understand better how the actual masses are picked out. The mechanism which produced the table produced a formula containing two integers which predicts a lot of particles that are too short-lived to occur. Why are some configurations more stable than others? What selection principle picks out the proton as being particularly stable – if not completely stable? We know that the nuclei of heavy elements aren’t chaotic bags of neutrons and protons, but have a shell structure to a considerable extent, with ‘magic numbers’ which determine relative stability, and which are physically explained by the number of nucleons taken to completely fill up successive nuclear shells. Probably some similar effect plays a part in the mass mechanism, so that some configurations have magic numbers which are stable, while nearby ones are far less stable and decay quickly.
This, if true of the quantized vacuum surrounding fundamental particles, would lead to a new quantum theory of such particles, with similar gimmicks explaining the original ‘anomalies’ of the periodic table, viz. isotopes explaining non-integer masses, etc.)
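The Koide relation quoted in the table caption above is easy to verify numerically with present-day measured lepton masses (assumed standard figures, in MeV):

```python
import math

# Present-day measured charged lepton masses in MeV
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86

lhs = 1.5 * (m_e + m_mu + m_tau)
rhs = (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau))**2
print(lhs, rhs)  # both close to 2825 MeV: agreement to about 1 part in 10^4
```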
This prediction doesn’t strictly demand perfect integers to be observable, because it’s possible for effects like isotopes to exist, where different individuals of the same type of meson or baryon can be surrounded by different integer numbers of Higgs field quanta, giving non-integer average masses. (The number would be likely to actually change during a high-energy interaction, where particles are broken up.)
The early attempts of Dalton and others to work out an atomic theory were regularly criticised and even ridiculed by the fact that the measured mass of chlorine is 35.5 times the mass of hydrogen, i.e., nowhere near an integer! Here is a summary of the rules:
If a particle is a baryon, its mass should in general be close to an integer when expressed in units of 105 MeV (3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV).
If it is a meson, its mass should in general be close to an integer when expressed in units of 70 MeV (2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV).
If it is a lepton apart from the electron (the electron is the most complex particle), its mass should in general be close to an integer when expressed in units of 35 MeV (1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV).
This scheme has a simple causal mechanism in the quantization of the ‘Higgs field’ which supplies mass to fermions and bosons. By itself the mechanism just predicts that mass comes in discrete units, depending on how strong the polarized vacuum is in shielding the fermion core from the Higgs field quanta.
To predict specific masses (apart from the fact they are likely to be near integers if isotopes don’t occur), regular QCD ideas can be used. This prediction doesn’t replace lattice QCD predictions, it just suggests how masses are quantized by the ‘Higgs field’ rather than being a continuous variable.
Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ quanta giving mass to that fermion (lepton or baryon) or meson (boson) core.
By analogy to the shell structure of nuclear physics, where there are highly stable or ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2 and 3, and N = 1, 2, 8 and 50 to predict the most stable masses of fermions besides the electron, and also the masses of bosons (mesons):
For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.
For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.
For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
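These four cases of the 35n(N+1) MeV formula can be tabulated against rough measured masses (the measured values are standard present-day figures, not from the text):

```python
# The mass formula above: mass = 35 * n * (N + 1) MeV
def predicted_mass(n, N):
    return 35 * n * (N + 1)

# (n, N) values from the text, with rough measured masses (MeV) for comparison
cases = [
    ("muon",    1, 2,  105.7),
    ("pion",    2, 1,  139.6),
    ("nucleon", 3, 8,  938.9),   # average of proton and neutron
    ("tauon",   1, 50, 1776.9),
]
for name, n, N, measured in cases:
    print(name, predicted_mass(n, N), measured)
```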
Particle mass predictions: the gravity mechanism implies quantized unit masses. As proved, the 1/a = 137.036… number is the electromagnetic shielding factor for any particle core charge by the surrounding polarised vacuum.
This shielding factor is obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order of hbar. The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et, where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., the product of energy and time equalling hbar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high-energy particle phenomenology. Now for the slightly clever bit:
px = hbar implies (when remembering p = mc, and E = mc^{2}):
x = hbar/p = hbar/(mc) = hbar*c/E
so E = hbar*c/x
when using the classical definition of energy as force times distance (E = Fx):
F = E/x = (hbar*c/x)/x
= hbar*c/x^{2}.
So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse-square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of 1/a = 137.036… The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more.
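The 137.036 shielding factor follows because x^{2} cancels in the ratio of hbar*c/x^{2} to Coulomb’s law, leaving 4*pi*eps0*hbar*c/e^{2} = 1/alpha; a quick numerical check with standard constants:

```python
import math

# Standard physical constants
hbar = 1.054571817e-34    # J*s
c = 299792458.0           # m/s
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m

x = 1e-15   # m; any distance works, since x^2 cancels in the ratio
f_bare = hbar * c / x**2                        # force between bare cores
f_coulomb = e**2 / (4 * math.pi * eps0 * x**2)  # screened (Coulomb) force
print(f_bare / f_coulomb)  # 137.036..., i.e. 1/alpha
```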
One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance. However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.
It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistically, scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)
Experimental evidence:
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arXiv hep-th/0510040, p. 71.
Plus, in particular:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
(Levine and Koltick experimentally found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of 91 GeV or so. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at 80 GeV or so. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made of, the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law value, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. So the strong force falls off in strength as you get closer by higher-energy collisions, while the electromagnetic force increases! Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.)
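The quoted 7% figure is just the ratio of the two coupling values:

```python
alpha_low = 1 / 137.0     # low-energy electromagnetic coupling
alpha_high = 1 / 128.5    # coupling measured near the weak scale
increase = alpha_high / alpha_low - 1
print(increase)  # about 0.066, i.e. roughly the quoted 7%
```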
Related to this exchange radiation are the Feynman path integrals of quantum field theory:
‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ – Prof. Clifford V. Johnson’s comment, here
‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerlymysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here
As for the indeterminacy of electron locations in the atom, the fuzzy picture is not a result of multiple universes interacting but simply the Poincaré many-body problem, whereby Newtonian physics fails when you have more than 2 bodies of similar mass or charge interacting at once: you lose deterministic solutions to the equations and must resort instead to statistical descriptions like the Schroedinger equation. Moreover, the annihilation-creation operators of quantum field theory produce many pairs of charges, random in location and time, in strong fields, deflecting particle motions chaotically on small scales, similarly to Brownian motion; this is the ‘hidden variable’ causing indeterminacy in quantum theory, not multiverses or entangled states. Entanglement is a false physical interpretation of Aspect’s (and related) experiments: Heisenberg’s uncertainty principle only applies to slower-than-light particles like massive fermions. Aspect’s experiment stems from the Einstein-Podolsky-Rosen suggestion to measure the spins of two molecules; if they correlate in a certain way, that would prove entanglement, because molecular spins are subject to the indeterminacy principle. Aspect used photons instead of molecules. Photons cannot change polarization when measured, as they are frozen in nature due to their velocity, c. Therefore, the correlation of photon polarizations observed merely confirms that Heisenberg’s uncertainty principle does not apply to photons, rather than implying (on the belief that Heisenberg’s uncertainty principle does apply to photons) that the photons ‘must’ have an entangled polarization until measured! Aspect’s results in fact discredit entanglement.
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
Gravity is basically a boson shielding effect, while the errors of LeSage’s infamous pushing-gravity model are due to its fermion radiation assumptions, which is why it got nowhere. Gravity is a massless boson – integer spin – exchange radiation effect. LeSage (or rather Fatio, whose ideas LeSage borrowed, allegedly by plagiarism) assumed that very small material particles – fermions in today’s language – were the force-causing exchange radiation (discussed by Feynman in the video here). Massless bosons don’t obey the exclusion principle, and they don’t interact with one another as massive bosons and all fermions do (fermions obey the exclusion principle, so they always interact with one another). Hence, LeSage’s attractive force mechanism is only valid for short-ranged particles like pions, which produce the strong nuclear attractive force between nucleons. Therefore, the ‘errors’ people found in the past when trying to use LeSage’s mechanism for gravity – the mutual interactions between the particles which equalize the force in the shadow region after a mean free path – don’t apply to bosonic radiation, which doesn’t obey the exclusion principle. The short range of LeSage’s mechanism becomes an advantage in explaining the pion-mediated strong nuclear force: the attraction is predicted to have a short range, on the order of a mean free path of scatter, before radiation pressure equalization in the shadows quenches the attractive force. This short range is real for nuclear forces, but not for gravity or electromagnetism:
(Source: http://www.mathpages.com/home/kmath131/kmath131.htm.)
The Fatio-LeSage mechanism is useless because it makes no prediction whatsoever for the strength of gravity, and it is plainly wrong because it assumes gas molecules or fermions are the exchange radiation, instead of gauge bosons. Its failure is that the gravity force would be short-ranged, since the material pressure of the fermion particles (which bounce off each other due to the Pauli exclusion principle) or gas molecules causing gravity would get diffused into the shadows within a short distance; just as air pressure is only shielded by a solid for a distance on the order of a mean free path of the gas molecules. Hence, to get a rubber suction cup to be pushed strongly against a wall by air pressure, the wall must be smooth, and the cup must be pushed on firmly. Such a short-ranged attractive force mechanism may be useful in making pion-mediated Yukawa strong nuclear force calculations, but it is not gravity.
(Some of the ancient objections to LeSage are plainly wrong and in contradiction of Yang-Mills theories such as the Standard Model. For example, it was alleged that gravity couldn’t be the result of an exchange radiation force because the exchange radiation would heat up objects until they all glowed. This is wrong because the mechanisms by which radiation interacts with matter don’t necessarily transfer that energy into heat; classically all energy is usually degraded to waste heat in the end, but the gravitational field energy cannot be directly degraded to heat. Masses don’t heat up just because they are exchanging radiation, the gravitational field energy. If you drop a mass and it hits another mass hard, substantial heat is generated, but this is an indirect effect. Basically, many of the arguments against physical mechanisms are bogus. For an object to heat up, the charged cores of its electrons must gain and radiate heat energy; but the gravitational gauge boson radiation isn’t being exchanged with the electron bare core. Instead, the fermion core of the electron has no mass, and since quantum gravity charge is mass, the lack of mass in the core of the electron means it can’t interact with gravitons. The gravitons interact with vacuum particles like ‘Higgs bosons’, which surround the electron core and produce inertial and gravitational forces indirectly. The electron core couples to the ‘Higgs boson’ by electromagnetic field interactions, while the ‘Higgs boson’ at some distance from the electron core interacts with gravitons. This indirect transfer of force can smooth out the exchange radiation interactions, preventing that energy from being degraded into heat. So such objections – if correct – would also have to debunk the Standard Model, which is based on Yang-Mills exchange radiation and is well tested experimentally.
Claiming that exchange radiation would heat things up until they glowed is similar to the Ptolemy followers claiming that if the Earth rotated daily, clouds would fly over the equator at 1,000 miles/hour and people would be thrown off the ground! It’s a political-style junk objection and doesn’t hold up to any close examination in comparison to experimentally determined scientific facts.)
When a mass-giving black hole (gravitationally trapped) Z-boson of 91 GeV energy (this is the Higgs particle) is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have alpha shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is:
M_Z α²/(1.5 × 2π) = 0.51 MeV
If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside, and share, the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass:
M_Z α/(2π) = 105.7 MeV
The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is:
M_e n(N + 1)/(2α) = 35n(N + 1) MeV.
(For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles, like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest this be dismissed as ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember that unlike Dalton we have a physical mechanism, and below we make additional predictions and tests for all the other observable particles in the universe, comparing the results to experimental measurements. There is a similarity in physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + α/(2π) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here. Also note that Schwinger’s correction is only the first (but by far the biggest numerically, and thus the most important, allowing the magnetic moment to be accurately predicted to 6 significant figures) of an infinite series of correction terms involving higher powers of α for more complex vacuum field interactions. Each of these corrections is depicted by a different Feynman diagram. (Basically, quantum field theory is a mathematical correction for the probability of different reactions. The more classical and obvious processes generally have the greatest probability by far, but stranger interactions occasionally occur in addition, so these also need to be included in calculations, which then give predictions that are statistically very accurate.)
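The mass formulas quoted above are easy to check arithmetically. The sketch below assumes the 91 GeV Z-boson energy quoted in the text and α = 1/137.036:

```python
import math

ALPHA = 1 / 137.036
M_Z = 91000.0  # MeV: the 91 GeV trapped Z-boson energy quoted in the text

# Electron: two alpha shielding factors plus the 1.5 geometry factor
m_electron = M_Z * ALPHA**2 / (1.5 * 2 * math.pi)
# Muon: core and trapped Z-boson share one polarised veil, one alpha factor
m_muon = M_Z * ALPHA / (2 * math.pi)
# The 35 MeV building block of the general formula, M_e/(2 alpha)
block = 0.511 / (2 * ALPHA)

print(round(m_electron, 3), round(m_muon, 1), round(block, 1))
```

This reproduces the figures in the text: about 0.514 MeV for the electron (measured 0.511 MeV), 105.7 MeV for the muon, and 35.0 MeV for the building block.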
This kind of gravitational calculation also allows us to predict the gravitational coupling constant, G, as will be proved below. We know that the inward force is carried by gauge boson radiation, because all forces are due to gauge boson radiation according to the Standard Model of particle physics, which is the best-tested physical theory of all time and has made thousands of accurately confirmed predictions from an input of just 19 empirical parameters (don’t confuse this with the bogus supersymmetric standard model, which even in its minimal form requires 125 adjustable parameters and has a large landscape of possibilities, making no definite or precise predictions whatsoever). The Standard Model is a Yang-Mills theory in which the exchange of gauge bosons between the relevant charges for each force (i.e., colour charges for quantum chromodynamic forces, electric charges for electric forces, etc.) causes the force.
What happens is that YangMills exchange radiation pushes inward, coming from the surrounding, expanding universe. Since spacetime, as recently observed, isn’t boundless (there’s no observable gravity retarding the recession of the most distant galaxies and supernovae, as discovered in 1998, and so there is no curvature at the greatest distances), the universe is spherical and is expanding without slowing down. The expansion is caused by the physical pressure of the gauge boson radiation. This radiation exerts momentum p = E/c. Gauge boson radiation is emitted towards us by matter which is receding: the reason is Newton’s 3rd law. Because, as proved above, the Hubble recession in spacetime is an acceleration of matter outwards, the matter receding has an outward force by Newton’s 2nd empirical law F = ma, and this outward force has an equal and opposite reaction, just like the exhaust of a rocket. The reaction force is carried by gauge boson radiation.
What, you may ask, is the mechanism behind Newton’s 3rd law in this case? Why should the outward force of the universe be accompanied by an inward reaction force? I dealt with this in a paper in May 1996, made available via the letters page of the October 1996 issue of Electronics World. Consider the source of gravity, the gravitational field (actually gauge boson radiation), to be a frictionless perfect fluid. As lumps of matter, in the form of the fundamental particles of galaxies, accelerate away from us, they leave in their wake a volume of vacuum which was previously occupied but is now unoccupied. The gravitational field doesn’t ignore spaces which are vacated when matter moves: instead, the gravitational field fills them. How does this occur?
What happens is like the situation when a ship moves along. It doesn’t suck in water from behind it to fill its wake. Instead, water moves around from the front to the back. In fact, there is a simple physical law: a volume of water equal to the ship’s displacement moves continuously in the opposite direction to the ship’s motion.
This water fills in the void behind the moving ship. For a moving particle, the gravitational field of spacetime does the same. It moves around the particle. If it did anything else, we would see the effects of that: for example, if the gravitational field piled up in front of a moving object instead of flowing around it, the pressure would increase with time and there would be drag on the object, slowing it down. The fact that Newton’s 1st law, inertia, is empirically based tells us that the vacuum field does flow frictionlessly around moving particles instead of slowing them down. The vacuum field does, however, exert a net force when an object accelerates; this increases the mass of the object and causes a flattening of the object in the direction of motion (FitzGerald-Lorentz contraction). However, this is purely a resistance to acceleration; there is no drag on motion unless the motion is accelerative.
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
(Consider motion in the Dirac sea, which is incompressible owing to the Pauli exclusion principle: all states are filled. This picture predicted antimatter successfully, yet nobody wants to hear of modelling the physical effects of particles moving in the Dirac sea. Why? A good analogy is the particle-and-hole theory used in semiconductor electronics, i.e., solid state physics. Now plug in cosmology: both positive and negative real charges are streaming away from us in all directions. Hence virtual charges in the Dirac sea will stream inward. Moving fermions can’t occupy the same space as virtual fermions; they get shoved out of the way by the Pauli exclusion principle. It is pretty obvious that the outward force of matter in the expanding universe is balanced by an equal inward Dirac sea force, according to Newton’s 3rd law. Similarly, in a corridor, a person of 70 litre volume moving at velocity v is compensated for by 70 litres of fluid air moving at the same speed but in the opposite direction to the person’s motion. This is pretty obvious because if the surrounding fluid didn’t displace around the person to fill in the volume they are vacating, there would be a vacuum left behind them, and the 14.7 psi air pressure in front would exert 144 × 14.7 ~ 2,100 pounds of force per square foot on the person, which would prevent the person being able to walk. It is absolutely crucial for the person that air is a fluid which flows around them and fills in the space being vacated behind. The lack of this mechanism explains why you need to apply substantial force to remove large suction plungers from smooth surfaces against air pressure. However, to my cost, I have found that this argument using the air pressure analogy or Dirac sea analogy is fruitless. Mainstream crackpots claim that it is all wrong, and by deliberately misunderstanding the analogy they can create endless rows which have nothing to do with the point, the gravitational mechanism.
As an analogy to this misunderstanding of a simple point, think about Feynman’s remark that energy was misunderstood even by the author of physics school textbook who claimed that ‘energy’ makes everything go. Taking up Feynman’s argument, if you calculate the energy of the air in your room, the air molecules have a mean velocity of about 500 m/s, and there is 1.2 kg of air per cubic metre of your room. Let’s say you are in a small room with 10 cubic metres of air in it, 12 kg of air. The kinetic energy that air possesses is half the mass multiplied by the square of the mean speed, i.e., 1.5 MJ. However, that ‘energy’ is useless to you unless you have a way of extracting it. You can’t power your laptop from the energy of air pressure and temperature! You could of course use it like a battery if you had a big vacuum chamber with a fan powering a generator at a hole in the wall of the vacuum chamber, so that the inrushing air would turn the fan and generate electricity. But the power it takes to create such a vacuum is more than the energy you can possibly get out of the collapsing vacuum. So the simple idea of ‘energy’ is misleading to mainstream crackpots. What counts is not gross energy, but usable energy! This is why most of the gauge boson radiation energy has nothing to do with the energy we use. Because the gauge boson radiation energy, such as ‘gravitons’, comes from all directions, most of it is not useful energy and cannot do work. Only the small asymmetries in it result in work, by creating the fundamental forces we experience!)
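The room-air arithmetic in the paragraph above checks out directly:

```python
# 10 m^3 of air at ~1.2 kg/m^3, molecular mean speed ~500 m/s
mass_air = 1.2 * 10                        # kg of air in the room
kinetic_energy = 0.5 * mass_air * 500**2   # joules
print(kinetic_energy / 1e6)                # 1.5 (MJ), as stated in the text
```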
‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp. 32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)
Fig. 2: The general all-round pressure from the gravitational field does of course produce physical effects. The radiation is received by a mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is a compression of the mass by the amount mathematically predicted by general relativity, i.e., a radial contraction of the small distance GM/(3c²) = 1.5 mm for the Earth; this was calculated by Feynman using general relativity in his famous Feynman Lectures on Physics. The reason why nearby, local masses shield the force-carrying radiation exchange, causing gravity, is that the distant masses in the universe are in high-speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law, the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m dv/dt = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from a local, non-receding mass is also zero; there is no action, and so there is no reaction. As a result, the local mass shields you rather than exchanging gauge bosons with you, so you get pushed towards it. This is why apples fall.
There is very little shielding area (fundamental particle shielding cross-sectional areas are small compared to the Earth’s area), so the Earth doesn’t block all of the gauge boson radiation being exchanged between you and the masses in the receding galaxies beyond the far side of the Earth. The shielding by the Earth is done by the fundamental particles in it, specifically the particles which give rise to mass (supposed to be some form of Higgs bosons which surround fermions, giving them mass) by interacting with the gravitational field of exchange radiation. Although each local fundamental particle stops the gauge boson radiation completely over its shielding cross-sectional area, most of the Earth’s volume is devoid of fundamental particles because they are so small. Consequently, the Earth as a whole is an inefficient shield. There is little probability of different fundamental particles in the Earth being directly behind one another (i.e., overlapping of shielded areas) because they are so small. Consequently, the gravitational effect from a large mass like the Earth is just the simple sum of the contributions from the fundamental particles which make up the mass, so the total gravity is proportional to the number of particles, which is proportional to the mass.
The point is that nearby masses, which are not receding from you significantly, don’t fire gauge boson radiation towards you, because there is no reaction force! However, they still absorb gauge bosons, so they shield you, creating an asymmetry. You get pushed towards such masses by the gauge bosons coming from the direction opposite to the mass. For example, standing on the Earth, you get pushed down by the asymmetry; the upward beam of gauge bosons coming through the Earth is very slightly shielded. The shielding effect is very small, because it turns out that the effective cross-sectional shielding area of an electron (or other fundamental particle) for gravity is equal to πR², where R = 2GM/c² is the event horizon radius of an electron. This is a result of the calculations, as is a prediction of the Newtonian gravitational parameter G! Now let’s prove it.
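Feynman’s 1.5 mm radial contraction of the Earth and the electron’s black-hole cross-section quoted above can both be evaluated directly, using standard values for G, c and the masses:

```python
import math

G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
M_earth = 5.972e24      # kg
m_electron = 9.109e-31  # kg

# Feynman's radial contraction GM/(3c^2) for the Earth
contraction = G * M_earth / (3 * c**2)   # metres; ~1.5e-3
# Event horizon radius and cross-section for an electron-mass black hole
r_h = 2 * G * m_electron / c**2          # ~1.35e-57 m
area = math.pi * r_h**2                  # ~5.7e-114 m^2

print(contraction * 1000, r_h, area)
```

The minuscule cross-section is why, as the text says, the Earth as a whole is an extremely inefficient shield and overlap between particles is negligible.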
Approach 1
Referring to Fig. 1 above, we can evaluate the gravity force, which is the proportion of the total force indicated by the dark-shaded cone (the observer is in the middle of the diagram, at the apex of each cone). The force of gravity is not simply the total inward force, which is equal to the total outward force. Gravity is only the proportion of the total force which is represented by the dark cone.
The total force, as proved above, is F = 4πR⁴ρH²/3, where ρ is the density of the universe. The fraction of this which is represented by the dark cone is equal to the volume of the cone, XR/3 (where X is the area of the end of the cone), divided by the volume, 4πR³/3, of the sphere of radius R (the radius of the observable spacetime universe, defined by R = ct = c/H). Hence,

Force of gravity = (4πR⁴ρH²/3)(XR/3)/(4πR³/3)

= R²ρH²X/3,

where the area of the end of the cone, X, is observed in Fig. 1 to be geometrically equal to the area of the shield, A, multiplied by (R/r)²:

X = A(R/r)².

Hence the force of gravity is R²ρH²[A(R/r)²]/3

= (1/3)R⁴ρH²A/r².
(Of course you get exactly the same result if you take the fraction of the total force delivered in the cone to be the area of the base of the cone, X, divided by the surface area, 4πR², of the sphere of radius R.)
If we assume that the shield area is A = π(2GM/c²)², i.e., the cross-sectional area of the event horizon of a black hole, then setting the formula above for the force of gravity equal to the Newtonian law, F = mMG/r², with m = M and c/R = H, gives the prediction

G = (3/4)H²/(πρ).
This is of course equal to twice the false amount you get from rearranging the ‘critical density’ formula of general relativity (without a cosmological constant), but what is more interesting is that we do not need to assume that the shield area is A = π(2GM/c²)². The critical density formula, and other cosmological applications of general relativity, are false because they ignore the quantum gravity dynamics which become important on very large scales due to the recession of masses in the universe; the gravitational interaction is a product of the cosmological expansion, since both are caused by gauge boson exchange radiation. (The radiation pushes masses apart over large, cosmological distance scales, while pushing things together on small scales. The uniform gauge boson pressure between masses causes them to recede from all surrounding masses and fill the expanding volume of space, like raisins in an expanding cake receding from one another, where the gauge boson radiation pressure is represented by the pressure of the dough as it expands. There is no contradiction whatsoever between this effect and local gravitational attraction, which occurs when two currants are close enough that there is no dough between them and plenty of dough around them, pushing them towards one another like gravity.)
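The algebra of Approach 1 can be checked numerically: pick arbitrary trial values for H, the density, the shield mass and the distance (the numbers below are illustrative, not measured data), set G = (3/4)H²/(πρ), and confirm that the cone-geometry force with the black-hole shield area reproduces the Newtonian force exactly:

```python
import math

# Arbitrary trial values: a pure consistency check, not real data
H = 2.3e-18        # s^-1
rho = 9.0e-27      # kg/m^3
c = 3.0e8          # m/s
M = 5.0e24         # kg, shield mass
r = 7.0e6          # m, distance to shield

G = 0.75 * H**2 / (math.pi * rho)    # the derived prediction for G
R = c / H                            # radius of observable universe
A = math.pi * (2 * G * M / c**2)**2  # black-hole event horizon shield area

F_mechanism = (R**4 * rho * H**2 * A / r**2) / 3  # (1/3) R^4 rho H^2 A / r^2
F_newton = G * M * M / r**2                       # Newton with m = M

print(F_mechanism / F_newton)  # 1.0 (to rounding) when G = (3/4)H^2/(pi rho)
```

Changing the trial values of M and r leaves the ratio at 1, which is just the statement that the derivation is self-consistent.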
We get the same result by an independent method, which does not assume that the shield area is the event horizon cross section of a black hole. Now we shall prove it.
Approach 2
As in the above approach, the outward force of the universe is 4πR⁴ρH²/3, and there is an equal inward force. The fraction of the inward force which is shielded is now calculated as the mass, Y, of those atoms in the shaded cone in Fig. 1 which actually emit the gauge boson radiation that hits the shield, divided by the mass of the universe.
The important thing here is that Y is not simply the total mass of the universe in the shaded cone. (If it were, Y would be the density of the universe multiplied by volume of the cone.)
That total mass inside the shaded cone of Fig.1 is not important because part of the gauge boson radiation it emits misses the shield, because it hits other intervening masses in the universe. (See Fig. 3.)
The mass in the shaded cone which actually produces the gauge boson radiation which we are concerned with (that which causes gravity) is equal to the mass of the shield multiplied up geometrically by the ratio of the area of the base of the cone to the area of the shield, i.e., Y = M(R/r)^{2}, because of the geometric convergence of the inward radiation from many masses within the cone towards the center. This is illustrated in Fig. 3.
Hence, the force of gravity is:
(4πR⁴ρH²/3)Y/[mass of universe]

= (4πR⁴ρH²/3)[M(R/r)²]/(4πR³ρ/3)

= R³H²M/r².

Comparing this to Newton’s law F = mMG/r² gives us

G = R³H²/[mass of universe]

= (3/4)H²/(πρ).
Fig. 3: The mass multiplication scheme basis of Approach 2.
So we get precisely the same result as in the previous method, where we assumed that the shield area of an electron was the cross-sectional area of the black hole event horizon! This result for G has been produced entirely without any assumption about what numerical value to take for the shielding cross-sectional area of a particle, yet it is the same as the result derived when assuming that a fundamental particle has a shielding cross-sectional area for gravity-causing gauge boson radiation equal to the event horizon of a black hole. Hence, this result justifies and substantiates that assumption. We get two major quantitative results from this study of quantum gravity: a formula for G, and a formula for the cross-sectional area of a fundamental particle in gravitational interactions.
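Plugging numbers into the result is straightforward. Assuming H ≈ 70 km/s/Mpc (about 2.27 × 10⁻¹⁸ s⁻¹) and the measured G, the formula G = (3/4)H²/(πρ) fixes a density which, as stated above, is exactly twice the conventional critical density:

```python
import math

H = 2.27e-18   # s^-1, assuming H ~ 70 km/s/Mpc
G = 6.674e-11  # measured Newtonian gravitational constant

# Density required by the (uncorrected) formula G = (3/4)H^2/(pi rho)
rho_required = 3 * H**2 / (4 * math.pi * G)   # ~1.8e-26 kg/m^3
# Conventional critical density 3H^2/(8 pi G), for comparison
rho_critical = 3 * H**2 / (8 * math.pi * G)

print(rho_required, rho_required / rho_critical)  # ratio is exactly 2
```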
The exact formula for G, including photon redshift and density variation
The toy model above began by assuming that the inward force carried by the gauge boson radiation is identical to the outward force represented by the simple product of mass and acceleration in Newton’s 2nd law, F = ma. In fact, taking the density of the universe to be the local average around us (at a time of 14,000 million years after the big bang) is an error, because the density increases as we look back in time with increasing distance, seeing earlier epochs which have higher density. This effect tends to increase the effective outward force of the universe, by increasing the density. In fact, the effective mass would go to infinity unless there were another factor tending to reduce the force imparted by gravity-causing gauge bosons from the greatest distances. This second effect is redshift. The problem of how to evaluate the extent to which these two effects partly offset one another is discussed in detail in an earlier post on this blog, here. It is shown there that the effective inward force should take some more complex form, so that the inward force is no longer simply F = ma but some integral (depending on the way that the redshift is modelled, and there are several alternatives) like
F = ma = mH²r

= ∫(4πr²ρ)(1 − rHc⁻¹)⁻³(1 − rHc⁻¹)H²r [1 + {1.1 × 10¹³ (H⁻¹ − r/c)}⁻¹]⁻¹ dr

= 4πρc² ∫ r [{c/(Hr)} − 1]⁻² [1 + {1.1 × 10¹³ (H⁻¹ − r/c)}⁻¹]⁻¹ dr.
Here ρ is the local density, i.e., the density of spacetime at 14,000 million years after the big bang, and r is distance. I have not completed the evaluation of such integrals (some of them give an infinite answer, so it is possible to rule those out as either wrong or as missing some essential factor in the model). However, an earlier idea, which takes account of the rise in density with increasing spacetime distance around us while treating the redshift as due to the divergence of the universe, is to set up a more abstract model.
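The divergence just mentioned, where some forms of the integral give an infinite answer, is easy to exhibit numerically. The sketch below integrates only the density-rise and redshift factors (net factor (1 − rH/c)⁻²), with assumed illustrative values for ρ, H and c and omitting the cutoff term, and shows the force growing without limit as the upper limit approaches the horizon r = c/H:

```python
import math

rho, H, c = 9.0e-27, 2.3e-18, 3.0e8   # assumed illustrative values
R = c / H                             # horizon radius

def integrand(r):
    # mass shell 4 pi r^2 rho, density rise (1 - rH/c)^-3,
    # redshift factor (1 - rH/c), acceleration H^2 r:
    # net (1 - rH/c)^-2 times 4 pi rho H^2 r^3
    return 4 * math.pi * rho * r**2 * (1 - r * H / c)**-2 * H**2 * r

def force(frac, n=100000):
    """Trapezoidal integral of the integrand from 0 to frac*R."""
    upper = frac * R
    h = upper / n
    total = 0.5 * (integrand(0.0) + integrand(upper))
    total += sum(integrand(i * h) for i in range(1, n))
    return total * h

for frac in (0.9, 0.99, 0.999):
    print(frac, force(frac))  # grows without limit as frac -> 1
```

Each step of the upper limit towards the horizon multiplies the result by roughly an order of magnitude, confirming that a cutoff factor of some kind is needed in the full integral.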
Density variation with spacetime, and the divergence of matter in the universe (causing the redshift of gauge bosons by an effect which is quantitatively similar to the gauge boson radiation being ‘stretched out’ over the increasing volume of space while in transit between receding masses in the expanding universe), can be modelled by the well-known equation of mass continuity (based on the conservation of mass in an expanding gas, etc.):
dρ/dt + ∇·(ρv) = 0

Or: dρ/dt = −∇·(ρv)

where the divergence term is

∇·(ρv) = [{d(ρv)_x/dx} + {d(ρv)_y/dy} + {d(ρv)_z/dz}]

For the observed spherical symmetry of the universe we see around us,

d(ρv)_x/dx = d(ρv)_y/dy = d(ρv)_z/dz = d(ρv)_R/dR

where R is radius.

Now we insert the Hubble equation, v = HR:

dρ/dt = −∇·(ρv) = −∇·(ρHR) = −[{d(ρHR)/dR} + {d(ρHR)/dR} + {d(ρHR)/dR}]

= −3d(ρHR)/dR

= −3ρH dR/dR

= −3ρH.

So dρ/dt = −3ρH. Rearranging:

−3H dt = (1/ρ) dρ. Integrating:

∫−3H dt = ∫(1/ρ) dρ.

The solution is:

3Ht = (ln ρ₁) − (ln ρ),

where ρ₁ is the earlier, higher density and ρ is the present local density. Using the base of natural logarithms, e, to get rid of the logarithms:

e^(3Ht) = ρ₁/ρ

Because H = v/R = c/[radius of universe, R] = 1/[age of universe, t] = 1/t, we find:

e^(3Ht) = ρ₁/ρ = e^(3(1/t)t) = e³.

Therefore

ρ₁ = ρe³ ≈ 20.0855ρ.
Therefore, if this analysis is a correct abstract model for the combined effect of graviton redshift (due to the effective ‘stretching’ of radiation as a result of the divergence of matter across spacetime caused by the expansion of the universe) and the density variation of the universe across spacetime, our earlier result of G = (3/4)H²/(πρ) should be corrected, for spacetime density variation and redshift of gauge bosons, to:

G = (3/4)H²/(πρe³),

which is a factor of ~10 smaller than the rearranged traditional ‘critical density’ formula of general relativity, G = (3/8)H²/(πρ). Therefore, this theory predicts gravity quantitatively and checkably, and it dispenses with the need for an enormous amount of unobserved dark matter. (There is clearly some dark matter, since neutrinos are known to have some mass, but this can be assessed from the rotation curves of spiral galaxies and other observational checks.)
Experimental confirmation of the black hole size as the cross-sectional area for fundamental particles in gravitational interactions
In addition to the theoretical evidence above, there is independent experimental evidence. If the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and its magnetic field is a torus (see Electronics World, April 2003).
Experimental evidence for why an electromagnetic field can produce gravity effects involves the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right-hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since each electron’s kinetic energy is half the product of its mass and the square of its drift velocity (typically around 1 mm/s for a 1 A current).
So the energy current flows into the capacitor at light speed. Take the simplest capacitor: two parallel conductors separated by a vacuum dielectric (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out, since the reflected energy has a magnetic field vector curling the opposite way to that of the inflowing energy. (If you have a fully charged, ‘static’ conductor, it contains an equilibrium of similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field, as observed.)
The important thing is that the energy keeps going at light velocity in a charged conductor: it can’t ever slow down. This is important because it proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).
You can prove this if you discharge any conductor x metres long, charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse with a height of v/2 volts and a duration of 2x/c seconds. The apparently ‘static’ energy at v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must proceed and reflect from one end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long (2x/c seconds in duration), instead of a pulse at v volts which is x metres long (x/c seconds in duration), as you would expect if the electromagnetic energy in the capacitor were static and drained out at light velocity by all flowing towards the exit.
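The pulse arithmetic above is trivial to encode. A sketch in Python, where the 3 m length and 10 V charge are purely illustrative values of my own, not from the original experiments:

```python
c = 299_792_458.0  # speed of light, m/s: the energy current velocity for a vacuum dielectric

def discharge_pulse(length_m, volts):
    """Pulse observed when a conductor of the given length, charged to the
    given voltage, is discharged from one end: half the voltage, lasting
    the time light takes to traverse twice the length."""
    return volts / 2.0, 2.0 * length_m / c

# Illustrative: a 3 m conductor charged to 10 V gives a 5 V pulse of ~20 ns.
height, duration = discharge_pulse(3.0, 10.0)
print(height, duration)
```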
This was investigated by Catt, who used it to design the first crosstalk-free (glitch-free) wafer-scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while complaining that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get far enough into mathematical physics to sort out the electron.
It’s really interesting to investigate why classical (not quantum) electrodynamics is false in many ways: Maxwell’s model is wrong. Some calculations of quantum gravity based on a simple, empirically based model (with no ad hoc hypotheses) yield evidence (which needs to be independently checked) that the proper size of the electron is the black hole event horizon radius.
There is also the issue of a chicken-and-egg situation in QED, where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges therefore have to arise from the electric field lines of the virtual photons being continually exchanged.
How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor is carrying equal currents in opposite directions.
Hence you end up with the conclusion that the electroweak sector of the Standard Model is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2), with three massless gauge bosons: positively and negatively charged massless bosons mediate electromagnetism, and a neutral gauge boson (a photon) mediates gravitation. See Fig. 4.
Fig. 4: The SU(2) electrogravity mechanism. Think of two flak-jacket-protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses, since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s backs, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, due both to the recoil of the bullets they fire and to the strikes each receives from the bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!
This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields, in radiation exchanges between similar charges throughout the universe (drunkard’s-walk statistics), to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight-line summation will on average encounter similar numbers of positive and negative charges, since they are randomly distributed, so a linear summation over the charges between which gauge bosons are exchanged cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation. See Fig. 5.
Fig. 5: The charged gauge bosons mechanism, and how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.
The price of the random-walk statistics needed to describe such a zigzag summation (avoiding opposite charges!) is that the net force is not approximately 10^{80} times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zigzag inefficiency of the sum, i.e., about 10^{40} times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^{40}/10^{80} = 10^{−40} times as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^{80} randomly distributed charges, electromagnetism, as multiplied up by the fact that charged massless gauge bosons are the Yang-Mills radiation being exchanged between all charges (including all charges of similar sign), is 10^{40} times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are scattered at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t; it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps.
If you average a large number of different random walks, then because they all have random net directions, the vector sum is indeed zero. But for an individual drunkard’s walk, a net displacement does in fact occur. This is the basis of diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges (Fig. 5).
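The √N statistics invoked here are easy to demonstrate. A quick simulation (a sketch of my own, not part of the original argument) showing that the root-mean-square net displacement of a unit-step random walk grows as the square root of the number of steps:

```python
import math
import random

random.seed(1)  # deterministic, for repeatability

def rms_displacement(steps, trials=2000):
    """RMS net displacement of a one-dimensional random walk of unit steps."""
    total = 0.0
    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(steps))
        total += x * x
    return math.sqrt(total / trials)

# Expect roughly sqrt(100) = 10 and sqrt(900) = 30.
r100 = rms_displacement(100)
r900 = rms_displacement(900)
print(r100, r900)
```

Averaging the signed displacements instead of their squares would give approximately zero, which is the distinction drawn in the paragraph above.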
Experimentally checkable consequences of this gravity mechanism, and consistency with known physics
1. Universal gravitational parameter, G
G = (3/4)H^{2}/(ρπe^{3}), derived in stages above, where e^{3} is the cube of the base of natural logarithms (the correction factor due to the effects of redshift and density variation in spacetime), is a quantitative prediction. In the previous post here, the best observational inputs for the Hubble parameter H and the local density of the universe ρ were identified: ‘The WMAP satellite in 2003 gave the best available determination: H = 71 ± 4 km/s/Mpc = 2.3 × 10^{−18} s^{−1}. Hence, if the present age of the universe is t = 1/H (as suggested by the 1998 data showing that the universe is expanding as R ~ t, i.e. with no gravitational retardation, instead of the Friedmann-Robertson-Walker prediction for critical density of R ~ t^{2/3}, where the 2/3 power is the effect of curvature/gravity in slowing down the expansion) then the age of the universe is 13,700 ± 800 million years. … The Hubble space telescope was used to estimate the number of galaxies in a small solid angle of the sky. Extrapolating this to the whole sky, we find that the universe contains approximately 1.3 × 10^{11} galaxies, and to get the density right for our present time after the big bang we use the average mass of a galaxy at the present time to work out the mass of the universe. Taking our Milky Way as the yardstick, it contains about 10^{11} stars, and assuming that the sun is a typical star, the mass of a star is 1.9889 × 10^{30} kg (the sun has 99.86% of the mass of the solar system). Treating the universe as a sphere of uniform density and radius R = c/H, with the above-mentioned value for H we obtain a density for the universe at the present time (~13,700 million years) of about 2.8 × 10^{−27} kg/m^{3}.’
Putting H = 2.3 × 10^{−18} s^{−1} and ρ = 2.8 × 10^{−27} kg/m^{3} into G = (3/4)H^{2}/(ρπe^{3}) gives G = 2.2 × 10^{−11} m^{3} kg^{−1} s^{−2}, which is one third of the experimentally determined value of G = 6.673 × 10^{−11} m^{3} kg^{−1} s^{−2}. This factor-of-3 discrepancy is within the error bars for the estimates of the density, because of uncertainties in estimating the average mass of a galaxy. To put the accuracy of this prediction into perspective, try reading the statement by Eddington (quoted at the top of this blog post): how many other theories, based entirely on observably verified facts like Hubble’s law and Newton’s laws, predict the strength of gravity? Alternatively, compare it to the classical (and incorrect) ‘critical density’ prediction from general relativity (which ignores the mechanism of gravitation), which rearranges to give a formula for G that is e^{3}/2 or ~10 times bigger; equivalently, the critical density is about 3.3 times bigger than the density estimated from the observational data.
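That numerical comparison can be reproduced in a couple of lines. A sketch using the quoted inputs (WMAP Hubble parameter and the rough density estimate):

```python
import math

H = 2.3e-18    # Hubble parameter, s^-1 (WMAP 2003, as quoted)
rho = 2.8e-27  # rough present density of the universe, kg/m^3 (as quoted)

# G = (3/4) H^2 / (pi rho e^3), the corrected prediction derived above.
G_predicted = 0.75 * H**2 / (math.pi * rho * math.e**3)
G_measured = 6.673e-11

print(G_predicted)               # ~2.2e-11 m^3 kg^-1 s^-2
print(G_predicted / G_measured)  # ~1/3
```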
This is actually an unfair comparison, because the rough estimate for the density is about 3 times too high. Most astronomers suggest that the observable density is 5–20% of the critical density, i.e., 10% with a factor of 2 error limit. This would put the density at ρ = 10^{−27} kg/m^{3}, and our prediction is then exact, within a factor of 2 experimental error limit. The abundance of dark matter is not experimentally measured. There is some observational evidence for dark matter, and theoretically there are some solid reasons why there should be such matter in a dark, non-luminous form (neutrinos have mass, as do black holes). The mainstream takes the critical density formula from general relativity and the measured density for luminous matter, and uses the disagreement to claim that the difference is dark matter. That argument is weak, because general relativity is in error for cosmological purposes through ignoring quantum gravity effects which become important on large scales in an expanding universe (i.e., the redshift of gravitons weakening the force of gravity over large distances, the nature of the Yang-Mills exchange radiation dynamical mechanism for gravity in which gravity is a result of radiation exchange with the other masses in the expanding universe, etc.). Another argument for a lot of dark matter is the flattening of galactic rotation curves, but if the final theory of quantum gravity is a departure from general relativity and Newtonian gravity, it could potentially resolve this problem (at large distances, because gravitons are redshifted, and there could be some significant graviton shielding effect from the immense amount of mass in a galaxy, effects which are trivial in the solar system).
Professor Sean Carroll writes a lot about cosmology, and is the author of a very useful book on general relativity. In writing about the discovery of direct evidence for dark matter on his blog post http://cosmicvariance.com/2006/08/21/darkmatterexists/ and others, he does highlight some useful arguments. He starts by stating, without evidence, that 5% of the universe is ordinary matter, 25% dark matter and 70% dark energy. He then explains that the direct evidence for dark matter proves that mainstream cosmologists are not fooling themselves. The problem is that the direct evidence for dark matter doesn’t say how much dark matter there is: it’s not quantitative. It does not allow any confirmation of the theoretical guesswork behind his statement that there is 5 times as much dark matter as visible matter. He does then go on to discuss whether some kind of ‘modified Newtonian dynamics’, rather than dark matter, could resolve the problems. He writes that he would prefer some objective resolution of that type, rather than in effect inventing ‘dark matter’ epicycles as convenient fixes which cannot be readily checked even in principle, but there is no definite proposal discussed which is really concrete and which deals with the quantum gravity facts (such as this gravity mechanism!).
2. Small size of the cosmic background radiation ripples
The prediction of gravity by this mechanism appears to be accurate to within the experimental data, which are accurate to within a factor of approximately two. The second major prediction of this mechanism is the small size of the sound-like ripples in the angular distribution of the cosmic background radiation, the earliest directly observable radiation in the universe. Its emitted power peaked at 370,000 years after the big bang, when the temperature was 3,500 Kelvin; it has since been redshifted, or ‘stretched out’, by the cosmic expansion, which reduces its temperature to 2.7 Kelvin.
Because radiation and matter were in thermal equilibrium (an ionised gas) at the time the cosmic background radiation was emitted, the radiation carries an imprint of the nature of the matter at that time. The cosmic background radiation was found to be of extremely uniform temperature, far more uniform than expected at 370,000 years after the big bang, when conventional models of galaxy formation implied that there should have been big ripples to indicate the ‘seeding’ of lumps that could become stars and galaxies.
This is called the ‘horizon problem’ or ‘isotropy problem’, because the microwave background radiation from opposite directions in the sky is similar to within 0.01%, and in the mainstream models gravity always has the same strength and would have caused bigger non-uniformities within 370,000 years of the big bang. A mainstream attempt to solve this problem is ‘inflation’, whereby the universe expanded at faster-than-light speed for a small fraction of a second after the big bang, making the density of the universe uniform all over the sky before gravity had a chance to magnify irregularities in the expansion process.
This ‘horizon problem’ is closely related to the ‘flatness problem’, which is the issue that in general relativity the universe, depending on its density, has three possible geometries: open, flat, and closed. At the critical density it will be flat, with gravitation causing its radius to increase in proportion to the two-thirds power of time after the big bang. The mainstream consensus was that the universe was probably flat, which means of critical density, five to twenty times more than the observable density. The flatness problem is that if the universe was not completely flat, but of slightly different density across the universe, then the variation in density would be greatly magnified by the expansion of the universe and would be obvious today. The absence of any such large anisotropy is widely believed, by the mainstream, to be evidence for a flat geometry.
The mechanism for gravity solves these problems. It solves the flatness problem by showing that the critical density (distinguishing the open, flat, and closed solutions to the Friedmann-Robertson-Walker metric of general relativity, which is applied to cosmology) is false because it ignores quantum gravity effects: there are no long-range gravitational influences in an expanding universe, because the graviton exchange radiation of quantum gravity becomes severely redshifted, like light, and cannot produce curvature effects such as forces over large distances. So the whole existing mainstream structure of using general relativity to work out cosmology falls apart.
The horizon problem, as to why the cosmic background is so smooth, is solved by this model in an interesting way. It is very simple. The relationship giving the gravity parameter G makes G directly proportional to the age of the universe. The older the universe gets, the stronger gravity gets. At 370,000 years after the big bang, G was roughly 40,000 times smaller than it is now (13,700 million years divided by 370,000 years gives a factor of about 37,000), and at earlier times it was smaller still. The ripples in the cosmic background radiation are extremely small because the gravitational force was so small.
As proved earlier, the Hubble acceleration is a = dv/dt = H^{2}R = H^{2}ct, where t is the time in the past at which the light was emitted, which can be set equal to the age of the universe for our purposes here. Hence the outward force, F = ma = mH^{2}ct, is proportional to the age of the universe, as is the equal inward force implied by Newton’s 3rd law of motion.
We can also see the proportionality to time in the result G = (3/4)H^{2}/(ρπe^{3}), since H^{2} = 1/t^{2} and ρ is the mass of the universe divided by its volume (which is proportional to the cube of the radius, i.e., the cube of the product ct), so this formula implies that G is proportional to (1/t^{2})/(1/t^{3}), which is of course directly proportional to time.
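A small numeric sketch of this time dependence, using the present-day inputs quoted earlier (the conversion of 370,000 years to seconds assumes ~3.156 × 10^{7} s per year, my own assumption):

```python
import math

t0 = 4.32e17    # present age of the universe, s (~13,700 million years)
rho0 = 2.8e-27  # present density, kg/m^3 (as quoted earlier)

def G_of_t(t):
    """G = (3/4)H^2/(pi rho e^3), with H = 1/t and rho scaling as 1/t^3."""
    H = 1.0 / t
    rho = rho0 * (t0 / t) ** 3
    return 0.75 * H**2 / (math.pi * rho * math.e**3)

t_cmb = 370_000 * 3.156e7  # 370,000 years, in seconds
ratio = G_of_t(t0) / G_of_t(t_cmb)
print(ratio)               # ~3.7e4: G today vs G at cosmic background emission
```

The ratio comes out as t0/t_cmb exactly, confirming the G ∝ t scaling, and is close to the rounded factor of 40,000 used for the ripple-size argument above.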
Dirac did not have a mechanism for a time-dependence of G, but he guessed that G might vary. Unfortunately, lacking this mechanism, Dirac guessed that G was falling with time when it is actually increasing, and he did not realise that it is not just the strength constant for gravity that varies: all the coupling constants vary in the same way. This disproves Edward Teller’s claim (based on just G varying) that if it were true, the sun’s radiant power would vary with time in a way incompatible with life (e.g., he calculated that the oceans would have been literally boiling during the Cambrian era if Dirac’s assumption was true).
It also disproves, in the same way, another claim that G is constant, based on nucleosynthesis in the big bang. The argument here is that nuclear fusion in stars and in the big bang depends on gravity to cause the basic compressive force, causing electrically charged positive particles to collide hard enough to break through the ‘barrier’ caused by the repulsive electric Coulomb force, so that the short-ranged strong attractive force can then fuse the particles together. The big bang nucleosynthesis model correctly predicts the observed abundances of unfused hydrogen and fusion products like helium, assuming that G is constant. Because the result is correct, it is often claimed (even by students of Professor Carroll) that G must have had a value at 1 minute after the big bang that is no more than 10% different from today’s value. The obvious fallacy here is that both electromagnetism and gravity vary in the same way. If you double both the Coulomb force and the gravity force, the fusion rate doesn’t vary, because the Coulomb force opposes fusion while gravity causes fusion, and both are inverse-square forces. The effect of G varying is not manifested in a change to the fusion rate in the big bang or in a star, because the corresponding change in the Coulomb force offsets it.
For a discussion of why the different forces unify by scaling similarly (it is due to vacuum polarization dynamics) see this earlier post: http://nige.wordpress.com/2007/03/17/thecorrectunificationscheme/
Louise Riofrio has investigated the dimensionally correct relationship GM = tc^{3}, which was discussed earlier on this blog here, here and here, where M is the mass of the universe and t is its age. This is algebraically equivalent to G = (3/4)H^{2}/(ρπ), i.e., the gravity prediction without the dimensionless redshift-density correction factor of e^{3}. It is interesting that it can be derived by energy-based methods, as first pointed out by John Hunter, who suggested setting E = mc^{2} = mMG/R, i.e., setting rest mass energy equal to gravitational potential energy.
Since the electromagnetic charge of the electron is massless bosonic energy trapped as a black hole, the gravitational potential energy would have to be equal, to keep it trapped.
This rearranges to give the equations of Riofrio and Rabinowitz, although physically it is obviously missing some dimensionless multiplication constant because the gravitational potential energy cannot be E = mMG/R, where R is the radius of the universe. It is evident that this equation describes the gravitational potential energy which would be released if the universe were (somehow) to collapse. However, the average radial distance of the mass of the universe M will be less than the radius of the universe R. This brings up the density variation problem: gravitons and light both go at velocity c so we see them coming from times in the past when the density was greater (density is proportional to the reciprocal of the cube of the age of the universe due to expansion). So you cannot assume constant density and get a simple solution. You really also need to take account of the redshift of gravitons from the greatest distances, or the density will cause you problems due to tending towards infinity at radii approaching R. Hence, this energybased approach to gravity is analogous to the physical mechanism described above. See also the derivation, by mathematician Dr Thomas R. Love of California State University, of Kepler’s law at http://nige.wordpress.com/2006/09/30/keplerslawfromkineticenergy/ which demonstrates that you can indeed treat problems generally by assuming that the rest mass energy of the spinning, otherwise static fundamental particle or the kinetic energy of the orbiting body, is being trapped by gravitation.
This leads to a concrete basis for John Hunter’s suggestions, published as a notice in the 12 July 2003 issue of New Scientist, page 17: he suggested that if E = mc^{2} = mMG/R, then the effective value of G depends on distance, since G = Rc^{2}/M, which is algebraically equivalent to the expression we obtained above for the gravity mechanism, published in the article ‘Electronic Universe, Part 2′, Electronics World, April 2003 (excluding the suggested e-cubed correction for density variation with distance and graviton redshift, which was published in a letter to Electronics World in 2004). Hunter’s July 2003 notice in New Scientist indicated that this solves the horizon problem of cosmology (thus not requiring the speculative mainstream extravagances of Alan Guth’s inflation theory). Hunter pointed out in his notice that his E = mc^{2} = mMG/R, when applied to the earth, should include another term for the influence of the nearby mass of the sun, leading to E = mc^{2} = mMG/R + mM’G/r, where m is the mass of the Earth, M is the mass of the universe, R is the radius of the universe (which is inaccurate, as pointed out, since the average distance of the mass of the surrounding universe can hardly be the radius of the universe, but must be a smaller distance, leading to the problem of the time-variation of density and thus also of the redshift of the gravitons causing gravity), M’ is the mass of the Sun, and r is the distance of the Earth from the sun. Hunter argued that since r varies, and is 3.4% bigger in July than in January (when Earth is closest to the sun), this leads to a definite experiment to test the theory: ‘Prediction: the weight of objects on the Earth will vary by 3.3 parts in 10 billion over a year, as the Earth to Sun distance changes.’ (My only problem with this prediction is simply that it is virtually impossible to test, just like the ‘not even wrong’ Planck scale unification supersymmetry ‘prediction’.
Because the Earth is constantly vibrating due to seismic effects, you can never really hope to make such accurate measurements of weight. Anyone who has tried to measure masses to more than a few significant figures for quantitative chemical analysis knows how difficult such a measurement is: making sensitive instruments is a problem, and the increased sensitivity multiplies up background vibrations, so the instrument just becomes a seismograph. However, maybe some space-based precise measurements, with clever experimentalist/observationist tricks, will one day be able to check this to some extent.)
3. Electric force constant (permittivity), Hubble parameter, etc.
The proof [above] predicts gravity accurately, with G = (3/4)H^{2}/(πρe^{3}). Electromagnetic force (discussed above and in the April 2003 Electronics World article) in quantum field theory (QFT) is due to ‘virtual photons’ which cannot be seen except via the forces produced. The mechanism is continuous radiation from spinning charges; the centripetal acceleration a = v^{2}/r causes the energy emission, which is naturally in exchange equilibrium between all similar charges, like the exchange of thermal radiation at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to their recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges, which block energy exchange and are pushed together by energy received from other directions (a shielding-type attraction). The attraction and repulsion forces are equal for similar net charges. The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse, scattering-type way, like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out along any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign, multiplied by the gravity force proved above.
Hence F(electromagnetism) = mMGN^{1/2}/r^{2} = q_{1}q_{2}/(4πεr^{2}) (Coulomb’s law), where G = ¾H^{2}/(πρe^{3}) as proved above, and N is as a first approximation the mass of the universe (4πR^{3}ρ/3 = 4π(c/H)^{3}ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (−1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the permittivity ε in Coulomb’s law of:
ε = q_{e}^{2}e^{3}[ρ/(12πm_{e}^{2}m_{proton}Hc^{3})]^{1/2} F/m, where e = 2.718… is the base of natural logarithms.
Using old data as in the letter published in Electronics World some years ago which gave the G formula (ρ = 4.7 × 10^{−28} kg/m^{3} and H = 1.62 × 10^{−18} s^{−1} for 50 km s^{−1} Mpc^{−1}), gives ε = 7.4 × 10^{−12} F/m, which is only 17% low compared to the measured value of 8.85419 × 10^{−12} F/m.
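As a sanity check on the arithmetic, the permittivity prediction can be evaluated numerically. This is only a sketch: the particle constants are standard values, while ρ and H are the old figures quoted above.

```python
import math

# Old data quoted above
rho = 4.7e-28          # density of the universe, kg/m^3
H = 1.62e-18           # Hubble parameter, s^-1 (50 km/s/Mpc)

# Standard constants
q_e = 1.602e-19        # electron charge, C
m_e = 9.109e-31        # electron mass, kg
m_p = 1.673e-27        # proton mass, kg
c = 2.998e8            # speed of light, m/s
e = math.e             # base of natural logarithms, 2.718...

# Predicted permittivity from the formula above
epsilon = q_e**2 * e**3 * math.sqrt(rho / (12 * math.pi * m_e**2 * m_p * H * c**3))
print(epsilon)  # ~7.4e-12 F/m, about 17% below the measured 8.854e-12 F/m
```

This reproduces the 7.4 × 10^{−12} F/m figure quoted in the text.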
Rearranging this formula to yield ρ, and also rearranging G = ¾H^{2}/(πρe^{3}) to yield ρ, allows us to set both results for ρ equal and thus isolate a prediction for H, which can then be substituted into G = ¾H^{2}/(πρe^{3}) to give a prediction for ρ which is independent of H:
H = 16π^{2}Gm_{e}^{2}m_{proton}c^{3}ε^{2}/(q_{e}^{4}e^{3}) = 2.3391 × 10^{−18} s^{−1} or 72.2 km s^{−1} Mpc^{−1}, so 1/H = t = 13,550 million years. This is checkable against the WMAP result that the universe is 13,700 million years old; the prediction is well within the experimental error bar.
ρ = 192π^{3}Gm_{e}^{4}m_{proton}^{2}c^{6}ε^{4}/(q_{e}^{8}e^{9}) = 9.7455 × 10^{−28} kg/m^{3}.
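Both of these predictions are easy to reproduce numerically; a sketch using standard constants and the measured permittivity:

```python
import math

# Standard constants
G = 6.674e-11          # gravitational constant, m^3/(kg s^2)
q_e = 1.602e-19        # electron charge, C
m_e = 9.109e-31        # electron mass, kg
m_p = 1.673e-27        # proton mass, kg
c = 2.998e8            # speed of light, m/s
eps = 8.854e-12        # measured permittivity of free space, F/m
e = math.e             # 2.718...

# Predicted Hubble parameter and density from the formulae above
H = 16 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps**2 / (q_e**4 * e**3)
rho = 192 * math.pi**3 * G * m_e**4 * m_p**2 * c**6 * eps**4 / (q_e**8 * e**9)

age_Myr = 1 / (H * 3.156e7 * 1e6)   # 1/H converted from seconds to millions of years

print(H)        # ~2.34e-18 s^-1, i.e. ~72 km/s/Mpc
print(age_Myr)  # ~13,500 million years
print(rho)      # ~9.7e-28 kg/m^3
```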
Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However, they clearly show the power of this mechanism-based predictive method.
Furthermore, calculations show that Hawking radiation from electron-mass black holes has the right force as the exchange radiation of electromagnetism: http://nige.wordpress.com/2007/03/08/hawkingradiationfromblackholeelectronscauseselectromagneticforcesitistheexchangeradiation/
4. Particle masses
Fig. 6: Particle mass mechanism. The ‘polarized vacuum’ shell exists between the IR and UV cutoffs. We can work out the shell’s outer radius either by using the IR cutoff energy as the collision energy to calculate the distance of closest approach in a particle scattering event (like Coulomb scattering, which predominates at low energies), or by using Schwinger’s formula for the minimum static electric field strength needed to cause fermion-antifermion pairs to pop out of the Dirac sea in the vacuum. The outer radius of the polarized vacuum around a unit charge is, by either calculation, on the order of 1 fm. This scheme doesn’t just explain and predict masses; it also replaces supersymmetry with a proper physical, checkable prediction of what happens to Standard Model forces at extremely high energy. The following text is an extract from an earlier blog post here:
‘The pairs produced by an electric field above the IR cutoff, corresponding to 10^{18} V/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge, published in PRL in 1997. Koltick et al. found that the electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (observable electric charge is independent of distance only beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because there is less polarized dielectric between you and the electron core as you get closer, so less of the electron’s core field gets cancelled by the intervening dielectric).
‘There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons). How can gravitational charge be renormalized? There is no mechanism for pair production whereby the pairs will become polarized in a gravitational field. For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized. If they are both displaced in the same direction by the field, they aren’t polarized. So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells’s Cavorite.
‘There is no evidence for this. Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, with only the renormalization of electric charge being explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value. However, this is not a problem. The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude). Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.
‘Finally, supersymmetry is totally flawed. What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification. You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).
‘The energy sapped from the gauge boson mediated field of electromagnetism is being used. It’s being used to create pairs of charges, which get polarized and shield the field. This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory. Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.
‘Now take the situation where you put N electrons close together, so that their cores are very nearby. What will happen is that the surrounding vacuum polarization shells of the electrons will overlap. The electric field is N times stronger (two or three times, for two or three electrons), so pair production and vacuum polarization are N times stronger. So the shielding of the polarized vacuum is N times stronger! This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron. Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!
‘This has three remarkable consequences. First, an observer at a long distance (>1 fm), who knows from high energy scattering that there are N charges present in the core, will see only a single charge at low energy. Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.
‘Second, the Pauli exclusion principle prevents two fermions from sharing the same quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (most usually at low pressure it is the quantum number for spin which changes so adjacent electrons in an atom have opposite spins relative to one another; Dirac’s theory implies a strong association of intrinsic spin and magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials). If you could extend the Pauli exclusion principle, you could allow particles to acquire shortrange nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field which produces a lot of pair production allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.
‘Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do. Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.
‘The right-handed electron has a weak hypercharge of −2. The left-handed electron has a weak hypercharge of −1. The left-handed down quark (with observable low energy electric charge of −1/3) has a weak hypercharge of +1/3, while the right-handed down quark has a weak hypercharge of −2/3.
‘It’s totally obvious what’s happening here. What you need to focus on is the hadron (meson or baryon), not the individual quarks. The quarks are real, but their electric charges as implied from low energy physics considerations are totally fictitious for trying to understand an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks). The shielded electromagnetic charge energy is used in weak and strong nuclear fields, and is being shared between them. It all comes from the electromagnetic field. Supersymmetry is false because at high energy, where you see through the vacuum, you are going to arrive at the unshielded electric charge of the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces. Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric). This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.
‘It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength. This is done when electric field energy is stored in a capacitor. In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose. See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy. (The collision energy is easily translated into distance using the Coulomb scattering law for the closest approach of two electrons in a head-on collision, although at higher collision energies things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge. The assumption of perfectly elastic Coulomb scattering will also need modification, leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.) The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces. This allows predictions and more checks. It’s totally tied down to hard facts, anyway. If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields and the vacuum pair production and polarization phenomena, so something will be learned either way.
‘To give an example from http://nige.wordpress.com/2006/10/20/loopquantumgravityrepresentationtheoryandparticlephysics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron. Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm of an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances. This figure is the reason why the short-ranged strong nuclear force is so much stronger than electromagnetism.’
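The energy-density remark in the quoted extract refers to the standard capacitor result u = ½εE², with ε the permittivity of free space. As an illustrative sketch (my numbers, not from the original post), evaluated at the ~1.3 × 10^{18} V/m pair-production threshold field mentioned above:

```python
eps0 = 8.854e-12        # permittivity of free space, F/m
E_field = 1.3e18        # pair-production threshold field strength, V/m

# Standard energy density of an electric field, u = (1/2) * eps0 * E^2
u = 0.5 * eps0 * E_field**2   # J/m^3
print(u)  # ~7.5e24 J/m^3
```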
5. Quantum gravity renormalization problem is not real
The following text is an extract from an earlier blog post here:
‘Quantum gravity is supposed, by the mainstream, to only affect general relativity on extremely small distance scales, i.e., in extremely strong gravitational fields.
‘According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, the energy is related to the duration of existence according to: (energy)*(time) ~ hbar.
‘Since time = distance/c,
‘(energy)*(distance) ~ c*hbar.
‘Hence,
‘(distance) ~ c*hbar/(energy)
‘Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and would be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong: in solving a ‘problem’ which might not even be real, by coming up with a renormalizable quantum gravity based on gravitons, which they then hype as being the ‘prediction of gravity’.
‘The correct thing to do is to first ask how renormalization works in gravity. In the Standard Model, renormalization works because there are different charges for each force, so that virtual charges become polarized in a field around a real charge, affecting the latter and thus causing renormalization, i.e., the modification of the observable charge as seen from great distances (low energy interactions) relative to that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).
‘The problem is that gravity has only one type of ‘charge’, mass. There’s no antimass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.
‘This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.
‘However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.
‘This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated with the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances also reduces the observable mass by the same factor. In other words, if there were no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.’
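The quoted relation (distance) ~ cħ/(energy) is easy to put into numbers. A sketch, using the standard constants (ħc ≈ 197 MeV·fm is the usual conversion factor):

```python
hbar = 1.0546e-34       # reduced Planck constant, J s
c = 2.998e8             # speed of light, m/s
J_PER_GEV = 1.602e-10   # joules per GeV

def distance_for_energy(energy_gev):
    """Distance scale probed by a given collision energy, via d ~ hbar*c/E."""
    return hbar * c / (energy_gev * J_PER_GEV)

print(distance_for_energy(1.0))     # ~2.0e-16 m, i.e. ~0.2 fm at 1 GeV
print(distance_for_energy(1000.0))  # a thousand times smaller at 1 TeV
```

So, as the text says, very small distances correspond to very big energies.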
Experimental confirmation of the redshift of gauge boson radiation
All the quantum field theories of fundamental forces (the Standard Model) are Yang-Mills theories, in which forces are produced by exchange radiation.
The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation reduces its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy; the additional decrease, beyond the geometric divergence of field lines (or exchange radiation divergence), comes from the redshift of the exchange radiation, with energy proportional to frequency after redshift, E = hf. This is because the momentum carried by radiation is p = E/c = hf/c. Any reduction in frequency f therefore reduces the momentum imparted by a gauge boson, and this reduces the force produced by a stream of gauge bosons.
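A minimal sketch of this redshift argument (my illustration of the claim in this post, not a standard textbook result): the received frequency falls by the factor 1/(1 + z), so the momentum p = hf/c, and hence the force delivered by a stream of gauge bosons, falls by the same factor.

```python
H_PLANCK = 6.626e-34   # Planck constant, J s
C = 2.998e8            # speed of light, m/s

def gauge_boson_momentum(f_emitted, z):
    """Momentum p = h*f/c of radiation emitted at frequency f_emitted,
    received after being redshifted by z."""
    f_received = f_emitted / (1.0 + z)
    return H_PLANCK * f_received / C

p0 = gauge_boson_momentum(1e25, 0.0)   # no redshift
p1 = gauge_boson_momentum(1e25, 1.0)   # z = 1 halves the received frequency
print(p1 / p0)  # 0.5: the imparted momentum (and hence the force) is halved
```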
Therefore, in the universe, all forces between receding masses should, according to Yang-Mills quantum field theory (where forces are due to the exchange of gauge boson radiation between charges), suffer a bigger fall than the inverse square law. So, where the redshift of visible light is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.
When you check the facts, you see that the role of ‘cosmic acceleration’ as produced by dark energy (the cosmological constant in general relativity) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long-range gravity that slows expansion down at high redshifts.
In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift and accompanying energy loss E = hf and momentum loss p = E/c of the exchange radiation causing gravity. It’s simply a quantum gravity effect, due to redshifted exchange radiation weakening the gravity coupling constant G over large distances in an expanding universe.
The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.
Nobel Laureate Phil Anderson points out:
‘… the flat universe is just not decelerating, it isn’t really accelerating …’ 
http://cosmicvariance.com/2006/01/03/dangerphilanderson/#comment10901
Supporting this and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R
Like my paper, Lunsford’s paper was censored off arxiv without explanation.
Lunsford had already had it published in a peer-reviewed journal prior to submitting it to arXiv: International Journal of Theoretical Physics, vol. 43 (2004), no. 1, pp. 161-177. This shows that unification implies that the cosmological constant is exactly zero: no dark energy, etc.
The way the mainstream censors out the facts is to first delete them from arXiv and then claim ‘look at arxiv, there are no valid alternatives’. It’s a story of dictatorship:
‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, Nineteen Eighty-Four, Chancellor Press, London, 1984, p. 225.
The approach above focuses on gauge boson radiation shielding. We now consider the interaction. In the intense fields near charges, pair production occurs, in which the energy of gauge boson radiation is randomly and spontaneously transformed into ‘loops’ of matter and antimatter, i.e., virtual fermions which exist for a brief period (as determined by the uncertainty principle) before colliding and annihilating back into radiation (hence the spacetime ‘loop’, where the pair production and annihilation form an endless cycle).
In this framework, we have physical material pressure from the Dirac sea of virtual fermions, not just gauge boson radiation pressure. To be precise, as stated before on this blog, the Dirac sea of virtual fermions only occurs out to a radius of about 1 fm from an electron; beyond that radius there are no virtual fermions in the vacuum, because the electric field strength is below 10^{18} volts/metre, the Schwinger threshold for pair production. So at all distances beyond about 10^{−15} metre from a fundamental particle, the vacuum only contains gauge boson radiation, and contains no pairs of virtual fermions, no chaotic Dirac sea. This cutoff of pair production is the reason why renormalization of charge is necessary with an ‘IR (infrared) cutoff’: the vacuum can only polarize (and thus shield electric charge) out to the range at which the electric field is strong enough to cause pair production in the first place. If it could polarize without such a cutoff, it would be able to completely cancel out all real electric charges, instead of only partly cancelling them. Since this doesn’t happen, we know there is a limit on the range of the Dirac sea of virtual fermions. (For the formula giving the minimum electric field strength required for pairs of virtual charges to appear in the vacuum, see equation 359 of Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of Luis Alvarez-Gaume and Miguel Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040.)
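For reference, the ~10^{18} V/m threshold quoted above is Schwinger’s critical field, E_c = m_e²c³/(q_eħ) (the standard expression, as given in the lectures cited); a quick numerical check:

```python
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
q_e = 1.602e-19      # electron charge, C
hbar = 1.0546e-34    # reduced Planck constant, J s

# Schwinger critical field for electron-positron pair production
E_c = m_e**2 * c**3 / (q_e * hbar)
print(E_c)  # ~1.3e18 V/m
```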
So what happens is that gauge boson exchange radiation powers the production of short-ranged, massive spacetime loops of virtual fermions, which are created and annihilated (and polarized in the electric field between creation and annihilation).
Now let’s consider general relativity, which is the mathematics of gravity. Contrary to some misunderstandings, Newton never wrote down F = mMG/r^{2}, which is due to Laplace. Newton was proud of his claim ‘hypotheses non fingo’ (I feign no hypotheses), i.e., he worked to prove and predict things without making ad hoc assumptions or speculative guesses. He wasn’t a string theorist, basing his guesses on unobserved gravitons (which don’t exist), or extra dimensions, or unobservable Planck-scale unification assumptions. The effort above in this blog post (which is being written totally afresh to replace obsolete scribbles at the current version of the page http://quantumfieldtheory.org/Proof.htm) similarly doesn’t frame any hypotheses.
It’s actually well-proved geometry, the well-proved first and second laws of Newton, well-proved redshift (which can’t be explained by ‘tired light’ speculation, but is a known and provable effect of recession, since the Doppler effect, unlike ‘tired light’, is experimentally confirmed to occur), and similar hard, factual evidence. As explained in the previous post, the U(1) symmetry in the Standard Model is wrong, but apart from that misinterpretation and the associated issues with the Higgs mechanism of electroweak symmetry breaking, the Standard Model of particle physics is the best checked physical theory ever: forces are the result of gauge boson radiation being exchanged between charges.
*****
I’ve just received an email from CERN’s document server:
From: “CDS Support Team” <cds.alert@cdsweb.cern.ch>
To: <undisclosedrecipients:>
Sent: Friday, May 25, 2007 4:30 PM
Subject: High Energy Physics Information Systems Survey
Dear registered CDS user,
The CERN Scientific Information Service, the CDS Team and the
SPIRES Collaboration are running a survey about the present and the future
of HEP Scientific Information Services.
The poll will close on May 30th. If you have not already
answered it, this is the last reminder to invite you to fill an anonymous
questionnaire at
<http://library.cern.ch/poll.html>
it takes about 15 minutes to be completed and *YOUR* comments and
opinions are most valuable for us.
If you have already answered to the questionnaire, we wish to
thank you once again!
With best regards,
The CERN Scientific Information Service, the CDS Team, the
SPIRES Collaboration
*****
This email relates to my authorship of one paper on CERN’s document server, http://cdsweb.cern.ch/record/706468, and it’s really annoying that I can’t update, expand and correct that paper, because CERN closed that archive and now only accepts updates to papers that are on the American archive, arXiv (American spelling). I pay my taxes in Europe, where they help fund CERN. I can’t complain if arXiv doesn’t want to publish physics, or wants to eradicate physics and replace it with extra-dimensional ‘not even wrong’ spin-2 gravitons. But it is disappointing that there is no competitor to arXiv run by CERN anymore. By closing down external submissions and updates to papers hosted exclusively by CERN’s document server, they have handed total control of world physics to a bunch of yanks obsessed by the string religion, trying to dictate it to everyone and to stop the freedom of physicists to do checkable, empirically defensible research on fundamental problems. Well done, CERN.
(CERN, by the way, is a French abbreviation, and in World War II the government of France surrendered officially to another dictatorial bunch of mindless idealists, although fortunately there was an underground resistance movement. Although CERN is located on the border of France and Switzerland, France dominates Europe and seems to control the balance of power. I wouldn’t be surprised if their defeatist, collaborative attitude towards arXiv was responsible for this travesty of freedom. However, I’m grateful to have anything on such a server at all. If I was in America, my situation would be far worse. Some arXiv people in America appear to actually try to stop physicists giving lectures in London; it demonstrates what bitter scum some of the arXiv people are. See also the comments here. However, some respectable people have papers on arXiv, so I’m not claiming that 100% of it is rubbish, although the string theory stuff is.)
Factual heresy
Below is a little compilation of factual heresy from other people, just to well and truly finish off this post. The Michelson-Morley experiment preserves the gravitational field (‘aether’, to use an ambiguous and unhelpful term), simply because the contraction in the direction of motion (due to the behaviour of the gravitational field, causing the inertial force which resists acceleration, according to Einstein’s equivalence principle whereby inertial mass = gravitational mass) means light has a shorter distance to go in the direction of motion!
The instrument is physically contracted. The fact that the photons which are slowed down by the Earth’s motion only have to travel a shorter distance than those travelling transversely (which aren’t slowed down) means that the instrument shows no interference fringes: the effect of the Earth’s motion in slowing down one beam is cancelled out by the contraction of the instrument, which means that beam has less far to travel. It’s like a race where the slower the runner, the shorter the distance their lane extends before they arrive at the finish post: all runners arrive at the same time, having gone unequal distances at unequal speeds:
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space, Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
One funny or stupid denial of this was in a book called Einstein’s Mirror by a couple of physics lecturers, Tony Hey and Patrick Walters. They seemed to vaguely claim, in effect, that in the Michelson-Morley experiment the arms of the instrument are of precisely the same length and measure light speed absolutely; then they claimed that if anyone built a Michelson-Morley instrument with arms of unequal length, the contraction wouldn’t work. In fact, the arms were never of equal length to within a wavelength of light to begin with, and the experiment only detected the relative difference in apparent light speed between two perpendicular directions by utilising interference fringes, which is a way to measure speed in one direction relative to another, not absolute speed in any direction. You can’t measure the speed of light with the Michelson-Morley instrument; it only shows a difference between two perpendicular directions if you implicitly assume there is no length contraction!
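The timing cancellation described above can be checked with a short numerical sketch (my illustration; units where c = 1, with v the assumed speed through the medium):

```python
import math

c = 1.0    # work in units where the speed of light is 1
L = 1.0    # rest length of each interferometer arm
v = 0.3    # assumed speed of the apparatus through the medium

gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

# Round-trip time along the direction of motion, WITHOUT contraction:
t_parallel_rigid = L / (c - v) + L / (c + v)

# Round-trip time transverse to the motion:
t_transverse = 2.0 * L / math.sqrt(c**2 - v**2)

# With the parallel arm contracted to L/gamma, the two round trips take equal time:
t_parallel_contracted = (L / gamma) / (c - v) + (L / gamma) / (c + v)

print(t_parallel_rigid)                      # longer: would give a fringe shift
print(t_transverse, t_parallel_contracted)   # equal: no fringe shift observed
```

The uncontracted parallel arm would give a longer round-trip time (and hence a fringe shift); contracting it by the factor 1/γ makes the two times identical, which is the null result.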
It’s really funny that Eddington made Einstein’s special relativity (anti-aether) famous in 1919 by confirming aetherial general relativity. The media couldn’t be bothered to explain aetherial general relativity, so they explained Einstein’s earlier, false special relativity instead!
‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, New Pathways in Science, v. 2, p. 39, 1935.
‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.
‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5; written quickly to get Jewish Infeld out of Nazi Germany and accepted as a worthy refugee in America.
‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities… According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)
‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’ – Einstein’s Legacy – Where are the “Einsteinians?”, by Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm
‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p. 111.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all anti-slipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90. (However, this is a massive source of controversy in GR because it’s a continuous approximation to discrete lumps of matter as a source of gravity which gives rise to a falsely smooth Riemann curvature metric; really continuous differential equations in GR must be replaced by a summing over discrete – quantized – gravitational interaction Feynman graphs.)
‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?’, Nature, v168, 1951, p. 906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v. A209, 1951, p. 291.
‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2^{nd} ed., v1, p. v, 1951.
‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.
‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.
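As a quick sanity check on the figures in Muller’s quotation, the dipole relation behind such ‘aether drift’ measurements is ΔT/T ≈ v/c. The sketch below assumes a mean CMB temperature of 2.725 K and an illustrative dipole amplitude of 3.4 mK (the ‘few millidegrees’ of the quote); these numbers are assumptions for illustration, not figures from the quoted paper:

```python
# Sanity check of the CMB dipole relation dT/T ~ v/c used in the
# 'new aether drift' measurement. The 3.4 mK dipole amplitude is an
# assumed illustrative value, not a figure from the quoted paper.
c = 2.998e8    # speed of light, m/s
T = 2.725      # mean CMB temperature, K
dT = 3.4e-3    # dipole amplitude, K (assumed)

v = c * dT / T  # inferred speed through the radiation frame
print(f"inferred speed: {v / 1e3:.0f} km/s")
```

This gives roughly 370 km/s, the motion of the solar system relative to the radiation; the ~600 km/s figure quoted for the entire Milky Way involves further correcting for the sun’s orbital motion within the galaxy.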
‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arXiv:hep-th/0510040, p. 71.
‘… the Heisenberg formulae [virtual particle interactions cause random pair-production in the vacuum, introducing indeterminacy] can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’ – G. Builder, ‘Ether and Relativity’, Australian Journal of Physics, v11 (1958), p. 279.
(This paper of Builder on absolute velocity in ‘relativity’ is the analysis used and cited by the famous paper on the atomic clocks being flown around the world to validate ‘relativity’, namely J.C. Hafele in Science, vol. 177 (1972), pp. 166-8. So it was experimentally proving absolute motion, not ‘relativity’ as widely hyped. Absolute velocities are required in general relativity because when you take synchronised atomic clocks on journeys within the same gravitational isofield contour and then return them to the same place, they read different times due to having had different absolute motions. This experimentally debunks special relativity. Einstein was wrong when he wrote in Ann. d. Phys., vol. 17 (1905), p. 891: ‘we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ See, for example, page 12 of the September 2005 issue of ‘Physics Today’, available at: http://www.physicstoday.org/vol58/iss9/pdf/vol58no9p12_13.pdf.)
So we see from this solid experimental evidence that the usual statement that there is no ‘preferred’ frame of reference, i.e., a single absolute reference frame, is false. Experimentally, a swinging pendulum or spinning gyroscope is observed to stay true to the stars (which are not moving at sufficient angular velocities, as seen from our observation point, to cause any significant problem with using them as an absolute reference frame for most purposes).
If you need a more accurate standard, then use the cosmic background radiation, which is the truest blackbody radiation spectrum ever measured in history.
These different methods of obtaining measurements of absolute motion are not really examining ‘different’ or ‘preferred’ frames, or pet frames. They are all approximations to the same thing, the absolute reference frame. All the Copernican propaganda since the time of Einstein, claiming that ‘Copernicus didn’t discover the earth orbits the sun, but instead denied that anything really orbited anything, because he thought there is no absolute motion, only relativism’, is a gross lie. That claim is just the sort of brainwashing doublethink propaganda which Orwell attributed to the dictatorships in his book ‘1984’. You won’t get any glory following the lemmings over the cliff. Copernicus didn’t travel throughout the entire universe to confirm that the earth is ‘in no special place’. Even if he had made that claim, it would not have been founded upon any evidence. Science is (or rather, should be) concerned with being unprejudiced in areas where there is a lack of evidence.
IMPORTANT:
The article above is extracted from the blog post here, and readers should be aware that there are vital comments with amplifications and explanations in them which are not included in the extract above. There are also further vital developments in other blog posts here, here, here and here.
Links
 Mahndisa’s Thoughts about Harvard Professors Lubos Motl et al.
 Tony Smith’s suppressed string theory work
 Christine Dantas LQG blogspot
 Not Even Wrong
 Louise Riofrio’s adventures in spacetime
 John Horgan’s http://discovermagazine.typepad.com/horganism/
 Stefan’s and Bee’s Backreaction Blog
 Davide Castelvecchi’s blog
 Cosmic Variance
 Professor Jacques Distler’s Musings
 The nCategory Café
 Life on the Lattice
 Electrogravity blogspot
 Professor Clifford V. Johnson’s Asymptotia
 Marni Dee Sheppeard’s Arcadian Functor
 Arun’s Musings
 Quantum Nonsense
 Carl Brannen’s Works site
 Galactic Interactions
 The Island of Doubt
 Cocktail Party Physics
 One of Ivor Catt’s few physically useful (not just electronics waffle) pages
 Another useful (physically semicorrect) Ivor Catt page
 Ivor Catt’s halfcorrect and vitally important article from Wireless World 1978
 Catt, Davidson, Walton book (physically semicorrect): Digital Hardware Design
 Catt’s entirely false attack on “Maxwell’s equations”
 Correction of Catt’s errors
 Quantum Field Theory domain
 Errors in Tired Light Cosmology: Tired light models invoke a gradual energy loss by photons as they travel through the cosmos to produce the redshiftdistance law. This has three main problems…
**************************************
Fig. 1 – Newton’s geometric proof that an impulsive pushing graviton mechanism is consistent with Kepler’s second law of planetary motion, because equal areas will be swept out in equal times (the three triangles of equal area, SAB, SBC and SBD, all have an equal base of length SB, and they all have altitudes of equal length), together with a diagram we will use for a more modern analysis. Newton’s geometric proof of centripetal acceleration, from his book Principia, applies to any elliptical orbit, not just circular orbits as Hooke’s easier inverse-square law derivation did. (Newton didn’t include the graviton arrow, of course.) By Pythagoras’ theorem x^{2} = r^{2} + v^{2}t^{2}, hence x = (r^{2} + v^{2}t^{2})^{1/2}. Inward motion, y = x – r = (r^{2} + v^{2}t^{2})^{1/2} – r = r[(1 + v^{2}t^{2}/r^{2})^{1/2} – 1], which upon expanding with the binomial theorem to the first two terms, yields: y ~ r[(1 + (1/2)v^{2}t^{2}/r^{2}) – 1] = (1/2)v^{2}t^{2}/r. Since this result is accurate for infinitesimally small steps (the first two terms of the binomial become increasingly accurate as the steps get smaller, as does the approximation of treating the triangles as right-angled triangles so that Pythagoras’ theorem can be used), we can accurately differentiate this result for y with respect to t to give the inward velocity, u = v^{2}t/r. Inward acceleration is the derivative of u with respect to t, giving a = v^{2}/r. This is the centripetal acceleration formula which is required to obtain the inverse square law of gravity from Kepler’s third law: Hooke could only derive it for circular orbits, but Newton’s geometric derivation (above, using modern notation and algebra) applies to elliptical orbits as well. This was the major selling point for the inverse square law of gravity in Newton’s Principia over Hooke’s argument.
See Newton’s Principia, Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1:
‘The areas which revolving bodies describe by radii drawn to an immovable centre of force … are proportional to the times on which they are described. For suppose the time to be divided into equal parts … suppose that a centripetal [inward directed] force acts at once with a great impulse [like a graviton], and, turning aside the body from the right line … in equal times, equal areas are described … Now let the number of those triangles be augmented, and their breadth diminished in infinitum … QED.’
This result, in combination with Kepler’s third law, gives the inverse-square law of gravity, although Newton’s argument uses geometry plus hand-waving, so it is actually far less rigorous than the algebraic version above. Newton did not employ calculus and the binomial theorem to make his proof more rigorous, even though he was the inventor of both, because most readers wouldn’t have been familiar with those methods. (It doesn’t do to be so inventive as to both invent a new proof and also invent a new mathematics to use in making that proof, because readers will be completely unable to understand it without a large investment of time and effort; so Newton found that it paid to keep things simple and to use old-fashioned mathematical tools which were widely understood.)
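The small-step limit taken in the Fig. 1 derivation is easy to check numerically: the inward displacement y = (r^{2} + v^{2}t^{2})^{1/2} – r approaches (1/2)v^{2}t^{2}/r as the time step shrinks, giving the acceleration v^{2}/r. A minimal sketch, using illustrative orbital values of about 1 AU and 30 km/s (assumed for the example, not taken from the text):

```python
# Numerical check of the small-step limit in the Fig. 1 derivation:
# y = sqrt(r^2 + v^2 t^2) - r tends to (1/2) v^2 t^2 / r, so the
# centripetal acceleration tends to v^2 / r. Values are illustrative.
import math

r, v = 1.5e11, 3.0e4               # ~1 AU orbit radius (m), ~30 km/s speed

for t in (1e3, 1e2, 1e1):          # ever smaller time steps, s
    y_exact = math.sqrt(r**2 + v**2 * t**2) - r
    y_approx = 0.5 * v**2 * t**2 / r
    print(t, y_exact / y_approx)   # ratio approaches 1 as t shrinks

a = v**2 / r                       # centripetal acceleration from Fig. 1
print(f"a = {a:.2e} m/s^2")        # prints a = 6.00e-03 m/s^2
```

The result is close to the sun’s actual gravitational acceleration at 1 AU (about 5.9 × 10^{-3} m/s^{2}), as it should be for a roughly circular orbit at that radius and speed.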
Newton in addition worked out an ingeniously simple proof, again geometrically, to demonstrate that a solid sphere of uniform density (or radially symmetric density) has the same net gravity on the surface and at any distance, for all of its atoms in their three dimensional distribution, as would be the case if all the mass was concentrated in a point in the middle of the Earth. The proof for that is very simple: consider the sphere to be made up of a lot of concentric shells, each of small thickness. For any given shell, the geometry is such as that a person on the surface experiences small gravity effects from small quantities of mass nearby on the shell, while most of the mass of the shell is located at large distances. The inverse square effect, which means that for equal quantities of mass, the most nearby mass creates the strongest gravitational field, is thereby offset by the actual locations of the masses: only small amounts are nearby, and most of the mass of the shell is at a great distance. The overall effect is that the effective location for the entire mass of the shell is in the middle of the shell, which implies that the effective location of the mass of a solid sphere seen from a distance is in the middle of the sphere (if the density of each of the little shells, considered to be parts of the sphere, is uniform).
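Newton’s shell result can also be verified by direct numerical integration. In the sketch below, shell_pull is a helper written for this check (an assumption, not anything from the text), with G and both masses set to 1 for simplicity: it slices a thin uniform shell into rings and sums their axial inverse-square pulls on an outside point, and the total matches the pull of the same mass concentrated at the shell’s centre:

```python
# Numerical check of Newton's shell theorem: the pull of a thin uniform
# shell on an outside point equals that of a point mass at its centre.
# Units are simplified: G = 1, shell mass = 1, test mass = 1.
import math

def shell_pull(R, d, n=20000):
    """Force toward the centre on a test mass at distance d > R from
    the centre of a thin shell of radius R (midpoint-rule ring sum)."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n             # ring's polar angle
        dm = 0.5 * math.sin(theta) * (math.pi / n)  # ring's share of mass
        z = d - R * math.cos(theta)                 # axial offset to ring
        rho = R * math.sin(theta)                   # ring radius
        s2 = z * z + rho * rho                      # squared distance to ring
        total += dm * z / s2**1.5                   # axial inverse-square pull
    return total

R, d = 1.0, 2.5
print(shell_pull(R, d), 1.0 / d**2)  # the two values agree closely
```

Summing shells of different radii then extends the result to a solid sphere of radially symmetric density, exactly as argued above.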
Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on google video here (55 minutes). In his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on google video (55 minutes), Feynman stated:
‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun’s direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as the square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced; the fundamental operation is very much simpler: this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. … If the earth is moving it is running into the particles … so there is a sideways force which would slow the earth up in the orbit, and it would not have lasted for the four billions of years it has been going around the sun. So that’s the end of that theory. …
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
The error Feynman makes here is that quantum field theory tells us that there are particles of exchange radiation mediating forces normally, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a once-off resistance due to the compressive length change, not a continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved: uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges), this effect is experienced by us all the time as inertia and momentum: the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation: contraction of the matter in the length parallel to the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum and thus inertial mass of the matter in motion. It is this contraction of the earth in the direction of its motion which Feynman misses entirely. The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post.
So it is two for the price of one: the mechanism predicts gravity but also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity, as well. Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed by experiments and observations as more experiments and observations are done.
As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time, there are 6 effective dimensions: three expanding timelike dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally, for example the Earth’s radius is contracted by the amount 1.5 mm as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect due to gravity (gravitational mass), but it is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the LorentzFitzGerald contraction).
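The 1.5 mm figure quoted from Feynman’s Lectures corresponds to the radial ‘excess’ GM/(3c^{2}). A quick check with standard values for the Earth (the constants below are standard assumed values, not figures from the text):

```python
# Check of the 1.5 mm Earth-radius figure: Feynman's Lectures give the
# radial excess as G*M/(3*c^2). Standard physical constants assumed.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

contraction = G * M_earth / (3 * c**2)
print(f"{contraction * 1e3:.2f} mm")  # prints 1.48 mm
```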
This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction of subtracting half the product of the metric and the trace of the Ricci tensor from the Ricci tensor of curvature. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors and the full Einstein-Hilbert field equation; as explained in an earlier post, Newton’s law of gravitation, when merely expressed in terms of 4-dimensional spacetime curvature, gives the wrong deflection of starlight and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, compared to the amount of curvature that a low-velocity particle experiences. The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996. The first major confirmation came in 1998 and this was the lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model.
But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe was not isotropic) then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those timelike dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6. The three extra timelike dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.
Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered: the Pioneer Anomaly, an otherwise unaccounted-for tiny acceleration in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data) of (8.74 ± 1.33) × 10^{−10} m/s^{2} for the long-range space probes Pioneer-10 and Pioneer-11. However these accelerations are very small, and to a very good approximation the three timelike dimensions (corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions) are very similar.
Therefore, the full 6-dimensional theory (3 spatial and 3 time dimensions) gives the unification of fundamental forces; Riemann’s suggestion of summing dimensions using the Pythagorean sum ds^{2} = Σ (dx^{2}) could obviously include time (if we live in a single velocity universe) because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)^{2} can be included with the other dimensions dx^{2}, dy^{2}, and dz^{2}. There is then the question as to whether the term d(ct)^{2} will be added or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx^{2} + dy^{2} + dz^{2} = d(ct)^{2}, which implies that d(ct)^{2} changes sign when passed across the equality sign to the other dimensions: ds^{2} = Σ (dx^{2}) = dx^{2} + dy^{2} + dz^{2} – d(ct)^{2} = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds^{2} = Σ (dx^{2}) = dx^{2} + dy^{2} + dz^{2} – d(ct)^{2}, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854.
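The sign convention can be illustrated numerically: for a light ray the interval ds^{2} = dx^{2} + dy^{2} + dz^{2} – d(ct)^{2} vanishes. A minimal sketch, with purely illustrative numbers:

```python
# Illustration of ds^2 = dx^2 + dy^2 + dz^2 - d(ct)^2 vanishing for
# light: a pulse travelling at c, split equally along x, y and z.
c = 3.0e8                       # speed of light, m/s
dt = 2.0                        # elapsed time, s

dx = dy = dz = c * dt / 3**0.5  # equal components of a path of length c*dt
ds2 = dx**2 + dy**2 + dz**2 - (c * dt)**2
print(ds2)                      # zero (up to floating-point rounding)
```

Any slower-than-light path makes the spatial terms smaller than d(ct)^{2}, so ds^{2} goes negative, which is why the time term must carry the opposite sign to the three space terms.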
Professor Georg Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x_{1}, x_{2}, x_{3}, and so on to x_{n}, then … ds = √[Σ (dx)^{2}] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’
[The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric: ds^{2} = (1 – 2GM/[rc^{2}])^{–1}(dx^{2} + dy^{2} + dz^{2}) – (1 – 2GM/[rc^{2}]) d(ct)^{2}. This only reduces to the special relativity metric for the impossible, unphysical, imaginary, and therefore totally bogus case of M = 0, i.e., the absence of gravitation. However this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
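To see how small the departure from the flat metric is in practice, the weak-field Schwarzschild factor 1 – 2GM/(rc^{2}) can be evaluated at the sun’s surface; the solar values below are standard assumed constants, and setting M = 0 returns the factor to exactly 1, the gravity-free case discussed above:

```python
# Weak-field Schwarzschild factor 1 - 2GM/(r c^2) at the sun's surface.
# Standard solar values assumed; M = 0 would give exactly 1 (flat metric).
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg
r = 6.957e8       # solar radius, m

factor = 1 - 2 * G * M_sun / (r * c**2)
print(factor)     # ~0.9999958: barely below the flat-space value of 1
```

Even at the surface of the sun the coefficient differs from 1 by only a few parts per million, which is why Newtonian gravity works so well at low speeds while light, moving at c, picks up the doubled deflection discussed earlier.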
WARNING: I’ve made a change to the usual tensor notation below and, apart from the conventional notation in the Christoffel symbol and Riemann tensor, I am indicating covariant tensors by positive subscript and contravariant by negative subscript instead of using indices (superscript) notation for contravariant tensors. The reasons for doing this will be explained and are to make this post easier to read for those unfamiliar with tensors but familiar with ordinary indices (it doesn’t matter to those who are familiar with tensors, since they will know about covariant and contravariant tensors already).
Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of coordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute coordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν}. The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). … We call four quantities A_{ν} the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B^{ν}, the sum over ν, Σ A_{ν} B^{ν} = invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
The rank is denoted simply by the number of letters of subscript notation, so that X_{a} is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over applicable dimensions), and X_{ab} is a ‘rank 2’ tensor (for second order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 x 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, X_{a}) and a contravariant tensor of the same variable (say, X^{a}) are distinguished by the way they transform when converting from one system of coordinates to another, a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contravariant tensor by superscript (for example x^{n}). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different: the contravariant x^{n} denotes the set of components x^{1}, x^{2}, x^{3}, … x^{n}, whereas as a power x^{n} means x multiplied by itself n times. [Another step towards ‘beautiful’ gibberish then occurs whenever a contravariant tensor is raised to a power, resulting in, say, (x^{2})^{2}, which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x^{4}, causing confusion.] We avoid the ‘beautiful’ notation by using a negative subscript to represent contravariant notation; thus x_{–n} is here the contravariant version of the covariant tensor x_{n}.
Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’
This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for μ and ν able to take values representing the four components of spacetime (1, 2, 3 and 4 representing the ct, x, y, and z dimensions) we get the awfully long summation of the 16 terms added up like a 4-by-4 matrix (notice that according to Einstein’s summation convention, tensors with indices which appear twice are to be summed over):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν} = Σ (g_{μν} dx_{μ} dx_{ν}) = (g_{11} dx_{1} dx_{1} + g_{21} dx_{2} dx_{1} + g_{31} dx_{3} dx_{1} + g_{41} dx_{4} dx_{1}) + (g_{12} dx_{1} dx_{2} + g_{22} dx_{2} dx_{2} + g_{32} dx_{3} dx_{2} + g_{42} dx_{4} dx_{2}) + (g_{13} dx_{1} dx_{3} + g_{23} dx_{2} dx_{3} + g_{33} dx_{3} dx_{3} + g_{43} dx_{4} dx_{3}) + (g_{14} dx_{1} dx_{4} + g_{24} dx_{2} dx_{4} + g_{34} dx_{3} dx_{4} + g_{44} dx_{4} dx_{4})
The first dimension has to be defined as negative since it represents the time component, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of number notation, since the term dx_{1} dx_{1} = d(ct)^{2}, while dx_{2} dx_{2} = dx^{2}, dx_{3} dx_{3} = dy^{2}, and so on. Therefore:
g = ds^{2} = g_{ct} d(ct)^{2} + g_{x} dx^{2} + g_{y} dy^{2} + g_{z} dz^{2} + (a dozen trivial first order differential terms).
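As a sketch of how the 16-term expansion collapses for a diagonal metric, here is a minimal numerical illustration using the flat Minkowski metric diag(-1, 1, 1, 1) (a special case chosen purely for illustration; any diagonal metric behaves the same way):

```python
# Numerical sketch of the 16-term sum over g_mn dx_m dx_n.
# With a diagonal metric the 12 off-diagonal cross terms vanish,
# leaving ds^2 = -d(ct)^2 + dx^2 + dy^2 + dz^2.

def line_element(g, dx):
    """Contract the metric with a displacement: sum over mu, nu of g[mu][nu] dx[mu] dx[nu]."""
    return sum(g[m][n] * dx[m] * dx[n] for m in range(4) for n in range(4))

minkowski = [[-1, 0, 0, 0],
             [ 0, 1, 0, 0],
             [ 0, 0, 1, 0],
             [ 0, 0, 0, 1]]

d = [2.0, 1.0, 1.0, 1.0]          # (d(ct), dx, dy, dz)
ds2 = line_element(minkowski, d)  # -(2)^2 + 1 + 1 + 1 = -1
print(ds2)
```

Only the four diagonal terms survive, exactly as in the simplified line element above.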
It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the ten-year delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita, who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute coordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without this being done in a radical manner. So it is rare for a single group of people to have the stamina both to invent a new method and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully. Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into spacetime geometry:
‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for “higher mathematics”, which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)
General relativity equates the mass-energy in space to the curvature of motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez or a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.
This point is made very clearly by Professor Lee Smolin on page 42 of the USA edition of his 1996 book, ‘The Trouble with Physics.’ See Figure 1 in the post here. Next, in order to mathematically understand the Riemann curvature tensor, you need to understand the operator (not a tensor) which is denoted by the Christoffel symbol (superscript here indicates contravariance):
Γ_{ab}^{c} = (1/2)g^{cd} [(dg_{da}/dx^{b}) + (dg_{db}/dx^{a}) – (dg_{ab}/dx^{d})]
The Riemann curvature tensor is then represented by:
R^{a}_{cbe} = (dΓ_{bc}^{a}/dx^{e}) – (dΓ_{be}^{a}/dx^{c}) + (Γ_{te}^{a} Γ_{bc}^{t}) – (Γ_{tb}^{a} Γ_{ce}^{t}).
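For readers who want to see this machinery in action, here is a minimal numerical sketch: the Christoffel symbols of the unit 2-sphere metric diag(1, sin²θ), computed by finite differences from the standard definition Γ^c_{ab} = (1/2)g^{cd}(dg_{da}/dx^b + dg_{db}/dx^a – dg_{ab}/dx^d), and checked against the known closed forms. The metric choice and step size are illustrative assumptions, not anything from the text:

```python
import math

# Christoffel symbols of the unit 2-sphere, g = diag(1, sin^2(theta)),
# in coordinates (theta, phi), via central finite differences.

def metric(x):
    theta = x[0]
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def dg(d, a, b, x, h=1e-6):
    """Central-difference derivative of g_ab with respect to x^d."""
    xp = list(x); xp[d] += h
    xm = list(x); xm[d] -= h
    return (metric(xp)[a][b] - metric(xm)[a][b]) / (2 * h)

def christoffel(c, a, b, x):
    """Gamma^c_ab = (1/2) g^{cd} (dg_da/dx^b + dg_db/dx^a - dg_ab/dx^d)."""
    g = metric(x)
    ginv = [[1.0 / g[0][0], 0.0], [0.0, 1.0 / g[1][1]]]  # diagonal metric inverse
    return 0.5 * sum(ginv[c][d] * (dg(b, d, a, x) + dg(a, d, b, x) - dg(d, a, b, x))
                     for d in range(2))

x = [1.0, 0.5]  # theta = 1 radian, phi = 0.5
print(christoffel(0, 1, 1, x))  # known closed form: -sin(1)cos(1) ~ -0.4546
print(christoffel(1, 0, 1, x))  # known closed form: cot(1) ~ 0.6421
```

The agreement with the closed forms (Γ^θ_{φφ} = –sin θ cos θ and Γ^φ_{θφ} = cot θ) confirms the sign convention on the final term of the definition.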
If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is cancelled out (within this distance there is curvature due to gravitation, and at greater distances there will be curvature due to the dark energy that is responsible for the cosmological constant). The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.
To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c∫(g_{μν} dx_{μ} dx_{ν})^{1/2}, while the proper time is ∫(g_{μν} dx_{μ} dx_{ν})^{1/2}.
Notice that the ratio of proper length to proper time is always c. The Ricci tensor is a Riemann tensor contracted in form by summing over a = b, so it is simpler than the Riemann tensor and is composed of 10 second-order differentials. General relativity deals with a change of coordinates by using the FitzGerald-Lorentz contraction factor, γ = (1 – v^{2}/c^{2})^{1/2}. Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, and which reduces to the line element of special relativity for the impossible, not-in-our-universe, case of zero mass. Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r^{2} (inward acceleration being defined as positive) in the form R_{μν} = 4πGT_{μν}/c^{2}, where T_{μν} is the mass-energy tensor, T_{μν} = ρu_{μ}u_{ν}. (This was incorrect since it did not include conservation of energy.) But if we consider just a single dimension for low velocities (γ = 1), and remember E = mc^{2}, then T_{μν} = T_{00} = ρu^{2} = ρ(γc)^{2} = E/(volume). Thus, T_{μν}/c^{2} is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:
Above: the components of the stress-energy tensor (image credit: Wikipedia).
The scalar sum or “trace” of the stress-energy tensor is of course the sum of the diagonal terms from the top left to the bottom right, hence the trace is just the sum of the terms with subscripts of 00, 11, 22, and 33 (i.e., the energy-density and pressure terms).
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^{1/2}, so v^{2} = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed with which a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^{2} = 2GM/x) into the FitzGerald-Lorentz contraction, giving γ = (1 – v^{2}/c^{2})^{1/2} = [1 – 2GM/(xc^{2})]^{1/2}.
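Putting rounded numbers into these two formulas (illustrative constants, not exact CODATA values): at the Earth's surface the escape velocity is about 11.2 km/s, and the factor [1 – 2GM/(xc²)]^{1/2} differs from 1 by only about 7 parts in 10^10:

```python
import math

# Escape velocity and the gravitational contraction/time-dilation factor
# gamma = [1 - 2GM/(xc^2)]^(1/2), evaluated at the Earth's surface.
# Rounded constants used throughout (illustrative).

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_earth = 5.972e24  # kg
r_earth = 6.371e6   # m

v_escape = math.sqrt(2 * G * M_earth / r_earth)            # ~1.12e4 m/s
gamma = math.sqrt(1 - 2 * G * M_earth / (r_earth * c**2))

print(v_escape)   # about 11.2 km/s
print(1 - gamma)  # about 7e-10: the fractional slowing of surface clocks
```

This tiny fraction is nonetheless measurable: it is the kind of gravitational clock shift that satellite clocks must be corrected for.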
However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: with velocity, length is contracted in one dimension only, whereas with spherically symmetric gravity, length is contracted equally in all 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each: FitzGerald-Lorentz contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = (1 – v^{2}/c^{2})^{1/2} = 1 – ½v^{2}/c^{2} + … . Gravitational contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = [1 – 2GM/(xc^{2})]^{1/2} = 1 – GM/(xc^{2}) + …, where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x_{0} + y/y_{0} + z/z_{0} = 3r/r_{0}. Hence the radial contraction of space around a mass is r/r_{0} = 1 – GM/(xc^{2}) = 1 – GM/(3rc^{2}). Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c^{2}. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
This is the 1.5 mm contraction of the Earth’s radius that Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the FitzGerald-Lorentz contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without molecular viscosity (this is due to the Schwinger threshold for pair production by an electric field: the vacuum only contains fermion-antifermion pairs out to a small distance from charges, and beyond that distance the weaker fields can’t cause pair production – i.e., the energy is below the IR cutoff – so the vacuum contains just bosonic radiation without pair-production loops that can cause viscosity; for this reason the vacuum compresses macroscopic matter without slowing it down by drag). Feynman was unable to proceed with the Le Sage gravity mechanism and gave up on it in 1965.
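As a quick numerical check of Feynman's figure, the (1/3)GM/c² contraction for the Earth works out to roughly 1.5 mm (rounded constants, chosen for illustration):

```python
# Radial contraction (1/3)GM/c^2 evaluated for the Earth.

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_earth = 5.972e24  # kg

contraction = G * M_earth / (3 * c**2)  # metres
print(contraction * 1000)               # about 1.48 mm
```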
More information can be found in the earlier posts here, here, here, here, here and here.
copy of a comment directly relevant to the book:
http://keamonad.blogspot.com/2008/01/categorical.html
“In M Theory, lumping everything into an arbitrary wellknown category (or functor category) is analogous to deciding that path integrals for quantum gravity should rely on classical geometry, …”
Do you have specific examples of people with such confused thinking in quantum gravity?
One example I saw which set me thinking was chapter I.5, ‘Coulomb and Newton: Repulsion and Attraction’, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6.
Zee starts by (non-)quantizing Coulomb’s law (write down a Lagrangian consisting of Maxwell’s classical equations with mass terms for the photon, put that into a Feynman path integral, evaluate the action and show that this leads to an always-positive potential between two similar charges; hence, similar charges repel). He then moves on to (non-quantized) quantum gravity, writing a 5-component tensor representing the 5 polarizations of the graviton in the Lagrangian (assuming spin-2 gravitons, which should have 1 + 2^2 = 5 polarizations), evaluates the path integral for that Lagrangian, and finds (as expected) that the potential energy between two lumps of positive energy density is always negative, so masses attract.
This is a fiddle for two reasons. First, nobody has ever seen a spin-2 graviton. They are merely assumed, based precisely on the calculation showing that they should always provide an attractive force between positive energy densities or masses. Hence, it is circular logic to calculate that quantum gravity based on spin-2 gravitons is always attractive. It’s because you get that result for a spin-2 boson that mainstream people think gravitons are spin-2.
In addition, the procedure is really just a very slight modification to classical physics, and is not a real quantum field theory. There is no mathematical expression describing individual quanta interactions there; just categories of interactions (each Feynman diagram represents a category of quantum interactions).
In quantum electrodynamics, the path integral relies on classical geometry, because Maxwell’s classical field equations (differential equations modelling continuously variable fields, not quantized fields), with similarly non-quantized terms for the mass of the field quanta, are stuck into the Lagrangian, which in turn goes into the path integral.
Is this problem (classical field geometry being used in path integrals) the actual kind of problem you are referring to?
If you want a true mathematical model of air pressure, you can’t say it’s a constant 14.7 pounds per square inch, because on the smallest areas it’s not constant; instead it’s quantized into chaotic, randomly timed strikes of individual air molecules having unpredictable speeds and directions. The mathematical concept of air pressure just averages out the chaotic molecular strikes. On large scales, it’s a useful statistical approximation.
But that model breaks down on small scales, where the statistical approximation (constant pressure) leads to deterministic predictions, while in fact molecular impacts occur chaotically at random and prevent determinism.
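The breakdown of the statistical approximation can be illustrated with a toy Monte Carlo (arbitrary units; the uniform impulse distribution is just an assumption for illustration). The average of many random impacts is stable, but the relative fluctuation falls only as 1/sqrt(N), so measurements over small areas or short times are dominated by randomness:

```python
import random

# "Pressure" as the average of N random molecular impulses.
# Each impulse is uniform in [0, 2] (mean 1, arbitrary units).

random.seed(42)  # reproducible illustration

def mean_impulse(n):
    """Average of n random impact strengths."""
    return sum(random.uniform(0.0, 2.0) for _ in range(n)) / n

small = [mean_impulse(10) for _ in range(200)]     # few impacts: noisy
large = [mean_impulse(10000) for _ in range(200)]  # many impacts: smooth

def spread(samples):
    """Standard deviation of a list of sample means."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

print(spread(small))  # large scatter with only 10 impacts per sample
print(spread(large))  # roughly sqrt(1000) times smaller with 10,000 impacts
```

Both averages hover near 1, but the small-N "pressure" fluctuates wildly, which is exactly the sense in which the continuum model fails on small scales.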
Most of the equations in quantum field theory, including the Lagrangian and the path integral into which the Lagrangian is inserted, are completely non-quantized statistical models, valid only approximately for large interaction numbers.
I think that the perturbative expansion of terms you get from path integrals, where each term corresponds to a “Feynman diagram”, is really a categorization of interactions, not true quantization. Each Feynman diagram represents a category of interactions.
In actual fact, most of the interactions which are normally going on correspond to the very simplest Feynman diagrams, i.e. simple interactions occur many times per second to particles, while more subtle interactions (higher order perturbative corrections) occur less frequently.
Because the biggest contributions to common quantum processes are usually due to the simplest Feynman diagrams, there is no reason to suppose that quantum mechanics is particularly weird. As Feynman points out, the biggest contributions to most path integrals occur from interactions occurring very close to the classical model:
‘Light … uses a small core of nearby space.’ – R. P. Feynman, QED, Penguin, 1990, page 54.
‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from Brownian-motion-type impacts of individual gauge bosons] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, 1990, page 84-5.
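Feynman's "arrows" can be sketched numerically. Below, each path is labelled by a deflection x and contributes a unit arrow exp(ikx²), a quadratic phase standing in (as an assumption) for the action near its minimum at x = 0, the classical path; k and the integration ranges are arbitrary illustrative choices:

```python
import cmath

# Sum unit-amplitude "arrows" exp(i k x^2) over paths with deflection x.
# Paths near x = 0 (the classical path) reinforce; far paths spin rapidly
# in phase and largely cancel, so a small core of nearby paths dominates.

def path_sum(x_max, n=20000, k=50.0):
    """Approximate the integral of exp(i k x^2) over |x| <= x_max."""
    dx = 2.0 * x_max / n
    return sum(cmath.exp(1j * k * (-x_max + i * dx) ** 2)
               for i in range(n + 1)) * dx

near = abs(path_sum(0.3))  # only paths close to the classical one
full = abs(path_sum(3.0))  # a range 10 times wider, including far paths
print(near, full)          # nearly equal: the extra far paths mostly cancel
```

Widening the range of paths tenfold barely changes the total amplitude, which is the stationary-phase point behind ‘Light … uses a small core of nearby space.’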
Those people who claim that quantum theory is ‘weird’ have an easy excuse not to investigate the possibility that it is not weird but just very simple, i.e., mainly due to simple interactions (the most basic Feynman diagram).
What’s weird is that people ignore the fact that complex Feynman diagrams in general only give rise to very small perturbative corrections. Modern physics has very little weirdness, just a bit of chaotic randomness on small scales due to individual (but usually very simple) quantum interactions.
So many people are drawn to physics for physically false reasons (e.g., believing that it shows that the world is weird or complex), that just stating the factual evidence for underlying simplicity is a “heresy” in itself.
Ambiguous results from experiments like Aspect’s experiment are taken by these people to prove that particles are “magically” entangled, but when you look at the experiment it’s actually interpreted using physically flawed mathematics. It’s based entirely upon wavefunction collapse, but as Thomas Love has pointed out:
“The quantum collapse occurs when we model the wave moving according to Schroedinger (timedependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (timeindependent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
If you get a properly quantized quantum field theory, say a Coulomb potential where actual exchange of individual field quanta between electron and proton occurs randomly in time, then you’ll end up with non-deterministic electron orbits which (over long periods) can be statistically modelled by the Schroedinger wave equation. (The time-independent Schroedinger wave equation is easy to derive this way, just by analogy to a classical wave arising from an ensemble of particles such as field quanta interactions.)
In other words, classical models of the atom only need a slight correction (the introduction of an electric field composed of radiation quanta being exchanged between charges).
The production of quantum gravity doesn’t need that much change to existing ideas, just corrections and modifications to allow for the actual quantum interactions occurring.
To make the vast amount of graviton interactions in the universe subject to simple mathematical modelling, the near symmetry of the universe in any radial distance from us can be exploited. All you have to do then is to model the distribution of mass and other relevant parameters (like recession velocities) as a function of radial distance, and the maths becomes relatively simple. Gravitation then focusses on asymmetries to the normal radial symmetry, caused by things like large fairly nearby masses: Earth, Sun, etc. The problem is that too much belief exists in the complexity of the world for most people to bother with simple models. The mainstream actually uses a lot of classical physics where it claims to be doing quantum field theory, while at the same time claiming that the universe is beyond simple understanding…
copy of another relevant comment:
http://keamonad.blogspot.com/2008/01/categorical.html
A better way to put my argument against the spin-2 graviton is as follows.
The claimed reason to have a spin-2 graviton is the claim that you need a gauge boson with 5 polarizations in order to have an always-attractive force between two regions of mass-energy.
This is a false model because you never have two regions of energy: gravitons are not just being exchanged between an apple and the Earth. They are being exchanged with all the other masses in the universe around us as well.
Therefore, the failure of spin-2 gravitons (and a massive chunk of the failure of string theory, too) is the lie that you can analyse quantum gravity by ignoring 99.999… % of the mass involved in exchanging gravitons.
By altering the Feynman path integral formulation to include all the masses in the entire universe which are exchanging gravitons (and not falsely restricting the analysis to two regions of positive energy), the need for a spin-2 graviton with a 5-polarization tensor is eliminated. Hence, gravitons must be spin-1 radiation.
Extending this manycharge analysis to electromagnetism (instead of the usual treatment that considers a path integral between just two charges), there now must be two types of electromagnetic gauge boson radiation in order that all charges of like sign can exchange such radiation with other charges of like sign, and in order to incorporate the attraction of unlike charges and repulsion of like charges.
Does this seem any clearer? Maybe when I’ve published the maths, it will.
copy of a comment in moderation queue at site below (it quotes a bit from the linked page http://www.news.wisc.edu/14678 ):
http://www.math.columbia.edu/~woit/wordpress/?p=645
‘The shape of the dimensions is crucial because, in string theory, the way the string vibrates determines the pattern of particle masses and the forces that we feel,’ says the UWMadison physics professor. …
‘There are myriad possibilities for the shapes of the extra dimensions out there. It would be useful to know a way to distinguish one from another and perhaps use experimental data to narrow down the set of possibilities.’ …
‘Shiu compares the effect to a darkened room in which patterns of sound resonating off the walls can reveal the shape of the room. Similarly, KK (KaluzaKlein) gravitons are sensitive to the extradimensional shape and, through their behavior and decay, may reveal clues to that shape.’
String theorists won’t understand why this is ‘today’s hype’. As far as they’re concerned, if you are a physicist who wants experimental evidence for speculations, you should be celebrating that they’re trying to get some concrete connection between extra dimensions and experimental data.
Do you recommend that physicists only put out press releases when they have some alleged evidence for what they’re saying? The ability of scientists who lack any scientific evidence for a theory to get media attention relies on their use of mainstream credentials (affiliation, peer-reviewed technical publications, and groupthink-style mutual back-slapping, e.g. where the journalist can construct a ‘balanced’ story by phoning other experts and getting reactions to quote) in lieu of any scientific evidence. This looks like an unethical abuse of power to drum up excitement and help get more funding to check ideas. But it might just be ignorance.
The technical point that predictions are only scientific if they are falsifiable is difficult for some people to grasp. A falsifiable particle physics theory is financially too risky to invest much money in. That’s why string theory is such a good commercial investment: students of string theory don’t fear that it will turn out to be a dead end.
There’s a comment by anon. on Not Even Wrong that I agree with, concerning the need first to attack a failed idea, before trying to get people to listen to your idea which replaces the failed idea. This is because, if you just put forward a new idea without first making sure everybody is aware of the failure in the existing idea, it will be impossible to get interest. Remember the email I reproduced in a recent post from Stanley G. Brown, editor of Physical Review Letters in 2004, stating that “alternatives to currently accepted theories” aren’t generally publishable? Well, that’s the whole point: first you need to effectively attack the failed mainstream idea, so that these people are aware of why the new idea is needed. You cannot make progress in a field dominated by failed ideas without attacking the failed ideas, to make people aware of the need for new ideas. If you don’t do that, then people will simply dismiss the new ideas as unnecessary “alternatives”. If you make the case why the ideas you’re replacing are “not even wrong” and your ideas are better, then it is clearer to people that when you get dismissed simply because your fact-based, falsifiable, predictive theory is an “alternative” to string theory ideas, they’re being disingenuous. By the way, I did make the point by email to Stanley Brown after his email in 2004, that the currently accepted ideas are failures. He then relied upon officialdom by ignoring this and sending me a report by an associate editor which stated that my paper claiming to prove the mechanism of gravity was in fact “based on various assumptions”, and he claimed that the existence of assumptions meant that it wasn’t a proof. This is totally vacuous because facts are used, not mere “assumptions”: see http://nige.wordpress.com/about/ . For example, the Hubble recession rate is an observational fact; see http://www.astro.ucla.edu/~wright/tiredlit.htm for some reasons why that is so (i.e. why the redshift indeed is factually due to recession). The outward acceleration of the universe is a fact, because recession speed v = HR for distance R = ct (in spacetime, all distance is related to time). Hence velocity depends on time, and acceleration a = dv/dt = d(HR)/dt = H.dR/dt + R.dH/dt, where H is taken to be constant in time (dH/dt = 0), so: a = H.dR/dt = Hv = H(HR) = RH^2.
Then put this into Newton’s 2nd empirical law F = dp/dt = ma (this law is an empirical fact, i.e., it is based on observations of nature; it is not an “assumption” to be sneered at by an associate editor of PRL, who should save such sneers for the spin-2 gravitons, supersymmetry and 11 dimensions assumed in Witten’s M-theory), and you get F = mRH^2. By Newton’s 3rd law there is an inward reaction force, an effect carried physically by gravitons (the spacetime fabric, flowing towards us to fill in the voids being vacated behind each of the receding subatomic particles of matter in the universe, moves that way but also exerts pressure which carries force).
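The size of a = RH² is easy to check numerically (assuming the round illustrative values H ≈ 70 km/s/Mpc and R = c/H, which are my choices here, not the author's exact inputs):

```python
# Acceleration a = RH^2 and outward force F = mRH^2 per kilogram,
# using H ~ 70 km/s/Mpc and the horizon distance R = c/H (illustrative).

c = 2.998e8        # m/s
Mpc = 3.086e22     # metres per megaparsec
H = 70e3 / Mpc     # Hubble parameter in s^-1, ~2.27e-18
R = c / H          # ~1.3e26 m

a = R * H**2       # = cH, ~7e-10 m/s^2
F = 1.0 * a        # F = mRH^2 for m = 1 kg

print(a)
print(F)
```

With R = c/H the expression collapses to a = cH, a tiny acceleration of order 10^-10 m/s², which is the scale of the claimed cosmological acceleration.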
This reaction force only exists upon an apple from the surrounding universe, or upon the Earth from the surrounding universe: it does not exist upon the Earth from the apple or on the apple from the Earth, because the Earth and the apple are not receding (or at least, not significantly receding) from one another. Thus, the particles within the Earth and those within the apple don’t exchange gravitons with any significant force; they only exchange gravitons with receding matter in the surrounding universe (which is radially symmetric around us), so they get pushed together. In consequence, these dynamics of gravitons carrying reaction forces to the expansion of the universe predict that gravity is caused simply by the expansion of the universe as observed in Hubble’s law. Doing the detailed calculations for this, we can predict G and various other things, and compare them to measured values. It survives tests and is self-consistent, as well as being consistent with the experimentally-validated aspects of GR and QFT. See http://nige.wordpress.com/about/
Anyway, here’s anon’s comment in case Dr Woit accidentally deletes it:
http://www.math.columbia.edu/~woit/wordpress/?p=643#comment34365
anon. Says:
February 2nd, 2008 at 8:55 am
‘The point … It’s really simple: if someone wants to disparage work addressing some issue (in this case finding a theoretical underpinning for inflation) then they should provide a more promising approach to addressing the issue. I don’t find the string-inspired stuff at all appealing, so I don’t work on it, but at the same time I don’t disparage it since I don’t have any better proposal for the issues they are trying to address. Isn’t that a reasonable attitude that everyone should have?’
– amused
Amused, it’s unreasonable because if you ban attacks on popular ideas that are failures – unless the person who is attacking has a better approach to the problem – then the need for a better approach to be developed may never arise. It’s then a Catch-22 because, only when a failed idea has first been found wanting, is there a need to develop a better approach.
If you insist that a better approach be developed before criticising failures in the current ideas, 1) nobody will listen, because they’ll think you’re just trying to hype a new theory, and 2) you will eliminate the usual two-step route to advance, whereby a failed idea is first attacked, then replaced by new developments that are inspired by the faults in the existing ideas.
Historically, the two-step route to progress (discredit a bad idea, then afterwards develop a better approach) is more common than a one-step process of replacing an idea in one go, without first making the case widely understood for why the idea needs to be replaced.
Your suggestion that criticism must always be constructive, while fine in an ideal world, will eliminate a lot of progress in this (real) world by preventing the two-stage progress model from working. People aren’t motivated to fix things that aren’t first known to be broken (‘if it isn’t broken, don’t fix it’). If an error is being made, the sooner it is publicised, the sooner it can be fixed. Any censorship of criticisms just slows down progress.
copy of a comment. In the comment below, the mention of cosmological “acceleration” is not referring to the acceleration inherent in the Hubble recession, but to a false acceleration – of similar size – which the mainstream adds to general relativity’s field equation to make gravitation cancel out over very large distances, in agreement with Perlmutter’s supernovae observations that there is no gravitational deceleration over large distances. The mainstream cosmological acceleration basically doubles the real acceleration, due to mainstream confusion over quantum gravity dynamics. First, the mainstream refuses to publish the fact that the Hubble law v = HR implies a variation of v with time, since R = ct (i.e., v = Hct), suggesting an intrinsic acceleration of a = dv/dt = d(HR)/dt = RH^2. They obfuscate the simple physics of the radial retardation of receding masses in this radially symmetric universe with the full mathematical machinery of general relativity, which covers up the underlying physical simplicity (although the tensor formulation of general relativity has important uses, see for instance http://nige.wordpress.com/about/ ):
http://keamonad.blogspot.com/2008/02/mtheorylesson153.html
Mathematicians working in physics often use the term ‘dimensions’ loosely to refer to degrees of freedom. See for example this comment by Dr Woit:
http://cosmicvariance.com/2005/12/07/howmanydimensionsarethere/#comment8689
‘Sure, there are all sorts of interesting ways of thinking about particle physics models using more “dimensions” than four. I would claim that the standard model is best thought of by thinking about a 16 dimensional space (a fiber bundle with fibers SU(3)xSU(2)xU(1) over spacetime). The thing for which there is no evidence is not extra dimensions in general, but extra Riemannian geometry dimensions where the metric now carries many more degrees of freedom with supposedly the same dynamics as the four we know.’
According to the Kaluza-Klein method of adding dimensions to incorporate gauge equations into general relativity, 1 dimension is added for Maxwell’s equations, which are interpreted by the mainstream as a simple U(1) Abelian symmetry (I won’t mention my objection to this U(1) model here), while isospin charge represented by SU(2) requires 2 extra dimensions, and colour charge represented by SU(3) requires 4 extra dimensions. Adding the 4 spacetime dimensions gives a total of 11 dimensions.
Hence where Kea writes that the extra dimensions are ‘two for spin, three for mass, six for em charge and so on’, she’s clearly referring to the 2 extra dimensions representing isospin SU(2), while for 3-dimensional mass and 6-dimensional em charge, I think she might just be referring to Lunsford’s unification, available on CERN (but censored from arXiv by a conspiracy of string worshippers).
Lunsford’s paper ‘Gravitation and Electrodynamics over SO(3,3)’ is a very simple and clear analysis which convincingly shows why the KaluzaKlein 5 dimensional theory is a failure, and replaces it with a theory that has 3 time and 3 spatial dimensions.
The 3 spatial dimensions are associated with mass, making mass 3 dimensional, while the full em and gravity unification requires 6 dimensions altogether.
In my understanding, the 3 time dimensions are always identical, which has caused confusion up to now. Time dimensions are related to spatial dimensions by x = ct(x), y = ct(y), and z = ct(z), where t(x), t(y) and t(z) are all identical sized time dimensions.
You can see this in the fact that the Hubble expansion rate is the same in all directions. Since 1998, it has been known that the universe isn’t decelerating due to gravity as expected, so the ultimate measure of time is the age of the universe,
t = 1/H
where H is the Hubble constant from the recession velocity law v(x) = Hx, v(y) = Hy, and v(z) = Hz.
It’s precisely because the expansion of the universe is isotropic (independent of the direction or axes), that time itself appears to be only 1 dimensional instead of 3 dimensional.
If you got 3 different values of the Hubble constant when measuring it from recession in 3 orthogonal directions using v = Hr, you’d get 3 different ages of the universe, given by applying t = 1/H to the 3 different values of H.
It’s because the expansion is isotropic that time itself seems to be 1-dimensional. This is purely down to the fact that all 3 time dimensions are indistinguishable!
My argument above isn’t in Lunsford’s paper, and his argument is purely abstract; using 3 time dimensions and 3 spatial dimensions you can unify general relativity and electromagnetism, making a prediction. The prediction is that the cosmological constant is zero. This fits the observations if you do a non-ad-hoc quantum gravity analysis, instead of forcing general relativity onto observations of supernovae redshift by adding an ad hoc small cosmological constant.
The quantum gravity analysis is that gravitons exchanged between receding masses over immense distances are redshifted, thus losing energy when received (this is described by Planck’s law, relating the received energy of quanta to their received frequency). Such a redshift thus causes an energy loss, which must reduce the effective gravitational force coupling constant, G.
This loss means that distant receding objects are not slowed as much as predicted by GR. The mainstream GR cosmological recession model is basically classical Newtonian gravitation: the receding supernova at radial distance R is like a bullet fired upward from the Earth being slowed by gravity (you just need to insert the supernova mass to replace the bullet’s mass, and the mass of the universe contained within radius R to replace Earth’s mass). This GR/Newtonian model of cosmology is false because it predicts an amount of gravitational deceleration of receding supernovae that is too large, due to the error in GR of assuming G is constant. When you reduce G in direct proportion to the redshift-induced frequency change factor for the distance of the supernova, then you correct Perlmutter’s results without needing to add a small cosmological constant. (There are also other small modifications suggested by quantum gravity, according to my research.)
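As a rough numerical sketch of this G-reduction argument (the 1/(1 + z) frequency factor is my own reading of the redshift scaling described above, not a formula stated in the comment):

```python
# Sketch (my own illustration, with an assumed scaling) of the idea that the
# effective gravitational coupling G is reduced for receding sources: the
# received graviton frequency is f_emitted/(1 + z), and by Planck's law E = hf
# the received quanta carry correspondingly less energy.

G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2

def effective_G(z, G0=G):
    """Effective coupling for a source at redshift z, assuming the
    energy loss scales as the frequency ratio 1/(1 + z)."""
    return G0 / (1.0 + z)

# Nearby source (z ~ 0): essentially full strength.
# Distant source (z = 1): coupling halved, so less deceleration than the
# constant-G model predicts, mimicking "acceleration".
assert effective_G(0.0) == G
assert effective_G(1.0) == G / 2
```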
Thus, Lunsford’s 6d unification correctly predicts that there is no cosmological constant (dark energy which is supposed to be speeding up the expansion to offset the unobserved gravitational deceleration predicted by the faulty mainstream GR model, which omits graviton redshift effects which reduce G).
Nobel Laureate Philip Anderson grasps the basic point about the cosmological constant hoax:
‘the flat universe is just not decelerating, it isn’t really accelerating’ – Phil Anderson
But this vital point was simply ignored by Professor Sean Carroll in that blog discussion.
copy of a comment:
http://keamonad.blogspot.com/2008/02/mtheorylesson153.html
“Nige, suppose the expansion of the universe was observably anisotropic. What would that have to do with extra time dimensions? The expansion would have been measured in three spatial dimensions, but against the same time dimension.” – mitchell
To answer your first point, “suppose the expansion of the universe was observably anisotropic. What would that have to do with extra time dimensions?”: if the expansion of the universe was observably anisotropic, we’d be subjected to a net gravitational field from the surrounding universe, which would affect time, as I’ll explain. (In fact we are subjected to this, but to a very small extent, which is downplayed for obvious reasons; it was revealed by the CMB, and it is a far bigger anisotropy in the CMB than the small-scale features discovered by COBE and WMAP, see
http://adsabs.harvard.edu/abs/1978SciAm.238…64M).
The mechanism for how time is related to expansion is the gravitational field. The rate at which time flows depends on gravity, as is well known from experiments with atomic clocks. The stronger the gravitational field, the slower a clock runs.
The gravitational interaction is mediated by the exchange of gravitons between masses. A gravitational field is a flux of gravitons flowing between masses or locations containing energy.
If the expansion rate of the universe is bigger in one direction than in another, the mass will be distributed to greater distances where the expansion rate is bigger.
This means that the gravitational coupling constant will be weaker for interaction with the immense receding masses of material in directions where the recession rate is biggest (the gravitons will be more redshifted, to lower energy, when emitted by material which is receding faster).
Hence, if the universe is anisotropic, a clock will be subjected to a net gravitational field from the surrounding masses in the universe, which will affect time.
Secondly, if the universe were observably anisotropic, the age of the universe would vary according to the direction you looked. This would make the extra time dimensions observable.
The fact that the universe is anisotropic does mean that the effective age of the universe is different in different directions! The Hubble constant does vary slightly with direction, but that is currently either ascribed to an “aether” (!), as in http://adsabs.harvard.edu/abs/1978SciAm.238…64M, or it is not analysed at all (glossed over).
There is no evidence that the anisotropy is the failure of special relativity, or proof of an “aether” by requiring that we have an absolute motion in the universe. The anisotropy is really evidence that there are 3 time dimensions, one corresponding to each spatial dimension.
The age of the universe, i.e. time in cosmology, is t = 1/H, where the Hubble constant H = v/R (here R is distance in a spatial dimension, and v is recession velocity). The age of the universe is calculated from the expansion rate, so your statement that:
“The expansion would have been measured in three spatial dimensions, but against the same time dimension”,
is missing the point I’m making that different time dimensions emerge if you have an anisotropic expansion of the universe, because the Hubble expansion rate v = HR will be different in different spatial dimensions R, and so the measure of time as age of the universe t = 1/H, will be different for different spatial dimensions.
The only way to allow 3 spatial dimensions to give rise to 3 different ages of the universe is to increase the number of time dimensions until it coincides with the number of spatial dimensions.
An anisotropic expansion would mean that the Hubble constant varied with direction.
So you’d have a different age of the universe in different directions, as calculated from t = 1/H = 1/(v/R) = R/v.
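A toy calculation makes this concrete; the three Hubble values below are hypothetical illustrations of an anisotropic expansion, not measurements:

```python
# Toy illustration (assumed numbers) of the argument: if the measured Hubble
# constant differed along three orthogonal directions, t = 1/H would give a
# different "age of the universe" along each axis.

H_x, H_y, H_z = 2.25e-18, 2.27e-18, 2.29e-18  # hypothetical anisotropic values, 1/s

ages = {axis: 1.0 / H for axis, H in (("x", H_x), ("y", H_y), ("z", H_z))}

seconds_per_year = 3.156e7
for axis, t in ages.items():
    print(axis, t / seconds_per_year / 1e9, "Gyr")  # slightly different age per axis

# Isotropy (H_x = H_y = H_z) is exactly what collapses the three ages into
# one, making time appear 1-dimensional.
assert len(set(ages.values())) == 3
```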
The reason why there are 3 spatial dimensions is that we see an asymmetry when looking in different directions. If everything was the same in every direction, the fact that there are 3 spatial dimensions would be just as much an obscure mathematical finding as Lunsford’s finding that there are 3 effective time dimensions.
copy of a comment:
http://keamonad.blogspot.com/2008/02/mutualunbias.html
“Our main motivation is that most discussions of quantum mechanics use a background spacetime that is the same as classical spacetime, usually without any supportive arguments and even sometimes denying that quantum mechanics is a spacetime theory. And yet many of the difficulties in understanding quantum phenomena derive from the use of classical spacetime. We claim in the present paper that the spacetime of quantum phenomena differs from that of classical phenomena in the nature of its continuum. According to our theory [10], the description of quantum phenomena requires a real number continuum that is not the classical continuum. It is not even a fixed element of the theory but varies with the quantum system in a way similar to the way the metric geometry of Einstein’s general relativity varies with the physical system [2]. This is not part of the usual paradigm of quantum theory but adopting it enables us to reformulate the paradoxes of the standard interpretation when each quantum system has its own real number continuum.” – p2 of Quantum mechanics as a spacetime theory, http://arxiv.org/abs/quantph/0512220
Thanks for this link, Kea. What I like about the paper (apart from the paragraph above) is the title, the abstract mentioning a comparison to Bohmian mechanics, and the fact that the paper contains a section called “physical interpretation” (a subject that is proudly missing from a lot of mathematical physics). I’m glad that Carl, you, and Matti, are investigating it.
If forces and thus accelerations are really due to quantum field interactions, then spacetime is not “curved” at the quantum scale or atomic size scale. A geodesic will only appear approximately curved when the number of graviton interactions (or whatever the field quanta are) which are causing interactions is large, e.g. motion on large scales. So any final theory has got to take account of how randomness emerges from field quanta interactions on small scales, and how these little zigzag deflections add up on large scales (large numbers of field quanta being involved) to give something that is approximated by “curvature” i.e. differential geometry. I’m glad that this paper at least is trying to address this real physical problem. It’s empirically known from GR that spacetime is classical on large distance scales, and it’s well known from QM that spacetime is not classical on small distance scales – this is experimental, observational fact and not speculation like “problems” caused by spin2 gravitons. However, looking at the paper carefully, it’s just nowhere near radical enough. Still, it’s a step towards tackling real problems.
copy of a comment in moderation queue to:
http://cosmicvariance.com/2008/02/05/chaosatthepollingstation/#comment310148
nc on Feb 5th, 2008 at 4:59 pm
‘They were now encouraging people to cast “provisional ballots” — you could vote, but it wouldn’t be immediately counted. Someone would later check to see if you were really registered, and if you were, then it would be added to the total.’
Well at least they don’t carry guns and force you to vote for the ruling elite, then claim to have achieved a “democratic” victory of majority consensus about something. (E.g., leading mainstream scientific theorists who force other people to conform or be censored out.)
I’d just like to clarify SU(2) x SU(3) a bit.
The Standard Model symmetry groups U(1) x SU(2) x SU(3) do not include a symmetry group for the Higgs field, which is supposed to act differently (miring charges).
So SU(2) x SU(3) is a replacement for the symmetry groups U(1) x SU(2) x SU(3), as explained in the book.
SU(2) x SU(3) symmetry groups, like U(1) x SU(2) x SU(3), do not contain symmetry for a massgiving field (a replacement for the Higgs field).
The gravity mechanism suggests quantization of masses, see http://nige.wordpress.com/about/
I’ve updated this post a bit, including an analogy for the binding of massive vacuum particles to SU(2) x SU(3) charges which compares them to firmly-mounted riders on a horse. Photons don’t have a rest mass despite having energy, because photons go at light velocity and are not firmly attached to the massive bosons in the vacuum which give rise to mass. This is a bit like a rider out in front in a race who isn’t firmly seated in her saddle, so that when the horse accelerates violently, she slides back and falls off.
copy of a comment:
http://keamonad.blogspot.com/2008/02/mtheorylesson153.html
Thanks Tony, for that information about SO(3,3).
Mitchell,
I’ve read what Lunsford wrote and in my comment above I pointed out that: “My argument above isn’t in Lunsford’s paper, and his argument is purely abstract; using 3 time dimensions and 3 spatial dimensions you can unify general relativity and electromagnetism making a prediction.”
So I’m not misinterpreting Lunsford’s 3 dimensional time, just interpreting it.
I’ve some evidence for why there is a pairing up of space and time dimensions, which is independent of, and apparently supplements, Lunsford’s analysis.
Hubble in 1929 ignored spacetime when he interpreted the recession of galaxies as a velocity directly proportional to apparent distance, v = HR.
You get an equally physically valid and yet entirely different interpretation of the entire universe if you remember that distance R = ct, so that v = Hct, i.e., velocity is directly proportional to the time past you are observing.
This implies an acceleration of sorts: a variation of velocity with effective time. I’ve never seen anyone else analyse the Hubble law this way (they all try to incorporate it into a suitable metric of GR, then have to add an unexplained, ad hoc small positive cosmological constant to make it fit the evidence).
Treating the Hubble expansion rate v = HR as an acceleration is easy:
a = dv/dt = d(HR)/dt = H*dR/dt + R*dH/dt = H*dR/dt
(because H is constant, dH/dt = 0)
a = H*dR/dt = H*v = H*(HR) = RH^2
This gives a new, physical way to analyse the big bang: every galaxy of mass m has an effective outward force of F = ma = mRH^2.
Newton’s 3rd law then suggests that there’s an equal inward force, which from the possibilities available appears to be carried by the spacetime fabric, i.e. gravitons. This allows quite a bit of physics to be understood, and makes checkable predictions.
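The chain of arithmetic above is easy to check numerically; the mass and the Hubble constant below are assumed order-of-magnitude inputs of my own, not values from the comment:

```python
# Numerical sketch (assumed input values) of the derivation above: v = HR
# with R = ct and t = 1/H gives a = dv/dt = Hc = RH^2, and a receding galaxy
# of mass m then has an effective outward force F = ma = mRH^2.

H = 2.27e-18   # Hubble constant, 1/s (assumed, ~70 km/s/Mpc)
c = 2.998e8    # speed of light, m/s

t = 1.0 / H    # age of the universe under t = 1/H
R = c * t      # horizon radius R = ct

a_direct = H * c       # dv/dt for v = Hct with H constant
a_formula = R * H**2   # the RH^2 form quoted in the text
assert abs(a_direct - a_formula) <= 1e-12 * a_direct  # identical, since R = c/H

m = 2.0e42     # assumed galaxy mass, kg (large spiral, order of magnitude)
F = m * a_formula      # effective outward force of the receding galaxy
print(a_formula, F)    # a is of order 1e-9 m/s^2; F of order 1e33 N here
```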
It appears you don’t see the symmetry of having a separate effective time dimension corresponding to each spatial dimension.
“If you have one clock that runs twice as fast as another clock, you end up with two different coordinatizations of time, that differ by a scale factor, but it’s still the same time dimension being measured.”
You’re missing the point: time dilation (or time scaling) only implies a single time dimension when you have one time that is being scaled to take different values at different speeds or in different gravitational fields.
If you have 3 different times coming out, then that implies 3 different time dimensions. How else do you account for 3 different time scaling factors? You will need to scale all three times, you can’t scale them all to the same value.
Similarly, you could use your fruitless argument to replace 3 spatial dimensions by 1 spatial dimension and some scale factors:
“The small differences in the apparent radius of the universe (R = c/H) in different directions are not evidence of the existence of 3 different spatial dimensions, and there is only one spatial dimension in the universe with 3 different scaling factors.”
Obviously we’ve got more evidence for 3 spatial dimensions than that, and more evidence than we have for 3 time dimensions, but all the same, you are simply ignoring evidence for 3 time dimensions. How will your scale factors work?
Consider a situation in which the 3 spatial dimensions might look like one spatial dimension. Suppose that you were blind and immobile and had only one sensory source of information: the automatic readout of an ultrasonic distance-measuring gadget that rotates within a room. As it rotates, it gives a different readout for the distance to the wall it is facing, closer or further away.
Instead of admitting the existence of 3 spatial dimensions, if you were consistent, you would presumably claim that there is only one spatial dimension (equivalent to radial distance), with a scale factor accounting for variations.
We can only see spatial dimensions because things are not radially symmetric in every direction we look. If everything was the same in all directions we turned, we would appear to be in a world with one spatial dimension only. Radial distance would be the single effective dimension, since there would be no basis upon which to discriminate 3 spatial dimensions if space was identical all around us in every direction.
Having one separate time dimension for each separate spatial dimension does seem to be supported by several pieces of evidence. Lunsford shows that there are failures in the KaluzaKlein theory and others. The simplest which can unify gravity and electrodynamics has 3 spatial and 3 time dimensions. It also makes a correct prediction!
I really think that if you want to be hostile to innovation, you should either read some of the papers and try to find errors, or else maybe you should direct your hostility towards mainstream string theory which has no evidence but a lot more funding and prestige than alternative work.
For drawing, you might try learning GIMP, which is a free version of photoshop. I used it for my new cover art for the density matrix paperback version. While it’s intended for photos, it also has a remarkable set of drawing tools and I’ve used it as such. It makes beautiful illustrations.
copy of a comment in moderation queue at:
http://egregium.wordpress.com/2008/01/17/listofbooksonquantumgravityandotherhelpfultips
I’m still learning a lot in this area, but is it the case that all these approaches assume a massless spin-2 graviton? Fierz and Pauli in 1939 argued that […] the field quanta for gravity must have spin-2, but that is open to argument.
In the “Feynman Lectures on Gravitation” (Addison-Wesley, 1995), page 30, Feynman points out that gravitons don’t necessarily have to be spin-2, and spin-2 may be a failure. The argument for spin-2 gravitons is that a gauge boson (i.e. a particle with integer spin) is required to mediate force in a Yang-Mills theory, and spin-1 exchange between two similar charges (mass/energy) would cause repulsion (gravitational charges, unlike electric charges, only have one sign).
Spin-2 is the simplest boson which (in the mainstream treatment) provides an always-attractive force between two similar charges. This works because a spin-s field has 2s + 1 = 5 polarizations for s = 2, represented by a 5-component tensor in the field Lagrangian, and when you put that Lagrangian into the path integral the force between gravitational charges or energy is always positive (attractive).
What’s less attractive about spin-2 gravitons is that they are probably wrong, for the following reason. In electromagnetic theory, you can make a good case for ignoring all the distant charges in the universe when evaluating a Lagrangian for a force between two charges, because the rest of the universe is electrically neutral (atoms are neutral as a whole). But for quantum gravity, the rest of the “charges” (mass and energy) in the universe are definitely not neutral.
Gravitons will be exchanged between all mass/energy in the universe, not just between the particles in an apple and the particles in the Earth. Since the mass of the surrounding universe is 10^29 times bigger than the mass of the Earth, even though the surrounding masses (clusters of galaxies) are distant, the exchange of gravitons with them should be considered in the calculation.
If you do consider the exchange of gravitons with the surrounding masses in the universe which is isotropic to a large degree, you can come up with a spin1 graviton theory where masses get pressed together.
The observed Hubble recession velocity law, v = HR, gives receding mass m an effective outward force F = m*dv/dt = m*d(HR)/dt = m*(H*dR/dt + R*dH/dt) = mHv = mH(HR) = mRH^2. Newton’s 3rd law then suggests an effective inward reaction force, which, according to the possibilities available, would seem to be mediated by the spacetime fabric or gravitons. Because the universe is virtually isotropic (except for small anisotropic effects, like the apparent 400 km/s drift of the Milky Way toward Andromeda, which is the biggest anisotropy observable in the CBR: a 3 mK increase in apparent CBR temperature, i.e. a blueshift, in the direction of Andromeda), a local asymmetry would have a great effect.
A mass which is nearby in cosmological distance scales will not have enough outward recession to cause any appreciable inward reaction force, and would tend to instead act as a partial shield to gravitons from greater distances in that direction. Thus, a person gets pushed downwards because the reaction force of gravitons coming upward through the nonreceding Earth is less than that downward graviton pressure coming from the sky overhead.
One interesting feature of this model is that, besides quantitatively making predictions about the strength of gravity and the nature of quantum gravity, it also makes cosmological predictions. For distances between masses corresponding to only very small redshifts, the masses will not exchange reaction forces via gravitons because they don’t have an outward force relative to each other. So they get pushed together by gravitons exchanged with receding matter in the surrounding universe, causing gravitation. But if masses are at large (cosmological) distances corresponding to significant redshifts, they exchange gravitons forcefully with one another which causes the expansion of the universe.
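The two regimes described above can be summarised in a schematic sketch; the dividing redshift scale below is an arbitrary illustrative threshold of my own, not a value from the comment:

```python
# Toy schematic (my own, with an assumed threshold) of the two regimes in
# this model: at negligible mutual redshift, masses shield each other and get
# pushed together by gravitons from the surrounding universe (gravitation);
# at large mutual redshift, forceful graviton exchange between the receding
# masses pushes them apart (expansion).

def net_effect(z, z_threshold=0.1):
    """Qualitative effect for two masses at mutual redshift z.
    z_threshold is an assumed, illustrative dividing scale."""
    return "pushed together (gravity)" if z < z_threshold else "pushed apart (expansion)"

assert net_effect(0.001) == "pushed together (gravity)"
assert net_effect(1.0) == "pushed apart (expansion)"
```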
“For drawing, you might try learning GIMP, which is a free version of photoshop. I used it for my new cover art for the density matrix paperback version. While it’s intended for photos, it also has a remarkable set of drawing tools and I’ve used it as such. It makes beautiful illustrations.” – Carl
Thank you very much for the suggestion.
The other piece of drawing software I’d like is Corel Draw, but there is also a GNU free version of that as well and I’ll probably get it instead.
copy of an anonymous comment to
http://motls.blogspot.com/2008/02/aethercompactification.html
Lubos,
Has it occurred to you that this arXiv paper on “Aether Compactification” http://arxiv.org/PS_cache/arxiv/pdf/0802/0802.0521v1.pdf by stringers Sean M. Carroll and Heywood Tam, linking extra dimension compactification to Aether compactification, is harmful to string theory in the same sense that Nobel laureate Brian Josephson’s paper “String Theory, Universal Mind, and the Paranormal” http://arxiv.org/abs/physics/0312012 was linking string theory to ESP?
In other words, it serves to emphasise the fact that string theory appeals to people who like aether and ESP, and it is “not even wrong” in being nonfalsifiable, permanently “safe from refutation” groupthink, which causes the cohesion of mainstream string theorists into a kind of dogmatic religion:
’Groupthink is a type of thought exhibited by group members who try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. During Groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.’ – Wikipedia. ‘[Groupthink is a] mode of thinking that people engage in when they are deeply involved in a cohesive ingroup, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.’ – Irving Janis.
copy of another anonymous comment:
http://motls.blogspot.com/2008/02/aethercompactification.html
Shawn,
Lubos will have plenty of difficulty with this one, because string theory is not even a theory of gravitons:
“The sole argument generally given to justify this [Mtheory, i.e. mainstream string theory] picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – http://arxiv.org/abs/hepth/0206135
String theory in other words is at best just an empty box for a graviton theory, as emphasised by ‘t Hooft:
‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft [Quoted in PW’s book ‘Not Even Wrong’, 2006]
copy of comment:
http://riofriospacetime.blogspot.com/2008/02/explorer.html
Louise,
I’d like to revisit the following scientific points:
1. There’s a lot of mass (galaxies, etc.) almost isotropically distributed around us in the universe. (I don’t think anyone disputes that.)
2. If it wasn’t receding, i.e., if it was static and nothing was keeping it there, from our reference frame it might be expected to collapse (we’d see it falling towards us).
3. The collapse would turn gravitational potential energy into kinetic energy of material coming together.
4. The average distance matter would fall would be some fraction of the radius of the universe, say half the radius of the universe for a very rough first approximation.
5. The argument used to get the potential energy of the universe can be compared to a collapsing star. If you had a star of uniform density and radius R, and it collapsed, the energy release from gravitational potential energy being turned into explosive (kinetic and radiation) energy is E = (3/5)(M^2)G/R. The 3/5 factor from the integration which produces this result is not applicable to the universe where the density rises with apparent distance because of spacetime (you are looking to earlier, more compressed and dense, epochs of the big bang when you look to larger distances). It’s more sensible to just remember that the gravitational potential energy of mass m located at distance R from mass M is simply E = mMG/R. In a supernova explosion, the gravitational collapse of distributed matter releases energy.
The gravitational potential energy released in the collapse would be on the approximate order of magnitude
E = (M^2)G/r = (M^2)G/(ct)
where t is the age of the universe and r is the average distance the matter falls before it hits other matter.
6. The gravitational field energy needed to keep this from occurring is therefore a similar amount of expansion kinetic energy:
E = Mc^2.
The relativistic equation for total energy is:
E = Mc^2
where M is the actual mass (which is a function of velocity), and
E = M_0 c^2 (1 – v^2 /c^2)^(-1/2)
where M_0 is the rest mass (not the actual mass), because
M = M_0 (1 – v^2 /c^2)^(-1/2).
Since by the equivalence principle inertial mass (which increases with velocity by the formula just given) is equivalent to gravitational mass, it is the true mass M, not the rest mass M_0, which we need to consider.
Hence
E = Mc^2
is the formula to use.
Since the energy of the big bang E = Mc^2 caused the expansion (against inertia and gravitation) in the first place, it must be at least equal to the approximate gravitational potential energy E = (M^2)G/(ct), or the universe wouldn’t have been able to expand in the first place because it would have become a black hole (Professor Susskind, in an interview about his book “The Cosmic Landscape”, argued that the universe simply had enough outward explosive or expansive force to counter the gravitational pull which would otherwise have turned it into a black hole). So:
Mc^2 {is greater than or equal to} M^2G/(ct)
Hence at least as an approximation:
tc^3 = MG.
What part of the above “derivation” is wrong?
It’s pretty straightforward.
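As an order-of-magnitude sanity check of the final relation tc^3 = MG (the Hubble constant below is my assumed input, not a value from the derivation):

```python
# Order-of-magnitude check (my own numbers) of the relation tc^3 = MG
# obtained from Mc^2 >= (M^2)G/(ct).

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
H = 2.27e-18    # Hubble constant, 1/s (assumed, ~70 km/s/Mpc)
t = 1.0 / H     # age of the universe under t = 1/H

M = t * c**3 / G  # mass implied by tc^3 = MG

print(M)  # of order 1e53 kg, comparable to common estimates of the
          # mass of the observable universe
assert 1e52 < M < 1e54
```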
Let’s take a serious look at a comment which abusively attacked my competence on 29 March 2007, posted by an anonymous commentator using the name “Guess Who” on Tommaso Dorigo’s blog. (The same comment was copied that day to a posting on Mahndisa’s blog, where, after ignoring the content of a comment of mine, she wrote: “We have gone over this already. You are not applying the equations correctly. The conclusions are incorrect due to a misapplication of physical law. See my comments above. I am turning comments off for this.” She then quoted the “Guess Who” comment I’m about to discuss as alleged evidence of my incompetence: “You are not defending science; you are defending misguided and incompetently performed calculations, which I cannot abide.”)
The “Guess Who” comment made a long series of ignorant claims, so let’s go through them all:
“You are using the expression for rest mass. That means literally mass at rest in some reference frame. But you know that the early universe was radiative: all particles were moving randomly and very close to the speed of light, so almost all their energy was in the momentum (p) part of the full expression E = sqrt((p*c)^2 + (m*c^2)^2). Because of the randomness, there was no reference frame in which p = 0.”
This is wrong, because the relativistic equation for energy only includes momentum where it is written in terms of rest mass. If you are dealing with the actual mass, which increases with velocity (and which is what we’re concerned with), then E = Mc^2. So this person doesn’t know that the relativistic correction applies to rest masses (which are imaginary in practice, because masses are in motion and the energy of their motion adds to their mass!).
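This point is easy to verify: with p = gamma*M_0*v, the invariant form E = sqrt((pc)^2 + (M_0 c^2)^2) reduces identically to E = Mc^2 with M = gamma*M_0, so using Mc^2 with the velocity-dependent mass is not an error. A quick numerical check (arbitrary units, my own illustration):

```python
# Consistency check of the two expressions being argued about: the full
# invariant expression E = sqrt((pc)^2 + (M0 c^2)^2), with relativistic
# momentum p = gamma*M0*v, equals E = Mc^2 where M = gamma*M0 is the
# velocity-dependent ("actual") mass.

import math

c = 1.0          # work in units where c = 1
M0 = 1.0         # rest mass (arbitrary units)
v = 0.6 * c      # some relativistic speed

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
M = gamma * M0           # relativistic (actual) mass
p = gamma * M0 * v       # relativistic momentum

E_full = math.sqrt((p * c)**2 + (M0 * c**2)**2)
E_mass = M * c**2

assert abs(E_full - E_mass) < 1e-12  # the two forms agree identically
```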
“> the gravitational potential energy E = MMG/R = (M^2)G/(ct).
Most quantities which you put in this expression are ill or undefined.”
I don’t think that energy, mass, G, radius, c and t are ill-defined. The person just needs to bother to read the definitions.
“In general relativity, which you must (and claim to) use in this context, energy does not stand alone as a separately conserved quantity: it’s just one component of the 4×4 energy-momentum tensor.”
I know the 16-component energy-momentum tensor: it has components for energy density, energy flux, momentum density, momentum flux, pressure and viscosity. All of these quantities can be translated into energy density. You can measure energy many ways, and all energy contributes to gravitation. For our purposes, the rest-mass energy of the universe includes all of these contributions.
“Your M is supposed to be the rest mass of the universe, which is neither at rest nor, to the best of our knowledge, finite. So you have an infinity squared there.”
No: M is the mass of the universe, which depends on velocity; M_0 is the rest mass. (In any case, unless v is a significant fraction of c, the difference between M and M_0 is not major. You don’t know what you’re talking about. You’re just playing with things you don’t understand: physics concepts can always be extended to include more and more correction factors for known, factual physical processes. That’s not the issue. The issue is whether the basic concepts are right. They are in this case.)
“Your R = c*t looks like it could be the Hubble radius (up to some factor of order unity, e.g. 2 in a radiative universe) but you say that it’s “defined as the effective distance the majority of the mass would be moving if the universe
collapsed”. Excuse me, but that would make R = 0.”
No, if the majority of the universe had zero distance to move if the universe collapsed, it would be here already. It isn’t. It’s at a great distance, and would have a fraction of the radius of the universe to fall to reach here.
“So, you are equating an incorrect expression for an ill-defined E with an undefined quantity containing a square of an ill-defined, presumably infinite M divided by 0.”
Wrong: E, M and the distance have all been precisely defined. In the case of the fall distance, the details are complex due to quantum gravity effects and the variation of density with distance/time past (see my calculations on my blog), and we’re using an approximation here just to get the basic principle across. The mass of the surrounding universe is located between radii of 0 and R, where R = ct. This is because, as discovered by Perlmutter in 1998, gravity isn’t causing the universe to decelerate, so the old Friedmann solution to general relativity for the scale factor or effective radius of the universe, R = (2/3)ct, is false; the 2/3 factor came from gravitation, and this solution from the Robertson-Walker-Friedmann metric is wrong empirically. The Hubble expansion rate shows that gravitational deceleration is not occurring at that rate. Quantum gravity suggests that the reason is that gravitons exchanged between receding masses are redshifted to lower energy, reducing the effective gravitational coupling constant over cosmologically large distances. The mainstream solution is instead to ignore quantum gravity and assume that general relativity needs to be supplemented by dark energy, providing a repulsion that over cosmologically large distances offsets the attractive force of gravity.
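As a rough numerical sanity check on tc^3 = MG: rearranging for M = tc^3/G and putting in an assumed age of the universe of 13.8 billion years (an assumed round figure, not part of the derivation) gives a mass of the right order of magnitude. A sketch in Python:

```python
# Sketch: evaluating M = t*c^3/G, the mass implied by the tc^3 = MG relation
# discussed above. The Hubble time t used here is an assumed round value
# (~13.8 billion years); the result is only an order-of-magnitude check.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
t = 13.8e9 * 3.156e7  # assumed age of universe in seconds

M = t * c**3 / G
print(f"M = {M:.2e} kg")  # of order 1e53 kg
```

The result, of order 10^53 kg, is comparable to standard order-of-magnitude estimates of the mass of the observable universe, which is the consistency the conceptual derivation relies on.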
“General relativity fully contains special relativity. To do serious cosmology, you need to solve the equations of general relativity. You are not doing that.”
No, special relativity is incompatible with general relativity, because general relativity is a generally covariant theory and special relativity isn’t. If you don’t believe me, try reading some papers on general relativity by someone called Albert Einstein:
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).
‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.
‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’ – Einstein’s Legacy – Where are the “Einsteinians?”, by Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm
“Again ill-defined. But let’s say you take the standard FRW solutions of Einstein’s equations, put yourself in the comoving frame and compute the gravitational potential of a test particle according to your prescription. Since the FRW solutions are isotropic, your result will = 0. So now you’re saying that taking the square of an ill-defined, presumably infinite M and dividing by 0 yields 0.”
This is just trash. The FRW solutions to general relativity are wrong by observation, and adding what is in effect an epicycle, unobserved dark energy, ignores the graviton redshift energy degradation of quantum gravity.
Since gravitons will be redshifted towards zero energy as redshift goes towards infinity (recession velocities approaching c), there can be no “curvature” on the largest distance scales in the universe. Hence, quite apart from Perlmutter’s actual observations of no gravitational deceleration, made using automated CCD telescope observations of supernovae, quantum gravity itself tells you that general relativity is wrong when applied to cosmology, i.e. to massive distances in an expanding universe.
A lot of personally directed, ignorant, abusive garbage follows in the comment, which I need not quote here because it is just a list of quotes of me followed by sneers which ignore my work.
I’ll copy this to my blog, because I think it is a useful defense of the conceptual derivation of tc^3 = MG, and Mahndisa has closed the comments section on the relevant post on her blog, preventing any response being made there. I don’t agree with everything you suggest (I’ve investigated changes in G rather than c), but I agree with the general concept tc^3 = MG.
What’s interesting is that there are lots of other physicists around who could be investigating this critically and checking it.
Instead, they don’t tend to make scientifically useful comments.
I think that there should be a lot more support for people like Dr Peter Woit, who stands up against the hypocrisy of physically vacuous mainstream M-theory, which predicts nothing and is “not even wrong”. Dr Lee Smolin too, although in his case he’s been accused (unfairly) of criticising the mainstream in order to get attention for his own ideas, which are maybe not immensely better than string theory (although Smolin’s “doubly special relativity” does make some predictions that may be tested).
One interesting thing I’ve mentioned is the fact that general relativity is a failure at describing cosmology scientifically. Observations of redshifts indicate no curvature on cosmologically large scales (e.g., the scale of the universe or its effective radius). Nor will curvature occur on such scales theoretically, if quantum gravity involves the exchange of gravitons between receding masses, because those gravitons will be received with lower energy due to the recession of gravitational charges in the universe.
I think from memory that the two critics of all this stuff have been Professor Distler, commenting under his actual name on Professor Johnson’s Asymptotia blog, and “Guess Who”, both working on mainstream string theory. Neither has come up with any physical discussion at all, if we discount the false claims by “Guess Who” and the claim by Distler that tensors are needed. Anyone can take tc^3 = MG and put it into a tensor field equation, simply by writing it as a definition of G = (tc^3)/M and putting that into Einstein’s field equation. However, the equation tc^3 = MG relates to cosmology, where Einstein’s field equation fails to make falsifiable predictions (the small ad hoc cosmological constant wasn’t predicted). So Distler is missing the point.
What we have is a problem similar to a political innovation situation well explained in Niccolò Machiavelli’s classic work, published in 1531 A.D.:
http://www.constitution.org/mac/prince06.htm
“Because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.”
Tony Smith quotes the problems which Feynman had at the Pocono Conference in 1948, where leading physicists Teller, Pauli and Bohr all dismissed Feynman’s work. See http://www.valdostamuseum.org/hamsmith/goodnewsbadnews.html
“… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right.
… For instance,
take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …
… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …
… Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …
… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.
I gave up, I simply gave up …”.
– “The Beat of a Different Drum: The Life and Science of Richard Feynman”, by Jagdish Mehra (Oxford 1994), pp. 245-248.
Feynman’s idea was explained to Oppenheimer by Dyson; Oppenheimer had no time for new ideas from youngsters and was abusive towards Dyson until Bethe intervened on Dyson’s behalf, as Dyson explains in an interview.
Tony Smith also mentions on his page http://www.valdostamuseum.org/hamsmith/ecgstcklbrg.html the work of Ernst Stückelberg who came up with Feynman’s key ideas about 5 years earlier, but had them rejected by the Physical Review in 1943.
Another example is George Zweig, whose quark model called Aces was rejected by Physical Review Letters.
It’s unsurprising that after his experience of 1948, with ignorant attacks from a consensus of the top physicists who were all certain Feynman was wrong, Feynman went on to write things like:
‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The Trouble with Physics, 2006, p. 307)
and
‘Science is the belief in the ignorance of [committees of speculative] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.
The real challenge is overcoming groupthink:
’Groupthink is a type of thought exhibited by group members who try to minimize conflict and reach consensus without critically testing, analyzing, and evaluating ideas. During Groupthink, members of the group avoid promoting viewpoints outside the comfort zone of consensus thinking. A variety of motives for this may exist such as a desire to avoid being seen as foolish, or a desire to avoid embarrassing or angering other members of the group. Groupthink may cause groups to make hasty, irrational decisions, where individual doubts are set aside, for fear of upsetting the group’s balance.’ – Wikipedia.
‘[Groupthink is a] mode of thinking that people engage in when they are deeply involved in a cohesive ingroup, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.’ – Irving Janis.
This is virtually impossible to do. The tactic of mainstream (groupthink) people who have no evidence is simply to throw up garbage and personal abuse from under cover of anonymity, which others are misled into believing to be correct. It is appalling.
There is very little that can be done against it. If you attack it as being ignorant, most bystanders will think you are the villain because they don’t know anything about physics. They can’t tell what’s right by looking at the facts, so they side with the majority instead.
I’m writing a book as time permits, but don’t hold your breath. It’s a hell of an undertaking, as there are thousands of details I must get right, and even if I do succeed scientifically, nobody will read it anyway. A book long enough to contain sufficient detail to convince people will, almost by definition, be too long for anyone to bother reading.
I’m sticking to this project because there is factual evidence that nobody listens to, and I don’t like dictatorship. Dictatorship was supposed to have been ended by freedom-winning wars. Instead, there is dictatorship everywhere, even in science. You expect problems with democracy, but it’s just too much that in science – a subject where I was taught that facts are the things which count, not prejudices – fundamental physics is run by physically ignorant dictators who can’t tell facts of nature apart from orthodox wishful thinking.
copy of a comment in the moderation queue at:
http://michaeldcassidy.wordpress.com/2008/02/08/agreatpieceonscienceandreligion/
“The fundamentals are used to build the models by carefully following the rules. The discipline of explicit (spelled-out) or implicit (not stated, but implied) commitment to rules and definitions (that are themselves either spelled-out, or implied) is essential, otherwise the Tower-of-Babel effect would prevail; no one would understand anyone else; communication would fail.”
This is a key point, but needs a lot more analysis because of arguments over what the fundamentals really are. In order to make incremental progress, it’s true that you build on existing assumptions. However, radical progress usually involves (by definition) changing fundamental concepts in such a way that the way facts of nature are interpreted changes. For example, general relativity describes accelerations as results of a curvature in spacetime, and treats all accelerations classically as truly differential increases in velocity, not the sum of a lot of individual graviton interactions from a quantized gravitational field. General relativity has been tested in various ways, and confirmed very accurately on certain scales (the small positive cosmological constant needed on the largest scales was however an ad hoc modification, not a prediction, and general relativity has not been tested on quantum scales).
So should the acceptance of general relativity be a universally agreed upon axiom for all future progress, or not? Regarding the tower of babel, this kind of foundational problem is one of the key issues for modern physics.
As noted above, Tony Smith quotes the problems which Feynman had at the 1948 Pocono Conference (http://www.valdostamuseum.org/hamsmith/goodnewsbadnews.html), where Teller, Dirac and Bohr all dismissed his work, and also mentions on his page http://www.valdostamuseum.org/hamsmith/ecgstcklbrg.html the work of Ernst Stückelberg, who came up with Feynman’s key ideas about 5 years earlier but had them rejected by the Physical Review in 1943.
Another example is George Zweig, whose quark model called Aces was rejected by Physical Review Letters. He stated:
‘Getting the CERN report [on the discovery of quarks] published in the form that I wanted was so difficult that I finally gave up trying. When the physics department of a leading university was considering an appointment for me, their senior theorist, one of the most respected spokesmen for all of theoretical physics, blocked the appointment at a faculty meeting by passionately arguing that the ace [quark] model was the work of a “charlatan.” … Murray Gell-Mann [co-discoverer with Zweig of quarks/aces] once told me that he sent his first quark paper to Physics Letters for publication because he was certain that Physical Review Letters would not publish it.’
– George Zweig, co-discoverer (with Murray Gell-Mann) of quarks, quoted on page 95 of John Gribbin’s In Search of Superstrings: Supersymmetry, Membranes and the Theory of Everything, Icon Books, Cambridge, England, 2007.
It’s unsurprising, after Feynman’s experience at the Pocono Conference of 1948, with ignorant attacks from a consensus of top physicists who were all certain he was wrong, that Feynman later wrote the remarks on skepticism and expert opinion quoted earlier. One problem relevant to the tower of Babel (people using different assumptions) is that it is vital for people to explore different possibilities and different types of mathematics in order to overcome groupthink, as defined in the quotations from Wikipedia and Irving Janis above.
Sharing the same beliefs in the validity of certain mathematical systems for dealing with quantum gravity, and sharing the same interpretative assumptions like dark energy, is a step towards groupthink. Moving in the other direction, of course, the Tower of Babel problem arises.
“But when estimating realworld risks and rewards, unchallengeable religious or ideological beliefs are very poor substitutes for the weighted skepticism of science.”
I agree that skepticism is vital. But it is all too easy to corrupt scientific skepticism. Take mainstream string theory. Peter Woit wrote in 2002:
“For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. […] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.” – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.
‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft [Quoted in PW’s book ‘Not Even Wrong’, 2006]
In his book ‘Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law’, Woit explains that there are many “known unknowns” (to use Donald Rumsfeld’s popular phrase) in modern physics that are real problems which need to be addressed, e.g. a theory to explain the values of the mass parameters needed in the Standard Model. None of these problems is actually addressed by string theory, which instead builds upon speculative unknowns or “unknown unknowns”, like Planck-scale unification guesswork.
So string “theory” is just like Wolfgang Pauli’s “empty box” (which is printed on the right hand side of the page here: http://www.americanscientist.org/template/AssetDetail/assetid/18638/page/2#19239 ).
The very fact that string “theory” is being hyped and needs to be countered by Woit proves that we live in an extremely pseudoscientific age with regard to mainstream ideas. Woit points out that there is no problem with scientists pursuing whatever they want, including extra-dimensional theories like “string” which as yet predict nothing checkable and have not been shown even to reproduce the Standard Model.
What’s wrong is for people to falsely hype such things with misleading claims. Penrose in his “Road to Reality” (2004) criticised Edward Witten’s hyped claim that:
‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.
Witten in 2006 wrote:
‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.
He suggested that string “theorists” should not respond directly to critics, for fear of adding fuel to controversies by sounding elitist. What they do instead of responding to criticisms is spew out more hype, claiming to “explain” to the ignorant that their non-checkable spin is science. The underlying message is that string “theorists” have no way to defend uncheckable, abject speculation other than being elitist and patronising, i.e. saying they are right because they are the mainstream elite, and any critics are simply ignorant, confused, or haters of science. Most people accept what they are told by a committee of experts, like a group of top string “theorists”.
“The value of reducing uncertainties ties in closely with beliefs about survival value and ethical values. The complex way that survival and ethics fit into the spectrum of belief is still another story, neglected here for ‘brevity’.”
That’s a pity, because this is key to understanding why a group of alleged scientists are using physics as a substitute for religion.
Copy of a comment, with some typos corrected in square brackets:
http://carlbrannen.wordpress.com/2008/02/08/lovenegativeenergy/
That multiple choice question is a fascinating way of looking at nuclear binding energy, and I like your answer. I hadn’t thought about this before, despite having been interested in nuclear physics since 14[.] [I]t clarifies my understanding.
When nucleons emit energy, they fall to a lower energy level, so they get closer together. At closer distances, the strong force which binds nucleons into the nucleus is stronger, so the energy binding the nucleons gets bigger. The binding energy is the energy you need to supply to release the particles, not what you can release.
It’s like the emission of quanta from electrons. Once the energy is gone, the electron has fallen to its ground state, and then it can’t emit more energy. You need to supply energy from outside the system to make the electron gain energy and escape. Stability increases after energy is lost, because that stops further energy from being lost.
Binding energy is the potential energy of the field attracting, say, an electron to a nucleus. If the electron loses energy by emitting a photon, it falls closer to the nucleus, where the magnitude of the electrostatic potential energy is actually higher than it was before, because the potential energy is inversely proportional to the distance of the electron from the nucleus (hence, the smaller the distance of the electron from the nucleus, the bigger the electrostatic binding energy): http://hyperphysics.phy-astr.gsu.edu/hbase/electric/elepe.html
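To put a number on that inverse-distance dependence, here is a sketch using the hydrogen ground state purely as an illustration (the constants are standard rounded values):

```python
# Sketch: electrostatic potential energy of the hydrogen electron at the
# Bohr radius, illustrating the 1/r dependence described above. Halving
# the distance doubles the magnitude of the (negative) potential energy.
k  = 8.988e9    # Coulomb constant, N m^2 C^-2
e  = 1.602e-19  # elementary charge, C
a0 = 5.292e-11  # Bohr radius, m

U = -k * e**2 / a0           # potential energy in joules (negative: bound)
U_eV = U / e                 # convert to electronvolts
print(f"U = {U_eV:.1f} eV")  # about -27.2 eV; the binding energy is 13.6 eV
                             # because the electron also has kinetic energy
```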
The strong nuclear attractive Yukawa force between nucleons, mediated by pions, is pretty similar in its general form to the Coulomb force, except for an exponential attenuation which occurs in addition to the inverse of distance (for potential energy) or the inverse-square law (for force or acceleration), making the force short-ranged.
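The exponential attenuation can be made concrete. As a sketch, taking the Yukawa range as a = ħ/(m_π c) from the charged pion mass (standard constants, rounded):

```python
import math

# Sketch: the Yukawa potential goes as exp(-r/a)/r versus the Coulomb 1/r.
# The range a = hbar/(m_pi * c) follows from the pion mass.
hbar_c  = 197.33   # MeV·fm
m_pi_c2 = 139.57   # charged pion rest energy, MeV

a = hbar_c / m_pi_c2  # Yukawa range in fm, about 1.4 fm
print(f"range a = {a:.2f} fm")

for r in (1.0, 2.0, 5.0, 10.0):            # distances in fm
    suppression = math.exp(-r / a)         # extra factor relative to Coulomb
    print(f"r = {r:4.1f} fm: exp(-r/a) = {suppression:.2e}")
```

Beyond about 10 fm the extra exponential factor is already below one part in a thousand, which is why the force is effectively confined to the nucleus.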
So when a nucleon which is in a high energy state falls back to a lower energy state by emitting a gamma ray, the same general process happens as occurs in the emission of a photon by an electron. The nucleon loses its excitation energy, falls towards the ground state, and thus gets slightly closer to the middle of the nucleus, gaining some potential energy or “binding energy”.
It’s counterintuitive at first that binding energy increases when a gamma ray is released. However, it’s just one of those things.
If I drop an apple, the apple releases sound waves when it hits the floor, and at the same time it gains gravitational [binding, not potential] energy because it moves slightly closer to the Earth’s core, where the apple has more binding energy (the closer a mass is to the Earth’s centre, the more energy you need to carry it away from the Earth). Hence, potential energy is not the same as binding energy. Potential energy is at a maximum when two attracting particles are far apart, when there is a maximum amount of kinetic energy to be gained by releasing them. Binding energy is at a maximum when two particles are as close together as possible (i.e. in the ground state), because the Coulomb or Yukawa force which does the binding gets bigger the closer the particles are.
So really, the overall energy available in the nucleus decreases when a gamma ray is emitted. The increase in binding energy is not available or releasable energy, but just what you need to supply to break up the nucleus.
Nickel and iron nuclei have the highest binding energy, which means you can’t get any energy out of them: http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/nucbin.html
The graph of binding energy on that page shows that iron has about 8.7 MeV/nucleon of binding energy. Uranium235 only has about 7.5 MeV/nucleon.
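Those figures can be reproduced from tabulated atomic masses via the mass defect. A sketch (the masses quoted are standard tabulated values, rounded to six decimals):

```python
# Sketch: binding energy per nucleon from the mass-defect method.
m_H   = 1.007825   # hydrogen-1 atomic mass, u
m_n   = 1.008665   # neutron mass, u
u_MeV = 931.494    # energy equivalent of 1 u, MeV

def be_per_nucleon(Z, N, mass_u):
    """Binding energy per nucleon in MeV from the mass defect."""
    defect = Z * m_H + N * m_n - mass_u   # mass lost to binding, in u
    return defect * u_MeV / (Z + N)

print(f"Fe-56: {be_per_nucleon(26, 30, 55.934937):.2f} MeV/nucleon")   # ~8.8
print(f"U-235: {be_per_nucleon(92, 143, 235.043930):.2f} MeV/nucleon") # ~7.6
```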
So the more nuclear binding energy a nucleus has, the more stable it is. The less nuclear binding energy, the more unstable it is in general, and the more likely the nucleus is to undergo nuclear fission or nuclear fusion. In other words, the potential for getting energy out of a nucleus is not proportional to its binding energy.
Having the maximum binding energy actually prevents either fusion or fission from occurring, in principle. In the process of both the fusion of light elements and the fission of heavy ones, an increase in binding energy occurs in addition to the release of nuclear energy.
I think this is very interesting because of the analogy between electron structure and photon emission on the one hand, and nuclear shell structure and gamma ray emission on the other. It’s well known that the line spectra of gamma rays emitted by nuclei are analogous in some ways to the line spectra of photons emitted by electrons, unlike beta particles, which have a continuous spectrum up to a limit, or alpha particles, which tunnel out of the nucleus. Moreover, just as stable chemical elements occur with “magic numbers” of electrons which correspond to filled, closed outer shells (e.g., helium), the same kind of thing occurs with the number of nucleons in the nucleus. From the stability of nuclei and from the details of the gamma ray line spectra, good nuclear shell models have been worked out.
Physics becomes interesting when questions of that sort are asked, because the person then wants an answer and is curious to find out something.
In the nuclear shell structure model, the most stable nuclei are those with 2, 8, 20, 50 or 82 protons or 2, 8, 20, 50, 82 or 126 neutrons, or both.
This is analogous to the numbers of electrons in closed shells around the atom, 2, 8, 18, 32 and 50 electrons. So there is some evidence for a shell structure in nuclei.
In the case of electrons, the numbers 2, 8, 18, 32 and 50 come from the different combinations of four quantum numbers: n, l, m and s. The Pauli exclusion principle says that each electron in an individual atom has a unique set of the four quantum numbers. The number s can only have two values (it represents spin). Spinning charges like electrons have a magnetic moment, and if you drop magnets into a small box, they tend to be most stable when they pair up with the North pole of one pointing in the direction that the South pole of another is pointing. The fact that electrons have spin and are thus magnetic dipoles hence seems to be the reason why electron spin is quantized into two values, i.e. the Zeeman effect.
n (shell number): 1, 2, 3, 4, …
s (spin number): +1/2 or -1/2
l (ellipticity number): n-1, n-2, n-3, …, 0
m (magnetic number): l, l-1, l-2, …, 0, …, -(l-1), -l
Applying the Pauli exclusion principle to these numbers, you find that for n=1 (hence l = 0 and m = 0) only 2 unique electron number sets exist, so the first shell can only accommodate 2 electrons. For n = 2, there are 8 combinations of quantum numbers, so 8 electrons fill the second shell, and so on for other values of n. These numbers of filled electron shells correspond with the number of elements in successive periods of the periodic table, explaining the basics of chemistry.
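The counting described above is easy to verify by brute-force enumeration of the allowed (l, m, s) combinations for each n; a sketch in Python:

```python
# Sketch: counting the electron states allowed by the four quantum numbers.
# For each shell n, l runs from 0 to n-1, m runs from -l to +l, and s takes
# two values, so the capacity works out to 2n^2.
def shell_capacity(n):
    count = 0
    for l in range(n):              # l = 0, 1, ..., n-1
        for m in range(-l, l + 1):  # m = -l, ..., +l
            count += 2              # two spin states, s = +1/2 and -1/2
    return count

print([shell_capacity(n) for n in range(1, 6)])  # [2, 8, 18, 32, 50]
```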
So presumably the nuclear shell structure comes about because the nucleons have a set of quantum numbers which give rise by the exclusion principle to the “magic numbers” of neutrons and of protons in stable (nonradioactive) nuclei.
Because nucleons thus seem to have quantum numbers, it would seem possible that the virtual particles in the vacuum around a fundamental particle which provide mass (some kind of Higgs field effect) may undergo a similar process. This may link up to the problem of fundamental particle masses. Excluding the electron, virtually all other particle masses are closely quantized to near integer multiples of
(electron mass, 0.511 MeV) × n(N+1)/(2α) ≈ 35n(N+1) MeV
where n = the number of apparent fundamental particles per observable particle (n = 1 for leptons, n = 2 for mesons, i.e. quark doublets, and n = 3 for baryons, i.e. quark triplets), and N is an integer which seems to be related to how many massive bosons in the vacuum (Higgs-like quanta) become associated with the particle. N appears to take “magic numbers” of 2, 8 and 50, if the formula above is correct. I’ve still a lot more work to do on this, mainly because I’ve found that my composite write-up contains different ideas I’ve had on the subject in spare moments over a period of years, which don’t yet all fit together seamlessly. I will take account of your work on particle masses when I straighten out the details.
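For what it’s worth, the formula is trivial to evaluate. In this sketch, the choices of n and N (a muon-like lepton with N = 2, a nucleon-like baryon with N = 8) are illustrative guesses based on the “magic numbers” mentioned above, not derivations, and the whole formula is of course speculative:

```python
# Sketch of the speculative mass formula above: m ≈ m_e * n(N+1)/(2*alpha).
# The (n, N) assignments below are illustrative assumptions only.
m_e   = 0.511        # electron rest mass-energy, MeV
alpha = 1 / 137.036  # fine structure constant

base = m_e / (2 * alpha)  # about 35 MeV
print(f"base = {base:.1f} MeV")

# muon-like lepton, n = 1, N = 2: 3 * 35 MeV (muon: 105.7 MeV)
print(f"n=1, N=2: {base * 1 * 3:.0f} MeV")
# nucleon-like baryon, n = 3, N = 8: 27 * 35 MeV (nucleon: ~939 MeV)
print(f"n=3, N=8: {base * 3 * 9:.0f} MeV")
```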
copy of another comment:
http://carlbrannen.wordpress.com/2008/02/08/lovenegativeenergy
Sorry again, the first sentence of the second para to the first comment should read:
“When nucleons emit energy, they fall to a lower energy level, so they get closer together.”
It’s 1.25 am and my brain isn’t functioning.
By the way, I’ve just about given up on my idea that the correct symmetry of the universe is SU(2) x SU(3), because it seems too difficult to make SU(2) account for weak hypercharge, weak isospin charge, electric charge and gravity.
I thought it would work out by changing the Higgs field so that some massless versions of the 3 weak gauge bosons exist at low energy and cause electromagnetism, weak hypercharge and gravity.
However, since the physical model I’m working on uses the two electrically charged but massless SU(2) gauge bosons for electromagnetism, that leaves only the electrically neutral massless SU(2) gauge boson to perform both the role of weak hypercharge and gravity. That doesn’t work out, because the gravitational charges (masses) are evidently going to be different to the weak hypercharge which is only a factor of two different between an electron and a neutrino. Clearly, an electron is immensely more massive than a neutrino. So the SU(2) x SU(3) model must be wrong.
The only possibility left seems to be similar to the Standard Model U(1) x SU(2) x SU(3), but with differences from the Standard Model. U(1) would model gravitational charge (mass) and spin1 (push) gravitons. The massless neutral SU(2) gauge boson in the model I’m working on would then mediate weak hypercharge only, instead of mediating gravitation as well.
copy of an interesting comment on Not Even Wrong, responding to hypedefence by Valerie of New Scientist:
http://www.math.columbia.edu/~woit/wordpress/?p=651#comment34820
anon. Says:
February 11th, 2008 at 9:01 am
‘At this time of funding cuts in the UK and US and worries over student numbers, surely it’s heartening to find so many people getting excited about the big questions that physics addresses.’
Valerie,
New Scientist, as I’m sure you know, has been promoting speculative, non-checkable ideas since string theory came to fame over two decades ago. The fall in student numbers (see http://www.buckingham.ac.uk/news/newsarchive2006/ceerphysics2.html ) doesn’t correlate with Woit’s blog or even with the popularity of the internet, but it does correlate with the rise of speculative stuff on your front covers:
‘Since 1982 A-level physics entries have halved. Only just over 3.8 per cent of 16-year-olds took A-level physics in 2004 compared with about 6 per cent in 1990.
‘More than a quarter (from 57 to 42) of universities with significant numbers of physics undergraduates have stopped teaching the subject since 1994, while the number of home students on first-degree physics courses has decreased by more than 28 per cent. Even in the 26 elite universities with the highest ratings for research the trend in student numbers has been downwards.
‘Fewer graduates in physics than in the other sciences are training to be teachers, and a fifth of those are training to be maths teachers. A-level entries have fallen most sharply in FE colleges, where 40 per cent of the feeder schools lack anyone who has studied physics to any level at university.’
One thing that is clear is that hype of speculative uncheckable string theory has at least failed to encourage a rise in student numbers over the last two decades, assuming that such speculation itself is not actually to blame for the decline in student interest.
However, it’s clear that when hype fails to increase student interest, everyone will agree to the consensus that the problem is a lack of hype, and that if only more hype of speculation were done, the problem would be addressed. Nobody will believe that a reduction in speculative hype could possibly address the problem, or that changing the focus of the front cover of New Scientist to more solid areas of physics would help. Electronics and computing innovation of the real-world variety (not quantum computing hype from qubit/Deutsch), for example, has been censored from New Scientist as too boring. I’m not including my name here as this isn’t a personal matter.
Maybe the vast number of excited readers of New Scientist physics sci-fi hype who don’t take up A-level physics as a result take up writing science fiction, or take up religious orders, instead?
more bits from discussion:
http://www.math.columbia.edu/~woit/wordpress/?p=651
Chris Oakley Says:
February 11th, 2008 at 9:39 am
Anon.,
Forgive me for pointing out the obvious, but one of the reasons that fewer students are taking A-level physics is that there are more options available for the technically-minded student, mostly related to electronics and computing, which have come on in leaps and bounds, both theoretically and practically, since then.
anon. Says:
February 11th, 2008 at 10:09 am
Chris,
Thanks, but those technically-minded students of electronics and computing could also do an A-level in physics (which is an allied subject), instead of avoiding it like the plague, which is what currently occurs.
JC Says:
February 11th, 2008 at 11:10 am
anon, Chris
A better question to ask from an historical perspective is, did science hype/pornography increase the number of engineering, physics, and math majors back in the 1960’s? Or did the increase in engineering, science, and math majors have more to do with Sputnik era increases in science funding? Or was it a more mundane reason like the sheer large numbers of baby boomers attending university in the 1960’s?
anon. Says:
February 11th, 2008 at 11:47 am
JC: this is about a fall in the percentage of students doing physics, not a fall in birth rate. Disillusionment with physics is the problem, otherwise physics would be widely taken in addition to electronics, chemistry, computing, or maths.
‘Or did the increase in engineering, science, and math majors have more to do with Sputnik era increases in science funding?’
Here in the UK, the funding of physics isn’t the key problem, which is student numbers. Funding has to follow students. You can’t really save a department with no students by increasing funding. It’s really not money-related. When physics ‘hype’ stopped being tied to facts and went sci-fi, physics became not just nerdy but really weird and cult-like, which didn’t appeal to the technically-minded.
copy of a comment to Dr Tommaso Dorigo’s blog:
http://dorigo.wordpress.com/2008/02/12/thesecondlectureinbassano/
This is a very optimistic post. What students need to be aware of, however, is that they might not be able to actually do a research degree, say a PhD, in an area of fundamental physics that interests them, say a neglected backwater where one has a chance of striking gold (so to speak).
Friends of mine in the UK who have done PhDs in experimental physics were controlled by the department head and thesis advisors, who ensured that the work was on the frontiers of mainstream research. They were not free to investigate what they found interesting, or areas which were totally unknown. They had to build on someone else’s work in an existing frontier. The reasons were chiefly industrial sponsorship of research.
I’d like to investigate an alternative to the U(1) x SU(2) electroweak theory in the Standard Model. The Yang-Mills SU(2) symmetry involves two types of charge and results in two charged massless gauge bosons and one neutral gauge boson (the unobserved Higgs boson is supposed to make these three gauge bosons massive and hence short-ranged at low energy). The Abelian U(1) symmetry involves one charge and one massless gauge boson, which is currently used to model electromagnetic charge and photons.
Given that there are two types of electric field (positive and negative) around electric charges, and that it is possible to explain both the repulsion of similar charges and the attraction of dissimilar charges by having two types of charged massless gauge boson, one alternative to the existing U(1) x SU(2) + Higgs field for the electroweak theory is that the mass-giving (Higgs-type) field doesn’t actually give mass to all of the two SU(2) massless charged gauge bosons at low energy.
Maybe only a portion gets mass (in such a way that the resulting massive weak bosons only interact with left-handed spinors), and the rest of the charged massless SU(2) gauge bosons remain massless at low energy, giving the electromagnetic force. This could work because if there is exchange of charged massless gauge bosons between all similar charges, the charged massless gauge bosons will (when there is equilibrium in the exchange) be passing in opposite directions, so their magnetic fields will have opposing curls and cancel out.
This idea comes from experiments by wafer-scale chip engineer and computer signal crosstalk theorist Catt, who showed that when you charge up a length of cable, energy enters at light velocity and has no means to slow down thereafter. The cable is charged up like a capacitor in a series of steps, which arise due to reflections at the unterminated ends of a transmission line being charged up (see http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4039191 and http://www.ivorcatt.org/icrwiworld78dec2.htm but beware that Catt is an experimental electronics engineer, not a theoretical physicist, and makes some errors in interpreting the experimental facts with his physics concepts). Although there is a drift current of electrons in the cable, it can be shown that most of the energy is being carried by the electromagnetic gauge bosons, and that once the cable or effective capacitor is charged up, there is an equilibrium of boson energy vacillating (or oscillating) at light velocity in both directions along the transmission line, just like two logic signals in a computer transmission line travelling through one another from different directions (the electric field components add, the magnetic field components cancel).
So you can use such modern logic-step crosstalk experimental work in electromagnetism (which was unknown when the U(1) model of electromagnetism was being formulated many decades ago) to show that electric fields may be mediated by charged massless, not neutral, gauge bosons. So SU(2) without the Higgs field may be the correct model for electromagnetism, not U(1).
This is just one line of research which is based on new experimental work on electromagnetism. I’ve written several articles over the past decade about Catt’s research in Electronics World, a British journal, but for various reasons – mainly Catt’s general hostility towards modern physics and people like me, just because of the speculative excesses and elitism of areas like string theory – there is very little interest.
Mainstream physics journals use peer review or editorial censorship to eliminate anything out of the ordinary that originates from someone without a mainstream reputation or even a PhD. I did QM and GR (cosmology) modules at university, but it is still extremely difficult studying the Standard Model from books (Ryder’s QFT book is the most lucid introduction I’ve found), while working in IT and having nobody to discuss it with: I can grasp whatever maths I need, but it’s not always clear what physical evidence it is based upon. E.g., is weak hypercharge (in the Gell-Mann–Nishijima formula) a real quantum field charge, or just a mathematical concept (twice the difference between electric charge and weak isospin charge)?
Because gravity is always attractive, it can be most simply modelled by a neutral gauge boson exchange between gravitational charges, which for technical reasons would push masses (etc.) together. The average hypercharge of particles is twice the electric charge ( http://en.wikipedia.org/wiki/Hypercharge ), so could effects from exchanges of the massless neutral boson of SU(2) account for weak hypercharge? Sorry if such questions are too boring/off-topic.
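For reference, the “twice the difference” relation that the question turns on can be spelled out mechanically. A minimal sketch of the Gell-Mann–Nishijima bookkeeping (the function name is mine; values are the standard electroweak assignments):

```python
# Weak hypercharge from the Gell-Mann--Nishijima relation:
#   Q = T3 + Y/2,  i.e.  Y = 2 * (Q - T3).
# Standard assignments: left-handed leptons form an isospin doublet,
# right-handed ones are singlets (T3 = 0).

def hypercharge(Q, T3):
    return 2 * (Q - T3)

print(hypercharge(-1, -0.5))  # left-handed electron:  Y = -1
print(hypercharge(0, 0.5))    # left-handed neutrino:  Y = -1
print(hypercharge(-1, 0.0))   # right-handed electron: Y = -2
```

Whether Y defined this way is a real quantum field charge mediated by some boson, or just bookkeeping, is exactly the open question posed above.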
Just a note about Valentine cards/emails and flowers.
I’m not sending any out.
I’ll start sending them out after I’m happily married to Miss Dancer, if that ever happens.
In the past, when I sent them, I sent them to girls who (I can see in retrospect) I didn’t have a hope of dating, and the effect is that it just annoys other girls who I would have had more chance of dating. The worst year, when I was a teenager, was when I was completely under the illusion that because I fancied a girl, there must be some kind of romance or love. However, life isn’t like a romantic movie or book, and I didn’t exactly get thanked for sending a Valentine’s card.
If you want to send a Valentine’s card to a girl, first ensure that she is not going to be irritated to receive one from you, and secondly think carefully and ensure that you really do want to encourage her.
If you try to be clever and hedge against failure by sending out several Valentine’s cards, expect that (by Murphy’s Law) the only interest you’ll get (if any) will be from the girl you least fancy. And it won’t be the sort of interest you need, but more likely just insulting or patronising:
“Hi, you’re the crazy guy who thinks he is in love with me, aren’t you? Prove you really love me by lending me your credit card and pin number, I want to see what your limit is!”
For the most part, Valentine’s day is for married people or people with a partner or girlfriend, not for people sending cards to girls who are flooded with cards.
The whole “love industry” which caters for 14 Feb., like the Santa Claus industry which caters for 25 Dec., is not romantic. Love itself is not romantic: the man wants love but the girl wants money. It’s not surprising that most marriages end in a painful divorce.
Love isn’t as portrayed by either romantic novelists or filmmakers. It’s actually just the physical effects of a sequence of chemicals: dopamine (producing contented happiness), norepinephrine (which makes the heart pound when you have a crush on someone and get to chat to them), etc., etc.
There’s nothing real about it. It’s all a hoax really, just an attraction based on looks and money which exists for the main purpose of producing children. That’s why most people now divorce after having children, when people’s oxytocin levels diminish (oxytocin is required for cuddling).
To claim that love isn’t simply an illusion produced by chemical drug effects in the brain, is like defending the lie that Edward Witten’s string Mtheory predicts gravity, or that Santa Claus delivers all the presents. It might be nice in fairy tales, but it doesn’t help make the right decisions in life.
copy of a comment:
http://riofriospacetime.blogspot.com/2008/02/explorer.html
Hi Tony,
Thank you, there is a great deal of very useful information on your site. It seems to me that if extra spatial dimensions were the correct way to approach unification, then the way you are suggesting – i.e. 26-dimensional string theory without 1:1 boson:fermion supersymmetry – would be the way forward.
The compactification of 6 spatial dimensions in 10-dimensional superstring theory, using a Calabi-Yau manifold which has to be stabilised by just over a hundred unknown moduli, is the cause of the main failure of superstring theory: the 10^500 metastable vacua in the “landscape” of solutions, which prevent it from making falsifiable predictions or from even modelling the observed vacuum in an ad hoc fashion.
So clearly it is shameful of arXiv to have censored your paper e.g. http://cdsweb.cern.ch/record/730325.
I recognise that you can make some predictions about Higgs mass from the masses of weak gauge bosons and top quark. It will be interesting to see how well experiments confirm them.
While you have checkable predictions that could falsify your model if the predictions are firmly discredited by experiments, there is no way to really confirm the theory: the mainstream ideas could accommodate whatever data comes out of the LHC by adding suitable epicycle-like “corrections” and “fine-tuning” to their theory.
It seems to me that there are two kinds of physical theory. The proofs that Archimedes gave for his laws of buoyancy are the kind of proof that builds on facts, and are not really speculative to begin with. His predictions may appear to be falsifiable, but actually, since he put only facts (not unknowns or speculations) as the input assumptions into his proofs to start with, they can’t actually be falsified within their range of validity, unless you try to apply the laws outside the range for which the factual input assumptions are valid. Archimedes’ proof was fact-based. It wasn’t a speculative theory requiring eternal distrust and falsifiability. It wasn’t a matter of Popperian falsifiability.
Archimedes says that if you are at the bottom of the sea, the water pressure is the same regardless of whether there is a ship floating above you or not. Hence, the total weight of water and ship above any point (producing the downward water pressure) is equal to the weight of water if there were no ship floating above you. Thus, the weight of water displaced when a ship floats is equal to the weight of the ship.
You can’t falsify this kind of tight physical proof, it’s a theory only in the sense of showing the relationship between various established facts. It’s not speculative.
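The floating-ship condition above reduces to one line of arithmetic. A toy illustration (the figures are made up for the example):

```python
# Archimedes' floating condition, as argued above: weight of displaced
# water = weight of ship, i.e.  m_ship * g = rho_water * V * g.

RHO_WATER = 1000.0  # density of water, kg/m^3

def displaced_volume(ship_mass_kg):
    # g cancels from both sides, leaving V = m_ship / rho_water.
    return ship_mass_kg / RHO_WATER

print(displaced_volume(5.0e7))  # a 50,000-tonne ship -> 50000.0 m^3 displaced
```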
I think there is something to be said for this kind of physics in particle physics, because if you can get somewhere working out the relationship between different facts without invoking any speculations, then it’s pretty solid. The problem here is that there is generally disagreement on what the fundamental facts are. E.g., I need to ascertain whether the Hubble v = HR recession of galaxies at velocity v, for apparent observable distance R, is really equivalent to an acceleration of those galaxies at
a = dv/dt
= d(HR)/dt
= H*dR/dt + R*dH/dt
= H*dR/dt + R*0
= H*dR/dt
= Hv
= H(RH)
= RH^2
which is about 7*10^{-10} metres per second squared for the most distant receding matter (at nearly the radius of the horizon).
It’s a tiny acceleration, but the mass of the universe is immense, so the outward force would be on order 7*10^43 newtons.
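Plugging in round numbers reproduces the figures above; note that the mass of the universe used here (~10^53 kg) is an order-of-magnitude assumption on my part, not a measured input:

```python
# a = R*H^2 for the most distant receding matter, then outward force F = M*a.

C = 3.0e8    # speed of light, m/s
H = 2.1e-18  # Hubble parameter, 1/s
R = C / H    # effective horizon radius, ~1.4e26 m
M = 1.0e53   # assumed mass of the universe, kg (rough order of magnitude)

a = R * H ** 2  # ~6e-10 m/s^2, the tiny cosmological acceleration
F = M * a       # ~6e43 N, the "outward force" of the text
print(a, F)
```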
Newton’s 3rd law then suggests an equal inward reaction force, which allows a lot of algebraic fun, resulting in an interpretation of graviton exchange as a pushing effect. Space is filled with a fabric consisting of gauge bosons, virtual particles, Dirac’s sea, etc., which gets knocked out of the way when a fundamental particle moves. Like a submarine moving under the sea, the surrounding stuff doesn’t pile up at the front causing ever-increasing pressure that prevents motion, but rather it flows around the moving object and into the vacated space. If the moving object is effectively accelerating, then a frictionless (perfect fluid) medium would accelerate in the other direction, exerting a force equal to the force of the accelerating object. Masses are pushed together by the inward force carried by graviton exchange radiation. This is because nearby masses which aren’t receding significantly with respect to one another don’t fire off a significant force of gravitons in the direction of the nearby mass.
If you do some fact based calculations of this, you can predict G and a lot of cosmology without requiring dark energy (unless you treat that as the energy of the gravitons, which cause masses to repel where the masses are already accelerating from one another, but cause attraction when they aren’t). However, it’s a very heretical theory simply because the “facts” it is based upon aren’t normally interpreted this way. Of course, if they were normally interpreted that way, then there would be nothing for me to say.
Am I correct, at the first step, to be calculating an acceleration with basic physics using Hubble’s law? E.g., the acceleration = dv/dt = d(HR)/dt = RH^2 calculation seems to me to be a perfect example of doing real physics, calculating things from known facts using correct maths.
However, to someone who is hostile to innovative physics, it would certainly be “just plain wrong”, because nobody else has done that before, or because any statement of a new idea must be due to my ignorance of general relativity (which I studied at college, along with quantum mechanics).
However, maybe I’m missing some subtle point in working out acceleration from Hubble’s law and applying it to outward and inward forces using Newton’s laws of motion, and maybe one day someone will figure out why putting facts together my way is a crime worthy of censorship from arXiv (the censors didn’t say what was wrong with my calculations).
“Don’t let the critics get you down, such people aren’t even worth the price of a bullet.” – Louise
Thank you, I’ll try to conserve ammunition in future. There is no challenge anyway in shooting back at critics who can’t fire straight.
copy of a comment made by “anon.” with whom I agree:
http://www.math.columbia.edu/~woit/wordpress/?p=653
I watched the video link of ‘Colbert Report: Lisa Randall’. I didn’t find it funny. String theorists will go to any lengths to hype the claim that string theory is checkable and predicts the weakness of gravity, without making any solid calculations. It’s just pseudoscience. Colbert should have asked for the alleged (non-existent) formula for the weakness of gravity and, if something was supplied off-the-cuff on the back of an envelope, he should have probed how it was derived. Everything Colbert did say was purely pro-nonsense, just giving more airing to vacuous hype. Extra-dimensional hype was funny for a day sometime around 1985, but the joke is wearing a bit thin nowadays.
another comment by “anon” (unfortunately it is obviously missing a full stop and the word “Some” in the second para, so it may be deleted; I’ll interpolate the correction needed below, inside square brackets):
http://www.math.columbia.edu/~woit/wordpress/?p=653
Eric: yes, Lisa does work on some alternative extra-dimensional ideas to mainstream string theory, but it seems that such ideas are string theories; at least, they are Kaluza-Klein theories with extra spatial dimensions.
Remember, it’s widely claimed that not all dimensions were necessarily compactified to unobservable size in the landscape of 10^500 variants of the Calabi-Yau [. Some] are supposed to have become unravelled into vast cosmic strings that astronomers should be able to see (when they point their telescopes in the right direction, and remove the lens cover).
Lisa’s idea is that gravitons, unlike electromagnetic gauge bosons, are free to propagate in an extra spatial dimension, and this dilutes the gravitational interaction relative to electromagnetism, whose photons can only move in observable spacetime dimensions.
M-theory in fact has 11 dimensions, with the 10-dimensional superstring resting like a (mem)brane or a surface structure on an 11-dimensional complete ‘bulk’.
Lisa’s idea would suggest that in M-theory photons are confined to the 10-dimensional brane (3+1 spacetime dimensions + 6 compactified spatial dimensions), but gravitons can also travel through the 11-dimensional bulk.
Because the gravitons have one extra dimension to travel in, they appear to us in 3+1 spacetime dimensions to give rise to a gravitational coupling constant weaker than electromagnetism, because gravitons spend less time on the brane than photons do.
Photons are a bit like a film of oil floating on the surface of a bulk of water, very concentrated (giving strong electromagnetism), whereas gravitons are like dye thrown into the bulk of the water, which dilutes them throughout the entire volume, not just the surface (brane). C’est magnifique, mais ce n’est pas la science (‘It is magnificent, but it is not science’).
I’m changing the design of this blog which will remove the following subheading at the top:
U(1) x SU(2) x SU(3) quantum field theory
Evidence that electromagnetism is mediated by charged, massless SU(2) gauge bosons, changing the Higgs mechanism for electroweak symmetry breaking. Evidence from a working (checkable, successful) fact-based mechanism of quantum gravity that the graviton is a spin-1 gauge boson, possibly the neutral gauge boson of SU(2) or else a gravity field described by U(1). This blog provides evidence and predictions for the introduction of gravity into the Standard Model of particle physics.
Comment 24 above gives my outlook at present: I know from the mechanism that the graviton is a spin-1 gauge boson, and that electromagnetic forces are due to the two charged massless gauge bosons of SU(2), but the graviton could be either the massless neutral spin-1 boson of SU(2) (without the usual addition of the mass-giving Higgs field at low energy), or it could be the gauge boson of a U(1) group. It depends on the nature of weak hypercharge, and whether it can be described adequately using the charged and neutral massless SU(2) gauge bosons (not to be confused with their massive, short-ranged counterparts, which are provided with mass by a separate Higgs-like mechanism), and whether gravity fits into this or not. More research is needed on my part to understand the Lagrangian of the mainstream U(1) x SU(2) electroweak sector with Higgs field, and grasp the role of weak hypercharge in mathematical and physical detail. Otherwise, confusion will continue. I will do this asap, but am busy with work commitments at present.
This uncertainty doesn’t reflect on the physical mechanisms or their proofs, which are independent of the correct symmetry group of the universe: my working method is to build from solid facts to obtain checkable mechanisms, and then try to find the correct symmetry groups that model those checkable mechanisms. (This is the complete opposite of mainstream attempts to guess a symmetry group from thin air, and then work out and test its consequences.)
Obviously Newton’s 2nd law is
F = dp/dt = d(mv)/dt = (v.dm/dt) + (m.dv/dt)
which is only equal to m.dv/dt, i.e. to ma, if dm/dt is zero. (Newton never actually claimed that F = ma, which is a non-relativistic solution to what he did claim, namely that force is the rate of change of momentum, i.e. F = dp/dt; this is entirely compatible with relativity, since mass increases can be included in that form: p = mv.)
In fact, because of relativistic mass increase with velocity, this F = ma solution isn’t quite the case. Calculations should be modified to include the effect of relativistic mass increases. There are quite a lot of correction factors needed for density variation with time after the big bang and for the redshift (energy shift) of gravitons already. When I reformulate the presentation of the calculations at http://nige.wordpress.com/about/ for the book, I’ll include every pertinent piece of physics.
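The point about F = dp/dt versus F = ma can be made concrete: for acceleration parallel to the velocity, d(γmv)/dt = γ³m(dv/dt), so the Newtonian form is just the low-speed limit. A minimal sketch (my own illustration, not a calculation from the book):

```python
# F = dp/dt with relativistic momentum p = gamma * m * v.
# For acceleration parallel to v this gives F = gamma^3 * m * a,
# which reduces to F = m * a when v << c.

import math

C = 3.0e8  # speed of light, m/s

def force(m, v, a):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma ** 3 * m * a

print(force(1.0, 0.0, 1.0))      # Newtonian limit: F = m*a = 1.0 N
print(force(1.0, 0.6 * C, 1.0))  # gamma = 1.25, so F = 1.953125 N
```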
copy of a comment:
http://riofriospacetime.blogspot.com/2008/02/farthestgalaxy.html
The outward force of a receding galaxy of mass m is F = mRH^2, which requires power P = dE/dt = Fv = m(H^3)(R^2), where E is energy. This comes from the normal Hubble recession v = HR which implies acceleration a = dv/dt = H(dR/dt) + R(dH/dt) = H(dR/dt) = Hv = RH^2.
For radius of universe R, the acceleration is just 6*10^{-10} m/s^2 or so, which (according to Smolin, TTWP, 1996) is about the same figure as the acceleration of the universe derived from Perlmutter’s observations of receding supernovas in 1998. The prediction of acceleration = RH^2 = Hc = c/t, where t is the age of the universe, based just on Hubble’s law, was published in October 1996 via Electronics World.
It’s weird that this is completely censored out of mainstream cosmology, when it is fact-based. Nobody has even claimed that calculus doesn’t apply to the Hubble law, or denied that a recession with velocities increasing with distance is a kind of acceleration: because distance is equivalent to time past, any apparent variation of velocity with time gives rise to an apparent acceleration.
I think that the small size of the acceleration of the universe, only about 1 part in 10^10 of the acceleration due to gravity at Earth’s surface, is the reason why it was ignored. It only becomes significant at the greatest distances, which is why it was only discovered in 1998.
I had tried with much energy to get the original research published somewhere appropriate, but had been brushed aside. At that time I was a part-time Open University student and tried to correspond with my physics professor there, Russell Stannard, but only received letters from Dr Bob Lambourne defending the status quo as then taught in the Open University’s cosmology course. He had no comment to make on the prediction of cosmological acceleration.
Going back to the formula for the power needed to make a distant galaxy recede with the observed acceleration:
P = dE/dt = Fv = m(H^3)(R^2).
For a typical galaxy like the Milky Way, the mass is roughly m = 1.2*10^42 kg, the Hubble constant is about H = 2.1*10^{-18} s^{-1}, and R = c/H = 1.4*10^26 metres. It’s interesting that the power needed to accelerate each kilogram of the most distant masses away from us is 0.19 watts. For the 1.2*10^42 kg mass of a galaxy (ignoring relativistic mass increase), 2*10^41 watts is needed.
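These power figures follow directly from P = Fv = mH^3R^2, with the same round inputs:

```python
# P = F*v = (m*R*H^2) * (H*R) = m * H^3 * R^2, per the text above.

H = 2.1e-18  # Hubble parameter, 1/s
R = 1.4e26   # R = c/H, metres
m = 1.2e42   # rough Milky Way mass, kg

per_kg = H ** 3 * R ** 2  # ~0.18 W needed per kilogram of distant mass
P = m * per_kg            # ~2e41 W for a whole galaxy
print(per_kg, P)
```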
That’s a lot of power. The simplest explanation for “dark energy” viewed this way as the cause of the acceleration is the graviton exchange radiation which causes gravity between masses which aren’t relativistically receding.
For distances which are a large fraction of the effective radius of the universe (i.e. a large fraction of the horizon radius), masses are accelerating away from one another because they are exchanging gravitons with great force. The outward force of a receding piece of matter is accompanied, according to Newton’s 3rd law, by a reaction force directed towards us, which from the available possibilities seems to be mediated through space by the spacetime fabric i.e. graviton radiation.
The mass of the Earth isn’t accelerating away from us with any significant force; because its distance from us is only a trivial fraction of the radius of the universe, its force away from us, and the reaction force it sends towards us as gravitons, is trivial (F = mRH^2). If the Earth is contributing in any significant way to quantum gravity interactions with us, it’s only doing so by preventing some of the gravitons coming through the Earth from reaching us (some of the fundamental particles in the Earth will get in the way of gravitons, which will interact with them, and so there will be a reduced flux of gravitons coming from that direction).
The imbalance produced by the Earth’s presence therefore causes us to get accelerated towards the Earth. The same gravitons which cause gravitational attraction by this shadowing mechanism also produce cosmological expansion by impacting on distant masses. A balloon inflates because of air molecules hitting each other and causing the gas to expand. The action of gravitons being exchanged between masses in the universe is similar.
It’s conceivable that the mainstream LambdaCDM model is an approximation if the role of the cosmological constant is that of gravitons on large scales, causing masses to accelerate away from one another due to the forceful exchange of gravitons. I.e., “dark energy” is the energy of the gravitational field, gravitons.
Just a note about blog statistics, i.e. the number of hits this blog receives according to wordpress who host it on their servers.
This blog began in May 2006 and the number of hits per month to date is approximately fitted by the expression 2000t^{1/2} hits/month +/- 50%, where t is time in years since the blog began. I haven’t used any elaborate statistical analysis to find this expression or its error bar; it’s just based on my quick visual inspection of the ragged graph wordpress provide.
This curve does, however, reveal what is occurring. There is no mass appeal in the information, which isn’t surprising with only about 42 or so very long and technical (boring to the uninitiated) posts.
However, science – at least at the nascent stage where research is being done and checked carefully – is not (or at least should not be) a popularity contest.
The single day with the greatest number of hits so far was when I posted a defense of Louise Riofrio’s formula in 2006.
However, I don’t agree with everything Louise analyses, e.g. rather than the velocity of light varying in the formula, my analysis indicates that the gravitational coupling factor G increases linearly with time after the big bang. This and its observable effects (the small size of the ripples in the CBR) is explained in http://nige.wordpress.com/2007/05/25/quantumgravitymechanismandpredictions/
In the long term, completing my book and preparing a video explanation are the priorities. In the short term, I’m busy with my IT career since I have to earn a living, and that living isn’t earned at this stuff.
A bit more commentary related to comment 32 (immediately above):
“… my analysis indicates that the gravitational coupling factor G increases linearly with time after the big bang. This and its observable effects (the small size of the ripples in the CBR) is explained in http://nige.wordpress.com/2007/05/25/quantumgravitymechanismandpredictions/ …”
When you find the section I’m referring to, you can see two mathematical approaches which both show that G increases in linear proportion to time after the big bang. One of those is relatively foolproof. The other is less so, and it suggests an important principle as I’ll now explain.
Since the great outward force of the most distant receding matter of mass m is F = mRH^2 = mctH^2, the equal inward force which is mediated by the pressure (i.e. the rate of change of momentum) of gravitons, and which gives rise to gravity, will be directly proportional to time t.
However, this assumes that H is constant (this is justified by the other, less ambiguous, calculation given at http://nige.wordpress.com/2007/05/25/quantumgravitymechanismandpredictions/ in more detail for G which proves that G increases in direct proportion to the age of the universe).
We know that 1/H = t, so why not put this for H in the formula F = mctH^2, giving F = mc/t? Well, this result is wrong, and the reason why it is wrong is important and leads to new physical understanding.
The Hubble constant is recession velocity divided by apparent distance when the light was received, H = v/R, but R is only equal to ct, where t is the age of the universe, when v = c (approximately).
In general, H = v/(ct) is a constant, because t is the time we are looking back to, and that t is directly proportional to v. Hence, as v rises in the numerator of H = v/(ct), so does t rise in the denominator, keeping H a constant.
To be accurate, there is no matter receding from us at velocity c, because relativity would give it infinite mass, and any light or other radiation it emitted towards us would have infinite redshift and hence zero energy. I.e., we don’t actually deal with the situation R = ct. This is only a rough approximation that is useful for some calculations, but completely invalid in others. It is completely invalid to substitute H = v/R = v/(ct) = c/(ct) = 1/t into the equation F = mctH^2, giving F = mc/t. Instead, we have to keep F = mctH^2 intact, because the physics shows that for our purposes here H is a constant, and force is directly proportional to time.
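The contrast being drawn here can be sketched numerically. This is only an illustration of the post's two formulas, F = mctH^2 (H held constant) versus the invalid substitution F = mc/t; the particular values of m, H and the two sample times are assumptions, not from the post:

```python
# With H treated as constant, F = m*c*t*H^2 grows linearly with t,
# whereas the (invalid) substitution H = 1/t gives F = m*c/t, which
# falls with t. All values are illustrative.
c = 3.0e8            # speed of light, m/s
H = 2.3e-18          # assumed Hubble parameter, 1/s (~70 km/s/Mpc)
m = 1.0              # illustrative 1 kg mass

def F_correct(t):    # F = m*c*t*H^2, with H held constant
    return m * c * t * H**2

def F_invalid(t):    # the substitution H = 1/t which the post warns against
    return m * c / t

t1, t2 = 1.0e17, 2.0e17           # two illustrative ages, in seconds
print(F_correct(t2) / F_correct(t1))   # doubles as t doubles
print(F_invalid(t2) / F_invalid(t1))   # halves as t doubles
```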
What is important to note above is the ease with which confusion can be created by even the simplest mathematics, when causal physics is ignored and equations are simply played with. When Dirac first discovered antimatter solutions to the relativistic spinor Hamiltonian required to make Schroedinger’s time-dependent equation relativistic, he initially published the claim that the antiparticle of the electron was the already known proton, despite the massive difference in mass! Eventually, and just ahead of Carl Anderson’s discovery of the positron, Dirac was forced to admit that his formula predicted an unknown particle. This shows the difficulty in correctly interpreting real predictions from mathematical physics: they are often unwelcome, and human nature more often than not tries to misinterpret the natural physics by abusing the mathematics and ignoring the dynamics.
If, as argued in comment 31 above, the exchange of gravitons between masses is causing the acceleration of the universe,
a = dv/dt
= d(HR)/dt
= (H*dR/dt) + (R*dH/dt)
= H*dR/dt + 0
= H*v = H^2*R,
then it follows that the reason for the recession velocities increasing with radial distance is the acceleration of galaxies away from one another due to graviton exchange, like the pressure of dough between currants in a cake being baked, which causes the cake to expand in the oven.
In other words, the galaxies are not coasting. They are being accelerated by graviton impacts, which causes their distribution of apparent recession velocities.
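The product-rule derivation above can be checked by finite differences. This is just a numerical sketch for the case dH/dt = 0, i.e. exponential expansion R(t) = R0*exp(Ht), with illustrative values of H and R0 that are not from the post:

```python
import math

# Finite-difference check of a = d(HR)/dt = H*v = H^2*R when dH/dt = 0,
# using exponential expansion R(t) = R0*exp(H*t). Values are illustrative.
H = 2.3e-18    # 1/s, assumed constant Hubble parameter (~70 km/s/Mpc)
R0 = 1.0e26    # m, illustrative distance scale

def R(t):
    return R0 * math.exp(H * t)

t, dt = 1.0e17, 1.0e13
a = (R(t + dt) - 2 * R(t) + R(t - dt)) / dt**2   # numerical d^2R/dt^2
print(abs(a / (H**2 * R(t)) - 1) < 1e-4)          # True: a matches H^2*R
```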
REDSHIFTS AND BLUESHIFTS
If a galaxy is moving relative to an observer, the galaxy’s light will be blueshifted if the galaxy is approaching the observer (like the nearby galaxy Andromeda, which the Milky Way is approaching, hence Andromeda’s light is slightly blueshifted), and redshifted if the distance between galaxy and observer is increasing.
If you think about conservation of energy, the reduction in energy per photon received from a receding galaxy is not a violation of the principle of conservation of energy. The galaxy emits X watts of power. When the distance between observer and galaxy is constant, X/2 watts of power is emitted into one hemisphere around the galaxy and the same in the other hemisphere.
When the galaxy is receding from the observer, however, the light which the observer receives is redshifted to lower frequency and thus lower energy (remember Planck’s law for quanta, E = hf).
So the effective power emitted by galaxy in the direction of the receding observer is less than X/2. This doesn’t violate the conservation of energy, because the light the galaxy emits in the other direction is blue shifted (in the observer’s frame of reference) by the same degree, so the galaxy’s total power output is unaffected by the relative motion of the observer.
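The cancellation argued for above can be checked with the standard relativistic Doppler factor for photon energy (E = hf); this is a sketch using a conventional textbook formula, with an arbitrary illustrative speed:

```python
import math

# A photon emitted towards a receding observer is redshifted by a factor D;
# one emitted in the opposite direction is blueshifted by 1/D. The two
# factors multiply to exactly 1, so the galaxy's total power budget is
# unchanged, as argued above.
def doppler_factor(beta: float) -> float:
    """Energy ratio E_observed/E_emitted for recession speed v = beta*c."""
    return math.sqrt((1 - beta) / (1 + beta))

beta = 0.1                        # illustrative recession speed, 0.1c
red = doppler_factor(beta)        # < 1: receding direction (redshift)
blue = doppler_factor(-beta)      # > 1: approaching direction (blueshift)
print(round(red * blue, 12))      # 1.0
```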
This explanation for energy conservation when light is redshifted should satisfy Dr Mario Rabinowitz’s question on the subject. The problem I get when I try to explain the gravitational mechanism to physicists who are prepared to listen (a group entirely composed of mainstream-skeptics) is that they are skeptical of the solid facts I’m building on, like the redshift interpretation.
For a good account of what’s wrong with nonrecession interpretations of redshift, please see the excellent page by Edward Wright: http://www.astro.ucla.edu/~wright/tiredlit.htm
Errors in Tired Light Cosmology
Tired light models invoke a gradual energy loss by photons as they travel through the cosmos to produce the redshiftdistance law. This has three main problems:
There is no known interaction that can degrade a photon’s energy without also changing its momentum, which leads to a blurring of distant objects which is not observed. The Compton shift in particular does not work.
The tired light model does not predict the observed time dilation of high redshift supernova light curves. This time dilation is a consequence of the standard interpretation of the redshift: a supernova that takes 20 days to decay will appear to take 40 days to decay when observed at redshift z=1.
In 2001 Goldhaber and the Supernova Cosmology Project published results of a time dilation analysis of 60 supernovae. …
The tired light model can not produce a blackbody spectrum for the Cosmic Microwave Background without some incredible coincidences. …
… in the tired light model the energy of the CMB photons will go down but the density will not go down to match the density of a cooler blackbody.
The local Universe is transparent and has a wide range of temperatures, so it does not produce a blackbody, which requires an isothermal absorbing situation. So the CMB must have come from a far away part of the Universe, and its photons will thus lose energy by the tired light effect. …
… Note that the CMB cannot be redshifted starlight. Some diehards refuse to face these facts, and continue to push tired light models of the CMB, but these models do not agree with the observations. …
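Wright's supernova time-dilation point quoted above is simple to state in code; a sketch of the standard (1 + z) stretch factor, using his own example numbers:

```python
# In the expanding-universe interpretation, an event of intrinsic duration T
# appears stretched by (1 + z). Wright's example: a supernova decaying over
# 20 days at redshift z = 1 is observed to take 40 days.
def observed_duration(intrinsic_days: float, z: float) -> float:
    return intrinsic_days * (1 + z)

print(observed_duration(20, 1.0))  # 40.0
```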
Such people as Wright refers to, who push pseudoscience, are even worse than the mainstream spin-2 graviton believers. They refuse to listen to facts, they aren’t interested in physics, they believe that their gut-instinct hatred of big bang ideas is more important than the facts, etc., etc. They’re like the ignorant people who opposed Darwin on the grounds that “evolution is immoral”, whereas the mainstream scientific viewpoint of many aged professors of biology was for a long time that “evolution is just a new idea which doesn’t yet have ancient authority, so we don’t have to bother reading about it, let alone taking it seriously and independently checking it; if we take it seriously we’re risking our reputations just by taking such a radical idea seriously, so why should we?” Both these viewpoints are common today in physics.
Virtually all physicists are uninterested in advances per se (i.e., advances unaccompanied by hype from famous people). It’s not evidence that counts, but getting political-type backing from someone with a reputation. At present, however, I don’t think that any reputable physicist exists; with the rise of string theory all the big names are to some extent or other crackpot.
Backing of the hype variety from such people wouldn’t help real science, which is not a political adventure. However, constructive criticism and genuine checks by other people might at some stage in the future help to speed up progress…
On the subject of groupthink-led destructive criticism, take a look at
http://www.logbook.freeserve.co.uk/riposte%20capacitor.html
RIPOSTE: Ivor Catt’s view of Capacitors
by Leslie Green CEng MIEE
… updated 27 August 2003
I had thought that Ivor had deleted this section from his website as it is clearly erroneous.
Given that electromagnetism is evidently a subject of great interest to engineers, Ivor’s site attracts a respectable volume of readers. It is therefore worthwhile correcting one of the most blatant and demonstrable errors on the website.
Ivor claims that capacitors do not have selfinductance if measured without their leads. He makes this claim on purely theoretical grounds. The problem with this assertion is that it relates to no known realworld components! …
Ivor has got himself a bit confused about transmission lines and real components. Ivor says that capacitors are really transmissions lines and should be treated as such. This is a bit backwards. According to electromagnetic theory, everything is based on Maxwell’s theory, transmission lines, waves, fields and so forth. All very complicated. Rather than confront the huge mass of differential equations necessary to solve even simple problems, practical engineers have come up with “lumped element models”. Rather than consider a coil of wire as a transmission line, it is easier to consider it as “an inductor”. This approximation is only valid up to a certain limiting frequency where the phase shift of the current in the wire becomes too great. When the phase shift is relatively small the system is described as “quasistatic” and the simple lumped element approximation is used. We know it is not exact, but it is good enough for engineering purposes. Thus Ivor has “invented” nonquasistatic systems, something known about for over a century! It has to be said in Ivor’s defence, however, that such descriptions are not usually seen in modern electronics books, but were published in some good text books between say 1930 and 1955.
Green’s comments quoted above are related to Catt’s inaccurate article, co-authored with Dr Walton and with Malcolm Davidson, http://www.ivorcatt.org/icrwiworld78dec1.htm
(It is also in the published book “Digital Hardware Design”, http://www.ivorcatt.org/digitalhardwaredesign.htm )
Green ignores everything Catt writes and does, and then makes the false claim that Catt is wrong because the theoretical model of a capacitor which Catt uses is not what is used in real electronics. Of course not. Catt uses a simple model for analysing the physics, which has nothing to do with electronics. For a start, Catt’s capacitor has a vacuum between the plates, not a plastic or other dielectric.
I argued with Catt from 1996 onwards that he should make it clear that his model of the capacitor has nothing to do with the electronic capacitors used in electronics, but is a model of use in understanding and correcting electromagnetic theory, i.e. pure physics.
He ignored me. However, that doesn’t excuse the abuse from Green, who ignores the physics. Catt and his co-authors make errors, inherited from Heaviside’s false “logic step” with a zero rise time, which would make the displacement current rise from zero to full current the instant the step arrives, i.e. a change in current of di/dt = di/0 = infinity.
In a transmission line, a logic signal requires some sort of virtual vacuum displacement current (acting in the transverse direction, from one conductor towards the other at the position of the moving logic signal) to complete the circuit before the signal reaches the end of the line (where it reflects without inversion at an open circuit, or with inversion at a closed circuit, i.e. a dead short). If the current in the transmission line wires rose in a step from 0 to i amps in zero time, the electrons would be accelerated in zero time; since the transverse radio emission from an accelerating electron depends on the rate of change of current, that would produce radio emission of infinite power from those electrons. In fact, this can’t happen. That’s why Catt’s and Heaviside’s use of the “logic step” is physically a fraud. When you correct the error, you find that the entire physics underlying the logic pulse changes.
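The divergence being objected to here is easy to exhibit. A sketch only: radiated power from accelerated charge scales as (di/dt)^2, and the current swing and rise times below are illustrative, not a model of any particular transmission line:

```python
# For a fixed current swing delta_i, the rate of change di/dt = delta_i/rise
# grows without bound as the rise time shrinks; since radiated power scales
# as (di/dt)^2, a true zero-rise-time step would imply infinite emission.
delta_i = 1.0  # amps, fixed current swing (illustrative)

for rise_time in (1e-9, 1e-12, 1e-15):
    di_dt = delta_i / rise_time
    print(f"rise {rise_time:.0e} s -> (di/dt)^2 = {di_dt**2:.0e}")
# (di/dt)^2 grows without limit as rise_time -> 0.
```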
What is occurring is that gauge bosons or field quanta are being exchanged from one conductor to the other, transversely, as the logic signal propagates along the transmission line. These gauge bosons are similar to radio waves but consist of charged massless gauge bosons (i.e., they are like half radio waves, each half containing one electric field sign such as positive or negative, rather than whole waves, which contain equal fields of each charge sign and are electrically neutral overall).
So a full analysis of the physics tells us about electromagnetism, replacing Maxwell’s “displacement current in a vacuum” concept with a real physical mechanism of what is occurring in quantum electrodynamics.
See also my posts:
http://nige.wordpress.com/2007/04/05/aretherehiddencostsofbadscienceinstringtheory/
http://electrogravity.blogspot.com/2006/04/maxwellsdisplacementandeinsteins.html
the latter of which is in need of updating and rewriting to incorporate some improvements in the physics which have occurred since it was written.
copy of a comment:
http://dorigo.wordpress.com/2008/02/13/multipleinteractionsatlhcanexerciseinelementarystatistics
Hi Tommaso,
Thanks for taking the trouble to reply, and thanks also for the analogy of parsecs. I did a cosmology module as well as quantum mechanics, and yes the parsec only really made sense in astronomy when absolute distance scales were uncertain.
At that time, all astronomers could do for absolute distance measurements was to measure the angle of parallax: the difference in a star’s apparent angular position (relative to far more distant stars in the sky) at two times of year 6 months apart, when the Earth is at opposite sides of the sun. This parallax is measured in seconds of arc, where 1 second of arc is 1/3600 of 1 degree of angle of the sky; a star showing a parallax of 1 second of arc lies at a distance of 1 parsec.
Hence 1 parsec is the distance of a star showing 1 second of arc of parallax between opposite sides of the Earth’s orbit. Since it has been determined accurately that the radius of the Earth’s orbit is about 150 million km, it follows from the trigonometry of a right-angled triangle that a star with a parallax of 1 second of arc is at a distance of (1.5*10^8)/sin(1/3600 degree) = 3.1*10^13 km.
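The trigonometry above can be checked in a couple of lines; a sketch using the same numbers as the text:

```python
import math

# A star showing 1 arcsecond of parallax against the 1.5e8 km radius of the
# Earth's orbit lies at about 3.1e13 km, i.e. one parsec.
au_km = 1.5e8                            # Earth-orbit radius, km
parallax_rad = math.radians(1 / 3600)    # 1 second of arc, in radians
distance_km = au_km / math.sin(parallax_rad)
print(f"{distance_km:.2e} km")           # ~3.09e+13 km
```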
What’s surprising […] here is that this kind of conventionalism is the cause of a major failure by Hubble. Instead of thinking deeply about his recession law v/R = H, he expressed H conventionally in units of km/s/Mparsec. If he had thought about it, spacetime implies that you can represent a distance as a time. If he had written the Hubble law that way, he would have had v/t = Hc, which is interesting since it [i.e., the constant here written as the product “Hc”] naturally has units of acceleration.
Even if you just take the regular mainstream Hubble law v = HR, you can see that it implies acceleration: a = dv/dt = d(HR)/dt = H(dR/dt) + R(dH/dt) = Hv + 0 = H^2*R. So the Hubble law itself predicts that the universe is accelerating at the small rate of about 6*10^{-10} m/s^2. This is such a tiny acceleration that it was first observed only in 1998, by Saul Perlmutter’s clever automatic supernova-signature detecting software, which was run with live digital input from CCD telescopes.
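The magnitude quoted above follows directly: at the Hubble radius R = c/H the acceleration a = H^2*R reduces to H*c. A sketch, assuming the conventional value H = 70 km/s/Mpc:

```python
# a = H^2 * R evaluated at R = c/H is just H*c, which gives the ~6e-10 m/s^2
# figure quoted in the text for H ~ 70 km/s/Mpc.
c = 2.998e8             # speed of light, m/s
Mpc = 3.086e22          # one megaparsec, m
H = 70e3 / Mpc          # assumed Hubble constant, 1/s
a = H * c
print(f"{a:.1e} m/s^2")  # ~6.8e-10 m/s^2
```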
Mainstream cosmology is completely half-baked because it doesn’t bother to analyse the few solid facts it has at its disposal. Every time I tried to point out that it’s possible to prove the universe is accelerating (I published this in 1996, years before the discovery), and the allied fact that the outward acceleration implies an outward force which leads to quantitative predictions in quantum gravity, I was just censored out for dozens of reasons. People don’t listen because they either (1) assume that the mainstream orthodoxy is gospel truth, or (2) completely reject the big bang and the factual evidence of recession, preferring to preach false “tired light” nonsense (against the facts) for pseudoscientific, metaphysical personal reasons. It’s very weird how orthodoxy is so helpful in experimental particle physics, but so unhelpful in other areas.
copy of a comment:
http://dorigo.wordpress.com/2008/02/17/scarlettandnatalie
Look how shiny their faces are compared to say their arms. Clearly they’re wearing face cream, makeup, etc. Also their eyebrows look artificially groomed and perfect. As for the teeth, they either don’t drink a lot of coffee, have false teeth made of teflon, have porcelain inlays glued on the outside of their visible teeth, or alternatively have regular teeth bleaching sessions to keep them white.
When I occasionally meet girls tarted up to look a bit like that at parties, I feel as if I’m talking to someone who is totally artificial, even down to the well-groomed artificial “personality”. It’s really damning that some film directors get hold of such poor scripts and have such boring direction that they feel the need to include such phoney-appearing dolls just to attract some silly viewers.
I’ve actually had some teeth bleaching done and such like, but that’s not to become an actor; it’s just because I’m still single at 35, and finding any single girl who will go on a date is extremely difficult (in fact the older I get, the harder it gets, because the larger the percentage of girls of my age who are already married). I can’t understand this media (and public) obsession with girls who look (and maybe are) like plastic dolls.
Yes, they look perfectly cute, but under that skindeep surface they’re just egotistic, wealthy, actresses, models, or (even worse) lawyers. Yuck. The worst part about it is that they get millions for playing the parts of poor, normal girls in films. The silly viewer then gets confused and mentally associates the nice, loving part being played by the girl with her real personality, which is in fact the exact opposite.
The best way to be sensible about such women is to imagine how they will look in 25 years time. Would you still want to date them when their eyelids etc. are sagging? (If the girl has had cosmetic treatment, shouldn’t you remember that you’re really admiring the skill of her dentist or surgeon, rather than the girl’s intrinsic beauty?)
copy of a comment:
http://asymptotia.com/2008/02/15/moviespoiler/
‘This time I get to do it officially, since Doug Liman’s people are doing a private screening of the film this evening and there’ll be a panel of some of the film’s creators and a scientist for questions and answers afterward. I’ll be the scientist.’
Clifford, did it go well? I’d be scared to be teleported into an extra dimension, because I saw a sci fi film about it as a kid, where a fly gets into the teleportation chamber and messes things up a little. But good luck to whoever has the guts to try this for real. Presumably all it would take using today’s technology is to slice a person up into molecule-thick layers, then quickly toss those slices into a special scanner, like a computer scanner but with an electron microscope head in place of the CCD chip, before they move out of position.
At the receiving end, you could have something like a fine laser jet printer, with ink wells filled with the various amino acids and such like, which would be quickly sprayed out in a tiny jet as layer after layer is deposited to regenerate the 3d person.
I did read the article you linked to, where you don’t go into these technical trivia of teleportation, but discuss the more advanced stuff like the worry of whether the reproduced person will have atoms in exactly the same state as before, and showing quantum entanglement might help out.
The evidence from Alain Aspect’s and many other experiments for wavefunction entanglement is indirect and can really be interpreted as showing a problem in mainstream quantum mechanics. It’s only when you assume that the usual wavefunction description is correct that you end up having to take the experimental correlation of the spins of photons to imply that they are entangled. Dr Thomas Love of California State University has pointed out:
‘The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’
That looks like a factual problem, undermining the mainstream interpretation of the mathematics of quantum mechanics. If you think about it, sound waves are composed of air molecules, so you can easily write down the wave equation for sound and then – when trying to interpret it for individual air molecules – come up with the idea of wavefunction collapse occurring when a measurement is made for an individual air molecule.
Feynman writes in a footnote printed on pages 55-6 of my (Penguin, 1990) copy of his book QED:
‘… I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of oldfashioned ideas … But at a certain point the oldfashioned ideas would begin to fail, so a warning was developed … If you get rid of all the oldfashioned ideas and instead use the [path integral] ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’
Feynman on p85 points out that the effects usually attributed to the ‘uncertainty principle’ are actually due to interferences from virtual particles or field quanta in the vacuum (which don’t exist in classical theories but must exist in an accurate quantum field theory):
‘But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these [classical] rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit'; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’
Hence, in the path integral picture of quantum mechanics – according to Feynman – all the indeterminacy is due to interferences. It’s very analogous to the indeterminacy of the motion of a small grain of pollen (less than 5 microns in diameter) due to jostling by individual interactions with air molecules, which represent the field quanta being exchanged with a fundamental particle.
The path integral then makes a lot of sense, as it is the statistical resultant of a lot of interactions, just as the path integral was actually used for Brownian motion (diffusion) studies in physics before its role in QFT. The path integral still has the problem that it is unrealistic in using calculus and averaging an infinite number of possible paths determined by the continuously variable Lagrangian equation of motion in a field, when in reality there is not going to be an infinite number of interactions taking place. But at least it is possible to see the problems, and entanglement may be a red herring:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
If so, any sci fi hype about using entanglement for perfect teleportation may hold back progress by increasing the overall prejudice in society in favour of non-fact-based interpretations of experiments. Measuring the polarization of two individual, originally identical photons and finding a correlation which suggests entanglement looks like an extreme case of ignoring Sagan’s dictum “extraordinary claims require extraordinary evidence”. The correlation simply tells you how much the polarization measurement process interferes with the original polarization of the photon; it’s not direct proof that the two photons shared an entangled wavefunction which collapsed upon measurement some time later.
It’s pretty obvious that if Feynman’s guess was right, everyone presently enthralled by mainstream modern physics will end up very depressed by the way the universe is. The process will be a bit like the replacement of Genesis by evolution: it will upset many people for totally non-scientific reasons, and be a rough road.
copy of a comment:
http://asymptotia.com/2008/02/17/talesfromtheindustryxviijumpthoughts/
Hi Clifford,
Thanks for these further thoughts about being science advisor for what is (at least partly) a sci fi film. It’s fascinating.
“What I like to see first and foremost in these things is not a strict adherence to all known scientific principles, but instead internal consistency.”
Please don’t be too hard on them if there are apparent internal inconsistencies. Such alleged internal inconsistencies don’t always matter, as Feynman discovered:
“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …
“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …
” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …” – Feynman, quoted at http://www.tony5m17h.net/goodnewsbadnews.html#badnews
I agree with you that: “Entertainment leading to curiosity, real questions, and then a bit of education …”
Clifford kindly emailed me back material he had deleted from my comment (copied in comment 37 above), so I’ve just added some of the deleted material into this blog post, plus a more recent comment.
The mainstream string researchers go in for self-consistency in a big way when they do not have checkable predictions. That’s fine. They are welcome to find self-consistent theories of wavefunction collapse, entanglement, extra dimensions, unification, etc.
Actually, theories like string theory haven’t even been shown to be self-consistent, because there are an infinite number of loop corrections, and proofs of self-consistency are restricted to a finite number of loop corrections (only to two loops, in fact, so far).
What I’m concerned with is entirely different: fact-based scientific predictions which throw light on how fundamental interactions occur and can be checked experimentally.
Too much media hype of sci fi damages backwater grass-roots physics by depriving it of any attention whatsoever, any peer-reviewers, any funding, any interest from other people, and making it seem “boring” in comparison to mainstream speculations about teleportation, etc.
copy of a comment:
http://asymptotia.com/2008/02/17/talesfromtheindustryxviijumpthoughts
Hi Clifford,
Thanks for your reply.
“For every Feynman who can tell a fancy story about how he did not worry about it and came out on top (and gosh, how he loved his stories…. but don’t get me started), there are thousands of scientists who got absolutely nowhere by doing the same thing.”
Could it be argued that if only one Feynman emerges by using intuition per many thousands who get nowhere using that route, surely the way to make progress fastest is to encourage even more scientists to use an intuitive approach? Besides, surely everyone trusts their intuition to some extent when deciding which speculative area to work in?
Maybe people have to trust their own intuition when deciding whether to investigate string theory (which has not been proved finite beyond two loops), which is an example of an amazing intuitive idea that hasn’t been proved to be self-consistent?
When students decide to work on string theory, they are doing so maybe for a lot of reasons, such as because it is fashionable, and because it interconnects so many different areas of frontier physics, even though it hasn’t won Nobel Prizes yet for experimental confirmation.
So students just have to trust their physical intuition in deciding what to study when available theories haven’t been proved self-consistent and can’t be checked experimentally.
This brings to mind what Wigner said about the different emphasis in the physics culture he met in America from that in his home country, Hungary (in his autobiography, The Recollections of Eugene P. Wigner, as told to Andrew Szanton). Wigner said that in Hungary intuitive ideas are the most valued, but in America it is the long hard calculus of working out the consequences of ideas in detail which is valued the most.
copy of a comment:
http://riofriospacetime.blogspot.com/2008/02/smoggydayonvenus.html
“These diverse worlds could all have internal heat powered by central Black Holes.”
Hi Louise,
I’ve got a huge number of questions about this. If planets have black holes in their centres, why doesn’t the mass get sucked in, and how does this produce just the right amount of heat?
Since Hawking radiation in the mainstream picture is just gamma radiation with a blackbody (Planck) spectrum, resulting from gamma rays which originate near the event horizon from the annihilation of charged virtual fermions, small black holes would evaporate quickly by emitting radiation (according to the mainstream model), unless mass endlessly falls into them.
Are you sure that this is a stable system? It looks pretty unstable to me: either the black hole will get converted completely into gamma rays (Hawking radiation) and disappear quickly, or it will grow rapidly by swallowing up the planet and then other planets, the sun, etc.
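The mainstream claim that small black holes evaporate almost instantly can be checked numerically with Hawking's standard temperature formula, T = ħc³/(8πGMk_B), and the textbook photon-only evaporation-time estimate, t ≈ 5120πG²M³/(ħc⁴). A rough sketch for an electron-mass hole (standard SI constants; this is only the mainstream calculation, not an endorsement of the mechanism):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
kB = 1.380649e-23       # Boltzmann constant, J/K

def hawking_temperature(M):
    """Hawking temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def evaporation_time(M):
    """Standard photon-only evaporation-time estimate (seconds)."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

m_electron = 9.1093837015e-31  # kg
print(hawking_temperature(m_electron))  # ~1e53 K
print(evaporation_time(m_electron))     # absurdly short, ~1e-106 s
```

On these formulae a particle-mass black hole would vanish on a timescale far shorter than any physical process, which is why the stability of the proposed planetary-core black holes needs explaining.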
I think that one thing that needs a lot of careful thought is a conflict between facts from quantum field theory and black hole theory.
According to quantum field theory, Schwinger showed that pair production doesn’t occur in the vacuum at arbitrary distances around matter.
It only occurs in electric fields stronger than Schwinger’s threshold for pair production. This is well established experimentally, because the radial polarization of virtual fermions from pair production around a charge accounts for the renormalization of charges, by partially shielding core charges.
If there were no limit to the amount or range of pair production, the vacuum would be able to totally shield (not partly shield) all electric charges, so that all electric fields would be quenched within a short distance (rather than merely falling off through the geometric divergence of field lines, which gives the infinite-range inverse-square law).
Schwinger’s threshold is 1.3×10^18 V/m, which occurs out to a radius of 33 femtometres from the middle of an electron. Electrons will therefore be black holes radiating some kind of “photons” from their black hole event horizon radius of 2GM/c^2, assuming that they have no larger-scale structure. (String theory is based on the idea that electron cores are Planck-scale strings, but there’s no evidence for anything existing at the Planck scale; it’s just a size from dimensional analysis and isn’t physically based. It isn’t even the smallest unit of physical length, which is actually the event horizon radius for an electron’s mass, far smaller than the Planck length.)
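Both length scales quoted here are easy to reproduce: the radius at which an electron's Coulomb field falls to Schwinger's threshold, and the event-horizon radius 2GM/c² for the electron's mass. A minimal check with standard SI constants:

```python
import math

e = 1.602176634e-19     # elementary charge, C
k = 8.9875517923e9      # Coulomb constant, N m^2 / C^2
G = 6.67430e-11         # gravitational constant
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg
E_c = 1.3e18            # Schwinger threshold field, V/m

# Radius where the electron's Coulomb field equals the Schwinger threshold:
# E = k e / r^2  =>  r = sqrt(k e / E_c)
r_schwinger = math.sqrt(k * e / E_c)
print(r_schwinger)  # ~3.3e-14 m, i.e. about 33 femtometres

# Black-hole event-horizon radius for the electron mass:
r_horizon = 2 * G * m_e / c**2
print(r_horizon)    # ~1.35e-57 m, vastly smaller than the Planck length (~1.6e-35 m)
```

The 33 fm figure and the claim that the electron's horizon radius is far below the Planck length both drop straight out of this arithmetic.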
If you apply Schwinger’s threshold field strength for pair production to black hole Hawking radiation, you can see that no Hawking radiation can occur from an electrically neutral black hole.
In order for Hawking radiation to be emitted, pair production must occur all the way out to the event horizon, so that virtual pairs of fermions created near the event horizon can become real by the mechanism of one of the pair falling into the black hole, while the other particle escapes beyond the event horizon.
This is only possible if there is an electric field of over 1.3×10^18 V/m at the event horizon (equation 8.20 in the http://arxiv.org/abs/hep-th/0510040 lectures on quantum field theory; equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 book on QED).
Hence, the whole basis of Hawking’s theory is undermined by QED, which implies that charge renormalization demands a cutoff to the amount of pair production and vacuum polarization.
The vacuum is not full of virtual charges; these only occur where a static electric field exceeds 1.3×10^18 V/m.
Large black holes which are electrically neutral therefore emit exactly zero Hawking radiation, because there is no electric field at the event horizon and consequently no mechanism for Hawking’s pair production mechanism to create annihilation gamma rays.
But it is more interesting is when you treat the electron as a black hole.
It does radiate, and in an interesting way. Because it is charged, it has an adequately strong electric field at its event horizon to enable it to radiate, but – also because it is electrically charged – it doesn’t radiate by the same mechanism as envisaged by Hawking.
Hawking’s mechanism for black hole radiation is completely fictional, because there is no fermion pair production without a strong electric field. Fermions of opposite charges must accumulate beyond the event horizon radius in Hawking’s model, to enable them to annihilate forming gamma rays (Hawking radiation). This can’t happen in reality.
Because only electrically charged black holes can radiate, the electric charge on the black hole will prejudice which charge escapes.
E.g., pair production of fermions near the event horizon of a negatively charged black hole will result in fermions of electric charge opposite to that of the black hole tending to fall into it, while fermions of the same sign as the black hole will tend to escape.
This mechanism will act towards neutralising the electric charge which exists within a black hole, while charges accumulate outside the event horizon which are of similar sign to the original charge of the core of the black hole.
Once this has occurred, and the core of the black hole has become electrically neutral while its charge has in effect been transferred beyond the event horizon by this subtle pair-production process, the black hole will no longer possess an electric field capable of causing pair production at its event horizon.
Specifically, once a black hole electron causes one virtual electronpositron pair to be created near its event horizon, the virtual positron will tend to fall into the event horizon and neutralise the real electron’s core.
The virtual electron will escape and become a real electron, then will repeat the process.
If this process is occurring all the time to electrons, it would go some way towards explaining the lack of determinism of electron motions on very small scales, such as inside the atom.
copy of a comment:
http://keamonad.blogspot.com/2008/02/againstsymmetry.html
Thanks for this interesting attack on symmetry groups, Kea. However, I can’t really agree with writing like this:
“… symmetry, on its own, explains nothing at all. The 20th century idea that (standard) model building has sufficient explanatory power in itself is hopelessly inadequate for tackling the problems of quantum gravity. Lots of smart people tried this idea (Lie group based GUTs) and they failed. Did anyone notice? This idea FAILED!”
Who cares if “a lot of smart people failed”? If they failed, maybe that fact simply implies that they weren’t as smart as they (or their professors and their fans) claimed, rather than implying that searching for symmetry is a waste of time.
A lot of smart people can all be wrong, particularly if they all use variants of the same thinking (which happens in today’s great educational environment, which manufactures clones).
“Consider some basic examples of symmetry groups and their representations: say rotations of a sphere. One easy way to shift to a larger group is to increase the dimension of the sphere. But in doing so, observe that nothing in the underlying geometry of the space has been enriched.”
The key thing from my perspective is that – as Carl Brannen points out – the basic symmetries are broken. Why is the weak isospin force left-handed? Why is electroweak symmetry broken? Why does the SU(3) part of U(1) x SU(2) x SU(3) only apply to quarks, not leptons? In each case, the mainstream answer is broken symmetry, i.e. mechanisms for breaking symmetry.
When you have so many exceptions (breaks to symmetry), is it really worthwhile insisting that the universe is based on symmetry? Clearly, most of the most important things in physics are based on broken symmetry, i.e., asymmetry, which is a different story.
“One easy way to shift to a larger group is to increase the dimension of the sphere. But in doing so, observe that nothing in the underlying geometry of the space has been enriched.”
Maybe the point of shifting to SU(5) (or whatever the GUT is supposed to be) is that it looks simpler, so the groupthink mentality can announce that nature holds a “deep beautiful simplicity”, SU(5) or whatever. It doesn’t address symmetry breaking, the left-handedness of the weak force, or anything else already known for sure.
We know electromagnetic, weak and strong interactions exist, and if U(1), SU(2) and SU(3) describe these interactions, as they’re supposed to (although U(1) looks like a joke to me, because I think there is evidence for a more complex structure behind electromagnetism), then symmetry does at least model stuff.
The description of mesons (quark-antiquark pairs) by SU(2) and the description of baryon properties by SU(3) do seem to indicate that SU(2) as a description of weak interactions, and SU(3) as a description of strong interactions, are sound.
“But symmetry, on its own, explains nothing at all.”
Symmetry in particle physics is abstract, it’s a mathematical description, not an explanation, so I presume the point you are making about Lisi’s use of E8 is that it’s not an explanation but just a way to model things abstractly.
You’re not going to get anywhere along that road, because the word “explain” is meaningless in physics, which is just about making calculations if you’re genuine (or blathering about largely uncheckable speculations if you’re not).
“In this scheme, what is the symmetry group of a point? You don’t know? Shouldn’t we actually understand this if we want our spaces to be associated with physical spacetime and matter’s internal degrees of freedom?”
I think the key priority is understanding how to link up a field Lagrangian formulation of say electromagnetism, to a symmetry group. I don’t think it is currently being done correctly.
E.g., what is the symmetry group representing quantum gravity?
The Standard Model was constructed from empirical observations of three fundamental forces. Why not just add a fourth?
What’s wrong with this idea? Why aren’t people doing that? Is it because they can’t find the correct Lagrangian for gravity? Most people think gravity is mediated by spin-2 particles, so that like gravitational charges (mass, energy) always attract, rather than repel as occurs with spin-1 particles mediating forces between similar charges.
I think U(1) is wrong in the S.M. because it is built on a flawed classical model of electromagnetism – the Lagrangian contains Maxwell’s equations in tensor form.
This covers up errors in the underlying physics. The chief error in Maxwell’s equations is the oversimplification inherent in displacement current, i.e. the alleged polarization of the vacuum resulting in a displacement of virtual charges. We know that this doesn’t happen, because Schwinger showed that the vacuum can’t polarize at low energy (you need strong electric fields, above 10^18 V/m, to get pair production of polarizable charges in the vacuum).
So clearly, there is a more subtle mechanism at work which produces the illusion of displacement current. From evidence in practical electromagnetism – i.e., the propagation of a logic signal guided by conductors occurs at the velocity of light in the insulating medium between the conductors – it seems to be due to exchange radiation consisting of charged, massless bosons. The conventional explanation is that some displacement current flows in the vacuum from one charged conductor to the other (oppositely charged) conductor as the signal passes every point. However, since the field strengths are below Schwinger’s threshold for pair production, that can’t be the answer.
The only way to account for the behaviour of a logic signal is then to throw away Maxwell’s displacement-current model and replace it with a radiation exchange between the conductors as the signal passes. The radiation emulates the mathematical relation that Maxwell gave in some circumstances, but Maxwell’s law is only a crude oversimplification in others. To describe the exchange radiation between the two conductors which replaces the classical theory of displacement current, quite a lot of new physics – closely linked to quantum field theory – is needed. E.g., the exchange radiation needs to be described by a Lagrangian and evaluated with a path integral.
In order to make this work, it appears that a change is needed to the usual Abelian U(1) formulation of electromagnetism in quantum field theory.
The key problem here is seeing how this relates to the SU(2) Lagrangian in the Standard Model; e.g., see section 8.5 (pages 298-306) in Ryder’s Quantum Field Theory, 2nd ed. Mathematically, Ryder’s SU(2) Lagrangian description is within my grasp, but trying to actually understand it is hard.
For example, the U(1) symmetry (from mainstream electromagnetism) is supposed to imply a weak hypercharge. I can’t understand from reading Ryder (let alone from reading Weinberg’s even more mathematically fuzzy writings in his three volumes on QFT) what the experimental evidence for each of the interpretations in the U(1) model is supposed to be. What is weak hypercharge, physically? Is there any direct evidence that it physically exists, or is it just a useful mathematical description like conductivity, entropy, frequency, etc.? Those things don’t physically exist the way fundamental particles or electric charges do; they are just descriptions of effects. Simply giving a name to something doesn’t tell you whether there is an underlying mechanism, let alone what that mechanism (if one exists) is.
Physicists should be concerned with either mechanisms or equations. If they don’t believe that mechanisms exist, or are too lazy to investigate them, fine: let’s just see the equations and see where the equations come from physically, i.e. what the experimental evidence for the equations is.
If you say this in quantum field theory, you get given lots of equations, but the key insights are clearly not directly supported by experimental evidence. The thousands of tests of the Standard Model don’t prove that the model used to describe electromagnetism in it is correct, any more than they check anything to do with the Higgs field. It’s pretty obvious that the Standard Model, being based on experimental evidence from particle physics, is an accurate quantitative model for such physics. If it wasn’t, then it wouldn’t have been constructed that way. But is it the correct model, or is it to some extent an epicycle-type model?
copy of a comment:
http://riofriospacetime.blogspot.com/2008/02/smoggydayonvenus.html
Hi Louise,
Thank you very much for your reply, which is very thought provoking.
“Treating the electron as a Black Hole is an approach I have tried before. Einstein wondered what the lowly electron really is. Like a Black Hole the electron can be described by charge, mass and spin. In the electron they are restricted to quantum values.”
I think that the mainstream objection is that the black hole event horizon for an electron is so small, way smaller than the Planck scale, that they’re sure that quantum field theory can’t apply there.
The high energy cutoff (UV cutoff) needed to make quantum field theories ignore the unphysically massive momenta of virtual particles as you reach very high energies such as the Planck scale (very small distances between colliding particles) seems to suggest that there is a grain size to the vacuum (assumed to be the Planck scale), and that smaller sizes than the Planck scale are meaningless because there is nothing on smaller scales.
However, from the equations of quantum field theory I’ve seen, there is no real evidence for this at all. It’s clear that there is a need for a cutoff on a logarithmic term for a force coupling strength, but there is no evidence that the reason for such a cutoff is that nothing exists at smaller distances than the Planck scale.
If you look at a plot of how the three fundamental forces of the standard model approach each other (either with or without supersymmetry), they appear to approach one another at a size much bigger than the Planck scale.
I think the whole problem there is mathematical symmetry-searching winning out over physical laws like conservation of mass-energy.
What they should be doing (instead of assuming that Standard Model forces all become the same at one point using supersymmetry, implying that forces “unify” because their strengths become similar) is to concentrate on how conservation of the field’s mass-energy occurs.
E.g., an electromagnetic field of given strength has a given amount of energy per cubic metre, as do weak and strong force fields. What is the physical reason why the strong force coupling increases with increasing distance (up to a certain point), while the electromagnetic force coupling decreases? Clearly, from conservation of mass-energy and the mainstream model of a force field as mediated by exchange of gauge bosons, the mechanism is that the polarization of the vacuum attenuates some of the electromagnetic field, and this attenuated energy is used for pair production of short-ranged particles in the vacuum. Presumably some of this energy, transferred from the electromagnetic field to virtual short-ranged particles, is eventually converted into the gluon and meson fields which power the strong nuclear force. This would explain why the strong force coupling parameter varies in the opposite way to the electromagnetic coupling parameter as you get closer to or further away from a quark. Instead of investigating such physical mechanisms, the mainstream approach is to search for mathematical beauty and, after failing to find it, to attempt to fabricate such beauty where it doesn’t exist by constructing string theories which can’t connect to reality.
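The premise that a field of given strength carries a definite energy per cubic metre is just the classical electric-field energy density u = (1/2)ε₀E²; for illustration, here it is evaluated at an ordinary laboratory-scale field and at the Schwinger threshold field quoted earlier:

```python
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def em_energy_density(E):
    """Classical electric-field energy density u = (1/2) * eps0 * E^2, in J/m^3."""
    return 0.5 * eps0 * E**2

print(em_energy_density(1.0e6))   # ordinary lab-scale field (1 MV/m): ~4.4 J/m^3
print(em_energy_density(1.3e18))  # Schwinger threshold field: ~7.5e24 J/m^3
```

The enormous energy density at the Schwinger threshold gives a sense of how much field energy is available near a particle core for the vacuum processes discussed above.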
copy of a comment:
http://asymptotia.com/2008/02/22/alltoofamiliar/
My A-level physics teacher at school was a female, as was my A-level maths teacher [in England]. My undergraduate “personal tutor” was a female with a PhD, who was the department head. Maybe America and Kea’s part of the world down under are simply more backward than England.
The Harvard president (Lawrence Summers) referred to by “anonymus I.” above was clearly trying to get his 15 minutes of fame through notoriety, having failed to achieve fame ethically through constructive behaviour.
copy of a comment:
http://asymptotia.com/2008/02/22/alltoofamiliar/
“Excellent scientific reasoning nige. The results of your own personal sampling trumps everything else. Nicely argued.”
Clifford, thanks, yet personal experience is no more “argued” by me to be a piece of “scientific reasoning” than putting a cartoon on a blog is “argued” by you to be a piece of scientific reasoning.
I made a suggestion that since the complaints I’ve seen [of] sexism in science are coming from the USA and Australia, and since over my life I’ve seen female scientists and teachers held only in respect in England at all levels during my education, “maybe” the problem is regional to the areas where people do have complaints! Of course I may be totally wrong – maybe everything I’ve seen and heard in education over many years may be […] totally misrepresentative of the situation elsewhere in England – but I haven’t actually claimed that. I’ve merely suggested that the problem may be a regional one, localised to […] geographic regions where the complaints of sexism in science appear to be coming from. I didn’t claim to make […] a scientific study, and made clear from my wording where my information came from.
MikeinAppalachia: the personal tutor is someone who about ten students have regular meetings with to discuss general matters. I suggest you reread my comment if you can’t grasp the relevance.
Summers’ general statement that females are “less strong” in sciences is not an opinion or a suggestion, it is stated by him as if it is fact. Generalized patronising statements about women are certain to grab media attention.
Since making such an inaccurate statement would lead to controversy, I [argue] that would explain the facts. Yes, Summers had a degree of “fame” as President of Harvard and from his work as an economist up to then, but that hardly [g]ave him the media attention widely across society that he got out of his controversial comment. The media eagerly reports such statements from prestigious academics.
http://asymptotia.com/2008/02/22/alltoofamiliar/
(apologies for the grammatical errors, it’s late)
copy of a comment
http://asymptotia.com/2008/02/22/alltoofamiliar/
‘You, on the other hand, are using your single data point to try to invalidate the collective experience of thousands of others. … There are umpteen women I know who would happily tell you about their similar experiences there, if you’d care to listen.’
Clifford, thanks again. Giving my personal experience, and making a suggestion, is not – in my opinion – exactly the same thing as trying to invalidate the experience of others. I always listen to others, and have been doing so all my life. It is on this basis that I made the comment. If you’d care to reread what I wrote, maybe you’d see that I’m not trying to invalidate other people’s experiences.
I’m going to have to carefully study Woit’s lectures at http://www.math.columbia.edu/~woit/LieGroups/ e.g. http://www.math.columbia.edu/%7Ewoit/RepThy/repthynotes1.pdf through http://www.math.columbia.edu/%7Ewoit/notes23.pdf to understand the symmetry groups in the Standard Model. Ryder (on the pages cited in previous comments above) uses 3 x 3 matrices for SU(2); the 2 in SU(2) refers to its fundamental representation of 2 x 2 matrices, but the group also has a 3 x 3 adjoint representation. Possibly also reading some of the books Dr Woit lists on that page may help:
Knapp, Anthony W., Lie Groups: Beyond an Introduction (Second Edition), Birkhäuser, 2002. The first half of this book contains a very careful discussion of many of the topics we will be covering.
Carter, Roger, Segal, Graeme, and MacDonald, Ian, Lectures on Lie Groups and Lie Algebras, Cambridge University Press, 1995. This book is at the other extreme from the book by Knapp, providing a quick sketch of the subject.
Sepanski, Mark, Compact Lie Groups, Springer-Verlag, 2006. This book gives a detailed discussion of one of our main topics, the representations of compact Lie groups, leading up to the Borel-Weil geometrical construction of these representations.
copy of a comment:
http://asymptotia.com/2008/02/22/alltoofamiliar/
“Sexism in the UK is far from undocumented, so I would query how hard you are looking for your information.”
I […] wrote from lifetime personal experience in the UK of seeing how my female teachers and professors were respected:
“My A-level physics teacher at school was a female, as was my A-level maths teacher in England. My undergraduate “personal tutor” was a female with a PhD, who was the department head. Maybe America and Kea’s part of the world down under are simply more backward than England.”
It is crystal clear as to where my information comes from! This makes me query how hard you looked for your information, and whether you actually read what I wrote.
“Have you asked women/minority scientists whether they ever feel isolated, unheard, invisible, or discriminated against?”
Yes, that’s why I commented. Discrimination occurs where “women/minority scientists” are isolated from the majority, and I’ve seen plenty of evidence in the U.K. that this isn’t occurring in the red brick colleges I was educated at.
“Had you tried something as simple as typing the search term “women in science UK” into Google, you would have noticed that the majority of the top hits are pages for organisations that deal with the gender inequality in the sciences.”
Gender inequality in the sciences is a sad fact, although it is fast diminishing. My comment was about gender prejudice, not gender inequality, because the cartoon on this blog is about gender prejudice, not gender inequality. There is still a smaller number of females entering the sciences, but that inequality might not be completely due to the existence of prejudice by men against women and discrimination. There are many factors involved; e.g., society to some extent tries to role-model the sexes from an early age, with females maybe pushed more into the arts, as a rule, than into the sciences. This is done by peer pressure, parents, relatives, etc. So the inequality in the numbers of the different sexes employed is not the same thing as males disrespecting females who want to enter science.
I’ve experienced prejudice due to hearing and related speech problems at an early age, so I’m interested in how it arises from groupthink (lack of thinking); and from my experience, the few women scientists in the UK whom I’ve been fortunate enough to have as teachers are highly respected people.
It may be true that there is more sexism in science in the US and Australia (i.e., the places the complaints I’ve seen come from, e.g. Harvard and Kea’s Australia/New Zealand). But in the UK, such a cartoon as produced on this blog post would not be helpful. It’s not statistically based science; it’s not even a particular piece of individual experience: it’s a message that females are prejudiced against in mathematics by males. Maybe this kind of message causes the inequality, by discouraging many females from entering science in the first place?
Summers’ controversial speech as President of Harvard on 14 January 2005 claimed:
“… if one is talking about physicists at a top twenty-five research university, one is … talking about people who are three and a half, four standard deviations above the mean [intelligence]… If you do that calculation – and I have no reason to think that it couldn’t be refined in a hundred ways – you get five to one [males per female], at the high end.”
He claimed that this calculation explains why you get more male than female scientists – there are more males of high intelligence. This is wrong, because the intelligence tests are just reflecting prejudice, not explaining it. In my own case, IQ scores when I couldn’t hear/understand or speak properly at school were over 20 points lower than they were once I got treatment. In studies of identical twins (genetically and gender identical), environmental factors cause differences as large as 20 points or so; on average, only 80% of IQ is innate. Females aren’t less intelligent. The percentage difference in IQ tests correlates with the proportion of questions requiring fairly deep immersion, for several years, in maths, patterns of numbers, geometric shapes, etc. All the difference in such statistics tells you is that on average there are fewer women being immersed in science from a young enough age. It’s not genetic. As in the case of identical twins, males who (on average, through social peer pressure) get interested in maths more deeply than females do, on average, score a higher IQ just because of that acquired skill. So Summers was missing the point: the mean IQ difference is smaller than the environmental variations in IQ proved to exist in identical-twin studies. Summers’ argument that IQ explains inequality is false, because IQ differences and inequality can both result from the same cause, i.e. the social prejudice that maths and sciences are “traditionally (biasedly) male territory”.
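The tail-counting arithmetic underlying this argument is simple for a normal distribution. With illustrative, assumed numbers – mean 100, SD 15, and a 20-point environmental shift of the kind the twin studies above suggest – the fraction of the population above the "three and a half, four standard deviations" threshold Summers quoted changes by nearly two orders of magnitude, which is the point being made against his interpretation:

```python
from math import erf, sqrt

def tail_fraction(mean, sd, threshold):
    """Fraction of a normal(mean, sd) population scoring above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# "Three and a half, four standard deviations above the mean": take 3.75 SDs.
threshold = 100 + 3.75 * 15

# Illustrative 20-point environmental difference in mean IQ:
low = tail_fraction(100, 15, threshold)
high = tail_fraction(120, 15, threshold)
print(high / low)  # roughly 90: an environmental shift alone dwarfs Summers' 5:1 ratio
```

So a purely environmental 20-point shift multiplies the far-tail population far more than the factor Summers attributed to innate differences, consistent with the argument above that tail ratios cannot distinguish innate from environmental causes.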
My point in my previous comment is totally different, however. Far from claiming that no numerical inequality exists, all I argued was that, from personal experience, those females who are in science are respected. If I’m correct, maybe there is a selection effect, whereby all the females who wanted to be in science but aren’t ended up working outside science as a result of having been discriminated against (i.e. only the hardiest survived to take up a career in science). Maybe to some extent it is due to legislation in the UK which protects females from discrimination and dates back to 1975. I don’t know how much better the sexual equality law is here than in the US or Australia.
copy of a comment:
http://asymptotia.com/2008/02/22/alltoofamiliar/
Sorry again for typos despite quickly proof reading the comment before submitting, but it’s 1.47am and I’m busy at the moment revising for difficult exams…
copy of a comment:
http://riofriospacetime.blogspot.com/2008/03/prediction.html
Your Tshirt quotes Einstein:
“When forced to summarise the General Theory of Relativity in one sentence, Time and Space and Gravity have no separate existence from Matter.” – Albert Einstein
Then you give the equations R = ct and GM = tc^3. One way to get GM = tc^3 from general relativity is by applying mass-energy equivalence from special relativity to the equivalence principle between inertial and gravitational masses in general relativity:
inertial mass = gravitational mass
=>
inertial mass-energy = gravitational mass-energy
mc^2 = mMG/R (for a nice graphical illustration of this equivalence, see for example http://www.gravity.uk.com/galactic_rotation_curves.html).
Inserting R = ct into this equivalence of mc^2 = mMG/R gives you:
tc^3 = MG.
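The algebra of this substitution can be verified symbolically; a minimal sketch using sympy (this only checks the algebra above, not the physical interpretation):

```python
import sympy as sp

m, M, G, c, R, t = sp.symbols('m M G c R t', positive=True)

# Inertial mass-energy equals gravitational potential energy: m c^2 = m M G / R
eq = sp.Eq(m * c**2, m * M * G / R)

# Substitute R = c t and solve for M:
M_sol = sp.solve(eq.subs(R, c * t), M)[0]
print(M_sol)  # M = c**3 * t / G, i.e. G M = t c^3 as stated
assert sp.simplify(G * M_sol - t * c**3) == 0
```

Note the mass m cancels, as it must under the equivalence principle, so the result G M = t c^3 is independent of the test mass.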
For the benefit of people who think that general relativity is not physics but merely a tensor mathematical representation of extra dimensions, there are various other compressions of general relativity of interest. E.g., Wheeler says that matter determines the curvature of spacetime, and the curvature explains the motion of matter/energy.
This is a description of the tensor maths for any theory of gravitation, however, not specifically the unique Einstein-Hilbert field equation, which subtracts from the Ricci tensor half the product of the metric tensor and the trace of the Ricci tensor (i.e., the scalar sum of the diagonal components, from top left to bottom right, of the matrix representing the Ricci tensor, R_{mn}).
This correction is the brains of general relativity; without it, the deflection of starlight by gravity would be the same as the deflection of a bullet according to Newton’s law, not twice that amount. Writing Newton’s gravity law in terms of curvature and mass-energy tensors gives something of the general equivalence (ignoring dimensionless factors like 4*Pi):
R_{mn} = T_{mn}.
This simple statement of gravity in terms of a curvature tensor is physically wrong, because the divergence of the source term T_{mn} should always be zero in order to conserve mass-energy (as explained here).
As explained on that just-linked page, this is analogous to the fact that the divergence of any application of a curl operator must be zero. Although you can prove it using algebra, you really don’t need to: a curl operator produces a field line which forms a closed loop – often a circle, but always a closed loop. A closed loop has no divergence; the total sum of the vectors (small arrows) representing all the differential elements of a field line forming a closed loop is always zero. In order to have divergence, you would have to abandon closed loops, and therefore abandon curls.
Similarly, the geodesics described by the stress-energy tensor, where mass-energy is conserved, are in a sense closed loops. The Earth isn’t always at a fixed distance from the sun; it is closest in January, when it has more kinetic energy than average and less gravitational potential energy than average. Although objects can fall, this doesn’t contravene conservation of mass-energy: all that happens is a conversion between kinetic energy and gravitational potential energy.
So in general relativity the stress-energy tensor is not set equal to the Ricci curvature, but has to have a correction factor subtracted from it to make the resulting model obey the conservation of mass-energy: [T_{mn}] – [(1/2)(g_{mn})T], where T is the trace of T_{mn} (i.e., T is simply the sum of energy and pressure components, T = T_00 + T_11 + T_22 + T_33).
As explained on the page linked here, this expression for the Ricci curvature, R_{mn} = [T_{mn}] – [(1/2)(g_{mn})T], is mathematically equivalent to the better-known (but physically less intuitive) form of the Einstein-Hilbert field equation:
R_{mn} - [(1/2)(g_{mn})R] = [T_{mn}]
This doubles the amount of curvature (and thus the light-deflection angle) you get when travelling at the velocity of light, compared to that at non-relativistic velocities. Hence it was this particular correction that enabled Einstein to correctly predict that starlight travelling near the sun gets deflected by twice the angle predicted for a bullet or other non-relativistic small mass moving along the same trajectory of approach to the sun.
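The equivalence of the two forms is easy to verify numerically. The following sketch (Python with numpy, using the flat Minkowski metric as a stand-in for g_{mn}, since the identity is purely algebraic and holds pointwise) checks that substituting R_{mn} = T_{mn} - (1/2)g_{mn}T back into R_{mn} - (1/2)g_{mn}R reproduces T_{mn}:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric; equals its own inverse
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
T = (A + A.T) / 2                     # an arbitrary symmetric "stress-energy" tensor

trace_T = np.einsum('ab,ab->', g, T)  # T = g^{mn} T_{mn}
R = T - 0.5 * g * trace_T             # R_{mn} = T_{mn} - (1/2) g_{mn} T

trace_R = np.einsum('ab,ab->', g, R)  # trace reversal flips the sign: trace(R) = -trace(T)
assert np.allclose(R - 0.5 * g * trace_R, T)  # R_{mn} - (1/2) g_{mn} R = T_{mn}
print("equivalence of the two forms verified")
```

Because g^{mn}g_{mn} = 4, subtracting half the metric times the trace sends the trace to minus itself, which is why applying the same subtraction to R gets you back to T exactly.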
Feynman, in his Lectures on Physics, explains the extra-dimensional curvature as the result of a contraction of spacetime which is produced by the existence of matter (including energy, pressure, etc.). If for simplicity (or as a first approximation) the matter (or energy field) can be treated as a uniform-density sphere, then the radius is contracted by (1/3)MG/c^2, which is one-sixth of the horizon radius for a black hole of that mass.
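To get a feel for the size of Feynman’s (1/3)MG/c^2 contraction, here is a quick sketch in Python (the rounded constants are my own assumptions, not figures from Feynman):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

for name, M in [("Earth", 5.972e24), ("Sun", 1.989e30)]:
    contraction = G * M / (3 * c**2)   # Feynman's radial contraction, (1/3)MG/c^2
    horizon = 2 * G * M / c**2         # black hole horizon radius for the same mass
    print(f"{name}: contracted by about {contraction:.3g} m "
          f"(1/{horizon / contraction:.0f} of the horizon radius)")
```

For the Earth this comes out to roughly 1.5 mm, and for the sun roughly 500 m; in each case it is exactly one-sixth of the horizon radius, since 2GM/c^2 divided by GM/(3c^2) is 6.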
The whole point of treating the time dimension as equivalent to an extra (fourth) spatial dimension in general relativity, is to mathematically model this contraction of spacetime radii around sources of gravitational fields.
Because it is the radial distance alone that is being contracted by the gravitational field, and not the circumference lines around a mass or energy field, you get geometric distortion if you try to continue using Euclidean geometry to model the spatial extent of a mass accurately: the circumference is greater than 2*Pi*R, because R is contracted by gravitation while the circumference is not (being at right angles to the gravitational field lines, it is unaffected).
So by mathematically accounting for the extra curvature in terms of an extra dimension using time as a fourth spatial dimension, any actual physical mechanisms for real contraction of radial distance (suggesting the correct physical dynamics behind quantum gravity) are neatly ignored in general relativity, and everyone can live happily ever after (ignoring the physical dynamics, but content with a physically incomplete mathematical model of gravitation).
copy of a comment submitted (in moderation) to:
http://carlbrannen.wordpress.com/2008/03/03/pascalstriangleandlisise8quantumnumbers/
Thanks for this very interesting post, Carl. This gives me a lot of clear, vital facts, although quite a lot of it is far too mathematically abstract and physically abstruse for me. Firstly, isospin is very abstract. What physically is the isospin charge?
I know historically Heisenberg is supposed to have suggested isospin symmetry in 1932 when the neutron was discovered, because neutrons and protons (nucleons) had apparently identical nuclear properties and very similar masses, despite having grossly different net electric charge.
So isospin charge was at first supposed to be the strong nuclear charge that binds nucleons together in nuclei. Then, after the Yang-Mills theory was invented and used to improve Fermi’s theory of beta decay, isospin became the name for the weak interaction charge.
But what about weak hypercharge, which is so closely related to the electric charge in the Standard Model?
The Wiki page you link to, http://en.wikipedia.org/wiki/Weak_isospin , makes a very interesting claim:
“W^0 boson (T_3 = 0) would be emitted in reactions where T_3 does not change. However, under electroweak unification, the W^0 boson mixes with the weak hypercharge gauge boson B, resulting in the observed Z^0 boson and the photon of Quantum Electrodynamics.”
The key thing here is the mention of the “weak hypercharge gauge boson B”.
So the electroweak theory U(1) x SU(2) doesn’t just have 1 + 3 gauge bosons (photon, W^+, W^-, Z^0); it also has a B hypercharge gauge boson!
But isn’t that identical to the photon? How is the B hypercharge gauge boson supposed to differ from the photon? If it doesn’t differ from the photon, then surely it is the photon, and in that case electric charge (mediated by B bosons) would be indistinguishable from weak hypercharge.
I don’t see physically how U(1) is supposed to represent two things, electric charge and hypercharge, without having a separate gauge boson. If it has a separate gauge boson, then shouldn’t the standard model be written as U(1) x U(1) x SU(2) x SU(3), instead of U(1) x SU(2) x SU(3)?
The standard discussions in textbooks (I think it is in Weinberg, Zee and Ryder) say that hypercharge is fundamental and electric charge is just an aspect of that. However, they don’t provide any physical evidence for that view so it’s just a belief. The Standard Model has been formed as a combination of U(1) electromagnetic, SU(2) weak isospin, and SU(3) colour charge gauge interactions, plus some form of Higgs field to provide mass and electroweak symmetry breaking.
I don’t see how (as Wikipedia claims): “W^0 boson mixes with the weak hypercharge gauge boson B, resulting in the observed Z^0 boson and the photon of Quantum Electrodynamics.”
This sounds very vague and speculative to me. Is there any physical evidence for it? Surely the W^0 is just a label for Z^0? I know that evidence for the Z^0 (neutral currents, involving exchanges of Z^0 gauge bosons between charges) was discovered experimentally in the 1970s, and that in 1983 CERN discovered concrete evidence for the Z^0, W^+ and W^- weak gauge bosons, but what about the alleged weak hypercharge gauge boson, B? Is there any direct evidence for B?
It’s pretty sad that it’s hard to distinguish the fact-based physics from the speculation in the Standard Model. It’s all treated alike in textbooks.
I can grasp how SU(N) gives rise to a matrix of N x N gauge bosons, e.g. red, blue, green for N = 3 gives the following 3 x 3 matrix for SU(3):
{red-antired, red-antigreen, red-antiblue}
{green-antired, green-antigreen, green-antiblue}
{blue-antired, blue-antigreen, blue-antiblue}
or
{rar, rag, rab}
{gar, gag, gab}
{bar, bag, bab}
I can even grasp physically why these 9 gauge bosons are too many for any real interaction between two charges (one of the gauge bosons can’t have the right colour combination to contribute in any given interaction, so it appears colourless and only 8 can effectively contribute, http://math.ucr.edu/home/baez/physics/ParticleAndNuclear/gluons.html ).
Applying this to SU(2) gives the following. Weak isospin charge has values of +1/2 and -1/2. Therefore the 2 x 2 matrix of gauge bosons for SU(2) should be:
{+1/2 - 1/2, +1/2 + 1/2}
{-1/2 - 1/2, -1/2 + 1/2}
=
{0, +1}
{-1, 0}
So two of these are zero-isospin gauge bosons, which are clearly the same thing, so the 4 components of this matrix only give 3 distinct elements: 0, +1, and -1 isospins.
Then the W^0 or Z^0 gauge boson is that with 0 isospin, the W^+ gauge boson has +1 isospin, and the W^- gauge boson has -1 isospin.
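The bookkeeping in that 2 x 2 matrix is trivial to reproduce (a toy sketch of the arithmetic of pairing isospin charges with anticharges, nothing more):

```python
isospins = [+0.5, -0.5]   # the two weak isospin charge values

# each matrix element pairs a charge t with an anticharge -s, giving t - s
matrix = [[t - s for s in isospins] for t in isospins]
print(matrix)             # [[0.0, 1.0], [-1.0, 0.0]]

distinct = sorted({x for row in matrix for x in row})
print(distinct)           # [-1.0, 0.0, 1.0]: the W^-, W^0/Z^0 and W^+ isospins
```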
There is absolutely no problem with this, which is experimentally confirmed. I just wonder whether weak hypercharge is real, or is just a mathematical transmogrification of the values of weak isospin and electric charge?
I don’t believe in U(1) x SU(2) as being an electroweak unification. If nature is simple, instead of having a 4-polarization photon as the gauge boson for one kind of electric charge in U(1), as occurs in the Standard Model, you would need a theory like SU(2) which has two kinds of electric charge (positive and negative), with force fields mediated by charged gauge bosons. This will produce repulsion of like charges, because the directly exchanged gauge bosons will make similar nearby charges recoil apart. Attraction of unlike charges then arises because the unlike charges shield and shadow one another from the charged exchange radiation coming from the rest of the universe. It can easily be proved from the vector sum that the force of attraction between unit unlike charges is equal in magnitude but opposite in sign to that between unit like charges.
So it’s tempting to suggest that SU(2) with massless charged gauge bosons is actually electromagnetism. This merely requires the removal of U(1) electromagnetism from the Standard Model, plus a change in the Higgs field so that it doesn’t give mass to all charged SU(2) gauge bosons at low energy, only to a certain number of them at high energy so that weak interactions occur between particles with lefthanded spin.
It’s interesting that charged massless gauge bosons can actually propagate in the vacuum as exchange radiation (because the magnetic field self-inductance gets cancelled out if such bosons are streaming in two opposite directions in the vacuum between charges), even though they can’t propagate along a one-way route in the vacuum due to infinite self-inductance.
Now, if we consider this carefully, for the massless gauge boson SU(2) you get electromagnetism from two charged gauge bosons, and the neutral, massless gauge boson of SU(2) can either play the role of weak hypercharge or of something else like gravity.
Initially I thought that it was the graviton, but now I’m wondering if it’s the weak hypercharge boson, B, if that is a physically real concept and if the details work out; e.g., the mechanism I suggest for electromagnetism requires two charged gauge bosons, so the neutral massless gauge boson might well not have the right properties to give the physically known facts about weak hypercharge, although I’m having difficulty finding any direct physical evidence of weak hypercharge.
Is there any evidence of weak hypercharge from Fermi’s theory of beta decay? The basic idea as summarised at places like http://hyperphysics.phyastr.gsu.edu/hbase/quantum/fermi2.html doesn’t seem to physically invoke weak hypercharge.
If weak hypercharge isn’t a physical necessity from experimental evidence, knowing that would clear up my confusion and suggest to me that SU(2) with massless gauge bosons is the correct symmetry group for quantum gravity and electromagnetism, and allow concentration on a detailed predictive mechanism (to replace the Higgs theory) by which mass is given to such bosons to give rise to the lefthanded weak force.
copy of a comment in moderation queue to:
http://www.math.columbia.edu/~woit/wordpress/?p=662
Guillen’s claim that the widespread belief in multiple universes justifies religion is one that Leonard Susskind intelligently tried to forestall by attacking religion, as indicated by the title of Susskind’s book: The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. Susskind argues that string theory is justified because the string theory landscape of 10^500 metastable vacua explains the surprising features of our particular world by using the anthropic selection principle, and this theory is totally incompatible with that of intelligent design by God.
Susskind’s use of string theory against religion should be one of the major selling points for string theory, and it makes intelligent design skeptics like Richard Dawkins pay close attention to the use of the stringy cosmic landscape as being ‘scientific evidence’ that intelligent design is junk. An early Dawkin[s] argument (that the accuracy of quantum theory is evidence for multiple universes) is on video: http://www.ted.com/index.php/talks/view/id/98
The real question is whether there is any way of getting proof for the existence of a string landscape so that it can disprove religion?
copy of a comment:
http://keamonad.blogspot.com/2008/03/darkside.html
New Scientist is a mainstream journal with two primary markets: job advert listings for students and “wacky” science articles for trade consumption, although for marketing purity the current editor […] Jeremy Webb […] in an interview published by The Hindu here (where he gets his picture published!), is quoted as saying:
‘Scientists have a duty to tell the public what they are doing… ’
However, he (following his predecessor Alun M. Anderson) doesn’t publish science per se if it’s not financially expedient to publish it, i.e. if the science is being censored by the mainstream. For example, as Catt complains here, they claim they will take an interest if an idea gets past peer review and generates interest, but then they won’t reply when you respond that Catt’s work was published in a peer-reviewed IEEE journal in Dec. 1967 and was also discussed in articles or letters in almost every issue of Wireless World from 1978-88; instead Jeremy publishes stuff that makes the magazine money in the short term, as he reveals in a Daily Telegraph article, which quotes his editorial cynicism:
‘Prof Heinz Wolff complained that cosmology [dark energy belief systems, etc.] is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’ (Emphasis added.)
On 30 August 2004, Jeremy emailed me to show off about his big-name star writers:
‘Paul Davies writes for us between zero and three times a year, writing as much about biology these days as he does about physics. He is invited to write.’
Helene Guldberg in an article for Spiked Science on 26 April 2001 reported that Jeremy Webb’s behaviour had been sarcastic and rude towards her and others who disagreed with the New Scientist during ‘the horrendous event that was the New Scientist’s UK Global Environment Roadshow’:
‘Webb asked – after the presentations – whether there was anybody who still was not worried about the future. In a room full of several hundred people, only three of us put our hands up. We were all asked to justify ourselves (which is fair enough). But one woman, who believed that even if some of the scenarios are likely, we should be able to find solutions to cope with them, was asked by Webb whether she was related to George Bush!
‘When I pointed out that none of the speakers had presented any of the scientific evidence that challenged their doomsday scenarios, Webb just threw back at me, ‘But why take the risk?’ What did he mean: ‘Why take the risk of living?’ You could equally say ‘Why take the risk of not experimenting? Why take the risk of not allowing optimum economic development?’ But had I been able to ask these questions, I suppose I would have been accused of being in bed with Dubya.’
It’s the same old story of arrogant stupidity:
‘If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday… beetles could do that… he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!’
– Oliver Heaviside, Electromagnetic Theory Vol. 1, p337, 1893.
‘(1). The idea is nonsense.
‘(2). Somebody thought of it before you did.
‘(3). We believed it all the time.’
– Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle, Home is Where the Wind Blows Oxford University Press, 1997, p154).
For a good explanation of the hatred of the mainstream peerreviewers towards new ideas from outside the Party consensus, see Orwell’s book 1984:
‘A Party member … is supposed to live in a continuous frenzy of hatred of foreign enemies and internal traitors … The discontents produced by his bare, unsatisfying life are deliberately turned outwards and dissipated by such devices as the Two Minutes Hate, and the speculations which might possibly induce a skeptical or rebellious attitude are killed in advance by his early acquired inner discipline … called, in Newspeak, crimestop. Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’
copy of another comment to
http://keamonad.blogspot.com/2008/03/darkside.html
What is really weird is that the acceleration of the universe that is being accounted for by the ad hoc small positive cosmological constant is exactly what you’d expect from Hubble’s expansion rate, v = dR/dt = HR:
Acceleration, a = dv/dt = d(HR)/dt
= (H*dR/dt) + (R*dH/dt)
Since dH/dt here is zero because H is just a constant,
a = H*dR/dt
= H*v
= H*HR
which is the tiny amount of acceleration that has been observed, i.e. 6*10^{-10} ms^{-2}.
This is the tiny acceleration that shows up only on the largest distance scales, e.g. in distant supernova redshifts and gamma ray bursters.
Lee Smolin comments in his 2006 book The Trouble with Physics that the amount of cosmic acceleration is by “coincidence” numerically like the value
a = H*HR = R/t^2 = cH
and equivalents you get by inserting (for a flat universe with no apparent deceleration due to gravity on large scales), R/t = c, H = 1/t, etc.
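Putting numbers to a = Hc is straightforward (a sketch using an assumed round value of H ≈ 70 km/s/Mpc; the text itself quotes only the result):

```python
# Hubble parameter, converted from ~70 km/s/Mpc to SI units
H = 70e3 / 3.086e22      # s^-1, about 2.27e-18
c = 2.998e8              # m/s

a = H * c                # the claimed cosmological acceleration a = H(HR) = Hc
print(f"a = Hc is roughly {a:.2g} m/s^2")   # ~7e-10 m/s^2, the tiny observed value
```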
People just aren’t thinking. Why can’t they see that Hubble’s empirical law v = HR mathematically leads straight to the expression for cosmological acceleration, a = Hc?
What part of the maths above (calculus is 400 years old) is ‘speculative’? Why can’t people think for themselves?
I’m not a believer in unexplained “dark energy”, because the fact is that if you sort out quantum gravity correctly (using spin-1 gravitons, not spin-2), then gravitons cause both the observed acceleration of the universe on large scales and simultaneously cause attraction between masses and energy on smaller scales.
copy of a comment:
http://riofriospacetime.blogspot.com/2008/03/shadow.html
Thanks for this very nice history. I think that Eddington definitely did “cook the books” to support Einstein.
I’ve got Eddington’s book “Space Time and Gravitation” (Cambridge University Press, 1st ed., 1920).
The frontispiece of that book is a photograph of one of the telescopes used to determine that Einstein was correct.
It’s a telescope with a clockwork-powered mechanical device that keeps the telescope aligned on the eclipsed sun for the duration of the eclipse.
The problem is that all materials such as the metal telescope casing tend to expand/contract as a function of temperature, and temperature varies during the eclipse (it gets cooler).
This causes some distortion.
Since the star displacements are tiny compared to the effect of temperature caused contraction of the instruments, there is an immense amount of “noise” in the data.
Eddington measured all the star displacements, then chose to ignore those that were way off what was expected. By choosing suitable pieces of data to include in the averaging, he got an average displacement that was closer to Einstein’s prediction than an average of all the raw data indicated.
His book, by the way, is quite nice. Eddington explains in a simple way that Einstein’s general theory of relativity predicts double the deflection of starlight that Newton’s theory predicts (if light is treated as particle-like bullets being deflected by gravity), because the deflection of any object by gravity depends on the velocity of the object. As the object’s velocity increases to c, its gravitational deflection doubles.
The physical explanation is that if you fire bullets (travelling at a velocity v << c) past the sun, they speed up as they approach the sun, then slow down after they have passed the point of closest approach and begin moving away from the sun.
In that case, the gravitational potential energy gained by the bullet is partly used to change the speed of the bullet.
In the case of light, this can’t happen, because gravity doesn’t affect the speed of light, only the velocity (i.e. the direction).
As light approaches the sun, a full 100% of the gained gravitational potential energy of the light is devoted to changing the direction that the light is travelling (i.e. causing deflection), and no energy is wasted in changing the speed of light.
Because gravitational potential energy in the case of light is used with 100% efficiency for deflecting the trajectory of the light, it gets deflected more than the Newtonian law (which assumes that the object’s speed varies) predicts. It turns out that for a bullet approaching the sun at low speed, 50% of the gravitational potential energy gained ends up changing the speed of the bullet, and the other 50% ends up deflecting its trajectory.
Since 100% of the gravitational potential energy is used to deflect the trajectory of a light ray, Einstein’s general relativity predicts 100%/50% = twice the deflection of light that Newton’s theory (assuming light to be non-relativistic bullets) predicts.
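The 100%/50% argument reproduces the standard numbers for grazing deflection at the sun’s limb. As a numerical sketch (constants and the textbook formula 2GM/(c^2 b) for the Newtonian angle are my own assumptions):

```python
G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30          # kg
R_sun = 6.963e8           # m, grazing impact parameter at the sun's limb

theta_newton = 2 * G * M_sun / (c**2 * R_sun)  # "bullet" prediction, radians
theta_einstein = 2 * theta_newton              # general relativity: exactly double

arcsec = (180 / 3.141592653589793) * 3600      # radians -> arcseconds
print(f"Newton:   {theta_newton * arcsec:.2f} arcsec")    # ~0.87
print(f"Einstein: {theta_einstein * arcsec:.2f} arcsec")  # ~1.75, as measured
```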
Yet another way to look at the physics of light ray deflection by gravity is to look at the directions of the electromagnetic field lines in a photon and compare their direction to the radial gravitational field lines from the sun. Exactly 100% of the electromagnetic field lines in a photon are in the plane transverse to the direction of propagation, so on average (say at the distance of closest approach of the photon to the sun), half of the electromagnetic field lines will effectively point in the direction of the gravitational field lines. But for a slow-moving bullet, the electromagnetic field lines act in all directions (it is only when such a bullet moves at a velocity approaching c that Lorentz contraction squashes the field from spherical geometry into a planar disc), so only half as many of them effectively interact with the radial gravitational field lines from the sun.
What’s so interesting about Eddington’s approach is that he made a real effort to grasp the physics behind Einstein’s mathematics.
Eddington did massage his 1919 data, but that was commonplace back then. If you look at Millikan’s work on the charge of the electron, he chose arbitrarily which bits of data to average by hand, ignoring all data points that looked way off. He didn’t disclose this weeding of data in his publications; it was only disclosed when his lab notebooks were studied many years later.
This caused a severe row between Millikan and Felix Ehrenhaft, who did the same experiment and got extremely fuzzy data (because he didn’t weed out the way-out data points like Millikan did).
In the same way, people who replicated Eddington’s study of the deflection of starlight by the sun’s gravity later got results which were far less certain than Eddington’s claims. If you are trying to ascertain whether Newton’s prediction (1 unit) or Einstein’s prediction (2 units) is right, and you get a result of say 1.5 give or take a factor of 2, then the experimental data doesn’t help you discriminate at all.
Eddington was fortunate to have a good prediction to fiddle his data to match by ignoring false values.
Millikan wasn’t as lucky as Eddington. He got the Nobel Prize in 1923 for determining (falsely) the electric charge of the electron, just by ignoring data points that didn’t cluster around one particular value. Because Millikan didn’t have a theoretical prediction to fiddle his data to fit, the cluster of data points he decided to publish was not centred on the real value. Ehrenhaft, who didn’t arbitrarily purge his data of far-out data points, missed out on a Nobel Prize in consequence: the penalty for not fiddling your data to claim a degree of precision unwarranted by the data was the failure to secure Nobel’s accolade:
http://www.aps.org/publications/apsnews/200608/history.cfm
In 1910 Millikan published the first results from these experiments, which clearly showed that charges on the drops were all integer multiples of a fundamental unit of charge. But after the publication of those results, Viennese physicist Felix Ehrenhaft claimed to have conducted a similar experiment, measuring a much smaller value for the elementary charge. Ehrenhaft claimed this supported the idea of the existence of “subelectrons.”
Ehrenhaft’s challenge prompted Millikan to improve on his experiment and collect more data to prove he was right. He published the new, more accurate results in August 1913 in the Physical Review. He stated that the new results had only a 0.2% uncertainty, a great improvement over his previous results. Millikan’s reported value for the elementary charge, 1.592 x 10^{-19} coulombs, is slightly lower than the currently accepted value of 1.602 x 10^{-19} C, probably because Millikan used an incorrect value for the viscosity of air.
It appeared that it was a beautiful experiment that had determined quite precisely the fundamental unit of electric charge, and clearly and convincingly established that “subelectrons” did not exist. Millikan won the 1923 Nobel Prize for the work, as well as for his determination of the value of Planck’s constant in 1916.
But later inspection of Millikan’s lab notebooks by historians and scientists has revealed that between February and April 1912, he took data on many more oil drops than he reported in the paper. This is troubling, since the August 1913 paper explicitly states at one point, “It is to be remarked, too, that this is not a selected group of drops, but represents all the drops experimented upon during 60 consecutive days.” However, at another point in the paper he writes that the 58 drops reported are those “upon which a complete series of observations were made.” Furthermore, the margins of his notebook contain notes such as, “beauty publish” or “something wrong.”
Did Millikan deliberately disregard data that didn’t fit the results he wanted? Perhaps because he was under pressure from a rival and eager to make his mark as a scientist, Millikan misrepresented his data. Some have called this a clear case of scientific fraud. However, other scientists and historians have looked closely at his notebooks, and concluded that Millikan was striving for accuracy by reporting only his most reliable data, not trying to deliberately mislead others. For instance, he rejected drops that were too big, and thus fell too quickly to be measured accurately with his equipment, or too small, which meant they would have been overly influenced by Brownian motion. Some drops don’t have complete data sets, indicating they were aborted during the run.
copy of a comment:
http://keamonad.blogspot.com/2008/03/quoteoflastcentury.html
“A possible explanation of the physicist’s use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection wellknown from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection. It is not the intention of the present discussion to refute the charge that the physicist is a somewhat irresponsible person. Perhaps he is. However, it is important to point out that the mathematical formulation of the physicist’s often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena.”
– Eugene P. Wigner, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, 1960.
“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
copy of a comment:
http://keamonad.blogspot.com/2008/03/grb080319b.html
Can anyone theoretically predict the exact shape of the gravitational shockwave transmitted via the spacetime fabric (gravitons, i.e. gravitational field quanta) from a gamma ray burster?
How much of the energy of a gamma ray burster is supposed to get converted into gravitational waves, anyway?
Before any statistical sense can be made of measurements, you need to understand theoretically what is going on.
I think that the simplest and best way to understand gravitational waves is by analogy to electromagnetic radiation due to the acceleration of electric charges.
If you accelerate an electron, it emits radio waves.
Of course, if you accelerate a hydrogen atom, you don’t get any net radiation output, because the accelerating electron and the accelerating proton in the hydrogen atom are both emitting radio waves exactly out of phase with one another, so the two radiated waves perfectly interfere, “cancelling out” completely as seen from a distance which is large compared to the size of the atom (i.e., large compared to the distance between electron and proton). What happens is that the two opposite accelerating charges in the atom exchange electromagnetic radiation with one another, which allows them to accelerate without losing energy.
In the case of gravitational waves, gravitational charge consists of massenergy so the acceleration of any mass should cause the emission of gravitational waves in a way similar to the emission of radio waves by accelerating single charges.
However, the gravitational coupling constant for single charges is about 10^{-40} times that of electromagnetism, so the power emitted as gravitational waves is correspondingly weaker than that emitted as radio waves.
If gamma ray bursters are stars collapsing into black holes, then this physical mechanism (acceleration of gravitational charge, by analogy to electromagnetism) suggests the power of gravitational waves emitted by a gamma ray burster will be on the order of 10^{-40} of the energy of the observed gamma ray burst.
Since gamma ray bursters emit 10^44 J in the form of gamma rays, it follows that they emit only 10,000 Joules as gravitational waves.
That’s the amount of energy released by 2.4 grams of TNT.
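The arithmetic behind those two sentences, taking the text’s own figures of 10^44 J and the 10^{-40} coupling ratio (the TNT energy density is a standard value I’ve assumed):

```python
E_gamma = 1e44            # J, gamma-ray output of a burster (figure from the text)
coupling_ratio = 1e-40    # gravity/electromagnetism coupling ratio (from the text)

E_grav = E_gamma * coupling_ratio   # energy emitted as gravitational waves
tnt_per_gram = 4184                 # J released per gram of TNT
print(f"{E_grav:.0e} J, about {E_grav / tnt_per_gram:.1f} g of TNT")  # ~2.4 g
```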
Sorry, LIGO isn’t going to measure that gravitational wave over a cosmological distance. It’s going to be swamped with too much noise from natural earth tremors.
In order to unequivocally detect the gravitational waves from a gamma ray burster, it would have to occur so close that we’ll all get a lethal dose of gamma rays! Gravitational waves are just too weak to detect by comparison.
copy of a comment:
http://keamonad.blogspot.com/2008/03/quoteoflastcentury.html
Carl, I agree with you that curve fitting by equations in itself is not very advanced physics.
Is anyone really sure that there are any truly continuous curves in nature? Because everything is made up of particles, if you magnify a curvy line or anything physical that looks curved, eventually you’ll come to a series of atoms arranged in what (on larger scales) looks like the shape of a curve. The illusion of continuous curvature will of course disappear on the scale where you can see the individual molecules and particles.
Similarly, I don’t see how there can be any curved trajectories, because if fields are quantized, the field quanta will only approximate to curves on large scales. On sufficiently small scales, motion will be more erratic.
When Newton’s apple fell, presumably it was accelerated by gravitons interacting with it. That’s not a truly continuous acceleration. Presumably according to quantum gravity, only at the instants when gravitons are being exchanged, do accelerations occur as impulses.
Maybe this is why differential geometry describing curvature was recognised by Einstein as a problem for quantum field theory:
“I consider it quite possible that physics cannot be based on the field concept, i. e., on continuous structures. In that case nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics.”
– Einstein, 1954 letter to M. Besso, quoted by Abraham Pais in his biography Subtle is the Lord: The Science and the Life of Albert Einstein, Oxford University Press, 1983, p. 467.
There is a problem on both sides of the differential field equation of general relativity: first, you can’t fundamentally model (except as an approximation valid statistically only on large scales) the distribution of particulate matter using the energy-momentum tensor T_{nm}, and second, you can’t model field quanta interactions accurately by the Ricci curvature tensor R_{nm}.
There are no smooth geodesics or curved trajectories in quantized fields, just a lot of impulses from field quanta, gravitons. Einstein was exaggerating the problems of quantum field theory, however, since calculus is a useful approximation on large scales, where the flux of field quanta involved in the interactions between particles is large. The real problem is that the differential geometry of tensors provides the wrong mathematical framework for making progress on the fundamental problem of quantum gravity, and when the mainstream is in a hole, it keeps digging instead of trying alternatives.
copy of a comment:
http://keamonad.blogspot.com/2008/03/grb080319b.html
“Kea, it’s somewhat outside my area of specialty, but I was under the impression that the explanation is that E&M radiation is dipole while gravitational radiation is quadropole and this causes their signals to drop off at different powers of distance.” – Carl
According to Wikipedia:
“… the second time derivative of the quadrupole moment (or the l-th time derivative of the l-th multipole moment) of an isolated system’s stress-energy tensor must be nonzero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current necessary for electromagnetic radiation.” – http://en.wikipedia.org/wiki/Gravitational_waves
That Wikipedia page gives the equation for the gravitational waves given off by the Earth-Sun system: the radiated power of gravitational waves falls as the inverse fifth power of the separation distance between the Earth and the Sun.
But further down the same page, the wave amplitude is shown to fall off just inversely as distance of separation.
In any case, the distance of separation of two masses has nothing to do with observer distance.
If the radiated gravitational waves are emitted isotropically, the received intensity (radiated power per unit area) will be the source power divided by the area of a sphere whose radius is the distance of the observer from the centre of mass of the radiating system. The distance to the observer has nothing to do with the distance of separation of the two masses in the gravitational wave source. The separation distance of the two masses is analogous to, say, the distance between the two parts of a dipole antenna in radio transmission: this distance helps you determine the total power transmitted, not the way the power depends on the distance to the observer.
In all isotropic transmission cases, the power per unit area received at a distance falls by the inverse square of distance, while the amplitude falls just as the inverse of distance (because the energy density in an EM field is proportional to the square of the field strength).
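This inverse-square relation for intensity (and inverse first power for amplitude) is easy to verify numerically; the sketch below just divides an assumed isotropic source power over spherical shells:

```python
import math

def received_intensity(source_power_w, observer_distance_m):
    """Isotropic source: power per unit area at the observer falls as 1/r^2."""
    return source_power_w / (4.0 * math.pi * observer_distance_m**2)

# Illustrative source power (not a figure from the text); amplitude goes as
# the square root of intensity, so it falls as 1/r.
P = 200.0  # watts
I1 = received_intensity(P, 1.0)
I2 = received_intensity(P, 2.0)
print(I1 / I2)                        # 4.0: intensity obeys the inverse square
print(math.sqrt(I1) / math.sqrt(I2))  # 2.0: amplitude falls as 1/r
```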
Carl, what do you mean by “E&M radiation is dipole”? I can generate EM radiation by accelerating a single charge in one direction, i.e. I can generate radio waves from a monopole not just a dipole antenna. (Maybe you are referring to the nature of EM radiation in having both a half cycle of negative electric field and positive electric field, propagating together?)
A dipole antenna is illustrated at http://en.wikipedia.org/wiki/Dipole_antenna. It’s a bit peculiar and is not the sort of antenna used generally in radio transmission from radio transmitter masts, handheld radios, and vehicle mounted transmitter antennas. These all use monopole antennas.
The Wikipedia page on monopole antennas, http://en.wikipedia.org/wiki/Monopole_antenna claims:
“A monopole antenna is a type of radio antenna formed by replacing one half of a dipole antenna with a ground plane at right angles to the remaining half. If the ground plane is large enough, the monopole behaves exactly like a dipole, as if its reflection in the ground plane formed the missing half of the dipole (see image antenna).”
Physically this is wrong because what is going on is electromagnetic radiation from accelerating electrons.
Similarly, when a light photon is released as an electron jumps from an excited state to the ground state, you need a charge accelerating in one direction.
You don’t need a ground plane.
The wiki page is just totally misleading, because monopole antennas work above the ground, and above non-conducting ground media.
Those people don’t grasp the physics of electromagnetic radiation from accelerating charges. They have absolutely no practical experience and are confused by Maxwell’s equations.
The key fact they are missing is that observable electromagnetic radiation is just an asymmetry in the normally intense exchange of gauge boson (field) radiation between charges.
If you have a hydrogen atom, it is neutral as seen from a distance, but in reality the electron and proton in the hydrogen atom are exchanging gauge bosons with other charges of similar signs in the surrounding universe.
One way to demonstrate this is to separate a proton and electron by a great distance. Is this “extremely excited atom” still “neutral”? Obviously, if you are near the electron you’ll detect a net negative electric field, and if you’re near the proton you’ll detect a net positive electric field. These fields are mediated entirely by exchanged gauge boson radiation.
If we just consider an isolated electron, how are exchanged gauge bosons physically producing an electric field around it? What other charges are there for the electron to exchange gauge bosons with? Easy, the electrons in the surrounding universe. Charges don’t stop exchanging gauge bosons with charges in the surrounding universe just because they are paired up with opposite charges (this pairing – atoms – only affects the gauge bosons transmitted in the direction of the other charge, thus producing the electric force between the charges).
Electromagnetic radiation is emitted by accelerating charges because the acceleration of a charge introduces an asymmetry in the otherwise symmetric exchange of gauge bosons. The disturbance in the exchange of gauge bosons that occurs when a charge accelerates has two effects: (1) a force on the charge, and (2) the emission of electromagnetic radiation (oscillations in the gauge boson radiation field are composed of gauge bosons; just as sound waves are composed entirely of the energy of air molecules, but that doesn’t mean that sound is just air molecules, any more than a book is the same as a piece of pulped wood).
My understanding is that the issue of what electromagnetic radiation is, is crucial for understanding quantum field theory: it’s an asymmetry in the normally symmetrical (undetectable) exchange of gauge bosons between charges. Likewise, we don’t feel 14.7 pounds/sq. inch or 10 tons per square metre of air pressure normally, because it’s normally symmetrical on all sides. We only feel the force when an asymmetry occurs, e.g. wind drag forces.
Gauge boson exchange radiation (static force fields, such as the radial electric field around a stationary electron) is very analogous to static pressure: the field is produced by the exchange of gauge bosons between charges, by analogy to water molecule impacts. Likewise, electromagnetic radiation is similar to transverse water waves in some respects, i.e. an asymmetry in the otherwise uniform water pressure across a lake.
The key thing to grasp to understand the problem in the conventional picture is the Maxwell equation for “displacement current” in a logic step moving in your computer. When a logic step heads off, it is an energy wave propagated between two conductors, a ground plane and say a 5 volt rail. The energy wave propagates at the velocity of light for the insulator between the rail and the ground plane. This is not theory or speculation: it was experimentally investigated by Catt; see I. Catt, “Crosstalk (Noise) in digital systems,” IEEE Trans. Electronic Computers, Vol. EC-16, pp. 743-763, December 1967.
Now, how does such a logic pulse propagate when a switch closes, sending it on its way? It has no way to sense whether it is flowing into an open circuit or a closed circuit.
The energy starts at one end of a transmission line (pair of conductors) and races along it at the velocity of light for whatever is the insulator that is being used between and around the two conductors comprising the line.
If the potential difference between the conductors in the energy wave is 5 volts, then what is the current? It’s not given by Ohm’s law I = V/R, where R is the resistance of the complete circuit, because we have already said that the energy has no clue of what is ahead of it: it doesn’t know whether the line ahead of it is open or closed. It is going at the velocity of light, with no information of what (if anything) is in front of it, or of whether the resistance of the circuit as a whole is very low, or infinite (open circuit).
What happens in this case is that the characteristic impedance of the line controls the current flow, acting like a finite (non-infinite) resistance between the two conductors at distances the energy wave has already reached:
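As a sketch of this point: before any reflection can return from the far end, the current is set by the line’s characteristic impedance Z0 = sqrt(L/C), not by the circuit resistance. The line parameters below are illustrative values I’ve assumed, not figures from Catt’s paper:

```python
import math

def characteristic_impedance(L_per_m, C_per_m):
    """Z0 = sqrt(L/C): the 'resistance' the step sees before any reflection returns."""
    return math.sqrt(L_per_m / C_per_m)

# Assumed illustrative line parameters:
L = 500e-9   # inductance per metre, henries
C = 100e-12  # capacitance per metre, farads

Z0 = characteristic_impedance(L, C)
I = 5.0 / Z0  # current drawn by a 5 volt logic step, amps

print(round(Z0, 1))         # ~70.7 ohms
print(round(I * 1000, 1))   # ~70.7 mA flows regardless of what lies ahead
```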
[Illustrations: the logic step propagating along the two conductors of a transmission line, with electrons accelerating in opposite directions in each conductor.]
Points to note:
1. Maxwell and Heaviside claimed that a vacuum “displacement current” of polarized virtual charges occurs, with the process of polarization being a “displacement current” which closes the open circuit between the two conductors before the logic step has completed the full circuit (i.e., while the logic step is moving along the circuit at light velocity for the insulator which must be presumed to be a “dielectric”, even if a vacuum).
2. Julian Schwinger worked out that the quantum field theory vacuum only undergoes polarization in electric fields above 1.3×10^18 V/m. Such fields don’t occur in computers, but they still work!
3. In each conductor, as the energy step passes a given location, the relatively loosely bound (conduction band) electrons get accelerated from a mean of zero to their full mean drift speed. This causes them to radiate and swap EM energy!!! This is the physical mechanism for what happens, replacing Maxwell’s mistaken “displacement current” with tested physics.
As the illustration indicates, the electrons accelerate in opposite directions along each of the two conductors, so each conductor radiates a waveform of EM radiation which is the exact inversion of that from the other conductor. Hence, at a distance from the transmission line, there is perfect cancellation by interference, cancelling any detectable signal! Thus, no net energy loss occurs due to the radiation. The sole effect of this radiation (ignored by Catt, and leading to a serious shouting row on the phone between us, even after I wrote an Electronics World cover story about Catt’s best invention) is that it is exchanged between the two conductors. This is the physical mechanism that does the same job as Maxwell’s false pet theory of “displacement current”. Displacement current doesn’t exist because, as Nobel Laureate Schwinger proved, the quantum field theory vacuum doesn’t polarize in electric fields below 1.3×10^18 volts/metre, and you don’t get that kind of field strength in radio waves or computers, where field strengths are very much lower.
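Schwinger’s threshold quoted above can be checked from the standard formula for the critical field, E_c = m^2 c^3 / (e ħ); a quick sketch using CODATA constant values:

```python
# Schwinger's critical field for vacuum pair production / polarization:
# E_c = m_e^2 * c^3 / (e * hbar).
m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J s

E_c = (m_e**2 * c**3) / (e * hbar)
print(f"{E_c:.2e} V/m")  # ~1.3e18 V/m, matching the figure quoted above
```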
This changes the physical understanding of Maxwell’s equations: from it, we know that wherever Maxwell claimed “displacement currents” to exist, exchange radiation is occurring which produces the same forces and energy transfers but tells us about the previously hidden quantum field theory mechanism behind the quantum electromagnetic interaction.
Surely the quantum gravitational charge, mass, can be expected to behave as a first approximation like electric charge when accelerated.
Whereas the acceleration of electric charge produces an asymmetry in the field (which itself is mediated by gauge boson exchange radiation) that ripples outward as an observable transverse EM wave (mediated by numerous gauge bosons or field quanta), with gravity what you are doing is accelerating a mass (a unit gravitational charge), which introduces an asymmetry into the graviton exchange mechanism, that propagates as a gravitational wave (mediated by numerous gravitons, or field quanta).
Why introduce additional complexity? It looks as if the mechanism for gravitational waves is a perfect analogy to electromagnetic waves, and that the relative weakness of the gravitational waves is simply due to the relative weakness of the gravitational coupling, as compared to electromagnetism.
The major difference between fundamental components of the electromagnetic radiation (electromagnetic gauge bosons) and fundamental components of the gravitational radiation (gravitons) is that the former consists of a combination of the fields from two types of electric charge (positive electric field, and negative electric field, both propagating in harmony in one wave; let’s follow Lorentz’ genius by treating magnetism as simply an electric field in motion relative to the observer), while gravitons consist of fields created by only one type of charge: mass-energy. While attraction of unlike charges and repulsion of like charges occurs in electromagnetism, only attraction is observed between masses in proximity. So gravitons are composed of a monopolar field, unlike the dipole radiation of electromagnetism (photons which are half a cycle of positive electric field, accompanied by half a cycle of negative electric field).
copy of a note to Sir Kevin Aylward’s page:
http://en.wikipedia.org/wiki/User_talk:Kevin_aylward#Ensemble_interpretation_of_quantum_mechanics
== Ensemble interpretation of quantum mechanics ==
Sir Aylward, you might want to incorporate a bit of fact-based quantum field theory into the quantum mechanics ensemble interpretation page. The key thing is that in an electric field of strength above 1.3×10^18 volts/metre, which occurs out to a range of about 33 fm from the middle of the electron (see equation 359 in http://arxiv.org/abs/quantph/0608140 or equation 8.20 in http://arxiv.org/abs/hepth/0510040 ), pair production occurs spontaneously in the Dirac sea, and the pairs get radially polarized by the electron’s core electric field before annihilating back into field quanta (this radial polarization consists of virtual positrons being on average slightly closer to the real electron core than virtual electrons). This polarization shields part of the core charge of the electron, necessitating the renormalization of charge in calculations of things like the magnetic moment of the electron, known accurately to many decimals.
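The quoted 33 fm range follows from setting the Coulomb field of the electron core equal to Schwinger’s 1.3×10^18 V/m threshold; a rough check:

```python
import math

# Solve E = k*e / r^2 for the radius at which the electron's Coulomb field
# drops to Schwinger's pair-production threshold.
k = 8.988e9            # Coulomb constant, N m^2 / C^2
e = 1.602e-19          # elementary charge, C
E_schwinger = 1.3e18   # threshold field, V/m

r = math.sqrt(k * e / E_schwinger)
print(f"{r * 1e15:.0f} fm")  # ~33 fm, as quoted above
```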
But what’s more important, the spontaneous production of pairs of virtual (i.e. short-lived) fermions around electrons at random, due to the intense gauge boson (electric field quanta) radiation in strong fields breaking down the vacuum “Dirac sea”, will have chaotic effects on the motion of the electron on small scales (although on large scales the chaos will cancel out, just as a large number of random air molecule impacts averages out on large scales to be approximated well by the concept of constant air pressure, but doesn’t cancel out on small scales where individual impacts become important, causing Brownian motion of small particles).
It will randomly cause small-scale deflections, each deflection occurring when pair production produces pairs of fermions at random near an electron.
Feynman states in a footnote printed on pages 55-6 of my (Penguin, 1990) copy of his book QED:
‘… I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed … If you get rid of all the old-fashioned ideas and instead use the [path integral] ideas that I’m explaining in these lectures – adding arrows [each arrow representing the contribution to one kind of reaction, embodied by a single Feynman diagram] for all the ways an event can happen – there is no need for an uncertainty principle!’
Feynman on p85 points out that the effects usually attributed to the ‘uncertainty principle’ are actually due to interferences from virtual particles or field quanta in the vacuum (which don’t exist in classical theories but must exist in an accurate quantum field theory):
‘But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these [classical] rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’
Hence, in the path integral picture of quantum mechanics – according to Feynman – all the indeterminacy is due to interferences. It’s very analogous to the indeterminacy of the motion of a small grain of pollen (less than 5 microns in diameter) due to jostling by individual interactions with air molecules, which represent the field quanta being exchanged with a fundamental particle.
The path integral then makes a lot of sense, as it is the statistical resultant of a lot of interactions, just as the path integral was actually used for Brownian motion (diffusion) studies in physics before its role in QFT. The path integral still has the problem that it’s unrealistic in using calculus and averaging an infinite number of possible paths determined by the continuously variable Lagrangian equation of motion in a field, when in reality there is not going to be an infinite number of interactions taking place. But at least it is possible to see the problems, and entanglement may be a red herring:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.
Also notice that Dr Thomas Love of California State University has pointed out to me, via an unpublished manuscript called “Towards an Einsteinian Quantum Theory”:
‘The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction, we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics; it is not inherent in the physics.’
Also see the following statement of Feynman on the hostility towards path integrals from Teller, Dirac and Bohr (who were all prejudiced in favour of crackpot orthodoxy which had no evidence behind it):
“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …
“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …
” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …” – Feynman, quoted at http://www.tony5m17h.net/goodnewsbadnews.html#badnews
Cheers,
Nigel Cook
http://quantumfieldtheory.org/
http://nige.wordpress.com/
copy of part of my comment in response to Dr Le Sage of Houston University, USA:
http://nige.wordpress.com/2007/03/16/whyolddiscardedtheorieswontbetakenseriously/#comment7201
Being a mathematician hasn’t helped Dr Edward Witten to make falsifiable predictions that extend the Standard Model and test quantum gravity.
Feynman argues that because the path integral over all possible interactions has an infinite series of terms in its perturbative expansion, it’s not physically real mathematics. It’s a continuum (or classical) approximation to the non-infinite number of actual interactions involved in any event in the universe. As Feynman pointed out in his November 1964 Cornell University lecture “The Character of Physical Law – The Relation of Mathematics to Physics”, this suggests that the universe isn’t based on mathematics:
“So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”
It’s also clear that the use of differential tensors in general relativity is another classical approximation, which falsely smooths out a quantized distribution of mass energy (in particles, atoms) as if it were a perfect fluid in the source term, and then describes the effect as a smooth “curvature” using the Ricci tensor. This is completely false, because the exchange of gravitons in quantum gravity must cause accelerations, and the acceleration of a particle will be a series of quantum leaps as gravitons are received, not a smooth differential acceleration.
Le Sage gravity in a sense is the simplest way to get quantum gravity predictions.
Ultimately, it’s not math skill you need here because differential geometry – which is the basis of the errors in quantum field theory, Maxwell’s equations, and general relativity – prevents progress.
Plenty of people have tried the same well-worn route and they haven’t got any further for decades. There has been no significant progress in the theory of fundamental particle physics since the discovery of the asymptotic freedom of quarks in the early 1970s.
The main reason for this is the focus on differential geometry, both in general relativity research and in quantum field theory research.
Switching from analytical efforts in quantum theory towards physical understanding will require Monte Carlo computer simulations of interactions, modelling the exchange of field quanta between charges, etc. This (computer programming) approach is entirely different to the type of mathematics which has failed to achieve significant progress for 35 years.
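As a toy illustration of the sort of Monte Carlo approach meant here (purely illustrative numbers, no actual field theory): many discrete random impulses plus a small bias look chaotic individually, but average into a smooth net force on large scales:

```python
import random

random.seed(42)  # reproducible run

def simulate(n_impulses, drift=0.01, kick=1.0):
    """Sum n discrete impulses: each a small constant bias (the 'net force')
    plus a large random kick (the chaotic field quanta interactions)."""
    return sum(drift + random.uniform(-kick, kick) for _ in range(n_impulses))

few  = simulate(10)       # small scale: dominated by the random kicks
many = simulate(100000)   # large scale: bias dominates, ~ drift per impulse

print(few)                # jerky, unpredictable
print(many / 100000)      # close to the 0.01 drift per impulse
```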
I work in IT and have little free time, so all that I’ve been doing on this blog is trying to clarify my understanding as far as possible with the minimum expenditure of time.
Just a comment about Einstein and the speed of light. I don’t know how far you’ve gone into this, but here is my understanding of it:
1. Maxwell’s equations show that if you are moving relative to an electric charge (say, an electron), you’ll detect a magnetic field from the moving charge. If you and the electron are both moving side-by-side, so that you are not moving relative to one another, then you detect no magnetic field. This is relativity. Maxwell however predicted that the mechanical aether in his theory (explaining the vacuum “displacement current” term, the product of vacuum permittivity and dE/dt, where E is electric field strength in volts/metre) would provide an absolute reference frame. He published this and suggested basically the Michelson-Morley experiment to test it, in Encyclopedia Britannica sometime around 1870 (from memory).
2. Michelson and Morley decided to test Maxwell’s prediction that light waves are carried at absolute speed by the aether. They failed to detect any absolute motion.
3. FitzGerald in 1889 then suggested that there is absolute motion, but that this absolute motion was undetectable because the Michelson-Morley instrument was contracted in the direction of its motion by the head-on pressure of the aether (or the flux of exchange radiation, as in modern quantum field theory, or the spacetime continuum as in general relativity), somewhat like the pressure of water at the bow of a ship causes the ship’s length to contract, or the air drag pressure on the nose of an aircraft causes it to contract in length.
This is a very thorny issue, because literally 99.9% of physicists have a religious faith in Einstein’s denial of aether, so just stating FitzGerald’s aether explanation is a heresy. It’s not just the journal editors, it’s a widespread dismissal of vacuum dynamics.
Part of the reason is a very rapid descent into crackpot aether ideas. Eddington in his 1920 book “Space, Time and Gravitation” stated that there were then 200 aetherial theories of gravity, none of which were scientific in having evidence or being useful.
From memory, I believe that Gamow wrote in one of his popular books (possibly “One Two Three … Infinity”) that the aether analogy of a ship in motion for the physical contraction of the Michelson-Morley apparatus (which averts interference fringes, making Einstein’s special relativity appear correct) is wrong, because the contraction by spacetime is always by the same FitzGerald-Lorentz factor (1 – v^2/c^2)^(1/2), while the amount of lengthways contraction of an aircraft or ship due to its motion within a fluid medium is dependent upon the atomic and chemical nature of the matter (wood or metal, etc.).
This is a pretty shrewd observation. If you look at general relativity, it predicts that the gravitational field contracts the earth’s radius by 1.5 millimetres or x = (1/3)MG/c^2 (see Feynman’s Lectures on Physics, volume 3 I believe, the lecture on curved spacetime).
You can understand this pretty simply from the fluid pressure analogy. Place an orange in a tank of compressed fluid, and the higher the fluid pressure, the more contraction the orange’s radius will undergo.
However, it’s obvious that the contraction in this case is also independent of the structural strength or composition of the earth.
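Feynman’s (1/3)GM/c^2 figure quoted above is easy to verify for the Earth, using the standard constants:

```python
# Feynman's radial contraction of the Earth due to gravitation: (1/3)GM/c^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
c = 2.998e8     # speed of light, m/s

contraction = (G * M) / (3 * c**2)
print(f"{contraction * 1000:.2f} mm")  # ~1.5 mm, as stated above
```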
What’s happening is that gravitons are acting on the fundamental particles with mass. According to the Standard Model, which is at least partly correct because it’s mainly empirical (apart from symmetry breaking aspects like the hypothetical Higgs field, which haven’t been observed or checked yet) and makes many checkable predictions about particle reaction rates which have been confirmed, all particles are massless and mass is provided by a separate field.
Because the masses of fundamental particles are not a continuous spectrum like photon energies from a radiating blackbody, but are instead quantized into definite units, presumably masses are composed of massive field quanta which interact with fundamental electromagnetic and nuclear charges as well as with gravitons.
One of the key things about the Le Sage model is that the meanfreepath of gravitons is very large even in relatively dense matter. Most gravitons passing through the Earth don’t interact at all, and the small asymmetry due to the small portion which do interact involve gravitons interacting with only one particle.
This is why there is no mechanical effect due to the strength of the earth. The compression of the earth’s radius by 1.5 mm by gravitons is accomplished by the gravitons interacting only with fundamental particles, distributed throughout the earth’s volume. So it is not a pressure on the outer surface of the earth which is transmitted through the earth mechanically. Thus, the structural strength of the earth is irrelevant to the mechanism of the compression which occurs.
If you consider a single fundamental particle (with rest mass) moving in space, it’s supposed to become flattened into a sheet if its velocity approaches c. The loss in its spherically symmetric electric field from this Lorentz-FitzGerald contraction is accompanied by a net magnetic field which grows stronger as the particle moves faster. In addition, it gets an increase in inertial and presumably gravitational mass (Einstein’s equivalence principle says inertial mass equals gravitational mass).
The physical basis for the contraction is head-on pressure when moving in a fluid: drag pressure in a perfect fluid spacetime fabric doesn’t bring you to a halt, it simply contracts you in the direction of motion (where the fluid takes energy from you to move itself out of your path, but returns the energy by flowing in behind you, filling the void you are forever forming behind you as you move, as for “Aristotle’s arrow”). The (1 – v^2/c^2)^(1/2) contraction factor is easily explained by analogy to a fluid: v is your velocity relative to the velocity of the exchanged gravitons, c. Here c is basically analogous to the speed of sound in a fluid.
The increase in mass of a moving particle is due to the snowplough effect: although the fluid (or graviton field) ahead of a particle will be turned out of the way by the moving particle, the faster the particle is going, the bigger the wave of spacetime fabric that is being turned aside and flowing around the particle and pushing in again at the rear of the moving particle, like the wave of water flowing around a moving ship or submarine from bow to stern. The increase in the rate and strength of graviton interactions in the direction of motion due to increased speed of particle, increases the particle’s mass. Mass is determined simply by the graviton exchange interaction rate, so it increases with velocity.
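The contraction and the mass increase above are both governed by the same Lorentz factor; for example at v = 0.8c:

```python
import math

def gamma(v, c=2.998e8):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

c = 2.998e8
v = 0.8 * c
print(round(1.0 / gamma(v), 2))  # 0.6: length contracts to 60% in the direction of motion
print(round(gamma(v), 3))        # 1.667: mass increases by the same factor
```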
I don’t think that it is that helpful to quote Einstein as saying that nothing can go faster than the speed of light. A highenergy beta particle in water can exceed the local speed of light of the water, and emits blue light (Cherenkov radiation). This is quite an important example of how particles can travel faster than light, and the proof is the blue glow from a water moderated nuclear reactor core: http://en.wikipedia.org/wiki/Cherenkov_radiation
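The numbers behind the Cherenkov example: light in water travels at c/n, and an electron needs only a modest kinetic energy to beat it (0.511 MeV electron rest energy and n = 1.33 for water are assumed standard values):

```python
import math

# Light speed in water and the Cherenkov threshold for a beta particle:
# a charged particle exceeding the local light speed c/n emits Cherenkov light.
c = 2.998e8      # vacuum light speed, m/s
n_water = 1.33   # refractive index of water

v_light_water = c / n_water
beta_threshold = 1.0 / n_water   # fraction of c needed to out-run light in water

# Kinetic energy an electron needs to reach that speed:
gamma_thr = 1.0 / math.sqrt(1.0 - beta_threshold**2)
ke_mev = (gamma_thr - 1.0) * 0.511   # electron rest energy 0.511 MeV

print(f"{v_light_water:.3e} m/s")  # ~2.25e8 m/s: light is slower in water
print(round(ke_mev, 2))            # ~0.26 MeV threshold for a beta particle
```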
Obviously, you’re referring to the velocity of light in the vacuum as a supposed limit asserted by Einstein. I tend to disagree with you because electromagnetism relies on a spacetime fabric (the exchange of field quanta between charges), so light is physically just an asymmetry in the normal equilibrium rate of exchange of field quanta between the electromagnetic charges of the universe. I don’t see how this velocity can be varied.
In any case, if a particle approaches the velocity c, its mass approaches infinity. (This may be best explained by the increase in the magnetic field with velocity, since you get selfinductance from the magnetic field which tends to increase inertia.)
I don’t see how you can overcome this fact. It’s well established that mass increases with velocity. You can do the experiment by measuring the deflection of beta particles going at various speeds, by a magnetic field. The more inertial force the particle has, the less it will be deflected by a given magnetic field. Clearly this is a barrier that prevents you going faster than the velocity of light.
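The beta particle deflection experiment mentioned above works because the radius of the circular path in a magnetic field is r = γmv/(qB): the greater the relativistic mass, the bigger the radius and hence the smaller the deflection. The field strength below is an assumed illustrative value:

```python
import math

m_e = 9.109e-31   # electron mass, kg
q   = 1.602e-19   # elementary charge, C
c   = 2.998e8     # speed of light, m/s
B   = 0.01        # magnetic flux density, tesla (assumed illustrative value)

def gyroradius(beta):
    """Radius of circular motion r = gamma*m*v/(q*B) for speed v = beta*c."""
    v = beta * c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * m_e * v / (q * B)

# Faster beta particle -> larger relativistic mass -> larger radius,
# i.e. less deflection by the same magnetic field.
print(gyroradius(0.9) > gyroradius(0.5))  # True
```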
Einstein recognised that special relativity was phoney because it ignored accelerations, which are absolute motions requiring no relative reference frame. E.g., if you rotate rapidly you can tell you are in absolute accelerative motion (you feel ill) without needing to look at anything else for reference, whereas if you are simply going in a straight line at uniform velocity, you do need to look at some other object to determine that you are moving relative to it.
Because one principle of special relativity ignores accelerative motions such as centripetal acceleration, that principle is plain wrong in this universe. So Einstein replaced false special relativity with the principle of general covariance in general relativity, whereby any true laws of motion MUST be true in all reference frames, not just uniform motion:
“The special theory of relativity … does not extend to nonuniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant).”
– Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
The most important occurrence of the Lorentz-FitzGerald contraction in general relativity is in the stress-energy tensor, which is the source term for the gravitational field.
I think that the ultimate reality is that all gravitational charges (mass and energy) are exchanging light-velocity gravitons with other gravitational charges. These gravitons, together with electromagnetic exchange radiations of the same velocity, form the background field of spacetime.
Observable photons are asymmetries in the normal equilibrium exchange of (otherwise unobservable) electromagnetic gauge bosons between electromagnetic charges, while gravitational waves will be asymmetries in the (otherwise unobservable) graviton exchange between masses.
In this physical context, I don’t see how you can go faster than the velocity of light, since that is the velocity of the field quanta which give rise to all the properties of spacetime. (The weak gauge bosons are massive, hence short-ranged and slower than c, while gluons are confined to nuclear distances, so neither concerns us in the context of the universe at late times.)
copy of a comment:
http://keamonad.blogspot.com/2008/04/greetings.html
Greetings to New Zealand! It’s off topic, I suppose, but what happened to the open-air physics conference you were planning? It was supposed to be held sometime this year, and it’s already spring here (autumn where you are, I expect). You specifically invited, I recall, Carl Brannen, Tony Smith, Louise Riofrio, and others.
Has this event occurred, is it still in the pipeline, or are you too busy now to host it? It would be fun to visit New Zealand (although the flight would be awful I imagine, however some sleeping tablets – plus aspirin to avoid any risk of thrombosis – would probably make it survivable).
Also off topic, I’ve just found the answer to a vital puzzle that was driving me crazy over the Standard Model, which I couldn’t resolve from any of the books (Weinberg, Zee, Ryder, etc.). I found it in the latest edition of an old book by Frank Close.
It turns out that U(1) can’t correctly predict electromagnetism by itself: its gauge boson B_0 has to be mixed with the W_0 gauge boson from SU(2) to give the weak Z_0 and the electromagnetic gauge boson. This is pretty interesting, because the oversimplified discussions say U(1) is the electromagnetic interaction and SU(2) is the weak interaction. For the simplified explanation of the Standard Model to be true, where B_0 is the photon (the field quantum of electromagnetism) and W_0 of SU(2) is the Z_0 weak neutral gauge boson, the Weinberg mixing angle between the vectors representing Z_0 and W_0 would have to be zero, when in fact it’s about 29 degrees empirically.
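For reference, the orthodox mixing relations (standard textbook electroweak theory, writing W^3 for the neutral SU(2) boson called W_0 above; this is the mainstream maths, not the alternative mechanism discussed in this post) are:

```latex
\begin{aligned}
A_\mu &= B_\mu \cos\theta_W + W^3_\mu \sin\theta_W &&\text{(photon)}\\
Z_\mu &= -B_\mu \sin\theta_W + W^3_\mu \cos\theta_W &&\text{(weak neutral boson)}\\
e &= g\sin\theta_W = g'\cos\theta_W, \qquad \tan\theta_W = g'/g
\end{aligned}
```

If the Weinberg angle θ_W were zero, the photon would simply be the B boson and the Z_0 would simply be W^3; the measured sin²θ_W ≈ 0.23 shows the mixing is real.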
It’s exciting to find that U(1) and SU(2) field quanta are so well and truly “mixed up” in the Standard Model, instead of separately representing the electromagnetic and weak interactions. This seems to be played down, or covered up, in all the other books I’ve read, because it’s an unexplained fix-up of the Standard Model and not something people are particularly proud of, a little bit like having a theory that uses epicycles. However, the real challenge is to replace it with something that corresponds to similarly accurate field Lagrangians, while requiring less speculation and fewer ad hoc fixes, and making additional predictions.

I think it’s extremely attractive to focus on the possibility of replacing U(1) x SU(2) + Higgs mechanism with an SU(2) which allows the 3 field quanta of SU(2) to exist as massless electromagnetic and gravitational field quanta at low energy, not just as massive weak gauge bosons at low energy. The way that mass is coupled to those 3 field quanta of SU(2) to produce weak gauge bosons at low energy can account for chiral symmetry. The deep problem here is figuring out in detail the mass-giving field required to supply mass to the right proportion of field quanta at low energy in such a way that they only act on left-handed charges. I think that this can only be accomplished by fully understanding the Standard Model Lagrangian and Higgs field, and seeing how these need to be modified.

As far as I’m concerned, the electric field around an electron is negative because it’s carried by charged (negative) field quanta, while the electric field around a proton is positive because it’s mediated by positive field quanta. We never get to see the core of the electron, because we don’t have high enough energy collisions: we only observe the “charge” of the electric field, and what we are observing is therefore the charge of the field quanta which mediate the electric field.
In my view, the electrically neutral photon is a 50:50 mixture of negative and positive electromagnetic field quanta. You can’t get a neutral photon to mediate a charged field. The mainstream idea that electromagnetic field quanta have 4 polarizations, not 2 (like ordinary photons of light), is correct really, but the 2 additional polarizations are the two kinds of net electric charge the electromagnetic gauge boson can carry.

Because charged field quanta are exchanged between similar charges but absorbed by unlike charges, two electrons will tend to repel because they’re exchanging negative field quanta with one another, so the impulses knock each electron apart from the other (the inbound electromagnetic exchange radiation from distant charges in the surrounding universe is redshifted to low energy, so it can’t cancel out this net “repulsion”). In the case of unlike charges, they shield one another rather than exchanging field quanta, so they get pushed together by shielding each other, on facing sides, from the exchange radiation of the surrounding universe. In this mechanism, by adding up vectors from all directions for net energy flow rates, you can actually show that the acceleration of two unit similar charges away from one another is identical to the attractive acceleration of two unit opposite charges. Both forces are basically powered by the equilibrium of exchange of field quanta with the surrounding universe, and the proximity of two local charges just creates an asymmetry which results in the attraction of unlike charges and the repulsion of like charges.
Because there are two types of electric charge, and alternating positive and negative charges can be linked by a large number of different ‘drunkard’s walks’ of electromagnetic gauge bosons between all the fundamental particles in the universe (like a series of alternating positive and negative capacitor plates with a vacuum dielectric between them), it follows that the net electromagnetic force will be bigger than the gravitational force (which, with only one kind of gravitational charge, can’t be multiplied in this way) by the vector sum of the drunkard’s walk between positive and negative charges in the universe: about the square root of the number of charges, i.e. (10^80)^0.5 = 10^40.
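The square-root scaling claimed above is just the statistics of a random walk, and is easy to check numerically. A minimal Monte Carlo sketch (all parameter values are illustrative, chosen for the demo): summing N randomly-signed unit contributions gives a net magnitude of order sqrt(N), not N.

```python
import math
import random

def net_magnitude(n_charges, trials=2000, seed=1):
    """Root-mean-square magnitude of the sum of n randomly-signed
    unit steps (a 'drunkard's walk' over alternating charges)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return math.sqrt(total / trials)

# The RMS net magnitude tracks sqrt(N) as N grows:
for n in (100, 400, 1600):
    print(n, net_magnitude(n), math.sqrt(n))
```

Extrapolating the same sqrt(N) law to N ~ 10^80 charges gives the 10^40 ratio quoted in the text.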
It’s nice how simply this stuff works out, but it’s a daunting challenge to investigate a complete replacement of the electroweak sector of the Standard Model by a theory which includes gravity and sorts out all the ad hoc problems with the Standard Model. I’m up to my neck with IT problems at the moment and it will be several weeks before I have any more free time to look at this.
Here is the beginning of a drafted post intended for http://groups.google.com/group/sci.physics.foundations/
However, I’m not posting it there yet because it’s already getting too long before it has even gone into the important details, which nobody would end up reading. I think I will have to produce a video lecture with powerpoint illustrations to clarify the important points:
It’s well known that the ‘special’ photons exchanged between electromagnetic charges to produce forces must have not 2 but 4 polarizations. If a photon had only 2 polarizations, exchanging such a photon would only produce repulsive forces, but unlike electric charges (and unlike magnetic poles) attract one another. Over very long distances, two of the 4 polarizations cancel out and hence are not observed with ordinary photons, but those polarizations are important for electromagnetic field quanta where attractive forces occur.
Similarly, in quantum gravity, Pauli and Fierz showed in 1939 (Proc. Roy. Soc. v173, p211) that the exchanged field quanta must have 5 polarizations (and consequently a spin of 2 units), giving a field Lagrangian with 5 tensor components which ensures that mass-energy will always attract other mass-energy. This early quantum gravity research was inspired by experimentally successful predictions, such as those stemming from Yukawa’s work in predicting the pion field quanta of the strong force.
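As a reminder of the counting behind the Fierz-Pauli figure (standard representation theory, not specific to any model discussed here), a massive spin-s field has 2s + 1 polarization states:

```latex
2s + 1 \ \text{states for massive spin } s:
\qquad s = 2 \;\Rightarrow\; 2(2) + 1 = 5.
```

(A massless spin-2 field keeps only the two helicity states, just as the massless photon keeps 2 of the 3 polarizations a massive spin-1 field would have.)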
Considering the field quanta of electromagnetism, it’s easy to physically understand how they have two polarizations: they are composed of electric and magnetic fields orthogonal to one another and to the direction of propagation. The electric field vector and the magnetic field vector each represent one polarization.
The other two polarizations are more interesting. The size of the core charge in an electron is so small that it has never been experimentally observed (if it is at the Planck scale, or even just the black hole size scale for its mass, then no conceivable accelerator will ever be able to observe it), so all we observe is the field of the electron. The apparent charge observed is just the electromagnetic field, which mediates the effect of the electron’s core charge.
It’s interesting that the classical electromagnetic wave is electrically neutral, due to the fact that the electric field oscillates between positive and negative values of field strength, the sum of which is zero. But a quantum in the electron’s electric field mediates to an observer an electric field whose strength is not zero, so it follows that the 2 additional polarizations of the field quanta of electromagnetism are net charges: the field quanta must be negative around negative charges and positive around positive charges.
Normally, a massless electrically charged quantum cannot propagate in the vacuum, because the motion of massless electric charge would produce a magnetic field and a self-inductance that would exactly oppose the motion of the quantum (Lenz’s law). However, this kind of objection to charged field quanta doesn’t apply to exchange radiations in equilibrium: where the flux of charged gauge bosons from electron A to electron B equals the flux of charged gauge bosons from electron B to electron A, the magnetic field curls of the two currents are in opposite directions, so the magnetic effects exactly cancel.
One of the interesting things about this model is that it leads to checkable predictions. The existing Standard Model specifies that the electromagnetic gauge boson is a mixture of the B boson of U(1) and the W_0 boson of SU(2), the mixture being represented by the ad hoc Weinberg mixing angle of about 29 degrees between the Z_0 and W_0 vectors.
However, using the model of the gauge boson as being electrically charged suggests that the U(1) x SU(2) electroweak sector of the Standard Model, with its ad hoc mixing angles and its dodgy Higgs field speculations, may be replaced by something of the form SU(2) where – with a different mass-giving field – the 3 gauge bosons exist in both massive and massless forms. The two massless electrically charged SU(2) field quanta form electromagnetic fields, while the massless neutral gauge boson is the graviton. […]
Here’s a copy of a comment I made to the backreaction blog:
http://backreaction.blogspot.com/2008/04/modelsandtheories.html
“The theory of the luminiferous aether on the other hand is a theory as well, but one that was proved wrong with the experiment by Michelson and Morley [3].”
The Wikipedia page you link as ref [3] states:
… A possible explanation was found in the Fitzgerald–Lorentz contraction, also simply called length contraction. According to this hypothesis all objects physically contract along the line of motion (originally thought to be relative to the aether), so while the light may indeed transit slower on that arm, it also ends up travelling a shorter distance that exactly cancels out the drift. In 1932 the Kennedy–Thorndike experiment modified the Michelson–Morley experiment by making the path lengths of the split beam unequal, with one arm being very short. In this version a change of the velocity of the earth would still result in a fringe shift except if also the predicted time dilation is correct. Once again, no effect was seen, which they presented as evidence for both length contraction and time dilation, both key effects of relativity.
Ernst Mach was among the first physicists to suggest that the experiment actually amounted to a disproof of the aether theory. …
I’ve never seen a clear explanation as to how the Michelson-Morley experiment disproves the aether; it’s more a case that the Michelson-Morley experiment has to be interpreted with the false assumption that there is no length contraction, in order to get rid of the aether.
Empirically, there is length contraction, and there is evidence in general relativity for a differential geometry (or a spacetime fabric manifold) that gets curved by the presence of mass, etc. But Einstein’s relativity, both special and general, is classical physics. It doesn’t involve quantized fields.
As soon as you start looking at spacetime as involving quantized gravity fields, with gravitons and other field quanta being exchanged between particles to produce forces, the whole concept of the universe being based on classical differential geometry evaporates.
Now, what effects will occur if an atom moves through a quantized gravity field? If it’s going to be running into a graviton field, it’s going to encounter graviton impacts more frequently in the direction of motion than in other directions, like someone running in the rain, so will that type of effect distort a moving particle’s shape and flatten it? Clearly, general relativity tells us a bit about the correct way to model such fields as a whole: the source term for the gravitational field is best modelled by analogy to a perfect, frictionless fluid.
It seems that since the field quanta, gravitons, require a velocity c to carry the gravitational field at the correct velocity to match empirically confirmed predictions of general relativity, the various effects on an atom trying to move in such a graviton-populated field will depend on the ratio of the velocity of the atom, v, to the velocity c. The Lorentz-FitzGerald factor [1 − (v/c)^2]^0.5 models the physical effects of the spacetime fabric on moving matter, such as slowing down internal motions and contracting the length in the direction of motion.
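The behaviour of the Lorentz-FitzGerald factor quoted above is easy to tabulate. A minimal sketch (the sample speeds are arbitrary examples): the factor sqrt(1 − (v/c)^2) multiplies lengths along the motion, and its reciprocal (gamma) gives the mass increase and time dilation.

```python
import math

def contraction_factor(v_over_c):
    """sqrt(1 - (v/c)^2): the fraction of rest length remaining
    along the direction of motion at speed v."""
    return math.sqrt(1.0 - v_over_c ** 2)

# For each speed (as a fraction of c), print the length-contraction
# factor and its reciprocal, the Lorentz factor gamma:
for beta in (0.1, 0.5, 0.9, 0.99):
    f = contraction_factor(beta)
    print(beta, f, 1.0 / f)
```

At v = 0.6c the factor is exactly 0.8 (the classic 3-4-5 case), and it falls towards zero as v approaches c, which is the barrier discussed earlier in this post.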
The reason why Einstein’s 2 postulates became popular in place of the contraction- and time-dilation-generating aether of FitzGerald (1889) and Lorentz (1893) is that there was a “cosmic landscape” of 200 versions of the aether theory (according to Eddington’s 1920 book, Space, Time and Gravitation), and no experimental way to pick out the correct aether (if any of the models in the known landscape was correct).
However, since the advent of the Standard Model, there is some evidence for the kind of theory describing the general nature of force fields in the vacuum: gauge theories of the Abelian and Yang-Mills variety. This gives a model of the general type of process involved in generating forces: gauge bosons (field quanta) are exchanged between charges.
So there is now more hope that progress could be made towards picking out a physical model of the vacuum that could be experimentally validated.
Mach was the guy who claimed that atoms were pseudoscience, just because he had never laid eyes on one. This kind of proud hostility against admittedly unfashionable “crackpottery” sent Boltzmann over the edge.
Excessive rigor in science is as harmful as excessive open-mindedness. There are lots of real cranks out there who are wrong because they don’t like the big bang theory, so they claim that the experimental evidence for it is inadequate, or they make up some false claim that it is based on shoddy evidence.
Because they’re being excessively critical, they wrongly believe they’re being extremely scientific. The problem they have is that they don’t have an alternative idea that is as simple as the big bang and makes as many or more accurate predictions. So they’re the crackpots, not those working on the big bang theory.
In the case of special relativity, you derive some equations based on two postulates. Fine, no problem. What goes wrong is if you try to claim that you’ve disproved the aether because you can derive the 1889 aether theory formulae of FitzGerald and Lorentz from Einstein’s two postulates, which don’t involve the aether.
There are many other examples of cases where it is possible to derive accurate formulae with different arguments, using different assumptions in mathematical physics. If there is no difference between the results, you can’t arbitrarily argue that one method of deriving the equations is wrong and another is correct.
They may all be right within the realm of validity of the assumptions made. If one method of derivation is ultimately wrong, it might not be the one you think is wrong based on what the consensus of fashionable opinion is.
What do you think of this? These are some ideas from one of my physics professors at the university (he’s considered a crackpot by my other physics professors), but who knows, maybe he’s right; I cannot judge. I have found some ideas like yours, mister Cook:
http://grandcosmos.org/english/index_ang.htm
Hi Laurent,
Thanks for that link. I had a quick look at the material, but it will take some time to read it carefully. My first reaction is that most of the material on the page http://grandcosmos.org/english/index_ang.htm appears to be numerology, which is of interest if some kind of feasible physical mechanism can be suggested for the numerical coincidence, or if the numerical coincidences can be generalised by a formula which then makes other predictions.
Objections to numerology are sometimes exaggerated, with the critics ignoring the fact that empirical approximate formulae often precede more rigorous physics.
The initial evidence Bohr had for his model of the atom came from the empirical formula for the Balmer line spectra constructed by Rydberg. This had no theoretical basis at the time it was produced, but later Bohr was able to deduce his atomic model from it.
Similarly, the first efforts to correlate the integer masses of the elements ran into enormous difficulties because of the isotopes of chlorine, and binding energy effects which mean that many elements don’t appear to have integer masses relative to hydrogen. It took a lot of work by many people to secure the periodic table, and to eventually explain it many years later using quantum numbers and the Pauli exclusion principle.
In a sense, it could be argued that Newton’s formula for gravity is a piece of numerology, because it’s empirically based on Kepler’s laws, which are based on observations. In general, a great deal of physics, including much of quantum mechanics, is based on mathematical models that are both based upon and justified by empirical evidence, not by an axiomatically derived theory in the strict mathematical sense. Special relativity is based on two axioms, but it applies to a spacetime free of accelerations and thus curvature, which is not this universe.
On the subject of the “crackpot” label, that’s just an ad hominem attack on a person, not a scientific attack on particular ideas, and it is not really impressive one way or the other. There is a general feeling in physics that any idea which doesn’t build entirely on currently accepted foundational concepts is wrong, and the author is a “crackpot”.
Obviously to a student it is important that a large proportion of time is invested in mainstream consensus, the standard textbooks, and understanding the most popular ideas, so you can do well in exams.
However, it’s also clear that classical physics like general relativity is built on differential geometry, which at best is just an approximation to effects of quantized interactions in particulate vacuum fields. When departing from mainstream orthodoxy, it is important to try to come up with experimentally defensible ideas.
I think it’s sometimes important to try to find original ideas in science, instead of sticking to the mainstream party line and being a clone.
Mainstream string theory ( http://quantumfieldtheory.org/ ) is interesting because it sets out to defy Occam’s razor of economy in science by postulating many extra spatial dimensions that nobody has ever observed, in order to explain the speculation of spin-2 gravitons, which again nobody has ever observed. In addition, it builds up a detailed picture of fundamental particles as invisibly small (Planck scale) compactifications of the extra spatial dimensions nobody has ever seen. This violates Mach’s dictum, upon which special relativity is based, because it speculates in great detail about things that can’t, even in principle, ever be observed.
Finally, string theory fails to make any predictions because the extra dimensions can be arranged in 10^500 different ways, each having different physical properties. So the string theorists “explain” away the extra-dimensional problems by speculating that we live in a multiverse of 10^500 distinct universes, each having different parameters of compactified extra spatial dimensions, which can be plotted to produce a “cosmic landscape” of different values of the cosmological constant. Our universe is supposed to be at the bottom of a deep valley in this cosmic landscape, where the cosmological constant is suitably small. How convenient can you get! This again seems to violate Occam’s razor of economy when making speculations. It makes no falsifiable predictions, and it’s not based upon facts, just speculations about unification occurring when force strengths are similar, which seems to ignore energy conservation for the mechanisms (like vacuum polarization in strong fields) which lie behind running couplings. If you want a good example of a truly “crackpot” theory, it is mainstream string theory. It combines the worst aspects of all varieties of crackpotism in one place.
However, it is not considered to be good form to denounce it as crackpot, because so many people earn a disreputable living by promoting it (just as is the case with astrology and pseudodemocratic politics). I actually once had a girl I fancied think I meant “psychics” when I wrote I had an interest in “physics”. That’s how low the status of physics has sunk today due to mainstream myths being promoted as if they were facts.
copy of a comment in the moderation queue to:
http://www.math.columbia.edu/~woit/wordpress/?p=673#comment36683
nigel cook
April 10th, 2008 at 5:45 am
That’s a worthless book review because it doesn’t address the main issue. It’s like an ad hominem attack, trying to debunk one thing by attacking something else. The alleged trivial errors aren’t necessarily a sign that the big arguments are wrong, and I think the reviewer is incompetent to review the book because he ignores the main arguments altogether.
It’s simply not good enough to spot a few trivial errors in a book and then try to discredit the main message as suspect. The amount of polishing and error checking of a book written by a very busy author is mainly down to the publisher’s editor, not the author. I read a couple of earlier popular books by Professor Kaku, and found them to be well written. The fact that he writes a lot about speculative (non-predictive) stuff like string theory is the reason why he is so popular. Fiction outsells fact, and you can’t debunk it. If fiction sells, someone will write and publish it.
copy of a comment in the moderation queue to Carl Brannen’s blog:
http://carlbrannen.wordpress.com/2008/04/09/thequantumzenoparadoxoreffect/
“And therefore, there are an infinite number of distances to be travelled and the arrow could never reach the target.”
Physically, Zeno was making a massive leap in assuming that there is some sense in dividing a journey into an infinite number of infinitesimal steps, and he was making an error in assuming that an infinite number of infinitesimal steps can’t be made in a finite amount of time.
I agree with you that Zeno’s crazy argument has echoes of quantum mechanics in it.
While I agree with quantum mechanics as an approximation for electrons in atoms and for alpha decay by quantum tunnelling, I prefer the Feynman interpretation of what is going on: the crazy looking effects are due to the virtual particles in the Dirac sea (or however the vacuum should be described physically) getting involved with fundamental particles. Virtual particle effects cancel out on large scales, just like the impacts from air or water molecules can be described classically as a continuous pressure on large scales. On small scales, they cause chaos and introduce unpredictability unless you average out the motion, just like Brownian motion which can be treated statistically by a path integral.
On the exponential radioactive decay equation, it’s maybe worth making the physical (not mathematical) comparison to the dynamics of a capacitor discharging.
A simple charged capacitor is two charged conductors separated by an insulator, which at its simplest can be just a distance of vacuum. Hence a capacitor can physically be composed of charges separated by a distance of vacuum.
A nucleus about to decay by emitting a charged particle is conceptually somewhat similar to a capacitor plate about to discharge. We have to remember that in the universe there are similar numbers of positive and negative charged particles, so the “other” capacitor plate to balance the nucleus we are focussing on is elsewhere.
The analogy is interesting because if you have a large number of radioactive nuclei, they do decay as a whole giving the exponential decay curve, just as you get when a capacitor is discharged.
Physically, when a capacitor discharges, charge is removed from it.
Now here’s the fun part. Charge comes in lumps! At no time does a charged capacitor plate contain a continuously variable amount of charge. It only ever contains a discrete number of electrons. The maths of exponential decay of charge in the capacitor is a fiction. The amount of charge must fall in discrete steps as each electron leaves. So the classical theory of the exponential decay of charge in a discharging capacitor is a largenumbers approximation.
If you shrink the capacitor plates so that you are considering a tiny capacitor with just a few charges in it, the fall of charge when each electron leaves will no longer be a smooth exponential curve, but a series of steps.
In other words, the true model for the capacitor is not the differential model given at http://hyperphysics.phyastr.gsu.edu/Hbase/electric/capdis.html
Current is quantized into lumps (fundamental particles like electrons), so it’s definitely not possible to represent it exactly by I = dQ/dt where Q is charge. Q is not a continuous variable; dQ doesn’t exist, because the smallest amount of Q that exists is the charge of a single fundamental particle. This smashes up the mathematics of the calculus completely. You can only apply the calculus statement I = dQ/dt as a crude approximation where you can ignore individual charges (fundamental particles), because the number of charges flowing is extremely large.
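The staircase-versus-exponential point above can be demonstrated with a short simulation. This is a sketch under simple assumptions (each discrete charge leaves independently with a fixed probability per time step; all parameter values are illustrative): the charge falls in steps, and only for large numbers of charges does the staircase approach the smooth exponential.

```python
import math
import random

def stepwise_discharge(n_charges, p, n_steps, seed=42):
    """Return the list of remaining-charge counts after each time step,
    where each remaining charge independently leaves with probability p
    per step (a discrete, stepwise 'discharge')."""
    rng = random.Random(seed)
    remaining = [n_charges]
    n = n_charges
    for _ in range(n_steps):
        n -= sum(1 for _ in range(n) if rng.random() < p)
        remaining.append(n)
    return remaining

big = stepwise_discharge(100000, 0.1, 20)
# With many charges, the fraction remaining tracks (1-p)^t, the
# discrete analogue of the smooth exponential exp(-p*t):
for t in (0, 5, 10, 20):
    print(t, big[t] / big[0], (1 - 0.1) ** t)
```

Rerunning with a handful of charges (e.g. `stepwise_discharge(5, 0.1, 20)`) gives an obviously jagged staircase, which is the point being made: the exponential is a large-numbers approximation.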
The (incorrect) non-stepwise formula for the proportion of charge remaining in a capacitor of capacitance C when discharged through resistance R for t seconds is:
exp[−t/(RC)]
Catt has analysed the capacitance of a fundamental particle with respect to the surrounding universe, see http://nige.files.wordpress.com/2008/04/http___wwwivorcattcom_1_3.pdf
Basically, you consider a capacitor made of two concentric spherical shells (each being a capacitor plate, with vacuum dielectric between them) with radii A (inner shell) and A + B (outer shell):
Capacitance, C = 4*Pi*[permittivity of free space]*A*(A + B)/B.
Where the outer charge shell is at large distance compared to the inner charge shell, as is the case when dealing with an isolated nucleus, with a shell of electron charges at distances many times the radius of the nucleus, A + B ~ B, so:
C ~ 4*Pi*[permittivity of free space]*A.
So we can calculate the capacitance of the nucleus of radius A. We only now need to estimate the resistance R against the alpha or beta particle being emitted, in order to perfectly represent radioactive decay as an electrical discharge of a charged capacitor platelike nucleus.
The product RC in the capacitor discharge formula is identical to the mean life of a radioactive atom, which for exponential decay is always bigger than the statistical half-life by a factor of 1/ln 2 = 1.44. Hence the radioactive half-life is predicted to be RC*ln 2 = 0.693*RC.
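The two formulas above combine into a short calculation. A minimal sketch (the nuclear radius is a typical illustrative value, and the resistance used here is an arbitrary placeholder, not a value derived anywhere in this post): the isolated-sphere capacitance C = 4π·ε₀·A, and the half-life RC·ln 2 in the capacitor-discharge analogy.

```python
import math

EPS0 = 8.854187817e-12  # permittivity of free space, F/m

def nuclear_capacitance(radius_m):
    """C = 4*pi*eps0*A for a spherical 'plate' of radius A whose
    opposite plate is at a much larger distance (B >> A limit)."""
    return 4.0 * math.pi * EPS0 * radius_m

def half_life(resistance_ohm, capacitance_f):
    """t_half = R*C*ln(2), the capacitor-discharge analogue of the
    radioactive half-life."""
    return resistance_ohm * capacitance_f * math.log(2)

A = 7e-15                    # a typical nuclear radius, ~7 fm (example)
cap = nuclear_capacitance(A)
print(cap)                   # of order 1e-24 farads
print(half_life(1e30, cap))  # placeholder R, illustrative only
```

The capacitance of a nucleus comes out around 10^-24 farads, so any realistic half-life would require an enormous effective resistance R, which is what the Ohm's law estimate further down tries to pin down.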
Estimating the electrical resistance, R, to emission that a charged particle in the nucleus experiences is an interesting problem. Using Ohm’s law:
R = V/I.
For alpha decay, pion exchange provides the strong attractive nuclear force which acts against radioactive decay, while Coulomb repulsion acts in favour of it. Nucleons have quantum numbers in the nucleus and bunch up into stable configurations like alpha particles in the outermost (most weakly bound) nuclear shells, in the nuclear shell model, which is somewhat similar to the model of electron shells.
The Coulomb electric force field acts to accelerate the alpha particle away from the radioactive nucleus, while the strong nuclear force mediated by pions is effectively the source of electrical resistance, by hindering the motion of the alpha particle away from the nucleus.
This can be worked out by calculating the repulsive Coulomb force and attractive strong force operating on the alpha particle in the outer shell of the nucleus as it moves outward; the velocity the alpha particle gains will be equivalent to the drift velocity a charge in a circuit gains as a result of the balance between acceleration from the electric field and deceleration due to drag effects like collisions.
Since R = V/I, the calculation is fairly easy.
“The difference in voltage measured when moving from point A to point B is equal to the work which would have to be done, per unit charge, against the electric field to move the charge from A to B.” – http://hyperphysics.phyastr.gsu.edu/hbase/electric/elevol.html (see also http://hyperphysics.phyastr.gsu.edu/hbase/electric/ev.html#c2 )
So all we have to do is work out the ratio of the voltage (from the electrical work done as an alpha particle is repelled away from a nucleus) to the effective electric current constituted by that moving charged alpha particle. This should allow the half-life to be calculated without the usual quantum mechanical Gamow obfuscation of quantum tunnelling. I think that, mechanically, quantum tunnelling does make some sense: it works because the force fields aren’t smooth, continuously operating forces. Instead, they are quantum fields of virtual particles acting at random intervals. On large scales they mimic classical approximations, but one difference is that they can’t always trap an alpha particle in the nucleus. The rate of exchange of pions and electromagnetic gauge bosons is irregular and random, like the irregular clicking of a Geiger counter. Sometimes there are random intervals with very few interactions, when an alpha particle has a chance to escape from the nucleus.
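The idea that escape happens during random lulls in the field-quanta exchange can be illustrated with a toy Monte Carlo model (the per-step escape probability here is an arbitrary assumption for illustration, not derived from nuclear physics): each trapped alpha particle gets an independent small chance of slipping out in every time step, and the escape times automatically come out exponentially distributed, with a mean life of 1/p steps.

```python
import random

def simulate_escape_times(n_nuclei, escape_prob, max_steps=1_000_000):
    """Toy model: in each time step, a trapped alpha has probability
    escape_prob of escaping during a random lull in the pion-exchange
    field.  Escape times then follow a geometric (discretised
    exponential) distribution with mean life 1/escape_prob steps."""
    times = []
    for _ in range(n_nuclei):
        for step in range(1, max_steps + 1):
            if random.random() < escape_prob:
                times.append(step)
                break
    return times

random.seed(1)
times = simulate_escape_times(n_nuclei=10_000, escape_prob=0.01)
mean_life = sum(times) / len(times)  # theory: 1/0.01 = 100 steps
```

The half-life of this simulated population is ln 2 times the mean life, matching the RC*ln 2 relation above.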
Update (12 April 2008):
Weinberg deals with the electroweak mixing angle in section 21.3 of vol. 2 (Modern Applications) of The Quantum Theory of Fields (CUP, 2005). He fails to make it physically clear that the mixing angle controls the mixing of the neutral weak gauge boson of SU(2) with the U(1) boson. It’s just ad hoc mathematical modelling, without any physical insight into mechanisms. Weinberg isn’t interested in physics, just mathematical modelling that is full of ad hoc epicycles and adjustments that are convenient, but not demonstrated to be real. The man is a juggler. Section 8.5 of Ryder’s Quantum Field Theory (CUP, 2nd ed., 1996) also includes the Weinberg mixing angle in an obfuscating way. It appears in an equation without physical explanation.
My draft book section (linked to in the first section of this blog post) is now obsolete. I’m going to replace it with something in a different format, which starts instead by deriving the physics of the path integral. I’ll begin with the use of path integrals in Brownian motion, and then extend this physical concept to the vacuum dynamics of gauge bosons (field quanta) randomly affecting the path taken by real particles. This is the ideal starting point, because the influence of gauge bosons on moving real particles is central to everything in the physical applications of quantum field theory.
One other thing that has changed recently is my outlook on human prejudices. I’ve always hated stereotyping because it’s so ignorant. For example, I had a hearing defect which affected my speech when a kid, and the inability to hear clearly reduced my response times in classes (I would have to think hard to work out about half the words that the teacher was saying from the context of the other words which I could understand). People never had any time to grasp what the problem was, which was frequency distortion due to eustachian tube blockage with fluid in the ears. This reduces your ability to hear higher frequencies, so you only hear (and mimic in speech) the lower frequencies, causing a speech defect and also preventing you from being able to understand many words, particularly when young when you are being introduced to many new words in class every day. Most people jump to conclusions and assume that you are stupid or deaf, and shout louder (which increases the volume of the distortion you are hearing, instead of reducing the distortion). Then, because it reduces the rate at which you can learn, it means that you end up with lower grades in exams or having to resit exams. So the brilliant university admission tutors stereotype you as someone who isn’t interested in studying physics, and you get rejections because of their ignorant reading of exam results and the fact you had to resit a maths exam. So when you do eventually get somewhere, they think they are doing you a favour by letting you on a course you don’t want to do, which you’ve worked hard to get on all your life. In the process, you don’t have any time for any social life, and miss out on social interactions and dating entirely. But what’s interesting about this is that stereotyping is a very natural thing that most people do all the time, saving themselves time by not having to actually bother finding out the facts.
The whole logic of stereotyping is that you start out with a denial that people are different, let alone special. Then you come up with a few categories or templates that represent all people. For example, racial stereotyping, disability stereotyping, etc. Then you refuse to discuss any particular person as if they were an individual human being. Instead, you divert your attention to a hate-filled stereotyping template that you have constructed, and talk about the presumed deficiencies in that category of people, instead of bothering to check the facts concerning the individual person. This is a variant of the ad hominem fallacy (if someone comes up with detailed reasons why your argument is wrong, instead of trying to discuss the details objectively, you instead observe that the person is of an inferior race, and then you launch into your standard arguments for why that particular race is stupid and not worthy of being listened to). However obvious it is that stereotyping is defective, people still use it because it saves them so much time and effort.
Copy of relevant parts of a recent email of mine to SM:
… I agree with what Simone Weil writes in the quotation you provide. In science, the human failing is all too often to give mere unproven assumptions and mere conceptual frameworks the status of a religion. There is an obvious contradiction present when you hear people speaking about “defending science”. Religion is something that needs to be defended. If science is based on facts, then what defence does science require? Surely science is not about defending or attacking, but is instead about finding out and stating the facts. You can’t attack or defend facts of this world. If there is ice at the North Pole, that’s not something to be defended. There either is ice there, or not. You simply need to show the evidence. If you don’t have evidence, or you have poor “evidence”, then it is not science that you are defending yourself from if people criticise your work; it is instead your poor work which you are trying to defend. Any scientist who talks about defending science is really losing grip on what science is all about, which is not party politics or marketing or spin/hype, but finding out facts and stating those facts as lucidly as possible.
The first time I felt uneasy about this was around 1992, when I read Dr Conrad Longmire’s first IEEE-published paper on the electromagnetic pulse from a nuclear explosion. Dr Longmire discovered the mechanism for the electromagnetic pulse from high altitude explosions, where electrons get deflected by the Earth’s magnetic field and this deflection causes them to emit electromagnetic radiation. His paper is actually very good in many ways, but he let himself down badly in the acknowledgements section, where he pays tribute to a colleague who helped to “defend electromagnetic pulse theory” from criticisms that the theory should have been formulated in a different way. I think that in science nobody should try to force the development of a subject to be based on the first successful approach for the rest of time. That’s not science. People should forever be allowed to criticise scientific theories, and to try to reformulate them in a better way. Otherwise what happens is that science becomes too much of a human enterprise, doomed to be a religion based on the authority of pioneers. In the case of Dr Longmire’s paper, he used Maxwell’s equations in such a way as to ignore the summing of the electromagnetic pulse from individual electrons: he treated the electrons as a mathematically continuous electric current, applied the equations and got the right answer. (Critics were worried about how the electron deflections lost their coherence when the conductivity rose.)
Ultimately there must be three different influences upon the scientist: peer-pressure, media interest, and factual evidence. Peer-pressure (from colleagues, editors and journal peer-reviewers) as well as media interest are major influences upon the ability of a scientist to get funding and do research in order to actually obtain factual evidence. Because any kind of serious research requires time and money, many important projects are stuck in a vicious circle – whereby sufficient factual evidence to overcome ignorance and apathy on the parts of the peer group and the media is not forthcoming because there is no money to get it. This is the problem many people had who criticised Ptolemy’s epicycle model of the planets orbiting the Earth. Ptolemy’s system literally became mixed into religion, and it was a heresy to try to work on alternative ideas.
Peer-pressure to conform to prejudiced frameworks shuts down whole avenues of promising factual research, just because such areas were deemed unfashionable back in 1927. E.g., we know in quantum field theory that spacetime is not smooth on small scales, and therefore spacetime is not classically curved (the spacetime curvature being literally represented by a curved line of the position of a small free-falling particle on a graph of time versus one spatial dimension). Instead, all understood forces occur as the result of the exchange of quantized radiation (named gauge bosons by Hermann Weyl). The repeated exchanges of such gauge bosons between the fundamental particles in atoms cause the forces of nature. These aren’t smoothly operating; they can’t be represented accurately by differential equations, except as a rough approximation for very large numbers of gauge bosons (by analogy, on large scales you can ignore the individual, impulsive impacts of air molecules hitting the sail of your boat, and you can instead approximate the force as a continuously acting entity, F = air pressure * surface area). But on small scales, this approximation is nonsense. The individual impacts become important on small scales, and can’t be ignored as they can on large distance scales. By the analogy of gauge bosons to air molecules, the overall force appears pretty constant when you have billions of them striking a large area every second, but it becomes erratic and chaotic when the air molecules are hitting a very small target such as a fragment of a pollen grain (5 microns in diameter or less); in that case, the rate at which individual air molecules strike the particle is not averaged out properly on all sides. So the pollen grain fragment doesn’t stay still. It undergoes a zigzag trajectory at random. This is the Brownian motion.
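The averaging argument in the air-molecule analogy can be checked numerically. This is a minimal sketch with unit ±1 impulses (not a physical simulation of real molecular speeds): the relative fluctuation of the net force falls off as 1/√N, so a sail hit by enormous numbers of molecules feels a smooth pressure, while a pollen-sized target hit by few molecules per interval jitters.

```python
import math
import random

def rms_relative_force(n_impacts, trials=200):
    """RMS of (net impulse / n_impacts) over many trials, where each
    molecular impact delivers a random +1 or -1 unit impulse.
    Simple statistics predicts roughly 1/sqrt(n_impacts)."""
    total_sq = 0.0
    for _ in range(trials):
        net = sum(random.choice((-1, 1)) for _ in range(n_impacts))
        total_sq += (net / n_impacts) ** 2
    return math.sqrt(total_sq / trials)

random.seed(0)
small_target = rms_relative_force(100)     # ~0.1: erratic, Brownian regime
large_target = rms_relative_force(10_000)  # ~0.01: nearly smooth pressure
```

A hundredfold increase in the number of impacts cuts the relative fluctuation roughly tenfold, which is why the continuous-pressure approximation works for the sail but fails for the pollen fragment.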
Clearly, electrons undergo chaotic motion on subatomic scales due to this same effect. There is no magic involved; quantum field theory is based on exchanged gauge bosons causing forces. The simplest model of this gives you a chaotic motion of small particles, such as inside the atom. This is completely compatible with Feynman’s path integrals formulation of quantum field theory, which is the mainstream mathematical model of quantized fields.
However, because Feynman’s path integral originates from 1948 and was originally rejected by Bohr, Pauli, Oppenheimer, etc., for various prejudiced reasons (Bohr claimed that i