Kepler’s law (following on from previous post)

The previous post, https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl, has led to an interesting development.

Dr Thomas R. Love of California State University, Dominguez Hills, writes in an email to me: ‘Consider a planet of mass m, orbiting a star of mass M at an average radius r. The theorem of equipartition of energy requires that the average kinetic energy is equal to the average potential energy [this is because the kinetic energy of a body moving at the escape velocity v = (2GM/r)^(1/2) is exactly equal in magnitude to its gravitational potential energy – the energy you would need to throw an object up to an infinite height, which by energy conservation equals the energy an object gains by falling from an infinite height – so an object in orbit has kinetic energy equal in magnitude to its potential energy, E = (1/2)mv^2 = (1/2)m((2GM/r)^(1/2))^2 = mMG/r]:

(1/2)mv^2 = mMG/r

cancelling the m

(1/2)v^2 = MG/r

Since the orbit is close to being a circle, we can take the average velocity to be:

v = 2πr/T

where T is the period.  Substitute

(1/2)(2πr/T)^2 = MG/r

and simplify to obtain:

r^3 = MGT^2/(2π^2)

which is Kepler’s law.’
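As a numerical sanity check of the algebra, one can solve the starting equation for T and confirm that the simplified result follows exactly. The solar-system values below are my own illustrative choices, not numbers from the email; note for comparison that the textbook form of Kepler's third law, derived from the circular-orbit balance mv^2/r = mMG/r^2, has 4π^2 in the denominator rather than 2π^2 – a factor of two of the kind discussed a couple of paragraphs below.

```python
import math

# Numerical check of the derivation above, using illustrative values
# (these particular numbers are my own choice, not from the post):
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30      # one solar mass, kg
r = 1.496e11      # 1 AU, m

# Solve the starting equation (1/2)(2*pi*r/T)^2 = M*G/r for the period T
T = 2 * math.pi * r / math.sqrt(2 * M * G / r)

# The simplified result r^3 = M*G*T^2/(2*pi^2) should then hold exactly
lhs = r**3
rhs = M * G * T**2 / (2 * math.pi**2)
print(abs(lhs - rhs) / lhs)   # ~0: the simplification checks out
```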

This is a nice extension of the idea in the previous post in this weblog.  I’ve sent Dr Love an email stating that if you next consider a photon orbiting the mass M, by simply setting v = c and using Einstein’s equivalence for mass, m = E/c^2, then (1/2)mv^2 = mMG/r immediately gives you the correct black hole event horizon radius that general relativity predicts, namely: r = 2GM/c^2.
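For a concrete figure (my own illustrative calculation, not from the email), putting one solar mass into r = 2GM/c^2 recovers the familiar ~3 km Schwarzschild radius:

```python
# Evaluate r = 2GM/c^2 for one solar mass
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # kg

r_s = 2 * G * M_sun / c**2
print(r_s)   # about 2.95e3 m, i.e. roughly 3 km
```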

This implies that the effective kinetic energy of a photon is E = (1/2)mc^2 = (1/2)pc (because the photon has no rest mass – whatever mass is, Higgs field or whatever – the momentum p = mc is less objectionable).  This is half the amount in the usual formula relating the energy of a photon to its momentum, which is E = pc.

The factor of two discrepancy here is due to the fact that the photon is a transverse wave of electromagnetic field energy, so it oscillates at right angles to its propagation direction, and the transverse oscillation carries half of the kinetic energy.  In fact, it has equal energy in its electric and magnetic fields, which oscillate at right angles to one another.  Therefore, the kinetic energy of the electromagnetic vibrations of the photon in the direction of the gravitational field vector (as the photon orbits around the mass) is half its total energy E = pc.

Update (3 October 2006):

The physical dynamics for Dr Love’s (1/2)mv^2 = mMG/r is clearly that gravity is trapping the orbiting mass in a closed orbit.  So if the kinetic energy (1/2)mv^2 of mass m were bigger than its gravitational potential energy mMG/r with respect to the bigger mass (M) that it is orbiting, then it would spiral outwards instead of remaining in a closed orbit.

But if the kinetic energy of the mass m was smaller than its gravitational potential energy with respect to M, then it would obviously spiral inward (until the energies balanced).

See comments on this and the previous post for some more information.  One thing I’d like to add is that in the Yang-Mills gravity dynamics where gauge boson exchanges between masses cause gravity in an orbital situation such as Dr Love considers, the centripetal force (gravity) is often said to be cancelled by a fictitious outward force, called the centrifugal force.  The key equation a = v^2/r leads to F = ma = mv^2/r for this force; see http://en.wikipedia.org/wiki/Centripetal_force for a couple of derivations of a = v^2/r (sadly, both of the Wikipedia derivations are relatively inelegant and ugly, compared to a really nice derivation which they don’t give; sometime I’ll try to add it).
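One way to see a = v^2/r without any elaborate apparatus is simply to differentiate the circular path twice. The quick numerical version below is a sketch of my own (not one of the Wikipedia derivations): it second-differences the position (R cos ωt, R sin ωt) and compares the resulting acceleration magnitude with v^2/R.

```python
import math

# Numerical check of a = v^2/r for uniform circular motion
R, w = 2.0, 3.0        # radius (m) and angular speed (rad/s), arbitrary
v = w * R              # orbital speed
h = 1e-5               # finite-difference step
t = 0.7                # arbitrary instant

def pos(t):
    # position on a circle of radius R at angular speed w
    return (R * math.cos(w * t), R * math.sin(w * t))

# central second difference: a ~ (p(t+h) - 2 p(t) + p(t-h)) / h^2
p0, p1, p2 = pos(t - h), pos(t), pos(t + h)
ax = (p0[0] - 2 * p1[0] + p2[0]) / h**2
ay = (p0[1] - 2 * p1[1] + p2[1]) / h**2
a = math.hypot(ax, ay)

print(a, v**2 / R)   # the two agree (both 18.0 here)
```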

It is then usually explained that the centrifugal (outward) force is an illusion and the real physics is down to the inertia of the mass (and is thus explained by Newton’s 1st law of motion).  However, when you consider the dynamics of gauge boson exchanges causing gravitational mass, you realise by Einstein’s equivalence principle (the equivalence between inertial and gravitational mass) that quantum gravity must explain inertial mass as well as gravitational mass, and must therefore explain Newton’s 1st law of motion.

As we know, at least in the part of the universe we inhabit, any gauge boson radiation exchange causing gravitation and inertia normally occurs with isotropic symmetry, in all directions, with all the other masses in the universe.  Hence, the earth’s radius is simply compressed uniformly by the amount general relativity predicts, (1/3)GM/c^2 = 1.5 mm.  Therefore you only usually feel forces from this Yang-Mills quantum gravity mechanism due to asymmetries, such as the presence of nearby, non-receding masses.  The earth is an asymmetry, and you get pushed towards it, because the earth isn’t receding from us significantly, unlike the distant masses in the universe.  Because the earth isn’t receding in spacetime with a velocity that increases with its apparent time past from us (which would imply a force directed away from us equal to its mass multiplied by the rate of change of velocity as a function of observable time past, F = ma), it doesn’t have a force directed away from us, so the gauge bosons it transmits to us don’t carry a recoil force towards us by Newton’s 3rd law.  Hence, it acts as a shield because it isn’t receding.
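The 1.5 mm figure quoted above for the Earth is easy to verify numerically (a check of my own, using standard values for the constants):

```python
# General relativity's radial contraction of the Earth, (1/3)GM/c^2
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_earth = 5.972e24     # mass of the Earth, kg

contraction = G * M_earth / (3 * c**2)
print(contraction * 1000)   # about 1.5 (millimetres)
```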

The dynamics of inertia are not very simple: http://thumbsnap.com/v/ZF9FQD7v.jpg shows some dynamics but not the FitzGerald-Lorentz contraction of the atoms at different places in the mass.  The orbital speed of the atoms at different places in the mass is slightly different: those further from the origin of the curvature (e.g., the centre of the orbit) move faster than those located closer.  However, the spatial distribution of the atoms in the mass does not vary the overall effect; what counts is the mass and its speed.

When a mass moves along a straight line, the paths of successive gauge bosons emitted perpendicular to its trajectory by the atoms of the mass (which is spread out spatially) are parallel; but when it moves on a curved trajectory, the paths of successive gauge bosons emitted on the side facing the origin of the curvature (say the centre of a circular orbit, or a focus of an elliptical orbit) are not parallel but instead converge at the centre or focus of the orbit.  On the other side of the orbiting mass, successive gauge bosons emitted perpendicular to its direction of motion diverge from one another.  The difference in the angular distribution of the gauge bosons emitted on the two sides of a mass moving on a curved trajectory causes a real centrifugal force, i.e., it is the origin of the inertial force which opposes gravity and keeps the mass orbiting without either falling inward or flying outward.  It is fairly clear that proving this rigorously will be the next step, following the kind of dynamics described at http://feynman137.tripod.com/#a.

If you consider a gyroscope’s physics, see http://www.mariner.connectfree.co.uk/html/gyro.htm, the angular momentum effects are subtle when you get away from mathematical models and try to use simple physical concepts; for example see http://www.newton.dep.anl.gov/askasci/phy99/phy99191.htm:

‘If you push sideways a speeding car you do not expect the path of the car to suddenly change so as to lie along the direction of the push.  Rather, you expect the car to acquire a little extra velocity in the direction of the push, and the combined action of this new velocity and the car’s original velocity to result in a path mostly along the original direction but deflected slightly towards the direction of the push.  The key insight is that a force changes directly the velocity of an object and not its path, and the path only changes eventually, via the change in velocity.’

Professor Eric Laithwaite turned the gyroscope into a tool for mocking the mainstream of physics in the 1974 Royal Institution Christmas Lectures he delivered, causing uproar.  It is dangerous to go down that road, see the videos of the lectures at http://www.gyroscopes.org/1974lecture.asp:

‘Air powered gyroscope (5000rpm – 8lb). Searching for centrifugal force. Gyroscope hanging over the top of a table. Out of balance by 2kg. … Gyroscope on an arm with a second pivot point. Making a body lighter than it is. … Denis lifts a 18lb gyroscope with a 6lb shaft running at 2000rpm. … The energy contained within a gyroscope. … What’s wrong with the scientific world? … Ohm’s law only applies to DC and not AC.’

Laithwaite showed evidence that Newton’s laws don’t apply in situations where the acceleration of mass is changing (they do apply where the velocity is changing).  Laithwaite may have made a mistake in trying to question empirical laws; after all, the equations which Einstein got from special relativity were the already-known FitzGerald-Lorentz contraction and time dilation, and other electromagnetic theory results.  Nobody sensible attacks empirically defensible laws.

Poor old Royal Institution, having such a load of crackpotism transmitted on TV!  Little did they expect a crackpot lecture when they invited the distinguished Professor Laithwaite to explain the gyroscope at the lectures initiated over a century earlier by Michael Faraday.  The problem is that, as you can see in the lectures he gave, he did experiments which were transmitted on TV and demonstrated all of his claims.

(I haven’t replicated Laithwaite’s experiments with gyroscopes, but I can tell you that Ohm’s law only applies to steady state systems: when you send logic pulses, the pulses can be shorter than the size of the circuit, so they certainly can’t tell whether the circuit is complete or not (or what its complete resistance is) when they start out.  In fact, logic pulses start out the same regardless of whether the circuit is complete.  You can send a logic pulse into an unterminated transmission line, which Ohm’s law would say has infinite resistance because the two conductors are separated by insulators.  What happens in this case was worked out by Heaviside around 1875, when he was experimenting with and mathematically modelling the undersea telegraph cable between Newcastle and Denmark.  Heaviside found that electric signals travel at the speed of light, and they have no way of telling, in advance of travelling around the circuit, what the resistance of the complete circuit will turn out to be.  Instead, Heaviside found that there is what he considered – like Maxwell – to be an aetherial effect called impedance, which has the same units as resistance (the ohm) but behaves very differently, being dependent only on the geometry of the conductors involved and the insulator used.)
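Heaviside’s point can be put in code using the standard formulas for a coaxial line: the characteristic impedance Z0 = (L/C)^(1/2) seen by a launched pulse depends only on the conductor geometry and the insulator, not on whatever resistance terminates the far end. The dimensions below are my own assumed example (roughly those of 50-ohm polyethylene coax), not numbers from the post:

```python
import math

# Characteristic impedance of a coaxial line from its geometry alone
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
eps0 = 8.854e-12           # permittivity of free space, F/m
eps_r = 2.25               # polyethylene dielectric (assumed)
b_over_a = 3.5             # outer/inner conductor radius ratio (assumed)

# Per-metre inductance and capacitance of the coaxial geometry
L = mu0 / (2 * math.pi) * math.log(b_over_a)          # H/m
C = 2 * math.pi * eps0 * eps_r / math.log(b_over_a)   # F/m

Z0 = math.sqrt(L / C)          # impedance a pulse sees on launch
v = 1 / math.sqrt(L * C)       # propagation speed in the dielectric

print(Z0)            # close to 50 ohms
print(v / 2.998e8)   # about 2/3 the speed of light
```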

The Royal Institution refused to publish the text of Laithwaite’s lectures (although the lectures were transmitted live on TV by the BBC and video recorded on tape).  Wikipedia states that Laithwaite responded by quoting a cynical comment by quantum field theorist, Professor Freeman Dyson:

“The scientific establishment, in the form of the Royal Institution, rejected his theory and his lecture was not published by the RI. His feelings on this can be seen in one of the 1974-1975 Royal Institution Christmas Lectures which he presented. In an apparent defence of his position he quoted Freeman Dyson: ‘Most of the crackpot papers that are submitted to the Physical Review are rejected, not because it is impossible to understand them, but because it is possible. Those that are impossible to understand are usually published.’ (Freeman Dyson, Innovations in Physics, Scientific American, September 1958).”

(That 1958 Dyson article in Sci. Am. Vol. 199, No. 3, pp. 74-82, is very important historically.  It quotes Niels Bohr’s statement to Wolfgang Pauli: ‘We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct. My own feeling is that it is not crazy enough.’  Dyson also states in the article: ‘I have observed in teaching quantum mechanics (and also in learning it) that students go through the following experience: The student begins by learning how to make calculations in quantum mechanics and get the right answers; it takes about six months. This is the first stage in learning quantum mechanics, and it is comparatively easy and painless. The second stage comes when the student begins to worry because he does not understand what he has been doing. He worries because he has no clear physical picture in his head. He gets confused in trying to arrive at a physical explanation for each of the mathematical tricks he has been taught. He works very hard and gets discouraged because he does not seem able to think clearly. This second stage often lasts six months or longer, and it is strenuous and unpleasant. Then, quite unexpectedly, the third stage begins. The student suddenly says to himself, “I understand quantum mechanics”, or rather he says, “I understand now that there isn’t anything to be understood”. The difficulties which seemed so formidable have mysteriously vanished. What has happened is that he has learned to think directly and unconsciously in quantum mechanical language, and he is no longer trying to explain everything in terms of pre-quantum conceptions.’  This is a gutless surrender to the Copenhagen Interpretation.)

It is significant that Laithwaite was a Professor at Imperial College of London University, which was a hotbed of dissent in theoretical physics: Professor Herbert Dingle was there at the same time (note that the Wikipedia article on him is prejudiced by a disgraceful error that I have pointed out on the discussion page of the article), and also Theo Theocharis, who graduated there in the early 1980s and stayed on – as I understand it – to do a PhD on the errors of stringy stuff in mainstream physics (naturally that had to be stopped).  Theocharis and M. Psimopoulos did succeed in getting an attack on the Copenhagen Interpretation etc. into a peer-reviewed journal: ‘Where Science Has Gone Wrong’, Nature, v329, p595, 1987.  However, that just caused more uproar:

‘Teachers of history, philosophy, and sociology of science … are up in arms over an attack by two Imperial College physicists … who charge that the plight of … science stems from wrong-headed theories of knowledge. … Scholars who hold that facts are theory-laden, and that experiments do not give a clear fix on reality, are denounced. … Staff on Nature, which published a cut-down version of the paper after the authors’ lengthy attempts to find an outlet for their views, say they cannot recall such a response from readers. ‘It really touched a nerve,’ said one. There was unhappiness that Nature lent its reputation to the piece.’ – Jon Thurney, Times Higher Education Supplement, 8 Jan 88, p2. [This refers to the paper by T. Theocharis and M. Psimopoulos, ‘Where Science Has Gone Wrong’, Nature, v329, p595, 1987.]

The dangers of pointing out errors in orthodoxy without correcting them at the same time are potentially massive.  Hans Christian Andersen and George Orwell effectively explain the problem in modern physics between the research and the teaching orthodoxy:

‘The Emperor realized that the people were right but could not admit to that. He thought it better to continue the procession under the illusion that anyone who couldn’t see his clothes was either stupid or incompetent. And he stood stiffly on his carriage, while behind him a page held his imaginary mantle.’ – Hans Christian Andersen, 1837.

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to (an authority) and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, 1949.

String theory being ‘not even wrong’ demonstrates this very nicely.

Gravity Equation Discredits Lubos Motl

Louise Riofrio published some physics papers on a model of cosmology based on a simple relationship

GM = tc^3

But then Assistant Professor Lubos Motl of Harvard University (a string theorist who religiously believes in 10/11 dimensional spacetime, but has no objective evidence for it whatsoever) made some rude sexist remarks about her being female on his Reference Frame blog, and claimed this equation to have no physical connection.  He dismissed it as merely dimensional analysis.  Being thus duped, I stupidly believed him at first, but now it is clear that he was not even wrong:

Simply equate the rest-mass energy of m with its gravitational potential energy mMG/R with respect to the large mass of the universe, M, located at an average distance R = ct from m.

Hence E = mc^2 = mMG/(ct)

Cancelling and collecting terms,

GM = tc^3

So Louise’s formula is derivable, while extra dimensional Lubos is as usual proved to be not even wrong (just like his beloved string theory).  But women physicists are more careful and so more likely to be correct.  They don’t go dismissing things they can’t understand by making a sexist remark, so they are more likely to get the physics correct.

In more detail:

To prove Louise’s MG = tc^3 (for a particular set of assumptions which avoid a dimensionless multiplication factor of e^3 that could be included, according to my detailed calculations, from a gravity mechanism):

(1) Einstein’s equivalence principle of general relativity:

gravitational mass = inertial mass.

(2) Einstein’s inertial mass is equivalent to inertial mass potential energy:

E = mc^2

This equivalent energy is “potential energy” in that it can be released when you annihilate the mass using anti-matter.

(3) Gravitational mass has a potential energy which could be released if somehow the universe could collapse (implode):

Gravitational potential energy of mass m, in the universe (the universe consists of mass M at an effective average radius of R):

E = mMG/R

(4) We now use principle (1) above to set the equations in (2) and (3) above equal:

E = mc^2 = mMG/R

(5) We use R = ct on this:

c^3 = MG/t

or

MG = tc^3

Which is Louise’s equation. QED.
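As a rough numerical check of my own (the input values are assumed, not taken from Louise’s papers): using the ~13.8 billion year age of the universe for t, the implied mass M = tc^3/G comes out around 10^53 kg, the order of magnitude usually quoted for the mass of the observable universe.

```python
# Evaluate M = t*c^3/G for an assumed age of the universe
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
t = 13.8e9 * 3.156e7     # ~13.8 Gyr in seconds (~4.4e17 s)

M = t * c**3 / G
print(M)   # on the order of 1e53 kg
```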

Christine Dantas has a PhD in astrophysics and studies Smolin’s very mathematical loop quantum gravity as an alternative to string theory.  However, after she listed the evidence for loop quantum gravity, Lubos Motl subjected her to dismissive rudeness (calling her guilty of ‘sloppy thinking’ was horribly inaccurate and also hypocritical of Lubos, given his sloppy, uncheckable claims for extra dimensions and string, and his errors such as the example above) combined with his usual loud sexist comments.  Lubos Motl seems determined to stop women rising to prominent positions in physics.  Why does he not want this?  Is it because the hot air of string hype may be reduced, I wonder?

Even his senior at Harvard, string theorist Professor Lisa Randall, states in the preface of her nicely caveated and polished book Warped Passages that she does not entirely agree with Lubos’ view of females, and she does admit the possibility that string may be not even wrong if it can’t be checked.

If Lubos is the role model of macho physics in action, then the future of physics certainly lies with female physicists who don’t allow such hormone-driven prejudices to destroy their objective judgement on scientific matters.  It is largely because of pseudo-macho hype from male string theorists such as Lubos et al. (I won’t mention Witten’s name here because he is nowhere near as rude as Lubos) that mathematical physics gets ever less popular.  (More on the decline: here.)

I’m reading Woit’s course materials on Representation Theory as time permits (this is deep mathematics and takes time to absorb and to become familiar with).  Wikipedia gives a summary of representation theory and particle physics:

‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’

Woit’s historical approach in his course notes is very clear and interesting, but it is not particularly easy to read at length on a computer screen, and ideally should be printed out and studied carefully.  I hope it is published as a book together with his arXiv paper on applications to predicting the Standard Model.  I’m going to write a summary of this subject when I’ve finished, and will get to the physical facts behind the jargon and mathematical models.  Woit offers the promise that this approach predicts the Standard Model with electroweak chiral symmetry features, although he is cautious about it – the exact opposite of the string theorists’ way of doing things; see page 51 of the paper (he is downplaying his success in case it is incomplete or in error, instead of hyping it).

By contrast, Kaku recently hyped string theory by claiming that it predicts the Standard Model, general relativity’s gravity, and lots more; but of course this is completely untrue, because in string theory, to get it to work, you first have to fiddle the dimensions to 10 just in order to produce particle physics and to 11 to produce gravity (although the 10-11 dimensions paradox was allegedly overcome by Witten’s M-theory in 1995, which is a kind of mathematical Holy Trinity).

In no case has the string theory – even once fiddled to a number of dimensions that makes it work “ad hoc” – then managed to make even a single checkable physical prediction!

This is why string theory is a complete disgrace as physics, although Woit (perhaps because he now works in a mathematics department) is always keen to kindly say that at least string theory has led to an increased mathematical understanding of extra-dimensional manifolds.  Solutions to the Calabi-Yau manifold give about 10^500 metastable ground states of the vacuum (and thus 10^500 ‘dark energy’/cosmological constant levels, forming Susskind’s anthropic ‘cosmic multiverse landscape’ of universes!) in an oscillating string, due to the many possible parameters of the 6-dimensional manifold’s size and shape (how elegant and how beautiful … I don’t think).

More: see http://electrogravity.blogspot.com/ and https://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy/

Predictions of Quantum Field Theory (draft introductory passages)

(INSERT ILLUSTRATIONS, MATHS, PROOFS, TABLES FROM MY OLD SITES)

Introduction

Modern physics is based on the implications of quantum field theory, the quantum theory of fields. The mathematical and practical utility of the theory is proved by the fact that it predicts thousands of particle reaction rates to within a precision of 0.2 percent, and the non-nuclear quantum field theory of electrons (quantum electrodynamics) predicts the Lamb shift and the magnetic moment of the electron more accurately than any other theory in the history of science.

Paul Dirac founded quantum field theory by guessing a relativistic, time-dependent, particle-applicable version of the Schroedinger wave equation (which modelled the electron in the atom). Schroedinger’s equation could not be used for a free particle because it was non-relativistic, so its solutions were not bounded by the constraint of ‘relativity’ (or spacetime) whereby changing electromagnetic fields are restricted to propagate only the distance ct in the time interval t.

Spectacular physical meaning was immediately derived from Dirac’s equation because it implied the existence of a bound state electron-positron sea in the vacuum throughout spacetime, and predicted that a gamma ray of sufficient energy travelling in a strong electromagnetic field – near a high atomic (proton) number nucleus – can release an electron-positron pair. This creation of positrons (antimatter electrons, positively charged) was observed in 1932, confirming Dirac.

Pair production only occurs where gamma rays enter a very strong electric field (caused by the close confinement of many protons in a nucleus), because in a strong electric field the Dirac sea is polarized strongly enough along the electric field lines, weakening the electron-positron binding energy. Polarization consists of the separation of charges along electric field lines. As the average distance of vacuum electrons from positrons is slightly increased, the Coulomb binding energy falls; hence gamma rays with energy above that of a freed electron-positron pair (1.022 MeV) have a significant chance of freeing such a pair. In practical use, this pair production mechanism enables lead nuclei to shield gamma rays with energies above 1.022 MeV. (Of course, electrons in atoms can also shield gamma rays by the Compton and photoelectric effects.)
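The 1.022 MeV threshold quoted above is just twice the electron’s rest-mass energy, m_e c^2, as a one-line check (my own, with standard constants) shows:

```python
# Pair-production threshold = 2 * electron rest-mass energy
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

threshold_MeV = 2 * m_e * c**2 / eV / 1e6
print(threshold_MeV)   # about 1.022 MeV
```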

Gravity (readers should pay special attention to the following!)

Electrons and positrons in bound states in the vacuum take up space, and are fermions; the space taken up by a fermion cannot be shared with another fermion, as demonstrated by the experimental verification of the Pauli exclusion principle. Therefore, when a real fermion moves, it cannot move into a virtual fermion’s space. The vacuum charges are therefore displaced around the moving real charge according to the restraint of Pauli’s exclusion principle. We can make definite predictions from this because the net flow of the Dirac sea around a moving real fermion is (by Pauli’s exclusion principle) constrained to have exactly equal charge and mass but oppositely directed motion (velocity, acceleration) to the real moving fermion. This prediction means that in the cosmological setting, where real charges (matter) are observed to be receding at a rate proportional to radial spacetime, there is an outward force of matter given by Newton’s second law F = mdv/dt = mdc/dt = mcH, where H is Hubble’s constant.
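Evaluating F = mcH for a 1 kg mass gives a tiny but definite number; the Hubble parameter value below (~70 km/s/Mpc in SI units) is my own illustrative assumption, not a figure given in the text:

```python
# Evaluate the outward-force expression F = m*c*H from the text
c = 2.998e8      # speed of light, m/s
H = 2.3e-18      # assumed Hubble parameter, ~70 km/s/Mpc in 1/s
m = 1.0          # test mass, kg

F = m * c * H
print(F)   # about 7e-10 newtons per kilogram
```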

By Newton’s 3rd law, we then find that there is an equal inward reaction force carried by some aspect of the Dirac sea. This predicts the strength of gravity. The duality of this Dirac sea pressure gravity prediction is that the inward reaction force is carried via the Dirac sea specifically by the light speed gauge boson radiation of a Yang-Mills quantum field theory, which allows us to deduce the nature of matter from the quantitative shielding area associated with a quark or with a lepton such as an electron. This gives us the size of a fundamental particle as the black-hole radius, not the Planck length, so we obtain useful information from factual input without any speculations whatsoever.

The Standard Model

The greatest difficulty for a quantum field theory is the prediction of all observed properties of matter and energy, which are summarised by the Standard Model SU(3)xSU(2)xU(1) which is a set of symmetry groups constraining quantum field theory to make contact with particle physics correctly. The problem here is that the symmetry description varies as a function of collision energy or distance of closest approach.

Unfortunately, the Standard Model as it stands does not consistently model all of particle physics because different forces unify at different energies or distances from a particle, which implies that the symmetries are broken at low energy but become unified at high energy. The symmetries, while excellent for most properties, omit masses entirely.

The Standard Model does not supply rigorous or usefully predictive (checkable) mechanisms for electroweak symmetry breaking, which is the process by which the SU(2)xU(1) electroweak symmetry breaks to yield U(1) at low energy. It is obvious that the 3 weak gauge bosons of the SU(2) symmetry are attenuated in the polarized vacuum somehow, such as by a hypothetical ‘Higgs field’ of inertia-giving (and hence mass-giving) ‘Higgs bosons’, but there are no properties scientifically predicted for such a field. The SU(3) symmetry unitary group describes the strong nuclear (gluon) field by means of a new symmetry parameter called colour charge.

Instead of coming up with a useful, checkable electroweak symmetry breaking theory, the mainstream effort has been devoted since 1985 to a speculative, non-checkable hypothesis that the Standard Model particles and gravity can be explained by string theory. One major problem with string theory is that its claim to predict unification at extremely high energy is uncheckable; the energy is beyond experimental physics and would require going back in time to the moment of the big bang, or using a particle accelerator as big as the solar system.

Another major problem with string theory is that its alleged unification of gravity and the Standard Model rests upon unifying speculative (unchecked) ideas about what quantum gravity is (gravitons exchanged between mass-giving Higgs bosons) with speculative ideas concerning 10/11 dimensional spacetime (M-theory). Speculation is only useful in physics where there is some hope of experimental checks. If a speculation is made that God exists, that speculation is not scientific because it is impossible in principle to refute it. Similarly, extra dimensions cannot even in principle be refuted.

Finally, string theory invents many properties of the universe in such a vague and ambiguous way that virtually any experimental result could be read as a validation of some variant of a stringy speculation. Such experimental results could also be consistent with many other theories, so such indirect ‘validation’ will turn physics either into a farce and battleground or into an orthodox religion whose power comes not from unique evidence but from suppressing counter-evidence as a religious-type heresy. Critics of general relativity in 1919 wrongly claimed that there were potentially other theories that predicted the correct deflection of sunlight by gravity, or they disputed the accuracy of the evidence (Sir Edmund Whittaker being an example). However, the starlight deflection in general relativity can be justified on energy conservation grounds, from the way that gravitational potential energy – gained by a photon approaching the sun – must be used entirely for directional deflection and not partly for speeding the object up, as would occur for an object initially moving slower than the speed of light (light cannot be speeded up, so all gained gravitational energy is used for deflecting it). So such local predictions of general relativity are constrained to empirical facts. However, Einstein also ‘predicted’ in 1917 from general relativity that the entire universe is static and not expanding, which is a completely false prediction and was based on Einstein’s ad hoc cosmological constant value.

If Einstein’s steady state theory of cosmology had been defended based on the correct (local) predictions of the theory, then the failure of general relativity as a steady state cosmology may never have been exposed either by experiment (peer-review would suppress big bang crackpotism and force authors to invent ‘epicycles’ to fit experiments to the existing paradigm) or theory (Einstein’s steady state solution was unstable theoretically!).

The problem is that string theory, quite unlike general relativity, cannot even be objectively criticised because it contains no objective physics, it is just a ‘hunch’ to use ‘t Hooft’s description. String theory, explains Woit, is not even wrong. It has no evidence and can never have direct evidence because we can only experiment and observe in a limited number of dimensions which is smaller than the number of dimensions postulated by string theory, and even if it did have some alleged indirect evidence, that would destroy rigor in science by turning it into a religion of those who believe the holy alleged evidence, and those who have alternatives.

This is because almost any evidence can be ‘explained away’ by some of the numerous versions of string theory. Superstring theory is a 10-dimensional theory incorporating supersymmetry, in which every fermion of the Standard Model has a bosonic superpartner. This is supposed to unify the forces, but that cannot be checked, since we cannot measure how forces unify: the energy is far too high. In addition, 6 of the 10 dimensions are rolled up into a Calabi-Yau manifold that has many variable parameters, and hence a vast number of possible states! Nobody knows exactly how many different ground states of the universe are even possible – if the Calabi-Yau manifold is real – but it is probably between 10^100 and 10^1000 solutions. These numbers far exceed the total number of fermions in the universe (about 10^80). There is no way to predict objectively which vacuum state describes the universe. The best that can be done is to plot a ‘landscape’ of solutions as a three-dimensional plot and then to claim that the one nearest to experimental data is the prediction, by the ‘anthropic principle’ (which says we would not exist if the universe were in another state, because the laws of nature are sensitive to the value of the vacuum energy).

By the same scientifically fraudulent argument, a child asked ‘what is 2 + 2?’ would reply: ‘it is either 1, 2, 3, 4, 5, 6, 7 … or 10^1000, the correct answer being decided by whichever solution of mine happens to correspond to the experimentally determined fact, shown by the counting beads!’

Sir Fred Hoyle used the anthropic argument (sometimes falsely called a principle) to ‘predict’ a nuclear energy level of carbon-12 which allows three alpha particles (helium-4 nuclei) to stick together, forming carbon-12 and thus permitting life. He did this simply because his theory would otherwise fail. However, it was a subjective ‘I exist, therefore helium fuses’ prediction, not one objectively based on an understanding of nuclear science. Therefore, Hoyle did not win a Nobel Prize, and his so-called ‘explanation’ of the carbon-12 energy level – despite correctly predicting the value later observed in experiment – does not deliver hard physics.

To be added:

OBJECTIVE DETAILS OF THE STANDARD MODEL (from my old site)

NATURE OF GAUGE BOSONS (new section on physical propagation of polarized radiations)

Quantum field theory phenomenology

I’ve put a sketch of the fundamental forces as a function of distance here, and an article [not] illustrated with that sketch is at http://gaugeboson.blogspot.com/. UPDATE (23 Feb 2007): this illustration is inaccurate in assuming unification.

The GUT (grand unified theory) scale unification may itself be wrong. The Standard Model might not be incomplete in the sense of requiring supersymmetry: the QED electric charge rises as you get closer to an electron because there is less polarized vacuum to shield the core charge. Thus, a lot of electromagnetic energy is absorbed by the vacuum above the IR cutoff, producing loops. It is possible that the short-ranged nuclear forces are powered by this energy absorbed by the vacuum loops.

In this case, energy from one force (electromagnetism) gets used indirectly to produce pions and other particles that mediate nuclear forces. This mechanism for sharing gauge boson energy between different forces in the Standard Model would get rid of supersymmetry which is an attempt to get three lines to numerically coincide near the Planck scale. With the strong and weak forces caused by energy absorbed when the polarized vacuum shields electromagnetic force, when you get to very high energy (bare electric charge), there won’t be any loops because of the UV cutoff so both weak and strong forces will fall off to zero. That’s why it’s dangerous to just endlessly speculate about only one theory, based on guesswork extrapolations of the Standard Model, which doesn’t have evidence to confirm it.

The whole idea of unification is wrong, if the nuclear force gauge bosons are vacuum loop effects powered by attenuation of the electromagnetic charge due to vacuum polarization; see:

Copy of a comment:

http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html

Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, FitzGerald, Lorentz, et al.

The mathematical model follows experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then, the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)

Regarding supersymmetry, which is the example of a theory which makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book Supersymmetry and String Theory: Beyond the Standard Model:

http://golem.ph.utexas.edu/~distler/blog/

“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α_1, α_2 up to the point where they unify, then run α_3 down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”

The issue supersymmetry addresses is getting the electromagnetic, weak, and strong forces to unify at around 10^16 GeV, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that, in the absence of supersymmetry, unification is incompatible with the Standard Model.

The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.

Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pairs points the opposite way. (I.e., the electric field lines point inwards towards an electron. The virtual electron-positron pairs are polarized with virtual positrons closer to the real electron core than virtual electrons, so their electric field lines produce an outward radial electric field which cancels out part of the real electron’s field.)

So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.

Now, what is happening to the energy of the field when it is shielded like this by polarization?

Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?

Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge’s field.

That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.

This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.

Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).

This increase in strength, in turn, means that there is less energy in the smaller distance of vacuum which has been absorbed from the electromagnetic field to produce loops.

As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you have asymptotic freedom effects (obviously this involves gluon and other particles and is complex) for quarks.
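For comparison with the mechanism sketched above, the conventional description of asymptotic freedom is the one-loop running of the strong coupling, which can be evaluated numerically. This is the standard textbook formula, not the energy-absorption mechanism proposed here, and the QCD scale Λ and flavour number are illustrative assumed values:

```python
import math

def alpha_s(Q_GeV, n_f=5, Lambda_GeV=0.2):
    """One-loop QCD running coupling: the strong coupling falls
    logarithmically as the probe energy Q rises (asymptotic freedom)."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q_GeV**2 / Lambda_GeV**2))

# The coupling weakens at higher energy, i.e. at shorter distances:
print(alpha_s(2.0))   # ~0.36 at 2 GeV
print(alpha_s(91.2))  # ~0.13 at the Z mass
```

Either way, the qualitative behaviour is the same as described in the text: the strong charge falls as collision energy rises.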

Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.

These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.

As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is 7% higher at 90 GeV as found experimentally.

The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is – as in Wilson’s argument – no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.

What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.

It is easy to calculate this: the electromagnetic field energy used in creating loops up to the 90 GeV scale, for example, is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is shielded by vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424).

So this physical understanding should be investigated. Instead, the mainstream censors physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.

Supersymmetry shows how all forces would have the same strength at 10^16 GeV.

This can’t be tested, but maybe it can be disproved theoretically as follows.

The energy of the loops of particles which cause nuclear forces comes from the energy absorbed by the vacuum polarization phenomenon.

As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.

Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.

This would seem to discredit supersymmetry, whereby at “unification” energy all forces have the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetric partners, which are not observed, to “explain” unification.

***************************

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of μ = μ_0 for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives μ = μ_0(1 + α/(2π)), which agrees very well with the very accurate measured value of μ/μ_0 = 1.001 …’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + α/(2π) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added α/(2π) links into the mechanism for mass here.
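The quoted figure is easy to check numerically; a minimal sketch:

```python
import math

alpha = 1 / 137.035999           # fine-structure constant
schwinger = alpha / (2 * math.pi)  # first-loop (one vertex) correction
print(1 + schwinger)  # 1.00116..., the value quoted above in Bohr magnetons
```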

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Comment by nc — February 23, 2007 @ 11:19 am  

Return to old (partly obsolete) discussion:

The text of the post at http://gaugeboson.blogspot.com/ is:

For electromagnetic charge, the relative strength is 1/137 for low energy collisions, below E = 0.511 MeV, the so-called ‘infrared’ lower cutoff. Putting this value of E into the formula gives the 10^-15 metre range of vacuum polarization around an electron. For distances within this radius but not too close (in fact for the range 0.511 MeV < E < 92,000 MeV, i.e. 92 GeV), see https://nige.wordpress.com/ for further details and links. In calculating the charges (coupling strengths) for the fundamental forces as a function of distance as indicated above, for all distances closer than 10^-15 metre you need to take account of the charge increase in the formula for closest approach in Coulomb scattering, where the kinetic energy of the particle is converted entirely into electrostatic potential energy, E = (electric charge)^2/(4.Pi.Permittivity.Distance). The electric charge in this formula is higher than the normal charge of the particle once you get within the polarization region, because the polarization shields the charge, and the less polarization there is between you and the particle core, the less shielding of the charge.
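The closest-approach formula just quoted can be evaluated to check the stated 10^-15 metre scale; a minimal sketch, using the ordinary low-energy charge e (i.e. ignoring the polarization correction discussed above):

```python
import math

# Head-on Coulomb scattering: all kinetic energy E is converted into
# electrostatic potential energy e^2/(4*pi*eps0*r) at closest approach r.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
k = e**2 / (4 * math.pi * eps0)  # ~1.44 MeV.fm in nuclear units

def closest_approach_m(E_MeV):
    """Distance of closest approach (metres) for collision energy E (MeV)."""
    return k / (E_MeV * 1e6 * e)

print(closest_approach_m(0.511))  # ~2.8e-15 m, i.e. of order 10^-15 m
```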

The two graphs above on the left hand side are the standard presentation, the sketch graph on the right hand side is a preliminary illustration of the same data plotted as a function of distance from particle core instead of collision energy. Obviously I can easily compute the full details quantitatively, but am worried about what criticism may result from the simple treatment detailed above whereby I am assuming head-on Coulomb scattering. I know from nuclear physics that the scattering may be far more complex and messy so the quantitative details may differ. For example, the treatment above assumes a perfectly elastic scatter, not inelastic scatter, and it deals with only one mechanism for scatter and one force being involved. If we are dealing with penetration of the vacuum polarization zone, the forces involved will not only be Coulomb electric scatter, but also weak and possibly strong nuclear forces, depending upon whether the particles we are scattering off one another are leptons like electrons (which don’t seem to participate in the strong nuclear force at all, at least at the maximum experimentally checkable energy of scatter to date!), or hadron constituents, quarks.

I think the stagnation in HEP (high energy physics) comes from ignoring the problem of plotting force strengths as a function of distance, as I’ve sketched above. Looking at the right hand side force unification graph, you can see that the strong nuclear force charge or coupling strength over a wide range of small distances (note the distance axis is logarithmic) actually falls as the particle core is approached. This offsets the inverse-square law, whereby for constant charge or constant coupling strength the force would increase as distance from the core is reduced. This offset means that over the range where the strong nuclear charge is falling as you get closer to the core, the actual force on quarks is not varying. This is clearly the physical cause of the asymptotic freedom of quarks, when you consider that they are also subjected to electromagnetic forces. The very size of the proton is given by the range to which asymptotic freedom of quarks extends.

I’ve also pointed out that the variations of all these fundamental forces as a function of distance clearly brings out the fact from conservation of energy that the gauge boson radiation which causes forces is getting shielded by vacuum polarization, so that the ‘shielded’ electromagnetic force gauge boson energy (being soaked up by the vacuum in the polarization zone) is being converted into the energy of nuclear force gauge bosons.

These are physical facts. I can’t understand why other people don’t think physically about physics, preferring to steer clear of phenomenology and to remain in abstract mathematical territory, which they believe to be safer ground despite the failure of string theory to actually explain or predict anything real or useful for experiments or for actually understanding the Standard Model.

I’m writing a proper review paper (to replace http://feynman137.tripod.com/ and related pages) on quantum field theory for phenomenologists, which will replace supersymmetry (string theory) with a proper dynamic vacuum model based entirely on well established empirical laws. I’m going to place all my draft calculations and analyses here as blog posts as I go, and then edit the review paper from the results. In the meantime, I’ve re-read Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s Introductory Lectures on Quantum Field Theory and find them a lot more lucid in the June 27 2006 revision than the earlier 2005 version. They now have a section explaining pair production in the vacuum, and give a threshold electric field strength (page 85) of 1.3 x 10^16 V/cm, which is approximately of the order of the electric field strength at 10^-15 m from an electron core, the limiting distance for vacuum polarization (see https://nige.wordpress.com/ , top post, for details).
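The quoted threshold is the standard Schwinger critical field for vacuum pair creation, E_c = m^2 c^3/(e ħ), which can be checked numerically:

```python
# Schwinger critical field for vacuum pair creation: E_c = m^2 c^3 / (e*hbar).
m = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J.s

E_c = m**2 * c**3 / (e * hbar)   # threshold field, V/m
print(E_c / 100)  # in V/cm: ~1.3e16, matching the quoted threshold
```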

The review paper focusses on the links between two approaches to quantum field theory. On one side of the coin, you have the particles in the three generations of the Standard Model, and on the other you have the forces. However, if you can model the forces you will understand the particles, which after all are totally characterised by which forces they interact via. If you can understand physically why pairs or triads of fundamental particles have fractional electric charges (as seen outside the polarized vacuum) and why they interact by strong nuclear interactions in addition to electroweak interactions, while single particles (which don’t share their polarized vacuum region with one or two other particles) have integer electric charges (seen at large distances) and don’t participate in the strong nuclear force, then that is the same thing as understanding the Standard Model because it will tell you physically the reason for the differing electric charges and for the different types of particle charges (strong nuclear force charge is called ‘color charge’, while the gravitational field charge is simply called ‘the inertial mass’).

I think part of the answer is already known at https://nige.wordpress.com/ and http://feynman137.tripod.com/, namely when you have three charge cores in close proximity (sharing the same overall vacuum polarization shell around all of them), the electric field energy creating the vacuum polarization field is three times stronger, so the polarization is three times greater, which means that the electric charge of each downquark is 1/3 that of the electron. Of course this is a very incomplete piece of physical logic, and leads to further questions where you have upquarks with 2/3 charge, and where you have pairs of quarks in mesons. But some of these have been answered: consider the neutron which has an overall electric charge of zero, where is the electric field energy being used? By conservation of electromagnetic field energy, the reduction in electric charge indicated by fractional charge values due to vacuum polarization shielding implies that the energy shielded is being used to bind the quarks (within the asymptotic freedom range) via the strong nuclear force. Neutrons and protons have zero or relatively low electric charge for their fundamental particle number because so much energy is being tied up in the strong nuclear binding force, ‘color force’.

Electroweak symmetry breaking and strong nuclear force

Electric charge of the electron as a function of distance

The electron has a charge which varies with collision energy, increasing by 7% when collision energy is increased to 92 GeV.  This is not to do with relativity (which only varies mass, length, and local time) but is due to vacuum polarization.

Fig. 1: Vacuum polarization around a charge

Fig. 2: The effect of polarization is that the effective electric charge rises in higher energy collisions, as electrons approach closely enough to penetrate part of the vacuum polarization shield before being turned back by the Coulomb repulsive force.  The gauge boson energy absorbed by the vacuum polarization at short ranges appears as other, short-ranged, ‘nuclear’ forces.  The weak force has been unified with the electromagnetic force above a certain unification energy.  Below that energy, this electroweak symmetry is broken, because weak gauge bosons get stopped or shielded by the vacuum somehow (officially this is due to the as-yet-unobserved Higgs field, but that has problems: the official theory says that a single Higgs boson would have infinite mass, and since the Higgs field would cause all masses, it would have to cause gravitational mass as well as inertial mass by Einstein’s equivalence principle of general relativity, i.e. the Higgs field would have to be a quantum gravity theory).  It is possible that particles related to the Z_0 gauge boson fill the vacuum and have mass in their own right, thus playing the role today allocated to the Higgs field by the mainstream.

See http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html for a simple model of vacuum polarization.   A vacuum loop for electron-positron creation and annihilation is something like: gamma ray -> e- + e+ -> gamma ray -> e- + e+ …  The virtual charges exist for a short time and are polarized, which shields the real bare electric core charge.

This loop of matter creation and annihilation is occurring in the vacuum, and is described by the creation-annihilation operators in quantum field theory. The basic physics can be grasped with Heisenberg’s uncertainty equation. Physically, particles are hitting one another in the vacuum. Just as in a particle accelerator, such collisions produce short-lived bursts of particles which have observable consequences, such as increasing the magnetic moment of an electron by 0.116% and causing the Lamb shift in the hydrogen atom’s spectrum.

The average time that such particles persist depends on how long another collision in the vacuum annihilates them. So Heisenberg’s uncertainty relation for energy and time predicts the lifetime of any particular energy fluctuation. [There is no need to get into science fiction or metaphysics about Hawking’s ‘interpretations’ of the uncertainty principle (parallel universes, baby universes, etc.) because that simply isn’t physics, which is about checkable calculations.]

Particles which have the lowest energy exist for the longest period, a simple inverse law relationship between energy and time. Therefore, electron-positron pairs will dominate the vacuum, because they have the least rest mass-energy of all charged particles, and exist for the longest period of time.

The next charged particle beyond the electron, the muon, has the same electric charge as the electron but is over two hundred times as massive, and will therefore exist on average for under 1/200th – under 0.5% – of the time that electron-positron charges exist in the vacuum.

An electron-positron pair is created by 1.022 MeV = 1.634 x 10^-13 J of energy, and has a lifetime according to the Heisenberg uncertainty relationship of t ~ h/E ~ (6.626 x 10^-34 m^2 kg/s) / (1.634 x 10^-13 J) ~ 4 x 10^-21 seconds, during which the maximum possible distance moved by either particle is x = ct = (3 x 10^8 m/s).(4 x 10^-21 s) = 1.2 x 10^-12 metres.
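The arithmetic in the paragraph above can be reproduced directly:

```python
# Heisenberg estimate for a virtual electron-positron pair:
# lifetime t ~ h/E and maximum range x = c*t.
h = 6.62607015e-34             # Planck constant, J.s
c = 2.99792458e8               # speed of light, m/s
E = 1.022e6 * 1.602176634e-19  # pair-creation energy 1.022 MeV in joules

t = h / E  # ~4e-21 s
x = c * t  # ~1.2e-12 m
print(t, x)
```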

This is why the vacuum isn’t observably radioactive! The virtual particles have just enough range (10^-12 metres) to continuously upset the electron orbits in atoms, creating the Schroedinger statistical wave distribution instead of classical orbits, but they don’t have enough range to break up molecules. The radioactivity of the vacuum is short-ranged enough that it can only upset the orbits of electrons and nuclear particles. This is rather like the effects of a gas in producing Brownian motion of very small (micron sized) dust particles, while being unable to induce chaotic motion in large objects, because statistically the impacts balance out on large scales (creating pressure).

Pairs of heavier charged particles could exist closer to the core of an electron if the electric field of the electron is responsible for creating particles. This in the Yang-Mills theory is the same as saying that force-causing gauge photons – which constitute the electric field in the U(1) quantum field theory – behave like very high energy gamma rays when close to a heavy nucleus (when pair production occurs, gamma ray -> electron + positron). Obviously, this route is suggesting that gauge photons interact with one another to create vacuum charges of higher mass when closer to the electron middle. If I shield cobalt-60 with lead, some of the gamma rays (with a mean energy of 1.25 MeV) will be converted into electron-positron pairs by interacting with the strong nuclear field in the lead nucleus.

However, I cannot see why the polarization of heavier particles would produce a greater attenuation of the electric charge, because after all they have the same amount of charge as electrons but more inertia, so they are less mobile and less easily polarized. Physically, they should produce less shielding, so Lubos Motl’s off-the-cuff response to a question (see the comments section of a post at http://electrogravity.blogspot.com/) is not lucid.

At very close ranges, you get strong and weak forces occurring due to the effect of the electromagnetic force field strength on the vacuum. These stronger forces – due to creation of charges in the vacuum near the middle of the electron – sap energy from the electromagnetic field. For example, if hypothetically on average half of the gauge boson energy of the photons causing electromagnetism is used to produce heavy charges at a certain distance close to the middle of the electron, then the electromagnetic field energy at that distance as carried by gauge bosons (photons for electromagnetism) will be reduced by 50%.

So you can’t have your cake and eat it; if one observational charge gets weaker, that implies that energy is being used by some other process. The strong force for quarks within say nucleons is optimised at low energies, and gets weaker at high energy collisions or when the quarks are brought closer together or close to other quarks. The reason may well be simply that the whole strong force phenomenology is driven by the variation in energy of the electroweak force field in the polarized vacuum. As the electromagnetic force gets stronger very close to the middle of a particle, less energy is therefore available for strong force coupling.

This suggestion is in sharp contrast to the official highly abstract renormalization approach to quantum field theory. The existing system uses a logarithmic correction for observable charge variation with energy (which is, in general, inversely proportional to distance from the middle of the particle):

e(x)/e(x = infinity) ~ 1 + [0.005 ln(A/E)]

This equals about 1.06 for A = 92,000 MeV and E = 0.511 MeV, consistent with the roughly 7% charge increase experimentally validated (Levine, PRL 1997); hence the relative shielding factor of the vacuum falls from 137 at 0.511 MeV collisions to about 128 at 92 GeV. What is artificial about this equation is the lower limit (cutoff) of 0.511 MeV. From quantum mechanics you would expect a smooth curve for the shielding from charge polarization, not an abrupt lower limit. The electron charge is exactly e until you get to a certain distance from it (corresponding to a collision energy of 0.511 MeV), when it abruptly starts to increase. No way this could happen – why should vacuum polarization stop at some arbitrary distance?!  Answer: the virtual particles are not only being polarized by the electric field, they are being created/freed by it, and this requires a threshold field strength which we will calculate below (at this threshold, the energy density of the electric field is high enough to create/free charged pairs, which then become polarized and shield the electric field!).
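Evaluating the quoted logarithmic formula numerically (a sketch; the coefficient 0.005 and the cutoff 0.511 MeV are the values given in the text):

```python
import math

def charge_ratio(A_MeV, E_MeV=0.511):
    """The text's logarithmic formula for observable charge,
    e(A)/e(0.511 MeV) ~ 1 + 0.005 ln(A/E), above the IR cutoff."""
    return 1 + 0.005 * math.log(A_MeV / E_MeV)

print(charge_ratio(92000))  # ~1.06, i.e. roughly the 7% rise seen at 92 GeV
```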

The whole of quantum field theory as traditionally taught is therefore, as Dr Chris Oakley points out, physically a fiddle, and this proves it.  Confusion is introduced into QFT by expressing charge variation as a function of collision energy, instead of plotting the observable charge as a function of distance from the middle of the electron in the simplest case, that of a static electron.

The shielding factor falls at closer distances (and thus at higher collision energies in particle accelerators) because you are getting closer to the core of the electron (or whatever you are colliding), and so seeing the effect of less of the polarised shield and more of the unshielded charge. Similarly, if it is a cloudy day and you go up in an aircraft, much higher into the atmosphere, you will see brighter, less attenuated sunlight; if you could get above the cloud cover entirely, you would see completely unshielded sunlight. In the same way, if you hit particles together so hard that the clouds of virtual charges no longer fully intervene in the small distance between the particle cores when they rebound by Coulomb repulsion, the electric charge involved is much stronger, simply because it is less shielded by polarization.

The final theory involves getting to grips with charge and mass, which first means getting a working model for what these things are in quantum field theories. Charge is the priority, because it is less variable than mass (mass varies in relativity, charge doesn’t).

Fundamental particles have charges in the ratio 1 : -2 : 3 (for example, the d quark is -1/3, the u quark is +2/3, and the electron is -1). The fractional charges of the quarks in the final theory must arise from polarization, either by itself or in combination with some other effect or principle.

When you get two or three quarks close together, they share the same cloud of polarized charge, but the cloud is two or three times stronger, which explains why the electric charge of each individual quark, as observed from a great distance (outside the intensely polarized vacuum shield), is only a fraction of what you observe at a large distance from an electron!

CALCULATION OF THE POLARIZATION CUTOFF RANGE: 

Dyson’s paper http://arxiv.org/abs/quant-ph/0608140 is very straightforward and connects deeply with the sort of physics I understand (I’m not a pure mathematician turned physicist). Dyson writes on page 70:

‘Because of the possibility of exciting the vacuum by creating a positron-electron pair, the vacuum behaves like a dielectric, just as a solid has dielectric properties in virtue of the possibility of its atoms being excited to excited states by Maxwell radiation. This effect does not depend on the quantizing of the Maxwell field, so we calculate it using classical fields.

‘Like a real solid dielectric, the vacuum is both non-linear and dispersive, i.e. the dielectric constant depends on the field intensity and on the frequency. And for sufficiently high frequencies and field intensities it has a complex dielectric constant, meaning it can absorb energy from the Maxwell field by real creation of pairs.’

Pairs are created by the high intensity field near the bare core of the electron, and the pairs become polarised, shielding part of the bare charge. The lower limit cutoff in the renormalized charge formula is therefore due to the fact that polarization is only possible where the field is intense enough to create virtual charges.

The threshold field strength for this effect to occur is 6.9 x 10^20 volts/metre. This is the electric field strength, by Gauss’ law, at a distance of 1.4 x 10^-15 metre from an electron, which is therefore the maximum range of QED vacuum polarization. The distance comes from the ~1 MeV collision energy used as a lower cutoff in the renormalized charge formula: in a direct (head-on) collision, all of this energy is converted into electrostatic potential energy by the Coulomb repulsion at that distance, so you just set 1 MeV equal to the potential energy (electron charge)^2 / (4Pi.Permittivity.Distance).
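
The cutoff calculation just described can be sketched numerically (only standard values of the electronic charge and the permittivity of free space are assumed):

```python
import math

# Set the ~1 MeV collision energy equal to the Coulomb potential energy
# e^2/(4*pi*eps0*r) to get the polarization range r, then evaluate the
# Gauss'-law electric field strength e/(4*pi*eps0*r^2) at that radius.
e    = 1.602e-19   # electron charge, C
eps0 = 8.854e-12   # permittivity of free space, F/m
MeV  = 1.602e-13   # 1 MeV in joules

k = 1 / (4 * math.pi * eps0)
r = k * e**2 / (1.0 * MeV)   # distance at which potential energy = 1 MeV
field = k * e / r**2         # field strength at that distance

print(r)      # ~1.4e-15 m
print(field)  # ~6.9e20 V/m
```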

Can someone explain to me why there are no books or articles with plots of observable (renormalized) electric charge versus distance from a quark or lepton, let alone plots of the weak and nuclear forces as a function of distance? Everyone plots force strengths as a function of collision energy only, which is obfuscating: higher energy simply means smaller distance. What you need to know is how the various types of charge vary as a function of distance. It is pretty clear that when you plot charge as a function of distance, you start thinking about how energy is being shielded by the polarized vacuum, and electroweak symmetry breaking becomes clearer: the electroweak symmetry exists close to the bare charge, but it breaks at great distances due to some kind of vacuum polarization/shielding effect. Weak gauge bosons are completely attenuated at great distances, but electromagnetism is only partly shielded.

To convert energy into distance from particle core, all you have to do is to set the kinetic energy equal to the potential energy, (electron charge)^2 / (4Pi.Permittivity.Distance). However, you have to remember to use the observable charge for the electron charge in this formula to get correct results (hence at 92 GeV, the observable electric charge of the electron to use is 1.07 times the textbook low-energy electronic charge).
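
A minimal sketch of this conversion, using the 92 GeV case and the 1.07 observable-charge factor quoted above:

```python
import math

# Energy-to-distance conversion: the collision energy equals the Coulomb
# potential energy at closest approach, using the observable (running) charge.
e    = 1.602e-19   # electron charge, C
eps0 = 8.854e-12   # permittivity of free space, F/m
GeV  = 1.602e-10   # 1 GeV in joules

k = 1 / (4 * math.pi * eps0)
energy = 92 * GeV
charge = 1.07 * e          # observable charge at 92 GeV, per the text

d = k * charge**2 / energy
print(d)  # ~1.8e-20 m: closest approach in a head-on 92 GeV collision
```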

Here is a nice essay dealing with the Dirac theory and perturbative QFT in physical terms:

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949]. ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001…’

CALCULATION OF THE BARE CHARGE OF ELECTRON:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

Heisenberg’s uncertainty principle says

pd = h/(2.Pi)

where p is uncertainty in momentum, d is uncertainty in distance.
This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process.

For a light wave: momentum p = mc, while distance d = ct, hence:

pd = (mc)(ct) = Et

where E is uncertainty in energy (E = mc^2), and t is uncertainty in time.

Hence, Et = h/(2.Pi)

t = h/(2.Pi.E)

d/c = h/(2.Pi.E)

d = hc/(2.Pi.E)

This result is often used to show that an 80 GeV W or Z gauge boson has a range of the order of 10^-18 to 10^-17 m, so it is reliable to use it here.
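
A quick check of this range formula for an 80 GeV gauge boson (only standard constants assumed):

```python
import math

# Range estimate d = h*c/(2*pi*E) from the derivation above.
h   = 6.626e-34    # Planck's constant, J*s
c   = 2.998e8      # speed of light, m/s
GeV = 1.602e-10    # 1 GeV in joules

E = 80 * GeV
d = h * c / (2 * math.pi * E)
print(d)  # ~2.5e-18 m, i.e. of the order of 10^-18 to 10^-17 m
```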

Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times the Coulomb force between two unit fundamental charges at the same separation.
Notice that in the last sentence I have suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to travel that distance to cause the force anyway.
Clearly what is physically happening is that the true (bare core) force is 137.036 times Coulomb’s law, and it is reduced by the correction factor 1/137.036 because most of the bare charge is screened out by polarised charges in the vacuum around the electron core:

“… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).” – arxiv hep-th/0510040, p 71.
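
The 137.036 ratio quoted above can be verified numerically; it is just the reciprocal of the fine structure constant, alpha = e^2/(4.Pi.Permittivity.h-bar.c):

```python
import math

# Ratio of F = h*c/(2*pi*d^2) to the Coulomb force e^2/(4*pi*eps0*d^2)
# between unit charges: the d^2 cancels, leaving a pure number.
h    = 6.62607e-34   # Planck's constant, J*s
c    = 2.99792e8     # speed of light, m/s
e    = 1.60218e-19   # electron charge, C
eps0 = 8.85419e-12   # permittivity of free space, F/m

ratio = (h * c / (2 * math.pi)) / (e**2 / (4 * math.pi * eps0))
print(ratio)  # ~137.04, the reciprocal of the fine structure constant
```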

The unified Standard Model force is F = hc/(2.Pi.d^2)

That’s the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E)

What all the detailed calculations of the Standard Model are really modelling are the vacuum processes for the different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is tied to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you experience a stronger force, mediated by different particles.

Quarks have asymptotic freedom because the strong force and electromagnetic force cancel where the strong force is weak, at around the distance of separation of quarks in hadrons. That’s because of interactions with the virtual particles (fermions, quarks) and the field of gluons around quarks. If the strong nuclear force fell by the inverse square law and by an exponential quenching, then the hadrons would have no volume because the quarks would be on top of one another (the attractive nuclear force is much greater than the electromagnetic force).

It is well known you can’t isolate a quark from a hadron because the energy needed is more than that which would produce a new pair of quarks. So as you pull a pair of quarks apart, the force needed increases because the energy you are using is going into creating more matter. This is why the quark-quark force doesn’t obey the inverse square law. There is a pictorial discussion of this in a few books (I believe it is in “The Left Hand of Creation”, which says the heuristic explanation of why the strong nuclear force gets weaker when quark-quark distance decreases is to do with the interference between the cloud of virtual quarks and gluons surrounding each quark). Between nucleons, neutrons and protons, the strong force is mediated by pions and simply decreases with increasing distance by the inverse-square law and an exponential term something like exp(-x/d) where x is distance and d = hc/(2.Pi.E) from the uncertainty principle.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick and co-workers found a 7% increase in the strength of the Coulomb (Gauss’ law) force field when colliding electrons at an energy of 80 GeV or so: the coupling constant for electromagnetism is 1/137 at low energies, but was found to be about 1/128.5 there. This rise is due to the polarised vacuum being partially broken through. We have to understand Maxwell’s equations in terms of the gauge boson exchange process that causes forces, and the polarised vacuum shielding process that unifies the forces into a single force at very high energy.

The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism, but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of the potential energy carried by the various force gauge bosons: if one force (electromagnetism) increases, more energy is carried by virtual photons at the expense of something else, say gluons, so the strong nuclear force loses strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made of, the correct variation of the force strengths mediated by the different gauge bosons. When you do this properly, you may find that SUSY just isn’t needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.

At low energies, the experimentally determined strong nuclear force strength is alpha = 1 (which is about 137 times the Coulomb law), but it falls to alpha = 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
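
For comparison (an illustration, not taken from this post): the standard one-loop QCD running formula, with an assumed QCD scale of Lambda = 0.2 GeV and four active quark flavours (both assumptions of this sketch), reproduces these quoted values fairly well:

```python
import math

# One-loop QCD running coupling:
#   alpha_s(Q) = 12*pi / ((33 - 2*nf) * ln(Q^2 / Lambda^2))
# nf and Lambda are assumed here purely for illustration.
nf  = 4     # active quark flavours
Lam = 0.2   # QCD scale in GeV

def alpha_s(Q):
    """One-loop strong coupling at momentum transfer Q (in GeV)."""
    return 12 * math.pi / ((33 - 2 * nf) * math.log(Q**2 / Lam**2))

for Q in (2, 7, 200):
    print(Q, alpha_s(Q))  # ~0.33 at 2 GeV, ~0.21 at 7 GeV, ~0.11 at 200 GeV
```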

In the next post on this blog, I will present (using the method outlined in this post) a full conversion of the normal force-strength-versus-collision-energy curves for the Standard Model forces into force-strength-versus-distance-from-particle-core curves, which will elucidate the nature of the strong nuclear force and of electroweak symmetry breaking, on the basis that these result from shielding of gauge boson radiation by charge polarization.  I will show the correct mathematical model for charge polarization as a shielding effect, and show by energy conservation of gauge bosons that, as the shielding due to charge polarization depletes electromagnetism, this energy is transformed into the short-ranged energy of nuclear forces.  The quantitative model will unify all forces without stringy supersymmetry.

Gravitational mechanism: http://electrogravity.blogspot.com/2006/07/observable-radial-expansion-around-us.html.  For recent updates and further explanation of relationship between gravity and electromagnetism see comments of http://electrogravity.blogspot.com/2006/08/updates-1-peter-woits-not-even-wrong.html and see http://feynman137.tripod.com/.  I am producing a new properly organised paper which explains all of the problems and solutions methodically, with many new results and more lucid proofs, improvements and clarifications of older results.  This new paper will be published on this wordpress blog within a week.