Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks

Fig. 1 - Quantum gravity versus smooth spacetime curvature of general relativity

Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see previous post for the mechanism and quantitatively checked prediction of the strength of gravity).  If you believe string theory, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity.  (Basically, spin-1 gravitons push, while spin-2 gravitons suck.  So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons.  But if you merely want any old theory of quantum gravity that well and truly sucks, you can take your pick from the ‘landscape’ of 10^500 stringy theories of mainstream sucking spin-2 gravitons.)  In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime ‘continuum’ (spacetime fabric).

In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core).  As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized rather than forming a continuous curved spacetime, there is the problem that the sources of the fields – discrete units of matter – come in quantized units at particular locations in spacetime.  General relativity only produces smooth curvature (the acceleration curve in the left hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter, using an averaged density to represent the ‘source’ of the gravitational field.

The curvature of the line in the Feynman diagram for general relativity is therefore due to the assumed smoothness of the source of gravity, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space. Quantum field theory suggests that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass).  However, ‘gravitons’ have not been observed, so some uncertainties remain about their nature.  Fig. 1 (which was inspired – in part – by Fig. 3 in Lee Smolin’s Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity:

‘Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory…’ – http://quantumfieldtheory.org/

The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there’s no gravitational retardation.  More importantly, it predicts gravity quantitatively, and doesn’t use any ad hoc hypotheses, just experimentally validated facts as input.  I’ve used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity.

I can’t update the more permanent paper on the CERN document server here because, as Tony Smith has pointed out, “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …”  The only way you can update a paper on the CERN document server is if it is a mirror copy of one on arXiv; update the arXiv paper and CERN’s mirror copy will be updated.  This is contrary to scientific ethics, whereby the whole point of electronic archives is that corrections and updates should be permissible.  Professor Jacques Distler, who works on string theory and is a member of arXiv’s advisory board, despite being warmly praised by me, still hasn’t even put Lunsford’s published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.

Path integrals of quantum field theory

The path integral for the incorrect spin-2 idea was discussed at the earlier post here, while, as stated, the correct mechanism – with accurate predictions confirming it – is at the post here. Let’s now examine the path integral formulation of quantum field theory in more depth.  Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:

‘The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.’

As Fig. 1 shows, according to Feynman, ‘curvature’ is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps.  If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions.  The summation process is what we are about to describe mathematically.  By way of introduction, we can remember the random walk statistics mentioned in the previous post.  If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average of distance xn^(1/2) from the starting point, in a random direction!  The reason why the average distance gone is proportional to the square root of the number of steps is easily understood intuitively from diffusion theory.  (If this were not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.)  This result is just a statistical average for a great many drunkard’s walks.  You can derive it statistically, or you can simulate it on a computer, add up the distance gone after n steps for lots of random walks, and take the average.  In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur.
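The square-root scaling is easy to check numerically.  Here is a minimal Python sketch (my illustration, not from the original discussion) that simulates many two-dimensional drunkard’s walks and confirms that the root-mean-square distance after n unit steps is close to n^(1/2):

```python
import math
import random

random.seed(42)  # make the runs reproducible

def walk_distance(n_steps, step_length=1.0):
    """Distance from the origin after n_steps steps in random 2D directions."""
    x = y = 0.0
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += step_length * math.cos(theta)
        y += step_length * math.sin(theta)
    return math.hypot(x, y)

def rms_distance(n_steps, trials=2000):
    """Root-mean-square final distance over many independent walks."""
    return math.sqrt(sum(walk_distance(n_steps) ** 2 for _ in range(trials)) / trials)

# The ratio rms / sqrt(n) should hover near 1 for any n.
for n in (25, 100, 400):
    print(n, round(rms_distance(n) / math.sqrt(n), 2))
```

(The mean distance is slightly smaller than the root-mean-square distance, by a constant factor of about 0.89 in two dimensions, but both grow as the square root of the number of steps.)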

Feynman applied this procedure to the principle of least action.  One simple way to illustrate this is the discussion of how light reflects off a mirror.  Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting.  If the angle of incidence were not equal to the angle of reflection, then light would obviously take longer to arrive after being deflected than it actually does (i.e., the sum of the lengths of the two congruent sides in an isosceles triangle is smaller than the sum of the lengths of two dissimilar sides for a triangle with the same altitude line perpendicular to the reflecting surface).
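As a sanity check, the least-time reflection point can be found by brute force.  This Python sketch (my illustration, with made-up coordinates) scans candidate reflection points on a mirror along y = 0 and confirms that the shortest path is the equal-angle one:

```python
import math

# Light goes from A = (0, 3) to B = (10, 2), bouncing off a mirror on y = 0.
A_HEIGHT, B_HEIGHT, SEPARATION = 3.0, 2.0, 10.0

def path_length(x):
    """Total A -> mirror point (x, 0) -> B distance."""
    return math.hypot(x, A_HEIGHT) + math.hypot(SEPARATION - x, B_HEIGHT)

# Brute-force scan of reflection points along the mirror.
best_x = min((i * 0.001 for i in range(0, 10001)), key=path_length)

# Equal angles of incidence and reflection mean the reflection point
# divides the separation in the ratio of the two heights: x = d*a/(a+b).
equal_angle_x = SEPARATION * A_HEIGHT / (A_HEIGHT + B_HEIGHT)
print(best_x, equal_angle_x)  # both close to 6.0
```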

The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action.  Feynman explains this with path integrals in his book QED (Penguin, 1990).  Physically, path integrals are the mathematical summation of all possibilities.  Feynman crucially discovered that all possibilities have the same magnitude but that the phase or effective direction (argument of the complex number) varies for different paths.  Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.

To get the probability of event y occurring, you first calculate the amplitude for that event.  Then you calculate the path integral for all possible events, including event y.  Then you divide the result for event y alone by the path integral for all possibilities.  The result of this division is the absolute probability of event y occurring in the probability space of all possible events!  Easy.

Feynman found that the amplitude for any given history is proportional to e^(iS/ħ), and that the probability is proportional to the square of the modulus (positive value) of e^(iS/ħ).  Here, S is the action for the history under consideration.
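The dominance of histories near the stationary action can be demonstrated with a toy sum of phasors.  In this Python sketch (illustrative only, with ħ set to 1 and two invented actions), a family of paths whose action is stationary in the deformation parameter adds up coherently, while a family whose action varies rapidly nearly cancels out:

```python
import cmath

def path_sum(action, params):
    """Sum the unit-magnitude amplitudes exp(iS) over a family of paths."""
    return sum(cmath.exp(1j * action(c)) for c in params)

# c parameterises small deformations of a path; scan c from -5 to 5.
params = [i * 0.001 for i in range(-5000, 5001)]

# Stationary action: S varies only quadratically near c = 0, so nearby
# phasors point the same way and reinforce each other.
stationary = abs(path_sum(lambda c: 2.0 * c * c, params))

# Non-stationary action: S changes rapidly everywhere in the scanned
# range, so the phasors spin around and almost completely cancel.
non_stationary = abs(path_sum(lambda c: 50.0 * c + 2.0 * c * c, params))

print(stationary > 10.0 * non_stationary)  # the stationary family dominates
```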

What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won’t correct errors in his books when I email the errors), the particle doesn’t actually travel on all of the paths integrated over in a specific interaction!  What happens is just one interaction, and one path.  The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities.  (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories.  This is just a matter of applying simple calculus!)

However, the nature of Feynman’s path integral does allow a little interaction between nearby paths!  This doesn’t happen with Brownian diffusion!  It is caused by the phase interference of nearby paths, as Feynman explains very carefully:

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The Wiki article explains:

‘In the limit of action that is large compared to Planck’s constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.

‘Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it’s going to go. The path integral is one way of understanding why this works. The system doesn’t have to know in advance where it’s going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.’

I think this last bit is badly written: interference is only possible in the ‘small core’ of nearby paths that the photon or other particle actually takes.  The paths which are not taken are not eliminated by interference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.

Similarly, to calculate the probability of a die landing with a particular face up, you need to know how many faces it has.  So on one throw the probability of one particular face landing upwards is 1/6 if there are 6 faces per die.  But the fact that the number 6 goes into the calculation doesn’t mean that the die actually lands with every face up.  Similarly, a photon doesn’t arrive along routes where there is perfect cancellation!  No energy goes along such routes, so nothing at all physical travels along any of them.  Those routes are only included in the calculation because they were possibilities, not because they were paths taken.

In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved.  For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected.  The mechanism here is very simple.  Consider the glass before any photon even approaches it.  A normal block of glass is full of electrons in motion and vibrating atoms.  The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration.  Some of the vibration frequencies will be cancelled out by interference.  So the vibration frequencies of the electrons at the surface of the glass are modified in accordance with the thickness of the glass, even before the photon approaches the glass.  This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected.  It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.

The natural frequencies of vibration in a block of glass depend on the size of the block of glass!  These natural frequencies then determine the probability that a photon is reflected.  So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness.  It’s extremely simple.  Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency.  Higher or lower engine frequencies produce less window rattle.  The frequency where the windows shake the most is the natural frequency.  (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)

The exact way that the precise thickness of a sheet of glass affects the ability of electrons on the surface to reflect light is easily understood by reference to Schroedinger’s original idea of how stationary orbits arise in a wave picture of an electron.  Schroedinger found that where an integer number of wavelengths of the electron fits into the orbit circumference, there is no interference.  But when only a fractional number of wavelengths would fit into that distance, interference would be caused.  As a result, only quantized orbits were possible in that model, corresponding to Bohr’s quantum mechanics.  In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fits into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency.  However, when only a fractional number of wavelengths fits into the glass thickness, there is destructive interference in the oscillations.  This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency.  (There is also a random element involved, when considering the probability for individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)
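Whatever the underlying mechanism, the resulting thickness dependence is easy to reproduce numerically.  This Python sketch (my illustration; the 0.2 surface amplitude and the refractive index are round illustrative numbers) adds the two reflected arrows Feynman describes in QED – one from the front surface and one from the back – and recovers the cycle between 0% and 16% reflection as the thickness grows:

```python
import math

R_SURF = 0.2     # amplitude reflected at each glass surface (|0.2|^2 = 4%)
N_GLASS = 1.5    # refractive index of the glass (illustrative value)

def reflection_probability(thickness_nm, wavelength_nm=500.0):
    """Two-amplitude (front + back surface) reflection probability."""
    # Extra phase picked up by the round trip through the glass.
    delta = 4.0 * math.pi * N_GLASS * thickness_nm / wavelength_nm
    # |-r + r*exp(i*delta)|^2 = 4 r^2 sin^2(delta/2); the minus sign is
    # the phase flip of the front-surface reflection.
    return 4.0 * R_SURF ** 2 * math.sin(delta / 2.0) ** 2

print(reflection_probability(0.0))                    # 0.0: the arrows cancel
print(reflection_probability(500.0 / (4 * N_GLASS)))  # ~0.16 at quarter-wave thickness
```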

Virtual pair-production can be included in path integrals by treating antimatter (such as positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it’s just a mathematical convenience for making calculations).  For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and Applications.  For other standard references, scroll down this page.  For Feynman’s problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.

Feynman was extremely pragmatic.  To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions.  For example, Feynman said:

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.

If you can get the right equations even from a false model, you have done something useful, as Maxwell did.  However, you might still want to search for the correct model, as Feynman explained:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.

This perturbative expansion is a simple example of the application of path integrals.  There are several ways that the electron can move, each corresponding to a unique Feynman diagram.  The electron can go along a direct path from spacetime location A to spacetime location B.  Alternatively, it can be deflected by a virtual particle en route, and travel by a slightly longer path. Another alternative is that it could be deflected by two virtual particles.  There are, of course, an infinite number of other possibilities.  Each has a unique Feynman diagram, and to calculate the most probable outcome you need to sum them all in accordance with Feynman’s rules.
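Woit’s point about the coupling constant can be made concrete with a toy series.  In this Python sketch (illustrative only; the order-one coefficients of the real expansion are omitted), each successive term picks up another factor of the coupling, so a QED-like coupling of roughly 1/137 makes the terms shrink rapidly, while a coupling of order 1 does not:

```python
def expansion_terms(coupling, orders=5):
    """Toy perturbative series: the k-th order term carries coupling**k."""
    return [coupling ** k for k in range(1, orders + 1)]

qed_like = expansion_terms(1.0 / 137.0)  # electromagnetic-strength coupling
strong_like = expansion_terms(1.0)       # strong-interaction-strength coupling

print(qed_like[:3])     # each term ~137 times smaller than the last
print(strong_like[:3])  # terms never get smaller: the expansion is useless
```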

For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) ‘photon’ from a magnet in the simplest way possible.  This contributes 99.884% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments.  The next Feynman diagram is the second highest contributor and accounts for about 0.116% of the total.  This correction is the situation evaluated by Schwinger in 1947 and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet.  After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier.  This is odd because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy.  How, physically, can this Feynman diagram explain how the magnetic moment of the electron is increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet?  If this mechanism were the correct story, maybe you’d have a reduced magnetic moment result, not an increase?  Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electromagnetism by being lost during an interaction.  Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all; it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).
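For reference, Schwinger’s first-order correction is α/(2π), which is where the 0.116% figure comes from.  A one-line Python check, using the standard approximate value of the fine-structure constant:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant (approximate value)

# Dirac: magnetic moment factor g = 2 exactly.  Schwinger's one-loop
# diagram increases it by the fractional amount alpha / (2 * pi).
schwinger_correction = ALPHA / (2.0 * math.pi)
print(round(100 * schwinger_correction, 3))  # ~0.116 (percent)
```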

There are two possible explanations for this:

1) the Feynman diagram for Schwinger’s correction is physically correct.  The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet.  The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual ‘photon’ that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction.  After the interaction, the electron re-captures the virtual ‘photon’ and is no longer moving towards the magnet.  Blue-shift is the opposite of red-shift.  Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength, because the photons have an energy that is directly proportional to their frequency (E = hf).  This mechanism may be correct, and needs further investigation.

2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core.  This mechanism was described in the previous post.  It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.

The relationship of leptons to quarks and the perturbative expansion

As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship.  The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality.  This was first recognised when the lepton beta decay event

muon -> electron + electron antineutrino + muon neutrino

was found to have similar detailed properties to the quark beta decay event

neutron -> proton + electron + electron antineutrino

Nicola Cabibbo used such evidence that quarks are closely related to leptons (I’ve only given one of many examples above) to develop the concept of ‘weak universality, which involves a similarity in the weak interaction coupling strength between different generations of particles.’ 

As stated in comment 13 of the previous post, I’m interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y:

Q = T + Y/2.

where Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks).
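This relation can be checked against the standard quantum-number assignments.  A quick Python check (the values are the standard first-generation left-handed assignments; T here is the third component of weak isospin):

```python
# (T, Y, Q) for the left-handed first-generation fermions.
FERMIONS = {
    "neutrino_L": (0.5, -1.0, 0.0),
    "electron_L": (-0.5, -1.0, -1.0),
    "up_L": (0.5, 1.0 / 3.0, 2.0 / 3.0),
    "down_L": (-0.5, 1.0 / 3.0, -1.0 / 3.0),
}

for name, (t, y, q) in FERMIONS.items():
    # Q = T + Y/2 should reproduce the electric charge in every case.
    assert abs((t + y / 2.0) - q) < 1e-12, name

print("Q = T + Y/2 holds for all four left-handed fermions")
```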

The minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” with strong (colour) charge and fractional apparent electric charge, are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion principle simply says that in a confined system, each particle has a unique set of quantum numbers.)  Peter Woit summarises what is known in Figure 7.1 on page 93 of Not Even Wrong:

‘The picture shows the SU(3) x SU(2) x U(1) transformation properties of the first generation of fermions in the standard model (the other two generations behave the same way).

‘Under SU(3), the quarks are triplets and the leptons are invariant.

‘Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).

‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’

This makes it easier to understand: the QCD colour force of SU(3) acts on triplets of particles (‘quarks’), whereas SU(2) acts on doublets of left-handed particles.

But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (with the same electric charge as the right-handed one, -1/3) has a different weak hypercharge: +1/3 instead of -2/3!

The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority.

For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks.  First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):

eL  -> dL

The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3.  Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units (from -1 to +1/3).

Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and accompanying increase in weak hypercharge by +4/3 units:

eR -> dR

The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case).

The transformation of a right-handed electron to right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3.  This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark.  So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.

If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks and just consider matter, then the symmetry transformations required to change left-handed neutrinos into left-handed upquarks, and right-handed neutrinos into right-handed upquarks, are:

vL -> uL

vR -> uR

The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2.  We notice that Y gains 4/3 in the transformation, while Q gains 2/3.

The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = 0 becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0.  We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3.  Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid.  So we have accounted for the conversion of the four leptons in one generation of particle physics (left- and right-handed electrons and left- and right-handed neutrinos) into the four quarks in the same generation of particle physics (left- and right-handed versions of two quark flavours).
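The bookkeeping in the four transformations can be verified mechanically.  This Python sketch (using exactly the Y, Q, T values quoted above) confirms that in every case T is unchanged, Y gains +4/3, Q gains +2/3, and Q = T + Y/2 continues to hold:

```python
# (Y, Q, T) before and after each lepton -> quark transformation,
# using the values quoted in the text above.
TRANSFORMATIONS = {
    "eL -> dL": ((-1.0, -1.0, -0.5), (1.0 / 3, -1.0 / 3, -0.5)),
    "eR -> dR": ((-2.0, -1.0, 0.0), (-2.0 / 3, -1.0 / 3, 0.0)),
    "vL -> uL": ((-1.0, 0.0, 0.5), (1.0 / 3, 2.0 / 3, 0.5)),
    "vR -> uR": ((0.0, 0.0, 0.0), (4.0 / 3, 2.0 / 3, 0.0)),
}

for name, ((y1, q1, t1), (y2, q2, t2)) in TRANSFORMATIONS.items():
    assert t1 == t2, name                           # weak isospin unchanged
    assert abs((y2 - y1) - 4.0 / 3) < 1e-12, name   # Y always gains +4/3
    assert abs((q2 - q1) - 2.0 / 3) < 1e-12, name   # Q always gains +2/3
    assert abs((t2 + y2 / 2.0) - q2) < 1e-12, name  # Q = T + Y/2 still holds

print("all four transformations are mutually consistent")
```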

These transformations are obviously not normal reactions at low energy.  The first two make checkable, falsifiable predictions about unification, replacing supersymmetric speculation about the unification of running couplings: the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls).

If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks,

eL  -> dL

eR -> dR

vL -> uL

vR -> uR

it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts.  The physical mechanism for the first two of the four transformations is relatively straightforward to picture: collide leptons at enormous energy and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low energy) electric charge, with this shielded energy used instead in short range weak hypercharge mediated by weak gauge bosons, and colour charges for the strong force.

Because we know exactly how much energy is ‘lost’ from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained in the weak force and seeing if the predictions are accurate.  This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge.  It is also necessary to investigate the possibilities for the transformation of positrons into upquarks.  This issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang by the mechanism suggested for the first two transformations.

However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry.  Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charges.  (Similarly, atoms often appear to be neutral, but if we smash them to pieces we get observable electric charges.  The apparent electrical neutrality of atoms is a masking effect: atoms usually carry equal positive and negative charge, which cancel as seen from a distance.  A photon of light similarly carries positive and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can still interact with charges.)

Charge is only manifested by way of the field created by a charge: nobody has ever seen the core of a charged particle, only the field.  A confined field of a given charge is therefore indistinguishable from a charge.  The only reason why an electron appears to be a negative charge is that it has a negative electric field around it.  As shown in Fig. 5 of the previous post, a modification is necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.

So a ‘neutral’ particle which is neutral because it contains equal amounts of positive and negative electric field may still be able to induce electric polarization of the vacuum by its short-ranged (uncancelled) electric field.  The range of this effect is obviously limited to the distance between the centre of the positive part of the particle and the centre of the negative part.  (In the case of a photon, for example, this distance is the wavelength.)

If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity mediating gauge boson, and a massive weak neutral-current producing gauge boson.

Let’s try the transformation of a positron into an upquark.  This has two major advantages over the idea that neutrinos are transformed into upquarks.  First, it explains why we don’t observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but the antimatter quickly annihilates with matter into gamma rays).  In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter.  The transformation of free positrons into confined upquarks would sort out this problem.  Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron.  If the upquarks come from a transformation of positrons while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.

Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge of the positron to drop from +1 unit for a positron to +2/3 units for upquarks.  This occurs because you get two positive upquarks and one downquark in a proton.  The transformation is

e+L  -> uL

The positron on the left hand side has Y = +1, Q = +1 and T = +1/2.  The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2.  Hence, there is a decrease of Y by 2/3, while Q decreases by 1/3.  Hence the amount of change of Y is twice that of Q.  This is impressively identical to the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation eL -> dL.
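This bookkeeping (the change in Y is always twice the change in Q) can be verified for all three transformations with a short Python sketch; the quantum number assignments are those quoted above, and exact fractions are used to avoid floating-point rounding:

```python
from fractions import Fraction as F

# (weak hypercharge Y, electric charge Q) for each particle, as quoted in the text
particles = {
    'eL':  (F(-1),   F(-1)),    # left-handed electron
    'dL':  (F(1, 3), F(-1, 3)), # left-handed downquark
    'e+L': (F(1),    F(1)),     # left-handed positron
    'uL':  (F(1, 3), F(2, 3)),  # left-handed upquark
    'vR':  (F(0),    F(0)),     # right-handed neutrino
    'uR':  (F(4, 3), F(2, 3)),  # right-handed upquark
}

# for each suggested lepton -> quark transformation, check that dY = 2 dQ
for before, after in [('eL', 'dL'), ('e+L', 'uL'), ('vR', 'uR')]:
    dY = particles[after][0] - particles[before][0]
    dQ = particles[after][1] - particles[before][1]
    print(f'{before} -> {after}: dY = {dY}, dQ = {dQ}')
    assert dY == 2 * dQ  # hypercharge change is twice the electric charge change
```

Each of the three transformations passes the same check, which is the pattern the text describes.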

There are only two ways that quarks can group: in pairs or in triplets (triads).  Pairs of quarks sharing the same polarized vacuum are known as mesons; mesons are the SU(2) symmetry pairs of a left-handed quark and a left-handed antiquark, both of which experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right-handed neutrino has zero weak hypercharge).  The SU(3) symmetry triplets of quarks are called baryons.

Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so.  It arises from the way the vector bosons gain mass.  In the basic standard model, everything is massless.  Mass is added to the standard model by a separate scalar field (such as that speculatively proposed by Philip Anderson and Peter Higgs, and called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass.  The quanta of the scalar mass field are named ‘Higgs bosons’, but these have never been officially observed, and mainstream speculations do not predict the Higgs boson mass unambiguously.

The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass, which couple weakly owing to the shielding factor of the polarized vacuum around a fermion.  The 91 GeV energy is equivalent to the rest mass of the uncharged neutral weak gauge boson, the Z.

The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively.  The problem is located in the electroweak SU(2)xU(1) symmetry.  Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity).  So you end up learning, say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey.  This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations: Feynman’s analogy of the chess game is fairly good.  (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen, without having the complete understanding necessary for confident explanation of what the game is about.  Then make do by teaching the better known predictive rules, which are technical and accurate, but don’t always convey a complete understanding of the big picture.)

A serious problem with the U(1) symmetry is that you can’t really ever get single leptons in nature.  They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1).  Examples: in beta decay you get a beta particle and an antineutrino, while in pair production you get a positron and an electron.

This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post.  Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces).  Because a version of SU(2) with massless gauge bosons has infinite-range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive gauge bosons is ideal for describing the short-range weak force in left-handed quark-antiquark pairs (mesons).

The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism).  The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field.  Presumably the combined Higgs boson when coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that in the absence of Higgs bosons they can interact equally with left and right handed particles.


To summarise, quarks are probably electrons and antielectrons (positrons) with the symmetry transformation modifications you get from the close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short-range interactions).

Downquarks are electrons trapped in mesons (quark-antiquark pairs bound together by the SU(2) weak nuclear force, so they have short lifetimes and undergo beta radioactive decay) or in baryons, which are triplets of quarks bound by the SU(3) strong nuclear force.  The confinement of electrons in a small space reduces their electric charge, because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum, which shields part of the electric field.  Because this shielding effect is boosted, the electron charge observed per electron at long range is reduced to a fraction.  The idealized model is 3 electrons confined in close proximity, giving a 3 times stronger polarized vacuum, which reduces the observable charge per electron by a factor of 3, giving the e/3 downquark charge.  This is a bit too simplistic, of course, because in reality you get mainly stable combinations like protons (2 upquarks and 1 downquark).  The energy lost from the electric charge, due to absorption in the polarized vacuum, powers the short-ranged nuclear forces which bind the quarks in mesons and baryons together.

Upquarks would seem to be trapped positrons.  This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus.  So one complete hydrogen atom is formed by 2 electrons and 2 positrons.  This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks.  Only particles with left-handed Weyl spin undergo weak force interactions.

Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R is a right-handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right-handed version presumably does not couple to massive bosons. The result is that the right-handed version, SU(2)R, produces only massless particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces. If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier.

The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don’t acquire mass at low energy.

This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked? (I don’t think it has been checked at all yet.) Presumably this is linked to ongoing efforts to see evidence for a Higgs boson. The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell’s equations at low energy, and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.

Spherical symmetry of Hubble recession

I’d like to thank Bee and others at the Backreaction blog for patiently explaining to me that a statement that radial distance elements are equal for the Hubble recession in all directions around us,

H = dv/dr = dv/dx = dv/dy = dv/dz


t (age of universe) = 1/H = dr/dv = dx/dv = dy/dv = dz/dv


dv/H = dr = dx = dy = dz

for spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally ‘wrong’ to people who are accustomed only to cosmology expressed through metrics and the general equations of spherical geometry for non-symmetric spatial dimensions, which don’t apply here.  Hopefully, ‘critics’ will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.

For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dz are total rubbish, ‘disproved’ by the existence of metrics and non-symmetrical spherical geometrical equations.  They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observe around us.

Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe.  If the universe were at critical density and being gravitationally slowed down with no cosmological constant to offset this gravity effect by providing repulsive long range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H where 2/3 is the compensation factor for gravitational retardation.
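To put rough numbers on this, here is a quick Python sketch; the Hubble parameter value of 70 km/s/Mpc is an assumed typical figure for illustration, not one taken from this post:

```python
# age of the universe as 1/H (no gravitational deceleration)
# versus (2/3)/H (critical density, decelerating, no cosmological constant)
H_km_s_Mpc = 70.0            # assumed Hubble parameter, km/s/Mpc
km_per_Mpc = 3.0857e19       # kilometres in one megaparsec
H = H_km_s_Mpc / km_per_Mpc  # Hubble parameter in 1/s

s_per_Gyr = 3.156e16                       # seconds in 10^9 years
age_no_deceleration = 1 / H / s_per_Gyr    # ~14 Gyr
age_decelerated = (2 / 3) / H / s_per_Gyr  # ~9.3 Gyr

print(f'1/H     = {age_no_deceleration:.1f} Gyr')
print(f'(2/3)/H = {age_decelerated:.1f} Gyr')
```

The decelerated figure of about 9.3 Gyr was one reason the pre-1998 critical-density picture sat uncomfortably with the estimated ages of the oldest stars.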

However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called ‘dark energy’, giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not), which is substantially more predictive and less ad hoc, since it was predicted via Electronics World, Oct. 1996, years before being confirmed by observation (see comment 11 on the previous post).

Therefore, let’s use 1/H as the age of the universe, time!  Then we find:

1/H = dr/dv = dx/dv = dy/dv = dz/dv.

This proves that dr/dv = dx/dv = dy/dv = dz/dv.

Now multiply this out by dv, and what do you get?  You get:

dr = dx = dy = dz.

As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of numerators means that the denominators are similarly equal: dr = dx = dy = dz.  This is fact, not an opinion or guess.

Fig 2 - why dr = dx = dy = dz in the Hubble law v/r = H or dv/dr = H

Fig. 2: Illustration of the reason why the Hubble law H = dv/dr = dv/dx = dv/dy = dv/dz, where because of the isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv, so the denominators are all equal to each other too: dr = dx = dy = dz.  Beware everyone, this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)!

So if your textbook has a formula which ‘contradicts’ dr = dx = dy = dz, or if you think that dr = dx = dy = dz should in your opinion be replaced by a metric with the squares of line elements all added up, or by a general formula for spherical geometry which applies to situations where the recession would vary with direction, then you are wrong.  As one commentator on this blog has said (I don’t agree with most of it), it is true that new ideas which have not been investigated before often look ‘silly’.  People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them, are not ‘critics’.  They are not criticising the work; they are criticising their own misunderstandings.  So any ridicule and character assassination resulting should be taken with a large pinch of salt.  It’s best to try to see the funny side when this occurs!

One of the very interesting things about dr = dx = dy = dz is what you get for time dimensions, because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there are equivalent time-like dimensions for each of the spatial dimensions.  This makes spacetime easier to understand and allows a new unification scheme!  The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical dimensions in time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter.  Surely this contradicts general relativity?  No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square.  To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:

(dr)/c = (dx)/c = (dy)/c = (dz)/c

which can be written as:

dtr = dtx = dty = dtz

So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal!  This is why we only need one time to describe the expansion of the universe.  If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions.  Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic!  This is quite a surprising result, as some hostility to this new idea from traditionalists shows.

But the three time dimensions which are usually hidden by this isotropy are vitally important!  Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need.  It was censored off arXiv after being published in a peer-reviewed physics journal: ‘Gravitation and Electrodynamics over SO(3,3)’, International Journal of Theoretical Physics, vol. 43, no. 1, January 2004, pp. 161-177, which can be downloaded here.  The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological dimensions are expanding, while the 3 spacetime dimensions describing matter are bound together but are contractible in general relativity.  For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.

In addition, as was shown in detail in the previous post, this sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity:

‘To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v·dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR·d(HR)/dR = H²R.’
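The chain of substitutions in the quotation can be checked numerically: for v = HR the derivative dv/dR is just H, so a = v·dv/dR should equal H²R. A minimal sketch (the values of H and R are arbitrary illustrative numbers, not measurements):

```python
# numerical check that a = v * dv/dR = H^2 * R for Hubble's law v = H * R
H = 2.3e-18   # illustrative Hubble parameter, 1/s
R = 1.0e25    # illustrative radial distance, m
dR = 1.0e20   # step for the finite-difference derivative, m

def v(R):
    return H * R  # Hubble's empirical law

dv_dR = (v(R + dR) - v(R - dR)) / (2 * dR)  # central difference; should equal H
a = v(R) * dv_dR                            # a = v * dv/dR

assert abs(a - H**2 * R) / (H**2 * R) < 1e-6
print(f'a = {a:.3e} m/s^2 = H^2 R = {H**2 * R:.3e} m/s^2')
```

Because v is linear in R, the central difference recovers H essentially exactly, and the product v·dv/dR matches H²R to floating-point precision.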

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength.  They are radical.  Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’

– Hermann Minkowski, 1908.

Deriving the relationship between the FitzGerald contraction and the gravitational contraction

Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 – v²/c²)^(1/2), gravity contracts lengths by the amount (1/3)GM/c² = 1.5 mm for the contraction of Earth’s radius by gravity.

It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in ‘special relativity’ (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity.

To start with, recall Dr Love’s derivation of Kepler’s law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post.

This is very simple.  If a body’s average kinetic energy in space (outside the atmosphere) is such that it has just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly.  If it has just under that velocity, it will eventually fall back to Earth, so it will not orbit endlessly either.  Like Goldilocks and the porridge, it is very fussy.

The average orbital velocity must exactly match the escape velocity – and be neither more nor less than the escape velocity – in order to achieve a stable orbit.

Dr Love points out the consequences: a body in orbit must have an average velocity equal to the escape velocity v = (2GM/r)^(1/2), which implies that its kinetic energy must be equal to its gravitational potential energy:

kinetic energy, E = (1/2)mv² = (1/2)m[(2GM/r)^(1/2)]² = mMG/r.
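The algebra can be confirmed numerically: substituting the escape velocity into (1/2)mv² reproduces mMG/r exactly. A sketch with illustrative values (the satellite mass and orbital radius are assumed purely for the example):

```python
from math import sqrt, isclose

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # mass of the Earth, kg
m = 1000.0     # illustrative satellite mass, kg
r = 7.0e6      # illustrative orbital radius, m

v = sqrt(2 * G * M / r)    # escape velocity, v = (2GM/r)^(1/2)
kinetic = 0.5 * m * v**2   # (1/2) m v^2
potential = m * M * G / r  # gravitational potential energy, mMG/r

assert isclose(kinetic, potential)  # the two energies agree
print(f'E = {kinetic:.4e} J')
```

The equality holds for any m and r, since (1/2)m(2GM/r) = mMG/r identically.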

This permits him to derive Kepler’s law.  It is also very important because it explains the relationship for stability of orbits:

average kinetic energy = gravitational potential energy

Einstein’s equivalence of inertial and gravitational mass in E = mc² then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy to derive the equivalence principle of general relativity, which states that the inertial mass is equal to the gravitational mass, at least for orbiting bodies.  Another physically justified argument is that gravitational potential energy is the gravity energy that would be released in the case of collapse.  If you allowed the object to fall and thereby pick up that gravitational potential energy, the latter energy would be converted into kinetic energy of the object.  This is why the two energies are equivalent.  It’s a rigorous argument!

Now test it further.  Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity v, where objects are compressed by the ratio (1 – v²/c²)^(1/2).  Using the equivalence of average kinetic energy to gravitational potential energy, you can insert the escape velocity, v = (2GM/r)^(1/2), into the contraction formula and expand the result to two terms using the binomial expansion.  You find that the radius of a gravitational mass would be reduced by the amount GM/c² = 4.5 mm for Earth’s radius, which is three times as big as Feynman’s formula for the gravitational compression of Earth’s radius.  The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (the direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius of (1/3)GM/c² = 1.5 mm!  (There is also a rigorous mathematical discussion of this on the page here, if you have the time to scroll down and find it.)
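Putting standard values of G, Earth’s mass and c into these two formulae gives roughly 4.4 mm and 1.5 mm, consistent with the figures quoted above (a quick Python check):

```python
# Earth radius contraction: GM/c^2 (one-dimensional) and (1/3)GM/c^2 (Feynman)
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # mass of the Earth, kg
c = 2.998e8    # speed of light, m/s

one_dimensional = G * M / c**2           # ~4.4 mm, from the binomial expansion
three_dimensional = one_dimensional / 3  # ~1.5 mm, spread over three dimensions

print(f'GM/c^2      = {one_dimensional * 1e3:.2f} mm')
print(f'(1/3)GM/c^2 = {three_dimensional * 1e3:.2f} mm')
```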

Unusually, Feynman makes a confused mess of this effect in the relevant volume of his Lectures on Physics (chapter 42, p. 6), where he correctly gives his equation 42.3 for the excess radius as equal to the predicted radius minus the measured radius (i.e., he claims that the predicted radius is the bigger one in the equation), but then on the same page falsely and confusingly writes in the text: ‘… actual radius exceeded the predicted radius …’ (i.e., he claims in the text that the predicted radius is the smaller).

Professor Jacques Distler’s philosophical and mathematical genius

‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’

– Professor Jacques Distler, Musings blog post on the Role of Rigour.

Jacques also summarises the issues for theoretical physics clearly in a comment there:

  1. ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
  2. ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
  3. ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’

20 thoughts on “Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks”

  1. Nigel,

    I accept the validity of your statement here that the Hubble parameter,

    H = v/r = dv/dr

    for the isotropic (similar in all directions) Hubble recession,

    H = dv/dr = dv/dx = dv/dy = dv/dz

    where the numerators are all equal to the same thing, dv, as are the denominators, due to the symmetry of the recession in all directions:

    dr = dx = dy = dz.

    Obviously it is weird that some people are so opposed to physics. I think your problem here is like a certain physicist’s problems in 1905 with someone whose name escapes me (let’s call him the Fly).

    In 1905 a physicist derived E = mc^2 from a specific set of assumptions.

    Then a critic called Fly claimed that E = mc^2 is just plain wrong because in all the textbooks which Fly had, there was no mention of E = mc^2.

    Instead, Fly claimed that E = (1/2)mv^2 because that was mentioned in some of the books. Fly asserted that if you want c in the equation, you set v = c and get

    E = (1/2)mc^2

    which, so Fly claimed, “disproves” E = mc^2.

    Well, Fly had not read the physicist’s paper and was completely confused, but claimed to be right and to be very clever for writing abusive slurs and lies about the competence of the physicist, while all the time ignoring all the physics under discussion!

    This story of confused people being rude and hostile to solid physics is very similar to the problems you seem to be having.

    I think you will get nowhere by being gentle with people who are deliberately making up lies and being abusive. If you let them walk all over you, the facts will never get a fair hearing.

    These totally ignorant false “critics” who don’t bother to read or understand physics before ridiculing the innovators, are exactly like hot resistors in an electronic circuit: they are noise generators.

    They create a loud buzz, which drowns out the physics. Don’t give in to them because they simply have absolutely no idea what physics is.

  2. anon.,

    I disagree with most of your comment: I think you can make the case (with varying degrees of success) that some of the alleged “critics” are in part trying to find real “errors” instead of just satisfying their personal egos by hating “contradictions” that they haven’t taken the time to understand, and assassinating the credibility of innovators.

    Don’t post any more comments like that as it is not helpful. My interest is in the “shut up and calculate!” physics style, not arguing.

    My recent email to Dr Mario Rabinowitz touches on the causes of censorship of new ideas in physics:

    From: Nigel Cook
    To: Mario Rabinowitz
    Sent: Thursday, June 14, 2007 12:05 PM
    Subject: Re: Heaviside was a remarkable man

    Dear Mario,

    The displacement current is indeed very important. Maxwell’s final view of the displacement current is similar to the electric polarization of the vacuum in quantum field theory.

    In the vacuum, displacement currents flow where the virtual fermion pairs are polarized by an electric field:

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    In order for the virtual fermions to separate slightly (causing a radial polarization that cancels out part of the core charge of the electron), a displacement current of Maxwell’s type must occur.

    The problem is that this can only occur in electric field strengths exceeding Schwinger’s critical threshold for pair production, which is E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. [Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040 .]

    Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*r^2), so

    E = Q/(4*Pi*Permittivity*r^2) volts/metre.

    Setting this equal to Schwinger’s threshold for pair-production, we get:

    E_c = (m^2)*(c^3)/(e*h-bar) = e/(4*Pi*Permittivity*r^2)

    where e is electronic charge.

    Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    r^2 = (e^2)*(h-bar)/[4*Pi*Permittivity*(m^2)*(c^3)]

    r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2}

    = 3.2953 * 10^{-14} metre = 32.953 fm.

    It’s curious that this is 11.69 times the classical electron radius, 2.81794 * 10^{-15} m = 2.81794 fm. The classical electron radius comes from integrating the energy of the electromagnetic field from this radius out to infinity, with the radius chosen so that the integral equals the electron’s rest-mass energy mc^2. Clearly the electron core is very much smaller than the classical electron radius, so mc^2 cannot be the total energy of the electron; it is merely the energy releasable in pair-production or annihilation phenomena, or when mass becomes binding energy. The physical explanation is probably the chaotic nature of the vacuum where the electric field strength is well above the Schwinger threshold: the pair-production energy is almost randomly directed and has near-maximum entropy, so most of it cannot be used. It’s the same as trying to extract useful energy from the kinetic energy of air molecules (air pressure): it can’t be done, because the energy has maximum entropy, so you need to supply more energy than you can possibly extract.
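    As a numerical sanity check (my addition, not part of the original email), the Schwinger threshold, the ~33 fm radius and the 11.69 ratio above can all be reproduced in a few lines; small differences in the last digits merely reflect the precision of the physical constants used:

```python
# Reproduce the figures quoted above: Schwinger's critical field
# E_c = m^2 c^3 / (e hbar), the radius at which the electron's Coulomb
# field falls to E_c, and its ratio to the classical electron radius.
import math

e    = 1.602176634e-19    # electron charge, C
m    = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

E_c = m**2 * c**3 / (e * hbar)                        # ~1.3e18 V/m
r   = (e / (2 * m)) * math.sqrt(hbar / (math.pi * eps0 * c**3))
r_classical = e**2 / (4 * math.pi * eps0 * m * c**2)  # ~2.818 fm

print(f"E_c = {E_c:.4g} V/m")
print(f"r   = {r * 1e15:.3f} fm")                  # ~33 fm
print(f"r / r_classical = {r / r_classical:.2f}")  # ~11.7
```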

    The main point I’m making is quite different: that displacement currents due to electric fields (as Maxwell claimed) are only possible in the vacuum where you have polarizable fermion-antifermion pairs.

    There are no such pairs below the Schwinger threshold. Hence radiowaves with electric field strength amplitudes of 10 volts/metre don’t involve any displacement currents. Instead, electromagnetic radiation occurs which allows light and radiowaves generally to propagate, playing the role normally attributed to displacement current.

    See http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html for the explanation, based on a transmission line situation.

    The resulting changes to the Maxwellian photon are illustrated in Figure 5 of my article: https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

    I think these last two references are fairly readable, although I’ve used a great many “comments” after each one to add further information and responses people have made (mostly inaccurate and hostile). I’m still hoping to have the time needed at some stage to write up a properly edited free online book.

    At one time I thought there was a serious problem in not having a PhD: hostile people can “deal” with you by quoting you out of context or inventing a bogus error which you did not make, and trying to ridicule you. However, I was much relieved recently to read on Tony Smith’s page http://www.tony5m17h.net/goodnewsbadnews.html#badnews the following statement of Feynman (who had a PhD) about the hostility and false dismissals given to his path integrals by Teller, Dirac and Bohr at the 1948 Pocono conference:

    “… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

    “… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

    ” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    “… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    “I gave up, I simply gave up …”. – The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994), pp. 245-248.

    Another example of ignorant hostility to Feynman’s research was due to Oppenheimer. Dyson explains what occurred in a video on the internet:


    “… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. He always knew everything better, and was a terribly bad organiser of seminars.

    “I mean he would – he had to have the centre stage for himself and couldn’t shut up [like string theorists today!], and we couldn’t tell him to shut up. So in fact, there was very little communication at all. …

    “I always felt Oppenheimer was a bigoted old fool. … And then a week later I had the second seminar and it went a little bit better, but it still was pretty bad, and so I still didn’t get much of a hearing. And at that point Hans Bethe somehow heard about this and he talked with Oppenheimer on the telephone, I think. …

    “I think that he had telephoned Oppy and said ‘You really ought to listen to Dyson, you know, he has something to say and you should listen. And so then Bethe himself came down to the next seminar which I was giving and Oppenheimer continued to interrupt but Bethe then came to my help and, actually, he was able to tell Oppenheimer to shut up, I mean, which only he could do. …

    “So the third seminar he started to listen and then, I actually gave five altogether, and so the fourth and fifth were fine, and by that time he really got interested. He began to understand that there was something worth listening to. And then, at some point – I don’t remember exactly at which point – he put a little note in my mail box saying, ‘nolo contendere’.”

    Tony Smith (on the previously mentioned page) quotes Dyson’s conclusion:

    “… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …”

    – Freeman Dyson, 1981 essay Unfashionable Pursuits (reprinted in From Eros to Gaia, Penguin 1992, at page 171).

    Even then, Oppenheimer did not learn, as Tony Smith also points out.


    What about David Bohm’s expulsion from Princeton?

    According to the Bohm biography Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133:

    “… when his [Bohm’s] … Princeton University … teaching … contract came up for renewal, in June [1951], it was terminated. … Renewal of his contract should have been a foregone conclusion … Clearly the university’s decision was made on political and not on academic grounds … Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …”.

    The mechanism for this ignorant hostility is simply greed:

    ‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’ – Niccolò Machiavelli, The Prince, Chapter VI: Concerning New Principalities Which Are Acquired By One’s Own Arms And Ability.

    As a result, the popular response to innovation is to hate it, and try to ignore it or ridicule it, and never give any credit to an innovator (unless or until that innovator has turned into a new dictator powerful enough to be respected):

    ‘(1). The idea is nonsense.
    (2). Somebody thought of it before you did.
    (3). We believed it all the time.’

    – Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in his autobiography, Home is Where the Wind Blows, Oxford University Press, 1997, p. 154).

    What is slowly occurring is that I’m being forced to do more and more research into symmetry groups and quantum field theory, and am tackling problems myself that other people with more advanced training (i.e., PhDs) should be doing. The response from other people is exactly as Hoyle says it is. At best, any experimentally confirmed theory is accepted as being correct, with the snub that it’s obvious to everyone anyway. At worst, false “errors” in the treatment are invented and used to avoid any discussion at all!

    By the time I do get anybody to listen (if I ever do), I’ll be completely paranoid and really turning into a lunatic.

    Best wishes,

    —– Original Message —–
    From: Mario Rabinowitz
    To: Nigel Cook
    Sent: Wednesday, June 13, 2007 11:13 PM
    Subject: Heaviside was a remarkable man

    Hi Nigel,

    Thanks for pointing out Heaviside’s independent development of what is commonly called the Poynting vector. Heaviside was a remarkable man. I think the Maxwell equations should be called the Maxwell-Heaviside equations. Though he may even have done them before Maxwell (one of my heroes), I don’t think he developed the displacement current, as did Maxwell. This was an important contribution to what otherwise might be considered a codification of existing laws, e.g., Coulomb’s law, Ampere’s law, etc. Would you agree?

  3. copy of a comment:


    Harry, you refer to “other opinions or sources of contrary evidence”.

    Please, it’s not just a case that opinions aren’t worth a dime in science, but opinions are actually negative equity. Historically, the opinions of experts held back science:

    ‘Science is the belief in the ignorance of experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p187. This is quoted on my home page.

    Lee Smolin has a slightly different quotation from Feynman in his book The Trouble with Physics, U.S. ed., p307:

    “Science is the organized skepticism in the reliability of expert opinion.”

    Tony Smith (a string theorist censored off arXiv because he has embarrassingly stuck to 26 dimensional bosonic string theory, instead of working on 10 dimensional superstrings with 1:1 boson:fermion supersymmetry) has quoted Feynman describing his problems with getting people to listen to him in 1948 at the Pocono conference:

    “Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” … Dirac could not think of going forwards and backwards … in time … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

    “… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

    “I gave up, I simply gave up …”. – The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994), pp. 245-248.

    Dyson has a google video (search for Freeman Dyson Feynman, on google video) describing how hard it was to get Feynman’s idea taken seriously:

    “… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. He always knew everything better, and was a terribly bad organiser of seminars.

    “I mean he would – he had to have the centre stage for himself and couldn’t shut up [like string theorists today!], and we couldn’t tell him to shut up. So in fact, there was very little communication at all. …

    “I always felt Oppenheimer was a bigoted old fool. …”

    Eventually, Dyson got Bethe to explain it to Oppenheimer, who listened to Bethe. Tony Smith quotes Dyson’s conclusion:

    “… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …”

    – Freeman Dyson, 1981 essay Unfashionable Pursuits (reprinted in From Eros to Gaia, Penguin 1992, at page 171).

    Tony Smith, in a comment on the Not Even Wrong weblog, points out that Oppenheimer continued to be bigoted by nature:

    “Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …”.

    – Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133.

    Even Carl Sagan falsely argued: “extraordinary claims require extraordinary evidence”. Problem is, what is extraordinary evidence to one person looks like a mere coincidence to a critic. What extraordinary claims ideally require are solid facts, or at least falsifiable predictions. But such results may require time and money.

    Opinions don’t mean anything scientific in physics. Expert opinion is often bunk. The greatest physicists of the Victorian era in England, Lord Kelvin and James Clerk Maxwell, were respectively a “stable vortex atom” theorist (disproved by radioactivity) and a mechanical aether theorist. Why should mainstream string theorists be infallible today? Opinions are a symptom of politics and change on a whim like fashions, which is a little better than a religious dogma, but still is not science.

  4. It’s a bit hazy in my mind, but I do know there’s all sorts of problems in defining various “fundamental” things in classical E&M.

    Put current into a transmission line (pair of wires) by connecting them to a battery, and you get a continuous flat-topped logic pulse propagating along the transmission line at light speed for the insulator.

    This violates Ampere’s law of circuits, because the current pulse doesn’t know in advance whether there is a complete circuit at the far end of the line, or an open circuit.

    Maxwell’s whole genius was adding an ‘extra current’ to Ampere’s law which can flow across space between the two wires (even across a vacuum), completing the circuit while a transient flows into an open circuit!

    What happens when you do the experiment with sampling oscilloscopes is you find that the energy reflects back from the far end of the transmission line. If it’s an open circuit at the far end, the reflected current adds to the energy flowing in, so the transmission line charges up, a little like a capacitor.
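    The open-circuit reflection can be sketched numerically. This is a minimal illustration of my own (the 50 ohm impedance and 5 volt step are assumed example values, not from the experiment described): at an open-circuited far end the voltage reflection coefficient is +1 and the current reflection coefficient is -1, so the reflected wave doubles the voltage and cancels the current, which is the charging-up-like-a-capacitor behaviour described above:

```python
# Reflection at the far end of a transmission line: the voltage
# coefficient is (Z_L - Z0)/(Z_L + Z0), and the current coefficient is
# its negative, because the reflected wave travels the other way.

def reflection_coefficients(z_load, z0):
    """Return (voltage, current) reflection coefficients at a termination."""
    if z_load == float("inf"):           # open-circuit limit
        gamma_v = 1.0
    else:
        gamma_v = (z_load - z0) / (z_load + z0)
    return gamma_v, -gamma_v

Z0     = 50.0                            # characteristic impedance, ohms (assumed)
V_step = 5.0                             # incident step amplitude, volts (assumed)

gv, gi  = reflection_coefficients(float("inf"), Z0)
V_total = V_step * (1 + gv)              # incident + reflected voltage at the open end
I_total = (V_step / Z0) * (1 + gi)       # incident + reflected current at the open end

print(V_total)   # 10.0: the voltage doubles
print(I_total)   # 0.0:  the net current stops
```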

    All the same sorts of problems that show up in QFT as a matter of fact.

    Maxwell’s extra current was supposed to be due to the displacement of virtual fermions in the vacuum, which polarize in an electric field. The vacuum ‘displacement current’ consequently flows in direct proportion to the rate of change of the electric field, dE/dt.

    Nice theory, and it predicts light. Problem is, QFT involves a vacuum polarization due to pair production of virtual fermions, only at high energy (above Schwinger’s electric field strength threshold for pair production, or the IR cutoff energy for particle scatter). So below the IR cutoff, Maxwell’s displacement current mechanism is in difficulty. However, the correction is easy to see: electrons are accelerated by electric fields in the conductors, so they radiate transversely. Each conductor behaves as an antenna radiating an inverted version of the radio signal from the other one. At large distances from the power line, the superimposed radio signals cancel out perfectly. The conductors are therefore just swapping this radio energy, and the resulting effect of the swap is equivalent to having a ‘displacement current’. So you still justify the Maxwell equations when you dig deeply, though his original theory is wrong.

  5. copy of a comment:


    “Theoretically if an accelerator fired enough mass into a tiny space a singularity would be created. The Black Hole would almost instantly evaporate, but could be detected via Hawking radiation. Unfortunately quantum mechanics says that a particle’s location can not be precisely measured. This quantum uncertainty would prevent us from putting enough mass into a singularity.”

    I disagree with Lisa Randall here. It depends on whether the black hole is charged or not, which changes the mechanism for the emission of Hawking radiation.

    The basic idea is that in a strong electric field, pairs of virtual positive fermions and virtual negative fermions appear spontaneously. If this occurs at the event horizon of a black hole, one of the pair can at random fall into the black hole, while the other one escapes.

    However, there is a factor Hawking and Lisa Randall ignore: the requirement that the black hole have electric charge in the first place, because pair production has only been demonstrated to occur in strong fields, namely the standard model’s strong and electromagnetic force fields (nobody has ever seen pair production occur in extremely weak gravitational fields).

    Hawking ignores the fact that pair production in quantum field theory (according to Schwinger’s calculations, which very accurately predict other things like the magnetic moments of leptons and the Lamb shift in the hydrogen spectrum) requires a net electric field to exist at the event horizon of the black hole.

    This in turn means that the black hole must carry a net electric charge and cannot be neutral if there is to be any Hawking radiation.

    In turn, this implies that Hawking radiation in general is not gamma rays as Hawking claims it is.

    Gamma rays in Hawking’s theory are produced just beyond the event horizon of the black hole by as many virtual positive fermions as virtual negative fermions escaping and then annihilating into gamma rays.

    This mechanism can’t occur if the black hole is charged, because the net electric charge [which is required to give the electric field which is required for pair-production in the vacuum in the first place] of the black hole interferes with the selection of which virtual fermions escape from the event horizon!

    If the black hole has a net positive charge, it will skew the distribution of escaping radiation so that more virtual positive charges escape than virtual negative charges.

    This, in turn, means that the escaped charges beyond the event horizon won’t be equally positive and negative; so they won’t be able to annihilate into gamma rays.

    It’s strange that Hawking has never investigated this.

    You only get Hawking radiation if the black hole has an electric charge of Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar).

    (This condition is derived below.)

    The type of Hawking radiation you get emitted is generally going to be charged, not neutral.

    My understanding is that fermions and bosons are both built from fundamental preons. As Carl Brannen and Tony Smith have suggested, fermions may be a triplet of preons, to explain the three generations of the standard model and the colour charge in SU(3) QCD.

    Bosons of the classical photon variety would generally have two preons: because their electric field oscillates from positive to negative (the positive electric field half-cycle constitutes an effective source of positive electric charge and can be considered one preon, while the negative electric field half-cycle in a photon can be considered another preon).

    Hence, there are definite reasons to suspect that all fermions are composed of three preons, while bosons consist of pairs of preons.

    Considering this, Hawking radiation is more likely to be charged gauge boson radiation. This does explain electromagnetism if you replace the U(1)xSU(2) electroweak unification with an SU(2) electroweak unification, where you have 3 gauge bosons which exist in both massive forms (at high energy, mediating weak interactions) and also massless forms (at all energies), due to the handedness of the way these three gauge bosons acquire mass from a mass-providing field. Since the standard model’s electroweak symmetry breaking (Higgs) field fails to make really convincing falsifiable predictions (there are lots of versions of Higgs field ideas making different “predictions”, so you can’t falsify the idea easily), it is very poor physics.

    Sheldon Glashow and Julian Schwinger investigated the use of SU(2) to unify electromagnetism and weak interactions in 1956, as Glashow explains in his Nobel lecture of 1979:

    ‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).’

    This is plain wrong: Glashow and Schwinger believed that electromagnetism would have to be explained by a massless uncharged photon acting as the vector boson which communicates the force field.

    If they had considered the mechanism for how electromagnetic interactions can occur, they would have seen that it’s entirely possible to have massless charged vector bosons as well as massive ones for short range weak force interactions. Then SU(2) gives you six vector bosons:

    Massless W_+ = +ve electric fields
    Massless W_- = -ve electric fields
    Massless Z_o = graviton (neutral)

    Massive W_+ = mediates weak force
    Massive W_- = mediates weak force
    Massive Z_o = neutral currents

    Going back to the charged radiation from black holes, massless charged radiation mediates electromagnetic interactions.

    This idea, that black holes must evaporate if they are real simply because they are radiating, is flawed: air molecules in my room are all radiating energy, but they aren’t getting cooler; they are merely exchanging energy. There’s an equilibrium.


    To derive the condition for Hawking’s heuristic mechanism of radiation emission, he writes that pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.

    Schwinger’s threshold electric field for pair production is: E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040

    So at least that electric field strength must exist at the event horizon before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking’s suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.

    Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so

    E = Q/(4*Pi*Permittivity*R^2) v/m.

    Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

    where Q is the black hole’s electric charge, e is the electronic charge, and m is the electron’s mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

    Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

    where M is black hole mass.

    So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole.
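    Plugging numbers into this condition (my own sketch, just to see the scale; the solar-mass example is an assumption of mine, not from the comment): for a solar-mass black hole the minimum charge comes out around 10^15 coulombs, and, as stated, the condition scales as the square of the mass:

```python
# Minimum black hole charge for Hawking emission by the pair-production
# mechanism derived above: Q_min = 16 pi eps0 (m M G)^2 / (c e hbar),
# where m is the electron mass and M the black hole mass.
import math

e    = 1.602176634e-19    # electron charge, C
m    = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.674e-11          # Newton's gravitational constant, m^3/(kg s^2)

def q_min(M):
    """Minimum charge (coulombs) a black hole of mass M needs to radiate."""
    return 16 * math.pi * eps0 * (m * M * G)**2 / (c * e * hbar)

M_sun = 1.989e30          # solar mass, kg (assumed example)
print(q_min(M_sun))                      # ~1.3e15 C for a solar-mass hole
print(q_min(2 * M_sun) / q_min(M_sun))   # 4.0: the condition scales as M^2
```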

    On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

    When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.

    To summarize: Hawking, considering uncharged black holes, says that either of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positive charged black hole similarly behaves as a source of positive charged exchange radiation.

    These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    It’s amazing how ignorant mainstream people are about this. They don’t understand that charged massless radiation can only propagate if there is an exchange (vector boson radiation going in both directions between charges) so that the magnetic field vectors cancel, preventing infinite self-inductance.

    Hence the whole reason why we can only send out uncharged photons from a light source is that we are only sending them one way. Feynman points out clearly that there are additional polarizations but observable long-range photons only have two polarizations.

    It’s fairly obvious that between two positive charges you have a positive electric field because the exchanged vector bosons which create that field are positive in nature. They can propagate despite being massless because there is a high flux of charged radiation being exchanged in both directions (from charge 1 to charge 2, and from charge 2 to charge 1) simultaneously, which cancels out the magnetic fields due to moving charged radiation and prevents infinite self-inductance from stopping the radiation. The magnetic field created by any moving charge has a directional curl, so radiation of similar charge going in opposite directions will cancel out the magnetic fields (since they oppose) for the duration of the overlap.

    All this is well known experimentally from sending logic signals along transmission lines, which behave as photons. E.g. you need two parallel conductors at different potential to cause a logic signal to propagate, each conductor containing a field waveform which is an exact inverted image of that in the other (the magnetic fields around each of the conductors cancels the magnetic field of the other conductor, preventing infinite self-inductance).

    Moreover, the full mechanism for this version of SU(2) makes lots of predictions. So fermions are black holes, and the charged Hawking radiation they emit is the gauge bosons of electromagnetism and weak interactions.

    Presumably the neutral radiation is emitted by electrically neutral field quanta which give rise to the mass (gravitational charge). The reason why gravity is so weak is because it is mediated by electrically neutral vector bosons.

  6. comment:



    That’s obviously what is meant because that’s Dirac’s prediction from the spinor of his famous equation. He had to modify the Hamiltonian and one consequence is antimatter.

    It was Schwinger, however, who found that pair production occurs spontaneously in the vacuum if the electric field strength exceeds a threshold of 1.3*10^18 volts/metre. See equation 359 in Dyson’s 1951 Lectures on Advanced Quantum Mechanics, Second Edition, http://arxiv.org/abs/quant-ph/0608140, or equation 8.20 of Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040

    One thing that really annoys me about popular books on the subject is that they claim – falsely – that pairs of fermions are constantly popping into existence and annihilating everywhere in the vacuum, without limit.

    Actually, that only occurs within a distance of 32.953 fm from an electron (see https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/ ).

    So all those physicists who state that the entire vacuum is a seething foam of Heisenberg-formula controlled pair-production and annihilation (i.e., looped Feynman diagrams), are talking out of their hats.

    It’s been known for over fifty years that there is a cut-off on the pair production. It’s pair production that allows pairs of short-lived (virtual) fermions to become briefly polarized in a field, which opposes and partially cancels the primary electric field, thereby explaining physically the reason for electric charge renormalization.

    If pair-production occurred throughout the vacuum, there would be no infrared cutoff on the low-energy range for running couplings, and the observable electric charge would get ever smaller as you got further from an electron. This doesn’t happen, proving that pair production and annihilation certainly don’t occur everywhere in the vacuum.

  7. Copy of a comment:


    Fascinating news about the Omega_b baryon with its three quarks of -1/3 electric charge each, giving total electric charge -1. The only such baryon I had heard of previously was the Omega minus, which has three strange quarks of -1/3 electric charge each, giving the same sum, -1.

    Since I’m interested in the mechanisms of physics, it occurred to me that the vacuum pair-production (dielectric) polarization phenomenon that explains the running coupling of QED automatically makes three nearby electric charges of -1 each appear (from long range) to add up to only -1 (i.e. -1/3 per quark):

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    ‘… we [experimentally] find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

    ‘The cloud of virtual particles acts like a screen or curtain that shields the true value of the central core. As we probe into the cloud, getting closer and closer to the core charge, we ’see’ less of the shielding effect and more of the core. This means that the electromagnetic force from the electron as a whole is not constant, but rather gets stronger as we go through the cloud and get closer to the core. Ordinarily when we look at or study an electron, it is from far away and we don’t realize the core is being shielded. …

    ‘Because the electromagnetic charge is in effect becoming stronger as we get closer and the strong force is getting weaker, there is a possibility that these two forces may at some energy be equal. Many physicists have speculated that when and if this is determined, an entirely new and unique physics may be discovered.’ – Professor David Koltick, quoted at http://findarticles.com/p/articles/mi_m1272/is_n2625_v125/ai_19496192

    The source of the shielding of the electric charge is the pair-production caused by the strong electric field. Schwinger calculated that an electric field above 1.3*10^18 V/m is needed to allow pair production (equation 359 of the mainstream work http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 of the mainstream work http://arxiv.org/abs/hep-th/0510040 ), and since the electric field strength around an electron is E = Q/(4*Pi*Permittivity*Radius^2) V/m, Schwinger’s threshold limits pair-production (loops) in the vacuum to a radius within 33.0 fm (about 11.7 times the classical electron radius).

    So all the polarization and polarized vacuum dielectric shielding of the bare core charge of the electron occurs in a very tiny space, smaller in radius than 33 fm.
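    The ~33 fm cutoff follows from setting the electron’s Coulomb field equal to Schwinger’s threshold and solving for the radius, r = sqrt(e / (4*Pi*eps0*E_c)). A minimal sketch of that arithmetic (the rounded constant values are assumed, not quoted from the text):

```python
# Radius at which an electron's Coulomb field E = e/(4*pi*eps0*r^2)
# falls to Schwinger's pair-production threshold E_c.
import math

e = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
E_c = 1.3e18      # Schwinger threshold, V/m (value quoted in the text)

r = math.sqrt(e / (4 * math.pi * eps0 * E_c))  # metres
r_e = 2.818e-15   # classical electron radius, m

print(f"pair-production cutoff radius: {r * 1e15:.1f} fm")  # ~33 fm
print(f"ratio to classical electron radius: {r / r_e:.1f}")  # ~11.7
```

With the more precise threshold (1.32*10^18 V/m) this tightens to the 33.0 fm figure quoted above.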

    The point is, if you take three identical electric charges and place them very near one another, their electric fields add together and overlap, but so do the polarization and shielding. If they aren’t nearby, then only the electric fields overlap and not the polarized vacuum regions. Hence, three -1 electric charges well separated have a total charge of 3 * -1 = -3, but 3 very closely confined -1 electric charges will always have a total electric charge of (3 * -1)/3 = -1, i.e. they will each appear to have a charge of -1/3. This is because if they are close enough together, they boost the shielding effect by 3 times (this obviously doesn’t occur if they are more than 33 fm apart).
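    The charge arithmetic in that paragraph can be written out as a toy calculation (this is purely the text’s own heuristic argument, not standard QCD bookkeeping; the function name is mine):

```python
# Toy model of the text's argument: n bare charges of value q confined
# inside the ~33 fm polarization zone share an n-fold boosted vacuum
# shielding, so the observed total is (n*q)/n = q, and each quark then
# appears to carry q/n.
def apparent_charge_per_quark(n, q=-1.0):
    observed_total = (n * q) / n  # combined shielding divides the sum by n
    return observed_total / n     # share attributed to each quark

print(apparent_charge_per_quark(3))  # -0.333..., i.e. -1/3 per quark
```

For a single isolated charge (n = 1) the same arithmetic returns the full -1, matching the unscreened lepton case.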

    Because three strange quarks are nearby, their vacuum polarization shells overlap, giving extra mutual shielding which wouldn’t occur for isolated charges (quarks can’t be isolated, but the principle holds). It’s the combined polarized vacuum shielding which accounts for the reason why quarks have fractional charges. The Omega minus is the simplest example of this.

    [See https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/ and other posts]
