Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see previous post for the mechanism and quantitatively checked prediction of the strength of gravity). If you believe string theory, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right-hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity. (Basically, spin-1 gravitons push, while spin-2 gravitons suck. So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons. But if you merely want any old theory of quantum gravity that well and truly sucks, you can take your pick from the ‘landscape’ of 10^500 stringy theories of mainstream sucking spin-2 gravitons.) In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime ‘continuum’ (spacetime fabric).
In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core). As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized, rather than being a continuous curved spacetime, there is the problem that the sources of the fields (discrete units of matter) come in quantized lumps at particular locations in spacetime. General relativity only produces smooth curvature (the acceleration curve in the left-hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter, using an averaged density to represent the ‘source’ of the gravitational field.
The curvature of the line in the Feynman diagram for general relativity is therefore due to the smooth way the source of gravity is represented in spacetime, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space. Quantum field theory suggests that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass). However, ‘gravitons’ have not been observed, so some uncertainties remain about their nature. Fig. 1 (which was inspired – in part – by Fig. 3 in Lee Smolin’s Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity:
‘Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory…’ – http://quantumfieldtheory.org/
The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there’s no gravitational retardation. More importantly, it predicts gravity quantitatively, and doesn’t use any ad hoc hypotheses, just experimentally validated facts as input. I’ve used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity.
I can’t update the more permanent paper on the CERN document server here because, as Tony Smith has pointed out, “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …” The only way you can update a paper on the CERN document server is if it is a mirror copy of one on arXiv; update the arXiv paper and CERN’s mirror copy will be updated. This is contrary to scientific ethics, whereby the whole point of electronic archives is that corrections and updates should be permissible. Professor Jacques Distler, who works on string theory and is a member of arXiv’s advisory board, despite being warmly praised by me, still hasn’t even put Lunsford’s published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.
Path integrals of quantum field theory
The path integral for the incorrect spin-2 idea was discussed at the earlier post here, while, as stated, the correct mechanism (with accurate predictions confirming it) is at the post here. Let’s now examine the path integral formulation of quantum field theory in more depth. Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:
‘The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.’
As Fig. 1 shows, according to Feynman, ‘curvature’ is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps. If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions. The summation process is what we are about to describe mathematically. By way of introduction, we can remember the random walk statistics mentioned in the previous post. If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average distance of x·n^(1/2) from the starting point, in a random direction! The square-root dependence on the number of steps is familiar from diffusion theory: randomly directed steps largely cancel one another, leaving only a slowly growing net displacement. (If this were not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.) This result is just a statistical average for a great many drunkard’s walks. You can derive it statistically, or you can simulate it on a computer: add up the distance gone after n steps for lots of random walks, and take the average. In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur.
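The square-root scaling is easy to check with a short simulation (a sketch of my own; the function name and trial count are just illustrative choices):

```python
import math
import random

def rms_walk_distance(n_steps, n_trials=5000):
    """Root-mean-square distance from the origin after n_steps
    unit-length steps in random directions (a 2-D drunkard's walk)."""
    total_sq = 0.0
    for _ in range(n_trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / n_trials)

# Quadrupling the number of steps should roughly double the net distance,
# since the displacement grows as x * n**(1/2).
print(rms_walk_distance(100))   # close to 10
print(rms_walk_distance(400))   # close to 20
```

Averaging over many walks like this is the same basic idea as the path integral: enumerate the possibilities, then extract the statistically dominant behaviour.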
Feynman applied this procedure to the principle of least action. One simple way to illustrate this is the discussion of how light reflects off a mirror. Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting. If the angle of incidence were not equal to the angle of reflection, then light would obviously take longer to arrive after being deflected than it actually does (i.e., the sum of lengths of the two congruent sides in an isosceles triangle is smaller than the sum of lengths of two dissimilar sides for a triangle with the same altitude line perpendicular to the reflecting surface).
The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action. Feynman explains this with path integrals in his book QED (Penguin, 1990). Physically, path integrals are the mathematical summation of all possibilities. Feynman crucially discovered that all possibilities have the same magnitude but that the phase or effective direction (argument of the complex number) varies for different paths. Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.
To get the probability of event y occurring, you first calculate the amplitude for that event. Then you calculate the path integral for all possible events, including event y. Then you divide the first quantity (that for just event y) by the path integral for all possibilities. The result of this division is the absolute probability of event y occurring in the probability space of all possible events! Easy.
Feynman found that the amplitude for any given history is proportional to e^(iS/h-bar), and that the probability is proportional to the square of the modulus (a positive value) of e^(iS/h-bar). Here, S is the action for the history under consideration.
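As a toy illustration of how these amplitudes are used (in natural units with h-bar = 1; the outcome labels and action values below are invented for the example, not taken from any real calculation), paths leading to the same outcome add coherently, and the squared moduli are then normalised over all outcomes:

```python
import cmath

HBAR = 1.0  # natural units, assumed for this illustration

def outcome_probabilities(actions_by_outcome):
    """actions_by_outcome maps an outcome label to the list of actions S
    of the paths leading to that outcome.  Each path contributes an
    amplitude exp(iS/hbar); paths to the same outcome add coherently,
    then the squared moduli are normalised to absolute probabilities."""
    amps = {k: sum(cmath.exp(1j * S / HBAR) for S in paths)
            for k, paths in actions_by_outcome.items()}
    weights = {k: abs(a) ** 2 for k, a in amps.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Outcome 'A' is fed by two nearly-in-phase paths (constructive);
# outcome 'B' by two almost exactly out-of-phase paths (destructive).
p = outcome_probabilities({'A': [0.0, 0.1], 'B': [0.0, 3.14159]})
print(p)  # outcome A dominates
```

This is just the normalisation step described above: the probability of y is the weight for y divided by the sum over all possibilities.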
What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won’t correct errors in his books when I email the errors), the particle doesn’t actually travel on all of the paths integrated over in a specific interaction! What happens is just one interaction, and one path. The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities. (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories. This is just a matter of applying simple calculus!)
However, the nature of Feynman’s path integral does allow a little interaction between nearby paths! This doesn’t happen with Brownian diffusion! It is caused by the phase interference of nearby paths, as Feynman explains very carefully:
‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.
The Wiki article explains:
‘In the limit of action that is large compared to Planck’s constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.
‘Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it’s going to go. The path integral is one way of understanding why this works. The system doesn’t have to know in advance where it’s going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.’
I think this last bit is badly written: interference is only possible in the ‘small core’ of nearby paths that the photon or other particle actually takes. The paths which are not taken are not eliminated by interference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.
Similarly, to calculate the probability of a die landing with a particular face up, you need to know how many faces it has. So on one throw the probability of one particular face landing upwards is 1/6 if there are 6 faces per die. But the fact that the number 6 goes into the calculation doesn’t mean that the die actually lands with every face up. Similarly, a photon doesn’t arrive along routes where there is perfect cancellation! No energy goes along such routes, so nothing at all physical travels along any of them. Those routes are only included in the calculation because they were possibilities, not because they were paths taken.
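The way that paths far from the stationary point of the action cancel, while a ‘small core’ of nearby paths adds up coherently, can be seen in a toy numerical sum (the quadratic action and the cut-off at |x| = 0.3 are arbitrary choices of mine, purely for illustration):

```python
import cmath

def path_sum(S, xs):
    """Coherent sum of phase factors exp(iS(x)) over paths labelled x."""
    return sum(cmath.exp(1j * S(x)) for x in xs)

# A quadratic 'action' stationary at x = 0 (a toy, not a real path integral).
S = lambda x: 50.0 * x * x
xs = [i / 1000.0 for i in range(-2000, 2001)]  # path labels x in [-2, 2]

near = abs(path_sum(S, [x for x in xs if abs(x) <= 0.3]))
far = abs(path_sum(S, [x for x in xs if abs(x) > 0.3]))
print(near, far)  # the near-stationary 'core' dominates the total
```

Away from x = 0 the phase spins round rapidly and neighbouring contributions cancel; only the slowly varying core near the stationary point survives, which is how classical least-action behaviour emerges from the sum.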
In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved. For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected. The mechanism here is very simple. Consider the glass before any photon even approaches it. A normal block of glass is full of electrons in motion and vibrating atoms. The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration. Some of the vibration frequencies will be cancelled out by interference. So the vibration frequencies of the electrons at the surface of the glass are modified in accordance to the thickness of the glass, even before the photon approaches the glass. This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected. It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.
The natural frequencies of vibration in a block of glass depend on the size of the block of glass! These natural frequencies then determine the probability that a photon is reflected. So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness. It’s extremely simple. Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency. Higher or lower engine frequencies produce less window rattle. The frequency where the windows shake the most is the natural frequency. (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)
The exact way that the precise thickness of a sheet of glass affects the ability of electrons on the surface to reflect light is easily understood by reference to Schrödinger’s original idea of how stationary orbits arise with a wave picture of an electron. Schrödinger found that where an integer number of wavelengths of the electron fits into the orbit circumference, there is no interference. But when only a fractional number of wavelengths would fit into that distance, interference would be caused. As a result, only quantized orbits were possible in that model, corresponding to Bohr’s quantum mechanics. In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fits into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency. However, when only a fractional number of wavelengths fits into the glass thickness, there is destructive interference in the oscillations. This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency. (There is also a random element involved, when considering the probability for individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)
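For comparison, the thickness dependence itself can be sketched with the standard two-surface amplitude sum that Feynman gives in QED (a toy model using his single-surface amplitude of 0.2 for glass; the refractive index is folded into the effective wavelength for simplicity, and this is the textbook amplitude picture, not the natural-frequency mechanism suggested above):

```python
import cmath
import math

def reflection_probability(thickness, wavelength, r=0.2):
    """Two-path toy model: the amplitude r reflected at the front surface
    (with a sign flip) interferes with the back-surface reflection, which
    lags by the extra round trip through the glass.  r = 0.2 is the
    single-surface amplitude Feynman quotes for glass in QED."""
    phase = 4.0 * math.pi * thickness / wavelength  # round-trip phase lag
    amplitude = -r + r * cmath.exp(1j * phase)      # front + back surface
    return abs(amplitude) ** 2

wl = 500e-9  # 500 nm light
probs = [reflection_probability(t * wl / 8, wl) for t in range(9)]
print([round(p, 3) for p in probs])  # cycles between 0 and about 0.16
```

The probability cycles between zero and about 16% as the thickness grows, exactly the periodic dependence on thickness measured in wavelengths that is described above.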
Virtual pair-production can be included in path integrals by treating antimatter (such as positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it’s just a mathematical convenience for making calculations). For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and Applications. For other standard references, scroll down this page. For Feynman’s problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.
Feynman was extremely pragmatic. To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions. For example, Feynman said:
‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.
If you can get the right equations even from a false model, you have done something useful, as Maxwell did. However, you might still want to search for the correct model, as Feynman explained:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:
‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.
This perturbative expansion is a simple example of the application of path integrals. There are several ways that the electron can move, each corresponding to a unique Feynman diagram. The electron can go along a direct path from spacetime location A to spacetime location B. Alternatively, it can be deflected by a virtual particle en route, and travel by a slightly longer path. Another alternative is that it could be deflected by two virtual particles. There are, of course, an infinite number of other possibilities. Each has a unique Feynman diagram, and to calculate the most probable outcome you need to average them all in accordance with Feynman’s rules.
For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) ‘photon’ from a magnet in the simplest way possible. This contributes about 99.88% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments. The next Feynman diagram is the second highest contributor, accounting for about 0.116% of the total. This correction is the situation evaluated by Schwinger in 1947 and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet. After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier. This is odd because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy. How, physically, can this Feynman diagram explain how the magnetic moment of the electron is increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet? If this mechanism were the correct story, maybe you’d have a reduced magnetic moment, not an increase? Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electromagnetism by being lost during an interaction. Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all; it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).
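For the record, the sizes of these first two contributions are easy to check numerically from Schwinger’s one-loop correction, alpha/(2·pi):

```python
import math

alpha = 1 / 137.035999  # fine-structure constant

# Dirac's value of g/2 is exactly 1; Schwinger's one-loop Feynman
# diagram adds alpha/(2*pi) to it.
schwinger = alpha / (2.0 * math.pi)
g_over_2 = 1.0 + schwinger

print(g_over_2)         # about 1.00116
print(100 * schwinger)  # the 0.116 % correction discussed above
```

Dividing 1 by 1.00116 shows the Dirac term supplies about 99.88% of the total, with the Schwinger diagram supplying roughly 0.116%, and the remaining higher-order diagrams far less.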
There are two possible explanations for this:
1) the Feynman diagram for Schwinger’s correction is physically correct. The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet. The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual ‘photon’ that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction. After the interaction, the electron re-captures the virtual ‘photon’ and is no longer moving towards the magnet. The blue-shift is the opposite of red-shift. Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength, because the photons have an energy that is directly proportional to their frequency (E = hf). This mechanism may be correct, and needs further investigation.
2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core. This mechanism was described in the previous post. It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.
The relationship of leptons to quarks and the perturbative expansion
As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship. The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality. This was first recognised when the lepton beta decay event
muon -> electron + electron antineutrino + muon neutrino
was found to have similar detailed properties to the quark beta decay event
neutron -> proton + electron + electron antineutrino
Nicola Cabibbo used such evidence that quarks are closely related to leptons (I’ve only given one of many examples above) to develop the concept of ‘weak universality’, which involves a similarity in the weak interaction coupling strength between different generations of particles.
As stated in comment 13 of the previous post, I’m interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y:
Q = T + Y/2.
Where Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks).
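It is straightforward to verify Q = T + Y/2 against the standard first-generation charge assignments (here T is the third component of weak isospin; exact fractions avoid rounding problems):

```python
from fractions import Fraction as F

# name: (T, Y, Q) for the first generation, standard model assignments.
particles = {
    'nu_L': (F(1, 2), F(-1), F(0)),
    'e_L':  (F(-1, 2), F(-1), F(-1)),
    'e_R':  (F(0), F(-2), F(-1)),
    'u_L':  (F(1, 2), F(1, 3), F(2, 3)),
    'd_L':  (F(-1, 2), F(1, 3), F(-1, 3)),
    'u_R':  (F(0), F(4, 3), F(2, 3)),
    'd_R':  (F(0), F(-2, 3), F(-1, 3)),
}

for name, (T, Y, Q) in particles.items():
    assert Q == T + Y / 2, name
print('Q = T + Y/2 holds for all listed assignments')
```

Every assignment used in the discussion below (left- and right-handed electrons, neutrinos, upquarks and downquarks) satisfies the relation exactly.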
The minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” with strong (colour) charge and fractional apparent electric charge are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short-ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion principle simply says that in a confined system, each particle has a unique set of quantum numbers.) Peter Woit summarises what is known in Figure 7.1 on page 93 of Not Even Wrong:
‘The picture shows the SU(3) x SU(2) x U(1) transformation properties of the first of the three generations of fermions in the standard model (the other two generations behave the same way).
‘Under SU(3), the quarks are triplets and the leptons are invariant.
‘Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).
‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’
This makes it easier to understand: the QCD colour force of SU(3) acts on triplets of particles (‘quarks’), whereas SU(2) acts on doublets of left-handed particles (quark and lepton pairs).
But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!
The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority.
For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks. First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):
eL -> dL
The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3. Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units (from -1 to +1/3).
Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and accompanying increase in weak hypercharge by +4/3 units:
eR -> dR
The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case).
The transformation of a right-handed electron to right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3. This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark. So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.
If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks and just consider matter, then the symmetry transformations required to change right-handed neutrinos into right-handed upquarks, and left-handed neutrinos into left-handed upquarks, are:
vL -> uL
vR -> uR
The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2. We notice that Y gains 4/3 in the transformation, while Q gains 2/3.
The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = 0 becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0. We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3. Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid. So we have accounted for the conversion of the four leptons in one generation of particle physics (two types of handed electrons and two types of handed neutrinos) into the four quarks in the same generation (left- and right-handed versions of two quark flavors).
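The claimed consistency of the four transformations (each gaining 4/3 unit of weak hypercharge and 2/3 unit of electric charge) can be checked directly from the (Y, Q) assignments quoted above:

```python
from fractions import Fraction as F

# (Y, Q) assignments used in the four lepton -> quark transformations.
Y_Q = {
    'e_L': (F(-1), F(-1)),   'd_L': (F(1, 3), F(-1, 3)),
    'e_R': (F(-2), F(-1)),   'd_R': (F(-2, 3), F(-1, 3)),
    'nu_L': (F(-1), F(0)),   'u_L': (F(1, 3), F(2, 3)),
    'nu_R': (F(0), F(0)),    'u_R': (F(4, 3), F(2, 3)),
}

for lepton, quark in [('e_L', 'd_L'), ('e_R', 'd_R'),
                      ('nu_L', 'u_L'), ('nu_R', 'u_R')]:
    dY = Y_Q[quark][0] - Y_Q[lepton][0]
    dQ = Y_Q[quark][1] - Y_Q[lepton][1]
    print(lepton, '->', quark, 'dY =', dY, 'dQ =', dQ)
    # Every transformation gives dY = 4/3 and dQ = 2/3.
```

So the same shift in (Y, Q) is shared by all four transformations, which is the consistency argued for in the text.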
These transformations are obviously not normal reactions at low energy. The first two make checkable, falsifiable predictions about unification, replacing supersymmetry speculation about the unification of running couplings: they predict the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls).
If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks,
eL -> dL
eR -> dR
vL -> uL
vR -> uR
it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts. The physical mechanism for the first two of the four transformations is relatively straightforward to picture: collide leptons at enormous energy, and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low-energy) electric charge, with this shielded energy being used instead in the short-range weak hypercharge mediated by weak gauge bosons, and in colour charges for the strong force.
Because we know exactly how much energy is ‘lost’ from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained in the weak force and seeing if the predictions are accurate. This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge. It is also necessary to investigate the possibilities for the transformation of positrons into upquarks. This issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang by the mechanism suggested for the first two transformations.
However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry. Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charges. (Similarly, atoms often appear to be neutral, but if we smash them to pieces, observable electric charges arise. The apparent electrical neutrality of atoms is a masking effect: atoms usually carry equal positive and negative charge, which cancel as seen from a distance. A photon of light similarly carries positive and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can still interact with charges.)
Charge is only manifested by way of the field created by a charge: nobody has ever seen the core of a charged particle, only the field. A confined field of a given charge is therefore indistinguishable from a charge. The only reason why an electron appears to be a negative charge is that it has a negative electric field around it. As shown in Fig. 5 of the previous post, a modification is necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.
So a ‘neutral’ particle which is neutral because it contains equal amounts of positive and negative electric field may be able to induce electric polarization of the vacuum for the short-ranged (uncancelled) electric field. The range of this effect is obviously limited to the distance between the centre of the positive part of the particle and the centre of the negative part. (In the case of a photon, for example, this distance is the wavelength.)
If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity mediating gauge boson, and a massive weak neutral-current producing gauge boson.
Let’s try the transformation of a positron into an upquark. This has two major advantages over the idea that neutrinos are transformed into upquarks. First, it explains why we don’t observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but that antimatter quickly annihilates with matter into gamma rays). In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter. The transformation of free positrons into confined upquarks would sort out this problem. Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron. If the upquarks come from a transformation of positrons while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.
Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge of the positron to drop from +1 unit for a positron to +2/3 units for upquarks. This occurs because you get two positive upquarks and one downquark in a proton. The transformation is
e+L -> uL
The positron on the left hand side has Y = +1, Q = +1 and T = +1/2. The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2. Hence, there is a decrease of Y by 2/3, while Q decreases by 1/3: the change in Y is twice the change in Q. This is impressively identical to the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation eL -> dL.
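The quantum-number bookkeeping above is easy to verify mechanically. The following Python sketch uses exact fractions with the left-handed Y and Q assignments quoted in the text, and checks that the change in Y is twice the change in Q for both transformations (this is only an arithmetic check of the stated numbers, not of the mechanism itself):

```python
# Verify that delta-Y = 2 * delta-Q for both suggested transformations,
# using the weak hypercharge (Y) and electric charge (Q) values quoted
# in the text for the left-handed states.
from fractions import Fraction as F

# (Y, Q) for each left-handed particle, as given above
positron_L  = (F(1),     F(1))
upquark_L   = (F(1, 3),  F(2, 3))
electron_L  = (F(-1),    F(-1))
downquark_L = (F(1, 3),  F(-1, 3))

def deltas(initial, final):
    """Return (delta_Y, delta_Q) for a transformation initial -> final."""
    return (final[0] - initial[0], final[1] - initial[1])

dY1, dQ1 = deltas(positron_L, upquark_L)    # e+L -> uL
dY2, dQ2 = deltas(electron_L, downquark_L)  # eL  -> dL

print(dY1, dQ1)  # -2/3 -1/3
print(dY2, dQ2)  # 4/3 2/3
assert dY1 == 2 * dQ1 and dY2 == 2 * dQ2
```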
There are only two ways that quarks can group: in pairs and in triplets. The pairs of quarks sharing the same polarized vacuum are known as mesons; mesons are the SU(2) symmetry pairs of a left-handed quark and a left-handed antiquark, which both experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right-handed neutrino has zero weak hypercharge). The SU(3) symmetry triplets of quarks are called baryons.
Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so. This arises from the way the vector bosons gain mass. In the basic standard model, everything is massless. Mass is added to the standard model by a separate scalar field (such as that speculatively proposed by Philip Anderson and Peter Higgs, and called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass. The quanta of the scalar mass field are named ‘Higgs bosons’, but these have never been officially observed, and mainstream speculations do not predict the Higgs boson mass unambiguously.
The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass, which couple weakly because of the shielding factor of the polarized vacuum around a fermion. The 91 GeV energy is the equivalent of the rest mass of the neutral weak gauge boson, the Z.
The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively. The problem is located in the electroweak SU(2)xU(1) symmetry. Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity). So you end up learning, say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey. This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations: Feynman’s analogy of the chess game is fairly good. (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen, without having the complete understanding necessary for confident explanation of what the game is about. Then make do by teaching the better known predictive rules, which are technical and accurate, but don’t always convey a complete understanding of the big picture.)
A serious problem with the U(1) symmetry is that you can’t really ever get single leptons in nature. They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1); examples: in beta decay, you get a beta particle and an antineutrino, while in pair production you may get a positron and an electron.
This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post. Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces). Because a version of SU(2) with massless gauge bosons has infinite range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive gauge bosons is ideal for describing the short range weak force in left-handed quark-antiquark pairs (mesons).
The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism). The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field. Presumably the combined Higgs boson when coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that in the absence of Higgs bosons they can interact equally with left and right handed particles.
To summarise, quarks are probably electrons and antielectrons (positrons) with the symmetry transformation modifications you get from close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short range interactions).
Downquarks are electrons trapped in mesons (quark-antiquark pairs, bound together by the SU(2) weak nuclear force, so they have short lifetimes and undergo beta radioactive decay) or in baryons, which are triplets of quarks bound by the SU(3) strong nuclear force. The confinement of electrons in a small space reduces their electric charge, because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum, which shields part of the electric field. Because this shielding effect is boosted, the electron charge per electron observed at long range is reduced to a fraction. The idealized model is 3 electrons confined in close proximity, giving a 3 times stronger polarized vacuum, which reduces the observable charge per electron by a factor of 3, giving the e/3 downquark charge. This is a bit too simplistic of course, because in reality you get mainly stable combinations like protons (2 upquarks and 1 downquark). The energy lost from the electric charge, due to absorption in the polarized vacuum, powers the short-ranged nuclear forces which bind the quarks in mesons and baryons together.
Upquarks would seem to be trapped positrons. This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus. So one complete hydrogen atom is formed by 2 electrons and 2 positrons. This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks. Only particles with left-handed Weyl spin undergo weak force interactions.
Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R a right-handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right-handed version presumably does not couple to massive bosons, so SU(2)R produces only massless particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces. If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier.
The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don’t acquire mass at low energy.
This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked (I don’t think it has been checked at all yet)? Presumably this is linked to ongoing efforts to see evidence for a Higgs boson. The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell’s equations at low energy and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.
Spherical symmetry of Hubble recession
I’d like to thank Bee and others at the Backreaction blog for patiently explaining to me that a statement that radial distance elements are equal for the Hubble recession in all directions around us,
H = dv/dr = dv/dx = dv/dy = dv/dz
t (age of universe) = 1/H = dr/dv = dx/dv = dy/dv = dz/dv
dv/H = dr = dx = dy = dz
for spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally ‘wrong’ to people who are unaccustomed to cosmology, where the elementary equations for spherical geometry and for metrics over non-symmetric spatial dimensions don’t apply. Hopefully, ‘critics’ will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.
For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dz are total rubbish, ‘disproved’ by the existence of metrics and non-symmetrical spherical geometrical equations. They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observe around us.
Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally slowed down with no cosmological constant to offset this gravity effect by providing repulsive long range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H where 2/3 is the compensation factor for gravitational retardation.
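To put numbers on the two age estimates just described, here is a short Python sketch; the value H = 70 km/s/Mpc is an assumed illustrative input, not something derived in the text:

```python
# Compare the two age-of-universe estimates: t = 1/H (no gravitational
# deceleration) versus t = (2/3)/H (critical density, no cosmological
# constant), for an assumed Hubble parameter of 70 km/s/Mpc.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

H = 70.0 / KM_PER_MPC        # Hubble parameter in 1/s

age_no_deceleration  = (1.0 / H) / SECONDS_PER_GYR          # ~14.0 Gyr
age_critical_density = ((2.0 / 3.0) / H) / SECONDS_PER_GYR  # ~9.3 Gyr

print(round(age_no_deceleration, 1), "Gyr")
print(round(age_critical_density, 1), "Gyr")
```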
However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called ‘dark energy’ giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not). The latter is substantially more predictive and less ad hoc, since it was predicted via Electronics World, Oct. 1996, years before being confirmed by observation (see comment 11 on the previous post).
Therefore, let’s use 1/H as the age of the universe, time! Then we find:
1/H = dr/dv = dx/dv = dy/dv = dz/dv.
This proves that dr/dv = dx/dv = dy/dv = dz/dv.
Now multiply this out by dv, and what do you get? You get:
dr = dx = dy = dz.
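The step-by-step algebra above is trivial enough to restate as code. In this Python sketch, H and dv are arbitrary illustrative numbers; the point is only that a single isotropic gradient forces the distance elements to be equal:

```python
# If the recession-velocity gradient is the same number H in every
# direction (isotropy), then for any fixed velocity increment dv the
# distance elements dr, dx, dy, dz are all forced to equal dv/H.
H = 2.3e-18   # assumed Hubble parameter, 1/s (illustrative)
dv = 1000.0   # fixed recession-velocity increment, m/s (illustrative)

dr = dv / H   # radial direction
dx = dv / H   # x direction
dy = dv / H   # y direction
dz = dv / H   # z direction

assert dr == dx == dy == dz
print(dr)  # one common distance element, as claimed
```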
As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of the numerators means that the denominators are similarly equal: dr = dx = dy = dz. This is fact, not an opinion or guess.
Fig. 2: Illustration of the reason why the Hubble law H = dv/dr = dv/dx = dv/dy = dv/dz holds: because of isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv, so the denominators are all equal to each other too: dr = dx = dy = dz. Beware everyone, this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)!
So if your textbook has a formula which ‘contradicts’ dr = dx = dy = dz, or if you think that dr = dx = dy = dz should in your opinion be replaced by a metric with the squares of line elements all added up, or by a general formula for spherical geometry which applies to situations where the recession would vary with direction, then you are wrong. As one commentator on this blog has said (I don’t agree with most of it), it is true that new ideas which have not been investigated before often look ‘silly’. People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them, are not “critics”. They are not criticising the work; they are criticising their own misunderstandings. So any ridicule and character assassinations resulting should be taken with a large pinch of salt. It’s best to try to see the funny side when this occurs!
One of the very interesting things about dr = dx = dy = dz is what you get for time dimensions. Because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there are equivalent time-like dimensions for each of the spatial dimensions. This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical distances in time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:
(dr)/c = (dx)/c = (dy)/c = (dz)/c
which can be written as:
dt_r = dt_x = dt_y = dt_z
So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we only need one time to describe the expansion of the universe. If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result, as some hostility to this new idea from traditionalists shows.
But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal: “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, Pages 161-177, which can be downloaded here. The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions of matter are bound together but are contractable in general relativity. For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.
In addition, as was shown in detail in the previous post, this sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity:
‘To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, which rearranges to give dt = dR/v; this can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H^2R.’
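The quoted derivation can be checked numerically. This Python sketch differentiates v = HR by a central finite difference and compares v·dv/dR against H^2·R; the values of H and R are arbitrary illustrative inputs:

```python
# Check a = v * dv/dR = H^2 * R for the Hubble law v = H * R,
# using a central finite difference for dv/dR.
H = 2.3e-18    # assumed Hubble parameter, 1/s (illustrative)
R = 1.0e25     # an illustrative radial distance, m

def v(r):
    """Hubble's empirical law, v = H * r."""
    return H * r

dR = 1.0e20    # step size for the finite difference
dv_dR = (v(R + dR) - v(R - dR)) / (2.0 * dR)  # equals H (v is linear in R)

a = v(R) * dv_dR        # v * dv/dR
expected = H**2 * R     # the claimed result

assert abs(a - expected) < 1e-6 * abs(expected)
print(a)  # outward acceleration a = H^2 * R
```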
‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’
– Hermann Minkowski, 1908.
Deriving the relationship between the FitzGerald contraction and the gravitational contraction
Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 – v^2/c^2)^(1/2), gravity contracts lengths by the amount (1/3)MG/c^2 = 1.5 mm for the contraction of Earth’s radius by gravity.
It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in ‘special relativity’ (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity.
To start with, recall Dr Love’s derivation of Kepler’s law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post.
This is very simple. If a body’s average kinetic energy in space (outside the atmosphere) is such that it has just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly. If it has just under that velocity, it will eventually fall back to Earth, and so again it will not orbit endlessly. Like Goldilocks and the porridge, it is very fussy.
The average orbital velocity must exactly match the escape velocity – and be neither more nor less than the escape velocity – in order to achieve a stable orbit.
Dr Love points out the consequences: a body in orbit must have an average velocity equal to the escape velocity v = (2GM/r)^(1/2), which implies that its kinetic energy must be equal to its gravitational potential energy:
kinetic energy, E = (1/2)mv^2 = (1/2)m((2GM/r)^(1/2))^2 = mMG/r.
This permits him to derive Kepler’s law. It is also very important because it explains the relationship for stability of orbits:
average kinetic energy = gravitational potential energy
Einstein’s equivalence of inertial and gravitational mass in E = mc^2 then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy to derive the equivalence principle of general relativity, which states that the inertial mass is equal to the gravitational mass, at least for orbiting bodies. Another physically justified argument is that gravitational potential energy is the gravity energy that would be released in the case of collapse. If you allowed the object to fall and thereby pick up that gravitational potential energy, the latter would be converted into kinetic energy of the object. This is why the two energies are equivalent. It’s a rigorous argument!
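The identity behind this argument can be spot-checked numerically. In this Python sketch, Earth's GM is a standard value, while the radius and mass are arbitrary illustrative inputs; it only verifies the algebra that a body moving at v = (2GM/r)^(1/2) has kinetic energy equal to GMm/r:

```python
# Check that kinetic energy at escape velocity equals gravitational
# potential energy: (1/2) m v^2 = G M m / r when v = sqrt(2 G M / r).
import math

GM = 3.986e14   # Earth's gravitational parameter G*M, m^3/s^2
r = 7.0e6       # illustrative orbital radius, m
m = 1000.0      # illustrative body mass, kg

v_escape = math.sqrt(2.0 * GM / r)
kinetic = 0.5 * m * v_escape**2
potential = GM * m / r   # magnitude of gravitational potential energy

assert abs(kinetic - potential) < 1e-9 * potential
print(kinetic, potential)  # the two energies agree
```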
Now test it further. Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity v, where objects are compressed by the ratio (1 – v^2/c^2)^(1/2). Using the equivalence of average kinetic energy to gravitational potential energy, you can place the escape velocity, v = (2GM/r)^(1/2), into the contraction formula, and expand the result to two terms using the binomial expansion. You find that the radius of a gravitational mass would be reduced by the amount GM/c^2 = 4.5 mm for Earth’s radius, which is three times as big as Feynman’s formula for gravitational compression of Earth’s radius. The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (the direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius by (1/3)GM/c^2 = 1.5 mm! (There is also a rigorous mathematical discussion of this on the page here, if you have the time to scroll down and find it.)
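Plugging in standard values for the Earth reproduces the figures quoted above, to within rounding. This Python sketch carries out only the final arithmetic of the argument (the binomial step giving a one-dimensional reduction of GM/c^2 is taken from the text):

```python
# Earth's gravitational length contraction: the one-dimensional binomial
# result GM/c^2, and the (1/3)GM/c^2 radial reduction obtained by
# spreading the contraction over three dimensions.
GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
c = 2.998e8     # speed of light, m/s

one_dimensional = GM / c**2          # metres; ~4.4 mm
radial = one_dimensional / 3.0       # metres; ~1.5 mm

print(one_dimensional * 1000.0, "mm")  # ~4.43 mm
print(radial * 1000.0, "mm")           # ~1.48 mm
```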
Unusually, Feynman makes a confused mess of this effect in the relevant volume of his Lectures on Physics (chapter 42, page 6), where he correctly gives his equation 42.3 for the excess radius as being equal to the predicted radius minus the measured radius (i.e., he claims that the predicted radius is the bigger one in the equation), but then on the same page in the text falsely and confusingly writes: ‘… actual radius exceeded the predicted radius …’ (i.e., he claims in the text that the predicted radius is the smaller).
Professor Jacques Distler’s philosophical and mathematical genius
‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’
– Professor Jacques Distler, Musings blog post on the Role of Rigour.
Jacques also summarises the issues for theoretical physics clearly in a comment there:
- ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
- ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
- ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’