‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ – Wikipedia. (Hence there is a simple relationship between leptons and quarks; more on this later.)
[ASCII illustration omitted: left panel, ‘Dirac’s particulate sea’ (discrete dots); right panel, ‘Einstein’s continuum-field lines’ (continuous lines).]
Fig. 1 – The incompatibility between the quantum fields of quantum field theory (which are discontinuous, particulate) and the continuous fields of classical theories like Einstein’s general relativity and Maxwell’s electromagnetism. The incompatibility between quantum field theory and general relativity is due to the Dirac sea, which imposes discrete upper and lower limits (called the UV/ultraviolet and the IR/infrared cutoff, respectively) on the strengths of fields originating from particles.
Simplified vacuum polarization picture (illustration omitted): zone A is the UV cutoff, while zone B is the IR cutoff around the particle core.

Mass mechanism based on this simplified model (illustration omitted).
See also http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html and https://nige.wordpress.com/2006/10/09/16/ for more information. To find out how to calculate the 137 polarization shielding factor (1/alpha), scroll down and see the section below in this post headed ‘Mechanism for the strong nuclear force.’
RENORMALIZATION AND LOOP QUANTUM GRAVITY
Dirac’s sea correctly predicted antimatter and allows the polarization of the vacuum required in the Standard Model of particle physics, which has made thousands of accurate predictions. Einstein’s spacetime continuum of general relativity allows only a very few correct predictions and has a large ‘landscape’ of ad hoc cosmological models (ie, a large number of unphysical, or at least uncheckable, solutions, making it an ugly physics model). In addition, it is false insofar as it fails to naturally explain or incorporate the renormalization of force field charges due to polarization of the particulate vacuum, and it fails even to model the long-range gauge boson exchange radiation (which may be non-oscillatory radiation for the long-range force fields) of the Yang-Mills quantum field theories which successfully comprise the Standard Model of electroweak and strong interactions. For example, Einstein’s general relativity is disproved by the fact that it contains no natural mechanism to allow for the redshift, or related depletion of energy, in the gauge boson exchange radiation causing forces across the expanding universe! For these reasons, it is necessary to re-build general relativity on the basis of quantum field theory. Smolin et al. show using Loop Quantum Gravity (LQG) that a Feynman path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (ie, background independent).
Regarding the physics of the metric: in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation; see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp. 131-4:
‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_0/(1 – v^2/c^2)^{1/2}, where E_0 is the potential energy of the dislocation at rest.’
Specifying that the distance/time ratio = c (constant velocity of light) then tells you that the time dilation factor is identical to the distance contraction factor. Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:
‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’
– P. Woit, Not Even Wrong, Cape, London, 2006, p. 189. [Emphasis added.]
Surely this is compatible with Yang-Mills quantum field theory, where the loop is due to the exchange of force-causing gauge bosons from one mass to another and back again.
Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations. As Nobel Laureate Professor Phil Anderson wrote: “the flat universe is just not decelerating, it isn’t really accelerating …” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson
ULTRAVIOLET (UV) CUTOFF AND THE INFRARED (IR) CUTOFF
[ASCII illustration omitted: an enlargement of the void between Dirac sea particles; see the Fig. 2 caption below.]
Fig. 2 – The large void represents simply an enlargement of part of the left hand side of Fig. 1. The particulate nature of the Dirac sea explains the physical basis for the UV (ultraviolet) cutoff in quantum field theories such as the successful Standard Model. As you reduce the volume of space to such small volumes (i.e., as you collide particles to higher energies, so that they approach so closely that there is very little distance between them) that it is unlikely the small space will contain any background Dirac sea field particles at all, it is obvious that no charge polarization of the Dirac sea is possible. This is due to the Dirac sea becoming increasingly coarse-grained when magnified excessively. To make this argument quantitative and predictive is easy (see below). The error in existing quantum field theories which requires manual renormalization (upper and lower cutoffs) is the statistical treatment in the equations, which are continuous equations: these are only valid where large numbers of statistics are involved, and they break down when pushed too far, thus requiring cutoffs.
The UV cutoff is explained in Fig. 2: Dirac sea polarization (leading to charge renormalization) is only possible in volumes large enough to be likely to contain some discrete charges! The IR cutoff has a different explanation. It is required physically in quantum field theory to limit the range over which the vacuum charges of the Dirac sea are polarized, because if there were no limit, then the Dirac sea would be able to polarize sufficiently to completely eradicate the entire electric field of all electric charges. That this does not happen in nature shows that there is a physical mechanism in place which prevents polarization below the range of the IR cutoff, which is about 10^{-15} m from an electron, corresponding to something like 10^20 volts/metre electric field strength. Clearly, the Dirac sea is physically:

(a) disrupted from bound into freed charges (pair production effect) above the IR cutoff (the threshold for pair production),

(b) given energy in proportion to the field strength (by analogy to Einstein’s photoelectric equation, where there is a certain minimum amount of energy required to free electrons from their bound state, and further energy above that minimum then goes into increasing the kinetic energy of those particles; except that in this case the indeterminacy principle, due to scattering indeterminism, introduces statistics and makes it more like a quantum tunnelling effect, and the extra field energy above the threshold can also energise ground state Dirac sea charges into more massive loops in progressive states, ie, 1.022 MeV delivered to two particles colliding with 0.511 MeV each – the IR cutoff – can create an e- and e+ pair, while a higher loop threshold will be 211.2 MeV delivered as two particles colliding with 105.6 MeV or more, which can create a muon+ and muon- pair, and so on; see the previous post for a diagram explaining mass by ‘doubly special supersymmetry’, where charges have a discrete number of massive partners located either within the close-in UV cutoff range or beyond the perimeter IR cutoff range, accounting for masses in a predictive, checkable manner), and

(c) the quantum field is then polarized (shielding electric field strength).

These three processes should not be confused, but they are generally confused by the use of the vague term ‘energy’ to represent 1/distance in most discussions of quantum field theory. For two of the best introductions to quantum field theory as it is traditionally presented, see http://arxiv.org/abs/hep-th/0510040 and http://arxiv.org/abs/quant-ph/0608140

We only see ‘pair-production’ of Dirac sea charges becoming observable in creation-annihilation ‘loops’ (Feynman diagrams) when the electric field is in excess of about 10^20 volts/metre. This very intense electric field, which occurs out to about 10^{-15} metres from a real (long-observable) electron charge core, is strong enough to overcome the binding energy of the Dirac sea: particle pairs then pop into visibility (rather like water boiling off at 100 C).

The spacing of the Dirac sea particles in the bound state below the IR cutoff is easily obtained. Take the energy-time form of Heisenberg’s uncertainty principle and put in the energy of an electron-positron pair, and you find it can exist for ~10^{-21} second; the maximum possible range is therefore this time multiplied by c, ie ~10^{-12} metre. The key thing to do would be to calculate the transmission of gamma rays in the vacuum.
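As a sanity check on that last paragraph, here is a minimal Python sketch of the uncertainty-principle estimate (the 1.022 MeV pair energy is from the text; note the computed range comes out nearer 2 x 10^{-13} m, the same order-of-magnitude ballpark as the ~10^{-12} metre quoted above):

```python
# Uncertainty-principle estimate of the lifetime and maximum range of a
# virtual electron-positron pair (figures as used in the text above).
hbar = 1.054571e-34   # J*s, reduced Planck constant h/(2*Pi)
c = 2.998e8           # m/s
MeV = 1.602177e-13    # J per MeV

E = 1.022 * MeV       # rest-mass energy of an e-/e+ pair (2 x 0.511 MeV)
t = hbar / E          # maximum lifetime allowed by E*t ~ h/(2*Pi)
d = c * t             # maximum range: lifetime times speed of light

print(f"lifetime t ~ {t:.1e} s")   # ~6e-22 s, ie of order 10^-21 second
print(f"range d ~ {d:.1e} m")      # ~2e-13 m
```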
Since the maximum separation of charges is 10^{-12} m, the vacuum contains at least 10^36 charges per cubic metre. If I can calculate that the range of gamma radiation in such a dense medium is 10^{-12} metre, I’ll have substantiated the mainstream picture. Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy of each gamma ray is 0.511 MeV, and it is well known that the Compton effect (a scattering of gamma rays by electrons as if both are particles, not waves) predominates for this energy. The mean free path for scatter of gamma ray energy by electrons and positrons depends essentially on the density of electrons (the number of electrons and positrons per cubic metre of space). However, the data come from either the Klein-Nishina theory (an application of quantum mechanics to the Compton effect) or experiment, for situations where the binding energy of electrons to atoms or whatever is insignificant compared to the energy of the gamma ray. It is perfectly possible that the binding energy of the Dirac sea would mean that the usual radiation attenuation data are inapplicable!

Ignoring this possibility for a moment, we find that for 0.5 MeV gamma rays, Glasstone and Dolan (page 356) state that the linear absorption coefficient of water is u = 0.097 cm^-1, where the attenuation is exponential as e^{-ux}, where x is distance. Each water molecule has 10 electrons (8 from oxygen, plus 1 from each hydrogen), and we know from Avogadro’s number that 18 grams of water contains 6.0225*10^23 water molecules, or about 6.0*10^24 electrons. Hence, 1 cubic metre of water (1 metric ton, or 1 million grams) contains about 3.35*10^29 electrons. The reciprocal of the linear absorption coefficient u, ie, 1/u, tells us the ‘mean free path’ (the best estimate of effective ‘range’ for our purposes here), which for water exposed to 0.5 MeV gamma rays is 1/0.097 = 10.3 cm = 0.103 m. Hence, the number of electrons and positrons in the Dirac sea must be vastly larger than in water, in order to keep the range down (we don’t observe any vacuum gamma radioactivity, which only affects subatomic particles). Normalising the mean free path to 10^{-12} m to agree with the Heisenberg uncertainty principle, we find that the density of electrons and positrons in the vacuum would be: {the electron density in 1 cubic metre of water, 3.35*10^29} * 0.103/[10^{-12}] = 3.4*10^40 electrons and positrons per cubic metre of Dirac sea. This agrees with the estimate previously given from the Heisenberg uncertainty principle that the vacuum contains at least 10^36 charges per cubic metre. However, the binding energy of the Dirac sea is being ignored in this Compton effect shielding estimate. The true separation distance is smaller still, and the true density of electrons and positrons in the Dirac sea is still higher.
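Here is the same scaling argument as a minimal Python sketch (the u = 0.097 cm^-1 absorption coefficient for 0.5 MeV gamma rays in water is the Glasstone and Dolan figure quoted above, and the 10^{-12} m target mean free path is the uncertainty-principle spacing):

```python
# Scale the electron density of water up until the 0.5 MeV gamma-ray mean
# free path shrinks to the Dirac-sea spacing of ~1e-12 m. Mean free path
# is inversely proportional to electron density for Compton scattering.
AVOGADRO = 6.0225e23       # molecules per mole (value used in the text)
ELECTRONS_PER_H2O = 10     # 8 from oxygen + 1 from each hydrogen

molecules_per_m3 = (1e6 / 18.0) * AVOGADRO        # 1 m^3 water = 1e6 g
n_water = molecules_per_m3 * ELECTRONS_PER_H2O    # ~3.35e29 electrons/m^3

u = 0.097 * 100.0          # linear absorption coefficient, m^-1
mfp_water = 1.0 / u        # mean free path in water, ~0.103 m

target_mfp = 1e-12         # required mean free path in the Dirac sea, m
n_vacuum = n_water * (mfp_water / target_mfp)

print(f"water electron density : {n_water:.2e} per m^3")
print(f"Dirac sea density      : {n_vacuum:.2e} per m^3")  # ~3.4e40
```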
Obviously the graining of the Dirac sea must be much smaller than 10^{-12} m, because we have already said that it exists down to the UV cutoff (very high energy, ie, very small distances of closest approach). The amount of ‘energy’ in the Dirac sea is astronomical if you calculate the rest mass equivalent, but you can similarly produce stupid numbers for the energy of the earth’s atmosphere: the mean speed of an air molecule is around 500 m/s, and since the atmosphere is composed mainly of air molecules (with a relatively small amount of water and dust), we can get a ridiculous energy density of the air by multiplying the mass of air by 0.5*(500^2) to obtain its kinetic energy. Thus, 1 kg of air (with all the molecules going at a mean speed of 500 m/s) has an energy of 125,000 Joules. But this is not useful energy, because it can’t be extracted: it is totally disorganised. The Dirac sea ‘energy’ is similarly massive but useless.
REPRESENTATION THEORY AND THE STANDARD MODEL
Woit gives an example of how representation theory can be used in low dimensions to reduce the entire Standard Model of particle physics into a simple expression of Lie spinors and Clifford algebra on page 51 of his paper http://arxiv.org/abs/hep-th/0206135. This is a success in terms of what Wigner’s postulate requires (see the top of this post for the vital quote from Wiki), and there is then the issue of the mechanism for electroweak symmetry breaking, for mass/gravity fields, and for the 18 parameters of the Standard Model. Those parameters are not extravagant, seeing that the Standard Model has made thousands of accurate predictions with them, and all of them are either already, or else in principle, mechanistically predictable by the causal Yang-Mills exchange radiation model and a causal model of renormalization and gauge boson energy-sharing based unification (see previous posts on this blog, and the links in the ‘about’ section on the right hand side of this blog for further information).
Additionally, Woit stated other clues of chiral symmetry: ‘The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.’
For the background to Lie spinors and Clifford algebras, Baez has an interesting discussion of some very simple Lie algebra physics here and here, and representation theory here; Woit has extensive lecture notes here; and Tony Smith has a lot of material about Clifford algebras here and spinors here. The objective is a simple unified model of the particles which can explain the detailed relationship between quarks and leptons and predict things about unification which are checkable. The short range forces for quarks are easily explained by a causal model of polarization shielding by lepton-type particles in proximity (pairs or triads of ‘quarks’ form hadrons, and the pairs or triads are close enough to largely share the same polarized vacuum veil, which makes the polarized vacuum generally stronger, so that the effective long-range electromagnetic charge per ‘quark’ is reduced to a fraction of that for a lepton, which consists of only one core charge): see this comment on Cosmic Variance blog.
I’ve given some discussion of the Standard Model at my main page (which is now partly obsolete and in need of a major overhaul to include many developments). Woit gives a summary of the Standard Model in a completely different way, which makes chiral symmetries clear, in Fig. 7.1 on page 93 of Not Even Wrong (my failure to understand this before made me very confused about chiral symmetry, so I didn’t mention or consider its role):
‘The picture [it is copyright, so get the book: see Fig. 7.1 on p. 93 of Not Even Wrong] shows the SU(3) x SU(2) x U(1) transformation properties of the first of the three generations of fermions in the standard model (the other two generations behave the same way).
‘Under SU(3), the quarks are triplets and the leptons are invariant.
‘Under SU(2), the particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).
‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’
This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (‘quarks’), whereas SU(2) controls doublets of particles (‘quarks’).
But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!
Clearly this weak hypercharge effect is what has been missing from my naive causal model (where the observed long range quark electric charge is determined merely by the strength of vacuum polarization shielding of the closely confined electric charges). Energy is not merely being shared between the QCD SU(3) colour forces and the U(1) electromagnetic forces: there is also energy present in the form of weak hypercharge forces, which are determined by the SU(2) weak nuclear force group.
Let’s get the facts straight: from Woit’s discussion (unless I’m misunderstanding), the strong QCD force SU(3) only applies to triads of quarks, not to pairs of quarks (mesons).
The binding of pairs of quarks is by the weak force only (which would explain why they are so unstable, they’re only weakly bound and so more easily decay than triads which are strongly bound). The weak force also has effects on triads of quarks.
The weak hypercharge of a downquark in a meson containing 2 quarks is Y=1/3 compared to Y=-2/3 for a downquark in a baryon containing 3 quarks.
Hence the causal relationship holds true for mesons. Hypothetically, 3 right-handed electrons (each with weak hypercharge Y = -2) will become right-handed downquarks (each with hypercharge Y = -2/3) when brought close together, because they then share the same vacuum polarization shield, which is 3 times stronger than that around a single electron, and so attenuates more of the electric field, reducing it from -1 per electron when widely separated to -1/3 when brought close together (forget the Pauli exclusion principle for a moment!).
Now, in a meson, you only have 2 quarks, so you might think that from this model the downquark would have electric charge -1/2 and not -1/3, but that anomaly only exists when ignoring the weak hypercharge! For a downquark in a meson, the weak hypercharge is Y = 1/3 instead of the Y = -2/3 which the downquark has in a baryon (triad). The increased hypercharge (which corresponds physically to the weak force field that binds a meson) offsets the electric charge anomaly. The handedness switch-over, in going from quarks in baryons to those in mesons, automatically compensates the electric charge, keeping it the same! (A toy bookkeeping sketch of this charge-sharing argument follows below.)
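To make the arithmetic of the last two paragraphs explicit, here is a toy bookkeeping sketch in Python. It is purely an illustration of the model argued in this post (the -1/N shared-veil rule and the hypercharge offset are this post’s assumptions, not established physics):

```python
# Toy bookkeeping for the charge-sharing argument above (a sketch of the
# model in this post, not established physics): N unit core charges
# sharing one polarized-vacuum veil show an observed charge of -1/N each.
def observed_charge_per_core(n_cores: int) -> float:
    """Observed long-range charge per core when n_cores share one veil."""
    return -1.0 / n_cores

print(observed_charge_per_core(1))  # -1    : isolated lepton (electron)
print(observed_charge_per_core(3))  # -1/3  : downquark in a baryon (triad)
print(observed_charge_per_core(2))  # -0.5  : naive meson value; the text
# argues the weak hypercharge switch (Y = 1/3 instead of -2/3) offsets
# this anomaly, keeping the observed downquark charge at -1/3.
```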
The details of how handedness is linked to weak hypercharge are found in the dynamics of Pauli’s exclusion principle: adjacent particles can’t have a full set of the same quantum numbers, like the same spin and charge. Instead, each particle has a unique set of quantum numbers. Bringing particles together and having them ‘live together’ in close proximity forces them to arrange themselves with suitable quantum numbers. The Pauli exclusion principle is simple in the case of atomic electrons: each electron has four quantum numbers, describing orbit configuration and intrinsic spin, and each adjacent electron has opposite spin to its neighbours. The spin alignment here can be understood very simply in terms of magnetism: it needs the least energy to have such an alignment (having similar spins would be an addition of magnetic moments, so that north poles would all be adjacent and south poles would all be adjacent, which requires more energy input than having adjacent magnets parallel with opposite poles nearest). In quarks, the situation regarding the Pauli exclusion principle mechanism is slightly more complex, because quarks can have similar spins if their colour charges are different (electrons don’t have colour charges, which are an emergent property of the strong fields which arise when two or three real fundamental particles are confined at close quarters).
Obviously there is a lot more detail to be filled in, but the main guiding principles are clear now: every fermion is indeed the same basic entity (whether quark or lepton), and the differences in observed properties stem from vacuum properties such as the strength of vacuum polarization, etc. The fractional charges of quarks always arise due to the use of some electromagnetic energy to create other types of short range forces (the testable prediction of this model is the forecast that detailed calculations will show that perfect unification arises on such energy conservation principles, without requiring the 1:1 boson to fermion ‘supersymmetry’ hitherto postulated by string theorists). Hence, in this simple mechanism, the +2/3 charge of the upquark is due to a combination of strong vacuum polarization attenuation and hypercharge (the downquark we have been discussing is just the clearest case).
So regarding unification, we can get hard numbers out of this simple mechanism. We can see that the total gauge boson energy for all fields is conserved, so when one type of charge (electric charge, colour charge, or weak hypercharge) varies with collision energy or distance from nucleus, we can predict that the others will vary in such a way that the total charge gauge boson energy (which mediates the charge) remains constant. For example, we see reduced electric charge from a long range because some of that energy is attenuated by the vacuum and is being used for weak and (in the case of triads of quarks) colour charge fields. So as you get to ever higher energies (smaller distances from particle core) you will see all the forces equalizing naturally because there is less and less polarized vacuum between you and the real particle core which can attenuate the electromagnetic field. Hence, the observable strong charge couplings have less supply of energy (which comes from attenuation of the electromagnetic field), and start to decline. This causes asymptotic freedom of quarks because the decline in the strong nuclear coupling at very small distances is offset by the geometric inverse-square law over a limited range (the range of asymptotic freedom). This is what allows hadrons to have a much bigger size than the size of the tiny quarks they contain.
MECHANISM FOR THE STRONG NUCLEAR FORCE
We’re in a Dirac sea, which undergoes various phase transitions, breaking symmetries as the strength of the field is increased. Near a real charge, the electromagnetic field within 10^{-15} metre exceeds 10^20 volts/metre, which causes the first phase transition, like ice melting or water boiling. The freed Dirac sea particles can therefore exert a short range attractive force by the LeSage mechanism (which of course does not apply directly to long range interactions, because the ‘gas’ effect fills in LeSage shadows over long distances, so the attractive force is short-ranged: it is limited to a range of about one mean free path for the interacting particles in the Dirac sea). The LeSage gas mechanism represents the strong nuclear attractive force mechanism. Gravity and electromagnetism, as explained in the previous posts on this blog, are both due to the Yang-Mills ‘photon’ exchange mechanism (because Yang-Mills exchange ‘photon’ radiation – or any other radiation – doesn’t diffract into shadows, it doesn’t suffer the short range issue of the strong nuclear force; the short range of the weak nuclear force, due to shielding by the Dirac sea, may be quite a different mechanism for having a short range).
You can think of the strong force like the short-range forces due to normal sea-level air pressure: the air pressure of 14.7 psi or 101 kPa is big, so you can demonstrate the short range attractive force of air pressure by using a set of rubber ‘suction cups’ strapped to your hands and knees to climb a smooth surface like a glass-fronted building (assuming the glass is strong enough!). This force has a range on the order of the mean free path of air molecules. At bigger distances, air pressure fills the gap, and the force disappears. The actual fall-off of course is statistical; instead of the short range attraction becoming suddenly zero at exactly one mean free path, it drops (in addition to geometric factors) exponentially by the factor exp(-ux), where u is the reciprocal of the mean free path and x is distance (in air, of course, there are also weak attractive forces between molecules, Van der Waals forces). Hence it is short ranged due to scatter of charged particles dispersing forces in all directions (unlike radiation):
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
(Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, which above the IR cutoff start to exert vast pair-production loop pressure; this gives the foam vacuum.)
Now for the formulae! The reason for radioactivity of heavy elements is linked to the increasing difficulty the strong force has in offsetting electromagnetism as you get towards 137 protons, accounting for the shorter half-lives. So here is a derivation of the strong nuclear force (mediated by pions) law including the natural explanation of why it is 137 times stronger than electromagnetism at short distances:
Heisenberg’s uncertainty says p*d = h/(2*Pi), if p is uncertainty in momentum, d is uncertainty in distance.
This comes from the resolving power of Heisenberg’s imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process. The factor of 2 would be a factor of 4 if we consider the uncertainty in one direction about the expected position (because the uncertainty applies to both directions, it becomes a factor of 2 here).
For light wave momentum p = mc, pd = (mc)(ct) = Et where E is uncertainty in energy (E=mc^2), and t is uncertainty in time. OK, we are dealing with massive pions, not light, but this is close enough since they are expected to be relativistic, ie, they have a velocity approaching c:
Et = h/(2*Pi)
t = d/c = h/(2*Pi*E)
E = hc/(2*Pi*d).
Hence we have related distance to energy: this result is the formula used even in popular texts to show that an 80 GeV W+/- gauge boson will have a range of about 10^{-17} m. So it’s OK to do this (ie, it is OK to take uncertainties of distance and energy to be the real energy and range of gauge bosons which cause fundamental forces).
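As a quick numerical check of that popular-text figure, here is a minimal Python sketch (the 80 GeV W mass is from the text above; the computed range comes out at a few times 10^{-18} m, ie of the order of the 10^{-17} m usually quoted):

```python
# Range of a massive gauge boson from d = hc/(2*Pi*E) = hbar*c/E.
hbar = 1.054571e-34    # J*s
c = 2.998e8            # m/s
GeV = 1.602177e-10     # J per GeV

E_W = 80 * GeV                        # W boson rest energy
d_W = hbar * c / E_W                  # uncertainty-principle range
print(f"W boson range ~ {d_W:.1e} m") # ~2.5e-18 m, of order 10^-17 m
```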
Now, the work equation E = F*d (a vector equation: “work is product of force and the distance acted against the force in the direction of the force”), where again E is uncertainty in energy and d is uncertainty in distance, implies:
E = hc/(2*Pi*d) = Fd
F = hc/(2*Pi*d^2)
Notice the inverse square law resulting here! There is a maximum range of this force, equal to the distance pions can travel in the time given by the uncertainty principle, d = hc/(2*Pi*E).
The strong force as a whole gives a Van der Waals type force curve; attractive at the greatest distances due to the pion mediated strong force (which is always attractive since pions have spin 0) and repulsive at short ranges due to exchange of rho particles (which have a spin of 1). We’re just considering the attractive pion exchange force here. (The repulsive rho exchange force would need to be added to the result to get the net strong force versus distance curve.)
The exponential quenching factor for the attractive (pion mediated) part of the strong force is exp(-x/a), where x is distance and a is the range of the pions. Using the uncertainty principle, assuming the pions are relativistic (velocity ~ c) and ignoring pion decay, a = hc/(2*Pi*E), where E is the pion energy (~140 MeV ~= 2.2*10^{-11} J). Hence a = 1.4*10^{-15} m = 1.4 fm.
So over a distance of 1 fm, this would reduce the pion force to exp(-1/1.4) ~ 0.5. But if the pion speed is much smaller than c, the reduction will be greater. There would be other factors involved as well, due to things like the polarization of the charged pion cloud.
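The same style of check in Python for the pion figures just quoted (the 140 MeV pion energy is from the text; exp(-1/1.4) evaluates to about 0.49, ie the ‘~0.5’ above):

```python
import math

hbar = 1.054571e-34    # J*s
c = 2.998e8            # m/s
MeV = 1.602177e-13     # J per MeV

E_pion = 140 * MeV                    # ~2.2e-11 J, as in the text
a = hbar * c / E_pion                 # pion range ~1.4e-15 m = 1.4 fm
print(f"pion range a ~ {a:.1e} m")

x = 1.0e-15                           # distance of 1 fm
print(f"attenuation exp(-x/a) = {math.exp(-x / a):.2f}")   # ~0.49
```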
Ignoring the exponential attenuation, the strong force obtained above is 137.036 times higher than Coulomb’s law for unit fundamental charges! This is the usual value given for the ratio between the strong nuclear force and the electromagnetic force (I’m aware that the QCD inter-quark gluon-mediated force takes different, and often smaller, values than 137 times the electromagnetic force; that is due to vacuum polarization effects, including the effect of charges in the vacuum loops coupling to and interfering with the gauge bosons for the QCD force).
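The headline ratio is easy to verify: dividing F = hc/(2*Pi*d^2) by Coulomb’s law, F = e^2/(4*Pi*Permittivity*d^2), the d^2 cancels, leaving exactly the inverse fine structure constant. A minimal check in Python:

```python
# Ratio of F = hc/(2*Pi*d^2) to Coulomb's law for two unit charges.
# The distance d cancels, leaving hbar*c*4*Pi*eps0/e^2 = 1/alpha.
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
e = 1.602176634e-19      # C, fundamental charge
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity

ratio = hbar * c * (4 * math.pi * eps0) / e**2
print(f"ratio = {ratio:.3f}")   # 137.036, the inverse fine structure constant
```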
This 137-times-stronger force corresponds to the bare core charge of any particle, ignoring the vacuum polarization, which extends out to 10^{-15} metres and shields the electric field by a factor of 137 (which is the number 1/alpha), ie, the vacuum is attenuating 100(1 – alpha) % = 99.27 % of the electric field of the electron. This energy is going into nuclear forces in the short-range vacuum polarization region (ie, massive loops, virtual particles, W+/-, Z_0 and ‘gluon’ effects, which don’t have much range because they are barred by the high density of the vacuum, which is the obvious mechanism of electroweak symmetry breaking – regardless of whether there is a Higgs boson or not).
The electron has the characteristics of a gravity-field-trapped energy current: a Heaviside energy current loop of black hole size (radius 2GM/c^2) for its mass, as shown by gravity mechanism considerations (see the ‘about’ information on the right hand side of this blog for links). The looping of energy current, basically a Poynting-Heaviside energy current trapped in a small loop, causes a spherically symmetric E-field and a toroidally shaped B-field, which at great distances reduces (because of the effect of the close-in radial electric fields on transverse B-fields in the vacuum polarization zone within 10^{-15} metre of the electron black hole core) to a simple magnetic dipole field (those B-field lines which are parallel to E-field lines, ie, the polar B-field lines of the toroid, obviously can’t ever be attenuated by the radial E-field). This means that, since the E- and B-fields in a photon are related simply by E = c*B, the vacuum polarization reduces only E by a factor of 137, and not B! This has long been evidenced in practice, as Dirac proved in 1931:
‘When one considers Maxwell’s equations for just the electromagnetic field, ignoring electrically charged particles, one finds that the equations have some peculiar extra symmetries besides the well-known gauge symmetry and space-time symmetries. The extra symmetry comes about because one can interchange the roles of the electric and magnetic fields in the equations without changing their form. The electric and magnetic fields in the equations are said to be dual to each other, and this symmetry is called a duality symmetry. Once electric charges are put back in to get the full theory of electrodynamics, the duality symmetry is ruined. In 1931 Dirac realised that to recover the duality in the full theory, one needs to introduce magnetically charged particles with peculiar properties. These are called magnetic monopoles and can be thought of as topologically non-trivial configurations of the electromagnetic field, in which the electromagnetic field becomes infinitely large at a point. Whereas electric charges are weakly coupled to the electromagnetic field with a coupling strength given by the fine structure constant alpha = 1/137, the duality symmetry inverts this number, demanding that the coupling of the magnetic charge to the electromagnetic field be strong with strength 1/alpha = 137. [This applies to the magnetic dipole Dirac calculated for the electron, assuming it to be a Poynting wave where E = c*B and E is shielded by vacuum polarization by a factor of 1/alpha = 137.]
‘If magnetic monopoles exist, this strong [magnetic] coupling to the electromagnetic field would make them easy to detect. All experiments that have looked for them have turned up nothing…’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, pp. 138-9. [Emphasis added.]
The Pauli exclusion principle normally makes the magnetic moments of all electrons undetectable on a macroscopic scale (apart from magnets made from iron, etc.): the magnetic moments usually cancel out because adjacent electrons always pair with opposite spins! If there are magnetic monopoles in the Dirac sea, there will be as many ‘north polar’ monopoles as ‘south polar’ monopoles around, so we can expect not to see them because they are so strongly bound!
CAUSALITY IN QUANTUM MECHANICS
Professor Jacques Distler has an interesting, thoughtful, and well written post called ‘The Role of Rigour’ on his Musings blog where he brilliantly argues:
‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’
Jacques also summarises the issues for theoretical physics clearly in a comment there:
- ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
- ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
- ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’
I’ve explained there to Dr ‘string-hype Haelfix’ that people should be working on non-rigorous areas like the derivation of the Hamiltonian in quantum mechanics, which would increase the rigour of theoretical physics, unlike string. I explained this kind of thing earlier (the need for checkable research, not speculation about unobservables) in the October 2003 Electronics World opinion page, but was ignored, so clearly I need to move on to stronger language, because stringers don’t listen to such polite arguments as those I prefer using! Feynman writes in QED, Penguin, London, 1985:
‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’
There is already a frequency of oscillation in the photon before it hits the glass, and in the glass due to the sea of electrons interacting via Yang-Mills force-causing radiation. If the frequencies clash, the photon can be reflected or absorbed. If they don’t interfere, the photon goes through the glass. Some of the resonant frequencies of the electrons in the glass are determined by the exact thickness of the glass, just like the resonant frequencies of a guitar string are determined by the exact length of the guitar string. Hence the precise thickness of the glass controls some of the vibrations of all the electrons in it, including the surface electrons on the edges of the glass. Hence, the precise thickness of the glass determines the amplitude there is for a photon of given frequency to be absorbed or reflected by the front surface of the glass. It is indirect in so much as the resonance is set up by the thickness of the glass long before the photon even arrives (other possible oscillations, corresponding to a non-integer value of the glass thickness as measured in terms of the number of wavelengths which fit into that thickness, are killed off by interference, just as a guitar string doesn’t resonate well at non-natural frequencies).
What has happened is obvious: the electrons have set up an equilibrium oscillatory state, dependent upon the total thickness, before the photon arrives. There is nothing mysterious to this: consider how a musical instrument works, or even just a simple tuning fork or solitary guitar string. The only resonant vibrations are those which contain an integer number of wavelengths. This is why metal bars of different lengths resonate at different frequencies when struck. Changing the length of the bar slightly completely alters its resonance to a given wavelength! Similarly, the photon hitting the glass has a frequency itself. The electrons in the glass as a whole are all interacting (they’re spinning and orbiting with centripetal accelerations which cause radiation emission, so all are exchanging energy all the time, which is the force mechanism in Yang-Mills theory for U(1) electromagnetism), so they have a range of resonances that is controlled by the number of integer wavelengths which can fit into the thickness of the glass, just as the range of resonances of a guitar string is determined by the wavelengths which fit into the string length resonantly (ie, without suffering destructive interference).
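Feynman’s QED book illustrates this thickness dependence with a two-surface ‘arrow’ sum: the front- and back-surface reflection amplitudes are added with a phase difference set by the glass thickness, and the total reflection probability cycles between 0 and about 16% as the thickness grows. A minimal Python sketch of that arrow arithmetic (the 0.2 amplitude per surface is Feynman’s figure for glass; using just two paths is his simplification, not a full QED calculation):

```python
# Feynman's two-path 'arrow' sum for partial reflection by a sheet of glass
# (QED, Penguin): front- and back-surface amplitudes added with a phase set
# by the thickness. Probability cycles between 0 and ~16%.
import cmath
import math

AMPLITUDE = 0.2   # Feynman's reflection amplitude per glass surface

def reflection_probability(thickness_in_wavelengths: float) -> float:
    """Add the front and back surface arrows and square the result."""
    # Round-trip phase lag; thickness measured in wavelengths in the glass.
    phase = 4 * math.pi * thickness_in_wavelengths
    # The front-surface arrow is flipped (Feynman's 180-degree rule).
    total = -AMPLITUDE + AMPLITUDE * cmath.exp(1j * phase)
    return abs(total) ** 2

for t in [0.0, 0.125, 0.25, 0.375, 0.5]:
    print(f"thickness {t:5.3f} wavelengths -> P = {reflection_probability(t):.3f}")
# 0.25 wavelengths gives the maximum, P = 0.16 (16%); integer and
# half-integer thicknesses give zero reflection.
```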
Hence, the thickness of the glass pre-determines the amplitude for a photon of given frequency to be either absorbed or reflected. The electrons at the glass surface are already oscillating with a range of resonant frequencies depending on the glass thickness, before the photon even arrives. Thus, the photon is reflected (if not absorbed) only from the front face, but its probability of being reflected is dependent on the total thickness of the glass. Feynman also explains:
‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’
More about this here (in the comments; but notice that Jacques’ final comment on the thread of discussion about rigour in quantum mechanics is discussed by me here), here, and here. In particular, Maxwell’s equations assume that real electric current is dQ/dt, which is a continuous equation being used to represent a discontinuous situation (particulate electrons passing by; Q is charge): it works approximately for large numbers of electrons, but breaks down for small numbers passing any point in a circuit in a second! It is a simple mathematical error, which needs correcting to bring Maxwell’s equations into line with modern quantum field theory. A more subtle error in Maxwell’s equations is his ‘displacement current’, which is really just a Yang-Mills force-causing exchange radiation, as explained in the previous post and on my other blog here. This is what people should be working on to derive the Hamiltonian: the Hamiltonian in both Schroedinger’s and Dirac’s equations describes energy transfers as wavefunctions vary in time, which is exactly what the corrected Maxwell ‘displacement current’ effect is all about (take the electric field here to be a relative of the wavefunction). I’m not claiming that classical physics is right! It is wrong! It needs to be rebuilt, and its limits of applicability need to be properly accepted:
Bohr simply wasn’t aware that Poincare chaos arises even in classical systems of three or more bodies, so he foolishly sought to invent metaphysical thought structures (complementarity and correspondence principles) to isolate classical from quantum physics. Chaotic motions on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field is over about 10^20 v/m) causing deflections. The failure of determinism (ie, closed orbits, etc) is present in classical, Newtonian physics. It can’t even deal with a collision of 3 billiard balls:
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
The Hamiltonian time evolution should be derived rigorously from the empirical facts of electromagnetism: Maxwell’s ‘displacement current’ describes energy flow (not real charge flow) due to a time-varying electric field. Clearly it is wrong because the vacuum doesn’t polarize below the IR cutoff which corresponds to 10^20 volts/metre, and you don’t need that electric field strength to make capacitors, radios, etc. work.
So you could derive the Schroedinger equation from a corrected Maxwell ‘displacement current’ equation. This is just an example of what I mean by deriving the Schroedinger equation. Alternatively, a computer Monte Carlo simulation of electrons in orbit around a nucleus, being deflected by pair production in the Dirac sea, would provide a check on the mechanism behind the Schroedinger equation, so there is a second way to make progress.
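A minimal sketch of what such a Monte Carlo check might look like in Python. Everything here is an illustrative assumption: a classical Coulomb orbit integrated with simple semi-implicit Euler steps, with small random impulses standing in for Dirac sea pair-production deflections; the kick size and rate are arbitrary placeholders, and a real check would calibrate them against the IR cutoff field strength and compare the resulting radius distribution with the Schroedinger ground state:

```python
# Toy Monte Carlo: a classical electron orbiting a proton, perturbed by
# random impulses that stand in for Dirac-sea pair-production deflections.
# Purely illustrative; SI units, with arbitrary kick parameters.
import math
import random

K = 8.9875e9 * (1.602e-19) ** 2    # Coulomb constant times e^2, J*m
ME = 9.109e-31                     # electron mass, kg
A0 = 5.29e-11                      # Bohr radius, m (starting radius)

random.seed(1)
dt = 1e-19                                    # time step, s
x, y = A0, 0.0
vx, vy = 0.0, math.sqrt(K / (ME * A0))        # circular-orbit speed

KICK = 1e3    # arbitrary impulse speed per deflection, m/s
RATE = 0.01   # arbitrary chance of a deflection per time step

radii = []
for step in range(100000):
    r = math.hypot(x, y)
    ax, ay = -K * x / (ME * r**3), -K * y / (ME * r**3)   # Coulomb force
    vx += ax * dt
    vy += ay * dt
    if random.random() < RATE:                # random vacuum deflection
        angle = random.uniform(0.0, 2.0 * math.pi)
        vx += KICK * math.cos(angle)
        vy += KICK * math.sin(angle)
    x += vx * dt
    y += vy * dt
    radii.append(math.hypot(x, y))

# A real check would compare this radius distribution against the
# Schroedinger ground-state probability density.
mean_r = sum(radii) / len(radii)
print(f"mean orbital radius = {mean_r / A0:.2f} Bohr radii")
```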
HOW SHOULD CENSORSHIP PRESERVE QUALITY?
‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ – Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html
‘There will certainly be no lack of human pioneers when we have mastered the art of flight. Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic, or the Baltic, or the British straits? Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies – I shall do it for the moon, you, Galileo, for Jupiter.’ – Letter from Johannes Kepler to Galileo Galilei, April 1610, http://www.physics.emich.edu/aoakes/letter.html
Kepler was a crackpot/noise maker; despite his laws and discovery of elliptical orbits, he got the biggest problem wrong, believing that the earth – which William Gilbert had discovered to be a giant magnet – was kept in orbit around the sun by magnetic force. So he was a noise generator, a crackpot. If you drop a bag of nails, they don’t all align to the earth’s magnetism, because it is so weak, but they do all fall – because gravity is relatively strong due to the immense amounts of mass involved. (For unit charges, electromagnetism is stronger than gravity by a factor like 10^40, but that is not the right comparison here, since the majority of the magnetism in the earth due to fundamental charges is cancelled out by the fact that charges are paired with opposite spins, cancelling out their magnetism. The tiny magnetic field of the planet earth is caused by some kind of weak dynamo mechanism due to the earth’s rotation and the liquid nickel-iron core of the earth, and the earth’s magnetism periodically flips and reverses naturally – it is weak!) So just because a person gets one thing right, or one thing wrong, or even not even wrong, that doesn’t mean that all their ideas are good/rubbish.
As Arthur Koestler pointed out in The Sleepwalkers, it is entirely possible for there to be revolutions without any really fanatic or even objective/rational proponents (Newton was a totally crackpot alchemist who also faked the first ‘theory’ of sound waves). My own view of the horrible Dirac sea (Oliver Lodge said: ‘A fish cannot comprehend the existence of water. He is too deeply immersed in it,’ but what about flying fish?) is that it is an awfully ugly empirical fact that is
(1) required by the Dirac equation’s negative energy solution, and which is
(2) experimentally demonstrated by antimatter.
My personal interest in the subject is more to do with a personal, bitter vendetta against string theorists, who are turning physics into a religion and a laughing stock in Britain, than because I have the slightest interest in how the big bang came about or what will happen in the distant future. I don’t care about that, just about understanding what is already known, and promoting the hard, experimental facts. Maybe when time permits, some analysis of what these facts say about the early time of the big bang and the future of the big bang will be possible (see my controversial comment here). I did touch on these problems in an eight-page initial paper which I wrote in May 1996 and which was sold via the October 1996 issue of Electronics World (see the letters pages for the Editor’s note). However, that paper is long obsolete, and the whole subject needs to be carefully analysed before coming to important conclusions. But the main problem is what Woit summarises on p. 259 of the UK edition of the brilliant book Not Even Wrong:
‘As long as the leadership of the particle theory community refuses to face up to what has happened and continues to train young theorists to work on a failed project, there is little likelihood of new ideas finding fertile ground in which to grow. Without a dramatic change in the way theorists choose what topics to address, they will continue to be as unproductive as they have been for two decades, waiting for some new experimental result finally to arrive.’
John Horgan’s excellent 1996 book The End of Science, which Woit argues is the future of physics if people don’t keep to explaining what is known (rather than speculating about unification at energies higher than can ever be seen, about parallel universes, extra dimensions, and other non-empirical drivel), states:
‘A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.’
This post is updated as of 26 October 2006, and will be further expanded to include material such as the results here, here, here, here and here.
I’ve not included gravity, electromagnetism or mass mechanism dynamics in this post; for these, see the links in the ‘about’ section on the right hand side of this blog, and the previous posts on this blog. The major quantitative predictions and successful experimental tests are summarized in the old webpage at http://feynman137.tripod.com/#d, apart from all of the particle masses, which are dealt with in the previous post on this blog. It is not particularly clear whether I should spend spare time revising outdated material or studying unification and Standard Model details further. Obviously, I’ll try to do both as far as time permits.
L. Green, “Engineering versus pseudo-science”, Electronics World, vol. 110, number 1820, August 2004, pp. 52-3:
‘… controversy is easily defused by a good experiment. When such unpleasantness is encountered, both warring factions should seek a resolution in terms of definitive experiments, rather than continued personal mudslinging. This is the difference between scientific subjects, such as engineering, and non-scientific subjects such as art. Nobody will ever be able to devise an uglyometer to quantify the artistic merits of a painting, for example.’ (If string theorists did this, string theory would be dead, because my mechanism, published in the Oct. 96 E.W. and Feb. 97 Science World, predicts the current cosmological results which were discovered about two years later by Perlmutter.)
‘The ability to change one’s mind when confronted with new evidence is called the scientific mindset. People who will not change their minds when confronted with new evidence are called fundamentalists.’ – Dr Thomas S. Love, California State University.
This comment from Dr Love is extremely depressing; we all know today’s physics is a religion. I found this out after email exchanges with, I believe, Dr John Gribbin, the author of numerous crackpot books like ‘The Jupiter Effect’ (claiming Los Angeles would be destroyed by an earthquake in 1982), and quantum books trying to prove Lennon’s claim that ‘nothing is real’. After explaining the facts to Gribbin, he emailed me a question something like (I have archives of emails, by the way, so could check the exact wording if required): ‘you don’t seriously expect me to believe that or write about it?’
‘… a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ – Max Planck.
But, being anti-belief and anti-religious-intrusion into science, I’m not interested in getting people to believe truths but, on the contrary, to question them. Science is about confronting facts. Dr Love suggests a U(3,2)/U(3,1)xU(1) alternative to the Standard Model, which provides a test of my objectivity. I can’t understand his model properly, because it reproduces particle properties in a way I don’t understand, and it doesn’t appear to yield any of the numbers I want, like force strengths, particle masses, and causal explanations. Although he has a great many causal explanations in his paper, which are highly valuable, I don’t see how they connect to the alternative to the Standard Model. He has an online paper on the subject as a PDF file, ‘Elementary Particles as Oscillations in Anti-de-Sitter Space-Time’, with which I have several issues: (1) anti-de-Sitter spacetime is a stringy assumption to begin with, and (2) I don’t see checkable predictions. However, maybe further work on such ideas will produce more justification for them; they haven’t had the concentration of effort which string theory has had.
There are no facts in string ‘theory’ (there isn’t even a theory – see the previous post), which is merely speculation. The gravity strength prediction I give is accurate and compatible with the Yang-Mills exchange radiation Standard Model and the validated aspects of general relativity (not the cosmic-landscape epicycles rubbish). Likewise, I predict correctly the ratio of electromagnetic strength to gravity strength (previous post), and the ratio of strong to electromagnetic, which means that I predict three forces for the price of one. In addition (see previous post), I predict the masses of all directly observable particles (the masses of isolated quarks are not real as such, because quarks can’t be isolated: the energy required to separate them exceeds the energy required to create new quark pairs).
Don’t believe this, it is not a faith-based religion. It is just plain fact. The key questions are the accuracy of the predictions and the clear presentation of the mechanisms. Unlike string theory, this is falsifiable science which makes many connections to reality. However, as Ian Montgomery, an Australian, aptly expressed the political state of physics in an email: ‘… we up Sh*t Creek in a barbed wire canoe without a paddle …’ I think that is a succinct summary of the state of high energy physics at present and of the hope of making progress. There is obviously a limit to what a handful of ‘crackpots’ outside the mainstream can do, with no significant resources compared to stringers.
[Regarding the ‘spin 2 graviton’, see an interesting comment on Not Even Wrong: ‘LDM Says:
October 26th, 2006 at 12:03 pm
Referring to footnote 12 of physics/0610168, about string theory and GR…
If you actually check what Feynman said in the “Feynman Lectures on Gravitation”, page 30…you will see that the (so far undetected) graviton, does not, a priori, have to be spin 2, and in fact, spin 2 may not work, as Feynman points out.
This elevation of a mere possibility to a truth, and then the use of this truth to convince oneself one has the correct theory, is a rather large extrapolation.’
Note that I also read those Feynman lectures on gravity when Penguin brought them out in paperback a few years ago, and saw the same thing, although I hated reading the abject speculation in them where Feynman suggests that the strength ratio of gravity to electromagnetism is like the ratio of the radius of the universe to the radius of a proton, without any mechanism or dynamics. Tony Smith quotes a bit of them on his site, which I re-quote on my home page. The spin depends on the nature of the radiation: if it is non-oscillating, then it can only propagate via the 2-way mode, like electric Heaviside-Poynting energy current (two non-oscillating energy currents going in opposite directions), for the same reason that infinite self-inductance prevents it working in a single-way mode; and this will affect what you mean by spin.
On my home page there are three main sections dealing with the gravity mechanism dynamics, namely near the top of http://feynman137.tripod.com (scroll down to the first illustration), at http://feynman137.tripod.com/#a, and, for technical calculations predicting the strength of gravity accurately, at http://feynman137.tripod.com/#h. The first discussion, near the top of the page, explains how shielding occurs: ‘… If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is red-shifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall. …’

This brings up the issue of how electromagnetism works. Obviously, the charges of gravity and electromagnetism are different: masses don’t have the symmetry properties of the electric charge. For example, mass increases with velocity, while electric charge doesn’t. I’ve dealt with this in the last couple of posts on this blog, but unification physics is a big field and I’m still making progress.

One comment about the spin. Fermions have half-integer spin, which means they are like a Mobius strip, requiring 720 degrees of rotation for a complete exposure of their surface. Fermi-Dirac statistics describe such particles. Bosons have integer spin, and spin-1 bosons are relatively normal in that they only require 360 degrees of rotation for a complete revolution. Spin-2 gravitons presumably require only 180 degrees of rotation per revolution, so they appear stringy to me. I think the exchange radiation of gravity and electromagnetism is the same thing – based on the arguments in previous posts – and is spin-1 radiation, albeit continuous radiation. It is quite possible to have continuous radiation in a Dirac sea, just as you can have continuous waves composed of molecules in a water based sea.]
‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Francis Bacon, Novum Organum.
This spin-network result would allow LQG to be built as a bridge between path integrals and general relativity. I wish Smolin or Woit would pursue this.
Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)
– Feynman, QED, Penguin, 1990, page 54.
That’s wave-particle duality explained. The path integral doesn’t mean that the photon travels along all possible paths; as Feynman says, it uses only a “small core of nearby space”.
The double-slit interference experiment is then very simple: the photon has a transverse spatial extent. If that extent overlaps two slits, the photon is diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says: it doesn’t take every path, since most of the energy is transferred along the classical path and the paths near it.
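To see numerically what a “small core of nearby space” means, here is a toy Python sketch that sums phasors exp(i*k*path length) over paths kinked transversely at the midpoint; the geometry and the numbers are assumptions purely for illustration:

from cmath import exp
from math import pi, sqrt

wavelength = 5e-7            # green light, m
k = 2 * pi / wavelength
L = 1.0                      # source-to-screen distance, m

def path_length(y):
    """Length of a path kinked by transverse offset y at the midpoint."""
    return 2 * sqrt((L / 2) ** 2 + y ** 2)

def summed_amplitude(y_max, n=100000):
    """Sum phasors over paths with midpoint offsets from 0 to y_max."""
    dy = y_max / n
    return sum(exp(1j * k * path_length(i * dy)) for i in range(n)) * dy

print(abs(summed_amplitude(1e-4)))   # ~1.0e-4: near-axis paths add coherently
print(abs(summed_amplitude(1e-3)))   # ~1.6e-4
print(abs(summed_amplitude(1e-2)))   # ~1.8e-4: 10x wider window, little change

The sum stops changing once paths a fraction of a millimetre off-axis are included; farther paths oscillate in phase and cancel pairwise, which is the “core” Feynman describes.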
Similarly, you find people saying that QFT claims the vacuum is full of annihilation-creation loops. When you check what QFT actually says, the loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, ie below the IR cutoff (beyond about 1 fm from a charge), the whole vacuum would become polarized enough to cancel out all real charges entirely. If loops existed beyond the UV cutoff, ie right down to zero distance from a particle, the loops would carry infinite energy and momenta, and their effects on the field would be infinite, again causing problems.
So the vacuum simply isn’t full of loops; they extend only to about 1 fm around particles. Hence there is no loop-based dark energy mechanism.
For more recent information on gravity, see http://electrogravity.blogspot.com/
See the discussion of this at https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/
BACKGROUND:
There are two cutoffs, named for historical reasons after the two extreme ends of the visible spectrum: a lower “infrared” (IR) cutoff and an upper “ultraviolet” (UV) cutoff. Obviously, optical IR and UV have nothing to do with the quantum field theory IR and UV cutoffs, which correspond to energies in and far beyond the gamma ray region (about 0.511 MeV for the IR cutoff and about 10^16 GeV for the UV cutoff).
To calculate the exact distance corresponding to the IR cutoff: simply calculate the distance of closest approach of two electrons colliding at 1.022 MeV (total collision energy) or 0.511 MeV per particle. This is easy as it is Coulomb scattering. See http://cosmicvariance.com/2006/10/04/is-that-a-particle-accelerator-in-your-pocket-or-are-you-just-happy-to-see-me/#comment-123143 :
Unification is often made to sound like something that only occurs during the first fraction of a second after the big bang: http://hyperphysics.phy-astr.gsu.edu/hbase/astro/unify.html#c1
The problem is that unification also has another meaning: that of closest approach when two electrons (or whatever) are collided. Unification of force strengths occurs not merely at high energies, but close to the core of a fundamental particle. The kinetic energy is converted into electrostatic potential energy as the particles are slowed by the electric field. Eventually, the particles stop approaching (just before they rebound), and at that instant the entire kinetic energy has been converted into electrostatic potential energy E = (charge^2)/(4*Pi*Permittivity*R), where R is the distance of closest approach. This concept enables you to relate the energy of the particle collisions to the distance of approach. For E = 1 MeV, R = 1.44 x 10^-15 m (this assumes one moving electron of 1 MeV hitting a non-moving electron, or two 0.5 MeV electrons colliding head-on). OK, I do know that there are other types of scattering than simple Coulomb scattering, so it gets far more complex, particularly at higher energies.
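As a sanity check on that arithmetic, here is a short Python sketch (SI constants; the function name is mine, for illustration only):

from math import pi

e = 1.602176634e-19       # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
eV = 1.602176634e-19      # joules per electron-volt

def closest_approach(E_joules):
    """R at which the whole collision energy E has become electrostatic
    potential energy: E = e^2/(4*pi*eps0*R)."""
    return e * e / (4 * pi * eps0 * E_joules)

print(closest_approach(1e6 * eV))   # ~1.44e-15 m for a 1 MeV collision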
But just thinking in terms of distance from a particle, you see unification very differently to the usual picture. For example, experiments in 1997 (published by Levine et al. in PRL, v. 78, no. 3, 1997, p. 424) showed that the observable electric charge is 7% higher at 92 GeV than at low energies like 0.5 MeV. Allowing for the increased charge due to the reduced polarization shielding, the 92 GeV electrons approach within 1.8 x 10^-20 m (assuming purely Coulomb scattering).
Extending this to the assumed unification energy of 10^16 GeV, the distance of closest approach comes down to 1.6 x 10^-34 m; the Planck scale is ten times smaller still.
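The same one-line formula, with the 7% charge increase folded in as (1.07e)^2 at and above 92 GeV (an assumption of this illustration), reproduces both figures in Python:

from math import pi

e, eps0, eV = 1.602176634e-19, 8.8541878128e-12, 1.602176634e-19

def closest_approach(E_joules, q):
    # R = q^2/(4*pi*eps0*E); purely Coulomb elastic scattering assumed
    return q * q / (4 * pi * eps0 * E_joules)

print(closest_approach(92e9 * eV, 1.07 * e))   # ~1.8e-20 m at 92 GeV
print(closest_approach(1e25 * eV, 1.07 * e))   # ~1.6e-34 m at 10^16 GeV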
If you replot graphs like http://www.aip.org/png/html/keith.htm (or Fig. 66 of Lisa Randall’s Warped Passages) as force strength versus distance from the particle core, you have to treat leptons and quarks differently.
You know that vacuum polarization is shielding the core particle’s electric charge, so the electromagnetic interaction strength rises as you approach the unification energy, while the strong nuclear force strength falls.
Electric field lines diverge, and that divergence causes the inverse square law: the number of lines crossing unit area falls as the inverse square of distance, because the number of radial field lines is constant while the surface area of a sphere at distance R from the electron core is 4*Pi*R^2. The polarization of the vacuum within 1 fm of an electron core means virtual positrons get drawn closer to the electron core than virtual electrons, creating opposing electric field lines which cancel out some of the electron’s field lines entirely.
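The geometric half of that statement is just spherical dilution, which a two-line Python check makes explicit (the line count of 1000 is an arbitrary illustration):

from math import pi

def line_density(n_lines, R):
    """Field lines crossing unit area at radius R: n/(4*pi*R^2)."""
    return n_lines / (4 * pi * R ** 2)

# Doubling the distance quarters the density of field lines:
print(line_density(1000, 1.0) / line_density(1000, 2.0))   # 4.0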
If you look at my home page you will find that the electron’s charge is 7% stronger at a scattering energy of 90 GeV than at 1 fm distance and beyond (0.511 MeV per particle, or 1 MeV per collision, scattering energy). For purely Coulomb, perfectly elastic scattering at normal incidence, the distance of closest approach goes inversely as the energy of the collision, so on this basis the charge of the electron is the normal charge “e” at 1 fm (10^-15 m) and beyond, but is 1.07e at something like 10^-20 m. Actually, the collision is not elastic but results in other particles being created and other reactions, so the true distance for the 1.07e charge is less than 10^-20 m.
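For comparison, the standard one-loop running of the electromagnetic coupling can be sketched in a few lines of Python. This is hedged: it assumes only electron-positron loops contribute, so it understates the measured 7% rise at 92 GeV, to which loops of heavier charged particles (muons, quarks, etc.) also contribute:

from math import log, pi, sqrt

alpha0 = 1 / 137.036     # low-energy fine structure constant
m_e = 0.511e6            # electron mass-energy, eV

def alpha_running(Q_eV):
    """One-loop QED running with electron loops only:
    alpha(Q) = alpha0 / (1 - (alpha0/(3*pi)) * ln(Q^2/m_e^2))."""
    return alpha0 / (1 - (alpha0 / (3 * pi)) * log((Q_eV / m_e) ** 2))

ratio = alpha_running(92e9) / alpha0
print(ratio)          # ~1.02: coupling rise from electron loops alone
print(sqrt(ratio))    # ~1.01: corresponding rise in effective charge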
I. Levine, D. Koltick et al., Physical Review Letters, v. 78, no. 3, 1997, p. 424.