Electroweak symmetry breaking and strong nuclear force

Electric charge of the electron as a function of distance

The observable electric charge of the electron varies with collision energy, increasing by about 7% when the collision energy is raised to 92 GeV. This has nothing to do with relativity (which varies only mass, length, and local time) but is due to vacuum polarization.

Fig. 1: Vacuum polarization around a charge

Fig. 2: The effect of polarization is that the effective electric charge rises in higher energy collisions, as electrons approach closely enough to penetrate part of the vacuum polarization shield before being turned back by the Coulomb repulsion. The force-causing gauge boson energy absorbed by the vacuum polarization at short ranges reappears as the other, short-ranged, ‘nuclear’ forces. The weak force has been unified with the electromagnetic force above a certain unification energy. Below that energy this electroweak symmetry is broken, because the weak gauge bosons somehow get stopped or shielded by the vacuum (officially this is due to the as-yet-unobserved Higgs field, but that has problems: the official theory says that a single Higgs boson would have infinite mass, and since the Higgs field would cause all masses, it would have to cause gravitational mass as well as inertial mass by Einstein’s equivalence principle of general relativity, i.e. the Higgs field would have to be a quantum gravity theory). It is possible that particles related to the Z_0 gauge boson fill the vacuum and have mass in their own right, thus playing the role today allocated to the Higgs field by the mainstream.

See http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html for a simple model of vacuum polarization. A vacuum loop for electron-positron creation and annihilation is something like: gamma ray -> e- + e+ -> gamma ray -> e- + e+ … The virtual charges exist for a short time and are polarized, which shields the real bare electric core charge.

This loop of matter creation and annihilation is occurring in the vacuum, and is described by the creation-annihilation operators in quantum field theory. The basic physics can be grasped with Heisenberg’s uncertainty equation. Physically, particles are hitting one another in the vacuum. Just as in a particle accelerator, such collisions produce short-lived bursts of particles which have observable consequences, such as increasing the magnetic moment of an electron by 0.116% and causing the Lamb shift in the hydrogen atom’s spectrum.

The average time that such particles persist depends on how long it takes before another collision in the vacuum annihilates them. So Heisenberg’s uncertainty relation for energy and time predicts the lifetime of any particular energy fluctuation. [There is no need to get into science fiction or metaphysics about Hawking’s ‘interpretations’ of the uncertainty principle (parallel universes, baby universes, etc.) because that simply isn’t physics, which is about checkable calculations.]

Particles which have the lowest energy exist for the longest period, a simple inverse law relationship between energy and time. Therefore, electron-positron pairs will dominate the vacuum, because they have the least rest mass-energy of all charged particles, and exist for the longest period of time.

The next heavier charged particle beyond the electron, the muon, has the same electric charge as the electron but is over two hundred times as massive, and will therefore exist on average for less than 1/200th (under 0.5%) of the time that electron-positron charges exist in the vacuum.

An electron-positron pair is created by 1.022 MeV = 1.634 x 10^-13 J of energy, and has a lifetime according to the Heisenberg uncertainty relationship of t ~ h/E ~ (6.626 x 10^-34 m^2 kg/s) / (1.634 x 10^-13 J) ~ 4 x 10^-21 seconds, during which the maximum possible distance moved by either particle is x = ct = (3 x 10^8 m/s).(4 x 10^-21 s) = 1.2 x 10^-12 metres.
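
As a quick check on that arithmetic, here is a minimal sketch (Python, not part of the original text) using the same rounded constants; it uses t ~ h/E as above, with the stricter h/(2.Pi.E) shown for comparison.

```python
import math

h = 6.626e-34                   # Planck's constant, J.s
c = 3.0e8                       # speed of light, m/s
E_pair = 1.022e6 * 1.602e-19    # 1.022 MeV pair-creation energy in joules (~1.634e-13 J)

t_rough   = h / E_pair                  # ~ 4 x 10^-21 s (form used in the text)
t_reduced = h / (2 * math.pi * E_pair)  # ~ 6.5 x 10^-22 s (with the 2.pi factor included)
x_max     = c * t_rough                 # ~ 1.2 x 10^-12 m maximum range

print("lifetime t ~ h/E        = %.1e s" % t_rough)
print("lifetime t ~ h/(2.pi.E) = %.1e s" % t_reduced)
print("maximum range x = c.t   = %.1e m" % x_max)
```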

This is why the vacuum isn’t observably radioactive! The virtual particles have just enough range (10^-12 metres) to continuously upset the electron orbits in atoms, creating the Schroedinger statistical wave distribution instead of classical orbits, but they don’t have enough range to break up molecules. The radioactivity of the vacuum is short-ranged enough that it can only upset the orbits of electrons and nuclear particles. This is rather like the effects of a gas on producing Brownian motion of very small (micron sized) dust particles, but being unable to induce chaotic motion in large objects because statistically their effects balance out on large scales (creating pressure).

Pairs of heavier charged particles could exist closer to the core of an electron if the electric field of the electron is responsible for creating particles. In the Yang-Mills theory, this is the same as saying that force-causing gauge photons – which constitute the electric field in the U(1) quantum field theory – behave like very high energy gamma rays when close to a heavy nucleus (where pair production occurs: gamma ray -> electron + positron). Obviously, this route is suggesting that gauge photons interact with one another to create vacuum charges of higher mass when closer to the electron middle. If I shield cobalt-60 with lead, some of the gamma rays (with a mean energy of 1.25 MeV) will be converted into electron-positron pairs by interacting with the intense field of the lead nucleus.

I cannot however see why the polarization of heavier particles would produce a greater attenuation of the electric charge, because after all they have the same amount of charge as electrons but more inertia, so they are less mobile and less readily polarized. Physically, they should produce less shielding, so Lubos Motl’s off-the-cuff response to a question (see the comments section of a post at http://electrogravity.blogspot.com/) is not lucid.

At very close ranges, you get strong and weak forces occurring due to the effect of the electromagnetic force field strength on the vacuum. These stronger forces – due to creation of charges in the vacuum near the middle of the electron – sap energy from the electromagnetic field. For example, if hypothetically on average half of the gauge boson energy of the photons causing electromagnetism is used to produce heavy charges at a certain distance close to the middle of the electron, then the electromagnetic field energy at that distance as carried by gauge bosons (photons for electromagnetism) will be reduced by 50%.

So you can’t have your cake and eat it; if one observational charge gets weaker, that implies that energy is being used by some other process. The strong force for quarks within say nucleons is optimised at low energies, and gets weaker at high energy collisions or when the quarks are brought closer together or close to other quarks. The reason may well be simply that the whole strong force phenomenology is driven by the variation in energy of the electroweak force field in the polarized vacuum. As the electromagnetic force gets stronger very close to the middle of a particle, less energy is therefore available for strong force coupling.

This suggestion is in sharp contrast to the official highly abstract renormalization approach to quantum field theory. The existing system uses a logarithmic correction for observable charge variation with energy (which is, in general, inversely proportional to distance from the middle of the particle):

e(x)/e(x = infinity) ~ 1 + [0.005 ln(A/E)]

This equals about 1.07 for A = 92,000 MeV and E = 0.511 MeV, as experimentally validated (Levine et al., PRL, 1997); hence the relative shielding factor of the vacuum falls from 137 at 0.511 MeV collisions to about 128 at 92 GeV. What is artificial about this equation is the lower limit (cutoff) of 0.511 MeV. From quantum mechanics you would expect a smooth curve for the shielding from charge polarization, not an abrupt lower limit: the formula says the electron charge is exactly e until you get to a certain distance (corresponding to a collision energy of 0.511 MeV), at which point it abruptly starts to increase. Why should vacuum polarization stop at some arbitrary distance? Answer: the virtual particles are not only being polarized by the electric field, they are being created/freed by it, and this requires a threshold field strength which we will calculate below (at this threshold, the energy density of the electric field is sufficiently high to create/free charged pairs, which then become polarized and shield the electric field).
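
As an illustration of the numbers just quoted, here is a short sketch (Python); the 0.005 coefficient and the 0.511 MeV cutoff are simply taken from the formula above, so this is an arithmetic check, not a derivation.

```python
import math

def charge_ratio(A_MeV, E_cutoff_MeV=0.511):
    # Observable charge at collision energy A relative to the low-energy charge,
    # using the logarithmic running formula quoted above.
    return 1.0 + 0.005 * math.log(A_MeV / E_cutoff_MeV)

ratio = charge_ratio(92000.0)                      # 92 GeV collisions
print("e(92 GeV)/e(low energy)    = %.2f" % ratio)              # ~ 1.06-1.07
print("shielding factor at 92 GeV = %.0f" % (137.036 / ratio))  # ~ 129, near the quoted ~128
```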

The whole of quantum field theory as traditionally taught is therefore, as Dr Chris Oakley points out, physically a fiddle, and this proves it. Confusion is introduced into QFT by expressing charge variation as a function of collision energy, instead of plotting the variation of observable charge as a function of distance from the middle of the electron in the simplest case of a static electron.

The shielding factor is falling at closer distances (and thus higher collision energies in particle accelerators) because you are getting closer to the core of the electron (or whatever you are colliding), and so seeing effects due to less of the polarised shield, and more of the unshielded charge. Similarly, if it is a cloudy day and you get in an aircraft and get much higher in the atmosphere, you will see more sunlight and less attenuated sunlight. If you could get above the cloud cover, you would see completely unshielded sunlight. Similarly, if you hit particles so hard together that the clouds of virtual charges are no longer intervening in the small distance between the particle cores when they rebound by Coulomb repulsion, the electric charge involved is much stronger simply because it is less shielded by polarization.

The final theory involves getting to grips with charge and mass, which first means getting a working model for what these things are in quantum field theories. Charge is the priority, because it is less variable than mass (mass varies in relativity, charge doesn’t).

Fundamental particles have charges in the ratio -1 : +2 : -3 (for example, the d quark is -1/3, the u quark is +2/3, and the electron is -1). The fractional charges of quarks in the final theory must arise from polarization, either by itself or in combination with some other effect or principle.

When you get two or three quarks close together, they share the same cloud of polarized charge, but that cloud is two or three times stronger, which explains why the electric charge of each individual quark as observed from a great distance (outside the intensely polarized vacuum shield) is only a fraction of what you observe at a large distance from an electron!

CALCULATION OF THE POLARIZATION CUTOFF RANGE: 

Dyson’s paper http://arxiv.org/abs/quant-ph/0608140 is very straightforward and connects deeply with the sort of physics I understand (I’m not a pure mathematician turned physicist). Dyson writes on page 70:

‘Because of the possibility of exciting the vacuum by creating a positron-electron pair, the vacuum behaves like a dielectric, just as a solid has dielectric properties in virtue of the possibility of its atoms being excited to excited states by Maxwell radiation. This effect does not depend on the quantizing of the Maxwell field, so we calculate it using classical fields.

‘Like a real solid dielectric, the vacuum is both non-linear and dispersive, i.e. the dielectric constant depends on the field intensity and on the frequency. And for sufficiently high frequencies and field intensities it has a complex dielectric constant, meaning it can absorb energy from the Maxwell field by real creation of pairs.’

Pairs are created by the high intensity field near the bare core of the electron, and the pairs become polarised, shielding part of the bare charge. The lower limit cutoff in the renormalized charge formula is therefore due to the fact that polarization is only possible where the field is intense enough to create virtual charges.

The threshold field strength for this effect is 6.9 x 10^20 volts/metre. This is the electric field strength, by Gauss’ law, at a distance of 1.4 x 10^-15 metre from an electron, which is the maximum range of QED vacuum polarization. This distance comes from the ~1 MeV collision energy used as the lower cutoff in the renormalized charge formula, because in a direct (head-on) collision all of this energy is converted into electrostatic potential energy by the Coulomb repulsion at that distance: to find it, just set 1 MeV equal to the potential energy (electron charge)^2 / (4Pi.Permittivity.Distance).
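
The two figures quoted above can be checked with a few lines of Python (standard constants; the ~1 MeV cutoff energy is taken from the text, so treat this as an order-of-magnitude verification):

```python
import math

e    = 1.602e-19    # electron charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
k    = 1.0 / (4 * math.pi * eps0)

E_cutoff = 1.0e6 * e        # ~1 MeV lower cutoff, in joules
r = k * e**2 / E_cutoff     # distance at which the Coulomb potential energy equals ~1 MeV
field = k * e / r**2        # electric field strength at that distance (Gauss' law)

print("cutoff distance r = %.2e m"   % r)      # ~ 1.4 x 10^-15 m
print("field at cutoff   = %.2e V/m" % field)  # ~ 6.9 x 10^20 V/m
```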

Can someone explain to me why there are no books or articles with plots of observable (renormalized) electric charge versus distance from a quark or lepton, let alone plots of weak and nuclear force strength as a function of distance? Everyone plots forces as a function of collision energy only, which is obfuscating. What you need to know is how the various types of charge vary as a function of distance. Higher energy only means smaller distance. It is pretty clear that when you plot charge as a function of distance, you start thinking about how energy is being shielded by the polarized vacuum, and electroweak symmetry breaking becomes clearer. The electroweak symmetry exists close to the bare charge, but it breaks at great distances due to some kind of vacuum polarization/shielding effect. Weak gauge bosons are completely attenuated at great distances, but electromagnetism is only partly shielded.

To convert energy into distance from particle core, all you have to do is to set the kinetic energy equal to the potential energy, (electron charge)^2 / (4Pi.Permittivity.Distance). However, you have to remember to use the observable charge for the electron charge in this formula to get correct results (hence at 92 GeV, the observable electric charge of the electron to use is 1.07 times the textbook low-energy electronic charge).
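
A sketch of that conversion follows (Python). The assumption that both colliding charges carry the same running value (e.g. 1.07e at 92 GeV) is mine, made purely for illustration.

```python
import math

e, eps0 = 1.602e-19, 8.854e-12
k = 1.0 / (4 * math.pi * eps0)

def closest_approach(collision_energy_MeV, charge_factor=1.0):
    # Set the kinetic energy equal to q^2/(4.Pi.Permittivity.r) and solve for r,
    # where q = charge_factor * e is the observable charge at that energy.
    q = charge_factor * e
    return k * q**2 / (collision_energy_MeV * 1.0e6 * e)

print("  1 MeV, q = e     : r = %.2e m" % closest_approach(1.0))            # ~ 1.4e-15 m
print(" 92 GeV, q = 1.07 e: r = %.2e m" % closest_approach(92000.0, 1.07))
```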

Here is a nice essay dealing with the Dirac theory and perturbative QFT in physical terms:

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949]. ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001…’

CALCULATION OF THE BARE CHARGE OF ELECTRON:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

Heisenberg’s uncertainty principle says

pd = h/(2.Pi)

where p is uncertainty in momentum, d is uncertainty in distance.
This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process.

For a light wave: momentum p = mc, while distance d = ct, hence:
pd = (mc)(ct) = Et, where E is the uncertainty in energy (E = mc^2) and t is the uncertainty in time.

Hence, Et = h/(2.Pi)

t = h/(2.Pi.E)

d/c = h/(2.Pi.E)

d = hc/(2.Pi.E)

This result is often used to show that an 80 GeV W or Z gauge boson will have a range of 10^-17 m. So it’s reliable to use this.
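
A minimal sketch of that standard estimate (Python, rounded constants; the output is of order 10^-18 to 10^-17 m):

```python
import math

h, c, e = 6.626e-34, 3.0e8, 1.602e-19
E_boson = 80.0e9 * e                    # ~80 GeV W/Z mass-energy in joules

d = h * c / (2 * math.pi * E_boson)     # range from d = hc/(2.Pi.E)
print("weak gauge boson range d ~ %.1e m" % d)
```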

Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb’s law for unit fundamental charges.
Notice that in the last sentence I’ve suddenly gone from thinking of d as an uncertainty in distance, to thinking of it as actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway.
Clearly what’s physically happening is that the true force is 137.036 times Coulomb’s law, so the real (bare) charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:

“… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).” – arxiv hep-th/0510040, p 71.
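
Before moving on, a quick numerical check of that 137.036 factor (Python; the ratio is independent of the distance chosen):

```python
import math

h, c, e, eps0 = 6.626e-34, 2.998e8, 1.602e-19, 8.854e-12

def uncertainty_force(d):
    return h * c / (2 * math.pi * d**2)        # F = hc/(2.Pi.d^2)

def coulomb_force(d):
    return e**2 / (4 * math.pi * eps0 * d**2)  # Coulomb force between two unit charges

d = 1.0e-15   # metres; any value works, since the d^2 cancels in the ratio
print("F_uncertainty / F_Coulomb = %.1f" % (uncertainty_force(d) / coulomb_force(d)))  # ~ 137
```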

The unified Standard Model force is F = hc/(2.Pi.d^2)

That’s the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E)
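
To make the structure of that statement concrete, here is a sketch of the model as I read it (Python). The choice of an 80 GeV boson for the short-ranged case is only an example; the functional forms are exactly those stated above.

```python
import math

h, c, e = 6.626e-34, 2.998e8, 1.602e-19

def core_force(d):
    # Unshielded 'superforce' F = hc/(2.Pi.d^2)
    return h * c / (2 * math.pi * d**2)

def em_force(d):
    # Electromagnetism: the core force shielded by the 137.036 polarization factor
    return core_force(d) / 137.036

def short_range_force(d, boson_energy_GeV):
    # Short-ranged nuclear force: attenuated by exp(-d/x), with x = hc/(2.Pi.E)
    x = h * c / (2 * math.pi * boson_energy_GeV * 1.0e9 * e)
    return core_force(d) * math.exp(-d / x)

for d in (1e-18, 1e-17, 1e-16):
    print("d = %.0e m: EM %.2e N, 80 GeV-mediated %.2e N"
          % (d, em_force(d), short_range_force(d, 80.0)))
```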

What all the detailed calculations of the Standard Model are really modelling are the vacuum processes for the different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is tied to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you experience a stronger force mediated by different particles.

Quarks have asymptotic freedom because the strong force and electromagnetic force cancel where the strong force is weak, at around the distance of separation of quarks in hadrons. That’s because of interactions with the virtual particles (fermions, quarks) and the field of gluons around quarks. If the strong nuclear force fell by the inverse square law and by an exponential quenching, then the hadrons would have no volume because the quarks would be on top of one another (the attractive nuclear force is much greater than the electromagnetic force).

It is well known you can’t isolate a quark from a hadron because the energy needed is more than that which would produce a new pair of quarks. So as you pull a pair of quarks apart, the force needed increases because the energy you are using is going into creating more matter. This is why the quark-quark force doesn’t obey the inverse square law. There is a pictorial discussion of this in a few books (I believe it is in “The Left Hand of Creation”, which says the heuristic explanation of why the strong nuclear force gets weaker when quark-quark distance decreases is to do with the interference between the cloud of virtual quarks and gluons surrounding each quark). Between nucleons, neutrons and protons, the strong force is mediated by pions and simply decreases with increasing distance by the inverse-square law and an exponential term something like exp(-x/d) where x is distance and d = hc/(2.Pi.E) from the uncertainty principle.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick found a 7% increase in the strength of Coulomb’s/Gauss’ force law when colliding electrons at an energy of about 80 GeV. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at about 80 GeV. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a single force at very high energy.

The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism, but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of the potential energy for the various force gauge bosons. If one force (electromagnetism) increases, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made about, the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn’t needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.

At low energies, the experimentally determined strong nuclear force strength is alpha = 1 (which is about 137 times the Coulomb law), but it falls to alpha = 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
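
As a foretaste of the conversion described in the next paragraph, here is a sketch (Python) that maps the strong-coupling data points just quoted from collision energy to closest-approach distance, using the same kinetic-energy-equals-Coulomb-potential-energy conversion as earlier (taking q = e for simplicity, which ignores the running of the observable charge):

```python
import math

e, eps0 = 1.602e-19, 8.854e-12
k = 1.0 / (4 * math.pi * eps0)

# (collision energy in GeV, measured strong coupling alpha_s), values as quoted above
alpha_s_data = [(2.0, 0.35), (7.0, 0.2), (200.0, 0.1)]

for energy_GeV, alpha_s in alpha_s_data:
    r = k * e**2 / (energy_GeV * 1.0e9 * e)   # closest-approach distance in metres
    print("E = %5.0f GeV -> r = %.1e m, alpha_s = %.2f" % (energy_GeV, r, alpha_s))
```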

In the next post on this blog, I will present (using the method outlined in this post) a full conversion of the usual force-strength-versus-collision-energy curves for the Standard Model forces into force-strength-versus-distance-from-particle-core curves, which will elucidate the nature of the strong nuclear force and electroweak symmetry breaking on the basis that these result from shielding of gauge boson radiation by charge polarization. I will show the correct mathematical model for charge polarization as a shielding effect, and show by energy conservation of gauge bosons that as the shielding due to charge polarization depletes electromagnetism, this energy is transformed into the short-ranged energy of nuclear forces. The quantitative model will unify all forces without stringy supersymmetry.

Gravitational mechanism: http://electrogravity.blogspot.com/2006/07/observable-radial-expansion-around-us.html.  For recent updates and further explanation of relationship between gravity and electromagnetism see comments of http://electrogravity.blogspot.com/2006/08/updates-1-peter-woits-not-even-wrong.html and see http://feynman137.tripod.com/.  I am producing a new properly organised paper which explains all of the problems and solutions methodically, with many new results and more lucid proofs, improvements and clarifications of older results.  This new paper will be published on this wordpress blog within a week.

5 thoughts on “Electroweak symmetry breaking and strong nuclear force”

  1. Related comments about vacuum polarization and gauge bosons:

    http://en.wikipedia.org/wiki/Talk:Ivor_Catt#Electron_plasma_waves

    Kevin, Maxwell’s displacement current in a vacuum is disproved by the proof that renormalization in quantum electrodynamics predicts the Lamb shift and magnetic moment of leptons. Maxwell says displacement current i = dD/dt where D is displacement, which is due to a polarization of charge in the vacuum according to Maxwell. The error was found after Dirac’s equation was corrected for vacuum charge effects. If Maxwell’s theory were right, any electric charging, even to an infinitesimally small potential difference (voltage), would result from vacuum charge polarization. Quantum electrodynamics proves that if this were the case, the vacuum would polarize around each real charge so as to entirely cancel out all real charges. This is absurd. Quantum electrodynamics proves that no charge polarization is possible in the vacuum below the lower energy cutoff in the corrected (renormalized) electron charge formula, with this lower cutoff for polarization corresponding to an electric field strength of 6.9 x 10^20 volts/metre [7]. This is the electric field strength by Gauss’ law at a distance of 1.4 x 10^-15 metre from the middle of an electron, which is the maximum range to which vacuum polarization can occur [8]. Therefore, Maxwell’s displacement current, since it relies on vacuum polarization ideas for weaker electric fields, is false. My post which you refer to [ http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html ] is concerned mainly with replacing Maxwell’s false theory with a corrected theory [9]. Your distinction between Maxwell’s prediction of displacement current due to vacuum charge polarization, and the resulting magnetic field from the motion of vacuum charges during their polarization, is nice yet superfluous. There is no polarization of the vacuum and hence no displacement current if the electric field strength is below 6.9 x 10^20 volts/metre [10]. Nigel 172.206.104.113 15:31, 1 September 2006 (UTC)

    http://en.wikipedia.org/wiki/Talk:Ivor_Catt#The_July_chapters

    Kevin, the point is that electric current is the result of electromagnetic fields. If you send logic signals of same amplitude in opposite directions along the same transmission line, there is no electric current while they overlap, there is just energy current for the duration of the overlap. Dr Walton did the research on this and proved it theoretically as well as by experiment. It is vital. See page http://www.ivorcatt.com/1_3.htm top illustrations: while two opposite-travelling logic pulses overlap, if they have similar amplitude they cancel each other’s magnetic fields (because the B-fields have opposite curls), and during overlap there is no variation in voltage along the line, so the field E = dV/dx = 0/dx = 0. Hence there is no electric [electron drift] current, there is only Heaviside-Poynting energy currents going in both directions creating a detectable net voltage (although no gradient in that voltage along the overlap region since dV/dx = 0), and nothing else. You are the one who leaves out the forces acting on the electrons; I keep telling you that the Standard Model says those forces are gauge bosons which constitute the field, and they go at light velocity. Nigel 172.206.104.113 15:14, 1 September 2006 (UTC)

  2. Copy of off-topic comment to Peter Woit’s blog:

    http://www.math.columbia.edu/~woit/wordpress/?p=456#comment-15827

    nc Says:

    September 10th, 2006 at 5:33 pm

    Can I say I feel that the major tragedy of string is that its promotion sets out to dismiss alternatives without studying them first.

    Lunsford’s unification of electromagnetism and gravity yields cosmological constant of zero, which is entirely consistent with the Yang-Mills Standard Model gauge boson radiation: redshifted gauge bosons (for long range gravitation) mean gravity coupling strength is weakened over large ranges.

    Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why the expansion doesn’t get slowed down by gravity at immense distances. (What part of this can’t people grasp?)

    Hence no dark energy, hence cosmological constant = 0. This is what Lunsford’s paper predicts. Sorry for being off-topic, but this argument should be taken seriously.

  3. Copy of on-topic comment to Professor Michio Kaku’s blog:

    http://blog.myspace.com/index.cfm?fuseaction=blog.view&friendID=92880544&blogID=166112485&MyToken=0995dec9-ae9e-4fec-afa0-8af8adcc14fd

    Dear Professor Michio Kaku,

    I would like to point out that Pauli’s prediction of the neutrino came from an experimental anomaly concerning energy conservation in beta decay; beta particles carry off less energy than is lost during the transformation. This led to two theories. Heisenberg claimed that the explanation is that the law of conservation of energy is wrong and only applies statistically, on average, when considering subatomic phenomena.

    However, Heisenberg made no definite statements about the world, so this line of reasoning went nowhere. Pauli was able to make progress by postulating the neutrino, which allows the energy and spin of the neutrino to be predicted by conservation principles. Even though he thought it unlikely to be tested and confirmed, it made more definite statements about the real world, and was consistent with more experimentally determined facts than Heisenberg’s suggestion (in 1956 the neutrino was confirmed thanks to Fermi’s invention of the nuclear reactor, which emits vast numbers of antineutrinos accompanying the beta decay of fission products; a happy coincidence being that Fermi was not only a practical experimenter but also the theoretical physicist who put forward the original theory of beta decay).

    I think that Dr Woit’s argument is simple: string theory has been around for long enough for people to prove that no hard, definite predictions are to be obtained from it.

    Therefore, string theory is forever to be vague. This is due to the nature of the 6 dimensional Calabi-Yau manifold, which has so many variable parameters that it creates a landscape of 10^350 or more ground states to the universe, and the Calabi-Yau manifold is required for supersymmetry in string theory. You can’t hope to make definite predictions with a theory containing a 6-dimensional manifold of unknown state. The alternative idea within string theory, that the observable universe we see is just a brane on a massive Calabi-Yau manifold, is equally vague and non-specific.

    The tests you list are all hopeless. They are not hard predictions: you don’t predict anything definite, and these experiments can neither confirm nor refute string theory, whatever the experiments reveal. So it is just ad hoc waffle, with all due respect.

    There are several other options than the mainstream string theory, and you discuss none of them. Smolin has worked on loop quantum gravity and obtains a link between the field equation of general relativity and the path integrals of quantum field theory (general relativity without a metric is represented by spin networks graphs, and the path integral is the sum of the graphs over all interactions), Woit has reproduced the Standard Model particle spectrum using representation theory in 4-dimensions http://arxiv.org/abs/hep-th/0206135, Tony Smith has been censored off arXiv for continued development of 26-dimensional (bosonic) string theory which does not involve the landscape problem of the mainstream 10 dimensional supersymmetric M-theory http://www.valdostamuseum.org/hamsmith/stringbraneStdModel.html, and Lunsford has been censored off arXiv for a 6-dimensional unification of electrodynamics and gravity: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    Lunsford had his paper first published in Int. J. Theor. Phys. 43 (2004) no. 1, pp. 161-177, and it shows how vicious string theory is that arXiv censored it. It makes definite predictions and may be compatible at some level of abstraction with Woit’s ideas for deriving the Standard Model. Lunsford’s 3 extra dimensions are attributed to coordinated matter. Physically, the contraction of matter due to relativity (motion or gravity cause contractions) is a local effect on a dimension being used to measure the matter. The cosmological dimensions continue expanding regardless of the fact that the contraction in general relativity contracts the Earth’s radius by 1.5 millimetres. So really, Lunsford’s extra dimensions are describing local matter, whereas the 3 other dimensions describe the ever expanding universe.

    Lunsford begins by showing the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate: ‘… We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. … Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible – no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable … It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement …’

    He shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism: ‘One striking feature of these equations … is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behavior. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so this theory explains why ordinary general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularized.’

    A major important prediction Lunsford makes is that the unification shows the cosmological constant is zero. This abstract prediction is entirely consistent with the Yang-Mills Standard Model gauge boson radiation: redshifted gauge bosons (for long range gravitation) mean gravity coupling strength is weakened over large ranges.

    Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why the expansion doesn’t get slowed down by gravity at immense distances. Professor Philip Anderson puts it clearly when he says that the supernova results showing no slowing don’t prove a dark energy is countering gravity, because the answer could equally be that there is simply no cosmological-range gravity (due to weakening of gauge boson radiation by expansion caused redshift, which is trivial or absent locally, say in the solar system and in the galaxy).

    ‘the flat universe is just not decelerating, it isn’t really accelerating’ – Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

    This is why the expansion doesn’t get slowed down by gravity at immense distances. (What part of this can’t people grasp?)

    Hence no dark energy, hence cosmological constant = 0. This is what Lunsford’s paper predicts. This argument should be taken seriously. Can I say I feel that the major tragedy of string is that its promotion sets out to dismiss alternatives without studying them first.

    I wish you well in your string theory work, and hope you can correct all the errors and misconceptions in the New Scientist article before it goes to press.

    Warmest regards,

    Nigel Cook

    Posted by Nigel on Monday, September 11, 2006 at 3:12 AM

  4. Pingback: Force Gauge |
  5. Nice job, have you considered the Dehmelt triplet electron and positron, recent positronium lifetimes?

    Higgs particle might be a triplet too, connected to all other particles in the universe by strings.

    Maybe not intuitive at first, but as string stretches out, the force longitudinally follows inverse diameter square law as the distance for Newton gravity.

    Hans Dehmelt (circa 1989) the Nobel Prize winner conjectured such, but transferred to ancient diets. He’s at Univ of Washington now.

    Best, rmuldavin
