The correct unification scheme

The top post here gives a discussion of the problem of unifying gravity and the standard model forces: gauge boson radiation is exchanged between all charges in the universe, while electromagnetic forces only result in particular situations (dissimilar or similar charges).  As discussed below, gravitational exchange radiation interacts indirectly with electric charges, via some vacuum field particles which become associated with electric charges.  [This has nothing to do with the renormalization problem in speculative (string theory) quantum gravity, which predicts nothing.  Firstly, this scheme does make predictions of particle masses and of gravity and cosmology.  Secondly, renormalization is accounted for by vacuum polarization shielding electric charge.  The mass changes in the same way, since the field which causes mass is coupled to the charge by the already renormalized (shielded) electric charge.]

The whole idea that gravity is a regular quantum field theory, which causes pair production if the field is strong enough, is totally speculative and there is not the slightest evidence for it.  By contrast, the pairs produced by an electric field above the IR cutoff, corresponding to about 10^18 v/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick's experimental work on polarized vacuum shielding of the core electric charge, published in PRL in 1997.  Koltick et al. found that the observable electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (the observable electric charge is independent of distance only beyond about 1 fm from an electron; it increases as you get closer to the core of the electron, because there is less polarized dielectric between you and the electron core, so less of the core's field gets cancelled by the intervening dielectric).
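As a purely illustrative aside, the ~10^18 v/m pair-production threshold quoted above can be checked against the standard Schwinger critical field formula, E_c = m_e^2 c^3/(e*hbar). Here is a minimal Python sketch of that arithmetic (standard constants only, nothing model-specific assumed):

```python
# Illustrative sketch: Schwinger critical field for e+e- pair production,
# E_c = m_e^2 * c^3 / (e * hbar), quoted in the text as ~10^18 V/m.
m_e  = 9.109e-31      # electron mass, kg
c    = 2.998e8        # speed of light, m/s
e    = 1.602e-19      # elementary charge, C
hbar = 1.0546e-34     # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field ~ {E_c:.2e} V/m")   # ~1.3e18 V/m
```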

There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons).  How could gravitational charge be renormalized?  There is no mechanism for pair production whereby the pairs would become polarized in a gravitational field.  For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized.  If both are displaced in the same direction by the field, they aren't polarized.  So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Wells's Cavorite.

There is no evidence for this.  Actually, in quantum electrodynamics both electric charge and mass are renormalized, but only the renormalization of electric charge is explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value.  However, this is not a problem.  The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude).  Therefore mass renormalization is purely a consequence of electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.

Finally, supersymmetry is totally flawed.  What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification.  You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).

The energy sapped from the gauge boson mediated field of electromagnetism is being used.  It’s being used to create pairs of charges, which get polarized and shield the field.  This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory.  Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.

Now take the situation where you put N electrons close together, so that their cores are very nearby.  What will happen is that the surrounding vacuum polarization shells of the electrons will overlap.  The electric field is N times stronger, so pair production and vacuum polarization are N times stronger.  So the shielding of the polarized vacuum is N times stronger!  This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron.  Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!

This has three remarkable consequences.  First, an observer at a long distance (>1 fm), who knows from high energy scattering that there are N charges present in the core, will see only 1 unit of charge at low energy.  Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.

Second, the Pauli exclusion principle prevents two fermions from sharing the same set of quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (usually, at low pressure, it is the spin quantum number which changes, so that adjacent electrons in an atom have opposite spins relative to one another; Dirac's theory implies a strong association of intrinsic spin with magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials).  If you extend the Pauli exclusion principle, you could allow particles to acquire short-range nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field, which produces a lot of pair production, allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.

Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do.  Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.

The right-handed electron has a weak hypercharge of -2.  The left-handed electron has a weak hypercharge of -1.  The left-handed downquark (with observable low energy electric charge of -1/3) has a weak hypercharge of +1/3, while the right-handed downquark has a weak hypercharge of -2/3.

It’s totally obvious what’s happening here.  What you need to focus on is the hadron (meson or baryon), not the individual quarks.  The quarks are real, but their electric charges as implied from low energy physics are fictitious for the purpose of understanding an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks).  The shielded electromagnetic charge energy is used in the weak and strong nuclear fields, and is being shared between them.  It all comes from the electromagnetic field.  Supersymmetry is false because at high energy, where you see through the vacuum, you arrive at the unshielded electric charge of the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces.  Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric).  This ties in with representation theory for particle physics, whereby symmetry transformation principles relate all particles and fields (the conservation of gauge boson energy and the exclusion principle being the dynamic processes behind the relationship of a lepton and a quark; it’s a symmetry transformation, physically caused by quark confinement as explained above), and it makes predictions.

It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength.  This is done when electric field energy is stored in a capacitor.  In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose.  See page 70 of http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy.  (The collision energy is easily translated into distances from the Coulomb scattering law for the closest approach of two electrons in a head on collision, although at higher energy collisions things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge.  The assumption of perfectly elastic Coulomb scattering will also need modification leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.)  The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces.  This allows predictions and more checks.  It’s totally tied down to hard facts, anyway.  If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields, and the vacuum pair production and polarization phenomena, so something will be learned either way.
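To make the sort of estimate described above concrete, here is a rough Python sketch (my own illustration, using the simple non-relativistic Coulomb closest-approach approximation and the low-energy electronic charge, both of which the paragraph above notes are only approximations):

```python
import math

# Standard constants
eps0 = 8.854e-12          # permittivity of free space, F/m
e    = 1.602e-19          # elementary charge, C
k    = 1.0 / (4 * math.pi * eps0)

def energy_density(E_field):
    """Electric field energy density, u = (1/2) * eps0 * E^2, in J/m^3."""
    return 0.5 * eps0 * E_field**2

def closest_approach_m(collision_energy_eV):
    """Head-on e-e closest approach, where the total collision energy equals
    the Coulomb potential energy k*e^2/r (non-relativistic, perfectly elastic)."""
    return k * e / collision_energy_eV

# Illustrative numbers only:
print(energy_density(1.3e18))            # ~7.5e24 J/m^3 at the pair-production threshold field
print(closest_approach_m(2 * 0.511e6))   # ~1.4e-15 m, roughly the ~1 fm IR-cutoff distance
```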

To give an example from https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron.  Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 – 1 = 136.036 times the electric charge energy of the electron experienced at large distances.  This figure is the reason why the short ranged strong nuclear force is so much stronger than electromagnetism.
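For reference, the 137.036 factor quoted above is just 1/alpha, and the 136.036 figure follows from it; a quick arithmetic check (my own, standard constants only):

```python
import math

eps0, e, hbar, c = 8.854e-12, 1.602e-19, 1.0546e-34, 2.998e8
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(1 / alpha)       # ~137 (137.036 with exact constants): the claimed bare/shielded charge ratio
print(1 / alpha - 1)   # ~136: the claimed ratio of shielded-away field energy to observed charge energy
```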

16 thoughts on “The correct unification scheme”

  1. Copy of a comment to

    http://asymptotia.com/2007/03/13/more-scenes-from-the-storm-in-a-teacup-vii/#comment-33687

    (Professor Jacques Distler dismissed the errors in string theory on the false basis that the standard model of particle physics is similar!)

    Dear Jacques,

    Comparing and conflating string theory with the standard model of particle physics, and making ad hominem attacks on people for being “lay readers”, doesn’t detract from the failure of supersymmetry to predict anything correctly. It’s still an error, whether the standard model makes it as well or not, as the case may be. You seem to have a different definition of the standard model, which is more like a speculative theory than an empirical model of particle physics.

    The standard model isn’t claimed to be the final theory. String is. The standard model is extremely well based on empirical observations and makes checked predictions. String doesn’t. That’s why Smolin and Woit are more favourable to the standard model. String theory, if it is of any use, should sort out any problems with the standard model. This is why the errors of string theory are so alarming. It is supposed to theoretically sort things out, unlike the standard model, which is an empirically based model, not a claimed final theory of unification.

  2. Professor Distler then made another comment which mentioned the suggestion that I had been misled by Dr Woit’s book. Copy of my response:

    http://asymptotia.com/2007/03/13/more-scenes-from-the-storm-in-a-teacup-vii/#comment-33709

    Nigel
    Mar 17th, 2007 at 10:30 am

    Dear Jacques,

    People don’t come away from reading the book with the impression that Dr Woit is claiming the CC is a unique problem of supersymmetry. It is made perfectly clear that this is a problem of supersymmetry which arises when you try to use supersymmetry to calculate the vacuum energy.

    A student who answers one of the questions on a paper and gets it wrong, derives no excuse from pointing to another who achieved 99%, despite happening to get the same single question wrong. Any assessment by comparison needs to take account of successes, not just errors. In one case the single error marks failure, in the other it’s trivial.

  3. Copy of fast-comment to http://motls.blogspot.com/2007/03/brian-greene-vs-lawrence-krauss.html in case it is deleted by an accident of Dr Lubos Motl’s mouse:

    “Well, if a reader wants to meet a famous physicist, just write a silly book against his field and all the gates will open. ;-)” – Lubos

    It helps in the case of meeting Brian Greene perhaps that you are the author of “The Physics of Star-Trek” or something else related to Hollywood 😉 and it also helps if you retract the claims made in your “Hiding in the Mirror” book when face to face with string theorists.

    I.e., it helps if you hide in the mirror yourself when string theorists object to your book, instead of having a showdown.

    What’s sick about this is that some people claim to be “string theorists” when they don’t have a proper string or a proper theory.

    String theory of some sort might be useful as a basis for “particles”, seeing that the options like singularities or little solid balls have serious problems.

    However, why does the mainstream use extra-dimensional supersymmetry to force unification at high energy? The expense of supersymmetry seems five-fold:

    (1) It requires unobserved supersymmetric partners, and doesn’t predict their energies or anything else that is a checkable prediction.

    (2) It assumes that there is unification at high energy. Why? Obviously a lot of electric field energy is being shielded by the polarized vacuum near the particle core. That shielded electromagnetic energy goes into short ranged virtual particle loops which will include gauge bosons (W+/-, Z, etc.). In this case, there’s no high unification. At really high energy (small distance from particle core), the electromagnetic charge approaches its high bare core value, and there is less shielding between core and observer by the vacuum so there is less effective weak and strong nuclear charge, and those charges fall toward zero (because they’re powered by the energy shielded from the electromagnetic field by the polarized vacuum). This gets rid of the high energy unification idea altogether.

    (3) Supersymmetry requires 10 dimensions and the rolling up of 6 of those dimensions into the Calabi-Yau manifold creates the complexity of string resonances that causes the landscape of 10^500 versions of the standard model, preventing the prediction of particle physics.

    (4) Supersymmetry using the measured weak SU(2) and electromagnetic U(1) forces, predicts the SU(3) force incorrectly high by 10-15%.

    (5) Supersymmetry when applied to try to solve the cosmological constant problem, gives a useless answer, at least 10^55 times too high.

    The real check on the existence of a religion is the clinging on to physically useless orthodoxy.

  4. Copy of a comment:

    http://dorigo.wordpress.com/2007/03/18/a-fair-account-of-the-matter-for-once/

    6. nc – March 20, 2007

    Can I just clarify what the Higgs is (ignoring the complexity of needing 5 or more Higgs bosons if supersymmetry of some type is correct)? I know it is supposed to be a spin 0 boson that gives mass to everything. But Einstein’s equivalence principle between inertial and gravitational mass therefore implies that the Higgs boson interacts with whatever exchange radiation there is that causes gravity (spin-2 gravitons?).

    If that’s the case, then the simple physical picture is that you have Higgs bosons there in vacuum, exchanging gravitons with other Higgs bosons. Because, by pair-production, photons can be converted into massive fermions, there must be a Higgs field (like a Dirac sea) everywhere in space which can allow such particles to pop into existence when fermions with mass are created from photons.

    However, the Dirac sea doesn’t say the vacuum is full of pair production everywhere causing virtual particles to pop into existence. These loops of creation and annihilation can only occur in the intense fields near a charge. Pair production from gamma rays occurs where the gamma rays enter a strong field in a high Z-number nucleus.

    The IR cutoff used in renormalization indicates that no virtual particle loops are created below a collision energy of 0.511 MeV/particle, i.e., at distances beyond about 1 fm from Coulomb-scattering electrons, corresponding to an electric field strength of about 10^18 v/m or so. If virtual particles were everywhere, then no real charges would exist, because there would be no limit to the polarization of the vacuum (which would polarize just enough to entirely cancel out all real charges).

    Does this apply to the Higgs field? If the Higgs mass is say 150 GeV or whatever, then obviously Higgs bosons are not being created when an electron+positron pair is created from a gamma ray. It takes only about 1 MeV to do that, not the 300 GeV or whatever that would be required to form a pair of Higgs bosons.

    Or is the case really that Higgs bosons are virtual particles created from the vacuum at collision energies corresponding to their mass? Assuming a Higgs mass of 150 GeV and using Coulomb scattering to relate the distance of closest approach to the collision energy, Higgs bosons would be spontaneously created at a distance on the order of 10^{-21} m from the core of an electron.
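    (To illustrate that order-of-magnitude estimate, here is a minimal sketch using the same simple Coulomb closest-approach relation with the low-energy charge; non-relativistic and elastic, so indicative only:)

    ```python
    import math

    eps0, e = 8.854e-12, 1.602e-19
    k = 1 / (4 * math.pi * eps0)

    def closest_approach_m(collision_energy_eV):
        """Distance at which the Coulomb potential energy between two unit
        charges equals the collision energy (simple elastic estimate)."""
        return k * e / collision_energy_eV

    # Collision energy sufficient to make a pair of ~150 GeV bosons:
    print(closest_approach_m(2 * 150e9))   # ~5e-21 m, i.e. of order 10^{-21} m as quoted above
    ```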

    If this is the case, then the vacuum isn’t full of interacting Higgs bosons like the usual picture of an “aether” miring particles and giving them mass. Instead, the actual mechanism is that Higgs particles appear in some kind of annihilation-creation loops at a distance of 10^{-21} m and closer to a fermion, and the Higgs bosons themselves are “mired” by the exchange of radiation (gravitons) with the Higgs bosons at similar distances from other particles.

    This is clearly the correct picture of what is going on, if the equivalence principle is correct and if mass (provided by the Higgs boson) is the charge in quantum gravity.

    Professor Alfonso Rueda of California State University and Bernard Haisch have been arguing that radiation pressure from the quantum vacuum produces inertial mass (Physical Review A, v49, p678) and gravitational mass (because the presence of a mass warps the spacetime around it, encountering more photons on the side away from another nearby mass than on the side facing it, so the masses get pushed together – they have a paper proving this effect in principle in Annalen der Physik, v14, p479).

    http://www.calphysics.org/articles/newscientist.html

    http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php

    If Rueda and Haisch are correct in their “vacuum zero-point photon radiation pressure causes inertia and gravity” argument, then the problem is that they are using photon exchange for both electromagnetism and gravity, which is a real muddle because those forces are different in strength by a factor of 10^40 or so for unit charges. So either they’re totally wrong, or oversimplifying.

    Suppose they’re not wrong, but are oversimplifying by rejecting the Higgs boson while using the same gauge boson for electromagnetism and gravity.

    Suppose there is still a Higgs boson in their theory, and there are different kinds of gauge bosons for gravity and electromagnetism.

    Then, the gravity-causing exchange radiation is mediated between the Higgs bosons in the strong vacuum field near particles. The electromagnetism causing exchange radiation is mediated between the fermions etc. in the cores of the particles.

    My next question is: how do the Higgs bosons explain electroweak symmetry breaking? My understanding is that above the electroweak symmetry-breaking energy, there is an electroweak symmetry, with SU(2)xU(1) producing four zero-mass gauge bosons: W+, W-, Z, photon. Below that energy, the W+, W- and Z acquire great mass from the Higgs field.

    Because they acquire such mass at low energies (unlike the photon), they have a short range, which makes the weak force short ranged unlike electromagnetism, breaking the electroweak symmetry.

    The conventional idea is that very high energy W and Z bosons aren’t coupled to the Higgs field? The Higgs field still exists at extremely high energy, but W and Z bosons are unable to couple effectively to it?

    Normally in a “miring” medium, drag effects and miring increase with your kinetic energy, since the resistance force is proportional to the square of velocity, and drag effects become small at low speeds.

    So the Higgs miring effect is the opposite of a fluid like the air or water? It retards particles of low energy and low speed, but doesn’t mire particles of high energy and high speed?

    I’m wondering what the real mechanism is for why Z and W have mass at low speeds but not high speeds? It is also the opposite of special relativity, where mass increases with velocity.

    Have I got this all wrong? Maybe the Higgs field disappears above the electroweak symmetry breaking energy, and that explains why the masses of Z and W bosons disappear?

    *******************

    If nobody at the blog where I left the comment above confirms it, I will study the Higgs mechanism ideas a bit more and clarify it myself.

    The key question is whether the massless Z boson at high energy is just a photon or not.

    Perhaps the weak gauge bosons couple the mass causing Higgs bosons to particles like fermions.

    Normally exchanges of Z bosons between particles are referred to as “neutral currents” because they carry no net electric charge. (Well, if Z bosons are related to photons they probably transfer electric field energy, and all we mean by “electric field energy” is all we normally mean by “electric charge”, so a trapped electric field would be the same thing as an electric charge. A photon, or presumably a Z boson, does contain electromagnetic fields but has no electric charge simply because it contains as much negative electric field energy as positive electric field energy, so the two balance one another – which is different from complete cancellation.)

    Neutral currents between fermions and Higgs bosons could well be the way in which Higgs bosons are associated with fermions, giving the fermions their masses.

  5. Copy of a comment:

    http://kea-monad.blogspot.com/2007/03/censorship.html

    Louise has a new post showing that the person behind this latest attack is at Cornell, and he seems to be upset that Louise tried posting papers on arXiv. The previous time (last June) the critic was Dr Motl, and Dr Woit really fell out with him over his sexism, rudeness, etc.

    I wrote a post showing that Louise’s GM = tc^3, far from being completely unphysical, is really obtained simply:

    Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.

    Hence E = mc^2 = mMG/(ct)

    Cancelling and collecting terms,

    GM = tc^3

    So Louise’s formula is derivable.

    The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein’s principle of equivalence between inertial and gravitational mass in general relativity (GR), combined with the special relativity (SR) equivalence of mass and energy!

    (1) GR equivalence principle: inertial mass = gravitational mass.

    (2) SR equivalence principle: mass has an energy equivalent.

    (3) Combining (1) and (2):

    inertial mass-energy = gravitational mass-energy

    (4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

    The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.
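    (As a purely numerical illustration of what GM = tc^3 implies, here is a quick sketch; the age of the universe used is just an assumed round figure, and M is simply whatever mass the formula then gives:)

    ```python
    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8              # speed of light, m/s
    t = 13.8e9 * 3.156e7     # assumed age of the universe in seconds (~13.8 Gyr)

    M = t * c**3 / G         # mass implied by GM = t*c^3
    print(f"{M:.1e} kg")     # ~1.8e53 kg, the same order as rough estimates of the observable universe's mass
    ```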

    I wrote several follow up posts about this because as a result of my post, Dr Thomas S. Love of the Departments of Mathematics and Physics, California State University, emailed me a derivation of Kepler’s law based on the similar reasoning of equating the relevant energy equations! See for example

    https://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy/

    When I explained all the above in a blog discussion on alternatives to string theory, I think it was one of the discussions at Asymptotia run by Professor Clifford V. Johnson (who is the most friendly and reasonable of the string theorists, I think), Professor Jacques Distler of the University of Texas ridiculed it because it didn’t use tensor calculus or something irrelevant.

    I’m sympathetic with people who want alternatives to do an enormous amount and prove G = 8*Pi*T and SU(3)xSU(2)xU(1), but what these censors are doing is driving the early development of alternative ideas underground, minimising the number of people working on them, and generating a lot of needless heat and hostility. I’ve rarely seen a critic make a genuine scientific point, but whenever they do, these points are taken seriously and addressed carefully.

    It’s all in Machiavelli’s description of human politics:

    “… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly …”

    That’s the mechanism by which new ideas traditionally have to struggle against those who are happy with the hype of string theory and the lambda-CDM ad hoc cosmology.

    Some of the comments Clifford made recently on Asymptotia in trying to defend string theory by saying it is incomplete made me very upset. There is serious hypocrisy, and string theorists themselves just can’t see it:

    “… this is an ongoing research program on a still rather underdeveloped theory. … how come you are willing to pre-suppose the outcome and discard all the work that is going on to understand and better develop string theory in so many areas? This is especially puzzling since you mention that lack of understanding.
    How can you on the one hand claim that a theory is poorly understood, and then in the same breath condemn it as unworkable before it is understood?”
    Clifford

    They just can’t understand that mainstream string theory ideas that fail to make really checkable predictions can’t be defended like this, because mainstream pro-string censors (not Clifford, admittedly) go out of their way to attack alternatives like LQG for being incomplete, not to mention Louise’s theory.

    (BTW, sorry this is a such a long comment, I’ll copy it to my blog so you are free to delete, if it’s clutter.)

  6. Two ways to get GM = tc^3:

    (1)

    Consider why the big bang was able to happen, instead of the mass being locked by gravity into a black hole singularity and unable to expand!

    This question is traditionally answered (Prof. Susskind used this in an interview about his book) by the fact the universe simply had enough outward explosive or expansive force to counter the gravitational pull which would otherwise produce a black hole.

    In order to make this explanation work, the outward acting explosive energy of the big bang, E = Mc^2, had to either be equal to, or exceed, the energy of the inward acting gravitational force which was resisting expansion.

    This energy is the gravitational potential energy E = (M^2)G/R = (M^2)G/(ct).

    Hence the explosive energy of the big bang’s nuclear reactions, fusion, etc., E = Mc^2, had to be equal to or greater than E = (M^2)G/(ct):

    Mc^2 ~ (M^2)G/(ct)

    Hence

    MG ~ tc^3.

    That’s the first way, and perhaps the easiest to understand.

    (2)

    Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.

    Hence E = mc^2 = mMG/(ct)

    Cancelling and collecting terms,

    GM = tc^3

    So Louise’s formula is derivable.

    The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein’s principle of equivalence between inertial and gravitational mass in general relativity (GR), combined with the special relativity (SR) equivalence of mass and energy!

    (1) GR equivalence principle: inertial mass = gravitational mass.

    (2) SR equivalence principle: mass has an energy equivalent.

    (3) Combining (1) and (2):

    inertial mass-energy = gravitational mass-energy

    (4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

    The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.

  7. copy of a comment:

    http://www.math.columbia.edu/~woit/wordpress/?p=545#comment-24373

    anon. Says:

    April 10th, 2007 at 5:14 am

    Can I raise point about the SU(3)xSU(2)xU(1) empirical model?

    Why is unification always discussed in terms of collision energy, not distance? The string theorists plot the electromagnetic, weak and strong force coupling constants (i.e., charges, not forces) versus collision energy. This is meaningless unless it is stated what is being collided. I.e., if leptons are being collided, there should be no strong force plotted.

    Can’t anyone plot the effective charge versus distance from leptons and quarks separately? The electric charge rises as you approach a unit charge because pair production causes polarization that shields the core charge. What happens to the energy of the force field which gets attenuated? I.e., is the attenuation of one field (electromagnetic) by the polarized vacuum simply transferring some of the long range electromagnetic gauge boson energy into short ranged nuclear forces?

    I’m interested in comparing the polarized vacuum situation for leptons with that for hadrons and quarks. Has anyone quantitatively calculated how much electromagnetic energy is being shielded by the polarized vacuum between the IR and UV cutoffs around different particles (leptons and quarks)? The energy density of an electromagnetic field is easily calculated (half the permittivity times the square of the electric field strength, see http://physics.bu.edu/~duffy/PY106/EMWaves.html ) so it shouldn’t be impossible to calculate.
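    (A minimal sketch of the kind of calculation suggested here, using the classical unshielded point-charge field and purely illustrative cutoff radii, so the numbers are indicative only:)

    ```python
    import math

    eps0, e = 8.854e-12, 1.602e-19

    def shell_field_energy_J(r_inner, r_outer):
        """Classical field energy of a point charge e between two radii,
        from integrating u = (1/2)*eps0*E^2 over the spherical shell:
        E_shell = e^2/(8*pi*eps0) * (1/r_inner - 1/r_outer)."""
        return e**2 / (8 * math.pi * eps0) * (1/r_inner - 1/r_outer)

    # Made-up inner radius and the ~1 fm IR cutoff, just to show the scale:
    E_J = shell_field_energy_J(1e-18, 1e-15)
    print(E_J / 1.602e-13, "MeV")   # ~700 MeV for these illustrative radii
    ```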

    Does this explain the standard model? Is the quantitative reduction in the energy of the electromagnetic charge, due to polarized vacuum shielding of the charge, similar to the amount of energy needed to explain the weak and strong short range forces?

    Since the polarization effect (shielding electric charge) is a feedback effect (polarization being caused by pair production due to the strong electric field strength in the vacuum near a charge), evidently if you could somehow force 3 electrons into a small space (some extra quantum numbers/charges would appear to avoid violating the exclusion principle) the polarization would be 3 times stronger than normal, causing 3 times as much shielding, so the charge per ‘electron’ seen from beyond the IR cutoff would be e/3.

    Fig 7.1 in N.E.W. shows that, as for the respective electric charges, the weak hypercharge for a right-handed electron, -2, is also simply 3 times bigger than that of a right-handed downquark, -2/3.

    This should make checkable predictions: you should be able, for instance, to predict the SU(3) force coupling constants entirely from the known attenuation of SU(2)xU(1) force as a function of distance from the particle core. This is a more solid idea than SUSY.

    ****************************

    For a description of weak hypercharge, see Dr Woit’s book Not Even Wrong. There is a bit about this at http://en.wikipedia.org/wiki/Weak_hypercharge

  8. copy of a comment:

    http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255674

    nigel on Apr 29th, 2007 at 4:27 pm

    “The main motivation for introducing SUSY is that it provides a natural resolution to the gauge hierarchy problem. As an added bonus, one gets gauge coupling unification and has a natural dark matter candidate. Plus, if you make SUSY local you get supergravity. These are all very good reasons why we expect SUSY to be a feature of nature, besides mathematical beauty.

    “Regarding your questions about vacuum polarization, this is in fact what causes the gauge couplings to run with energy. Contrary to your idea, the electroweak interactions are a result of local gauge invariance…” – V

    The standard model is a model for forces, not a cause or mechanism of them. I’ve gone into this mechanism for what supplies the energy for the different forces in detail elsewhere (e.g. here & here).

    Notice that when you integrate the electric field energy of an electron over the volume from radius R to infinity, you have to make R the classical electron radius of 2.8 fm in order that the result corresponds to the rest mass energy of the electron. Since the electron is known to be way smaller than 2.8 fm, there’s something wrong with this classical picture.
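    (For reference, a quick sketch of that classical integral; note that the radius at which the external field energy equals 0.511 MeV differs by the usual factor of two from the textbook definition of the classical electron radius, so the numbers below are only meant to show the scale:)

    ```python
    import math

    eps0, e, m_e, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8
    rest_energy = m_e * c**2                          # 0.511 MeV in joules

    # Field energy outside radius R of a classical point charge: e^2/(8*pi*eps0*R).
    R   = e**2 / (8 * math.pi * eps0 * rest_energy)   # radius where that energy = m*c^2, ~1.4e-15 m
    r_e = e**2 / (4 * math.pi * eps0 * rest_energy)   # textbook classical electron radius, ~2.8e-15 m
    print(R, r_e)
    ```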

    The paradox is resolved because the major modification you get from quantum field theory is that the bare core charge of the electron is far higher than the observable electron charge at low energy. Hence, the energy of the electron is far greater than 0.511 MeV.

    In QED, not just the charge but also the rest mass of the electron is renormalized. I.e., the bare core values of electron charge and electron mass are higher than the observed values in low energy physics by a large factor.

    The rest mass we observe, and the corresponding equivalent energy E=mc^2, is low because of the physical association of mass with charge via the electric field, which is shielded by vacuum polarization. The virtual charge polarization mechanism for the variation of observable electric charge with energy doesn’t explain the renormalization of mass in the same way. Electric polarization requires a separation of positive and negative charges, which drift in opposite directions in an electric field, partly cancelling out that electric field as a result. The quantum field in which mass is the charge is gravity, and since nothing has ever been observed to fall upwards, it’s clear that the polarization mechanism which shields electric charge doesn’t occur separately for masses. Instead, mass is renormalized because it gets coupled to charges by the electric field, which is shielded by polarization. This mechanism, inferred from the success of the renormalization of mass and charge in QED, gives a clear approach to quantum gravity. It’s the sort of thing which in an ideal world like this one (well, string theorists have [an] idealistic picture, and they’re in charge of theoretical HEP) should be taken seriously, because it’s building on empirically confirmed facts, and it predicts masses.

  9. copy of a comment:

    http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/

    “Nigel,
    I appreciate your enthusiam for thinking about these problems. However, it seems clear that you haven’t had any formal education on the subjects. The bare mass and charges of the quarks and leptons are actually indeterminate at the level of quantum field theory. When they are calculated, you get an infinities. What is done in renormalization is to simply replace the bare mass and charges with the finite measured values.” – V

    No, the bare mass and charge are not the same as measured values, they’re higher than the observed mass and charge. I’ll just explain how this works at a simple level for you so you’ll grasp it:

    Pair production occurs in the vacuum where the electric field strength exceeds a threshold of ~ 1.3*10^18 v/m; see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040

    These pairs shield the electric charge: the virtual positrons are attracted and on average get slightly closer to the electron’s core than the virtual electrons, which are repelled.

    The electric field vector between the virtual electrons and the virtual positrons is radial, but it points inwards towards the core of the electron, unlike the electric field from the electron’s core, which has a vector pointing radially outward. The displacement of virtual fermions is the electric polarization of the vacuum which shields the electric charge of the electron’s core.

    It’s totally incorrect and misleading of you to say that the exact amount of vacuum polarization can’t be calculated. It can, because it’s limited to a shell of finite thickness between the UV cutoff (close to the bare core, where the size is too small for vacuum loops to get polarized significantly) and the IR cutoff (the lower limit due to the pair production threshold electric field strength).

    The uncertainty principle gives the range of virtual particles which have energy E: the range is r ~ h bar*c/E. So E ~ h bar*c/r. Work energy E is equal to the force multiplied by the distance moved in the direction of the force, E = F*r. Hence F = E/r = h bar*c/r^2. Notice the inverse square law arising automatically. We ignore vacuum polarization shielding here, so this is the bare core quantum force.

    Now, compare the magnitude of this quantum F = h bar*c/r^2 (high energy, ignoring vacuum polarization shielding) to Coulomb’s empirical law for electric force between electrons (low energy), and you find the bare core of an electron has a charge e/alpha ~137*e, where e is observed electric charge at low energy. So it can be calculated, and agrees with expectations:

    ‘… infinities [due to ignoring cutoffs in vacuum pair production polarization phenomena which shields the charge of a particle core], while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

    ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’/m and e’/e would be of order alpha ~ 1/137.’

    – M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, p76.
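    (A quick numerical version of the comparison made above between F = h bar*c/r^2 and Coulomb’s law; the r^2 cancels, so it is just a ratio of constants:)

    ```python
    import math

    eps0, e, hbar, c = 8.854e-12, 1.602e-19, 1.0546e-34, 2.998e8

    # Ratio of the "bare core" quantum force hbar*c/r^2 to the Coulomb force
    # e^2/(4*pi*eps0*r^2) between two unit charges; r^2 cancels out.
    ratio = (hbar * c) / (e**2 / (4 * math.pi * eps0))
    print(ratio)   # ~137, i.e. 1/alpha, the factor quoted above
    ```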

    I must say it is amazing how ignorant some people are about this, and it is vital to understanding QFT, because below the IR cutoff there’s no polarizable pair production in the vacuum, so beyond ~1 fm from a charge, spacetime is relatively classical. The spontaneous appearance of loops of charges being created and annihilated everywhere in the vacuum is discredited by renormalization. Quantum fields are entirely bosonic exchange radiation at field strengths below 10^18 v/m. You only get fermion pairs being produced at higher energy, and at distances smaller than ~1 fm.

  10. http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-256095

    nigel on Apr 30th, 2007 at 4:17 pm

    Niel B.,

    The field lines are radial so they diverge, which produces the inverse square law since the field strength is proportional to the number of field lines passing through a unit area, and spherical area is 4*Pi*r^2.

    The charge shielding is due to virtual particles created in a sphere of space with a radius of about 10^{-15} m around an electron, where the electric field is above Schwinger’s threshold for pair production, about 1.3*10^{18} volts/metre. For a good textbook explanation of this see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040

    Here’s direct experimental verification that the electron’s observable charge increases at collision energies which bring electrons into such close proximity that the pair production threshold is exceeded:

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    Koltick found a 7% increase in the strength of the Coulomb/Gauss force field law in electron collisions at an energy of 80 GeV or so. The coupling constant for electromagnetism is alpha (1/137.036) at low energies but increases by 7% to about 1/128.5 at 80 GeV or so.

    This is an increase in electric charge (i.e., an increase in the total number of electric field lines in Faraday’s picture), nothing whatsoever to do with the radial divergence of electric field lines. The electric charge increases not due to divergence of field lines, but due to some field lines being stopped by polarized pairs of fermions which cancel them out, as explained in comment 63. The coupling constant alpha corresponds to the observed electric charge at low energy. This increases at higher energy. Think of it like cloud cover. If you go up through the cloud in an aircraft, you get increased sunlight. This has nothing whatsoever to do with the inverse square law of radiation flux; instead it is caused by the cloud absorbing energy. My argument is that the electric charge energy that’s shielded causes the short ranged forces, since the loops give rise to massive W bosons, etc., which mediate short ranged nuclear forces.

    Going on to higher energy (through the cloud to the unshielded electromagnetic field), there won’t be any energy absorbed by the vacuum, because the distance is too small for pairs to polarize, hence there won’t be any short ranged nuclear forces. So by injecting the conservation of mass-energy into QED, you get an answer to the standard model unification problem: the electromagnetic coupling constant increases from alpha towards 1 as you approach distances so tiny from the electron core that there is simply no room for polarized virtual charges to shield the electric field. As a result, there are no nuclear forces beyond the short ranged UV cutoff, because there is no energy absorbed from the electromagnetic field by polarized pairs, which is what produces the massive loops of gauge bosons. (By contrast, supersymmetry is based on the false assumption that all forces have the same energy approaching the Planck scale. There’s no physics in it. It’s just speculation.)
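    (For comparison, the standard one-loop QED running of alpha, with only the electron loop included, already shows a rise of a few per cent by ~90 GeV; including all charged lepton and quark loops is what brings the conventional value to roughly 1/128-129. A quick sketch of that textbook formula, shown here just for the numbers, not as an endorsement of any particular mechanism:)

    ```python
    import math

    alpha0 = 1 / 137.036     # low-energy fine structure constant
    m_e    = 0.511e-3        # electron mass in GeV

    def alpha_one_loop(Q_GeV):
        """One-loop QED running of alpha with only the electron loop."""
        return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q_GeV**2 / m_e**2))

    print(1 / alpha_one_loop(91.0))   # ~134.5 (electron loop alone; all charged fermion loops give ~128-129)
    ```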

    **************

    Because the bare core of the electron has a charge of 137.036e, the total energy of the electron (including all the mass-energy in the short ranged massive loops, which polarize and shield the 137.036e core charge and associated mass down to the observed small charge e and observed small mass) is 137*0.511 = 70 MeV. Just as it seemed impossible to mainstream crackpots in 1905 that there was a large amount of unseen energy locked up in the atom, they also have problems understanding that renormalization means there’s a lot more energy in fundamental particles (tied up in the creation-annihilation loops) than that which is directly measurable.
