I’ve put a sketch of the fundamental forces as a function of distance here, and an article [not] illustrated with that sketch is at http://gaugeboson.blogspot.com/. **UPDATE (23 Feb 2007): this illustration is inaccurate in assuming unification.**

The GUT (grand unified theory) scale unification may itself be wrong. The Standard Model might not turn out to be incomplete in the sense of requiring supersymmetry: the QED electric charge rises as you get closer to an electron because there’s less polarized vacuum to shield the core charge. Thus, a lot of electromagnetic energy is absorbed in the vacuum above the IR cutoff, producing loops. It’s possible that the short ranged nuclear forces are powered by this energy absorbed by the vacuum loops.

In this case, energy from one force (electromagnetism) gets used indirectly to produce pions and other particles that mediate nuclear forces. This mechanism for sharing gauge boson energy between different forces in the Standard Model would get rid of supersymmetry, which is an attempt to get three lines to coincide numerically near the Planck scale. If the strong and weak forces are caused by energy absorbed when the polarized vacuum shields the electromagnetic force, then when you get to very high energy (the bare electric charge) there won’t be any loops because of the UV cutoff, so both weak and strong forces will fall off to zero. That’s why it’s dangerous to endlessly speculate about only one theory, based on guesswork extrapolations of the Standard Model which have no evidence to confirm them.

**The whole idea of unification is wrong, if the nuclear force gauge bosons are vacuum loop effects powered by attenuation of the electromagnetic charge due to vacuum polarization; see:**

Copy of a comment:

http://kea-monad.blogspot.com/2007/02/luscious-langlands-ii.html

Most of the maths of physics consists of applications of equations of motion which ultimately go back to empirical observations formulated into laws by Newton, supplemented by Maxwell, Fitzgerald-Lorentz, et al.

The mathematical model *follows* experience. It is only speculative in that it makes predictions as well as summarizing empirical observations. Where the predictions fall well outside the sphere of validity of the empirical observations which suggested the law or equation, then you have a prediction which is worth testing. (However, it may not be falsifiable even then, the error may be due to some missing factor or mechanism in the theory, not to the theory being totally wrong.)

Regarding supersymmetry, which is the example of a theory which makes no contact with the real world, Professor Jacques Distler gives an example of the problem in his review of Dine’s book *Supersymmetry and String Theory: Beyond the Standard Model*:

http://golem.ph.utexas.edu/~distler/blog/

“Another more minor example is his discussion of Grand Unification. He correctly notes that unification works better with supersymmetry than without it. To drive home the point, he presents non-supersymmetric Grand Unification in the maximally unflattering light (run α1, α2 up to the point where they unify, then run α3 down to the Z mass, where it is 7 orders of magnitude off). The naïve reader might be forgiven for wondering why anyone ever thought of non-supersymmetric Grand Unification in the first place.”

The idea behind supersymmetry is the issue of getting the electromagnetic, weak, and strong forces to unify at 10^16 GeV or whatever, near the Planck scale. Dine assumes that unification is a fact (it isn’t) and then shows that in the absence of supersymmetry, unification is incompatible with the Standard Model.

The problem is that the physical mechanism behind unification is closely related to the vacuum polarization phenomena which shield charges.

Polarization of pairs of virtual charges around a real charge partly shields the real charge, because the radial electric field of the polarized pair points the opposite way. (I.e., the electric field lines point inwards towards an electron. The electric field between virtual electron-positron pairs, which are polarized with virtual positrons closer to the real electron core than virtual electrons, produces an outwards radial electric field which cancels out part of the real electron’s field.)

So the variation in coupling constant (effective charge) for electric forces is due to this polarization phenomenon.

Now, what is happening to the energy of the field when it is shielded like this by polarization?

Energy is conserved! Why is the bare core charge of an electron or quark higher than the shielded value seen outside the polarized region (i.e., beyond 1 fm, the range corresponding to the IR cutoff energy)?

Clearly, the polarized vacuum shielding of the electric field is removing energy from the charge’s field.

That energy is being used to make the loops of virtual particles, some of which are responsible for other forces like the weak force.

This provides a physical mechanism for unification which deviates from the Standard Model (which does not include energy sharing between the different fields), but which does not require supersymmetry.

Unification appears to occur because, as you go to higher energy (distances nearer a particle), the electromagnetic force increases in strength (because there is less polarized vacuum intervening in the smaller distance to the particle core).

This increase in strength means, in turn, that less energy has been absorbed from the electromagnetic field by the smaller intervening region of vacuum to produce loops.

As a result, there are fewer pions in the vacuum, and the strong force coupling constant/charge (at extremely high energies) starts to fall. When the fall in charge with decreasing distance is balanced by the increase in force due to the geometric inverse square law, you have asymptotic freedom effects (obviously this involves gluon and other particles and is complex) for quarks.
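The fall of the strong coupling at higher energies described above is, qualitatively, what the standard one-loop QCD running formula gives. A quick numerical sketch (this is the textbook formula, not the energy-sharing mechanism proposed here, and the Λ value used is only a rough assumption):

```python
import math

def alpha_s(Q_GeV, n_f=5, Lambda_GeV=0.2):
    """One-loop QCD running coupling:
    alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_GeV**2 / Lambda_GeV**2))

# The strong coupling falls as the probe energy rises (asymptotic freedom):
for Q in (2.0, 10.0, 91.2):
    print(f"alpha_s({Q:6.1f} GeV) = {alpha_s(Q):.3f}")
```

The printed values fall monotonically with energy, which is the behaviour the text attributes to the declining energy absorbed by vacuum loops.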

Just to summarise: the electromagnetic energy absorbed by the polarized vacuum at short distances around a charge (out to IR cutoff at about 1 fm distance) is used to form virtual particle loops.

These short ranged loops consist of many different types of particles and produce strong and weak nuclear forces.

As you get close to the bare core charge, there is less polarized vacuum intervening between it and your approaching particle, so the electric charge increases. For example, the observable electric charge of an electron is found experimentally to be 7% higher at 90 GeV.
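The rise of the electric charge with collision energy can be sketched with the standard one-loop QED running formula. Note that electron loops alone only give roughly a 2% rise in the coupling by 90 GeV; the larger measured increase quoted above includes loops of all the charged fermion species:

```python
import math

ALPHA = 1 / 137.036   # low-energy fine-structure constant
M_E = 0.000511        # electron mass in GeV

def alpha_eff(Q_GeV):
    """One-loop QED running with electron loops only:
    alpha(Q) = alpha / (1 - (alpha/(3*pi)) * ln(Q^2 / m_e^2))."""
    return ALPHA / (1 - (ALPHA / (3 * math.pi)) * math.log(Q_GeV**2 / M_E**2))

ratio = alpha_eff(90.0) / ALPHA
print(f"alpha(90 GeV)/alpha(0) = {ratio:.4f}")  # roughly a 2% rise from electron loops
```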

The reduction in shielding means that less energy is being absorbed by the vacuum loops. Therefore, the strength of the nuclear forces starts to decline. At extremely high energy, there is – as in Wilson’s argument – no room physically for any loops (there are no loops beyond the upper energy cutoff, i.e. UV cutoff!), so there is no nuclear force beyond the UV cutoff.

What is missing from the Standard Model is therefore an energy accountancy for the shielded charge of the electron.

It is *easy* to calculate this: the electromagnetic field energy being used to create loops up to the 90 GeV scale, for example, is 7% of the energy of the electric field of an electron (because 7% of the electron’s charge is lost to vacuum loop creation and polarization below 90 GeV, as observed experimentally; I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424).

So this physical understanding should be investigated. Instead, the mainstream censors physics out and concentrates on a mathematical (non-mechanism) idea, supersymmetry.

Supersymmetry shows how all forces would have the same strength at 10^16 GeV.

This can’t be tested, but maybe it can be disproved theoretically as follows.

The energy of the loops of particles which are causing nuclear forces comes from the energy absorbed by the vacuum polarization phenomena.

As you get to higher energies, you get to smaller distances. Hence you end up at some UV cutoff, where there are no vacuum loops. Within this range, there is no attenuation of the electromagnetic field by vacuum loop polarization. Hence within the UV cutoff range, there is no vacuum energy available to create short ranged particle loops which mediate nuclear forces.

Thus, energy conservation predicts a lack of nuclear forces at what is traditionally considered to be “unification” energy.

So this would seem to discredit supersymmetry, whereby at “unification” energy you get all forces having the same strength. The problem is that the mechanism-based physics is ignored in favour of massive quantities of speculation about supersymmetry to “explain” a unification which is not observed.

***************************

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
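That first-loop (Schwinger) correction is easy to check numerically:

```python
import math

alpha = 1 / 137.036                  # fine-structure constant
schwinger = alpha / (2 * math.pi)    # first vacuum loop coupling correction
mu_ratio = 1 + schwinger             # magnetic moment in Bohr magnetons

print(f"mu/mu0 = 1 + alpha/(2*pi) = {mu_ratio:.5f}")  # 1.00116
```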

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Comment by nc — February 23, 2007 @ 11:19 am

Return to old (partly obsolete) discussion:

The text of the post at http://gaugeboson.blogspot.com/ is:

For electromagnetic charge, the relative strength is 1/137 for low energy collisions, below E = 0.511 MeV, the lower limit or so-called “infrared” cutoff. Putting this value of E into the formula gives the 10^-15 metre range of vacuum polarization around an electron. For distances within this radius but not too close (in fact for the range 0.511 MeV < E < 92,000 MeV, i.e. 92 GeV), see https://nige.wordpress.com/ for further details and links. In calculating the charges (coupling strengths) for fundamental forces as a function of distance as indicated above, for all distances closer than 10^-15 metre you need to take account of the charge increase in the formula for closest approach in Coulomb scattering, where the kinetic energy of the particle is converted entirely into electrostatic potential energy E = (electric charge^2)/(4.Pi.Permittivity.Distance). The electric charge in this formula is higher than the normal charge of the particle when you get within the polarization region, because the polarization shields the charge, and the less polarization there is between you and the particle core, the less the shielding of the charge.
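The closest-approach formula above is easy to invert numerically. A quick sketch, treating both charges as the normal (fully shielded) electron charge, i.e. ignoring the charge increase just described; at the 0.511 MeV infrared cutoff it gives a distance of about 2.8 fm, the same order as the 10^-15 metre polarization range quoted:

```python
# e^2/(4*pi*eps0) = alpha * hbar * c, approximately 1.44 MeV*fm
ALPHA_HBAR_C = 1.44

def closest_approach_fm(E_MeV):
    """Head-on Coulomb scatter of two unit charges: all kinetic energy E is
    converted to potential energy e^2/(4*pi*eps0*d), so d = (alpha*hbar*c)/E."""
    return ALPHA_HBAR_C / E_MeV

d = closest_approach_fm(0.511)   # at the IR cutoff energy
print(f"d = {d:.2f} fm")         # ~2.8 fm, i.e. of order 10^-15 m
```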

The two graphs above on the left hand side are the standard presentation, the sketch graph on the right hand side is a preliminary illustration of the same data plotted as a function of distance from particle core instead of collision energy. Obviously I can easily compute the full details quantitatively, but am worried about what criticism may result from the simple treatment detailed above whereby I am assuming head-on Coulomb scattering. I know from nuclear physics that the scattering may be far more complex and messy so the quantitative details may differ. For example, the treatment above assumes a perfectly elastic scatter, not inelastic scatter, and it deals with only one mechanism for scatter and one force being involved. If we are dealing with penetration of the vacuum polarization zone, the forces involved will not only be Coulomb electric scatter, but also weak and possibly strong nuclear forces, depending upon whether the particles we are scattering off one another are leptons like electrons (which don’t seem to participate in the strong nuclear force at all, at least at the maximum experimentally checkable energy of scatter to date!), or hadron constituents, quarks.

I think the stagnation in HEP (high energy physics) comes from ignoring the problem of plotting force strengths as a function of distance, as I’ve sketched above. Looking at the right hand side force unification graph, you can see that the strong nuclear force charge or coupling strength over a wide range of small distances (note distance axis is logarithmic) actually falls as the particle core is approached. This *offsets the inverse-square law, whereby for constant charge or constant coupling strength the force would increase as distance from core is reduced.* This offset means that over that range where the strong nuclear charge is falling as you get closer to the core, the actual force on quarks is not varying. This clearly is the physical cause of asymptotic freedom of quarks, when you consider that they are also subjected to electromagnetic forces. The very size of the proton is given by the range to which asymptotic freedom of quarks extends.

I’ve also pointed out that the variations of all these fundamental forces as a function of distance *clearly brings out the fact from conservation of energy that the gauge boson radiation which causes forces is getting shielded by vacuum polarization, so that the ‘shielded’ electromagnetic force gauge boson energy (being soaked up by the vacuum in the polarization zone) is being converted into the energy of nuclear force gauge bosons.*

*These are physical facts.* I can’t understand why other people don’t think physically about physics, preferring to steer clear of phenomenology and to remain in abstract mathematical territory, which they believe to be safer ground despite the failure of string theory to actually explain or predict anything real or useful for experiments or for actually understanding the Standard Model.

I’m writing a proper review paper (to replace http://feynman137.tripod.com/ and related pages) on quantum field theory for phenomenologists which will replace supersymmetry (string theory) with a proper dynamic vacuum model based entirely on well established empirical laws. I’m going to place all my draft calculations and analyses here as blog posts as I go, and then edit the review paper from the results. In the meantime, I’ve re-read Luis Alvarez Gaume and Miguel A. Vazquez-Mozo’s Introductory Lectures on Quantum Field Theory and find them a lot more lucid in the June 27 2006 revision than the earlier 2005 version. They have a section now explaining pair production in the vacuum and give a threshold electric field strength (page 85) of 1.3 x 10^16 V/cm which is on the order (approximately) of the electric field strength at 10^-15 m from an electron core, the limiting distance for vacuum polarization (see https://nige.wordpress.com/ , top post, for details).

The review paper focusses on the links between two approaches to quantum field theory. On one side of the coin, you have the particles in the three generations of the Standard Model, and on the other you have the forces. However, if you can model the forces you will understand the particles, which after all are totally characterised by which forces they interact via. If you can understand physically why pairs or triads of fundamental particles have fractional electric charges (as seen outside the polarized vacuum) and why they interact by strong nuclear interactions in addition to electroweak interactions, while single particles (which don’t share their polarized vacuum region with one or two other particles) have integer electric charges (seen at large distances) and don’t participate in the strong nuclear force, then that is the same thing as understanding the Standard Model because it will tell you physically the reason for the differing electric charges and for the different types of particle charges (strong nuclear force charge is called ‘color charge’, while the gravitational field charge is simply called ‘the inertial mass’).

I think part of the answer is already known at https://nige.wordpress.com/ and http://feynman137.tripod.com/, namely when you have three charge cores in close proximity (sharing the same overall vacuum polarization shell around all of them), the electric field energy creating the vacuum polarization field is three times stronger, so the polarization is three times greater, which means that the electric charge of each downquark is 1/3 that of the electron. Of course this is a very incomplete piece of physical logic, and leads to further questions where you have upquarks with 2/3 charge, and where you have pairs of quarks in mesons. But some of these have been answered: consider the neutron which has an overall electric charge of zero, where is the electric field energy being used? By conservation of electromagnetic field energy, the reduction in electric charge indicated by fractional charge values due to vacuum polarization shielding implies that the energy shielded is being used to bind the quarks (within the asymptotic freedom range) via the strong nuclear force. Neutrons and protons have zero or relatively low electric charge for their fundamental particle number because so much energy is being tied up in the strong nuclear binding force, ‘color force’.
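The heuristic above can be reduced to toy arithmetic. To be clear, this is only an illustration of the post’s speculative shielding argument, not standard physics; the "shielding multiplier" is simply the heuristic's assumption that three cores sharing one polarization shell triple the shielding:

```python
# Toy arithmetic for the heuristic above (speculative, not standard physics).
SINGLE_PARTICLE_CHARGE = 1.0   # electron charge magnitude, in units of e

cores_sharing_shell = 3        # e.g. three downquark-like cores in one shell
shielding_multiplier = cores_sharing_shell  # the heuristic's key assumption

# Observed (shielded) charge per core, in units of e:
observed_per_core = SINGLE_PARTICLE_CHARGE / shielding_multiplier
print(observed_per_core)  # 1/3, the downquark charge magnitude in units of e
```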

I think proper unification will require not mathematical speculation (supersymmetry = SUSY) but a polarization mechanism around charge cores as you suggest.

Basically your analysis suggests that all charges are electrons and positrons or similar, and the strength of the vacuum polarization (due to nearby proximity of charges in hadrons) produces the fractional charges of quarks.

The issue here is that you need to find how the strong nuclear force affects quarks (electron etc in very close proximity in pairs or triads) because of the polarization, but cannot affect electrons.

If you think about it, force unification of all forces at extremely high energy is exactly the same thing as force unification of all forces at very short distances.

However, very close to any real particle – even an electron core – the energy density of the electromagnetic field is sufficient to create some heavy particles.

Presumably the strong nuclear force occurs in hadrons because the fundamental charges (quarks) are so close together, which means that the overlapping polarization clouds of the vacuum created by their electromagnetic fields are particularly strong and result in massive short range gauge bosons which convey the strong nuclear force. It is extremely tempting to suggest that the strong nuclear force is just mediated by the virtual charges in the vacuum moving towards quarks of opposite electric charge as by normal attraction. Geometry alone would mean that between two protons you would find a predominance of negatively charged vacuum virtual charges, which would help overcome the repulsive force. This is presumably the mechanism behind Yukawa’s original strong nuclear force theory of the 1930s, whereby pions mediate the force between nucleons. However, the strong nuclear force between quarks is supposed to be mediated by gluons, which as you have explained on your home page, are necessitated – if they are a completely correct theory – by the Pauli exclusion principle which would forbid two downquarks in a nucleon from having the same state, etc. Some exotic particles seem to have three quarks in the same state. The only way to make this compatible with Pauli’s exclusion principle was to add an extra state (and therefore an extra quantum number) to the quarks in the theory of the strong nuclear force, called “colour charge”. Each quark in a nucleon has a different colour, red, blue and green (say), which is an abstract labelling for the mysterious way the strong nuclear force appears to operate. Different, unique gauge bosons are exchanged between the three quarks, so there should be 3 x 3 = 9 strong force (inter-quark, not inter-nucleon) gauge bosons called gluons. However, the adding scheme of the SU(3) strong nuclear force symmetry unitary group shows that there are 8 gluons, not 9. Perhaps there is something deeper or simpler behind this abstract model.
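The 9-versus-8 counting above can be spelled out explicitly. Naive counting pairs each colour with each anticolour; SU(3) then removes the single colourless (singlet) combination, leaving the 8 gluons of the adjoint representation:

```python
from itertools import product

colors = ["red", "green", "blue"]

# Naive counting: one gluon per colour-anticolour pair gives 3 x 3 = 9 states.
pairs = list(product(colors, colors))
assert len(pairs) == 9

# SU(3) removes the colour-singlet combination (r rbar + g gbar + b bbar),
# which couples to nothing coloured, leaving the 8 gluons of the adjoint rep.
n_gluons = len(pairs) - 1
print(n_gluons)  # 8
```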

Hi anon,

String theorists use half of Popper’s conception of a scientific theory: they guess a theory and then study the maths. If they were 100% Popperian, they would also make checkable predictions of a clarity that would be capable of falsifying the theory.

However, much as I respect Popper’s physical dynamics for the uncertainty principle as being vacuum charge scattering effects which introduce chaotic behaviour for things on a small size scale (electrons in an atom, etc.), his philosophy that theories are guessed and then checked is atypical. It is the “crackpot scientist” view, and is false in general, because anyone can make a guess that will cost a vast amount to check it.

What real physics is mainly about is taking facts (facts here include verified theories within their range of validity), and working out a theory which UNIFIES THOSE FACTS OR VERIFIED THEORIES (WITHIN THE RANGE OF VALIDITY), putting a useful mathematical structure on those facts, and which makes predictions.

This is NOT SPECULATION. Take the case of Archimedes’ proof of the law of buoyancy. It is still around in Heath’s translation of the “Works of Archimedes”, and is published in a fine volume of the otherwise dull set “Great Books of the Western World” (publisher: Encyclopedia Britannica).

Archimedes doesn’t work out the fluid pressure as a function of depth all around a ship or a floating cork. All he does is to employ logic plus facts to prove the law that a floating body displaces a volume of water with a mass equal to the mass of the floating body.

Archimedes does this by observing that at any depth below the water surface but directly under the ship, the water pressure (whatever it will be, Archimedes did not measure it and did not know it) will be logically similar to the water pressure at the same depth in the water but to one side of the ship.

Because of this, the total downward-acting weight of water ABOVE that depth in free water (to the side of the ship) is EXACTLY THE SAME as the total downward-acting weight of water ABOVE the point directly under the ship. Hence, WHATEVER MASS OF WATER IS VACATED BY THE PRESENCE OF THE SHIP ABOVE YOU AT THAT POINT, MUST BE EXACTLY COMPENSATED FOR BY THE WEIGHT OF THE SHIP.

QED. That is the brains to the proof that Archimedes gives for the law that a floating body displaces the same mass of water as its own mass.

Now Archimedes does NOT consider the dynamics of buoyancy in his proof. He has NO theory in the proof of how the equilibrium of pressure is set up when a ship is lowered into the sea. He only considers the steady state in place once the great water waves have died down and an equilibrium has been achieved.

Also, he doesn’t even offer a physical mechanism for what is keeping the floating object from sinking in his proof. Clearly the reason the floating body doesn’t sink is due to the fact that the pressure INCREASES WITH DEPTH IN WATER and this gives an upward force on the bottom of the ship, which keeps the ship from sinking further (provided that there aren’t any holes or leaks!).

Archimedes knew that water pressure increases with depth, but he didn’t have a simple relationship. The true relationship, strictly speaking, is complicated by the compressibility of water, but water is only slightly compressible so we can fairly accurately say that the pressure at depth x is equal to p = x.rho.g, where rho is the density of water and g is the acceleration due to gravity. Ignoring the compression of water (which only becomes large at the deepest places in the oceans), we can take the density rho to be about 1000 kg/m^3 and g = 9.81 m/s^2, so p ~ 10,000x pascals (with x in metres). Normal atmospheric pressure is 101,000 pascals, so water pressure p ~ 0.1x atmospheres. Hence, water pressure increases to 1 atmosphere (10 metric tons/square metre) at a depth of 10 metres.

This pressure means that steel ships that sink some distance (with the submerged hull deep under the water) float well. The deeper the hull is, the greater the upward pressure. The upward force or “UPTHRUST” is simply this water pressure at the depth multiplied by the area of the bottom of the ship. For the simple situation where you place a raft with a flat bottom in the sea, the upthrust is F = pA where p is water pressure p = x.rho.g and A is the area of the bottom of the raft. From this formula you can calculate the distance that the bottom of the raft will sink below the water surface when you set the raft on the water (once the water oscillations have died down and equilibrium is achieved):

F = mg = pA = x.rho.g.A

Hence x = mg/(rho.g.A) = m/(rho.A)

If this depth x is bigger than the thickness of the raft, then it will sink. If this distance x is smaller than the thickness of the raft, it will float (assuming it has no leaks).
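The two formulae above are easy to put into a short calculation. The raft’s mass and bottom area below are just hypothetical numbers for illustration:

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # acceleration due to gravity, m/s^2

def pressure_at_depth(x_m):
    """Hydrostatic pressure p = x*rho*g, ignoring water's compressibility."""
    return x_m * RHO_WATER * G

def raft_draft(mass_kg, bottom_area_m2):
    """Equilibrium draft from F = mg = pA = x*rho*g*A, i.e. x = m/(rho*A)."""
    return mass_kg / (RHO_WATER * bottom_area_m2)

# Hypothetical flat-bottomed raft: 200 kg resting on a 4 m^2 bottom
# sinks 5 cm into the water before floating in equilibrium.
x = raft_draft(200.0, 4.0)
print(f"draft = {x * 100:.1f} cm")
```

If the computed draft exceeds the raft’s thickness, the raft sinks, exactly as stated above.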

This calculation is entirely different from Archimedes’ law, but it supplements it and doesn’t disprove it.

Einstein’s special relativity is like Archimedes’ proof in the sense that it lacks any physics, dynamics and mechanism, and merely gets the right answer from a mathematical argument based on facts and logic.

Of course, special relativity is wrong in general because it doesn’t apply to accelerating motion (you need general relativity for accelerations) because it assumes velocity is constant (both of light and observer, which doesn’t occur in the real world, where people have to accelerate to start and stop moving, and where gravity causes velocity to change due to directional deflection). On the other hand, special relativity is convenient for supplying the empirically verified mathematical formulae of 1889-1904 by FitzGerald, Lorentz, Larmor, and Poincare in an economically small package, which is why it is so popular with mathematicians (particularly those who value abstraction more than fact).
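Those FitzGerald-Lorentz formulae are compact enough to verify in a few lines; a moving rod is contracted by the factor 1/gamma and its clock runs slow by gamma:

```python
import math

def gamma(v_over_c):
    """FitzGerald-Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# At v = 0.6c: gamma = 1.25, so lengths contract to 0.8 of the rest length
# and moving clocks run slow by the factor 1.25.
g = gamma(0.6)
print(f"gamma = {g:.2f}, contraction factor = {1 / g:.2f}")
```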

However, the point I’m making is that there are different ways to tackle a problem in physics, and the fact that there are different ways doesn’t necessarily mean that one way is better than the others for all purposes. Sometimes there are dualities where you can have different proofs of the same thing, each of which utilises different evidence but gets to the same result. Similarly, at one time all roads led to Rome. Where alternative treatments are particularly important is where they throw new light on a subject.

I think my formula above for the depth the bottom of a raft is submerged to is just as good as Archimedes’ proof of the law of buoyancy. I think Archimedes cooked up his proof after he had a bath in a full bath and noticed that the mass of water displaced over the edges was equal to his own mass. He then came up with an ad hoc proof of what he had seen with his own eyes in natural phenomena. The eureka moment came when he saw the water go over the edge. The mathematical proof came later, after the physical insight!

Copy of a comment:

http://scienceblogs.com/principles/2006/09/peter_woit_not_even_wrong.php

# 10 | nigel | September 13, 2006 09:21 PM

Chad,

You mentioned dark energy, which is supposedly due to an acceleration of the universe that overcomes the supposed long range gravitational slowing of distant receding masses by the mass nearer to us, but this is a speculative interpretation which can be disproved and replaced with a factual theory that already exists and is suppressed off arXiv by stringers:

‘…the flat universe is just not decelerating, it isn’t really accelerating…’ – Prof. Phil Anderson, Nobel Laureate, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

This is the key to the current error over dark energy. Now for the fact explaining the data without dark energy.

In terms of Yang-Mills Standard Model gauge boson radiation: the big bang induced redshift of gravity gauge bosons (gravitons) mediating long range gravitation means that the gravity coupling strength is weakened over large ranges.

I’m not talking inverse square law, which is valid for short range, but the value of gravity strength G. I mean that if gauge boson exchange does mediate gravity interactions between masses around the massive expanding universe, those gauge bosons (gravitons, whatever) will be redshifted in the sense of losing energy due to cosmic expansion over vast distances.

Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why gravity is weakened over vast cosmic distances, which in turn is why the big bang expansion doesn’t get slowed down by gravity at immense distances. (What part of this can’t people grasp?)
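The energy loss being invoked here is just the standard redshift relation E_obs = E_emit/(1 + z) that applies to photons; a minimal sketch (the sample redshift values are my own, illustrative only):

```python
# Energy of any redshifted quantum falls as 1/(1 + z), where z is the redshift.
# Sketch of the scale of the weakening invoked in the text, not a derivation
# of the gravity mechanism itself.

def received_energy(emitted_energy, z):
    """Energy observed after redshift z: E_obs = E_emit / (1 + z)."""
    return emitted_energy / (1.0 + z)

# Fraction of exchange-radiation energy surviving at various redshifts:
for z in (0.1, 1.0, 7.0):
    print(f"z = {z}: fraction of energy received = {received_energy(1.0, z):.3f}")
```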

Hence no dark energy, hence cosmological constant = 0.

D. R. Lunsford published a mathematical unification of gravity and electromagnetism which does exactly this, proving cosmological constant = 0. I wrote a comment about this on Prof. Kaku’s “MySpace” blog but then he deleted the whole post since his essay about Woit contained some mistakes, so my comment got deleted.

So, if you don’t mind, I’ll summarise the substance briefly here instead, so that the censored information is at least publicly available, hosted by a decent blog:

Lunsford has been censored off arXiv for a 6-dimensional unification of electrodynamics and gravity: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

Lunsford had his paper first published in Int. J. Theor. Phys. 43 (2004) no. 1, pp.161-177, and it shows how vicious string theory is that arXiv censored it. It makes definite predictions which may be compatible at some level of abstraction with Woit’s ideas for deriving the Standard Model. Lunsford’s 3 extra dimensions are attributed to coordinated matter. Physically, the contraction of matter due to relativity (motion and gravity both can cause contractions) is a local effect on a dimension being used to measure the matter. The cosmological dimensions continue expanding regardless of the fact that, for example, the gravity-caused contraction in general relativity shrinks the Earth’s radius by 1.5 millimetres. So really, Lunsford’s extra dimensions are describing local matter whereas the 3 other dimensions describe the ever expanding universe. Instead of putting one extra dimension into physics to account for time, you put 3 extra dimensions in so that you have 3 spacelike dimensions (for coordinated matter, such as rulers to measure contractable matter) and 3 expanding timelike dimensions (currently each on the order of 15,000 million light-years). (This is anyway how I understand it.)

Lunsford begins by showing the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. He proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate: ‘… We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. … Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible – no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable … It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement …’

He shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism: ‘One striking feature of these equations … is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behavior. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so this theory explains why ordinary general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularized.’

A major prediction Lunsford makes is that the unification shows the cosmological constant is zero. This abstract prediction is entirely consistent with the Yang-Mills Standard Model picture of gauge boson radiation: redshifted gauge bosons (for long-range gravitation) mean the gravity coupling strength is weakened over large ranges.

Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why the expansion doesn’t get slowed down by gravity at immense distances. Professor Philip Anderson puts it clearly when he says that the supernova results showing no slowing don’t prove a dark energy is countering gravity, because the answer could equally be that there is simply no cosmological-range gravity (due to weakening of gauge boson radiation by expansion caused redshift, which is trivial or absent locally, say in the solar system and in the galaxy).

Nigel

Copy of another comment:

http://scienceblogs.com/principles/2006/09/peter_woit_not_even_wrong.php

# 12 | nigel | September 14, 2006 08:15 AM

A.J.,

“Gravity and the nuclear forces are largely decoupled…”

The range of the weak nuclear force is limited by the electroweak symmetry-breaking mechanism. Within a certain distance of a fundamental particle, electroweak symmetry exists; beyond that range it doesn’t, because the weak gauge bosons are attenuated while electromagnetic photons aren’t.

The nuclear force is intricately associated with gravitation because mass (inertial and gravitational) arises from the vacuum Higgs field or whatever nuclear mechanism shields the weak nuclear force gauge bosons.

There are two ways of approaching unification. One way is to look at the particles of the Standard Model, which gives you a list of characteristics of each particle that you then have to try to produce from some mathematical set of symmetry groups such as SU(3)xSU(2)xU(1). However, in this approach you still need to come up with a way of showing why this symmetry breaks down at different energy scales (or at the different distances to which particles can approach each other in collisions).

The other way of approaching the problem is to look at the force field strengths in an objective way, as a function of distance from a fundamental particle:

Take a look at http://thumbsnap.com/v/LchS4BHf.jpg

The plot of interaction charge (coupling constant alpha) versus distance is better than interaction charge versus collision energy, because you can see from the graph intuitively how asymptotic freedom works: the strong nuclear charge falls over a certain range as you approach closer to a quark, and this counters the inverse-square law, which says force ~ (charge/distance)^2; hence the net nuclear force is constant over a certain range, giving the quarks asymptotic freedom over the nucleon size. You can’t get this understanding from the usual plots of interaction coupling constants versus collision energies.
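The constancy argument can be sketched numerically. Purely for illustration I assume an effective charge that grows linearly with distance over the asymptotic-freedom range (the real QCD running is logarithmic, so this linear form is only the idealisation needed to make the force exactly constant):

```python
# Toy illustration: if the effective strong charge grows linearly with
# distance over some range (an assumed form, chosen for the sketch), then the
# inverse-square law F ~ (charge/distance)^2 gives a distance-independent
# force over that range -- the quarks feel a constant pull, i.e. asymptotic
# freedom inside, confinement at the edge.

def effective_charge(r, k=1.0):
    # hypothetical linear running: charge proportional to distance
    return k * r

def force(r):
    q = effective_charge(r)
    return (q / r) ** 2

forces = [force(r) for r in (0.2, 0.5, 0.9)]   # arbitrary distances in the range
print(forces)   # all equal: the falling charge cancels the inverse-square law
```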

So, all that fundamental force “unification” means is that as you collide particles harder, they get closer together before bouncing off due to Coulomb (and/or other force field) scattering, and at such close distances all the forces have simple symmetries such as SU(3)xSU(2)xU(1).

Put another way, the vacuum filters out massive, short-range particles over a small distance, which shields out SU(3)xSU(2) and leaves you with only a weakened U(1) force at great distances. If you are trained in dielectric materials such as capacitor dielectrics, it is pretty obvious that Maxwell’s idea of polarized molecules in an aether is not the same as the polarized vacuum of quantum field theory.

Vacuum polarization requires a threshold electric field strength which is sufficient to create pairs in the vacuum (whether the pairs pop into existence from nothing, or whether they are simply being freed from a bound state in the Dirac sea, is irrelevant here).

The range for vacuum polarization around an electron is ~10^-15 metre: Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040, has a section explaining pair production in the vacuum, and gives a threshold electric field strength (page 85) of 1.3 x 10^16 v/cm, which is of the order of the electric field strength near 10^-15 m from an electron core, the limiting distance for vacuum polarization. Outside this range there is no nuclear force, and the electromagnetic force coupling constant is alpha ~ 1/137. Moving within this range, the value of alpha increases steadily, because you see progressively less polarized (shielding) vacuum between you and the core of the particle. The question then is: what is happening to the energy that the polarized vacuum is shielding? The answer is that the attenuated energy is conserved, so it is used to give rise to the short-range strong nuclear force.
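As a rough order-of-magnitude check, one can ask at what distance the classical Coulomb field of an electron reaches the 1.3 x 10^16 v/cm threshold quoted above. This is a purely classical estimate (it ignores the very shielding being discussed), so only the order of magnitude means anything:

```python
import math

# Find the radius r at which the classical Coulomb field of an electron,
# E(r) = k e / r^2, equals the pair-production threshold field of
# ~1.3e16 V/cm quoted from Alvarez-Gaume and Vazquez-Mozo (p. 85).
k = 8.9875517873681764e9      # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19           # elementary charge, C
E_threshold = 1.3e16 * 100.0  # 1.3e16 V/cm converted to V/m

# E(r) = k e / r^2  =>  r = sqrt(k e / E_threshold)
r = math.sqrt(k * e / E_threshold)
print(f"threshold radius ~ {r:.2e} m")   # a few times 1e-14 m on this
                                         # classical estimate, i.e. in the
                                         # femtometre-scale region discussed
```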

When you get really close to the particle core, there is little polarized vacuum between you and the core, so there is little shielding of the electromagnetic charge by polarization, hence there is at that distance little shielded electromagnetic field energy available to cause nuclear binding, which is why the strong nuclear charge falls at very short distances.

As for the electric charge differences between leptons and quarks: when you have three charge cores in close proximity (sharing the same overall vacuum polarization shell around all of them), the electric field energy creating the vacuum polarization field is three times stronger, so the polarization is three times greater, which means that the electric charge of each downquark is 1/3 that of the electron. Of course this is a very incomplete piece of physical logic, and it leads to further questions where you have upquarks with 2/3 charge, and where you have pairs of quarks in mesons. But some of these have been answered: consider the neutron, which has an overall electric charge of zero; where is the electric field energy being used? By conservation of electromagnetic field energy, the reduction in electric charge indicated by fractional charge values due to vacuum polarization shielding implies that the shielded energy is being used to bind the quarks (within the asymptotic freedom range) via the strong nuclear force. Neutrons and protons have zero or relatively low electric charge for their fundamental particle number because so much energy is being tied up in the strong nuclear binding (‘color’) force.

The reason why gravity is so much weaker than electromagnetism is because, as stated in my previous comment, electromagnetism is unifiable with gravity.

This obviously implies that the same force field produces gravitation as well as electromagnetism. We don’t see the background electromagnetic field because there are equal numbers of positive and negative charges around, so the net electric field strength is zero; the net magnetic field strength is also zero because charges are usually paired with opposite spins (Pauli exclusion), so that the intrinsic magnetic moments of electrons (and other particles) normally cancel.

However, the energy still exists as exchange radiation. Electromagnetic gauge bosons have 4 polarizations to account for attraction, unlike ordinary photons, which have only 2 and could only cause repulsive forces. The polarizations in an ordinary photon are the electric field and the magnetic field, both orthogonal to one another and to the direction of propagation of the photon. In an electromagnetic gauge boson there are an additional two polarizations of electric and magnetic field because of the exchange process. Gauge bosons are flowing in each direction, and their fields can superimpose in different ways so that the four polarizations (two from photons going one direction, and two from photons going the other way) can either cancel out (leaving gravity) or add up to cause repulsive or attractive net fields.

Consider each atom in the universe at a given time as a capacitor with a net electric charge and electric field direction (between electron and proton, etc) which is randomly orientated relative to the other atoms.

In any given imaginary line across the universe, because of the random orientation, half of the charge pairs (hydrogen atoms are 90% of the universe we can observe) will have their electric field pointing one way, and half will have it pointing the other way.

So if the line lies along an even number of charges, then that line will have (on average) zero net electric field.

The problem here (which produces gravity!) is that the randomly drawn line will only lie along an even number of charge pairs 50% of the time, and the other 50% of the time it will contain an odd number of charge pairs.

For an odd number of charges lying along the line, there is a net resultant equal to the attractive force between one pair of charges, say an electron and a proton!

So the mean net force along a randomly selected line is only 0 for lines lying along even numbers of atoms (which occurs with 50% probability) and is 1 atom strength (attractive force only) for odd numbers of atoms (which again has a 50% probability). The mean residue force therefore is NOT ZERO but:

{0 + [electric field of one hydrogen atom]} /2

= 0.5[electric field of one hydrogen atom].
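The 50/50 average above can be checked with a trivial Monte Carlo of the rule just stated (an even count of atoms along the line gives 0, an odd count gives 1 unit of atomic field strength):

```python
import random

random.seed(0)

# Monte Carlo of the even/odd rule: a randomly drawn line crosses an even
# number of charge pairs (net field 0) or an odd number (net field 1 unit),
# each with probability 1/2, so the mean residue tends to 0.5 units.
trials = 100_000
total = 0.0
for _ in range(trials):
    n_atoms = random.randint(1, 1000)        # arbitrary line population
    total += 0.0 if n_atoms % 2 == 0 else 1.0

mean = total / trials
print(f"mean residue ~ {mean:.3f} atomic field units")   # ~0.5
```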

There are many lines in all directions but the simple yet detailed dynamics of exactly how the gauge bosons deliver forces geometrically ensures that this residue force is always attractive (see my page for details at http://feynman137.tripod.com/#h etc).

The other way that the electric field vectors between charges in atoms can add up in the universe is in a simple perfect summation where every charge in the universe appears in the series + – + – + –, etc. This looks totally improbable, but in fact it is statistically just a drunkard’s walk summation, and by the nature of path integrals gauge bosons do take EVERY possible route, so it WILL definitely happen. When capacitors are arranged like this, the potential adds like a statistical drunkard’s walk because of the random orientation of ‘capacitors’, the diffusion weakening the summation from the total number to just the square root of that number because of the angular variations (two steps in opposite directions cancel out, as does the voltage from two charged capacitors facing one another). This vector sum of a drunkard’s walk is equal to the mean vector size of one step (the net electric field of one atom at an instant) times the square root of the number of steps, so for ~10^80 charges known in the observable size and density of the universe, you get a resultant of (10^80)^0.5 = 10^40 atomic electric field units. This sum is always from an even number of atoms in the universe, so the force can be either attractive or repulsive in effect (unlike a residual from a single pair of charges, i.e., an odd number of atoms [one atom], which is always attractive).
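The square-root scaling of that drunkard’s-walk summation can be checked at modest step counts (10^80 steps is obviously out of reach; this only verifies the sqrt(N) law the argument relies on):

```python
import math
import random

random.seed(1)

def rms_resultant(n_steps, trials=2000):
    """RMS of the sum of n_steps random +/-1 'field' contributions."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += s * s
    return math.sqrt(total / trials)

# RMS resultant tracks sqrt(n): quadrupling the steps doubles the resultant.
for n in (100, 400, 1600):
    print(n, round(rms_resultant(n), 1), "vs sqrt(n) =", math.sqrt(n))
```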

The ratio of electromagnetism to gravity is then ~(10^40)/(0.5), which is the correct factor for the difference in alpha for electromagnetism/gravity. Notice that this model shows gravity is a residual line summation of the gauge boson field of electromagnetism.

The weak nuclear force comes into play here.

‘We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances.’ – Professor Asim O. Barut, On the Status of Hidden Variable Theories in Quantum Mechanics, Apeiron, 2, 1995, pp. 97-98. (Quoted by Dr Thomas S. Love.)

In leptons, there is just one particle core with a polarized vacuum around it, so you have an alpha sized coupling strength due to polarization shielding. Where you have two or three particles in the core all close enough that they share the same surrounding polarized vacuum field out to 10^-15 metres radius, those particles are quarks in mesons (2 quarks) and baryons (3 quarks).

They have asymptotic freedom at close range due to the fall in the strong force at a certain range of distances, but they can’t ever escape from confinement because their nuclear binding energy far exceeds the energy required to create pairs of quarks. The mass mechanism is illustrated with a diagram at http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html

When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have 137 shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is: [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) ~ 0.51 MeV. If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass: [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV. The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos.

The general equation for the mass of all particles apart from the electron is [electron mass].[137].n(N+1)/2 ~ 35n(N+1) MeV. (For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest you think this is all ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember that, unlike Dalton, we have a mechanism, and below we make additional predictions and tests for all the other observable particles in the universe and compare the results to experimental measurements: http://feynman137.tripod.com/ (scroll down to table).

Comparison of the mass formula M = [electron mass].[137].n(N+1)/2 = [Z-boson mass].n(N+1)/[3 x 2Pi x 137] ~ 35n(N+1) with data shows that it predicts an array of masses for integers n (number of fundamental particles per observable particle core) and N (number of trapped Z-bosons associated with each core). It naturally reproduces the masses of all the observed particles to within a few percent accuracy.
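The two lepton masses quoted above follow directly from these formulae; a quick sketch (m_Z = 91,187.6 MeV is the measured value; the formulae themselves are of course the conjecture being described, not established physics):

```python
import math

# Sketch of the mass relations quoted in the text, taking m_Z = 91,187.6 MeV.
# These reproduce the ~0.51 MeV (electron) and ~105.7 MeV (muon) figures.
m_Z = 91187.6            # Z-boson mass, MeV
shield = 137.0           # the polarized-vacuum shielding factor (1/alpha)

electron = m_Z / (1.5 * 2 * math.pi * shield * shield)
muon     = m_Z / (2 * math.pi * shield)

print(f"electron: {electron:.3f} MeV (measured 0.511)")
print(f"muon:     {muon:.1f} MeV (measured 105.66)")

def general_mass(n, N):
    """The general formula ~35 n (N+1) MeV for particles other than the electron."""
    return 0.511 * shield * n * (N + 1) / 2
```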

So the complete linkage between gravity and nuclear force shows that inertial and gravitational mass is contributed to particles by the vacuum. The link to the Z-boson mass is the key to the relationship, and it is straightforward: the Z-boson is the analogue of the photon of electromagnetism. Rivero and a co-author noticed, and published in an arXiv paper, a simple relationship between the Z-boson mass and another particle mass via the 137 (1/alpha) factor.

The argument that the 137 factor (1/alpha) is the polarized-vacuum shielding factor for the idealised bare core electric charge runs as follows.

Heisenberg’s uncertainty principle says [we are measuring the uncertainty in distance in one direction only, radial distance from a centre; for two directions, like up or down a line, the uncertainty is only half this, i.e., it equals h/(4.Pi) instead of h/(2.Pi)]:

pd = h/(2.Pi)

where p is the uncertainty in momentum and d is the uncertainty in distance. This comes from his imaginary gamma-ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process.

For light-wave momentum p = mc, pd = (mc)(ct) = Et, where E is the uncertainty in energy (E = mc^2) and t is the uncertainty in time.

Hence, Et = h/(2.Pi)

t = h/(2.Pi.E)

d/c = h/(2.Pi.E)

d = hc/(2.Pi.E)

This result is used to show that an 80 GeV W or Z gauge boson will have a range of 10^-17 m. So it’s OK to take d as not merely an uncertainty in distance, but an actual range for the gauge boson!
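Plugging numbers into d = hc/(2.Pi.E) = (hbar)c/E for an 80 GeV boson gives the order of magnitude directly:

```python
# Numerical check of d = h c / (2 pi E) = hbar c / E for an 80 GeV boson.
hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m / s
E = 80e9 * 1.602176634e-19   # 80 GeV converted to joules

d = hbar * c / E
print(f"range d ~ {d:.2e} m")   # ~2.5e-18 m, within an order of magnitude
                                # of the 1e-17 m figure quoted above
```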

Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force between electrons is 1/alpha, or 137.036, times higher than Coulomb’s law for unit fundamental charges. The reason:

“… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).” – arxiv hep-th/0510040, p 71.
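As a numerical check of the 1/alpha ratio claimed above: the ratio of F = (hbar)c/d^2 to the Coulomb force between unit fundamental charges is independent of d and equals 1/alpha:

```python
# Check that F = hbar*c/d^2 exceeds the Coulomb force k e^2/d^2 between unit
# fundamental charges by the factor 1/alpha ~ 137.04, independent of d.
hbar = 1.054571817e-34          # J s
c = 2.99792458e8                # m / s
k = 8.9875517873681764e9        # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19             # elementary charge, C

d = 1e-17                       # any distance; the ratio is d-independent
F = hbar * c / d**2
F_coulomb = k * e * e / d**2
ratio = F / F_coulomb
print(f"F / F_Coulomb = {ratio:.3f}")   # ~137.04, i.e. 1/alpha
```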

Another clear account of vacuum polarization shielding bare charges: Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.’

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu_zero for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu_zero.(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu_zero = 1.001…’

The response of most people to building on these facts is completely bewildering: they don’t care at all, despite the progress made in understanding and in making physical predictions of a checkable kind.

At the end of the day, string theorists and the public don’t want to know the facts; they prefer the worship of mainstream mathematical speculations which don’t risk being experimentally refuted.

Copy of a comment (very off-topic and deletable):

http://www.math.columbia.edu/~woit/wordpress/?p=458

nigel Says:

September 14th, 2006 at 8:52 am

Heineken,

your site http://www.stringtheory.com/gloss.htm#String defines string as

“Strings participating in particle structures and magnetic fields are typically closed into a loop. Open string segments constitute line-of-sight electromagnetic radiations, such as light.”

I don’t see any problem with this, since you are just saying that looped radiation gives particles. Fine. It is the speculative reliance on extra-dimensional degrees of freedom which causes problems. Charged radiation of energy is characterised by the Heaviside-Poynting vector (electric field, magnetic field and propagation all orthogonal), so when it is looped – say by the entrapment of its own gravity on a black hole scale – you get a spinning particle.

The radial electric field at long distances is spherically symmetric (a standard electric charge), and the magnetic vectors for the Poynting energy going around in a loop partially cancel, leaving only a magnetic dipole moment along the spin axis. So if string theory were merely saying that looped, trapped radiation gives particles with rest mass, fine. What is wrong with string theory is that it goes too far away from experimental facts, into speculative extra-dimensional unification of forces in the Standard Model.

If string theory was building on empirical facts, fine. Instead it adds to speculation about gravitons by adding extra dimensional speculation to further the graviton speculation into a totally uncheckable theory.

Similarly, it takes the speculation of force unification at extremely high energy in the standard model, and then furthers that speculation by speculating about a 1:1 boson:fermion supersymmetry, without any evidence that the speculation it is modelling further is true or not.

By analogy, suppose you take a speculation that fairies exist, and then you speculate further about where those fairies come from, using plenty of extra dimensional mathematics. If anyone criticises your line of physical investigation as being unproductive, you simply say that you have a solid theory of fairies because you have worked out a self-consistent theory for the origin of the fairies which were speculated in the first place.

Furthermore, you have the splendid defense that your theory of fairies is the only self-consistent theory of fairies which has so far been published (because alternative theories like LQG don’t choose to address the origin of fairies/gravitons, but instead work on trying to unify equations with some empirical evidence, such as quantum field theory path integrals with the field equation of general relativity…).

Copy of an off-topic comment:

http://www.math.columbia.edu/~woit/wordpress/?p=457

nc Says: Your comment is awaiting moderation.

September 15th, 2006 at 11:16 am

‘It’s true but unprovable that our universe may just be a simulation in a computer run by a higher intelligence…’

Yes, Bill Gates would never confess to being that much of a control freak. What is absurd here is that the hardware analogy is totally absent. It is completely obvious that all matter is composed of atoms which are charged capacitors.

Maxwell’s equations can’t be applied really to the atom as capacitor, because he associates electric energy transfer with electric current; send in energy, and you are sending in charge.

The breakdown in Maxwell’s classical model comes here. You can put energy into an electron in an atom, and this does not mean that the charge of the electron increases.

So when you consider the atom as a charged capacitor, you see why Maxwell’s added term in the curl.B equation breaks down in quantum theory. Electricity has two components: a light speed field which is mediated by gauge bosons of electromagnetism, and electrons which only move in response to the gauge boson mediated field. Take away the wires, and wireless energy is just the gauge boson energy of the varying field.

Since every real photon consists of magnetic and electric fields, and magnetic and electric fields are composed of gauge bosons, that means that a photon contains gauge bosons in the fields it consists of! Hence a photon is just an energy disturbance mediated along the existing fields in space. These E, B fields aren’t experienced except as gravity and inertia, because net fields are generally cancelled out (but the field energy remains; field energy is conserved).

Sorry Peter for off-topic comments, but it is extremely irritating that some mathematicians will happily address the crackpot idea that the universe is a computer in the software sense, but totally ignore the hardware analogy.

The dimensions of spacetime on a large scale are expanding with the big bang cosmology, they never contract. They are also timelike, since you are looking back significantly into the past as you look into greater distances with a telescope.

On small scales, the dimensions which measure the lengths of atoms and composite bodies like rulers are CONTRACTABLE due to relativity.

The length of the universe doesn’t contract due to recession, even though the receding stars will be contracted by the SR factor. Gravity causes the Earth’s radius to contract by 1.5 mm, as Feynman calculated in his Lectures on Physics.
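Feynman’s 1.5 mm figure comes out directly from the contraction formula GM/(3c^2) with standard values for the Earth:

```python
# Feynman's figure for the gravitational contraction of the Earth's radius,
# delta_R = G M / (3 c^2), from the Lectures on Physics.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
c = 2.99792458e8     # speed of light, m/s

delta_R = G * M / (3 * c**2)
print(f"contraction ~ {delta_R * 1000:.2f} mm")   # ~1.5 mm, as quoted
```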

The contraction of distance is associated with the distance-like dimensions of moving or gravitating matter, and time dilation on a local scale follows from the constancy of the speed of light, c = distance/time. This ratio is constant, so a contraction of distance (the numerator) implies that time (the denominator) is contracted/dilated.

However the dimensions describing the spaces between receding matter are increasing at a uniform speed, the Hubble rate (supernovae data show no slowing down in the expansion rate).

So we really need to consider 3 contractable distance-like spacetime dimensions to describe CONTRACTABLE matter, and 3 non-contractable dimensions to describe the time-like uniform expansion of the universe.

Einstein would have saved everyone a lot of trouble in 1915 had he known of Lunsford’s theory and added 3 extra dimensions.

I’ll let Lubos Motl keep string theory if he cuts out the Calabi-Yau manifold and adds two extra dimensions as Lunsford’s SO(3,3) does.

Copy of a comment:

http://ppcook.blogspot.com/2006/09/penrose-universe.html

“As time passes, the Weyl curvature increases and gravitational masses attract each other more strongly forming a less-homogeneous universe, with clumped masses and higher entropy encoded in the dense packing massive bodies. So that early uniform universe may be explained by there being zero Weyl curvature. Penrose talks about the Weyl curvature’s growth as freeing up gravitational degrees of freedom that may then be excited. It is the excitation of these gravitaional degrees of freedom that is the real measure of entropy. It is a nice picture. But just what drives the Weyl curvature’s variance is a mystery to me.”

The idea of zero curvature at the big bang, and increasing gravitational strength with time, is the opposite of Dirac’s varying-constants hypothesis, in which gravitational curvature DECREASES with time.

If you want a mechanism for increasing curvature – although I have not checked whether it is a precise duality to the sort of curvature variation Penrose suggests – check a proof of the universal gravitational constant from an eccentric/crackpot (depending on your view) Yang-Mills dynamics of radiation exchange:

http://feynman137.tripod.com/#a

http://feynman137.tripod.com/#h

Notice that increasing curvature with time after the big bang is implied by the time-dependence of the Hubble parameter and of the observable density of the universe (Rho) in the above proof’s result, the universal gravitational “constant” G = (3/4)(H^2)/(Pi.Rho.e^3)

Here H is the Hubble constant, Rho is the observable density of the present universe, and e = 2.718… Following Perlmutter’s results on supernovae redshifts, there is no observable deceleration of the universe due to gravitational retardation, so H = 1/(age of universe). (The standard result for a critical-density universe is H = (2/3)/(age of universe), but the 2/3 factor comes from gravitational retardation, which isn’t observed – officially because dark energy produces a cosmological constant that offsets gravitational retardation of the expansion, but more likely because gravitation weakens over very large distances due to the substantial cosmological redshift of the “graviton”-type gauge boson radiation exchanged between very distant masses to cause gravity.)
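As a minimal numeric sketch of the formula above, the snippet below evaluates G = (3/4)(H^2)/(Pi.Rho.e^3) with H = 1/(age of universe). The input numbers are my own illustrative assumptions (an age of ~13.8 Gyr and a density of order 10^-27 kg/m^3), not figures taken from the linked proof:

```python
import math

# Evaluate the claimed relation G = (3/4) H^2 / (pi * rho * e^3).
# Inputs are illustrative assumptions, not the proof's own figures:
age_s = 13.8e9 * 3.156e7          # age of universe in seconds (~13.8 Gyr)
H = 1.0 / age_s                   # Hubble parameter in 1/s (no deceleration assumed)
rho = 9.4e-28                     # assumed observable density, kg/m^3

G = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(f"G = {G:.3e} m^3 kg^-1 s^-2")   # compare with the measured 6.674e-11
```

With these assumed inputs the result comes out in the right order of magnitude; how close it lands to the measured value depends entirely on the density figure one adopts.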

Notice that increasing curvature has a whole list of correct predictions.

(1) The linkage between gravitation and electromagnetism in a unification scheme implies that both forces increase in the same way with time.

Hence the fusion rate in the BB and in stars is independent of time, since the variation in the compressive force (gravity) is offset exactly by the variation in the similarly inverse-square-law repulsive force (Coulomb’s law, which is what stops fusion – proton capture at short range by the attractive strong nuclear force – from occurring unless the gravitational compression in the star or BB is high enough). If you vary gravity by a factor x and vary Coulomb’s law by the same factor, the net fusion rate of protons is unchanged: increasing gravity would increase the fusion rate, but increased Coulomb repulsion would reduce it, and because both are inverse-square-law forces the variational effects offset each other exactly.
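The offset argument can be sketched in a few lines: if both couplings scale by the same factor x and both forces go as 1/r^2, the ratio of compression to repulsion – which is what gates fusion in this argument – is invariant. All numbers below are arbitrary illustrative values, not physical constants:

```python
# Toy check: scaling two inverse-square forces by a common factor x
# leaves their ratio unchanged at every separation r.
def force_ratio(g_strength, coulomb_strength, r):
    f_gravity = g_strength / r**2      # compressive force
    f_coulomb = coulomb_strength / r**2  # repulsive force
    return f_gravity / f_coulomb

base = force_ratio(1.0, 2.0, 5.0)
for x in (0.001, 1.0, 1000.0):         # scale both couplings by x
    scaled = force_ratio(1.0 * x, 2.0 * x, 5.0)
    print(x, abs(scaled - base) < 1e-12)  # ratio is invariant
```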

(2) Increasing gravitational strength makes correct predictions. It explains why the ripples in the CBR are so unexpectedly small: gravitation was thousands of times weaker 300,000 years after the BB, when the CBR originated. (The mainstream explanation is inflation, but that is speculative.)

(3) It allows the quantitative prediction of gravitational and electromagnetic forces on the basis of a Yang-Mills exchange-radiation scheme, plus other things such as particle masses (above links). The predictions depend on cosmological measurements such as the Hubble constant and the density of the universe, but are accurate to within the experimental uncertainty of a few percent. By contrast, M-theory makes no predictions for force coupling strengths, and the only numbers it produces are wildly inaccurate for the cosmological constant, etc.

nigel

https://nige.wordpress.com/

nigel said…

Sorry, the links above are wrong:

http://feynman137.tripod.com/#a

http://feynman137.tripod.com/#h

Copy of a comment:

http://christinedantas.blogspot.com/2006/09/paper-for-discussion-quantum-mechanics.html

nigel said…

Eugene,

Quantum mechanics chaos/indeterminism is explained by the vacuum pair creation/annihilation operators.

Near a real particle core, the field strength is high enough to create pairs of charges. These spontaneously come into existence and exert deflective effects on the motion of the real particle at the core of the vacuum pair-production/polarization region (which extends out to about 10^-15 m).

These perpetual random deflections lead to the non-deterministic electron orbitals in atomic shells.

The role of Schroedinger’s wave equation is to give a probabilistic description of the overall effect. On average, the net effect of the vacuum charge deflections on the real particle is gas-like and therefore statistically predictable, like any waves that arise from small-scale particle interactions (sound waves are due to particulate air molecules, for example).

All that matters is working out the cause for QM indeterminism. Once you have the cause (quantum field theory, as explained above), you can cut out all the philosophical speculation.

To summarise, the meaning of quantum mechanics is implicit in the mathematical formulation of quantum field theory. The very nature of quantum fields – as opposed to Maxwell-Einstein type classical (continuous) fields – is responsible for the nature of quantum mechanics.

Lee Smolin is right to seek connections between quantum mechanics and cosmology; I think he will get the final theory, minus the details of the Standard Model. I hope he considers Lunsford’s work on cosmology and electromagnetism. The word “classical” as you use it is confusing: for any statistically large number of particles interacting via a quantum field, the net result approaches “classical” (continuum) physics.

If you have a room with 1 air molecule in it, you CAN’T physically have ANY sound waves in that room, whatsoever! It is just one particle bouncing around.

If you put more air molecules in, the behaviour starts off chaotic, and eventually, with enough air molecules in the room, the net behaviour becomes statistically predictable using wave equations. Professor David Bohm showed in 1954 that Schroedinger’s equation can be derived from the Brownian motion of a quantum field of chaotic particles around an electron!
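The point about statistical predictability emerging from many random deflections can be illustrated with a generic random-walk toy (this is not a model of the actual vacuum dynamics, just the statistical principle: any one walk is unpredictable, but the distribution over many walks is stable):

```python
import random
import statistics

# One particle undergoing random +/-1 deflections is unpredictable;
# the *distribution* over many trials is stable and predictable.
random.seed(0)  # fixed seed so the run is reproducible

def displacement(n_kicks):
    """Net displacement after n random +/-1 deflections."""
    return sum(random.choice((-1, 1)) for _ in range(n_kicks))

trials = [displacement(1000) for _ in range(5000)]
print("mean  ~", statistics.mean(trials))    # near 0
print("stdev ~", statistics.pstdev(trials))  # near sqrt(1000) ~ 31.6
```

The spread grows only as the square root of the number of kicks, which is the hallmark of Brownian-type motion.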

Nigel

9/16/2006 03:29:08 PM