Preliminary pages from the draft book

Solution to a problem with general relativity

A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, partly in advance of observations

This book is an updated and expanded version of a CERN Document Server deposited draft preprint paper, EXT-2004-007, which is now obsolete and can’t be updated there. Please see the additional new calculations and the duality between Yang-Mills exchange radiation and the dynamics of the Dirac sea spacetime fabric of general relativity (in chapter one).


In the preprint EXT-2004-007, the observation was made that in space-time the Hubble recession of the mass of the universe around us can be represented either as (recession velocity, v)/(apparent distance in space-time, s) = Hubble parameter, H = v/s, or, equally well, as (recession velocity, v)/(apparent time past in space-time, t) = v/t = v/(s/c) = cv/s = cH. The latter is the outward acceleration of the matter of the universe as seen in our space-time reference frame (rather than how the universe might hypothetically appear if light and other fields travelled instantly, which cannot occur). Hubble ignored space-time, which is why this fact was not recognised earlier. The immediate consequence is that this outward acceleration of matter gives an outward force for the big bang via Newton’s empirical second law of motion, F = ma, with a = cH and m the mass of the receding universe observable around us (because of various other considerations, such as the increase in density in space-time as we look to great distances and earlier eras of the universe, there are complexities which are analysed in chapter one). By Newton’s empirical third law of motion, this outward force should be associated with an equal inward-directed reaction force, which allows us to predict gravity as a local effect of exchange-radiation pressure due to the big bang. This prediction is substantiated by two distinct proofs which are dual of one another, one for material pressure (particles) and one for radiation pressure (waves). The result is a full prediction of empirically verifiable general relativity: not merely the inverse-square law, but the accurate prediction of the gravitational coupling constant G and the gravitational curvature produced by masses, as well as the elimination of all ‘dark matter’ and ‘dark energy’ problems from general relativity.
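The v/t reading of the Hubble law can be checked numerically. A minimal sketch, assuming a round present-day Hubble parameter of 70 km/s/Mpc (an illustrative figure, not one taken from this book):

```python
# Outward acceleration a = cH implied by reading the Hubble law as
# v/t rather than v/s, as argued above. H = 70 km/s/Mpc is an assumed
# round value for illustration only.
c = 2.998e8                 # speed of light, m/s
H = 70e3 / 3.086e22         # Hubble parameter in s^-1 (70 km/s/Mpc)
a = c * H                   # outward acceleration, m/s^2
```

The result is a few times 10^-10 m/s^2, the tiny cosmological acceleration the argument relies on.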
The cosmological consequences of this mechanism go further, because the exchange radiation mechanism causes the big bang Hubble recession on large scales while causing gravitation and curvature on small scales. It unifies both electromagnetism and gravitation, in the process eliminating the unobserved Higgs mechanism for electroweak symmetry breaking. The 19 parameters of the Standard Model are all predicted by the simple replacement mechanism, providing a full and detailed prediction of strong, weak, electromagnetic and gravitational interactions. The author is aware now of a great deal of relevant independent work by other people, including, among others, Louise Riofrio, D. R. Lunsford (whose unification, see EXT-2003-090, of electromagnetism and gravitation by a space-time symmetry where there are three orthogonal space dimensions and a corresponding three time dimensions, leading him to prove the elimination of the cosmological constant, is a duality to the mechanism presented here), Thomas Love, Tony Smith, John Hunter, Hans de Vries, Alejandro Rivero and Carl Brannen.

[To be inserted here when book content is complete: Summary list of predictions and links to the places they occur in the body of the book]


Jacques Distler inspired the writing of this technical-level book by suggesting in a comment on Clifford V. Johnson’s discussion blog that I’d be taken more seriously if only I’d use tensor analysis in discussing the mathematical physics of general relativity. Walter Babin kindly hosted some papers on his General Science Journal, which is less prejudiced and thus more scientific than a certain other glamorous internet archive, while editors at Electronics World printed them; Peter Woit, Sean M. Carroll, Lee Smolin and ‘Kea’ (Marni D. Sheppeard) discussed in various ways the facts on mainstream string theory propaganda. Edward Witten’s alternative idea, called stringy M-theory, turned out to be ‘not even wrong’. Thank you, Ed!


Chapter 1: The mathematics and physics of general relativity

Chapter 2: Quantum gravity approaches: string theory and loop quantum gravity

Chapter 3: Dirac’s Equation, Spin and Magnetic Moments, Pair-Production, the Polarization of the Vacuum above the IR Cutoff and Its Role in the Renormalization of Charge and Mass

Chapter 4: The Path Integrals of Quantum Electrodynamics, Compared with Maxwell’s Classical Electrodynamics

Chapter 5: Nuclear and Particle Physics, Yang-Mills theory, the Standard Model, and Representation Theory

Chapter 6: Methodology of doing science: Edward Witten’s stringy definition of the word ‘prediction’; real predictions of this theory based purely on empirical facts (vacuum mechanism for mass and electroweak symmetry breaking at low energy, including Hans de Vries’ and Alejandro Rivero’s ‘coincidence’)

Chapter 7: Riofrio’s and Hunter’s equations, and Lunsford’s unification of electromagnetism and gravitation

Chapter 8: Standard Model mechanism: vacuum polarization and gauge boson field mediators for asymptotic freedom and force unification

Chapter 9: Evidence for the ‘stringy’ nature of fundamental particle cores (trapped, Poynting-Heaviside electromagnetic energy currents constitute static, spinning, radiating, charge in capacitor plates, the Yang-Mills exchange radiation being the zero point vacuum field)

Chapter 10: Summary of evidence, comparison of theories, limitations and further work required.


The errors in the unification of fundamental theories lie in the way general relativity is currently being used, particularly the continuum gravity-source assumptions which are forced into the stress-energy tensor because you can’t, in practice, use differential equations to model true discontinuities. So the lumpiness of quantum field theory isn’t compatible with the continuum of general relativity for purely mathematical reasons, not physical reasons. It pays therefore to examine what is correct in general relativity, and to identify and isolate what is merely a statistical approximation. The errors are identified and corrected in chapter one, which leads to further ramifications for the rest of physics, which are analysed and solved in the rest of the book.

Chapter One

The mathematics and physics of general relativity

Until 1998, it was widely held that general relativity predicted a gravitational retardation in the recession of the most distant supernovae, which proved to be in contradiction to the observations of supernova redshifts published that year by Perlmutter et al., and since corroborated by many others.1 However, in 1996 a mechanism of gravity had been advanced which offered an approach to predicting (uniquely) the universal gravitational ‘constant’, G, resolving that problem and many others, including the flatness problem, the smoothness of the cosmic background radiation originating from 300,000 years after the big bang, Standard Model particle physics parameters, and the underlying mechanism for quantum field theory.2

This chapter deals with the correct derivation and application of the Einstein-Hilbert field equation of general relativity, including quantum corrections that pertain to gravitational phenomena.

1.1 The mathematical physics of the Einstein-Hilbert field equation

The Einstein-Hilbert field equation, Rab – ½gab R = Tab, of general relativity was obtained in November 1915 from solid mathematical and physical considerations. Einstein’s equivalence principle, that inertial accelerations and gravitational accelerations are indistinguishable, is one basis of the physical description of gravitation. Two other vital ingredients are non-Euclidean geometry, described by tensor calculus, and the conservation of mass-energy, which produces the complicated left hand side of the equation, specifically introducing the ‘– ½gab R’ term.

Einstein first equated the curvature of space-time (describing acceleration fields and curved geodesics) to the source of the gravitational field (assumed to be some kind of continuum such as a classical field or a ‘perfect fluid’) simply as Rab = Tab. Here, Rab is the Ricci tensor (a description of curvature based on Ricci’s developments of Riemann’s non-Euclidean spacetime ideas) and Tab is the stress-energy tensor.

This simple equation, Rab = Tab, was wrong. It turned out that Tab should have zero ‘divergence’ in order that mass-energy is conserved locally. The easiest way to describe this is by analogy to the Maxwell equation for the divergence of a magnetic field B, i.e., ∇ · B = 0. As many magnetic field lines converge on the south pole of a magnet as radiate from its north pole, which means that the divergence of the field (simply the sum of the gradients of the field in the three orthogonal spatial directions) is always exactly zero. In the case of the stress-energy tensor, Tab, the local conservation of mass-energy density would be violated by, for example, the Lorentzian dependence of volume upon motion, and hence of the density of the field source of gravitation.
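As an illustration of the zero-divergence condition, here is a finite-difference check that a magnetic dipole field satisfies ∇ · B = 0 away from the source; the dipole moment, field point and absorbed constants are arbitrary choices for this sketch:

```python
# Dipole field B = (3(m.rhat)rhat - m)/r^3, with physical constants
# absorbed into the (arbitrary, illustrative) moment m.
def B(x, y, z, m=(0.0, 0.0, 1.0)):
    r = (x*x + y*y + z*z) ** 0.5
    mdotr = (m[0]*x + m[1]*y + m[2]*z) / r
    return tuple(3.0*mdotr*(p/r)/r**3 - mi/r**3
                 for p, mi in zip((x, y, z), m))

# Central-difference estimate of div B at an arbitrary field point:
h = 1e-5
x, y, z = 0.7, -0.4, 0.9
div = ((B(x+h, y, z)[0] - B(x-h, y, z)[0]) +
       (B(x, y+h, z)[1] - B(x, y-h, z)[1]) +
       (B(x, y, z+h)[2] - B(x, y, z-h)[2])) / (2.0*h)
# div is zero to within finite-difference error
```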

Tab = ρ ua ub

Taking just the energy density component, a = b = 0,

T00 = ρ γ^2 = ρ (1 – v^2/c^2)^-1

Hence, T00 will increase towards infinity as v tends towards c. If, therefore, the curvature were simply equal to the stress-energy tensor, Rab = Tab, then curvature would be dependent upon the reference frame of the observer, increasing towards infinity as velocity increases toward c.
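The divergence of T00 with velocity is easy to tabulate. A throwaway numerical illustration (ρ = 1 and c = 1 are arbitrary illustrative units, not values from the text):

```python
# T00 = rho * gamma^2 = rho / (1 - v^2/c^2): the observed energy
# density grows without bound as v -> c. rho = 1, c = 1 are
# illustrative units.
rho, c = 1.0, 1.0

def T00(v):
    return rho / (1.0 - (v / c) ** 2)

samples = {v: T00(v) for v in (0.0, 0.6, 0.9, 0.99)}
```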

Einstein, in his 1916 paper ‘The Foundation of the General Theory of Relativity,’ recognised the need whereby ‘The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ (Italics are Einstein’s own.)

In order to ensure that the source of the curvature describing gravitation is … [The ten chapters of the full book will be downloadable from a link at when completed, shortly.  It will replace the ramshackle, hit and miss compendium of ideas and calculations on pages like – which is where a hyperlinked index page for the new book will go – and the recent updates in numerous blog posts and comments, with a structured, completely rewritten thesis, eliminating repetitions and other annoying aspects of presentation.]

SU(3) is OK, but SU(2)xU(1) and the Higgs mechanism are too complicated and SU(2) is rich enough, with a very simple mass mechanism, to encompass the full electroweak phenomena, allowing the prediction of the strength of the electromagnetic force and weaker gravity correctly

Illustration of physical mechanisms for exchange radiation in quantum field theory and the modification to the standard model implied therewith: SU(3) is OK, but SU(2)xU(1) and the Higgs mechanism are too complicated and SU(2) is rich enough (with a very simple mass-giving mechanism) to encompass the full electroweak phenomena, allowing the prediction of the strength of the electromagnetic force and weaker gravity correctly.  So the standard model should be modified to SU(3)xSU(2) where the SU(2) has a mechanism for chiral symmetry and mass at certain energies, or perhaps SU(3)xSU(2)xSU(2), with one of the SU(2) groups describing massive weak force gauge bosons, and the other SU(2) is electromagnetism and gravity (mass-less versions of the W+ and W- mediate electric fields and the mass-less Z is just the photon, and it mediates gravity in the network of particles which give rise to mass). It is simply untrue that electromagnetic gauge boson radiation must be uncharged: this condition only holds for isolated photons, not for exchange radiation, where there is continual exchange of gauge bosons between charges (gauge bosons going in both directions between charges, an equilibrium).  If the mass-less gauge bosons are uncharged, the magnetic field curls cancel in each individual gauge boson (seen from a large distance), preventing infinite self-inductance, so they will propagate.  This is why normal electromagnetic radiation like light photons are uncharged (the varying electromagnetic field of the photon contains as much positive electric field as negative electric field).

If the gauge bosons are charged and massless, then you would not normally expect them to propagate, because their magnetic fields cause infinite self-inductance, which would prevent propagation.  However, if you have two similar, charged massless radiations flowing in opposite directions, their superposition will cancel out the magnetic fields, leaving only the electric field component, as observed in static electric fields.

This has been well investigated in the transmission line context of TEM (transverse electromagnetic) waves (such as logic steps in high speed digital computers, where cross-talk, i.e., mutual inductance, is a limiting factor on the integrated circuit design) propagated by a pair of parallel conductors, with charges flowing in one direction on one conductor, and the opposite direction in the other.  When a simple capacitor, composed of metal plates separated by a small distance of vacuum (the vacuum acts as a dielectric, i.e., the permittivity of free space is not zero), is charged up by light-velocity electromagnetic energy, that energy has no mechanism to slow down when it enters the capacitor, which behaves as a transmission line.  Hence, you get the ‘energy current’ bouncing in all directions concurrently in a ‘steady, charged’ capacitor.  The magnetic field components of the TEM waves cancel, leaving just electric field (electric charge) as observed.  See the illustration in the previous post here.


13 thoughts on “Preliminary pages from the draft book”

  1. Just a clarification: the standard model’s electromagnetic/weak force unification mechanism may be SU(2) with a different interpretation to the existing one.

We know already from classical electromagnetism that when you integrate the energy in the electric field of an electron, it corresponds to the electron’s rest mass energy if the electron is assumed to have a radius of 2.8 x 10^{-15} m, i.e. 2.8 fm.


High energy collisions have shown that Coulomb’s law holds down to much smaller sizes than 2.8 fm: the electric charge doesn’t merely exist from 2.8 fm outward from the electron core, although 2.8 fm does correspond roughly to the assumed IR cutoff for QED, typically taken as the distance of closest approach in 0.511 MeV electron collisions. This is the maximum radius at which vacuum polarization (due to polarizable pair-production charges, which shield the electron’s core charge) can occur.

Pair production in a static electric field requires a field strength on the order of 10^18 V/m, which occurs out to a similar distance from an electron.

The classical idea that there should be a cutoff 2.8 fm radius to the electron is wrong, and it is wrong because the electron has a lot more energy than 0.511 MeV, but that energy is not utilizable; it is cloaked by vacuum effects, which alter the observable mass of the electron. These vacuum pair-shielding phenomena are well known from the renormalization of charge and mass in QED, where renormalized (adjusted) bare core charge and mass are required to correctly predict the Lamb shift and the magnetic moments of muons.

    Some earlier discussions on this topic are at the older blog:

    Please note, that in writing the book I am noticing a lot of (usually trivial) errors and some poorly expressed arguments in my blog posts, which I’m sorting out in the book version.

    One example of an ongoing problem is the full electroweak symmetry breaking mechanism, but that is allied to the SU(3) strong interaction threshold.

    The idea in the standard model normally is that, to get SU(3)xSU(2)xU(1) to model particle physics, you have to introduce various convenient scales and cutoffs.

    SU(3) for example must have an IR cutoff on the order of 100 GeV, below which gluons lose mass. (Obviously, in nuclei, pions continue to mediate inter-nucleon forces at energies down to 140 MeV and thus over larger distances than gluons, but this is not the gluon mediated inter quark force mechanism.)

    SU(2) has scales of 80 and 91 GeV where the W and Z weak force gauge bosons appear, and an electroweak symmetry breaking scale of 246 GeV, above which there is supposed to be a perfect symmetry between the electromagnetic and weak gauge bosons, giving electroweak unification.

    Below 246 GeV, the electroweak symmetry is supposed to be broken by the mediation of massive spin-0 Higgs bosons (possibly several types, because in the standard model a single Higgs boson could lead to problems).

However, the 246 GeV vacuum expectation value of the Higgs field is coincidentally close to 91 GeV x e = 247 GeV, where e is the base of natural logarithms. This coincidence suggests the following:

    Assuming Coulomb-type (elastic) scattering, the distance of closest approach is inversely proportional to the kinetic energy of the particles.

    One mean free path of incident radiation (gauge bosons) is the distance over which the fraction of unscattered radiation falls by a factor of e, the base of natural logs.

Hence, in radial symmetry for particle scattering, a distance of approach into the virtual-particle veil around an electron (or other particle) sufficient to decrease the abundance of incident (unscattered) radiation by the factor e may also be associated with an increase in the required energy by the same factor e.

    Going from 91 GeV to 247 GeV is therefore an increase in energy by a factor of e, which is associated with a fall in the amount of incident (unscattered) radiation by a factor of e.
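The numerical coincidence and the mean-free-path attenuation invoked above can both be checked in a few lines; the Z mass of 91.1876 GeV is the standard measured value, and the comparison with e is only as close as the comment claims (within about 1%):

```python
import math

mZ = 91.1876                  # Z boson mass, GeV (measured value)
vev = 246.0                   # electroweak vacuum expectation value, GeV
ratio = vev / mZ              # about 2.70, within ~1% of e = 2.71828...

# Attenuation over mean free paths: unscattered fraction after n paths
def unscattered(n):
    return math.exp(-n)

one_path = unscattered(1.0)   # exactly 1/e of the radiation survives
```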

    In general, a mean free path denotes the range of the effect. The mechanism which this argument suggests is that mass arises by pair production at energies of say 80-91 GeV, corresponding to a small range from a particle.

    Beyond this range, there are basically no massive loops, because the field strength and energy density in the vacuum are too weak to produce them.

At smaller distances, an approaching massless particle acquires mass by some kind of interaction with the field of massive loops, such as a scattering reaction; the “miring” you experience when moving through a fluid, due to molecules scattering off you in the direction of your motion, gives a drag pressure at velocity v of q = (1/2)*{density of air}*v^2.
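The dynamic-pressure formula quoted in the analogy is standard; a trivial worked example, with illustrative sea-level air density and a 30 m/s speed (values chosen only for this sketch):

```python
# Dynamic ("drag") pressure q = (1/2) * rho * v^2 for motion through
# a fluid. rho and v are illustrative: sea-level air, 30 m/s.
rho_air = 1.225              # kg/m^3
v = 30.0                     # m/s
q = 0.5 * rho_air * v ** 2   # pressure in pascals
```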

    The question is why W and Z gauge bosons, which have mass from their creation at 80-91 GeV to the vacuum expectation energy of 246 GeV, lose mass above 246 GeV.

    Normally this is explained by the increased penetrating power particles gain at higher energy. Particles like gamma rays are less interacting at higher energy, so they penetrate further.

    In the same way, the W and Z gauge bosons of higher energy are less likely to interact with the Higgs field.

    Put in a simple analogy, if you have little energy, you are stopped quite easily by any barrier in your path. If you have more energy, you are more likely to go straight through the barrier without being constrained by it.

Sometimes the treacle analogy to the “Higgs aether” is used, whereby things moving slowly in a treacle syrup get stopped easily, but things with more energy and velocity are less affected.

    Physically, this is a poor analogy however, since in general, as stated, drag forces in fluids increase with kinetic energy or the square of velocity, instead of decreasing with increasing velocity (and thus with increasing kinetic energy).

The problem for the Higgs field is that you need it to stop miring particles which have over 246 GeV energy altogether, not to increasingly mire them as their energy rises.

Either the particles must cease to interact with the “Higgs aether” above 246 GeV, or else the “Higgs aether” itself must disappear above 246 GeV. See for the problem of trying to get a clear resolution to the problem (the comment on that thread by Island doesn’t answer anything, because we already know from Schwinger’s analysis that pair production requires an intense electric field, about 10^18 V/m if the field is not oscillating, and merely expressing this threshold in some other units doesn’t solve a thing).

    Of course, if we are going to completely change the electroweak section of the standard model, SU(2)xU(1), into SU(2) with a new system to replace the Higgs mechanism, then the whole question as to how the “Higgs aether” is supposed to eliminate mass above 246 GeV disappears and is replaced by the new physics anyway.

  2. Another possibility, as originally suggested at is that the full symmetry is SU(3)xSU(2)xSU(2).

    The advantage of having SU(2) appear twice is one of the SU(2)’s will describe the massive weak gauge bosons, and the other can describe the electromagnetic and the gravity boson (which is a simple spin-1 photon – albeit one of unusual energy – pressing things together and causing the radial contraction of mass as general relativity predicts, by the physical mechanism of radiation pressure and mutual shielding, rather than by a spin-2 ‘graviton’!).

The details of chiral symmetry, mass, symmetry breaking and unification in this system need to be assessed and compared to the existing standard model, SU(3)xSU(2)xU(1) plus its Higgs mechanism.

  3. copy of a comment:

    Thank you very much indeed for this news. On 3 space plus 3 time like dimensions, I’d like to mention D. R. Lunsford’s unification of electrodynamics and gravitation, “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, as summarized here.

    Lunsford discusses why, despite being peer reviewed and published, arXiv blacklisted it, in his comment here. Lunsford’s full paper is available for download, however, here.

    Lunsford succeeds in getting a unification which actually makes checkable predictions, unlike the Kaluza-Klein unification and other stuff: for instance it predicts that the cosmological constant is zero, just as observed!

The idea is to have three orthogonal time dimensions as well as three of the usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions (treating the dimensions of the expanding universe as time dimensions rather than space dimensions makes the Hubble parameter v/t instead of v/x, and thus it becomes an acceleration which allows you to predict the strength of gravity from a simple mechanism: the outward force of the big bang is simply F = ma, where m is the mass of the universe; Newton’s 3rd law then tells you that there is an equal inward reaction force, which – from the possibilities known – must be gravity-causing gauge boson radiation of some sort, and you can numerically predict gravity’s strength as well as the radial gravitational contraction mechanism of general relativity in this way), and (2) require no cosmological constant:

    (1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions; three expanding dimensions to describe the cosmos, and three contractable dimensions to describe matter and fields locally.

(2) All known real quantum field theories are Yang-Mills exchange radiation theories (i.e., QED, the weak interaction and QCD). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies, which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR), are very redshifted, you would expect that any exchange radiation will similarly be “redshifted”. The GR solutions which predict that slowing should occur are the “evidence” for a small positive constant and hence dark energy (which provides the outward acceleration to offset the presumed inward directed gravitational acceleration).

    Professor Philip Anderson argues against Professor Sean Carroll here that: “the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

    My arguments in favour of lambda = 0 and 6 dimensions (3 time like global expansion, and 3 contractable local spacetime which describes the coordinates of matter) are at places like this and other sites.

  4. copy of a follow up comment:

    “I checked Lunsford’s article but he said nothing about the severe problems raised by the new kinematics in particle physics unless the new time dimensions are compactified to small enough radius.” – Matti Pitkanen

Thanks for responding, Matti. But the time dimensions aren’t extra spatial dimensions, and they don’t require compactification. Lunsford does make it clear, at least in the comment on Woit’s blog, that he doesn’t simply mix up time and space.

    The time dimensions describe the expanding vacuum (big bang, Hubble recession of matter), the 3 spatial dimensions describe contractable matter.

    There’s no overlap possible because the spatial dimensions of matter are contracted due to gravity, while the vacuum time dimensions expand.

    It’s a physical effect. Particles are bound against expansion by nuclear, electromagnetic and gravitational (for big masses like planets, stars, galaxies, etc.) force.

Such matter doesn’t expand, and so it needs a coordinate system to describe it that is different from that of the vacuum in the expanding universe in between lumps of bound matter (galaxies, stars, etc.).

Gravitation in general relativity causes a contraction of spatial distance, the radial contraction of a mass M being approximately (1/3)MG/c^2. This is 1.5 mm for Earth’s radius.
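The quoted 1.5 mm figure follows directly from (1/3)GM/c^2 with standard values for Earth; a quick check:

```python
# Radial contraction (1/3) * G * M / c^2 for Earth, as quoted above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth's mass, kg
c = 2.998e8          # speed of light, m/s
contraction = G * M / (3.0 * c ** 2)   # metres; about 1.5 mm
```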

    The problem is that this local spatial contraction of matter is quite distinct from the global expansion of the universe as a whole. Attractive forces over short ranges, such as gravity, prevent matter from expanding and indeed cause contraction of spatial dimensions.

    So you need one coordinate system to describe matter’s coordinates. You need a separate coordinate system to describe the non-contractable vacuum which is expanding.

    The best way to do this is to treat the distant universe which is receding as receding in time. Distance is an illusion for receding objects, because by the time the light gets to you, the object is further away. This is the effect of spacetime.

    At one level, you can say that a receding star which appears to you to be at distance R and receding at velocity v, will be at distance = R + vt = R + v(R/c) = R(1 + v/c) by the time the light gets to you.

    However, you then have an ambiguity in measuring the spatial distance to the star. You can say that it appears to be at distance R in spacetime where you are at your time of t = 10^17 seconds after the big bang (or whatever the age of the universe is) and the star is measured at a time of t – (R/c) seconds after the big bang (because you are looking back in time with increasing distance).

    The problem here is that the distance you are giving relates to different values of time after the big bang: you are observing at time t after the big bang, while the thing you are observing at apparent distance R is actually at time t – (R/c) after the big bang.

Alternatively you get a problem if you specify the distance of a receding star as being R(1 + v/c), which allows for the continued recession of the star or galaxy while its light is in transit to us. The problem here is that we can’t directly observe how the value of v varies over the time interval that the light is coming to us. We only observationally know the value of the recession velocity v for the star at a time in the past. There is no guarantee that it has continued receding at the same speed while the light has been in transit to us.

So all possible attempts to describe the recession of matter in the big bang as a function of distance are subjective. This shows that to achieve an unequivocal, unambiguous statement about what the recession means quantitatively, we must always use time dimensions, not distance dimensions, to describe the recession phenomena observed. Hubble should have realized this and written his empirical recession velocity law not as v/R = constant = H (units reciprocal seconds), but as a recession velocity increasing in direct proportion to time past: v/T = v/(R/c) = vc/R = (RH)c/R = Hc.

This has units of acceleration, which leads directly to a prediction of gravitation, because that outward acceleration of receding matter means there’s an outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, and cosmology. Non-receding masses obviously don’t cause a reaction force, so they cause an asymmetry => gravity. This “shadowing” is totally different from LeSage’s mechanism of gravity, which predicts nothing and involves all sorts of crackpot speculations. LeSage has a false idea that a gas pressure causes gravity. It’s really exchange radiation in QFT. LeSage thinks that there is shielding which stops pressure. Actually, what really happens is that you get a reaction force from receding masses by known, empirically verified laws (Newton’s 2nd and 3rd), but no inward reaction force from a non-receding mass like the planet Earth below you (it’s not receding from you because you’re gravitationally bound to it). Therefore, because local, non-receding masses don’t send a gauge boson force your way, they act as a shield for a simple physical reason based entirely on facts, such as the laws of motion, which are not speculation but are based on observations.
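For the order of magnitude of the claimed outward force, a rough check; note that the mass of the observable universe is highly uncertain, and the 3 x 10^52 kg figure below is an assumed round value for illustration, not one taken from this text:

```python
# Outward force F = m * a with a = cH, as argued above. H and the mass
# of the observable universe are assumed round figures, used only to
# confirm the ~10^43 N order of magnitude.
c = 2.998e8                  # m/s
H = 70e3 / 3.086e22          # Hubble parameter, s^-1
m = 3e52                     # assumed mass of observable universe, kg
F = m * c * H                # newtons; of order 10^43 N
```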

    The 1.5 mm contraction of Earth’s radius according to general relativity causes the problem that Pi would change because circumference (perpendicular to radial field lines) isn’t contracted. Hence the usual explanation of curved spacetime invoking an extra dimension, with the 3 known spatial dimensions a curved brane on 4 dimensional spacetime. However, that’s too simplistic, as explained, because there are 6 dimensions with a 3:3 correspondence between the expanding time dimensions and non-expanding contractable dimensions describing matter. The entire curvature basis of general relativity corresponds to the mathematics for a physical contraction of spacetime!

    The contraction is a physical effect. In 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2 /c^2)^1/2, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/(1 – v^2 / c^2)^1/2, where E(o) is the potential energy of the dislocation at rest.’

    Because constant c = distance/time, a contraction of distance implies a time dilation. (This is the kind of simple argument FitzGerald and Lorentz used to get time dilation from length contraction due to motion in the spacetime fabric vacuum. However, the physical basis of the contraction is motion with respect to the exchange radiation in the vacuum which constitutes the gravitational field, so it is a radiation pressure effect, rather than being caused directly by the Dirac sea.)

    You get the general relativity contraction because the velocity v in the expression (1 – v^2/c^2)^{1/2} is equivalent to the velocity gravity gives to a body which falls from an infinite distance away from M to distance R from M: v = (2GM/R)^{1/2}. This is just the escape velocity formula. By energy conservation there is a symmetry: the velocity a body gains in falling from an infinite distance to radius R from mass M is identical to the velocity needed to escape from mass M beginning at radius R.
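    The escape-velocity formula is easy to check numerically; a minimal sketch using standard textbook values for the Earth (assumed here, not taken from this text):

```python
import math

# Escape velocity v = sqrt(2GM/R); standard textbook constants assumed.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of Earth, kg
R_earth = 6.371e6   # mean radius of Earth, m

v_escape = math.sqrt(2 * G * M_earth / R_earth)
print(f"{v_escape/1000:.1f} km/s")  # ~ 11.2 km/s
```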

    Physically, every body which has gained gravitational potential energy has undergone contraction and time dilation, just as an accelerating body does. This is the equivalence principle of general relativity. SR doesn’t specify how the time dilation rate changes as a function of acceleration; it merely gives the time flow rate once a given steady velocity v has been attained. Still, the result is useful.

    The fact that quantum field theory can be used to solve problems in condensed matter physics, shows that the vacuum structure has some analogies to matter. At very low temperatures, you get atoms characterized by outer electrons (fermions) pairing up to form integer spin (boson like) molecules, which obey Bose-Einstein statistics instead of Fermi-Dirac statistics. As temperatures rise, increasing random, thermal motion of atoms breaks this symmetry down, so there is a phase transition and weird effects like superconductivity disappear.

    At higher temperatures, further phase transitions will occur, with pair production occurring in the vacuum at the IR cutoff energy, whereby the collision energy is equal to the rest mass energy of the vacuum particle pairs. Below that threshold, there’s no pair production, because there can be no vacuum polarization in arbitrarily weak electric fields or else renormalization wouldn’t work (the long range shielded charge and mass of any fermion would be zero, instead of the finite values observed).

    The spacetime of general relativity is approximately classical because all tested predictions of general relativity relate to energies below the IR cutoff of QFT, where the vacuum doesn’t have any pair production.

    So the physical substance of the general relativity “spacetime fabric” isn’t a chaotic fermion gas or “Dirac sea” of pair production. On the contrary, because there is no pair production in space where the steady electric field strength is below ~10^18 V/m, general relativity successfully describes a spacetime fabric or vacuum where there are no annihilation-creation loops; it’s merely exchange radiation, which doesn’t undergo pair production.

    This is why field theories are classical for most purposes at low energy. It’s only at high energy, when you get within a femtometre of a fermion, that QFT loop effects like pair production begin to affect the field, due to vacuum polarization of the virtual fermions and chaotic effects.


    Copy of another comment on the subject to a different post of Kea’s:

    I’ve read Sparling’s paper and it misses the point, it ignores Lunsford’s paper and it’s stringy in its references (probably the reason why arXiv haven’t censored it, unlike Lunsford’s). However, it’s a step in the right direction that at least some out of the box ideas can get on to arXiv, at least if they show homage to the mainstream stringers.

    The funny thing will be that the mainstream will eventually rediscover the facts others have already published, and the mainstream will presumably try to claim that their ideas are new when they hype them up to get money for grants, books etc.

    It is sad that arXiv and the orthodoxy in science generally, censors radical new ideas, and delays or prohibits progress, for fear of the mainstream being offended at being not even wrong. At least string theorists are well qualified to get jobs as religious ministers (or perhaps even jobs as propaganda ministers in dictatorial banana republics) once they accept they are failures as physicists because they don’t want any progress to new ideas. 😉

    Update: I’ve just changed the banner description on the blog to:

    Standard Model and General Relativity mechanism with predictions
    In well-tested quantum field theory, forces are radiation exchanges. Masses recede in spacetime at Hubble speed v = RH = ctH, so there’s outward acceleration a = v/t = cH and outward force F = ma ~ 10^43 N. Newton’s 3rd law implies an inward (exchange radiation) force, predicting forces, curvature, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, so they cause an asymmetry, gravity.

  5. copy of a comment:

    Kea, thanks for this, which is encouraging.

    In thinking about general relativity in a simple way, a photon can orbit a black hole, but at what radius, and by what mechanism?

    The simplest way is to say 3-d space is curved, and the photon is following a curved geodesic because of the curvature of spacetime.

    The 3-d space is curved because it is a manifold or brane on higher-dimensional spacetime, where the time dimension(s) create the curvature.

    Consider a globe of the earth as used in geography classes: if you try to draw Euclidean triangles on the surface of that globe, you get problems with angles being bigger than on a flat surface, because although the surface is two dimensional in the sense of being an area, it is curved by the third dimension.

    You can’t get any curvature in general relativity from the 3 contractible spatial dimensions alone: hence the curvature is due to the extra dimension(s) of time.

    This implies that the time dimension(s) are the source of the gravitational field, because the time dimension(s) produce all of the curvature of spacetime. Without those extra dimension(s) of time, space is flat, with no curved geodesics and no gravity.

    This should tell people that the mechanism for gravity is to be found in the role of the time dimension(s). With the cosmic expansion represented by recession of mass radially outward in three time dimensions t = r/c, you have a simple mechanism for gravity since you have outward velocity varying specifically with time not distance, which implies outward acceleration of all the mass in the universe, using Hubble’s empirical law, dr/dt = v = Hr:

    a = dv/dt
    = dv/(dr/v)
    = v*dv/dr
    = v*d(Hr)/dr
    = vH.

    Thus the outward force of the universe is F = Ma = MvH. Newton’s 3rd law tells you there’s an equal inward force. That inward force predicts gravity, because it (the force-mediating gauge boson exchange radiation, i.e., the gravitational field) exerts pressure against masses from all directions except where shielded by local, non-receding masses. The shielding is simply caused by the fact that non-receding (local) masses don’t cause a reaction force, so they cause an asymmetry, gravity. There are two different working sets of calculations for this mechanism which predict the same formula for G (which is accurate well within observational errors on the parameters) using different approaches (I’m improving the clarity of those calculations in a big rewrite).
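    The differentiation in the steps above can be checked symbolically; a sketch using sympy (assumed available), treating H as constant as in the derivation:

```python
import sympy as sp

# Check a = dv/dt = vH symbolically, given Hubble's law v = H*r, with H
# treated as constant and r a function of t.
t = sp.Symbol('t', positive=True)
H = sp.Symbol('H', positive=True)
r = sp.Function('r')(t)

v = H * r                       # Hubble's law, v = Hr
a = sp.diff(v, t)               # dv/dt = H * dr/dt
a = a.subs(sp.diff(r, t), v)    # substitute dr/dt = v

print(sp.simplify(a - v * H))   # 0, confirming a = vH
```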

    Back to the light ray orbiting the black hole due to the curvature of spacetime: Kepler’s law for planetary orbits is equivalent to saying the radius of orbit, r is equal to 2MG/v^2, where M is the mass of the central body and v is the velocity of the orbiting body.

    This comes from: E = (1/2)mv^2 = mMG/r, as Dr Thomas R. Love has explained.

    Light, however, due to its velocity v = c, is deflected by gravity twice as much as slow-moving objects (v << c). Taking the usual weak-field approximation (the first two terms of the Maclaurin series), the metric components reduce to 1 – 2GM/(rc^2) = 1 – 1/n, where n is the distance in units of black hole event horizon radii, 2GM/c^2 (similar to the way that the Earth’s Van Allen belts are plotted in units of earth radii).

    For the case where n = 1, i.e., one event horizon radius, you get g_{00} = g_{11} = g_{22} = g_{33} = 0.

    That’s obviously wrong, because there is severe curvature there. The problem is that in using the Maclaurin series to the first two terms only, the result only applies to small curvatures, and there is strong curvature at the event horizon radius of a black hole.

    So it’s vital at black hole scales not to use the Maclaurin series to approximate the basic equations, but to keep them intact:

    g_{00} = [(1 – GM/(2rc^2))/(1 + GM/(2rc^2))]^2

    g_{11} = g_{22} = g_{33} = -[1 + GM/(2rc^2)]^4

    where GM/(2rc^2) = (2GM/c^2)/(4r) = 1/(4n), where as before n is the distance in units of event horizon radii. (Every mass constitutes a black hole at a small enough radius, so this is universally valid.) Hence:

    g_{00} = [(4 – 1/n)/(4 + 1/n)]^2

    g_{11} = g_{22} = g_{33} = -(1/256)*[4 + 1/n]^4.

    So for one event horizon radius (n = 1),

    g_{00} = (3/5)^2 = 9/25

    g_{11} = g_{22} = g_{33} = -(1/256)*5^4 = -625/256.

    The gravitational time dilation factor of 9/25 at the black hole event horizon radius is equivalent to a velocity of about 0.933c.
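    The fractional values quoted above can be reproduced exactly; a minimal check using Python’s exact rational arithmetic:

```python
# Numeric check of the metric components quoted above, with
# x = GM/(2rc^2) = 1/(4n) and n the distance in units of event horizon radii.
from fractions import Fraction

def g00(n):
    x = Fraction(1, 4 * n)
    return ((1 - x) / (1 + x)) ** 2

def g11(n):
    x = Fraction(1, 4 * n)
    return -(1 + x) ** 4

print(g00(1))  # 9/25
print(g11(1))  # -625/256
```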

    It’s pretty easy to derive the Schwarzschild metric for weak gravitational fields just by taking the Lorentz-FitzGerald contraction gamma factor and inserting v^2 = 2GM/r on physical arguments, but then we have the problem that the Schwarzschild metric only applies to weak gravity fields, because it uses only the first two terms in the Maclaurin series for the metric tensor’s time and space components. It’s an interesting problem to try to get a completely defensible, simple physical model for the maths of general relativity. Of course, there is no real physical need to work beyond the Schwarzschild metric since … all the tests of general relativity apply to relatively weak gravitational fields within the domain of validity of the Schwarzschild metric. There’s not much physics in worrying about things that can’t be checked or tested.

  6. copy of a comment:

    “I appreciate your enthusiasm for thinking about these problems. However, it seems clear that you haven’t had any formal education on the subjects. The bare mass and charges of the quarks and leptons are actually indeterminate at the level of quantum field theory. When they are calculated, you get infinities. What is done in renormalization is to simply replace the bare mass and charges with the finite measured values.” – V

    No, the bare mass and charge are not the same as measured values, they’re higher than the observed mass and charge. I’ll just explain how this works at a simple level for you so you’ll grasp it:

    Pair production occurs in the vacuum where the electric field strength exceeds a threshold of ~1.3*10^18 V/m; see equation 359 in Dyson’s or 8.20 in

    These pairs shield the electric charge: the virtual positrons are attracted and on average get slightly closer to the electron’s core than the virtual electrons, which are repelled.

    The electric field vector between the virtual electrons and the virtual positrons is radial, but it points inwards towards the core of the electron, unlike the electric field from the electron’s core, which has a vector pointing radially outward. The displacement of virtual fermions is the electric polarization of the vacuum which shields the electric charge of the electron’s core.

    It’s totally incorrect and misleading of you to say that the exact amount of vacuum polarization can’t be calculated. It can, because it’s limited to a shell of finite thickness between the UV cutoff (close to the bare core, where the size is too small for vacuum loops to get polarized significantly) and the IR cutoff (the lower limit due to the pair production threshold electric field strength).

    The uncertainty principle gives the range of virtual particles which have energy E: the range is r ~ h bar*c/E, so E ~ h bar*c/r. Work energy E is equal to the force multiplied by the distance moved in the direction of the force, E = F*r. Hence F = E/r = h bar*c/r^2. Notice the inverse-square law arising automatically. We ignore vacuum polarization shielding here, so this is the bare core quantum force.

    Now, compare the magnitude of this quantum force F = h bar*c/r^2 (high energy, ignoring vacuum polarization shielding) to Coulomb’s empirical law for the electric force between electrons (low energy), and you find the bare core of an electron has a charge e/alpha ~ 137e, where e is the observed electric charge at low energy. So it can be calculated, and agrees with expectations:
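    The ~137 factor is the inverse fine-structure constant, and it drops straight out of comparing the two force laws, since r^2 cancels in the ratio (standard CODATA-style constants assumed):

```python
import math

# Ratio of the 'bare core' quantum force F = hbar*c/r^2 to the Coulomb
# force e^2/(4*pi*eps0*r^2) between electrons; r cancels, leaving 1/alpha.
hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.9979e8        # speed of light, m/s
e = 1.6022e-19      # elementary charge, C
eps0 = 8.8542e-12   # vacuum permittivity, F/m

ratio = (hbar * c) / (e**2 / (4 * math.pi * eps0))
print(f"{ratio:.1f}")   # ~ 137.0, i.e. 1/alpha
```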

    ‘… infinities [due to ignoring cutoffs in vacuum pair production polarization phenomena which shields the charge of a particle core], while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

    ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’/m and e’/e would be of order alpha ~ 1/137.’

    – M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, p76.

    I must say it is amazing how ignorant some people are about this, and this is vital to understanding QFT, because below the IR cutoff there’s no polarizable pair production in the vacuum, so beyond ~1 fm from a charge, spacetime is relatively classical. The spontaneous appearance of loops of charges being created and annihilated everywhere in the vacuum is discredited by renormalization. Quantum fields are entirely bosonic exchange radiation at field strengths below 10^18 V/m. You only get fermion pairs being produced at higher energy, at smaller distances than ~1 fm.

  7. funny comment I chanced to see over at Not Even Wrong, copied here in case it is deleted for being off topic:

    anon. Says:

    May 3rd, 2007 at 4:56 am
    Maldacena’s unproved conjecture of a correspondence between 5-d AdS (anti de Sitter) space and 4-d stringy Yang-Mills conformal field theory (CFT) is one of the best things to come from string theory related research. I just love the fact that, by the holographic principle, 4-d particle physics resides as a brane or surface on a 5-d anti de Sitter space.

    On the negative side,

    *AdS has a negative cosmological constant, instead of a positive one,

    *N=4 Yang-Mills CFT isn’t consistent with 10-d supersymmetry.

    Other conjectured unproved correspondences which can under limited conditions model real phenomena to some extent include:

    *by using phlogiston theory, you can model combustion without oxidation, which was very handy at one time,

    *by using caloric you can model heat without needing kinetic and radiation theories, again, a useful simplification at one time (Ca/KR correspondence)

    *by using Ptolemy’s epicycles, you can actually model planetary motions in the solar system, which was simple for astrologers (Pt/SS correspondence)

    *by using the FitzGerald-Lorentz aether you can model the contraction of moving bodies without needing special relativity, which is oh so useful for crackpots (FL/SR correspondence)

    So the evidence of the AdS/CFT correspondence conjecture holding fairly well implies that maybe people should start taking seriously other models that are similarly based on totally unphysical assumptions?

  8. copy of a comment:

    Regarding Tommaso’s experimental research, maybe I’m wrong but my feeling is that checking bosonic Higgs or superpartner predictions from very speculative theories (built entirely on unobservables) is similar to checking ‘predictions’ from ESP spoon-bending charlatans, or searching for the Loch Ness Monster, ghosts, etc. No matter how much money or scientific effort is put in, you can’t get anywhere because the results are never 100% accurate and the crackpot theorist will cling on to the hope that some result will show up when accuracy improves some more. If Geller’s brain doesn’t bend the spoon, it’s down to insufficient brain energy that day, or an insufficient number of trials. You can’t disprove a crackpot theory experimentally if the theory is ‘not even wrong’ and doesn’t make precise predictions. Keep the theory endlessly adjustable, like the landscape of variants of the Higgs theory and the landscape of supersymmetry theories, and it’s a ‘heads I win, tails you lose’ situation. Whatever you do find, the theory will be able to accommodate with suitable fiddles and adjustments, while there is no risk of it failing because it hasn’t made any precise (falsifiable) predictions anyway.

    I’ve just finished reading Prof. Smolin’s Trouble with Physics, which I’ve read in a random, non-linear way in bits and pieces over a long time. He has a very deep understanding of some vital concepts underlying spacetime which I find helpful to clarifying the issue of the role of time dimension(s).

    Pages 42-43 are really useful. Prof. Smolin explains curvature of spacetime very simply there, especially figure 3 which plots the deceleration of a car as space (i.e. distance in direction of motion) versus time.

    The curvature of the line (e.g. for space = time^2) is “curved spacetime”.

    I think this is a very good way to explain the curvature of spacetime! Quite often, you hear criticisms that nobody has ever seen the curvature of spacetime, but this makes it clear that the general relativity is addressing physical facts expressed mathematically.

    It also makes it clear that “flat spacetime” is simply a non-curved line on a graphical plot of space versus time. Because special relativity applies to non-accelerating motion, it is restricted to flat spacetime. Prof. Smolin writes (p42):

    “Consider a straight line in space. Two particles can travel along it, but one travels at a uniform speed, while the other is constantly accelerating. As far as space is concerned, the two particles travel on the same path. But they travel on different paths in spacetime. The particle with a constant speed travels on a straight line, not only in space but also in spacetime. The accelerating particle travels on a curved path in spacetime (see Fig. 3).

    “Hence, just as the geometry of space can distinguish a straight line from a curved path, the geometry of spacetime can distinguish a particle moving at a constant speed from one that is accelerating.

    “But Einstein’s equivalence principle tells us that the effects of gravity cannot be distinguished, over small distances, from the effects of acceleration. Hence, by telling which trajectories are accelerated and which are not, the geometry of spacetime describes the effects of gravity. The geometry of spacetime is therefore the gravitational field.”

    What I like most about it is that Prof. Smolin is explaining spacetime by matching up one spatial dimension with one time dimension.

    Extend this to three spatial dimensions, and you would naively expect to require three time dimensions, instead of just one.

    The simplification that there appears to be just one time dimension surely arises because the time dimensions are all expanding uniformly, so there is no mathematical difference between them.

    In three spatial dimensions, if all the spatial dimensions are indistinguishable it is a case of spherical symmetry. In this case, x = y = z = r, where r is radial distance from the middle.

    Hence, three dimensions can be treated as one, provided that they are similar: t_1 = t_2 = t_3 = t.

    So the reason why three time dimensions can normally be treated as one time dimension is that the time dimensions are symmetric to one another (unlike spatial dimensions). So the orthogonal symmetry group SO(3,3) is equivalent to SO(3,1), provided that the three time dimensions are identical.

  9. copy of interesting comment

    anon. Says:

    May 6th, 2007 at 5:29 am

    Gross … declared that “I am still a true believer in the sexiness of string theory”

    Sexiness? Maybe that will be the case when string theory has been proved finite, gravitons have been discovered, particle physics has been discovered in the landscape, the hierarchy problem has been sorted out, and there is some evidence for supersymmetry particles or extra dimensions.

    Nima Arkani-Hamed … ended by claiming that the situation is like that of the quantum theory in 1911, with the angst people are experiencing due to the landscape just like the difficulties physicists faced early in the century in going from classical physics to quantum physics.

    I agree that 1911 is an analogy, but only if M-theory is the equivalent of J. J. Thomson’s plum-pudding atom, supported by groupthink. In 1911 Rutherford published the nuclear atom based on analysis of alpha particle scattering angles in gold foil. The plum-pudding atom theorists said that the nuclear atom is impossible, because the concentration of positive charge in the nucleus would cause it to explode. To them, any self-consistent theory needed to explain the normal stability of most atoms. It seemed very ad hoc to have to make up a special new nuclear force, merely to hold the proposed ‘nucleus’ together.

    String theory imposes a similarly mind-barring speculative framework by building on graviton theory. You end up with constraints that any new theory of gravity must be based on similar speculative criteria to that of string theory, i.e., gravitons with spin-2 for attractive forces. The tragedy is that this speculative framework for judging new ideas might be wrong by ignoring something, just as the nuclear atom doesn’t need to have a structure which is stable according to [nuclear force ignoring] electrostatic criteria.

    anon. Says:

    May 6th, 2007 at 5:40 am

    Nagaoka in Japan suggested the nuclear atom (nucleus surrounded by orbiting electrons) in 1904, but he was censored out because people said that the positive charge in the nucleus would cause it to blow up.

    This is the danger of insisting that new ideas agree with existing criteria for judging ideas. Sometimes, existing ideas are wrong because they are incomplete. Quantum gravity ideas are certainly incomplete, so it’s crazy to use them to rule out alternatives. Only experimental data should be used to judge theories.

  10. copy of a comment:

    That’s a very nice Java application illustrating the difference between Newtonian gravity and general relativity for orbits!

    The precession of the perihelion of the orbit makes the general relativity solution look quite chaotic when clicking the “faster” button a few times.

    Obviously, it’s still a planar orbit and isn’t actually chaotic.

    I’d like to do a classical-type simulation for a multi-body problem, say the helium atom (2 electrons) to study the chaos produced by the effect of each electron on the other as it moves.

    Something like the Schroedinger distribution might result:

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

    – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

    In particular, for the hydrogen atom, is the chaotic Schroedinger distribution just the effect of the measurement uncertainty when you introduce another (third) particle to probe where the electron is relative to the nucleus, or is the uncertainty there before you take a measurement, because of virtual particles in the vacuum surrounding the electron causing small scale random deflections of its path:

    ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

    – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

  11. copy of a comment:

    Mach’s relationism isn’t proved. He claimed that the earth’s rotation is just relative to the stars. However, the Foucault pendulum proves that absolute rotation can be determined without reference to the stars. Mach could only counter that objection by his faith (without any evidence at all) that if the earth was not rotating, but the stars were instead rotating around it, then the Coriolis force would still exist and deflect a Foucault pendulum.

    To check Mach’s principle, instead of the complex situation of the earth and the stars (where the dynamical relationship is by consensus unknown until there is a quantum gravity to explain inertia by the equivalence principle), consider the better understood situation of a proton with an electron orbiting it.

    From classical mechanics, neglecting, therefore, the normal force-causing exchange (equilibrium) radiation which constitutes fields, there should be a net (non-equilibrium) emission of radiation by an accelerating charge.

    If you consider a system with an electron and a proton nearby but not in orbit, they will be attracted by Coulomb’s law. (The result of the normal force-causing exchange radiation.)

    Now, consider the electron in orbit. Because it is accelerating with acceleration a = (v^2)/r, it is continuously emitting radiation in a direction perpendicular to its orbit; i.e., generally the direction of the radiation is the radial line connecting the electron with the proton.

    Because the electron describes a curved orbit, the angular distribution of its radiation is asymmetric with more being emitted generally towards the nucleus than in the opposite direction.

    The recoil of the electron from firing off radiation is therefore in a direction opposite to the centripetal Coulomb attraction force. This is how the “centrifugal” effect works.

    What is so neat is that no net loss of kinetic energy occurs to the electron. T. H. Boyer in 1975 (Physical Review D, v11, p790) suggested that the ground state orbit is a balance between radiation emitted due to acceleration and radiation absorbed from the vacuum’s zero-point radiation field, caused by all the other accelerating charges which are also radiating in the universe surrounding any particular atom.

    H. E. Puthoff in 1987 (Physical Review D v35, p3266, “Ground state of hydrogen as a zero-point-fluctuation-determined state”) assumed that the zero-point electromagnetic radiation responsible for the Casimir force has the energy spectrum

    Rho(Omega)d{Omega} = {h bar}[{Omega}^3]/[2(Pi^2)*(c^3)] d{Omega}

    which causes an electron in a circular orbit to absorb radiation from the zero-point field with the power

    P = (e^2)*{h bar}{Omega^3}/(6*Pi*Epsilon*mc^3)

    where e is charge, Omega is angular frequency, and Epsilon is permittivity. Since the power radiated by an electron with acceleration a = r*{Omega^2} is:

    P = (e^2)*(a^2)/(6*Pi*Epsilon*c^3),

    equating the power the electron receives from the zero-point field to the power it radiates due to its orbit gives

    m*{Omega}*(r^2) = h bar,

    which is the ground state of hydrogen. Puthoff writes:

    “… the ground state of the hydrogen atom can be precisely defined as resulting from a dynamic equilibrium between radiation emitted due to acceleration of the electron in its ground-state orbit and radiation absorbed from zero-point fluctuations of the background vacuum electromagnetic field, thereby resolving the issue of radiative collapse of the Bohr atom.”
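    Puthoff’s power-balance step can be checked symbolically; a sketch with sympy (assumed available), using the two power formulas quoted above:

```python
import sympy as sp

# Symbolic check: equating Puthoff's absorbed zero-point power to the power
# radiated by an electron in circular orbit (acceleration a = r*Omega^2)
# should reduce to the Bohr ground-state condition m*Omega*r^2 = hbar.
e, hbar, Omega, eps, m, c, r = sp.symbols('e hbar Omega eps m c r',
                                          positive=True)

P_absorbed = e**2 * hbar * Omega**3 / (6 * sp.pi * eps * m * c**3)
a = r * Omega**2
P_radiated = e**2 * a**2 / (6 * sp.pi * eps * c**3)

# Solve the power balance for hbar; the single solution is m*Omega*r^2.
solutions = sp.solve(sp.Eq(P_absorbed, P_radiated), hbar)
print(solutions)
```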

    This model dispenses with Mach’s principle. An electron orbiting a proton is not equivalent to the proton rotating while the electron remains stationary; one case results in acceleration of the electron and radiation emission, while the other doesn’t.

    The same arguments will apply to the case of the earth rotating, or the stars orbiting a stationary earth, although some kind of quantum gravity/inertia theory is there required for the details.

    One thing I disagree with Puthoff over is the nature of the zero-point field. Nobody seems to be aware that the IR cutoff and the Schwinger requirement for a minimum electric field strength of ~10^18 V/m prevent the entire vacuum from being subject to creation/annihilation loop operators. Quantum field theory only applies to the fields above the IR cutoff, or closer than ~10^{-15} metre to a charge.

    Beyond that distance, there’s no pair production in the vacuum whatsoever, so all you have is radiation. In general, the “zero-point field” is the gauge boson exchange radiation field which causes forces. The Casimir force works because long wavelengths of the zero-point field radiation are excluded from the space between two metal plates, which therefore shield one another and get pushed together like two suction cups being pushed together by air pressure when normal air pressure is reduced in the small gap between them.

    Puthoff has an interesting paper, “Source of the vacuum electromagnetic zero-point energy” in Physical Review D, v40, p4857 (1989) [note that an error in this paper is corrected by Puthoff in an update published in Physical Review D v44, p3385 (1991)]:

    “… the zero-point energy spectrum (field distribution) drives particle motion … the particle motion in turn generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle. The result is the appropriate frequency-cubed spectral distribution of the correct order of magnitude, thus indicating a dynamic-generation process for the zero-point energy fields.”

    What interests me is that Puthoff’s calculations in that paper tackle the same problems which I had to deal with over the last decade in regard to a gravity mechanism. Puthoff writes that since the radiation intensity from any charge will fall as 1/r^2, and since charges in shells of thickness dr will have an area of 4*Pi*r^2, the increasing number of charges at bigger distances offsets the inverse square law of radiation, so you get a version of Olbers’ paradox appearing.

    In addition, Puthoff notes that:

    “… in an expanding universe radiation arriving from a particular shell located now at a distance was emitted at an earlier time, from a more compacted shell.”

    This effect tends to make Olbers’ paradox even more severe, because the earlier universe we see at great distances should be more and more compressed with distance, and ever brighter.

    Redshift of radiation emitted from receding matter at such great distances solves these problems.

    However, Puthoff assumes that some already-known metric of general relativity is correct, which is clearly false, because the redshift of gauge bosons in an expanding universe will weaken the gravitational coupling between receding (distant) masses, a fact that all widely accepted general relativity metrics totally ignore!

    Sorry for the length of this comment, and feel free to delete this comment (I’ll put a copy on my blog).
