## Rabinowitz and quantum gravity

Dr Mario Rabinowitz, the author of the arXiv paper “Deterrents to a Theory of Quantum Gravity,” has kindly pointed out his approach to the central problem I’m dealing with.  (Incidentally, the problem he has with quantum gravity does not apply to the quantum gravity mechanism I’m working on, where gravity is a residue of the electromagnetic field caused by the exchange of electromagnetic gauge bosons, which allows two kinds of addition: a weak, always attractive force, and a force about 10⁴⁰ times stronger with both attractive and repulsive mechanisms.)  His paper, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No. 11, 8-13 (1990), is the earliest I’m aware of which arrives at a general result equal to Louise Riofrio’s equation MG = tc³.

He sets the gravitational force equal to the inertial force, F = mMG/R² = [mM/(M + m)]v²/R ≈ (mc²)/R.  This gives MG = Rc² = (ct)c² = tc³, which is identical to Riofrio’s equation.

Here is my detailed treatment of Mario’s analysis.  The cosmological recession of Hubble’s law, v = HR (where H is the Hubble parameter and R is radial distance), implies an acceleration in spacetime (since R = ct) of a = dv/dt = d(HR)/dt = Hv = (v/R)v = v²/R.  (This is not controversial or speculative; it is just calculus applied to Hubble’s v = HR in the Minkowski spacetime we can observe, where: ‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.)  Hence the outward force on mass m due to recession is F = ma = mv²/R = mc²/R for extreme distances, where most of the mass is and where redshifts are great, so that v ~ c.
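As a sanity check on the magnitude of this acceleration, here is a minimal Python sketch; the numerical values of H and c are standard estimates I am supplying for illustration, not figures taken from the argument above.

```python
# Cosmological acceleration a = dv/dt = Hv from Hubble's law v = HR,
# evaluated in the extreme-distance limit v ~ c.
H = 2.27e-18   # Hubble parameter in s^-1 (~70 km/s/Mparsec)
c = 2.998e8    # speed of light in m/s

a = H * c      # outward acceleration in m/s^2
print(f"a = {a:.2e} m/s^2")   # roughly 7e-10 m/s^2
```

The result is tiny on laboratory scales, which is why the corresponding outward force only becomes enormous when multiplied by the huge mass of the receding universe.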

Hence the inward (attractive) gravity force is balanced by this outward force:

F = mMG/R² = mc²/R

Thus,

MG = Rc² = (ct)c² = tc³.

(This result is physically and dimensionally correct but is quantitatively off by a dimensionless correction factor of e³ ≈ 20, because it ignores the dynamics of quantum gravity at long distances: the rising density as time approaches zero, which increases toward infinity the effective gravity effect due to the expansion of the universe, and the falling strength of the gravity-causing exchange radiation as time goes toward zero, due to the extreme redshift of that radiation, which weakens gravity.  However, the physical arguments above are very important and can be compared to those in the mechanism at http://feynman137.tripod.com/.  The correct formula is e³MG = tc³, where, because of the lack of gravitational retardation in quantum gravity, t = 1/H (H being the Hubble parameter), instead of the t = (2/3)/H of the classic Friedmann scenario with gravitational deceleration.)
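To see the scale involved, here is a sketch solving the corrected relation e³MG = tc³ for the effective mass M, taking t = 1/H as argued above; the input constants are standard estimates of my own, and the result is illustrative only.

```python
import math

# Solve e^3 * M * G = t * c^3 for M, with t = 1/H
# (no gravitational retardation, per the text).
H = 2.27e-18    # Hubble parameter, s^-1
c = 2.998e8     # speed of light, m/s
G = 6.674e-11   # gravitational constant, N m^2 kg^-2

t = 1.0 / H
M = t * c**3 / (math.e**3 * G)
print(f"M = {M:.1e} kg")   # ~9e51 kg
```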

Historically, the rediscovery of this result since Mario’s 1990 paper has occurred three times, each under different circumstances (the original is listed first):

(1) M. Rabinowitz, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No.11, 8-13 (1990).

(2) My own treatment, Electronics World, various issues (October 1996-April 2003), based on a physical mechanism of gravity (outward force of matter in receding universe is balanced, by Newton’s 3rd law, by an inward force of gauge boson pressure, which causes gravity by asymmetries since each fundamental particle acts as a reflecting shield, so masses shield one another and get pushed together by gauge boson radiation, predicting the value of G quite accurately).

[Initially I had a crude physical model of the Dirac sea, in which the motion of matter outward resulted in an inward motion of the Dirac sea to fill in the volume being vacated.  This was objected to strongly for being a material-pressure LeSage gravity mechanism, although it makes the right prediction for gravity strength (unlike other LeSage models) and utilises the right form of the Hubble acceleration outward, a = dv/dt = d(HR)/dt = Hv.  It was published in Electronics World from October 1996 (letters page item) to April 2003 (a major six-page paper).  A gauge boson exchange radiation based calculation for gravity, which does the same thing without the Dirac-sea material objections to LeSage gravity, was then developed in 2005.  I’ve little free time, but am rewriting my site into an organised book which will be available free online.  The correct formula from http://feynman137.tripod.com/ for the gravity constant is G = (3/4)H²/(Pi*Rho*e³), where Rho is the observed (not Friedmann critical) density of visible matter and dust, etc.  This equation is equivalent to e³MG = tc³, and differs from the Friedmann critical density result by a factor of approximately 10, predicting that the amount of dark matter is less than implied by the critical density law.  In fact, you get a very good prediction of the gravity constant from the detailed Yang-Mills exchange radiation mechanism by ignoring dark matter altogether, as a first approximation.  Since dark matter has never been observed in a laboratory, but is claimed to be abundant in the universe, you have to ask why it is avoiding laboratories; in fact the most direct evidence claimed for it doesn’t reveal any details about it.  It is required in the conventional (inadequate) approximations to gravity, but the correct quantum gravity, which predicted the non-retarded expansion of the universe in 1996, two years before Perlmutter’s observational data confirmed it, reduces the amount of dark matter dramatically and makes various other validated predictions.]

(3) John Hunter published a conjecture on page 17 of the 12 July 2003 issue of New Scientist, suggesting that the rest mass energy of a particle, E = mc², is equal to its gravitational potential energy with respect to the rest of the matter in the surrounding universe, E = mMG/R.  This leads to E = mc² = mMG/R, hence MG = Rc² = (ct)c² = tc³.  He has the conjecture on a website here, which contains an interesting and important approach to solving the galactic rotation curve problem without inventing any unobserved dark matter, although his cosmological speculations on linked pages are unproductive and I wouldn’t want to be associated with those non-predictive guesses.  Theories should be built on facts.

(4) Louise Riofrio came up with the basic equation MG = tc³ by dimensional analysis and has applied it to various problems.  She correctly concludes that there is no dark energy, but one issue is what varies in the equation MG = tc³ to compensate for the increase of time on the right-hand side.  G is increasing with time, while M and c remain constant; this conclusion comes from the detailed gravity mechanism.  Contrary to claims by Professor Sean Carroll and the late Dr Edward Teller, an increasing G does not vary the sun’s brightness or the fusion rate in the first minutes of the big bang (the electromagnetic force varies in the same way, so Coulomb’s repulsion between protons was different, offsetting the effect on the fusion rate of the varying gravitational compression), but it does correctly predict that gravity was weaker in the past when the cosmic background radiation was emitted, thus explaining quantitatively why the ripples in that radiation due to mass were so small when it was emitted 300,000 years after the big bang.  This, together with the lack of gravitational retardation of the rapid expansion of the universe (gravity can’t retard expansion between relativistically receding masses, because the gravity-causing exchange radiation is redshifted, losing its force-causing energy, like ordinary light, which is also redshifted in cases of rapid recession; this redshift effect is precisely why we don’t see a blinding light and lethal radiation from extreme distances corresponding to early times after the big bang), gets rid of the ad hoc inflationary-universe speculations.

I’m disappointed by Dr Peter Woit’s new post on astronomy where he claims astronomy is somehow not physics: ‘When I was young, my main scientific interest was in astronomy, and to prove it there’s a very geeky picture of me with my telescope on display in my apartment, causing much amusement to my guests (no way will I ever allow it to be digitized, I must ensure that it never appears on the web). By the time I got to college, my interests had shifted to physics…’

I’d like to imagine that Dr Woit just means that current claims of observing ‘evolving dark energy’ and ‘dark matter’ (with lots of alleged evidence which turns out to be gravitationally caused distortions, which could be due to massive neutrinos or anything, and which doesn’t have a fig leaf of direct laboratory confirmation for the massive quantity postulated to fix epicycles in the current general relativity paradigm, which ignores quantum gravity) are not physics.  However, he is unlikely to start claiming that the mainstream ‘time-varying-lambda-CDM’ model of cosmology (a time-varying dark energy ‘cosmological constant’ plus cold dark matter) is nonsense, because in his otherwise excellent book Not Even Wrong he uses the false, ad hoc, small positive fixed value of the cosmological constant to ridicule the massive value predicted by force unification considerations in string theory.  Besides, if he knows little of modern astronomy and cosmology, he will not be in an expert position to evaluate and criticise it competently.  I hope Dr Woit will immerse himself in the lack of evidence for modern cosmology and perhaps come up with a second volume of Not Even Wrong, addressed at the lambda-CDM model and its predictive, checkable replacement using a proper system of quantum gravity.

For my earlier post on this topic, see https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl/

Other news: my domain http://quantumfieldtheory.org/ is up and running with some draft material – now I just have to write the free quantum field theory textbook to put on there!

NUMERICAL CHECK

The current observational value of H is about 70 ± 2.4 km/s/Mparsec ~ 2.27×10⁻¹⁸ s⁻¹, and Rho causes the difficulty today.  The observed visible matter (stars, hot gas clouds) has long been estimated to have a mean density around us of ~4×10⁻²⁸ kg/m³, although studies show that this should be increased by about 15% for dust, plus various other factors.  The prediction G = (3/4)H²/(Pi*Rho*e³) is a factor of e³/2 ~ 10 times smaller than that in the Friedmann critical density formula.  Its accuracy depends on what evidence you take for the density.  It happens to agree exactly with the statement by Hawking in 2005:

‘When we add up all this dark matter [which accounts for the high speed of the outermost stars orbiting spiral galaxies like the Milky Way, and the high speed of galaxies orbiting in clusters of galaxies], we still get only about one-tenth of the amount of matter required to halt the expansion [the critical density in Friedmann’s solution].’

– S. Hawking and L. Mlodinow, A Briefer History of Time, Bantam, London, 2005, p65.

Turning it around, the formula predicts a density of 9.2×10⁻²⁸ kg/m³, about twice the observed density if that is taken as the traditional figure of 4×10⁻²⁸ kg/m³; however, the latest estimates of the density are higher and similar to the predicted value of 9.2×10⁻²⁸ kg/m³, for example the following:

‘Astronomers can estimate the mass of galaxies by totalling up the number of stars in the galaxy (about 10⁹) and multiplying by the mass of one star, or by observing the dynamics of orbiting parts of a galaxy. Next they add up all the galactic mass they can see in this region and divide by the volume of space they are looking at. If this is done for bigger and bigger regions of space the mean density approaches a figure of about 10⁻³⁰ grams per cubic centimetre or 10⁻²⁷ kg m⁻³. You will realise that there is some doubt in this value because it is the result of a long chain of estimations.’

Putting this approximate value of Rho = 10⁻²⁷ kg m⁻³ into G = (3/4)H²/(Pi*Rho*e³) with H as before gives G = 6.1×10⁻¹¹ N m² kg⁻², which is only 9% low; although the experimental error in density observations is relatively high, it will improve with further astronomical studies, just as the error in the Hubble parameter has improved with time.  This provides a further check.  (Other relevant checks on quantum gravity are discussed here, top post.)
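The arithmetic behind this check can be reproduced in a few lines.  This is my own sketch of the calculation using the figures quoted above; the only inputs are H, the observed density, and (for the inverse check) the measured G.

```python
import math

# Numerical check of the prediction G = (3/4)H^2/(Pi*Rho*e^3),
# with H ~ 2.27e-18 s^-1 and Rho ~ 1e-27 kg/m^3 as quoted.
H = 2.27e-18     # Hubble parameter, s^-1
rho = 1.0e-27    # estimated mean density, kg/m^3
G_obs = 6.674e-11

G_pred = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(f"predicted G = {G_pred:.2e} N m^2 kg^-2")   # ~6.1e-11

# Inverting the same formula for the density implied by the measured G:
rho_pred = 0.75 * H**2 / (math.pi * G_obs * math.e**3)
print(f"implied density = {rho_pred:.2e} kg/m^3")  # ~9.2e-28
```

The predicted G comes out 8–9% below the measured 6.674×10⁻¹¹, consistent with the figure stated above.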

Here’s an extract from a response I sent to Dr Rabinowitz on 8 January, regarding the issue of gauge bosons and the accuracy of the calculation of G in comparison to observed data:

“Are your gauge bosons real or virtual?” What’s the difference? It’s the key question in many ways.  Obviously they are real in the sense they really produce electric forces. But you can’t detect them with a radio receiver or other instrument designed to detect either oscillatory waves or discrete particles.
“I am troubled by your force calculation (~10^43 N) which is an input to your derivation of G.  I’m inclined to think that the force calculation could be off by a large factor, so that one may question that ‘the result predicts gravity constant G to within 2%’.”

First, the “outward force” is ambiguous.  If you ignore the fact that the more distant observable universe has higher density, then you get one figure. If you assume that density increases to infinity with distance, you get another result for outward force (infinity).  Finally, if you are interested in the inward reaction force carried by radiation (gauge bosons) then you need to allow for the redshift of those due to the recession of the matter emitting them, which cancels out the infinity due to density increasing, and gives a result of about 7 x 10^43 N or whatever.  In giving outward force as ~10^43 N, I’m giving a rough figure which anyone will be able to validate approximately without having to do the more complicated calculations.

I used two published best estimates for the Hubble parameter and the density of the visible matter plus dust in the universe.  These allowed G to be predicted.  The result was within 2% of the empirically known value of G.  I used 70 km/s/Mparsec for H a decade ago and that is still the correct figure, although the uncertainty is falling.  A decade ago, there was no estimate to the uncertainty because the data clustered between two values, 50 and 100.  Now there is agreement that the correct value of H is very close to 70.  …  I don’t think there is any massive error involved in observational astronomy.  There used to be a confusion because of two types of variable star, with Hubble using the wrong type to estimate H.  Hubble had a value of 550 for H, many times too high.  That sort of error is long gone.

Recent response to Professor Landis about general relativity:

“Ultimately, it’s all in the experimental demonstration.  If Einstein’s theory hadn’t been confirmed by tests, it would have been abandoned regardless of how pretty or ugly it may be.” – Geoffrey Landis

What about string theory, which has been around since 1969, can’t be tested, and doesn’t hold out any hope of being testable?  I disagree with Landis: the tests of general relativity would first have been repeated, and if they still didn’t agree, an additional factor would have been invented or discovered to make the theory correct.

Newton’s gravity law written in tensors would be R_uv = 4*Pi*T_uv,

which is false because the divergence of R_uv does not generally vanish, whereas the divergence of T_uv must vanish to conserve energy; hence the equation violates conservation of energy.  Einstein replaces T_uv with T_uv – (1/2)(g_uv)T, a source term whose equation (equivalent to setting the identically divergence-free Einstein tensor proportional to T_uv) does not contradict the conservation of energy.  If the solutions of general relativity are wrong, then you would need to find out physically what is causing the discrepancy.

The Friedmann solution of general relativity predicted that gravity slows down the expansion.  Observations by Perlmutter on distant supernovae showed that there was something wrong.  Instead of abandoning general relativity, a suitable small positive “cosmological constant” was adopted to keep everything fine.  Recently, however, more detailed observations suggest that such a “cosmological constant” lambda would have to vary with time.

Discussion by email with Dr Rabinowitz:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 5:27 AM

Subject: Paul Gerber is an unsung hero

Dear Nigel,

… Einstein’s General Relativity (EGR)  makes the problem much more difficult than your simple approach.

Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.  One should be able to calculate the time constant for slowing down a body.  …

Best regards,
Mario

From: Nigel Cook

Sent: Wednesday, January 17, 2007 7:07 PM

Subject: Re: Paul Gerber is an unsung hero

Dear Mario,

“Since the leading edge of the Universe is moving at nearly c, one needs to bring relativity into the equations.  Special relativity (without boosts) can’t do it.  Einstein’s General Relativity (EGR) makes the problem much more difficult than your simple approach.”

The mechanism of relativity comes from this simple approach: the radiation pressure on a moving object causes the contraction effect.  Any inconsistency is a failure of general or special relativity, which are mathematical structures based on principles.  An example of a failure is the lack of deceleration of the universe…

“Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.”

This is the objection of Feynman to LeSage in his November 1964 Cornell lectures on the Character of Physical Law.  The failure of LeSage has been discussed in detail by people from Maxwell to Feynman.  I have some discussion of LeSage at http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html

See http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html where the Dirac sea (or the equivalent Yang-Mills radiation exchange pressure on moving objects) is the mechanism for relativity:

The Dirac sea was shown to mimic the special-relativistic contraction and mass-energy variation; see C. F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp. 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v²/c²)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = Eo/(1 – v²/c²)^(1/2), where Eo is the potential energy of the dislocation at rest.’

The force inward on every point is enormous: about 10^43 newtons.  General relativity gives the result that the Earth’s radius is contracted by (1/3)MG/c^2 ≈ 1.5 millimetres.  The physical mechanism of this process (gravity dynamics by radiation pressure of exchange radiation) is the basis for the gravitational “curvature” of spacetime in general relativity, because this shrinking of radius is radial only: transverse directions (e.g. the circumference) are not affected.  Hence the ratio circumference/radius will vary depending on the mass of the object, unless you invent a fourth dimension and preserve Pi by stating that spacetime is curved by the extra dimension.
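The 1.5 mm figure is easy to reproduce.  Here is a quick sketch using standard values for G and the Earth’s mass; these inputs are mine, not stated in the text.

```python
# Radial contraction of the Earth, (1/3) G M / c^2, as quoted from
# general relativity in the text above.
G = 6.674e-11        # gravitational constant, N m^2 kg^-2
M_earth = 5.972e24   # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s

contraction = G * M_earth / (3 * c**2)   # metres
print(f"contraction = {contraction * 1000:.2f} mm")   # ~1.5 mm
```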

LeSage (who apparently plagiarised Fatio, a friend of Newton) was also dismissed for various other, equally false, reasons:

1. Maxwell claimed that the force-causing radiation would have to be so great it would heat up objects until they were red hot.  This is vacuous for various reasons: the strong nuclear force (unknown in Maxwell’s time) is widely accepted to be mediated by pions and other particles, and is immensely stronger than gravity, but doesn’t cause things to melt.  Heat transfer depends on how energy is coupled.  It is known that gravity and other forces are indirectly coupled to particles via a vacuum field that has mass and other properties.

2. Several physicists in the 1890s wrote papers which dismissed LeSage by claiming that any useful employment of the mechanism makes gravity depend on the mass of atoms rather than on the surface area of a planet, and so requires the gravity causing field to be able to penetrate through solid matter, and that therefore matter must be mainly void, with atoms mainly empty.  This appeared absurd.  But when X-rays, radioactivity and the nuclear atom confirmed LeSage, he was not hailed as having made a successful prediction, confirmed experimentally.  The later mainstream view of LeSage was summed up by Eddington: ‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, ‘Space Time and Gravitation’, Cambridge University Press, 1921, p64.  This is partly correct in the sense that there was no numerical prediction from LeSage that could be tested.

3. Feynman’s objection assumes that the force carrying radiation interacts chaotically with itself, like gas molecules, and would fill in “shadows” and cause drag on moving objects by striking moving objects and carrying away momentum randomly in any direction.  This is a straw man argument: Feynman should have considered the Yang-Mills exchange radiation as the only basis for forces below the infra red cutoff, ie, beyond 1 fm from a particle core.

The gas of creation-annihilation loops only occurs above the IR cutoff.  It is ironic that Feynman missed this, seeing his own major role in discovering renormalization which is evidence for the IR cutoff.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 7:51 PM

Subject: Contradictory prediction of the LeSage model to that of Newton

Dear Nigel,

Thanks for addressing the issues I raised.

I know very little about the LeSage model, its critics, and its proponents.  Nevertheless, let me venture forth.  Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks:  One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.

Best regards,
Mario

From: Nigel Cook

Sent: Thursday, January 18, 2007 10:57 AM

Subject: Re: Contradictory prediction of the LeSage model to that of Newton

Dear Mario,

It is just a very simple form of radiation shielding.  Each fundamental particle is found to have a gravity shielding cross-section of Pi*R^2, where R = 2GM/c^2, M being the mass of the particle.  This precise result, that the black hole event horizon area is the area of gravitational interactions, is not a fiddle to make the theory work, but instead comes from comparing the results of two different derivations of G, each based on a different set of empirically-founded assumptions or axioms.

It is also consistent with the idea of Poynting electromagnetic energy current being trapped gravitationally to form fermions from bosonic energy (the E-field lines are spherically symmetric in this case, while the B-field lines form a torus shape which becomes a magnetic dipole at long distances because the polarized vacuum around the electron core shields transverse B-field lines as it does radial E-field lines, but doesn’t of course shield radial – ie polar – B-field lines).

Notice that the black hole radius of an electron is many orders of magnitude smaller than the Planck length.  The idea that gravity will be reduced by particles being directly behind one another is absurd, because the gravitational interaction cross-section is so small.  You can understand the small size of the gravitational cross-section when you consider that the inward force of gauge boson radiation is something on the order 10^43 N, directed towards every particle.  This force only requires a tiny shielding to produce a large gravitational force.
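To make the scale concrete, here is a sketch evaluating the claimed cross-section Pi*R^2 with R = 2GM/c^2 for an electron; the constants are standard values supplied by me for illustration.

```python
import math

# Black hole radius R = 2GM/c^2 and cross-section pi*R^2 for an
# electron, compared with the Planck length.
G = 6.674e-11        # N m^2 kg^-2
c = 2.998e8          # m/s
m_e = 9.109e-31      # electron mass, kg
l_planck = 1.616e-35 # Planck length, m

R = 2 * G * m_e / c**2
sigma = math.pi * R**2
print(f"R = {R:.2e} m (Planck length = {l_planck:.2e} m)")
print(f"cross-section = {sigma:.2e} m^2")
```

The radius comes out around 10⁻⁵⁷ m, over twenty orders of magnitude below the Planck length, which is the point being made above about the smallness of the gravitational cross-section.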

There are obviously departures produced by this model from standard general relativity under extreme circumstances.  One is that you can never have a gravitational force – regardless how big the mass is – that exceeds 10^43 N.  I don’t list this as a prediction in the list of predictions on my home page, because it is clearly not a falsifiable or checkable prediction, except near a large black hole which can’t very well be examined.  The effect of one mass being behind the other, and so not adding any additional geometrical shielding to a situation, is dealt with in regular radiation shielding calculations.  If amount of shielding material H is enough to cut the gravity causing radiation pressure by half, the statistical effect of amount M is that the shielded pressure fraction will not be f = 1 – (0.5M/H), but will instead be f = exp{-M(ln 2)/H}.

However, we know mathematically that f = 1 – (0.5M/H) becomes a brilliant approximation to f = exp{-M(ln 2)/H} when M << H.  Calculations show that you will generally have to have a mass approaching the mass of the universe in order to get any significant effect whereby “overlap” issues become effective.
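The closeness of the linear approximation for M << H is easy to demonstrate numerically.  In this sketch H is the half-value shielding thickness from the text (not the Hubble parameter), and the values of M are arbitrary illustrative choices of mine.

```python
import math

# Shielded pressure fraction: exact exponential attenuation law vs the
# linear approximation, for shielding amount M in units of the
# half-value thickness H.
def f_exact(M, H):
    return math.exp(-M * math.log(2) / H)

def f_linear(M, H):
    return 1.0 - 0.5 * M / H

H = 1.0
for M in (0.01, 0.1, 1.0, 10.0):
    print(f"M/H = {M:5.2f}: exact = {f_exact(M, H):.4f}, "
          f"linear = {f_linear(M, H):.4f}")
```

The two agree closely for M well below H (and both give 0.5 at M = H by construction); only when the shielding mass becomes a significant fraction of the half-value thickness do the “overlap” corrections matter, as stated above.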

“Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks: one rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.”

You or I need to make some calculations to check this.  The problem here is that I don’t immediately see the mechanism by which you think there would be a reduction in gravity, or how much of a reduction there would be; do you allow for mass increase due to speed of rotation, or is that ignored?  Many of the “criticisms” that could be laid against a LeSage gravity could also be laid against the Standard Model SU(3)xSU(2)xU(1) forces, which again use exchange radiation.  You could equally suggest that Yang-Mills quantum field theory predicts a departure from Coulomb’s law for a large charged rotating disc, along the plane of the disc.

To put this another way, how far should someone go into trying to disprove the model, or resolve all questions, before trying to publish?  This comes down to the question of time.

Can I also say that the calculations for http://quantumfieldtheory.org/Proof.htm were extremely difficult to do for the first time.  The diagram http://quantumfieldtheory.org/Proof_files/Image31.gif is the result of a great deal of effort in trying to make calculations, not the other way around.  The clear picture emerged slowly:

“The universe empirically looks similar in all directions around us: hence the net unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (see diagram). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R². The ‘clever’ mathematical bit is that the shielding area of a local mass is projected on to this area by very simple geometry: the local mass of, say, the planet Earth, the centre of which is distance r from you, casts a ‘shadow’ (on the distant surface 4*Pi*R²) equal to its shielding area multiplied by the simple ratio (R/r)². This ratio is very big. Because R is a fixed distance, as far as we are concerned for calculating the fall of an apple or the ‘attraction’ of a man to the Earth, the most significant variable is the 1/r² factor, which we all know is the Newtonian inverse square law of gravity.

“Illustration above: exchange force (gauge boson) radiation force cancels out (although there is compression equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area since the net force sideways is the same in each direction unless there is a shielding mass intervening. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well established facts of quantum field theory, recession, etc.”

Where disagreements exist, it may be the case that the existing theory is wrong, rather than the new theory.  There were plenty of objections to Aristarchus’ solar system because it predicted that the earth spins around daily, which was held to be absurd.  Ptolemy casually wrote that the earth can’t be rotating or clouds and air would travel around the equator at 1,000 miles per hour, but he didn’t prove that this would be the case, or state his assumption that the air doesn’t get dragged.

“Refutations” should really be written up in detail so they can be analysed and checked properly.  Problems arise in science where ideas are ridiculed instead of being checked with scientific rigour: clearly journal editors and busy peer reviewers are prone to ridicule ideas with strawman arguments without spending much time checking them.  It is a problem with elitism, as Witten’s letter shows, http://schwinger.harvard.edu/%7Emotl/witten-nature-letter.pdf .  Witten’s approach to criticism of M-theory is not to reply, thus remaining respectable.  Yet if I don’t reply to criticism, it is implied that I’m just a fool.

An excellent example is how your papers on the problems in quantum gravity are ignored by string theorists.  That proves string theorists are respectable, you see.  If they engaged in discussions with their critics, they would look foolish.  It is curious that if Witten refuses to discuss problems, he escapes being deemed foolish, but if outsiders do the same then they are deemed foolish.  There is such a rigid view taken of the role of authority in science today that hypocrisy is taken for granted by all.

Best wishes,

Nigel

## 10 thoughts on “Rabinowitz and quantum gravity”

1. nc says:

Analogies:

Dark matter = phlogiston (however, actually there is some dark matter as dust, gas, neutrinos, etc., but not Friedmann critical density of dark matter)
Dark energy = caloric
Lambda-CDM model = Ptolemy’s Earth centred universe with its epicycles to force it to “predict” what we see.

Copy of a comment to

nige said…
Seeing that nobody has found any dark matter, it is just hype.

The usual evidence doesn’t say what it is. About 15% of the mass of the universe is dark dust. The usual claim, that the amount of dark matter is many times the mass of the glowing visible matter, is based on galactic rotation curves and the Friedmann critical density.

Both of these are wrong. Gravitational lensing by matter has in some cases also been hyped as “direct evidence” of dark matter; even where the lensing is real, the mass responsible may be dark dust or neutrinos.

If there was a lot of dark matter around, why isn’t it here on earth? Why doesn’t it affect the solar system? The “direct evidence of dark matter” is just like epicycles in Ptolemy’s earth-centred universe having “direct evidence” in the “fact” that the sun orbits the earth, which everyone can see. Or the “direct evidence” that phlogiston exists because fires burn and the ash left is less massive than the material burned.

Phlogiston was actually the original “dark matter”, just as caloric was the original “dark energy”. [According to the scientific approach to physics, all that counts are facts, not fashionable popularity (citations) or how many collaborators there are.

This is obsolete, now that there is a landscape of 10^500 non-checkable string theory solutions to the same thing, particle physics. The way forward is for everyone who is a true scientist to join a mainstream bandwagon. If it is headed in the wrong direction, who cares, because might is right politically, and it’s the mainstream which makes the most noise and gets attention from research funding committees.]

11:06 PM

2. nc says:

http://motls.blogspot.com/2007/01/raphael-bousso-probabilities-in.html

“the anthropic explanation is a disappointing and controversial approach to an explanation of anything but it is the only remotely acceptable quantitative explanation of the size of the observed cosmological constant we have.

“Until you post a better explanation on this blog, your comparisons may be viewed as fog, sorry. … We want to know why the vacuum energy responsible for acceleration of the expansion of the Universe is equal to

exp(-283.2) times m_{Planck}^2,

at least qualitatively why it is so small.”

It’s not a fixed value: http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta=

I predicted the correct expansion curve in May 1996 and it was published via page 693 of the October 1996 issue of Electronics World, well before the first evidence from Perlmutter’s automated CCD observations were published in, if I recall correctly, January 1998.

The mechanism is that there is no long-range gravitational retardation. This discredits Friedmann’s solution to GR. Quantum gravity modifies GR by insisting that exchange radiation between severely redshifted galaxies is itself – like the visible radiation – redshifted, and so has an energy deficit.

GR omits the dynamics for this quantum gravity phenomenon. The prediction of the phenomenon agrees with the detailed observations, and gets rid of evolving dark energy: https://nige.wordpress.com/2007/01/09/rabinowitz-and-quantum-gravity/

nc | Homepage | 01.12.07 – 10:51 am | #

3. nc says:

Copy of a comment to http://kea-monad.blogspot.com/2007/01/swinging-schwinger.html :

Dirac himself had different ideas about the vacuum, ie, the ideas he actually used to predict antimatter from the Dirac equation’s negative energy solution. He thought about a “Dirac sea”:

… with the new theory of electrodynamics we are rather forced to have an aether.

– P.A.M. Dirac, ‘Is There An Aether?,’ Nature, v.168, 1951, p.906.

http://www.math.columbia.edu/~woit/wordpress/?p=262#comment-5066

It’s gravity gauge boson radiation that causes the pressure on matter (ie the subatomic particles, not macroscopic matter as it appears, which we know is mostly empty space), and such radiation is directional.

The argument that shadows will be filled in by non-radial (eg sideways) components of pressure was the major argument against LeSage’s material aether push gravity theory.

The aether exists as a Dirac fluid closer than 1 fm to an electron, where you have a nice, strong, disruptive electric field strength of 10^20 V/m to break it up into a fluid. Beyond that, whatever “aether” there might be is metaphysical in the sense that it is definitely not polarized or QFT wouldn’t work (it is too locked up by bonding forces to be polarized, and thus unable to really move much, if at all). The only way you will work out what it is at low energy (beyond 1 fm, or the distances electrons approach in collisions at less than about 1 MeV per collision) is to work out all the stuff you can detect by high-energy physics above the IR cutoff (within 1 fm), and use those results to build a model which both produces all of that, tells you the low-energy structure of the vacuum, and also makes some other predictions, so that it can actually be verified scientifically by experimentation.

Hence, what you need to do is to get a complete understanding of how the Standard Model (or some other equally good approximation to high energy phenomena, if you know of one!) can arise with some understanding of how to resolve existing problems like electroweak symmetry breaking, in a predictive way that exceeds existing understanding.

The LeSage mechanism is actually the pion exchange strong nuclear force in the vacuum. There is a limit on the range, because the pions aren’t directional (the Dirac sea within 1 fm from a particle is a fluid assembly of the chaotic motions of particles), unlike the radiation which travels through the non-fluid vacuum beyond 1 fm range (gravity and electromagnetism). The pion pressure only pushes protons and neutrons together if they are close enough that there is not room for too many pions to appear between two particles and neutralize the pressure: similarly, a rubber sucker only sticks to surfaces smooth enough to exclude air. If air gets between the sucker and the surface, the pressure on both sides of the sucker is equalized, and it won’t be “attracted” (pushed to the surface by ambient air pressure).

4. nc says:

Copy of a comment:

http://riofriospacetime.blogspot.com/2007/01/cumbre-vieja.html

Although religious preachers ideally require belief in what they say, scientists should not have any guesswork beliefs, but just confidence in solid observational and experimental facts. A theory should only be accepted on the basis of the experimental or observational evidence upon which it rests.

This is not the case today, as the mainstream defence of string theory shows. How is this problem likely to be resolved? Not from experimental or observational data, since string theory isn’t able to make any falsifiable predictions. It stands to reason that there are going to be grave difficulties ahead in making fundamental progress against bigoted attitudes.

And certainly, “…interpreted in terms of existing…”, is usually the way humans, including scientists, do things isn’t it? (Standing on the shoulders of giants, and all that).

Yes, that’s the entire problem with physics. Physicists have to come to analyse the history of science objectively, and to learn that instead of fiddling old theories by adding additional epicycles, sometimes it pays to listen to radical new ideas, whether superficially they look crazy or not:

“We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct. My own feeling is that it is not crazy enough.”

– Bohr’s reply to Pauli, quoted by Freeman J. Dyson, “Innovation in Physics”, Scientific American, vol. 199, pp. 74-82, September 1958.

5. nc says:

January 19th, 2007 at 10:59 am

“It’s certainly true that astrophysical observations of a CC pose a serious challenge to fundamental particle physics, but unfortunately I don’t think anyone has a promising idea about what to do about this.”

Look at the data plot, http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

The CC model doesn’t fit the data, which seem to suggest that the CC would need to vary for different distances from us. It’s like adding epicycles within epicycles. At some stage you really need to question whether you definitely need a repulsive long range force (CC) to cancel out gravity at great distances, or whether you get better agreement by doing something else entirely, like the idea that any exchange radiation causing gravity is redshifted and weakened by recession:

‘the flat universe is just not decelerating [ie, no long range gravitational retardation on expansion], it isn’t really accelerating’ – Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901

6. nc says:

Copy of a comment to http://kea-monad.blogspot.com/2007/01/007-marches-on.html

Hi Kea and Carl,

For the neutral electroweak boson mass, the Z, 91 GeV:

91 GeV/ (2*Pi*137) ~ 105 MeV

similar to Muon mass.

91 GeV/ (3*Pi*137^2) ~ 0.51 MeV

similar to electron mass.

Hence the masses of at least two leptons may correlate with the Z boson mass, depending on whether the idea of empirical data leading theory is deemed numerological crackpotism or not. (Here, for convenience I’m using 137 to represent 1/alpha or 137.036…, which is close enough for present purposes.)
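As a quick numerical check of the two coincidences above (a sketch only, not part of the original comment; it assumes the PDG value of the Z mass and 1/alpha = 137.036):

```python
import math

alpha_inv = 137.036   # 1/alpha (fine-structure constant inverse)
m_Z = 91.1876e3       # Z boson mass in MeV (the text rounds this to 91 GeV)

# Muon-mass coincidence: m_Z / (2*Pi*137)
m_muon_est = m_Z / (2 * math.pi * alpha_inv)
# Electron-mass coincidence: m_Z / (3*Pi*137^2)
m_electron_est = m_Z / (3 * math.pi * alpha_inv**2)

print(f"muon estimate:     {m_muon_est:.1f} MeV (measured 105.66 MeV)")
print(f"electron estimate: {m_electron_est:.3f} MeV (measured 0.511 MeV)")
```

Both estimates land within about 1% of the measured lepton masses, which is the level of agreement the comment is claiming.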

If you want to have all particles arising from a single particle, it would seem natural to have a Higgs mass of 91 GeV, similar to the Z mass. I think the Higgs particle is supposed to have spin-0, whereas the Z has spin-1.

But if the Higgs is giving mass to the Z directly, then you could expect them to have similar masses.

Taking the coincidence above for Z mass and muon mass,

91 GeV/ (2*Pi*137) ~ 105 MeV,

the 2*Pi*137 attenuation factor would contain 2*Pi for geometric reasons relating to spin and the 137 for vacuum polarization (muon charge core shielding) reasons.

For the electron mass,

91 GeV/ (3*Pi*137^2) ~ 0.51 MeV.

The mechanism would be similar to the muon, except that there is a 50% bigger geometrical reduction factor and an additional 137 polarized vacuum shielding factor.

Taking accepted facts from QFT, the vacuum polarization exists in the range from the UV cutoff to the IR cutoff (e.g. from the Planck scale out to about 1 fm).

Hence, if the mass-giving particle is glued to a fermion or massive boson by a polarizable field, there are two possibilities for the mass.

If the mass-giving particle is so close that it is inside the polarized zone (say at a distance equal to the Planck scale), then polarization won’t shield and attenuate the mass.

But if the mass-giving particle is outside the IR cutoff distance, then the effective mass will be shielded by a factor of 137 with some geometric correction.

An analogy is the magnetic moment of leptons. Whereas Dirac calculated the magnetic moment of the electron as 1 Bohr magneton, Schwinger obtained a closer approximation to allow for the effect of the field on the magnetic moment, which is 1 + 1/(2*Pi*137) = 1.00116 Bohr magnetons.

Obviously there are lots of further corrections for vacuum interactions. However, this is by far the biggest and most important vacuum correction.

The vacuum field is increasing the core magnetic moment, not by 1 Bohr magneton, but by that amount reduced by a combined shielding factor of 2*Pi*137.
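The Schwinger correction quoted above is easy to verify numerically (a minimal check, assuming alpha = 1/137.036):

```python
import math

alpha = 1 / 137.036
# Schwinger's first-order QED correction to the electron magnetic moment:
# g/2 = 1 + alpha/(2*Pi), in Bohr magnetons
moment = 1 + alpha / (2 * math.pi)
print(f"{moment:.5f}")  # ≈ 1.00116 Bohr magnetons, as quoted in the text
```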

If both the electron core and the particle which gives mass to the electron have a polarized vacuum, and these don’t overlap, then the total polarizing shielding of the field associating them will be by the 137^2 factor because each polarization will shield by 137 fold.

The 2*Pi multiplier for one vacuum polarization may increase to 3*Pi for the case of two vacuum polarizations, because they take up more space.

For hadrons, the mass is correlated closely to a similar formula:

M = 35n(N+1) MeV,

where n is the number of quarks in the hadron core (n=2 for mesons, n=3 for baryons), and N is an integer (N = the number of Higgs bosons associated with the hadron?).

This formula does post-dict (correlate with) all meson and baryon masses to within ±2%.

The explanation for the structure of nuclei can invoke nuclear shell theory “magic numbers” for N, which denote nuclei of high stability, so N = 2, 8 and 50 predict relatively stable systems.

For n = 3 (baryons) and N = 8 (stable), 35n(N+1) = 945 MeV, which is approximately the mass of neutrons and protons (938 and 940 MeV).

For n = 1 (lepton) and N = 2, 35n(N+1) = 105 MeV (muon mass).

For n = 1 (lepton) and N = 50 (magic number), 35n(N+1) = 1785 MeV, which is similar to the tauon mass.

For n = 2 (mesons) and N = 1, we get 35n(N+1) = 140 MeV (pions have masses 139.57 and 134.96 MeV).

All the measured meson masses seem close to 70(N+1) MeV where N is an integer, while baryon masses are close to 105(N+1) MeV.
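The four cases above can be tabulated in a few lines (a sketch of the empirical formula only; the measured masses are standard values, not derived here):

```python
def hadron_mass(n, N):
    """Empirical mass formula from the text: M = 35*n*(N+1) MeV, where
    n = number of core particles and N = number of 'mass-giving' particles."""
    return 35 * n * (N + 1)

cases = [
    ("nucleon (n=3, N=8)",  hadron_mass(3, 8),  939.0),   # proton/neutron ~938-940 MeV
    ("muon    (n=1, N=2)",  hadron_mass(1, 2),  105.66),
    ("tauon   (n=1, N=50)", hadron_mass(1, 50), 1776.9),
    ("pion    (n=2, N=1)",  hadron_mass(2, 1),  139.57),
]
for name, est, measured in cases:
    print(f"{name}: {est} MeV vs measured {measured} MeV")
```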

If this general model is correct, then the electron is the most complex particle there is, not the simplest.

The electron would have two polarization factors shielding it from the mass-giving particle by 137 squared, in addition to a geometrical factor.

The muon is simpler, with the mass-giving particle outside a simple polarized vacuum, and all hadrons are similar to the muon except for differing number of core particles at the centre.

The fact a quark has a fractional electric charge can be grasped from the crude (and physically impossible) idea of bringing together three electrons. The polarized vacuum shielding is driven by the electric field strength of the core.

If you make the core charge 3 times stronger, the polarized vacuum is 3 times stronger at shielding the core charge. Hence, if you could (impossibly) bring 3 electrons together so that the vacuum polarization of each exactly coincided, the increased charge would be cancelled out by the stronger vacuum polarization.

Thus the core charge would be 3*137*e but the observable charge beyond the polarized vacuum would be 3*137*e/(3*137) = e. Each electron in the core would therefore appear to have an apparent electric charge of 1/3 of the electron’s charge. This corresponds to what probably is the cause for the downquark charge of -1/3.

It is just a polarization effect. This is not obvious because it is severely cloaked in the standard model by complex effects of chiral symmetry and weak charge, the exclusion principle, and colour charge/strong force.
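The shielding arithmetic behind this argument can be made explicit (an illustrative sketch, with charges in units of the electron charge e):

```python
alpha_inv = 137   # vacuum polarization shielding factor assumed in the text
e = 1.0           # observed electron charge, in units of e

core_charge = 3 * alpha_inv * e           # three coincident electron cores: 3*137*e
observed = core_charge / (3 * alpha_inv)  # shielding scales with core strength
per_particle = observed / 3               # apparent charge per core particle
print(observed, per_particle)             # 1.0 and 1/3: the downquark magnitude
```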

A quick calculation which claims to justify the idea that the vacuum polarization shielding factor for electric charge is 137 is as follows.

Using uncertainty principle, uncertainty in momentum p and distance x is:

h/(2*Pi) = px = (~mc)(~ct) ~ (mc^2)t = Et = Ex/c

hence x = hc/(2*Pi*E)

Although x and E are just uncertainties in distance and energy, this result is a good prediction; for instance, it correctly shows that the range of a 91 GeV boson is about 10^{-18} m.
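Putting numbers into x = hc/(2*Pi*E), which is ħc/E (a quick check, assuming ħc ≈ 197.327 MeV·fm):

```python
hbar_c = 197.327   # hbar*c in MeV*fm
E = 91.0e3         # boson energy in MeV (91 GeV, roughly the Z mass)

x_fm = hbar_c / E       # range in femtometres
x_m = x_fm * 1e-15      # converted to metres
print(f"range ~ {x_m:.2e} m")   # ~2.2e-18 m
```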

Rearranging x = hc/(2*Pi*E) gives

E = hc/(2*Pi*x)

Hence we are justified in treating x and E as real distance and real energy, and using E = Fx to estimate force:

F = E/x = hc/(2*Pi*x^2)

This result for force between electrons is directly comparable to Coulomb’s law because both are inverse-square law forces.

It turns out that the quantum field force above is 137.036… times the Coulomb law for electrons. Hence, there is some evidence that the core charge of an electron is 137.036e, and that the polarized vacuum shielding reduces this to the observed electron charge value of e beyond a distance of 1 fm from the core.
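The claimed 137.036 ratio can be verified directly from CODATA constants (a numerical check only; since both forces scale as 1/x^2, any x gives the same ratio):

```python
import math

# CODATA SI constants
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m

x = 1e-15  # arbitrary separation in metres (1 fm)
F_qf = hbar * c / x**2                          # the text's F = hc/(2*Pi*x^2)
F_coulomb = e**2 / (4 * math.pi * eps0 * x**2)  # Coulomb force between electrons
print(f"ratio = {F_qf / F_coulomb:.3f}")        # ≈ 137.036, i.e. 1/alpha
```

The ratio is exactly 1/alpha by definition of the fine-structure constant, alpha = e^2/(4*Pi*eps0*hbar*c), which is why it comes out to 137.036 independently of x.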

I realise that this is very sketchy in places, but it does seem to me to tackle some diverse issues with a general framework.

The human question is what this is trying to achieve. It certainly is quite a different approach to, say, string theory, where experimental data is treated as crackpot, and theories are developed in complete isolation from reality (ie, extra dimensions, unobserved superpartners and gravitons, branes, etc.).

I don’t think that string theorists would take kindly to data driven theorising in particle theory. So anything like this will just annoy them, and cause them to freak out and try to censor it. I’m wondering how much time and effort I can afford to put in to writing up proper-looking papers. The problem is, you have to do a good deal of exploration to get ideas roughly right, before writing any papers.

There are enormous gaps in the above ideas, such as how to rigorously predict quark charges other than the crude argument that the downquark has -1/3 because the electric-field-driven polarization of the vacuum around a triad of electrons would be three times stronger than that around a single electron, so the long-range observable charge per electron in the triad would be reduced by a factor of 3 from its normal value.

How to predict the upquark charge of +2/3 from this sort of polarization model? Presumably, that will involve going deeply into representation theory for the standard model. I’ve a sneaking idea that even if I did have a complete paper which did everything, it would still be censored out by those who are sure that only string theories are real physics.

7. nc says:

Hi Kea,

Yes, I do read Rivero’s papers, and the model above is based on the major “coincidence” in one of them!

The paper http://arxiv.org/abs/hep-ph/0503104 by Hans de Vries and Alejandro Rivero, “Evidence for radiative generation of lepton masses”, 11 Mar 2005, inspired the model in my last comment.

They write that in November 2004 de Vries noticed that the anomalous magnetic moments of the electron and muon (mainly the Schwinger first coupling correction, alpha/(2*Pi)) appeared numerically close to the ratio of muon mass to Z boson mass, and to the ratio of electron mass to Z boson mass, respectively.

I was already interested in the relationship between alpha and particle masses, because I had some evidence that the core charge of the electron is e/alpha ~ 137e, and that the vacuum polarization between the UV and IR cutoffs shields it down to e. Therefore the 137 factor, 1/alpha, is a general shielding factor wherever it crops up in quantum field theory. Most of the differences in mass between particles are artifacts of the way vacuum polarization shielding weakens the association between the Standard Model charge core and the mass-giving particle. In the electron, the mass-giving particle is well separated from the charge core, behind two vacuum polarizations (hence the low mass). It is less shielded (one vacuum polarization only) in the muon, tauon and hadrons, where the mass differences are due to the number of particles in the core (eg the number of quarks) and to the discrete number of mass-giving particles around the core. It is unshielded in the case of Z bosons, which have a 1:1 correspondence with mass-giving particles, a bit like the idea of supersymmetry.

If this idea is right, obviously it has a way to go and is just a crude model at present. Ideally, you would want to get a way of calculating masses with all the necessary corrections and fine tuning so that the theory could be checked against data to high precision, not just with an accuracy of a couple of percent either way. That is likely to take time. However, it might not be that difficult. http://arxiv.org/abs/hep-ph/0503104 does show some relationships between vacuum polarization corrections and the “coincidences” in detail. I’d like to examine this carefully and see if it is possible to come up with a complete mechanism-based approach to calculating quantum field theory expansions for different loop corrections to mass.

The “fine tuning” of the mass model is obviously due to the loops of charges spontaneously appearing in the vacuum for a brief period, acquiring mass, and then disappearing.

So the accurate prediction of masses is likely to involve precisely the same kinds of calculations as are involved in calculating, say, the precise magnetic moment of the electron to 10 significant figures.

If this can be done to theoretically predict precise, empirically checkable values of masses for a range of particles, then the practical benefits of the model will be improved.

8. nc says:

Copy of a comment to http://riofriospacetime.blogspot.com/2007/01/moon-breaking-apart-not.html

nige said…
“As we know, c has been changing according to GM=tc^3.”

Unless, say, G is rising in proportion to t, a possibility consistent with observations of apparently constant G from nuclear fusion in the early big bang, etc., because the Coulomb and gravity forces would vary the same way, preventing an increase in fusion as gravity increases (the Coulomb force increase repels protons, slowing fusion, which offsets the increased gravitational force of rising G).

The assumption that c falls off as the inverse cube root of the age of the universe is obviously one way of looking at the equation.

Ultimately it has to be determined observationally or experimentally whether it is right or not.

I think there is evidence that c is constant and G is rising; there is also a fair amount of theoretical evidence from a gravity mechanism that there is a dimensionless multiplication factor to be included. The completely correct formula is, I believe on some theoretical evidence I have, GM = t(c/e)^3, where e is the base of natural logarithms.
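As an order-of-magnitude check of MG = tc^3 (a sketch only; the age of the universe and the constants used here are assumed modern values, not taken from the original comment):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
t = 13.8e9 * 3.156e7   # age of the universe in seconds (~4.4e17 s)

# Solving MG = t*c^3 for the mass M of the universe:
M = t * c**3 / G
print(f"M ~ {M:.1e} kg")   # ~1.8e53 kg
```

The result is of the same order as commonly quoted estimates of the mass of the observable universe (~10^53 kg), which is the kind of agreement the equation is claimed to give.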

On the other hand, the restricted (special) theory of relativity, and certainly general relativity, don’t disprove the possibility of changes to the velocity of light.

According to the pre-relativity FitzGerald and Lorentz transformation, the length of the Michelson-Morley instrument was contracted in the direction of absolute motion by the spacetime fabric (be that an ether or a Yang-Mills exchange radiation field). This contraction shortened the length of the instrument in that direction, but not in a perpendicular direction. The result is that an absolute speed of light in an ether or Yang-Mills exchange radiation field is rendered undetectable; the slowing down of that light which has to move against a moving background speed (like a swimmer being slowed by a water current) is exactly compensated by the shortening of the distance the light has to travel. Hence, both beams of light in the Michelson-Morley experiment arrive at the same time, because the ‘relativistic’ contraction offsets an absolute speed of light.

There is an amusing discussion of faster than c speeds by Neil Cornish, an astrophysicist at Montana State University, at http://www.space.com/scienceastronomy/mystery_monday_040524.html

‘The problem is that funny things happen in general relativity which appear to violate special relativity (nothing traveling faster than the speed of light and all that). Let’s go back to Hubble’s observation that distant galaxies appear to be moving away from us, and the more distant the galaxy, the faster it appears to move away. The constant of proportionality in that relationship is known as Hubble’s constant. One seemingly paradoxical consequence of Hubble’s observation is that galaxies sufficiently far away will be receding from us at a velocity faster than the speed of light. This distance is called the Hubble radius, and is commonly referred to as the horizon in analogy with a black hole horizon. In terms of special relativity, Hubble’s law appears to be a paradox. But in general relativity we interpret the apparent recession as being due to space expanding (the old raisins in a rising fruit loaf analogy). The galaxies themselves are not moving through space (at least not very much), but the space itself is growing so they appear to be moving apart. There is nothing in special or general relativity to prevent this apparent velocity from exceeding the speed of light.’
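The Hubble radius mentioned in the quote is easy to estimate (a sketch, assuming a Hubble constant of about 70 km/s per Mpc):

```python
c = 2.998e5              # speed of light in km/s
H0 = 70.0                # assumed Hubble constant, km/s per Mpc

# Distance at which the Hubble-law recession velocity v = H0*R reaches c:
R_hubble_Mpc = c / H0
print(f"Hubble radius ~ {R_hubble_Mpc:.0f} Mpc")   # ~4300 Mpc, ~14 billion light-years
```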

9. nc says:

The final paragraph in this post:

“An excellent example is how your papers on the problems in quantum gravity are ignored by string theorists. That proves string theorists are respectable, you see. If they engaged in discussions with their critics, they would look foolish. It is curious that if Witten refuses to discuss problems, he escapes being deemed foolish, but if outsiders do the same then they are deemed foolish. There is such a rigid view taken of the role of authority in science today that hypocrisy is taken for granted by all.”