D. R. Lunsford, Lubos Motl, and Quantum Gravity

“If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation.” – S. Hawking and L. Mlodinow, A Briefer History of Time, London, 2005, p15.

“… I was looking at papers that appeared in astro-ph that had large numbers of citations to see if any had much relevance to particle physics. The only candidates were papers about the vacuum energy and things like “phantom energy”.  It’s certainly true that astrophysical observations of a CC [cosmological constant, a “fix” to the Friedmann solution of general relativity, which is “explained” by the invention of “dark energy”] pose a serious challenge to fundamental particle physics, but unfortunately I don’t think anyone has a promising idea about what to do about this.” – Dr Woit.

The reason is that such promising ideas have been censored out of arXiv sections like astro-ph, much as Aristarchus of Samos and Copernicus were censored, for being too radical.  {Update: I’ve added a section about Dr Motl’s numerological solution to the cosmological constant problem at the end of this post.}

SO(3,3) as a unification of electrodynamics and general relativity: Lunsford had a unification scheme published; see http://www.springerlink.com/content/k748qg033wj44x11/

“Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177.

Lunsford suggests that [string theorists such as JD, U. of T.?] censored it off arXiv, see http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

It is, however, available from the CERN document server: http://cdsweb.cern.ch/record/688763

The idea is to have three orthogonal time dimensions as well as three of the usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions, and (2) require no cosmological constant:

(1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions; three expanding dimensions to describe the cosmos, and three contractable dimensions to describe matter and fields locally.

(2) All known real quantum field theories are Yang-Mills exchange radiation theories (ie, QED, weak and QCD theories). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR) are very redshifted, you would expect that any exchange radiation will similarly be “redshifted”. The GR solutions which predict that slowing should occur are the “evidence” for a small positive cosmological constant and hence dark energy (which provides the outward acceleration to offset the presumed inward directed gravitational acceleration).

Professor Philip Anderson argues at http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 that: “that the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

The fact is, the flat universe isn’t accelerating; that alleged dark energy-produced acceleration is purely an artefact placed into the Lambda-CDM theory to get the theory to agree with post-1998 observations of supernova redshifts at extremely large distances.

Put another way, take out the GR gravitational deceleration, by allowing gravity to be a Yang-Mills quantum field theory in which redshift of gauge bosons due to the recession of gravitational charges (masses) weakens the gravity coupling constant G, and you can’t have anything but zero cosmological constant. The data only support a cosmological constant if you explicitly or implicitly assume that exchange radiation in quantum gravity isn’t redshifted.

The greatest galaxy redshift recorded is Z = 7, which implies a frequency shift of 7 + 1 = 8 fold, i.e., the redshifted light we receive from it has a frequency 8 times lower than that of the emitted light.  Since Planck’s law says that the energy of a photon is directly proportional to its frequency (E = hf), the photons coming from that galaxy have only 1/8th or 12.5% of the energy they had when emitted.  (The energy ‘loss’ doesn’t violate energy conservation; this is an analogous situation to firing an arrow at something which is moving away from you at nearly the velocity of the arrow.  The arrow ‘loses’ most of its kinetic energy as observed by the target, which feels only a weak impact.)

Similarly, any spin-2 (attractive) graviton radiation being exchanged between the universe we see (centred on us, from our frame of reference) and a receding galaxy which has redshift of Z = 7, will have an energy of exactly 12.5% of the energy of the graviton radiation being exchanged with local masses.  Hence, the universal gravitational constant G will have an effective value, for the Z = 7 redshifted galaxy, of not G but only G/8.
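A minimal sketch of that scaling in Python (the assumption, taken from the argument above, is simply that the exchanged energy and hence the effective coupling fall by the same (1 + Z) factor as the E = hf energy of redshifted radiation):

# Sketch of the redshift-weakened coupling described above.
# Assumption from the text: effective G towards a receding mass falls by the
# same factor (1 + Z) as the E = hf energy of redshifted radiation.
def effective_G(G, Z):
    return G / (1.0 + Z)

G = 6.674e-11                      # m^3 kg^-1 s^-2
print(effective_G(G, 7.0) / G)     # 0.125, i.e. G/8 for a galaxy at Z = 7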

This allows us to make calculations.  Results are similar in the spin-1 gravity model which seems consistent with Lunsford’s unification where, it is clear, gravity and electromagnetism are two results of the same Yang-Mills exchange radiation scheme.

Here, observed gravitational attraction is caused by radiation pressure pushing nearby, non-receding masses together, simply because they shield one another.  The shielding (and hence gravity) arises because nearby masses are not receding from one another significantly; the gauge boson radiation pressure itself comes from the receding masses, via Newton’s 3rd law: the outward force of recession in spacetime, F = ma = m·dv/dt ~ mc/t ~ mHc, has an equal reaction force directed back in the direction opposite to the recession.

Here, the reduction in the effective value of the universal gravitational constant G for the situation of highly redshifted receding galaxies is due to the absence (as seen in our observable reference frame) of further matter at still greater distances (beyond the highly redshifted galaxy), which could produce an inward gauge boson pressure, against the particles of the galaxy, to slow down its recession.

Look at the data plot, http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

The dark energy CC (lambda-CDM) model based on general relativity doesn’t fit the data well, which suggests that the CC would need to vary for different distances from us. It’s like adding epicycles within epicycles. At some stage you really need to question whether you definitely need a repulsive long range force (driven by a CC) to cancel out gravity at great distances, or whether you get better agreement by doing something else entirely, like the idea that any exchange radiation responsible for gravity is redshifted and weakened by relativistic recession velocities:

“… the flat universe is just not decelerating [ie, instead of there being gravitational deceleration PLUS a dark energy CC acceleration which offsets the gravitational deceleration, there is INSTEAD simply no long range gravity because the gravity causing exchange radiation gets redshifted and loses its energy], it isn’t really accelerating … ” – Professor Philip Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901

Here is a plot of the curve for the absence of gravitational deceleration at great redshifts, in direct comparison to all the empirical gamma ray burst data and in comparison to the mainstream Lambda-CDM model: http://thumbsnap.com/v/Jyelh1YV.gif.  Information about the definition of distance modulus and redshift is widely available and basic equations are shown.  For redshift with the Lorentzian transformation, Z = (1 + v/c)/(1 – v^2/c^2)^(1/2), while for redshift with the Galilean transformation Z = v/c.

The data plotted doesn’t use either of these transformations: the redshift is determined directly by observation of the shift in the frequency of gamma rays (gamma ray bursts) or light (supernovae), while the distance modulus is determined directly by the relative intensity of the gamma ray burst (not the frequency) or the relative brightness of visible light (not wavelength or frequency).  The relationship of distance modulus to distance is simply: distance modulus = -5 + 5 log10 d, where d is distance in parsecs (1 parsec = 3.08568025 × 10^16 meters).
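For concreteness, a minimal sketch of that relation in Python (the constants are just those quoted above):

import math

PARSEC_IN_METRES = 3.08568025e16   # metres per parsec, as quoted above

def distance_modulus(d_parsecs):
    # distance modulus = -5 + 5 log10 d, with d in parsecs
    return -5.0 + 5.0 * math.log10(d_parsecs)

print(distance_modulus(10.0))                          # 0.0: the modulus is defined to vanish at 10 parsecs
print(distance_modulus(1.0e9), 1.0e9 * PARSEC_IN_METRES)  # modulus 40 at one gigaparsec, i.e. ~3.09e25 m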

At small redshifts, there is gravitational deceleration because exchange radiation causing gravity (in any gravity mechanism*) is still going to cause a pull back on objects moving away.  Hence, the models and data are all in agreement at small redshifts.  The departure from the Lambda-CDM model becomes marked at large redshifts.  To give an example, consider the extreme situation of redshift Z = 7.  The Lambda-CDM model, once fitted to the data from supernovae at small redshifts, predicts a distance modulus magnitude of 49.2 at Z = 7.  The gamma ray burst data best fit curve suggests a value of 48.2 +/- 1.5, and the no gravitational deceleration model at extreme redshifts predicts a value of 47.4.

General relativity contradicts restricted (special) relativity by allowing the velocity of light to vary due to the deflection of light (which changes the velocity because, unlike speed, velocity is a vector which depends on direction), and it also makes light appear to travel faster than c:

‘All the distance covered by the light in the early universe gets increased by the expansion of the universe,’ explains Neil Cornish, an astrophysicist at Montana State University. ‘Think of it like compound interest.

This article generated quite a few e-mails from readers who were perplexed or flat out could not believe the universe was just 13.7 billion years old yet 158 billion light-years wide. That suggests the speed of light has been exceeded, they argue. So SPACE.com asked Neil Cornish to explain further. Here is his response:

“The problem is that funny things happen in general relativity which appear to violate special relativity (nothing traveling faster than the speed of light and all that). Let’s go back to Hubble’s observation that distant galaxies appear to be moving away from us, and the more distant the galaxy, the faster it appears to move away. The constant of proportionality in that relationship is known as Hubble’s constant. One seemingly paradoxical consequence of Hubble’s observation is that galaxies sufficiently far away will be receding from us at a velocity faster than the speed of light. This distance is called the Hubble radius, and is commonly referred to as the horizon in analogy with a black hole horizon. In terms of special relativity, Hubble’s law appears to be a paradox. But in general relativity we interpret the apparent recession as being due to space expanding (the old raisins in a rising fruit loaf analogy). The galaxies themselves are not moving through space (at least not very much), but the space itself is growing so they appear to be moving apart. There is nothing in special or general relativity to prevent this apparent velocity from exceeding the speed of light.” [Emphasis added.]

(Professor Lee Smolin discusses another alleged problem with restricted or special relativity in his book The Trouble With Physics.  Smolin suggests an argument that the Planck scale is an uncontractable physically real length which dominates the quantum scale, and this contradicts the length-contraction term in special relativity.  The result, as Smolin explains, was the suggestion of a modification to special relativity, whereby the relativistic effects disappear at the Planck scale – so the Planck scale is not contracted – but occur at larger scales.  This modified scheme was called ‘doubly special relativity’ for obvious reasons of political expediency.  This is something I don’t like the look of.  If people want to work in the backwaters of special relativity and make it more special, they need to take the mathematical derivation and find a physical dynamic theory concerned with the Higgs field to explain how mass varies, and the Yang-Mills exchange radiation field to explain the dynamics of how things contract.  In a previous post on this blog, I’ve given examples of research which addresses the contraction in terms of the Dirac sea, although that may not be the correct overall theory, seeing that pair production only occurs out to 1 fm from an electron, where the electric field is over 10^18 V/m, i.e., above the IR cutoff.  Clearly, the contraction in special relativity is due physically to distortion caused by variations in the gravity-causing Yang-Mills exchange radiation pressure when a body moves in a given direction relative to an observer.  I don’t see any evidence for the Planck mass, length, or time, which come from dimensional analysis without any experimental proof or even a theory.  Furthermore, the oft-made claim that the Planck length is the smallest possible distance you can get dimensionally from physical units is just plain wrong, because the radius of a black hole of the electron mass is far smaller than the Planck length.  The Planck mass, length and time are examples of abject speculation, labelled with a famous name, which become physically accepted facts for no physical reason whatsoever.  In other words, those quantities are a religion of groupthink faith.)

If we use the Lorentzian transformation for redshift, v is always less than c, and for Z = 7, v = 0.9844c, so v = 295,000 km/s, and from the Hubble law, d = v/H = 295,000/[70 (km/s)/Mparsec] = 4,220,000,000 parsec, hence distance modulus = -5 + 5 log10 d = 43.1

Using instead the Galilean transformation for apparent velocity for the purpose of this calculation, for Z = 7, v = 7c = 2,100,000 km/s, so the Hubble law gives d = v/H = 2,100,000/[70 (km/s)/Mparsec] = 30,000,000,000 parsec, hence distance modulus = -5 + 5 log10 d = 47.4.   In fact, the fit is actually closer, because there would be some very weak gravitational deceleration, equivalent to a universal gravitational constant of G/8 for a redshift of Z = 7, due to spin-2 graviton redshift and E = hf energy ‘loss’ of redshifted gravitons.  Results are similar in the spin-1 radiation pressure gravity model.
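A quick numerical sketch of the two cases just calculated (taking H = 70 (km/s)/Mparsec and the stated transformations at face value):

import math

H = 70.0   # km/s per Mparsec, as used above

def modulus_from_velocity(v_km_s):
    d_parsecs = (v_km_s / H) * 1.0e6           # Hubble law d = v/H, converted from Mparsec to parsec
    return -5.0 + 5.0 * math.log10(d_parsecs)

print(modulus_from_velocity(295000.0))    # ~43.1 for the Lorentzian case, v = 0.9844c
print(modulus_from_velocity(2100000.0))   # ~47.4 for the Galilean case, v = 7c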

We can learn about quantum gravity from existing cosmological data.  Two theories replaced with one: get rid of gravitational deceleration (Friedmann solution) because exchange radiation is weakened by redshift over long distances, and you also get rid of dark energy into the bargain, because you no longer need to explain the lack of observed deceleration by inventing dark energy to offset it.  The choice is:

(1) Deceleration of universe due to gravity slowing down expansion + evolving dark energy to cause acceleration = observed (non-decelerating) data.

(2) Redshift of gravity causing exchange radiation (weakening gravity between relativistically receding masses) = observed (non-decelerating) data.

Theory (2) is simpler and pre-dates (October 1996*) the ad hoc small positive, evolving CC in theory (1) which was only invented after Perlmutter discovered, in 1998, that the predicted Friedmann deceleration of the universe was not occurring.  (Perlmutter used automated computer detection of supernova signatures directly from CCD telescope input.)  Ockham’s razor tells us that the simplest theory (theory 2) is the most useful, and it is also totally non-ad hoc because it made this prediction ahead of the data.

However, theory (1) is the mainstream theory that is currently endorsed by Professor Sean Carroll and is in current textbooks.  So if you want to learn orthodoxy, learn theory (1) and if you want to learn the best theory, learn theory (2).  It all depends on whether doing “physics” means to you simply learning “existing orthodoxy (regardless of whether it has any real evidence or not)” to help you pass current exams, or whether you want to see something which has experimental confirmation behind it, and is going places:

“Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.”

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

*Electronics World: the exchange radiation gravity mechanism shows a dependence on the surrounding expansion of the universe, which prevents retardation of distant objects which are extremely redshifted as seen from our frame of reference.  In this case, the absence of long range gravitational retardation on the expansion of the universe is due to a combination of redshift of exchange radiation weakening the force between receding masses, and a lack of a net inward pressure (as seen from our frame of reference) on the most distant receding masses, because no receding mass lies beyond them to set up a gravity mechanism and slow them down as seen from our frame of reference.

In the simple momentum exchange (pushing) model for gravity which is due to spin-1 exchange radiation (unlike the spin-2 graviton idea for an ‘attraction’ resulting from exchange purely between two masses), the pushing mechanism gives rise to ‘attraction’ by recoiling and knocking masses together.  Regardless of which model you use in the absence of the final theory of quantum gravity, there is no long range retardation.

(What may ‘really’ be occurring in a hypothetical – imaginary – common frame of reference in which objects are all considered at the same time after the big bang, instead of at a time after the big bang which gets less as you look to greater distances, is not known, cannot be known from observations, and therefore is speculative and not appropriate to the universe we actually see and experience light-velocity gravity effects in.  We’re doing physics here, which means making predictions which are checkable.)

UPDATE, 23 Feb 2007: Dr Lubos Motl’s brilliant numerological solution to the cosmological constant problem becomes mainstream!

The small masses of neutrinos can be accounted for using the Standard Model by invoking a ‘seesaw mechanism’: an algebraic solution of a 2×2 mass matrix which mixes each of the three known light neutrino flavours with a really massive undiscovered neutrino.

This gives neutrinos the very small observed mass-energy, on the order of the square of the electroweak energy scale (~246 GeV) divided by the GUT scale (~10^16 GeV).
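As a rough numerical sketch of that estimate (the scales are just those quoted above; the fourth power computed at the end is the quantity taken up in the comparison below):

electroweak_scale = 246.0   # GeV, as quoted above
gut_scale = 1.0e16          # GeV, as quoted above

m_nu = electroweak_scale**2 / gut_scale   # seesaw estimate of the light neutrino mass scale, in GeV
print(m_nu * 1.0e9)   # ~6e-3 eV: the right order of magnitude for observed neutrino masses
print(m_nu**4)        # ~1.3e-45 GeV^4, the fourth power compared with the cosmological constant below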

Dr Motl in 2005 noticed that the fourth power of this mass scale is similar to the alleged cosmological constant (which has units of energy^4, provided that distance is measured in units of 1/energy, which is of course possible because the distance of closest approach of charged particles in Coulomb scattering is proportional to the reciprocal of the collision energy).  So he suggested a matrix of cosmological constants in which the seesaw effect produces the right numerical result.  Now other people are taking this seriously, and writing arXiv papers about it, which aren’t deleted as speculative numerology.  Even Dr Woit is now writing favorable things about it, because he prefers this as an alternative to the anthropic landscape of supersymmetry:

“… there’s a new preprint by Michael McGuigan which manages to cite both Not Even Wrong (the book), and a Lubos Motl blog entry. The citation of my book seems unnecessary, surely there are other sources critical of string-based unification that have priority. The article is about the “see-saw” mechanism for getting the right magnitude of the cosmological constant, and it is for this that Lubos’s blog gets a citation. This does seem to be a more promising idea about the CC than many. I for one think it will be a wonderful development if the field of particle theory turns around, stops promoting pseudo-science justified by the Weinberg anthropic CC “prediction”, and heads instead in a more promising direction, all based on an entry in Lubos’s blog…”

It is obvious what is occurring here.  The whole of stringy M-theory is mechanism-less numerology where you say, ‘let’s try 10 dimensions, look, that predicts supersymmetry! & 11 dimensions predicts gravity!  Wow!’  So this numerology from Dr Motl fits in beautifully with the rest of string theory.  It deserves a nice scientific prize.

Rabinowitz and quantum gravity

Dr Mario Rabinowitz, the author of the arXiv paper “Deterrents to a Theory of Quantum Gravity,” has kindly pointed out his approach to the central problem I’m dealing with.  (Incidentally, the problem he has with quantum gravity does not apply to the quantum gravity mechanism I’m working on, where gravity is a residue of the electromagnetic field caused by the exchange of electromagnetic gauge bosons which allows two kinds of additions, a weak always attractive force and a force about 10^40 times stronger with both attractive and repulsive mechanisms.)  His paper, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No.11, 8-13 (1990), is the earliest I’m aware of which comes up with a general result equal to Louise Riofrio’s equation MG = tc^3.

He sets the gravitational force equal to the inertial force, F = mMG/R^2 = [mM/(M + m)]v^2/R ≈ (mc^2)/R.  This gives MG = Rc^2 = (ct)c^2 = tc^3, which is identical to Riofrio’s equation.

Here is my detailed treatment of Mario’s analysis.  The cosmological recession of Hubble’s law v = HR, where H is the Hubble parameter and R is radial distance, implies an acceleration in spacetime (since R = ct) of a = dv/dt = d(HR)/dt = Hv = (v/R)v = v^2/R.  (This is not controversial or speculative; it is just employing calculus on Hubble’s v = HR, in the Minkowski spacetime we can observe, where: ‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.) Hence the outward force on mass m due to recession is F = ma = mv^2/R = mc^2/R for extreme distances where most of the mass is and where redshifts are great, so that v ~ c.

Hence the inward (attractive) gravity force is balanced by this outward force:

 F = mMG/R^2 = mc^2/R

Thus,

 MG = Rc^2 = (ct)c^2 = tc^3.

(This result is physically and dimensionally correct but quantitatively is off by a dimensionless correction factor of e^3 ≈ 20, because it ignores the dynamics of quantum gravity at long distances: the rising density as time approaches zero, which increases toward infinity the effective gravity effect due to the expansion of the universe, and the falling strength of the gravity-causing exchange radiation as time goes towards zero due to the extreme redshift of that radiation, which weakens gravity.  However, the physical arguments above are very important and can be compared to those in the mechanism at http://feynman137.tripod.com/.  The correct formula is: e^3 MG = tc^3, where, because of the lack of gravitational retardation in quantum gravity, t = 1/H where H is the Hubble parameter, instead of t = (2/3)/H which is the case for the classic Friedmann scenario with gravitational deceleration.)
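A quick consistency check of the corrected relation, using the value of H quoted in the numerical check later in this post (this only checks the algebra and the implied mass scale, under the stated assumption that t = 1/H):

import math

c = 2.998e8      # m/s
G = 6.674e-11    # m^3 kg^-1 s^-2
H = 2.27e-18     # s^-1, i.e. ~70 km/s/Mparsec
t = 1.0 / H      # age of the universe on the no-deceleration assumption

# e^3 M G = t c^3   =>   M = t c^3 / (e^3 G)
M = t * c**3 / (math.e**3 * G)
print(M)   # ~9e51 kg, the implied mass of the observable universe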

Historically, this result has appeared four times, first in Mario’s 1990 paper and then under different circumstances on three later occasions:

(1) M. Rabinowitz, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No.11, 8-13 (1990).

(2) My own treatment, Electronics World, various issues (October 1996-April 2003), based on a physical mechanism of gravity (outward force of matter in receding universe is balanced, by Newton’s 3rd law, by an inward force of gauge boson pressure, which causes gravity by asymmetries since each fundamental particle acts as a reflecting shield, so masses shield one another and get pushed together by gauge boson radiation, predicting the value of G quite accurately).

[Initially I had a crude physical model of the Dirac sea, in which the motion of matter outward resulted in an inward motion of the Dirac sea to fill in the volume being vacated.  This was objected to strongly for being a material pressure LeSage gravity mechanism, although it makes the right prediction for gravity strength (unlike other LeSage models) and utilises the right form of the Hubble acceleration outward, a = dv/dt = d(HR)/dt = Hv.  This was published in Electronics World from October 1996 (letters page item) to April 2003 (a major six pages long paper).  A gauge boson exchange radiation based calculation for gravity was then developed which does the same thing (without the Dirac sea material objections to LeSage gravity which the previous version had) in 2005.  I’ve little free time, but am rewriting my site into an organised book which will be available free online.  The correct formula from http://feynman137.tripod.com/ for the gravity constant is G = (3/4)H^2/(Pi*Rho*e^3), where Rho is the observed (not Friedmann critical) density of visible matter and dust, etc.  This equation is equivalent to e^3 MG = tc^3, and differs from the Friedmann critical density result by a factor of approximately 10, predicting that the amount of dark matter is less than predicted by the critical density law.  In fact, you get a very good prediction of the gravity constant from the detailed Yang-Mills exchange radiation mechanism by ignoring dark matter, as a first approximation.  Since dark matter has never been observed in a laboratory, but is claimed to be abundant in the universe, you have to ask why it is avoiding laboratories.  In fact the most direct evidence claimed for it doesn’t reveal any details about it.  It is required in the conventional (inadequate) approximations to gravity but the correct quantum gravity, which predicted the non-retarded expansion of the universe in 1996, two years before Perlmutter’s observational data confirmed it, reduces the amount of dark matter dramatically and makes various other validated predictions.]

(3) John Hunter published a conjecture on page 17 of the 12 July 2003 issue of New Scientist, suggesting that the rest mass energy of a particle, E = mc^2, is equal to its gravitational potential energy with respect to the rest of the matter in the surrounding universe, E = mMG/R.  This leads to E = mc^2 = mMG/R, hence MG = Rc^2 = (ct)c^2 = tc^3.  He has the conjecture on a website here, which contains an interesting and important approach to solving the galactic rotation curve problem without inventing any unobserved dark matter, although his cosmological speculations on linked pages are unproductive and I wouldn’t want to be associated with those non-predictive guesses.  Theories should be built on facts.

(4) Louise Riofrio came up with the basic equation MG = tc^3 by dimensional analysis and has applied it to various problems.  She correctly concludes that there is no dark energy, but one issue is what is varying in the equation MG = tc^3 to compensate for time increasing on the right hand side.  G is increasing with time, while M and c remain constant.  This conclusion comes from the detailed gravity mechanism.  Contrary to claims by Professor Sean Carroll and the late Dr Edward Teller, an increasing G does not vary the sun’s brightness or the fusion rate in the first minutes of the big bang (electromagnetic force varies in the same way, so Coulomb’s law of repulsion between protons was different, offsetting the variation in compression on the fusion rate due to varying gravity), but it does correctly predict that gravity was weaker in the past when the cosmic background radiation was emitted, thus explaining quantitatively why the ripples in that radiation due to mass were so small when it was emitted 300,000 years after the big bang.  This, together with the lack of gravitational retardation on the rapid expansion of the universe (gravity can’t retard expansion between relativistically receding masses, because the gravity-causing exchange radiation will be redshifted, losing its force-causing energy, like ordinary light which is also redshifted in cases of rapid recession; this redshift effect is precisely why we don’t see a blinding light and lethal radiation from extreme distances corresponding to early times after the big bang), gets rid of the ad hoc inflationary universe speculations.

I’m disappointed by Dr Peter Woit’s new post on astronomy where he claims astronomy is somehow not physics: ‘When I was young, my main scientific interest was in astronomy, and to prove it there’s a very geeky picture of me with my telescope on display in my apartment, causing much amusement to my guests (no way will I ever allow it to be digitized, I must ensure that it never appears on the web). By the time I got to college, my interests had shifted to physics…’

I’d like to imagine that Dr Woit just means that current claims of observing ‘evolving dark energy’ and ‘dark matter’ (with lots of alleged evidence which turns out to be gravity-caused distortions which could be caused by massive neutrinos or anything, and which doesn’t have a fig leaf of direct laboratory confirmation for the massive quantity postulated to fix epicycles in the current general relativity paradigm, which ignores quantum gravity) are not physics.  However, he is unlikely to start claiming that the mainstream time-varying-lambda-CDM model of cosmology (a time-varying dark energy ‘cosmological constant’ plus cold dark matter) is nonsense, because in his otherwise excellent book Not Even Wrong he uses the false, ad hoc, small positive fixed value of the cosmological constant to ridicule the massive value predicted by force unification considerations in string theory.  Besides, if he knows little of modern astronomy and cosmology, he will not be in an expert position to competently evaluate it and criticise it.  I hope Dr Woit will immerse himself in the lack of evidence for modern cosmology and perhaps come up with a second volume of Not Even Wrong addressed at the lambda-CDM model and its predictive, checkable solution using a proper system of quantum gravity.

For my earlier post on this topic, see https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl/ 

Other news: my domain http://quantumfieldtheory.org/ is up and running with some draft material – now I just have to write the free quantum field theory textbook to put on there!

NUMERICAL CHECK

The current observational value of H is about 70 +/- 2.4 km/s/Mparsec ~ 2.27*10^-18 s^-1, and Rho causes the difficulty today.  The observed visible matter (stars, hot gas clouds) has long been estimated to have a mean density around us of ~4*10^-28 kg/m^3, although studies show that this should be increased for dust by about 15% and for various other factors.  The prediction G = (3/4)H^2/(Pi*Rho*e^3) is a factor of e^3/2 ~ 10 times smaller than that in the Friedmann critical density formula.  Its accuracy depends on what evidence you take for the density.  It happens to agree exactly with the statement by Hawking in 2005:

‘When we add up all this dark matter [which accounts for the high speed of the outermost stars orbiting spiral galaxies like the Milky Way, and the high speed of galaxies orbiting in clusters of galaxies], we still get only about one-tenth of the amount of matter required to halt the expansion [the critical density in Friedmann’s solution]’.

– S. Hawking and L. Mlodinow, A Briefer History of Time, Bantam, London, 2005, p65.

Changing it around, it predicts the density is 9.2*10^-28 kg/m^3, about twice the observed density if that is taken as the traditional figure of 4*10^-28 kg/m^3; however, the latest estimates of the density are higher and similar to the predicted value of 9.2*10^-28 kg/m^3, for example the following:

‘Astronomers can estimate the mass of galaxies by totalling up the number of stars in the galaxy (about 10^9) and multiplying by the mass of one star, or by observing the dynamics of orbiting parts of a galaxy. Next they add up all the galactic mass they can see in this region and divide by the volume of space they are looking at. If this is done for bigger and bigger regions of space the mean density approaches a figure of about 10^-30 grams per cubic centimetre or 10^-27 kg m^-3. You will realise that there is some doubt in this value because it is the result of a long chain of estimations.’

Putting this approximate value of Rho = 10^-27 kg m^-3 into G = (3/4)H^2/(Pi*Rho*e^3) with H as before gives G = 6.1*10^-11 N m^2 kg^-2, which is only 9% low, and although the experimental error in density observations is relatively high, it will improve with further astronomical studies, just as the Hubble parameter error has improved with time.  This provides a further check.  (Other relevant checks on quantum gravity are discussed here, top post.)
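A minimal sketch of that check (the input values are just those quoted above):

import math

H = 2.27e-18    # s^-1, i.e. ~70 km/s/Mparsec
rho = 1.0e-27   # kg m^-3, the rough observed density quoted above

G_predicted = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(G_predicted)                    # ~6.1e-11 N m^2 kg^-2
print(G_predicted / 6.674e-11 - 1.0)  # ~ -0.08, i.e. roughly the 9% shortfall noted above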

Here’s an extract from a response I sent to Dr Rabinowitz on 8 January, regarding the issue of gauge bosons and the accuracy of the calculation of G in comparison to observed data:

“Are your gauge bosons real or virtual?” What’s the difference? It’s the key question in many ways.  Obviously they are real in the sense they really produce electric forces. But you can’t detect them with a radio receiver or other instrument designed to detect either oscillatory waves or discrete particles.
“I am troubled by your force calculation (~10^43 N) which is an input to your derivation of G.  I’m inclined to think that the force calculation could be off by a large factor, so that one may question that ‘The result predicts gravity constant G to within 2%’.”

First, the “outward force” is ambiguous.  If you ignore the fact that the more distant observable universe has higher density, then you get one figure. If you assume that density increases to infinity with distance, you get another result for outward force (infinity).  Finally, if you are interested in the inward reaction force carried by radiation (gauge bosons) then you need to allow for the redshift of those due to the recession of the matter emitting them, which cancels out the infinity due to density increasing, and gives a result of about 7 x 10^43 N or whatever.  In giving outward force as ~10^43 N, I’m giving a rough figure which anyone will be able to validate approximately without having to do the more complicated calculations.
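For the simplest version of the quoted figure (ignoring the density and redshift corrections just described), the order of magnitude follows from F = McH; a rough sketch using the values quoted elsewhere in this post:

import math

c = 2.998e8     # m/s
H = 2.27e-18    # s^-1
rho = 9.2e-28   # kg m^-3, the predicted density quoted earlier in this post

R = c / H                                   # Hubble radius in metres
M = rho * (4.0 / 3.0) * math.pi * R**3      # simplest estimate of the mass of the observable universe
F = M * c * H                               # outward force, F = ma with a ~ cH
print(F)   # ~6e42 N, of the order of the ~10^43 N figure quoted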

I used two published best estimates for the Hubble parameter and the density of the visible matter plus dust in the universe.  These allowed G to be predicted.  The result was within 2% of the empirically known value of G.  I used 70 km/s/Mparsec for H a decade ago and that is still the correct figure, although the uncertainty is falling.  A decade ago, there was no estimate to the uncertainty because the data clustered between two values, 50 and 100.  Now there is agreement that the correct value of H is very close to 70.  …  I don’t think there is any massive error involved in observational astronomy.  There used to be a confusion because of two types of variable star, with Hubble using the wrong type to estimate H.  Hubble had a value of 550 for H, many times too high.  That sort of error is long gone.

Recent response to Professor Landis about general relativity:

“Ultimately, it’s all in the experimental demonstration.  If Einstein’s theory hadn’t been confirmed by tests, it would have been abandoned regardless of how pretty or ugly it may be.” – Geoffrey Landis

What about string theory, which has been around since 1969 and can’t be tested and doesn’t hold out any hope?  I disagree: the tests of general relativity would first have been repeated, and if they still didn’t agree, then an additional factor would have been invented/discovered to make the theory correct.

Newton’s gravity law in tensors would be R_uv = 4*Pi*T_uv

which is inconsistent, because conservation of mass-energy requires the divergence of T_uv to vanish, whereas the divergence of R_uv does not in general vanish.  Einstein therefore replaces T_uv with (T_uv) – (1/2)(g_uv)T; this is equivalent to equating T_uv (times a constant) to R_uv – (1/2)(g_uv)R, which does have a vanishing divergence (by the Bianchi identities) and so doesn’t contradict the conservation of energy.  If the solutions of general relativity are wrong, then you would need to find out physically what is causing the discrepancy.

The Friedmann solution of general relativity predicted that gravity slows down expansion.  Observations by Perlmutter on distant supernova showed that there was something wrong.  Instead of abandoning general relativity, a suitable small positive “cosmological constant” was adopted to keep everything fine.  Recently, however, more detailed observations show that there is evidence that such a “cosmological constant” lambda would be varying with time.

Discussion by email with Dr Rabinowitz:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 5:27 AM

Subject: Paul Gerber is an unsung hero

Dear Nigel,

… Einstein’s General Relativity (EGR)  makes the problem much more difficult than your simple approach.  

  Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.  One should be able to calculate the time constant for slowing down a body.  …

     Best regards,
     Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Wednesday, January 17, 2007 7:07 PM

Subject: Re: Paul Gerber is an unsung hero

Dear Mario,

“Since the leading edge of the Universe is moving at nearly c, one needs to bring relativity into the equations.  Special relativity (without boosts) can’t do it.  Einstein’s General Relativity (EGR) makes the problem much more difficult than your simple approach.”

The mechanism of relativity comes from this simple approach: the radiation pressure on a moving object causes the contraction effect.  Any inconsistency is a failure of general or special relativity, which are mathematical structures based on principles.  An example of a failure is the lack of deceleration of the universe…

“Another shortcoming is the LeSage model itself.  It is very appealing, but one aspect is appalling.  What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.”

This is the objection of Feynman to LeSage in his November 1964 Cornell lectures on the Character of Physical Law.  The failure of LeSage has been discussed in detail by people from Maxwell to Feynman.  I have some discussion of LeSage at http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html

See http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html where the Dirac sea (or the equivalent Yang-Mills radiation exchange pressure on moving objects) is the mechanism for relativity:

The Dirac sea was shown to mimic the SR contraction and mass-energy variation; see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^(1/2), where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = Eo/(1 – v^2/c^2)^(1/2), where Eo is the potential energy of the dislocation at rest.’

The force inward on every point is enormous, 10^43 Newtons.  General relativity gives the result that the Earth’s radius is contracted by (1/3)MG/c^2 = 1.5 millimetres.  The physical mechanism of this process (gravity dynamics by radiation pressure of exchange radiation) is the basis for gravitational “curvature” of spacetime in general relativity, because this shrinking of radius is radial only: transverse directions (eg the circumference) are not affected.  Hence, the ratio circumference/radius will vary depending on the mass of the object, unless you invent a fourth dimension and maintain Pi by stating that spacetime is curved by the extra dimension.
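A quick check of that 1.5 mm figure (standard values for the constants; the factor of 1/3 is as quoted from general relativity):

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
M_earth = 5.972e24    # kg

contraction = (1.0 / 3.0) * G * M_earth / c**2
print(contraction * 1000.0)   # ~1.5 mm radial contraction of the Earth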

LeSage (who apparently plagiarised Fatio, a friend of Newton) was also dismissed for various other equally false reasons:

1. Maxwell claimed that the force causing radiation would have to be so great it would heat up objects until they were red hot.  This is vacuous for various reasons: the strong nuclear force (unknown in Maxwell’s time) is widely accepted to be mediated by Pions and other particles, and is immensely stronger than gravity, but doesn’t cause things to melt.  Heat transfer depends on how energy is coupled.  It is known that gravity and other forces are indirectly coupled to particles via a vacuum field that has mass and other properties.

2. Several physicists in the 1890s wrote papers which dismissed LeSage by claiming that any useful employment of the mechanism makes gravity depend on the mass of atoms rather than on the surface area of a planet, and so requires the gravity causing field to be able to penetrate through solid matter, and that therefore matter must be mainly void, with atoms mainly empty.  This appeared absurd.  But when X-rays, radioactivity and the nuclear atom confirmed LeSage, he was not hailed as having made a successful prediction, confirmed experimentally.  The later mainstream view of LeSage was summed up by Eddington: ‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, ‘Space Time and Gravitation’, Cambridge University Press, 1921, p64.  This is partly correct in the sense that there was no numerical prediction from LeSage that could be tested.

3. Feynman’s objection assumes that the force carrying radiation interacts chaotically with itself, like gas molecules, and would fill in “shadows” and cause drag on moving objects by striking moving objects and carrying away momentum randomly in any direction.  This is a straw man argument: Feynman should have considered the Yang-Mills exchange radiation as the only basis for forces below the infra red cutoff, ie, beyond 1 fm from a particle core.

The gas of creation-annihilation loops only occurs above the IR cutoff.  It is ironic that Feynman missed this, seeing his own major role in discovering renormalization which is evidence for the IR cutoff.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Wednesday, January 17, 2007 7:51 PM

Subject: Contradictory prediction of the LeSage model to that of Newton

Dear Nigel,

  Thanks for addressing the issues I raised.  

  I know very little about the LeSage model, its critics, and its proponents.  Nevertheless, let me venture forth.  Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks:  One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.

        Best regards,
        Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Thursday, January 18, 2007 10:57 AM

Subject: Re: Contradictory prediction of the LeSage model to that of Newton

Dear Mario,

It is just a very simple form of radiation shielding.  Each fundamental particle is found to have a gravity shielding cross-section of Pi.R^2 where R = 2GM/c^2, M being the mass of the particle.  This precise result, that the black hole horizon area is the area of gravitational interactions, is not a fiddle to make the theory work, but instead comes from comparing the results of two different derivations of G, each derivation being based on a different set of empirically-founded assumptions or axioms.

It is also consistent with the idea of Poynting electromagnetic energy current being trapped gravitationally to form fermions from bosonic energy (the E-field lines are spherically symmetric in this case, while the B-field lines form a torus shape which becomes a magnetic dipole at long distances because the polarized vacuum around the electron core shields transverse B-field lines as it does radial E-field lines, but doesn’t of course shield radial – ie polar – B-field lines).

Notice that the black hole radius of an electron is many orders of magnitude smaller than the Planck length.  The idea that gravity will be reduced by particles being directly behind one another is absurd, because the gravitational interaction cross-section is so small.  You can understand the small size of the gravitational cross-section when you consider that the inward force of gauge boson radiation is something on the order 10^43 N, directed towards every particle.  This force only requires a tiny shielding to produce a large gravitational force.
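For the size comparison just mentioned (the cross-section Pi*R^2 is the model’s assumption; the two radii themselves are just the standard formulae):

import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
hbar = 1.0546e-34   # J s
m_e = 9.109e-31     # kg, electron mass

r_black_hole = 2.0 * G * m_e / c**2            # black hole event horizon radius for the electron mass
planck_length = math.sqrt(hbar * G / c**3)     # Planck length, from dimensional analysis
cross_section = math.pi * r_black_hole**2      # the shielding cross-section assumed above

print(r_black_hole)    # ~1.4e-57 m
print(planck_length)   # ~1.6e-35 m, about 22 orders of magnitude larger
print(cross_section)   # ~5.7e-114 m^2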

There are obviously departures produced by this model from standard general relativity under extreme circumstances.  One is that you can never have a gravitational force – regardless how big the mass is – that exceeds 10^43 N.  I don’t list this as a prediction in the list of predictions on my home page, because it is clearly not a falsifiable or checkable prediction, except near a large black hole which can’t very well be examined.  The effect of one mass being behind the other, and so not adding any additional geometrical shielding to a situation, is dealt with in regular radiation shielding calculations.  If an amount of shielding material H is enough to cut the gravity-causing radiation pressure by half, the statistical effect of an amount M is that the fraction of the pressure transmitted will not be f = 1 – (0.5M/H), but will instead be f = exp{-M(ln 2)/H}.

However, we know mathematically that f = 1 – (0.5M/H) becomes a brilliant approximation to f = exp{-M(ln 2)/H} when M << H.  Calculations show that you will generally have to have a mass approaching the mass of the universe in order to get any significant effect whereby “overlap” issues become effective.
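A brief sketch of that comparison (here H is the half-value shielding mass in the formula above, not the Hubble parameter):

import math

def f_linear(M, H):
    # the simple linear form quoted above
    return 1.0 - 0.5 * M / H

def f_exponential(M, H):
    # statistical shielding: each half-thickness H cuts the transmitted pressure by half
    return math.exp(-M * math.log(2.0) / H)

for ratio in (0.01, 0.5, 1.0, 3.0):
    print(ratio, f_linear(ratio, 1.0), f_exponential(ratio, 1.0))
# The linear form tracks the exponential closely for M << H and matches it exactly at M = H,
# but fails (goes negative) once M exceeds 2H – the “overlap” regime mentioned above.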

“Consider a Large Dense Disk rotating slowly.  I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body?  We could have two identical Disks:  One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius.  I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.”

You or I need to make some calculations to check this.  The problem here is that I don’t immediately see the mechanism by which you think that there would be a reduction in gravity, or how much of a reduction there would be; do you allow for mass increase due to speed of rotation, or is that ignored?  Many of the “criticisms” that could be laid against a LeSage gravity could also be laid against the Standard Model SU(3)xSU(2)xU(1) forces, which again use exchange radiation.  You could suggest that Yang-Mills quantum field theory would predict a departure from Coulomb’s law for a large charged rotating disc, along the plane of the disc.

To put this another way, how far should someone go into trying to disprove the model, or resolve all questions, before trying to publish?  This comes down to the question of time. 

Can I also say that the calculations for http://quantumfieldtheory.org/Proof.htm were extremely difficult to do for the first time.  The diagram http://quantumfieldtheory.org/Proof_files/Image31.gif is the result of a great deal of effort in trying to make calculations, not the other way around.  The clear picture emerged slowly:

“The universe empirically looks similar in all directions around us: hence the net unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (see diagram). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R^2. The ‘clever’ mathematical bit is that the shielding area of a local mass is projected on to this area by very simple geometry: the local mass of say the planet Earth, the centre of which is distance r from you, casts a ‘shadow’ (on the distant surface 4*Pi*R^2) equal to its shielding area multiplied by the simple ratio (R/r)^2. This ratio is very big. Because R is a fixed distance, as far as we are concerned for calculating the fall of an apple or the ‘attraction’ of a man to the Earth, the most significant variable is the 1/r^2 factor, which we all know is the Newtonian inverse square law of gravity.

“Illustration above: exchange (gauge boson) radiation force cancels out (although there is compression equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area, since the net force sideways is the same in each direction unless there is a shielding mass intervening. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well established facts of quantum field theory, recession, etc.”
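Putting the quoted geometry into a single expression (a sketch of the scaling only, not a full derivation): the fraction of the sky-sphere 4*Pi*R^2 shadowed by a shield of cross-section sigma at distance r is sigma*(R/r)^2 / (4*Pi*R^2) = sigma/(4*Pi*r^2), so R cancels and the net force falls off as 1/r^2:

import math

def net_gravity_force(F_inward, sigma, r):
    # shadow fraction = sigma*(R/r)^2 / (4*pi*R^2) = sigma/(4*pi*r^2): R drops out,
    # leaving the inverse-square dependence on r described in the quoted passage.
    return F_inward * sigma / (4.0 * math.pi * r**2)

# Doubling r quarters the force, whatever F_inward and sigma are:
print(net_gravity_force(1.0, 1.0, 1.0) / net_gravity_force(1.0, 1.0, 2.0))   # 4.0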

Where disagreements exist, it may be the case that the existing theory is wrong, rather than the new theory.  There were plenty of objections to Aristarchus’ solar system because it predicted that the earth spins around daily, which was held to be absurd.  Ptolemy casually wrote that the earth can’t be rotating or clouds and air would travel around the equator at 1,000 miles per hour, but he didn’t prove that this would be the case, or state his assumption that the air doesn’t get dragged.

“Refutations” should really be written up in detail so they can be analysed and checked properly.  Problems arise in science where ideas are ridiculed instead of being checked with scientific rigor: clearly journal editors and busy peer reviewers are prone to ridicule ideas with strawman arguments without spending much time checking them.  It is a problem with elitism, as Witten’s letter shows, http://schwinger.harvard.edu/%7Emotl/witten-nature-letter.pdf .  Witten’s approach to criticism of M-theory is not to reply, thus remaining respectful.  Yet if I don’t reply to criticism, it is implied that I’m just a fool.

An excellent example is how your papers on the problems in quantum gravity are ignored by string theorists.  That proves string theorists are respectable, you see.  If they engaged in discussions with their critics, they would look foolish.  It is curious that if Witten refuses to discuss problems, he escapes being deemed foolish, but if outsiders do that then they are deemed foolish.  There is such a rigid view taken of the role of authority in science today, that hypocrisy is taken for granted by all.

Best wishes,

Nigel