“If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation.” – S. Hawking and L. Mlodinow, *A Briefer History of Time,* London, 2005, p15.

“… I was looking at papers that appeared in astro-ph that had large numbers of citations to see if any had much relevance to particle physics. The only candidates were papers about the vacuum energy and things like “phantom energy”. It’s certainly true that astrophysical observations of a CC [*cosmological constant,* a “fix” to the Friedmann solution of general relativity, which is “explained” by the invention of “dark energy”] pose a serious challenge to fundamental particle physics, but unfortunately I don’t think anyone has a promising idea about what to do about this.” – Dr Woit.

The reason is that such promising ideas have been censored out of arXiv sections like astro-ph, much as Aristarchus of Samos and Copernicus were censored, for being too radical. {Update: I’ve added a section about Dr Motl’s numerological solution to the cosmological constant problem at the end of this post.}

SO(3,3) as a unification of electrodynamics and general relativity: Lunsford had a unification scheme published; see http://www.springerlink.com/content/k748qg033wj44x11/

“Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177.

Lunsford suggests that [string theorists such as JD, U. of T.?] censored it off arXiv, see http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

It is available, however, at CERN: http://cdsweb.cern.ch/record/688763

The idea is to have three orthogonal time dimensions as well as three of the usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions, and (2) require no cosmological constant:

(1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions: three expanding dimensions to describe the cosmos, and three contractible dimensions to describe matter and fields locally.

(2) All known real quantum field theories are Yang-Mills exchange radiation theories (i.e., QED, the weak interaction, and QCD). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies, which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR), are very redshifted, you would expect that any exchange radiation will similarly be “redshifted”. The GR solutions which predict that slowing should occur are the “evidence” for a small positive constant and hence dark energy (which provides the outward acceleration to offset the presumed inward-directed gravitational acceleration).

Professor Philip Anderson argues at http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 that: “the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

The fact is, the flat universe isn’t accelerating; that alleged dark energy-produced acceleration is purely an artefact placed into the Lambda-CDM theory to get the theory to agree with post-1998 observations of supernova redshifts at extremely large distances.

Put another way, take out the GR gravitational deceleration, by allowing gravity to be a Yang-Mills quantum field theory in which redshift of gauge bosons due to the recession of gravitational charges (masses) weakens the gravity coupling constant G, and you can’t have anything but zero cosmological constant. The data only support a cosmological constant if you explicitly or implicitly assume that exchange radiation in quantum gravity isn’t redshifted.

The greatest galaxy redshift recorded is *Z* = 7, which implies a frequency shift of 7 + 1 = 8 fold, i.e., the redshifted light we receive from it has a frequency 8 times lower than the emitted light. Since Planck’s law says that the energy of a photon is directly proportional to its frequency (*E = hf*), the photons coming from that galaxy have only 1/8th, or 12.5%, of the energy they had when emitted. (The energy ‘loss’ doesn’t violate energy conservation; this is an analogous situation to firing an arrow at something which is moving away from you at nearly the velocity of the arrow. The arrow ‘loses’ most of its kinetic energy as observed by the target, which feels only a weak impact.)
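The frequency and energy reduction can be sketched numerically (a minimal illustration of the *E = hf* argument above; the function name is just for this sketch):

```python
# Energy of a redshifted photon relative to its emitted energy.
# Planck's law E = h*f means the energy ratio equals the frequency ratio.

def redshifted_energy_fraction(z):
    """Fraction of the emitted photon energy received at redshift z.

    The received frequency is lower by a factor (1 + z), so by E = h*f
    the received energy is E_emitted / (1 + z).
    """
    return 1.0 / (1.0 + z)

# For the most redshifted galaxy discussed (z = 7), photons arrive
# with 1/8 of their emitted energy, i.e. 12.5 %.
print(redshifted_energy_fraction(7))  # 0.125
```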

Similarly, any spin-2 (attractive) graviton radiation being exchanged between the universe we see (centred on us, from our frame of reference) and a receding galaxy which has a redshift of *Z* = 7, will have an energy of exactly 12.5% of the energy of the graviton radiation being exchanged with local masses. Hence, the universal gravitational constant *G* will have an effective value, for the *Z* = 7 redshifted galaxy, of not *G* but only *G*/8.

This allows us to make calculations. Results are similar in the spin-1 gravity model which seems consistent with Lunsford’s unification where, it is clear, gravity and electromagnetism are two results of the same Yang-Mills exchange radiation scheme.

Here, observed gravitational attraction is caused by radiation pressure *pushing* nearby, non-receding masses together, simply because they are shielding one another. (The shielding, and hence gravity, arises because nearby masses are not receding from one another; the gauge boson radiation pressure itself comes from the distant receding masses, via Newton’s 3rd law: the outward force of recession in spacetime, *F = ma*, with *a = dv/dt ~ c/t ~ Hc*, has an equal reaction force directed opposite to the recession.)

Here, the reduction in the effective value of the universal gravitational constant *G* for the situation of highly redshifted receding galaxies is due to the *absence* (as seen in our observable reference frame) of further matter at still greater distances (beyond the highly redshifted galaxy), which could produce an inward gauge boson pressure, against the particles of the galaxy, to slow down its recession.

Look at the data plot, **http://cosmicvariance.com/2006/01/11/evolving-dark-energy/**

The dark energy CC (lambda-CDM) model based on general relativity doesn’t fit the data well, which suggests that the CC would need to vary for different distances from us. It’s like adding epicycles within epicycles. At some stage you really need to question whether you definitely need a repulsive long range force (driven by a CC) to cancel out gravity at great distances, *or whether you get better agreement by doing something else entirely,* like the idea that **any exchange radiation responsible for gravity is redshifted and weakened by relativistic recession velocities**:

“… **the flat universe is just not decelerating** [ie, instead of there being gravitational deceleration PLUS a dark energy CC acceleration which offsets the gravitational deceleration, there is INSTEAD simply no long range gravity because the gravity causing exchange radiation gets redshifted and loses its energy], **it isn’t really accelerating** … ” – Professor Philip Anderson, **http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901**

Here is a plot of the curve for the absence of gravitational deceleration at great redshifts, in direct comparison to all the empirical gamma ray burst data and in comparison to the mainstream Lambda-CDM model: http://thumbsnap.com/v/Jyelh1YV.gif. Information about the definition of distance modulus and redshift is widely available and the basic equations are shown. For redshift with the Lorentzian (relativistic Doppler) transformation, 1 + *Z* = (1 + *v/c*)/(1 – *v*^{2}/*c*^{2})^{1/2}, while for redshift with the Galilean transformation *Z = v/c*.

The data plotted doesn’t use either of these transformations: the redshift is determined directly by observation of the *shift in the frequency* of gamma rays (gamma ray bursts) or light (supernovae), while the distance modulus is determined directly by the *relative intensity of the gamma ray burst* (not the frequency) or the *relative brightness of visible light* (not wavelength or frequency). The relationship of distance modulus to distance is simply: distance modulus = -5 + 5 log_{10} *d*, where *d* is distance in parsecs (1 parsec = 3.08568025 × 10^{16} meters).
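The distance modulus relation just quoted is easy to compute and invert (a minimal sketch; function names are illustrative):

```python
import math

PARSEC_M = 3.08568025e16  # metres per parsec, as given in the text

def distance_modulus(d_parsec):
    """Distance modulus from distance in parsecs: mu = -5 + 5*log10(d)."""
    return -5.0 + 5.0 * math.log10(d_parsec)

def distance_from_modulus(mu):
    """Invert the relation: d (parsecs) = 10**((mu + 5)/5)."""
    return 10.0 ** ((mu + 5.0) / 5.0)

# Sanity check: 10 parsecs gives a distance modulus of 0 by definition.
print(distance_modulus(10.0))  # 0.0
```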

At small redshifts, there is gravitational deceleration because the exchange radiation causing gravity (in any gravity mechanism*) is still going to cause a pull back on objects moving away. Hence, the models and data are all in agreement at small redshifts. The departure from the Lambda-CDM model becomes marked at large redshifts. To give an example, consider the extreme situation of redshift Z = 7. The Lambda-CDM model, once fitted to the data from supernovae at small redshifts, predicts a distance modulus magnitude of 49.2 at Z = 7. The gamma ray burst data best fit curve suggests a value of 48.2 +/- 1.5, and the no gravitational deceleration model at extreme redshifts predicts a value of 47.4.

General relativity contradicts restricted (special) relativity by allowing the velocity of light to vary due to deflection of light (which changes the velocity because, unlike speed, velocity is a vector which depends on direction), which also makes light appear to travel faster than *c*:

(Professor Lee Smolin discusses another alleged problem with restricted or special relativity in his book *The Trouble With Physics.* Smolin suggests an argument that the Planck scale is an uncontractable, physically real length which dominates the quantum scale, and that this contradicts the length-contraction term in special relativity. The result, as Smolin explains, was the suggestion of a modification to special relativity, whereby the relativistic effects disappear at the Planck scale – so the Planck scale is not contracted – but occur at larger scales. This modified scheme was called ‘doubly special relativity’, for obvious reasons of political expediency. This is something I don’t like the look of. If people want to work in the backwaters of special relativity and make it more special, they need to take the mathematical derivation and find a physical dynamic theory concerned with the Higgs field to explain how mass varies, and the Yang-Mills exchange radiation field to explain the dynamics of how things contract. In a previous post on this blog, I’ve given examples of research which addresses the contraction in terms of the Dirac sea, although that may not be the correct overall theory, seeing that pair production only occurs out to 1 fm from an electron, where the electric field is over 10^{18} V/m, i.e., above the IR cutoff. Clearly, the contraction in special relativity is due physically to distortion caused by variations in the gravity-causing Yang-Mills exchange radiation pressure when a body moves in a given direction relative to an observer. I don’t see any evidence for the Planck mass, length, or time, which come from dimensional analysis without any experimental proof or even a theory. Furthermore, the oft-made claim that the Planck length is the smallest possible distance you can get dimensionally from physical units is just plain wrong, because the radius of a black hole of the electron’s mass is far smaller than the Planck length.

The Planck mass, length and time are examples of abject speculation, labelled with a famous name, which become physically accepted facts for no physical reason whatsoever. In other words, those quantities are a religion of groupthink faith.)

If we use the Lorentzian transformation for redshift, *v* is always less than *c*; for *Z* = 7, *v* = 0.9692*c*, so *v* = 290,600 km/s, and from the Hubble law, *d = v/H* = 290,600/[70 (km/s)/Mparsec] = 4,150,000,000 parsec, hence distance modulus = -5 + 5 log_{10} *d* = 43.1

Using instead the Galilean transformation for apparent velocity for the purpose of this calculation, for *Z* = 7, *v* = 7*c* = 2,100,000 km/s, so the Hubble law gives *d = v/H* = 2,100,000/[70 (km/s)/Mparsec] = 30,000,000,000 parsec, hence distance modulus = -5 + 5 log_{10} *d* = 47.4. In fact, the fit is actually closer, because there would be some very weak gravitational deceleration, equivalent to a universal gravitational constant of *G*/8 for a redshift of *Z* = 7, due to spin-2 graviton redshift and *E = hf* energy ‘loss’ of redshifted gravitons. Results are similar in the spin-1 radiation pressure gravity model.
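The two distance-modulus estimates can be reproduced in a few lines (a sketch, assuming *H* = 70 (km/s)/Mpc as in the text, with the standard relativistic Doppler relation solved for *v*; function names are illustrative):

```python
import math

C_KMS = 299792.458  # speed of light, km/s
H0 = 70.0           # Hubble parameter, (km/s)/Mpc, as used in the text

def v_lorentzian(z):
    """Recession velocity from relativistic Doppler: 1+z = sqrt((1+b)/(1-b))."""
    s = (1.0 + z) ** 2
    return C_KMS * (s - 1.0) / (s + 1.0)

def v_galilean(z):
    """Naive Galilean relation z = v/c (velocity may exceed c)."""
    return C_KMS * z

def distance_modulus_from_v(v_kms):
    """Hubble-law distance d = v/H0 (Mpc), then mu = -5 + 5*log10(d in parsecs)."""
    d_parsec = (v_kms / H0) * 1.0e6
    return -5.0 + 5.0 * math.log10(d_parsec)

for label, v in [("Lorentzian", v_lorentzian(7)), ("Galilean", v_galilean(7))]:
    print(label, round(v), round(distance_modulus_from_v(v), 1))
# Lorentzian: v ~ 290,600 km/s, distance modulus ~ 43.1
# Galilean:   v ~ 2,100,000 km/s, distance modulus ~ 47.4
```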

We can learn about quantum gravity from existing cosmological data. Two theories replaced with one: get rid of gravitational deceleration (Friedmann solution) because exchange radiation is weakened by redshift over long distances, and you also get rid of dark energy into the bargain, because you no longer need to explain the lack of observed deceleration by inventing dark energy to offset it. The choice is:

(1) **Deceleration of universe due to gravity slowing down expansion** + **evolving dark energy to cause acceleration** = observed (non-decelerating) data.

(2) **Redshift of gravity causing exchange radiation (weakening gravity between relativistically receding masses)** = observed (non-decelerating) data.

Theory (2) is simpler and pre-dates (October 1996*) the ad hoc small positive, evolving CC in theory (1) which was only invented after Perlmutter discovered, in 1998, that the predicted Friedmann deceleration of the universe was not occurring. (Perlmutter used automated computer detection of supernova signatures directly from CCD telescope input.) Ockham’s razor tells us that the simplest theory (theory 2) is the most useful, and it is also totally non-ad hoc because it made this prediction ahead of the data.

However, theory (1) is the mainstream theory that is currently endorsed by Professor Sean Carroll and is in current textbooks. So if you want to learn orthodoxy, learn theory (1) and if you want to learn the best theory, learn theory (2). It all depends on whether doing “physics” means to you simply learning “existing orthodoxy (regardless of whether it has any real evidence or not)” to help you pass current exams, or whether you want to see something which has experimental confirmation behind it, and is going places:

“Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.”

– Imre Lakatos, *Science and Pseudo-Science,* pages 96-102 of Godfrey Vesey (editor), *Philosophy in the Open,* Open University Press, Milton Keynes, 1974.

* *Electronics World*: the exchange radiation gravity mechanism shows a dependence on the surrounding expansion of the universe, which prevents retardation of distant objects which are extremely redshifted as seen from our frame of reference. In this case, the absence of long range gravitational retardation on the expansion of the universe is due to a combination of the redshift of exchange radiation weakening the force between receding masses, and a lack of a net inward pressure (as seen from our frame of reference) on the most distant receding masses, because no receding mass lies beyond them to set up a gravity mechanism and slow them down as seen from our frame of reference.

In the simple momentum exchange (pushing) model for gravity which is due to spin-1 exchange radiation (unlike the spin-2 graviton idea for an ‘attraction’ resulting from exchange purely between two masses), the pushing mechanism gives rise to ‘attraction’ by recoiling and knocking masses together. Regardless of which model you use in the absence of the final theory of quantum gravity, there is no long range retardation.

(What may ‘really’ be occurring in a hypothetical – imaginary – common frame of reference in which objects are all considered at the same time after the big bang, instead of at a time after the big bang which gets less as you look to greater distances, is not known, cannot be known from observations, and therefore is speculative and not appropriate to the universe we actually see and experience light-velocity gravity effects in. We’re doing physics here, which means making predictions which are checkable.)

**UPDATE, 23 Feb 2007: Dr Lubos Motl’s brilliant numerological solution to the cosmological constant problem becomes mainstream!**

The small masses of neutrinos can be accounted for using the Standard Model by invoking a ‘Seesaw mechanism’ involving an algebraic solution to a 2×2 matrix containing the three known neutrino flavours plus a really massive undiscovered neutrino.

This gives neutrinos the very small observed mass-energy, on the order of the square of the electroweak energy scale (~246 GeV) divided by the GUT scale (~10^{16} GeV).
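The order-of-magnitude arithmetic is easy to check (a sketch of the estimate just quoted, using the scales given in the text):

```python
# Order-of-magnitude seesaw estimate from the text:
# m_nu ~ (electroweak scale)^2 / (GUT scale).

EW_SCALE_GEV = 246.0    # electroweak scale, GeV, as quoted above
GUT_SCALE_GEV = 1.0e16  # assumed grand-unification scale, GeV

m_nu_gev = EW_SCALE_GEV ** 2 / GUT_SCALE_GEV
m_nu_ev = m_nu_gev * 1.0e9  # convert GeV to eV

# ~ 6e-3 eV, the right order of magnitude for neutrino masses.
print(m_nu_ev)
```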

Dr Motl in 2005 noticed that the fourth power of this ratio (energy) is similar to the alleged cosmological constant (which has units of energy^4, providing that distance is measured in units of 1/energy, which is of course possible because the distance of closest approach of charged particles in Coulomb scattering is proportional to the reciprocal of the collision energy). So he suggested a matrix of cosmological constants in which the seesaw effect produces the right numerical result. Now other people are taking this seriously, and writing arXiv papers about it, which aren’t deleted as speculative numerology. Even Dr Woit is now writing favorable things about it, because he prefers this as an alternative to the anthropic landscape of supersymmetry:
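The numerical coincidence Dr Motl noticed can be checked directly (a rough sketch, not a derivation; the observed dark-energy scale of roughly 2×10^{-3} eV is an approximate figure I am assuming for comparison):

```python
# Fourth power of the seesaw scale, compared (in natural units, where
# distance ~ 1/energy) with an assumed observed dark-energy scale of
# ~ 2.3e-3 eV. A rough numerical check of the "seesaw CC" coincidence.

seesaw_scale_ev = (246.0 ** 2 / 1.0e16) * 1.0e9  # ~ 6e-3 eV, from the seesaw estimate
rho_seesaw = seesaw_scale_ev ** 4                # eV^4
rho_observed = (2.3e-3) ** 4                     # eV^4, assumed observed value

# The two agree to within roughly two orders of magnitude.
print(rho_seesaw, rho_observed, rho_seesaw / rho_observed)
```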

“… there’s a new preprint by Michael McGuigan which manages to cite both *Not Even Wrong* (the book), and a Lubos Motl blog entry. The citation of my book seems unnecessary, surely there are other sources critical of string-based unification that have priority. The article is about the “see-saw” mechanism for getting the right magnitude of the cosmological constant, and it is for this that Lubos’s blog gets a citation. This does seem to be a more promising idea about the CC than many. I for one think it will be a wonderful development if the field of particle theory turns around, stops promoting pseudo-science justified by the Weinberg anthropic CC “prediction”, and heads instead in a more promising direction, all based on an entry in Lubos’s blog…”

It is obvious what is occurring here. The whole of stringy M-theory is a mechanism-less numerology where you say, ‘let’s try 10 dimensions – look, that predicts supersymmetry! And 11 dimensions predicts gravity! Wow!’ So this numerology from Dr Motl *fits in beautifully with the rest of string theory.* It deserves a nice scientific prize.