D. R. Lunsford, Lubos Motl, and Quantum Gravity

“If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation.” – S. Hawking and L. Mlodinow, A Briefer History of Time, London, 2005, p15.

“… I was looking at papers that appeared in astro-ph that had large numbers of citations to see if any had much relevance to particle physics. The only candidates were papers about the vacuum energy and things like “phantom energy”.  It’s certainly true that astrophysical observations of a CC [cosmological constant, a “fix” to the Friedmann solution of general relativity, which is “explained” by the invention of “dark energy”] pose a serious challenge to fundamental particle physics, but unfortunately I don’t think anyone has a promising idea about what to do about this.” – Dr Woit.

The reason is that such promising ideas have been censored out of arXiv sections like astro-ph, much as Aristarchus of Samos and Copernicus were censored, for being too radical.  {Update: I’ve added a section about Dr Motl’s numerological solution to the cosmological constant problem at the end of this post.}

SO(3,3) as a unification of electrodynamics and general relativity: Lunsford had a unification scheme published; see http://www.springerlink.com/content/k748qg033wj44x11/

“Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177.

Lunsford suggests that [string theorists such as JD, U. of T.?] censored it off arXiv, see http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

It is, however, available at CERN: http://cdsweb.cern.ch/record/688763

The idea is to have three orthogonal time dimensions as well as the three usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract, it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions, and (2) require no cosmological constant:

(1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions; three expanding dimensions to describe the cosmos, and three contractable dimensions to describe matter and fields locally.

(2) All known real quantum field theories are Yang-Mills exchange radiation theories (i.e., QED, weak and QCD theories). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies, which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR), are very redshifted, you would expect that any exchange radiation will similarly be “redshifted”. The GR solutions which say that slowing should occur are the “evidence” for a small positive constant and hence dark energy (which provides the outward acceleration to offset the presumed inward directed gravitational acceleration).

Professor Philip Anderson argues at http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 that: “the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

The fact is, the flat universe isn’t accelerating; that alleged dark energy-produced acceleration is purely an artefact placed into the Lambda-CDM theory to get the theory to agree with post-1998 observations of supernova redshifts at extremely large distances.

Put another way, take out the GR gravitational deceleration, by allowing gravity to be a Yang-Mills quantum field theory in which redshift of gauge bosons due to the recession of gravitational charges (masses) weakens the gravity coupling constant G, and you can’t have anything but zero cosmological constant. The data only support a cosmological constant if you explicitly or implicitly assume that exchange radiation in quantum gravity isn’t redshifted.

The greatest galaxy redshift recorded is Z = 7, which implies a frequency shift of 7 + 1 = 8 fold, i.e., the redshifted light we receive from it has a frequency 8 times lower than that of the emitted light. Since Planck’s law says that the energy of a photon is directly proportional to its frequency (E = hf), the photons coming from that galaxy have only 1/8th or 12.5% of the energy they had when emitted. (The energy ‘loss’ doesn’t violate energy conservation; this is an analogous situation to firing an arrow at something which is moving away from you at nearly the velocity of the arrow. The arrow ‘loses’ most of its kinetic energy as observed by the target, which feels only a weak impact.)

Similarly, any spin-2 (attractive) graviton radiation being exchanged between the universe we see (centred on us, from our frame of reference) and a receding galaxy which has redshift of Z = 7, will have an energy of exactly 12.5% of the energy of the graviton radiation being exchanged with local masses.  Hence, the universal gravitational constant G will have an effective value, for the Z = 7 redshifted galaxy, of not G but only G/8.
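As a sanity check on the arithmetic above, here is a minimal Python sketch; the 1/(1 + Z) energy factor, and the resulting effective coupling G/(1 + Z), are this post’s hypothesis rather than established physics:

```python
# Sketch of the claimed scaling: received photon (or graviton) energy
# falls by the factor 1/(1 + Z), so on this hypothesis the effective
# gravitational coupling for a source at redshift Z is G/(1 + Z).
def energy_factor(z):
    """Fraction of emitted quantum energy received: E = hf, with f -> f/(1 + z)."""
    return 1.0 / (1.0 + z)

G = 6.674e-11  # m^3 kg^-1 s^-2

z = 7
print(energy_factor(z))      # 0.125, i.e. 12.5% of the emitted energy
print(G * energy_factor(z))  # effective coupling G/8 for the Z = 7 galaxy
```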

This allows us to make calculations.  Results are similar in the spin-1 gravity model which seems consistent with Lunsford’s unification where, it is clear, gravity and electromagnetism are two results of the same Yang-Mills exchange radiation scheme.

Here, observed gravitational attraction is caused by radiation pressure pushing nearby non-receding masses together, simply because they are shielding one another. The shielding, and hence the gravity, arises because nearby masses are not receding from one another: for a mass to exert a gauge boson radiation force on you, it must be receding from you, since the force arises by Newton’s 3rd law as the reaction to the Hubble outward force of recession in spacetime, F = ma where a = dv/dt ~ c/t ~ Hc, which has an equal reaction force directed opposite to the recession.

Here, the reduction in the effective value of the universal gravitational constant G for the situation of highly redshifted receding galaxies is due to the absence (as seen in our observable reference frame) of further matter at still greater distances (beyond the highly redshifted galaxy), which could produce an inward gauge boson pressure, against the particles of the galaxy, to slow down its recession.
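For scale, the acceleration a = dv/dt ~ c/t ~ Hc invoked in the mechanism above can be evaluated numerically; a minimal sketch, assuming H = 70 (km/s)/Mpc (the value used later in this post):

```python
# Numerical size of the 'outward' acceleration a ~ Hc used above.
Mpc = 3.08568025e22   # metres per megaparsec
H = 70e3 / Mpc        # Hubble parameter, s^-1 (assumed value)
c = 2.998e8           # speed of light, m/s

a = H * c
print(a)              # ~6.8e-10 m/s^2
```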

Look at the data plot, http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

The dark energy CC (lambda-CDM) model based on general relativity doesn’t fit the data well, which suggests that the CC would need to vary for different distances from us. It’s like adding epicycles within epicycles. At some stage you really need to question whether you definitely need a repulsive long range force (driven by a CC) to cancel out gravity at great distances, or whether you get better agreement by doing something else entirely, like the idea that any exchange radiation responsible for gravity is redshifted and weakened by relativistic recession velocities:

“… the flat universe is just not decelerating [ie, instead of there being gravitational deceleration PLUS a dark energy CC acceleration which offsets the gravitational deceleration, there is INSTEAD simply no long range gravity because the gravity causing exchange radiation gets redshifted and loses its energy], it isn’t really accelerating … ” – Professor Philip Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901

Here is a plot of the curve for the absence of gravitational deceleration at great redshifts, in direct comparison to all the empirical gamma ray burst data and in comparison to the mainstream Lambda-CDM model: http://thumbsnap.com/v/Jyelh1YV.gif.  Information about the definition of distance modulus and redshift is widely available and basic equations are shown.  For redshift with the Lorentzian transformation, Z = (1 + v/c)/(1 – v^2/c^2)^{1/2}, while for redshift with the Galilean transformation Z = v/c.

The data plotted doesn’t use either of these transformations: the redshift is determined directly by observation of the shift in the frequency of gamma rays (gamma ray bursts) or light (supernovae), while the distance modulus is determined directly by the relative intensity of the gamma ray burst (not the frequency) or the relative brightness of visible light (not wavelength or frequency).  The relationship of distance modulus to distance is simply: distance modulus = -5 + 5 log10 d, where d is distance in parsecs (1 parsec = 3.08568025 × 10^16 meters).
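The distance modulus relation quoted above is easy to encode; a short sketch:

```python
import math

def distance_modulus(d_parsec):
    """Distance modulus = -5 + 5 log10(d), with d in parsecs."""
    return -5 + 5 * math.log10(d_parsec)

# By definition, an object at 10 parsecs has distance modulus zero:
print(distance_modulus(10))   # 0.0
# 1 parsec = 3.08568025e16 m, so a source at 1e9 parsecs:
print(distance_modulus(1e9))  # 40.0
```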

At small redshifts, there is gravitational deceleration because exchange radiation causing gravity (in any gravity mechanism*) is still going to cause a pull back on objects moving away.  Hence, the models and data are all in agreement at small redshifts.  The departure from the Lambda-CDM model becomes marked at large redshifts.  To give an example, consider the extreme situation of redshift Z = 7.  The Lambda-CDM model, once fitted to the data from supernovae at small redshifts, predicts a distance modulus magnitude of 49.2 at Z = 7.  The gamma ray burst data best fit curve suggests a value of 48.2 +/- 1.5, and the no gravitational deceleration model at extreme redshifts predicts a value of 47.4.

General relativity contradicts restricted (special) relativity by allowing the velocity of light to vary due to deflection of light (which changes the velocity, because – unlike speed – velocity is a vector which depends on direction), which also makes light appear to travel faster than c:

‘All the distance covered by the light in the early universe gets increased by the expansion of the universe,’ explains Neil Cornish, an astrophysicist at Montana State University. ‘Think of it like compound interest.’

This article generated quite a few e-mails from readers who were perplexed or flat out could not believe the universe was just 13.7 billion years old yet 158 billion light-years wide. That suggests the speed of light has been exceeded, they argue. So SPACE.com asked Neil Cornish to explain further. Here is his response:

“The problem is that funny things happen in general relativity which appear to violate special relativity (nothing traveling faster than the speed of light and all that). Let’s go back to Hubble’s observation that distant galaxies appear to be moving away from us, and the more distant the galaxy, the faster it appears to move away. The constant of proportionality in that relationship is known as Hubble’s constant. One seemingly paradoxical consequence of Hubble’s observation is that galaxies sufficiently far away will be receding from us at a velocity faster than the speed of light. This distance is called the Hubble radius, and is commonly referred to as the horizon in analogy with a black hole horizon. In terms of special relativity, Hubble’s law appears to be a paradox. But in general relativity we interpret the apparent recession as being due to space expanding (the old raisins in a rising fruit loaf analogy). The galaxies themselves are not moving through space (at least not very much), but the space itself is growing so they appear to be moving apart. There is nothing in special or general relativity to prevent this apparent velocity from exceeding the speed of light.” [Emphasis added.]

(Professor Lee Smolin discusses another alleged problem with restricted or special relativity in his book The Trouble With Physics.  Smolin suggests an argument that the Planck scale is an uncontractable physically real length which dominates the quantum scale, and this contradicts the length-contraction term in special relativity.  The result, as Smolin explains, was the suggestion of a modification to special relativity, whereby the relativistic effects disappear at the Planck scale – so the Planck scale is not contracted – but occur at larger scales.  This modified scheme was called ‘doubly special relativity’ for obvious reasons of political expediency.  This is something I don’t like the look of.  If people want to work in the backwaters of special relativity and make it more special, they need to take the mathematical derivation and find a physical dynamic theory concerned with the Higgs field to explain how mass varies, and the Yang-Mills exchange radiation field to explain the dynamics of how things contract.  In a previous post on this blog, I’ve given examples of research which addresses the contraction in terms of the Dirac sea, although that may not be the correct overall theory, seeing that pair production only occurs out to 1 fm from an electron, where the electric field is over 10^18 v/m, i.e., above the IR cutoff.  Clearly, the contraction in special relativity is due physically to distortion caused by variations in the gravity-causing Yang-Mills exchange radiation pressure when a body moves in a given direction relative to an observer.  I don’t see any evidence for the Planck mass, length, or time, which come from dimensional analysis without any experimental proof or even a theory.  Furthermore, the oft-made claim that the Planck length is the smallest possible distance you can get dimensionally from physical units is just plain wrong, because the radius of a black hole of the electron mass is far smaller than the Planck length.  The Planck mass, length and time are examples of abject speculation, labelled with a famous name, which become physically accepted facts for no physical reason whatsoever.  In other words, those quantities are a religion of groupthink faith.)

If we use the Lorentzian transformation for redshift, v is always less than c, and for Z = 7, v = 0.9844c, so v = 295,000 km/s, and from the Hubble law, d = v/H = 295,000/[70 (km/s)/Mparsec] = 4,220,000,000 parsec, hence distance modulus = -5 + 5 log10 d = 43.1

Using instead the Galilean transformation for apparent velocity for the purpose of this calculation, for Z = 7, v = 7c = 2,100,000 km/s, so the Hubble law gives d = v/H = 2,100,000/[70 (km/s)/Mparsec] = 30,000,000,000 parsec, hence distance modulus = -5 + 5 log10 d = 47.4.   In fact, the fit is actually closer, because there would be some very weak gravitational deceleration, equivalent to a universal gravitational constant of G/8 for a redshift of Z = 7, due to spin-2 graviton redshift and E = hf energy ‘loss’ of redshifted gravitons.  Results are similar in the spin-1 radiation pressure gravity model.
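Both worked examples above can be reproduced directly; a sketch taking H = 70 (km/s)/Mpc and the velocities quoted in the text as given:

```python
import math

def mu_from_velocity(v_km_s, H=70.0):
    """Distance modulus via the Hubble law d = v/H (d converted to parsecs)."""
    d_parsec = (v_km_s / H) * 1e6   # v/H gives megaparsecs
    return -5 + 5 * math.log10(d_parsec)

print(mu_from_velocity(295000))    # Lorentzian case, v = 0.9844c: ~43.1
print(mu_from_velocity(2100000))   # Galilean case, v = 7c: ~47.4
```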

We can learn about quantum gravity from existing cosmological data.  Two theories replaced with one: get rid of gravitational deceleration (Friedmann solution) because exchange radiation is weakened by redshift over long distances, and you also get rid of dark energy into the bargain, because you no longer need to explain the lack of observed deceleration by inventing dark energy to offset it.  The choice is:

(1) Deceleration of universe due to gravity slowing down expansion + evolving dark energy to cause acceleration = observed (non-decelerating) data.

(2) Redshift of gravity causing exchange radiation (weakening gravity between relativistically receding masses) = observed (non-decelerating) data.

Theory (2) is simpler and pre-dates (October 1996*) the ad hoc small positive, evolving CC in theory (1) which was only invented after Perlmutter discovered, in 1998, that the predicted Friedmann deceleration of the universe was not occurring.  (Perlmutter used automated computer detection of supernova signatures directly from CCD telescope input.)  Ockham’s razor tells us that the simplest theory (theory 2) is the most useful, and it is also totally non-ad hoc because it made this prediction ahead of the data.

However, theory (1) is the mainstream theory that is currently endorsed by Professor Sean Carroll and is in current textbooks.  So if you want to learn orthodoxy, learn theory (1) and if you want to learn the best theory, learn theory (2).  It all depends on whether doing “physics” means to you simply learning “existing orthodoxy (regardless of whether it has any real evidence or not)” to help you pass current exams, or whether you want to see something which has experimental confirmation behind it, and is going places:

“Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.”

– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

*Electronics World: the exchange radiation gravity mechanism shows a dependence on the surrounding expansion of the universe, which prevents retardation of distant objects which are extremely redshifted as seen from our frame of reference.  In this case, the absence of long range gravitational retardation on the expansion of the universe is due to a combination of redshift of exchange radiation weakening the force between receding masses, and a lack of a net inward pressure (as seen from our frame of reference) on the most distant receding masses, because no receding mass lies beyond them to set up a gravity mechanism and slow them down as seen from our frame of reference.

In the simple momentum exchange (pushing) model for gravity which is due to spin-1 exchange radiation (unlike the spin-2 graviton idea for an ‘attraction’ resulting from exchange purely between two masses), the pushing mechanism gives rise to ‘attraction’ by recoiling and knocking masses together.  Regardless of which model you use in the absence of the final theory of quantum gravity, there is no long range retardation.

(What may ‘really’ be occurring in a hypothetical – imaginary – common frame of reference in which objects are all considered at the same time after the big bang, instead of at a time after the big bang which gets less as you look to greater distances, is not known, cannot be known from observations, and therefore is speculative and not appropriate to the universe we actually see and experience light-velocity gravity effects in.  We’re doing physics here, which means making predictions which are checkable.)

UPDATE, 23 Feb 2007: Dr Lubos Motl’s brilliant numerological solution to the cosmological constant problem becomes mainstream!

The small masses of neutrinos can be accounted for using the Standard Model by invoking a ‘seesaw mechanism’: an algebraic solution to a 2×2 mass matrix pairing each of the three known light neutrino flavours with a really massive undiscovered neutrino.

This gives neutrinos the very small observed mass-energy, on the order of the square of the electroweak energy scale (~246 GeV) divided by the GUT scale (~10^16 GeV).

Dr Motl in 2005 noticed that the fourth power of this energy scale is similar to the alleged cosmological constant (which has units of energy^4, provided that distance is measured in units of 1/energy, which is of course possible because the distance of closest approach of charged particles in Coulomb scattering is proportional to the reciprocal of the collision energy).  So he suggested a matrix of cosmological constants in which the seesaw effect produces the right numerical result.  Now other people are taking this seriously, and writing arXiv papers about it, which aren’t deleted as speculative numerology.  Even Dr Woit is now writing favorable things about it, because he prefers this as an alternative to the anthropic landscape of supersymmetry:
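The numerology is easy to check; a sketch, where the ‘observed’ dark-energy scale of roughly 2 × 10^-3 eV is supplied as an assumption for comparison:

```python
# Seesaw scale (electroweak scale squared, divided by the GUT scale),
# expressed in eV, and its fourth power compared with an assumed
# observed dark-energy scale.
EW = 246.0       # electroweak scale, GeV
GUT = 1e16       # GUT scale, GeV

m_eV = (EW**2 / GUT) * 1e9    # ~6e-3 eV, the neutrino-mass ballpark
print(m_eV)

print(m_eV**4)                # ~1.3e-9 eV^4
print((2e-3)**4)              # ~1.6e-11 eV^4: the same rough ballpark
```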

“… there’s a new preprint by Michael McGuigan which manages to cite both Not Even Wrong (the book), and a Lubos Motl blog entry. The citation of my book seems unnecessary, surely there are other sources critical of string-based unification that have priority. The article is about the “see-saw” mechanism for getting the right magnitude of the cosmological constant, and it is for this that Lubos’s blog gets a citation. This does seem to be a more promising idea about the CC than many. I for one think it will be a wonderful development if the field of particle theory turns around, stops promoting pseudo-science justified by the Weinberg anthropic CC “prediction”, and heads instead in a more promising direction, all based on an entry in Lubos’s blog…”

It is obvious what is occurring here.  The whole of stringy M-theory is mechanism-less numerology where you say, ‘let’s try 10 dimensions – look, that predicts supersymmetry!  And 11 dimensions predicts gravity!  Wow!’  So this numerology from Dr Motl fits in beautifully with the rest of string theory.  It deserves a nice scientific prize.


19 thoughts on “D. R. Lunsford, Lubos Motl, and Quantum Gravity”

  1. Copy of an email

    From: Nigel Cook
    To: Mario Rabinowitz
    Sent: Friday, January 26, 2007 3:45 PM

    Dear Mario,

    Congratulations and thank you very much for the attached copy of the paper as PDF, which is very interesting. I’m very pleased indeed that you have published in International Journal of Theoretical Physics.

    I’d like to raise the issue of Dr Lunsford’s paper in the same journal, v43, 2004, n1, pp 161-77, available as PDF at http://cdsweb.cern.ch/record/688763 .

    Lunsford unifies general relativity and Maxwellian electrodynamics by showing that they are both different aspects of the same thing instead of being separate quantum theories, and by adopting 3 time dimensions in addition to the 3 space dimensions. This makes sense to me for a wide variety of reasons including the fact that by abstract calculation Lunsford shows that the cosmological constant must vanish for unification to be achieved, see my comments at for example http://electrogravity.blogspot.com/2006/12/there-are-two-types-of-loops-behind.html and my recent post https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/ .

    One reason for SO(3,3) rather than the usual SO(3,1) is the dumping of Lorentzian invariance (the contraction, time dilation etc normally attributed to Lorentzian invariance can come from a simple physical mechanism, e.g., the vacuum radiation pressure contracts moving matter in the direction of its motion, and time slows down because the spacetime ratio distance/time = constant = c). SO(3,1) is supposed to give the two SU(2) groups of the weak force but there is so much speculation persisting in electroweak unification (Higgs field details, electroweak symmetry breaking mechanism) that I’m not worried about that.

    More important, we see three expanding dimensions of space in the Big Bang universe, while the three dimensions of matter are contractable but apparently not expanding (the electromagnetic bonding force and gravity would prevent matter from expanding like distances between clumps of receding matter in space).

    It seems perfectly natural, in view of the gravity idea with outward Big Bang force F = m*dv/dt, where dv/dt = c/t = cH, that three orthogonal time dimensions describe the expansion of the universe. (Local time dilation is simply a slowing down of motions, such as electron oscillations in a caesium clock or the swing of a pendulum, due to motion or gravity effects.)

    I’ve always thought it is nonsense to talk of distances to distant receding objects, because of the ambiguity over whether you allow for how much further the objects recede from you during the time the light is in transit to you. It’s completely unambiguous to instead discuss the time in the past, which is why distances expressed as ‘light years’ of time make so much sense. The tragedy is that Hubble didn’t do this in 1929 when he formulated the Hubble law in the form v = HR instead of writing v = (Hc)t, where t is the time in the past at which the galaxy is located.

    If Hubble had written the recession as a variation of velocity with time past in our (observational) frame of reference, the constant would have units of acceleration which would have been a clue: a = v/t = dv/dt = Hc.

    Newton’s 2nd and 3rd laws would then give a clue to how to get quantitative results out of the LeSage idea. Of course the problem is that the reality of gauge boson exchange radiation behind electromagnetism developed too late, even if Hubble had got his experimental results expressed usefully. Hubble died in 1953, the Yang-Mills exchange radiation equations were published in 1954, and the Standard Model wasn’t built and confirmed until the 1970s.

    By the way, I’ve made the following clarification to the shielding mechanism:

    “… observed gravitational attraction is caused by radiation pressure pushing nearby non-receding masses together, simply because they are shielding one another. The shielding, and hence the gravity, arises because nearby masses are not receding from one another: for a mass to exert a gauge boson radiation force on you, it must be receding from you, since the force arises by Newton’s 3rd law as the reaction to the Hubble outward force of recession in spacetime, F = ma where a = dv/dt ~ c/t ~ Hc, which has an equal reaction force directed [opposite] to the recession.”

    https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/

    I do have something about this on my old home page, but it isn’t so clear.

    1. Nearby mass shields you from exchange radiation simply because the nearby mass isn’t receding from you, and therefore can’t transmit to you a gauge boson force by Newton’s 3rd law (to compensate for its outward force). Hence nearby masses automatically shield each other, simply because they aren’t receding significantly with respect to one another.

    2. Distant mass, which is receding from you, exchanges force causing radiation with you, as long as it isn’t so far from you that the redshift of the radiation weakens the force to insignificance. …

    Best wishes,
    Nigel

    More about Dr Lunsford’s work:

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

    “… I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint).

    “Nevertheless, my own opinion is that some things in science just can’t be ignored or you aren’t doing science, which is not a series of wacky revolutions. GR is undeniably correct on some level – not only does it make accurate predictions, it is also very tight math. There are certain steps in the evolution of science that are not optional – you can’t make a gravity theory that doesn’t in some sense incorporate GR at this point, any more than you can make one that ignores Newton on that level. Anyone who claims that Einstein’s analysis is all wrong is probably really a crackpot.

    “(BTW my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM….”

    A few notes: the Einstein-Hilbert field equation of general relativity derived in November 1915 (published in 1916) is not wrong, except by omission of quantum gravity mechanism issues like the redshift of exchange radiation between relativistically receding masses in an expanding universe: it is based entirely on empirical facts, and the departures from Newtonian gravitation are caused by the c-speed limit of energy flow and by the principle of conservation of mass-energy. The problem comes with Einstein’s cosmological constant, published in 1917. Einstein tried to stabilize the general relativity universe by a repulsive anti-gravity term, the cosmological constant (CC). His idea was that this repulsive CC effect increases with distance. For small distances, it is so small it has no effect at all on gravity in the solar system. But at very large distances it becomes big enough to offset gravity between galaxies at the distance of their average separation in the universe. Hence, Einstein believed in 1917, there is no gravity acting between most galaxies because of the CC, so the universe is static.

    Hubble overturned this in 1929 by observing that the universe isn’t static but galaxies are redshifted and thus (despite all sorts of speculative uncheckable or simply wrong ‘alternative’ explanations for redshift) are receding from us. (If you think the redshift is due to dust or gas or magic, you are not being scientific, because the whole spectrum of emission lines is uniformly redshifted and the cosmic background radiation is redshifted while maintaining the most perfect Planck black body radiation spectrum ever observed; there is no evidence of scattering or anything but recession caused redshift.)

    The big bang had been predicted by Erasmus Darwin and others (Olbers’ paradox is explained by the weakening of light due to redshift of distant receding galaxies), see http://www.math.columbia.edu/~woit/wordpress/?p=273#comment-5322

    Einstein then admitted that the cosmological constant had been his ‘biggest blunder’. Sadly, instead of examining quantum gravitation, and using the failure of the cosmological constant as a warning that general relativity was not a complete theory of gravity, people tried to produce a ‘landscape’ of solutions from general relativity, and force-fit one of these to describe the observed Hubble recession and other big bang features. Alexander Friedmann in 1922 made the speculative guess that wherever you are in the universe, the universe would look the same to you (this is the ‘boundless space’ or expanding currant cake idea where all the currants are receding from one another).

    Nobody has been further than the moon, so we simply don’t know whether this is true. But the finite velocity of light means that wherever you are, you are always looking backwards in time with increasing distance around you, so in that respect at least it is probably a useful approximation. However, it is severely at odds with the simple model of the universe as an expanding fireball in space. There is no proof one way or the other, and it is a severe fallacy to adopt a theory and discard an alternative on the basis of speculation, not fact (that was Ptolemy’s error).

    Friedmann came up with a solution that a general relativity universe with a “critical density” of Rho = (3/8)(H^2)/(Pi*G) would expand at an ever decreasing rate. Any more than that density and it would be able to collapse at some time in the future, like a bullet fired away from the earth with less than the escape velocity and eventually falling back (assuming Friedmann’s assumptions were correct, and that gravity was not caused by the effect of surrounding expansion), while if it has less density it would be “open” and would expand forever at some small residual velocity after deceleration (like a bullet fired away from the earth with more than escape velocity).
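    (As a numerical sketch, assuming H = 70 (km/s)/Mpc, the critical density Rho = 3H^2/(8*Pi*G) works out to a few hydrogen atoms per cubic metre:)

    ```python
    import math

    Mpc = 3.08568025e22   # metres per megaparsec
    H = 70e3 / Mpc        # Hubble parameter, s^-1 (assumed value)
    G = 6.674e-11         # m^3 kg^-1 s^-2

    rho_crit = 3 * H**2 / (8 * math.pi * G)
    print(rho_crit)       # ~9.2e-27 kg/m^3
    ```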

    Friedmann in fact only analysed the “critical density” solution, and the collapse and open universe solutions were first discovered by Howard Robertson and Arthur Walker in 1935. Hence, the set of solutions (which are described by a single equation with different values of the density parameter) are called Friedmann-Robertson-Walker (F-R-W) models.

    The problem here is that with such a variety of solutions, general relativity as a cosmological model loses falsifiability, because it can describe anything at all, and whatever nature turns out to be, a solution should be on hand to cover it. This means nothing is risked, and nature cannot determine whether the theory is right. Similarly, if you ‘predict’ that a tossed coin will turn up either heads or tails, you have not been very clever.

    There is nothing awesome about a theory which is not specific being able to model anything. The key failure, however, is the ad hoc addition of a small, positive, evolving, cosmological constant which has been required to force the F-R-W model to fit data since Perlmutter discovered in 1999 that supernovae were receding faster than predicted.

    Really, we should ask why any of the F-R-W solutions should describe reality. Since they are all based on general relativity which doesn’t contain quantum gravity, and all the other forces of nature (the strong-weak-electromagnetic SU(3)×SU(2)×U(1) standard model) are Yang-Mills exchange radiation forces, it should be expected that gravity will similarly be an exchange radiation force, resulting in issues like redshift of exchange radiation causing gravitational field strength energy depletion between masses receding rapidly from each other at great distances. All this is ignored and censored out by everyone.

    It is not a case that general relativity is wrong, but just incomplete regards quantum gravity mechanism effects. General relativity’s general covariance supersedes special relativity. The fact is that general relativity so far as it goes in the basic field equation (ignoring cosmology, cosmological constant, etc.) is just a mathematical model based on empirical facts, and the “weird results” stem from the modification Einstein arrived at in November 1915 which is a requirement for energy conservation, ensuring that the divergence of the field source term is zero.

    It also supersedes special relativity by allowing velocity of light (the direction part of velocity, not the speed part of velocity) to vary when gravity deflects light, which means that light velocity is varying all the time in this universe. So you need to say that special relativity is mathematically approximate and regarded as wrong in the principle of constant velocity of light by people who know what they are doing (they generally correct the principle to read constant speed of light, not constant velocity). None of the disputes you list have any merit, because the source of the predictions is experimental not speculative; we know energy is conserved and that the only mathematical way you can put this into gravity is the way Einstein/Hilbert did. It’s interesting that you don’t include any mainstream objections to general relativity modelling. The main one at present is the fact that you need some kind of “evolving dark energy” (i.e. a non-constant “cosmological constant”) to make general relativity with fixed G model astronomical observations of recession. Then Lee Smolin’s “doubly special relativity” and his “background independence” for general relativity/quantum gravity, whereby the choice of the metric is considered to be the source of conflicts between quantum field theory and general relativity, is vital.

    The important thing is the physical basis of general relativity. The role of experimental confirmation is called marketing, or sometimes publicity. Nobody at all in the media was interested in Einstein until J.J. Thomson, president of the Royal Society or whatever, in 1919 accepted Eddington’s massaged eclipse data and declared Einstein a genius on a par with Newton.

    The deeper hidden message is the fact that Eddington did not just present massaged data (omitting stars whose deflections were “too wrong” by his arbitrary judgement) that fitted the prediction; what probably swayed the media and J.J. Thomson was a lengthy theoretical “Report” Eddington wrote about general relativity, explaining the basis of general relativity and how the prediction was arrived at. Eddington explains in his 1919 Report on general relativity that light passing the sun has both the Newtonian deflection (which would be the same for a bullet fired past the sun at speed much less than c) and an additional equal deflection.

    Looking at Eddington’s analysis in the paper, it is clear to me at least that what is occurring is that a bullet passing the sun gets both speeded up and deflected as it passes the sun. If the bullet’s deflection is small enough that the distance from the sun is not appreciably reduced, there might be no net change in speed because the speeding up reaches a maximum when the bullet is closest to the sun, and then the bullet loses some of that speed by gravitational retardation as it travels on, away from the sun.

    In the case of light, it can’t vary in speed, only in velocity. Energy is transversely deflected by gravity. If light travels parallel to a gravitational field line instead of perpendicularly to a gravitational field line, light will not be deflected in direction but will be shifted in frequency (increased if it is travelling into a stronger field, reduced if it is travelling into a weaker field). As light passes the sun, it can’t speed up, so it is only deflected. For a slow bullet which is deflected by gravity, at closest approach half of the gained gravitational potential energy is used to speed up the bullet, and half is used to deflect the direction of the bullet. This immediately predicts that for light – because light can only be deflected and never speeded up by the gravitational potential energy it gains – you get exactly twice the Newtonian deflection, just as Einstein’s field equation predicts.

    There is no speculation involved at all. If Ivor accepts a c-speed universe, he is mechanically forced to accept that Einstein’s prediction that starlight passing the sun will be deflected twice as much as Newton’s acceleration a = MG/r^2 formula suggests.

    If you accept that light can never exceed c (although it can go more slowly, for example in glass), then you are forced to accept that in a vacuum gravity deflects light by twice the Newtonian amount.
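    (A numerical sketch of that factor of two, using standard solar values: the deflection for a ray grazing the sun is 4GM/(c^2 r), twice the Newtonian 2GM/(c^2 r):)

    ```python
    import math

    G = 6.674e-11   # m^3 kg^-1 s^-2
    M = 1.989e30    # solar mass, kg
    r = 6.96e8      # solar radius (closest approach), m
    c = 2.998e8     # m/s

    newtonian = 2 * G * M / (c**2 * r)   # radians
    einstein = 2 * newtonian             # GR: twice the Newtonian value
    to_arcsec = 180 * 3600 / math.pi
    print(newtonian * to_arcsec)  # ~0.87 arcsec
    print(einstein * to_arcsec)   # ~1.75 arcsec, the 1919 eclipse figure
    ```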

    The general flat-space Schwarzschild metric is a pythagorean sum of elements for three orthogonal directions and time, with contraction of radial distance in the direction of a gravitational field line, and contraction (or dilation) of time, occurring both by the factor [1 – GM/(rc^2)].

    If you think about the kinetic energy which is equivalent to gravitational potential energy, you get v^2 = 2GM/x. Hence the gamma-type factor (1 – (v/c)^2)^{1/2} = 1 – ½(v/c)^2 + … = [1 – 2GM/(xc^2)]^{1/2} = 1 – GM/(xc^2) + …, by the binomial approximation.

    This is the interface between SR and GR gamma factors: they are equivalent. But for spherical symmetry of the gravitational field around a mass, the contraction is spread over three orthogonal dimensions (since gravity field lines spread out in three dimensions), unlike the SR contraction which only occurs in the direction of motion. Hence, gravity contracts the earth not by the distance GM/c^2 but by only (1/3)GM/c^2, which is about 1.5 mm for the Earth.
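    (Checking the 1.5 mm figure, a sketch with standard Earth values:)

    ```python
    G = 6.674e-11   # m^3 kg^-1 s^-2
    M = 5.972e24    # Earth mass, kg
    c = 2.998e8     # m/s

    contraction = G * M / (3 * c**2)
    print(contraction)   # ~1.5e-3 m, i.e. about 1.5 mm, as stated above
    ```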

    “If an theory doesn’t match experiment, modify the theory until it does. That’s the fundamental nature of science.” – Professor Landis, email to me dated January 18, 2007 11:11 PM.

    “Yep. The old theory gets modified when new data comes. This is the way science is done; our knowledge of the universe evolves. I’m surprised you think it surprising.” – Professor Landis, email to me dated January 18, 2007 11:11 PM.

    The problem here is that you can endlessly modify a theory to make it fit the facts: Ptolemy’s mathematical theory of epicycles, attempts to save the elastic solid aether, and so on, are examples. Look at the data plot,
    http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

    The CC model doesn’t fit the data, which seem to suggest that the CC would need to vary for different distances from us. It’s like adding epicycles within epicycles. At some stage you really need to question whether you definitely need a repulsive long range force (CC) to cancel out gravity at great distances, or whether you get better agreement by doing something else entirely, like the idea that any exchange radiation causing gravity is redshifted and weakened by recession:

    ‘the flat universe is just not decelerating [ie, no long range gravitational retardation on expansion], it isn’t really accelerating’ – Prof. Philip Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901

    ‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

    – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

    Quantum field theory has evidence for spacetime “loops” of charge pairs appearing and disappearing (each cycle being a “loop”) in the vacuum only closer than 1 fm to a unit charge. It requires an intense field. If the vacuum charges could be polarized (with a “displacement current” resulting from the polarization in an electric field) at greater distances (i.e. electric fields weaker than 10^20 v/m), the polarization would completely cancel all electric charges, instead of leaving a small residue which we can observe. So clearly, there is a physical conflict between the quantum field theory description of vacuum charge and Maxwell’s displacement current.

    The issue is then whether you are interested in finding out what is going on, or are satisfied to come up with a mathematical model which fits the facts like Ptolemy’s mathematical model of the Earth centred universe fitted the measurements – by epicycles.

    From the factual evidence, quantum field theory is the least in error of all physical theories, second is general relativity (minus cosmological applications), and third is classical Maxwellian theory. Notice that the 1954 Yang-Mills equation which lies at the heart of quantum field theory is a modification of, and hence in some ways a replacement for, Maxwell’s equation. However, it is based on accountancy and symmetry principles, rather than physical mechanism.

    It’s clear that within 1 fm or so from a fundamental particle, the vacuum is a chaotic sea of pair production-annihilation of charges. These randomly appearing and annihilating vacuum charges deflect the particle chaotically on small scales, such as in an atom, although their influence statistically cancels out on large distance scales. Beyond the 1 fm range, you are in a physics regime below the infrared cutoff of QFT, so there are no vacuum charges appearing and disappearing any more; all you have there is radiation exchange, which causes gravity and electromagnetic forces.

    This physical picture from quantum field theory contrasts with Maxwell’s model. To me the solution to displacement current is simple. The word “electron” means a source of electric field, with a magnetic dipole moment, spin, and mass. Electromagnetic radiation is emitted when charge is accelerated. It is important to ask what “charge” property has to be accelerated. From macroscopic experiments on oscillating charged spheres, the radio emission depends just on the acceleration of an electric field.

    Hence, you don’t actually need the acceleration of electric charges in order to produce the effect of electromagnetic radiation emission; you simply need an accelerating electric field. In Maxwell’s model, a time-varying electric field induces a “displacement current” of vacuum charges due to polarization of the charges in the electric field, and the “displacement current” then causes a curling magnetic field, which by Faraday’s law of induction causes an electric field to repeat the process.

    The actual nature of light is slightly different: a time-varying electric field is the source for emission of electromagnetic radiation. This is because a time-varying electric field is itself equivalent to accelerating charge as regards radio emission (since it is that property of accelerating charge which causes radio emission).

    The time-varying electric field therefore results directly in the creation of a magnetic field, because a magnetic field is part of electromagnetic radiation.

    The time varying electric field has a direction at right angles to the direction of propagation of light. I’ll leave it there. The question is, do you really think physics can go on endlessly by focussing on mathematics and ignoring questions concerning a physical interpretation? Even in extremely mathematical string theory, there is the need for some kind of physical ideas and concepts to make sense of mathematical descriptions.

    Personally I think string theory has too much physical interpretation – bosonic superpartners for every fermion, 11-d supergravity as a brane on 11-d superstrings – and too little useful mathematics (actually, it has no useful predictive formulae at all, so it is totally useless). Nobody has observed the spin-2 gravitons or Planck scale unification which it claims to model, and even if they did, that would be no indication the string theory (rather than an alternative) was uniquely successful. There may be numerous ad hoc ways to model gravitons and Planck scale unification.

    The message of string theory is that mainstream physicists form a dominating group-think society which places cohesion and unity above mere scientific credibility.

    “Please explain why it does not flow into matter that is not conveniently placed and how it discriminates. How would you explain the gravitational attraction between two large lead spheres suspended adjacently (as occurs at London Science Museum)?” – Guy Grantham, email to me January 16, 2007 9:29 AM.

    It’s the gauge boson radiation that causes the pressure on matter (i.e. the subatomic particles, not macroscopic matter as it appears, which we know is mostly empty), and such radiation is directional.

    The argument that shadows will be filled in by non-radial (eg sideways) components of pressure was the major argument against LeSage’s material aether push gravity theory.

    The aether exists as a Dirac fluid closer than 1 fm to an electron, where you have a nice, strong, disruptive electric field strength of 10^20 v/m to break it up into a fluid. Beyond that, whatever “aether” there is, is too locked up by bonding forces to be polarized and thus it is unable to really move much if at all. The only way you will work out what it is at low energy (beyond 1 fm, or the distances electrons approach in collisions at an energy of less than 1 MeV per collision) is to work out all the stuff you can detect by high-energy physics above the IR cutoff (within 1 fm) and use those results to work out a model which both produces all of that, tells you the low energy structure of the vacuum, and also makes some other predictions, so that it can actually be verified scientifically by experimentation.

    Hence, what you need to do is to get a complete understanding of how the Standard Model (or some other equally good approximation to high energy phenomena) can arise with some understanding of how to resolve existing problems like electroweak symmetry breaking, in a predictive way that exceeds existing understanding.

    The LeSage mechanism is actually the pion exchange strong nuclear force in the vacuum. There is a limit on the range, because the pions aren’t directional (the Dirac sea within 1 fm from a particle is a fluid assembly of the chaotic motions of particles), unlike the radiation which travels through the non-fluid vacuum beyond 1 fm range (gravity and electromagnetism). The pion pressure only pushes protons and neutrons together if they are close enough that there is not room for too many pions to appear between two particles and neutralize the pressure: similarly, a rubber sucker only sticks to surfaces smooth enough to exclude air. If air gets between the sucker and the surface, the pressure on both sides of the sucker is equalized, and it won’t be “attracted” (pushed to the surface by ambient air pressure).

    See https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/

    Newton showed that for a sphere of either uniform density or radially symmetric density variation, the total gravity effect on the mass is the same as if the mass is all at a “point” in the middle of the Earth.

    This is precisely why you get the 9.8 m/(s^2) acceleration from g = MG/(r^2) where r is radius to Earth’s centre. You don’t have to sum the contributions from all the atoms located in different places throughout the earth. The vector sum is the same as if all the mass was in the middle, instead of being spread out.

    For example, the mass of dirt directly under your feet produces a relatively big contribution because it is closer to you, but the effect is cancelled by mass on the other side of the Earth, which is twice as far from you as the middle of the earth, and this contributes very little.
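    (A sketch of the point-mass result with standard Earth values:)

    ```python
    G = 6.674e-11   # m^3 kg^-1 s^-2
    M = 5.972e24    # Earth mass, kg
    r = 6.371e6     # mean Earth radius, m

    g = G * M / r**2
    print(g)        # ~9.8 m/s^2: the whole Earth acts as a point mass at its centre
    ```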

    Newton’s genius was in dealing with this effect. He also dealt with the real situation, where the earth’s polar radius is less than its equatorial radius, predicting the precession of the equinoxes (Earth was considered a flattened sphere or “oblate spheroid”, although more recent data from spacecraft show it is actually slightly pear shaped).

    The problem with general relativity is that it is Newtonian gravity in tensor calculus + equivalence principle between inertial and gravitational masses + velocity of light limit c + conservation of mass-energy. Hence, it doesn’t contain a theory in any mechanical sense, any more than Newton’s mathematics is a theory.

    Newton’s theory suggests for instance that a bullet passing the sun will be deflected by a maximum acceleration towards the sun of a = MG/(r^2), where M is the mass of the sun and r is the distance of closest approach (assuming that the deflection itself is small, i.e., not great enough to vary r appreciably).

    Einstein’s general relativity cleverly shows that a ray of light can’t be speeded up at all. A relatively slow (compared to c) bullet passing the sun at closest approach transforms 50% of the gravitational potential energy gained into deflection of direction and 50% into increased speed.

    A ray of light, because it can’t be speeded up, transforms 100% of the gravitational potential energy gained into additional deflection. Hence it is deflected by twice the Newtonian amount.

    That sort of thing is obfuscated for many by the maths of general relativity. I would only trust a theory where there is an experimentally checked and confirmed physical mechanism in it. Otherwise, you don’t know if it is solid physics or speculation (like the mechanical aether of Maxwell), which can also be duplicated by other, more nearly complete and correct, “alternative” theories.

    An example of a correct, experimentally confirmed mechanism for a physical theory which previously lacked one is the mechanism for buoyancy in Archimedes’ law. The theory of buoyancy as due to gas pressure varying with height took a long time to arrive.

    The precedent for theory coming after evidence is Archimedes’ proof of the law of buoyancy in his work “On Floating Bodies”. The man first observed that he displaced a quantity of water when stepping into a bath. But he afterwards cooked up a proof. Nobody uses Popper’s criterion of a falsifiable theory to dismiss Archimedes because experimental proof happened to come before theory.

    Archimedes’ proof just notes that after equilibrium is established the water pressure (which he makes no attempt to calculate) will be the same at equal depths under the water, regardless of whether there happens to be a floating body above the position. Hence, whatever the mass of the floating body above that fixed underwater location, it must be exactly compensated by a displacement of water, which keeps the pressure constant.

    He doesn’t tell you the mechanism for buoyancy. The simplest case is actually the lighter than air balloon: the air pressure pushing up on its base is slightly higher than that pushing down from above, because there is a fall in air pressure as you go upwards (air has a density of 1.225 kg per cubic metre). Only the volume of the balloon counts for calculating buoyancy; the shape of the balloon is of no concern, because if you have a pancake shaped balloon, although that maximises the area for pressure to act on, the vertical extent is reduced. The upthrust on a spherical balloon of equal volume is just the same, because the increase in the vertical pressure gradient due to its thickness compensates for the reduced horizontal area as compared to a pancake. The physics of buoyancy is the same for a floating ship: pressure increases with depth and total force depends on cross sectional area multiplied by depth, thus volume.
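    (A sketch of the volume-only rule, using the air density quoted above:)

    ```python
    rho_air = 1.225   # kg/m^3, sea-level air density quoted above
    g = 9.81          # m/s^2

    def upthrust(volume_m3):
        """Buoyant force depends only on displaced volume, not on shape."""
        return rho_air * volume_m3 * g

    # A 1 m^3 balloon, pancake- or sphere-shaped, gets the same ~12 N of lift:
    print(upthrust(1.0))
    ```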

  2. Copy of a comment:

    nige said…
    http://kea-monad.blogspot.com/2007/01/roast-yam.html

    Kea, thanks, I think Aoraki is the most appropriate name.

    Louise: don’t worry, the commies have found some cash and had their service restored.

    String controversy:

    What interests me about the string controversy now is that Discover magazine – see Dr Woit’s new blog post – is searching for a 2 minute YouTube explanation of string theory, which will be selected by Dr Greene and displayed on the front page of the Discover magazine site.

    This is interesting because the thing presumably will have to be pro-string and not critical, or Dr Greene will reject it??!!

    So it boils down to hyping string theory some more.

    I’ve added a long couple of final paragraphs about the string fiasco to my about page on one blog and my new site at http://quantumfieldtheory.org/About.htm

    Really, there is nothing anyone can do. Prof Penrose wrote this depressing conclusion well in 2004 in The Road to Reality so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition:

    On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’

    Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

    On page 1026, Penrose gets down to the business of how science is really done: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’

    I think the reason he helped Dr Woit was that he felt it would undermine string theory just enough to make people think. No chance of that, with Discover magazine using a string theorist to judge descriptions of string theory!

    That’s like having President Bush judge 2-minute summaries of the Iraq War and decide on the “best”. No surprise that the winner won’t be an unbiased explanation.

    12:58 PM

  3. Discussion with Dr Lubos Motl:

    Copied here in case it gets accidentally deleted: http://motls.blogspot.com/2007/02/alan-guth-seesaw.html

    http://www.haloscan.com/comments/lumidek/9061833327706377941/#726592

    From the see-saw mechanism for neutrinos as described at http://en.wikipedia.org/wiki/Seesaw_mechanism, the argument is used to give neutrinos a small mass, on the order of the square of the electroweak energy scale (~246 GeV) divided by the GUT scale (~10^16 GeV).

    In your post http://motls.blogspot.com/2005/1…ant- seesaw.html you relate this to the alleged cosmological constant, which is about this mass raised to the fourth power.
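    Spelling out the arithmetic behind those two steps (a quick Python check of my own; the observed vacuum-energy scale used for comparison is an added assumption, not a figure from this discussion):

        # Seesaw scales: neutrino mass ~ (electroweak scale)^2 / (GUT scale)
        ew = 246e9          # electroweak scale, eV (~246 GeV)
        gut = 1e16 * 1e9    # GUT scale, eV (~10^16 GeV)

        m_nu = ew**2 / gut
        print(m_nu)         # ~6e-3 eV, the right order for neutrino masses

        # Cosmological constant seesaw: vacuum energy density ~ (that mass)^4
        rho_vac = m_nu**4
        print(rho_vac)      # ~1e-9 eV^4, within a couple of orders of the
                            # observed ~(2e-3 eV)^4 ~ 2e-11 eV^4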

    Would you say this is numerology? In a way the whole of string theory is numerology, where you say “let’s have 10 dimensions – look, that predicts supersymmetry – and 11 dimensions predicts gravity”.

    I’m really interested in this issue. To put it another way: if Quantoken starts suggesting numerological ways to calculate the cosmological constant by squaring some ratio from something apparently unrelated (neutrino masses), will he be taken seriously?

    It is interesting that Peter Woit is announcing this as a piece of genius because it moves away from the landscape, http://www.math.columbia.edu/~woit/wordpress/?p=525

    “On a completely different subject, there’s a new preprint by Michael McGuigan which manages to cite both Not Even Wrong (the book), and a Lubos Motl blog entry. The citation of my book seems unnecessary, surely there are other sources critical of string-based unification that have priority. The article is about the “see-saw” mechanism for getting the right magnitude of the cosmological constant, and it is for this that Lubos’s blog gets a citation. This does seem to be a more promising idea about the CC than many. I for one think it will be a wonderful development if the field of particle theory turns around, stops promoting pseudo-science justified by the Weinberg anthropic CC “prediction”, and heads instead in a more promising direction, all based on an entry in Lubos’s blog…”

    Loony | Homepage | 02.23.07 – 2:04 pm | #

    ——————————————————————————–

    Dear Loony, incidentally, the Seesaw mechanism Wikipedia page you mentioned was written mostly by me.

    The scales involved in these two seesaw mechanisms are almost exactly identical. For neutrinos, the high GUT (almost Planck) scale is dragged down through the electroweak (almost SUSY breaking) scale to the neutrino mass scale (which is the scale of the vacuum energy, with a huge accuracy).

    For neutrinos, this coincidence is strong supplementary evidence for the existence of a high energy scale near the GUT scale – and I would say that most particle physicists believe that this evidence creates such a concise picture that it is probably true. We know specific formalisms for how the small neutrino mass is produced.

    For the cosmological constant issue, the numerological agreement is equally good – except that we don’t have a fully satisfactory mathematical framework that would realize that seesaw relation (and cancel the mSUSY^4 contributions to the C.C.) while it would agree with the rest of physics.

    Indeed, the seesaw relation is numerology, and the 2×2 matrix way of realizing it is not yet a solid part of physics that would agree with other parts of physics.

    You shouldn’t combine this with string theory. The seesaw relation for the C.C. isn’t connected with anything in string theory that we know so far.

    Numerology is a pejorative word. Of course, numerical agreements between a priori different numbers – and relations one can find that are simpler than expected – are often a legitimate source of interest for physicists. In many cases they’re meaningless, and we know (or seem to know) why. In other cases they become a guide to obtaining a more complete understanding of some physical laws.

    Which of these two scenarios is valid in a given situation depends on other things, too. But it is fair to say that the numerological considerations sometimes play a role for physicists who decide how to go on. However, if someone has nothing else than numerology and he doesn’t know how the rest of physics actually works, it is unlikely that numerology will lead him in the right direction.

    Lubos Motl | Homepage | 02.23.07 – 3:27 pm | #

    ——————————————————————————–

    Dear Lubos,

    Thanks for the reply. The Wikipedia article on the Seesaw mechanism is nice.

    There is a direct or indirect relationship between this CC estimate and string theory, in that you are assuming a GUT/Planck scale which is not observed and therefore must be justified by supersymmetry.

    From my standpoint, this relationship between the unobserved GUT scale and the indirectly-implied CC is like the relationship between caloric and phlogiston.

    The CC data just show a lack of gravitational deceleration of distant supernovae, with no proof that this is due to dark energy offsetting gravity rather than to a fall in gravitational strength from redshift and energy loss of gravitons (or whatever the exchange radiation is) when the masses are receding from one another relativistically.

    The GUT scale unification may itself be wrong. The Standard Model might not need supersymmetry to be complete. The QED electric charge rises as you get closer to an electron because there’s less polarized vacuum to shield the core charge. Thus a lot of electromagnetic energy is absorbed in the vacuum above the IR cutoff, producing loops. It’s likely that the short-ranged nuclear forces are powered by this energy absorbed by the vacuum loops. In this case, energy from one force (electromagnetism) gets used indirectly to produce pions and other particles that mediate nuclear forces. This mechanism for sharing gauge boson energy between different forces in the Standard Model would get rid of supersymmetry, which is just an attempt to make three coupling lines numerically coincide near the Planck scale. If the strong and weak forces are powered by energy absorbed when the polarized vacuum shields the electromagnetic force, then at very high energy (bare electric charge) there won’t be any loops, because of the UV cutoff, so both weak and strong forces will fall off to zero. That’s why it’s dangerous to endlessly speculate about only one theory, based on guesswork extrapolations of the Standard Model that have no evidence to confirm them.

    Loony | Homepage | 02.23.07 – 4:48 pm | #

  4. Copy of a comment:

    http://riofriospacetime.blogspot.com/2007/03/conservation-of-energy-pt-2.html

    “When faced with an inexplicable phenomena, it is tempting even for good scientists to come up with a half-baked explanation. That is how “dark energy” got started. Once the half-baked idea is in place, it will prevent better ideas from being considered.”

    That’s the whole problem behind both dark energy and also string theory.

    It isn’t a problem that these theories exist and that people work on them.

    The problem is just that these mainstream speculative ad hoc theories are supported in an irrational way despite making no checkable predictions or even holding out any realistic hope of ever doing so.

    If dark energy and string theory were being presented as being highly speculative, unchecked and probably uncheckable, then OK. That’s fine. People can choose to work on them.

    It’s the entirely misleading claims and egotism of the mainstream, despite having no scientific checks whatsoever, that’s the problem.

    I just can’t believe how this intrusion of irrationality into science has taken place! Reading a lot of textbooks from different periods, everything speculative and unchecked was, until the early 1980s, strongly emphasized as being just a conjecture or guess.

    After about 1983, loads of books arrived which began to reverse this attitude and hype string theory as being a scientifically justified framework of ideas, despite opposition from many scientists like Feynman.

    The approach of the mainstream is just to ignore criticisms, and in some cases to increase the hype level to drown them out.

    Dark energy is more mysticism. Like epicycles, it does work up to a point as an ad hoc, physically speculative (unchecked) model – until you google “evolving dark energy” and see that it isn’t clear from the data whether the cosmological constant is really fixed. So the model requires epicycles within epicycles, more modifications, and still the basic reason for the small positive cosmological “constant” is officially unknown.

    2:40 AM

  5. copy of a comment:

    http://kea-monad.blogspot.com/2007/04/sparring-sparling.html

    Thank you very much indeed for this news. On 3 space plus 3 time like dimensions, I’d like to mention D. R. Lunsford’s unification of electrodynamics and gravitation, “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, as summarized here.

    Lunsford discusses, in his comment here, why arXiv blacklisted the paper despite its being peer-reviewed and published. Lunsford’s full paper is, however, available for download here.

    Lunsford succeeds in getting a unification which actually makes checkable predictions, unlike the Kaluza-Klein unification and other stuff: for instance it predicts that the cosmological constant is zero, just as observed!

    The idea is to have three orthogonal time dimensions as well as three of the usual spatial dimensions. This gets around difficulties in other unification schemes, and although the result is fairly mathematically abstract it does dispense with the cosmological constant. This is helpful if you (1) require three orthogonal time dimensions as well as three orthogonal spatial dimensions (treating the dimensions of the expanding universe as time dimensions rather than space dimensions makes the Hubble parameter v/t instead of v/x, so it becomes an acceleration, which allows you to predict the strength of gravity from a simple mechanism: the outward force of the big bang is simply F = ma, where m is the mass of the universe, and Newton’s 3rd law then tells you that there is an equal inward reaction force, which – from the possibilities known – must be gauge boson radiation of some sort that causes gravity, so you can numerically predict gravity’s strength as well as the radial gravitational contraction mechanism of general relativity this way), and (2) require no cosmological constant:

    (1) The universe is expanding and time can be related to that global (universal) expansion, which is entirely different from local contractions in spacetime caused by motion and gravitation (mass-energy etc.). Hence it is reasonable, if trying to rebuild the foundations, to have two distinct but related sets of three dimensions; three expanding dimensions to describe the cosmos, and three contractable dimensions to describe matter and fields locally.

    (2) All known real quantum field theories are Yang-Mills exchange radiation theories (i.e., QED, weak and QCD theories). It is expected that quantum gravity will similarly be an exchange radiation theory. Because distant galaxies, which are supposed to be slowing down due to gravity (according to Friedmann-Robertson-Walker solutions to GR), are very redshifted, you would expect that any exchange radiation will similarly be “redshifted”. The mismatch with the GR solutions, which say slowing should occur, is the “evidence” for a small positive constant and hence dark energy (which provides the outward acceleration to offset the presumed inward-directed gravitational acceleration).

    Professor Philip Anderson argues against Professor Sean Carroll here that: “the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it. And yes, inflation predicts flatness, and I even conditionally accept inflation, but how does the crucial piece Dark Energy follow from inflation?–don’t kid me, you have no idea.”

    My arguments in favour of lambda = 0 and 6 dimensions (3 time like global expansion, and 3 contractable local spacetime which describes the coordinates of matter) are at places like this and other sites.

  6. copy of a follow up comment:

    http://kea-monad.blogspot.com/2007/04/sparring-sparling.html

    “I checked Lunsford’s article but he said nothing about the severe problems raised by the new kinematics in particle physics unless the new time dimensions are compactified to small enough radius.” – Matti Pitkanen

    Thanks for responding, Matti. But the time dimensions aren’t extra spatial dimensions, and they don’t require compactification. Lunsford does make it clear, at least in the comment on Woit’s blog, that the scheme mixes time and space coordinates.

    The time dimensions describe the expanding vacuum (big bang, Hubble recession of matter), the 3 spatial dimensions describe contractable matter.

    There’s no overlap possible because the spatial dimensions of matter are contracted due to gravity, while the vacuum time dimensions expand.

    It’s a physical effect. Particles are bound against expansion by nuclear, electromagnetic and – for big masses like planets, stars and galaxies – gravitational forces.

    Such matter doesn’t expand, and so it needs a coordinate system different from that of the expanding vacuum in between lumps of bound matter (galaxies, stars, etc.).

    Gravitation in general relativity causes a contraction of spatial distance; the radial contraction for a mass M is approximately (1/3)GM/c^2, which is 1.5 mm for the Earth’s radius.
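    As a quick check on that figure (my own sketch; the constants are standard values, not from the comment):

        # Radial contraction (1/3)GM/c^2 for the Earth
        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        M = 5.972e24    # Earth's mass, kg
        c = 2.998e8     # speed of light, m/s

        contraction = G * M / (3 * c**2)
        print(contraction)   # ~1.5e-3 m, i.e. the 1.5 mm quoted above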

    The problem is that this local spatial contraction of matter is quite distinct from the global expansion of the universe as a whole. Attractive forces over short ranges, such as gravity, prevent matter from expanding and indeed cause contraction of spatial dimensions.

    So you need one coordinate system to describe matter’s coordinates. You need a separate coordinate system to describe the non-contractable vacuum which is expanding.

    The best way to do this is to treat the receding distant universe as receding in time. Distance is an illusion for receding objects, because by the time the light gets to you, the object is further away. This is the effect of spacetime.

    At one level, you can say that a receding star which appears to you to be at distance R and receding at velocity v, will be at distance = R + vt = R + v(R/c) = R(1 + v/c) by the time the light gets to you.

    However, you then have an ambiguity in measuring the spatial distance to the star. You can say that it appears to be at distance R in spacetime where you are at your time of t = 10^17 seconds after the big bang (or whatever the age of the universe is) and the star is measured at a time of t – (R/c) seconds after the big bang (because you are looking back in time with increasing distance).

    The problem here is that the distance you are giving relates to different values of time after the big bang: you are observing at time t after the big bang, while the thing you are observing at apparent distance R is actually at time t – (R/c) after the big bang.

    Alternatively, you get a problem if you specify the distance of a receding star as R(1 + v/c), which allows for the continued recession of the star or galaxy while its light is in transit to us. The problem here is that we can’t directly observe how the value of v varies over the time interval that the light takes to reach us. We only know, observationally, the recession velocity v of the star at a time in the past. There is no guarantee that it has continued receding at the same speed while the light has been in transit to us.

    So all possible attempts to describe the recession of matter in the big bang as a function of distance are subjective. This shows that, to achieve an unequivocal statement about what the recession means quantitatively, we must always use time dimensions, not distance dimensions, to describe the observed recession. Hubble should have realized this and written his empirical recession law not as v/R = constant = H (units of reciprocal seconds), but as a recession velocity increasing in direct proportion to time past: v/T = v/(R/c) = vc/R = (RH)c/R = Hc.
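    Putting numbers to Hc and to the outward force used below (a sketch with my own assumed inputs: H ~ 70 km/s/Mpc and an order-of-magnitude mass ~3 x 10^52 kg for the observable universe, neither stated in the comment):

        # Hubble law recast as an effective outward acceleration a = Hc
        H = 70e3 / 3.086e22   # ~70 km/s/Mpc converted to 1/s (~2.3e-18 s^-1)
        c = 2.998e8           # speed of light, m/s
        a = H * c
        print(a)              # ~7e-10 m/s^2 effective outward acceleration

        # Corresponding outward force F = ma for the receding matter
        m_universe = 3e52     # kg, assumed order of magnitude
        print(m_universe * a) # ~2e43 N, the ~10^43 N order quoted below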

    This has units of acceleration, which leads directly to a prediction of gravitation, because that outward acceleration of receding matter means there’s an outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature and cosmology. Non-receding masses obviously don’t cause a reaction force, so they create an asymmetry => gravity.

    This “shadowing” is totally different from LeSage’s mechanism of gravity, which predicts nothing and involves all sorts of crackpot speculations. LeSage had the false idea that a gas pressure causes gravity; really it’s exchange radiation in QFT. LeSage supposed that shielding stops a pressure. What actually happens is that you get a reaction force from receding masses by empirically verified laws (Newton’s 2nd and 3rd), but no inward reaction force from a non-receding mass like the planet Earth below you (it’s not receding from you, because you’re gravitationally bound to it). Therefore, because local, non-receding masses don’t send a gauge boson force your way, they act as a shield for a simple physical reason based entirely on facts, such as the laws of motion, which are not speculation but are based on observations.

    The 1.5 mm contraction of Earth’s radius according to general relativity causes the problem that Pi would change because circumference (perpendicular to radial field lines) isn’t contracted. Hence the usual explanation of curved spacetime invoking an extra dimension, with the 3 known spatial dimensions a curved brane on 4 dimensional spacetime. However, that’s too simplistic, as explained, because there are 6 dimensions with a 3:3 correspondence between the expanding time dimensions and non-expanding contractable dimensions describing matter. The entire curvature basis of general relativity corresponds to the mathematics for a physical contraction of spacetime!

    The contraction is a physical effect. In 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2 /c^2)^1/2, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/(1 – v^2 / c^2)^1/2, where E(o) is the potential energy of the dislocation at rest.’

    Because constant c = distance/time, a contraction of distance implies a time dilation. (This is the kind of simple argument FitzGerald-Lorentz used to get time dilation from length contraction due to motion in the spacetime fabric vacuum. However, the physical basis of the contraction is due to motion with respect to the exchange radiation in the vacuum which constitutes the gravitational field, so it is a radiation pressure effect, instead of being caused directly by the Dirac sea.)

    You get the general relativity contraction because the velocity v in the expression (1 – v^2/c^2)^(1/2) is equivalent to the velocity that gravity gives to a small test mass falling from an infinite distance down to a distance R from mass M: v = (2GM/R)^(1/2). This is just the escape velocity formula. By energy conservation there is a symmetry: the velocity a body gains in falling from an infinite distance to radius R from mass M is identical to the velocity needed to escape from mass M starting at radius R.
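    For a feel of the size of this effect at the Earth’s surface (my own sketch with standard constants):

        # Gravitational contraction factor via escape velocity v = sqrt(2GM/R)
        import math

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        M = 5.972e24    # Earth's mass, kg
        R = 6.371e6     # Earth's radius, m
        c = 2.998e8     # speed of light, m/s

        v = math.sqrt(2 * G * M / R)          # ~1.12e4 m/s escape velocity
        factor = math.sqrt(1 - v**2 / c**2)   # contraction factor, just below 1
        print(v, 1 - factor)                  # deviation from 1 is only ~7e-10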

    Physically, every body which has gained gravitational potential energy has undergone contraction and time dilation, just as an accelerating body does. This is the equivalence principle of general relativity. SR doesn’t specify how the time dilation varies as a function of acceleration; it merely gives the time flow rate once a given steady velocity v has been attained. Still, the result is useful.

    The fact that quantum field theory can be used to solve problems in condensed matter physics, shows that the vacuum structure has some analogies to matter. At very low temperatures, you get atoms characterized by outer electrons (fermions) pairing up to form integer spin (boson like) molecules, which obey Bose-Einstein statistics instead of Fermi-Dirac statistics. As temperatures rise, increasing random, thermal motion of atoms breaks this symmetry down, so there is a phase transition and weird effects like superconductivity disappear.

    At higher temperatures, further phase transitions will occur, with pair production occurring in the vacuum at the IR cutoff energy, whereby the collision energy is equal to the rest mass energy of the vacuum particle pairs. Below that threshold, there’s no pair production, because there can be no vacuum polarization in arbitrarily weak electric fields or else renormalization wouldn’t work (the long range shielded charge and mass of any fermion would be zero, instead of the finite values observed).
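    For the simplest concrete case (my own note; the 0.511 MeV electron rest-mass energy is the standard figure), the lightest charged pairs are electron-positron pairs, so the lowest pair-production threshold is:

        # Pair-production threshold: collision energy = rest-mass energy of the pair
        m_e = 0.511        # electron rest-mass energy, MeV (standard value)
        print(2 * m_e)     # ~1.022 MeV needed to create an e+e- pair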

    The spacetime of general relativity is approximately classical because all tested predictions of general relativity relate to energies below the IR cutoff of QFT, where the vacuum doesn’t have any pair production.

    So the physical substance of the general relativity “spacetime fabric” isn’t a chaotic fermion gas or “Dirac sea” of pair production. On the contrary, because there is no pair production in space where the steady electric field strength is below ~10^18 V/m, general relativity successfully describes a spacetime fabric or vacuum in which there are no annihilation-creation loops; it’s merely exchange radiation which doesn’t undergo pair production.
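    That ~10^18 V/m figure is the Schwinger critical field, E = m^2 c^3/(e*hbar) for the electron; as a quick check (standard constants, my own addition):

        # Schwinger critical field for electron-positron pair production
        m = 9.109e-31     # electron mass, kg
        c = 2.998e8       # speed of light, m/s
        e = 1.602e-19     # elementary charge, C
        hbar = 1.055e-34  # reduced Planck constant, J s

        print(m**2 * c**3 / (e * hbar))   # ~1.3e18 V/m, matching the quoted ~10^18 V/m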

    This is why field theories are classical for most purposes at low energy. It’s only at high energy, when you get within about a femtometre of a fermion, that QFT loop effects like pair production begin to affect the field, through vacuum polarization of the virtual fermions and chaotic effects.

    *****************

    Copy of another comment on the subject to a different post of Kea’s:

    http://kea-monad.blogspot.com/2007/04/sparring-sparling-ii.html

    I’ve read Sparling’s paper http://www.arxiv.org/abs/gr-qc/0610068 and it misses the point: it ignores Lunsford’s paper and is stringy in its references (probably the reason why arXiv hasn’t censored it, unlike Lunsford’s). However, it’s a step in the right direction that at least some out-of-the-box ideas can get onto arXiv, at least if they pay homage to the mainstream stringers.

    The funny thing will be that the mainstream will eventually rediscover the facts others have already published, and the mainstream will presumably try to claim that their ideas are new when they hype them up to get money for grants, books etc.

    It is sad that arXiv, and the orthodoxy in science generally, censor radical new ideas and delay or prohibit progress, for fear of the mainstream being offended at being shown not even wrong. At least string theorists are well qualified to get jobs as religious ministers (or perhaps as propaganda ministers in dictatorial banana republics) once they accept that they are failures as physicists who don’t want any progress towards new ideas. 😉

    **********
    Update: I’ve just changed the banner description on the blog http://electrogravity.blogspot.com/ to:

    Standard Model and General Relativity mechanism with predictions
    In well-tested quantum field theory, forces are radiation exchanges. Masses recede in spacetime at Hubble speed v = RH = ctH, so there’s outward acceleration a = v/t = cH and outward force F = ma ~ 10^43 N. Newton’s 3rd law implies an inward (exchange radiation) force, predicting forces, curvature, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, so they cause an asymmetry, gravity.

  7. Hi nige, have just got around to reading this great post. Lunsford’s work is very promising, which is probably why it gets censored. 3 + 3 seems so simple and elegant! The other side is losing: note that one “professor” mentioned was denied tenure at U Chicago and is currently in a non-tenure position. Another’s position at Harvard is tenuous. Promoting the Concorde cosmology hasn’t helped them.

  8. I don’t see the acceleration of the universe as anything but a quantum card trick. If I could build a machine wherein I could decrease my time relative to my surroundings, my surroundings would seem to speed up.

    Time, by the way, seems to function as a kind of shock absorber, contracting and expanding to keep the physical laws as they are in our particular universe in check.

    The stretching out of spacetime is the opposite of running headlong through, or within, a gravitational field. Accordingly, it may be that what we perceive as the acceleration of distant objects in our universe is actually our observation of them through the intervening, flattening spacetime.

    The overall effect would be the same as my machine experiment, on great scale: an observed acceleration.

    Thus, it is possible that the apparent acceleration of the expansion of the universe is actually an observational artifact, an illusion created entirely by this lensing effect.

  9. Yoshiro Aoki,

    Thank you for your comment.

    Spacetime is a mathematical concept, not a physical reality: there is no spacetime physically, just quantum fields, which produce effects that can be approximated on large scales by the mathematical idealization of distorted geometry, the curvature of spacetime. The exchange of field quanta between charges causes the force effects of gravity; the spacetime continuum is just a classical approximation to this. The fabric of the vacuum is quantum fields.

    Therefore, when you write that spacetime is stretched out, you are mixing up the mathematical approximation (spacetime) with the reality. In reality there are quantum fields. These fields aren’t stretched out like elastic as the universe expands: instead, they cause the expansion of the universe itself, because the exchange of field quanta induces acceleration of charges away from each other.

    Please try reading the latest post on this blog, and see if it helps: https://nige.wordpress.com/2008/12/20/summary-explanation-of-some-non-mainstream-quantum-gravity-evidence/

  10. hi Nige,
    Thank you for your reply.
    I am aware of the nature of the vacuum as quantum fields,
    but are not the shape of these fields related directly to mass present (or not present) within them, or equivalently by a mass in motion?

    I am just a lay person, Nige; my degree is not in quantum mechanics, so pardon me if I am a bit naive. I just happened to think about all this at once a few days ago.

    I remember being taught that the speed of light in a vacuum is a constant, but then it dawned on me a few days ago, perhaps incorrectly, that even the speed of light depends on the geometry of the quantum field that it propagates through, and that field is different on earth than in space, but it is still c relative to the quantum field it traverses.

    If that’s true, then I reasoned that objects on earth are slightly smaller than they would be in space, and that time is also slightly slower here relative to space as well, because of the quantum field geometry of earth’s mass, which I imagined as a point source at its center of mass.

    I extended these ideas to our galaxy, and surmised that our view of the most far away objects in space may likewise be distorted in time.
    It was just a guess, but it looks like science has concluded that the acceleration is caused by “the exchange of field quanta induce acceleration of charges away from each other”.

    I am certain my lay view has given you and others here a few chuckles.
    If so, I am happy 🙂
    I will have a look at that latest post you mention.
    Soon I will be in such classes, so I’ll save any further questions for my poor professors.

    Thank you very much!

    -aoki

  11. Nige, if I may ask you..just one question..since I brought it up in my earlier post..is the speed of light the same no matter where you observe it, or does it change depending on where ..oh..wait, no no no…we can observe atomic clocks on space craft moving slowly, but we will never view light moving from that craft, or anywhere else, as an outside observer, as moving faster or slower than c. It will be c, always. Of course.

    So, even though physical lengths and time may change, the speed of light is immune from change.

    Never mind 🙂

    Nige, have a good day, and thanks for these papers and discussions here.

  12. Hi Yoshiro Aoki,

    Thank you for your comments.

    ‘I am aware of the nature of the vacuum as quantum fields, but are not the shape of these fields related directly to mass present (or not present) within them, or equivalently by a mass in motion?’

    The field is composed of field quanta, gauge bosons which are exchanged between charges to produce forces. Particles (gravitons) are therefore being exchanged between masses to produce gravity. In classical theory like general relativity, you have shapes of fields given by plotting field lines. However, such field lines need to be replaced by quanta exchanges.

    ‘… it dawned on me a few days ago, perhaps incorrectly, that even the speed of light depends on the geometry of the quantum field that it propagates through, and that field is different on earth than in space, but it is still c relative to the quantum field it traverses.’

    The speed of light is slower in glass than in the vacuum, because of the interaction of the photon with the electromagnetic field quanta inside the block of glass. In this sense, the speed of light depends on the quantum field that it propagates through, as you say. On earth, light interacts with the air it passes through, and slows down slightly.
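    For a concrete figure (a standard textbook relation, not specific to this discussion), the speed in a medium of refractive index n is v = c/n:

        # Speed of light in a medium: v = c/n
        c = 2.998e8     # m/s, speed of light in vacuum
        n_glass = 1.5   # typical refractive index of glass (assumed value)
        print(c / n_glass)   # ~2.0e8 m/s inside the glass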

    ‘… I reasoned that objects on earth are slightly smaller than they would be in space, and that time is also slightly slower here relative to space as well, because of the quantum field geometry of earth’s mass, which I imagined as a point source at its center of mass.’

    I do not follow your reasoning, but you are right that clocks run more slowly at the Earth’s surface than in space, where gravity is weaker. This is ‘gravitational time dilation’. Linked to it, as Feynman explains in his Lectures on Physics, is the concept of curvature and excess radius. The Earth’s radius is reduced by (1/3)GM/c^2 = 1.5 mm due to the ‘curvature’ effect of gravitation, while the circumference is unaffected (the contraction occurs only along radial gravitational field lines, not transverse to them). So in this sense, the Earth is contracted in radius slightly causing a distortion to Euclidean geometry, while time is slowed down.

    ‘I extended these ideas to our galaxy, and surmised that our view of the most far away objects in space may likewise be distorted in time.’

    We are seeing distant objects in the past. There is a linkage between distance and time, hence Minkowski’s spacetime concept. The further something is away, the further back in time it appears to be because of the time taken for the light to reach us.

    In relativity, a clock or other object moving at high velocity relative to an observer will slow down in its internal motions. So the oscillating electrons which emit light from distant receding stars moving away from us at high velocity will slow down, and will emit redshifted light. However, this effect is taken into account in the relativistic Doppler redshift formula used in cosmology.

    ‘It was just a guess, but it looks like science has concluded that the acceleration is caused by “the exchange of field quanta induce acceleration of charges away from each other”.’

    That’s what science suggests because it makes checkable predictions which are confirmed, but it isn’t what mainstream scientists are likely to say. (They are more likely to say that ‘mysterious dark energy’ causes cosmological acceleration, because of ignorance and love of the occult.)

    ‘..is the speed of light the same no matter where you observe it, or does it change depending on where ..oh..wait, no no no…we can observe atomic clocks on space craft moving slowly, but we will never view light moving from that craft, or anywhere else, as an outside observer, as moving faster or slower than c. It will be c, always. Of course.

    ‘So, even though physical lengths and time may change, the speed of light is immune from change.’

    The speed of light is reduced when it enters a block of glass, ice, perspex, water, air, etc.

    The mechanism of restricted (special) relativity is that the measuring instrument gets contracted and slowed down by its own motion in the quantum gravity field: this means that the observer’s measurements are affected when you try to measure the speed of light. The classic example is as follows:

    ‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus [the Lorentz contraction itself is physically caused by the head-on pressure of the quantum graviton field against the front of the moving particle, squeezing it slightly in the direction of motion] … The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’

    – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space, Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

    I hope this is helpful!

  13. Amazing 🙂
    I see I must be more precise with words when discussing these things. I will remember that! For example, when I say ‘speed of light’ but am thinking ‘speed of light in a vacuum’, I should certainly say so. It’s like chemistry: no incompletely labelled containers allowed in the lab 🙂
    I remember on the news a while ago they got light to go as slow as a man running, by passing it through a cold fog near zero K, a ‘Bose-Einstein condensate’.

    “So in this sense, the Earth is contracted in radius slightly causing a distortion to Euclidean geometry, while time is slowed down”

    this is how I was thinking about it, when I wondered if such contraction applied to everything within such a field, including light in a vacuum.
    For a day I convinced myself that c in a vacuum on earth and c in a vacuum in space are different to an external observer. But I recall a certain physics professor of mine being quite sure c in a vacuum is the same in both places. To confess, that seems strange to me, but if that’s what the experiments show, on you go.

    “That’s what science suggests because it makes checkable predictions which are confirmed, but it isn’t what mainstream scientists are likely to say. (They are more likely to say that ‘mysterious dark energy’ causes cosmological acceleration, because of ignorance and love of the occult.)”

    I am bothered by the apparent departure of observational, testable science into the realm of mathematics, and by labelling such work as science and even theory. In ancient times, elaborate instruments were constructed to explain celestial motions, and they did so, but they were based on fundamentally incorrect notions.

    And do you remember what happened when that ‘face on mars’ picture came out? All kinds of popular books were published showing elaborate mathematical links between that geological feature on mars and things like Stonehenge, Egyptian pyramids, etc.

    It seems this is what is being done with the string idea, which I’ve seen on TV. Observable, testable theories will be revealed if we are clever enough, without need for these cults of magical musical elementary particles and dark phenomena with supposed properties. We have been down that road before, in ancient times and even with the face on mars. It’s disturbing.

    I hope I did not insult anyone; that is not my intent. It is just how I see it from my vantage point, standing way, way back and looking at it. Really, it’s the face on mars all over again, on a grand scale.

    ‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus [the Lorentz contraction itself is physically caused by the head-on pressure of the quantum graviton field against the front of the moving particle, squeezing it slightly in the direction of motion] … The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’

    Well, of course such an experiment will fail. It’s like trying to measure ocean wave height from a boat whose apparatus is a ruler painted on its hull 🙂

    Have a great day, and thanks Nige.

    -aoki

