Authority problems in physics

Dr Cormac O’Raifeartaigh has kindly illustrated in detail the kind of confusion that exists in the mainstream over the Hubble recession law, velocity v = HR where H is Hubble’s constant and R is radial distance. (For evidence that this is a real expansion and not ‘tired light’ redshift, see Ned Wright’s article ‘Errors in Tired Light Cosmology’).

In the older mainstream model of cosmology, i.e. the Friedmann-Lemaître-Robertson-Walker metric of general relativity, the effective radius or scale factor of the universe increases as t^(2/3), where the 2/3 power is due to gravitation slowing down the expansion rate of the universe. However, this prediction was discredited after 1998, when it was observationally discovered by Perlmutter et al. that supernovae at half the age of the universe (roughly 7,000 million light years away) were not slowing down as predicted by the Friedmann-Lemaître-Robertson-Walker metric.

In order to deal with this discovery, a small lambda (cosmological constant, or universal repulsion between masses powered by ‘dark energy’) was introduced, of sufficient value to negate the previously predicted long-range cosmological slowing-down effect of gravity.  This was a very much smaller cosmological constant than Einstein’s of 1917, when he was attempting to keep the universe static, in agreement with all of the observations at that time.  Rather than cancelling out the expansion of the universe, as Einstein had tried to do in 1917 with his massive cosmological constant, the new small cosmological constant introduced after 1998 merely cancelled out the gravitational deceleration of the universe predicted for large distances.  This made it replicate (as a purely ad hoc fit, since there was no theoretical prediction of the value of lambda from the mainstream theory) the observed expansion rates, the observed scale factor being directly proportional to time t since the big bang. Since the horizon radius of the universe is ct, it is identical to the scale factor in this case. Things are very simple:

The Hubble expansion empirical law is

v = HR

On the right hand side, H = 1/t where t is the age of the (non-decelerating) universe, and R = cT where T is the time past corresponding to observed distance R (light takes time to travel distance R, so you are seeing events there as they occurred at that time in the past). Hence:

v = HR = [1/t][cT] = cT/t.

Now, since Hubble discovered that v/R = H = constant in spacetime (i.e., looking to increasing distances, you find that v is directly proportional to R), it follows that since H = 1/t, both H and t are constants in spacetime!  This fixed value of t proves that t in this equation is the time after the big bang in the frame of reference of the observer taking the picture of the sky, and not the time after the big bang for the different individual stars being observed in the sky.  Hence, the variables in the equation v = cT/t are v and T, not t.

Hence we differentiate with respect to time T, giving:

acceleration a = dv/dT = d(cT/t)/dT = c/t = Hc = 6×10^(-10) ms^(-2).

This was the prediction of the acceleration of the universe which I made in 1996.  You can rewrite T in terms of time since big bang if you want, and then have that as the variable.  However, the predicted numerical value is the same, apart from a minus sign (because time past increases while time after the big bang decreases when looking to larger distances: see Fig. 1 below).

Fig. 1: Proof that the 6×10^(-10) ms^(-2) cosmological acceleration of the universe is implicitly present in the normal v = HR Hubble expansion law, using ‘time since big bang’ (instead of ‘time past’ which was used in the blog post text for simplicity; the only difference in the result being a minus sign as explained earlier).
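The numerical figure quoted above is easy to check. Here is a minimal Python sketch, assuming a flat, non-decelerating universe of age t ≈ 13,700 million years, so that H = 1/t:

```python
# Numeric check of a = dv/dT = c/t = Hc for a flat, non-decelerating universe.
# Assumed input: age of universe t = 13.7e9 years (observer's frame).

c = 2.998e8                      # speed of light, m/s
t = 13.7e9 * 365.25 * 24 * 3600  # age of universe in seconds (~4.32e17 s)

H = 1.0 / t                      # Hubble parameter for a flat universe, 1/s
a = c * H                        # predicted cosmological acceleration, m/s^2

print(f"H = {H:.3e} /s")
print(f"a = Hc = {a:.3e} m/s^2")  # on the order of 6e-10 to 7e-10 m/s^2
```

This reproduces the order of magnitude 6×10^(-10) ms^(-2); the exact value simply tracks whatever age is assumed for the universe.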

If you want to learn new physics based on facts that make checkable predictions (few do!), you can also do other calculations to obtain predictions of this and other facts, such as the value of the universal gravitational parameter G.

Copy of a comment to: http://dorigo.wordpress.com/2008/08/09/problems-with-authority

‘I have no problems with authority. Meaning that I do not feel an inferiority complex, or a defensive instinct, when I deal with people who have titles or power which can affect their interaction with me. I however have to acknowledge that it is a very common problem for many people, even intelligent, instructed, realized individuals.’ – Tommaso Dorigo

‘Problems come with this definite consensus in a number of ways, for instance when [it]:

’1) seems to leak out and apply to decisions out of its initial scope.
’2) becomes fixed, neglecting new knowledge
’3) is altered via opinion instead of knowledge.’ – Alejandro Rivero, comment #4

‘Problems arise when authority is misused.’ – Tony Smith, comment #10

I like the following quotation about authority from Feynman:

‘It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong.’ – Feynman, http://www.vigyanprasar.gov.in/scientists/RichardPFeynman/RichardPFeynman.htm

In politics and in the media, beauty, intelligence, star-quality, and fame are more important than facts. String theory is currently being hyped by authority criteria (beautiful, intelligent, star-quality and famous). Here’s a personal example of how authority deals with unwanted facts:

Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook …

Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

It’s a falsehood that there is a ‘currently accepted theory’ predicting gravity, because although Edward Witten claimed ‘String theory has the remarkable property of predicting gravity’ (April 1996 issue of Physics Today), this claim was repudiated by Roger Penrose on page 896 of his book The Road to Reality: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory’. String theory does not predict anything falsifiable about gravity. So this is a good example of how authority is abused.

When I challenged it, editor Stanley just got his associate editor to email me a message ignoring my point completely, claiming falsely that I was complaining, and stating that he supported the editor’s decision (I was asking for the mainstream theory prediction that my theory is allegedly merely an alternative to). At IoP’s Classical and Quantum Gravity, the editor tried to be more reasonable but sent my paper for ‘peer-review’ by a string theorist (not a ‘peer’!), who came back saying that the paper should not be published since it was not based on string theory.

There is no inferiority complex required in order to have a problem with authority. All you have to do to be falsely attacked by an authority, is to challenge that authority using empirical facts which the authority can’t find a rational basis to reject. If you look at the example of the USSR, e.g. Trotsky’s The Revolution Betrayed (Trotsky was of course famously murdered with an ice-axe in Mexico on Stalin’s orders), you see the problem is not with the dissenter, but with the authority which can’t find a rational way to respond. Doubtless Stalin thought he was doing what was best…

***

‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do.’

- Letter from Galileo to Kepler, 1610.

‘In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual. I do not feel obliged to believe that the same God who has endowed us with sense, reason, and intellect has intended us to forgo their use.’

- Galileo Galilei, 1632.

‘There is no place for dogma in science. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.’

- J. Robert Oppenheimer, quoted in Life, October 10, 1949.

But Oppenheimer was a terrible censor! See Freeman Dyson’s video account of Oppenheimer’s horrendous attacks on Feynman’s path integral work on QED in 1948:

http://video.google.co.uk/videoplay?docid=-77014189453344068&q=Freeman+Dyson+Feynman

“… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. … we couldn’t tell him to shut up. So in fact, there was very little communication at all. … I always felt Oppenheimer was a bigoted old fool. … Hans Bethe somehow heard about this and he talked with Oppenheimer on the telephone, I think. …

“I think that he had telephoned Oppy and said ‘You really ought to listen to Dyson, you know, he has something to say and you should listen’. And so then Bethe himself came down to the next seminar which I was giving and Oppenheimer continued to interrupt but Bethe then came to my help and, actually, he was able to tell Oppenheimer to shut up, I mean, which only he could do. …

“So the third seminar he started to listen and then, I actually gave five altogether, and so the fourth and fifth were fine, and by that time he really got interested. He began to understand that there was something worth listening to. And then, at some point – I don’t remember exactly at which point – he put a little note in my mail box saying, ‘nolo contendere’.”

Tony Smith points out at http://www.math.columbia.edu/~woit/wordpress/?p=189#comment-3222 that Oppenheimer was later a dictatorial tyrant to David Bohm:

“Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … “We consider it juvenile deviationism …” … no one had actually read the paper … “We don’t waste our time.” … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should “pay no attention to Bohm’s work.” … Oppenheimer went so far as to suggest that “if we cannot disprove Bohm, then we must agree to ignore him.” …”. (Bohm biography Infinite Potential, by F. David Peat (Addison-Wesley 1997), pages 101, 104, and 133.)

Tony Smith at the page http://www.tony5m17h.net/goodnewsbadnews.html#badnews additionally quotes Feynman on the problem of the false ‘authority’ of ‘critics’ who dismissed (for bogus reasons) his path integral formulation of QED. These authority figures were not just Oppenheimer, but included other expert physicists such as Teller, Dirac and Bohr at the 1948 Pocono conference:

“… take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: “… It is fundamentally wrong that you don’t have to take the exclusion principle into account.” …

“… Dirac asked “Is it unitary?” … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

” … Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …

“… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

“I gave up, I simply gave up …”.

- The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994) (pp. 245-248).

It was just as well that after Feynman had given up, Dyson and Bethe managed to convince Oppenheimer to take it seriously. I think this kind of story about authority problems in physics should be widely known. There is too much hero worship of mortal famous authority figures whose judgement is worth damn-all when assessing new work.

***

For more on string theorists using irrational arguments to attack scientific critics of string, see: https://nige.wordpress.com/string-theorist-urs-schrieber-attacks-critics-of-pseudo-physics/

***
Copy of a comment to Backreaction blog:

http://backreaction.blogspot.com/2008/08/equivalence-principle.html

Cecil,

The equivalence principle amounts to treating inertial mass and gravitational mass as exactly the same thing in general relativity. So all accelerations are curvatures due to the gravitational field, i.e. curved spacetime.

It’s difficult to find a good mathematical treatment from a physical perspective, and when you do find the mathematical and physical facts, they aren’t always satisfying. Here are some useful online links for the technical details:

Riemann tensor for curvature (acceleration)

Stress-energy tensor (source of gravitation)

This page is good for explaining exactly why Einstein’s law differs from Newton’s: e.g., when Einstein wrote Newton’s law in terms of spacetime curvature, he had to add a term containing the trace of the Ricci tensor to satisfy conservation of mass-energy. The effective divergence of the stress-energy tensor must be zero (i.e. the vector sum of gravitational field lines from any mass/energy must be zero). Since it isn’t zero, an extra term had to be included in the field equation to make sure that there is zero curvature wherever the divergence of the stress-energy tensor is not zero. This term is the reason why Einstein’s field equation predicts twice the deflection of starlight by gravity that Newtonian gravity predicts.

Physically, if a non-relativistic particle or bullet passes by the sun, half of the gravitational potential energy acquired during the approach is used for speeding up the bullet, and half is used for changing the direction of (deflecting) the bullet. For a photon moving at c, none of the energy gained from approaching the sun can be used to speed up the photon, so twice as much deflection occurs as would occur for a non-relativistic bullet (100% instead of 50% of the acquired gravitational potential energy gets used to change the direction of the photon).
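This factor of two can be checked numerically for starlight grazing the sun. A minimal sketch, assuming standard values for the sun’s gravitational parameter and radius:

```python
# Deflection of light grazing the sun: Newtonian (bullet-like) prediction
# 2GM/(c^2 b) versus Einstein's general-relativistic prediction 4GM/(c^2 b).
# Assumed inputs: GM of the sun and solar radius as the impact parameter b.

GM_sun = 1.327e20       # sun's gravitational parameter, m^3/s^2
R_sun = 6.96e8          # solar radius (grazing impact parameter), m
c = 2.998e8             # speed of light, m/s
RAD_TO_ARCSEC = 206265.0

newton = 2 * GM_sun / (c**2 * R_sun)    # ~0.87 arcsec
einstein = 4 * GM_sun / (c**2 * R_sun)  # ~1.75 arcsec, as measured in 1919

print(f"Newtonian: {newton * RAD_TO_ARCSEC:.2f} arcsec")
print(f"Einstein:  {einstein * RAD_TO_ARCSEC:.2f} arcsec")
```

The 1.75 arcsecond figure is the one famously confirmed by Eddington’s 1919 eclipse expedition.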

This is the kind of physics that general relativity delivers: it’s a kind of accountancy. Feynman gives a nice explanation of curvature in general relativity in his lectures, pointing out that the Earth’s gravitational field makes the Earth’s radius contract by 1.5 millimetres, or MG/(3c^2).

It’s pretty interesting that you can get this contraction from the Lorentz transformation (1 − V^2/c^2)^(1/2) factor for lengths of moving bodies.

Clearly, in quantum gravity you have exchanges of gravitons between masses. Therefore a moving body will experience front-side graviton interactions which may cause the contraction (possibly like the net air pressure on the nose of a moving aircraft, or the water pressure on the front of a moving ship).

If so, whatever graviton effect causes the contraction of length of moving bodies, will also cause the radial contraction of static masses.

Because of the equivalence principle between inertial and gravitational masses, there should be an equivalence between the contraction you get when moving at relativistic velocity and that you get in a strong gravitational field. One way to relate these is by the fall of a small particle from a long distance in a gravitational field. The velocity gained when a small particle is dropped from an immense distance and falls to the earth’s surface (ignoring air drag) is equal to the escape velocity from the earth’s surface. The relativistic contraction of that small particle due to freefall from a very large distance should be identical to the amount of contraction of a static mass you get due to gravity at the earth’s surface, if inertial and gravitational mass effects are indistinguishable.

If you put the escape velocity law (V^2 = 2GM/r) into the Lorentz contraction, and then expand the result by the binomial expansion, as a first approximation this predicts that gravity contracts length by the amount GM/c^2. However, in the case of a moving body only one dimension gets contracted (that in the direction of motion), whereas three dimensions are contracted by gravity effects on static masses, so the average contraction per dimension will be one-third of GM/c^2, which gives the result Feynman quotes from general relativity in his lecture on curvature in ‘Lectures on Physics’.
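A numerical sketch of this argument, assuming standard values of the Earth’s gravitational parameter and radius, confirms both the first-order binomial approximation and the ~1.5 mm figure:

```python
# Check that the Lorentz contraction factor, with the escape-velocity law
# V^2 = 2GM/r substituted, is approximated to first order by 1 - GM/(r c^2),
# then evaluate the (1/3)GM/c^2 radial contraction for the Earth.
import math

GM_earth = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6     # Earth's mean radius, m
c = 2.998e8           # speed of light, m/s

v2 = 2 * GM_earth / R_earth                    # escape velocity squared at surface
exact = math.sqrt(1 - v2 / c**2)               # full Lorentz contraction factor
first_order = 1 - GM_earth / (R_earth * c**2)  # binomial first-order approximation
print(f"exact = {exact:.12f}, first order = {first_order:.12f}")

# Spread the GM/c^2 contraction over three dimensions, as argued in the text:
contraction = GM_earth / (3 * c**2)
print(f"Earth radial contraction ~ {contraction * 1000:.2f} mm")  # ~1.5 mm
```

For the Earth the two factors agree to well beyond twelve decimal places, since GM/(rc^2) is only ~7×10^(-10), and the one-third average reproduces the 1.5 mm value Feynman quotes.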

So it’s easy to understand the physics behind the mathematical laws in general relativity. All contractions in relativity are real effects from graviton exchanges occurring between masses.

Curvature is not real at the fundamental (quantum field) level, because quantum particles (gravitons) will accelerate masses in a large number of small steps from individual discrete interactions, not as a continuous smooth spacetime curvature. For large masses and large distances, the graviton interactions produce effects that average out to look like a smooth curvature.

The argument for 4-dimensional spacetime in general relativity is, as Feynman pointed out, based on the fact that in a gravitational field the radial field lines are contracted (e.g. the earth’s radius is contracted by 1.5 mm), but the transverse lines (like the earth’s circumference) aren’t contracted. This contraction of radius but not circumference would produce an increase in Pi if Euclidean 3-dimensional space were true. Having 4-dimensional spacetime is justified because it means that you can keep Pi fixed and account for the distortion by having an effective extra dimension appear!

One thing I don’t like about general relativity is that the source for the gravitational field (the stress-energy tensor) is a continuously variable differential equation system, and we know that mass comes in discrete particles. So all the solutions of general relativity which have ever been done are fiddles, using smoothed distributions of mass-energy. There is no way to get general relativity to work by having discrete particles produce the field: it only works for statistically averaged smooth distributions of matter and energy. You have to assume that the source of a gravitational field is a perfect fluid with no particulate qualities, so that it varies smoothly and works with the differential equations being used.

So despite the fact that general relativity has made many accurate predictions, its mathematical framework is that of classical physics (differential equations for fields), instead of being inherently compatible with quantum fields. It’s certainly accurate as an approximation where large numbers of gravitons are involved, but even Einstein himself had very serious reservations on whether continuous field structures were right:

‘I consider it quite possible that physics cannot be based on the [smooth geometric] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air …’

- Albert Einstein in a letter to friend Michel Besso, 1954.

***

For more on fact-based falsifiable work which isn’t an ‘alternative’ to speculative, non-falsifiable, non-fact based string theory see: https://nige.wordpress.com/2008/01/30/book/

I notice that Lubos Motl has, in a comment at the Not Even Wrong blog, attacked string theorist Urs Schreiber’s standing in the theoretical physics community:

http://www.math.columbia.edu/~woit/wordpress/?p=820#comment-42986

Motl’s attack is completely opinionated drivel: “… people who don’t really mean anything in physics, such as Urs Schreiber, A.J., or someone like that, can be ambiguous …”

I think this sort of comment is typical of the behind-the-scenes sneering officialdom of physics. I’m glad to have escaped mainstream physics academia, a vile mixture of political thuggery and religious orthodoxy, where dictatorial censorship rather than reason presents the barrier to genuine new (i.e. initially unorthodox) ideas, holding them up on non-scientific grounds that sound scientific or authoritative to most people.

http://en.wikipedia.org/wiki/Ad_hominem :

‘An ad hominem argument, also known as argumentum ad hominem (Latin: “argument to the man”, “argument against the man”) consists of replying to an argument or factual claim by attacking or appealing to a characteristic or belief of the person making the argument or claim, rather than by addressing the substance of the argument or producing evidence against the claim. The process of proving or disproving the claim is thereby subverted, and the argumentum ad hominem works to change the subject.

‘It is most commonly used to refer specifically to the ad hominem as abusive, sexist, racist, or argumentum ad personam, which consists of criticizing or attacking the person who proposed the argument (personal attack) in an attempt to discredit the argument. It is also used when an opponent is unable to find fault with an argument, yet for various reasons, the opponent disagrees with it.’

For more about Urs Schreiber’s recent string theory comments at Not Even Wrong which Motl apparently objected to, see my post: https://nige.wordpress.com/string-theorist-urs-schrieber-attacks-critics-of-pseudo-physics/

***

More light is thrown on Oppenheimer’s dictatorial manner, as a typical in-a-bluster busybody mainstream physicist, in the autobiography of neutron bomb inventor and Manhattan Project veteran Samuel T. Cohen, F*** You, Mr President! Confessions of the Father of the Neutron Bomb, 3rd ed., Athenalab, 2006, page 24:

‘Oppenheimer’s personality … could be intolerant and downright sadistic … he showed up at a seminar to hear Dick Erlich, a very bright young physicist with a terrible stuttering problem, which got even worse when he became nervous. Poor Dick, who was having a hard enough time at the blackboard explaining his equations, went into a state of panic when Oppenheimer walked in unexpectedly. His stuttering became pathetic, but with one exception everyone loyally stayed on trying to decipher what he was trying to say. This exception was Oppenheimer, who sat there for a few minutes, then got up and said to Dick: “You know, we’re all cleared to know what you’re doing, so why don’t you tell us.” With that he left, leaving Dick absolutely devastated and unable to continue. Also devastated were the rest of us who worshipped Oppenheimer, for very good reasons, and couldn’t believe he could act so cruelly.’

Cosmology at Dr Cormac O’Raifeartaigh’s blog

Professor Cormac O’Raifeartaigh has an interesting blog post about The Cosmological Distance Ladder – the key to understanding the Universe, a lecture given by Michael Rowan-Robinson, Professor of Astrophysics at Imperial College London.  Rowan-Robinson is author of the textbook Cosmology, which is very good.

I believe that the cosmological distance ladder is the key to understanding the universe, very literally indeed!  So I commented on Professor O’Raifeartaigh’s blog as follows:

Take the Hubble law v/r = H

a = dv/dt = d(Hr)/dt = H*(dr/dt) + r*(dH/dt)

dH/dt = 0, so

a = H*(dr/dt)

= Hv

= H(Hr)

= rH^2

Which predicts a cosmological acceleration at cosmological distances of approximately 6*10^(-10) ms^(-2), matching what is seen in observations. Smolin’s T.T.W.P. book, for example, translates the small cosmological constant into an acceleration in units of ms^(-2).
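As a numerical check of a = rH^2: at the horizon distance r = c/H the formula reduces to a = cH. A minimal sketch, assuming a Hubble constant of roughly 70 km/s/Mpc:

```python
# a = rH^2 evaluated at the horizon r = c/H, where it reduces to a = cH.
# Assumed input: Hubble constant H ~ 70 km/s/Mpc.

MPC_IN_M = 3.086e22    # metres per megaparsec
c = 2.998e8            # speed of light, m/s
H = 70e3 / MPC_IN_M    # Hubble constant in SI units, 1/s (~2.27e-18)

a = c * H              # = rH^2 at r = c/H
print(f"a = rH^2 at r = c/H: {a:.2e} m/s^2")  # ~7e-10 m/s^2
```

This gives roughly 7×10^(-10) ms^(-2), the same order of magnitude as the 6×10^(-10) ms^(-2) quoted in the comment; the small spread simply reflects the assumed value of H.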

I did this and published it via Electronics World in 1996, well before the cosmological acceleration was discovered by Perlmutter in 1998 and published in Nature.

It’s wrong of Hubble to express the recession solely as v/r = H. Didn’t he know about spacetime?? Distance isn’t meaningful here. The velocity v is only correlated to distance r if you are looking back in time as well as distance, because light takes a time of t = r/c to come to you from distance r. During this time, velocity v is quite likely to change!

So Hubble should have expressed recession velocities less ambiguously using the concept of spacetime, where the constant is not v/r, but v/t. If he had done that, he would have noticed that v/t has units of acceleration! (The ratio v/r has units of 1/t, i.e. in general it’s inversely proportional to the age of the universe – and is exactly the inverse of the age of the universe if the universe is flat rather than curved on cosmological scales.)

Once you differentiate Hubble’s law v = Hr to get acceleration a = rH^2, you can do a lot of interesting physics using Newton’s simple laws of motion. E.g., any receding mass m has an outward force from you of F=ma (Newton’s 2nd law of motion), and Newton’s 3rd law of motion then suggests an inward “reaction” force must be directed towards you of equal size F=ma. This reaction force presumably (from the possibilities) is carried by gravitons, and when you calculate with this you find that gravity and all the confirmed aspects of G.R. are reproduced by spin-1 gravitons.

E.g., spin-1 gravitons come inwards towards us from distant receding masses in all directions. The pressure acts on all fundamental particles, and since nearby masses aren’t receding, they don’t have an outward force relative to one another and hence don’t exchange gravitons forcefully with one another. This tells you that the net effect is that nearby particles shield one another and get pushed together. You can predict how much.

This predicts gravity. As Feynman showed in his Lectures on Physics, G.R.’s main difference physically from Newtonian gravity is that there is a contraction radially of spacetime around a mass; the earth’s radius is reduced by (1/3)MG/c^2 = 1.5 millimetres due to the G.R. contraction term. This contraction is the effect of inward graviton force on the earth, shrinking it radially.

The bottom line is that there’s loads of evidence to support the contention that dark energy is spin-1 graviton energy.

Spin-1 graviton exchange between masses on large (cosmological) scales pushes the masses apart, causing the cosmological acceleration as predicted in 1996. It also pushes cosmologically “nearby” masses together (because nearby masses aren’t receding much if at all, there is little or no forceful exchange of gravitons between nearby masses; so the only forceful exchange of gravitons occurs between each of the masses and converging inward gravitons from the surrounding receding masses in the universe, which means that nearby masses get pushed towards one another; gravity).

That’s the answer to “dark energy”. It’s graviton energy!

Sadly, all the stuff above, like Hubble’s law, spacetime, differentiation, Newton’s empirical laws of motion, and so on, is rejected when combined to come up with a quantum gravity theory.

The mainstream prefers to believe that two nearby masses only exchange gravitons with one another, which means that for attraction the gravitons would have to have spin-2. They just can’t see that an apple is going to be exchanging gravitons more forcefully with the massive surrounding universe than with the earth; although the earth is closer, the gravitons coming inwards from surrounding masses and hitting the apple are converging inward from the surrounding universe (they’re not diverging outward). So there is no loss due to inverse square law divergence!

It’s so hard to get anybody who believes in spin-2 leprechauns to listen to straightforward physical facts, that I’ve virtually given up!

***

‘I guess the real question concerns [what] the Hubble equation really means. Is it meaningful to talk about a Hubble graph for one galaxy only? If we measure the distance and velocity of galaxy A, plot it, then measure the distance and velocity of galaxy A again some time later, plot that etc, do we get a straight line of slope H?’ – Cormac

Thanks for responding! If we take a single highly redshifted receding galaxy or supernova, there is evidence that it is accelerating away from us. Perlmutter’s original paper on the discovery of the cosmological acceleration is titled:

Discovery of a Supernova Explosion at Half the Age of the Universe and its Cosmological Implications published in Nature v. 391, pp. 51-54 (1 January 1998).

By that time, 50 supernovae with extreme redshift had been discovered, but the paper dealt with just the first one of extreme redshift, SN 1997ap, which has a redshift of z = 0.83. Thus, the implication from this research is that individual supernovae are indeed accelerating!

So this acceleration of individual masses away from one another isn’t controversial.

If you remember the story, Einstein added the cosmological constant to general relativity a year or so after publishing the basic field equation in November 1915.

He believed that the observed universe was static (Hubble’s analysis of redshifts wasn’t completed until 1929, and is still falsely attacked by some people who have mistaken ideas, as exposed on the excellent page http://www.astro.ucla.edu/~wright/tiredlit.htm), and that it would collapse unless there was a repulsive force between masses which increased with distance (thus being negligible over small distances) and cancelled out gravitational attraction over a distance equal to the average distance between galaxies.

At smaller distances, Einstein’s cosmological constant produced a repulsive force which was smaller than gravity (so gravity dominated), while over bigger distances it produced a force which was bigger than gravity (so universal repulsion dominated). One immediate problem was that this model would make the universe unstable.

So it was abandoned by Einstein after Hubble’s results showed that the universe was expanding.

However, in 1998 the cosmological constant (albeit with a much smaller magnitude than Einstein had stipulated) had to be taken back into the field equation to account for the observed lack of gravitational curvature on the largest distances. The exact value is still hazy, but the approximate order of magnitude is well established: it’s certain from the evidence that there is cosmological acceleration on the order of 10^{-10} m/s^2 or so at large redshifts. There is some uncertainty from gamma ray burster evidence over whether the cosmological acceleration actually implies a cosmological constant or an evolving parameter: http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

So from observational evidence, every receding mass has a small cosmological acceleration away from every other mass. At small distances, gravity dominates over cosmological acceleration, and so the cosmological acceleration only becomes important over large distances.

Going back to your question about testing the applicability of the Hubble law to individual galaxies by measuring the recession and distance of a galaxy at successive times, I fear that we’d have to wait too many lifespans to get statistically significant results. Experimental errors are generally too large to wait short periods of time and detect whether an individual galaxy is accelerating or not. Obviously if we naively apply the Hubble law to predict the motion of an individual galaxy, it fails because as the galaxy recedes to greater distances the Hubble law is describing earlier times after the big bang; when of course the recession will take time so the galaxy will age as it recedes, instead of getting younger. My feeling is that, as Minkowski stated in the quotation above, we have to base physics on observables. The Hubble law is what is observed. Even though an individual galaxy may just be coasting along at constant velocity, or maybe slowing, that doesn’t really matter because all we we see appears to be an acceleration in the frame of reference at our disposal, in which information is carried to us at light speed from times past which increase with distance. Because other effects like gravitons will go at the same velocity as visible light, this observed reference frame is the correct one to be using in making predictions. For gravitational purposes, the apparent spacetime observations of the universe are fine, because the data is coming from the past just at the same velocity that gravitons come at. So the apparent positions and accelerations of masses as seen with visible light are going to be the same as those corresponding to gravitons coming from such receding masses. In any case, from the fact that the universe really is acelerating, I have no problem in deducing from this acceleration that receding individual galaxies themselves do have an effective acceleration in observable spacetime. 
If we could see them in a reference frame whereby we saw everything at the same age after the big bang – without looking backwards in time with increasing distance – then maybe the acceleration would be modified. But we can’t see the universe in a reference frame where everything is 13,700 million years old, so that frame is unphysical. We have to accept, as Minkowski stated in 1908, that when we look at distant things we’re seeing them as they were at earlier epochs after the big bang.

***

http://coraifeartaigh.wordpress.com/2008/08/15/hubble-puzzle/#comment-191

Dr Chris Oakley:

Thanks! Taking your last point first, the flat universe cosmology has

t = 1/H

or:

H = 1/t

so it does suggest that the Hubble “constant” falls as the universe expands. Hubble’s constant is not a time-independent constant, but merely

v/r = H

= 1/t

in flat cosmology, where t is age of universe.

So v/r is only a “constant” as far as Hubble was concerned. H is not a variable as observed in the Minkowski flat spacetime metric of the universe we see.
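To put a number on H = 1/t (a rough sketch using standard unit conversions; the 13,700-million-year age is the figure used elsewhere in this discussion):

```python
# Rough check of the flat-cosmology relation H = 1/t, using the age of
# the universe quoted in this discussion (13,700 million years).  The
# constants are standard conversion factors, not values from the text.
YEAR_S = 3.156e7          # seconds per year
MPC_M = 3.0857e22         # metres per megaparsec

t = 13.7e9 * YEAR_S       # age of the universe in seconds
H = 1.0 / t               # Hubble parameter in SI units (1/s)

# Express H in the astronomers' conventional units of km/s/Mpc:
H_conv = H * MPC_M / 1000.0
print(H)       # ~2.3e-18 per second
print(H_conv)  # ~71 km/s/Mpc
```

The result, roughly 71 km/s/Mpc, is close to the measured value of Hubble’s constant, which is the consistency the flat cosmology relies on.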

Taking

v/r = H = 1/t,

your argument (‘so it is Hubble’s “constant”, not the speed of the galaxy that is changing’) suggests that v is constant and in

v/r = 1/t

each denominator (r and t) is increasing in proportion.

That’s a nice simple idea. Unfortunately, it’s completely wrong, because the time t in this formula is the age of the universe, whereas r = cT where T is time past.

The age of the universe, t, is not proportional to time past T = r/c. E.g., the nearest star, the Sun, is 8.3 light-minutes away, i.e. T = 8.3 minutes, but the universe is not t = 8.3 minutes old. (We’re just seeing the Sun as it was 8.3 minutes in the past.) So you can’t set v/r = H = 1/t and try to get rid of a changing v by claiming that r is proportional to t!
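A trivial numerical illustration of the point, using standard values for c and the Earth-Sun distance:

```python
# Time past T = r/c is set by the distance of the object being observed,
# not by the age of the universe t.  Sun example from the discussion above.
C = 2.998e8               # speed of light, m/s
AU = 1.496e11             # Earth-Sun distance, m (standard value)
YEAR_S = 3.156e7          # seconds per year

T_sun = AU / C            # time past for sunlight reaching us, seconds
print(T_sun / 60.0)       # ~8.3 minutes

t = 13.7e9 * YEAR_S       # age of the universe in seconds
print(t / T_sun)          # ~1e15: t and T differ by fifteen orders of
                          # magnitude, so r is clearly not proportional to t
```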

Now for your earlier comment above about the explosion analogy. This particular explosion analogy has been tried and criticised. In about 1931, when initial attempts were being made to understand the Hubble law (before the Friedmann-Robertson-Walker metric was dogma), people like Lemaitre were suggesting that the universe was like an explosion in a pre-existing space.

A letter appeared in Nature in I believe 1931 (I think you will find the details discussed in Eddington’s book The Expanding Universe) pointing out that the Hubble law is not what you expect from say a bursting bomb.

Just before the bomb explodes, the compressed hot gas of explosion debris will have a Maxwell-Boltzmann distribution of velocities, which is skewed so that most of the molecules have low velocities, and above the peak there is a long tail of a small number of molecules with very high velocities: see http://www.webchem.net/notes/how_far/kinetics/maxwell_boltzmann.htm or http://theory.ph.man.ac.uk/~judith/stat_therm/node85.html

The letter in Nature pointed out that the Hubble distribution is quite different: it is in fact an anti-Maxwell-Boltzmann distribution. In the universe, the greatest number of galaxies have the greatest recession velocities, which is contrary to what the Maxwell-Boltzmann distribution predicts for molecules from a bursting bomb.
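The shape being described can be sketched numerically (working in units where the most probable speed is 1, so the Maxwell-Boltzmann speed distribution reduces to f(v) ∝ v² exp(−v²)):

```python
import math

# Sketch of the Maxwell-Boltzmann speed distribution shape discussed
# above: f(v) ~ v^2 exp(-m v^2 / 2kT).  In units where the most probable
# speed v_p = sqrt(2kT/m) = 1, this becomes f(v) ~ v^2 exp(-v^2).

def mb_shape(v):
    return v * v * math.exp(-v * v)

# Simple Riemann sums: what fraction of molecules are slower than the
# peak speed, versus far out in the high-velocity tail?
dv = 1e-4
total = sum(mb_shape(i * dv) * dv for i in range(1, 100000))
below_peak = sum(mb_shape(i * dv) * dv for i in range(1, 10000))      # v < v_p
fast_tail = sum(mb_shape(i * dv) * dv for i in range(25000, 100000))  # v > 2.5 v_p

print(below_peak / total)  # ~0.43: a large share of molecules are slow
print(fast_tail / total)   # well under 1%: very few molecules are fast
```

So in a bursting bomb most debris is slow and only a tiny fraction is very fast, which is indeed the opposite of the Hubble distribution, where the highest recession velocities belong to the most distant galaxies.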

‘Afterwards the distance of the fragments from the site of the explosion is proportional to the velocity they set off at, so if you observe at a later time T the distance from the initial point d=vT. So the “Hubble constant” for this is just the inverse time since the explosion. And, because of vector addition of velocities, one gets the same answer in a co-ordinate system comoving with any fragment.’

There’s a graph showing the Hubble law for distribution of air molecule speeds behind the shock front in an explosion, in a paper by Sir G. I. Taylor, ‘The formation of a blast wave by a very intense explosion. II. The atomic explosion of 1945′, Proceedings of the Royal Society of London, Series A, v. 201, pp. 175–186.

However, that’s an air burst detonation, not an explosion in space. The die-hard general relativists who believe in curved space (even though the universe has been shown to be flat, and curvature is anyway just an approximation to a lot of quantum graviton interactions), will tell you (falsely) that spacetime in the universe curves back on itself at great distances, so any effort to model the big bang as some kind of explosion is automatically void.

I’m not sympathetic with those people who want to use the authority of popular speculations to rule out the simplest possible physical model. Within seconds, the big bang universe became mainly compressed, ionized hydrogen gas. As in an H-bomb, fusion occurred in the extremely high temperature and pressure but was not complete; the expansion of the universe quenched the fusion rate by reducing the temperature and pressure before all of the hydrogen could fuse into helium, etc.

If you get away from the curved space of general relativity, and move on to a model of gravitation where accelerations result from the exchange of gravitons in discrete interactions (so spacetime curvature is just an approximation for the effect of a large number of discrete interactions), then it might make sense to try to model the late stages of the universe as a 10^55 megaton H-bomb in a pre-existing space. But this won’t address the very early-time physics (less than 1 second after the big bang), where the energies were initially so high that the binding energy of hadrons was trivial by comparison, so there was a quark soup instead. Also, it will annoy all the pacifist physicists who don’t like the morality of using an explosion as any kind of analogy to the big bang. Finally, general relativity indicates that space was created with the big bang, not pre-existing (this is because spacetime is defined by the gravitational field, so where you don’t have such a field there is no spacetime).

I find these arguments vacuous because general relativity is just a classical continuous-field model for gravitational fields. Because of the clever way it incorporates conservation of mass-energy for fields (something which Newton’s gravity law ignored), it makes checkable predictions that differ from Newtonian gravity, and which are correct. But the evidence supporting general relativity is just supporting its inclusion of conservation of mass-energy; it isn’t specifically supporting the classical curved-spacetime model of general relativity. Curved spacetime is at best just a classical approximation to many discrete graviton interactions. Sum the Feynman-diagram interaction histories for many graviton interactions, and you get something that approximates to the spacetime curvature of general relativity.

A simple physical way to get the observed cosmological acceleration out of big bang is by spin-1 graviton exchange between masses. All masses have the same gravitational charge (say positive gravitational charge), so they all repel by exchanging gravitons. The repulsion makes masses accelerate away from one another, giving the Hubble law. The same effect predicts gravitation with the correct strength (within observational error bars).

***
‘As for an “accelerating” universe, that observation depends entirely on the redshifts of Type Ia supernovae. Many astrophysicists are not sure their luminosity is constant.’ – Louise Riofrio

Louise,

In addition to Type Ia supernovae, gamma ray bursters (stars collapsing into black holes) also provide alleged evidence of “acceleration”, albeit an “evolving dark energy” (changing cosmological constant), see the plotted gamma ray burster data at: http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

However, as Nobel Laureate Philip Anderson has argued,

‘… the flat universe is just not decelerating, it isn’t really accelerating …’ – http://cosmicvariance.com/2006/01/03/danger-phil-anderson/

The radial recession of galaxies in the Friedmann-Robertson-Walker metric of general relativity is the Hubble recession law with a gravitational slowing down at large distances due to attraction to the mass centred around us (in our frame of reference, which is the frame of reference we’re observing the universe from).

The 1998 results of Perlmutter on Type Ia supernovae suggested that this metric is wrong because there is no observable gravitational slowing down of the expansion.

So the mainstream knee-jerk response was to say that the inward-directed gravitational acceleration (which is only important at large distances, i.e. immense redshifts, according to general relativity) must be cancelled out by an outward-directed acceleration on the order 10^{-10} metres/second^2. The outward-directed acceleration would be due to a universal repulsion of masses, i.e. a small positive cosmological constant.

However, this explanation makes quite a lot of assumptions. As mentioned, gamma ray burster data indicate an evolution of the cosmological ‘constant’. There is also the idea that gravity gets weaker over cosmological distance scales. If you believe that gravity is due to spin-2 gravitons being exchanged directly between the masses which are attracting, then for redshifted masses the gravitons can be expected to be redshifted and thus received in a degraded energy condition, weakening the gravity coupling constant G from that measured in a lab where the masses are not relativistically receding from one another.

But if gravitons have spin-1 rather than spin-2 (and thereby act by pushing masses together instead of pulling them together), the spin-1 graviton exchange actually causes the cosmological acceleration as well as gravity. This makes checkable predictions.

I think one thing to be made clear is that a gravitational field can exist without anything actually accelerating. I’m sitting in a gravitational field of 9.81 metres/second^2 and I’m not being accelerated downward, because there’s a normal reaction force from the chair that is stopping me.

It’s exactly the same with the cosmological acceleration:

1. The most distant receding galaxies, supernovae and gamma ray bursters etc have an inward-directed gravity-caused acceleration towards us observers on the order 10^{-10} metres/second^2. (This effect is similar in nature to the gravitational slowing down of a bullet fired vertically upward.)

2. Such distant receding matter also has an outward-directed ‘dark energy’ (spin-1 graviton, I argue) caused acceleration away from us on the order of 10^{-10} metres/second^2.

The outward cosmological ‘dark energy’ acceleration cancels out the inward gravitational acceleration, so there is no net acceleration.

This is what Philip Anderson meant when he wrote:

‘… the flat universe is just not decelerating, it isn’t really accelerating …’

In my case, I’m not being accelerated downward by gravity because that acceleration is being cancelled out by an equal upward acceleration due to electromagnetic repulsion of the electrons in me and the electrons in my chair.

In the case of the universe, the cosmological acceleration of the universe is being cancelled out by gravitational attraction.

***

“In the standard big bang cosmology of yore, its growth was supposed to be decelerating due to the gravitational pull between galaxies. If you believe the current concordance model, it is actually growing faster as time goes by.” – SomeRandomGuy

The Friedmann-Robertson-Walker metric of general relativity up to 1998 predicted that the expansion rate is slowing down, and that this should be observable at extreme redshifts.

Perlmutter simply found that the expansion rate isn’t slowing down in his observations in 1998.  The mainstream interpreted the lack of gravitational (inward-directed) deceleration to imply that gravity is being cancelled out by an outward-directed cosmological acceleration due to some unknown dark energy.

The universe isn’t actually accelerating; there is an acceleration field which isn’t causing matter to accelerate because gravitational attraction is cancelling out that outward acceleration (see Nobel Laureate Phil Anderson’s criticism of “acceleration” on mainstream cosmologist Professor Sean Carroll’s blog as I quoted it in the previous comment; Sean didn’t repudiate this point!).

The cosmological acceleration is a small but universal repulsion between masses.  Gravitation is a universal attraction between masses.  On cosmological distance scales these two opposite accelerations cancel one another out, so there is no net acceleration of matter.

***

SomeRandomGuy,

The Lambda-CDM model, which is the mainstream model now (the Cold Dark Matter model with a small positive cosmological constant lambda worked out from the data), involves a repulsive force that increases as a function of distance.

The gravitational deceleration effect decreases with increasing distance.

Therefore at small distances, gravitation predominates, then at a larger distance (on the order 5*10^9 light years) the acceleration of the universe cancels out gravitational deceleration on receding matter, and at still greater distances the cosmological acceleration exceeds gravitation.
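For comparison, the crossover in the mainstream Lambda-CDM model itself can be estimated from the deceleration parameter. The density parameters below are commonly quoted values, assumptions on my part rather than figures from this discussion:

```python
# Hedged sketch: where repulsion and gravitational deceleration balance
# in the standard flat Lambda-CDM model.  The deceleration parameter is
#   q(z) = [Omega_m (1+z)^3 / 2 - Omega_L] / [Omega_m (1+z)^3 + Omega_L]
# and q = 0 at the crossover, giving (1+z)^3 = 2 Omega_L / Omega_m.
# Density parameters are commonly quoted round values (assumed here).
OMEGA_M = 0.3
OMEGA_L = 0.7

z_transition = (2.0 * OMEGA_L / OMEGA_M) ** (1.0 / 3.0) - 1.0
print(z_transition)  # ~0.67
```

A crossover redshift of roughly 0.67 corresponds to a lookback of several thousand million years, broadly the distance scale quoted above.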

The observations of red-shifts made so far from Type Ia supernovae and gamma ray bursters are concentrated in the region where cosmological acceleration (repulsion) is cancelling out gravitational acceleration which is trying to slow the expansion of the universe.

For bigger distances than have currently been well observed, the mainstream Lambda-CDM model suggests an overall net acceleration outward. Whether the model is right is another matter (see the evidence from gamma ray bursters which suggests that the value of Lambda isn’t a constant: http://cosmicvariance.com/2006/01/11/evolving-dark-energy/ ).

This acceleration, as an extrapolation from the Lambda-CDM model, isn’t a fact or even a scientific prediction: it’s not really falsifiable, because the small positive Lambda/cosmological constant value is already an ad hoc modification, not a genuine scientific prediction. You can endlessly introduce ad hoc ‘epicycle’ type corrections into a model to make it fit unpredicted effects. Whatever new data come, they can just find a formula to fit Lambda’s variation (if any) as a function of redshift/distance. But this isn’t real physics.

The real physics concerns the nature of the dark energy. It’s spin-1 gravitons, causing universal repulsion between similar gravitational charge; this causes the cosmological acceleration and also pushes relatively small masses towards one another because they exchange gravitons more forcefully with the large masses in the distant universe than with one another. Both are checkable predictions.

Spin-2 gravitons don’t lead to any checkable predictions. Firstly spin-2 gravitons are based on the bizarre idea that mass A and mass B are solely exchanging gravitons with one another, and not exchanging gravitons with every other mass in the universe, including the immense masses at great distances. Secondly, spin-2 gravitons seem to need some incredibly ugly and extravagant theoretical framework such as string theory with 10 dimensions. (String theory is inherently vague because 6 dimensions are supposed to be too small to probe experimentally, so nobody knows their moduli if they are compactified in a Planck scale Calabi-Yau manifold. Without knowing all the parameters of these extra dimensions, you can’t make falsifiable predictions.)

***

SomeRandomGuy,

Thanks for that up-to-date reference. Normally blog posts are updated or have trackbacks from new posts when updates are made, which makes them more dynamic than the usual literature, not less so, but as you point out this is not so for Cosmic Variance. (I’ll avoid linking to posts at Sean’s blog from now on in case they become obsolete and are not updated by a trackback.)

It’s interesting that the latest gamma ray burster data is compatible with an unchanging cosmological constant!

***

“As an aside, your use of the term “acceleration outward” suggests that you do not yet understand FRW. You really seem to be using a mental image of an ordinary explosion in 3-dimensional space.”

See http://coraifeartaigh.wordpress.com/2008/08/12/cosmological-distance-at-trinity-college/#comments

We’re seeing earlier epochs with increasing distances, which is quite different from a purely 3-dimensional Euclidean space. As I explained there, flat spacetime is Minkowski spacetime, where the time you are seeing changes with distance. This is why the variation in recession velocity with “distance” that Hubble reported is also a variation of velocity with time (conventionally referred to as acceleration).

“Your claim of repulsive “spin-1 gravitons” is of course completely unsubstantiated.”

There is scientific evidence to back it up: https://nige.wordpress.com/2008/01/30/book/

The cosmological acceleration is a universal repulsion of masses, suggesting a spin-1 mediator, and differentiating Hubble’s law, a = dv/dt = d(Hr)/dt, gives you a way of making solid predictions of forces; it predicts that the same spin-1 mediators that cause the cosmological acceleration also produce gravitation. These are simple physical calculations using long-established empirical laws and observations. This approach predicted the cosmological acceleration of the universe in a publication in 1996, two years before the observational discovery by Perlmutter!

The same calculation predicts the gravitational parameter G accurately. It doesn’t contain any speculations, unlike string theory. It’s already made falsifiable predictions and been vindicated. So it does seem scientifically accurate, although string theorists have censored it out!

***

SomeRandomGuy,

No, I’ve never mentioned a balloon analogy!

Your statement that ‘the whole effect is caused by the expansion of space’ is vague enough to be compatible not only with what you’re thinking (the idea that the vacuum is powering cosmological expansion) but is also compatible with the physics I’ve stated: spin-1 gauge boson exchange between masses in the vacuum causes a repulsive force, giving rise to expansion of the universe (recession of masses) over large distances and pushing matter together on small scales.

Cosmologists don’t know what dark energy is so they don’t mean anything by accelerated expansion, apart from what I explained.

I.e., the cosmological acceleration is an outward radial acceleration (acceleration is a vector so it has direction and can be either outward or inward towards us when it is ascertained from redshifts). It’s needed – as far as cosmologists are concerned – to make the observed redshift data compatible with the predicted gravitational deceleration of the universe which is based on the density of the universe.

If the universe had a high enough density, the gravitational deceleration would not merely slow down the expansion but would cause the universe to begin contracting at some point in the future. I think it’s important to understand that the gravitational deceleration is always a vector, represented by arrows pointed radially inwards towards the observer. The acceleration of the universe by ‘dark energy’ is an acceleration outward, a vector represented by an arrow pointed radially away from the observer. Hence the two accelerations oppose each other.

This physical explanation makes clear what is going on. Ideally the quantitative magnitude of the acceleration needs to be explained to people, on the order of 10^{-10} metres/second^2. If all this had been done when the acceleration was discovered in 1998, then there would be less confusion today. E.g., differentiate the Hubble law (v = Hr) and you get

a = dv/dt = d(Hr)/dt = H*dr/dt + r*dH/dt = Hv + 0 = rH^2.

This predicts the acceleration of the universe quantitatively. Mainstream cosmologists can’t predict this, so they don’t really ‘mean’ anything about acceleration. This was predicted in 1996, two years before being confirmed, while I was doing a cosmology course at university. Further calculations predicted gravity accurately. This model is observationally confirmed and predicts not only cosmological acceleration but also gravity accurately, using spin-1 gauge bosons. It debunks spin-2 graviton speculations, which are physically vacuous.

To restate briefly Dr Oakley’s problem:

Hubble law: v/r = H

where H = 1/t,

Chris Oakley argument: v/r = H = 1/t,

hence: v/r = 1/t

where Dr Oakley suggests that r is proportional to t, so v doesn’t vary: “(so it is Hubble’s “constant”, not the speed of the galaxy that is changing)” comment #3.

This is wrong since, as you look to bigger distances (r), you are seeing earlier times after the big bang.

So r definitely is not proportional to the age of the universe, t. In fact, if the age of the universe is that of the observer’s frame of reference, t is fixed at 13,700 million years. Hubble’s point was that v/r = constant = H, regardless of how far away (back in time) you look. This is why I feel that Dr Oakley’s comments (comments #2 and #3) above are in error.

If an Oxford PhD/D.Phil in quantum field theory can make such an error in looking at the very basics of cosmology, and then come back with personal comments ignoring the science, you can see the problem in communicating the fact that there is an acceleration inherent in the Hubble law!

***

“I can see why people don’t like to get into arguments with you … Hubble’s law is a fit. The velocity of galaxies being proportional to their distance roughly fits the observational data.” – Chris Oakley

You haven’t mentioned the scientific facts at all; you’ve “argued” with me by making personal comments which miss the scientific facts. You’ve entirely ignored everything that I wrote in comment 24.

Look, saying “Hubble’s law is a fit”, which we all know, doesn’t tell us anything new, because that’s been known since Hubble published it in 1929. I’ve done cosmology and quantum mechanics courses, and I’m studying quantum field theory as time permits.

Hubble’s law: v/r = H. Rearrange: v = rH. Differentiate it and you get an acceleration, from the calculus

dv/dt

= d[Hr]/dt

= H(dr/dt) + r(dH/dt)

= H(dr/dt)   [since dH/dt = 0 as observed in spacetime]

= Hv

= H(Hr)

= rH^2

~ 6*10^(-10) ms^(-2) at extreme redshifts approaching the horizon radius.
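As a quick numerical sanity check (a sketch, using rounded values for c and for the 13,700-million-year age), evaluating rH^2 with H = 1/t and r approaching the horizon radius ct gives the quoted order of magnitude:

```python
# Numerical evaluation of a = r H^2, taking H = 1/t (flat cosmology, as
# in the text) and r approaching the horizon radius r = c t.  Then
# a = (c t)(1/t)^2 = c/t.
C = 2.998e8               # speed of light, m/s
YEAR_S = 3.156e7          # seconds per year
t = 13.7e9 * YEAR_S       # age of the universe, s

H = 1.0 / t
r = C * t                 # horizon radius, m
a = r * H * H             # = c/t

print(a)  # ~7e-10 m/s^2, the order of magnitude quoted above
```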

This was predicted in 1996, and confirmed in 1998. The calculation above was published before the confirmation. I’m very well aware (and not just from the hostility I receive) that it is not in the textbooks; if it were well known, I wouldn’t need to point it out to people. It’s counter-intuitive, which is why it’s not mainstream thinking. If it were totally obvious, someone else would have predicted the acceleration of the universe this way before me. Yet it’s been borne out by factual evidence. It also leads to a simple prediction of the strength of gravitation, which again turns out to be accurate!

***

SomeRandomGuy,

Thirteen years ago I did a cosmology course that included finding solutions such as the Friedmann-Robertson-Walker metric and everything else. General relativity doesn’t predict the small positive cosmological constant to fit the observed expansion of the universe. This approach does.

See my calculation in comment 29 above and note it was done in 1996 two years before Perlmutter’s observations of distant supernovae confirmed it. It’s a different calculation from the Friedmann-Robertson-Walker metric. I work from empirical laws towards falsifiable predictions. This isn’t mainstream fundamental physics, which involves starting with a mathematical speculation like general relativity (or even string theory) that can model just about any universe, and fitting the model to the results of observations <i>ad hoc</i>. That’s one reason why I dropped physics and don’t want a degree in physics. Another reason is the general hostility, prejudice, etc., you receive when trying to have a scientific discussion. It’s not particularly healthy. I’m not claiming to be a paid-up member of the orthodoxy, I’m just pointing out facts that made falsifiable predictions which were confirmed.

***

SomeRandomGuy,

I did a course in general relativity. Let me explain this to you and also to Chris Oakley very clearly.

Hubble noticed that the ratio v/r = constant = H.

In other words, the velocity of recession increases linearly with distance in spacetime.  Hence, when observing the universe by looking out in space (and back in time) in observable spacetime, dH/dt = 0, but if we <i>don’t</i> do that but instead wait around for H to change and then look again in the telescope, we’ll find that H is varying!

In the context of looking out to bigger distances and earlier times after BB, H is a constant, but when we wait around, we’ll find that H is varying as a function of the age of the universe for the observer (not for the observed).

When I did cosmology, the horizon radius of the universe was supposed (from the FRW metric) to be increasing in proportion not to t but to t^(2/3).  This slower than linear increase was predicted from the gravitational deceleration on the expansion which was predicted by GR without a CC, i.e. a curved universe of critical density.

This implied that the age of the universe was t = (2/3)/H, i.e. H = (2/3)/t.

After it was discovered by Perlmutter in 1998 that the mainstream model was wrong and the universe wasn’t decelerating (because the gravitational deceleration was being cancelled by a repulsive-force type acceleration), the flat geometry of the universe meant that the horizon radius wasn’t expanding as t^(2/3) but merely as t, so the age of the universe in flat geometry is simply

t = 1/H.

This has nothing to do with the Hubble constant varying as we look back in time.  It doesn’t!  There is no contradiction between dH/dt = 0 for spacetime where t represents earlier epochs in the universe (because as Hubble observed, H <i>is constant when we look back in time because recession velocities vary in proportion to distances or to times past</i>), and H = 1/t.

dH/dt = 0 applies to looking to greater distances in spacetime where the variation of v with r (or time past) means that v/r = constant = H, so dH/dt = 0.  When you differentiate such a constant you get zero.

But H = 1/t does not apply to looking to greater distances, because t here is the observer’s time, not the time after the big bang for the object being observed.

The only variation of H you get from H = 1/t is when the age of the universe in our frame of reference varies.

E.g., if you wait for a time of 13,700 million years and then re-measured Hubble’s “constant”, H would have halved.

<i>When you look to greater distances, however, H doesn’t appear to vary!</i> That’s because H, when observed in spacetime, is a ratio of two things which are both varying in sync: v and r. Because recession velocities increase as you look to greater distances (earlier times), you can’t observe any variation in H with distance. This is so simple, it’s depressing that I have to really keep spelling it out. Again, dH/dt = 0 involves spacetime t for the observed galaxy, while H = 1/t involves observer time t.
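The distinction can be sketched numerically (a toy illustration with rounded constants): within a single observation the ratio v/r is the same at every lookback time, while between observations H = 1/t falls.

```python
# Two senses of "varying H", as distinguished above.
# (1) Within one observation: v = H * r at every distance, so v/r is the
#     same however far back we look -- dH/dT = 0 in observed spacetime.
# (2) Between observations: H = 1/t falls as the observer's age-of-
#     universe t grows.
C = 2.998e8               # speed of light, m/s
YEAR_S = 3.156e7          # seconds per year

def H_of_age(t_seconds):
    return 1.0 / t_seconds          # flat-cosmology H = 1/t

t_now = 13.7e9 * YEAR_S
H_now = H_of_age(t_now)

# (1) Ratio v/r at several lookback times T (with r = c*T): constant.
ratios = []
for T_years in (1e8, 1e9, 5e9):
    r = C * T_years * YEAR_S
    v = H_now * r                   # Hubble law as observed
    ratios.append(v / r)
print(ratios)                       # all equal to H_now

# (2) Wait until the universe is twice as old: H halves.
print(H_of_age(2 * t_now) / H_now)  # 0.5
```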

“But you are quite wrong to call GR “a mathematical speculation”.”

I’ve gone into the details of the speculations in general relativity here: https://nige.wordpress.com/2008/08/16/authority-problems/

1. The successful predictions of “general relativity” result directly from the inclusion of mass-energy conservation into the gravitational model.

2. General relativity speculatively assumes that the source of gravity is a continuous distribution, not a quantized one consisting of fundamental particles. So the stress-energy tensor has to be supplied by an unrealistically smoothed distribution of matter like a mathematically “perfect fluid”, instead of discrete particles, to act as the source of smooth curvature.

3. General relativity speculatively and implicitly assumes that acceleration is due to spacetime being smoothly curved, instead of there being a quantum field with a series of discrete interactions with gravitons.

4. General relativity’s Ricci curvature tensor is rank-2, so it was argued by Pauli and Fierz in the 1930s that gravity is due to spin-2 gravitons, not spin-1 particles like electromagnetism. Spin-1 particle exchange between similar-sign gravitational charges (e.g. two masses) would cause repulsion; since attraction is what we observe, you supposedly need spin-2 to make gravity attractive between two masses.  The flaw here is that – while the surrounding universe is electrically neutral for electromagnetic charges (equal positive and negative charges) – it definitely can’t be ignored this way for gravity.  Basically you have an immense amount of mass surrounding your apple and the Earth, which should be exchanging gravitons with them both.  This can lead to spin-1 gravitons producing gravity by pushing together masses that are small compared to the mass of the surrounding universe.  (Feynman points out in “The Feynman Lectures on Gravitation”, page 30, that gravitons are not necessarily spin-2.) This is what I find to be the case, resulting in falsifiable predictions that have been checked: https://nige.wordpress.com/2008/01/30/book/

5. General relativity can result in a wide range of metrics depending on what assumptions you make in order to derive those metrics: it’s an endlessly adjustable speculative cosmological model that can model flat and curved universes, and in fact with appropriate ad hoc amounts of dark energy and dark matter it can model anything from endless expansion to collapse.

6. The speculative aspect of general relativity was explained even better by Einstein himself, who stated:

‘I consider it quite possible that physics cannot be based on the [smooth geometric] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air …’ – Albert Einstein in a letter to friend Michel Besso, 1954.

**********

The anonymous commenter then followed Dr Chris Oakley’s approach by ignoring the physics in my comment and writing personal abuse, to which I replied (probably my final reply, since it’s not much use when my comments are ignored and personal abuse is made instead):

No, you are both 1) and 2), because you are confused about the “definition” of the Hubble constant.

(a) Age of universe t = 1/H implies H = 1/t, so the rate of change of H is:

dH/dt = d(1/t)/dt = – 1/t^2

(b) But in spacetime T:

dH/dT = 0

So it’s fundamentally dishonest of you to muddle up the two times!

No amount of confusion and insults against me will make dH/dt = dH/dT. They are not the same. There is not only one possible definition of the Hubble parameter: if you’re dealing with Friedmann’s obsolete law then you have a varying H, but if you’re dealing with real physics in real spacetime, you have a constant H.

This is because Hubble’s observation was that there is an unvarying ratio v/r = H; it follows that H is defined as a constant, where r = cT, T being time past (using capital T for time past to distinguish it from time after the big bang). Hubble’s law states:

v/r = v/(cT) = H

= constant regardless of the value of T, because v increases in direct proportion to T, keeping the observed H in spacetime constant! You still can’t grasp this, and you try to confuse it with Friedmann’s irrelevant (non-spacetime) absolute time since the big bang, where H does vary.

Differentiate and you will find that dH/dT = 0 because H is constant as observed in spacetime!

So you’re confusing time after the big bang, from your obsolete cosmology notes, with spacetime in the Hubble law. The time after the big bang is not what we can observe. Please read again what I quoted (maybe on the original blog thread here) from what Minkowski wrote in 1908: we have to base physics on spacetime, where distance is proportional to time past because of the speed of light.

Really, for you and others to ignore this and make personal comments based on ignorance about my background, or claims that I am being dishonest, is not helpful to the physics! Please understand that the time used in the Friedmann et al. metric is not directly observed! Physics calculations in this case need to be based on observables! Spacetime (i.e. where distance r is related to time past T by r = cT) is observed, and in spacetime the Hubble parameter H is constant because it is the ratio of (velocity)/(radial distance from us, or time past). You seem to be completely confused about this.

Physics needs to build upon facts, not speculative theories. The non-zero dH/dt you get for the Hubble constant varying with absolute time after big bang doesn’t come into what I’m calculating at all, because in spacetime the further the distance, the earlier after the big bang you are seeing. All effects such as light and gravitons will travel to us at velocity c, so in calculating effects we need to treat the physical universe as we observe it, i.e. with time past varying with observable distance. This gives us an effective acceleration for predicting physical facts. The dishonesty and confusion come from the mainstream, I fear.

What your comments – which contain no relevant physics and are just personal attacks based on ignorance – do is to distract attention from the important quantitative success in which the acceleration of the universe was accurately predicted in 1996, years before observation! The kind of confusion you have is not helpful to physics. It’s dishonest to make such comments if you are so confused about the basics. That said, there is a lot of confusion around!

***

SomeRandomGuy,

Instead of apologising for your insults, and admitting that you are totally confused and got it wrong, you again just ignore my message and ask a question.

The graph shows that observable distance (double the distance and you’re seeing twice as far back in time) and recession velocity are linearly correlated, i.e. that H = (velocity)/(distance) or H = (velocity)/[(time past)*c] = constant.

Hence, H doesn’t vary because the velocity increases in proportion to distance or to time past. This variation of velocity with time is an effective acceleration.

In my original 1996 eight-page paper predicting the acceleration of the universe, I pointed out Minkowski’s statement that when looking to greater distances we’re seeing the past, and explained that if Hubble had done that in 1929, he would have predicted the acceleration of the universe.

“The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

– Hermann Minkowski, 1908.

Hubble found (velocity)/(distance) = constant (H) with units of 1/(time). If he had noted that in spacetime (distance) = c*(time past), he would have had the option of finding that (velocity)/(time past) = constant with units of acceleration!

If you want to make the expansion rate of the flat universe absolutely clear in terms of time, it is

v = Hr

= (1/t)*(cT)

= cT/t

where T is time past (which VARIES with observed distance), and t is time after big bang in our frame of reference (so it does NOT vary with distance).
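A minimal sketch of this two-times bookkeeping, with an assumed age t and illustrative values of T:

```python
# Sketch of v = cT/t with the two distinct times: t is fixed (time since
# the big bang in our frame), while T (time past) varies with observed
# distance. The age used here is an illustrative assumption.

c = 3.0e8        # m/s
t = 4.3e17       # assumed time since the big bang, s (~13,700 million yr)

def recession_velocity(T):
    """v = cT/t: grows linearly with time past T, at fixed t."""
    return c * T / t

def distance(T):
    """Observed distance in spacetime: r = cT."""
    return c * T

# Doubling T doubles both v and r, so H = v/r = 1/t is unchanged:
H1 = recession_velocity(1.0e17) / distance(1.0e17)
H2 = recession_velocity(2.0e17) / distance(2.0e17)
```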

The confusion of yourself and Chris centres on conflating the two times. It’s pretty obvious that you don’t have any real interest in physics, just quoting irrelevant obsolete speculations from your textbook, trying to confuse things, and then handing out insults instead of apologies when your errors are explained to you. But this behaviour is pretty typical in mainstream physics, which is a religion.

I remember a fruitless discussion with fellow Electronics World features writer Mike Renardson in 1996. He dismissed it for a different reason than you do: the predicted acceleration of the universe, on the order of 10^{-10} ms^{-2}, was extremely small. This prediction, and the related prediction of the gravitational parameter G, was rejected by journals which believed in either a large or a zero cosmological constant on the basis of string theory. When it was discovered to be a correct prediction in 1998, people still tried to ignore it and claimed that the small cosmological acceleration of the universe is a “mystery”!

I remember correspondence in which I answered all kinds of concerns Renardson had with the physics in 1996, and then he sent a reply stating “do you really expect me to start believing an unorthodox theory?” I think that this says it all: mainstream physics is a belief-based system, and people need more than factual discussions to convince them of anything new. They need authority as in the stamp of officialdom. Science is supposed to be a matter of facts, but in reality it’s a matter of politics: fame, money, popularity and groupthink. As Chris Oakley wrote above, innovations in physics need sponsorship. Facts don’t speak for themselves. Or people refuse to listen to them unless they come from authority figures, the orthodoxy of religion.

***

“Right – so if I see a car travelling at 30 kph 30 meters away from me, and one travelling 60 kph 60 metres away from me then it must mean that the nearer one is accelerating and will be travelling at 60 kph when it is 60 meters away – ? I suppose that it is possible … but a much simpler “explanation” is that neither of them is accelerating, but started at the same place (where I am standing) and set off at different speeds. AFAIK we cannot measure galactic acceleration directly so I don’t know how I would prove you wrong. But it certainly is not the most natural explanation.” – Chris comment #37

You’re neglecting the time delay in light coming from more distant objects, which is the whole point for the case of cosmology. For cars, the distances are small enough that the delay time in information coming to you is trivial.
For galaxies billions of light years away, you’re seeing them as they were in the past. In this case, as Minkowski states, you have to accept that seeing an object at distance r is the same thing as looking back in time by r/c seconds.

Two things are varying as you look to greater distances: distance and time. The car analogy ignores the variation in time, which is trivial at such small distances. But in cosmology this variation in time past is not trivial, and if we differentiate the Hubble velocity correctly we get the acceleration

a = dv/dt = d(Hr)/dt = rH^2

which is an accurate prediction made in 1996. It was confirmed. There is no speculation involved in spacetime, the plot of recession velocity versus distance r (or time past T = r/c), or the rules of differentiation. There is also no speculation involved in dH/dT = 0 for the case of looking to greater distances, because the Hubble constant is a constant when we look to bigger distances: v/r is constant because top and bottom are proportional. The Hubble constant is only not a constant when you get away from spacetime and just consider a variation in absolute time. So there is no speculation in predicting the acceleration.

It’s simple physics and mathematics all the way!
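For scale, here is a quick numerical check (using an assumed, illustrative H of about 2.3*10^{-18} s^{-1}) that a = rH^2, evaluated at the horizon r = c/H, comes out at the ~10^{-10} m/s^2 order claimed:

```python
# Order-of-magnitude check: a = r * H^2 evaluated at the horizon r = c/H
# equals c*H, which is ~1e-10 m/s^2 for an assumed H.

c = 3.0e8            # m/s
H = 2.3e-18          # assumed Hubble parameter, 1/s

r_horizon = c / H            # horizon radius, m
a = r_horizon * H**2         # claimed cosmological acceleration, m/s^2
```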

***

“a = a_0 t^k, H = k t^(k-2), so k = 2/3 implies H ~ 1/t^(4/3).” – SomeRandomGuy

If the universe’s horizon radius increases as

R ~ t^(2/3), [Equation 1]

then

v = dR/dt = (2/3)t^(-1/3) [Equation 2]

Now, since H = v/R, using Equations 1 and 2 we get:

H = v/R = [(2/3)t^(-1/3)]/[t^(2/3)]

= (2/3)/t

This is my result, H = (2/3)/t. Your result H ~ 1/t^(4/3) is just plain wrong, and I’ve no interest in trying to help you find out why since you are rude and ignorant of physics.
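The disputed algebra is easy to check with a central finite difference; R(t) = t^(2/3) with the constant of proportionality dropped, since it cancels in H = (dR/dt)/R:

```python
# Central-difference check that R(t) = t**(2/3) gives H = (dR/dt)/R = (2/3)/t,
# not 1/t**(4/3).

def R(t):
    return t ** (2.0 / 3.0)

t = 10.0
h = 1.0e-6
dR_dt = (R(t + h) - R(t - h)) / (2.0 * h)   # numerical derivative
H = dR_dt / R(t)
expected = (2.0 / 3.0) / t                  # = 2/(3t)
```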

***

“You do realize, of course, that this means the observed expansion would be geometrically like any explosion in 3D, with a central point where it all started? And that since we observe the same rate of expansion in all directions, we would have to be located at that point, i.e. at the center of the universe, for your picture to work?” – SomeRandomGuy

What I realise is that science is not about prejudice, it’s about facts and making predictions that are subsequently confirmed by observations. If you have a theory based entirely on observed facts that has made checkable unique predictions that have been confirmed, that theory may be correct.

Regarding our place in the universe, I refer you to the largest anisotropy (the cosine variation in the sky) in the microwave background radiation.

In the May 1978 issue of Scientific American (vol. 238, p. 64-74), R. A. Muller of the University of California, Berkeley, published an article about this, titled “The cosmic background radiation and the new aether drift”, stating:

“U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”

Most of this 600 km/s velocity is due to our galaxy, the Milky Way, being locally attracted to the larger galaxy Andromeda, so it may be an upper limit on the average speed of the Milky Way’s mass motion. Now suppose the universe were more like Dr Chris Oakley’s explosion than the curved, boundless geometry of mainstream general relativity: after all, the universe is observed to be flat, and in any case curvature appears to be a classical approximation to a large number of graviton interactions, if quantum gravity is correct.

Distance is the product of velocity and time, and if we multiply 600 km/s by the age of the universe, we find that the matter in the Milky Way would have moved only about 0.2% of the horizon radius of the universe in 13,700 million years.
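Since the horizon radius is taken as c times the age and the distance moved is v times the age, the fraction reduces to v/c; a one-line check, with the figures quoted in the text:

```python
# The horizon radius is c * age and the distance moved is v * age, so the
# fraction is simply v/c. Figures as quoted in the text.

v = 600.0e3          # Milky Way speed from the CMB dipole, m/s
c = 299792458.0      # speed of light, m/s

fraction = v / c     # ~0.002, i.e. about 0.2% of the horizon radius
```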

If the average speed was less than 600 km/s, the Milky Way would be even closer to the centre of the universe. So it doesn’t pay you to be biased either way. There are lots of problems with multiplying the 600 km/s speed deduced from the major anisotropy in the cosmic microwave background by the age of the universe to obtain our distance from the “centre” or “origin” of the universe. But they can probably all be overcome. Muller’s argument in 1978 that this is a “new aether drift” didn’t catch on, because of relativity. I don’t want to argue about speculations.

I’m mentioning this just as a counter argument to you. I made a fact based prediction that was subsequently confirmed. You then make a comment saying that if it is right it implies we’re at the middle of the universe. Well, I don’t care where we are, only that the prediction works. Copernicus is referred to as a defender of science for arguing that we’re not at the centre of the universe. I think science is quite different to any sort of prejudice: science is not about defending speculations that we are or are not here or there. It’s about establishing facts!

***

Further support for the result of my calculation in comment #42 above can be found in Marc Lachièze-Rey’s textbook, “Theoretical and Observational Cosmology”, 1999, p 384, available online at:

http://books.google.co.uk/books?id=Lds_QbApCbgC&pg=PA384&lpg=PA384&dq=t2/3+cosmology&source=web&ots=2MHo_RO-ug&sig=2Kl44rBzVTmCCPikkwmVnwhqEw4&hl=en&sa=X&oi=book_result&resnum=7&ct=result

For horizon radius expansion proportional to t^(2/3), you get H = (2/3)/t.

The same also occurs in Lars Bergström and Ariel Goobar, “Cosmology and Particle Astrophysics”, Springer, 2004, p. 202:

http://books.google.co.uk/books?id=XQBJ2he1Cz4C&pg=PA202&lpg=PA202&dq=t2/3+cosmology&source=web&ots=2BV266Qkyc&sig=NExhOnXefqhTAsh0E_zJFV4Dm_I&hl=en&sa=X&oi=book_result&resnum=9&ct=result

“For a flat, matter dominated FLRW [Friedmann-Lemaître-Robertson-Walker] model, a(t) ~ t^(2/3), (da/dt)/a [this (da/dt)/a = H, since a is radius here] = 2/(3t) …”

I’m very busy now and I hope that no more insulting rubbish from the totally ignorant pseudo physicist SomeRandomGuy will appear.  I won’t have time to respond to it any more.  He doesn’t read anything I write anyway, so what’s the point in trying to explain physics to someone like that?

***

Before disappearing back to SQL database ASP programming for good, just one more comment about Chris Oakley’s point in comment 37, that we may be seeing stars with different velocities at different distances because stars started from one point and moved off with different speeds.  This is something I responded to earlier in comment 4 (unfortunately I didn’t turn off italics at one point, so the whole section is in italics as a result).  Eddington discussed that idea in his book “The Expanding Universe”, referring to papers in Nature which discredited it.  The distribution of speeds you need turns out to be contrary to the Maxwell-Boltzmann distribution for a gas like the hydrogen cloud that the universe was soon after the big bang. But let’s assume that it is one possibility.  That possibility predicts no cosmological acceleration!  My argument, a = dv/dt = d(Hr)/dt, does predict cosmological acceleration of the right size, which I think is evidence that it might be right.  I then went on to predict the strength of gravity and other things, again using simple facts.

“As an aside, why are your “spin 1 gravitons” causing repulsion only over cosmological distance, and attraction on all scales at least up to galactic? How does that work, exactly, given that exchange of spin 1 bosons is repulsive for equal charges, and you are using mass as charge? Is the mass of the moon of opposite sign to that of Earth?” – SomeRandomGuy

See https://nige.wordpress.com/2008/01/30/book/ for the answer.  All masses repel one another by exchanging gravitons.  The more mass, the more repulsive charge.  If you have two small masses, two planets or nearby galaxies, they will repel each other slightly, but they’re being pushed together harder by gravitons exchanged with the surrounding universe, involving bigger masses and a convergence of gravitons.

The outward radial acceleration from us of mass m is a = dv/dt = rH^2.  The second law of motion gives outward force for that mass of F = mrH^2.  The third law of motion suggests that there is an equal reaction force, F = mrH^2, directed radially towards us.  This quantifies spin-1 graviton predictions for low energy, where only the simplest Feynman diagram contributes significantly to the result, so the path integral becomes very simple and can be evaluated geometrically as Feynman did for QED (see https://nige.wordpress.com/path-integrals/ for a discussion of how this works).

Now the graviton force F = mrH^2 contains mass m and distance r, so it is trivial for relatively small masses and relatively small distances, but is significant for large masses and/or larger distances.
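A rough magnitude sketch of F = mrH^2 (the masses, distances, and H here are illustrative assumptions, not fitted values):

```python
# Rough magnitudes of the outward force F = m * r * H^2 from the text,
# for an assumed H. Masses and distances are illustrative only.

H = 2.3e-18              # assumed Hubble parameter, 1/s

def outward_force(m, r):
    """Second-law outward force F = m*r*H^2 for mass m (kg) at distance r (m)."""
    return m * r * H**2

F_lab = outward_force(1.0, 1.0)           # 1 kg at 1 m: ~5e-36 N, negligible
F_cosmic = outward_force(1.0e52, 1.0e26)  # rough universe-scale mass and radius
```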

Two nearby masses get pushed together because they exchange gravitons more forcefully with the surrounding universe than with each other.  So the fundamental particles in the Moon are pushed towards the Earth by the repulsion of immense distant masses more strongly than they are repelled by the gravitons exchanged between the Earth and the Moon. This is very slightly like LeSage’s pictorial gravity from the Newtonian era (it was said to have been first proposed by Newton’s friend Fatio, but Newton didn’t like it because at that time it couldn’t be made to work and make checkable predictions), which was generally discredited because:

1. It couldn’t usefully predict anything checkable like G
2. It wasn’t a gauge theory of virtual radiation (graviton) exchange, so the real radiation it postulated as the exchange radiation would cause drag on moving bodies, heat up bodies until they glowed red hot, and would also diffuse into geometric “shadows” to make gravity fall off much faster than the inverse-square law.

http://en.wikipedia.org/wiki/Le_Sage’s_theory_of_gravitation

(I also have a discussion of the errors somewhere on my blog.)

The fact-based calculation you’re talking of differs from the LeSage model in that it’s a gauge theory of quantum gravity, which does make predictions of the cosmological acceleration, G, and other things, and which does not have the defects of LeSage’s theory.

***

SomeRandomGuy,

I don’t have much time for discussions. If you knew cosmology, you’d have known the fact that t^(2/3) expansion leads to H = (2/3)/t.  You don’t know anything about it. Your calculations are irrelevant and wrong.

“The main points are of course unaffected: you are saying that metric theories of gravity in general and GR in particular are wrong, despite all evidence to the contrary, and should not be used to do cosmology, that spacetime is static and we are located at the center of the universe, and that gravity is mediated by “spin 1 gravitons”, which would make it repulsive between masses of equal sign.”

No, I’m not saying that.  You’re saying that.  What I am saying is a sequence of facts:

Fact 1: the universe is accelerating, confirmed by Perlmutter and others since 1998.

Fact 2: the acceleration was predicted by a = dv/dt = d(rH)/dt = rH^2 back in 1996.

Fact 3: the spin-2 graviton idea relies on a path integral including only two masses: so it forces the exchange of gravitons to have the right spin to cause attraction when exchanged.  It makes no falsifiable predictions (string theory of gravitons has a landscape of 10^500 vacua, which can’t be checked).

Fact 4: If you correct the path integral for gravitons so that you include graviton exchange between all the mass-energy (gravitational charge) in the universe, instead of just two masses as Pauli and Fierz did in the 1930s when arguing that gravitons have spin-2, you find that gravitons have spin-1 and the basic graviton interaction is the Feynman diagram of virtual radiation being exchanged, by analogy to radiation scattering off charge.

Fact 5: This predicts the strength of gravity, and other things.

Your statement that I’m saying that all metric theories of gravity are wrong is in error.  What I’m saying are facts. (Quite a few metrics of general relativity are useful under certain conditions, where they approximate the underlying quantum gravity dynamics very well.) I’ve no interest in arguing with time-wasting bigots about whether approximate metrics are wrong or right.  Life is short and what matters are facts, not uncheckable controversies.

***

On the speculative nature of conjectures concerning spin-2 (attractive or ’suck’) gravitons, Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to have spin-2, which in any case has never been observed.

***

http://coraifeartaigh.wordpress.com/2008/09/06/hubble-puzzle-solution/#comment-277

“My solution (simple version): Yes, Hubble’s Law implies that distant galaxies are accelerating away from one another. However, this has nothing to do with the so-called acceleration of the universe. The latter term refers to the observation that the universe expansion has recently speeded up (an acceleration of the universe acceleration above if you like.)” – Dr Cormac O’Raifeartaigh

Thanks for putting the mainstream official case so eloquently. I think that this is wrong for two reasons: first, the universe isn’t “recently” speeding up.  The acceleration is observed at the greatest distances, i.e. the earliest times after the big bang.  Second, there is no evidence that the “dark energy” causing the acceleration of the universe is evolving with time.  Whatever is causing the cosmological acceleration, it is only a very small acceleration, 6*10^{-10} m/s^2 over immense distances, and you need to look to immense distances to detect its effect on recession rates.

Maybe you have Smolin’s book, “The Trouble with Physics”, where Smolin finds that the acceleration of the universe is approximately 6*10^{-10} m/s^2.  Smolin found it a coincidence that a = cH or RH^2. Presumably you do too, despite my derivation in 1996 of this acceleration from the Hubble expansion law, v = HR:  a = dv/dt = d(HR)/dt = RH^2.

In Hubble’s law, H = 1/t, where t is the time since the big bang in the observer’s frame of reference in flat spacetime (with the cosmological acceleration cancelling out gravitational attraction over large distances), and R = cT, where T is time past.

So v = HR = [1/t]*[cT] = cT/t

The whole point of Hubble’s law is that when you look to greater distances R in spacetime, the increase in v is matched by the proportionate increase in R, so v/R = constant = H.  If H = 1/t, then when looking out in spacetime, the fact that H is constant makes t constant as well.

So t is not a variable in spacetime!  Two variables in the equation v = cT/t are v and T.  Hence, in spacetime, a = dv/dT = d(cT/t)/dT = c/t = cH = 6*10^{-10} ms^{-2}.
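As a sanity check on the magnitude, taking an assumed age of 13,700 million years:

```python
# Magnitude check of a = c/t = cH for an assumed age of the universe.

c = 3.0e8                        # m/s
t = 13.7e9 * 3.156e7             # 13,700 million years in seconds (~4.3e17 s)

a = c / t                        # ~7e-10 m/s^2, the order quoted in the text
```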

This is physically and mathematically legitimate, and makes an accurate prediction.  The only objections I’ve ever received have been based on errors, misunderstandings, or the obnoxious but prevalent idea that physics is mainstream orthodoxy and that any new developments based on a deeper understanding of the basics must automatically be dismissed as wrong.

This is a very tiny acceleration and is therefore only observable over immense distances.  Perlmutter’s group back in 1999 came up with a computer program to detect the supernova signatures automatically from CCD-equipped telescopes, which was an innovation.

The acceleration is only observable over vast distances, corresponding to relatively short times after the big bang.  Therefore I don’t think that you can claim that this acceleration is “recent”.

In spacetime you are looking back to earlier times with bigger distances.  In the time taken for light to travel from a distant star to you, the star will presumably have receded a further distance.  One way to get around the two distance scales is through spacetime, using the travel time of light to measure how far away things are.  If you stop thinking about distances and think about times past instead, then the velocity-distance relationship of Hubble becomes a velocity-time relationship.  The funny thing is that the maths predicts the correctly observed cosmological acceleration.

“My solution (more sophisticated version): … Relativity tells us that that the expansion of the universe is an expansion of space-time (or space expanding as time unfolds). Hence, the common ‘explosion-picture’ of galaxies rushing away from one fixed point is simply wrong. Instead, space itself is expanding and this expansion has a scale factor. The recent evidence of ’acceleration’ simply suggests that the scale factor has increased in the last few million years. (This is a surprise because most cosmologists expected the expansion to slow down, if anything, due to gravitational effects)….” – Dr Cormac O’Raifeartaigh

Your sentence:

“The recent evidence of ’acceleration’ simply suggests that the scale factor has increased in the last few million years.”

I’m worried that your “few million years” timescale is not consistent with Perlmutter’s 1998 discovery of cosmological acceleration, using specifically supernovae at half the age of the universe, i.e. 7,000 million years.

Also, there has been quite a lot of criticism of the concept you mention of “expanding space”:

“Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space, which is utterly empty, to expand? How can ‘nothing’ expand?

” ‘Good question,’ says Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’

“Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept,’ he says. ‘Think of the Universe in a Newtonian way – that is simply, in terms of galaxies exploding away from each other.’ ” – http://www.newscientist.com/article/mg13818693.600-all-you-ever-wanted-to-know-about-the-big-bang—-everyweek-questions-about-the-big-bang-flood-into-the-new-scientist-officesowe-thought-it-was-about-time-to-let-some-experts-loose-on-the-subject-.html

I don’t think that Dr Chris Oakley, SomeRandomGuy, or yourself really grasp this problem.  I think I do after twelve years of battling against ignorance everywhere, although there is always room to improve the communication of facts (although the more forcefully the facts are presented, the more angry the opposition to progress!).

Interesting news about the Higgs mass: a further success!

Above: Dr Tommaso Dorigo’s illustration of the 80 GeV (with −23, +30 GeV standard deviation) preferred Higgs mass in his new post:

‘The above plot is tidy, yet the amount of information that the Gfitter digested to produce it is gigantic. Decades of studies at electron-positron colliders, precision electroweak measurements, W and top mass determinations. Probably of the order of fifty thousand man-years of work, distilled and summarized in a single, useless graph.

‘Jokes aside, the plot does tell us a lot. Let me try to discuss it. The graph shows the variation from its minimum value of the fit chi-squared – the standard quantity describing how well the data agree with the model – as a function of the Higgs boson mass, interpreted as a free parameter. The fit prefers a 80 GeV mass for the Higgs boson, but the range of allowed values is still broad: at 1-sigma, the preferred range is within 57-110 GeV.’

Tommaso goes on to add that the LEP II limit is a minimum Higgs mass of 114 GeV, but that’s based on not observing non-occurring interactions in the form of Higgs field quanta decay routes which would have already shown up in pair-production phenomena at low energies, if (1) the Higgs boson was literally correct and (2) it had an energy below 114 GeV.  See the groupthink massively co-authored paper here: ‘A search for pair produced charged Higgs bosons has been performed in the high energy data collected by DELPHI at LEP with [formula], 172 and 183 GeV. The analysis uses the τντν, [formula] and [formula] final states and a combination of event shape variables, di-jet masses and jet flavour tagging for the separation of a possible signal from the dominant W+W and QCD backgrounds. The number of selected events has been found to be compatible with the expected background. The lower excluded value of the H± mass obtained by varying the H±→ hadrons decay branching ratio has been found to be 56.3 GeV/c^2.’

However, as explained in the previous post, the Standard Model is wrong in the electroweak U(1) x SU(2) groups, and when you correct the error you change the required mass-providing field.  In the absence of an alternative name for the quanta of this field, I’m retaining the name “Higgs boson” when I write about the mass-providing field quanta, but in the model discussed on this blog – which makes falsifiable predictions – the “Higgs boson” plays a different role, and has different properties, from the one in the Standard Model.  For example, when you correct the errors in the electroweak groups of the Standard Model, gravity appears, carried on massless versions of the uncharged weak Z boson in SU(2).  The mixing of gauge bosons and the acquisition of mass by the SU(2) gauge bosons at low energy is very different to that which the Standard Model’s Higgs field provides!  So we rejected the Higgs boson decay models based on the Standard Model which were checked by LEP II.

If we accept the 80 (57-110) GeV Higgs mass in Tommaso’s first diagram, it is exciting because it is a much lower Higgs mass than some previous guesses, and it is in agreement with the theoretical predictions from the model discussed in the previous post (based on the empirical facts of quantum gravity). Notice that the mass model I developed began with an observation by Hans de Vries and Alejandro Rivero in their paper http://arxiv.org/abs/hep-ph/0503104, Evidence for radiative generation of lepton masses, which shows that the Z boson mass is about twice Pi times the 137.0 (or 1/alpha) factor, times the muon mass: 2*Pi*137*105 MeV ~ 91 GeV (an early article about the initial development of this is dated 26 Feb. 2006 at my old blog, but there are still earlier references). This was what I worked on to produce a model which predicts all masses (summarized below without diagrams):

Copy of a comment of mine to Tommaso Dorigo’s blog: http://dorigo.wordpress.com/2008/08/01/new-bounds-for-the-higgs/:

‘The fit prefers a 80 GeV mass for the Higgs boson.’

‘Hi Tommaso, thanks – that’s excellent news! The argument that lepton and hadron masses are quantized, with masses dependent on the weak boson masses, is pretty neat because it agrees with the prediction of my model, in which mass arises from the coupling to fermions of a discrete number of massive Higgs-type bosons, through the polarized vacuum which weakens the mass coupling by a factor of 1/alpha and a geometric factor of an integer multiple of Pi.

However, this scheme requires the Z_0 mass of 91 GeV as the building block, not the 80 GeV mass of the W+/- weak boson. (These two masses are related by the Weinberg mixing angle for the two neutral gauge bosons of U(1) and SU(2).)

The model is pretty simple. The mass of the electron is the Z_0 mass times alpha squared divided by 3*Pi:

91000 MeV *(1/137^2)/(3*Pi) = 0.51 MeV
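This arithmetic is easy to verify (it only checks the stated numbers, with alpha taken as 1/137):

```python
# Arithmetic check of the electron-mass formula quoted above:
# m_e = m_Z * alpha^2 / (3*pi), with alpha taken as 1/137 and m_Z = 91 GeV.
import math

m_Z = 91000.0                          # MeV
alpha = 1.0 / 137.0

m_e = m_Z * alpha ** 2 / (3.0 * math.pi)   # ~0.51 MeV
```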

All other lepton and hadron masses are approximated by the Z_0 mass times alpha times n(N + 1)/(6*Pi):

91000 MeV * (1/137) * n(N+1)/(6*Pi)

= 35n(N+1) MeV

= 105 MeV for the muon (n = 1 lepton, N = 2 Higgs bosons)

= 140 MeV for the pion (n = 2 quarks, N = 1 Higgs boson)

= 490 MeV for kaons (n = 2 quarks, N = 6 Higgs bosons)

= 1785 MeV for tauons (n = 1 lepton, N = 50 Higgs bosons)

The model also holds for other mesons (n = 2 quarks) and baryons (n = 3 quarks); e.g. the eta meson has N=7, while for baryons the relatively stable nucleons have N=8, lambda and sigmas have N=10, xi has N=12, and omega has N=15.
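The stated arithmetic for these (n, N) assignments can be tabulated directly (this only checks the formula’s own numbers, not the physics):

```python
# Tabulation of the mass formula m = 35*n*(N+1) MeV with the (n, N)
# assignments given in the text. This only checks the stated arithmetic.

def mass_mev(n, N):
    """Approximate mass in MeV: 35 * n * (N + 1)."""
    return 35 * n * (N + 1)

masses = {
    "muon":    mass_mev(1, 2),    # 105 MeV
    "pion":    mass_mev(2, 1),    # 140 MeV
    "kaon":    mass_mev(2, 6),    # 490 MeV
    "tauon":   mass_mev(1, 50),   # 1785 MeV
    "nucleon": mass_mev(3, 8),    # 945 MeV (text assigns N = 8 to nucleons)
}
```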

The physical picture of the mechanism involved and of the reasons for the choice of N (Higgs boson) values is as follows.  First, the electron is the most complex particle in terms of vacuum polarization; there is a double polarization (hence alpha squared – see appendix for this) shielding the electron core from the single Higgs type boson which it gains its mass from.

For all other leptons and hadrons, there is single vacuum polarization zone between the electromagnetically charged fermion cores and the massive bosons which give the former their mass.

Instead of the Higgs like bosons giving mass by forming an infinite aether extending throughout the vacuum which mires down moving particles (which is the mainstream picture), what actually occurs is that just a small discrete (integer) number of Higgs like massive bosons become associated with each lepton or hadron; the graviton field in the vacuum then does the job of miring these massive particles and giving them inertia and hence mass.  Gravitons are exchanged between the massive Higgs type bosons, but not between the fermion cores (which just have electromagnetic charge, and no mass). (This is why mass increases and length contracts as a particle moves: it gets hit harder by gravitons being exchanged when it is accelerated, and changes shape to adjust to the asymmetry in graviton exchange due to motion.)

Now the clever bit. Where multiple massive Higgs-like bosons give mass to fermions, they surround the fermion cores at a distance corresponding to the distance of collisions at 80-90 GeV, which is outside the intensely polarized, loop-filled vacuum.  The configuration the Higgs-like bosons take is analogous to the shell structure of the nucleus, or to the shell structure of electrons in an atom.  You get stable configurations as in nuclear physics, with N = 2, 8, and 50 Higgs-like quanta.  These numbers correspond to closed shells in nuclear physics. So when we want to predict the integer number N in the formula above, we can use N = 2, 8, and 50 for relatively stable configurations (closed shells).

E.g., the muon is the most stable particle (longest half-life) after the neutron, and the muon has N = 2 (high stability).  Nucleons are relatively stable because they have N = 8.  And the tauon is relatively stable (forming the last generation of leptons) because it has N = 50 Higgs-like bosons giving it mass.

I’ve checked the model in detail for all particles with lifetimes above 10^{-23} second (the data in my databook). It works well.  Like the periodic table of the elements, there are a few small discrepancies, presumably due to effects analogous to isomers: for unstable particles, a certain percentage has one number of Higgs field quanta, and the remainder has a slightly different number, so the overall average looks like a non-integer; this is analogous to the problem of chlorine having a mass number of 35.5, and there may be further detailed analogies to atomic mass theory in terms of binding energy and related complexities.

Appendix: justification for vacuum polarization shielding by factor of alpha

Heisenberg’s uncertainty principle (momentum-distance form):

ps = h-bar (minimum uncertainty)

For relativistic particles, momentum p = mc, and distance s = ct.

ps = (mc)(ct) = t*mc^2 = tE = h-bar

This is the energy-time form of Heisenberg’s law.

E = h-bar/t
= h-bar*c/s

Putting s = 10^-15 metres into this (i.e. the average distance between nucleons in a nucleus) gives us the predicted energy of the strong nuclear exchange radiation, about 200 MeV. According to Ryder’s Quantum Field Theory, 2nd ed. (Cambridge University Press, 1996, p. 3), this is essentially what Yukawa did in predicting the mass of the pion (140 MeV), which was discovered in 1947 and which causes the attraction of nucleons. In Yukawa’s theory, the strong nuclear binding force is mediated by pion exchange, and the pions have a range dictated by the uncertainty principle, s = h-bar*c/E. He found that the potential energy in this strong force field is proportional to (e^-R/s)/R, where R is the distance of one nucleon from another and s = h-bar*c/E, so the strong force between two nucleons is proportional to (e^-R/s)/R^2, i.e. the usual inverse-square law with an exponential attenuation factor. What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts for massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force faster with distance than the observed inverse-square law of gravity (thus giving the exponential term in the equation (e^-R/s)/R^2). So it’s easy to suggest that the original LeSage gravity mechanism with limited-range massive particles, whose “problem” was the shadows getting filled in by vacuum particles diffusing into them (and cutting off the force) after a mean free path of radiation-radiation interactions, is actually the real mechanism for the pion-mediated strong force. Work energy is force multiplied by the distance moved due to the force, in the direction of the force:
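As a quick sanity check on the arithmetic (an illustrative script, not part of the original argument; the constants are standard CODATA values), putting s = 10^-15 m into E = h-bar*c/s does give roughly 200 MeV:

```python
# Uncertainty-principle estimate of the strong-force exchange energy:
# E = hbar*c/s for s = 1e-15 m (typical nucleon separation in a nucleus).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron-volt

s = 1e-15               # nucleon separation, metres
E_MeV = hbar * c / s / eV / 1e6
print(f"E = {E_MeV:.0f} MeV")  # E = 197 MeV, i.e. "about 200 MeV"
```

The 197 MeV figure is only an order-of-magnitude estimate; Yukawa’s more careful treatment puts the pion at about 140 MeV, consistent with the value quoted above.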

E = Fs = h-bar*c/s

F = h-bar*c/s^2

which is the inverse-square geometric form for force. This derivation is a bit oversimplified, but it allows a quantitative prediction: it predicts a relatively intense force between two unit charges, some 137.036… times the observed (low-energy physics) Coulomb force between two electrons, hence it indicates an electric charge about 137.036 times that observed for the electron. This is the bare-core charge of the electron (the value we would observe for the electron if it wasn’t for the shielding of the core charge by the intervening polarized vacuum veil, which extends out to a radius on the order of 1 femtometre). What is particularly interesting is that it should enable QFT to predict the bare core radius (and the grain-size vacuum energy) for the electron simply by setting the logarithmic running-coupling equation to yield a bare-core electron charge of 137.036 (or 1/alpha) times the value observed in low-energy physics. (The mainstream, and Penrose in his book ‘Road to Reality’, use a false argument that the shielding factor is the square root of alpha instead of alpha. They get the square root of alpha by seeing that the equation for alpha contains the electronic charge squared, and then arguing that the relative charge is proportional to the square root of alpha.  They’re wrong because they’re doing numerology: the gravitational force between two equal fundamental masses is similarly given by an equation which contains the square of that mass, but you can’t deduce from this that in general force is proportional to the square of mass! Newton’s second law tells you the relationship between force and mass is linear. Doing actual formula derivations based on physical mechanisms, as demonstrated above, is a very good way of avoiding such errors, which are all too easy for people who ignore physical dynamics and just muddle around with equations.)
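Numerically, the two rival shielding factors differ by an order of magnitude (an illustrative script, not from the original text; alpha is the standard low-energy fine-structure constant):

```python
# Bare-core charge relative to the observed electron charge under the two
# vacuum-polarization shielding arguments discussed above.
alpha = 1 / 137.035999  # fine-structure constant at low energy

factor_alpha = 1 / alpha         # shielding by alpha: bare charge ~137x observed
factor_sqrt = 1 / alpha ** 0.5   # shielding by sqrt(alpha): bare charge ~11.7x observed
print(f"{factor_alpha:.3f}, {factor_sqrt:.3f}")  # 137.036, 11.706
```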

***************************************************

The comment above was ignored, but Dr Lubos Motl later in the thread of comments made a spurious attack on the Koide formula research by Carl and Kea.  Here is my rebuttal of Lubos’s argument (I haven’t added it as a comment over there yet, as it is long, but if and when I have the time to compress it, I may do so):

“It’s not possible for complicated quantities such as the low-energy lepton masses to be expressed by similar childish formulae because these quantities are obtained by non-integrable RG running differential equations from some high-energy values that are the only ones that have a chance to be of a relatively “analytical” form.” – Lubos Motl, comment #103

Supposedly the dynamics of quantum gravity involves Feynman diagrams in which gravitons are exchanged between Higgs-type massive bosons in the vacuum, which swarm around and give rest mass to particles. In the string theory picture where spin-2 gravitons carry gravitational charge and interact with one another to increase the gravitational coupling at high energy, it is assumed – and then forced to work numerically by adding supersymmetry (supergravity) to the theory – that the gravitational coupling increases very steeply at high energy from its low-energy value and becomes exactly the same as the electromagnetic coupling around the Planck scale. So in that case, particle masses (i.e. gravitational charges) at the highest energy are identical to electromagnetic charges.

Hence, if forces unify at the Planck scale, as forced by string theory’s assumptions about supersymmetry, then mass and electric charge have the same (large) value at the Planck scale, and you can predict the masses of particles at very high energy (bare gravitational charge).

So even if string theory were true, you could predict lepton masses by taking the unified interaction charge (coupling) at the Planck scale and correcting it down to low energy by using the renormalization group. This is what your comment seems to be saying, given that you’re a string theorist.

My issue here is that in standard QED calculations of (say) the magnetic moments of leptons (one of the most precise tests of quantum field theory), both electric charge and mass (gravitational charge) have to be renormalized in the same way, i.e. the bare core values are obtained by multiplying up the low energy values of electric charge and mass by the same factor.

Yet the string theory ideas suggest that the renormalization for mass as gravitational charge will be quite different from that of electric charge. Experimentally, the running coupling for electric charge was confirmed by Levine et al. and published in PRL in 1997: at 90 GeV the electron’s electric charge is 7% higher than at low energy. But there is no 7% increase in mass (gravitational charge or gravitational coupling) when you approach a particle closely in 90 GeV collisions. (Relativistic mass increase has nothing to do with string theory’s prediction that mass gets bigger as you get closer to a particle: that prediction is purely due to assumed interactions of gravitons with one another, via new gravitons creating more and more effective mass at high energy, because spin-2 gravitons have mass.)

The experimentally confirmed electromagnetic running coupling (or increase in electric charge with energy) occurs because vacuum polarization shields less of the core electric charge from you as you get closer to the core (less intervening polarized vacuum). The supposed increase in gravitational coupling (gravitational charge, mass) with energy occurs because spin-2 gravitons are themselves masses which exchange gravitons with one another intensely in relatively strong gravity fields (small distance scales).

If gravity is due to spin-2 gravitons, therefore, the renormalization of gravity would differ from that of electric charge. But in QED, the couplings or relative charges of electromagnetism and mass (gravity) are increased in the same proportion. You would expect the bare-core electromagnetic charge to be either 11.7 or 137 times the measured charge of the electron at low energy (the first factor being 1/(square root of alpha), the second being 1/alpha), depending on the argument you use to derive the polarization shielding factor (comment 38); these suggest unification at the Planck scale (assumed in string theory) and at the black-hole size scale for a fermion, respectively. But if string theory is correct, the bare-core unified charge implies that the gravity coupling/charge/mass must increase by a factor of about 10^40 at the Planck scale.

So there is a disagreement between gravitational charge (mass) renormalization in empirically confirmed QED and in speculative string theory spin-2 graviton and supersymmetry ideas. In QED, renormalization means multiplying low energy masses (as well as electric charges) by a factor like 137 to get bare-core values that give you accurate predictions, but in string theory renormalization suggests that the bare-core gravitational charge (mass) is 10^40 times bigger than the low energy value.

So which is right: is the bare-core mass of a particle something like 137 times the low-energy value, as suggested by empirically-confirmed QED, or is the bare-core mass of a particle 10^40 times the low-energy value, as suggested by non-falsifiable, unconfirmed, unconfirmable spin-2 graviton supersymmetric string theory?

I have independent reasons (see the previous post on this blog) for concluding that string theory is wrong: spin-2 gravitons are a mistaken guess, disproved by considering the factual effects of the exchange of gravitons with masses in the surrounding universe.

Once you get rid of spin-2 gravitons and stick to the fact that mass is given by Higgs-like massive bosons which interact with gravitons and electric charges, giving the latter mass, the problems disappear. Gravitons don’t have to have spin-2 and they don’t have to have mass: spin-1 gravitons without gravitational charge will do the job by being exchanged between masses, pushing them together. The only thing with gravitational charge is the massive Higgs-like field in the vacuum, which gives mass to particles. E.g., photons get deflected by gravity near the sun because gravitons interact with Higgs-type bosons, which in turn interact with photons:

gravitons <-> Higgs-like massive bosons <-> photons and other fundamental particles

As this chain of connections shows, a “gravitational field” can have mass because energy in general relativity is a source of gravitational field, so the energy of a gravitational field has gravitational charge itself. This mass of a “gravitational field” doesn’t imply that gravitons have to have intrinsic mass. The Higgs-like massive bosons of the vacuum give mass to all other particles. A gravitational field consists not only of gravitons but also of Higgs-like bosons. The former don’t have mass, but the latter do.

This explains why string theory’s unification using spin-2 gravitons is flawed. Gravity does not gain strength at high energy from graviton breeding and a 10^40-factor increase in the mass of the field at high energy to “unify” numerically with the other forces. Instead, the physical dynamics for mass (gravitational charge) indicate that it needs to be renormalized in exactly the same way as electric charge is renormalized in QED. The renormalization equations for leptons aren’t as complex as Lubos thinks, however, because perturbative effects are small. (Mass is more complex when you are dealing with strongly interacting hadrons.)

If we discuss electromagnetic charge renormalization of leptons as an analogy to the renormalization of mass for leptons, then renormalization-group corrections to the magnetic moments of leptons can be expressed by a perturbative expansion with an infinite number of terms, but these terms (radiative corrections) are trivial for leptons. The magnetic moment of leptons given by Dirac’s theory is mu = g*e*s/(2m), which for g = 2 and spin s = h-bar/2 equals e*h-bar/(2m) = 1 Bohr magneton (where e is electric charge, s is spin and m is mass). This agrees with the measured magnetic moment of leptons to three significant figures.

As shown by Schwinger in 1948, even when you start including radiative corrections, they take a very simple form. E.g., the first term is the major correction and gives you six significant figures of accuracy:

1 + alpha/(2*Pi)

= 1 + 1/(2*Pi*137.036…)

= 1.00116 Bohr magnetons.

Hence you get a very simple correction factor that gives you a very precise agreement with nature.   [The mainstream treatment obfuscates the physics by treating everything as a lot of physically meaningless symbols, e.g. by modifying Dirac's value of g = 2 to a new value for g equal to twice the corrected magnetic moment of the lepton measured in Bohr magnetons, i.e. g ~ 2 + alpha/Pi = 2 + 1/(137.036... *Pi) = 2.0023...]  There is no evidence for immense complexity in the most important terms for electromagnetic charge and the major radiative correction (masses can’t even be measured as accurately as the magnetic moment of leptons, for technical reasons). Dirac predicted 1 Bohr magneton of magnetic moment for a spin-1/2 fermion with unit electronic charge.
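The arithmetic of the Schwinger correction quoted above is easy to verify (an illustrative script, not part of the original text):

```python
import math

# Dirac moment (1 Bohr magneton) plus Schwinger's 1948 first-order
# radiative correction alpha/(2*pi), and the equivalent g-factor.
alpha = 1 / 137.035999

moment = 1 + alpha / (2 * math.pi)  # magnetic moment in Bohr magnetons
g = 2 * moment                      # conventional g-factor, g = 2 + alpha/pi

print(f"{moment:.5f}")  # 1.00116
print(f"{g:.4f}")       # 2.0023
```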

By analogy to electromagnetic charges (like the intrinsic magnetic dipole moment of leptons), lepton masses (gravitational charges) are expected to be strictly the result of an infinite series of terms, but as with QED, this doesn’t exclude the possibility of simple formulae.

The fact that particles don’t have infinite mass tells us that the perturbative expansion for mass is convergent. If it converges rapidly enough, then the bulk of the mass of a particle will be represented fairly well by a simple formula, just as Dirac’s equation predicted the magnetic moment of a lepton correctly to 3 significant figures (1.00 Bohr magnetons). Yes, there are an infinite number of additional terms for radiative corrections (additional Feynman diagram interactions between the charge and the field, via gauge bosons which can form increasingly rare, increasingly complex spacetime loops of fermionic pair production followed by annihilation), but these terms are trivial in effect because such complexities are extremely unlikely.

The only reason anyone bothers to calculate the complex radiative corrections for many loops in QED is because it’s possible to measure the magnetic moments of leptons to an immense number of significant figures. You can’t measure masses that accurately, so it’s simpler for mass. There’s no physical reason whatsoever to expect that the mass of leptons is going to be correlated by an excessively complicated formula.

There is a solid factual reason why you would expect simple formulae such as the Koide formula to describe lepton masses to a fairly large degree of precision (the precision with which you can measure masses): radiative corrections are trivial for leptons.
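For reference, the Koide relation states that the sum of the charged lepton masses, divided by the square of the sum of the square roots of those masses, equals exactly 2/3. It can be checked in a few lines (an illustrative script; the masses are rounded PDG values in MeV):

```python
import math

# Koide relation: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
# is predicted to be exactly 2/3.
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86  # charged lepton masses, MeV

koide = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"{koide:.4f}")  # 0.6667, i.e. 2/3 to within measurement precision
```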

***************************************************

I’m busy developing a large ASP site with SQL database, but here is a copy of a quick comment to backreaction, just in case it gets deleted for appearing to be slightly off-topic or slightly too long:

http://backreaction.blogspot.com/2008/07/blogging-heads-woit-and-hossenfelder.html

Thanks for posting this discussion. RE: the discussion of how to get around the problems of string promotion, e.g. could string theory have been opposed in a less public way?

I read pacifist history and in virtually every major war in history, the pacifists (both at the time and afterwards) claimed that if one side had tried a bit harder to explain their case in private to the other side, everything could have been solved without open hostility. (Wars were just a gigantic misunderstanding, and if people were less stupid and more talkative regarding issues, there would never be any hostility, you see.)

Every conflict could have been avoided if only one side had surrendered without a fight. The problem is not that they couldn’t communicate but that they didn’t want to agree a surrender peacefully and have other people’s ideas imposed on them “peacefully”. The stronger side gave the weaker side the choice of surrender or war. The weaker side chose war. They actually wanted to try to defend themselves. (It takes two fighters to have a war because peaceful surrender is not called a “war”.) It’s pretty analogous to the situation you’re discussing.

The story of how Dr Woit’s book was censored out from a university press (which would not have promoted it so sensationally) by a string theorist peer-reviewer who took a quotation out of context, changed the words to make it look stupid, and then gave it as an example of Dr Woit’s alleged stupidity in criticising string theory, is something I’m well aware of.

I wrote the opinion page/editorial for the British “Electronics World” magazine issue of (I think) October 2003, criticising string theory censorship tactics, but it brought in abusive letters from pro-string-theory PhD students at Nottingham University. (A Google search showed that they were all students of the same professor.) The letters ignored the arguments and just made personal abuse about my intelligence in asking why so much funding was being given to unproductive areas, so the editor had to censor them out. However, the editor also decided not to commission any more articles on the subject.

It’s human nature that people like Smolin and Woit have different reasons for objecting to string theory, but if you read the books the main reason comes down to the arrogance and abuse from the most outspoken (and thus media-hyped) string theorists to gentle criticism.

They are extremely defensive, to the point of taking any question or scientific criticism as a personal insult, ignoring the science of the question and then making personal insults to the person making the criticism.

This paranoia is a well-known groupthink symptom. You can’t have a discussion which is rational with people who won’t read your evidence and who won’t keep to the science, but who prefer to personally attack those people who are being scientific and looking at nature factually, rather than to develop applications and unifications of many different speculative beliefs that can’t be falsified. Almost exactly the same happens to anyone criticising the government of a despotic regime from within the country: their points are ignored, they are treated as traitors or criminals and are personally attacked. This is called “shooting the messenger”. Bad news is dealt with by attacking those publicising the bad news, instead of tackling the underlying problem.

The problem with string theory, as both Smolin and Woit keep stating in their books, has nothing to do with the failure of string theory after twenty-five years of mainstream research, but is due to the arrogance of string theorists about the subject.

There is no shame in trying to do something and failing. There is only shame if you fail to get a working theory with falsifiable predictions yet keep obfuscating the facts in public hype, claiming you’re on the brink of the theory of everything when actually you have an anthropic landscape of 10^500 vacua for the universe (none of which has even been shown to model the world), and then censoring out critics and alternative theories without even bothering to read any of them. That’s what’s shameful. Not the failure, but the hype and the abuse of science by people who profiteer from failure.

It’s the hype that makes the failure of string theory as a physical science “not even wrong”. Even the AdS/CFT correspondence requires a negative cosmological constant, instead of the observed positive one, so its applicability to the real world is limited to forces where there is attraction rather than repulsion, such as the strong nuclear force. Maybe it’s a useful approximation for calculations of that, but it’s not a falsifiable theory. Epicycles for an earth-centred universe were a “useful approximation” and were mathematically “self-consistent” for a thousand years before being disproved. String theory can’t even be disproved. Again, I’m not hostile to research in string theory (or anything else, because we thankfully live in a free world, where nobody has the right to force others to give up on anything), but the endless hype for mainstream speculations by extremely arrogant and abusive people who also “peer-review” physics journals and censor out alternative ideas really pisses genuine scientists off. (By genuine scientists, I don’t mean those who are the groupies of Witten, or those who think “doing science” is the process of censoring out science without reading it while publishing speculations that can’t be falsified.)

There is a lot that can be done if you look at the empirical facts of quantum gravity (such as the fact that it must satisfy certain empirical criteria of the real world, as confirmed by certain tests of general relativity): you can try to unify those empirical facts with other empirical facts in cosmology. You don’t need to view fundamental physics as seeking to unify speculations. You can instead work on the few empirical facts we actually do have, and get somewhere (falsifiable predictions) from that. However, this is ironically now dismissed as “crackpot” by the string theorists, so certain are they of their own hype of their own unchecked theory of spin-2 gravitons etc.

Update: the comment above was responded to by Bee.  A copy of my response follows:

Hi Bee,

Thanks for your kind response.

“For one … string theorists have after all the same goals as all other physicists that is understanding.”

I fear that you may be too optimistic here. If that assumption were really correct, then there wouldn’t be any problem at all. It simply isn’t. The string theorists do share a goal of understanding speculations and belief systems within the non-falsifiable string theory framework, M-theory.

But are you sure that this amounts to string theorists sharing the same ultimate goals as those who work on alternative ideas? Sure they want results and they want funding, but why are they sticking with a failed framework of ideas that has never worked?

Furthermore, the arrogance of the media-hyped string theorists, who market failure as success, is not a part of physics and generally physicists are not so obnoxious and paranoid about criticisms of the paradigm. If they can’t make falsifiable predictions in other theories, they don’t hype that as a success.

The real problem I’ve seen has included peer-reviewers for the UK Institute of Physics journal Classical and Quantum Gravity who don’t read what they condemn, and who simply dismiss papers about quantum gravity because they aren’t using the game rules of string theory. I submitted a paper to that journal at the suggestion of Dr Bob Lambourne of the O.U., and the editor of Classical and Quantum Gravity was good enough to send the paper for peer-review. This was ten years ago. I really needed that publication. The editor sent me back a photocopy of the peer-reviewer’s report with the name of the reviewer blanked out. It ignored everything in my paper and just went on about the virtues of string theory, which I hadn’t dealt with.

The paper I sent was not concerned with string theory. Yet string theory was used to censor it out. The paper was based entirely on facts, which is extremely hard to do in physics when building a theory that makes falsifiable predictions. The 1996 predictions were confirmed empirically by the discovery of the cosmological acceleration a couple of years later. Yet you can’t publish them, because of string theory!

They’re proposing uncheckable speculations based on a self-consistency between various speculations. The whole framework is critically unconnected to reality, so how can it be defended by saying that they share the seeking for understanding with scientists?

If you want to claim that string theorists share the same goals as other physicists, they may share some common aims and ambitions like getting a lot of citations, getting a lot of funding, etc. But I don’t see how they share any interest in understanding physics. Maybe understanding uncheckable pseudo-physics, but they wouldn’t accept that it is pseudo-physics despite the fact it is not falsifiable. (Even the name physics itself relates to physical things, not to the abstractions of a multiverse, etc.)

“But why wasn’t it possible for example, to address the issue to the APS?”

The American Physical Society, just like the Institute of Physics here in Bristol, is the opposite of a forum for controversy. The members of an institute pay their fees to avoid controversy, which is why journals are peer-reviewed. If they wanted controversy, they could obviously just stop peer-review and let the mistakes in papers be argued over by the readers instead of by peer-reviewers. Clearly, they don’t want that mess in their pages, because a large number of their readers (i.e. membership) are teachers and researchers who don’t have the time to check papers in detail outside their own specialisms.

To teachers of physics (a fairly large proportion of the membership), controversy can be an annoying, time-wasting embarrassment, which looks inelegant and detracts (from the media perspective) from the large body of solid facts in physics which are not controversial.

The committees in charge of the APS and IoP are elected by members, and if they start allowing the venting of hostilities about controversy, they risk losing their positions.

If you think about it, the government doesn’t profit when a newspaper prints a corruption scandal or problem with government policy. The newspaper prints controversy as news solely because people other than government are buying the newspaper, and the news affects those people who are not part of the government.

If the government had total control of the newspapers, then newspapers would end up not publishing so many controversial stories that threatened the popularity of the government.

In other words, with APS and IoP, you have various committees and journal editors in the same buildings, drawing salaries from the same membership revenues.

You can’t expect them to annoy the membership by allowing annoying controversy. They have printed some news of the string theory controversy, but they haven’t printed a real backlash yet (something that in the public eye would effectively “cancel out” the 25 years of string theory hype so far).

If they did that, then there would be extreme anger from very powerful figureheads in physics such as Weinberg et al. (see how Weinberg deals with British academics who oppose Israeli attacks on Palestinians: he conveniently yet falsely labels them anti-semitic, according to the quotation at this page). Editors and committee members and leaders could become embroiled in a terrible row, risking a lot. The string theorists who cause the problems are physicists who behave in a paranoid way, ignoring the real motivations and making up false accusations; these people are politically-astute propagandists who have the media at their beck and call.

You can’t hope to have a sensible conversation with people who are irrational enough to claim that uncheckable multiverse speculations are part of physics.

Best wishes,
Nige