Hawking radiation from black hole electrons has the right radiating power to cause electromagnetic forces; it therefore seems to be the electromagnetic force gauge boson exchange radiation

Here’s a brand new calculation, from an email to Dr Mario Rabinowitz, which seems to confirm the model of gravitation and electromagnetism proposed by Lunsford and others; see the discussion at the top post here.  The very brief outline of the gravity mechanism is:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. There’s outward force F ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

From: Nigel Cook

To: Mario Rabinowitz

Sent: Thursday, March 08, 2007 10:54 PM

Subject: Re: Science is based on the seeking of truth

Dear Mario,

Thank you very much for the information about Kaluza being pre-empted by Nordstrom, http://arxiv.org/PS_cache/physics/pdf/0702/0702221.pdf

I notice that it was only recently (25 Feb) added to arXiv.  Obviously this unification scheme was worked out before the Einstein-Hilbert field equation of general relativity.  It doesn’t make any predictions anyway and as a “unification” is drivel.

My idea of a unification of gravitation and electromagnetism is some theory which predicts the ratio of the forces of gravity and electromagnetism between electrons, protons, etc.

Lunsford has some more abstract language for the problem with a 5-dimensional unification, but I think it amounts to the same thing.  If you add dimensions, there are many ways of interpreting the new metric, including the light wave.  But it achieves nothing physical, explains nothing in mechanistic terms, and predicts nothing, and it has a heavy price: because there are other ways of interpreting an extra dimension, the theory becomes mathematically more complex, instead of becoming simpler.

Edward Witten is making the same sort of claim for M-theory that Kaluza-Klein made in the 1920s.  Witten claims 10/11-d M-theory unifies everything and “predicts” gravity.  But it’s not a real prediction.  It’s just hype.  It just gives censors an excuse to ban people from arXiv, on the false basis that the mainstream theory is already proven correct.

Thank you for the arXiv references to your papers on black holes and gravitational tunnelling.

One thing I’m interested in as regards these areas is Hawking radiation from black holes.  Quarks and electrons have a cross-sectional shielding area equal to the event horizon of a black hole with their mass.

This conclusion comes from comparing two different calculations I did for the gravitational mechanism.  The first calculation is based on a Dirac sea.  This includes an argument that the shielding area needed to completely stop all the pressure from receding masses in the surrounding universe is equal to the total area of those masses.  Hence, the relative proportion of the total inward pressure which is actually shielded is equal to the mass of the shield (say the Earth) divided by the mass of the universe.  An optical-type inverse square law correction is applied for the geometry: the masses in the universe effectively appear to have a smaller area, because the average distance of the masses in the universe is immensely larger than the distance to the middle of the Earth (the effective location of all Earth’s mass, as Newton showed geometrically).

Anyway, this type of calculation (completed in 2004/5) gives the predicted gravity strength G, based on Hubble parameter and density of universe locally.  It doesn’t involve the shielding area per particle.

A second (different) calculation I completed in 2005/6 ends up with a relationship between shielding area (unknown) for a particle of given mass, and G.

If the result of this second calculation is set equal to that of the first calculation, the shielding cross-sectional area per particle is found to be Pi*(2GM/c^2)^2, so the effective radius of a particle is 2GM/c^2, which is the black hole horizon radius.  (Both calculations are at http://quantumfieldtheory.org/Proof.htm which I will have to re-write, as it is the result of ten years of evolving material on an old free website I had, and has never been properly edited.  It has been built up in an entirely ramshackle way by adding bits and pieces, and contains much obsolete material.)

I have not considered Hawking radiation from these black hole sized electrons, because Hawking’s approximations mean his formula doesn’t hold for small masses.

From the Hawking radiation perspective, it is interesting that exchange radiation is being emitted and received by black hole electrons.  I believe this to be the case, because the mainstream uses a size for fundamental particles equal to the Planck scale, which has no physical basis (you can get all sorts of numerology from the dimensional analysis which Planck used), and which is actually a lot bigger than the event horizon radius for an electron mass.
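As a quick check of that size comparison (a sketch with rounded standard constants, using the usual hbar-based definition of the Planck length):

```python
# Compare the Planck length with the black hole event horizon radius for a
# particle of electron mass (rounded standard constants, SI units).
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
m_e  = 9.109e-31    # electron mass, kg

l_planck = (hbar * G / c**3) ** 0.5   # ~1.6e-35 m
r_s      = 2 * G * m_e / c**2         # ~1.35e-57 m

print(f"Planck length: {l_planck:.2e} m")
print(f"2GM/c^2 for an electron: {r_s:.2e} m")
print(f"The Planck length is larger by a factor of {l_planck / r_s:.1e}")  # ~1e22
```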

http://en.wikipedia.org/wiki/Black_hole_thermodynamics#Problem_two states the formula for the effective black-body radiating temperature of a black hole.  The radiation power from a black hole is proportional to the fourth power of absolute temperature, by the Stefan-Boltzmann radiation law, and that wiki page states that the black hole’s radiating temperature is inversely proportional to its mass.

Hence, a black hole with the very small mass of an electron would by Hawking’s formula be expected to have an astronomically high radiating temperature, 1.35*10^53 Kelvin.  You can’t even get the fourth power of that on a standard pocket calculator because it is too big a number, although obviously you just multiply the exponent by 4 to get 10^212 and multiply that by 1.35^4 = 3.32, so (1.35*10^53)^4 = 3.32*10^212.

The radiating power is P/A = sigma *T^4 where sigma = Stefan-Boltzmann constant, 5.6704*10^{-8} W*m^{-2} * K^{-4}.

Hence, P/A = 1.9*10^205 watts/m^2.

The total surface for spherical radiating area is A = 4*Pi*R^2 where R = 2GM/c^2 = 1.351*10^{-57} m, so A = 2.301*10^{-113} m^2.

Hence the Hawking radiating power of the black hole electron is: P = A * sigma * T^4 = 2.301*10^{-113} * 1.9*10^205 = 4*10^92 watts.
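As a quick numerical check of this chain (a sketch in Python, assuming Hawking’s standard formula T = hbar*c^3/(8*Pi*G*M*k_B) holds right down to an electron mass, which is exactly the extrapolation being questioned here):

```python
import math

# Rounded standard constants, SI units.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
k_B, sigma = 1.381e-23, 5.6704e-8   # Boltzmann and Stefan-Boltzmann constants
M = 9.109e-31                       # electron mass, kg

T = hbar * c**3 / (8 * math.pi * G * M * k_B)  # Hawking temperature, ~1.35e53 K
R = 2 * G * M / c**2                           # horizon radius, ~1.35e-57 m
A = 4 * math.pi * R**2                         # radiating area, ~2.3e-113 m^2
P = A * sigma * T**4                           # radiated power, ~4e92 W
print(f"T = {T:.2e} K, A = {A:.2e} m^2, P = {P:.2e} W")
```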

At least the result has suddenly become a number which can be displayed on a pocket calculator.  It is still an immense radiating power.  I’ve no idea whether this is a real figure or not, and I know that Hawking’s argument is supposed to break down on the quantum scale.

But this might be true.  After all, the force of matter receding outward from each point, if my argument is correct, is effectively something like 7*10^43 N.  The inward force is equal to that.  The force of exchange radiation is reflected back the way it came when it reaches the black hole event horizon of a particle.  So you would expect each particle to be radiating energy at an astronomical rate all the time.  Unless spacetime were filled with exchange radiation acting as gauge bosons, there wouldn’t be any inertial force or curvature.

The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

Hence my inward gauge boson calculation F = 7*10^43 N should be given (if Hawking’s formula is right) by the exchange of 4*10^92 watts of energy:

F = 7*10^43 N (my gravity model)

F = 2P/c = 2(4*10^92 watts)/c = 3*10^84 N.

So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of 3*10^84 / [7*10^43] = 4*10^40.

So the Hawking radiation force is the electromagnetic force!  Electromagnetism between fundamental particles is about 10^40 times stronger than gravity!  The exact figure of the ratio depends on whether the comparison is for two electrons, an electron and a proton, or two protons (the Coulomb force is identical in each case, but the ratio varies because of the different masses affecting the gravity force).
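To put numbers on both comparisons (a sketch; the ~4*10^92 W power comes from the check above, and the force-ratio figures use rounded standard constants):

```python
import math

c = 2.998e8
P = 4.3e92                 # Hawking power of an electron-mass black hole, W (above)
F = 2 * P / c              # force of reflected exchange radiation, ~3e84 N
print(f"F = 2P/c = {F:.1e} N; ratio to the 7e43 N inward force: {F / 7e43:.1e}")

# Conventional ratio of the Coulomb force to gravity for the three pairings:
G, e, eps0 = 6.674e-11, 1.602e-19, 8.854e-12
m_e, m_p = 9.109e-31, 1.673e-27
coulomb = e**2 / (4 * math.pi * eps0)   # same numerator for every pairing
for label, m1, m2 in [("e-e", m_e, m_e), ("e-p", m_e, m_p), ("p-p", m_p, m_p)]:
    print(f"{label}: {coulomb / (G * m1 * m2):.1e}")   # ~4e42, ~2e39, ~1e36
```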

So I think that’s the solution to the problem: Hawking radiation is the electromagnetic gauge boson exchange radiation. 

It must be either that or a coincidence.  If it is true, does that mean that the gauge bosons (Hawking radiation quanta) being exchanged are extremely energetic gamma rays?  I know that there is a technical argument that exchange radiation is different from ordinary photons because it has extra polarizations (4 polarizations, versus 2 for a photon), but that might be related to the fact that exchange radiation is passing continually in two directions at once while being exchanged from one particle to another and back again, so you get superposition effects (like the period of overlap when sending two logic pulses down a transmission line at the same time in opposite directions).

I only did this calculation while writing this email.  This is my whole trouble: it takes so long to fit all the bits together properly.  I nearly didn’t bother working through the calculation above, because the figures looked too big to go in my calculator.

Best wishes,

Nigel

—– Original Message —–

From: Mario Rabinowitz 

To: Nigel Cook

Sent: Wednesday, March 07, 2007 11:52 PM

Subject: Science is based on the seeking of truth

Dear Nigel, 

   You covered a lot of material in your letter of 3-6-07, to which I responded a little in my letter of 3-6-07 and am now responding some more.

   I noticed that you mentioned Kaluza in your very interesting site, http://electrogravity.blogspot.com/ .  Since science is based on the seeking of truth, I think acknowledgement of priority must be a high value coin of the realm in our field. Did you know that G. Nordstrom, of Reissner-Nordstrom black hole fame, pre-empted Kaluza’s 1921 paper (done in 1919) by about 7 years?  Three of his papers have been posted in the arXiv by Frank G. Borg, who translated them.
physics/0702221 Title: On the possibility of unifying the electromagnetic and the gravitational fields

Authors: Gunnar Nordström
Journal-ref: Physik. Zeitschr. XV (1914) 504-506

  Since you are interested in D. R. Lunsford’s unification of Gravitation and Electrodynamics, in which he has 3 space & 3 time dimensions, Nordstrom’s work may also be of interest to you. 

  When I was younger, I too wanted to write a book(s).  I have written Chapters for 3 books:

astro-ph/0412101 Black Hole Paradoxes

physics/0503079 Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions

astro-ph/0302469 Consequences of Gravitational Tunneling Radiation.

   So I know how hard it is to do.  Good luck with your book.  Some very prominent people have posted Free Online Books.

      Best regards,
      Mario

 ***************************

Further discussion:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 12:59 AM

Subject: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Nigel,  3-8-07  

   I differ with your conclusion that: “So the Hawking radiation force is the electromagnetic force!”  

   Hawking Radiation is isotropic in space and can be very small or very large depending on whether one is dealing respectively with a very large or very small black hole.  My Gravitational Tunneling Radiation (GTR) is beamed between a black hole and another body.  In the case of two black holes that are very close, Hawking Radiation is also beamed, and the two radiations produce a similar repulsive force.

  One of my early publications on this was in the Hadronic J. Supplement 16, 125 (2001), arXiv.org/abs/astro-ph/0104055, “Macroscopic Hadronic Little Black Hole Interactions.”  See Eqs. (3.1) and (3.2).  I also discussed this in my Chapter “Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions.”
This is in the book “Progress in Dark Matter Research,” Editor J. Blain, NovaScience Publishers, Inc., N.Y. (2005), pp. 1-66.  It is also in the arXiv:
physics/0503079.  This is calculated in eq. (11.9), p. 26, where I say:
“Thus F = 10^43 N may also be the largest possible repulsive force in nature between two masses.”  I think it is just a coincidence that this is close to the ratio of electrostatic force to gravitational force ~ 10^40 between, as you point out, 2 electrons, an electron and a proton, or 2 protons.  As I point out, my calculation is for Planck Mass ~10^-5 gm LBH, which is the smallest mass one can expect to be able to use in the Hawking and GTR equations.

  Little black holes (LBH) (with Hawking radiation) which may have formed in the early universe can only be short-lived and don’t play the game very long.  As I pointed out over a decade ago, my LBH (with Gravitational Tunneling Radiation) are a different kind of player in the early and later universe, in terms of beaming and much, much greater longevity.  This is now being rediscovered by people who have not referenced my work.  

   One needn’t work through the complicated calculation you did in terms of T^4, etc.  I also found that the quite complicated black hole blackbody expression for the Hawking radiation power can be exactly reduced to

P = G(rho)h-bar/90 where rho is the density of the black hole.  The 90 seems out of place for a fundamental equation.  However the 90 goes away for Gravitational Tunneling Radiation where the radiated power is

P = (2/3)G(rho)h-bar x (transmission probability).  This is in the arXiv: see Eqs. (3) and (4) of my “Black Hole Radiation and Volume Statistical Entropy,” International Journal of Theoretical Physics 45, 851-858 (2006), arXiv.org/abs/physics/0506029

            Best,
            Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Friday, March 09, 2007 10:13 AM

Subject: Re: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Mario,

Thank you for these criticisms, and I agree the gamma rays (Hawking radiation) will be isotropic in the absence of any shield or any motion.  But they are just gamma rays, and suffer from exactly the shielding and the redshift effects I’ve been calculating:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

I don’t see that any theoretical or physical evidence has ever been proposed or found for a Planck mass or Planck length, etc.; they are numerology from dimensional analysis.  You can get all sorts of dimensions.  If Planck had happened to set the “Planck length” as GM/c^2, where M is the electron mass, he would have had a length much smaller than the one he chose (the Planck length), and close to the black hole horizon radius 2GM/c^2.  Planck’s length formula is more complex and lacks any physical explanation: (hG/c^3)^(1/2).  The people who popularise the Planck scale as being fundamental to physics now are mainly string theorists, who clearly are quite willing to believe things without proof (spin-2 gravitons, a 10 dimensional superstring as a brane on 11 dimensional supergravity, supersymmetric bosonic partners for every particle to make forces unify near the Planck scale, all of which are totally unobserved).  It is likely that they skipped courses in the experimental basis of quantum theory, and believe that the hypothetical Planck scale is proved somehow by Planck’s earlier empirically confirmed theory of quantum radiation.

I should have written “So the Hawking radiation force is [similar in strength to] the electromagnetic force!”  However, it seems to be an interesting result.

Thank you again for the comments and I look forward to reading these additional papers you mention.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 7:32 PM

Subject: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Nigel,  

    The conventional wisdom, including that of Stephen Hawking, is that Hawking radiation is mainly by the six kinds of neutrinos, with photons far down the list.  In Hawking Radiation, as a Little Black Hole (LBH) radiates, its mass quickly diminishes and the radiated power goes up (since it is inversely proportional to M^2) until it evaporates away.  In my Gravitational Tunneling Radiation (GTR), the LBH radiation is exponentially lower by the Tunneling Probability (more correctly the transmission probability), and the LBH live much longer.  The radiation tunnels through the gravitational potential energy barrier between a black hole (BH) and other bodies.  If there is a nearby body, the radiation is beamed, predominantly between the BH and the nearby body.  Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.

   Now I’ll respond to more of the points you raised in your letter of 3-6-07.   You said “I don’t quite see why you state the time-dependent form of Schroedinger’s wave function, which is more complicated than the time-independent form that generally is adequate for describing a hydrogen atom.  Maybe you must use the most rigorous derivation available mathematically?  However, it does make the mathematics relatively complicated and this makes the underlying physics harder for me to grasp.”  

  This was in reference to my paper, “Deterrents to a Theory of Quantum Gravity,” http://arxiv.org/abs/physics/0608193 accepted for publication in the International Journal of Theoretical Physics.  This paper goes the next important step following my earlier paper “A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle,” http://arxiv.org/abs/physics/0601218 published in Concepts of Physics.  

   Einstein’s General Relativity (EGR) is founded on the Strong Equivalence Principle (SEP), which states that locally a gravitational field and an accelerating frame are equivalent.  Einstein was motivated by the Weak Equivalence Principle (WEP), which states that gravitational mass is equivalent to inertial mass.  The SEP implies the WEP.  In my earlier paper, I showed that Quantum Mechanics (QM) violates the WEP and thus violates the SEP: since if A implies B, then (not B) implies (not A).  This demonstrated an indirect violation of the SEP.

  In the second paper I went a step further, and showed a direct violation of the SEP.  It was necessary for full generality to deal with the time-dependent form of Schroedinger’s equation. Since the relativistic Klein-Gordon and Dirac equations reduce to the Schroedinger equation, my conclusion also holds for them.  In addition to showing this violation theoretically, I also referenced experimental evidence that indicates that the equivalence principle is violated in the quantum domain.

   Also in your letter of 3-6-07, you mentioned Feynman’s book on QED.  I greatly enjoyed reading Feynman’s little book on QED.  Did you notice that he allows the speed of light to exceed c?

      Best,
      Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Monday, March 12, 2007 11:36 AM

Subject: Re: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Mario,

“…Hawking radiation is mainly by the six kinds of neutrinos with photons far down the list.”

The neutrinos are going to interact far less with matter than photons, so they aren’t of interest as gauge bosons for gravity unless the ratio of neutrinos to gamma radiation in Hawking radiation is really astronomical in size.

“Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”

Maybe you are accounting for something fictitious here, which is unfortunate.  I think I mentioned, the universe isn’t accelerating in the dark energy sense; the error there is the assumption that G is constant regardless of the redshift of gauge boson radiation between receding masses (i.e., any distant masses in this particular expanding universe we inhabit).  Clearly this is the error: G does fall if the two masses are receding relativistically, because the exchanged gauge bosons are redshifted.

It would violate conservation of energy for gauge bosons exchanged between receding masses not to be reduced in energy when received.  Correct this error in the application of GR to the big bang, and the cosmological constant and its associated dark energy vanish.  See https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/

What is interesting, however, is that in spacetime the universe IS accelerating, although this is just the correct way of interpreting the Hubble law, which should be written in the observable form: (recession velocity) is proportional to (time past).  We can unambiguously measure and state what the recession velocity is as a function of time past, which makes the recession a kind of acceleration seen in spacetime; see https://nige.wordpress.com/2007/03/01/a-checkably-correct-theory-of-yang-mills-quantum-gravity/

The gauge boson exchange radiation in a finite universe will contribute to the expansion of the universe, just as the pressure due to molecular impacts in a balloon causes the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.

Obviously, at early times in the universe, the expansion rate would be much higher, because in addition to gauge boson exchange pressure there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.

The mechanism for gravity suggested at http://quantumfieldtheory.org/Proof.htm shows that G, in addition to depending on the recession of the masses (gravitational charges), depends on the time since the big bang, increasing in proportion to time.

My mechanism gives the same basic relationship as Louise Riofrio’s equation and yours, something like GM = {dimensionless constant}*tc^3.  (See https://nige.wordpress.com/2007/01/09/rabinowitz-and-quantum-gravity/ and links to earlier posts.)
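As a rough check of this relationship (a sketch with my own illustrative assumptions: critical density, a Hubble-radius sphere, H = 70 km/s/Mpc, and t = 1/H; order of magnitude only):

```python
import math

G, c = 6.674e-11, 2.998e8
H = 70e3 / 3.086e22                       # Hubble parameter in s^-1
rho = 3 * H**2 / (8 * math.pi * G)        # critical density, kg/m^3
M = rho * (4 / 3) * math.pi * (c / H)**3  # mass within the Hubble radius, kg
t = 1 / H                                 # crude age estimate, s
print(f"GM/(t c^3) = {G * M / (t * c**3):.2f}")   # dimensionless, ~0.5
```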

Louise has assumed that this equation means that the right hand side is constant, and so the velocity of light decreases inversely as the cube-root of the age of the universe.

However, by the mechanism I gave, the velocity of light is constant with age of universe, but instead G increases in direct proportion to age of universe.  (This doesn’t cause the sun’s brightness to vary or the big bang fusion rate to vary at all, because fusion depends on gravitational compression offsetting Coulomb repulsion of protons so that protons approach close enough to be fused by the strong force.  Hence, if you vary G, you don’t affect fusion rates in the big bang or in stars, because the mechanism unifies gravity and the standard model so all forces vary in exactly the same way; a rise in G doesn’t increase the fusion rate because it is accompanied by a rise in Coulomb repulsion which offsets the effect of rising G.)

The smaller value of G at earlier times in the universe produces the effects normally attributed to “inflation”, without requiring the complexity of inflation.  The smoothness (small size of ripples) of the cosmic background radiation is due to the lower value of G at times up to the time of emission of that radiation, 300,000 years.  A list of other predictions is included at http://quantumfieldtheory.org/Proof.htm but it needs expansion and updating, plus some re-writing.

Thank you very much for your further explanation of your work on the violation of the equivalence principle by quantum gravity.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Monday, March 12, 2007 12:40 PM

Subject: what is the presently claimed acceleration of the universe?

Dear Nigel,   

 Sean M. Carroll takes the approach that one way to account for the acceleration of the universe is to modify general relativity, rather than introducing dark energy.  His published paper is also in the ArXiv.  
astro-ph/0607458
Modified-Source Gravity and Cosmological Structure Formation
Authors: Sean M. Carroll, Ignacy Sawicki, Alessandra Silvestri, Mark Trodden
Comments: 22 pages, 6 figures, uses iopart style
Journal-ref: New J.Phys. 8 (2006) 323
 
 In your letter of 3-12-07 you say:
“The gauge boson exchange radiation in a finite universe will contribute to the expansion of the universe, just as the pressure due to molecular impacts in a balloon causes the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.
   Obviously, at early times in the universe, the expansion rate would be much higher, because in addition to gauge boson exchange pressure there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.”

  This seems inconsistent with your criticism in this letter of my statement:
MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”
NC:  “Maybe you are accounting for something fictitious here, which is unfortunate.”    

  Would you agree with my statement if the word accelerated were deleted so that it would read:  
 MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the expansion of the universe.”

  If so, then I ask how you can be sure that the acceleration is 0 and not just some [small] number?  In fact what is the presently claimed acceleration?

         Best,
         Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Monday, March 12, 2007 1:35 PM

Subject: Re: what is the presently claimed acceleration of the universe?

Dear Mario,

The true acceleration has nothing to do with the “acceleration” which is put into GR (via the cosmological constant) to counter long range gravity.

1. There is acceleration implied by the Hubble recession in spacetime, i.e., a variation of recession velocity with respect to observable time past is an acceleration, a = dv/dt = d(Hr)/dt = Hv, where H is the Hubble parameter and v quickly approaches the limit c at great distances, so a = Hc ~ 6*10^{-10} ms^{-2}.
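Numerically (a sketch assuming H = 70 km/s/Mpc, and the rough ~9*10^52 kg mass of the observable universe from the critical-density estimate above):

```python
H = 70e3 / 3.086e22     # Hubble parameter, s^-1
c = 2.998e8             # m/s
M = 9e52                # rough mass of the observable universe, kg
a = H * c               # spacetime "acceleration" of the recession
print(f"a = Hc = {a:.1e} m/s^2, F = Ma = {M * a:.1e} N")   # ~7e-10, ~6e43
```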

Any contribution to this true acceleration of the universe (i.e., the Hubble law in spacetime) has nothing to do with the fictitious dark energy/cc.

2. The fictional acceleration of the universe is the idea that gravity applies perfectly to the universe, with no reduction due to the redshift of force-mediating exchange radiation caused by the recession of gravitational charges (masses) from one another.  This fictional acceleration is required to make the most appropriate Friedmann-Robertson-Walker solution of GR fit the data, which show that the universe is not slowing down.

In other words, an acceleration is invented by inserting a small positive cosmological constant into GR to make it match observations made by Perlmutter et al., since 1998.  The sole purpose of this fictional acceleration is to cancel out gravity.  In fact, it doesn’t do the job well, because it doesn’t cancel out gravity correctly at all distances.  Hence the recent controversy over “evolving dark energy” (i.e., the need for different values of lambda, the cc, for supernovae at different spacetime distances/times from us.)

For the acceleration see https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/ .  There is no evidence for dark energy.  What is claimed as evidence for dark energy is evidence of no long range gravitational deceleration.  The insistence that the evidence from supernovae is evidence for dark energy is like the insistence of Ptolemy that astronomy is evidence for an earth centred universe, insistence from other faith-based belief systems that combustion and thermodynamics are evidence for caloric and phlogiston, and the insistence of Lord Kelvin that the existence of atoms is evidence for his vortex atom model.  The evidence doesn’t specifically support a small positive cosmological constant, see: http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta

I can’t see why people are so duped by the application of GR to cosmology that they believe it to be perfect, and invoke extra epicycles for any problem, rather than quantum gravity effects like redshift of exchange radiation between receding masses.

Einstein’s 1917 cosmological constant had a very large value, chosen to make gravity become zero at the distance of the average separation between galaxies, and repulsive at greater distances than that.

That fiddle proved wrong.  What is occurring is that the exchange radiation can cause both attractive and repulsive effects at the same time.  The exchange radiation pressure causes the local curvature.  As a whole spacetime is flat and has no observed curvature, so all curvature is local.

The curvature is due to the radial compression of masses, being squeezed by the exchange radiation pressure.  This is similar in mechanism to the Lorentz contraction physically of moving bodies.

If a large number of masses are exchanging radiation in a finite sized universe, they will recoil apart while being individually compressed.  The expansion of the universe and the contraction of gravity are two entirely different things that have the same cause.  There is no contradiction.

The recoil is due to Newton’s 3rd law, and doesn’t prevent the radial compressive force.  In fact, the two are interdependent, you can’t have one without the other.  Gravitation is an aspect of the radial pressure, just a shielding effect:

The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.

Regarding MOND (modified Newtonian dynamics), such ideas are not necessarily mechanistic, checkable or fact based.  They’re usually speculations that to my mind are in the LeSage category – they do not lead anywhere unless you can inject enough factual physics into them to make them real.  Another thing which I’ve been thinking about, in relation to Sean’s writings on his Cosmic Variance group blog, is the problem of religion.

I recall a recent survey which showed that something like 70% of American physicists are religious.  Religion is not a rational belief.  If physicists are irrational with respect to religion, why expect them to be rational about deciding which scientific theories to investigate?  There is a lot of religious or pseudoscientific speculation in physics, as witnessed by the rise of 10/11 dimensional M-theory.  If mainstream physicists decide what to investigate based on irrational belief systems akin to their religion or other prejudices (clearly religion instills one form of prejudice, but there are others, such as mathematical elitism and racism, all based on the wholesale application of ad hominem arguments against all other people who are different in religion or whatever), why expect them to do anything other than end up in blind alleys like epicycles, phlogiston, caloric, vortex atoms, mechanical aether, M-theory?

It’s very unfortunate that probably 99.9% of those who praise Einstein do so for the wrong (metaphysical, religious) reasons, instead of scientific reasons (his early, quickly corrected false claims that clocks run slower at the equator, that there is a massive cosmological constant, that the universe is static, that quantum mechanics is wrong, that the final theory will be purely geometric with no particles, etc., don’t discredit him).  People like to claim Einstein is a kind of religious-scientific figure who didn’t make mistakes and was infallible.  At least that’s the popular image the media feel keen on promoting, and it is totally wrong.  It sends out the message to people that they must not ever make mistakes.  I feel this is an error.  As long as there is some mechanism in place for filtering out errors or correcting them, it doesn’t matter if there are errors.  What does matter is when there is no risk of ever being found in error, as when someone is modelling non-observed spin-2 gravitons interacting with a non-observed 10 dimensional superstring brane on a bulk of non-observed 11 dimensional supergravity.  That matters, because it’s not even wrong.

I prefer investigating physics from Feynman’s idea of ultimate simplicity, and emergent complexity:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

If and when this is ever found to be wrong, it will then make sense to start investigating extra dimensions on the basis that the universe can’t be explained in fewer dimensions.

Best wishes,

Nigel

Aaron’s and Lubos’ anti-Loop Quantum Gravity propaganda

There is some discussion of the breakdown of special relativity at Steinn Sigurðsson‘s blog here, which is mentioned by Louise Riofrio here.  Several string theorists, including Aaron Bergman and Lubos Motl, have savagely attacked the replacement theory for special relativity, which is termed ‘doubly special relativity’, because they misunderstand the physical basis of the theory and ignore the supporting evidence for its predictions.

Professor Lee Smolin explains why Lorentz invariance breaks down at, say, the Planck scale in his book The Trouble with Physics.  Simply put, in loop quantum gravity spacetime is composed of particles with some ultimate, absolute grain size, such as the Planck scale of length (a distance on the order of 10^-35 metre), which is totally independent of, and therefore in violation of, Lorentz contraction.  Hence, special relativity must break down at very small scales: the ultimate grain size is absolute.  Doubly special relativity is any scheme whereby you retain special relativity for large distance scales, but lose it for small ones.  Because you need higher energy to bring particles closer in collisions, small distance scales are for practical purposes in physics equivalent to high energy collisions.  So the failure of Lorentz invariance occurs at very small distances and at correspondingly high energy scales.

Doubly special relativity was applied by Giovanni Amelino-Camelia in 2001 to explain why some cosmic rays have been detected with energies exceeding the limit of special relativity, 5 x 10^19 eV = 8 J (the Greisen-Zatsepin-Kuzmin limit).  So it’s not just the case that LQG makes a speculative prediction of doubly special relativity; there is also experimental evidence validating it!
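For reference, converting that energy into joules and comparing it with the Planck energy that doubly special relativity treats as the invariant scale (a sketch with rounded standard constants):

```python
# GZK cutoff energy in joules, compared with the Planck energy
# E_planck = sqrt(hbar c^5 / G) ~ 1.2e19 GeV.
hbar, c, G, eV = 1.055e-34, 2.998e8, 6.674e-11, 1.602e-19

E_gzk = 5e19 * eV                        # ~8 J
E_planck = (hbar * c**5 / G) ** 0.5      # ~2e9 J
print(f"GZK: {E_gzk:.1f} J, Planck: {E_planck:.2e} J, ratio: {E_gzk / E_planck:.1e}")
```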

Actually, there are quite a lot of indications of this non-Lorentzian behaviour in quantum field theory, even at lower energies, where space does not look quite the same to all observers due to pair production phenomena. For example, on page 85 of their online Introductory Lectures on Quantum Field Theory, Professors Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo explain in http://arxiv.org/abs/hep-th/0510040:

‘In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state. The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.

‘This is precisely the situation when fields are quantized on curved backgrounds. In particular, if the background is time-dependent (as it happens in a cosmological setup or for a collapsing star) different observers will identify different vacuum states. As a consequence what one observer call the vacuum will be full of particles for a different observer. This is precisely what is behind the phenomenon of Hawking radiation.’

Sadly, some string theorists are just unable to face the facts and understand them:

‘… the rules of the so-called “doubly special relativity” (DSR) to transform the energy-momentum vectors are nothing else than the ordinary rules of special relativity translated to awkward variables that parameterize the energy and the momentum.’ – Lubos Motl, http://motls.blogspot.com/2006/02/doubly-special-relativity-is-just.html

‘… Still, I just want to say again: DSR and Lorentz violation just aren’t in any way predictions of LQG.’ – Aaron Bergman, http://scienceblogs.com/catdynamics/2007/03/strings_and_apples.php#comment-364824

Loop quantum gravity (LQG) does quantize spacetime. Smolin makes the point clearly in “The Trouble with Physics” that whatever the spin network grain size in LQG, the grains will have an absolute size scale (such as Planck scale, or whatever).

This fixed grain size contradicts Lorentz invariance, and so you have to modify special relativity to make it compatible with LQG.  Hence, DSR in some form (there are several ways of predicting Lorentz violation at small scales while preserving SR at large scales) is a general prediction of LQG.  String theorists are just looking at the mathematics and ignoring the physical basis, and then complaining that they don’t understand the need for that mathematics.  It’s clear why they have got into such difficulties themselves, theoretically.

STRING. In string theory, it is assumed that the fundamental particles are all vibrating strings of Planck size, and that the various possible vibration modes and frequencies determine the nature of the particle. Nobody can actually ever prove this, because string theory only describes gravity with spin-2 gravitons if there are 11 dimensions, and only describes unification near the Planck scale if there are 10 dimensions (which allows supersymmetry, namely a pairing of an unobserved super boson to every observed fermion, required to make forces unify in the stringy paradigm). The problem is the 6/7 extra dimensions required to make today’s string theory work. The final (but still incomplete in detail) framework of string theory is named M-theory after ‘membrane’, since the 10 dimensional superstring theory is a membrane on 11 dimensional supergravity, analogous to a 2-dimensional bubble surface or membrane on a 3-dimensional bubble volume; the membrane has one dimension fewer than the bulk. To account for why we don’t see the extra dimensions, 6 of them are conveniently curled up in a Calabi-Yau manifold (a massive extension of the old Kaluza-Klein unification of the 1920s, which postulated 5-dimensional spacetime, because the metric including the extra dimension could be interpreted as giving a prediction of the photon). The 6 extra dimensions in the Calabi-Yau manifold can have a ‘landscape’ consisting of as many as 10^1000 different models of particle physics as solutions. It’s now very clear that such a hopelessly vague theory is a Hollywood-hyped religion of groupthink.

LQG. In loop quantum gravity (LQG), however, one possibility (not the only possibility) is that the different particles are supposed to come from the twists of braids of spacetime (see the illustration here, which is based on the paper of Bilson-Thompson, Markopoulou, and Smolin). This theory also contains the speculative Planck scale, but in a different way: the spacetime fabric is assumed to contain a Penrose spin network, and the grain size of this spin network is assumed to be the Planck scale. However, since loop quantum gravity so far does not produce any quantitative predictions, the assumption of the Planck scale is not crucial to the theory. Loop quantum gravity is actually more scientific than string theory, because it at least explains observables using other observables, instead of explaining non-observables (spin-2 graviton and unification near the Planck scale) by way of other non-observables (extra dimensions and supersymmetry). In loop quantum gravity, interactions occur between nodes of the spin network. The summation of all interactions is equivalent to the Feynman path integral, and the result is background independent general relativity (without a metric). The physical theory of gravity is therefore likely to be a variant or extension of loop quantum gravity, rather than anything to do with super-speculative M-theory.

Doubly Special Relativity. The problem Smolin discusses with special relativity and the Planck scale is that distance contracts in the direction of motion in special relativity. Clearly, because the Planck distance scale is a fixed distance independent of velocity, special relativity cannot apply to Planck scale distances. Hence ‘doubly special relativity’ was constructed to allow normal special relativity to work as usual at large distance scales, but to break down as distance approaches the Planck scale, which does not obey the Lorentz transformation. Because the Planck distance is related to the Planck energy (a very high energy, at which forces are assumed by many to unify), this is the same as saying that special relativity breaks down at extremely high energy. The insertion of the Planck scale (as a threshold length or maximum energy) gives rise to ‘doubly special relativity’.

It isn’t just special relativity which is incomplete. Supersymmetry (a 1:1 boson to fermion correspondence for all particles in the universe, introduced just to unify forces at the Planck scale in string theory) also needs to be abandoned because of a failure in quantum field theory. Another example of a problem of incompleteness in modern physics is that in quantum field theory there do not appear to be any proper constraints on conservation of field energy where the charge of the field is varying due to pair polarization phenomena; the correction of this problem will tell us where the energy of the various short-range fields comes from! It is easy to calculate the energy density of an electromagnetic field. Now, quantum field theory and experimental confirmation show that the effective electric charge of an electron is 7% bigger at 92 GeV than at collision energies up to and including 0.511 MeV (this latter energy corresponds to a distance of closest approach in elastic Coulomb scattering of electrons of about 10^-15 m or 1 fm; if we can assume elastic Coulomb-type scattering and ignore inelastic radiation effects, then the energy is inversely proportional to the distance of closest approach).

So the increasing electric charge of the electron as you get closer to the core of the electron poses a problem for energy conservation: where is the energy? Clearly, we know the answer from Dyson’s http://arxiv.org/abs/quant-ph/0608140 page 70 and also Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040 page 85: the electric field creates observable pairs (which annihilate into radiation and so on, causing vacuum ‘loops’ as plotted in spacetime) above a threshold electric field strength of 1.3 x 10^18 V/m. This occurs at a distance on the order of 1 fm from an electron and is similar to the IR cutoff energy of 0.511 MeV Coulomb collisions in quantum field theory.
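That threshold is the Schwinger critical field, E_c = m_e^2*c^3/(e*hbar), which is quick to verify numerically:

```python
m_e, c = 9.109e-31, 2.998e8          # electron mass (kg), speed of light (m/s)
e, hbar = 1.602e-19, 1.055e-34       # elementary charge (C), hbar (J s)
E_c = m_e**2 * c**3 / (e * hbar)     # Schwinger critical field
print(f"E_c = {E_c:.2e} V/m")        # ~1.32e18 V/m
```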

It is clear that stronger electric fields are attenuated by pair production and polarization of these pairs (virtual positrons being closer to the real electron core than the virtual electrons) so that they cause a radial electric field pointing the other way to the particle core’s field. As you get closer to the real electron core, there is less intervening shielding because there are fewer polarized pairs between you and the core. It’s like travelling upwards through thick cloud in an aircraft: the illumination gradually increases, simply since the amount of cloud intervening between you and the sun is diminishing.

Therefore, the pair production and polarization of vacuum loops of virtual charges are absorbing the shielded energy of the electric field out to a distance of 1 fm. The virtual charges are only limited to electrons and positrons at the lowest energy. Higher energies, corresponding to stronger electric field strengths, result in the production of heavier pairs. At a distance closer than 0.005 fm, pairs of virtual muons occur because muons have a rest mass equivalent to Coulomb scattering at 105.6 MeV energy. At still higher energies you get quark pairs forming.
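The scaling of these onset distances follows from the inverse energy-distance relation assumed above, normalised (as in the text) to ~1 fm at the 0.511 MeV IR cutoff; a one-loop sketch:

```python
# Distance of closest approach scales inversely with collision energy,
# normalised to ~1 fm at the 0.511 MeV IR cutoff.
for label, E_MeV in [("electron pairs", 0.511), ("muon pairs", 105.7)]:
    r_fm = 1.0 * 0.511 / E_MeV
    print(f"{label}: ~{r_fm:.3f} fm")   # 1.000 fm and 0.005 fm
```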

It seems that by the pair production and polarization mechanisms, electromagnetic energy is being transferred into the loop energy of virtual particles.  We know experimentally that the strong force charge falls as particle collision energy increases (after the threshold energy for nuclear charge to peak), while the electromagnetic charge increases as particle collision energy increases.  Surely this confirms, in at least a qualitative way, that electromagnetic gauge boson energy is being converted (via the pair production and polarization mechanism) into nuclear force gauge bosons (pions etc. between nucleons, gluons between quarks).

If so, there is no Planck scale unification of standard model forces, because the conservation of gauge boson energy shared between all forces very near a particle core means that the fall in the strong charge is caused by the increase in the electromagnetic charge as you get closer to a particle.  If this is the mechanism for nuclear forces, then although at some energy the strong and electromagnetic charges will happen to coincide, they won’t unify, because as collision energy becomes ever higher, the electromagnetic charge will approach the bare core value.  This implies that there is no energy then being absorbed from the electromagnetic field, and so no energy available for the nuclear charge.  Thus, if this mechanism for nuclear charge is real, at extremely high energies the nuclear charge continues to fall after coinciding with the electromagnetic charge, until the nuclear charge falls to zero where the electromagnetic charge equals the bare core charge.  This discredits stringy supersymmetry, which is based on the assumption that all standard model charges merge into a superforce of one fixed charge value above the grand unification energy.  This supersymmetry is just speculative rubbish, and is disproved by the mechanism.

This mechanism is not speculative: it is based entirely on the accepted, experimentally verified, picture of vacuum polarization shielding the core charge of an electron, plus the empirically based concept that the energy of an electromagnetic field is conserved.

Something has to happen to the field energy lost via charge renormalization.  We know what the energy is used for: pair production of ever more massive (nuclear) particle loops in spacetime.  These virtual particles mediate nuclear forces. 

It should be noted, however, that although you get ever more massive particles being created closer and closer to a fundamental charged particle due to pair production in the intense electric field, the pairs do not cause divergent (ever increasing, instead of decreasing) energy problems for two reasons. Firstly, Heisenberg’s uncertainty principle limits the time that a pair of virtual charges can last: this time is inversely proportional to the energy of the pair. Hence, loops of ever more massive virtual particles closer to a real particle core exist for shorter and shorter intervals of time before they annihilate back into the gauge boson energy of the electromagnetic field. Secondly, there is an upper energy limit (called the UV cutoff) corresponding physically to the coarseness of the background quantum nature of spacetime: observable pairs result as strong electric field energy breaks up the quantized spacetime fabric. The quantized spacetime fabric has a limit to how many particles you will find in a given spatial volume. If you look in a volume too small (smaller than the size of the grain in quantized spacetime) you won’t find anything. So although the mathematical differential equations of quantum field theory show an increasingly strong field creates increasingly high energy pairs, this breaks down at very short distances where there simply aren’t any particles because the spacetime is too small spatially to accommodate them:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

This physical view to explain the cutoffs (i.e., the renormalization of charge in quantum field theory) was championed by the Nobel Laureate Kenneth Wilson, as Professor John Baez explains, paraphrasing Peskin and Schroeder:

‘In Chapter 10 we took the philosophy that the distance cutoff D should be disposed of by taking the limit D -> 0 as quickly as possible. We found that this limit gives well-defined predictions only if the Lagrangian contains no coupling constants with dimensions of length^d with d > 0. From this viewpoint, it seemed exceedingly fortunate that quantum electrodynamics, for example, contained no such coupling constants since otherwise this theory would not yield well-defined predictions.

‘Wilson’s analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson’s arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.’

I have an extensive discussion of the causal physics behind the mathematics of quantum field theory here (also see later posts and this new domain), but the point I want to make here concerns unification.  To me, it is entirely logical that the long range electromagnetic and gravity forces are classical in nature beyond the IR cutoff (i.e., for scattering energies below those required for pair production, or distances from particles of more than 1 fm).  At such long distances, there are no pair production (annihilation-creation) loops in spacetime (see this blog post for a full discussion).  All this implies that the nature of any ‘final theory’ of everything will be causal, with for example:

quantum mechanics = classical physics + mechanisms for chaos.

My understanding is that if you have any orbital system with masses in orbit around a central mass, all of fairly similar size (i.e., no more than an order of magnitude difference between them and the central mass), then classical orbitals disappear and you have chaos.  Hence you might describe the probability of finding a given planet at some distance by some kind of Schroedinger equation.  I think this is a major problem with classical physics; it works only because the planets are all far, far smaller than the mass of the sun.  In an atom, the electric charge is the equivalent of gravitational mass, so the atom is entirely different from the simplicity of the solar system: the fairly similar charges on electrons and nuclei mean that it is going to be chaotic if you have more than one electron in orbit.

There are other issues as well with classical physics which are clearly just down to a lack of physics.  For example, the randomly occurring loops of virtual charges in the strong field around an electron will, when the electron is seen on a small scale, cause the path of the electron to be erratic, by analogy to the drunkard’s-walk Brownian motion of a pollen grain being affected by the random impacts of air molecules.  So: quantum mechanics = classical physics + mechanisms for chaos.

Another mechanism for chaos is Yang-Mills exchange radiation.  Within 1 fm of an electron, the Yang-Mills radiation-caused electric field is so strong that the gauge bosons of electromagnetism, photons, get to produce short lived spacetime loops of virtual charges in the vacuum, which quickly annihilate back into gauge bosons.

But at greater distances, they lack the energy to polarize the vacuum, so the majority of the vacuum (i.e., the vacuum beyond about 1 fm distance from any real fundamental particle) is just a classical-type continuum of exchange radiation which does not involve any chaotic loops at all.

This is partly why general relativity works so well on large scales (quite apart from the fact that planets have small masses compared to the sun): there really is an Einstein-type classical field, a continuum, outside the IR cutoff of QFT.

Of course, on small scales, this exchange of gauge boson radiation causes the weird interactions you get in the double-slit experiment, the path-integrals effect, where a particle seems to be affected by every possible route it could take.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The solar system would be as chaotic as a multi-electron atom if the gravitational charges (masses) of the planets were all the same (as for electrons) and if the sum of the planetary masses equalled the sun’s mass (just as the sum of electron charges is equal to the electric charge of the nucleus). This is the 3+ body problem of classical mechanics (see the toy simulation after the quotation below):

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
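A toy numerical illustration of this (a sketch, not a rigorous chaos calculation; the G = 1 units and initial conditions are my arbitrary choices): two copies of each three-body system are integrated from starting positions differing by one part in 10^9, and the final separation of the trajectories is printed. The light-planet system stays predictable far longer than the comparable-mass system.

```python
import numpy as np

def accelerations(pos, masses):
    """Newtonian gravitational accelerations in G = 1 units."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def evolve(pos, vel, masses, dt=1e-3, steps=20000):
    """Leapfrog (kick-drift-kick) integration of the system."""
    pos, vel = pos.copy(), vel.copy()
    for _ in range(steps):
        vel += 0.5 * dt * accelerations(pos, masses)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos, masses)
    return pos

for label, masses in [("sun + 2 light planets", np.array([1.0, 1e-6, 1e-6])),
                      ("3 comparable masses  ", np.array([1.0, 0.9, 1.1]))]:
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
    vel = np.array([[0.0, 0.0], [0.0, 1.0], [-0.8, 0.0]])
    drift = np.linalg.norm(evolve(pos, vel, masses) -
                           evolve(pos + 1e-9, vel, masses))  # perturbed start
    print(f"{label} trajectory drift: {drift:.2e}")
```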

Obviously Bohr did not know anything about this chaos in classical systems when coming up with the complementarity and correspondence principles in the Copenhagen Interpretation. Nor did even David Bohm, who sought the Holy Grail of a potential which becomes deterministic at large scales and chaotic (due to hidden variables) at small scales.

What is interesting is this: if chaos does produce the statistical effects for multi-body phenomena (atoms with a nucleus and at least two electrons), what produces the interference/chaotic, statistically describable (Schroedinger equation model) phenomena when a single photon has a choice of two slits, or when a single electron orbits a proton in hydrogen?

Quantum field theory phenomena obviously contribute to quantum chaotic effects. The loops of charges spontaneously and randomly appearing around a fermion between the IR and UV cutoffs could cause chaotic deflections of the motion of even a single orbital electron:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.] … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Yang-Mills exchange radiation is what constitutes electromagnetic fields, both those of the electrons in the screen containing the double slits, and those of the actual photon of light itself. Again, consider the remarks of Feynman quoted earlier:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

Comments about further work planned:

Above electroweak unification energy, the electroweak gauge bosons (W+, W-, Z_0 and photon) are all massless and form a simple symmetry (the Z_0 being similar to the photon), but below electroweak unification energy, everything apart from the photon acquires mass from the Higgs field (or some other mechanism!).

This does fit in perfectly with the mass model that predicts (to within a couple of percent error) the masses of leptons and hadrons, from a mechanism whereby mass is usually coupled electromagnetically in quantized units external to the polarized vacuum of a particle.  The case of the electron is indicated to be the odd situation here, involving a double polarization of the vacuum.  The quantized mass-causing particle of the vacuum has a mass equal to the Z_0 mass, 91 GeV.  Since there is evidence (see http://thumbsnap.com/vf/FBeqR0gc.gif ) that the polarization shielding factor of 1/alpha, or 137.036…, reduces the bare charge of QED to the observed charge of the electron beyond 1 fm distance, the electron mass is on the order of 91 GeV/(137^2), ignoring small geometric factors like twice Pi, while the masses of the muon and heavier particles involve only a single vacuum polarization shielding the coupling of charge to quantized mass, so those masses are on the order of 91 GeV/137 (again ignoring small integer and Pi geometry factors).  http://thumbsnap.com/vf/FBeqR0gc.gif contains a very quick summary of how the masses are related.
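The arithmetic of those estimates, as a sketch (the comparison with the observed muon mass shows the size of the “twice Pi” geometric factor mentioned above):

```python
import math

Z_mass = 91e3            # Z_0 mass in MeV
alpha_inv = 137.036      # 1/alpha, the polarization shielding factor

single = Z_mass / alpha_inv        # one polarization:  ~664 MeV (muon-like scale)
double = Z_mass / alpha_inv**2     # two polarizations: ~4.85 MeV (electron-like scale)
print(f"91 GeV / 137   = {single:.1f} MeV")
print(f"91 GeV / 137^2 = {double:.2f} MeV")
# The observed muon mass, 105.7 MeV, corresponds to a geometric factor of:
print(f"single / m_muon = {single / 105.7:.2f} (close to 2*pi = {2 * math.pi:.2f})")
```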

Doubtless there are other factors at work.  The periodic table itself had various anomalies of detail at first: it took a long time to explain mass discrepancies by isotopic abundances, and the theoretical reason for the existence of isotopes of elements was of course only explained when the neutron was discovered, as late as 1932.

There is plenty of evidence that everything may be explained by a causal model, albeit one based on the experimentally well established facts of quantum field theory and general relativity (excluding cosmological constant/dark energy speculation).  It is just a great pity that the mainstream has gone off into speculations, so that, if the causal model is right, there will be some sort of conflict of interest with string theorists before many people take the facts seriously.

I’m planning to publish a free online book which presents this experimental evidence (not the speculative parts and philosophy) for quantum mechanics, quantum field theory including the Standard Model, general relativity, and the big bang (the recession, nucleosynthesis, and cosmic background radiation evidence).