Hawking radiation from black hole electrons has the right radiating power to cause electromagnetic forces; it therefore seems to be the electromagnetic force gauge boson exchange radiation

Here’s a brand new calculation in email to Dr Mario Rabinowitz which seems to confirm the model of gravitation and electromagnetism proposed by Lunsford and others, see discussion at top post here.  The very brief outline for gravity mechanism is:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. There’s outward force F ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

From: Nigel Cook

To: Mario Rabinowitz

Sent: Thursday, March 08, 2007 10:54 PM

Subject: Re: Science is based on the seeking of truth

Dear Mario,

Thank you very much for the information about Kaluza being pre-empted by Nordstrom, http://arxiv.org/PS_cache/physics/pdf/0702/0702221.pdf

I notice that it was only recently (25 Feb) added to arXiv.  Obviously this unification scheme was worked out before the Einstein-Hilbert field equation of general relativity.  It doesn’t make any predictions anyway, and as a “unification” it is drivel.

My idea of a unification between gravity and electromagnetism is a theory which predicts the ratio of the forces of gravity and electromagnetism between electrons, protons, etc.

Lunsford has some more abstract language for the problem with a 5-dimensional unification, but I think it amounts to the same thing.  If you add dimensions, there are many ways of interpreting the new metric, including the light wave.  But it achieves nothing physical, explains nothing in mechanistic terms, predicts nothing, and has a heavy price: there are other ways of interpreting an extra dimension, and the theory becomes mathematically more complex, instead of becoming simpler.

Edward Witten is making the same sort of claim for M-theory that Kaluza-Klein made in the 1920s.  Witten claims 10/11-d M-theory unifies everything and “predicts” gravity.  But it’s not a real prediction.  It’s just hype.  It just gives censors an excuse to ban people from arxiv, on the false basis that the mainstream theory already is proven correct.

Thank you for the arXiv references to your papers on black holes and gravitational tunnelling.

One thing I’m interested in regarding these areas is Hawking radiation from black holes.  Quarks and electrons have a cross-sectional shielding area equal to the event horizon of a black hole with their mass.

This conclusion comes from comparing two different calculations I did for gravitational mechanism.  The first calculation is based on a Dirac sea.  This includes an argument that the shielding area needed to completely stop all the pressure from receding masses in the surrounding universe is equal to the total area of those masses.  Hence, the relative proportion of the total inward pressure which is actually shielded is equal to the mass of the shield (say the Earth) divided by the mass of the universe.  An optical-type inverse square law correction is applied for the geometry, because obviously the masses in the universe effectively appear to have a smaller area because the average distance of the masses in the universe is immensely larger than the distance to the middle of the earth (the effective location of all Earth’s mass, as Newton showed geometrically).

Anyway, this type of calculation (completed in 2004/5) gives the predicted gravity strength G, based on Hubble parameter and density of universe locally.  It doesn’t involve the shielding area per particle.

A second (different) calculation I completed in 2005/6 ends up with a relationship between shielding area (unknown) for a particle of given mass, and G.

If the result of this second calculation is set equal to that of the first calculation, the shielding cross-sectional area per particle is found to be Pi*(2GM/c^2)^2, so the effective radius of a particle is 2GM/c^2, which is the black hole horizon radius.  (Both calculations are at http://quantumfieldtheory.org/Proof.htm which I will have to re-write, as it is the result of ten years of evolving material on an old free website I had, and has never been properly edited.  It has been built up in an entirely ramshackle way by adding bits and pieces, and contains much obsolete material.)

I have not considered Hawking radiation from these black hole sized electrons, because Hawking’s approximations mean his formula doesn’t hold for small masses.

From the Hawking radiation perspective, it is interesting that exchange radiation is being emitted and received by black hole electrons.  I believe this to be the case, because the mainstream uses a size for fundamental particles equal to the Planck scale, which has no physical basis (you can get all sorts of numerology from the dimensional analysis which Planck used), and is actually a lot bigger than the event horizon radius for an electron mass.

http://en.wikipedia.org/wiki/Black_hole_thermodynamics#Problem_two states the formula for a black hole’s effective black-body radiating temperature.  The radiation power from a black hole is proportional to the fourth power of absolute temperature by the Stefan-Boltzmann radiation law.  That wiki page states that the black hole’s radiating temperature is inversely proportional to its mass.

Hence, a black hole with the electron’s tiny mass would by Hawking’s formula be expected to have an astronomically large radiating temperature, 1.35*10^53 Kelvin.  You can’t even get the fourth power of that on a standard pocket calculator because it is too big a number, although obviously you just multiply the exponent by 4 to get 10^212 and multiply that by 1.35^4 = 3.32, so (1.35*10^53)^4 = 3.32*10^212.
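As a numerical cross-check, the temperature quoted above can be reproduced from the standard Hawking formula T = hbar*c^3/(8*pi*G*M*k_B), taking M to be the electron mass (a minimal sketch; the constant values are standard CODATA figures, not taken from the email):

```python
import math

# Standard constants (CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # Boltzmann constant, J/K
m_e  = 9.1093837e-31     # electron mass, kg

# Hawking temperature of a black hole whose mass equals the electron mass
T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)
print(f"T = {T:.3e} K")  # ~1.35e53 K, the figure quoted above
```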

The radiating power is P/A = sigma *T^4 where sigma = Stefan-Boltzmann constant, 5.6704*10^{-8} W*m^{-2} * K^{-4}.

Hence, P/A = 1.9*10^205 watts/m^2.

The total surface for spherical radiating area, A = 4*Pi*R^2 where R = 2GM/c^2 = 1.353*10^{-57} m, so A = 2.30*10^{-113} m^2.

Hence the Hawking radiating power of the black hole electron is: P = A * sigma * T^4 = 2.30*10^{-113} * 1.9*10^205 = 4.3*10^92 watts.

At least the result has suddenly become a number which can be displayed on a pocket calculator.  It is still an immense radiating power.  I’ve no idea whether this is a real figure or not, and I know that Hawking’s argument is supposed to break down on the quantum scale.

But this might be true.  After all, the force of matter receding outward from each point, if my argument is correct, is effectively something like 7*10^43 N.  The inward force is equal to that.  The force of exchange radiation is reflected back the way it came when it reaches the black hole event horizon of a particle.  So you would expect each particle to be radiating energy at an astronomical rate all the time.  Unless spacetime is filled with exchange radiation acting as gauge bosons, there wouldn’t be any inertial force or curvature.

The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

Hence my inward gauge boson calculation F = 7*10^43 N should be given (if Hawking’s formula is right) by the exchange of 3*10^92 watts of energy:

F = 7*10^43 N (my gravity model)

F = 2P/c = 2(4.3*10^92 watts)/c = 2.9*10^84 N.

So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of 2.9*10^84 / [7*10^43] = 4*10^40.
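The whole chain above (Hawking temperature, Stefan-Boltzmann power over the horizon area, force F = 2P/c, and the ratio to the inward force) can be sketched as one script; the 7*10^43 N inward-force figure is the model’s own assumption, not something derived here:

```python
import math

hbar  = 1.054571817e-34  # reduced Planck constant, J*s
c     = 2.99792458e8     # speed of light, m/s
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23     # Boltzmann constant, J/K
sigma = 5.6704e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
m_e   = 9.1093837e-31    # electron mass, kg

T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)   # Hawking temperature
R = 2 * G * m_e / c**2                            # event-horizon radius
A = 4 * math.pi * R**2                            # radiating surface area
P = A * sigma * T**4                              # Stefan-Boltzmann power
F = 2 * P / c                                     # force of reflected radiation

F_gravity_model = 7e43                            # inward force assumed in the text
print(f"P = {P:.2e} W, F = {F:.2e} N, ratio = {F / F_gravity_model:.1e}")
# roughly P ~ 4e92 W, F ~ 3e84 N, ratio ~ 4e40
```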

So the Hawking radiation force is the electromagnetic force!  Electromagnetism between fundamental particles is about 10^40 times stronger than gravity!  The exact figure of the ratio depends on whether the comparison is for two electrons, an electron and a proton, or two protons (the Coulomb force is identical in each case, but the ratio varies because of the different masses affecting the gravity force).

So I think that’s the solution to the problem: Hawking radiation is the electromagnetic gauge boson exchange radiation. 

It must be either that or a coincidence.  If it is true, does that mean that the gauge bosons (Hawking radiation quanta) being exchanged are extremely energetic gamma rays?  I know that there is a technical argument that exchange radiation is different from ordinary photons because it has extra polarizations (4 polarizations, versus 2 for a photon), but that might be related to the fact that exchange radiation is passing continually in two directions at once while being exchanged from one particle to another and back again, so you get superposition effects (like the period of overlap when sending two logic pulses down a transmission line at the same time in opposite directions).

I only did this calculation while writing this email.  This is my whole trouble, it takes so long to fit all the bits together properly.  I nearly didn’t bother working through the calculation above, because the figures looked too big to go in my calculator.

Best wishes,

Nigel

—– Original Message —–

From: Mario Rabinowitz 

To: Nigel Cook

Sent: Wednesday, March 07, 2007 11:52 PM

Subject: Science is based on the seeking of truth

Dear Nigel, 

   You covered a lot of material in your letter of 3-6-07, to which I responded a little in my letter of 3-6-07 and am now responding some more.

   I noticed that you mentioned Kaluza in your very interesting site, http://electrogravity.blogspot.com/ .  Since science is based on the seeking of truth, I think acknowledgement of priority must be a high value coin of the realm in our field. Did you know that G. Nordstrom of the Reissner-Nordstrom black hole fame, pre-empted Kaluza’s 1921 paper (done in 1919) by about 7 years?  Three of his papers have been posted in the arXiv by Frank G. Borg who translated them.
physics/0702221 Title: On the possibility of unifying the electromagnetic and the gravitational fields

Authors: Gunnar Nordstr{ö}m
Journal-ref: Physik. Zeitschr. XV (1914) 504-506

  Since you are interested in D. R. Lunsford’s unification of Gravitation and Electrodynamics, in which he has 3 space & 3 time dimensions, Nordstrom’s work may also be of interest to you. 

  When I was younger, I too wanted to write a book(s).  I have written Chapters for 3 books:

astro-ph/0412101 Black Hole Paradoxes

physics/0503079 Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions

astro-ph/0302469 Consequences of Gravitational Tunneling Radiation.

   So I know how hard it is to do.  Good luck with your book.  Some very prominent people have posted Free Online Books.

      Best regards,
      Mario

 ***************************

Further discussion:

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 12:59 AM

Subject: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Nigel,  3-8-07  

   I differ with your conclusion that: “So the Hawking radiation force is the electromagnetic force!”  

   Hawking Radiation is isotropic in space and can be very small or very large depending on whether one is dealing respectively with a very large or very small black hole.  My Gravitational Tunneling Radiation (GTR) is beamed between a black hole and another body.  In the case of two black holes that are very close, Hawking Radiation is also beamed, and the two radiations produce a similar repulsive force.

  One of my early publications on this was in the Hadronic J. Supplement 16, 125 (2001) arXiv.org/abs/astro-ph/0104055. ” Macroscopic Hadronic Little Black Hole Interactions.”  See Eqs. (3.1) and (3.2).  I also discussed this in my Chapter  “Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions.”
This is in the book: “Progress in Dark Matter Research. Editor J. Blain; NovaScience Publishers, Inc. N.Y., (2005), pp. 1 – 66.  It is also in the ArXiv:
physics/0503079.  This is calculated in eq. (11.9) p.26 where I say:  
“Thus F = 10^43 N may also be the largest possible repulsive force in nature between two masses.”  I think it is just a coincidence that this is close to the ratio of electrostatic force to gravitational force ~ 10^40 between, as you point out, 2 electrons, an electron and a proton, or 2 protons.  As I point out, my calculation is for Planck Mass ~10^-5 gm LBH, which is the smallest mass one can expect to be able to use in the Hawking and GTR equations.

  Little black holes (LBH) (with Hawking radiation) which may result in the early universe can only be short-lived and don’t play the game very long.  As I pointed out over a decade ago, my LBH (with Gravitational Tunneling Radiation) are a different kind of player in the early and later universe in terms of beaming and much, much greater longevity. This is now being rediscovered by people who have not referenced my work.  

   One needn’t do the complicated equation you did in terms of T^4, etc.  I also found that the quite complicated black hole blackbody expression for Hawking radiation power can be exactly reduced to  

P = G(rho)h-bar/90 where rho is the density of the black hole.  The 90 seems out of place for a fundamental equation.  However the 90 goes away for Gravitational Tunneling Radiation where the radiated power is

P= (2/3)G(rho)h-bar x (transmission probability).  This is in the Arxiv.  Eqs. (3) and (4) of my “Black Hole Radiation and Volume Statistical Entropy.” International Journal of Theoretical Physics 45, 851-858 (2006).   arXiv.org/abs/physics/0506029

            Best,
            Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Friday, March 09, 2007 10:13 AM

Subject: Re: I differ with your conclusion “So the Hawking radiation force is the electromagnetic force!”

Dear Mario,

Thank you for these criticisms, and I agree the gamma rays (Hawking radiation) will be isotropic in the absence of any shield or any motion.  But they are just gamma rays, and suffer from exactly the shielding and the redshift effects I’ve been calculating:

‘The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.’

See http://quantumfieldtheory.org/Proof.htm for illustrations.

I don’t see that any theoretical or physical evidence has ever been proposed or found for a Planck mass or Planck length, etc.; they are numerology from dimensional analysis.  You can get all sorts of dimensions.  If Planck had happened to set the “Planck length” as GM/c^2, where M is the electron mass, he would have had a length much smaller than the one he chose (the Planck length), and close to the black hole horizon radius 2GM/c^2.  Planck’s length formula is more complex and lacks any physical explanation: (hG/c^3)^(1/2).  The people who popularise the Planck scale as being fundamental to physics now are mainly string theorists, who clearly are quite willing to believe things without proof (spin-2 gravitons, a 10 dimensional superstring as a brane on 11 dimensional supergravity, supersymmetric bosonic partners for every particle to make forces unify near the Planck scale, all of which are totally unobserved).  It is likely that they skipped courses in the experimental basis of quantum theory, and believe that the hypothetical Planck scale is proved somehow by Planck’s earlier empirically confirmed theory of quantum radiation.

I should have written “So the Hawking radiation force is [similar in strength to] the electromagnetic force!”  However, it seems to be an interesting result.

Thank you again for the comments and I look forward to reading these additional papers you mention.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Friday, March 09, 2007 7:32 PM

Subject: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Nigel,  

    The conventional wisdom, including that of Stephen Hawking, is that Hawking radiation is mainly by the six kinds of neutrinos, with photons far down the list.  In Hawking Radiation, as a Little Black Hole (LBH) radiates, its mass quickly diminishes and the radiated power goes up (since it is inversely proportional to M^2) until it evaporates away.  In my Gravitational Tunneling Radiation (GTR), the LBH radiation is exponentially lower by the Tunneling Probability (more correctly the transmission probability), and the LBH live much longer.  The radiation tunnels through the gravitational potential energy barrier between a black hole (BH) and other bodies.  If there is a nearby body, the radiation is beamed, predominantly between the BH and the nearby body.  Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.

   Now I’ll respond to more of the points you raised in your letter of 3-6-07.   You said “I don’t quite see why you state the time-dependent form of Schroedinger’s wave function, which is more complicated than the time-independent form that generally is adequate for describing a hydrogen atom.  Maybe you must use the most rigorous derivation available mathematically?  However, it does make the mathematics relatively complicated and this makes the underlying physics harder for me to grasp.”  

  This was in reference to my paper, “Deterrents to a Theory of Quantum Gravity,” http://arxiv.org/abs/physics/0608193 accepted for publication in the International Journal of Theoretical Physics.  This paper goes the next important step following my earlier paper “A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle,” http://arxiv.org/abs/physics/0601218 published in Concepts of Physics.  

   Einstein’s General Relativity (EGR) is founded on the Strong Equivalence Principle (SEP) which states that locally a gravitational field and an accelerating frame are equivalent.  Einstein was motivated by the Weak Equivalence Principle (WEP) which states that gravitational mass is equivalent to inertial mass. The SEP implies the WEP.  In my earlier paper, I showed that Quantum Mechanics (QM) violates the WEP and thus violates the SEP.  Since if A implies B, (not B) implies (not A).  This demonstrated an indirect violation of the SEP.

  In the second paper I went a step further, and showed a direct violation of the SEP.  It was necessary for full generality to deal with the time-dependent form of Schroedinger’s equation. Since the relativistic Klein-Gordon and Dirac equations reduce to the Schroedinger equation, my conclusion also holds for them.  In addition to showing this violation theoretically, I also referenced experimental evidence that indicates that the equivalence principle is violated in the quantum domain.

   Also in your letter of 3-6-07, you mentioned Feynman’s book on QED.  I greatly enjoyed reading Feynman’s little book on QED.  Did you notice that he allows the speed of light to exceed c?

      Best,
      Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Monday, March 12, 2007 11:36 AM

Subject: Re: Now I’ll respond to more of the points you raised in your letter of 3-6-07

Dear Mario,

“…Hawking radiation is mainly by the six kinds of neutrinos with photons far down the list.”

The neutrinos are going to interact far less with matter than photons, so they aren’t of interest as gauge bosons for gravity unless the ratio of neutrinos to gamma radiation in Hawking radiation is really astronomical in size.

“Since the radiation force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”

Maybe you are accounting for something fictitious here, which is unfortunate.  I think I mentioned, the universe isn’t accelerating in the dark energy sense; the error there is the assumption that G is constant regardless of the redshift of gauge boson radiation between receding masses (i.e., any distant masses in this particular expanding universe we inhabit).  Clearly this is the error: G does fall if the two masses are receding relativistically, because the exchanged gauge bosons are redshifted.

It would violate conservation of energy for gauge bosons exchanged between receding masses to not be reduced in energy when received.  Correct this error in the application of GR to the big bang, and the cosmological constant with associated dark energy vanish.  See https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/

What is interesting, however, is that in spacetime the universe IS accelerating, although this is simply the correct way of interpreting the Hubble law, which should be written in the observable form: (recession velocity) is proportional to (time past).  We can unambiguously measure and state what the recession velocity is as a function of time past, which makes the recession a kind of acceleration seen in spacetime, see https://nige.wordpress.com/2007/03/01/a-checkably-correct-theory-of-yang-mills-quantum-gravity/

The gauge boson exchange radiation in a finite universe will contribute to expansion of the universe, just as the pressure due to molecular impacts in a balloon causes the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.

Obviously, at early times in the universe, the expansion rate would be much higher because, in addition to gauge boson exchange pressure, there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.

The mechanism for gravity suggested at http://quantumfieldtheory.org/Proof.htm shows that G, in addition to depending on the recession of the masses (gravitational charges) depends on the time since the big bang, increasing in proportion to time.

My mechanism gives the same basic relationship as Louise Riofrio’s and your equation, something like GM  = {dimensionless constant}*tc^3.  (See https://nige.wordpress.com/2007/01/09/rabinowitz-and-quantum-gravity/ and links to earlier posts.)

Louise has assumed that this equation means that the right hand side is constant, and so the velocity of light decreases inversely as the cube-root of the age of the universe.

However, by the mechanism I gave, the velocity of light is constant with age of universe, but instead G increases in direct proportion to age of universe.  (This doesn’t cause the sun’s brightness to vary or the big bang fusion rate to vary at all, because fusion depends on gravitational compression offsetting Coulomb repulsion of protons so that protons approach close enough to be fused by the strong force.  Hence, if you vary G, you don’t affect fusion rates in the big bang or in stars, because the mechanism unifies gravity and the standard model so all forces vary in exactly the same way; a rise in G doesn’t increase the fusion rate because it is accompanied by a rise in Coulomb repulsion which offsets the effect of rising G.)

The smaller value of G at earlier times in the universe produces the effects normally attributed to “inflation”, without requiring the complexity of inflation.  The smoothness (small size of ripples) of the cosmic background radiation is due to the lower value of G at times up to the time of emission of that radiation, 300,000 years.  A list of other predictions is included at http://quantumfieldtheory.org/Proof.htm but it needs expansion and updating, plus some re-writing.

Thank you very much for your further explanation of your work on the violation of the equivalence principle by quantum gravity.

Best wishes,

Nigel

From: Mario Rabinowitz

To: Nigel Cook

Sent: Monday, March 12, 2007 12:40 PM

Subject: what is the presently claimed acceleration of the universe?

Dear Nigel,   

 Sean M. Carroll takes the approach that one way to account for the acceleration of the universe is to modify general relativity, rather than introducing dark energy.  His published paper is also in the ArXiv.  
astro-ph/0607458
Modified-Source Gravity and Cosmological Structure Formation
Authors: Sean M. Carroll, Ignacy Sawicki, Alessandra Silvestri, Mark Trodden
Comments: 22 pages, 6 figures, uses iopart style
Journal-ref: New J.Phys. 8 (2006) 323
 
 In your letter of 3-12-07 you say:
“The gauge boson exchange radiation in a finite universe will contribute to expansion of the universe, just as the pressure due to molecular impacts in a balloon cause the air in the balloon to expand in the absence of the restraining influence of the balloon’s surface material.
   Obviously, at early times in the universe, the expansion rate would be much higher because in addition to gauge boson exchange pressure, there would be direct material pressure due to the gas of hydrogen produced in the big bang expanding under pressure.”

  This seems inconsistent with your criticism in this letter of my statement:
MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the accelerated expansion of the universe.”
NC:  “Maybe you are accounting for something fictitious here, which is unfortunate.”    

  Would you agree with my statement if the word accelerated were deleted so that it would read:  
 MR: “Since the radiation [pressure] force due to GTR is repulsive, it can in principle contribute to the expansion of the universe.”

  If so, then I ask how you can be sure that the acceleration is 0 and not just some [small] number?  In fact what is the presently claimed acceleration?

         Best,
         Mario

From: Nigel Cook

To: Mario Rabinowitz

Sent: Monday, March 12, 2007 1:35 PM

Subject: Re: what is the presently claimed acceleration of the universe?

Dear Mario,

The true acceleration has nothing to do with the “acceleration” which is put into GR (via the cosmological constant) to counter long range gravity.

1. There is acceleration implied by the Hubble recession in spacetime, i.e., a variation of recession velocity with respect to observable times past is acceleration, a = dv/dt = d(Hr)/dt = Hv where H is Hubble parameter and v quickly goes to the limit c at great distances, so a = Hc ~ 6 *10^{-10} ms^{-2}.
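As a quick numerical check of a = Hc (a sketch assuming H ≈ 70 km/s/Mpc, a conventional value not stated in the email):

```python
# a = H*c with an assumed Hubble parameter of 70 km/s/Mpc
c   = 2.99792458e8           # speed of light, m/s
Mpc = 3.0857e22              # metres per megaparsec
H   = 70e3 / Mpc             # Hubble parameter in s^-1
a   = H * c                  # implied acceleration
print(f"a = {a:.1e} m/s^2")  # ~7e-10 m/s^2, same order as the ~6e-10 quoted
```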

Any contribution to this true acceleration of the universe (i.e., the Hubble law in spacetime) has nothing to do with the fictitious dark energy/cc.

2. The fictional acceleration of the universe is the idea that gravity applies perfectly to the universe with no reduction due to the redshift of force-mediating exchange radiation, caused by the recession of gravitational charges (masses) from one another.  This fictional acceleration is required to make the most appropriate Friedmann-Robertson-Walker solution of GR fit the data, which show that the universe is not slowing down.

In other words, an acceleration is invented by inserting a small positive cosmological constant into GR to make it match observations made by Perlmutter et al., since 1998.  The sole purpose of this fictional acceleration is to cancel out gravity.  In fact, it doesn’t do the job well, because it doesn’t cancel out gravity correctly at all distances.  Hence the recent controversy over “evolving dark energy” (i.e., the need for different values of lambda, the cc, for supernovae at different spacetime distances/times from us.)

For the acceleration see https://nige.wordpress.com/2007/01/21/hawking-and-quantum-gravity/ .  There is no evidence for dark energy.  What is claimed as evidence for dark energy is evidence of no long range gravitational deceleration.  The insistence that the evidence from supernovae is evidence for dark energy is like the insistence of Ptolemy that astronomy is evidence for an earth centred universe, insistence from other faith-based belief systems that combustion and thermodynamics are evidence for caloric and phlogiston, and the insistence of Lord Kelvin that the existence of atoms is evidence for his vortex atom model.  The evidence doesn’t specifically support a small positive cosmological constant, see: http://www.google.co.uk/search?hl=en&q=evolving+dark+energy&meta

I can’t see why people are so duped by the application of GR to cosmology that they believe it perfect, and meet any problem by invoking extra epicycles, rather than quantum gravity effects like redshift of exchange radiation between receding masses.

Einstein’s 1917 cosmological constant had a very large value, to make gravity become zero at the distance of the average separation between galaxies, and to become repulsive at greater distances than that.

That fiddle proved wrong.  What is occurring is that the exchange radiation can cause both attractive and repulsive effects at the same time.  The exchange radiation pressure causes the local curvature.  As a whole spacetime is flat and has no observed curvature, so all curvature is local.

The curvature is due to the radial compression of masses, being squeezed by the exchange radiation pressure.  This is similar in mechanism to the Lorentz contraction physically of moving bodies.

If a large number of masses are exchanging radiation in a finite sized universe, they will recoil apart while being individually compressed.  The expansion of the universe and the contraction of gravity are two entirely different things that have the same cause.  There is no contradiction.

The recoil is due to Newton’s 3rd law, and doesn’t prevent the radial compressive force.  In fact, the two are interdependent, you can’t have one without the other.  Gravitation is an aspect of the radial pressure, just a shielding effect:

The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.

Regarding MOND (modified Newtonian dynamics): such ideas are not necessarily mechanistic, checkable, or fact based.  They’re usually speculations that to my mind are in the LeSage category: they do not lead anywhere unless you can inject enough factual physics into them to make them real.  Another thing which I’ve been thinking about, in relation to Sean’s writings on his Cosmic Variance group blog, is the problem of religion.

I recall a recent survey which showed that something like 70% of American physicists are religious.  Religion is not a rational belief.  If physicists are irrational with respect to religion, why expect them to be rational about deciding which scientific theories to investigate?  There is a lot of religious or pseudoscientific speculation in physics, as witnessed by the rise of 10/11 dimensional M-theory.  If mainstream physicists decide what to investigate based on irrational belief systems akin to their religion or other prejudices (clearly religion instills one form of prejudice, but there are others, such as mathematical elitism and racism, all based on the wholesale application of ad hominem arguments against all other people who are different in religion or whatever), why expect them to do anything other than end up in blind alleys like epicycles, phlogiston, caloric, vortex atoms, mechanical aether, M-theory?

It’s very unfortunate that probably 99.9% of those who praise Einstein do so for the wrong (metaphysical, religious) reasons, instead of scientific reasons (his early, quickly corrected false claims that clocks run slower at the equator, that there is a massive cosmological constant, that the universe is static, that quantum mechanics is wrong, that the final theory will be purely geometric with no particles, etc., don’t discredit him).  People like to claim Einstein is a kind of religious-scientific figure who didn’t make mistakes and was infallible.  At least that’s the popular image the media feel keen on promoting, and it is totally wrong.  It sends out the message that people must not ever make mistakes.  I feel this is an error.  As long as there is some mechanism in place for filtering out errors or correcting them, it doesn’t matter if there are errors.  What does matter is when there is no risk of making an error at all, because someone is modelling non-observed spin-2 gravitons interacting with a non-observed 10 dimensional superstring brane on a bulk of non-observed 11 dimensional supergravity.  That matters, because it’s not even wrong.

I prefer investigating physics from Feynman’s idea of ultimate simplicity, and emergent complexity:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

If and when this is ever found to be wrong, it will then make sense to start investigating extra dimensions on the basis that the universe can’t be explained in fewer dimensions.

Best wishes,

Nigel

16 thoughts on “Hawking radiation from black hole electrons has the right radiating power to cause electromagnetic forces; it therefore seems to be the electromagnetic force gauge boson exchange radiation

  1. Copy of a comment to Arcadian Functor:

    http://kea-monad.blogspot.com/2007/03/ribbon-review.html

    nige said…

    There is also an interesting article about the Standard Model represented by twisted braids at http://space.newscientist.com/channel/astronomy/cosmology/mg19125645.800, which has an illustration of braided models of neutrinos, leptons and quarks at http://space.newscientist.com/data/images/archive/2564/25645802.jpg

    Because these are joined at the top, Dr Motl, in his angry Amazon review of Dr Smolin’s latest book, dismissed this claim for LQG to account for the Standard Model as being merely “octopusses swimming in the [Penrose] spin network”.

    Of course an octopus has eight legs, and these only have three. But you don’t seriously expect a string theorist like Dr Motl to get simple numbers correct. (After all, these people claim there are 10/11 dimensions.)

    The problem with all abstract ad hoc models is that they are not really telling you anything unless you can get predictions from them. It reminds me of the epicycles, caloric, phlogiston, mechanical gear box type aether and the vortex atom.

    These junk mainstream efforts were successful models in a small (but over-hyped) way for a limited range of phenomena, but they ended up hindering the progress of science.

    Maybe it’s a difficulty that people are trying to build speculative, abstract models that don’t have physical explanations for real, experimentally validated phenomena:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    I think that some aspects of the Higgs mechanism are probably vital.

    Colour force does seem to arise when you confine two or three charges, basically leptons. The colour force arises because of the interaction between the confined charges, and the attenuation of electromagnetic charge energy by the polarized vacuum.

    But the idea is that at high energy all forces are closely related but some gauge bosons acquire mass and get shielded by the vacuum somehow, limiting their range.

    If you think about it, the electromagnetic charge is being attenuated all around a particle, out to the IR cutoff distance ~1 fm, by the polarization of pairs produced in the intense electric field out to that distance, >10^18 V/m.

    What happens to the energy that is absorbed from the electromagnetic field by this polarization-caused shielding?

    Clearly, the vacuum-attenuated electromagnetic field energy of the particle is used to produce high energy loops, and the loops polarize. It’s a physically real process.

    Now consider that somehow you could fire 3 electrons very close to one another. The overall electric field is then 3 times stronger, so the polarization of charge pair production in the vacuum is 3 times greater, hence the electromagnetic force per charge is shielded 3 times more strongly at a distance, so that each electron has an apparent electric charge – as seen outside the IR cutoff – of exactly -1/3.

    This is the charge of the downquark.

    Naturally, the full details are complex because of the Pauli exclusion principle, the weak hypercharge, and strong forces. Because of the complexity, you will get +2/3 for the upquark, etc.

    However, it’s clear that if this basic physical idea is a helpful clue, then the difference between a lepton and a quark is produced due to vacuum field effects on the proximity of leptons. The electromagnetic energy shielded, i.e., 2/3rds of the energy when an electron becomes effectively a downquark, is used to create the strong or colour charge, which is completely absent when the lepton is not confined in a pair or triad.
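    The arithmetic of this shielding argument can be restated as a toy function (a sketch of the conjecture above, not established QFT; the -1/N scaling is the claim being made, not a derived result):

```python
# Toy restatement of the vacuum-shielding conjecture described above.
# A lone electron's bare charge is shielded by vacuum polarization down
# to the observed -1 unit of charge outside the ~1 fm IR cutoff.  The
# conjecture: N electrons confined within the same polarization region
# give an N-times-stronger field, hence N-times-stronger shielding, so
# each appears from outside as -1/N of a unit charge.
def apparent_charge_per_lepton(n_confined):
    return -1.0 / n_confined

# One isolated electron: the familiar -1 unit charge.
print(apparent_charge_per_lepton(1))  # -1.0
# Three confined electrons: -1/3 each, the downquark-like value.
print(apparent_charge_per_lepton(3))
```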

    Woit has shown how Representation Theory may unify all Standard Model forces, electromagnetic, weak and strong forces. See p51 of his paper, http://arxiv.org/abs/hep-th/0206135.

    Unification implies a relationship between all types of force, and thus all types of charge. If there is any unified theory of fields, then it will explain how forces are different aspects of the same thing.

    The Standard Model is just experiment-based symmetry groups. It’s not a theory by itself. It can’t even unify electromagnetism and the weak force into the electroweak force without the Higgs sector which is speculative as to the number and type of “Higgs bosons”.

    The problem with Feynman’s idea of great simplicity in physical mechanism, is that the apparent complexity of the Standard Model and general relativity will only arise from a kind of Rube-Goldberg universe. So the underlying simplicity is covered up by numerous simple mechanisms, working together to create the apparent complexity of nature.

    I’ve been reading Carl’s papers and they are very interesting.

    One thing I don’t understand is the attention given to the formula

    [(e^n + m^n + t^n)^(1/n)]/(e + m + t) = 3/2 if n = 1/2,

    where e, m and t are the masses of the electron, muon and tauon.

    If n = 1/2, the result is 3/2; if n = 1, the ratio is obviously 1.

    I don’t see what physical significance this has. It looks like numerology, where the reader is impressed that this way of averaging the masses gives the result 3/2. With other values of n, you get different dimensionless results. What is special about 3/2?

    I know it comes into the empirical relationship between electron mass, muon mass, and alpha:

    muon mass ~ electron mass *(3/2)/alpha

    = 0.511 MeV * (3/2) * 137.036…

    = 105.0 MeV,

    but this is only approximate, since the muon mass is about 105.66 MeV.

    Hence, the 3/2 factor there should be replaced by about 1.51. On the other hand, it’s clear that whatever the final theory is, there should be some clues in data about masses of particles and so on, just as the periodic table was assembled empirically before being explained theoretically.
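    As a quick numerical check of the mass formula and the muon-mass relation (a sketch using approximate measured masses in MeV; the inputs are rounded, so the agreement is only approximate):

```python
# Check the n = 1/2 mass ratio and the (3/2)/alpha muon-mass estimate.
e, m, t = 0.511, 105.658, 1776.86   # electron, muon, tauon masses, MeV

def mass_ratio(n):
    """[(e^n + m^n + t^n)^(1/n)] / (e + m + t) for exponent n."""
    return (e**n + m**n + t**n) ** (1.0 / n) / (e + m + t)

print(mass_ratio(0.5))   # ~1.50001: almost exactly 3/2
print(mass_ratio(1.0))   # exactly 1, by construction

# Muon mass estimate: electron mass * (3/2) / alpha.
alpha_reciprocal = 137.036
print(e * 1.5 * alpha_reciprocal)   # ~105.04 MeV (measured: 105.66 MeV)
```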

    These ideas are very interesting and should be published on arXiv.

    11:55 PM

  2. Copy of a comment to Arcadian Functor, a verbatim duplicate of comment 1 above:

    http://kea-monad.blogspot.com/2007/03/ribbon-review.html

    11:55 PM

  3. Copy of an interesting comment by Q on Not Even Wrong, refuting groupthink orthodoxy that string theory is the deepest possible religion of the universe:

    http://www.math.columbia.edu/~woit/wordpress/?p=530#comment-23238

    Q Says:

    March 8th, 2007 at 10:10 am

    No, it’s not a theory because it doesn’t explain anything about real phenomena. It links gravitons nobody has ever seen to Planck scale unification nobody can see, using extra dimensions that have to be explained away by a Calabi-Yau manifold, which gives string theory a massive landscape of 10^500 solutions for particle physics.

    Explaining (by further speculation) a few speculations about gravitons and unification isn’t a theory of everything. It’s not even a ‘theory’ about speculations, it’s just vague hype that isn’t tied down to any known facts. The claim it’s the “deepest possible theory of physics” suggests you have a disproof of LQG and every suggested alternative. Where are these disproofs?

  4. Uncheckable speculation is the first step to a tyranny by false ideas:

    ‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands.’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.

    ‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime. The result will easily be guessed. Egypt stood still.
    Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.

    ‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.

    Copy of an email:

    From: Nigel Cook
    To: Monitek@aol.com
    Sent: Tuesday, March 13, 2007 6:57 PM
    Subject: Re: The Implications of Displacement Current

    In addition to the comment copied below, see the evidence at https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/

    —– Original Message —–
    From: Nigel Cook
    To: Monitek@aol.com
    Sent: Tuesday, March 13, 2007 6:56 PM
    Subject: Re: The Implications of Displacement Current

    When you get two or three leptons close together, accepted physics tells you that you get nuclear forces, including strong forces, because the electric field is stronger, and the electric field causes pair production in the vacuum.

    What you mean by accepted physics is the situation whereby leptons are well apart, so their polarized vacuum regions don’t overlap.

    In that case, obviously you don’t get nuclear forces.

    But when they are close together, you do get heavy loops of virtual particles coming into the picture.

    Colour charge is created by the close proximity of electric charges. I’ve dealt with this in detail on my blog!

    See also bits in the comment to Arcadian Functor at http://kea-monad.blogspot.com/2007/03/ribbon-review.html , reproduced in full as comment 1 above.

    —– Original Message —–
    From: Monitek@aol.com
    To: nigelbryancook@hotmail.com
    Sent: Tuesday, March 13, 2007 6:35 PM
    Subject: Re: The Implications of Displacement Current

    In a message dated 12/03/2007 22:38:44 GMT Standard Time, nigelbryancook@hotmail.com writes:
    I’ve already explained, you can’t see colour charge, only nuclear effects which are powered by the polarized vacuum absorbing electric charge energy and converting it into so-called “colour charge” (strong nuclear interactions) when two or three “leptons” are confined together, and share the same polarized vacuum region of space.

    This is all experimentally verified in lots of experiments, each experiment confirming one aspect.
    According to accepted physics leptons do not feel the strong force, do not have colour charge and are not involved with the strong force. What makes you think that it is something different?

    Regards,
    Arden

  5. Further discussion:

    From: Nigel Cook
    To: Mario Rabinowitz
    Sent: Friday, March 16, 2007 1:03 PM
    Subject: Re: I think you are nearly indomitable

    Dear Mario,

    Thank you very much for these comments and further references.

    The main issue from my perspective is how exchange radiation interacts with mass. The gravity-causing exchange radiation interacts with some mass-generating Higgs field, which then interacts with the Standard Model bosons and fermions to accelerate them. It must be indirect like this if the Standard Model particles have no mass (i.e., no gravitational charge for quantum gravity). If mass is given to particles by a vacuum Higgs field, then there are questions about exactly how the gravitational force is conveyed from the Higgs field to the charges or photons.

    I wonder if you considered this in studying the problems for the equivalence principle in quantum gravity?

    Best wishes,
    Nigel

    —– Original Message —–
    From: Mario Rabinowitz
    To: Nigel Cook
    Sent: Tuesday, March 13, 2007 3:47 AM
    Subject: I think you are nearly indomitable

    Dear Nigel,

    It has been a long day of hard physical work, as was yesterday. I am exhausted and all my muscles are sore. Yet, I have been looking forward to responding to your letter.

    Thanks for writing right back. I think you are nearly indomitable.

    I think I’m more malleable and less judgmental than you are. In my heart, I hope Feynman was right. However, the man now sitting at his desk (as you pointed out) seems to think otherwise.

    In 1998, Sean Carroll (I think he was then at the Univ. of Chicago; now at Caltech) said something to the effect that:

    Maybe the universe isn’t simple enough for dummies like us humans. Maybe it’s not just our powers of perception that aren’t up to the task but also our powers of conception. Extraordinary claims like the dawn of a new universe might require extraordinary evidence, but what if that evidence has to be literally beyond the ordinary? Astronomers now realize that dark matter probably involves matter that is nonbaryonic. And whatever it is that dark energy involves, we know it’s not “normal,” either. In that case, maybe this next round of evidence will have to be not only beyond anything we know but also beyond anything we know how to know.

    Even prior to my 1999 paper, “Gravitational Tunneling Radiation” published in Phys.Essays 12 (1999) 346-357 and astro-ph/0212249, I presented arguments that:
    “Theorists were quick to coin the term “dark energy” in concert with the already existing conundrum of “dark matter.” The two have been considered as separate entities …. In my model, both are essentially the same, or due to the same source i.e. little black holes (LBH).”

    As 95% of the mass of the universe, LBH essentially hold the universe together gravitationally, and their directed radiation contributes to its accelerated expansion. That many LBH doing Hawking Radiation would fry the universe. My reason that LBH are considered to be energy rather than matter is that they are too small to be easily detected, and hence look smooth like energy rather than lumpy like matter. Unlike LBH that Hawking radiate, my LBH are long-lived because in my model of Gravitational Tunneling Radiation (GTR) they radiate much, much less and the radiation is beamed. See also my chapter
    “Little Black Holes as Dark Matter Candidates with Feasible Cosmic and Terrestrial Interactions” in Progress in Dark Matter Research. Editor J. Blain; NovaScience Publishers, Inc. N.Y., (2005), pp. 1 – 66; also in physics/0503079.

    Best,
    Mario

  6. copy of a comment:

    http://kea-monad.blogspot.com/2007/04/m-theory-lesson-37.html

    Hawking’s theory does have the problem that it assumes that pair production is occurring all the time in the vacuum, and that pairs of fermions can become separated, with one on either side of the event horizon.

    Quantum field theory says that pair production in the vacuum requires either an extremely intense electric field, or an extremely high frequency oscillatory field.

    You get radiation (Casimir force effects for example) in the vacuum without accompanying pair production.

    To get pairs to appear in the first place, to create Hawking radiation, you basically need an electric field corresponding to the IR cutoff energy, which is equivalent to a field gradient of about 10^18 volts/metre. That electric field occurs about 1 fm from an electron.

    You get this by putting the IR cutoff energy for pair production/vacuum polarization (required to make QFT work correctly) into Coulomb’s law, F = qQ/(4*Pi*Permittivity*r^2). Since force F = qE, where q is charge and E is field strength (V/m),

    E = F/q = Q/(4*Pi*Permittivity*r^2)

    we get the closest-approach distance r when two electrons with the IR cutoff energy of 0.511 MeV each are scattered.

    When the electrons are at closest approach distance r, all their kinetic energy (0.511 MeV each, i.e. 1.022 MeV in total) is converted into electrostatic potential energy because they have been working against a repulsive electric field while approaching:

    1.022 MeV = Q^2/(4*Pi*Permittivity*r),

    hence: r ≈ 1.41 x 10^-15 m.

    Putting this value of r into E = F/q = Q/(4*Pi*Permittivity*r^2) gives ~7*10^20 V/m.

    This is the minimum electric field strength needed to get polarizable pairs of particles in the vacuum.

    QFT gives a threshold for pair production of E = (m^2)(c^3)/(e*h bar) = 1.3*10^18 volts/metre

    This equation, E = (m^2)(c^3)/(e*h bar), is equation 359 in Freeman Dyson’s lectures http://arxiv.org/abs/quant-ph/0608140 and is also equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040
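    The numbers quoted above can be reproduced directly from SI constants (a sketch; the constants are rounded to four figures, so the last digits differ slightly):

```python
# Closest approach and peak field for two electrons scattered with
# 0.511 MeV of kinetic energy each, plus the Schwinger pair-production
# threshold field E = m^2 c^3 / (e * hbar) quoted above.
e_charge = 1.602e-19   # electron charge, C
k        = 8.988e9     # Coulomb constant 1/(4*pi*permittivity), N m^2 C^-2
m_e      = 9.109e-31   # electron mass, kg
c        = 2.998e8     # speed of light, m/s
hbar     = 1.055e-34   # reduced Planck constant, J s

# All 1.022 MeV of kinetic energy becomes potential energy k*e^2/r:
total_energy = 1.022e6 * e_charge           # joules
r = k * e_charge**2 / total_energy          # closest approach, ~1.41e-15 m

# Field of one electron at that distance:
field_at_r = k * e_charge / r**2            # ~7.3e20 V/m

# Schwinger critical field for pair production:
schwinger = m_e**2 * c**3 / (e_charge * hbar)   # ~1.3e18 V/m

print(r, field_at_r, schwinger)
```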

    *********

    Now, what is Hawking assuming? Is he assuming that pair production occurs near the event horizon of a black hole due to intense electric fields of the magnitudes estimated above?

    Or some other case?

    Pair production can’t occur in weaker electric fields, because if they did, there would be no electric charges in the universe. The role of any such dielectric is to polarize around charges, shielding them! This is why the electric charge increases within the IR cutoff zone (i.e., in higher energy collisions).

    I don’t believe that in general black holes will radiate anything, because that contradicts experimentally known facts of QFT as given above. The renormalization of charge is an experimentally shown necessity in the correct calculation of phenomena like the magnetic moment of leptons, known to many more decimal places than any other physical constant in the whole of science.

    Renormalization requires taking effective charges which correspond to the polarization and pair production cutoffs mentioned.

    QFT wouldn’t work if the vacuum contained polarizable (i.e. movable) pairs of charges everywhere. It works only because the vacuum doesn’t. Hawking is assuming otherwise.

    Hawking radiation from electrons (treated as black holes) is however possible and likely, see my discussion at https://nige.wordpress.com/2007/03/08/hawking-radiation-from-black-hole-electrons-causes-electromagnetic-forces-it-is-the-exchange-radiation/

  7. copy of a comment:

    http://kea-monad.blogspot.com/2007/04/m-theory-lesson-37.html

    Professor Smolin has a discussion of this entropy argument at page 90 in TTWP (The Trouble With Physics):

    “The first crucial result connecting quantum theory to black holes was made in 1973 by Jacob Bekenstein … He made the amazing discovery that black holes have entropy. Entropy is a measure of disorder, and there is a famous law, called the second law of thermodynamics, holding that the entropy of a closed system can never decrease. [Notice he says “closed system” conveniently without defining it, and if the universe is a closed system then the 2nd law of thermodynamics is wrong: at 300,000 years after the big bang the temperature of the universe was a uniform 4000 K with extremely little variation, whereas today space is at 2.7 K and the centre of the sun is at 15,000,000 K. Entropy for the whole universe has been falling, in contradiction to the laboratory-based (chemical experiments) basis of thermodynamics. The reason for this is the role of gravitation in lumping matter together, organizing it into hot stars and empty space. This is insignificant for the chemical experiments in labs which the laws of entropy were based upon, but it is significant generally in physics, where gravity lumps things together over time, reducing entropy and increasing order. There is no inclusion of gravitational effects in thermodynamic laws, so they’re plain pseudoscience when applied to gravitational situations.] Bekenstein worried that if he took a box filled with a hot gas – which would have a lot of entropy, because the motion of the gas molecules was random and disordered – and threw it into a black hole, the entropy of the universe would seem to decrease, because the gas would never be recovered. [This is nonsense because gravity in general works against rising entropy; it causes entropy to fall! Hence the big bang went from uniform temperature and maximum entropy (disorder, randomness of particle motions and locations) at early times to very low entropy today, with a lot of order. The ignorance of the role of gravitation on entropy by these people is amazing.]
To save the second law, Bekenstein proposed that the black hole must itself have an entropy, which would increase when the box of gas fell in, so that the total entropy of the universe would never decrease. [But the entropy of the universe is decreasing due to gravitational effects anyway. At early times the universe was a hot fireball of disorganised hydrogen gas at highly uniform temperature. Today space is at 2.7 K and the centres of stars are at tens of millions of Kelvin. So order has increased with time, and entropy – disorder – has fallen with time.]”

    On page 91, Smolin makes clear the errors stemming from Hawking’s treatment:

    “Because a black hole has a temperature, it will radiate, like any hot body.”

    This isn’t in general correct either, because the mechanism Hawking suggested for black hole radiation requires pair production to occur near the event horizon, so that one of the pair of particles can fall into the black hole and the other particle can escape. This required displacement of charges is the same as the condition for polarization of the vacuum, which can’t occur unless the electric field is above a threshold/cutoff of about 10^18 v/m.

    In general, a black hole will not have a net electric field at all because neutral atoms fall into it to give it mass. Certainly there is unlikely to be an electric field strength of 10^18 v/m at the event horizon of the black hole. Hence there are no particles escaping. Hawking’s mechanism is that the escaping particles outside the event horizon annihilate into gamma rays which constitute the “Hawking radiation”.

    Because of the electric field threshold required for pair production, there will be no Hawking radiation emitted from large black holes in the universe: there is no mechanism because the electric field at the event horizon will be too small.

    The only way you can get Hawking radiation is where the condition is satisfied that the event horizon radius of the black hole, R = 2Gm/c^2, corresponds to an electric field strength exceeding the QFT pair production threshold of E = (m^2)(c^3)/(e*h bar) = 1.3*10^18 volts/metre, where e is the electron’s charge.

    Since E = F/q = Q/(4*Pi*Permittivity*r^2) v/m, the threshold net electric charge Q that a black hole must carry in order to radiate Hawking radiation is

    E = (m^2)(c^3)/(e*h bar)

    = Q/(4*Pi*Permittivity*r^2)

    = Q/(4*Pi*Permittivity*{2Gm/c^2}^2)

    Hence, the minimum net electric charge a black hole must have before it can radiate is

    Q = 16*Pi*(m^4)(G^2)*(Permittivity of free space)/(c*e*h bar)

    Here m does double duty: it is the electron mass in the pair-production threshold and the black hole mass in the horizon radius, so the m^4 form applies when the black hole is an electron. For a general black hole of mass M, the same substitution gives Q = 16*Pi*(m^2)(M^2)(G^2)*(Permittivity of free space)/(c*e*h bar), which grows as the square of the black hole mass.

    Notice the strong dependence on the mass of the black hole! The more massive the black hole, the more electric charge it requires before Hawking radiation emission is possible.
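    This threshold can be evaluated numerically. The sketch below uses standard CODATA constants; the one-solar-mass case is purely my illustrative assumption, not a figure from the text. It shows the asymmetry the argument relies on: an electron-mass black hole needs an absurdly tiny charge to satisfy the criterion (so the electron's own charge vastly exceeds it), while a stellar black hole would need an enormous net charge.

```python
import math

# Standard constants (CODATA / SI)
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8          # speed of light, m/s
e = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837015e-31    # electron mass, kg

# Pair-production threshold field (Schwinger limit), V/m
E_threshold = m_e**2 * c**3 / (e * hbar)

def threshold_charge(M):
    """Minimum charge Q for Hawking emission under the text's criterion:
    the field at the horizon r = 2GM/c^2 must reach E_threshold."""
    r = 2 * G * M / c**2
    return E_threshold * 4 * math.pi * eps0 * r**2

Q_electron = threshold_charge(m_e)     # electron-mass black hole
Q_solar = threshold_charge(1.989e30)   # one solar mass (illustrative)
print(Q_electron, Q_solar)
```

Under these assumptions the electron-mass threshold comes out around 10^-106 C, far below the elementary charge, while the solar-mass threshold is of order 10^15 C.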
