Dr Mario Rabinowitz, the author of the arXiv paper “Deterrents to a Theory of Quantum Gravity,” has kindly pointed out his approach to the central problem I’m dealing with. (Incidentally, the problem he has with quantum gravity does not apply to the quantum gravity mechanism I’m working on, where gravity is a residue of the electromagnetic field caused by the exchange of electromagnetic gauge bosons, which permits two kinds of addition: a weak, always-attractive force, and a force about 10^40 times stronger with both attractive and repulsive mechanisms.) His paper, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No. 11, 8-13 (1990), is the earliest I’m aware of which arrives at a general result equal to Louise Riofrio’s equation MG = tc^3.
He sets the gravitational force equal to the inertial force, F = mMG/R^2 = [mM/(M + m)]v^2/R ≈ mc^2/R. This gives MG = Rc^2 = (ct)c^2 = tc^3, which is identical to Riofrio’s equation.
Here is my detailed treatment of Mario’s analysis. The cosmological recession of Hubble’s law, v = HR, where H is the Hubble parameter and R is radial distance, implies an acceleration in spacetime (since R = ct) of a = dv/dt = d(HR)/dt = Hv = (v/R)v = v^2/R. (This is not controversial or speculative; it is just employing calculus on Hubble’s v = HR, in the Minkowski spacetime we can observe, where: ‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.) Hence the outward force on mass m due to recession is F = ma = mv^2/R = mc^2/R for extreme distances, where most of the mass is and where redshifts are great, so that v ~ c.
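As an aside, the size of this acceleration is easy to evaluate. Here is a minimal sketch (in Python; the H value is the ~70 km/s/Mparsec figure used in the numerical check below, and v is set to c for the most distant matter):

```python
# Sketch: cosmological acceleration implied by differentiating Hubble's law,
# a = dv/dt = d(HR)/dt = Hv, evaluated at v ~ c for the most distant matter.
H = 2.27e-18   # Hubble parameter, s^-1 (~70 km/s/Mparsec, as used later in this post)
c = 2.998e8    # speed of light, m/s

a = H * c      # a = Hv with v ~ c
print(f"a = Hc = {a:.2e} m/s^2")  # ~6.8e-10 m/s^2: tiny, but nonzero
```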
Hence the inward (attractive) gravity force is balanced by this outward force:
F = mMG/R^2 = mc^2/R
Thus,
MG = Rc^2 = (ct)c^2 = tc^3.
(This result is physically and dimensionally correct, but quantitatively it is off by a dimensionless correction factor of e^3 ≈ 20, because it ignores the dynamics of quantum gravity at long distances: rising density as time approaches zero, which increases towards infinity the effective gravity effect due to the expansion of the universe, and falling strength of the gravity-causing exchange radiation as time goes towards zero, due to the extreme redshift of that radiation, which weakens gravity. However, the physical arguments above are very important and can be compared to those in the mechanism at http://feynman137.tripod.com/. The correct formula is e^3 MG = tc^3, where, because of the lack of gravitational retardation in quantum gravity, t = 1/H, where H is the Hubble parameter, instead of t = (2/3)/H, which is the case for the classic Friedmann scenario with gravitational deceleration.)
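As a rough order-of-magnitude check of the corrected formula, here is a sketch (assumptions: t = 1/H with the H value from the numerical check below, and the laboratory value of G; it simply solves each form of the equation for M):

```python
import math

# Sketch: solve e^3*M*G = t*c^3 for M, with t = 1/H (no gravitational retardation).
H = 2.27e-18    # Hubble parameter, s^-1 (~70 km/s/Mparsec)
c = 2.998e8     # speed of light, m/s
G = 6.674e-11   # laboratory gravitational constant, N m^2 kg^-2

t = 1.0 / H     # age of universe without retardation, s (~4.4e17 s)
M_uncorrected = t * c**3 / G               # Rabinowitz/Riofrio form, MG = tc^3
M_corrected = t * c**3 / (math.e**3 * G)   # with the e^3 correction factor

print(f"M from MG = tc^3:     {M_uncorrected:.2e} kg")  # ~1.8e53 kg
print(f"M from e^3 MG = tc^3: {M_corrected:.2e} kg")    # ~8.9e51 kg
```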
Historically, this result has been rediscovered three times since Mario’s 1990 paper, each time under different circumstances. The original and the three rediscoveries are:
(1) M. Rabinowitz, “Weighing the Universe and Its Smallest Constituents,” IEEE Power Engineering Review 10, No.11, 8-13 (1990).
(2) My own treatment, Electronics World, various issues (October 1996 - April 2003), based on a physical mechanism of gravity (the outward force of matter in the receding universe is balanced, by Newton’s 3rd law, by an inward force of gauge-boson pressure; gravity arises from asymmetries in this pressure, since each fundamental particle acts as a reflecting shield, so masses shield one another and are pushed together by the gauge-boson radiation, predicting the value of G quite accurately).
[Initially I had a crude physical model of the Dirac sea, in which the motion of matter outward resulted in an inward motion of the Dirac sea to fill in the volume being vacated. This was objected to strongly for being a material-pressure LeSage gravity mechanism, although it makes the right prediction for gravity strength (unlike other LeSage models) and utilises the right form of the Hubble acceleration outward, a = dv/dt = d(HR)/dt = Hv. This was published in Electronics World from October 1996 (letters page item) to April 2003 (a major paper six pages long). A calculation based on gauge-boson exchange radiation, which does the same thing without the material objections to LeSage gravity that the Dirac-sea version attracted, was then developed in 2005. I’ve little free time, but am rewriting my site into an organised book which will be available free online. The correct formula from http://feynman137.tripod.com/ for the gravity constant is G = (3/4)H^2/(Pi*Rho*e^3), where Rho is the observed (not Friedmann critical) density of visible matter and dust, etc. This equation is equivalent to e^3 MG = tc^3, and differs from the Friedmann critical density result by a factor of approximately 10, predicting that the amount of dark matter is less than predicted by the critical density law. In fact, you get a very good prediction of the gravity constant from the detailed Yang-Mills exchange radiation mechanism by ignoring dark matter as a first approximation. Since dark matter has never been observed in a laboratory, but is claimed to be abundant in the universe, you have to ask why it is avoiding laboratories. In fact, the most direct evidence claimed for it doesn’t reveal any details about it. It is required in the conventional (inadequate) approximations to gravity, but the correct quantum gravity, which predicted the non-retarded expansion of the universe in 1996, two years before Perlmutter’s observational data confirmed it, reduces the amount of dark matter dramatically and makes various other validated predictions.]
(3) John Hunter published a conjecture on page 17 of the 12 July 2003 issue of New Scientist, suggesting that the rest-mass energy of a particle, E = mc^2, is equal to its gravitational potential energy with respect to the rest of the matter in the surrounding universe, E = mMG/R. This leads to E = mc^2 = mMG/R, hence MG = Rc^2 = (ct)c^2 = tc^3. He has the conjecture on a website here, which contains an interesting and important approach to solving the galactic rotation curve problem without inventing any unobserved dark matter, although his cosmological speculations on linked pages are unproductive and I wouldn’t want to be associated with those non-predictive guesses. Theories should be built on facts.
(4) Louise Riofrio came up with the basic equation MG = tc^3 by dimensional analysis and has applied it to various problems. She correctly concludes that there is no dark energy, but one issue is what is varying in the equation MG = tc^3 to compensate for time increasing on the right-hand side. G is increasing with time, while M and c remain constant. This conclusion comes from the detailed gravity mechanism. Contrary to claims by Professor Sean Carroll and the late Dr Edward Teller, an increasing G does not vary the sun’s brightness or the fusion rate in the first minutes of the big bang (the electromagnetic force varies in the same way, so the Coulomb repulsion between protons was correspondingly different, offsetting the effect of the varying gravitational compression on the fusion rate), but it does correctly predict that gravity was weaker in the past when the cosmic background radiation was emitted, thus explaining quantitatively why the ripples in that radiation due to mass were so small when it was emitted 300,000 years after the big bang. This, together with the lack of gravitational retardation on the rapid expansion of the universe (gravity can’t retard expansion between relativistically receding masses, because the gravity-causing exchange radiation will be redshifted, losing its force-causing energy, like ordinary light, which is also redshifted in cases of rapid recession; this redshift effect is precisely why we don’t see a blinding light and lethal radiation from extreme distances corresponding to early times after the big bang), gets rid of the ad hoc inflationary-universe speculations.
I’m disappointed by Dr Peter Woit’s new post on astronomy where he claims astronomy is somehow not physics: ‘When I was young, my main scientific interest was in astronomy, and to prove it there’s a very geeky picture of me with my telescope on display in my apartment, causing much amusement to my guests (no way will I ever allow it to be digitized, I must ensure that it never appears on the web). By the time I got to college, my interests had shifted to physics…’
I’d like to imagine that Dr Woit just means that current claims of observing ‘evolving dark energy’ and ‘dark matter’ (with lots of alleged evidence which turns out to be gravity-caused distortions that could be caused by massive neutrinos or anything, and without a fig leaf of direct laboratory confirmation for the massive quantity postulated to fix epicycles in the current general relativity paradigm, which ignores quantum gravity) are not physics. However, he is unlikely to start claiming that the mainstream ‘time-varying-lambda-CDM’ model of cosmology (a time-varying dark energy ‘cosmological constant’ plus cold dark matter) is nonsense, because in his otherwise excellent book Not Even Wrong he uses the false, ad hoc, small positive fixed value of the cosmological constant to ridicule the massive value predicted by force-unification considerations in string theory. Besides, if he knows little of modern astronomy and cosmology, he will not be in a position to competently evaluate and criticise it. I hope Dr Woit will submerge himself in the lack of evidence for modern cosmology and perhaps come up with a second volume of Not Even Wrong addressed at the lambda-CDM model and the predictive, checkable solution offered by a proper system of quantum gravity.
For my earlier post on this topic, see https://nige.wordpress.com/2006/09/22/gravity-equation-discredits-lubos-motl/
Other news: my domain http://quantumfieldtheory.org/ is up and running with some draft material – now I just have to write the free quantum field theory textbook to put on there!
NUMERICAL CHECK
The current observational value of H is about 70 +/- 2.4 km/s/Mparsec ~ 2.27*10^-18 s^-1, and it is Rho that causes the difficulty today. The observed visible matter (stars, hot gas clouds) has long been estimated to have a mean density around us of ~4*10^-28 kg/m^3, although studies show that this should be increased by about 15% for dust, and further for various other factors. The prediction G = (3/4)H^2/(Pi*Rho*e^3) is a factor of e^3/2 ~ 10 times smaller than the value implied by the Friedmann critical density formula. Its accuracy depends on what evidence you take for the density. It happens to agree exactly with the statement by Hawking in 2005:
‘When we add up all this dark matter [which accounts for the high speed of the outermost stars orbiting spiral galaxies like the Milky Way, and the high speed of galaxies orbiting in clusters of galaxies], we still get only about one-tenth of the amount of matter required to halt the expansion [the critical density in Friedmann’s solution].’
– S. Hawking and L. Mlodinow, A Briefer History of Time, Bantam, London, 2005, p65.
Changing it around, the formula predicts a density of 9.2*10^-28 kg/m^3, about twice the observed density if that is taken as the traditional figure of 4*10^-28 kg/m^3. However, the latest estimates of the density are higher and similar to the predicted value of 9.2*10^-28 kg/m^3, for example the following:
‘Astronomers can estimate the mass of galaxies by totalling up the number of stars in the galaxy (about 10^9) and multiplying by the mass of one star, or by observing the dynamics of orbiting parts of a galaxy. Next they add up all the galactic mass they can see in this region and divide by the volume of space they are looking at. If this is done for bigger and bigger regions of space the mean density approaches a figure of about 10^-30 grams per cubic centimetre or 10^-27 kg m^-3. You will realise that there is some doubt in this value because it is the result of a long chain of estimations.’
Putting this approximate value of Rho = 10^-27 kg m^-3 into G = (3/4)H^2/(Pi*Rho*e^3) with H as before gives G = 6.1*10^-11 N m^2 kg^-2, which is only 9% low. Although the experimental error in density observations is relatively high, it will improve with further astronomical studies, just as the Hubble parameter error has improved with time. This provides a further check. (Other relevant checks on quantum gravity are discussed here, top post.)
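For anyone wanting to reproduce the arithmetic above, here is a minimal sketch (the only assumption beyond the quoted inputs is the standard conversion 1 Mparsec = 3.086*10^22 m):

```python
import math

# Sketch: reproduce the numerical check of G = (3/4)H^2/(Pi*Rho*e^3).
Mpc = 3.086e22        # metres per megaparsec
H = 70e3 / Mpc        # 70 km/s/Mparsec -> ~2.27e-18 s^-1
rho = 1e-27           # mean density estimate quoted above, kg/m^3
G_lab = 6.674e-11     # laboratory value of G, N m^2 kg^-2

G_pred = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(f"H = {H:.3e} s^-1")
print(f"Predicted G = {G_pred:.2e} N m^2 kg^-2")     # ~6.1e-11, about 9% low
print(f"Ratio to laboratory G: {G_pred / G_lab:.2f}")

# Inverting instead: the density the formula requires to match the laboratory G.
rho_pred = 0.75 * H**2 / (math.pi * G_lab * math.e**3)
print(f"Predicted density = {rho_pred:.2e} kg/m^3")  # ~9.2e-28, as stated above
```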
Here’s an extract from a response I sent to Dr Rabinowitz on 8 January, regarding the issue of gauge bosons and the accuracy of the calculation of G in comparison to observed data:
“Are your gauge bosons real or virtual?” What’s the difference? It’s the key question in many ways. Obviously they are real in the sense they really produce electric forces. But you can’t detect them with a radio receiver or other instrument designed to detect either oscillatory waves or discrete particles.
“I am troubled by your force calculation (~10^43 N) which is an input to your derivation of G. I’m inclined to think that the force calculation could be off by a large factor, so that one may question that ‘The result predicts gravity constant G to within 2%’.”
First, the “outward force” is ambiguous. If you ignore the fact that the more distant observable universe has higher density, then you get one figure. If you assume that density increases to infinity with distance, you get another result for outward force (infinity). Finally, if you are interested in the inward reaction force carried by radiation (gauge bosons) then you need to allow for the redshift of those due to the recession of the matter emitting them, which cancels out the infinity due to density increasing, and gives a result of about 7 x 10^43 N or whatever. In giving outward force as ~10^43 N, I’m giving a rough figure which anyone will be able to validate approximately without having to do the more complicated calculations.
I used two published best estimates for the Hubble parameter and the density of the visible matter plus dust in the universe. These allowed G to be predicted. The result was within 2% of the empirically known value of G. I used 70 km/s/Mparsec for H a decade ago and that is still the correct figure, although the uncertainty is falling. A decade ago, there was no estimate of the uncertainty because the data clustered around two values, 50 and 100. Now there is agreement that the correct value of H is very close to 70. … I don’t think there is any massive error involved in observational astronomy. There used to be a confusion because of two types of variable star, with Hubble using the wrong type to estimate H. Hubble had a value of 550 for H, many times too high. That sort of error is long gone.
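To see roughly where a figure of order 10^43 N comes from, here is a deliberately crude sketch (assumptions: uniform density at the ~9*10^-28 kg/m^3 predicted value above, an effective radius R = c/H, and a = Hc from the Hubble-law differentiation earlier in this post; the refined calculation with the redshift and density corrections mentioned above is not attempted here):

```python
import math

# Crude sketch: outward force F = (mass of universe) * (cosmological acceleration).
H = 2.27e-18     # Hubble parameter, s^-1
c = 2.998e8      # speed of light, m/s
rho = 9.2e-28    # assumed mean density, kg/m^3 (the predicted value above)

R = c / H                                  # effective radius, ~1.3e26 m
M = rho * (4.0 / 3.0) * math.pi * R**3     # mass within radius R
a = H * c                                  # acceleration a = Hv with v ~ c
F = M * a
print(f"Outward force ~ {F:.1e} N")        # ~6e42 N, i.e. of order 10^43 N
```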
Recent response to Professor Landis about general relativity:
“Ultimately, it’s all in the experimental demonstration. If Einstein’s theory hadn’t been confirmed by tests, it would have been abandoned regardless of how pretty or ugly it may be.” – Geoffrey Landis
What about string theory, which has been around since 1969, can’t be tested, and doesn’t hold out any hope of a test? I disagree: the tests of general relativity would first have been repeated, and if they still didn’t agree, an additional factor would have been invented/discovered to make the theory correct.
Newton’s gravity law in tensors would be R_uv = 4*Pi*T_uv
which is false because the divergence of T_uv doesn’t vanish; hence it violates conservation of energy. Einstein replaces T_uv with T_uv – (1/2)(g_uv)T, which does have a vanishing divergence and so doesn’t contradict the conservation of energy. If the solutions of general relativity are wrong, then you would need to find out physically what is causing the discrepancy.
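For reference, here is the standard statement (a sketch in units with G = c = 1): the trace-reversed form of the field equation is equivalent to the usual form, and the contracted Bianchi identity is what guarantees that the source is conserved.

```latex
% Einstein's field equation, in its two equivalent forms:
R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = 8\pi T_{\mu\nu}
\;\Longleftrightarrow\;
R_{\mu\nu} = 8\pi \left( T_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} T \right)

% The contracted Bianchi identity makes the left-hand side divergence-free,
% so the source must be conserved:
\nabla^{\mu} \left( R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R \right) = 0
\;\Longrightarrow\;
\nabla^{\mu} T_{\mu\nu} = 0
```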
The Friedmann solution of general relativity predicted that gravity slows down the expansion. Observations by Perlmutter on distant supernovae showed that there was something wrong. Instead of abandoning general relativity, a suitable small positive “cosmological constant” was adopted to keep everything fine. Recently, however, more detailed observations have shown evidence that such a “cosmological constant” lambda would be varying with time.
Discussion by email with Dr Rabinowitz:
From: Mario Rabinowitz
To: Nigel Cook
Sent: Wednesday, January 17, 2007 5:27 AM
Subject: Paul Gerber is an unsung hero
Dear Nigel,
… Einstein’s General Relativity (EGR) makes the problem much more difficult than your simple approach.
Another shortcoming is the LeSage model itself. It is very appealing, but one aspect is appalling. What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind. One should be able to calculate the time constant for slowing down a body. …
Best regards,
Mario
From: Nigel Cook
To: Mario Rabinowitz
Sent: Wednesday, January 17, 2007 7:07 PM
Subject: Re: Paul Gerber is an unsung hero
Dear Mario,
“Since the leading edge of the Universe is moving at nearly c, one needs to bring relativity into the equations. Special relativity (without boosts) can’t do it. Einstein’s General Relativity (EGR) makes the problem much more difficult than your simple approach.”
The mechanism of relativity comes from this simple approach: the radiation pressure on a moving object causes the contraction effect. Any inconsistency is a failure of general or special relativity, which are mathematical structures based on principles. An example of a failure is the lack of deceleration of the universe…
“Another shortcoming is the LeSage model itself. It is very appealing, but one aspect is appalling. What is troublesome is that for moving bodies, there is more momentum transfer for head-on collisions from the sea of tiny bodies than from behind.”
This is the objection Feynman made to LeSage in his November 1964 Cornell lectures on the Character of Physical Law. The failure of LeSage has been discussed in detail by people from Maxwell to Feynman. I have some discussion of LeSage at http://electrogravity.blogspot.com/2006/03/george-louis-lesage-newtonian_26.html, where the Dirac sea (or the equivalent Yang-Mills radiation exchange pressure on moving objects) is shown to be the mechanism for relativity:
“The Dirac sea was shown to mimic the SR contraction and mass-energy variation; see C. F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp. 131-4: ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^1/2, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_0/(1 – v^2/c^2)^1/2, where E_0 is the potential energy of the dislocation at rest.’”
The force inward on every point is enormous, 10^43 Newtons. General relativity gives the result that the Earth’s radius is contracted by (1/3)MG/c^2 = 1.5 millimetres. The physical mechanism of this process (gravity dynamics by radiation pressure of exchange radiation) is the basis for the gravitational “curvature” of spacetime in general relativity, because this shrinking is radial only: transverse directions (e.g. the circumference) are not affected. Hence, the ratio circumference/radius will vary depending on the mass of the object, unless you invent a fourth dimension and preserve Pi by stating that spacetime is curved by the extra dimension.
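That 1.5 mm figure is easy to check; here is a minimal sketch using standard values for the Earth’s mass and the constants (the (1/3)MG/c^2 expression is the one quoted above):

```python
# Sketch: Earth's radial contraction (1/3)MG/c^2 quoted above.
G = 6.674e-11   # gravitational constant, N m^2 kg^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s

contraction = (1.0 / 3.0) * M * G / c**2
print(f"Radial contraction = {contraction * 1e3:.2f} mm")  # ~1.5 mm
```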
LeSage (who apparently plagiarised Fatio, a friend of Newton) was also dismissed for various other equally false reasons:
1. Maxwell claimed that the force-causing radiation would have to be so great that it would heat up objects until they were red hot. This is vacuous for various reasons: the strong nuclear force (unknown in Maxwell’s time) is widely accepted to be mediated by pions and other particles, and is immensely stronger than gravity, but doesn’t cause things to melt. Heat transfer depends on how energy is coupled. It is known that gravity and other forces are indirectly coupled to particles via a vacuum field that has mass and other properties.
2. Several physicists in the 1890s wrote papers which dismissed LeSage by claiming that any useful employment of the mechanism makes gravity depend on the mass of atoms rather than on the surface area of a planet, and so requires the gravity-causing field to be able to penetrate through solid matter, which in turn means that matter must be mainly void, with atoms mainly empty. This appeared absurd. But when X-rays, radioactivity and the nuclear atom confirmed this picture, LeSage was not hailed as having made a successful prediction, confirmed experimentally. The later mainstream view of LeSage was summed up by Eddington: ‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, ‘Space Time and Gravitation’, Cambridge University Press, 1921, p64. This is partly correct, in the sense that there was no numerical prediction from LeSage that could be tested.
3. Feynman’s objection assumes that the force-carrying radiation interacts chaotically with itself, like gas molecules, and would fill in “shadows” and cause drag by striking moving objects and carrying away momentum randomly in any direction. This is a straw-man argument: Feynman should have considered the Yang-Mills exchange radiation as the only basis for forces below the infrared cutoff, i.e., beyond about 1 fm from a particle core.
The gas of creation-annihilation loops only occurs above the IR cutoff. It is ironic that Feynman missed this, given his own major role in discovering renormalization, which is evidence for the IR cutoff.
Best wishes,
Nigel
From: Mario Rabinowitz
To: Nigel Cook
Sent: Wednesday, January 17, 2007 7:51 PM
Subject: Contradictory prediction of the LeSage model to that of Newton
Dear Nigel,
Thanks for addressing the issues I raised.
I know very little about the LeSage model, its critics, and its proponents. Nevertheless, let me venture forth. Consider a Large Dense Disk rotating slowly. I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body? We could have two identical Disks: One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius. I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction.
Best regards,
Mario
From: Nigel Cook
To: Mario Rabinowitz
Sent: Thursday, January 18, 2007 10:57 AM
Subject: Re: Contradictory prediction of the LeSage model to that of Newton
Dear Mario,
It is just a very simple form of radiation shielding. Each fundamental particle is found to have a gravity shielding cross-section of Pi*R^2, where R = 2GM/c^2, M being the mass of the particle. This precise result, that the black hole horizon area is the area of gravitational interactions, is not a fiddle to make the theory work, but instead comes from comparing the results of two different derivations of G, each derivation being based on a different set of empirically-founded assumptions or axioms.
It is also consistent with the idea of Poynting electromagnetic energy current being trapped gravitationally to form fermions from bosonic energy (the E-field lines are spherically symmetric in this case, while the B-field lines form a torus shape which becomes a magnetic dipole at long distances because the polarized vacuum around the electron core shields transverse B-field lines as it does radial E-field lines, but doesn’t of course shield radial – ie polar – B-field lines).
Notice that the black hole radius of an electron is many orders of magnitude smaller than the Planck length. The idea that gravity will be reduced by particles being directly behind one another is absurd, because the gravitational interaction cross-section is so small. You can understand the small size of the gravitational cross-section when you consider that the inward force of gauge boson radiation is something on the order of 10^43 N, directed towards every particle. This force only requires a tiny amount of shielding to produce a large gravitational force.
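A sketch of that size comparison, using standard constants (the 2Gm/c^2 horizon radius is the cross-section radius stated above):

```python
import math

# Sketch: black-hole event-horizon radius of an electron vs the Planck length.
G = 6.674e-11     # gravitational constant, N m^2 kg^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.0546e-34 # reduced Planck constant, J s
m_e = 9.109e-31   # electron mass, kg

r_bh = 2 * G * m_e / c**2                 # ~1.35e-57 m
l_planck = math.sqrt(hbar * G / c**3)     # ~1.6e-35 m
print(f"Electron black-hole radius: {r_bh:.2e} m")
print(f"Planck length:              {l_planck:.2e} m")
print(f"Ratio: {r_bh / l_planck:.1e}")    # ~1e-22: far below the Planck scale
```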
There are obviously departures produced by this model from standard general relativity under extreme circumstances. One is that you can never have a gravitational force – regardless of how big the mass is – that exceeds 10^43 N. I don’t list this as a prediction in the list of predictions on my home page, because it is clearly not a falsifiable or checkable prediction, except near a large black hole, which can’t very well be examined. The effect of one mass being behind another, and so not adding any additional geometrical shielding, is dealt with in regular radiation-shielding calculations. If an amount of shielding material H is enough to cut the gravity-causing radiation pressure by half, the statistical effect of an amount M is that the transmitted pressure fraction will not be f = 1 – (0.5M/H), but will instead be f = exp{-M(ln 2)/H}.
However, we know mathematically that f = 1 – (0.5M/H) becomes a good approximation to f = exp{-M(ln 2)/H} when M << H. Calculations show that you will generally have to have a mass approaching the mass of the universe before “overlap” issues become significant.
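Here is a minimal sketch comparing the two expressions, with f taken as the transmitted pressure fraction and H the halving thickness defined above:

```python
import math

# Sketch: exponential shielding vs the linear approximation, for small M/H.
# H = amount of material that halves the pressure; M = amount actually present.
def f_exact(M, H):
    return math.exp(-M * math.log(2) / H)   # f = exp{-M(ln 2)/H}

def f_linear(M, H):
    return 1.0 - 0.5 * M / H                # f = 1 - (0.5M/H)

H = 1.0
for M in (0.001, 0.01, 0.1, 1.0):
    print(f"M/H = {M:5.3f}:  exact = {f_exact(M, H):.6f},  linear = {f_linear(M, H):.6f}")
# For M << H the two agree closely; "overlap" corrections only matter
# as M approaches H (masses approaching the mass of the universe).
```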
“Consider a Large Dense Disk rotating slowly. I think the LeSage model would predict a reduction in the gravitational attraction when the plane of the disk is parallel to the line joining the center of the disk and the orbiting body? We could have two identical Disks: One rotating about its axis so as to always be parallel to the orbital radius; and the other rotating so as to always be perpendicular to the orbital radius. I would expect the LeSage model to predict a higher gravitational attraction from the latter, contrary to Newtonian gravitational attraction. “
You or I would need to make some calculations to check this. The problem here is that I don’t immediately see the mechanism by which you think there would be a reduction in gravity, or how much of a reduction there would be; do you allow for mass increase due to the speed of rotation, or is that ignored? Many of the “criticisms” that could be laid against a LeSage gravity could also be laid against the Standard Model SU(3)xSU(2)xU(1) forces, which again use exchange radiation. You could equally suggest that Yang-Mills quantum field theory predicts a departure from Coulomb’s law for a large charged rotating disc, along the plane of the disc.
To put this another way, how far should someone go into trying to disprove the model, or resolve all questions, before trying to publish? This comes down to the question of time.
Can I also say that the calculations for http://quantumfieldtheory.org/Proof.htm were extremely difficult to do for the first time. The diagram http://quantumfieldtheory.org/Proof_files/Image31.gif is the result of a great deal of effort in trying to make calculations, not the other way around. The clear picture emerged slowly:
“The universe empirically looks similar in all directions around us: hence the net unshielded gravity force is equal to the total inward force, F = ma ~ mcH, multiplied by the proportion of the shielded area of a spherical surface around the observer (see diagram). The surface area of the sphere with radius R (the average distance of the receding matter that is contributing to the inward gauge boson force) is 4*Pi*R^2. The ‘clever’ mathematical bit is that the shielding area of a local mass is projected on to this area by very simple geometry: the local mass of, say, the planet Earth, the centre of which is distance r from you, casts a ‘shadow’ (on the distant surface 4*Pi*R^2) equal to its shielding area multiplied by the simple ratio (R/r)^2. This ratio is very big. Because R is a fixed distance, as far as we are concerned for calculating the fall of an apple or the ‘attraction’ of a man to the Earth, the most significant variable is the 1/r^2 factor, which we all know is the Newtonian inverse square law of gravity.
“Illustration above: the exchange (gauge boson) radiation force cancels out (although there is compression equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area, since the net force sideways is the same in each direction unless there is a shielding mass intervening. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well-established facts of quantum field theory, recession, etc.”
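To make the geometry of the extract concrete, here is a small sketch with hypothetical numbers (the shielding cross-section and distance are illustrative only; the point is that the fixed R cancels out, leaving the 1/r^2 dependence):

```python
import math

# Sketch: shadow projection geometry from the extract above.
# A local shield of cross-section sigma at distance r casts a shadow of area
# sigma*(R/r)^2 on the distant sphere of area 4*Pi*R^2, so the shielded
# fraction is sigma/(4*Pi*r^2), independent of R.
def shielded_fraction(sigma, r, R):
    shadow_area = sigma * (R / r) ** 2
    sphere_area = 4 * math.pi * R ** 2
    return shadow_area / sphere_area

sigma = 1.0e6   # hypothetical shielding cross-section, m^2
r = 6.4e6       # hypothetical distance to the shielding mass, m
for R in (1e24, 1e25, 1e26):   # the choice of R drops out
    print(f"R = {R:.0e} m: shielded fraction = {shielded_fraction(sigma, r, R):.3e}")
print(f"sigma/(4*Pi*r^2)     = {sigma / (4 * math.pi * r**2):.3e}")
```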
Where disagreements exist, it may be the case that the existing theory is wrong, rather than the new theory. There were plenty of objections to Aristarchus’ solar system because it predicted that the earth spins around daily, which was held to be absurd. Ptolemy casually wrote that the earth can’t be rotating or clouds and air would travel around the equator at 1,000 miles per hour, but he didn’t prove that this would be the case, or state his assumption that the air doesn’t get dragged.
“Refutations” should really be written up in detail so they can be analysed and checked properly. Problems arise in science where ideas are ridiculed instead of being checked with scientific rigor: clearly journal editors and busy peer reviewers are prone to ridicule ideas with strawman arguments without spending much time checking them. It is a problem with elitism, as Witten’s letter shows, http://schwinger.harvard.edu/%7Emotl/witten-nature-letter.pdf . Witten’s approach to criticism of M-theory is not to reply, thus remaining respectful. Yet if I don’t reply to criticism, it is implied that I’m just a fool.
An excellent example is how your papers on the problems in quantum gravity are ignored by string theorists. That proves string theorists are respectable, you see. If they engaged in discussions with their critics, they would look foolish. It is curious that if Witten refuses to discuss problems, he escapes being deemed foolish, but if outsiders do the same then they are deemed foolish. There is such a rigid view taken of the role of authority in science today that hypocrisy is taken for granted by all.
Best wishes,
Nigel