Preliminary pages from the draft book

Solution to a problem with general relativity

A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, partly in advance of observations

This book is an updated and expanded version of a CERN Document Server deposited draft preprint paper, EXT-2004-007, which is now obsolete and can’t be updated there. Please see the additional new calculations and the duality between Yang-Mills exchange radiation and the dynamics of the Dirac sea spacetime fabric of general relativity (in chapter one).

Abstract

In the preprint EXT-2004-007, the observation was made that in space-time the Hubble recession of the mass of the universe around us can be represented either as (recession velocity, v)/(apparent distance in space-time, s) = Hubble parameter, H = v/s, or, equally well, as (recession velocity, v)/(apparent time past in space-time, t) = v/t = v/(s/c) = cv/s = cH, which is the outward acceleration of the matter of the universe as seen in our space-time reference frame (rather than how the universe might hypothetically appear if light and other fields travelled instantly, which cannot occur). Hubble ignored space-time, which is why this fact was not recognised before. The immediate consequence is that this outward acceleration of matter gives an outward force for the big bang by Newton’s empirical second law of motion, F = ma, with a = cH and m the mass of the receding universe observable around us (various other considerations, such as the increase in density in space-time as we look to great distances and earlier eras of the universe, introduce complexities which are analysed in chapter one). By Newton’s empirical third law of motion, this outward force should be associated with an equal, inward-directed reaction force, which allows gravity to be predicted as a local effect of exchange-radiation pressure due to the big bang.

This prediction is substantiated by two distinct proofs which are duals of one another: one for material pressure (particles) and one for radiation pressure (waves). The result is a full prediction of empirically verifiable general relativity: not merely the inverse-square law, but the accurate prediction of the gravitational coupling constant G and the gravitational curvature produced by masses, as well as the elimination of all ‘dark matter’ and ‘dark energy’ problems from general relativity. The cosmological consequences of this mechanism go further, because the exchange-radiation mechanism causes the big bang Hubble recession on large scales while causing gravitation and curvature on small scales. It unifies electromagnetism and gravitation, in the process eliminating the unobserved Higgs mechanism for electroweak symmetry breaking. The 19 parameters of the Standard Model are all predicted by the simple replacement mechanism, providing a full and detailed prediction of the strong, weak, electromagnetic and gravitational interactions.

The author is now aware of a great deal of relevant independent work by other people, including, among others, Louise Riofrio, D. R. Lunsford (whose unification of electromagnetism and gravitation, see EXT-2003-090, by a space-time symmetry with three orthogonal space dimensions and a corresponding three time dimensions, leading him to prove the elimination of the cosmological constant, is dual to the mechanism presented here), Thomas Love, Tony Smith, John Hunter, Hans de Vries, Alejandro Rivero and Carl Brannen.
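As a rough numerical illustration of the a = cH argument above (a sketch only: the Hubble parameter of about 70 km/s/Mpc and the order-of-magnitude 10^52 kg for the mass of the observable universe are assumed round figures for illustration, not results from the book):

```python
# Rough numerical sketch of a = cH and F = ma from the abstract.
# The Hubble parameter and the mass of the observable universe below are
# assumed round figures for illustration only.

c = 2.998e8            # speed of light, m/s
H = 70e3 / 3.086e22    # assumed Hubble parameter, ~70 km/s/Mpc converted to 1/s
m = 1e52               # assumed mass of the observable universe, kg (order of magnitude)

a = c * H              # outward acceleration implied by v/t = cH
F = m * a              # Newton's second law, F = ma

print(f"a = cH ~ {a:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"F = ma ~ {F:.1e} N")       # ~7e42 N with the assumed mass
```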

[To be inserted here when book content is complete: Summary list of predictions and links to the places they occur in the body of the book]

Acknowledgements

Jacques Distler inspired the writing of this technical-level book by suggesting, in a comment on Clifford V. Johnson’s discussion blog, that I’d be taken more seriously if only I’d use tensor analysis in discussing the mathematical physics of general relativity. Walter Babin kindly hosted some papers on his General Science Journal, which is less prejudiced and thus more scientific than a certain other glamorous internet archive, while editors at Electronics World printed them. Peter Woit, Sean M. Carroll, Lee Smolin and ‘Kea’ (Marni D. Sheppeard) discussed in various ways the facts about mainstream string theory propaganda. Edward Witten’s alternative idea, called stringy M-theory, turned out to be ‘not even wrong’. Thank you, Ed!

Contents

Chapter 1: The mathematics and physics of general relativity

Chapter 2: Quantum gravity approaches: string theory and loop quantum gravity

Chapter 3: Dirac’s Equation, Spin and Magnetic Moments, Pair Production, the Polarization of the Vacuum above the IR Cutoff, and Its Role in the Renormalization of Charge and Mass

Chapter 4: The Path Integrals of Quantum Electrodynamics, compared with Maxwell’s classical electrodynamics

Chapter 5: Nuclear and Particle Physics, Yang-Mills theory, the Standard Model, and Representation Theory

Chapter 6: Methodology of doing science: Edward Witten’s stringy definition of the word ‘prediction’; real predictions of this theory based purely on empirical facts (vacuum mechanism for mass and electroweak symmetry breaking at low energy, including Hans de Vries’ and Alejandro Rivero’s ‘coincidence’)

Chapter 7: Riofrio’s and Hunter’s equations, and Lunsford’s unification of electromagnetism and gravitation

Chapter 8: Standard Model mechanism: vacuum polarization and gauge boson field mediators for asymptotic freedom and force unification

Chapter 9: Evidence for the ‘stringy’ nature of fundamental particle cores (trapped Poynting-Heaviside electromagnetic energy currents constitute the static, spinning, radiating charge in capacitor plates, the Yang-Mills exchange radiation being the zero point vacuum field)

Chapter 10: Summary of evidence, comparison of theories, limitations and further work required.

Preface

The errors in the unification of fundamental theories lie in the way general relativity is currently being used, particularly the continuum gravity-source assumptions which are forced into the stress-energy tensor because differential equations cannot, in practice, be used to model true discontinuities. So the lumpiness of quantum field theory isn’t compatible with the continuum of general relativity for purely mathematical reasons, not physical ones. It pays, therefore, to examine what is correct in general relativity, and to identify and isolate what is merely a statistical approximation. The errors are identified and corrected in chapter one, which leads to further ramifications for the rest of physics, analysed and solved in the rest of the book.

Chapter One

The mathematics and physics of general relativity

Until 1998, it was widely held that general relativity predicted a gravitational retardation in the recession of the most distant supernovae, which proved to be in contradiction to the observations of supernova redshifts published that year by Perlmutter et al., and since corroborated by many others.1 However, in 1996 a mechanism of gravity had been advanced which offered an approach to predicting (uniquely) the universal gravitational ‘constant’, G, and which resolves the problem and many others, including the flatness problem, the smoothness of the cosmic background radiation originating from 300,000 years after the big bang, Standard Model particle physics parameters, and the underlying mechanism of quantum field theory.2

This chapter deals with the correct derivation and application of the Einstein-Hilbert field equation of general relativity, including quantum corrections that pertain to gravitational phenomena.

1.1 The mathematical physics of the Einstein-Hilbert field equation

The Einstein-Hilbert field equation of general relativity, R_ab – (1/2) g_ab R = T_ab, was obtained in November 1915 from solid mathematical and physical considerations. Einstein’s equivalence principle, that inertial accelerations and gravitational accelerations are indistinguishable, is one basis of the physical description of gravitation. Two other vital ingredients are non-Euclidean geometry, described by tensor calculus, and the conservation of mass-energy, which produces the complicated left hand side of the equation, specifically introducing the ‘– (1/2) g_ab R’ term.

Einstein first equated the curvature of space-time (describing acceleration fields and curved geodesics) to the source of the gravitational field (assumed to be some kind of continuum such as a classical field or a ‘perfect fluid’) simply as R_ab = T_ab. Here, R_ab is the Ricci tensor (a description of curvature based on Ricci’s development of Riemann’s non-Euclidean spacetime geometry) and T_ab is the stress-energy tensor.

This simple equation, R_ab = T_ab, was wrong. It turned out that T_ab should have zero ‘divergence’ in order that mass-energy is conserved locally. The easiest way to describe this is by analogy to the Maxwell equation for the divergence of a magnetic field B, i.e., div B = 0. As many magnetic field lines radiate from the north pole of a magnet as converge on the south pole, so the divergence of the field (which is simply the sum of the gradients of the field in the three orthogonal spatial directions) is always exactly zero. In the case of the stress-energy tensor, T_ab, the local conservation of mass-energy density would be violated by, for example, the Lorentz contraction of volume with motion, which changes the density of the field source and hence the source of gravitation.

T_ab = rho u_a u_b

Taking just the energy density component, a = b = 0,

T_00 = rho gamma^2 = rho (1 – v^2/c^2)^(-1)

Hence, T_00 will increase towards infinity as v tends towards c. If, therefore, curvature were simply equal to the stress-energy tensor, R_ab = T_ab, the curvature would depend upon the reference frame of the observer, increasing towards infinity as velocity increases towards c.
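A one-line numerical check of this divergence (a sketch; the sample velocities are arbitrary illustrative values):

```python
# T_00 / rho = gamma^2 = 1/(1 - v^2/c^2) grows without bound as v -> c.
for v_over_c in (0.0, 0.5, 0.9, 0.99, 0.999):
    gamma_sq = 1.0 / (1.0 - v_over_c**2)
    print(f"v/c = {v_over_c}:  T_00/rho = {gamma_sq:.1f}")
```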

Einstein, in his 1916 paper ‘The Foundation of the General Theory of Relativity’, recognised this requirement: ‘The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ (Italics are Einstein’s own.)

In order to ensure that the source of the curvature describing gravitation is … [The ten chapters of the full book will be downloadable from a link at http://quantumfieldtheory.org/ when completed, shortly.  It will replace the ramshackle, hit and miss compendium of ideas and calculations on pages like http://quantumfieldtheory.org/Proof.htm – which is where a hyperlinked index page for the new book will go – and the recent updates in numerous blog posts and comments, with a structured, completely rewritten thesis, eliminating repetitions and other annoying aspects of presentation.]

SU(3) is OK, but SU(2)xU(1) and the Higgs mechanism are too complicated; SU(2) is rich enough, with a very simple mass mechanism, to encompass the full electroweak phenomena, allowing the strengths of the electromagnetic force and of the weaker gravitational force to be predicted correctly

Illustration of physical mechanisms for exchange radiation in quantum field theory, and the modification to the Standard Model implied by them. The Standard Model should be modified to SU(3)xSU(2), where the SU(2) has a mechanism giving chiral symmetry and mass at certain energies, or perhaps to SU(3)xSU(2)xSU(2), with one of the SU(2) groups describing massive weak-force gauge bosons, and the other SU(2) describing electromagnetism and gravity (massless versions of the W+ and W- mediate electric fields, and the massless Z is just the photon, which mediates gravity in the network of particles that give rise to mass).

It is simply untrue that electromagnetic gauge boson radiation must be uncharged: this condition only holds for isolated photons, not for exchange radiation, where there is continual exchange of gauge bosons between charges (gauge bosons going in both directions between charges, an equilibrium). If the massless gauge bosons are uncharged, the magnetic field curls cancel within each individual gauge boson (seen from a large distance), preventing infinite self-inductance, so they will propagate. This is why normal electromagnetic radiation, like light photons, is uncharged (the varying electromagnetic field of the photon contains as much positive electric field as negative electric field).

If the gauge bosons are charged and massless, then you would not normally expect them to propagate, because their magnetic fields cause infinite self-inductance, which would prevent propagation. However, if you have two similar, charged, massless radiations flowing in opposite directions, their interaction will cancel out the magnetic fields, leaving only the electric field component, as observed in electric fields.

This has been well investigated in the transmission line context of TEM (transverse electromagnetic) waves (such as logic steps in high-speed digital computers, where cross-talk, i.e., mutual inductance, is a limiting factor on integrated circuit design) propagated by a pair of parallel conductors, with charge flowing in one direction on one conductor and in the opposite direction on the other. When a simple capacitor, composed of metal plates separated by a small distance of vacuum (the vacuum acts as a dielectric, i.e., the permittivity of free space is not zero), is charged up by light-velocity electromagnetic energy, that energy has no mechanism to slow down when it enters the capacitor, which behaves as a transmission line. Hence, you get the ‘energy current’ bouncing in all directions concurrently in a ‘steady, charged’ capacitor. The magnetic field components of the TEM waves cancel, leaving just the electric field (electric charge) that is observed. (See the accompanying illustration.)
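The staircase charging of a line treated as a capacitor can be illustrated with a short simulation (a sketch under assumed component values: a 1 V step applied through a source resistance into a lossless, open-circuited line; the resistance and impedance figures are arbitrary examples, not values from the text):

```python
# Step charging of an open-circuited lossless transmission line ('capacitor'):
# the launched TEM wave bounces back and forth, and the far-end voltage
# rises in a staircase towards the applied step voltage.

V = 1.0      # applied step voltage, volts (assumed example value)
R = 1000.0   # source resistance, ohms (assumed example value)
Z0 = 50.0    # characteristic impedance of the line, ohms (assumed example value)

rho_source = (R - Z0) / (R + Z0)   # reflection coefficient at the source end
wave = V * Z0 / (R + Z0)           # amplitude of the first launched wave
v_end = 0.0                        # voltage at the open (far) end

for round_trip in range(1, 11):
    v_end += 2.0 * wave            # open end reflects fully: incident + reflected add
    wave *= rho_source             # wave re-reflects off the source before the next arrival
    print(f"after round trip {round_trip}: V_end = {v_end:.4f} V")
# V_end approaches the full step voltage V as the bouncing continues.
```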

The physics of general relativity

Copy of a comment to Kea’s blog in case deleted for length: http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

… This post reminds me of a clip on YouTube showing Feynman in November 1964 giving his Character of Physical Law lectures at Cornell (these lectures were filmed for the BBC, which broadcast them on BBC2 in 1965):

“In general we look for a new law by the following process. First we guess it. Don’t laugh… Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is…”

http://www.youtube.com/watch?v=ozF5Cwbt6RY

I haven’t seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as it catches the audience’s reactions. Feynman has a nice discussion of the LeSage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn’t in general even need testing for all predictions, simply because it’s just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don’t keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the use of the tensor framework, which dates back to 1901, when developed by Ricci and Tullio Levi-Civita) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn’t have zero divergence where zero divergence is needed “in order to satisfy local conservation of mass-energy”.

The zero divergence criterion just specifies that you need as many field lines going inward toward the source as going outward from it. You can’t violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don’t obey mass-energy conservation because of the gamma factor’s role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 – (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson’s classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab – (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of scalars: the energy density T_00 plus pressure terms T_11, T_22 and T_33 in T_ab (“these four components making T are just the diagonal – scalar – terms in the matrix for T_ab”).

The reason for this choice is stated to be that T_ab – (1/2)*(g_ab)*T gives zero divergence “due to Bianchi’s identity”, which is a bit mathematically abstract, but obviously what you are doing physically by subtracting (1/2)(g_ab)T is just getting rid from T_ab what is making it give a finite divergence.

Hence the corrected R_ab ~ T_ab – (1/2)*(g_ab)*T [“which is equivalent to the usual convenient way the field equation is written, R_ab – (1/2)*(g_ab)*R = T_ab”].

Notice that since T_00 is equal to its own trace T, you see that

T_00 – (1/2)(g_ab)T

= T – (1/2)(g_ab)T

= T(1 – 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly twice as much gravitational acceleration acting at light speed as there is at low speed. This is clearly why light gets deflected in general relativity by twice the amount predicted by Newtonian gravitational deflection (a = MG/r^2 where M is sun’s mass).
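That factor of two is easy to check numerically (a quick sketch with standard constants; the grazing-incidence deflection formulas 2GM/(c^2 R) for the Newtonian case and 4GM/(c^2 R) for general relativity are the standard textbook expressions assumed here):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30       # mass of the sun, kg
c = 2.998e8        # speed of light, m/s
R = 6.96e8         # solar radius (grazing impact parameter), m

newtonian = 2 * G * M / (c**2 * R)    # deflection treating light as a slow particle
einstein  = 4 * G * M / (c**2 * R)    # general relativistic deflection (twice as much)

to_arcsec = 180 / math.pi * 3600
print(f"Newtonian deflection: {newtonian * to_arcsec:.2f} arcsec")   # ~0.87 arcsec
print(f"GR deflection:        {einstein * to_arcsec:.2f} arcsec")    # ~1.75 arcsec
```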

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 Lectures on Physics: gravitation contracts (shrinks) the earth’s radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn’t affect transverse lines running perpendicularly to the radial gravitational field lines, so the circumference of the earth isn’t contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
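Both figures quoted from Feynman are easy to reproduce (a quick sketch with standard constants; the formulas are just the ones stated above):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the earth, kg
c = 2.998e8      # speed of light, m/s
R = 6.371e6      # mean radius of the earth, m

radial_contraction = G * M / (3 * c**2)   # Feynman's (1/3)GM/c^2
ratio_excess = radial_contraction / R     # fractional excess of circumference/diameter over Pi

print(f"radial contraction ~ {radial_contraction * 1000:.2f} mm")   # ~1.5 mm
print(f"circumference/diameter ~ (1 + {ratio_excess:.1e}) * Pi")    # ~(1 + 2.3e-10) * Pi
```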

This distortion to geometry – presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) – explains why there is spacetime curvature. It’s a shame that general relativity has become controversial just because it’s been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two-dimensional analogy – and if you correct it by making it three dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don’t want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn’t need testing.)

*********************

Copy of a follow-up comment:

http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it’s not speculative and my argument is that it doesn’t need testing in that case. There are two ways to do science:

* Newton’s approach: “Hypotheses non fingo” [I frame no hypotheses].

* Feynman’s dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation; the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me – although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there’s no statistically significant experimental refutation of the equivalence principle, and it’s got a lot of evidence behind it); spacetime (which has evidence from electromagnetism); the conservation of mass-energy; etc.

All these are solid. So the field equation of general relativity, which is key to making the well tested, unequivocal predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection to fit observations of how much “dark energy” you assume is powering the cosmological constant, and how much dark matter is around that can’t be detected in a lab for some mysterious reason), is really based on solid experimental facts.

It’s as pointless to keep testing – within the range of the solid assumptions on which it is based – a formula based on solid facts, as it is to keep testing say Pythagoras’ law for different sizes of triangle. It’s never going to fail (in Euclidean geometry, ie flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton’s no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can’t disprove it, because the maths has physical correspondence to things already known. The fact it predicts other things like the deflection of starlight by gravity when passing the sun as twice the amount predicted by Newton’s law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn’t being tested because it might be wrong. It’s only being tested for psychological reasons and publicity, and because of the false Popperian idea that speculations must forever remain falsifiable (i.e., uncertain, speculative, or guesswork).

The failure of Popper is that he doesn’t include proofs of laws which are based on solid experimental facts.

First, consider Archimedes’ proof of the law of buoyancy in On Floating Bodies. The water is X metres deep, and the pressure in the water under a floating body is the same as that at the same height above the seabed regardless of whether a boat is above it or not. Hence, the weight of water displaced by the boat must be exactly equal to the weight of the boat, so that the pressure is unaffected whether or not a boat is floating above a fixed submerged point.

This law is not a falsifiable law. Nor are other empirically based laws. The whole idea of Popper, that you can falsify a solidly empirically based scientific theory, is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those “theories” were not based on solid facts, but were based upon guesses. String theory is also a guess, but is not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can’t be tested, so it’s not even a Popper-type ever-falsifiable speculative theory; it’s far worse than that: it’s “not even wrong” to begin with).

Similarly, Einstein’s original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite without a shred of evidence (based on popular opinion and the “so many people can’t all be wrong” fallacy). Actually, from Olbers’ paradox, Einstein should have realised that the big bang is the correct theory.

The big bang goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, and was basically a fix to Olbers’ paradox: the problem that if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun. The fact that the sun is close to us, and so gives a higher inverse-square-law intensity than a distant star, is balanced by the fact that the number of stars covering any given solid angle of sky increases as the square of the distance. The correct resolution of Olbers’ paradox is not – contrary to popular accounts – the limited size of the universe in the big bang scenario, but the redshifts of distant stars in the big bang: after all, we’re looking back in time with increasing distance, and in the absence of redshift we’d see extremely intense radiation from the high density early universe at great distances.
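The shell-counting argument above can be made concrete in a few lines (a sketch only; the star density and luminosity are arbitrary units, chosen just to show that each equal-thickness shell contributes the same flux):

```python
# Olbers' paradox: in a static, infinite, non-expanding universe each spherical
# shell of equal thickness contributes the same flux, so the total diverges.
import math

n = 1.0    # number density of stars (arbitrary units)
L = 1.0    # luminosity per star (arbitrary units)

for r in (10.0, 100.0, 1000.0):
    shell_thickness = 1.0
    stars_in_shell = n * 4 * math.pi * r**2 * shell_thickness   # grows as r^2
    flux_per_star = L / (4 * math.pi * r**2)                    # falls as 1/r^2
    print(f"r = {r}: shell flux = {stars_in_shell * flux_per_star:.3f}")
# The r^2 growth in star numbers exactly cancels the 1/r^2 dimming,
# so summing over infinitely many shells gives an infinitely bright sky.
```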

Erasmus Darwin wrote in his 1791 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go with popular prejudice and ignore Olbers’ paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed an infinite, eternal universe because he wanted to discredit ‘fiat lux’, and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1917 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take the right value so that its repulsion would exactly cancel out the gravitational attraction of the galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As is now known, he was experimentally refuted over the cosmological constant by Hubble’s observations of redshift increasing with distance, which is a redshift of the entire spectrum of light, uniformly caused by recession, and not the result of scattering of light by dust (which would give a frequency-dependent redshift) or “tired light” nonsense.

However, the Hubble disproof is not substantive to me. Einstein was wrong because he built the cosmological constant extension on prejudice not facts, he ignored evidence from Obler’s paradox, and in particular his model of the universe is unstable. Obviously his cosmological constant fix suffered from the drawback that galaxies are not all spaced at the same distance apart, and his idea to produce stability in an infinite, eternal universe was a failure physically because it was not a stable solution. Once you have one galaxy slightly closer to another than the average distances, the cosmological constant can’t hold them apart, so they’ll eventually combine, and that will set off more aggregation.

The modern application of the cosmological constant (to prevent the predicted long range gravitational deceleration of the universe, since no deceleration is present in the redshift data for distant supernovae, etc.) is now suspect experimentally, because the “dark energy” appears to be “evolving” with spacetime. But it’s not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There’s no reason to assume that gravity should slow the galaxies at great distances. Instead,

“… the flat universe is just not decelerating, it isn’t really accelerating…”

The reason it isn’t decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge boson exchange radiation causing forces, and when this exchange radiation passes between receding masses over vast distances it gets redshifted, so its energy drops according to Planck’s law, E = hf. That’s one simple reason why general relativity – which doesn’t include quantum gravity with this effect of redshift of gauge bosons – falsely predicts a gravitational deceleration which wasn’t seen.
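The energy loss appealed to here is just the ordinary redshift of radiation, E = hf with the frequency reduced by a factor of 1 + z (a sketch; the emitted frequency and redshift values below are arbitrary illustrative examples):

```python
# Redshifted radiation carries less energy: E = hf and f_observed = f_emitted/(1 + z).
h = 6.626e-34          # Planck's constant, J s
f_emitted = 1.0e15     # emitted frequency, Hz (arbitrary example)

for z in (0.0, 0.5, 1.0, 5.0):
    f_observed = f_emitted / (1 + z)
    E = h * f_observed
    print(f"z = {z}: received quantum energy = {E:.2e} J "
          f"({100/(1+z):.0f}% of the emitted energy)")
```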

The mainstream response to the anomaly, adding an epicycle (dark energy, a small positive cosmological constant), is just what you’d expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy’s system of adding more epicycles to overcome errors).

Many thanks for the discussion you gave of issues with the equivalence principle. I can’t grasp what the problem is with inertial and gravitational masses being equal, to within experimental error, to many decimal places. To me it’s a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum to explain the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for “doubly special relativity”, whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz’s original theory, Lorentz invariance can arise physically from contraction of a body in the direction of motion in a physically real field of force-causing radiation, or whatever is the causative agent in quantum gravity.

Many thanks again for the interesting argument. Best wishes, Nige

Updates

Another comment:

http://kea-monad.blogspot.com/2007/04/whats-new.html

“But note that White seems to think that DE has solid foundations.” – Kea

Even Dr Woit might agree with White, because anything based on observation seems more scientific than totally abject speculation.

If you assume the Einstein field equation to be a good description of cosmology and to not contain any errors or omissions of physics, then you are indeed forced by the observations that distant supernovae aren’t slowing, to accept a small positive cosmological constant and corresponding ‘dark energy’ to power that long range repulsion just enough to stop the gravitational retardation of distant supernovae.

Quantum gravity is supposed – by the mainstream – to only affect general relativity on extremely small distance scales, i.e., extremely strong gravitational fields.

According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, the energy is related to the duration of existence by: (energy)*(time) ~ h-bar.

Since time = distance/c,

(energy)*(distance) ~ c*h-bar.

Hence,

(distance) ~ c*h-bar/(energy)
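Numerically, the product c*h-bar is about 197 MeV femtometres, so the relation above ties familiar scales together (a sketch using standard constants):

```python
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

hbar_c = hbar * c                        # ~3.16e-26 J m
print(f"c*h-bar ~ {hbar_c / (1e6 * eV) / 1e-15:.0f} MeV fm")   # ~197 MeV fm

# Example: the energy scale corresponding to a distance of 1 fm (nuclear scale)
distance = 1e-15                         # metres
energy = hbar_c / distance               # joules
print(f"1 fm corresponds to ~ {energy / (1e6 * eV):.0f} MeV")  # ~197 MeV
```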

Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and would interact strongly with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong: in solving a ‘problem’ which might not even be real, by coming up with a renormalizable theory of quantum gravity based on gravitons, which they then hype as being ‘the prediction of gravity’.

The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, i.e., the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

The problem is that gravity has only one type of ‘charge’, mass. There’s no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.

This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.

However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated to the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances, also reduces the observable mass by the same factor. In other words, if there was no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.

Penrose claims in his book ‘The Road to Reality’ that the bare core charge of the electron is ‘probably’ (137.036^0.5)*e = 11.7e.

In getting this he uses Sommerfeld’s fine structure parameter,

alpha = (e^2)/(4*Pi*permittivity of free space*c*h-bar) = 1/137.036…

Hence, e^2 is proportional to alpha, so you’d expect from dimensional analysis that electric charge shielding should be proportional to (alpha)^0.5.

However, this is wrong physically.

From the uncertainty principle, the range r of a gauge boson is related to its energy E by:

E = hc/(2*Pi*r).

Since the force exerted is F = E/r (from: work energy = force times distance moved in direction of the applied force), we get

F = E/r = [hc/(2*Pi*r)]/r

= hc/(2*Pi*r^2)

= (1/alpha)*(Coulomb’s law for electrons)

Hence, the electron’s bare core charge really has the value e/alpha, not e/(alpha^0.5) as Penrose guessed from dimensional analysis. This “leads to predictions of masses.”
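Both numbers above are easy to verify numerically (a sketch with standard constants; this only checks the arithmetic of the ratio claimed in the text, not its physical interpretation):

```python
import math

e = 1.602e-19        # electron charge, C
eps0 = 8.854e-12     # permittivity of free space, F/m
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = 1/{1/alpha:.1f}")        # ~1/137 (1/137.036 with more precise constants)

r = 1e-12                                # arbitrary separation, m (cancels in the ratio)
F_gauge = hbar * c / r**2                # the F = E/r = h*c/(2*Pi*r^2) force above
F_coulomb = e**2 / (4 * math.pi * eps0 * r**2)
print(f"F_gauge / F_coulomb = {F_gauge / F_coulomb:.1f}")   # ~137, i.e. 1/alpha
```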

It’s really weird that this simple approach to calculating the total amount of vacuum shielding for the electron core is so ignorantly censored out. It’s published in an Apr. 2003 Electronics World paper, and I haven’t found it elsewhere. It’s a very simple calculation, so it’s easy to check both the calculation and its assumptions, and it leads to predictions.

I won’t repeat at length the argument that dark energy is a false theory here. Just let’s say that over cosmological distances, all radiation, including gauge bosons, will be stretched and degraded in frequency and hence in energy. Thus the exchange radiation which causes gravity will be weakened by redshift due to expansion over large distances, and when you include this effect on the gravitational interaction coupling parameter G in general relativity, general relativity then predicts the supernova redshifts correctly. Instead of inventing an additional unobservable (‘dark energy’) to offset an unobserved long range gravitational retardation, you just have no long range gravitational deceleration at all. Hence there is no outward acceleration to offset inward gravity at long distances. The universe is simply flat on large scales because gravity is weakened by the redshift of gauge bosons over great distances in an expanding universe where gravitational charges (masses) are receding from one another. Simple.

Another problem with general relativity as currently used is the T_ab tensor, which is usually represented by a smooth source for the gravitational field, such as a continuum of uniform density.

In reality, the whole idea of density is a statistical approximation, because matter consists of particles of very high density distributed in the vacuum. So the idea that general relativity shows that spacetime is flat on small distance scales is just bunk; it’s based on the false statistical approximation (which holds on large scales, not on small scales) that you can represent the source of gravity (i.e., quantized particles) by a continuum.

So the maths used to make T_ab generate solvable differential equations is an approximation which is correct on large scales (after you make allowances for the mechanism of gravity, including redshift of gauge bosons exchanged over large distances), but is inaccurate in general on small scales.

General relativity doesn’t prove that a continuum exists; it requires a continuum, because it’s based on continuously variable differential tensor equations which don’t easily model discontinuities in the vacuum (i.e., real, quantized matter). So the nature of general relativity forces you to use a continuum as an approximation.

Sorry for the length of comment, feel free to delete.

While I’m listing comments made over there, here’s one showing that, according to the IR cutoff of quantum field theory, Hawking radiation is possible for electrons as black holes, but isn’t generally possible for really massive black holes. The IR cutoff means that pair production (which causes vacuum polarization and hence screening of electric charge) only occurs above a threshold of about 10^18 V/m, so Hawking radiation requires a large net electric charge/mass ratio for the black hole, such that the electric field strength in the vacuum is at least 10^18 V/m at the event horizon:

http://kea-monad.blogspot.com/2007/04/m-theory-lesson-37.html

Professor Smolin has a discussion of this entropy argument on page 90 of TTWP: “The first crucial result connecting quantum theory to black holes was made in 1973 by Jacob Bekenstein … He made the amazing discovery that black holes have entropy. Entropy is a measure of disorder, and there is a famous law, called the second law of thermodynamics, holding that the entropy of a closed system can never decrease. [Notice he says “closed system” conveniently without defining it, and if the universe is a closed system then the second law of thermodynamics is wrong: at 300,000 years after the big bang the temperature of the universe was a uniform 4000 K with extremely little variation, whereas today space is at 2.7 K and the centre of the sun is at 15,000,000 K. Entropy for the whole universe has been falling, in contradiction to the laboratory-based (chemical experiments) basis of thermodynamics. The reason for this is the role of gravitation in lumping matter together, organizing it into hot stars and empty space. This is insignificant for the chemical experiments in labs upon which the laws of entropy were based, but it is significant generally in physics, where gravity lumps things together over time, reducing entropy and increasing order. There is no inclusion of gravitational effects in thermodynamic laws, so they’re plain pseudoscience when applied to gravitational situations.] Bekenstein worried that if he took a box filled with a hot gas – which would have a lot of entropy, because the motion of the gas molecules was random and disordered – and threw it into a black hole, the entropy of the universe would seem to decrease, because the gas would never be recovered. [This is nonsense because gravity in general works against rising entropy; it causes entropy to fall! Hence the big bang went from uniform temperature and maximum entropy (disorder, randomness of particle motions and locations) at early times to very low entropy today, with a lot of order. The ignorance of the role of gravitation on entropy by these people is amazing.] To save the second law, Bekenstein proposed that the black hole must itself have an entropy, which would increase when the box of gas fell in, so that the total entropy of the universe would never decrease. [But the entropy of the universe is decreasing due to gravitational effects anyway. At early times the universe was a hot fireball of disorganised hydrogen gas at highly uniform temperature. Today space is at 2.7 K and the centres of stars are at tens of millions of kelvins. So order has increased with time, and entropy – disorder – has fallen with time.]”

On page 91, Smolin makes clear the errors stemming from Hawking’s treatment:

“Because a black hole has a temperature, it will radiate, like any hot body.”

This isn’t in general correct either, because the mechanism Hawking suggested for black hole radiation requires pair production to occur near the event horizon, so that one of the pair of particles can fall into the black hole and the other particle can escape. This required displacement of charges is the same as the condition for polarization of the vacuum, which can’t occur unless the electric field is above a threshold (cutoff) of about 10^18 V/m.

In general, a black hole will not have a net electric field at all, because neutral atoms fall into it to give it mass. Certainly it is unlikely that there will be an electric field strength of 10^18 V/m at the event horizon of the black hole. Hence there are no particles escaping. Hawking’s mechanism is that the escaping particles outside the event horizon annihilate into gamma rays, which constitute the “Hawking radiation”.

Because of the electric field threshold required for pair production, there will be no Hawking radiation emitted from large black holes in the universe: there is no mechanism because the electric field at the event horizon will be too small.

The only way you can get Hawking radiation is where the condition is satisfied that the event horizon radius of the black hole, R = 2Gm/c^2, corresponds to an electric field strength exceeding the QFT pair production threshold of E = (m^2)(c^3)/(e*h bar) = 1.3*10^18 volts/metre, where e is the electron’s charge.

Since E = F/q = Q/(4*Pi*Permittivity*r^2) v/m, the threshold net electric charge Q that a black hole must carry in order to radiate Hawking radiation is

E = (m^2)(c^3)/(e*h bar)

= Q/(4*Pi*Permittivity*r^2)

= Q/(4*Pi*Permittivity*{2Gm/c^2}^2)

Hence, the minimum net electric charge a black hole must have before it can radiate is

Q = 16*Pi*(m^4)*(G^2)*(permittivity of free space)/(c*e*h-bar)

Notice the fourth power dependence on the mass of the black hole! The more massive the black hole, the more electric charge it requires before Hawking radiation emission is possible.
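The threshold field, and the minimum charge the expression above gives for an electron-mass black hole, can be checked numerically (a sketch with standard constants; the Q expression evaluated is the one written above, taking the same mass m in both the pair-production threshold and the horizon radius, as the text does):

```python
import math

m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # electron charge, C
hbar = 1.055e-34     # reduced Planck constant, J s
eps0 = 8.854e-12     # permittivity of free space, F/m
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2

# Pair-production (vacuum polarization) threshold field quoted above:
E_threshold = m_e**2 * c**3 / (e * hbar)
print(f"E_threshold ~ {E_threshold:.2e} V/m")    # ~1.3e18 V/m

# Minimum net charge from the expression above, evaluated for m = electron mass:
m = m_e
Q_min = 16 * math.pi * m**4 * G**2 * eps0 / (c * e * hbar)
print(f"Q_min for an electron-mass black hole ~ {Q_min:.1e} C")
print(f"(compare the electron charge itself, e = {e:.3e} C)")
```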

**********

For an earlier post on Hawking radiation, see https://nige.wordpress.com/2007/03/08/hawking-radiation-from-black-hole-electrons-causes-electromagnetic-forces-it-is-the-exchange-radiation/ where it is shown that Hawking radiation appears to be the source of the electromagnetic force, since the radiant power of an electron as a black hole is 10^40 times stronger than the gravity mechanism I published, which is based only on empirical facts (observations of the Hubble recession of galaxies in spacetime, and empirically based laws of motion), implying a revised electroweak gauge boson unification and symmetry breaking scheme.

Are there hidden costs of bad science in string theory?

The invention of the world’s first marketed wafer-scale integration product, a 160 MB solid state memory, back in 1988 won ‘Product of the Year Award’ from the U.S. journals Electronic Design (26 October 1989) and Electronic Products (January 1990). You might not believe it, yet its physics was suppressed because it was based on a cross-talk discovery that was heresy to the supposedly secure foundations of stringy speculation. The original motivation was to avoid fatal risks from cross-talk, as explained in Electronics World, September 2003:

‘… during the Falklands War, the British warship HMS Sheffield had to switch off its radar looking for incoming missiles … This is why it did not see incoming Exocet missiles, and you know the rest. How was it that after decades of pouring money into the EMC community, this could happen … that community has gone into limbo, sucking in money but evading the real problems, like watching for missiles while you talk to HQ.’

Back in the Sunday Times of 12 March 1989, the journalist Jane Bird interviewed the inventor, who explained its significance: each processor could correspond to a square mile of airspace, avoiding accidents. The data transmission rate is crucial in this situation, and this is the whole point. The inventor had come up with an empirical theory in 1967 which worked, but which disclosed problems in mainstream electromagnetic theory. The latter was its undoing, apparently just because string theory was built by people who knew nothing about the particle-wave duality behind the physics of transmission lines, and who were certain it was all rubbish (despite it working and being made into a multimillion-pound product):

‘In July last year, problems with the existing system were highlighted by the tragic death of 71 people, including 50 school children, due to the confusion when Swiss air traffic control noticed too late that a Russian passenger jet and a Boeing 757 were on a collision path. The processing of extensive radar and other aircraft input information for European air space is a very big challenge, requiring a reliable system to warn air traffic controllers of impending disaster. So why has Ivor Catt’s computer solution for Air Traffic Control been ignored by the authorities for 13 years?’ – Electronics World, January 2003, p12.

Weirdly, string theory is the main answer, as I found out from the responses to such articles! It seems as if things go like this: traditionally, all theories are provisional and falsifiable (never proved) and people keep looking for errors. But when you move to unification, the theories which people claim to be unifying are then assumed to have been proved correct (I won’t discuss problems with the graviton here). If you merely point out (let alone correct) an error in the foundations, it becomes a heresy, and you are treated as if you are a vandal. Actually, the vandals are those who build on bad foundations and censor corrections. Why on earth would string theorists, who can’t predict anything, want to mislead the world into thinking that the transmission line mechanism has been officially resolved? They don’t, and they’re not directly censoring it: it’s the community as a whole, as led by string theory, which is censoring it.

They just want string theory to be left with no alternative of a checkable kind. String theorists can always gain relative greatness by polluting or scorching the ground, as it were, so that nobody else will be listened to. You can’t make yourself heard over their hype, which is not based on experiments. So are these deaths of kids from bad technology, maintained by insistence upon bad science, really necessary? String theorists won’t, you can be sure, accept any blame for anything (nor do any dictators), and they will claim that their ‘physics’ is at worst harmless and a gallant effort. I don’t see what’s gallant in a boring extra-dimensional system which leads people to sneer at life-saving innovations whose correct physics is based on experimentally confirmed data (gathered after Maxwell’s equations had been developed) which contradicts mainstream errors, and to try to get them blocked from publication and implementation, not always with success:

I. Catt, ‘Crosstalk (Noise) in Digital Systems,’ IEEE Transactions on Electronic Computers, vol. EC-16 (December 1967), pp. 749-58. Also papers proving that the inductor and transformer are really transmission lines like capacitors, published in Proc. IEE, June 1983 and June 1987.

I have in an old post an explanation of the correct mechanism of displacement current, replacing Maxwell’s mess with a basis for electromagnetic cross-talk that is consistent with quantum field theory. (This is not discussed in the Catt paper mentioned above, which just uses some empirical rules about ‘energy current’ which ignore electric charge current and were developed originally by Heaviside.)

So is there a reason why virtually nobody listens? Well, with all due respect to those who don’t like hearing the analogy to 1933-45 Germany, most people really do subconsciously (at least!) want to ignore the development of life-saving innovations and the development of science, simply because they don’t understand what science is and don’t like it. Those who don’t like science include people paid to do science. They prefer stuff like string theory, and try to call that science. Because of this muddle, the really fundamental science is censored out, and endless speculations which are not science take its place in what used to be the most appropriate journals. A political advocacy of eugenics in Germany at that time couldn’t be overridden by the facts, because virtually nobody wanted to hear. Brainwashing is today replaced by its stringy equivalent, branewashing. Low-level radiation is another example of a science being controlled by politics.

By the time the protein P53 repair mechanism for DNA breaks was discovered and the Hiroshima-Nagasaki effects of radiation were accurately known, the nuclear and health physics industries had been hyping inaccurate radiation effects models which ignored non-linear effects (like saturation of the normal P53 repair mechanism of DNA) and the effects of dose rate for twenty years.

The entire industry had become indoctrinated in the philosophy of 1957, and there was no going back. Most health physicists are employed by the nuclear or radiation industry, at reactors or in medicine and research, so all these people have a vested interest in not rocking their own boat. The only outsiders around seem to be politically motivated in one direction only (anti-nuclear), so there’s a standoff. Virtually everyone who enters the subject of health physics gets caught in the same trap, so there is no mechanism in place to allow for any shift of consensus.

To see possible consequences of the general stagnation and the old-fashioned poor standing of physics in the student community, try this report. When I mentioned the problem in Electronics World years earlier, my report was also ignored, and the problem became worse. There’s a peculiar ‘shoot the messenger’ policy that deters you from pointing out why physics dictatorship by string theorists isn’t helping, and is in fact misleading almost everyone. Quietly publishing the facts doesn’t really start much of a debate or get anywhere when there is so much hype giving precedence to speculation, which falsely claims that mainstream electromagnetism is completely accurate and well established, even though it needs serious corrections for quantum field theory. It’s the old story of people trying to run before they can walk: sort out electromagnetism, then you can start to build on that. I should add that science is not personal property, and facts can’t be dismissed as merely personal beliefs or opinions.