Copy of a comment to Kea’s blog in case deleted for length: http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

… This post reminds me of a clip on YouTube showing Feynman in November 1964 giving his *Character of Physical Law* lectures at Cornell (these lectures were filmed for the BBC, which broadcast them on BBC2 in 1965):

“In general we look for a new law by the following process. First we guess it. Don’t laugh… Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is…”

– http://www.youtube.com/watch?v=ozF5Cwbt6RY

I haven’t seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as it catches the audience’s reactions. Feynman gives a nice discussion of the Le Sage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn’t in general even need testing for all predictions, simply because it’s just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don’t keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the use of the tensor framework, which dates back to 1901, when it was developed by Gregorio Ricci-Curbastro and Tullio Levi-Civita) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn’t have zero divergence where zero divergence is needed “in order to satisfy local conservation of mass-energy”.

The zero divergence criterion just specifies that you need as many field lines going inward from the source as going outward from the source. You can’t violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don’t obey mass-energy conservation because of the gamma factor’s role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 – (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!
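A quick Python sketch of how the gamma factor drives this divergence (my illustration, in units with c = 1; the rest-frame density value is an arbitrary placeholder):

```python
# Sketch: the gamma factor squared, and hence the observed energy density
# T_00 ~ rho * gamma^2, grows without limit as v -> c.
def gamma(v, c=1.0):
    """Lorentz factor for speed v (units with c = 1)."""
    return (1.0 - v**2 / c**2) ** -0.5

rho = 1.0  # rest-frame energy density, arbitrary units
for v in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {v:5.3f}  gamma^2 = {gamma(v)**2:12.3f}  T_00 ~ {rho * gamma(v)**2:12.3f}")
```

The printed values climb steeply towards infinity as v approaches 1, which is the divergence problem described above.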

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson’s classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab – (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of scalars: the energy density T_00 plus pressure terms T_11, T_22 and T_33 in T_ab (“these four components making T are just the diagonal – scalar – terms in the matrix for T_ab”).

The reason for this choice is stated to be that T_ab – (1/2)*(g_ab)*T gives zero divergence “due to Bianchi’s identity”, which is a bit mathematically abstract; but physically, what subtracting (1/2)*(g_ab)*T does is simply remove from T_ab the part that gives it a finite divergence.

Hence the corrected R_ab ~ T_ab – (1/2)*(g_ab)*T [“which is equivalent to the usual convenient way the field equation is written, R_ab – (1/2)*(g_ab)*R = T_ab”].

Notice that when T_00 is the only non-zero component, the trace T is just T_00, so you see that

T_00 – (1/2)(g_ab)T

= T – (1/2)(g_ab)T

= T(1 – 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly *twice* as much gravitational acceleration acting at light speed as there is at low speed. This is clearly why light gets deflected in general relativity by *twice* the amount predicted by Newtonian gravitational deflection (a = MG/r^2 where M is sun’s mass).
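The factor of two can be checked numerically, assuming the simplification above in which g_ab is replaced by a single scalar g running from 1 (at v = 0) to 0 (at v = c):

```python
# Sketch of the T*(1 - 0.5*g) weighting for a T_00-only stress-energy source,
# with g treated as a single scalar: g = 1 at v = 0, g = 0 at v = c.
def curvature_source(T, g):
    """Effective curvature source T*(1 - 0.5*g)."""
    return T * (1.0 - 0.5 * g)

slow = curvature_source(1.0, g=1.0)   # v ~ 0: gives T/2
light = curvature_source(1.0, g=0.0)  # v = c: gives T
print(light / slow)  # -> 2.0, the factor of two in light deflection
```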

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 *Lectures on Physics*: gravitation contracts (shrinks) the earth’s radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn’t affect transverse lines running perpendicular to the radial gravitational field lines, so the circumference of the earth isn’t contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
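Feynman’s figures are easy to verify with standard values for G, c, and the earth’s mass and radius; a quick check:

```python
# Check of Feynman's numbers: radial contraction (1/3)GM/c^2 for the earth,
# and the implied fractional distortion of circumference/diameter.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_earth = 5.972e24  # kg
R_earth = 6.371e6   # m (mean radius)

contraction = G * M_earth / (3 * c**2)  # metres
print(f"radial contraction ~ {contraction * 1e3:.2f} mm")  # ~1.5 mm

fractional = contraction / R_earth
print(f"fractional distortion ~ {fractional:.1e}")  # ~2.3e-10
```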

This distortion to geometry – presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) – explains why there is spacetime curvature. It’s a shame that general relativity has become controversial just because it’s been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two-dimensional analogy – and if you correct it by making it three-dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don’t want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn’t need testing.)

*********************

Follow-up comment:


http://kea-monad.blogspot.com/2007/04/gravity-probe-b.html

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it’s not speculative and my argument is that it doesn’t need testing in that case. There are two ways to do science:

* Newton’s approach: “Hypotheses non fingo” [I frame no hypotheses].

* Feynman’s dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation; the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me – although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there’s no statistically significant experimental refutation of the equivalence principle, and it’s got a lot of evidence behind it); spacetime (which has evidence from electromagnetism); the conservation of mass-energy; etc.

All these are solid. So the field equation of general relativity, which is key to making the well-tested, unequivocal predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection to fit observations, depending on how much “dark energy” you assume is powering the cosmological constant, and how much dark matter is around that can’t be detected in a lab for some mysterious reason), is really based on solid experimental facts.

It’s as pointless to keep testing – *within the range of the solid assumptions on which it is based* – a formula based on solid facts, as it is to keep testing say Pythagoras’ law for different sizes of triangle. It’s never going to fail (in Euclidean geometry, ie flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton’s no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can’t disprove it, because the maths has physical correspondence to things already known. The fact it predicts other things like the deflection of starlight by gravity when passing the sun as twice the amount predicted by Newton’s law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn’t being tested because it might be wrong. It’s only being tested for psychological reasons and publicity, and because of Popper’s false idea that theories must forever remain falsifiable (ie, uncertain, speculative, or guesswork).

The failure of Popper is that he doesn’t include proofs of laws which are based on solid experimental facts.

Consider Archimedes’ proof of the law of buoyancy in *On Floating Bodies*. The water is X metres deep, and the pressure in the water under a floating body is the same as at the same height above the seabed whether or not a boat is above it. Hence the weight of water displaced by the boat must exactly equal the weight of the boat, so that the pressure is unaffected whether or not a boat is floating above a fixed submerged point.

This law is not a falsifiable law. Nor are other empirically-based laws. The whole idea of Popper’s, that you can falsify a solidly empirically-based scientific theory, is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those “theories” were not based on solid facts, but upon guesses. String theory is also a guess, but it is not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can’t be tested, so it’s not even a Popper-type ever-falsifiable speculative theory; it’s far worse than that: it’s “not even wrong” to begin with).

Similarly, Einstein’s original failure with the cosmological constant was a guess. He guessed that the universe is static ~~and infinite~~ [see comments] without a shred of evidence (based on popular opinion and the “so many people can’t all be wrong” fallacy). Actually, from Olbers’ paradox, Einstein should have realised that the big bang is the correct theory.

The big bang goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, and was basically a fix to Olbers’ paradox: the problem that if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun. The fact that the sun is close to us, giving a higher inverse-square-law intensity than a distant star, is balanced by the fact that at great distances there are more stars covering any given solid angle of the sky. The correct resolution to Olbers’ paradox is not – contrary to popular accounts – the limited size of the universe in the big bang scenario, but the *redshifts* of distant stars in the big bang: after all, we’re looking back in time with increasing distance, and in the absence of redshift we’d see extremely intense radiation from the high-density early universe at great distances.
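The shell-counting argument in Olbers’ paradox can be illustrated with a toy calculation (mine; the star density, luminosity and shell thickness are arbitrary placeholder values): in a static, infinite, uniform universe every spherical shell of stars contributes the same flux, so the total sky brightness grows without limit as more shells are included.

```python
import math

# Toy Olbers' paradox: total flux from concentric shells of stars around
# an observer, in a static uniform universe with no redshift.
def flux_from_shells(n_shells, star_density=1.0, luminosity=1.0, dr=1.0):
    total = 0.0
    for i in range(1, n_shells + 1):
        r = i * dr
        n_stars = star_density * 4 * math.pi * r**2 * dr  # stars in this shell
        total += n_stars * luminosity / (4 * math.pi * r**2)  # inverse-square flux
    return total

# Each shell contributes the same flux, so the total grows linearly:
print(flux_from_shells(10), flux_from_shells(100))
```

The inverse-square dilution of each star’s light exactly cancels the growth in the number of stars per shell, so only redshift (or expansion) can darken the night sky.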

Erasmus Darwin wrote in his 1791 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go with popular prejudice and ignore Olbers’ paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed an infinite, eternal universe because he wanted to discredit ‘fiat lux’ and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1916 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take just the right value so that its repulsion would exactly cancel the gravitational attraction between galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As now known, he was experimentally refuted over the cosmological constant by Hubble’s observations of redshift increasing with distance, which is redshift of the entire spectrum of light uniformly caused by recession, and not the result of scattering of light with dust (which would be a frequency-dependent redshift) or “tired light” nonsense.

However, the Hubble disproof is not the substantive one to me. Einstein was wrong because he built the cosmological constant extension on prejudice, not facts; he ignored the evidence of Olbers’ paradox; and in particular his model of the universe is unstable. His cosmological constant fix suffered from the drawback that galaxies are not all spaced the same distance apart, and his scheme for stabilising an infinite, eternal universe was a physical failure because it is not a stable solution: once one galaxy is slightly closer to another than the average distance, the cosmological constant can’t hold them apart, so they eventually combine, and that sets off further aggregation.

The modern application of the cosmological constant (to prevent the predicted long-range gravitational deceleration of the universe, since no deceleration is present in the redshift data of distant supernovae etc) is now suspect experimentally, because the “dark energy” appears to be “evolving” with spacetime. But it’s not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There’s no reason to assume that gravity should slow the galaxies at great distances. Instead,

“… the flat universe is just not decelerating, it isn’t really accelerating…”

The reason it isn’t decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge-boson exchange radiation causing forces, and when this exchange radiation passes between receding masses over vast distances, it gets redshifted, so its energy drops by Planck’s law E = hf. That’s one simple reason why general relativity – which doesn’t include quantum gravity with this redshift of gauge bosons – falsely predicts a gravitational deceleration which wasn’t seen.

The mainstream response to the anomaly – adding an epicycle (dark energy, a small positive CC) – is just what you’d expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy adding more epicycles to overcome errors).

Many thanks for the discussion you gave of issues with the equivalence principle. I can’t see what the problem is with inertial and gravitational masses being equal, to within experimental error, to many decimal places. To me it’s a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum to explain the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for “doubly special relativity”, whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz’s original theory, Lorentz invariance can arise physically due to contraction of a body in the direction of motion in a physically real field of force-causing radiation, or whatever is the causative agent in quantum gravity.

Many thanks again for the interesting argument. Best wishes, Nige

*Updates*

Another comment:

http://kea-monad.blogspot.com/2007/04/whats-new.html

“But note that White seems to think that DE has solid foundations.” – Kea

Even Dr Woit might agree with White, because anything based on observation seems more scientific than totally abject speculation.

If you *assume* the Einstein field equation to be a good description of cosmology and to not contain any errors or omissions of physics, then you are indeed forced by the observations that distant supernovae aren’t slowing, to accept a small positive cosmological constant and corresponding ‘dark energy’ to power that long range repulsion just enough to stop the gravitational retardation of distant supernovae.

Quantum gravity is supposed – by the mainstream – to only affect general relativity on extremely small distance scales, ie extremely strong gravitational fields.

According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, the energy of a particle is related to its duration of existence by: (energy)*(time) ~ h-bar.

Since time = distance/c,

(energy)*(distance) ~ c*h-bar.

Hence,

(distance) ~ c*h-bar/(energy)
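Plugging standard values of c and h-bar into this relation shows the scale involved; for instance a ~1 GeV quantum corresponds to a sub-fermi range:

```python
# Quick check of (distance) ~ c*h-bar/(energy) with standard constants.
c = 2.998e8        # m/s
hbar = 1.0546e-34  # J*s
GeV = 1.602e-10    # joules per GeV

def range_for_energy(E_joules):
    """Rough range of a virtual quantum of energy E (uncertainty estimate)."""
    return c * hbar / E_joules

print(f"{range_for_energy(1 * GeV):.2e} m")  # ~2e-16 m, i.e. sub-fermi
```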

Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable: at small distances the gravitons would have very great energies and be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong, in solving a ‘problem’ which might not even be real, by coming up with a renormalizable quantum gravity based on gravitons, which they then hype as ‘predicting gravity’.

The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

The problem is that gravity has only one type of ‘charge’, mass. There’s no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.

This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.

However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated to the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances, also reduces the observable mass by the same factor. In other words, if there was no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.

Penrose claims in his book ‘The Road to Reality’ that the bare core charge of the electron is ‘probably’ (137.036^0.5)*e = 11.7e.

In getting this he uses Sommerfeld’s fine structure parameter,

alpha = (e^2)/(4*Pi*permittivity of free space*c*h-bar) = 1/137.036…

Hence, e^2 is proportional to alpha, so you’d expect from dimensional analysis that electric charge shielding should be proportional to (alpha)^0.5.

However, this is wrong physically.

From the uncertainty principle, the range r of a gauge boson is related to its energy E by:

E = hc/(2*Pi*r).

Since the force exerted is F = E/r (from: work energy = force times distance moved in direction of the applied force), we get

F = E/r = [hc/(2*Pi*r)]/r

= hc/(2*Pi*r^2)

= (1/alpha)*(Coulomb’s law for electrons)

Hence, the electron’s bare core charge really has the value e/alpha, not e/(alpha^0.5) as Penrose guessed from dimensional analysis. This “leads to predictions of masses.”

It’s really weird that this simple approach to calculating the total amount of vacuum shielding for the electron core is so ignorantly censored out. It’s published in an Apr. 2003 Electronics World paper, and I haven’t found it elsewhere. It’s a very simple calculation, so it’s easy to check both the calculation and its assumptions, and it leads to predictions.

I won’t repeat at length the argument that dark energy is a false theory. Just let’s say that over cosmological distances all radiation, including gauge bosons, is stretched and degraded in frequency and hence in energy. Thus the exchange radiation which causes gravity is weakened by redshift due to expansion over large distances, and when you include this effect on the gravitational coupling parameter G in general relativity, general relativity then predicts the supernovae redshifts correctly. Instead of inventing an unobservable dark energy to offset an unobserved long-range gravitational retardation, you simply have no long-range gravitational deceleration – hence no outward acceleration needed to offset inward gravity at long distances. The universe is flat on large scales because gravity is weakened by the redshift of gauge bosons exchanged over great distances in an expanding universe where gravitational charges (masses) are receding from one another. Simple.

Another problem with general relativity as currently used is the T_{ab} tensor, which is usually represented by a smooth source for the gravitational field, such as a continuum of uniform density.

In reality, the whole idea of density is a statistical approximation, because matter consists of particles of very high density distributed in the vacuum. So the idea that general relativity shows that spacetime is flat on small distance scales is just bunk; it’s based on the false statistical approximation (which holds on *large* scales, not on *small* scales) that you can represent the source of gravity (ie, quantized particles) by a continuum.

So the maths used to make T_{ab} generate solvable differential equations is an approximation which is correct at large scales (after you make allowances for the mechanism of gravity, including redshift of gauge bosons exchanged over large distances), but is inaccurate in general on small scales.

General relativity doesn’t prove a continuum exists; it requires a continuum, because it’s based on continuously variable differential tensor equations which don’t easily model the discontinuities in the vacuum (ie, real quantized matter). So the nature of general relativity forces you to use a continuum as an approximation.

Sorry for the length of comment, feel free to delete.

While I’m listing comments made over there, here’s one showing that according to the IR cutoff of quantum field theory, Hawking radiation is possible for electrons as black holes, *but isn’t generally possible for really massive black holes,* because the IR cutoff means that pair production (which causes vacuum polarization and hence screening of electric charge) only occurs above a threshold of about 10^18 v/m, which means that Hawking radiation requires a large net electric charge/mass ratio of the black hole, so that the electric field strength in the vacuum is at least 10^18 v/m at the event horizon:

http://kea-monad.blogspot.com/2007/04/m-theory-lesson-37.html

“Bekenstein worried that if he took a box filled with a hot gas – which would have a lot of entropy, because the motion of the gas molecules was random and disordered – and threw it into a black hole, the entropy of the universe would seem to decrease, because the gas would never be recovered. [*Notice he says “closed system” conveniently without defining it, and if the universe is a closed system then the 2nd law of thermodynamics is wrong: at 300,000 years after the big bang the temperature of the universe was a uniform 4000 K with extremely little variation, whereas today space is at 2.7 K and the centre of the sun is at 15,000,000 K. Entropy for the whole universe has been **falling**, in contradiction to the laboratory-based (chemical experiments) basis of thermodynamics. The reason for this is the role of **gravitation** in lumping matter together, organizing it into hot stars and empty space. This is insignificant for the chemical experiments in labs upon which the laws of entropy were based, but it **is** significant generally in physics, where gravity lumps things together over time, reducing entropy and increasing order. There is no inclusion of gravitational effects in thermodynamic laws, so they’re plain pseudoscience when applied to gravitational situations.*] To save the second law, Bekenstein proposed that the black hole must itself have an entropy, which would increase when the box of gas fell in, so that the total entropy of the universe would never decrease. [*This is nonsense because gravity in general works **against** rising entropy; it causes entropy to fall! Hence the big bang went from uniform temperature and maximum entropy (disorder, randomness of particle motions and locations) at early times to very low entropy today, with a lot of order. The ignorance of the role of gravitation on entropy by these people is amazing. But the entropy of the universe is decreasing due to gravitational effects anyway. At early times the universe was a hot fireball of disorganised hydrogen gas at highly uniform temperature. Today space is at 2.7 K and the centres of stars are at tens of millions of Kelvin. So order has increased with time, and entropy – disorder – has fallen with time.*]”

On page 91, Smolin makes clear the errors stemming from Hawking’s treatment:

“Because a black hole has a temperature, it will radiate, like any hot body.”

This isn’t in general correct either, because the mechanism Hawking suggested for black hole radiation requires pair production to occur near the event horizon, so that one of the pair of particles can fall into the black hole and the other particle can escape. This required displacement of charges is the same as the condition for polarization of the vacuum, which can’t occur unless the electric field is above a threshold/cutoff of about 10^18 v/m.

In general, a black hole will have no net electric field at all, because neutral atoms fall into it to give it mass; certainly it is unlikely that the electric field strength reaches 10^18 v/m at the event horizon of the black hole. Hence no particles escape. Hawking’s mechanism is that the escaping particles outside the event horizon annihilate into gamma rays, which constitute the “Hawking radiation”.

Because of the electric field threshold required for pair production, there will be no Hawking radiation emitted from large black holes in the universe: there is no mechanism because the electric field at the event horizon will be too small.

The only way you can get Hawking radiation is where the condition is satisfied that the event horizon radius of the black hole of mass M, r = 2GM/c^2, corresponds to an electric field strength exceeding the QFT pair-production threshold of E = (m^2)(c^3)/(e*h-bar) = 1.3*10^18 volts/metre, where m and e are the electron’s mass and charge.

Since E = F/q = Q/(4*Pi*Permittivity*r^2) v/m, the threshold net electric charge Q that a black hole must carry in order to radiate Hawking radiation is given by

(m^2)(c^3)/(e*h-bar)

= Q/(4*Pi*Permittivity*r^2)

= Q/(4*Pi*Permittivity*{2GM/c^2}^2)

Hence, the minimum net electric charge a black hole must have before it can radiate is

Q = 16*Pi*(m^2)(M^2)(G^2)*(Permittivity of free space)/(c*e*h-bar)

Notice the dependence on the square of the black hole’s mass M (a fourth power of mass when, as for the electron treated as a black hole, M = m)! The more massive the black hole, the more electric charge it requires before Hawking radiation emission is possible.
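A numerical sketch of the threshold charge (standard constant values; note that as an assumption of this sketch I keep the electron mass m in the pair-production threshold field and the black hole mass M in the horizon radius, so the two roles are distinguished):

```python
import math

# Standard constants
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.0546e-34  # J*s
e = 1.602e-19      # C, electron charge
eps0 = 8.854e-12   # F/m
m_e = 9.109e-31    # kg, electron mass (sets the ~1.3e18 v/m threshold)

def threshold_charge(M):
    """Net charge Q so the field at r = 2GM/c^2 reaches m_e^2 c^3/(e*hbar)."""
    E_crit = m_e**2 * c**3 / (e * hbar)  # pair-production threshold field
    r = 2 * G * M / c**2                 # event horizon radius
    return 4 * math.pi * eps0 * r**2 * E_crit

print(f"solar-mass hole:    {threshold_charge(2e30):.1e} C")  # ~1e15 C
print(f"electron-mass hole: {threshold_charge(m_e):.1e} C")   # vastly below e
```

For a solar-mass black hole the required net charge comes out around 10^15 coulombs, which is astronomically large; for an electron-mass black hole the threshold is vanishingly small compared with the electron’s own charge e, consistent with the point above that Hawking-type radiation is possible for electrons as black holes but not for massive neutral black holes.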

Copy of an interesting comment about the abuse of maths to obfuscate in place of physical concepts, by ignorant physicists like James Jeans:

http://cosmicvariance.com/2007/03/31/string-theory-is-losing-the-public-debate/

anon. on Apr 15th, 2007 at 11:04 am

So you disagree with Sir James Jeans, who for decades in the first half of the twentieth century was credited with the false ‘discovery’ that the solar system was formed by massive tides in the sun. Jeans wrote in his book *The Mysterious Universe* (Cambridge University Press, 1930, reprinted many times) that God is a mathematician. It’s then a small step for a Harvard string theorist to write sixty years later: ‘Superstring/M-theory is the language in which God wrote the world.’

However, the problem for Jeans was that the special atomic numbers he lists aren’t special for reasons of pure mathematics. Those numbers aren’t primes or anything. Instead they come from quantum mechanics, from the way electrons are arranged in orbits according to physical effects like the exclusion principle. Jeans’ claim that radioactivity is just due to the atomic number being 83–92 was a complete deception.

The role of maths in string theory contrasts with Professor Eugene Wigner’s account of why mathematics is useful in physics generally. String theory is spectacularly successful in explaining unobservables in terms of other non-observables: an example is how Nobel Laureate Brian Josephson was able to use string theory concepts without a single equation in his paper unifying ESP with what he calls the ‘special mental vacuum state’ used by string theorists when doing their mathematics; see: http://arxiv.org/abs/physics/0312012

“Similarly, Einstein’s original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite”

Einstein thought that the universe was finite. He didn’t concede the possibility that it isn’t until de Sitter gave examples of Friedmann universes with, and without, a cosmological constant.

de Sitter proved that we can simply specify the metric at the spatial limit of the domain under consideration, but this reduces general relativity to a weaker theory that has to be “augmented” with other principles and arbitrary information in order to yield determinate results.

“Such a complete resignation in this fundamental question is for me a difficult thing. I should not make up my mind to it until every effort to make headway toward a satisfactory view had proved to be in vain.” – A. Einstein

In an address to the Berlin Academy of Sciences in 1921, Einstein said:

“I must not fail to mention that a theoretical argument can be adduced in favor of the hypothesis of a finite universe. The general theory of relativity teaches that the inertia of a given body is greater as there are more ponderable masses in proximity to it; thus it seems very natural to reduce the total effect of inertia of a body to action and reaction between it and the other bodies in the universe… From the general theory of relativity it can be deduced that this total reduction of inertia to reciprocal action between masses – as required by E. Mach, for example – is possible only if the universe is spatially finite. On many physicists and astronomers this argument makes no impression…” – A. Einstein

From The Meaning of Relativity, 1922:

“From the standpoint of the theory of relativity, to postulate a closed universe is very much simpler than to postulate the corresponding boundary condition at infinity of the quasi-Euclidean structure of the universe. The idea that Mach expressed, that inertia depends on the mutual attraction of bodies, is contained, to a first approximation, in the equations of the theory of relativity; it follows from these equations that inertia depends, at least in part, upon mutual actions between masses. Thereby Mach’s idea gains in probability, as it is an unsatisfactory assumption to make that inertia depends in part upon mutual actions, and in part upon an independent property of space. But this idea of Mach’s corresponds only to a finite universe, bounded in space, and not to a quasi-Euclidean, infinite universe. From the standpoint of epistemology it is more satisfying to have the mechanical properties of space completely determined by matter, and this is the case only in a closed universe.”

“An infinite universe is possible only if the mean density of matter in the universe vanishes. Although such an assumption is logically possible, it is less probable than the assumption of a finite mean density of matter in the universe.” – A. Einstein, 1922.

“The unboundedness of space has a greater empirical certainty than any experience of the external world, but its infinitude does not in any way follow from this; quite the contrary. Space would necessarily be finite if one assumed independence of bodies from position, and thus ascribed to it a constant curvature, as long as this curvature had ever so small a positive value.” – B. Riemann, 1854

“Einstein thought that the universe was finite.” – island

So at least Einstein got one thing right about cosmology:

“My original considerations on the Structure of Space According to the General Theory of Relativity were based on two hypotheses:

1. There exists an average density of matter in the whole of space (the finite spherical universe) which is everywhere the same and different from zero.

2. The magnitude (radius) of space (the finite spherical universe) is independent of time.

Both these hypotheses proved to be consistent, according to the general theory of relativity, but only after a hypothetical term was added to the field equations, a term which was not required by the theory as such nor did it seem natural from a theoretical point of view (‘cosmological term of the field equations’).

(Einstein, 1952)” – http://www.spaceandmotion.com/Physics-Albert-Einstein-Cosmology.htm

Another quotation on that page:

“It should be pointed out that Hubble himself was not convinced that red shift was exclusively due to Doppler effect. Up to the time of his death he maintained that velocities inferred from red shift measurements should be referred to as apparent velocities.” (Mitchell, 1997)

Redshift due to recession is the only explanation that works, given the detailed spectra showing that the redshift is a uniform displacement of band spectra at all frequencies, and not a scattering or dust effect. However, there is an issue over the correct amount of redshift.

As a photon travelling a vast distance approaches us, in our reference frame it is falling towards us and gaining gravitational potential energy, hence it should be very slightly blueshifted. (This can be calculated easily by a symmetry argument, whereby in our frame of reference a photon fired from a torch on earth into space will gradually be redshifted by gravitation – due to the mass of the universe within a sphere of radius equal to its distance from the earth.) This blueshift of light arriving on earth from receding galaxies at great distances will be smaller than the expansion-induced redshift effect, so the light will have a net redshift, but the gravitational blueshift effect will reduce the amount of redshift. It seems that this effect and the necessary correction are totally ignored by cosmologists and astronomers.

It’s important to recognise that this is an effect that matters in our reference frame, whereby the mass of the universe appears – and is, for purposes of calculations – in spherical symmetry around us. (For the impractical approach of the armchair philosopher of the “we are in no special place” dogma, this is a heresy. Nevertheless, from the point of view of doing calculations, the mass of the universe is observed to be in spherical symmetry around us, and this observational circumstance – due largely to spacetime – is what is important for calculations, because gravitational field effects from masses travel at the same velocity as light, and thus get delayed while in transit over vast distances in the same way that light signals do.)

Copy of a comment to John Horgan’s blog:

http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=25

Thanks for this, it’s very entertaining! Very funny discussion about Einstein’s genius as shown by those new biographies. By the way, did you know that, amazingly, nobody has ever published Einstein’s first scientific paper: http://www.einstein-website.de/z_physics/wisspub-e.html

‘Albert Einstein wrote his first scientific essay in the summer of 1895; he was only 16 years old. This essay, Über die Untersuchung des Ätherzustandes im magnetischen Felde (On the Investigation of the State of the Ether in a Magnetic Field), was sent to his uncle Caesar Koch (1854-1941) for an expert’s opinion. In an accompanying letter to his uncle the young Einstein wrote: “If you are not going to read this stuff I will not be annoyed at all; but at least you have to recognize it as a shy attempt to fight against my being a bad letter writer, which I inherited from both my beloved parents.” Einstein’s first “scientific work” has never been published.’

Also, Einstein had over 300 papers published during his scientific career from 1901-54, but Einstein was actually against the whole peer review process of publishing papers, and rejected any attempt by editors to have his papers peer-reviewed (writing an angry letter to the editor of the Physical Review in 1936 when that happened), and has only one paper on the arXiv, http://arxiv.org/abs/physics/0510251

On 27 July 1936, Einstein wrote his celebrated letter to the editor of Physical Review, rejecting entirely the process of peer-review:

http://www.physicstoday.org/vol-58/iss-9/p43.html

‘We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the—in any case erroneous—comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.’

copy of a comment:

http://backreaction.blogspot.com/2007/04/nabla.html

It’s funny that Maxwell defines the use of nabla in the “Preliminary” section at the beginning of his two volume Treatise on Electricity and Magnetism (3rd ed., 1873), for “convergence” and “curl” operators, and in fact in the rest of the Treatise never uses them for his electromagnetic equations.

On page 28 of that Treatise, Maxwell defines the nabla operator as convergence instead of divergence, and has it positive when the field line vectors are pointing inward: “I propose therefore to call the scalar part of nabla rho the convergence of rho … I propose (with great diffidence) to call the vector part of nabla rho the curl, or the version of rho …”

On page 29 he gives almost the modern version of the Laplacian operator, but with the sign convention negative:

nabla^2 = – (d^2 /dx^2 + d^2 /dy^2 + d^2 /dz^2)
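Maxwell’s sign convention can be checked numerically: his “convergence” is just the negative of the modern divergence. A minimal sketch (plain Python with central differences, not anything from Maxwell’s text) for the field F = (x, y, z), whose modern divergence is 3 everywhere:

```python
# Numerical check of Maxwell's "convergence" versus the modern divergence,
# using the vector field F(x, y, z) = (x, y, z), which has div F = 3.
h = 1e-6  # step for central differences

def F(x, y, z):
    return (x, y, z)

def divergence(F, x, y, z):
    """Modern divergence: dFx/dx + dFy/dy + dFz/dz (central differences)."""
    dFx = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h)
    dFy = (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h)
    dFz = (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

div = divergence(F, 1.0, 2.0, 3.0)
convergence = -div  # Maxwell counts inward-pointing field lines as positive
print(div, convergence)  # div ≈ 3, convergence ≈ -3
```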

It’s weird that Maxwell makes no use of vector calculus in the rest of the book. The first publication of the vector calculus version of Maxwell’s 20 long hand differential equations occurs twenty years later in Heaviside’s book of 1893.

“Heaviside simplified and made useful for the sciences the original Maxwell’s equations of electromagnetism. This innovation from the reformulation of Maxwell’s original equations gives the four vector equations known today. Heaviside developed the Heaviside step function, which he used to model the flow of current in an electric circuit. Heaviside developed vectors (and vector calculus). Heaviside formed the operator method for linear differential equations. However, Heaviside’s approach is short of rigorous mathematical basis. Thomas Bromwich supplemented Heaviside’s operator method by providing a rigorous mathematics basis.”

It’s interesting that Heaviside, besides writing the Maxwell equations, also came up with crucial ideas which preceded the Lorentz-FitzGerald contraction:

“In two papers of 1888 and 1889, Heaviside calculated the deformations of electric and magnetic fields surrounding a moving charge, as well as the effects of it entering a denser medium. This included a prediction of what is now known as Cherenkov radiation, and inspired Fitzgerald to suggest what now is known as the Lorentz-Fitzgerald contraction.” – Wiki

What always makes me amazed is when someone makes a mathematical invention but doesn’t use it. Newton didn’t use any calculus whatsoever in Principia, which is done solely with Euclidean geometry. (He didn’t even write the inverse square law of gravity with algebra; G was introduced long after, by Laplace.) Maxwell put nabla “convergence” and curl operators in the Preliminary section of his Treatise and then failed to use them in the remainder for electromagnetism. Also, Tullio Levi-Civita published the tensor calculus (although Einstein came up with the tensor name) in 1900 with Gregorio Ricci-Curbastro, but they didn’t use it to work out general relativity. Why? Riemann’s non-Euclidean geometry was well known. If I came up with anything useful mathematically (which sadly has nil probability), I’d definitely apply it to every important problem there is ASAP.

copy of a comment:

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255052

nigel on Apr 28th, 2007 at 6:22 am

Sean, you’re getting into pseudoscience in even asking questions like that, because whatever theory people may find for the initial conditions, it won’t be possible to test it. The best you will be able to do is to say you have a consistent ad hoc theory for how the universe began which includes quantum gravity effects.

It won’t be a potentially falsifiable theory, so it isn’t doing science, unless somehow the theory is based entirely on solid facts as input.

It reminds me of the time you claimed that the universal gravitational constant G had a value within 10% of the present day value a minute after the big bang. It turned out that claim was based on the assumption that the electromagnetic force (which resists fusion, due to Coulomb repulsion) remained static while G varied. If there is a unification of long range forces like electromagnetism and gravity, however, they’d both be likely to vary. If gravity and electromagnetism vary the same way, fusion rates won’t vary, because an increased gravitational compression helping fusion will be offset by an increased Coulomb repulsion between approaching charged nuclei.

So it’s very wishy-washy to be investigating this stuff, especially when there are loads of implicit but unstated assumptions involved. Basically, it amounts to assuming something, then claiming to have evidence from a calculation based on those assumptions. This is what things like religion and string theorists do. It’s not very interesting because it’s really just orthodoxy.

Take Hawking radiation as another specific example. He implicitly assumes that pair production occurs in spacetime everywhere. In quantum field theory, spontaneous pair production in a steady field, say that of a black hole, needs an electric field strength exceeding Schwinger’s limit of 1.3*10^18 v/m (equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040 ).
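The quoted 1.3*10^18 v/m figure follows directly from Schwinger’s formula E = m^2 c^3/(e * h-bar) with the electron mass; a minimal check in plain Python (approximate SI constants):

```python
# Schwinger's critical field E = m^2 * c^3 / (e * hbar), in volts/metre,
# evaluated for the electron -- the pair-production threshold quoted above.
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C
hbar = 1.055e-34     # reduced Planck constant, J s

E_schwinger = m_e**2 * c**3 / (e * hbar)
print(f"{E_schwinger:.2e} V/m")  # ≈ 1.3e18 V/m, as quoted
```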

So for Hawking radiation to be possible due to one fermion in pair production near the event horizon falling into the hole while the other escapes, you need the black hole to have a minimum electric charge of Q = 16*Pi*(m^4)(G^2)*(Permittivity of free space)/(c*e*h bar).

In general, massive black holes will swallow up as much positive as negative charge, so they’ll be neutral, there won’t be any pair production near the event horizon, and they can’t radiate. The most interesting situation where Hawking radiation can occur is where you treat fundamental particles like electrons (which have the maximum charge to mass ratio of all matter) as radiating black holes. Then you have something to act as a source of the exchange radiation.

So it’s by examining assumptions and rejecting false assumptions that progress is made, not by theorizing willy-nilly with foundations consisting of quicksand (a host of unchecked speculation).

copy of a related comment

http://motls.blogspot.com/2007/04/resolving-big-bang.html

“Observations show that the visible Universe is homogeneous and isotropic at distances longer than 300 megaparsecs or so.”

Wrong! At great distances we’re seeing back in time, so it’s definitely not homogeneous and isotropic:

1. The density gets higher with observable radial distance. Density in the more distant, earlier universe was higher.

2. Early galaxies like quasars abound at great distances.

3. The earliest feature of the universe we see is the cosmic background radiation, emitted 300,000 years after the BB, which definitely isn’t isotropic or homogeneous:

“U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”

– Muller, R. A., “The cosmic background radiation and the new aether drift”, Scientific American, vol. 238, May 1978, p. 64-74 (click on http://adsabs.harvard.edu/abs/1978SciAm.238…64M for link to this abstract)

copy of a comment:

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255203

nigel on Apr 28th, 2007 at 1:55 pm

“The problem with your argument is that fitting the model to observation is not the only criteria modern scientists use to make judments.” – V

I’m in favour of building theories on the basis of facts and making predictions, checking the predictions, etc. That’s science.

“Practically any model can be made to fit observations. However, the subset of such models which are internally consistent with known laws of physics as well as mathematical consistency is very small.” – V

No, the new theory has to disagree with the known laws of physics in order to get anywhere. For example, the known force laws in the standard model predict that forces don’t unify at 10^16 GeV.

If a new theory must be consistent with the old theory, the new theory is just a carbon copy and – unless it is covering an area of physics which is devoid of any existing laws (there aren’t any such empty areas of physics) – it will come into conflict with existing laws.

For example, supersymmetry predicts – contrary to the existing laws of the standard model – that electromagnetic, weak and strong force strengths unify at 10^16 GeV. That blows your argument sky high, if you think string theory is science and is consistent with existing laws.

“… Take your example of the Ptolemaic model. Sure, it fits the observational data very well, but it violates known physical laws. Thus, modern scientists would know it is wrong on this basis. …” – V

Evolving dark energy would violate conservation of energy, so you’d dismiss it out of hand for being inconsistent with known laws? Basically your argument would also ban progress in quantum gravity, since any final theory would need to be inconsistent with known physical laws in either general relativity or quantum gravity. (To start with, a modification of general relativity is needed to allow for quantum effects on the gravity constant G, like redshift over massive cosmological distances of force-mediating radiation being exchanged between distant gravitational charges, i.e. receding masses.)

follow up fast comment:

http://motls.blogspot.com/2007/04/resolving-big-bang.html

We’re going at an absolute speed of 600 km/s, taking the CMB as an absolute reference frame.

Part of that velocity is due to our Milky Way accelerating towards the massive galaxy of Andromeda.

So that’s likely to be a good upper limit estimate at least for the order of magnitude of the probable average speed of the Milky Way since the big bang. (Although the Milky Way’s direction will have been deflected many times due to motion relative to other nearby galaxies.)

Hence, the distance we are from the singularity of the big bang is simply distance = velocity * time

= 600 km/s * age of universe, ~4.7*10^{17} s

= 9.2 Megaparsecs ~ 3*10^{23} m

That’s probably an overestimate of the distance we are from the point of origin of the big bang, because (1) the velocity assumed is high due to our present attraction to Andromeda, and (2) the motion of the Milky Way would not be a straight line, but a zig zag, so the average radial direction velocity would definitely be less than the average scalar speed value.

[The 9.2 megaparsec distance is pretty trivial as its only 12,000 times as far as Andromeda.]

There is a risk that we are by fluke very near the middle of a spherical universe. The problem is that Ptolemy assumed we’re at the middle for metaphysical reasons, crying wolf without a good reason. Now, as a result, nobody pays any attention to the scientific reasoning that the universe looks fairly isotropic because we have some evidence of really being near the middle.

Like Dawkins said, to be a scientist you need to be open-minded, but not so open-minded that your brain falls out! That balance is too difficult for many.

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255220

nigel on Apr 28th, 2007 at 2:56 pm

V, Supersymmetry is a good example. It modifies the existing extrapolations of three experimentally based force laws at unobservably high energy, without good reason (unless you think that unification is beautiful and a good reason), it introduces unobserved superpartners, and 6 extra dimensions. Dr Woit explains on page 177 of Not Even Wrong that using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.

All these things can be viewed as incompatibilities with existing knowledge. But they are excused because each is just a little difficulty as seen in isolation (just like the debtor with a million little debts of $1 each, who refused to see the big picture – and his big problem).

A question that should be answered is what happens to the electromagnetic charge energy which is “shielded” by dielectric of the Dirac sea, composed of radially polarized pairs of fermions in loops between the IR and UV cutoffs? The electric charge energy of the bare core of an electron is considerably higher than the observed (screened) value. Does the attenuated electric charge energy power short ranged forces? I.e., do the loops of W weak gauge bosons result from energy being screened by the polarized vacuum? If the attenuation of the electric charge at small distances from an electron causes the weak force, then (by analogy) you’d expect the strong force between hadrons to be caused by energy absorbed from the vacuum by radially polarized virtual electric charges at small distances. If the weak and strong forces are indeed being powered by the attenuation of the electromagnetic charge by polarized vacuum dielectric, then as you get to energies exposing the unshielded bare core charge of the electron like 10^16 GeV, the weak and strong forces should drop to zero because there’s no shielded energy to power them.

copy of a comment:

http://motls.blogspot.com/2007/04/resolving-big-bang.html

1. Because it is a calculation based entirely on observations of the absolute age of the universe, ~4.7*10^{17} s, and the absolute speed of the Milky Way, ~600 km/s, which gives the result that we’re 3*10^{23} m from the middle of the CBR reference frame.

2. The effective radius of the universe (since it isn’t slowing down) is just ct = 3*10^{8} m/s * 4.7*10^{17} s = 1.4*10^{26} m.

Therefore the point of origin of the Milky Way is only 3*10^{23} / 1.4*10^{26} = 0.002 of the radius of the universe.

Since we’re thus at only 0.2% of the radius, we’re closer to the middle than the outside. Even a moron can see that this is an explanation for why the universe appears isotropic.

3. “Shut up and calculate” (Feynman).

4. If you don’t investigate obvious simple ideas carefully just because they sound too simple, you’re crazy.
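The arithmetic in points 1 and 2 can be checked in a few lines; a minimal sketch (plain Python), using the same assumed inputs of ~600 km/s and ~4.7*10^{17} s as above:

```python
# Check the distance-from-origin estimate and its fraction of the
# effective radius of the universe, using the figures quoted above.
v = 600e3            # assumed average speed of the Milky Way, m/s
t = 4.7e17           # assumed age of the universe, s
c = 3.0e8            # speed of light, m/s
MPC = 3.086e22       # metres per megaparsec

distance = v * t                 # ≈ 2.8e23 m (~9 megaparsecs)
radius = c * t                   # effective radius ct ≈ 1.4e26 m
fraction = distance / radius     # = v/c = 0.002, i.e. 0.2% of the radius

print(distance / MPC, fraction)  # ≈ 9.1 Mpc, 0.002
```

Note that the fraction is simply v/c, independent of the assumed age of the universe.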

copy of a comment:

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255674

nigel on Apr 29th, 2007 at 4:27 pm

“The main motivation for introducing SUSY is that it provides a natural resolution to the gauge hierarchy problem. As an added bonus, one gets gauge coupling unification and has a natural dark matter candidate. Plus, if you make SUSY local you get supergravity. These are all very good reasons why we expect SUSY to be a feature of nature, besides mathematical beauty.

“Regarding your questions about vacuum polarization, this is in fact what causes the gauge couplings to run with energy. Contrary to your idea, the electroweak interactions are a result of local gauge invariance…” – V

The standard model is a model for forces, not a cause or mechanism of them. I’ve gone into this mechanism for what supplies the energy for the different forces in detail elsewhere (e.g. here & here).

Notice that when you integrate the electric field energy of an electron over the volume from radius R to infinity, you have to make R the classical electron radius of 2.8 fm in order that the result corresponds to the rest mass energy of the electron. Since the electron is known to be way smaller than 2.8 fm, there’s something wrong with this classical picture.

The paradox is resolved because the major modification you get from quantum field theory is that the bare core charge of the electron is far higher than the observable electron charge at low energy. Hence, the energy of the electron is far greater than 0.511 MeV.

In QED, not just the charge but also the rest mass of the electron is renormalized. I.e., the bare core values of electron charge and electron mass are higher than the observed values in low energy physics by a large factor.

The rest mass we observe, and the corresponding equivalent energy E=mc^2, is low because of the physical association of mass to charge via the electric field, which is shielded by vacuum polarization. This is because the virtual charge polarization mechanism for the variation of observable electric charge with energy doesn’t explain the renormalization of mass in the same way. Electric polarization requires a separation of positive and negative charges which drift in opposite directions in an electric field, partly cancelling out that electric field as a result. The quantum field in which mass is the charge is gravity, and since nothing has ever been observed to fall upwards, it’s clear that the polarization mechanism that shields electric charge doesn’t occur separately for masses. Instead, mass is renormalized because it gets coupled to charges by the electric field, which is shielded by polarization. This mechanism, inferred from the success of renormalization of mass and charge in QED, gives a clear approach to quantum gravity. It’s the sort of thing which in an ideal world like this one (well, string theorists have [an] idealistic picture, and they’re in charge of theoretical HEP) should be taken seriously, because it’s building on empirically confirmed facts, and it predicts masses.

copy of a comment:

http://kea-monad.blogspot.com/2007/04/sparring-sparling-iii.html

Kea, thanks for this, which is encouraging.

In thinking about general relativity in a simple way, a photon can orbit a black hole, but at what radius, and by what mechanism?

The simplest way is to say 3-d space is curved, and the photon is following a curved geodesic because of the curvature of spacetime.

The 3-d space is curved because it is a manifold or brane on higher-dimensional spacetime, where the time dimension(s) create the curvature.

Consider a globe of the earth as used in geography classes: if you try to draw Euclidean triangles on the surface of that globe, you get problems with angles being bigger than on a flat surface, because although the surface is two dimensional in the sense of being an area, it is curved by the third dimension.

You can’t get any curvature in general relativity due to the 3 contractable spatial dimensions:

hence the curvature is due to the extra dimension(s) of time. This implies that the time dimension(s) are the source of the gravitational field, because the time dimension(s) produce all of the curvature of spacetime. Without those extra dimension(s) of time, space is flat, with no curved geodesics, and no gravity.

This should tell people that the mechanism for gravity is to be found in the role of the time dimension(s). With the cosmic expansion represented by recession of mass radially outward in three time dimensions t = r/c, you have a simple mechanism for gravity, since you have outward velocity varying specifically with time, not distance, which implies outward acceleration of all the mass in the universe, using Hubble’s empirical law, dr/dt = v = Hr:

a = dv/dt

= dv/(dr/v)

= v*dv/dr

= v*d(Hr)/dr

= vH.

Thus the outward force of the universe is F = Ma = MvH. Newton’s 3rd law tells you there’s an equal inward force. That inward force predicts gravity, because it (force-mediating gauge boson exchange radiation, i.e., the gravitational field) exerts pressure against masses from all directions except where shielded by local, non-receding masses. The shielding is simply caused by the fact that non-receding (local) masses don’t cause a reaction force, so they cause an asymmetry, gravity. There are two different working sets of calculations for this mechanism which predict the same formula for G (which is accurate well within observational errors on the parameters) using different approaches (I’m improving the clarity of those calculations in a big rewrite).
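The chain-rule result a = vH can be checked numerically: if dr/dt = Hr with H constant, then r(t) = r0*exp(Ht), and finite differences should recover a = vH at any epoch. A minimal sketch in plain Python (the values of H and r0 are illustrative assumptions, not data):

```python
import math

# Numerical check that Hubble recession, dr/dt = v = H*r (H constant),
# implies outward acceleration a = v*H.
H = 2.3e-18   # illustrative Hubble parameter, s^-1
r0 = 1.0e22   # illustrative initial distance, m

def r(t):
    # Solution of dr/dt = H*r with r(0) = r0
    return r0 * math.exp(H * t)

t = 1.0e17    # some epoch, s
dt = 1.0e14   # finite-difference step, small compared with 1/H

v = (r(t + dt) - r(t - dt)) / (2 * dt)           # numerical dr/dt
a = (r(t + dt) - 2 * r(t) + r(t - dt)) / dt**2   # numerical d^2r/dt^2

print(a / (v * H))  # ≈ 1, confirming a = vH
```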

Back to the light ray orbiting the black hole due to the curvature of spacetime: Kepler’s law for planetary orbits is equivalent to saying the radius of orbit, r is equal to 2MG/v^2, where M is the mass of the central body and v is the velocity of the orbiting body.

This comes from: E = (1/2)mv^2 = mMG/r, as Dr Thomas R. Love has explained.

Light, however, due to its velocity v = c, is deflected by gravity twice as much as slow moving objects (v << c). Taking only the first two terms of the Maclaurin series for the metric components gives factors like 1 – 2GM/(rc^2) = 1 – 1/n, where n is the radial distance in units of black hole event horizon radii, 2GM/c^2 (similar to the way that the Earth’s Van Allen belts are plotted in units of earth radii).

For the case where n = 1, i.e., one event horizon radius, you get g_{00} = g_{11} = g_{22} = g_{33} = 0.

That’s obviously wrong because there is severe curvature there. The problem is that in using Maclaurin’s series to the first two terms only, the result only applies to small curvatures, and you get strong curvature at the event horizon radius of a black hole.

So it’s vital at black hole scales to not use Maclaurin’s series to approximate the basic equations, but to keep them intact:

g_{00} = [(1 – GM/(2rc^2))/(1 + GM/(2rc^2))]^2

g_{11} = g_{22} = g_{33} = -[1 + GM/(2rc^2)]^4

where GM/(2rc^2) = (2GM/c^2)/(4r) = 1/(4n), where as before n is the distance in units of event horizon radii. (Every mass constitutes a black hole at a small enough radius, so this is universally valid.) Hence:

g_{00} = [(4 – 1/n)/(4 + 1/n)]^2

g_{11} = g_{22} = g_{33} = -(1/256)*[4 + 1/n]^4.

So for one event horizon radius (n = 1),

g_{00} = (3/5)^2 = 9/25

g_{11} = g_{22} = g_{33} = -(1/256)*5^4 = -625/256.
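These n = 1 values can be verified with exact rational arithmetic; a minimal sketch (plain Python, `fractions` module) evaluating the metric components quoted above as functions of n:

```python
from fractions import Fraction

# Metric components from the form quoted above, with
# GM/(2rc^2) = 1/(4n), n = radial distance in event horizon radii.
def g00(n):
    x = Fraction(1, 4 * n)
    return ((1 - x) / (1 + x)) ** 2

def g11(n):
    x = Fraction(1, 4 * n)
    return -((1 + x) ** 4)

print(g00(1), g11(1))  # 9/25 and -625/256, as stated for n = 1
```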

The gravitational time dilation factor of 9/25 at the black hole event horizon radius is equivalent to a velocity of about 0.933c (setting 9/25 = [1 – v^2/c^2]^{1/2}).

It’s pretty easy to derive the Schwarzschild metric for weak gravitational fields just by taking the Lorentz-FitzGerald contraction gamma factor and inserting v^2 = 2GM/r, on physical arguments, but then we have the problem that Schwarzschild’s metric only applies to weak gravity fields, because it uses only the first two terms in the Maclaurin series for the metric tensor’s time and space components. It’s an interesting problem to try to get a completely defensible, simple physical model for the maths of general relativity. Of course, there is no real physical need to work beyond the Schwarzschild metric since … all the tests of general relativity apply to relatively weak gravitational fields within the domain of validity of the Schwarzschild metric. There’s not much physics in worrying about things that can’t be checked or tested.
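That limitation can be made concrete numerically: comparing the truncated two-term form g_00 ≈ 1 – 1/n against the full expression [(4 – 1/n)/(4 + 1/n)]^2 quoted earlier shows agreement in weak fields (large n) and breakdown at the event horizon (n = 1). A minimal sketch in plain Python:

```python
def g00_full(n):
    """Full form quoted above: g_00 = [(4 - 1/n)/(4 + 1/n)]^2."""
    return ((4 - 1/n) / (4 + 1/n)) ** 2

def g00_weak(n):
    """First two Maclaurin terms only: g_00 ≈ 1 - 1/n."""
    return 1 - 1/n

for n in (1, 10, 1000):
    print(n, g00_full(n), g00_weak(n))
# Weak field (n = 1000): the two forms agree to roughly 1 part in 10^6.
# At the event horizon (n = 1): 0.36 versus 0.0 -- the truncation fails.
```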

copy of a fast comment

http://motls.blogspot.com/2007/04/resolving-big-bang.html

“Dear active moron, let me inform you that there is no “middle” of the Universe – exactly because of the cosmological principle. People have known this since the time of Copernicus and every sentence about something being “closer to the middle” is a sign of reduced intelligence.” – Lubos Motl

No, the “cosmological principle” has no evidence behind it:

1. first it was definitely the earth-centred universe,

2. then it was changed to the idea that we’re definitely not in a special place, because earth orbits the sun,

3. finally, the false assumption that spacetime is curved over large distances was used to claim there’s no boundary to spacetime because it curves back on itself.

This last idea was disproved in 1998 when the supernova data showed that there’s no gravitational retardation, hence no curvature, on large scales. (The mainstream idea of accounting for this by adding dark energy as a repulsive force to try to cancel out gravity over cosmological distances doesn’t really affect it – spacetime is still flat on the biggest distances, according to empirical evidence.)

The simplest thing is to accept that general relativity works on small scales up to clusters of galaxies, but breaks down over cosmological distances. It’s just an approximation. The stress-energy tensor has to falsely assume that matter and energy are continuums by using an average density, ignoring discontinuities such as the quantization of mass-energy. That’s why you get smoothly curved spacetime coming out of the Ricci tensor. You put in a false smooth source for the curvature, and naturally you get out a false smooth curved spacetime as a result. The lack of curvature on large scales means that spacetime globally doesn’t cause looped geodesics, which only occur locally around planets, stars and galaxies. The universe does have a “middle”:

The fact is, the universe is a quantized mass and energy expanding in 3 spatial dimensions, with curvature of geodesics due to time. It’s a fireball expanding in 3 spatial dimensions, so it has a “middle”. The role of time doesn’t make spacetime boundless, because there’s no curvature on cosmological scales. That’s because gravity-causing gauge bosons exchanged between relativistically receding masses at great distances in an expanding universe will lose energy; their redshift of frequency causes an energy loss by Planck’s law, E = hf.

Hence, gravity is terminated by redshift of the gauge boson force-causing exchange radiation over such massive distances, and there’s no global curvature. On cosmological distance scales, geodesics can’t be looped because of this lack of curvature. Therefore, the role of time in causing curvature on cosmological scales is trivial, spatial dimensions don’t suffer curvature, and the universe isn’t boundless. There is a middle to the big bang, just as there’s a middle to any expanding fireball. Whether spatial dimensions are created in the big bang or not is irrelevant to this.

copy of a comment:

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/

“Nigel,

I appreciate your enthusiam for thinking about these problems. However, it seems clear that you haven’t had any formal education on the subjects. The bare mass and charges of the quarks and leptons are actually indeterminate at the level of quantum field theory. When they are calculated, you get an infinities. What is done in renormalization is to simply replace the bare mass and charges with the finite measured values.” – V

No, the bare mass and charge are not the same as the measured values: they’re higher than the observed mass and charge. I’ll just explain how this works at a simple level for you so you’ll grasp it.

Pair production occurs in the vacuum where the electric field strength exceeds a threshold of ~1.3*10^18 v/m; see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040
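That threshold is the standard Schwinger critical field, E_c = m^2 c^3/(e*h-bar), and the figure quoted above can be checked by plugging in SI constants (a minimal Python sketch; the constants are my inputs):

```python
# Schwinger critical field for electron-positron pair production,
# E_c = m^2 c^3 / (e * hbar), evaluated in SI units (V/m).
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # elementary charge, C
hbar = 1.0546e-34   # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(E_c)  # ~1.3e18 V/m, the threshold quoted above
```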

These pairs shield the electric charge: the virtual positrons are attracted and on average get slightly closer to the electron’s core than the virtual electrons, which are repelled.

The electric field vector between the virtual electrons and the virtual positrons is radial, but it points inwards, towards the core of the electron, unlike the electric field from the electron’s core, which has a vector pointing radially outward. The displacement of virtual fermions is the electric polarization of the vacuum, which shields the electric charge of the electron’s core.

It’s totally incorrect and misleading of you to say that the exact amount of vacuum polarization can’t be calculated. It can, because it’s limited to a shell of finite thickness between the UV cutoff (close to the bare core, where the size is too small for vacuum loops to get polarized significantly) and the IR cutoff (the lower limit due to the pair production threshold electric field strength).

The uncertainty principle gives the range of virtual particles which have energy E: the range is r ~ h bar*c/E, so E ~ h bar*c/r. Work energy E is equal to the force multiplied by the distance moved in the direction of the force, E = F*r. Hence F = E/r = h bar*c/r^2. Notice the inverse square law arising automatically. We ignore vacuum polarization shielding here, so this is the bare core quantum force.

Now, compare the magnitude of this quantum force F = h bar*c/r^2 (high energy, ignoring vacuum polarization shielding) to Coulomb’s empirical law for the electric force between electrons (low energy), and you find the bare core of an electron has a charge e/alpha ~ 137*e, where e is the observed electric charge at low energy. So it can be calculated, and agrees with expectations:

‘… infinities [due to ignoring cutoffs in vacuum pair production polarization phenomena which shields the charge of a particle core], while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’/m and e’/e would be of order alpha ~ 1/137.’

– M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, p76.

I must say it is amazing how ignorant some people are about this, and this is vital to understanding QFT, because below the IR cutoff there’s no polarizable pair production in the vacuum, so beyond ~1 fm from a charge, spacetime is relatively classical. The spontaneous appearance of loops of charges being created and annihilated everywhere in the vacuum is discredited by renormalization. Quantum fields are entirely bosonic exchange radiation at field strengths below 10^18 v/m. You only get fermion pairs being produced at higher energies, and at smaller distances than ~1 fm.
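The factor of ~137 quoted above is just the inverse fine-structure constant: comparing F = h bar*c/r^2 with Coulomb’s law, the r^2 cancels, so the ratio is independent of distance. A Python sketch with SI constants (my own check, not from the comment):

```python
# Ratio of the "bare core" quantum force F = hbar*c/r^2 to the
# Coulomb force F = e^2/(4*pi*eps0*r^2) between two electrons.
# The r^2 cancels, leaving hbar*c*4*pi*eps0/e^2 = 1/alpha.
hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # elementary charge, C
k_e = 8.988e9       # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2

ratio = hbar * c / (k_e * e**2)
print(ratio)  # ~137, the inverse fine-structure constant
```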

http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-256095

nigel on Apr 30th, 2007 at 4:17 pm

Niel B.,

The field lines are radial so they diverge, which produces the inverse square law since the field strength is proportional to the number of field lines passing through a unit area, and spherical area is 4*Pi*r^2.
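The divergence argument can be sketched numerically (a toy illustration of my own in Python): a fixed number of radial field lines spread over a sphere of area 4*Pi*r^2, so doubling the radius quarters the line density.

```python
import math

def field_line_density(n_lines, r):
    """Field lines per unit area through a sphere of radius r."""
    return n_lines / (4 * math.pi * r**2)

n = 1000.0  # arbitrary number of radial field lines
d1 = field_line_density(n, 1.0)
d2 = field_line_density(n, 2.0)
print(d1 / d2)  # ~4.0: doubling r quarters the field strength
```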

The charge shielding is due to virtual particles created in a sphere of space with a radius of about 10^{-15} m around an electron, where the electric field is above Schwinger’s threshold for pair production, ~1.3*10^{18} volts/metre. For a good textbook explanation of this see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040

Here’s direct experimental verification that the electron’s observable charge increases at collision energies which bring electrons into such close proximity that the pair production threshold is exceeded:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick found a 7% increase in the strength of the Coulomb/Gauss force law when colliding electrons at energies of about 80 GeV. The coupling constant for electromagnetism is alpha (~1/137.036) at low energies, but increases by 7% to about 1/128.5 at around 80 GeV.
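The one-loop QED running of the coupling gives the flavour of this result (a Python sketch; the formula below includes only the electron loop, so it deliberately understates the running — summing over all charged fermion loops is what brings 1/alpha down to the measured ~128.5 near the Z scale):

```python
import math

alpha0 = 1 / 137.036   # fine-structure constant at low energy
m_e = 0.511e-3         # electron mass in GeV

def inv_alpha(Q):
    """One-loop QED running with a single electron loop:
    1/alpha(Q) = 1/alpha(0) - (1/(3*pi)) * ln(Q^2 / m_e^2)."""
    return 1 / alpha0 - math.log(Q**2 / m_e**2) / (3 * math.pi)

print(inv_alpha(80.0))  # ~134.5 (electron loop only; all fermions give ~128.5)
```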

This is an increase in electric charge (i.e., an increase in the total number of electric field lines in Faraday’s picture), nothing whatsoever to do with the radial divergence of electric field lines. The electric charge increases not due to divergence of field lines, but due to some field lines being stopped by polarized pairs of fermions which cancel them out, as explained in comment 63. The coupling constant of alpha corresponds to the observed electric charge at low energy. This increases at higher energy.

Think of it like cloud cover. If you go up through the cloud in an aircraft, you get increased sunlight. This has nothing whatsoever to do with the inverse square law of radiation flux; instead it is caused by the cloud absorbing energy. My argument is that the electric charge energy that’s shielded causes the short-ranged forces, since the loops give rise to massive W bosons, etc., which mediate short-ranged nuclear forces. Going on to higher energy (through the cloud to the unshielded electromagnetic field), there won’t be any energy absorbed by the vacuum, because the distance is too small for pairs to polarize, hence there won’t be any short-ranged nuclear forces.

So by injecting the conservation of mass-energy into QED, you get an answer to the standard model unification problem: the electromagnetic coupling constant increases from alpha towards 1 at distances so tiny from the electron core that there is simply no room for polarized virtual charges to shield the electric field. As a result, there are no nuclear forces beyond the short-ranged UV cutoff, because there is no energy absorbed from the electromagnetic field by polarized pairs, which could produce massive loops of gauge bosons. (By contrast, supersymmetry is based on the false assumption that all forces have the same energy approaching the Planck scale. There’s no physics in it. It’s just speculation.)

**************

Because the bare core of the electron has a charge of 137.036e, the total energy of the electron (including all the mass-energy in the short-ranged massive loops which polarize and shield the 137.036e core charge and associated mass down to the observed small charge e and observed small mass) is 137*0.511 = 70 MeV. Just as it seemed impossible to mainstream crackpots in 1905 that there was a large amount of unseen energy locked up in the atom, they also have problems understanding that renormalization means there’s a lot more energy in fundamental particles (tied up in the creation-annihilation loops) than that which is directly measurable.

copy of a comment:

http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=31

Your comment is awaiting moderation.

May 2nd, 2007 at 5:12 am

May I ask what is a ‘science journalist’ defined as? A journalist who covers science stories? Or a scientist who writes?

Science journalism went wrong in trying to explain special relativity to the public while ignoring general relativity.

‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’

– Professor Lee Smolin, ‘Einstein’s Legacy — Where are the “Einsteinians?”’, http://www.logosjournal.com/issue_4.3/smolin.htm

Until then, the big name scientists were highly cautious in speaking or writing about their frontier research, insisting on caveats and emphasizing uncertainties and alternative ideas that may be correct.

Books written by big name scientists, at least physicists, after that were more insistent on theories, dropping the caveats. Special relativity is insisted upon without any caveats by mainstream physicists who don’t understand the background independence of general relativity.

All science journalists, like string theorists, promote special relativity. It’s a matter of orthodoxy.

Copernicus argued that the sun doesn’t orbit the earth, which – if special relativity applies to all motion – would be an empty statement.

Therefore, the special relativists (Einstein called them ‘restricted’ relativists) pervert Copernicus’s discovery.

Instead of Copernicus having discovered evidence that the earth orbits the sun, they claim, Copernicus discovered evidence that the earth isn’t in a special place in the universe.

This is the ‘cosmological principle’.

However, like the string theory principles, it’s not even wrong. General relativity deals with accelerations, which don’t obey special relativity! In fact, you can’t ever have non-accelerating motion in this universe. So special relativity is just a very restricted approximation that ignores the fact that things normally move along curved geodesics:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates [e.g., including absolute co-ordinate systems], that is, are co-variant with respect to any substitutions whatever …’

– Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916. (Italics are Einstein’s own.)

This is the need for background independence.

The laws of physics are free from specific metrics like the Minkowski spacetime of special relativity. The metric is determined as the solution to the field equation. You can’t have curved spacetime if there is nothing absolute to be curved. The bottom line is, nobody has any evidence against absolute motion, and there is some evidence for absolute motion (accelerations), requiring general relativity:

‘The Michelson-Morley experiment has thus failed to detect our motion … because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus… The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’

– Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

At university, I came across Eddington’s book, which explains (see above) that general relativity is an absolute motion theory, because if you are spun around you know you’re moving even if you can’t see any stars (you get a feeling of nausea that tells you you’re in a state of spin).

I then did some research and found a popular article by R. A. Muller of the University of California, ‘The cosmic background radiation and the new aether drift’, published in Scientific American, vol. 238, May 1978, pp. 64-74, which states:

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’

Absolute motion at 600 km/s? Multiply that velocity by the age of the universe, and you get the distance we are from the middle of the universe: about 0.3% of the radius! (Actually, we will be even closer to the middle than that, because much of the 600 km/s speed is due to the attraction of the Milky Way towards Andromeda, and also during the age of the universe the matter we’re sitting on will have moved generally in a zig-zag due to deflections by the gravity of other matter, instead of moving in a straight line. So the net velocity away from the origin in spatial dimensions will be a lot less, and the distance covered will be smaller.)
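That back-of-envelope estimate is easy to reproduce (a Python sketch; taking the radius of the fireball as ~c times the age of the universe is my assumed input, in the flat-space picture argued for above — the age then cancels, leaving just v/c):

```python
# Fraction of the universe's radius traversed at 600 km/s, taking the
# radius of the expanding fireball as ~c*t. The age t cancels, so the
# fraction is simply v/c, independent of the assumed age.
v = 600e3    # CMB dipole speed of the Milky Way, m/s
c = 2.998e8  # speed of light, m/s

fraction = v / c
print(fraction)  # ~0.002, i.e. ~0.2% of the radius, same order as 0.3% above
```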

There is evidence that a ‘middle’ to the big bang fireball does exist: in 1998 the idea that the universe is curved and thus ‘boundless’ on the greatest distance scales disappeared. Geodesics can’t curve back on themselves. The universe isn’t boundless; it’s spherical. This was because the gravitation resulting from such curvature wasn’t observed in the supernovae recession data published by Perlmutter. Spacetime is flat on the greatest distance scales, as Nobel Laureate Professor Phil Anderson argued:

‘… my points were … the flat universe is just not decelerating, it isn’t really accelerating … there’s a bit of the “phlogiston fallacy” here, one thinks if one can name Dark Energy or the Inflaton one knows something about it.’ – http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

The problem is that, however much you may want to ask questions, John, the mainstream people answering will waylay you, obfuscate, even lie. They don’t want skeletons in the scientific cupboard to see the light of day. Those, like Smolin and Anderson, who don’t hype the mainstream consensus are too cautious in saying what the facts actually are. So it just drags on. The public have a completely misinformed outlook, as does much of the mainstream. There’s no way around it.

In political matters such as Watergate and string theory, it is possible for a Woodward and Bernstein or Smolin and Woit to expose corruption because there is some interest in exposing errors.

What happens in a scientific revolution is that the mainstream never admits being wrong. They just abuse their authority and ignore criticisms, and as a result eventually the mainstream ideas are perceived to be boring. Science journalism is as corrupt and politically dominated as the mainstream scientists being interviewed.

This is so because if you try to make sense of alternatives to the mainstream, you – as a journalist – will be in even greater difficulties. For one thing, many alternatives must be wrong because they are incompatible with one another, so you get mired down (however, being wrong is better than not-even-wrong mainstream religions). For another, there is less public interest in alternative ideas than in the mainstream. Notice that when Woodward and Bernstein exposed Nixon’s corruption, they weren’t asked to produce something better than Nixon to go in his place. That’s the problem for Smolin and Woit. Science is tougher than politics, for investigative journalists who want to avoid being sidelined.

copy of a comment to a Harvard University string theory professor’s crackpot blog which claims April 2007 was a record cold month:

http://motls.blogspot.com/2007/05/pa-coldest-april-in-32-years.html

It was so hot last month I got sunburn on my nose after a long walk along Clacton beach. See

http://www.metoffice.gov.uk/corporate/pressoffice/2007/pr20070502.html

News release, 2 May 2007

Record breaking warm April

Met Office climate scientists have confirmed today that April 2007 and the 12-month rolling period records have been the warmest on record.

Final April and 12-month rolling figures in the Central England Temperature (CET) series are:

11.2 °C for the month of April, beating the previous record of 10.6 °C set in 1865

11.6 °C for the 12-month rolling period for May 2006 to April 2007, beating the previous record of 11.1 °C for the 12-month period ending October 1995.

So change the title of this lunatic post to “hottest April ever”. Thank you.

Homepage | 05.04.07 – 1:20 pm | #

Follow up comment to the Harvard string theorist’s crackpot climate blog:

http://motls.blogspot.com/2007/05/pa-coldest-april-in-32-years.html

I didn’t suggest that warm air caused the sunburn to my nose.

The cause of the sunburn was the same as the source of the hot air – the sun:

1. Clouds are white and reflect sunlight back into space, cooling the earth.

2. At night time, cloud cover doesn’t reflect thermal radiation back because the sun is not out. Instead, cloud cover at night acts like a blanket and traps heat in the earth’s atmosphere, preventing it being radiated into outer space.

3. Clear sky during the daytime followed by cloud cover at night (caused by water evaporated from the sea during the day), maximises the temperature of the air. The clouds produce rainfall at dawn, and the sky clears for the sun in daylight. This sequence maximises heating of the air.

At the same time, this sequence maximises the risk of sunburn to my nose, because the sky is clear in daytime!

copy of a comment:

http://www.stevens.edu/csw/cgi-bin/blogs/csw/?p=32

“Neffe is also more adept at explaining Einstein’s influence on modern researchers, including string theorists, cosmologists, and explorers of the oddities of quantum mechanics.” – John Horgan

I’m intrigued to find out what that sentence in your review implies. The “Elegant Universe” of string theory is totally contrary to Einstein’s pragmatic approach to science: Einstein came up with general relativity to solve existing real problems with Newtonian gravity like the precession of the perihelion of Mercury, and to make checkable predictions about how much gravitational deflection starlight would be seen to have when passing close to the sun during eclipses. Einstein actually wrote:

“I adhered scrupulously to the precept of that brilliant theoretical physicist L. Boltzmann, according to whom matters of elegance ought to be left to the tailor and to the cobbler.”

– Albert Einstein, December 1916 Preface to his book Relativity: The Special and General Theory, English translation by Robert W. Lawson for the 1920 Methuen & Company edition, London.

To Einstein, elegance is a matter for tailors, not scientists. There is a vast difference between Einstein’s science and the travesty of science which masquerades as physics under the banner of string theory, ad hoc dark energy cosmology, and religious belief in the Copenhagen Interpretation of quantum mechanics.