Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e. smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron. The whole idea of quantum field theory is to remove the calculus of curvature from classical gravitation and replace it with quantized jumps caused by discrete field quanta, graviton interactions. If you believe the Pauli-Fierz string theory lie, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity. (Basically, spin-1 gravitons push, while spin-2 gravitons suck.)
Tom Bethell, senior editor of the American Spectator, has just written an article called “Relativity and relativism” in the Washington Times newspaper criticizing Einstein’s special relativity theory, which we will quote and discuss at the end of this post. First, let’s examine the relationship between relativity, classical fields and quantum gravity.
Below we give an improved presentation of the simple basic calculation in the earlier blog post linked here. In October 1996, we showed via page 893 of Electronics World that spin-1 quantum gravitons do the job now attributed to “dark energy” in accelerating the universe (the “cosmological constant”) as well as quantum gravity.
The cosmological repulsion, and consequently the correct cosmological constant, was predicted in 1996, years ahead of first being measured. (Few people had any interest, and despite concern from the editor, Classical and Quantum Gravity’s peer-reviewers would not support publication of any non-string theory predictions on quantum gravity. A fellow Electronics World author, Mike Renardson, kindly wrote to suggest that the predicted 7 × 10^-10 ms^-2 acceleration was too small to detect, yet over large, cosmological-sized distances it proved measurable by Perlmutter and other astronomers two years later, using automated detection of standard-brightness supernovas by new software working off telescope feeds in real time. The measured luminosity indicated distance, while the measured redshift allowed the acceleration of the universe to be determined.)
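The order of magnitude of that figure is easy to check, assuming (as in the 1996 prediction discussed above) that the cosmological acceleration is of order a = cH, the product of the velocity of light and the Hubble parameter. A minimal sketch; the Hubble constant value below is an illustrative assumption, not a fitted number:

```python
# Order-of-magnitude check of the predicted cosmological acceleration,
# assuming it takes the form a = cH (H0 value is illustrative).
c = 2.998e8          # velocity of light, m/s
H0_km_s_Mpc = 70.0   # assumed Hubble constant, km/s/Mpc
Mpc = 3.086e22       # metres per megaparsec
H0 = H0_km_s_Mpc * 1e3 / Mpc   # Hubble parameter in s^-1
a = c * H0                      # cosmological acceleration, m/s^2
print(f"a = cH = {a:.1e} m/s^2")   # of order 7e-10 m/s^2
```

With any Hubble constant in the observationally accepted range, the product cH comes out within a factor of order unity of the quoted 7 × 10^-10 ms^-2.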
Since 1998, more and more data has been collected, and the presence of a repulsive long-range force between masses has been vindicated observationally. The two consequences of spin-1 gravitons are two aspects of the same thing: distant masses are pushed apart, while nearby small masses exchange gravitons less forcefully with one another than with the masses around them, so they get pushed together, like the Casimir force effect.
Using an extension to the standard “expanding raisin cake” explanation of cosmological expansion, in this spin-1 quantum gravity theory, the gravitons behave like the pressure of the expanding dough. Nearby raisins have less dough pressure between them to push them apart than they have pushing in on them from expanding dough on other sides, so they get pushed closer together, while distant raisins get pushed further apart. There is no separate “dark energy” or cosmological constant; both gravitation and cosmological acceleration are effects from spin-1 quantum gravity (see also the information in an earlier post, The spin-2 graviton mistake of Wolfgang Pauli and Markus Fierz for the mainstream spin-2 errors and the posts here and here for the corrections and links to other information).
As explained on the About page (which contains errors and needs updating), NASA has published Hubble Space Telescope estimates of the immense amount of receding matter in the universe, and since 1998 Perlmutter’s data on supernova luminosity versus redshift have shown the size of the tiny cosmological acceleration, so the relationship in the diagram above predicts gravity quantitatively; alternatively, you can normalize it to Newton’s empirical gravity law so that it then predicts the cosmological acceleration of the universe, which it has done since publication in October 1996, long before Perlmutter confirmed the predicted value (both are due to spin-1 gravitons).
At some stage, improvements in the presentation of these diagrams may reach the point at which people generally grasp them within the time they spend focussing on them, i.e. where the diagram looks self-evident, obvious and incontrovertible rather than off-putting. One thing that needs to be included is some of the gauge theory mathematics with simple explanations, e.g. some of the material from Feynman’s 1985 book QED which I’ve discussed in earlier posts. I’ve not included in the diagram the cross-section for quantum gravity interactions, the gauge theory of U(1), how it predicts lepton and quark mass patterns, how it replaces the Higgs mechanism for mass and modifies the electroweak theory, etc. Maybe I will have to condense all that down to a single diagram before this is really taken seriously.
Nobody uses the argument that “off-shell gauge bosons that cause fundamental forces should cause drag like a gas of on-shell particles and slow down (as well as heat up) the planets” to deny mainstream quantum field theories of the Casimir effect and the concept of a theory of quantum gravity, but this kind of vacuous “argument”, along with historical attacks on LeSage’s gravity idea, is still levelled against spin-1 gravitons whenever I explain them.
It’s a bit like false experts (charlatans and crackpots) trying to debunk Darwin’s evolution by saying:
“Lamarck had the idea of evolution and he got the details wrong; now you are coming up with a new version of an old debunked idea which evades the problems of the old idea. How stupid, pointless, and pathetic!”
There is no science in such an “objection”. It’s just a statement of pure political-style prejudice, not science. There is no way to respond scientifically to purely political objections which ignore the scientific facts. (If reality turns out to be an old idea with some modifications, then tough cheese to those who believe in string theory. Atoms were an old idea by the ancient Greeks when Dalton revived them two thousand years later. What matters is the new evidence on offer, not how famous the critics of the old evidence for the same idea were. I don’t care how famous critics of LeSage were years ago. Science isn’t about the political standing of a “critic” of an old version of an idea. Science is just about the facts.)
Another “objection” of the same sort is the aether, discussed in the previous post and in the one linked here. This “objection” says falsely that Heisenberg’s and Schroedinger’s first-quantization quantum mechanics disproved causality and mechanism in the universe, because the uncertainty principle makes the world crazy.
The answer to that is simply that first-quantization quantum mechanics went out of the window in 1927 when Dirac’s relativistic quantum field theory replaced it. First-quantization is a lie: it treats the Coulomb field binding the electron to the nucleus classically, so the chaos of the motion of the electron has to be falsely introduced by making the electron’s motion intrinsically indeterminate. Second-quantization, as Feynman explains, gets rid of this application of the uncertainty principle because it simply treats the Coulomb field properly as a quantum field, in which field quanta (random discrete interactions) replace the false classical smooth Coulomb field. The electron has a chaotic motion because the quantum electromagnetic field binding it in its orbit of the nucleus is chaotic, as Feynman explains:
“I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle! … When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [amplitudes for different paths] to predict where an electron is likely to be.”
– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-85.
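Feynman’s “adding arrows” picture can be sketched numerically: each path contributes a unit arrow rotated by its phase; paths near the stationary (minimum-time) path have nearly equal phases and reinforce, while paths far from it have rapidly varying phases and cancel. The quadratic phase model below is an illustrative assumption for simplicity, not a full path integral:

```python
import cmath

def arrow_sum(phases):
    """Add unit 'arrows' (complex phasors), one per path, and
    return the length of the resultant arrow."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

# Model each path's phase as quadratic in its deviation x from the
# stationary (least-action / minimum-time) path: phase = k * x^2.
k = 50.0
paths = [i / 100.0 for i in range(-100, 101)]  # deviations from -1 to 1

near = [k * x * x for x in paths if abs(x) < 0.1]   # near-stationary paths
far  = [k * x * x for x in paths if abs(x) >= 0.1]  # far-from-stationary paths

# Near the stationary path the phases barely vary, so arrows reinforce;
# far from it the phase varies rapidly, so arrows largely cancel.
print(arrow_sum(near) / len(near))   # close to 1: strong reinforcement
print(arrow_sum(far) / len(far))     # much smaller: cancellation
```

This is just the stationary-phase mechanism Feynman describes: the “main path” emerges where there are enough neighbouring paths to reinforce each other.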
“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [they] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”
– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.
“The quantum collapse [in the multiple universes, non-relativistic, pseudoscientific first-quantization model of quantum mechanics] occurs when we model the wave moving according to Schroedinger time-dependent and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger time-independent. The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
– Dr Thomas Love, California State University.
“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”
So much for first-quantization. Like Ptolemy’s epicycles or the Bohr atom, it can be used to make approximate calculations. What is stupid is the way it is taught and popularized as being physically deep, when it is just a non-relativistic classical Coulomb field model. What will it take, really, to get people to give up such pseudoscience and embrace physical reality? Einstein and his early (but not late) hero Mach dismissed physical mechanisms of phenomena in the vacuum, and this is still done in quantum field theory even by critics of non-falsifiable string speculation. Why? What physical justification do they have? Fashion. Yes, fashion. Here is the problem of fashion (first-quantization) being mistaken for fact, from the 2010 book by Erik von Markovik and Chris Odom, The Pickup Artist, published by Villard, New York, page 225:
“From quantum mechanics, I learned that a particle isn’t really in a specific location until it is observed. Until then, it exists as a fuzzy probability cloud. It’s only when a sentient being observes it that it actually collapses into a specific particle at a specific location. Experiments show that it is the act of observation itself that makes the probability collapse and become ‘real’.”
This is first-quantization, the 1925 non-relativistic quantum mechanics of Heisenberg’s matrices and Schroedinger’s wave equation, promoted with inaccurate “experiments” hype: there are no experiments free of flaws like Alain Aspect’s “accidentals subtracted” statistical fiddle that show such a thing, which may explain why there have been few Nobel Prizes awarded for entanglement, quantum computing, string theory, parallel universes, and so on, although of course Heisenberg and Schroedinger got them for first-quantization mythology. As Dr Thomas Love of California State University has explained, wavefunction collapse is generated by unnatural mathematical models which don’t consistently represent reality: it is due to a discontinuity between the time-dependent and time-independent wavefunction modelling equations.
Einstein was thus right to the extent that he dismissed the subjectivist first-quantization approach to quantum mechanics. The shame is that he also dismissed the facts: he listened to Feynman’s presentation of path integrals in an informal seminar organized by Wigner at Princeton, yet ignored Feynman’s presentation of second quantization, consistent quantum field theory. What you always find is that the few critics of the mainstream properly heard to date, like Bell and Bohm, and maybe Lee Smolin too, have all ended up being obfuscators, seeking to introduce ad hoc infinite potentials and other hidden variable or otherwise “crying wolf” ideas which just discredit non-mainstream ideas generally. They don’t seem to grasp the point that Feynman had already sorted out the whole problem. The path integral concept applies to field quanta travelling along all possible paths during their exchange between charges. The Casimir force measurement confirms the reality of this. Sure, if you fire a photon of light at an electron to try to find out its exact position and momentum, you are faced with uncertainty: the electron is moved by the impact, ending up with a product of uncertainty in position and momentum of at least half the unit of quantum action, h-bar (assuming that you can measure the photon’s properties precisely, which of course you can’t, so the total uncertainty is still greater than half h-bar). But that doesn’t mean that the electron’s position was indeterminate before it was hit by your photon! In other words, just because observing something interferes with it by the impact of the photon of light, that doesn’t prove that the particle was really in an indeterminate state between parallel universes!
Similarly, a blind man swinging a golf club around in order to detect the position of a golf ball will have uncertainty even when he hears the club hit the ball, because the ball will then have moved and won’t be where it was at the instant of being hit. But the golf ball doesn’t need to be split between two parallel universes before he hits it! Similarly, measuring the potential of a battery will drain it slightly, measuring tyre pressure involves the escape of some air and a fall in pressure, shining a light on a painting fades it slightly. You can’t observe something without interacting in some way, but this doesn’t imply intrinsic chaos. The uncertainty principle has its uses, but as Feynman said, you don’t need it to give rise to wavefunction collapse in quantum mechanics, unless you’re using obsolete, first-quantization, pre-Dirac, flawed mathematical models.
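The measurement-disturbance bookkeeping in the photon example above is easy to put in numbers. A minimal sketch, using only the standard minimum-uncertainty relation Δx·Δp ≥ ħ/2; the atomic-scale Δx below is an illustrative value:

```python
hbar = 1.0546e-34  # reduced Planck constant, J s

def min_momentum_uncertainty(dx):
    """Minimum momentum uncertainty dp = hbar/(2*dx) imposed by
    disturbing a particle when localizing it to within dx."""
    return hbar / (2.0 * dx)

dx = 1e-10   # localize an electron to atomic dimensions, ~0.1 nm
dp = min_momentum_uncertainty(dx)
m_e = 9.109e-31  # electron mass, kg
print(f"dp >= {dp:.2e} kg m/s, i.e. dv >= {dp / m_e:.2e} m/s")
```

The disturbance is real and quantitative, but, as argued above, it says nothing about the particle’s state before the measuring photon arrived.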
Returning to the fashion-conscious pickup book by Erik which describes first quantization: he is the star of a VH1 TV programme called (like his latest book) The Pickup Artist. Surprisingly, there are some really deep connections here between making yourself attractive socially and making your quantum gravity theory attractive to people generally. You need to gain social acceptance in both situations. At some stage you have to stop focussing on individuality and start to show some of the non-individual general characteristics required for social acceptance, such as getting papers published in peer-reviewed journals. The trick is to be individual and yet still fit into normal social circles: in other words, there is a high degree of constraint on the sort of personality you are allowed by evolution. The author Leil Lowndes puts it lucidly when she exposes the whole mythology of love with the words: “Evolutionary theorists tell us that, even when considering one-night nookie with a nerd she never wants to see again, a woman subconsciously listens to her genes.” (L. Lowndes, Undercover … signals, Citadel Press, page 126.) Of course, the problem here is that peer-reviewers have a prejudice in favour of the status quo.
Although science is supposedly progressive, there is resistance to new ways of thinking, so you need a lot of patience, time and energy to deal with the process of responding to pseudoscientific objections presented in a way that tells you the person making them doesn’t even know what science is all about, and just thinks science is playing with existing string theory or some other not-even-wrong speculative framework established sixty years ago which has never led to a single falsifiable prediction. Darwin never engaged in arguments with bigoted “critics”; he just wrote down the evidence and left the mudslinging to others while continuing his investigations. So advice to try to overcome bigotry is wrong: you end up either ignoring the non-scientific “criticisms” or arguing about philosophy, and in neither case does this go in a fruitful direction. It just sucks in your time and energy, which the peer-reviewers waste on non-scientific matters. So although journal peer-reviewed publication is inevitable at some stage, it isn’t necessarily suited to this kind of problem. In the context of dating, it reminds me of the silly advice I used to get to waste time in nightclubs, where the music was too loud for speech to be heard. That is a waste of time since it prevents any communication at all, just like the peer-reviewers with their lying spin-2 obsessions.
Physical space, the final frontier of quantum field theory
Maxwell’s gear cog and idler wheel-filled aether was wrong, and so was Kelvin’s vortex atom aether, so physicists moved away altogether from “mechanisms” in fundamental physics and sought out purely mathematical models. The S-matrix, as described in a previous post, was the supreme expression of the rejection of the search for physical understanding in terms of mechanism. Mainstream efforts on the S-matrix were initially used in the 1950s and 1960s to fight off Feynman’s quantum field theory. But it failed in the long run, like epicycles, and was overtaken by quantum field theory, which gave rise to the Standard Model of particle interactions. However, the S-matrix legacy of abstraction lives on in the fact that, still today, quantum field theory is submerged in the wrong type of mathematics, since the gauge theory is done using differential geometry, which is the application of continuously variable fields to approximate discontinuous (quantized) fields! As a result, off-shell radiations in quantum fields are not taken seriously as physical entities of fundamental importance. Hence the popular misconceptions about the empirically defensible Casimir force which were discussed in the previous post on this blog.
Really, people should be using Monte Carlo models on computers, with field quanta being randomly simulated, flying between charges to model gauge theory properly in space to cause forces by interactions, instead of using the physically false mathematical “approximation” to such quantum fields by continuously varying curvatures of differential geometry. Calculus is a good approximation for describing the effects of large numbers of quanta, but it leads to problems in understanding individual interactions with physical clarity. The path integral really should be replaced by a path summation. There is no curvature of spacetime in a real quantum field (although there is curvature in the currently used mathematical model of gauge theory); electrons are accelerated not in a continuous, classical manner by an electric or gravitational field, but in a series of discrete steps due to discrete quantum field interactions!
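The kind of Monte Carlo model suggested above can be sketched very simply. In the toy simulation below (a hypothetical two-dimensional illustration, not a calibrated calculation), quanta arrive at a test mass from random directions, each delivering a discrete kick of momentum; a neighbouring mass blocks the quanta arriving from within the angle it subtends, so the kicks no longer cancel and the test mass receives a net push towards its neighbour, LeSage-style:

```python
import math, random

random.seed(42)

def net_push(block_half_angle, n_quanta=200_000):
    """Monte Carlo: quanta hit a test mass isotropically from all
    directions in the plane, each delivering unit momentum. Quanta
    arriving from within +/- block_half_angle of the direction of a
    neighbouring mass (at angle 0) are blocked by it. Returns the net
    momentum per incident quantum along the axis towards the neighbour."""
    px = 0.0
    for _ in range(n_quanta):
        theta = random.uniform(-math.pi, math.pi)  # arrival direction
        if abs(theta) < block_half_angle:
            continue  # shadowed by the neighbouring mass: no impact
        # a quantum arriving FROM angle theta pushes the mass along -theta
        px += -math.cos(theta)
    return px / n_quanta

# With no shadowing the discrete kicks cancel on average; with shadowing
# there is a net push towards the neighbour, growing with the angle the
# neighbour subtends (i.e. increasing as the neighbour gets closer).
print(net_push(0.0))   # ~0: isotropic exchange, no net force
print(net_push(0.2))   # > 0: net push towards the neighbour
print(net_push(0.4))   # larger still: closer neighbour, bigger push
```

The force here emerges purely from summing discrete random interactions, with no differential geometry anywhere: exactly the path-summation style of modelling advocated above.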
One of the most shockingly groupthink-ignored questions in quantum field theory is the influence of motion on the interactions between real particles and the supposedly off-shell fields around them. As explained in the previous post, vacuum polarization shields electromagnetic field energy, which is given to off-shell particles, making some of them approach an on-shell condition when those positive and negative virtual fermions are dragged far apart by electric fields. This gives them energy and affects their survival time, making them less virtual and more susceptible to the influences on real (on-shell) fermions, such as the Pauli exclusion principle (which gives a particular amount of geometric space to each fermion, pairs up adjacent fermions with opposite spins, determines shell structures, etc.).
Although bosons don’t obey the exclusion principle due to their integer rather than half-integer spin, neutral Z bosons created by the annihilation of virtual fermions are affected by the locations of those virtual fermions when they annihilated, and thus in polarized virtual fermion fields, the creation of neutral Z bosons can be indirectly affected by the Pauli exclusion principle acting upon the fermions which annihilated to give rise to those bosons. Such Z bosons, having an intrinsic gravitational charge (mass) from an electroweak U(1) quantum graviton gauge theory of spin-1 gravitons, can mire down the motion of particles, thus giving them mass. So we have a model in which different fundamental particles have different possible discrete shell structures of weak field bosons around them, “miring” their motion to different extents as a pseudo-Higgs field, and thereby giving rise to all of the masses we observe for fundamental particles by analogy to quantized atomic electron structures.
One other idea from the picture of a vacuum field affecting particle motion is the analogy to a gas. A helicopter moves up because its rotor blades blow air down, so Newton’s 3rd law (the equal and opposite “reaction force”) acts upward, offsetting the gravitational force (weight). If you look at the quantum gravity mechanism this blog is about, you see that distant stars are accelerating away from us, and their reaction force is simply graviton radiation emitted in our direction, satisfying Newton’s 3rd law. The whole point of field quanta in quantum field theory is to TRANSMIT fundamental forces through the vacuum, but this is being obfuscated by the approximations used in the gauge theory framework. The Yang-Mills model has been discussed before and is the subject of a paper I’m preparing. It’s simply the Maxwell equations with an added term for charge transfer via massive charged field quanta in weak interactions. This whole approach of using classical differential field equations needs to be replaced with a working model based on summing discrete quantum interactions of off-shell field particles. The Yang-Mills model can be retained for many purposes, but it is inherently obfuscating in certain situations (such as interactions of small numbers of field quanta in small spaces and over brief periods of time), a fact which needs to be physically understood as giving rise to cut-offs on running couplings and thus the need to renormalize gauge theories. This mathematical model problem should not be covered up by handwaving and technical efforts towards symbolic obfuscation. A distant galaxy accelerating away from us can be modelled in just the same way, therefore, as a rocket accelerating away from us. Instead of exhaust gas, you simply have a net flow of graviton field quanta. Just the same amount of energy is used to accelerate a given mass by the same amount in each situation.
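The rocket analogy above is ordinary Newton’s 3rd law momentum bookkeeping. A minimal sketch with made-up illustrative numbers (it assumes nothing about gravitons themselves, only the classical reaction-force arithmetic): the thrust from an expelled momentum flux equals the mass of the craft times the acceleration it produces:

```python
def thrust(mass_flow_rate, exhaust_speed):
    """Reaction force from expelling momentum: F = dp/dt = (dm/dt) * v."""
    return mass_flow_rate * exhaust_speed

# Illustrative numbers: a 1000 kg craft expelling 2 kg/s at 500 m/s.
m_craft = 1000.0                 # kg
F = thrust(2.0, 500.0)           # reaction force, N (Newton's 3rd law)
a = F / m_craft                  # resulting acceleration of the craft
print(F, a)                      # 1000.0 N, 1.0 m/s^2
```

Replacing the exhaust gas with a net flux of field quanta leaves this bookkeeping unchanged, which is the point being made above.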
Lies that pay in social interactions and spin-2 graviton “theory”: how lying groupthink delusion wins out over the facts
Hitler’s lies in Mein Kampf were an aid to his social success in gaining power in Germany in 1933. No, I’m not saying here that the 1940s Holocaust is analogous to spin-2 Witten deception: I’m talking about the propaganda trick used to secure power for National Socialism in Germany in 1933. Pointing out the facts does not win over the mob. Telling the mob lies which conform to their long-held prejudices does win over the mob. Hitler didn’t succeed by being generally unpopular in 1933. He became unpopular after causing World War II. Thuggery won for a long while because of this kind of groupthink-delusion-encouraged sh*t (see the post linked here for more details):
It is well established that lying weapons-effects-exaggerating groupthink “pacifism” in Britain was really war-mongering because it directly helped Hitler get into the position to start the Holocaust and World War II without opposition until the very last moment, yet despite helping Hitler these pseudo-peace supporters have won numerous prizes and held the world’s media in awe and rapture, while Churchill was being dismissed as a danger to peace because he didn’t believe in effectively collaborating with evil in order to secure a worthless promise of “peace”.
Despite this, we live in an age of groupthink where the media and the “moral majority” support lying for nefarious reasons. When you look at the points made in this and previous posts about first and second quantization lies, who do you blame? The top professors? The media? The public generally for believing the lies? There is a lot more to be learned about how the Communists and the Nazis used groupthink delusions to oppose the facts, and often received support abroad by other groups hell-bent on lying to the public. I’ve included a more contemporary example of groupthink delusion in the blog post linked here. The vitally important, suppressed fact about the inhumane monsters is that they have loyal and devoted followers who shield them from the facts, and in acquiring power at least, the general public want to believe in their lies because they see them as fashionable prejudices. Spin-2 gravitons are just the same.
Why the rank-2 stress-energy tensor of general relativity does not imply a spin-2 graviton
“If it exists, the graviton must be massless (because the gravitational force has unlimited range) and must have a spin of 2 (because the source of gravity is the stress-energy tensor, which is a second-rank tensor, compared to electromagnetism, the source of which is the four-current, which is a first-rank tensor). To prove the existence of the graviton, physicists must be able to link the particle to the curvature of the space-time continuum and calculate the gravitational force exerted.” – False claim, Wikipedia.
Previous posts explaining why general relativity requires spin-1 gravitons, and rejects spin-2 gravitons, are linked here, here, here, here, and here. But let’s take the false claim that gravitons must be spin-2 because the stress-energy tensor is rank-2. A rank-1 tensor is a first-order (once differentiated, e.g. da/db) differential summation, such as the divergence operator (the sum of field gradients) or the curl operator (the sum of all of the differences in gradients between field gradients for each pair of mutually orthogonal directions in space). A rank-2 tensor is some defined summation over second-order (twice differentiated, e.g. d^2a/db^2) differential equations. The field equation of general relativity has a different structure from Maxwell’s field equations for electromagnetism: as the Wikipedia quotation above states, Maxwell’s equations of classical electromagnetism are vector calculus (rank-1 tensors or first-order differential equations), while the tensors of general relativity are second-order differential equations, rank-2 tensors.
The lie, however, is that this is physically deep. It’s not. It’s purely a choice of how to express the fields conveniently. For simple electromagnetic fields, where there is no contraction of mass-energy by the field itself, you can do it easily with first-order equations, gradients. These equations calculate fields with a first-order (rank-1) gradient, e.g. electric field strength, which is the gradient of potential with distance, measured in volts/metre. Maxwell’s equations don’t directly represent accelerations (second-order, rank-2 equations would be needed for that). For gravitational fields, you have to work with accelerations, because the gravitational field contracts the source of the gravitational field itself, so gravitation is more complicated than electromagnetism.
The people who promote the lie that because rank-1 tensors apply to spin-1 field quanta in electromagnetism, rank-2 tensors must imply spin-2 gravitons, offer no evidence for this assertion. It’s arm-waving lying. It’s true that you need rank-2 tensors in general relativity, but it is not necessary in principle to use rank-1 tensors in electromagnetism: it’s merely easiest to use the simplest mathematical method available. You could in principle use rank-2 tensors to rebuild electromagnetism, by modelling the equations on observable accelerations instead of unobservable rank-1 electric and magnetic fields. Nobody has ever seen an electric field: only accelerations and forces caused by charges. (Likewise for magnetic fields.)
There is no physical correlation between the rank of the tensor and the spin of the gauge boson. It’s a purely historical accident that rank-1 tensors (vector calculus, first-order differential equations) are used to model fictitious electric and magnetic “fields”. We don’t directly observe electric field lines or electric charges (nobody has seen the charged core of an electron; what we see are the effects of forces and accelerations, which can merely be described in terms of field lines and charges). We observe accelerations and forces. The field lines and charges are not directly observed. The mathematical framework for describing the relationship between the source of a field and the end result depends on the definition of the end result. In Maxwell’s equations, the end result for an electric charge which is not moving relative to the observer is a first-order field, defined in volts/metre. If you convert this first-order differential field into an observable effect, like force or acceleration, you get a second-order differential equation, acceleration a = d^2x/dt^2. General relativity doesn’t describe gravity in terms of a first-order field as Maxwell’s equations do, but instead describes gravitation in terms of a second-order observable, i.e. the acceleration produced by space curvature, a = d^2x/dt^2.
So the distinction between rank-1 and rank-2 tensors in electromagnetism and general relativity is not physically deep: it’s a matter of human decisions on how to represent electromagnetism and gravitation.
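The bookkeeping can be written out in symbols (these are the standard textbook definitions, nothing new): electrostatics defines its field as a first-order gradient and then converts it into an observable second-order acceleration, while general relativity skips the intermediate field and writes the observable acceleration directly at second order, via the geodesic equation:

```latex
% Electromagnetism: the field is defined as a rank-1, first-order gradient
E = -\nabla \phi \qquad \text{(volts/metre)}

% ...but the observable it predicts is a second-order acceleration:
a = \frac{d^2 x}{dt^2} = \frac{q}{m}\,E

% General relativity writes the observable acceleration directly at
% second order (the geodesic equation):
\frac{d^2 x^\mu}{d\tau^2}
  = -\Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau}
```

In both cases the observable is the same kind of second-order quantity; only the choice of intermediate bookkeeping differs.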
In Maxwell’s equations we choose to represent not second-order accelerations but Michael Faraday’s imaginary concept of a pictorial field, radiating and curving “field lines” which are represented by first-order field gradients and curls. In Einstein’s general relativity, by contrast, we don’t represent gravity by such a half-baked unobservable field concept, but in terms of directly-observable accelerations.
Like first-quantization (undergraduate quantum mechanics) lies, the “spin-2” graviton deception is a brilliant example of historical, physically-ignorant mathematical obfuscation in action, leading to groupthink delusions in theoretical physics. (Anyone who criticises the lie is treated with a similar degree of delusional, paranoid hostility to that directed at dissenters of evil dictatorships. Instead of examining the evidence and seeking to correct the problem (which in the case of an evil dictatorship is obviously a big challenge), the messenger is inevitably shot or the “message” is “peacefully” deleted from the arXiv, reminiscent of the scene from Planet of the Apes where Dr Zaius, serving a dual role as Minister of Science and Chief Defender of the Faith, has to erase the words written in the sand which would undermine his religion and social tea-party of lying beliefs. In this analogy, the censors of the arXiv, or of journals like Classical and Quantum Gravity, are not defending objective science, but are instead defending subjective pseudo-science (the groupthink orthodoxy which masquerades as science) from being exposed as a fraud.)
Dissimilarities in the tensor ranks used to describe two different fields originate from dissimilarities in the field definitions for those two different fields, not from the spin of the field quanta. Any gauge field whose field is written as a second-order differential equation, e.g. an acceleration, can be classically approximated by a rank-2 tensor equation. Comparing Maxwell’s equations, in which fields are expressed in terms of first-order gradients like electric fields (volts/metre), with general relativity, in which fields are accelerations or curvatures, is comparing chalk and cheese. They are not just different units; they have different purposes. For a summary of textbook treatments of curvature tensors, see Dr Kevin Aylward’s General Relativity for Teletubbys: “the fundamental point of the Riemann tensor [the Ricci curvature tensor in the field equation of general relativity is simply a cut-down, rank-2 version of the Riemann tensor: the Ricci curvature tensor, R_ab = R^x_axb, where R^x_axb is the Riemann tensor], as far as general relativity is concerned, is that it describes the acceleration of geodesics with respect to one another. … I am led to believe that many people don’t have a … clue what’s going on, although they can apply the formulas in a sleepwalking sense. … The Riemann curvature tensor is what tells one what that acceleration between the [particles] will be. This is expressed by
[Beware of some errors in the physical understanding on some of these general relativity internet sites, however. E.g., some suggest – following a popular 1950s book on relativity – that the inverse-square law is discredited by general relativity, because the relativistic motion of Mercury around the sun can be approximated within Newton’s framework by increasing the power in the inverse-square law slightly, from 1/R² to 1/R^(2 + X) where X is a small fraction, so that the force appears to get stronger nearer the sun. This is fictitious and is just an approximation to roughly accommodate relativistic effects that Newton ignored, e.g. the small increase in planetary mass due to its higher velocity when the planet is nearer the sun on part of its elliptical orbit, than when it is moving slower far from the sun. This isn’t a physically correct model; it’s just a back-of-the-envelope fudge. A physically correct version of planetary motion in the Newtonian framework would keep the geometric inverse-square law and would then correctly modify the force by making the right changes for the relativistic mass variation with velocity. Ptolemy’s epicycles demonstrated the danger of constructing approximate mathematical models which have no physical validity, which then become fashionable.]”
Maxwell’s theory based on Faraday’s field lines concept employs only rank-1 equations, for example the divergence of the electric field strength, E, is directly proportional to the charge density, q (charge density is here defined as the charge per unit surface area, not the charge per unit volume): div.E ~ q. The reason this is a rank-1 equation is simply because the divergence operator is the sum of gradients in all three perpendicular directions of space for the operand. All it says is that a unit charge contributes a fixed number of diverging radial lines of electric field, so the total field is directly proportional to the total charge.
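The claim that the field only diverges from the source itself can be sanity-checked numerically (a sketch of my own, not from the post): away from the charge, the divergence of an inverse-square radial field vanishes.

```python
# Sketch: numerical divergence of a point charge's radial inverse-square field.
# At any point away from the source, div E = 0 (no charge there to "emit lines").
def E_field(x, y, z, q=1.0):
    """E ~ q * r_hat / r^2 for a point charge at the origin (units dropped)."""
    r2 = x*x + y*y + z*z
    r = r2 ** 0.5
    return (q*x/(r2*r), q*y/(r2*r), q*z/(r2*r))

def divergence(x, y, z, h=1e-5):
    """Central-difference estimate of div E at (x, y, z)."""
    dEx = (E_field(x+h, y, z)[0] - E_field(x-h, y, z)[0]) / (2*h)
    dEy = (E_field(x, y+h, z)[1] - E_field(x, y-h, z)[1]) / (2*h)
    dEz = (E_field(x, y, z+h)[2] - E_field(x, y, z-h)[2]) / (2*h)
    return dEx + dEy + dEz

print(divergence(1.0, 2.0, 3.0))  # ~0: no source at this point
```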
But this is just Faraday’s way of visualizing the way the electric force operates! Remember that nobody has yet seen or reported detecting an “electric field line” of force! With our electric meters, iron filings, and compasses we only see the results of forces and accelerations, so the number and locations of electric or magnetic field lines depicted in textbook diagrams is due to purely arbitrary conventions. It’s merely an abstract aetherial legacy from the Faraday-Maxwell era, not a physical reality that has any experimental evidence behind it. If you are going to confuse Faraday’s and Maxwell’s imaginary concept of field “lines” with experimentally defensible reality, you might as well write down an equation in which the invisible intermediary between charge and force is an angel, a UFO, a fairy or an elephant in an imaginary extra dimension. Quantum field theory tells us that there are no physical lines. Instead of Maxwell’s “physical lines of force”, we have known since QED was verified that there are field quanta being exchanged between charges.
So if we get rid of our ad hoc prejudices, getting rid of “electric field strength, E” in volts/metre and just expressing the result of the electric force in terms of what we can actually measure, namely accelerations and forces, we’d have a rank-2 tensor, basically the same field equation as is used in general relativity for gravity. The only differences will be the factor of ~10⁴⁰ difference between the field strengths of electromagnetism and gravity, the differences in the signs of the curvatures (like charges repel in electromagnetism, but attract in gravity), and the absence of the contraction term which makes the gravitational field contract the source of the field, but supposedly does not exist in electromagnetism. The tensor rank will be 2 in both cases, thus disproving the arm-waving yet popular idea that the rank number may be correlated to the field quanta spin. In other words, the electric field could be modelled by a rank-2 equation if we simply make the electric field consistent with the gravitational field by expressing both fields in terms of accelerations, instead of using the gradient of the Faraday-legacy volts/metre “field strength” for the electric field. This is however beyond the understanding of the mainstream, who are deluded by fashion and historical ad hoc conventions. Most of the problems in understanding quantum field theory and unifying Standard Model fields with gravitational fields result from the legacy of field definitions used in Maxwellian and Yang-Mills fields, which for purely ad hoc historical reasons are different from the field definition in general relativity. If all fields are expressed in the same way, as accelerative curvatures, all field equations become rank-2 and all rank-1 divergences automatically disappear, since they are merely a historical legacy of the Faraday-Maxwell volts/metre field “line” concept, which isn’t consistent with the concept of acceleration due to curvature in general relativity!
However, we’re not advocating the use of any particular differential equations for any quantum fields, because discontinuous quantized fields can’t in principle be correctly modelled by differential equations, which is why you can’t properly represent the source of gravity in general relativity as being a set of discontinuities (particles) in space to predict curvature, but must instead use a physically false averaged distribution such as a “perfect fluid” to represent the source of the field. The rank-2 framework of general relativity has relatively few easily obtainable solutions compared to the simpler rank-1 (vector calculus) framework of electrodynamics. But both classical fields are false in ignoring the random field quanta responsible for quantum chaos (see, for instance, the discussion of first-quantization versus second-quantization in the previous post here, here and here).
1. The electric field is defined, following Michael Faraday, as a field gradient measured in volts/metre, which Maxwell correctly models with a first-order differential equation, leading to a rank-1 tensor equation (vector calculus). Hence electromagnetism, with spin-1 field quanta, has a rank-1 tensor purely because of the way it is formulated. Nobody has ever seen Faraday’s electric field, only accelerations/forces. There is no physical basis for electromagnetism being intrinsically rank-1; it’s just one way to mathematically model it, by describing it in terms of Faraday’s rank-1 fields rather than the directly observable rank-2 accelerations and forces which we see/feel.
2. The gravitational field has historically never been expressed in terms of a Faraday-type rank-1 field gradient. Due to Newton, who was less pictorial than Faraday, gravity has always been described and modelled directly in terms of the end result, i.e. accelerations/forces we see/feel.
This difference between the human formulations of the electromagnetic and gravitational “fields” is the sole reason for the fact that the former is currently expressed with a rank-1 tensor and the latter is expressed with a rank-2 tensor. If Newton had worked on electromagnetism instead of aether crackpots like Maxwell, we would undoubtedly have a rank-2 mathematical model of electromagnetism, in which electric fields are expressed not in volts/metre, but directly in terms of rank-2 acceleration (curvatures), just like general relativity.
Both electromagnetism and gravitation should define fields the same way, with rank-2 curvatures. The discrepancy that electromagnetism instead uses rank-1 tensors is due to the inconsistency that in electromagnetism fields are defined not in terms of curvatures (accelerations) but in terms of Faraday’s imaginary abstraction of field lines. This has nothing whatsoever to do with particle spin. Rank-1 tensors are used in Maxwell’s equations because the electromagnetic fields are defined (inconsistently with gravity) in terms of rank-1 unobservable field gradients, whereas rank-2 tensors are used in general relativity purely because the definition of a field in general relativity is acceleration, which requires a rank-2 tensor to describe it. The difference is purely down to the way the field is described, not the spin of the field quanta.
The physical basis for rank-2 tensors in general relativity
I’m going to rewrite the paper linked here when time permits.
The real reason why gravitons supposedly “must” be spin-2 is due to the mainstream investment of energy and time in worthless string theory, which is designed to permit the existence of spin-2 gravitons. We know this because whenever the errors in spin-2 gravitons are pointed out, they are ignored. These stringy people aren’t interested in physics, just grandiose fashionable speculations, which is the story of Ptolemy’s epicycles, Maxwell’s aether, Kelvin’s vortex atom, Piltdown Man, S-matrices, UFOs, Marxism, fascism, etc. All were very fashionable with bigots in their day, but:
“… reality must take precedence over public relations, for nature cannot be fooled.” – Richard P. Feynman, Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.
Above: the mainstream groupthink on the spin of the graviton goes back to Pauli and Fierz’s paper of 1939, which insists that gravity is attractive (that we’re not being pushed down), which leads to a requirement for the spin to be an even number, not an odd number:
‘In the particular case of spin 2, rest-mass zero, the equations agree in the force-free case with Einstein’s equations for gravitational waves in general relativity in first approximation …’
– Conclusion of the paper by M. Fierz and W. Pauli, ‘On relativistic wave equations for particles of arbitrary spin in an electromagnetic field’, Proc. Roy. Soc. London, v. A173, pp. 211-232 (1939).
Pauli and Fierz obtained spin-2 by merely assuming, without any evidence, that gravity is attractive, not repulsive; i.e. they merely assume that we’re not being pushed down by the convergence of the inward component of graviton exchange with the immense, isotropically distributed masses of the universe around us, which will obviously greatly exceed the repulsion between two nearby masses with relatively small gravitational charges. Pauli and Fierz simply did not know the facts about cosmological repulsion (there was simply no evidence for this until 1998). The advocacy of spin-2 today is similar to the advocacy of Ptolemy’s mainstream earth-centred universe from 150 to 1500 A.D., which merely assumed – but then arrogantly claimed this mere assumption to be observational fact – that the Earth was not rotating and that the sun’s apparent daily motion around the Earth is proof that the sun was really orbiting the Earth daily. There is no evidence for a spin-2 graviton!
There is evidence for a spin-1 graviton. For example:
‘Some physicists speculate that dark energy could be a repulsive gravitational force that only acts over large scales. “There is precedent for such behaviour in a fundamental force,” Wesson says. “The strong nuclear force is attractive at some distances and repulsive at others.”’
This possibility was ignored by Pauli and Fierz when they first proposed that the quantum of gravitation has spin-2.
Spin-1 graviton exchange: (1) gives cosmological repulsion of large masses, and
(2) gives a push that appears as LeSage “attraction” for small nearby masses, which only have weak mutual graviton exchange due to their small gravitational charges, and therefore on balance get pushed together by the much larger graviton pressure due to implosive focussing of gravitons converging inwards from the exchange with immense, distant masses (the galaxy clusters isotropically distributed across the sky).
Above: Perlmutter’s discovery of the acceleration of the universe, based on the redshifts of fixed-energy supernovae, which are triggered as a critical-mass effect when sufficient matter falls into a white dwarf. A type Ia supernova explosion, always yielding 4 × 10²⁸ megatons of TNT equivalent, results from the critical-mass collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, thereby releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavy elements, creating massive amounts of radioactive nuclides, particularly intensely radioactive nickel-56; about half of all heavier nuclides (including uranium and beyond) are also produced by the ‘R’ (rapid) process of successive neutron captures by fusion products in supernova explosions. Because we can model how much energy is released, using modified computer models of nuclear fusion explosions developed originally by weaponeer Sterling Colgate at Lawrence Livermore National Laboratory to design the early H-bombs, the brightness of the supernova flash tells us how far away the Type Ia supernova is, while the redshift of the flash tells us how fast it is receding from us. That’s how the acceleration of the universe was discovered. Note that “tired light” fantasies about redshift are disproved by Professor Edward Wright on the page linked here.
You can go to an internet page and see the correct predictions on the linked page here or the about page. This isn’t based on speculations: cosmological acceleration has been observed since 1998, when CCD telescopes plugged live into computers with supernova signature recognition software detected extremely distant supernovae and recorded their redshifts (see the article by the discoverer of cosmological acceleration, Dr Saul Perlmutter, on pages 53-60 of the April 2003 issue of Physics Today, linked here). The outward cosmological acceleration of the 3 × 10⁵² kg mass of the 9 × 10²¹ observable stars in galaxies observable by the Hubble Space Telescope (page 5 of a NASA report linked here) is approximately a = Hc = 6.9 × 10⁻¹⁰ ms⁻² (L. Smolin, The Trouble With Physics, Houghton Mifflin, N.Y., 2006, p. 209), giving an immense outward force under Newton’s 2nd law of F = ma = 1.8 × 10⁴³ Newtons. Newton’s 3rd law gives an equal inward (implosive type) reaction force, which predicts gravitation quantitatively. What part of this is speculative? Maybe you have some vague notion that scientific laws should not for some reason be applied to new situations, or should not be trusted if they make useful predictions which are confirmed experimentally, so maybe you vaguely don’t believe in applying Newton’s second and third laws to masses accelerating at 6.9 × 10⁻¹⁰ ms⁻²! But why not? What part of “fact-based theory” do you have difficulty understanding?
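The arithmetic is easy to reproduce (my sketch; H is taken as ~70 km/s/Mpc and the mass is the rounded figure quoted above, so the force comes out at the same order of magnitude, ~10⁴³ N):

```python
# Rough check of a = Hc and F = ma with the rounded values quoted in the text.
H = 2.27e-18   # Hubble parameter, ~70 km/s/Mpc expressed in 1/s (assumed value)
c = 2.998e8    # speed of light, m/s
m = 3e52       # quoted mass of the observable universe, kg

a = H * c      # cosmological acceleration
F = m * a      # Newton's 2nd law: outward force
print(f"a = {a:.2g} m/s^2, F = {F:.2g} N")
```

With these rounded inputs, a comes out near the quoted 6.9 × 10⁻¹⁰ ms⁻² and F is of order 10⁴³ N.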
It is usually by applying facts and laws to new situations that progress is made in science. If you stick to applying known laws to situations they have already been applied to, you’ll be less likely to observe something new than if you try applying them to a situation which nobody has ever applied them to before. We should apply Newton’s laws to the accelerating cosmos and then focus on the immense forces and what they tell us about graviton exchange.
The theory makes accurate predictions, well within experimental error, and is also fact-based, unlike all other theories of quantum gravity, especially the 10⁵⁰⁰ universes of string theory’s landscape.
Above: The mainstream 2-dimensional ‘rubber sheet’ interpretation of general relativity says that mass-energy ‘indents’ spacetime, which responds like placing two heavy large balls on a mattress, which distorts more between the balls (where the distortions add up) than on the opposite sides. Hence the balls are pushed together: ‘Matter tells space how to curve, and space tells matter how to move’ (Professor John A. Wheeler). This illustrates how the mainstream (albeit arm-waving) explanation of general relativity is actually a theory that gravity is produced by space-time distorting to physically push objects together, not to pull them! (When this is pointed out to mainstream crackpot physicists, they naturally freak out and become angry, saying it is just a pointless analogy. But when the checkable predictions of the mechanism are explained, they may perform their always-entertaining “hear no evil, see no evil, speak no evil” act.)
Above: LeSage’s own illustration of quantum gravity in 1758. Like Lamarck’s evolution theory of 1809 (the one in which characteristics acquired during life are somehow supposed to be passed on genetically, rather than Darwin’s evolution in which genetic change occurs due to the inability of inferior individuals to pass on genes), LeSage’s theory was full of errors and is still derided today. The basic concept – that mass is composed of fundamental particles, with gravity due to a quantum field of gravitons exchanged between these fundamental particles of mass – is now a frontier of quantum field theory research. What is interesting is that quantum gravity theorists today don’t use the arguments used to “debunk” LeSage: they don’t argue that quantum gravity is impossible because gravitons in the vacuum would “slow down the planets by causing drag”. They recognise that gravitons are not real particles: they don’t obey the energy-momentum relationship or mass shell that applies to particles of, say, a gas or other fluid. Gravitons are thus off-shell or “virtual” radiations, which cause accelerative forces but don’t cause continuous gas-type drag or the heating that occurs when objects move rapidly in a real fluid. While quantum gravity theorists realize that particle (graviton) mediated gravity is possible, LeSage’s mechanism of quantum gravity is still as derided today as Lamarck’s theory of evolution. Another analogy is the succession from Aristarchus of Samos, who first proposed the solar system in 250 B.C. against the mainstream earth-centred universe, to Copernicus’ inaccurate solar system (circular orbits and epicycles) of 1500 A.D., and to Kepler’s elliptical-orbit solar system of 1609 A.D. Is there any point in insisting that Aristarchus was the original discoverer of the theory, when he failed to come up with a detailed, convincing and accurate theory?
Similarly, Darwin rather than Lamarck is credited with the theory of evolution, because he made the theory useful and thus scientific.
If someone fails to come up with a detailed, accurate and successfully convincing theory, and merely gets the basic idea right without being able to prove it against the mainstream fashions and groupthink, then the history of science shows that the person is not credited with a big discovery: science is not merely guesswork. Maxwell based his completion of the theory of classical electrodynamics upon an ethereal displacement current of virtual charges in the vacuum, in order to correct Ampere’s law for the case of open circuits such as capacitors using the permittivity of free space (a vacuum) for the dielectric. Maxwell believed, by analogy to the situation of moving ions in a fluid during electrolysis, that current appears to flow through the vacuum between capacitor plates while the capacitor charges and discharges; although in fact the real current just spreads along the plates, and electromagnetic induction (rather than ethereal vacuum currents) produces the current on the opposite plate.
Maxwell nevertheless suggested (in an Encyclopedia Britannica article) an experiment to test whether light is carried at an absolute velocity by a mechanical spacetime fabric. After the Michelson-Morley experiment was done in 1887 to test Maxwell’s conjecture, it was clear that no absolute motion was detectable: suggesting (1) that motion appears relative, not absolute, and (2) that light always appears to go at the same velocity in the vacuum. In 1889, FitzGerald published an explanation of these “relativity” results in Science: he argued that the physical vacuum contracted moving masses like the Michelson-Morley experiment, by analogy to the contraction of anything moving in a fluid due to the force from the head-on fluid pressure (wind drag, or hydrodynamic resistance). This fluid-space based explanation predicted quantitatively the relativistic contraction law, and Lorentz showed that since mass depends inversely on the classical radius of the electron, it predicted a mass increase with velocity. Given the equivalence of space and time via the velocity of light, Lorentz showed that the contraction predicted time-dilation due to motion.
Above: In Science in 1889, FitzGerald used the Michelson-Morley result to argue that moving objects at velocity v must contract in length in the direction of their motion by the factor (1 – v²/c²)^(1/2), so that there is no difference in the travel times of light moving along two perpendicular paths. Groupthink crackpots claim that if the lengths of the arms of the instrument are different, FitzGerald’s argument for absolute motion is destroyed, since the travel times are still cancelled out. Actually, the arms of the Michelson-Morley instrument can never be the same length to within the accuracy of the relative times implied by interference fringes! The instrument does not measure the absolute times taken in two different directions: it merely determines if there is a difference in the relative times (which are always slightly different, since the arms can’t be machined to perfectly identical length) when the instrument is rotated by 90 degrees. Another groupthink crackpot argument is that although the FitzGerald theory predicts relativity from length contraction in an absolute-motion universe, other special relativity results like time dilation, mass increase, and E = mc² can only be obtained from Einstein. Actually, all were obtained by Lorentz and Poincare: Lorentz showed that evidence for space-time from electromagnetism implies that apparent time dilates like distance when a clock moves, while he argued that since the classical electromagnetic electron radius is inversely proportional to its mass, its mass should thus increase with velocity by a factor equal to the reciprocal of the FitzGerald contraction factor. Likewise, a force F = d(mv)/dt acting on a body moving distance dx imparts kinetic energy dE = F.dx = d(mv).dx/dt = v.d(mv) = v²dm + mv dv. Comparison of this purely Newtonian result with the derivative of Lorentz’s relativistic mass increase formula m_v = m₀(1 – v²/c²)^(–1/2) gives dm = dE/c², or E = mc².
(See for example, Dr Glasstone’s Sourcebook on Atomic Energy, 3rd ed., 1967.)
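The dm = dE/c² step can be verified symbolically (my sketch, using sympy): with Lorentz’s mass formula, the purely Newtonian dE = v·d(mv) reduces identically to dE = c²·dm.

```python
# Symbolic check that dE = v d(mv) combined with the Lorentz mass increase
# m = m0 / sqrt(1 - v^2/c^2) gives dE = c^2 dm, i.e. E = mc^2 up to a constant.
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)
m = m0 / sp.sqrt(1 - v**2 / c**2)   # Lorentz's relativistic mass

dE_dv = v * sp.diff(m * v, v)       # dE/dv from the Newtonian dE = v d(mv)
dm_dv = sp.diff(m, v)               # dm/dv from the Lorentz formula

# The difference dE/dv - c^2 dm/dv should vanish identically:
print(sp.simplify(dE_dv - c**2 * dm_dv))  # -> 0
```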
Carlos Barceló and Gil Jannes, ‘A Real Lorentz-FitzGerald Contraction’, Foundations of Physics, Volume 38, Number 2 / February, 2008, pp. 191-199 (PDF file: http://digital.csic.es/bitstream/10261/3425/3/0705.4652v2.pdf):
“Many condensed matter systems are such that their collective excitations at low energies can be described by fields satisfying equations of motion formally indistinguishable from those of relativistic field theory. The finite speed of propagation of the disturbances in the effective fields (in the simplest models, the speed of sound) plays here the role of the speed of light in fundamental physics. However, these apparently relativistic fields are immersed in an external Newtonian world (the condensed matter system itself and the laboratory can be considered Newtonian, since all the velocities involved are much smaller than the velocity of light) which provides a privileged coordinate system and therefore seems to destroy the possibility of having a perfectly defined relativistic emergent world. In this essay we ask ourselves the following question: In a homogeneous condensed matter medium, is there a way for internal observers, dealing exclusively with the low-energy collective phenomena, to detect their state of uniform motion with respect to the medium? By proposing a thought experiment based on the construction of a Michelson-Morley interferometer made of quasi-particles, we show that a real Lorentz-FitzGerald contraction takes place, so that internal observers are unable to find out anything about their ‘absolute’ state of motion. Therefore, we also show that an effective but perfectly defined relativistic world can emerge in a fishbowl world situated inside a Newtonian (laboratory) system. This leads us to reflect on the various levels of description in physics, in particular regarding the quest towards a theory of quantum gravity. …
“… Remarkably, all of relativity (at least, all of special relativity) could be taught as an effective theory by using only Newtonian language. …In a way, the model we are discussing here could be seen as a variant of the old ether model. At the end of the 19th century, the ether assumption was so entrenched in the physical community that, even in the light of the null result of the Michelson-Morley experiment, nobody thought immediately about discarding it. Until the acceptance of special relativity, the best candidate to explain this null result was the Lorentz-FitzGerald contraction hypothesis. … we consider our model of a relativistic world in a fishbowl, itself immersed in a Newtonian external world, as a source of reflection, as a Gedankenmodel. By no means are we suggesting that there is a world beyond our relativistic world describable in all its facets in Newtonian terms. Coming back to the contraction hypothesis of Lorentz and FitzGerald, it is generally considered to be ad hoc. However, this might have more to do with the caution of the authors, who themselves presented it as a hypothesis, than with the naturalness or not of the assumption. … The ether theory had not been disproved, it merely became superfluous. Einstein realised that the knowledge of the elementary interactions of matter was not advanced enough to make any claim about the relation between the constitution of matter (the ‘molecular forces’), and a deeper layer of description (the ‘ether’) with certainty. Thus his formulation of special relativity was an advance within the given context, precisely because it avoided making any claim about the fundamental structure of matter, and limited itself to an effective macroscopic description.”
In 1905, Einstein took the two implications of the Michelson-Morley research (that motion appears relative not absolute, and that the observed velocity of light in the vacuum is always constant) and used them as postulates to derive the FitzGerald-Lorentz transformation and Poincare mass-energy equivalence. Einstein’s analysis was preferred by Machian philosophers because it was purely mathematical and did not seek to explain the principle of relativity and constancy of the velocity of light in the vacuum by invoking a physical contraction of instruments. Einstein postulated relativity; FitzGerald explained it. Both predicted a similar contraction quantitatively. Similarly, Newton’s theory of gravitation is the combination of Galileo’s principle that dropped masses all accelerate at the same rate due to the constancy of the Earth’s mass, with Kepler’s laws of planetary motion. Newton postulated his universal gravitational law based on this evidence plus the guess that the gravitational force is directly proportional to the mass producing it, and he checked it by the Moon’s centripetal acceleration; LeSage tried to explain what Newton had postulated and checked.
The previous post links to Peter Woit’s earlier article about string theorist Erik Verlinde’s arXiv preprint On the Origin of Gravity and the Laws of Newton, which claims: “Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies.” String theorist Verlinde derives Newton’s laws and other results using only “high-school mathematics” (which brings contempt from mathematical physicist Woit, probably one of the areas of agreement he has with string theorist Jacques Distler), i.e. no tensors, and he derives the Newtonian weak-field approximation for gravity, not the relativistic Einsteinian gravity law which also includes contraction. This contraction is physically real but small for weak gravitational fields and non-relativistic velocities: Feynman famously calculated in his published Lectures on Physics that the contraction term in Einstein’s field equation contracts the Earth’s radius by MG/(3c²) = 1.5 mm. Consider two ways to predict contraction using Einstein’s equivalence principle.
First, Einstein’s way. Einstein began by expressing Newton’s law of gravity in tensor field calculus, which allows gravity to be represented by non-Euclidean geometry, incorporating the equivalence of inertial and gravitational mass: Einstein started with a false hypothesis that the curvature of spacetime (represented with the Ricci tensor) which causes acceleration (“curvature” is literally the curve of a line on a graph of distance versus time, i.e. it implies acceleration) simply equals the source of gravity (the stress-energy tensor, since in Einstein’s earlier special relativity, mass and energy are equivalent, albeit via the well-known very large conversion factor, c²). (Non-Euclidean geometry wasn’t Einstein’s innovation; it was studied by Riemann and Minkowski, while Ricci and Levi-Civita pioneered tensors to generalize vector calculus to any number of dimensions.)
Einstein in 1915 found that this simple equivalence was wrong: the Ricci curvature tensor could not be equivalent to the stress-energy tensor because the divergence (the sum of gradients in all spatial dimensions) of the stress-energy tensor is not zero. Unless this divergence is zero, mass-energy will not be conserved. So Einstein used Bianchi’s identity to alter the source of gravity, subtracting from the stress-energy tensor, T_ab, half the product of the metric tensor, g_ab, and the trace of the stress-energy tensor, T (the trace of a tensor is simply the sum of the top-left to bottom-right diagonal elements of that tensor, i.e. energy density plus pressure, or trace T = T_00 + T_11 + T_22 + T_33), because this combination: (1) does have zero divergence and thereby satisfies the conservation of mass-energy, and (2) reduces the stress-energy tensor for weak fields, thereby correctly corresponding to Newtonian gravity in the weak-field limit. This is how Einstein found that the Ricci tensor R_ab = T_ab – (1/2)g_ab T, which is exactly equivalent to the oft-quoted Einstein equation R_ab – (1/2)g_ab R = T_ab, where R is the trace of the Ricci tensor (R = R_00 + R_11 + R_22 + R_33).
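The equivalence of those two forms is just trace reversal; a sketch of the algebra (in the same units and conventions as the equations above, using the four-dimensional identity g^ab·g_ab = 4):

```latex
% Contract R_ab = T_ab - (1/2) g_ab T with g^{ab}, using g^{ab} g_{ab} = 4:
R = T - \tfrac{1}{2}(4)T = -T
% so T = -R; substituting back into the first form,
R_{ab} = T_{ab} - \tfrac{1}{2} g_{ab} T = T_{ab} + \tfrac{1}{2} g_{ab} R
\quad\Longrightarrow\quad
R_{ab} - \tfrac{1}{2} g_{ab} R = T_{ab}
```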
Secondly, Feynman’s way. A more physically intuitive explanation of the modification of Newton’s gravitational law implied by Einstein’s field equation of general relativity is to examine Feynman’s curvature result: space-time is non-Euclidean in the sense that the gravitational field contracts the Earth’s radius by (1/3)MG/c², or about 1.5 mm. This is unaccompanied by a transverse contraction, i.e. the Earth’s circumference is unaffected. To mathematically keep “Pi” a constant, therefore, you need to invoke an extra dimension, so that the n – 1 = 3 spatial dimensions we experience are, in string theory terminology, a (mem)brane on an n = 4 dimensional bulk of spacetime. Similarly, if you draw a 2-dimensional circle upon the interior surface of a sphere, you will obtain Pi from the circle only by drawing a straight line through the 3-d bulk of the volume (i.e. a line that does not follow the 2-dimensional curved surface or “brane” of the sphere upon which the circle is supposed to exist). If you measure the diameter upon the curved surface, it will be different, so Pi will appear to vary.
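Feynman’s 1.5 mm figure is easy to check (my sketch, using standard values for G, c and the Earth’s mass):

```python
# Check of Feynman's excess-radius figure: contraction = (1/3) M G / c^2 for Earth.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_EARTH = 5.972e24  # Earth's mass, kg

excess = M_EARTH * G / (3 * c**2)   # radial contraction, metres
print(f"{excess * 1000:.2f} mm")    # ~1.5 mm
```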
A simple physical mechanism of Feynman’s (1/3)MG/c^2 excess radius for a symmetric, spherical mass M is that the gravitational field quanta compress a mass radially when being exchanged with distant masses in the universe: the exchange of gravitons pushes against masses. By Einstein’s principle of the equivalence of inertial and gravitational mass, the cause of this excess radius is exactly the same as the cause of the FitzGerald-Lorentz contraction of moving bodies in the direction of their motion, first suggested in Science in 1889 by FitzGerald. FitzGerald explained the apparent constancy of the velocity of light regardless of the relative motion of the observer (indicated by the null result of the Michelson-Morley experiment of 1887) as the physical effect of the gravitational field. In the fluid analogy to the gravitational field, if you accelerate an underwater submarine, there is a head-on pressure from the inertial resistance of the water it is colliding with, which causes it to contract slightly in the direction of its motion. This head-on or “dynamic” pressure is equal to half the product of the density of the water and the square of the velocity of the submarine. In addition to this “dynamic” pressure, there is a “static” water pressure acting in all directions, which compresses the submarine slightly in all directions, even if the submarine is not moving. In this analogy, the FitzGerald-Lorentz contraction is the “dynamic” pressure effect of the graviton field, while the Feynman excess radius or radial contraction of masses is the “static” pressure effect of the graviton field. Einstein’s special relativity postulates (1) relativity of motion and (2) constancy of c, and derives the FitzGerald-Lorentz transformation and mass-energy equivalence from these postulates; by contrast, FitzGerald and Lorentz sought to physically explain the mechanism of relativity by postulating contraction. To contrast this difference:
(1) Einstein: postulated relativity and produced contraction.
(2) Lorentz and FitzGerald: postulated contraction to produce “apparent” observed Michelson-Morley relativity as just an instrument contraction effect within an absolute motion universe.
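The submarine analogy above can be put in numbers. This is a minimal sketch with purely illustrative values (sea-water density, an assumed speed and depth); the “dynamic” pressure is ½ρv² and the “static” pressure is ρgh:

```python
# Illustrative comparison of the "dynamic" (head-on) and "static" (all-round)
# pressures on a submarine, as used in the fluid analogy. Values are assumptions.
rho = 1025.0     # sea-water density, kg/m^3
v = 10.0         # assumed submarine speed, m/s
g = 9.81         # gravitational acceleration, m/s^2
depth = 100.0    # assumed depth, m

dynamic_pressure = 0.5 * rho * v**2    # acts head-on, only while moving
static_pressure = rho * g * depth      # acts in all directions, even at rest
print(dynamic_pressure, static_pressure)  # ~5.1e4 Pa vs ~1.0e6 Pa
```

At these values the static compression dominates, just as the “static” graviton pressure (radial contraction) exists even for a mass at rest, while the “dynamic” effect appears only with motion.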
These two relativistic contractions, the contraction of relativistically moving inertial masses and the contraction of radial space around a gravitating mass, are simply related under Einstein’s principle of the equivalence of inertial and gravitational masses, since Einstein’s other equivalence (that between mass and energy) then applies to both inertial and gravitational masses. In other words, the equivalence of inertial and gravitational mass implies an effective energy equivalence for each of these masses. The FitzGerald-Lorentz contraction factor [1 – (v/c)^2]^1/2 contains velocity v, which comes from the kinetic energy of the moving object. By analogy, when we consider a mass m at rest in a gravitational field from another much larger mass M (like a person standing on the Earth), it has acquired gravitational potential energy E = mMG/R, equivalent to a kinetic energy of E = (1/2)mv^2, so by Einstein’s equivalence principle of inertial and gravitational field energy it can be considered to have an “effective” velocity of v = (2GM/R)^1/2. Inserting this velocity into the FitzGerald-Lorentz contraction factor [1 – (v/c)^2]^1/2 gives [1 – 2GM/(Rc^2)]^1/2 which, when expanded by the binomial expansion to the first couple of terms as a good approximation, yields 1 – GM/(Rc^2). This result assumes that all of the contraction occurs in one spatial dimension only, which is true for the FitzGerald-Lorentz contraction (where a moving mass is only contracted in the direction of motion, not in the two other spatial dimensions it has), but is not true for radial gravitational contraction around a static spherical, uniform mass, which operates equally in all 3 spatial dimensions. Therefore, the contraction in any one of the three dimensions is by the factor 1 – (1/3)GM/(Rc^2). Hence, when gravitational contraction is included, radius R becomes R[1 – (1/3)GM/(Rc^2)] = R – GM/(3c^2), which is the result Feynman produced in his Lectures on Physics from Einstein’s field equation.
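The numbers in this argument are easy to check. A minimal sketch, using standard values for the constants and the Earth’s mass and radius:

```python
# Numeric check of the figures above: Feynman's "excess radius" GM/(3c^2) for the
# Earth, and the "effective" velocity v = (2GM/R)^(1/2) used in the argument.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
R = 6.371e6      # mean radius of the Earth, m
c = 2.998e8      # speed of light, m/s

excess_radius = G * M / (3 * c**2)   # ~1.5e-3 m, i.e. about 1.5 mm
v_eff = (2 * G * M / R) ** 0.5       # ~1.12e4 m/s (numerically the escape velocity)
print(excess_radius, v_eff)
```

The “effective” velocity that comes out is numerically identical to the familiar 11.2 km/s escape velocity, which is one way to remember the 2GM/R term.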
The point we’re making here is that general relativity isn’t mysterious unless you want to ignore the physical effects due to energy conservation and associated contraction, which produce its departures from Newtonian physics. Physically understanding the mechanism for how general relativity differs from Newtonian physics therefore immediately takes you to the facts of how the quantum gravitational field physically distorts static and moving masses, leading to checkable predictions which you cannot make with general relativity alone. It is therefore helpful if you want to understand physically how quantum gravity must operate in order to be consistent with general relativity within its domain of validity. Obviously general relativity breaks down outside that domain, which is why we need quantum gravity, but within the limits of validity for that classical domain, both theories are consistent. The reason why quantum gravity of the LeSage sort needs to be fully reconciled with general relativity in this way is that one objection to LeSage came from Laplace, who ignored the gravitational and motion contraction mechanisms of quantum gravity for relativity (Laplace was writing long before FitzGerald and Einstein) and used this omission to debunk LeSage by arguing that orbital aberration would occur in LeSage’s model due to the finite speed of the gravitons. This objection does not apply to general relativity, due to the contractions incorporated into the general relativity theory by Einstein; similarly, Laplace’s objection does not apply to quantum gravity, which inherently includes the contractions as physical results of quantum gravity upon moving masses.
In the past, however, FitzGerald’s physical contraction of moving masses as miring by fluid pressure has been controversial in physics, and Einstein tried to dispose of the fluid. The problem with the fluid was investigated by critics of Fatio and LeSage, who promoted a shadowing theory of gravity, whereby masses get pushed together by mutually shielding one another from the pressure of the fluid in space. These critics included some of the greatest classical physicists the world has ever known: Newton (Fatio’s friend), Maxwell and Kelvin. Feynman also reviewed the major objection, drag, to the fluid in his broadcast lectures on the Character of Physical Law. The criticism of the fluid is that the force it needs to exert to produce gravity would classically be expected to cause fast-moving objects in the vacuum:
(1) to heat up until they glow red hot or ablate at immense temperature,
(2) to slow down and (in the case of planets) thus spiral into the sun,
(3) while the fluid would diffuse in all directions and on large distance scales fill in the “shadows” like a gas, preventing the shadowing mechanism from working (this doesn’t apply to gravitons exchanged between masses, for although they will take all possible paths in a path integral, the resultant, effective graviton motion for force delivery will be along the path of least action, due to the cancellation of the amplitudes of paths which interfere off the path of least action: see Feynman’s 1985 book QED),
(4) the mechanism would not give a force proportional to mass if the fundamental particles have a large gravitational interaction cross-sectional area, which would mean that in a big mass some of the shadows would “overlap” one another, so the net force of gravity from a big mass would be less than directly proportional to the mass, i.e. it would increase not in simple proportion to M but instead statistically in proportion to: 1 – e^(–bM), where b is a gravity cross-section and geometry-dependent coefficient, which allows for the probability of overlap. This 1 – e^(–bM) formula has two asymptotic limits:
(a) for small masses and small cross-sections, bM is much smaller than 1, so: e^(–bM) ~ 1 – bM, so 1 – e^(–bM) ~ bM. I.e., for small masses and small cross-sections, the force is directly proportional to mass, and the theory agrees with observations (there is no significant overlap).
(b) for larger masses and large cross-sections, bM might be much larger than 1, so e^(–bM) ~ 0, giving 1 – e^(–bM) ~ 1. I.e., for large masses and large cross-sections, the overlap of shadows would prevent any increase in the mass of a body from increasing the resultant gravitational force: once gravitons are stopped, they can’t be stopped again by another mass.
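The two limits are quick to see numerically. A minimal sketch, where the value of b is purely illustrative (a hypothetical overlap coefficient, not a measured cross-section):

```python
import math

b = 1e-30   # hypothetical overlap coefficient, chosen only for illustration

# (a) small bM: the shadow force grows linearly, 1 - e^(-bM) ~ bM
M_small = 1e25                       # gives bM = 1e-5
small = 1 - math.exp(-b * M_small)   # ~1e-5, i.e. proportional to M

# (b) large bM: the shadows saturate, 1 - e^(-bM) -> 1
M_large = 1e35                       # gives bM = 1e5
large = 1 - math.exp(-b * M_large)   # ~1.0, no further increase with M

print(small, large)
```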
This overlap problem is not applicable for the solar system or many other situations because b is insignificant, owing to the small graviton scattering cross-section of a fundamental particle of mass: the total inward force is trillions upon trillions of times higher than the objectors believed possible. The force is simply determined by Newton’s 2nd and 3rd laws to be the product of the cosmological acceleration and the mass of the accelerating universe, about 1.8 × 10^43 newtons, and the cross-section for shielding is the black hole event horizon area, which is so small that overlap is insignificant in the solar system and other tests of Newton’s weak field limit.
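The order of magnitude of that force is easy to reproduce as F = ma. This is only a rough sketch: both input values are assumptions for illustration (the small cosmological acceleration quoted earlier, and a commonly quoted rough mass for the observable universe), not precise measured inputs:

```python
# Order-of-magnitude sketch of the quoted inward force, F = ma, applying
# Newton's 2nd law to the receding matter of the universe. Both values are
# rough assumptions for illustration only.
a = 7e-10    # m/s^2, the small cosmological acceleration discussed above
M = 3e52     # kg, a commonly quoted rough mass for the observable universe

F = M * a    # ~2e43 N, the same order of magnitude as the quoted 1.8e43 N
print(F)
```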
(5) the LeSage mechanism suggested that the gravitons which cause gravity would be slowed down by the energy loss when imparting a push to a mass, so that they would not be travelling at the velocity of light, contrary to what is known about the velocity of gravitational fields. However, this objection arises from the real (rather than virtual, “off-shell”) radiation that LeSage assumed, and is false: the radiation travels at light velocity and merely shifts in frequency due to energy loss. For static situations, where no acceleration is produced (e.g. an apple hanging stationary on a tree), the graviton exchange results in no energy change; it’s a perfectly elastic scattering interaction. No energy is lost from the gravitons, and no kinetic energy is gained by the apple. Where the apple is accelerated, the kinetic energy it gains is that lost due to a shift to lower energy (longer wavelength) of the “reflected” or scattered gravitons. Notice that Dr Lubos Motl has objected to me by falsely claiming that virtual particles don’t appear to have wavelengths; on the contrary, the empirically confirmed Casimir effect is due to the inability of virtual photons with wavelengths longer than the separation between two metal plates to exist and produce pressure between the plates (so the plates are pushed together by the complete spectrum of virtual photon wavelengths in the vacuum surrounding the plates, which is stronger than the cut-off spectrum between the plates). Like the reflection of light by a mirror, the process consists of the absorption of a particle followed by the emission of a new particle.
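The Casimir pressure invoked here is quantitative. The standard ideal-parallel-plate result is P = π²ħc/(240a⁴); a minimal sketch for a one-micrometre plate separation (an illustrative choice):

```python
import math

# Standard ideal-plate Casimir pressure, P = pi^2 * hbar * c / (240 * a^4):
hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
a = 1e-6            # plate separation, 1 micrometre (illustrative)

P = math.pi**2 * hbar * c / (240 * a**4)   # ~1.3e-3 Pa at 1 micron
print(P)
```

The steep 1/a⁴ dependence is why the effect is only measurable at sub-micron separations, and it reflects exactly the wavelength cut-off mechanism described above: halving the gap excludes more of the long-wavelength modes.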
However, quantum field theory, which has been very precisely tested for electrodynamics, resurrects a quantum fluid or field in space which consists of gauge boson radiation, i.e. virtual (off-shell) radiation which carries “borrowed” or off-mass-shell energy, not real energy. It doesn’t obey the relationship between energy and momentum that applies to real radiation. This is why the radiation can exert pressure without causing objects to heat up or to slow down: the objects merely accelerate or distort instead.
The virtual radiation is not like a regular fluid. It carries potential energy that can be used to accelerate and contract objects, but it cannot directly heat them or cause continuous drag to non-accelerating objects by carrying away their momentum in a series of impacts in the way that gas or water molecules cause continuous drag on non-accelerating objects. There is only resistance to accelerations (i.e., inertia and momentum) because of these limitations on the energy exchanges possible with the virtual (off-shell) radiations in the vacuum.
In a new blog post, Dr Woit quotes a New Scientist article about Erik Verlinde’s “entropic gravity”:
“Now we could be closing in on an explanation of where gravity comes from: it might be an emergent property of the way objects are organised, much as fluidity arises as a property of water…. This idea might seem exotic now, but to kids of the future it might be as familiar as apples.”
Like Woit, I don’t see much hope in Verlinde’s entropic gravity since it doesn’t make falsifiable predictions, just ad hoc ones, but the idea that gravity is an “emergent property of the way objects are organised, much as fluidity arises as a property of water” is correct: gravity is predicted accurately from the shadowing of the implosive pressure from gravitons exchanged with other masses around us. At best, mainstream quantum gravity theories such as string theory and loop quantum gravity are merely compatible with a massless spin-2 excitation, and thus are wrong, ad hoc theories of quantum gravity, founded on error, which fail to make any quantitative, falsifiable predictions.
“Gerard ‘t Hooft expresses pleasure at seeing a string theorist talking about “real physical concepts like mass and force, not just fancy abstract mathematics”. According to the article, the problem with Einstein’s General Relativity is that its “laws are only mathematical descriptions.” I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.”
So Dr Woit has finally flipped, giving up on precise mathematical expressions and coming round to the “much better” vague and mysterious ideas of the mainstream string theorists. Well, I think that’s sad, but I suppose it can’t be helped. Newton in 1692 scribbled in his own printed copy of his Principia that Fatio’s 1690 gravity mechanism was “the unique hypothesis by which gravity can be explained”, although Newton did not publish any statement of his interest in the gravitational mechanism (just as he kept his alchemical and religious studies secret).
“I think you’re being a bit harsh when you say:
I guess a precise mathematical expression of a theory is somehow undesirable, much better to have a vague description in English about how it’s all due to some mysterious entropy.
“No-one is suggesting the existing mathematical models should be abandoned. The point being made is that the entropic approach may give us some physical insight into those mathematical models.”
This is a valid point: finding a way to make predictions with quantum gravity doesn’t mean “abandoning” general relativity, but supplementing it by giving additional physical insight and making quantitative, falsifiable predictions. Although Professor Halton Arp (of the Max-Planck Institut fuer Astrophysik) promotes heretical quasar redshift objections to the big bang which are false, he does make one important theoretical point in his paper The observational impetus for Le Sage Gravity:
‘The first insight came when I realized that the Friedmann solution of 1922 was based on the assumption that the masses of elementary particles were always and forever constant, m = const. He had made an approximation in a differential equation and then solved it. This is an error in mathematical procedure. What Narlikar had done was solve the equations for m= f(x,t). This a more general solution [to general relativity], what Tom Phipps calls a covering theory. Then if it is decided from observations that m can be set constant (e.g. locally) the solution can be used for this special case. What the Friedmann, and following Big Bang evangelists did, was succumb to the typical conceit of humans that the whole of the universe was just like themselves.’
The remainder of his paper is speculative, non-falsifiable or simply wrong, and Arp is totally wrong in dismissing the big bang since his quasar “evidence” has empirically been shown to be completely bogus, while it has also been shown that the redshift evidence definitely does require expansion, since other “explanations” fail. But Arp is right in arguing that the Friedmann et al. solutions to general relativity for cosmological models are all based on the implicit assumption that the source of gravity is not an “emergent” effect of the motion of masses in the surrounding universe. The Lambda-CDM model based on general relativity is typical of the problem, since it can be fitted in ad hoc fashion to virtually any kind of universe by adjusting the values of the dark energy and dark matter parameters to force the theory to fit the observations from cosmology (the opposite of science, which is to make falsifiable predictions and then to check those predictions). That’s a religion based on groupthink politics, not facts.
Copy of comment to:
“But there’s problems, too. There ought to be “air resistance” from the particles as the planets move through space. Then there’s the fact that the force is proportional to surface area hit by the particles, not to the mass. This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light. And in that case the heating due to the “air resistance” of the particles would be impossibly high. Furthermore, if the particle shadows of two planets overlapped, the sun’s gravity on the farther planet should be shielded. No such effect has been observed.
“For these and other reasons Fatio’s theory had to be rejected as unworkable.”
Wikipedia is a bit unreliable on this subject: Fatio assumed on-shell (“real”) particles, not a quantum field of off-shell virtual gauge bosons. The exchange of gravitons between masses in the universe would cause the heating, drag, etc., regardless of spin, if the radiation were real. So the objections would equally dismiss spin-2 gravitons of “attraction”, since those too would have to be everywhere in the universe between masses, just like Fatio’s particles. But in fact the objections don’t apply to gauge boson radiations, since they’re off-shell. Fatio didn’t know about relativity or quantum field theory.
Thanks anyway, your post is pretty funny and could be spoofed by writing a fictitious attack on “evolution” by ignoring Darwin’s work and just pointing out errors in Lamarck’s theory of evolution (which was wrong)…
“This can be remedied by assuming a tiny interaction cross-section due to the particles, but if this is true they must be moving very fast indeed to produce the required force – many times the speed of light.”
Or just increasing the flux of spin-1 gravitons when you decrease the cross-section …
Pauli’s role in predicting the neutrino by applying energy conservation to beta decay (against Bohr, who falsely claimed that the energy conservation anomaly in beta decay was proof that indeterminacy applies to energy conservation, which could violate energy conservation to explain the anomaly without having to predict the neutrino to take away energy), and in declaring Heisenberg’s vacuous (unpredictive) unified field theory “not even wrong”, is well known, thanks to Peter Woit. There is a nice anecdote about Markus Fierz, Pauli’s collaborator in the spin-2 theory of gravitons, given by Freeman Dyson on p. 15 of his 2008 book The Scientist as Rebel:
“Many years ago, when I was in Zürich, I went to see the play The Physicists by the Swiss playwright Friedrich Dürrenmatt. The characters in the play are grotesque caricatures … The action takes place in a lunatic asylum where the physicists are patients. In the first act they entertain themselves by murdering their nurses, and in the second act they are revealed to be secret agents in the pay of rival intelligence services. … I complained about the unreality of the characters to my friend Markus Fierz, a well-known Swiss physicist, who came with me to the play. ‘But don’t you see?’ said Fierz. ‘The whole point of the play is to show us how we look to the rest of the human race’.”
“… reality must take precedence over public relations, for nature cannot be fooled.” – Feynman’s Appendix F to Rogers’ Commission Report into the Challenger space shuttle explosion of 1986.
Fig. 1 – Newton’s geometric proof that an impulsive pushing graviton mechanism is consistent with Kepler’s 3rd law of planetary motion, because equal areas will be swept out in equal times (the three triangles of equal area, SAB, SBC and SBD, all have an equal base of length SB, and they all have altitudes of equal length), together with a diagram we will use for a more modern analysis. Newton’s geometric proof of centripetal acceleration, from his book Principia, applies to any elliptical orbit, not just circular orbits as Hooke’s easier inverse-square law derivation did. (Newton didn’t include the graviton arrow, of course.) By Pythagoras’ theorem, x^2 = r^2 + v^2t^2, hence x = (r^2 + v^2t^2)^1/2. Inward motion, y = x – r = (r^2 + v^2t^2)^1/2 – r = r[(1 + v^2t^2/r^2)^1/2 – 1], which upon expanding with the binomial theorem to the first two terms, yields: y ~ r[(1 + (1/2)v^2t^2/r^2) – 1] = (1/2)v^2t^2/r. Since this result is accurate for infinitesimally small steps (the first two terms of the binomial become increasingly accurate as the steps get smaller, as does the approximation of treating the triangles as right-angled triangles so Pythagoras’ theorem can be used), we can accurately differentiate this result for y with respect to t to give the inward velocity, u = v^2t/r. Inward acceleration is the derivative of u with respect to t, giving a = v^2/r. This is the centripetal acceleration formula which is required to obtain the inverse square law of gravity from Kepler’s third law: Hooke could only derive it for circular orbits, but Newton’s geometric derivation (above, using modern notation and algebra) applies to elliptical orbits as well. This was the major selling point for the inverse square law of gravity in Newton’s Principia over Hooke’s argument.
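The small-step argument above can be checked numerically: take y(t) = (r² + v²t²)^(1/2) – r directly, form a second difference over a short time step, and compare with v²/r. The orbit values here are illustrative (roughly Earth’s orbital radius and speed):

```python
import math

# Numeric check of the geometric argument: the inward acceleration d^2y/dt^2
# of y(t) = (r^2 + v^2 t^2)^(1/2) - r, as t -> 0, should equal v^2 / r.
r = 1.5e11   # illustrative orbital radius (~1 AU), m
v = 3.0e4    # illustrative orbital speed (~Earth's), m/s

def y(t):
    return math.sqrt(r**2 + (v * t)**2) - r

dt = 100.0   # a step tiny compared with the orbital period (~1 year)
a_numeric = (y(dt) - 2 * y(0.0) + y(-dt)) / dt**2   # central second difference
a_exact = v**2 / r                                  # the centripetal result
print(a_numeric, a_exact)   # both ~6.0e-3 m/s^2
```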
See Newton’s Principia, Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1:
‘The areas which revolving bodies describe by radii drawn to an immovable centre of force … are proportional to the times on which they are described. For suppose the time to be divided into equal parts … suppose that a centripetal [inward directed] force acts at once with a great impulse [like a graviton], and, turning aside the body from the right line … in equal times, equal areas are described … Now let the number of those triangles be augmented, and their breadth diminished in infinitum … QED.’
This result, in combination with Kepler’s third law, gives the inverse-square law of gravity, although Newton’s argument uses geometry plus hand-waving, so it is actually far less rigorous than the algebraic version above. Newton failed to employ calculus and the binomial theorem to make his proof more rigorous, even though he was the inventor of them, because most readers wouldn’t have been familiar with those methods. (It doesn’t do to be so inventive as to both invent a new proof and also invent a new mathematics to use in making that proof, because readers will be completely unable to understand it without a large investment of time and effort; so Newton found that it paid to keep things simple and to use old-fashioned mathematical tools which were widely understood.)
Newton in addition worked out an ingeniously simple proof, again geometrically, to demonstrate that a solid sphere of uniform density (or radially symmetric density) has the same net gravity on the surface and at any distance, for all of its atoms in their three-dimensional distribution, as would be the case if all the mass was concentrated in a point in the middle of the Earth. The proof for that is very simple: consider the sphere to be made up of a lot of concentric shells, each of small thickness. For any given shell, the geometry is such that a person on the surface experiences small gravity effects from small quantities of mass nearby on the shell, while most of the mass of the shell is located at large distances. The inverse square effect, which means that for equal quantities of mass, the most nearby mass creates the strongest gravitational field, is thereby offset by the actual locations of the masses: only small amounts are nearby, and most of the mass of the shell is at a great distance. The overall effect is that the effective location for the entire mass of the shell is in the middle of the shell, which implies that the effective location of the mass of a solid sphere seen from a distance is in the middle of the sphere (if the density of each of the little shells, considered to be parts of the sphere, is uniform).
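Newton’s shell theorem can be verified by direct numerical integration: sum the axial pull of every ring of a thin uniform shell on an outside test point, and compare with treating the shell as a point mass at its centre. The mass, radius and distance below are arbitrary illustrative values:

```python
import math

# Numeric sketch of the shell theorem: integrate the axial gravitational pull of
# a thin uniform spherical shell (radius a, mass M) on a test point at distance
# d > a from its centre, and compare with a point mass GM/d^2 at the centre.
G, M, a, d = 6.674e-11, 1.0e24, 6.0e6, 1.0e7   # illustrative values (d > a)

N = 200000
total = 0.0
for i in range(N):
    theta = (i + 0.5) * math.pi / N                  # midpoint rule in polar angle
    s2 = d*d + a*a - 2*a*d*math.cos(theta)           # squared distance to the ring
    dM = (M / 2) * math.sin(theta) * (math.pi / N)   # mass of the ring at angle theta
    total += G * dM * (d - a*math.cos(theta)) / s2**1.5  # axial field component

point_mass = G * M / d**2
print(total / point_mass)   # ~1.0: the shell pulls exactly like a central point mass
```

The off-axis components of each ring cancel by symmetry, which is why only the axial component is summed.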
Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on google video here (55 minutes). Feynman in his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on google video (55 minutes), stated:
‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun’s direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced the fundamental operation is very much simpler; this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. …. If the earth is moving it is running into the particles …. so there is a sideways force on the sun would slow the earth up in the orbit and it would not have lasted for the four billions of years it has been going around the sun. So that’s the end of that theory. …
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
The error Feynman makes here is that quantum field theory tells us that there are particles of exchange radiation mediating forces normally, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a once-off resistance due to the compressive length change, not continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved; i.e., uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges), this effect is experienced by us all the time as inertia and momentum; the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation: contractions of the matter in the length parallel to the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum and thus inertial mass of the matter in motion. The contraction of the earth in the direction of its motion is due to this mechanism, and Feynman misses it entirely. The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post.
So it is two for the price of one: the mechanism predicts gravity but also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity, as well. Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed by experiments and observations as more experiments and observations are done.
As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time, there are 6 effective dimensions: three expanding time-like dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally, for example the Earth’s radius is contracted by the amount 1.5 mm as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect due to gravity (gravitational mass), but it is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the Lorentz-FitzGerald contraction).
This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction of subtracting half the product of the metric and the trace of the Ricci tensor from the Ricci tensor of curvature. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors and the full Einstein-Hilbert field equation; as explained in an earlier post, Newton’s law of gravitation, when merely expressed in terms of 4-dimensional spacetime curvature, gives the wrong deflection of starlight and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, compared to the amount of curvature that a low-velocity particle experiences. The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996. The first major confirmation came in 1998, and this was the lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model.
But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe was not isotropic) then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those time-like dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6. The three extra time-like dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.
Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered, and is called the Pioneer Anomaly: an otherwise unaccounted-for tiny acceleration in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data) of (8.74 ± 1.33) × 10−10 m/s2 for the long-range space probes Pioneer-10 and Pioneer-11. However, these accelerations are very small, and to a very good approximation the three time-like dimensions – corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions – are very similar.
Therefore, the full 6-dimensional theory (3 spatial and 3 time dimensions) gives the unification of fundamental forces; Riemann’s suggestion of summing dimensions using the Pythagorean sum ds² = Σ(dx²) could obviously include time (if we live in a single velocity universe) because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)² can be included with the other dimensions dx², dy², and dz². There is then the question as to whether the term d(ct)² will be added or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx² + dy² + dz² = d(ct)², which implies that d(ct)² changes sign when passed across the equality sign to the other dimensions: ds² = Σ(dx²) = dx² + dy² + dz² − d(ct)² = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds² = Σ(dx²) = dx² + dy² + dz² − d(ct)², is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854.
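As a quick numerical sanity check on the sign convention above (a minimal sketch with illustrative values, not anything from the original post), a light ray should give a zero interval:

```python
c = 2.998e8  # speed of light, m/s

def interval_squared(dx, dy, dz, dt):
    """Minkowski interval ds^2 = dx^2 + dy^2 + dz^2 - (c*dt)^2."""
    return dx**2 + dy**2 + dz**2 - (c * dt)**2

# A light ray covering one light-second along x in one second:
ds2 = interval_squared(c * 1.0, 0.0, 0.0, 1.0)
print(ds2)  # 0.0 -- the null interval required for light
```

A time-like path (moving slower than c) gives a negative ds², consistent with the subtracted d(ct)² term dominating.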
Professor Georg Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x1, x2, x3, and so on to xn, then … ds = √[Σ(dx)²] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’
[The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric: ds² = (1 − 2GM/(rc²))⁻¹(dx² + dy² + dz²) − (1 − 2GM/(rc²)) d(ct)². This only reduces to the special relativity metric for the impossible, unphysical, imaginary, and therefore totally bogus case of M = 0, i.e., the absence of gravitation. However, this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
WARNING: I’ve made a change to the usual tensor notation below and, apart from the conventional notation in the Christoffel symbol and Riemann tensor, I am indicating covariant tensors by positive subscript and contravariant by negative subscript instead of using indices (superscript) notation for contravariant tensors. The reasons for doing this will be explained and are to make this post easier to read for those unfamiliar with tensors but familiar with ordinary indices (it doesn’t matter to those who are familiar with tensors, since they will know about covariant and contravariant tensors already).
Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of co-ordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute co-ordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):
g = ds² = gμν dx−μ dx−ν. The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). … We call four quantities Aν the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector Bν, the sum over ν, Σ Aν Bν = invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
The rank is denoted simply by the number of letters of subscript notation, so that Xa is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over applicable dimensions), and Xab is a ‘rank 2’ tensor (for second-order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 × 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, Xa) and a contra-variant tensor of the same variable (say, X−a) are distinguished by the way they transform when converting from one system of co-ordinates to another, a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contra-variant tensor by superscript (for example xⁿ). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the superscript on a tensor labels components to be summed, xⁿ standing for x¹ + x² + x³ + … + xⁿ, whereas a power xⁿ means the product of n factors of x). [Another step towards ‘beautiful’ gibberish then occurs whenever a contra-variant tensor is raised to a power, resulting in, say, (x²)², which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x⁴, causing confusion.] We avoid the ‘beautiful’ notation by using negative subscript to represent contra-variant notation; thus x−n is here the contra-variant version of the covariant tensor xn.
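The component counts quoted above (1, 4, 16) simply follow from each index running over the four spacetime values; a trivial sketch in Python (illustrative, not from the original post):

```python
def tensor_components(rank, dim=4):
    """Each of a tensor's `rank` indices runs over `dim` values
    (ct, x, y, z in 4D spacetime), giving dim**rank components."""
    return dim ** rank

print(tensor_components(0))  # scalar: 1
print(tensor_components(1))  # four-vector: 4
print(tensor_components(2))  # e.g. the metric tensor: 16, a 4x4 matrix
```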
Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’
This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but it does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for μ and ν taking values representing the four components of space-time (1, 2, 3 and 4 representing the ct, x, y, and z dimensions), we get the awfully long summation of 16 terms added up like a 4-by-4 matrix (notice that according to Einstein’s summation convention, tensors with indices which appear twice are to be summed over):
g = ds² = gμν dx−μ dx−ν = Σ(gμν dx−μ dx−ν)
= −(g11 dx−1 dx−1 + g21 dx−2 dx−1 + g31 dx−3 dx−1 + g41 dx−4 dx−1)
+ (−g12 dx−1 dx−2 + g22 dx−2 dx−2 + g32 dx−3 dx−2 + g42 dx−4 dx−2)
+ (−g13 dx−1 dx−3 + g23 dx−2 dx−3 + g33 dx−3 dx−3 + g43 dx−4 dx−3)
+ (−g14 dx−1 dx−4 + g24 dx−2 dx−4 + g34 dx−3 dx−4 + g44 dx−4 dx−4)
The first dimension has to be defined as negative since it represents the time component, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of number notation, since the term dx−1 dx−1 = d(ct)², while dx−2 dx−2 = dx², dx−3 dx−3 = dy², and so on. Therefore:
g = ds² = gct d(ct)² + gx dx² + gy dy² + gz dz² + (a dozen trivial first-order differential terms).
It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the ten-year delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita, who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute co-ordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without also being expressed in unfamiliar new methods. So it is rare for a single group of people to have the stamina both to invent a new method and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully. Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into space-time geometry:
‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for “higher mathematics”, which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)
General relativity equates the mass-energy in space to the curvature of motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez or a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.
Curvature is best illustrated by plotting a graph of distance versus time and when the line curves (as for an accelerating car) that curve is ‘curvature’. It’s the curved line on a space-time graph that marks acceleration, be that acceleration due to a force acting upon gravitational mass or inertial mass (the equivalence principle of general relativity means that gravitational mass = inertial mass).
This point is made very clearly by Professor Lee Smolin on page 42 of the USA edition of his 2006 book, The Trouble with Physics. See Figure 1 in the post here. Next, in order to mathematically understand the Riemann curvature tensor, you need to understand the operator (not a tensor) which is denoted by the Christoffel symbol (superscript here indicates contravariance):
Γ^c_ab = (1/2) g^cd [(∂g_da/∂x^b) + (∂g_db/∂x^a) − (∂g_ab/∂x^d)]
The Riemann curvature tensor is then represented by:
R^a_bce = (∂Γ^a_be/∂x^c) − (∂Γ^a_bc/∂x^e) + (Γ^a_ct Γ^t_be) − (Γ^a_et Γ^t_bc).
If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is cancelled out (within this distance there is curvature due to gravitation, and at greater distances there will be curvature due to the dark energy that is responsible for the cosmological constant). The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.
To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c∫(−gμν dx−μ dx−ν)^(1/2), while the proper time is ∫(gμν dx−μ dx−ν)^(1/2).
Notice that the ratio of proper length to proper time is always c. The Ricci tensor is a Riemann tensor contracted in form by summing over a = b, so it is simpler than the Riemann tensor and is composed of 10 second-order differentials. General relativity deals with a change of co-ordinates by using the FitzGerald-Lorentz contraction factor, γ = (1 − v²/c²)^(1/2). Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, which reduces to the line element of special relativity for the impossible, not-in-our-universe, case of zero mass. Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r² (inward acceleration being defined as positive) in the form Rμν = 4πGTμν/c², where Tμν is the mass-energy tensor, Tμν = ρuμuν. (This was incorrect since it did not include conservation of energy.) But if we consider just a single dimension for low velocities (γ = 1), and remember E = mc², then Tμν = T00 = ρu² = ρ(γc)² = E/(volume). Thus, Tμν/c² is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:
Above: the components of the stress-energy tensor (image credit: Wikipedia).
The scalar sum or “trace” of the stress-energy tensor is of course the sum of the diagonal terms from the top left to the bottom right; hence the trace is just the sum of the terms with subscripts 00, 11, 22, and 33 (i.e., the energy-density and pressure terms).
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^(1/2), so v² = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v² = 2GM/x) into the FitzGerald-Lorentz contraction, giving γ = (1 − v²/c²)^(1/2) = [1 − 2GM/(xc²)]^(1/2).
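As a numeric illustration of these two formulas (with approximate textbook values for Earth, which are assumptions for illustration rather than figures from the post):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # mass of the Earth, kg (illustrative value)
x = 6.371e6     # radius of the Earth, m (illustrative value)

v_esc = math.sqrt(2 * G * M / x)               # Newtonian escape velocity
gamma = math.sqrt(1 - 2 * G * M / (x * c**2))  # gravitational contraction factor

print(round(v_esc))   # ~11186 m/s
print(1 - gamma)      # ~7e-10: a tiny departure from unity at Earth's surface
```

The smallness of 1 − γ shows why these gravitational contraction and time-dilation effects are unmeasurable in everyday life yet show up in precision clocks.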
However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: length is contracted in only one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!) with spherically symmetric gravity. Using the binomial expansion to the first two terms of each: FitzGerald-Lorentz contraction effect: γ = x/x0 = t/t0 = m0/m = (1 − v²/c²)^(1/2) = 1 − ½v²/c² + … . Gravitational contraction effect: γ = x/x0 = t/t0 = m0/m = [1 − 2GM/(xc²)]^(1/2) = 1 − GM/(xc²) + …, where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x0 + y/y0 + z/z0 = 3r/r0. Hence the radial contraction of space around a mass is r/r0 = 1 − GM/(3rc²). Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c². This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
This is the 1.5-mm contraction of the Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without molecular viscosity (this is due to the Schwinger threshold for pair-production by an electric field: the vacuum only contains fermion-antifermion pairs out to a small distance from charges, and beyond that distance the weaker fields can’t cause pair-production – i.e., the energy is below the IR cutoff – so the vacuum contains just bosonic radiation without pair-production loops that can cause viscosity; for this reason the vacuum compresses macroscopic matter without slowing it down by drag). Feynman was unable to proceed with the LeSage gravity and gave up on it in 1965.
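Feynman’s 1.5 mm figure is easy to check from the (1/3)GM/c² contraction stated above, again using assumed approximate values for the Earth:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # Earth mass, kg (illustrative value)

contraction = G * M / (3 * c**2)   # radial contraction (1/3)GM/c^2, in metres
print(f"{contraction * 1000:.2f} mm")   # ~1.48 mm, Feynman's quoted ~1.5 mm
```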
More information can be found in the earlier posts here, here, here, here, here and here.
Now back to the Washington Times article:
“The day before he was elected as Pope Benedict XVI, Cardinal Joseph Ratzinger addressed the cardinals assembled in St Peter’s and warned that society was “building a dictatorship of relativism that does not recognize anything as definitive.” … When I wrote Questioning Einstein: Is Relativity Necessary? (Vales Lake, 2009) I realized that the central claims of relativity and relativism are very similar.
“My book was based on work by Petr Beckmann, a Czech immigrant who defected to the U.S. and taught electrical engineering at the University of Colorado. … His main point was that the physical anomalies that led to relativity can be explained without it. For example, the famous equation “E = mc2” was derived using relativity theory. But later Einstein re-derived it, this time without relativity.
“A frequently heard statement of cultural relativism goes like this: “If it feels right for you, it’s OK. Who is to say you’re wrong?” One individual’s experience is as “valid” as another’s. There is no “preferred” or higher vantage point from which to judge these things. Not just beauty, but right and wrong are in the eye of the beholder. …
“The special theory of relativity imposes on the physical world a claim that is very similar to the one made by relativism. In the 1880s a scientist named Albert Michelson searched for the “ether” – the medium in which light waves were thought to travel. But his equipment could not detect it … Einstein resolved the problem by claiming that a light ray keeps moving toward you at the same speed no matter how fast you move toward it. Light’s speed is unaffected by the observer’s velocity, Einstein said. That was strange because other waves don’t behave that way. Move toward a sound wave, and you must add your speed to that of the oncoming wave to know its approach velocity. That didn’t apply to light, apparently.
“So how come the speed of light always stays the same? Einstein argued that when the observer moves relative to an object, distance and time always adjust themselves just enough to preserve light speed as a constant. Speed is distance divided by time. So, Einstein argued, length contracts and time dilates to just the extent needed to keep the speed of light ever the same.
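The constancy Bethell describes is enforced by the relativistic composition of velocities, which replaces the simple addition used for sound waves; a minimal sketch in units where c = 1 (illustrative, not from Bethell’s article):

```python
def add_velocities(u, v):
    """Relativistic composition of parallel velocities, in units where c = 1:
    (u + v) / (1 + u*v). The result never exceeds 1 (i.e., never exceeds c)."""
    return (u + v) / (1 + u * v)

print(add_velocities(0.9, 0.9))   # ~0.9945, still below c
print(add_velocities(1.0, 0.5))   # 1.0: light approaches at c regardless of the observer
```

Compare sound: an observer moving at speed v toward a sound wave measures an approach speed of c_sound + v, with no such saturation.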
“Relativity … elevated science into a priesthood of obscurity. Common sense could no longer be trusted.
“The contraction of space and the dilation of time are deductions from relativity. But they have not been observed. In easy Einstein books, drawings of spaceships that are shortened because they are moving at high speed are imagined by artists in accordance with theory. No physical experiment has ever detected length contraction.
“Atomic clocks do slow down when they move … But the slowing of clocks and the slowing of time are very different things. GPS has “relativistic” corrections to keep its clocks synchronized. But those corrections depart significantly from Einstein’s theory. They refer clock motion not to the observer but to an absolute reference frame, centered on the Earth.
“So there are reasons to think that experiments with atomic clocks have falsified special relativity. (The general theory is another matter. Beckmann said it gave the right results by a roundabout method.)”
Bethell’s article is full of misunderstandings:
1. Einstein simplified physics by opposing aether theories. Einstein showed how to take just two experimentally defensible principles, namely relative motion and the constancy of the velocity of light, and used them to predict time-dilation and the increase of inertial mass with velocity (both confirmed by particle physics: e.g., radioactive particles like muons decay more slowly when accelerated to relativistic – i.e., near c – velocities, and particles gain extra momentum from the velocity-dependent mass increase, which has measurable effects when they collide).
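The muon example can be made quantitative; with an assumed speed of 0.98c (an illustrative value, not taken from the post) the time-dilation factor follows directly:

```python
import math

v = 0.98            # muon speed as a fraction of c (illustrative value)
tau0 = 2.197e-6     # muon mean lifetime at rest, seconds

gamma = 1 / math.sqrt(1 - v**2)   # time-dilation factor
tau_lab = gamma * tau0            # lifetime observed in the laboratory frame

print(round(gamma, 2))   # ~5.03
print(tau_lab)           # ~1.1e-5 s: the muon lives about 5x longer in the lab
```

This dilation is why muons created in the upper atmosphere survive long enough to reach detectors at ground level.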
Aether theories were ugly, ad hoc, and occurred in many varieties, sharing only the common problem that they all failed to explain let alone predict these results. Einstein’s genius was dumping the pictorial model and sticking to equations based on empirically defensible principles. The best of the aether theories were those of Lorentz and FitzGerald, in which the contraction of the Michelson-Morley apparatus in the direction of its motion (which is the same in Einstein’s, Lorentz’s and FitzGerald’s theories) is due to the pressure from the front of the moving particles in the instrument (electrons and quarks) pushing against the aether as the Earth moves through an absolute space. The pressure was supposed to contract the instrument in direct proportion to the factor (1 – v2/c2)1/2, where v is the velocity of the instrument and c is the velocity of light.
This is of course an analogy to the Prandtl-Glauert transformation, whereby the drag coefficient of an object is directly proportional to (1 − M²)^(−1/2), where the Mach number M = v/c, so that the drag coefficient rises in proportion to (1 − v²/c²)^(−1/2) for an object moving in the air as it approaches the velocity of sound, the so-called “sound barrier” in that theory. The increase in drag coefficient contracts an aircraft in the direction of its motion, because the head-on pressure on the nose of the aircraft rises. Of course, this is not a perfect analogy. For one thing, the total drag force in air is not just proportional to the drag coefficient, but to the square of the velocity. The total drag force is equal to the dynamic pressure (the product of half the air density and the square of the velocity), multiplied by the cross-sectional area and the drag coefficient. As explained in previous posts, the off-shell gauge boson radiations in the vacuum force fields are unable to slow down moving objects by carrying away kinetic energy like the air does to an aircraft.
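The parallel between the two factors is easy to tabulate; a short sketch (illustrative only, and note the post’s caveat that the analogy is imperfect):

```python
import math

def prandtl_glauert(M):
    """Compressibility correction factor 1/sqrt(1 - M^2), M = v/c_sound."""
    return 1 / math.sqrt(1 - M**2)

def lorentz(v_over_c):
    """FitzGerald-Lorentz contraction factor sqrt(1 - v^2/c^2)."""
    return math.sqrt(1 - v_over_c**2)

# Both expressions become singular / vanish as v approaches c
# (of sound in one case, of light in the other):
for r in (0.5, 0.9, 0.99):
    print(r, prandtl_glauert(r), lorentz(r))
```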
So in the case of the vacuum, the mechanism alters the form of the mathematical model used to describe the system. Only accelerations are resisted, and this gives rise to inertial mass, which rises with velocity (an analogy to the snowplow effect, where the height of snow on a snowplow rises with the velocity of the snowplow, because snow piles up since it has its own inertia and isn’t shunted sideways out of the way of the plow in direct proportion to the forward speed; thus the effective mass of the snow being pushed by the plow increases with the forward velocity) by the same Lorentz transformation factor which describes the contraction in length and time-dilation. Furthermore, the Prandtl-Glauert transformation is false for air because you can in fact exceed the velocity of sound, albeit at the price of creating a supersonic shock wave. Air is not a perfect fluid. Vitally important for the analogy is the historical chance that Ludwig Prandtl and Hermann Glauert discovered their approximate theory of air drag in the 1920s, after special relativity had replaced the FitzGerald-Lorentz aether theory as the means of deriving the same basic equations. Prandtl had been lecturing on the subject before Glauert, although the latter published first in 1927, “The Effect of Compressibility on the Lift of Airfoils,” Proceedings of the Royal Society, vol. A118 (1927), pp. 113-9. No reference was made to the contraction in special relativity. The analogy is pertinent because in quantum field theory, space is filled with field quanta moving at light velocity, c, which is the isothermal analogy to the adiabatic relation of mean air-particle velocity to sound speed in the air. However, sound is a longitudinal wave, whereas light oscillates perpendicularly to its direction of propagation.
Nevertheless, Feynman’s path integral explains the double-slit diffraction of light accurately by showing that each photon effectively follows all possible paths, most of which are not normally observable because they have amplitudes which cancel one another out. Therefore, long-range (electromagnetic, gravitational) forces have quantum field particles moving at the velocity of light between all charges in the universe, and “real” (on-shell) particles of radiation appear as asymmetries in the normal equilibrium of exchange of (off-shell) particles between long-lived (real) particles. From the experimentally confirmed Casimir effect, where there is a LeSage-type pushing of metal plates together because they are hit by the full spectrum of vacuum radiation on the outside but only by the reduced (cut-off) spectrum of wavelengths smaller than the size of the gap between them on the inside, we know that this is real.
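For scale, the standard ideal parallel-plate result (not derived in this post; this is the textbook zero-temperature, perfect-conductor expression) gives a measurable pressure at sub-micron separations:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s

def casimir_pressure(d):
    """Ideal parallel-plate Casimir pressure, pi^2 * hbar * c / (240 * d^4), in Pa."""
    return math.pi**2 * hbar * c / (240 * d**4)

print(casimir_pressure(1e-6))   # ~1.3e-3 Pa for plates 1 micron apart
```

The steep 1/d⁴ dependence is why the effect is only detectable at very small plate separations.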
Notice in the video below that the lecturer is totally incompetent in claiming that all types of virtual particles contribute to the Casimir effect; this is incorrect because although the Casimir force operates over short distances between parallel conducting metal plates, the distance is large compared to the size of an atom, i.e. it is larger than the range of virtual hadrons in the vacuum (which are limited to nuclear-sized distances and thus can’t even reach the radius of orbital electrons). So the guy’s arm-waving claims about all kinds of particles contributing to the Casimir effect are totally false. Likewise, his claims about subtracting infinities are unphysical and unnecessary; there is no evidence that the zero-point vacuum energy density is infinite. Some people like to add junk speculations to physics to alloy it to science fiction. The Casimir force is actually evidence that the vacuum contains electromagnetic field quanta coming from all directions, not merely being exchanged between the two charges we are looking at, but being exchanged with all the other charges in the universe (charges in distant stars all around). There is no mechanism for field quanta to resist being exchanged between charges A and B just because those charges each have opposite charges near them (e.g., nuclei if they are electrons, or vice versa). Fundamental charges have no way of being influenced by the overall neutrality of a distant system; if there is a positive and a negative charge at a distance, an electron will still exchange field quanta, although the overall force resulting from the gauge interaction (quanta exchange) may be balanced out if the distant negative and positive charges are close together. This leads to physical predictions and a deeper understanding of physical phenomena, unlike false claims about having to subtract infinities:
2. The claim “Relativity … elevated science into a priesthood of obscurity. Common sense could no longer be trusted”, is better levelled at lying, physically false and indefensible, non-relativistic first-quantization quantum mechanics which uses classical Coulomb fields and has to therefore falsely introduce chaos by applying intrinsic Planck-scale uncertainty to real particles:
“… Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”
– Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.
“I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!” [Emphasis by Feynman.]
– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.
“When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [amplitudes for different paths] to predict where an electron is likely to be.”
– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.
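Feynman’s “adding arrows” prescription in these quotes can be sketched numerically. The toy script below (my own illustration, not Feynman’s; the slit separation, wavelength and screen distance are arbitrary assumed values) adds one unit-length arrow per path through a two-slit screen, with phase proportional to path length, and shows the arrows reinforcing at the centre of the screen and cancelling at the first dark fringe:

```python
import cmath
import math

# Toy "adding arrows" sum for a two-slit setup, after Feynman's QED picture.
# All geometry values (slit separation, wavelength, screen distance) are
# illustrative assumptions, not taken from the text.
wavelength = 500e-9           # 500 nm light
k = 2 * math.pi / wavelength  # phase advances by k radians per metre of path
slit_sep = 20e-6              # 20 micron slit separation
screen_dist = 1.0             # 1 m from slits to screen

def intensity(y):
    """Sum one 'arrow' (unit amplitude, phase = k * path length) per slit."""
    total = 0 + 0j
    for slit_y in (-slit_sep / 2, +slit_sep / 2):
        path = math.hypot(screen_dist, y - slit_y)
        total += cmath.exp(1j * k * path)
    return abs(total) ** 2

centre = intensity(0.0)  # both arrows aligned: maximum
# First dark fringe: path difference = wavelength/2, at y = wavelength*D/(2d)
dark = intensity(wavelength * screen_dist / (2 * slit_sep))
print(centre, dark)  # arrows reinforce at the centre, cancel at the dark fringe
```

With many slits (or a continuum of paths), the same loop becomes a path integral: paths far from the extremal one contribute arrows that cancel in pairs, which is exactly Feynman’s point about why large-scale light appears to follow a single classical path.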
“The quantum collapse [in the multiple universes, non-relativistic, pseudoscientific first-quantization model of quantum mechanics] occurs when we model the wave moving according to Schroedinger time-dependent and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger time-independent. The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
– Dr Thomas Love, California State University.
“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.”
3. Bethell doesn’t seem to be aware of the biggest anisotropy in the cosmic background radiation: the massive ±3 milliKelvin anisotropy discovered back in the early days of investigating the microwave background (emitted 300,000 years after the big bang) by Richard A. Muller, using U-2 aircraft, in 1978. See his Scientific American article, “The cosmic background radiation and the new aether drift”, vol. 238, May 1978, pp. 64-74 (PDF linked here):
“U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”
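To first order, a motion v through blackbody radiation of mean temperature T0 produces a dipole of amplitude roughly T0·v/c, so the quoted few-millikelvin dipole can be inverted for a speed. The sketch below uses an assumed dipole amplitude of 3.35 mK; this inversion gives the solar system’s speed of roughly 370 km/s, and Muller’s 600 km/s figure for the Milky Way as a whole follows after correcting for the Sun’s orbital motion within the Galaxy:

```python
# First-order Doppler inversion of the CMB dipole: dT ~ T0 * v / c.
# The dipole amplitude used here is an assumed illustrative value of the
# order of the +/- 3 mK anisotropy quoted in the article.
c = 2.998e8   # speed of light, m/s
T0 = 2.725    # mean CMB temperature, K
dT = 3.35e-3  # dipole amplitude, K (assumed)

v = c * dT / T0  # inferred speed through the radiation
print(v / 1000, "km/s")  # roughly 370 km/s for the solar system
```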
This anisotropy is literally orders of magnitude bigger than the smaller ripples detected more recently by satellites such as COBE and WMAP. Our Milky Way is moving towards Andromeda much faster than the gravitational attraction implied by the masses of the stars in the two galaxies would suggest, which has led to suggestions of an invisible “great attractor” near Andromeda, such as a large black hole. However, for scientists who let science fiction take a back seat, there is no evidence for this and it isn’t even wrong: it is not a falsifiable hypothesis, just ad hoc fiddling of uncheckable theory to fit facts.
Instead of inventing an unobserved black hole or unobserved UFO parked around Andromeda to “explain” the blueshift towards it, we can adopt Occam’s Razor and take the simple explanation of Muller in that 1978 Scientific American article: the motion is the aether drift which Michelson and Morley failed to measure because their instrument contracted in the direction of its absolute motion. The Milky Way is going at 600 km/s and, with occasional deflections and variations in its motion as it approaches and passes other galaxies, it has been travelling like this for a long time. Since the big bang, the Milky Way has gone at roughly this speed for 13.7 thousand million years, covering about 0.2% of the horizon radius of the universe today. Hence, the reason why we see a highly isotropic universe around us (stars every way we look) is because we’re in the middle, just 0.2% of the radius off centre! Special relativity then becomes a child’s simplification like the Bohr atom: absolutely false in its denial of the possibility of measuring absolute motion, but relatively correct in its results and handy for making quick, rough calculations. The Copernican “principle” that “we are at no special place” becomes ignorant, ill-informed pseudo-science because it simply has no experimental foundation.
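The “fraction of the horizon radius” arithmetic here reduces to a ratio of speeds, since in this naive picture both the distance travelled and the horizon radius grow linearly with the age of the universe:

```python
# Sanity check: distance travelled / horizon radius = (v*t)/(c*t) = v/c,
# independent of the age of the universe assumed.
v = 600e3    # Milky Way speed from the CMB dipole, m/s
c = 2.998e8  # speed of light, m/s

fraction = v / c
print(f"{fraction:.1%}")  # about 0.2% of the horizon radius
```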
Nobody has ever proved that the observed isotropy of the universe around us is a hoax due to curved spacetime, because – as observationally discovered in 1998 by Perlmutter from supernova redshifts – our universe is not curved, it is simply flat in geometry and the general relativistic gravitational curvature is limited to the spacetime near masses. At great distances from masses, the cosmological acceleration due to dark energy opposes curvature, flattening spacetime out!
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
One funny or stupid denial of this was in a book called Einstein’s Mirror by a couple of physics lecturers, Tony Hey and Patrick Walters. They seemed to claim, in effect, that in the Michelson-Morley experiment the arms of the instrument are of precisely the same length and measure the speed of light absolutely, and that if anyone built a Michelson-Morley instrument with arms of unequal length, the contraction wouldn’t work. In fact, the arms were never equal to within a wavelength of light to begin with; the experiment only detected the relative difference in apparent light speed between two perpendicular directions, using interference fringes, which measure speed in one direction relative to another, not absolute speed in any direction. You can’t measure the speed of light with the Michelson-Morley instrument: it only shows a difference between two perpendicular directions, and only if you implicitly assume there is no length contraction!
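For scale, the classical (no-contraction) prediction for the 1887 experiment is easy to reproduce. The textbook analysis gives a fringe shift on 90° rotation of about 2·L·(v/c)²/λ, and the numbers below are the usual ones quoted for the 1887 apparatus (effective arm length ~11 m from multiple reflections, Earth’s orbital speed 30 km/s, sodium light):

```python
# Expected fringe shift in the 1887 Michelson-Morley experiment if there
# were NO length contraction. Values are standard textbook figures, used
# here only as an illustration.
L = 11.0             # effective optical path per arm, m (multiple reflections)
v = 3.0e4            # Earth's orbital speed, m/s
c = 3.0e8            # speed of light, m/s
wavelength = 5.9e-7  # sodium light, m

dN = 2 * L * (v / c) ** 2 / wavelength
print(dN)  # ~0.4 fringe expected classically; under 0.01 was observed
```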
It’s really funny that Eddington made Einstein’s special relativity (anti-aether) famous in 1919 by confirming aetherial general relativity. The media couldn’t be bothered to explain aetherial general relativity, so they explained Einstein’s earlier false special relativity instead!
‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, New Pathways in Science, v2, p39, 1935.
“The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.” – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.
“Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.” – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5.
“Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.” – Einstein’s Legacy – Where are the “Einsteinians?”, by Professor Lee Smolin, http://www.logosjournal.com/issue_4.3/smolin.htm
“But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.” – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’… A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90. (However, this is a massive source of controversy in GR because it’s a continuous approximation to discrete lumps of matter as a source of gravity which gives rise to a falsely smooth Riemann curvature metric; really continuous differential equations in GR must be replaced by a summing over discrete – quantized – gravitational interaction Feynman graphs.)
‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.
‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.
‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.
‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.
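The screening described in these two quotes can be made quantitative with the standard one-loop running of the fine-structure constant. The sketch below includes only the electron loop, α(Q) = α / (1 − (2α/3π) ln(Q/mₑ)); this alone raises the coupling noticeably at the Z mass, and summing over all charged fermions of the Standard Model brings 1/α down further, to near the ~128.5 that Levine, Koltick et al. measured:

```python
import math

# One-loop QED running of the fine-structure constant, electron loop only:
# a sketch of the vacuum-polarization screening described in the quotes.
alpha0 = 1 / 137.036  # low-energy fine-structure constant
m_e = 0.511e-3        # electron mass, GeV
Q = 91.2              # probe energy, GeV (around the Z mass)

alpha_Q = alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))
print(1 / alpha_Q)  # ~134.5: part of the screening is stripped away
```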
‘… the Heisenberg formulae [virtual particle interactions cause random pair-production in the vacuum, introducing indeterminacy, like the Brownian motion of pollen fragments due to random air molecular bombardment] can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’ – G. Builder, ‘Ether and Relativity’, Australian Journal of Physics, v11 (1958), p279.
This paper of Builder’s on absolute velocity in ‘relativity’ is the analysis used and cited by the famous paper on atomic clocks flown around the world to validate ‘relativity’, namely J.C. Hafele in Science, vol. 177 (1972), pp. 166-8. So it was experimentally proving absolute motion, not ‘relativity’ as widely hyped. Absolute velocities are required in general relativity because when you take synchronised atomic clocks on journeys within the same gravitational isofield contour and then return them to the same place, they read different times, due to having had different absolute motions. This experimentally debunks special relativity for this situation, where you need the absolute motions from accelerations, modelled by curvature in general relativity. Hence, Einstein was wrong when he wrote in Ann. d. Phys., vol. 17 (1905), p. 891: ‘we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ See, for example, the debunking of Einstein’s claim there on page 12 of the September 2005 issue of Physics Today, available at: http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf (PDF linked here)
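The size of Einstein’s 1905 equator-clock claim is easy to evaluate. The sketch below computes only the special-relativity rate deficit from the equatorial rotation speed (an assumed 465 m/s); the point of the Physics Today note cited above is that on the rotating geoid this velocity effect is cancelled by the gravitational blueshift from the Earth’s oblateness, so real equator and pole clocks run at the same rate:

```python
# Special-relativity rate deficit for an equator clock, velocity term only.
# On the real geoid this is cancelled by the oblateness (gravitational)
# effect, which is why Einstein's 1905 claim fails.
v = 465.0    # equatorial rotation speed, m/s (assumed round value)
c = 2.998e8  # speed of light, m/s

rate_deficit = v**2 / (2 * c**2)  # fractional slowing from velocity alone
print(rate_deficit)  # ~1.2e-12, i.e. about 0.1 microsecond per day
```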
So we see from this solid experimentally confirmed evidence that the usual statement that there is no ‘preferred’ frame of reference, i.e., a single absolute reference frame, is false. Experimentally, a swinging pendulum or spinning gyroscope is observed to stay true to the stars (which are not moving at sufficient angular velocities from our observation point to have any significant problem with being an absolute reference frame for most purposes).
If you need a more accurate standard, then use the cosmic background radiation, which is the truest blackbody radiation spectrum ever measured in history.
These different methods of obtaining measurements of absolute motion are not really examining ‘different’ or ‘preferred’ frames, or pet frames. They are all approximations to the same thing, the absolute reference frame. All the Copernican propaganda since the time of Einstein that: ‘Copernicus didn’t discover the earth orbits the sun, but instead Copernicus denied that anything really orbited anything because he thought there is no absolute motion, only relativism’, is a gross lie. That claim is just the sort of brainwashing double-think propaganda which Orwell accused the dictatorships of doing in his book ‘1984’. Copernicus didn’t travel throughout the entire universe to confirm that the earth is: ‘in no special place’. Even if he did make that claim, it would not be founded upon any evidence. Science is (or rather, should be) concerned with being unprejudiced in areas where there is a lack of evidence.
If the cosmic background radiation, the Casimir experiment proof, and so on, had been around in Einstein’s time, I don’t think he would have been a laggard in trying to move physics forward. At some time, old ideas like aether, relativity, etc., which were all “mainstream religions” in their day, need to be given up and people need to move on, driven by empirical facts which can no longer be denied or covered-up. What does not advance peacefully, sooner or later succumbs to corruption, decay, and even radical revolution.
The famous 1930-64 New York Times science correspondent (science editor from 1956-64), William L. Laurence, who published the headline news of early American research on nuclear fission chain reactions in February 1939 and was the only journalist to witness the first nuclear test and the Nagasaki raid from a B-29 aircraft, published numerous stories hyping Einstein’s many failed efforts to find a classical unified field theory of electromagnetism and gravitation (ignoring nuclear forces). Laurence and his wife are photographed beside Einstein on the back cover of Laurence’s 1951 book about the development of the H-bomb, The Hell Bomb, which praises Einstein’s classical theory hype. However, Laurence changed tune after Einstein died and became cynical to the point of ridicule in his 1959 book Men and Atoms, chapter 29 (pp. 230-235 of the 1961 Hodder and Stoughton, London, edition):
“He [Einstein] first believed he had achieved his goal in 1929, after thirteen years of concentrated effort, only to find it illusory on closer examination. In 1950 he thought he almost had it within his grasp, having overcome ‘all the obstacles but one.’ In March 1953 he felt convinced that he had at last overcome that lone obstacle and thus had attained the crowning achievement of his life’s work. Yet even then he ruefully admitted that he had ‘not yet found a practical way to confront the theory with experimental evidence,’ the crucial test for any theory.
“Even more serious, his field theory failed to find room in the universe for the atom and its component particles … which appeared to be ‘singularities in the field’, like flies in the cosmic ointment. Despite these drawbacks, he never wavered in his confidence that the concept of the pure field, free from ‘singularities’ (i.e., the particle concept …) was the only true approach to a well-ordered universe, and that eventually ‘the field’ would find room in it for the enfant terrible of the cosmos – the atom and the vast forces within it. …
“Einstein believed that the physical universe was one continuous field, governed by one logical set of laws, in which every individual event is inexorably determined by immutable laws of causality. On the other hand, the vast majority of modern-day physicists champion the quantum theory, which leads to a discontinuous universe, made up of discrete particles and quanta of energy, in which probability takes the place of causality and determinism is supplanted by chance. …
“Einstein alone stood in majestic solitude against all these concepts of the quantum theory. Granting that it had had brilliant successes in explaining many of the mysteries of the atom and the phenomena of radiation, which no other theory had succeeded in explaining, he nevertheless insisted that a theory of discontinuity and uncertainty, of duality of particle and wave, and of a universe not governed by cause and effect was an ‘incomplete theory’; that eventually laws would be found showing a continuous, unitary universe, governed by immutable laws in which every individual event was predictable.
“‘I cannot believe’, he said, ‘that God plays dice with the cosmos!’ Rather, as he said on another occasion, ‘God is subtle but He is not malicious.’
“Paradoxically, as the years passed, the figure of Einstein the man became more and more remote, while that of Einstein the legend came ever nearer to the masses of mankind. They grew to know him not as a universe maker whose theories they could not hope to understand, but as a world citizen, one of the outstanding spiritual leaders of his generation, a symbol of the human spirit and its highest aspirations. … He radiated humor, warmth and kindliness. He loved jokes and laughed easily. Princeton residents would see him walk in their midst, a familiar figure yet a stranger; a close neighbor yet at the same time a visitor from another world. … Princetonians, old and young, soon got used to the long-haired figure in pullover sweater and unpressed slacks wandering in their midst, a knitted stocking cap covering his head in winter. …
“He was a severe critic of modern methods of education. ‘It is nothing short of a miracle,’ he said, ‘that modern methods of instruction have not yet entirely strangled the holy curiosity of inquiry. For this delicate little plant, aside from stimulation, stands mainly in need of freedom.’ …
“‘In my life,’ he said once, explaining his great love for music, ‘the artistic visionary plays no mean role. After all, the work of a research scientist germinates upon the seed of imagination, of vision. Just as the artist arrives at his conceptions partly by intuition, so a scientist must also have a certain amount of intuition.’
“While he did not believe in a formal, dogmatic religion, Einstein, like all true mystics, was of a deeply religious nature. …
“‘I assert [he wrote for The New York Times on November 9, 1930] that the cosmic religious experience is the strongest and the noblest driving force behind scientific research. No one who does not appreciate the terrific exertions and, above all, the devotion without which pioneer creation in scientific thought cannot come into being, can judge the strength of the feeling out of which alone such work, turned away as it is from immediate, practical life, can grow.”
Of course, since 1984 the hype for “string theory”, in which quanta are supposedly manufactured from classical field theory using compactifications of otherwise invisible extra-dimensions by means of Rube-Goldberg stabilized Calabi-Yau manifolds, has returned Einstein’s spin machine to newspaper physics, complete with its lack of falsifiable predictions.
Update (4 June 2010): Carl Brannen has had his latest paper, “Spin Path Integrals and Generations”, accepted for publication in Foundations of Physics. He has a version of it in PDF format on his site (linked here) and I’ve read it. My first impression, for about the first 10 (out of 22) pages, was extremely good. Basically, the first half of the paper is a very competent and, in my view, excellent discussion of the physics of the path integral, which helps basic understanding of the physical basis of what the mathematics is doing. The remainder of the paper is, to my mind, quite different from the first half, and is not so rigorous physically, although Carl does use some mathematics impressively in the second half (in my opinion covering up the lack of physical understanding which accompanies the mathematical correlations and guesswork about particle masses). However, it is today generally accepted (in my view wrongly) that people should approach physics mathematically and not worry about mechanisms. This view goes back to Mach and Einstein when they were getting rid of the mechanistic view of an aether communicating forces through the vacuum of space. Einstein went against Mach in the 1920s of course, when German mainstream physics under Heisenberg used the uncertainty principle to get rid of any causality, in principle, in quantum mechanics. As explained above, Feynman’s second quantization brought some causality back, because Feynman explains the chaos of the electron’s motion in the atom by means of a quantum Coulomb field acting between electron and nucleus, whereas Heisenberg’s first-quantization quantum mechanics wrongly uses a classical Coulomb field and therefore requires direct obedience to the uncertainty principle to “explain” electron chaos.
Feynman doesn’t need the electron to intrinsically have an indeterminate position-momentum product according to the uncertainty principle (which he dumps in this context), because Feynman’s second-quantization path integral introduces a mechanism for indeterminacy: the electron jumps around because the Coulomb field quanta are exchanged at random and cause interferences. You can use an uncertainty principle to model the quantum field, which in turn makes the electron’s position indeterminate, because the electron is being moved by the fluctuating quantum Coulomb field, unlike the steady classical Coulomb field in Heisenberg’s first-quantization mythology. When will the hoax of first quantization, and the full physical mechanism for indeterminacy in second quantization (which, unlike first quantization, is the relativistic form of quantum mechanics; by which I mean empirically relativistic, i.e. in agreement with the Lorentz equations etc., rather than with Einstein’s philosophy that no absolute motion is possible in the universe), be widely promoted in the media and in undergraduate physics courses? When will reality be explained clearly by physicists, instead of being obfuscated in order to preserve the philosophy of Mach and Bohr and Heisenberg which Feynman denounced as ignorant?
From this mainstream point of view, Carl is doing the right job, and I hope that his and Marni’s investigations will be helpful, at least regarding the CKM matrix and neutrino masses. However, I think this mainstream “ignore mechanisms and just model with abstract mathematics” approach is wrong for several reasons. Firstly, mathematics which is unconstrained by physical mechanisms can go anywhere, and there is no guarantee that you won’t end up like Ptolemy, with a theory of accurate but physically hopeless epicycles instead of a mathematical model tied to physical facts. Secondly, it gets defended by mathematical obfuscation: you can hide behind mathematics and build a fortress out of it which is hard for others to understand well enough to criticise objectively. Thirdly, it’s too easy to add further mathematical epicycles to explain any anomaly or disagreement with the data that comes along. I’m convinced, for experimentally defensible reasons spelled out in detail two posts ago, that particle masses have a simple basis in quantum field theory.
Vacuum polarization (which shields the electric field of the real electron core in the process) gives energy to off-shell virtual fermions in the vacuum, which makes some of them approach an on-shell energy state where they exist long enough to be affected by the exclusion principle, which thus begins to structure those semi-virtual fermions in the vacuum. When they do annihilate, some of the neutral weak gauge bosons produced (from that structured semi-virtual fermion vacuum) act as neutral currents which “mire” down the real core charge like a Higgs field. There is no Higgs boson to give mass: instead, weak bosons have intrinsic mass as the charge of a U(1) gauge theory of quantum gravity, and the neutral weak bosons behave like a Higgs field, a theory which predicts the general distribution of lepton, quark and hadron masses as shown in previous posts (e.g., two posts back for lepton and quark masses, and the about page of this blog for hadron masses).
However, I still have to look further into the details and try to follow through some of the ideas in the previous post, such as looking at beta decay afresh in a more consistent physical way than is presently used. Lunsford’s 6-dimensional spacetime (3 spatial dimensions and 3 timelike dimensions) is another example of something I have to get to grips with urgently. Mathematically it’s as abstract and abstruse as you can get, but it seems physically comprehensible to me. We live in an expanding geometric space; the universe expands in 3 spatial dimensions. If the expansion rates in the 3 different spatial dimensions, i.e. the Hubble parameters (v = HR, where H is the Hubble parameter), were different in each spatial dimension, then we would have 3 different ages of the universe, each being t = 1/H. The expansion rate, however, is isotropic (the same in all directions we look), so effectively the age of the universe is one value not three, and the three effective time dimensions are thus identical and mathematically represented by one time dimension. Just because geometric space is expanding does not imply that everything is expanding. The expansion of geometric space is not accompanied by the expansion of masses, which are contracted in general relativity by gravitational fields. This fact, which is not grasped by most popularizers of the big bang, who believe falsely that masses expand like the spaces between them, is well explained by Feynman in his famous “Lectures on Physics”; as an example, general relativity predicts that the Earth’s radius is contracted by the gravitational field by a distance MG/(3c²), which is about 1.5 mm; we have shown in the earlier post linked here how this general relativistic contraction of gravitational charges is physically related to the Lorentz contraction and is due to spin-1 quantum gravity fields.
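Feynman’s radius figure is a one-line evaluation: general relativity predicts that the measured radius of a roughly uniform mass differs from the Euclidean value by about GM/(3c²), which for the Earth comes out near 1.5 mm:

```python
# Feynman's "excess radius" estimate, GM/(3c^2), evaluated for the Earth.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # Earth mass, kg
c = 2.998e8    # speed of light, m/s

excess = G * M / (3 * c**2)
print(excess * 1000, "mm")  # about 1.5 mm for the Earth
```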
So the expansion of spatial geometric distances with time after the big bang is not the same thing as the expansion of the distances that describe masses such as particles. We need 3 dimensions to describe the size of contractable masses and 3 expanding dimensions to describe the expansion in time of the spatial geometric volume of the universe.
Thus, we have 3 contractable dimensions describing matter, and 3 expanding dimensions describing time: we cannot unambiguously use distance scales for measuring expanding spaces in our universe, because by the time light (or any force field) from a star eventually reaches us, the star has had time to move still further away! Instead, we must really measure the location of a star in terms of time: the time in our past when the light was emitted. This is what we do when we measure cosmological distances in units like “light-years”, which are time units. In total we may need, therefore, 6 dimensions to describe everything consistently, although for practical purposes the isotropy of the universe’s expansion around us means that the 3 time dimensions can be lumped together as indistinguishable, and thus treated as a single effective dimension for many purposes. Lunsford’s 6-d spacetime has a neat symmetry (3 time dimensions, 3 spatial distances) and predicts that there is no cosmological constant. In 1996, I showed that spin-1 quantum gravitons do the job now attributed to “dark energy” in accelerating the universe (the cosmological constant) as well as producing a LeSage quantum gravitational effect. The two consequences of spin-1 gravitons are the same thing: distant masses are pushed apart, while nearby small masses exchange gravitons less forcefully with one another than with the masses around them, so they get pushed together, like the Casimir force effect. There is no separate “dark energy” or CC; it’s all due to spin-1 gravitons (see two posts back for quantitative proof that you get this to work).
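The head of this post quotes a predicted cosmological acceleration of about 7 × 10⁻¹⁰ m s⁻². As a dimensional sanity check (my own sketch, assuming a Hubble parameter of 70 km/s/Mpc, not a derivation of the 1996 argument), the simplest combination of the Hubble parameter and the speed of light with the dimensions of acceleration, a ≈ Hc, comes out at the same order:

```python
# Dimensional check on the quoted ~7e-10 m/s^2 cosmological acceleration:
# evaluate a ~ H*c for an assumed Hubble parameter of 70 km/s/Mpc.
c = 2.998e8     # speed of light, m/s
Mpc = 3.086e22  # metres per megaparsec
H = 70e3 / Mpc  # Hubble parameter, 1/s

a = H * c
print(a)  # ~6.8e-10 m/s^2, the order of the quoted prediction
```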