Above: the latest illustration (updated 27 September 2009) which has replaced the older illustration included in the post below. Improvements have been made.
In 1996, the cosmological acceleration a = -Hc = -6.9×10^-10 m/s^2 (the minus sign here indicating outward acceleration, against inward gravitational attraction) was predicted; it was discovered two years later from supernova redshift observations. The observed magnitude of the acceleration is stated by Lee Smolin in his 2006 book The Trouble with Physics, page 209, to be a = -c^2/R = -c^2/(cT) = -c/T = -Hc = -6.9×10^-10 m/s^2. This post reviews the theoretical discovery and some of its implications.
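As a quick sanity check of that number (a sketch: the ~70 km/s/Mpc Hubble parameter below is a typical round figure, not necessarily the exact value used in the original 1996 calculation):

```python
# Quick check of a = Hc (the ~70 km/s/Mpc Hubble parameter is an assumed round figure)
H0_km_per_s_per_Mpc = 70.0
metres_per_Mpc = 3.086e22
c = 2.998e8                      # speed of light, m/s

H = H0_km_per_s_per_Mpc * 1e3 / metres_per_Mpc   # ~2.3e-18 s^-1
a = H * c                                        # cosmological acceleration

print(f"H  = {H:.2e} s^-1")
print(f"Hc = {a:.2e} m/s^2")     # ~6.8e-10 m/s^2, matching the ~6.9e-10 quoted
```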
Fig. 1: an improved illustration from the earlier post. The probability of a confirmed prediction – from a theory based entirely upon facts – being the way forward is not trivial! There are two measuring scales for time: (1) beginning at the big bang (13,700 million years ago) and going forward, and (2) beginning at the present age of the Earth and looking back in time with increasing distance. It turns out that there is a simple relationship between them. If we take Hubble’s equation v = HR = HcT, where T is time past for the spacetime distance R = cT, then differentiating gives us the outward effective acceleration a = dv/dT = d(HcT)/dT = Hc. Simple. However, you might find it confusing to deal with time past getting bigger with increasing distances, so you might prefer instead to use the increasing time since the big bang, t = H^-1 – T. As proved in the diagram, the cosmological acceleration in that time coordinate system is a = dv/dt = d(HR)/dt = d(cTH)/dt = d[c(H^-1 – t)H]/dt = d[c(1 – Ht)]/dt = –Hc. This is an identical result apart from the minus sign, which arises because t increases in the opposite sense to T. These calculations give us the cosmological acceleration at the furthest possible distance, R = cT. For smaller distances, r, the cosmological acceleration is simply a = (r/R)Hc.
This can be seen by observing that if we define dr/dt = v, then dt = dr/v, so a = dv/dt = d(Hr)/(dr/v) = vH*dr/dr = vH = rH^2 = (r/R)Hc.
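The two derivatives above, and the small-distance limit a = (r/R)Hc, can be checked symbolically; here is a minimal sketch using sympy (the symbol names are mine):

```python
import sympy as sp

H, c, t, T = sp.symbols('H c t T', positive=True)

# (1) Time-past coordinate: v = H*R = H*c*T, so a = dv/dT = Hc
print(sp.diff(H * c * T, T))                 # H*c

# (2) Time since the big bang: t = 1/H - T, so v = H*c*(1/H - t) and a = dv/dt = -Hc
print(sp.diff(H * c * (1/H - t), t))         # -H*c

# (3) Smaller distances: v = H*r(t), so a = dv/dt = H*dr/dt = H*v = H^2*r = (r/R)*Hc
r = sp.Function('r')(t)
a_small = sp.diff(H * r, t).subs(sp.Derivative(r, t), H * r)
print(sp.simplify(a_small))                  # H**2*r(t), i.e. (r/R)*Hc since R = c/H
```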
Why isn’t this kind of simple proof, showing that the correct amount of acceleration is inherent in the empirical observation of recession velocity increasing with distance, widely published and accepted? As with evolution in 1859, it simply doesn’t fit the current dogma of mainstream physics, while outside mainstream physics most people with an interest in physics don’t know the difference between the facts (like the astronomical measurements) and mainstream stringy speculations, so they believe that all cosmology is speculative. In other words, the mainstream speculation mongers have discredited the scientific method in the eyes of those who value facts above dogma. The same is allegedly true of mainstream mathematics, according to the experience of Grigori Perelman when his work on the Poincaré conjecture was downplayed by mainstream conformist Shing-Tung Yau to give more credit to conformists Cao and Zhu: ‘Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest.’
They have to tolerate Shing-Tung Yau because he epitomises the mainstream, has won many awards, and is powerful. Nobody wants to argue someone else’s case with someone like that. It is similar to the dishonest claim ‘string theory has the remarkable property of predicting gravity’ by Edward Witten in the April 1996 issue of Physics Today. Anonymous peer-review can serve as a severe punishment for non-conformist work, by simply blocking publications.
– arXiv moderation page policy against new topics within science that upset the dogma of status quo (revolutionary discoveries by their very definition ‘do not fit well into our current classification scheme’!)
There are also other reasons for prejudice. Some people who rejected the work of Copernicus, Darwin, Boltzmann, and others did so not just because it was contrary to the dogma they had been taught, but because they thought that the facts were ugly and didn’t want to take them seriously, or because others around them ignored or laughed at the facts, and they wanted to fit in with their peer group. They ‘sincerely’ believed that all work which didn’t appeal to their prejudices was scientifically wrong, and that if they only had the time to read the new paper they would be able to find a flaw in it; a convenient pseudoscientific belief system. The next logical step is to claim that the work is wrong without having found any error, or better yet to claim to have found an error which doesn’t actually exist. The alleged ‘error’ is accepted by other conformists without question or checking, as a good excuse to ignore the facts, although it eventually turns out to be just a disagreement between newly discovered facts and an old, incorrect but well-established speculative prejudice in some textbook which is widely worshipped as accepted dogma, despite never having been properly checked against experiment. For example, Einstein modified Newton’s law to make it compatible with conservation of energy, so those who objected to progress claimed that Einstein was shown wrong by the difference from Newton’s law, and ignored or disputed the experimental facts to the contrary. Other prejudices are more obvious. If a new theory is presented using simple mathematics, it can be dismissed as simplistic; using complex mathematics, it can be dismissed as too complex! However something is presented, it’s easy to sneer at it and find an excuse to ignore it. In his Introduction to the 1992 Penguin edition of Feynman’s book The Character of Physical Law, Professor Paul Davies states on page 7:
“Each revolution comes with a cluster of so-called geniuses, men and women whose skill and imagination force the scientific community to break out of old habits of thought and embrace new and unfamiliar concepts.”
This is contrary to the usual message of science advancing by quiet discoveries, and the usual message that science is about discovering the facts, not marketing gimmicks, political conformism, and sociology.
Instead of taking non-quantum general relativity and fitting it to observations using arbitrarily selected amounts of unobserved ‘dark matter’ and unobserved ‘dark energy’ – like Ptolemy fitting his earth-centred universe to observations by adding more epicycles, then hailing the mathematical beauty of a world of epicycles – we should look at the data and try to find the simplest model which not only fits the data but makes other checkable predictions concerning gravity. This is precisely what we did when we predicted the cosmological acceleration in 1996. In May 1996, an 8-page paper was written deriving the cosmological acceleration a = Hc from the Hubble recession law v = HR, and applying it to the universe. When the more appropriate journals like Classical and Quantum Gravity didn’t want to know, because of biased opinions about quantum gravity being a stringy phenomenon, we sent it to Martin Eccles, editor of Electronics World. He made it available via page 896 of the letters pages in the October 1996 issue. It was later published in the February 1997 issue of Science World, ISSN 1367-6172, and the prediction of the cosmological acceleration, a = Hc = 6.9×10^-10 m/s^2, was criticised in private correspondence by Electronics World author Mike Renardson: the acceleration seemed far, far too small to ever detect. However, it was detected in 1998 by Saul Perlmutter et al., using computer software to automatically detect distant supernovae directly from live CCD telescope data, who published in Nature and did not cite the prediction because the appropriate journals had refused to publish it properly. The prediction applied the acceleration to the mass of the universe using Newton’s laws of motion with relativistic corrections: F = ma, where m is the mass of the accelerating universe and a is the cosmological acceleration. This gives a large outward force which predicts gravity via the 3rd law of motion (an implosion carrying an equal, graviton-mediated, reaction force upon the observer, which predicts gravity with good accuracy for the input data; a fact totally ignored and indeed suppressed by the general relativity and stringy spin-2 obsessed mainstream, which believes in a false attraction-spin connection for off-shell graviton radiation, derived using incorrect implicit assumptions by Fierz and Pauli back in 1939).
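As a rough illustration of the outward force F = ma referred to here (a sketch: the ~3×10^52 kg figure for the mass of the observable universe is my illustrative round number, not a value quoted in the post):

```python
# Rough outward force F = ma for the receding matter of the universe
# (the mass figure below is an illustrative assumption, not taken from the post)
m_universe = 3e52     # kg, approximate mass of the observable universe (assumed)
a = 6.9e-10           # m/s^2, the predicted cosmological acceleration Hc

F = m_universe * a
print(f"F = ma ~ {F:.1e} N")    # ~2e43 N, the 'cosmological force' scale used below
```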
Fig. 2: why the universe accelerates. We see distant masses in every direction we look, so we experience an equilibrium of spin-1 graviton exchange on all sides (apart from the effect of nearby masses, called gravity, shown in Fig. 3 and Fig. 4 below), but in our frame of reference a mass so distant that it is near the radius R = cT, where T is the age of the universe, can’t have an equilibrium of exchange, because it’s so far out that there is little or no mass beyond it to exchange gravitons with. So it receives a radial asymmetry (from our point of view, i.e. in our frame of reference) and appears to accelerate away from us.
Fig. 3: Feynman diagrams for general relativity, spin-2 (mainstream, failed, non-falsifiable, over-hyped, stringy) quantum gravity, and spin-1 (non-standard, successful, predictive, totally-censored out) quantum gravity.
Hawking’s formula for the radiating power of the black hole electron tells us it radiates with power P = 3 × 10^92 watts; but for technical reasons this is field quanta emission, not real radiation. Hawking’s mechanism for black hole radiation emission omits Schwinger’s threshold field strength for pair production in the vacuum, so only charged black holes produce field strengths above Schwinger’s threshold at the event horizon radius. The charge necessitated for black holes to emit radiation also changes the nature of the emitted radiation, because it means that only positive virtual charges will fall into the electron core, and only negative virtual charges can be emitted.
The force of this radiation is the rate of change of momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is the power. Hence, F = 2P/c = 2(3 × 10^92)/c = 2 × 10^84 newtons. This is 10^41 times the F = 1.8 × 10^43 newtons cosmological force, so this Hawking radiation force predicts the electromagnetic force strength, and is more empirical evidence that the cross-section for fundamental particles is the black hole event horizon size, not the Planck size.
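Those two figures can be checked on the back of an envelope with Hawking’s standard power formula P = h-bar*c^6/(15360*pi*G^2*M^2) evaluated for the electron mass (a sketch only; as noted above, the post interprets this emission as field quanta rather than real radiation):

```python
import math

# Hawking power for a black hole of electron mass, then the recoil force F = 2P/c
hbar = 1.055e-34      # J s
c    = 2.998e8        # m/s
G    = 6.674e-11      # m^3 kg^-1 s^-2
m_e  = 9.109e-31      # kg

P = hbar * c**6 / (15360 * math.pi * G**2 * m_e**2)
F = 2 * P / c

print(f"P ~ {P:.1e} W")               # of order 10^92 W
print(f"F ~ {F:.1e} N")               # of order 10^84 N
print(f"F / 1.8e43 ~ {F/1.8e43:.0e}") # ~10^41, the ratio quoted above
```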
Fig. 5: In a capacitor, energy enters at light velocity accompanied by electrons (drifting at typically about 1 mm/s). The light velocity Poynting-Heaviside vector energy (consisting of light velocity field quanta Maxwell knew nothing of) bounces off the far end and adds to the energy still flowing in, causing a discrete rise in the stored potential difference (so-called voltage). There is no mechanism for the gauge boson energy to ever slow down below the velocity of light. It doesn’t stop, but keeps going. Studying trapped light velocity energy is like studying a static electric charge, because the magnetic fields cancel out if there is an equilibrium (with equal energy going north as going south, and going east as going west, etc.). ‘A so-called steady charged capacitor is not steady at all. Necessarily, a TEM wave containing (hidden) magnetic field as well as electric field is vacillating from end to end.’ – Catt. This means we can study gauge bosons by studying trapped light velocity electromagnetic current. Actually, nobody – from J. J. Thomson onwards – has ever probed the Planck scale to see an actual static electric charge: they have only seen light velocity electromagnetic fields which mediate charge and which they falsely and implicitly assume are some kind of crackpot proof of a Planck scale static charge. Catt’s work suggests that there is no such thing: all electrons are just trapped electromagnetic energy.
For further evidence plus details of the effect on the Standard Model and general relativity, see the earlier posts linked here and here.
Fig. 6: Unification in supersymmetric stringy M-theory compared to the standard model. Notice the key difference that string theory assumes metaphysically (without any mechanism or evidence) that all couplings for fundamental interactions are equal at the numerological Planck scale (~10^19 GeV energy, or ~10^-35 metre distance of closest approach between colliding particles), which implies a bare core charge for the electron far lower than 137 times the low-energy value. The actual bare core charge of the electron can be shown to be 137 times the low-energy value, by comparing the value deduced from Heisenberg’s uncertainty principle (ignoring vacuum polarization shielding) to the shielded value measured by Coulomb. Heisenberg’s minimal energy-time uncertainty relation is h-bar = E*t, so a field quantum travelling a distance x in time t = x/c carries energy E = h-bar*c/x, and the corresponding force follows by differentiation:
F = dE/dx = d(h-bar/t)/dx = d[h-bar/(x/c)]/dx = d(h-bar*c/x)/dx = -h-bar*c/x^2.
This inverse-square law force is a factor of ~137, or 1/alpha, times the Coulomb force between two electrons (i.e. it doesn’t incorporate the polarized vacuum shielding factor of alpha). Hence, the bare core charge of an electron is a factor of 1/alpha or ~137 times stronger than the value measured at low energy, i.e. below Schwinger’s 1.3×10^18 V/m electromagnetic field strength for pair production in the vacuum, which corresponds to the distance of closest approach in a ~1 MeV collision, and which is thus the IR cutoff energy for the logarithmic running coupling equations in QFT. This bare core charge disagrees with Planck scale unification, but agrees with black hole scale unification, which requires a much higher collision energy than the Planck scale, corresponding to approach distances of ~10^-57 metres for the black hole event horizon radius of an electron, which is far smaller and more fundamental than the ~10^-35 metre Planck length.
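A numerical sketch of that ~137 ratio, comparing F = h-bar*c/x^2 with the Coulomb force between two electrons at the same separation (standard constants; nothing else is assumed):

```python
import math

# Ratio of F = hbar*c/x^2 to the Coulomb force between two electrons at the same x
hbar = 1.055e-34      # J s
c    = 2.998e8        # m/s
e    = 1.602e-19      # C
eps0 = 8.854e-12      # F/m

x = 1e-15             # metres; the ratio is independent of x
F_bare    = hbar * c / x**2
F_coulomb = e**2 / (4 * math.pi * eps0 * x**2)

print(f"ratio = {F_bare / F_coulomb:.1f}")   # ~137, i.e. 1/alpha
```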
Notice also that Fig. 6 includes the variation of the strong, weak and electromagnetic charge strengths (coupling parameters) below the 100 GeV scale, which is excluded from all popular diagrams of unification. What you notice by including the full graph is that the strong and weak interactions only arise above the electromagnetic IR cutoff, where the electromagnetic coupling begins to rise. The bare core electromagnetic charge at the black hole scale radiates field quanta which get attenuated in the vacuum, the energy giving rise to every kind of particle you can imagine, including strong and weak field quanta. Hence, the attenuation of the electromagnetic field by the polarized virtual charges which are created by pair production in the vacuum in strong electric fields absorbs electromagnetic energy and deposits that energy in the vacuum out to the IR cutoff radius, some femtometres from the core. This deposited energy gives rise to weak and strong field quanta, and hence powers those fields. By the conservation of energy, the variation of the electromagnetic field strength with distance inversely corresponds to that of the strong and weak fields. I.e., near the bare charge, where little electromagnetic field energy has been absorbed by the vacuum, both the strong and weak fields are weak, because little energy has been deposited in the vacuum to create the field quanta corresponding to those fields. At greater distances, more of the electromagnetic field energy has been absorbed in the vacuum, so the weak and strong fields can be mediated by more virtual particles and are stronger. This energy conservation mechanism for unification does not (unlike string theory) postulate equality of all charges at the smallest possible distance scale (the UV cutoff). Instead, it shows that the electromagnetic charge reaches a maximum value, and that at arbitrarily small distances from the bare core electron charge, negligible energy from the electromagnetic field has been deposited in the vacuum, so there is negligible energy for weak and strong field quanta: therefore, the weak and strong charge strengths tend towards zero as you approach the black hole scale. Unification facts, in short, contradict the mainstream dogma of equal charges at the UV cutoff. These facts further substantiate the use of the black hole event horizon area for quantum gravity predictions, discussed earlier in this post using completely different physical evidence.
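For reference, the logarithmic rise of the electromagnetic coupling above the IR cutoff (around the electron rest-mass energy scale) can be sketched with the standard one-loop QED running formula; the version below keeps only the electron loop, so it understates the rise at very high energies:

```python
import math

# One-loop QED running of the electromagnetic coupling above the IR cutoff
# (electron loop only, so the rise at high energy is understated)
alpha0  = 1 / 137.036   # measured low-energy coupling (below the IR cutoff)
m_e_MeV = 0.511         # electron rest-mass energy, taken here as the reference scale

def alpha(Q_MeV):
    """Effective coupling at collision energy Q in MeV."""
    if Q_MeV <= m_e_MeV:
        return alpha0
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q_MeV**2 / m_e_MeV**2))

for Q in (0.511, 100.0, 91187.0):   # IR cutoff, 100 MeV, ~Z mass
    print(f"Q = {Q:9.1f} MeV  ->  1/alpha = {1/alpha(Q):.1f}")
```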
Relevant extract from a post on the other blog:
Above: Dr Zaius in Planet of the Apes simultaneously held religious and scientific positions, leading him to suppress scientific findings which contradicted the religious dogma. You know, like my suppression by Britain’s Open University physics department chairman, Professor Russell Stannard, author of books like Science and the Renewal of Belief: Actually, this makes some sense when you recognise that Stannard takes “physics” to include the religious belief in uncheckable pseudoscience: a landscape of 10^500 different universes to account for the vast number of possible particle physics theories which can be generated by the 100 or more moduli for the shape of the unobservably small compactification of 6 dimensions assumed to exist in the speculative Calabi-Yau manifold of string theory, as well as other rubbish like Aspect’s alleged “experimental evidence” on entanglement via correlation of particle spins:
arguing that science should be alloyed with dogma again as a “unification” of physics and religion, as it was in the time of Galileo.
“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf
“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.
First quantization for QM (e.g. Schroedinger) quantizes the product of position and momentum of an electron, rather than the Coulomb field, which is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy’s epicycles, where all planets were assumed to orbit Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals), because indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman’s path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization in Heisenberg’s and Schroedinger’s approaches is just a semi-classical, non-relativistic mathematical approximation, useful for obtaining simple mathematical solutions for bound states like atoms:
‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’
– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.
‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’
– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.
When, as a physics student, I sent in a mechanism for gravity that correctly predicted the cosmological acceleration two years ahead of its discovery, Russell didn’t even personally reply but just passed my paper to Dr Bob Lambourne, who in 1996 wrote to me that my prediction of quantum gravity and cosmological acceleration was not important because it is not within the metaphysical, non-falsifiable domain of Professor Edward Witten’s stringy speculations on 11-dimensional ‘M-theory’. In 1986, Professor Russell was awarded the Templeton Project Trust Award for ‘significant contributions to the field of spiritual values; in particular for contributions to greater understanding of science and religion’. So who says the Planet of the Apes story is completely fictional, aside from a little hairiness?
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’
– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56 (footnote). His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:
‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger’s wave equation and Heisenberg’s matrix mechanics being the first two attempts, which both generate nonsense ‘interpretations’]. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.
‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’
– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.
‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’
– Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.
‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’
– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.
‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’
– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.
Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.
This is fact, not my opinion or speculation: Professor David Bohm in 1952 proved that “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his messy treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that the quantum field interactions produce the “Brownian motion” of the electron as described by Feynman’s path integrals for simple random field quanta interactions with the electron.
Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman’s path integrals in his 1933 paper “The Lagrangian in Quantum Mechanics” published in Physikalische Zeitschrift der Sowjetunion where he states:
“Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue …
“Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. … The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. … the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form …”
Update, 6 September 2009:
Mathematical physicist and noted string theory critic Peter Woit of Columbia has a new post which states: ‘The latest Forbes magazine has an article entitled String Theory Skeptic, which gives me a lot more credit for the problems of string theory than I deserve.’
You can see why he is keen to take a back seat, namely that Woit’s unfortunate hero, stringy M-theory creator Edward Witten of the Institute for Advanced Studies in Princeton is quoted being elitist:
“Princeton’s Witten declines to discuss Woit, saying in an e-mail that he prefers to debate these issues only with “critics who are distinguished scientists rather than with people who have become known by writing books.”
“That sounds like elitism. Physicists, though, defend themselves by saying that in the Internet age, when anyone can put out an opinion about anything, they have to draw limits around who they can get into arguments with. There are only 24 hours in the day. [Yeah, they spend all their time hyping lies.]
“Which raises the question: Why should anyone take a nonphysicist seriously on such a fundamental physics issue? [Duh, it’s the content of what is being said, not their groupthink authority status, which counts in science; which is the whole difference between religion and science.]
“Physics itself might hold the answer to that question. John Baez, a UC, Riverside physicist, famously created the Crackpot Index, a tongue-in-cheek but nonetheless useful guide to evaluating scientific claims by nonscientists. For example, it awards one 40 points “for claiming that the scientific establishment is engaged in a conspiracy to prevent your work from gaining its well-deserved fame.” [Actually, string theory is a public conspiracy to hype-out all alternative ideas; if string theory were a quiet failure, then nobody would need to complain about it! Like Cold Fusion in 1989, the problem isn’t that an idea is a failure, the problem is that it uses authority to gain media attention and lie to millions of people with unproved hype.]
“Using Baez’s index, it’s clear Woit is no crackpot. He doesn’t play the role of the persecuted truth-teller. For example, Woit says that Witten is “a genius, who works very hard and who just doesn’t want to spend time arguing.” [That was precisely what was said in the media of a certain German Chancellor from election in 1933 until after Munich in 1938, when he just dictated and didn’t engage in arguments with critics who merely wrote books.]
“Woit also acknowledges he might be wrong. It’s hard to think of an example from the history of science when so many of the field’s best people took to a new idea that ended up being utterly mistaken, a fact that Woit himself is the first to admit. [Duh, then what about all the history of failure in fundamental particle physics such as unsplittable atoms, Ptolemy’s epicycles, vortex atoms, aether, etc.]
“A lot of really smart guys are doing it, and sometimes I wonder, ‘Who am I to be challenging them?'” he says. “The strongest argument in favor of string theory is that Ed Witten thinks it’s right.”
“It’s common in physics for people to have incredibly ambitious ideas that don’t pan out but lead to rich mathematical ideas that end up being very useful.”
Senior editor Lee Gomes covers technology from our Silicon Valley bureau. Visit him at http://www.forbes.com/gomes/.
This indicates that Woit understands and respects Witten’s 1979 work on solid checkable physics, which holds him back from a general attack on Witten’s later “work” on string theory speculations. (You know, the kind of “logic” which says that Hitler ended unemployment – by, ahem, conscripting a massive army – so he can’t have been 100% evil. Or the airplane you were due to fly on crashed with no survivors, so really you should be grateful to the thugs who stopped you catching the flight, thus saving your life.)
As an alternative to stringy ideas, Woit suggested, for example, that: “spontaneous gauge symmetry breaking is somehow related to the other mysterious aspect of electroweak gauge symmetry: its chiral nature. … The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.”
In the Standard Model, the SU(2) isospin charge weak force only operates on left-handed particles because all neutrinos – which are needed for weak interactions – are left-handed. Our explanation of SU(2) differs from the Standard Model in that SU(2) with massive gauge bosons is the weak interaction, and SU(2) with massless gauge bosons is the electromagnetic (charged field quanta) and gravitational (neutral field quanta) interaction: so maybe, as Woit suggests, the left-handedness of the SU(2) weak interaction arises from the way that mass is acquired by the massive weak SU(2) field quanta. No checkable explanation of this left-handedness is given in the Standard Model. However, the previous post gives some speculations from Penrose and a potential application to work by Koide and Brannen. Further work is being done. Feynman, in the final chapter of his 1985 book QED, makes it clear that the “electroweak unification” theory is not a perfect unification: it’s held together by the unobserved Higgs mechanism and the unexplained ad hoc Weinberg mixing angle, which act like leaky duct tape.
The lagrangian of the Standard Model (SM) at low energies (i.e., broken symmetry) is well verified, but this doesn’t prove the SM electroweak group structure, or that the mass of the weak bosons and other particles at low energy is being provided by Higgs bosons according to the Higgs mechanism, whereby they lose mass and unify at high energy. Even the successes of electroweak theory don’t prove that the unification is correct: the arbitrary value of the Weinberg mixing angle doesn’t prove that electromagnetism and the weak force are unified in the way specified by the SM. It is just a mathematical model for unification which works well at the (broken symmetry) energies used in experiments so far.
E.g., if weak field bosons acquire mass at all energies, then the electroweak force symmetry is broken at all energies. You can still have your Weinberg electroweak mixing angle. Just because two related fields are mixed doesn’t prove that they’re unified by all having massless field quanta at high energies. Mass can be acquired in a simpler way, simply as the quantized charge for quantum gravity. Such a mass, as a quantum gravity charge, need not decay by either of the Higgs decay routes H->WW or H->ZZ. The quantized gravity charge (mass) would just give particles gravitational charge (gravitational mass). There’s no need for it to consist of decaying Higgs-type bosons.
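For reference, the Weinberg mixing angle discussed above is an input fixed by measurement rather than a derived quantity; e.g. in the on-shell scheme it is tied directly to the measured W and Z boson masses (a sketch with rounded mass values):

```python
# On-shell Weinberg mixing angle from the measured W and Z masses (rounded values)
M_W = 80.4    # GeV
M_Z = 91.19   # GeV

sin2_theta_W = 1 - (M_W / M_Z) ** 2
print(f"sin^2(theta_W) ~ {sin2_theta_W:.3f}")   # ~0.22, fixed by experiment rather than derived
```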
Why does the SM actually “model” electroweak symmetry – as if electroweak symmetry has been seen – when it hasn’t been seen? Sheer dogmatic prejudice, which is exactly what real scientists should guard against.
Returning to the Forbes article, it mentions:
“There is no direct evidence that the world really is made of strings; the idea was first proposed simply because it made a certain amount of mathematical sense. The theory became more popular when physicists realized that replacing dots with strings would solve an enormous math problem left over from 20th-century physics: unifying the force of gravity with the forces that explain the interaction of atomic particles.”
This final sentence ends on a falsehood, because “unification” pipe-dreams, at least in the way they are currently defined by mainstream dogma (i.e., all couplings becoming equal at the Planck scale), have not been shown to actually exist in nature. What you actually want to do in science is to come up with a predictive quantum gravity theory that can be checked against the real world, instead of building ivory towers. Unification with the other forces can be achieved not by the numerology of equal force couplings at the Planck scale, but by fitting gravity into the Standard Model as the neutral massless gauge boson of a revised SU(2), achieved by removing the unobserved Higgs fairy field and replacing it with the observed quantum gravity charge field.
However, it is true that point particles pose problems (like zero distances with infinite field strengths) that can be easily disposed of using some kind of extended object like a loop of string for a fundamental particle: “replacing dots with strings would solve an enormous math problem”. Closely related to this problem is a less widely known problem of the quantization of electromagnetic fields.
Suppose we take an electric field within a photon. QFT says that this field is composed of virtual photons. Those virtual photons in turn are composed of electromagnetic fields. What are those fields composed of? More virtual photons, within virtual photons, ad infinitum? Like a Russian doll, with an infinite number of shells? Maybe pure mathematicians would like the idea of an infinite amount of complexity, but real world physicists would suspect a problem and want to break the endless cycle of chickens-and-eggs coming before one another. (Due to evolution of complexity from simplicity, a proto-egg came before the chicken, because the egg is a single cell and is thus simpler than the chicken. An egg is closer in structure to the first organisms than a chicken is, simply because it is unicellular.)
Similarly, at least the virtual (off shell) photon is probably a primitive particle in its own right, with no other particulate fields within it. If string theory has any direct role in fundamental particle physics (i.e., aside from just potentially providing ways to do applied physics like make QCD calculations for quark-gluon plasmas with the conjectured AdS/CFT equivalence), then it should model the photon as vibrating string in a simple and checkable way (making solid, testable predictions!). This is not what string theorists do, and the whole problem is that they traditionally find any real world modelling heretical. Due to Bohr’s and Heisenberg’s metaphysical attacks on understanding nature, such as the complementarity and correspondence principles, they have slunk into a world of metaphysics in which the false dogma is that nature has transcended reason.
Update (7 September 2009):
Carlos Castro’s paper, “The Cosmological Constant and Pioneer Anomaly from Weyl Spacetimes and Mach’s Principle”, http://vixra.org/pdf/0908.0093v1.pdf states:
“It is shown how Weyl’s geometry and Mach’s Holographic principle furnishes both the magnitude and sign (towards the sun) of the Pioneer anomalous acceleration a_P ~ -c^2/R_Hubble firstly observed by Anderson et al. Weyl’s Geometry can account for both the origins and the value of the observed vacuum energy density (dark energy). The source of dark energy is just the dilaton-like Jordan-Brans-Dicke scalar field that is required to implement Weyl invariance of the most simple of all possible actions. A nonvanishing value of the vacuum energy density of the order of 10^-123 M_Planck^4 is found consistent with observations. Weyl’s geometry accounts also for the phantom scalar field in modern Cosmology in a very natural fashion.”
I discussed the mainstream problems with the cosmological constant on the about page of this blog:
Because of relativistic effects on the source of the gravitational field (i.e., accelerating bodies contract in the direction of motion and gain mass, which is gravitational charge, so a falling apple becomes heavier while it accelerates), the curvature of spacetime is affected in order for energy to be conserved when the gravitational source is changed by relativistic motion. This means that the Ricci tensor for curvature is not simply equal to the source of the gravitational field. Instead, another term (equal to half the product of the trace of the Ricci tensor and the metric tensor) must be subtracted from the Ricci curvature to ensure the conservation of energy. As a result, general relativity makes predictions which differ from Newtonian physics. General relativity is correct as far as it goes, which is a mathematical generalization of Newtonian gravity with a correction for energy conservation. It’s not, however, the end of the story. There is every reason to expect general relativity to hold good in the solar system, and to be a good approximation. But if gravity has a gauge theory (exchange radiation) mechanism in the expanding universe which surrounds a falling apple, there is a reason why general relativity is incomplete when applied to cosmology.
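In standard notation, the subtraction described above is just the trace term in the Einstein field equation:

```latex
% The -(1/2) R g_{\mu\nu} term is the subtraction described above: it makes the
% left-hand side divergence-free, which enforces conservation of energy-momentum.
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
\qquad \nabla^{\mu}G_{\mu\nu} = 0 .
```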
Sean’s paper ‘Why is the Universe Accelerating?’ asks why the energy of the vacuum is so much smaller than predicted by grand unification theories of supersymmetry, such as supergravity (a string theory). This theory states that the universe is filled with a quantum field of virtual fermions which have a ground state or zero-point energy of E = (1/2){h-bar}{angular frequency}. Each of these oscillating virtual charges radiates energy E = hf, so integrating over all frequencies gives you the total amount of vacuum energy. This is infinite if you integrate frequencies from zero to infinity, but this problem isn’t real because the highest frequencies are the shortest wavelengths, and we already know from the physical need to renormalize quantum field theory that the vacuum has a minimum size scale (the grain size of the vacuum), and you can’t have shorter wavelengths (or correspondingly higher frequencies) than that size. Renormalization introduces cutoffs on the running couplings for interaction strengths; such couplings would become infinite at zero distance, causing infinite field momenta, if they were not cut off by a vacuum grain size limit. The mainstream string and other supersymmetric unification ideas assume that the grain size is the Planck length, although there is no theory of this (dimensional analysis isn’t a physical theory) and certainly no experimental evidence for this particular grain size assumption; a physically more meaningful and also smaller grain size would be the black hole horizon radius for an electron, 2GM/c^2.
But to explain the mainstream error: the assumption of the Planck length as the grain size tells the mainstream how closely the grains (virtual fermions) are packed together in the spacetime fabric, allowing the vacuum energy to be calculated. Integrating the energy over frequencies corresponding to vacuum oscillator wavelengths longer than the Planck scale gives exactly the same answer for the vacuum energy as working out the energy density of the vacuum from the grain size spacing of virtual charges: the Planck mass (expressed as energy using E = mc^2) divided by the cube of the Planck length (the volume which each of the supposed virtual Planck-mass vacuum particles is supposed to occupy within the vacuum).
The answer is 10^112 ergs/cm^3 in Sean’s quaint American stone age units, or 10^111 J/m^3 in physically sensible S.I. units (1 erg is 10^-7 J, and there are 10^6 cm^3 in 1 m^3). The problem for Sean and other mainstream people is why the measured ‘dark energy’ from the observed cosmological acceleration implies a vacuum energy density of merely 10^-9 J/m^3. In other words, string theory and supersymmetric unification theories in general exaggerate the vacuum energy density by a factor of 10^111 J/m^3 / 10^-9 J/m^3 = 10^120.
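As a rough check of these orders of magnitude (a sketch: the observed dark energy density is taken as a round 6×10^-10 J/m^3, and this crude Planck-cutoff estimate comes out a couple of orders above the 10^111 J/m^3 figure quoted above, depending on exactly how the cutoff is defined):

```python
import math

# Planck-cutoff vacuum energy density versus the observed dark energy density
hbar = 1.055e-34     # J s
c    = 2.998e8       # m/s
G    = 6.674e-11     # m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)    # ~1.6e-35 m
E_planck = math.sqrt(hbar * c**5 / G)    # ~2e9 J (Planck mass as energy)

rho_theory   = E_planck / l_planck**3    # crude Planck-cutoff estimate, J/m^3
rho_observed = 6e-10                     # J/m^3, assumed round observed value

ratio = rho_theory / rho_observed
print(f"rho_theory    ~ {rho_theory:.1e} J/m^3")
print(f"density ratio ~ 10^{math.log10(ratio):.0f}")     # the famous 'roughly 120 orders'
print(f"energy ratio  ~ 10^{math.log10(ratio)/4:.0f}")   # ~10^30 when viewed as [energy]^4
```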
That’s an error! (Although, of course, to be a little cynical, such errors are common in string theory, which also predicts 10^500 different universes, exceeding the observed number.)
Now we get to the fun part. Sean points out in section 1.2.2 ‘Quantum zero-point energy’ at page 4 of his paper that:
‘This is the famous 120-orders-of-magnitude discrepancy that makes the cosmological constant problem such a glaring embarrassment. Of course, it is somewhat unfair to emphasize the factor of 10^120, which depends on the fact that energy density has units of [energy]^4.’
What Sean is saying here is that, since the Planck length is inversely proportional to the Planck energy, the mainstream-predicted vacuum energy density of {Planck energy}/{Planck length}^3 ~ {Planck energy}^4 exaggerates the error in the prediction of the underlying energy scale. So if we look at the error in terms of energy rather than energy density of the vacuum, the error is only a factor of 10^30, not 10^120.
What is pretty obvious here is that the more meaningful 10^30 error factor is relatively close to the factor of 10^40 which is the ratio between the coupling constants of electromagnetism and gravity. In other words, the mainstream analysis is wrong in using the electromagnetic (electric charge) oscillator photon radiation theory instead of the mass oscillator graviton radiation theory: the acceleration of the universe is due to graviton exchange.
Another cause of error in the mainstream calculation of the vacuum’s “dark energy” is the use of the Planck scale for particle spacings in the vacuum, rather than the black hole scale for fundamental particles, as we have already discussed.
On the non-electromagnetic nature of the cosmological constant, Danny R. Lunsford has published a classical unification of electrodynamics and general relativity in Int. J. Theor. Phys., v 43 (2004), No. 1, pp. 161-177, which uses 6 dimensions (3 time and 3 spatial, where the time dimensions are normally indistinguishable and can be lumped together) rather than the Kaluza-Klein 5-dimensional unification (1 time and 4 spatial dimensions). Kaluza-Klein requires compactification of the unobserved extra spatial dimension, while predicting nothing checkable (it is a non-dynamical unification). Lunsford’s 6-d unification predicts a vanishing electromagnetic-based cosmological constant, which as we have seen is consistent with the cosmological acceleration being due to a gravitational field rather than an electromagnetic field. The mainstream thinks that a numerically unified field, i.e., effectively the electromagnetic field vacuum, causes the cosmological acceleration. Actually, the cosmological acceleration is small because it is not caused by such a numerically unified field, but by the weak gravitational field.
While energy serves as a source of gravitation in general relativity, making quantum gravity appear to be a Yang-Mills field (where the field quanta themselves carry gravitational charge, are a source of gravitons, and thus make the field escalate in strength in a very rapid, non-linear way as you get near a mass), in actual fact quantum gravity does not behave as such a field, because gravitational charge (mass) is not an intrinsic property of energy: even in the Standard Model, gravitational charge (mass) is not an intrinsic property of particles, but is supplied by a “miring” mechanism from some external vacuum field (hence the Higgs field speculations).
One example of how a vacuum field can give effective mass to energy is given by Penrose, as discussed in the previous post. Hence, general relativity is wrong to lump together mass and energy: particles intrinsically contain energy, but mass (gravitational and inertial) is given by vacuum field interactions via a mechanism. General relativity completely ignores these dynamics by lumping together mass and energy, which is fine for modelling certain gross phenomena, but fails where quantum gravity effects are important. The field quanta of the gravitational field do not carry gravitational charge (mass), so the gravitational field does not grow in strength to equal the other fields’ running couplings at the Planck scale, as “predicted” by mainstream unification theories.
Update (12 Sept. 2009):
On his blog post http://motls.blogspot.com/2009/09/schrodinger-virus-and-decoherence.html former Harvard assistant physics professor Lubos Motl writes about the typical arXiv pseudophysics paper Towards Quantum Superposition of Living Organisms (that title makes you glad you don’t have a paper on arXiv, doesn’t it?) http://arxiv.org/abs/0909.1469
“So all these [Schroedinger quantum entanglement due to wavefunction collapse upon measurement] things are cool and sexy and we’re used to viewing them as mysterious. And we often love the profound feelings of mystery. But in reality, there is no genuine question concerning the behavior of Schrödinger viruses (or even cats) that would remain uncertain as of 2009.”
Entanglement and the Copenhagen Interpretation are based on QM which is first quantization; i.e. quantization of particle position/momentum using a wave equation (Schroedinger) or uncertainty principle (Heisenberg), in each case having a classical Coulomb potential.
Actually, QM with 1st quantization is false: it is inconsistent with special relativity. 2nd quantization is correct, and quantizes the field, not the position/momentum. I.e., the field quanta cause the indeterminacy in 2nd quantization. Indeterminacy is a physical effect of chaotically arriving field quanta on small scales of spacetime, such as inside an atom.
Dr Thomas Love of California State University has shown that:
“The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”
http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf:
‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.’
Update:
General relativist Professor Sean Carroll is blogging from the religious Templeton Foundation “Philosophy and Cosmology” Conference at Oxford which has barred Dr Sheppeard on the false basis of a lack of space. Carroll reports:
“… multiverse proponents are proposing that we weaken the idea of scientific proof. Science is about two things: testability and explanatory power. Is it worth giving up the former to achieve the latter?”
It’s interesting to hear someone including “explanatory power” as a part of physics. I thought that only mathematical models which make testable predictions can count as physics? Even Ptolemy’s epicycles – regarded as pseudoscience – made falsifiable predictions of planetary positions in addition to “explaining” planetary orbits around the earth. The multiverse is a step back beyond even that pseudoscience, to totally non-falsifiable philosophy. If you want to explain the apparent fine-tuning of fundamental constants like cosmological acceleration aka dark energy, I suggest you look to falsifiable scientific predictions made prior to its discovery!