Revisiting cosmological acceleration and its prediction


[Illustration: area shielding, the updated quantum gravity diagram]

Above: the latest illustration (updated 27 September 2009) which has replaced the older illustration included in the post below. Improvements have been made.

In 1996, the cosmological acceleration a = -Hc = -6.9 × 10^-10 m/s^2 (the minus sign indicating outward acceleration, against inward gravitational attraction) was predicted; it was discovered two years later from supernova redshift observations. The observed magnitude of the acceleration is stated by Lee Smolin in his 2006 book The Trouble with Physics, page 209, to be a = -c^2/R = -c^2/(cT) = -c/T = -Hc = -6.9 × 10^-10 m/s^2. This post reviews the theoretical discovery and some of its implications.

Fig. 1: an improved illustration from the earlier post. The probability of a confirmed prediction – from a theory based entirely upon facts – being the way forward is not trivial! There are two measuring scales for time: (1) beginning at the big bang (13,700 million years ago) and going forward, and (2) beginning at the present age of the Earth and looking back in time with increasing distance. It turns out that there is a simple relationship between them. If we take Hubble's equation v = HR = HcT, where T is time past for the spacetime distance R = cT, then differentiating gives us the outward effective acceleration a = dv/dT = d(HcT)/dT = Hc. Simple. However, you might find it confusing to deal with time past getting bigger with increasing distances, so you might prefer to use the increasing time since the big bang, t = H^-1 - T. As proved in the diagram, the cosmological acceleration in that time coordinate system is a = dv/dt = d(HR)/dt = d(cTH)/dt = d[c(H^-1 - t)H]/dt = d[c(1 - Ht)]/dt = -Hc. An identical result apart from the minus sign, which arises because t increases as T decreases. These calculations give us the cosmological acceleration at the furthest possible distance, R = cT. For smaller distances, r, the cosmological acceleration is simply a = (r/R)Hc.

This can be seen by observing that if we define dr/dt = v, then dt = dr/v, so a = dv/dt = d(Hr)/(dr/v) = vH*dr/dr = vH = rH^2 = (r/R)Hc.
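A quick numerical sketch of these figures (in Python, assuming H ≈ 1/T with T = 13,700 million years, as used above):

```python
# Sketch (not from the original paper): check a = Hc numerically,
# taking H = 1/T with T = 13,700 million years, as in the text.
T = 13.7e9 * 365.25 * 24 * 3600   # age of universe, seconds
c = 2.998e8                        # speed of light, m/s
H = 1.0 / T                        # Hubble parameter approximated as 1/T, in 1/s
a = H * c                          # predicted cosmological acceleration, m/s^2
print(f"H = {H:.2e} s^-1")
print(f"a = H*c = {a:.2e} m/s^2   (text quotes ~6.9e-10 m/s^2)")

# Acceleration at a smaller distance r, per a(r) = (r/R)Hc = r*H^2:
R = c * T
r = 0.5 * R
print(f"a(r = R/2) = {r * H**2:.2e} m/s^2")
```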

Why isn’t this kind of simple proof, showing that the correct amount of acceleration is inherent in the empirical observation of recession velocity increasing with distance, widely published and accepted? As with evolution in 1859, it simply doesn’t fit into the current dogma of mainstream physics, while outside mainstream physics most people with any interest in the subject don’t know the difference between the facts (like the astronomical measurements) and mainstream stringy speculations, so they believe that all cosmology is speculative. In other words, the mainstream speculation mongers have discredited the scientific method in the eyes of those who value facts above dogma. The same is allegedly true of mainstream mathematics, according to the experience of Grigori Perelman when his work on the Poincaré conjecture was downplayed by mainstream conformist Shing-Tung Yau to give more credit to conformists Cao and Zhu: ‘Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest.’

They have to tolerate Shing-Tung Yau because he epitomises the mainstream, has won many awards, and is powerful. Nobody wants to argue someone else’s case with someone like that. It is similar to the dishonest claim ‘string theory has the remarkable property of predicting gravity’ by Edward Witten in the April 1996 issue of Physics Today. Anonymous peer-review can serve as a severe punishment for non-conformist work, by simply blocking publications.

‘Inappropriate topic. While arXiv serves a variety of scientific communities, not all subjects are currently covered. Submissions that do not fit well into our current classification scheme may be removed…’

– arXiv moderation page policy against new topics within science that upset the dogma of status quo (revolutionary discoveries by their very definition ‘do not fit well into our current classification scheme’!)

There are also other reasons for prejudice. Some people who rejected the work of Copernicus, Darwin, Boltzmann, and others did so not just because it was contrary to the dogma they had been taught, but because they thought that the facts were ugly and they didn’t want to take them seriously, or because others around them ignored or laughed at the facts and they wanted to fit in with their peer group. They ‘sincerely’ believed that all work which didn’t appeal to their prejudices was scientifically wrong, and that if they only had the time to read the new paper they would be able to find a flaw in it; a convenient pseudoscientific belief system. The next logical step is to claim the work is wrong without having found any error, or better yet to claim to have found an error which doesn’t actually exist. The alleged ‘error’ is accepted by other conformists without question or checking, as a good excuse to ignore the facts, although it eventually turns out to be just a disagreement between newly discovered facts and old, incorrect but well-established speculative prejudices in some textbook which is widely worshipped as accepted dogma, despite never having been checked against experiment. Einstein modified Newton’s law to make it compatible with conservation of energy, so those who objected to progress claimed that Einstein was shown wrong by the difference with Newton’s law, and ignored or disputed experimental facts to the contrary. Other prejudices are more obvious. If the new theory is presented using simple mathematics, it can be dismissed as simplistic; if it is presented using complex mathematics, it can be dismissed as too complex! However something is presented, it’s easy to sneer at it, to find an excuse to ignore it. In his Introduction to the 1992 Penguin edition of Feynman’s book The Character of Physical Law, Professor Paul Davies states on page 7:

“Each revolution comes with a cluster of so-called geniuses, men and women whose skill and imagination force the scientific community to break out of old habits of thought and embrace new and unfamiliar concepts.”

This is contrary to the usual message of science advancing by quiet discoveries, and the usual message that science is about discovering the facts, not marketing gimmicks, political conformism, and sociology.

Instead of taking non-quantum general relativity and fitting it to observations using arbitrarily selected amounts of unobserved ‘dark matter’ and unobserved ‘dark energy’ – like Ptolemy fitting his earth-centred universe to observations by adding more epicycles and then hailing the mathematical beauty of a world of epicycles – we should look at the data and try to find the simplest model which not only fits the data but makes other checkable predictions concerning gravity. This is precisely what we did when we predicted the cosmological acceleration in 1996. In May 1996, an 8-page paper was written, deriving the cosmological acceleration a = Hc from the Hubble recession law v = HR, and applying it to the universe. When the more appropriate journals like Classical and Quantum Gravity didn’t want to know, because of biased opinions about quantum gravity being a stringy phenomenon, we sent it to Martin Eccles, editor of Electronics World. He made it available via page 896 of the letters pages in the October 1996 issue. It was later published in the February 1997 issue of Science World, ISSN 1367-6172, and the prediction for cosmological acceleration, a = Hc = 6.9 × 10^-10 m/s^2, was criticised in private correspondence by Electronics World author Mike Renardson: the acceleration seemed far, far too small to ever detect. However, it was detected in 1998 by Saul Perlmutter et al., using computer software to automatically detect distant supernovae directly from live CCD telescope data; they published in Nature and failed to cite the prediction, because the appropriate journals had refused to publish it properly. The prediction applied the acceleration to the mass of the universe using Newton’s laws of motion with relativistic corrections: F = ma, where m is the mass of the accelerating universe and a is the cosmological acceleration. This gives a large outward force which predicts gravity via the 3rd law of motion (an implosion carrying an equal, graviton-mediated reaction force upon the observer, which predicts gravity with good accuracy for the input data; a fact totally ignored and indeed suppressed by the general relativity and stringy spin-2 obsessed mainstream, which believes in a false attraction-spin connection for off-shell graviton radiation, derived using incorrect implicit assumptions by Fierz and Pauli way back in 1939).
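As a rough sketch of the F = ma step, the mass value below is an assumed order-of-magnitude figure for the observable universe, not a number from the 1996 paper; it is chosen only to show how a force of the ~10^43 N order quoted later arises:

```python
# Rough sketch of the F = ma step described above. The mass value is an
# assumed order-of-magnitude figure for the observable universe, not a
# number taken from the original 1996 paper.
a = 6.9e-10      # cosmological acceleration, m/s^2
m = 3e52         # assumed mass of the accelerating universe, kg (order of magnitude only)
F = m * a        # outward force, Newtons
print(f"Outward force F = ma ~ {F:.1e} N")   # ~2e43 N, same order as the ~1.8e43 N quoted below
```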


Fig. 2: why the universe accelerates. We see distant masses in every direction we look, so we experience an equilibrium of spin-1 graviton exchange on all sides (apart from the effect of nearby masses, called gravity, shown in Fig. 3 and Fig. 4 below). But in our frame of reference, a mass so distant that it is near the radius R = cT, where T is the age of the universe, can’t have an equilibrium of exchange, because it is so far out that there is little or no mass beyond it to exchange gravitons with. So it experiences a radial asymmetry (from our point of view, i.e., in our frame of reference) and appears to accelerate away from us.


Fig. 3: Feynman diagrams for general relativity, spin-2 (mainstream, failed, non-falsifiable, over-hyped, stringy) quantum gravity, and spin-1 (non-standard, successful, predictive, totally-censored out) quantum gravity.


Fig. 4: how the spin-1 mechanism predicts the Newtonian gravity law and the strength of gravity represented by G. But remember that it also predicts the general relativistic contraction of radius for a mass by the amount (1/3)GM/c^2 metres, by simple squeezing akin to the Lorentz contraction (which is caused by the off-shell graviton exchange radiation force against accelerating masses). Therefore, unlike the Newtonian law, it is fully Lorentzian and is completely compatible with the classical approximation of gravity in the basic field equation of general relativity for its successful non-cosmological applications and tests.
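As a hedged numerical illustration of the (1/3)GM/c^2 contraction term in the caption, evaluated for the Earth (standard constants; the Earth is just a convenient example mass):

```python
# Sketch: the (1/3)GM/c^2 contraction quoted in the caption, evaluated for
# the Earth as an illustrative mass.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_earth = 5.972e24  # kg
contraction = G * M_earth / (3 * c**2)
print(f"(1/3)GM/c^2 for the Earth = {contraction*1e3:.2f} mm")  # about 1.5 mm
```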

Hawking’s formula for the radiating power of the black hole electron tells us it radiates with a power P = 3 × 10^92 Watts; for technical reasons this is field quanta (off-shell) emission, not real on-shell radiation. Hawking’s mechanism for black hole radiation emission omits Schwinger’s threshold field strength for pair-production in the vacuum, so only charged black holes produce field strengths above Schwinger’s threshold at the event horizon radius. The charge required for black holes to radiate also changes the nature of the emitted radiation, because it means that only positive virtual charges will fall into the electron core, and only negative virtual charges can be emitted.

The force of this radiation is the rate of change of momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power. Hence, F = 2P/c = 2(3 × 10^92)/c = 2 × 10^84 Newtons. This is 10^41 times the F = 1.8 × 10^43 Newtons cosmological force, so this Hawking radiation force predicts the electromagnetic force strength, and is more empirical evidence that the cross-section for fundamental particles is the black hole event horizon size, not the Planck size.
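A sketch checking the orders of magnitude quoted here, using the standard Hawking power formula P = h-bar*c^6/(15360*pi*G^2*M^2) with the electron mass inserted (the small numerical differences from the figures in the text come from rounding):

```python
# Sketch checking the order of magnitude of the figures quoted above,
# treating the electron as a black hole of mass m_e (the article's premise).
import math

hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2
m_e  = 9.109e-31   # kg

P = hbar * c**6 / (15360 * math.pi * G**2 * m_e**2)   # standard Hawking power
F = 2 * P / c                                          # force from rate of change of momentum
F_cosmo = 1.8e43                                       # cosmological force figure quoted in the text, N

print(f"P ~ {P:.1e} W        (text quotes ~3e92 W)")
print(f"F = 2P/c ~ {F:.1e} N (text quotes ~2e84 N)")
print(f"F / F_cosmo ~ {F / F_cosmo:.1e}  (text quotes ~1e41)")
```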


Fig. 5: In a capacitor, energy enters at light velocity accompanied by electrons (drifting at typically about 1 mm/s). The light velocity Poynting-Heaviside vector energy (consisting of light velocity field quanta Maxwell knew nothing of) bounces off the far end and adds to the energy still flowing in, causing a discrete rise in the stored potential difference (so-called voltage). There is no mechanism for the gauge boson energy to ever slow down below the velocity of light. It doesn’t stop, but keeps going. Studying trapped light velocity energy is like studying a static electric charge, because the magnetic fields cancel out if there is an equilibrium (with equal energy going north as going south, and going east as going west, etc.). ‘A so-called steady charged capacitor is not steady at all. Necessarily, a TEM wave containing (hidden) magnetic field as well as electric field is vacillating from end to end.’ – Catt. This means we can study gauge bosons by studying trapped light velocity electromagnetic current. Actually, nobody – from J. J. Thomson onwards – has ever probed the Planck scale to see an actual static electric charge: they have only seen light velocity electromagnetic fields which mediate charge and which they falsely and implicitly assume are some kind of crackpot proof of a Planck scale static charge. Catt’s work suggests that there is no such thing: all electrons are just trapped electromagnetic energy.
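A toy sketch of the staircase charging described in the caption: a lossless open-ended transmission line charged through a source resistance, with the far-end voltage rising in discrete steps, one per round trip of the energy current. The component values are arbitrary illustrations, not Catt's own figures:

```python
# Toy model of the 'capacitor as transmission line' picture: a lossless line,
# open at the far end, charged through a source resistance. The far-end
# voltage rises in discrete steps, one per round trip of the light-velocity
# energy current. All component values are arbitrary illustrations.
V_source = 10.0     # volts
R_source = 1000.0   # ohms (chosen > Z0 so the rise is a monotonic staircase)
Z0       = 377.0    # ohms, characteristic impedance of the line

rho_s = (R_source - Z0) / (R_source + Z0)   # reflection coefficient at the source
forward = V_source * Z0 / (R_source + Z0)   # amplitude of the first launched step

v_far = 0.0         # voltage at the open far end
for n in range(1, 9):
    v_far += 2 * forward        # incident + reflected wave at the open end
    forward *= rho_s            # re-reflection at the source for the next pass
    print(f"after round trip {n}: far-end voltage = {v_far:6.3f} V")
# The staircase converges on V_source; the energy current itself never stops moving.
```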

For further evidence plus details of the effect on the Standard Model and general relativity, see the earlier posts linked here and here.


Fig. 6: Unification in supersymmetric stringy M-theory compared to the Standard Model. Notice the key difference: string theory assumes metaphysically (without any mechanism or evidence) that all couplings for fundamental interactions are equal at the numerological Planck scale (~10^19 GeV energy, or ~10^-35 metre distance of closest approach between colliding particles), which implies a bare core charge for the electron far lower than 137 times the low-energy value. The actual bare core charge of the electron can be shown to be 137 times the low-energy value, by comparing the value deduced from Heisenberg’s uncertainty principle (ignoring vacuum polarization shielding) to the shielded value measured by Coulomb. Start from Heisenberg’s minimal energy-time uncertainty relation, h-bar = E*t:

F = dE/dx = d(h-bar/t)/dx = d[h-bar/(x/c)]/dx = d[h-bar*c/x]/dx = -h-bar*c/x^2

This inverse-square law force is a factor of ~137, or 1/alpha, times the Coulomb force between two electrons (i.e. it doesn’t incorporate the polarized vacuum shielding factor of alpha). Hence, the bare core charge of an electron is a factor of 1/alpha or ~137 times stronger than the value measured at low energy, i.e. below Schwinger’s 1.3 × 10^18 V/m electromagnetic field strength for pair production in the vacuum, which corresponds to the distance of closest approach in a ~1 MeV collision, and which is thus the IR cutoff energy for the logarithmic running coupling equations in QFT. This bare core charge disagrees with Planck scale unification, but agrees with black hole scale unification, which requires a much higher collision energy than the Planck scale, corresponding to approach distances of ~10^-57 metres for the black hole event horizon radius of an electron, which is far smaller and more fundamental than the ~10^-35 metre Planck length.
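A sketch of the comparison just described: the x^2 cancels between the two forces, so the ratio reduces to 4*pi*eps0*h-bar*c/e^2 = 1/alpha ≈ 137:

```python
# Sketch: compare the 'bare' inverse-square force h-bar*c/x^2 derived above
# with the Coulomb force e^2/(4*pi*eps0*x^2) between two electrons. The x^2
# cancels, so the ratio is just 4*pi*eps0*hbar*c/e^2 = 1/alpha.
import math

hbar = 1.0546e-34    # J s
c    = 2.998e8       # m/s
e    = 1.602e-19     # C
eps0 = 8.854e-12     # F/m

ratio = 4 * math.pi * eps0 * hbar * c / e**2
print(f"(hbar*c/x^2) / (Coulomb force) = {ratio:.1f}")   # ~137, i.e. 1/alpha
```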

Notice also that Fig. 6 includes the variation of the strong, weak and electromagnetic charge strengths (coupling parameters) below the 100 GeV scale, which is excluded in all popular diagrams of unification. What you notice by including the full graph is that the strong and weak interactions only arise above the electromagnetic IR cutoff, where the electromagnetic coupling begins to rise. The bare core electromagnetic charge at the black hole scale radiates field quanta which get attenuated in the vacuum, the energy giving rise to every kind of particle you can imagine, including strong and weak field quanta. Hence, the attenuation of the electromagnetic field by the polarized virtual charges which are created by pair production in strong electric fields absorbs electromagnetic energy and deposits that energy in the vacuum out to the IR cutoff radius, some femtometres from the core. This deposited energy gives rise to weak and strong field quanta, and hence powers those fields. By the conservation of energy, the variation of the electromagnetic field strength with distance inversely corresponds to that of the strong and weak fields. I.e., near the bare charge, where little electromagnetic field energy has been absorbed by the vacuum, both the strong and weak fields are weak, because little energy has been deposited in the vacuum to create the field quanta corresponding to those fields. At greater distances, more of the electromagnetic field energy has been absorbed in the vacuum, so the weak and strong fields can be mediated by more virtual particles and are stronger. This energy conservation mechanism for unification does not (unlike string theory) postulate equality of all charges at the smallest possible distance scale (the UV cutoff). Instead, it shows that the electromagnetic charge reaches a maximum value, and that at arbitrarily small distances from the bare core electron charge, negligible energy from the electromagnetic field has been deposited in the vacuum, so there is negligible energy for weak and strong field quanta: therefore, the weak and strong charge strengths tend towards zero as you approach the black hole scale. Unification facts, in short, contradict the mainstream dogma of equal charges at the UV cutoff. These facts further substantiate the use of the black hole event horizon area for quantum gravity predictions, discussed earlier in this post using completely different physical evidence.

Relevant extract from a post on the other blog:


Above: Dr Zaius in Planet of the Apes simultaneously held religious and scientific positions, leading him to suppress scientific findings which contradicted the religious dogma. You know, like my suppression by Britain’s Open University physics department chairman, Professor Russell Stannard, author of books like Science and the Renewal of Belief. Actually, this makes some sense when you recognise that Stannard takes “physics” to include religious belief in uncheckable pseudoscience: a landscape of 10^500 different universes to account for the vast number of possible particle physics theories which can be generated by the 100 or more moduli for the shape of the unobservably small compactification of the 6 extra dimensions assumed to exist in the speculative Calabi-Yau manifold of string theory, as well as other rubbish like Aspect’s alleged “experimental evidence” on entanglement via correlation of particle spins. Stannard’s book is promoted as:

“offering fresh insight into original sin, the trials experienced by Galileo, the problem of pain, the possibility of miracles, the evidence for the resurrection, the credibility of incarnation, and the power of steadfast prayer. By introducing simple analogies, Stannard clears up misunderstandings that have muddied the connections between science and religion, and suggests contributions that the pursuit of physical science can make to theology”,

arguing that science should be alloyed with dogma again as a “unification” of physics and religion, as it was in the time of Galileo.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

First quantization for QM (e.g. Schroedinger) quantizes the product of position and momentum of an electron, while the Coulomb field is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy’s epicycles, where all planets were assumed to orbit the Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals), because indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman’s path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory, because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization by Heisenberg’s and Schroedinger’s approaches is just a semi-classical, non-relativistic mathematical approximation, useful for obtaining simple mathematical solutions for bound states like atoms:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

When, as a physics student, I sent in a mechanism for gravity that correctly predicted the cosmological acceleration two years ahead of its discovery, Russell didn’t even personally reply but just passed my paper to Dr Bob Lambourne, who in 1996 wrote to me that my prediction for quantum gravity and cosmological acceleration was not important because it is not within the metaphysical, non-falsifiable domain of Professor Edward Witten’s stringy speculations on 11-dimensional ‘M-theory’. In 1986, Professor Russell was awarded the Templeton Project Trust Award for ‘significant contributions to the field of spiritual values; in particular for contributions to greater understanding of science and religion’. So who says the Planet of the Apes story is completely fictional, aside from a little hairiness?

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

– Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56 (footnote). His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger’s wave equation and Heisenberg’s matrix mechanics being the first two attempts, which both generate nonsense ‘interpretations’]. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

– Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

– Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn’t enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

– Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.

This is fact, not my opinion or speculation: Professor David Bohm in 1952 proved that “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his messy treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that the quantum field interactions produce the “Brownian motion” of the electron, as described by Feynman’s path integrals for simple random field quanta interactions with the electron.
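A toy sketch of this averaging idea (not the actual QFT calculation, and not Bohm's treatment): each simulated 'electron' path is jolted randomly but held by a restoring drift, and while any single path is erratic, the ensemble of final positions has a smooth, bell-shaped spread:

```python
# Toy illustration only: random kicks plus a restoring drift give erratic
# individual trajectories, but a smooth Gaussian-like ensemble distribution,
# which is the point being made about averages obeying a simple wave-type law.
import math
import random
import statistics

def final_position(steps=5000, dt=0.01, k=1.0):
    x = 0.0
    for _ in range(steps):
        x += -k * x * dt + random.gauss(0.0, math.sqrt(dt))  # drift toward centre + random kick
    return x

positions = [final_position() for _ in range(2000)]
print(f"mean final position = {statistics.mean(positions):+.3f}  (individual paths are random)")
print(f"spread (std dev)    = {statistics.stdev(positions):.3f}  (smooth, bell-shaped ensemble)")
```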

Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman’s path integrals in his 1933 paper “The Lagrangian in Quantum Mechanics” published in Physikalische Zeitschrift der Sowjetunion where he states:

“Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue …

“Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. … The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. … the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form …”

Update, 6 September 2009:

Mathematical physicist and noted string theory critic Peter Woit of Columbia has a new post which states: ‘The latest Forbes magazine has an article entitled String Theory Skeptic, which gives me a lot more credit for the problems of string theory than I deserve.’

You can see why he is keen to take a back seat, namely that Woit’s unfortunate hero, stringy M-theory creator Edward Witten of the Institute for Advanced Study in Princeton, is quoted being elitist:

“Princeton’s Witten declines to discuss Woit, saying in an e-mail that he prefers to debate these issues only with “critics who are distinguished scientists rather than with people who have become known by writing books.”

“That sounds like elitism. Physicists, though, defend themselves by saying that in the Internet age, when anyone can put out an opinion about anything, they have to draw limits around who they can get into arguments with. There are only 24 hours in the day. [Yeah, they spend all their time hyping lies.]

“Which raises the question: Why should anyone take a nonphysicist seriously on such a fundamental physics issue? [Duh, it’s the content of what is being said, not their groupthink authority status, which counts in science; which is the whole difference between religion and science.]

“Physics itself might hold the answer to that question. John Baez, a UC, Riverside physicist, famously created the Crackpot Index, a tongue-in-cheek but nonetheless useful guide to evaluating scientific claims by nonscientists. For example, it awards one 40 points “for claiming that the scientific establishment is engaged in a conspiracy to prevent your work from gaining its well-deserved fame.” [Actually, string theory is a public conspiracy to hype-out all alternative ideas; if string theory were a quiet failure, then nobody would need to complain about it! As with Cold Fusion in 1989, the problem isn’t that an idea is a failure; the problem is that it uses authority to gain media attention and lie to millions of people with unproved hype.]

“Using Baez’s index, it’s clear Woit is no crackpot. He doesn’t play the role of the persecuted truth-teller. For example, Woit says that Witten is “a genius, who works very hard and who just doesn’t want to spend time arguing.” [That was precisely what was said in the media of a certain German Chancellor from election in 1933 until after Munich in 1938, when he just dictated and didn’t engage in arguments with critics who merely wrote books.]

“Woit also acknowledges he might be wrong. It’s hard to think of an example from the history of science when so many of the field’s best people took to a new idea that ended up being utterly mistaken, a fact that Woit himself is the first to admit. [Duh, then what about all the history of failure in fundamental particle physics such as unsplittable atoms, Ptolemy’s epicycles, vortex atoms, aether, etc.]

“A lot of really smart guys are doing it, and sometimes I wonder, ‘Who am I to be challenging them?'” he says. “The strongest argument in favor of string theory is that Ed Witten thinks it’s right.”

“It’s common in physics for people to have incredibly ambitious ideas that don’t pan out but lead to rich mathematical ideas that end up being very useful.”

Senior editor Lee Gomes covers technology from our Silicon Valley bureau. Visit him at http://www.forbes.com/gomes/.

Woit’s “earliest work verified Edward Witten’s 1979 quantum chromodynamic formula for the eta-prime mass in terms of the second derivative of the vacuum energy.”

This indicates that Woit understands and respects Witten’s 1979 work on solid checkable physics, which holds him back from a general attack on Witten’s later “work” on string theory speculations. (You know, the kind of “logic” which says that Hitler ended unemployment – by, ahem, conscripting a massive army – so he can’t have been 100% evil. Or the airplane you were due to fly on crashed with no survivors, so really you should be grateful to the thugs who stopped you catching the flight, thus saving your life.)

As an alternative to stringy ideas, Woit suggested, for example, that: “spontaneous gauge symmetry breaking is somehow related to the other mysterious aspect of electroweak gauge symmetry: its chiral nature. … The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.”

In the Standard Model, the SU(2) isospin charge weak force only operates on left-handed particles because all neutrinos – which are needed for weak interactions – are left-handed. Our explanation of SU(2) differs from the Standard Model in that SU(2) with massive gauge bosons is the weak interaction, while SU(2) with massless gauge bosons is the electromagnetic (charged field quanta) and gravitational (neutral field quanta) interaction: so maybe, as Woit suggests, the SU(2) weak interaction left-handedness arises from the way that mass is acquired by the massive weak SU(2) field quanta. No checkable explanation of this left-handedness is given in the Standard Model. However, the previous post gives some speculations from Penrose and a potential application to work by Koide and Brannen. Further work is being done. Feynman, in the final chapter of his 1985 book QED, makes it clear that the “electroweak unification” theory is not a perfect unification: it’s held together by the unobserved Higgs mechanism and the unexplained ad hoc Weinberg mixing angle, which act like leaky duct tape.

The Lagrangian of the Standard Model (SM) for low energies (i.e., broken symmetry) is well verified, but this doesn’t prove the SM electroweak group structure, or that the mass of weak bosons and other particles at low energy is provided by Higgs bosons according to the Higgs mechanism, whereby they lose mass and unify at high energy. Even the electroweak theory’s successes don’t prove that the unification is correct: the arbitrary value of the Weinberg mixing angle doesn’t prove that electromagnetism and the weak force are unified in the way specified by the SM. It is just a mathematical model for unification which works well at the (broken symmetry) energies used in experiments so far.

E.g., if weak field bosons acquire mass at all energies, the electroweak force symmetry is broken at all energies. You can still have your Weinberg electroweak mixing angle. Just because two related fields are mixed doesn’t prove they’re unified by all having massless field quanta at high energies. Mass can be acquired in a simpler way, simply as the quantized charge for quantum gravity. Such a mass, as a quantum gravity charge, need not decay by either of the Higgs decay routes H->WW and H->ZZ. The quantized gravity charge (mass) would just give particles their charges (gravitational masses). There’s no need for it to consist of decaying Higgs-type bosons.

Why does the SM actually “model” electroweak symmetry – as if electroweak symmetry has been seen – when it hasn’t been seen? Sheer dogmatic prejudice, which is exactly what real scientists should guard against.

Returning to the Forbes article, it mentions:

“There is no direct evidence that the world really is made of strings; the idea was first proposed simply because it made a certain amount of mathematical sense. The theory became more popular when physicists realized that replacing dots with strings would solve an enormous math problem left over from 20th-century physics: unifying the force of gravity with the forces that explain the interaction of atomic particles.”

This final sentence ends on a falsehood, because “unification” pipe-dreams, at least in the way they are currently defined by mainstream dogma (i.e., all couplings becoming equal at the Planck scale), have not been shown to actually exist in nature. What you actually want to do in science is to come up with a predictive quantum gravity that can be checked against the real world, instead of building ivory towers. Unification can be achieved with the other forces not by the numerology of equal force couplings at the Planck scale, but instead by fitting gravity into the Standard Model as the neutral massless gauge boson of a revised SU(2), achieved by removing the unobserved Higgs fairy field and replacing it with the observed quantum gravity charge field.

However, it is true that point particles pose problems (like zero distances with infinite field strengths) that can be easily disposed of using some kind of extended object like a loop of string for a fundamental particle: “replacing dots with strings would solve an enormous math problem”. Closely related to this problem is a less widely known problem of the quantization of electromagnetic fields.

Suppose we take an electric field within a photon. QFT says that this field is composed of virtual photons. Those virtual photons in turn are composed of electromagnetic fields. What are those fields composed of? More virtual photons, within virtual photons, ad infinitum? Like a Russian doll, with an infinite number of shells? Maybe pure mathematicians would like the idea of an infinite amount of complexity, but real world physicists would suspect a problem and want to break the endless cycle of chickens-and-eggs coming before one another. (Due to evolution of complexity from simplicity, a proto-egg came before the chicken, because the egg is a single cell and is thus simpler than the chicken. An egg is closer in structure to the first organisms than a chicken is, simply because it is unicellular.)

Similarly, at least the virtual (off-shell) photon is probably a primitive particle in its own right, with no other particulate fields within it. If string theory has any direct role in fundamental particle physics (i.e., aside from just potentially providing ways to do applied physics, like making QCD calculations for quark-gluon plasmas with the conjectured AdS/CFT equivalence), then it should model the photon as a vibrating string in a simple and checkable way (making solid, testable predictions!). This is not what string theorists do, and the whole problem is that they traditionally find any real-world modelling heretical. Due to Bohr’s and Heisenberg’s metaphysical attacks on understanding nature, such as the complementarity and correspondence principles, they have slunk into a world of metaphysics in which the false dogma is that nature has transcended reason.

Update (7 September 2009):

Carlos Castro’s paper, “The Cosmological Constant and Pioneer Anomaly from Weyl Spacetimes and Mach’s Principle”, http://vixra.org/pdf/0908.0093v1.pdf states:

“It is shown how Weyl’s geometry and Mach’s Holographic principle furnishes both the magnitude and sign (towards the sun) of the Pioneer anomalous acceleration a_P ~ -c^2/R_Hubble firstly observed by Anderson et al. Weyl’s Geometry can account for both the origins and the value of the observed vacuum energy density (dark energy). The source of dark energy is just the dilaton-like Jordan-Brans-Dicke scalar field that is required to implement Weyl invariance of the most simple of all possible actions. A nonvanishing value of the vacuum energy density of the order of 10^-123 M^4_Planck is found consistent with observations. Weyl’s geometry accounts also for the phantom scalar field in modern Cosmology in a very natural fashion.”

I discussed the mainstream problems with the cosmological constant on the about page of this blog:

Because of relativistic effects on the source of the gravitational field (i.e., accelerating bodies contract in the direction of motion and gain mass, which is gravitational charge, so a falling apple becomes heavier while it accelerates), the curvature of spacetime is affected in order for energy to be conserved when the gravitational source is changed by relativistic motion. This means that the Ricci tensor for curvature is not simply equal to the source of the gravitational field. Instead, another term (equal to half the product of the trace of the Ricci tensor and the metric tensor) must be subtracted from the Ricci curvature to ensure the conservation of energy. As a result, general relativity makes predictions which differ from Newtonian physics. General relativity is correct as far as it goes, which is a mathematical generalization of Newtonian gravity with a correction for energy conservation. It’s not, however, the end of the story. There is every reason to expect general relativity to hold good in the solar system, and to be a good approximation. But if gravity has a gauge theory (exchange radiation) mechanism in the expanding universe which surrounds a falling apple, there is a reason why general relativity is incomplete when applied to cosmology.
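For reference, the correction described in words above is just the trace term in the Einstein field equation, R_uv - (1/2)R g_uv = (8*Pi*G/c^4) T_uv: subtracting (1/2)R g_uv from the Ricci tensor is what makes the left-hand side divergence-free, so that local energy-momentum conservation is built in.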

Sean’s paper ‘Why is the Universe Accelerating?’ asks why the energy of the vacuum is so much smaller than predicted by grand unification theories of supersymmetry, such as supergravity (a string theory). This theory states that the universe is filled with a quantum field of virtual fermions which have a ground state or zero-point energy of E = (1/2){h-bar}{angular frequency}. Each of these oscillating virtual charges radiates energy E = hf, so integrating over all frequencies gives you the total amount of vacuum energy. This is infinity if you integrate frequencies between zero and infinity, but this problem isn’t real, because the highest frequencies are the shortest wavelengths, and we already know from the physical need to renormalize quantum field theory that the vacuum has a minimum size scale (the grain size of the vacuum), and you can’t have shorter wavelengths (or correspondingly higher frequencies) than that size. Renormalization introduces cutoffs on the running couplings for interaction strengths; such couplings would become infinite at zero distance, causing infinite field momenta, if they were not cut off by a vacuum grain size limit. The mainstream string and other supersymmetric unification ideas assume that the grain size is the Planck length, although there is no theory of this (dimensional analysis isn’t a physical theory) and certainly no experimental evidence for this particular grain size assumption; a physically more meaningful and also smaller grain size would be the black hole horizon radius for an electron, 2GM/c^2.

But to explain the mainstream error, the assumption of the Planck length as the grain size tells the mainstream how closely grains (virtual fermions) are packed together in the spacetime fabric, allowing the vacuum energy to be calculated. Integrating the energy over frequencies corresponding to vacuum oscillator wavelengths which are longer than the Planck scale gives us exactly the same answer for the vacuum energy as working out the energy density of the vacuum from the grain size spacing of virtual charges. This is the Planck mass (expressed as energy using E = mc2) divided into the cube of the Planck length (the volume which each of the supposed virtual Planck mass vacuum particles is supposed to occupy within the vacuum).

The answer is 10^112 ergs/cm^3 in Sean’s quaint American stone age units, or 10^111 J/m^3 in physically sensible S.I. units (1 erg is 10^-7 J, and there are 10^6 cm^3 in 1 m^3). The problem for Sean and other mainstream people is why the measured ‘dark energy’ from the observed cosmological acceleration implies a vacuum energy density of merely 10^-9 J/m^3. In other words, string theory and supersymmetric unification theories in general exaggerate the vacuum energy density by a factor of 10^111 J/m^3 / 10^-9 J/m^3 = 10^120.
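A sketch of the numbers in this paragraph. To reproduce the ~10^112 erg/cm^3 figure, the cutoff energy below is taken as the reduced Planck energy (~2.4 × 10^18 GeV), which is an assumption of convention; using the full Planck energy raises the result by roughly two orders of magnitude:

```python
# Sketch of the vacuum energy density figures quoted above. The cutoff energy
# is assumed to be the reduced Planck energy, the convention that gives the
# ~1e112 erg/cm^3 (1e111 J/m^3) number; the observed value is approximate.
hbar = 1.0546e-34        # J s
c    = 2.998e8           # m/s
GeV  = 1.602e-10         # J

E_cutoff = 2.4e18 * GeV                    # assumed UV cutoff energy, J
rho_theory = E_cutoff**4 / (hbar * c)**3   # predicted vacuum energy density, J/m^3
rho_observed = 6e-10                       # measured dark energy density, J/m^3 (approx.)

ratio = rho_theory / rho_observed
print(f"predicted  ~ {rho_theory:.1e} J/m^3")     # ~1e111
print(f"observed   ~ {rho_observed:.1e} J/m^3")
print(f"ratio      ~ {ratio:.1e}")                # ~1e120
print(f"ratio^0.25 ~ {ratio**0.25:.1e}")          # ~1e30: the discrepancy in energy, not density
```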

That’s an error! (Although, of course, to be a little cynical, such errors are common in string theory, which also predicts 10^500 different universes, exceeding the observed number.)

Now we get to the fun part. Sean points out in section 1.2.2 ‘Quantum zero-point energy’ at page 4 of his paper that:

‘This is the famous 120-orders-of-magnitude discrepancy that makes the cosmological constant problem such a glaring embarrassment. Of course, it is somewhat unfair to emphasize the factor of 10^120, which depends on the fact that energy density has units of [energy]^4.’

What Sean is saying here is that, since the Planck length is inversely proportional to the Planck energy, the mainstream-predicted vacuum energy density scales as {Planck energy}/{Planck length}^3 ~ {Planck energy}^4, which exaggerates the error in the prediction of the energy. So if we look at the error in terms of energy rather than the energy density of the vacuum, the error is only a factor of 10^30, not 10^120.

What is pretty obvious here is that the more meaningful 10^30 error factor is relatively close to the factor of 10^40 which is the ratio between the coupling constants of electromagnetism and gravity. In other words, the mainstream analysis is wrong in using the electromagnetic (electric charge) oscillator photon radiation theory, instead of the mass oscillator graviton radiation theory: the acceleration of the universe is due to graviton exchange.

Another cause of error in the mainstream calculation of the vacuum’s “dark energy” is the use of the Planck scale for particle spacings in the vacuum, rather than the black hole scale for fundamental particles, as we have already discussed.

On the non-electromagnetic nature of the cosmological constant, Danny R. Lunsford has published a classical unification of electrodynamics and general relativity in Int. J. Theor. Phys., v. 43 (2004), No. 1, pp. 161-177, which uses 6 dimensions (3 time and 3 spatial, where the time dimensions are normally indistinguishable and can be lumped together) rather than the Kaluza-Klein 5-dimensional unification (1 time and 4 spatial dimensions). Kaluza-Klein requires compactification of the unobserved extra spatial dimension, while predicting nothing checkable (it is a non-dynamical unification). Lunsford’s 6-d unification predicts a vanishing electromagnetic-based cosmological constant, which as we have seen is consistent with the cosmological acceleration being due to a gravitational field rather than an electromagnetic field. The mainstream think that a numerically unified field, i.e., effectively the electromagnetic field vacuum, causes the cosmological acceleration. Actually, the cosmological acceleration is small because it is not caused by such a numerically unified field, but by the weak gravitational field.

While energy serves as a source of gravitation in general relativity, making quantum gravity appear to be a Yang-Mills field (where the field quanta themselves carry gravitational charge, are a source of gravitons, and thus make the field escalate in strength in a very rapid, non-linear way as you get near a mass), in actual fact quantum gravity does not behave as such a field, because gravitational charge (mass) is not an intrinsic property of energy: even in the Standard Model, gravitational charge (mass) is not an intrinsic property of particles but is supplied by a “miring” mechanism from some external vacuum field (hence the Higgs field speculations).

One example of how a vacuum field can give effective mass to energy is given by Penrose, as discussed in the previous post. Hence, general relativity is wrong to lump together mass and energy: particles intrinsically contain energy, but mass (gravitational and inertial) is given by vacuum field interactions via a mechanism. General relativity completely ignores these dynamics by lumping together mass and energy, which is fine for modelling certain gross phenomena, but fails where quantum gravity effects are important. The field quanta of the gravitational field do not carry gravitational charge (mass), so the gravitational field does not grow in strength to equal the other fields’ running couplings at the Planck scale, as “predicted” by mainstream unification theories.

Update (12 Sept. 2009):

On his blog post http://motls.blogspot.com/2009/09/schrodinger-virus-and-decoherence.html former Harvard assistant physics professor Lubos Motl writes about the typical arXiv pseudophysics paper Towards Quantum Superposition of Living Organisms (that title makes you glad you don’t have a paper on arXiv, doesn’t it?) http://arxiv.org/abs/0909.1469

“So all these [Schroedinger quantum entanglement due to wavefunction collapse upon measurement] things are cool and sexy and we’re used to viewing them as mysterious. And we often love the profound feelings of mystery. But in reality, there is no genuine question concerning the behavior of Schrödinger viruses (or even cats) that would remain uncertain as of 2009.”

I’ve commented there (Lubos Motl probably won’t “understand” the following distinction between first and second quantization any more than Adolf Hitler could “understand” the distinction between ethics and eugenics):

Entanglement and the Copenhagen Interpretation are based on QM which is first quantization; i.e. quantization of particle position/momentum using a wave equation (Schroedinger) or uncertainty principle (Heisenberg), in each case having a classical Coulomb potential.

Actually, QM with 1st quantization is false: it is inconsistent with special relativity. 2nd quantization is correct, and quantizes the field, not the position/momentum. I.e., the field quanta cause the indeterminacy in 2nd quantization. Indeterminacy is a physical effect of chaotically arriving field quanta on small scales of spacetime, such as inside an atom.

Dr Thomas Love of California State University has shown that:

“The quantum collapse [in the mainstream interpretation of quantum mechanics, which has wavefunction collapse occur when a measurement is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf:

‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.’

Update:

General relativist Professor Sean Carroll is blogging from the religious Templeton Foundation “Philosophy and Cosmology” Conference at Oxford which has barred Dr Sheppeard on the false basis of a lack of space. Carroll reports:

“… multiverse proponents are proposing that we weaken the idea of scientific proof. Science is about two things: testability and explanatory power. Is it worth giving up the former to achieve the latter?”

It’s interesting to hear someone including “explanatory power” as a part of physics. I thought that only mathematical models which make testable predictions can count as physics? Even Ptolemy’s epicycles – regarded as pseudoscience – made falsifiable predictions of planetary positions in addition to “explaining” planetary orbits around the earth. The multiverse is a step back beyond even that pseudoscience, to totally non-falsifiable philosophy. If you want to explain the apparent fine-tuning of fundamental constants like cosmological acceleration aka dark energy, I suggest you look to falsifiable scientific predictions made prior to its discovery!

The Koide formula explained by flavour mixing in a Weyl 2-spinor, Schroedinger ‘Zitterbewegung’ lepton

Above: Fig. 25.13 on p. 644 of the 2004 Cape edition of Penrose’s Road to Reality. The caption reads: ‘In the zigzag picture of a Dirac particle, the vertices may be viewed as interactions with the (constant) Higgs field.’ Because the mass of the particle is acquired as it interacts with the constant mass vacuum field quanta at the vertices (some kind of mass-producing field, not necessarily any particular speculative Higgs boson, which has never been observed), it follows that the ‘coupling constant’ for the interaction must be different where the resultant particles are different in mass: the coupling per vertex (there are two vertices needed for each complete cycle of de Broglie wave oscillation as the particle moves in the zigzag motion) is a square root factor, which apparently explains the Koide formula for leptons, including Carl Brannen’s modification for neutrino masses. This results from the decomposition of Dirac’s spinor into a 2-spinor form by Weyl in 1929, where one component of the spinor is left-handed and the other right-handed. A truly massless neutrino would be an entirely left-handed spinor, like just the zig part of the zigzag motion of an electron. But if they have a small mass and can change flavour (as observed for solar neutrinos), neutrinos must occasionally interact with a massive vacuum field and therefore have a very small zag component.

Let the masses of the electron, muon and tauon be Me, Mm, and Mt, respectively. Koide’s formula in its usual presentation is then:

(Me^1/2 + Mm^1/2 + Mt^1/2)^2/(Me + Mm + Mt) = 3/2

Multiplying out the squared numerator, simplifying and rearranging gives (see appendix B at the end of this post for detailed steps):

Me + Mm + Mt = 4[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2]

To further simplify the Koide formula, remember from the law of indices that any mass can be represented as Me = (MeMe)^1/2. The Koide formula can then be seen to consist entirely of a summation of terms of the form (MaMb)^1/2, some positive and some negative.
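As a quick numerical sanity check on both forms of the formula (using rough, illustrative charged-lepton masses in MeV, not precision values):

```python
from math import sqrt

# Approximate charged lepton masses in MeV (illustrative values only)
Me, Mm, Mt = 0.5110, 105.66, 1776.9

# Original Koide form: (sqrt(Me)+sqrt(Mm)+sqrt(Mt))^2 / (Me+Mm+Mt) should be ~3/2
koide_ratio = (sqrt(Me) + sqrt(Mm) + sqrt(Mt))**2 / (Me + Mm + Mt)

# Expanded form: Me+Mm+Mt should equal 4*[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2]
lhs = Me + Mm + Mt
rhs = 4 * (sqrt(Me * Mm) + sqrt(Me * Mt) + sqrt(Mm * Mt))

print(koide_ratio)   # ~1.5000
print(lhs, rhs)      # the two sides agree closely, within the mass uncertainties
```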

Physically, the Koide formula in its original form presumably represents high-energy averaging over all flavours of leptons (electron, muon and tauon), as suggested in a previous post (which considered a different explanation for the square roots, however): at high enough energy (in either collisions or simply in a strong enough static field very close to the real core of a fundamental particle), all three flavours of leptons can spontaneously occur as virtual fermions in the vacuum due to pair production. The original Koide formula seems to apply to the averaging of the masses of leptons at high energy, when they can all briefly arise as virtual particles formed from preon interactions in the vacuum.

A MECHANISM EXPLAINING HOW LEPTON MASSES CAN BE PRODUCTS OF PAIRS OF SQUARE ROOTS OF OTHER LEPTON MASSES (OR OF PREON COUPLING CONSTANT MASSES WHICH NORMALLY CANNOT BE DIRECTLY OBSERVED, BUT WHOSE PAIRED COMBINATIONS GIVE RISE TO THE OBSERVED PARTICLE MASSES)

The Dirac spinor was first resolved into 2-spinors by Weyl in 1929, and the relevance here is that, as Penrose shows (Road to Reality, 2004, pp. 629-30):

‘The Dirac equation can then be written as an equation coupling these two spinors, each acting as a kind of “source” for the other, with a “coupling constant” 2^-1/2 M describing the strength of the “interaction” between the two. …

‘From the form of these equations, we see that the Dirac electron can be thought of as being composed of two ingredients … It is possible to obtain a kind of physical interpretation of these ingredients. We form a picture in which there are two “particles”, one described by alpha_A and the other by beta_A’, each of which is massless, and where each one is continually converting into the other one. Let us call these the “zig” particle and the “zag” particle … this is a realization of the phenomenon referred to as “zitterbewegung“, according to which, the electron’s instantaneous motion is always [measured] to be the speed of light, owing to the electron’s jiggling motion, even though the overall averaged motion of the electron is less than light speed. Each ingredient has a spin about its direction of motion, of magnitude h-bar/2, where the spin is left-handed in the case of the zig and right-handed for the zag. …

‘In this interpretation, the zig particle acts as the source for the zag particle and the zag particle as a source for the zig particle, the coupling strength being determined by M.’

This zig-zag motion of the electron is a kind of oscillation as it propagates, as Penrose explains on p. 630:

‘… we find that the average rate at which this [zig-zagging] happens is (reciprocally) related to the mass coupling parameter M; in fact, this rate is essentially the de Broglie frequency of the electron.’

Hence, the electron (and leptons generally) have two components which drive one another cyclically, with a coupling constant of 2^-1/2 M as in Penrose’s equation. Each different lepton has a different mass, so the zig-zag amplitude will vary for electrons, muons and tauons, each being a square-root factor: the product of the two square-root terms for the two components gives us the overall mass of the lepton of interest. The interaction of zigs and zags is a W-shape on a Feynman diagram, and according to Feynman’s rules (very nicely explained with clear examples in the 2008 book Quantum Field Theory Demystified) the amplitude for an interaction is simply the product of the various coupling constants and propagators involved, integrated in such a way as to conserve momentum for the interaction.
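To make the bookkeeping concrete, here is a toy sketch (not a real QFT calculation) of the idea being argued: if each Zitterbewegung vertex contributed a coupling proportional to the square root of a component mass, then a two-vertex cycle would contribute an amplitude proportional to the product of two square roots, i.e. to (MaMb)^1/2. The reference mass m0 below is an arbitrary normalisation introduced purely for illustration:

```python
from math import sqrt

def vertex_coupling(m, m0=1.0):
    # Toy rule assumed here: coupling at a Zitterbewegung vertex ~ sqrt(m/m0)
    return sqrt(m / m0)

def two_vertex_amplitude(ma, mb):
    # Feynman rules: the amplitude is the product of the vertex couplings
    # (propagator factors are ignored in this toy sketch)
    return vertex_coupling(ma) * vertex_coupling(mb)

# One full zig-zag cycle mixing two mass scales ma, mb contributes ~ (ma*mb)^1/2
print(two_vertex_amplitude(105.66, 1776.9))   # ~ sqrt(Mm*Mt) ~ 433 (MeV units)
```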

Now, at each zig and zag vertex in the W-shaped Feynman diagram for a lepton’s Zitterbewegung motion, the electron is actually interacting with a vacuum field particle which gives it mass, according to Figure 25.13 on page 644 of Penrose’s 2004 Road to Reality:

‘In the zig-zag picture of a Dirac particle, the vertices may be viewed as interactions with the (constant) Higgs field.’

This is how the mass of leptons arises, according to Sir Roger Penrose! My argument is that the square-root products in the expanded Koide formula arise because the mass of a lepton is a composite of the product of two square roots of lepton masses (including neutrino masses, since Carl Brannen has shown that the Koide formula with a slight modification – a minus sign on the lightest neutrino mass square-root term instead of a positive sign – fits the neutrino mass data very well); if mixing is allowed for massive leptons like electrons, muons and tauons, as occurs with the other leptons (neutrinos), you get three distinct ways that the electron, muon and tauon mass square roots can mix:

1. electron-muon
2. muon-tauon
3. tauon-electron

We know from observations that some leptons at least, neutrinos, can mix varieties spontaneously while they are propagating: there are three flavours of neutrinos, and only about 33% of the flavour generated in the sun by fusion is detected here on earth, so the solar neutrinos have mixed equally between all 3 flavours on the way to the detector. In this way, the number of neutrinos of the original flavour has fallen by a factor of 3 to just 33% as observed, while the other 67% arrive as flavours which the detector (classically a large underground tank of chlorine-based cleaning fluid, or a water tank viewed by photomultipliers) is simply unable to detect.
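The ‘one third detected’ figure corresponds to fully averaged three-flavour mixing; a minimal sketch of that arithmetic, assuming idealised tri-maximal mixing (each flavour fraction exactly 1/3) rather than the actual measured mixing angles:

```python
import numpy as np

# Assume fully averaged tri-maximal mixing: each mass state carries 1/3 of each flavour
U_e_sq = np.array([1/3, 1/3, 1/3])   # |U_ei|^2 for i = 1, 2, 3 (idealised, not PDG values)

# Averaged survival probability for an electron neutrino: sum over i of |U_ei|^4
P_ee = np.sum(U_e_sq**2)
print(P_ee)   # 0.333..., i.e. roughly one third survive as the original flavour
```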

What happens with the electron, muon and tauon in Koide’s formula is simply that the sum of their masses is directly proportional to the sum of the products of all combinations of the square roots. The expanded Koide formula:

Me + Mm + Mt = 4[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2]

shows that Me + Mm + Mt is equal to the sum of 4*3 = 12 separate terms, i.e., each of the masses is represented by 4 terms. If we look at the right hand side of the equation above, the term (MmMt)^1/2 is the biggest, (MeMt)^1/2 is second biggest, and (MeMm)^1/2 is the smallest.
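That ordering is easy to check with the same rough mass values used earlier:

```python
from math import sqrt

Me, Mm, Mt = 0.5110, 105.66, 1776.9   # approximate masses in MeV, for illustration

terms = {
    "(MeMm)^1/2": sqrt(Me * Mm),
    "(MeMt)^1/2": sqrt(Me * Mt),
    "(MmMt)^1/2": sqrt(Mm * Mt),
}
for name, value in sorted(terms.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(value, 1))
# (MmMt)^1/2 ~ 433, (MeMt)^1/2 ~ 30, (MeMm)^1/2 ~ 7: the ordering stated above
```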

It is probable that the Koide formula is a gross manifestation of something much deeper, because if we look at individual particle masses, we can’t get the electron mass by multiplying the square roots of two massive lepton masses together, unless maybe one of the masses is a neutrino mass and the other one is the muon or tauon mass.

The reason for bringing neutrinos in here is that Penrose states that neutrinos are equivalent to the ‘zig’ part of an electron, which makes them left handed (neutrinos are only left-handed in the Standard Model to make the weak interaction purely left-handed as experimentally observed; neutrinos interact with Z bosons in weak interactions). But because neutrinos have a very small mass, they can’t be entirely left-handed ‘zig’ particles and need to interact with a massive vacuum field occasionally in order to acquire mass so they can change flavour i.e. to ‘oscillate’ between electron neutrinos, muon neutrinos and tauon neutrinos. (Penrose, 2004, Figure 25.9.) Without mass, neutrinos would go at velocity c only and thus would be frozen and unable to oscillate; having a very small mass reduces their velocity to just under c and allows them to gradually change flavour on the 8.3 minute journey from the sun to the earth.

This zig neutrino idea is funny because it will force a change to the Standard Model: the neutrino is not 100% left-handed; it is very slightly right-handed in order to explain its mass. The small proportion of right-handedness presumably will be linked to the ratio by which matter outweighs antimatter in the universe; because it is the left-handed weak interaction which allows downquarks (in neutrons) to decay into upquarks (allowing neutrons to decay into protons), but not vice-versa (nobody has detected a proton decaying). The asymmetry of left-handedness of weak interactions is clearly tied into the asymmetry of matter over antimatter in the universe from the first fraction of a second onward.

SUMMARY

The underlying mechanism for the square roots of mass in the Koide formula is linked to the Weyl 2-spinor (left and right handed spinors) using Schroedinger’s ‘Zitterbewegung’ lepton as discussed by Penrose, Road to Reality, 2004. See Penrose’s Figure 25.13 for a Feynman diagram showing how two components of a lepton interact to produce the de Broglie oscillation of a moving particle. They acquire mass at the interaction vertex. The Zitterbewegung vertex coupling constant for the interaction is a square root factor, because you square the momentum integrated amplitude of coupling constants and propagators for a Feynman diagram to get the resultant probability or reaction rate.

Because Zitterbewegung involves interactions between two components, a zig and a zag, in a lepton, you need two interaction vertices in each complete oscillatory cycle of the particle as it propagates, and the coupling constants multiply together to give the amplitude according to Feynman’s rules; this provides the seed for an explanation of the square roots in the Koide formula. Since every lepton acquires mass from the same vacuum at these vertices, it follows that, to explain the variety of lepton masses that exist in nature, the Zitterbewegung vertex coupling constants must be proportional to the square root of the mass of the zig and zag components of a Zitterbewegung lepton. We know that neutrinos oscillate in flavour uniformly between three flavours while coming to us from the sun, which is why we detect just one third of the total (the third which have the flavour that our flavour-specific detector is searching for). If we extend this idea to the zig and zag components of leptons in general, the masses of leptons will be represented by the sum of products of pairs of square roots of masses of different leptons. Although Koide’s original formula only applied to the masses of the electron, muon and tauon, Brannen has extended it with a modification to include neutrino masses.

The question now is to see if this mechanism for the Koide formula can be tied firmly into Carl Brannen’s application of the Koide formula to neutrino masses, mentioned in an earlier post.

One possibility is that the

Appendix A: History of Zitterbewegung (quoted from Wikipedia)

‘The existence of such motion was first proposed by Erwin Schrödinger in 1930 as a result of his analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces what appears to be a fluctuation (at the speed of light) of the position of an electron around the median, with a circular frequency of 2mc^2/h-bar, or approximately 1.6 × 10^21 Hz.’
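The quoted figure is easy to reproduce: the Zitterbewegung angular frequency 2mc^2/h-bar for an electron comes out at about 1.6 × 10^21 rad/s:

```python
# Angular frequency of electron Zitterbewegung, omega = 2*m*c^2/hbar
m_e  = 9.109e-31      # electron mass, kg
c    = 2.998e8        # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J*s

omega = 2 * m_e * c**2 / hbar
print(f"{omega:.2e} rad/s")   # ~1.55e21, i.e. about 1.6 x 10^21
```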

Appendix B: rewriting the Koide formula

Koide formula: (Me^1/2 + Mm^1/2 + Mt^1/2)^2 / (Me + Mm + Mt) = 3/2.

Rearranging:

2(Me^1/2 + Mm^1/2 + Mt^1/2)^2

= 3(Me + Mm + Mt).

Expand 2(Me^1/2 + Mm^1/2 + Mt^1/2)^2:

2(Me^1/2 + Mm^1/2 + Mt^1/2)^2

= 2[Me + Mm + Mt + 2{(MeMm)^1/2 + (MeMt)^1/2 + (MtMm)^1/2}]

= 2Me + 2Mm + 2Mt + 4{(MeMm)^1/2 + (MeMt)^1/2 + (MtMm)^1/2}

Which equals 3(Me + Mm + Mt).

Hence:

Me + Mm + Mt = 4[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2].

To further simplify the Koide formula, remember from the law of indices that for example Me = (MeMe)^1/2, so Me + Mm + Mt = 4[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2] can be written as:

(MeMe)^1/2 + (MmMm)^1/2 + (MtMt)^1/2 = 4[(MeMm)^1/2 + (MeMt)^1/2 + (MmMt)^1/2],

which may help.
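The whole of Appendix B can also be checked symbolically; a minimal sketch using sympy, treating the three masses as positive symbols:

```python
import sympy as sp

Me, Mm, Mt = sp.symbols('Me Mm Mt', positive=True)

sum_masses = Me + Mm + Mt
sum_roots  = sp.sqrt(Me) + sp.sqrt(Mm) + sp.sqrt(Mt)
# Cross terms written as sqrt(Me)*sqrt(Mm) etc., which equal (MeMm)^1/2 for positive masses
cross = sp.sqrt(Me)*sp.sqrt(Mm) + sp.sqrt(Me)*sp.sqrt(Mt) + sp.sqrt(Mm)*sp.sqrt(Mt)

# Koide relation rearranged to zero: 2*(sum of square roots)^2 - 3*(sum of masses) = 0
koide_zero = sp.expand(2 * sum_roots**2 - 3 * sum_masses)

# Appendix B claims this is the same statement as 4*(cross terms) - (sum of masses) = 0
print(sp.simplify(koide_zero - (4 * cross - sum_masses)))   # prints 0: the forms agree
```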

Update (7 December 2011):

http://www.science20.com/quantum_diaries_survivor/alejandro_rivero_fermion_mass_coincidences_and_other_fun_ideas-85187?nocache=1

“Then Koide went some steps beyond and considered quarks and leptons with substructure, so that lepton mass quotients could predict the Cabibbo angle too, even if this is a mixing between quarks.”

(M_e + M_mu + M_tau) / {(sqrt(M_e) + sqrt(M_mu) + sqrt(M_tau))^2} = 2/3

The key factor of 2/3 in the Koide relationship is the fractional electric charge of the up/charm/truth quarks, which arises from a mixing effect. It’s the 2/3 electric charge of up/charm/truth quarks that’s so interesting. The -1/3 charge of the down/strange/bottom quarks is very easily predicted by analysis of vacuum polarization for the case of the omega minus baryon (Fig. 31 in http://rxiv.org/pdf/1111.0111v1.pdf ). It appears that the square root of the product of two very different masses gives rise to an intermediate mass (see https://nige.wordpress.com/2009/08/26/koide-formula-seen-from-a-different-perspective/ for the maths), so that the Koide relationship implies a bootstrap model of fundamental particles (akin to the bootstrap concept Geoffrey Chew was trying to develop to explain the S-matrix in the 1960s before quarks were discovered). The square root of the product of the masses of a neutrino and a massive weak boson may give an electron mass, for instance. This seems to be the deeper significance of the Koide formula, from my perspective, for what it’s worth.

All fundamental particles are connected by various offshell field quanta exchanges, so their “charges” are dependent on other charges around them. This means that the ordinary approach of analysis fails, because of the reductionist fallacy. If your mathematical model of rope is the same for 100 one-foot lengths as for a single 100 foot length, it leads to customer complaints when you automatically send a sailor the former, not the latter. It’s no good patiently explaining to the sailor that mathematically they are identical, and the universe is mathematical.

If the Koide formula is correct, then it points to an extension of the square-root nature of the Dirac equation. Dirac made the error of ignoring Maxwell’s 1861 paper on magnetic force mechanisms: the chiral handedness of magnetism (the magnetic field curls left-handed around the direction of propagation of an electron) is explained in Maxwell’s theory by the spin of “field quanta” (Maxwell had gear cogs, but in QFT it’s just the spin angular momentum of field quanta). Maxwell’s theory makes EM an SU(2) Yang-Mills theory, throwing a different light on the Dirac spinor. It just so happens that the Yang-Mills equations automatically reduce to Maxwell’s if the field quanta are massless, because of the infinite self-inductance of electrically charged field quanta, so SU(2) Maxwellian electromagnetism in practice looks indistinguishable from Abelian U(1), explaining the delusions in modern physics.

The very interesting results Alejandro Rivero gives are from equation 4 on page 3 of his paper http://www.vixra.org/abs/1111.0062, which solves the Koide formula by writing one lepton mass in terms of the masses of the other two generations. Koide’s formula also implies (my 2009 post):

Me + Mm + Mt = 4 * [(Me * Mm)^1/2 + (Me * Mt)^1/2 + (Mm * Mt)^1/2]

where Me = electron mass, Mm = muon mass, Mt = tauon mass. I.e., the simple sum of lepton masses equals four times the sum of square roots of the products of all combinations of the masses, making it seem that if Koide’s formula is physically meaningful, then Geoffrey Chew’s bootstrap theory of particle democracy must apply to masses (gravitational charge) in 4-d. At high energy, early in the universe, tauons, muons and electrons were all represented, and we only see an excess of electrons today because the other generations have decayed, although some of the other masses may actually exist as dark matter and thus still undergo graviton exchange, which determines the Koide mass spectrum today (this dark matter is analogous to right-handed neutrinos). The basic physics of the Koide formula seems to be the Chew bootstrap applied to gravitation (Chew applied it to the strong force, pre-QCD):

“By the end of the 1950s, [Geoffrey] Chew was calling this [analytic development of Heisenberg’s empirical scattering or S-matrix] the bootstrap philosophy. Because of analyticity, each particle’s interactions with all others would somehow determine its own basic properties and … the whole theory would somehow ‘pull itself up by its own bootstraps’.” – Peter Woit, Not Even Wrong, Jonathan Cape, London, 2006, p148. (Emphasis added.)

The S-matrix went out when the SM was developed (although S-matrix results were used to help determine the Feynman rules), but at some stage a Chew-type bootstrap mechanism for Koide’s mass formula may be needed to further develop a physical understanding for the underlying theory of mass mixing, leading to a full theory of mixing angles for both gravitation (mass) and weak SU(2) interactions of leptons and quarks.

Casimir force

In the previous post, the Casimir force was discussed. It was discovered theoretically by Casimir in 1948 and experimentally proven in 1996 by Steve Lamoreaux and Dev Sen. Depending on the geometry of the situation, i.e., the shape of the plates, it can be either an attractive or a repulsive force.

The Casimir force between two parallel flat metal plates is attractive because the full spectrum (all wavelengths) of electromagnetic radiation fluctuations in the vacuum bombards the outside area of the plates (pushing them together), but only wavelengths smaller than the distance between the plates can arise in the space between the plates:

figure: the Casimir force mechanism

In other words, the shortest wavelengths of the “zero-point” (ground state) electromagnetic energy fluctuations of the vacuum bombard each plate equally from each side, so there is no asymmetry and no net force. Only the longer wavelengths contribute to the Casimir force, for they don’t exist in the small space between the plates but do bombard the plates from the outside, pushing them together in the LeSage fashion.

Looking at the Wiki page on the Casimir effect, they derive the Casimir force from the force equation

F = dE/dx

which we can use to formulate the basic (unshielded) QED force from Heisenberg’s minimal energy-time uncertainty relation, h-bar = Et.

F = dE/dx = d(h-bar/t)/dx = d[h-bar/(x/c)]/dx = -h-bar*c/x^2.

This inverse-square law force is a factor of 1/alpha times the Coulomb force between two electrons (i.e. it doesn’t incorporate the polarized vacuum shielding factor of alpha).
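A quick numerical check of that factor, evaluating h-bar*c/x^2 and the Coulomb force between two electrons at the same separation (the ratio is independent of the distance chosen):

```python
from math import pi

# Compare F = hbar*c/x^2 with the Coulomb force between two electrons at the same x
hbar = 1.055e-34    # J*s
c    = 2.998e8      # m/s
e    = 1.602e-19    # C
eps0 = 8.854e-12    # F/m

x = 1.0             # separation in metres (arbitrary; the ratio does not depend on x)
F_qed     = hbar * c / x**2
F_coulomb = e**2 / (4 * pi * eps0 * x**2)

print(F_qed / F_coulomb)   # ~137, i.e. 1/alpha, as stated above
```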

The Casimir force calculation is considerably more complex. It relies on only wavelengths longer than the gap between two parallel metal plates pushing them together by acting on the outside, but not in between, the plates. According to the discussion of the Casimir force mechanism in Zee’s QFT textbook, p. 66: ‘Physical plates cannot keep arbitrarily high frequency waves from leaking out.’ This is one way of explaining why short wavelengths don’t contribute to the Casimir effect significantly: like very high energy gamma rays, they penetrate straight through the thin Casimir plates without interacting significantly with them. Longer wavelengths, on the other hand, are all stopped and impart momentum, producing the Casimir force. However, Zee’s explanation – just like his flawed explanation of Feynman’s path integrals using the double-slit experiment (where he doesn’t seem to grasp that the diffraction of the photons is physically caused by the interaction of the photon with the electromagnetic fields from the physical material at the edges of the slits in the screen, which doesn’t exist in the vacuum below Schwinger’s 1.3 * 10^18 V/m IR cutoff) – is physically wrong.

Zee is wrong because if the shorter wavelengths were excluded from contributing by merely penetrating the Casimir plates, the wavelengths cutoff from the integral would depend not on the distance between the plates, but just on the nature of the plates themselves (their mass per unit area for example, as in gamma radiation shielding).

So rather than Zee’s theory of the plates shielding (stopping) long wavelengths and letting short wavelengths (high frequencies) penetrate by leaking through and thus not contributing, the Casimir mechanism must be one that explains why the wavelength cutoff is equal to the distance between the plates.

Notice that Zee is right that higher frequencies (shorter wavelengths) are more penetrating: I’m not disputing that. What I’m saying is that his shielding mechanism neglects to explain the wavelength dependence upon the distance between the plates.

The only way that the distance between the plates can determine the wavelengths contributing to the Casimir force is if wavelengths longer than the distance between the plates are unable to exist between the plates in the first place.

It simply doesn’t matter what happens to the shorter wavelengths, because it is only the longer wavelengths that contribute to the Casimir effect. Zee should be explaining what the mechanism is for the asymmetry in the energy density of the longer wavelengths, not discussing the shorter wavelengths, because it’s just the asymmetry between the energy density of the longer wavelengths on each side of each metal plate which causes the Casimir force.

The actual mechanism for the exclusion from the space between the plates of wavelengths longer than the distance between the plates is simply the waveguide effect. When you have a radio frequency resonator (source) and want to send the radiation to a dish antenna for transmission, you can pipe the radiation inside a conductive metal tube or box (a so-called ‘waveguide’) with an internal size at least equal to the wavelength you’re using. If the wavelength is longer than the diameter of the metal tube, the radiation can’t propagate: it is absorbed by the sides and heats them up.

What happens is that the electromagnetic radiation is simply shorted out by the waveguide if its wavelength is bigger than the size of the waveguide, since the oscillation of the electric field strength in the photon is transverse (perpendicular to the direction it propagates in), not longitudinal. (Ignore the usual obfuscating ‘pictures’ of a Maxwellian photon in textbooks, since they are one-dimensional and merely plot electric field strength and magnetic field strength versus the one dimension of propagation. Anyone glancing at those pictures is misled into thinking that they are looking at a 3-dimensional spatial illustration of the photon, when in fact two axes are field strengths, not spatial dimensions! It’s as nutty as plotting a graph of speed versus distance for an oscillating pendulum, and claiming that the sine wave graph is the real 2-dimensional outline of the pendulum.)
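As a rough numerical sketch of the waveguide cutoff idea: the simple rule used above is that a wave cannot propagate if its wavelength exceeds the guide’s internal dimension, while the standard textbook result for the dominant TE10 mode of a rectangular guide puts the cutoff wavelength at twice the broad-wall width; both are evaluated below for an arbitrary illustrative width of 2.3 cm.

```python
c = 2.998e8   # speed of light, m/s
a = 0.023     # illustrative waveguide internal width in metres (2.3 cm)

# Simple rule used in the text above: cutoff at wavelength = internal dimension
f_cutoff_simple = c / a

# Standard rectangular-waveguide TE10 mode: cutoff at wavelength = 2a
f_cutoff_te10 = c / (2 * a)

print(f"{f_cutoff_simple / 1e9:.1f} GHz cutoff for lambda = a")
print(f"{f_cutoff_te10 / 1e9:.1f} GHz cutoff for lambda = 2a (TE10)")
# Below the cutoff frequency the wave cannot propagate down the guide.
```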

On p. 66, Zee calculates the Casimir force to be

F = dE/dx = Pi*h-bar*c/(24d^2),

where d is the distance between the plates. Notice the inverse-square law! But the Wiki page on the Casimir effect calculates the following for the Casimir pressure (force per unit area):

P = F/A = -Pi^2*h-bar*c/(240d^4).
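For a sense of scale, the Wiki pressure formula gives only a tiny force at micron separations; evaluating it at an illustrative separation of d = 1 micrometre:

```python
from math import pi

hbar = 1.055e-34     # J*s
c    = 2.998e8       # m/s
d    = 1e-6          # plate separation in metres (illustrative)

# Casimir pressure between ideal parallel plates: P = -pi^2 * hbar * c / (240 * d^4)
P = -pi**2 * hbar * c / (240 * d**4)
print(f"{P:.2e} Pa")   # ~ -1.3e-3 Pa at 1 micron; the negative sign means attraction
```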

If both Zee and Wiki are correct, then the effective area of the Casimir plates will be the Zee formula for F divided by the Wiki formula for P:

A = F/P = 10d^2/Pi.

Hence the distance of separation between the plates is d = (Pi*A/10)^1/2. For the simplest geometric situation of circular shaped plates with area A = Pi*R^2, the distance of separation is

d = (Pi*A/10)^1/2 = (Pi^2*R^2/10)^1/2 = Pi*R/10^1/2.
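The little exercise above (dividing Zee’s force formula by the Wiki pressure formula, then specialising to circular plates) can be checked symbolically; a minimal sketch with sympy:

```python
import sympy as sp

hbar, c, d, R = sp.symbols('hbar c d R', positive=True)

F = sp.pi * hbar * c / (24 * d**2)          # Zee's force expression (magnitude)
P = sp.pi**2 * hbar * c / (240 * d**4)      # Wiki's pressure expression (magnitude)

A = sp.simplify(F / P)
print(A)                                    # 10*d**2/pi

# For circular plates of radius R, set A = pi*R^2 and solve for the separation d
d_solution = sp.solve(sp.Eq(A, sp.pi * R**2), d)
print(d_solution)                           # positive root: pi*R/sqrt(10)
```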